Sluts for Security: How Artificial Intelligence Affects Sex Workers
The boss asked me to write a post about the wave of artificial intelligence services popping up lately and how they will impact sex work. Of course, my first thought was to ask Microsoft's new "AI-powered copilot for the web" version of its Bing search engine for its opinion on what sex workers should be aware of regarding artificial intelligence:
"Sex workers should be aware of the potential for AI to be used to monitor and control their work. AI could be used to monitor sex workers' online activity, which could lead to increased surveillance and criminalization. AI could also be used to monitor sex workers' physical activity, which could lead to increased surveillance and harassment by law enforcement. However, AI could also be used to help sex workers stay safe by providing them with information about their clients and the areas they work in. AI could also be used to help sex workers find clients and work more safely. It is important for sex workers to be aware of these potential uses of AI and to advocate for their rights and safety".
To my surprise, it's not a bad summary! But how did this happen? How did Bing come up with this paragraph?
Although the term ‘artificial intelligence’ (AI) is often used to describe this kind of newfangled offering, the whole thing is actually driven by something called ‘machine learning’ (ML). Machine learning involves programs that harvest huge amounts of data from the internet, often without the creators' permission, then use the patterns in that data to model realistic responses when interacting with users.
When applied to text, we call this approach a ‘large language model’. But it can also be applied to images - which is where some of the ‘deepfake’ pics that have hit the newsstands in recent months come from.
At their core, these AI systems take a prompt - i.e. a sentence or paragraph describing what a user wants to see - then generate output by rapidly matching the phrases in that prompt against the patterns in their model. It might look like magic, but they're just statistical models built from very large datasets, with extremely fine-grained labels and descriptions, paired with complex image- or text-generation algorithms and loads of computational power. AI and ML are huge areas of computer science, so one article isn't going to make anyone an expert. The key thing to remember is that none of it is magic, organic, original, or thinking for itself. AI and ML are computer programs made by humans, with the same biases and problems humans are saddled with.
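To make that less abstract, here's a toy sketch of the statistical idea underneath text generation. This is nothing like a production system - just the bare principle, with a made-up scrap of "training text" - but it shows how a program can "learn" which word tends to follow which, then generate plausible-looking output by sampling from those counts:

```python
import random
from collections import defaultdict

# Toy "training data" - a real model ingests billions of words.
corpus = ("sex workers deserve safety . sex workers deserve rights . "
          "workers deserve respect .").split()

# "Training": record which words follow which in the data.
model = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    model[current_word].append(next_word)

# "Generation": start from a prompt word and repeatedly sample a
# continuation that was seen in the training data.
word = "sex"
output = [word]
for _ in range(6):
    word = random.choice(model[word])
    output.append(word)

print(" ".join(output))  # e.g. "sex workers deserve rights . sex workers"
```

Everything this toy "knows" came from its corpus; feed it different text and it parrots different patterns. The models behind ChatGPT and Bing are vastly more sophisticated, but the same rule applies: patterns in, patterns out.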
The phrase "garbage in, garbage out" is a classic expression from the 1950s when a US Army computer specialist described to a newspaper that computers cannot think for themselves and that sloppy inputs inevitably result in incorrect outputs. 70 years later and that expression is more real than ever. These modern AI models are only as good as the data fed into them, so if the data is bad or misinterpreted, then the outcomes will be bad too. There are unfortunately many examples of AI-based systems scraping the internet or ingesting data from various systems blindly without any thought to the ramifications of those actions.
Systems like COMPAS and PredPol are used by dozens of cities in the USA to predict crime and to assist decision-making in criminal sentencing. A 2020 article in MIT Technology Review explains how Black people are more likely to be treated unfairly by these systems due to a circular system of bad data.
The model in these systems takes US government statistics and other data showing that Black people are more likely to be stopped without cause in the USA than white people, then uses that pattern to rank a Black person as more likely to commit a crime or re-offend, disregarding the individual's circumstances. Because more Black people are targeted by the AI system, that data is fed back into the system and the cycle of bad-faith assumptions continues - only this time at the hands of an unthinking, uncaring robot instead of a person who might be persuaded to show leniency or investigate further.
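That feedback loop is easy to demonstrate. Below is a hypothetical simulation - the numbers are invented, not drawn from COMPAS or PredPol - of two areas with identical true offense rates, where one starts out more heavily patrolled. Because patrols only record the crime they witness, the over-patrolled area racks up more recorded crime, which the system then uses to justify keeping the patrols there:

```python
import random

# Hypothetical feedback-loop simulation: both areas have the SAME
# underlying offense rate; only the patrol levels differ at the start.
random.seed(1)
TRUE_RATE = 0.1
patrols = {"Area A": 80, "Area B": 20}   # Area A starts over-policed

for year in range(1, 6):
    recorded = {}
    for area, n_patrols in patrols.items():
        # Patrols only record what they witness, so recorded crime
        # scales with patrol numbers, not with actual behavior.
        recorded[area] = sum(random.random() < TRUE_RATE
                             for _ in range(n_patrols * 10))
    # Next year's patrols are allocated according to recorded crime,
    # so the over-policed area keeps "proving" it needs more policing.
    total = sum(recorded.values())
    patrols = {area: round(100 * count / total)
               for area, count in recorded.items()}
    print(f"Year {year}: recorded={recorded}, next patrols={patrols}")
```

Run it and Area A stays "high crime" forever, even though both areas behave identically - the system never receives the data that would correct it.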
A similar AI system developed by Accenture for the Dutch city of Rotterdam generated a "risk score" for welfare recipients that was used to work out who should be investigated for welfare fraud. This excerpt from Wired's 2023 article on the system explains how it was used:
"Rotterdam's algorithm is best thought of as a suspicion machine. It judges people on many characteristics they cannot control (like gender and ethnicity). What might appear to a caseworker to be a vulnerability, such as a person showing signs of low self-esteem, is treated by the machine as grounds for suspicion when the caseworker enters a comment into the system. The data fed into the algorithm ranges from invasive (the length of someone's last romantic relationship) and subjective (someone's ability to convince and influence others) to banal (how many times someone has emailed the city) and seemingly irrelevant (whether someone plays sports). Despite the scale of data used to calculate risk scores, it performs little better than random selection".
German government authorities have started using a system called KIVI to crack down on pornography published on platforms without age verification. As explained in this Wired article, KIVI scrapes Twitter, TikTok, YouTube, Telegram, and other public platforms for anything its AI thinks is pornography, then forwards its findings to police, who issue notices demanding the content be removed under threat of fines or even imprisonment. Paulita Pappel, from the European branch of the Free Speech Coalition, told Wired that KIVI "creates an economic sphere where legally and ethically produced content is financially and politically strangled, thus enabling illegal platforms to thrive. It makes the internet a less safe space both for minors and adults".
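For a sense of how blunt this kind of automated enforcement is, here's a hypothetical sketch of the scrape-classify-report loop. KIVI's internals aren't public, so the functions and threshold below are stand-ins - but notice that nothing in the loop distinguishes legally produced content from illegal content; everything above an arbitrary score gets reported:

```python
# Hypothetical sketch of a KIVI-style enforcement pipeline. The
# functions and threshold are stand-ins, not KIVI's actual internals.
THRESHOLD = 0.7   # arbitrary cut-off: score above this, get reported

def fetch_public_posts(platform):
    """Stand-in for a scraper pulling public posts from a platform."""
    return []  # imagine thousands of posts per day here

def nsfw_probability(post):
    """Stand-in for a classifier's confidence that a post is porn."""
    return 0.0

def report_to_authorities(post):
    """Stand-in for forwarding a flagged post for enforcement."""
    print("reported:", post)

for platform in ["Twitter", "TikTok", "YouTube", "Telegram"]:
    for post in fetch_public_posts(platform):
        if nsfw_probability(post) >= THRESHOLD:
            # No human judgment, no legal/illegal distinction -
            # just a score crossing a threshold.
            report_to_authorities(post)
```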
This kind of misguided application of AI-based systems by institutions of power, without any oversight or transparency - sold to them as panaceas for their pet peeves by technologists who should know better but don't care because there's a buck to be made - is an existential threat to many at-risk groups in society. It seems obvious to most of us that such a blunt instrument should not be used for something as vital as welfare or criminal justice, yet despite thorough investigations and research showing how unjust and discriminatory these AI-based systems are, governments around the world keep deploying them.
Tools like the text chatbot ChatGPT and the image generator Midjourney are now available to the public, and going by how much money, time, and effort big tech companies like Microsoft and Google are pumping into them, they're going to be part of our lives indefinitely. It's tough to put that genie back in the bottle.
The good news is that they're relatively easy to use. It's trivial to massage ChatGPT into spitting out appropriate text-based content, or to generate unique images with Midjourney from a sentence or two describing what you want the image to contain.
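For instance, here's roughly what generating text programmatically looked like at the time of writing, using OpenAI's Python library - the prompt is just an example, and you need your own API key. The chat interface does the same thing with no code at all:

```python
# Minimal sketch using OpenAI's Python library (as of early 2023).
import openai

openai.api_key = "sk-..."  # replace with your own API key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user",
               "content": "Write a friendly two-sentence ad for my "
                          "photography business."}],
)
print(response.choices[0].message.content)
```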
The bad news is that, like most mainstream internet platforms, they are terrified of sex. There's the usual worry of credit card processors like Mastercard and Visa abruptly cutting off access if they get a whiff of their services being used for adult-related content. On top of that, the operators of these AI/ML platforms dread people using them to generate adult content - and the inevitable media outrage that would follow - during a critical phase of the industry's push for mainstream acceptance.
To reduce the chances of something adult-related coming out of the AI, many models are deliberately not trained on data containing adult content, be it text or images. There are platforms out there that take the open-source models and train them on spicier content, but they're usually harder to use (you need to build them yourself on your own computer or server) and produce lower-quality results than the cutting-edge mainstream models.
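For the technically inclined, "build it yourself" usually means something like the sketch below: fine-tuning an open-source model (GPT-2 here) on your own text with the Hugging Face transformers library. The file name and training settings are placeholders, and doing this well takes a capable GPU and a lot of patience:

```python
# A minimal fine-tuning sketch using Hugging Face transformers.
# "corpus.txt" and the training settings are placeholder assumptions.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token      # GPT-2 has no pad token
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Load your own training text, one passage per line.
dataset = load_dataset("text", data_files={"train": "corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True,
                                 remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned",
                           num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
trainer.save_model("finetuned")
```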
When using these AI tools, it's important to keep in mind that they can make mistakes. Sometimes this is due to dud data getting fed into the model or a limited range of data (i.e. outdated information), but sometimes the AI can hallucinate. That's right: these AI models might not know an answer, so they just make something up using the data they do have. It's a known problem in the AI research community, and even OpenAI's CEO says not to trust it:
"ChatGPT is incredibly limited but good enough at some things to create a misleading impression of greatness. It's a mistake to be relying on it for anything important right now. It's a preview of progress; we have lots of work to do on robustness and truthfulness".
Like many other technologies, AI-based tools can be abused. Voice cloning is easier than ever, with realistic-sounding text-to-speech now possible from only 60 seconds of the original speaker's audio - leaving voice-based authentication, like that used for some bank accounts, vulnerable to attack. Fake videos impersonating people, aka "deepfakes", have been a problem for a while, but their quality and ease of generation have increased rapidly in the last few months. Even ChatGPT is being used by scammers to polish their generally poor English, making their scams harder to spot.
There's an incredible amount of hype around AI-based technology at the moment, and it's easy to fall into the trap of believing we've entered a new technological age where computers can do the thinking for us. For the moment at least, we are a long way from true artificial intelligence, and the best approach for sex workers to take with AI is to be incredibly skeptical of it. Don't trust that any system claiming to use AI is making its decisions without bias and with relevant data. And don't be surprised when the hot new consumer AI tool blocks you from doing anything mildly sex-related - and if it doesn't, double-check that the output it gives you is correct.
The most important thing we as a society need to do is put pressure on companies and governments to be open and specific about their AI-based systems, so we can learn how they impact our industry. If they don't know, or aren't willing to explain in detail, what goes into an AI system and how it determines what comes out, the AI can't be trusted.