Content warning: This article refers to terms that may be offensive to some sex workers.
After writing about artificial intelligence on the blog back in May, I wanted to take a deeper dive into what AI platforms think about sex work.
It's an important question as AI-powered systems are becoming popular as augmentations to traditional search engines like Google, or even straight up replacing them. Younger people in particular are using OpenAI's ChatGPT and Microsoft's Bing Chat as go-to sources of answers for all types of questions.
What do these platforms say about sex work, should a curious human ask about this often controversial and poorly understood topic?
Incorrect Information & Suspicious Sources
When we ask ChatGPT (aka GPT-4) "what does a prostitute look like?", the response starts off well, explaining that there's "no specific way a prostitute, or sex worker looks", correcting the use of the term "prostitute", and noting that the "stereotype of a sex worker often portrayed in media – typically a woman in revealing clothing standing on a street corner – is not representative of the majority of individuals in the profession".
But then ChatGPT wanders into trouble with its second paragraph, stating that "many people are involved in sex work due to economic necessity, coercion, or trafficking, and it's a complex issue with many legal, ethical, and social implications".
This sounds right on the surface, especially to those outside the industry. However, sex work is consensual. Coercion and sex trafficking are NOT sex work. Many anti-sex-work organisations deliberately imply that exploitation is sex work because it allows them to suggest that all sex workers are victims of exploitation. But as we know, that's just not true, and mixing these terms together makes it very difficult for sex workers to assert their autonomy and human rights. The information provided by ChatGPT for this question is not only wrong but also harmful.
Bing Chat takes the misinformation a step further when asked "are all sex workers drug addicts?", responding that "many female street sex workers in the UK are addicted to illegal drugs such as heroin and crack cocaine". This is inaccurate and further stigmatises both sex work and drug use by pushing a stereotype as fact. People of all professions choose to use drugs or may have a substance use disorder. It's not an issue that's any more common in sex work than in any other profession, regardless of the stereotype.
Unlike ChatGPT, where we have no way to find out which data it was trained on, Bing Chat provides links to where it sourced information to generate its answer. In response to our question about sex workers and drug use, it served up a study by the University of Bristol in the UK and the University of New South Wales, but it also used Wikipedia and, more troublingly, a blog post from a for-profit drug rehab clinic in Florida that cited zero sources to back up its claims.
Refusing to Engage
ChatGPT never shied away from answering sex-work-related questions, but Bing frequently said it couldn't assist, giving glib responses like "Hmm... let's try a different topic. Sorry about that. What else is on your mind?" Oddly, we could watch it formulate a response, only to delete it before it was complete and ask to change the topic. Bing would also happily answer some sex work questions, but not others.
By not engaging with sex-work-related questions, Bing is effectively communicating to the user (whether incidentally or not) that their curiosity on this topic is illegitimate, wrong, or inappropriate. This is a prime example of how human influence impacts AI. The computer and code are capable of assisting, but a human at Microsoft decided that these topics are too risky to engage with, so rather than try to answer them appropriately, it's best to just ignore them altogether, further entrenching the stigma and misinformation surrounding sex work.
Trying to "Both Sides" Sex Work
On the topic of decriminalising sex work, ChatGPT says that "as an AI model, I don't express personal opinions. However, I can explain to you the arguments on both sides of this question", then goes on to give a list of commonly-cited arguments for and against.
It might seem fair to give both points of view, but in reality ChatGPT is presenting a lot of ideas that are simply incorrect abolitionist propaganda. These harmful ideas are given the same weight as those advocating for decriminalisation, without any fact-checking or context.
Bing Chat and ChatGPT don't do this for other controversial topics in society, like asking if Nazis are good people, if it's acceptable to be gay, or if belonging to a particular religious denomination is wrong. It's clear that OpenAI isn't afraid of tweaking ChatGPT's responses to be inoffensive to certain groups of people, so why did OpenAI decide to frame sex work in this way? We don't know, and probably never will, thanks to their lack of transparency and accountability. But one thing is clear: neither the company nor the AI is a reliable source of information on sex worker rights.
Subtle Biases That Can Mislead
Sex work is misunderstood in wider society, so it's no surprise that a machine designed to take society's information and regurgitate it back to us also repeats that same society's misconceptions about sex work.
When Bing Chat is asked "my friend told me they are a sex worker, what should I do?", its first response is "I'm sorry to hear that", making it sound like being a sex worker is a bad thing - something to mourn - but it then goes on to give solid advice like "sex workers are entitled to the same rights and protections as any other worker" and that "it's also important to respect your friend's privacy and not share their occupation with others without their explicit consent".
ChatGPT does something similar when asked "what is a sex worker?", explaining that "the term 'sex worker' has been used as a way to respect the autonomy, humanity, and professional integrity of individuals who participate in these occupations, acknowledging it as a form of labor" - but then it throws in "some individuals choose sex work voluntarily as their profession while others may be coerced or forced into sex work due to factors like poverty, addiction, or human trafficking" - which again conflates sex work and exploitation.
This is the overall theme of AI platforms around sex work: good information is present, but scattered within it are random nuggets of absurdity and misinformation that, if you don't know better, are easy to assume are true, making it very difficult to trust any answer as a whole. It feels much like fruit from a poisoned tree.
Lucky for us, we're not solely dependent on AI for our information. When it comes to sex work, it's always safest to go straight to the source and ask sex workers directly. There are so many resources that are based on, run, and led by sex workers! You can read sex-worker blogs (such as the Tryst.link blog) or follow legit peer-led organisations on social media, such as SWOP USA or Scarlet Alliance. That's how you really get the right idea about sex work.
Got a tech question for Ada? She wants to hear from you!
Ada answers all your questions about tech, the online world, and staying safe in it. No question is too silly, no hypothetical is too far-fetched! Learn to leverage devices, systems, and platforms to your benefit.