copyright abuse, terrorist memes, and deepfake revenge porn: generative ai is out of control
After unleashing generative AI on the world, companies are discovering that far too many users are having way too much fun abusing the technology.
One of the strangest episodes of my life happened during a party at a friend’s house. A person I barely recognized pointed at me and loudly announced to everyone within earshot that I deserved a special drink because I was actually a secret porn star, and therefore awesome. No amount of laughter or denial could convince my unwanted drunken admirer that the closest my life ever came to adult entertainment was using the same video editing tools to make promotional videos for websites I was designing at the time. Eventually, I let him toast my imaginary exploits just to drop the subject and move on with the night. Many whispered questions followed.
But imagine what could have happened if, in response to my befuddlement, he had whipped out his phone with a pornographic clip starring me. Well, sort of. The performer’s face would look almost exactly like mine, his body a passable rendition of my own: seemingly irrefutable evidence that would certainly have gotten people talking and upended my life as the rumor and the link spread. Today, this scenario is a reality for countless women (and a handful of men) who found themselves on the wrong side of a breakup with a vengeful ex, or of an argument with an angry troll who doesn’t know them but now has a vendetta against them and some time on his hands.
These women may never have taken their clothes off in front of a camera in their lives. Doesn’t matter. With enough pictures on social media, deepfake technology has all it needs to create almost any kind of pornographic scene. While this sounds jarring and alarming, we need to remember how the world was introduced to deepfakes, a technique in which artificial neural networks compete with each other, one generating manipulated images and another judging them, until the end result looks realistic enough to the human eye: through social media communities that wanted to use these AI tools to turn their favorite models and celebrities into porn stars for their own enjoyment. Now, they’ve moved on to random strangers.
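To make that adversarial setup a little more concrete, here’s a minimal sketch of the generator-versus-discriminator loop that underpins this family of models, written in PyTorch. The network sizes, the stand-in data, and the training schedule are toy values chosen for illustration, not any real deepfake pipeline.

```python
# Minimal sketch of the adversarial training loop behind deepfake-style
# image generation. All sizes and data here are toy stand-ins.
import torch
import torch.nn as nn

LATENT, IMG = 64, 28 * 28  # toy latent size and flattened image size

# Generator: turns random noise into a fake image.
generator = nn.Sequential(
    nn.Linear(LATENT, 256), nn.ReLU(),
    nn.Linear(256, IMG), nn.Tanh(),
)

# Discriminator: guesses whether an image is real or generated.
discriminator = nn.Sequential(
    nn.Linear(IMG, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(1000):
    real = torch.rand(32, IMG) * 2 - 1          # stand-in for real photos
    fake = generator(torch.randn(32, LATENT))   # generated images

    # Train the discriminator to tell real from fake.
    d_opt.zero_grad()
    d_loss = loss(discriminator(real), torch.ones(32, 1)) + \
             loss(discriminator(fake.detach()), torch.zeros(32, 1))
    d_loss.backward()
    d_opt.step()

    # Train the generator to fool the discriminator.
    g_opt.zero_grad()
    g_loss = loss(discriminator(fake), torch.ones(32, 1))
    g_loss.backward()
    g_opt.step()
```

The two networks improve in lockstep: every time the discriminator gets better at spotting fakes, the generator is pushed to produce more convincing ones, which is exactly why the end results can fool the human eye.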
Of course, just saying they moved on would be an understatement. Of the quarter million or so deepfake porn videos uploaded to the top 35 relevant sites since the dawn of the technology, 113,000 were added this year alone. We’re well on track to more than double last year’s totals from the same online spaces, and remember, we’re only talking about the few dozen biggest offenders, not even counting the still images, which likely number in the millions. Leaving aside the more than questionable ethics in play, we could say that celebrities expect this to happen and have an army of publicists and lawyers on call. Random people? Not so much.
digital horrors meet real life consequences
But wait, it gets worse. While anodyne headshots and video clips are enough to put a victim’s face in a salacious scene, AI models also need explicit images onto which to splice faces and other distinguishing features. For those, they use actual pornography, and those catalogs can and do include revenge and coerced porn from the bowels of the web. In the end, women may be doubly, if not triply, exploited for the sake of embarrassing someone else into silence, often simply because it’s possible. Usually this would be the part where I also explain what happens with male revenge porn and explicit deepfakes, but this phenomenon is so gendered that men account for fewer than 5% of all victims, and the effect on them is not well studied.
Just to add a poison cherry on top of the excrement sundae, there’s very little victims can do when deepfake porn of them surfaces, no matter how much anxiety and trauma it causes, especially if it ends up being sent to older family members who simply cannot grasp the concept of deepfakes. Victims can lobby the porn sites to take videos down, but the sites are free to ignore those requests. They can threaten lawsuits after parting with a hefty retainer, but that process can still take a year or more, and if the case ever gets to court, the perpetrators, if they’re even identified, can invoke free speech since the images were, technically, fake.
Even if by some miracle you do manage to get the videos taken down, they’re very likely to simply resurface on another site, starting the whole process over again. In short, once it’s out there, it’s out there for good. Have fun, enjoy the anxiety, the insomnia, and the feeling of being virtually violated by countless strangers picking apart your digitally faked lewd avatar. And if you’re looking to politicians and new laws to help you out, don’t. A number of jurisdictions have tried, only to find themselves up against a maze of other laws around art, criticism, profanity, censorship, benign uses of the same AI technology, and the limits of international enforcement, even when the subject is a minor.
Now, hold on, you might say: yes, deepfake porn can do major damage to a person’s life, but isn’t it an extreme outlier when it comes to generative AI? It certainly is, and it isn’t the only extreme outlier either. These tools are also being used to generate racist and fascist propaganda, their guardrails easily bypassed by determined users with enough experience in persistently crafting the right prompts. Mickey Mouse and Dora the Explorer can now commit war crimes and suicide bombings with just a few prompts and a few minutes of rendering time, ready to be spread across social media to acclimate “normies” to ever more extreme content.
how to poison an artificial intelligence
So, what do we do about this rampant abuse and misuse of tools that were supposed to be helpful idea generators and easy ways to create stock images for articles? Stop uploading anything online? Too late for that. There are billions of images out there ready to be scraped, and by the nature of how the internet works, anything you can open in a browser can be scraped, added to a training set, and iterated over by one of the models in question. Artists, writers, and other creators are filing lawsuits against AI companies for training on their work, but judges appointed decades ago struggle to understand the cases before them, and in the meantime, the generative tools are already in the wild, so much of the damage is done.
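To give a sense of just how low that bar is, here’s a minimal scraping sketch in Python using the requests and BeautifulSoup libraries. The URL is a placeholder, and real training-data pipelines are massively distributed versions of this same handful of lines.

```python
# Minimal sketch of how trivially public images can be scraped into a
# training set. The URL below is a placeholder, not a real target.
import os
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

PAGE = "https://example.com/gallery"  # hypothetical public page
OUT = "scraped_images"
os.makedirs(OUT, exist_ok=True)

html = requests.get(PAGE, timeout=10).text
soup = BeautifulSoup(html, "html.parser")

# Every <img> tag a browser can render is fair game for a scraper.
for i, img in enumerate(soup.find_all("img")):
    src = img.get("src")
    if not src:
        continue
    url = urljoin(PAGE, src)
    data = requests.get(url, timeout=10).content
    with open(os.path.join(OUT, f"img_{i}.jpg"), "wb") as f:
        f.write(data)
```

If a page loads in your browser, some version of this loop can collect it, which is why “just don’t upload anything” stopped being workable advice years ago.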
Right now, the most promising approach is to use the potential training data itself to hack the generative models by “poisoning” them. With a tool its creators call Nightshade, images likely to be scraped have their pixels subtly scrambled in ways humans don’t notice but convolutional neural networks, the components designed to understand the content of images, very much do. When those networks try to extract features from the altered images, the results come out noisy, blurry, hard to reliably predict, and easy to steer into seeing things that aren’t there or are just plain wrong. In tests, Nightshade could even get generative AI to mix up cats with dogs, and cubism with anime.
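Nightshade’s actual method is more involved, but the underlying idea is a cousin of a classic adversarial perturbation: nudge each pixel along the gradient that pulls the model toward the wrong concept, while keeping the total change too small for a human to notice. Here’s a generic sketch of that technique against a stand-in classifier; none of this is Nightshade’s real code or its real target model.

```python
# Illustrative sketch of pixel-level "poisoning": perturb an image so a
# CNN reads the wrong concept while the change stays imperceptible.
# This is a generic gradient-based example, not Nightshade's algorithm.
import torch
import torch.nn as nn

# Stand-in classifier; a real attack targets the scraper's feature extractor.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(8 * 32 * 32, 10),
)
model.eval()

image = torch.rand(1, 3, 32, 32)  # the artwork to protect
target_label = torch.tensor([5])  # the wrong concept we want learned, e.g. "dog"
epsilon = 4 / 255                 # per-pixel budget, below human notice

image.requires_grad_(True)
loss = nn.CrossEntropyLoss()(model(image), target_label)
loss.backward()

# Descend the gradient of the target-class loss so the model is pulled
# toward the wrong concept; clamp to keep pixel values valid.
poisoned = (image - epsilon * image.grad.sign()).clamp(0, 1).detach()

print("max pixel change:", (poisoned - image).abs().max().item())
```

Feed enough images doctored this way into a scrape, and a model that trains on them starts associating the look of one concept with the label of another, which is how the cat-and-dog mix-ups happen.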
Sadly, this is not a silver bullet. There are ways around it: capturing screenshots of the images rather than trusting the files themselves, or downscaling the images to wash out the scrambled pixels and training at a lower resolution. These workarounds slow down and complicate the training process while yielding slightly worse results, but the tools would still work. And if the models’ owners haven’t even considered the impact of mass-scraping the internet for training content, or given much thought to guardrails that aren’t purely performative and trivially bypassed, all they’re going to care about is training and deploying the next version.
And that’s the crux of the matter. It’s not the tools themselves that are to blame for the deepfake porn, terrorist memes, and fascist recruitment posters filling up social media feeds and discussion forums. It’s the people who create them without caring how they’ll be used and abused, interested only in building a thing and selling access to it, and a legal system that sees generative AI as a magical black box and the internet as something other than a real place with real-world impact. There are no rules or laws governing the development of AI, no consequences for negligence, and vicious opposition to any regulation whatsoever. Until those rules exist, the chaos will continue.