Deepfaked images of Taylor Swift surfaced on social media, outraging everyone from fans to the White House.
Several explicit AI-generated deepfakes of pop star Taylor Swift went viral. The White House called the images alarming and urged Congress to act; even if it took a famous star for the government to finally take notice, it's something. The images, which have since been taken down, depicted Swift in explicit poses and were in horribly bad taste. Fans took to social media to voice their outrage, though little can be done to remove the images completely or to stop the broader spread of deepfake technology.
Congress is now considering new legislation, a step in the right direction and one that was long overdue. Though the images were being taken down constantly, one reportedly reached 47 million views before its removal.
The creation of doctored images rose 550% from 2019 to 2023, driven mainly by increasingly powerful AI tools. The UK acted first, making deepfake pornography illegal as part of its Online Safety Act.
Generative AI of all kinds is developing faster than we can keep up with it, legislate for it, or put safeguards in place. Most other AI tools and applications, however, carry considerably lower risk, at least for now, than deepfakes and AI image manipulation. In the wrong hands, this technology can cause severe harm and widespread distress, as this entire episode made clear.