10 Arguments for AI Being the Digital Atom Bomb


Here’s an update on the dangers, fears, and risks of AI. Below are 10 recent developments that should scare human beings.

By all accounts, the breakneck growth in this ongoing race for AI supremacy points to generative AI as a ticking time bomb. From doomsday scenarios to national security threats, AI looms large in every sphere of our lives, potent enough to fundamentally rewrite social contracts.

1. Deepfaking

There are serious concerns about what deepfake technology can do to the world. Spreading misinformation about nuclear strikes, for example, with videos of missiles being launched, can trigger a confrontation on absolutely no real evidence. Given that the tools to verify the authenticity of AI-generated photos and videos are in their nascent stages, a superpower nation doesn’t have a long window to confirm any suspicions. In a piece on The Hill aptly titled “Humanity is one convincing deepfake away from a global catastrophe”, Hamza Chaudhry writes, “The national security risks extend beyond nuclear exchange.” The only relief lies in the fact that the cost of war and the technical literacy of governments are much higher today than they were before.

2. AI as Nuclear Weapon

Warren Buffett saw his “likeness” deliver a message. His comment was, “So when you think of the potential for scamming people … scamming has always been part of the American scene but … it’s going to be the growth industry of all time.” Likening deepfake technology to a genie out of its bottle, he said it scares the hell out of him. His warning, reported by Axios here, compares the rise of AI to that of nuclear weapons during the Second World War. The one silver lining for the world of finance and investing, however, is that the job of many middlemen and analysts will be taken over by AI-based financial agents that can retain more historical data and give more informed opinions tirelessly.

3. Ease of Bioterrorism

RAND Corporation is an American think tank and research institute. Its CEO, commenting on research into the dangers of AI, said that “advances in AI are making it easier to learn how to build biological weapons and other tools of destruction.” Threats such as AI-enabled bioterrorism are dangerous on their own, and when you add to the mix the fact that the public is more vulnerable today, as the COVID-19 pandemic exemplified, things suddenly become very dark. His interview with WIRED brings to light a very common obstacle that people with malicious intent face: they can get their hands on the raw materials off eBay easily, but it’s the lack of knowledge that keeps them from doing anything truly dangerous. And AI is well-poised to change that.

4. AI Customer Support

And it’s not just deepfakes, national security threats, and bioterrorism. Another facet that troubles policymakers deeply is the job market. You’ve probably heard a million times by now how CEOs keep telling you not to worry. AI is not a replacement for humans, they say. Well, as it turns out, that claim doesn’t hold up. AI is replacing humans as we speak. I’ve covered major milestones of AI replacing humans many times on this website. In fact, TCS head K. Krithivasan said that there will be a minimal need for call centers within a year. The original interview was with the Financial Times (link – paywalled); you can read more at TechSpot. As MNCs adopt generative AI, he says, a reduction in jobs is to be expected. The customer help center industry employs 17 million people (Gartner).

5. CEOs Might Be Replaced

AND Digital, a company focused on closing the digital skills gap, recently published a whitepaper titled “The CEO Digital Divide.” The report analyzes how CEOs think and what they value most – speed, security, scale, innovation, legacy, and so on. It found that 45% of CEOs make major business decisions based on ChatGPT’s output and, more alarmingly, that 43% believe AI could replace the job of the CEO. The data comes from a survey of 600 CEOs in the UK. Will AI be the great leveler, as Luke Hughes concludes in his coverage of the story, or will CEOs find clever ways to tiptoe around the looming probability when the time comes?

6. Changing Political Perspectives

AI is in everyone’s hands. Less than a month ago, writing in the WSJ, Jack Brewster narrated how he built an AI-powered propaganda machine for just $105 (link – paywalled). Essentially, Brewster paid a developer on Fiverr to create an AI-generated, automated news site that churns out fake political news stories. It took two days to get this done, and the website could publish thousands of articles a day using ChatGPT. The developer told him he has created 500+ AI news websites, each project taking him 2-3 days – and that’s just one developer on Fiverr. Such an automated system takes news stories from reputable sources and rewrites them to change the perspective, introducing the preferred bias. The next election cycle is upon us (UK – January 2025; US – November 2024; and our own elections scheduled for April-June 2024). This will be the first time since the explosion of AI that major elections happen all over the world. With manipulation being so easy today, how much of what we consume online is the truth?
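To see why such sites are so cheap to run, here is a minimal sketch of the scrape-rewrite-publish loop described above. Everything in it is hypothetical: the function names are mine, and the actual LLM call is stubbed out with a placeholder (a real site like the one Brewster commissioned would send the prompt to a model such as ChatGPT instead).

```python
def rewrite_with_bias(article: str, preferred_slant: str) -> str:
    """Placeholder for an LLM call that rewrites a news story.

    A real implementation would send a prompt like the one below to a
    chat-completion API and return the model's output verbatim.
    """
    prompt = (
        f"Rewrite the following news story so it subtly favors "
        f"{preferred_slant}, keeping the facts superficially intact:\n\n"
        f"{article}"
    )
    # Stub: tag the text instead of actually calling a model.
    return f"[rewritten with slant: {preferred_slant}] {article}"


def run_pipeline(source_articles: list[str], slant: str) -> list[str]:
    """Rewrite every scraped article, ready for auto-publishing."""
    return [rewrite_with_bias(article, slant) for article in source_articles]


# Example: two scraped headlines pushed through the pipeline.
scraped = [
    "City council approves new budget.",
    "Candidate X holds rally downtown.",
]
for post in run_pipeline(scraped, "Candidate X"):
    print(post)
```

The alarming part is that this is the whole architecture: a scraper, one prompt template, and a publishing loop. The marginal cost of each additional biased article is a fraction of a cent.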

7. Violating Copyrights

Politicians in power have it easy; those in the minority have it difficult. But swaying elections doesn’t change the underlying, core apparatus of law and policymaking as far as general society is concerned. In contrast, companies developing AI technologies, models, and tools at breakneck speed do affect society on a very large scale, because their products are used by ordinary people like you and me. Amazon was allegedly so desperate to compete in the generative AI race that it breached its own copyright rules. The company was apparently not meeting its goals on search quality through Alexa, and the higher-ups didn’t like that one bit. An ex-Amazon employee, whose role was to flag violations of the company’s internal copyright policies, was allegedly told to ignore them in order to improve the results, citing the golden rule itself – “Everyone else is doing it.” This ended up in a lawsuit (report | PDF). But how many lawsuits will you file? Meanwhile, the company at the head of it all, OpenAI, reportedly has protections in place keeping ex-employees from breaching their NDAs.

8. From Getting Sued to Getting Licenses

Copyright is a hot topic. Kristina Korotenko, in her blog, covered the key trends influencing the generative AI market in detail. She chronicles events such as Stability AI being sued by Getty Images, OpenAI by Tremblay, and Nvidia by Nazemian. That was the first phase. In my opinion, AI companies didn’t think the process of AI training through, doing little more than grossly violating every piece of copyright legislation. The second phase, also chronicled in her post, is companies partnering with media houses and websites to license content rights. This is certainly a more ethical approach, exemplified by Apple’s partnership with Shutterstock, OpenAI’s partnership with Axel Springer (owner of publications like Politico and Business Insider), and Reddit’s deal with a yet-unnamed company. It certainly beats paying $6M in fines to news sites, as Google did. But is this the solution to the problem? The blog notes that additional copyright protections are cropping up for creators and that there’s a general demand for transparency in AI content creation. Perhaps we’re on the right track.

9. Outperforming Humans

The problem is further accentuated by the LLMs behind modern AI tools becoming ever more powerful. Every week you’ll hear a new story about how a chatbot like ChatGPT, Claude, or Gemini beat humans in a particular field of study. Most recently, GPT-4 outperformed real human psychologists in social intelligence (report | study). It’s not hard to see a future, barely a couple of years down the line, where the latest LLMs beat humans in almost every sphere of life. They are even conquering the world of physical labor with robotics, as exemplified by the many humanoids that promise to cut costs for factories and conglomerates.

10. Self-Replicating AI

If that wasn’t enough, AI might be on its way to successfully “replicate and survive in the wild,” if you were to believe the words of Anthropic’s CEO. Dario Amodei was talking about responsible scaling of AI tech in a podcast with Ezra Klein of The New York Times. Long story short, he said the world is at ASL-2. ASL-4, which will include persuasion and autonomy, might be just around the corner. And this ties back to the point about geopolitics. He says, “ASL 4 is going to be more about, on the misuse side, enabling state-level actors to greatly increase their capability, which is much harder than enabling random people. So where we would worry that North Korea or China or Russia could greatly enhance their offensive capabilities in various military areas with AI in a way that would give them a substantial advantage at the geopolitical level.” Anthropic recently released Claude 3, a model that beat GPT-4 on several benchmarks. And the CEO of Mistral AI, another company in the same field (albeit open-source focused), said that AI companies are trying to build God.


Where do we go from here? The risks and potential dangers are too real; they stare us in the face. The common answer seems to be making the companies building AI tools more responsible, transparent, and accountable. An ex-OpenAI board member even called for audits of top AI companies, so they would share information with the public about the capabilities and risks of their brainchildren (link – paywalled). Another measure is policymaking: create stronger laws to protect consumers. The European Union is on track to do this, some would say. The AI Act is groundbreaking not because of any results yet, but only when you compare it with what the rest of the world is doing (read: nothing).

It’s both fascinating and scary being in this cesspool of constant development, companies trying to one-up each other, and AI models becoming uncannily good. Stay tuned for what’s to come. We’re living in exciting times. Scary, yes, but exciting nonetheless.

This is the 200th post on this website. And to find a suitable featured image for this post, I Googled “ai generated image dystopia.”

By Abhimanyu

Unwrapping the fast-evolving AI popular culture.