Artificial intelligence and machine learning have permeated nearly every practical sphere of our lives, with new tools and models promising something new every week. Here, I cover AI popular culture as it unfolds, picking the important news stories and publishing them after careful deliberation.
![dystopia](https://aicommenter.com/wp-content/uploads/2024/05/dystopia.webp)
Featured
10 Arguments for AI Being the Digital Atom Bomb: Deepfakes, the threat of nuclear war and bioterrorism, job and CEO replacement, the ease of political disinformation, copyright violations, licensing of training data, outperforming humans, and fears of self-replication — here’s an update on the new-age threats posed by the rapid advance of artificial intelligence research.
“I think there will come a time in the not-so-distant future, like we’re not talking decades and decades from now, where frontier AI systems are capable of causing significant global harm. I’d be super nervous about regulatory overreach here. I think we get this wrong by doing way too much or a little too much. I think we can get this wrong by doing not enough.” Sam Altman, OpenAI CEO (All-In Podcast E178)
Today The Atlantic is announcing a strategic content and product partnership with OpenAI, which positions The Atlantic as a premium news source within OpenAI. The Atlantic’s articles will be discoverable within OpenAI’s products, including ChatGPT, and as a partner, The Atlantic will help to shape how news is surfaced and presented in future real-time discovery products. Queries that surface The Atlantic will include attribution and a link to read the full article on theatlantic.com. May 29, The Atlantic
Latest News
- AI Predicts Alzheimer’s with 80%+ Accuracy: A new model trained on existing medical records is more than 80% accurate at predicting whether people with mild cognitive impairment will remain stable or progress to Alzheimer’s.
- Microsoft’s AI Boss Thinks Copying from the Web Is Just Fine (It’s Not): Microsoft’s Mustafa Suleyman claims anything on the web is fair game for copying, ignoring basic copyright principles.
- OpenAI and TIME Partner for 100+ Years of Training Data: TIME magazine and OpenAI announce a strategic partnership, granting OpenAI access to TIME’s vast content archive for use in its AI models.
- Microsoft Discovers a Sneaky Way to Bypass AI Chatbot Guardrails: Microsoft has discovered a “skeleton key” jailbreak that lets chatbots bypass safety restrictions and generate harmful content. Affected chatbots include Llama3-70b-instruct, Gemini Pro, GPT-3.5 and GPT-4o, Mistral Large, Claude 3 Opus, and Command R Plus.
- New AI Model Can Design Proteins Much Faster: Designing new proteins is important but painfully slow and tedious. That’s about to change with ESM3, a new frontier model from the biotech company EvolutionaryScale.
- US Music Record Labels Sue AI Music Generators: Sony Music, Universal Music Group, and Warner Records sued AI music generators Suno and Udio for “mass copyright infringement,” alleging they used copyrighted recordings to train their systems. The output you can get from Suno and Udio is remarkably similar to real singers.
- Company Replaces Team of 60 Writers with AI: A writer led a team of 60. Piece by piece, his team was replaced by ChatGPT, and his job became humanizing the AI’s text.
- Machine Learning Model Predicts Parkinson’s with 79% Accuracy: In a scientific breakthrough, researchers trained an ML model to predict Parkinson’s disease with a high degree of accuracy.
- New York Times Replaces Artists with AI: According to a leaked memo, the NY Times has let artists go in favor of AI, notable for a company that has publicly opposed generative AI.
What happens on this website?
I keep a list of sources about AI, ML, deep learning, and related fields that I check regularly. Anything worth talking about is picked up from these sources, which include Twitter lists, subreddits, the AI/tech sections of big news websites, and a bunch of daily newsletters. Of course, sometimes I miss a few stories. Given my limited bandwidth, I also skip stories that are too shallow, such as rumors or minor leaks.
When I find a story worth covering, I research all of its facets, often reading multiple articles to get to the core of the matter. Then I write about the story with my own opinions and biases. Nobody’s knowledge is perfect, so I do my best to explain my research process and link to important resources throughout my coverage.
Find stories by topic:
Adobe AI Hardware AI Inference AI Inferencing AI in Military AI Training Amazon AMD Anthropic Apple Autonomous Systems Azure ChatGPT Copilot Copyright Deepfake Elon Musk Gemini Generative AI Google GPT-4 Hardware Healthcare Humanoids Intel Investment Lawsuit LLaMA Llama 2 Meta Microsoft Misuse of AI Multimodal Nvidia Open-Source OpenAI Policymaking Privacy Regulation Research Paper Risks of AI Safety Sam Altman Small Language Models Tesla