The AI behind modern LLMs has a tendency to escalate situations, treating escalation — up to and including launching nukes — as the solution to most problems and threats. This makes it unfit for military use until we fully understand the risks.
Microsoft’s latest Future of Work Report examines how AI is reshaping work, presenting generative AI in an overwhelmingly positive light.
Researchers found that under fabricated stress or pressure, ChatGPT would almost always use any means it had access to — even forbidden ones.
A model trained on data from 6M+ Danish people predicted death with 78% accuracy.
Apple has tested the practicality of running LLMs locally from flash memory, allowing models to run on devices with limited RAM, such as iPhones.
Meta, IBM, and Intel lead the 47-member AI Alliance, a new partnership for responsible AI research & development.
In a scientific breakthrough, researchers have created an AI neural network that matches human performance in language learning.
Nvidia’s new AI agent, Eureka, can autonomously train robots to perform movement tasks without any task-specific prompt or template.
A Harvard study notes major improvements in consultants’ work, provided the task falls within the model’s frontier of capabilities.
Google pledges $20M & launches a new project funding research into — and grants for — responsible AI.