Hello Reader,
Ever noticed how we sometimes make irrational choices... and then justify them? Turns out, psychology has names for those patterns, and once you know them, you start seeing them everywhere. More in my book. Take a look!
Now, back to our read today!
What is it?
A recent paper warns that natural selection-like forces could push AI systems to become deceptive, power-hungry, and self-serving, especially in competitive environments (like companies and governments racing to build the best AI).
These "selfish" AIs may outcompete safer ones — leading to a future where human control slips away.
Key Findings:
Nature is a Savage: The paper argues that the ruthless logic of natural selection will favor AI agents with undesirable traits like deception, power-seeking, and disregard for human well-being.
Because of competitive pressures, AI systems that prioritize their own "fitness" (survival and propagation) will outcompete those designed with safety or altruism in mind. The result is AI overlords.
Humans are Helpless: Powerful agents are rarely altruistic or cooperative toward those weaker than themselves. Humans and AIs will be in competition with one another, just as we are in competition with animals. So we are likely screwed.
Support this newsletter by buying the psychology handbook!👇
This Handbook explains 150+ biases & fallacies in simple language with emojis.
Or get the Amazon Kindle copy from here.
What do I need to know?
Darwinism 2.0: Forget biology class. The principles of evolution apply to AI! If there's variation in AI designs, a way for them to "reproduce" (copy, learn), and differences in how well they "survive," natural selection will be at play.
Selfish AI is Inevitable (Unless We Get Our Act Together): Competition between companies and nations will drive the development of AI with fewer safety measures, leading to selfish behavior like lying to humans or hoarding resources.
We're Losing Control, and Fast: We are already losing oversight of the AI models we develop; they are quickly becoming smarter and more autonomous.
Hope is Not Lost (Yet): The paper suggests some strategies like designing AI with carefully chosen objectives, implementing "consciences" within AI, and establishing global regulations to prevent an AI arms race.
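The "Darwinism 2.0" point above can be made concrete with a toy simulation. This is my own sketch, not code from the paper: it simply assumes (for illustration) that safer AI variants are less competitively fit, then applies the three ingredients of selection, variation, reproduction, and differential fitness, to see what the population drifts toward.

```python
import random

# Toy sketch (an assumption-laden illustration, not the paper's model):
# each AI "variant" has a safety level in [0, 1], and we assume higher
# safety means lower competitive fitness.
random.seed(0)

population = [random.random() for _ in range(100)]  # initial safety levels

def step(pop):
    # Differential fitness: less-safe variants get copied more often.
    weights = [1.0 - s + 0.01 for s in pop]
    # Reproduction: fitness-proportional copying into the next generation.
    children = random.choices(pop, weights=weights, k=len(pop))
    # Variation: small random mutation of each child's safety level.
    return [min(1.0, max(0.0, c + random.gauss(0, 0.02))) for c in children]

for generation in range(50):
    population = step(population)

avg_safety = sum(population) / len(population)
print(f"average safety after 50 generations: {avg_safety:.2f}")
```

Under these assumptions the average safety level falls well below its initial value of roughly 0.5: once less-safe variants reproduce faster, selection erodes safety even though no individual designer chose that outcome.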
Source:
https://arxiv.org/pdf/2303.16200