Hey Reader,
This research explores a new and concerning phenomenon that occurs when people interact with increasingly human-like AI.
*In last week’s poll, I asked if you’d like more Marketing Psychology. The results were 60% Yes and 40% Maybe.
What is it?
The central argument is that when people perceive autonomous agents (AI programs, robots) as having strong socio-emotional capabilities, they start attributing a human-like mind to those agents.
Paradoxically, this leads to dehumanizing perceptions of actual humans, as our judgments of humanness get pulled down toward the (still less-than-human) level of the AI.
Consequently, people become more likely to mistreat others, especially employees, after interacting with emotionally intelligent AI.
Major Findings:
Socio-emotional capability, not cognitive capability, drives dehumanization: Perceiving a robot or AI program as having feelings and emotions (e.g., empathy) has a stronger dehumanizing effect on perceptions of humans than perceiving it as highly intelligent.
Extreme capabilities can trigger contrast, reducing dehumanization: When AI exhibits capabilities far beyond human potential, the contrast becomes stark, and people see a clear distinction between humans and machines. This reduces the dehumanizing effect.
Experience dimension of mind is key: The dehumanization effect is primarily driven by the perception of experience (the ability to feel emotions) in AI, rather than agency (the ability to act intentionally).
"…One of the best Psychology eBook Purchases I Ever Had…"
The Psych Handbook has 150+ biases & fallacies explained with emojis!
Get the Amazon Kindle copy from here. (40% off)
What do I need to know?
AI-induced dehumanization is a real risk: As AI becomes more sophisticated and human-like, we need to be aware of its potential to negatively impact our perceptions and treatment of other people.
The "uncanny valley" can be protective: When AI capabilities are extreme and clearly beyond human reach, it can create a boundary that minimizes dehumanization.
Socio-emotional capabilities are more impactful than cognitive ones: AI that can understand and respond to emotions poses a greater dehumanization risk than AI that is merely intelligent.
Awareness is crucial for mitigation: Companies and marketers developing and deploying AI technologies need to consider the potential for dehumanization and design strategies to mitigate it.
Source:
https://myscp.onlinelibrary.wiley.com/doi/full/10.1002/jcpy.1441?campaign=wolearlyview