A recent cross-disciplinary study by researchers at Washington University in St. Louis has shed light on an intriguing psychological phenomenon: when people are told they are training artificial intelligence (AI) to play a bargaining game, they adjust their behavior to appear more fair and just. This impulse, identified by Lauren Treiman, a Ph.D. student in the Division of Computational and Data Sciences and lead author of the study, has significant implications for real-world AI developers.
The study, published in Proceedings of the National Academy of Sciences, consisted of five experiments, each involving roughly 200-300 participants. Subjects played the “Ultimatum Game,” negotiating small cash payouts with either human players or a computer; in this game, one side proposes how to split a small sum and the other accepts or rejects it, and a rejection leaves both sides with nothing. When told that their decisions would be used to train an AI bot to play the game, participants were more inclined to insist on a fair share of the payout, even if it meant sacrificing some of their own earnings.
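To make the structure of the game concrete, here is a minimal sketch of a single Ultimatum Game round in Python. It is purely illustrative and not the study's actual protocol: the pot size, the responder's fairness threshold, and the function names are all assumptions chosen for the example. It shows why insisting on fairness is costly, since rejecting a lowball offer leaves the responder with nothing.

```python
# Minimal, illustrative Ultimatum Game round (not the study's protocol).
from dataclasses import dataclass


@dataclass
class Outcome:
    proposer_payout: float
    responder_payout: float
    accepted: bool


def play_round(pot: float, offer_to_responder: float,
               responder_min_share: float) -> Outcome:
    """Responder accepts only if the offered share meets their fairness threshold."""
    if offer_to_responder / pot >= responder_min_share:
        return Outcome(pot - offer_to_responder, offer_to_responder, True)
    # Rejection: neither player receives anything.
    return Outcome(0.0, 0.0, False)


if __name__ == "__main__":
    # A lowball $1 offer out of a $10 pot is rejected by a responder who
    # demands at least 30% of the pot, so both sides walk away with $0.
    print(play_round(pot=10.0, offer_to_responder=1.0, responder_min_share=0.3))
    # A fair 50/50 split is accepted.
    print(play_round(pot=10.0, offer_to_responder=5.0, responder_min_share=0.3))
```

Under this simple rule, the responder's willingness to reject unfair offers is exactly the behavior participants leaned into more strongly when they believed an AI was learning from them.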
Interestingly, this behavioral change persisted even after participants were told that their decisions were no longer being used for AI training, suggesting that the experience of shaping technology had a lasting effect on their decision making. Wouter Kool, an assistant professor of psychological and brain sciences, emphasized the significance of this carryover and its implications for habit formation.
While the study highlighted participants' strong inclination to behave fairly when training AI, the motivations behind this behavior remain unclear. Kool noted that the researchers did not examine participants' specific motivations and strategies. It is possible that individuals were simply following their natural instinct to reject offers they perceived as unfair, without considering the broader consequences of their actions.
According to Kool, participants may not have felt a compelling need to prioritize the ethics of AI during the experiment; their actions could instead have been driven by a desire to take the path of least resistance. This lack of attention to future consequences raises questions about the thought processes that guide human behavior in the context of AI training.
Chien-Ju Ho, an assistant professor of computer science and engineering, emphasized the crucial role human decisions play in training AI. He stressed that human biases introduced during training can lead to biased outcomes in AI algorithms, a problem already evident in applications such as facial recognition software, where skewed training data has produced inaccuracies, particularly in identifying individuals from diverse racial backgrounds.
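The mechanism Ho describes can be sketched in a few lines: a bot that simply imitates recorded human decisions inherits whatever tendencies are baked into its training data. The snippet below is a toy illustration, not the researchers' training setup; the data, function names, and offer shares are all fabricated for the example.

```python
# Toy imitation learner: estimates P(accept) per offered share by frequency
# counting over recorded human decisions. Purely illustrative.
from collections import defaultdict


def fit_acceptance_policy(decisions):
    """decisions: iterable of (offered_share, accepted) pairs."""
    counts = defaultdict(lambda: [0, 0])  # share -> [accepts, total]
    for share, accepted in decisions:
        counts[share][0] += int(accepted)
        counts[share][1] += 1
    return {share: accepts / total for share, (accepts, total) in counts.items()}


# Hypothetical training records: (offered share of the pot, accepted?).
# Participants who know they are "training an AI" reject all 20% offers.
training_when_watched = [(0.2, False), (0.2, False), (0.5, True), (0.5, True)]
training_baseline     = [(0.2, True),  (0.2, False), (0.5, True), (0.5, True)]

print(fit_acceptance_policy(training_when_watched))  # {0.2: 0.0, 0.5: 1.0}
print(fit_acceptance_policy(training_baseline))      # {0.2: 0.5, 0.5: 1.0}
```

Because the learned policy is nothing more than a summary of the humans who produced the data, any shift in those humans' behavior, whether a deliberate push toward fairness or an unexamined bias, shows up directly in what the AI does.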
The study underscores the importance of considering the psychological aspects of computer science when developing AI technology. By acknowledging the impact of human behavior on AI training, developers can strive to mitigate biases and ensure more ethical and equitable outcomes in the deployment of artificial intelligence.