Journal Article
Understanding Potentially Biased Artificial Agents Powered by Supervised Learning: Perspectives from Cognitive Psychology and Cognitive Neuroscience

DOI:10.6129/CJP.201909_61(3).0002
中華心理學刊 民108,61 卷,3 期,197-208
Chinese Journal of Psychology 2019, Vol.61, No.3, 197-208


Tsung-Ren Huang (Department of Psychology, National Taiwan University; Neurobiology and Cognitive Neuroscience Center, National Taiwan University; Center for Artificial Intelligence and Advanced Robotics, National Taiwan University; Center for Research in Econometric Theory and Applications, National Taiwan University; MOST Joint Research Center for AI Technology and All Vista Healthcare; MOST AI Biomedical Research Center)

Despite being machines, many artificial agents, similar to humans, make biased decisions. The present article discusses when a machine learning system learns to make biased decisions and how to understand its potentially biased decision-making processes using methods developed or inspired by cognitive psychology and cognitive neuroscience. Specifically, we explain how the inductive nature of supervised machine learning leads to nontransparent decision biases, such as a relative ignorance of minority groups. By treating an artificial agent like a human research participant, we then review how to apply neural and behavioral methods from the cognitive sciences, such as brain ablation and image occlusion, to reveal the decision criteria and tendencies of an artificial agent. Finally, we discuss the social implications of biased artificial agents and encourage cognitive scientists to join the movement of uncovering and correcting machine biases.

Keywords: artificial intelligence, cognitive neuroscience, cognitive psychology, deep learning, machine learning
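The image-occlusion probe mentioned in the abstract can be sketched in a few lines: slide an occluding patch across an input image and record how much the model's score drops at each location, so regions whose occlusion hurts the score most are flagged as decision-critical. The routine below is an illustrative sketch only, not the article's implementation; the `toy_model` scorer is a made-up stand-in for a trained classifier.

```python
import numpy as np

def occlusion_map(model, image, patch=4, stride=4, fill=0.0):
    """Slide a patch of constant `fill` value over `image` and record the
    drop in the model's score at each position (larger drop = the occluded
    region mattered more to the decision)."""
    h, w = image.shape
    base = model(image)
    rows = (h - patch) // stride + 1
    cols = (w - patch) // stride + 1
    heat = np.zeros((rows, cols))
    for i, y in enumerate(range(0, h - patch + 1, stride)):
        for j, x in enumerate(range(0, w - patch + 1, stride)):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = fill  # mask this region
            heat[i, j] = base - model(occluded)        # score drop
    return heat

# Hypothetical "model": scores an image by the mean brightness of its
# top-left quadrant, so occlusion should single out that quadrant.
def toy_model(img):
    return img[:8, :8].mean()

img = np.ones((16, 16))
heat = occlusion_map(toy_model, img, patch=8, stride=8)
# heat[0, 0] is large (top-left quadrant drives the score);
# the other three cells are near zero.
```

The same loop works for any black-box scorer, which is precisely the appeal of behavioral probes: no access to the model's internals is required, only its input-output mapping.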
