Emotions are something we think of as inherently human. Our ability to feel immense joy and deep sadness, as well as anger, jealousy, pride, and many other complicated emotions, often governs our decision-making.
Humans are rational beings. We can think through and evaluate different viewpoints and options in life, but our feelings often get in the way of our rational thinking and strongly influence our decisions and behavior. Emotions also help us understand others and adjust our social behavior and communication. We know that animals can be happy or sad as well, and that as living beings they too have the capacity to feel. But what about machines or programs, specifically those enhanced with artificial intelligence? Could they display emotions as well?
We have to note that there is a difference between being able to feel something and being able to mimic and display an emotional reaction. “Emotion AI” already exists: it refers to a subset of AI that measures, understands, simulates, and reacts to human emotions. It allows for more natural communication between machines and humans, because we are used to adjusting our interactions based on others’ emotional state, and machines that can do the same can communicate with us much more effectively. Some machines can now decode facial expressions, analyze voice patterns, monitor eye movement, and measure neurological immersion levels, helping us understand human emotions and engagement in contexts such as advertising and business interactions, as well as in mental health care, education, and similar fields.
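To make the basic idea concrete, the sketch below shows the "signal in, emotion label out" shape of such a system in miniature. It is a deliberately simplified toy: real Emotion AI systems use trained models over facial, vocal, and physiological signals, whereas this example (including the `EMOTION_KEYWORDS` table and `classify_emotion` function, both invented here for illustration) just counts emotion-related keywords in text.

```python
# Toy sketch: keyword-based emotion classification of text.
# Real systems use trained models on faces, voices, and other
# signals; this only illustrates the input -> emotion-label flow.

EMOTION_KEYWORDS = {
    "joy": {"happy", "delighted", "thrilled", "glad"},
    "sadness": {"sad", "unhappy", "miserable", "down"},
    "anger": {"angry", "furious", "annoyed", "irritated"},
}

def classify_emotion(text: str) -> str:
    """Return the emotion whose keywords appear most in `text`."""
    words = set(text.lower().split())
    scores = {
        emotion: len(words & keywords)
        for emotion, keywords in EMOTION_KEYWORDS.items()
    }
    best = max(scores, key=scores.get)
    # Fall back to "neutral" when no keyword matched at all.
    return best if scores[best] > 0 else "neutral"

print(classify_emotion("I am so happy and glad today"))  # joy
print(classify_emotion("The weather report said rain"))  # neutral
```

A production system would replace the keyword table with a model trained on labeled emotional data, but the overall interface, mapping an observed signal to an estimated emotional state, stays the same.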
It seems that even though AI can’t “feel” on its own, it can greatly help humans understand the emotions and subconscious reactions of others. The question remains where this technology should be used, and when it approaches the limits of what is ethical and acceptable to the general public. There is a great need for transparency and further research in order to develop Emotion AI in a morally and socially responsible direction.