2-minute read
Imagine approaching an AI robot with a lifelike face, and just as you are about to greet it, the robot smiles and says hello to you first. Emo, a humanlike robotic face that the Creative Machines Lab at Columbia University's School of Engineering has been developing for more than five years, made its debut in a recent research article. Emo not only makes eye contact but also predicts and mimics human facial expressions using two AI models. The technology lets robots "read the room" and, through richer nonverbal interaction, helps build a new bridge of trust between humans and robots.
Lead researcher Yuhang Hu with Emo, which wears a silicone face. (Source: Columbia Engineering)
Creating an AI robot face that can convincingly mimic a human smile is no small feat. The research team had to design actuators capable of executing complex expressions and ensure those expressions appear natural and well timed. They equipped Emo with 26 precision actuators, each responsible for reproducing the subtle movements of human facial muscles, allowing Emo to display a range of expressions from smiles to surprise. In addition, Emo's eyes are fitted with high-resolution cameras, enabling it to make remarkably lifelike eye contact and capture every subtle change in the facial expressions in front of it.
Researchers sharing a smile with Emo. (Source: Columbia Engineering)
To make Emo's responses both quick and accurate, the Columbia University team developed two AI models. The first analyzes minute changes in a human face to predict the expression that is about to form, allowing Emo to detect the onset of a smile before it fully appears. The second translates the first model's prediction into commands for the actuators, enabling Emo to anticipate a smile roughly 840 milliseconds before it fully forms and to produce its own smile in sync.
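To make the dataflow concrete, here is a minimal sketch of such a two-model pipeline. It is not the team's code: the shapes, names, and the random linear "models" standing in for trained networks are all illustrative assumptions; only the count of 26 actuators comes from the article.

```python
# Sketch of a two-model pipeline: model 1 predicts an upcoming expression
# from recent facial landmarks; model 2 maps that target expression to
# commands for the 26 actuators. All shapes/names here are assumptions.
import numpy as np

N_LANDMARKS = 68        # assumed facial-landmark count per frame
WINDOW = 8              # assumed number of recent frames the predictor sees
N_EXPR = 16             # assumed dimensionality of the expression encoding
N_ACTUATORS = 26        # from the article: 26 actuators drive the face

rng = np.random.default_rng(0)
W_predict = rng.normal(size=(N_EXPR, WINDOW * N_LANDMARKS * 2))  # stand-in for trained model 1
W_inverse = rng.normal(size=(N_ACTUATORS, N_EXPR))               # stand-in for trained model 2

def predict_expression(landmark_window: np.ndarray) -> np.ndarray:
    """Model 1: map a (WINDOW, N_LANDMARKS, 2) landmark history to a future expression code."""
    return np.tanh(W_predict @ landmark_window.ravel())

def expression_to_actuators(expression: np.ndarray) -> np.ndarray:
    """Model 2: map an expression code to actuator positions squashed into [0, 1]."""
    return 1.0 / (1.0 + np.exp(-(W_inverse @ expression)))

# One control tick: camera frames in, actuator commands out, issued
# before the human's expression has fully formed.
landmarks = rng.normal(size=(WINDOW, N_LANDMARKS, 2))  # stand-in for tracked camera frames
commands = expression_to_actuators(predict_expression(landmarks))
print(commands.round(2))
```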
The research team employed a method known as "self-modeling" to train Emo. Emo observed its own random facial movements and learned how they corresponded to the actuator commands that produced them. Through this process, Emo gradually built an internal model of how different actuator configurations change its facial expression, much as humans practice expressions in front of a mirror.
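A hedged sketch of that self-modeling idea follows: the robot "babbles" random actuator commands, watches the result through a camera (faked here by a hidden function), and fits a model relating commands to appearance. Everything below is an illustrative stand-in, not the lab's actual training code.

```python
# Self-modeling via motor babbling: sample random actuator commands,
# observe the resulting face, and fit a command -> appearance model.
import numpy as np

rng = np.random.default_rng(1)
N_ACTUATORS, N_FEATURES = 26, 32   # 26 actuators per the article; feature size assumed

TRUE_MAP = rng.normal(size=(N_FEATURES, N_ACTUATORS))  # stands in for the real face physics

def observe_own_face(commands: np.ndarray) -> np.ndarray:
    """Stand-in for the camera: what the face looks like given actuator commands."""
    return TRUE_MAP @ commands + 0.01 * rng.normal(size=N_FEATURES)  # small sensor noise

# Motor babbling: try random commands and record what the camera sees.
commands = rng.uniform(0.0, 1.0, size=(500, N_ACTUATORS))
observations = np.array([observe_own_face(c) for c in commands])

# Fit a linear self-model (least squares) relating commands to appearance.
self_model, *_ = np.linalg.lstsq(commands, observations, rcond=None)

# The learned self-model now predicts how a new command will move the face.
test = rng.uniform(0.0, 1.0, size=N_ACTUATORS)
print(np.allclose(self_model.T @ test, observe_own_face(test), atol=0.1))
```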
Initially, Emo's expressions were not quite right. (Source: Columbia Engineering)
Through this self-observation, Emo can refine the accuracy and naturalness of its expressions without any external reference. Then, by watching a large number of videos of human facial expressions, Emo learns which expressions to make and when to make them.
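One plausible way to learn that timing from video, sketched below under stated assumptions: pair each short window of landmark frames with the expression observed about 840 milliseconds later, so the model learns to anticipate rather than merely react. The frame rate, window size, and the linear fit are all assumptions; the real system would use a neural network.

```python
# Build (history window -> expression ~840 ms later) training pairs from a
# video, then fit a simple predictor. All parameters here are illustrative.
import numpy as np

FPS = 30
LEAD = int(0.840 * FPS)            # ~840 ms expressed in frames (= 25)
WINDOW, N_FEAT = 8, 16             # assumed history length and landmark-feature size

rng = np.random.default_rng(2)
video = rng.normal(size=(600, N_FEAT))   # stand-in for landmark features of one video

# Each input is a flattened history window; each target is the frame LEAD ahead.
X = np.array([video[t - WINDOW:t].ravel() for t in range(WINDOW, len(video) - LEAD)])
y = np.array([video[t + LEAD] for t in range(WINDOW, len(video) - LEAD)])

# Fit a linear predictor by least squares.
predictor, *_ = np.linalg.lstsq(X, y, rcond=None)
print(predictor.shape)   # (WINDOW * N_FEAT, N_FEAT): maps history to a future expression
```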
In the future, the research team plans to integrate Emo with large language models such as GPT to further enhance its language comprehension and interactive abilities. This would let Emo engage more deeply in conversation, understand context accurately, and discern emotion. Such highly empathetic robots inspire both excitement and apprehension about what the future might hold. As depicted in "Ex Machina," could we be taken in by their convincing performances?
Ava from "Ex Machina" smiles at you, sending chills down your spine.