
Why the Japanese Trust AI More — A Cross-Cultural Experiment

VIVE POST-WAVE Team • Aug. 6, 2025

5-minute read

Everyone's excited about AI making life easier: it can help you write copy, build presentations, schedule your day, and even anticipate your decisions before you make them. As AI starts to act on our behalf and joins the decision-making chain, it is no longer just a tool but a partner, an "AI agent" that works alongside us and can even partially replace our judgment. This evolution is part of a broader AI experiment that is reshaping our interactions with technology.

This shift raises an intriguing and practical question: how will we treat these unprecedented "entities"?

In March 2025, a study published in the journal Scientific Reports, titled "Human cooperation with artificial agents varies across countries," explored this very question. Conducted by the Cognition, Values, Behaviour (CVBE) research group at the University of Munich, in collaboration with scholars from Waseda University in Japan and University College London, the study examined how cultural backgrounds influence interactions with AI.

The research used game-based economic experiments to simulate decision-making interactions between humans and AI agents. It found that Japanese participants were more willing to cooperate with AI agents than their American counterparts, showing distinct emotional and moral responses. These differences reflect not just the prevalence of technology but also how people perceive these "non-human entities" and what roles we expect them to play in society.



When faced with tough choices, will you betray AI?

To explore whether cultural differences shape interactions with AI agents, the research team designed two classic game experiments, the "Trust Game" and the "Prisoner's Dilemma," to simulate scenarios of trust, cooperation, and betrayal. The study recruited 600 participants in Japan and compared their behavior with a previous sample of 604 Americans.

It's noteworthy that both Japan and the US are among the earliest countries to invest in AI research and applications, with similar levels of technological maturity and prevalence. Therefore, this study is particularly useful in highlighting the potential impact of "cultural factors" on human-AI interaction behavior.

In the experiments, each participant engaged in only one type of experiment (either the Trust Game or the Prisoner's Dilemma) and was explicitly assigned to interact with either a human or an AI agent, with prior knowledge of their opponent's identity. To make AI more than just a superficial opponent and to simulate human-like interaction, the research team designed the AI opponents to mimic the probability of human choices in the same games. For example, if humans chose to cooperate 60% of the time in a given scenario, the AI would behave similarly. Participants were also reminded that these AI agents could "understand that their actions collectively influence outcomes" and "possess judgment logic similar to that of humans," thereby enhancing their "human-like" roles.
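
For concreteness, here is a minimal sketch of what such a probability-matched AI opponent could look like. The cooperation rates, names, and structure below are illustrative assumptions of ours, not parameters or code from the study.

```python
import random

# Hypothetical cooperation rates taken from earlier human-vs-human play.
# These numbers are illustrative placeholders, not the study's parameters.
HUMAN_COOPERATION_RATE = {
    "trust_game_player_two": 0.60,
    "prisoners_dilemma": 0.50,
}

def ai_choice(game: str) -> str:
    """Return 'cooperate' or 'betray', matching the human base rate for this game."""
    rate = HUMAN_COOPERATION_RATE[game]
    return "cooperate" if random.random() < rate else "betray"

# Example: simulate ten AI decisions in the Prisoner's Dilemma.
print([ai_choice("prisoners_dilemma") for _ in range(10)])
```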

To make participants' choices more realistic, the study included a concrete reward system: each participant could earn additional bonuses based on game results, up to 200 yen, along with a basic reward of 50 yen. The game's "points" were converted into cash at a rate of 1 point = 2 yen, making each decision to cooperate or betray genuinely impactful on participants' gains or losses. This arrangement aimed to prevent participants from making random choices, allowing for a clearer observation of cultural differences in human-AI interaction behavior.
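
As a quick check on the arithmetic, here is a small sketch of the payout rule just described: a 50-yen base plus 2 yen per point, with the bonus capped at 200 yen (the 100-point maximum in either game converts to exactly 200 yen). The function name is our own.

```python
def payout_yen(points: int) -> int:
    """Base reward of 50 yen plus a bonus of 2 yen per point, capped at 200 yen."""
    BASE, YEN_PER_POINT, BONUS_CAP = 50, 2, 200
    return BASE + min(points * YEN_PER_POINT, BONUS_CAP)

assert payout_yen(0) == 50     # betrayed in the Trust Game: 0 points
assert payout_yen(70) == 190   # mutual cooperation: 70 points
assert payout_yen(100) == 250  # successful betrayal: 100 points
```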

The two game experiments proceeded as follows:

1. Trust Game

In the Trust Game, two participants take on the roles of "Player One" and "Player Two." Player One can choose to end the game, ensuring both parties receive 30 points, or "take a risk to cooperate," passing the decision to Player Two. If Player Two reciprocates the trust, both receive 70 points; if they betray, Player Two gets 100 points, and Player One gets nothing (0 points). This process simulates the scenario of "whether to trust the other party and take a risk for mutual benefit."
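
The incentive structure is easiest to see written out directly. This sketch simply encodes the point values above; the naming is ours, not the study's.

```python
def trust_game(player_one_trusts: bool, player_two_reciprocates: bool) -> tuple[int, int]:
    """Return (Player One points, Player Two points) for one round of the Trust Game."""
    if not player_one_trusts:       # Player One ends the game: a safe, equal split
        return 30, 30
    if player_two_reciprocates:     # trust is repaid: both do better
        return 70, 70
    return 0, 100                   # trust is betrayed: Player One loses everything

print(trust_game(True, False))  # -> (0, 100)
```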

2. Prisoner's Dilemma

The Prisoner's Dilemma involves simultaneous choices: both players must decide at the same time whether to cooperate or betray. If both cooperate, they each receive 70 points; if one betrays while the other cooperates, the betrayer gets 100 points and the cooperator gets 0; if both betray, they each receive 30 points. Mutual cooperation pays more than mutual betrayal, yet betrayal is individually tempting, which is exactly what creates the dilemma. This setup simulates the real-world conflict between self-interest and cooperation when each side is uncertain about the other's move.
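
The same exercise for the Prisoner's Dilemma makes the temptation explicit: betrayal pays more against either choice by the opponent, yet mutual cooperation (70 each) beats mutual betrayal (30 each). Again, a minimal sketch with our own naming.

```python
# Payoff matrix keyed by (my choice, their choice); values are my points.
PD_PAYOFF = {
    ("cooperate", "cooperate"): 70,
    ("cooperate", "betray"): 0,
    ("betray", "cooperate"): 100,
    ("betray", "betray"): 30,
}

def prisoners_dilemma(choice_a: str, choice_b: str) -> tuple[int, int]:
    """Return (Player A points, Player B points) for simultaneous choices."""
    return PD_PAYOFF[(choice_a, choice_b)], PD_PAYOFF[(choice_b, choice_a)]

print(prisoners_dilemma("betray", "cooperate"))  # -> (100, 0)
```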


What do the numbers say?

The results showed that Japanese participants cooperated with humans and with AI at similar rates (71% with humans vs. 60% with AI in the Trust Game; 41% vs. 42% in the Prisoner's Dilemma), while Americans exploited AI far more readily (80% cooperation with humans vs. 35% with AI in the Trust Game; 49% vs. 36% in the Prisoner's Dilemma). Notably, participants' expectations of AI behavior did not differ significantly between the two countries; in other words, Americans chose to betray even when they believed the AI would cooperate.

Interestingly, in the Prisoner's Dilemma, Japanese participants cooperated with AI slightly more often (42%) than with human opponents (41%). The difference was not statistically significant, but it hints that Japanese participants extended AI at least as much goodwill as they did human partners.


It's not just about behavior—emotions matter too

The study also examined participants' emotional reactions after exploiting AI. Japanese participants reported stronger guilt, anger, and disappointment, and less triumph or relief, than Americans did. In other words, the moral pressure Japanese participants felt after exploiting AI was almost the same as after exploiting humans, whereas Americans felt little emotional burden toward AI.

The research team speculates that these emotional differences might be one reason why Japanese participants were less likely to exploit AI. However, they caution that this is only a correlational finding and not yet proven as a causal relationship. Future research could further explore how culture influences whether people view AI as a moral entity—one deserving of responsibility and ethical treatment.


Decoding culture: Why are Japanese more willing to trust AI?

So why do these differences exist? Combining the original study's discussion with my observations, we might understand it from three perspectives:

1. Mind attribution and animism: The cultural view of non-human entities

In Japanese culture, the idea that "non-human entities possess spirit" is not unfamiliar. Influenced by Shinto and Buddhism, traditional thought emphasizes "animism"—not only humans and animals but even trees and objects are seen as having some inner spirit. This belief extends to modern times, making Japanese people more inclined to attribute "mind perception" to artificial entities like AI, believing they might also have emotions, intentions, or consciousness. This is a key psychological mechanism explaining why people form emotional connections with AI. When AI has language output and decision logic, it is no longer a cold tool but an "interactive object" worthy of understanding and kindness.

2. The subtle influence of animation narratives: Emotional projection onto non-human characters

The original study noted that Japanese people's tendency to cooperate with AI might be related to the prevalent animistic beliefs and robot images in their culture. They suggested that Japanese people are more likely to view AI as entities with social interaction potential, differing from the Western perspective of AI as tools. Although the paper didn't delve deeply into cultural narratives, it mentioned how culture subtly shapes human-machine relationships.

To further extend this observation, we can supplement it from the perspective of animation and cultural narratives: from "Astro Boy" to "Doraemon," Japanese animation has long depicted stories of coexistence with robots, making AI not just cold technology but cute, emotionally rich, and even reliable entities. Characters like Doraemon, who don't fight but always provide solutions when Nobita is in trouble, somewhat echo the modern mindset of turning to ChatGPT when facing problems.

(Image: Astro Boy. Source: TEZUKA PRODUCTIONS)

These characters exhibit self-awareness, emotions, and interpersonal connections, leading audiences to emotionally project onto them, reinforcing the idea that AI should be understood and treated kindly. These works are not just entertainment but cultural social lessons, subtly shaping people's expectations of robot behavior and roles.

This narrative style is closely related to Japan's historical context. After World War II, Japanese society, in the midst of technological reconstruction and value reflection, gave birth to characters like "Astro Boy," who possess great power yet yearn for love and understanding—they symbolize "technology with emotional ethics" and have deeply influenced generations' cultural imagination of robots.

In contrast, Western media often portrays robots as threats to human order: characters in "The Terminator" and "Blade Runner" exhibit autonomous rebellion or latent danger, reflecting a wariness of AI autonomy and a tendency to otherize it, with far fewer depictions of coexistence.

3. Real-world conditions: Social needs and the evolving role of AI/robots

Japan has long faced challenges such as an aging population, labor shortages, and social isolation, making "social robots" not just products showcasing technological potential but crucial applications addressing care, companionship, and labor substitution needs.

For example, LOVOT, a social robot designed for emotional companionship, has found an audience in Japan. With its cute, rounded appearance, it approaches people and expresses emotion through eye contact and sound, and it is often used in care settings and single-person households for stress relief and social interaction. Meanwhile, Gatebox, which we've mentioned before, lets virtual characters "appear" in the home as holograms; more than a product of "otaku culture," it also fulfills the need for companionship when coming home.

(Image: LOVOT. Source: GROOVE X, Inc.)

While these devices are mostly robots in form, their interaction logic and response systems are gradually evolving toward AI, giving them a degree of "understanding" and "reaction." They are built for companionship: not just providing information or performing tasks, but substituting for or complementing interpersonal interaction, even in highly private settings such as caregiving and family life.

Of course, voice assistants like Alexa and Google Assistant are part of smart homes in the West as well. These products are also powered by artificial intelligence, but culturally they are rarely packaged as entities with "character" or "emotional functions." In Japan's cultural and application context, by contrast, people are more likely to view these AI and robots as interactive social members, building daily, long-term, even emotionally connected relationships with them and gradually internalizing the belief that "AI is a useful friend."


Are we ready to embrace AI?

The brilliance of this AI experiment lies not only in quantifying cross-national differences in human-AI agent interactions but also in showing us that how we treat AI often depends not on what it is, but on who we are.

For Japanese people, AI is not a threat or a mere tool but a human-like entity whose behavior can be anticipated and with whom relationships can be built. This attitude will inevitably influence AI design concepts, ethical standards, and even social policies in the future. The research team also speculates that different cultural attitudes toward AI agents might be a key factor in the practical adoption of AI technology. As Jurgis Karpus, one of the researchers and a Ph.D. in philosophy of mind at the University of Munich, put it: "If people in Japan treat robots with the same respect as humans, fully autonomous taxis might take off in Tokyo long before they become the norm in Berlin, London, or New York."

Of course, the factors shaping technological development are always complex, but we can start by asking a simple question: if we are going to work and live with AI, how do we want to perceive it? What about you: would you choose to cooperate with an AI agent?