Culture-Conscious AI for Conflict Resolution Coaching
By Ellen Kim & Mo-Yun Lei Fong
As generative artificial intelligence increasingly becomes a coaching and mediation tool for leaders, we are concerned about an under-covered risk. Almost all AI platforms come with a “cultural accent” that may result in misunderstandings or even conflict.
Cultural bias emerges for two reasons: how the AI model is prompted and how it is trained. A recent study by MIT Sloan researchers (Jackson G. Lu, Lesley Luyang Song & Lu Doris Zhang, “Cultural Tendencies in Generative AI,” 9 Nature Human Behav. 2360–2369 (June 2025)) found that AI models exhibit different cultural tendencies depending on the language used in queries.
A recent World Economic Forum article noted that generative AI is trained on just a few of the world’s roughly 7,000 languages, producing language bias. Because LLMs are trained predominantly on Western-influenced data, cultural gaps emerge that are problematic for leaders using generative AI as a conflict-coaching tool.
While AI undoubtedly helps prevent workplace conflict—by, for instance, simulating customizable workplace scenarios, analyzing communication style, and roleplaying in the persona of multiple stakeholders—it is important to be aware that AI could inadvertently lead to cultural missteps and greater conflict in a global context.
While AI platforms may eventually be trained to recognize these gaps, leaders today need to be adept at culture-conscious prompt engineering. Here are some practical tips for leaders using AI for conflict coaching:
Prompt Personas with Cultural Framing. Framing prompts with explicit cultural personas can partially mitigate LLM bias. When preparing for a workplace conversation, ask the model to roleplay each stakeholder with their cultural context and to surface a range of likely responses.
Evidence shows that input language itself cues cultural paradigms: in the MIT study, prompting ChatGPT in Chinese (and testing Baidu’s ERNIE) elicited more interdependent social orientations and a more holistic cognitive style than prompting in English.
Because LLMs are pattern‑matching systems, the language and persona specified at the outset steer which culturally encoded patterns are activated. To generate a realistic simulation (and a plausible band of follow‑up questions), direct the model to adopt the persona of every stakeholder, not just the voice of the script you want to deliver.
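To make the persona-framing tip concrete, here is a minimal sketch of how those prompts might be assembled. The stakeholder names, roles, and cultural labels below are purely illustrative assumptions, and no particular AI platform’s API is invoked; the snippet only builds the prompt text a leader would paste into (or send to) a model.

```python
# Sketch: assemble persona-framed prompts for each stakeholder.
# Names, roles, and cultural labels are hypothetical examples.

def persona_prompt(name, role, culture, relationship):
    """Build a prompt asking the model to roleplay one stakeholder
    inside an explicit cultural context, surfacing a range of
    likely responses rather than a single script."""
    return (
        f"Roleplay {name}, a {role} whose cultural context is {culture}. "
        f"Their relationship to me: {relationship}. "
        "Stay in character, reflect culturally typical communication norms "
        "(directness, deference, face-saving), and surface a range of "
        "likely responses and follow-up questions, not just one script."
    )

# One persona per stakeholder in the conversation -- not just your own voice.
stakeholders = [
    ("Amir", "country manager", "Iranian business culture", "reports to me"),
    ("Greta", "finance lead", "German business culture", "peer on my team"),
]

prompts = [persona_prompt(*s) for s in stakeholders]
for p in prompts:
    print(p, "\n")
```

Running one prompt per stakeholder, rather than a single combined prompt, lets you compare how the model’s simulated responses shift with each cultural framing.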
Look for Holistic and Situational Context. When prompting for a conflict scenario, start by situating the exchange in its broader context rather than isolating individual moves. Give the model a specific role and clearly state the goals, the relationship and hierarchy between parties, the intended audience and setting, and any relevant cultural undertones. Be explicit about constraints—length, format, keywords—and what to include or avoid, so the output fits the task rather than the model’s defaults.
Humans code‑switch across contexts; models won’t unless we tell them to. A negotiation in Shenzhen, Stuttgart, or Chicago can attach different meanings to the same words, so you need to specify the cultural lens you want applied and ask the system to assume the appropriate role within that context.
Leaders should be intentional here: the region matters, and so does the relational posture you want the model to emulate.
The need for explicit cues is borne out by research. In a recent paper, Nikta Gohari Sadr et al., “We Politely Insist: Your LLM Must Learn the Persian Art of Taarof” (September 2025), researchers studied taarof, a social norm in Iranian interactions: a sophisticated system of ritual politeness that emphasizes deference, modesty, and indirectness.
The study found that AI language models from OpenAI, Anthropic, and Meta all fail to absorb these Persian social rituals, correctly navigating taarof situations only 34% to 42% of the time. Instead, the models default to Western-style directness, missing the cultural cues that govern everyday interactions for millions of Persian speakers worldwide.
The practical takeaway is to name the culture(s) and assign personas for each stakeholder so the model retrieves the right patterns rather than relying on Western defaults. For example: “Draft a script for a difficult conversation about underperformance with a country manager from [COUNTRY] who reports to me at a global consulting firm, in a business‑casual tone for our weekly 1:1. Reflect local norms around [deference/directness/indirectness], preserve trust, and anticipate likely follow‑ups.”
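The bracketed template above can be parameterized so it is easy to reuse across conversations. A small sketch follows; the country and norms filled in are illustrative placeholders, not recommendations for any particular case.

```python
# Parameterize the example prompt from the text.
# The values passed in below are illustrative placeholders.
PROMPT_TEMPLATE = (
    "Draft a script for a difficult conversation about underperformance "
    "with a country manager from {country} who reports to me at a global "
    "consulting firm, in a business-casual tone for our weekly 1:1. "
    "Reflect local norms around {norms}, preserve trust, and anticipate "
    "likely follow-ups."
)

def build_prompt(country, norms):
    """Fill the template's [COUNTRY] and [deference/directness] slots."""
    return PROMPT_TEMPLATE.format(country=country, norms=norms)

prompt = build_prompt("Japan", "indirectness and face-saving")
print(prompt)
```

Keeping the cultural variables explicit in the template makes it harder to forget them under time pressure, which is exactly when defaults creep back in.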
Finally, remember that culture is multi‑layered. In addition to national context, factors such as generation, gender, and socioeconomic class often shape how people approach conflict. The more precisely you identify the facets that influence each participant, the more realistic (and useful) the model’s roleplay and guidance will be. At minimum, you’ll add a layer of nuance that better approximates the actual situation.
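One way to keep those layers from being forgotten is to enumerate them explicitly before prompting. The facet names and values in this sketch are illustrative assumptions only:

```python
# Sketch: layer multiple cultural facets into one persona description.
# Facet names and values below are hypothetical examples.
facets = {
    "national context": "Brazilian",
    "generation": "Gen Z",
    "role/seniority": "junior analyst, two years in",
    "relationship": "reports to my direct report",
}

persona = "Roleplay a stakeholder with these facets: " + "; ".join(
    f"{k}: {v}" for k, v in facets.items()
)
print(persona)
```

Listing the facets as explicit key–value pairs, rather than burying them in a paragraph, makes it easy to audit which dimensions you have (and have not) specified.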
The Human as Quality Assurance. Have someone who is fluent in the native language(s) and culture(s) provide a second opinion. Roleplay in person with a coach who has lived experience.
At the end of the day, there is no replacement for a real-life coach who is a cultural insider and has lived the nuance.
Allow the human to be the final QA on what is culturally appropriate for your particular difficult conversation or high-conflict scenario. Or practice a roleplay of a difficult conversation with a human and use AI as an “observer” to provide additional feedback.
* * *
Whether you're a global leader or work in teams with individuals from different cultural backgrounds, blending human cultural intelligence with AI's capabilities as a thought partner can help resolve conflicts in a way that honors people's identity, emotional needs, and power dynamics.
The next time you face your own cross-cultural conflict moment, you'll be equipped with both AI's analytical power and the cultural intelligence to turn potential friction into collaborative strength.
* * *
Ellen Kim is an independent mediator and arbitrator with a background as a corporate deal attorney. In addition, she is a career design/executive coach at Santa Clara University's Graduate School of Business and an alternative dispute resolution fellow with JAMS, the world’s largest private alternative dispute resolution provider. She is also a conflict coach for startups in Silicon Valley.
Mo-Yun Lei Fong is an adjunct lecturer in Stanford University’s Management Science and Engineering Department and a Harvard Business School career and executive coach. She is ICF ACC certified and the founder of LeiFongCoaching, LLC. She holds a BS in Chemical Engineering and an MA in Education from Stanford University, and an MBA from Harvard Business School.