Wall Street English CTO Talks How to Bridge Cultures in AI

Wall Street English CTO Roberto Hortal explains how to navigate the cultural biases of AI and build more holistic systems

As AI becomes an ever larger part of our daily lives, its influence is extending far beyond technological boundaries, permeating social, cultural, and ethical spheres.

While the potential for AI to revolutionise industries is celebrated, its role in navigating cultural contexts raises significant questions.

How does AI handle the intricacies of cultural understanding? Can it ever grasp the depth of human culture, or does it merely simulate comprehension? Will it give undue preference to the English language, and the idiosyncrasies that come with it, because English dominates the data it is trained on?

To delve into these concerns, we spoke with Roberto Hortal, Chief Technology Officer at Wall Street English, about the cultural risks AI poses, how it can be improved, and the ethical considerations that must guide its development.


Navigating cultural risks in AI

AI systems are not infallible, particularly when it comes to cultural nuances.

"AI can sometimes misinterpret cultural nuances, potentially using language or references that are inappropriate in certain cultures. It might unintentionally bring up sensitive topics or reinforce stereotypes," says Roberto. 

This misinterpretation stems from AI's reliance on training datasets that may lack sufficient diversity. As Roberto explains, misunderstandings or even conflicts can arise in environments such as international workplaces or educational settings.

The solution lies in addressing these blind spots through proactive measures. By enhancing the inclusivity of training datasets and refining AI's interpretive algorithms, developers can significantly mitigate such risks.
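
A minimal sketch of one such proactive measure, not drawn from the article, is an audit of the language balance of a training corpus before it is used. It assumes the open-source langdetect package is installed; the sample texts and the 80 per cent threshold are purely illustrative.

```python
from collections import Counter

from langdetect import detect  # pip install langdetect


def language_distribution(texts):
    """Count the detected language of each training example."""
    counts = Counter()
    for text in texts:
        try:
            counts[detect(text)] += 1
        except Exception:  # langdetect raises on very short or ambiguous text
            counts["unknown"] += 1
    return counts


if __name__ == "__main__":
    sample = [
        "The meeting is scheduled for Monday morning.",
        "La reunión está programada para el lunes por la mañana.",
        "会議は月曜日の午前中に予定されています。",
    ]
    dist = language_distribution(sample)
    total = sum(dist.values())
    for lang, count in dist.most_common():
        print(f"{lang}: {count / total:.0%}")
    # Illustrative threshold: flag a corpus dominated by a single language.
    if dist.get("en", 0) / total > 0.8:
        print("Warning: corpus is heavily skewed towards English.")
```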

However, AI cannot navigate these nuances alone. Human oversight, particularly in culturally sensitive contexts, remains indispensable. Without it, the risks of perpetuating bias or alienating users are too great to ignore.

Enhancing AI’s cultural awareness

Improving AI’s cultural competence is a multi-faceted endeavour.

"We can improve AI’s cultural awareness in several ways," Roberto says. "One approach is to train AI on diverse, carefully curated datasets that represent a wide range of cultural perspectives. Another strategy is to use detailed prompts when interacting with AI."

Training data diversity ensures that AI systems are exposed to a broader spectrum of cultural norms, but it’s only part of the puzzle. 
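
Roberto's other suggestion, detailed prompting, is easy to sketch. The example below is purely illustrative rather than taken from the interview: the prompt states the learner's first language, locale and tone constraints explicitly instead of leaving them implicit. The build_prompt helper and call_model placeholder are hypothetical names, not any particular vendor's API.

```python
def build_prompt(task: str, learner_language: str, learner_locale: str) -> list[dict]:
    """Assemble a chat-style prompt that makes cultural context explicit."""
    system = (
        "You are an English tutor. "
        f"The learner's first language is {learner_language} and they live in {learner_locale}. "
        "Avoid idioms that assume a UK or US context, avoid religious or political "
        "references, and briefly explain any culturally specific terms you do use."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": task},
    ]


def call_model(messages: list[dict]) -> str:
    """Hypothetical placeholder; swap in whichever chat-completion client you use."""
    raise NotImplementedError


if __name__ == "__main__":
    messages = build_prompt(
        task="Write a short dialogue practising polite small talk at work.",
        learner_language="Japanese",
        learner_locale="Japan",
    )
    print(messages[0]["content"])
```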

"Involving experts in the development and testing process is crucial. These experts can identify potential issues and guide the AI to use appropriate language and topics," says Roberto.

An intriguing suggestion was to design multiple AI agents with specialised roles, such as handling language, cultural context, and appropriateness independently. This layered approach could enhance AI’s ability to handle complex cultural interactions more effectively.
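
To make the idea concrete, here is a minimal sketch of such a layered pipeline. It is purely illustrative, not Wall Street English's architecture: separate agents review a draft reply for language, cultural context and appropriateness, each able to revise it before it reaches the learner. The three agent functions are hypothetical stubs standing in for model calls.

```python
from dataclasses import dataclass


@dataclass
class Review:
    approved: bool
    revised_text: str
    notes: str = ""


def language_agent(text: str) -> Review:
    """Checks grammar and register; stubbed here to approve the text unchanged."""
    return Review(approved=True, revised_text=text)


def culture_agent(text: str, locale: str) -> Review:
    """Checks for idioms or references that may not travel to the learner's locale."""
    if "break a leg" in text:
        return Review(approved=False,
                      revised_text=text.replace("break a leg", "do your best"),
                      notes=f"Idiom may confuse learners in {locale}.")
    return Review(approved=True, revised_text=text)


def appropriateness_agent(text: str) -> Review:
    """Final gate for sensitive topics; stubbed here to approve everything."""
    return Review(approved=True, revised_text=text)


def respond(draft: str, locale: str) -> str:
    """Pass the draft through each specialised agent in turn, applying revisions."""
    for agent in (language_agent,
                  lambda t: culture_agent(t, locale),
                  appropriateness_agent):
        draft = agent(draft).revised_text
    return draft


if __name__ == "__main__":
    print(respond("I hope you break a leg in your presentation tomorrow!", locale="ko-KR"))
```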

Despite these advances, AI's limitations remain clear. It lacks the lived experiences and emotional depth that shape human cultural understanding. As Roberto observes, "AI can be an incredibly useful tool for enhancing cross-cultural understanding... The key is to view AI as a supplement to human cultural intelligence, not a replacement for it."

Ethical considerations in sensitive AI development

Developing culturally sensitive AI is not merely a technical challenge—it is an ethical responsibility. Roberto is emphatic about the need for robust safeguards: "One key concern is privacy and data protection. As AI systems collect and process a lot of data, we need robust measures to protect individuals’ personal information."

Another pressing concern is bias. AI systems must be designed to avoid amplifying stereotypes. "We need to be vigilant about the data we use to train AI and the outputs it produces to ensure they don’t reinforce harmful stereotypes or oversimplify cultural issues," Roberto warns. Transparency is equally vital, enabling users to understand AI’s capabilities and limitations.

Finally, preventing misuse is paramount. "How do we prevent bad actors from manipulating AI to produce insensitive or harmful content?" Roberto asks. The solution involves not only technical safeguards but also clear ethical guidelines and, potentially, regulatory frameworks to ensure accountability.

AI’s journey toward cultural sensitivity is fraught with challenges but brimming with potential. By combining advanced technical solutions with human expertise and ethical vigilance, we can create systems that respect and reflect the diverse cultural landscapes they aim to serve.

As Roberto puts it, “The key is not perfection but progress—leveraging AI as a tool to build bridges, not barriers.”


