Big Tech Companies Agree to Tackle AI Election Fraud

Despite the continued popularity of chatbot AI tools like Gemini (formerly Bard) and ChatGPT, technology companies are shifting their focus towards preventing harmful AI-generated content.
With major elections taking place worldwide in 2024, tech giants including Amazon, Google and Microsoft have agreed to tackle deceptive AI and misinformation.

Major technology companies came together at the Munich Security Conference on Friday (16th February 2024), committing to fight AI-generated content designed to deceive voters.

The voluntary accord warns that misleading AI content, including deepfake images, videos and audio, could “jeopardise the integrity of electoral processes.” It cites the explosive development of AI as creating both opportunities and challenges for democracy.

Signatories to the accord include IBM, Amazon, Anthropic, OpenAI and Adobe, as well as social media platforms such as Meta, TikTok and X (formerly Twitter), all of which face the challenge of keeping harmful content off their sites.

Industry collaboration needed to curb the spread of fake AI content

According to Digit News, more than four billion people across over 40 countries are set to vote in elections this year, including in the UK, US and India.

Technology companies continually face new challenges and calls for greater safety and regulation in the development of generative AI (Gen AI) tools. Social media organisations in particular are coming under scrutiny to ensure that harmful content that could undermine elections is removed from their sites.

Twenty technology companies signed the accord last week, stating that they would collaborate on tools to both prevent and respond to the spread of deceptive AI election content on their platforms.

These efforts could include adding watermarks to images to make clear that they are AI-generated, something Meta recently pledged to do across its platforms as part of its push for responsible AI.
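
As a loose illustration of the metadata side of such labelling (not how Meta or any signatory actually implements it), the sketch below uses Python's Pillow library to attach a provenance tag to a PNG file. The key names and file paths are hypothetical; production schemes such as C2PA content credentials are cryptographically signed, whereas a plain text chunk like this can be trivially stripped.

```python
# A minimal sketch of metadata-based AI labelling using plain PNG text
# chunks. Illustration only: real provenance standards (e.g. C2PA
# content credentials) are cryptographically signed and tamper-evident.
from PIL import Image, PngImagePlugin

def tag_as_ai_generated(src_path: str, dst_path: str, generator: str) -> None:
    """Re-save an image with text chunks marking it as AI-generated."""
    image = Image.open(src_path)
    info = PngImagePlugin.PngInfo()
    info.add_text("ai_generated", "true")     # hypothetical key names,
    info.add_text("ai_generator", generator)  # not an industry standard
    image.save(dst_path, "PNG", pnginfo=info)

def is_tagged_ai_generated(path: str) -> bool:
    """Check for the tag; its absence proves nothing about provenance."""
    with Image.open(path) as image:
        return getattr(image, "text", {}).get("ai_generated") == "true"

if __name__ == "__main__":
    # "generated.png" stands in for any locally generated image.
    tag_as_ai_generated("generated.png", "labelled.png", "example-model-v1")
    print(is_tagged_ai_generated("labelled.png"))  # True
```

Because plain metadata is so easy to remove, approaches like this are typically combined with visible watermarks and signed provenance records rather than relied on alone.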

Greater transparency in confronting malicious AI is paramount, and the businesses that signed the accord have agreed to review their Gen AI models to better understand the risks they could pose to elections.

In the wake of other AI safety talks, including the UK AI Safety Summit at the end of 2023, more world and business leaders are engaging in serious conversations about safe AI. With AI systems becoming so rapidly and readily available, AI ethics must keep pace in order to protect people and businesses.

“With so many major elections taking place this year, it’s vital we do what we can to prevent people being deceived by AI-generated content,” says Nick Clegg, President of Global Affairs at Meta, as reported by The Financial Times. “This work is bigger than any one company and will require a huge effort across industry, government and civil society.”

Fears that AI could influence election voting

Gen AI is already being used to influence politics and even to discourage people from voting. Reuters reports that in January 2024, a robocall using fake audio of US President Joe Biden circulated among New Hampshire voters, urging them to stay home during the state’s presidential primary election.

Despite the continued popularity of chatbot AI tools like Gemini (formerly Bard) and ChatGPT, technology companies are beginning to shift focus towards preventing harmful AI-generated content. 

This is partly because the threat landscape continues to evolve significantly amid global conflicts such as the Russo-Ukrainian War. If not addressed at scale, the continued rapid advancement of AI technology could further threaten global safety.

With this in mind, proposed regulations such as the EU AI Act are designed to confront malicious uses of AI such as deepfakes, in order to better protect essential government services and business operations.

Ahead of this, technology companies have already partnered to facilitate open-source AI and broaden access to AI education in order to make development safer. In December 2023, Meta and IBM formed the AI Alliance alongside 50 other founding companies to accelerate responsible innovation.

Google, Anthropic, Microsoft and OpenAI also continue to champion safe AI via their Frontier Model Forum, which advances research into AI safety, identifies best practices for frontier models and leverages AI to address societal challenges.

