India Holds Tech Businesses to Account Over Deepfakes
The Indian government has spoken out against AI-generated deepfakes, warning that technology companies ‘will be held accountable’ if this content appears on their platforms.
A deepfake is an image or video that has been digitally manipulated, using generative technology, to replace one person’s likeness with another’s. The technology has recently attracted widespread attention amid public concern over the dangers of deepfakes being misused at scale.
With its digital economy continuing to expand on the back of rising demand for AI, India is one of the first countries to take firm action against deepfakes, prompting further discussion about the need for responsible AI.
As deepfake technology grows, so does public anxiety
The news comes after numerous celebrities in India were impersonated using deepfake technology, sparking mistrust and confusion among their fanbases. In response, Rajeev Chandrasekhar, India's Junior Minister for Information Technology (IT), said that “deepfakes and misinformation powered by AI” threatened the safety of internet users.
According to The Financial Times, roughly 870 million people across India are connected to the internet, with 600 million social media users out of a population of 1.4 billion.
Under the country’s IT rules, social media platforms must ensure that no misinformation is posted by any user. If these platforms do not comply, they can be taken to court under Indian law.
Deepfakes are also proliferating worldwide. As the technology becomes more advanced, deepfakes are harder to identify and prevent, and are increasingly used to damage reputations, fabricate evidence and undermine trust.
There are also fears that deepfakes will be used to impersonate world leaders and public figures and to interfere with political elections around the world. At the end of 2023, the United Kingdom’s (UK) National Cyber Security Centre warned that increasingly sophisticated deepfakes could be used to sway voter opinion.
As a result, pressure is mounting on technology companies to combat the misuse of deepfakes. Numerous social media platforms already have rules in place to curb the spread of fake information: Meta, X (formerly Twitter) and TikTok now require media confirmed to be fake to be labelled as such or removed.
Global calls for more specific AI regulations
In addition to India, other countries around the world are starting to put AI regulations in place to prevent threat actors from exploiting the technology.
For instance, the United States (US) has issued an executive order requiring AI developers to share safety test results with the government, with the aim of leading on AI governance and fostering a broader culture of ethical AI.
Likewise, the UK’s British Standards Institution (BSI) launched a pioneering AI management system standard in January 2024 to enable the safe and responsible use of AI.
Time will tell whether more countries follow suit, but it is clear that the AI sector is pivoting towards greater transparency to protect people’s data. Industry leaders have already forecast that AI will continue to play a major role in how threats are created, making it easier for cybercriminals to breach systems and steal data.
Jake Moore, Global Cybersecurity Advisor at ESET, says: “Other countries will no doubt be monitoring this rollout and taking on board any pitfalls and how it will work. Deepfake technology is fast becoming an inevitable beast of its own and needs to be contained as best it can.”
He adds: “Technology companies are going to have to work together to improve the ability to catch AI generated material but until then, it’s down to human intelligence.”