World faces flood of fake people as AI dodges bot spotters

Creating realistic fake profiles used to take time and effort, but advances in AI mean authoritarian regimes and criminals can now spawn fake people at scale

Distinguishing between real and fake social media profiles has become increasingly difficult due to rapid advances in AI, a study conducted by researchers from Copenhagen Business School has revealed.

Until recently, creating convincing fake social media profiles at scale was challenging, as images could be traced back to their source and the text often lacked human-like characteristics. However, the rapid development of AI technologies, such as StyleGAN and GPT-3, has made it harder to tell the difference between genuine and fake accounts.

The researchers created a mock Twitter feed focused on the war in Ukraine, which included both real and artificially generated profiles. These profiles expressed opinions supporting both sides of the conflict. Fake profiles featured computer-generated synthetic profile pictures created using StyleGAN, while their posts were generated by GPT-3, the same language model that powers ChatGPT.

The study found participants were unable to differentiate between the real and fake Twitter accounts. In fact, they perceived AI-generated profiles as less likely to be fake than the genuine ones.

“Interestingly, the most divisive accounts on questions of accuracy and likelihood belonged to the genuine humans,” says Sippo Rossi, a PhD Fellow from the Centre for Business Data Analytics at the Department of Digitalization at Copenhagen Business School. “One of the real profiles was mislabelled as fake by 41.5% of the participants who saw it. Meanwhile, one of the best-performing fake profiles was only labelled as a bot by 10%.

“Our findings suggest that the technology for creating generated fake profiles has advanced to such a point that it is difficult to distinguish them from real profiles.”

The research was presented at the Hawaii International Conference on System Sciences (HICSS).

Fake people manipulating politics with misinformation

“Previously it was a lot of work to create realistic fake profiles. Five years ago the average user did not have the technology to create fake profiles at this scale and easiness. Today it is very accessible and available to the many, not just the few,” says co-author Raghava Rao Mukkamala, the Director of the Centre for Business Data Analytics at Department of Digitalization at Copenhagen Business School.

The proliferation of deep learning-generated social media profiles has significant implications for society and democracy as a whole, say the researchers, including political manipulation, misinformation, cyberbullying and cybercrime. 

“Authoritarian governments are flooding social media with seemingly supportive people to manipulate information so it’s essential to consider the potential consequences of these technologies carefully and work towards mitigating these negative impacts,” says Mukkamala.

The researchers employed a streamlined approach in which participants viewed a single tweet along with the profile details of the account that shared it. The subsequent phase of research will aim to determine whether bots can be accurately identified within a news feed discussion, where various fake and real profiles comment on a specific news article within the same thread.

“We need new ways and new methods to deal with this as putting the genie back in the lamp is now virtually impossible,” says Rossi. “If humans are unable to detect fake profiles and posts and to report them then it will have to be the role of automated detection, like removing accounts and ID verification and the development of other safeguards by the companies operating these social networking sites. Right now my advice would be to only trust people on social media that you know.”

