Blackbird.AI uses AI and machine learning (ML) to help organisations detect and respond to disinformation and manipulation that causes reputational and financial harm.
Founded in 2016 by a team of experts from artificial intelligence, behavioral psychology and national security, Blackbird.AI’s mission is to defend authenticity and fight narrative manipulation.
The company has just closed a US$10 million Series A round as it prepares to launch the next version of its disinformation intelligence platform. The funding will allow Blackbird.AI to scale up its engineering and sales teams, fast-track new product development, and accelerate platform adoption by a global client base.
What does the platform do?
The real-time misinformation analysis platform is built for journalists, media organisations, governments, and concerned citizens. It makes sense of discourse across the internet by surfacing deception, manipulation, and propaganda throughout the digital media ecosystem, helping companies and government agencies make critical decisions and deploy automated countermeasures.
Information disorder is an existential threat that touches problems ranging from healthcare and climate change to market manipulation and brand reputation. Blackbird.AI’s threat and perception intelligence platform empowers businesses to proactively defend against disinformation, improve content safety compliance across digital platforms, and shed light on the forces behind key events.
The spread of information online
According to one of the company’s recent analyses, the Pfizer-BioNTech coronavirus vaccine became a target of conspiracy theories and disinformation campaigns as soon as it was announced, reaching millions of people on sites like Twitter and Reddit.
COVID-19 conspiracy narratives, such as the false belief that the vaccine was delayed for political reasons, flourished on social networks in the early winter, according to Blackbird.AI, the New York-based tech security firm. The firm built an algorithm that analyses posts in real time, hunting for signals of what CEO Wasim Khaled calls "synthetic amplification": indications of activity by botnets and anti-vaccination influencers.
The AI technology found that some of the top hashtags used by bots and influencers to spread conspiracies were #StopTheSteal, #VaccineGate, #MAGA, #BigPharma and #SleepyJoe. The tags #StopTheSteal and #MAGA2020LandslideVictory were particularly effective at linking the Pfizer vaccine conspiracy to broader conspiracy theories about election fraud.
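One simple signal of synthetic amplification is account concentration: when a hashtag's mentions come overwhelmingly from a handful of accounts rather than a diverse crowd, coordinated boosting is a plausible explanation. The sketch below illustrates that idea on toy data; the field names, threshold, and scoring are illustrative assumptions, not Blackbird.AI's actual model.

```python
from collections import Counter

def amplification_score(posts, tag):
    """Share of a hashtag's mentions produced by its top 3 accounts.

    A score near 1.0 means a few accounts dominate the conversation,
    which is one (crude) indicator of coordinated amplification.
    """
    authors = [p["author"] for p in posts if tag in p["tags"]]
    if not authors:
        return 0.0
    top = sum(count for _, count in Counter(authors).most_common(3))
    return top / len(authors)

def flag_suspicious(posts, threshold=0.6):
    """Return hashtags whose mentions are dominated by a few accounts."""
    tags = {t for p in posts for t in p["tags"]}
    return sorted(t for t in tags if amplification_score(posts, t) >= threshold)

posts = (
    # one bot-like account spamming a single tag
    [{"author": "bot1", "tags": ["#VaccineGate"]} for _ in range(8)]
    # organic chatter from many distinct users
    + [{"author": f"user{i}", "tags": ["#covid"]} for i in range(10)]
)

print(flag_suspicious(posts))  # → ['#VaccineGate']
```

A production system would combine many such signals (posting cadence, account age, content similarity) rather than rely on concentration alone, but the thresholding pattern is the same.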
A survey conducted in spring 2020 showed that 60% of 16 to 24-year-olds in the UK had recently used social media for information about the coronavirus, and 59% had come across fake news on the subject. Meanwhile, in France, almost 30% of 15 to 18-year-olds were using social media as their primary source of coronavirus information, placing news consumers in this age bracket at greater risk of exposure to misinformation.
Knowingly or unknowingly, many consumers see fake news and pass it on to someone else, putting even the savviest news audiences at risk.