The State of AI: An Urgent Need for Transparency

As the Cybersecurity Landscape Continues to Shift, We Hear From Experts About How AI Can Fuel More Sophisticated Criminal Activity From Threat Actors

With increasingly sophisticated AI, threats will become more personalised, both for individuals and small businesses.

Some industry leaders have forecast that AI will continue to play a huge role in how threats are created. As cybercriminals continue to develop their attack methods, AI could make their extortion attempts more successful and harder to track.

With this in mind, cybersecurity will only continue to become more ingrained into everyday life, as businesses find new ways to protect valuable data. AI Magazine speaks to experts in the field about how the landscape could look this year.

Issues of trust: AI-generated content causes a stir

The capabilities of AI will continue to expand in 2024, with generative AI (Gen AI) expected to surge further in popularity.

Increasing numbers of businesses are looking to invest in Gen AI tools in an attempt to expand their offerings for customers. Large technology companies like Google, Microsoft and OpenAI are continuing to push forward with these developments in order to stay competitive.

The belief is that Gen AI models will keep boosting enterprise productivity by improving workplace efficiencies and assisting staff with complex or even risk-based tasks.

However, despite such positive forecasts, those within the cybersecurity sector expect these advancements will also fuel a surge in cybercrime. With AI having such a range of functions, its rapid advancement will continue to challenge both organisations and individuals on matters of trust, including AI-generated content like deepfakes that aim to spread misinformation.

Gen, the parent company of Norton, has suggested that social engineering attacks will become more pronounced in this regard. Social engineering refers to threat actors manipulating people’s emotions and vulnerabilities to gain access to what they want.

Luis Corrons, Security Evangelist at Gen, tells AI Magazine: “The increasing sophistication of AI has led to a rise of deepfake videos and other advanced cyber threats. AI is now being used by cybercriminals to craft attacks that are both increasingly realistic and difficult to detect.

“To counter these advanced threats, it’s vital to maintain a healthy scepticism of potential scams; adopt effective password management and utilise comprehensive privacy, security and identity protection tools. In this fast-evolving threat landscape, it is critical for both businesses and individuals to stay informed and implement strong cybersecurity measures to protect their digital worlds.”

Debates over AI regulation continue

The importance of AI risk management has been discussed on the global stage, with world leaders meeting at the UK AI Safety Summit towards the end of 2023. The meeting brought about important conversations over whether AI should be regulated and some of the steps that businesses could start to take to ensure transparency with their AI models.

The United States (US) has started to implement such steps to ensure that businesses are sharing AI safety information with the government. Measures include creating new safety and security standards for AI, including regulations that require technology companies to share safety test results with the federal government.

Ultimately, the goal is to create programmes to evaluate potentially harmful AI practices, as well as resources on how to use AI tools responsibly.

This has led to further announcements this week (January 2024), with the US teaming up with leading technology companies including Microsoft, Amazon and IBM to launch an AI pilot programme that provides researchers and educators with access to high-powered AI technologies.

Now more than ever before, businesses are having to consider the ethical implications of harnessing AI.

Sridhar Iyengar, MD for Zoho Europe, says: “Businesses need to improve transparency around how they use data as a deliberate step to behave ethically, to establish trust and strengthen consumer relationships, not just as a legislative need. It is not just what they prescribe they will do with this data, and that they comply with regulation, but it is how they communicate this to their customers.

“Companies that use AI must prioritise ethical considerations and responsible data practices. Adopting the right AI procedures when building models, ensuring impartial algorithms, and complying with privacy regulations balances the benefits of AI without overlooking fundamental aspects like privacy and transparency.”


AI Magazine is a BizClik brand
