BSI: How Facial Recognition Technology Can Remain Ethical

BSI releases new guidance on the safe and ethical use of facial recognition tools, as ethical concerns over the technology's development continue to grow

As the facial recognition industry is predicted to grow to US$13.4bn by 2028, anxieties over the safety of its development are also expected to rise.

In response, the British Standards Institution (BSI) has launched new guidance to ensure that the technology continues to act as a force for good within society. The guidance, Facial recognition technology: Ethical use and deployment in video surveillance-based systems, is designed to help businesses navigate ethical challenges associated with the use of facial recognition technology and build greater trust in it.

BSI’s research has found that 40% of people around the world expect to be using biometric identification (face, voice or fingerprint) in airports by 2030.

As use of the technology continues to grow, so do concerns over facial recognition, such as the violation of rights, AI bias, data theft and the risk of relying on inaccurate digital systems.

Building up public trust in AI

AI-powered facial recognition tools are becoming increasingly common, particularly for security purposes to keep the public safe. The technology works by mapping an individual’s physical features in an image, which can then be compared against other images stored in a database, either to verify a claimed identity or to identify an individual at a specific location.
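
In code terms, both modes usually boil down to comparing numerical embeddings of faces. The sketch below is a minimal illustration of that idea in Python, not BSI's or any vendor's actual pipeline; the function names and the 0.6 threshold are hypothetical, and the embeddings are assumed to come from some upstream face-encoding model:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Similarity between two face embeddings; 1.0 means identical direction.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe: np.ndarray, enrolled: np.ndarray, threshold: float = 0.6) -> bool:
    # Verification (1:1): does the probe face match one claimed identity?
    return cosine_similarity(probe, enrolled) >= threshold

def identify(probe: np.ndarray, gallery: dict[str, np.ndarray],
             threshold: float = 0.6) -> str | None:
    # Identification (1:N): best match in a database, or None if nothing
    # clears the threshold.
    best_id, best_score = None, threshold
    for identity, embedding in gallery.items():
        score = cosine_similarity(probe, embedding)
        if score >= best_score:
            best_id, best_score = identity, score
    return best_id
```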

The growing range of use cases has sparked wider conversations about the ethics of the technology, particularly around error rates linked to racial or gender differences. According to BSI, a 2022 audit of police use of the technology found that deployments regularly failed to meet minimum ethical and legal standards.

The organisation’s facial recognition guidance follows its earlier Trust in AI poll, which found that more than three-quarters (77%) of people believe trust in AI is vital if it is to be used in surveillance.

To address these concerns, the new code of practice is designed for businesses using or monitoring video surveillance systems and biometric facial technologies. It offers guidance on appropriate guardrails for positive use cases of facial recognition technology, including the requirement that it be used in conjunction with human intervention, so that identification is confirmed as accurate before any action is taken.
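
A minimal sketch of what such a human-in-the-loop guardrail might look like, assuming a hypothetical match-handling policy (the names and thresholds below are illustrative, not taken from the BSI code of practice itself):

```python
from dataclasses import dataclass

@dataclass
class CandidateMatch:
    identity: str   # identity proposed by the recognition system
    score: float    # similarity score from the matcher, in [0, 1]

def route_match(match: CandidateMatch, weak_threshold: float = 0.6) -> str:
    # Illustrative policy: the system never acts on a face match by itself.
    if match.score < weak_threshold:
        return "discard"             # too weak to be a credible match
    return "queue_for_human_review"  # an operator confirms the identity
                                     # before any action is taken
```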

The guide also covers governance and accountability; human agency and oversight; privacy and data governance; technical robustness and safety; transparency and explainability; and diversity, non-discrimination and fairness.

In verification scenarios where the technology can operate autonomously, the standard puts guardrails in place for the technology’s learning by ensuring training data includes sets drawn from diverse demographic pools and captured across a variety of lighting levels and camera angles. The aim is to minimise inaccuracies and mitigate the risk of bias.
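
As a rough illustration of what diverse training sets can mean in practice, one common approach is stratified sampling, so that each demographic, lighting and camera-angle combination contributes equally. The field names below are hypothetical labels for this sketch, not terms defined by the standard:

```python
import random
from collections import defaultdict

def balanced_sample(records: list[dict], per_stratum: int, seed: int = 0) -> list[dict]:
    # Group training images by (demographic, lighting, angle) labels, then
    # draw the same number from each group so no group dominates training.
    rng = random.Random(seed)
    strata = defaultdict(list)
    for rec in records:
        strata[(rec["demographic"], rec["lighting"], rec["angle"])].append(rec)
    sample = []
    for group in strata.values():
        sample.extend(rng.sample(group, min(per_stratum, len(group))))
    return sample
```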

In 2022 alone, researchers from USC found biases present in up to 38.6% of 'facts' used by AI.

Working to advance a future of safe AI

“AI-enabled facial recognition tools have the potential to be a driving force for good and benefit society through their ability to detect and monitor potential security threats,” says Scott Steedman, Director-General, Standards, at BSI. “This code of practice is designed to help organisations navigate the ethical challenges associated with the use of FRT systems and build trust in its use as a result. 

“It aims to embed best practice and give guidance on the appropriate guardrails organisations can put in place to safeguard civil rights and eliminate system bias and discrimination.”

The guidance arrives amid the approval of the EU AI Act, legislation designed to put a framework in place for addressing AI-associated risks in a rapidly changing digital landscape.

At a time when concerns over deepfakes and AI-driven misinformation are rife, it will be paramount for businesses to address them if they are to innovate digitally in a safe and ethically conscious way.
