Mar 25, 2021

Is AI the new HR?

AI
Union
Regulation
Businesses
Tilly Kenyon
3 min
Trade unions call for new regulations to protect staff and ensure AI is used correctly

A new report by the Trades Union Congress (TUC) has warned that new legal protections are needed to regulate the use of artificial intelligence in UK workplaces and to prevent workers being hired and fired by algorithm. 

The report highlighted cases of AI favouring men over women and judging people by their facial expressions. The TUC cited claims by Uber Eats couriers who say they were unfairly fired because of facial identification software that has been found unreliable when used with people from ethnic minority backgrounds. 

The use of AI at work is at "a fork in the road", according to TUC general secretary Frances O’Grady. 

“AI at work could be used to improve productivity and working lives,” she warned, “but it is already being used to make life-changing decisions about people at work – like who gets hired and fired. Without fair rules, the use of AI at work could lead to widespread discrimination and unfair treatment – especially for those in insecure work and the gig economy.”

What is the union calling for?

The TUC’s report argues that the law has failed to keep up with the quick progress in AI in recent years. The union body is calling for:

  • “A legal duty on employers to consult trade unions on the use of ‘high-risk’ and intrusive forms of AI in the workplace;
  • “A legal right for all workers to have a human review of decisions made by AI systems so they can challenge decisions that are unfair and discriminatory;
  • “Amendments to the UK general data protection regulation (UK GDPR) and Equality Act to guard against discriminatory algorithms; and
  • “A legal right to ‘switch off’ from work so workers can create ‘communication-free’ time in their lives.”

A government spokesperson said: “Artificial Intelligence should be used to support workers and wider society, making working lives easier and more efficient.”

What does the future hold for AI in businesses?

AI is already prevalent in business, most visible to customers through chatbots and automated customer service. As digital transformation accelerates and new applications emerge, concerns are growing about AI displacing human jobs. 

"There's definitely a lot of organizations that, more than displacing the workforce, they think of 'How do we reskill them so that they can continue to tap into the domain, the core knowledge that exists within those employees?'" said Beena Ammanath, AI managing director at Deloitte Consulting. "There's a lot of value in the domain knowledge that these employees possess."

Conversely, McKinsey & Co. projects that by 2030, roughly 40 million US workers, many of them union members, will have been replaced by robotics and automation.

AI technology has matured over the years, and demand for it surged during Covid-19. Its future looks even more promising, provided clear rules and regulations are put in place. To ensure employees and AI-powered machines can work together effectively, companies need to plan ahead and re-evaluate their workforce development strategies.

Jun 17, 2021

Facebook Develops AI to Crack Down on Deepfakes

Facebook
MSU
AI
Deepfakes
3 min
Social media giant Facebook has developed artificial intelligence that can reportedly identify and reverse-engineer deepfake images

In light of the tidal wave of increasingly believable deepfake images and videos hitting the feeds of every major social media platform and news outlet in recent years, global organisations have started to weigh the risks behind them. While the majority of deepfakes are created purely for amusement, their growing sophistication leads to a very simple question: what happens when a deepfake is produced not for amusement, but with malicious intent on a grander scale? 

Yesterday, Facebook revealed that it was also concerned by that very question and that it had decided to take a stand against deepfakes. In partnership with Michigan State University, the social media giant presented “a research method of detecting and attributing deepfakes that relies on reverse engineering from a single AI-generated image to the generative model used to produce it.” 

The promise is that Facebook’s method will facilitate deepfake detection and tracing in real-world settings, where the deepfake image itself is often the only information detectors have to work with. 

Why Reverse Engineering? 

Right now, researchers identify deepfakes through two primary methods: detection, which distinguishes between real and deepfake images, and image attribution, which identifies which of a closed set of known generative models produced an image. But generative image techniques have advanced in scale and sophistication over the past few years, and these older strategies are no longer sufficient. 
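
To make the distinction concrete, here is a minimal sketch, in PyTorch (our choice of framework, not Facebook's code): detection is a binary real-versus-fake classifier, while attribution is a multi-class classifier over a closed set of known generators. The backbone, layer sizes, and the assumption of five known models are purely illustrative.

```python
# Minimal sketch of the two classic tasks; nothing here is Facebook's code.
import torch
import torch.nn as nn

class SimpleBackbone(nn.Module):
    """Tiny convolutional feature extractor, for illustration only."""
    def __init__(self, feat_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

backbone = SimpleBackbone()
detector = nn.Linear(64, 2)    # detection: real vs. deepfake
attributor = nn.Linear(64, 5)  # attribution: 5 known generators (assumed)

images = torch.randn(8, 3, 128, 128)        # dummy batch
features = backbone(images)
detection_logits = detector(features)        # "is this image fake?"
attribution_logits = attributor(features)    # "which known model made it?"
```

Attribution framed this way can only ever answer with one of the generators it was trained on, which is exactly the limitation described next.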

First, a detector can only recognise the generative models represented in its training data. If a deepfake was produced by an unknown, alternative model, until now even artificial intelligence could not spot it. Reverse engineering, a common practice in machine learning (ML), can uncover unique patterns left by the generating model regardless of whether that model was included in the detector's training set. This makes it possible to discover coordinated deepfake attacks, or other instances in which multiple deepfakes come from the same source. 
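
The intuition behind these model fingerprints can be shown with a toy NumPy sketch. One simplifying assumption here is ours, not the researchers': we approximate a fingerprint by averaging high-frequency residuals over many images, so the shared pattern survives while image content cancels out, and correlating two fingerprints then hints at whether two batches of fakes share a source.

```python
# Toy illustration of source fingerprints; a deliberate simplification of the idea.
import numpy as np
from scipy.ndimage import gaussian_filter

def residual(img):
    """High-frequency residual: the image minus a blurred copy of itself."""
    return img - gaussian_filter(img, sigma=2)

def estimate_fingerprint(images):
    """Averaging residuals suppresses content and keeps the shared pattern."""
    return np.mean([residual(im) for im in images], axis=0)

def same_source_score(fp_a, fp_b):
    """Normalised correlation between fingerprints (close to 1 = same source)."""
    a = (fp_a - fp_a.mean()) / (fp_a.std() + 1e-8)
    b = (fp_b - fp_b.mean()) / (fp_b.std() + 1e-8)
    return float(np.mean(a * b))

# Toy data: two "generators" each stamp a fixed hidden pattern onto noise images.
rng = np.random.default_rng(0)
pattern_a = 0.2 * rng.standard_normal((64, 64))
pattern_b = 0.2 * rng.standard_normal((64, 64))
set_a = [rng.standard_normal((64, 64)) + pattern_a for _ in range(200)]
set_b = [rng.standard_normal((64, 64)) + pattern_b for _ in range(200)]

fp_a1 = estimate_fingerprint(set_a[:100])
fp_a2 = estimate_fingerprint(set_a[100:])
print(same_source_score(fp_a1, fp_a2))                        # high: same source
print(same_source_score(fp_a1, estimate_fingerprint(set_b)))  # near zero
```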

How It Works 

Before deep learning could be used to generate images, criminals and other ill-intentioned actors had a limited number of options: forged images came from a finite set of cameras and editing tools, and researchers could usually identify the make and model responsible. But deep learning has ushered in an age of near-endless options, and as a result it has grown increasingly difficult to trace a deepfake back to its source.

To counteract this, Facebook ran deepfakes through a fingerprint estimation network (FEN) to estimate the fingerprint hidden in each image. Fingerprints are essentially faint patterns left on an image by the device or model that produced it; in cameras they arise from manufacturing imperfections, and they help identify where an image came from. By evaluating properties such as the fingerprint's magnitude, repetition frequency, and symmetry in the frequency domain, Facebook then used those constraints to predict the generative model's hyperparameters. 
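
A hedged sketch of that pipeline follows, again in PyTorch. The layer sizes, the three hand-written spectral statistics, and the four-value hyperparameter output are our own stand-ins, not the published FEN architecture, which learns such constraints rather than hard-coding them.

```python
# Sketch: image -> estimated fingerprint -> spectral constraints -> hyperparameters.
import torch
import torch.nn as nn

class FingerprintEstimator(nn.Module):
    """Image in, same-size fingerprint map out (a stand-in for the real FEN)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def spectral_features(fp):
    """Three toy constraints: overall magnitude, strongest repetition, symmetry."""
    spec = torch.fft.fft2(fp.squeeze(1)).abs()          # (B, H, W) spectrum
    magnitude = fp.flatten(1).norm(dim=1)               # fingerprint strength
    repetition = spec.flatten(1).max(dim=1).values      # dominant periodic component
    symmetry = -(spec - spec.flip(-1)).abs().flatten(1).mean(dim=1)  # L-R symmetry
    return torch.stack([magnitude, repetition, symmetry], dim=1)     # (B, 3)

hyperparam_head = nn.Linear(3, 4)  # predict 4 hyperparameters (count is arbitrary)

images = torch.randn(8, 3, 64, 64)             # dummy batch
fingerprints = FingerprintEstimator()(images)  # (8, 1, 64, 64)
predicted = hyperparam_head(spectral_features(fingerprints))
print(predicted.shape)                         # torch.Size([8, 4])
```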

What are hyperparameters? If you imagine a generative model as a car, hyperparameters are similar to the engine components: certain properties that distinguish your fancy automobile from others on the market. ‘Our reverse engineering technique is somewhat like recognising [the engine] components of a car based on how it sounds’, Facebook explained, ‘even if this is a new car we’ve never heard of before’. 

What Did They Find? 

‘On standard benchmarks, we get state-of-the-art results’, said Facebook research lead Tal Hassner. Facebook added that the FEN method can be used not only for model parsing, but also for detection and image attribution. While this research is the first of its kind, which makes the results hard to benchmark against prior work, the future looks promising. 

Facebook’s AI will introduce model parsing for real-world applications, increasing our understanding of deepfake detection. As cybersecurity attacks proliferate, and generative AI falls into the hands of those who would do us harm, this method could help the ‘good guys’ stay one step ahead. As Hassner explained: ‘This is a cat-and-mouse game, and it continues to be a cat-and-mouse game’.
