AI and Cloud Take the Video Surveillance Industry by Storm
Organisations are increasing their use of video surveillance as its benefits now extend beyond security into business efficiency. However, many businesses face the challenge that their existing video system was built on outdated, legacy technology. Such systems either require constant monitoring by employees or a time-consuming manual review of footage after events have occurred to gain insights into what could be improved. This limits what the technology can achieve and the improvements it can deliver for organisations in meeting their security, safety and business needs.
Traditional on-premises infrastructure has also created challenges for businesses in the video surveillance industry that are trying to develop this sector and its capabilities. However, the rapid advances in cloud-based video surveillance applications, powered by Artificial Intelligence (AI) technologies, are disrupting established approaches and transforming systems from simple motion-based alerting devices to unified proactive and preventive solutions.
When AI and video collide
The use of smart technology with video surveillance is creating a wealth of new opportunities for businesses to gather insights more efficiently. AI technologies can automatically overcome many of the common performance limitations of existing solutions, including bad weather, changes in light levels and obscured images, and can even distinguish the motion of a person on the premises from that of an animal. AI also removes the need for constant monitoring by employees during off-hours: it can precisely detect human activity in the live video stream and send alerts only when intervention is required.
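The alert-filtering behaviour described above can be sketched in a few lines. Everything here is illustrative: the `Detection` structure, the labels and the confidence threshold are assumptions for the sketch, not any vendor's API, and a real system would take these detections from a trained model rather than hand-built objects.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g. "person", "animal", "vehicle" (assumed labels)
    confidence: float  # detector score in [0, 1]

def should_alert(detections, min_confidence=0.8):
    """Raise an alert only when a person is detected with high
    confidence; animal motion or low-confidence blobs are ignored."""
    return any(d.label == "person" and d.confidence >= min_confidence
               for d in detections)

# A frame containing an animal and a faint, uncertain shape is ignored,
# while a clearly detected person triggers an alert.
quiet_frame = [Detection("animal", 0.95), Detection("person", 0.40)]
intruder_frame = [Detection("person", 0.91)]
```

The point of the design is that the threshold, not a human operator, decides which frames are worth a second look during off-hours.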
These capabilities are expanding every day, and they not only improve the speed of data analysis and the accuracy of the surveillance, but they also allow the technology to be applied to a much wider range of operational, efficiency and safety use cases than traditional CCTV. AI-enabled applications are introducing a huge range of options that can benefit businesses, such as monitoring employee arrivals, the presence of intruders, vehicle detection, moisture detection on floors, and other smart features such as detecting loitering or when people are wearing masks. Because of these abilities, AI-powered surveillance has been – and will continue to be – applied to help organisations and their teams comply with the latest COVID-19 guidelines.
When applied to video surveillance systems, AI technology can enable users to significantly broaden their use of intelligent analytics. In doing so, businesses can monitor and enhance existing operational processes or adopt innovative new capabilities that provide new insights. For instance, retailers can use AI-powered video systems to measure the impact of marketing campaigns on store traffic, identify buying trends, and understand customer preferences. This is the in-store equivalent of the analytics that track customers as they navigate a retail website, and it offers retailers a powerful way to understand the effectiveness of their promotional campaigns and wider in-store strategy, enabling teams to make changes in real time if necessary.
Retail security is another area in which this technology can make an impact. By integrating AI-powered surveillance with POS technologies, any suspicious transactions can be identified immediately and monitored for review and potential follow-up action.
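The POS integration described above amounts to correlating flagged transactions with the matching window of video. The sketch below is a minimal illustration under assumed details: the transaction schema, the "refund or void above a threshold" heuristic and the review-window length are all invented for the example, not taken from any real POS system.

```python
def flag_suspicious(transactions, window_s=30):
    """Return review items for POS events a simple heuristic marks as
    suspicious, each paired with the video window an operator should
    pull up. Schema and heuristic are illustrative assumptions."""
    flagged = []
    for t in transactions:
        # Illustrative rule: large refunds or voided sales warrant review.
        if t["type"] in ("refund", "void") and t["amount"] >= 100:
            flagged.append({
                "id": t["id"],
                "clip_start": t["timestamp"] - window_s,  # seconds before
                "clip_end": t["timestamp"] + window_s,    # seconds after
            })
    return flagged

events = [
    {"id": 1, "type": "sale",   "amount": 20,  "timestamp": 1000},
    {"id": 2, "type": "refund", "amount": 250, "timestamp": 2000},
]
review_queue = flag_suspicious(events)  # only the large refund is queued
```

In a production system the heuristic would likely be replaced or supplemented by learned anomaly scores, but the pairing of a transaction with its surrounding footage is the core of the integration.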
There are a huge number of markets adopting cloud-based video surveillance technology, with retail being just one example – banking, healthcare, government, hospitality, and education are all markets powering significant growth in this area. As a result, global revenue from AI-powered surveillance analytic technologies is set to increase from $1.1 billion in 2018 to $4.5 billion in 2025, according to Omdia.
Adding cloud into the mix
For all of these benefits of AI in video surveillance to be realised, cloud technology plays a huge part. Services hosted in the cloud are more convenient for most businesses – they’re typically cheaper, more reliable and more secure than the on-premises infrastructure often required to deliver and support legacy technologies. Scalability is another key benefit: cloud capacity can be added at short notice, and by offering users an open platform, the cloud allows organisations to implement additional third-party technologies to meet their specific needs.
This level of flexibility is crucial for expanding the wider application of AI in the video surveillance market. Outsourcing the platform and infrastructure to a specialist third party opens up a greater range of services, enabling users to select whichever AI-powered functions most precisely meet their requirements. In addition, the impact of the COVID-19 pandemic has further increased the reliance organisations are placing on video surveillance technologies, driven by cloud-enabled remote access and monitoring capabilities that help organisations to increase efficiency and often reduce face-to-face contact.
AI technology is driving innovation across many markets, and video surveillance is currently undergoing a full transformation into a smarter, more impactful industry. Older and less efficient video systems are increasingly being updated for the latest innovations that are advancing the value for businesses. Because of this, video surveillance will not only continue to drive advances in safety and security but will also deliver a greater business impact as AI and cloud together take this technology to new heights.
Facebook Develops AI to Crack Down on Deepfakes
In light of the tidal wave of increasingly believable deepfake images and videos hitting the feeds of every major social media and news outlet in recent years, global organisations have started to consider the risk factor behind them. While the majority of deepfakes are created purely for amusement, their increasing sophistication is leading to a very simple question: What happens when a deepfake is produced not for amusement, but for malicious intent on a grander scale?
Yesterday, Facebook revealed that it was also concerned by that very question and that it had decided to take a stand against deepfakes. In partnership with Michigan State University, the social media giant presented “a research method of detecting and attributing deepfakes that relies on reverse engineering from a single AI-generated image to the generative model used to produce it.”
The promise is that Facebook’s method will facilitate deepfake detection and tracing in real-world settings, where the deepfake image itself is often the only information detectors have to work with.
Why Reverse Engineering?
Right now, researchers identify deepfakes through two primary methods: detection, which distinguishes between real and deepfake images, and image attribution, which identifies which known generative model produced the image. But generative photo techniques have advanced in scale and sophistication over the past few years, and the old strategies are no longer sufficient.
First, an AI can only recognise the generative models represented in its training data. If a deepfake was produced by an unknown, alternative model, until now even artificial intelligence has been unable to spot it. Reverse engineering, a common practice in machine learning (ML), can uncover the unique patterns left by the generating model, regardless of whether that model was included in the AI’s training set. This also helps uncover coordinated deepfake attacks and other instances in which multiple deepfakes come from the same source.
How It Works
Before deep learning could be used to generate images, criminals and other ill-intentioned actors had a limited number of options. Cameras offered only so many tools, and researchers could often identify the make and model of the device that produced an image. But deep learning has ushered in an age of endless options, and as a result, it has grown increasingly difficult to identify deepfakes.
To counteract this, Facebook ran deepfakes through a fingerprint estimation network (FEN) to estimate the fingerprints they contain. Fingerprints are essentially patterns left on an image by manufacturing imperfections, and they help identify where the image came from. By evaluating properties such as the fingerprint’s magnitude, repetition frequency, and symmetrical frequency, Facebook then applied those constraints to predict the model’s hyperparameters.
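As a rough illustration of the idea – emphatically not Facebook’s actual FEN, which is a trained network – one could treat the high-frequency residual of an image as its "fingerprint" and summarise it by magnitude and dominant repetition frequency. The box-blur residual, the feature names and the summary statistics below are all assumptions made for the sketch.

```python
import numpy as np

def estimate_fingerprint(image):
    """Toy stand-in for fingerprint estimation: take the high-frequency
    residual (image minus a 3x3 local mean) as the 'fingerprint'."""
    h, w = image.shape
    padded = np.pad(image, 1, mode="edge")
    # 3x3 box blur built from shifted copies of the padded image.
    smooth = sum(padded[i:i + h, j:j + w]
                 for i in range(3) for j in range(3)) / 9.0
    return image - smooth

def fingerprint_features(fp):
    """Summarise the residual by the kinds of properties the article
    mentions: overall magnitude and the dominant spatial frequency."""
    magnitude = float(np.abs(fp).mean())
    spectrum = np.abs(np.fft.fft2(fp))
    spectrum[0, 0] = 0.0  # ignore the DC component
    peak = np.unravel_index(int(spectrum.argmax()), spectrum.shape)
    return {"magnitude": magnitude, "peak_frequency": peak}
```

A smooth image leaves almost no residual, while one carrying a periodic generator-style artefact yields a strong fingerprint; in the real method, such features constrain a prediction of the generating model’s hyperparameters.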
What are hyperparameters? If you imagine a generative model as a car, hyperparameters are similar to the engine components: certain properties that distinguish your fancy automobile from others on the market. ‘Our reverse engineering technique is somewhat like recognising [the engine] components of a car based on how it sounds’, Facebook explained, ‘even if this is a new car we’ve never heard of before’.
What Did They Find?
‘On standard benchmarks, we get state-of-the-art results’, said Facebook research lead Tal Hassner. Facebook added that the fingerprint estimation network (FEN) method can be used not only for model parsing, but also for detection and image attribution. Because this research is the first of its kind, the results are difficult to benchmark against prior work, but the future looks promising.
Facebook’s AI will introduce model parsing for real-world applications, increasing our understanding of deepfake detection. As cybersecurity attacks proliferate, and generative AI falls into the hands of those who would do us harm, this method could help the ‘good guys’ stay one step ahead. As Hassner explained: ‘This is a cat-and-mouse game, and it continues to be a cat-and-mouse game’.