Using AI and ML to simplify live broadcast operations

By Andrew Broadstone, Director of Product Management at Zixi
Andrew Broadstone, Director of Product Management at Zixi, discusses how AI and ML can be utilised to simplify live broadcast operations

In the broadcast media industry, artificial intelligence and machine learning are regarded as pillars of the next generation of technological advancement, and for good reason: they can scrutinise mountains of data, identify anomalies and trends, and alert users to potential problems before they occur, all without the need for human intervention. 

However, to truly understand why AI and ML offer such pertinent value for broadcasters, it’s necessary to recognise the issues that every broadcaster would prefer to avoid, and then look at use cases and components within broadcast media where these technologies can have the greatest impact.

To share an example of the types of challenges broadcasters would prefer to circumvent, imagine that a live sporting event stops streaming, or that frames start to drop for no apparent reason. Viewers are likely to notice quality issues and start to complain. Technicians are baffled, unable to resolve the problem in real time, and the unfortunate result for customers is that they may have just missed a key live sporting moment. Revenue takes a knock, and executives demand answers. 

Situations like these are every broadcaster’s nightmare. During these tense moments, there is no time to lose – viewers can quickly and easily switch to other services, which instantly impacts ad revenue and reputation. What went wrong? Who or what is to blame, and how can we get this back up and running immediately while mitigating the risk in the future? 

Detecting anomalies

One of the biggest challenges facing broadcast operations engineers is knowing when things are not working before the viewers’ experience is affected. In a perfect world, operators and engineers want to predict outages and identify potential issues ahead of time. Machine learning models can be trained to recognise normal operating ranges across hundreds or thousands of measurements – far beyond what a human operator can track – and alert the operator in real time when a stream anomaly occurs. Done manually, this process requires monitoring logs on dozens of machines and tracking the performance of network links between multiple locations and partners; ML instead identifies patterns in these large data sets and lets operators focus only on genuine workflow anomalies, dramatically reducing workload. 

Anomaly detection works by building a predictive model of what the next measurements related to a stream will be – for example, the round-trip time of packets on the network or the raw bitrate of the stream – then determining how far the actual measurement deviates from the expected value. As a tool for sorting streams into normal and abnormal, this can be essential, especially when managing hundreds or thousands of concurrent channels. One benefit of identifying anomalous behaviour is that it enables an operator to switch to a backup that uses a different network path before a failure occurs.
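
To make the mechanics concrete, here is a minimal Python sketch of the idea: an exponentially weighted moving average stands in for the predictive model, and a reading is flagged when it strays several standard deviations from the prediction. All names, values and thresholds are illustrative assumptions, not Zixi's implementation.

```python
import math


class StreamAnomalyDetector:
    """Predicts the next measurement with an exponentially weighted
    moving average (EWMA) and flags values that deviate too far from
    the prediction, measured in running standard deviations."""

    def __init__(self, alpha: float = 0.1, threshold: float = 4.0,
                 warmup: int = 5):
        self.alpha = alpha          # smoothing factor for the EWMA
        self.threshold = threshold  # alert beyond this many sigmas
        self.warmup = warmup        # samples to observe before alerting
        self.mean = None            # predicted next value
        self.var = 0.0              # running estimate of variance
        self.n = 0                  # samples seen so far

    def update(self, value: float) -> bool:
        """Feed one measurement; return True if it looks anomalous."""
        self.n += 1
        if self.mean is None:       # first sample seeds the model
            self.mean = value
            return False
        error = value - self.mean   # deviation from the prediction
        std = math.sqrt(self.var)
        is_anomaly = (self.n > self.warmup
                      and abs(error) > self.threshold * max(std, 1e-9))
        # Fold the observation into the model (standard EWMA recursions).
        self.mean += self.alpha * error
        self.var = (1 - self.alpha) * (self.var + self.alpha * error * error)
        return is_anomaly


detector = StreamAnomalyDetector()
for rtt_ms in [42.1, 41.8, 43.0, 42.5, 41.9, 97.4]:  # last sample spikes
    if detector.update(rtt_ms):
        print(f"RTT anomaly: {rtt_ms} ms - consider switching to backup")
```

In a production system a model of this kind would run per metric, per stream, with the alert feeding the failover decision described above.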

Anomaly detection is also vital for cutting needless false alarms and the time they waste. Features such as customisable alerting preferences and aggregated health scores, generated from threat-gauging data points, help operators sift through and assimilate data trends so they can focus where they really need to. Predictive, proactive alerting can also be orders of magnitude less expensive than reactive troubleshooting, and it lets broadcasters identify the root causes of instability and failure faster and more easily.
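
As an illustration of how such scoring might work, the sketch below combines a few threat-gauging data points into a single weighted health score and checks it against per-team alert preferences. The metric names, weights and thresholds are all assumptions made for the example.

```python
# Each metric is pre-normalised to a risk in [0, 1] (0 = healthy).
HEALTH_WEIGHTS = {
    "packet_loss_pct": 0.4,
    "jitter_ms": 0.2,
    "rtt_deviation": 0.25,
    "bitrate_deviation": 0.15,
}


def health_score(metrics: dict[str, float]) -> float:
    """Collapse per-metric risk into a single 0-100 health score."""
    risk = sum(HEALTH_WEIGHTS[name] * min(max(value, 0.0), 1.0)
               for name, value in metrics.items())
    return round(100 * (1 - risk), 1)


# Customisable alerting: each team sets its own floor, so a noisy
# stream does not page everyone.
ALERT_PREFS = {"noc": 85.0, "on_call": 60.0}

score = health_score({"packet_loss_pct": 0.3, "jitter_ms": 0.1,
                      "rtt_deviation": 0.5, "bitrate_deviation": 0.0})
for team, floor in ALERT_PREFS.items():
    if score < floor:
        print(f"alert {team}: stream health {score} below {floor}")
```

Running this prints an alert for the NOC (score 73.5) but not for the on-call engineer, which is exactly the kind of filtering that keeps false alarms from drowning out real issues.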

Not all issues can be avoided, but ML-assisted Root Cause Analysis can reduce future risk

Predictive analytics, alerts and correlations are useful for automated failure prediction and alerting, but when all else fails, ML models can also help operators concentrate on areas of concern following an outage, making retrospective analysis via Root Cause Analysis much easier and faster. 

With workflows that consist of dozens of machines and network segments, it is inherently difficult to know where to look for problems. However, as we have seen, ML models provide trend identification and, through data aggregation, help visualise issues. Even relatively straightforward visualisations of how a stream deviates from the norm – historical charts, customisable reports, or an answer to a question as simple as how a particular stream compares to a similar recent stream – are incredibly valuable for any modern broadcaster.
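
A sketch of the simplest such comparison, assuming we have a bitrate trace from a recent "known good" run of the same channel to use as a baseline (the data here is invented for illustration):

```python
from statistics import mean, pstdev


def deviation_report(current: list[float], baseline: list[float],
                     sigmas: float = 3.0) -> list[int]:
    """Return the sample indices where the current stream strays more
    than `sigmas` standard deviations from the baseline run."""
    mu = mean(baseline)
    sd = pstdev(baseline) or 1e-9  # guard against a flat baseline
    return [i for i, v in enumerate(current) if abs(v - mu) > sigmas * sd]


baseline_kbps = [8000, 8050, 7990, 8010, 8020]   # last week's clean run
current_kbps = [8005, 7995, 5200, 8015, 4900]    # today's stream
print(deviation_report(current_kbps, baseline_kbps))  # -> [2, 4]
```

The returned indices point an operator straight at the intervals worth investigating, rather than leaving them to eyeball the whole trace.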

Reducing complexity

The interconnected world in which we live sees video workflows interacting, intertwining and integrating in new ways every day, simultaneously increasing information sharing, agility and connectivity while producing increasingly complex issues to diagnose. In a broadcast context, as more on-premises and cloud resources are connected with equipment from different vendors, sources and partner organisations distributing to new device types, an enormous, ever-expanding volume of log and telemetry data is produced. 

As a result, broadcast engineers have more information than they can effectively process. They routinely silence frequent alerts and alarms because, amid the overload, it can be impossible to tell what is important and what is not. This inevitably leaves teams overwhelmed and lacking insight.

Advanced analytics and ML can help by making sense of these overwhelming quantities of data, letting human operators cut through insignificant clutter and understand where issues are likely to occur before failures are noticed. Advanced analytics give media companies an unprecedented opportunity to leverage sophisticated event correlation, data aggregation, deep learning and virtually limitless applications to improve broadcast workflows. The benefit is the ability to do more with less, to innovate faster than the competition and to prepare for the future – both by growing the organisation's knowledge base and by opening up cost and time savings – while focusing on the crucial details in the data that matter most to users and the organisation alike.

The data collection challenge and value of data aggregation 

A major challenge for any analytics system is data collection. When a video workflow comprises machines in disparate data centres, running different operating systems and tools, it can be difficult to assimilate and standardise the reliable, relevant data an AI/ML system needs. There are natural data aggregation points in most broadcast architectures – for example, a cloud operations and remote management platform, or a common protocol stack – but this is certainly not a given.

Although standards exist for how video data should be formatted and transmitted, few describe how machine data, network measurements and other telemetry should be collected, transmitted and stored. It is therefore essential to work with a technology partner whose system supports multiple protocols and sends data to a common aggregation point, where it is parsed, normalised and stored in a database that a robust AI/ML solution can draw on. 
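
The sketch below illustrates that aggregation step under assumed formats: two hypothetical telemetry sources, one JSON-speaking and one emitting CSV lines, are parsed into a single common record schema ready for storage. The field names and parsers are inventions for the example, not any vendor's actual interface.

```python
import json
from datetime import datetime, timezone


def normalise(source: str, payload: str) -> dict:
    """Parse one raw telemetry message into the common schema:
    {ts, host, metric, value}."""
    if source == "edge_json":            # e.g. a JSON-speaking probe
        raw = json.loads(payload)
        return {"ts": raw["time"], "host": raw["host"],
                "metric": raw["name"], "value": float(raw["val"])}
    if source == "legacy_csv":           # e.g. "host,metric,value" lines
        host, metric, value = payload.strip().split(",")
        return {"ts": datetime.now(timezone.utc).isoformat(),
                "host": host, "metric": metric, "value": float(value)}
    raise ValueError(f"unknown telemetry source: {source}")


records = [
    normalise("edge_json",
              '{"time": "2024-05-01T12:00:00Z", "host": "enc-1",'
              ' "name": "bitrate_kbps", "val": "7980"}'),
    normalise("legacy_csv", "rly-3,rtt_ms,41.7"),
]
# `records` now share one schema and can be written to a single
# database table that the AI/ML layer reads from.
```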

Once you have a method in place for collecting real-time measurements from your video workflow, you can feed this data into an ML engine to detect patterns. From there you can train the system not only to understand normal operating behaviour for anomaly detection, but also to recognise specific patterns leading up to video degradation events. With these patterns determined, you can also surface common metadata across degradation events on different systems, revealing, for example, that they all trace back to a particular shared network segment, as the sketch below illustrates.
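
Here is a minimal sketch of that correlation step, using invented event data: degradation events are tagged with workflow metadata, and we count which attribute values recur across most of them.

```python
from collections import Counter

# Hypothetical degradation events with their workflow metadata.
events = [
    {"stream": "sports-1", "isp": "ISP-A", "region": "us-east", "encoder": "enc-1"},
    {"stream": "news-2",   "isp": "ISP-A", "region": "us-east", "encoder": "enc-4"},
    {"stream": "sports-3", "isp": "ISP-A", "region": "us-east", "encoder": "enc-2"},
]


def shared_factors(events: list[dict], min_share: float = 0.8) -> dict:
    """Return metadata values present in at least `min_share` of events."""
    factors = {}
    for key in events[0]:
        value, count = Counter(e[key] for e in events).most_common(1)[0]
        if count / len(events) >= min_share:
            factors[key] = value
    return factors


print(shared_factors(events))
# -> {'isp': 'ISP-A', 'region': 'us-east'}: the degradations cluster on
#    one ISP and region, pointing at a shared network segment.
```

Real systems would weigh many more attributes and use statistical tests rather than a simple share threshold, but the principle is the same: shared metadata across failures is a strong root-cause signal.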

For example, if a particular ISP in a particular region repeatedly experiences latency or blackout issues, the system learns to pick up on the warning signs and notifies the engineer before an outage – preventing issues proactively while improving root cause identification across your entire ecosystem. Operators can also see that errors occur more often with certain settings on a common encoder or network hardware, and unexpected changes in the structure of the video stream or its encoding quality can be important signals of impending problems. By surfacing such correlations, ML gives operators key insights into the causes of problems and how to solve them.

AI and ML underpin the future of successful broadcast workflows

Leveraging AI and ML to improve operational efficiency and quality provides a powerful advantage while preparing broadcasters for the future of live content delivery over IP. Selecting a system monitoring and orchestration partner that integrates AI and ML capabilities can help your organisation make sense of the vast amounts of data moving across the media supply chain, and can be a powerful differentiator. 

Just as hypothesis-testing experiments are essential to the traditional learning process, the same goes for ML models. Building, training, deploying and updating ML models is inherently complex, so providers, in cooperation with their users, must continue to iterate, compare results and adjust accordingly to understand the why behind the data, improving Root Cause Analysis and the customer experience.  

In the world of broadcast operations, ever-evolving AI and ML technologies present an exceptional opportunity for sophisticated event correlation, data aggregation, deep learning, and virtually unlimited applications across broadcast media operations. As models become more informed and interconnected, problem-solving and resolution technology based on deep learning and AI will become increasingly essential tools for simplifying and future-proofing broadcast workflows.
