How Fyma utilises AI technology for spatial analytics

Fyma CEO and co-founder Karen Burns outlines her company’s mission, its use of AI technology for analytics and the importance of ethical AI

Can you tell me about your company?

Fyma is a computer vision SaaS company. Our platform enables anyone to turn their existing camera infrastructure into powerful data-gathering sensors, without the need to install any new internet of things (IoT) devices or other ‘dumb’ sensors or hardware.

Instead of relying on inaccurate or incomplete proxy sources such as Wi-Fi MAC address tracking, Bluetooth beacons or questionnaires, Fyma allows our clients to ‘see’ what is going on in their environments: who is using their buildings and open spaces, how and when, along with density, modal split, hazards and more.
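As a rough illustration of what a metric like modal split involves, the sketch below computes each transport mode's share from a list of object-detection labels. The class names and data are hypothetical; Fyma's actual detection taxonomy and pipeline are not described in this interview.

```python
from collections import Counter

# Hypothetical per-detection class labels from a computer vision model
# (illustrative only; not Fyma's actual taxonomy).
detections = [
    "pedestrian", "car", "bicycle", "car", "pedestrian",
    "e-scooter", "car", "pedestrian", "bicycle", "pedestrian",
]

def modal_split(labels):
    """Return each mode's share of total observed movements."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {mode: round(n / total, 2) for mode, n in counts.items()}

print(modal_split(detections))
```

Note that this works purely on aggregate class counts; no individual is identified or tracked, which is consistent with the privacy-first approach described here.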

We started working on Fyma around two years ago, when an urban development project needed to monitor traffic counts using an existing street camera rather than sensors. Fyma was able to provide more accurate data, a wider variety of it, and at a quarter of the cost per junction. That’s when we realised the potential of unused cameras everywhere.

Of course, this brings privacy into play, as none of us wants to live in a surveillance society. Fyma has been built in a fully GDPR-compliant way from the ground up, and our AI has never seen - and never will see - a human face. We ensure privacy and compliance on three layers:

  1. Operational
  2. Technical
  3. Legal

Our staff are GDPR trained, we go through a GDPR self-assessment on a regular basis, and we have a legal team alongside making sure our tech stack and operations remain fully GDPR and privacy regulation-compliant at all times. 

We also bring in urban planning experts to help our clients make even more sense of their data, monetise it better and advise them on interventions that could be undertaken based on the data we gather.

Fyma allows our clients to ask very concrete questions of their camera video feeds - and get answers: what are my peak times? How many e-scooters is the new mobility programme bringing to my estate? What impact is COVID having on my footfall and parking occupancy?
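A question like "what are my peak times?" reduces to bucketing timestamped events by hour and finding the busiest bucket. The sketch below shows that aggregation on made-up footfall timestamps; the data and function names are illustrative, not Fyma's API.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical timestamped footfall events (illustrative data).
events = [
    "2021-06-01T08:15", "2021-06-01T08:40", "2021-06-01T12:05",
    "2021-06-01T12:10", "2021-06-01T12:55", "2021-06-01T17:30",
]

def peak_hour(timestamps):
    """Bucket events by hour of day and return the busiest hour with its count."""
    buckets = defaultdict(int)
    for ts in timestamps:
        buckets[datetime.fromisoformat(ts).hour] += 1
    return max(buckets.items(), key=lambda kv: kv[1])

print(peak_hour(events))  # → (12, 3): the midday hour is busiest
```

The same grouping pattern extends to day-of-week comparisons or before/after analyses such as the COVID footfall question above.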

What is your role and responsibilities at the company?

I am the CEO and co-founder of Fyma. My role is to lead on business development and investor relations. My co-founder Taavi Tammiste is the CTO and he has been working with applied AI for the past eight years, so we have a very complementary team and love working together. 

Most of the pitching activities fall on me, as well as PR opportunities, public speaking engagements and client relations management. My educational background is actually in Film and TV as well as law - my dream was to become a Hollywood producer when I left high school.

However, I started working in IT at the very beginning of my career - at British Telecom in 2006. I did take 1.5 years out of IT, though, to try the ‘Hollywood thing’ in the Middle East, where I brought large US franchises into Abu Dhabi and, thanks to that, worked on both the Star Wars and Fast and Furious franchises.

I realised very quickly that this was not for me after all though, and moved back to IT and then building Fyma - and have not looked back since!

Can you explain how you utilise AI in CCTV surveillance?

Fyma is a statistical tool and not a surveillance tool. We don’t allow tracking of an individual from camera to camera and we don’t do any kind of biometric identification. Computer vision, when used for surveillance, is mostly deployed in the defence sector and not in our vertical, which is more concerned with urban space usage, modal share and traffic density.

How can you ensure using AI in such a way remains ethical?

Ethics in AI is a very hot topic - as it should be. The biases we have as humans are usually also trained into the AI, and as it often learns from historical data those historical biases are then reflected in the algorithms as well.

We have been working with a privacy-specialised legal team since day one, our team is GDPR trained and we have a self-assessment every 6-8 months to make sure we remain compliant with EU privacy regulations.

Our team is gender-diverse and has zero tolerance for facial or biometric recognition in public spaces - we regularly turn down clients, and even investors, for this reason. Ethics is lived every day through how we work and with whom, and we are held accountable by our colleagues, investors and clients.

What can we expect from Fyma in the future?

Fyma is seeking to expand to the US market in 2022 and to add more opportunities to work with drone footage. In the longer term, Fyma’s goal is to be the go-to spatial analytics solution for all built environment organisations and companies, so they can build better for the future. With Fyma, they can finally gather the detailed and accurate baseline data they need to do that.

