If Hollywood movies are to be believed, security cameras can zoom in infinitely on any individual or object to identify them, and they help security personnel detect and track any suspicious activity or person in real time with 100% accuracy. These cameras are depicted as having such capabilities regardless of resolution, blurry footage, distance from the camera, or crowding of multiple objects and people in a confined area.
Contrary to those ‘dream’ security cameras, the actual environments where cameras are deployed are so complex and unpredictable that a fair number of genuine security threats evade detection. Likewise, many normal, non-threatening incidents are erroneously flagged as security threats, leading to ‘false alarms’. Alarms triggered by a spider crawling onto the camera, a dog crossing a designated restricted area, or leaves growing in the distance that resemble a person are just a few examples.
False alarm rates remain a major shortcoming of security cameras: they impose an extra financial burden on businesses and lead to wasted resources and inefficiencies. In California, for example, businesses are fined as much as $200 per false alarm if police are dispatched. Considering that most facilities have multiple cameras, sometimes dozens, false alarms can quickly drain financial resources. Even worse, a large number of false alarms can undermine your entire security effort by distracting personnel from actual threats on site. Filtering out as many false alarms as possible is therefore critical.
70% of security professionals report that they implement AI video analytics solutions mainly to bring false alarm rates down to an acceptable level. Let’s now look at how AI video analytics helps you filter false alarms and how you can deploy it.
Humans can easily tell the difference between an intruder climbing over a fence and a random jogger who accidentally wandered into the area, thanks to a lifetime of exposure to vast and diverse examples of objects. We also have the ability to interpret visual data within the relevant context.
AI video analytics, a technology based on machine learning, mimics the way human vision operates by learning from the massive amounts of video footage it analyzes. This technology can remove up to 95% of false alarms in a video surveillance setup by accurately detecting and classifying objects of interest and filtering out irrelevant noise. The most advanced systems allow businesses to define custom criteria for what counts as a true or false alarm, for example to tell the difference between a spy drone and a harmless bird flying over a restricted business facility. To understand how AI video analytics works, let’s look at how it can be used to detect and recognize dogs and spiders, two of the major causes of false alarms in video security. In one experiment where CCTV cameras with PIR sensors were placed on warehouses, false alarms were frequent, caused primarily by the movement of dogs (28%) and by the presence of insects (14%).
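In practice, the final decision often boils down to a simple rule over the classes a detector reports. Here is a minimal sketch of that idea, assuming an object detector already returns (label, confidence) pairs; the class lists and threshold are purely illustrative, not part of any specific product.

```python
# Minimal sketch of class-based alarm filtering. Class names, the threshold,
# and the detector output format are illustrative assumptions, not a real API.

ALARM_CLASSES = {"person", "vehicle", "drone"}      # events worth an alarm
IGNORE_CLASSES = {"dog", "cat", "bird", "spider"}   # common false-alarm sources

def should_alarm(detections, min_confidence=0.6):
    """Return True only if an alarm-worthy object is detected with enough confidence."""
    for label, confidence in detections:
        if label in IGNORE_CLASSES:
            continue                                 # known benign object: skip it
        if label in ALARM_CLASSES and confidence >= min_confidence:
            return True
    return False

# A dog triggering motion plus a low-confidence "person" detection: no alarm.
print(should_alarm([("dog", 0.91), ("person", 0.42)]))   # False
# A confident "person" detection inside a restricted area: alarm.
print(should_alarm([("person", 0.88)]))                  # True
```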
In the first step of its development, the AI algorithm is trained on a huge set of video data containing many examples of the objects of interest, in our case dogs and spiders. The AI learns the distinguishing features of those objects, such as the center of a spider’s web or a dog’s fur, and associates those features with the corresponding classes. After extensive training, the algorithm is deployed into production as part of a video analytics solution. The deployed system can then detect similar objects, such as a moving dog, classify them as non-threatening events, and so reduce the rate of false alarms.
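For the curious, here is a heavily simplified sketch of what that training step can look like, assuming labeled snapshots are organized into one folder per class; the paths, model choice, and hyperparameters are illustrative, and a production pipeline would involve far more (data augmentation, validation, evaluation).

```python
# A simplified training sketch (hypothetical paths and hyperparameters), assuming
# labeled snapshots live in data/train/<class_name>/ folders, e.g. dog, spider, person.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("data/train", transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# Start from a model pretrained on generic images and adapt its last layer
# to our own classes (dog, spider, person, ...).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):                       # a few passes over the training data
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

torch.save(model.state_dict(), "alarm_classifier.pt")   # the artifact that gets deployed
```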
Not all AI algorithms are equal, and some will perform better than others in your specific case.
To learn more about AI and what makes one algorithm superior to another, subscribe to get notified when we release new blog posts covering this topic.
Cloud: Cloud-based solutions are expected to see the highest growth rate among deployment options, with a 24.5% compound annual growth rate through 2025. Videos or frames collected from cameras are transmitted directly to cloud servers, where they can be analyzed and false alarms identified.
Edge AI: This refers to storing and processing video data on the camera that captures it, or on the closest device, such as a nearby computer (an edge appliance), which often has limited computing power. An AI model built into the camera or the nearest edge appliance analyzes video streams and classifies alarm-triggering objects without the data ever being transmitted to a central analytics server.
On-premise video analytics: Unlike edge AI, where video data is analyzed at the point of collection, on-premise video analytics separates the data collection and analysis steps. As with cloud deployment, the camera only collects data and then transfers it to a video analytics server, where AI algorithms detect and classify security threats and normal events. On-premise servers sit in private data centers or at company headquarters and are managed by in-house personnel. This is the most expensive option, but it offers the highest level of control over the data. The sketch below contrasts the camera-side logic of the cloud and edge approaches.
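To make the contrast concrete, here is a minimal sketch of the camera-side logic for the cloud and edge paths. The endpoint URL, model file, and class list are hypothetical placeholders rather than any particular vendor’s API, and error handling is omitted.

```python
# Camera-side logic for the two paths. The URL, model file, and class list are
# hypothetical placeholders; error handling is omitted for brevity.
import requests
import torch
from torchvision import transforms
from torchvision.models import mobilenet_v3_small

ANALYTICS_URL = "https://analytics.example.com/v1/analyze"   # hypothetical cloud endpoint
CLASSES = ["person", "dog", "spider", "bird"]                # hypothetical class set

def cloud_path(jpeg_bytes: bytes) -> dict:
    """Cloud deployment: upload a small motion snapshot and let the server decide."""
    response = requests.post(
        ANALYTICS_URL,
        files={"image": ("frame.jpg", jpeg_bytes, "image/jpeg")},
        timeout=5,
    )
    return response.json()          # e.g. {"alarm": False, "labels": ["dog"]}

# Edge deployment: a small model runs on the camera or a nearby appliance,
# so no video ever leaves the site.
edge_model = mobilenet_v3_small(num_classes=len(CLASSES))
edge_model.load_state_dict(torch.load("edge_model.pt", map_location="cpu"))
edge_model.eval()
preprocess = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])

def edge_path(frame_pil) -> dict:
    """Edge deployment: classify locally with whatever compute the device has."""
    with torch.no_grad():
        logits = edge_model(preprocess(frame_pil).unsqueeze(0))
    label = CLASSES[int(logits.argmax(dim=1))]
    return {"alarm": label == "person", "labels": [label]}
```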
Cameras with built-in motion sensor technology detect the movement of objects and individuals using a variety of techniques. This technology is widely adopted but has serious shortcomings that lead to high false alarm rates. First, such cameras frequently perceive normal objects and events, such as growing trees, animal movements, and accidental momentary intrusions, as alarm-triggering events.
Second, while some CCTV cameras incorporate AI video analytics to reduce false alarm rates, the built-in algorithms cannot get enough computing power out of the camera to generate accurate predictions. In other words, both cloud and edge AI appliances will outperform most CCTV cameras in nearly all aspects of false alarm filtering in real-world applications.
Cloud-based AI video analytics, on the other hand, is not constrained by resolution, frame rate, power, or heat dissipation limitations. It provides the storage space and computing power needed to extract value from all video data with the highest accuracy, including for scenarios that should and should not trigger an alarm.
Another advantage of cloud-based video analytics over camera analytics is the ability to learn in the field. CCTV cameras do not have the storage capacity and processing power needed to collect and analyze video data for learning purposes. We will get back to this point later in this article.
When it comes to Total Cost of Ownership (TCO), cloud-based AI solutions are more efficient than Edge AI. According to this case study comparing Edge against Cloud deployments, businesses can pay as much as 55% more for Edge AI over a three-year ownership period.
In contrast to Edge AI, cloud computing saves businesses money because there are no hardware setup costs, and expenses such as electricity, security, cooling, and maintenance are all covered by a pay-per-use model. Furthermore, businesses only pay for the storage space they use and the processing power they consume.
Edge AI, on the other hand, requires a heavy upfront investment. Not only must you purchase advanced AI cameras and appliances to handle video analytics tasks, but you also have to configure them, hire personnel for maintenance, cover the initial infrastructure setup, and bear the real estate costs of housing that infrastructure.
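As a back-of-the-envelope illustration of this cost structure (every figure below is a hypothetical placeholder; the sketch does not reproduce the numbers from the case study cited above):

```python
# Back-of-the-envelope TCO comparison. Every number is a hypothetical placeholder
# chosen only to show the cost structure, not to reproduce the cited case study.
CAMERAS = 20
YEARS = 3

edge_tco = (
    CAMERAS * 900              # hypothetical AI camera / edge appliance, per unit
    + 5_000                    # hypothetical installation and configuration
    + YEARS * 12 * 600         # hypothetical monthly power, cooling, maintenance, space
)

cloud_tco = (
    CAMERAS * 150              # hypothetical basic IP camera, per unit
    + YEARS * 12 * CAMERAS * 9 # hypothetical per-camera monthly analytics subscription
)

print(f"Edge TCO over {YEARS} years:  ${edge_tco:,}")    # upfront-heavy
print(f"Cloud TCO over {YEARS} years: ${cloud_tco:,}")   # pay-per-use
```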
Bandwidth consumption and latency could be another consideration here, but for false alarm filtering it is not a decisive one. Historically, the biggest advantages of Edge AI were lower bandwidth consumption and reduced communication latency, which enabled faster decisions in critical situations. Today, however, it is possible to achieve sub-second latency and reliable alarm filtering by sending less than 200 KB of data per motion event to the cloud.
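To put that figure in perspective, here is a quick back-of-the-envelope estimate (the camera count and per-camera event rate are hypothetical examples):

```python
# Rough upload estimate for the cloud path; camera count and event rate are
# hypothetical examples, and ~200 KB per motion event is the figure above.
KB_PER_EVENT = 200
EVENTS_PER_CAMERA_PER_DAY = 300          # hypothetical busy outdoor camera
CAMERAS = 20

daily_mb = KB_PER_EVENT * EVENTS_PER_CAMERA_PER_DAY * CAMERAS / 1024
monthly_gb = daily_mb * 30 / 1024
print(f"~{daily_mb:.0f} MB uploaded per day, ~{monthly_gb:.1f} GB per month")
```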
Even after huge investments in hardware and maintenance, the most advanced video analytics AI algorithms can’t realise their full potential in an Edge AI setup. Simply put, the best AI algorithms are too big and too demanding for edge devices to run without burning up.
Here is a raw compute power comparison of the two most advanced AI processors available today. The gap is huge at this point.
There are also strict limits on the number of simultaneous streams, frame rates, and resolutions that a single Edge AI appliance can handle. This forces businesses into detrimental trade-offs that result in less accurate threat detection and an increased number of false alarms. Cloud AI, on the other hand, offers almost infinite scalability and, in many cases, substantial volume discounts.
The lack of computing power in an edge setup also limits AI learning capabilities, which are arguably one of the biggest selling points of any AI. To preserve low-latency processing and affordable data storage, edge AI appliances do not store or process video for retraining purposes, and therefore they can neither prevent a decline in AI accuracy nor improve it. On-camera AI is even more susceptible to accuracy degradation due to severe resource constraints.
Cloud-based solutions, on the other hand, have no such limitations. A sophisticated MLOps infrastructure (DevOps for AI) can be set up to continually monitor and improve the AI models’ performance. Vast databases containing petabytes of video data and powerful computing clusters delivering thousands of TFLOPS (trillions of floating-point operations per second) work day and night to improve false alarm detection accuracy based on past examples and the new conditions of a dynamic environment.
The accuracy of an AI-based video analytics model may deteriorate over time. This phenomenon, called ‘model drift’, arises when the dynamic environment changes significantly after the initial training of the AI model. When accuracy declines, the model needs to be retrained.
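As a toy illustration of how such a decline can be caught in the field, here is a minimal sketch, assuming operators confirm or reject each alarm and those outcomes are logged; the window size and precision threshold are arbitrary examples, not recommendations.

```python
# Toy drift monitor, assuming operators mark each alarm as genuine or false and
# those outcomes are fed back; window size and threshold are arbitrary examples.
from collections import deque

class DriftMonitor:
    def __init__(self, window=500, min_precision=0.90):
        self.outcomes = deque(maxlen=window)     # True = alarm was genuine
        self.min_precision = min_precision

    def record(self, alarm_was_genuine: bool):
        self.outcomes.append(alarm_was_genuine)

    def needs_retraining(self) -> bool:
        if len(self.outcomes) < self.outcomes.maxlen:
            return False                          # not enough evidence yet
        precision = sum(self.outcomes) / len(self.outcomes)
        return precision < self.min_precision     # too many false alarms: retrain
```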
Retraining means using a new set of videos to update the existing model so that it produces more accurate results in the given environment. In other words, retraining does not involve any changes to the model’s architecture, hardware, or algorithm. It only requires running the same training process on a new data set that represents the new conditions.
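Continuing the earlier training sketch, retraining might look like this: the same architecture and training code, started from the deployed weights and fed only the new footage (the dataset path and hyperparameters are again illustrative, and the class set is assumed unchanged).

```python
# Retraining sketch: same architecture, same code, new data. Paths and
# hyperparameters are illustrative; the class set is assumed unchanged.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
new_data = datasets.ImageFolder("data/retrain_winter", transform=transform)   # new conditions
loader = torch.utils.data.DataLoader(new_data, batch_size=32, shuffle=True)

model = models.resnet18()
model.fc = nn.Linear(model.fc.in_features, len(new_data.classes))
model.load_state_dict(torch.load("alarm_classifier.pt"))   # start from the deployed weights

optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)  # gentle update on new footage
criterion = nn.CrossEntropyLoss()
model.train()
for images, labels in loader:
    optimizer.zero_grad()
    criterion(model(images), labels).backward()
    optimizer.step()

torch.save(model.state_dict(), "alarm_classifier_v2.pt")   # redeploy the refreshed model
```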
Most environments where security cameras are deployed are highly complex and dynamic, particularly outdoor spaces. There are countless objects of different shapes, colors, appearances, and behaviors. Furthermore, changing weather, shifts in lighting, or the emergence of new objects, such as construction activity across the street, can all affect the accuracy of video analytics. There is also an effectively infinite number of combinations of objects and events that can trigger false alarms. Therefore, the video analytics AI should be retrained regularly to avoid model drift and to ensure that its performance improves over time.
AI retraining is a crucial concept that cannot be overlooked if you want to unlock the full potential of any AI solution.
Subscribe to get notified when we release new blog posts covering this topic.
AI video analytics is less about the AI technology itself and more about the business goal you are trying to achieve.
Is it reducing false alarm rates to the lowest level possible?
Look no further!
Curious about how you can achieve the lowest false alarm rates possible?
Contact us to talk about your false alarm problems and how we can help.