3 Aug 2021

Editor Introduction

Artificial intelligence (AI) is simultaneously an emerging technology, a common term in popular culture, and a buzzword in the security industry. But these aspects of the term can lead to misunderstanding in the marketplace. AI technology is continuing to emerge, but what is the reality today? How do depictions of AI in popular culture impact how it is understood in the real world of security? As a buzzword, at what point does marketing hype garble our understanding of reality? We asked this week’s Expert Panel Roundtable: What are the misconceptions about AI in security? 


Sean Lawlor Genetec, Inc.

AI denotes a fully functional artificial brain that can reason, evolve, self-learn, and make human-like decisions. We are many years away from that. While machines are able to mimic behavior on specific tasks, they are not capable of thinking or acting like humans. In the physical security industry, subsets of artificial intelligence such as machine learning and deep learning can help organizations sift through their data and tackle real-world problems such as facial recognition or people counting. Intelligent automation can further help organizations by using existing data and automating analysis based on that data, ultimately helping to improve operations and workflow, as well as reducing redundant responses. But neither technology is “intelligent” in the sense of being able to think or act like a human. Using the term AI loosely only serves to misrepresent what machine learning can do and has the potential to generate misguided and unrealistic expectations.

From Amazon shopping suggestions to mortgage approvals, credit card fraud detection, and unlocking your phone with facial recognition, AI is everywhere. With so much attention on AI-based machines and deep learning, it is easy for people to assume AI can do anything. In reality, today’s AI-based algorithms must be carefully trained by humans for every task. A common misconception involves AI-based technology being able to make decisions or take actions on its own. It simply cannot. Today’s machine learning and deep learning algorithms can only compare data and present results. While they can do so faster and more efficiently than a human can, the decision on how to act ultimately rests with humans. I don’t see this changing anytime soon due to the complex and sensitive nature of safety and security decisions that must be made in the context and knowledge of the broad environment where an event takes place.
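The division of labor described above — the algorithm compares data and presents results, while a person decides how to act — can be sketched in code. This is a minimal, hypothetical illustration (the names, threshold, and data are invented for the example), not a description of any vendor’s system:

```python
# Sketch of the human-in-the-loop pattern: the model only scores and
# presents events; acting on them is left to a human operator.
from dataclasses import dataclass

@dataclass
class Detection:
    camera_id: str
    label: str         # what the model thinks it saw
    confidence: float  # a similarity score, not a decision

def rank_detections(detections, threshold=0.8):
    """The algorithm's whole job: compare, score, and present."""
    flagged = [d for d in detections if d.confidence >= threshold]
    return sorted(flagged, key=lambda d: d.confidence, reverse=True)

def operator_review(detection):
    """Placeholder for the human step: dispatch, dismiss, or escalate."""
    return f"Operator reviews '{detection.label}' on {detection.camera_id}"

events = [
    Detection("cam-01", "person in restricted area", 0.93),
    Detection("cam-02", "person in restricted area", 0.41),
]
for d in rank_detections(events):
    print(operator_review(d))
```

Note that nothing in the sketch takes an action: the low-confidence event is simply filtered out, and the high-confidence one is queued for a person, which mirrors the point that today’s systems present results rather than make decisions.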

The two greatest misconceptions about Artificial Intelligence (AI) are that it will solve everything and that it’s a simple solution. AI is a great tool, but it also requires a number of components to solve use cases – one of the most challenging being the training data used to train the algorithms. Solutions based on AI technology require a large amount of relevant training data. It’s a complex solution, and there are a number of steps required to integrate its output into the decision-making process. There is a great opportunity to utilize AI in security applications, and the industry can work together on market education in order to avoid these misconceptions.
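The training-data requirement mentioned above is easy to underestimate. The toy example below — a deliberately simplified nearest-centroid classifier with invented labels and data points — shows that even the simplest supervised model can do nothing until it has been given labeled examples to learn from:

```python
# Hypothetical illustration: a supervised model is only as good as the
# labeled training data it is given. Here, 2-D points stand in for
# whatever features a real security application would extract.

def train_centroids(labeled_points):
    """Compute one centroid per class from labeled training data."""
    sums, counts = {}, {}
    for (x, y), label in labeled_points:
        sx, sy = sums.get(label, (0.0, 0.0))
        sums[label] = (sx + x, sy + y)
        counts[label] = counts.get(label, 0) + 1
    return {lbl: (sx / counts[lbl], sy / counts[lbl])
            for lbl, (sx, sy) in sums.items()}

def classify(point, centroids):
    """Assign the label of the nearest centroid (squared distance)."""
    px, py = point
    return min(centroids,
               key=lambda lbl: (px - centroids[lbl][0]) ** 2
                             + (py - centroids[lbl][1]) ** 2)

# The labeled examples are the hard-won part of any such system.
training = [((0, 0), "empty"), ((1, 1), "empty"),
            ((8, 9), "occupied"), ((9, 8), "occupied")]
centroids = train_centroids(training)
print(classify((0.5, 0.5), centroids))  # → empty
```

A real deployment would need vastly more data, relevant to the actual cameras and environment, plus the integration steps the contributor describes — which is exactly why “AI is simple” is a misconception.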

Sean Foley Interface Security Systems LLC

A common misconception among some customers is that AI equals Big Brother, and that privacy and personal freedoms must be sacrificed. This is understandable when so much of popular culture and TV focuses on the “rise of the machines,” as in Terminator. Of course, we know that true “AI” at this level doesn’t exist yet and that we’re really only talking about machine learning and its derivative technologies. The security industry must better educate customers and prospects while clarifying what AI-based analytics are doing with the data they collect. We must reinforce the fact that 99.9% of the data collected is completely anonymous. Machine learning is doing nothing that a team of humans couldn’t do; it’s just doing it exponentially more efficiently. AI-based technology is poised to transform our industry as we go beyond security use cases to operations and retail analysis for sales and marketing organizations.

On the end-customer side, you’ll find that the misconceptions about Artificial Intelligence (AI) in security are similar to those held by the general public. The idea that AI makes complex decisions on its own is not the case today. Rather, AI is meant to address recurring patterns, filtering data to augment the human eye. Instead of what people perceive as a self-aware, intelligent entity, AI in security is a set of technologies employed to make solutions, like video analytics, more accurate and adaptable. In turn, this opens up new opportunities and avenues for use cases. Perhaps the biggest misconception is that AI is meant to replace humans. AI today cannot fully replace humans, but with the increasing ubiquity of sensors and the growth of unstructured data, AI is needed to analyze this data and make it digestible for humans. In turn, this enables human operators to make decisions more swiftly, improve outcomes, and even save lives.


Editor Summary

The concept of artificial intelligence (AI) is common both in the world of science fiction and in the real world of security. But when does the more fanciful aura of the term overshadow or subvert understanding of real-world capabilities? AI sounds futuristic, and the security industry is becoming more futuristic all the time, but there are limits. Separating the fictional aspects of AI from reality in the security industry can promote a better understanding of a tool that will grow in value over the coming years.