Cameras that provide higher-resolution images require more computing power, bandwidth, and storage, which complicates their use with analytics
The better the sensors, the better the analytics

Garbage in, garbage out. The familiar cliché applies to video analytics as much as to any other field of computing: you simply must have a high-quality image to achieve a high-functioning analytics system. The good news is that video cameras, the sensors in video analytics systems, are providing better images than ever, offering higher quality – and more data – for analytics to work with.

For analytics that require higher resolution to achieve superior results, megapixel cameras provide video that allows for better face recognition, clearer license plate numbers, reliable estimation of customer age and gender, and other uses. These capabilities help prevent false positives and increase reliability in forensic searches, says Brian Lane, director of marketing, 3VR.

When Ipsotek considers a video analytics-based solution, 50 percent of that solution depends on selecting the appropriate sensor (camera). With emerging thermal and megapixel technologies and advances in camera processing, this half of the solution is more readily achieved, says Dr. Boghos Boghossian, CTO, Ipsotek. In some areas, such as face recognition, illumination of the face in challenging environmental conditions is key to the success of the solution. Ipsotek has therefore been evaluating cutting-edge camera technology from its technology partners to help consultants and solution partners design successful solutions for the ever-growing video analytics market.

The better the sensors, the better the analytics, agrees Dr. Rustom Kanga, CEO of iOmniscient, and the falling cost of thermal cameras makes them a good choice. However, cameras that provide higher-resolution images require more computing power, bandwidth, and storage, which complicates their use with analytics. In general, the video is downscaled to the lowest resolution at which the analytics system can still detect the activity it is looking for.

iOmniscient has a new technology called IQ Hawk that “pulls out of the image what is important,” says Kanga. It accesses higher resolution only for areas of interest in the image – such as using a higher-resolution view of a face or license plate seen from a distance to enable facial or license plate recognition. The rest of the image is used at lower resolution. If there are three people in a video frame, IQ Hawk presents all three faces in high resolution to enable identification. “With IQ Hawk, we can dynamically look at an image at high and low resolution, based on what’s important,” says Kanga.
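IQ Hawk itself is proprietary, but the general region-of-interest idea Kanga describes can be sketched in a few lines: run inexpensive detection on a downscaled copy of each frame, then pull only the detected regions from the full-resolution frame for recognition. The sketch below uses OpenCV's stock Haar-cascade face detector and a quarter-resolution analysis pass purely as illustrative assumptions; it is not iOmniscient's implementation.

```python
# Sketch: detect on a low-resolution copy, recognize on full-resolution crops.
# Assumptions (not from the article): quarter-scale analysis, Haar face detector.
import cv2

SCALE = 0.25  # analyze at quarter resolution to save compute and bandwidth

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def high_res_faces(frame):
    """Return full-resolution face crops found via a downscaled detection pass."""
    small = cv2.resize(frame, None, fx=SCALE, fy=SCALE)
    gray = cv2.cvtColor(small, cv2.COLOR_BGR2GRAY)
    crops = []
    for (x, y, w, h) in detector.detectMultiScale(gray, 1.1, 5):
        # Map the detection box back to full-resolution coordinates.
        X, Y, W, H = (int(v / SCALE) for v in (x, y, w, h))
        crops.append(frame[Y:Y + H, X:X + W])  # high-res crop for recognition
    return crops
```

The rest of the frame never needs to be processed or stored at full resolution, which is the data-saving trade-off described above.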

In terms of using higher-resolution cameras with analytics, Zvika Ashani, chief technology officer (CTO) of Agent Video Intelligence (Agent Vi), says it is important to consider the “lowest common denominator” in terms of usable resolution. For example, a megapixel camera might produce a clearer image in good sunlight, but at night the image will suffer and could be worse than a low-resolution image. “More pixels don’t mean more detection quality,” he says. “The more pixels you have, the more processing power you need inside the camera.” Therefore, high-resolution images may even be “downscaled” to a lower resolution for analysis to minimize the amount of data to be managed. Higher resolution can also introduce additional noise in many cases.
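A rough back-of-envelope calculation shows why pixel count drives the processing, bandwidth, and storage burden described above. The 25 fps frame rate and uncompressed 3-bytes-per-pixel figure below are illustrative assumptions, not vendor specifications; real systems compress heavily, but the linear scaling with resolution is the same.

```python
# Raw (uncompressed) data rate scales linearly with pixel count.
FPS = 25             # assumed frame rate
BYTES_PER_PIXEL = 3  # 8-bit BGR/RGB, before compression

resolutions = {"VGA (640x480)": (640, 480),
               "720p (1280x720)": (1280, 720),
               "1080p (1920x1080)": (1920, 1080),
               "5 MP (2560x1920)": (2560, 1920)}

for name, (w, h) in resolutions.items():
    mbps = w * h * BYTES_PER_PIXEL * 8 * FPS / 1e6  # raw megabits per second
    print(f"{name:>18}: {w * h / 1e6:4.1f} MP, ~{mbps:6.0f} Mbit/s uncompressed")
```

A 5 MP stream carries roughly sixteen times the raw data of a VGA stream at the same frame rate, which is why analytics pipelines downscale whenever the target activity is still detectable at lower resolution.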

Some higher-resolution cameras have video analytics built in. DVTEL’s new ioimage HD Analytic IP cameras provide HD broadcast-quality IP video coupled with built-in military-grade analytics. These high-resolution, low-bandwidth cameras, available in both HD 1080p and 720p, are optimized for outdoor conditions and available with predictable storage. The cameras have enhanced low-light and no-light capabilities, high sensitivity, and true wide dynamic range. A new analytics feature reduces the false alarm rate for people standing upright, which benefits applications that don’t need sophisticated detection of camouflaged or crawling intruders. ioimage analytics now offer improved detection distance, so fewer cameras are needed to cover the same area.

Author profile

Larry Anderson, Editor, SecurityInformed.com & SourceSecurity.com

An experienced journalist and long-time presence in the US security industry, Larry is SecurityInformed.com's eyes and ears in the fast-changing security marketplace, attending industry and corporate events, interviewing security leaders and contributing original editorial content to the site. He leads SecurityInformed's team of dedicated editorial and content professionals, guiding the "editorial roadmap" to ensure the site provides the most relevant content for security professionals.

In case you missed it

What You Need To Know About Open Source Intelligence (OSINT) For Emergency Preparedness

Have you ever stopped to consider the volume of new data created daily on social media? It’s staggering. Take Twitter, for instance. Approximately 500 million tweets are published every day, adding up to more than 200 billion posts per year. On Facebook, users upload an additional 350 million photos per day, and on YouTube, nearly 720,000 hours of new video content is added every 24 hours. While this overwhelming volume of information may be of no concern to the average social media user posting updates to keep up with family and friends, it is of particular interest to corporate security and safety professionals, who increasingly use it to monitor current events and detect potential risks around their people and locations, all in real time. Welcome to the fast-paced and often confusing world of open-source intelligence (OSINT).

What is Open Source Intelligence (OSINT)?

The U.S. Department of State defines OSINT as “intelligence that is produced from publicly available information and is collected, exploited, and disseminated promptly to an appropriate audience to address a specific intelligence requirement.” The concept of monitoring and leveraging publicly available information sources for intelligence purposes dates back to the 1930s, when the British Broadcasting Corporation (BBC) was approached by the British government and asked to develop a new service that would capture and analyze print journalism from around the world.

Monitoring and identifying potential threats

Originally named the “Digest of Foreign Broadcasts,” the service (later renamed BBC Monitoring, which still exists today) captured and analyzed nearly 1.25 million broadcast words every day to help British intelligence officials keep tabs on conversations taking place abroad and on what foreign governments were saying to their constituents. Today, OSINT broadly encompasses any publicly accessible information that can be used to monitor and identify potential threats and/or relevant events with the potential to impact safety or business operations. The potential of OSINT data is extraordinary. Not only can it enable security and safety teams to quickly identify pertinent information that may pose a material risk to their business or people, but it can also be captured by anyone with the right set of tools and training.

OSINT for cybersecurity and physical threat detection

Whether it be a significant weather event, supply chain disruptions, or a world health crisis few saw coming, the threats facing organizations continue to increase in size and scale. Luckily, OSINT has been able to accelerate how organizations detect, validate, and respond to these threats, and it has proved invaluable in reducing risk and informing decision-making, especially during emergencies. OSINT is typically shared in real time, so once a situation is reported, security teams can work on verifying critical details, such as the location or time an incident occurred, or provide the most up-to-date information about rapidly developing events on the ground. They can then continue to monitor online chatter about the crisis, increasing their situational awareness and speeding up their incident response times.

OSINT applications

Severe weather offers a good example of OSINT in action. Say an organization is located in the Great Plains. It could use OSINT from sources like the National Weather Service (NWS) or the National Oceanic and Atmospheric Administration (NOAA) to initiate emergency communications to employees about tornado warnings, high winds, or other dangerous conditions as they are reported. Another common use case for OSINT involves data breaches and cyber-attacks. OSINT can help detect when sensitive company information may have been accessed by hackers by monitoring dark web message boards and forums. In 2019, T-Mobile suffered a data breach that affected more than a million customers, but it was able to quickly alert affected users after finding their personal data online. OSINT is a well-established field with countless applications. Unfortunately, in an ever-changing digital world, it is not always enough to help organizations weather a crisis.

Why OSINT alone isn’t enough

One of the core challenges with leveraging OSINT data, especially social media intelligence (SOCMINT), is that much of it is unstructured and spread across many disparate sources, making it difficult to sort through, manage, and organize. Consider the social media statistics above. If a business wanted to monitor all conversations on Twitter to ensure all relevant information was captured, it would need to capture and analyze 500 million individual posts every day. If a trained analyst spent just three seconds analyzing each post, that would amount to 1.5 billion seconds of labor, equivalent to roughly 416,666 hours, just to keep pace. While technology and filters can greatly reduce the burden and help organizations narrow the scope of their analysis, it is easy to see how quickly human capital constraints can limit the utility of OSINT data, even for the largest companies.

Challenges with OSINT

Additionally, collecting OSINT data is time-consuming and resource-intensive, and making sense of it remains a highly specialized skill set requiring years of training. In an emergency where every second counts, sifting through copious amounts of information takes far longer than the window in which an organization must act to alter the outcome. Compounding the issue, OSINT data is noisy and difficult to filter. Even trained analysts find the need to constantly monitor, search, and filter voluminous troves of unstructured data tedious. Artificial intelligence and machine learning have helped weed through some of this data faster, but for organizations with multiple locations tasked with monitoring hundreds or thousands of employees, it is still a challenging task. Adding to the complexity, collecting OSINT data isn’t easy: collection includes both passive and active techniques, each requiring a different level of effort and skill.

Passive vs Active OSINT

Passive OSINT is typically anonymous and meant to avoid drawing attention to the person requesting the information. Scrolling user posts on public social media profiles is a good example of passive OSINT. Active OSINT refers to information proactively sought out, and it often requires a more purposeful effort to retrieve it; for example, specific login details may be needed to access a website where information is stored. Lastly, unverified OSINT data can’t always be trusted. Analysts often encounter false positives and fake reports, which take time to verify and, if acted on, could damage the organization’s reputation or worse. So, how can companies take advantage of OSINT without staffing an army of analysts or creating operational headaches?

A new path for OSINT

Fortunately, organizations can leverage the benefits of OSINT to improve situational awareness and aid decision-making without hiring a dedicated team of analysts to comb through the data. By combining OSINT data with third-party threat intelligence solutions, organizations can get a cleaner, more actionable view of what’s happening in the world. Threat intelligence solutions not only offer speed by monitoring for only the most relevant events 24/7/365, but they also offer more comprehensive coverage of a wide range of threat types. What’s more, the data is often verified and married with location intelligence to help organizations better understand if, how, and to what extent each threat poses a risk to their people, facilities, and assets. In a world with a never-ending stream of information available, learning how to parse and interpret it becomes all the more important. OSINT is a necessary piece of any organization’s threat intelligence and monitoring system, but it can’t be the only solution. Paired with external threat intelligence tools, OSINT can help reduce risk and keep employees safe during emergencies and critical events.
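To make the severe-weather use case above concrete, here is a minimal sketch of polling the National Weather Service's public alerts API for one state and flagging the severe entries. The state code, severity filter, and lack of scheduling, retries, or employee notification are illustrative simplifications, not a description of any vendor's product.

```python
# Sketch: pull active NWS alerts for a state and print the severe ones.
# Assumptions: Kansas as a sample Great Plains location; field names per the
# public api.weather.gov GeoJSON response.
import json
import urllib.request

STATE = "KS"  # hypothetical Great Plains location

def active_alerts(state):
    """Return the properties of all active NWS alerts for the given state."""
    url = f"https://api.weather.gov/alerts/active?area={state}"
    req = urllib.request.Request(url, headers={"User-Agent": "osint-demo"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        data = json.load(resp)
    return [feature["properties"] for feature in data.get("features", [])]

for alert in active_alerts(STATE):
    if alert.get("severity") in ("Severe", "Extreme"):
        print(alert.get("event"), "-", alert.get("headline"))
```

A production system would run this on a schedule, de-duplicate repeated alerts, and hand the results to an employee notification channel, which is where the third-party threat intelligence tools discussed above typically take over.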

Baltimore Is The Latest U.S. City To Target Facial Recognition Technology

The city of Baltimore has banned the use of facial recognition systems by residents, businesses and the city government (except for police). The criminalization in a major U.S. city of an important emerging technology in the physical security industry is an extreme example of the continuing backlash against facial recognition throughout the United States.

Facial recognition technology ban

Several localities – from Portland, Oregon, to San Francisco, from Oakland, California, to Boston – have moved to limit use of the technology, and privacy groups have even proposed a national moratorium on use of facial recognition. The physical security industry, led by the Security Industry Association (SIA), vigorously opposed the ban in Baltimore, urging a measured approach and ‘more rational policymaking’ that preserves the technology’s value while managing any privacy or other concerns.

Physical security industry opposes ban

“Unfortunately, an outright ban on facial recognition continues a distressing pattern in which the clear value of this technology is ignored,” said SIA Chief Executive Officer (CEO) Don Erickson, adding, “In such cases, it is local businesses and residents who stand to lose the most.” At the national level, a letter to US President Biden from the U.S. Chamber of Commerce Coalition asserts the need for a national dialog over the appropriate use of facial recognition technology and expresses concern about ‘a blanket moratorium on federal government use and procurement of the technology’. (The coalition includes the Security Industry Association (SIA) and other industry groups.) The negativity comes at a peak moment for facial recognition and other biometric technologies, which saw increased interest for a variety of public and business applications during the COVID-19 pandemic’s push to improve public health hygiene and promote ‘contactless’ technologies.

Prohibition on banks, retailers and online sellers

The ordinance in Baltimore prohibits banks from using facial recognition to enhance consumer security in financial transactions. It prevents retailers from accelerating checkout lines with contactless payment and prohibits remote online identity document verification, which is needed by online sellers and gig economy workers, according to the Security Industry Association (SIA). At a human level, SIA points out that the prohibition of facial recognition undermines the use of customized accessibility tools for disabled persons, including those suffering from blindness, memory loss or prosopagnosia (face blindness).

Ban out of line with current state of facial recognition

Addressing the Baltimore prohibition, the Information Technology and Innovation Foundation called the measure ‘shockingly out of line with the current state of facial recognition technology and its growing adoption in many sectors of the economy’. Before Baltimore’s decision to target facial recognition, Portland, Oregon, had perhaps the strictest ban, prohibiting city government agencies and private businesses from using the technology on the city’s grounds. San Francisco was the first U.S. city to ban the technology, with Boston; Oakland; Cambridge, Massachusetts; and Berkeley, California, among others, following suit.

Police and federal units can use biometrics

Unlike other bans, the Baltimore moratorium does not apply to police uses but targets private uses of the technology. It also includes a one-year ‘sunset’ clause that requires city council approval for an extension. The measure carves out an exemption for use of biometrics in access control systems. However, violations of the measure are punishable by 12 months in jail. The law also establishes a task force to evaluate the cost and effectiveness of surveillance tools.

Transparency in public sector use of facial recognition

Currently, the state of Maryland controls the Baltimore Police Department, so the city council does not have authority to ban police use of facial recognition, which has been a human rights concern driving the bans in other jurisdictions. A measure to return local control of police to the city could pass before the year lapses. SIA advocates transparency in public-sector applications of facial recognition in identity verification, security and law enforcement investigative applications. SIA CEO Don Erickson stated, “As public sector uses are more likely to be part of processes with consequential outcomes, it is especially important for transparency and sound policies to accompany government applications.”

What Are The Security Challenges Of Protecting Critical Infrastructure?

Many of us take critical infrastructure for granted in our everyday lives. We turn on a tap, flip a switch, push a button, and water, light, and heat are all readily available. But it is important to remember that computerized systems manage critical infrastructure facilities, making them vulnerable to cyber-attacks. The recent ransomware attack on the Colonial Pipeline is an example of the new types of threats. In addition, any number of physical attacks is also possible. We asked this week’s Expert Panel Roundtable: What are the security challenges of protecting critical infrastructure?