
It wasn’t long ago that capturing low-light video with forensic detail was a difficult task, requiring additional lighting to get a usable result. Ultimately, the images are supposed to identify potential intruders, which means things like facial details or the color of vehicles need to be visible in low-light conditions. Axis Lightfinder technology is one of the few solutions that can deliver this - and it has matured and is constantly evolving.

Many of these adjustments and developments have been introduced with Lightfinder 2.0. The human ability to see in the dark is limited - especially when it comes to colors and details. Millions of years of refinement have taught human beings to interpret the information sent from the eyes to the brain and make sense of night scenes well enough to survive.

Addition to security staff

When incidents have occurred in the past, interviewed eyewitnesses often can’t describe events correctly

But human beings have no way to enhance the image they perceive, and when incidents have occurred in the past, interviewed eyewitnesses often can’t describe events or details correctly. This is where technology like Axis Lightfinder in surveillance cameras becomes a useful addition to security staff - especially in low-light environments.

Overall, Lightfinder technology is a combination of extremely light-sensitive sensors and carefully tuned image processing algorithms. Together with a high-quality lens, this technology allows for sharp images in low-light conditions that still contain life-like colors. Given that dusk and poorly lit areas are preferred by potential intruders, it’s important to be able to see the details that might later help with identification, be that the color of a car or the details of worn clothing.

Usable video footage

The algorithms in surveillance cameras are responsible for recovering the colors, removing the noise and, ultimately, creating a clear picture. They help significantly to turn even the smallest sensor signal into usable video footage. However, it’s essential that these surveillance algorithms act in a predictable way, and never add foreign information into the image in an effort to make it more appealing to the eye.

Preserving the original image content and its forensic details must be a priority over extensive filtering

Preserving the original image content and its forensic details must always be a priority over extensive filtering. Lightfinder 2.0 builds upon these advanced features and all the experience from the original version of Lightfinder. It contributes to the goal of monitoring an area 24/7, under any conditions and with the highest possible quality. Which is why the updated version of Lightfinder has increased light sensitivity and other features that make images clearer and more colorful.

Photons and noise reduction

To understand how the new features work, it’s helpful to take a step back: light sensitivity is really the ability to detect minor changes in contrast even under difficult conditions. For this reason, it is essential to capture all of the very few photons that arrive at the image sensor and avoid losing them to scattering on the glass surface or the sensor itself: photons that are lost before being converted into electrons in a pixel can never be recovered.
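The physics behind this can be made concrete with a small calculation. Photon arrivals follow Poisson statistics, so a pixel’s best-case signal-to-noise ratio is the square root of the number of photons it captures. The sketch below uses illustrative numbers, not Axis figures, to show why every lost photon matters:

```python
import math

def shot_noise_snr(photons_captured: float) -> float:
    # Photon arrivals are Poisson-distributed: the noise (standard
    # deviation) on a signal of N photons is sqrt(N), so the best-case
    # signal-to-noise ratio is N / sqrt(N) = sqrt(N).
    return math.sqrt(photons_captured)

# A pixel that captures 100 photons has an SNR of 10; lose half of
# those photons to scatter and the SNR drops to about 7.
full = shot_noise_snr(100)
half = shot_noise_snr(50)
```

Because SNR grows only with the square root of the captured light, preserving photons at the lens and sensor surface pays off more than any amount of downstream processing.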

Image signals captured under these conditions are often submerged in noise, which requires a significant amount of noise reduction and signal reconditioning. This also needs to be done without destroying critical temporal or spatial information or introducing other unwanted artifacts. The new, advanced version includes sliders to adjust the amount of noise reduction that is applied to the video. This is essential, as some analytics applications are quite sensitive to noise reduction.

Avoid false alarms

Based on their needs, advanced system integrators can now optimize accuracy by adjusting the noise reduction level themselves, or let the analytics take care of this. While a number of analytics perform well with one level of noise, others need maximum noise reduction to avoid false alarms. The sliders also enable the camera’s performance to be adjusted to its environment, as the scenarios in which a camera is mounted can vary - in lighting and other factors.
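How such sliders might trade detail against noise can be sketched in a few lines. The toy filter below is not Axis’s algorithm - just a minimal illustration of a temporal filter (blending with the previous frame) followed by a spatial one (a 3x3 box blur), with two strengths standing in for the user-facing sliders:

```python
import numpy as np

def denoise(frame, prev_filtered, tnf_strength=0.5, snf_strength=0.5):
    # Temporal noise filter: exponential moving average across frames.
    # More history suppresses noise but can smear moving objects.
    temporal = (1 - tnf_strength) * frame + tnf_strength * prev_filtered
    # Spatial noise filter: 3x3 box blur, mixed in by snf_strength.
    # Stronger mixing cleans the image but softens fine detail.
    padded = np.pad(temporal, 1, mode="edge")
    h, w = temporal.shape
    blurred = sum(
        padded[dy:dy + h, dx:dx + w] for dy in range(3) for dx in range(3)
    ) / 9.0
    return (1 - snf_strength) * temporal + snf_strength * blurred
```

Setting both strengths to zero passes the frame through untouched - effectively the option an analytics application that is sensitive to filtering would want.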

Most modern image-sensors use the so-called Bayer-filter array to create a color image

Which is why Lightfinder 2.0 makes it possible to adjust the spatial noise filter (SNF) and temporal noise filter (TNF). Ultimately, this adjustment leads to more customized image processing. As mentioned, Lightfinder 2.0 captures images with even more life-like colors and more clarity, two factors that are to a certain extent connected. Most modern image sensors use the so-called Bayer filter array to create a color image.

White balance algorithm

This filter pattern, which is repeated over each 2x2 group of photosensors, divides pixels into three categories: 50 per cent green and 25 per cent each red and blue. The reason for this split is that the human eye perceives brightness and contrast best in green. For cameras, it means that blue or red objects receive only half the signal of a green one, which makes them noisier. That’s where the white balance algorithm can optimize the signal-to-noise ratio and reduce the noise level.
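The 2x2 arrangement and its 50/25/25 split can be seen directly by laying out the mosaic. An RGGB ordering is assumed here; sensors also ship with other orderings of the same pattern:

```python
import numpy as np

def bayer_pattern(height, width):
    # Tile an RGGB Bayer mosaic: each 2x2 block holds one red, two
    # green and one blue photosite.
    pattern = np.empty((height, width), dtype="<U1")
    pattern[0::2, 0::2] = "R"
    pattern[0::2, 1::2] = "G"
    pattern[1::2, 0::2] = "G"
    pattern[1::2, 1::2] = "B"
    return pattern

mosaic = bayer_pattern(4, 4)
green_share = (mosaic == "G").mean()  # half of all photosites
red_share = (mosaic == "R").mean()    # a quarter, same as blue
```

Demosaicing then interpolates the two missing color values at every photosite - which is exactly where red and blue, starting from half the samples, end up noisier than green.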

Motion-adaptive exposure significantly reduces the motion blur from approaching or nearby objects by measuring their speed and adjusting the exposure time accordingly. If a security operator pauses the video, the visible frame still shows an image that’s clear and detailed enough for the situation to be assessed: for example, identifying a person or a vehicle, prompting appropriate action or tracking an individual.
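The underlying trade-off is simple: a long exposure gathers more light, but an object moving at v pixels per second smears across v × t pixels during an exposure of t seconds. A sketch of how such a cap might be computed, with illustrative parameters rather than Axis’s actual control loop:

```python
def adaptive_exposure_ms(speed_px_per_s, max_blur_px=1.0, default_ms=33.0):
    # Choose the longest exposure (for light gathering) that still keeps
    # motion blur under max_blur_px at the measured image-plane speed.
    if speed_px_per_s <= 0:
        return default_ms  # static scene: keep the full default exposure
    limit_ms = max_blur_px / speed_px_per_s * 1000.0
    return min(default_ms, limit_ms)
```

An object crossing the frame at 100 px/s would cap the exposure near 10 ms, while a static scene keeps the full default - which is why a paused frame of a moving subject can stay sharp.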

Fixed box cameras

Naturally, it is important to select the right camera type for specific premises, as cameras are optimized for different tasks. While most cameras have Lightfinder implemented, that doesn’t mean all have the same level of light sensitivity - they benefit from the feature to different degrees. For example, long-zoom PTZ cameras are perfect for quickly changing the view from a nearby door to a facility hundreds of meters away, but they come with lower light sensitivity.

Of course, Lightfinder is not the only way to capture images in low-light conditions or near darkness

On the other side, there are fixed box cameras, which usually contain image sensors with larger pixels - and larger pixels translate directly into better contrast detection. Of course, Lightfinder is not the only way to capture images in low-light conditions or near darkness. What makes it stand out is the capability to capture true colors, something that differentiates this solution from other technologies, such as cameras with infrared (IR) illumination or thermal imaging.

Low-light surveillance technology

While both of these options are valid for certain areas, they don’t provide footage with color information, which can make identification difficult. Plus, thermal cameras won’t capture the details that allow reliable identification, and IR cameras depend on LEDs in order to provide clear images at night. Which is why a system that combines Lightfinder 2.0 technology with, for example, thermal cameras can be a great option for some areas.

Thermal cameras deliver images that allow analytics to reliably detect motion, while Lightfinder delivers the color information and the possibility to identify and investigate suspicious activity. Lightfinder 2.0 has not only taken on learnings from the first generation; its advanced technologies also make it possible to get even more detailed images and results, helping people and companies to stay safe and secure. And it’s only the start of bringing ‘old’ technologies to the next level as hardware and software evolve.


In case you missed it

How Have Security Solutions Failed Our Schools?

School shootings are a high-profile reminder of the need for the highest levels of security at our schools and education facilities. Increasingly, a remedy to boost the security at schools is to use more technology. However, no technology is a panacea, and ongoing violence and other threats at our schools suggest some level of failure. We asked this week’s Expert Panel Roundtable: How have security solutions failed our schools and what is the solution?

Why Visualization Platforms Are Vital For An Effective Security Operation Center (SOC)

Display solutions play a key role in SOCs in providing the screens needed for individuals and teams to visualize and share the multiple data sources in an SOC today.

Security Operation Center (SOC)

Every SOC has multiple sources and inputs, both physical and virtual, all of which provide numerous data points to operators in order to provide the highest levels of physical and cyber security, including surveillance camera feeds, access control and alarm systems for physical security, as well as dashboards and web apps for cyber security applications. Today’s advancements in technology and computing power have not only made security systems much more scalable, adding hundreds, if not thousands, of data points to an SOC, but the rate at which the data comes in has significantly increased as well.

Accurate monitoring and surveillance

This has made monitoring and surveillance much more accurate and effective, but also more challenging for operators, as they can’t realistically monitor the hundreds, even thousands of cameras, dashboards, calls, etc. in a reactive manner.

Lacking situational awareness is often one of the primary factors in poor decision making

For operators in SOCs to be able to mitigate incidents in a less reactive way and take meaningful action, streamlined actionable data is needed. This is what will ensure operators in an SOC truly have situational awareness. Situational awareness is a key foundation of effective decision making. In its simplest form, ‘it is knowing what is going on’. Lacking situational awareness is often one of the primary factors in poor decision making and in accidents attributed to human error.

Achieving ‘true’ situational awareness

Situational awareness isn’t just what has already happened, but what is likely to happen next. To achieve ‘true’ situational awareness, a combination of actionable data and the ability to deliver that information to the right people, at the right time, is needed. This is where visualization platforms (known as visual networking platforms) that provide both the situational real estate, as well as support for computer vision and AI, can help SOCs achieve true situational awareness.

Role of computer vision and AI technologies

Proactive situational awareness is when the data coming into the SOC is analyzed in real time and then brought forward to operators, who are decision makers and key stakeholders, in near real time for actionable visualization. Computer vision is a field of Artificial Intelligence that trains computers to interpret and understand digital images and videos. It is a way to automate tasks that the human visual system can also carry out: the automatic extraction, analysis and understanding of useful information from a single image or a sequence of images. There are numerous potential value adds that computer vision can provide to operation centers of different kinds. Here are some examples:

Face Recognition: Face detection algorithms can be applied to filter and identify an individual.

Biometric Systems: AI can be applied to biometric descriptions such as fingerprint, iris, and face matching.

Surveillance: Computer vision supports IoT cameras used to monitor activities and movements of just about any kind that might be related to security and safety, whether that’s on-the-job safety or physical security.

Smart Cities: AI and computer vision can be used to improve mobility through quantitative, objective and automated management of resource use (car parks, roads, public squares, etc.) based on the analysis of CCTV data.

Event Recognition: Improve the visualization and the decision-making process of human operators or existing video surveillance solutions, by integrating real-time video data analysis algorithms to understand the content of the filmed scene and to extract the relevant information from it.

Monitoring: Responding to specific tasks in terms of continuous monitoring and surveillance in many different application frameworks: improved management of logistics in storage warehouses, counting of people during event gatherings, monitoring of subway stations, coastal areas, etc.

Computer Vision applications

When considering a Computer Vision application, it’s important to ensure that the rest of the infrastructure in the Operation Center, for example the solution that drives the displays and video walls, will connect and work well with the computer vision application. The best way to do this, of course, is to use a software-driven approach to displaying information and data, rather than a traditional AV hardware approach, which may present incompatibilities.

Software-defined and open technology solutions

Software-defined and open technology solutions provide wider support for any type of application the SOC may need, including computer vision. In the modern world, with everything going digital, all security services and applications have become networked, and as such, they belong to IT. AV applications and services have increasingly become an integral part of an organization’s IT infrastructure.

Software-defined approach to AV

IT teams responsible for data protection are more in favor of a software-defined approach to AV that allows virtualised, open technologies as opposed to traditional hardware-based solutions. Software’s flexibility allows for more efficient refresh cycles, expansions and upgrades. The rise of AV-over-IP technologies has enabled IT teams in SOCs to effectively integrate AV solutions into their existing stack, greatly reducing overhead costs when it comes to technology investments, staff training, maintenance, and even physical infrastructure.

AV-over-IP software platforms

Moreover, with AV-over-IP, software-defined AV platforms, IT teams can more easily integrate AI and Computer Vision applications within the SOC, and have better control of the data coming in, while achieving true situational awareness. Situational awareness is all about actionable data delivered to the right people, at the right time, in order to address security incidents and challenges. Often, the people who need to know about security risks or breaches are not physically present in the operation centers, so having the data and information locked up within the four walls of the SOC does not provide true situational awareness.

Hyper-scalable visual platforms

Instead, there is a need to be able to deliver the video stream, the dashboard of data and information, to any screen anywhere, at any time - including desktops, tablets and phones - for the right people to see, whether that is an executive in a different office or working from home, or security guards walking the halls or streets. New technologies are continuing to extend the reach and the benefits of security operation centers. However, interoperability plays a key role in bringing together AI, machine learning and computer vision technologies, in order to ensure data is turned into actionable data, which is delivered to the right people to provide ‘true’ situational awareness. Software-defined, AV-over-IP platforms are the perfect medium to facilitate this for any organization with physical and cyber security needs.

Securing Mobile Vehicles: The Cloud and Solving Transportation Industry Challenges

Securing Intelligent Transportation Systems (ITS) in the transportation industry is multi-faceted for a multitude of reasons. Pressures build for transit industry players to modernise their security systems, while also mitigating the vulnerabilities, risks, and growth restrictions associated with proprietary as well as integrated solutions. There are the usual physical security obstacles when it comes to increasingly integrated solutions and retrofitting updated technologies into legacy systems. Starting with edge devices like cameras and intelligent sensors acquiring video, analytics and beyond, these edge devices are now found in almost all public transportation: buses, trains, subways, airplanes, cruise lines, and so much more. You can even find them in the world’s last manually operated cable car system in San Francisco. The next layer to consider is the infrastructure and networks that support these edge devices and connect them to centralized monitoring stations or a VMS. Without this layer, all efforts at the edge or stations are in vain, as you lose the connection between the two. And the final layer to consider when building a comprehensive transit solution is the software, recording devices, or viewing stations themselves that capture and report the video.

The challenge of mobility

However, the transportation industry in particular has a very unique challenge that many others do not – mobility. As other industries become more connected and integrated, they don’t usually have to consider going in and out of or bouncing between networks as edge devices physically move. Obviously, in the nature of transportation, this is key. Have you ever had a bad experience with your cellular, broadband or Wi-Fi at your home or office? You are not alone.

The transportation industry in particular has a very unique challenge that many others do not – mobility

Can you trust these same environments to record your surveillance video to the Cloud without losing any frames, non-stop 24 hours a day, 7 days a week, 365 days a year? To add to the complexity: how do you provide a reliable and secure solution when it’s mobile, traveling at varying speeds, and can be in and out of coverage using various wireless technologies? Waiting to upload video from a transport vehicle when it comes into port, the station, or any centralized location is a reactive approach that simply will not do any longer. Transit operations today require a more proactive approach and the ability to constantly know what is going on at any given time on their mobile vehicles, and to escalate that information to headquarters, authorities, or law enforcement if needed - which can only occur with real-time monitoring. This is the ultimate question when it comes to collecting, analyzing, and sharing data from mobile vehicles: how to get the video from public transportation vehicles to headquarters in real time.

Managing video data

In order to answer this question, let’s get back to basics. The management and nature of video data differ greatly from conventional (IT) data. Not only is video composed of large frames, but there are specific and important relationships among the frames and the timing between them. This relationship can easily get lost in translation if not handled properly. This is why it’s critical to consider the proper way to transmit large frames over unstable or variable networks. The Internet and its protocols were designed more than two decades ago and purposed for conventional data. Although the Internet itself has not changed, today’s network environments run a lot faster, expand to further ranges, and support a variety of different types of data. Because the Internet is more reliable and affordable than in the past, some might think it can handle anything. However, what is good for conventional data is not necessarily good for video. This combination makes it the perfect time to convert video recording to the Cloud.

Video transmission protocol

One of the main issues with today’s technology is the degradation of video quality when transmitting video over the Internet. ITS are in dire need of reliable transmission of real-time video recordings. To address this need, a radical, yet proven, video transmission protocol has recently been introduced to the market. It uses AI technology to adapt to different environments in order to always deliver high-quality, complete video frames. This protocol, when equipped with encryption and authentication, enables video to be transmitted reliably and securely over the Internet in a cloud environment.

One of the main issues with today’s technology is the degradation of video quality when transmitting video over the Internet

Finally, the transportation industry has a video recording Cloud solution that is designed for (massive) video and can handle networks that might be experiencing high error rates. Such a protocol will not only answer the current challenges of the transportation industry, but also make the previously risky Cloud environment safe for even the most reserved environments and entities. With revolutionary transmission protocols, the time is now to consider adopting a private Cloud for your transportation operations.