Security camera mounts - Expert commentary

Ergonomic Standards Increase Control Room Productivity

Ergonomics is a critical but often misunderstood aspect of designing control rooms for security. Ergonomics has a deep impact on the integrity of an operation, and the issue goes beyond the control room furniture. Matko Papic, Chief Technology Officer of Evans Consoles, divides ergonomics into three areas: physical (reach zones, touch points, monitors); cognitive (the individual’s ability to process information without overlooking a critical element); and organizational (how the facility operates in various situations; e.g., is it adequately designed for an emergency event?). He says the Evans approach is to determine the precise placement required for each element an operator needs, and then to design and build console furniture to position it there. Basically, the idea is to tailor the control room to the operation. What tasks must an operator perform? Are they manageable, or should they be divided among several operators? Control room design should accommodate the need to collaborate and be flexible enough to adapt to various situations. It all begins with understanding the information that needs to be processed, says Papic.

Increased Productivity In The Workplace

Because personnel are often stationed at a specific console, desk or workstation for long hours, physical problems and productivity issues can result, says Jim Coleman, National Sales Manager, AFC Industries. Ergonomically designed furniture and related products have been proven to increase productivity and alleviate physical stress in the workplace. Ergonomic furniture solutions are crafted for safety, adaptability, comfort and functionality. Coleman says AFC Industries can tailor furniture to specific needs and environments. For example, a height-adjustable workstation can be combined with adjustable monitor arm mounts to create a relaxed, comfortable environment. The furniture offers modern designs, comfortable ergonomics and comprehensive features, and rugged materials withstand the 24/7 use of command and control centers.

Health Benefits Of Ergonomic Workstations

A sedentary office environment is often an unhealthy one. “For people who sit most of the day, their risk of heart attack is about the same as smoking,” says Martha Grogan, Cardiologist at the Mayo Clinic. Ongoing research has shown that a change in posture (i.e., using ergonomic sit-to-stand workstations) is an effective means to combat these negative health effects. Using sit-to-stand workstations helps to reduce musculoskeletal disorders caused by long-term sitting. They can also improve productivity and focus thanks to increased blood flow; energy levels can rise and employees burn more calories.

“The ergonomic environment we create for control rooms involves considering every need of the staff at each workstation and their equipment, as well as workflow within the entire room,” says Coleman. “From the proper setting of screen focal lengths to sound absorption and glare reduction, each requirement and phase of a control room design is a necessary process to ensure the protection and safety of people and property.”

Emergency Operations Center

“The military has figured out that you are more alert when you are standing,” says Randy Smith, President of Winsted, and the realization is guiding emergency operations center (EOC) design toward sit-stand consoles. “As soon as there is an emergency, everybody stands up,” Smith adds.
Designing EOC environments also requires that systems be integrated with annunciating signal lights to facilitate communication among operators. Winsted’s sit-stand consoles can be combined with a motorized M-View monitor wall mount, enabling a 60-inch wall monitor to be raised and lowered to match the positioning of the sit-stand console. Larger, wall-mounted screens are easier for operators to use, since a bigger monitor makes it easier to read on-screen text, for example. Combining the larger monitor with sit-stand capabilities provides the best of both options. Many operators today stand for 50 percent of their day, says Smith. Ergonomic standards guide the design of Winsted’s control room consoles, including the ISO 11064 standards for the design of control centers. The furniture is also designed to accommodate industrial wire management (larger wire bundles), unlike furniture that might be bought in an office supply store.
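As a rough illustration of the “screen focal length” consideration Coleman mentions, a commonly cited human-factors rule of thumb (not a figure drawn from ISO 11064 or from the vendors quoted here) is that on-screen characters should subtend roughly 20 minutes of arc at the viewer’s eye. The minimal sketch below converts a viewing distance into the corresponding minimum character height, which helps explain why a large wall monitor is easier to read from across a control room than a desktop display at the same distance.

```python
import math

def min_char_height_mm(viewing_distance_mm: float, arc_minutes: float = 20.0) -> float:
    """Minimum character height so text subtends `arc_minutes` of visual angle
    at the given viewing distance (both values in millimetres)."""
    angle_rad = math.radians(arc_minutes / 60.0)
    # Simple geometry: height = 2 * distance * tan(angle / 2)
    return 2.0 * viewing_distance_mm * math.tan(angle_rad / 2.0)

if __name__ == "__main__":
    # Typical desktop console distance vs. wall-monitor distances (assumed values)
    for distance_m in (0.7, 2.0, 4.0):
        h = min_char_height_mm(distance_m * 1000)
        print(f"At {distance_m:.1f} m, characters should be at least {h:.1f} mm tall")
```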

Improving Security System Installations With Acceptance Testing

Significant technological advancements have created endless possibilities in how security is not only deployed, but also leveraged by the end user – the customer. For example, customers can now view surveillance at eight different offices in eight different states from a single, central location. A security director can manage an enterprise-wide access control system, including revoking or granting access control privileges, for 10,000 global employees from the company’s headquarters in Chicago. However, with that increased level of system sophistication comes an added level of complexity.

After successfully completing the installation of a security system, integrators are now expected to formally and contractually prove that the system works as outlined in the project specification document. Tom Feilen, Director of National Accounts for Koorsen Security Technology, explains that this formal checks-and-balances process is gaining momentum in the security industry. The step-by-step process of Acceptance Testing is more commonly being written into bid specifications, especially for projects that require the expertise of an engineer and/or architect. Simply put, it is a way for the end user to make sure the system they paid for works properly and is delivered by the integrator as outlined in the project’s request for proposal. While Acceptance Testing can be a time-consuming process, it is a valuable industry tool. It is estimated that at least 95 percent of integrated security systems today have been brought through the Acceptance Testing process.

Security systems have become more complicated in recent years. The introduction of IP-based, enterprise-wide and integrated solutions has opened the door to more sophisticated access control and surveillance systems than ever thought possible. The Acceptance Testing process can vary depending upon the size of the project, but for a larger-scale project it is not uncommon for it to take several weeks from start to finish. This timeline can be especially lengthy when the project involves hundreds of devices, such as access control readers, surveillance cameras, video recorders, intrusion sensors and intercom systems.

What is involved in the Acceptance Testing process?

While the specific process can vary from integrator to integrator, many follow a similar process with their customer to ensure the system works accurately and that the customer has the proper certification documentation. The initial part of the process typically involves generating a report of each device installed as part of the system. This list enables the systems integrator to systematically test each device, ensuring that individual devices are not single points of failure for the overall system. For example, in a building equipped with a system that automatically releases the egress doors upon fire alarm activation, it is important to make sure each door’s electro-magnetic locking system is operating properly. The systems integrator would not only test that a door releases when the fire alarm sounds, but also make sure the access control system is notified if the door is propped open or held open longer than normal usage parameters allow.
For a door that is also monitored by a surveillance camera, part of the testing would also involve making sure that the image being transmitted to a video monitor is coming from the correct surveillance camera, and that the actual angle of the image is what the customer has requested and is correctly labeled as such. If a device does not function as it should, it is added to a punch list that requires the systems integrator to repair that device within a certain period of time. Once repairs are made, the systems integrator submits a letter to the client stating that every device has been tested and works properly. It is also important for the integrator, once the testing process is complete, to obtain customer sign-off (a Certificate of Acceptance) on all systems tested and documentation provided. This limits liability once the system is turned over.

From a safety perspective, Acceptance Testing is also used to verify that T-bars and safety chains are installed on cameras that are mounted in drop ceilings. It can confirm that panels are mounted in a room that is properly heated and cooled to avoid major temperature swings. As part of the Acceptance Testing checklist, it can also ensure that the power supplies that drive the security systems are properly rated, with the recommended batteries for back-up, and that emergency exit devices or card readers are not mounted more than 48 inches above the floor.

After the project is complete, Acceptance Testing protects both parties against liability issues. One example is if the building has a fire and the functionality of the life safety system comes into question. Acceptance Testing can be used to prove that the system was able to function as specified and dispel any concerns about its performance. At that time, all close-out sheets are turned in, along with as-built drawings and a manual providing a complete listing of each device and system installed. Today, these manuals come not only in paper form as part of a large binder, but also as digital files saved to a disc. The benefit of providing the customer with a binder or documentation of the system is that, should the end user/customer replace the person who manages security at the company, valuable information will not leave with that former employee.

While this checklist to close out a project may appear trivial at first, it is an important part of the security project process. An Acceptance Testing program serves to protect the end user’s investment, ensuring that the systems integrator hired for the project is knowledgeable and provides quality work. For the integrator, it helps towards the end goal of a satisfied customer.
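To make the mechanics concrete, here is a minimal sketch of how an integrator might track per-device test results and generate the punch list and sign-off summary described above. The device IDs, fields and wording are hypothetical, not Koorsen’s actual paperwork or any standard form.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DeviceTest:
    device_id: str          # e.g. "ACR-014" for an access control reader (hypothetical ID)
    device_type: str        # reader, camera, recorder, sensor, intercom ...
    location: str
    passed: bool
    notes: str = ""

@dataclass
class AcceptanceTest:
    project: str
    results: list[DeviceTest] = field(default_factory=list)

    def record(self, test: DeviceTest) -> None:
        self.results.append(test)

    def punch_list(self) -> list[DeviceTest]:
        """Devices that failed and must be repaired before sign-off."""
        return [t for t in self.results if not t.passed]

    def ready_for_sign_off(self) -> bool:
        return bool(self.results) and not self.punch_list()

    def certificate(self) -> str:
        status = "PASSED" if self.ready_for_sign_off() else "PENDING PUNCH-LIST ITEMS"
        return (f"Certificate of Acceptance - {self.project} - {date.today()}: "
                f"{len(self.results)} devices tested, status {status}")

# Example: test a monitored egress door and its camera
at = AcceptanceTest("HQ access control and surveillance upgrade")
at.record(DeviceTest("DOOR-03", "egress door", "Lobby east", passed=True))
at.record(DeviceTest("CAM-117", "camera", "Lobby east", passed=False,
                     notes="View mislabeled; does not cover requested angle"))
print([t.device_id for t in at.punch_list()])   # ['CAM-117']
print(at.certificate())
```

The design choice here is simply that the punch list and the certificate are derived from the same per-device records, which mirrors the idea that the close-out documentation should fall straight out of the testing process rather than being assembled separately.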

What Is The Changing Role Of Women In Security?

There was a time when men dominated the physical security industry. On second thought, that time is today. Even with increasing numbers of women entering our community, it’s an industry that is still mostly populated by men. But change is coming, and the industry as a whole is benefiting greatly from a surge in female voices. We asked this week’s Expert Panel Roundtable: What is the changing role of women in security?

What Is Artificial Intelligence And Should You Be Using It?

Artificial Intelligence. You’ve heard the words in just about every facet of our lives: just two words, yet quite possibly the most life-changing words employed in everyday conversation. So what exactly is AI, who currently uses it, and should you be using it?

What is AI?

AI is a powerful way of collecting, qualifying and quantifying data toward a meaningful conclusion, helping us reach decisions more quickly or automate processes that could be considered mundane or repetitive. AI in its earlier forms was known as “machine learning” or “machine processing,” which has evolved into “deep learning” and, here in the present, Artificial Intelligence. AI as it applies to the security and surveillance industry gives us the ability to discover and process meaningful information more quickly than at any other time in modern history. Flashback: VCR tapes, blurred images, fast-forward, rewind and repeat. That process became digital, though it continued to be very time-consuming. Today’s surveillance video management systems have automated many of these processes with features like “museum search,” which seeks an object removed from a camera view, or “motion detection,” which creates alerts when objects move through a selected viewpoint. These features are often confused with AI, but they are really supportive analytics for Artificial Intelligence, not AI themselves.

Machine Learning

Fully appreciating AI means employing a machine, or series of machines, to collect, process and produce information obtained from basic video features or analytics. What the machines learn depends on what is asked of them. The truth is, AI can only become meaningful if enough information is learned to provide the results desired. If there isn’t enough information, then we must dig deeper or learn more, properly described as “deep-learning” AI. Translated, this means that we need to learn more on a deeper level in order to obtain the combined information necessary to produce the desired result.

Deep learning AI

Deep-learning AI can afford us the ability to understand more about a person’s characteristic traits and behaviors. This information can then be applied to interpret patterns of behavior, with the end goal of predicting behavior. This prediction requires some degree of human interpretation, so that we are able to position ourselves to disrupt patterns of negative behavior or simply look for persons of interest based on those patterns. Over time, these patterns evolve into intelligence that increases the machine’s ability to predict behavior more accurately, allowing action to be taken as a result. This now-actionable intelligence could translate to life safety, such as stopping a manufacturing process if a person moves into an area where they shouldn’t be and might be in danger.

Useful applications of intelligence

Knowledge or intelligence gathered could be useful in retail applications as well, by simply collecting traffic patterns as patrons enter a showroom. This is often displayed in the form of heat mapping of the most commonly traveled paths, or used to determine choke points that detract from a shopper’s experience within the retail establishment.
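As a minimal sketch of the heat-mapping idea (illustrative only; it assumes some upstream analytics layer already supplies per-frame foot-traffic detections as x, y positions on a floor plan, which is not a specific product’s API), the snippet below accumulates detections into a coarse grid so the busiest paths and choke points stand out.

```python
import numpy as np

def accumulate_heatmap(points, floor_w_m=20.0, floor_h_m=10.0, cell_m=0.5):
    """Bin (x, y) detections, given in metres on the floor plan, into a 2-D grid.
    Higher cell counts correspond to more heavily travelled areas."""
    cols = int(floor_w_m / cell_m)
    rows = int(floor_h_m / cell_m)
    grid = np.zeros((rows, cols), dtype=np.int32)
    for x, y in points:
        c = min(int(x / cell_m), cols - 1)   # clamp to the grid edges
        r = min(int(y / cell_m), rows - 1)
        grid[r, c] += 1
    return grid

# Hypothetical detections clustered near an entrance and a promotional display
detections = [(1.0, 5.0), (1.2, 5.1), (1.1, 4.9), (8.0, 2.0), (8.1, 2.1)]
heat = accumulate_heatmap(detections)
print("Busiest cell saw", heat.max(), "detections")
```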
Beyond heat maps, this intelligence could also mean relocating signage to more heavily traveled foot-paths to gain the highest possible exposure for a sale or similar notice, perhaps driving higher interest in a sale or product. Some of this signage or direction could even translate into increased revenues by realigning customer engagement and purchasing points.

Actionable Intelligence

From a surveillance perspective, AI can be translated into actionable intelligence by providing behavioral data that allows law enforcement to engage individuals with malicious intent earlier, thus preventing crimes in whole or in part based on previously learned data. The data collection points now begin to depart from a benign, passive role into an actionable role. As a result, new questions are being asked regarding the camera’s intended purpose or the role of its viewpoint, such as detection, observation, recognition or identification.

Detecting human presence

By way of example, a camera or data collector may need to detect human presence as well as positively identify who the person is. An analytic trip line is crossed, a motion box is activated or counter-flow is detected, which creates an alert for a guard or observer to take action. Further up the food chain, a supervisor is also notified and the facial characteristics are captured. These remain camera analytics, but now we feed the collected facial information to a graphics processing unit (GPU), which can be employed to compare the captured characteristics with pre-loaded facial characteristics (a simplified sketch of this matching step appears below, after the conclusion). When the two sources are compared and a match is produced, an alert can be generated, resulting in an intervention or similar action aimed at preventing a further act. This process of detect, disrupt, deter or detain could be considered life-saving, because it displays possible outcomes in advance of the intended actions. The next level is deep-learning AI, which uses the same characteristics to determine where else within the CCTV ecosystem the individual may have appeared previously by comparatively analyzing other collected video data. This becomes deep learning when the GPU is able to learn from user-tagged positive identifications, reprocessing its own data to understand where else the person of interest (POI) may have appeared in the ecosystem and improving its own predictive capabilities, thus becoming faster at raising alerts and better at discovering previously archived video data.

The future

In conclusion, the future of these “predictables” rests wholly in the hands of the purchasing end user. Our job is to help everyone understand the capabilities; theirs is to continue to make the investment so that the research perpetuates itself. Just think where we’d be if purchasers hadn’t invested in the smartphone.
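The detect-compare-alert chain described above can be reduced to a very small sketch. It assumes, purely for illustration, that an upstream face-detection analytic already produces fixed-length embedding vectors for captured faces and for a pre-loaded watch list; matching is then just a similarity threshold. The names, dimensions and threshold are hypothetical, and no particular recognition library or vendor pipeline is implied.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity of two embedding vectors, 1.0 meaning identical direction."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_against_watchlist(captured: np.ndarray,
                            watchlist: dict[str, np.ndarray],
                            threshold: float = 0.8):
    """Return the best-matching person of interest, or None if below threshold."""
    best_name, best_score = None, threshold
    for name, enrolled in watchlist.items():
        score = cosine_similarity(captured, enrolled)
        if score >= best_score:
            best_name, best_score = name, score
    return best_name, best_score

# Hypothetical 128-dimensional embeddings standing in for GPU-side analytics output
rng = np.random.default_rng(0)
poi_embedding = rng.normal(size=128)
watchlist = {"POI-0042": poi_embedding}
captured = poi_embedding + rng.normal(scale=0.05, size=128)   # noisy re-capture

name, score = match_against_watchlist(captured, watchlist)
if name:
    print(f"ALERT: possible match {name} (similarity {score:.2f}) - notify supervisor")
```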

Looking To The Future With Edge Computing

Edge devices (and edge computing) are the future. Although this does seem a little cliché, it is the truth. The edge computing industry is growing as quickly as technology can support it, and it looks like we will need it to.

IoT Global Market

The IoT (Internet of Things) industry alone will have put 15 billion new IoT devices into operation by the year 2020, according to a recent Forbes article titled “10 Charts That Will Challenge Your Perspective of IoT’s Growth”. IoT devices are not the only edge devices we have to deal with; the total number of connected edge devices also includes security devices, phones, sensors, retail sales devices, and industrial and home automation devices. The sheer number of devices brings possible security and bandwidth implications into perspective. The amount of data that will need to be passed and processed by all of these devices will be massive, and all business owners and automation engineers need to consider how that data will be moved and processed.

Ever-Expanding Edge Devices Market

As the number of edge devices in the marketplace and their use among consumers and businesses rise, handling the data from all of these devices will no longer be practical with central server architectures alone. We are talking about hundreds of billions and even trillions of devices. According to an IHS Markit study, there were 245 million CCTV cameras worldwide. One has to imagine there are at least 25% of that many access control devices (61.25 million devices), based on a $344 million market size also calculated by IHS Markit’s researchers. If all the other edge devices mentioned earlier are considered, one can see that trying to route them all through servers for processing is going to become difficult if it hasn’t already (which arguably it has, as evidenced by the popularity of cloud-based solutions among businesses that already use a lot of edge devices or process a lot of information on a constant basis).

Cloud Computing

The question is this: is cloud computing the most effective and efficient solution as the IoT industry grows and edge devices become so numerous? My belief is that it is not. Take the example of a $399 USD device that is just larger than a pack of cards and runs a CPU benchmarked at the same level as a mid-size desktop. This device has 8GB of RAM and 64GB of eMMC storage built in, plus a GPU that can comfortably support a 4K signal at 60Hz, with support for NVMe SSDs for add-on storage. This would have been unbelievable five years ago. As the price of edge computing goes down, which it has done in a dramatic way over the last 10 years (as can be seen with my recent purchase), the cost of maintaining a central server that can perform the processing required for all of the new devices being introduced to the world (thanks to the low cost of entry for edge device manufacturers) keeps rising. This all but guarantees that there will be a point at which it is less expensive for businesses and consumers alike to do the bulk of their processing at the edge rather than in central server architectures.
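The crossover argument can be made concrete with some back-of-the-envelope arithmetic. The figures below are placeholders, not market data: assume a one-time edge hardware cost shared across a number of camera streams and a recurring per-stream cloud processing fee, then find the month at which the edge purchase pays for itself.

```python
def breakeven_month(edge_hw_cost_per_stream: float,
                    cloud_fee_per_stream_month: float,
                    edge_opex_per_stream_month: float = 0.0) -> float:
    """Months until a one-time edge purchase beats a recurring cloud fee, per stream."""
    monthly_saving = cloud_fee_per_stream_month - edge_opex_per_stream_month
    if monthly_saving <= 0:
        return float("inf")   # cloud stays cheaper in this scenario
    return edge_hw_cost_per_stream / monthly_saving

# Placeholder numbers: the $399 edge box mentioned above, hypothetically shared
# across 8 camera streams, versus an assumed $15/stream/month cloud analytics fee.
per_stream_hw = 399 / 8
print(f"Break-even after ~{breakeven_month(per_stream_hw, 15.0):.1f} months")
```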
Edge Computing

There are plenty of articles discussing and detailing the opposition between the two sides of the computing technology coin, cloud computing and edge computing. The gist is that “cloud computing” was the hot new buzzword three years ago and is now being overtaken by “edge computing.” The truth is that cloud computing is a central server architecture hosted at someone else’s location, while edge computing is the method of processing data at the edge of the network (in the devices themselves), requiring fewer resources at a central location. There is certainly a use case for both; however, the shift to edge computing among the general public and small to mid-sized businesses will not come as a surprise to those players who have been paying attention. One Investor’s Business Daily article, “Next Big Thing In Cloud Computing Puts Amazon And Its Peers On The Edge,” takes the stance that edge computing is going to completely displace centralized cloud computing and even coins the phrase “cloud computing, decentralized” to describe edge computing. It reflects the stance that most technology experts seem to be taking, including, according to the same article, Amazon Web Services’ VP of Technology, Marco Argenti. We know that edge computing is going to be a necessary development in the technology industry, and it is happening as I write this, and quickly at that.

Cost Efficiency Of Edge Processing

As time goes on, the intersection between the prices of network bandwidth, edge processing and maintaining powerful central servers will make edge processing the most efficient and cost-effective way to maintain a scalable network in any environment, including datacenters. As it currently stands, most residential users can only achieve a 1Gbps WAN (internet) connection, and small to medium-sized businesses can’t get much more but often seem to get much less, based on my personal experience. When more than 1Gbps needs to be processed, cloud computing becomes very expensive, at which point owning a central server or utilizing edge computing becomes the better option (a rough bandwidth check is sketched below). Then you look at total cost of ownership: when edge computing costs less than maintaining central server architectures, edge computing becomes the single best option. So, I’ll say it again: edge devices (and edge computing) are the future.
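The 1Gbps point can also be checked with simple arithmetic. Assuming, purely for illustration, roughly 15 Mbps per high-resolution camera stream and some headroom kept free on the link, the sketch below shows how quickly a site saturates a 1Gbps WAN uplink if every stream is shipped off-site for processing instead of being analyzed at the edge.

```python
def wan_load(num_streams: int,
             mbps_per_stream: float = 15.0,
             wan_capacity_mbps: float = 1000.0,
             usable_fraction: float = 0.8) -> float:
    """Fraction of usable WAN capacity consumed by sending every stream off-site."""
    return (num_streams * mbps_per_stream) / (wan_capacity_mbps * usable_fraction)

for cameras in (16, 32, 64):
    load = wan_load(cameras)
    print(f"{cameras} cameras -> {load:.0%} of usable 1 Gbps uplink")
```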