100+ Million Cameras – How AI and Video are Changing the Edge


Over 116 million network cameras were shipped in the professional surveillance market last year, with the capability to generate almost 9 petabytes of video every day.[1] As video demand and the use of AI increase, these numbers continue to grow, forcing us to rethink edge architectures.
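To put the per-camera numbers in perspective, here is a rough, hypothetical back-of-envelope calculation of how much video a single camera generates in a day; the bitrates are illustrative assumptions, not figures from the cited report.

```python
# Back-of-envelope estimate of raw video generated per camera per day.
# The bitrates below are illustrative assumptions, not figures from the cited report.

SECONDS_PER_DAY = 24 * 60 * 60

def gb_per_day(bitrate_mbps: float) -> float:
    """Gigabytes of video produced in 24 hours at a given average bitrate."""
    return bitrate_mbps * SECONDS_PER_DAY / 8 / 1000  # megabits -> megabytes -> gigabytes

for label, mbps in [("720p sub-stream", 0.5), ("1080p main stream", 2.0), ("4K main stream", 8.0)]:
    print(f"{label:>17}: ~{gb_per_day(mbps):.1f} GB per camera per day")
```

At these assumed rates a single camera writes anywhere from a few gigabytes to close to 90 GB of video per day, which is why fleet-level totals reach petabyte scale so quickly.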

Seeing Eye to AI – How Computers View Our World

If there’s one thing we’ve learned about AI in the last few years, it is that AI can do narrow tasks incredibly well. Computer vision isn’t necessarily about teaching computers to see the world as we (humans) would. It is about allowing computers to capture, analyze, and learn about the human world. 

Applying computer intelligence capabilities – such as object recognition, movement detection, and tracking or counting objects or people – in the right application is where AI has profound value. It’s no wonder that the combination of video, artificial intelligence, and sensor data is a hotbed for new services across industries.
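As a concrete illustration of one of these narrow tasks, below is a minimal movement-detection sketch using OpenCV background subtraction. The video source, thresholds, and minimum region size are arbitrary assumptions for demonstration, not part of any specific product pipeline.

```python
# Minimal movement-detection sketch using OpenCV background subtraction.
# The video source and all thresholds are illustrative assumptions.
import cv2

capture = cv2.VideoCapture("camera.mp4")          # or 0 for a local webcam
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=32)

frame_index = 0
while True:
    ok, frame = capture.read()
    if not ok:
        break
    mask = subtractor.apply(frame)                                 # foreground (moving) pixels
    mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)[1]     # drop shadows and weak pixels
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    moving = [c for c in contours if cv2.contourArea(c) > 500]     # ignore small noise blobs
    if moving:
        print(f"frame {frame_index}: {len(moving)} moving region(s) detected")
    frame_index += 1

capture.release()
```

Counting or tracking builds on exactly this kind of per-frame output, which is why the same camera hardware can serve very different applications.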

Smarter Use Cases

While we often limit real-time video analytics to the context of security or surveillance activity, the market is expanding through a growing number of use cases. These include medical applications, sports analysis, smart factories, traffic management, and even agricultural drones.

The use of intelligent technology has created a new generation of “smart” use cases. For example, in “smart cities” cameras and AI analyze traffic patterns and adjust traffic lights to improve vehicle flow, reduce congestion and pollution, and increase pedestrian safety. “Smart factories” implement the kind of narrow tasks AI excels at – like detecting flaws or deviations in the production line in real time and adjusting production to reduce errors. Smart cameras can be highly effective at quality assurance, reducing costs significantly through automation and earlier fault detection.

The Edge is Changing

The evolution of smart video is happening alongside other technological and data infrastructure advancements, such as 5G. As these technologies come together, they’re impacting how we architect the edge and driving demand for specialized storage. Here are some of the biggest trends we’re seeing:

1. More of Everything 

The number and types of cameras continue to grow, and each new type brings new capabilities. More cameras allow more to be seen and captured, whether that means more coverage or more angles. They also mean more real-time video can be captured and used to train AI.

At the same time, more and more cameras support higher resolutions (4K video and above). This is important because video is rich media: the more detailed the video, the more insights can be extracted from it and the more effective the AI algorithms can become. In addition, new cameras transmit not just a main video stream but also additional low-bitrate streams used for low-bandwidth monitoring and AI pattern matching.

One of the biggest challenges for these types of workloads is that they’re always on. Whether in traffic, security, or manufacturing, many of these smart cameras operate 24/7, 365 days a year, and storage technology has to keep up. Storage has evolved to deliver the sustained transfer and write speeds needed for high-quality video capture, and on-camera storage technology that can deliver longevity and reliability has become even more critical.
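To make the always-on point concrete, here is a small, hypothetical calculation of how much data one camera writes in a year of continuous recording and how that compares to a flash card’s rated write endurance. The stream bitrates and the endurance figure are assumptions for illustration, not the specifications of any particular product.

```python
# Rough estimate of 24/7 recording write volume vs. an assumed card endurance rating.
# Bitrates and the TBW (terabytes written) figure are illustrative assumptions only.

SECONDS_PER_YEAR = 365 * 24 * 3600

def tb_written_per_year(bitrate_mbps: float) -> float:
    """Terabytes written in one year of continuous recording at a given bitrate."""
    return bitrate_mbps * SECONDS_PER_YEAR / 8 / 1e6  # megabits -> terabytes

main_stream_mbps = 4.0    # assumed main recording stream
sub_stream_mbps = 0.5     # assumed low-bitrate monitoring/AI stream
endurance_tbw = 100.0     # assumed endurance rating of an on-camera card, in TB written

yearly_tb = tb_written_per_year(main_stream_mbps + sub_stream_mbps)
print(f"~{yearly_tb:.1f} TB written per camera per year")
print(f"~{endurance_tbw / yearly_tb:.1f} years until a {endurance_tbw:.0f} TBW card reaches its rating")
```

Under these assumptions a single camera writes on the order of 15–20 TB a year without pause, which is why endurance and sustained write behavior matter as much as raw capacity.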

2. Endpoints Come in All Shapes and Sizes

It doesn’t matter if it’s for a business, for scientific research, or even for our personal lives – it seems we try to capture data about everything in our world. As a result, we’re seeing new types of cameras that capture new types of data to be analyzed.

For example, the current pandemic has given rise to thermal cameras that help identify those with a fever. Explosion-proof cameras are being used in areas of the highest environmental risk. Cameras can be found everywhere – atop buildings, inside moving vehicles, in drones, and even in doorbells.

As we design storage technology, we must take location and form factor into consideration. We need to think about the accessibility of cameras (or lack thereof) – are they atop a tall building? Deep in a remote jungle? Such locations may also expose the camera and its storage to extreme temperature variations. All of these possibilities need to be taken into account to ensure long-lasting, reliable, continuous recording of critical video data.

3. Specialized AI Chipsets

Improved compute capabilities in cameras mean processing happens at the device level, enabling real-time decisions at the edge. We’re seeing new chipsets arrive for cameras that deliver improved AI capability, and more advanced chipsets add deep neural network processing for on-camera deep learning analytics. AI keeps getting smarter and more capable.

According to industry analyst Omdia, shipments of cameras with embedded deep-learning analytics capability will grow at a rate of 67% annually between 2019 and 2024.[2] That compound rate implies roughly a thirteen-fold increase over five years. It reflects not only the innovation happening within cameras but also the expectation that deep learning – which requires large video data sets to be effective – will happen on-camera too, driving the need for more primary on-camera storage.

Even for solutions that employ standard security cameras, AI-enhanced chipsets and discrete GPUs are being used in network video recorders (NVRs), video analytics appliances, and edge gateways to enable advanced AI functions and deep learning analytics. NVR firmware and OS architecture are evolving to add such capabilities to mainstream recorders, and storage must evolve as well to handle the changing workload that results.

One of the biggest changes is the need to go beyond storing single and multiple camera streams. Today, metadata from real-time AI and reference data for pattern matching need to be stored as well. This has greatly altered the workload dynamic and how we tailor storage devices for these new types of workload.
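As a sketch of what that additional data can look like, here is a hypothetical per-event metadata record of the kind an on-camera or NVR analytics pipeline might persist alongside the video itself. The field names and values are illustrative only, not any vendor’s actual schema.

```python
# Hypothetical analytics metadata record stored alongside the recorded video.
# Field names and values are illustrative, not a vendor-defined schema.
import json
from datetime import datetime, timezone

detection_event = {
    "camera_id": "cam-0042",                                     # assumed camera identifier
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "stream": "main",                                            # which stream produced the detection
    "event": "person_detected",
    "confidence": 0.91,
    "bounding_box": {"x": 312, "y": 148, "w": 96, "h": 210},     # pixel coordinates in the frame
    "video_offset_ms": 48250113,                                 # offset into the recorded segment
}

# Many small, frequent writes like this land next to large sequential video writes,
# which is what changes the workload profile the storage device has to handle.
print(json.dumps(detection_event, indent=2))
```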

4. Deep Learning Still Requires a Capable Cloud

Even as camera and recorder chipsets come with more compute power, most of the video analytics and deep learning in today’s smart video solutions is still done with discrete video analytics appliances or in the cloud. That’s where big data resides.

Broader Internet of Things (IoT) applications that use sensor data beyond video are also tapping into the power of the deep learning cloud to create more effective, smarter AI.

To support these new AI workloads, the cloud has gone through a transformation. Neural network processing in the cloud has adopted massive GPU clusters and custom FPGAs, fed with thousands of hours of training video and petabytes of data. These workloads depend on the high capacities of enterprise-class hard drives (HDDs) – which can already support 20TB per drive – and high-performance enterprise SSD flash devices, platforms, and arrays.

5. Network, or the Lack Thereof

Wired and wireless internet have enabled the scalability and ease of installation that fueled the explosive adoption of security cameras – but only where LAN and WAN infrastructure already exists. Now 5G is coming!

5G removes many barriers to deployment, allowing expansive options for placement and ease of installation of cameras at a metropolitan level. With this ease of deployment comes greater scalability, which drives new use cases and further advancements in both camera and cloud design.

For example, cameras can now be stand-alone, with direct connectivity to a centralized cloud – they’re no longer dependent on a local network. Emerging 5G-ready cameras are being designed to load and run third-party applications that bring broader capabilities. Really, the sky’s the limit on smart video innovation brought about by 5G.

Yet with greater autonomy, these cameras will need even more dynamic storage. They will require new combinations of endurance, capacity, performance, and power efficiency to optimally handle the variability of new app-driven functions, and we’re designing solutions with these new capabilities in mind.

Leading the Evolution of Storage at the Edge 

It’s a brave new world for smart video, and it’s as complex as it is exciting. Architectural changes are being made to handle new workloads and prepare for even more dynamic capabilities at the edge and at endpoints. At the same time, deep learning analytics continue to evolve at the back end and in the cloud.

Western Digital has a strong history of innovation, going back to the origins of both hard disk drive technology and flash technology.  We work closely with market and innovation leaders in smart video to develop a deep understanding of today’s and tomorrow’s advanced AI-enabled architectures. We get a close look at how the changes in video and metadata stream management affect the workload on storage devices.  

Understanding workload changes – whether at the camera, recorder, or cloud – is critical to ensuring that new architectural changes are augmented by continuous innovation in storage technology. That’s why we continue to optimize, tune, and, if needed, even re-architect our storage firmware and interface technology so that storage not only keeps up with the growing demands of smart video but also drives new capabilities and smart use cases.


Smart Video-Ready

Learn how we’ve engineered our drives for the extreme demands of 24/7 surveillance systems here: WD Purple 

Get to know key considerations when planning an IP-based smart video system – download the paper.


[1] Omdia Research – Video Surveillance & Analytics Intelligence Database, August 2020, and Western Digital estimates

[2] Omdia Research – Video Surveillance & Analytics Intelligence Database, August 2020
