In part 1 of this post, we evaluated the potential for a new wave of networked camera technology that could be mobile, persistently streaming, and shared among members of subscription services. We now turn to a critical part of a new Internet infrastructure for interpreting and cross-referencing the content of such video streams.
Computer vision methods have come a long way since the early 1990s, when the field had largely stagnated. Part of the problem back then was limited computational power, which made research difficult because some ideas that ultimately proved themselves took too long to evaluate (weeks or months for each 'tweak'). Affordable digital cameras had also yet to emerge. Since 2000, a 'perfect storm' has gathered: massively available 'still' imagery and full-motion video, combined with sufficient computing power to test processing strategies on reasonable timescales.
Today, software can identify where human figures and faces appear in a scene; perform biometric identity analysis on faces; recognize or categorize a wide range of objects; infer the 3D structure of a scene from motion and other cues; perform intelligent background modeling, which helps isolate moving things such as vehicles, animals, and people; interpret a limited variety of human behaviors; and recognize license plates and other text, barcodes, facial expressions, hand gestures, human body poses, and on and on. Traffic cameras and overhead imagery can track vehicles and pedestrians very effectively. All of this is extremely useful in piecing together a collective picture of what is going on, and where.
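Background modeling in particular is simple to sketch. The idea is to maintain a slowly adapting per-pixel estimate of the static scene and flag pixels that depart from it. The pure-Python sketch below is only an illustration: the learning rate, threshold, and flat-list frame representation are assumptions chosen for clarity, and production systems typically use per-pixel statistical models (e.g. Gaussian mixtures) over real image arrays.

```python
# Minimal background-modeling sketch: per-pixel running average.
# Frames are flat lists of grayscale intensities (0-255). ALPHA and
# THRESHOLD are illustrative values, not from any particular system.

ALPHA = 0.05      # learning rate: how quickly the background adapts
THRESHOLD = 30    # intensity difference that counts as "foreground"

def update_background(background, frame, alpha=ALPHA):
    """Blend the new frame into the running background estimate."""
    return [(1 - alpha) * b + alpha * f for b, f in zip(background, frame)]

def foreground_mask(background, frame, threshold=THRESHOLD):
    """Mark pixels that differ substantially from the background."""
    return [abs(f - b) > threshold for b, f in zip(background, frame)]

# Toy example: a static 8-pixel "scene", then a bright object enters.
static = [10, 10, 10, 10, 10, 10, 10, 10]
background = [float(p) for p in static]
for _ in range(20):                  # let the model settle on the scene
    background = update_background(background, static)

moving = [10, 10, 200, 200, 10, 10, 10, 10]   # object covers pixels 2-3
mask = foreground_mask(background, moving)
print(mask)  # only the pixels covered by the object are flagged
```

Because the background keeps blending in new frames, anything that stops moving (a parked car, say) is gradually absorbed into the model, which is exactly the behavior that makes this approach useful for isolating transient activity.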
The methods for automatic interpretation of relevant activities in video are on a steady path of improvement. In part 3 of this post, we examine how all of this information could be combined to provide civilian 'intelligence' products that could be extremely useful to all who choose to sign up.
The author's affiliation with The MITRE Corporation is provided for identification purposes only, and is not intended to convey or imply MITRE's concurrence with, or support for, the positions, opinions, or viewpoints expressed by the author.