Drowning in Video, DoD Looks for High-Tech Help
May 03, 2012
Military.com | by Michael Hoffman
Real-time video has become a staple of war. Whether it's the video recorded from a special operator's headset, a sensor mounted to a drone, or a robot disabling a roadside bomb, today's military commanders demand to see operations as they happen.
But the quantity and quality of that video are growing faster than servicemembers can keep up.
Right now, the Air Force has an airman assigned to watch every second of full motion video broadcast from its remotely piloted aircraft over Afghanistan. If the service kept that system after fielding a new sensor that can collect 65 additional angles of action on a battlefield, it would run out of airmen. The Air Force would need 117,000 airmen dedicated to motion imagery exploitation, or roughly one-third of available troops, according to an Air Force study.
Computer programmers at the Defense Advanced Research Projects Agency and private firms in Silicon Valley believe they are on the cusp of a series of considerable breakthroughs to solve this puzzle for the military.
U.S. Special Operations Command has already requested an additional $143 million from Congress to fund advances made in full motion video sensors. Air Force Lt. Gen. Bradley Heithold, vice commander of Special Operations Command, wants all of special operations' full motion video sensors aboard manned and unmanned aircraft to record high definition video.
Clearing up the static
Sean Varah, founder of MotionDSP, has developed video processing technology that can stabilize drone video imagery with a click of a mouse. The process can also clear up haze, allowing analysts to clearly distinguish different people in the picture.
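MotionDSP has not published its stabilization algorithm, but the basic idea behind one standard approach can be sketched: estimate the frame-to-frame translation caused by camera jitter, then shift each frame back to cancel it. A minimal illustration using phase correlation (all of this is a generic textbook technique, not MotionDSP's actual method):

```python
import numpy as np

def estimate_shift(frame_a, frame_b):
    """Return the (row, col) displacement of frame_a relative to
    frame_b, estimated by phase correlation. A stabilizer would
    apply the opposite shift to frame_a to cancel the jitter."""
    fa = np.fft.fft2(frame_a)
    fb = np.fft.fft2(frame_b)
    cross = fa * np.conj(fb)
    cross /= np.abs(cross) + 1e-12            # keep phase, discard magnitude
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks past the midpoint wrap around and mean negative shifts.
    return tuple(int(p) if p <= s // 2 else int(p - s)
                 for p, s in zip(peak, corr.shape))

# Synthetic check: a random "frame" jittered down 3 rows, right 5 columns.
rng = np.random.default_rng(0)
frame = rng.random((64, 64))
jittered = np.roll(frame, (3, 5), axis=(0, 1))
print(estimate_shift(jittered, frame))  # → (3, 5)
```

For circular shifts of a noise-rich image the recovered displacement is exact; real footage adds rotation, parallax, and rolling-shutter effects that commercial tools must also handle.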
Varah actually started his company to improve the quality of YouTube clips, but he found that people posting videos of their cats playing the piano weren't eager to pay for better video quality. His company took a turn toward the military when the CIA contacted him about using the same technology to salvage video confiscated in intelligence raids.
The Ikena ISR software created by MotionDSP allows analysts to improve the resolution in the video and make it easier to extract intelligence, such as a license plate number or the type of clothing a person is wearing.
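Ikena's internals are proprietary, but one principle behind multi-frame enhancement is simple to demonstrate: once frames are aligned, averaging them suppresses independent sensor noise (roughly by the square root of the frame count), which is part of what makes fine details like a license plate legible. A synthetic sketch of that principle, not of Ikena itself:

```python
import numpy as np

rng = np.random.default_rng(1)
clean = rng.random((32, 32))              # stand-in for the true scene

# Simulate 16 already-aligned frames of the same scene,
# each corrupted by independent sensor noise.
frames = [clean + rng.normal(0.0, 0.2, clean.shape) for _ in range(16)]

single_err = np.abs(frames[0] - clean).mean()
stacked_err = np.abs(np.mean(frames, axis=0) - clean).mean()
print(single_err > 3 * stacked_err)  # → True: averaging 16 frames cuts noise ~4x
```

Real multi-frame super-resolution also exploits sub-pixel offsets between frames to recover detail beyond a single frame's resolution; the averaging above shows only the noise-suppression half of the story.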
Varah said he expects the demand for this type of software to increase as the military moves from standard to HD video collected by sensors found on drones.
"As the video quality improves, so will the expectations for it," he said.
Military commanders want the intelligence in real time, Varah said. Any delay is considered unacceptable.
"They don't want to wait for someone to restructure a video. They just don't have the time. They want it now, if not sooner," Varah said.
Drowning in video
Improved video quality is not the only advancement making its way to the full motion video realm. A deluge of video is about to wash over ground commanders and intelligence agencies with the introduction of Wide-Area Motion Imagery (WAMI) sensors.
The Air Force has already introduced one of the new sensors, called Gorgon Stare. Its deployment to Afghanistan was repeatedly delayed by technical problems, but it provides a capability the military has not had before. Its successor, the ARGUS, will take WAMI sensors to the next level, imagery experts said.
WAMI sensors film the ground differently, taking in a much larger field of view. The ARGUS can film up to 100 square kilometers at once, and within that frame the cameras capture 65 different angles from which airmen and soldiers can pick.
As the Air Force study pointed out, the services can no longer afford to assign a pair of human eyeballs to each feed. DARPA has assigned multiple teams to tackle the problem.
DARPA computer programmers have set forth to develop a "Video and Image Retrieval and Analysis Tool" that would allow intelligence analysts or ground teams to type in what they want the software to find in a real time video, then alert them when it appears. For example, airmen could set it to notify them whenever a person walks out of a targeted compound or someone is digging next to a road.
The program also allows analysts to search a video to find these specific events. An analyst can replay a suspicious activity or quickly find out how often an event is happening; for instance, when and how frequently a truck pulls up to a compound.
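DARPA has not published the tool's interface, but the alert-and-query behavior described above can be sketched over a hypothetical stream of tagged detections coming from an upstream vision model (all event names and timestamps here are illustrative):

```python
from collections import Counter

# Hypothetical (timestamp_seconds, activity_label) detections
# emitted by an upstream video-analysis model.
detections = [
    (10, "person_exits_compound"),
    (95, "truck_arrives"),
    (240, "digging_near_road"),
    (300, "truck_arrives"),
    (415, "person_exits_compound"),
]

# Real-time alerting: the analyst "types in" the activities to watch for.
watchlist = {"person_exits_compound", "digging_near_road"}
alerts = [(t, label) for t, label in detections if label in watchlist]
print(alerts[0])  # → (10, 'person_exits_compound')

# Forensic query: how often does a truck pull up to the compound?
counts = Counter(label for _, label in detections)
print(counts["truck_arrives"])  # → 2
```

The hard part, of course, is producing reliable detections in the first place; once events are tagged, the alerting and search layer is straightforward bookkeeping.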
The focus is not only video collected by airborne sensors. DARPA is also working on a program to teach a computer to recognize actions seen in video collected by ground sensors set up by Army and Marine Corps scouts.
James Donlon runs the Mind's Eye program for DARPA and said the key is teaching a computer to recognize verbs.
"We've come a long way in recognizing objects, but the next challenge is the verb," Donlon said at the Institute for Defense and Government Advancement's Full Motion Video for Defense Summit. "We want it to recognize the verb and then provide an English language descriptive."
Programming teams from universities and the private sector have made major strides, but challenges remain. Donlon and his team have set out a list of 7,676 verbs they want the software to recognize -- actions such as "pick up," "dig" and "run." In the program's second year, Donlon wants to see the software recognize the actions with more consistency.
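The final step Donlon describes, turning a recognized verb into an English-language description, is the easy half of the problem once the vision side has produced a structured detection. A toy sketch of that last mile (the templates, labels, and `describe` helper are all invented for illustration; they are not part of Mind's Eye):

```python
# Hypothetical structured output of a verb-recognition model,
# rendered into plain English. All names are illustrative.
TEMPLATES = {
    "pick_up": "{subject} picks up {obj}",
    "dig":     "{subject} digs near {obj}",
    "run":     "{subject} runs toward {obj}",
}

def describe(subject, verb, obj):
    """Render a (subject, verb, object) detection as an English sentence."""
    return TEMPLATES[verb].format(subject=subject, obj=obj)

print(describe("a man", "dig", "the road"))  # → a man digs near the road
```

The research challenge sits entirely upstream: deciding, from raw pixels, that "dig" is in fact what is happening.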
Reliability is one of the most important factors in the Mind's Eye program because ground commanders have to trust the software enough not to assign a soldier to watch cameras doing the vital work of protecting the unit's perimeter, Donlon said.
"We want to relieve units of tired eyeballs, but we have to do it the right way and make sure commanders have confidence in what we are doing," he said.