
Computer has eye for suspicious behavior

That guy meandering back and forth on a downtown street could be seriously lost. Or serious trouble. A computerized surveillance system in the works could help decide which, with its wide-angle panoramic shots, location tracking software and ‘smart’ video cameras that flag suspicious behavior.

Ultimately, the system could chart the normal walking trajectories of pedestrians to spot atypical movement and then seamlessly track people of interest as they travel across a city. The goal is an intuitive control system that allows security or law enforcement personnel to see “what people do, when they do it and where they do it,” said James Davis, an associate professor of computer science and engineering at Ohio State University.

“The best pair of eyes are still on a human,” he said, stressing that his research isn’t meant to procure a computerized replacement. With a new generation of intelligent surveillance systems promising to recognize relevant patterns amid all the clutter, however, human controllers may gain more control than ever before over the constellations of video cameras monitoring public streets and private companies.

At many surveillance centers, Davis said, workers face a bank of TV monitors showing tens or hundreds of video streams. “When they’re looking at one monitor, that means that 99 others are not being observed,” he said. Computers that filter each video stream for atypical content, on the other hand, could alert security personnel to those scenes that merit a closer look.

For the first phase of the project, the researchers expanded the narrow “soda straw” views of many security cameras into wide-angle panoramas. Each computer-guided camera snaps pictures from every angle within its field of view, Davis said, and software merges them into a seamless panoramic image resembling that of a large fish-eye lens.
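As a rough illustration of the idea, and not the Ohio State team’s actual pipeline, the sketch below merges a set of overlapping snapshots into one panorama using OpenCV’s stitching module; the snapshot file names and frame count are hypothetical.

```python
# Illustrative sketch only: merge overlapping snapshots from a pan-tilt
# camera sweep into a single panoramic image with OpenCV's stitcher.
# File names and the number of frames are made-up placeholders.
import cv2

# Snapshots taken as the camera steps through every angle in its field of view
frames = [cv2.imread(f"snapshot_{i:02d}.jpg") for i in range(12)]

stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, panorama = stitcher.stitch(frames)

if status == cv2.Stitcher_OK:
    cv2.imwrite("panorama.jpg", panorama)   # the wide-angle, fish-eye-like view
else:
    print("Stitching failed with status code", status)
```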

The stitched-together view isn’t quite live, but a controller can click on any spot within it to pull up live footage of that location, or draw a line on the screen to show where the camera should aim, such as along a crime-prone alleyway.
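One standard way to turn such a click into a camera command, offered here only as a guess at the mechanics rather than the lab’s implementation, is to map panorama pixels onto the camera’s pan and tilt range; the dimensions and angle ranges below are invented.

```python
# Rough sketch: convert a click on the stitched panorama into approximate
# pan/tilt angles for the live camera. All constants are placeholders.
PAN_RANGE = (-180.0, 180.0)   # degrees the panorama spans horizontally
TILT_RANGE = (-30.0, 60.0)    # degrees it spans vertically
PANO_W, PANO_H = 8000, 2000   # panorama size in pixels

def click_to_pan_tilt(x: int, y: int) -> tuple[float, float]:
    """Map a panorama pixel (x, y) to pan/tilt angles for the camera."""
    pan = PAN_RANGE[0] + (x / PANO_W) * (PAN_RANGE[1] - PAN_RANGE[0])
    tilt = TILT_RANGE[1] - (y / PANO_H) * (TILT_RANGE[1] - TILT_RANGE[0])
    return pan, tilt

# A controller clicks the mouth of an alleyway at pixel (5120, 1400)
pan, tilt = click_to_pan_tilt(5120, 1400)
print(f"slew camera to pan={pan:.1f} deg, tilt={tilt:.1f} deg")
```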

But one camera can only see so much.

To expand the system’s utility, Davis and his team have designed software that maps the fish-eye panoramas onto an aerial view of an area. Click on a spot on the Google-like map, and any monitors displaying that area will pop up.

Tying the panoramas to map coordinates means every pixel can be assigned a latitude and longitude. Essentially, one worker could recruit multiple cameras to converge upon a street corner or city hall with a simple point and click of the computer mouse.
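A common technique for this kind of geo-referencing, used here purely as an illustration and not as a description of the researchers’ software, is a planar homography fitted to a few landmarks visible both in the camera view and on the aerial map; every ground-plane pixel can then be converted to longitude and latitude. The landmark values below are invented.

```python
# Sketch of geo-referencing via a planar homography (illustrative values only).
import numpy as np
import cv2

# Pixel positions of four landmarks on the ground plane of the camera view
pixels = np.array([[120, 700], [1100, 690], [980, 300], [200, 310]], dtype=np.float32)
# The same landmarks' (longitude, latitude) on the aerial map
lonlat = np.array([[-83.0160, 40.0020], [-83.0150, 40.0021],
                   [-83.0151, 40.0030], [-83.0159, 40.0029]], dtype=np.float32)

H, _ = cv2.findHomography(pixels, lonlat)

def pixel_to_lonlat(x: float, y: float) -> tuple[float, float]:
    """Convert a ground-plane pixel to approximate map coordinates."""
    p = H @ np.array([x, y, 1.0])
    return float(p[0] / p[2]), float(p[1] / p[2])

print(pixel_to_lonlat(640, 500))   # roughly where that pixel sits on the map
```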

With each camera’s field of vision linked to geo-referenced coordinates, anyone who walks or drives across a scene also can be tracked. “Say you’re looking at someone through a particular camera, and you click on that person on your video screen,” Davis said. “The camera would latch onto that person and pan and zoom around them as long as it could.”
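A bare-bones version of that “latch on” step, sketched below with an off-the-shelf OpenCV tracker rather than the researchers’ own software, initializes a tracker on a box around the clicked person and follows it frame by frame; the video source, box size and tracker choice are all assumptions.

```python
# Minimal "click to latch on" sketch using OpenCV's CSRT tracker
# (needs the opencv-contrib-python package). Everything here is illustrative.
import cv2

cap = cv2.VideoCapture("camera_feed.mp4")   # placeholder video source
ok, frame = cap.read()

# Bounding box roughly centered on the operator's click
x, y, w, h = 630, 420, 60, 140
tracker = cv2.TrackerCSRT_create()
tracker.init(frame, (x, y, w, h))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    found, box = tracker.update(frame)
    if found:
        bx, by, bw, bh = (int(v) for v in box)
        # In a real pan-tilt-zoom setup, these coordinates would drive
        # camera commands to keep the person centered in the frame.
        cv2.rectangle(frame, (bx, by), (bx + bw, by + bh), (0, 255, 0), 2)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(30) & 0xFF == 27:   # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```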

The task of tracking a suspicious pedestrian could be handed off to successive cameras within the network, while the walking trajectory is captured on the aerial map. “In real time, we could feed back where that person’s coordinates are in the real world, so someone at a remote location could go check it out,” Davis said.
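The handoff itself can be imagined as a simple lookup: once the person’s position is known in map coordinates, the system picks whichever cameras cover that spot. The toy version below, with invented camera names and coverage boxes, is only meant to make the idea concrete.

```python
# Toy camera-handoff logic with invented coverage rectangles,
# each expressed as (lon_min, lon_max, lat_min, lat_max).
COVERAGE = {
    "cam_12": (-83.0165, -83.0150, 40.0015, 40.0025),
    "cam_13": (-83.0152, -83.0138, 40.0015, 40.0025),
    "cam_21": (-83.0165, -83.0150, 40.0024, 40.0034),
}

def cameras_covering(lon: float, lat: float) -> list[str]:
    """Return the cameras whose coverage rectangle contains this map point."""
    return [name for name, (lo0, lo1, la0, la1) in COVERAGE.items()
            if lo0 <= lon <= lo1 and la0 <= lat <= la1]

# As the tracked person walks east, responsibility shifts from cam_12 to cam_13
print(cameras_covering(-83.0158, 40.0020))   # ['cam_12']
print(cameras_covering(-83.0151, 40.0020))   # ['cam_12', 'cam_13']
```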

Scrutinizing a walk in the park
But how can computers help decide who warrants a closer look? Over time, he said, the system can build patterns about where people typically go, and when, capturing them as ribbons of movement within a database. With statistical models of those trajectories to draw upon, a controller can begin asking questions like, “Does this person’s movement pattern fit one of the models we’ve seen over time?”
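One very simplified way to ask that question in code, shown below as a sketch rather than the lab’s statistical models, is to resample each recorded path to a fixed number of points and flag a new path when it is far from every path seen before; the distance threshold here is arbitrary.

```python
# Sketch of trajectory "typicality" scoring (not the Ohio State models).
import numpy as np

def resample(path: np.ndarray, n: int = 20) -> np.ndarray:
    """Resample a (k, 2) trajectory of map coordinates to n evenly spaced points."""
    t = np.linspace(0, 1, len(path))
    ti = np.linspace(0, 1, n)
    return np.stack([np.interp(ti, t, path[:, 0]),
                     np.interp(ti, t, path[:, 1])], axis=1)

def is_atypical(new_path: np.ndarray, history: list[np.ndarray],
                threshold: float = 5.0) -> bool:
    """Flag a path whose mean distance to the closest known path exceeds the threshold."""
    new = resample(new_path)
    dists = [np.mean(np.linalg.norm(new - resample(old), axis=1)) for old in history]
    return min(dists) > threshold

# Typical paths cut straight across the plaza; a meandering path gets flagged
straight = np.array([[0, 0], [10, 1], [20, 2], [30, 3]], dtype=float)
meander = np.array([[0, 0], [8, 6], [3, 12], [11, 4], [2, 9]], dtype=float)
print(is_atypical(meander, [straight]))   # True: hand this one to a guard
```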

Scientists like Rama Chellappa at the University of Maryland also have tapped gait as a powerful indicator that can distinguish the typical from the potentially dangerous. Chellappa’s system assigns signature patterns to videotaped activities like walking or carrying a backpack, enabling computer software to recognize differences in movement symmetry when someone is walking with a hidden object attached to a belt or an ankle.

At the University of Washington, other researchers are developing tools to manipulate objects or people within videos, easily embedding identifying tags, attaching arrows that highlight a person’s walking trajectory, or capturing the moment when a specific movement occurs.

At Ohio State, an experimental surveillance network feeds right into the Davis lab. External information such as class schedules, crime reports and even weather data could add the proper context to each field of view and help create models of typical behavior at different locations. Beyond acts flagged as suspiciously odd and in need of further review, the surveillance system also might help locate a person who is in distress.

“We want the system to learn that on its own,” Davis said. “You might imagine someone coming out and meandering around the area. We’d like to be able to detect that event and hand that off to a security guard. It could be someone who is intoxicated, who is lost, or who is trying to break into a building.”

In a park setting, however, a meander may be the norm, underscoring the need for proper context. Similarly, when new students arrive on campus at the beginning of fall quarter, Davis routinely spots the ones who are lost, circling the spaces between buildings and turning back and forth repeatedly.

James A. Fogarty, an assistant professor of computer science and engineering at the University of Washington, said he sees the system being developed by Davis as a good collaboration between security personnel and a computerized system, and a “nice posing” of the growing problem of what to do with all that information. “You’ve already got systems that are deleting your spam for you — it’s saving the person the trouble of viewing the tedious and uninteresting,” he said.

With a similarly smart analysis of data streams at a security center, Fogarty said, “one person, instead of monitoring 20 cameras, might be able to monitor 200 cameras.” And because the computerized system is intended to be used in collaboration with humans instead of as a stand-alone watchdog, “you don’t have to be 100 percent right that something’s going wrong.” Instead, he said, the real question becomes, “Can you design something that will be effective even given those unavoidable mistakes?”

Another question, increasingly, is how to balance privacy with security.

Davis said he recognizes the privacy concerns and stressed that his research isn’t intended to gather specific information about individuals. “We care about the activity and not the identity, and that’s how we deal with privacy,” he said. “We don’t use any face recognition.” He suggested that other users could add such software to the system, however.

If all goes well, Davis said his team’s tracking systems could be commercialized within a year. The behavioral analysis, while more challenging, could still yield a “really robust” system within 3 to 5 years. The military has already shown interest, with the Air Force Research Laboratory’s TecEdge Collaboration supporting two undergraduate researchers in his lab (the National Science Foundation also has lent support to a graduate student).

For the foreseeable future, though, security workers are unlikely to be entirely displaced by computers. After all, even an efficient model of typical human behavior only goes so far. “For things that break that typicality,” Davis said, “we can hand that off and say, ‘This looks odd – have someone go check it out.’”