AN IMAGE processing system that obscures the position from which photographs are taken could help protestors in repressive regimes escape arrest - and give journalists "plausible deniability" over the provenance of leaked photos.
The technology was conceived in September 2007, when the Burmese junta began arresting people who had taken photos of the violence meted out by police against pro-democracy protestors, many of whom were monks. "Burmese government agents video-recorded the protests and analysed the footage to identify people with cameras," says security engineer Shishir Nagaraja of the Indraprastha Institute of Information Technology in Delhi, India. By checking the perspective of pictures subsequently published on the internet, the agents worked out who was responsible for them.
If a photographer's "location privacy" is not protected, their personal safety is at risk, Nagaraja says. This inspired him, together with security researcher Péter Schaffer and computer-vision specialist Djamila Aouada at the University of Luxembourg, to find a way of disguising the photographer's viewpoint.
Their method is to use graphics processors to synthesise photos that appear to have been taken from a viewpoint where no photographer actually stood.
"We use a computer-vision technique called view synthesis to combine two or more photographs to create another very realistic-looking one that looks like it was taken from an arbitrary viewpoint," explains Schaffer.
The images can come from more than one source: what's important is that they are taken at around the same time of a reasonably static scene from different viewing angles. Software then examines the pictures and generates a 3D "depth map" of the scene. Next, the user chooses an arbitrary viewing angle for a photo they want to post online.
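The depth map described above rests on a standard stereo relation: when two photos of the same scene are taken from slightly different positions, the apparent shift of each point between the images (its disparity) reveals how far away it is. The sketch below illustrates that relation in NumPy; the function name and the camera parameters (an 800-pixel focal length, a 0.5 m baseline) are illustrative assumptions, not details from the researchers' system.

```python
import numpy as np

def disparity_to_depth(disparity, focal_px, baseline_m):
    """Convert per-pixel disparity (pixels) to depth (metres).

    Uses the pinhole-stereo relation depth = focal * baseline / disparity.
    Pixels with zero disparity (e.g. occluded or unmatched) map to inf.
    """
    disparity = np.asarray(disparity, dtype=float)
    with np.errstate(divide="ignore"):
        return np.where(disparity > 0.0,
                        focal_px * baseline_m / disparity,
                        np.inf)

# A toy 2x2 disparity map from two photos taken ~0.5 m apart,
# assuming an 800-pixel focal length: larger shifts mean nearer points.
disp = np.array([[40.0, 20.0],
                 [10.0,  0.0]])
depth = disparity_to_depth(disp, focal_px=800.0, baseline_m=0.5)
# depth values: 10 m, 20 m, 40 m, and inf for the unmatched pixel
```

Once every pixel has a depth, the scene can be re-projected into any chosen viewpoint, which is the step the user controls.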
The photo then goes through a "dewarping" stage, in which straight edges such as walls and kerbs are corrected for the new point of view, and "hole filling", in which nearby pixels are copied into gaps that appear because some elements were hidden in the original shots. The result is pretty convincing, says Schaffer. "There are some image artefacts but they are acceptable," he says (arxiv.org/abs/1106.2696). The team intends to make the software open source.
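The hole-filling step can be illustrated with a minimal sketch: gaps (marked as NaN here) are filled by copying the nearest valid pixel along each row. This is an assumed toy version of the copy-from-neighbours idea the article describes, not the researchers' actual algorithm, which would use more sophisticated inpainting.

```python
import numpy as np

def fill_holes_row(row):
    """Fill NaN 'holes' in a 1D pixel row by copying the nearest valid pixel."""
    row = np.asarray(row, dtype=float).copy()
    valid = ~np.isnan(row)
    if not valid.any():
        return row  # nothing to copy from
    idx = np.arange(len(row))
    valid_idx = idx[valid]
    # For every position, find the index of the closest valid pixel...
    nearest = valid_idx[np.abs(valid_idx[None, :] - idx[:, None]).argmin(axis=1)]
    # ...and copy its value in.
    return row[nearest]

row = np.array([7.0, np.nan, np.nan, 3.0])
filled = fill_holes_row(row)
# → [7., 7., 3., 3.]: each hole takes the value of its nearest neighbour
```

The visible seams such naive copying leaves behind are one source of the "acceptable" artefacts Schaffer mentions.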
Matthias Zwicker, a graphics engineer at the University of Bern in Switzerland, thinks the technology is on the right track. "Anonymising the photographer could be a crucial step in protecting the source of contentious material. I'm sure this computer-vision technology will evolve into a valuable tool."
Schaffer's team knows it is entering an arms race of sorts: even consumer-level imaging tools could help oppressive regimes. For instance, University of Washington and Google researchers last week unveiled software that can identify a specific person in every picture in a large set of photos on a website like Flickr.com.