Wednesday 30 November 2011

Computer Model Spots Image Fraud

The UK, France and Norway are considering legislation to require digitally altered images to be labelled as such

From Scientific American


"Scientists in the United States have come up with a tool for automatically analysing digital photographs, making it possible to gauge the extent to which images have been altered or retouched.
Advances in image-manipulation software have made it trivial to radically alter the appearance of models and celebrities in photos, notes Hany Farid, a computer scientist who studies digital forensics and image analysis at Dartmouth College in Hanover, New Hampshire. Farid created the analysis tool with his colleague Eric Kee, also at Dartmouth College. The promotion of unrealistic body images in some advertisements and magazines is thought to have a role in triggering eating disorders, explains Farid, and some countries, including the United Kingdom, France and Norway, are now considering legislation to require digitally altered images to be labelled as such.
The idea is to use the software to generate a scale that can be printed next to published images, say Farid and Kee, so that readers can tell how accurately they represent the originals. The hope is that this will shed light on the culture of 'airbrushing' in the advertising and fashion-magazine industries. The software could also help to deter fraud in scientific images, they say.
However, simply labelling manipulated images is not the solution, says Farid, because this would tar all altered images with the same brush — even those that used legitimate adjustments such as cropping and colour modification. Farid and Kee's solution, published online today in the Proceedings of the National Academy of Sciences USA, is a system that can score on a scale of one to five how much an altered image has strayed from reality.
Compare and contrast
Farid and Kee first compared more than 450 pairs of images before and after manipulation, quantifying their dissimilarity according to eight different statistical parameters. These ignore any global changes, such as cropping, and instead focus on local geometric modifications—for example, by how many pixels the shape of a person has altered—and photometric changes such as smoothing or sharpening.
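The article does not enumerate the eight statistics, but the flavour of the measurement can be sketched in Python. The snippet below is a hypothetical illustration only: it extracts one geometric measure (mean optical-flow displacement between the original and the retouched photograph) and one photometric measure (change in high-frequency Laplacian energy, a rough proxy for smoothing or sharpening). The function name, the choice of measures and the OpenCV routines are assumptions for illustration, not the authors' actual parameters.

```python
# Hypothetical sketch of the kind of local measurements described above.
# Assumes the before/after images are the same size and roughly aligned.
import cv2
import numpy as np

def dissimilarity_features(before_path, after_path):
    before = cv2.imread(before_path, cv2.IMREAD_GRAYSCALE)
    after = cv2.imread(after_path, cv2.IMREAD_GRAYSCALE)

    # Geometric change: dense optical flow between the original and the
    # retouched image; the mean flow magnitude approximates how far,
    # in pixels, local shapes have been displaced.
    flow = cv2.calcOpticalFlowFarneback(before, after, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mean_displacement = np.linalg.norm(flow, axis=2).mean()

    # Photometric change: difference in high-frequency (Laplacian) energy,
    # a rough proxy for how much smoothing or sharpening was applied.
    sharpness_change = (cv2.Laplacian(after, cv2.CV_64F).var()
                        - cv2.Laplacian(before, cv2.CV_64F).var())

    return np.array([mean_displacement, sharpness_change])
```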
To combine these parameters into one metric, the researchers asked more than 350 volunteers to compare the same pairs of images, ranking them on a scale of 1 (very similar) to 5 (very different). These ratings were then used to train a machine-learning algorithm to extract a single score from the measured values that would faithfully reflect the perceptual judgement of the volunteers.
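As a minimal sketch of this training step, assume each image pair has already been reduced to a feature vector of the eight measured statistics and paired with its mean volunteer rating on the 1-to-5 scale. A regression model can then be fitted to predict the perceptual score from the measurements. The article does not name the learning algorithm, so scikit-learn's support vector regression below is a stand-in, not the authors' method.

```python
# Minimal sketch of the training step, under the assumptions stated above.
from sklearn.svm import SVR

# features: (n_pairs, 8) array of measured statistics per before/after pair
# ratings:  (n_pairs,) array of mean volunteer scores,
#           1 (very similar) to 5 (very different)
def train_perceptual_scorer(features, ratings):
    model = SVR(kernel="rbf", C=10.0)  # illustrative hyperparameters
    model.fit(features, ratings)
    return model
```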
The resulting system is able to rate the extent of manipulation in new pairs of images with an accuracy of about 80%, says Farid. Although the technique is currently tuned specifically to images of people, Farid says that the underlying algorithms could easily be adapted to analyse scientific images, with journal editors and scientists supplying the judgements used during training.
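One plausible way to check such a system on unseen pairs, assuming a held-out set of feature vectors and their human ratings, is sketched below. The article quotes "about 80%" without defining the measure, so the correlation used here is only an illustrative choice of agreement score.

```python
# Rough sketch of a held-out evaluation, assuming a trained model and
# unseen feature vectors with corresponding mean human ratings.
import numpy as np

def evaluate(model, held_out_features, held_out_ratings):
    predicted = model.predict(held_out_features)
    # Correlation between predicted scores and the volunteers' ratings,
    # one possible proxy for the "accuracy" cited in the article.
    agreement = np.corrcoef(predicted, held_out_ratings)[0, 1]
    return agreement
```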
Farid notes that image manipulation is a growing problem in the scientific community, calling it “extremely disturbing”. He explains that it has become all too easy for some researchers to misrepresent their results, enhancing DNA bands in a gel, for example, or scrubbing out background blemishes, either to innocently make images look better or, in some cases, to skew the results deliberately.
Picture imperfect
It is not clear why scientific image fraud is a growing problem, says John Dahlberg, director of investigative oversight for the Office of Research Integrity in Rockville, Maryland, whose division investigates cases of alleged research misconduct. “It seems the scientific community is very aggressive about beautifying its images,” he says. “About 70% of our cases involve questioned images.”"
