Neural network taught to alter photos so they deceive face recognition systems



According to its creators, the program defeats face recognition in 99.5% of cases.

A team of scientists from the University of Toronto has developed the Privacy Filter, a tool designed to stop face recognition systems from identifying people in photographs. A detailed description of the project is published on the university’s website.

At the heart of the program is a pair of neural networks that compete with each other: one learned to recognize faces in photographs, while the second learned to alter the images so that the first would fail. As a result, the system can block identification of people in a photo in almost 100% of cases.
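The adversarial idea above can be sketched in miniature. The example below is purely illustrative and is not the authors' Privacy Filter: it uses a toy logistic "recognizer" instead of a deep network, and a gradient-sign perturbation (in the spirit of FGSM) as the "attacker" that lowers the recognizer's match score.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained recognizer: a logistic model scoring how
# strongly a 64-pixel "image" matches one enrolled identity.
# (Hypothetical; the real system uses deep neural networks.)
w = rng.normal(size=64)
b = -0.1

def match_score(image):
    """Probability that `image` matches the enrolled identity."""
    return 1.0 / (1.0 + np.exp(-(image @ w + b)))

def disrupt(image, eps=0.1):
    """Gradient-sign perturbation: shift every pixel a small step in
    the direction that lowers the match score (d(logit)/d(image) = w)."""
    return image - eps * np.sign(w)

# A synthetic image the toy model matches confidently.
face = 2.0 * w / (w @ w)
print(match_score(face), match_score(disrupt(face)))
```

The tiny per-pixel shifts leave the image visually similar while pushing the match score below the recognition threshold, which is the same game the two competing networks play at scale.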

This effect is achieved by changing individual pixels of the original image. Face recognition algorithms typically locate key points of the face, such as the corners of the eyes or lips, and measure the distances between them; these measurements form a unique “imprint” of a person’s face. To deceive the program, it is enough to make only small changes around those key points.
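The key-point idea can be illustrated with a short sketch. The landmark coordinates below are made up for the example: the “imprint” is simply the matrix of pairwise distances between key points, and moving a single landmark by one pixel already changes it.

```python
import numpy as np

def imprint(landmarks):
    """Pairwise distances between key points -- a crude face 'imprint'."""
    pts = np.asarray(landmarks, dtype=float)
    diffs = pts[:, None, :] - pts[None, :, :]
    return np.sqrt((diffs ** 2).sum(axis=-1))

# Hypothetical landmark coordinates (x, y) in pixels:
# eye corners, nose tip, lip corners.
face = [(30, 40), (70, 40), (50, 60), (42, 80), (58, 80)]

# Shift one lip corner by a single pixel -- a barely visible edit.
tweaked = [p if i != 3 else (43, 80) for i, p in enumerate(face)]

delta = np.abs(imprint(face) - imprint(tweaked)).max()
print(delta)  # nonzero: the imprint no longer matches exactly
```

Because the imprint is built entirely from these distances, a perturbation too small to notice by eye is enough to break the match.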

At the start of Privacy Filter testing, the face recognition system identified people in the photos 100% of the time. With the finished program applied, that figure fell to 0.5%.
