Physio·gnomy

Loyal citizen calculator

The project Physio·gnomy questions and rethinks how facial recognition technologies interact with humans, exploring the possibilities and limits of this interaction. Time and time again it has been noted that facial recognition technology is used to unjustly profile people. To a large extent, this is made possible by the technology’s ability to invade our privacy. With the growing advancement and reach of this technology, are we able to choose when to be visible or invisible?

Physio·gnomy aims to address and problematize this issue. This project explores the tension between facial recognition technologies, privacy and visibility.

Physio·gnomy is a dystopian setup that mimics a facial recognition algorithm calculating a citizen’s criminal-record score by predicting it from the person’s facial traits. With this project, we aim to create a setting where the audience is confronted with the calculation of their “citizen score” and judged by an algorithm on their appearance. In an era where we trade our digital privacy for our safety, how much are we ultimately willing to sacrifice as law-abiding citizens?

Physio·gnomy is a pseudoscience based on associating personal characteristics and traits with physical differences and especially with elements of people’s faces. The word is derived from the ancient Greek word for “nature”, physis, and the one for “judge”, gnomon.

Facial recognition systems can detect our emotions: when we are sad, happy, frustrated or excited. In 2020, researchers at Harrisburg University claimed they had developed a deep neural network model that could predict whether someone was a criminal based solely on a picture of their face, with 80 percent accuracy and no racial bias. They said the software was intended to help law enforcement and prevent crime. In fact, the researchers never provided evidence for their claim, and following an open letter signed by more than 1,000 machine-learning researchers, sociologists, historians and ethicists condemning the concept, the study was never published. But what if this technology were combined with an algorithm that ‘predicts’ criminality based on our faces? How can we unveil the invisible systems behind criminal facial recognition during riots?

Here, we present a prototype, work-in-progress version of Physio·gnomy, which will ultimately construct a visual and aural space that recreates facial recognition systems. Visitors will enter the space and experience what it is like to be profiled by facial recognition, based on material from real-life riots and protests where protesters have been criminally profiled and sometimes convicted.

The project will also include a machine-learning algorithm that detects emotions and generates a variety of mappings showing an individual’s facial traits, creating a story around this technology. A fake website screen displays our “crime rate” as we change our emotions and facial expressions, and the audience interprets our facial traits in a gamified, dystopian environment. Is this the e-law platform of the future?
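For illustration, below is a minimal sketch of how such a setup could be wired together. It is only an assumption about the plumbing, not the project’s actual implementation: it uses OpenCV’s stock Haar-cascade face detector and a webcam, while the emotion classifier and the “crime rate” mapping are mocked placeholders, since the project’s real model and scoring rules are not published.

import random
import cv2

# OpenCV's bundled frontal-face detector (ships with opencv-python).
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

EMOTIONS = ["neutral", "happy", "sad", "angry", "surprised"]

def mock_emotion(face_crop):
    # Placeholder for an emotion classifier (e.g. a small CNN).
    return random.choice(EMOTIONS)

def mock_citizen_score(emotion):
    # Arbitrary, deliberately absurd "crime rate" mapping (0-100);
    # invented here purely for the sketch.
    weights = {"neutral": 20, "happy": 10, "sad": 40, "angry": 80, "surprised": 55}
    return weights.get(emotion, 50) + random.randint(-5, 5)

cap = cv2.VideoCapture(0)  # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        emotion = mock_emotion(frame[y:y + h, x:x + w])
        score = mock_citizen_score(emotion)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)
        cv2.putText(frame, f"{emotion} | crime rate: {score}%", (x, y - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 0, 255), 2)
    cv2.imshow("Physio-gnomy (sketch)", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()

In the installation itself, the on-screen overlay would be replaced by the fake website screen described above, which presents the score to the audience.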

Xsenofemme (Ines Borovac & Ginevra Petrozzi / Rotterdam, the Netherlands)

Xsenofemme is an artistic duo formed by interdisciplinary artists Ines Borovac and Ginevra Petrozzi. The duo started to take shape in 2019 during the Social Design master’s programme at Design Academy Eindhoven, from which they both graduated. The motivation behind the duo is to merge their different sensibilities in order to investigate systems of control and their distribution of power over global bodies. By overlapping Borovac’s performative feminist practice with Petrozzi’s techno-political practice, the duo ultimately aims to transform intangible digital mechanisms and bring them back into physical reality, whilst modelling new purposes for them. Borovac and Petrozzi each develop independent practices whilst collaborating within Xsenofemme, producing new works that merge their knowledge. Their work is often materialised in the form of performances, interactive installations and video. The goal of Xsenofemme is to contribute to the discourse on emerging technological conditions in the context of surveillance, bringing back bodies as potent political tools.

Arina Kapitanova (Berlin, Germany)

Arina Kapitanova is a designer and researcher with experience in urban planning and strategic consulting based in Berlin. Her focus is the urban environment and everything that comes with it, including infrastructure, social connections, the interweaving of agencies and actors, and space and information design.

Greeshma Chauhan (Amsterdam, the Netherlands)

Greeshma Chauhan is a philosophy Master’s graduate from the University of Amsterdam with a particular interest in the philosophy of technology, as well as a freelance artist specialising in illustrations, murals and fine arts.

Deniz Kurt (Amsterdam, the Netherlands)

Deniz Kurt is a media artist and creative technologist based in Amsterdam. Deniz works with AI and machine-learning applications such as GANs for data visualization.

Michael Zerba (Tilburg, the Netherlands)

Michael Zerba seeks to shape the digital world through his passions for privacy, data protection and fitness. He explores various topics in the field of law, business and technology at mrdataprotection.de and mischatech.de.

This project questions how we can reclaim our face and visibility in the digital realm, and how we can learn and unlearn our practices. We have gathered some tips and toolkits for you to look at:
Trevor Paglen’s “ImageNet Roulette” featured in The New York Times | Pace Gallery

About Facial Recognition & Surveillance

The theme of surveillance, specifically that of facial recognition, came up during multiple debates by CODE 2022 participants. It was mainly discussed because this technology can be used to track people in public space and threaten privacy. Governments make the argument that camera surveillance improves safety, but they fail to acknowledge that privacy is often a crucial part of safety. Physio·gnomy was developed during CODE 2022 to address the issue that citizens can no longer choose when they want to remain anonymous.