DOOR and BOX

DOOR and BOX are two artworks in an ongoing series that researches and exposes some of the social and political impacts of artificial intelligence, computer vision, and automation. Both projects use a computer vision system that predicts the interactor’s ethnicity and change their behaviour depending on this prediction: the door unlocks only if its user is white; the gumball machine delivers free candy only if its user is white.

The artworks showcase a possible use of computer vision, making explicit the fact that every technological implementation crystallises a political worldview.


What are the politics of everyday objects?

Machine learning (ML) algorithms find complex relationships in data, recognising patterns that humans may not be able to identify, or may not know how to express formally as explicit rule-based programs. This makes it possible to create systems with unpredictable, yet somehow controllable, behaviour shaped by training datasets. While this dataset-based control is where ML’s greatest strength resides, it is also dangerous: it often introduces biases through over-fitting or under-fitting and, more importantly, it learns biases already present in the training data. These dangers are amplified by the fact that such systems are relatively difficult to debug.

Recent advances in computer vision and artificial intelligence have made it possible to infer (predict) information about a person from camera data, including identity, facial expression, and ethnicity, among other attributes. Nowadays, companies such as Affectiva, Clarifai, and Haystack provide image-processing services that include these predictions, among several others.

Despite the potential benefits of face recognition, its widespread application entails several risks, from privacy breaches to systematic discrimination in areas such as hiring, policing, benefits assignment, and marketing.

DOOR and BOX are everyday objects augmented by artificial intelligence. The pieces reflect on the power asymmetries that technology instantiates, aiming to provoke a reflection on the aesthetics of our relationship with it. The artworks also showcase the advancements and limitations of computer vision and machine learning, allowing the public to experience first-hand both its power and its inherent biases.

[Image: ethnicities.jpg]

The artworks use a standard webcam (HD, 30 FPS) to acquire data on their interactors. Two different implementations have been created, both running at interactive rates on a 2018 MacBook Pro laptop.

The first uses a deep neural network based on David Sandberg’s FaceNet implementation, trained with TensorFlow on the UTKFace dataset. UTKFace contains 23,708 faces labelled with five races: White, Black, Asian, Indian, and Others (e.g. Hispanic, Latino, Middle Eastern).
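As a rough sketch of how such a classifier might work: FaceNet maps a cropped face to a fixed-length embedding, and a small softmax head trained on UTKFace maps that embedding to one of the five labels. The dimensions, weights, and function names below are illustrative assumptions, not the artists’ actual code.

```python
import numpy as np

RACES = ["White", "Black", "Asian", "Indian", "Others"]
EMBEDDING_DIM = 128  # typical FaceNet embedding size (assumed here)

# Stand-ins for the weights a real head would learn on UTKFace.
rng = np.random.default_rng(0)
W = rng.normal(scale=0.01, size=(EMBEDDING_DIM, len(RACES)))
b = np.zeros(len(RACES))

def softmax(z):
    """Numerically stable softmax over one logit vector."""
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def predict_race(embedding):
    """Return (label, probability) for a single FaceNet embedding."""
    p = softmax(embedding @ W + b)
    i = int(np.argmax(p))
    return RACES[i], float(p[i])
```

In use, each webcam frame would be face-detected, cropped, passed through FaceNet to get the embedding, and then through a head like `predict_race` to obtain the label that gates the device.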

A second implementation uses Affectiva.ai’s pre-trained model, run locally. This service also classifies input images into five ethnicities (White, Black, South East Asian, Asian, and Latino).

Once the interactor’s ethnicity has been predicted, the device is kept locked or unlocked: DOOR uses a standard electric strike and a relay, while BOX uses a custom mechanism, both controlled by an Arduino Uno microcontroller.

DOOR and BOX are collaborations between Tomás Laurenzo and Katia Vega.

DOOR was only possible thanks to Stochastic Labs in Berkeley, CA, which hosted the artists, and to the help of Alejandro Rodríguez and Tatjana Kudinova.

BOX was exhibited at TEI 2019, in Tempe, Arizona, USA.
