woman, holding 

The work addresses the algorithmic bias of commercially available image-description and image-creation services. Arranged into an ambiguous form that could be read as a shrine, a memorial, or a futuristic display, the work engages with the inherent biases of commercial facial-analysis and image-description services, which are trained on data sets such as ImageNet and inherit the biases embedded in them.

In the creation of the work, a number of images of the artist were taken and processed through multiple machine-learning services that describe imagery. The services were largely unbiased when describing nonhuman subjects and men: they did not use evaluative descriptors such as ‘pretty’, ‘good looking’, or ‘sexy’ when describing nature, cityscapes, or men wearing clothes. When the algorithms described images of women, however, they used evaluative descriptors in most cases. Male images very rarely attracted the same level of evaluative description, even compared with images of women that projected no sexualised undertones; a shirtless man posing in a club advertisement was merely labelled ‘serious’ and ‘fine-looking’.

The artist then performed the converse operation using commercial text-to-image services: the text output of the image-description service was run through a text-to-image process, with evaluative descriptors such as ‘pretty’ and ‘good looking’ removed. For example, the descriptor ‘woman in front of a mirror’ produces a semi-abstract blob that can be recognised as a posed selfie in underwear, a post-beach photo, a mirror selfie, and other variations that share a visual language of the objectification of women.
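The intermediate step described above, removing evaluative descriptors from a machine-generated caption before it is passed to a text-to-image service, can be sketched roughly as follows. This is a minimal illustration, not the artist's actual code: the descriptor list and the example caption are assumptions for demonstration.

```python
import re

# Illustrative list of evaluative descriptors, drawn from the examples in the
# text; the actual vocabulary used in the work is not specified.
EVALUATIVE_DESCRIPTORS = ["pretty", "good looking", "sexy", "fine-looking"]

def strip_evaluative(caption: str) -> str:
    """Remove evaluative descriptors from a caption, tidying leftover spaces."""
    for word in EVALUATIVE_DESCRIPTORS:
        caption = re.sub(r"\b" + re.escape(word) + r"\b", "", caption,
                         flags=re.IGNORECASE)
    # Collapse doubled spaces and stray spaces before punctuation.
    caption = re.sub(r"\s{2,}", " ", caption)
    caption = re.sub(r"\s+([,.])", r"\1", caption)
    return caption.strip()

print(strip_evaluative("a pretty woman in front of a mirror"))
# -> a woman in front of a mirror
```

The cleaned caption would then be sent to a text-to-image service in place of the original, biased description.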
The title of the work stems from the often-encountered machine description of a medium shot of the artist as ‘holding’, implying that the data sets on which the algorithms were trained view women as carers.
 
We may think of algorithms as somehow neutral, but they are created by people who have their own biases and prejudices, and descriptive algorithms additionally rely on data sets labelled by people whose biases they inherit: assumptions about what type of person is likely to be involved in a crime, or what gender should be attributed to a doctor, lawyer, or scientist.
 
 

Steel, plastic, epoxy, printed image, wax, electronics, tablets, thread, fabric.
190 cm × 260 cm × 60 cm