In a converse step, the text output from the image description service, stripped of evaluative descriptors such as ‘pretty’ and ‘good looking’, was run through a text-to-image model, AttnGAN, trained on the COCO dataset. For example, the descriptor ‘woman in front of a mirror’ yields a semi-abstract blob that can be recognised as a posed selfie in underwear, a beach photo, a mirror selfie, and other variants that share a visual language of the objectification of women.
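The descriptor-stripping step could be approximated as in the minimal sketch below, which assumes a hand-curated list of evaluative terms (only ‘pretty’ and ‘good looking’ are documented in the work; the other entries, and the names `EVALUATIVE_DESCRIPTORS` and `strip_evaluative`, are illustrative). The cleaned caption would then be passed to AttnGAN's own inference pipeline, which is not reproduced here.

```python
import re

# Hypothetical word list; the work's actual list of evaluative
# descriptors is not documented beyond the two quoted examples.
EVALUATIVE_DESCRIPTORS = ["pretty", "good looking", "beautiful", "attractive"]

def strip_evaluative(caption: str, descriptors=EVALUATIVE_DESCRIPTORS) -> str:
    """Remove evaluative descriptors from a machine-generated caption."""
    for term in descriptors:
        # Word-boundary match so 'pretty' is not removed from e.g. 'prettyish'.
        caption = re.sub(rf"\b{re.escape(term)}\b", "", caption, flags=re.IGNORECASE)
    # Collapse the double spaces left behind by the removals.
    return re.sub(r"\s+", " ", caption).strip()

caption = "a pretty woman in front of a mirror"
print(strip_evaluative(caption))  # -> "a woman in front of a mirror"
```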
The title of the work stems from the frequently encountered description of a medium shot of the artist as ‘holding’, implying that the data sets on which the algorithms were trained cast women as carers.
We may think of algorithms as somehow neutral, but they are created by people with their own biases and prejudices, and descriptive algorithms use data sets that carry the biases of the people who labelled the images: inherited assumptions about what type of person is likely to be involved in a crime, and what gender should be attributed to a doctor, a lawyer or a scientist.
Steel, plastic, epoxy, printed image, wax, electronics, tablets, thread, fabric.
190 cm × 260 cm × 60 cm