In recent months, ChatGPT, MidJourney and other AI tools have taken the world by storm. But as these tools seemingly write miraculous poems and marketing copy, and dream up the next Monet in a matter of moments, I’ve been left wondering how we ensure that they create progressive, forward-thinking content.
Would anyone feel represented by this image?
Below is an image that DALL·E 2 generated for me when I asked for “an inclusive image of the disabled community” – and I’m sure you’ll agree, we’re not quite there yet!
Beyond the obvious lack of human features, I wonder whether these tools will simply reaffirm what has come before.
After decades of under-representation and depictions of disability that lack authenticity, will these AI tools simply use that history as a reference point and perpetuate the very failings that we at Purple Goat Agency are tirelessly trying to address?
Will we see an array of disabilities or just images of individuals in wheelchairs? How will those with non-visible disabilities feel counted? Will these images account for intersectionality?
As AI applications continue to evolve, it is crucial that they are designed to promote progressive thinking around the subjects of disability and inclusion. AI systems like GPT-4 hold immense potential, but they also risk perpetuating entrenched ableism and stereotypes around disability.
We need to strike the right balance between generalising information for efficiency and respecting the unique experiences of disabled people. Overgeneralisation may lead to assumptions and stereotypes, while overemphasis on differences can end up being exclusive and othering in its own right.
So in the not-too-distant future, when you request something like ‘a group of disabled people having fun’ from an AI image generator with the aim of being inclusive, take a moment to consider whether what you are seeing showcases the true nuances, lived experiences and ‘reality’ of the community you are trying to represent.
We may be a few more clicks away from getting it quite right.