HUMAN·S 2
(2022 - ongoing)
A stereotyped view of the world generates a stereotyped structuring of the world.
Image-generating artificial intelligence is a new iconographic creation tool, as revolutionary as it is incomplete, and now accessible to everyone. Trained algorithmically on billions of images, it builds its own understanding of our world, one we cannot "see" except through the results it produces. Yet on closer inspection, the way it works amounts to little more than an averaging of all the images that feed it. These images come from "traditional" modes of iconographic representation (pictorial, photographic, cinematographic, media-based, etc.), produced with a variety of intentions (political, advertising, journalistic, civic, etc.), but all suffering, to varying degrees, from a lack of impartiality and a whole range of biases, under-representation and over-representation among them. And when it comes to illustrating human "typologies", AI simply repeats and amplifies these shortcomings, thereby contributing to the latent discrimination of an entire section of our society.
This is what I wanted to point out, through a layout that evokes encyclopedic definitions: unequivocal, normally imbued with absolute truth, and serving as a common reference. And this is what these images will become, if nothing is done to curb these double simulacra, for the world of "content"-generating AIs which, only a few months old, aspire only to improve and diversify: in form, with the arrival of similar tools for video, and in use, where it is only a matter of time before they are widely employed for disinformation purposes.