Imaginary images blur into each other. For days, for years I've heard these stories. Some are heavier today, finding their categories, and almost paint an exact picture in my mind. Others are less weighted, blurred with noise, with random fantasy, into something new, an idea. They are all related and I am connected to them: the stories of my childhood, telling you truths about how others grew up. Mechanized stories are shaping this narrative and thus, representations are partly produced mechanically. Networks are colouring created contours and placing heterogeneous data in new contexts. They elude simple attributions of meaning and confront us with our own conditioning, our socialisation, our imagination. Here, since nothing has a life outside of representation, perception is a narrative fiction. Structures of correlation. Copy and paste. Selective moments and memories are neuronal patterns. Say it: Rho-do-den-dron. The dreamed image symbolizes the recognition of potential. In the end, a private moment raises the question of technical memory: how are the technologies we create related to us, and with what training material are we fed? How does a cyborg remember?
This project uses Image-to-Image Translation with a Conditional Generative Adversarial Network (cGAN) called Pix2Pix, based on the paper by Phillip Isola, Jun-Yan Zhu, Tinghui Zhou and Alexei A. Efros. The generation of the output image is conditioned on an input image. In this case, the machine colorizes the artist's drawings based on an archive of old photographs by family members. Credits and thanks to Karl Rogel, with whom I developed the code for this machine learning tool.
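The idea of conditioning can be sketched numerically. Pix2Pix trains the generator against a combined objective: an adversarial term (the discriminator, which also sees the input condition, should judge the output real) plus an L1 term pulling the output toward the target photograph. The sketch below is purely illustrative and not the project's actual code: the "drawing", "photo", generator and discriminator are toy stand-ins, chosen only so the objective is runnable end to end.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (assumptions, not the project's real networks):
# a grayscale "drawing" x as the condition, and a colour "photo" y
# as the ground truth the generator should reconstruct.
H, W = 8, 8
x = rng.random((H, W, 1))   # input drawing (condition)
y = rng.random((H, W, 3))   # target colourized photograph

def generator(x):
    # Placeholder generator: tile the drawing into three colour channels.
    return np.repeat(x, 3, axis=2)

def discriminator(x, img):
    # Placeholder discriminator: a single realness score in (0, 1),
    # computed from the condition and the image together, because in a
    # cGAN the discriminator also sees the input.
    return 1.0 / (1.0 + np.exp(-np.mean(np.concatenate([x, img], axis=2))))

def pix2pix_generator_loss(x, y, lam=100.0):
    """Adversarial term + lambda * L1 term, the generator objective
    of the Pix2Pix formulation (here with toy networks)."""
    fake = generator(x)
    d_fake = discriminator(x, fake)
    adv = -np.log(d_fake + 1e-12)       # push D(x, G(x)) toward "real"
    l1 = np.mean(np.abs(y - fake))      # stay close to the target photo
    return adv + lam * l1

loss = pix2pix_generator_loss(x, y)
print(float(loss))
```

The L1 weight `lam=100.0` mirrors the balance used in the original Pix2Pix paper; with real networks, both generator and discriminator would be trained alternately by gradient descent rather than evaluated once as here.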
Julia Maja Funke, RHO-DO-DEN-DRON, 2020