07/22 — LNDN
Info
ABOUT
Iulia is an interdisciplinary artist and technologist interested in social phenomena that emerge in the face of increased automation and algorithmic living. Her work explores the space between the technical and social facets of AI and the co-construction of meaning in human-AI interaction.
She expands on this work in her roles as Programme Director of the Creative Computing and Robotics postgraduate courses at the University of the Arts London and as a visiting Senior Lecturer at the Royal College of Art and Imperial College London. She holds a Microsoft-sponsored PhD in AI Design, an MA from the Royal College of Art and an MSc from Imperial College London.
ETHOS
Despite the need for systems more aligned with human expectations and values, it remains extremely difficult to computationally embed concepts as fundamentally fluid and situational as value and meaning. This is partly because present-day research tries to formalise the principles of human behaviour, in all their complexity, through predictive machine learning models built on data extracted from how people behave not in relation to AI but in its absence.
In my work, I am interested in exploring the layers of social complexity that emerge from our real-time interaction with AI systems, and in how design can address some of the more fleeting aspects of that interaction.