With the rapid rise of tools such as ChatGPT, systems equipped with artificial intelligence (AI) are becoming progressively more integrated into our daily lives. Increasingly, we may find ourselves relying on these systems to perform tasks that were once carried out by humans. However, there are many reasons why we should not be quite so open to perceiving these systems as trustworthy. News reports detail cases of AI systems citing fake academic references, inventing scandals about real individuals, and, in one case, contributing to the false arrest of a teenager after misidentifying a crisp packet as a weapon. But why are some of the errors they make so simple and, frankly, nonsensical? In this talk, I will argue that one of the key reasons AI systems are prone to these absurd errors is their lack of common sense. I will explore what we mean by ‘common sense’ and its (somewhat surprising) complexity, how a lack of common sense can result in AI errors, and the implications this has for AI safety and the level of trust we ought to place in these systems.
This talk is for members only: you can find out more about becoming a TRIP member here.
About the Speaker:
Isobel Standen is a PhD student at the University of York and a researcher at the Centre for Assuring Autonomy. Her work sits at the intersection of philosophy and computer science, and she is involved in several projects that bring multidisciplinary researchers together to collaborate.
Isobel is the co-organiser and co-chair of the annual ‘Requirements Engineering for Trustworthy Artificial Intelligence’ (‘RETRAI’) workshop, which brings together technical and non-technical researchers to discuss the challenges of developing truly trustworthy AI systems and to identify strategies for overcoming the associated risks.
Isobel’s research explores the contrast between human capacities and those attributed to AI systems. In particular, she argues that a lack of basic common sense is one of the leading causes of errors made by AI systems.