Thursday May 23 at 16:00 in Centro Carlos Santamaria Room 4.
ABSTRACT: This paper explores the different characterizations and understandings that have been given to ChatGPT and similar generative AI technologies based on transformer architectures for Large Language Models (LLMs). We pay special attention to their characterization as agents. We then explain in detail the architecture, processing, and training procedures of GPT to provide a proper understanding of its workings. A critical evaluation of LLMs' agentive capacities is provided in the light of phenomenological and enactive theories of life and mind. On this view, ChatGPT fails to meet the individuality criterion (it is not the product of its own activity, nor even directly affected by it), the normativity criterion (it does not generate its own norms or goals), and, partially, the interactional asymmetry criterion (it is not the origin and sustained source of its interaction with the environment), all three of which are required for autonomous agency. We then discuss the mode of existence of ChatGPT in the light of enactive and embodied approaches to cognition. We suggest that ChatGPT should be thought of as an interlocutor or linguistic automaton, a library-that-talks, devoid of (autonomous) agency, yet capable of engaging performatively in non-purposeful but purpose-structured and purpose-bounded tasks in our digital linguistic environments. Finally, we explore how LLMs hold an expanding potential to deeply transform human agency and digital environments.
KEYWORDS: Transformers, enactivism, agency, LLMs, ChatGPT, philosophy of mind, philosophy of technology, autonomy, automatism.