PriTEM research seminar: Behavioural artificial intelligence

Topic: Behavioural artificial intelligence

Zoom link: https://uio.zoom.us/j/69628298272

Speaker: Tore Pedersen

About the speaker: Tore Pedersen is a Professor of Intelligence Studies at the Centre for Intelligence Studies at NORIS, the Norwegian Intelligence School. He is an Affiliate Professor of Psychological Science at ONH, Oslo New University College, and a Visiting Professor at the Department of War Studies, King’s College London. He employs experimental methods to study cognitive aspects of national intelligence and national security — more specifically, human cognitive biases, human–technology interaction, and biases in artificial intelligence. He holds a Norwegian National Authorization as a Military Intelligence Specialist and is an Elected Fellow of the Royal Society of Arts (RSA), UK.

About the presentation: Artificial intelligence (AI) receives attention in the media as well as in academia and business. In media coverage, AI is predominantly described in contrasting terms: either as the ultimate solution to all human problems or as the ultimate threat to all human existence. In academia, computer scientists focus on developing systems that function, whereas philosophers theorize about the implications of this functionality for human life. At the interface between technology and philosophy, however, one imperative aspect of AI has yet to be articulated: how do intelligent systems make inferences? We use the overarching concept ‘Artificial Intelligent Behaviour’, which includes both cognition/processing and judgement/behaviour. We argue that, due to the complexity and opacity of artificial inference, one needs to initiate systematic empirical studies of artificial intelligent behaviour, similar to what has previously been done to study human cognition in terms of judgement, reasoning, and decision making. This approach could provide knowledge beyond what current computer science methods can offer about the ‘judgements’, ‘reasoning’, and ‘decisions’ made by intelligent systems. In all domains of societal affairs, in-depth knowledge of ontology, epistemology, ‘critical thinking’, judgement, and reasoning will improve human oversight of AI inference processes and thus ensure AI accountability. Such insights require systematic studies of AI behaviour founded on natural science and the philosophy of science, as well as the employment of methodologies from the cognitive and behavioural sciences.


Published Apr. 21, 2023 5:40 PM - Last modified Apr. 21, 2023 5:40 PM