How can we use Generative AI in Norwegian Higher Education?

Active learning in the light of artificial intelligence. Image created with Midjourney.

Generative AI will clearly change both work and education

As we have seen, generative AI will make many kinds of tasks easier: writing text, formulating lists and arguments, and writing simple computer programs. This will be a significant help for many people, especially those with lower technical skills and/or disabilities who may have struggled with these tasks in the past. For example, the way generative AI can summarize text may be a great help to people with reading or writing difficulties, such as dyslexia. These tools can also serve as translation aids, allowing students who struggle with English to translate texts into Norwegian (or vice versa), or helping professionals communicate technical ideas in simple language. And large language models are also being used to develop new technological aids, such as glasses that can caption speech in real time.

However, these developments also mean that a lot of low-skill technical labor will be threatened: tasks like writing copy text, data entry, and basic computer programming that have previously required trained professionals will likely soon be automated. Companies are already realizing that it can be significantly cheaper and quicker to have an AI automate these tasks than to pay an employee to do them, even if the AI requires some oversight and correction.

In this new world, we, as educators, will have to shift our focus. Historically, we have focused on training people to create output (text, code, analyses, etc.). Now that we have tools that can create output, but don't always produce it correctly, we will need to train people who can critically evaluate the quality of that output. For example, it is no longer enough to be able to write code: you also need to be able to critically review code produced by an AI and check whether it actually does what you want it to do.

Challenges and Opportunities of Generative AI in Education

We see both major opportunities and challenges of generative AI in education.

Opportunities:

Generative AI can be very helpful in offloading some intellectual "grunt work," opening up more room for creativity and for content coverage in the curriculum that we haven't had before. Just as calculators reduced the need to do intensive calculations by hand, use slide rules, or look up calculation tables, and just as mathematical software allows people to solve matrices and integrals, generative AI will offload many of the tasks that are necessary to intellectual work. People will still need to tell it what to do (i.e., figure out how to approach a problem and formulate queries well, a skill now known as “prompt engineering”) and to evaluate its output.

In education, that means we can have students attack more difficult problems, do more complex analyses, and do more creative work (on problems they couldn't access before), and it may free up room in the curriculum for more advanced topics with the time saved on this intellectual "grunt work".
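As a small illustration of the calculator analogy above, symbolic mathematics software already offloads this kind of mechanical work while leaving the problem formulation and the interpretation of the result to the user. The sketch below uses the open-source Python library SymPy; the particular integral and matrix are simply examples we have chosen.

    # Offloading mechanical "grunt work": the software does the symbolic manipulation,
    # but the user still has to pose the problem and judge whether the answer makes sense.
    import sympy as sp

    x = sp.symbols('x')

    # A Gaussian integral that would be tedious to evaluate by hand
    print(sp.integrate(sp.exp(-x**2), (x, -sp.oo, sp.oo)))  # sqrt(pi)

    # Inverting a small matrix symbolically
    M = sp.Matrix([[2, 1], [1, 3]])
    print(M.inv())  # Matrix([[3/5, -1/5], [-1/5, 2/5]])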

Generative AI can provide immediate feedback on how to improve a text, on what is wrong with a particular argument or solution to a problem, or on the error in a computer program. This opens many opportunities for learning through dialogue, but it requires careful use, in the same way that a solution manual requires restraint and skill to be a useful pedagogical tool. With good use of AI, you can get a tutor that knows your strengths and weaknesses, provides reasonable advice, and may even give you problems that keep you in your zone of proximal development. With uncritical use, however, you may instead develop a false sense of understanding and mastery: one that is not deep enough to build further concepts on, and that may unravel when you attempt to solve problems on your own.

In addition, AI theoretically has the capacity to provide levels of feedback and support similar to those of an experienced tutor or learning assistant. Up to now, students have largely relied on teachers for detailed feedback on their work, for help in understanding concepts through rephrasing or dialogue, and for pedagogical support when solving problems. For many problems, a carefully posed prompt to the AI can provide timely and useful feedback while students are building conceptual understanding, struggling to follow arguments, or getting stuck while solving problems or programming tasks. In these cases, the use of AI lowers the barriers for students to get feedback, help, and support. However, this currently requires students to phrase their questions carefully, understand the answers, and critically assess the output provided by the AI.
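As a rough illustration of what such a carefully posed prompt might look like, here is a minimal sketch that asks a language model for formative (rather than solution-revealing) feedback on a short piece of student code. It assumes the openai Python package (v1.x) with an API key set in the environment; the model name, tutor instructions, and example exercise are our own illustrative choices, not a recommendation of any particular provider.

    # Sketch: requesting formative feedback that guides rather than gives away the answer.
    # Assumes the `openai` Python package (v1.x) and OPENAI_API_KEY set in the environment.
    from openai import OpenAI

    client = OpenAI()

    student_code = """
    def kinetic_energy(m, v):
        return m * v ** 2   # the student has forgotten the factor 1/2
    """

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a tutor in an introductory physics programming course. "
                    "Point out at most two problems in the student's code, ask one guiding "
                    "question for each, and do NOT give the corrected code."
                ),
            },
            {
                "role": "user",
                "content": "Please give formative feedback on this function:\n" + student_code,
            },
        ],
    )
    print(response.choices[0].message.content)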

Dangers:

The purpose of education is to learn new skills, techniques, and ideas, but it is also mental training. Writing essays and solving problems are meant to train people in the ways of thinking and working of their disciplines. If too much of this mental training is offloaded to AI, then people will lose out on the critical, disciplinary ways of thinking that they need to learn in their university studies. For example, if students use generative AI to tell them how to solve simple problems, or how to critically analyze a piece of literature, they are missing out on the most important purpose of education: learning how to use the intellectual tools at their disposal. This becomes a bit like going to a weight room and having a machine lift the weights for you. At the same time, this challenge may itself be an opportunity, since it will push instructors toward more open-ended problems that require creative solutions and are less accessible to direct solution by an AI.

Generative AI will also affect different disciplines in different ways, since they are based on different representational forms. Generative AI is very good at summarizing and restating text. That means that disciplines like the humanities and social sciences, where critical text analysis and summarization are key, will be more affected than disciplines in mathematics and the natural sciences, where we use text, equations, diagrams, and more to do our work. But even though AI can generate critical arguments about text, it still can't "think like a historian" or "think like a physicist"; that is what our students need to be trained to do.

How can generative AI be used in education now?

Generative AI based on large language models has some clear strengths and some clear weaknesses, and these can help us think through how we can use it in our educational practice today.

  • Generative AI is very good at summarizing and restating text. This makes it a natural tool for helping to summarize key points of technical papers, articles, or textbooks. It can also be an excellent explainer, allowing students to ask for non-technical explanations of complex concepts. Even more useful, it can regenerate or tweak these explanations based on students’ needs and feedback.
  • Generative AI can generate lists of ideas or topics, like essay arguments, paper titles, to-do lists, etc. These are not always especially creative (since they are, in some ways, an “average” of the represented ideas and topics from the training data) but in educational contexts we are seldom looking for completely new ideas or topics—it is more important that something is new to a student than that it is new to the instructor.
  • Generative AI can write code and copy text. This can be very useful as a first draft for a project description, or as a way to reduce the time spent, for example, writing the structure of a code class to simulate a physics phenomenon (see the sketch after this list). (However, as mentioned, it’s important that students take the time to critically review and rewrite the details of any AI-generated output, and to cite it as one of their sources.)
  • Generative AI can critique texts, arguments, and code and provide students with immediate formative assessment that can support their learning. However, getting the AI to provide formative feedback that is adapted to the student’s needs requires careful phrasing and construction of the prompts to the AI (prompt engineering). Currently, the state of development varies by discipline. For example, for simple programming tasks, the AI can provide useful formative feedback with simple prompts, whereas in other domains, such as more complex mathematical arguments, good results may require specialized (fine-tuned) AI models.
  • Generative AI often produces errors. In professional settings this can be a problem (especially if one uncritically uses it to write, for example, a legal brief). However, in educational settings this can be an asset, since it forces students to think critically about the information they are getting. For example, if one asks ChatGPT to answer a conceptual physics question, it may provide a correct answer or it may provide a common (but incorrect) one; it’s up to the student to decide how much to trust the answer and potentially even try to correct the AI’s reasoning.
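To make the point about code drafting concrete, here is the kind of class skeleton an AI might produce when asked to simulate a simple physics phenomenon. Everything in it, including the choice of projectile motion and the simple first-order integration scheme, is our own illustrative example, and it is exactly the sort of draft a student should review critically (for instance, by asking whether the time step and the integration method are accurate enough for their purposes).

    # The kind of first-draft class an AI might generate for a physics simulation.
    # A student should still check both the physics and the numerics before trusting it.
    import numpy as np

    class Projectile:
        """Simulates 2D projectile motion without air resistance."""

        def __init__(self, v0, angle_deg, g=9.81, dt=0.01):
            self.g = g
            self.dt = dt
            angle = np.radians(angle_deg)
            self.r = np.array([0.0, 0.0])                            # position (m)
            self.v = v0 * np.array([np.cos(angle), np.sin(angle)])   # velocity (m/s)

        def step(self):
            # Simple first-order update: fine as a draft, but worth questioning.
            self.v = self.v + np.array([0.0, -self.g]) * self.dt
            self.r = self.r + self.v * self.dt

        def run(self):
            trajectory = [self.r.copy()]
            while self.r[1] >= 0.0:
                self.step()
                trajectory.append(self.r.copy())
            return np.array(trajectory)

    traj = Projectile(v0=20.0, angle_deg=45.0).run()
    print(f"Approximate range: {traj[-1, 0]:.1f} m")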

Where is this going in the future?

It's clear that Generative AI is becoming better and better at solving problems and generating text-based analyses and arguments. The question is, how good will it get in the future? Will increasing amounts of training data continue to improve its abilities to the same degree that we see so far? Will it get to the point where it (almost) always provides correct information and answers? Or will it continue to "hallucinate" and produce errors like it does now?

We think this question is at the heart of how we approach generative AI in education. If it continues to produce good but error-prone information, then our focus will shift to training people who can use and critically evaluate the products of AI. This is a bit like using Wikipedia: everyone knows it's not always correct, but we can use it in many cases as long as we are aware of its limitations.

If, on the other hand, AI becomes so good that it's almost always right, then we will have to rethink how we do education around it. It will present us with a new set of challenges and opportunities: for example, the dangers above will be exacerbated, but we may also be able to build new generations of automated tutoring systems (virtual teaching assistants or learning assistants) that could be personalized to individual student use. However, this will not happen for several years, if at all, so we will have some time to think through the consequences.

Either way, we will likely see differences in the future between people who uncritically use AI (and don't realize when it's wrong) and the people who critically use it as a tool to offload intellectual "grunt" work. We predict that the second category will always do better than the first, and it is our job as educators to help our students learn to be as successful as they can.

Resources

More information on use of generative AI in university-level science and math education can be found on the KURT website:

https://www.mn.uio.no/kurt/utdanningsutvikling/emne-og-undervisningsplanlegging/undervisningsplanlegging/vurdering-i-sprakmodellenes-tid.html

https://audunsh.github.io/ki-undervisning/snakk_om.html

By Tor Ole B. Odden, Tone Gregers, and Anders Malthe-Sørenssen
Published 17 Aug. 2023, 13:11 - Last modified 19 Jan. 2024, 11:27