Threats and Vulnerabilities of LLMs

Over the last couple of years, AI, in the form of generative AI based on LLMs, has become available to everyone. As a new technology is widely adopted and moves from research into everyday use, we also learn more about its potential for misuse and harm.

This master's project aims to develop a threat model (or a set of models) for large language models (LLMs). The threat model should summarize and visualize the vulnerabilities inherent in LLMs and how they may be exploited.
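As a starting point, a threat model entry can be represented as a small structured record linking a vulnerability to its attack vectors and mitigations. The sketch below is a hypothetical illustration only (the class and field names are assumptions, not part of the project description), using prompt injection, a widely documented LLM vulnerability, as the example entry:

```python
from dataclasses import dataclass, field

@dataclass
class Threat:
    """One entry in a simple LLM threat model (illustrative structure only)."""
    name: str
    description: str
    attack_vectors: list[str] = field(default_factory=list)
    mitigations: list[str] = field(default_factory=list)

# Hypothetical example entry: prompt injection.
prompt_injection = Threat(
    name="Prompt injection",
    description="Crafted input overrides the model's intended instructions.",
    attack_vectors=["direct user prompts", "untrusted retrieved documents"],
    mitigations=["input filtering", "privilege separation for tool calls"],
)
```

A collection of such entries could then be rendered as a diagram or table to serve the "summarize and visualize" goal above.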

To limit the project's scope, we will select one or more well-known LLMs as case studies. This will be decided in the initial phase of the project.

This project may be selected by more than one student; each student will work on a different LLM.

Published Aug. 8, 2024, 15:17 - Last modified Aug. 8, 2024, 15:17

Supervisor(s)

Scope (ECTS credits)

60