Knowledge-enhanced large language models
Language intelligence for specialised and socially relevant applications

Large language models (LLMs) such as GPT, BERT or LLaMA have developed rapidly in recent years and have become central components of modern AI systems. Their ability to generate text, extract information and understand language makes them versatile tools in the digital transformation. However, to realise their full potential, they need to be tailored to specific contexts and tasks.
Our research in the field of knowledge-enhanced large language models deals with the adaptation and extension of LLMs for specific application domains. Using methods such as fine-tuning, in-context learning, retrieval-augmented generation (RAG) and the integration of semantic knowledge sources, we develop models that are not merely generic but also domain-competent, comprehensible and practical.
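To illustrate one of the methods mentioned above, the following is a minimal sketch of retrieval-augmented generation: relevant passages are retrieved from a corpus and prepended to the prompt so the model can ground its answer. The corpus, the word-overlap scoring and the prompt layout are illustrative assumptions, not the group's actual pipeline.

```python
# Minimal RAG sketch: retrieve supporting passages, then build a
# grounded prompt. All data and scoring here are hypothetical.

def retrieve(query, corpus, k=2):
    """Rank documents by word overlap with the query; return the top k."""
    q_words = set(query.lower().split())
    return sorted(
        corpus,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )[:k]

def build_prompt(query, passages):
    """Prepend retrieved passages as context for the language model."""
    context = "\n".join(f"- {p}" for p in passages)
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

corpus = [
    "Fine-tuning adapts a pretrained model with domain-specific data.",
    "Retrieval-augmented generation grounds answers in external documents.",
    "Knowledge graphs store facts as subject-predicate-object triples.",
]

query = "How does retrieval-augmented generation ground answers?"
passages = retrieve(query, corpus)
prompt = build_prompt(query, passages)
print(prompt)
```

In a production system the overlap scoring would be replaced by dense vector search, and the prompt would be passed to an LLM for answer generation.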
A central goal is to enrich existing language models with structured knowledge, for example from ontologies, knowledge graphs or specialised databases. This creates systems that go beyond pure linguistic understanding and can provide substantively grounded answers. These enriched models are used in areas such as education, administrative digitisation, public participation and accessible communication.
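One simple way to combine a knowledge graph with a language model, as described above, is to look up facts about the entities in a question and serialise them into the prompt as context. The triples and lookup logic below are hypothetical examples for illustration only.

```python
# Sketch: injecting knowledge-graph facts into a prompt.
# The graph content and matching strategy are illustrative assumptions.

TRIPLES = [
    ("aspirin", "treats", "headache"),
    ("aspirin", "interacts_with", "warfarin"),
    ("ibuprofen", "treats", "inflammation"),
]

def facts_about(entity, triples):
    """Return all triples whose subject or object matches the entity."""
    return [t for t in triples if entity in (t[0], t[2])]

def to_context(triples):
    """Serialise triples as plain-language statements for the prompt."""
    return "\n".join(f"{s} {p.replace('_', ' ')} {o}" for s, p, o in triples)

facts = facts_about("aspirin", TRIPLES)
context = to_context(facts)
print(context)
```

Real systems would query an ontology or graph store (e.g. via SPARQL) and link entities in the question to graph nodes, but the principle of grounding the prompt in structured facts is the same.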
Beyond the technical development itself, we also focus on making these technologies accessible: we develop intuitive interfaces, provide well-documented models and design APIs that non-experts can use with ease. Throughout, we pay attention to transparency, traceability and social relevance.
Our research thus contributes to making large language models not only more powerful, but also more inclusive and responsible: tools that support people in their everyday lives rather than overburdening them.