About
Our Vision

Large Language Models (LLMs), such as those behind ChatGPT and virtual assistants, are cutting-edge artificial intelligence models trained on massive amounts of text data. They can generate human-like text and creative content, translate between languages, and answer questions in an informative way. However, they have known technical limitations, such as biases, privacy leaks, poor reasoning and a lack of explainability, which raise concerns about their use in critical domains such as healthcare and law.
Our vision addresses the socio-technical limitations of LLMs that challenge their responsible and trustworthy use, particularly in the context of medical and legal use cases. Our goal is two-fold:
- Firstly, to create an extensive evaluation benchmark (including suitable novel criteria, metrics and tasks) for assessing the limitations of LLMs in real-world settings, enabling our standards and policy partners to implement responsible regulations, and our industry and third-sector partners to robustly assess their systems. To achieve this synergy, we will run co-creation and evaluation workshops throughout the project, creating a co-production feedback loop with our stakeholders.
- Secondly, to devise novel mitigation solutions based on new machine learning methodology, informed by expertise in law, ethics and healthcare and co-created with domain experts, that can be incorporated into products and services. This methodology includes the development of modules for temporal reasoning and situational awareness in long-form text, dialogue and multi-modal data, as well as alignment with human preferences, bias reduction and privacy preservation.
Partners
Team
PI/Co-Is

Prof. Maria Liakata
NLP, Temporality QMUL (PI)
Dr. Julia Ive
NLP, Privacy QMUL
Prof. Matthew Purver
NLP, Dialogue QMUL
Prof. Greg Slabaugh
Computer Vision DERI, QMUL
Prof. Claude Chelala
Cancer Research QMUL
Dr. Michael Schlichtkrull
NLP QMUL
Prof. Domenico Giacco
Clinical and Social Psychiatry Warwick
Prof. Tom Sorell
Ethics for AI Warwick
Prof. Rob Procter
Trustworthy, Ethical and Safe AI Warwick
Prof. Nikos Aletras
NLP, Legal NLP Sheffield
Dr. Jiahong Chen
Law, AI Standards Sheffield
Dr. Aislinn Gómez Bergin
Responsible AI Nottingham, RAi UK
Programme Manager

Dorothée Loziak
QMUL
Research Staff

Dr. Dimitris Gkoumas
NLP, Multi-modal QMUL
Jenny Chim
NLP, Evaluation, Generation QMUL
Dr. Joshua Kelsall
Ethics and RAI Warwick
Emily Thelwell
Clinical and Social Psychiatry Warwick
Dr. Maria Waheed
Responsible AI Nottingham
Dr. Xingwei Tan
NLP, Language Understanding, Reasoning Sheffield
Serene Chi
LegalTech, Impact Assessment, Public Engagement, Technological Law Warwick
Updates
News & Events
Outputs
Publications & Software
Publications
Software
Contact
Get in Touch