![](assets/img/team/logo_qmul.png)
![](assets/img/team/logo_nottingham.png)
![](assets/img/team/logo_sheffield.png)
![](assets/img/team/logo_warwick.png)
![](assets/img/team/logo_rai.png)
About
Our Vision
![](assets/img/about1.jpg)
Large Language Models (LLMs), like those used in ChatGPT and virtual assistants, are cutting-edge artificial intelligence algorithms trained on massive amounts of text data. They can generate human-like text and creative content, translate across languages, and answer questions informatively. However, they have known technical limitations, such as biases, privacy leaks, poor reasoning and a lack of explainability, which raise concerns about their use in critical domains such as healthcare and law.
Our vision addresses the socio-technical limitations of LLMs that challenge their responsible and trustworthy use, particularly in the context of medical and legal use cases. Our goal is two-fold:
- Firstly, to create an extensive evaluation benchmark (including suitable novel criteria, metrics and tasks) for assessing the limitations of LLMs in real-world settings, enabling our standards and policy partners to implement responsible regulations, and our industry and third-sector partners to robustly assess their systems. To achieve this synergy, we will run co-creation and evaluation workshops throughout the project, creating a co-production feedback loop with our stakeholders.
- Secondly, to devise novel mitigation solutions based on new machine learning methodology, informed by expertise in law, ethics and healthcare via co-creation with domain experts, that can be incorporated into products and services. Such methodology includes the development of modules for temporal reasoning and situational awareness in long-form text, dialogue and multi-modal data, as well as alignment with human preferences, bias reduction and privacy preservation.
Partners
Team
PI/Co-Is
![](assets/img/team/liakata.jpeg)
Prof. Maria Liakata
NLP, Temporality QMUL (PI)
![](assets/img/team/ive.jpeg)
Dr. Julia Ive
NLP, Privacy QMUL
![](assets/img/team/purver.png)
Prof. Matthew Purver
NLP, Dialogue QMUL
![](assets/img/team/slabaugh.jpeg)
Prof. Greg Slabaugh
Computer Vision DERI, QMUL
![](assets/img/team/chelala.jpg)
Prof. Claude Chelala
Cancer Research QMUL
![](assets/img/team/giacco.png)
Prof. Domenico Giacco
Clinical and Social Psychiatry Warwick
![](assets/img/team/sorell.jpeg)
Prof. Tom Sorell
Ethics for AI Warwick
![](assets/img/team/procter.png)
Prof. Rob Procter
Trustworthy, Ethical and Safe AI Warwick
![](assets/img/team/aletras.jpg)
Prof. Nikos Aletras
NLP, Legal NLP Sheffield
![](assets/img/team/chen.jpeg)
Dr. Jiahong Chen
Law, AI Standards Sheffield
![](assets/img/team/gomez-bergin.png)
Dr. Aislinn Gómez Bergin
Responsible AI Nottingham, RAi UK
Programme Manager
![](assets/img/team/loziak.jpeg)
Dorothée Loziak
QMUL
Research Staff
![](assets/img/team/gkoumas.jpeg)
Dr. Dimitris Gkoumas
NLP, Multi-modal QMUL
![](assets/img/team/chim.jpeg)
Jenny Chim
NLP, Evaluation, Generation QMUL
![](assets/img/team/kelsall.jpeg)
Dr. Joshua Kelsall
Ethics and RAI Warwick
![](assets/img/team/thelwell.png)
Emily Thelwell
Clinical and Social Psychiatry Warwick
Updates
News & Events
Outputs
Publications & Software
Publications
Software
Contact
Get in Touch