About
Our Vision
Large Language Models (LLMs), like those used in ChatGPT and virtual assistants, are cutting-edge artificial intelligence algorithms trained on massive amounts of text data. They can generate human-like text and creative content, translate across languages, and answer questions in an informative way. However, they have known technical limitations, such as biases, privacy leaks, poor reasoning and lack of explainability, which raise concerns about their use in critical domains such as healthcare and law.
Our vision addresses the socio-technical limitations of LLMs that challenge their responsible and trustworthy use, particularly in the context of medical and legal use cases. Our goal is two-fold:
- First, to create an extensive evaluation benchmark (including suitable novel criteria, metrics and tasks) for assessing the limitations of LLMs in real-world settings, enabling our standards and policy partners to implement responsible regulations, and our industry and third-sector partners to robustly assess their systems. To achieve this synergy, we will run co-creation and evaluation workshops throughout the project, creating a co-production feedback loop with our stakeholders.
- Second, to devise novel mitigating solutions based on new machine learning methodology, informed by expertise in law, ethics and healthcare and co-created with domain experts, that can be incorporated into products and services. This methodology includes developing modules for temporal reasoning and situational awareness in long-form text, dialogue and multi-modal data, as well as alignment with human preferences, bias reduction and privacy preservation.
Partners
Team
PI/Co-Is
- Prof. Maria Liakata – NLP, Temporality, QMUL (PI)
- Dr. Julia Ive – NLP, Privacy, QMUL
- Prof. Matthew Purver – NLP, Dialogue, QMUL
- Prof. Greg Slabaugh – Computer Vision, DERI, QMUL
- Prof. Claude Chelala – Cancer Research, QMUL
- Dr. Michael Schlichtkrull – NLP, QMUL
- Prof. Domenico Giacco – Clinical and Social Psychiatry, Warwick
- Prof. Tom Sorell – Ethics for AI, Warwick
- Prof. Rob Procter – Trustworthy, Ethical and Safe AI, Warwick
- Prof. Nikos Aletras – NLP, Legal NLP, Sheffield
- Dr. Jiahong Chen – Law, AI Standards, Sheffield
- Dr. Aislinn Gómez Bergin – Responsible AI, Nottingham, RAi UK
Programme Manager
- Dorothée Loziak – QMUL
Research Staff
- Dr. Dimitris Gkoumas – NLP, Multi-modal, QMUL
- Jenny Chim – NLP, Evaluation, Generation, QMUL
- Dr. Joshua Kelsall – Ethics and RAI, Warwick
- Emily Thelwell – Clinical and Social Psychiatry, Warwick
- Dr. Maria Waheed – Nottingham
Updates
News & Events
Outputs
Publications & Software
Publications
Software
Contact
Get in Touch