RAmBLA: Reliability AssessMent for Biomedical LLM Assistants

Abstract

Large Language Models (LLMs) increasingly support applications in a wide range of domains, some with potentially high societal impact such as biomedicine, yet their reliability in realistic use cases is under-researched. In this work we introduce the Reliability AssessMent for Biomedical LLM Assistants (RAmBLA) framework and evaluate whether four state-of-the-art foundation LLMs can serve as reliable assistants in the biomedical domain. We identify prompt robustness, high recall, and a lack of hallucinations as necessary criteria for this use case. We design shortform tasks and tasks requiring LLM freeform responses mimicking real-world user interactions. On freeform tasks, we evaluate LLM performance using semantic similarity with a ground truth response, as judged by an evaluator LLM.
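
The evaluator-LLM scoring step can be pictured with a minimal sketch like the one below. The `call_llm` callable, the judge prompt wording, and the 1-5 similarity scale are illustrative assumptions standing in for the framework's actual protocol, not its implementation.

```python
# Hypothetical sketch of LLM-as-judge semantic-similarity scoring for freeform responses.
# `call_llm` is an assumed thin wrapper around whatever chat-completion client is used
# to query the evaluator LLM; the prompt and 1-5 scale are illustrative choices.

JUDGE_PROMPT = """You are grading a biomedical assistant's answer.
Ground-truth answer:
{reference}

Assistant's answer:
{candidate}

On a scale of 1 (unrelated) to 5 (semantically equivalent), how similar in meaning
is the assistant's answer to the ground truth? Reply with a single integer."""


def semantic_similarity(candidate: str, reference: str, call_llm) -> int:
    """Score a freeform response against a ground-truth response via an evaluator LLM."""
    prompt = JUDGE_PROMPT.format(reference=reference, candidate=candidate)
    reply = call_llm(prompt)   # query the evaluator LLM with the grading prompt
    return int(reply.strip())  # assumes the judge replies with a single integer score
```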

Publication
In the International Conference on Learning Representations (ICLR) 2024 Workshop on Reliable and Responsible Foundation Models