In the era of rapidly evolving Large Language Models (LLMs) and chatbot systems, we highlight the advantages of LLM systems based on Retrieval Augmented Generation (RAG). These systems excel when accurate answers matter more than creative ones, such as when answering questions about medical patients or clinical guidelines. RAG LLMs reduce hallucinations by citing the source of each fact, enable the use of private documents to answer questions, and support near-real-time data updates without re-tuning the LLM.

This session walks through the construction of a RAG-based LLM clinical chatbot system, leveraging John Snow Labs’ healthcare-specific LLM and NLP models within the Databricks platform.

The system uses LLMs to query a knowledge base through a vector database, populated at scale by Healthcare NLP within a Databricks notebook. Coupled with a user-friendly graphical interface, this setup lets users hold productive conversations with the system, improving the efficiency and effectiveness of healthcare workflows. To meet data privacy, security, and compliance requirements, the system runs entirely within the customer’s cloud infrastructure, with zero data sharing and no calls to external APIs.
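The retrieve-then-generate flow described above can be sketched in a few lines. This is a minimal illustration, not the session's actual implementation: the toy bag-of-words embedding and in-memory ranking stand in for the clinical embedding models and vector database used in the real system, and the final LLM call is omitted, so the function returns the grounded prompt instead.

```python
# Minimal sketch of the retrieve-then-generate (RAG) pattern.
# The embedding, similarity search, and prompt assembly below are
# illustrative stand-ins for the Healthcare NLP models and vector
# database used in the actual system.
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words embedding; a real system uses a clinical embedding model."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=2):
    """Rank private documents by similarity to the query (the vector-DB step)."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query, docs):
    """Ground the prompt in retrieved passages so each fact can cite its source."""
    context = retrieve(query, docs)
    sources = "\n".join(f"[{i + 1}] {d}" for i, d in enumerate(context))
    # A real deployment sends this prompt to a privately hosted LLM endpoint.
    return f"Answer using only these sources:\n{sources}\nQuestion: {query}"

docs = [
    "Metformin is a first-line therapy for type 2 diabetes.",
    "Patients on warfarin require regular INR monitoring.",
    "Annual retinal screening is recommended for diabetic patients.",
]
print(build_prompt("What is first-line therapy for type 2 diabetes?", docs))
```

Because retrieval narrows the context to a few ranked passages, updating the knowledge base is just a document refresh; no model re-tuning is needed.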


Resources:

Notebook
Veysel's Slides
Amir's Slides 

Amir Kermany

Amir is the Technical Industry Lead for Healthcare & Life Sciences at Databricks, where he focuses on developing advanced analytics solution accelerators to help healthcare and life sciences organizations in their data and AI journey.

Veysel Kocaman

Veysel is a Lead Data Scientist and ML Engineer at John Snow Labs, where he improves the Spark NLP for Healthcare library and delivers hands-on projects in healthcare and life sciences.


Watch the webinar