Large language models (LLMs), including Generative Pre-trained Transformer (GPT) artificial intelligence (AI) models such as ChatGPT, are used in healthcare to improve patient-clinician communication.
Traditional natural language processing (NLP) approaches, such as supervised deep learning, require problem-specific annotations and model training. The scarcity of human-annotated data, combined with the cost of annotation and training, makes such algorithms difficult to build.