It's pretty easy to see the problem here ... how much medical information can be included in a large language model (LLM) training set before the model spits out inaccurate answers.
However, rather than using a large language model to generate an explanation in natural language from scratch, the researchers use the LLM to transform an existing SHAP explanation into a readable narrative.
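A rough sketch of what that transformation step could look like: per-feature SHAP attributions are formatted into a prompt that asks an LLM to narrate them, rather than asking the LLM to explain the prediction on its own. The feature names, attribution values, and the `build_narrative_prompt` helper below are all hypothetical illustrations, not the researchers' actual code; a real system would obtain the attributions from the `shap` library and send the resulting prompt to an LLM API.

```python
# Hypothetical sketch: turn SHAP-style feature attributions into an LLM prompt.
# The attribution values here are made up for illustration.

def build_narrative_prompt(prediction, attributions):
    """Format per-feature SHAP values as a prompt asking an LLM to narrate them.

    attributions: mapping of feature name -> signed SHAP value.
    Features are listed by descending absolute impact.
    """
    lines = [
        f"- {name}: {value:+.3f}"
        for name, value in sorted(attributions.items(), key=lambda kv: -abs(kv[1]))
    ]
    return (
        f"The model predicted {prediction!r}. "
        "Rewrite these SHAP feature attributions as a short, plain-language "
        "explanation for a non-expert:\n" + "\n".join(lines)
    )

prompt = build_narrative_prompt(
    "high risk",
    {"age": 0.42, "blood_pressure": 0.17, "exercise_hours": -0.25},
)
print(prompt)
```

Because the LLM only rephrases numbers that the SHAP explainer already computed, the narrative stays grounded in the model's actual attributions instead of whatever the LLM might invent on its own.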
Apple’s overhaul of the Siri conversational assistant has been labeled by employees internally as “LLM Siri.” What is Apple planning?
The way LLM companies such as OpenAI generally deal with this is ... this answer from the official policy document link. One explanation could be that the backing model was trained ...