Trustworthy LLM systems --- require citing external sources --- no more hallucination
From olcott <polcott333@gmail.com> to comp.theory,comp.lang.c++,comp.lang.c,comp.ai.philosophy on Wed Oct 1 08:40:46 2025
The way to make LLM systems trustworthy, and thus to get rid
of AI hallucination, is to require them to cite their external
sources for every factual claim that they make.
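A minimal sketch of what such a requirement could look like in
code, assuming the LLM call itself is hidden behind a hypothetical
generate() placeholder: the validator rejects any sentence that
lacks a citation marker resolving to one of the supplied sources.
This only checks that markers are present and in range; a real
system would also have to verify that the cited passage actually
supports the claim.

import re

def generate(question: str, sources: list[str]) -> str:
    # Hypothetical stand-in for whatever LLM API is in use; a real
    # system would prompt the model with the numbered sources.
    return "The sky appears blue because of Rayleigh scattering [1]."

def validate_citations(answer: str, sources: list[str]) -> bool:
    # Reject any sentence without a marker like [1] that maps to a
    # known source; treat uncited claims as potential hallucination.
    for sentence in re.split(r"(?<=[.!?])\s+", answer.strip()):
        if not sentence:
            continue
        markers = [int(m) for m in re.findall(r"\[(\d+)\]", sentence)]
        if not markers or any(m < 1 or m > len(sources) for m in markers):
            return False
    return True

def answer_with_citations(question: str, sources: list[str]) -> str:
    answer = generate(question, sources)
    if not validate_citations(answer, sources):
        return "No sufficiently sourced answer available."
    return answer

if __name__ == "__main__":
    srcs = ["https://en.wikipedia.org/wiki/Rayleigh_scattering"]
    print(answer_with_citations("Why is the sky blue?", srcs))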
--
Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer