From Newsgroup: comp.misc
Lawrence D'Oliveiro <ldo@nz.invalid> wrote or quoted:
:The researchers demonstrated that hallucinations stemmed from
. . .
: many Ds are in DEEPSEEK?" the DeepSeek-V3 model with 600 billion
. . .
The stuff about characters is not really the best way to show what
counts as "hallucination", since input to an AI chatbot usually
gets broken down first into tokens, semantic chunks that are bigger
than a single character. So a word like "desktop" might be
represented by one number, or sometimes by two, say one for "desk"
and another for "top". That means chatbots often never look at
individual letters at all.
Here's a small experiment with a chatbot:
Me:
How many "D" in "Desktop"?
, chatbot:
|There is 1 letter "D" in the word "Desktop".
, me:
Given what you know about the semantic representation of your
input and about how chatbots like you are constructed, what is
your confidence in your preceding answer being correct in
percent between 0 and 100?
, chatbot:
|My confidence in the preceding answer is 100%, because the
|word "Desktop" clearly contains exactly one uppercase "D" at
|the beginning and no other instances of the letter "D".
. Yeah, that's a tough argument to wiggle out of. So I'd say,
in this case, "Turing test passed with flying colors!".
(By that, I do not mean to challenge the researchers'
findings in general.)
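For what it's worth, the ground truth is trivial to verify at the
character level, outside token space; a one-liner in Python:

    # Count "D" directly in the raw string - the view a tokenized
    # model never gets.
    print("Desktop".count("D"))   # 1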
--- Synchronet 3.21a-Linux NewsLink 1.2