• Re: Why large language models are mysterious – even to their creators

    From Dude@punditster@gmail.com to alt.buddha.short.fat.guy on Fri Jan 2 18:38:06 2026
    From Newsgroup: alt.buddha.short.fat.guy

    On 1/2/2026 2:21 PM, Noah Sombrero wrote:
    > On Fri, 2 Jan 2026 21:39:01 +0000, Julian <julianlzb87@gmail.com>
    > wrote:
    >
    >> https://aeon.co/videos/why-large-language-models-are-mysterious-even-to-their-creators
    >
    > Sorry, not a bit mysterious. But it sure sounds good in sales
    > propaganda.

    It kind of goes without saying that nothing is mysterious to that
    political scientist, Sombrero, and that new math genius, Nick!
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Noah Sombrero@fedora@fea.st to alt.buddha.short.fat.guy on Fri Jan 2 23:18:24 2026
    From Newsgroup: alt.buddha.short.fat.guy

    On Fri, 2 Jan 2026 18:38:06 -0800, Dude <punditster@gmail.com> wrote:

    > On 1/2/2026 2:21 PM, Noah Sombrero wrote:
    >> On Fri, 2 Jan 2026 21:39:01 +0000, Julian <julianlzb87@gmail.com>
    >> wrote:
    >>
    >>> https://aeon.co/videos/why-large-language-models-are-mysterious-even-to-their-creators
    >>
    >> Sorry, not a bit mysterious. But it sure sounds good in sales
    >> propaganda.
    >
    > It kind of goes without saying that nothing is mysterious to that
    > political scientist, Sombrero, and that new math genius, Nick!

    Plenty mysterious to us ignorant types. But to the guys who made it?
    No, not mysterious. Every bit of it was put there with severe
    deliberateness. It takes a programmer to understand exactly how
    deliberate software must be. How a program will do exactly what the
    code says, every damn time right or wrong.
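
    To make that concrete, a minimal sketch (a hypothetical example, not
    anything from the video): the little program below has an off-by-one
    bug, and it reproduces that wrong answer identically on every run,
    exactly as written.

    # Hypothetical sketch: a program does exactly what the code says, right or wrong.
    def average(values):
        # Bug: divides by one less than the count, so the result is wrong,
        # but it is wrong in precisely the same way on every run.
        return sum(values) / (len(values) - 1)

    if __name__ == "__main__":
        data = [2, 4, 6]
        print(average(data))  # prints 6.0 every time; the correct mean is 4.0
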
    --
    Noah Sombrero mustachioed villain
    Don't get political with me young man
    or I'll tie you to a railroad track and
    <<<talk>>> to <<<YOOooooo>>>
    Who dares to talk to El Sombrero?
    dares: Ned
    does not dare: Julian shrinks in horror and warns others away

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From dart200@user7160@newsgrouper.org.invalid to alt.buddha.short.fat.guy on Sat Jan 3 02:06:19 2026
    From Newsgroup: alt.buddha.short.fat.guy

    On 1/2/26 1:39 PM, Julian wrote:
    > https://aeon.co/videos/why-large-language-models-are-mysterious-even-to-their-creators

    LLMs always impress until they don't
    --
    hi, i'm nick! let's end war

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Julian@julianlzb87@gmail.com to alt.buddha.short.fat.guy on Sat Jan 3 10:44:53 2026
    From Newsgroup: alt.buddha.short.fat.guy

    On 03/01/2026 10:06, dart200 wrote:
    > On 1/2/26 1:39 PM, Julian wrote:
    >> https://aeon.co/videos/why-large-language-models-are-mysterious-even-to-their-creators
    >
    > LLMs always impress until they don't

    Where did you get that? Platitudes-R-Us?
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From dart200@user7160@newsgrouper.org.invalid to alt.buddha.short.fat.guy on Sat Jan 3 12:58:08 2026
    From Newsgroup: alt.buddha.short.fat.guy

    On 1/3/26 2:44 AM, Julian wrote:
    > On 03/01/2026 10:06, dart200 wrote:
    >> On 1/2/26 1:39 PM, Julian wrote:
    >>> https://aeon.co/videos/why-large-language-models-are-mysterious-even-to-their-creators
    >>
    >> LLMs always impress until they don't
    > Where did you get that? Platitudes-R-Us?

    where did you get that, jokes-r-us???
    --
    hi, i'm nick! let's end war

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Dude@punditster@gmail.com to alt.buddha.short.fat.guy on Sat Jan 3 13:40:11 2026
    From Newsgroup: alt.buddha.short.fat.guy

    On 1/3/2026 12:58 PM, dart200 wrote:
    > On 1/3/26 2:44 AM, Julian wrote:
    >> On 03/01/2026 10:06, dart200 wrote:
    >>> On 1/2/26 1:39 PM, Julian wrote:
    >>>> https://aeon.co/videos/why-large-language-models-are-mysterious-even-to-their-creators
    >>>
    >>> LLMs always impress until they don't
    >> Where did you get that? Platitudes-R-Us?
    >
    > where did you get that, jokes-r-us???

    Those are the limitations of LLMs - they can produce convincing but
    incorrect information. They may also reflect biases in their training
    data. This would be the perfect job for you, Nick: fine-tuning and
    retrieval-augmented generation (RAG) to improve accuracy and safety.
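
    If it helps, here's a bare-bones sketch of the retrieval half of RAG
    (hypothetical names, keyword overlap instead of a real vector database,
    and no actual LLM call): fetch the most relevant documents, then prepend
    them to the prompt so the model answers from supplied text instead of
    from memory alone.

    # Hypothetical RAG sketch: retrieve relevant documents, then ground the prompt in them.

    def score(query: str, doc: str) -> int:
        """Crude relevance score: number of words the query and a document share."""
        return len(set(query.lower().split()) & set(doc.lower().split()))

    def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
        """Return the k documents with the highest overlap score."""
        return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

    def build_prompt(query: str, docs: list[str]) -> str:
        """Tell the model to answer only from the retrieved context."""
        context = "\n".join(retrieve(query, docs))
        return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

    if __name__ == "__main__":
        corpus = [
            "Large language models can produce convincing but incorrect text.",
            "Retrieval-augmented generation supplies source documents at query time.",
            "Fine-tuning adjusts model weights on domain-specific examples.",
        ]
        # The finished prompt would go to whichever LLM API you prefer (not shown here).
        print(build_prompt("How does retrieval-augmented generation reduce errors?", corpus))

    A real setup would swap the keyword overlap for embedding similarity,
    but the shape of the pipeline is the same.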

    You can do this!
    --- Synchronet 3.21a-Linux NewsLink 1.2