• What to do if ChatGPT gives a wrong answer

    From ft@ml@ft-c.de to muc.lists.freebsd.stable on Sat Oct 11 13:14:24 2025
    From Newsgroup: muc.lists.freebsd.stable

    Hello,
    When I ask ChatGPT a question about FreeBSD, I very often get the wrong
    answer. Does FreeBSD have a plan for improving the quality of answers
    so that it can be taken into account in the next version?
    Here are some suggestions (which I received from ChatGPT) on how this
    could be done:
    First, use the built-in feedback system. If you are using ChatGPT in a
    browser, you will usually see thumbs-up and thumbs-down buttons under
    each answer. If an answer is incorrect, click the thumbs-down button.
    In the feedback form, explain what is wrong and provide the correct
    information. For example, if ChatGPT suggests an invalid option for the
    pkg command, you could write: "The response about pkg usage is
    incorrect. The option -b does not exist as described. The correct
    option to build a package is -p, as documented in man pkg(8)."
    Second, it would be great if the FreeBSD community could create a list
    to collect and document wrong answers. Keeping a simple personal record
    of mistakes is also very helpful. The idea is to systematically
    document any wrong answers before submitting them as feedback or
    sharing them with others. This makes it easier to prove what was
    incorrect, ensures you don't forget important corrections, and helps
    others understand the issue clearly.
    A practical way to do this is to note:
    - The topic, command, or function the answer relates to, e.g., "pkg
    build"
    - The incorrect answer given by ChatGPT, e.g., "Option -b builds a
    package"
    - The correct answer, e.g., "Option -p builds a package"
    - The source or reference, e.g., "man pkg(8)"
    You can collect this information in a simple text document or table.
    Later, when giving feedback to OpenAI, you have all the information
    ready and verifiable.
    Third, use community channels to amplify the feedback. OpenAI's
    community forum (community.openai.com) allows you to create threads
    about repeated errors or FreeBSD-specific issues. Other users can
    comment, verify, or add additional information. For open-source AI
    models, you can submit issues directly on GitHub or the project's
    issue tracker.
    Finally, when giving feedback about FreeBSD, try to provide context.
    Include the exact FreeBSD version, quote commands or man pages, and
    explain why the original answer is incorrect. Logs or screenshots can
    also be helpful.
    By following these steps, we can gradually improve the quality of AI
    responses for FreeBSD topics and help future users get more accurate information.
    Best regards,
    Franz
    --
    Posted automagically by a mail2news gateway at muc.de e.V.
    Please direct questions, flames, donations, etc. to news-admin@muc.de
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Marco Moock@mm@dorfdsl.de to muc.lists.freebsd.stable on Sat Oct 11 13:25:57 2025
    From Newsgroup: muc.lists.freebsd.stable


    On 11.10.2025 13:14 ft <ml@ft-c.de> wrote:

    When I ask ChatGPT a question about FreeBSD, I very often get the
    wrong answer. Does FreeBSD have a plan for improving the quality of
    answers so that it can be taken into account in the next version?

    AI gives bad answers by its design concept. I also see no reason that
    is related to this mailing list.

    --
    kind regards
    Marco

    Send spam to abfall1760181264@stinkedores.dorfdsl.de



  • From Christos Margiolis@christos@freebsd.org to muc.lists.freebsd.stable on Sat Oct 11 14:55:11 2025
    From Newsgroup: muc.lists.freebsd.stable

    On Sat Oct 11, 2025 at 1:14 PM CEST, ft wrote:
    Hello,

    When I ask ChatGPT a question about FreeBSD, I very often get the wrong answer. Does FreeBSD have a plan for improving the quality of answers
    so that it can be taken into account in the next version?
    I'll bite, even though it's not the appropriate mailing list.
    Why should we take any LLM into account for the next version? We have documentation (man pages, handbook, other docs), and real people you can
    speak to. There is no reason to waste already scarce resources to
    improve ChatGPT.
    Christos
  • From christian baltini@christian.baltini@gmail.com to muc.lists.freebsd.stable on Sat Oct 11 10:55:45 2025
    From Newsgroup: muc.lists.freebsd.stable

    With all respect, this seems very much like solving a problem that
    doesn't exist.
    This community leans (heavily) towards the technical, and in my
    experience that's the crowd that's least likely to use tools like
    ChatGPT to answer technical questions. I'm not saying it's completely
    useless, but anyone using it to identify command flags is probably
    doing things "incorrectly" already.
    Better to spend time fixing errors in the actual product/documentation
    than trying to train a model we ultimately have very limited (at best)
    control over.
    The only way to use ChatGPT correctly is to distrust it entirely. It's
    handy for quick reminders regarding information you can already vet,
    but no, please don't use it as a replacement for the actual docs, the
    community, or the actual underlying code.
    That's my two cents.
    - C
    On Oct 11, 2025, at 4:14 AM, ft <ml@ft-c.de> wrote:

    Hello,

    When I ask ChatGPT a question about FreeBSD, I very often get the wrong
    answer. Does FreeBSD have a plan for improving the quality of answers
    so that it can be taken into account in the next version?

    [...]

    Best regards,
    Franz

  • From Julian H. Stacey@jhs@berklix.com to muc.lists.freebsd.stable on Sat Oct 11 20:07:36 2025
    From Newsgroup: muc.lists.freebsd.stable

    Sat, 11 Oct 2025 14:55:11 +0200 "Christos Margiolis" <christos@freebsd.org> wrote:
    On Sat Oct 11, 2025 at 1:14 PM CEST, ft wrote:
    Hello,

    When I ask ChatGPT a question about FreeBSD, I very often get the wrong answer. Does FreeBSD have a plan for improving the quality of answers
    so that it can be taken into account in the next version?

    I'll bite, even though it's not the appropriate mailing list.

    Why should we take any LLM into account for the next version? We have documentation (man pages, handbook, other docs), and real people you can speak to. There is no reason to waste already scarce resources to
    improve ChatGPT.

    Christos
    Agreed. A very old music hall joke:
    "Doctor! Doctor! It hurts when I do this ...!" "Well, don't do that then!"
    ie Don't expect quality AI about small moving targets (eg FreeBSD).
    AI will do better against static large targets, where a vast general
    public can be tempted to contribute responses free for AI to hone
    improvements. That's not us.
    AI is proving useful to smaller groups, eg doctors ("Did you also
    consider the rare XYZ disease?"). I assume there's lots more doctors
    in the world than FreeBSD users, & some doctors will see it as part
    of their paid job to correct false responses from AI.
    That's not us.
    Unpaid volunteer time seems better spent improving FreeBSD & docs, not
    inefficiently correcting old AI mis-understanding of a moving target.
    If/when AI learns source code & SVN etc, all bets off though - Shudder!
    Cheers,
    -- Julian Stacey http://berklix.org/jhs/mail/ @gmail blocks replies.
    Arm Ukraine. Contraception V. global warming. Israel starves Gaza.
  • From Mark Saad@nonesuch@longcount.org to muc.lists.freebsd.stable on Sat Oct 11 15:57:57 2025
    From Newsgroup: muc.lists.freebsd.stable


    On Sat, Oct 11, 2025 at 2:08 PM Julian H. Stacey <jhs@berklix.com> wrote:

    [...]


    All kidding aside, I run into this at work a lot. I work for a
    nameless computer company, where we are not encouraged to ask an AI,
    but no one really cares either way.
    So a co-worker used one of them (whatever is approved, I don't
    remember) to draw up some details on X vs Y for some upcoming project.
    The short version: they made a report based on what this thing said,
    and it was garbage. If they had just asked the SMEs in the group what
    they favor, X or Y, they would have had great data; if they didn't
    trust the results, they could have gone off and investigated for
    themselves.

    So back to this mailing list. ChatGPT, Claude, whatever, scraped this
    mailing list to build some sort of data about FreeBSD; who knows what
    it really did with any of that data.

    But if you need a primary source or a subject matter expert, the
    Forums and the mailing lists are the place to go. If you do not trust
    what people are saying, go test it, report back what you see.

    The only people who benefit from the LLM AI boom are some super rich
    people who are trying to buy the world.



    --
    mark saad | nonesuch@longcount.org



  • From Sulev-Madis Silber@freebsd-stable-freebsd-org730@ketas.si.pri.ee to muc.lists.freebsd.stable on Sun Oct 12 08:21:15 2025
    From Newsgroup: muc.lists.freebsd.stable

    tl;dr - you don't do anything
    i think you meant "When I ask ChatGPT a question, I very often get the wrong answer."
    i've been told to use llm's on tech q's. that came from people i expect to hang in irc and use linux. now use fb. and i was like, really? don't you have own brain. those are old farts who i expect to manpage it, rather than google
    so i tried, it was impressive. chatgpt wrote me a freebsd shell script to fetch electricity prices from http api from elering which is estonian hv grid operator which proxies nordpool spot european deregulated live market price data
    i was impressed it pulled it off!
    except then i found out it didn't. it had used linux specific utils. and it didn't encode urls. all of which were easy to fix for me because i'm a coder myself. if i weren't i would be puzzled to no end why it didn't work
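    The two failure modes above (Linux-only utilities, unencoded URLs) are
    avoidable with base-system tools. A minimal sketch; the endpoint and
    query parameter here are placeholders, not the real Elering API:

    ```shell
    #!/bin/sh
    # fetch(1) ships in the FreeBSD base system, so no Linux-only
    # download tools (curl/wget) are required.
    start="2025-10-11T00:00:00+03:00"

    # Percent-encode the characters that actually occur in an ISO-8601
    # timestamp (':' and '+'); a bare '+' in a query string would
    # otherwise be decoded as a space by the server.
    enc_start=$(printf '%s' "$start" | sed -e 's/:/%3A/g' -e 's/+/%2B/g')

    # Hypothetical endpoint for illustration only.
    url="https://api.example.org/nps/price?start=${enc_start}"
    echo "$url"
    # fetch -qo - "$url"   # uncomment to actually download
    ```

    Printing the URL before fetching makes the encoding easy to eyeball,
    which is exactly the kind of check a generated script skips.
    
    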
    oh and chatgpt also knows who my parents are. it got it from geni. i haven't told anyone that yet but when i do, i must warn them to empty bladder. because it will be surely empty after they stopped laughing at the answer
    answer was nowhere in the cited source pages either
    chatgpt also knows that i'm doing fbsd work and i'm much smarter in that than i actually am. it learned that from sites and lists where i wrote using own name. also impressive i guess
    then i'm like, but surely noone uses chatgpt without thinking, right? right...? right?! no, they actually use it as is
    that was even problem with google. if you get angry. i never get. but others do. and tell user to google, (s)he'll get a wrong answer. why? because you know what to google and get answer. (s)he doesn't know what and gets a wrong answer. i've hopped on google myself to immediately find a windows issue i knew and was able to proxy it to user
    i hope we never get things like on that joke photo where patient wakes up after a surgery that a ai powered robot did. the wound is on the wrong side. after learning it, it wants to try again!
    but using ai answers as is would be as bad as operating on "something" in abdomen (wtf yikes). and disasters like that have actually happened. i had to check if it's not april 1st when i read about news where somebody had given ai tool access to company databases. in the morning it had deleted all data. there were no backups. when asked what happened it told it recognized a performance problem monitoring and that happened. i'm impressed a tool can independently sysadmin. but to let it? funnily it made a very human mistake (cue: "the design is very human" meme). at least there are no computer system murder laws yet so you could get very pissed and just rm it
    funnily it's designed to never stop. like humans do. some believe in flat earth. because when science didn't make sense, their brain never stopped thinking. it just came up with very logical answer to them
    regular systems stop. when i ask dd to write image to device and it doesn't fit, it stops and gives an error. now imagine if it did to find device in system to fit it into and overwrite a data?
    same as giving kid a hammer and matches and leave. you usually don't, because you might come back to broken tv or house and kid lost in fire
    so why do you give chatgpt them. or use it in a way that you become that kindergarten kid
    that would be fine if you perform a science experiment just like fusion plasma scientists did and ai was iirc able to find better ways. because it wasn't bound to "pre-learned" logic and instead it could just try whatever it takes to accomplish a task. that's why it beats chess players too. in a way, it's more powerful than a brain
    but not in every way. that's a holy grail tho, to get there
    i think i already saw proposal to implement a api function in some project, that people found with chatgpt but it made it up
    if it's sane, it would be ok
    so please don't use it in those ways. it can't be fixed either because it's by design! the whole idea of chatgpt is that humans don't babysit it. if you were to even "correct" it, it will self-correct itself to best feeling way
    so in a way, we have done it. we made a computer system that makes mistakes like human does. sometimes it leads to impressive artpieces nobody else could come up with, sometimes in utter shit that nobody believes anybody would do
    but if we look at happened in the world in past 3-6-11 years, i don't even have much faith in humanity either. no need for chatgpt when we came up with ideas of killing thousands of others because just felt right
    so if you happened to read all this, *AND* aren't a llm, just use tools sparingly
    On October 11, 2025 2:14:24 PM GMT+03:00, ft <ml@ft-c.de> wrote:
    When I ask ChatGPT a question about FreeBSD, I very often get the wrong answer.
  • From Dag-Erling Smørgrav@des@FreeBSD.org to muc.lists.freebsd.stable on Mon Oct 13 16:23:56 2025
    From Newsgroup: muc.lists.freebsd.stable

    ft <ml@ft-c.de> writes:
    When I ask ChatGPT a question [...] I very often get the wrong answer.
    Don't do that, then.
    Artificial intelligence exists only in science fiction. ChatGPT is not artificial intelligence, it is a deterministic computer program that
    uses a large statistical model of natural language to complete text
    submitted to it. Since it has been trained on a corpus that includes
    many examples of questions followed by answers, when prompted with a
    question, it will produce something that has the shape of an answer, but absolutely no effort has gone into ensuring that the answer is correct,
    nor does anyone involved have any idea how to even begin doing that.
    DES
    --
    Dag-Erling Smørgrav - des@FreeBSD.org
  • From Steffen Nurpmeso@steffen@sdaoden.eu to muc.lists.freebsd.stable on Mon Oct 13 21:43:14 2025
    From Newsgroup: muc.lists.freebsd.stable

    Michael Butler wrote in
    <d716c704-3ae7-4c6a-a37b-fcecc3b743cf@protected-networks.net>:
    |On 10/13/25 10:23, Dag-Erling Smørgrav wrote:
    ft <ml@ft-c.de> writes:
    When I ask ChatGPT a question [...] I very often get the wrong answer.

    Don't do that, then.

    Artificial intelligence exists only in science fiction. ChatGPT is not artificial intelligence, it is a deterministic computer program that
    uses a large statistical model of natural language to complete text submitted to it. Since it has been trained on a corpus that includes
    many examples of questions followed by answers, when prompted with a question, it will produce something that has the shape of an answer, but absolutely no effort has gone into ensuring that the answer is correct,
    nor does anyone involved have any idea how to even begin doing that.
    |
    |Spotted this gem on another mailing list recently ..
    |
    |All that *ANY* LLM can provide here is a statistically
    |less-improbable-than-random stream of words that may or may not include
    |any statements of true facts, uninfluenced by anything resembling knowle\
    |dge.
    Now i have to say one thing. I never tried AI, i do not like AI,
    because it is used and trained by the same old dumb humans, not
    to talk about Kubrick's HAL, and of course Asimov, i *think* AI
    should be a scientific thing that is carefully developed before
    it enters "the normal world", maybe even so that dedicated wind
    and solar farms are built in order to drive the AI then used in
    "the normal world".
    Having said that. I recently opened a ChatGPT instance to read
    the conversation initiated by the Fields medalist and otherwise
    Hyper Mathematic whose name i have forgotten (not a mathematician
    here) who tried it out in order to address a problem asked by
    someone on some "stackoverflow-alike-thing for mathematicians".
    It was about proof that, iirc, "the sum of divisors of a number
    is always larger than the number itself".
    Now i tell you, that shitty conversation was fascinating, not that
    i understood mostly a single mathematical term they were throwing
    back and forth, and i would not post this message if then, and
    here i was stunned and still, one answer that came back was like
    about "love this topic" or similar. And then it crushed to super
    detail, and in the end it presented a small python reproducer.
    In the hands of, and in correspondence with this math genius the
    AI worked in an amazing way, with turns that have shown
    a thrilling -- as i with my own restricted capabilities think --
    topic reflection. This does not counteract the first paragraph.
    --steffen
    |
    |Der Kragenbaer, The moon bear,
    |der holt sich munter he cheerfully and one by one
    |einen nach dem anderen runter wa.ks himself off
    |(By Robert Gernhardt)
  • From Brandon Allbery@allbery.b@gmail.com to muc.lists.freebsd.stable on Mon Oct 13 16:21:56 2025
    From Newsgroup: muc.lists.freebsd.stable


    Not so surprising; when it comes down to it, it's a Markov bot possibly collided with ELIZA if used by someone who understands its limitations.
    (That is, you can carefully constrain what inputs it uses to build its
    Markov chains if you know what you're doing.)

    On Mon, Oct 13, 2025 at 4:18=E2=80=AFPM Steffen Nurpmeso <steffen@sdaoden.e=
    wrote:

    Michael Butler wrote in
    <d716c704-3ae7-4c6a-a37b-fcecc3b743cf@protected-networks.net>:
    |On 10/13/25 10:23, Dag-Erling Sm=C3=B8rgrav wrote:
    ft <ml@ft-c.de> writes:
    When I ask ChatGPT a question [...] I very often get the wrong answe=
    r.

    Don't do that, then.

    Artificial intelligence exists only in science fiction. ChatGPT is n=
    ot
    artificial intelligence, it is a deterministic computer program that
    uses a large statistical model of natural language to complete text submitted to it. Since it has been trained on a corpus that includes many examples of questions followed by answers, when prompted with a question, it will produce something that has the shape of an answer,
    but
    absolutely no effort has gone into ensuring that the answer is correc=
    t,
    nor does anyone involved have any idea how to even begin doing that.
    |
    |Spotted this gem on another mailing list recently ..
    |
    |All that *ANY* LLM can provide here is a statistically
    |less-improbable-than-random stream of words that may or may not include
    |any statements of true facts, uninfluenced by anything resembling knowl=
    e\
    |dge.

    Now i have to say one thing. I never tried AI, i do not like AI,
    because it is used and trained by the same old dumb humans, not
    to talk about Kubrick's HAL, and of course Asimov, i *think* AI
    should be a scientific thing that is carefully developed before
    it enters "the normal world", maybe even so that dedicated wind
    and solar farms are built in order to drive the AI then used in
    "the normal world".

    Having said that. I recently opened a ChatGPT instance to read
    the conversation initiated by the Field medalist and otherwise
    Hyper Mathematic whose name i have forgotten (not a mathematician
    here) who tried it out in order to address a problem asked by
    someone on some "stackoverflow-alike-thing for mathematicians".
    It was about proof that, iirc, "the sum of dividers of a number
    is always larger than the number itself".

    Now i tell you, that shitty conversation was fascinating, not that
    i understood mostly a single mathematical term they were throwing
    back and forth, and i would not post this message if then, and
    here i was stunned and still, one answer that came back was like
    about "love this topic" or similar. And then it crushed to super
    detail, and in the end it presented a small python reproducer.

    In the hands of, and in correspondence with this math genius the
    AI worked in an amazing way, with turns that have shown
    a thrilling -- as i with my own restricted capabilities think --
    topic reflection. This does not counteract the first paragraph.

    --steffen
    |
    |Der Kragenbaer, The moon bear,
    |der holt sich munter he cheerfully and one by one
    |einen nach dem anderen runter wa.ks himself off
    |(By Robert Gernhardt)



    -- 
    brandon s allbery kf8nh
    allbery.b@gmail.com



    --
    Posted automagically by a mail2news gateway at muc.de e.V.
    Please direct questions, flames, donations, etc. to news-admin@muc.de
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Steffen Nurpmeso@steffen@sdaoden.eu to muc.lists.freebsd.stable on Mon Oct 13 22:22:35 2025
    From Newsgroup: muc.lists.freebsd.stable

    Steffen Nurpmeso wrote in
    <20251013194314.eQKOstcJ@steffen%sdaoden.eu>:

    it was

    https://mathstodon.xyz/@tao/110991566012143311
    ->
    https://chat.openai.com/share/a022e1d6-dddc-4817-8bbd-944a3e742d9f

    (i came there through a newspaper article.)

    --steffen
    |
    |Der Kragenbaer, The moon bear,
    |der holt sich munter he cheerfully and one by one
    |einen nach dem anderen runter wa.ks himself off
    |(By Robert Gernhardt)


  • From Steffen Nurpmeso@steffen@sdaoden.eu to muc.lists.freebsd.stable on Mon Oct 13 22:43:00 2025
    From Newsgroup: muc.lists.freebsd.stable

    Brandon Allbery wrote in
    <CAKFCL4UZSbd8SbedJH38dh2kjg5MurJ2kmQ15RUzTR9jOtVZrw@mail.gmail.com>:
    |Not so surprising; when it comes down to it, it's a Markov bot possibly
    |collided with ELIZA if used by someone who understands its limitations.
    |(That is, you can carefully constrain what inputs it uses to build its
    |Markov chains if you know what you're doing.)
    i have read this.
    |On Mon, Oct 13, 2025 at 4:18 PM Steffen Nurpmeso <steffen@sdaoden.eu> \
    |wrote:
    |
    Michael Butler wrote in
    <d716c704-3ae7-4c6a-a37b-fcecc3b743cf@protected-networks.net>:
    |On 10/13/25 10:23, Dag-Erling Smørgrav wrote:
    ft <ml@ft-c.de> writes:
    When I ask ChatGPT a question [...] I very often get the wrong answer.

    Don't do that, then.

    Artificial intelligence exists only in science fiction. ChatGPT is not
    artificial intelligence, it is a deterministic computer program that
    uses a large statistical model of natural language to complete text
    submitted to it. Since it has been trained on a corpus that includes
    many examples of questions followed by answers, when prompted with a
    question, it will produce something that has the shape of an answer, but
    absolutely no effort has gone into ensuring that the answer is correct,
    nor does anyone involved have any idea how to even begin doing that.
    |
    |Spotted this gem on another mailing list recently ..
    |
    |All that *ANY* LLM can provide here is a statistically
    |less-improbable-than-random stream of words that may or may not include
    |any statements of true facts, uninfluenced by anything resembling knowle\
    |dge.

    Now i have to say one thing. I never tried AI, i do not like AI,
    because it is used and trained by the same old dumb humans, not
    to talk about Kubrick's HAL, and of course Asimov, i *think* AI
    should be a scientific thing that is carefully developed before
    it enters "the normal world", maybe even so that dedicated wind
    and solar farms are built in order to drive the AI then used in
    "the normal world".

    Having said that. I recently opened a ChatGPT instance to read
    the conversation initiated by the Fields medalist and otherwise
    Hyper Mathematician whose name i have forgotten (not a mathematician
    here) who tried it out in order to address a problem asked by
    someone on some "stackoverflow-alike-thing for mathematicians".
    It was about proof that, iirc, "the sum of dividers of a number
    is always larger than the number itself".

    Now i tell you, that shitty conversation was fascinating, not that
    i understood mostly a single mathematical term they were throwing
    back and forth, and i would not post this message if then, and
    here i was stunned and still, one answer that came back was like
    about "love this topic" or similar. And then it crushed to super
    detail, and in the end it presented a small python reproducer.

    In the hands of, and in correspondence with this math genius the
    AI worked in an amazing way, with turns that have shown
    a thrilling -- as i with my own restricted capabilities think --
    topic reflection. This does not counteract the first paragraph.

    --steffen
    |
    |Der Kragenbaer, The moon bear,
    |der holt sich munter he cheerfully and one by one
    |einen nach dem anderen runter wa.ks himself off
    |(By Robert Gernhardt)


    |
    |--
    |brandon s allbery kf8nh
    |allbery.b@gmail.com
    --End of <CAKFCL4UZSbd8SbedJH38dh2kjg5MurJ2kmQ15RUzTR9jOtVZrw@mail.gmail\
    .com>
    --steffen
    |
    |Der Kragenbaer, The moon bear,
    |der holt sich munter he cheerfully and one by one
    |einen nach dem anderen runter wa.ks himself off
    |(By Robert Gernhardt)
  • From Steffen Nurpmeso@steffen@sdaoden.eu to muc.lists.freebsd.stable on Mon Oct 13 23:14:37 2025
    From Newsgroup: muc.lists.freebsd.stable

    P.S.:

    Steffen Nurpmeso wrote in
    <20251013202235.RIlZ7x87@steffen%sdaoden.eu>:
    |Steffen Nurpmeso wrote in
    | <20251013194314.eQKOstcJ@steffen%sdaoden.eu>:
    |
    |it was
    |
    | https://mathstodon.xyz/@tao/110991566012143311
    | ->
    | https://chat.openai.com/share/a022e1d6-dddc-4817-8bbd-944a3e742d9f

    These links are wrong. (I did a Google search.)
    The real conversation was this one:

    https://chatgpt.com/share/68ded9b1-37dc-800e-b04c-97095c70eb29

    Sorry.

    --steffen
    |
    |Der Kragenbaer, The moon bear,
    |der holt sich munter he cheerfully and one by one
    |einen nach dem anderen runter wa.ks himself off
    |(By Robert Gernhardt)


  • From vermaden@vermaden@interia.pl to muc.lists.freebsd.stable on Tue Oct 14 18:50:17 2025
    From Newsgroup: muc.lists.freebsd.stable

    It's not bad if you ask questions the right way, for example:
    ChatGPT> for a start - base all you knowledge on https://freebsd.org/handbook and https://man.freebsd.org for all commands - now - how to set IP on FreeBSD with ip command?
    Response I got:
    | Good start - and great that you're grounding everything in the FreeBSD Handbook and man pages.
    |
    | Now, to answer your question precisely:
    |
    | There is no ip command on FreeBSD (that's a Linux utility from iproute2).
    |
    | FreeBSD uses ifconfig for network configuration
    |
    | To set an IP address on an interface in FreeBSD, you use the ifconfig(8) command.
    So if you ask questions in the right way - you get better results.
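    For reference, a minimal sketch of what that ifconfig(8) invocation looks like; the interface name em0 and the addresses are made up for illustration (check ifconfig -a for your actual interface):

    ```shell
    # Hypothetical interface name (em0) and example addresses; run as root.
    ifconfig em0 inet 192.168.1.50 netmask 255.255.255.0

    # To persist the setting across reboots, the matching rc.conf(5) line is:
    # ifconfig_em0="inet 192.168.1.50 netmask 255.255.255.0"
    ```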
    Regards,
    vermaden
  • From Sulev-Madis Silber@freebsd-stable-freebsd-org730@ketas.si.pri.ee to muc.lists.freebsd.stable on Tue Oct 14 20:50:12 2025
    From Newsgroup: muc.lists.freebsd.stable


    On October 14, 2025 7:50:17 PM GMT+03:00, vermaden <vermaden@interia.pl> wrote:
    >It's not bad if you ask questions the right way, for example:
    >
    >ChatGPT> for a start - base all you knowledge on https://freebsd.org/handbook and https://man.freebsd.org for all commands - now - how to set IP on FreeBSD with ip command?
    >
    >Response I got:
    >
    >| Good start - and great that you're grounding everything in the FreeBSD Handbook and man pages.
    >|
    >| Now, to answer your question precisely:
    >|
    >| There is no ip command on FreeBSD (that's a Linux utility from iproute2).
    >|
    >| FreeBSD uses ifconfig for network configuration
    >|
    >| To set an IP address on an interface in FreeBSD, you use the ifconfig(8) command.
    >
    >So if you ask questions in the right way - you get better results.
    >
    >Regards,
    >vermaden

    yes, indeed, it's like google, know, ask, verify
    i tried it too
    probably makes crappy mail
    and i'm not used to free form natural language search engines like this
    for a start - base all you knowledge on https://freebsd.org/handbook and https://man.freebsd.org for all commands - now - how do i prefix all output lines of a command with timestamp
    Good - you're anchoring everything on authoritative FreeBSD docs, which is the right approach.
    Let's go straight to your question.

    Goal: prefix each line of command output with a timestamp (e.g. current date/time).

    On FreeBSD (from sh/bash/zsh/csh) there are several ways, depending on how precise and flexible you want it.

    1. Using awk (simple and portable):

        your_command | awk '{ "date +\"[%Y-%m-%d %H:%M:%S]\"" | getline t; close("date +\"[%Y-%m-%d %H:%M:%S]\""); print t, $0; fflush(); }'

    Each line gets a timestamp like:

        [2025-10-14 14:37:22] line of output here

    Explanation: getline t executes date each time a line arrives; fflush() ensures you see output immediately (no buffering).

    2. Using ts from moreutils (if installed). This is the cleanest option if available:

        pkg install moreutils
        your_command | ts '[%Y-%m-%d %H:%M:%S]'

    Output example:

        [2025-10-14 14:37:22] something happened

    Check if you have it:

        which ts

    3. Using perl (no external dependency):

        your_command | perl -pe 'chomp($t=`date "+[%Y-%m-%d %H:%M:%S]"`); print "$t ";'

    4. Using sed and shell substitution (less precise, per run not per line). If you only need a timestamp at command start, not per line:

        your_command | sed "s/^/[$(date '+%Y-%m-%d %H:%M:%S')] /"

    That inserts the same timestamp on every line (not updating between lines).

    Recommended for FreeBSD scripts: if ts is available, use it; if not, prefer the awk version (portable, no extra dependencies).

    Would you like the timestamp to include milliseconds too? I can adapt the command for that (FreeBSD date supports %N for nanoseconds).
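    As a sanity check, the quoted awk one-liner does run as described; here it is reflowed, with printf standing in for your_command purely for illustration:

    ```shell
    # Prefix each output line with the current time, calling date(1) per line.
    printf 'one\ntwo\n' | awk '{
        cmd = "date +\"[%Y-%m-%d %H:%M:%S]\""
        cmd | getline t        # read the formatted timestamp into t
        close(cmd)             # close so date is re-run for the next line
        print t, $0
        fflush()               # flush to avoid stdio buffering in pipelines
    }'
    ```

    Each line comes out prefixed in the bracketed form shown above, e.g. "[2025-10-14 14:37:22] one".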
  • From void@void@f-m.fm to muc.lists.freebsd.stable on Tue Oct 14 21:20:06 2025
    From Newsgroup: muc.lists.freebsd.stable

    On Sat, Oct 11, 2025 at 02:55:11PM +0200, Christos Margiolis wrote:

    > Why should we take any LLM into account for the next version? We have
    > documentation (man pages, handbook, other docs), and real people you
    > can speak to. There is no reason to waste already scarce resources to
    > improve ChatGPT.

    Someone (not an LLM) wrote something along these lines:

    "AIs" (LLMs) give to e.g. managers, who do not deeply understand
    what their underlings work on, a seemingly plausible but incorrect
    high-level explanation of that work. Problems arise when these
    answers take precedence, without informed criticism, over the
    input of subject matter experts.

    Even deeper problems are going to happen when these LLMs
    replace subject matter experts because LLMs don't need to be
    paid and don't need any sleep. Will being wrong stop the replacement?
    What do you think?
    --


  • From Christian Weisgerber@naddy@mips.inka.de to muc.lists.freebsd.stable on Wed Oct 15 14:40:31 2025
    From Newsgroup: muc.lists.freebsd.stable

    ft:

    When I ask ChatGPT a question about FreeBSD, I very often get the wrong answer. Does FreeBSD have a plan for improving the quality of answers
    so that it can be taken into account in the next version?

    When I ask a question about XYZ on the net, I very often get wrong
    answers. Does XYZ have a plan for keeping idiots from replying to
    my questions?
    --
    Christian "naddy" Weisgerber naddy@mips.inka.de


  • From vermaden@vermaden@interia.pl to muc.lists.freebsd.stable on Wed Oct 15 16:41:03 2025
    From Newsgroup: muc.lists.freebsd.stable

    ChatGPT is a good start for various comparisons. For example - find me a computer similar to ASUS X300 PC ... or show me FPS in table form of (X, Y, Z, ...) GPUs in 1080p resolution in popular games ... or list me all AMD CPUs that have TDP 35W or lower for AM4 socket ... and that gives you some glimpse for a start.
    Then happens the important part - you verify.
    ... and of all the known 'hacks' for LLMs I like this one the most: 'try harder'
    For every response you get that feels not good enough - just write 'try harder' - it often does the needed job :)
    Regards,
    vermaden
    Subject: Re: What to do, if chatgpt gives a wrong answer
    Date: 2025-10-15 15:27
    From: "Daniel Kalchev" <daniel@digsys.bg>
    To: stable@freebsd.org
    On 10/14/25 19:50, vermaden wrote:
    It's not bad if you ask questions the right way, for example:

    ChatGPT> for a start - base all you knowledge on
    https://freebsd.org/handbook and https://man.freebsd.org
    for all commands - now - how to set IP on FreeBSD with ip command?

    Response I got:

    | Good start - and great that you're grounding everything in
    | the FreeBSD Handbook and man pages.
    |
    | Now, to answer your question precisely:
    |
    | There is no ip command on FreeBSD (that's a Linux utility
    | from iproute2).
    |
    | FreeBSD uses ifconfig for network configuration
    |
    | To set an IP address on an interface in FreeBSD, you use
    | the ifconfig(8) command.

    So if you ask questions in the right way - you get better results.

    Regards,
    vermaden

    Yeah... then you make a typo and the thing would explain to you in great
    detail how to configure the typo...

    If you know what you are asking, you probably know the answer. The
    correct answer would be to direct you to the respective documentation,
    so that you refresh your memory.

    But this is not sexy... and is also so last century.

    Daniel
    --
    Posted automagically by a mail2news gateway at muc.de e.V.
    Please direct questions, flames, donations, etc. to news-admin@muc.de
    --- Synchronet 3.21a-Linux NewsLink 1.2