• Re: stray ipv6 router????

    From Graham J@nobody@nowhere.co.uk to alt.comp.os.windows-11 on Sat Jan 31 12:53:42 2026
    From Newsgroup: alt.comp.os.windows-11

    T wrote:
    [snip]

    Go there and charge them accordingly.


    Appt set for Friday afternoon.

    What did you find?
    --
    Graham J
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From T@T@invalid.invalid to alt.comp.os.windows-11 on Tue Feb 24 15:10:41 2026
    From Newsgroup: alt.comp.os.windows-11

    On 2/2/26 18:15, Carlos E.R. wrote:

    RTFM

    "Just Google it". The new RTFM. (For respondents
    that do not know the answer but feel the need
    to condescend.)

    ChatGPT is extremely helpful.

    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Carlos E.R.@robin_listas@es.invalid to alt.comp.os.windows-11 on Wed Feb 25 01:15:22 2026
    From Newsgroup: alt.comp.os.windows-11

    On 2026-02-25 00:10, T wrote:
    On 2/2/26 18:15, Carlos E.R. wrote:

    RTFM

    "Just Google it". The new RTFM. (For respondents
    that do not know the answer but feel the need
    to condescend.)

    ChatGPT is extremely helpful.


    Not in this case.

    If you read carefully the context of me saying "RTFM", you should know
    that neither Google nor ChatGPT will help in this case.
    --
    Cheers, Carlos.
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Paul@nospam@needed.invalid to alt.comp.os.windows-11 on Tue Feb 24 21:26:58 2026
    From Newsgroup: alt.comp.os.windows-11

    On Tue, 2/24/2026 6:10 PM, T wrote:
    On 2/2/26 18:15, Carlos E.R. wrote:

    RTFM

    "Just Google it". The new RTFM. (For respondents
    that do not know the answer but feel the need
    to condescend.)

    ChatGPT is extremely helpful.


    A Google Search today, can have a Gemini summary result
    at the top. But, this is not consistent. Any sort of
    load present on the Google end, causes the Gemini part
    to go missing.

    I even had one search result, where Google sends back
    the formatted page. It leaves a "hole" at the top
    for the Gemini result (meaning it thinks it has the
    hardware resources to Gemini it), and no result at all comes
    from Gemini. Kinda a race condition, where the back end
    says "no resources available" and the rest of the
    query is already on the end-users screen :-)

    As a result of the under-resourced feature, I haven't
    actually seen too many of these summaries/AI answers
    at the top. But if you check, you might get one.

    I think that is called "legal gambling".

    Paul
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Char Jackson@none@none.invalid to alt.comp.os.windows-11 on Tue Feb 24 22:08:03 2026
    From Newsgroup: alt.comp.os.windows-11

    On Tue, 24 Feb 2026 21:26:58 -0500, Paul <nospam@needed.invalid> wrote:

    A Google Search today, can have a Gemini summary result
    at the top. But, this is not consistent. Any sort of
    load present on the Google end, causes the Gemini part
    to go missing.

    I haven't seen that here, nor have I seen anyone else reporting it. Is
    it common? Is it only happening at your house? It'd be cool to figure
    out what's causing that.

    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Paul@nospam@needed.invalid to alt.comp.os.windows-11 on Wed Feb 25 10:38:04 2026
    From Newsgroup: alt.comp.os.windows-11

    On Tue, 2/24/2026 11:08 PM, Char Jackson wrote:
    On Tue, 24 Feb 2026 21:26:58 -0500, Paul <nospam@needed.invalid> wrote:

    A Google Search today, can have a Gemini summary result
    at the top. But, this is not consistent. Any sort of
    load present on the Google end, causes the Gemini part
    to go missing.

    I haven't seen that here, nor have I seen anyone else reporting it. Is
    it common? Is it only happening at your house? It'd be cool to figure
    out what's causing that.


    I do a lot of searches.

    Gemini summaries are at the 5-10% level. Most
    searches just return links. The number of returned
    links varies with time of day. Sometimes, I only
    get one page of links and no next page button.

    That's why, when three searches in a row have a
    Gemini summary at the top, that's some kind of miracle.

    I have tried experimenting with context sensitive search.
    The first is a keyword search.
    The second is intended to trigger Gemini summary.

    lollypop locomotive shoeshine

    How many items are there in a bakers dozen ?

    The syntax is supposed to differentiate an attempt
    to trigger Gemini summary, versus not doing it.
    The first query does not contain enough sentence
    structure, for the AI filtering to make a high
    probability conclusion about what you're asking.
    It then just finds links that might contain
    the keywords. Sometimes a Google search uses
    a "spelling lame" when it gets bored. Ask for
    ASM1066, it returns Analog Devices ADM1066.
    Gee, thanks.
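
    A hypothetical sketch of the routing heuristic Paul describes, in
    Python (the cue list and function name are illustrative, not Google's
    actual logic): queries with sentence structure get routed toward a
    summary, bare keyword strings just go to the link index.

```python
# Toy classifier: does a query look like a natural-language question
# (candidate for an AI summary) or a bare keyword list (plain search)?
QUESTION_CUES = {"how", "what", "why", "when", "where", "who",
                 "which", "is", "are", "does", "do", "can"}

def looks_like_question(query: str) -> bool:
    words = query.lower().split()
    if not words:
        return False
    # A trailing "?" or a leading question word suggests sentence structure.
    return query.strip().endswith("?") or words[0] in QUESTION_CUES

# Paul's two test queries:
assert not looks_like_question("lollypop locomotive shoeshine")
assert looks_like_question("How many items are there in a bakers dozen ?")
```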

    Paul
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Char Jackson@none@none.invalid to alt.comp.os.windows-11 on Wed Feb 25 14:47:24 2026
    From Newsgroup: alt.comp.os.windows-11

    On Wed, 25 Feb 2026 10:38:04 -0500, Paul <nospam@needed.invalid> wrote:

    On Tue, 2/24/2026 11:08 PM, Char Jackson wrote:
    On Tue, 24 Feb 2026 21:26:58 -0500, Paul <nospam@needed.invalid> wrote:

    A Google Search today, can have a Gemini summary result
    at the top. But, this is not consistent. Any sort of
    load present on the Google end, causes the Gemini part
    to go missing.

    I haven't seen that here, nor have I seen anyone else reporting it. Is
    it common? It is only happening at your house? It'd be cool to figure
    out what's causing that.


    I do a lot of searches.

    Gemini summaries are at the 5-10% level. Most
    searches just return links. The number of returned
    links varies with time of day. Sometimes, I only
    get one page of links and no next page button.

    That's why, when three searches in a row have a
    Gemini summary at the top, that's some kind of miracle.

    I'm leaning toward an ad blocker or maybe a script blocker, something at
    your end.

    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From T@T@invalid.invalid to alt.comp.os.windows-11 on Sun Mar 1 18:55:58 2026
    From Newsgroup: alt.comp.os.windows-11

    On 2/24/26 16:15, Carlos E.R. wrote:
    On 2026-02-25 00:10, T wrote:
    On 2/2/26 18:15, Carlos E.R. wrote:

    RTFM

    "Just Google it". The new RTFM. (For respondents
    that do not know the answer but feel the need
    to condescend.)

    ChatGPT is extremely helpful.


    Not in this case.

    If you read carefully the context of me saying "RTFM", you should know
    that neither Google nor ChatGPT will help in this case.


    I was actually cracking a joke.

    With ChatGPT, or any AI, you have to be able
    to discern good results from AI slop. Oftentimes
    when I know I am getting slop from ChatGPT, I will
    switch to search.brave.com's AI and get good results.

    I will tell ChatGPT when it has made a mistake so
    it trains on it, which it assures me (hahaha) it
    does.
    --- Synchronet 3.21d-Linux NewsLink 1.2
  • From Paul@nospam@needed.invalid to alt.comp.os.windows-11 on Sun Mar 1 23:22:16 2026
    From Newsgroup: alt.comp.os.windows-11

    On Sun, 3/1/2026 9:55 PM, T wrote:


    I was actually cracking a joke.

    With ChatGPT, or any AI, you have to be able
    to discern good results from AI slop. Oftentimes
    when I know I am getting slop from ChatGPT, I will
    switch to search.brave.com's AI and get good results.

    I will tell ChatGPT when it has made a mistake so
    it trains on it, which it assures me (hahaha) it
    does.

    I'm sure the AI is snickering while it assures you
    about the training.

    An Inference machine is not a Training Machine.

    These are at opposite ends of the shop. They also don't
    have to be the same kind of equipment (for efficiency
    reasons).
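
    Paul's distinction can be shown with a toy one-neuron model (pure
    Python, purely illustrative): inference only reads the weights,
    while a training step is the one place they change.

```python
weights = [0.5, -0.2]  # stand-in for a trained model's parameters

def infer(x):
    # Forward pass only: the weights are read, never written.
    return weights[0] * x + weights[1]

def train_step(x, target, lr=0.1):
    # One gradient-descent step on squared error; this is the only
    # code path that mutates the weights.
    error = infer(x) - target
    weights[0] -= lr * error * x
    weights[1] -= lr * error

before = list(weights)
infer(3.0)                # "chatting" with a deployed model is this call...
assert weights == before  # ...and it leaves the weights untouched

train_step(3.0, 2.0)      # training happens elsewhere, offline
assert weights != before
```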

    *******

    Brave LEO uses Mixtral, Llama 2, and Claude (subscription service).

    Ask Brave uses...

    "In an internal evaluation of major AI search engines, Ask Brave - powered
    by Brave's LLM Context API and open-weights Qwen3 - outperforms ChatGPT"

    https://en.wikipedia.org/wiki/Qwen

    They're buying tokens from a lot of different sources. It would be
    a bit expensive, to build their own datacenter for example. But with
    some of the models being available for download, they can
    play with some of it locally.

    Paul
    --- Synchronet 3.21d-Linux NewsLink 1.2
  • From T@T@invalid.invalid to alt.comp.os.windows-11 on Mon Mar 2 17:40:23 2026
    From Newsgroup: alt.comp.os.windows-11

    On 3/1/26 20:22, Paul wrote:
    On Sun, 3/1/2026 9:55 PM, T wrote:


    I was actually cracking a joke.

    With ChatGPT, or any AI, you have to be able
    to discern good results from AI slop. Oftentimes
    when I know I am getting slop from ChatGPT, I will
    switch to search.brave.com's AI and get good results.

    I will tell ChatGPT when it has made a mistake so
    it trains on it, which it assures me (hahaha) it
    does.

    I'm sure the AI is snickering while it assures you
    about the training.

    An Inference machine is not a Training Machine.

    These are at opposite ends of the shop. They also don't
    have to be the same kind of equipment (for efficiency
    reasons).

    *******

    Brave LEO uses Mixtral, Llama 2, and Claude (subscription service).

    Ask Brave uses...

    "In an internal evaluation of major AI search engines, Ask Brave - powered
    by Brave's LLM Context API and open-weights Qwen3 - outperforms ChatGPT"

    https://en.wikipedia.org/wiki/Qwen

    They're buying tokens from a lot of different sources. It would be
    a bit expensive, to build their own datacenter for example. But with
    some of the models being available for download, they can
    play with some of it locally.

    Paul


    ChatGPT royally goofed up a GnuCash question. I found the answer
    over on search.brave.com. I told Chat about it and asked if
    it could use the correction in its learning. This is what I
    got back:


    About "adding this to my learning"

    I don't have the ability to permanently update my training or
    store long-term knowledge from individual conversations. I
    also can't modify my underlying training data.

    What I can do:

    Use this information within our current conversation

    Adjust my answers based on feedback like this

    Acknowledge when documentation or UI changes over time

    And your correction absolutely helps improve how I answer
    similar questions in future sessions conceptually, even
    if I can't directly edit my training data.
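
    The behaviour that reply describes can be sketched as a tiny session
    object (all names hypothetical): corrections live in per-conversation
    context and override the frozen knowledge there, but a fresh session
    starts from the unchanged training data.

```python
# Stand-in for the model's frozen training data (never modified at runtime).
FROZEN_TRAINING_FACTS = {"gnucash_export": "File > Export"}

class ChatSession:
    def __init__(self):
        self.context = {}  # lives only for this conversation

    def correct(self, key, value):
        self.context[key] = value  # session memory, not retraining

    def answer(self, key):
        # Session context wins, but only within this session.
        return self.context.get(key, FROZEN_TRAINING_FACTS.get(key))

s1 = ChatSession()
s1.correct("gnucash_export", "Actions > Export")
assert s1.answer("gnucash_export") == "Actions > Export"  # corrected here

s2 = ChatSession()  # a new session forgets the correction
assert s2.answer("gnucash_export") == "File > Export"
```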

    --- Synchronet 3.21d-Linux NewsLink 1.2
  • From Paul@nospam@needed.invalid to alt.comp.os.windows-11 on Mon Mar 2 23:51:26 2026
    From Newsgroup: alt.comp.os.windows-11

    On Mon, 3/2/2026 8:40 PM, T wrote:
    On 3/1/26 20:22, Paul wrote:
    On Sun, 3/1/2026 9:55 PM, T wrote:


    I was actually cracking a joke.

    With ChatGPT, or any AI, you have to be able
    to discern good results from AI slop. Oftentimes
    when I know I am getting slop from ChatGPT, I will
    switch to search.brave.com's AI and get good results.

    I will tell ChatGPT when it has made a mistake so
    it trains on it, which it assures me (hahaha) it
    does.

    I'm sure the AI is snickering while it assures you
    about the training.

    An Inference machine is not a Training Machine.

    These are at opposite ends of the shop. They also don't
    have to be the same kind of equipment (for efficiency
    reasons).

    *******

    Brave LEO uses Mixtral, Llama 2, and Claude (subscription service).

    Ask Brave uses...

        "In an internal evaluation of major AI search engines, Ask Brave - powered
         by Brave's LLM Context API and open-weights Qwen3 - outperforms ChatGPT"

        https://en.wikipedia.org/wiki/Qwen

    They're buying tokens from a lot of different sources. It would be
    a bit expensive, to build their own datacenter for example. But with
    some of the models being available for download, they can
    play with some of it locally.

        Paul


    ChatGPT royally goofed up a GnuCash question. I found the answer
    over on search.brave.com. I told Chat about it and asked if
    it could use the correction in its learning. This is what I
    got back:


    About "adding this to my learning"

    I don't have the ability to permanently update my training or
    store long-term knowledge from individual conversations. I
    also can't modify my underlying training data.

    What I can do:

    Use this information within our current conversation

    Adjust my answers based on feedback like this

    Acknowledge when documentation or UI changes over time

    And your correction absolutely helps improve how I answer
    similar questions in future sessions conceptually, even
    if I can't directly edit my training data.


    There is reasoning behind this.

    If it did have learning capability, it would be haxored six
    ways from Sunday. It would be unrecognizable ("Tay"), in
    half a day. It would be an AI limping on a crutch, wearing
    a pirate hat, and swearing at you in a foreign language.

    https://en.wikipedia.org/wiki/Tay_%28chatbot%29

    "He compared the issue to IBM's Watson, which began
    to use profanity after reading entries from the
    website Urban Dictionary."

    There is a difference between an AI being "agentic" and
    "reading" a web page, versus the training process (which
    is math intensive, and the AI is not "conscious" while it
    is happening).

    As a result of that, the training set has to remain immutable.
    You can see in the case of Tay, they could not interact with
    it in real time, and put it back on the rails. If the training
    is locked down, it is easier to manage.

    Your "statement of fact" is different than "ingesting AI slop".
    They try not to feed AI answers into AI training material,
    as that does not end well (machine gets more and more slop-happy).

    Paul
    --- Synchronet 3.21d-Linux NewsLink 1.2
  • From T@T@invalid.invalid to alt.comp.os.windows-11 on Mon Mar 2 21:54:53 2026
    From Newsgroup: alt.comp.os.windows-11

    On 3/2/26 20:51, Paul wrote:
    Your "statement of fact" is different than "ingesting AI slop".
    They try not to feed AI answers into AI training material,
    as that does not end well (machine gets more and more slop-happy).

    Point taken!

    Sort of like the parrots at that London zoo that humans
    swore at and then laughed. Now the parrots swear and
    laugh at the humans. It is now a prime attraction.


    --- Synchronet 3.21d-Linux NewsLink 1.2
  • From Andy Burns@usenet@andyburns.uk to alt.comp.os.windows-11 on Tue Mar 3 09:30:57 2026
    From Newsgroup: alt.comp.os.windows-11

    T wrote:

    I will tell ChatGPT when it has made a mistake so
    it trains on it, which it assures me (hahaha) it
    does.

    Does it? I thought many previous AI chatbots got taught to be right-wing
    nazis by that process?
    --- Synchronet 3.21d-Linux NewsLink 1.2