RTFM
On 2/2/26 18:15, Carlos E.R. wrote:
RTFM
"Just Google it". The new RTFM. (For respondents
that do not know the answer but feel the need
to condescend.)
ChatGPT is extremely helpful.
On 2/2/26 18:15, Carlos E.R. wrote:
RTFM
"Just Google it". The new RTFM. (For respondents
that do not know the answer but feel the need
to condescend.)
ChatGPT is extremely helpful.
A Google Search today can have a Gemini summary result
at the top. But this is not consistent. Any sort of
load on the Google end causes the Gemini part
to go missing.
On Tue, 24 Feb 2026 21:26:58 -0500, Paul <nospam@needed.invalid> wrote:
A Google Search today can have a Gemini summary result
at the top. But this is not consistent. Any sort of
load on the Google end causes the Gemini part
to go missing.
I haven't seen that here, nor have I seen anyone else reporting it. Is
it common? Is it only happening at your house? It'd be cool to figure
out what's causing that.
On Tue, 2/24/2026 11:08 PM, Char Jackson wrote:
On Tue, 24 Feb 2026 21:26:58 -0500, Paul <nospam@needed.invalid> wrote:
A Google Search today can have a Gemini summary result
at the top. But this is not consistent. Any sort of
load on the Google end causes the Gemini part
to go missing.
I haven't seen that here, nor have I seen anyone else reporting it. Is
it common? Is it only happening at your house? It'd be cool to figure
out what's causing that.
I do a lot of searches.
Gemini summaries are at the 5-10% level. Most
searches just return links. The number of returned
links varies with time of day. Sometimes, I only
get one page of links and no next page button.
That's why, when three searches in a row have a
Gemini summary at the top, it feels like some kind of miracle.
On 2026-02-25 00:10, T wrote:
On 2/2/26 18:15, Carlos E.R. wrote:
RTFM
"Just Google it". The new RTFM. (For respondents
that do not know the answer but feel the need
to condescend.)
ChatGPT is extremely helpful.
Not in this case.
If you read the context of my saying "RTFM" carefully, you should know
that neither Google nor ChatGPT will help in this case.
I was actually cracking a joke.
With ChatGPT, or any AI, you have to be able
to discern good results from AI slop. Oftentimes,
when I know I am getting slop from ChatGPT, I will
switch to search.brave.com's AI and get good results.
I will tell ChatGPT when it has made a mistake so
it trains on it, which it assures me (hahaha) it
does.
On Sun, 3/1/2026 9:55 PM, T wrote:
I was actually cracking a joke.
With ChatGPT, or any AI, you have to be able
to discern good results from AI slop. Oftentimes,
when I know I am getting slop from ChatGPT, I will
switch to search.brave.com's AI and get good results.
I will tell ChatGPT when it has made a mistake so
it trains on it, which it assures me (hahaha) it
does.
I'm sure the AI is snickering while it assures you
about the training.
An Inference machine is not a Training Machine.
These are at opposite ends of the shop. They also don't
have to be the same kind of equipment (for efficiency
reasons).
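The split Paul describes can be sketched in a few lines of NumPy. This is a hypothetical one-layer linear model, not any real chatbot's architecture: inference is just a forward pass that never touches the weights, while a training step computes a gradient and produces new weights, typically on separate hardware.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 2))           # hypothetical model's frozen weights

def infer(x, W):
    """Inference path: forward pass only; W is read, never written."""
    return x @ W

def train_step(x, y, W, lr=0.1):
    """Training path: forward pass, mean-squared-error gradient, update."""
    pred = x @ W
    grad = x.T @ (pred - y) / len(x)  # dLoss/dW for MSE
    return W - lr * grad              # returns NEW weights; W itself untouched

x = rng.normal(size=(4, 3))
y = rng.normal(size=(4, 2))

W_before = W.copy()
_ = infer(x, W)                       # chatting with a deployed model = this path
W_trained = train_step(x, y, W)       # this path runs in a separate training job

assert np.array_equal(W, W_before)        # inference left the weights alone
assert not np.array_equal(W, W_trained)   # training produced different weights
```

Telling a deployed model about a mistake only affects the inference path (the current conversation); nothing writes back into the weights unless a separate training run later ingests that feedback.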
*******
Brave LEO uses Mixtral, Llama 2, and Claude (subscription service).
Ask Brave uses...
"In an internal evaluation of major AI search engines, Ask Brave, powered
by Brave's LLM Context API and open-weights Qwen3, outperforms ChatGPT"
https://en.wikipedia.org/wiki/Qwen
They're buying tokens from a lot of different sources. It would be
a bit expensive to build their own datacenter, for example. But with
some of the models being available for download, they can
play with some of it locally.
Paul
On 3/1/26 20:22, Paul wrote:
On Sun, 3/1/2026 9:55 PM, T wrote:
I was actually cracking a joke.
With ChatGPT, or any AI, you have to be able
to discern good results from AI slop. Oftentimes,
when I know I am getting slop from ChatGPT, I will
switch to search.brave.com's AI and get good results.
I will tell ChatGPT when it has made a mistake so
it trains on it, which it assures me (hahaha) it
does.
I'm sure the AI is snickering while it assures you
about the training.
An Inference machine is not a Training Machine.
These are at opposite ends of the shop. They also don't
have to be the same kind of equipment (for efficiency
reasons).
*******
Brave LEO uses Mixtral, Llama 2, and Claude (subscription service).
Ask Brave uses...
    "In an internal evaluation of major AI search engines, Ask Brave, powered
     by Brave's LLM Context API and open-weights Qwen3, outperforms ChatGPT"
    https://en.wikipedia.org/wiki/Qwen
They're buying tokens from a lot of different sources. It would be
a bit expensive to build their own datacenter, for example. But with
some of the models being available for download, they can
play with some of it locally.
    Paul
ChatGPT royally goofed up a GnuCash question. I found the answer
over on search.brave.com. I told Chat about it and asked if
it could use the correction in its learning. This is what I
got back:
About "adding this to my learning"
I don't have the ability to permanently update my training or
store long-term knowledge from individual conversations. I
also can't modify my underlying training data.
What I can do:
- Use this information within our current conversation
- Adjust my answers based on feedback like this
- Acknowledge when documentation or UI changes over time
And your correction absolutely helps improve how I answer
similar questions in future sessions conceptually, even
if I can't directly edit my training data.
Your "statement of fact" is different from "ingesting AI slop".
They try not to feed AI answers into AI training material,
as that does not end well (the machine gets more and more slop-happy).