Sysop:       Amessyroom
Location:    Fayetteville, NC
Users:       27
Nodes:       6 (0 / 6)
Uptime:      36:13:02
Calls:       631
Calls today: 2
Files:       1,187
D/L today:   22 files (29,767K bytes)
Messages:    173,011
Anyone here played with AI?
I've toyed with Windows Copilot and was quite impressed, but I haven't
tried ChatGPT yet.
On 30/05/2025 17:30, Bri. wrote:
Anyone here played with AI?
I've toyed with Windows Copilot and quite impressed, but not tried
ChatGPT yet.
I have not used it myself, but I came across it when I contributed to a
newsgroup discussion and someone replied that ChatGPT disagreed with
me. I could provide a link to the source of my information, but I got
curious as to why ChatGPT hadn't mentioned the information I had found.
I did some digging.
What I discovered is that built into ChatGPT is a "reward" process.
Each ChatGPT user has to log in, and that log-in links to a user
profile that is continuously updated as ChatGPT analyses how that user
has used the facility. It learns what each user likes to hear and
provides selected answers that the user would appreciate.
To take a simplified example, if it learns that you like red but don't
like blue, then it would find examples of sunsets but not of blue sky.
The design aim is that you feel it is on your side, so you are more
likely to use it again, and the more you use it the more refined your
profile gets.
It is mostly harmless, and the reward process informs its developers
how it could be further improved. It probably has access to both sides
of an argument, but it is deliberately selective about which parts of
its information base are appropriate to deliver for any particular
question from any particular user. Therefore, its output can't
realistically be used as unbiased evidence. It can be regarded as "the
truth" but not "the whole truth".
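Purely as an illustrative sketch of the mechanism described above (this is not ChatGPT's actual implementation, which is undisclosed; the class, topic tags, and scoring are invented for the example), a per-user preference profile that filters candidate answers might look like:

```python
# Hypothetical sketch of preference-based answer selection: a profile is
# updated from how the user reacts to topics, and candidate answers are
# ranked so the best-liked one is shown. Invented for illustration only.
from collections import Counter


class PreferenceProfile:
    def __init__(self):
        # Positive counts mean the user likes a topic, negative means dislike.
        self.likes = Counter()

    def record(self, topic, liked):
        # Update the profile each time the user reacts to a topic.
        self.likes[topic] += 1 if liked else -1

    def score(self, answer_topics):
        # An answer's score is the sum of the user's feelings about its topics.
        return sum(self.likes[t] for t in answer_topics)


def select_answer(profile, candidates):
    # candidates: list of (text, topics) pairs; return the best-liked text.
    return max(candidates, key=lambda c: profile.score(c[1]))[0]


profile = PreferenceProfile()
profile.record("red", True)    # the user responded well to "red" content
profile.record("blue", False)  # the user disliked "blue" content

candidates = [
    ("A vivid red sunset over the hills.", ["red", "sunset"]),
    ("A clear blue sky at noon.", ["blue", "sky"]),
]
print(select_answer(profile, candidates))  # picks the red-sunset answer
```

In this toy version, the "red but not blue" example from the post falls out directly: the sunset answer scores +1, the blue-sky answer -1, so only the sunset is ever shown.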
On 30/5/25 5:30 pm, Bri. wrote:
Anyone here played with AI?
I've toyed with Windows Copilot and quite impressed, but not tried
ChatGPT yet.
My assessment so far is that AI is extremely, and most likely
dangerously, inaccurate when used as it is currently being used. Those
who operate illegal scams must be rubbing their hands with glee at the
opportunities it presents. As long as it is applied to social media
applications it can and will cause nothing but disinformation, with
resultant chaos and damage - a sort of computerised Trump (Donald, not
Judd).
Its many legitimate and conceivably life-changing uses are in things
like number crunching and data analysis in medicine, for example, where
it can reduce meta-analysis from years to days, to the benefit of all
concerned. It needs to be kept out of the hands of those who do not
have the brains to understand the possible consequences of misuse.
On 31/05/2025 in message <m9vu6oFt26cU1@mid.individual.net> Bob Henson
wrote:
On 30/5/25 5:30 pm, Bri. wrote:
Anyone here played with AI?
I've toyed with Windows Copilot and quite impressed, but not tried
ChatGPT yet.
My assessment so far is that AI is extremely, and most likely
dangerously, inaccurate when used as it is currently being used. Those
who operate illegal scams must be rubbing their hands with glee at the
opportunities it presents. As long as it is applied to social media
applications it can and will cause nothing but disinformation, with
resultant chaos and damage - a sort of computerised Trump (Donald, not
Judd).
Its many legitimate and conceivably life-changing uses are in things
like number crunching and data analysis in medicine, for example, where
it can reduce meta-analysis from years to days, to the benefit of all
concerned. It needs to be kept out of the hands of those who do not
have the brains to understand the possible consequences of misuse.
Is it actually "intelligent", or is it just a broader-based search
engine? It is annoying in that it pops up with answers on Google, many
of which are nonsense.