• AI

    From Bri.@Brian@Derby.invalid to uk.people.silversurfers on Fri May 30 17:30:32 2025
    From Newsgroup: uk.people.silversurfers

    Anyone here played with AI?
    I've toyed with Windows Copilot and quite impressed, but not tried
    ChatGPT yet.
    --
    Bri.
    (W11 Desktop)
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Indy Jess John@bathwatchdog@OMITTHISgooglemail.com to uk.people.silversurfers on Fri May 30 23:57:32 2025
    From Newsgroup: uk.people.silversurfers

    On 30/05/2025 17:30, Bri. wrote:
    Anyone here played with AI?
    I've toyed with Windows Copilot and quite impressed, but not tried
    ChatGPT yet.

    I have not used it myself, but I came across it when I contributed to a Newsgroup discussion, and someone replied that ChatGPT disagreed with
    me. I could provide a link to the source of my information, but I got
    curious as to why ChatGPT hadn't mentioned the information I had found. I
    did some digging.

    What I discovered is that built into ChatGPT is a "reward" process.
    Each ChatGPT user has to log in, and that log-in links to a user profile
    that is continuously updated as ChatGPT analyses how that user has used
    the facility. It learns what each user likes to hear and provides
    selected answers that the user would appreciate.

    To take a simplified example, if it learns that you like red but don't
    like blue, then it would find examples of sunsets but not blue sky. The
    design aim is that you feel it is on your side, so you are more likely
    to use it again, and the more you use it the more refined your profile gets.

    It is mostly harmless, and the reward process informs its developers how
    it could be further improved. It probably has access to both sides of
    an argument but it is deliberately selective about what of its
    information base is appropriate to deliver for any particular question
    from any particular user. Therefore, its output can't realistically be
    used as unbiased evidence. It can be regarded as "the truth" but not
    "the whole truth".
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Abandoned Trolley@that.bloke@microsoft.com to uk.people.silversurfers on Sat May 31 09:20:58 2025
    From Newsgroup: uk.people.silversurfers

    On 30/05/2025 23:57, Indy Jess John wrote:
    On 30/05/2025 17:30, Bri. wrote:
    Anyone here played with AI?
    I've toyed with Windows Copilot and quite impressed, but not tried
    ChatGPT yet.

    I have not used it myself, but I came across it when I contributed to a Newsgroup discussion, and someone replied that ChatGPT disagreed with
    me. I could provide a link to the source of my information, but I got
    curious as to why ChatGPT hadn't mentioned the information I had found. I
    did some digging.

    What I discovered is that built into ChatGPT is a "reward" process. Each ChatGPT user has to log in, and that log-in links to a user profile that
    is continuously updated as ChatGPT analyses how that user has used the
    facility. It learns what each user likes to hear and provides selected
    answers that the user would appreciate.

    To take a simplified example, if it learns that you like red but don't
    like blue, then it would find examples of sunsets but not blue sky. The design aim is that you feel it is on your side, so you are more likely
    to use it again, and the more you use it the more refined your profile
    gets.

    It is mostly harmless, and the reward process informs its developers how
    it could be further improved. It probably has access to both sides of
    an argument but it is deliberately selective about what of its
    information base is appropriate to deliver for any particular question
    from any particular user. Therefore, its output can't realistically be
    used as unbiased evidence. It can be regarded as "the truth" but not
    "the whole truth".


    So, basically, it trains itself to tell people what they want to hear -
    the foundation of "fake news"?
    --- Synchronet 3.21a-Linux NewsLink 1.2
    From Bri.@Brian@Derby.invalid to uk.people.silversurfers on Sat May 31 09:22:36 2025
    From Newsgroup: uk.people.silversurfers

    Indy Jess John wrote:

    On 30/05/2025 17:30, Bri. wrote:
    Anyone here played with AI?
    I've toyed with Windows Copilot and quite impressed, but not tried
    ChatGPT yet.

    I have not used it myself, but I came across it when I contributed to a Newsgroup discussion, and someone replied that ChatGPT disagreed with
    me. I could provide a link to the source of my information, but I got
    curious as to why ChatGPT hadn't mentioned the information I had found. I
    did some digging.

    What I discovered is that built into ChatGPT is a "reward" process.
    Each ChatGPT user has to log in, and that log-in links to a user profile that is continuously updated as ChatGPT analyses how that user has used
    the facility. It learns what each user likes to hear and provides
    selected answers that the user would appreciate.

    To take a simplified example, if it learns that you like red but don't
    like blue, then it would find examples of sunsets but not blue sky. The design aim is that you feel it is on your side, so you are more likely
    to use it again, and the more you use it the more refined your profile gets.

    It is mostly harmless, and the reward process informs its developers how
    it could be further improved. It probably has access to both sides of
    an argument but it is deliberately selective about what of its
    information base is appropriate to deliver for any particular question
    from any particular user. Therefore, its output can't realistically be
    used as unbiased evidence. It can be regarded as "the truth" but not
    "the whole truth".

    Thank you for a thoroughly comprehensive reply. It even includes the
    answer to what would have been my follow-up question.

    Your experience with ChatGPT sounds similar to my dealings with
    Windows Copilot; I reckon they're very closely related. :-)
    I attempted to change its mind regarding a generally accepted
    misconception. Despite accepting my reasoning, it wouldn't actually
    say that I was correct; it just asked if it could help me with another
    topic.

    Good clean fun though and can be very useful at times.
    --
    Bri.
    (W11 Desktop)
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Bob Henson@bob.henson@outlook.com to uk.people.silversurfers on Sat May 31 09:50:39 2025
    From Newsgroup: uk.people.silversurfers

    On 30/5/25 5:30 pm, Bri. wrote:
    Anyone here played with AI?
    I've toyed with Windows Copilot and quite impressed, but not tried
    ChatGPT yet.

    My assessment so far is that AI is extremely, and most likely
    dangerously, inaccurate when used as it is currently being used. Those
    who operate illegal scams must be rubbing their hands with glee at the
    opportunities it presents. As long as it is applied to social media
    applications it can and will cause nothing but disinformation with
    resultant chaos and damage - a sort of computerised Trump (Donald, not
    Judd).

    It has many legitimate and conceivably life-changing uses in things
    like number crunching and data analysis in medicine, for example, where
    it can reduce meta-analysis from years to days, to the benefit of all
    concerned. It needs to be kept out of the hands of those who do not have
    the brains to understand the possible consequences of misuse.
    --
    Tetbury, Gloucestershire, UK

    The early bird may get the worm, but the second mouse gets the cheese.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Jeff Gaines@jgnewsid@outlook.com to uk.people.silversurfers on Sat May 31 09:34:24 2025
    From Newsgroup: uk.people.silversurfers

    On 31/05/2025 in message <m9vu6oFt26cU1@mid.individual.net> Bob Henson
    wrote:

    On 30/5/25 5:30 pm, Bri. wrote:
    Anyone here played with AI?
    I've toyed with Windows Copilot and quite impressed, but not tried
    ChatGPT yet.

    My assessment so far is that AI is extremely, and most likely
    dangerously, inaccurate when used as it is currently being used. Those
    who operate illegal scams must be rubbing their hands with glee at the
    opportunities it presents. As long as it is applied to social media
    applications it can and will cause nothing but disinformation with
    resultant chaos and damage - a sort of computerised Trump (Donald, not
    Judd).

    It has many legitimate and conceivably life-changing uses in things
    like number crunching and data analysis in medicine, for example, where
    it can reduce meta-analysis from years to days, to the benefit of all
    concerned. It needs to be kept out of the hands of those who do not have
    the brains to understand the possible consequences of misuse.

    Is it actually "intelligent" or is it just a broader based search engine?
    It is annoying in that it pops up with answers on Google many of which are nonsense.
    --
    Jeff Gaines Dorset UK
    Captcha is thinking of stopping the use of pictures with traffic lights as cyclists don't know what they are.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Bob Henson@bob.henson@outlook.com to uk.people.silversurfers on Sat May 31 11:58:52 2025
    From Newsgroup: uk.people.silversurfers

    On 31/5/25 10:34 am, Jeff Gaines wrote:
    On 31/05/2025 in message <m9vu6oFt26cU1@mid.individual.net> Bob Henson
    wrote:

    On 30/5/25 5:30 pm, Bri. wrote:
    Anyone here played with AI?
    I've toyed with Windows Copilot and quite impressed, but not tried
    ChatGPT yet.

    My assessment so far is that AI is extremely, and most likely
    dangerously, inaccurate when used as it is currently being used. Those
    who operate illegal scams must be rubbing their hands with glee at the
    opportunities it presents. As long as it is applied to social media
    applications it can and will cause nothing but disinformation with
    resultant chaos and damage - a sort of computerised Trump (Donald, not
    Judd).

    It has many legitimate and conceivably life-changing uses in things
    like number crunching and data analysis in medicine, for example, where
    it can reduce meta-analysis from years to days, to the benefit of all
    concerned. It needs to be kept out of the hands of those who do not have
    the brains to understand the possible consequences of misuse.

    Is it actually "intelligent" or is it just a broader based search engine?
    It is annoying in that it pops up with answers on Google many of which are nonsense.


    Not intelligent as we understand the word. Used in medicine and other
    fields, it speeds up processes by orders of magnitude by selecting and
    applying "shortcuts" which it knows or has acquired (I suppose in its
    "search engine" mode, and probably more by pattern spotting). That's
    roughly as I understand it anyway. I wouldn't claim any expertise, but
    I do know that in the right hands it can be a life saver. I can't
    remember details, but a Bristol medical team recently made breakthroughs
    in their (genetic?) medicine field which without AI would have taken
    many years to achieve.
    --
    Tetbury, Gloucestershire, UK

    The early bird may get the worm, but the second mouse gets the cheese.
    --- Synchronet 3.21a-Linux NewsLink 1.2