• AI power

    From john larkin@jl@glen--canyon.com to sci.electronics.design on Fri Sep 26 08:13:39 2025

    The Futurism site is mostly hokey, but this is kinda interesting:

    https://futurism.com/future-society/ai-power-usage-text-to-video-generator

    "To spit out a five-second clip, the researchers found that it takes
    the equivalent of running a microwave for over an hour."

    Imagine a sort of DDOS attack on an AI site that deliberately burns
    power.

    Maybe it could be viral, as in making various AI sites bomb one
    another.

    There might be one magical question that spins off gigawatt-hours of
    energy.
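
    For scale, a back-of-envelope sketch (the 1 kW microwave rating is an
    assumption; the article only says "over an hour"):

        # Rough numbers: ~1 kW microwave running ~1 hour per clip, per the
        # article's comparison.  Both figures are assumptions, not data.
        KWH_PER_CLIP = 1.0 * 1.0        # kW * hours ~= 1 kWh per 5 s clip
        KWH_PER_GWH = 1_000_000         # 1 GWh = 10^6 kWh
        print(f"{KWH_PER_GWH / KWH_PER_CLIP:,.0f} clips per GWh")  # ~1,000,000

    So at roughly 1 kWh per clip, gigawatt-hours means on the order of a
    million video requests.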

    The only defense would be to require secure payment for energy used,
    and even that could be hacked.


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Martin Rid@martin_riddle@verison.net to sci.electronics.design on Fri Sep 26 14:05:37 2025

    john larkin <jl@glen--canyon.com> Wrote in message:
    The Futurism site is mostly hokey, but this is kinda interesting:

    https://futurism.com/future-society/ai-power-usage-text-to-video-generator

    "To spit out a five-second clip, the researchers found that it takes
    the equivalent of running a microwave for over an hour."

    Imagine a sort of DDOS attack on an AI site that deliberately burns
    power.

    Maybe it could be viral, as in making various AI sites bomb one
    another.

    There might be one magical question that spins off gigawatt-hours of
    energy.

    The only defense would be to require secure payment for energy used,
    and even that could be hacked.

    I'm sure they watched The Hitchhiker's Guide to the Galaxy.

    Cheers
    --


    ----Android NewsGroup Reader----
    https://piaohong.s3-us-west-2.amazonaws.com/usenet/index.html
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Edward Rawde@invalid@invalid.invalid to sci.electronics.design on Sat Sep 27 01:12:44 2025

    "john larkin" <jl@glen--canyon.com> wrote in message news:vtaddkhc15mbu5vabu1n4olvc4rj9sl79p@4ax.com...
    The Futurism site is mostly hokey, but this is kinda interesting:

    https://futurism.com/future-society/ai-power-usage-text-to-video-generator

    "To spit out a five-second clip, the researchers found that it takes
    the equivalent of running a microwave for over an hour."

    Imagine a sort of DDOS attack on an AI site that deliberately burns
    power.

    Maybe it could be viral, as in making various AI sites bomb one
    another.

    There might be one magical question that spins off gigawatt-hours of
    energy.

    That made me wonder what current AI would make of a paragraph which might
    be nonsense, but can you really be sure about that?

    So I enjoyed myself by typing the following paragraph.

    When ending the roads should be on time for everything has meaning and
    we can just throw out the bad parts. So I would like to know why there
    is no end to this and what we can do to strengthen the reasoning.
    I will start with more detail when I have arranged the content as necessary for reasoning.

    And I pasted it into Grok.

    Now I know why AI is going to need Gigawatts, or is that Jigawatts?

    I'd have been way more impressed with a more Sloman-like one-line response to this,
    such as "You have no clue what you are talking about."






    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Don Y@blockedofcourse@foo.invalid to sci.electronics.design on Sat Sep 27 01:20:53 2025

    On 9/26/2025 10:12 PM, Edward Rawde wrote:
    That made me wonder what current AI would make of a paragraph which might
    be nonsense, but can you really be sure about that?

    It would be interesting to get a metric regarding how much "work" (effort?)
    it put into solving particular problems -- not elapsed time but, rather,
    MIPS-secs or joules or some other measure of effort.

    And, see if there is some threshold beyond which it simply "gives up"...
    Or, some other indication of what "keeps it interested"!
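
    A minimal sketch of such a metric, assuming only a caller-supplied query
    function that reports how many tokens it generated; the joules-per-token
    figure is a guess, not a published number:

        import time

        J_PER_TOKEN = 2.0   # guessed inference energy per generated token

        def effort(query_fn, prompt):
            # Crude "effort": wall time, plus an energy estimate derived
            # from the token count that query_fn reports back.
            t0 = time.monotonic()
            reply, tokens_out = query_fn(prompt)
            seconds = time.monotonic() - t0
            return {"seconds": seconds,
                    "tokens": tokens_out,
                    "joules_est": tokens_out * J_PER_TOKEN}

    Feed it progressively harder problems and watch for the point where the
    numbers stop growing -- the "gives up" threshold.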

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Edward Rawde@invalid@invalid.invalid to sci.electronics.design on Mon Sep 29 00:05:59 2025

    "Don Y" <blockedofcourse@foo.invalid> wrote in message news:10b86pf$1ir4a$1@dont-email.me...
    On 9/26/2025 10:12 PM, Edward Rawde wrote:
    That made me wonder what current AI would make of a paragraph which might
    be nonsense, but can you really be sure about that?

    It would be interesting to get a metric regarding how much "work" (effort?) it put into solving particular problems -- not elapsed time but, rather, MIPS-secs or joules or some other measure of effort.

    And, see if there is some threshold beyond which it simply "gives up"...
    Or, some other indication of what "keeps it interested"!


    If you want to really confuse Grok, try this:

    Un petit d'un petit s'etonne aux halles
    Un petit d'un petit a degre te falle

    If you want to be even more ridiculous, try this:

    Center Alley worse jester pore ladle gull how lift wetter stop-murder an toe heft-cisterns.
    Daze worming war furry wicket an shellfish parsons, spatially dole stop-murder, hoe dint lack
    Center Alley an, infect, word orphan traitor pore gull mar lichen ammonol dinner hormone bang.

    Grok does get some of it right.

    Hard to say what keeps it interested but it clearly spends longer "thinking" about any problem
    which produces no search engine results at all. Unlike the two examples above.


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Don Y@blockedofcourse@foo.invalid to sci.electronics.design on Sun Sep 28 21:57:33 2025

    On 9/28/2025 9:05 PM, Edward Rawde wrote:
    "Don Y" <blockedofcourse@foo.invalid> wrote in message news:10b86pf$1ir4a$1@dont-email.me...
    On 9/26/2025 10:12 PM, Edward Rawde wrote:
    That made me wonder what current AI would make of a paragraph which might be nonsense, but can you really be sure about that?

    It would be interesting to get a metric regarding how much "work" (effort?)
    it put into solving particular problems -- not elapsed time but, rather,
    MIPS-secs or joules or some other measure of effort.

    And, see if there is some threshold beyond which it simply "gives up"...
    Or, some other indication of what "keeps it interested"!


    If you want to really confuse Grok, try this:

    Un petit d'un petit s'etonne aux halles
    Un petit d'un petit a degre te falle

    Un petit d'un petit
    Ah! degrés te fallent
    ...

    _Mots d'Heures: Gousses, Rames_

    If you want to be even more ridiculous, try this:

    Center Alley worse jester pore ladle gull how lift wetter stop-murder an toe heft-cisterns.
    Daze worming war furry wicket an shellfish parsons, spatially dole stop-murder, hoe dint lack
    Center Alley an, infect, word orphan traitor pore gull mar lichen ammonol dinner hormone bang.

    Grok does get some of it right.

    Hard to say what keeps it interested but it clearly spends longer "thinking" about any problem
    which produces no search engine results at all. Unlike the two examples above.

    We don't have convenient metrics for how "hard" things are
    for humans to "solve". It would be interesting to have a metric
    that represented the "effort" expended by the machine and
    ponder how that corresponds (or fails to!) with how "hard"
    humans consider problems to be.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Don Y@blockedofcourse@foo.invalid to sci.electronics.design on Sun Sep 28 22:07:40 2025

    Hard to say what keeps it interested but it clearly spends longer "thinking" about any problem
    which produces no search engine results at all. Unlike the two examples above.

    James while John had had had had had had had had had had had a better effect on
    the teacher

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From liz@liz@poppyrecords.invalid.invalid (Liz Tuddenham) to sci.electronics.design on Mon Sep 29 09:31:23 2025

    Edward Rawde <invalid@invalid.invalid> wrote:

    [...]
    That made me wonder what current AI would make of a paragraph which might
    be nonsense, but can you really be sure about that?

    So I enjoyed myself by typing the following paragraph.

    When ending the roads should be on time for everything has meaning and we
    can just throw out the bad parts. So I would like to know why there is no
    end to this and what we can do to strengthen the reasoning. I will start
    with more detail when I have arranged the content as necessary for
    reasoning.

    Wonderful! Have you ever considered becoming a politician?
    --
    ~ Liz Tuddenham ~
    (Remove the ".invalid"s and add ".co.uk" to reply)
    www.poppyrecords.co.uk
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Edward Rawde@invalid@invalid.invalid to sci.electronics.design on Mon Sep 29 10:00:16 2025

    "Liz Tuddenham" <liz@poppyrecords.invalid.invalid> wrote in message news:1rjezzo.1oug2361hg7i28N%liz@poppyrecords.invalid.invalid...
    Edward Rawde <invalid@invalid.invalid> wrote:

    [...]
    That made me wonder what current AI would make of a paragraph which might
    be nonsense, but can you really be sure about that?

    So I enjoyed myself by typing the following paragraph.

    When ending the roads should be on time for everything has meaning and we
    can just throw out the bad parts. So I would like to know why there is no
    end to this and what we can do to strengthen the reasoning. I will start
    with more detail when I have arranged the content as necessary for
    reasoning.

    Wonderful! Have you ever considered becoming a politician?

    Actually no. I prefer facts. Coming up with nonsense takes too much effort.
    So maybe I'm a reverse politician.

    Back when I had to study the following poem, I also remember the maths teacher showing
    us how to prove the quadratic formula. https://specialcollections.luc.edu/exhibits/show/schoder-hopkins/hopkins-lectures/speltsibyl
    I can still prove the quadratic formula (as can most people here) but I can only remember
    the first few words of that poem.
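
    For the record, completing the square does it in four lines (TeX
    notation):

        ax^2 + bx + c = 0, \quad a \neq 0
        x^2 + \tfrac{b}{a}x = -\tfrac{c}{a}
        \left(x + \tfrac{b}{2a}\right)^2 = \tfrac{b^2 - 4ac}{4a^2}
        x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}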





    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Edward Rawde@invalid@invalid.invalid to sci.electronics.design on Mon Sep 29 10:10:08 2025

    "Don Y" <blockedofcourse@foo.invalid> wrote in message news:10bd46u$2ppnb$1@dont-email.me...
    Hard to say what keeps it interested but it clearly spends longer "thinking" about any problem
    which produces no search engine results at all. Unlike the two examples above.

    James while John had had had had had had had had had had had a better effect on the teacher


    Yeah, I've seen that one too.

    Looks to me like Grok had online help with that.


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Edward Rawde@invalid@invalid.invalid to sci.electronics.design on Mon Sep 29 10:24:34 2025

    "Liz Tuddenham" <liz@poppyrecords.invalid.invalid> wrote in message news:1rjezzo.1oug2361hg7i28N%liz@poppyrecords.invalid.invalid...
    Edward Rawde <invalid@invalid.invalid> wrote:

    [...]
    That made me wonder what current AI would make of a paragraph which might
    be nonsense, but can you really be sure about that?

    So I enjoyed myself by typing the following paragraph.


    When ending the roads should be on time for everything has meaning and we
    can just throw out the bad parts. So I would like to know why there is no
    end to this and what we can do to strengthen the reasoning. I will start
    with more detail when I have arranged the content as necessary for
    reasoning.

    Now I can't stop hearing, in my mind, the president of a large country saying this.
    Maybe someone out there has an AI system which can make a video clip, so that I don't
    need to hear it only in my mind.





    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Don Y@blockedofcourse@foo.invalid to sci.electronics.design on Mon Sep 29 08:10:07 2025

    On 9/29/2025 7:10 AM, Edward Rawde wrote:
    "Don Y" <blockedofcourse@foo.invalid> wrote in message news:10bd46u$2ppnb$1@dont-email.me...
    Hard to say what keeps it interested but it clearly spends longer "thinking" about any problem
    which produces no search engine results at all. Unlike the two examples above.

    James while John had had had had had had had had had had had a better effect on the teacher


    Yeah, I've seen that one too.

    Looks to me like Grok had online help with that.

    That's the problem with "exhaustive" solutions -- there's no "reasoning" involved. A *human* can readily solve it, even if not "instantaneously".

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Edward Rawde@invalid@invalid.invalid to sci.electronics.design on Mon Sep 29 13:32:47 2025

    "Don Y" <blockedofcourse@foo.invalid> wrote in message news:10be7gi$32u0q$1@dont-email.me...
    On 9/29/2025 7:10 AM, Edward Rawde wrote:
    "Don Y" <blockedofcourse@foo.invalid> wrote in message news:10bd46u$2ppnb$1@dont-email.me...
    Hard to say what keeps it interested but it clearly spends longer "thinking" about any problem
    which produces no search engine results at all. Unlike the two examples above.

    James while John had had had had had had had had had had had a better effect on the teacher


    Yeah, I've seen that one too.

    Looks to me like Grok had online help with that.

    That's the problem with "exhaustive" solutions -- there's no "reasoning" involved. A *human* can readily solve it, even if not "instantaneously".


    It's clearly true that the kind of chat-based AI currently found online does not
    come anywhere close to thinking for itself. It does not even learn from its mistakes.


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Don Y@blockedofcourse@foo.invalid to sci.electronics.design on Mon Sep 29 11:13:25 2025

    On 9/29/2025 10:32 AM, Edward Rawde wrote:
    "Don Y" <blockedofcourse@foo.invalid> wrote in message news:10be7gi$32u0q$1@dont-email.me...
    On 9/29/2025 7:10 AM, Edward Rawde wrote:
    "Don Y" <blockedofcourse@foo.invalid> wrote in message news:10bd46u$2ppnb$1@dont-email.me...
    Hard to say what keeps it interested but it clearly spends longer "thinking" about any problem
    which produces no search engine results at all. Unlike the two examples above.

    James while John had had had had had had had had had had had a better effect on the teacher

    Yeah, I've seen that one too.

    Looks to me like Grok had online help with that.

    That's the problem with "exhaustive" solutions -- there's no "reasoning"
    involved. A *human* can readily solve it, even if not "instantaneously".

    It's clearly true that the kind of chat-based AI currently found online does not
    come anywhere close to thinking for itself. It does not even learn from its mistakes.

    The Public has made the wrong assumptions about AI based on a particular
    *type* of AI -- one that is being coerced to "solve" a variety of problems
    for which it wasn't actually designed. It appears to perform well
    (adequately?) in some situations, and that leads folks to assume its
    performance will be consistent across a wide variety of applications/problems.

    Years ago, the ability to recognize graphemes regardless of typeface was considered an AI challenge. Now, we dismiss it as "OCR" -- once the
    problem is solved, AI is no longer "credited" with the solution.
    ("What have you done for me LATELY?")

    Or, recognizing speaker-independent speech; identifying speakers; etc.
    They've all acquired specific lexicons that fail to credit "AI".

    "Production systems" can give the appearance of thinking by forward chaining
    rules -- if A implies B and B implies C, then A implies C (even though this
    was not explicitly stated in the initial ruleset).
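
    A minimal sketch of that kind of forward chaining (the rule format here
    is made up for illustration, not any particular production system's
    syntax):

        # Repeatedly fire rules whose premise is already a known fact,
        # until no new conclusions appear.
        def forward_chain(facts, rules):
            facts = set(facts)
            changed = True
            while changed:
                changed = False
                for premise, conclusion in rules:
                    if premise in facts and conclusion not in facts:
                        facts.add(conclusion)
                        changed = True
            return facts

        # A implies B, B implies C: starting from A alone, C is derived
        # even though "A implies C" was never stated.
        print(sorted(forward_chain({"A"}, [("A", "B"), ("B", "C")])))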

    And, a lot of AI technology is just a framework for solving problems -- but
    has to be explicitly applied by the practitioner. E.g., sorting out the environmental conditions at which occupants would like their residence to
    be maintained can be done by observation and rule deduction.

    An LLM would be hard-pressed to perform well in such examples.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Tom Del Rosso@fizzbintuesday@that-google-mail-domain.com to sci.electronics.design on Thu Oct 2 21:32:56 2025

    On 9/27/2025 1:12 AM, Edward Rawde wrote:

    So I enjoyed myself by typing the following paragraph.

    When ending the roads should be on time for everything has meaning and
    we can just throw out the bad parts. So I would like to know why there
    is no end to this and what we can do to strengthen the reasoning.
    I will start with more detail when I have arranged the content as necessary for reasoning.

    And I pasted it into Grok.

    Was "I will start with more detail when I have arranged the content as necessary for reasoning." the response, or part of what you asked it?
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Edward Rawde@invalid@invalid.invalid to sci.electronics.design on Thu Oct 2 21:48:59 2025

    "Tom Del Rosso" <fizzbintuesday@that-google-mail-domain.com> wrote in message news:10bn947$1bm5d$1@dont-email.me...
    On 9/27/2025 1:12 AM, Edward Rawde wrote:

    So I enjoyed myself by typing the following paragraph.

    When ending the roads should be on time for everything has meaning and
    we can just throw out the bad parts. So I would like to know why there
    is no end to this and what we can do to strengthen the reasoning.
    I will start with more detail when I have arranged the content as necessary for reasoning.

    And I pasted it into Grok.

    Was "I will start with more detail when I have arranged the content as necessary
    for reasoning." the response, or part of what you asked it?

    Just part of the nonsense I fed into it.

    We all know garbage in, garbage out.
    Current LLMs seem to be a case of garbage in and hope something useful comes out.


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Tom Del Rosso@fizzbintuesday@that-google-mail-domain.com to sci.electronics.design on Thu Oct 2 22:28:10 2025

    On 10/2/2025 9:48 PM, Edward Rawde wrote:
    Just part of the nonsense I fed into it.

    We all know garbage in, garbage out.
    Current LLMs seem to be a case of garbage in and hope something useful comes out.

    Well then aren't you going to say what its response was?
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Edward Rawde@invalid@invalid.invalid to sci.electronics.design on Thu Oct 2 23:26:12 2025

    "Tom Del Rosso" <fizzbintuesday@that-google-mail-domain.com> wrote in message news:10bncbq$1bm5d$2@dont-email.me...
    On 10/2/2025 9:48 PM, Edward Rawde wrote:
    Just part of the nonsense I fed into it.

    We all know garbage in, garbage out.
    Current LLMs seem to be a case of garbage in and hope something useful comes out.

    Well then aren't you going to say what its response was?

    The response given by an LLM chatbot, when fed the following:

    [Start text]
    When ending the roads should be on time for everything has meaning and
    we can just throw out the bad parts. So I would like to know why there
    is no end to this and what we can do to strengthen the reasoning.
    I will start with more detail when I have arranged the content as necessary for reasoning.
    [End text]

    appears to be different each time.
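
    Which is what you'd expect: these models sample each next token from a
    probability distribution rather than always taking the most likely one,
    so two runs of the same prompt diverge. A toy illustration with made-up
    numbers:

        import random

        # Toy next-token distribution; sampling instead of argmax is why
        # the same prompt yields a different reply each time.
        tokens = ["time", "meaning", "roads", "reasoning"]
        probs = [0.4, 0.3, 0.2, 0.1]
        print(random.choices(tokens, weights=probs, k=5))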

    I do not record all such interactions, so I can't tell you exactly what its response was.
    From memory, it took maybe 30 seconds to "ponder" my request and come up with a response.

    So if you want to see one possible response please paste the above text into https://grok.com/c

    I tried again a few minutes ago and it "thought" for 22 seconds before producing
    word salad which was arguably as good as the input.

    The things I find interesting about the response to this text are:

    1. It spends some time searching the web for my text but has to concede that a web
    search isn't going to help.

    2. It comes up with biblical references (Ecclesiastes 3) in response to nonsense.

    So I decided to clear my mind and exercise my word salad generator once more:

    [Begin word salad]
    If the words of many are expected to find time and reasons then
    why can't we deduce the complexity of forever by using our skills
    to manipulate the characteristics of the associated events?
    Is it because we are not centred on the structure of the problem?
    Or could it be that there are infinitely possible solutions?
    [End word salad]

    That seems to really short-circuit it.
    You can almost see the smoke rising.

    Interesting that the word "mathematics" turns up in the answer I got.
    I kinda thought that "infinitely possible solutions" might provoke that.





    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Edward Rawde@invalid@invalid.invalid to sci.electronics.design on Fri Oct 3 10:42:41 2025

    "Edward Rawde" <invalid@invalid.invalid> wrote in message news:10bnfol$99e$1@nnrp.usenet.blueworldhosting.com...
    "Tom Del Rosso" <fizzbintuesday@that-google-mail-domain.com> wrote in message news:10bncbq$1bm5d$2@dont-email.me...
    On 10/2/2025 9:48 PM, Edward Rawde wrote:
    Just part of the nonsense I fed into it.

    We all know garbage in, garbage out.
    Current LLMs seem to be a case of garbage in and hope something useful comes out.
    ....

    [Begin word salad]
    If the words of many are expected to find time and reasons then
    why can't we deduce the complexity of forever by using our skills
    to manipulate the characteristics of the associated events?
    Is it because we are not centred on the structure of the problem?
    Or could it be that there are infinitely possible solutions?
    [End word salad]


    After trying that again 12 hours later, I suspect that it keeps a cache of recent web searches
    so it doesn't have to admit to using a web search to assist with the reply.
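
    If that's right, the mechanism needn't be anything fancy. A minimal
    sketch of the kind of cache I'm imagining (pure guesswork about Grok's
    internals; the name and TTL are made up):

        import time

        _cache = {}
        TTL = 24 * 3600   # guessed retention, long enough to span 12 hours

        def cached_search(query, live_search):
            # Serve a recent result for the same query text; otherwise do
            # the live search and remember it.
            now = time.time()
            hit = _cache.get(query)
            if hit and now - hit[0] < TTL:
                return hit[1]
            results = live_search(query)
            _cache[query] = (now, results)
            return results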


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Edward Rawde@invalid@invalid.invalid to sci.electronics.design on Fri Oct 3 10:52:28 2025

    "Edward Rawde" <invalid@invalid.invalid> wrote in message news:10bond3$1e0s$1@nnrp.usenet.blueworldhosting.com...
    "Edward Rawde" <invalid@invalid.invalid> wrote in message news:10bnfol$99e$1@nnrp.usenet.blueworldhosting.com...
    "Tom Del Rosso" <fizzbintuesday@that-google-mail-domain.com> wrote in message news:10bncbq$1bm5d$2@dont-email.me...
    On 10/2/2025 9:48 PM, Edward Rawde wrote:
    Just part of the nonsense I fed into it.

    We all know garbage in, garbage out.
    Current LLMs seem to be a case of garbage in and hope something useful comes out.
    ....

    This gets a very fast and short response.
    I didn't have the heart to reply with "Actually I just wanted to see your response to a load of nonsense."

    On the timeless raze by the rocks of laze
    There's a firmless haze with a dappled blaze
    And the fearless maze is ablaze with taze
    Like the sexless phase of an endless gaze.


    --- Synchronet 3.21a-Linux NewsLink 1.2