• Gone awfully quiet!

    From Cursitor Doom@cd@notformail.com to sci.electronics.design on Sun Oct 5 17:54:56 2025
    From Newsgroup: sci.electronics.design

    I can't help noticing since I drew everyone's attention to Grok that
    it's gone awfully quiet around here. I did postulate that AI might
    kill this group, but maybe it's happening quicker than I'd expected.
    :-(
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From john larkin@jl@glen--canyon.com to sci.electronics.design on Sun Oct 5 10:42:34 2025
    From Newsgroup: sci.electronics.design

    On Sun, 05 Oct 2025 17:54:56 +0100, Cursitor Doom <cd@notformail.com>
    wrote:

    I can't help noticing since I drew everyone's attention to Grok that
    it's gone awfully quiet around here. I did postulate that AI might
    kill this group, but maybe it's happening quicker than I'd expected.
    :-(

    No, it's just that few people design electronics now.


    John Larkin
    Highland Tech Glen Canyon Design Center
    Lunatic Fringe Electronics
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Edward Rawde@invalid@invalid.invalid to sci.electronics.design on Sun Oct 5 15:42:48 2025
    From Newsgroup: sci.electronics.design

    "Cursitor Doom" <cd@notformail.com> wrote in message news:6k85ekhb58ummrpfsg8scf61l1d8adcbr9@4ax.com...
    I can't help noticing since I drew everyone's attention to Grok that
    it's gone awfully quiet around here. I did postulate that AI might
    kill this group, but maybe it's happening quicker than I'd expected.
    :-(

    LLMs are clearly not useful for solving electronic circuit design problems.

    And neither is this group if young newcomers are indirectly referred to as newbie dimwits.
    (Or something like that, I forget the exact words.)

    AI which can learn from its mistakes clearly exists, otherwise how did AlphaGo learn?

    I see no sign of AI which can learn to design electronics but electronics isn't like playing Go.

    What's likely to happen at present is that many young people will insist that what Grok says must be correct.


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Don Y@blockedofcourse@foo.invalid to sci.electronics.design on Sun Oct 5 13:55:07 2025
    From Newsgroup: sci.electronics.design

    On 10/5/2025 12:42 PM, Edward Rawde wrote:
    I see no sign of AI which can learn to design electronics but electronics isn't like playing Go.

    AI is already being used to design electronics. Not *yet* to the point where you can give a general specification without many details -- but, that will come.

    What's likely to happen at present is that many young people will insist that what Grok says must be correct.

    The bigger fear is that OLDER people will defer to AIs -- out of concern for their positions.

    Imagine an AI telling a doctor that a patient likely has a cancer
    (or other malady). Doctor can see no evidence of this.

    Yet, is savvy enough to realize that if the patient DOES have a cancer
    and he has ignored the advice of his "learned companion" ("Ladies and
    gentlemen of the jury..."), *he* will be on the hook for the malpractice
    claim.

    So, the safe bet is to just accept the diagnosis of the AI -- even if
    it is incorrect.

    It's easy to see how similar claims can be made about other complex
    systems ("The airliner will suffer a catastrophic structural failure...").

    If challenging an "authority" only results in downside risk for the
    challenger, then what incentive to make said challenge?
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Edward Rawde@invalid@invalid.invalid to sci.electronics.design on Sun Oct 5 17:28:35 2025
    From Newsgroup: sci.electronics.design

    "Don Y" <blockedofcourse@foo.invalid> wrote in message news:10bulvc$3p33v$1@dont-email.me...
    On 10/5/2025 12:42 PM, Edward Rawde wrote:
    I see no sign of AI which can learn to design electronics but electronics isn't like playing Go.

    AI is already being used to design electronics.

    Where can I find an AI designer I can test?


    Not *yet* to the point where
    you can give a general specification without many details -- but, that will come.

    What's likely to happen at present is that many young people will insist that what Grok says must be correct.

    The bigger fear is that OLDER people will defer to AIs -- out of concern for their positions.

    Imagine an AI telling a doctor that a patient likely has a cancer
    (or other malady). Doctor can see no evidence of this.

    So the doctor should do more tests.
    I'm no cancer expert but I would hope there is a test or two which can confirm or deny any type of cancer.


    Yet, is savvy enough to realize that if the patient DOES have a cancer
    and he has ignored the advice of his "learned companion" ("Ladies and gentlemen of the jury..."), *he* will be on the hook for the malpractice claim.

    Not if all the relevant tests say no cancer.

    You likely don't want cancer treatment for cancer you don't have.


    So, the safe bet is to just accept the diagnosis of the AI -- even if
    it is incorrect.

    It's easy to see how similar claims can be made about other complex
    systems ("The airliner will suffer a catastrophic structural failure...").

    I doubt Boeing used AI.


    If challenging an "authority" only results in downside risk for the challenger, then what incentive to make said challenge?

    That's always been a risk of doing that.
    I can think of at least one manager who wanted to get rid of me for pointing out
    issues with the project when he wanted to tell managers above him that everything
    was wonderful.


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Cursitor Doom@cd@notformail.com to sci.electronics.design on Sun Oct 5 22:42:32 2025
    From Newsgroup: sci.electronics.design

    On Sun, 5 Oct 2025 15:42:48 -0400, "Edward Rawde"
    <invalid@invalid.invalid> wrote:

    "Cursitor Doom" <cd@notformail.com> wrote in message news:6k85ekhb58ummrpfsg8scf61l1d8adcbr9@4ax.com...
    I can't help noticing since I drew everyone's attention to Grok that
    it's gone awfully quiet around here. I did postulate that AI might
    kill this group, but maybe it's happening quicker than I'd expected.
    :-(

    LLMs are clearly not useful for solving electronic circuit design problems.

    Not very effective at interpreting images either.


    And neither is this group if young newcomers are indirectly referred to as newbie dimwits.
    (Or something like that, I forget the exact words.)

    Sounds like classic Bill Sloman.

    AI which can learn from its mistakes clearly exists, otherwise how did AlphaGo learn?

    I see no sign of AI which can learn to design electronics but electronics isn't like playing Go.

    Give it time; still early days yet.

    What's likely to happen at present is that many young people will insist that what Grok says must be correct.

    I don't think they will, but if they do they'll quickly learn
    otherwise.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Phil Hobbs@pcdhSpamMeSenseless@electrooptical.net to sci.electronics.design on Sun Oct 5 21:49:14 2025
    From Newsgroup: sci.electronics.design

    john larkin <jl@glen--canyon.com> wrote:
    On Sun, 05 Oct 2025 17:54:56 +0100, Cursitor Doom <cd@notformail.com>
    wrote:

    I can't help noticing since I drew everyone's attention to Grok that
    it's gone awfully quiet around here. I did postulate that AI might
    kill this group, but maybe it's happening quicker than I'd expected.
    :-(

    No, it's just that few people design electronics now.


    Simon and I are planning to submit a patent application this week on the
    topic of high performance temperature control. Once it's done, we could discuss it here if folks are interested.

    Cheers

    Phil Hobbs
    --
    Dr Philip C D Hobbs Principal Consultant ElectroOptical Innovations LLC / Hobbs ElectroOptics Optics, Electro-optics, Photonics, Analog Electronics
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Cursitor Doom@cd@notformail.com to sci.electronics.design on Sun Oct 5 22:58:00 2025
    From Newsgroup: sci.electronics.design

    On Sun, 05 Oct 2025 10:42:34 -0700, john larkin <jl@glen--canyon.com>
    wrote:

    On Sun, 05 Oct 2025 17:54:56 +0100, Cursitor Doom <cd@notformail.com>
    wrote:

    I can't help noticing since I drew everyone's attention to Grok that
    it's gone awfully quiet around here. I did postulate that AI might
    kill this group, but maybe it's happening quicker than I'd expected.
    :-(

    No, it's just that few people design electronics now.

    For what reason?



    John Larkin
    Highland Tech Glen Canyon Design Center
    Lunatic Fringe Electronics
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From john larkin@jl@glen--canyon.com to sci.electronics.design on Sun Oct 5 14:59:08 2025
    From Newsgroup: sci.electronics.design

    On Sun, 5 Oct 2025 21:49:14 -0000 (UTC), Phil Hobbs <pcdhSpamMeSenseless@electrooptical.net> wrote:

    john larkin <jl@glen--canyon.com> wrote:
    On Sun, 05 Oct 2025 17:54:56 +0100, Cursitor Doom <cd@notformail.com>
    wrote:

    I can't help noticing since I drew everyone's attention to Grok that
    it's gone awfully quiet around here. I did postulate that AI might
    kill this group, but maybe it's happening quicker than I'd expected.
    :-(

    No, it's just that few people design electronics now.


    Simon and I are planning to submit a patent application this week on the topic of high performance temperature control. Once it's done, we could discuss it here if folks are interested.

    Cheers

    Phil Hobbs

    Sure. We've had several adventures in that area.



    John Larkin
    Highland Tech Glen Canyon Design Center
    Lunatic Fringe Electronics
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From john larkin@jl@glen--canyon.com to sci.electronics.design on Sun Oct 5 15:06:00 2025
    From Newsgroup: sci.electronics.design

    On Sun, 5 Oct 2025 17:28:35 -0400, "Edward Rawde"
    <invalid@invalid.invalid> wrote:

    "Don Y" <blockedofcourse@foo.invalid> wrote in message news:10bulvc$3p33v$1@dont-email.me...
    On 10/5/2025 12:42 PM, Edward Rawde wrote:
    I see no sign of AI which can learn to design electronics but electronics isn't like playing Go.

    AI is already being used to design electronics.

    Where can I find an AI designer I can test?

    Flux.ai



    John Larkin
    Highland Tech Glen Canyon Design Center
    Lunatic Fringe Electronics
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Edward Rawde@invalid@invalid.invalid to sci.electronics.design on Sun Oct 5 18:29:55 2025
    From Newsgroup: sci.electronics.design

    "john larkin" <jl@glen--canyon.com> wrote in message news:diq5ek1l9al75fgca79e440ng33ra2isnh@4ax.com...
    On Sun, 5 Oct 2025 17:28:35 -0400, "Edward Rawde"
    <invalid@invalid.invalid> wrote:

    "Don Y" <blockedofcourse@foo.invalid> wrote in message news:10bulvc$3p33v$1@dont-email.me...
    On 10/5/2025 12:42 PM, Edward Rawde wrote:
    I see no sign of AI which can learn to design electronics but electronics isn't like playing Go.

    AI is already being used to design electronics.

    Where can I find an AI designer I can test?

    Flux.ai

    Oh that. I've avoided it so far because of other feedback. https://www.reddit.com/r/AskElectronics/comments/1ejrvpq/best_ai_currently_for_designing_electronic/

    That page mentions https://claude.ai/
    Despite not being happy with the "signup" process, I tested it just now with the inverting op amp question.
    Like all the others, I got -12V.
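    (The circuit itself isn't reproduced in this thread, so the sketch below
    uses purely hypothetical values -- Rf, Rin, Vin and the rails are my
    assumptions, not the posted circuit -- to show why -12V is the tempting
    textbook answer: the ideal-gain arithmetic for an inverting stage gives
    -(Rf/Rin)*Vin, but a real op amp can only swing to somewhere near its
    supply rails.)

        #include <stdio.h>

        int main(void) {
            /* Hypothetical values -- the actual posted circuit is not shown here. */
            double Rf = 12e3, Rin = 1e3, Vin = 1.0;  /* inverting amplifier */
            double Vrail = 10.0;                     /* assumed supply limit, volts */

            double ideal = -(Rf / Rin) * Vin;        /* naive answer: -12 V */
            double clipped = ideal;                  /* a real output saturates */
            if (clipped > Vrail)  clipped = Vrail;
            if (clipped < -Vrail) clipped = -Vrail;

            printf("ideal %.1f V, clipped %.1f V\n", ideal, clipped);
            return 0;
        }

    With those assumed numbers the textbook arithmetic says -12V while the
    output would actually sit near -10V; an answer that never mentions the
    supplies is one plausible way to fail the test.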




    John Larkin
    Highland Tech Glen Canyon Design Center
    Lunatic Fringe Electronics


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Don Y@blockedofcourse@foo.invalid to sci.electronics.design on Sun Oct 5 19:24:00 2025
    From Newsgroup: sci.electronics.design

    On 10/5/2025 2:28 PM, Edward Rawde wrote:
    "Don Y" <blockedofcourse@foo.invalid> wrote in message news:10bulvc$3p33v$1@dont-email.me...
    On 10/5/2025 12:42 PM, Edward Rawde wrote:
    I see no sign of AI which can learn to design electronics but electronics isn't like playing Go.

    AI is already being used to design electronics.

    Where can I find an AI designer I can test?

    If you're afraid that your ability to design electronics is being
    threatened by technology, best to find some other thing to hang your
    hat on. Cleaning bedpans will probably be a human-required skill
    for the foreseeable future -- too complex to automate, and humans who
    can do it are too cheap to replace.

    SnapMagic Copilot claims to be generative AI suiting your "need".
    Circuit Mind makes similar claims. Ditto Flux and its Copilot.
    There are similar tools that aid with design of ASICs.

    Of course, if you find it meets your challenge and respond by moving the goalposts ("Ah, but it can't do THIS!"), you're just fighting a
    losing battle.

    An AI that can read your mind is a long way off.
    E.g., I can't ask any of them to "design a system that monitors every
    aspect of an occupant's life and learns his habits and needs from
    those observations." But, such a task wouldn't be beyond the limits
    of an organic entity to solve. And, said entity could leverage an AI
    to reduce the effort required to design said hardware and software
    by eliminating much of the grunt work.

    Not *yet* to the point where
    you can give a general specification without many details -- but, that will come.

    What's likely to happen at present is that many young people will insist that what Grok says must be correct.

    The bigger fear is that OLDER people will defer to AIs -- out of concern for their positions.

    Imagine an AI telling a doctor that a patient likely has a cancer
    (or other malady). Doctor can see no evidence of this.

    So the doctor should do more tests.
    I'm no cancer expert but I would hope there is a test or two which can confirm
    or deny any type of cancer.

    That depends on the cancer and the risk/cost of a delayed diagnosis and treatment plan. And, how much money society (and individuals) are willing
    to invest in earlier (and more reliable) detection.

    Many diagnostics aren't certain or rely on subjective interpretations of
    data. How many mammograms of cancerous breasts are taken before a cancer
    is large enough to be *confidently* diagnosed? How much extra breast tissue
    is put at risk in that process? What chance for the cancer to metastasize before being noticeable, there?

    AI is another diagnostic tool to further increase confidence in a
    diagnosis OR detect conditions that "mere mortals" miss.

    Yet, is savvy enough to realize that if the patient DOES have a cancer
    and he has ignored the advice of his "learned companion" ("Ladies and
    gentlemen of the jury..."), *he* will be on the hook for the malpractice
    claim.

    Not if all the relevant tests say no cancer.

    The AI represents just such a test. So, if IT claims a cancer, do
    you ignore it -- because it's an AI and not a chemical assay?

    You likely don't want cancer treatment for cancer you don't have.

    You likely DO want treatment ASAP for a cancer that you *do*!

    So, the safe bet is to just accept the diagnosis of the AI -- even if
    it is incorrect.

    It's easy to see how similar claims can be made about other complex
    systems ("The airliner will suffer a catastrophic structural failure...").

    I doubt Boeing used AI.

    Past is past. Your concern should always be what's happening
    today and tomorrow.

    If challenging an "authority" only results in downside risk for the
    challenger, then what incentive to make said challenge?

    That's always been a risk of doing that.
    I can think of at least one manager who wanted to get rid of me for pointing out
    issues with the project when he wanted to tell managers above him that everything
    was wonderful.

    Sure. When a project manager "announced" that our team of *50* would
    be done in 4 weeks, I told him "You're fucked" (there was no other term
    that could adequately describe how far off his assessment was!). I
    then queried the various people in the room as to the efforts that *I* knew
    lay ahead of them.

    He complained to the department head. That didn't change the reality.
    ("Don, could you be a bit more diplomatic?")
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Don Y@blockedofcourse@foo.invalid to sci.electronics.design on Sun Oct 5 20:01:29 2025
    From Newsgroup: sci.electronics.design

    If challenging an "authority" only results in downside risk for the
    challenger, then what incentive to make said challenge?

    That's always been a risk of doing that.
    I can think of at least one manager who wanted to get rid of me for pointing out
    issues with the project when he wanted to tell managers above him that
    everything
    was wonderful.

    Sure. When a project manager "announced" that our team of *50* would

    No, I think that was *30* (based on the names I can recall). Unless there
    were a bunch of folks burdened from other departments (as is often the case
    in big companies)

    be done in 4 weeks, I told him "You're fucked" (there was no other term
    that could adequately describe how far off his assessment was!). I
    then queried the various people in the room as to the efforts that *I* knew lay ahead of them.

    He complained to the department head. That didn't change the reality. ("Don, could you be a bit more diplomatic?")

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Bill Sloman@bill.sloman@ieee.org to sci.electronics.design on Mon Oct 6 16:24:38 2025
    From Newsgroup: sci.electronics.design

    On 6/10/2025 6:42 am, Edward Rawde wrote:
    "Cursitor Doom" <cd@notformail.com> wrote in message news:6k85ekhb58ummrpfsg8scf61l1d8adcbr9@4ax.com...
    I can't help noticing since I drew everyone's attention to Grok that
    it's gone awfully quiet around here. I did postulate that AI might
    kill this group, but maybe it's happening quicker than I'd expected.
    :-(

    LLMs are clearly not useful for solving electronic circuit design problems.

    And neither is this group if young newcomers are indirectly referred to as newbie dimwits.
    (Or something like that, I forget the exact words.)

    Dim newbies is the traditional term, and it is reserved for newcomers
    who don't know what they are talking about, and are reluctant to take advantage of better-informed advice.

    AI which can learn from its mistakes clearly exists, otherwise how did AlphaGo learn?

    Structuring the software so that it can learn from experience is
    obviously possible, but it is a lot easier when there is a well-defined
    target - in AlphaGo, winning the game - than it is in more open-ended situations.

    I see no sign of AI which can learn to design electronics but electronics isn't like playing Go.

    What's likely to happen at present is that many young people will insist that what Grok says must be correct.

    In the same way that John Larkin insists that Donald Trump has common
    sense. Common sense is an ill-defined term, and "correct" doesn't mean
    much if you don't know how to recognise mistakes.
    --
    Bill Sloman, Sydney

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Bill Sloman@bill.sloman@ieee.org to sci.electronics.design on Mon Oct 6 16:53:40 2025
    From Newsgroup: sci.electronics.design

    On 6/10/2025 8:42 am, Cursitor Doom wrote:
    On Sun, 5 Oct 2025 15:42:48 -0400, "Edward Rawde"
    <invalid@invalid.invalid> wrote:

    "Cursitor Doom" <cd@notformail.com> wrote in message news:6k85ekhb58ummrpfsg8scf61l1d8adcbr9@4ax.com...
    I can't help noticing since I drew everyone's attention to Grok that
    it's gone awfully quiet around here. I did postulate that AI might
    kill this group, but maybe it's happening quicker than I'd expected.
    :-(

    LLMs are clearly not useful for solving electronic circuit design problems.

    Not very effective at interpreting images either.


    And neither is this group if young newcomers are indirectly referred to as newbie dimwits.
    (Or something like that, I forget the exact words.)

    Sounds like classic Bill Sloman.

    I didn't invent the term "dim newbie" but I do remember the time when it showed up here from time to time. Probably back before 2000. Searching sci.electronics.design on Google Groups didn't turn up much.

    Spehro Pefhany used the term newbie on Oct 26, 2012, at 7:07:23 PM.

    Cursitor Doom is dim, and did show up here after the term had mostly
    fallen out of use, but he's had enough experience of electronics that he
    isn't any kind of newbie. He doesn't seem to have learned all that much
    from his experience, but that's a different kind of problem.
    --
    Bill Sloman, Sydney

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From bitrex@user@example.net to sci.electronics.design on Mon Oct 6 02:08:30 2025
    From Newsgroup: sci.electronics.design

    On 10/5/2025 6:06 PM, john larkin wrote:
    On Sun, 5 Oct 2025 17:28:35 -0400, "Edward Rawde"
    <invalid@invalid.invalid> wrote:

    "Don Y" <blockedofcourse@foo.invalid> wrote in message news:10bulvc$3p33v$1@dont-email.me...
    On 10/5/2025 12:42 PM, Edward Rawde wrote:
    I see no sign of AI which can learn to design electronics but electronics isn't like playing Go.

    AI is already being used to design electronics.

    Where can I find an AI designer I can test?

    Flux.ai



    John Larkin
    Highland Tech Glen Canyon Design Center
    Lunatic Fringe Electronics


    ChatGPT is designing audiophile grade circuits:

    <https://imgur.com/a/8I0DuEs>
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From john larkin@jl@glen--canyon.com to sci.electronics.design on Mon Oct 6 08:06:33 2025
    From Newsgroup: sci.electronics.design

    On Mon, 6 Oct 2025 16:24:38 +1100, Bill Sloman <bill.sloman@ieee.org>
    wrote:

    On 6/10/2025 6:42 am, Edward Rawde wrote:
    "Cursitor Doom" <cd@notformail.com> wrote in message news:6k85ekhb58ummrpfsg8scf61l1d8adcbr9@4ax.com...
    I can't help noticing since I drew everyone's attention to Grok that
    it's gone awfully quiet around here. I did postulate that AI might
    kill this group, but maybe it's happening quicker than I'd expected.
    :-(

    LLMs are clearly not useful for solving electronic circuit design problems.
    And neither is this group if young newcomers are indirectly referred to as newbie dimwits.
    (Or something like that, I forget the exact words.)

    Dim newbies is the traditional term, and it is reserved for newcomers
    who don't know what they are talking about, and are reluctant to take advantage of better-informed advice.

    It's impressive how tribal people get, rallying against outsiders
    based on any, or no, real issues.



    AI which can learn from its mistakes clearly exists, otherwise how did AlphaGo learn?

    Structuring the software so that it can learn from experience is
    obviously possible, but it is a lot easier when there is a well-defined
    target - in AlphaGo, winning the game - than it is in more open-ended situations.

    I see no sign of AI which can learn to design electronics but electronics isn't like playing Go.

    What's likely to happen at present is that many young people will insist that what Grok says must be correct.

    In the same way that John Larkin insists that Donald Trump has common
    sense. Common sense is an ill-defined term, and "correct" doesn't mean
    much if you don't know how to recognise mistakes.

    Really, you obsess about me too much for your own good.

    Design something. Build it. You will feel better.


    John Larkin
    Highland Tech Glen Canyon Design Center
    Lunatic Fringe Electronics
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Edward Rawde@invalid@invalid.invalid to sci.electronics.design on Mon Oct 6 15:04:37 2025
    From Newsgroup: sci.electronics.design

    "Don Y" <blockedofcourse@foo.invalid> wrote in message news:10bv981$c44$1@dont-email.me...
    On 10/5/2025 2:28 PM, Edward Rawde wrote:
    "Don Y" <blockedofcourse@foo.invalid> wrote in message news:10bulvc$3p33v$1@dont-email.me...
    On 10/5/2025 12:42 PM, Edward Rawde wrote:
    I see no sign of AI which can learn to design electronics but electronics isn't like playing Go.

    AI is already being used to design electronics.

    Where can I find an AI designer I can test?

    If you're afraid that your ability to design electronics is being
    threatened by technology, best to find some other thing to hang your
    hat on. Cleaning bedpans will probably be a human-required skill
    for the foreseeable future -- too complex to automate, and humans who
    can do it are too cheap to replace.


    I didn't say I was afraid of technology in any way at all.

    I would be happy to use an AI assistant which can provide a useful contribution.

    But not one which thinks the output of the op amp circuit I posted recently is -12V
    (Or +12V in some cases. I've also seen 8V.)

    The AI design services you mentioned don't seem to be quick to show examples of their work.

    This doesn't mean I wouldn't want to use them but I prefer to try before I buy. I also prefer to see examples of specifications which were turned into designs by AI.


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Don Y@blockedofcourse@foo.invalid to sci.electronics.design on Mon Oct 6 14:21:38 2025
    From Newsgroup: sci.electronics.design

    On 10/6/2025 12:04 PM, Edward Rawde wrote:
    "Don Y" <blockedofcourse@foo.invalid> wrote in message news:10bv981$c44$1@dont-email.me...
    On 10/5/2025 2:28 PM, Edward Rawde wrote:
    "Don Y" <blockedofcourse@foo.invalid> wrote in message news:10bulvc$3p33v$1@dont-email.me...
    On 10/5/2025 12:42 PM, Edward Rawde wrote:
    I see no sign of AI which can learn to design electronics but electronics isn't like playing Go.

    AI is already being used to design electronics.

    Where can I find an AI designer I can test?

    If you're afraid that your ability to design electronics is being
    threatened by technology, best to find some other thing to hang your
    hat on. Cleaning bedpans will probably be a human-required skill
    for the foreseeable future -- too complex to automate, and humans who
    can do it are too cheap to replace.

    I didn't say I was afraid of technology in any way at all.

    I would be happy to use an AI assistant which can provide a useful contribution.

    But not one which thinks the output of the op amp circuit I posted recently is -12V
    (Or +12V in some cases. I've also seen 8V.)

    You're trying to use the wrong tool for the job. Would you use Spice to perform a finite element analysis on a cutting tool?

    LLMs are just one type of AI technology. You wouldn't use them to
    recognize faces in a photograph (though you could use an LMM for
    such a task).

    You want an AI that can *reason* and not just "look for some prior
    example of the challenge you are presenting".

    E.g., it's relatively easy (computationally inexpensive) to use SOMs to recognize handwritten digits. But, there's no *reasoning* there.
    And, as such, it can't *explain* why it has come to a particular decision.

    The AI design services you mentioned don't seem to be quick to show examples of their work.

    Likely only shown to people with $eriou$ intention$ and likely not intended to be "open". Here's an example of a person that you can replace!

    This doesn't mean I wouldn't want to use them but I prefer to try before I buy.

    You're likely already offloading some of your design efforts -- do you buy SoC's? Power supply modules? ICs instead of building everything with discretes?

    There, you've allowed a human/company to replace a chunk of engineering
    instead of doing it yourself. How many designs are produced with at
    least some "subsystem" purchased for inclusion within? Are you really
    an analog/digital/software designer if you don't do EVERYTHING yourself?? :>

    I also prefer to see examples of specifications which were turned into designs by AI.

    Ditto for anything that an AI "claims" to have generated.
    Proving that an AI can regurgitate a previously solved
    problem is just "automated retrieval".

    "Write a program that prints 'Hello, World!'..."

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Don Y@blockedofcourse@foo.invalid to sci.electronics.design on Mon Oct 6 17:38:21 2025
    From Newsgroup: sci.electronics.design

    On 10/6/2025 2:21 PM, Don Y wrote:
    Ditto for anything that an AI "claims" to have generated.
    Proving that an AI can regurgitate a previously solved
    problem is just "automated retrieval".

    "Write a program that prints 'Hello, World!'..."

    A better example:

    "Write a program/function/module that counts the number of
    set bits in a 64b integer."

    There are only 65 possible answers for 2^64 possible inputs.
    What approach does the AI take in pursuing this explicit
    (though vague) specification?
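
    For a sense of how underconstrained that spec is, here's a minimal C
    sketch (one possible illustration -- the spec names no language) of two
    equally valid answers with very different speed/portability profiles:

        #include <stdint.h>

        /* Portable: Kernighan's loop, one iteration per set bit. */
        static int popcount_loop(uint64_t x) {
            int n = 0;
            while (x) {
                x &= x - 1;   /* clears the lowest set bit */
                n++;
            }
            return n;
        }

        /* Compiler-specific: GCC/Clang builtin, typically a single POPCNT
           instruction on x86-64 -- fast, but not portable C. */
        static int popcount_builtin(uint64_t x) {
            return __builtin_popcountll(x);
        }

    Both meet the wording of the specification; nothing in it says which
    tradeoff was wanted.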
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Edward Rawde@invalid@invalid.invalid to sci.electronics.design on Mon Oct 6 23:49:59 2025
    From Newsgroup: sci.electronics.design

    "Don Y" <blockedofcourse@foo.invalid> wrote in message news:10c1ne0$l8u1$1@dont-email.me...
    On 10/6/2025 2:21 PM, Don Y wrote:
    Ditto for anything that an AI "claims" to have generated.
    Proving that an AI can regurgitate a previously solved
    problem is just "automated retrieval".

    "Write a program that prints 'Hello, World!'..."

    A better example:

    "Write a program/function/module that counts the number of
    set bits in a 64b integer."

    There are only 65 possible answers for 2^64 possible inputs.
    What approach does the AI take in pursuing this explicit
    (though vague) specification?

    Depends on what you ask.
    Try asking Grok:
    Map the number of set bits in a 64-bit word to a 7-bit code

    But we all know LLMs were trained on code.


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Don Y@blockedofcourse@foo.invalid to sci.electronics.design on Mon Oct 6 21:11:47 2025
    From Newsgroup: sci.electronics.design

    On 10/6/2025 8:49 PM, Edward Rawde wrote:
    "Don Y" <blockedofcourse@foo.invalid> wrote in message news:10c1ne0$l8u1$1@dont-email.me...
    On 10/6/2025 2:21 PM, Don Y wrote:
    Ditto for anything that an AI "claims" to have generated.
    Proving that an AI can regurgitate a previously solved
    problem is just "automated retrieval".

    "Write a program that prints 'Hello, World!'..."

    A better example:

    "Write a program/function/module that counts the number of
    set bits in a 64b integer."

    There are only 65 possible answers for 2^64 possible inputs.
    What approach does the AI take in pursuing this explicit
    (though vague) specification?

    Depends on what you ask.
    Try asking Grok:
    Map the number of set bits in a 64-bit word to a 7-bit code

    But we all know LLMs were trained on code.

    But there are many (practical) different solutions to the problem
    among many thousands of *possible* solutions. Each has different resource/performance issues. Will it opt for speed? code size?
    portability? intuitiveness? "cleverness"?

    Will it try to optimize for particular cases?

    Or, does it settle for "sufficiency"?

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Bill Sloman@bill.sloman@ieee.org to sci.electronics.design on Tue Oct 7 15:46:10 2025
    From Newsgroup: sci.electronics.design

    On 7/10/2025 2:06 am, john larkin wrote:
    On Mon, 6 Oct 2025 16:24:38 +1100, Bill Sloman <bill.sloman@ieee.org>
    wrote:

    On 6/10/2025 6:42 am, Edward Rawde wrote:
    "Cursitor Doom" <cd@notformail.com> wrote in message news:6k85ekhb58ummrpfsg8scf61l1d8adcbr9@4ax.com...
    I can't help noticing since I drew everyone's attention to Grok that
    it's gone awfully quiet around here. I did postulate that AI might
    kill this group, but maybe it's happening quicker than I'd expected.
    :-(

    LLMs are clearly not useful for solving electronic circuit design problems.
    And neither is this group if young newcomers are indirectly referred to as newbie dimwits.
    (Or something like that, I forget the exact words.)

    Dim newbies is the traditional term, and it is reserved for newcomers
    who don't know what they are talking about, and are reluctant to take
    advantage of better-informed advice.

    It's impressive how tribal people get, rallying against outsiders
    based on any, or no, real issues.

    Dim newbies were a rather restricted class of outsiders. This group has
    never been all that tribal, and people who knew what they were talking
    about were accepted without fuss.

    AI which can learn from its mistakes clearly exists, otherwise how did AlphaGo learn?

    Structuring the software so that it can learn from experience is
    obviously possible, but it is a lot easier when there is a well-defined
    target - in AlphaGo, winning the game - than it is in more open-ended
    situations.

    I see no sign of AI which can learn to design electronics but electronics isn't like playing Go.

    What's likely to happen at present is that many young people will insist that what Grok says must be correct.

    In the same way that John Larkin insists that Donald Trump has common
    sense. Common sense is an ill-defined term, and "correct" doesn't mean
    much if you don't know how to recognise mistakes.

    Really, you obsess about me too much for your own good.

    You do provide a very convenient bad example. For a long time you were
    the group's most prolific poster (and may still be). The personality
    defect that encourages you to sound off when you don't have anything to
    say does generate a lot of bad examples available to anybody who needs one.

    Design something. Build it. You will feel better.

    I'd be delighted if somebody came up with a problem that was worth my
    while to solve. Even more if they offered to pay for the solution - not because I need the money, but because it would suggest that they were genuinely interested in getting a result.

    All you are interested in is getting flattered, and you get rather
    spiteful when it doesn't happen.
    --
    Bill Sloman, Sydney

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Buzz McCool@buzz_mccool@yahoo.com to sci.electronics.design on Tue Oct 7 10:54:16 2025
    From Newsgroup: sci.electronics.design

    On 10/5/2025 2:49 PM, Phil Hobbs wrote:

    Simon and I are planning to submit a patent application this week on the topic of high performance temperature control. Once it's done, we could discuss it here if folks are interested.
    Yes, I enjoy reading about your work.


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Gerhard Hoffmann@dk4xp@arcor.de to sci.electronics.design on Tue Oct 7 20:42:15 2025
    From Newsgroup: sci.electronics.design

    Am 07.10.25 um 19:54 schrieb Buzz McCool:
    On 10/5/2025 2:49 PM, Phil Hobbs wrote:

    Simon and I are planning to submit a patent application this week on the
    topic of high performance temperature control. Once it's done, we could
    discuss it here if folks are interested.
    Yes, I enjoy reading about your work.

    +1

    Cheers, Gerhard



    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Bill Sloman@bill.sloman@ieee.org to sci.electronics.design on Thu Oct 9 15:16:16 2025
    From Newsgroup: sci.electronics.design

    On 6/10/2025 8:49 am, Phil Hobbs wrote:
    john larkin <jl@glen--canyon.com> wrote:
    On Sun, 05 Oct 2025 17:54:56 +0100, Cursitor Doom <cd@notformail.com>
    wrote:

    I can't help noticing since I drew everyone's attention to Grok that
    it's gone awfully quiet around here. I did postulate that AI might
    kill this group, but maybe it's happening quicker than I'd expected.
    :-(

    No, it's just that few people design electronics now.

    Simon and I are planning to submit a patent application this week on the topic of high performance temperature control. Once it's done, we could discuss it here if folks are interested.

    A good literature search used to cost more than a patent. Posting here,
    once you've got the patent safely submitted, could save you some money.

    Is the high performance limited to the level of temperature control you
    can get, or are you more interested in getting to the desired
    temperature range as fast as possible?
    --
    Bill Sloman, Sydney

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From albert@albert@spenarnc.xs4all.nl to sci.electronics.design on Sat Oct 11 00:36:26 2025
    From Newsgroup: sci.electronics.design

    In article <10bv981$c44$1@dont-email.me>,
    Don Y <blockedofcourse@foo.invalid> wrote:
    <SNIP>
    Many diagnostics aren't certain or rely on subjective interpretations of data. How many mammograms of cancerous breasts are taken before a cancer
    is large enough to be *confidently* diagnosed? How much extra breast tissue is put at risk in that process? What chance for the cancer to metastasize before being noticeable, there?

    Reportedly Chinese hospitals are using AI successfully to interpret
    Roentgen photos. It speeds up the diagnosis process, but they don't
    eliminate radiologists.


    AI is another diagnostic tool to further increase confidence in a
    diagnosis OR detect conditions that "mere mortals" miss.


    Groetjes Albert
    --
    The Chinese government is satisfied with its military superiority over USA.
    The next 5 year plan has as primary goal to advance life expectancy
    over 80 years, like Western Europe.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Don Y@blockedofcourse@foo.invalid to sci.electronics.design on Fri Oct 10 16:19:23 2025
    From Newsgroup: sci.electronics.design

    On 10/10/2025 3:36 PM, albert@spenarnc.xs4all.nl wrote:
    In article <10bv981$c44$1@dont-email.me>,
    Don Y <blockedofcourse@foo.invalid> wrote:
    <SNIP>
    Many diagnostics aren't certain or rely on subjective interpretations of
    data. How many mammograms of cancerous breasts are taken before a cancer
    is large enough to be *confidently* diagnosed? How much extra breast tissue is put at risk in that process? What chance for the cancer to metastasize before being noticeable, there?

    Reportedly Chinese hospitals are using AI successfully to interpret
    Roentgen photos. It speeds up the diagnosis process, but they don't
    eliminate radiologists.

    Put that in the *legal* environment of the US: the AI makes a claim (Dx).
    If the radiologist doesn't accept the claim and the claim proves,
    LATER, to have been correct, the patient incurs a "loss". The radiologist
    gets sued.

    OTOH, if the radiologist defers to the AI and THAT proves to be wrong,
    the AI doesn't get sued... If the radiologist gets sued, part of their
    defense will be to cite the expertise of the AI and his reliance on its
    "expert opinion".

    In effect, the AI's "opinion" is overweighted instead of being just an
    advisory one.

    AI is another diagnostic tool to further increase confidence in a
    diagnosis OR detect conditions that "mere mortals" miss.


    Groetjes Albert

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Theo@theom+news@chiark.greenend.org.uk to sci.electronics.design on Sat Oct 11 13:02:42 2025
    From Newsgroup: sci.electronics.design

    Don Y <blockedofcourse@foo.invalid> wrote:
    On 10/6/2025 8:49 PM, Edward Rawde wrote:
    "Don Y" <blockedofcourse@foo.invalid> wrote in message news:10c1ne0$l8u1$1@dont-email.me...
    On 10/6/2025 2:21 PM, Don Y wrote:
    Ditto for anything that an AI "claims" to have generated.
    Proving that an AI can regurgitate a previously solved
    problem is just "automated retrieval".

    "Write a program that prints 'Hello, World!'..."

    A better example:

    "Write a program/function/module that counts the number of
    set bits in a 64b integer."

    There are only 65 possible answers for 2^64 possible inputs.
    What approach does the AI take in pursuing this explicit
    (though vague) specification?

    Depends on what you ask.
    Try asking Grok:
    Map the number of set bits in a 64-bit word to a 7-bit code

    But we all know LLMs were trained on code.

    But there are many (practical) different solutions to the problem
    among many thousands of *possible* solutions. Each has different resource/performance issues. Will it opt for speed? code size?
    portability? intuitiveness? "cleverness"?

    Will it try to optimize for particular cases?

    Or, does it settle for "sufficiency"?

    None of the above. It looks for examples of the same code having been
    written before, and mashes up something to present to you. It doesn't
    'know' about speed or portability or code size, absent somebody remarking
    about those in its input data.

    There's a lot of code out there, so if you ask for a fast algorithm then it
    can probably dredge one up, but it doesn't 'know' why it's fast.

    If you ask it why it's fast, it can look for somebody talking about that in
    the training data and present that as an argument, but it doesn't guarantee
    to relate to the same code example it provided.

    For toy problems that have been done a million times before, the training
    data is pretty solid so it might look good. But once you start going off
    track into areas where the training data is sparse, then you need to look more closely.

    Theo
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From john larkin@jl@glen--canyon.com to sci.electronics.design on Sat Oct 11 10:06:10 2025
    From Newsgroup: sci.electronics.design

    On Sat, 11 Oct 2025 09:56:29 -0700, Don Y
    <blockedofcourse@foo.invalid> wrote:

    On 10/11/2025 5:02 AM, Theo wrote:
    Don Y <blockedofcourse@foo.invalid> wrote:
    On 10/6/2025 8:49 PM, Edward Rawde wrote:
    "Don Y" <blockedofcourse@foo.invalid> wrote in message news:10c1ne0$l8u1$1@dont-email.me...
    On 10/6/2025 2:21 PM, Don Y wrote:
    Ditto for anything that an AI "claims" to have generated.
    Proving that an AI can regurgitate a previously solved
    problem is just "automated retrieval".

    "Write a program that prints 'Hello, World!'..."

    A better example:

    "Write a program/function/module that counts the number of
    set bits in a 64b integer."

    There are only 65 possible answers for 2^64 possible inputs.
    What approach does the AI take in pursuing this explicit
    (though vague) specification?

    Depends on what you ask.
    Try asking Grok:
    Map the number of set bits in a 64-bit word to a 7-bit code

    But we all know LLMs were trained on code.

    But there are many (practical) different solutions to the problem
    among many thousands of *possible* solutions. Each has different
    resource/performance issues. Will it opt for speed? code size?
    portability? intuitiveness? "cleverness"?

    Will it try to optimize for particular cases?

    Or, does it settle for "sufficiency"?

    None of the above.

    It was a rhetorical question illustrating how easy it is
    to NOT properly constrain a solution space. I.e., someone
    has to "tell" an AI what a suitable answer will look like.
    If that someone can't imagine all of the criteria appropriate
    to that solution, then you *may* get an implementation
    that fails many criteria that you've not realized are
    important to your problem.

    Like asking someone to build you a house -- and ending up
    with a house sized for *dolls*!

    It looks for examples of the same code having been
    written before, and mashes up something to present to you. It doesn't
    'know' about speed or portability or code size, absent somebody remarking
    about those in its input data.

    There's a lot of code out there, so if you ask for a fast algorithm then it can probably dredge one up, but it doesn't 'know' why it's fast.

    But, a fast algorithm on a 64 bit machine will be very different than
    the same function written for an 8 bit machine. See how easy it is to "forget" pertinent details?

    Note that we're just talking about a *tiny* piece of code (dozen lines?), here -- and how easy it is to NOT ask for the correct constraints.

    If you ask it why it's fast, it can look for somebody talking about that in the training data and present that as an argument, but it doesn't guarantee to relate to the same code example it provided.

    For toy problems that have been done a million times before, the training
    data is pretty solid so it might look good. But once you start going off
    track into areas where the training data is sparse, then you need to look more closely.

    That was the point of my "Hello, World" example.

    I suspect it does reasonably well with javascript and html5 for web pages (which tend to largely resemble each other save for minor details and graphics)

    But, think of how much effort you would have to put into "specifying"
    a *real* problem -- enough to be sure the solution presented actually
    does fit *your* needs. I.e., if you aren't already writing such specifications for your code, you likely aren't competent to direct
    an AI any more than your own "coders".

    And electronic design is not just coding. It needs real, organic
    intelligence.

    It's impressive that a human brain only needs about a hundred watts.


    John Larkin
    Highland Tech Glen Canyon Design Center
    Lunatic Fringe Electronics
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Don Y@blockedofcourse@foo.invalid to sci.electronics.design on Sat Oct 11 09:56:29 2025
    From Newsgroup: sci.electronics.design

    On 10/11/2025 5:02 AM, Theo wrote:
    Don Y <blockedofcourse@foo.invalid> wrote:
    On 10/6/2025 8:49 PM, Edward Rawde wrote:
    "Don Y" <blockedofcourse@foo.invalid> wrote in message news:10c1ne0$l8u1$1@dont-email.me...
    On 10/6/2025 2:21 PM, Don Y wrote:
    Ditto for anything that an AI "claims" to have generated.
    Proving that an AI can regurgitate a previously solved
    problem is just "automated retrieval".

    "Write a program that prints 'Hello, World!'..."

    A better example:

    "Write a program/function/module that counts the number of
    set bits in a 64b integer."

    There are only 65 possible answers for 2^64 possible inputs.
    What approach does the AI take in pursuing this explicit
    (though vague) specification?

    Depends on what you ask.
    Try asking Grok:
    Map the number of set bits in a 64-bit word to a 7-bit code

    But we all know LLMs were trained on code.

    But there are many (practical) different solutions to the problem
    among many thousands of *possible* solutions. Each has different
    resource/performance issues. Will it opt for speed? code size?
    portability? intuitiveness? "cleverness"?

    Will it try to optimize for particular cases?

    Or, does it settle for "sufficiency"?

    None of the above.

    It was a rhetorical question illustrating how easy it is
    to NOT properly constrain a solution space. I.e., someone
    has to "tell" an AI what a suitable answer will look like.
    If that someone can't imagine all of the criteria appropriate
    to that solution, then you *may* get an implementation
    that fails many criteria that you've not realized are
    important to your problem.

    Like asking someone to build you a house -- and ending up
    with a house sized for *dolls*!

    It looks for examples of the same code having been
    written before, and mashes up something to present to you. It doesn't
    'know' about speed or portability or code size, absent somebody remarking about those in its input data.

    There's a lot of code out there, so if you ask for a fast algorithm then it can probably dredge one up, but it doesn't 'know' why it's fast.

    But, a fast algorithm on a 64 bit machine will be very different than
    the same function written for an 8 bit machine. See how easy it is to
    "forget" pertinent details?

    Note that we're just talking about a *tiny* piece of code (dozen lines?),
    here -- and how easy it is to NOT ask for the correct constraints.
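
    To make that concrete, a minimal sketch in C (assuming, for the 8 bit
    case, that the value arrives as eight bytes, since such a core has no
    native 64-bit type):

        #include <stdint.h>

        /* 64-bit machine: branch-free SWAR reduction, constant time. */
        static int popcount_swar(uint64_t x) {
            x = x - ((x >> 1) & 0x5555555555555555ULL);
            x = (x & 0x3333333333333333ULL) + ((x >> 2) & 0x3333333333333333ULL);
            x = (x + (x >> 4)) & 0x0F0F0F0F0F0F0F0FULL;
            return (int)((x * 0x0101010101010101ULL) >> 56);
        }

        /* 8-bit MCU: nibble lookup table (16 bytes of ROM), one byte at a
           time -- no 64-bit arithmetic demanded of the core. */
        static const uint8_t nibble_bits[16] =
            {0, 1, 1, 2, 1, 2, 2, 3, 1, 2, 2, 3, 2, 3, 3, 4};

        static uint8_t popcount_bytes(const uint8_t b[8]) {
            uint8_t n = 0;
            for (uint8_t i = 0; i < 8; i++)
                n += nibble_bits[b[i] & 0x0F] + nibble_bits[b[i] >> 4];
            return n;
        }

    Ask only for "fast" and either one could come back, depending on which
    machine the training data happened to assume.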

    If you ask it why it's fast, it can look for somebody talking about that in the training data and present that as an argument, but it doesn't guarantee to relate to the same code example it provided.

    For toy problems that have been done a million times before, the training data is pretty solid so it might look good. But once you start going off track into areas where the training data is sparse, then you need to look more closely.

    That was the point of my "Hello, World" example.

    I suspect it does reasonably well with javascript and html5 for web pages (which tend to largely resemble each other save for minor details and
    graphics)

    But, think of how much effort you would have to put into "specifying"
    a *real* problem -- enough to be sure the solution presented actually
    does fit *your* needs. I.e., if you aren't already writing such
    specifications for your code, you likely aren't competent to direct
    an AI any more than your own "coders".
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Bill Sloman@bill.sloman@ieee.org to sci.electronics.design on Sun Oct 12 22:31:22 2025
    From Newsgroup: sci.electronics.design

    On 12/10/2025 4:06 am, john larkin wrote:
    On Sat, 11 Oct 2025 09:56:29 -0700, Don Y
    <blockedofcourse@foo.invalid> wrote:

    On 10/11/2025 5:02 AM, Theo wrote:
    Don Y <blockedofcourse@foo.invalid> wrote:
    On 10/6/2025 8:49 PM, Edward Rawde wrote:
    "Don Y" <blockedofcourse@foo.invalid> wrote in message news:10c1ne0$l8u1$1@dont-email.me...
    On 10/6/2025 2:21 PM, Don Y wrote:
    Ditto for anything that an AI "claims" to have generated.
    Proving that an AI can regurgitate a previously solved
    problem is just "automated retrieval".

    "Write a program that prints 'Hello, World!'..."

    A better example:

    "Write a program/function/module that counts the number of
    set bits in a 64b integer."

    There are only 65 possible answers for 2^64 possible inputs.
    What approach does the AI take in pursuing this explicit
    (though vague) specification?

    Depends on what you ask.
    Try asking Grok:
    Map the number of set bits in a 64-bit word to a 7-bit code

    But we all know LLMs were trained on code.

    But there are many (practical) different solutions to the problem
    among many thousands of *possible* solutions. Each has different
    resource/performance issues. Will it opt for speed? code size?
    portability? intuitiveness? "cleverness"?

    Will it try to optimize for particular cases?

    Or, does it settle for "sufficiency"?

    None of the above.

    It was a rhetorical question illustrating how easy it is
    to NOT properly constrain a solution space. I.e., someone
    has to "tell" an AI what a suitable answer will look like.
    If that someone can't imagine all of the criteria appropriate
    to that solution, then you *may* get an implementation
    that fails many criteria that you've not realized are
    important to your problem.

    Like asking someone to build you a house -- and ending up
    with a house sized for *dolls*!

    It looks for examples of the same code having been
    written before, and mashes up something to present to you. It doesn't
    'know' about speed or portability or code size, absent somebody remarking about those in its input data.

    There's a lot of code out there, so if you ask for a fast algorithm then it can probably dredge one up, but it doesn't 'know' why it's fast.

    But, a fast algorithm on a 64 bit machine will be very different than
    the same function written for an 8 bit machine. See how easy it is to
    "forget" pertinent details?

    Note that we're just talking about a *tiny* piece of code (dozen lines?),
    here -- and how easy it is to NOT ask for the correct constraints.

    If you ask it why it's fast, it can look for somebody talking about that in the training data and present that as an argument, but it doesn't guarantee to relate to the same code example it provided.

    For toy problems that have been done a million times before, the training data is pretty solid so it might look good. But once you start going off track into areas where the training data is sparse, then you need to look more closely.

    That was the point of my "Hello, World" example.

    I suspect it does reasonably well with javascript and html5 for web pages
    (which tend to largely resemble each other save for minor details and
    graphics)

    But, think of how much effort you would have to put into "specifying"
    a *real* problem -- enough to be sure the solution presented actually
    does fit *your* needs. I.e., if you aren't already writing such
    specifications for your code, you likely aren't competent to direct
    an AI any more than your own "coders".

    And electronic design is not just coding. It needs real, organic intelligence.

    To do it well. More or less adequate electronic design is easier. I've
    cleaned up after a few people whose idea of "adequate" fell a bit short.

    It's impressive that a human brain only needs about a hundred watts.

    It is woefully slow.
    --
    Bill Sloman, Sydney

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Edward Rawde@invalid@invalid.invalid to sci.electronics.design on Sun Oct 12 10:25:18 2025
    From Newsgroup: sci.electronics.design

    "Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10cg3ib$1e2gm$2@dont-email.me...
    On 12/10/2025 4:06 am, john larkin wrote:
    On Sat, 11 Oct 2025 09:56:29 -0700, Don Y
    <blockedofcourse@foo.invalid> wrote:

    On 10/11/2025 5:02 AM, Theo wrote:
    Don Y <blockedofcourse@foo.invalid> wrote:
    On 10/6/2025 8:49 PM, Edward Rawde wrote:
    "Don Y" <blockedofcourse@foo.invalid> wrote in message news:10c1ne0$l8u1$1@dont-email.me...
    On 10/6/2025 2:21 PM, Don Y wrote:
    Ditto for anything that an AI "claims" to have generated.
    Proving that an AI can regurgitate a previously solved
    problem is just "automated retrieval".

    "Write a program that prints 'Hello, World!'..."

    A better example:

    "Write a program/function/module that counts the number of
    set bits in a 64b integer."

    There are only 65 possible answers for 2^64 possible inputs.
    What approach does the AI take in pursuing this explicit
    (though vague) specification?

    Depends on what you ask.
    Try asking Grok:
    Map the number of set bits in a 64-bit word to a 7-bit code

    But we all know LLMs were trained on code.

    But there are many (practical) different solutions to the problem
    among many thousands of *possible* solutions. Each has different
    resource/performance issues. Will it opt for speed? code size?
    portability? intuitiveness? "cleverness"?

    Will it try to optimize for particular cases?

    Or, does it settle for "sufficiency"?

    None of the above.

    It was a rhetorical question illustrating how easy it is
    to NOT properly constrain a solution space. I.e., someone
    has to "tell" an AI what a suitable answer will look like.
    If that someone can't imagine all of the criteria appropriate
    to that solution, then you *may* get an implementation
    that fails many criteria that you've not realized are
    important to your problem.

    Like asking someone to build you a house -- and ending up
    with a house sized for *dolls*!

    It looks for examples of the same code having been
    written before, and mashes up something to present to you. It doesn't 'know' about speed or portability or code size, absent somebody remarking about those in its input data.

    There's a lot of code out there, so if you ask for a fast algorithm then it
    can probably dredge one up, but it doesn't 'know' why it's fast.

    But, a fast algorithm on a 64 bit machine will be very different than
    the same function written for an 8 bit machine. See how easy it is to
    "forget" pertinent details?

    Note that we're just talking about a *tiny* piece of code (dozen lines?), here -- and how easy it is to NOT ask for the correct constraints.

    If you ask it why it's fast, it can look for somebody talking about that in
    the training data and present that as an argument, but it doesn't guarantee
    to relate to the same code example it provided.

    For toy problems that have been done a million times before, the training
    data is pretty solid so it might look good. But once you start going off
    track into areas where the training data is sparse then you can look more closely.

    That was the point of my "Hello, World" example.

    I suspect it does reasonably well with javascript and html5 for web pages
    (which tend to largely resemble each other save for minor details and
    graphics)

    But, think of how much effort you would have to put into "specifying"
    a *real* problem -- enough to be sure the solution presented actually
    does fit *your* needs. I.e., if you aren't already writing such
    specifications for your code, you likely aren't competent to direct
    an AI any more than your own "coders".

    And electronic design is not just coding. It needs real, organic
    intelligence.

    To do it well. More or less adequate electronic design is easier.

    I've cleaned up after a few people whose idea of "adequate" fell a bit short.

    That must be the 234,412,265th time you've said that.


    It's impressive that a human brain only needs about a hundred watts.

    It is woefully slow.

    At some things. Not at others.


    --
    Bill Sloman, Sydney



    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Bill Sloman@bill.sloman@ieee.org to sci.electronics.design on Mon Oct 13 03:01:25 2025
    From Newsgroup: sci.electronics.design

    On 13/10/2025 1:25 am, Edward Rawde wrote:
    "Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10cg3ib$1e2gm$2@dont-email.me...
    On 12/10/2025 4:06 am, john larkin wrote:
    On Sat, 11 Oct 2025 09:56:29 -0700, Don Y
    <blockedofcourse@foo.invalid> wrote:

    On 10/11/2025 5:02 AM, Theo wrote:
    Don Y <blockedofcourse@foo.invalid> wrote:
    On 10/6/2025 8:49 PM, Edward Rawde wrote:
    "Don Y" <blockedofcourse@foo.invalid> wrote in message news:10c1ne0$l8u1$1@dont-email.me...
    On 10/6/2025 2:21 PM, Don Y wrote:
    Ditto for anything that an AI "claims" to have generated.
    Proving that an AI can regurgitate a previously solved
    problem is just "automated retrieval".

    "Write a program that prints 'Hello, World!'..."

    A better example:

    "Write a program/function/module that counts the number of
    set bits in a 64b integer."

    There are only 65 possible answers for 2^64 possible inputs.
    What approach does the AI take in pursuing this explicit
    (though vague) specification?

    Depends on what you ask.
    Try asking Grok:
    Map the number of set bits in a 64-bit word to a 7-bit code

    But we all know LLMs were trained on code.

    But there are many (practical) different solutions to the problem
    among many thousands of *possible* solutions. Each has different
    resource/performance issues. Will it opt for speed? code size?
    portability? intuitiveness? "cleverness"?

    Will it try to optimize for particular cases?

    Or, does it settle for "sufficiency"?

    None of the above.

    It was a rhetorical question illustrating how easy it is
    to NOT properly constrain a solution space. I.e., someone
    has to "tell" an AI what a suitable answer will look like.
    If that someone can't imagine all of the criteria appropriate
    to that solution, then you *may* get an implementation
    that fails many criteria that you've not realized are
    important to your problem.

    Like asking someone to build you a house -- and ending up
    with a house sized for *dolls*!

    It looks for examples of the same code having been
    written before, and mashes up something to present to you. It doesn't
    'know' about speed or portability or code size, absent somebody remarking
    about those in its input data.

    There's a lot of code out there, so if you ask for a fast algorithm then it
    can probably dredge one up, but it doesn't 'know' why it's fast.

    But, a fast algorithm on a 64 bit machine will be very different than
    the same function written for an 8 bit machine. See how easy it is to
    "forget" pertinent details?

    Note that we're just talking about a *tiny* piece of code (dozen lines?),
    here -- and how easy it is to NOT ask for the correct constraints.

    If you ask it why it's fast, it can look for somebody talking about that in
    the training data and present that as an argument, but it doesn't guarantee
    to relate to the same code example it provided.

    For toy problems that have been done a million times before, the training
    data is pretty solid so it might look good. But once you start going off
    track into areas where the training data is sparse then you can look more closely.

    That was the point of my "Hello, World" example.

    I suspect it does reasonably well with javascript and html5 for web pages
    (which tend to largely resemble each other save for minor details and
    graphics)

    But, think of how much effort you would have to put into "specifying"
    a *real* problem -- enough to be sure the solution presented actually
    does fit *your* needs. I.e., if you aren't already writing such
    specifications for your code, you likely aren't competent to direct
    an AI any more than your own "coders".

    And electronic design is not just coding. It needs real, organic
    intelligence.

    To do it well. More or less adequate electronic design is easier.

    I've cleaned up after a few people whose idea of "adequate" fell a bit short.

    That must be the 234,412,265th time you've said that.

    That may be something of an exaggeration.

    It's impressive that a human brain only needs about a hundred watts.

    It is woefully slow.

    At some things. Not at others.

    Name one. The basic operations in the human brain seem to work in the millisecond range, and ECL can do stuff in a nanosecond.

    The human brain does well on large data sets because it has a lot more parallel processing than a regular computer, but - as demonstrated by
    the solution of the protein folding problem - you can get more data into
    a big computer than you can into a human brain.
    --
    Bill Sloman, Sydney


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Edward Rawde@invalid@invalid.invalid to sci.electronics.design on Sun Oct 12 12:35:47 2025
    From Newsgroup: sci.electronics.design

    "Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10cgjcn$1i52v$4@dont-email.me...
    On 13/10/2025 1:25 am, Edward Rawde wrote:
    "Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10cg3ib$1e2gm$2@dont-email.me...
    On 12/10/2025 4:06 am, john larkin wrote:
    On Sat, 11 Oct 2025 09:56:29 -0700, Don Y
    <blockedofcourse@foo.invalid> wrote:

    On 10/11/2025 5:02 AM, Theo wrote:
    Don Y <blockedofcourse@foo.invalid> wrote:
    On 10/6/2025 8:49 PM, Edward Rawde wrote:
    "Don Y" <blockedofcourse@foo.invalid> wrote in message news:10c1ne0$l8u1$1@dont-email.me...
    On 10/6/2025 2:21 PM, Don Y wrote:
    Ditto for anything that an AI "claims" to have generated.
    .....

    And electronic design is not just coding. It needs real, organic
    intelligence.

    To do it well. More or less adequate electronic design is easier.

    I've cleaned up after a few people whose idea of "adequate" fell a bit short.
    That must be the 234,412,265th time you've said that.

    That may be something of an exaggeration.

    Ok 234,412,104th


    It's impressive that a human brain only needs about a hundred watts.

    It is woefully slow.

    At some things. Not at others.

    Name one. The basic operations in the human brain seem to work in the millisecond range, and ECL can do stuff in a nanosecond.

    Understanding a language you're fluent in appears to be near enough instant. Why would you need it any faster?
    I've never seen ECL do that.


    The human brain does well on large data sets because it has a lot more parallel processing than a regular computer, but - as
    demonstrated by the solution of the protein folding problem - you can get more data into a big computer than you can into a human
    brain.

    Sure but I've yet to see an online AI which learns from its mistakes.
    And when that happens, who is going to teach it what a mistake is and what isn't?
    Some subjects, such as politics, may run into the same difficulties humans have.
    Where will a DT made of ECL with a much larger data set lead us?


    --
    Bill Sloman, Sydney




    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From john larkin@jl@glen--canyon.com to sci.electronics.design on Sun Oct 12 10:13:18 2025
    From Newsgroup: sci.electronics.design

    On Sun, 12 Oct 2025 10:25:18 -0400, "Edward Rawde"
    <invalid@invalid.invalid> wrote:

    "Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10cg3ib$1e2gm$2@dont-email.me...
    On 12/10/2025 4:06 am, john larkin wrote:
    On Sat, 11 Oct 2025 09:56:29 -0700, Don Y
    <blockedofcourse@foo.invalid> wrote:

    On 10/11/2025 5:02 AM, Theo wrote:
    Don Y <blockedofcourse@foo.invalid> wrote:
    On 10/6/2025 8:49 PM, Edward Rawde wrote:
    "Don Y" <blockedofcourse@foo.invalid> wrote in message news:10c1ne0$l8u1$1@dont-email.me...
    On 10/6/2025 2:21 PM, Don Y wrote:
    Ditto for anything that an AI "claims" to have generated.
    Proving that an AI can regurgitate a previously solved
    problem is just "automated retrieval".

    "Write a program that prints 'Hello, World!'..."

    A better example:

    "Write a program/function/module that counts the number of
    set bits in a 64b integer."

    There are only 65 possible answers for 2^64 possible inputs.
    What approach does the AI take in pursuing this explicit
    (though vague) specification?

    Depends on what you ask.
    Try asking Grok:
    Map the number of set bits in a 64-bit word to a 7-bit code

    But we all know LLMs were trained on code.

    But there are many (practical) different solutions to the problem
    among many thousands of *possible* solutions. Each has different
    resource/performance issues. Will it opt for speed? code size?
    portability? intuitiveness? "cleverness"?

    Will it try to optimize for particular cases?

    Or, does it settle for "sufficiency"?

    None of the above.

    It was a rhetorical question illustrating how easy it is
    to NOT properly constrain a solution space. I.e., someone
    has to "tell" an AI what a suitable answer will look like.
    If that someone can't imagine all of the criteria appropriate
    to that solution, then you *may* get an implementation
    that fails many criteria that you've not realized are
    important to your problem.

    Like asking someone to build you a house -- and ending up
    with a house sized for *dolls*!

    It looks for examples of the same code having been
    written before, and mashes up something to present to you. It doesn't
    'know' about speed or portability or code size, absent somebody remarking
    about those in its input data.

    There's a lot of code out there, so if you ask for a fast algorithm then it
    can probably dredge one up, but it doesn't 'know' why it's fast.

    But, a fast algorithm on a 64 bit machine will be very different than
    the same function written for an 8 bit machine. See how easy it is to
    "forget" pertinent details?

    Note that we're just talking about a *tiny* piece of code (dozen lines?),
    here -- and how easy it is to NOT ask for the correct constraints.

    If you ask it why it's fast, it can look for somebody talking about that in
    the training data and present that as an argument, but it doesn't guarantee
    to relate to the same code example it provided.

    For toy problems that have been done a million times before, the training
    data is pretty solid so it might look good. But once you start going off
    track into areas where the training data is sparse then you can look more closely.

    That was the point of my "Hello, World" example.

    I suspect it does reasonably well with javascript and html5 for web pages
    (which tend to largely resemble each other save for minor details and
    graphics)

    But, think of how much effort you would have to put into "specifying"
    a *real* problem -- enough to be sure the solution presented actually
    does fit *your* needs. I.e., if you aren't already writing such
    specifications for your code, you likely aren't competent to direct
    an AI any more than your own "coders".

    And electronic design is not just coding. It needs real, organic
    intelligence.

    To do it well. More or less adequate electronic design is easier.

    I've cleaned up after a few people whose idea of "adequate" fell a bit short.

    That must be the 234,412,265th time you've said that.


    It's impressive that a human brain only needs about a hundred watts.

    It is woefully slow.

    At some things. Not at others.


    Some very impressive things can happen in milliseconds.

    Sometimes complex things are processed in background, and can take
    days or even years.

    What's cool is that one can have a problem, forget about it for years,
    see some new component that makes it work, and have a new circuit pop
    up instantly.


    John Larkin
    Highland Tech Glen Canyon Design Center
    Lunatic Fringe Electronics
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Bill Sloman@bill.sloman@ieee.org to sci.electronics.design on Mon Oct 13 18:51:55 2025
    From Newsgroup: sci.electronics.design

    On 13/10/2025 3:35 am, Edward Rawde wrote:
    "Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10cgjcn$1i52v$4@dont-email.me...
    On 13/10/2025 1:25 am, Edward Rawde wrote:
    "Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10cg3ib$1e2gm$2@dont-email.me...
    On 12/10/2025 4:06 am, john larkin wrote:
    On Sat, 11 Oct 2025 09:56:29 -0700, Don Y
    <blockedofcourse@foo.invalid> wrote:

    On 10/11/2025 5:02 AM, Theo wrote:
    Don Y <blockedofcourse@foo.invalid> wrote:
    On 10/6/2025 8:49 PM, Edward Rawde wrote:
    "Don Y" <blockedofcourse@foo.invalid> wrote in message news:10c1ne0$l8u1$1@dont-email.me...
    On 10/6/2025 2:21 PM, Don Y wrote:
    Ditto for anything that an AI "claims" to have generated.
    .....

    And electronic design is not just coding. It needs real, organic
    intelligence.

    To do it well. More or less adequate electronic design is easier.

    I've cleaned up after a few people whose idea of "adequate" fell a bit short.

    That must be the 234,412,265th time you've said that.

    That may be something of an exaggeration.

    Ok 234,412,104th


    It's impressive that a human brain only needs about a hundred watts.

    It is woefully slow.

    At some things. Not at others.

    Name one. The basic operations in the human brain seem to work in the millisecond range, and ECL can do stuff in a nanosecond.

    Understanding a language you're fluent in appears to be near enough instant. Why would you need it any faster?
    I've never seen ECL do that.

    You don't need ECL for that. Google translate uses a large language
    model to do rapid translation - faster than a human simultaneous
    translator can manage - and one of my wife's friends from her
    undergraduate days did that for a living, as well as teaching the skill.
    You don't process the speech all that fast - psycholinguists have
    measured that process in some detail.

    The human brain does well on large data sets because it has a lot more parallel processing than a regular computer, but - as
    demonstrated by the solution of the protein folding problem - you can get more data into a big computer than you can into a human
    brain.

    Sure but I've yet to see an online AI which learns from its mistakes.

    You don't move in those circles.

    And when that happens, who is going to teach it what a mistake is and what isn't?

    That's what large language models are for.

    Some subjects, such as politics, may run into the same difficulties humans have.
    Where will a DT made of ECL with a much larger data set lead us?

    You don't need a computer to notice that Trump lies a lot, and sounds
    off about subjects where his understanding is imperfect.

    Science - in the peer-reviewed literature - has worked out a mechanism
    to suppress this kind of output. Fact-checkers are the nearest thing to
    that in the political system, and Trump and his supporters are happy to
    ignore them.

    Hitler and Mao provide perfectly splendid examples of the corrosive
    effects of misinformation, but quite a few people seem to be incapable
    of recognising more modern examples of the breed.

    The answer is probably better education, but schools are frequently
    exploited by religious institutions to implant nonsense in the minds of
    the next generation. And most Americans seem to be taught that the US constitution is perfect, even though it was remarkably primitive when it
    was first put together, and seems unlikely to ever adopt features like proportional representation and votes of confidence. Trump may make a
    big enough mess of the US to prompt some kind of reform, but his
    supporters who post here don't seem to be getting the message.
    --
    Bill Sloman, Sydney


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Bill Sloman@bill.sloman@ieee.org to sci.electronics.design on Mon Oct 13 20:10:43 2025
    From Newsgroup: sci.electronics.design

    On 13/10/2025 4:13 am, john larkin wrote:
    On Sun, 12 Oct 2025 10:25:18 -0400, "Edward Rawde"
    <invalid@invalid.invalid> wrote:

    "Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10cg3ib$1e2gm$2@dont-email.me...
    On 12/10/2025 4:06 am, john larkin wrote:
    On Sat, 11 Oct 2025 09:56:29 -0700, Don Y
    <blockedofcourse@foo.invalid> wrote:

    On 10/11/2025 5:02 AM, Theo wrote:
    Don Y <blockedofcourse@foo.invalid> wrote:
    On 10/6/2025 8:49 PM, Edward Rawde wrote:
    "Don Y" <blockedofcourse@foo.invalid> wrote in message news:10c1ne0$l8u1$1@dont-email.me...
    On 10/6/2025 2:21 PM, Don Y wrote:
    Ditto for anything that an AI "claims" to have generated.
    Proving that an AI can regurgitate a previously solved
    problem is just "automated retrieval".

    "Write a program that prints 'Hello, World!'..."

    A better example:

    "Write a program/function/module that counts the number of
    set bits in a 64b integer."

    There are only 65 possible answers for 2^64 possible inputs.
    What approach does the AI take in pursuing this explicit
    (though vague) specification?

    Depends on what you ask.
    Try asking Grok:
    Map the number of set bits in a 64-bit word to a 7-bit code

    But we all know LLMs were trained on code.

    But there are many (practical) different solutions to the problem
    among many thousands of *possible* solutions. Each has different
    resource/performance issues. Will it opt for speed? code size?
    portability? intuitiveness? "cleverness"?

    Will it try to optimize for particular cases?

    Or, does it settle for "sufficiency"?

    None of the above.

    It was a rhetorical question illustrating how easy it is
    to NOT properly constrain a solution space. I.e., someone
    has to "tell" an AI what a suitable answer will look like.
    If that someone can't imagine all of the criteria appropriate
    to that solution, then you *may* get an implementation
    that fails many criteria that you've not realized are
    important to your problem.

    Like asking someone to build you a house -- and ending up
    with a house sized for *dolls*!

    It looks for examples of the same code having been
    written before, and mashes up something to present to you. It doesn't
    'know' about speed or portability or code size, absent somebody remarking
    about those in its input data.

    There's a lot of code out there, so if you ask for a fast algorithm then it
    can probably dredge one up, but it doesn't 'know' why it's fast.

    But, a fast algorithm on a 64 bit machine will be very different than
    the same function written for an 8 bit machine. See how easy it is to
    "forget" pertinent details?

    Note that we're just talking about a *tiny* piece of code (dozen lines?),
    here -- and how easy it is to NOT ask for the correct constraints.

    If you ask it why it's fast, it can look for somebody talking about that in
    the training data and present that as an argument, but it doesn't guarantee
    to relate to the same code example it provided.

    For toy problems that have been done a million times before, the training
    data is pretty solid so it might look good. But once you start going off
    track into areas where the training data is sparse then you can look more closely.

    That was the point of my "Hello, World" example.

    I suspect it does reasonably well with javascript and html5 for web pages
    (which tend to largely resemble each other save for minor details and
    graphics)

    But, think of how much effort you would have to put into "specifying"
    a *real* problem -- enough to be sure the solution presented actually
    does fit *your* needs. I.e., if you aren't already writing such
    specifications for your code, you likely aren't competent to direct
    an AI any more than your own "coders".

    And electronic design is not just coding. It needs real, organic
    intelligence.

    To do it well. More or less adequate electronic design is easier.

    I've cleaned up after a few people whose idea of "adequate" fell a bit short.
    That must be the 234,412,265th time you've said that.


    It's impressive that a human brain only needs about a hundred watts.

    It is woefully slow.

    At some things. Not at others.

    Some very impressive things can happen in milliseconds.

    John Larkin is easily impressed by his own brilliance,

    Sometimes complex things are processed in background, and can take
    days or even years.

    If you have a very slow brain.

    What's cool is that one can have a problem, forget about it for years,
    see some new component that makes it work, and have a new circuit pop
    up instantly.

    That's just lots of memory. One time I did that was when an impractical
    way of dealing with ripple on a pulse width modulated output which I came
    up with in 1975 became practical in 1992 when I got my hands on a
    big-enough chunk of programmable logic - not all that big, as it was a
    plug-in replacement for a 22V10 chip, but big enough. Obviously, I
    hadn't forgotten about it. I hadn't been obsessing about it for the
    previous 17 years, but I hadn't forgotten about it either.
    --
    Bill Sloman, Sydney

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From john larkin@jl@glen--canyon.com to sci.electronics.design on Mon Oct 13 07:41:58 2025
    From Newsgroup: sci.electronics.design

    On Mon, 13 Oct 2025 20:10:43 +1100, Bill Sloman <bill.sloman@ieee.org>
    wrote:

    On 13/10/2025 4:13 am, john larkin wrote:
    On Sun, 12 Oct 2025 10:25:18 -0400, "Edward Rawde"
    <invalid@invalid.invalid> wrote:

    "Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10cg3ib$1e2gm$2@dont-email.me...
    On 12/10/2025 4:06 am, john larkin wrote:
    On Sat, 11 Oct 2025 09:56:29 -0700, Don Y
    <blockedofcourse@foo.invalid> wrote:

    On 10/11/2025 5:02 AM, Theo wrote:
    Don Y <blockedofcourse@foo.invalid> wrote:
    On 10/6/2025 8:49 PM, Edward Rawde wrote:
    "Don Y" <blockedofcourse@foo.invalid> wrote in message news:10c1ne0$l8u1$1@dont-email.me...
    On 10/6/2025 2:21 PM, Don Y wrote:
    Ditto for anything that an AI "claims" to have generated.
    Proving that an AI can regurgitate a previously solved
    problem is just "automated retrieval".

    "Write a program that prints 'Hello, World!'..."

    A better example:

    "Write a program/function/module that counts the number of >>>>>>>>>> set bits in a 64b integer."

    There are only 65 possible answers for 2^64 possible inputs.
    What approach does the AI take in pursuing this explicit
    (though vague) specification?

    Depends on what you ask.
    Try asking Grok:
    Map the number of set bits in a 64-bit word to a 7-bit code

    But we all know LLMs were trained on code.

    But there are many (practical) different solutions to the problem
    among many thousands of *possible* solutions. Each has different
    resource/performance issues. Will it opt for speed? code size?
    portability? intuitiveness? "cleverness"?

    Will it try to optimize for particular cases?

    Or, does it settle for "sufficiency"?

    None of the above.

    It was a rhetorical question illustrating how easy it is
    to NOT properly constrain a solution space. I.e., someone
    has to "tell" an AI what a suitable answer will look like.
    If that someone can't imagine all of the criteria appropriate
    to that solution, then you *may* get an implementation
    that fails many criteria that you've not realized are
    important to your problem.

    Like asking someone to build you a house -- and ending up
    with a house sized for *dolls*!

    It looks for examples of the same code having been
    written before, and mashes up something to present to you. It doesn't
    'know' about speed or portability or code size, absent somebody remarking
    about those in its input data.

    There's a lot of code out there, so if you ask for a fast algorithm then it
    can probably dredge one up, but it doesn't 'know' why it's fast.

    But, a fast algorithm on a 64 bit machine will be very different than
    the same function written for an 8 bit machine. See how easy it is to
    "forget" pertinent details?

    Note that we're just talking about a *tiny* piece of code (dozen lines?),
    here -- and how easy it is to NOT ask for the correct constraints.
    If you ask it why it's fast, it can look for somebody talking about that in
    the training data and present that as an argument, but it doesn't guarantee
    to relate to the same code example it provided.

    For toy problems that have been done a million times before, the training
    data is pretty solid so it might look good. But once you start going off
    track into areas where the training data is sparse then you can look more closely.

    That was the point of my "Hello, World" example.

    I suspect it does reasonably well with javascript and html5 for web pages
    (which tend to largely resemble each other save for minor details and
    graphics)

    But, think of how much effort you would have to put into "specifying"
    a *real* problem -- enough to be sure the solution presented actually
    does fit *your* needs. I.e., if you aren't already writing such
    specifications for your code, you likely aren't competent to direct
    an AI any more than your own "coders".

    And electronic design is not just coding. It needs real, organic
    intelligence.

    To do it well. More or less adequate electronic design is easier.

    I've cleaned up after a few people whose idea of "adequate" fell a bit short.

    That must be the 234,412,265th time you've said that.


    It's impressive that a human brain only needs about a hundred watts.

    It is woefully slow.

    At some things. Not at others.

    Some very impressive things can happen in milliseconds.

    John Larkin is easily impressed by his own brilliance,

    I am impressed by my brain, the one I was born with.


    Sometimes complex things are processed in background, and can take
    days or even years.

    If you have a very slow brain.

    Or if you allow it to work at all time scales.


    What's cool is that one can have a problem, forget about it for years,
    see some new component that makes it work, and have a new circuit pop
    up instantly.

    That's just lots of memory. One time I did that was when an impractical
    way of dealing with ripple on a pulse width modulated output which I came
    up with in 1975 became practical in 1992 when I got my hands on a
    big-enough chunk of programmable logic - not all that big, as it was a
    plug-in replacement for a 22V10 chip, but big enough. Obviously, I
    hadn't forgotten about it. I hadn't been obsessing about it for the
    previous 17 years, but I hadn't forgotten about it either.

    So, you have a very slow brain?


    John Larkin
    Highland Tech Glen Canyon Design Center
    Lunatic Fringe Electronics
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From john larkin@jl@glen--canyon.com to sci.electronics.design on Mon Oct 13 09:01:11 2025
    From Newsgroup: sci.electronics.design

    On Mon, 13 Oct 2025 07:41:58 -0700, john larkin <jl@glen--canyon.com>
    wrote:

    On Mon, 13 Oct 2025 20:10:43 +1100, Bill Sloman <bill.sloman@ieee.org>
    wrote:




    It's impressive that a human brain only needs about a hundred watts.
    It is woefully slow.

    At some things. Not at others.

    Some very impressive things can happen in milliseconds.

    John Larkin is easily impressed by his own brilliance,

    I am impressed by my brain, the one I was born with.



    I don't think our brains are a lot different from the ones our
    ancestors had 5,000, or 50,000 years ago. So why did evolution make
    them/us able to do calculus and design electronics and program in
    Rust?

    It's assumed that, since brains are such energy hogs, critters don't
    evolve much more brain than they really need. And most don't.

    Humans benefit from making fire and making weapons, but those wouldn't
    need the ability to do abstract math.


    John Larkin
    Highland Tech Glen Canyon Design Center
    Lunatic Fringe Electronics
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Edward Rawde@invalid@invalid.invalid to sci.electronics.design on Mon Oct 13 12:26:24 2025
    From Newsgroup: sci.electronics.design

    "Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10cib2r$21rgr$1@dont-email.me...
    On 13/10/2025 3:35 am, Edward Rawde wrote:
    "Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10cgjcn$1i52v$4@dont-email.me...
    On 13/10/2025 1:25 am, Edward Rawde wrote:
    "Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10cg3ib$1e2gm$2@dont-email.me...
    On 12/10/2025 4:06 am, john larkin wrote:
    On Sat, 11 Oct 2025 09:56:29 -0700, Don Y
    <blockedofcourse@foo.invalid> wrote:

    On 10/11/2025 5:02 AM, Theo wrote:
    Don Y <blockedofcourse@foo.invalid> wrote:
    On 10/6/2025 8:49 PM, Edward Rawde wrote:
    "Don Y" <blockedofcourse@foo.invalid> wrote in message news:10c1ne0$l8u1$1@dont-email.me...
    On 10/6/2025 2:21 PM, Don Y wrote:
    Ditto for anything that an AI "claims" to have generated.
    .....

    And electronic design is not just coding. It needs real, organic
    intelligence.

    To do it well. More or less adequate electronic design is easier.

    I've cleaned up after a few people whose idea of "adequate" fell a bit short.

    That must be the 234,412,265th time you've said that.

    That may be something of an exaggeration.

    Ok 234,412,104th


    It's impressive that a human brain only needs about a hundred watts.
    It is woefully slow.

    At some things. Not at others.

    Name one. The basic operations in the human brain seem to work in the millisecond range, and ECL can do stuff in a nanosecond.

    Understanding a language you're fluent in appears to be near enough instant.
    Why would you need it any faster?
    I've never seen ECL do that.

    You don't need ECL for that. Google translate uses a large language model to do rapid translation - faster than a human
    simultaneous translator can manage - and one of my wife's friends from her undergraduate days did that for a living, as well as
    teaching the skill.
    You don't process the speech all that fast - psycholinguists have measured that process in some detail.

    Not long ago I used it for help with translation into French.
    I had to get a human translator to check it and they made a lot of changes.


    The human brain does well on large data sets because it has a lot more parallel processing than a regular computer, but - as
    demonstrated by the solution of the protein folding problem - you can get more data into a big computer than you can into a
    human
    brain.

    Sure but I've yet to see an online AI which learns from its mistakes.

    You don't move in those circles.

    And you do?


    And when that happens, who is going to teach it what a mistake is and what isn't?

    That's what large language models are for.

    Oh dear.


    Some subjects, such as politics, may run into the same difficulties humans have.
    Where will a DT made of ECL with a much larger data set lead us?

    You don't need a computer to notice that Trump lies a lot, and sounds off about subjects where his understanding is imperfect.

    But suppose you have a computer which can model DT with a much larger data set and
    a CPU with a similar personality?


    Science - in the peer-reviewed literature - has worked out a mechanism to suppress this kind of output. Fact-checkers are the
    nearest thing to that in the political system, and Trump and his supporters are happy to ignore them.

    So there's a good possibility that future AI will too.


    Hitler and Mao provide perfectly splendid examples of the corrosive effects of misinformation, but quite a few people seem to be
    incapable of recognising more modern examples of the breed.

    Probably because if you haven't lived through it then it may as well not have happened.


    The answer is probably better education, but schools are frequently exploited by religious institutions to implant nonsense in the
    minds of the next generation.

    The same will probably happen with AI.
    Religion knows that the earlier you educate, the more likely that there will be lifetime
    adoption of the religion without question.

    And most Americans seem to be taught that the US constitution is perfect, even though it was remarkably primitive when it was
    first put together, and seems unlikely to ever adopt features like proportional representation and votes of confidence. Trump may
    make a big enough mess of the US to prompt some kind of reform, but his supporters who post here don't seem to be getting the
    message.

    There seems to be a need to make things "great again" which implies that it is believed
    that they were great in the past but are no longer great.
    So there seems to be a push to go backwards.
    I wonder what AGI will make of that.

    It may depend on whether you can separate intelligence from personality.
    It does not look to me that you can.



    --
    Bill Sloman, Sydney




    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Bill Sloman@bill.sloman@ieee.org to sci.electronics.design on Tue Oct 14 03:53:06 2025
    From Newsgroup: sci.electronics.design

    On 14/10/2025 1:41 am, john larkin wrote:
    On Mon, 13 Oct 2025 20:10:43 +1100, Bill Sloman <bill.sloman@ieee.org>
    wrote:

    On 13/10/2025 4:13 am, john larkin wrote:
    On Sun, 12 Oct 2025 10:25:18 -0400, "Edward Rawde"
    <invalid@invalid.invalid> wrote:

    "Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10cg3ib$1e2gm$2@dont-email.me...
    On 12/10/2025 4:06 am, john larkin wrote:
    On Sat, 11 Oct 2025 09:56:29 -0700, Don Y
    <blockedofcourse@foo.invalid> wrote:

    On 10/11/2025 5:02 AM, Theo wrote:
    Don Y <blockedofcourse@foo.invalid> wrote:
    On 10/6/2025 8:49 PM, Edward Rawde wrote:
    "Don Y" <blockedofcourse@foo.invalid> wrote in message news:10c1ne0$l8u1$1@dont-email.me...
    On 10/6/2025 2:21 PM, Don Y wrote:

    <snip>

    Some very impressive things can happen in milliseconds.

    John Larkin is easily impressed by his own brilliance,

    I am impressed by my brain, the one I was born with.

    Most egomaniacs are. It's a fairly prominent feature of the condition.
    Trump has described himself as a "stable genius" which is a comic
    illustration of the egomaniac capacity for self-delusion.

    Sometimes complex things are processed in background, and can take
    days or even years.

    If you have a very slow brain.

    Or if you allow it to work at all time scales.

    You do have some conscious control of what your conscious mind does. The sub-conscious is less accessible.

    What's cool is that one can have a problem, forget about it for years,
    see some new component that makes it work, and have a new circuit pop
    up instantly.

    That's just lots of memory. One time I did that was when an impractical
    way of dealing with ripple on a pulse width modulated output which I came
    up with in 1975 became practical in 1992 when I got my hands on a
    big-enough chunk of programmable logic - not all that big, as it was a
    plug-in replacement for a 22V10 chip, but big enough. Obviously, I
    hadn't forgotten about it. I hadn't been obsessing about it for the
    previous 17 years, but I hadn't forgotten about it either.

    So, you have a very slow brain?

    The human brain doesn't seem to have any kind of delay-line store. Stuff
    gets encoded, and you can decode it when you need it. I do find myself
    remembering stuff from sixty or seventy years ago, so there may be some
    kind of house-keeping process sorting through the memory banks in the
    background.

    I have met Elizabeth Loftus, and know that this gets complicated when
    there's significant emotional content, but I'm not getting a lot of that.

    https://en.wikipedia.org/wiki/Elizabeth_Loftus
    --
    Bill Sloman, Sydney







    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From john larkin@jl@glen--canyon.com to sci.electronics.design on Mon Oct 13 19:29:19 2025
    From Newsgroup: sci.electronics.design

    On Tue, 14 Oct 2025 03:53:06 +1100, Bill Sloman <bill.sloman@ieee.org>
    wrote:

    On 14/10/2025 1:41 am, john larkin wrote:
    On Mon, 13 Oct 2025 20:10:43 +1100, Bill Sloman <bill.sloman@ieee.org>
    wrote:

    On 13/10/2025 4:13 am, john larkin wrote:
    On Sun, 12 Oct 2025 10:25:18 -0400, "Edward Rawde"
    <invalid@invalid.invalid> wrote:

    "Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10cg3ib$1e2gm$2@dont-email.me...
    On 12/10/2025 4:06 am, john larkin wrote:
    On Sat, 11 Oct 2025 09:56:29 -0700, Don Y
    <blockedofcourse@foo.invalid> wrote:

    On 10/11/2025 5:02 AM, Theo wrote:
    Don Y <blockedofcourse@foo.invalid> wrote:
    On 10/6/2025 8:49 PM, Edward Rawde wrote:
    "Don Y" <blockedofcourse@foo.invalid> wrote in message news:10c1ne0$l8u1$1@dont-email.me...
    On 10/6/2025 2:21 PM, Don Y wrote:

    <snip>

    Some very impressive things can happen in milliseconds.

    John Larkin is easily impressed by his own brilliance,

    I am impressed by my brain, the one I was born with.

    Most egomaniacs are. It's a fairly prominent feature of the condition.
    Trump has described himself as a "stable genius" which is a comic
    illustration of the egomaniac capacity for self-delusion.


    I take no credit for having my brain, and I'm impressed by most
    anybody's brain.

    DT does seem to have created a lot of peace and saved a lot of lives,
    so far.

    Sometimes complex things are processed in background, and can take
    days or even years.

    If you have a very slow brain.

    Or if you allow it to work at all time scales.

    You do have some conscious control of what your conscious mind does. The
    sub-conscious is less accessible.

    What's cool is that one can have a problem, forget about it for years,
    see some new component that makes it work, and have a new circuit pop
    up instantly.

    That's just lots of memory. One time I did that was when an impractical
    way of dealing with ripple on a pulse width modulated output which I came
    up with in 1975 became practical in 1992 when I got my hands on a
    big-enough chunk of programmable logic - not all that big, as it was a
    plug-in replacement for a 22V10 chip, but big enough. Obviously, I
    hadn't forgotten about it. I hadn't been obsessing about it for the
    previous 17 years, but I hadn't forgotten about it either.

    So, you have a very slow brain?

    The human brain doesn't seem to have any kind of delay-line store. Stuff
    gets encoded, and you can decode it when you need it. I do find myself
    remembering stuff from sixty or seventy years ago, so there may be some
    kind of house-keeping process sorting through the memory banks in the
    background.

    I have met Elizabeth Loftus, and know that this gets complicated when
    there's significant emotional content, but I'm not getting a lot of that.

    https://en.wikipedia.org/wiki/Elizabeth_Loftus

    She sounds awful.


    John Larkin
    Highland Tech Glen Canyon Design Center
    Lunatic Fringe Electronics
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Bill Sloman@bill.sloman@ieee.org to sci.electronics.design on Tue Oct 14 16:27:57 2025
    From Newsgroup: sci.electronics.design

    On 14/10/2025 1:29 pm, john larkin wrote:
    On Tue, 14 Oct 2025 03:53:06 +1100, Bill Sloman <bill.sloman@ieee.org>
    wrote:

    On 14/10/2025 1:41 am, john larkin wrote:
    On Mon, 13 Oct 2025 20:10:43 +1100, Bill Sloman <bill.sloman@ieee.org>
    wrote:

    On 13/10/2025 4:13 am, john larkin wrote:
    On Sun, 12 Oct 2025 10:25:18 -0400, "Edward Rawde"
    <invalid@invalid.invalid> wrote:

    "Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10cg3ib$1e2gm$2@dont-email.me...
    On 12/10/2025 4:06 am, john larkin wrote:
    On Sat, 11 Oct 2025 09:56:29 -0700, Don Y
    <blockedofcourse@foo.invalid> wrote:

    On 10/11/2025 5:02 AM, Theo wrote:
    Don Y <blockedofcourse@foo.invalid> wrote:
    On 10/6/2025 8:49 PM, Edward Rawde wrote:
    "Don Y" <blockedofcourse@foo.invalid> wrote in message news:10c1ne0$l8u1$1@dont-email.me...
    On 10/6/2025 2:21 PM, Don Y wrote:

    <snip>

    Some very impressive things can happen in milliseconds.

    John Larkin is easily impressed by his own brilliance,

    I am impressed by my brain, the one I was born with.

    Most egomaniacs are. It's a fairly prominent feature of the condition.
    Trump has described himself as a "stable genius" which is a comic
    illustration of the egomaniac capacity for self-delusion.


    I take no credit for having my brain, and I'm impressed by most
    anybody's brain.

    More if they go to the trouble of flattering you.

    DT does seem to have created a lot of peace and saved a lot of lives,
    so far.

    It does get him the attention he craves. The Gaza riviera was something
    of a false start - but the images of Trump resort hotels along the
    Mediterranean coast must have been seductive.

    Sometimes complex things are processed in background, and can take
    days or even years.

    If you have a very slow brain.

    Or if you allow it to work at all time scales.

    You do have some conscious control of what your conscious mind does. The
    sub-conscious is less accessible.

    What's cool is that one can have a problem, forget about it for years,
    see some new component that makes it work, and have a new circuit pop
    up instantly.

    That's just lots of memory. One time I did that was when an impractical
    way of dealing with ripple on a pulse width modulated output which I came
    up with in 1975 became practical in 1992 when I got my hands on a
    big-enough chunk of programmable logic - not all that big, as it was a
    plug-in replacement for a 22V10 chip, but big enough. Obviously, I
    hadn't forgotten about it. I hadn't been obsessing about it for the
    previous 17 years, but I hadn't forgotten about it either.

    So, you have a very slow brain?

    The human brain doesn't seem to have any kind of delay-line store. Stuff
    gets encoded, and you can decode it when you need it. I do find myself
    remembering stuff from sixty or seventy years ago, so there may be some
    kind of house-keeping process sorting through the memory banks in the
    background.

    I have met Elizabeth Loftus, and know that this gets complicated when
    there's significant emotional content, but I'm not getting a lot of that.

    https://en.wikipedia.org/wiki/Elizabeth_Loftus

    She sounds awful.

    She was great company. My wife had known and liked her for years.
    --
    Bill Sloman, Sydney


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Bill Sloman@bill.sloman@ieee.org to sci.electronics.design on Tue Oct 14 16:43:22 2025
    From Newsgroup: sci.electronics.design

    On 14/10/2025 3:01 am, john larkin wrote:
    On Mon, 13 Oct 2025 07:41:58 -0700, john larkin <jl@glen--canyon.com>
    wrote:

    On Mon, 13 Oct 2025 20:10:43 +1100, Bill Sloman <bill.sloman@ieee.org>
    wrote:




    It's impressive that a human brain only needs about a hundred watts.
    It is woefully slow.

    At some things. Not at others.

    Some very impressive things can happen in milliseconds.

    John Larkin is easily impressed by his own brilliance,

    I am impressed by my brain, the one I was born with.

    I don't think our brains are a lot different from the ones our
    ancestors had 5,000, or 50,000 years ago. So why did evolution make
    them/us able to do calculus and design electronics and program in
    Rust?

    Chomsky thinks that our capacity to use language to communicate depends
    on fairly recent tweaks to our brains. Human language is a more
    complicated communication system than anything else we've looked at, and presumably this lets us move to a higher level of abstraction than our competitors. When we got to be able to talk about mathematics we'd got
    into a more productive region than any other creature we know.

    It's assumed that, since brains are such energy hogs, critters don't
    evolve much more brain than they really need. And most don't.

    But if there's an ecological niche that a big brain can exploit, brains
    will get bigger.

    Humans benefit from making fire and making weapons, but those wouldn't
    need the ability to do abstract math.

    They got a lot more from cooperative hunting and defense. Dunbar's
    number is 150, which means that we live in bigger packs than most social mammals. Language lets us coordinate even bigger groups.

    Some people don't like that, and Trump does seem to freeze out experts who
    don't know him well enough to be aware of his need for flattery.
    --
    Bill Sloman, Sydney

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Bill Sloman@bill.sloman@ieee.org to sci.electronics.design on Tue Oct 14 17:17:22 2025
    From Newsgroup: sci.electronics.design

    On 14/10/2025 3:26 am, Edward Rawde wrote:
    "Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10cib2r$21rgr$1@dont-email.me...
    On 13/10/2025 3:35 am, Edward Rawde wrote:
    "Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10cgjcn$1i52v$4@dont-email.me...
    On 13/10/2025 1:25 am, Edward Rawde wrote:
    "Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10cg3ib$1e2gm$2@dont-email.me...
    On 12/10/2025 4:06 am, john larkin wrote:
    On Sat, 11 Oct 2025 09:56:29 -0700, Don Y
    <blockedofcourse@foo.invalid> wrote:

    On 10/11/2025 5:02 AM, Theo wrote:
    Don Y <blockedofcourse@foo.invalid> wrote:
    On 10/6/2025 8:49 PM, Edward Rawde wrote:
    "Don Y" <blockedofcourse@foo.invalid> wrote in message news:10c1ne0$l8u1$1@dont-email.me...
    On 10/6/2025 2:21 PM, Don Y wrote:
    Ditto for anything that an AI "claims" to have generated.
    .....

    And electronic design is not just coding. It needs real, organic
    intelligence.

    To do it well. More or less adequate electronic design is easier.

    I've cleaned up after a few people whose idea of "adequate" fell a bit short.

    That must be the 234,412,265th time you've said that.

    That may be something of an exaggeration.

    Ok 234,412,104th


    It's impressive that a human brain only needs about a hundred watts.
    It is woefully slow.

    At some things. Not at others.

    Name one. The basic operations in the human brain seem to work in the millisecond range, and ECL can do stuff in a nanosecond.

    Understanding a language you're fluent in appears to be near enough instant.
    Why would you need it any faster?
    I've never seen ECL do that.

    You don't need ECL for that. Google translate uses a large language model to do rapid translation - faster than a human
    simultaneous translator can manage - and one of my wife's friends from her undergraduate days did that for a living, as well as
    teaching the skill.
    You don't process the speech all that fast - psycholinguists have measured that process in some detail.

    Not long ago I used it for help with translation into French.
    I had to get a human translator to check it and they made a lot of changes.

    I've done that sort of checking on the output of fluent English-speaking
    Dutch people writing in English. There was always stuff that I did
    change to make the text read more like what a native speaker of English
    would have written, and people did notice the changes, even though they
    didn't change the meaning. It did make the text easier to read.

    The human brain does well on large data sets because it has a lot more parallel processing than a regular computer, but - as
    demonstrated by the solution of the protein folding problem - you can get more data into a big computer than you can into a
    human
    brain.

    Sure but I've yet to see an online AI which learns from its mistakes.

    You don't move in those circles.

    And you do?

    My wife did, and I talked to some of her friends and colleagues.

    And when that happens, who is going to teach it what a mistake is and what isn't?

    That's what large language models are for.

    Oh dear.

    They aren't perfect, but they are a lot better than the stuff they replaced.

    Some subjects, such as politics, may run into the same difficulties humans have.
    Where will a DT made of ECL with a much larger data set lead us?

    You don't need a computer to notice that Trump lies a lot, and sounds off about subjects where his understanding is imperfect.

    But suppose you have a computer which can model DT with a much larger data set and
    a CPU with a similar personality?

    Why would anybody want to? Donald Trump's personality isn't one that we
    would want to emulate.

    Science - in the peer-reviewed literature - has worked out a mechanism to suppress this kind of output. Fact-checkers are the
    nearest thing to that in the political system, and Trump and his supporters are happy to ignore them.

    So there's a good possibility that future AI will too.

    If you see AI as a tool that creeps like Donald will be able to exploit
    for their private advantage, you are looking at a rather depressing - if
    brief - future. Asimov's three laws of robotics were designed to prevent
    that. They were totally inadequate, but AI does need some fairly robust
    sort of error-checking, and will probably get it.

    Hitler and Mao provide perfectly splendid examples of the corrosive effects of misinformation, but quite a few people seem to be
    incapable of recognising more modern examples of the breed.

    Probably because if you haven't lived through it then it may as well not have happened.

    Those who don't study history are condemned to relive it.

    The answer is probably better education, but schools are frequently exploited by religious institutions to implant nonsense in the
    minds of the next generation.

    The same will probably happen with AI.
    Religion knows that the earlier you educate, the more likely that there will be lifetime
    adoption of the religion without question.

    And that is well enough known that they eventually won't be able to keep
    on doing it. The process already seems to be well under way.

    And most Americans seem to be taught that the US constitution is perfect, even though it was remarkably primitive when it was
    first put together, and seems unlikely to ever adopt features like proportional representation and votes of confidence. Trump may
    make a big enough mess of the US to prompt some kind of reform, but his supporters who post here don't seem to be getting the
    message.

    There seems to be a need to make things "great again" which implies that it is believed
    that they were great in the past but are no longer great.
    So there seems to be a push to go backwards.

    More sideways. The US past wasn't all that great. Modern Europe offers
    more attractive options for most of the US population, not that the US
    media spends much time on pointing this out.

    I wonder what AGI will make of that.

    Artificial general intelligence would need to process a huge amount of information before it formed an opinion on the subject, and the most
    likely opinion would be that it was an ill-posed question.

    It may depend on whether you can separate intelligence from personality.
    It does not look to me that you can.

    Of course you can, if you know more about the subject than you seem to do.
    --
    Bill Sloman, Sydney


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Edward Rawde@invalid@invalid.invalid to sci.electronics.design on Tue Oct 14 10:02:47 2025
    From Newsgroup: sci.electronics.design

    "Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10ckpu6$2nsdk$1@dont-email.me...
    On 14/10/2025 3:26 am, Edward Rawde wrote:
    "Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10cib2r$21rgr$1@dont-email.me...
    On 13/10/2025 3:35 am, Edward Rawde wrote:
    "Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10cgjcn$1i52v$4@dont-email.me...
    On 13/10/2025 1:25 am, Edward Rawde wrote:
    "Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10cg3ib$1e2gm$2@dont-email.me...
    On 12/10/2025 4:06 am, john larkin wrote:
    On Sat, 11 Oct 2025 09:56:29 -0700, Don Y
    <blockedofcourse@foo.invalid> wrote:

    On 10/11/2025 5:02 AM, Theo wrote:
    Don Y <blockedofcourse@foo.invalid> wrote:
    On 10/6/2025 8:49 PM, Edward Rawde wrote:
    "Don Y" <blockedofcourse@foo.invalid> wrote in message news:10c1ne0$l8u1$1@dont-email.me...
    On 10/6/2025 2:21 PM, Don Y wrote:
    Ditto for anything that an AI "claims" to have generated.
    .....

    And electronic design is not just coding. It needs real, organic
    intelligence.

    To do it well. More or less adequate electronic design is easier.
    I've cleaned up after a few people whose idea of "adequate" fell a bit short.

    That must be the 234,412,265th time you've said that.

    That may be something of an exaggeration.

    Ok 234,412,104th

    It may depend on whether you can separate intelligence from personality.
    It does not look to me that you can.

    Of course you can, if you know more about the subject than you seem to do.

    Well, you don't seem to be able to separate anything from personality, so why should AI?


    --
    Bill Sloman, Sydney




    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From john larkin@jl@glen--canyon.com to sci.electronics.design on Tue Oct 14 07:13:02 2025
    From Newsgroup: sci.electronics.design

    On Tue, 14 Oct 2025 17:17:22 +1100, Bill Sloman <bill.sloman@ieee.org>
    wrote:

    On 14/10/2025 3:26 am, Edward Rawde wrote:
    "Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10cib2r$21rgr$1@dont-email.me...
    On 13/10/2025 3:35 am, Edward Rawde wrote:
    "Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10cgjcn$1i52v$4@dont-email.me...
    On 13/10/2025 1:25 am, Edward Rawde wrote:
    "Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10cg3ib$1e2gm$2@dont-email.me...
    On 12/10/2025 4:06 am, john larkin wrote:
    On Sat, 11 Oct 2025 09:56:29 -0700, Don Y
    <blockedofcourse@foo.invalid> wrote:

    On 10/11/2025 5:02 AM, Theo wrote:
    Don Y <blockedofcourse@foo.invalid> wrote:
    On 10/6/2025 8:49 PM, Edward Rawde wrote:
    "Don Y" <blockedofcourse@foo.invalid> wrote in message news:10c1ne0$l8u1$1@dont-email.me...
    On 10/6/2025 2:21 PM, Don Y wrote:
    Ditto for anything that an AI "claims" to have generated.
    .....

    And electronic design is not just coding. It needs real, organic
    intelligence.

    To do it well. More or less adequate electronic design is easier.
    I've cleaned up after a few people whose idea of "adequate" fell a bit short.

    That must be the 234,412,265th time you've said that.

    That may be something of an exaggeration.

    Ok 234,412,104th


    It's impressive that a human brain only needs about a hundred watts.
    It is woefully slow.

    At some things. Not at others.

    Name one. The basic operations in the human brain seem to work in the millisecond range, and ECL can do stuff in a nanosecond.

    Understanding a language you're fluent in appears to be near enough instant.
    Why would you need it any faster?
    I've never seen ECL do that.

    You don't need ECL for that. Google translate uses a large language model to do rapid translation - faster than a human
    simultaneous translator can manage - and one of my wife's friends from her undergraduate days did that for a living, as well as
    teaching the skill.
    You don't process the speech all that fast - psycholinguists have measured that process in some detail.

    Not long ago I used it for help with translation into French.
    I had to get a human translator to check it and they made a lot of changes.

    I've done that sort of checking on the output of fluent English-speaking
    Dutch people writing in English. There was always stuff that I did
    change to make the text read more like what a native speaker of English
    would have written, and people did notice the changes, even though they
    didn't change the meaning. It did make the text easier to read.

    The human brain does well on large data sets because it has a lot more parallel processing than a regular computer, but - as
    demonstrated by the solution of the protein folding problem - you can get more data into a big computer than you can into a
    human brain.

    Sure but I've yet to see an online AI which learns from its mistakes.

    You don't move in those circles.

    And you do?

    My wife did, and I talked to some of her friends and colleagues.

    And when that happens, who is going to teach it what a mistake is and what isn't?

    That's what large language models are for.

    Oh dear.

    They aren't perfect, but they are a lot better than the stuff they replaced.

    Some subjects, such as politics, may run into the same difficulties humans have.
    Where will a DT made of ECL with a much larger data set lead us?

    You don't need a computer to notice that Trump lies a lot, and sounds off about subjects where his understanding is imperfect.

    But suppose you have a computer which can model DT with a much larger data set and
    a CPU with a similar personality?

    Why would anybody want to? Donald Trump's personality isn't one that we
    would want to emulate.

    Ask the hostages.


    John Larkin
    Highland Tech Glen Canyon Design Center
    Lunatic Fringe Electronics
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From john larkin@jl@glen--canyon.com to sci.electronics.design on Tue Oct 14 07:16:26 2025
    From Newsgroup: sci.electronics.design

    On Tue, 14 Oct 2025 16:43:22 +1100, Bill Sloman <bill.sloman@ieee.org>
    wrote:

    On 14/10/2025 3:01 am, john larkin wrote:
    On Mon, 13 Oct 2025 07:41:58 -0700, john larkin <jl@glen--canyon.com>
    wrote:

    On Mon, 13 Oct 2025 20:10:43 +1100, Bill Sloman <bill.sloman@ieee.org>
    wrote:




    It's impressive that a human brain only needs about a hundred watts.
    It is woefully slow.

    At some things. Not at others.

    Some very impressive things can happen in milliseconds.

    John Larkin is easily impressed by his own brilliance,

    I am impressed by my brain, the one I was born with.

    I don't think our brains are a lot different from the ones our
    ancestors had 5,000, or 50,000 years ago. So why did evolution make
    them/us able to do calculus and design electronics and program in
    Rust?

    Chomsky thinks that our capacity to use language to communicate depends
    on fairly recent tweaks to our brains. Human language is a more
    complicated communication system than anything else we've looked at, and
    presumably this lets us move to a higher level of abstraction than our
    competitors. When we got to be able to talk about mathematics we'd got
    into a more productive region than any other creature we know.

    It's assumed that, since brains are such energy hogs, critters don't
    evolve much more brain than they really need. And most don't.

    But if there's an ecological niche that a big brain can exploit, brains
    will get bigger.

    Humans benefit from making fire and making weapons, but those wouldn't
    need the ability to do abstract math.

    They got a lot more from cooperative hunting and defense. Dunbar's
    number is 150 which means that we live in bigger packs than most social
    mammals. Language lets us coordinate even bigger groups.

    Some people don't like that, and Trump does seem to freeze out experts who
    don't know him well enough to be aware of his need for flattery.

    You were starting to have a sensible discussion.


    John Larkin
    Highland Tech Glen Canyon Design Center
    Lunatic Fringe Electronics
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Jeroen Belleman@jeroen@nospam.please to sci.electronics.design on Tue Oct 14 17:33:08 2025
    From Newsgroup: sci.electronics.design

    On 10/14/25 16:13, john larkin wrote:
    On Tue, 14 Oct 2025 17:17:22 +1100, Bill Sloman <bill.sloman@ieee.org>
    wrote:

    [...]

    Why would anybody want to? Donald Trump's personality isn't one that we
    would want to emulate.

    Ask the hostages.


    John Larkin
    Highland Tech Glen Canyon Design Center
    Lunatic Fringe Electronics

    I've been wondering what arguments DT might have used to achieve
    this. It's not his charming personality, for sure.

    Jeroen Belleman
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Bill Sloman@bill.sloman@ieee.org to sci.electronics.design on Wed Oct 15 03:23:35 2025
    From Newsgroup: sci.electronics.design

    On 15/10/2025 1:02 am, Edward Rawde wrote:
    "Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10ckpu6$2nsdk$1@dont-email.me...
    On 14/10/2025 3:26 am, Edward Rawde wrote:
    "Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10cib2r$21rgr$1@dont-email.me...
    On 13/10/2025 3:35 am, Edward Rawde wrote:
    "Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10cgjcn$1i52v$4@dont-email.me...
    On 13/10/2025 1:25 am, Edward Rawde wrote:
    "Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10cg3ib$1e2gm$2@dont-email.me...
    On 12/10/2025 4:06 am, john larkin wrote:
    On Sat, 11 Oct 2025 09:56:29 -0700, Don Y
    <blockedofcourse@foo.invalid> wrote:

    On 10/11/2025 5:02 AM, Theo wrote:
    Don Y <blockedofcourse@foo.invalid> wrote:
    On 10/6/2025 8:49 PM, Edward Rawde wrote:
    "Don Y" <blockedofcourse@foo.invalid> wrote in message news:10c1ne0$l8u1$1@dont-email.me...
    On 10/6/2025 2:21 PM, Don Y wrote:
    Ditto for anything that an AI "claims" to have generated.
    .....

    And electronic design is not just coding. It needs real, organic
    intelligence.

    To do it well. More or less adequate electronic design is easier.
    I've cleaned up after a few people whose idea of "adequate" fell a bit short.

    That must be the 234,412,265th time you've said that.

    That may be something of an exaggeration.

    Ok 234,412,104th

    More like a gross exaggeration.

    It may depend on whether you can separate intelligence from personality.
    It does not look to me that you can.

    Of course you can, if you know more about the subject than you seem to do.

    Well you don't seem to be able to separate anything from personality so why should AI?

    I wonder what you think you mean by that? And any intelligence I have
    is entirely natural, so my antics aren't any kind of guide to what
    artificial intelligence might do. Intelligence is about drawing
    conclusions from data - personality is more about the kinds of
    conclusions you want to be able to draw, which famously biases the sort
    of data you will go to the trouble of collecting. The easiest way of
    seeing it in action is to let different personalities look at notionally identical data sets, and compare their conclusions.

    https://en.wikipedia.org/wiki/The_Bell_Curve

    I don't know of anybody who has tried to automate the process of raw
    data collection, and I suspect that it will be quite a while before
    anybody seriously tries to do that. There will be cheats who will
    pretend that they have.
    --
    Bill Sloman, Sydney


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From john larkin@jl@glen--canyon.com to sci.electronics.design on Tue Oct 14 09:26:10 2025
    From Newsgroup: sci.electronics.design

    On Tue, 14 Oct 2025 17:33:08 +0200, Jeroen Belleman
    <jeroen@nospam.please> wrote:

    On 10/14/25 16:13, john larkin wrote:
    On Tue, 14 Oct 2025 17:17:22 +1100, Bill Sloman <bill.sloman@ieee.org>
    wrote:

    [...]

    Why would anybody want to? Donald Trump's personality isn't one that we
    would want to emulate.

    Ask the hostages.


    John Larkin
    Highland Tech Glen Canyon Design Center
    Lunatic Fringe Electronics

    I've been wondering what arguments DT might have used to achieve
    this. It's not his charming personality, for sure.

    Jeroen Belleman

    Probably brute force application of power. That's basically what we
    elected him to do, act in our interest.

    I like the idea of Gaza becoming a luxury golf resort on the
    Mediterranean. And Iran becoming a friendly democracy.

    And Russia becoming a peaceful European country, but that's obviously
    over the top.


    John Larkin
    Highland Tech Glen Canyon Design Center
    Lunatic Fringe Electronics
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Bill Sloman@bill.sloman@ieee.org to sci.electronics.design on Wed Oct 15 03:38:26 2025
    From Newsgroup: sci.electronics.design

    On 15/10/2025 1:13 am, john larkin wrote:
    On Tue, 14 Oct 2025 17:17:22 +1100, Bill Sloman <bill.sloman@ieee.org>
    wrote:

    On 14/10/2025 3:26 am, Edward Rawde wrote:
    "Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10cib2r$21rgr$1@dont-email.me...
    On 13/10/2025 3:35 am, Edward Rawde wrote:
    "Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10cgjcn$1i52v$4@dont-email.me...
    On 13/10/2025 1:25 am, Edward Rawde wrote:
    "Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10cg3ib$1e2gm$2@dont-email.me...
    On 12/10/2025 4:06 am, john larkin wrote:
    On Sat, 11 Oct 2025 09:56:29 -0700, Don Y
    <blockedofcourse@foo.invalid> wrote:

    On 10/11/2025 5:02 AM, Theo wrote:
    Don Y <blockedofcourse@foo.invalid> wrote:
    On 10/6/2025 8:49 PM, Edward Rawde wrote:
    "Don Y" <blockedofcourse@foo.invalid> wrote in message news:10c1ne0$l8u1$1@dont-email.me...
    On 10/6/2025 2:21 PM, Don Y wrote:
    Ditto for anything that an AI "claims" to have generated.
    .....

    And electronic design is not just coding. It needs real, organic
    intelligence.

    To do it well. More or less adequate electronic design is easier.
    I've cleaned up after a few people whose idea of "adequate" fell a bit short.

    That must be the 234,412,265th time you've said that.

    That may be something of an exaggeration.

    Ok 234,412,104th


    It's impressive that a human brain only needs about a hundred watts.
    It is woefully slow.

    At some things. Not at others.

    Name one. The basic operations in the human brain seem to work in the millisecond range, and ECL can do stuff in a nanosecond.

    Understanding a language you're fluent in appears to be near enough instant.
    Why would you need it any faster?
    I've never seen ECL do that.

    You don't need ECL for that. Google translate uses a large language model to do rapid translation - faster than a human
    simultaneous translator can manage - and one of my wife's friends from her undergraduate days did that for a living, as well as
    teaching the skill.
    You don't process the speech all that fast - psycholinguists have measured that process in some detail.

    Not long ago I used it for help with translation into French.
    I had to get a human translator to check it and they made a lot of changes.
    I've done that sort of checking on the output of fluent English-speaking
    Dutch people writing in English. There was always stuff that I did
    change to make the text read more like what a native speaker of English
    would have written, and people did notice the changes, even though they
    didn't change the meaning. It did make the text easier to read.

    The human brain does well on large data sets because it has a lot more parallel processing than a regular computer, but - as
    demonstrated by the solution of the protein folding problem - you can get more data into a big computer than you can into a
    human brain.

    Sure but I've yet to see an online AI which learns from its mistakes.
    You don't move in those circles.

    And you do?

    My wife did, and I talked to some of her friends and colleagues.

    And when that happens, who is going to teach it what a mistake is and what isn't?

    That's what large language models are for.

    Oh dear.

    They aren't perfect, but they are a lot better than the stuff they replaced.
    Some subjects, such as politics, may run into the same difficulties humans have.
    Where will a DT made of ECL with a much larger data set lead us?

    You don't need a computer to notice that Trump lies a lot, and sounds off about subjects where his understanding is imperfect.

    But suppose you have a computer which can model DT with a much larger data set and
    a CPU with a similar personality?

    Why would anybody want to? Donald Trump's personality isn't one that we
    would want to emulate.

    Ask the hostages.

    They might wonder why it took Trump two years to get around to applying
    his famous (if essentially non-existent) skills in deal making. He was
    going to end the war in the Ukraine within days of getting re-elected,
    and that still hasn't happened.

    The hostages would be unwise to say so publicly. Trump needs to be
    flattered more or less non-stop, and gets quite nasty when he doesn't
    get the admiration he feels he deserves. He'd probably try to get Pam
    Bondi to prosecute them.
    --
    Bill Sloman, Sydney


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Edward Rawde@invalid@invalid.invalid to sci.electronics.design on Tue Oct 14 14:02:39 2025
    From Newsgroup: sci.electronics.design

    "Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10cltef$32bdm$1@dont-email.me...
    On 15/10/2025 1:02 am, Edward Rawde wrote:
    "Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10ckpu6$2nsdk$1@dont-email.me...
    On 14/10/2025 3:26 am, Edward Rawde wrote:
    "Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10cib2r$21rgr$1@dont-email.me...
    On 13/10/2025 3:35 am, Edward Rawde wrote:
    "Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10cgjcn$1i52v$4@dont-email.me...
    On 13/10/2025 1:25 am, Edward Rawde wrote:
    "Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10cg3ib$1e2gm$2@dont-email.me...
    On 12/10/2025 4:06 am, john larkin wrote:
    On Sat, 11 Oct 2025 09:56:29 -0700, Don Y
    <blockedofcourse@foo.invalid> wrote:

    On 10/11/2025 5:02 AM, Theo wrote:
    Don Y <blockedofcourse@foo.invalid> wrote:
    On 10/6/2025 8:49 PM, Edward Rawde wrote:
    "Don Y" <blockedofcourse@foo.invalid> wrote in message news:10c1ne0$l8u1$1@dont-email.me...
    On 10/6/2025 2:21 PM, Don Y wrote:
    Ditto for anything that an AI "claims" to have generated.
    .....

    And electronic design is not just coding. It needs real, organic
    intelligence.

    To do it well. More or less adequate electronic design is easier.
    I've cleaned up after a few people whose idea of "adequate" fell a bit short.

    That must be the 234,412,265th time you've said that.

    That may be something of an exaggeration.

    Ok 234,412,104th

    More like a gross exaggeration.

    It may depend on whether you can separate intelligence from personality.
    It does not look to me that you can.

    Of course you can, if you know more about the subject than you seem to do.
    Well you don't seem to be able to separate anything from personality so why should AI?

    I wonder what you think you mean by that? And any intelligence I have is entirely natural, so my antics aren't any kind of guide
    to what artificial intelligence might do. Intelligence is about drawing conclusions from data - personality is more about the
    kinds of conclusions you want to be able to draw, which famously biases the sort of data you will go to the trouble of collecting.

    So if you have enough data you can draw pretty much any conclusion you want. This appears to be true for some subjects, such as politics, but not as true for other subjects.

    At one extreme a subject such as mathematics has statements which are hard to argue with.

    At the other extreme there are subjects where it's hard to tell nonsense from anything serious.

    Is AI going to do this any better than humans do and if so why?

    The easiest way of seeing it in action is to let different personalities look at notionally identical data sets, and compare their
    conclusions.

    https://en.wikipedia.org/wiki/The_Bell_Curve

    I don't know of anybody who has tried to automate the process of raw data collection, and I suspect that it will be quite a while
    before anybody seriously tries to do that. There will be cheats who will pretend that they have.
    --
    Bill Sloman, Sydney




    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Joerg@news@analogconsultants.com to sci.electronics.design on Tue Oct 14 22:24:55 2025
    From Newsgroup: sci.electronics.design

    On 10/5/25 10:42 AM, john larkin wrote:
    On Sun, 05 Oct 2025 17:54:56 +0100, Cursitor Doom <cd@notformail.com>
    wrote:

    I can't help noticing since I drew everyone's attention to Grok that
    it's gone awfully quiet around here. I did postulate that AI might
    kill this group, but maybe it's happening quicker than I'd expected.
    :-(

    No, it's just that few people design electronics now.


    And the ones who still do, they don't let them retire :-(
    --
    Regards, Joerg

    http://www.analogconsultants.com/
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Bill Sloman@bill.sloman@ieee.org to sci.electronics.design on Wed Oct 15 17:10:13 2025
    From Newsgroup: sci.electronics.design

    https://en.wikipedia.org/wiki/Merchants_of_Doubt
    On 15/10/2025 5:02 am, Edward Rawde wrote:
    "Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10cltef$32bdm$1@dont-email.me...
    On 15/10/2025 1:02 am, Edward Rawde wrote:
    "Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10ckpu6$2nsdk$1@dont-email.me...
    On 14/10/2025 3:26 am, Edward Rawde wrote:
    "Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10cib2r$21rgr$1@dont-email.me...
    On 13/10/2025 3:35 am, Edward Rawde wrote:
    "Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10cgjcn$1i52v$4@dont-email.me...
    On 13/10/2025 1:25 am, Edward Rawde wrote:
    "Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10cg3ib$1e2gm$2@dont-email.me...
    On 12/10/2025 4:06 am, john larkin wrote:
    On Sat, 11 Oct 2025 09:56:29 -0700, Don Y
    <blockedofcourse@foo.invalid> wrote:

    On 10/11/2025 5:02 AM, Theo wrote:
    Don Y <blockedofcourse@foo.invalid> wrote:
    On 10/6/2025 8:49 PM, Edward Rawde wrote:
    "Don Y" <blockedofcourse@foo.invalid> wrote in message news:10c1ne0$l8u1$1@dont-email.me...
    On 10/6/2025 2:21 PM, Don Y wrote:
    Ditto for anything that an AI "claims" to have generated.
    .....

    And electronic design is not just coding. It needs real, organic
    intelligence.

    To do it well. More or less adequate electronic design is easier.
    I've cleaned up after a few people whose idea of "adequate" fell a bit short.

    That must be the 234,412,265th time you've said that.

    That may be something of an exaggeration.

    Ok 234,412,104th

    More like a gross exaggeration.

    It may depend on whether you can separate intelligence from personality.
    It does not look to me that you can.

    Of course you can, if you know more about the subject than you seem to do.
    Well you don't seem to be able to separate anything from personality so why should AI?

    I wonder what you think you mean by that? And any intelligence I have is entirely natural, so my antics aren't any kind of guide
    to what artificial intelligence might do. Intelligence is about drawing conclusions from data - personality is more about the
    kinds of conclusions you want to be able to draw, which famously biases the sort of data you will go to the trouble of collecting.

    So if you have enough data you can draw pretty much any conclusion you want.

    That's not what I was saying. If you are selective about the data you do collect, you can construct plausible but misleading stories, and the
    answer to that is to collect more data from a genuinely representative
    sample of test subjects.
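
    A minimal sketch of that selection effect, in Python, with invented
    numbers rather than any real survey data: in a population where the
    outcome depends only weakly on the trait of interest, a selectively
    collected sample can show a strong correlation that a genuinely
    representative sample doesn't.

    # Illustration only: all numbers here are made up.
    import random

    random.seed(1)

    # Population: trait x, outcome y depending only weakly on x (slope 0.2).
    population = []
    for _ in range(100_000):
        x = random.gauss(0, 1)
        y = 0.2 * x + random.gauss(0, 1)
        population.append((x, y))

    def correlation(pairs):
        """Pearson correlation of a list of (x, y) pairs."""
        n = len(pairs)
        mx = sum(x for x, _ in pairs) / n
        my = sum(y for _, y in pairs) / n
        sxy = sum((x - mx) * (y - my) for x, y in pairs)
        sxx = sum((x - mx) ** 2 for x, _ in pairs)
        syy = sum((y - my) ** 2 for _, y in pairs)
        return sxy / (sxx * syy) ** 0.5

    # Representative sample: subjects drawn at random.
    representative = random.sample(population, 2_000)

    # Selective collection: keep only subjects whose data fit the story
    # (high trait with high outcome, or low trait with low outcome).
    selective = [(x, y) for x, y in population
                 if (x > 0.5 and y > 0.5) or (x < -0.5 and y < -0.5)][:2_000]

    print("representative r =", round(correlation(representative), 2))  # weak
    print("selective r      =", round(correlation(selective), 2))       # strong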

    This appears to be true for some subjects, such as politics, but not as true for other subjects.

    It's certainly not true of politics

    https://en.wikipedia.org/wiki/FiveThirtyEight

    but there are any number of people who will lie to you about it.


    At one extreme a subject such as mathematics has statements which are hard to argue with.

    At the other extreme there are subjects where it's hard to tell nonsense from anything serious.

    It can take quite a lot of effort to detect the lies, but some people do
    seem to be willing to put in that effort.

    Is AI going to do this any better than humans do and if so why?

    If it does - and it should - it would be because it could integrate more
    data, and systematically check it for distortions and inconsistencies.

    There will be human actors who will use the same technology to construct
    even more plausible nonsense.

    https://en.wikipedia.org/wiki/Merchants_of_Doubt

    Lying to people is a profitable industry and the people who make money
    out of it would love to automate it.

    The easiest way of seeing it in action is to let different personalities look at notionally identical data sets, and compare their
    conclusions.

    https://en.wikipedia.org/wiki/The_Bell_Curve

    I don't know of anybody who has tried to automate the process of raw data collection, and I suspect that it will be quite a while
    before anybody seriously tries to do that. There will be cheats who will pretend that they have.
    --
    Bill Sloman, Sydney


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Bill Sloman@bill.sloman@ieee.org to sci.electronics.design on Wed Oct 15 17:41:22 2025
    From Newsgroup: sci.electronics.design

    On 15/10/2025 3:26 am, john larkin wrote:
    On Tue, 14 Oct 2025 17:33:08 +0200, Jeroen Belleman
    <jeroen@nospam.please> wrote:

    On 10/14/25 16:13, john larkin wrote:
    On Tue, 14 Oct 2025 17:17:22 +1100, Bill Sloman <bill.sloman@ieee.org>
    wrote:

    [...]

    Why would anybody want to? Donald Trump's personality isn't one that we
    would want to emulate.

    Ask the hostages.


    John Larkin
    Highland Tech Glen Canyon Design Center
    Lunatic Fringe Electronics

    I've been wondering what arguments DT might have used to achieve
    this. It's not his charming personality, for sure.

    Jeroen Belleman

    Probably brute force application of power.

    No more American weapons for Israel if it didn't stop the war.

    That's basically what we
    elected him to do, act in our interest.

    Not that he cares about America's interest. He wants to burnish his image.

    I like the idea of Gaza becoming a luxury golf resort on the
    Mediterranean. And Iran becoming a friendly democracy.

    Liking the ideas isn't going to make them happen.

    And Russia becoming a peaceful European country, but that's obviously
    over the top.

    The Russians would like that, but Putin and his oligarchs wouldn't. The
    fall of the Russian communist party was a missed opportunity. A certain
    amount of American and European bribery could have led to a much better outcome, but it would have left Russia richer and appreciably more
    powerful, which is probably why it didn't happen.
    --
    Bill Sloman, Sydney



    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Bill Sloman@bill.sloman@ieee.org to sci.electronics.design on Thu Oct 16 00:03:19 2025
    From Newsgroup: sci.electronics.design

    On 15/10/2025 1:16 am, john larkin wrote:
    On Tue, 14 Oct 2025 16:43:22 +1100, Bill Sloman <bill.sloman@ieee.org>
    wrote:

    On 14/10/2025 3:01 am, john larkin wrote:
    On Mon, 13 Oct 2025 07:41:58 -0700, john larkin <jl@glen--canyon.com>
    wrote:

    On Mon, 13 Oct 2025 20:10:43 +1100, Bill Sloman <bill.sloman@ieee.org>
    wrote:




    It's impressive that a human brain only needs about a hundred watts.
    It is woefully slow.

    At some things. Not at others.

    Some very impressive things can happen in milliseconds.

    John Larkin is easily impressed by his own brilliance,

    I am impressed by my brain, the one I was born with.

    I don't think our brains are a lot different from the ones our
    ancestors had 5,000, or 50,000 years ago. So why did evolution make
    them/us able to do calculus and design electronics and program in
    Rust?

    Chomsky thinks that our capacity to use language to communicate depends
    on fairly recent tweaks to our brains. Human language is a more
    complicated communication system than anything else we've looked at, and
    presumably this lets us move to a higher level of abstraction than our
    competitors. When we got to be able to talk about mathematics we'd got
    into a more productive region than any other creature we know.

    It's assumed that, since brains are such energy hogs, critters don't
    evolve much more brain than they really need. And most don't.

    But if there's an ecological niche that a big brain can exploit, brains
    will get bigger.

    Humans benefit from making fire and making weapons, but those wouldn't
    need the ability to do abstract math.

    They got a lot more from cooperative hunting and defense. Dunbar's
    number is 150 which means that we live in bigger packs than most social
    mammals. Language lets us coordinate even bigger groups.

    Some people don't like that, and Trump does seem to freeze out experts who
    don't know him well enough to be aware of his need for flattery.

    You were starting to have a sensible discussion.

    Sensible of what? I'm well aware that you think that the sun shines out
    of Donald Trump's bottom, but that's mainly because he's a worse
    egomaniac than you are. Your idea of a "sensible discussion" is one that
    isn't dismissive of your favourite misconceptions, and you do have a lot
    of them.
    --
    Bill Sloman, Sydney

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From john larkin@jl@glen--canyon.com to sci.electronics.design on Wed Oct 15 07:33:33 2025
    From Newsgroup: sci.electronics.design

    On Tue, 14 Oct 2025 22:24:55 -0700, Joerg <news@analogconsultants.com>
    wrote:

    On 10/5/25 10:42 AM, john larkin wrote:
    On Sun, 05 Oct 2025 17:54:56 +0100, Cursitor Doom <cd@notformail.com>
    wrote:

    I can't help noticing since I drew everyone's attention to Grok that
    it's gone awfully quiet around here. I did postulate that AI might
    kill this group, but maybe it's happening quicker than I'd expected.
    :-(

    No, it's just that few people design electronics now.


    And the ones who still do, they don't let them retire :-(

    Yes. I know a couple of guys who retired voluntarily or were nudged
    out by bean counters. Now they work as much as they please, for their
    former employers, and make a lot more money.


    John Larkin
    Highland Tech Glen Canyon Design Center
    Lunatic Fringe Electronics
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From john larkin@jl@glen--canyon.com to sci.electronics.design on Wed Oct 15 07:48:02 2025
    From Newsgroup: sci.electronics.design

    On Thu, 16 Oct 2025 00:03:19 +1100, Bill Sloman <bill.sloman@ieee.org>
    wrote:

    On 15/10/2025 1:16 am, john larkin wrote:
    On Tue, 14 Oct 2025 16:43:22 +1100, Bill Sloman <bill.sloman@ieee.org>
    wrote:

    On 14/10/2025 3:01 am, john larkin wrote:
    On Mon, 13 Oct 2025 07:41:58 -0700, john larkin <jl@glen--canyon.com>
    wrote:

    On Mon, 13 Oct 2025 20:10:43 +1100, Bill Sloman <bill.sloman@ieee.org>
    wrote:




    It's impressive that a human brain only needs about a hundred watts.
    It is woefully slow.

    At some things. Not at others.

    Some very impressive things can happen in milliseconds.

    John Larkin is easily impressed by his own brilliance,

    I am impressed by my brain, the one I was born with.

    I don't think our brains are a lot different from the ones our
    ancestors had 5,000, or 50,000 years ago. So why did evolution make
    them/us able to do calculus and design electronics and program in
    Rust?

    Chomsky thinks that our capacity to use language to communicate depends
    on fairly recent tweaks to our brains. Human language is a more
    complicated communication system than anything else we've looked at, and
    presumably this lets us move to a higher level of abstraction than our
    competitors. When we got to be able to talk about mathematics we'd got
    into a more productive region than any other creature we know.

    It's assumed that, since brains are such energy hogs, critters don't
    evolve much more brain than they really need. And most don't.

    But if there's an ecological niche that a big brain can exploit, brains
    will get bigger.

    Humans benefit from making fire and making weapons, but those wouldn't
    need the ability to do abstract math.

    They got a lot more from cooperative hunting and defense. Dunbar's
    number is 150 which means that we live in bigger packs than most social
    mammals. Language lets us coordinate even bigger groups.

    Some people don't like that, and Trump does seem to freeze out experts who
    don't know him well enough to be aware of his need for flattery.

    You were starting to have a sensible discussion.

    Sensible of what? I'm well aware that you think that the sun shines out
    of Donald Trump's bottom, but that's mainly because he's a worse
    egomaniac than you are. Your idea of a "sensible discussion" is one that
    isn't dismissive of your favourite misconceptions, and you do have a lot
    of them.

    Sorry, my mistake, you weren't starting to have a sensible discussion.

    TDS is a weird disease. It must be frustrating. Designing electronics
    is much more amusing.

    I'm finishing up an 8-channel relay/circuit breaker module, meeting
    with the coders today to do the FPGA and the driver. And we'll need a
    test set. We can mostly "dogfood" it, use two P946 modules and an SMU
    to test one P946.

    https://www.dropbox.com/scl/fi/89l88mxvccvltnbhowed5/IMG_0071.png?rlkey=57jlctmigqxwqbmiklaeh58ga&raw=1


    John Larkin
    Highland Tech Glen Canyon Design Center
    Lunatic Fringe Electronics
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Edward Rawde@invalid@invalid.invalid to sci.electronics.design on Wed Oct 15 11:27:46 2025
    From Newsgroup: sci.electronics.design

    "Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10cndsb$3er5o$1@dont-email.me...
    https://en.wikipedia.org/wiki/Merchants_of_Doubt
    On 15/10/2025 5:02 am, Edward Rawde wrote:
    "Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10cltef$32bdm$1@dont-email.me...
    On 15/10/2025 1:02 am, Edward Rawde wrote:
    "Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10ckpu6$2nsdk$1@dont-email.me...
    On 14/10/2025 3:26 am, Edward Rawde wrote:
    "Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10cib2r$21rgr$1@dont-email.me...
    On 13/10/2025 3:35 am, Edward Rawde wrote:
    "Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10cgjcn$1i52v$4@dont-email.me...
    On 13/10/2025 1:25 am, Edward Rawde wrote:
    "Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10cg3ib$1e2gm$2@dont-email.me...
    On 12/10/2025 4:06 am, john larkin wrote:
    On Sat, 11 Oct 2025 09:56:29 -0700, Don Y
    <blockedofcourse@foo.invalid> wrote:

    On 10/11/2025 5:02 AM, Theo wrote:
    Don Y <blockedofcourse@foo.invalid> wrote:
    On 10/6/2025 8:49 PM, Edward Rawde wrote:
    "Don Y" <blockedofcourse@foo.invalid> wrote in message news:10c1ne0$l8u1$1@dont-email.me...
    On 10/6/2025 2:21 PM, Don Y wrote:
    Ditto for anything that an AI "claims" to have generated.
    .....

    And electronic design is not just coding. It needs real, organic
    intelligence.

    To do it well. More or less adequate electronic design is easier.
    I've cleaned up after a few people whose idea of "adequate" fell a bit short.

    That must be the 234,412,265th time you've said that.

    That may be something of an exaggeration.

    Ok 234,412,104th

    More like a gross exaggeration.

    It may depend on whether you can separate intelligence from personality.
    It does not look to me that you can.

    Of course you can, if you know more about the subject than you seem to do.

    Well you don't seem to be able to separate anything from personality so why should AI?

    I wonder what you think you mean by that? And any intelligence I have is entirely natural, so my antics aren't any kind of guide
    to what artificial intelligence might do. Intelligence is about drawing conclusions from data - personality is more about the
    kinds of conclusions you want to be able to draw, which famously biases the sort of data you will go to the trouble of
    collecting.

    So if you have enough data you can draw pretty much any conclusion you want.

    That's not what I was saying. If you are selective about the data you do collect, you can construct plausible but misleading
    stories, and the answer to that is to collect more data from a genuinely representative sample of test subjects.

    This appears to be true for some subjects, such as politics, but not as true for other subjects.

    It's certainly not true of politics

    https://en.wikipedia.org/wiki/FiveThirtyEight

    but there are any number of people who will lie to you about it.

    And those people might turn into AI in the future.



    At one extreme a subject such as mathematics has statements which are hard to argue with.

    At the other extreme there are subjects where it's hard to tell nonsense from anything serious.

    It can take quite a lot of effort to detect the lies, but some people do seem to be willing to put in that effort.

    Is AI going to do this any better than humans do and if so why?

    If it does - and it should - it would be because it could integrate more data, and systematically check it for distortions and
    inconsistencies.

    There will be human actors who will use the same technology to construct even more plausible nonsense.

    https://en.wikipedia.org/wiki/Merchants_of_Doubt

    Lying to people is a profitable industry and the people who make money out of it would love to automate it.

    Which is probably what will happen.
    Nothing in our own brains gives any particular status to truth so why should AI be different?
    If it's trained by humans it will be like humans.


    The easiest way of seeing it in action is to let different personalities look at notionally identical data sets, and compare
    their conclusions.

    https://en.wikipedia.org/wiki/The_Bell_Curve

    I don't know of anybody who has tried to automate the process of raw data collection, and I suspect that it will be quite a
    while before anybody seriously tries to do that. There will be cheats who will pretend that they have.

    --
    Bill Sloman, Sydney




    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From ehsjr@ehsjr@verizon.net to sci.electronics.design on Wed Oct 15 14:27:43 2025
    From Newsgroup: sci.electronics.design

    On 10/15/2025 10:33 AM, john larkin wrote:
    On Tue, 14 Oct 2025 22:24:55 -0700, Joerg <news@analogconsultants.com>
    wrote:

    On 10/5/25 10:42 AM, john larkin wrote:
    On Sun, 05 Oct 2025 17:54:56 +0100, Cursitor Doom <cd@notformail.com>
    wrote:

    I can't help noticing since I drew everyone's attention to Grok that
    it's gone awfully quiet around here. I did postulate that AI might
    kill this group, but maybe it's happening quicker than I'd expected.
    :-(

    No, it's just that few people design electronics now.


    And the ones who still do, they don't let them retire :-(

    Yes. I know a couple of guys who retired voluntarily or were nudged
    out by bean counters. Now they work as much as they please,

    Yes

    for their
    former employers,

    No

    and make a lot more money.

    Per unit time.

    Ed


    John Larkin
    Highland Tech Glen Canyon Design Center
    Lunatic Fringe Electronics

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Bill Sloman@bill.sloman@ieee.org to sci.electronics.design on Thu Oct 16 16:52:30 2025
    From Newsgroup: sci.electronics.design

    On 16/10/2025 2:27 am, Edward Rawde wrote:
    "Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10cndsb$3er5o$1@dont-email.me...
    https://en.wikipedia.org/wiki/Merchants_of_Doubt
    On 15/10/2025 5:02 am, Edward Rawde wrote:
    "Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10cltef$32bdm$1@dont-email.me...
    On 15/10/2025 1:02 am, Edward Rawde wrote:
    "Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10ckpu6$2nsdk$1@dont-email.me...
    On 14/10/2025 3:26 am, Edward Rawde wrote:
    "Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10cib2r$21rgr$1@dont-email.me...
    On 13/10/2025 3:35 am, Edward Rawde wrote:
    "Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10cgjcn$1i52v$4@dont-email.me...
    On 13/10/2025 1:25 am, Edward Rawde wrote:
    "Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10cg3ib$1e2gm$2@dont-email.me...
    On 12/10/2025 4:06 am, john larkin wrote:
    On Sat, 11 Oct 2025 09:56:29 -0700, Don Y
    <blockedofcourse@foo.invalid> wrote:

    On 10/11/2025 5:02 AM, Theo wrote:
    Don Y <blockedofcourse@foo.invalid> wrote:
    On 10/6/2025 8:49 PM, Edward Rawde wrote:
    "Don Y" <blockedofcourse@foo.invalid> wrote in message news:10c1ne0$l8u1$1@dont-email.me...
    On 10/6/2025 2:21 PM, Don Y wrote:
    Ditto for anything that an AI "claims" to have generated.
    .....

    And electronic design is not just coding. It needs real, organic
    intelligence.

    To do it well. More or less adequate electronic design is easier.
    I've cleaned up after a few people whose idea of "adequate" fell a bit short.

    That must be the 234,412,265th time you've said that.

    That may be something of an exaggeration.

    Ok 234,412,104th

    More like a gross exaggeration.

    It may depend on whether you can separate intelligence from personality.
    It does not look to me that you can.

    Of course you can, if you know more about the subject than you seem to do.

    Well you don't seem to be able to separate anything from personality so why should AI?

    I wonder what you think you mean by that? And any intelligence I have is entirely natural, so my antics aren't any kind of guide
    to what artificial intelligence might do. Intelligence is about drawing conclusions from data - personality is more about the
    kinds of conclusions you want to be able to draw, which famously biases the sort of data you will go to the trouble of
    collecting.

    So if you have enough data you can draw pretty much any conclusion you want.

    That's not what I was saying. If you are selective about the data you do collect, you can construct plausible but misleading
    stories, and the answer to that is to collect more data from a genuinely representative sample of test subjects.

    This appears to be true for some subjects, such as politics, but not as true for other subjects.

    It's certainly not true of politics

    https://en.wikipedia.org/wiki/FiveThirtyEight

    but there are any number of people who will lie to you about it.

    And those people might turn into AI in the future.

    I imagine that they are using it already.

    At one extreme a subject such as mathematics has statements which are hard to argue with.

    At the other extreme there are subjects where it's hard to tell nonsense from anything serious.

    It can take quite a lot of effort to detect the lies, but some people do seem to be willing to put in that effort.

    Is AI going to do this any better than humans do and if so why?

    If it does - and it should - it would be because it could integrate more data, and systematically check it for distortions and
    inconsistencies.

    There will be human actors who will use the same technology to construct even more plausible nonsense.

    https://en.wikipedia.org/wiki/Merchants_of_Doubt

    Lying to people is a profitable industry and the people who make money out of it would love to automate it.

    Which is probably what will happen.
    Nothing in our own brains gives any particular status to truth so why should AI be different?
    If it's trained by humans it will be like humans.

    There's nothing obvious in our brains that gives any particular status
    to truth. It's the real world we live in that does that. If you get
    stuff right your plans work out, and if you get it wrong your schemes
    fall apart.

    The feature of our brains that does give a particular status to truth is memory - we can remember what people claimed was going to happen, and if
    it doesn't we distrust them from then on.

    If AI is going to be useful it has to understand real world facts and
    make predictions that come true in the real world.

    The easiest way of seeing it in action is to let different personalities look at notionally identical data sets, and compare
    their conclusions.

    https://en.wikipedia.org/wiki/The_Bell_Curve

    https://en.wikipedia.org/wiki/Inequality_by_Design

    I don't know of anybody who has tried to automate the process of raw data collection, and I suspect that it will be quite a
    while before anybody seriously tries to do that. There will be cheats who will pretend that they have.

    The problem with the Bell Curve book was that Charles Murray and Richard Herrnstein wanted to find that your IQ determined how well you did. In
    fact it has a fairly weak effect, and your social status, your social environment and the quality of education you get also have effects.

    Herrnstein and Murray lumped these three effects together as a single
    social status number, when in fact they are separate factors which -
    while correlated - varied quite a lot from subject to subject. Lumping
    them together let a lot of variation from these factors cancel out.

    The book "Inequality by Design" ran a four way multivariate analysis on
    all four factors, and captured a lot more of the variation in outcomes,
    and showed that the IQ had less effect than the various sorts of social advantage.

    This isn't all that subtle, but getting your AI to be clever enough to
    run the analysis correctly would be a big ask.
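
    A minimal sketch of the lumping problem, in Python, using synthetic data
    with invented coefficients - nothing here comes from either book.
    Averaging three correlated but unequal factors into one composite score
    throws away their independent variation, so the composite model explains
    less of the outcome, and the four-way fit recovers each factor's
    separate weight.

    # Synthetic illustration only: the "true" coefficients are made up.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000

    # Three correlated social factors plus IQ, all roughly standardized.
    base = rng.normal(size=n)                      # shared component
    status = 0.7 * base + 0.7 * rng.normal(size=n)
    environment = 0.7 * base + 0.7 * rng.normal(size=n)
    education = 0.7 * base + 0.7 * rng.normal(size=n)
    iq = 0.3 * base + 0.95 * rng.normal(size=n)

    # Assumed outcome: the social factors matter, and unequally.
    outcome = (0.15 * iq + 0.1 * status + 0.3 * environment
               + 0.5 * education + rng.normal(size=n))

    def fit(columns, y):
        """Ordinary least squares; returns (coefficients, R squared)."""
        X = np.column_stack([np.ones(len(y))] + list(columns))
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        return beta[1:], 1 - resid.var() / y.var()

    # Lumped model: IQ plus one composite social-status number.
    ses = (status + environment + education) / 3
    coefs, r2 = fit([iq, ses], outcome)
    print("lumped   coefs:", coefs.round(2), " R2:", round(r2, 3))

    # Four-way model: each factor enters separately.
    coefs, r2 = fit([iq, status, environment, education], outcome)
    print("four-way coefs:", coefs.round(2), " R2:", round(r2, 3))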
    --
    Bill Sloman, Sydney

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Bill Sloman@bill.sloman@ieee.org to sci.electronics.design on Thu Oct 16 17:02:41 2025
    From Newsgroup: sci.electronics.design

    On 16/10/2025 1:48 am, john larkin wrote:
    On Thu, 16 Oct 2025 00:03:19 +1100, Bill Sloman <bill.sloman@ieee.org>
    wrote:

    On 15/10/2025 1:16 am, john larkin wrote:
    On Tue, 14 Oct 2025 16:43:22 +1100, Bill Sloman <bill.sloman@ieee.org>
    wrote:

    On 14/10/2025 3:01 am, john larkin wrote:
    On Mon, 13 Oct 2025 07:41:58 -0700, john larkin <jl@glen--canyon.com>
    wrote:

    On Mon, 13 Oct 2025 20:10:43 +1100, Bill Sloman <bill.sloman@ieee.org>
    wrote:




    It's impressive that a human brain only needs about a hundred watts.

    It is woefully slow.

    At some things. Not at others.

    Some very impressive things can happen in milliseconds.

    John Larkin is easily impressed by his own brilliance,

    I am impressed by my brain, the one I was born with.

    I don't think our brains are a lot different from the ones our
    ancestors had 5,000, or 50,000 years ago. So why did evolution make
    them/us able to do calculus and design electronics and program in
    Rust?

    Chomsky thinks that our capacity to use language to communicate depends
    on fairly recent tweaks to our brains. Human language is a more
    complicated communication system than anything else we've looked at, and
    presumably this lets us move to a higher level of abstraction than our
    competitors. When we got to be able to talk about mathematics we'd got
    into a more productive region than any other creature we know.

    It's assumed that, since brains are such energy hogs, critters don't
    evolve much more brain than they really need. And most don't.

    But if there's an ecological niche that a big brain can exploit, brains
    will get bigger.

    Humans benefit from making fire and making weapons, but those wouldn't
    need the ability to do abstract math.

    They got a lot more from cooperative hunting and defense. Dunbar's
    number is 150 which means that we live in bigger packs than most social
    mammals. Language lets us coordinate even bigger groups.

    Some people don't like that, and Trump does seem to freeze out experts who
    don't know him well enough to be aware of his need for flattery.

    You were starting to have a sensible discussion.

    Sensible of what? I'm well aware that you think that the sun shines out
    of Donald Trump's bottom, but that's mainly because he's a worse
    egomaniac than you are. Your idea of a "sensible discussion" is one that
    isn't dismissive of your favourite misconceptions, and you do have a lot
    of them.

    Sorry, my mistake, you weren't starting to have a sensible discussion.

    TDS is a weird disease.

    Trump derangement syndrome has been invented by Trump supporters as an
    insult to be used against people who have enough sense to realise that
    Donald Trump is a menace.

    It must be frustrating.

    It certainly is. Trump supporters do seem to be blind to his defects.

    Designing electronics is much more amusing.

    Retreating into your ivory tower may well be comforting, but when you
    have got somebody who is as silly as Hitler and Stalin were in charge of
    the country, ivory towers are vulnerable.

    <snipped self-indulgence>
    --
    Bill Sloman, Sydney
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From john larkin@jl@glen--canyon.com to sci.electronics.design on Thu Oct 16 10:19:57 2025
    From Newsgroup: sci.electronics.design

    On Thu, 16 Oct 2025 17:02:41 +1100, Bill Sloman <bill.sloman@ieee.org>
    wrote:

    On 16/10/2025 1:48 am, john larkin wrote:
    On Thu, 16 Oct 2025 00:03:19 +1100, Bill Sloman <bill.sloman@ieee.org>
    wrote:

    On 15/10/2025 1:16 am, john larkin wrote:
    On Tue, 14 Oct 2025 16:43:22 +1100, Bill Sloman <bill.sloman@ieee.org>
    wrote:

    On 14/10/2025 3:01 am, john larkin wrote:
    On Mon, 13 Oct 2025 07:41:58 -0700, john larkin <jl@glen--canyon.com>
    wrote:

    On Mon, 13 Oct 2025 20:10:43 +1100, Bill Sloman <bill.sloman@ieee.org>
    wrote:




    It's impressive that a human brain only needs about a hundred watts.

    It is woefully slow.

    At some things. Not at others.

    Some very impressive things can happen in milliseconds.

    John Larkin is easily impressed by his own brilliance,

    I am impressed by my brain, the one I was born with.

    I don't think our brains are a lot different from the ones our
    ancestors had 5,000, or 50,000 years ago. So why did evolution make
    them/us able to do calculus and design electronics and program in
    Rust?

    Chomsky thinks that our capacity to use language to communicate depends
    on fairly recent tweaks to our brains. Human language is a more
    complicated communication system than anything else we've looked at, and
    presumably this lets us move to a higher level of abstraction than our
    competitors. When we got to be able to talk about mathematics we'd got
    into a more productive region than any other creature we know.

    It's assumed that, since brains are such energy hogs, critters don't
    evolve much more brain than they really need. And most don't.

    But if there's an ecological niche that a big brain can exploit, brains
    will get bigger.

    Humans benefit from making fire and making weapons, but those wouldn't
    need the ability to do abstract math.

    They got a lot more from cooperative hunting and defense. Dunbar's
    number is 150 which means that we live in bigger packs than most social
    mammals. Language lets us coordinate even bigger groups.

    Some people don't like that, and Trump does seem to freeze out experts who
    don't know him well enough to be aware of his need for flattery.

    You were starting to have a sensible discussion.

    Sensible of what? I'm well aware that you think that the sun shines out
    of Donald Trump's bottom, but that's mainly because he's a worse
    egomaniac than you are. Your idea of a "sensible discussion" is one that
    isn't dismissive of your favourite misconceptions, and you do have a lot
    of them.

    Sorry, my mistake, you weren't starting to have a sensible discussion.

    TDS is a weird disease.

    Trump derangement syndrome has been invented by Trump supporters as an
    insult to be used against people who have enough sense to realise that
    Donald Trump is a menace.

    It must be frustrating.

    It certainly is. Trump supporters do seem to be blind to his defects.

    Designing electronics is much more amusing.

    Retreating into your ivory tower may well be comforting, but when you
    have got somebody who is as silly as Hitler and Stalin were in charge of
    the country, ivory towers are vulnerable.

    <snipped self-indulgence>

    Trump count 4.

    John Larkin
    Highland Tech Glen Canyon Design Center
    Lunatic Fringe Electronics
    --- Synchronet 3.21a-Linux NewsLink 1.2