I can't help noticing since I drew everyone's attention to Grok that
it's gone awfully quiet around here. I did postulate that AI might
kill this group, but maybe it's happening quicker than I'd expected.
:-(
I see no sign of AI which can learn to design electronics but electronics isn't like playing Go.
What's likely to happen at present is that many young people will insist that what Grok says must be correct.
On 10/5/2025 12:42 PM, Edward Rawde wrote:
I see no sign of AI which can learn to design electronics but electronics isn't like playing Go.
AI is already being used to design electronics.
Not *yet* to the point where
you can give a general specification without many details -- but, that will come.
What's likely to happen at present is that many young people will insist that what Grok says must be correct.
The bigger fear is that OLDER people will defer to AIs -- out of concern for their positions.
Imagine an AI telling a doctor that a patient likely has a cancer
(or other malady). Doctor can see no evidence of this.
Yet, is savvy enough to realize that if the patient DOES have a cancer
and he has ignored the advice of his "learned companion" ("Ladies and gentlemen of the jury..."), *he* will be on the hook for the malpractice claim.
So, the safe bet is to just accept the diagnosis of the AI -- even if
it is incorrect.
It's easy to see how similar claims can be made about other complex
systems ("The airliner will suffer a catastrophic structural failure...").
If challenging an "authority" only results in downside risk for the challenger, then what incentive to make said challenge?
"Cursitor Doom" <cd@notformail.com> wrote in message news:6k85ekhb58ummrpfsg8scf61l1d8adcbr9@4ax.com...
I can't help noticing since I drew everyone's attention to Grok that
it's gone awfully quiet around here. I did postulate that AI might
kill this group, but maybe it's happening quicker than I'd expected.
:-(
LLMs are clearly not useful for solving electronic circuit design problems.
And neither is this group if young newcomers are indirectly referred to as newbie dimwits.
(Or something like that, I forget the exact words.)
AI which can learn from its mistakes clearly exists, otherwise how did AlphaGo learn?
I see no sign of AI which can learn to design electronics but electronics isn't like playing Go.
What's likely to happen at present is that many young people will insist that what Grok says must be correct.
On Sun, 05 Oct 2025 17:54:56 +0100, Cursitor Doom <cd@notformail.com>
wrote:
I can't help noticing since I drew everyone's attention to Grok that
it's gone awfully quiet around here. I did postulate that AI might
kill this group, but maybe it's happening quicker than I'd expected.
:-(
No, it's just that few people design electronics now.
On Sun, 05 Oct 2025 17:54:56 +0100, Cursitor Doom <cd@notformail.com>
wrote:
I can't help noticing since I drew everyone's attention to Grok that
it's gone awfully quiet around here. I did postulate that AI might
kill this group, but maybe it's happening quicker than I'd expected.
:-(
No, it's just that few people design electronics now.
John Larkin
Highland Tech Glen Canyon Design Center
Lunatic Fringe Electronics
john larkin <jl@glen--canyon.com> wrote:
On Sun, 05 Oct 2025 17:54:56 +0100, Cursitor Doom <cd@notformail.com>
wrote:
I can't help noticing since I drew everyone's attention to Grok that
it's gone awfully quiet around here. I did postulate that AI might
kill this group, but maybe it's happening quicker than I'd expected.
:-(
No, it's just that few people design electronics now.
Simon and I are planning to submit a patent application this week on the topic of high performance temperature control. Once it's done, we could discuss it here if folks are interested.
Cheers
Phil Hobbs
"Don Y" <blockedofcourse@foo.invalid> wrote in message news:10bulvc$3p33v$1@dont-email.me...
On 10/5/2025 12:42 PM, Edward Rawde wrote:
I see no sign of AI which can learn to design electronics but electronics isn't like playing Go.
AI is already being used to design electronics.
Where can I find an AI designer I can test?
On Sun, 5 Oct 2025 17:28:35 -0400, "Edward Rawde"
<invalid@invalid.invalid> wrote:
"Don Y" <blockedofcourse@foo.invalid> wrote in message news:10bulvc$3p33v$1@dont-email.me...
On 10/5/2025 12:42 PM, Edward Rawde wrote:
I see no sign of AI which can learn to design electronics but electronics isn't like playing Go.
AI is already being used to design electronics.
Where can I find an AI designer I can test?
Flux.ai
John Larkin
Highland Tech Glen Canyon Design Center
Lunatic Fringe Electronics
"Don Y" <blockedofcourse@foo.invalid> wrote in message news:10bulvc$3p33v$1@dont-email.me...
On 10/5/2025 12:42 PM, Edward Rawde wrote:
I see no sign of AI which can learn to design electronics but electronics isn't like playing Go.
AI is already being used to design electronics.
Where can I find an AI designer I can test?
Not *yet* to the point where
you can give a general specification without many details -- but, that will come.
What's likely to happen at present is that many young people will insist that what Grok says must be correct.
The bigger fear is that OLDER people will defer to AIs -- out of concern for their positions.
Imagine an AI telling a doctor that a patient likely has a cancer
(or other malady). Doctor can see no evidence of this.
So the doctor should do more tests.
I'm no cancer expert but I would hope there is a test or two which can confirm
or deny any type of cancer.
Yet, is savvy enough to realize that if the patient DOES have a cancer
and he has ignored the advice of his "learned companion" ("Ladies and
gentlemen of the jury..."), *he* will be on the hook for the malpractice
claim.
Not if all the relevant tests say no cancer.
You likely don't want cancer treatment for cancer you don't have.
So, the safe bet is to just accept the diagnosis of the AI -- even if
it is incorrect.
It's easy to see how similar claims can be made about other complex
systems ("The airliner will suffer a catastrophic structural failure...").
I doubt Boeing used AI.
If challenging an "authority" only results in downside risk for the
challenger, then what incentive to make said challenge?
That's always been a risk.
I can think of at least one manager who wanted to get rid of me for pointing out
issues with the project when he wanted to tell managers above him that everything
was wonderful.
If challenging an "authority" only results in downside risk for the
challenger, then what incentive to make said challenge?
That's always been a risk.
I can think of at least one manager who wanted to get rid of me for pointing out
issues with the project when he wanted to tell managers above him that
everything
was wonderful.
Sure. When a project manager "announced" that our team of *50* would
be done in 4 weeks, I told him "You're fucked" (there was no other term
that could adequately describe how far off his assessment was!). I
then queried the various people in the room as to the efforts that *I* knew lay ahead of them.
He complained to the department head. That didn't change the reality. ("Don, could you be a bit more diplomatic?")
"Cursitor Doom" <cd@notformail.com> wrote in message news:6k85ekhb58ummrpfsg8scf61l1d8adcbr9@4ax.com...
I can't help noticing since I drew everyone's attention to Grok that
it's gone awfully quiet around here. I did postulate that AI might
kill this group, but maybe it's happening quicker than I'd expected.
:-(
LLMs are clearly not useful for solving electronic circuit design problems.
And neither is this group if young newcomers are indirectly referred to as newbie dimwits.
(Or something like that, I forget the exact words.)
AI which can learn from its mistakes clearly exists, otherwise how did AlphaGo learn?
I see no sign of AI which can learn to design electronics but electronics isn't like playing Go.
What's likely to happen at present is that many young people will insist that what Grok says must be correct.
On Sun, 5 Oct 2025 15:42:48 -0400, "Edward Rawde"
<invalid@invalid.invalid> wrote:
"Cursitor Doom" <cd@notformail.com> wrote in message news:6k85ekhb58ummrpfsg8scf61l1d8adcbr9@4ax.com...
I can't help noticing since I drew everyone's attention to Grok that
it's gone awfully quiet around here. I did postulate that AI might
kill this group, but maybe it's happening quicker than I'd expected.
:-(
LLMs are clearly not useful for solving electronic circuit design problems.
Not very effective at interpreting images either.
And neither is this group if young newcomers are indirectly referred to as newbie dimwits.
(Or something like that, I forget the exact words.)
Sounds like classic Bill Sloman.
On Sun, 5 Oct 2025 17:28:35 -0400, "Edward Rawde"
<invalid@invalid.invalid> wrote:
"Don Y" <blockedofcourse@foo.invalid> wrote in message news:10bulvc$3p33v$1@dont-email.me...
On 10/5/2025 12:42 PM, Edward Rawde wrote:
I see no sign of AI which can learn to design electronics but electronics isn't like playing Go.
AI is already being used to design electronics.
Where can I find an AI designer I can test?
Flux.ai
John Larkin
Highland Tech Glen Canyon Design Center
Lunatic Fringe Electronics
On 6/10/2025 6:42 am, Edward Rawde wrote:
"Cursitor Doom" <cd@notformail.com> wrote in message news:6k85ekhb58ummrpfsg8scf61l1d8adcbr9@4ax.com...
I can't help noticing since I drew everyone's attention to Grok that
it's gone awfully quiet around here. I did postulate that AI might
kill this group, but maybe it's happening quicker than I'd expected.
:-(
LLMs are clearly not useful for solving electronic circuit design problems.
And neither is this group if young newcomers are indirectly referred to as newbie dimwits.
(Or something like that, I forget the exact words.)
Dim newbies is the traditional term, and it is reserved for new-comers
who don't know what they are talking about, and are reluctant to take advantage of better-informed advice.
AI which can learn from its mistakes clearly exists, otherwise how did AlphaGo learn?
Structuring the software so that it can learn from experience is
obviously possible, but it is a lot easier when there is a well-defined
target - in AlphaGo, winning the game - than it is in more open-ended situations.
I see no sign of AI which can learn to design electronics but electronics isn't like playing Go.
What's likely to happen at present is that many young people will insist that what Grok says must be correct.
In the same way that John Larkin insists that Donald Trump has common
sense. Common sense is an ill-defined term, and "correct" doesn't mean
much if you don't know how to recognise mistakes.
On 10/5/2025 2:28 PM, Edward Rawde wrote:
"Don Y" <blockedofcourse@foo.invalid> wrote in message news:10bulvc$3p33v$1@dont-email.me...
On 10/5/2025 12:42 PM, Edward Rawde wrote:
I see no sign of AI which can learn to design electronics but electronics isn't like playing Go.
AI is already being used to design electronics.
Where can I find an AI designer I can test?
If you're afraid that your ability to design electronics is being
threatened by technology, best to find some other thing to hang your
hat on. Cleaning bedpans will probably be a human-required skill
for the foreseeable future -- too complex to automate, too cheap to
find humans who can do it.
"Don Y" <blockedofcourse@foo.invalid> wrote in message news:10bv981$c44$1@dont-email.me...
On 10/5/2025 2:28 PM, Edward Rawde wrote:
"Don Y" <blockedofcourse@foo.invalid> wrote in message news:10bulvc$3p33v$1@dont-email.me...
On 10/5/2025 12:42 PM, Edward Rawde wrote:
I see no sign of AI which can learn to design electronics but electronics isn't like playing Go.
AI is already being used to design electronics.
Where can I find an AI designer I can test?
If you're afraid that your ability to design electronics is being
threatened by technology, best to find some other thing to hang your
hat on. Cleaning bedpans will probably be a human-required skill
for the foreseeable future -- too complex to automate, too cheap to
find humans who can do it.
I didn't say I was afraid of technology in any way at all.
I would be happy to use an AI assistant which can provide a useful contribution.
But not one which thinks the output of the op amp circuit I posted recently is -12V
(Or +12V in some cases. I've also seen 8V.)
The AI design services you mentioned don't seem to be quick to show examples of their work.
This doesn't mean I wouldn't want to use them but I prefer to try before I buy.
I also prefer to see examples of specifications which were turned into designs by AI.
Ditto for anything that an AI "claims" to have generated.
Proving that an AI can regurgitate a previously solved
problem is just "automated retrieval".
"Write a program that prints 'Hello, World!'..."
On 10/6/2025 2:21 PM, Don Y wrote:
Ditto for anything that an AI "claims" to have generated.
Proving that an AI can regurgitate a previously solved
problem is just "automated retrieval".
"Write a program that prints 'Hello, World!'..."
A better example:
"Write a program/function/module that counts the number of
set bits in a 64b integer."
There are only 65 possible answers for 2^64 possible inputs.
What approach does the AI take in pursuing this explicit
(though vague) specification?
"Don Y" <blockedofcourse@foo.invalid> wrote in message news:10c1ne0$l8u1$1@dont-email.me...
On 10/6/2025 2:21 PM, Don Y wrote:
Ditto for anything that an AI "claims" to have generated.
Proving that an AI can regurgitate a previously solved
problem is just "automated retrieval".
"Write a program that prints 'Hello, World!'..."
A better example:
"Write a program/function/module that counts the number of
set bits in a 64b integer."
There are only 65 possible answers for 2^64 possible inputs.
What approach does the AI take in pursuing this explicit
(though vague) specification?
Depends on what you ask.
Try asking Grok:
Map the number of set bits in a 64-bit word to a 7-bit code
But we all know LLMs were trained on code.
On Mon, 6 Oct 2025 16:24:38 +1100, Bill Sloman <bill.sloman@ieee.org>
wrote:
On 6/10/2025 6:42 am, Edward Rawde wrote:
"Cursitor Doom" <cd@notformail.com> wrote in message news:6k85ekhb58ummrpfsg8scf61l1d8adcbr9@4ax.com...
I can't help noticing since I drew everyone's attention to Grok that
it's gone awfully quiet around here. I did postulate that AI might
kill this group, but maybe it's happening quicker than I'd expected.
:-(
LLMs are clearly not useful for solving electronic circuit design problems.
And neither is this group if young newcomers are indirectly referred to as newbie dimwits.
(Or something like that, I forget the exact words.)
Dim newbies is the traditional term, and it is reserved for new-comers
who don't know what they are talking about, and are reluctant to take
advantage of better-informed advice.
It's impressive how tribal people get, rallying against outsiders
based on any, or no, real issues.
AI which can learn from its mistakes clearly exists, otherwise how did AlphaGo learn?
Structuring the software so that it can learn from experience is
obviously possible, but it is a lot easier when there is a well-defined
target - in AlphaGo, winning the game - than it is in more open-ended
situations.
I see no sign of AI which can learn to design electronics but electronics isn't like playing Go.
What's likely to happen at present is that many young people will insist that what Grok says must be correct.
In the same way that John Larkin insists that Donald Trump has common
sense. Common sense is an ill-defined term, and "correct" doesn't mean
much if you don't know how to recognise mistakes.
Really, you obsess about me too much for your own good.
Design something. Build it. You will feel better.
Simon and I are planning to submit a patent application this week on the topic of high performance temperature control. Once it's done, we could discuss it here if folks are interested.
Yes, I enjoy reading about your work.
On 10/5/2025 2:49 PM, Phil Hobbs wrote:
Yes, I enjoy reading about your work.
Simon and I are planning to submit a patent application this week on the
topic of high performance temperature control. Once it's done, we could
discuss it here if folks are interested.
john larkin <jl@glen--canyon.com> wrote:
On Sun, 05 Oct 2025 17:54:56 +0100, Cursitor Doom <cd@notformail.com>
wrote:
I can't help noticing since I drew everyone's attention to Grok that
it's gone awfully quiet around here. I did postulate that AI might
kill this group, but maybe it's happening quicker than I'd expected.
:-(
No, it's just that few people design electronics now.
Simon and I are planning to submit a patent application this week on the topic of high performance temperature control. Once it's done, we could discuss it here if folks are interested.
Many diagnostics aren't certain or rely on subjective interpretations of data. How many mammograms of cancerous breasts are taken before a cancer
is large enough to be *confidently* diagnosed? How much extra breast tissue is put at risk in that process? What chance for the cancer to metastasize before being noticeable, there?
AI is another diagnostic tool to further increase confidence in a
diagnosis OR detect conditions that "mere mortals" miss.
In article <10bv981$c44$1@dont-email.me>,
Don Y <blockedofcourse@foo.invalid> wrote:
<SNIP>
Many diagnostics aren't certain or rely on subjective interpretations of
data. How many mammograms of cancerous breasts are taken before a cancer
is large enough to be *confidently* diagnosed? How much extra breast tissue is put at risk in that process? What chance for the cancer to metastasize before being noticeable, there?
Reportedly Chinese hospitals are using AI successfully to interpret
Roentgen photos. It speeds up the diagnosis process, but they don't
eliminate radiologists.
AI is another diagnostic tool to further increase confidence in a
diagnosis OR detect conditions that "mere mortals" miss.
Greetings, Albert
On 10/6/2025 8:49 PM, Edward Rawde wrote:
"Don Y" <blockedofcourse@foo.invalid> wrote in message news:10c1ne0$l8u1$1@dont-email.me...
On 10/6/2025 2:21 PM, Don Y wrote:
Ditto for anything that an AI "claims" to have generated.
Proving that an AI can regurgitate a previously solved
problem is just "automated retrieval".
"Write a program that prints 'Hello, World!'..."
A better example:
"Write a program/function/module that counts the number of
set bits in a 64b integer."
There are only 65 possible answers for 2^64 possible inputs.
What approach does the AI take in pursuing this explicit
(though vague) specification?
Depends on what you ask.
Try asking Grok:
Map the number of set bits in a 64-bit word to a 7-bit code
But we all know LLMs were trained on code.
But there are many (practical) different solutions to the problem
among many thousands of *possible* solutions. Each has different resource/performance issues. Will it opt for speed? code size?
portability? intuitiveness? "cleverness"?
Will it try to optimize for particular cases?
Or, does it settle for "sufficiency"?
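To make that concrete, here are two of the many possible answers to that one-line spec, sketched in C purely for illustration (my own sketch, not anything an AI produced). Both are correct; they just sit at very different points on the speed/size/readability axes:

#include <stdint.h>

/* Kernighan-style loop: smallest and most obvious code.
   Run time depends on how many bits happen to be set. */
static unsigned popcount_loop(uint64_t x)
{
    unsigned n = 0;
    while (x) {
        x &= x - 1;   /* clear the lowest set bit */
        n++;
    }
    return n;
}

/* Branch-free SWAR version: constant time, more code, and much
   harder to see at a glance that it is correct. */
static unsigned popcount_swar(uint64_t x)
{
    x = x - ((x >> 1) & 0x5555555555555555ULL);
    x = (x & 0x3333333333333333ULL) + ((x >> 2) & 0x3333333333333333ULL);
    x = (x + (x >> 4)) & 0x0F0F0F0F0F0F0F0FULL;
    return (unsigned)((x * 0x0101010101010101ULL) >> 56);
}

Either one returns a value in 0..64 -- the "7-bit code" -- but nothing in
the spec tells an AI (or a contractor) which of those trade-offs matters to you.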
On 10/11/2025 5:02 AM, Theo wrote:
Don Y <blockedofcourse@foo.invalid> wrote:
On 10/6/2025 8:49 PM, Edward Rawde wrote:
"Don Y" <blockedofcourse@foo.invalid> wrote in message news:10c1ne0$l8u1$1@dont-email.me...
On 10/6/2025 2:21 PM, Don Y wrote:
Ditto for anything that an AI "claims" to have generated.
Proving that an AI can regurgitate a previously solved
problem is just "automated retrieval".
"Write a program that prints 'Hello, World!'..."
A better example:
"Write a program/function/module that counts the number of
set bits in a 64b integer."
There are only 65 possible answers for 2^64 possible inputs.
What approach does the AI take in pursuing this explicit
(though vague) specification?
Depends on what you ask.
Try asking Grok:
Map the number of set bits in a 64-bit word to a 7-bit code
But we all know LLMs were trained on code.
But there are many (practical) different solutions to the problem
among many thousands of *possible* solutions. Each has different
resource/performance issues. Will it opt for speed? code size?
portability? intuitiveness? "cleverness"?
Will it try to optimize for particular cases?
Or, does it settle for "sufficiency"?
None of the above.
It was a rhetorical question illustrating how easy it is
to NOT properly constrain a solution space. I.e., someone
has to "tell" an AI what a suitable answer will look like.
If that someone can't imagine all of the criteria appropriate
to that solution, then you *may* get an implementation
that fails many criteria that you've not realized are
important to your problem.
Like asking someone to build you a house -- and ending up
with a house sized for *dolls*!
It looks for examples of the same code having been
written before, and mashes up something to present to you. It doesn't
'know' about speed or portability or code size, absent somebody remarking
about those in its input data.
There's a lot of code out there, so if you ask for a fast algorithm then it can probably dredge one up, but it doesn't 'know' why it's fast.
But, a fast algorithm on a 64 bit machine will be very different than
the same function written for an 8 bit machine. See how easy it is to "forget" pertinent details?
Note that we're just talking about a *tiny* piece of code (dozen lines?), here -- and how easy it is to NOT ask for the correct constraints.
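As a sketch of that point (my own illustration with made-up names, not anything an AI produced): on a small 8-bit target the usual "fast" answer is a 256-entry table walked a byte at a time, which would be a poor choice on a 64-bit machine where a few wide operations handle the whole word at once.

#include <stdint.h>

/* 256-entry table of per-byte bit counts, filled once at start-up.
   On a real 8-bit part this would normally live in flash as a const table. */
static uint8_t bit_count_table[256];

static void init_bit_count_table(void)
{
    for (int i = 1; i < 256; i++)
        bit_count_table[i] = (uint8_t)(bit_count_table[i >> 1] + (i & 1));
}

/* Count the set bits of a 64-bit value presented as 8 bytes --
   the natural representation on an 8-bit micro. */
static uint8_t popcount64_bytewise(const uint8_t bytes[8])
{
    uint8_t n = 0;
    for (uint8_t i = 0; i < 8; i++)
        n += bit_count_table[bytes[i]];
    return n;
}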
If you ask it why it's fast, it can look for somebody talking about that in the training data and present that as an argument, but it doesn't guarantee to relate to the same code example it provided.
For toy problems that have been done a million times before, the training
data is pretty solid so it might look good. But once you start going off
track into areas the training data is sparse then you can look more closely.
That was the point of my "Hello, World" example.
I suspect it does reasonably well with javascript and html5 for web pages (which tend to largely resemble each other save for minor details and graphics)
But, think of how much effort you would have to put into "specifying"
a *real* problem -- enough to be sure the solution presented actually
does fit *your* needs. I.e., if you aren't already writing such specifications for your code, you likely aren't competent to direct
an AI any more than your own "coders".
Don Y <blockedofcourse@foo.invalid> wrote:
On 10/6/2025 8:49 PM, Edward Rawde wrote:
"Don Y" <blockedofcourse@foo.invalid> wrote in message news:10c1ne0$l8u1$1@dont-email.me...
On 10/6/2025 2:21 PM, Don Y wrote:
Ditto for anything that an AI "claims" to have generated.
Proving that an AI can regurgitate a previously solved
problem is just "automated retrieval".
"Write a program that prints 'Hello, World!'..."
A better example:
"Write a program/function/module that counts the number of
set bits in a 64b integer."
There are only 65 possible answers for 2^64 possible inputs.
What approach does the AI take in pursuing this explicit
(though vague) specification?
Depends on what you ask.
Try asking Grok:
Map the number of set bits in a 64-bit word to a 7-bit code
But we all know LLMs were trained on code.
But there are many (practical) different solutions to the problem
among many thousands of *possible* solutions. Each has different
resource/performance issues. Will it opt for speed? code size?
portability? intuitiveness? "cleverness"?
Will it try to optimize for particular cases?
Or, does it settle for "sufficiency"?
None of the above.
It looks for examples of the same code having been
written before, and mashes up something to present to you. It doesn't
'know' about speed or portability or code size, absent somebody remarking about those in its input data.
There's a lot of code out there, so if you ask for a fast algorithm then it can probably dredge one up, but it doesn't 'know' why it's fast.
If you ask it why it's fast, it can look for somebody talking about that in the training data and present that as an argument, but it doesn't guarantee to relate to the same code example it provided.
For toy problems that have been done a million times before, the training data is pretty solid so it might look good. But once you start going off track into areas the training data is sparse then you can look more closely.
On Sat, 11 Oct 2025 09:56:29 -0700, Don Y
<blockedofcourse@foo.invalid> wrote:
On 10/11/2025 5:02 AM, Theo wrote:
Don Y <blockedofcourse@foo.invalid> wrote:
On 10/6/2025 8:49 PM, Edward Rawde wrote:
"Don Y" <blockedofcourse@foo.invalid> wrote in message news:10c1ne0$l8u1$1@dont-email.me...
On 10/6/2025 2:21 PM, Don Y wrote:
Ditto for anything that an AI "claims" to have generated.
Proving that an AI can regurgitate a previously solved
problem is just "automated retrieval".
"Write a program that prints 'Hello, World!'..."
A better example:
"Write a program/function/module that counts the number of
set bits in a 64b integer."
There are only 65 possible answers for 2^64 possible inputs.
What approach does the AI take in pursuing this explicit
(though vague) specification?
Depends on what you ask.
Try asking Grok:
Map the number of set bits in a 64-bit word to a 7-bit code
But we all know LLMs were trained on code.
But there are many (practical) different solutions to the problem
among many thousands of *possible* solutions. Each has different
resource/performance issues. Will it opt for speed? code size?
portability? intuitiveness? "cleverness"?
Will it try to optimize for particular cases?
Or, does it settle for "sufficiency"?
None of the above.
It was a rhetorical question illustrating how easy it is
to NOT properly constrain a solution space. I.e., someone
has to "tell" an AI what a suitable answer will look like.
If that someone can't imagine all of the criteria appropriate
to that solution, then you *may* get an implementation
that fails many criteria that you've not realized are
important to your problem.
Like asking someone to build you a house -- and ending up
with a house sized for *dolls*!
It looks for examples of the same code having been
written before, and mashes up something to present to you. It doesn't
'know' about speed or portability or code size, absent somebody remarking about those in its input data.
There's a lot of code out there, so if you ask for a fast algorithm then it can probably dredge one up, but it doesn't 'know' why it's fast.
But, a fast algorithm on a 64 bit machine will be very different than
the same function written for an 8 bit machine. See how easy it is to
"forget" pertinent details?
Note that we're just talking about a *tiny* piece of code (dozen lines?),
here -- and how easy it is to NOT ask for the correct constraints.
If you ask it why it's fast, it can look for somebody talking about that in the training data and present that as an argument, but it doesn't guarantee to relate to the same code example it provided.
For toy problems that have been done a million times before, the training data is pretty solid so it might look good. But once you start going off track into areas the training data is sparse then you can look more closely.
That was the point of my "Hello, World" example.
I suspect it does reasonably well with javascript and html5 for web pages
(which tend to largely resemble each other save for minor details and
graphics)
But, think of how much effort you would have to put into "specifying"
a *real* problem -- enough to be sure the solution presented actually
does fit *your* needs. I.e., if you aren't already writing such
specifications for your code, you likely aren't competent to direct
an AI any more than your own "coders".
And electronic design is not just coding. It needs real, organic intelligence.
It's impressive that a human brain only needs about a hundred watts.
On 12/10/2025 4:06 am, john larkin wrote:
On Sat, 11 Oct 2025 09:56:29 -0700, Don Y
<blockedofcourse@foo.invalid> wrote:
On 10/11/2025 5:02 AM, Theo wrote:
Don Y <blockedofcourse@foo.invalid> wrote:
On 10/6/2025 8:49 PM, Edward Rawde wrote:
"Don Y" <blockedofcourse@foo.invalid> wrote in message news:10c1ne0$l8u1$1@dont-email.me...
On 10/6/2025 2:21 PM, Don Y wrote:
Ditto for anything that an AI "claims" to have generated.
Proving that an AI can regurgitate a previously solved
problem is just "automated retrieval".
"Write a program that prints 'Hello, World!'..."
A better example:
"Write a program/function/module that counts the number of
set bits in a 64b integer."
There are only 65 possible answers for 2^64 possible inputs.
What approach does the AI take in pursuing this explicit
(though vague) specification?
Depends on what you ask.
Try asking Grok:
Map the number of set bits in a 64-bit word to a 7-bit code
But we all know LLMs were trained on code.
But there are many (practical) different solutions to the problem
among many thousands of *possible* solutions. Each has different
resource/performance issues. Will it opt for speed? code size?
portability? intuitiveness? "cleverness"?
Will it try to optimize for particular cases?
Or, does it settle for "sufficiency"?
None of the above.
It was a rhetorical question illustrating how easy it is
to NOT properly constrain a solution space. I.e., someone
has to "tell" an AI what a suitable answer will look like.
If that someone can't imagine all of the criteria appropriate
to that solution, then you *may* get an implementation
that fails many criteria that you've not realized are
important to your problem.
Like asking someone to build you a house -- and ending up
with a house sized for *dolls*!
It looks for examples of the same code having been
written before, and mashes up something to present to you. It doesn't 'know' about speed or portability or code size, absent somebody remarking about those in its input data.
There's a lot of code out there, so if you ask for a fast algorithm then it
can probably dredge one up, but it doesn't 'know' why it's fast.
But, a fast algorithm on a 64 bit machine will be very different than
the same function written for an 8 bit machine. See how easy it is to
"forget" pertinent details?
Note that we're just talking about a *tiny* piece of code (dozen lines?), here -- and how easy it is to NOT ask for the correct constraints.
If you ask it why it's fast, it can look for somebody talking about that in
the training data and present that as an argument, but it doesn't guarantee
to relate to the same code example it provided.
For toy problems that have been done a million times before, the training data is pretty solid so it might look good. But once you start going off track into areas the training data is sparse then you can look more closely.
That was the point of my "Hello, World" example.
I suspect it does reasonably well with javascript and html5 for web pages (which tend to largely resemble each other save for minor details and
graphics)
But, think of how much effort you would have to put into "specifying"
a *real* problem -- enough to be sure the solution presented actually
does fit *your* needs. I.e., if you aren't already writing such
specifications for your code, you likely aren't competent to direct
an AI any more than your own "coders".
And electronic design is not just coding. It needs real, organic
intelligence.
To do it well. More or less adequate electronic design is easier.
I've cleaned up after a few people whose idea of "adequate" fell a bit short.
It's impressive that a human brain only needs about a hundred watts.
It is woefully slow.
--
Bill Sloman, Sydney
"Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10cg3ib$1e2gm$2@dont-email.me...
On 12/10/2025 4:06 am, john larkin wrote:
On Sat, 11 Oct 2025 09:56:29 -0700, Don Y
<blockedofcourse@foo.invalid> wrote:
On 10/11/2025 5:02 AM, Theo wrote:
Don Y <blockedofcourse@foo.invalid> wrote:
On 10/6/2025 8:49 PM, Edward Rawde wrote:
"Don Y" <blockedofcourse@foo.invalid> wrote in message news:10c1ne0$l8u1$1@dont-email.me...
On 10/6/2025 2:21 PM, Don Y wrote:
Ditto for anything that an AI "claims" to have generated.
Proving that an AI can regurgitate a previously solved
problem is just "automated retrieval".
"Write a program that prints 'Hello, World!'..."
A better example:
"Write a program/function/module that counts the number of
set bits in a 64b integer."
There are only 65 possible answers for 2^64 possible inputs.
What approach does the AI take in pursuing this explicit
(though vague) specification?
Depends on what you ask.
Try asking Grok:
Map the number of set bits in a 64-bit word to a 7-bit code
But we all know LLMs were trained on code.
But there are many (practical) different solutions to the problem
among many thousands of *possible* solutions. Each has different
resource/performance issues. Will it opt for speed? code size?
portability? intuitiveness? "cleverness"?
Will it try to optimize for particular cases?
Or, does it settle for "sufficiency"?
None of the above.
It was a rhetorical question illustrating how easy it is
to NOT properly constrain a solution space. I.e., someone
has to "tell" an AI what a suitable answer will look like.
If that someone can't imagine all of the criteria appropriate
to that solution, then you *may* get an implementation
that fails many criteria that you've not realized are
important to your problem.
Like asking someone to build you a house -- and ending up
with a house sized for *dolls*!
It looks for examples of the same code having been
written before, and mashes up something to present to you. It doesn't 'know' about speed or portability or code size, absent somebody remarking about those in its input data.
There's a lot of code out there, so if you ask for a fast algorithm then it
can probably dredge one up, but it doesn't 'know' why it's fast.
But, a fast algorithm on a 64 bit machine will be very different than
the same function written for an 8 bit machine. See how easy it is to "forget" pertinent details?
Note that we're just talking about a *tiny* piece of code (dozen lines?), here -- and how easy it is to NOT ask for the correct constraints.
If you ask it why it's fast, it can look for somebody talking about that in
the training data and present that as an argument, but it doesn't guarantee
to relate to the same code example it provided.
For toy problems that have been done a million times before, the training data is pretty solid so it might look good. But once you start going off track into areas the training data is sparse then you can look more closely.
That was the point of my "Hello, World" example.
I suspect it does reasonably well with javascript and html5 for web pages (which tend to largely resemble each other save for minor details and
graphics)
But, think of how much effort you would have to put into "specifying"
a *real* problem -- enough to be sure the solution presented actually
does fit *your* needs. I.e., if you aren't already writing such
specifications for your code, you likely aren't competent to direct
an AI any more than your own "coders".
And electronic design is not just coding. It needs real, organic
intelligence.
To do it well. More or less adequate electronic design is easier.
I've cleaned up after few people whose idea of "adequate" fell a bit short.
That must be the 234,412,265th time you've said that.
It's impressive that a human brain only needs about a hundred watts.
It is woefully slow.
At some things. Not at others.
On 13/10/2025 1:25 am, Edward Rawde wrote:.....
"Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10cg3ib$1e2gm$2@dont-email.me...
On 12/10/2025 4:06 am, john larkin wrote:
On Sat, 11 Oct 2025 09:56:29 -0700, Don Y
<blockedofcourse@foo.invalid> wrote:
On 10/11/2025 5:02 AM, Theo wrote:
Don Y <blockedofcourse@foo.invalid> wrote:
On 10/6/2025 8:49 PM, Edward Rawde wrote:
"Don Y" <blockedofcourse@foo.invalid> wrote in message news:10c1ne0$l8u1$1@dont-email.me...
On 10/6/2025 2:21 PM, Don Y wrote:
Ditto for anything that an AI "claims" to have generated.
That must be the 234,412,265th time you've said that.
And electronic design is not just coding. It needs real, organic
intelligence.
To do it well. More or less adequate electronic design is easier.
I've cleaned up after few people whose idea of "adequate" fell a bit short.
That may be something of an exaggeration.
It's impressive that a human brain only needs about a hundred watts.
It is woefully slow.
At some things. Not at others.
Name one. The basic operations in the human brain seem to work in the millisecond range, and ECL can do stuff in a nanosecond.
The human brain does well on large data sets because it has a lot more parallel processing than a regular computer, but - as
demonstrated by the solution of the protein folding problem - you can get more data into a big computer than you can into a human
brain.
--
Bill Sloman, Sydney
"Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10cg3ib$1e2gm$2@dont-email.me...
On 12/10/2025 4:06 am, john larkin wrote:
On Sat, 11 Oct 2025 09:56:29 -0700, Don Y
<blockedofcourse@foo.invalid> wrote:
On 10/11/2025 5:02 AM, Theo wrote:
Don Y <blockedofcourse@foo.invalid> wrote:
On 10/6/2025 8:49 PM, Edward Rawde wrote:
"Don Y" <blockedofcourse@foo.invalid> wrote in message news:10c1ne0$l8u1$1@dont-email.me...
On 10/6/2025 2:21 PM, Don Y wrote:
Ditto for anything that an AI "claims" to have generated.
Proving that an AI can regurgitate a previously solved
problem is just "automated retrieval".
"Write a program that prints 'Hello, World!'..."
A better example:
"Write a program/function/module that counts the number of
set bits in a 64b integer."
There are only 65 possible answers for 2^64 possible inputs.
What approach does the AI take in pursuing this explicit
(though vague) specification?
Depends on what you ask.
Try asking Grok:
Map the number of set bits in a 64-bit word to a 7-bit code
But we all know LLMs were trained on code.
But there are many (practical) different solutions to the problem
among many thousands of *possible* solutions. Each has different
resource/performance issues. Will it opt for speed? code size?
portability? intuitiveness? "cleverness"?
Will it try to optimize for particular cases?
Or, does it settle for "sufficiency"?
None of the above.
It was a rhetorical question illustrating how easy it is
to NOT properly constrain a solution space. I.e., someone
has to "tell" an AI what a suitable answer will look like.
If that someone can't imagine all of the criteria appropriate
to that solution, then you *may* get an implementation
that fails many criteria that you've not realized are
important to your problem.
Like asking someone to build you a house -- and ending up
with a house sized for *dolls*!
It looks for examples of the same code having been
written before, and mashes up something to present to you. It doesn't >>>>> 'know' about speed or portability or code size, absent somebody remarking >>>>> about those in its input data.
There's a lot of code out there, so if you ask for a fast algorithm then it
can probably dredge one up, but it doesn't 'know' why it's fast.
But, a fast algorithm on a 64 bit machine will be very different than
the same function written for an 8 bit machine. See how easy it is to >>>> "forget" pertinent details?
Note that we're just talking about a *tiny* piece of code (dozen lines?), >>>> here -- and how easy it is to NOT ask for the correct constraints.
If you ask it why it's fast, it can look for somebody talking about that in
the training data and present that as an argument, but it doesn't guarantee
to relate to the same code example it provided.
For toy problems that have been done a million times before, the training >>>>> data is pretty solid so it might look good. But once you start going off >>>>> track into areas the training data is sparse then you can look more closely.
That was the point of my "Hello, World" example.
I suspect it does reasonably well with javascript and html5 for web pages >>>> (which tend to largely resemble each other save for minor details and
graphics)
But, think of how much effort you would have to put into "specifying"
a *real* problem -- enough to be sure the solution presented actually
does fit *your* needs. I.e., if you aren't already writing such
specifications for your code, you likely aren't competent to direct
an AI any more than your own "coders".
And electronic design is not just coding. It needs real, organic
intelligence.
To do it well. More or less adequate electronic design is easier.
I've cleaned up after few people whose idea of "adequate" fell a bit short.
That must be the 234,412,265th time you've said that.
It's impressive that a human brain only needs about a hundred watts.
It is woefully slow.
At some things. Not at others.
"Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10cgjcn$1i52v$4@dont-email.me...
On 13/10/2025 1:25 am, Edward Rawde wrote:.....
"Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10cg3ib$1e2gm$2@dont-email.me...
On 12/10/2025 4:06 am, john larkin wrote:
On Sat, 11 Oct 2025 09:56:29 -0700, Don Y
<blockedofcourse@foo.invalid> wrote:
On 10/11/2025 5:02 AM, Theo wrote:
Don Y <blockedofcourse@foo.invalid> wrote:
On 10/6/2025 8:49 PM, Edward Rawde wrote:
"Don Y" <blockedofcourse@foo.invalid> wrote in message news:10c1ne0$l8u1$1@dont-email.me...
On 10/6/2025 2:21 PM, Don Y wrote:
Ditto for anything that an AI "claims" to have generated.
And electronic design is not just coding. It needs real, organic
intelligence.
To do it well. More or less adequate electronic design is easier.
I've cleaned up after few people whose idea of "adequate" fell a bit short.
That must be the 234,412,265th time you've said that.
That may be something of an exaggeration.
Ok 234,412,104th
It's impressive that a human brain only needs about a hundred watts.
It is woefully slow.
At some things. Not at others.
Name one. The basic operations in the human brain seem to work in the millisecond range, and ECL can do stuff in a nanosecond.
Understanding a language you're fluent in appears to be near enough instant. Why would you need it any faster?
I've never seen ECL do that.
The human brain does well on large data sets because it has a lot more parallel processing than a regular computer, but - as
demonstrated by the solution of the protein folding problem - you can get more data into a big computer than you can into a human
brain.
Sure but I've yet to see an online AI which learns from its mistakes.
And when that happens, who is going to teach it what a mistake is and what isn't?
Some subjects, such as politics, may run into the same difficulties humans have.
Where will a DT made of ECL with a much larger data set lead us?
On Sun, 12 Oct 2025 10:25:18 -0400, "Edward Rawde"
<invalid@invalid.invalid> wrote:
"Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10cg3ib$1e2gm$2@dont-email.me...
On 12/10/2025 4:06 am, john larkin wrote:
On Sat, 11 Oct 2025 09:56:29 -0700, Don Y
<blockedofcourse@foo.invalid> wrote:
On 10/11/2025 5:02 AM, Theo wrote:
Don Y <blockedofcourse@foo.invalid> wrote:
On 10/6/2025 8:49 PM, Edward Rawde wrote:
"Don Y" <blockedofcourse@foo.invalid> wrote in message news:10c1ne0$l8u1$1@dont-email.me...
On 10/6/2025 2:21 PM, Don Y wrote:
Ditto for anything that an AI "claims" to have generated.
Proving that an AI can regurgitate a previously solved
problem is just "automated retrieval".
"Write a program that prints 'Hello, World!'..."
A better example:
"Write a program/function/module that counts the number of
set bits in a 64b integer."
There are only 65 possible answers for 2^64 possible inputs.
What approach does the AI take in pursuing this explicit
(though vague) specification?
Depends on what you ask.
Try asking Grok:
Map the number of set bits in a 64-bit word to a 7-bit code
But we all know LLMs were trained on code.
But there are many (practical) different solutions to the problem
among many thousands of *possible* solutions. Each has different
resource/performance issues. Will it opt for speed? code size?
portability? intuitiveness? "cleverness"?
Will it try to optimize for particular cases?
Or, does it settle for "sufficiency"?
None of the above.
It was a rhetorical question illustrating how easy it is
to NOT properly constrain a solution space. I.e., someone
has to "tell" an AI what a suitable answer will look like.
If that someone can't imagine all of the criteria appropriate
to that solution, then you *may* get an implementation
that fails many criteria that you've not realized are
important to your problem.
Like asking someone to build you a house -- and ending up
with a house sized for *dolls*!
It looks for examples of the same code having been
written before, and mashes up something to present to you. It doesn't 'know' about speed or portability or code size, absent somebody remarking
about those in its input data.
There's a lot of code out there, so if you ask for a fast algorithm then it
can probably dredge one up, but it doesn't 'know' why it's fast.
But, a fast algorithm on a 64 bit machine will be very different than the same function written for an 8 bit machine. See how easy it is to "forget" pertinent details?
Note that we're just talking about a *tiny* piece of code (dozen lines?), here -- and how easy it is to NOT ask for the correct constraints.
If you ask it why it's fast, it can look for somebody talking about that in
the training data and present that as an argument, but it doesn't guarantee
to relate to the same code example it provided.
For toy problems that have been done a million times before, the training
data is pretty solid so it might look good. But once you start going off
track into areas the training data is sparse then you can look more closely.
That was the point of my "Hello, World" example.
I suspect it does reasonably well with javascript and html5 for web pages (which tend to largely resemble each other save for minor details and graphics)
But, think of how much effort you would have to put into "specifying" a *real* problem -- enough to be sure the solution presented actually does fit *your* needs. I.e., if you aren't already writing such
specifications for your code, you likely aren't competent to direct
an AI any more than your own "coders".
And electronic design is not just coding. It needs real, organic
intelligence.
To do it well. More or less adequate electronic design is easier.
I've cleaned up after few people whose idea of "adequate" fell a bit short.
That must be the 234,412,265th time you've said that.
It's impressive that a human brain only needs about a hundred watts.
It is woefully slow.
At some things. Not at others.
Some very impressive things can happen in milliseconds.
Sometimes complex things are processed in background, and can take
days or even years.
What's cool is that one can have a problem, forget about it for years,
see some new component that makes it work, and have a new circuit pop
up instantly.
On 13/10/2025 4:13 am, john larkin wrote:
On Sun, 12 Oct 2025 10:25:18 -0400, "Edward Rawde"
<invalid@invalid.invalid> wrote:
"Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10cg3ib$1e2gm$2@dont-email.me...
On 12/10/2025 4:06 am, john larkin wrote:
On Sat, 11 Oct 2025 09:56:29 -0700, Don Y
<blockedofcourse@foo.invalid> wrote:
On 10/11/2025 5:02 AM, Theo wrote:
Don Y <blockedofcourse@foo.invalid> wrote:
On 10/6/2025 8:49 PM, Edward Rawde wrote:
"Don Y" <blockedofcourse@foo.invalid> wrote in message news:10c1ne0$l8u1$1@dont-email.me...
On 10/6/2025 2:21 PM, Don Y wrote:
Ditto for anything that an AI "claims" to have generated.
Proving that an AI can regurgitate a previously solved
problem is just "automated retrieval".
"Write a program that prints 'Hello, World!'..."
A better example:
"Write a program/function/module that counts the number of >>>>>>>>>> set bits in a 64b integer."
There are only 65 possible answers for 2^64 possible inputs. >>>>>>>>>> What approach does the AI take in pursuing this explicit
(though vague) specification?
Depends on what you ask.
Try asking Grok:
Map the number of set bits in a 64-bit word to a 7-bit code
But we all know LLMs were trained on code.
But there are many (practical) different solutions to the problem
among many thousands of *possible* solutions. Each has different
resource/performance issues. Will it opt for speed? code size?
portability? intuitiveness? "cleverness"?
Will it try to optimize for particular cases?
Or, does it settle for "sufficiency"?
None of the above.
It was a rhetorical question illustrating how easy it is
to NOT properly constrain a solution space. I.e., someone
has to "tell" an AI what a suitable answer will look like.
If that someone can't imagine all of the criteria appropriate
to that solution, then you *may* get an implementation
that fails many criteria that you've not realized are
important to your problem.
Like asking someone to build you a house -- and ending up
with a house sized for *dolls*!
It looks for examples of the same code having been
written before, and mashes up something to present to you. It doesn't 'know' about speed or portability or code size, absent somebody remarking
about those in its input data.
There's a lot of code out there, so if you ask for a fast algorithm then it
can probably dredge one up, but it doesn't 'know' why it's fast.
But, a fast algorithm on a 64 bit machine will be very different than the same function written for an 8 bit machine. See how easy it is to "forget" pertinent details?
Note that we're just talking about a *tiny* piece of code (dozen lines?),
here -- and how easy it is to NOT ask for the correct constraints.
If you ask it why it's fast, it can look for somebody talking about that in
the training data and present that as an argument, but it doesn't guarantee
to relate to the same code example it provided.
For toy problems that have been done a million times before, the training
data is pretty solid so it might look good. But once you start going off
track into areas the training data is sparse then you can look more closely.
That was the point of my "Hello, World" example.
I suspect it does reasonably well with javascript and html5 for web pages
(which tend to largely resemble each other save for minor details and graphics)
But, think of how much effort you would have to put into "specifying" a *real* problem -- enough to be sure the solution presented actually does fit *your* needs. I.e., if you aren't already writing such
specifications for your code, you likely aren't competent to direct an AI any more than your own "coders".
And electronic design is not just coding. It needs real, organic
intelligence.
To do it well. More or less adequate electronic design is easier.
I've cleaned up after few people whose idea of "adequate" fell a bit short.
That must be the 234,412,265th time you've said that.
It's impressive that a human brain only needs about a hundred watts.
It is woefully slow.
At some things. Not at others.
Some very impressive things can happen in milliseconds.
John Larkin is easily impressed by his own brilliance,
Sometimes complex things are processed in background, and can take
days or even years.
If you have a very slow brain.
What's cool is that one can have a problem, forget about it for years,
see some new component that makes it work, and have a new circuit pop
up instantly.
That's just lots of memory. One time I did that was when an impractical
way of dealing with ripple on pulse width modulated output which I came
up with in 1975 became practical in 1992 when I got my hands on a
big-enough chunk of programmable logic - not all that big, as it was a plug-in replacement for a 22V10 chip, but big enough. Obviously, I
hadn't forgotten about it. I hadn't been obsessing about it for the previous 17 years, but I hadn't forgotten about it either.
On Mon, 13 Oct 2025 20:10:43 +1100, Bill Sloman <bill.sloman@ieee.org>
wrote:
It's impressive that a human brain only needs about a hundred watts.
It is woefully slow.
At some things. Not at others.
Some very impressive things can happen in milliseconds.
John Larkin is easily impressed by his own brilliance,
I am impressed by my brain, the one I was born with.
On 13/10/2025 3:35 am, Edward Rawde wrote:
"Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10cgjcn$1i52v$4@dont-email.me...
On 13/10/2025 1:25 am, Edward Rawde wrote:.....
"Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10cg3ib$1e2gm$2@dont-email.me...
On 12/10/2025 4:06 am, john larkin wrote:
On Sat, 11 Oct 2025 09:56:29 -0700, Don Y
<blockedofcourse@foo.invalid> wrote:
On 10/11/2025 5:02 AM, Theo wrote:
Don Y <blockedofcourse@foo.invalid> wrote:
On 10/6/2025 8:49 PM, Edward Rawde wrote:
"Don Y" <blockedofcourse@foo.invalid> wrote in message news:10c1ne0$l8u1$1@dont-email.me...
On 10/6/2025 2:21 PM, Don Y wrote:
Ditto for anything that an AI "claims" to have generated.
And electronic design is not just coding. It needs real, organic
intelligence.
To do it well. More or less adequate electronic design is easier.
I've cleaned up after few people whose idea of "adequate" fell a bit short.
That must be the 234,412,265th time you've said that.
That may be something of an exaggeration.
Ok 234,412,104th
It's impressive that a human brain only needs about a hundred watts.
It is woefully slow.
At some things. Not at others.
Name one. The basic operations in the human brain seem to work in the millisecond range, and ECL can do stuff in a nanosecond.
Understanding a language you're fluent in appears to be near enough instant. Why would you need it any faster?
I've never seen ECL do that.
You don't need ECL for that. Google translate uses a large language model to do rapid translation - faster than a human
simultaneous translator can manage - and one of my wife's friends from her undergraduate days did that for a living, as well as
teaching the skill.
You don't process the speech all that fast - psycholinguists have measured that process in some detail.
The human brain does well on large data sets because it has a lot more parallel processing than a regular computer, but - as
demonstrated by the solution of the protein folding problem - you can get more data into a big computer than you can into a
human
brain.
Sure but I've yet to see an online AI which learns from its mistakes.
You don't move in those circles.
And when that happens, who is going to teach it what a mistake is and what isn't?
That's what large language models are for.
Some subjects, such as politics, may run into the same difficulties humans have.
Where will a DT made of ECL with a much larger data set lead us?
You don't need a computer to notice that Trump lies a lot, and sounds off about subject where his understanding is imperfect.
Science - in the peer-reviewed literature - has worked out a mechanism to suppress this kind of output. Fact-checkers are the
nearest thing to that in the political system, and Trump and his supporters are happy to ignore them.
Hitler and Mao provide perfectly splendid examples of the corrosive effects of misinformation, but quite a few people seem to be
incapable of recognising more modern examples of the breed.
The answer is probably better education, but schools are frequently exploited by religious institutions to implant nonsense in the
minds of the next generation.
And most Americans seem to be taught that the US constitution is perfect, even though it was remarkably primitive when it was
first put together, and seems unlikely to ever adopt features like proportional representation and votes of confidence. Trump may
make a big enough mess of the US to prompt some kind of reform, but his supporters who post here don't seem to be getting the
message.
--
Bill Sloman, Sydney
On Mon, 13 Oct 2025 20:10:43 +1100, Bill Sloman <bill.sloman@ieee.org>
wrote:
On 13/10/2025 4:13 am, john larkin wrote:
On Sun, 12 Oct 2025 10:25:18 -0400, "Edward Rawde"
<invalid@invalid.invalid> wrote:
"Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10cg3ib$1e2gm$2@dont-email.me...
On 12/10/2025 4:06 am, john larkin wrote:
On Sat, 11 Oct 2025 09:56:29 -0700, Don Y
<blockedofcourse@foo.invalid> wrote:
On 10/11/2025 5:02 AM, Theo wrote:
Don Y <blockedofcourse@foo.invalid> wrote:
On 10/6/2025 8:49 PM, Edward Rawde wrote:
"Don Y" <blockedofcourse@foo.invalid> wrote in message news:10c1ne0$l8u1$1@dont-email.me...
On 10/6/2025 2:21 PM, Don Y wrote:
Some very impressive things can happen in milliseconds.
John Larkin is easily impressed by his own brilliance,
I am impressed by my brain, the one I was born with.
Sometimes complex things are processed in background, and can take
days or even years.
If you have a very slow brain.
Or if you allow it to work at all time scales.
What's cool is that one can have a problem, forget about it for years,
see some new component that makes it work, and have a new circuit pop
up instantly.
That's just lots of memory. One time I did that was when an impractical way of dealing with ripple on pulse-width-modulated output, which I came up with in 1975, became practical in 1992 when I got my hands on a big-enough chunk of programmable logic - not all that big, as it was a plug-in replacement for a 22V10 chip, but big enough. Obviously, I hadn't forgotten about it. I hadn't been obsessing about it for the previous 17 years, but I hadn't forgotten about it either.
So, you have a very slow brain?
On 14/10/2025 1:41 am, john larkin wrote:
On Mon, 13 Oct 2025 20:10:43 +1100, Bill Sloman <bill.sloman@ieee.org>
wrote:
On 13/10/2025 4:13 am, john larkin wrote:
On Sun, 12 Oct 2025 10:25:18 -0400, "Edward Rawde"
<invalid@invalid.invalid> wrote:
"Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10cg3ib$1e2gm$2@dont-email.me...
On 12/10/2025 4:06 am, john larkin wrote:
On Sat, 11 Oct 2025 09:56:29 -0700, Don Y
<blockedofcourse@foo.invalid> wrote:
On 10/11/2025 5:02 AM, Theo wrote:
Don Y <blockedofcourse@foo.invalid> wrote:
On 10/6/2025 8:49 PM, Edward Rawde wrote:
"Don Y" <blockedofcourse@foo.invalid> wrote in message news:10c1ne0$l8u1$1@dont-email.me...
On 10/6/2025 2:21 PM, Don Y wrote:
<snip>
Some very impressive things can happen in milliseconds.
John Larkin is easily impressed by his own brilliance,
I am impressed by my brain, the one I was born with.
Most egomaniacs are. It's a fairly prominent feature of the condition.
Trump has described himself as a "stable genius" which is a comic illustration of the egomaniac capacity for self-delusion.
Sometimes complex things are processed in background, and can take
days or even years.
If you have a very slow brain.
Or if you allow it to work at all time scales.
You do have some conscious control of what your conscious mind does. The sub-conscious is less accessible.
What's cool is that one can have a problem, forget about it for years, see some new component that makes it work, and have a new circuit pop up instantly.
That's just lots of memory. One time I did that was when an impractical way of dealing with ripple on pulse-width-modulated output, which I came up with in 1975, became practical in 1992 when I got my hands on a big-enough chunk of programmable logic - not all that big, as it was a plug-in replacement for a 22V10 chip, but big enough. Obviously, I hadn't forgotten about it. I hadn't been obsessing about it for the previous 17 years, but I hadn't forgotten about it either.
So, you have a very slow brain?
The human brain doesn't seem to have any kind of delay-line store. Stuff gets encoded, and you can decode it when you need it. I do find myself remembering stuff from sixty or seventy years ago, so there may be some kind of house-keeping process sorting through the memory banks in the background.
I have met Elizabeth Loftus, and know that this gets complicated when there's significant emotional content, but I'm not getting a lot of that.
https://en.wikipedia.org/wiki/Elizabeth_Loftus
On Tue, 14 Oct 2025 03:53:06 +1100, Bill Sloman <bill.sloman@ieee.org>
wrote:
On 14/10/2025 1:41 am, john larkin wrote:
On Mon, 13 Oct 2025 20:10:43 +1100, Bill Sloman <bill.sloman@ieee.org>
wrote:
On 13/10/2025 4:13 am, john larkin wrote:
On Sun, 12 Oct 2025 10:25:18 -0400, "Edward Rawde"
<invalid@invalid.invalid> wrote:
"Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10cg3ib$1e2gm$2@dont-email.me...
On 12/10/2025 4:06 am, john larkin wrote:
On Sat, 11 Oct 2025 09:56:29 -0700, Don Y
<blockedofcourse@foo.invalid> wrote:
On 10/11/2025 5:02 AM, Theo wrote:
Don Y <blockedofcourse@foo.invalid> wrote:
On 10/6/2025 8:49 PM, Edward Rawde wrote:
"Don Y" <blockedofcourse@foo.invalid> wrote in message news:10c1ne0$l8u1$1@dont-email.me...
On 10/6/2025 2:21 PM, Don Y wrote:
<snip>
Some very impressive things can happen in milliseconds.
John Larkin is easily impressed by his own brilliance,
I am impressed by my brain, the one I was born with.
Most egomaniacs are. It's a fairly prominent feature of the condition.
Trump has described himself as a "stable genius" which is a comic
illustration of the egomaniac capacity for self-delusion.
I take no credit for having my brain, and I'm impressed by most
anybody's brain.
DT does seem to have created a lot of peace and saved a lot of lives,
so far.
Sometimes complex things are processed in background, and can take
days or even years.
If you have a very slow brain.
Or if you allow it to work at all time scales.
You do have some conscious control of what your conscious mind does. The
sub-conscious is less accessible.
What's cool is that one can have a problem, forget about it for years, see some new component that makes it work, and have a new circuit pop up instantly.
That's just lots of memory. One time I did that was when an impractical way of dealing with ripple on pulse-width-modulated output, which I came up with in 1975, became practical in 1992 when I got my hands on a big-enough chunk of programmable logic - not all that big, as it was a plug-in replacement for a 22V10 chip, but big enough. Obviously, I hadn't forgotten about it. I hadn't been obsessing about it for the previous 17 years, but I hadn't forgotten about it either.
So, you have a very slow brain?
The human brain doesn't seem to have any kind of delay-line store. Stuff gets
encoded, and you can decode it when you need it. I do find myself
remembering stuff from sixty or seventy years ago, so there may be some
kind of house-keeping processing sorting through the memory banks in
background.
I have met Elizabeth Loftus, and know that this gets complicated when
there's significant emotional content, but I'm not getting a lot of that.
https://en.wikipedia.org/wiki/Elizabeth_Loftus
She sounds awful.
On Mon, 13 Oct 2025 07:41:58 -0700, john larkin <jl@glen--canyon.com>
wrote:
On Mon, 13 Oct 2025 20:10:43 +1100, Bill Sloman <bill.sloman@ieee.org>
wrote:
It's impressive that a human brain only needs about a hundred watts.
It is woefully slow.
At some things. Not at others.
Some very impressive things can happen in milliseconds.
John Larkin is easily impressed by his own brilliance,
I am impressed by my brain, the one I was born with.
I don't think our brains are a lot different from the ones our
ancestors had 5,000, or 50,000 years ago. So why did evolution make
them/us able to do calculus and design electronics and program in
Rust?
It's assumed that, since brains are such energy hogs, critters don't
evolve much more brain than they really need. And most don't.
Humans benefit from making fire and making weapons, but those wouldn't
need the ability to do abstract math.
"Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10cib2r$21rgr$1@dont-email.me...
On 13/10/2025 3:35 am, Edward Rawde wrote:
"Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10cgjcn$1i52v$4@dont-email.me...
On 13/10/2025 1:25 am, Edward Rawde wrote:.....
"Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10cg3ib$1e2gm$2@dont-email.me...
On 12/10/2025 4:06 am, john larkin wrote:
On Sat, 11 Oct 2025 09:56:29 -0700, Don Y
<blockedofcourse@foo.invalid> wrote:
On 10/11/2025 5:02 AM, Theo wrote:
Don Y <blockedofcourse@foo.invalid> wrote:
On 10/6/2025 8:49 PM, Edward Rawde wrote:
"Don Y" <blockedofcourse@foo.invalid> wrote in message news:10c1ne0$l8u1$1@dont-email.me...
On 10/6/2025 2:21 PM, Don Y wrote:
Ditto for anything that an AI "claims" to have generated.
And electronic design is not just coding. It needs real, organic intelligence.
To do it well. More or less adequate electronic design is easier.
I've cleaned up after few people whose idea of "adequate" fell a bit short.
That must be the 234,412,265th time you've said that.
That may be something of an exaggeration.
Ok 234,412,104th
It's impressive that a human brain only needs about a hundred watts.
It is woefully slow.
At some things. Not at others.
Name one. The basic operations in the human brain seem to work in the millisecond range, and ECL can do stuff in a nanosecond.
Understanding a language you're fluent in appears to be near enough instant.
Why would you need it any faster?
I've never seen ECL do that.
You don't need ECL for that. Google translate uses a large language model to do rapid translation - faster than a human
simultaneous translator can manage - and one of my wife's friends from her undergraduate days did that for a living, as well as
teaching the skill.
You don't process the speech all that fast - psycholinguists have measured that process in some detail.
Not long ago I used it for help with translation into French.
I had to get a human translator to check it and they made a lot of changes.
The human brain does well on large data sets because it has a lot more parallel processing than a regular computer, but - as
demonstrated by the solution of the protein folding problem - you can get more data into a big computer than you can into a
human
brain.
Sure but I've yet to see an online AI which learns from its mistakes.
You don't move in those circles.
And you do?
And when that happens, who is going to teach it what a mistake is and what isn't?
That's what large language models are for.
Oh dear.
Some subjects, such as politics, may run into the same difficulties humans have.
Where will a DT made of ECL with a much larger data set lead us?
You don't need a computer to notice that Trump lies a lot, and sounds off about subjects where his understanding is imperfect.
But suppose you have a computer which can model DT with a much larger data set and
a CPU with a similar personality?
Science - in the peer-reviewed literature - has worked out a mechanism to suppress this kind of output. Fact-checkers are the
nearest thing to that in the political system, and Trump and his supporters are happy to ignore them.
So there's a good possibility that future AI will too.
Hitler and Mao provide perfectly splendid examples of the corrosive effects of misinformation, but quite a few people seem to be
incapable of recognising more modern examples of the breed.
Probably because if you haven't lived through it then it may as well not have happened.
The answer is probably better education, but schools are frequently exploited by religious institutions to implant nonsense in the
minds of the next generation.
The same will probably happen with AI.
Religion knows that the earlier you educate, the more likely that there will be lifetime
adoption of the religion without question.
And most Americans seem to be taught that the US constitution is perfect, even though it was remarkably primitive when it was first put together, and seems unlikely to ever adopt features like proportional representation and votes of confidence. Trump may make a big enough mess of the US to prompt some kind of reform, but his supporters who post here don't seem to be getting the message.
There seems to be a need to make things "great again" which implies that it is believed
that they were great in the past but are no longer great.
So there seems to be a push to go backwards.
I wonder what AGI will make of that.
It may depend on whether you can separate intelligence from personality.
It does not look to me that you can.
On 14/10/2025 3:26 am, Edward Rawde wrote:
"Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10cib2r$21rgr$1@dont-email.me...
On 13/10/2025 3:35 am, Edward Rawde wrote:
"Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10cgjcn$1i52v$4@dont-email.me...
On 13/10/2025 1:25 am, Edward Rawde wrote:.....
"Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10cg3ib$1e2gm$2@dont-email.me...
On 12/10/2025 4:06 am, john larkin wrote:
On Sat, 11 Oct 2025 09:56:29 -0700, Don Y
<blockedofcourse@foo.invalid> wrote:
On 10/11/2025 5:02 AM, Theo wrote:
Don Y <blockedofcourse@foo.invalid> wrote:
On 10/6/2025 8:49 PM, Edward Rawde wrote:
"Don Y" <blockedofcourse@foo.invalid> wrote in message news:10c1ne0$l8u1$1@dont-email.me...
On 10/6/2025 2:21 PM, Don Y wrote:
Ditto for anything that an AI "claims" to have generated.
And electronic design is not just coding. It needs real, organic >>>>>>>> intelligence.
To do it well. More or less adequate electronic design is easier. >>>>>>>
I've cleaned up after few people whose idea of "adequate" fell a bit short.
That must be the 234,412,265th time you've said that.
That may be something of an exaggeration.
Ok 234,412,104th
It may depend on whether you can separate intelligence from personality.
It does not look to me that you can.
Of course you can, if you know more about the subject than you seem to do.
--
Bill Sloman, Sydney
On 14/10/2025 3:26 am, Edward Rawde wrote:
"Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10cib2r$21rgr$1@dont-email.me...
On 13/10/2025 3:35 am, Edward Rawde wrote:
"Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10cgjcn$1i52v$4@dont-email.me...
On 13/10/2025 1:25 am, Edward Rawde wrote:.....
"Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10cg3ib$1e2gm$2@dont-email.me...
On 12/10/2025 4:06 am, john larkin wrote:
On Sat, 11 Oct 2025 09:56:29 -0700, Don Y
<blockedofcourse@foo.invalid> wrote:
On 10/11/2025 5:02 AM, Theo wrote:
Don Y <blockedofcourse@foo.invalid> wrote:
On 10/6/2025 8:49 PM, Edward Rawde wrote:
"Don Y" <blockedofcourse@foo.invalid> wrote in message news:10c1ne0$l8u1$1@dont-email.me...
On 10/6/2025 2:21 PM, Don Y wrote:
Ditto for anything that an AI "claims" to have generated.
And electronic design is not just coding. It needs real, organic >>>>>>>> intelligence.
To do it well. More or less adequate electronic design is easier. >>>>>>>
I've cleaned up after few people whose idea of "adequate" fell a bit short.
That must be the 234,412,265th time you've said that.
That may be something of an exaggeration.
Ok 234,412,104th
It's impressive that a human brain only needs about a hundred watts.
It is woefully slow.
At some things. Not at others.
Name one. The basic operations in the human brain seem to work in the millisecond range, and ECL can do stuff in a nanosecond.
Understanding a language you're fluent in appears to be near enough instant.
Why would you need it any faster?
I've never seen ECL do that.
You don't need ECL for that. Google translate uses a large language model to do rapid translation - faster than a human
simultaneous translator can manage - and one of my wife's friends from her undergraduate days did that for a living, as well as
teaching the skill.
You don't process the speech all that fast - psycholinguists have measured that process in some detail.
Not long ago I used it for help with translation into French.
I had to get a human translator to check it and they made a lot of changes.
I've done that sort of checking on the output of fluent English-speaking Dutch people writing in English. There was always stuff that I did change to make the text read more like what a native speaker of English would have written, and people did notice the changes, even though they didn't change the meaning. It did make the text easier to read.
The human brain does well on large data sets because it has a lot more parallel processing than a regular computer, but - as
demonstrated by the solution of the protein folding problem - you can get more data into a big computer than you can into a
human
brain.
Sure but I've yet to see an online AI which learns from its mistakes.
You don't move in those circles.
And you do?
My wife did, and I talked to some of her friends and colleagues.
And when that happens, who is going to teach it what a mistake is and what isn't?
That's what large language models are for.
Oh dear.
They aren't perfect, but they are a lot better than the stuff they replaced.
Some subjects, such as politics, may run into the same difficulties humans have.
Where will a DT made of ECL with a much larger data set lead us?
You don't need a computer to notice that Trump lies a lot, and sounds off about subject where his understanding is imperfect.
But suppose you have a computer which can model DT with a much larger data set and
a CPU with a similar personality?
Why would anybody want to? Donald Trump's personality isn't one that we would want to emulate.
On 14/10/2025 3:01 am, john larkin wrote:
On Mon, 13 Oct 2025 07:41:58 -0700, john larkin <jl@glen--canyon.com>
wrote:
On Mon, 13 Oct 2025 20:10:43 +1100, Bill Sloman <bill.sloman@ieee.org>
wrote:
It's impressive that a human brain only needs about a hundred watts.
It is woefully slow.
At some things. Not at others.
Some very impressive things can happen in milliseconds.
John Larkin is easily impressed by his own brilliance,
I am impressed by my brain, the one I was born with.
I don't think our brains are a lot different from the ones our
ancestors had 5,000, or 50,000 years ago. So why did evolution make
them/us able to do calculus and design electronics and program in
Rust?
Chomsky thinks that our capacity to use language to communicate depends on fairly recent tweaks to our brains. Human language is a more complicated communication system than anything else we've looked at, and presumably this lets us move to a higher level of abstraction than our competitors. When we got to be able to talk about mathematics we'd got into a more productive region than any other creature we know.
It's assumed that, since brains are such energy hogs, critters don't
evolve much more brain than they really need. And most don't.
But if there's an ecological niche that a big brain can exploit, brains
will get bigger.
Humans benefit from making fire and making weapons, but those wouldn't
need the ability to do abstract math.
They got a lot more from cooperative hunting and defense. Dunbar's number is 150, which means that we live in bigger packs than most social mammals. Language lets us coordinate even bigger groups.
Some people don't like that, and Trump does seem to freeze out experts who don't know him well enough to be aware of his need for flattery.
On Tue, 14 Oct 2025 17:17:22 +1100, Bill Sloman <bill.sloman@ieee.org>
wrote:
Why would anybody want to? Donald Trump's personality isn't one that we
would want to emulate.
Ask the hostages.
John Larkin
Highland Tech Glen Canyon Design Center
Lunatic Fringe Electronics
"Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10ckpu6$2nsdk$1@dont-email.me...
On 14/10/2025 3:26 am, Edward Rawde wrote:
"Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10cib2r$21rgr$1@dont-email.me...
On 13/10/2025 3:35 am, Edward Rawde wrote:
"Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10cgjcn$1i52v$4@dont-email.me...
On 13/10/2025 1:25 am, Edward Rawde wrote:.....
"Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10cg3ib$1e2gm$2@dont-email.me...
On 12/10/2025 4:06 am, john larkin wrote:
On Sat, 11 Oct 2025 09:56:29 -0700, Don Y
<blockedofcourse@foo.invalid> wrote:
On 10/11/2025 5:02 AM, Theo wrote:
Don Y <blockedofcourse@foo.invalid> wrote:
On 10/6/2025 8:49 PM, Edward Rawde wrote:
"Don Y" <blockedofcourse@foo.invalid> wrote in message news:10c1ne0$l8u1$1@dont-email.me...
On 10/6/2025 2:21 PM, Don Y wrote:
Ditto for anything that an AI "claims" to have generated.
And electronic design is not just coding. It needs real, organic >>>>>>>>> intelligence.
To do it well. More or less adequate electronic design is easier. >>>>>>>>
I've cleaned up after few people whose idea of "adequate" fell a bit short.
That must be the 234,412,265th time you've said that.
That may be something of an exaggeration.
Ok 234,412,104th
It may depend on whether you can separate intelligence from personality. >>> It does not look to me that you can.
Of course you can, if you know more about the subject than you seem to do.
Well you don't seem to be able to separate anything from personality so why should AI?
On 10/14/25 16:13, john larkin wrote:
On Tue, 14 Oct 2025 17:17:22 +1100, Bill Sloman <bill.sloman@ieee.org>
wrote:
[...]
Why would anybody want to? Donald Trump's personality isn't one that we
would want to emulate.
Ask the hostages.
John Larkin
Highland Tech Glen Canyon Design Center
Lunatic Fringe Electronics
I've been wondering what arguments DT might have used to achieve
this. It's not his charming personality, for sure.
Jeroen Belleman
On Tue, 14 Oct 2025 17:17:22 +1100, Bill Sloman <bill.sloman@ieee.org>
wrote:
On 14/10/2025 3:26 am, Edward Rawde wrote:
"Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10cib2r$21rgr$1@dont-email.me...I've done that sort of checking on the output of fluent English-speaking
On 13/10/2025 3:35 am, Edward Rawde wrote:
"Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10cgjcn$1i52v$4@dont-email.me...
On 13/10/2025 1:25 am, Edward Rawde wrote:.....
"Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10cg3ib$1e2gm$2@dont-email.me...
On 12/10/2025 4:06 am, john larkin wrote:
On Sat, 11 Oct 2025 09:56:29 -0700, Don Y
<blockedofcourse@foo.invalid> wrote:
On 10/11/2025 5:02 AM, Theo wrote:
Don Y <blockedofcourse@foo.invalid> wrote:
On 10/6/2025 8:49 PM, Edward Rawde wrote:
"Don Y" <blockedofcourse@foo.invalid> wrote in message news:10c1ne0$l8u1$1@dont-email.me...
On 10/6/2025 2:21 PM, Don Y wrote:
Ditto for anything that an AI "claims" to have generated.
And electronic design is not just coding. It needs real, organic >>>>>>>>> intelligence.
To do it well. More or less adequate electronic design is easier. >>>>>>>>
I've cleaned up after few people whose idea of "adequate" fell a bit short.
That must be the 234,412,265th time you've said that.
That may be something of an exaggeration.
Ok 234,412,104th
It's impressive that a human brain only needs about a hundred watts.
It is woefully slow.
At some things. Not at others.
Name one. The basic operations in the human brain seem to work in the millisecond range, and ECL can do stuff in a nanosecond.
Understanding a language you're fluent in appears to be near enough instant.
Why would you need it any faster?
I've never seen ECL do that.
You don't need ECL for that. Google translate uses a large language model to do rapid translation - faster than a human
simultaneous translator can manage - and one of my wife's friends from her undergraduate days did that for a living, as well as
teaching the skill.
You don't process the speech all that fast - psycholinguists have measured that process in some detail.
Not long ago I used it for help with translation into French.
I had to get a human translator to check it and they made a lot of changes.
I've done that sort of checking on the output of fluent English-speaking Dutch people writing in English. There was always stuff that I did change to make the text read more like what a native speaker of English would have written, and people did notice the changes, even though they didn't change the meaning. It did make the text easier to read.
The human brain does well on large data sets because it has a lot more parallel processing than a regular computer, but - as demonstrated by the solution of the protein folding problem - you can get more data into a big computer than you can into a human brain.
Sure but I've yet to see an online AI which learns from its mistakes.
You don't move in those circles.
And you do?
My wife did, and I talked to some of her friends and colleagues.
And when that happens, who is going to teach it what a mistake is and what isn't?
That's what large language models are for.
Oh dear.
They aren't perfect, but they are a lot better than the stuff they replaced.
Some subjects, such as politics, may run into the same difficulties humans have.
Where will a DT made of ECL with a much larger data set lead us?
You don't need a computer to notice that Trump lies a lot, and sounds off about subject where his understanding is imperfect.
But suppose you have a computer which can model DT with a much larger data set and
a CPU with a similar personality?
Why would anybody want to? Donald Trump's personality isn't one that we
would want to emulate.
Ask the hostages.
On 15/10/2025 1:02 am, Edward Rawde wrote:
"Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10ckpu6$2nsdk$1@dont-email.me...
On 14/10/2025 3:26 am, Edward Rawde wrote:
"Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10cib2r$21rgr$1@dont-email.me...
On 13/10/2025 3:35 am, Edward Rawde wrote:
"Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10cgjcn$1i52v$4@dont-email.me...
On 13/10/2025 1:25 am, Edward Rawde wrote:
"Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10cg3ib$1e2gm$2@dont-email.me...
On 12/10/2025 4:06 am, john larkin wrote:
On Sat, 11 Oct 2025 09:56:29 -0700, Don Y
<blockedofcourse@foo.invalid> wrote:
On 10/11/2025 5:02 AM, Theo wrote:
Don Y <blockedofcourse@foo.invalid> wrote:
On 10/6/2025 8:49 PM, Edward Rawde wrote:
"Don Y" <blockedofcourse@foo.invalid> wrote in message news:10c1ne0$l8u1$1@dont-email.me...
On 10/6/2025 2:21 PM, Don Y wrote:
Ditto for anything that an AI "claims" to have generated. >>>>>> .....
And electronic design is not just coding. It needs real, organic >>>>>>>>>> intelligence.
To do it well. More or less adequate electronic design is easier. >>>>>>>>>
I've cleaned up after few people whose idea of "adequate" fell a bit short.
That must be the 234,412,265th time you've said that.
That may be something of an exaggeration.
Ok 234,412,104th
More like a gross exaggeration.
It may depend on whether you can separate intelligence from personality.
It does not look to me that you can.
Of course you can, if you know more about the subject than you seem to do.
Well you don't seem to be able to separate anything from personality so why should AI?
I wonder what you think you mean by that? And any intelligence I have is entirely natural, so my antics aren't any kind of guide
to what artificial intelligence might do. Intelligence is about drawing conclusions from data - personality is more about the
kinds of conclusions you want to be able to draw, which famously biases the sort of data you will go to the trouble of collecting.
The easiest way of seeing it in action is to let different personalities look at notionally identical data sets, and compare their
conclusions.
https://en.wikipedia.org/wiki/The_Bell_Curve
I don't know of anybody who has tried to automate the process of raw data collection, and I suspect that it will be quite a while
before anybody seriously tries to do that. There will be cheats who will pretend that they have.
--
Bill Sloman, Sydney
"Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10cltef$32bdm$1@dont-email.me...
On 15/10/2025 1:02 am, Edward Rawde wrote:
"Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10ckpu6$2nsdk$1@dont-email.me...
On 14/10/2025 3:26 am, Edward Rawde wrote:
"Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10cib2r$21rgr$1@dont-email.me...
On 13/10/2025 3:35 am, Edward Rawde wrote:
"Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10cgjcn$1i52v$4@dont-email.me...
On 13/10/2025 1:25 am, Edward Rawde wrote:
"Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10cg3ib$1e2gm$2@dont-email.me...
On 12/10/2025 4:06 am, john larkin wrote:
On Sat, 11 Oct 2025 09:56:29 -0700, Don Y
<blockedofcourse@foo.invalid> wrote:
On 10/11/2025 5:02 AM, Theo wrote:
Don Y <blockedofcourse@foo.invalid> wrote:
On 10/6/2025 8:49 PM, Edward Rawde wrote:
"Don Y" <blockedofcourse@foo.invalid> wrote in message news:10c1ne0$l8u1$1@dont-email.me...
On 10/6/2025 2:21 PM, Don Y wrote:
Ditto for anything that an AI "claims" to have generated. >>>>>>> .....
And electronic design is not just coding. It needs real, organic >>>>>>>>>>> intelligence.
To do it well. More or less adequate electronic design is easier. >>>>>>>>>>
I've cleaned up after few people whose idea of "adequate" fell a bit short.
That must be the 234,412,265th time you've said that.
That may be something of an exaggeration.
Ok 234,412,104th
More like a gross exaggeration.
It may depend on whether you can separate intelligence from personality.
It does not look to me that you can.
Of course you can, if you know more about the subject than you seem to do.
Well you don't seem to be able to separate anything from personality so why should AI?
I wonder what you think you mean by that? And any intelligence I have is entirely natural, so my antics aren't any kind of guide
to what artificial intelligence might do. Intelligence is about drawing conclusions from data - personality is more about the
kinds of conclusions you want to be able to draw, which famously biases the sort of data you will go to the trouble of collecting.
So if you have enough data you can draw pretty much any conclusion you want.
This appears to be true for some subjects, such as politics, but not as true for other subjects.
At one extreme a subject such as mathematics has statements which are hard to argue with.
At the other extreme there are subjects where it's hard to tell nonsense from anything serious.
Is AI going to do this any better than humans do and if so why?
The easiest way of seeing it in action is to let different personalities look at notionally identical data sets, and compare their
conclusions.
https://en.wikipedia.org/wiki/The_Bell_Curve
I don't know of anybody who has tried to automate the process of raw data collection, and I suspect that it will be quite a while
before anybody seriously tries to do that. There will be cheats who will pretend that they have.
On Tue, 14 Oct 2025 17:33:08 +0200, Jeroen Belleman
<jeroen@nospam.please> wrote:
On 10/14/25 16:13, john larkin wrote:
On Tue, 14 Oct 2025 17:17:22 +1100, Bill Sloman <bill.sloman@ieee.org>
wrote:
[...]
Why would anybody want to? Donald Trump's personality isn't one that we would want to emulate.
Ask the hostages.
John Larkin
Highland Tech Glen Canyon Design Center
Lunatic Fringe Electronics
I've been wondering what arguments DT might have used to achieve
this. It's not his charming personality, for sure.
Jeroen Belleman
Probably brute force application of power.
That's basically what we
elected him to do, act in our interest.
I like the idea of Gaza becoming a luxury golf resort on the
Mediterranean. And Iran becoming a friendly democracy.
And Russia becoming a peaceful European country, but that's obviously
over the top.
On Tue, 14 Oct 2025 16:43:22 +1100, Bill Sloman <bill.sloman@ieee.org>
wrote:
On 14/10/2025 3:01 am, john larkin wrote:
On Mon, 13 Oct 2025 07:41:58 -0700, john larkin <jl@glen--canyon.com>
wrote:
On Mon, 13 Oct 2025 20:10:43 +1100, Bill Sloman <bill.sloman@ieee.org> >>>> wrote:
It's impressive that a human brain only needs about a hundred watts.
It is woefully slow.
At some things. Not at others.
Some very impressive things can happen in milliseconds.
John Larkin is easily impressed by his own brilliance,
I am impressed by my brain, the one I was born with.
I don't think our brains are a lot different from the ones our
ancestors had 5,000, or 50,000 years ago. So why did evolution make
them/us able to do calculus and design electronics and program in
Rust?
Chomsky thinks that our capacity to use language to communicate depends
on fairly recent tweaks to our brains. Human language is a more
complicated communication system than anything else we've looked at, and
presumably this lets us move to a higher level of abstraction than our
competitors. When we got to be able to talk about mathematics we'd got
into a more productive region than any other creature we know.
It's assumed that, since brains are such energy hogs, critters don't
evolve much more brain than they really need. And most don't.
But if there's an ecological niche that a big brain can exploit, brains
will get bigger.
Humans benefit from making fire and making weapons, but those wouldn't
need the ability to do abstract math.
They got a lot more from cooperative hunting and defense. Dunbar's
number is 150 which means that we live in bigger packs than most social
mammals. Language lets us coordinate even bigger groups.
Some people don't like that, and Trump does seem to freeze out experts who
don't know him well enough to be aware of his need for flattery.
You were starting to have a sensible discussion.
On 10/5/25 10:42 AM, john larkin wrote:
On Sun, 05 Oct 2025 17:54:56 +0100, Cursitor Doom <cd@notformail.com>
wrote:
I can't help noticing since I drew everyone's attention to Grok that
it's gone awfully quiet around here. I did postulate that AI might
kill this group, but maybe it's happening quicker than I'd expected.
:-(
No, it's just that few people design electronics now.
And the ones who still do, they don't let them retire :-(
On 15/10/2025 1:16 am, john larkin wrote:
On Tue, 14 Oct 2025 16:43:22 +1100, Bill Sloman <bill.sloman@ieee.org>
wrote:
On 14/10/2025 3:01 am, john larkin wrote:
On Mon, 13 Oct 2025 07:41:58 -0700, john larkin <jl@glen--canyon.com>
wrote:
On Mon, 13 Oct 2025 20:10:43 +1100, Bill Sloman <bill.sloman@ieee.org> >>>>> wrote:
It's impressive that a human brain only needs about a hundred watts.
It is woefully slow.
At some things. Not at others.
Some very impressive things can happen in milliseconds.
John Larkin is easily impressed by his own brilliance,
I am impressed by my brain, the one I was born with.
I don't think our brains are a lot different from the ones our
ancestors had 5,000, or 50,000 years ago. So why did evolution make
them/us able to do calculus and design electronics and program in
Rust?
Chomsky thinks that our capacity to use language to communicate depends
on fairly recent tweaks to our brains. Human language is a more
complicated communication system than anything else we've looked at, and >>> presumably this lets us move to a higher level of abstraction than our
competitors. When we got to be able to talk about mathematics we'd got
into a more productive region than any other creature we know.
It's assumed that, since brains are such energy hogs, critters don't
evolve much more brain than they really need. And most don't.
But if there's an ecological niche that a big brain can exploit, brains
will get bigger.
Humans benefit from making fire and making weapons, but those wouldn't >>>> need the ability to do abstract math.
They got a lot more from cooperative hunting and defense. Dunbar's
number is 150 which means that we live in bigger packs than most social
mammals. Language lets us coordinate even bigger groups.
Some people don't like that, and Trump does seem to freeze out experts who
don't know him well enough to be aware of his need for flattery.
You were starting to have a sensible discussion.
Sensible of what? I'm well aware that you think that the sun shines out of Donald Trump's bottom, but that's mainly because he's a worse egomaniac than you are. Your idea of a "sensible discussion" is one that isn't dismissive of your favourite misconceptions, and you do have a lot of them.
https://en.wikipedia.org/wiki/Merchants_of_Doubt
On 15/10/2025 5:02 am, Edward Rawde wrote:
"Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10cltef$32bdm$1@dont-email.me...
On 15/10/2025 1:02 am, Edward Rawde wrote:
"Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10ckpu6$2nsdk$1@dont-email.me...
On 14/10/2025 3:26 am, Edward Rawde wrote:
"Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10cib2r$21rgr$1@dont-email.me...
On 13/10/2025 3:35 am, Edward Rawde wrote:
"Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10cgjcn$1i52v$4@dont-email.me...
On 13/10/2025 1:25 am, Edward Rawde wrote:
"Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10cg3ib$1e2gm$2@dont-email.me...
On 12/10/2025 4:06 am, john larkin wrote:
On Sat, 11 Oct 2025 09:56:29 -0700, Don Y
<blockedofcourse@foo.invalid> wrote:
On 10/11/2025 5:02 AM, Theo wrote:
Don Y <blockedofcourse@foo.invalid> wrote:
On 10/6/2025 8:49 PM, Edward Rawde wrote:
"Don Y" <blockedofcourse@foo.invalid> wrote in message news:10c1ne0$l8u1$1@dont-email.me...
On 10/6/2025 2:21 PM, Don Y wrote:
Ditto for anything that an AI "claims" to have generated. >>>>>>>> .....
And electronic design is not just coding. It needs real, organic >>>>>>>>>>>> intelligence.
To do it well. More or less adequate electronic design is easier. >>>>>>>>>>>
I've cleaned up after few people whose idea of "adequate" fell a bit short.
That must be the 234,412,265th time you've said that.
That may be something of an exaggeration.
Ok 234,412,104th
More like a gross exaggeration.
It may depend on whether you can separate intelligence from personality.
It does not look to me that you can.
Of course you can, if you know more about the subject than you seem to do.
Well you don't seem to be able to separate anything from personality so why should AI?
I wonder what you think you mean by that? And any intelligence I have is entirely natural, so my antics aren't any kind of
guide
to what artificial intelligence might do. Intelligence is about drawing conclusions from data - personality is more about the
kinds of conclusions you want to be able to draw, which famously biases the sort of data you will go to the trouble of
collecting.
So if you have enough data you can draw pretty much any conclusion you want.
That's not what I was saying. If you are selective about the data you do collect, you can construct plausible but misleading stories, and the answer to that is to collect more data from a genuinely representative sample of test subjects.
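A minimal sketch of that point in Python (purely illustrative - the population, the numbers and the names here are my own assumptions, not anything anyone in the thread has posted): a selectively collected sample supports a misleading conclusion, while a random, representative sample of the same size recovers the true picture.

    import random

    random.seed(0)

    # Hypothetical population: 10,000 test subjects with scores drawn from
    # the same normal distribution (mean 100, standard deviation 15).
    population = [random.gauss(100, 15) for _ in range(10_000)]

    def mean(xs):
        return sum(xs) / len(xs)

    # Selective collection: keep only subjects scoring above 110 - the data
    # that supports the story we want to tell.
    selective = [x for x in population if x > 110][:500]

    # Representative collection: a simple random sample of the same size.
    representative = random.sample(population, 500)

    print("true population mean:       %.1f" % mean(population))
    print("selectively sampled mean:   %.1f" % mean(selective))
    print("representative sample mean: %.1f" % mean(representative))

The selectively sampled mean lands well above the true value of about 100, while the random sample lands close to it - which is the whole argument for collecting from a genuinely representative sample.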
This appears to be true for some subjects, such as politics, but not as true for other subjects.
It's certainly not true of politics
https://en.wikipedia.org/wiki/FiveThirtyEight
but there are any number of people who will lie to you about it.
At one extreme a subject such as mathematics has statements which are hard to argue with.
At the other extreme there are subjects where it's hard to tell nonsense from anything serious.
It can take quite a lot of effort to detect the lies, but some people do seem to be willing to put in that effort.
Is AI going to do this any better than humans do and if so why?
If it does - and it should - it would be because it could integrate more data, and systematically check it for distortions and
inconsistencies.
There will be human actors who will use the same technology to construct even more plausible nonsense.
https://en.wikipedia.org/wiki/Merchants_of_Doubt
Lying to people is a profitable industry and the people who make money out of it would love to automate it.
The easiest way of seeing it in action is to let different personalities look at notionally identical data sets, and compare
their
conclusions.
https://en.wikipedia.org/wiki/The_Bell_Curve
I don't know of anybody who has tried to automate the process of raw data collection, and I suspect that it will be quite a
while
before anybody seriously tries to do that. There will be cheats who will pretend that they have.
--
Bill Sloman, Sydney
On Tue, 14 Oct 2025 22:24:55 -0700, Joerg <news@analogconsultants.com>
wrote:
On 10/5/25 10:42 AM, john larkin wrote:
On Sun, 05 Oct 2025 17:54:56 +0100, Cursitor Doom <cd@notformail.com>
wrote:
I can't help noticing since I drew everyone's attention to Grok that
it's gone awfully quiet around here. I did postulate that AI might
kill this group, but maybe it's happening quicker than I'd expected.
:-(
No, it's just that few people design electronics now.
And the ones who still do, they don't let them retire :-(
Yes. I know a couple of guys who retired voluntarily or were nudged out by bean counters. Now they work as much as they please, for their former employers, and make a lot more money.
John Larkin
Highland Tech Glen Canyon Design Center
Lunatic Fringe Electronics
"Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10cndsb$3er5o$1@dont-email.me...
https://en.wikipedia.org/wiki/Merchants_of_Doubt
On 15/10/2025 5:02 am, Edward Rawde wrote:
"Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10cltef$32bdm$1@dont-email.me...
On 15/10/2025 1:02 am, Edward Rawde wrote:
"Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10ckpu6$2nsdk$1@dont-email.me...
On 14/10/2025 3:26 am, Edward Rawde wrote:
"Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10cib2r$21rgr$1@dont-email.me...
On 13/10/2025 3:35 am, Edward Rawde wrote:
"Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10cgjcn$1i52v$4@dont-email.me...
On 13/10/2025 1:25 am, Edward Rawde wrote:
"Bill Sloman" <bill.sloman@ieee.org> wrote in message news:10cg3ib$1e2gm$2@dont-email.me...
On 12/10/2025 4:06 am, john larkin wrote:
On Sat, 11 Oct 2025 09:56:29 -0700, Don Y
<blockedofcourse@foo.invalid> wrote:
On 10/11/2025 5:02 AM, Theo wrote:
Don Y <blockedofcourse@foo.invalid> wrote:
On 10/6/2025 8:49 PM, Edward Rawde wrote:
"Don Y" <blockedofcourse@foo.invalid> wrote in message news:10c1ne0$l8u1$1@dont-email.me...
On 10/6/2025 2:21 PM, Don Y wrote:
Ditto for anything that an AI "claims" to have generated. >>>>>>>>> .....
And electronic design is not just coding. It needs real, organic >>>>>>>>>>>>> intelligence.
To do it well. More or less adequate electronic design is easier. >>>>>>>>>>>>
I've cleaned up after few people whose idea of "adequate" fell a bit short.
That must be the 234,412,265th time you've said that.
That may be something of an exaggeration.
Ok 234,412,104th
More like a gross exaggeration.
It may depend on whether you can separate intelligence from personality.
It does not look to me that you can.
Of course you can, if you know more about the subject than you seem to do.
Well you don't seem to be able to separate anything from personality so why should AI?
I wonder what you think you mean by that? And any intelligence I have is entirely natural, so my antics aren't any kind of
guide
to what artificial intelligence might do. Intelligence is about drawing conclusions from data - personality is more about the
kinds of conclusions you want to be able to draw, which famously biases the sort of data you will go to the trouble of
collecting.
So if you have enough data you can draw pretty much any conclusion you want.
That's not what I was saying. If you are selective about the data you do collect, you can construct plausible but misleading
stories, and the answer to that is to collect more data from a genuinely representative sample of test subjects
This appears to be true for some subjects, such as politics, but not as true for other subjects.
It's certainly not true of politics
https://en.wikipedia.org/wiki/FiveThirtyEight
but there are any number of people who will lie to you about it.
And those people might turn into AI in the future.
At one extreme a subject such as mathematics has statements which are hard to argue with.
At the other extreme there are subjects where it's hard to tell nonsense from anything serious.
It can take quite a lot of effort to detect the lies, but some people do seem to be willing to put in that effort.
Is AI going to do this any better than humans do and if so why?
If it does - and it should - it would be because it could integrate more data, and systematically check it for distortions and
inconsistencies.
There will be human actors who will use the same technology to construct even more plausible nonsense.
https://en.wikipedia.org/wiki/Merchants_of_Doubt
Lying to people is a profitable industry and the people who make money out of it would love to automate it.
Which is probably what will happen.
Nothing in our own brains gives any particular status to truth so why should AI be different?
If it's trained by humans it will be like humans.
The easiest way of seeing it in action is to let different personalities look at notionally identical data sets, and compare
their conclusions.
https://en.wikipedia.org/wiki/The_Bell_Curve
I don't know of anybody who has tried to automate the process of raw data collection, and I suspect that it will be quite a
while before anybody seriously tries to do that. There will be cheats who will pretend that they have.
On Thu, 16 Oct 2025 00:03:19 +1100, Bill Sloman <bill.sloman@ieee.org>
wrote:
On 15/10/2025 1:16 am, john larkin wrote:
On Tue, 14 Oct 2025 16:43:22 +1100, Bill Sloman <bill.sloman@ieee.org>
wrote:
On 14/10/2025 3:01 am, john larkin wrote:
On Mon, 13 Oct 2025 07:41:58 -0700, john larkin <jl@glen--canyon.com> >>>>> wrote:
On Mon, 13 Oct 2025 20:10:43 +1100, Bill Sloman <bill.sloman@ieee.org> >>>>>> wrote:
It's impressive that a human brain only needs about a hundred watts.
It is woefully slow.
At some things. Not at others.
Some very impressive things can happen in milliseconds.
John Larkin is easily impressed by his own brilliance,
I am impressed by my brain, the one I was born with.
I don't think our brains are a lot different from the ones our
ancestors had 5,000, or 50,000 years ago. So why did evolution make
them/us able to do calculus and design electronics and program in
Rust?
Chomsky thinks that our capacity to use language to communicate depends >>>> on fairly recent tweaks to our brains. Human language is a more
complicated communication system than anything else we've looked at, and >>>> presumably this lets us move to a higher level of abstraction than our >>>> competitors. When we got to be able to talk about mathematics we'd got >>>> into a more productive region than any other creature we know.
It's assumed that, since brains are such energy hogs, critters don't >>>>> evolve much more brain than they really need. And most don't.
But if there's an ecological niche that a big brain can exploit, brains >>>> will get bigger.
Humans benefit from making fire and making weapons, but those wouldn't >>>>> need the ability to do abstract math.
They got a lot more from cooperative hunting and defense. Dunbar's
number is 150 which means that we live in bigger packs than most social >>>> mammals. Language lets us coordinate even bigger groups.
Some people don't like that, and Trump does seem freeze out experts who >>>> don't know him well enough to be aware of his need for flattery.
You were starting to have a sensible discussion.
Sensible of what? I'm well aware that you think that the sun shines out
of Donald Trump's bottom, but that's mainly because he's a worse
egomaniac than you are. Your idea of a "sensible discussion" is one that
isn't dismissive of your favourite misconceptions, and you do have a lot
of them.
Sorry, my mistake, you weren't starting to have a sensible discussion.
TDS is a weird disease.
It must be frustrating.
Designing electronics is much more amusing.
On 16/10/2025 1:48 am, john larkin wrote:
On Thu, 16 Oct 2025 00:03:19 +1100, Bill Sloman <bill.sloman@ieee.org>
wrote:
On 15/10/2025 1:16 am, john larkin wrote:
On Tue, 14 Oct 2025 16:43:22 +1100, Bill Sloman <bill.sloman@ieee.org> >>>> wrote:
On 14/10/2025 3:01 am, john larkin wrote:
On Mon, 13 Oct 2025 07:41:58 -0700, john larkin <jl@glen--canyon.com> >>>>>> wrote:
On Mon, 13 Oct 2025 20:10:43 +1100, Bill Sloman <bill.sloman@ieee.org> >>>>>>> wrote:
It's impressive that a human brain only needs about a hundred watts.
It is woefully slow.
At some things. Not at others.
Some very impressive things can happen in milliseconds.
John Larkin is easily impressed by his own brilliance,
I am impressed by my brain, the one I was born with.
I don't think our brains are a lot different from the ones our
ancestors had 5,000, or 50,000 years ago. So why did evolution make >>>>>> them/us able to do calculus and design electronics and program in
Rust?
Chomsky thinks that our capacity to use language to communicate depends >>>>> on fairly recent tweaks to our brains. Human language is a more
complicated communication system than anything else we've looked at, and >>>>> presumably this lets us move to a higher level of abstraction than our >>>>> competitors. When we got to be able to talk about mathematics we'd got >>>>> into a more productive region than any other creature we know.
It's assumed that, since brains are such energy hogs, critters don't >>>>>> evolve much more brain than they really need. And most don't.
But if there's an ecological niche that a big brain can exploit, brains >>>>> will get bigger.
Humans benefit from making fire and making weapons, but those wouldn't >>>>>> need the ability to do abstract math.
They got a lot more from cooperative hunting and defense. Dunbar's
number is 150 which means that we live in bigger packs than most social >>>>> mammals. Language lets us coordinate even bigger groups.
Some people don't like that, and Trump does seem freeze out experts who >>>>> don't know him well enough to be aware of his need for flattery.
You were starting to have a sensible discussion.
Sensible of what? I'm well aware that you think that the sun shines out
of Donald Trump's bottom, but that's mainly because he's an worse
egomaniac than you are. Your idea of a "sensible discussion" is one that >>> isn't dismissive of your favourite misconceptions, and you do have a lot >>> of them.
Sorry, my mistake, you weren't starting to have a sensble discussion.
TDS is a weird disease.
Trump derangement syndrome has been invented by Trump supporters as an insult to be used against people who have enough sense to realise that Donald Trump is a menace.
It must be frustrating.
It certainly is. Trump supporters do seem to be blind to his defects.
Designing electronics is much more amusing.
Retreating into your ivory tower may well be comforting, but when you have somebody as silly as Hitler and Stalin in charge of the country, ivory towers are vulnerable.
<snipped self-indulgence>