Sysop: Amessyroom
Location: Fayetteville, NC
Users: 42
Nodes: 6 (0 / 6)
Uptime: 01:54:43
Calls: 220
Calls today: 1
Files: 824
Messages: 121,544
Posted today: 6
https://techxplore.com/news/2024-11-nvidia-intel-dow-index-ai.html
. . .
NVidia makes the special parallel-processing chips most
widely used for "AI" applications these days. It has been
selling VAST quantities of those for awhile.
These are not really "CPU" chips however - they do
parallel math ops REALLY fast and that's their main
thing.
However the way most people will ACCESS those "AI"
apps will be through Intel-powered PCs.
So don't think Intel is goin' down anytime soon, but
it WILL have to share the processor market a bit more.
On 03/11/2024 04:54, 186282@ud0s4.net wrote:
https://techxplore.com/news/2024-11-nvidia-intel-dow-index-ai.html
. . .
NVidia makes the special parallel-processing chips most
widely used for "AI" applications these days. It has been
selling VAST quantities of those for awhile.
These are not really "CPU" chips however - they do
parallel math ops REALLY fast and that's their main
thing.
However the way most people will ACCESS those "AI"
apps will be through Intel-powered PCs.
So don't think Intel is goin' down anytime soon, but
it WILL have to share the processor market a bit more.
I think that ARM having eaten into its market in low power devices is
now swinging up towards an equal power performance solution.
And the trend away from customisable solutions towards pure consumer
crap means it's not important what the OS actually is.
No, I don't think intel powered PCs are the future any more than I think
that Windows PCs are.
What we seem to be seeing is a consumer base that is delighted to have
Siri or whatever talk to someone else's cloud, share all their secret
life online and drip feed them with marketing propaganda, whilst tying
them in to rented software and a planned lifetime of a few years for the latest 'shiny new thing'
A gentler more commercial form of communism...
On 2024-11-03, The Natural Philosopher <tnp@invalid.invalid> wrote:
What we seem to be seeing is a consumer base that is delighted to have
Siri or whatever talk to someone else's cloud, share all their secret
life online and drip feed them with marketing propaganda, whilst tying
them in to rented software and a planned lifetime of a few years for the
latest 'shiny new thing'
A gentler more commercial form of communism...
Well, all this subscription-based stuff shows that even right-wing corporations are pursuing Karl Marx's fondest dream: the elimination
of private property.
"Siri, define 'bugging'."
On 11/3/24 3:48 AM, The Natural Philosopher wrote:
I think that ARM having eaten into its market in low power devices is
now swinging up towards an equal power performance solution.
They're trying, but Intel has very well refined solutions in that
market. ARM may ruin itself trying to catch up ...
On Sun, 3 Nov 2024 08:48:50 +0000, The Natural Philosopher wrote:
A gentler more commercial form of communism...
“Communism” is when the Government does it.
What do you call it when a private company does it?
On Sun, 3 Nov 2024 20:50:49 -0000 (UTC), Lawrence D'Oliveiro wrote:
On Sun, 3 Nov 2024 08:48:50 +0000, The Natural Philosopher wrote:
A gentler more commercial form of communism...
“Communism” is when the Government does it.
What do you call it when a private company does it?
Corporatocracy. The government plays its part.
On Sun, 3 Nov 2024 15:24:19 -0500, 186282@ud0s4.net wrote:
They're trying, but Intel has very well refined solutions in that
market. ARM may ruin itself trying to catch up and they still won't
have that Intel brand-rec. IMHO ARM should continue to focus on
'devices', seeking the best mix of performance and low power
consumption.
<quibble>
I doubt Arm Holdings will ruin itself. Its licensees, otoh, may well do
so.
</quibble>
ANYway - ARM can surely improve chip performance, but should that be
its priority, something to blow the net worth on? Lower-energy
seems more of an ARM thing and what all 'device' owners want.
On Sun, 3 Nov 2024 18:30:47 -0500, 186282@ud0s4.net wrote:
ANYway - ARM can surely improve chip performance, but should that be
its priority, something to blow the net worth on? Lower-energy
seems more of an ARM thing and what all 'device' owners want.
It isn't clear to me how the interaction of Arm Holdings and their
licensees works. Arm Holdings doesn't fabricate devices. Overlooking the current feud, when Arm licenses its designs to Qualcomm, who is
responsible for the integration into a Snapdragon SoC?
The Raspberry Pi family is another example. The Pi 4 uses the Broadcom
BCM2711 with 4 Cortex-A72 cores at 1.5 GHz. The Pi 5 has the BCM2712 with
4 Cortex-A76 cores at 2.4 GHz. The 5 is much faster but requires a better
power supply. Cooling is strongly suggested if you're going to push it.
How much of the power and performance difference is from the core design
and how much from Broadcom's decisions during integration?
The A78 is claimed to be better for power and performance where the
Cortex-X1 is the balls-to-the-wall rework of the A78 used in the
Snapdragon 888, but that design also has 3 A78 cores and 4 A55 cores to
balance things out.
The first devices were hot little buggers, which ultimately got blamed on
Samsung's manufacturing process versus TSMC's, so it seems it's not only
the Arm design but who fabs the device.
https://www.patentlyapple.com/2021/05/tsmc-bailed-qualcomm-out-of-a-jam-earlier-this-year-when-the-snapdragon-888-produced-by-samsung-caused-overheating-issues.html
In short, Arm's designs are aimed at different criteria but much of the responsibility depends on what the licensees do with the designs. Bring on the finger pointing.
On 2024-11-03, Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Sun, 3 Nov 2024 08:48:50 +0000, The Natural Philosopher wrote:
A gentler more commercial form of communism...
“Communism” is when the Government does it.
What do you call it when a private company does it?
I've heard the term "corporatism". IMHO that's a contraction of
"corporate fascism".
On Sun, 3 Nov 2024 22:25:34 -0500, 186282@ud0s4.net wrote:
My main gripe was the version of Deb rolled out with the P5 - it
wasn't right; seemed like everything I was using them for didn't
work. Too-early release maybe?
I avoided bookworm on my Debian desktop box but I haven't had a problem
with the Raspberry Pi OS version. It runs VS Code and the Pico SDK, which
is about all I've done with it so far.
Been into BMax/BeeLink mini-boxes of late ... all have kinda 'cheap
laptop' i3 calibre CPUs. Put Manjaro on a couple, F40 on one and
maybe FreeBSD on the remaining unit. Happy to brag that the included
Win did not run for a single microsecond on any unit
My main machine has been a Beelink with a Ryzen 7 4700U. The specs are
very similar to my Acer Swift 3 laptop. It has Ubuntu 22.04 and has been
perking along since February 2023. I'm not that crazy about Ubuntu but my
previous main box was SuSE and I wanted a change.
Anyway, Linux lets these boxes be all that they can be as opposed to
the obese pig Win - and you don't have to create an online M$ spy
account !
On Mon, 4 Nov 2024 01:51:43 -0500, 186282@ud0s4.net wrote:
Anyway, Linux lets these boxes be all that they can be as opposed to
the obese pig Win - and you don't have to create an online M$ spy
account !
The Beelink came with Windows 11 Pro. I was a little skeptical of the
license but Win11 didn't last long enough to bother. I've done dual boots
in the past but lately I go scorched earth if there's anything on the
drive.
On Sun, 3 Nov 2024 15:24:19 -0500, 186282@ud0s4.net wrote:
On 11/3/24 3:48 AM, The Natural Philosopher wrote:
I think that ARM having eaten into its market in low power devices is
now swinging up towards an equal power performance solution.
They're trying, but Intel has very well refined solutions in that
market. ARM may ruin itself trying to catch up ...
I’ve got news for you: ARM has already caught up and has long been inhabiting the high-performance computing space.
<https://en.wikipedia.org/wiki/Fujitsu_A64FX> <https://en.wikipedia.org/wiki/Fugaku_(supercomputer)>
My direct experience with Pi4 vs Pi5 is that the thing
seems mostly twice as fast. The 5 may have better power
management too - but at full tilt it can use more juice.
On 11/3/24 22:56, Lawrence D'Oliveiro wrote:
On Sun, 3 Nov 2024 15:24:19 -0500, 186282@ud0s4.net wrote:
On 11/3/24 3:48 AM, The Natural Philosopher wrote:
I think that ARM having eaten into its market in low power devices
is now swinging up towards an equal power performance solution.
They're trying, but Intel has very well refined solutions in that
market. ARM may ruin itself trying to catch up ...
I’ve got news for you: ARM has already caught up and has long been
inhabiting the high-performance computing space.
<https://en.wikipedia.org/wiki/Fujitsu_A64FX>
<https://en.wikipedia.org/wiki/Fugaku_(supercomputer)>
And you also have ARM based OS X Apple Macs in the consumer market.
Intel have had competitor chips (non-x86) in the past and survived. I
think the main difference this time is that MS Windows is no longer
dominant. Competitor chips have similar revenue to fund development.
Recently got an Ard Uno and the bits and pieces to build an
electronic door lock. TWO-button switch. The idea is to enter a 7 or
8 digit BINARY combo using the buttons with maybe a 10-15 second
time-out. Gotta decide on polling -vs- interrupt ... interrupt can
use much less standby power if you do it right combined with the Ard
low-power/sleep library. Amazing what can be done even with really
weak/slow chips. For MOST Ard uses though I'd rec the Mega2560 - but
you may have to tweak the libs for accessories as the pins are
different. Built some good multi-channel solar-powered environmental
monitors using those boards.
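The two-button binary-combo idea is easy to prototype off-board first. Here's a minimal Python mock-up of the entry/timeout bookkeeping (the class name, the 12-second default, and the timestamped event feed are all illustrative; the real build would be Arduino C++ with interrupts driving it):

```python
class ComboLock:
    """Two-button binary combination lock, simulated.

    Each button press reports a 0 or 1 bit. If the gap between presses
    exceeds `timeout_s`, the partial entry is discarded -- the 10-15
    second time-out idea from the post.
    """

    def __init__(self, code, timeout_s=12.0):
        self.code = list(code)        # e.g. [1, 0, 1, 1, 0, 1, 0] for 7 bits
        self.timeout_s = timeout_s
        self.entered = []
        self.last_press = None

    def press(self, t, bit):
        # Stale partial entry? Start over, keeping the current press.
        if self.last_press is not None and t - self.last_press > self.timeout_s:
            self.entered = []
        self.last_press = t
        self.entered.append(bit)
        # Keep only the most recent len(code) bits entered.
        if len(self.entered) > len(self.code):
            self.entered.pop(0)
        # True means "drive the lock open".
        return self.entered == self.code
```

On the actual Uno you'd wake from sleep on a pin-change interrupt, call something like `press(millis(), bit)`, and go back to sleep; the timeout logic is identical either way.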
Caveat: The wowe does not get good reviews on Amazon (2.7 out of 5
stars, last I checked).
A second effect is the rise of GPU computation. Some portion of the
money flow that used to go to Intel for their "next faster CPU" is now instead flowing to Nvidia for their "next faster GPU".
On Mon, 4 Nov 2024 16:40:30 -0000 (UTC), Rich wrote:
A second effect is the rise of GPU computation. Some portion of the
money flow that used to go to Intel for their "next faster CPU" is now
instead flowing to Nvidia for their "next faster GPU".
If the AI bubble dies down there will be bloodshed. (not advocating
violence but there's a lot of money riding on that game)
On Mon, 4 Nov 2024 13:01:01 +0000
The Natural Philosopher <tnp@invalid.invalid> wrote:
However fabrication limits are getting stuck at 10nm and below, and
clock speeds are stuck at a few GHz which means that the
power-performance ratio is pretty much the same for Intel and ARM
architectures. Only by having fewer transistors and implicitly doing
less, can the power be reduced.
I.e. Moore's law has basically stopped representing reality. And ARM
no longer has a fantastic power-performance edge over Intel. Its one
advantage is it doesn't have to support a legacy architecture. And so
it's probably cheaper and less buggy.
I've long held that necessity will ultimately force a serious rethink of programming practices w.r.t. resource-efficiency once Moore's Law runs
afoul of pesky real-world physics principles, i.e. "eighteen inches is a nanosecond" vs. "you can't cram an arbitrary amount of stuff into a
finite space without creating a black hole." Gonna be real interesting
when we finally hit the wall.
Mmm. It is a rather new phenomenon - the collusion of state and big
capital to capture markets by diktat, rather than by competition.
For a given clock speed, which is limited by the physical dimensions of
the chip, the smaller the transistors the less power it takes to run the chips.
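That relationship is the classic CMOS dynamic-power estimate, P ≈ α·C·V²·f. A quick sketch with made-up numbers shows why smaller transistors (lower capacitance, and usually lower voltage) cut power at a fixed clock:

```python
def dynamic_power(alpha, c_farads, v_volts, f_hz):
    # Classic switching-power estimate for CMOS logic: P = alpha * C * V^2 * f
    # alpha = activity factor (fraction of gates toggling per cycle)
    return alpha * c_farads * v_volts**2 * f_hz

# Illustrative values only: halving switched capacitance and dropping
# the supply from 1.0 V to 0.8 V at the same 3 GHz clock.
p_old = dynamic_power(0.2, 1e-9,   1.0, 3e9)   # 0.6 W for the notional block
p_new = dynamic_power(0.2, 0.5e-9, 0.8, 3e9)   # 0.192 W -- a 68% cut
```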
On Mon, 4 Nov 2024 18:58:09 -0000 (UTC)
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
i.e. "eighteen inches is a nanosecond"
For suitably small definitions of “inch”, I suppose ...
Well damn, I was misremembering from accounts of Grace Hopper's famous
"nanosecond" wires. I guess another fundamental principle is "double-check
yer dang constants..." ;)
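Hopper's figure is easy to re-derive from the speed of light; a quick sanity check (pure arithmetic, nothing assumed beyond c):

```python
C = 299_792_458  # speed of light in vacuum, m/s

# Distance light covers in one nanosecond, converted to inches.
inches_per_ns = C * 1e-9 / 0.0254
# ~11.8 inches -- the length Hopper cut her demo wires to,
# not eighteen: light only covers about 30 cm per nanosecond.
```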
On 04/11/2024 18:20, John Ames wrote:
I've long held that necessity will ultimately force a serious rethink of
programming practices w.r.t. resource-efficiency once Moore's Law runs
afoul of pesky real-world physics principles, i.e. "eighteen inches is a
nanosecond" vs. "you can't cram an arbitrary amount of stuff into a
finite space without creating a black hole." Gonna be real interesting
when we finally hit the wall.
It may be that computing as we understand it is simply a mature
technology, and there isn't much more to actually do.
On Mon, 4 Nov 2024 18:30:27 +0000
The Natural Philosopher <tnp@invalid.invalid> wrote:
It may be that computing as we understand it is simply a mature
technology, and there isn't much more to actually do.
You have to wonder - but businesses accustomed to getting cheap-as-free upgrades of ~2^N in raw compute every few years will doubtless still
expect to scale their capabilities upward accordingly. When they can't,
it's gonna be real interesting to see what kind of renewed interest
there'll be in maximizing efficient use of the available resources, vs.
the "throw a beefier computer at it" approach that has very much been standard practice for the last ~30 yrs.
On Mon, 4 Nov 2024 02:02:38 -0600, Harold Stevens wrote:
Caveat: The wowe does not get good reviews on Amazon (2.7 out of 5
stars, last I checked).
I got interested in minis when the company bought a Mac mini to build an
iPhone app - not the Mac part but the form factor. The Intel NUCs were
overpriced for what they were and I prefer AMD.
I won't say Beelink was the only offering a couple of years ago but they
were one of the first. Now there's a whole raft of copycats. The Beelink
has worked for me so I'd stick with them if I buy another although I'm
sure some of the others are just as good.
On 4 Nov 2024 17:47:36 GMT, rbowman wrote:
The Intel NUCs were overpriced for what they were ...
And Intel still couldn’t make money on them. That’s why it gave up.
On 4 Nov 2024 07:03:56 GMT, rbowman wrote:
The Beelink came with Windows 11 Pro. I was a little skeptical of the
license but Win11 didn't last long enough to bother. I've done dual
boots in the past but lately I go scorched earth if there's anything on
the drive.
The trouble with that is, you are still paying the Microsoft tax.
I've long held that necessity will ultimately force a serious rethink of programming practices w.r.t. resource-efficiency once Moore's Law runs
afoul of pesky real-world physics principles, i.e. "eighteen inches is a nanosecond" vs. "you can't cram an arbitrary amount of stuff into a
finite space without creating a black hole." Gonna be real interesting
when we finally hit the wall.
And I rather expect the current AI marketer-driven hype cycle to crash
down just like all the other marketer-driven AI hype cycles of the past
crashed down. At which point, what small usefulness the current AIs do
have that the marketers have been hyping will finally come out.
On Mon, 4 Nov 2024 02:35:50 -0500, 186282@ud0s4.net wrote:
Recently got an Ard Uno and the bits and pieces to build an
electronic door lock. TWO-button switch. The idea is to enter a 7 or
8 digit BINARY combo using the buttons with maybe a 10-15 second
time-out. Gotta decide on polling -vs- interrupt ... interrupt can
use much less standby power if you do it right combined with the Ard
low-power/sleep library. Amazing what can be done even with really
weak/slow chips. For MOST Ard uses though I'd rec the Mega2560 - but
you may have to tweak the libs for accessories as the pins are
different. Built some good multi-channel solar-powered environmental
monitors using those boards.
I've got a few Unos. My problem is figuring out what to do with them. I
don't mean the coding/peripheral aspects but projects that are something I
need.
Right now I have a 4 wheel chassis with a primitive IR keypad
controller. The long range plan is to incorporate the PWM ability of the
L298Ns and go to the nRF24L01 for two way communication. The problem is
the chassis has limitations.
I've got a couple of the Nano 33 BLE Sense boards. An MIT course in TinyML
used them. The nRF52840 is Arm. All of the onboard sensors make it more
expensive but it's handier if you can make use of them.
Right now I'm messing around with the Pico W. Too many choices, too little discipline...
On 2024-11-04, The Natural Philosopher <tnp@invalid.invalid> wrote:
On 04/11/2024 18:20, John Ames wrote:
I've long held that necessity will ultimately force a serious rethink of
programming practices w.r.t. resource-efficiency once Moore's Law runs
afoul of pesky real-world physics principles, i.e. "eighteen inches is a
nanosecond" vs. "you can't cram an arbitrary amount of stuff into a
finite space without creating a black hole." Gonna be real interesting
when we finally hit the wall.
Horrors - we might have to start programming efficiently again.
It may be that computing as we understand it is simply a mature
technology, and there isn't much more to actually do.
I've been seeing a lot of immature behaviour lately.
On Mon, 4 Nov 2024 18:59:07 -0000 (UTC), Lawrence D'Oliveiro wrote:
On 4 Nov 2024 07:03:56 GMT, rbowman wrote:
The Beelink came with Windows 11 Pro. I was a little skeptical of the
license but Win11 didn't last long enough to bother. I've done dual
boots in the past but lately I go scorched earth if there's anything on
the drive.
The trouble with that is, you are still paying the Microsoft tax.
At $350 the tax must be bargained down. That's why I said I didn't have complete confidence in the 11 Pro it came with, not that a company in Shenzhen would cut corners.
Maybe it's entirely legit. The Swift 3 was $679 with very similar hardware including the display, keyboard, laptop configuration.
amazon.com/dp/B0D8NS7KSH/
When you get down to $165 that claims to come with Windows 11 Pro you've
got to wonder what exactly the 'tax' is. Not the greatest processor and I doubt the other components are top shelf, but there still has to be some
cost involved for the physical components and assembly.
On 11/4/24 1:10 PM, rbowman wrote:
On Mon, 4 Nov 2024 02:35:50 -0500, 186282@ud0s4.net wrote:
Know what you mean ... I've got tons of parts - for those "someday"
projects :-)
The executors of my estate are NOT gonna be happy.
Hell, even have a ZX-81 in The Heap somewhere :-)
Right now I have a 4 wheel chassis with a primitive IR keypad
controller. The long range plan is to incorporate the PWM ability of
the L298Ns and go to the nRF24L01 for two way communication. The
problem is the chassis has limitations.
PWM ... why not steppers ?
Build a better chassis ? Of course that requires the right tools,
which means off to the hardware store, which means bringing back a
bunch of other stuff you didn't know you needed and ........
I come across robotics sites selling more-or-less finished chassis.
Just bolt yer stuff on.
Radio comms, esp with limited units like Ards, can be annoying. They
DO make an Uno with built-in wifi now - so depending on your coverage
you might be able to run it straight up from a laptop. There are
various 900 MHz bi-di modules too.
https://store.arduino.cc/products/arduino-uno-wifi-rev2
The Pico does interest me. Basically a hopped-up microcontroller and
you can get wi-fi too. Looks easier than starting with a raw PIC or
'51 and building up from there.
Just glad a LOT of people are still into this sort of stuff - don't
think a smartphone is the end-all of tech. The spirit of Radio Shack
lives on. :-)
Only we 'older people' learned to make do, sculpt ASM, for chips
with teenie-weenie RAM/ROM. The follow-ons think only in
mega/giga/terabytes and NEVER in terms of optimizing code. Hand them
a PIC-12f series chip and they'd ask how to start Win-12 on it.
I thought it well-known that Microsoft sells Windows dirt-cheap to OEMs.
On Mon, 4 Nov 2024 20:08:02 -0500, 186282@ud0s4.net wrote:
Only we 'older people' learned to make do, sculpt ASM, for chips
with teenie-weenie RAM/ROM. The follow-ons think only in
mega/giga/terabytes and NEVER in terms of optimizing code. Hand them
a PIC-12f series chip and they'd ask how to start Win-12 on it.
When I interviewed for my current job about 25 years ago one of the
interview questions started with 'Assume you have unlimited memory...' I thought to myself that I was entering a different world.
... 'Assume you have unlimited memory...' ...
On the other hand, I recently re-worked a summary report program
to build the entire table in memory and spew it out after all
input files had been read, because I realized that these days,
given the finite volume of data I'm working with, I effectively
_do_ have unlimited memory.
When a PPOE upgraded its Univac 9300 from 16K of memory to 32K,
we wondered what we would do with all that space. (We soon figured that out.)
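The accumulate-in-memory, emit-at-the-end pattern Charlie describes looks roughly like this in Python (the file contents, the comma-separated records, and the first-field-is-the-key convention are hypothetical stand-ins, not his actual program):

```python
from collections import Counter

def summarize(line_sources):
    """Build the whole summary table in memory, then emit it once
    after every input has been read -- viable when you know the data
    volume is modest relative to available RAM."""
    totals = Counter()
    for lines in line_sources:
        for line in lines:
            key = line.split(",")[0]   # first field is the group key
            totals[key] += 1
    return sorted(totals.items())

# Stand-ins for the input files:
file_a = ["widget,3", "gadget,1", "widget,2"]
file_b = ["gadget,5"]
```

The old streaming approach would have merged and flushed partial results as it went; holding everything in a dict is simpler and, with today's memory, usually fine.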
On 5 Nov 2024 20:07:07 GMT rbowman <bowman@montana.com> wrote:
Just glad a LOT of people are still into this sort of stuff - don't
think a smartphone is the end-all of tech. The spirit of Radio Shack
lives on.
When they built the new library they incorporated a maker space rather
than a few things stuffed into a meeting room. I don't think it's a
formal class but someone is available Saturdays to help with Arduino
projects.
Really need more of these kinds of things outside of major metro areas,
but yes, it's encouraging to see.
On 05/11/2024 20:31, Charlie Gibbs wrote:
On the other hand, I recently re-worked a summary report program to
build the entire table in memory and spew it out after all input files
had been read, because I realized that these days, given the finite
volume of data I'm working with, I effectively _do_ have unlimited
memory.
I have a friend who does maths research, involving operations on
gigantic matrices.
His original code, some of which is assembler to access some obscure
INTEL instructions to do with vector maths, was designed to use 128GB.
On someone else's extremely expensive computer in a far away land.
That is no longer an option, and he spent last week rewriting it to suit
the biggest motherboard he can easily obtain.
Typically a run takes several months. The power usage on the computer is about 500W.
So people can still find ways to push the limits of computers.
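For scale, the 128GB figure bounds the matrix size directly; a back-of-the-envelope check (assuming dense float64 storage and reading the 128GB as GiB):

```python
import math

BYTES_PER_DOUBLE = 8
budget = 128 * 2**30  # 128 GiB in bytes

# Side length of the largest square float64 matrix fitting the budget.
n = math.isqrt(budget // BYTES_PER_DOUBLE)
# n == 131072: a single 131072 x 131072 dense matrix fills all 128 GiB,
# before counting workspace for any operation on it.
```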
On Tue, 5 Nov 2024 23:38:55 +0000, The Natural Philosopher wrote:
On 05/11/2024 20:31, Charlie Gibbs wrote:
On the other hand, I recently re-worked a summary report program to
build the entire table in memory and spew it out after all input files
had been read, because I realized that these days, given the finite
volume of data I'm working with, I effectively _do_ have unlimited
memory.
I have a friend who does maths research, involving operations on
gigantic matrices.
His original code, some of which is assembler to access some obscure
INTEL instructions to do with vector maths, was designed to use 128GB.
On someone else's extremely expensive computer in a far away land.
That is no longer an option, and he spent last week rewriting it to suit
the biggest motherboard he can easily obtain.
Typically a run takes several months. The power usage on the computer is
about 500W.
So people can still find ways to push the limits of computers.
AI is great for that. You know you're in trouble when companies are trying
to buy nuclear plants to keep the lights on in the computing centers.
It doesn't get as much mention yet but all that energy eventually becomes heat. Is the answer something like the Seabrook nuke where you can use the Atlantic to keep the processors cool? When they were building Seabrook one
of the spins was that the lobsters would love their cozy new homes.
In short, the 'AI' approach everybody's using
just SUCKS ... seriously defective and about
as anti-Green as possible.
On 06/11/2024 06:29, 186282@ud0s4.net wrote:
In short, the 'AI' approach everybody's using
just SUCKS ... seriously defective and about
as anti-Green as possible.
Oh, if it's anti-Green it can't be all bad...
But there are more ways of using low grade heat than spaffing it up a
cooling tower. SMRs built near cities, could heat them. Or acres of polytunnels growing plants unable to survive in the local climate.
On Wed, 6 Nov 2024 11:11:41 +0000, The Natural Philosopher wrote:
But there are more ways of using low grade heat than spaffing it up a
cooling tower. SMRs built near cities, could heat them. Or acres of
polytunnels growing plants unable to survive in the local climate.
District heating has been used for over 200 years.
https://www.powermag.com/district-heating-supply-from-nuclear-power-plants/
"An extensive study was conducted in Connecticut, which focused on using waste heat from an existing nuclear power plant. It found substantial benefits from using nuclear heat, but concluded that the realization of maximum economic and social benefits would require current laws,
practices, and regulations to be modified. It suggested the larger energy perspective would have to be considered including desegregating the
treatment of energy, and incorporating land use planning and associated economic development into the process."
And there is the problem -- existing regulations and the NIMBY phenomenon.
40 years ago I was skeptical about some of the proposed nuclear plants,
not because of the technology but because of the over-optimistic
projections of future demand. That was then.
https://www.nrc.gov/info-finder/decommissioning/power-reactor/index.html
Many of the plants are at end of life, and that doesn't count the ones
that were decommissioned long ago like Maine Yankee or San Onofre. It
should be possible to design a plant that lasts longer than 40 or 50
years.
terms of 20 years. That even was used for the interstate system. Summer travel in this state can be painful because of the bottlenecks caused by bridge and pavement replacement.
On 06/11/2024 00:26, rbowman wrote:
On Tue, 5 Nov 2024 23:38:55 +0000, The Natural Philosopher wrote:
Frankly I regard that as pure serendipity.
On 05/11/2024 20:31, Charlie Gibbs wrote:
On the other hand, I recently re-worked a summary report program to
build the entire table in memory and spew it out after all input files
had been read, because I realized that these days, given the finite
volume of data I'm working with, I effectively _do_ have unlimited
memory.
I have a friend who does maths research, involving operations on
gigantic matrices.
His original code, some of which is assembler to access some obscure
INTEL instructions to do with vector maths, was designed to use 128GB.
On someone else's extremely expensive computer in a far away land.
That is no longer an option, and he spent last week rewriting it to suit
the biggest motherboard he can easily obtain.
Typically a run takes several months. The power usage on the computer is
about 500W.
So people can still find ways to push the limits of computers.
AI is great for that. You know you're in trouble when companies are
trying
to buy nuclear plants to keep the lights in in the computing centers.
The world needs nuclear power in unheard of quantities, and if AI is the
trigger to start that avalanche, I don't care if in the end it's utterly
pointless.
The nuclear power stations will still be there, and usable.
It doesn't get as much mention yet but all that energy eventually becomes
heat. Is the answer something like the Seabrook nuke where you can use
the
Atlantic to keep the processors cool? When they were building Seabrook
one
of the spins was that the lobsters would love their cozy new homes.
Yes. There is a distinct change in species near the outfalls of coastal
reactors - but it's the same for any thermal power plant - aside from CCGT.
60% of the energy ends up as low grade heat. (It's more like 30% on a
CCGT, but no one is talking about efficient uses of uranium via a
two-stage gas/steam turbine setup yet). It's dirt cheap and plentiful.
So waste heat it will be.
But there are more ways of using low-grade heat than spaffing it up a
cooling tower. SMRs built near cities could heat them. Or acres of
polytunnels growing plants unable to survive in the local climate.
De-salination plants for fresh water.
Thermodynamics tells us that in a thermal plant 100% efficiency is not
available, and it's a balance between efficiency and cost. No one is
comfortable mixing extremely hot, high-pressure steam and nuclear
reactors, so they run at safer temperatures and pressures.
On 06/11/2024 17:49, rbowman wrote:
On Wed, 6 Nov 2024 11:11:41 +0000, The Natural Philosopher wrote:
But there are more ways of using low grade heat than spaffing it up a
cooling tower. SMRs built near cities, could heat them. Or acres of
polytunnels growing plants unable to survive in the local climate.
District heating has been used for over 200 years.
https://www.powermag.com/district-heating-supply-from-nuclear-power-plants/
"An extensive study was conducted in Connecticut, which focused on using
waste heat from an existing nuclear power plant. It found substantial
benefits from using nuclear heat, but concluded that the realization of
maximum economic and social benefits would require current laws,
practices, and regulations to be modified. It suggested the larger energy
perspective would have to be considered including desegregating the
treatment of energy, and incorporating land use planning and associated
economic development into the process."
Yup. Battersea power station in the middle of London took coal delivered
by rail and river, and had a network of hot water pipes feeding local
houses.
https://en.wikipedia.org/wiki/Pimlico_District_Heating_Undertaking
Everybody in the industry knows that to get the best out of nuclear, the
rule book needs to be torn up and re-written using modern understanding
of the real, much lower danger from low-level radiation.
But politicians won't do that. Not even Trump, I suspect.
He is happy to protect the fossil fuels, not hasten their demise with
cheap nuclear.
And there is the problem -- existing regulations and the NIMBY
phenomenon.
40 years ago I was skeptical about some of the proposed nuclear plants,
not because of the technology but because of the over-optimistic
projections of future demand. That was then.
https://www.nrc.gov/info-finder/decommissioning/power-reactor/index.html
Many of the plants are at end of life, and that doesn't count the ones
that were decommissioned long ago like Maine Yankee or San Onofre. It
should be possible to design a plant that lasts longer than 40 or 50
years.
As I understand it, the UK's AGR reactors were only supposed to do about
25 years, but made it further. Someone told me that the reason for
closure is in all cases corrosion and loss of strength in materials
subject to heavy neutron bombardment.
The knowledge gained from these early reactors means that at least 40
years is the design target with lifetimes up to 60 envisaged.
In the end it's a cost-benefit judgement. More expensive reactors might
last longer, but wouldn't recoup the extra costs in their lifetimes.
Maybe.
The UK's first reactor - and the world's first - lasted 47 years.
That would require a change in the prevailing attitude in the US, which
thinks in terms of 20 years. That was even the figure used for the
interstate system. Summer travel in this state can be painful because of
the bottlenecks caused by bridge and pavement replacement.
It's much easier to buy votes than build infrastructure.
From a few negative experiences, the one thing you REALLY
need to guard against is some kind of melt-down. To that
end, "pebble bed" reactors are THE solution. Word is that
China is building a number of them right now.
The thermodynamic efficiency of pebble beds isn't AS great
as with some modern designs, but the SAFETY factor is
WORTH it IMHO.
On 11/6/24 6:11 AM, The Natural Philosopher wrote:
On 06/11/2024 00:26, rbowman wrote:
Thermodynamics tells us that in a thermal plant 100% efficiency is not
available, and it's a balance between efficiency and cost. No one is
comfortable mixing extremely hot, high-pressure steam and nuclear
reactors, so they run at safer temperatures and pressures.
An insane amount of energy goes into just HEATING WATER
for whatever uses.
If yer nuke plant has pre-heated the water, as you said,
there are many uses for it - recover an extra percentage
of the heat.
They keep trying to get more electricity from 'lower'
quality heat sources ... but from what I can tell it
may not be worth it except maybe in a space station
or similar. Easier to just use "warm" for what it is.
Anyway, thermodynamics is The Law and no kind of power
plant is gonna be close to 100% efficiency.
On 06/11/2024 21:46, 186282@ud0s4.net wrote:
From a few negative experiences, the one thing you REALLY
need to guard against is some kind of melt-down. To that
end, "pebble bed" reactors are THE solution. Word is that
China is building a number of them right now.
SMRs are also meltdown proof, in practice.
But a meltdown as in 3MI or Fukushima is by itself only a destroyed
reactor. It represents no public danger.
The thermodynamic efficiency of pebble beds isn't AS great
as with some modern designs, but the SAFETY factor is
WORTH it IMHO.
Same for SMRs.
A key part of "SMR" is *Small* ... but if you want to power up half a
STATE then "small" isn't how to do it.
Or do you propose Edison's vision of a power plant on every block ?
On 11/7/24 5:37 AM, The Natural Philosopher wrote:
Or do you propose Edison's vision of a power plant
on every block ?
"Now down here we have the laundry room on the right,
the rec room on the left, and down at the end of the
hall is the reactor room ..."