On 10/13/24 13:25, The Natural Philosopher wrote:
On 13/10/2024 10:15, Richard Kettlewell wrote:
"186282@ud0s4.net" <186283@ud0s4.net> writes:Last I heard they were going to use D to As feeding analog
https://techxplore.com/news/2024-10-integer-addition-algorithm-energy-ai.html
[...]
The default use of floating-point really took off when
'neural networks' became popular in the 80s. Seemed the
ideal way to keep track of all the various weightings
and values.
But, floating-point operations use a huge amount of
CPU/NPU power.
Seems somebody finally realized that the 'extra resolution'
of floating-point was rarely necessary and you can just
use large integers instead. Integer math is FAST and uses
LITTLE power .....
That’s situational. In this case, the paper isn’t about using large
integers, it’s about very low precision floating point representations.
They’ve just found a way to approximate floating point multiplication
without multiplying the fractional parts of the mantissas.
Last I heard they were going to use D to As feeding analog multipliers,
and convert back to D afterwards, for a speed/precision tradeoff.
That sounds like the 1960s. I guess this idea does sound like a slide rule.
Analogue computers could offer massive parallelism for simulating
complex dynamic systems.
The Natural Philosopher <tnp@invalid.invalid> wrote:
Analogue computers could offer massive parallelism for simulating
complex dynamic systems.
If they have a solution for the typical problem of noise in the
analogue signals drowning out the "complex" simulations. Optical
methods are interesting.
On 13/10/2024 14:23, Pancho wrote:
On 10/13/24 13:25, The Natural Philosopher wrote:
On 13/10/2024 10:15, Richard Kettlewell wrote:
"186282@ud0s4.net" <186283@ud0s4.net> writes:Last I heard they were going to use D to As feeding analog
https://techxplore.com/news/2024-10-integer-addition-algorithm-energy-ai.html
[...]
The default use of floating-point really took off when
'neural networks' became popular in the 80s. Seemed the
ideal way to keep track of all the various weightings
and values.
But, floating-point operations use a huge amount of
CPU/NPU power.
Seems somebody finally realized that the 'extra resolution'
of floating-point was rarely necessary and you can just
use large integers instead. Integer math is FAST and uses
LITTLE power .....
That’s situational. In this case, the paper isn’t about using large
integers, it’s about very low precision floating point representations.
They’ve just found a way to approximate floating point multiplication
without multiplying the fractional parts of the mantissas.
Last I heard they were going to use D to As feeding analog multipliers,
and convert back to D afterwards, for a speed/precision tradeoff.
That sounds like the 1960s. I guess this idea does sound like a slide
rule.
No, apparently it's a new (sic!) idea.
I think that even if it does not work successfully it is great that
people are thinking outside the box.
Analogue computers could offer massive parallelism for simulating
complex dynamic systems.
"186282@ud0s4.net" <186283@ud0s4.net> writes:
https://techxplore.com/news/2024-10-integer-addition-algorithm-energy-ai.html
[...]
The default use of floating-point really took off when
'neural networks' became popular in the 80s. Seemed the
ideal way to keep track of all the various weightings
and values.
But, floating-point operations use a huge amount of
CPU/NPU power.
Seems somebody finally realized that the 'extra resolution'
of floating-point was rarely necessary and you can just
use large integers instead. Integer math is FAST and uses
LITTLE power .....
That’s situational. In this case, the paper isn’t about using large integers, it’s about very low precision floating point representations. They’ve just found a way to approximate floating point multiplication without multiplying the fractional parts of the mantissas.
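For a feel of how a multiply can be approximated without multiplying the mantissa fractions, here is a rough C sketch of the old bit-pattern (Mitchell-style) trick: add the raw IEEE-754 bit patterns and subtract the exponent bias. This is only an illustration of the general idea, not the method in the paper, and it assumes single precision and positive normal inputs:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Approximate x*y for positive normal floats by adding the raw bit
     * patterns and subtracting the exponent bias.  Adding the bit patterns
     * roughly adds the base-2 logs; the error is typically a few percent. */
    static float approx_mul(float x, float y)
    {
        uint32_t xi, yi, zi;
        float z;
        memcpy(&xi, &x, sizeof xi);        /* reinterpret bits without UB */
        memcpy(&yi, &y, sizeof yi);
        zi = xi + yi - 0x3F800000u;        /* remove one copy of the bias */
        memcpy(&z, &zi, sizeof z);
        return z;
    }

    int main(void)
    {
        printf("exact  : %f\n", 3.7f * 2.9f);
        printf("approx : %f\n", approx_mul(3.7f, 2.9f));
        return 0;
    }

With 3.7 and 2.9 it prints roughly 10.40 against the exact 10.73, an error of a few percent.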
On 10/13/24 03:54, 186282@ud0s4.net wrote:
The new technique is basic—instead of using complex
floating-point multiplication (FPM), the method uses integer
addition. Apps use FPM to handle extremely large or small
numbers, allowing applications to carry out calculations
using them with extreme precision. It is also the most
energy-intensive part of AI number crunching.
That isn't really true. Floats can handle big and small, but the reason people use them is for simplicity.
The problem is that typical integer calculations are not closed: the
result is not an integer. Addition is fine, but the result of a division
is typically not an integer. So if you use integers to model a problem,
every time you do a division (or exp, log, sin, etc.) you need to make a
decision about how to force the result into an integer.
Floats actually use integral values for exponent and mantissa, but they
automatically make ballpark-reasonable decisions about how to force the
results into integral values for mantissa and exponent, meaning
operations are effectively closed (ignoring exceptions). So the
programmer doesn't have to worry so much.
Floating point ops are actually quite efficient, much less of a concern
than something like a branch misprediction. A 20x speed up (energy
saving) sounds close to a theoretical maximum. I would be surprised if
it can be achieved in anything but a few cases.
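A trivial C illustration of that closure point: with integers the programmer has to pick a rounding rule explicitly, while the float format makes a ballpark-reasonable choice on its own:

    #include <stdio.h>

    int main(void)
    {
        int a = 7, b = 2;

        printf("%d\n", a / b);           /* 3: C truncates toward zero     */
        printf("%d\n", (a + b / 2) / b); /* 4: round-to-nearest, a,b > 0   */
        printf("%f\n", 7.0 / 2.0);       /* 3.500000: the float carries on */
        return 0;
    }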
That’s situational. In this case, the paper isn’t about using large
integers, it’s about very low precision floating point representations.
They’ve just found a way to approximate floating point multiplication
without multiplying the fractional parts of the mantissas.
They need to take it further - integers instead
of ANY floating-point absolutely anywhere possible.
On 10/13/24 5:15 AM, Richard Kettlewell wrote:
"186282@ud0s4.net" <186283@ud0s4.net> writes:
https://techxplore.com/news/2024-10-integer-addition-algorithm-energy-ai.html
[...]
The default use of floating-point really took off when
'neural networks' became popular in the 80s. Seemed the
ideal way to keep track of all the various weightings
and values.
But, floating-point operations use a huge amount of
CPU/NPU power.
Seems somebody finally realized that the 'extra resolution'
of floating-point was rarely necessary and you can just
use large integers instead. Integer math is FAST and uses
LITTLE power .....
That’s situational. In this case, the paper isn’t about using large
integers, it’s about very low precision floating point representations.
They’ve just found a way to approximate floating point multiplication
without multiplying the fractional parts of the mantissas.
They need to take it further - integers instead
of ANY floating-point absolutely anywhere possible.
On 10/14/24 6:16 AM, The Natural Philosopher wrote:
I think that even if it does not work successfully it is great that
people are thinking outside the box.
Analogue computers could offer massive parallelism for simulating
complex dynamic systems.
Yea, but not much PRECISION beyond a stage or two
of calx :-)
No "perfect" fixes.
The question is how EXACT the precision HAS to be for
most "AI" uses. Might be safe to throw away a few
decimal points at the bottom.
On Tue, 15 Oct 2024 02:43:08 -0400, 186282@ud0s4.net wrote:
The question is how EXACT the precision HAS to be for most "AI" uses.
Might be safe to throw away a few decimal points at the bottom.
It's usually referred to as 'machine learning' rather than AI but when you look at TinyML on edge devices doing image recognition, wake word
processing, and other tasks it's impressive how much you can throw away
and still get a reasonable quality of results.
https://www.tinyml.org/
This goes back to the slide rule days. Sure, you could whip out your book
of six place tables and get seemingly more accurate results but did all
those decimal places mean anything in the real world? Computers took the
pain out of calculations but also tended to avoid the questions of 'what
does this really mean in the real world'.
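As a concrete taste of how much can be thrown away, here is a minimal C sketch of affine 8-bit quantization, the kind of thing TinyML toolchains do to weights. The values, range, and scheme shown here are a generic illustration, not any particular library's API:

    #include <math.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* Toy weights and their observed range (made up for illustration). */
        float w[4] = { -0.82f, 0.11f, 0.47f, 1.30f };
        float lo = -0.82f, hi = 1.30f;

        float scale = (hi - lo) / 255.0f;       /* spread the range over 0..255 */
        int   zero  = (int)roundf(-lo / scale); /* code that represents 0.0     */

        for (int i = 0; i < 4; i++) {
            uint8_t q    = (uint8_t)roundf(w[i] / scale + zero);
            float   back = (q - zero) * scale;
            printf("%+.3f -> %3u -> %+.3f\n", w[i], (unsigned)q, back);
        }
        return 0;
    }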
The question is how EXACT the precision HAS to be for most "AI" uses.
Might be safe to throw away a few decimal points at the bottom.
The political polls state ranges, but nothing about the alpha, the N,
and,
most importantly, the wording of the poll questions and the nature of
the sampling.
On 15/10/2024 07:31, 186282@ud0s4.net wrote:
On 10/14/24 6:16 AM, The Natural Philosopher wrote:
I think that even if it does not work successfully it is great that
people are thinking outside the box.
Analogue computers could offer massive parallelism for simulating
complex dynamic systems.
Yea, but not much PRECISION beyond a stage or two
of calx :-)
No "perfect" fixes.
As I said, let's say we are simulating airflow over a fast-moving
object - now normally the computational fluid dynamics (CFD) is crap and
it is cheaper and more accurate to throw it in a wind tunnel.
The wind tunnel is not measuring data to any high accuracy, but it is
using atomic-level measurement cells in enormous quantities in parallel.
The problem with CFD is you can't have too many 'cells' or you run out of
computer power. It's a step beyond 3D modelling, where the more triangles
you have the closer to real everything looks, but it's a similar problem.
But a wind tunnel built out of analogue 'cells' might be quite simple in concept. Just large in silicon scale.
And it wouldn't need to be 'programmed' as its internal logic would be constructed to be the equations that govern fluid dynamics. All you
would then do is take a 3D surface and constrain every cell in that
computer on that surface to have zero output.
If I were a graduate again that's a PhD project that would appeal...
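In digital form, the "many simple cells, surface pinned to zero" idea looks roughly like a relaxation grid. The C toy below is just a Laplace/Jacobi sketch with an invented geometry, not CFD and not the actual proposal, but it shows the structure: purely local rules, no global program:

    #include <stdio.h>

    #define N 32

    int main(void)
    {
        static float u[N][N], v[N][N];   /* zero-initialised grids */

        /* Drive one edge, as a stand-in for inflow. */
        for (int i = 0; i < N; i++) u[i][0] = 1.0f;

        for (int step = 0; step < 500; step++) {
            for (int i = 1; i < N - 1; i++) {
                for (int j = 1; j < N - 1; j++) {
                    /* Cells inside the embedded "surface" are pinned to zero;
                     * every other cell relaxes toward its neighbours' average. */
                    int on_surface = (i > 12 && i < 20 && j > 12 && j < 20);
                    v[i][j] = on_surface ? 0.0f
                            : 0.25f * (u[i-1][j] + u[i+1][j]
                                     + u[i][j-1] + u[i][j+1]);
                }
            }
            for (int i = 1; i < N - 1; i++)
                for (int j = 1; j < N - 1; j++)
                    u[i][j] = v[i][j];
        }

        printf("sample cell u[6][6] = %f\n", u[6][6]);
        return 0;
    }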
On 15/10/2024 07:43, 186282@ud0s4.net wrote:
The question is how EXACT the precision HAS to be for
most "AI" uses. Might be safe to throw away a few
decimal points at the bottom.
My thesis is that *in some applications*, more low-quality calculations
beat fewer high-quality ones anyway.
I wasn't thinking of AI so much as modelling complex turbulent flow in
aero- and hydrodynamics, or weather forecasting.
On Tue, 15 Oct 2024 15:46:05 -0400, Chris Ahlstrom wrote:
The political polls state ranges, but nothing about the alpha, the N,
and,
most importantly, the wording of the poll questions and the nature of
the sampling.
I try to ignore polls and most of the hype. A few years back I went to bed expecting Hillary Clinton to be the president elect when I woke up. The DJ
on the radio station I listen to in the morning was a definite lefty. When he
played Norah Jones' 'Carry On' I found I'd been mistaken.
https://www.youtube.com/watch?v=DqA25Ug71Mc
On 10/15/24 7:06 AM, The Natural Philosopher wrote:
On 15/10/2024 07:43, 186282@ud0s4.net wrote:
The question is how EXACT the precision HAS to be for
most "AI" uses. Might be safe to throw away a few
decimal points at the bottom.
My thesis is that *in some applications*, more low-quality
calculations beat fewer high-quality ones anyway.
I wasn't thinking of AI so much as modelling complex turbulent flow
in aero- and hydrodynamics, or weather forecasting.
Well, for weather any decimal points are BS anyway :-)
However, for AI and fuzzy logic and neural networks it
has just been standard practice to use floats to handle
all values. I've got books going back to the mid-80s
on all of those, and you JUST USED floats.
BUT ... as said, even a 32-bit int can handle fairly
large vals. Mult little vals by 100 or 1000 and you can
throw away the need for decimal points - and the POWER
required to do such calx. Accuracy should be more than
adequate.
In any case, I'm happy SOMEONE finally realized this.
TOOK a really LONG time though ......
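What the multiply-by-100-or-1000 suggestion amounts to is decimal fixed point. A minimal C sketch, illustration only, with overflow and rounding handling omitted:

    #include <stdint.h>
    #include <stdio.h>

    #define SCALE 1000   /* store values as integer thousandths */

    int main(void)
    {
        int32_t a = (int32_t)(0.125 * SCALE);     /*  125 */
        int32_t b = (int32_t)(3.750 * SCALE);     /* 3750 */

        /* Multiply in a wider type, then divide the extra scale back out.
         * Note the truncation: 0.468 instead of 0.46875. */
        int64_t p = (int64_t)a * b / SCALE;

        printf("0.125 * 3.750 ~= %lld/1000 = %.3f\n",
               (long long)p, (double)p / SCALE);
        return 0;
    }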
On 10/15/24 10:35 PM, rbowman wrote:
On Tue, 15 Oct 2024 15:46:05 -0400, Chris Ahlstrom wrote:
The political polls state ranges, but nothing about the alpha, the N,
and,
most importantly, the wording of the poll questions and the nature of
the sampling.
I try to ignore polls and most of the hype. A few years back I went to bed
expecting Hillary Clinton to be the president elect when I woke up. The DJ
on the radio station I listen to in the morning was a definite lefty. When he
played Norah Jones' 'Carry On' I found I'd been mistaken.
https://www.youtube.com/watch?v=DqA25Ug71Mc
Trump IS grating ... no question ... but K is just
an empty skull. That's been her JOB. Can't have
someone like that in times like these.
Not entirely sure of the Linux angle here though ...
Harris as VP was like Linux, working reliably in the background.
She's no empty skull. She was a prosecutor, a district attorney, a state attorney general, a US senator, and the vice president. But some people cannot
stand that in a woman.
I began many years ago, as so many young men do, in searching for the
perfect woman. I believed that if I looked long enough, and hard enough,
I would find her and then I would be secure for life. Well, the years
and romances came and went, and I eventually ended up settling for someone
a lot less than my idea of perfection. But one day, after many years together, I lay there on our bed recovering from a slight illness. My
wife was sitting on a chair next to the bed, humming softly and watching
the late afternoon sun filtering through the trees. The only sounds to
be heard elsewhere were the clock ticking, the kettle downstairs starting
to boil, and an occasional schoolchild passing beneath our window. And
as I looked up into my wife's now wrinkled face, but still warm and
twinkling eyes, I realized something about perfection... It comes only
with time.
-- James L. Collymore, "Perfect Woman"
On Wed, 16 Oct 2024 07:40:46 -0400, Chris Ahlstrom wrote:
Harris as VP was like Linux, working reliably in the background.
There you have the problem. If she was working reliably in the background
for the last three and a half years she can hardly claim to be a candidate for change. Obama could make that work after eight years of Bush.
rbowman wrote this copyrighted missive and expects royalties:
On Wed, 16 Oct 2024 07:40:46 -0400, Chris Ahlstrom wrote:
Harris as VP was like Linux, working reliably in the background.
There you have the problem. If she was working reliably in the background
for the last three and a half years she can hardly claim to be a candidate
for change. Obama could make that work after eight years of Bush.
Whatever, dude. Incremental change is fine with me.
The big changes we really need (eliminating Citizens United, taking medical insurers out of the system, and so much more) will never happen.
The game is rigged.
Heh heh:
"186282@ud0s4.net" <186283@ud0s4.net> writes:
BUT ... as said, even a 32-bit int can handle fairly
large vals. Mult little vals by 100 or 1000 and you can
throw away the need for decimal points - and the POWER
required to do such calx. Accuracy should be more than
adequate.
You’re talking about fixed-point arithmetic, which is already used where appropriate (although the scale is a power of 2 so you can shift
products down into the right place rather than dividing).
In any case, I'm happy SOMEONE finally realized this.
TOOK a really LONG time though ......
It’s obvious that you’ve not actually read or understood the paper that this thread is about.
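To put the fixed-point remark above in concrete form: with a power-of-two scale (Q16.16 here, the standard idiom) the rescale after a multiply is a shift rather than a divide. A bare C sketch, with no saturation or rounding:

    #include <stdint.h>
    #include <stdio.h>

    typedef int32_t q16_16;   /* 16 integer bits, 16 fraction bits */

    static q16_16 q_from_double(double x) { return (q16_16)(x * 65536.0); }
    static double q_to_double(q16_16 x)   { return x / 65536.0; }

    /* With a power-of-two scale the rescale after a multiply is a shift,
     * not a divide. */
    static q16_16 q_mul(q16_16 a, q16_16 b)
    {
        return (q16_16)(((int64_t)a * b) >> 16);
    }

    int main(void)
    {
        q16_16 a = q_from_double(0.125);
        q16_16 b = q_from_double(3.75);
        printf("0.125 * 3.75 = %f\n", q_to_double(q_mul(a, b)));
        return 0;
    }

For 0.125 * 3.75 it prints 0.468750, i.e. the exact 0.46875 within the 16 fractional bits.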
On 10/16/24 6:56 AM, Richard Kettlewell wrote:
"186282@ud0s4.net" <186283@ud0s4.net> writes:
BUT ... as said, even a 32-bit int can handle fairly
large vals. Mult little vals by 100 or 1000 and you can
throw away the need for decimal points - and the POWER
required to do such calx. Accuracy should be more than
adequate.
You’re talking about fixed-point arithmetic, which is already used
where appropriate (although the scale is a power of 2 so you can
shift products down into the right place rather than dividing).
In any case, I'm happy SOMEONE finally realized this.
TOOK a really LONG time though ......
It’s obvious that you’ve not actually read or understood the paper
that this thread is about.
Maybe I understood it better than you ... and from
4+ decades of experience.
"186282@ud0s4.net" <186283@ud0s4.net> writes:
On 10/16/24 6:56 AM, Richard Kettlewell wrote:
"186282@ud0s4.net" <186283@ud0s4.net> writes:
BUT ... as said, even a 32-bit int can handle fairly
large vals. Mult little vals by 100 or 1000 and you can
throw away the need for decimal points - and the POWER
required to do such calx. Accuracy should be more than
adequate.
You’re talking about fixed-point arithmetic, which is already used
where appropriate (although the scale is a power of 2 so you can
shift products down into the right place rather than dividing).
In any case, I'm happy SOMEONE finally realized this.
TOOK a really LONG time though ......
It’s obvious that you’ve not actually read or understood the paper
that this thread is about.
Maybe I understood it better than you ... and from
4+ decades of experience.
Perhaps you could explain why you keep talking about integer arithmetic
when the paper is about floating point arithmetic, then.
On 10/18/24 12:34 PM, Richard Kettlewell wrote:
<snip>
Perhaps you could explain why you keep talking about integer arithmetic
when the paper is about floating point arithmetic, then.
Umm ... because the idea of swapping FP for ints in
order to save lots of power was introduced?
This issue is getting to be *political* now - the
ultra-greenies freaking about how much power the
'AI' computing centers require.
186282ud0s3 wrote this copyrighted missive and expects royalties:
On 10/18/24 12:34 PM, Richard Kettlewell wrote:
<snip>
Perhaps you could explain why you keep talking about integer arithmetic
when the paper is about floating point arithmetic, then.
Umm ... because the idea of swapping FP for ints in
order to save lots of power was introduced?
This issue is getting to be *political* now - the
ultra-greenies freaking about how much power the
'AI' computing centers require.
Heh, I freak out about sites I visit that make my computer rev up and
turn on the cooler: sites polluted with ads, sites that use your CPU
to mine bitcoin and who knows what else.