This is a new kind of floating-point number, likely good for AI, but
lots of other uses will turn up.
From IEEE Spectrum (March 2026 issue):
AI has driven an explosion of new number formats: the ways in which
numbers are represented digitally. Engineers are looking at every
possible way to save computation time and energy, including shortening
the number of bits used to represent data. But what works for AI
doesn't necessarily work for scientific computing, be it for
computational physics, biology, fluid dynamics, or engineering
simulations. IEEE Spectrum spoke with Laslo Hunhold, who recently
joined Barcelona-based Openchip as an AI engineer, about his efforts
to develop a bespoke number format for scientific computing.
<https://spectrum.ieee.org/number-formats-ai-scientific-computing>
Joe
joegwinn@comcast.net wrote:
> This is a new kind of floating-point number, likely good for AI, but
> lots of other uses will turn up.

I suspect (as with my dictionary implementation), developers are
On 2026-04-15 18:33, joegwinn@comcast.net wrote:
I sure hope he ditches denormals!
Cheers
Phil Hobbs
joegwinn@comcast.net wrote:
I don't know much about what that guy did.
But I noticed I can do most 'scientific' things with 32 bits (in asm, at that). For example, the Fourier transform in
https://panteltje.nl/panteltje/pic/scope_pic/
(asm source downloadable on that site)
I did using 32-bit integers.
Is that science?
Of course when AI wants to do a divide by zero using Albert E.'s brain fog, then it will likely need infinite bits to do the wormhole dance...
My conclusion: 32 bits is enough for most things
On Wed, 15 Apr 2026 18:33:16 -0400, joegwinn@comcast.net wrote:
John Larkin
Highland Tech Glen Canyon Design Center
Lunatic Fringe Electronics
On Wed, 15 Apr 2026 20:17:20 -0400, Phil Hobbs <pcdhSpamMeSenseless@electrooptical.net> wrote:
> I sure hope he ditches denormals!

I think he does.
I wrote a math package for the 68332, with the usual functions. The
format was signed 64 bits, as 32.32.
Adds and subs were fast, as no normalizations were needed. Integer
conversions were even faster. Divide was admittedly kinda ugly.
I figured that anything physical can be expressed as 32.32.
On 16/04/2026 07:43, Jan Panteltje wrote:
> But I noticed I can do most 'scientific' things with 32 bits (in asm at that).
> For example the Fourier transform in
> https://panteltje.nl/panteltje/pic/scope_pic/
> I did using 32 bit integer.
FFTs are relatively forgiving where numerical precision is concerned.
The basis functions are perfectly orthogonal summed over the domain.
Even something as simple as solving a cubic equation x^3 + ax^2 + bx + c = 0
can easily go wrong when computing in float32, since it involves
computing a^6. You can work around this lack of dynamic range, but it is painful!
Double precision also helps a lot when accumulating summations of reals,
or even in FFTs when using recurrence relations for sin/cos(n*w*t).
Almost all modern FP libraries today promote a float32 argument to
double and do the computation in double precision, rounding the result
back to float at the end. It avoids a lot of messing about ensuring
nothing overflows during the intermediate calculations.
> My conclusion: 32 bits is enough for most things
The CDC 7600's 60 bits really were good enough for most orbital dynamics computations, which is why astronomical codes used them (and BMEWS too).
On today's CPUs, 64-bit double precision and 32-bit float have essentially
the same performance unless you are vectorising or using huge arrays, so
unless you *really* know what you are doing, double precision is
preferred for most routine scientific calculations. The exception is
bulk raw data, where you seldom have more than 4 significant figures.
joegwinn@comcast.net wrote:
> The exception is bulk raw data, where you seldom have more than 4
> significant figures.
Also, when carrying data from place to place before modern fiber-optic
transmission systems, even 32 bits was overkill for ADC output data,
so it was best to send it in integer form and convert to floats as
late as possible in the process.
I had to change the title line. Eternal September rejected it for some reason.
On Thu, 16 Apr 2026 02:18:58 -0700, john larkin <jl@glen--canyon.com> wrote:
> I wrote a math package for the 68332, with the usual functions. The
> format was signed 64 bits, as 32.32.
> Adds and subs were fast, as no normalizations were needed. Integer
> conversions were even faster. Divide was admittedly kinda ugly.
Unless you use power-of-two scaling, allowing bit shifts to do the
job.
> I figured that anything physical can be expressed as 32.32.
I've done much the same, but usually more like 16.16 - these computers
were tiny.
Joe
On Thu, 16 Apr 2026 11:36:13 -0400, joegwinn@comcast.net wrote:
> Unless you use power-of-two scaling, allowing bit shifts to do the
> job.
I was PWMing a heater and wanted to adjust for the unregulated supply
voltage, which was the only divide in that system. So it didn't have
to be very good. That was inside the more serious temperature control
loop.
I could have done a lookup table, I guess.
> I've done much the same, but usually more like 16.16 - these computers
> were tiny.
Since I was running realtime control loops, the package threw no
exceptions. It always returned a legal-format value and made its best
guess.
68K was a 32-bit machine but the 68332 didn't have floats. And it was
slow, a 16 MHz CISC processor.
But it was a joy to code in assembler.
John Larkin
Highland Tech Glen Canyon Design Center
Lunatic Fringe Electronics
On 4/16/2026 8:44 AM, joegwinn@comcast.net wrote:
> Also, when carrying data from place to place before modern fiber-optic
> transmission systems, even 32 bits was overkill for ADC output data,
> so it was best to send it in integer form, and convert to floats as
> late as possible in the process.

The problem with that is you have to send along all of the factors
that allow those integers to be ACCURATELY mapped to the quantities
they represent. Or, arrange for all of that information to already
*be* at the "far end".
Converting to a float/engineering units AT the acquisition site
lets you encapsulate all of that hardware/domain/application
specific information AT the acquisition point so you don't have to
expose all of that detail. Imagine replacing a 10b device with
a 12b -- how much tinkering will you have to do if the far
end was expecting 10b data and now sees 12b?
Don Y wrote:
> Imagine replacing a 10b device with a 12b -- how much tinkering will
> you have to do if the far end was expecting 10b data and now sees 12b?
In those days, we were happy to get anything this fancy to work, even
if it would have to be totally replaced to add anything.
On 16/04/2026 01:17, Phil Hobbs wrote:
On 2026-04-15 18:33, joegwinn@comcast.net wrote:
A copy of the actual paper is on arXiv here:
https://arxiv.org/pdf/2404.18603
It's a log-tapered number system. He calls them takums, as opposed to posits.
Not an easy read. It will be interesting to see if it flies in hardware.
There has been a *lot* of investment in IEEE 754 FP already!
The CDC 7600 et al. got it about right: 60-bit reals were good enough for most
purposes. float32 was always somewhat lacking in precision. Far too easy
to have underflows and overflows in quite modest computations.
In the past I stuck with legacy compiler versions that supported x87
native 80-bit FP (gcc still does today; ICX can be forced to).
I have my own DIY lightweight float128 class that exploits fused
multiply-add to provide fast, high-dynamic-range bigger floats
without the overheads of a full multiprecision math library.
> I sure hope he ditches denormals!
Denorms are not all *that* bad - some modern CPUs can even process them
at full speed - though many are still glacially slow, and in the past
they used to be even slower (you can set the DAZ flag now if you don't
care). Often they were handled by a trap and tediously slow microcode.
The Intel ICX compiler defaults to rounding denormals to zero.
On 4/16/2026 11:51 AM, joegwinn@comcast.net wrote:
> In those days, we were happy to get anything this fancy to work, even
> if it would have to be totally replaced to add anything.
I've cut more corners than most folks. But, in hindsight, it
was wasted effort. Hardware has always been cheap -- even when
it wasn't! Development (and re-development) has always been
expensive. And, there are costs associated with "reputation"
(you don't want to be known for a particular bug in your product).
But, managers are short-sighted; they see the cost of the BoM and
panic over trying to shave a few dollars off -- at the expense of
man-months of time.
I can recall trying to take a few hundred bytes out of a 12KB memory
image to save the cost of an "extra" EPROM. Of course, once that
was done, ADDING any functionality looked prohibitively expensive
("We'll have to ADD another EPROM...")
I originally started my current project with that "low cost"
mindset... use lots of PIC-ish motes feeding a *big* machine.
And, the big machine keeps getting bigger (more complex, expensive)
in an attempt to keep the motes dirt cheap.
But, when you look at *real* costs, the savings are illusions.
Especially as you start thinking about how you're going to
market "add ons" (does the user have to upgrade the "big machine"
if he wants add-ons X, Y or Z? Or, if he has too many W's??)
Instead, add capability as you add functionality. Let the
hardware make your job easier and more reliable.
[Additionally, this gives you more freedom in implementation
as the interfaces become more abstract and less tied to *a*
particular implementation]
On 16/04/2026 22:29, Don Y wrote:
On 4/16/2026 11:51 AM, joegwinn@comcast.net wrote:
> I can recall trying to take a few hundred bytes out of a 12KB memory
> image to save the cost of an "extra" EPROM. Of course, once that
> was done, ADDING any functionality looked prohibitively expensive
> ("We'll have to ADD another EPROM...")
For a few years I worked with the inventor of the ARM Thumb
instruction set, Paul Denman. The motivation of that invention
was exactly what you refer to - shaving off some memory cost in
the processors used in FAX machines.
US patent 5784585 awarded 21 July 1998
John
> For a few years I worked with the inventor of the ARM Thumb
> instruction set, Paul Denman.

And nowadays the evolved ARM ISA has won the world, displacing Intel. Most of the "old guard" made horrible decisions in the MCU market.
On 2026-04-16 17:59, Don Y wrote:
> Imagine replacing a 10b device with a 12b -- how much tinkering will
> you have to do if the far end was expecting 10b data and now sees 12b?
That was handled in process control equipment by using the ADC MSB-aligned. Full scale was defined as 1.0, so 10 bits would give FFC0 and 12 bits FFF0. Just the resolution improved; the scale factor didn't change. No tinkering. I've designed ADC and DAC boards that worked that way.
On Thu, 16 Apr 2026 09:31:01 -0700, john larkin <jl@glen--canyon.com> wrote:
On Thu, 16 Apr 2026 11:36:13 -0400, joegwinn@comcast.net wrote:
On Thu, 16 Apr 2026 02:18:58 -0700, john larkin <jl@glen--canyon.com> >>>wrote:
On Wed, 15 Apr 2026 20:17:20 -0400, Phil Hobbs >>>><pcdhSpamMeSenseless@electrooptical.net> wrote:
On 2026-04-15 18:33, joegwinn@comcast.net wrote:
I sure hope he ditches denormals!
This is a new kind of floating-point number, likely good for AI, but >>>>>> lots of other uses will turn up.
From IEEE Spectrum (March 2026 issue):
AI has driven an explosion of new number formats: the ways in which
numbers are represented digitally. Engineers are looking at every
possible way to save computation time and energy, including shortening
the number of bits used to represent data. But what works for AI
doesn't necessarily work for scientific computing, be it for
computational physics, biology, fluid dynamics, or engineering
simulations. IEEE Spectrum spoke with Laslo Hunhold, who recently
joined Barcelona-based Openchip as an AI engineer, about his efforts
to develop a bespoke number format for scientific computing.
<https://spectrum.ieee.org/number-formats-ai-scientific-computing>
Joe
I think he does.
Cheers
Phil Hobbs
I wrote a math package for the 68332, with the usual functions. The
format was signed 64 bits, as 32.32.
Adds and subs were fast, as no normalizations were needed. Integer
conversions were even faster. Divide was admittedly kinda ugly.
Unless you use power-of-two scaling, allowing bit shifts to do the
job.
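The 32.32 format being discussed is easy to sketch in C: adds and subs
are plain integer ops with no normalization, and power-of-two divides
are arithmetic shifts. A minimal sketch, assuming the GCC/Clang
__int128 extension for the multiply; the type and function names are
mine, not from the original package:

```c
#include <stdint.h>

typedef int64_t fix32_32;               /* signed 64-bit, as 32.32 */

static fix32_32 fix_from_int(int32_t n) { return (fix32_32)n << 32; }

/* Add and subtract are plain integer ops -- no normalization step. */
static fix32_32 fix_add(fix32_32 a, fix32_32 b) { return a + b; }
static fix32_32 fix_sub(fix32_32 a, fix32_32 b) { return a - b; }

/* Multiply needs a 128-bit intermediate, then a shift back down. */
static fix32_32 fix_mul(fix32_32 a, fix32_32 b)
{
    return (fix32_32)(((__int128)a * (__int128)b) >> 32);
}

/* Dividing by a power of two is just an arithmetic shift. */
static fix32_32 fix_div_pow2(fix32_32 a, unsigned k) { return a >> k; }
```

A general divide would need the same 128-bit widening in the other
direction, which is where the "kinda ugly" part comes in.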
I was PWMing a heater and wanted to adjust for the unregulated supply
voltage, which was the only divide in that system. So it didn't have
to be very good. That was inside the more serious temperature control
loop.
I could have done a lookup table, I guess.
That is traditional.
I figured that anything physical can be expressed as 32.32.
I've done much the same, but usually more like 16.16 -- these
computers were tiny.
Joe
Since I was running realtime control loops, the package threw no
exceptions. It always returned a legal-format value and made its best
guess.
68K was a 32-bit machine but the 68332 didn't have floats. And it was
slow, a 16 MHz CISC processor.
This was the processor I was using for realtime, and also for my
original toaster Mac, which had a socket for the coprocessor chip.
Apple sold and installed these for a very high price, so I bought the
chip from Digikey or then equivalent, and installed it myself. It
wasn't very hard, but needed care to not bust those pins.
The realtime use did not have the coprocessors, because FP was still
far too slow, and achieved far more precision than needed.
But it was a joy to code in assembler.
Yes. I knew the fellow who developed the Instruction Set Architecture
(ISA) for the 6800 and 68000-series processors. They were based on
the DEC PDP-11 ISA.
Joe
John Larkin
Highland Tech Glen Canyon Design Center
Lunatic Fringe Electronics
On Thu, 16 Apr 2026 14:49:17 -0400, joegwinn@comcast.net wrote:
[quoted upthread text snipped]
The PDP-11 was revolutionary. You could
ASR PC
namely arithmetic shift the program counter if you wanted. It was that
general.
I wrote two RTOS's for the 11. It was beautiful.
John Larkin
Highland Tech Glen Canyon Design Center
Lunatic Fringe Electronics
On 16/04/2026 07:43, Jan Panteltje wrote:
joegwinn@comcast.netwrote:
[IEEE Spectrum article quoted upthread, snipped]
Joe
Do not know much about what that guy did.
But I noticed I can do most 'scientific' things with 32 bits (in asm
at that).
For example the Fourier transform in
  https://panteltje.nl/panteltje/pic/scope_pic/
  (asm source downloadable on that site)
I did using 32 bit integer.
FFTs are relatively forgiving where numerical precision is concerned.
The basis functions are perfectly orthogonal summed over the domain.
Even something as simple as solving a cubic equation
x^3 + ax^2 + bx + c = 0 can easily go wrong when computing in float32,
since it involves computing a^6. You can work around this lack of
dynamic range, but it is painful!
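The a^6 point is easy to demonstrate. An illustrative sketch, not from
the post: with a coefficient of only 1e7, a^6 is 1e42, far beyond
FLT_MAX (about 3.4e38), so float32 overflows to infinity while double
(max about 1.8e308) copes easily.

```c
#include <math.h>

/* Raise a coefficient to the 6th power, as a cubic solution formula
 * may require, and report whether float32 overflowed along the way. */
static int sixth_power_overflows_float(float a)
{
    float p = a * a * a;   /* a^3: 1e21 for a = 1e7, still in range */
    p = p * p;             /* a^6: 1e42, past FLT_MAX (~3.4e38)     */
    return isinf(p);
}

/* Same computation carried out in double: no overflow here. */
static int sixth_power_overflows_double(double a)
{
    double p = a * a * a;
    p = p * p;             /* 1e42 fits easily within double range */
    return isinf(p);
}
```

This is exactly the sort of intermediate blow-up that promoting to
double quietly sidesteps.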
Double precision also helps a lot when accumulating summations of reals
or even in FFTs to do recurrence relations for sin/cos(n*w*t)
Almost all modern FP libraries today promote the float32 argument to
double and do the computation in double precision rounding the result
back to float at the end. It avoids a lot of messing about ensuring
nothing overflows during the intermediate calculations.
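The value of a double accumulator shows up clearly in a long
summation. An illustrative sketch (the function names are mine):
adding 0.1f a million times drifts by hundreds in a float accumulator,
while a double accumulator rounded once at the end stays essentially
at the true sum.

```c
/* Accumulate n copies of v in float32: rounding error piles up once
 * the running sum dwarfs the addend. */
static float repeat_sum_float(float v, long n)
{
    float s = 0.0f;
    while (n-- > 0) s += v;
    return s;
}

/* Same summation, promoted to double and rounded once at the end,
 * in the spirit of the promote-compute-round pattern above. */
static float repeat_sum_double(float v, long n)
{
    double s = 0.0;
    while (n-- > 0) s += v;
    return (float)s;
}
```

Once the running float sum passes 2^16, each 0.1 addend is rounded to
a multiple of the sum's ulp (0.0078125), and the per-step bias
accumulates systematically.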
Is that science?
Of course, when AI wants to do a divide by zero using Albert E.'s
brain fog, then it will likely need infinite bits to do the wormhole
dance...
My conclusion: 32 bits is enough for most things.
The CDC 7600's 60 bits really were good enough for most orbital
dynamics computations, which is why astronomical codes used them (and
BMEWS too).
Today's CPUs give 64-bit double precision and 32-bit float essentially
the same performance unless you are vectorising or using huge arrays,
so unless you *really* know what you are doing, double precision is
preferred for most routine scientific calculations. The exception is
bulk raw data, where you seldom have more than 4 significant figures.
Don Y <blockedofcourse@foo.invalid> wrote:
|-------------------------------------------------------------------|
|"[. . .] there are costs associated with "reputation"              |
|(you don't want to be known for a particular bug in your product)."|
|-------------------------------------------------------------------|
Willy H. Gates III wanted to be known for DONKEY.BAS, but its
flickering is not precisely a bug. Donkeys are also not precisely
bugs.
On 4/17/2026 12:56 PM, Niocláis Pól Caileáin de Ghloucester wrote:
Don Y <blockedofcourse@foo.invalid> wrote:
|-------------------------------------------------------------------|
|"[. . .] there are costs associated with "reputation"              |
|(you don't want to be known for a particular bug in your product)."|
|-------------------------------------------------------------------|
Willy H. Gates III wanted to be known for DONKEY.BAS, but its
flickering is not precisely a bug. Donkeys are also not precisely
bugs.
Customers/users occasionally welcome design bugs -- if they can be
exploited to their advantage.
Pinball machines (back in the electro-mechanical era) used to
"randomly" generate a 2-digit value (the rightmost digit of which was
always zero, of course) which would be compared to the rightmost two
digits of a player's score at the end of the game. If the two agreed,
a free game was awarded.
Of course, rather than a true random number generator, the number
generated was driven by observable events during game play -- like
the number of times a particular target was struck.
Being able to count such events, knowing where the "number generator"
was at the start of the game AND the sequence of "random" values
that it would provide, you could predict the setting of the generator
at any given time. So, just prior to the end of the game, deliberately
tilt the game when your score coincides with the "predicted" value
of the number generator -- and you can continue with a new game.
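A toy model makes the predictability concrete. Purely illustrative
(the stepping rule and names here are hypothetical; real machines
differed in detail): the match unit advances on observable scoring
events, so counting events from a known starting position predicts
its value at any moment.

```c
/* Toy match-unit model: a reel that advances one position per scoring
 * event and reports a two-digit value always ending in zero. */
static int match_value(int start_pos, int events_seen)
{
    return ((start_pos + events_seen) % 10) * 10;   /* 00, 10, ... 90 */
}
```

A player who has tracked events_seen can time a deliberate tilt for
the moment the last two digits of the score coincide with the
predicted value.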
The physical clearance between some targets and the cover glass on
some machines is close enough that you can sit on the glass to deflect
it just enough to inhibit those targets "resetting".
Coin mechanisms can be tricked into accepting pennies in lieu of
quarters.
Some video games allow the player to pass THROUGH solid objects.
In-band tone signalling for telephones gave rise to "Cap'n Crunch"
and the whole phreaking movement. (TAP, anyone?)