• AI's Math Tricks Don't Work for Scientific Computing - Low-precision number formats don't suit many simulations

    From joegwinn@joegwinn@comcast.net to sci.electronics.design on Wed Apr 15 18:33:16 2026
    From Newsgroup: sci.electronics.design


    This is a new kind of floating-point number, likely good for AI, but
    lots of other uses will turn up.

    From IEEE Spectrum (March 2026 issue):

    AI has driven an explosion of new number formats: the ways in which
    numbers are represented digitally. Engineers are looking at every
    possible way to save computation time and energy, including shortening
    the number of bits used to represent data. But what works for AI
    doesn't necessarily work for scientific computing, be it for
    computational physics, biology, fluid dynamics, or engineering
    simulations. IEEE Spectrum spoke with Laslo Hunhold, who recently
    joined Barcelona-based Openchip as an AI engineer, about his efforts
    to develop a bespoke number format for scientific computing.

    .<https://spectrum.ieee.org/number-formats-ai-scientific-computing>

    Joe
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Phil Hobbs@pcdhSpamMeSenseless@electrooptical.net to sci.electronics.design on Wed Apr 15 20:17:20 2026
    From Newsgroup: sci.electronics.design

    On 2026-04-15 18:33, joegwinn@comcast.net wrote:

    This is a new kind of floating-point number, likely good for AI, but
    lots of other uses will turn up.

    From IEEE Spectrum (March 2026 issue):

    AI has driven an explosion of new number formats: the ways in which
    numbers are represented digitally. Engineers are looking at every
    possible way to save computation time and energy, including shortening
    the number of bits used to represent data. But what works for AI
    doesn't necessarily work for scientific computing, be it for
    computational physics, biology, fluid dynamics, or engineering
    simulations. IEEE Spectrum spoke with Laslo Hunhold, who recently
    joined Barcelona-based Openchip as an AI engineer, about his efforts
    to develop a bespoke number format for scientific computing.

    .<https://spectrum.ieee.org/number-formats-ai-scientific-computing>

    Joe

    I sure hope he ditches denormals!

    Cheers

    Phil Hobbs
    --
    Dr Philip C D Hobbs
    Principal Consultant
    ElectroOptical Innovations LLC / Hobbs ElectroOptics
    Optics, Electro-optics, Photonics, Analog Electronics
    Briarcliff Manor NY 10510

    http://electrooptical.net
    http://hobbs-eo.com

    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Don Y@blockedofcourse@foo.invalid to sci.electronics.design on Wed Apr 15 18:29:04 2026
    From Newsgroup: sci.electronics.design

    On 4/15/2026 3:33 PM, joegwinn@comcast.net wrote:

    This is a new kind of floating-point number, likely good for AI, but
    lots of other uses will turn up.
    I suspect (as with my dictionary implementation), developers are
    going to be tasked with imbuing their algorithms with more knowledge
    of the problem domain -- and the actual values to be encountered at
    specific steps in computations.

    Much like selecting algorithms to minimize cancellation instead of
    resorting to a grade school idea of "how math works".

    E.g., I use Big Rationals for user-facing computations.
    Multiplication and division are pretty inexpensive -- if
    you defer the reduction/normalization stage until it
    is worth the savings. As the developer is the only one who
    knows what operators will be imposed in the future, why
    prematurely incur a cost if it won't materially improve
    performance? Do you care if "2.0" is stored as (2,1)
    vs (800000000000000000000,400000000000000000000)? Does
    the *user*?
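    The deferred-reduction idea sketches easily in C. This is an illustrative
    stand-in, not Don's implementation: it uses machine-word integers where a
    true Big Rational would use arbitrary precision, and the names (rat,
    rat_mul, rat_reduce) are invented for the example.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-in for a Big Rational: a machine-word rational
   whose gcd reduction is deferred until the caller asks for it. */
typedef struct { int64_t num, den; } rat;

static int64_t gcd64(int64_t a, int64_t b) {
    if (a < 0) a = -a;
    if (b < 0) b = -b;
    while (b) { int64_t t = a % b; a = b; b = t; }
    return a;
}

/* Multiply without normalizing: just two integer multiplies. */
static rat rat_mul(rat x, rat y) {
    return (rat){ x.num * y.num, x.den * y.den };
}

/* Reduce only when it is worth the cost, e.g. before display. */
static rat rat_reduce(rat x) {
    int64_t g = gcd64(x.num, x.den);
    return (rat){ x.num / g, x.den / g };
}
```

    (8,4) and (2,1) denote the same value; whether the stored form ever
    gets reduced is the developer's call, which is exactly the point above.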
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Jan Panteltje@alien@comet.invalid to sci.electronics.design on Thu Apr 16 06:43:43 2026
    From Newsgroup: sci.electronics.design

    joegwinn@comcast.net wrote:

    This is a new kind of floating-point number, likely good for AI, but
    lots of other uses will turn up.

    From IEEE Spectrum (March 2026 issue):

    AI has driven an explosion of new number formats: the ways in which
    numbers are represented digitally. Engineers are looking at every
    possible way to save computation time and energy, including shortening
    the number of bits used to represent data. But what works for AI
    doesn't necessarily work for scientific computing, be it for
    computational physics, biology, fluid dynamics, or engineering
    simulations. IEEE Spectrum spoke with Laslo Hunhold, who recently
    joined Barcelona-based Openchip as an AI engineer, about his efforts
    to develop a bespoke number format for scientific computing.

    .<https://spectrum.ieee.org/number-formats-ai-scientific-computing>

    Joe

    Do not know much about what that guy did.
    But I noticed I can do most 'scientific' things with 32 bits (in asm at that). For example the Fourier transform in
    https://panteltje.nl/panteltje/pic/scope_pic/
    asm source downloadable on that site
    I did using 32 bit integer.

    Is that science?
    Of course when AI wants to do a divide by zero using Albert E.'s brain fog, then it will likely need infinite bits to do the wormhole dance...


    My conclusion: 32 bits is enough for most things

    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From john larkin@jl@glen--canyon.com to sci.electronics.design on Thu Apr 16 02:16:50 2026
    From Newsgroup: sci.electronics.design

    On Wed, 15 Apr 2026 18:33:16 -0400, joegwinn@comcast.net wrote:


    This is a new kind of floating-point number, likely good for AI, but
    lots of other uses will turn up.

    From IEEE Spectrum (March 2026 issue):

    AI has driven an explosion of new number formats: the ways in which
    numbers are represented digitally. Engineers are looking at every
    possible way to save computation time and energy, including shortening
    the number of bits used to represent data. But what works for AI
    doesn't necessarily work for scientific computing, be it for
    computational physics, biology, fluid dynamics, or engineering
    simulations. IEEE Spectrum spoke with Laslo Hunhold, who recently
    joined Barcelona-based Openchip as an AI engineer, about his efforts
    to develop a bespoke number format for scientific computing.

    .<https://spectrum.ieee.org/number-formats-ai-scientific-computing>

    Joe

    John Larkin
    Highland Tech Glen Canyon Design Center
    Lunatic Fringe Electronics
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From john larkin@jl@glen--canyon.com to sci.electronics.design on Thu Apr 16 02:18:58 2026
    From Newsgroup: sci.electronics.design

    On Wed, 15 Apr 2026 20:17:20 -0400, Phil Hobbs <pcdhSpamMeSenseless@electrooptical.net> wrote:

    On 2026-04-15 18:33, joegwinn@comcast.net wrote:

    This is a new kind of floating-point number, likely good for AI, but
    lots of other uses will turn up.

    From IEEE Spectrum (March 2026 issue):

    AI has driven an explosion of new number formats: the ways in which
    numbers are represented digitally. Engineers are looking at every
    possible way to save computation time and energy, including shortening
    the number of bits used to represent data. But what works for AI
    doesn't necessarily work for scientific computing, be it for
    computational physics, biology, fluid dynamics, or engineering
    simulations. IEEE Spectrum spoke with Laslo Hunhold, who recently
    joined Barcelona-based Openchip as an AI engineer, about his efforts
    to develop a bespoke number format for scientific computing.

    .<https://spectrum.ieee.org/number-formats-ai-scientific-computing>

    Joe

    I sure hope he ditches denormals!

    Cheers

    Phil Hobbs

    I wrote a math package for the 68332, with the usual functions. The
    format was signed 64 bits, as 32.32.

    Adds and subs were fast, as no normalizations were needed. Integer
    conversions were even faster. Divide was admittedly kinda ugly.

    I figured that anything physical can be expressed as 32.32.
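    A 32.32 format along these lines can be sketched in C. This is a hedged
    reconstruction, not John's 68332 package; fix_mul assumes a compiler with
    __int128 (GCC or Clang), and all the names here are invented.

```c
#include <stdint.h>

/* Signed 32.32 fixed point in an int64_t: value = raw / 2^32. */
typedef int64_t fix32;
#define FIX_ONE ((fix32)1 << 32)

static fix32  fix_from_int(int32_t i) { return (fix32)i << 32; }
static int32_t fix_to_int(fix32 f)    { return (int32_t)(f >> 32); } /* floor */

/* Add/sub are plain integer ops: no normalization step, as noted above. */
static fix32 fix_add(fix32 a, fix32 b) { return a + b; }

/* Multiply needs a 128-bit intermediate, then a shift back down. */
static fix32 fix_mul(fix32 a, fix32 b) {
    return (fix32)(((__int128)a * b) >> 32);
}
```

    Divide is indeed the ugly one: it needs the numerator widened to 96 bits
    before the quotient lands back on the 32.32 grid.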

    John Larkin
    Highland Tech Glen Canyon Design Center
    Lunatic Fringe Electronics
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Martin Brown@'''newspam'''@nonad.co.uk to sci.electronics.design on Thu Apr 16 10:45:50 2026
    From Newsgroup: sci.electronics.design

    On 16/04/2026 01:17, Phil Hobbs wrote:
    On 2026-04-15 18:33, joegwinn@comcast.net wrote:

    This is a new kind of floating-point number, likely good for AI, but
    lots of other uses will turn up.

    From IEEE Spectrum (March 2026 issue):

    AI has driven an explosion of new number formats: the ways in which
    numbers are represented digitally. Engineers are looking at every
    possible way to save computation time and energy, including shortening
    the number of bits used to represent data. But what works for AI
    doesn't necessarily work for scientific computing, be it for
    computational physics, biology, fluid dynamics, or engineering
    simulations. IEEE Spectrum spoke with Laslo Hunhold, who recently
    joined Barcelona-based Openchip as an AI engineer, about his efforts
    to develop a bespoke number format for scientific computing.

    .<https://spectrum.ieee.org/number-formats-ai-scientific-computing>


    A copy of the actual paper is on arXiv here:

    https://arxiv.org/pdf/2404.18603

    It's a log-tapered number system. He calls them takums, as opposed to posits.
    Not an easy read. Be interesting to see if it flies in hardware.
    There has been a *lot* of investment in IEEE754 FP already!

    The CDC7600 et al. got it about right: 60-bit reals were good enough for
    most purposes. float32 was always somewhat lacking in precision. Far too easy
    to have underflows and overflows in quite modest computations.

    In the past I stuck with legacy compiler versions that supported x87
    native 80bit FP (gcc still does today, ICX can be forced to).

    I have my own DIY lightweight float128 class that exploits fused
    multiply and add to provide fast high dynamic range bigger floats
    without the overheads of a full multiprecision math library.

    Joe

    I sure hope he ditches denormals!

    Denorms are not all *that* bad - some modern CPUs can even process them
    at full speed - though many are still glacially slow and in the past
    they used to be even slower (you can set the DAZ flag now if you don't
    care). Often they were handled by a trap and tediously slow microcode.

    The Intel ICX compiler defaults to treating denormals as zero.
    --
    Martin Brown

    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Bill Sloman@bill.sloman@ieee.org to sci.electronics.design on Thu Apr 16 21:28:30 2026
    From Newsgroup: sci.electronics.design

    On 16/04/2026 4:43 pm, Jan Panteltje wrote:
    joegwinn@comcast.net wrote:

    This is a new kind of floating-point number, likely good for AI, but
    lots of other uses will turn up.

    From IEEE Spectrum (March 2026 issue):

    AI has driven an explosion of new number formats: the ways in which
    numbers are represented digitally. Engineers are looking at every
    possible way to save computation time and energy, including shortening
    the number of bits used to represent data. But what works for AI
    doesn't necessarily work for scientific computing, be it for
    computational physics, biology, fluid dynamics, or engineering
    simulations. IEEE Spectrum spoke with Laslo Hunhold, who recently
    joined Barcelona-based Openchip as an AI engineer, about his efforts
    to develop a bespoke number format for scientific computing.

    .<https://spectrum.ieee.org/number-formats-ai-scientific-computing>

    Joe

    Do not know much about what that guy did.
    But I noticed I can do most 'scientific' things with 32 bits (in asm at that). For example the Fourier transform in
    https://panteltje.nl/panteltje/pic/scope_pic/
    asm source downloadable on that site
    I did using 32 bit integer.

    Is that science?
    Of course when AI wants to do a divide by zero using Albert E.'s brain fog, then it will likely need infinite bits to do the wormhole dance...

    My conclusion: 32 bits is enough for most things

    But it is worth testing. For my Ph.D. work I found myself accumulating
    some sums with double precision numbers - rounding errors had a nasty
    way of accumulating when I was adding up hundreds of experimental observations.
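    That accumulation failure is easy to reproduce. In float32 the naive
    running sum of 1.0f stalls at 2^24 = 16,777,216, because adding 1 no
    longer changes the sum; Kahan compensated summation (shown here as one
    standard fix, not necessarily what Bill used) carries the lost low-order
    bits forward. Assumes default IEEE semantics (no -ffast-math).

```c
/* Naive float accumulation stalls once the running sum is so large
   that adding 1.0f rounds back to the same value (at 2^24 for float32). */
static float naive_sum_ones(int n) {
    float s = 0.0f;
    for (int i = 0; i < n; i++) s += 1.0f;
    return s;
}

/* Kahan compensated summation: c accumulates the rounding error of each
   add and feeds it back in, so the total keeps advancing correctly. */
static float kahan_sum_ones(int n) {
    float s = 0.0f, c = 0.0f;
    for (int i = 0; i < n; i++) {
        float y = 1.0f - c;
        float t = s + y;
        c = (t - s) - y;   /* the low-order bits lost by s + y */
        s = t;
    }
    return s;
}
```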
    --
    Bill Sloman, Sydney


    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Martin Brown@'''newspam'''@nonad.co.uk to sci.electronics.design on Thu Apr 16 13:06:17 2026
    From Newsgroup: sci.electronics.design

    On 16/04/2026 07:43, Jan Panteltje wrote:
    joegwinn@comcast.net wrote:

    This is a new kind of floating-point number, likely good for AI, but
    lots of other uses will turn up.

    From IEEE Spectrum (March 2026 issue):

    AI has driven an explosion of new number formats: the ways in which
    numbers are represented digitally. Engineers are looking at every
    possible way to save computation time and energy, including shortening
    the number of bits used to represent data. But what works for AI
    doesn't necessarily work for scientific computing, be it for
    computational physics, biology, fluid dynamics, or engineering
    simulations. IEEE Spectrum spoke with Laslo Hunhold, who recently
    joined Barcelona-based Openchip as an AI engineer, about his efforts
    to develop a bespoke number format for scientific computing.

    .<https://spectrum.ieee.org/number-formats-ai-scientific-computing>

    Joe

    Do not know much about what that guy did.
    But I noticed I can do most 'scientific' things with 32 bits (in asm at that). For example the Fourier transform in
    https://panteltje.nl/panteltje/pic/scope_pic/
    asm source downloadable on that site
    I did using 32 bit integer.

    FFTs are relatively forgiving where numerical precision is concerned.
    The basis functions are perfectly orthogonal summed over the domain.

    Even something as simple as solving a cubic equation x^3 + ax^2 + bx + c = 0
    can easily go wrong when computing in float32 since it involves
    computing a^6. You can work around this lack of dynamic range but it is painful!
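    The dynamic-range failure is easy to demonstrate: float32 tops out near
    3.4e38, so a^6 overflows to infinity once |a| exceeds roughly 2.6e6,
    while double (max around 1.8e308) is untroubled. A minimal sketch
    (function names invented for the example):

```c
#include <math.h>

/* Sixth power via cube-then-square. The float version overflows to
   infinity for |a| above about 2.6e6; the double version does not. */
static float  sixth_power_f(float a)  { float p = a * a * a; return p * p; }
static double sixth_power_d(double a) { double p = a * a * a; return p * p; }
```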

    Double precision also helps a lot when accumulating summations of reals
    or even in FFTs to do recurrence relations for sin/cos(n*w*t)

    Almost all modern FP libraries today promote the float32 argument to
    double and do the computation in double precision rounding the result
    back to float at the end. It avoids a lot of messing about ensuring
    nothing overflows during the intermediate calculations.

    Is that science?
    Of course when AI wants to do a divide by zero using Albert E.'s brain fog, then it will likely need infinite bits to do the wormhole dance...

    My conclusion: 32 bits is enough for most things

    The CDC7600's 60 bits really were good enough for most orbital-dynamics
    computations, which is why astronomical codes used them (and BMEWS too).

    On today's CPUs, 64-bit double precision and 32-bit float have essentially
    the same performance unless you are vectorising or using huge arrays, so
    unless you *really* know what you are doing, double precision is
    preferred for most routine scientific calculations. The exception is
    bulk raw data, where you seldom have more than 4 significant figures.
    --
    Martin Brown

    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Don Y@blockedofcourse@foo.invalid to sci.electronics.design on Thu Apr 16 06:34:39 2026
    From Newsgroup: sci.electronics.design

    On 4/16/2026 5:06 AM, Martin Brown wrote:
    Even something as simple as solving a cubic equation x^3 + ax^2 + bx + c can easily go wrong when computing in float32 since it involves computing a^6. You
    can work around this lack of dynamic range but it is painful!

    The problem with any fixed precision math "library" is the user
    (developer?) has to be cognizant of those limitations while
    using the library.

    If doing "one-off" calculations, he may be observant enough to notice
    things aren't going as expected and question his approach, operator
    ordering, etc.

    But, if embedding a calculation in a piece of code, he may never see
    some corner condition where his approach shits the bed.

    The Sun has a diameter of about 864,400 miles. Imagine a strap
    hugging it at its equator. That strap would be 864,400 * pi miles
    (about 2,715,592.69) long; 864,400 * pi * 5,280 * 12 INCHES
    (about 172,059,952,823.4) long!

    Cut that band and insert an additional 6 inch length (about 172,059,952,829.4).

    What's the gap between that lengthened band and the equatorial surface?
    We both know the answer to be 6/(2*pi) inches, yet a naive "software
    implementation" (with doubles) will expose a discrepancy compared to
    "pen and paper".

    Double precision also helps a lot when accumulating summations of reals or even
    in FFTs to do recurrence relations for sin/cos(n*w*t)

    Almost all modern FP libraries today promote the float32 argument to double and
    do the computation in double precision rounding the result back to float at the
    end. It avoids a lot of messing about ensuring nothing overflows during the intermediate calculations.

    C has implicit promotions for floats that sometimes bite developers
    who read what they want instead of what the code actually *says*.
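    A classic instance of that trap (a minimal sketch, not from any post
    above, with invented function names): the literal 0.1 is a double, so
    comparing a float against it promotes the float, and the two roundings
    of 0.1 disagree.

```c
/* 0.1f is 0.1 rounded to 24-bit precision; the double literal 0.1 is
   rounded to 53 bits. Promotion preserves the float's 24-bit value
   exactly, so the comparison is false. */
static int tenth_compares_equal(void) {
    float f = 0.1f;
    return f == 0.1;          /* f promoted to double: false */
}

/* Values exact in binary survive promotion unchanged. */
static int half_compares_equal(void) {
    float f = 0.5f;
    return f == 0.5;          /* true: 0.5 is exact at both widths */
}
```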

    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From john larkin@jl@glen--canyon.com to sci.electronics.design on Thu Apr 16 07:45:06 2026
    From Newsgroup: sci.electronics.design

    On Thu, 16 Apr 2026 02:16:50 -0700, john larkin <jl@glen--canyon.com>
    wrote:

    On Wed, 15 Apr 2026 18:33:16 -0400, joegwinn@comcast.net wrote:


    This is a new kind of floating-point number, likely good for AI, but
    lots of other uses will turn up.

    From IEEE Spectrum (March 2026 issue):

    AI has driven an explosion of new number formats: the ways in which
    numbers are represented digitally. Engineers are looking at every
    possible way to save computation time and energy, including shortening
    the number of bits used to represent data. But what works for AI
    doesn't necessarily work for scientific computing, be it for
    computational physics, biology, fluid dynamics, or engineering
    simulations. IEEE Spectrum spoke with Laslo Hunhold, who recently
    joined Barcelona-based Openchip as an AI engineer, about his efforts
    to develop a bespoke number format for scientific computing.

    .<https://spectrum.ieee.org/number-formats-ai-scientific-computing>

    Joe

    John Larkin
    Highland Tech Glen Canyon Design Center
    Lunatic Fringe Electronics

    I had to change the title line. Eternal September rejected it for some
    reason.


    John Larkin
    Highland Tech Glen Canyon Design Center
    Lunatic Fringe Electronics
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From joegwinn@joegwinn@comcast.net to sci.electronics.design on Thu Apr 16 11:36:13 2026
    From Newsgroup: sci.electronics.design

    On Thu, 16 Apr 2026 02:18:58 -0700, john larkin <jl@glen--canyon.com>
    wrote:

    On Wed, 15 Apr 2026 20:17:20 -0400, Phil Hobbs <pcdhSpamMeSenseless@electrooptical.net> wrote:

    On 2026-04-15 18:33, joegwinn@comcast.net wrote:

    This is a new kind of floating-point number, likely good for AI, but
    lots of other uses will turn up.

    From IEEE Spectrum (March 2026 issue):

    AI has driven an explosion of new number formats: the ways in which
    numbers are represented digitally. Engineers are looking at every
    possible way to save computation time and energy, including shortening
    the number of bits used to represent data. But what works for AI
    doesn't necessarily work for scientific computing, be it for
    computational physics, biology, fluid dynamics, or engineering
    simulations. IEEE Spectrum spoke with Laslo Hunhold, who recently
    joined Barcelona-based Openchip as an AI engineer, about his efforts
    to develop a bespoke number format for scientific computing.

    .<https://spectrum.ieee.org/number-formats-ai-scientific-computing>

    Joe

    I sure hope he ditches denormals!

    I think he does.


    Cheers

    Phil Hobbs

    I wrote a math package for the 68332, with the usual functions. The
    format was signed 64 bits, as 32.32.

    Adds and subs were fast, as no normalizations were needed. Integer
    conversions were even faster. Divide was admittedly kinda ugly.

    Unless you use power-of-two scaling, allowing bit shifts to do the
    job.


    I figured that anything physical can be expressed as 32.32.

    I've done much the same, but usually more like 16.16 - these computers
    were tiny.

    Joe
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From joegwinn@joegwinn@comcast.net to sci.electronics.design on Thu Apr 16 11:44:34 2026
    From Newsgroup: sci.electronics.design

    On Thu, 16 Apr 2026 13:06:17 +0100, Martin Brown
    <'''newspam'''@nonad.co.uk> wrote:

    On 16/04/2026 07:43, Jan Panteltje wrote:
    joegwinn@comcast.net wrote:

    This is a new kind of floating-point number, likely good for AI, but
    lots of other uses will turn up.

    From IEEE Spectrum (March 2026 issue):

    AI has driven an explosion of new number formats: the ways in which
    numbers are represented digitally. Engineers are looking at every
    possible way to save computation time and energy, including shortening
    the number of bits used to represent data. But what works for AI
    doesn't necessarily work for scientific computing, be it for
    computational physics, biology, fluid dynamics, or engineering
    simulations. IEEE Spectrum spoke with Laslo Hunhold, who recently
    joined Barcelona-based Openchip as an AI engineer, about his efforts
    to develop a bespoke number format for scientific computing.

    .<https://spectrum.ieee.org/number-formats-ai-scientific-computing>

    Joe

    Do not know much about what that guy did.
    But I noticed I can do most 'scientific' things with 32 bits (in asm at that). For example the Fourier transform in
    https://panteltje.nl/panteltje/pic/scope_pic/
    asm source downloadable on that site
    I did using 32 bit integer.

    FFTs are relatively forgiving where numerical precision is concerned.
    The basis functions are perfectly orthogonal summed over the domain.

    Even something as simple as solving a cubic equation x^3 + ax^2 + bx + c = 0
    can easily go wrong when computing in float32 since it involves
    computing a^6. You can work around this lack of dynamic range but it is painful!

    The standard dodge was to scale things such that the numerical values
    were close to unity, so even the sixth power was still close enough to
    unity.


    Double precision also helps a lot when accumulating summations of reals
    or even in FFTs to do recurrence relations for sin/cos(n*w*t)

    Almost all modern FP libraries today promote the float32 argument to
    double and do the computation in double precision rounding the result
    back to float at the end. It avoids a lot of messing about ensuring
    nothing overflows during the intermediate calculations.

    But it's slow.


    Is that science?
    Of course when AI wants to do a divide by zero using Albert E.'s brain fog, then it will likely need infinite bits to do the wormhole dance...

    My conclusion: 32 bits is enough for most things

    CDC7600 60 bits really was good enough for most orbital dynamics computations which is why astronomical codes used them (and BMEWS too).

    Yep.


    Today's CPUs double precision 64bit and float 32 bit have essentially
    the same performance unless you are vectorising or using huge arrays so
    that unless you *really* know what you are doing double precision is preferred for most routine scientific calculations. The exception is
    bulk raw data where you seldom have more than 4 significant figures.

    Also when carrying data from place to place before modern fiber-optic
    transmission systems - even 32-bit was overkill for ADC output data,
    so it was best to send that in integer form, and convert to floats as
    late as possible in the process.

    Joe
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Don Y@blockedofcourse@foo.invalid to sci.electronics.design on Thu Apr 16 08:59:17 2026
    From Newsgroup: sci.electronics.design

    On 4/16/2026 8:44 AM, joegwinn@comcast.net wrote:
    Today's CPUs double precision 64bit and float 32 bit have essentially
    the same performance unless you are vectorising or using huge arrays so
    that unless you *really* know what you are doing double precision is
    preferred for most routine scientific calculations. The exception is
    bulk raw data where you seldom have more than 4 significant figures.

    Also when carrying data from place to place before modern fiber-optic
    transmission systems - even 32-bit was overkill for ADC output data,
    so it was best to send that in integer form, and convert to floats as
    late as possible in the process.
    The problem with that is you have to send along all of the factors
    that allow those integers to be ACCURATELY mapped to the quantities
    they represent. Or, arrange for all of that information to already
    *be* at the "far end".

    Converting to a float/engineering units AT the acquisition site
    lets you encapsulate all of that hardware/domain/application
    specific information AT the acquisition point so you don't have to
    expose all of that detail. Imagine replacing a 10b device with
    a 12b -- how much tinkering will you have to do if the far
    end was expecting 10b data and now sees 12b?
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Jan Panteltje@alien@comet.invalid to sci.electronics.design on Thu Apr 16 15:59:26 2026
    From Newsgroup: sci.electronics.design

    john larkin <jl@glen--canyon.com> wrote:

    I had to change the title line. Eternal September rejected it for some reason.

    Same problem here.
    Same solution!


    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From joegwinn@joegwinn@comcast.net to sci.electronics.design on Thu Apr 16 12:09:10 2026
    From Newsgroup: sci.electronics.design

    On Thu, 16 Apr 2026 07:45:06 -0700, john larkin <jl@glen--canyon.com>
    wrote:

    On Thu, 16 Apr 2026 02:16:50 -0700, john larkin <jl@glen--canyon.com>
    wrote:

    On Wed, 15 Apr 2026 18:33:16 -0400, joegwinn@comcast.net wrote:


    This is a new kind of floating-point number, likely good for AI, but
    lots of other uses will turn up.

    From IEEE Spectrum (March 2026 issue):

    AI has driven an explosion of new number formats: the ways in which
    numbers are represented digitally. Engineers are looking at every
    possible way to save computation time and energy, including shortening
    the number of bits used to represent data. But what works for AI
    doesn't necessarily work for scientific computing, be it for
    computational physics, biology, fluid dynamics, or engineering
    simulations. IEEE Spectrum spoke with Laslo Hunhold, who recently
    joined Barcelona-based Openchip as an AI engineer, about his efforts
    to develop a bespoke number format for scientific computing.

    .<https://spectrum.ieee.org/number-formats-ai-scientific-computing>

    Joe

    John Larkin
    Highland Tech Glen Canyon Design Center
    Lunatic Fringe Electronics

    I had to change the title line. Eternal September rejected it for some reason.

    Probably the apostrophes.

    Joe
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From john larkin@jl@glen--canyon.com to sci.electronics.design on Thu Apr 16 09:31:01 2026
    From Newsgroup: sci.electronics.design

    On Thu, 16 Apr 2026 11:36:13 -0400, joegwinn@comcast.net wrote:

    On Thu, 16 Apr 2026 02:18:58 -0700, john larkin <jl@glen--canyon.com>
    wrote:

    On Wed, 15 Apr 2026 20:17:20 -0400, Phil Hobbs <pcdhSpamMeSenseless@electrooptical.net> wrote:

    On 2026-04-15 18:33, joegwinn@comcast.net wrote:

    This is a new kind of floating-point number, likely good for AI, but
    lots of other uses will turn up.

    From IEEE Spectrum (March 2026 issue):

    AI has driven an explosion of new number formats: the ways in which
    numbers are represented digitally. Engineers are looking at every
    possible way to save computation time and energy, including shortening the number of bits used to represent data. But what works for AI
    doesn't necessarily work for scientific computing, be it for
    computational physics, biology, fluid dynamics, or engineering
    simulations. IEEE Spectrum spoke with Laslo Hunhold, who recently
    joined Barcelona-based Openchip as an AI engineer, about his efforts
    to develop a bespoke number format for scientific computing.

    .<https://spectrum.ieee.org/number-formats-ai-scientific-computing>

    Joe

    I sure hope he ditches denormals!

    I think he does.


    Cheers

    Phil Hobbs

    I wrote a math package for the 68332, with the usual functions. The
    format was signed 64 bits, as 32.32.

    Adds and subs were fast, as no normalizations were needed. Integer conversions were even faster. Divide was admittedly kinda ugly.

    Unless you use power-of-two scaling, allowing bit shifts to do the
    job.

    I was PWMing a heater and wanted to adjust for the unregulated supply
    voltage, which was the only divide in that system. So it didn't have
    to be very good. That was inside the more serious temperature control
    loop.

    I could have done a lookup table, I guess.



    I figured that anything physical can be expressed as 32.32.

    I've done much the same, but usually more like 16.16 - these computers
    were tiny.

    Joe

    Since I was running realtime control loops, the package threw no
    exceptions. It always returned a legal-format value and made its best
    guess.

    68K was a 32-bit machine but the 68332 didn't have floats. And it was
    slow, a 16 MHz CISC processor.

    But it was a joy to code in assembler.


    John Larkin
    Highland Tech Glen Canyon Design Center
    Lunatic Fringe Electronics
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From joegwinn@joegwinn@comcast.net to sci.electronics.design on Thu Apr 16 14:49:17 2026
    From Newsgroup: sci.electronics.design

    On Thu, 16 Apr 2026 09:31:01 -0700, john larkin <jl@glen--canyon.com>
    wrote:

    On Thu, 16 Apr 2026 11:36:13 -0400, joegwinn@comcast.net wrote:

    On Thu, 16 Apr 2026 02:18:58 -0700, john larkin <jl@glen--canyon.com> wrote:

    On Wed, 15 Apr 2026 20:17:20 -0400, Phil Hobbs <pcdhSpamMeSenseless@electrooptical.net> wrote:

    On 2026-04-15 18:33, joegwinn@comcast.net wrote:

    This is a new kind of floating-point number, likely good for AI, but lots of other uses will turn up.

    From IEEE Spectrum (March 2026 issue):

    AI has driven an explosion of new number formats: the ways in which
    numbers are represented digitally. Engineers are looking at every
    possible way to save computation time and energy, including shortening
    the number of bits used to represent data. But what works for AI
    doesn't necessarily work for scientific computing, be it for
    computational physics, biology, fluid dynamics, or engineering
    simulations. IEEE Spectrum spoke with Laslo Hunhold, who recently
    joined Barcelona-based Openchip as an AI engineer, about his efforts to develop a bespoke number format for scientific computing.

    .<https://spectrum.ieee.org/number-formats-ai-scientific-computing>

    Joe

    I sure hope he ditches denormals!

    I think he does.


    Cheers

    Phil Hobbs

    I wrote a math package for the 68332, with the usual functions. The format was signed 64 bits, as 32.32.

    Adds and subs were fast, as no normalizations were needed. Integer conversions were even faster. Divide was admittedly kinda ugly.

    Unless you use power-of-two scaling, allowing bit shifts to do the
    job.

    I was PWMing a heater and wanted to adjust for the unregulated supply voltage, which was the only divide in that system. So it didn't have
    to be very good. That was inside the more serious temperature control
    loop.

    I could have done a lookup table, I guess.

    That is traditional.
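    [The 32.32 scheme being discussed can be sketched as follows. This is an
    illustrative sketch only, with Python integers standing in for the 68332's
    64-bit registers; it is not Larkin's actual package, and all names are
    made up. Note that adds need no normalization, and power-of-two scaling
    turns divides into shifts, as Joe says.]

    ```python
    # Illustrative sketch of signed 32.32 fixed point (not the original package).
    # Python ints stand in for 64-bit registers; all names are made up.

    FRAC_BITS = 32
    ONE = 1 << FRAC_BITS          # the value 1.0 in 32.32

    def to_fix(x: float) -> int:
        """Convert a float to 32.32, rounding to nearest."""
        return round(x * ONE)

    def to_float(f: int) -> float:
        return f / ONE

    # Adds and subs are plain integer operations -- no normalization needed.
    def fadd(a: int, b: int) -> int:
        return a + b

    # Multiply needs the full double-width product shifted back down.
    def fmul(a: int, b: int) -> int:
        return (a * b) >> FRAC_BITS

    # With power-of-two scaling, divide is just an arithmetic shift
    # (Python's >> floors, matching an arithmetic right shift).
    def fdiv_pow2(a: int, k: int) -> int:
        return a >> k             # divide by 2**k

    a = to_fix(3.25)
    b = to_fix(0.5)
    print(to_float(fadd(a, b)))       # 3.75
    print(to_float(fmul(a, b)))       # 1.625
    print(to_float(fdiv_pow2(a, 2)))  # 0.8125
    ```
    
    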


    I figured that anything physical can be expressed as 32.32.

    I've done much the same, but usually more like 16.16 - these computers
    were tiny.

    Joe

    Since I was running realtime control loops, the package threw no
    exceptions. It always returned a legal-format value and made its best
    guess.

    68K was a 32-bit machine but the 68332 didn't have floats. And it was
    slow, a 16 MHz CISC processor.

    This was the processor I was using for realtime, and also for my
    original toaster Mac, which had a socket for the coprocessor chip.
    Apple sold and installed these for a very high price, so I bought the
    chip from Digikey or the then equivalent, and installed it myself. It
    wasn't very hard, but needed care to not bust those pins.

    The realtime use did not have the coprocessors, because FP was still
    far too slow, and achieved far more precision than needed.


    But it was a joy to code in assembler.

    Yes. I knew the fellow who developed the Instruction Set Architecture
    (ISA) for the 6800 and 68000-series processors. They were based on
    the DEC PDP-11 ISA.

    Joe


    John Larkin
    Highland Tech Glen Canyon Design Center
    Lunatic Fringe Electronics
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From joegwinn@joegwinn@comcast.net to sci.electronics.design on Thu Apr 16 14:51:51 2026
    From Newsgroup: sci.electronics.design

    On Thu, 16 Apr 2026 08:59:17 -0700, Don Y
    <blockedofcourse@foo.invalid> wrote:

    On 4/16/2026 8:44 AM, joegwinn@comcast.net wrote:
    Today's CPUs double precision 64bit and float 32 bit have essentially
    the same performance unless you are vectorising or using huge arrays so
    that unless you *really* know what you are doing double precision is
    preferred for most routine scientific calculations. The exception is
    bulk raw data where you seldom have more than 4 significant figures.

    Also when carrying data from place to place before modern fiber-optic
    transmission systems - Even 32 bit was overkill for ADC output data,
    so it was best to send that in integer form, and convert to floats as
    late as possible in the process.
    The problem with that is you have to send along all of the factors
    that allow those integers to be ACCURATELY mapped to the quantities
    they represent. Or, arrange for all of that information to already
    *be* at the "far end".

    Converting to a float/engineering units AT the acquisition site
    lets you encapsulate all of that hardware/domain/application
    specific information AT the acquisition point so you don't have to
    expose all of that detail. Imagine replacing a 10b device with
    a 12b -- how much tinkering will you have to do if the far
    end was expecting 10b data and now sees 12b?

    In those days, we were happy to get anything this fancy to work, even
    if it would have to be totally replaced to add anything.

    Joe
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From =?UTF-8?Q?Niocl=C3=A1s_P=C3=B3l_Caile=C3=A1n?= de Ghloucester@thanks-to@Taf.com to sci.electronics.design on Thu Apr 16 19:14:21 2026
    From Newsgroup: sci.electronics.design

    John Larkin <jl@Glen--Canyon.com> wrote:
    |-------------------------------------------------------------|
    |"I figured that anything physical can be expressed as 32.32."|
    |-------------------------------------------------------------|

    A lot but not all.
    (See HTTP://Gloucester.Insomnia247.NL/ for contact details!)
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From =?UTF-8?Q?Niocl=C3=A1s_P=C3=B3l_Caile=C3=A1n?= de Ghloucester@thanks-to@Taf.com to sci.electronics.design on Thu Apr 16 19:24:51 2026
    From Newsgroup: sci.electronics.design

    Jan Panteltje <alien@comet.invalid> wrote:
    |-----------------------------------------------------------------------|
    |">john larkin <jl@glen--canyon.com> wrote:                             |
    |>I had to change the title line. Eternal September rejected it for some|
    |>reason.                                                               |
    |                                                                       |
    |Same problem here.                                                     |
    |Same solution!"                                                        |
    |-----------------------------------------------------------------------|

    The BOFH does not have that problem!
    (See HTTP://Gloucester.Insomnia247.NL/ for contact details!)
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From bitrex@user@example.net to sci.electronics.design on Thu Apr 16 16:46:55 2026
    From Newsgroup: sci.electronics.design

    On 4/16/2026 5:18 AM, john larkin wrote:
    On Wed, 15 Apr 2026 20:17:20 -0400, Phil Hobbs <pcdhSpamMeSenseless@electrooptical.net> wrote:

    On 2026-04-15 18:33, joegwinn@comcast.net wrote:

    This is a new kind of floating-point number, likely good for AI, but
    lots of other uses will turn up.

    From IEEE Spectrum (March 2026 issue):

    AI has driven an explosion of new number formats -- the ways in which
    numbers are represented digitally. Engineers are looking at every
    possible way to save computation time and energy, including shortening
    the number of bits used to represent data. But what works for AI
    doesn't necessarily work for scientific computing, be it for
    computational physics, biology, fluid dynamics, or engineering
    simulations. IEEE Spectrum spoke with Laslo Hunhold, who recently
    joined Barcelona-based Openchip as an AI engineer, about his efforts
    to develop a bespoke number format for scientific computing.

    .<https://spectrum.ieee.org/number-formats-ai-scientific-computing>

    Joe

    I sure hope he ditches denormals!

    Cheers

    Phil Hobbs

    I wrote a math package for the 68332, with the usual functions. The
    format was signed 64 bits, as 32.32.

    Adds and subs were fast, as no normalizations were needed. Integer
    conversions were even faster. Divide was admittedly kinda ugly.

    I figured that anything physical can be expressed as 32.32.

    John Larkin
    Highland Tech Glen Canyon Design Center
    Lunatic Fringe Electronics

    Neural networks for generative AI tend to be massively
    "overparameterized", so relatively large differences in vector
    scaling/rounding often don't make that much difference in the output.

    Activation functions also tend to be nonlinear and naturally behave kind
    of like mu-law/A-law compression, so you can bit-reduce the internal
    representation for much the same reason you can in a system that
    includes companding.

    So why use e.g. int16 vectors when int8 or int4 (or, remarkably,
    sometimes even int2) can do well enough?
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Don Y@blockedofcourse@foo.invalid to sci.electronics.design on Thu Apr 16 14:29:32 2026
    From Newsgroup: sci.electronics.design

    On 4/16/2026 11:51 AM, joegwinn@comcast.net wrote:
    Converting to a float/engineering units AT the acquisition site
    lets you encapsulate all of that hardware/domain/application
    specific information AT the acquisition point so you don't have to
    expose all of that detail. Imagine replacing a 10b device with
    a 12b -- how much tinkering will you have to do if the far
    end was expecting 10b data and now sees 12b?

    In those days, we were happy to get anything this fancy to work, even
    if it would have to be totally replaced to add anything.

    I've cut more corners than most folks. But, in hindsight, it
    was wasted effort. Hardware has always been cheap -- even when
    it wasn't! Development (and re-development) has always been
    expensive. And, there are costs associated with "reputation"
    (you don't want to be known for a particular bug in your product).

    But, managers are short-sighted; they see the cost of the BoM and
    panic over trying to shave a few dollars off -- at the expense of
    man-months of time.

    I can recall trying to take a few hundred bytes out of a 12KB memory
    image to save the cost of an "extra" EPROM. Of course, once that
    was done, ADDING any functionality looked prohibitively expensive
    ("We'll have to ADD another EPROM...")

    I originally started my current project with that "low cost"
    mindset... use lots of PIC-ish motes feeding a *big* machine.
    And, the big machine keeps getting bigger (more complex, expensive)
    in an attempt to keep the motes dirt cheap.

    But, when you look at *real* costs, the savings are illusions.
    Especially as you start thinking about how you're going to
    market "add ons" (does the user have to upgrade the "big machine"
    if he wants add-ons X, Y or Z? Or, if he has too many W's??)

    Instead, add capability as you add functionality. Let the
    hardware make your job easier and more reliable.

    [Additionally, this gives you more freedom in implementation
    as the interfaces become more abstract and less tied to *a*
    particular implementation]
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From joegwinn@joegwinn@comcast.net to sci.electronics.design on Thu Apr 16 18:38:22 2026
    From Newsgroup: sci.electronics.design

    On Thu, 16 Apr 2026 10:45:50 +0100, Martin Brown
    <'''newspam'''@nonad.co.uk> wrote:

    On 16/04/2026 01:17, Phil Hobbs wrote:
    On 2026-04-15 18:33, joegwinn@comcast.net wrote:

    This is a new kind of floating-point number, likely good for AI, but
    lots of other uses will turn up.

    From IEEE Spectrum (March 2026 issue):

    AI has driven an explosion of new number formats -- the ways in which
    numbers are represented digitally. Engineers are looking at every
    possible way to save computation time and energy, including shortening
    the number of bits used to represent data. But what works for AI
    doesn't necessarily work for scientific computing, be it for
    computational physics, biology, fluid dynamics, or engineering
    simulations. IEEE Spectrum spoke with Laslo Hunhold, who recently
    joined Barcelona-based Openchip as an AI engineer, about his efforts
    to develop a bespoke number format for scientific computing.

    .<https://spectrum.ieee.org/number-formats-ai-scientific-computing>


    A copy of the actual paper is on arXiv here:

    https://arxiv.org/pdf/2404.18603

    It's a log-tapered number system. He calls them takums, vs. posits.
    Not an easy read. It'll be interesting to see if it flies in hardware.
    There has been a *lot* of investment in IEEE754 FP already!

    CDC7600 et al. got it about right: 60-bit reals were good enough for most purposes. float32 was always somewhat lacking in precision. Far too easy
    to have underflows and overflows in quite modest computations.

    In the past I stuck with legacy compiler versions that supported x87
    native 80bit FP (gcc still does today, ICX can be forced to).

    I have my own DIY lightweight float128 class that exploits fused
    multiply and add to provide fast high dynamic range bigger floats
    without the overheads of a full multiprecision math library.
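    [Martin's actual class isn't shown, but the classic building block behind
    such "double-double" lightweight wide floats is Knuth's TwoSum, an
    error-free addition: the value is carried as an unevaluated sum (hi, lo)
    of two IEEE doubles. A sketch, assuming IEEE doubles (which Python floats
    are); his version reportedly also uses FMA for exact products.]

    ```python
    # Knuth's TwoSum: error-free addition of two IEEE doubles.
    # s is the rounded sum; e recovers the rounding error exactly,
    # so a + b == s + e in exact arithmetic.

    def two_sum(a: float, b: float):
        s = a + b
        bp = s - a
        e = (a - (s - bp)) + (b - bp)
        return s, e

    # Adding 1.0 and 1e-17 loses the small term in double precision,
    # but the low word recovers it exactly:
    s, e = two_sum(1.0, 1e-17)
    print(s)   # 1.0
    print(e)   # 1e-17
    ```
    
    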

    Joe

    I sure hope he ditches denormals!

    Denorms are not all *that* bad - some modern CPUs can even process them
    at full speed - though many are still glacially slow and in the past
    they used to be even slower (you can set DAZ flag now if you don't
    care). Often they were handled by a trap and tediously slow microcode.

    The Intel ICX compiler defaults to rounding denormals to zero.

    I think that the advantage of Takums for AI in particular is that one
    can do arithmetic on a random collection of Takums of varying
    representation size (in bits) directly, without needing to convert to
    and from a common representation. And things like overflow and
    underflow don't exist. I don't know if such things as signed Infinity
    or indeterminate or NaN can be represented in any natural way.

    Joe
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From John R Walliker@jrwalliker@gmail.com to sci.electronics.design on Thu Apr 16 23:44:17 2026
    From Newsgroup: sci.electronics.design

    On 16/04/2026 22:29, Don Y wrote:
    On 4/16/2026 11:51 AM, joegwinn@comcast.net wrote:
    Converting to a float/engineering units AT the acquisition site
    lets you encapsulate all of that hardware/domain/application
    specific information AT the acquisition point so you don't have to
    expose all of that detail. Imagine replacing a 10b device with
    a 12b -- how much tinkering will you have to do if the far
    end was expecting 10b data and now sees 12b?

    In those days, we were happy to get anything this fancy to work, even
    if it would have to be totally replaced to add anything.

    I've cut more corners than most folks. But, in hindsight, it
    was wasted effort. Hardware has always been cheap -- even when
    it wasn't! Development (and re-development) has always been
    expensive. And, there are costs associated with "reputation"
    (you don't want to be known for a particular bug in your product).

    But, managers are short-sighted; they see the cost of the BoM and
    panic over trying to shave a few dollars off -- at the expense of
    man-months of time.

    I can recall trying to take a few hundred bytes out of a 12KB memory
    image to save the cost of an "extra" EPROM. Of course, once that
    was done, ADDING any functionality looked prohibitively expensive
    ("We'll have to ADD another EPROM...")

    For a few years I worked with the inventor of the ARM Thumb
    instruction set, Paul Denman. The motivation of that invention
    was exactly what you refer to - shaving off some memory cost in
    the processors used in FAX machines.
    US patent 5784585 awarded 21 July 1998
    John

    I originally started my current project with that "low cost"
    mindset... use lots of PIC-ish motes feeding a *big* machine.
    And, the big machine keeps getting bigger (more complex, expensive)
    in an attempt to keep the motes dirt cheap.

    But, when you look at *real* costs, the savings are illusions.
    Especially as you start thinking about how you're going to
    market "add ons" (does the user have to upgrade the "big machine"
    if he wants add-ons X, Y or Z? Or, if he has too many W's??)

    Instead, add capability as you add functionality. Let the
    hardware make your job easier and more reliable.

    [Additionally, this gives you more freedom in implementation
    as the interfaces become more abstract and less tied to *a*
    particular implementation]

    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From joegwinn@joegwinn@comcast.net to sci.electronics.design on Thu Apr 16 19:00:12 2026
    From Newsgroup: sci.electronics.design

    On Thu, 16 Apr 2026 23:44:17 +0100, John R Walliker
    <jrwalliker@gmail.com> wrote:

    On 16/04/2026 22:29, Don Y wrote:
    On 4/16/2026 11:51 AM, joegwinn@comcast.net wrote:
    Converting to a float/engineering units AT the acquisition site
    lets you encapsulate all of that hardware/domain/application
    specific information AT the acquisition point so you don't have to
    expose all of that detail. Imagine replacing a 10b device with
    a 12b -- how much tinkering will you have to do if the far
    end was expecting 10b data and now sees 12b?

    In those days, we were happy to get anything this fancy to work, even
    if it would have to be totally replaced to add anything.

    I've cut more corners than most folks. But, in hindsight, it
    was wasted effort. Hardware has always been cheap -- even when
    it wasn't! Development (and re-development) has always been
    expensive. And, there are costs associated with "reputation"
    (you don't want to be known for a particular bug in your product).

    But, managers are short-sighted; they see the cost of the BoM and
    panic over trying to shave a few dollars off -- at the expense of
    man-months of time.

    I can recall trying to take a few hundred bytes out of a 12KB memory
    image to save the cost of an "extra" EPROM. Of course, once that
    was done, ADDING any functionality looked prohibitively expensive
    ("We'll have to ADD another EPROM...")

    For a few years I worked with the inventor of the ARM Thumb
    instruction set, Paul Denman. The motivation of that invention
    was exactly what you refer to - shaving off some memory cost in
    the processors used in FAX machines.
    US patent 5784585 awarded 21 July 1998
    John

    I originally started my current project with that "low cost"
    mindset... use lots of PIC-ish motes feeding a *big* machine.
    And, the big machine keeps getting bigger (more complex, expensive)
    in an attempt to keep the motes dirt cheap.

    But, when you look at *real* costs, the savings are illusions.
    Especially as you start thinking about how you're going to
    market "add ons" (does the user have to upgrade the "big machine"
    if he wants add-ons X, Y or Z? Or, if he has too many W's??)

    Instead, add capability as you add functionality. Let the
    hardware make your job easier and more reliable.

    [Additionally, this gives you more freedom in implementation
    as the interfaces become more abstract and less tied to *a*
    particular implementation]

    And nowadays, the evolved ARM ISA has won the world, displacing Intel.

    Joe
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Don Y@blockedofcourse@foo.invalid to sci.electronics.design on Thu Apr 16 16:41:11 2026
    From Newsgroup: sci.electronics.design

    On 4/16/2026 3:44 PM, John R Walliker wrote:
    On 16/04/2026 22:29, Don Y wrote:
    On 4/16/2026 11:51 AM, joegwinn@comcast.net wrote:
    Converting to a float/engineering units AT the acquisition site
    lets you encapsulate all of that hardware/domain/application
    specific information AT the acquisition point so you don't have to
    expose all of that detail. Imagine replacing a 10b device with
    a 12b -- how much tinkering will you have to do if the far
    end was expecting 10b data and now sees 12b?

    In those days, we were happy to get anything this fancy to work, even
    if it would have to be totally replaced to add anything.

    I've cut more corners than most folks. But, in hindsight, it
    was wasted effort. Hardware has always been cheap -- even when
    it wasn't! Development (and re-development) has always been
    expensive. And, there are costs associated with "reputation"
    (you don't want to be known for a particular bug in your product).

    But, managers are short-sighted; they see the cost of the BoM and
    panic over trying to shave a few dollars off -- at the expense of
    man-months of time.

    I can recall trying to take a few hundred bytes out of a 12KB memory
    image to save the cost of an "extra" EPROM. Of course, once that
    was done, ADDING any functionality looked prohibitively expensive
    ("We'll have to ADD another EPROM...")

    For a few years I worked with the inventor of the ARM Thumb
    instruction set, Paul Denman. The motivation of that invention
    was exactly what you refer to - shaving off some memory cost in
    the processors used in FAX machines.

    Yes, but the world has changed. My first commercial product had 128 bytes
    of RAM. Essentially every byte was effectively a union -- of all of
    the possible uses that were being made of that byte (based on which part
    of the code was executing). The stack was defined as a specific number
    of singleton *bytes* -- the number derived from a static analysis of
    the deepest call stack plus the worst case pile-up of ISRs.

    It took 4 hours to turn the crank to evaluate a change in the software.
    And, that was done on the actual hardware (no emulators, etc.). You
    probed specific pins in the design waiting for your code to twiddle
    them to indicate its progress.

    This was a horrid way of using developer's time! It did nothing to
    improve the quality of the product. And, being the *sole* entry in
    that market, likely did nothing to increase sales from what they would
    have been at a higher sell price.

    Now, I build more elaborate runtimes to support (and constrain!) the application. Online diagnostics. Hot plugging of software updates.
    Runtime invariants. etc. Things that consume resources but enhance
    the quality of the delivered code. Because my time is worth far more
    than the added hardware costs.
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Don Y@blockedofcourse@foo.invalid to sci.electronics.design on Thu Apr 16 16:48:04 2026
    From Newsgroup: sci.electronics.design

    On 4/16/2026 4:00 PM, joegwinn@comcast.net wrote:
    And nowadays, the evolved ARM ISA has won the world, displacing Intel.
    Most of the "old guard" made horrible decisions in the MCU market.
    2650, 8x300, 16032, Z800/8000, 99115, 29000, etc. Remember Intel EPROMs? Motogorilla held on for a while with the 68K.

    I still have a copy of the 86010 databook. And, GI before the PIC's
    fame (notoriety?).
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Arie de Muijnck@noreply@ademu.nl to sci.electronics.design on Fri Apr 17 02:01:15 2026
    From Newsgroup: sci.electronics.design

    On 2026-04-16 17:59, Don Y wrote:
    On 4/16/2026 8:44 AM, joegwinn@comcast.net wrote:
    Today's CPUs double precision 64bit and float 32 bit have essentially
    the same performance unless you are vectorising or using huge arrays so
    that unless you *really* know what you are doing double precision is
    preferred for most routine scientific calculations. The exception is
    bulk raw data where you seldom have more than 4 significant figures.

    Also when carrying data from place to place before modern fiber-optic
    transmission systems - Even 32 bit was overkill for ADC output data,
    so it was best to send that in integer form, and convert to floats as
    late as possible in the process.
    The problem with that is you have to send along all of the factors
    that allow those integers to be ACCURATELY mapped to the quantities
    they represent. Or, arrange for all of that information to already
    *be* at the "far end".

    Converting to a float/engineering units AT the acquisition site
    lets you encapsulate all of that hardware/domain/application
    specific information AT the acquisition point so you don't have to
    expose all of that detail. Imagine replacing a 10b device with
    a 12b -- how much tinkering will you have to do if the far
    end was expecting 10b data and now sees 12b?

    That was handled in process control equipment by using the ADC MSB-aligned.
    Full scale was defined as 1.0, so 10 bits would give FFC0, 12 bits FFF0.
    Only the resolution improved; the scale factor didn't change. No tinkering.
    I've designed ADC and DAC boards that worked that way.
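    [The MSB-aligned convention Arie describes can be sketched as follows;
    illustrative only, and `msb_align` is a made-up name. The codes land
    left-justified in a 16-bit word, so a 12-bit converter only fills in
    more low bits -- the consumer's scaling never changes.]

    ```python
    # Left-justify an ADC code in a fixed-width word so full scale is
    # always the same fraction of 2**width, regardless of converter bits.

    def msb_align(code: int, nbits: int, width: int = 16) -> int:
        """Return the nbits-wide ADC code MSB-aligned in a width-bit field."""
        return code << (width - nbits)

    print(hex(msb_align(0x3FF, 10)))       # 0xffc0 -- 10-bit full scale
    print(hex(msb_align(0xFFF, 12)))       # 0xfff0 -- 12-bit full scale

    # Either way, the consumer divides by 2**16 for a fraction of full scale:
    print(msb_align(0x200, 10) / 65536.0)  # 0.5 for a mid-scale 10-bit code
    ```
    
    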

    Arie

    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Don Y@blockedofcourse@foo.invalid to sci.electronics.design on Thu Apr 16 17:30:03 2026
    From Newsgroup: sci.electronics.design

    On 4/16/2026 5:01 PM, Arie de Muijnck wrote:
    On 2026-04-16 17:59, Don Y wrote:
    Converting to a float/engineering units AT the acquisition site
    lets you encapsulate all of that hardware/domain/application
    specific information AT the acquisition point so you don't have to
    expose all of that detail. Imagine replacing a 10b device with
    a 12b -- how much tinkering will you have to do if the far
    end was expecting 10b data and now sees 12b?

    That was handled in process control equipment by using the ADC MSB-aligned.
    Full scale was defined as 1.0, so 10 bits would give FFC0, 12 bits FFF0.
    Only the resolution improved; the scale factor didn't change. No tinkering.
    I've designed ADC and DAC boards that worked that way.

    But you can't report the available resolution or calibration
    factors per input point. If you don't expose those data, then
    calibration doesn't know "how close is close".

    You want to hide that information *in* the acquisition node, not have
    to drag it around and require its "consumers" to apply those factors.
    No one *wants* to bear the cost of NBS traceable devices!

    This also gives you freedom to change the measurement technology
    without having to alert the application to such a change.

    [Nowadays, this would be called "edge processing"]
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From john larkin@jl@glen--canyon.com to sci.electronics.design on Thu Apr 16 18:25:12 2026
    From Newsgroup: sci.electronics.design

    On Thu, 16 Apr 2026 14:49:17 -0400, joegwinn@comcast.net wrote:

    On Thu, 16 Apr 2026 09:31:01 -0700, john larkin <jl@glen--canyon.com>
    wrote:

    On Thu, 16 Apr 2026 11:36:13 -0400, joegwinn@comcast.net wrote:

    On Thu, 16 Apr 2026 02:18:58 -0700, john larkin <jl@glen--canyon.com> wrote:

    On Wed, 15 Apr 2026 20:17:20 -0400, Phil Hobbs <pcdhSpamMeSenseless@electrooptical.net> wrote:

    On 2026-04-15 18:33, joegwinn@comcast.net wrote:

    This is a new kind of floating-point number, likely good for AI, but lots of other uses will turn up.

    From IEEE Spectrum (March 2026 issue):

    AI has driven an explosion of new number formats -- the ways in which
    numbers are represented digitally. Engineers are looking at every
    possible way to save computation time and energy, including shortening
    the number of bits used to represent data. But what works for AI
    doesn't necessarily work for scientific computing, be it for
    computational physics, biology, fluid dynamics, or engineering
    simulations. IEEE Spectrum spoke with Laslo Hunhold, who recently
    joined Barcelona-based Openchip as an AI engineer, about his efforts
    to develop a bespoke number format for scientific computing.

    .<https://spectrum.ieee.org/number-formats-ai-scientific-computing>
    Joe

    I sure hope he ditches denormals!

    I think he does.


    Cheers

    Phil Hobbs

    I wrote a math package for the 68332, with the usual functions. The format was signed 64 bits, as 32.32.

    Adds and subs were fast, as no normalizations were needed. Integer conversions were even faster. Divide was admittedly kinda ugly.

    Unless you use power-of-two scaling, allowing bit shifts to do the
    job.

    I was PWMing a heater and wanted to adjust for the unregulated supply voltage, which was the only divide in that system. So it didn't have
    to be very good. That was inside the more serious temperature control
    loop.

    I could have done a lookup table, I guess.

    That is traditional.


    I figured that anything physical can be expressed as 32.32.

    I've done much the same, but usually more like 16.16 - these computers were tiny.

    Joe

    Since I was running realtime control loops, the package threw no exceptions. It always returned a legal-format value and made its best guess.

    68K was a 32-bit machine but the 68332 didn't have floats. And it was
    slow, a 16 MHz CISC processor.

    This was the processor I was using for realtime, and also for my
    original toaster Mac, which had a socket for the coprocessor chip.
    Apple sold and installed these for a very high price, so I bought the
    chip from Digikey or then equivalent, and installed it myself. It
    wasn't very hard, but needed care to not bust those pins.

    The realtime use did not have the coprocessors, because FP was still
    far too slow, and achieved far more precision than needed.


    But it was a joy to code in assembler.

    Yes. I knew the fellow who developed the Instruction Set Architecture (ISA) for the 6800 and 68000-series processors. They were based on
    (ISA) for the 6800 and 68000-series processors. They were based on
    the DEC PDP-11 ISA.

    Joe


    John Larkin
    Highland Tech Glen Canyon Design Center
    Lunatic Fringe Electronics

    The PDP-11 was revolutionary. You could

    ASR PC

    namely arithmetic shift the program counter if you wanted. It was that
    general.

    I wrote two RTOS's for the 11. It was beautiful.


    John Larkin
    Highland Tech Glen Canyon Design Center
    Lunatic Fringe Electronics
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From joegwinn@joegwinn@comcast.net to sci.electronics.design on Fri Apr 17 11:52:14 2026
    From Newsgroup: sci.electronics.design

    On Thu, 16 Apr 2026 18:25:12 -0700, john larkin <jl@glen--canyon.com>
    wrote:

    On Thu, 16 Apr 2026 14:49:17 -0400, joegwinn@comcast.net wrote:

    On Thu, 16 Apr 2026 09:31:01 -0700, john larkin <jl@glen--canyon.com> wrote:

    On Thu, 16 Apr 2026 11:36:13 -0400, joegwinn@comcast.net wrote:

    On Thu, 16 Apr 2026 02:18:58 -0700, john larkin <jl@glen--canyon.com> wrote:

    On Wed, 15 Apr 2026 20:17:20 -0400, Phil Hobbs <pcdhSpamMeSenseless@electrooptical.net> wrote:

    On 2026-04-15 18:33, joegwinn@comcast.net wrote:

    This is a new kind of floating-point number, likely good for AI, but lots of other uses will turn up.

    From IEEE Spectrum (March 2026 issue):

    AI has driven an explosion of new number formats -- the ways in which
    numbers are represented digitally. Engineers are looking at every
    possible way to save computation time and energy, including shortening
    the number of bits used to represent data. But what works for AI
    doesn't necessarily work for scientific computing, be it for
    computational physics, biology, fluid dynamics, or engineering
    simulations. IEEE Spectrum spoke with Laslo Hunhold, who recently
    joined Barcelona-based Openchip as an AI engineer, about his efforts
    to develop a bespoke number format for scientific computing.

    .<https://spectrum.ieee.org/number-formats-ai-scientific-computing>
    Joe

    I sure hope he ditches denormals!

    I think he does.


    Cheers

    Phil Hobbs

    I wrote a math package for the 68332, with the usual functions. The format was signed 64 bits, as 32.32.

    Adds and subs were fast, as no normalizations were needed. Integer conversions were even faster. Divide was admittedly kinda ugly.

    Unless you use power-of-two scaling, allowing bit shifts to do the
    job.

    I was PWMing a heater and wanted to adjust for the unregulated supply
    voltage, which was the only divide in that system. So it didn't have
    to be very good. That was inside the more serious temperature control
    loop.

    I could have done a lookup table, I guess.

    That is traditional.


    I figured that anything physical can be expressed as 32.32.

    I've done much the same, but usually more like 16.16 - these computers were tiny.

    Joe

    Since I was running realtime control loops, the package threw no exceptions. It always returned a legal-format value and made its best guess.

    68K was a 32-bit machine but the 68332 didn't have floats. And it was slow, a 16 MHz CISC processor.

    This was the processor I was using for realtime, and also for my
    original toaster Mac, which had a socket for the coprocessor chip.
    Apple sold and installed these for a very high price, so I bought the
    chip from Digikey or then equivalent, and installed it myself. It
    wasn't very hard, but needed care to not bust those pins.

    The realtime use did not have the coprocessors, because FP was still
    far too slow, and achieved far more precision than needed.


    But it was a joy to code in assembler.

    Yes. I knew the fellow who developed the Instruction Set Architecture
    (ISA) for the 6800 and 68000-series processors. They were based on
    the DEC PDP-11 ISA.

    Joe


    John Larkin
    Highland Tech Glen Canyon Design Center
    Lunatic Fringe Electronics

    The PDP-11 was revolutionary. You could

    ASR PC

    namely arithmetic-shift the program counter if you wanted. It was that
    general.

    The CompSci term is "orthogonal", which is the ideal for all languages,
    natural and invented. It's parallel to composability. One achieves
    great expressive power, plus the ability to speak complete nonsense.
    But that's a semantic issue...


    I wrote two RTOS's for the 11. It was beautiful.

    I wrote lots of code for those 68xxx computers in the mid-1980s, but
    we bought the real time operating system MTOS (Multi-Tasking OS) from
    a small outfit (two guys) called IPI (Industrial Programming Inc), now
    long gone. Its claim to fame was the ability to work with multiple
    SBCs and global memory boards in parallel, in VMEbus crate slots.

    .<https://ia601506.us.archive.org/13/items/users-guide-for-multi-tasking-operating-system-mtos-68/User%27s%20Guide%20for%20Multi-Tasking%20Operating%20System%20MTOS-68.pdf>




    John Larkin
    Highland Tech Glen Canyon Design Center
    Lunatic Fringe Electronics
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Phil Hobbs@pcdhSpamMeSenseless@electrooptical.net to sci.electronics.design on Fri Apr 17 15:53:53 2026
    From Newsgroup: sci.electronics.design

    On 2026-04-16 08:06, Martin Brown wrote:
    On 16/04/2026 07:43, Jan Panteltje wrote:
    joegwinn@comcast.net wrote:

    This is a new kind of floating-point number, likely good for AI, but
    lots of other uses will turn up.

    From IEEE Spectrum (March 2026 issue):

    AI has driven an explosion of new number formats: the ways in which
    numbers are represented digitally. Engineers are looking at every
    possible way to save computation time and energy, including shortening
    the number of bits used to represent data. But what works for AI
    doesn't necessarily work for scientific computing, be it for
    computational physics, biology, fluid dynamics, or engineering
    simulations. IEEE Spectrum spoke with Laslo Hunhold, who recently
    joined Barcelona-based Openchip as an AI engineer, about his efforts
    to develop a bespoke number format for scientific computing.

    .<https://spectrum.ieee.org/number-formats-ai-scientific-computing>

    Joe

    Do not know much about what that guy did.
    But I noticed I can do most 'scientific' things with 32 bits (in asm
    at that).
    For example the Fourier transform in
      https://panteltje.nl/panteltje/pic/scope_pic/
      asm source downloadable on that site
    I did using 32 bit integer.

    FFTs are relatively forgiving where numerical precision is concerned.
    The basis functions are perfectly orthogonal summed over the domain.

    Even something as simple as solving a cubic equation x^3 + ax^2 + bx + c = 0
    can easily go wrong when computing in float32, since the standard
    solution involves computing a^6. You can work around this lack of
    dynamic range, but it is painful!
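    A quick illustration of that dynamic-range point, using a made-up
    coefficient a = 1e7 (my value, not from the post): a^6 = 1e42 is far
    beyond FLT_MAX (about 3.4e38), so the single-precision product
    overflows to infinity, while double handles it with room to spare.

    ```c
    #include <assert.h>
    #include <math.h>
    #include <stdio.h>

    /* a^6 computed entirely in single precision. */
    static float a6_float(float a)   { return a * a * a * a * a * a; }

    /* The same product carried out in double precision. */
    static double a6_double(double a) { return a * a * a * a * a * a; }

    int main(void) {
        /* a = 1e7: a modest cubic coefficient (made-up example value). */
        assert(isinf(a6_float(1.0e7f)));    /* 1e42 > FLT_MAX: overflows to +inf */
        assert(isfinite(a6_double(1.0e7))); /* ~1e42 fits easily in a double */
        printf("float: %g  double: %g\n", a6_float(1.0e7f), a6_double(1.0e7));
        return 0;
    }
    ```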

    Double precision also helps a lot when accumulating summations of reals
    or even in FFTs to do recurrence relations for sin/cos(n*w*t)
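    A small demonstration of the accumulation problem (the counts and
    values here are mine): adding 1e-8 ten million times onto 1.0 does
    nothing at all in float32, because each addend is below half an ULP
    of 1.0 and is rounded away, while double accumulates the expected ~1.1.

    ```c
    #include <assert.h>
    #include <stdio.h>

    /* Accumulate n copies of x onto 1.0 in single precision. */
    static float sum_float(int n, float x) {
        float s = 1.0f;
        for (int i = 0; i < n; i++) s += x;
        return s;
    }

    /* The same accumulation carried out in double precision. */
    static double sum_double(int n, double x) {
        double s = 1.0;
        for (int i = 0; i < n; i++) s += x;
        return s;
    }

    int main(void) {
        /* Each 1e-8 addend is below half an ULP of 1.0f (~6e-8): it is lost. */
        float  f = sum_float(10000000, 1.0e-8f);
        double d = sum_double(10000000, 1.0e-8);
        assert(f == 1.0f);              /* the float sum never moved */
        assert(d > 1.09 && d < 1.11);   /* double accumulates to ~1.1 */
        printf("float: %g  double: %.6f\n", f, d);
        return 0;
    }
    ```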

    Almost all modern FP libraries today promote a float32 argument to
    double, do the computation in double precision, and round the result
    back to float at the end. That avoids a lot of messing about ensuring
    nothing overflows during the intermediate calculations.

    Is that science?
    Of course when AI wants to do a divide by zero using Albert E.'s brain
    fog,
    then it will likely need infinite bits to do the wormhole dance...

    My conclusion: 32 bits is enough for most things

    The CDC 7600's 60-bit word really was good enough for most orbital
    dynamics computations, which is why astronomical codes used it (and
    BMEWS too).

    On today's CPUs, 64-bit double and 32-bit float have essentially the
    same performance unless you are vectorising or using huge arrays, so
    unless you *really* know what you are doing, double precision is
    preferred for most routine scientific calculations. The exception is
    bulk raw data, where you seldom have more than 4 significant figures.


    There are also schemes such as FDTD (finite difference time domain) EM
    simulation, where the roundoff error is pretty nearly constant--all the
    numerical noise flows out of the simulation domain at the speed of
    light! (Provided you use the right sort of absorbing boundaries, of
    course.)

    My clusterized simulator is all done in single precision floats, which
    makes a big difference in speed.

    Cheers

    Phil Hobbs
    --
    Dr Philip C D Hobbs
    Principal Consultant
    ElectroOptical Innovations LLC / Hobbs ElectroOptics
    Optics, Electro-optics, Photonics, Analog Electronics
    Briarcliff Manor NY 10510

    http://electrooptical.net
    http://hobbs-eo.com

    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Nioclás Pól Caileán de Ghloucester@thanks-to@Taf.com to sci.electronics.design on Fri Apr 17 19:56:59 2026
    From Newsgroup: sci.electronics.design

    Don Y <blockedofcourse@foo.invalid> wrote:
    |-------------------------------------------------------------------|
    |"[. . .] there are costs associated with "reputation"              |
    |(you don't want to be known for a particular bug in your product)."|
    |-------------------------------------------------------------------|

    Willy H. Gates III wanted to be known for DONKEY.BAS, but its
    flickering is not precisely a bug. Donkeys are also not precisely
    bugs.

    |--------------------|
    |"But, managers are "| idiots.
    |--------------------|

    |----------------------------------------------------------------|
    |"But, when you look at *real* costs, the savings are illusions."| |----------------------------------------------------------------|

    True.
    (See HTTP://Gloucester.Insomnia247.NL/ for contact details!)
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Don Y@blockedofcourse@foo.invalid to sci.electronics.design on Fri Apr 17 16:24:40 2026
    From Newsgroup: sci.electronics.design

    On 4/17/2026 12:56 PM, Nioclás Pól Caileán de Ghloucester wrote:
    Don Y <blockedofcourse@foo.invalid> wrote:
    |-------------------------------------------------------------------|
    |"[. . .] there are costs associated with "reputation"              |
    |(you don't want to be known for a particular bug in your product)."|
    |-------------------------------------------------------------------|

    Willy H. Gates III wanted to be known for DONKEY.BAS, but its
    flickering is not precisely a bug. Donkeys are also not precisely
    bugs.
    Customers/users occasionally welcome design bugs -- if they can be exploited
    to their advantage.

    Pinball machines (back in the electro-mechanical era) used to "randomly"
    generate a 2 digit value (rightmost of which was always zero, of course)
    which would be compared to the rightmost two digits of a player's
    score at the end of the game. If the two agreed, a free game was
    awarded.

    Of course, rather than a true random number generator, the number
    generated was driven by observable events during game play -- like
    the number of times a particular target was struck.

    Being able to count such events, knowing where the "number generator"
    was at the start of the game AND the sequence of "random" values
    that it would provide, you could predict the setting of the generator
    at any given time. So, just prior to the end of the game, deliberately
    tilt the game when your score coincides with the "predicted" value
    of the number generator -- and you can continue with a new game.

    The physical clearance between some targets and the cover glass on
    some machines is close enough that you can sit on the glass to deflect
    it just enough to inhibit those targets "resetting".

    Coin mechanisms can be tricked into accepting pennies in lieu of
    quarters.

    Some video games allow the player to pass THROUGH solid objects.

    In-band tone signalling for telephones gave rise to "Cap'n Crunch"
    and the whole phreaking movement. (TAP, anyone?)

    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Ross Finlayson@ross.a.finlayson@gmail.com to sci.electronics.design on Fri Apr 17 18:06:34 2026
    From Newsgroup: sci.electronics.design

    On 04/17/2026 04:24 PM, Don Y wrote:
    On 4/17/2026 12:56 PM, Nioclás Pól Caileán de Ghloucester wrote:
    Don Y <blockedofcourse@foo.invalid> wrote:
    |-------------------------------------------------------------------|
    |"[. . .] there are costs associated with "reputation" |
    |(you don't want to be known for a particular bug in your product)."|
    |-------------------------------------------------------------------|

    Willy H. Gates III wanted to be known for DONKEY.BAS, but its
    flickering is not precisely a bug. Donkeys are also not precisely
    bugs.
    Customers/users occasionally welcome design bugs -- if they can be
    exploited
    to their advantage.

    Pinball machines (back in the electro-mechanical era) used to "randomly"
    generate a 2 digit value (rightmost of which was always zero, of course)
    which would be compared to the rightmost two digits of a player's
    score at the end of the game. If the two agreed, a free game was
    awarded.

    Of course, rather than a true random number generator, the number
    generated was driven by observable events during game play -- like
    the number of times a particular target was struck.

    Being able to count such events, knowing where the "number generator"
    was at the start of the game AND the sequence of "random" values
    that it would provide, you could predict the setting of the generator
    at any given time. So, just prior to the end of the game, deliberately
    tilt the game when your score coincides with the "predicted" value
    of the number generator -- and you can continue with a new game.

    The physical clearance between some targets and the cover glass on
    some machines is close enough that you can sit on the glass to deflect
    it just enough to inhibit those targets "resetting".

    Coin mechanisms can be tricked into accepting pennies in lieu of
    quarters.

    Some video games allow the player to pass THROUGH solid objects.

    In-band tone signalling for telephones gave rise to "Cap'n Crunch"
    and the whole phreaking movement. (TAP, anyone?)


    One time I hit 53 goals on that soccer pinball at "The Pinball Museum".
    It made the top ten all time.

    At the back or top of the field was a keeper; after activating
    some bumpers, he would wobble more-or-less back and forth, or
    "randomly". Hitting shots from the flippers past the keeper
    and into the goal scored, and the machine went
    "Gooooaaall." So it was a matter of luck, and reflexes.


    "Lock is lit."


    --- Synchronet 3.21f-Linux NewsLink 1.2