• Re: Well DUH ! AI People Finally Realize They Can Ditch Most Floating-Point

    From The Natural Philosopher@21:1/5 to Pancho on Mon Oct 14 11:16:48 2024
    On 13/10/2024 14:23, Pancho wrote:
    On 10/13/24 13:25, The Natural Philosopher wrote:
    On 13/10/2024 10:15, Richard Kettlewell wrote:
    "186282@ud0s4.net" <186283@ud0s4.net> writes:
    https://techxplore.com/news/2024-10-integer-addition-algorithm-energy-ai.html
    [...]
       The default use of floating-point really took off when
       'neural networks' became popular in the 80s. Seemed the
       ideal way to keep track of all the various weightings
       and values.

       But, floating-point operations use a huge amount of
       CPU/NPU power.

       Seems somebody finally realized that the 'extra resolution'
       of floating-point was rarely necessary and you can just
       use large integers instead. Integer math is FAST and uses
       LITTLE power .....

    That’s situational. In this case, the paper isn’t about using large
    integers, it’s about very low precision floating point representations.
    They’ve just found a way to approximate floating point multiplication
    without multiplying the fractional parts of the mantissas.
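
    For a concrete picture of how a multiply can turn into an integer add,
    here is a minimal C sketch of the classic Mitchell-style bit trick (an
    illustration only, not necessarily the paper's exact scheme): a float's
    bit pattern, read as an integer, is roughly a scaled log2 of its value,
    so adding two bit patterns approximates multiplying the values.

        #include <stdint.h>
        #include <stdio.h>
        #include <string.h>

        /* Sketch: approximate multiply via integer addition of IEEE-754
           bit patterns (positive normal inputs only). Subtracting the
           bits of 1.0f cancels the doubled exponent bias. Worst-case
           relative error is around 11%. */
        static float approx_mul(float a, float b)
        {
            uint32_t ia, ib, ip;
            float p;
            memcpy(&ia, &a, sizeof ia);
            memcpy(&ib, &b, sizeof ib);
            ip = ia + ib - 0x3F800000u;   /* 0x3F800000 = bits of 1.0f */
            memcpy(&p, &ip, sizeof p);
            return p;
        }

        int main(void)
        {
            printf("approx 3*7 = %f (exact 21)\n", approx_mul(3.0f, 7.0f));
            return 0;
        }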

    Last I heard they were going to use D to As feeding analog
    multipliers. And convert back to D afterwards. for a speed/ precision
    tradeoff.


    That sounds like the 1960s. I guess this idea does sound like a slide rule.

    No, apparently it's a new (sic!) idea.

    I think that even if it does not work successfully it is great that
    people are thinking outside the box.
    Analogue computers could offer massive parallelism for simulating
    complex dynamic systems.


    --
    There’s a mighty big difference between good, sound reasons and reasons
    that sound good.

    Burton Hillis (William Vaughn, American columnist)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Computer Nerd Kev@21:1/5 to The Natural Philosopher on Tue Oct 15 07:10:32 2024
    The Natural Philosopher <tnp@invalid.invalid> wrote:
    Analogue computers could offer massive parallelism for simulating
    complex dynamic systems.

    If they have a solution for the typical problem of noise in the
    analogue signals drowning out the "complex" simulations. Optical
    methods are interesting.

    --
    __ __
    #_ < |\| |< _#

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From The Natural Philosopher@21:1/5 to Computer Nerd Kev on Mon Oct 14 23:47:11 2024
    On 14/10/2024 22:10, Computer Nerd Kev wrote:
    The Natural Philosopher <tnp@invalid.invalid> wrote:
    Analogue computers could offer massive parallelism for simulating
    complex dynamic systems.

    If they have a solution for the typical problem of noise in the
    analogue signals drowning out the "complex" simulations. Optical
    methods are interesting.


    If they don't, then that is in itself a valuable indication: if two
    runs give different results, they are modelling a chaotic system.
    It doesn't matter how much precision you put on junk data, it's still junk.


    --
    "It is an established fact to 97% confidence limits that left wing
    conspirators see right wing conspiracies everywhere"

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From 186283@ud0s4.net@21:1/5 to The Natural Philosopher on Tue Oct 15 02:31:58 2024
    On 10/14/24 6:16 AM, The Natural Philosopher wrote:
    On 13/10/2024 14:23, Pancho wrote:
    On 10/13/24 13:25, The Natural Philosopher wrote:
    On 13/10/2024 10:15, Richard Kettlewell wrote:
    "186282@ud0s4.net" <186283@ud0s4.net> writes:
    https://techxplore.com/news/2024-10-integer-addition-algorithm-energy-ai.html

    [...]
       The default use of floating-point really took off when
       'neural networks' became popular in the 80s. Seemed the
       ideal way to keep track of all the various weightings
       and values.

       But, floating-point operations use a huge amount of
       CPU/NPU power.

       Seems somebody finally realized that the 'extra resolution'
       of floating-point was rarely necessary and you can just
       use large integers instead. Integer math is FAST and uses
       LITTLE power .....

    That’s situational. In this case, the paper isn’t about using large
    integers, it’s about very low precision floating point representations.
    They’ve just found a way to approximate floating point multiplication
    without multiplying the fractional parts of the mantissas.

    Last I heard they were going to use D to As feeding analog
    multipliers. And convert back to D afterwards. for a speed/ precision
    tradeoff.


    That sounds like the 1960s. I guess this idea does sound like a slide
    rule.

    No, apparently it's a new (sic!) idea.

    I think that even if it does not work successfully it is great that
    people are thinking outside the box.
    Analogue computers could offer massive parallelism for simulating
    complex dynamic systems.


    Yea, but not much PRECISION beyond a stage or two
    of calx :-)

    No "perfect" fixes.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From 186283@ud0s4.net@21:1/5 to Richard Kettlewell on Tue Oct 15 02:30:12 2024
    On 10/13/24 5:15 AM, Richard Kettlewell wrote:
    "186282@ud0s4.net" <186283@ud0s4.net> writes:
    https://techxplore.com/news/2024-10-integer-addition-algorithm-energy-ai.html
    [...]
    The default use of floating-point really took off when
    'neural networks' became popular in the 80s. Seemed the
    ideal way to keep track of all the various weightings
    and values.

    But, floating-point operations use a huge amount of
    CPU/NPU power.

    Seems somebody finally realized that the 'extra resolution'
    of floating-point was rarely necessary and you can just
    use large integers instead. Integer math is FAST and uses
    LITTLE power .....

    That’s situational. In this case, the paper isn’t about using large integers, it’s about very low precision floating point representations. They’ve just found a way to approximate floating point multiplication without multiplying the fractional parts of the mantissas.


    They need to take it further - integers instead
    of ANY floating-point absolutely anywhere possible.

    The greenies have begun to freak over the sheer electric
    power required by "AI" systems. It IS rather a lot. It's
    getting worse than even bitcoin mining now. Judging by
    the article, a large percentage of that energy is going
    into un-needed floating-point calx.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From 186283@ud0s4.net@21:1/5 to Pancho on Tue Oct 15 02:43:08 2024
    On 10/13/24 6:45 AM, Pancho wrote:
    On 10/13/24 03:54, 186282@ud0s4.net wrote:

    The new technique is basic—instead of using complex
    floating-point multiplication (FPM), the method uses integer
    addition. Apps use FPM to handle extremely large or small
    numbers, allowing applications to carry out calculations
    using them with extreme precision. It is also the most
    energy-intensive part of AI number crunching.


    That isn't really true. Floats can handle big and small, but the reason people use them is for simplicity.


    "Simple", usually. Energy/time-efficient ... not so much.


    The problem is that typical integer calculations are not closed: the
    result is not always an integer. Addition is fine, but the result of
    division typically is not. So if you use integers to model a problem,
    every time you do a division (or exp, log, sin, etc.) you need to make
    a decision about how to force the result into an integer.
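
    For instance, a small C illustration (hypothetical values) of that
    "force it into an integer" decision:

        #include <stdio.h>

        int main(void)
        {
            /* Two common policies for pushing a quotient back into the
               integers; the programmer has to pick one explicitly. */
            int a = 7, b = 2;
            int trunc_q = a / b;            /* truncation:       7/2 -> 3 */
            int round_q = (a + b / 2) / b;  /* round-to-nearest: 7/2 -> 4
                                               (positive a and b only)    */
            printf("%d %d\n", trunc_q, round_q);
            return 0;
        }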


    The question is how EXACT the precision HAS to be for
    most "AI" uses. Might be safe to throw away a few
    decimal points at the bottom.


    Floats actually use integral values for exponent and mantissa, but they
    automatically make ballpark reasonable decisions about how to force the
    results into integral values for mantissa and exponent, meaning
    operations are effectively closed (ignoring exceptions). So the
    programmer doesn't have to worry so much.

    Floating point ops are actually quite efficient, much less of a concern
    than something like a branch misprediction. A 20x speed up (energy
    saving) sounds close to a theoretical maximum. I would be surprised if
    it can be achieved in anything but a few cases.

    Well ... the article insists they are NOT energy-efficient,
    esp when performed en-masse. I think their prelim tests
    suggested an almost 95% savings (sometimes).

    Anyway, at least the IDEA is back out there again. We
    old guys, oft dealing with microcontrollers, knew the
    advantages of wider integers over even 'small' FP.

    Math processors disguised the amount of processing
    required for FP ... but it was STILL there.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mike Scott@21:1/5 to 186282@ud0s4.net on Tue Oct 15 08:52:03 2024
    On 15/10/2024 07:30, 186282@ud0s4.net wrote:

    That’s situational. In this case, the paper isn’t about using large
    integers, it’s about very low precision floating point representations.
    They’ve just found a way to approximate floating point multiplication
    without multiplying the fractional parts of the mantissas.


      They need to take it further - integers instead
      of ANY floating-point absolutely anywhere possible.

    Reminds me of PDP8 days.

    We were doing fft's by the million. All done in 12-bit integer
    arithmetic with a block exponent. Lookup tables for logs were simple
    enough, as were trig functions. Not that anything was exactly "fast" --
    IIRC a 1.2usec basic instruction cycle.
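
    For anyone who never met the trick, a rough C sketch of a block
    exponent (an illustration with invented values, not the original
    PDP-8 code): the whole array shares one exponent, and when values
    outgrow the integer range the block is renormalized by shifting.

        #include <stdint.h>
        #include <stdio.h>

        /* Element i represents x[i] * 2^(*bexp). When the largest
           magnitude exceeds the working range, halve every element and
           bump the shared exponent (arithmetic right shift assumed). */
        static void renormalize(int16_t *x, int n, int *bexp, int16_t limit)
        {
            int16_t max = 0;
            for (int i = 0; i < n; i++) {
                int16_t m = (int16_t)(x[i] < 0 ? -x[i] : x[i]);
                if (m > max) max = m;
            }
            while (max > limit) {       /* e.g. limit = 2047 for 12 bits */
                for (int i = 0; i < n; i++)
                    x[i] >>= 1;
                (*bexp)++;
                max >>= 1;
            }
        }

        int main(void)
        {
            int16_t x[3] = { 3000, -1500, 800 };
            int bexp = 0;
            renormalize(x, 3, &bexp, 2047);
            printf("%d %d %d  (block exponent %d)\n", x[0], x[1], x[2], bexp);
            return 0;
        }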

    The machine did have a FP unit, but it was too s..l..o..w.. by far for this.

    The circle goes around.

    --
    Mike Scott
    Harlow, England

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Kettlewell@21:1/5 to 186282@ud0s4.net on Tue Oct 15 09:14:40 2024
    "186282@ud0s4.net" <186283@ud0s4.net> writes:
    On 10/13/24 5:15 AM, Richard Kettlewell wrote:
    "186282@ud0s4.net" <186283@ud0s4.net> writes:
    https://techxplore.com/news/2024-10-integer-addition-algorithm-energy-ai.html
    [...]
    The default use of floating-point really took off when
    'neural networks' became popular in the 80s. Seemed the
    ideal way to keep track of all the various weightings
    and values.

    But, floating-point operations use a huge amount of
    CPU/NPU power.

    Seems somebody finally realized that the 'extra resolution'
    of floating-point was rarely necessary and you can just
    use large integers instead. Integer math is FAST and uses
    LITTLE power .....
    That’s situational. In this case, the paper isn’t about using large
    integers, it’s about very low precision floating point representations.
    They’ve just found a way to approximate floating point multiplication
    without multiplying the fractional parts of the mantissas.

    They need to take it further - integers instead
    of ANY floating-point absolutely anywhere possible.

    Perhaps you could publish your alternative algorithm that satisfies
    their use case.

    --
    https://www.greenend.org.uk/rjk/

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From The Natural Philosopher@21:1/5 to 186282@ud0s4.net on Tue Oct 15 12:03:32 2024
    On 15/10/2024 07:31, 186282@ud0s4.net wrote:
    On 10/14/24 6:16 AM, The Natural Philosopher wrote:

    I think that even if it does not work successfully it is great that
    people are thinking outside the box.
    Analogue computers could offer massive parallelism for simulating
    complex dynamic systems.


      Yea, but not much PRECISION beyond a stage or two
      of calx  :-)

      No "perfect" fixes.

    As I said, let's say we are simulating airflow over a fast moving
    object - now normally the fluid dynamics CFM is crap and it is cheaper
    and more accurate to throw it in a wind tunnel.

    The wind tunnel is not measuring data to any high accuracy but it is
    using atomic level measurement cells in enormous quantities in parallel.

    The problem with CFM is you can't have too many 'cells' or you run out
    of computer power. It's a step beyond 3D modelling, where the more
    triangles you have the closer to real everything looks, but it's a
    similar problem.

    But a wind tunnel built out of analogue 'cells' might be quite simple in concept. Just large in silicon scale.

    And it wouldn't need to be 'programmed' as its internal logic would be constructed to be the equations that govern fluid dynamics. All you
    would then do is take a 3D surface and constrain every cell in that
    computer on that surface to have zero output.

    If I were a graduate again that's a PhD project that would appeal...

    --
    There is nothing a fleet of dispatchable nuclear power plants cannot do
    that cannot be done worse and more expensively and with higher carbon
    emissions and more adverse environmental impact by adding intermittent renewable energy.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From The Natural Philosopher@21:1/5 to 186282@ud0s4.net on Tue Oct 15 12:06:30 2024
    On 15/10/2024 07:43, 186282@ud0s4.net wrote:
    The question is how EXACT the precision HAS to be for
      most "AI" uses. Might be safe to throw away a few
      decimal points at the bottom.

    My thesis is that *in some applications*, more low quality calculations
    beat fewer high quality ones anyway.
    I wasn't thinking of AI, as much as modelling complex turbulent flow in
    aero and hydrodynamics or weather forecasting.
    --
    Outside of a dog, a book is a man's best friend. Inside of a dog it's
    too dark to read.

    Groucho Marx

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Chris Ahlstrom@21:1/5 to rbowman on Tue Oct 15 15:46:05 2024
    rbowman wrote this copyrighted missive and expects royalties:

    On Tue, 15 Oct 2024 02:43:08 -0400, 186282@ud0s4.net wrote:

    The question is how EXACT the precision HAS to be for most "AI" uses.
    Might be safe to throw away a few decimal points at the bottom.

    It's usually referred to as 'machine learning' rather than AI but when you look at TinyML on edge devices doing image recognition, wake word
    processing, and other tasks it's impressive how much you can throw away
    and still get a reasonable quality of results.

    https://www.tinyml.org/

    This goes back to the slide rule days. Sure, you could whip out your book
    of six place tables and get seemingly more accurate results but did all
    those decimal places mean anything in the real world? Computers took the
    pain out of calculations but also tended to avoid the questions of 'what
    does this really mean in the real world'.

    In high school chemistry, we learned how to apply uncertainty ranges (plus or minus) to measurements and how to accumulate ranges based on multiple measurements.

    The political polls state ranges, but nothing about the alpha, the N, and,
    most importantly, the wording of the poll questions and the nature of the sampling.

    --
    It was the Law of the Sea, they said. Civilization ends at the waterline. Beyond that, we all enter the food chain, and not always right at the top.
    -- Hunter S. Thompson

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From rbowman@21:1/5 to 186282@ud0s4.net on Tue Oct 15 19:20:32 2024
    On Tue, 15 Oct 2024 02:43:08 -0400, 186282@ud0s4.net wrote:

    The question is how EXACT the precision HAS to be for most "AI" uses.
    Might be safe to throw away a few decimal points at the bottom.

    It's usually referred to as 'machine learning' rather than AI but when you
    look at TinyML on edge devices doing image recognition, wake word
    processing, and other tasks it's impressive how much you can throw away
    and still get a reasonable quality of results.

    https://www.tinyml.org/

    This goes back to the slide rule days. Sure, you could whip out your book
    of six place tables and get seemingly more accurate results but did all
    those decimal places mean anything in the real world? Computers took the
    pain out of calculations but also tended to avoid the questions of 'what
    does this really mean in the real world'.
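
    As a toy illustration of how much gets thrown away (a generic sketch,
    not any particular TinyML framework's scheme): symmetric 8-bit weight
    quantization maps floats onto -127..127 with a single scale factor,
    and inference then runs on the small integers.

        #include <math.h>
        #include <stdint.h>
        #include <stdio.h>

        /* Quantize a few made-up weights to int8 and back, printing the
           round-trip values. scale = max|w| / 127. */
        int main(void)
        {
            float w[4] = { 0.31f, -1.72f, 0.05f, 0.88f };
            float maxabs = 0.0f;
            for (int i = 0; i < 4; i++)
                if (fabsf(w[i]) > maxabs) maxabs = fabsf(w[i]);

            float scale = maxabs / 127.0f;
            for (int i = 0; i < 4; i++) {
                int8_t q = (int8_t)lrintf(w[i] / scale);  /* quantize   */
                float back = q * scale;                   /* dequantize */
                printf("%+.4f -> %4d -> %+.4f\n", w[i], q, back);
            }
            return 0;
        }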

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From rbowman@21:1/5 to Chris Ahlstrom on Wed Oct 16 02:35:52 2024
    On Tue, 15 Oct 2024 15:46:05 -0400, Chris Ahlstrom wrote:

    The political polls state ranges, but nothing about the alpha, the N,
    and,
    most importantly, the wording of the poll questions and the nature of
    the sampling.

    I try to ignore polls and most of the hype. A few years back I went to bed expecting Hillary Clinton to be the president elect when I woke up. The DJ
    on the radio station I listen to in the morning was a definite lefty. When he
    played Norah Jones' 'Carry On' I found I'd been mistaken.

    https://www.youtube.com/watch?v=DqA25Ug71Mc

    "Let's just forget
    Leave it behind
    And carry on."

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From 186283@ud0s4.net@21:1/5 to The Natural Philosopher on Wed Oct 16 02:54:04 2024
    On 10/15/24 7:03 AM, The Natural Philosopher wrote:
    On 15/10/2024 07:31, 186282@ud0s4.net wrote:
    On 10/14/24 6:16 AM, The Natural Philosopher wrote:

    I think that even if it does not work successfully it is great that
    people are thinking outside the box.
    Analogue computers could offer massive parallelism for simulating
    complex dynamic systems.


       Yea, but not much PRECISION beyond a stage or two
       of calx  :-)

       No "perfect" fixes.

    As I said, let's say we are simulating airflow over  a fast moving
    object - now normally the fluid dynamics CFM is crap and it is cheaper
    and more accurate to throw it in a wind tunnel.

    Very likely ... though I've never thrown anything into
    a wind tunnel.

    Analog still has a place. Until you go atomic it really
    is a very analog universe.

    In theory you can do "digitized analog" ... signal
    levels that seem/act analog but are really finely
    discrete digital values. This CAN minimize the
    chain-calc accuracy problem.


    The wind tunnel is not measuring data to any high accuracy but it is
    using atomic level measurement cells in enormous quantities in parallel.

    The problem with CFM is you can't have too many 'cells' or you run out
    of computer power. It's a step beyond 3D modelling, where the more
    triangles you have the closer to real everything looks, but it's a
    similar problem.

    But a wind tunnel built out of analogue 'cells' might be quite simple in concept. Just large in silicon scale.

    And it wouldn't need to be 'programmed' as its internal logic would be constructed to be the equations that govern fluid dynamics. All you
    would then do is take a 3D surface and constrain every cell in that
    computer on that surface to have zero output.

    If I were a graduate again that's a PhD project that would appeal...

    I've seen old analog computers - mostly aimed at finding
    spring rates and such. Rs, caps, inductors ... you can
    sim a somewhat complex mechanical system just by plugging
    in modules. Real-time and adequately accurate. You can
    fake it in digital now however ... but it's not as
    beautiful/natural.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From 186283@ud0s4.net@21:1/5 to The Natural Philosopher on Wed Oct 16 02:38:08 2024
    On 10/15/24 7:06 AM, The Natural Philosopher wrote:
    On 15/10/2024 07:43, 186282@ud0s4.net wrote:
    The question is how EXACT the precision HAS to be for
       most "AI" uses. Might be safe to throw away a few
       decimal points at the bottom.

    My thesis is that *in some applications*, more low quality calculations
    beat fewer high quality ones anyway.
    I wasn't thinking of AI, as much as modelling complex turbulent flow in
    aero and hydrodynamics or weather forecasting.

    Well, weather, any decimal points are BS anyway :-)

    However, AI and fuzzy logic and neural networks - it
    has just been standard practice to use floats to handle
    all values. I've got books going back into the mid 80s
    on all those and you JUST USED floats.

    BUT ... as said, even a 32-bit int can handle fairly
    large vals. Mult little vals by 100 or 1000 and you can
    throw away the need for decimal points - and the POWER
    required to do such calx. Accuracy should be more than
    adequate.
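
    A tiny C sketch of that scaled-integer style (illustrative values):
    keep everything multiplied by 1000, add freely, and rescale after a
    multiply.

        #include <stdint.h>
        #include <stdio.h>

        #define SCALE 1000   /* three "faked" decimal places */

        int main(void)
        {
            int32_t a = 1234;   /* represents 1.234 */
            int32_t b = 2500;   /* represents 2.500 */
            int32_t sum  = a + b;                    /* adds need no fixup */
            int32_t prod = (int32_t)(((int64_t)a * b) / SCALE);
            printf("sum  = %d.%03d\n", sum / SCALE, sum % SCALE);   /* 3.734 */
            printf("prod = %d.%03d\n", prod / SCALE, prod % SCALE); /* 3.085 */
            return 0;
        }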

    In any case, I'm happy SOMEONE finally realized this.

    TOOK a really LONG time though ......

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From 186283@ud0s4.net@21:1/5 to rbowman on Wed Oct 16 03:13:44 2024
    On 10/15/24 10:35 PM, rbowman wrote:
    On Tue, 15 Oct 2024 15:46:05 -0400, Chris Ahlstrom wrote:

    The political polls state ranges, but nothing about the alpha, the N,
    and,
    most importantly, the wording of the poll questions and the nature of
    the sampling.

    I try to ignore polls and most of the hype. A few years back I went to bed expecting Hillary Clinton to be the president elect when I woke up. The DJ
    on the radio station I listen to in the morning was a definite lefty. When he
    played Norah Jones' 'Carry On' I found I'd been mistaken.

    https://www.youtube.com/watch?v=DqA25Ug71Mc


    Trump IS grating ... no question ... but K is just
    an empty skull. That's been her JOB. Can't have
    someone like that in times like these.

    Not entirely sure of the Linux angle here though ...

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Pancho@21:1/5 to 186282@ud0s4.net on Wed Oct 16 08:23:45 2024
    On 10/16/24 07:38, 186282@ud0s4.net wrote:
    On 10/15/24 7:06 AM, The Natural Philosopher wrote:
    On 15/10/2024 07:43, 186282@ud0s4.net wrote:
    The question is how EXACT the precision HAS to be for
       most "AI" uses. Might be safe to throw away a few
       decimal points at the bottom.

    My thesis is that *in some applications*, more low quality
    calculations beat fewer high quality ones anyway.
    I wasn't thinking of AI, as much as modelling complex turbulent flow
    in aero and hydrodynamics or weather forecasting

      Well, weather, any decimal points are BS anyway :-)

      However, AI and fuzzy logic and neural networks - it
      has just been standard practice to use floats to handle
      all values. I've got books going back into the mid 80s
      on all those and you JUST USED floats.

      BUT ... as said, even a 32-bit int can handle fairly
      large vals. Mult little vals by 100 or 1000 and you can
      throw away the need for decimal points - and the POWER
      required to do such calx. Accuracy should be more than
      adequate.

      In any case, I'm happy SOMEONE finally realized this.

      TOOK a really LONG time though ......

    AIUI, GPU/Cuda only offered 32 bit floats, no doubles. So I think people
    always knew.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Kettlewell@21:1/5 to 186282@ud0s4.net on Wed Oct 16 11:56:52 2024
    "186282@ud0s4.net" <186283@ud0s4.net> writes:
    BUT ... as said, even a 32-bit int can handle fairly
    large vals. Mult little vals by 100 or 1000 and you can
    throw away the need for decimal points - and the POWER
    required to do such calx. Accuracy should be more than
    adequate.

    You’re talking about fixed-point arithmetic, which is already used where appropriate (although the scale is a power of 2 so you can shift
    products down into the right place rather than dividing).
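
    Concretely, a generic Q16.16 sketch (an illustration, not anything
    from the paper): with a power-of-two scale the rescale after a
    multiply is a shift, not a division.

        #include <stdint.h>
        #include <stdio.h>

        /* Q16.16: value = raw / 65536. A multiply widens to 64 bits,
           then a right shift drops the product back into place. */
        typedef int32_t q16_16;

        static q16_16 q_mul(q16_16 a, q16_16 b)
        {
            return (q16_16)(((int64_t)a * b) >> 16);
        }

        int main(void)
        {
            q16_16 a = 3 << 15;             /* 1.5  in Q16.16 */
            q16_16 b = 9 << 14;             /* 2.25 in Q16.16 */
            printf("%f\n", q_mul(a, b) / 65536.0);  /* 3.375 */
            return 0;
        }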

    In any case, I'm happy SOMEONE finally realized this.

    TOOK a really LONG time though ......

    It’s obvious that you’ve not actually read or understood the paper that this thread is about.

    --
    https://www.greenend.org.uk/rjk/

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Chris Ahlstrom@21:1/5 to 186282@ud0s4.net on Wed Oct 16 07:40:46 2024
    186282@ud0s4.net wrote this copyrighted missive and expects royalties:

    On 10/15/24 10:35 PM, rbowman wrote:
    On Tue, 15 Oct 2024 15:46:05 -0400, Chris Ahlstrom wrote:

    The political polls state ranges, but nothing about the alpha, the N,
    and,
    most importantly, the wording of the poll questions and the nature of
    the sampling.

    I try to ignore polls and most of the hype. A few years back I went to
    bed expecting Hillary Clinton to be the president elect when I woke up.
    The DJ on the radio station I listen to in the morning was a definite
    lefty. When he played Norah Jones' 'Carry On' I found I'd been mistaken.

    https://www.youtube.com/watch?v=DqA25Ug71Mc

    Trump IS grating ... no question ... but K is just
    an empty skull. That's been her JOB. Can't have
    someone like that in times like these.

    Trump's the empty skull. Well, it is full... of nonsense and bile.

    Not entirely sure of the Linux angle here though ...

    Harris as VP was like Linux, working reliably in the background.

    She's no empty skull. She was a prosecutor, a district attorney, a state attorney general, a US senator, and the vice president. But some people cannot stand that in a woman.

    --
    I began many years ago, as so many young men do, in searching for the
    perfect woman. I believed that if I looked long enough, and hard enough,
    I would find her and then I would be secure for life. Well, the years
    and romances came and went, and I eventually ended up settling for someone
    a lot less than my idea of perfection. But one day, after many years
    together, I lay there on our bed recovering from a slight illness. My
    wife was sitting on a chair next to the bed, humming softly and watching
    the late afternoon sun filtering through the trees. The only sounds to
    be heard elsewhere were the clock ticking, the kettle downstairs starting
    to boil, and an occasional schoolchild passing beneath our window. And
    as I looked up into my wife's now wrinkled face, but still warm and
    twinkling eyes, I realized something about perfection... It comes only
    with time.
    -- James L. Collymore, "Perfect Woman"

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Charlie Gibbs@21:1/5 to Chris Ahlstrom on Wed Oct 16 16:16:23 2024
    On 2024-10-16, Chris Ahlstrom <OFeem1987@teleworm.us> wrote:

    Harris as VP was like Linux, working reliably in the background.

    She's no empty skull. She was a prosecutor, a district attorney, a state attorney general, a US senator, and the vice president. But some people cannot
    stand that in a woman.

    <applause>

    I began many years ago, as so many young men do, in searching for the
    perfect woman. I believed that if I looked long enough, and hard enough,
    I would find her and then I would be secure for life. Well, the years
    and romances came and went, and I eventually ended up settling for someone
    a lot less than my idea of perfection. But one day, after many years together, I lay there on our bed recovering from a slight illness. My
    wife was sitting on a chair next to the bed, humming softly and watching
    the late afternoon sun filtering through the trees. The only sounds to
    be heard elsewhere were the clock ticking, the kettle downstairs starting
    to boil, and an occasional schoolchild passing beneath our window. And
    as I looked up into my wife's now wrinkled face, but still warm and
    twinkling eyes, I realized something about perfection... It comes only
    with time.
    -- James L. Collymore, "Perfect Woman"

    Beautiful. Here's Heinlein's take on it:

    A man does not insist on physical beauty in a woman who
    builds up his morale. After a while he realizes that
    she _is_ beautiful - he just hadn't noticed it at first.

    --
    /~\ Charlie Gibbs | We'll go down in history as the
    \ / <cgibbs@kltpzyxm.invalid> | first society that wouldn't save
    X I'm really at ac.dekanfrus | itself because it wasn't cost-
    / \ if you read it the right way. | effective. -- Kurt Vonnegut

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From The Natural Philosopher@21:1/5 to Chris Ahlstrom on Wed Oct 16 20:02:57 2024
    On 16/10/2024 12:40, Chris Ahlstrom wrote:
    She's no empty skull. She was a prosecutor, a district attorney, a state attorney general, a US senator, and the vice president. But some people cannot
    stand that in a woman.

    I note that you omitted the adjective 'successful' from her resumé...

    That pretty much describes our new prime minister.

    He is all things considered, a man, and a completely incompetent cunt.

    elected because the Tories seemed even worse.

    In fact they are almost identical in their utter failure to address
    really important issues and make a lot of noise about irrelevant tripe.
    And put their snouts in the trough.


    --
    “The fundamental cause of the trouble in the modern world today is that
    the stupid are cocksure while the intelligent are full of doubt."

    - Bertrand Russell

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From rbowman@21:1/5 to Chris Ahlstrom on Wed Oct 16 21:07:56 2024
    On Wed, 16 Oct 2024 07:40:46 -0400, Chris Ahlstrom wrote:

    Harris as VP was like Linux, working reliably in the background.

    There you have the problem. If she was working reliably in the background
    for the last three and a half years she can hardly claim to be a candidate
    for change. Obama could make that work after eight years of Bush.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Chris Ahlstrom@21:1/5 to rbowman on Thu Oct 17 06:54:25 2024
    rbowman wrote this copyrighted missive and expects royalties:

    On Wed, 16 Oct 2024 07:40:46 -0400, Chris Ahlstrom wrote:

    Harris as VP was like Linux, working reliably in the background.

    There you have the problem. If she was working reliably in the background
    for the last three and a half years she can hardly claim to be a candidate for change. Obama could make that work after eight years of Bush.

    Whatever, dude. Incremental change is fine with me.

    The big changes we really need (eliminating Citizens United, taking medical insurers out of the system, and so much more) will never happen.

    The game is rigged.

    Heh heh:

    --
    We have only two things to worry about: That things will never get
    back to normal, and that they already have.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From The Natural Philosopher@21:1/5 to Chris Ahlstrom on Thu Oct 17 13:38:58 2024
    On 17/10/2024 11:54, Chris Ahlstrom wrote:
    rbowman wrote this copyrighted missive and expects royalties:

    On Wed, 16 Oct 2024 07:40:46 -0400, Chris Ahlstrom wrote:

    Harris as VP was like Linux, working reliably in the background.

    There you have the problem. If she was working reliably in the background
    for the last three and a half years she can hardly claim to be a
    candidate for change. Obama could make that work after eight years of
    Bush.

    Whatever, dude. Incremental change is fine with me.

    The big changes we really need (eliminating Citizens United, taking medical insurers out of the system, and so much more) will never happen.

    It will if you let Putin take Alaska and China have the whole west coast.

    The game is rigged.

    Heh heh:


    --
    “it should be clear by now to everyone that activist environmentalism
    (or environmental activism) is becoming a general ideology about humans,
    about their freedom, about the relationship between the individual and
    the state, and about the manipulation of people under the guise of a
    'noble' idea. It is not an honest pursuit of 'sustainable development,'
    a matter of elementary environmental protection, or a search for
    rational mechanisms designed to achieve a healthy environment. Yet
    things do occur that make you shake your head and remind yourself that
    you live neither in Joseph Stalin’s Communist era, nor in the Orwellian utopia of 1984.”

    Vaclav Klaus

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From 186283@ud0s4.net@21:1/5 to Richard Kettlewell on Fri Oct 18 00:20:39 2024
    On 10/16/24 6:56 AM, Richard Kettlewell wrote:
    "186282@ud0s4.net" <186283@ud0s4.net> writes:
    BUT ... as said, even a 32-bit int can handle fairly
    large vals. Mult little vals by 100 or 1000 and you can
    throw away the need for decimal points - and the POWER
    required to do such calx. Accuracy should be more than
    adequate.

    You’re talking about fixed-point arithmetic, which is already used where appropriate (although the scale is a power of 2 so you can shift
    products down into the right place rather than dividing).

    In any case, I'm happy SOMEONE finally realized this.

    TOOK a really LONG time though ......

    It’s obvious that you’ve not actually read or understood the paper that this thread is about.

    Maybe I understood it better than you ... and from
    4+ decades of experiences.

    But, argue as you will. I'm not too proud. Many are
    better than me, many more are worse.

    IF this was just about the power reqs of various forms
    of fixed/floating then there'd be little point in the
    article. Breaking the FP tradition as much as possible
    and going to (wide) ints really CAN save tons of power
    and time. With current AI systems this is a BIG deal.

    There was a period where I had to do some quasi-AI stuff
    for micro-controllers. Crude NNs/Fuzzy mostly. Not too
    sophisticated, yet the approach DID make 'em better.

    Now DO check into what's needed for FP on a PIC or 8051.
    It's nasty. By seeing beyond the usual examples in books
    and articles - which all used FP for "convenience" -
    I found the vast advantages of substituting ints instead.
    Easy to FAKE a few decimal-points of precision using
    ints. That's usually more than good enough.

    You can splice an NPU into most any kind of processor
    BUT the steps to do FP don't really change, still suck
    up power. Just SEEMS trivial because it's faster.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Kettlewell@21:1/5 to 186282@ud0s4.net on Fri Oct 18 17:34:17 2024
    "186282@ud0s4.net" <186283@ud0s4.net> writes:
    On 10/16/24 6:56 AM, Richard Kettlewell wrote:
    "186282@ud0s4.net" <186283@ud0s4.net> writes:
    BUT ... as said, even a 32-bit int can handle fairly
    large vals. Mult little vals by 100 or 1000 and you can
    throw away the need for decimal points - and the POWER
    required to do such calx. Accuracy should be more than
    adequate.
    You’re talking about fixed-point arithmetic, which is already used
    where appropriate (although the scale is a power of 2 so you can
    shift products down into the right place rather than dividing).

    In any case, I'm happy SOMEONE finally realized this.

    TOOK a really LONG time though ......

    It’s obvious that you’ve not actually read or understood the paper
    that this thread is about.

    Maybe I understood it better than you ... and from
    4+ decades of experiences.

    Perhaps you could explain why you keep talking about integer arithmetic
    when the paper is about floating point arithmetic, then.

    --
    https://www.greenend.org.uk/rjk/

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From 186282ud0s3@21:1/5 to Richard Kettlewell on Fri Oct 18 14:12:13 2024
    On 10/18/24 12:34 PM, Richard Kettlewell wrote:
    "186282@ud0s4.net" <186283@ud0s4.net> writes:
    On 10/16/24 6:56 AM, Richard Kettlewell wrote:
    "186282@ud0s4.net" <186283@ud0s4.net> writes:
    BUT ... as said, even a 32-bit int can handle fairly
    large vals. Mult little vals by 100 or 1000 and you can
    throw away the need for decimal points - and the POWER
    required to do such calx. Accuracy should be more than
    adequate.
    You’re talking about fixed-point arithmetic, which is already used
    where appropriate (although the scale is a power of 2 so you can
    shift products down into the right place rather than dividing).

    In any case, I'm happy SOMEONE finally realized this.

    TOOK a really LONG time though ......

    It’s obvious that you’ve not actually read or understood the paper
    that this thread is about.

    Maybe I understood it better than you ... and from
    4+ decades of experiences.

    Perhaps you could explain why you keep talking about integer arithmetic
    when the paper is about floating point arithmetic, then.


    Umm ... because the idea of swapping FP for ints in
    order to save lots of power was introduced?

    This issue is getting to be *political* now - the
    ultra-greenies freaking about how much power the
    'AI' computing centers require.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Chris Ahlstrom@21:1/5 to All on Fri Oct 18 15:56:35 2024
    186282ud0s3 wrote this copyrighted missive and expects royalties:

    On 10/18/24 12:34 PM, Richard Kettlewell wrote:

    <snip>

    Perhaps you could explain why you keep talking about integer arithmetic
    when the paper is about floating point arithmetic, then.

    Umm ... because the idea of swapping FP for ints in
    order to save lots of power was introduced?

    This issue is getting to be *political* now - the
    ultra-greenies freaking about how much power the
    'AI' computing centers require.

    Heh, I freak out about sites I visit that make my computer rev up and
    turn on the cooler: sites polluted with ads, sites that use your CPU
    to mine bitcoin and who knows what else.

    --
    A putt that stops close enough to the cup to inspire such comments as
    "you could blow it in" may be blown in. This rule does not apply if
    the ball is more than three inches from the hole, because no one wants
    to make a travesty of the game.
    -- Donald A. Metz

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From The Natural Philosopher@21:1/5 to Chris Ahlstrom on Fri Oct 18 21:06:40 2024
    On 18/10/2024 20:56, Chris Ahlstrom wrote:
    186282ud0s3 wrote this copyrighted missive and expects royalties:

    On 10/18/24 12:34 PM, Richard Kettlewell wrote:

    <snip>

    Perhaps you could explain why you keep talking about integer arithmetic
    when the paper is about floating point arithmetic, then.

    Umm ... because the idea of swapping FP for ints in
    order to save lots of power was introduced?

    This issue is getting to be *political* now - the
    ultra-greenies freaking about how much power the
    'AI' computing centers require.

    Heh, I freak out about sites I visit that make my computer rev up and
    turn on the cooler: sites polluted with ads, sites that use your CPU
    to mine bitcoin and who knows what else.


    People say this, but I've hardly seen any ads since installing
    uBlock Origin.
    I have a CPU and bandwidth monitor in my task bar and if it starts
    looking odd, I exit the site...

    --
    New Socialism consists essentially in being seen to have your heart in
    the right place whilst your head is in the clouds and your hand is in
    someone else's pocket.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)