• The tables have turned: GigaLIP on the Laptop? (Re: 100% serious Giga Logical Inferences per Second (GLIPS))

    From Mild Shock@janburse@fastmail.fm to sci.physics on Fri Nov 28 15:13:29 2025
    From Newsgroup: sci.physics

    Hi,

    Notably, we are trying to do something with the AI
    laptops that came out in 2025, tapping into their NPUs.
    Back in the late 80's a GigaLIP was rather

    hypothetical; small machines could hardly
    do KLIPS. Today small machines easily do MLIPS.
    But hardly any popular Prolog system taps

    into the AI boom; they are all Sleepy Joes.
    Not to mention populated by the likes of Boris
    the Loris and Julio.

    They simply cannot connect the dots.

    Bye

    P.S.: A nice trip into the past is this paper:

    Is a GigaLIP Fast Enough?
    TOM W. KELLER - December 1988
    DOI: 10.1007/BF00436711

    Mild Shock wrote:

    Hi,

    I am 100% serious about Giga Logical Inferences
    per Second (GLIPS). Leaving behind the sequential
    constraint solving world:

    The Complexity of Constraint Satisfaction Revisited https://www.cs.ubc.ca/~mack/Publications/AIP93.pdf

    Only I missed the deep learning bandwagon and
    never programmed with PyTorch or Keras. So even
    for the banal problem of coding some

    ReLU networks and shipping them to a GPU or NPU,
    or a hybrid, I don't have much experience. So
    I am marveling at papers such as:

    Learning Variable Ordering Heuristics
    for Solving Constraint Satisfaction Problems
    https://arxiv.org/abs/1912.10762
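    Even without PyTorch or Keras, the kernel of such a
    heuristic is small. A minimal sketch in pure Python
    (layer sizes and names are my own illustration, not
    the paper's architecture): a two-layer ReLU network
    that maps a variable's feature vector to a score.

```python
def relu(xs):
    # rectified linear unit, applied elementwise
    return [x if x > 0.0 else 0.0 for x in xs]

def dense(weights, bias, xs):
    # one fully connected layer; weights holds one row per output unit
    return [sum(w * x for w, x in zip(row, xs)) + b
            for row, b in zip(weights, bias)]

def score(features, w1, b1, w2, b2):
    # forward pass: features -> hidden ReLU layer -> scalar score
    hidden = relu(dense(w1, b1, features))
    return dense(w2, b2, hidden)[0]
```

    A variable-ordering heuristic would then pick the
    unassigned variable with the highest (or lowest) score.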

    Given that the AI boom started after 2019,
    the above paper is already old, and it has
    curious antique terminology like "Multilayer

    Perceptron", which is not so common anymore.
    It also does more than what I want to demonstrate:
    it also does policy learning.

    Bye

    Mild Shock wrote:
    Hi,

    I am speculating that an NPU could give 1000x more
    LIPS for certain combinatorial search problems. It
    all boils down to implementing this thingy:

    In June 2020, Stockfish introduced the efficiently
    updatable neural network (NNUE) approach, based
    on earlier work by computer shogi programmers
    https://en.wikipedia.org/wiki/Stockfish_%28chess%29
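    The NNUE trick in a nutshell: the first-layer input
    is a sparse 0/1 feature vector, so when a move flips
    a single feature you patch the accumulator with one
    weight column instead of recomputing the whole matrix
    product. A minimal sketch in pure Python (names are
    illustrative, not Stockfish's actual code):

```python
def full_accumulator(W, x):
    # W[j][i] is the weight from input i to hidden unit j; x is 0/1
    return [sum(row[i] * x[i] for i in range(len(x))) for row in W]

def flip_feature(acc, W, i, turned_on):
    # O(hidden) patch when feature i toggles, instead of
    # O(hidden * inputs) for recomputing the full product
    sign = 1 if turned_on else -1
    return [a + sign * row[i] for a, row in zip(acc, W)]
```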

    There are varying degrees of what gets updated
    in a neural network. But the specs of an NPU tell
    me, very simply, the following:

    - An NPU can make 40 TFLOPS; all my AI laptops
      from 2025 can do that right now. The brands
      are Intel Ultra, AMD Ryzen and Snapdragon X,

      but I guess there might be more brands around
      which can do that with a price tag of less
      than 1000.- USD.

    - SWI-Prolog can make 30 MLIPS; Dogelog Player
      runs similarly, and some Prolog systems are faster.

    Now that is 10^12 versus 10^6. If some of the
    LIPS can be delegated to an NPU, and if we assume,
    for example, less locality or more primitive

    operations that require layering, one could assume
    that from the NPU's 10^12 a factor of 1000 goes
    away. So we might still see 10^9 LIPS emerge.

    Now make the calculation:

    - Without NPU: MLIPS
    - With NPU: GLIPS
    - Ratio: 1000x faster
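    The back-of-envelope behind that ratio, with the
    numbers from above (the factor-1000 layering
    overhead is an assumption, not a measurement):

```python
npu_flops = 40e12         # 40 TFLOPS claimed for the NPU
prolog_lips = 30e6        # ~30 MLIPS for SWI-Prolog
layering_overhead = 1000  # assumed cost of one inference as NPU work

npu_lips = npu_flops / layering_overhead  # on the order of 10^10
speedup = npu_lips / prolog_lips          # roughly 1000x
```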

    Have fun!

    Bye

    Mild Shock wrote:
    Hi,

    So Boris the Loris and Julio are
    not alone. There is now a mobilization of the
    kind of rage against the machine,

    fighting for methods without randomness. It's
    almost like Albert Einstein ascended from his
    grave and is now preaching,

    "God does not play dice"

    So how it started:

    PIVOT was an interactive program verifier designed by
    L. Peter Deutsch for his Ph.D. dissertation.
    Posted here by permission of L. Peter Deutsch.
    https://softwarepreservation.computerhistory.org/pivot/

    How it's going:

    Formal Methods: Whence and Whither?
    The text also highlights the evolving role of formal
    methods amidst technological advancements, such as
    AI, and explores educational and standardization issues
    related to their adoption.
    https://de.slideshare.net/slideshow/formal-methods-whence-and-whither-keynote/273708245


    Can the Don Quijotes win, and fight the AI windmills?

    LoL

    Bye

    Mild Shock wrote:
    Hi,

    Boris the Loris and Julio Di Egidio
    are going for an after-work beer. They are still
    highly confused by Fuzzy Testing:

    Star Trek - The 70's Disco Generation
    https://www.youtube.com/watch?v=505zvAvnreg

    The favorite hangout is Spock's Logic Dancefloor,
    which is known for its sharp unfuzzy wit. They
    have a chat with Data about Disco Math,

    the only Math which has no Fuzzy Logic in it.

    Bye

    Mild Shock wrote:
    Hi,

    Web Neural Network API (WebNN)
    Candidate Recommendation Draft - 30 September 2025
    https://www.w3.org/TR/webnn

    WebNN samples by Ningxin Hu, Intel, Shanghai
    https://github.com/webmachinelearning/webnn-samples

    Bye

    Mild Shock wrote:
    Hi,

    It seems I am having problems keeping pace with
    all the new fancy toys. I wasn't able to really
    benchmark the NPU of a desktop AI machine;

    I picked the wrong driver. Need to try again.
    What worked was benchmarking mobile AI machines.
    I just grabbed Geekbench AI and some devices:

    USA Fab, M4:

                 sANN    hANN    qANN
    iPad CPU     4848    7947    6353
    iPad GPU     9752   11383   10051
    iPad NPU     4873   36544  *51634*

    China Fab, Snapdragon:

                   sANN    hANN    qANN
    Redmi CPU      1044     950    1723
    Redmi GPU       480     905     737
    Redmi NNAPI     205     205     469
    Redmi QNN       226     226  *10221*

    The speed-up via the NPU is a factor of 10x. See
    the column qANN, which means quantized artificial
    neural networks, when NPU or QNN is picked.

    The mobile AI NPUs are optimized to use
    minimal amounts of energy and minimal amounts
    of space, squeezing (distilling) everything

    into INT8 and INT4.
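    What that INT8 squeezing looks like, as a minimal
    sketch of symmetric quantization (the scheme is
    illustrative; real NPU toolchains use per-channel
    scales and calibration):

```python
def quantize_int8(xs):
    # map floats symmetrically into [-127, 127] with one scale factor
    scale = max(abs(x) for x in xs) / 127.0 or 1.0
    q = [max(-127, min(127, round(x / scale))) for x in xs]
    return q, scale

def dequantize_int8(q, scale):
    # recover approximate floats; the error is the quantization noise
    return [v * scale for v in q]
```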

    Bye

    --- Synchronet 3.21a-Linux NewsLink 1.2