NPUs (Neural Processing Units) are the new normal
From Mild Shock@janburse@fastmail.fm to sci.physics on Wed Oct 15 16:15:07 2025
From Newsgroup: sci.physics
Hi,
It seems I am having problems keeping pace with
all the new fancy toys. I wasn't able to really
benchmark the NPU in a Desktop AI machine, since
I picked the wrong driver. Need to try again.
What worked was benchmarking Mobile AI machines.
I just grabbed Geekbench AI and some devices:
USA Fab, M4:
               sANN    hANN     qANN
  iPad CPU     4848    7947     6353
  iPad GPU     9752   11383    10051
  iPad NPU     4873   36544  *51634*
China Fab, Snapdragon:
               sANN    hANN     qANN
  Redmi CPU    1044     950     1723
  Redmi GPU     480     905      737
  Redmi NNAPI   205     205      469
  Redmi QNN     226     226  *10221*
The speed-up via the NPU is roughly a factor of 10.
See the qANN column, which stands for quantized
artificial neural networks, when the NPU or QNN
backend is picked.
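A quick back-of-the-envelope check of those ratios,
computed straight from the table numbers above
(plain Python, just for illustration):

  # Rough ratio check using the Geekbench AI qANN scores quoted above.
  ipad  = {"CPU": 6353, "GPU": 10051, "NPU": 51634}
  redmi = {"CPU": 1723, "GPU": 737, "QNN": 10221}

  print("iPad  NPU vs CPU: %.1fx" % (ipad["NPU"] / ipad["CPU"]))    # ~8.1x
  print("iPad  NPU vs GPU: %.1fx" % (ipad["NPU"] / ipad["GPU"]))    # ~5.1x
  print("Redmi QNN vs CPU: %.1fx" % (redmi["QNN"] / redmi["CPU"]))  # ~5.9x
  print("Redmi QNN vs GPU: %.1fx" % (redmi["QNN"] / redmi["GPU"]))  # ~13.9x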
The mobile AI NPUs are optimized to use
minimal amounts of energy and minimal amounts
of space, squeezing (quantizing) everything
down to INT8 and INT4.
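To make the qANN idea concrete, here is a minimal
numpy sketch of symmetric per-tensor INT8 quantization
(my own toy example, not what Geekbench AI or the NPU
drivers do internally):

  import numpy as np

  def quantize_int8(w):
      # Symmetric per-tensor INT8 quantization: w is approximated by scale * q.
      scale = np.abs(w).max() / 127.0
      q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
      return q, scale

  w = np.random.randn(256, 256).astype(np.float32)   # toy FP32 weight matrix
  q, scale = quantize_int8(w)
  w_hat = q.astype(np.float32) * scale                # dequantized approximation

  print("bytes FP32:", w.nbytes, "bytes INT8:", q.nbytes)  # 4x smaller
  print("max abs error:", np.abs(w - w_hat).max())

The weights shrink 4x (8x for INT4), and the integer
multiply-accumulate units on the NPU are what deliver
the big qANN scores in the tables above.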
Bye
--- Synchronet 3.21a-Linux NewsLink 1.2