• For those arguing over languages...

    From The Natural Philosopher@tnp@invalid.invalid to comp.os.linux.misc on Tue Feb 10 09:09:00 2026
    From Newsgroup: comp.os.linux.misc

    ...more fuel on the fire...

    https://www.theregister.com/2026/02/09/compilers_undermine_encryption/

    GCC erases code whose delays obfuscate encryption timing, because the
    code doesn't do anything...
    --
    It's easier to fool people than to convince them that they have been fooled. (Mark Twain)


    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Richard Kettlewell@invalid@invalid.invalid to comp.os.linux.misc on Tue Feb 10 13:11:41 2026
    From Newsgroup: comp.os.linux.misc

    The Natural Philosopher <tnp@invalid.invalid> writes:
    ...more fuel on the fire...

    https://www.theregister.com/2026/02/09/compilers_undermine_encryption/

    GCC erases code whose delays obfuscates encryption delays because it
    doesn't do anything...

    It's a well-known issue, unique neither to C nor GCC. Some other recent examples:

    https://pqshield.com/pqshield-plugs-timing-leaks-in-kyber-ml-kem-to-improve-pqc-implementation-maturity/

    https://github.com/RustCrypto/signatures/security/advisories/GHSA-hcp2-x6j4-29j7

    Sometimes the compiler helps instead of hindering: for example, in the
    ML-DSA Decompose() case, GCC can eliminate the variable-latency division
    and replace it with a multiplication and a couple of shifts (see
    https://godbolt.org/z/zaY5WEEdx), but it does that because it's faster,
    not because it's trying to generate constant-time code.
    --
    https://www.greenend.org.uk/rjk/
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Farley Flud@ff@linux.rocks to comp.os.linux.misc on Tue Feb 10 14:08:00 2026
    From Newsgroup: comp.os.linux.misc

    On Tue, 10 Feb 2026 09:09:00 +0000, The Natural Philosopher wrote:

    ...more fuel on the fire...

    https://www.theregister.com/2026/02/09/compilers_undermine_encryption/

    GCC erases code whose delays obfuscates encryption delays because it
    doesn't do anything...

    A complete non-issue. Click bait.

    Just shut off optimization, either en masse or selectively via the dozens
    of switches.

    Knowing how to compile is just as important as knowing how to program.
    --
    Gentoo/LFS: Is there any-fucking-thing else?
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Marc Haber@mh+usenetspam1118@zugschl.us to comp.os.linux.misc on Tue Feb 10 15:16:50 2026
    From Newsgroup: comp.os.linux.misc

    Farley Flud <ff@linux.rocks> wrote:
    On Tue, 10 Feb 2026 09:09:00 +0000, The Natural Philosopher wrote:
    ...more fuel on the fire...

    https://www.theregister.com/2026/02/09/compilers_undermine_encryption/

    GCC erases code whose delays obfuscates encryption delays because it
    doesn't do anything...

    A complete non issue. Click bait.

    Just shut off optimization, either en masse or selectively via the dozens
    of switches.

    Knowing how to compile is just as important as knowing how to program.

    Does GCC have pragmas so that the programmer can turn off optimization
    for only those code parts? That would probably be the wise thing to do
    (at least that's what occurs to me, who happens to have zero
    experience with serious C programming of time- or security-critical
    code).

    Turning off optimization completely doesn't sound like the right
    thing. We learned in the 1990s to code for readers, and rely on the
    compiler to optimize away the inefficiencies of the readable code. I
    guess that things are still taught this way. Are they?

    Greetings
    Marc
    --
    ----------------------------------------------------------------------------
    Marc Haber         |   " Questions are the        | Mailadresse im Header
    Rhein-Neckar, DE   |     Beginning of Wisdom "    |
    Nordisch by Nature | Lt. Worf, TNG "Rightful Heir" | Fon: +49 6224 1600402
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Richard Kettlewell@invalid@invalid.invalid to comp.os.linux.misc on Tue Feb 10 16:19:01 2026
    From Newsgroup: comp.os.linux.misc

    Marc Haber <mh+usenetspam1118@zugschl.us> writes:
    Does GCC have pragmas so that the programmer can turn off optimization
    for only those code parts? That would probably be the wise thing to do
    (at least that's what occurs to me, who happens to have zero
    experience with serious C programming of time- or security-critical
    code).

    No.

    https://gcc.gnu.org/onlinedocs/cpp/Pragmas.html

    Turning off optimization completely doesn't sound like the right
    thing.

    It's not the right thing for the simple and obvious reason that it
    doesn't solve the problem.

    https://godbolt.org/z/ePPbca91P is the example from Meusel's slides,
    translated to C[1] and compiled with -O2. The early exit can be seen at
    L13 in the asm.

    https://godbolt.org/z/YvYn34dn7 is the same source compiled at -O0. It
    no longer has an early exit, but instead, once match=false, every
    iteration of the loop branches past the update to match (L17). The side
    channel remains, just in a slightly different form. As a free gift to
    the attacker the side channel is also amplified by the poor performance,
    i.e. they don't need such an accurate clock to extract any signal from
    it.


    [1] I translated to C because the unoptimized object code from the C++
    example in the slides has a huge amount of irrelevant noise in it
    due to all the extra abstraction used in the C++ version.
    --
    https://www.greenend.org.uk/rjk/
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Chris Ahlstrom@OFeem1987@teleworm.us to comp.os.linux.misc on Tue Feb 10 11:22:56 2026
    From Newsgroup: comp.os.linux.misc

    Marc Haber wrote this post by blinking in Morse code:

    Farley Flud <ff@linux.rocks> wrote:
    On Tue, 10 Feb 2026 09:09:00 +0000, The Natural Philosopher wrote:
    ...more fuel on the fire...

    https://www.theregister.com/2026/02/09/compilers_undermine_encryption/

    GCC erases code whose delays obfuscates encryption delays because it
    doesn't do anything...

    A complete non issue. Click bait.

    Just shut off optimization, either en masse or selectively via the dozens
    of switches.

    Knowing how to compile is just as important as knowing how to program.

    Does GCC have pragmas so that the programmer can turn off optimization
    for only those code parts? That would probably be the wise thing to do
    (at least that's what occurs to me, who happens to have zero
    experience with serious C programming of time- or security-critical
    code).

    <https://gcc.gnu.org/onlinedocs/gcc/Common-Function-Attributes.html>

    It has an assload of __attribute__((xxx, ...)) specifiers for
    functions. Maybe this one would do the trick:

    __attribute__((optimize(0))) void test(int n)

    No promises.

    Also there are CPU attributes:

    <https://gcc.gnu.org/onlinedocs/gcc/Function-Attributes.html>

    But man!

    Turning off optimization completely doesn't sound like the right
    thing. We learned in the 1990s to code for readers, and rely on the
    compiler to optimize away the inefficiencies of the readable code. I
    guess that things are still taught this way. Are they?
    --
    Boys are beyond the range of anybody's sure understanding, at least
    when they are between the ages of 18 months and 90 years.
    -- James Thurber
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From John Ames@commodorejohn@gmail.com to comp.os.linux.misc on Tue Feb 10 08:24:38 2026
    From Newsgroup: comp.os.linux.misc

    On Tue, 10 Feb 2026 14:08:00 +0000
    Farley Flud <ff@linux.rocks> wrote:

    ...more fuel on the fire...

    https://www.theregister.com/2026/02/09/compilers_undermine_encryption/

    GCC erases code whose delays obfuscates encryption delays because it doesn't do anything...

    A complete non issue. Click bait.

    Just shut off optimization, either en masse or selectively via the
    dozens of switches.

    Seriously. It's funny in an "oh *right,* hadn't thought of that" kinda
    way, but it should *maybe* be common knowledge that if your techniques
    require perfect literality from the compiler, you disable optimization?

    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Richard Kettlewell@invalid@invalid.invalid to comp.os.linux.misc on Tue Feb 10 18:20:15 2026
    From Newsgroup: comp.os.linux.misc

    Richard Kettlewell <invalid@invalid.invalid> writes:
    Marc Haber <mh+usenetspam1118@zugschl.us> writes:
    Does GCC have pragmas so that the programmer can turn off optimization
    for only those code parts? That would probably be the wise thing to do
    (at least that's what occurs to me, who happens to have zero
    experience with serious C programming of time- or security-critical
    code).

    No.

    https://gcc.gnu.org/onlinedocs/cpp/Pragmas.html

    Looks like I was wrong about this detail - but the point remains that
    disabling optimization has nothing to do with this problem.
    --
    https://www.greenend.org.uk/rjk/
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Richard Kettlewell@invalid@invalid.invalid to comp.os.linux.misc on Tue Feb 10 18:16:12 2026
    From Newsgroup: comp.os.linux.misc

    John Ames <commodorejohn@gmail.com> writes:
    Farley Flud <ff@linux.rocks> wrote:
    ...more fuel on the fire...

    https://www.theregister.com/2026/02/09/compilers_undermine_encryption/

    GCC erases code whose delays obfuscates encryption delays because it
    doesn't do anything...

    A complete non issue. Click bait.

    Just shut off optimization, either en masse or selectively via the
    dozens of switches.

    Seriously. It's funny in an "oh *right,* hadn't thought of that" kinda
    way, but it should *maybe* be common knowledge that if your techniques
    require perfect literality from the compiler, you disable optimization?

    But the requirement isn't "perfect literality", whatever that means for
    C. The requirement is that the code be constant-time (or more
    accurately, have running time independent of the value of any secret
    parameters). Disabling optimization doesn't give you that (and you
    shouldn't expect it to). See my other post for a concrete example.
    --
    https://www.greenend.org.uk/rjk/
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From c186282@c186282@nnada.net to comp.os.linux.misc on Tue Feb 10 22:34:57 2026
    From Newsgroup: comp.os.linux.misc

    On 2/10/26 04:09, The Natural Philosopher wrote:
    ...more fuel on the fire...

    https://www.theregister.com/2026/02/09/compilers_undermine_encryption/

    GCC erases code whose delays obfuscates encryption delays because it
    doesn't do anything...


    Very interesting ! How 'optimization' sometimes ISN'T.

    And encryption IS hyper-important these days ...

    "The user types in a password, which gets checked against
    a database, character by character. Once the first character
    doesn't match, an error message is returned.

    For a close observer trying to break in, the time it takes
    the system to return that error indicates how many letters
    of the guessed password the user has already entered correctly.
    A longer response time indicates more of the password has
    been guessed.

    This side-channel leak has been used in the past to facilitate
    brute-force break-ins."

    Brute-force is STILL a thing ... although "insiders",
    stupid humans or spybots, seem to be very prevalent
    these days.

    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Rich@rich@example.invalid to comp.os.linux.misc on Wed Feb 11 18:50:06 2026
    From Newsgroup: comp.os.linux.misc

    c186282 <c186282@nnada.net> wrote:
    On 2/10/26 04:09, The Natural Philosopher wrote:
    ...more fuel on the fire...

    https://www.theregister.com/2026/02/09/compilers_undermine_encryption/

    GCC erases code whose delays obfuscates encryption delays because it
    doesn't do anything...


    Very interesting ! How 'optimization' sometimes ISN'T.

    Nope. As Richard Kettlewell has pointed out, what the encryption code
    writers want is "constant time execution, regardless of inputs" which
    is not a promised output from gcc, no matter the optimization level
    chosen.

    The compiler is "properly optimizing" given the meaning of
    "optimization" it uses ("make code run as fast as possible" or "make
    code as small as possible" -- with -Os). But the compiler was not
    designed to create "constant time execution" code. The writers were
    expecting a promise the compiler never promised.
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From The Natural Philosopher@tnp@invalid.invalid to comp.os.linux.misc on Wed Feb 11 19:28:54 2026
    From Newsgroup: comp.os.linux.misc

    On 11/02/2026 18:50, Rich wrote:
    c186282 <c186282@nnada.net> wrote:
    On 2/10/26 04:09, The Natural Philosopher wrote:
    ...more fuel on the fire...

    https://www.theregister.com/2026/02/09/compilers_undermine_encryption/

    GCC erases code whose delays obfuscates encryption delays because it
    doesn't do anything...


    Very interesting ! How 'optimization' sometimes ISN'T.

    Nope. As Richard Kettlewell has pointed out, what the encryption code writers want is "constant time execution, regardless of inputs" which
    is not a promised output from gcc, no matter the optimization level
    chosen.

    +1.

    The compiler is "properly optimizing" given the meaning of
    "optimization" it uses ("make code run as fast as possible" or "make
    code as small as possible" -- with -Os). But the compiler was not
    designed to create "constant time execution" code. The writers were expecting a promise the compiler never promised.

    Sounds deeply political :-)

    Perhaps a C construct ...

    void randMicrodelay()

    could be constructed in Assembler for every platform...
    --
    I would rather have questions that cannot be answered...
    ...than to have answers that cannot be questioned

    Richard Feynman



    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Chris Ahlstrom@OFeem1987@teleworm.us to comp.os.linux.misc on Wed Feb 11 16:24:43 2026
    From Newsgroup: comp.os.linux.misc

    Rich wrote this post by blinking in Morse code:

    c186282 <c186282@nnada.net> wrote:
    On 2/10/26 04:09, The Natural Philosopher wrote:
    ...more fuel on the fire...

    https://www.theregister.com/2026/02/09/compilers_undermine_encryption/

    GCC erases code whose delays obfuscates encryption delays because it
    doesn't do anything...


    Very interesting ! How 'optimization' sometimes ISN'T.

    Nope. As Richard Kettlewell has pointed out, what the encryption code writers want is "constant time execution, regardless of inputs" which
    is not a promised output from gcc, no matter the optimization level
    chosen.

    Isn't that something programmer needs to code?

    <https://www.chosenplaintext.ca/articles/beginners-guide-constant-time-cryptography.html>

    ... in assembler.

    The compiler is "properly optimizing" given the meaning of
    "optimization" it uses ("make code run as fast as possible" or "make
    code as small as possible" -- with -Os). But the compiler was not
    designed to create "constant time execution" code. The writers were expecting a promise the compiler never promised.

    I'm happy I don't need to worry about this... I think.
    --
    May I ask a question?
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Rich@rich@example.invalid to comp.os.linux.misc on Wed Feb 11 21:27:51 2026
    From Newsgroup: comp.os.linux.misc

    The Natural Philosopher <tnp@invalid.invalid> wrote:
    On 11/02/2026 18:50, Rich wrote:
    c186282 <c186282@nnada.net> wrote:
    On 2/10/26 04:09, The Natural Philosopher wrote:
    ...more fuel on the fire...

    https://www.theregister.com/2026/02/09/compilers_undermine_encryption/
    GCC erases code whose delays obfuscates encryption delays because it
    doesn't do anything...


    Very interesting ! How 'optimization' sometimes ISN'T.

    Nope. As Richard Kettlewell has pointed out, what the encryption
    code writers want is "constant time execution, regardless of inputs"
    which is not a promised output from gcc, no matter the optimization
    level chosen.

    +1.

    The compiler is "properly optimizing" given the meaning of
    "optimization" it uses ("make code run as fast as possible" or "make
    code as small as possible" -- with -Os). But the compiler was not
    designed to create "constant time execution" code. The writers were
    expecting a promise the compiler never promised.

    Sounds deeply political :-)

    Perhaps a C construct ...

    void randMicrodelay()

    could be constructed in Assembler for every platform...

    For crypto work that likely would not be considered sufficient. Unless
    the randomness for the "delay" came from a true random source it would
    likely still leak side-channel data. It would make the attacker's job
    harder, but not fully close the leak.
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Carlos E. R.@robin_listas@es.invalid to comp.os.linux.misc on Wed Feb 11 23:24:59 2026
    From Newsgroup: comp.os.linux.misc

    On 2026-02-11 19:50, Rich wrote:
    c186282 <c186282@nnada.net> wrote:
    On 2/10/26 04:09, The Natural Philosopher wrote:
    ...more fuel on the fire...

    https://www.theregister.com/2026/02/09/compilers_undermine_encryption/

    GCC erases code whose delays obfuscates encryption delays because it
    doesn't do anything...


    Very interesting ! How 'optimization' sometimes ISN'T.

    Nope. As Richard Kettlewell has pointed out, what the encryption code writers want is "constant time execution, regardless of inputs" which
    is not a promised output from gcc, no matter the optimization level
    chosen.

    The compiler is "properly optimizing" given the meaning of
    "optimization" it uses ("make code run as fast as possible" or "make
    code as small as possible" -- with -Os). But the compiler was not
    designed to create "constant time execution" code. The writers were expecting a promise the compiler never promised.

    In the example posted:

    The user types in a password, which gets checked against
    a database, character by character. Once the first character
    doesn't match, an error message is returned.

    ...the fault is not the compiler's, but the programmer's. He has to
    examine all characters even if he knows there is no point.
    --
    Cheers,
    Carlos E.R.
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Rich@rich@example.invalid to comp.os.linux.misc on Wed Feb 11 22:45:21 2026
    From Newsgroup: comp.os.linux.misc

    Chris Ahlstrom <OFeem1987@teleworm.us> wrote:
    Rich wrote this post by blinking in Morse code:

    c186282 <c186282@nnada.net> wrote:
    On 2/10/26 04:09, The Natural Philosopher wrote:
    ...more fuel on the fire...

    https://www.theregister.com/2026/02/09/compilers_undermine_encryption/
    GCC erases code whose delays obfuscates encryption delays because it
    doesn't do anything...


    Very interesting ! How 'optimization' sometimes ISN'T.

    Nope. As Richard Kettlewell has pointed out, what the encryption code
    writers want is "constant time execution, regardless of inputs" which
    is not a promised output from gcc, no matter the optimization level
    chosen.

    Isn't that something programmer needs to code?

    <https://www.chosenplaintext.ca/articles/beginners-guide-constant-time-cryptography.html>

    ... in assembler.

    If they wanted to be assured the compiler did not change the intent for
    them, yes.
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Lawrence D'Oliveiro@ldo@nz.invalid to comp.os.linux.misc on Wed Feb 11 22:48:56 2026
    From Newsgroup: comp.os.linux.misc

    On Wed, 11 Feb 2026 23:24:59 +0100, Carlos E. R. wrote:

    In the example posted:

    The user types in a password, which gets checked against a
    database, character by character. Once the first character
    doesn't match, an error message is returned.

    ...the fault is not of the compiler, but of the programmer. He has
    to examine all characters even if he knows there is no point.

    Security is a tricky thing. The term for information leaking in
    unexpected directions is "side-channel attack".
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Rich@rich@example.invalid to comp.os.linux.misc on Wed Feb 11 22:49:56 2026
    From Newsgroup: comp.os.linux.misc

    Carlos E. R. <robin_listas@es.invalid> wrote:
    On 2026-02-11 19:50, Rich wrote:
    c186282 <c186282@nnada.net> wrote:
    On 2/10/26 04:09, The Natural Philosopher wrote:
    ...more fuel on the fire...

    https://www.theregister.com/2026/02/09/compilers_undermine_encryption/
    GCC erases code whose delays obfuscates encryption delays because it
    doesn't do anything...


    Very interesting ! How 'optimization' sometimes ISN'T.

    Nope. As Richard Kettlewell has pointed out, what the encryption
    code writers want is "constant time execution, regardless of inputs"
    which is not a promised output from gcc, no matter the optimization
    level chosen.

    The compiler is "properly optimizing" given the meaning of
    "optimization" it uses ("make code run as fast as possible" or "make
    code as small as possible" -- with -Os). But the compiler was not
    designed to create "constant time execution" code. The writers were
    expecting a promise the compiler never promised.

    In the example posted:

    The user types in a password, which gets checked against a
    database, character by character. Once the first character doesn't
    match, an error message is returned.

    ...the fault is not of the compiler, but of the programmer. He has to examine all characters even if he knows there is no point.

    Richard Kettlewell's example, which was from the talk slides, showed
    that even at -O0 (no optimization) the compiler still short-circuited
    (skipped over) the "subsequent checks that have no point" once the
    first one of them failed.

    So the C code, read literally, is actually "testing every character,
    even after a failure is noted". But the compiled code is skipping over
    most of the output CPU instructions once the first "non-equal"
    character is found, creating a timing side channel.

    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Richard Kettlewell@invalid@invalid.invalid to comp.os.linux.misc on Thu Feb 12 09:48:40 2026
    From Newsgroup: comp.os.linux.misc

    Rich <rich@example.invalid> writes:
    The Natural Philosopher <tnp@invalid.invalid> wrote:
    On 11/02/2026 18:50, Rich wrote:
    c186282 <c186282@nnada.net> wrote:
    On 2/10/26 04:09, The Natural Philosopher wrote:
    ...more fuel on the fire...

    https://www.theregister.com/2026/02/09/compilers_undermine_encryption/
    GCC erases code whose delays obfuscates encryption delays because it
    doesn't do anything...


    Very interesting ! How 'optimization' sometimes ISN'T.

    Nope. As Richard Kettlewell has pointed out, what the encryption
    code writers want is "constant time execution, regardless of inputs"
    which is not a promised output from gcc, no matter the optimization
    level chosen.

    +1.

    The compiler is "properly optimizing" given the meaning of
    "optimization" it uses ("make code run as fast as possible" or "make
    code as small as possible" -- with -Os). But the compiler was not
    designed to create "constant time execution" code. The writers were
    expecting a promise the compiler never promised.

    Sounds deeply political :-)

    Perhaps a C construct ...

    void randMicrodelay()

    could be constructed in Asssember for every platform...

    For crypto work that likely would not be considered sufficient. Unless
    the randomness for the "delay" came from a true random source it would
    likely still leak side-channel data. It would make the attacker's job
    harder, but not fully close the leak.

    Cryptographic code often needs a good random source anyway, so that's
    not necessarily an obstacle to TNP's proposal of inserting random
    delays. But there are exceptions, and in practice it's not a common
    strategy. More common is to avoid constructions that the compiler emits
    branches for (which is a bit fragile) and, more effectively, to use
    constructions that the compiler contractually cannot optimize through.
    This can be found in Meusel's slides, but people might have to actually
    read links rather than just comment on them to notice that.

    Some cryptographic implementations do use pure assembler for this and
    other reasons, but it's not a very practical strategy. If you look at
    OpenSSL you'll find the most popular algorithms and the most popular
    CPU architectures are well covered by assembler implementations, but
    for anything else it falls back to C.
    --
    https://www.greenend.org.uk/rjk/
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Richard Kettlewell@invalid@invalid.invalid to comp.os.linux.misc on Thu Feb 12 09:55:36 2026
    From Newsgroup: comp.os.linux.misc

    "Carlos E. R." <robin_listas@es.invalid> writes:
    In the example posted:

    The user types in a password, which gets checked against
    a database, character by character. Once the first character
    doesn't match, an error message is returned.

    ...the fault is not of the compiler, but of the programmer. He has to
    examine all characters even if he knows there is no point.

    Obviously you didn't read the whole article...
    --
    https://www.greenend.org.uk/rjk/
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Carlos E. R.@robin_listas@es.invalid to comp.os.linux.misc on Thu Feb 12 11:49:21 2026
    From Newsgroup: comp.os.linux.misc

    On 2026-02-12 10:55, Richard Kettlewell wrote:
    "Carlos E. R." <robin_listas@es.invalid> writes:
    In the example posted:

    The user types in a password, which gets checked against
    a database, character by character. Once the first character
    doesn't match, an error message is returned.

    ...the fault is not of the compiler, but of the programmer. He has to
    examine all characters even if he knows there is no point.

    Obviously you didn't read the whole article...

    I didn't :-)
    --
    Cheers,
    Carlos E.R.
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From The Natural Philosopher@tnp@invalid.invalid to comp.os.linux.misc on Thu Feb 12 12:38:56 2026
    From Newsgroup: comp.os.linux.misc

    On 11/02/2026 22:24, Carlos E. R. wrote:
    On 2026-02-11 19:50, Rich wrote:
    c186282 <c186282@nnada.net> wrote:
    On 2/10/26 04:09, The Natural Philosopher wrote:
    ...more fuel on the fire...

    https://www.theregister.com/2026/02/09/compilers_undermine_encryption/
    GCC erases code whose delays obfuscates encryption delays because it
    doesn't do anything...


    Very interesting ! How 'optimization' sometimes ISN'T.

    Nope. As Richard Kettlewell has pointed out, what the encryption code
    writers want is "constant time execution, regardless of inputs" which
    is not a promised output from gcc, no matter the optimization level
    chosen.

    The compiler is "properly optimizing" given the meaning of
    "optimization" it uses ("make code run as fast as possible" or "make
    code as small as possible" -- with -Os). But the compiler was not
    designed to create "constant time execution" code. The writers were
    expecting a promise the compiler never promised.

    In the example posted:

      The user types in a password, which gets checked against
      a database, character by character. Once the first character
      doesn't match, an error message is returned.

    ...the fault is not of the compiler, but of the programmer. He has to examine all characters even if he knows there is no point.


    I think the point is that the compiler knows that isn't necessary, and
    doesn't bother.
    --
    "It is not the truth of Marxism that explains the willingness of
    intellectuals to believe it, but the power that it confers on
    intellectuals, in their attempts to control the world. And since...it is
    futile to reason someone out of a thing that he was not reasoned into,
    we can conclude that Marxism owes its remarkable power to survive every
    criticism to the fact that it is not a truth-directed but a
    power-directed system of thought."
    Sir Roger Scruton

    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From The Natural Philosopher@tnp@invalid.invalid to comp.os.linux.misc on Thu Feb 12 12:42:28 2026
    From Newsgroup: comp.os.linux.misc

    On 11/02/2026 22:45, Rich wrote:
    Chris Ahlstrom <OFeem1987@teleworm.us> wrote:
    Rich wrote this post by blinking in Morse code:

    c186282 <c186282@nnada.net> wrote:
    On 2/10/26 04:09, The Natural Philosopher wrote:
    ...more fuel on the fire...

    https://www.theregister.com/2026/02/09/compilers_undermine_encryption/
    GCC erases code whose delays obfuscates encryption delays because it
    doesn't do anything...


    Very interesting ! How 'optimization' sometimes ISN'T.

    Nope. As Richard Kettlewell has pointed out, what the encryption code
    writers want is "constant time execution, regardless of inputs" which
    is not a promised output from gcc, no matter the optimization level
    chosen.

    Isn't that something programmer needs to code?

    <https://www.chosenplaintext.ca/articles/beginners-guide-constant-time-cryptography.html>

    ... in asssembler.

    If they wanted to be assured the compiler did not change the intent for
    them, yes.

    And given the microcode and caching in the hardware, even that might not
    be enough.

    Some kind of device featuring embedded atomic decay might be an
    option. Quantum-level effects are to all intents and purposes random
    at the pico level.
    --
    "It is not the truth of Marxism that explains the willingness of
    intellectuals to believe it, but the power that it confers on
    intellectuals, in their attempts to control the world. And since...it is
    futile to reason someone out of a thing that he was not reasoned into,
    we can conclude that Marxism owes its remarkable power to survive every
    criticism to the fact that it is not a truth-directed but a
    power-directed system of thought."
    Sir Roger Scruton

    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Carlos E. R.@robin_listas@es.invalid to comp.os.linux.misc on Thu Feb 12 15:14:57 2026
    From Newsgroup: comp.os.linux.misc

    On 2026-02-12 13:38, The Natural Philosopher wrote:
    On 11/02/2026 22:24, Carlos E. R. wrote:
    On 2026-02-11 19:50, Rich wrote:
    c186282 <c186282@nnada.net> wrote:
    On 2/10/26 04:09, The Natural Philosopher wrote:
    ...more fuel on the fire...

    https://www.theregister.com/2026/02/09/compilers_undermine_encryption/
    GCC erases code whose delays obfuscates encryption delays because it
    doesn't do anything...


    Very interesting ! How 'optimization' sometimes ISN'T.

    Nope. As Richard Kettlewell has pointed out, what the encryption code
    writers want is "constant time execution, regardless of inputs" which
    is not a promised output from gcc, no matter the optimization level
    chosen.

    The compiler is "properly optimizing" given the meaning of
    "optimization" it uses ("make code run as fast as possible" or "make
    code as small as possible" -- with -Os). But the compiler was not
    designed to create "constant time execution" code. The writers were
    expecting a promise the compiler never promised.

    In the example posted:

      The user types in a password, which gets checked against
      a database, character by character. Once the first character
      doesn't match, an error message is returned.

    ...the fault is not of the compiler, but of the programmer. He has to
    examine all characters even if he knows there is no point.


    I think the point is that the compiler knows that isn't necessary, and doesn't bother.


    Then don't optimize. Optimization has always been somewhat problematic. Sometimes it introduced bugs that could not be debugged, because
    debugging altered the code, possibly removing the optimizations.
    --
    Cheers,
    Carlos E.R.
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Rich@rich@example.invalid to comp.os.linux.misc on Thu Feb 12 17:23:04 2026
    From Newsgroup: comp.os.linux.misc

    Carlos E. R. <robin_listas@es.invalid> wrote:
    On 2026-02-12 13:38, The Natural Philosopher wrote:
    On 11/02/2026 22:24, Carlos E. R. wrote:
    On 2026-02-11 19:50, Rich wrote:
    c186282 <c186282@nnada.net> wrote:
    On 2/10/26 04:09, The Natural Philosopher wrote:
    ...more fuel on the fire...

    https://www.theregister.com/2026/02/09/compilers_undermine_encryption/
    GCC erases code whose delays obfuscates encryption delays because it
    doesn't do anything...


      Very interesting ! How 'optimization' sometimes ISN'T.

    Nope. As Richard Kettlewell has pointed out, what the encryption code
    writers want is "constant time execution, regardless of inputs" which
    is not a promised output from gcc, no matter the optimization level
    chosen.

    The compiler is "properly optimizing" given the meaning of
    "optimization" it uses ("make code run as fast as possible" or "make
    code as small as possible" -- with -Os). But the compiler was not
    designed to create "constant time execution" code. The writers were
    expecting a promise the compiler never promised.

    In the example posted:

      The user types in a password, which gets checked against
      a database, character by character. Once the first character
      doesn't match, an error message is returned.

    ...the fault is not of the compiler, but of the programmer. He has to
    examine all characters even if he knows there is no point.


    I think the point is that the compiler knows that isn't necessary, and
    doesn't bother.


    Then don't optimize. Optimization has always been somewhat problematic. Sometimes it introduced bugs that could not be debugged, because
    debugging altered the code, possibly removing the optimizations.

    It wasn't the optimizer causing the "skipping" of the rest of the
    checks. It was a byproduct of boolean short-circuiting of boolean expressions. Most languages only evaluate just enough of a complex
    boolean expression to reach a true or false indication, then skip the
    rest of the expression (yes, this is an 'optimization', but not by the
    code optimizer but the language specification itself).

    The skipping of the remaining character checks in the example posted
    here was due to this boolean short-circuit behavior. Once the first
    'false' arrived for the first incorrect character, the compiled code
    skipped over evaluating the boolean expression for subsequent
    characters. So -O0 (no optimizations) or -O3 (full optimizations) made
    no difference, portions of the 'constant time execution' were skipped,
    opening a timing side channel attack.
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Carlos E. R.@robin_listas@es.invalid to comp.os.linux.misc on Thu Feb 12 19:52:41 2026
    From Newsgroup: comp.os.linux.misc

    On 2026-02-12 18:23, Rich wrote:
    Carlos E. R. <robin_listas@es.invalid> wrote:
    On 2026-02-12 13:38, The Natural Philosopher wrote:
    On 11/02/2026 22:24, Carlos E. R. wrote:
    On 2026-02-11 19:50, Rich wrote:
    c186282 <c186282@nnada.net> wrote:
    On 2/10/26 04:09, The Natural Philosopher wrote:
    ...more fuel on the fire...




    I think the point is that the compiler knows that isn't necessary, and
    doesn't bother.


    Then don't optimize. Optimization has always been somewhat problematic.
    Sometimes it introduced bugs that could not be debugged, because
    debugging altered the code, possibly removing the optimizations.

    It wasn't the optimizer causing the "skipping" of the rest of the
    checks. It was a byproduct of boolean short-circuiting of boolean expressions. Most languages only evaluate just enough of a complex
    boolean expression to reach a true or false indication, then skip the
    rest of the expression (yes, this is an 'optimization', but not by the
    code optimizer but the language specification itself).

    The skipping of the remaining character checks in the example posted
    here was due to this boolean short-circuit behavior. Once the first
    'false' arrived for the first incorrect character, the compiled code
    skipped over evaluating the boolean expression for subsequent
    characters. So -O0 (no optimizations) or -O3 (full optimizations) made
    no difference, portions of the 'constant time execution' were skipped, opening a timing side channel attack.

    Ah, yes, I remember that now. Can play havoc when one of the expressions
    is actually a function and the later code relies on the prior execution
    of that code.
    --
    Cheers,
    Carlos E.R.
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From c186282@c186282@nnada.net to comp.os.linux.misc on Thu Feb 12 20:48:11 2026
    From Newsgroup: comp.os.linux.misc

    On 2/12/26 04:48, Richard Kettlewell wrote:
    Rich <rich@example.invalid> writes:
    The Natural Philosopher <tnp@invalid.invalid> wrote:
    On 11/02/2026 18:50, Rich wrote:
    c186282 <c186282@nnada.net> wrote:
    On 2/10/26 04:09, The Natural Philosopher wrote:
    ...more fuel on the fire...

    https://www.theregister.com/2026/02/09/compilers_undermine_encryption/
    GCC erases code whose delays obfuscates encryption delays because it
    doesn't do anything...


    Very interesting ! How 'optimization' sometimes ISN'T.

    Nope. As Richard Kettlewell has pointed out, what the encryption
    code writers want is "constant time execution, regardless of inputs"
    which is not a promised output from gcc, no matter the optimization
    level chosen.

    +1.

    The compiler is "properly optimizing" given the meaning of
    "optimization" it uses ("make code run as fast as possible" or "make
    code as small as possible" -- with -Os). But the compiler was not
    designed to create "constant time execution" code. The writers were
    expecting a promise the compiler never promised.

    Sounds deeply political :-)

    Perhaps a C construct ...

    void randMicrodelay()

    could be constructed in Assembler for every platform...

    For crypto work that likely would not be considered sufficient. Unless
    the randomness for the "delay" came from a true random source it would
    likely still leak side-channel data. It would make the attacker's job
    harder, but not fully close the leak.

    Cryptographic code often needs a good random source anyway, so that's
    not necessarily an obstacle to TNP's proposal of inserting random
    delays. But there are exceptions, and in practice it's not a common
    strategy. More common is to avoid constructions that the compiler emits
    branches for (which is a bit fragile) and, more effectively, to use
    constructions that the compiler contractually cannot optimize through.
    This can be found in Meusel's slides, but people might have to actually
    read links rather than just comment on them to notice that.

    Some cryptographic implementations do use pure assembler for this and
    other reasons, but it's not a very practical strategy. If you look at
    OpenSSL you'll find the most popular algorithms and the most popular
    CPU architectures are well-covered by assembler implementations, but
    for anything else it falls back to C.

    Really damned good randomness can be had by mushing
    together things like CPU temperature, process numbers,
    they used to use video-refresh timing and mouse coords.
    Put together they'll yield a number pretty impossible
    to guess/duplicate. It's not "perfect" randomness, there
    may be no such thing, but it's close.

    ASM, obviously, can be used to write anything. The
    trade off is "how easily ?". Often not worth it in
    today's environment. I used to do a fair amount of
    ASM, for microcontrollers and even the early IBM-PCs
    when they didn't have much clever software. I don't
    use ASM any more though - 'C' is more than good
    enough for almost anything (REALLY tiny uC's
    may still warrant ASM work - still think the PIC
    12xxx 8-pin jobbies are cool).

    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From c186282@c186282@nnada.net to comp.os.linux.misc on Thu Feb 12 20:54:49 2026
    From Newsgroup: comp.os.linux.misc

    On 2/12/26 04:55, Richard Kettlewell wrote:
    "Carlos E. R." <robin_listas@es.invalid> writes:
    In the example posted:

    The user types in a password, which gets checked against
    a database, character by character. Once the first character
    doesn't match, an error message is returned.

    ...the fault is not of the compiler, but of the programmer. He has to
    examine all characters even if he knows there is no point.

    Obviously you didn't read the whole article...

    It's "examining" behavior that's the fault :-)

    If you ALWAYS process ALL the characters, and/or try
    to make fake timing so success/fail will use up the
    same amount of CPU time, THEN you're ahead of the game.

    Until they figure out how to detect the fake timing ...

    Olde tyme cracking - brute force or 'clever' - does
    not seem as prevalent as before. There are SO many
    computers, you'd waste far too much time. 'Insider'
    info - either from planted humans or spyware or
    just idiot humans that leave unencrypted databases
    open to the world - seems more 'economical' in most
    cases.

    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From c186282@c186282@nnada.net to comp.os.linux.misc on Thu Feb 12 20:56:57 2026
    From Newsgroup: comp.os.linux.misc

    On 2/12/26 07:38, The Natural Philosopher wrote:
    On 11/02/2026 22:24, Carlos E. R. wrote:
    On 2026-02-11 19:50, Rich wrote:
    c186282 <c186282@nnada.net> wrote:
    On 2/10/26 04:09, The Natural Philosopher wrote:
    ...more fuel on the fire...

    https://www.theregister.com/2026/02/09/compilers_undermine_encryption/
    GCC erases code whose delays obfuscates encryption delays because it
    doesn't do anything...


      Very interesting ! How 'optimization' sometimes ISN'T.

    Nope. As Richard Kettlewell has pointed out, what the encryption code
    writers want is "constant time execution, regardless of inputs" which
    is not a promised output from gcc, no matter the optimization level
    chosen.

    The compiler is "properly optimizing" given the meaning of
    "optimization" it uses ("make code run as fast as possible" or "make
    code as small as possible" -- with -Os). But the compiler was not
    designed to create "constant time execution" code. The writers were
    expecting a promise the compiler never promised.

    In the example posted:

      The user types in a password, which gets checked against
      a database, character by character. Once the first character
      doesn't match, an error message is returned.

    ...the fault is not of the compiler, but of the programmer. He has to
    examine all characters even if he knows there is no point.


    I think the point is that the compiler knows that isn't necessary, and doesn't bother.


    Yep, too smart for its own good :-)

    This behavior will have to be sabotaged - either
    by messing with the compiler or messing with the
    algo used.

    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Rich@rich@example.invalid to comp.os.linux.misc on Fri Feb 13 03:20:44 2026
    From Newsgroup: comp.os.linux.misc

    c186282 <c186282@nnada.net> wrote:
    On 2/12/26 04:55, Richard Kettlewell wrote:
    "Carlos E. R." <robin_listas@es.invalid> writes:
    In the example posted:

    The user types in a password, which gets checked against
    a database, character by character. Once the first character
    doesn't match, an error message is returned.

    ...the fault is not of the compiler, but of the programmer. He has to
    examine all characters even if he knows there is no point.

    Obviously you didn't read the whole article...

    It's "examining" behavior that's the fault :-)

    If you ALWAYS process ALL the characters, and/or try
    to make fake timing so success/fail will use up the
    same amount of CPU time, THEN you're ahead of the game.

    Obviously you didn't read [Richard Kettlewell's posts]

    The C code was, if executed literally as written, processing ALL the characters.

    But in both the optimized state (-O3) and the "do not optimize" state
    (-O0) the GCC output object code was skipping execution of much of the
    object code that needed to be executed for a "constant time"
    comparison.

    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From c186282@c186282@nnada.net to comp.os.linux.misc on Thu Feb 12 23:44:56 2026
    From Newsgroup: comp.os.linux.misc

    On 2/12/26 22:20, Rich wrote:
    c186282 <c186282@nnada.net> wrote:
    On 2/12/26 04:55, Richard Kettlewell wrote:
    "Carlos E. R." <robin_listas@es.invalid> writes:
    In the example posted:

    The user types in a password, which gets checked against
    a database, character by character. Once the first character
    doesn't match, an error message is returned.

    ...the fault is not of the compiler, but of the programmer. He has to
    examine all characters even if he knows there is no point.

    Obviously you didn't read the whole article...

    It's "examining" behavior that's the fault :-)

    If you ALWAYS process ALL the characters, and/or try
    to make fake timing so success/fail will use up the
    same amount of CPU time, THEN you're ahead of the game.

    Obviously you didn't read [Richard Kettlewell's posts]

    I was reading the Original Source that explained
    the attack method.

    The C code was, if executed literally as written, processing ALL the characters.

    But in both the optimized state (-O3) and the "do not optimize" state
    (-O0) the GCC output object code was skipping execution of much of the
    object code that needed to be executed for a "constant time"
    comparison.

    As said before ... 'too smart for its own good' in
    this instance.

    Now lots of people need to update lots of code to
    obfuscate this issue. Expensive.

    Yea yea, we all tend to use the 'high-optimization'
    flags by default ... it's a psych thing. However in
    this case someone has figured out how to use that
    against us all. Either a lot of code had to be
    re-compiled in less-optimized mode or the algos
    have to be changed a bit. The former fix is easier.

    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From The Natural Philosopher@tnp@invalid.invalid to comp.os.linux.misc on Fri Feb 13 10:10:04 2026
    From Newsgroup: comp.os.linux.misc

    On 12/02/2026 14:14, Carlos E. R. wrote:
    On 2026-02-12 13:38, The Natural Philosopher wrote:
    On 11/02/2026 22:24, Carlos E. R. wrote:
    On 2026-02-11 19:50, Rich wrote:
    c186282 <c186282@nnada.net> wrote:
    On 2/10/26 04:09, The Natural Philosopher wrote:
    ...more fuel on the fire...

    https://www.theregister.com/2026/02/09/compilers_undermine_encryption/
    GCC erases code whose delays obfuscates encryption delays because it
    doesn't do anything...


      Very interesting ! How 'optimization' sometimes ISN'T.

    Nope. As Richard Kettlewell has pointed out, what the encryption code
    writers want is "constant time execution, regardless of inputs" which
    is not a promised output from gcc, no matter the optimization level
    chosen.

    The compiler is "properly optimizing" given the meaning of
    "optimization" it uses ("make code run as fast as possible" or "make
    code as small as possible" -- with -Os). But the compiler was not
    designed to create "constant time execution" code. The writers were
    expecting a promise the compiler never promised.

    In the example posted:

      The user types in a password, which gets checked against
      a database, character by character. Once the first character
      doesn't match, an error message is returned.

    ...the fault is not of the compiler, but of the programmer. He has to
    examine all characters even if he knows there is no point.


    I think the point is that the compiler knows that isn't necessary, and
    doesn't bother.


    Then don't optimize. Optimization has always been somewhat problematic. Sometimes it introduced bugs that could not be debugged, because
    debugging altered the code, possibly removing the optimizations.


    I think you are on a hiding to nothing here. No high level language guarantees any particular load of assembler, and no particular load of assembler guarantees an exact timing these days.
    We have moved on from the Z80....
    --
    Truth welcomes investigation because truth knows investigation will lead
    to converts. It is deception that uses all the other techniques.

    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From The Natural Philosopher@tnp@invalid.invalid to comp.os.linux.misc on Fri Feb 13 10:11:22 2026
    From Newsgroup: comp.os.linux.misc

    On 12/02/2026 18:52, Carlos E. R. wrote:
    On 2026-02-12 18:23, Rich wrote:
    Carlos E. R. <robin_listas@es.invalid> wrote:
    On 2026-02-12 13:38, The Natural Philosopher wrote:
    On 11/02/2026 22:24, Carlos E. R. wrote:
    On 2026-02-11 19:50, Rich wrote:
    c186282 <c186282@nnada.net> wrote:
    On 2/10/26 04:09, The Natural Philosopher wrote:
    ...more fuel on the fire...




    I think the point is that the compiler knows that isn't necessary, and doesn't bother.


    Then don't optimize. Optimization has always been somewhat problematic.
    Sometimes it introduced bugs that could not be debugged, because
    debugging altered the code, possibly removing the optimizations.

    It wasn't the optimizer causing the "skipping" of the rest of the
    checks. It was a byproduct of boolean short-circuiting of boolean
    expressions. Most languages only evaluate just enough of a complex
    boolean expression to reach a true or false indication, then skip the
    rest of the expression (yes, this is an 'optimization', but not by the
    code optimizer but the language specification itself).

    The skipping of the remaining character checks in the example posted
    here was due to this boolean short-circuit behavior. Once the first
    'false' arrived for the first incorrect character, the compiled code
    skipped over evaluating the boolean expression for subsequent
    characters. So -O0 (no optimizations) or -O3 (full optimizations) made
    no difference, portions of the 'constant time execution' were skipped,
    opening a timing side channel attack.

    Ah, yes, I remember that now. Can play havoc when one of the expressions
    is actually a function and the later code relies on the prior execution
    of that code.

    the keyword 'volatile' helps in this case



    --
    Truth welcomes investigation because truth knows investigation will lead
    to converts. It is deception that uses all the other techniques.

    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Richard Kettlewell@invalid@invalid.invalid to comp.os.linux.misc on Fri Feb 13 10:20:34 2026
    From Newsgroup: comp.os.linux.misc

    The Natural Philosopher <tnp@invalid.invalid> writes:

    On 12/02/2026 18:52, Carlos E. R. wrote:
    On 2026-02-12 18:23, Rich wrote:
    Carlos E. R. <robin_listas@es.invalid> wrote:
    On 2026-02-12 13:38, The Natural Philosopher wrote:
    On 11/02/2026 22:24, Carlos E. R. wrote:
    On 2026-02-11 19:50, Rich wrote:
    c186282 <c186282@nnada.net> wrote:
    On 2/10/26 04:09, The Natural Philosopher wrote:
    ...more fuel on the fire...


    I think the point is that the compiler knows that isn't necessary, and doesn't bother.


    Then don't optimize. Optimization has always been somewhat problematic.
    Sometimes it introduced bugs that could not be debugged, because
    debugging altered the code, possibly removing the optimizations.

    It wasn't the optimizer causing the "skipping" of the rest of the
    checks. It was a byproduct of boolean short-circuiting of boolean
    expressions. Most languages only evaluate just enough of a complex
    boolean expression to reach a true or false indication, then skip the
    rest of the expression (yes, this is an 'optimization', but not by the
    code optimizer but the language specification itself).

    The skipping of the remaining character checks in the example posted
    here was due to this boolean short-circuit behavior. Once the first
    'false' arrived for the first incorrect character, the compiled code
    skipped over evaluating the boolean expression for subsequent
    characters. So -O0 (no optimizations) or -O3 (full optimizations) made
    no difference, portions of the 'constant time execution' were skipped,
    opening a timing side channel attack.

    Ah, yes, I remember that now. Can play havoc when one of the
    expression is actually a function and the later code relies on the
    prior execution of that code.

    the keyword 'volatile' helps in this case

    No. volatile does not affect the execution of the short-circuiting
    boolean operators. If the left-hand side of && is false, or the left
    hand side of || is true, then the right-hand is not executed.
    --
    https://www.greenend.org.uk/rjk/
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From c186282@c186282@nnada.net to comp.os.linux.misc on Fri Feb 13 19:57:00 2026
    From Newsgroup: comp.os.linux.misc

    On 2/13/26 05:10, The Natural Philosopher wrote:
    On 12/02/2026 14:14, Carlos E. R. wrote:
    On 2026-02-12 13:38, The Natural Philosopher wrote:
    On 11/02/2026 22:24, Carlos E. R. wrote:
    On 2026-02-11 19:50, Rich wrote:
    c186282 <c186282@nnada.net> wrote:
    On 2/10/26 04:09, The Natural Philosopher wrote:
    ...more fuel on the fire...

    https://www.theregister.com/2026/02/09/
    compilers_undermine_encryption/

    GCC erases code whose delays obfuscates encryption delays because it
    doesn't do anything...


      Very interesting ! How 'optimization' sometimes ISN'T.

    Nope. As Richard Kettlewell has pointed out, what the encryption code
    writers want is "constant time execution, regardless of inputs" which
    is not a promised output from gcc, no matter the optimization level
    chosen.

    The compiler is "properly optimizing" given the meaning of
    "optimization" it uses ("make code run as fast as possible" or "make >>>>> code as small as possible" -- with -Os).-a But the compiler was not
    designed to create "constant time execution" code.-a The writers were >>>>> expecting a promise the compiler never promised.

    In the example posted:

      The user types in a password, which gets checked against
      a database, character by character. Once the first character
      doesn't match, an error message is returned.

    ...the fault is not of the compiler, but of the programmer. He has
    to examine all characters even if he knows there is no point.


    I think the point is that the compiler knows that isn't necessary,
    and doesn't bother.


    Then don't optimize. Optimization has always been somewhat
    problematic. Sometimes it introduced bugs that could not be debugged,
    because debugging altered the code, possibly removing the optimizations.


    I think you are on a hiding to nothing here. No high level language guarantees any particular load of assembler, and no particular load of assembler guarantees an exact timing these days.
    We have moved on from the Z80....


    Hmmmm ... how about a Z80 equiv in a thumb-drive
    casing ? You can send it stuff, it can send stuff
    back, nice single-CPU single-thread performance :-)

    When getting a PW you actually have the Z80 do it
    and confirm accuracy.

    For desktops, a PCI card with similar setup.

    You can EMULATE a Z80 or 6502 and a few others
    easily enough with Linux utils, but they're
    still actually a Linux x86/64 pgm and may be
    subject to the same vulnerabilities as the
    article mentioned.

    For on-order commercial purposes, there might be a
    market for "<oldCPU>-on-a stick" ...

    I'd like a 6809 and the CoCo ver of OS-9 :-)

    (don't think MicroWare actually makes a 6809
    version of OS-9 anymore alas - there ARE a
    few ARM versions however)

    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From John@john@panix.com to comp.os.linux.misc on Tue Feb 17 16:23:46 2026
    From Newsgroup: comp.os.linux.misc

    Marc Haber <mh+usenetspam1118@zugschl.us> wrote:
    Farley Flud <ff@linux.rocks> wrote:
    The Natural Philosopher wrote:
    ...more fuel on the fire...

    https://www.theregister.com/2026/02/09/compilers_undermine_encryption/

    GCC erases code whose delays obfuscates encryption delays because it
    doesn't do anything...

    A complete non issue. Click bait.

    Just shut off optimization, either en masse or selectively via the dozens
    of switches.

    Knowing how to compile is just as important as knowing how to program.

    Does GCC have pragmas so that the programmer can turn off optimization
    for only those code parts? That would probably be the wise thing to do.
    Turning off optimization completely doesn't sound like the right
    thing.

    Don't know whether or not this was suggested downthread:
    write your delay code as a callable function, compile it
    separately without optimization, and then link that
    delayfunc.o file with your other code, which has been
    modified to call delayfunc() as needed. As already stated,
    a complete non-issue.
    --
    John
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Rich@rich@example.invalid to comp.os.linux.misc on Thu Feb 19 19:26:04 2026
    From Newsgroup: comp.os.linux.misc

    John <john@panix.com> wrote:
    Marc Haber <mh+usenetspam1118@zugschl.us> wrote:
    Farley Flud <ff@linux.rocks> wrote:
    The Natural Philosopher wrote:
    ...more fuel on the fire...

    https://www.theregister.com/2026/02/09/compilers_undermine_encryption/
    GCC erases code whose delays obfuscates encryption delays because it
    doesn't do anything...

    A complete non issue. Click bait.

    Just shut off optimization, either en masse or selectively via the dozens
    of switches.

    Knowing how to compile is just as important as knowing how to program.

    Does GCC have pragmas so that the programmer can turn off optimization
    for only those code parts? That would probably be the wise thing to do.
    Turning off optimization completely doesn't sound like the right
    thing.

    Don't know whether or not this was suggested downthread:
    write your delay code as a callable function, compile it
    separately without optimization, and then link that
    delayfunc.o file with your other code, which has been
    modified to call delayfunc() as needed. As already stated,
    a complete non-issue.

    Look at the examples posted by Richard Kettlewell in the message with Message-ID: <wwvcy2cbmh6.fsf@LkoBDZeT.terraraq.uk>

    Even with no optimizations, due to normal boolean logic
    short-circuiting defined by the C spec, the output assembly by the
    compiler still skips over much of the "constant time" activity that
    must be executed for the code to be "constant time".

    The end result is: compilers do not guarantee "constant runtime" object
    code, regardless of optimization settings.
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Richard Kettlewell@invalid@invalid.invalid to comp.os.linux.misc on Sun Feb 22 09:28:36 2026
    From Newsgroup: comp.os.linux.misc

    Rich <rich@example.invalid> writes:
    John <john@panix.com> wrote:
    Don't know whether or not this was suggested downthread: write your
    delay code as a callable function, compile it separately without
    optimization, and then link that delayfunc.o file with your other
    code, which has been modified to call delayfunc() as needed. As
    already stated, a complete non-issue.

    Look at the examples posted by Richard Kettlewell in the message with Message-ID: <wwvcy2cbmh6.fsf@LkoBDZeT.terraraq.uk>

    Even with no optimizations, due to normal boolean logic
    short-circuiting defined by the C spec, the output assembly by the
    compiler still skips over much of the "constant time" activity that
    must be executed for the code to be "constant time".

    The end result is: compilers do not guarantee "constant runtime" object code, regardless of optimization settings.

    It's possibly worth noting that the 'random delay' strategy does not
    work in general. In many real designs a given secret is used many times
    (e.g. an https site will generate a new signature, using the same key,
    for every new connection). Instead of a single timing, the attacker gets
    a collection of timings and gets to draw inferences from their
    distribution. The attacker's cost goes up, for sure, but the attack
    doesn't go away.

    If this was truly trivial to solve then nobody would be talking about
    it. The people claiming it's a non-issue have not engaged with the issue
    at all.
    --
    https://www.greenend.org.uk/rjk/
    --- Synchronet 3.21b-Linux NewsLink 1.2