• Re: Microarchitectural support for counting

    From EricP@21:1/5 to Scott Lurndal on Fri Oct 4 14:11:23 2024
    Scott Lurndal wrote:
    Brett <ggtgp@yahoo.com> writes:
    When a modern CPU takes an interrupt it does not suspend the current
    processing; instead it just starts fetching code from the new process
    while letting computations in the pipeline continue to completion. The
    OoOE can have 1000 instructions in flight. At some point the resources
    start getting dedicated to the new process, and the old process is
    drained out or maybe actually stopped.

    Not necessarily the case. For various reasons, entry to the interrupt handler may actually have a barrier to ensure that outstanding stores
    are committed (store buffer drained) before continuing. This is for
    error containment purposes.


    Yes but pipelining interrupts is trickier than that.

    First there is pipelining the super/user mode change. This requires
    fetch to have a future copy of the mode, which is used for instruction
    address translation; a mode flag attached to each instruction or uOp;
    a mode copy saved with each checkpoint; and a committed mode copy at
    retire. Privileged instructions are checked by decode to ensure their
    fetch mode was correct.
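
    To make the bookkeeping concrete, here is a minimal C sketch of the
    state described above; every name in it is invented for illustration,
    not taken from any real core:

        #include <stdbool.h>
        #include <stdint.h>

        #define NUM_CHECKPOINTS 8

        enum mode { MODE_USER, MODE_SUPER };

        /* The mode lives in several places at once: a "future" copy at
           fetch, a snapshot carried by every uOp, a copy in each
           checkpoint, and a committed copy at retire. */
        struct uop {
            uint64_t  pc;
            enum mode fetch_mode;    /* mode in effect when this uOp was fetched */
            bool      is_privileged; /* determined by decode */
        };

        struct pipe_mode_state {
            enum mode future_mode;                /* used by fetch for translation */
            enum mode retired_mode;               /* committed architectural mode */
            enum mode ckpt_mode[NUM_CHECKPOINTS]; /* saved per checkpoint */
        };

        /* Decode check: a privileged instruction fetched in user mode
           raises a privilege fault. */
        static bool priv_ok(const struct uop *u)
        {
            return !u->is_privileged || u->fetch_mode == MODE_SUPER;
        }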

    On interrupt, if the core starts fetching instructions from the handler and stuffing them into the instruction queue (ROB) while there are still instructions in flight, and if those older instructions get a branch mispredict, then the purge of mispredicted older instructions will also
    purge the interrupt handler. Also the older instructions might trigger
    an exception, delivery of which would take precedence over the delivery
    of the interrupt and again purge the handler. Also the older instructions
    might raise the core's interrupt priority, masking the interrupt that
    it just tried to accept.

    The interrupt controller can't complete the hand-off of the interrupt
    to a core until it knows that hand-off won't get purged by a mispredict, exception or priority change. So the hand-off becomes like a two-phase
    commit where the controller offers an available core an interrupt,
    core accepts it tentatively and starts executing the handler,
    and core later either commits or rejects the hand-off.
    While the interrupt is in limbo the controller marks it as tentative
    but keeps its position in the interrupt queue.

    This is where your point comes in.
    Because the x86/x64 automatically pushes the saved context (RIP, RSP,
    RFLAGS) onto the kernel stack, that context store can only happen when
    the entry to the interrupt sequence reaches retire, which means all
    older instructions must have retired. At that point the core sends a
    commit signal to the interrupt controller and begins its stores, and
    the controller removes the interrupt from its queue. If anything purges
    the hand-off then the core sends a reject signal to the controller,
    which returns the interrupt to a pending state at its position at the
    front of its queue.
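
    The hand-off can be pictured as a small state machine. A rough C
    sketch, with all names invented for illustration:

        enum irq_state { IRQ_PENDING, IRQ_TENTATIVE };

        struct irq {
            int pri;                /* interrupt priority */
            enum irq_state state;   /* queue position is kept throughout */
            struct irq *next;
        };

        /* Phase 1: the controller offers the interrupt to a core; the
           core starts fetching the handler, but the hand-off is not
           final and the entry keeps its place in the queue. */
        static void offer(struct irq *q) { q->state = IRQ_TENTATIVE; }

        /* Phase 2a: the interrupt entry reached retire (all older
           instructions retired), so the controller dequeues it. */
        static void commit(struct irq **head) { *head = (*head)->next; }

        /* Phase 2b: a mispredict, exception, or priority raise purged
           the handler; the entry reverts to pending at the queue front. */
        static void reject(struct irq *q) { q->state = IRQ_PENDING; }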

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From MitchAlsup1@21:1/5 to EricP on Fri Oct 4 23:09:53 2024
    On Fri, 4 Oct 2024 18:11:23 +0000, EricP wrote:

    Scott Lurndal wrote:
    Brett <ggtgp@yahoo.com> writes:
    When a modern CPU takes an interrupt it does not suspend the current
    processing; instead it just starts fetching code from the new process
    while letting computations in the pipeline continue to completion. The
    OoOE can have 1000 instructions in flight. At some point the resources
    start getting dedicated to the new process, and the old process is
    drained out or maybe actually stopped.

    Not necessarily the case. For various reasons, entry to the interrupt
    handler may actually have a barrier to ensure that outstanding stores
    are committed (store buffer drained) before continuing. This is for
    error containment purposes.


    Yes but pipelining interrupts is trickier than that.

    First there is pipelining the super/user mode change. This requires
    fetch to have a future copy of the mode, which is used for instruction
    address translation; a mode flag attached to each instruction or uOp;
    a mode copy saved with each checkpoint; and a committed mode copy at
    retire. Privileged instructions are checked by decode to ensure their
    fetch mode was correct.

    On interrupt, if the core starts fetching instructions from the handler
    and stuffing them into the instruction queue (ROB) while there are still
    instructions in flight, and if those older instructions get a branch
    mispredict, then the purge of mispredicted older instructions will also
    purge the interrupt handler.

    Not necessary, you purge all of the younger instructions from the
    thread at retirement, but none of the instructions associated with
    the new <interrupt> thread at the front.

    Also the older instructions might trigger
    an exception, delivery of which would take precedence over the delivery
    of the interrupt and again purge the handler. Also the older
    instructions
    might raise the core's interrupt priority, masking the interrupt that
    it just tried to accept.

    The interrupt controller can't complete the hand-off of the interrupt
    to a core until it knows that hand-off won't get purged by a mispredict, exception or priority change. So the hand-off becomes like a two-phase
    commit where the controller offers an available core an interrupt,
    core accepts it tentatively and starts executing the handler,
    and core later either commits or rejects the hand-off.
    While the interrupt is in limbo the controller marks it as tentative
    but keeps its position in the interrupt queue.

    This is where your point comes in.
    Because the x86/x64 automatically pushes the saved context (RIP, RSP,
    RFLAGS) onto the kernel stack, that context store can only happen when
    the entry to the interrupt sequence reaches retire, which means all
    older instructions must have retired. At that point the core sends a
    commit signal to the interrupt controller and begins its stores, and
    the controller removes the interrupt from its queue. If anything purges
    the hand-off then the core sends a reject signal to the controller,
    which returns the interrupt to a pending state at its position at the
    front of its queue.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From jseigh@21:1/5 to jseigh on Sat Dec 28 07:20:17 2024
    On 12/27/24 11:16, jseigh wrote:
    On 10/3/24 10:00, Anton Ertl wrote:
    Two weeks ago Rene Mueller presented the paper "The Cost of Profiling
    in the HotSpot Virtual Machine" at MPLR 2024.  He reported that for
    some programs the counters used for profiling the program result in
    cache contention due to true or false sharing among threads.

    The traditional software mitigation for that problem is to split the
    counters into per-thread or per-core instances.  But for heavily
    multi-threaded programs running on machines with many cores the cost
    of this mitigation is substantial.


    For profiling, do we really need accurate counters?  They just need to
    be statistically accurate I would think.

    Instead of incrementing a counter, just store a non-zero immediate into
    a zero-initialized byte array at a per-"counter" index. There's no
    rmw data dependency, just a store, so it should have little impact on
    the pipeline.

    A profiling thread loops thru the byte array, incrementing an actual
    counter when it sees a non-zero byte, and resets the byte to zero. You
    could use vector ops to process the array.

    If the stores were fast enough, you could do 2 or more stores at
    hashed indices, different hash for each store. Sort of a counting
    Bloom filter.  The effective count would be the minimum of the
    hashed counts.

    No idea how feasible this would be though.


    Probably not feasible. The polling frequency wouldn't be high enough.


    If the problem is the number of counters, then counting Bloom filters
    might be worth looking into, assuming the overhead of incrementing
    the counts isn't a problem.
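
    For concreteness, a single-threaded C sketch of the counting Bloom
    filter idea (table size and hash constants are arbitrary, and the
    concurrency questions above are ignored):

        #include <stdint.h>

        #define SLOTS  (1u << 20)   /* table size, power of two */
        #define HASHES 2            /* stores per event */

        static uint8_t slot[SLOTS]; /* shared saturating byte counters */

        /* Two cheap hashes of the counter id (constants arbitrary). */
        static inline uint32_t hslot(uint32_t id, uint32_t k)
        {
            uint32_t x = id * 2654435761u + k * 0x9e3779b9u;
            x ^= x >> 16;
            return x & (SLOTS - 1);
        }

        /* Hot path: bump each hashed slot, saturating at 255. */
        static void bump(uint32_t id)
        {
            for (uint32_t k = 0; k < HASHES; k++) {
                uint8_t *p = &slot[hslot(id, k)];
                if (*p != 0xff)
                    (*p)++;
            }
        }

        /* Estimate: the minimum over the hashed slots, as described. */
        static uint32_t estimate(uint32_t id)
        {
            uint32_t m = 0xff;
            for (uint32_t k = 0; k < HASHES; k++) {
                uint32_t v = slot[hslot(id, k)];
                if (v < m)
                    m = v;
            }
            return m;
        }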

    Joe Seigh

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From EricP@21:1/5 to Anton Ertl on Mon Dec 30 14:39:27 2024
    Anton Ertl wrote:
    "Chris M. Thomasson" <chris.m.thomasson.1@gmail.com> writes:
    On 10/3/2024 7:00 AM, Anton Ertl wrote:
    Two weeks ago Rene Mueller presented the paper "The Cost of Profiling
    in the HotSpot Virtual Machine" at MPLR 2024. He reported that for
    some programs the counters used for profiling the program result in
    cache contention due to true or false sharing among threads.

    The traditional software mitigation for that problem is to split the
    counters into per-thread or per-core instances. But for heavily
    multi-threaded programs running on machines with many cores the cost
    of this mitigation is substantial.
    ....
    For the HotSpot application, the
    eventual answer was that they live with the cost of cache contention
    for the programs that have that problem. After some minutes the hot
    parts of the program are optimized, and cache contention is no longer
    a problem.
    ....
    If the per-thread counters are properly padded to an L2 cache line and
    properly aligned on cache line boundaries, well, they should not cause
    false sharing with other cache lines... Right?

    Sure, that's what the first sentence of the second paragraph you cited
    (and which I cited again) is about. Next, read the next sentence.

    Maybe I should give an example (fully made up on the spot, read the
    paper for real numbers): If HotSpot uses, on average, one counter per
    conditional branch, and assuming a conditional branch every 10 static
    instructions (each having, say, 4 bytes), with 1MB of generated code
    and 8 bytes per counter, that's 200KB of counters. But these counters
    are shared between all threads, so for code running on many cores you
    get true and false sharing.

    As mentioned, the usual mitigation is per-core counters. With a
    256-core machine, we now have 51.2MB of counters for 1MB of executable
    code. Now this is Java, so there might be quite a bit more executable
    code and correspondingly more counters. They eventually decided that
    the benefit of reduced cache coherence traffic is not worth that cost
    (or the cost of a hardware mechanism), as described in the last
    paragraph, from which I cited the important parts.

    - anton

    They could do this by having each thread log its own profile data
    into a thread-local profile bucket. When the bucket is full it
    queues its bucket to a "full" list and dequeues a new bucket from
    an "empty" list. A dedicated thread processes full buckets into the
    profile summary arrays, then puts the empty buckets on the empty list.

    A profile bucket is an array of 32-bit values. Each value is
    a 16-bit event type and a 16-bit item id (or whatever).
    Simple events like counting each use of a branch take just one entry.
    Other profile events could take multiple entries if they recorded
    cpu performance counters or real time timestamps or both.

    The atomic accesses are only on the full and empty bucket list heads.
    By playing with the bucket sizes you can keep the chance of
    core collisions on the list heads negligible.
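
    A rough C sketch of the bucket recycling described above, with
    invented names, and with a mutex standing in where a real
    implementation would use atomic operations on the list heads:

        #include <pthread.h>
        #include <stdint.h>

        #define BUCKET_ENTRIES 4096

        struct bucket {
            struct bucket *next;
            uint32_t n;                     /* entries used */
            uint32_t entry[BUCKET_ENTRIES]; /* event<<16 | item id */
        };

        static struct bucket *full_list, *empty_list;
        static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;
        static __thread struct bucket *cur; /* this thread's open bucket
                                               (__thread is a GCC extension) */

        static struct bucket *pop(struct bucket **head)
        {
            pthread_mutex_lock(&list_lock);
            struct bucket *b = *head;
            if (b) *head = b->next;
            pthread_mutex_unlock(&list_lock);
            return b;
        }

        static void push(struct bucket **head, struct bucket *b)
        {
            pthread_mutex_lock(&list_lock);
            b->next = *head;
            *head = b;
            pthread_mutex_unlock(&list_lock);
        }

        /* Hot path: log one simple event into the thread-local bucket. */
        void log_event(uint16_t ev, uint16_t id)
        {
            if (!cur || cur->n == BUCKET_ENTRIES) {
                if (cur)
                    push(&full_list, cur);
                cur = pop(&empty_list);
                if (!cur)
                    return; /* list ran dry; real code must handle this */
                cur->n = 0;
            }
            cur->entry[cur->n++] = (uint32_t)ev << 16 | id;
        }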

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From MitchAlsup1@21:1/5 to Paul A. Clayton on Wed Jan 1 00:34:44 2025
    On Tue, 31 Dec 2024 2:02:05 +0000, Paul A. Clayton wrote:

    On 12/25/24 1:30 PM, MitchAlsup1 wrote:
    On Wed, 25 Dec 2024 17:50:12 +0000, Paul A. Clayton wrote:

    On 10/5/24 11:11 AM, EricP wrote:
    MitchAlsup1 wrote:
    [snip]
    --------------------------

    But voiding doesn't look like it works for exceptions or
    conflicting
    interrupt priority adjustments. In those cases purging the
    interrupt
    handler and rejecting the hand-off looks like the only option.

    Should exceptions always have priority? It seems to me that if a
    thread is low enough priority to be interrupted, it is low enough
    priority to have its exception processing interrupted/delayed.

    It depends on what you mean::

    a) if you mean that exceptions are prioritized and the highest
    priority exception is the one taken, then OK you are working
    in an ISA that has multiple exceptions per instruction. Most
    RISC ISAs do not have this property.

    The context was any exception taking priority over an interrupt
    that was accepted, at least on a speculative path. I.e., the
    statement would have been more complete as "Should exceptions
    always (or ever) have priority over an accepted interrupt?"

    In the parlance I used to document My 66000 architecture, exceptions
    happen at instruction boundaries, while interrupts happen between
    instructions. Thus the CPU is never deciding between an interrupt and an
    exception.

    Interrupts take on the priority assigned at I/O creation time.
    {{Oh and BTW, a single I/O request can take I/O exception to
    GuestOS, to HyperVisor, can deliver completion to assigned
    supervisor (Guest OS or HV), and deliver I/O failures to
    Secure Monitor (or whomever is assigned)}}

    Exceptions take on the priority of the currently running thread.
    A page fault at priority min does not block any interrupt at
    priority > min. A page fault at priority max is not interruptible.


    --------------------------------------

    Sooner or later an ISR has to actually deal with the MMI/O
    control registers associated with the <ahem> interrupt.

    Yes, but multithreading could hide some of those latencies in
    terms of throughput.

    EricP is the master proponent of finishing the instructions in the
    execution window that are finishable. I, merely, have no problem
    in allowing the pipe to complete or take a flush based on the kind
    of pipeline being engineered.

    With 300-odd instructions in the window this thesis has merit,
    with a 5-stage pipeline 1-wide, it does not have merit but is
    not devoid of merit either.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From MitchAlsup1@21:1/5 to Paul A. Clayton on Wed Dec 25 18:30:43 2024
    On Wed, 25 Dec 2024 17:50:12 +0000, Paul A. Clayton wrote:

    On 10/5/24 11:11 AM, EricP wrote:
    MitchAlsup1 wrote:
    [snip]
    --------------------------

    But voiding doesn't look like it works for exceptions or conflicting
    interrupt priority adjustments. In those cases purging the interrupt
    handler and rejecting the hand-off looks like the only option.

    Should exceptions always have priority? It seems to me that if a
    thread is low enough priority to be interrupted, it is low enough
    priority to have its exception processing interrupted/delayed.

    It depends on what you mean::

    a) if you mean that exceptions are prioritized and the highest
    priority exception is the one taken, then OK you are working
    in an ISA that has multiple exceptions per instruction. Most
    RISC ISAs do not have this property.

    b) if you mean that exceptions take priority over non-exception
    instruction streaming, well that is what exceptions ARE. In these
    cases, the exception handler inherits the priority of the instruction
    stream that raised it--but that is NOT assigning a priority to the
    exception.

    c) and then there are the cases where a PageFault from GuestOS
    page tables is serviced by GuestOS, while a PageFault from
    HyperVisor page tables is serviced by HyperVisor. You could
    assert that HV has higher priority than GuestOS, but it is
    more like HV has privilege over GuestOS while running at the
    same priority level.

    (There might be cases where normal operation allows deadlines to
    be met with lower priority and unusual extended operation requires
    high priority/resource allocation. Boosting the priority/resource
    budget of a thread/task to meet deadlines seems likely to make
    system-level reasoning more difficult. It seems one could also
    create an inflationary spiral.)

    With substantial support for Switch-on-Event MultiThreading, it
    is conceivable that a lower priority interrupt could be held
    "resident" after being interrupted by a higher priority interrupt.

    I don't know what you mean by 'resident'; would "lower priority
    ISR gets pushed on stack to allow higher priority ISR to run"
    qualify as 'resident'?

    And then there is the slightly easier case: where GuestOS is
    servicing an interrupt and ISR takes a PageFault in Hyper-
    Visor page tables. HV PF ISR fixes GuestOS ISR PF, and returns
    to interrupted interrupt handler. Here, even an instruction
    stream incapable (IE & EE=OFF) of taking an Exception takes an
    Exception to a different privilege level.

    Switch-on-Event helps but is not necessary.

    A chunked ROB could support such, but it is not clear that such
    is desirable even ignoring complexity factors.

    Being able to overlap latency of a memory-mapped I/O access (or
    other slow access) with execution of another thread seems
    attractive and even an interrupt handler with few instructions
    might have significant run time. Since interrupt blocking is
    used to avoid core-localized resource contention, software would
    have to know about such SoEMT.

    It may take 10,000 cycles to read an I/O control register way
    down the PCIe tree, the ISR reads several of these registers,
    and constructs a data-structure to be processed by softIRQ (or
    DPC) at lower priority. So, allowing the long cycle MMI/O LDs
    to overlap with ISR thread setup is advantageous.

    (Interrupts seem similar to certain server software threads in
    having lower ILP from control dependencies and more frequent high
    latency operations, which hints that multithreading may be
    desirable.)

    Sooner or later an ISR has to actually deal with the MMI/O
    control registers associated with the <ahem> interrupt.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From MitchAlsup1@21:1/5 to All on Wed Dec 25 18:44:19 2024
    On Sat, 5 Oct 2024 22:55:47 +0000, MitchAlsup1 wrote:

    On Sat, 5 Oct 2024 15:11:29 +0000, EricP wrote:

    MitchAlsup1 wrote:
    On Fri, 4 Oct 2024 18:11:23 +0000, EricP wrote:
    On interrupt, if the core starts fetching instructions from the handler
    and stuffing them into the instruction queue (ROB) while there are still
    instructions in flight, and if those older instructions get a branch
    mispredict, then the purge of mispredicted older instructions will also
    purge the interrupt handler.

    Not necessary, you purge all of the younger instructions from the
    thread at retirement, but none of the instructions associated with
    the new <interrupt> thread at the front.

    That's difficult with a circular buffer for the instruction queue/rob
    as you can't edit the order. For a branch mispredict you might be able
    to mark a circular range of entries as voided, and leave the entries
    to be recovered serially at retire.

    Every instruction needs a way to place itself before or after
    any mispredictable branch. Once you know which branch mispredicted, you
    know which instructions will not retire, transitively. All you really need to
    know is if the instruction will retire, or not. The rest of the
    mechanics play out naturally in the pipeline.

    If, instead of nullifying every instruction past a given point, you
    make each instruction dependent on HIS branch executing (as predicted),
    instructions issued under a mispredict shadow remove THEMSELVES from
    the instruction queues.

    If one is doing Predication with then-clauses and else-clauses(*),
    one can drop both clauses into execution and let branch resolution
    choose which instructions execute and which die. At this point, the
    pipeline is well set up for using the same structure wrt interrupt
    hand-over. Should an exception happen in the application instruction
    stream, which was already in execution at the time of interruption,
    any branch mispredict from application instructions stops the
    application instruction stream precisely, and we will get back to
    that precise point after the ISR services the interrupt.

    (*) like My 66000

    But voiding doesn't look like it works for exceptions or conflicting
    interrupt priority adjustments. In those cases purging the interrupt
    handler and rejecting the hand-off looks like the only option.

    Can you make this statement again and use different words?

    If one can live with the occasional replay of an interrupt hand-off and
    handler execute due to mispredict/exception/interrupt_priority_adjust
    then the interrupt pipelining looks much simplified.

    You just have to cover the depth of the pipeline.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to mitchalsup@aol.com on Wed Dec 25 19:10:09 2024
    mitchalsup@aol.com (MitchAlsup1) writes:
    On Wed, 25 Dec 2024 17:50:12 +0000, Paul A. Clayton wrote:

    On 10/5/24 11:11 AM, EricP wrote:
    MitchAlsup1 wrote:
    [snip]
    --------------------------

    But voiding doesn't look like it works for exceptions or conflicting
    interrupt priority adjustments. In those cases purging the interrupt
    handler and rejecting the hand-off looks like the only option.

    Should exceptions always have priority? It seems to me that if a
    thread is low enough priority to be interrupted, it is low enough
    priority to have its exception processing interrupted/delayed.

    It depends on what you mean::

    a) if you mean that exceptions are prioritized and the highest
    priority exception is the one taken, then OK you are working
    in an ISA that has multiple exceptions per instruction. Most
    RISC ISAs do not have this property.

    AArch64 has 44 different synchronous exception priorities, and within
    each priority that describes more than one exception, there
    is a sub-prioritization therein. (Section D 1.3.5.5 pp 6080 in DDI0487K_a).

    While it is not common for a particular instruction to generate
    multiple exceptions, it is certainly possible (e.g. when
    instructions are trapped to a more privileged execution mode).


    b) if you mean that exceptions take priority over non-exception
    instruction streaming, well that is what exceptions ARE. In these
    cases, the exception handler inherits the priority of the instruction
    stream that raised it--but that is NOT assigning a priority to the
    exception.

    c) and then there are the cases where a PageFault from GuestOS
    page tables is serviced by GuestOS, while a PageFault from
    HyperVisor page tables is serviced by HyperVisor. You could
    assert that HV has higher priority than GuestOS, but it is
    more like HV has privilege over GuestOS while running at the
    same priority level.

    It seems unlikely that a translation fault in user mode would need
    handling in both the guest OS and the hypervisor during the
    execution of an instruction; the
    exception to the hypervisor would generally occur when the
    instruction trapped by the guest (who updated the guest translation
    tables) is restarted.

    Other exception causes (such as asynchronous exceptions
    like interrupts) would remain pending and be taken (subject
    to priority and control enables) when the instruction is
    restarted (or the next instruction is dispatched for asynchronous
    exceptions).


    <snip>

    Being able to overlap latency of a memory-mapped I/O access (or
    other slow access) with execution of another thread seems

    That depends on whether the access is posted or non-posted. Only
    the latter affects instruction latency. The bulk of I/O to and
    from a PCIe device is initiated by the device directly
    to memory (subject to iommu translation), not by the CPU, so
    generally the latency to read an MMIO register isn't high enough
    to worry about scheduling other work on the core during
    the transfer.

    In most cases, it takes 1 or 2 orders of magnitude less than 10,000
    cycles to read an I/O control register in a typical PCI express function[***], particularly with modern on-chip PCIe endpoints[*] and CXL[**] (absent
    a PCIe Switched fabric such as the now deprecated multi-root
    I/O virtualization (MR-IOV)). A PCIe Gen-5 card can turn around
    a memory read request rather rapidly if the host I/O bus is
    clocked at a significant fraction (or unity) of the processor
    clock.

    [*] Such as the various bus 0 functions integrated into Intel and
    ARM processors for e.g. memory controller, I2C, SPI, etc) or
    on-chip network and crypto accelerators.

    [**] 150ns round trip additional latency compared with
    local DRAM with PCIe GEN5.

    [***] which don't need to deal with the PCIe transport
    and data link layers

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From MitchAlsup1@21:1/5 to EricP on Wed Dec 25 20:35:29 2024
    On Sat, 5 Oct 2024 15:11:29 +0000, EricP wrote:

    MitchAlsup1 wrote:
    On Fri, 4 Oct 2024 18:11:23 +0000, EricP wrote:

    --------------------------

    Not necessary, you purge all of the younger instructions from the
    thread at retirement, but none of the instructions associated with
    the new <interrupt> thread at the front.

    That's difficult with a circular buffer for the instruction queue/rob
    as you can't edit the order. For a branch mispredict you might be able
    to mark a circular range of entries as voided, and leave the entries
    to be recovered serially at retire.

    Sooner or later, the pipeline designer needs to recognize the
    oft-occurring code sequence pictured as:

        INST
        INST
        BC-----\
        INST   |
        INST   |
        INST   |
    /---BR     |
    |   INST<--/
    |   INST
    |   INST
    \-->INST
        INST

    So that the branch predictor predicts as usual, but DECODER recognizes
    the join point of this prediction, so if the prediction is wrong, one
    only nullifies the mispredicted instructions and then inserts the
    alternate instructions while holding the join point instructions until
    the alternate instructions complete.

    But voiding doesn't look like it works for exceptions or conflicting interrupt priority adjustments. In those cases purging the interrupt
    handler and rejecting the hand-off looks like the only option.

    Nullify instructions from the mispredicted paths. On hand-off to the
    ISR, adjust the recovery IP to just past the last instruction that
    executed properly, nullifying everything between the exception and
    the ISR.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From MitchAlsup1@21:1/5 to Scott Lurndal on Wed Dec 25 20:26:05 2024
    On Wed, 25 Dec 2024 19:10:09 +0000, Scott Lurndal wrote:

    mitchalsup@aol.com (MitchAlsup1) writes:
    On Wed, 25 Dec 2024 17:50:12 +0000, Paul A. Clayton wrote:

    On 10/5/24 11:11 AM, EricP wrote:
    MitchAlsup1 wrote:
    [snip]
    --------------------------

    But voiding doesn't look like it works for exceptions or conflicting
    interrupt priority adjustments. In those cases purging the interrupt
    handler and rejecting the hand-off looks like the only option.

    Should exceptions always have priority? It seems to me that if a
    thread is low enough priority to be interrupted, it is low enough
    priority to have its exception processing interrupted/delayed.

    It depends on what you mean::

    a) if you mean that exceptions are prioritized and the highest
    priority exception is the one taken, then OK you are working
    in an ISA that has multiple exceptions per instruction. Most
    RISC ISAs do not have this property.

    AArch64 has 44 different synchronous exception priorities, and within
    each priority that describes more than one exception, there
    is a sub-prioritization therein. (Section D 1.3.5.5 pp 6080 in
    DDI0487K_a).

    Thanks for the link::

    However, I would claim that the vast majority of those 44 things
    are interrupts and not exceptions (in colloquial nomenclature).

    An exception is raised if an instruction cannot execute to completion,
    and is raised synchronously with the instruction stream (and at a
    precise point in the instruction stream).

    An interrupt is raised asynchronous to the instruction stream.

    Reset is an interrupt and not an exception.

    Debug that hits an address range is closer to an interrupt than an
    exception. <but I digress>

    But it appears that ARM has many interrupts classified as exceptions.
    Anything not generated from instructions within the architectural
    instruction stream is an interrupt, and anything generated from
    within an architectural instruction stream is an exception.

    It also appears ARM uses priority to sort exceptions into an order,
    while most architectures define priority as a mechanism to choose
    when to take hard-control-flow-events rather than what.

    Be that as it may...


    While it is not common for a particular instruction to generate
    multiple exceptions, it is certainly possible (e.g. when
    instructions are trapped to a more privileged execution mode).


    b) if you mean that exceptions take priority over non-exception
    instruction streaming, well that is what exceptions ARE. In these
    cases, the exception handler inherits the priority of the instruction
    stream that raised it--but that is NOT assigning a priority to the
    exception.

    c) and then there are the cases where a PageFault from GuestOS
    page tables is serviced by GuestOS, while a PageFault from
    HyperVisor page tables is serviced by HyperVisor. You could
    assert that HV has higher priority than GuestOS, but it is
    more like HV has privilege over GuestOS while running at the
    same priority level.

    It seems unlikely that a translation fault in user mode would need
    handling in both the guest OS and the hypervisor during the
    execution of an instruction;

    Neither stated nor inferred. A PageFault is handled singularly by
    the level in the system that controls (writes) those PTEs.

    There is a significant period of time in many architectures after
    control arrives at the ISR where the ISR is not allowed to raise a
    page fault {storing registers to a stack}, and since this ISR
    might be the PageFault handler, it is not in a position to
    handle its own faults. However, HyperVisor can handle GuestOS
    PageFaults--GuestOS thinks the pages are present with reasonable
    access rights, HyperVisor tables are used to swap them in/out.
    Other than latency the GuestOS ISR does not see the PageFault.

    My 66000, on the other hand: when the ISR receives control, state
    has been saved on a stack, the instruction stream is already
    re-entrant, and the register file is as it was the last time
    this ISR ran.

    the
    exception to the hypervisor would generally occur when the
    instruction trapped by the guest (who updated the guest translation
    tables) is restarted.

    Other exception causes (such as asynchronous exceptions
    like interrupts)

    Asynchronous exceptions A R E interrupts, not like interrupts;
    they ARE interrupts. If it is not synchronous with instruction
    stream it is an interrupt. Only if it is synchronous with the
    instruction stream is it an exception.

    would remain pending and be taken (subject
    to priority and control enables) when the instruction is
    restarted (or the next instruction is dispatched for asynchronous
    exceptions).


    <snip>

    Being able to overlap latency of a memory-mapped I/O access (or
    other slow access) with execution of another thread seems

    That depends on whether the access is posted or non-posted.

    Writes can be posted, reads cannot. Reads must complete for the
    ISR to be able to set up the control block softIRQ/DPC will
    process shortly. Only after the data structure for softIRQ/DPC
    is written can the ISR allow control flow to leave.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to mitchalsup@aol.com on Thu Dec 26 12:32:29 2024
    On Wed, 25 Dec 2024 20:35:29 +0000
    mitchalsup@aol.com (MitchAlsup1) wrote:

    On Sat, 5 Oct 2024 15:11:29 +0000, EricP wrote:

    MitchAlsup1 wrote:
    On Fri, 4 Oct 2024 18:11:23 +0000, EricP wrote:

    --------------------------

    Not necessary, you purge all of the younger instructions from the
    thread at retirement, but none of the instructions associated with
    the new <interrupt> thread at the front.

    That's difficult with a circular buffer for the instruction
    queue/rob as you can't edit the order. For a branch mispredict you
    might be able to mark a circular range of entries as voided, and
    leave the entries to be recovered serially at retire.

    Sooner or later, the pipeline designer needs to recognize the
    oft-occurring code sequence pictured as:

        INST
        INST
        BC-----\
        INST   |
        INST   |
        INST   |
    /---BR     |
    |   INST<--/
    |   INST
    |   INST
    \-->INST
        INST

    So that the branch predictor predicts as usual, but DECODER recognizes
    the join point of this prediction, so if the prediction is wrong, one
    only nullifies the mispredicted instructions and then inserts the
    alternate instructions while holding the join point instructions until
    the alternate instructions complete.


    Yes, compilers often generate such code.
    When coding in asm, I typically know at least something about the
    probability of branches, so I tend to code it differently:

    inst
    inst
    bc colder_section
    inst
    inst
    inst
    merge_flow:
    inst
    inst
    ...
    ret

    colder_section:
    inst
    inst
    inst
    br merge_flow
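
    In C one can ask the compiler for a comparable hot/cold layout; a
    tiny illustration using GCC/Clang's __builtin_expect (the builtin is
    real, the function around it is a made-up example):

        /* The unlikely path is placed out of line, like colder_section
           above; __builtin_expect marks the test as usually false. */
        long process(long x)
        {
            if (__builtin_expect(x < 0, 0)) {   /* rare path */
                x = -x;                         /* "colder_section" */
            }
            return x * 2;                       /* "merge_flow" */
        }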


    Intel's "efficiency" cores family starting from Tremont has weird
    "clustered" front end design. It often prefers [predicted] taken
    branches over [predicted] non-taken branches. On front ends like that
    my optimization is likely to become pessimization.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Anton Ertl@21:1/5 to Chris M. Thomasson on Thu Dec 26 14:56:30 2024
    "Chris M. Thomasson" <chris.m.thomasson.1@gmail.com> writes:
    On 10/3/2024 7:00 AM, Anton Ertl wrote:
    Two weeks ago Rene Mueller presented the paper "The Cost of Profiling
    in the HotSpot Virtual Machine" at MPLR 2024. He reported that for
    some programs the counters used for profiling the program result in
    cache contention due to true or false sharing among threads.

    The traditional software mitigation for that problem is to split the
    counters into per-thread or per-core instances. But for heavily
    multi-threaded programs running on machines with many cores the cost
    of this mitigation is substantial.
    ...
    For the HotSpot application, the
    eventual answer was that they live with the cost of cache contention
    for the programs that have that problem. After some minutes the hot
    parts of the program are optimized, and cache contention is no longer
    a problem.
    ...
    If the per-thread counters are properly padded to an L2 cache line and
    properly aligned on cache line boundaries, well, they should not cause
    false sharing with other cache lines... Right?

    Sure, that's what the first sentence of the second paragraph you cited
    (and which I cited again) is about. Next, read the next sentence.

    Maybe I should give an example (fully made up on the spot, read the
    paper for real numbers): If HotSpot uses, on average, one counter per
    conditional branch, and assuming a conditional branch every 10 static
    instructions (each having, say, 4 bytes), with 1MB of generated code
    and 8 bytes per counter, that's 200KB of counters. But these counters
    are shared between all threads, so for code running on many cores you
    get true and false sharing.

    As mentioned, the usual mitigation is per-core counters. With a
    256-core machine, we now have 51.2MB of counters for 1MB of executable
    code. Now this is Java, so there might be quite a bit more executable
    code and correspondingly more counters. They eventually decided that
    the benefit of reduced cache coherence traffic is not worth that cost
    (or the cost of a hardware mechanism), as described in the last
    paragraph, from which I cited the important parts.

    - anton
    --
    'Anyone trying for "industrial quality" ISA should avoid undefined behavior.'
    Mitch Alsup, <c17fcd89-f024-40e7-a594-88a85ac10d20o@googlegroups.com>

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From EricP@21:1/5 to All on Thu Dec 26 14:25:37 2024
    MitchAlsup1 wrote:
    On Sat, 5 Oct 2024 15:11:29 +0000, EricP wrote:

    MitchAlsup1 wrote:
    On Fri, 4 Oct 2024 18:11:23 +0000, EricP wrote:

    --------------------------

    Not necessary, you purge all of the younger instructions from the
    thread at retirement, but none of the instructions associated with
    the new <interrupt> thread at the front.

    That's difficult with a circular buffer for the instruction queue/rob
    as you can't edit the order. For a branch mispredict you might be able
    to mark a circular range of entries as voided, and leave the entries
    to be recovered serially at retire.

    Sooner or later, the pipeline designer needs to recognize the
    oft-occurring code sequence pictured as:

        INST
        INST
        BC-----\
        INST   |
        INST   |
        INST   |
    /---BR     |
    |   INST<--/
    |   INST
    |   INST
    \-->INST
        INST

    So that the branch predictor predicts as usual, but DECODER recognizes
    the join point of this prediction, so if the prediction is wrong, one
    only nullifies the mispredicted instructions and then inserts the
    alternate instructions while holding the join point instructions until
    the alternate instructions complete.

    Yes. Long ago I looked at some academic papers on hardware IF-conversion.
    Those papers were in the context of Itanium around 2005 or so,
    automatically converting short forward branches into predication.

    There were also papers that looked at HW converting predication
    back into short branches because they tie down less resources.

    IIRC they were looking at interactions between predication,
    aka Guarded Execution, and branch predictors, and how IF-conversion
    affects the branch predictor stats.

    But voiding doesn't look like it works for exceptions or conflicting
    interrupt priority adjustments. In those cases purging the interrupt
    handler and rejecting the hand-off looks like the only option.

    Nullify instructions from the mispredicted paths. On hand-off to the
    ISR, adjust the recovery IP to just past the last instruction that
    executed properly, nullifying everything between the exception and
    the ISR.

    Yes, that seems the most straight forward way to do it.
    But to nullify *some* of the in-flight instructions and not others,
    just the ones in the mispredicted shadow, in the middle of a stream
    of other instructions, seems to require much of the logic necessary
    to support general OoO predication/guarded-execution.

    Branch mispredict could use two mechanisms, one using checkpoint
    and rollback for a normal branch mispredict which recovers resources immediately in one clock, and another if there is a pipelined interrupt
    already appended which defers resource recovery to retire.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From EricP@21:1/5 to All on Thu Jan 2 14:14:50 2025
    MitchAlsup1 wrote:
    On Tue, 31 Dec 2024 2:02:05 +0000, Paul A. Clayton wrote:
    On 12/25/24 1:30 PM, MitchAlsup1 wrote:

    Sooner or later an ISR has to actually deal with the MMI/O
    control registers associated with the <ahem> interrupt.

    Yes, but multithreading could hide some of those latencies in
    terms of throughput.

    EricP is the master proponent of finishing the instructions in the
    execution window that are finishable. I, merely, have no problem
    in allowing the pipe to complete or take a flush based on the kind
    of pipeline being engineered.

    With 300-odd instructions in the window this thesis has merit,
    with a 5-stage pipeline 1-wide, it does not have merit but is
    not devoid of merit either.

    It is also possible that the speculation barriers I describe below
    will limit the benefits that pipelining exceptions and interrupts
    might be able to see.

    The issue is that both exception handlers and interrupts usually read
    and write Privileged Control Registers (PCR) and/or MMIO device
    registers very early in the handler. Most MMIO device registers and
    CPU PCRs cannot be speculatively read, as that may cause a state
    transition. Of course, stores are never speculated and can only be
    initiated at commit/retire.

    The normal memory coherence rules assume that loads are to memory-like locations that do not state transition on reads and that therefore
    memory loads can be harmlessly replayed if needed.
    While memory stores are not performed speculatively, an implementation
    might speculatively prefetch a cache line as soon as a store is queued
    and cause cache lines to ping-pong.

    But loads to many MMIO devices and PCRs effectively require a
    speculation barrier in front of them to prevent replays.

    A SPCB Speculation Barrier instruction could block speculation.
    It stalls execution until all older conditional branches are resolved and
    all older instructions that might throw an exception have determined
    they won't do so.

    The core could have an internal lookup table telling it which PCR can be
    read speculatively because there are no side effects to doing so.
    Those PCR would not require an SPCB to guard them.

    For MMIO device registers I think having an explicit SPCB instruction
    might be better than putting a "no-speculate" flag on the PTE for the
    device register address as that flag would be difficult to propagate
    backwards from address translate to all the parts of the core that
    we might have to sync with.
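
    To make the intended use concrete, a C sketch of a guarded MMIO read;
    spcb() is a stand-in for the hypothetical SPCB instruction (the empty
    asm is only a compiler barrier and does not provide the hardware
    guarantee described):

        #include <stdint.h>

        /* Placeholder for the proposed SPCB instruction: stall until all
           older branches are resolved and all older instructions are
           known not to fault. The empty asm below is only a compiler
           barrier; it does NOT provide that hardware guarantee. */
        static inline void spcb(void)
        {
            __asm__ volatile("" ::: "memory");
        }

        /* A read-to-clear device status register must never be replayed,
           so it may issue only after speculation is settled. */
        static inline uint32_t read_dev_status(volatile uint32_t *reg)
        {
            spcb();
            return *reg;
        }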

    This all means that there may be very little opportunity for speculative execution of their handlers, no matter how much hardware one tosses at them.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From MitchAlsup1@21:1/5 to EricP on Thu Jan 2 19:45:36 2025
    On Thu, 2 Jan 2025 19:14:50 +0000, EricP wrote:

    MitchAlsup1 wrote:
    On Tue, 31 Dec 2024 2:02:05 +0000, Paul A. Clayton wrote:
    On 12/25/24 1:30 PM, MitchAlsup1 wrote:

    Sooner or later an ISR has to actually deal with the MMI/O
    control registers associated with the <ahem> interrupt.

    Yes, but multithreading could hide some of those latencies in
    terms of throughput.

    EricP is the master proponent of finishing the instructions in the
    execution window that are finishable. I, merely, have no problem
    in allowing the pipe to complete or take a flush based on the kind
    of pipeline being engineered.

    With 300-odd instructions in the window this thesis has merit,
    with a 5-stage pipeline 1-wide, it does not have merit but is
    not devoid of merit either.

    It is also possible that the speculation barriers I describe below
    will limit the benefits that pipelining exceptions and interrupts
    might be able to see.

    The issue is that both exception handlers and interrupts usually read
    and write Privileged Control Registers (PCR) and/or MMIO device
    registers very early in the handler. Most MMIO device registers and
    CPU PCRs cannot be speculatively read, as that may cause a state
    transition. Of course, stores are never speculated and can only be
    initiated at commit/retire.

    This becomes a question of "who knows what when".

    At the point of interrupt recognition (it has been raised, and I am
    going to take that interrupt) the pipeline has instructions retiring
    from the execution window, and instructions being performed, and
    instructions waiting for "things to happen".

    After interrupt recognition, you are inserting instructions into the
    execution window--but these are not speculative--they are known to
    not be under any speculation--they WILL execute to completion,
    regardless of whether speculative instructions from before recognition
    are performed or flushed. This property holds until the ISR performs
    a predicted branch.

    So, it is possible to stream right onto an ISR--but few pipelines do.

    The normal memory coherence rules assume that loads are to memory-like locations that do not state transition on reads and that therefore
    memory loads can be harmlessly replayed if needed.
    While memory stores are not performed speculatively, an implementation
    might speculatively prefetch a cache line as soon as a store is queued
    and cause cache lines to ping-pong.

    But loads to many MMIO devices and PCRs effectively require a
    speculation barrier in front of them to prevent replays.

    My 66000 architecture specifies that accesses to MMI/O space are
    performed as if the core were performing memory references
    sequentially consistently, obviating the need for an SPCB
    instruction there.

    There is only 1 instruction used to read/write control registers. It
    reads the operand registers and the control register at the beginning
    of execution, but does not write the control register until
    retirement, obviating the need for an SPCB instruction there.

    Also note: core[i] can access core[j] control registers, but this access
    takes place in MMI/O space (and is sequentially consistent).

    A SPCB Speculation Barrier instruction could block speculation.
    It stalls execution until all older conditional branches are resolved
    and
    all older instructions that might throw an exception have determined
    they won't do so.

    The core could have an internal lookup table telling it which PCR can be
    read speculatively because there are no side effects to doing so.
    Those PCR would not require an SPCB to guard them.

    For MMIO device registers I think having an explicit SPCB instruction
    might be better than putting a "no-speculate" flag on the PTE for the
    device register address as that flag would be difficult to propagate backwards from address translate to all the parts of the core that
    we might have to sync with.

    I am curious. Is "unCacheable and MMI/O space" insufficient to figure
    out "Hey, it's non-speculative" too ??

    This all means that there may be very little opportunity for speculative execution of their handlers, no matter how much hardware one tosses at
    them.

    Good point, often unseen or unstated.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to mitchalsup@aol.com on Fri Dec 27 16:38:21 2024
    mitchalsup@aol.com (MitchAlsup1) writes:
    On Wed, 25 Dec 2024 19:10:09 +0000, Scott Lurndal wrote:

    mitchalsup@aol.com (MitchAlsup1) writes:
    On Wed, 25 Dec 2024 17:50:12 +0000, Paul A. Clayton wrote:

    On 10/5/24 11:11 AM, EricP wrote:
    MitchAlsup1 wrote:
    [snip]
    --------------------------

    But voiding doesn't look like it works for exceptions or conflicting
    interrupt priority adjustments. In those cases purging the interrupt
    handler and rejecting the hand-off looks like the only option.

    Should exceptions always have priority? It seems to me that if a
    thread is low enough priority to be interrupted, it is low enough
    priority to have its exception processing interrupted/delayed.

    It depends on what you mean::

    a) if you mean that exceptions are prioritized and the highest
    priority exception is the one taken, then OK you are working
    in an ISA that has multiple exceptions per instruction. Most
    RISC ISAs do not have this property.

    AArch64 has 44 different synchronous exception priorities, and within
    each priority that describes more than one exception, there
    is a sub-prioritization therein. (Section D 1.3.5.5 pp 6080 in
    DDI0487K_a).

    Thanks for the link::

    However, I would claim that the vast majority of those 44 things
    are interrupts and not exceptions (in colloquial nomenclature).

    I think that nomenclature is often processor specific. Anything
    that occurs synchronously during instruction execution as a result
    of executing that particular instruction is considered an exception
    in AArch64. Many of them are traps to higher exception levels
    for various reasons (including hypervisor traps) which can occur
    potentially with other exceptions such as TLB faults, etc.

    Interrupts, in the ARM sense, are _always_ asynchronous, and more
    specifically refer to the two signals IRQ and FIQ that the Generic
    Interrupt Controller uses to inform a processing thread that it
    needs to handle an I/O interrupt.

    In AArch64, they all vector through the same per-exception-level
    (kernel, hypervisor, secure monitor, realm) vector table.


    An exception is raised if an instruction cannot execute to completion
    and is raised synchronously with the instruction stream (and at a
    precise point in the instruction stream).

    That description accurately describes all of the 44 conditions
    above - the section is entitled, after all, "SYNCHRONOUS exception
    priorities". Interrupts are by definition asynchronous in the
    AArch64 architecture.


    An interrupt is raised asynchronous to the instruction stream.

    Reset is an interrupt and not an exception.

    I would argue that reset is a condition and is in this list
    as such - sometimes it is synchronous (a result of executing
    a special instruction or store to a system register), sometimes
    it is asynchronous (via the chipset/SoC). The fact that reset
    has the highest priority is noted here specifically.


    Debug that hits an address range is closer to an interrupt than an
    exception. <but I digress>

    It is still synchronous to instruction execution.


    But it appears that ARM has many interrupts classified as exceptions.
    Anything not generated from instructions within the architectural
    instruction stream is an interrupt, and anything generated from
    within an architectural instruction stream is an exception.

    That's your definition. It certainly doesn't apply to AArch64
    (or the Burroughs mainframes, for that matter).


    It also appears ARM uses priority to sort exceptions into an order,
    while most architectures define priority as a mechanism to choose
    when to take hard-control-flow-events rather than what.

    They desire determinism for the software.



    It seems unlikely that a translation fault in user mode would need
    handling in both the guest OS and the hypervisor during the
    execution of an instruction;

    Neither stated nor inferred. A PageFault is handled singularly by
    the level in the system that controls (writes) those PTEs.

    Indeed. And the guest OS owns the PTEs (TTEs) for the guest
    user process, and the hypervisor owns the PTEs for the guest
    "physical address space view". This is true for ARM, Intel
    and AMD.


    There is a significant period of time in many architectures after
    control arrives at the ISR where the ISR is not allowed to raise a
    page fault {storing registers to a stack}, and since this ISR
    might be the PageFault handler, it is not in a position to
    handle its own faults. However, HyperVisor can handle GuestOS
    PageFaults--GuestOS thinks the pages are present with reasonable
    access rights, HyperVisor tables are used to swap them in/out.
    Other than latency the GuestOS ISR does not see the PageFault.

    I've written two hypervisors (one on x86, long before hardware
    assist, in 1998, and one using AMD SVM and NPT in the mid 2000s).
    There is a very clean delineation between the guest physical address
    space view from the guest and guest applications, and the host
    physical address space apportioned out to the various guest OS' by
    the hypervisor. In some cases the hypervisor cannot even
    peek into the guest physical address space. They are distinct
    and independent (sans paravirtualization).


    My 66000, on the other hand: when the ISR receives control, state
    has been saved on a stack, the instruction stream is already
    re-entrant, and the register file is as it was the last time
    this ISR ran.

    The AArch64 exception entry (for both interrupts and exceptions)
    is identical and only a few cycles. The exception routine
    (ISR in your nomenclature) can decide for itself what state
    to preserve (the processor state and return address are saved
    in special per-exception-level system registers automatically
    during exception entry and restored by exception return (eret
    instruction)).


    the
    exception to the hypervisor would generally occur when the
    instruction trapped by the guest (who updated the guest translation
    tables) is restarted.

    Other exception causes (such as asynchronous exceptions
    like interrupts)

    Asynchronous exceptions A R E interrupts, not like interrupts;
    they ARE interrupts. If it is not synchronous with instruction
    stream it is an interrupt. Only if it is synchronous with the
    instruction stream is it an exception.

    Your interrupt terminology differs from the ARM version. An
    interrupt is considered an asynchronous exception (of which
    there are three - IRQ, FIQ and SError[*]). Both synchronous
    exceptions and asynchronous exceptions use the
    same vector table (indexed by exception level (privilege)),
    and the ESR_ELx (Exception Syndrome Register) has a 6-bit
    exception code that the exception routine uses to vector
    to the appropriate handler. Data and Instruction abort
    (translation fault) exception codes identify a translation
    fault that occurred in the lesser privilege (e.g. user mode
    trapping to kernel, or a guest page fault trapping to the
    hypervisor).

    [*] Asynchronous system error (e.g. a posted store that subsequently
    failed downstream).


    would remain pending and be taken (subject
    to priority and control enables) when the instruction is
    restarted (or the next instruction is dispatched for asynchronous
    exceptions).


    <snip>

    Being able to overlap latency of a memory-mapped I/O access (or
    other slow access) with execution of another thread seems

    That depends on whether the access is posted or non-posted.

    Writes can be posted, reads cannot. Reads must complete for the
    ISR to be able to set up the control block softIRQ/DPC will
    process shortly. Only after the data structure for softIRQ/DPC
    is written can the ISR allow control flow to leave.

    As I said, it depends on whether it is posted or not. A store
    to trigger a doorbell that starts processing a ring of
    DMA instructions, for example, has no latency. And the DMA
    is all initiated by the endpoint device, not the OS.
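
    A minimal C sketch of such a posted doorbell write (the BAR pointer
    and register offset are invented):

        #include <stdint.h>

        #define DOORBELL_OFF 0x40u   /* hypothetical register offset */

        /* A posted MMIO write is fire-and-forget: the CPU does not wait
           for completion, and the device then fetches its DMA ring on
           its own. */
        static inline void ring_doorbell(volatile uint8_t *bar0, uint32_t tail)
        {
            *(volatile uint32_t *)(bar0 + DOORBELL_OFF) = tail;
        }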

    All that said, this isn't 1993 PCI; modern chipset and PCIe
    latencies are less than they used to be, especially on
    SoCs where you don't have SERDES overhead.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From jseigh@21:1/5 to Anton Ertl on Fri Dec 27 11:16:47 2024
    On 10/3/24 10:00, Anton Ertl wrote:
    Two weeks ago Rene Mueller presented the paper "The Cost of Profiling
    in the HotSpot Virtual Machine" at MPLR 2024. He reported that for
    some programs the counters used for profiling the program result in
    cache contention due to true or false sharing among threads.

    The traditional software mitigation for that problem is to split the
    counters into per-thread or per-core instances. But for heavily multi-threaded programs running on machines with many cores the cost
    of this mitigation is substantial.


    For profiling, do we really need accurate counters? They just need to
    be statistically accurate I would think.

    Instead of incrementing a counter, just store a non-zero immediate into
    a zero-initialized byte array at a per-"counter" index. There's no
    rmw data dependency, just a store, so it should have little impact on
    the pipeline.

    A profiling thread loops thru the byte array, incrementing an actual
    counter when it sees a non-zero byte, and resets the byte to zero. You
    could use vector ops to process the array.
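
    A minimal C sketch of that scheme, assuming a single sweeper thread
    (names and sizes are invented):

        #include <stddef.h>
        #include <stdint.h>

        #define NCOUNTERS 65536

        static volatile uint8_t hit[NCOUNTERS]; /* zero-initialized flags */
        static uint64_t count[NCOUNTERS];       /* owned by the profiler thread */

        /* Hot path: a plain store, no read-modify-write. */
        static inline void mark(size_t i)
        {
            hit[i] = 1;
        }

        /* Profiling thread: fold set bytes into the real counters and
           reset them; the scan could use vector compares. */
        static void sweep(void)
        {
            for (size_t i = 0; i < NCOUNTERS; i++) {
                if (hit[i]) {
                    hit[i] = 0;
                    count[i]++;
                }
            }
        }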

    If the stores were fast enough, you could do 2 or more stores at
    hashed indices, different hash for each store. Sort of a counting
    Bloom filter. The effective count would be the minimum of the
    hashed counts.

    No idea how feasible this would be though.

    Joe Seigh

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to EricP on Fri Jan 3 17:24:33 2025
    EricP <ThatWouldBeTelling@thevillage.com> writes:
    MitchAlsup1 wrote:
    On Tue, 31 Dec 2024 2:02:05 +0000, Paul A. Clayton wrote:
    On 12/25/24 1:30 PM, MitchAlsup1 wrote:

    Sooner or later an ISR has to actually deal with the MMI/O
    control registers associated with the <ahem> interrupt.

    Yes, but multithreading could hide some of those latencies in
    terms of throughput.

    EricP is the master proponent of finishing the instructions in the
    execution window that are finishable. I, merely, have no problem
    in allowing the pipe to complete or take a flush based on the kind
    of pipeline being engineered.

    With 300-odd instructions in the window this thesis has merit,
    with a 5-stage pipeline 1-wide, it does not have merit but is
    not devoid of merit either.

    It is also possible that the speculation barriers I describe below
    will limit the benefits that pipelining exceptions and interrupts
    might be able to see.

    The issue is that both exception handlers and interrupts usually read and
    write Privileged Control Registers (PCR) and/or MMIO device registers very
    early in the handler. Most MMIO device registers and CPU PCRs cannot be
    speculatively read, as that may cause a state transition.
    Of course, stores are never speculated and can only be initiated
    at commit/retire.

    The normal memory coherence rules assume that loads are to memory-like
    locations that do not state transition on reads and that therefore
    memory loads can be harmlessly replayed if needed.
    While memory stores are not performed speculatively, an implementation
    might speculatively prefetch a cache line as soon as a store is queued
    and cause cache lines to ping-pong.

    But loads to many MMIO devices and PCRs effectively require a
    speculation barrier in front of them to prevent replays.

    A SPCB Speculation Barrier instruction could block speculation.
    It stalls execution until all older conditional branches are resolved and
    all older instructions that might throw an exception have determined
    they won't do so.

    The core could have an internal lookup table telling it which PCR can be
    read speculatively because there are no side effects to doing so.
    Those PCR would not require an SPCB to guard them.

    For MMIO device registers I think having an explicit SPCB instruction
    might be better than putting a "no-speculate" flag on the PTE for the
    device register address as that flag would be difficult to propagate
    backwards from address translate to all the parts of the core that
    we might have to sync with.

    MMIO accesses are, by definition, non-cacheable, which is typically
    designated in either a translation table entry or associated
    attribute registers (MTRR, MAIR). Non-cacheable accesses
    are not speculatively executed, which provides the
    correct semantics for device registers which have side effects
    on read accesses.

    Granted the granularity of that attribute is usually a translation unit
    (page) size.


    This all means that there may be very little opportunity for speculative
    execution of their handlers, no matter how much hardware one tosses at them.

    That's true. ARM goes to some lengths to ensure that the access
    to the system register (ICC_IARx_EL1) that contains the current pending interrupt
    number for a given hardware thread/core is synchronized appropriately.

    "To allow software to ensure appropriate observability of actions
    initiated by GIC register accesses, the PE and CPU interface logic
    must ensure that reads of this register are self-synchronising when
    interrupts are masked by the PE (that is when PSTATE.{I,F} == {0,0}).
    This ensures that the effect of activating an interrupt on the signaling
    of interrupt exceptions is observed when a read of this register is
    architecturally executed so that no spurious interrupt exception
    occurs if interrupts are unmasked by an instruction immediately
    following the read."

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)