• Call by reference protection

    From Don Y@blockedofcourse@foo.invalid to sci.electronics.design on Thu Feb 19 15:04:09 2026
    From Newsgroup: sci.electronics.design

    [Obdisclaimer: cc'ing s.e.d only because some of you are no longer
    subscribed to the list and will likely not see this, otherwise.
    And, it's a substantial change in the API so worth noting.]

    Using similar mechanisms to those that I use in call-by-value RMIs,
    I can protect against races for call-by-reference -- throwing an
    exception or just spinning on any violations on the calling side.

    Or, I can just let people rely on their own discipline to
    ensure they don't introduce latent bugs via this mechanism
    (resorting to call by value universally seems a bad idea
    for legacy coders). As these types of races have typically
    been hard to test for, I suspect it is worth the effort.

    Any pointers to languages or IDLs that include such qualifying
    adjectives?
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Martin Brown@'''newspam'''@nonad.co.uk to sci.electronics.design on Fri Feb 20 10:36:13 2026
    From Newsgroup: sci.electronics.design

    On 19/02/2026 22:04, Don Y wrote:
    [Obdisclaimer: cc'ing s.e.d only because some of you are no longer subscribed to the list and will likely not see this, otherwise.
    And, it's a substantial change in the API so worth noting.]

    Using similar mechanisms to those that I use in call-by-value RMIs,
    I can protect against races for call-by-reference -- throwing an
    exception or just spinning on any violations on the calling side.

    Or, I can just let people rely on their own discipline to
    ensure they don't introduce latent bugs via this mechanism
    (resorting to call by value universally seems a bad idea
    for legacy coders). As these types of races have typically
    been hard to test for, I suspect it is worth the effort.

    Any pointers to languages or IDLs that include such qualifying
    adjectives?

    Languages that allow call by reference to be qualified with a const or readonly directive so that the routine reading the original object (no
    copy made) is not allowed to alter it in any way.

    Detectable as a compile time fault if you do. Relying on all coders to
    be disciplined is likely to be ahem... disappointing.
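    A minimal C++ sketch of the point (function names are illustrative): a
    const reference grants read access with no copy, and the compiler rejects
    any attempt to mutate through it.

    ```cpp
    #include <vector>

    // Read-only access: no copy is made, and any attempt to modify 'v'
    // inside this function is rejected at compile time.
    long sum(const std::vector<long>& v) {
        long total = 0;
        for (long x : v) total += x;
        // v.push_back(0);  // would not compile: v is const-qualified
        return total;
    }

    // By contrast, a non-const reference advertises the intent to mutate.
    void scale(std::vector<long>& v, long k) {
        for (long& x : v) x *= k;
    }
    ```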

    I can't be the only one to have seen shops where the journeymen are so unskilled that getting C code to compile by the random application of
    casts is the norm. Not written in C, but the UK's scandalous Horizon PO accounting system was written by people of that calibre (thickness).

    They compounded the problem by having expert witnesses perjure
    themselves to convict entirely innocent postmasters of fraud because the computer was "infallible". The resulting mess is still ongoing.
    --
    Martin Brown

    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From john larkin@jl@glen--canyon.com to sci.electronics.design on Fri Feb 20 09:04:01 2026
    From Newsgroup: sci.electronics.design

    On Fri, 20 Feb 2026 10:36:13 +0000, Martin Brown
    <'''newspam'''@nonad.co.uk> wrote:

    On 19/02/2026 22:04, Don Y wrote:
    [Obdisclaimer: cc'ing s.e.d only because some of you are no longer
    subscribed to the list and will likely not see this, otherwise.
    And, its a substantial change in the API so worth noting.]

    Using similar mechanisms to those that I use in call-by-value RMIs,
    I can protect against races for call-by-reference -- throwing an
    exception or just spinning on any violations on the calling side.

    Or, I can just let people rely on their own discipline to
    ensure they don't introduce latent bugs via this mechanism
    (resorting to call by value universally seems a bad idea
    for legacy coders). As these types of races have typically
    been hard to test for, I suspect it is worth the effort.

    Any pointers to languages or IDLs that include such qualifying
    adjectives?

    Languages that allow call by reference to be qualified with a const or readonly directive so that the routine reading the original object (no
    copy made) is not allowed to alter it in any way.

    Detectable as a compile time fault if you do. Relying on all coders to
    be disciplined is likely to be ahem... disappointing.

    I can't be the only one to have seen shops where the journeymen are so unskilled that getting C code to compile by the random application of
    casts is the norm. Not written in C but the UK scandalous Horizon PO accounting system was written by people of that calibre (thickness).

    They compounded the problem by having expert witnesses perjure
    themselves to convict entirely innocent postmasters of fraud because the computer was "infallible". The resulting mess is still ongoing.

    Procedural coding with mostly punctuation marks is the mess. That is
    going to change.

    And lots of CE grads will be looking for any job they can find to pay
    the rent. I meet lots of them already.



    AI Overview

    Approximately
    17,000 to 19,000 students graduate with a degree in computer
    engineering in the US annually. When including the broader category of
    computer science, over 100,000 students graduate with computer-related
    degrees each year. The number of computer engineering degrees has
    shown growth, with about 18,973 awarded in 2023.

    Computer Engineering (Specific): Roughly 16,954 to 18,973 degrees
    awarded annually.

    Computer Science & Related: Over 100,000, with 112,720 bachelor's
    degrees in computer and information sciences in 2022-2023.

    Top Locations: California, Texas, and New York produce the highest
    number of graduates.

    Growth Trend: The number of graduates in computer-related fields
    has more than doubled over the last decade.


    John Larkin
    Highland Tech Glen Canyon Design Center
    Lunatic Fringe Electronics
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Don Y@blockedofcourse@foo.invalid to sci.electronics.design on Fri Feb 20 10:47:43 2026
    From Newsgroup: sci.electronics.design

    On 2/20/2026 3:36 AM, Martin Brown wrote:
    On 19/02/2026 22:04, Don Y wrote:
    Using similar mechanisms to those that I use in call-by-value RMIs,
    I can protect against races for call-by-reference -- throwing an
    exception or just spinning on any violations on the calling side.

    Or, I can just let people rely on their own discipline to
    ensure they don't introduce latent bugs via this mechanism
    (resorting to call by value universally seems a bad idea
    for legacy coders). As these types of races have typically
    been hard to test for, I suspect it is worth the effort.

    Any pointers to languages or IDLs that include such qualifying
    adjectives?

    Languages that allow call by reference to be qualified with a const or readonly
    directive so that the routine reading the original object (no copy made) is not
    allowed to alter it in any way.

    That's a different problem.

    A mutable object *intended* to be manipulated in the called scope
    is EXPECTED to be altered by that invoked function.

    Less obviously, another executing thread is likely NOT expected to
    alter that object while the invoked function is executing!

    While C doesn't really have call-by-reference, the problem can
    be illustrated using pointers instead of references. Consider:

    object_t anObject;

    // initialize anObject somehow
    ...

    // act on anObject through a pointer to it
    operator(&anObject, ...);

    // reference the expected changes in anObject in some way
    ...

    In a single threaded, single processor environment, one KNOWS that
    nothing is dicking with anObject while operator() is running -- because operator() has exclusive use of the processor.

    Consequently, after operator() completes, one knows that anObject reflects
    the operation performed by operator().

    Add a second thread (or, second process having access to anObject).

    Now, there is the possibility that the other actor can alter anObject while operator() is executing -- likely without expecting such an interaction.
    And, after operator() has concluded, the next line of code can't assume
    that anObject reflects operator()'s actions.

    [This can be avoided by using call-by-copy-restore but that just
    ensures "the next line of code" works and does nothing for the
    other problems]

    The biggest exposure is likely from another thread in the same
    process container acting on anObject alongside the thread that
    is executing the above code. Adding explicit locks can avoid
    this (at a cost and another level of discipline). A better approach
    is to structure the code so this "doesn't (but CAN!) happen"

    [Almost every piece of code in my system is a service or an agency.
    As such, they all try to be N copies of the same algorithm running
    on N different instances of objects of a particular type. Easy
    if you *design* for that case; tedious if you adopt /ad hoc/
    methods!]
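    The explicit-lock remedy mentioned above can be sketched in C++ (the
    names and the counter payload are illustrative, not from the system being
    described): the lock makes the read-modify-write in operator() appear
    atomic to the second thread, restoring the "single thread" expectation at
    the cost of that extra discipline.

    ```cpp
    #include <mutex>
    #include <thread>

    struct object_t {
        int value = 0;
    };

    // Explicit lock guarding anObject -- the "cost and another level
    // of discipline" referred to above.
    std::mutex obj_lock;

    // The invoked function mutates the object under the lock, so a
    // second thread cannot alter it mid-operation.
    void operator_fn(object_t* obj, int delta) {
        std::lock_guard<std::mutex> g(obj_lock);
        obj->value += delta;
    }

    int run_two_threads() {
        object_t anObject;
        std::thread t1([&] { for (int i = 0; i < 10000; ++i) operator_fn(&anObject, 1); });
        std::thread t2([&] { for (int i = 0; i < 10000; ++i) operator_fn(&anObject, 1); });
        t1.join();
        t2.join();
        // With the lock the result is deterministic; without it the
        // increments race and the total becomes unpredictable.
        return anObject.value;
    }
    ```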

    When operator() is an IPC/RPC/RMI, you have another can of worms,
    as the window of vulnerability expands due to the extra overhead of
    invocation along with those "external" actors (multiple processors
    instead of just multiple threads/processes).

    [Given the server/agent nature, all interactions will be non-local]

    I use CoW to implement call-by-value semantics on objects that
    would typically be passed as call-by-reference. E.g., imagine
    anObject is a single frame of video and operator() is going to
    apply a masking function to it, eliding all but the "important"
    parts of the frame for return to the caller.

    The instance of anObject shared between caller and callee can
    then be isolated from other actors. You can emulate a "single
    processor, single thread" environment with all of those implied
    expectations.

    If I then combine this with call-by-value-restore (for the
    call-by-reference case), then anObject accurately reflects
    the actions performed by operator() regardless of what other
    competing actions may have transpired while operator() was
    executing; those other actions occurred on a different instance
    of anObject.

    But, the cost to do so is high.

    And, more significantly, I wonder if it makes bugs MORE likely
    because "anObject" isn't *anObject* any longer. If buggy code
    expected it to be so... <frown>
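    One way the CoW-plus-restore combination could look in C++ (this is a
    toy sketch built on std::shared_ptr, not the actual implementation being
    described; Frame, mask_frame and the pixel payload are all invented for
    illustration): readers share one buffer, a writer detaches its own copy
    on first write, and the caller's view is then taken from the result.

    ```cpp
    #include <memory>
    #include <vector>

    // A toy CoW "frame": readers share one buffer; a writer clones it
    // first, so concurrent readers never see a mutation in progress.
    struct Frame {
        std::shared_ptr<std::vector<int>> pixels;

        explicit Frame(std::vector<int> p)
            : pixels(std::make_shared<std::vector<int>>(std::move(p))) {}

        // Detach before writing if the buffer is shared (the "copy" in CoW).
        std::vector<int>& mutable_pixels() {
            if (pixels.use_count() > 1)
                pixels = std::make_shared<std::vector<int>>(*pixels);
            return *pixels;
        }
    };

    // operator() works on its own logical copy; the caller then adopts
    // the returned frame, as in call-by-copy-restore.
    Frame mask_frame(Frame f, int threshold) {
        for (int& px : f.mutable_pixels())
            if (px < threshold) px = 0;   // elide "unimportant" pixels
        return f;
    }
    ```

    Note the cost alluded to above: the clone in mutable_pixels() is a full
    copy of the buffer, paid on every masked frame whose buffer is shared.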

    Detectable as a compile time fault if you do. Relying on all coders to be disciplined is likely to be ahem... disappointing.

    I can't be the only one to have seen shops where the journeymen are so unskilled that getting C code to compile by the random application of casts is
    the norm. Not written in C but the UK scandalous Horizon PO accounting system
    was written by people of that calibre (thickness).

    Most programming problems that I see are the result of creating bad models
    of the problem being solved. Then, having to "bugger" what should have been
    a clean, straightforward implementation to bend the model to the reality. Anyone can code -- especially as you can SEE if your code "appears" to work without having to make a big investment in time or material/treasure.

    But, moving from a single, self-contained application to an interactive service/agent seems to be a big leap; considerably harder than more mundane issues like multitasking (despite, IMO, being infinitely easier!).

    I'm trying to spend (waste?) hardware resources to improve the quality
    of the codebase -- especially for folks who likely have little/no experience developing such applications. In the real world, how many folks have
    written web servers or other similar multi-client services, etc.? Then, imagine the subset of those who have written agents!

    They compounded the problem by having expert witnesses perjure themselves to convict entirely innocent postmasters of fraud because the computer was "infallible". The resulting mess is still ongoing.


    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Don Y@blockedofcourse@foo.invalid to sci.electronics.design on Fri Feb 20 11:32:09 2026
    From Newsgroup: sci.electronics.design

    On 2/20/2026 10:47 AM, Don Y wrote:
    On 2/20/2026 3:36 AM, Martin Brown wrote:

    I can't be the only one to have seen shops where the journeymen are so
    unskilled that getting C code to compile by the random application of casts is the norm. Not written in C but the UK scandalous Horizon PO accounting system was written by people of that calibre (thickness).

    Most programming problems that I see are the result of creating bad models
    of the problem being solved. Then, having to "bugger" what should have been a clean, straightforward implementation to bend the model to the reality. Anyone can code -- especially as you can SEE if your code "appears" to work without having to make a big investment in time or material/treasure.

    But, moving from a single, self-contained application to an interactive service/agent seems to be a big leap; considerably harder than more mundane issues like multitasking (despite, IMO, being infinitely easier!).

    I'm trying to spend (waste?) hardware resources to improve the quality
    of the codebase -- especially for folks who likely have little/no experience developing such applications. In the real world, how many folks have written web servers or other similar multi-client services, etc.? Then, imagine the subset of those who have written agents!

    By way of example, ask a "programmer" how he'd design a file service.
    He'd likely see it as "get request, open file". How many details
    can you see missing in such a naive approach? Any thought for
    performance and how the design would impact that? Any thought for how
    the design would burden the hardware (and other services running on it?)

    [This is actually an excellent "starter problem" for people learning how
    to write servers as you can implement it on damn near ANY "computer" (even those without network interfaces). Having their ego bruised, thusly, they
    MAY be more careful in thinking about how to convert that into an agency!]

    There's a big step between single-session interfaces and providing
    performant services.
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From bitrex@user@example.net to sci.electronics.design on Fri Feb 20 16:21:38 2026
    From Newsgroup: sci.electronics.design

    On 2/20/2026 12:47 PM, Don Y wrote:
    On 2/20/2026 3:36 AM, Martin Brown wrote:
    On 19/02/2026 22:04, Don Y wrote:
    Using similar mechanisms to those that I use in call-by-value RMIs,
    I can protect against races for call-by-reference -- throwing an
    exception or just spinning on any violations on the calling side.

    Or, I can just let people rely on their own discipline to
    ensure they don't introduce latent bugs via this mechanism
    (resorting to call by value universally seems a bad idea
    for legacy coders). As these types of races have typically
    been hard to test for, I suspect it is worth the effort.

    Any pointers to languages or IDLs that include such qualifying
    adjectives?

    Languages that allow call by reference to be qualified with a const or
    readonly directive so that the routine reading the original object (no
    copy made) is not allowed to alter it in any way.

    That's a different problem.

    A mutable object *intended* to be manipulated in the called scope
    is EXPECTED to be altered by that invoked function.

    Less obviously, another executing thread is likely NOT expected to
    alter that object while the invoked function is executing!

    While C doesn't really have call-by-reference, the problem can
    be illustrated using pointers instead of references. Consider:

    object_t anObject;

    // initialize anObject somehow
    ...

    // act on anObject through a pointer to it
    operator(&anObject, ...)

    // reference the expected changes in anObject in some way
    ...

    In a single threaded, single processor environment, one KNOWS that
    nothing is dicking with anObject while operator() is running -- because operator() has exclusive use of the processor.

    Consequently, after operator() completes, one knows that anObject reflects the operation performed by operator().

    Add a second thread (or, second process having access to anObject).

    Now, there is the possibility that the other actor can alter anObject while operator() is executing -- likely without expecting such an interaction.
    And, after operator() has concluded, the next line of code can't assume
    that anObject reflects operator()'s actions.

    [This can be avoided by using call-by-copy-restore but that just
    ensures "the next line of code" works and does nothing for the
    other problems]

    The biggest exposure is likely from another thread in the same
    process container acting on anObject alongside the thread that
    is executing the above code. Adding explicit locks can avoid
    this (at a cost and another level of discipline). A better approach
    is to structure the code so this "doesn't (but CAN!) happen"

    [Almost every piece of code in my system is a service or an agency.
    As such, they all try to be N copies of the same algorithm running
    on N different instances of objects of a particular type. Easy
    if you *design* for that case; tedious if you adopt /ad hoc/
    methods!]


    Yeah, having mutable state in a multithreaded embedded environment
    without big-iron tools to manage mutable state across threads (like
    std::mutex and std::weak_ptr) kind of sucks!!

    Even for single-threaded embedded stuff I like treating C++ more like a functional language and passing non-const references to anything very
    rarely; those relationships are hard to reason about.
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From bitrex@user@example.net to sci.electronics.design on Fri Feb 20 17:25:35 2026
    From Newsgroup: sci.electronics.design

    On 2/20/2026 12:47 PM, Don Y wrote:

    I use CoW to implement call-by-value semantics on objects that
    would typically be passed as call-by-reference. E.g., imagine
    anObject is a single frame of video and operator() is going to
    apply a masking function to it, eliding all but the "important"
    parts of the frame for return to the caller.
    Similar to how GPUs do it: if the operator() can perform its required calculations on a const reference to the data, and its output can be
    reduced to operations on individual pixel values of integral type which
    don't depend on the output value of neighboring pixels, std::atomic is
    for that situation, and should be lock-free on processors that have
    hardware support for atomics, e.g. ARM v6 and later.
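    A small sketch of that situation (the brighten/process names and the
    8-pixel buffer are invented for illustration): each output element
    depends only on its own input, so two threads can fill a shared
    std::atomic output buffer with no mutex. Whether the stores are actually
    lock-free is hardware-dependent, as noted; std::atomic<int> usually is.

    ```cpp
    #include <array>
    #include <atomic>
    #include <functional>
    #include <thread>

    // Shared output buffer of atomics: safe for concurrent element writes.
    std::array<std::atomic<int>, 8> out{};

    // Pure per-pixel transform: out[i] depends only on in[i], never on a
    // neighboring output value, so no ordering between threads is needed.
    void brighten(const std::array<int, 8>& in, int lo, int hi, int gain) {
        for (int i = lo; i < hi; ++i)
            out[i].store(in[i] * gain, std::memory_order_relaxed);
    }

    // Split the frame between two worker threads, GPU-style.
    void process(const std::array<int, 8>& in, int gain) {
        std::thread t1(brighten, std::cref(in), 0, 4, gain);
        std::thread t2(brighten, std::cref(in), 4, 8, gain);
        t1.join();
        t2.join();
    }
    ```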
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Don Y@blockedofcourse@foo.invalid to sci.electronics.design on Fri Feb 20 16:09:00 2026
    From Newsgroup: sci.electronics.design

    On 2/20/2026 2:21 PM, bitrex wrote:
    On 2/20/2026 12:47 PM, Don Y wrote:
    [Almost every piece of code in my system is a service or an agency.
    As such, they all try to be N copies of the same algorithm running
    on N different instances of objects of a particular type. Easy
    if you *design* for that case; tedious if you adopt /ad hoc/
    methods!]

    Yeah, having mutable state in a multithreaded embedded environment without big-iron tools to manage mutable state across threads (like std::mutex and std::weak_ptr) kind of sucks!!

    Even for single-threaded embedded stuff I like treating C++ more like a functional language and passing non-const references to anything very rarely,
    those relationships are hard to reason about.

    Something has to "do work" -- i.e., make changes.

    E.g., the example of a single frame of video needing to be masked
    can either be done by masking the original frame (thereby changing
    it in the process) *or* by masking a copy of the original frame.

    It's up to the goals of the algorithm as to which approach to pursue;
    if you don't need to preserve the original (unmasked) frame, then
    creating a copy of it for the sole purpose of treating it as const
    is wasteful.

    OTOH, creating a copy to ensure other actors' actions don't interfere
    with your processing (and the validity of your actions) *has* value
    (in that it leads to more predictable behavior).

    Nowadays, it's relatively easy to buy horsepower and other resources
    so the question boils down to how you use them.

    [My first "from scratch" commercial product had 12KB of ROM and 256
    bytes of RAM plus the I/Os (motor drivers, etc.). The cost of just
    the CPU board was well over $400 (when EPROM climbed to $50/2KB).
    Spending $20 on a single node is a yawner...]

    Decomposing a design into clients, services and agencies lets it
    dynamically map onto a variety of different hardware implementations
    and freely trade performance, power, size, latency, etc. as needed.
    E.g., each object instance could be backed by a single server
    instance -- or, all object instances can be backed by a single
    server instance -- or, any combination thereof. Each server can
    decide how much concurrency it wants to support (how many kernel
    threads to consume) as well as how responsive it wants to be (how
    much caching, preprocessing, etc. it uses to meet demands placed
    on it).
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Don Y@blockedofcourse@foo.invalid to sci.electronics.design on Fri Feb 20 16:16:44 2026
    From Newsgroup: sci.electronics.design

    On 2/20/2026 3:25 PM, bitrex wrote:
    On 2/20/2026 12:47 PM, Don Y wrote:

    I use CoW to implement call-by-value semantics on objects that
    would typically be passed as call-by-reference. E.g., imagine
    anObject is a single frame of video and operator() is going to
    apply a masking function to it, eliding all but the "important"
    parts of the frame for return to the caller.
    Similar to how GPUs do it: if the operator() can perform its required calculations
    on a const reference to the data, and its output can be reduced to operations on
    individual pixel values of integral type which don't depend on the output value
    of neighboring pixels, std::atomic is for that situation, and should be lock-free on processors that have hardware support for atomics, e.g. ARM v6 and later.

    That depends on the nature of the operation(s) being performed.

    If a service relies on another service to perform its desired
    operation, then you end up holding locks for long periods of time.
    Not the case for a GPU where it has lots of resources to devote
    to its very specific functionality.

    E.g., in the file server example, accepting a "request" and then
    invoking a synchronous call to the local file system ties up that
    thread for a long time as it now is blocked waiting for the
    file system to complete the requested task. (imagine if the
    requested file resided on a remote server or on slow media).

    To regain any sense of performance, you'd have to rely on using multiple (kernel, not POSIX) threads so another could be available to handle
    another request while the first is blocking. Lather, rinse, repeat.

    I.e., it is often better to make (or simulate) an asynchronous call
    to any other services required so you can release the thread for
    "other work" while that is being processed. This, of course, makes
    the design a bit more delicate as YOU now have to keep track of how
    many balls you are juggling.
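    A hedged sketch of the asynchronous pattern being described, using
    std::future as a stand-in for whatever completion mechanism the real
    service would use (slow_read and serve are invented names): the server
    issues each slow call without blocking on it, and the futures vector is
    exactly the bookkeeping of "how many balls you are juggling".

    ```cpp
    #include <future>
    #include <string>
    #include <vector>

    // Stand-in for the slow local file system call that would otherwise
    // pin a server thread for the duration of the I/O.
    std::string slow_read(const std::string& name) {
        return "contents-of-" + name;
    }

    // Issue every request asynchronously and keep a handle to each, so
    // the serving thread is free to accept the next request instead of
    // blocking inside the file system.
    std::vector<std::string> serve(const std::vector<std::string>& requests) {
        std::vector<std::future<std::string>> pending;
        for (const auto& r : requests)
            pending.push_back(std::async(std::launch::async, slow_read, r));

        std::vector<std::string> replies;
        for (auto& f : pending)
            replies.push_back(f.get());  // collect completions
        return replies;
    }
    ```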
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From bitrex@user@example.net to sci.electronics.design on Sat Feb 21 00:55:39 2026
    From Newsgroup: sci.electronics.design

    On 2/20/2026 6:09 PM, Don Y wrote:
    On 2/20/2026 2:21 PM, bitrex wrote:
    On 2/20/2026 12:47 PM, Don Y wrote:
    [Almost every piece of code in my system is a service or an agency.
    As such, they all try to be N copies of the same algorithm running
    on N different instances of objects of a particular type. Easy
    if you *design* for that case; tedious if you adopt /ad hoc/
    methods!]

    Yeah, having mutable state in a multithreaded embedded environment
    without big-iron tools to manage mutable state across threads (like
    std::mutex and std::weak_ptr) kind of sucks!!

    Even for single-threaded embedded stuff I like treating C++ more like
    a functional language and passing non-const references to anything
    very rarely, those relationships are hard to reason about.

    Something has to "do work" -- i.e., make changes.

    E.g., the example of a single frame of video needing to be masked
    can either be done by masking the original frame (thereby changing
    it in the process) *or* by masking a copy of the original frame.

    It's up to the goals of the algorithm as to which approach to pursue;
    if you don't need to preserve the original (unmasked) frame, then
    creating a copy of it for the sole purpose of treating it as const
    is wasteful.

    OTOH, creating a copy to ensure other actors' actions don't interfere
    with your processing (and the validity of your actions) *has* value
    (in that it leads to more predictable behavior).

    Nowadays, it's relatively easy to buy horsepower and other resources
    so the question boils down to how you use them.

    [My first "from scratch" commercial product had 12KB of ROM and 256
    bytes of RAM plus the I/Os (motor drivers, etc.). The cost of just
    the CPU board was well over $400 (when EPROM climbed to $50/2KB).
    Spending $20 on a single node is a yawner...]

    Decomposing a design into clients, services and agencies lets it
    dynamically map onto a variety of different hardware implementations
    and freely trade performance, power, size, latency, etc. as needed.
    E.g., each object instance could be backed by a single server
    instance -- or, all object instances can be backed by a single
    server instance -- or, any combination thereof. Each server can
    decide how much concurrency it wants to support (how many kernel
    threads to consume) as well as how responsive it wants to be (how
    much caching, preprocessing, etc. it uses to meet demands placed
    on it).

    It sounds like you're describing some very hard-realtime bare-metal
    system where you have the luxury of lots of memory to do very resource-intensive operations like full copies of video frames on the
    grounds of "predictability" (I think most embedded video processing on general-purpose CPUs would try very hard to avoid doing any full
    copies), but also can't afford the luxury of an MMU and/or an RTOS that supports some subset of POSIX so that you could use modern C++
    features like smart pointers and mutexes. Maybe that would add too much overhead.

    These are unusual requirements, to me anyway, I've done a decent amount
    of embedded programming over the years but IDK how much advice I can
    give here. For "big iron"-like tasks, like multi-thread processing of
    large amounts of data, having an MMU and an embedded OS makes life a lot easier.

    For simple devices like 8 bitters which are more used as "process
    controllers" rather than to perform hardcore calculations like working
    on video, I find cooperative multitasking among state machines works
    pretty well; what mutable state there is is mostly stored in the machine states.
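    The cooperative state-machine style can be sketched like this (Blinker,
    Debouncer and the tick counts are invented for illustration; a real
    8-bitter would likely be in C, but the structure is the same): a plain
    round-robin loop gives each machine one run-to-completion tick, and all
    mutable state lives inside the machines themselves.

    ```cpp
    // Two tiny state machines driven by a cooperative round-robin loop.
    struct Blinker {
        enum { OFF, ON } state = OFF;
        int toggles = 0;
        void tick() {
            state = (state == OFF) ? ON : OFF;
            ++toggles;
        }
    };

    struct Debouncer {
        int stable_count = 0;
        bool pressed = false;
        void tick(bool raw_sample) {
            stable_count = raw_sample ? stable_count + 1 : 0;
            if (stable_count >= 3) pressed = true;  // 3 consistent samples
        }
    };

    // The "scheduler": no preemption and no shared mutable state; each
    // machine finishes its tick before the next one runs.
    void scheduler_loop(Blinker& b, Debouncer& d, int iterations) {
        for (int i = 0; i < iterations; ++i) {
            b.tick();
            d.tick(true);
        }
    }
    ```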
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From bitrex@user@example.net to sci.electronics.design on Sat Feb 21 01:07:41 2026
    From Newsgroup: sci.electronics.design

    On 2/21/2026 12:55 AM, bitrex wrote:
    On 2/20/2026 6:09 PM, Don Y wrote:
    On 2/20/2026 2:21 PM, bitrex wrote:
    On 2/20/2026 12:47 PM, Don Y wrote:
    [Almost every piece of code in my system is a service or an agency.
    As such, they all try to be N copies of the same algorithm running
    on N different instances of objects of a particular type. Easy
    if you *design* for that case; tedious if you adopt /ad hoc/
    methods!]

    Yeah, having mutable state in a multithreaded embedded environment
    without big-iron tools to manage mutable state across threads (like
    std::mutex and std::weak_ptr) kind of sucks!!

    Even for single-threaded embedded stuff I like treating C++ more like
    a functional language and passing non-const references to anything
    very rarely, those relationships are hard to reason about.

    Something has to "do work" -- i.e., make changes.

    E.g., the example of a single frame of video needing to be masked
    can either be done by masking the original frame (thereby changing
    it in the process) *or* by masking a copy of the original frame.

    It's up to the goals of the algorithm as to which approach to pursue;
    if you don't need to preserve the original (unmasked) frame, then
    creating a copy of it for the sole purpose of treating it as const
    is wasteful.

    OTOH, creating a copy to ensure other actors' actions don't interfere
    with your processing (and the validity of your actions) *has* value
    (in that it leads to more predictable behavior).

    Nowadays, it's relatively easy to buy horsepower and other resources
    so the question boils down to how you use them.

    [My first "from scratch" commercial product had 12KB of ROM and 256
    bytes of RAM plus the I/Os (motor drivers, etc.). The cost of just
    the CPU board was well over $400 (when EPROM climbed to $50/2KB).
    Spending $20 on a single node is a yawner...]

    Decomposing a design into clients, services and agencies lets it
    dynamically map onto a variety of different hardware implementations
    and freely trade performance, power, size, latency, etc. as needed.
    E.g., each object instance could be backed by a single server
    instance -- or, all object instances can be backed by a single
    server instance -- or, any combination thereof. Each server can
    decide how much concurrency it wants to support (how many kernel
    threads to consume) as well as how responsive it wants to be (how
    much caching, preprocessing, etc. it uses to meet demands placed
    on it).

    It sounds like you're describing some very hard-realtime bare-metal
    system where you have the luxury of lots of memory to do very resource-intensive operations like full copies of video frames on the grounds of "predictability" (I think most embedded video processing on general-
    purpose CPUs would try very hard to avoid doing any full copies), but
    also can't afford the luxury of an MMU and/or an RTOS that supports some subset of POSIX so that you could use modern C++ features like smart pointers
    and mutexes. Maybe that would add too much overhead.

    Is this a high-frequency trading box? Are you building a salami-slicer

    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Don Y@blockedofcourse@foo.invalid to sci.electronics.design on Fri Feb 20 23:42:44 2026
    From Newsgroup: sci.electronics.design

    On 2/20/2026 10:55 PM, bitrex wrote:
    On 2/20/2026 6:09 PM, Don Y wrote:
    On 2/20/2026 2:21 PM, bitrex wrote:
    On 2/20/2026 12:47 PM, Don Y wrote:
    [Almost every piece of code in my system is a service or an agency.
    As such, they all try to be N copies of the same algorithm running
    on N different instances of objects of a particular type. Easy
    if you *design* for that case; tedious if you adopt /ad hoc/
    methods!]

    Yeah, having mutable state in a multithreaded embedded environment without
    big-iron tools to manage mutable state across threads (like std::mutex and
    std::weak_ptr) kind of sucks!!

    Even for single-threaded embedded stuff I like treating C++ more like a
    functional language and passing non-const references to anything very
    rarely, those relationships are hard to reason about.

    Something has to "do work" -- i.e., make changes.

    E.g., the example of a single frame of video needing to be masked
    can either be done by masking the original frame (thereby changing
    it in the process) *or* by masking a copy of the original frame.

    It's up to the goals of the algorithm as to which approach to pursue;
    if you don't need to preserve the original (unmasked) frame, then
    creating a copy of it for the sole purpose of treating it as const
    is wasteful.

    OTOH, creating a copy to ensure other actors' actions don't interfere
    with your processing (and the validity of your actions) *has* value
    (in that it leads to more predictable behavior).

    Nowadays, it's relatively easy to buy horsepower and other resources
    so the question boils down to how you use them.

    [My first "from scratch" commercial product had 12KB of ROM and 256
    bytes of RAM plus the I/Os (motor drivers, etc.). The cost of just
    the CPU board was well over $400 (when EPROM climbed to $50/2KB).
    Spending $20 on a single node is a yawner...]

    Decomposing a design into clients, services and agencies lets it
    dynamically map onto a variety of different hardware implementations
    and freely trade performance, power, size, latency, etc. as needed.
    E.g., each object instance could be backed by a single server
    instance -- or, all object instances can be backed by a single
    server instance -- or, any combination thereof. Each server can
    decide how much concurrency it wants to support (how many kernel
    threads to consume) as well as how responsive it wants to be (how
    much caching, preprocessing, etc. it uses to meet demands placed
    on it).

    It sounds like you're describing some very hard realtime baremetal system where

    It's actually *soft* as you KNOW you can never meet every deadline
    (unless you intentionally derate the performance you intend to achieve;
    you can't even guarantee that you can shoot down every incoming MISSILE
    with really deep pockets! :> )

    This is actually considerably harder than a "hard" real-time system
    because you have to actively consider what to do WHEN you miss a deadline.
    And, which tasks/jobs you might want to shed to free up resources to
    improve your chances of meeting those (certain) deadlines in the future.

    [E.g., stop protecting New England and concentrate your defenses on D.C.]

    you have the luxury of lots of memory to do very resource-intensive
    operations like full copies of video frames on the grounds of
    "predictability" (I think most embedded video processing on
    general-purpose CPUs would try very hard to avoid doing any full
    copies), but also can't afford the luxury of an MMU and/or an RTOS
    that supports some subset of POSIX, so you can use modern C++ features
    like smart pointers and mutexes. Maybe that would add too much overhead.

    I avoid copies as much as possible. I fiddle with the MMU to give
    the appearance of a copy without actually having to move all of the bytes
    from one process container to another.

    OTOH, if the code "fails to cooperate", then I have to bear the cost of
    making that "anonymous" duplicate to protect the code from itself.

    This, eventually, translates into a resource cost penalty (I maintain
    ledgers and resource quotas for each process/job) so a shitty developer
    discovers that his "product" abends more frequently than other "products".

    [It is incredibly tedious to consider how to keep "foreign" developers
    from being piggish with resources. One easy way is to elide their jobs
    when resources are scarce -- let THEM answer the support calls from
    their customers as to why THEIR product keeps crashing...]

    These are unusual requirements, to me anyway; I've done a decent
    amount of embedded programming over the years but IDK how much advice
    I can give here. For "big iron"-like tasks like multi-thread
    processing of large amounts of data, having an MMU and an embedded OS
    makes life a lot easier.

    I have adopted the MULTICS philosophy of "Computing as a Service";
    expect it to be available just as much as any other "utility" (e.g.,
    hot swapping hardware and software in live systems, running
    diagnostics alongside regular applications, identifying software and
    hardware "problems" before they manifest, etc.)

    I have about a thousand cores and almost a TB of RAM in my alpha site.
    One of the beta sites is planned as almost double that.

    For simple devices like 8 bitters which are more used as "process
    controllers" rather than to perform hardcore calculations like working
    on video, I find cooperative multitasking among state machines works
    pretty well; what mutable state there is is mostly stored in the
    machine states.

    I let each job decide how it wants to represent its state in the event
    that it is killed off and restarted at a later time. Sort of like
    checkpointing, but letting the job (tasks) figure out what they need
    to persist in order to accomplish this.

    E.g., if you are transcoding video, then remembering where (time/frame
    offset) in the input stream you were lets you return to that point
    when you are restarted (it would be silly to start over from the
    beginning, especially as you may AGAIN be killed off before finishing!)

    This lets me avoid saving the entire process state -- do you really
    care about the value of the PC when you were terminated? Will it make
    a *material* difference if your entire process state could be restored??
    Or, is the "video offset" enough to achieve MOST of the value you need?

    Because resources are finite and workload isn't, individual jobs need
    to be aware of the resources they consume and HOW they use them. As
    I track all of this, it is in their best interest to use as little
    as possible (because the workload manager targets "gluttons" for
    termination when resources get scarce -- the leaner your footprint,
    the better your chances of being allowed to continue execution).

    Unfortunately, this exposes a fair bit of these mechanisms to the
    individual jobs. But, so far, I have been able to build templates
    that make managing those resources easier.

    E.g., when you compile/link a job, you partition it into sections
    (segments) based on how and when they will be used. So, when the
    STARTUP segment is finished (setting up the job, resolving names,
    dynamic binding, identifying configuration options, etc.), *YOU*
    can "free" the resources that were used to perform those setup
    activities. (Do you ever revisit main()? If not, why keep the
    code that is located there resident in memory, "paying" for it
    without NEEDING it?)

    Similarly, if you are implementing a server, then you can explicitly
    load the SERVER segment of your resources WHEN you are ready to start
    offering those services -- presumably after shedding the STARTUP
    resources.

    There are a lot of similarly novel ideas in place that make it
    a bit more challenging than a "legacy" design (who wants to run a
    50+ year old "UNIX" clone with all of its warts? maybe someone who
    hasn't kept up with the research...). But, I have been unimpressed
    with what legacy designs have given us and the number of additional
    mechanisms they keep layering on to fix inherent flaws in the design
    approach. The Past is The Past.
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Don Y@blockedofcourse@foo.invalid to sci.electronics.design on Fri Feb 20 23:45:37 2026
    From Newsgroup: sci.electronics.design

    On 2/20/2026 11:07 PM, bitrex wrote:
    Is this a high-frequency trading box? Are you building a salami-slicer

    No. More IoT but with all of the processing in the leafs [sic]
    instead of an unscalable "central processor" trying to coordinate
    the activities of motes.

    Why put a processor in a mote if all it's going to do is sense
    something or control an actuator -- based on decisions made by
    some other "smarter" entity? Once you have the CPU and connectivity,
    why not migrate the smarts out to the periphery? (Folks are slowly
    starting to realize this is inevitable.)

    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From antispam@antispam@fricas.org (Waldek Hebisch) to sci.electronics.design on Sat Feb 21 16:53:19 2026
    From Newsgroup: sci.electronics.design

    Don Y <blockedofcourse@foo.invalid> wrote:
    On 2/20/2026 11:07 PM, bitrex wrote:
    Is this a high-frequency trading box? Are you building a salami-slicer

    No. More IoT but with all of the processing in the leafs [sic]
    instead of an unscalable "central processor" trying to coordinate
    the activities of motes.

    Why put a processor in a mote if all it's going to do is sense
    something or control an actuator -- based on decisions made by
    some other "smarter" entity? Once you have the CPU and connectivity,
    why not migrate the smarts out to the periphery (folks are slowly
    starting to realize this is inevitable)

    Well, the point is that a processor close to the hardware can:
    - do real time things
    - reduce bandwidth needed for communication
    - reduce need for wires

    Such a processor may be quite cheap (I can buy a reasonable MCU at
    $0.20 per piece and modules with an MCU at $2 each), usually does not
    need mass storage and can have tiny RAM. Small MCUs may be cheaper
    than specialized chips, so it makes sense to use them just to unify
    hardware and lower the cost.

    OTOH more complicated algorithms may need a lot of data, large
    persistent storage, a lot of RAM. Still, it is likely that a single
    CPU can do all the needed work. A single CPU makes many things
    simpler. So unless there is a compelling need for more processing,
    using relatively dumb peripheral nodes and a slightly more powerful
    central node makes a lot of sense.

    Of course, in commercial settings people work on what they are
    paid to do. IIUC developers of, say, Home Assistant get no
    incentives to make it work on really low cost hardware,
    so you get requirements like 8 GB RAM, 16 (or maybe 32) GB
    filesystem for something that should comfortably run in 32 MB
    RAM and 500 MB filesystem. Actually, IIUC comparable functionality
    was available in the past on much smaller machines than the
    32 MB RAM and 500 MB filesystem mentioned above; I am simply adding
    a lot of slack, to allow higher-level coding and to reduce the need
    for micro-optimization.
    --
    Waldek Hebisch
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Don Y@blockedofcourse@foo.invalid to sci.electronics.design on Sat Feb 21 13:18:54 2026
    From Newsgroup: sci.electronics.design

    On 2/21/2026 9:53 AM, Waldek Hebisch wrote:
    Don Y <blockedofcourse@foo.invalid> wrote:
    On 2/20/2026 11:07 PM, bitrex wrote:
    Is this a high-frequency trading box? Are you building a salami-slicer

    No. More IoT but with all of the processing in the leafs [sic]
    instead of an unscalable "central processor" trying to coordinate
    the activities of motes.

    Why put a processor in a mote if all it's going to do is sense
    something or control an actuator -- based on decisions made by
    some other "smarter" entity? Once you have the CPU and connectivity,
    why not migrate the smarts out to the periphery (folks are slowly
    starting to realize this is inevitable)

    Well, the point is that a processor close to the hardware can:
    - do real time things
    - reduce bandwidth needed for communication
    - reduce need for wires

    Such a processor may be quite cheap (I can buy a reasonable MCU at
    $0.20 per piece and modules with an MCU at $2 each), usually does not
    need mass storage and can have tiny RAM. Small MCUs may be cheaper
    than specialized chips, so it makes sense to use them just to unify
    hardware and lower the cost.

    So, the "product" is a chip wrapped up in pretty paper?

    Of course not!

    You need a power source and conditioning/protection circuitry
    (for the processor, associated electronics AND anything required
    by the field).

    You need the field interface, a circuit board, connectors for the
    field AND the "main/central CPU". And, a box to contain it all.
    And, some means of interacting with "it" /in situ/ to determine
    if it is misbehaving (and merits being uninstalled).

    You need to pay to have this installed and the cable(s) run. If
    new work, you're just paying for wire and time on a jobsite. If
    old work, the installer is crawling through attics/basements,
    removing (and later repairing and repainting) wall board, etc.
    Along with any "protection" that would be needed to "protect" the
    signal path from tampering.

    You need to develop and test the software. And, have to ensure
    an adversary can't just mimic those signals to defeat the device
    (e.g., encrypted tunnel).

    The difference between a $0.20 MCU and a $20 SoC is just noise
    in calculations like that!

    OTOH more complicated algorithms may need a lot of data, large
    persistent storage, a lot of RAM. Still, it is likely that a single
    CPU can do all the needed work. A single CPU makes many things
    simpler. So unless there is a compelling need for more processing,
    using relatively dumb peripheral nodes and a slightly more powerful
    central node makes a lot of sense.

    So, how do you process video? Just *digitize* it at the leaf and
    ship it off to the "central CPU"? How many of those feeds can
    the CPU process concurrently (you can't ignore camera 1 while
    you are processing camera 13)? You can't use some cheap/slow
    interface because the pipe wouldn't be fat enough. So, add
    magnetics and upgrade the MCU to support a NIC... and a network
    stack... and a switch...

    A central processor limits the total amount of "work" that can be
    done. Crippled leaf processors mean they can't meaningfully *help*.
    E.g., if I want to scan a recorded OTA broadcast to identify the
    "commercials" (ads) within, I can call on the leaf processor
    that handles the garage (door, etc.) to do that work as it is
    likely not busy, at the current time.

    Or, *ask* the processor that handles the weather station if it
    has any spare resources that I could exploit.

    With a single processor, every node that you add represents
    more *work* for THAT processor. With powerful nodes, every node
    brings additional *capabilities* to the problem.

    Of course, in commercial settings people work on what they are
    paid to do. IIUC developers of, say, Home Assistant get no
    incentives to make it work on really low cost hardware,
    so you get requirements like 8 GB RAM, 16 (or maybe 32) GB
    filesystem for something that should comfortably run in 32 MB
    RAM and 500 MB filesystem. Actually, IIUC comparable functionality
    was available in the past on much smaller machines than the
    32 MB RAM and 500 MB filesystem mentioned above; I am simply adding
    a lot of slack, to allow higher-level coding and to reduce the need
    for micro-optimization.

    I spent a career driving hardware costs to $0. I had one product
    that supported 16Kx1 and 64Kx1 DRAMs pluggable in quantities of *1*.
    I.e., so you could have 7 16Kb devices and 1 64Kb device in the same
    DRAM bank with the software recognizing the differences in capacity
    and treating the 64Kb bit position as "bit wide" while the first 16K
    was treated as byte-wide. I.e., expand memory in 6KB (48Kb) increments
    just by plugging different devices.

    I've written custom floating point packages to reduce the size
    of each float. Or, expedite certain classes of computations.
    Because the hardware couldn't "afford" to "do things right".

    It's a false economy in almost every case! Even for "self-contained"
    products that can be "installed" by setting them on a countertop and
    plugging into the mains.

    It completely ignores the externalities that come with products.

    ANY bug costs the customer time/resources. He may not "bill" you for
    it but it will affect your reputation and possible future sales.
    You (and he) have incurred a cost by the manifestation of this "defect".
    If he has to contact you to resolve that bug, it now costs your support
    staff.

    If it is a genuine bug, then you have to track it down and fix it
    and push out an update -- possibly to all users and not just the
    one who complained about it. I don't know many people who welcome
    the news that their "device" is now busied out while being updated.
    AND, that the update doesn't change anything that they didn't
    expect.

    [Don't you just LOVE your periodic MS and desktop app updates?]

    Anything you can do to reduce the cost of development and (ahem)
    "maintenance" decreases the TCO, even if not measurable or accounted
    for on a BoM. (If your staff is busy supporting a prior product,
    then it isn't available to work on NEW products).

    Bigger processors tend to be more amenable to HLLs and better
    development/diagnostic/debugging tools. You can build mechanisms
    into the code that minimize latent bugs hiding in the codebase
    (manifesting AFTER delivery dramatically increases the actual and
    perceived cost of the product).

    [Imagine a customer having to take that countertop device and
    bring/ship it to you for "repair"/upgrade. What TOTAL cost
    for that?]

    Figure a system designer at $250K/yr. If you trim a week off of
    his workload, you've saved $5K. Likewise for a programmer,
    that's $2K/week. Note that this is ANY time that they would have
    spent (development, maintenance, testing, etc.).

    If you "only" save a total of 4 wks (1@$5K and 3@$2K), that's $11K.
    If that savings only happens *once* and you sell 10K pieces,
    you've justified another $1 (and change) in product cost.

    And, don't forget the cost of burden as well as opportunity.

    [Remember, you are developing multiple DIFFERENT motes so you
    are likely making these savings several times before RTM!]

    Of course, the *developer* typically doesn't think about these things
    (and likely wouldn't be "rewarded" for doing so!)
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Lumin Etherlight@lumin@etherlight.link to sci.electronics.design on Sun Feb 22 17:53:29 2026
    From Newsgroup: sci.electronics.design


    Thank you for writing all of this, I enjoyed
    reading it. I wish more engineers gave as much
    thought to their craft as you do.

    Lumin Etherlight
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From albert@albert@spenarnc.xs4all.nl to sci.electronics.design on Mon Feb 23 12:44:02 2026
    From Newsgroup: sci.electronics.design

    In article <10n9deu$a7ej$1@dont-email.me>,
    Martin Brown <'''newspam'''@nonad.co.uk> wrote:
    On 19/02/2026 22:04, Don Y wrote:
    [Obdisclaimer: cc'ing s.e.d only because some of you are no longer
    subscribed to the list and will likely not see this, otherwise.
    And, its a substantial change in the API so worth noting.]

    Using similar mechanisms to those that I use in call-by-value RMIs,
    I can protect against races for call-by-reference -- throwing an
    exception or just spinning on any violations on the calling side.

    Or, I can just let people rely on their own discipline to
    ensure they don't introduce latent bugs via this mechanism
    (resorting to call by value universally seems a bad idea
    for legacy coders). As these types of races have typically
    been hard to test for, I suspect it is worth the effort.

    Any pointers to languages or IDLs that include such qualifying
    adjectives?

    Languages that allow call by reference to be qualified with a const or
    readonly directive so that the routine reading the original object (no
    copy made) is not allowed to alter it in any way.

    Detectable as a compile time fault if you do. Relying on all coders to
    be disciplined is likely to be ahem... disappointing.

    I can't be the only one to have seen shops where the journeymen are so
    unskilled that getting C code to compile by the random application of
    casts is the norm. Not written in C, but the UK's scandalous Horizon PO
    accounting system was written by people of that calibre (thickness).

    Using casts should be forbidden in the code standard.
    If you want them, you should get approval from a senior programmer.
    He will probably teach you that the cast wasn't necessary.


    They compounded the problem by having expert witnesses perjure
    themselves to convict entirely innocent postmasters of fraud because
    the computer was "infallible". The resulting mess is still ongoing.

    --
    Martin Brown

    --
    The Chinese government is satisfied with its military superiority over USA.
    The next 5 year plan has as primary goal to advance life expectancy
    over 80 years, like Western Europe.
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Don Y@blockedofcourse@foo.invalid to sci.electronics.design on Mon Feb 23 05:20:15 2026
    From Newsgroup: sci.electronics.design

    On 2/23/2026 4:44 AM, albert@spenarnc.xs4all.nl wrote:
    In article <10n9deu$a7ej$1@dont-email.me>,
    Martin Brown <'''newspam'''@nonad.co.uk> wrote:
    On 19/02/2026 22:04, Don Y wrote:
    [Obdisclaimer: cc'ing s.e.d only because some of you are no longer
    subscribed to the list and will likely not see this, otherwise.
    And, its a substantial change in the API so worth noting.]

    Using similar mechanisms to those that I use in call-by-value RMIs,
    I can protect against races for call-by-reference -- throwing an
    exception or just spinning on any violations on the calling side.

    Or, I can just let people rely on their own discipline to
    ensure they don't introduce latent bugs via this mechanism
    (resorting to call by value universally seems a bad idea
    for legacy coders). As these types of races have typically
    been hard to test for, I suspect it is worth the effort.

    Any pointers to languages or IDLs that include such qualifying
    adjectives?

    Languages that allow call by reference to be qualified with a const or
    readonly directive so that the routine reading the original object (no
    copy made) is not allowed to alter it in any way.

    Detectable as a compile time fault if you do. Relying on all coders to
    be disciplined is likely to be ahem... disappointing.

    I can't be the only one to have seen shops where the journeymen are so
    unskilled that getting C code to compile by the random application of
    casts is the norm. Not written in C but the UK scandalous Horizon PO
    accounting system was written by people of that calibre (thickness).

    Using casts should be forbidden in the code standard.
    If you want them, you should get approval from a senior programmer.
    He will probably teach you that the cast wasn't necessary.

    It may have been necessary to silence a compiler complaint.
    But, the "problem" is often an incorrect choice of data type.
    E.g., ints and pointers.

    Coders often don't think about what the "thing" they are
    manipulating represents. And, don't realize the value of
    using a better data type.

    E.g., I worked on a DBMS where the schema stored hex values as
    varchars. Doing so allowed errors like "DEADBEEs" to be stored
    and propagated to the queries that tried to make sense of this.

    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Don Y@blockedofcourse@foo.invalid to sci.electronics.design on Mon Feb 23 16:26:42 2026
    From Newsgroup: sci.electronics.design

    On 2/23/2026 4:44 AM, albert@spenarnc.xs4all.nl wrote:
    In article <10n9deu$a7ej$1@dont-email.me>,
    I can't be the only one to have seen shops where the journeymen are so
    unskilled that getting C code to compile by the random application of
    casts is the norm. Not written in C but the UK scandalous Horizon PO
    accounting system was written by people of that calibre (thickness).

    Using casts should be forbidden in the code standard.

    The other problems with "coding standards" include:
    - many shops don't have them
    - they are often not rigorously enforced (code reviews? ha!)
    - they often reflect the beliefs of a small group of individuals
    - they are often inappropriate for an application/domain
    - they are considered a panacea

    If you want them, you should get approval from a senior programmer.
    He will probably learn you that the cast wasn't necessary.

    Just hire better people rather than trying to "fix" the efforts
    of subgrade performers.
    --- Synchronet 3.21b-Linux NewsLink 1.2