• computer science and the stone age

    From gcalliet@gerard.calliet@pia-sofer.fr to comp.os.vms on Fri Feb 13 10:58:09 2026
    From Newsgroup: comp.os.vms

    Hello,

Many thanks for the answers I got on another thread.

To express my gratitude, I give here the opportunity to read an article
I submitted to the last European Ada conference, and they honored me by publishing it.

I hope you'll enjoy reading something I think you'll see as somehow a delusion.

    So, just for fun: https://www.growkudos.com/publications/10.1145%25252F3784987.3784994/reader

Gérard Calliet
    --- Synchronet 3.21b-Linux NewsLink 1.2
• From Lawrence D'Oliveiro@ldo@nz.invalid to comp.os.vms on Fri Feb 13 21:50:13 2026
    From Newsgroup: comp.os.vms

    On Fri, 13 Feb 2026 10:58:09 +0100, gcalliet wrote:

    https://www.growkudos.com/publications/10.1145%25252F3784987.3784994/reader

    How is it that "spatial" portability (between various OS or
    hardware platforms) is taken for granted, while "temporal"
    portability is forgotten?

    Four words: technical debt.
  • From gcalliet@gerard.calliet@pia-sofer.fr to comp.os.vms on Sat Feb 14 23:02:55 2026
    From Newsgroup: comp.os.vms

On 13/02/2026 at 22:50, Lawrence D'Oliveiro wrote:
    On Fri, 13 Feb 2026 10:58:09 +0100, gcalliet wrote:

    https://www.growkudos.com/publications/10.1145%25252F3784987.3784994/reader

    How is it that "spatial" portability (between various OS or
    hardware platforms) is taken for granted, while "temporal"
    portability is forgotten?

    Four words: technical debt.
    Indeed https://en.wikipedia.org/wiki/Technical_debt

    Two dates: technical debt: 1992 ; today : 2026

    (same world?)

Gérard Calliet
  • From Stephen Hoffman@seaohveh@hoffmanlabs.invalid to comp.os.vms on Sun Feb 15 11:43:54 2026
    From Newsgroup: comp.os.vms

    On 2026-02-13 09:58:09 +0000, gcalliet said:

    So, just for fun: https://www.growkudos.com/publications/10.1145%25252F3784987.3784994/reader ...

    "We will ask ourselves why backward compatibility, considered an
    intrinsic quality of software, particularly for VMS, has become a
    potential and luxurious add-on under the term LTS. How is it that
    "spatial" portability (between various OS or hardware platforms) is
    taken for granted, while "temporal" portability is forgotten?"
The concept that computers and apps are fixed and unchanging over time
is becoming increasingly rare, yes, outside of SCADA, process control,
factory-floor, and enterprise environments, and similar long-term deployments.

    And even within those LTS-aligned environments, changes such as
    encryption and authentication and related hardening are becoming
    required, and which then causes other changes within the apps and
    hardware configurations.

    For vendors, maintaining ABIs and to a lesser extent APIs becomes
    increasingly costly, difficult, and problematic, and less useful given
    the apps themselves are increasingly being continuously rebuilt.
DEC sought to provide a degree of ABI and API stability, which --
*looks around* -- clearly wasn't a particularly viable business model.
Not for funding competitive product development work, and not for maintaining
and growing the customer base.
    Stagnant or shrinking customer bases are bad for pricing and
    amortization and competition, and on the wrong side of any market consolidation. That then shifts the pricing and the strategies
    available, which is where VSI is today.
    LTS is a hard problem, and that in various dimensions.
    --
    Pure Personal Opinion | HoffmanLabs LLC

  • From jgd@jgd@cix.co.uk (John Dallman) to comp.os.vms on Sun Feb 15 19:23:00 2026
    From Newsgroup: comp.os.vms

    In article <10mst4a$5o8o$1@dont-email.me>, seaohveh@hoffmanlabs.invalid (Stephen Hoffman) wrote:

The concept that computers and apps are fixed and unchanging over
time is becoming increasingly rare, yes, outside of SCADA, process
control, factory-floor, and enterprise environments, and similar
long-term deployments.

    And even within those LTS-aligned environments, changes such as
    encryption and authentication and related hardening are becoming
    required, and which then causes other changes within the apps and
    hardware configurations.

    The rule I work to is that if a system is always air-gapped and cannot communicate with any other computer, even via exchangeable media (floppy drives, USB sticks, etc), then it can be frozen. Anything else needs
    security updates, and if there's software in the stack that does not get security updates, it has to go.

    For vendors, maintaining ABIs and to a lesser extent APIs becomes increasingly costly, difficult, and problematic, and less useful
    given the apps themselves are increasingly being continuously
    rebuilt.

    It's not actually that hard, but the understanding of how to do it right
    seems to be very rare.

    DEC sought to provide a degree of ABI and API stability, which _
    *looks around* _ clearly wasn't a particularly viable business
    model. Not for funding competitive product development work, and
    not for maintaining and growing the customer base.

    OTOH, the Linux kernel maintains its ABIs and API very thoroughly, with
    the objective that changes within the kernel can't break applications.

    LTS is a hard problem, and that in various dimensions.

    Notably, it involves risks that can't be predicted.

    John
  • From gcalliet@gerard.calliet@pia-sofer.fr to comp.os.vms on Sun Feb 15 20:36:11 2026
    From Newsgroup: comp.os.vms

On 13/02/2026 at 22:50, Lawrence D'Oliveiro wrote:
    On Fri, 13 Feb 2026 10:58:09 +0100, gcalliet wrote:

    https://www.growkudos.com/publications/10.1145%25252F3784987.3784994/reader

    How is it that "spatial" portability (between various OS or
    hardware platforms) is taken for granted, while "temporal"
    portability is forgotten?

    Four words: technical debt.
    Hello,

More seriously, I think what I'm speaking about is another issue.

In the 1990s, you had this concept of technical debt, but at the
same time backward compatibility was thought of as a "must". I think
about the port from VAX to Alpha, for example, with projects like DEC
Migrate to accompany it, or rolling upgrades in mixed clusters.

So if you wanted, you had a lot of possibilities to cope with your
technical debt.

Now it seems you have to pay more if you want something like LTS. In my
opinion it is not about technical debt, but about rapid cycles of
"creative destruction". And so a lot of gaps are artificially made
between the past and the present, which "must" run after the "future",
sometimes forgetting qualities which were in the past.

    I may be completely wrong, but not in the way you say it. :)

Gérard Calliet
• From Arne Vajhøj@arne@vajhoej.dk to comp.os.vms on Sun Feb 15 15:21:33 2026
    From Newsgroup: comp.os.vms

    On 2/15/2026 2:23 PM, John Dallman wrote:
    The rule I work to is that if a system is always air-gapped and cannot communicate with any other computer, even via exchangeable media (floppy drives, USB sticks, etc), then it can be frozen. Anything else needs
    security updates, and if there's software in the stack that does not get security updates, it has to go.

    Curious.

    Where do you make the cut?

    Example list:

    commercial vendor where you directly pay for support
    commercial vendor with product supported
    open source with multiple maintainers and recent releases
    open source with single maintainer but recent releases
    open source with single maintainer and no recent releases
    open source declared EOL by author but source still available
    commercial vendor with product not supported
    commercial vendor no longer existing

    And it does not matter what it is and how it is used?

    If we are talking a classic 80's or 90's VMS Basic or
    Cobol application, then it is sort of easy.

    But if we are talking something recently developed, then
    there is a good chance that with transitive dependencies
    you will have 1000-5000 open source libraries included
    in the solution.

    And then it can become a little harder.

    Let us say that Felix Boehm decided not to maintain
    this little code gem:

    https://github.com/fb55/boolbase/blob/master/index.js

    Would you worry?

And before someone thinks that it is a joke: according
to public statistics (https://www.npmjs.com/package/boolbase)
it is downloaded 37 million times per week (for
npm "builds").
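The fan-out itself is mechanical; a toy Python sketch (all package names
below are made up, not real npm packages) of how a few direct
dependencies expand into a full transitive closure:

```python
# Toy illustration of transitive dependency fan-out: each package
# pulls in a few more, and a breadth-first walk over the graph
# yields the full closure an installer would have to fetch.

from collections import deque

def transitive_deps(graph, roots):
    """Breadth-first walk of the dependency graph; returns the closure."""
    seen = set()
    queue = deque(roots)
    while queue:
        pkg = queue.popleft()
        if pkg in seen:
            continue
        seen.add(pkg)
        queue.extend(graph.get(pkg, ()))
    return seen

# Hypothetical app with three direct dependencies.
graph = {
    "app":           ["web-framework", "orm", "logger"],
    "web-framework": ["http-core", "router", "template"],
    "orm":           ["db-driver", "pool"],
    "http-core":     ["parser", "boolbase-like-micro-lib"],
    # real lockfiles continue like this for thousands of entries
}

closure = transitive_deps(graph, ["app"])
print(len(closure) - 1)  # dependencies excluding the app itself: 10
```

On a real lockfile the same walk is what turns three direct
dependencies into the 1000-5000 libraries mentioned above.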

    Arne



• From Lawrence D'Oliveiro@ldo@nz.invalid to comp.os.vms on Sun Feb 15 20:53:49 2026
    From Newsgroup: comp.os.vms

    On Sun, 15 Feb 2026 11:43:54 -0500, Stephen Hoffman wrote:

DEC sought to provide a degree of ABI and API stability, which --
*looks around* -- clearly wasn't a particularly viable business
model.

IBM (with its mainframe business) and Microsoft would seem to be doing
much the same sort of thing. IBM lasted a bit longer than DEC, but
that part of its business is clearly in "legacy" mode these days.

    As for Microsoft, for all the myths about backward compatibility, OS
    upgrades are now enough of a headache that large swathes of customers
    have become reluctant to accept them.
• From Lawrence D'Oliveiro@ldo@nz.invalid to comp.os.vms on Sun Feb 15 20:57:41 2026
    From Newsgroup: comp.os.vms

    On Sun, 15 Feb 2026 20:36:11 +0100, gcalliet wrote:

In the 1990s, you had this concept of technical debt, but at the
same time backward compatibility was thought of as a "must". I
think about the port from VAX to Alpha, for example, with projects
like DEC Migrate to accompany it, or rolling upgrades in mixed
clusters.

    So long as you understood that backward compatibility was only ever
    going to be an interim thing, and not a forever thing, then you could
    survive. It was to give you some breathing room to manage the upgrade
    in a more organized fashion, not to put it off indefinitely.

    Now it seems you have to pay more if you want something like LTS.

Even that is just a slightly longer-timescale version of "interim".

    Just like regular monetary debt, technical debt accrues interest. And
    the longer you defer it, the more painful it becomes to pay that off.
• From Lawrence D'Oliveiro@ldo@nz.invalid to comp.os.vms on Sun Feb 15 23:46:32 2026
    From Newsgroup: comp.os.vms

    On Mon, 16 Feb 2026 00:15:14 +0100, gcalliet wrote:

On 15/02/2026 at 17:43, Stephen Hoffman wrote:

DEC sought to provide a degree of ABI and API stability, which --
*looks around* -- clearly wasn't a particularly viable business
model.

Did DEC fail because of that?

DEC failed in spite of that, was I think the point.

    And yes one part of the problem is purely economic. What happens
    when you're not exactly in the economic mainstream?...

    DEC was at one time the second-largest computer vendor in the world.

If that's not "in the economic mainstream", then I for one don't know what is ...
• From Arne Vajhøj@arne@vajhoej.dk to comp.os.vms on Sun Feb 15 19:21:30 2026
    From Newsgroup: comp.os.vms

On 2/15/2026 6:46 PM, Lawrence D'Oliveiro wrote:
    DEC was at one time the second-largest computer vendor in the world.

    Indeed. Far behind IBM, but a clear number two.

    And I believe some at DEC had ambitions to move up
    to number one.

Remember the VAX 9000, the "mainframe killer".

Of course it was a fiasco (for various reasons,
including delays, technical and business).
The late-model VAX 6000 systems (400, 500, 600)
were kept running for many, many years. But the 9000
systems came and went in just a few years.

    Arne

  • From gcalliet@gerard.calliet@pia-sofer.fr to comp.os.vms on Mon Feb 16 09:22:26 2026
    From Newsgroup: comp.os.vms

On 16/02/2026 at 00:46, Lawrence D'Oliveiro wrote:
DEC was at one time the second-largest computer vendor in the world.

If that's not "in the economic mainstream", then I for one don't know what is ...
The birth of DEC was against the mainframe mainstream in the use of
computers: the minicomputer concept.

Again: you find a flaw in the mainstream (not thinking computers could
be used by little entities, like medium-sized companies or departments
in big ones) and so you win. Thinking against the main way of thinking
is sometimes fruitful.


Gérard Calliet


  • From jgd@jgd@cix.co.uk (John Dallman) to comp.os.vms on Mon Feb 16 21:30:00 2026
    From Newsgroup: comp.os.vms

In article <10mt9sd$9orh$1@dont-email.me>, arne@vajhoej.dk (Arne Vajhøj)
wrote:

    Where do you make the cut?

    Example list:

    commercial vendor where you directly pay for support
    commercial vendor with product supported
    open source with multiple maintainers and recent releases

    There: stuff like Xerces XML, Open JDK or GCC is fine.

    open source with single maintainer but recent releases
    open source with single maintainer and no recent releases
    open source declared EOL by author but source still available
    commercial vendor with product not supported
    commercial vendor no longer existing

    But if we are talking something recently developed, then
    there is a good chance that with transitive dependencies
    you will have 1000-5000 open source libraries included
    in the solution.

    I'm not in the web apps business. I produce closed-source mathematical modelling libraries. I try to keep our development environments as simple
    as possible, aided by not having management that wants to take up every
    new fashion.

    John
  • From Stephen Hoffman@seaohveh@hoffmanlabs.invalid to comp.os.vms on Mon Feb 16 17:11:50 2026
    From Newsgroup: comp.os.vms

    On 2026-02-15 19:23:00 +0000, John Dallman said:

    In article <10mst4a$5o8o$1@dont-email.me>, seaohveh@hoffmanlabs.invalid (Stephen Hoffman) wrote:

The concept that computers and apps are fixed and unchanging over time
is becoming increasingly rare, yes, outside of SCADA, process
control, factory-floor, and enterprise environments, and similar
long-term deployments.

    And even within those LTS-aligned environments, changes such as
    encryption and authentication and related hardening are becoming
    required, and which then causes other changes within the apps and
    hardware configurations.

    The rule I work to is that if a system is always air-gapped and cannot communicate with any other computer, even via exchangeable media
    (floppy drives, USB sticks, etc), then it can be frozen. Anything else
    needs security updates, and if there's software in the stack that does
    not get security updates, it has to go.

    I follow similar, though with the "isolated" network and server
    operations instrumented and monitored. Canaries, too. Isolation is
    nice. I like isolation. But I don't trust it to be maintained.

    For vendors, maintaining ABIs and to a lesser extent APIs becomes
    increasingly costly, difficult, and problematic, and less useful given
    the apps themselves are increasingly being continuously rebuilt.

    It's not actually that hard, but the understanding of how to do it
    right seems to be very rare.

    Oh, it gets much harder when the API or ABI no longer reflects current reality, and you're left to break ABIs or downgrade operations.

    DEC sought to provide a degree of ABI and API stability, which _
    *looks around* _ clearly wasn't a particularly viable business model.
    Not for funding competitive product development work, and not for
    maintaining and growing the customer base.

    OTOH, the Linux kernel maintains its ABIs and API very thoroughly, with
    the objective that changes within the kernel can't break applications.

    That's a goal of many platforms. OpenVMS has an extensive ABI and API
    test suite. (One or two things slipped by it over the years too, such
    as the BACKUP ABI.)

    LTS is a hard problem, and that in various dimensions.

    Notably, it involves risks that can't be predicted.

    And some future changes that just can't be (or weren't) predicted, too.
    --
    Pure Personal Opinion | HoffmanLabs LLC

• From Arne Vajhøj@arne@vajhoej.dk to comp.os.vms on Mon Feb 16 20:21:31 2026
    From Newsgroup: comp.os.vms

    On 2/16/2026 4:30 PM, John Dallman wrote:
In article <10mt9sd$9orh$1@dont-email.me>, arne@vajhoej.dk (Arne Vajhøj) wrote:
    Where do you make the cut?

    Example list:

    commercial vendor where you directly pay for support
    commercial vendor with product supported
    open source with multiple maintainers and recent releases

    There: stuff like Xerces XML, Open JDK or GCC is fine.

    open source with single maintainer but recent releases
    open source with single maintainer and no recent releases
    open source declared EOL by author but source still available
    commercial vendor with product not supported
    commercial vendor no longer existing

Relatively high bar, but it can be justified.

    But if we are talking something recently developed, then
    there is a good chance that with transitive dependencies
    you will have 1000-5000 open source libraries included
    in the solution.

    I'm not in the web apps business. I produce closed-source mathematical modelling libraries. I try to keep our development environments as simple
    as possible, aided by not having management that wants to take up every
    new fashion.

It is not just web. Though web tends to be the worst, due to
the JS world's acceptance of micro-libraries (which in the
opinion of many, including mine, is a concept worse than the
square wheel).

    But you can manage this stuff when you can focus on
    your own libraries.

    I have no idea who the users of your libraries are,
    but it could be a lot more complex out there:
    * various data sources: relational databases,
    NoSQL databases, flat files
    * various data flows: message queues, event streaming
    system (read: Kafka), ETL tools
    * specialized databases: search databases, time series
    database, vector database
    * modelling applications in Python/Fortran/C that
    use your library and a dozen other libraries to
    model whatever
    * report generation: PDF, JSON, XLSX
* monitoring tools to keep an eye on the entire flow
* scheduling tools to automate runs
etc., etc.

    Arne






  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.os.vms on Tue Feb 17 14:45:25 2026
    From Newsgroup: comp.os.vms

    In article <mvelhbF374cU1@mid.individual.net>,
    gcalliet <gerard.calliet@pia-sofer.fr> wrote:
On 13/02/2026 at 22:50, Lawrence D'Oliveiro wrote:
    On Fri, 13 Feb 2026 10:58:09 +0100, gcalliet wrote:

https://www.growkudos.com/publications/10.1145%25252F3784987.3784994/reader
    How is it that "spatial" portability (between various OS or
    hardware platforms) is taken for granted, while "temporal"
    portability is forgotten?

    Four words: technical debt.
    Hello,

    More seriously, I think what I'm speaking about is another issue.

In the 1990s, you had this concept of technical debt, but at the
same time backward compatibility was thought of as a "must". I think
about the port from VAX to Alpha, for example, with projects like DEC
Migrate to accompany it, or rolling upgrades in mixed clusters.

    In many ways, technical debt and backwards compatibility are
    orthogonal, though clearly not in _all_ ways.

    In particular, backwards compatibility does not absolve a
    project of technical debt, rather it is a means to maximize the
    value of existing investments while providing a graceful path
    forward with respect to change. Failure to take advantage of
    that in a timely manner gives rise to technical debt; put
    another way, one of the _many_ sources of technical debt is
    abusing backwards compatibility as an excuse not to invest in
    upgrades or maintenance for long-term projects.

    So if you wanted, you had a lot of possibilities to cope with your
    technical debt.

    Yes. And if someone squandered those, then they reap the
    consequences down the line.

Now it seems you have to pay more if you want something like LTS. In my
opinion it is not about technical debt, but about rapid cycles of
"creative destruction". And so a lot of gaps are artificially made
between the past and the present, which "must" run after the "future",
sometimes forgetting qualities which were in the past.

    I may be completely wrong, but not in the way you say it. :)

My own experience is that change isn't necessarily hard to
manage, even at massive scale, as long as one is willing to
invest in the resources to manage it. THAT, and non-technical
issues (like regulatory requirements), are the hard parts; the
technology part is surprisingly easy.

    - Dan C.

  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.os.vms on Tue Feb 17 16:22:05 2026
    From Newsgroup: comp.os.vms

    In article <10mst4a$5o8o$1@dont-email.me>,
    Stephen Hoffman <seaohveh@hoffmanlabs.invalid> wrote:
    On 2026-02-13 09:58:09 +0000, gcalliet said:

So, just for fun:
https://www.growkudos.com/publications/10.1145%25252F3784987.3784994/reader
...

    "We will ask ourselves why backward compatibility, considered an
    intrinsic quality of software, particularly for VMS, has become a
    potential and luxurious add-on under the term LTS. How is it that
    "spatial" portability (between various OS or hardware platforms) is
    taken for granted, while "temporal" portability is forgotten?"
The concept that computers and apps are fixed and unchanging over time
is becoming increasingly rare, yes, outside of SCADA, process control,
factory-floor, and enterprise environments, and similar long-term
deployments.

    And even within those LTS-aligned environments, changes such as
    encryption and authentication and related hardening are becoming
    required, and which then causes other changes within the apps and
    hardware configurations.

For vendors, maintaining ABIs and to a lesser extent APIs becomes
increasingly costly, difficult, and problematic, and less useful given
the apps themselves are increasingly being continuously rebuilt.

    I suppose one must define "ABI" in this context; if we mean
    specific interfaces from specific libraries or linkable segments
    then yeah, that can be an issue.

    If instead, we mean things like procedure calling conventions
    and data structure layout, then my sense is that these do not
    change that frequently, and stability is highly beneficial. In
the Bad Old Days, inter-language procedure calls were incredibly
    difficult on e.g. Unix: calling FORTRAN from C and vice versa,
    for instance; the VAX folks would laugh because it was a
    non-issue on VMS because of the calling standard document.

    Now, FFI is much easier, though there are still gaps (vtables
for languages with methods and so on are not standardized,
    generally speaking, and things like symbol mangling for
    languages with hierarchical modules and such vary).
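As a small illustration of that easier modern FFI, a minimal Python
sketch using ctypes to call the C library's cos(); it assumes a
Unix-style libm is available (the glibc fallback name is an
assumption), and works precisely because the platform C calling
convention and data layout are stable:

```python
# Minimal FFI sketch: call the C math library's cos() from Python.
# This leans entirely on the stable C ABI: ctypes only needs the
# symbol name plus a declared signature, not recompilation.

import ctypes
import ctypes.util

# find_library may return None in stripped-down environments;
# "libm.so.6" is a glibc-specific fallback (an assumption).
path = ctypes.util.find_library("m") or "libm.so.6"
libm = ctypes.CDLL(path)

libm.cos.argtypes = [ctypes.c_double]  # declare the C prototype
libm.cos.restype = ctypes.c_double

print(libm.cos(0.0))  # 1.0
```

Compare that with the old per-compiler name-mangling and
argument-passing guesswork the calling standard spared VMS users.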

    I'll agree that maintaining _APIs_ is less beneficial as systems
    evolve.

DEC sought to provide a degree of ABI and API stability, which --
*looks around* -- clearly wasn't a particularly viable business model.
Not for funding competitive product development work, and not for maintaining
and growing the customer base.

    I think this is unrelated, or at least, much less significant
    than you make it out to be. DEC's major problem was failure to
    react and adapt to changes in the market. As hardware
    commoditized it was inevitable that software would follow, and
    it largely has; at the same time, the rise of the web rendered
the underlying node-level system _mostly_ irrelevant. DEC tried
to leverage vendor lock-in to maintain its high-margin vertical
position in the industry; the technical aspects of how they tried
to make that palatable to customers were nice, but mostly
separate.

    It's ironic; I think DEC had a very compelling vision for
    computing that is actually close to what we more or less ended
    up with: highly distributed, client/server; what they missed was
    that it would be far more compelling to go with DEC if they had
    a better interoperability story with the rest of the world.

Stagnant or shrinking customer bases are bad for pricing and
amortization and competition, and on the wrong side of any market
consolidation. That then shifts the pricing and the strategies
available, which is where VSI is today.
    LTS is a hard problem, and that in various dimensions.

    And how.

    - Dan C.
