• Re: 43 Years Of TCP/IP

    From Lars Poulsen@lars@beagle-ears.com to alt.folklore.computers on Fri Jan 2 14:08:08 2026
    From Newsgroup: alt.folklore.computers

    Peter Flass <Peter@Iron-Spring.com> writes:
    I think the alternatives were X.25 and various "network architectures"
    from different vendors, that all looked like SNA. SNA was a complete
    mess.

    On 2026-01-02, Lynn Wheeler <lynn@garlic.com> wrote:
    The Internet That Wasn't. How TCP/IP eclipsed the Open
    Systems Interconnection standards to become the global protocol for
    computer networking
    https://spectrum.ieee.org/osi-the-internet-that-wasnt

    Oh, those bad old days, when we all used TCP/IP while getting paid to
    implement OSI protocol stacks.

    SNA seemed to me to be designed around the IBM32xx transaction
    terminals. The entire structure revolved around the assumption that a
    network would have a hierarchical structure with ONE central node
    coordinating the whole network. This was why it could not be used for
    Lynn's "IBM Internal Network", which was built on low-level point-
    to-point links emulating IBM2780 RJE terminals, and where the protocol
    assumed that the "terminal" was initiating the connection.
    For a network to become truly universal, it has to allow connection
    between equal peers, what Lynn calls "Internetworking".

    SNA could easily have won out, if IBM had been willing to concede
    some of these points:
    - allow peer-to-peer networks (internetworking) and have a way
    for departments (such as research groups) in one company to
    connect to groups in other companies, maybe through intermediaries
    (neutral brokers).
    - allow outsiders to implement the protocol set without exorbitant
    royalties.
    - cede control of the standard to an independent body.

    OSI could have won if
    - the registry were optional or if it had been specified FIRST
    in an extensible manner
    - a non-profit had sponsored a basic reference implementation with
    enough features to be useful, and thereafter any new extensions
    to the protocol set had to be tested/certified/demonstrated to
    interwork with that reference

    TCP/IP won because
    - it had internetworking
    - it was all peer-to-peer
    - the protocols were open source, developed by working engineers
    and graduate students to solve real-world problems
    - it HAD to be proven to work before being accepted as "standard"

    Truth, beauty and the Internet Way:
    "We believe in rough concensus and running code!"

    Vint Cerf guided the process masterfully. I asked Edge AI: Is Vint
    Cerf still alive?

    "Yes, Vint Cerf is alive. He is currently 79 years old and remains active
    in his role as Google's Chief Internet Evangelist, continuing his
    efforts to advance the beneficial use of the Internet. He will turn 80
    this year."

    Also: https://www.the-independent.com/tech/vint-cerf-father-internet-b2582067.html
    --
    Lars Poulsen - an old geek in Santa Barbara, California
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lynn Wheeler@lynn@garlic.com to alt.folklore.computers on Fri Jan 2 08:27:29 2026
    From Newsgroup: alt.folklore.computers

    Lynn Wheeler <lynn@garlic.com> writes:
    newspaper article about some of Edson's Internet & TCP/IP IBM battles:
    https://web.archive.org/web/20000124004147/http://www1.sjmercury.com/svtech/columns/gillmor/docs/dg092499.htm
    Also from wayback machine, some additional (IBM missed, Internet &
    TCP/IP) references from Ed's website
    https://web.archive.org/web/20000115185349/http://www.edh.net/bungle.htm

    late 80s, a senior disk engineer got a talk scheduled at internal,
    world-wide, annual communication group conference, supposedly on 3174
    performance. However, his opening was that the communication group was
    going to be responsible for the demise of the disk division. The disk
    division was seeing drop in disk sales with data fleeing mainframe
    datacenters to more distributed computing friendly platforms. The disk
    division had come up with a number of solutions, but they were
    constantly being vetoed by the communication group (with their
    corporate ownership of everything that crossed the datacenter walls)
    trying to protect their dumb terminal paradigm. Senior disk software
    executive partial countermeasure was investing in distributed
    computing startups that would use IBM disks (he would periodically ask
    us to drop in on his investments to see if we could offer any
    assistance).

    The communication group's stranglehold on mainframe datacenters wasn't
    just disks and a couple years later, IBM has one of the largest losses
    in the history of US companies ... and was being reorganized into the 13
    "baby blues" (take-off on the "baby bells" breakup a decade earlier) in preperation for breaking up IBM. https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
    https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
    We had already left IBM but get a call from the bowels of Armonk asking
    if we could help with the breakup. Before we get started, the board
    brings in the former AMEX president as CEO to try and save the company,
    who (somewhat) reverses the breakup and uses some of the same techniques
    used at RJR (gone 404, but lives on at wayback)
    https://web.archive.org/web/20181019074906/http://www.ibmemployee.com/RetirementHeist.shtml

    other trivia: in the early 80s, I was funded for HSDT project, T1 and
    faster computer links (both terrestrial and satellite) and battles with
    SNA group (60s, IBM had 2701 supporting T1, 70s with SNA/VTAM and
    issues, links were capped at 56kbit ... and I had to mostly resort to
    non-IBM hardware). Also was working with NSF director and was supposed
    to get $20M to interconnect the NSF Supercomputer centers. Then
    congress cuts the budget, some other things happened and eventually
    there was an RFP released (in part based on what we already had
    running). NSF 28Mar1986 Preliminary Announcement (from old archived
    a.f.c post):
    https://www.garlic.com/~lynn/2002k.html#12
    The OASC has initiated three programs: The Supercomputer Centers Program
    to provide Supercomputer cycles; the New Technologies Program to foster
    new supercomputer software and hardware developments; and the
    Networking Program to build a National Supercomputer Access Network -
    NSFnet.

    ... IBM internal politics was not allowing us to bid. The NSF director
    tried to help by writing the company a letter (3Apr1986, NSF Director to
    IBM Chief Scientist and IBM Senior VP and director of Research, copying
    IBM CEO) with support from other gov. agencies ... but that just made
    the internal politics worse (as did claims that what we already had
    operational was at least 5yrs ahead of the winning bid), as regional
    networks connect in, NSFnet becomes the NSFNET backbone, precursor to
    modern internet. Note RFP had called for T1 links, however winning bid
    put in 440kbit/sec links ... then to make it look something like T1,
    they put in T1 trunks with telco multiplexors running multiple
    440kbit/sec links over T1 trunks.
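
    The trunk substitution arithmetic can be sketched; a back-of-envelope
    tally (my own figures for T1 payload, not the winning bid's actual
    channelization):

```python
# Back-of-envelope: how many 440 kbit/s links fit on a T1 trunk.
# Assumption: T1 payload taken as 24 DS0 channels of 64 kbit/s
# (1536 kbit/s after framing); not the bid's actual multiplexor layout.
T1_PAYLOAD_KBPS = 24 * 64   # 1536 kbit/s of usable payload
LINK_KBPS = 440             # the winning bid's per-link rate

links_per_trunk = T1_PAYLOAD_KBPS // LINK_KBPS
used_kbps = links_per_trunk * LINK_KBPS

print(links_per_trunk)   # 3 links per T1 trunk
print(used_kbps)         # 1320 of 1536 kbit/s payload used
```

    So a telco multiplexor carrying three 440kbit/sec links fills most of
    one T1 trunk, which is how the bid could "look something like T1"
    without actually running T1 links.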

    When director left NSF, he went over to K (H?) street lobby group
    (council on competitiveness) and we would try and periodically drop in
    on him.
    --
    virtualization experience starting Jan1968, online at home since Mar1970
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence D'Oliveiro@ldo@nz.invalid to alt.folklore.computers on Fri Jan 2 20:29:34 2026
    From Newsgroup: alt.folklore.computers

    On Fri, 2 Jan 2026 14:08:08 -0000 (UTC), Lars Poulsen wrote:

    - allow peer-to-peer networks (internetworking) ...

    I never came across this usage of "internetworking" before. From the
    textbooks, I always understood it to mean "connections between
    separate, autonomous networks".
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence D'Oliveiro@ldo@nz.invalid to alt.folklore.computers on Fri Jan 2 20:34:44 2026
    From Newsgroup: alt.folklore.computers

    On Fri, 02 Jan 2026 08:27:29 -1000, Lynn Wheeler wrote:

    The disk division had come up with a number of solutions, but they
    were constantly being vetoed by the communication group (with their
    corporate ownership of everything that crossed the datacenter walls)
    trying to protect their dumb terminal paradigm.

    IBM were legendary (notorious?) for having just about the biggest
    patent hoard of any company back in the day, and for the sheer number
    of papers published by their research division.

    But my impression of their shipping products was that very little of
    this cutting-edge cleverness actually made it into production. SNA
    being a case in point.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lynn Wheeler@lynn@garlic.com to alt.folklore.computers on Fri Jan 2 13:27:04 2026
    From Newsgroup: alt.folklore.computers

    Al Kossow <aek@bitsavers.org> writes:
    Chessin came to visit us in the Systems Technology Group at Apple ATG
    and we had a nice discussion.

    I had wondered whatever happened to XTP.

    TCP had minimum 7 packet exchange and XTP defined a reliable transaction
    with minimum of 3 packet exchange. Issue was that TCP/IP was part of
    kernel distribution requiring physical media (and typically some
    expertise for complete system change/upgrade; browsers and webservers
    were self contained load&go).
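
    The packet counts can be tallied; a toy enumeration (my own labeling
    of the packets, assuming no losses, and using one plausible accounting
    that reaches TCP's minimum of seven against an XTP-style transaction
    that piggybacks data on the connection-management packets):

```python
# Hypothetical tally of minimum packets for one request/response
# transaction. The packet names are illustrative labels, not wire
# formats from either specification.

def tcp_transaction_packets():
    # 3-way handshake: SYN, SYN+ACK, ACK
    handshake = ["SYN", "SYN+ACK", "ACK"]
    # request and response (ACKs piggybacked on the data segments)
    data = ["request", "response"]
    # connection close: FIN, FIN+ACK
    teardown = ["FIN", "FIN+ACK"]
    return handshake + data + teardown

def xtp_transaction_packets():
    # XTP-style reliable transaction: the request rides on the FIRST
    # packet, the response rides on the reply, a third packet closes.
    return ["FIRST+request", "response+close", "ack"]

print(len(tcp_transaction_packets()))  # 7
print(len(xtp_transaction_packets()))  # 3
```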

    XTP also defined things like trailer protocol where interface hardware
    could do CRC as packet flowing through and do the append/check
    ... helping minimize packet fiddling (as well as other pieces of
    protocol offloading, Chessin also liked to draw analogies with SGI
    graphic card process pipelining). Problem was that there were lots of
    push back (part of claim at the time HTTPS prevailing over IPSEC) for
    any kernel change prereq.

    topic drift ... 1988, HA/6000 was approved, initially for NYTimes to
    migrate their newspaper system off DEC VAXCluster to RS/6000. I rename
    it HA/CMP
    https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
    when I start doing technical/scientific cluster scale-up with national
    labs (LANL, LLNL, NCAR, etc, also porting LLNL LINCS and NCAR
    filesystems to HA/CMP) and commercial cluster scale-up with RDBMS
    vendors (Oracle, Sybase, Ingres, Informix) that had VAXCluster support
    in same source base with unix (also do DLM supporting VAXCluster
    semantics).

    Early Jan92, have a meeting with Oracle CEO where IBM AWD executive
    Hester tells Ellison that we would have 16-system clusters by mid92
    and 128-system clusters by ye92. Mid Jan92, convince IBM FSD to bid
    HA/CMP for gov. supercomputers. Late Jan92, cluster scale-up is
    transferred for announce as IBM Supercomputer (for
    technical/scientific *ONLY*) and we
    are told we can't do clusters with anything that involve more than four
    systems (we leave IBM a few months later).

    Partially blamed FSD going up to the IBM Kingston supercomputer group to
    tell them they were adopting HA/CMP for gov. bids (of course somebody
    was going to have to do it eventually). A couple weeks later,
    17feb1992, Computerworld news ... IBM establishes laboratory to
    develop parallel systems (pg8)
    https://archive.org/details/sim_computerworld_1992-02-17_26_7

    Not long after leaving IBM, was brought in as consultant to small
    client/server startup, two former Oracle people (that had worked on
    HA/CMP and were in the Ellison/Hester meeting) are there responsible
    for something called "commerce server" and they want to do payment
    transactions. The startup had also invented this stuff they called
    "SSL" they want to use, it is now frequently called "e-commerce". I
    had responsibility between web servers and payment networks, including
    the payment gateways.

    One of the problems with HTTP&HTTPS were transactions built on top of
    TCP ... implementation that sort of assumed long lived sessions (made it
    easier to install on top kernel TCP/IP protocol stack). As webserver
    workload ramped up, web servers were starting to spend 95+% of CPU
    running FINWAIT list. NETSCAPE was increasing number of servers and
    trying to spread the workload. Eventually NETSCAPE installs a large
    multiprocessor server from SEQUENT (that had also redone DYNIX FINWAIT
    processing to eliminate that non-linear increase in CPU overhead).
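
    The FINWAIT problem reads like a classic linear-scan bottleneck; a
    toy model (my own illustration, not the actual BSD or DYNIX code) of
    why CPU climbs non-linearly when every connection event walks an
    ever-growing list of closing sockets, and why indexing fixes it:

```python
# Toy model: cost of finding a closing connection when FIN-WAIT
# sockets sit in an unordered list (scanned per event) vs. a dict
# keyed by connection id. Not the real kernel data structures.

finwait_list = []    # unordered list, walked on every event
finwait_index = {}   # dict keyed by connection id

def close_conn(conn_id):
    finwait_list.append(conn_id)
    finwait_index[conn_id] = True

def lookup_scan(conn_id):
    # O(n): cost grows with the number of closing connections,
    # so per-event CPU rises as the server gets busier.
    steps = 0
    for c in finwait_list:
        steps += 1
        if c == conn_id:
            break
    return steps

def lookup_indexed(conn_id):
    # O(1): hash lookup, cost independent of list length.
    _ = finwait_index[conn_id]
    return 1

for i in range(10_000):
    close_conn(i)

print(lookup_scan(9_999))     # 10000 steps for the worst case
print(lookup_indexed(9_999))  # 1 step regardless of load
```

    With short-lived HTTP connections constantly entering the closing
    states, the scan cost is paid on every event, which is consistent
    with servers burning 95+% of CPU on FINWAIT processing.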

    XTP had provided for piggy-back transaction processing to keep packet
    exchange overhead to minimum ... and I showed HTTPS over XTP in the
    minimum 3-packet exchange (existing HTTPS had to 1st establish TCP
    session, then establish HTTPS, then the transaction, then shutdown
    session).
    https://en.wikipedia.org/wiki/Xpress_Transport_Protocol

    other trivia: I then did a talk on "Why Internet Isn't Business
    Critical Dataprocessing" based on documentation, processes and
    software I had to
    do for e-commerce, which (IETF RFC editor) Postel sponsored at ISI/USC.

    more trivia: when 1st started doing TCP/IP over high-speed satellite
    links, established dynamic adaptive rate-based pacing implementation
    ... which I also got written into the XTP spec.
    --
    virtualization experience starting Jan1968, online at home since Mar1970
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence D'Oliveiro@ldo@nz.invalid to alt.folklore.computers on Sat Jan 3 03:11:58 2026
    From Newsgroup: alt.folklore.computers

    On Fri, 02 Jan 2026 13:27:04 -1000, Lynn Wheeler wrote:

    XTP had provided for piggy-back transaction processing to keep
    packet exchange overhead to minimum ...

    And HTTP/3 (aka QUIC) works over UDP for a similar reason, doesn't it.
    How does that compare, efficiency-wise?
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Johnny Billquist@bqt@softjar.se to alt.folklore.computers on Sun Jan 4 16:46:41 2026
    From Newsgroup: alt.folklore.computers

    On 2026-01-01 21:25, Lawrence D'Oliveiro wrote:
    The Arpanet started switching over from the old NCP protocol it had
    been using to this new TCP/IP thing on 1st January 1983
    <https://www.tomshardware.com/networking/arpanet-standardized-tcp-ip-on-this-day-in-1983-43-year-old-standard-set-the-foundations-for-todays-internet>.
    The transition took six months to complete.

    The article says:

    In contrast, the open, scalable, and hardware-agnostic TCP/IP
    managed to get a clear run at widespread adoption, and succeeded.
    One could say it won - not by being the best protocol designed to
    connect everything - but by being the only one.

    Why was nobody interested in offering a suitably scalable rival to
    TCP/IP? Perhaps because in those days companies wanted to monetize
    everything. I'm sure there were alternative protocols available -- for
    a price. TCP/IP was the only one whose creators were offering it for
    free -- no NDAs, no patent licensing, nothing.

    DECnet anyone?

    The case against DECnet was partly the concern that it was designed by
    one of the companies competing in the space, even though the DECnet
    specs were fully open and anyone could do their own implementation.

    Second point was that the address space of DECnet was too small.
    Basically just a 16-bit address, compared to the 32 bits in IPv4.
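
    The size gap is easy to put numbers on; a quick sketch (using the
    Phase IV convention, as I understand it, of splitting the 16 bits
    into a 6-bit area and 10-bit node number, with zero values reserved):

```python
# DECnet Phase IV packs addresses as 6-bit area + 10-bit node number,
# so the nominal 16-bit space caps out at 63 areas x 1023 nodes
# (area 0 and node 0 are not usable addresses).
AREA_BITS, NODE_BITS = 6, 10

decnet_max = (2**AREA_BITS - 1) * (2**NODE_BITS - 1)
ipv4_max = 2**32

print(decnet_max)         # 64449 usable DECnet node addresses
print(ipv4_max)           # 4294967296 raw 32-bit IPv4 addresses
print(ipv4_max // 2**16)  # 65536: the IPv4 space is 64Ki times larger
```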

    There are some really cool and nice things in DECnet, but there are also
    some ugly bits in there. Especially some of the application level
    protocols...

    Johnny

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lars Poulsen@lars@beagle-ears.com to alt.folklore.computers on Sun Jan 4 15:53:44 2026
    From Newsgroup: alt.folklore.computers

    On 2026-01-04, Johnny Billquist <bqt@softjar.se> wrote:
    On 2026-01-01 21:25, Lawrence D'Oliveiro wrote:
    The Arpanet started switching over from the old NCP protocol it had
    been using to this new TCP/IP thing on 1st January 1983
    <https://www.tomshardware.com/networking/arpanet-standardized-tcp-ip-on-this-day-in-1983-43-year-old-standard-set-the-foundations-for-todays-internet>.
    The transition took six months to complete.

    The article says:

    In contrast, the open, scalable, and hardware-agnostic TCP/IP
    managed to get a clear run at widespread adoption, and succeeded.
    One could say it won - not by being the best protocol designed to
    connect everything - but by being the only one.

    Why was nobody interested in offering a suitably scalable rival to
    TCP/IP? Perhaps because in those days companies wanted to monetize
    everything. I'm sure there were alternative protocols available -- for
    a price. TCP/IP was the only one whose creators were offering it for
    free -- no NDAs, no patent licensing, nothing.

    DECnet anyone?

    The case against DECnet was partly the concern that it was designed by
    one of the companies competing in the space, even though the DECnet
    specs were fully open and anyone could do their own implementation.

    Second point was that the address space of DECnet was too small.
    Basically just a 16-bit address, compared to the 32 bits in IPv4.

    There are some really cool and nice things in DECnet, but there are
    also some ugly bits in there. Especially some of the application level
    protocols...

    The address space concern was addressed in DECnet Phase V - which
    IIRC was structured with a foundational packet format that matched
    low-level ISO protocols. The larger address space also made it possible
    to tunnel it across the Internet.

    Unfortunately, the structural changes were so large that you could not
    mix it with the earlier generations in the same network, so the adoption
    rate was rather low. Sort of like the IPv4 to IPv6 transition.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Johnny Billquist@bqt@softjar.se to alt.folklore.computers on Sat Jan 10 17:38:20 2026
    From Newsgroup: alt.folklore.computers

    On 2026-01-04 16:53, Lars Poulsen wrote:
    On 2026-01-04, Johnny Billquist <bqt@softjar.se> wrote:
    On 2026-01-01 21:25, Lawrence D'Oliveiro wrote:
    The Arpanet started switching over from the old NCP protocol it had
    been using to this new TCP/IP thing on 1st January 1983
    <https://www.tomshardware.com/networking/arpanet-standardized-tcp-ip-on-this-day-in-1983-43-year-old-standard-set-the-foundations-for-todays-internet>.
    The transition took six months to complete.

    The article says:

    In contrast, the open, scalable, and hardware-agnostic TCP/IP
    managed to get a clear run at widespread adoption, and succeeded.
    One could say it won - not by being the best protocol designed to
    connect everything - but by being the only one.

    Why was nobody interested in offering a suitably scalable rival to
    TCP/IP? Perhaps because in those days companies wanted to monetize
    everything. I'm sure there were alternative protocols available -- for
    a price. TCP/IP was the only one whose creators were offering it for
    free -- no NDAs, no patent licensing, nothing.

    DECnet anyone?

    The case against DECnet was partly the concern that it was designed by
    one of the companies competing in the space, even though the DECnet
    specs were fully open and anyone could do their own implementation.

    Second point was that the address space of DECnet was too small.
    Basically just a 16-bit address, compared to the 32 bits in IPv4.

    There are some really cool and nice things in DECnet, but there are also
    some ugly bits in there. Especially some of the application level
    protocols...

    The address space concern was addressed in DECnet Phase V - which
    IIRC was structured with a foundational packet format that matched
    low-level ISO protocols. The larger address space also made it possible
    to tunnel it across the Internet.

    Unfortunately, the structural changes were so large that you could not
    mix it with the earlier generations in the same network, so the adoption
    rate was rather low. Sort of like the IPv4 to IPv6 transition.

    Well, that isn't exactly true. DECnet phase IV nodes and DECnet phase V
    nodes can talk fine with each other.

    However, phase V was/is a headache in general, not to mention that it
    was way later than TCP/IP v4, so it wasn't even an option at the time.

    Johnny

    --- Synchronet 3.21a-Linux NewsLink 1.2