• Re: ancient PL/I, was fledgling assembler programmer

    From drb@drb@ihatespam.msu.edu (Dennis Boone) to comp.compilers on Fri Mar 24 22:51:32 2023
    From Newsgroup: comp.compilers

    OK, the IBM PL/I (F) compiler, for what many consider a bloated
    language, is designed to run (maybe not well) in 64K.
    At the end of every compilation it reports how much memory was
    used, how much was available, and how much would be needed to keep
    the symbol table in memory.

    It's... 30-some passes, iirc?

    De
    [Well, phases or overlays but yes, IBM was really good at slicing compilers into pieces they could overlay. -John]
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From gah4@gah4@u.washington.edu to comp.compilers on Fri Mar 24 22:44:49 2023
    From Newsgroup: comp.compilers

    On Friday, March 24, 2023 at 9:13:05 PM UTC-7, Dennis Boone wrote:

    (after I wrote)
    OK, the IBM PL/I (F) compiler, for what many consider a bloated
    language, is designed to run (maybe not well) in 64K.
    At the end of every compilation it reports how much memory was
    used, how much was available, and how much would be needed to keep
    the symbol table in memory.

    It's... 30-some passes, iirc?

    [Well, phases or overlays but yes, IBM was really good at slicing compilers into pieces they could overlay. -John]

    It is what IBM calls, I believe, dynamic overlay. Each module
    specifically requests others to be loaded into memory. If there is
    enough memory, they can stay; otherwise they are removed.

    And a few disk files are used, where there is actually a separate
    pass. The only one I know of for certain: if the preprocessor is
    used, it writes a disk file with the preprocessor output.

    And as noted, if it is really short on memory, the symbol table
    goes out to disk.

    Fortran H, on the other hand, uses the overlay system generated
    by the linkage editor. When running on a virtual storage system, it
    is usual to run the compiler through the linkage editor to remove
    the overlay structure. (One of the few linkers that knows how
    to read its own output.) Normally it is about 300K; without
    overlays, closer to 450K.
    [Never heard of dynamic overlays on S/360. -John]
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From gah4@gah4@u.washington.edu to comp.compilers on Sat Mar 25 01:27:18 2023
    From Newsgroup: comp.compilers

    On Saturday, March 25, 2023 at 12:09:30 AM UTC-7, gah4 wrote:

    (snip)

    It is what IBM calls, I believe, dynamic overlay. Each module
    specifically requests others to be loaded into memory. If there is
    enough memory, they can stay; otherwise they are removed.

    Traditional overlays are generated by the linkage editor, and have
    static offsets determined at link time.

    PL/I (F) uses OS/360 LINK, LOAD, and DELETE macros to dynamically
    load and unload modules. The addresses are not static. IBM says:

    "The compiler consists of a number of phases under the supervision
    of compiler control routines. The compiler communicates with the
    control program of the operating system, for input/output and
    other services, through the control routines."

    All described in:

    http://bitsavers.trailing-edge.com/pdf/ibm/360/pli/GY28-6800-5_PL1_F_Program_Logic_Manual_197112.pdf

    They do seem to be called phases, but there are both physical and
    logical phases, where physical phases are what are more commonly
    called phases. There are way more than 100 modules, but I stopped
    counting.

    (snip)
    [Never heard of dynamic overlays on S/360. -John]

    It seems not to actually have a name.
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Hans-Peter Diettrich@DrDiettrich1@netscape.net to comp.compilers on Tue Mar 28 09:21:50 2023
    From Newsgroup: comp.compilers

    On 3/26/23 1:54 AM, George Neuner wrote:
    On Sat, 25 Mar 2023 13:07:57 +0100, Hans-Peter Diettrich <DrDiettrich1@netscape.net> wrote:

    After a look at "open software" I was astonished by the number of
    languages and steps involved in writing portable C code. Also,
    updates of popular programs (Firefox...) are delayed by months on
    some platforms, IMO due to missing manpower on the target systems
    for checks and for adapting "configure". Now I understand why many
    people prefer interpreted languages (Java, JavaScript, Python,
    .NET...) to simplify their software products and their distribution.

    Actually Python is the /only/ one of those that normally is
    interpreted. And the interpreter is so slow the language would be
    unusable were it not for the fact that all of its standard library
    functions and most of its useful extensions are written in C.

    My impression of "interpretation" was aimed at the back-end, where
    tokenized (virtual machine...) code has to be brought to a physical
    machine, with a specific firmware (OS). Then the real back-end has to
    reside on the target machine and OS, fully detached from the preceding
    compiler stages.

    Then, from the compiler writer's viewpoint, it's not sufficient to
    define a new language and a compiler for it; instead it must be
    placed on top of some popular "firmware" like the Java VM, the CLR,
    or the C/C++ standard libraries, or else a dedicated back-end and
    libraries have to be implemented on each supported platform.

    My impression was that the FSF favors C and ./configure for
    "portable" code. That's why I believe some other way is easier for
    implementing really portable software, which requires no extra
    tweaks for each supported target platform, for every single program.
    Can somebody shed some light on the current practice of writing
    portable C/C++ software, or any other compiled language, that
    (hopefully) does not require additional human work before or after
    compilation for a specific target platform?

    DoDi
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From gah4@gah4@u.washington.edu to comp.compilers on Tue Mar 28 14:21:05 2023
    From Newsgroup: comp.compilers

    On Tuesday, March 28, 2023 at 1:14:29 AM UTC-7, Hans-Peter Diettrich wrote:

    (snip)
    Then, from the compiler writer viewpoint, it's not sufficient to define
    a new language and a compiler for it, instead it must be placed on top of
    some popular "firmware" like Java VM, CLR or C/C++ standard libraries,
    or else a dedicated back-end and libraries have to be implemented on
    each supported platform.

    From an announcement posted here today about an ACM-organized conference:


    "We encourage authors to prepare their artifacts for submission
    and make them more portable, reusable and customizable using
    open-source frameworks including Docker, OCCAM, reprozip,
    CodeOcean and CK."

    I hadn't heard of those until I read that announcement, but they do
    sound interesting.
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From arnold@arnold@freefriends.org (Aharon Robbins) to comp.compilers on Tue Mar 28 14:42:18 2023
    From Newsgroup: comp.compilers


    In article <23-03-029@comp.compilers>,
    Hans-Peter Diettrich <DrDiettrich1@netscape.net> wrote:
    My impression was that the FSF favors C and ./configure for "portable"
    code.

    Like many things, this is the result of evolution. Autoconf is well
    over 20 years old, and when it was created the ISO C and POSIX standards
    had not yet spread throughout the Unix/Windows/macOS world. It and the
    rest of the autotools solved a real problem.

    Today, the C and C++ worlds are easier to program in, but things are
    still not perfect, and I don't think I'd want to do without the
    autotools, particularly for the less POSIX-y systems, like MinGW
    and OpenVMS.

    Can somebody shed some light on the current practice of writing portable C/C++ software, or any other compiled language, that (hopefully) does
    not require additional human work before or after compilation for a
    specific target platform?

    Well, take a look at Go. The trend there (as in the Python, Java and
    C# worlds) is to significantly beef up the standard libraries. Go
    has regular expressions, networking, file system, process and all kinds
    of other stuff in its libraries, all things that regular old C and C++ code often has to (or had to) hand-roll. That makes it a lot easier for
    someone to just write the code to get their job done, as well as
    providing for uniformity across both operating systems and applications
    written in Go.

    Go goes one step further, even. Following the Plan 9 example, the
    golang.org Go compilers are also cross compilers. I can build a Linux
    x86_64 executable on my macOS system just by setting some environment
    variables when running 'go build'. Really nice.

    The "go" tool itself also takes over a lot of the manual labor, such
    as downloading libraries from the internet, managing build dependencies
    (no need for "make") and much more. I suspect that that is also a
    trend.

    Does that answer your question?

    Arnold

    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From George Neuner@gneuner2@comcast.net to comp.compilers on Tue Mar 28 17:26:45 2023
    From Newsgroup: comp.compilers

    On Tue, 28 Mar 2023 09:21:50 +0200, Hans-Peter Diettrich <DrDiettrich1@netscape.net> wrote:

    On 3/26/23 1:54 AM, George Neuner wrote:
    On Sat, 25 Mar 2023 13:07:57 +0100, Hans-Peter Diettrich
    <DrDiettrich1@netscape.net> wrote:

    After a look at "open software" I was astonished by the number of
    languages and steps involved in writing portable C code. Also updates of popular programs (Firefox...) are delayed by months on some platforms,
    IMO due to missing manpower on the target systems for checks and the
    adaptation of "configure". Now I understand why many people prefer
    interpreted languages (Java, JavaScript, Python, .NET...) for a
    simplification of their software products and spreading.

    Actually Python is the /only/ one of those that normally is
    interpreted. And the interpreter is so slow the language would be
    unusable were it not for the fact that all of its standard library
    functions and most of its useful extensions are written in C.

    My impression of "interpretation" was aimed at the back-end, where
    tokenized (virtual machine...) code has to be brought to a physical
    machine, with a specific firmware (OS). Then the real back-end has to
    reside on the target machine and OS, fully detached from the preceding compiler stages.

    That is exactly as I meant it.

    Python and Java both initially are compiled to bytecode. But at
    runtime Python bytecode is interpreted: the Python VM examines each
    bytecode instruction, one by one, and executes an associated native
    code subroutine that implements that operation.

    In contrast, at runtime Java bytecode is JIT compiled to equivalent
    native code - which includes calls to native subroutines to implement
    complex operations like "new", etc. The JVM JIT compiles function by
    function as the program executes ... so it takes some time before the
    whole program exists as native code ... but once a whole load module
    has been JIT compiled, the JVM can completely ignore and even unload
    the bytecode from memory.
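    The first half of that can be made visible directly. The snippet
    below (a minimal sketch, not from the original posts) uses Python's
    standard dis module to show the bytecode instructions that the
    CPython VM then interprets one by one:

```python
# Minimal sketch: inspect CPython bytecode with the standard "dis"
# module. At runtime, the VM walks instructions like these one at a
# time, dispatching each to a native-code routine.
import dis

def add_one(x):
    return x + 1

ops = [ins.opname for ins in dis.get_instructions(add_one)]
print(ops)  # includes LOAD_FAST, the instruction that fetches x
```

    Each opname in that list is handled by a C subroutine inside the VM's
    dispatch loop, which is exactly the interpretation overhead the JVM's
    JIT compiler eliminates.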


    Then, from the compiler writer viewpoint, it's not sufficient to define
    a new language and a compiler for it, instead it must be placed on top of
    some popular "firmware" like Java VM, CLR or C/C++ standard libraries,
    or else a dedicated back-end and libraries have to be implemented on
    each supported platform.

    Actually it simplifies the compiler writer's job because the
    instruction set for the platform VM tends not to change much over
    time. A compiler targeting the VM doesn't have to scramble to support
    features of every new CPU - in many cases that can be left to the
    platform's JIT compiler.


    My impression was that the FSF favors C and ./configure for "portable"
    code. That's why I understand that any other way is easier for the implementation of really portable software, that deserves no extra
    tweaks for each supported target platform, for every single program. Can somebody shed some light on the current practice of writing portable
    C/C++ software, or any other compiled language, that (hopefully) does
    not require additional human work before or after compilation for a
    specific target platform?

    Right. When you work on a popular "managed" platform (e.g., JVM or
    CLR), then its JIT compiler and CPU specific libraries gain you any
    CPU specific optimizations that may be available, essentially for
    free.

    OTOH, when you work in C (or other independent language), to gain CPU
    specific optimizations you have to write model specific code and/or
    obtain model specific libraries, you have to maintain different
    versions of your compiled executables (and maybe also your sources),
    and you need to be able to identify the CPU so as to install or use
    model specific code.


    For most developers, targeting a managed platform tends to reduce the
    effort needed to achieve an equivalent result.


    George
    [The usual python implementation interprets bytecodes, but there are
    also versions for .NET, the Java VM, and a JIT compiler. -John]
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Hans-Peter Diettrich@DrDiettrich1@netscape.net to comp.compilers on Fri Mar 31 07:49:46 2023
    From Newsgroup: comp.compilers

    On 3/28/23 4:42 PM, Aharon Robbins wrote:
    In article <23-03-029@comp.compilers>,
    Hans-Peter Diettrich <DrDiettrich1@netscape.net> wrote:
    My impression was that the FSF favors C and ./configure for "portable"
    code.

    Like many things, this is the result of evolution. Autoconf is well
    over 20 years old, and when it was created the ISO C and POSIX standards
    had not yet spread throughout the Unix/Windows/macOS world. It and the
    rest of the autotools solved a real problem.

    About 20 years ago I could not build any open source program on
    Windows. Messages like "Compiler can not build executables" popped
    up when using MinGW or Cygwin. I ended up running ./configure in a
    Linux VM and fixing the resulting compiler errors manually on
    Windows. Without that trick I had no chance of loading the
    "portable" source code into any development environment in readable
    (compilable) form. Often I had the impression that the author did
    not want the program used on Windows machines. Kind of "source open
    for specific OS only" :-(

    DoDi

    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Thomas Koenig@tkoenig@netcologne.de to comp.compilers on Fri Mar 31 05:19:14 2023
    From Newsgroup: comp.compilers

    gah4 <gah4@u.washington.edu> schrieb:

    For systems like Matlab and Octave, and I believe also for Python,
    or one of the many higher math languages, programs should spend
    most of their time in the internal compiled library routines.

    They should, but sometimes they don't.

    If you run into things not covered by compiled libraries, but which
    are compute-intensive, then Matlab and (interpreted) Python run
    as slow as molasses, orders of magnitude slower than compiled code.

    As far as the projects to create compiled versions of Python go,
    one of the problems is that Python is a constantly evolving
    target, which can lead to real problems, especially in long-term
    program maintenance. As Konrad Hinsen reported, results in
    published science papers have changed due to changes in the Python
    infrastructure:

    http://blog.khinsen.net/posts/2017/11/16/a-plea-for-stability-in-the-scipy-ecosystem/

    At the company I work for, I'm told, each Python project uses one
    specified version of Python, which will never be changed for
    fear of incompatibilities - they treat each version as a new
    programming language :-|

    To bring this back a bit towards compilers - a language definition
    is an integral part of compiler writing. If

    - the specification to be implemented is unclear or "whatever
    the reference implementation does"

    - the compiler writers always reserve the right for a better,
    incompatible idea

    - the compiler writers do not pay careful attention to
    existing specifications

    then the resulting compiler will be of poor quality, regardless of
    the cool parsing or code generation techniques that go into it.

    And I know very well that reading and understanding language
    standards is no fun, but I'm told that writing them is even
    less fun.
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From anton@anton@mips.complang.tuwien.ac.at (Anton Ertl) to comp.compilers on Fri Mar 31 16:34:24 2023
    From Newsgroup: comp.compilers

    Hans-Peter Diettrich <DrDiettrich1@netscape.net> writes:
    My impression was that the FSF favors C and ./configure for "portable"
    code. That's why I understand that any other way is easier for the implementation of really portable software, that deserves no extra
    tweaks for each supported target platform, for every single program.

    I have not noticed that the FSF has any preference for C, apart from C
    being the lingua franca in the late 1980s and the 1990s, and arguably
    for certain requirements it still is.

    Now C on Unix has to fight with certain portability issues. In early
    times C programs contained a config.h that the sysadmin installing a
    program had to edit by hand before running make. Then came autoconf,
    which generates configure files that run certain checks on the system
    and fill in config.h for you; and of course, once the mechanism is
    there, stuff in other files is filled in with configure, too.

    It's unclear to me what you mean with "any other way is easier". The
    way of manually editing config.h certainly was not easier for the
    sysadmins. Not sure if it was easier for the maintainer of the
    programs.

    Can somebody shed some light on the current practice of writing portable C/C++ software, or any other compiled language, that (hopefully) does
    not require additional human work before or after compilation for a
    specific target platform?

    There are other tools like Cmake that claim to make autoconf
    unnecessary, but when I looked at it, I did not find it useful for my
    needs (but I forgot why).

    So I'll tell you here some of what autoconf does for Gforth: Gforth is
    a Forth system mostly written in Forth, but using a C substrate. Many
    system differences are dealt with in the C substrate, often with the
    help of autoconf. The configure.ac file describes what autoconf
    should do for Gforth; it has grown to 1886 lines.

    * It determines the CPU architecture and OS where the configure script
    is running, and uses that to configure some architecture-specific
    stuff for Gforth, in particular how to synchronize the data and
    instruction caches; later gcc acquired __builtin___clear_cache() to
    do that, but at least on some platforms that builtin is broken
    <https://gcc.gnu.org/bugzilla/show_bug.cgi?id=93811>.

    * It checks the sizes of the C integer types in order to determine the
    C type for Forth's cell and double-cell types.

    * It uses the OS information to configure things like the newline
    sequence, the directory and path separators.

    * It deals with differences between OSs, such as large (>4GB) file
    support, an issue relevant in the 1990s.

    * It checks for the chcon program, and, if present, uses it to "work
    around SELinux brain damage"; if not present, the brain is probably
    undamaged.

    * It tests which of several ways is accepted by the assembler to skip
    code space (needed for implementing Gforth's dynamic
    superinstructions).

    * It checks for the presence of various programs and library functions
    needed for building Gforth, e.g. mmap() (yes, there used to be
    systems that do not have mmap()). In some cases it works around the
    absence, sometimes with degraded functionality; in other cases it
    just reports the absence, so the sysadmin knows what to install.

    That's just some of the things I see in configure.ac; there are many
    bits and pieces that are too involved and/or too minor to report here.
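    As a concrete illustration, a few of the checks listed above would
    look roughly like this in a configure.ac. This is a hypothetical
    fragment, not Gforth's actual one; the macros are real autoconf
    macros, but the project name and the selection are made up:

```
AC_INIT([example], [1.0])
AC_PROG_CC
AC_CONFIG_HEADERS([config.h])
AC_CHECK_SIZEOF([long])      dnl defines SIZEOF_LONG in config.h
AC_CHECK_FUNCS([mmap])       dnl defines HAVE_MMAP if mmap() is present
AC_CHECK_PROG([CHCON], [chcon], [yes], [no])
AC_OUTPUT
```

    autoconf expands this into a portable shell script (configure) that
    runs the checks on the build machine and fills in config.h.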

    Our portability stuff does not catch everything. E.g., MacOS on Apple
    Silicon has a broken mmap() (broken as far as Gforth is concerned;
    looking at POSIX, it's compliant with that, but that does not justify
    this breakage; MacOS on Intel works fine, as does Linux on Apple
    Silicon), an issue that's new to us; I have not yet devised a
    workaround for that, but when I do, a part of the solution may use
    autoconf.

    Now when you write Forth code in Gforth, it tends to be quite portable
    across platforms (despite Forth being a low-level language where, if
    you want to see them, it's easy to see differences between 32-bit and
    64-bit systems, and between different byte orders). One reason for
    that is that Gforth papers over system differences (with the help of
    autoconf among other things); another reason is that Gforth does not
    expose many of the things where the systems are different, at least
    not at the Forth level. You can use the C interface and then access
    all the things that C gives access to, many of which are
    system-specific, and for which tools like autoconf exist.

    The story is probably similar for other languages.

    - anton
    --
    M. Anton Ertl
    anton@mips.complang.tuwien.ac.at
    http://www.complang.tuwien.ac.at/anton/
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From gah4@gah4@u.washington.edu to comp.compilers on Fri Mar 31 12:41:32 2023
    From Newsgroup: comp.compilers

    On Friday, March 31, 2023 at 4:42:14 AM UTC-7, Thomas Koenig wrote:
    gah4 <ga...@u.washington.edu> schrieb:
    For systems like Matlab and Octave, and I believe also for Python,
    or one of the many higher math languages, programs should spend
    most of their time in the internal compiled library routines.

    They should, but sometimes they don't.

    If you run into things not covered by compiled libraries, but which
    are compute-intensive, then Matlab and (interpreted) Python run
    as slow as molasses, orders of magnitude slower than compiled code.

    But then there is dynamic linking.

    I have done it in R, but I believe it also works for Matlab and
    Python, and is the way many packages are implemented. You write a
    small C or Fortran program that does the slow part, and call it from interpreted code.
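    That pattern can be sketched with Python's ctypes module. Instead
    of a custom C or Fortran routine, this purely illustrative example
    dynamically loads the system C math library and calls its compiled
    sqrt(); a real package would load its own compiled routine the
    same way:

```python
# Sketch of "write the slow part in C, call it from interpreted
# code", using ctypes as the dynamic-linking mechanism.
import ctypes
import ctypes.util

libm = ctypes.CDLL(ctypes.util.find_library("m"))  # load shared library
libm.sqrt.restype = ctypes.c_double     # declare the C return type
libm.sqrt.argtypes = [ctypes.c_double]  # declare the C argument types

print(libm.sqrt(9.0))  # runs the compiled C routine, not interpreted code
```

    R's .C()/.Call() and Matlab's MEX files are the analogous mechanisms
    there.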

    And back to my favorite x86 assembler program:

    rdtsc: rdtsc
    ret

    which allows high resolution timing, to find where the program
    is spending too much time. Some years ago, I did this on a program
    written by someone else, so I mostly didn't know the structure.
    Track down which subroutines used too much time, and fix
    just those.

    In that case, one big time sink was building up a large matrix one
    row or one column at a time, which requires a new allocation and a
    copy each time. Preallocating to the final (if known) size fixes that.
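    In plain Python the same idea looks like this. This is a toy sketch:
    Python lists amortize appends, but in Matlab/R-style matrices each
    growth step really does reallocate and copy the whole matrix:

```python
# Toy sketch of grow-vs-preallocate. In a matrix language, each
# append below would allocate a new, larger matrix and copy the old
# contents into it, making the loop quadratic.
n = 4

grown = []                     # grow one row at a time
for i in range(n):
    grown.append([i] * n)

prealloc = [[0] * n for _ in range(n)]   # allocate final size up front
for i in range(n):
    for j in range(n):
        prealloc[i][j] = i     # fill in place, no reallocation

print(grown == prealloc)  # True: same result, one allocation instead of n
```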

    But then there were some very simple operations that, as you note,
    are not included and slow. Small C programs fixed those.
    There are complications for memory allocation, which I avoid
    by writing mine to assume (require) that all is already allocated.

    (snip)

    At the company I work for, I'm told, each Python project uses one
    specified version of Python, which will never be changed for
    fear of incompatibilities - they treat each version as a new
    programming language :-|

    To bring this back a bit towards compilers - a language definition
    is an integral part of compiler writing. If

    I have heard about that one.

    It seems that there are non-backward compatible changes
    from Python 2.x to 3.x. That is, they pretty much are different
    languages.
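    A well-known example of such a change (my illustration, not from
    the post above): the meaning of the / operator on integers changed
    between the two languages.

```python
# In Python 2, 7 / 2 on ints was floor division and gave 3; Python 3
# redefined / as true division. Code relying on the old meaning
# silently computes different numbers under Python 3.
print(7 / 2)    # Python 3: 3.5  (Python 2 printed 3)
print(7 // 2)   # // is floor division in both languages: 3
```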

    Tradition, when updating a language standard, is to maintain
    backward compatibility as much as possible. It isn't always 100%,
    but often close enough. You can run a Fortran 66 program on new
    Fortran 2018 compilers without all that much trouble. (Much of the
    actual trouble comes from extensions used by the old programs.)
    [Python's rapid development cycle definitely has its drawbacks. Python 3
    is not backward compatible with python 2 (that's why they bumped the major version number) and they ended support for python 2 way too soon. -John]
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From anton@anton@mips.complang.tuwien.ac.at (Anton Ertl) to comp.compilers on Sun Apr 2 10:04:31 2023
    From Newsgroup: comp.compilers

    Hans-Peter Diettrich <DrDiettrich1@netscape.net> writes:
    Often I had
    the impression that the author wanted the program not for use on Windows machines. Kind of "source open for specific OS only" :-(

    Whatever we want, it's also a question of what the OS vendor wants.

    For a Unix, there were a few hoops we had to jump through to make
    Gforth work: e.g., IRIX 6.5 had a bug in sigaltstack, so we put in a
    workaround for that; HP/UX's make dealt with files with the same mtime differently from other makes, so we put in a workaround for that.
    Windows, even with Cygwin, puts up many more hoops to jump through;
    Bernd Paysan actually jumped through them for Gforth, but a Windows
    build is still quite a bit of work, so he does that only occasionally.

    It's no surprise to me that other developers don't jump through these
    hoops; maybe if someone paid them for it, but why should they do it
    on their own time?

    As a recent example of another OS, Apple has intentionally reduced
    the functionality of mmap() on MacOS on Apple Silicon compared to
    MacOS on Intel. As a result, the development version of Gforth does
    not work on MacOS on Apple Silicon (it works fine on Linux on Apple
    Silicon).
    I spent a day last summer on the MacOS laptop of a friend (an
    extremely unpleasant experience) trying to find the problem and fix
    it, and I found the problem, but time ran out before I had a working
    fix (it did not help that I had to spend a lot of time on working
    around things that I missed in MacOS). Since then this problem has
    not reached the top of my ToDo list; and when it does, I will go for
    the minimal fix, with the result that Gforth on MacOS will run without
    dynamic native-code generation, i.e., slower than on Linux.

    - anton
    --
    M. Anton Ertl
    anton@mips.complang.tuwien.ac.at
    http://www.complang.tuwien.ac.at/anton/
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Hans-Peter Diettrich@DrDiettrich1@netscape.net to comp.compilers on Wed Apr 5 11:23:39 2023
    From Newsgroup: comp.compilers

    On 4/2/23 12:04 PM, Anton Ertl wrote:

    For a Unix, there were a few hoops we had to jump through to make
    Gforth work: e.g., IRIX 6.5 had a bug in sigaltstack, so we put in a workaround for that; HP/UX's make dealt with files with the same mtime differently from other makes, so we put in a workaround for that.
    Windows, even with Cygwin, puts up many more hoops to jump through;
    Bernd Paysan actually jumped through them for Gforth, but a Windows
    build is still quite a bit of work, so he does that only occasionally.

    Too bad that not all existing OSes are POSIX compatible? ;-)

    So my impression still is: have a language (plus library) and an
    interpreter (VM, browser, compiler...) on each target system. Then
    adaptations to a target system have to be made only once, for each
    target, not for every single program.

    Even for programs with extreme speed requirements, development can
    be done from the general implementation, for tests etc., with a
    version tweaked for a very specific target system added later,
    instead of writing the single-target version in the first place and
    then doing problematic ports to many other platforms.

    Of course it's up to the software developer or principal to order
    or build software for a (more or less) specific target system only,
    or software that is primarily platform-independent.

    (G)FORTH IMO is a special case because it's (also) a development
    system. Building (bootstrapping) a new FORTH system written in
    FORTH is quite complicated, in contrast to languages with
    stand-alone tools like compiler, linker, etc. Some newer
    (umbilical?) FORTH versions also compile to native code.

    DoDi
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From anton@anton@mips.complang.tuwien.ac.at (Anton Ertl) to comp.compilers on Wed Apr 5 16:30:31 2023
    From Newsgroup: comp.compilers

    Hans-Peter Diettrich <DrDiettrich1@netscape.net> writes:
    On 4/2/23 12:04 PM, Anton Ertl wrote:

    For a Unix, there were a few hoops we had to jump through to make
    Gforth work: e.g., IRIX 6.5 had a bug in sigaltstack, so we put in a
    workaround for that; HP/UX's make dealt with files with the same mtime
    differently from other makes, so we put in a workaround for that.
    Windows, even with Cygwin, puts up many more hoops to jump through;
    Bernd Paysan actually jumped through them for Gforth, but a Windows
    build is still quite a bit of work, so he does that only occasionally.

    Too bad that not all existing OSes are POSIX compatible? ;-)

    Like many standards, POSIX is a subset of the functionality that
    programs use. Windows NT used to have a POSIX subsystem in order to
    make WNT comply with FIPS 151-2 needed to make WNT eligible for
    certain USA government purchases. From what I read, it was useful for
    that, but not much else.

    So my impression still is: have a language (plus library) and an
    interpreter (VM, browser, compiler...) on each target system. Then adaptations to a target system have to be made only once, for each
    target, not for every single program.

    You mean: Write your program in Java, Python, Gforth, or the like?
    Sure, they deal with compatibility problems for you, but you may want
    to do things (or have performance) that they do not offer, or only
    offer through a C interface (and in the latter case you run into the
    C-level compatibility again).

    Even for programs with extreme speed requirements the development can be
    done from the general implementation, for tests etc., and a version
    tweaked for a very specific target system, instead of the single target version in the first place and problematic ports to many other platforms.

    Well, if you go that route, the result can easily be that your program
    does not run on Windows. Especially for GNU programs: The primary
    goal is that they run on GNU. Any effort spent on a Windows port is
    extra effort that not everybody has time for.

    (G)FORTH IMO is a special case because it's (also) a development system. Building (bootstrapping) a new FORTH system written in FORTH is quite complicated, in contrast to languages with stand alone tools like
    compiler, linker etc.

    Not really. Most self-respecting languages have their compiler(s)
    implemented in the language itself, resulting in having to bootstrap.
    AFAIK the problem Gforth has with Windows is not the bootstrapping;
    packaging and installation are different than for Unix.

    - anton
    --
    M. Anton Ertl
    anton@mips.complang.tuwien.ac.at
    http://www.complang.tuwien.ac.at/anton/
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Hans-Peter Diettrich@DrDiettrich1@netscape.net to comp.compilers on Fri Apr 7 15:35:32 2023
    From Newsgroup: comp.compilers

    On 4/5/23 6:30 PM, Anton Ertl wrote:
    Hans-Peter Diettrich <DrDiettrich1@netscape.net> writes:

    You mean: Write your program in Java, Python, Gforth, or the like?
    Sure, they deal with compatibility problems for you, but you may want
    to do things (or have performance) that they do not offer, or only
    offer through a C interface (and in the latter case you run into the
    C-level compatibility again).

    Except the library also is portable ;-)

    Else you end up with:
    Program runs only on systems with libraries X, Y, Z installed.


    (G)FORTH IMO is a special case because it's (also) a development system.
    Building (bootstrapping) a new FORTH system written in FORTH is quite
    complicated, in contrast to languages with stand alone tools like
    compiler, linker etc.

    Not really. Most self-respecting languages have their compiler(s) implemented in the language itself, resulting in having to bootstrap.

    The FORTH compiler also is part of the current monolithic framework.
    Replacing a WORD has immediate impact on the running compiler and
    everything else. A bug can make the current system crash immediately,
    without diagnostics. Otherwise the current WORDs cannot be replaced
    immediately, only after a full compilation, and only by code that
    depends on neither the old nor the new framework.


    AFAIK the problem Gforth has with Windows is not the bootstrapping;
    packaging and installation are different than for Unix.

    Isn't that the same problem with every language?

    DoDi
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Thomas Koenig@tkoenig@netcologne.de to comp.compilers on Sat Apr 8 18:25:06 2023
    From Newsgroup: comp.compilers

    Anton Ertl <anton@mips.complang.tuwien.ac.at> schrieb:
    Most self-respecting languages have their compiler(s)
    implemented in the language itself, resulting in having to bootstrap.

    This is a bit complicated for GCC and LLVM.

    For both, the middle end (and back end) is implemented in C++,
    so a C++ interface at class level is required, and that is a
    bit daunting.

    Examples: Gnat (GCC's Ada front end) is written in Ada, and its
    Modula-2 front end is written in Modula-2. On the other hand,
    the Fortran front end is written in C++ (well, mostly C with
    C++ features hidden behind macros).

    The very first Fortran compiler, of course, was written in
    assembler.
    [It was, but Fortran H, the 1960s optimizing compiler for S/360 was
    written in Fortran with a few data structure extensions. -John]
    --- Synchronet 3.21b-Linux NewsLink 1.2