• Re: (bash) How (really!) does the "current job" get determined?

    From Kenny McCormack@21:1/5 to naddy@mips.inka.de on Sat Oct 5 15:02:45 2024
    In article <slrnvg0s5f.1qk1.naddy@lorvorc.mips.inka.de>,
    Christian Weisgerber <naddy@mips.inka.de> wrote:
    ...
    Job control does not require an interactive shell or a terminal
    session. It can be used in scripting. That's the facts.

    True. But as they say, there are none so blind as those that will not see.

    I'm curious myself. That said, here's something I stumbled across
    recently:

    background job &
    ...
    kill %1 # clean up

    What happens if the background job has already terminated on its
    own accord before we reach the kill(1)? Not much, because with job
    control, the shell knows that no such job exists. If you do this
    with "kill $!", you signal that PID, which no longer refers to the
    intended process and may in fact have been reused for a different
    process.

    The problem of re-used pids is something people frequently worry about, but which is (for all practical purposes) never seen in real life. For one
    thing, even in the old days of 15 bit pids, it is still basically
    impossible for it to cycle all the way through in any sort of reasonable
    time frame. Nowadays, we have 22 bit pids, so it is even less likely (*).

    Some other notes about this:
    1) As far as I know, all "normal" Unixes use the simple cycle method of
    allocating pids - i.e., just keep going up by 1 until you reach the max,
    then start over again at 1 (or 2). But I think at one point, it was
    thought that having "predictable" pids was somehow bad for "security",
    so they had a random assignment method.
    2) Other non-Unix, but Unix-like, environments, such as Windows, treat
    pids differently. I think Windows aggressively re-uses them, so one
    probably needs to be more careful there than in regular Unix/Linux.
    3) As I said, this is more of a problem in theory than in practice, but
    the pidfd*() functions were inspired by a perceived need to be able to
    be sure.

    (*) Actually this kinda begs the question, though: why 22 bits? Why not all
    32? Or 64? Incidentally, there are comments in the kernel to the effect of
    "22 bits has to be enough; 4 million pids should be enough for anyone"
    (just like 640K, I suppose...)
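
    For reference, on Linux the ceiling is a runtime tunable, so you can check
    (and, as root, raise) it directly; whether the full 22-bit range is actually
    configured depends on the distro/init defaults:

    $ cat /proc/sys/kernel/pid_max            # e.g. 32768 classically; 4194304 (= 2^22) where it has been raised
    $ sudo sysctl -w kernel.pid_max=4194304   # bump it to the kernel maximum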

    --
    Kenny, I'll ask you to stop using quotes of mine as taglines.

    - Rick C Hodgin -

  • From Christian Weisgerber@21:1/5 to Kenny McCormack on Sat Oct 5 15:38:42 2024
    On 2024-10-05, Kenny McCormack <gazelle@shell.xmission.com> wrote:

    The problem of re-used pids is something people frequently worry about, but which is (for all practical purposes) never seen in real life. For one thing, even in the old days of 15 bit pids, it is still basically
    impossible for it to cycle all the way through in any sort of reasonable
    time frame. Nowadays, we have 22 bit pids, so it is even less likely (*).

    "We" do? Offhand, I don't know the size of pid_t, much less how
    much of its numerical range is actually used. There are trivial
    concerns, such as how many columns PIDs take up in the output of
    ps(1).

    1) As far as I know, all "normal" Unixes use the simple cycle method of
    allocating pids - i.e., just keep going up by 1 until you reach the max,
    then start over again at 1 (or 2). But I think at one point, it was
    thought that having "predictable" pids was somehow bad for "security",
    so they had a random assignment method.

    I thought random assignment of PIDs was standard by now.
    Okay, on FreeBSD it isn't but can be enabled.

    --
    Christian "naddy" Weisgerber naddy@mips.inka.de

  • From Kaz Kylheku@21:1/5 to Kenny McCormack on Sat Oct 5 23:41:35 2024
    On 2024-10-05, Kenny McCormack <gazelle@shell.xmission.com> wrote:
    In article <slrnvg0s5f.1qk1.naddy@lorvorc.mips.inka.de>,
    Christian Weisgerber <naddy@mips.inka.de> wrote:
    ...
    Job control does not require an interactive shell or a terminal
    session. It can be used in scripting. That's the facts.

    True. But as they say, there are none so blind as those that will not see.

    "They" being mainly billionaire televangelists.

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca

  • From Janis Papanagnou@21:1/5 to Christian Weisgerber on Mon Oct 7 04:50:31 2024
    On 05.10.2024 00:48, Christian Weisgerber wrote:
    On 2024-10-04, Janis Papanagnou <janis_papanagnou+ng@hotmail.com> wrote:

    What? Scripting should not go anywhere near POSIX job control, which is
    an interactive feature that requires a terminal session.

    Well, there _is_ set -m.

    And how will that devaluate what Kaz has said? Please elaborate.

    Job control does not require an interactive shell or a terminal
    session. It can be used in scripting. That's the facts.

    Yes, but for one thing that doesn't explain why you emphasized 'set -m',
    and your example below - certainly reasonable for discussion! - I
    don't find convincing. In contrast to '$!', which you get and can work
    with, there's no (no easy?) way to obtain the job number that the
    shell assigns! And (concerning your question below) you always have
    'wait' available, for both PIDs and job numbers (at least
    in Kornshell; I don't know about Bash or what POSIX says about it).
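
    For example (ksh, and as far as I can see bash behaves the same at an
    interactive prompt):

    sleep 100 &
    wait %1     # wait by job number; 'wait $!' (by PID) works just as well here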

    Janis


    or if you know of any useful and sensible application contexts
    for non-interactive usages I'd certainly be curious to know.[*]

    I'm curious myself. That said, here's something I stumbled across
    recently:

    background job &
    ...
    kill %1 # clean up

    What happens if the background job has already terminated on its
    own accord before we reach the kill(1)? Not much, because with job
    control, the shell knows that no such job exists. If you do this
    with "kill $!", you signal that PID, which no longer refers to the
    intended process and may in fact have been reused for a different
    process.


  • From Kenny McCormack@21:1/5 to janis_papanagnou+ng@hotmail.com on Mon Oct 7 11:48:06 2024
    In article <vdvi9o$1imug$1@dont-email.me>,
    Janis Papanagnou <janis_papanagnou+ng@hotmail.com> wrote:
    ...
    In contrast to '$!', which you get and can work
    with, there's no (no easy?) way to obtain the job number that the
    shell assigns!

    I showed a method in an earlier post; it consists of piping the output of
    "jobs -l" into an AWK script (that matches on $!). It isn't pretty, but it works.

    And (concerning your question below) you always have
    'wait' available, for both PIDs and job numbers (at least
    in Kornshell; I don't know about Bash or what POSIX says about it).

    What annoys me is that (in bash), most, but not all, of the job control
    related commands take either a pid or a job number. To be clear, what
    annoys me is that they don't *all* do so. In particular, "fg" only takes a
    job number. "disown" takes either, which is a very good thing. Wish they
    all did.

    --
    If Jeb is Charlie Brown kicking a football-pulled-away, Mitt is a '50s housewife with a black eye who insists to her friends the roast wasn't
    dry.

  • From Janis Papanagnou@21:1/5 to Kenny McCormack on Mon Oct 7 14:54:54 2024
    On 07.10.2024 13:48, Kenny McCormack wrote:

    What annoys me is that (in bash), most, but not all, of the job control related commands take either a pid or a job number. To be clear, what
    annoys me is that they don't *all* do. In particular, "fg" only takes a
    job number. "disown" takes either, which is a very good thing. Wish they all did.

    I think the purpose of the shell's job control is to make job handling
    simpler than using PIDs (even though PIDs are also displayed when
    a background job gets started). But, yes, a consistent interface
    accepting both would be a good thing [for all the shell's job control
    commands]. Incidentally Bolsky/Korn notes: "When a command in this
    section [Job Control] takes an argument called /job/, /job/ can be
    a process id." - I don't know about Bash, but Kornshell at least
    seems to have done it right.
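
    Taking the book at its word (I have not verified this against a current
    ksh93), something like the following should therefore be legal in ksh:

    sleep 100 &
    kill -s STOP $!   # stop the background job
    fg $!             # a bare process id where a /job/ is expected
                      # (bash would insist on:  fg %1)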

    Janis

  • From Helmut Waitzmann@21:1/5 to All on Mon Oct 7 19:26:22 2024
    Christian Weisgerber <naddy@mips.inka.de>:

    That said, here's something I stumbled across recently:


    background job &
    ...
    kill %1 # clean up

    What happens if the background job has already terminated on its
    own accord before we reach the kill(1)? Not much, because with job
    control, the shell knows that no such job exists. If you do this
    with "kill $!", you signal that PID, which no longer refers to the
    intended process and may in fact have been reused for a different
    process.


    In order for the pid "$!" to have been reused for a different
    process, the shell would have needed to call "wait()" (or
    "waitpid()") beforehand.  (Otherwise the terminated process would
    remain a zombie, i.e. an unwaited-for process.)  Does the shell even
    call "wait()" or "waitpid()" if given the "set" option "+b"?

  • From Richard Harnden@21:1/5 to Christian Weisgerber on Mon Oct 7 23:32:58 2024
    On 04/10/2024 23:48, Christian Weisgerber wrote:

    I'm curious myself. That said, here's something I stumbled across
    recently:

    background job &
    ...
    kill %1 # clean up

    What happens if the background job has already terminated on its
    own accord before we reach the kill(1)? Not much, because with job
    control, the shell knows that no such job exists. If you do this
    with "kill $!", you signal that PID, which no longer refers to the
    intended process and may in fact have been reused for a different
    process.


    There would have to be a very long time in your '...' part for a pid to
    get reused. I guess you could ps | grep and check the command name is
    what you expect.

    You can 'kill -0 <pid>' (or %<job>) for 'could I signal <pid>', rather
    than actually sending a signal - it returns non-zero if it can't.
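
    Something like this, say (the -0 check is only advisory: it also fails with
    "operation not permitted" for a live process you don't own, and in principle
    the pid could still be reused between the check and the kill):

    some_job &           # whatever the real background job is
    ...
    if kill -0 "$!" 2>/dev/null; then
        kill "$!"
    else
        echo "$! is already gone (or not ours to signal)"
    fi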

    If the job finishes before you wait, then the process is gone, i.e. not
    zombied, so ksh/bash must keep track of the wait status. The job is also
    gone, so only 'wait <pid>' will get you the correct exit status.

  • From Kaz Kylheku@21:1/5 to Christian Weisgerber on Tue Oct 8 17:37:20 2024
    On 2024-10-04, Christian Weisgerber <naddy@mips.inka.de> wrote:
    What happens if the background job has already terminated on its
    own accord before we reach the kill(1)? Not much, because with job
    control, the shell knows that no such job exists.

    In Unix, when a child process terminates, it does not go away. The parent process has to call one of the wait functions like waitpid in order to "reap" that process. It can be notified of children in this state asynchronously via the SIGCHLD signal.

    The problem of PIDs suddenly disappearing and being recycled behind the parent process' back does not exist in the operating system.

    We can imagine a shell which does nothing when a child coprocess launched with & terminates spontaneously, so that the script /must/ use the wait command.

    In that shell, the process ID of that child will remain reliably available until that wait.

    Only if the shell reaps terminated coprocesses behind the script's back, so to speak, do you have the reuse problem.

    What does POSIX say? Something between those two alternatives:

    When an element of an asynchronous list (the portion of the list ended
    by an <ampersand>, such as command1, above) is started by the shell, the
    process ID of the last command in the asynchronous list element shall
    become known in the current shell execution environment; see Shell
    Execution Environment. This process ID shall remain known until:

    * The command terminates and the application waits for the process ID.

    * Another asynchronous list is invoked before "$!" (corresponding to
      the previous asynchronous list) is expanded in the current execution
      environment.

    The implementation need not retain more than the {CHILD_MAX} most
    recent entries in its list of known process IDs in the current shell
    execution environment.

    It seems as if what POSIX is saying is that scripts which fire off
    asynchronous jobs one after another receive automatic cleanup.
    A script which does not refer to the $! variable from one ampersand
    job, before firing off more ampersand jobs, will not clog the system
    with uncollected zombie processes. But a script which does reference $!
    after launching an ampersand job (before launching another one) will not
    have that process cleaned up behind its back: it takes on the
    responsibility for doing the wait which recycles the PID.

    Anyway, that's what I'd like to believe that the quoted passage means.
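
    Concretely, under that reading (the command names here are placeholders):

    job_a &          # $! becomes known in this shell
    pid_a=$!         # we referenced $!, so the shell must keep this entry for us
    job_b &          # another async list; had $! not been referenced, the shell
                     # would have been free to forget (and reap) job_a by now
    wait "$pid_a"    # our responsibility: this wait is what finally lets the
                     # PID be recycled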

    If you do this
    with "kill $!", you signal that PID, which no longer refers to the
    intended process and may in fact have been reused for a different
    process.

    At the system call level, that's not what kill means. It means to
    pass a certain signal (fatal or not, catchable or not) to the process.
    Even if the signal is uncatchable and fatal, kill does not mean
    "make the target process disappear, so that its PID may be reused".

    The waitpid system call will do that (in the situation when the process
    is a zombie, and so subsequently the returned status indicates that
    it exited, and with what exit code or on what signal).

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca

  • From Kenny McCormack@21:1/5 to All on Thu Oct 3 23:08:52 2024
    Note: This is a "How do things really work - in the real world?", rather
    than a "What does the manual say about how things work?" sort of thread.

    The manual says the answer is "The job most recently started in the
    background or most recently stopped." This is not always the case.

    Observe (this is bash version 5.2.15) (and "j" is aliased to "jobs -l"):

    $ j
    [1]+ 20914 Stopped (signal) sudo bash
    $ sleep 100 & j
    [2] 12914
    [1]+ 20914 Stopped (signal) sudo bash
    [2]- 12914 Running sleep 100 &
    $ fg
    sudo bash
    # suspend

    [1]+ Stopped sudo bash
    Status: 147
    $ %2

    Note that I start with one background job (the "sudo"). I launch a second
    one, but, according to the "jobs" listing, job #1 is still the "current"
    job (denoted with the "+"). Further, when I do "fg", I get back to job #1.

    Two comments:
    1) You generally would like it to work the way it's supposed to, since
    you generally want to manipulate the most recent job (the sleep in the
    above example, not the sudo). Getting the job id from the pid ($!) is
    possible and is my chosen workaround, but it is not trivial.
    2) I could not find, in "man bash", any mention of what exactly the output
    of the jobs command means; i.e., what the + and - mean.

    Also note: I googled this question and found something on unix.stackexchange.
    There is a post there by our own "Stephan Chaz...", which basically just
    quotes the manual. As I said, that info is incorrect (as seen above).

    --
    That's the Trump playbook. Every action by Trump or his supporters can
    be categorized as one (or more) of:

    outrageous, incompetent, or mentally ill.

  • From Kaz Kylheku@21:1/5 to Kenny McCormack on Fri Oct 4 02:40:06 2024
    On 2024-10-03, Kenny McCormack <gazelle@shell.xmission.com> wrote:
    Note: This is a "How do things really work - in the real world?", rather
    than a "What does the manual say about how things work?" sort of thread.

    The manual says the answer is "The job most recently started in the background or most recently stopped." This is not always the case.

    Observe (this is bash version 5.2.15) (and "j" is aliased to "jobs -l"):

    It looks buggered.

    $ j
    [1]+ 20914 Stopped (signal) sudo bash

    This is now most recently stopped.

    $ sleep 100 & j

    This is now most recently started in the background, therefore the documentation specifies that it is now the current job.

    It must be that Bash has no test cases covering the documented
    requirements in this area adequate enough to catch what you have found.

    Is this automatically tested at all?

    Testing interactive job control features pretty much requires Bash to be
    behind a pseudo-tty; driven by expect or something like it.

    (Or else at least a unit test is required where the function that
    identifies the current job is tested in isolation, with the various
    conditions mocked up: suspended job introduced while existing job is
    stopped, etc.)

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca

  • From Kaz Kylheku@21:1/5 to Kaz Kylheku on Thu Oct 10 19:15:08 2024
    On 2024-10-05, Kaz Kylheku <643-408-1753@kylheku.com> wrote:
    On 2024-10-04, Christian Weisgerber <naddy@mips.inka.de> wrote:
    On 2024-10-04, Janis Papanagnou <janis_papanagnou+ng@hotmail.com> wrote:

    What? Scripting should not go anywhere near POSIX job control, which is
    an interactive feature that requires a terminal session.

    Well, there _is_ set -m.

    And how will that devaluate what Kaz has said? Please elaborate.

    Job control does not require an interactive shell or a terminal
    session.

    It can be used in scripting. That's the facts.

    An example of what you mean would help.

    *crickets*

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca

  • From Kenny McCormack@21:1/5 to 643-408-1753@kylheku.com on Fri Oct 4 13:13:59 2024
    In article <20241003170607.397@kylheku.com>,
    Kaz Kylheku <643-408-1753@kylheku.com> wrote:
    On 2024-10-03, Kenny McCormack <gazelle@shell.xmission.com> wrote:
    Note: This is a "How do things really work - in the real world?", rather
    than a "What does the manual say about how things work?" sort of thread.

    The manual says the answer is "The job most recently started in the
    background or most recently stopped." This is not always the case.

    Observe (this is bash version 5.2.15) (and "j" is aliased to "jobs -l"):

    It looks buggered.

    Indeed it does. What this means from a scripting programmer's
    point-of-view is that you can't count on it. You can't rely on the job you just launched being the "current job". Thus, you have to convert $! into a
    job id, via something like:

    jobs -l | awk $!' == $2 { print substr($0,2)+0 }'

    From the bash developers' point of view, the question becomes: what specific
    set of circumstances triggers this?

    Note also that the underlying problem here is that most of the "job
    related" commands that take a "job spec" will accept either something like
    %1 or an actual pid, but the "fg" command only takes %n. So, if you want to
    fg the most recent job, you need to obtain the job id (via the command line
    above) before passing it to "fg". Note that "fg" with no arg at all would
    fg the wrong job.
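
    Packaged up, the workaround looks roughly like this (bash; it assumes the
    usual "jobs -l" layout, and that "jobs" run inside $( ) still reports this
    shell's job table, which bash appears to do):

    sleep 100 &       # stand-in for the real background job
    pid=$!
    jobnum=$(jobs -l | awk -v p="$pid" '$2 == p { print substr($1, 2) + 0 }')
    [ -n "$jobnum" ] && fg "%$jobnum"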

    It must be that Bash has no test cases covering the documented
    requirements in this area adequate enough to catch what you have found.

    Is this automatically tested at all?

    Testing interactive job control features pretty much requires Bash to be
    behind a pseudo-tty; driven by expect or something like it.

    Indeed. Good point.

    (Or else at least a unit test is required where the function that
    identifies the current job is tested in isolation, with the various
    conditions mocked up: suspended job introduced while existing job is
    stopped, etc.)

    Yes.

    --
    Trump could say he invented gravity, and 40% of the country would believe him...
    This is where we are at, ladies and gentlemen.

  • From Kaz Kylheku@21:1/5 to Kenny McCormack on Fri Oct 4 14:29:39 2024
    On 2024-10-04, Kenny McCormack <gazelle@shell.xmission.com> wrote:
    In article <20241003170607.397@kylheku.com>,
    Kaz Kylheku <643-408-1753@kylheku.com> wrote:
    On 2024-10-03, Kenny McCormack <gazelle@shell.xmission.com> wrote:
    Note: This is a "How do things really work - in the real world?", rather
    than a "What does the manual say about how things work?" sort of thread.
    The manual says the answer is "The job most recently started in the
    background or most recently stopped." This is not always the case.

    Observe (this is bash version 5.2.15) (and "j" is aliased to "jobs -l"):

    It looks buggered.

    Indeed it does. What this means from a scripting programmer's
    point-of-view is that you can't count on it.

    What? Scripting should not go anywhere near POSIX job control, which is
    an interactive feature that requires a terminal session.

    Unless you mean scripting that is peripheral to an interactive session,
    for automating some interactive job control use cases? Like making
    a more friendly job control system, or whatever?

    If you'd like to build some kind of layer over job control, it looks as
    if indeed you cannot rely on job control's implicit selection of the
    most recent process. If you want your job control layer to have that,
    you need your own global variable for it, and always pass down a %n
    argument to the job control cruft below you.

    Even if the current-job variable worked reliably as documented, it would
    be unreliable to you, because any background job can become the current job
    at any time, asynchronously to you, due to being suddenly stopped on a
    signal.

    You can't rely on the job you
    just launched being the "current job".

    This is what I'm saying: even if it works as documented, there is a race
    condition. A fraction of a second after you launch the job, some
    existing, executing background job tries to do TTY input and is stopped.
    Oops! It is now the "current job".
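
    You can provoke it at an interactive prompt (a rough illustration; the
    sleeps are only there to sequence things):

    { sleep 2; read -r line; } &   # job 1: in 2 seconds it tries to read the
                                   # terminal, gets SIGTTIN, and stops
    sleep 100 &                    # job 2: right now, this is the current job
    sleep 3 ; jobs                 # by now job 1 has stopped, and it is the one
                                   # wearing the "+"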

    Note also that the underlying problem here is that most of the "job related" commands that take a "job spec" will accept either something like %1 or an actual pid, but the "fg" command only takes %n. So, if you want to
    fg the most recent job, you need to obtain the job id (via the command line above) before passing it to "fg". Note that "fg" with no arg at all would fg the wrong job.

    Yes; so if you're writing your own cruft on top of job control, it's
    probably a good idea to never call anything below without a %n argument.

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca

  • From Christian Weisgerber@21:1/5 to Kaz Kylheku on Fri Oct 4 17:49:32 2024
    On 2024-10-04, Kaz Kylheku <643-408-1753@kylheku.com> wrote:

    What? Scripting should not go anywhere near POSIX job control, which is
    an interactive feature that requires a terminal session.

    Well, there _is_ set -m.

    --
    Christian "naddy" Weisgerber naddy@mips.inka.de

  • From Janis Papanagnou@21:1/5 to Christian Weisgerber on Fri Oct 4 21:42:12 2024
    On 04.10.2024 19:49, Christian Weisgerber wrote:
    On 2024-10-04, Kaz Kylheku <643-408-1753@kylheku.com> wrote:

    What? Scripting should not go anywhere near POSIX job control, which is
    an interactive feature that requires a terminal session.

    Well, there _is_ set -m.

    And how will that devaluate what Kaz has said? Please elaborate.

    See also Bolsky/Korn (chapter "Job Control") about some details
    on implicit and explicit activation, implementation dependencies,
    and use of option 'monitor' (-m) [for interactive invocations]
    for systems that don't support ("complete") job control.

    If you have other information (facts, rationales, or insights),
    or if you know of any useful and sensible application contexts
    for non-interactive usages I'd certainly be curious to know.[*]

    Janis

    [*] The job-control "layering" that Kaz mentioned was the only
    thing that appeared somewhat obvious to me. I also don't expect
    any insights from the OP (who obviously was in insult-mode), so
    feel free to jump in.

  • From Christian Weisgerber@21:1/5 to Janis Papanagnou on Fri Oct 4 22:48:15 2024
    On 2024-10-04, Janis Papanagnou <janis_papanagnou+ng@hotmail.com> wrote:

    What? Scripting should not go anywhere near POSIX job control, which is
    an interactive feature that requires a terminal session.

    Well, there _is_ set -m.

    And how will that devaluate what Kaz has said? Please elaborate.

    Job control does not require an interactive shell or a terminal
    session. It can be used in scripting. That's the facts.

    or if you know of any useful and sensible application contexts
    for non-interactive usages I'd certainly be curious to know.[*]

    I'm curious myself. That said, here's something I stumbled across
    recently:

    background job &
    ...
    kill %1 # clean up

    What happens if the background job has already terminated on its
    own accord before we reach the kill(1)? Not much, because with job
    control, the shell knows that no such job exists. If you do this
    with "kill $!", you signal that PID, which no longer refers to the
    intended process and may in fact have been reused for a different
    process.
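
    Spelled out as a (contrived, untested) script, with sleeps standing in for
    the real work:

    #!/bin/bash
    set -m          # enable job control in a non-interactive shell
    sleep 2 &       # the "background job"
    sleep 5         # ... time passes; the job exits on its own ...
    kill %1         # the shell just reports "no such job"; nothing else is hit
    kill "$!"       # the raw PID: if it had been reused by now, this would
                    # signal some unrelated process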

    --
    Christian "naddy" Weisgerber naddy@mips.inka.de

  • From Kaz Kylheku@21:1/5 to Christian Weisgerber on Sat Oct 5 02:23:09 2024
    On 2024-10-04, Christian Weisgerber <naddy@mips.inka.de> wrote:
    On 2024-10-04, Janis Papanagnou <janis_papanagnou+ng@hotmail.com> wrote:

    What? Scripting should not go anywhere near POSIX job control, which is
    an interactive feature that requires a terminal session.

    Well, there _is_ set -m.

    And how will that devaluate what Kaz has said? Please elaborate.

    Job control does not require an interactive shell or a terminal
    session.

    It can be used in scripting. That's the facts.

    An example of what you mean would help.

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
