• (shellcheck) SC2103

    From Kenny McCormack@21:1/5 to All on Wed Mar 5 14:43:04 2025
    All testing done with shellcheck version 0.10.0 and bash under Linux.

    Shellcheck says that you should replace code like:

    cd somedir
    do_something
    cd .. # (Or, cd -, which is almost, but not exactly the same thing)

    with

    (
    cd somedir
    do_something
    )

    The ostensible rationale is that it is shorter/easier to code, but the
    real rationale is that if the cd fails, putting it into a subshell
    "localizes" the damage.

    I find this analysis problematic for (at least) the following reasons:

    1) First of all, I almost always run with either -e or (more likely)
    "trap handler ERR", where handler is a shell function that prints
    an error message and exits. So, if any "cd" fails, the script
    aborts. Shellcheck fails to realize this and flags every "cd" in
    the script with "Don't you mean: cd ... || exit". So, if it
    recognized the trap better, both this and the SC2103 thing would
    evaporate.

    2) Subshells (still, as far as I know) require a fork() and run as
    another process. Given that bash is a large program (it says so
    right in the man page), this fork() is expensive. And note that it
    isn't the cheap sort of fork() where it is immediately followed by
    an exec(). It's the expensive kind which requires the COW
    mechanism to kick in. So, it seems unwise for shellcheck to be
    recommending using a subshell.

    I certainly avoid it if possible, for this reason.
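    A minimal sketch of both points above, with illustrative names (the
    handler and the nonexistent directory are hypothetical): the ERR trap
    aborts the script on a failed cd, and the subshell form "localizes" a
    cd only because the directory change dies with the subshell.

    ```shell
    #!/bin/bash
    # Point 1: with an ERR trap installed, any failing command -- including
    # a failed cd -- runs the handler, which reports and exits.
    # (The demo uses a subshell-bodied function so its exit stays contained.)
    demo() (
        handler() {
            echo "error: a command failed; aborting" >&2
            exit 1
        }
        trap handler ERR
        cd /nonexistent-dir-demo 2>/dev/null    # fails -> handler fires
        echo "never reached"
    )
    demo
    echo "demo exit status: $?"

    # Point 2 context: the cd inside a subshell never leaks to the parent.
    start=$PWD
    ( cd /tmp )                                 # cd only inside the subshell
    [ "$PWD" = "$start" ] && echo "parent cwd unchanged"
    ```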

    A further note: The most common case is to use (as shown above):

    cd somewhere;...;cd ..

    but the more general pattern would be:

    cd somewhere;...;cd -

    But note that "cd -" - even in a script - prints the name of the directory
    it is cd'ing back to, which is annoying. I could not find any option to
    turn this off, but: cd - > /dev/null
    works. I'm guessing that people avoid using "cd -" in scripts for this
    reason.
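    The printing behavior is easy to see non-interactively; a short sketch
    (paths illustrative):

    ```shell
    #!/bin/sh
    # "cd -" writes the directory it returns to on stdout, even in a script.
    # "cd - > /dev/null" (or cd -- "$OLDPWD") is the quiet equivalent.
    start=$PWD
    cd /tmp
    printed=$(cd -)              # capture what "cd -" prints (a subshell)
    echo "cd - printed: $printed"
    cd - > /dev/null             # the quiet form: back to $start, silently
    [ "$PWD" = "$start" ] && echo "returned quietly"
    ```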

    --
    Modern Christian: Someone who can take time out from using Leviticus
    to defend homophobia and Exodus to plaster the Ten Commandments on
    every school and courthouse to claim that the Old Testament is merely
    "ancient laws" that "only applies to Jews".

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Helmut Waitzmann@21:1/5 to All on Wed Mar 5 18:33:22 2025
    gazelle@shell.xmission.com (Kenny McCormack):
    All testing done with shellcheck version 0.10.0 and bash under Linux.

    Shellcheck says that you should replace code like:

    cd somedir
    do_something
    cd .. # (Or, cd -, which is almost, but not exactly the same thing)

    with

    (
    cd somedir
    do_something
    )

    The ostensible rationale is that it is shorter/easier to code, but the
    real rationale is that if the cd fails, putting it into a subshell
    "localizes" the damage.


    Does


    cd -- somedir &&
    {
        do_something
        cd ..
    }


    make shellcheck happy while at the same time localizing the
    damage and avoiding a subshell?


    cd somewhere;...;cd -

    But note that "cd -" - even in a script - prints the name of the directory
    it is cd'ing back to, which is annoying. I could not find any option to
    turn this off,


    cd -- "$OLDPWD"

    will help.


    But note that


    cd -- "$OLDPWD"


    as well as


    cd -


    will not restore the working directory established at the start
    of the


    cd somewhere; do_something; cd -


    command line if that directory gets renamed or moved by an
    (asynchronous) process while it is running.


    Try the following command in an empty directory:  It creates a
    subdirectory named “sandbox” and in it more subdirectories and
    removes everything when it ends:


    (
        mkdir -- sandbox &&
        {
            n=1 &&
            mkdir -p -- sandbox/"$n"/WD &&
            {
                while
                    sleep 1 &&
                    mv -- sandbox/"$n"/ sandbox/"$((n+1))"/ &&
                    n="$((n+1))"
                do
                    :
                done &
            } &&
            pid="$!" &&
            {
                (
                    CDPATH= cd -- sandbox/"$n"/WD &&
                    exec "$SHELL"
                )
                kill -s INT -- "$pid"
            }
            sleep 1
            rm -R -- sandbox
        }
    )


    The command creates the subdirectories “sandbox/1” and
    “sandbox/1/WD” and launches a process that, running in the
    background, will rename the directory “sandbox/1/” after 1 second
    to “sandbox/2/”, then, after an additional second to “sandbox/3/”
    and so on, continuing incrementing the number, until terminated.


    At the same time the command launches in the foreground an
    interactive (sub‐)shell (“"$SHELL"”), using the directory
    “sandbox/1/WD” as its working directory, allowing the user to
    type commands.  If they type “exit”, the “"$SHELL"” will exit.


    The command then sends the background process an INT signal and
    removes the directory “sandbox/” including its subhierarchy.
    Finally it exits.


    Each second the launched “"$SHELL"” will have its working
    directory renamed to a new path in the file system without being
    notified of that fact.


    In the launched interactive “"$SHELL"” one can repeatedly type the
    commands


    pwd -P

    pwd -L

    cd .


    or the like, observing, whether they continue to report the
    original (obsolete) path of the working directory or follow the
    changed path.


    In this use case, the command


    cd -- "$OLDPWD"


    might fail to restore the former working directory.

  • From Janis Papanagnou@21:1/5 to Kaz Kylheku on Wed Mar 5 20:02:05 2025
    On 05.03.2025 19:40, Kaz Kylheku wrote:
    On 2025-03-05, Kenny McCormack <gazelle@shell.xmission.com> wrote:
    All testing done with shellcheck version 0.10.0 and bash under Linux.

    Shellcheck says that you should replace code like:

    cd somedir
    do_something
    cd .. # (Or, cd -, which is almost, but not exactly the same thing)
    with

    (
    cd somedir
    do_something
    )

    That obviously won't work if do_something has to set a variable
    that is then visible to the rest of the script.

    Indeed. Only for strict hierarchical semantics it makes sense.


    Forking a process just to preserve a current working directory
    is wasteful; we wouldn't do that in a C program, where we might
    open the current directory to be saved, and then fchdir back to it.

    Shells may be different. While Bash regularly creates a subprocess,
    Ksh in certain constructs creates just a "subshell context" without
    forking/cloning a separate process.

    [...]

    but the more general pattern would be:

    cd somewhere;...;cd -

    cd - will break if any of the steps in between happen to cd;
    it is hostile toward maintenance of the script.

    Indeed.

    Just note that spending a "subshell context"[Ksh] or a subprocess
    [Bash] keeps the structure intact. (If you want to pay for that;
    especially when it's costly [as in Bash].)

    Janis

    [...]

  • From Kaz Kylheku@21:1/5 to Kenny McCormack on Wed Mar 5 18:40:30 2025
    On 2025-03-05, Kenny McCormack <gazelle@shell.xmission.com> wrote:
    All testing done with shellcheck version 0.10.0 and bash under Linux.

    Shellcheck says that you should replace code like:

    cd somedir
    do_something
    cd .. # (Or, cd -, which is almost, but not exactly the same thing)

    with

    (
    cd somedir
    do_something
    )

    That obviously won't work if do_something has to set a variable
    that is then visible to the rest of the script.

    Forking a process just to preserve a current working directory
    is wasteful; we wouldn't do that in a C program, where we might
    open the current directory to be saved, and then fchdir back to it.

    However, most of the actions in a shell script fork and exec
    something anyway.

    The ostensible rationale is that it is shorter/easier to code, but the
    real rationale is that if the cd fails, putting it into a subshell
    "localizes" the damage.

    save_pwd=$(pwd) # local save_pwd=$(pwd) in shells that have local

    if cd somedir ; then
        ...
        cd "$save_pwd"
    else
        ...
    fi

    but the more general pattern would be:

    cd somewhere;...;cd -

    cd - will break if any of the steps in between happen to cd;
    it is hostile toward maintenance of the script.
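    A sketch of that failure mode (scratch directories hypothetical): an
    intervening cd clobbers OLDPWD, so the final "cd -" restores the wrong
    directory.

    ```shell
    #!/bin/sh
    # Pattern: cd somewhere; ...; cd -   -- intended to restore $start.
    # If a step in between also cd's, OLDPWD no longer points at $start.
    start=$PWD
    mkdir -p /tmp/demo-a /tmp/demo-b
    cd /tmp/demo-a      # the "cd somewhere"
    cd /tmp/demo-b      # a step that happens to cd (deep in do_something)
    cd - > /dev/null    # lands in /tmp/demo-a, not back in $start
    pwd
    ```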

    By the way, we should also try to exploit the capability of commands to
    do their own chdir.

    E.g:

    save_pwd=$(pwd)
    cd somewhere
    tar czf "$save_pwd"/foo.tar.gz .
    cd "$save_pwd"

    becomes

    tar -C somewhere -czf foo.tar.gz .

    tar will change to somewhere for the sake of finding the
    files, and will resolve the . argument relative to that location,
    but the foo.tar.gz file is created in the original directory
    where it was invoked.

    Another utility with -C <dir> is make.
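    A runnable sketch of the tar example above (scratch paths hypothetical):

    ```shell
    #!/bin/sh
    # tar -C changes directory only for collecting the files; the archive
    # path is still resolved relative to where tar was invoked.
    mkdir -p /tmp/tardemo/sub
    echo hello > /tmp/tardemo/sub/f.txt
    cd /tmp/tardemo
    tar -C sub -czf out.tar.gz .    # reads sub/, writes ./out.tar.gz here
    ls out.tar.gz                   # the archive is here, not in sub/
    tar -tzf out.tar.gz
    ```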


    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca

  • From Helmut Waitzmann@21:1/5 to All on Wed Mar 5 23:09:52 2025
    Kaz Kylheku <643-408-1753@kylheku.com>:
    On 2025-03-05, Kenny McCormack <gazelle@shell.xmission.com> wrote:
    All testing done with shellcheck version 0.10.0 and bash under Linux.

    Shellcheck says that you should replace code like:

    cd somedir
    do_something
    cd .. # (Or, cd -, which is almost, but not exactly the same thing)
    with

    (
    cd somedir
    do_something
    )

    That obviously won't work if do_something has to set a variable
    that is then visible to the rest of the script.

    Forking a process just to preserve a current working directory
    is wasteful; we wouldn't do that in a C program, where we might
    open the current directory to be saved, and then fchdir back to it.


    Unfortunately, there is no way to fchdir() by means of a shell
    built‐in command, say “fchdir“.


    cd somewhere;...;cd -

    cd - will break if any of the steps in between happen to cd;
    it is hostile toward maintenance of the script.


    Yes.  I really would appreciate using the imagined “fchdir” bash
    built‐in command to be able to do


    set_exit_status()
    {
        return ${1-}
    }
    unset -v es &&
    exec {saved_wd}< . &&
    {
        cd -- somedir &&
        {
            do_something
            es="$?"
            fchdir -- "$saved_wd"
        }
        exec {saved_wd}<&-
        set_exit_status ${es-}
    }
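    An assumption-laden aside: on Linux, with bash, the imagined “fchdir”
    can be approximated by cd'ing through the /proc/self/fd magic symlink
    for a saved directory descriptor. This is Linux-specific (not POSIX)
    and, unlike "cd -", it tracks the directory even if it gets renamed
    in the meantime.

    ```shell
    #!/bin/bash
    # Linux-only sketch: hold an fd on the working directory, then
    # "fchdir" back through /proc/self/fd.
    start=$(pwd -P)
    exec {saved_wd}< .              # save a descriptor on the current dir
    cd /tmp                         # stands in for "cd somedir; do_something"
    cd "/proc/self/fd/$saved_wd"    # the poor man's fchdir
    exec {saved_wd}<&-              # release the descriptor
    [ "$(pwd -P)" = "$start" ] && echo "restored via fd"
    ```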

  • From Kenny McCormack@21:1/5 to 643-408-1753@kylheku.com on Thu Mar 6 11:39:35 2025
    In article <20250305103210.358@kylheku.com>,
    Kaz Kylheku <643-408-1753@kylheku.com> wrote:
    On 2025-03-05, Kenny McCormack <gazelle@shell.xmission.com> wrote:
    All testing done with shellcheck version 0.10.0 and bash under Linux.

    Shellcheck says that you should replace code like:

    cd somedir
    do_something
    cd .. # (Or, cd -, which is almost, but not exactly the same thing)
    with

    (
    cd somedir
    do_something
    )

    That obviously won't work if do_something has to set a variable
    that is then visible to the rest of the script.

    That's actually mentioned in the rationale on the web page.
    That if you need to set a variable, you can't use subshells.

    Forking a process just to preserve a current working directory
    is wasteful; we wouldn't do that in a C program, where we might
    open the current directory to be saved, and then fchdir back to it.

    However, most of the actions in a shell script fork and exec
    something anyway.

    Yes, but those would be the cheap form of fork(), where if it is
    quickly followed by an exec(), you don't have to do COW.

    Note that, in Linux, fork() is actually an alias for vfork().

    --
    The randomly chosen signature file that would have appeared here is
    more than 4 lines long. As such, it violates one or more Usenet RFCs.
    In order to remain in compliance with said RFCs, the actual sig can be
    found at the following URL:
    http://user.xmission.com/~gazelle/Sigs/TedCruz

  • From Kenny McCormack@21:1/5 to oe.throttle@xoxy.net on Thu Mar 6 11:32:28 2025
    In article <83v7snfkm5.fsf@helmutwaitzmann.news.arcor.de>,
    Helmut Waitzmann <oe.throttle@xoxy.net> wrote:
    ...
    Does


    cd -- somedir &&
    {
        do_something
        cd ..
    }


    make shellcheck happy while at the same time localizing the
    damage and avoiding a subshell?

    Yes, it does - it silences both warnings. But it seems kind of an
    end-around. The point is that it should recognize that you are
    running in "trap ... ERR" mode and therefore there can't be an
    untrapped "cd".

    Now, I get that this is probably just too hard for shellcheck to do
    (although it is amazing it does as much as it does, and it is getting
    better with each new version), and so we have to live with the warning, but
    my overall point is that this could be problematic if you happen to work in
    an environment where management insists that your script pass shellcheck.

    cd somewhere;...;cd -

    But note that "cd -" - even in a script - prints the name of the directory
    it is cd'ing back to, which is annoying. I could not find any option to


    cd -- "$OLDPWD"

    Will help.

    Pick your poison.

    I'm happy enough with just using "> /dev/null".
    It is about the same # of characters to type.

    --
    Nov 4, 2008 - the day when everything went
    from being Clinton's fault to being Obama's fault.

  • From Kaz Kylheku@21:1/5 to Kenny McCormack on Thu Mar 6 19:50:16 2025
    On 2025-03-06, Kenny McCormack <gazelle@shell.xmission.com> wrote:
    In article <20250305103210.358@kylheku.com>,
    Kaz Kylheku <643-408-1753@kylheku.com> wrote:
    On 2025-03-05, Kenny McCormack <gazelle@shell.xmission.com> wrote:
    All testing done with shellcheck version 0.10.0 and bash under Linux.

    Shellcheck says that you should replace code like:

    cd somedir
    do_something
    cd .. # (Or, cd -, which is almost, but not exactly the same thing)
    with

    (
    cd somedir
    do_something
    )

    That obviously won't work if do_something has to set a variable
    that is then visible to the rest of the script.

    That's actually mentioned in the rationale on the web page.
    That if you need to set a variable, you can't use subshells.

    Forking a process just to preserve a current working directory
    is wasteful; we wouldn't do that in a C program, where we might
    open the current directory to be saved, and then fchdir back to it.

    However, most of the actions in a shell script fork and exec
    something anyway.

    Yes, but those would be the cheap form of fork(), where if it is
    quickly followed by an exec(), you don't have to do COW.

    Note that, in Linux, fork() is actually an alias for vfork().

    Nope. In Linux, fork translates to a clone call with a
    menu of options to bring about the fork behavior. Whereas
    vfork is its own system call. You can easily see this with
    strace.

    Classic vfork shares address space between the child and parent,
    including the stack frame where the vfork call is happening.

    Linux vfork mitigates the problems which could arise from that
    sharing by suspending the parent until the child either
    execs or terminates.

    Either way, fork cannot be vfork; that is nuts. It would break almost
    all programs which do anything other than exec an image in the child.

    Of course, fork *can* be vfork when it is vfork that is made
    identical to fork.

    I have an old vfork test program in my directory of such test
    programs. It's showing that vfork does share the stack frame
    with parent and child.

    The output is "var == 43" showing that the parent sees the
    value incremented by the child:

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void)
    {
        volatile int var = 42;
        int status;
        pid_t child = vfork();

        if (child > 0) {
            waitpid(child, &status, 0);
            printf("var == %d\n", var);
        } else if (child == 0) {
            var++;
            _exit(0);
        } else {
            perror("vfork");
            return EXIT_FAILURE;
        }

        return EXIT_SUCCESS;
    }

    If we reorder the declarations like this:

    int main(void)
    {
        pid_t child = vfork();
        volatile int var = 42;
        int status;

    The output is then "var == 42". Why? Because vfork() suspends the parent
    until the child shits a new process, or gets off the can.
    And only when vfork terminates does the parent execute the effect
    of the "volatile int var = 42" declaration. So at that point, it has
    clobbered the value to 42. While the parent is suspended, the
    child also initializes the value to 42, and increments it to 43.

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
