• chromium builds very slow with vfs.vnode.param.can_skip_requeue=0

    From Stefan Ehmann@shoesoft@gmx.net to muc.lists.freebsd.stable on Wed Oct 29 23:50:04 2025
    From Newsgroup: muc.lists.freebsd.stable

    After updating from 14.3 to 15.0-BETA I've noticed that chromium builds
    in poudriere slow down to a crawl after some time. top shows > 95%
    system usage.
dtrace/hotkernel shows > 90% of time spent in kernel`lock_delay.
dtrace -n 'fbt::lock_delay:entry { @[stack()] = count(); }' produces lots
of stack traces similar to this one:
    kernel`__mtx_lock_sleep+0xe8
    kernel`vdbatch_process+0x4fb
    kernel`vdropl+0x20e
    kernel`vput_final+0xa3
    kernel`vn_close1+0x186
    kernel`vn_closefile+0x3d
    kernel`_fdrop+0x11
    kernel`closef+0x24a
    kernel`closefp_impl+0x58
    kernel`amd64_syscall+0x126
    kernel`0xffffffff809f8a0b
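To put a rough number on it, the same function can be timed instead of
just counted (a follow-up sketch; this assumes fbt publishes a return
probe for lock_delay, which it may not if the function tail-calls):
dtrace -n 'fbt::lock_delay:entry { self->ts = timestamp; }
    fbt::lock_delay:return /self->ts/ {
        /* accumulate nanoseconds spent inside lock_delay */
        @["ns in lock_delay"] = sum(timestamp - self->ts);
        self->ts = 0;
    }'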
    In vdbatch_process() there is the following comment above the condition
    that is controlled by vfs.vnode.param.can_skip_requeue:
/*
 * Attempt to requeue the passed batch, but give up easily.
 *
 * Despite batching the mechanism is prone to transient *significant*
 * lock contention, where vnode_list_mtx becomes the primary bottleneck
 * if multiple CPUs get here (one real-world example is highly parallel
 * do-nothing make, which will stat *tons* of vnodes). Since it is
 * quasi-LRU (read: not that great even if fully honoured) provide an
 * option to just dodge the problem. Parties which don't like it are
 * welcome to implement something better.
 */
if (vnode_can_skip_requeue) {
...
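The shape of that logic, rewritten as a stand-alone user-space sketch
(illustrative only, not the kernel code; process_batch, list_mtx and
skipped are made-up names): trylock the global list lock and skip the
quasi-LRU requeue if it is contended.

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t list_mtx = PTHREAD_MUTEX_INITIALIZER;
static int skipped;

/*
 * Stand-in for vdbatch_process(): requeue a batch under a global
 * lock, but when can_skip_requeue is set, give up as soon as the
 * lock turns out to be contended instead of piling up behind it.
 */
static void
process_batch(int can_skip_requeue)
{
        if (can_skip_requeue) {
                if (pthread_mutex_trylock(&list_mtx) != 0) {
                        /* Contended: dodge the quasi-LRU requeue. */
                        skipped++;
                        return;
                }
        } else {
                pthread_mutex_lock(&list_mtx); /* may block for a long time */
        }
        /* ... requeue the batch on the global vnode list here ... */
        pthread_mutex_unlock(&list_mtx);
}

int
main(void)
{
        process_batch(1);
        printf("batches skipped: %d\n", skipped);
        return (0);
}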
    Setting "sysctl vfs.vnode.param.can_skip_requeue=1" remedies the
    situation immediately and system usage returns to ~15%.
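If someone wants to flip the knob from inside a test harness rather
than via sysctl(8), a minimal sketch using sysctlbyname(3); needs root
to write, and it assumes the knob accepts an int-sized value (sysctl(8)
displays it as 0/1):

#include <sys/types.h>
#include <sys/sysctl.h>
#include <err.h>
#include <stdio.h>

int
main(void)
{
        /* The knob may be exported as a bool, so zero-initialize and
         * trust only the low byte of the old value. */
        int oldval = 0, newval = 1;
        size_t oldlen = sizeof(oldval);

        if (sysctlbyname("vfs.vnode.param.can_skip_requeue",
            &oldval, &oldlen, &newval, sizeof(newval)) == -1)
                err(1, "sysctlbyname");
        printf("can_skip_requeue was %d, now %d\n", oldval, newval);
        return (0);
}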
I cannot recall such problems in 14.3; is this a regression in 15.x?
  • From Stefan Ehmann@shoesoft@gmx.net to muc.lists.freebsd.stable on Thu Oct 30 19:32:12 2025
    From Newsgroup: muc.lists.freebsd.stable

    On 10/30/25 12:23 AM, Lars Tunkrans wrote:

> What is the size of RAM in your machine? When building Chrome /
> ungoogled-chrome on 15-current some time this spring, about 18 GB of
> RAM was used on a 64 GB machine. And then you need RAM for the ZFS ARC
> cache, if you have the source code on ZFS. Therefore you need 32 GB of
> RAM to build Chrome efficiently.
> I think I remember that using the Link Time Optimization option also
> took a long time at the end of the build.
The machine has 32GB RAM; chromium is in TMPFS_BLACKLIST.
A few GB of swap are in use, but that has never been a performance
issue so far. If memory were the problem, I don't think setting
vfs.vnode.param.can_skip_requeue=1 would improve the situation.
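(For reference, that means poudriere.conf carries something like the
following; path and wording are illustrative, per the stock
poudriere.conf.sample:)

# /usr/local/etc/poudriere.conf (illustrative excerpt)
# Build chromium on disk instead of tmpfs to keep RAM free:
TMPFS_BLACKLIST="chromium"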
    I'm still looking for a simpler setup to reproduce the situation. It's
    hard to test if I have to wait several hours until the problem manifests.
I'm not sure whether it happens at a specific point in the chromium
build. This time I noticed the issue at around 40K of the roughly 52K
total files built.
It might have nothing to do with the specific commands in the build;
it may just be a problem that gets worse over time.