Has anyone been able to build this port since 14th Oct?
Since https://cgit.freebsd.org/ports/commit/?id=38cc3ba437b87e694d99f1faaf3152ab3709501a ?
(Tried in stable/14 and stable/15 poudriere on different hosts.
In both cases, when the build got to the actual port after building
all the prerequisites, the host rebooted after a short compile time,
with no error message as to why. I wonder if it's a problem with
the machines, the builders, or the port, which is why I'm asking
here.)
At this point stable/15 and a non-debug main 16 are not all that
different. So I attempted builds of what I had for a ports tree
(from Oct 13) and then updating the ports tree and rebuilding
what changed ( PKG_NO_VERSION_FOR_DEPS=yes style ) based on my
normal environment and poudriere-devel use.
Neither failed.
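To be concrete about that rebuild style: the sequence was roughly like the following, where the jail and ports-tree names are just placeholders and PKG_NO_VERSION_FOR_DEPS=yes is the ports-mgmt/poudriere-devel poudriere.conf option I was referring to:

# poudriere.conf (poudriere-devel)
PKG_NO_VERSION_FOR_DEPS=yes

# update the ports tree, then rebuild the port in question plus
# whatever of its dependencies actually changed
poudriere ports -u -p default
poudriere bulk -j main-amd64 -p default net-im/signal-desktop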
But you give little configuration information so I do not
know how well my attempt approximated your context:
RAM+SWAP == ??? + ??? == ???
128+4 == 132GB
poudriere.conf :
USE_TMPFS=???
all
TMPFS_BLACKLIST=???
not defined
PARALLEL_JOBS=??? # (or command line control of such)
1 in poudriere.conf at the moment. It usually has 3 as per pkg.f.o
ALLOW_MAKE_JOBS=??? # (defined vs. not)
yes
ALLOW_MAKE_JOBS_PACKAGES=???
undefined
MUTUALLY_EXCLUSIVE_BUILD_PACKAGES=???
"llvm* rust* gcc*"
PRIORITY_BOOST=???
undefined
other relevant possibilities?
not defined within jailname-make.conf
make.conf (or command line control of such):
MAKE_JOBS_NUMBER_LIMIT=??? # or MAKE_JOBS_NUMBER=???
Details for my context . . .
On Fri, Oct 17, 2025 at 09:21:06PM -0700, Mark Millard wrote:
At this point stable/15 and a non-debug main 16 are not all that
different. So I attempted builds of what I had for a ports tree
(from Oct 13) and then updating the ports tree and rebuilding
what changed ( PKG_NO_VERSION_FOR_DEPS=yes style ) based on my
normal environment and poudriere-devel use.
Neither failed.
But you give little configuration information so I do not
know how well my attempt approximated your context:
Point taken, but at that stage I only wanted to know if others
could build it, because I couldn't on multiple poudrieres and
it had/has not yet (2025.10.18-1224 UTC) been built on the pkg cluster.
Now that I know it can be built, I partially know where to look, and
I can avoid filing a PR for the port.
RAM+SWAP == ??? + ??? == ???
128+4 == 132GB
The problem happened on two systems. For simplicity I'm talking about the
beefier system. It has 20 CPUs (40 with HT on, but it is turned off)
and 128 GB RAM. Configured swap is 4 GB and hardly used.
poudriere.conf :
USE_TMPFS=???
all
TMPFS_BLACKLIST=???
not defined
PARALLEL_JOBS=??? # (or command line control of such)
1 in poudriere.conf at the moment. It usually has 3 as per pkg.f.o
but -J20 was also tried directly on the command line
ALLOW_MAKE_JOBS=??? # (defined vs. not)
yes
ALLOW_MAKE_JOBS_PACKAGES=???
undefined
MUTUALLY_EXCLUSIVE_BUILD_PACKAGES=???
"llvm* rust* gcc*"
PRIORITY_BOOST=???
undefined
other relevant possibilities?
make.conf (or command line control of such):
not defined within jailname-make.conf
MAKE_JOBS_NUMBER_LIMIT=??? # or MAKE_JOBS_NUMBER=???
Details for my context . . .
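Gathering those answers in one place, the relevant poudriere.conf lines on the beefier machine amount to roughly the following (a summary sketch rather than a verbatim copy of the file; USE_TMPFS=all is my reading of the "all" answer above):

USE_TMPFS=all
# TMPFS_BLACKLIST: not defined
PARALLEL_JOBS=1   # usually 3 as per pkg.f.o; -J20 also tried on the command line
ALLOW_MAKE_JOBS=yes
# ALLOW_MAKE_JOBS_PACKAGES: not defined
MUTUALLY_EXCLUSIVE_BUILD_PACKAGES="llvm* rust* gcc*"
# PRIORITY_BOOST: not defined
# MAKE_JOBS_NUMBER_LIMIT: not defined in the jailname-make.conf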
Thank you for your time in this. I'm interested - do you
make available your hacked version of top? Could be useful!
void <void_at_f-m.fm> wrote on
Date: Sat, 18 Oct 2025 12:43:07 UTC :
On Fri, Oct 17, 2025 at 09:21:06PM -0700, Mark Millard wrote:
At this point stable/15 and a non-debug main 16 are not all that
different. So I attempted builds of what I had for a ports tree
(from Oct 13) and then updating the ports tree and rebuilding
what changed ( PKG_NO_VERSION_FOR_DEPS=yes style ) based on my
normal environment and poudriere-devel use.
Neither failed.
But you give little configuration information so I do not
know how well my attempt approximated your context:
Point taken, but at that stage I only wanted to know if others
could build it, because I couldn't on multiple poudrieres and
it had/has not yet (2025.10.18-1224 UTC) been built on the pkg cluster.
Now that I know it can be built, I partially know where to look, and
I can avoid filing a PR for the port.
Do you use anything like:
# Delay when persistent low free RAM leads to
# Out Of Memory killing of processes:
vm.pageout_oom_seq=120
Or:
#
# For plenty of swap/paging space (will not
# run out), avoid pageout delays leading to
# Out Of Memory killing of processes:
#vm.pfault_oom_attempts=-1
#
# For possibly insufficient swap/paging space
# (might run out), increase the pageout delay
# that leads to Out Of Memory killing of
# processes (showing defaults at the time):
#vm.pfault_oom_attempts= 3
#vm.pfault_oom_wait= 10
(Mine are in /boot/loader.conf .)
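If it helps for comparison, the values currently in effect can be read back at runtime with something like:

sysctl vm.pageout_oom_seq vm.pfault_oom_attempts vm.pfault_oom_wait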
RAM+SWAP == ??? + ??? == ???
128+4 == 132GB
Note that with USE_TMPFS=all but TMPFS_BLACKLIST extensively
used to avoid tmpfs use for port-packages with huge file
system requirements, I reported for the initial build:
QUOTE
So: Somewhere between 132624 MiBytes and 143875 MiBytes or
so was sufficient RAM+SWAP, all RAM here.
END QUOTE
But that was for 32 FreeBSD cpus, not 20. Still, the file
system usage contribution to RAM+SWAP usage when tmpfs
is in full use tends not to be all that dependent on the
FreeBSD cpu count.
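For what it's worth, the TMPFS_BLACKLIST mechanism I mentioned is just poudriere.conf material along these lines (the entries shown are only illustrative, not my actual list):

USE_TMPFS=all
# packages whose huge WRKDIRs should land on disk instead of tmpfs
TMPFS_BLACKLIST="rust electron* chromium"
# where the blacklisted WRKDIRs go instead
TMPFS_BLACKLIST_TMPDIR=${BASEFS}/data/cache/tmp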
Converting my figures to GiBytes:
132624 MiBytes is a little under 129.6 GiBytes
143875 MiBytes is a little under 140.6 GiBytes
The range is that wide based, in part, on the
lack of significant memory pressure, given the
192 GiBytes of RAM. When SWAP is significantly
involved, it gives much better information about
RAM+SWAP requirements because of the memory
pressure consequences. So I'd not infer that
much from the above.
I can boot the system using hw.physmem="128G"
in /boot/loader.conf. I'll probably get a SWAP
binding warning about 512 GiBytes of SWAP
being a potential mistuning for that amount of
RAM. (More like 474 GiBytes of SWAP would likely
not complain for 128 GiBytes of RAM.)
I can disable my TMPFS_BLACKLIST list.
I can constrain to use of PARALLEL_JOBS=20 and
have MAKE_JOBS_NUMBER_LIMIT=20 for
ALLOW_MAKE_JOBS use. But attempting to have it
actually avoid 12 of the 32 FreeBSD cpus would
probably be messier, and I've no experience with
any known-effective way of doing that for bulk
runs. So I may well not deal with that issue and
just let it use up to the 32. This makes
judging load average implications dependent
on the 32.
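Spelled out, that constraint would be roughly the following, split between poudriere.conf and the jail-specific make.conf (a sketch of the intent, not my normal configuration):

# poudriere.conf
PARALLEL_JOBS=20
ALLOW_MAKE_JOBS=yes

# <jailname>-make.conf
MAKE_JOBS_NUMBER_LIMIT=20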
Also, this build would not have prior builds
of some of the port-packages. (Nothing would
end up with "inspected" status.)
So I may later have better information for
comparison, including for RAM+SWAP use.
The problem happened on two systems. For simplicity I'm talking about the
beefier system. It has 20 CPUs (40 with HT on, but it is turned off)
and 128 GB RAM. Configured swap is 4 GB and hardly used.
poudriere.conf :
USE_TMPFS=???
all
TMPFS_BLACKLIST=???
not defined
PARALLEL_JOBS=??? # (or command line control of such)
1 in poudriere.conf at the moment. It usually has 3 as per pkg.f.o
but -J20 was also tried directly on the command line
ALLOW_MAKE_JOBS=??? # (defined vs. not)
yes
ALLOW_MAKE_JOBS_PACKAGES=???
undefined
MUTUALLY_EXCLUSIVE_BUILD_PACKAGES=???
"llvm* rust* gcc*"
PRIORITY_BOOST=???
undefined
other relevant possibilities?
not defined within jailname-make.conf
make.conf (or command line control of such):
MAKE_JOBS_NUMBER_LIMIT=??? # or MAKE_JOBS_NUMBER=???
Details for my context . . .
Thank you for your time in this. I'm interested - do you
make available your hacked version of top? Could be useful!
I'll deal with top separately. I've not been doing
source based activities for months and likely
should get my context for such up to date first.
On Oct 18, 2025, at 10:43, Mark Millard <marklmi@yahoo.com> wrote:
Another thing I did not ask about was other competing
I forgot to ask about the non-tmpfs file system(s):
ZFS? UFS? Any tuning of note?
My prior tests that I reported on were done in a ZFS
context, although just on a single partition: it
is ZFS just so that bectl can be used, not for
redundancy or other typical ZFS usage. The
only tuning is:
/etc/sysctl.conf:vfs.zfs.vdev.min_auto_ashift=12
/etc/sysctl.conf:vfs.zfs.per_txg_dirty_frees_percent=5
The use of "5" instead of "30" was as recommended
by the person that changed the default to 30. It was
for some behavior that I reported for a specific
context, but the 5 seemed to not be a problem for me
for any context I had so I've used it systematically
since then. 5 was the prior default, as I remember.
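Both are ordinary sysctls, so for comparison they can be inspected on a running system, and per_txg_dirty_frees_percent can also be changed live, e.g.:

sysctl vfs.zfs.vdev.min_auto_ashift vfs.zfs.per_txg_dirty_frees_percent
sysctl vfs.zfs.per_txg_dirty_frees_percent=5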
Do you use anything like:
# Delay when persistent low free RAM leads to
# Out Of Memory killing of processes:
vm.pageout_oom_seq=120
# For plenty of swap/paging space (will not
# run out), avoid pageout delays leading to
# Out Of Memory killing of processes:
#vm.pfault_oom_attempts=-1
#vm.pfault_oom_wait= 10
(Mine are in /boot/loader.conf .)
The system was only running poudriere when the reboot occurred.
It's a HP ProLiant G8 2U rackmount server, Xeon E5-2690v2 (x2).
On Sat, Oct 18, 2025 at 06:15:59PM -0700, Mark Millard wrote:
Do you use anything like:
# Delay when persistent low free RAM leads to
# Out Of Memory killing of processes:
vm.pageout_oom_seq=120
Yes
# For plenty of swap/paging space (will not
# run out), avoid pageout delays leading to
# Out Of Memory killing of processes:
#vm.pfault_oom_attempts=-1
That is also set.
FYI: that special value effectively disables use of
FYI: I've always used the default for vm.pageout_update_period.
#vm.pfault_oom_wait= 10
This isn't set in loader or sysctl.conf, but it is that value,
so I guess it's the default. For swap, my sysctl.conf has these:
# swap
vm.pageout_oom_seq=120
vm.pfault_oom_attempts=-1
vm.pageout_update_period=0
From man sysctl:
(Mine are in /boot/loader.conf .)
Why is that? (Asking because I thought using loader.conf was being discouraged.)
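One way to check which of these are boot-time loader tunables versus runtime-writable sysctls, in case that helps answer the above:

sysctl -aT | grep -E 'pageout_oom|pfault_oom'   # variables settable from loader(8)
sysctl -aW | grep -E 'pageout_oom|pfault_oom'   # variables writable at runtime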
I'm attempting to build this port again after clearing obj and ccache dirs,
then rebuilding the system, then deleting the builder, its pkgs & logs,
then recreating the builder from the obj of the rebuilt system.
So far, running out of RAM+SWAP does not look to be involved.
It's got as far as building electron37 right now. top looks like this:
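The delete/recreate step was roughly along these lines, where "builder" stands in for the actual jail name and -m src=/usr/src installs the jail from the already-built /usr/obj for that source tree:

poudriere jail -d -j builder
poudriere jail -c -j builder -m src=/usr/src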
last pid: 7554; load averages: 9.96, 10.01, 10.00 up 0+17:11:14 11:53:51
106 processes: 11 running, 95 sleeping
CPU: 48.2% user, 0.0% nice, 1.9% system, 0.0% interrupt, 49.9% idle
Mem: 38G Active, 5202M Inact, 12G Wired, 1267M Buf, 70G Free
ARC: 5946M Total, 2664M MFU, 2426M MRU, 645K Anon, 40M Header, 781M Other
4379M Compressed, 6795M Uncompressed, 1.55:1 Ratio
Swap: 4034M Total, 4034M Free
On Sun, Oct 19, 2025 at 11:57:57AM +0100, void wrote:
Reboot happens in the same place. After electron completes
and a couple of minutes into signal-desktop build starting.
Here's top from exactly when it happened:
last pid: 17262; load averages: 1.58, 4.47, 7.38 up 0+19:12:53 13:55:30
89 processes: 2 running, 87 sleeping
CPU: 5.9% user, 0.0% nice, 1.1% system, 0.2% interrupt, 92.8% idle
Mem: 13G Active, 5898M Inact, 1136K Laundry, 13G Wired, 986M Buf, 94G Free
ARC: 5955M Total, 3855M MFU, 1260M MRU, 710K Anon, 39M Header, 766M Other
4475M Compressed, 6925M Uncompressed, 1.55:1 Ratio
Swap: 4034M Total, 4034M Free
  PID USERNAME    THR PRI NICE   SIZE    RES STATE    C   TIME    WCPU COMMAND
 8047 root          7  99    0   781M   187M CPU1     1   0:00  19.75% node
30128 root          1  59    0    16M  5192K nanslp   0   3:13   0.87% sh
 8202 root          1   0    0    15M  4704K select   3   0:24   0.12% top
39762 root          1   0    0    15M  4480K CPU19   19   0:03   0.11% top
43542 void          1   0    0    25M    12M select   0   0:02   0.01% sshd-session
87984 root          1   0    0    14M  2920K piperd  13   0:00   0.01% timestamp
80631 ntpd          1   0    0    26M  8688K select  16   0:03   0.01% ntpd
85841 void          1   0    0    25M    12M select   1   0:04   0.01% sshd-session
51984 _pflogd       1   0    0    15M  3172K bpf      0   0:02   0.00% pflogd
last few lines of build log:
[00:01:34] Lockfile is up to date, resolution step is skipped
[00:01:34] Already up to date
[00:01:36]
[00:01:36]
[00:01:36] > signal-desktop@7.74.0 postinstall /wrkdirs/usr/ports/net-im/signal-desktop/work/Signal-Desktop-7.74.0
[00:01:36] > pnpm run build:acknowledgments && pnpm run electron:install-app-deps
[00:01:36]
[00:01:38]
[00:01:38] > signal-desktop@7.74.0 build:acknowledgments /wrkdirs/usr/ports/net-im/signal-desktop/work/Signal-Desktop-7.74.0
[00:01:38] > node scripts/generate-acknowledgments.js
[00:01:38]
signal-desktop@7.74.0 electron:install-app-deps /wrkdirs/usr/ports/net-im/signal-desktop/work/Signal-Desktop-7.74.0
electron-builder install-app-deps
rCo electron-builder version=26.0.14
I'm out of ideas. I thought the problem may have been because CPUTYPE?= was defined within
src.conf, but that has been removed, and both the OS and the builder built from its obj
have been rebuilt after clearing /usr/obj, /var/cache/ccache, and make cleanworld
in /usr/src.
void <void_at_f-m.fm> wrote on
Date: Mon, 20 Oct 2025 02:25:46 UTC :
On Sun, Oct 19, 2025 at 11:57:57AM +0100, void wrote:
Reboot happens in the same place. After electron completes
and a couple of minutes into signal-desktop build starting.
Here's top from exactly when it happened:
last pid: 17262; load averages: 1.58, 4.47, 7.38 up 0+19:12:53 13:55:30
89 processes: 2 running, 87 sleeping
CPU: 5.9% user, 0.0% nice, 1.1% system, 0.2% interrupt, 92.8% idle
Mem: 13G Active, 5898M Inact, 1136K Laundry, 13G Wired, 986M Buf, 94G Free
That is a fairly sizable Active and the Wired is a lot bigger
than the ARC Total below, basically matching the Active size.
Also node shows only 187M RES(ident) below.
I wonder what makes up much of the Active and Wired.
While I cannot match the time frame in the overall build
sequence to the above at all, I did report:
59254Mi MaxObsActive
17128Mi MaxObsWired (likely not from the same time point)
Your figures are not large compared to those.
ARC: 5955M Total, 3855M MFU, 1260M MRU, 710K Anon, 39M Header, 766M Other
4475M Compressed, 6925M Uncompressed, 1.55:1 Ratio
Swap: 4034M Total, 4034M Free
PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND
 8047 root          7  99    0   781M   187M CPU1     1   0:00  19.75% node
30128 root          1  59    0    16M  5192K nanslp   0   3:13   0.87% sh
 8202 root          1   0    0    15M  4704K select   3   0:24   0.12% top
39762 root          1   0    0    15M  4480K CPU19   19   0:03   0.11% top
43542 void          1   0    0    25M    12M select   0   0:02   0.01% sshd-session
87984 root          1   0    0    14M  2920K piperd  13   0:00   0.01% timestamp
80631 ntpd          1   0    0    26M  8688K select  16   0:03   0.01% ntpd
85841 void          1   0    0    25M    12M select   1   0:04   0.01% sshd-session
51984 _pflogd       1   0    0    15M  3172K bpf      0   0:02   0.00% pflogd
last few lines of build log:
[00:01:34] Lockfile is up to date, resolution step is skipped
[00:01:34] Already up to date
[00:01:36]
[00:01:36]
[00:01:36] > signal-desktop@7.74.0 postinstall /wrkdirs/usr/ports/net-im/signal-desktop/work/Signal-Desktop-7.74.0
[00:01:36] > pnpm run build:acknowledgments && pnpm run electron:install-app-deps
[00:01:36]
[00:01:38]
[00:01:38] > signal-desktop@7.74.0 build:acknowledgments /wrkdirs/usr/ports/net-im/signal-desktop/work/Signal-Desktop-7.74.0
[00:01:38] > node scripts/generate-acknowledgments.js
[00:01:38]
The log file for my from scratch build has the "node scripts/generate-acknowledgments.js" line as 1210 of
1557.
After the above, my log file shows:
signal-desktop@7.74.0 electron:install-app-deps /wrkdirs/usr/ports/net-im/signal-desktop/work/Signal-Desktop-7.74.0
electron-builder install-app-deps
rCo electron-builder version=26.0.14
rCo loaded configuration file=package.json ("build" field)
rCo executing @electron/rebuild electronVersion=38.2.0 arch=x64 buildFromSource=false appDir=./
rCo installing native dependencies arch=x64
rCo preparing moduleName=bufferutil arch=x64
rCo preparing moduleName=utf-8-validate arch=x64
rCo preparing moduleName=@indutny/mac-screen-share arch=x64
rCo preparing moduleName=@signalapp/windows-ucv arch=x64
rCo preparing moduleName=bufferutil arch=x64
rCo preparing moduleName=canvas arch=x64
rCo preparing moduleName=utf-8-validate arch=x64
rCo finished moduleName=@signalapp/windows-ucv arch=x64
rCo finished moduleName=@indutny/mac-screen-share arch=x64
rCo finished moduleName=utf-8-validate arch=x64
rCo finished moduleName=utf-8-validate arch=x64
rCo finished moduleName=bufferutil arch=x64
rCo finished moduleName=bufferutil arch=x64
rCo finished moduleName=canvas arch=x64
rCo completed installing native dependencies
Done in 27.8s using pnpm v10.6.4
cd /wrkdirs/usr/ports/net-im/signal-desktop/work/Signal-Desktop-7.74.0/sticker-creator && /usr/bin/env ELECTRON_OVERRIDE_DIST_PATH=/usr/local/share/electron37 HOME=/wrkdirs/usr/ports/net-im/signal-desktop/work USE_SYSTEM_APP_BUILDER=true SOURCE_DATE_EPOCH=$(date +'%s') PATH=/wrkdirs/usr/ports/net-im/signal-desktop/work/Signal-Desktop-7.74.0/node_modules/.bin:/usr/local/bin:/wrkdirs/usr/ports/net-im/signal-desktop/work/.bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/root/bin ELECTRON_SKIP_BINARY_DOWNLOAD=1 PYTHONDONTWRITEBYTECODE=1 OPENSSLBASE=/usr OPENSSLDIR=/etc/ssl OPENSSLINC=/usr/include OPENSSLLIB=/usr/lib XDG_DATA_HOME=/wrkdirs/usr/ports/net-im/signal-desktop/work XDG_CONFIG_HOME=/wrkdirs/usr/ports/net-im/signal-desktop/work XDG_CACHE_HOME=/wrkdirs/usr/ports/net-im/signal-desktop/work/.cache HOME=/wrkdirs/usr/ports/net-im/signal-desktop/work TMPDIR="/tmp" PKG_CONFIG_LIBDIR=/wrkdirs/usr/ports/net-im/signal-desktop/work/.pkgconfig:/usr/local/libdata/pkgconfig:/usr/local/share/pkgconfig:/usr/libdata/pkgconfig MK_DEBUG_FILES=no MK_KERNEL_SYMBOLS=no SHELL=/bin/sh NO_LINT=YES PREFIX=/usr/local LOCALBASE=/usr/local CC="cc" CFLAGS="-O2 -pipe -fstack-protector-strong -fno-strict-aliasing " CPP="cpp" CPPFLAGS="" LDFLAGS=" " LIBS="" CXX="c++" CXXFLAGS="-O2 -pipe -fstack-protector-strong -fno-strict-aliasing " BSD_INSTALL_PROGRAM="install -s -m 555" BSD_INSTALL_LIB="install -s -m 0644" BSD_INSTALL_SCRIPT="install -m 555" BSD_INSTALL_DATA="install -m 0644" BSD_INSTALL_MAN="install -m 444" pnpm install
Lockfile is up to date, resolution step is skipped
Progress: resolved 1, reused 0, downloaded 0, added 0
Packages: +581 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Progress: resolved 581, reused 581, downloaded 0, added 581, done
dependencies:
+ @formatjs/fast-memoize 1.2.8
+ @indutny/emoji-picker-react 4.10.0
+ @popperjs/core 2.11.8
+ @react-aria/interactions 3.19.0
+ @reduxjs/toolkit 1.9.5
+ @stablelib/x25519 1.0.3
+ base64-js 1.5.1
+ classnames 2.3.2
+ debug 4.3.4
+ focus-trap-react 10.1.1
+ memoizee 0.4.15
+ npm-run-all 4.1.5
+ protobufjs 7.2.5
+ protobufjs-cli 1.1.1
+ qrcode-generator 1.4.4
+ react 18.3.1
+ react-dom 18.3.1
+ react-dropzone 14.2.3
+ react-intl 6.4.1
+ react-popper 2.3.0
+ react-redux 8.0.5
+ react-router-dom 6.10.0
+ react-sortablejs 6.1.4
+ redux 4.2.1
+ reselect 4.1.8
+ sortablejs 1.15.0
+ zod 3.22.3
devDependencies:
+ @types/debug 4.1.7
+ @types/lodash 4.14.194
+ @types/memoizee 0.4.8
+ @types/react 18.3.20
+ @types/react-dom 18.3.6
+ @types/sortablejs 1.15.1
+ @typescript-eslint/eslint-plugin 5.59.0
+ @typescript-eslint/parser 5.59.0
+ @vitejs/plugin-react 3.1.0
+ emoji-datasource-apple 16.0.0
+ eslint 8.38.0
+ eslint-config-airbnb-typescript-prettier 5.0.0
+ eslint-config-prettier 8.8.0
. . .
I'm out of ideas. I thought the problem may have been because CPUTYPE?= was defined within
src.conf, but that has been removed, and both the OS and the builder built from its obj
have been rebuilt after clearing /usr/obj, /var/cache/ccache, and make cleanworld
in /usr/src.
For amd64:
https://pkg-status.freebsd.org/beefy24/data/main-amd64-default/pcd2ca5cd2643_s6fa18fe744/logs/signal-desktop-7.74.0_1.log
shows a successful main 16 latest build as of:
build of net-im/signal-desktop | signal-desktop-7.74.0_1 ended at Sun Oct 19 16:04:15 -00 2025
build time: 00:04:36
(The above shows at https://www.freshports.org/net-im/signal-desktop/
for FreeBSD:16:latest . The others below do not yet[?] show up.)
and:
https://pkg-status.freebsd.org/beefy22/data/143amd64-default/cd2ca5cd2643/logs/signal-desktop-7.74.0_1.log
shows a successful 14.3 latest build as of:
build of net-im/signal-desktop | signal-desktop-7.74.0_1 ended at Sun Oct 19 20:52:37 -00 2025
build time: 00:06:57
and:
https://pkg-status.freebsd.org/beefy23/data/150releng-amd64-default/774fe8f3f054/logs/signal-desktop-7.74.0.log
shows a successful 15.0 releng latest build as of:
build of net-im/signal-desktop | signal-desktop-7.74.0 ended at Sun Oct 19 01:17:50 -00 2025
build time: 00:04:10
(I do not know if the 150releng-amd64 builds are all being distributed
vs. not. My build was of 7.74.0 as well.)
I have never used ccache or the like, so that is another
environmental difference. I've no clue if it is
important.
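For reference, when ccache is used with poudriere it is normally just pointed at a persistent cache directory in poudriere.conf, something like the line below, so the cache contents survive deleting and recreating the builder:

CCACHE_DIR=/var/cache/ccache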
The processor used was an AMD 7950X3D, which is far more recent
than the Xeon E5-2690v2, not that I know if that distinction
is involved.