A few months before I landed the jemalloc patches, I did 4 or 5 from-dirt buildworlds. The elapsed time was, IIRC, within 1 or 2%: enough to maybe see a difference with the small sample size, but not enough for ministat to trigger at 95%. I don't recall keeping the data for this and can't find it now, and I'm not even sure, in hindsight, that I ran a good experiment. It might be related, or not, but it would be easy enough for someone to set up two jails: one from just before and one from just after. Build the world from scratch (same hash) in both. That would test it, since you'd be holding all other variables constant.

On Sat, Dec 6, 2025, 3:06 PM Mark Millard <marklmi@yahoo.com> wrote:

The range of commits looks like:

On Dec 6, 2025, at 06:14, Mark Millard <marklmi@yahoo.com> wrote:
Mateusz Guzik <mjguzik_at_gmail.com> wrote on
Date: Sat, 06 Dec 2025 10:50:08 UTC :
I got pointed at phoronix: https://www.phoronix.com/review/freebsd-15-amd-epyc
While I don't treat their results as gospel, a FreeBSD vs FreeBSD test
showing a slowdown most definitely warrants a closer look.
They observed slowdowns when using iperf over localhost and when compiling llvm.
I can confirm both problems and more.
I found the profiling tooling for userspace to be broken again so I
did not investigate much and I'm not going to dig into it further.
Test box is AMD EPYC 9454 48-Core Processor, with the 2 systems
running as 8 core vms under kvm.
. . .
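For anyone wanting to repeat the localhost iperf comparison on a given install, a minimal sketch (this assumes benchmarks/iperf3 is installed; the flags are generic, not necessarily the exact invocation Phoronix used):

# start a server, then run the client against loopback for 30 seconds
iperf3 -s &
iperf3 -c 127.0.0.1 -t 30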
Both of the below are from ampere3 (aarch64) instead, its
2 most recent "bulk -a" runs that completed, elapsed times
shown for qt6-webengine-6.9.3 builds:
150releng-arm64-quarterly qt6-webengine-6.9.3 53:33:46
135arm64-default qt6-webengine-6.9.3 38:43:36
For reference:
Host OSVERSION: 1600000
Jail OSVERSION: 1500068
vs.
Host OSVERSION: 1600000
Jail OSVERSION: 1305000
The difference for the above is in the Jail's world builds,
not in the boot's (kernel+world) builds.
For reference:
https://pkg-status.freebsd.org/ampere3/build.html?mastername=150releng-arm64-quarterly&build=88084f9163ae
build of www/qt6-webengine | qt6-webengine-6.9.3 ended at Sun Nov 30 05:40:02 -00 2025
build time: 2D:05:33:52
https://pkg-status.freebsd.org/ampere3/build.html?mastername=135arm64-default&build=f5384fe59be6
build of www/qt6-webengine | qt6-webengine-6.9.3 ended at Sat Nov 22 15:33:34 -00 2025
build time: 1D:14:43:41
Expanding the notes to before and after jemalloc 5.3.0
was merged to main: beefy18 was the main-amd64 builder
before and somewhat after the jemalloc 5.3.0 merge from
vendor branch:
Before: p2650762431ca_s51affb7e971 261:29:13 building 36074 port-packages, start 05 Aug 2025 01:10:59 GMT
( jemalloc 5.3.0 merge from vendor branch: 15 Aug 2025)
After : p9652f95ce8e4_sb45a181a74c 428:49:20 building 36318 port-packages, start 19 Aug 2025 01:30:33 GMT
(The log files are long gone for port-packages built.)
main-15 used a debug jail world but 15.0-RELEASE does not.
I'm not aware of such a port-package builder context for a
non-debug jail world before and after a jemalloc 5.3.0 merge.
When we imported the tip of FreeBSD main at work, we didn't get a cpu change trigger from our tests that I recall...
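A minimal sketch of the two-jail experiment suggested above, assuming one jail whose world was built from a commit just before the jemalloc 5.3.0 merge and one from just after, with the same /usr/src revision checked out in both (the paths, -j value, and repeat count are placeholders, not from the thread):

# inside each jail, run several times:
cd /usr/src
make -s cleanworld > /dev/null
/usr/bin/time -a -o /tmp/buildworld.times make -s -j8 buildworld > /dev/null

# then on the host, compare the elapsed-time columns collected in each jail:
ministat -C 1 /path/to/before-jail/tmp/buildworld.times /path/to/after-jail/tmp/buildworld.times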
On Mon, 8 Dec 2025 02:15:33 +0200
Konstantin Belousov <kib@freebsd.org> wrote:
Next, the change of llvm components to dynamically link with the llvm libs is how upstream does it. Not to mention that this way to use clang+lld saves both disk space (not very important) and memory (much more important).
It wastes time and energy = wastes money, "multiplies CO2 production".
And there is nothing good for the user to justify paying this price.
I have:
Did you note this line?
# pkg version -vI | grep llvm
libclc-llvm15-15.0.7 = up-to-date with index
llvm15-15.0.7_10 = up-to-date with index
llvm17-17.0.6_8 = up-to-date with index
llvm18-18.1.8_2 = up-to-date with index
llvm19-19.1.7_1 = up-to-date with index
There are no crappy libprivateclang.so/libprivatellvm.so shared libs:
# ldd /usr/local/llvm19/bin/clang-19
/usr/local/llvm19/bin/clang-19:
libthr.so.3 => /lib/libthr.so.3 (0x801063000)
libclang-cpp.so.19.1 => /usr/local/llvm19/bin/../lib/libclang-cpp.so.19.1 (0x801200000)
libLLVM.so.19.1 => /usr/local/llvm19/bin/../lib/libLLVM.so.19.1 (0x805c00000)
libc++.so.1 => /lib/libc++.so.1 (0x801092000)
libcxxrt.so.1 => /lib/libcxxrt.so.1 (0x80119b000)
libm.so.5 => /lib/libm.so.5 (0x8011bd000)
libc.so.7 => /lib/libc.so.7 (0x80d663000)
librt.so.1 => /lib/librt.so.1 (0x805bcb000)
libexecinfo.so.1 => /usr/lib/libexecinfo.so.1 (0x805bd4000)
libz.so.6 => /lib/libz.so.6 (0x805bda000)
libzstd.so.1 => /usr/local/lib/libzstd.so.1 (0x80d963000)
libgcc_s.so.1 => /lib/libgcc_s.so.1 (0x80da38000)
libelf.so.2 => /lib/libelf.so.2 (0x80da59000)
[vdso] (0x7ffffffff000)

I am curious about the motivation.
But
# ls /usr/bin/cc
-r-xr-xr-x 6 root wheel 82M Oct 19 18:10:39 2025 /usr/bin/cc*
# ls /usr/local/llvm19/bin/clang-19
-rwxr-xr-x 2 root wheel 125K Aug 18 06:43:31 2025 /usr/local/llvm19/bin/clang-19*
So it is dynamically linked....
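One direct way to check each binary, rather than inferring from the sizes (a sketch; the grep pattern is only illustrative):

# ldd /usr/bin/cc | grep -i -e llvm -e clang
# ldd /usr/local/llvm19/bin/clang-19 | grep -i -e llvm -e clang

A dynamically linked build shows the llvm/clang shared libs here; a static one just makes ldd complain that it is not a dynamic executable.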
....
And we found in the port:
CMAKE_ARGS= -DLLVM_BUILD_LLVM_DYLIB=ON
CMAKE_ARGS+= -DLLVM_LINK_LLVM_DYLIB=ON
(present since the first llvm6 port, commit 372b8a151352984140f74c342a62eae2236b2c2c, and copy-pasted into all subsequent llvm ports by brooks@FreeBSD.org)
According to https://llvm.org/docs/CMake.html:
=============================================================================================
BUILD_SHARED_LIBS is only recommended for use by LLVM developers.
If you want to build LLVM as a shared library, you should use the LLVM_BUILD_LLVM_DYLIB option.
=============================================================================================
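For context, the two build modes that passage distinguishes, written out as plain cmake invocations (a sketch only, run from an llvm-project checkout; the build directory names are placeholders):

# per-component shared libraries: the mode the docs reserve for LLVM developers
cmake -S llvm -B build-shared -DBUILD_SHARED_LIBS=ON
# one combined libLLVM dylib that the tools link against: the mode the port's CMAKE_ARGS above select
cmake -S llvm -B build-dylib -DLLVM_BUILD_LLVM_DYLIB=ON -DLLVM_LINK_LLVM_DYLIB=ON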
So upstream DOES NOT RECOMMEND building shared libs for users!!!
Why does FBSD use shared libs for LLVM in ports and now in base!???
@brooks - why do you do that?
The implied load on rtld is something that could be handled: there is definitely no need to have such a huge surface of exported symbols on both libllvm and especially libclang. Perhaps by default the internal libraries can use protected symbols; normally C++ does not rely on interposing. But such 'fixes' must occur upstream.

So far all the clang toolchain changes were aligning it with what the llvm project does.
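One way to get a feel for the exported-symbol surface mentioned above, using the library paths from the ldd output earlier in the thread (a rough sketch; the counts vary by build):

# nm -D --defined-only /usr/local/llvm19/lib/libLLVM.so.19.1 | wc -l
# nm -D --defined-only /usr/local/llvm19/lib/libclang-cpp.so.19.1 | wc -l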
No, upstream does not recommend using shared libs for llvm users.
--------
Konstantin Belousov writes:
JFYI, shared llvm libs are required for a lot of things. An incomplete list of examples that I am aware of: the dri drivers and the ispc Intel compiler.
But installing the shared libs for those other users does not mean we have to link the compiler itself against the shared lib?