Hello ports@,
I wanted to give some context around several new port submissions I have pending and to outline my ongoing efforts to improve FreeBSD's HPC (High-Performance Computing) software stack.
Recent activity:
Took over maintainership of sysutils/slurm-wlm (bug #288600)
Submitted new ports: devel/py-reframe (bug #289292), sysutils/py-clustershell (bug #289176), devel/spack (bug #289296)
Planned/ongoing ports:
devel/mpifileutils (in progress)
sysutils/openpmix (planned)
sysutils/prrte (planned)
sysutils/flux-core (planned)
sysutils/flux-sched (planned)
These tools are widely used at Tier-0/1 HPC centers (e.g. Spack for package management, ReFrame for regression testing, OpenPMIx/PRRTE as MPI runtime foundations, Flux as a next-generation workload manager). My current goal is to make FreeBSD a viable HPC platform by ensuring these pieces are available and functional.
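To give a concrete feel for the regression-testing piece, below is a minimal sketch of the kind of ReFrame check that devel/py-reframe would let sites run on FreeBSD. The test class, source file, and configuration names are hypothetical, and it assumes ReFrame's builtin-based test API (3.x or later):

    # Hypothetical example: a trivial compile-and-run ReFrame check.
    # Assumes the ReFrame >= 3.x builtin-based test API.
    import reframe as rfm
    import reframe.utility.sanity as sn


    @rfm.simple_test
    class HelloWorldTest(rfm.RegressionTest):
        """Compile and run a trivial hello-world program."""
        valid_systems = ['*']        # run on any configured system
        valid_prog_environs = ['*']  # and any programming environment
        sourcepath = 'hello.c'       # hypothetical source file in src/

        @sanity_function
        def assert_output(self):
            # Pass only if the expected greeting appears on stdout.
            return sn.assert_found(r'Hello, World', self.stdout)

A check like this would typically be driven with something along the lines of "reframe -C site-config.py -c hello_test.py -r", where the configuration and test file names are again placeholders.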
I would appreciate feedback on:
Whether there are other HPC-relevant software packages I should target.
Any pitfalls or best practices to keep in mind while scaling this effort.
Potential co-maintainers or testers interested in this space.
My aim is to make FreeBSD a serious option for scientific computing and large-scale HPC environments, and I welcome any input from the community.

--
Best regards,
Rikka