Telcon: 2025 01 22
Wednesday January 22nd, 9am PT (UTC -8:00)
- Peter Scheibel (host)
- Tammy Dahlgren
- Davide Del Vento
- Sean Koyama
- Chris Green
- Kacper Kornet
- Mark Krentel
- Kin Fai TSE (absent for a personal matter; will join next time)
This is for general Q&A - there are no pre-planned topics (feel free to add one)
- Sean: parallel build support (`spack install -j`)
  - You can also run multiple instances of Spack
  - So for `W -> [X, Y, Z]`, if you run two instances of Spack, they will install X and Y simultaneously (see the sketch after this item)
  - See also: https://spack.readthedocs.io/en/latest/packaging_guide.html#install-level-build-parallelism
  - See also: https://github.com/spack/spack/pull/47590
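  A minimal sketch of the "multiple instances" approach, assuming a hypothetical environment name `myenv` and that `spack` is on `PATH`; Spack's database locks coordinate the two processes:

  ```python
  # Launch two independent Spack instances against the same environment;
  # each picks up whichever ready-to-build package is not yet locked
  # (here: X and Y under W -> [X, Y, Z]).
  import subprocess

  procs = [subprocess.Popen(["spack", "-e", "myenv", "install"]) for _ in range(2)]
  for p in procs:
      p.wait()  # wait for both installs to finish
  ```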
- Chris: ping on
  - https://github.com/spack/spack/issues/48203 (unexpected behavior with include concrete)
  - https://github.com/spack/spack/pull/46792 (includes as config)
    - Tammy: target deadline is end of month
  - https://github.com/spack/spack/pull/45189 (compilers as nodes)
    - Chris: looked at the alpha3 preview and evaluated compatibility with our fork
    - Additional issue: building a compiler using a native compiler fails
    - Chris: should I be reporting the issue against alpha3?
    - Project for checking 1.0 compatibility: https://github.com/marcpaterno/spack_retreat/issues
      - (these aren't necessarily bugs in Spack)
- Mark: I have a pre-question about Maven builds
  - Peter: there is a `MavenPackage` build system in Spack
  - Mark: what about Maven dependency management? Is this effectively bypassing Spack dependency management?
  - Chris: speaking from current experience working on a Java project using Maven: it does take over the world and manage/vendor the vast majority of dependencies and build options. In the Spack recipe repository, there are a couple of recipes with dependencies outside Java/Maven/C/C++. It might be instructive to look at those to see how those dependencies are integrated into the Maven build. Try looking at `hadoop-xrootd` (a sketch of a `MavenPackage` recipe follows this item).
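  A minimal sketch of a `MavenPackage` recipe; the package name, URL, and checksum are placeholders, not a real package:

  ```python
  from spack.package import *

  class Example(MavenPackage):
      """Hypothetical Java project built with Maven."""

      homepage = "https://example.org"
      url = "https://example.org/example-1.0.tar.gz"

      version("1.0", sha256="0" * 64)  # placeholder checksum

      # Maven resolves and vendors the Java dependency tree itself at build
      # time; only dependencies outside that world are expressed to Spack.
      depends_on("java@11:", type=("build", "run"))
  ```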
- Davide: ping on llvm issue (flang vs. flang-new): https://github.com/spack/spack/issues/48253
- [Packaging] Proposal: Externalize package versions
  Shall we consider externalizing versions from packages? I.e., read a `versions.yaml` next to `package.py` and programmatically fill versions and deps in the `Package` class from `versions.yaml`. Such an approach is much easier to automate (a sketch follows below).
  - Current workflow:
    - add a line in `package.py`
    - manually verify the sha256
    - merge via PR
  - (new) semi-automated workflow:
    - a likely write-once `package.py` that reads `versions.yaml` for (version, source, hash, deps, etc.)
    - generate `versions.yaml` from a repo index (e.g. PyPI); can be done from CI on a trusted platform
    - new versions are added automatically; only build bugs would be fixed via PR
  Author: @kftsehk (@kftse-ust-hk) from HKUST
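  A sketch of what the proposed write-once recipe could look like, assuming a `versions.yaml` with a top-level `versions` list and that a YAML loader is importable from the recipe (both are assumptions, not an existing Spack feature):

  ```python
  import pathlib

  import yaml  # assumption: a YAML loader is available to recipes

  from spack.package import *

  # Expected versions.yaml shape (assumption):
  #   versions:
  #     - version: "1.2.3"
  #       url: https://example.org/pkg-1.2.3.tar.gz
  #       sha256: <hash>
  _VERSIONS = yaml.safe_load(
      (pathlib.Path(__file__).parent / "versions.yaml").read_text()
  )

  class Pkg(Package):
      """Hypothetical package whose versions are filled programmatically."""

      homepage = "https://example.org"

      for entry in _VERSIONS["versions"]:
          version(entry["version"], sha256=entry["sha256"], url=entry["url"])
  ```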
- [Packaging] Proposal: Support for platform-dependent sources
  Had a case where it was difficult to incorporate platform-dependent sources, since the `Version` API does not support `platform/os/target/arch`. It turns out the `Resource` API has it, so we worked around it by:
  - declaring a `version.json` as the source for every `Version`
  - declaring each `(platform, os, target, version) => (installer_url, sha256)` pair as a `Resource` conditioned on `when="@x.y.z platform=... os=... target=..."`
  Found that this approach works with the cache as well (a sketch follows below).
  A working example: https://github.com/spack/spack/pull/48603
  Author: @kftsehk (@kftse-ust-hk) from HKUST
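  A sketch of the workaround under the assumptions above (placeholder URLs and hashes; see the PR for a working example):

  ```python
  from spack.package import *

  class Example(Package):
      """Hypothetical package with platform-dependent binary sources."""

      homepage = "https://example.org"

      # The version points at a tiny platform-neutral file.
      version("1.0", sha256="0" * 64, url="https://example.org/version.json", expand=False)

      # One resource per (version, platform, target), selected via when=.
      resource(
          name="installer",
          url="https://example.org/example-1.0-linux-x86_64.run",
          sha256="1" * 64,
          expand=False,
          when="@1.0 platform=linux target=x86_64",
      )
      resource(
          name="installer",
          url="https://example.org/example-1.0-linux-aarch64.run",
          sha256="2" * 64,
          expand=False,
          when="@1.0 platform=linux target=aarch64",
      )
  ```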
- [Automation] Proposal: Update `py-*` from the PyPI package index
  I think it can be achieved given:
  - we allow programmatically generated `versions.yaml` files (e.g. via a CI bot that pulls some index; see the sketch below)
  - a sufficiently powerful `versions.yaml` that allows specifying the complex dependency-chain resolution of PyPI packages:
    - `python_version`
    - pip package deps specified as ranges
  Possibly in another repo, not the `builtin` one, as it follows an entirely different update strategy.
  Author: @kftsehk (@kftse-ust-hk) from HKUST
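  A rough sketch of the kind of CI step implied here, using PyPI's public JSON API; the emitted `versions.yaml` schema is an assumption:

  ```python
  import json
  import urllib.request

  def pypi_versions(name: str):
      """Yield (version, sdist_url, sha256) for each release with an sdist."""
      with urllib.request.urlopen(f"https://pypi.org/pypi/{name}/json") as resp:
          data = json.load(resp)
      for ver, files in data["releases"].items():
          for f in files:
              if f["packagetype"] == "sdist":
                  yield ver, f["url"], f["digests"]["sha256"]

  # Emit versions.yaml-style entries for a sample package.
  for ver, url, sha in pypi_versions("flake8"):
      print(f"- version: '{ver}'\n  url: {url}\n  sha256: {sha}")
  ```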
- [Developer Experience] Proposal: Local locks
  Add to `config:locks = [false|true]` a new value, `local`, which behaves like:
  - creating `.spack-db/{lock,prefix_lock,failure_lock}` on a local filesystem, e.g. `${tempdir}` or `/run/user`
  - using those for locking instead of `.spack-db/*lock`
  This would greatly improve package build efficiency:
  - `locks=false`: cannot build concurrently due to potential corruption
  - `locks=true`: can build from multiple machines, but with significant lock overhead (a read lock for the whole dependency tree, and a write lock for the installing package)
  - `locks=local` (new): allows concurrent building on a single machine; does not work with distributed installation.
  Works especially well for the case of, e.g., a 256-core machine connected to storage via a traditional NFS protocol, where locking is expensive. A modern HPC node with 256 cores would be overkill even for building 4-5 packages concurrently; most of the time is spent in serial `./configure` runs.
  I had tested this approach by:
  - symlinking the `.spack-db/{lock,prefix_lock,failure_lock}` files to files in `/dev/shm` (see the sketch below)
  - running `spack -e <env-name> install --only-concrete &` for 4 or 5 envs at a time
  Author: @kftsehk (@kftse-ust-hk) from HKUST
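  A sketch reproducing the tested symlink workaround (not the proposed `locks=local` feature itself); the install-tree path is an illustrative assumption:

  ```python
  import os
  import pathlib

  # Assumed location of the database directory inside the install tree.
  SPACK_DB = pathlib.Path.home() / "spack" / "opt" / "spack" / ".spack-db"
  SHM = pathlib.Path("/dev/shm") / f"spack-locks-{os.getuid()}"
  SHM.mkdir(parents=True, exist_ok=True)

  # Point each lock file at a counterpart on the local in-memory
  # filesystem so lock traffic stays off NFS.
  for name in ("lock", "prefix_lock", "failure_lock"):
      lock, shm_lock = SPACK_DB / name, SHM / name
      shm_lock.touch()
      if lock.exists() or lock.is_symlink():
          lock.unlink()
      lock.symlink_to(shm_lock)
  ```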
- [Docs] Advice on performance: site installations should precompile `__pycache__` for read-only users
  If you are offering a read-only Spack instance to your colleagues, make sure you compile all Python files' `.pyc` caches using `python -m compileall` for every Python minor version (`3.8`, `3.9`, ...) your colleagues might use (a sketch follows below). Only one pre-compile is needed per Python minor release, i.e. the same cache serves Python `3.9.1`, `3.9.2`, etc. If not, Spack will be vastly slower for your users, due to approximately 5 failed filesystem accesses for every Python file used without a cache.
  If a user can neither find nor write a cache, `strace spack <command>` shows a flow resembling the following (I don't have a trace at hand):
  - stat the cache dir `__pycache__`: FAIL
  - attempt to create the dir: FAIL
  - stat `__pycache__/[name].pyc`: FAIL
  - JIT-compile the bytecode
  - attempt to create the dir: FAIL
  - open `__pycache__/[name].pyc` for writing: FAIL
  - exec the compiled bytecode
  This is the case for ALL Python scripts (e.g. those installed from other packages into `$spack/opt/share`, too).
  Side note: for network-based filesystems, be sure to turn on some directory attribute caching (it can be as short as 3-5s), otherwise Python will `stat` every source directory on the way to the `.py`/`.pyc` files too.
  Author: @kftsehk (@kftse-ust-hk) from HKUST. Backer: @becker33 [in Slack channel]
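  The same pre-compile can be driven from Python via the stdlib `compileall` module (equivalent to `python -m compileall`); the path is illustrative, and this must be run once per Python minor version in use:

  ```python
  import compileall

  # Walk the read-only site installation and write __pycache__ entries.
  compileall.compile_dir("/opt/spack", quiet=1, workers=0)  # workers=0: all CPUs
  ```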