GNU bug report logs -
#77432
[PATCH core-packages-team] build-system/gnu: Limit load average.
Report forwarded to andreas <at> enge.fr, janneke <at> gnu.org, ludo <at> gnu.org, z572 <at> z572.online, guix-patches <at> gnu.org: bug#77432; Package guix-patches. (Tue, 01 Apr 2025 14:01:11 GMT)
Acknowledgement sent to Greg Hogan <code <at> greghogan.com>: New bug report received and forwarded. Copy sent to andreas <at> enge.fr, janneke <at> gnu.org, ludo <at> gnu.org, z572 <at> z572.online, guix-patches <at> gnu.org. (Tue, 01 Apr 2025 14:01:12 GMT)
Message #5 received at submit <at> debbugs.gnu.org (full text, mbox):
A nice feature of offload builds is that Guix will throttle the start of new
jobs based on the overload-threshold. There is no equivalent for local builds,
so one must either run builds in serial (--max-jobs=1) and endure
single-threaded builds or run concurrent builds and watch the system overload
as it runs multiple multi-threaded builds.
I have been testing this "max-load" setting in both the gnu (attached) and cmake
(soon on the c++-team branch) build systems. Both make and ninja delay
starting a new build action while the system is overloaded (that is, when the
number of running processes exceeds the number of processors, as configured
below). ctest has a similar option, "test-load", which compares against the
system load average.
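These load-limiting flags can be tried outside Guix as well; a minimal sketch,
assuming GNU Make is installed, using a throwaway Makefile:

```shell
# Throwaway Makefile whose sole target just echoes a marker.
demo_dir=$(mktemp -d)
printf 'all:\n\t@echo built\n' > "$demo_dir/Makefile"

# -j allows up to `nproc` parallel jobs; --max-load (-l) tells make not to
# start new jobs while the load average exceeds the processor count --
# the same pair of flags the patch below passes to make.
(cd "$demo_dir" && make -s -j"$(nproc)" --max-load="$(nproc)")
# prints "built"
```

Ninja and CTest expose analogous limits as `ninja -l N` and
`ctest --test-load N`.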
In the following benchmark, comparing concurrent builds of Folly, the
"max-load" option reduced the overall build time by 8.3%. Memory use also
drops considerably since we are only running 1/4 of the processes at any time.
If this is too late for inclusion in the core-packages-team branch, I would
appreciate consideration from the team for inclusion on the c++-team branch.
--8<---------------cut here---------------start------------->8---
$ guix shell -D folly
$ CONCURRENT=4
$ for i in `seq 1 $CONCURRENT` ; do rm -rf build$i ; mkdir build$i ; cd build$i ; cmake ../folly & cd .. ; done ; wait
$ time bash -c 'for i in `seq 1 '$CONCURRENT'` ; do cd build$i ; make -j`nproc` -l`nproc` & cd .. ; done ; wait'
real 2m18.669s
user 28m41.383s
sys 4m35.790s
$ for i in `seq 1 $CONCURRENT` ; do rm -rf build$i ; mkdir build$i ; cd build$i ; cmake ../folly & cd .. ; done ; wait
$ time bash -c 'for i in `seq 1 '$CONCURRENT'` ; do cd build$i ; make -j`nproc`& cd .. ; done ; wait'
real 2m31.158s
user 30m44.591s
sys 4m34.438s
--8<---------------cut here---------------end--------------->8---
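As a sanity check of the 8.3% figure, the two wall-clock times above can be
compared directly (plain arithmetic on the `real` timings; no other
assumptions):

```shell
# real times from the two runs, converted to seconds:
#   with  --max-load (-l): 2m18.669s = 138.669 s
#   without a load limit:  2m31.158s = 151.158 s
awk 'BEGIN { printf "%.1f%%\n", 100 * (151.158 - 138.669) / 151.158 }'
# prints "8.3%"
```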
* guix/build/gnu-build-system.scm (build, check): Set max load.
Change-Id: I97f1e3e59880b6ed23faed2038eb5279415e9c95
---
guix/build/gnu-build-system.scm | 9 +++++++--
1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/guix/build/gnu-build-system.scm b/guix/build/gnu-build-system.scm
index 0b94416a8d..e324c3488c 100644
--- a/guix/build/gnu-build-system.scm
+++ b/guix/build/gnu-build-system.scm
@@ -28,6 +28,7 @@ (define-module (guix build gnu-build-system)
#:use-module (ice-9 regex)
#:use-module (ice-9 format)
#:use-module (ice-9 ftw)
+ #:use-module (ice-9 threads)
#:use-module (srfi srfi-1)
#:use-module (srfi srfi-19)
#:use-module (srfi srfi-34)
@@ -385,7 +386,9 @@ (define* (build #:key (make-flags '()) (parallel-build? #t)
#:allow-other-keys)
(apply invoke "make"
`(,@(if parallel-build?
- `("-j" ,(number->string (parallel-job-count)))
+ `("-j" ,(number->string (parallel-job-count))
+ ,(string-append "--max-load="
+ (number->string (total-processor-count))))
'())
,@make-flags)))
@@ -424,7 +427,9 @@ (define* (check #:key target (make-flags '()) (tests? (not target))
(raise c)))
(apply invoke "make" test-target
`(,@(if parallel-tests?
- `("-j" ,(number->string (parallel-job-count)))
+ `("-j" ,(number->string (parallel-job-count))
+ ,(string-append "--max-load="
+ (number->string (total-processor-count))))
'())
,@make-flags)))
(format #t "test suite not run~%")))
base-commit: eb04a0d2c955f5fa9a721537c8202fc5c5959b19
--
2.49.0
Reply sent to Ludovic Courtès <ludo <at> gnu.org>: You have taken responsibility. (Thu, 03 Apr 2025 09:05:01 GMT)
Notification sent to Greg Hogan <code <at> greghogan.com>: Bug acknowledged by developer. (Thu, 03 Apr 2025 09:05:02 GMT)
Message #10 received at 77432-done <at> debbugs.gnu.org (full text, mbox):
Hi Greg,
Greg Hogan <code <at> greghogan.com> skribis:
> A nice feature of offload builds is that Guix will throttle the start of new
> jobs based on the overload-threshold. There is no equivalent for local builds,
> so one must either run builds in serial (--max-jobs=1) and endure
> single-threaded builds or run concurrent builds and watch the system overload
> as it runs multiple multi-threaded builds.
>
> I have been testing this "max-load" setting in both the gnu (attached) and cmake
> (soon on the c++-team branch) build systems. Both make and ninja delay
> starting a new build action while the system is overloaded (that is, when
> the number of running processes exceeds the number of processors, as
> configured below). ctest has a similar option, "test-load", which compares
> against the system load average.
>
> In the following benchmark, comparing concurrent builds of Folly, the
> "max-load" option reduced the overall build time by 8.3%. Memory use also
> drops considerably since we are only running 1/4 of the processes at any time.
>
> If this is too late for inclusion in the core-packages-team branch, I would
> appreciate consideration from the team for inclusion on the c++-team branch.
It’s not too late since we’re waiting for bug fixes in Gash to land. I
stripped the commit log a bit and applied it.
Thanks for this welcome improvement!
Ludo’.
GNU bug tracking system
Copyright (C) 1999 Darren O. Benham,
1997,2003 nCipher Corporation Ltd,
1994-97 Ian Jackson.