Package: guix-patches;
Reported by: Morgan Smith <Morgan.J.Smith <at> outlook.com>
Date: Thu, 13 Mar 2025 19:49:02 UTC
Severity: normal
Tags: patch
To reply to this bug, email your comments to 76999 <at> debbugs.gnu.org.
Message #5 received at submit <at> debbugs.gnu.org (Thu, 13 Mar 2025 19:49:02 GMT):
From: Morgan Smith <Morgan.J.Smith <at> outlook.com>
To: guix-patches <at> gnu.org
Cc: Morgan Smith <Morgan.J.Smith <at> outlook.com>
Subject: [PATCH 0/2] gnu: llama-cpp: Update to 0.0.0-b4882.
Date: Thu, 13 Mar 2025 15:42:37 -0400
I was having some trouble running llama-cpp: it didn't have the ability to
download things, and the Python scripts didn't seem to have their
dependencies.

This no longer installs "convert_hf_to_gguf_update.py", but that didn't work
before this patch series anyway.

Morgan Smith (2):
  gnu: Add python-gguf-llama-cpp.
  gnu: llama-cpp: Update to 0.0.0-b4882.

 gnu/local.mk                                  |  1 -
 gnu/packages/machine-learning.scm             | 49 +++++++++++++------
 .../patches/llama-cpp-vulkan-optional.patch   | 38 --------------
 3 files changed, 35 insertions(+), 53 deletions(-)
 delete mode 100644 gnu/packages/patches/llama-cpp-vulkan-optional.patch

-- 
2.48.1
Message #8 received at 76999 <at> debbugs.gnu.org (Thu, 13 Mar 2025 21:20:02 GMT):
From: Morgan Smith <Morgan.J.Smith <at> outlook.com>
To: 76999 <at> debbugs.gnu.org
Cc: Morgan Smith <Morgan.J.Smith <at> outlook.com>
Subject: [PATCH 1/2] gnu: Add python-gguf-llama-cpp.
Date: Thu, 13 Mar 2025 17:18:50 -0400
* gnu/packages/machine-learning.scm (python-gguf-llama-cpp): New variable.

Change-Id: I1c1b5f5956e3acb380b56816d180f53243b741fa
---
 gnu/packages/machine-learning.scm | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/gnu/packages/machine-learning.scm b/gnu/packages/machine-learning.scm
index 246b004156..ee5feb58fc 100644
--- a/gnu/packages/machine-learning.scm
+++ b/gnu/packages/machine-learning.scm
@@ -6490,6 +6490,21 @@ (define-public python-gguf
     (description "A Python library for reading and writing GGUF & GGML format ML models.")
     (license license:expat)))
 
+(define-public python-gguf-llama-cpp
+  (package/inherit python-gguf
+    (version "0.16.0")
+    (source (package-source llama-cpp))
+    (propagated-inputs (list python-numpy python-pyyaml python-sentencepiece
+                             python-tqdm))
+    (native-inputs (list python-poetry-core))
+    (arguments
+     (substitute-keyword-arguments (package-arguments python-gguf)
+       ((#:phases phases #~%standard-phases)
+        #~(modify-phases #$phases
+            (add-after 'unpack 'chdir
+              (lambda _
+                (chdir "gguf-py")))))))))
+
 (define-public python-gymnasium
   (package
     (name "python-gymnasium")
-- 
2.48.1
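The definition above combines three Guix idioms: package/inherit reuses an
existing package record, (package-source llama-cpp) builds the variant from
another package's source tree, and substitute-keyword-arguments splices an
extra phase in front of the inherited ones.  A minimal sketch of the same
pattern, where base-pkg, other-pkg, and "subdir" are hypothetical
placeholders rather than packages from this series:

    ;; Sketch only: base-pkg, other-pkg, and "subdir" are stand-ins.
    (define-public base-pkg-variant
      (package/inherit base-pkg
        (version "1.2.3")
        ;; Build from another package's source tree...
        (source (package-source other-pkg))
        (arguments
         (substitute-keyword-arguments (package-arguments base-pkg)
           ((#:phases phases #~%standard-phases)
            ;; ...and enter the subdirectory holding this component
            ;; before the inherited phases run.
            #~(modify-phases #$phases
                (add-after 'unpack 'chdir
                  (lambda _
                    (chdir "subdir")))))))))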
Message #11 received at 76999 <at> debbugs.gnu.org (Thu, 13 Mar 2025 21:21:01 GMT):
From: Morgan Smith <Morgan.J.Smith <at> outlook.com>
To: 76999 <at> debbugs.gnu.org
Cc: Morgan Smith <Morgan.J.Smith <at> outlook.com>
Subject: [PATCH 2/2] gnu: llama-cpp: Update to 0.0.0-b4882.
Date: Thu, 13 Mar 2025 17:18:52 -0400
* gnu/packages/machine-learning.scm (llama-cpp): Update to 0.0.0-b4882.
[inputs]: Add curl, glslang, and python-gguf-llama-cpp.
[native-inputs]: bash -> bash-minimal.
[source, homepage]: Update URL.
[python-scripts]: Check that we can run them.
[fix-tests]: Fix an additional test.
* gnu/packages/patches/llama-cpp-vulkan-optional.patch: Delete.
* gnu/local.mk: Unregister patch.

Change-Id: Ic297534cd142cb83e3964eae21b4eb807b74e9bc
---
 gnu/local.mk                                  |  1 -
 gnu/packages/machine-learning.scm             | 41 +++++++++++--------
 .../patches/llama-cpp-vulkan-optional.patch   | 38 --------------
 3 files changed, 25 insertions(+), 55 deletions(-)
 delete mode 100644 gnu/packages/patches/llama-cpp-vulkan-optional.patch

diff --git a/gnu/local.mk b/gnu/local.mk
index 5425095e1d..dcff631515 100644
--- a/gnu/local.mk
+++ b/gnu/local.mk
@@ -1841,7 +1841,6 @@ dist_patch_DATA = \
   %D%/packages/patches/mcrypt-CVE-2012-4527.patch \
   %D%/packages/patches/libmemcached-build-with-gcc7.patch \
   %D%/packages/patches/libmhash-hmac-fix-uaf.patch \
-  %D%/packages/patches/llama-cpp-vulkan-optional.patch \
   %D%/packages/patches/llhttp-ponyfill-object-fromentries.patch \
   %D%/packages/patches/lvm2-no-systemd.patch \
   %D%/packages/patches/maturin-no-cross-compile.patch \
diff --git a/gnu/packages/machine-learning.scm b/gnu/packages/machine-learning.scm
index ee5feb58fc..b173f54fec 100644
--- a/gnu/packages/machine-learning.scm
+++ b/gnu/packages/machine-learning.scm
@@ -77,6 +77,7 @@ (define-module (gnu packages machine-learning)
   #:use-module (gnu packages cmake)
   #:use-module (gnu packages cpp)
   #:use-module (gnu packages cran)
+  #:use-module (gnu packages curl)
   #:use-module (gnu packages databases)
   #:use-module (gnu packages dejagnu)
   #:use-module (gnu packages documentation)
@@ -585,7 +586,7 @@ (define-public guile-aiscm-next
   (deprecated-package "guile-aiscm-next" guile-aiscm))
 
 (define-public llama-cpp
-  (let ((tag "b4549"))
+  (let ((tag "b4882"))
     (package
       (name "llama-cpp")
       (version (string-append "0.0.0-" tag))
@@ -593,19 +594,19 @@ (define-public llama-cpp
        (origin
          (method git-fetch)
          (uri (git-reference
-               (url "https://github.com/ggerganov/llama.cpp")
+               (url "https://github.com/ggml-org/llama.cpp")
                (commit tag)))
          (file-name (git-file-name name tag))
          (sha256
-          (base32 "1xf2579q0r8nv06kj8padi6w9cv30w58vdys65nq8yzm3dy452a1"))
-         (patches
-          (search-patches "llama-cpp-vulkan-optional.patch"))))
+          (base32 "1mhh4293lgvyvyq58hpphqk18n5g2zadafpdf9icf7xlj0cf7bqc"))))
       (build-system cmake-build-system)
       (arguments
        (list
        #:configure-flags
-       #~(list "-DBUILD_SHARED_LIBS=ON"
+       #~(list #$(string-append "-DGGML_BUILD_NUMBER=" tag)
+               "-DBUILD_SHARED_LIBS=ON"
                "-DGGML_VULKAN=ON"
+               "-DLLAMA_CURL=ON"
                "-DGGML_BLAS=ON"
                "-DGGML_BLAS_VENDOR=OpenBLAS"
                (string-append "-DBLAS_INCLUDE_DIRS="
@@ -635,13 +636,16 @@ (define-public llama-cpp
               (substitute* "ggml/src/ggml-vulkan/vulkan-shaders/vulkan-shaders-gen.cpp"
                 (("\"/bin/sh\"")
                  (string-append "\"" (search-input-file inputs "/bin/sh") "\"")))))
-           (add-after 'unpack 'disable-unrunable-tests
+           (add-after 'unpack 'fix-tests
              (lambda _
               ;; test-eval-callback downloads ML model from network, cannot
               ;; run in Guix build environment
               (substitute* '("examples/eval-callback/CMakeLists.txt")
                 (("COMMAND llama-eval-callback")
-                 "COMMAND true llama-eval-callback"))))
+                 "COMMAND true llama-eval-callback"))
+              ;; Help it find the test files it needs
+              (substitute* "tests/test-chat.cpp"
+                (("\"\\.\\./\"") "\"../source/\""))))
            (add-before 'install 'install-python-scripts
              (lambda _
                (let ((bin (string-append #$output "/bin/")))
@@ -657,23 +661,28 @@ (define-public llama-cpp
                             (get-string-all input))))))
                    (chmod (string-append bin script) #o555)))
                  (mkdir-p bin)
-                 (make-script "convert_hf_to_gguf")
-                 (make-script "convert_llama_ggml_to_gguf")
-                 (make-script "convert_hf_to_gguf_update.py"))))
-           (add-after 'install-python-scripts 'wrap-python-scripts
-             (assoc-ref python:%standard-phases 'wrap))
+                 (for-each
+                  (lambda (file)
+                    (make-script file)
+                    ;; Run script as a sanity check
+                    (invoke (string-append bin file) "-h"))
+                  '(;; involves adding python-transformers package which looks involved.
+                    ;; "convert_hf_to_gguf_update.py"
+                    "convert_hf_to_gguf"
+                    "convert_llama_ggml_to_gguf")))))
           (add-after 'install 'remove-tests
             (lambda* (#:key outputs #:allow-other-keys)
              (for-each delete-file
                        (find-files
                         (string-append (assoc-ref outputs "out") "/bin")
                         "^test-")))))))
-      (inputs (list python vulkan-headers vulkan-loader))
-      (native-inputs (list pkg-config shaderc bash))
+      (inputs (list curl glslang python python-gguf-llama-cpp
+                    vulkan-headers vulkan-loader))
+      (native-inputs (list pkg-config shaderc bash-minimal))
       (propagated-inputs
       (list python-numpy python-pytorch python-sentencepiece openblas))
       (properties '((tunable? . #true))) ;use AVX512, FMA, etc. when available
-      (home-page "https://github.com/ggerganov/llama.cpp")
+      (home-page "https://github.com/ggml-org/llama.cpp")
       (synopsis "Port of Facebook's LLaMA model in C/C++")
       (description "This package provides a port to Facebook's LLaMA collection
 of foundation language models.  It requires models parameters to be downloaded
diff --git a/gnu/packages/patches/llama-cpp-vulkan-optional.patch b/gnu/packages/patches/llama-cpp-vulkan-optional.patch
deleted file mode 100644
index 43a49b6a02..0000000000
--- a/gnu/packages/patches/llama-cpp-vulkan-optional.patch
+++ /dev/null
@@ -1,38 +0,0 @@
-Author: Danny Milosavljevic <dannym <at> friendly-machines.com>
-Date: 2025-01-29
-License: Expat
-Subject: Make Vulkan optional
-
-See also: <https://github.com/ggerganov/llama.cpp/pull/11494>
-
-diff -ru orig/llama.cpp/ggml/include/ggml-vulkan.h llama.cpp/ggml/include/ggml-vulkan.h
---- orig/llama.cpp/ggml/include/ggml-vulkan.h  2025-01-29 10:24:10.894476682 +0100
-+++ llama.cpp/ggml/include/ggml-vulkan.h  2025-02-07 18:28:34.509509638 +0100
-@@ -10,8 +10,6 @@
- #define GGML_VK_NAME "Vulkan"
- #define GGML_VK_MAX_DEVICES 16
- 
--GGML_BACKEND_API void ggml_vk_instance_init(void);
--
- // backend API
- GGML_BACKEND_API ggml_backend_t ggml_backend_vk_init(size_t dev_num);
- 
-diff -ru orig/llama.cpp/ggml/src/ggml-vulkan/ggml-vulkan.cpp llama.cpp/ggml/src/ggml-vulkan/ggml-vulkan.cpp
---- orig/llama.cpp/ggml/src/ggml-vulkan/ggml-vulkan.cpp  2025-01-29 10:24:10.922476480 +0100
-+++ llama.cpp/ggml/src/ggml-vulkan/ggml-vulkan.cpp  2025-01-29 22:33:19.955087552 +0100
-@@ -8174,8 +8174,13 @@
-     /* .iface = */ ggml_backend_vk_reg_i,
-     /* .context = */ nullptr,
- };
--
--    return &reg;
-+    try {
-+        ggml_vk_instance_init();
-+        return &reg;
-+    } catch (const vk::SystemError& e) {
-+        VK_LOG_DEBUG("ggml_vk_get_device_count() -> Error: System error: " << e.what());
-+        return nullptr;
-+    }
- }
- 
- // Extension availability
-- 
2.48.1
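One subtlety in the configure-flags hunk above: the new flag is written
#$(string-append "-DGGML_BUILD_NUMBER=" tag) rather than a plain
(string-append ...) inside the gexp.  The variable tag is only bound on the
host side, where the package is defined, so the string must be computed
there and spliced into the build-side expression with #$ (ungexp).  A small
sketch of the distinction, with illustrative values:

    (use-modules (guix gexp))

    (define tag "b4882")

    ;; The host-side value is spliced in: the resulting gexp already
    ;; contains the fully computed flag string, so nothing on the
    ;; build side needs to know about `tag'.
    (define flags
      #~(list #$(string-append "-DGGML_BUILD_NUMBER=" tag)
              "-DBUILD_SHARED_LIBS=ON"))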
Message #14 received at 76999 <at> debbugs.gnu.org (Mon, 31 Mar 2025 22:52:01 GMT):
From: Morgan Smith <Morgan.J.Smith <at> outlook.com>
To: 76999 <at> debbugs.gnu.org
Cc: Morgan Smith <Morgan.J.Smith <at> outlook.com>
Subject: [PATCH v2 1/2] gnu: Add python-gguf-llama-cpp.
Date: Mon, 31 Mar 2025 18:50:46 -0400
* gnu/packages/machine-learning.scm (python-gguf-llama-cpp): New variable.

Change-Id: I1c1b5f5956e3acb380b56816d180f53243b741fa
---
 gnu/packages/machine-learning.scm | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/gnu/packages/machine-learning.scm b/gnu/packages/machine-learning.scm
index 7fdf5f37ee..7cb807ae91 100644
--- a/gnu/packages/machine-learning.scm
+++ b/gnu/packages/machine-learning.scm
@@ -6544,6 +6544,21 @@ (define-public python-gguf
     (description "A Python library for reading and writing GGUF & GGML format ML models.")
     (license license:expat)))
 
+(define-public python-gguf-llama-cpp
+  (package/inherit python-gguf
+    (version "0.16.0")
+    (source (package-source llama-cpp))
+    (propagated-inputs (list python-numpy python-pyyaml python-sentencepiece
+                             python-tqdm))
+    (native-inputs (list python-poetry-core))
+    (arguments
+     (substitute-keyword-arguments (package-arguments python-gguf)
+       ((#:phases phases #~%standard-phases)
+        #~(modify-phases #$phases
+            (add-after 'unpack 'chdir
+              (lambda _
+                (chdir "gguf-py")))))))))
+
 (define-public python-gymnasium
   (package
     (name "python-gymnasium")

base-commit: e2c2f98edd5d64921678c2570439dedfe662b1f8
-- 
2.49.0
Message #17 received at 76999 <at> debbugs.gnu.org (Mon, 31 Mar 2025 22:52:02 GMT):
From: Morgan Smith <Morgan.J.Smith <at> outlook.com>
To: 76999 <at> debbugs.gnu.org
Cc: Morgan Smith <Morgan.J.Smith <at> outlook.com>
Subject: [PATCH v2 2/2] gnu: llama-cpp: Update to 0.0.0-b5013.
Date: Mon, 31 Mar 2025 18:50:48 -0400
* gnu/packages/machine-learning.scm (llama-cpp): Update to 0.0.0-b5013.
[inputs]: Add curl, glslang, and python-gguf-llama-cpp.
[native-inputs]: bash -> bash-minimal.
[source, homepage]: Update URL.
[python-scripts]: Rely on upstream to install them.  Delete phase.
[fix-tests]: Fix an additional test.
* gnu/packages/patches/llama-cpp-vulkan-optional.patch: Delete.
* gnu/local.mk: Unregister patch.

Change-Id: Ic297534cd142cb83e3964eae21b4eb807b74e9bc
---
 gnu/local.mk                                  |  1 -
 gnu/packages/machine-learning.scm             | 47 +++++++------------
 .../patches/llama-cpp-vulkan-optional.patch   | 38 ---------------
 3 files changed, 17 insertions(+), 69 deletions(-)
 delete mode 100644 gnu/packages/patches/llama-cpp-vulkan-optional.patch

diff --git a/gnu/local.mk b/gnu/local.mk
index f03fcb14fc..00b1a7a959 100644
--- a/gnu/local.mk
+++ b/gnu/local.mk
@@ -1845,7 +1845,6 @@ dist_patch_DATA = \
   %D%/packages/patches/libmhash-hmac-fix-uaf.patch \
   %D%/packages/patches/libmodbus-disable-networking-test.patch \
   %D%/packages/patches/lib-tl-for-telegram-memcpy.patch \
-  %D%/packages/patches/llama-cpp-vulkan-optional.patch \
   %D%/packages/patches/llhttp-ponyfill-object-fromentries.patch \
   %D%/packages/patches/lvm2-no-systemd.patch \
   %D%/packages/patches/maturin-no-cross-compile.patch \
diff --git a/gnu/packages/machine-learning.scm b/gnu/packages/machine-learning.scm
index 7cb807ae91..84be26cf35 100644
--- a/gnu/packages/machine-learning.scm
+++ b/gnu/packages/machine-learning.scm
@@ -78,6 +78,7 @@ (define-module (gnu packages machine-learning)
   #:use-module (gnu packages cmake)
   #:use-module (gnu packages cpp)
   #:use-module (gnu packages cran)
+  #:use-module (gnu packages curl)
   #:use-module (gnu packages databases)
   #:use-module (gnu packages dejagnu)
   #:use-module (gnu packages documentation)
@@ -634,7 +635,7 @@ (define-public guile-aiscm-next
   (deprecated-package "guile-aiscm-next" guile-aiscm))
 
 (define-public llama-cpp
-  (let ((tag "b4549"))
+  (let ((tag "b5013"))
     (package
       (name "llama-cpp")
       (version (string-append "0.0.0-" tag))
@@ -642,19 +643,19 @@ (define-public llama-cpp
        (origin
          (method git-fetch)
          (uri (git-reference
-               (url "https://github.com/ggerganov/llama.cpp")
+               (url "https://github.com/ggml-org/llama.cpp")
                (commit tag)))
          (file-name (git-file-name name tag))
          (sha256
-          (base32 "1xf2579q0r8nv06kj8padi6w9cv30w58vdys65nq8yzm3dy452a1"))
-         (patches
-          (search-patches "llama-cpp-vulkan-optional.patch"))))
+          (base32 "0s73dz871x53dr366lkzq19f677bwgma2ri8m5vhbfa9p8yp4p3r"))))
       (build-system cmake-build-system)
       (arguments
        (list
        #:configure-flags
-       #~(list "-DBUILD_SHARED_LIBS=ON"
+       #~(list #$(string-append "-DGGML_BUILD_NUMBER=" tag)
+               "-DBUILD_SHARED_LIBS=ON"
                "-DGGML_VULKAN=ON"
+               "-DLLAMA_CURL=ON"
                "-DGGML_BLAS=ON"
                "-DGGML_BLAS_VENDOR=OpenBLAS"
                (string-append "-DBLAS_INCLUDE_DIRS="
@@ -684,32 +685,17 @@ (define-public llama-cpp
               (substitute* "ggml/src/ggml-vulkan/vulkan-shaders/vulkan-shaders-gen.cpp"
                 (("\"/bin/sh\"")
                  (string-append "\"" (search-input-file inputs "/bin/sh") "\"")))))
-           (add-after 'unpack 'disable-unrunable-tests
+           (add-after 'unpack 'fix-tests
              (lambda _
               ;; test-eval-callback downloads ML model from network, cannot
               ;; run in Guix build environment
               (substitute* '("examples/eval-callback/CMakeLists.txt")
                 (("COMMAND llama-eval-callback")
-                 "COMMAND true llama-eval-callback"))))
-           (add-before 'install 'install-python-scripts
-             (lambda _
-               (let ((bin (string-append #$output "/bin/")))
-                 (define (make-script script)
-                   (let ((suffix (if (string-suffix? ".py" script) "" ".py")))
-                     (call-with-input-file
-                         (string-append "../source/" script suffix)
-                       (lambda (input)
-                         (call-with-output-file (string-append bin script)
-                           (lambda (output)
-                             (format output "#!~a/bin/python3\n~a"
-                                     #$(this-package-input "python")
-                                     (get-string-all input))))))
-                     (chmod (string-append bin script) #o555)))
-                 (mkdir-p bin)
-                 (make-script "convert_hf_to_gguf")
-                 (make-script "convert_llama_ggml_to_gguf")
-                 (make-script "convert_hf_to_gguf_update.py"))))
-           (add-after 'install-python-scripts 'wrap-python-scripts
+                 "COMMAND true llama-eval-callback"))
+              ;; Help it find the test files it needs
+              (substitute* "tests/test-chat.cpp"
+                (("\"\\.\\./\"") "\"../source/\""))))
+           (add-after 'install 'wrap-python-scripts
              (assoc-ref python:%standard-phases 'wrap))
            (add-after 'install 'remove-tests
              (lambda* (#:key outputs #:allow-other-keys)
@@ -717,12 +703,13 @@ (define-public llama-cpp
              (string-append (assoc-ref outputs "out") "/bin")
              "^test-")))))))
-      (inputs (list python vulkan-headers vulkan-loader))
-      (native-inputs (list pkg-config shaderc bash))
+      (inputs (list curl glslang python python-gguf-llama-cpp
+                    vulkan-headers vulkan-loader))
+      (native-inputs (list pkg-config shaderc bash-minimal))
       (propagated-inputs
       (list python-numpy python-pytorch python-sentencepiece openblas))
       (properties '((tunable? . #true))) ;use AVX512, FMA, etc. when available
-      (home-page "https://github.com/ggerganov/llama.cpp")
+      (home-page "https://github.com/ggml-org/llama.cpp")
       (synopsis "Port of Facebook's LLaMA model in C/C++")
       (description "This package provides a port to Facebook's LLaMA collection
 of foundation language models.  It requires models parameters to be downloaded
diff --git a/gnu/packages/patches/llama-cpp-vulkan-optional.patch b/gnu/packages/patches/llama-cpp-vulkan-optional.patch
deleted file mode 100644
index 43a49b6a02..0000000000
--- a/gnu/packages/patches/llama-cpp-vulkan-optional.patch
+++ /dev/null
@@ -1,38 +0,0 @@
-Author: Danny Milosavljevic <dannym <at> friendly-machines.com>
-Date: 2025-01-29
-License: Expat
-Subject: Make Vulkan optional
-
-See also: <https://github.com/ggerganov/llama.cpp/pull/11494>
-
-diff -ru orig/llama.cpp/ggml/include/ggml-vulkan.h llama.cpp/ggml/include/ggml-vulkan.h
---- orig/llama.cpp/ggml/include/ggml-vulkan.h  2025-01-29 10:24:10.894476682 +0100
-+++ llama.cpp/ggml/include/ggml-vulkan.h  2025-02-07 18:28:34.509509638 +0100
-@@ -10,8 +10,6 @@
- #define GGML_VK_NAME "Vulkan"
- #define GGML_VK_MAX_DEVICES 16
- 
--GGML_BACKEND_API void ggml_vk_instance_init(void);
--
- // backend API
- GGML_BACKEND_API ggml_backend_t ggml_backend_vk_init(size_t dev_num);
- 
-diff -ru orig/llama.cpp/ggml/src/ggml-vulkan/ggml-vulkan.cpp llama.cpp/ggml/src/ggml-vulkan/ggml-vulkan.cpp
---- orig/llama.cpp/ggml/src/ggml-vulkan/ggml-vulkan.cpp  2025-01-29 10:24:10.922476480 +0100
-+++ llama.cpp/ggml/src/ggml-vulkan/ggml-vulkan.cpp  2025-01-29 22:33:19.955087552 +0100
-@@ -8174,8 +8174,13 @@
-     /* .iface = */ ggml_backend_vk_reg_i,
-     /* .context = */ nullptr,
- };
--
--    return &reg;
-+    try {
-+        ggml_vk_instance_init();
-+        return &reg;
-+    } catch (const vk::SystemError& e) {
-+        VK_LOG_DEBUG("ggml_vk_get_device_count() -> Error: System error: " << e.what());
-+        return nullptr;
-+    }
- }
- 
- // Extension availability
-- 
2.49.0
Message #20 received at 76999 <at> debbugs.gnu.org (Sun, 06 Apr 2025 16:16:01 GMT):
From: Christopher Baines <mail <at> cbaines.net>
To: Morgan Smith <Morgan.J.Smith <at> outlook.com>
Cc: 76999 <at> debbugs.gnu.org
Subject: Re: [bug#76999] [PATCH 1/2] gnu: Add python-gguf-llama-cpp.
Date: Sun, 06 Apr 2025 17:15:10 +0100
Morgan Smith <Morgan.J.Smith <at> outlook.com> writes:

> * gnu/packages/machine-learning.scm (python-gguf-llama-cpp): New variable.
>
> Change-Id: I1c1b5f5956e3acb380b56816d180f53243b741fa
> ---
>  gnu/packages/machine-learning.scm | 15 +++++++++++++++
>  1 file changed, 15 insertions(+)
>
> diff --git a/gnu/packages/machine-learning.scm b/gnu/packages/machine-learning.scm
> index 246b004156..ee5feb58fc 100644
> --- a/gnu/packages/machine-learning.scm
> +++ b/gnu/packages/machine-learning.scm
> @@ -6490,6 +6490,21 @@ (define-public python-gguf
>      (description "A Python library for reading and writing GGUF & GGML format ML models.")
>      (license license:expat)))
>
> +(define-public python-gguf-llama-cpp
> +  (package/inherit python-gguf
> +    (version "0.16.0")
> +    (source (package-source llama-cpp))
> +    (propagated-inputs (list python-numpy python-pyyaml python-sentencepiece
> +                             python-tqdm))
> +    (native-inputs (list python-poetry-core))
> +    (arguments
> +     (substitute-keyword-arguments (package-arguments python-gguf)
> +       ((#:phases phases #~%standard-phases)
> +        #~(modify-phases #$phases
> +            (add-after 'unpack 'chdir
> +              (lambda _
> +                (chdir "gguf-py")))))))))
> +
>  (define-public python-gymnasium
>    (package
>      (name "python-gymnasium")

Can python-gguf be updated rather than adding this package?
Message #23 received at 76999 <at> debbugs.gnu.org (Mon, 07 Apr 2025 23:11:02 GMT):
From: Morgan Smith <Morgan.J.Smith <at> outlook.com>
To: Christopher Baines <mail <at> cbaines.net>
Cc: 76999 <at> debbugs.gnu.org
Subject: Re: [bug#76999] [PATCH 1/2] gnu: Add python-gguf-llama-cpp.
Date: Mon, 07 Apr 2025 19:09:57 -0400
Christopher Baines <mail <at> cbaines.net> writes:

>
> Can python-gguf be updated rather than adding this package?

I had a tricky time tracking down the canonical source for the package.  I'm
fairly certain now that it actually lives in the llama.cpp repository.  I
assumed it was a bundling situation, but it doesn't seem that way.  So I
will send a patch where I simply update python-gguf.
Message #26 received at 76999 <at> debbugs.gnu.org (Mon, 07 Apr 2025 23:12:01 GMT):
From: Morgan Smith <Morgan.J.Smith <at> outlook.com>
To: 76999 <at> debbugs.gnu.org
Cc: Morgan Smith <Morgan.J.Smith <at> outlook.com>
Subject: [PATCH v2] gnu: llama-cpp: Update to 0.0.0-b5013.
Date: Mon, 7 Apr 2025 19:11:08 -0400
* gnu/packages/machine-learning.scm (llama-cpp): Update to 0.0.0-b5013.
[inputs]: Add curl, glslang, and python-gguf.
[native-inputs]: bash -> bash-minimal.
[source, homepage]: Update URL.
[python-scripts]: Rely on upstream to install them.  Delete phase.
[fix-tests]: Fix an additional test.
* gnu/packages/patches/llama-cpp-vulkan-optional.patch: Delete.
* gnu/local.mk: Unregister patch.

Change-Id: Ic297534cd142cb83e3964eae21b4eb807b74e9bc
---
 gnu/local.mk                                  |  1 -
 gnu/packages/machine-learning.scm             | 47 +++++++------------
 .../patches/llama-cpp-vulkan-optional.patch   | 38 ---------------
 3 files changed, 17 insertions(+), 69 deletions(-)
 delete mode 100644 gnu/packages/patches/llama-cpp-vulkan-optional.patch

diff --git a/gnu/local.mk b/gnu/local.mk
index 6dc4b4f61b..65c21c2f0d 100644
--- a/gnu/local.mk
+++ b/gnu/local.mk
@@ -1843,7 +1843,6 @@ dist_patch_DATA = \
   %D%/packages/patches/libmhash-hmac-fix-uaf.patch \
   %D%/packages/patches/libmodbus-disable-networking-test.patch \
   %D%/packages/patches/lib-tl-for-telegram-memcpy.patch \
-  %D%/packages/patches/llama-cpp-vulkan-optional.patch \
   %D%/packages/patches/llhttp-ponyfill-object-fromentries.patch \
   %D%/packages/patches/lvm2-no-systemd.patch \
   %D%/packages/patches/maturin-no-cross-compile.patch \
diff --git a/gnu/packages/machine-learning.scm b/gnu/packages/machine-learning.scm
index bd7a4fd81b..0b9ee4fa39 100644
--- a/gnu/packages/machine-learning.scm
+++ b/gnu/packages/machine-learning.scm
@@ -84,6 +84,7 @@ (define-module (gnu packages machine-learning)
   #:use-module (gnu packages crates-io)
   #:use-module (gnu packages crates-tls)
   #:use-module (gnu packages crates-web)
+  #:use-module (gnu packages curl)
   #:use-module (gnu packages databases)
   #:use-module (gnu packages dejagnu)
   #:use-module (gnu packages documentation)
@@ -640,7 +641,7 @@ (define-public guile-aiscm-next
   (deprecated-package "guile-aiscm-next" guile-aiscm))
 
 (define-public llama-cpp
-  (let ((tag "b4549"))
+  (let ((tag "b5013"))
     (package
       (name "llama-cpp")
       (version (string-append "0.0.0-" tag))
@@ -648,19 +649,19 @@ (define-public llama-cpp
        (origin
          (method git-fetch)
          (uri (git-reference
-               (url "https://github.com/ggerganov/llama.cpp")
+               (url "https://github.com/ggml-org/llama.cpp")
                (commit tag)))
          (file-name (git-file-name name tag))
          (sha256
-          (base32 "1xf2579q0r8nv06kj8padi6w9cv30w58vdys65nq8yzm3dy452a1"))
-         (patches
-          (search-patches "llama-cpp-vulkan-optional.patch"))))
+          (base32 "0s73dz871x53dr366lkzq19f677bwgma2ri8m5vhbfa9p8yp4p3r"))))
       (build-system cmake-build-system)
       (arguments
        (list
       #:configure-flags
-       #~(list "-DBUILD_SHARED_LIBS=ON"
+       #~(list #$(string-append "-DGGML_BUILD_NUMBER=" tag)
+               "-DBUILD_SHARED_LIBS=ON"
                "-DGGML_VULKAN=ON"
+               "-DLLAMA_CURL=ON"
                "-DGGML_BLAS=ON"
                "-DGGML_BLAS_VENDOR=OpenBLAS"
                (string-append "-DBLAS_INCLUDE_DIRS="
@@ -690,32 +691,17 @@ (define-public llama-cpp
               (substitute* "ggml/src/ggml-vulkan/vulkan-shaders/vulkan-shaders-gen.cpp"
                 (("\"/bin/sh\"")
                  (string-append "\"" (search-input-file inputs "/bin/sh") "\"")))))
-           (add-after 'unpack 'disable-unrunable-tests
+           (add-after 'unpack 'fix-tests
              (lambda _
               ;; test-eval-callback downloads ML model from network, cannot
               ;; run in Guix build environment
               (substitute* '("examples/eval-callback/CMakeLists.txt")
                 (("COMMAND llama-eval-callback")
-                 "COMMAND true llama-eval-callback"))))
-           (add-before 'install 'install-python-scripts
-             (lambda _
-               (let ((bin (string-append #$output "/bin/")))
-                 (define (make-script script)
-                   (let ((suffix (if (string-suffix? ".py" script) "" ".py")))
-                     (call-with-input-file
-                         (string-append "../source/" script suffix)
-                       (lambda (input)
-                         (call-with-output-file (string-append bin script)
-                           (lambda (output)
-                             (format output "#!~a/bin/python3\n~a"
-                                     #$(this-package-input "python")
-                                     (get-string-all input))))))
-                     (chmod (string-append bin script) #o555)))
-                 (mkdir-p bin)
-                 (make-script "convert_hf_to_gguf")
-                 (make-script "convert_llama_ggml_to_gguf")
-                 (make-script "convert_hf_to_gguf_update.py"))))
-           (add-after 'install-python-scripts 'wrap-python-scripts
+                 "COMMAND true llama-eval-callback"))
+              ;; Help it find the test files it needs
+              (substitute* "tests/test-chat.cpp"
+                (("\"\\.\\./\"") "\"../source/\""))))
+           (add-after 'install 'wrap-python-scripts
             (assoc-ref python:%standard-phases 'wrap))
            (add-after 'install 'remove-tests
              (lambda* (#:key outputs #:allow-other-keys)
@@ -723,12 +709,13 @@ (define-public llama-cpp
              (string-append (assoc-ref outputs "out") "/bin")
              "^test-")))))))
-      (inputs (list python vulkan-headers vulkan-loader))
-      (native-inputs (list pkg-config shaderc bash))
+      (inputs (list curl glslang python python-gguf
+                    vulkan-headers vulkan-loader))
+      (native-inputs (list pkg-config shaderc bash-minimal))
       (propagated-inputs
       (list python-numpy python-pytorch python-sentencepiece openblas))
       (properties '((tunable? . #true))) ;use AVX512, FMA, etc. when available
-      (home-page "https://github.com/ggerganov/llama.cpp")
+      (home-page "https://github.com/ggml-org/llama.cpp")
       (synopsis "Port of Facebook's LLaMA model in C/C++")
       (description "This package provides a port to Facebook's LLaMA collection
 of foundation language models.  It requires models parameters to be downloaded
diff --git a/gnu/packages/patches/llama-cpp-vulkan-optional.patch b/gnu/packages/patches/llama-cpp-vulkan-optional.patch
deleted file mode 100644
index 43a49b6a02..0000000000
--- a/gnu/packages/patches/llama-cpp-vulkan-optional.patch
+++ /dev/null
@@ -1,38 +0,0 @@
-Author: Danny Milosavljevic <dannym <at> friendly-machines.com>
-Date: 2025-01-29
-License: Expat
-Subject: Make Vulkan optional
-
-See also: <https://github.com/ggerganov/llama.cpp/pull/11494>
-
-diff -ru orig/llama.cpp/ggml/include/ggml-vulkan.h llama.cpp/ggml/include/ggml-vulkan.h
---- orig/llama.cpp/ggml/include/ggml-vulkan.h  2025-01-29 10:24:10.894476682 +0100
-+++ llama.cpp/ggml/include/ggml-vulkan.h  2025-02-07 18:28:34.509509638 +0100
-@@ -10,8 +10,6 @@
- #define GGML_VK_NAME "Vulkan"
- #define GGML_VK_MAX_DEVICES 16
- 
--GGML_BACKEND_API void ggml_vk_instance_init(void);
--
- // backend API
- GGML_BACKEND_API ggml_backend_t ggml_backend_vk_init(size_t dev_num);
- 
-diff -ru orig/llama.cpp/ggml/src/ggml-vulkan/ggml-vulkan.cpp llama.cpp/ggml/src/ggml-vulkan/ggml-vulkan.cpp
---- orig/llama.cpp/ggml/src/ggml-vulkan/ggml-vulkan.cpp  2025-01-29 10:24:10.922476480 +0100
-+++ llama.cpp/ggml/src/ggml-vulkan/ggml-vulkan.cpp  2025-01-29 22:33:19.955087552 +0100
-@@ -8174,8 +8174,13 @@
-     /* .iface = */ ggml_backend_vk_reg_i,
-     /* .context = */ nullptr,
- };
--
--    return &reg;
-+    try {
-+        ggml_vk_instance_init();
-+        return &reg;
-+    } catch (const vk::SystemError& e) {
-+        VK_LOG_DEBUG("ggml_vk_get_device_count() -> Error: System error: " << e.what());
-+        return nullptr;
-+    }
- }
- 
- // Extension availability

base-commit: 666a6cfd88b3e5106a9180e06ea128db8084be0e
prerequisite-patch-id: 1e2c478cf648ee8c9a3b1af55543e1b96ff24ec7
-- 
2.49.0
Message #29 received at 76999 <at> debbugs.gnu.org (Mon, 07 Apr 2025 23:21:01 GMT):
From: Morgan Smith <Morgan.J.Smith <at> outlook.com>
To: 76999 <at> debbugs.gnu.org
Cc: Morgan Smith <Morgan.J.Smith <at> outlook.com>
Subject: [PATCH v3 1/2] gnu: python-gguf: Update to 0.16.0.
Date: Mon, 7 Apr 2025 19:19:38 -0400
* gnu/packages/machine-learning.scm (python-gguf): Update to 0.16.0.
Change origin to git repository.  Run tests.

Change-Id: I1c1b5f5956e3acb380b56816d180f53243b741fa
---
 gnu/packages/machine-learning.scm | 47 +++++++++++++++++++------------
 1 file changed, 29 insertions(+), 18 deletions(-)

diff --git a/gnu/packages/machine-learning.scm b/gnu/packages/machine-learning.scm
index 3e68af3476..bd7a4fd81b 100644
--- a/gnu/packages/machine-learning.scm
+++ b/gnu/packages/machine-learning.scm
@@ -7041,24 +7041,35 @@ (define-public oneapi-dnnl-for-r-torch
         (base32 "1zyw5rd8x346bb7gac9a7x3saviw3zvp6aqz2z1l9sv163vmjfz6"))))))
 
 (define-public python-gguf
-  (package
-    (name "python-gguf")
-    (version "0.6.0")
-    (source
-     (origin
-       (method url-fetch)
-       (uri (pypi-uri "gguf" version))
-       (sha256
-        (base32 "0rbyc2h3kpqnrvbyjvv8a69l577jv55a31l12jnw21m1lamjxqmj"))))
-    (build-system pyproject-build-system)
-    (arguments
-     (list #:tests? #false))
-    (inputs (list poetry python-pytest))
-    (propagated-inputs (list python-numpy))
-    (home-page "https://ggml.ai")
-    (synopsis "Read and write ML models in GGUF for GGML")
-    (description "A Python library for reading and writing GGUF & GGML format ML models.")
-    (license license:expat)))
+  ;; They didn't tag the commit
+  (let ((commit "69050a11be0ae3e01329f11371ecb6850bdaded5"))
+    (package
+      (name "python-gguf")
+      (version "0.16.0")
+      (source
+       (origin
+         (method git-fetch)
+         (uri (git-reference
+               (url "https://github.com/ggml-org/llama.cpp")
+               (commit commit)))
+         (file-name (git-file-name name commit))
+         (sha256
+          (base32 "1563mbrjykwpsbhghhzi4h1qv9qy74gq5vq4xhs58zk0jp20c7zz"))))
+      (build-system pyproject-build-system)
+      (arguments
+       (list
+        #:phases
+        #~(modify-phases %standard-phases
+            (add-after 'unpack 'chdir
+              (lambda _
+                (chdir "gguf-py"))))))
+      (propagated-inputs (list python-numpy python-pyyaml python-sentencepiece
+                               python-tqdm))
+      (native-inputs (list python-poetry-core python-pytest))
+      (home-page "https://ggml.ai")
+      (synopsis "Read and write ML models in GGUF for GGML")
+      (description "A Python library for reading and writing GGUF & GGML format ML models.")
+      (license license:expat))))
 
 (define-public python-gymnasium
   (package

base-commit: 666a6cfd88b3e5106a9180e06ea128db8084be0e
-- 
2.49.0
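Since upstream did not tag the gguf 0.16.0 release, the patch pins a raw
commit inside a let.  A related Guix idiom, shown here as a sketch with
placeholder values, is git-version, which encodes the upstream version, a
package revision, and the pinned commit in the version string; the patch
above keeps a plain "0.16.0" instead:

    (use-modules (guix git-download))  ;for git-version

    ;; Illustrative values only.
    (define %commit "69050a11be0ae3e01329f11371ecb6850bdaded5")
    (define %revision "0")             ;bump when moving to a new commit

    ;; => "0.16.0-0.69050a1"
    (define %version (git-version "0.16.0" %revision %commit))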
Message #32 received at 76999 <at> debbugs.gnu.org (Mon, 07 Apr 2025 23:21:02 GMT):
From: Morgan Smith <Morgan.J.Smith <at> outlook.com>
To: 76999 <at> debbugs.gnu.org
Cc: Morgan Smith <Morgan.J.Smith <at> outlook.com>
Subject: [PATCH v3 2/2] gnu: llama-cpp: Update to 0.0.0-b5013.
Date: Mon, 7 Apr 2025 19:19:39 -0400
* gnu/packages/machine-learning.scm (llama-cpp): Update to 0.0.0-b5013.
[inputs]: Add curl, glslang, and python-gguf.
[native-inputs]: bash -> bash-minimal.
[source, homepage]: Update URL.
[python-scripts]: Rely on upstream to install them.  Delete phase.
[fix-tests]: Fix an additional test.
* gnu/packages/patches/llama-cpp-vulkan-optional.patch: Delete.
* gnu/local.mk: Unregister patch.

Change-Id: Ic297534cd142cb83e3964eae21b4eb807b74e9bc
---
 gnu/local.mk                                  |  1 -
 gnu/packages/machine-learning.scm             | 47 +++++++------------
 .../patches/llama-cpp-vulkan-optional.patch   | 38 ---------------
 3 files changed, 17 insertions(+), 69 deletions(-)
 delete mode 100644 gnu/packages/patches/llama-cpp-vulkan-optional.patch

diff --git a/gnu/local.mk b/gnu/local.mk
index 6dc4b4f61b..65c21c2f0d 100644
--- a/gnu/local.mk
+++ b/gnu/local.mk
@@ -1843,7 +1843,6 @@ dist_patch_DATA = \
   %D%/packages/patches/libmhash-hmac-fix-uaf.patch \
   %D%/packages/patches/libmodbus-disable-networking-test.patch \
   %D%/packages/patches/lib-tl-for-telegram-memcpy.patch \
-  %D%/packages/patches/llama-cpp-vulkan-optional.patch \
   %D%/packages/patches/llhttp-ponyfill-object-fromentries.patch \
   %D%/packages/patches/lvm2-no-systemd.patch \
   %D%/packages/patches/maturin-no-cross-compile.patch \
diff --git a/gnu/packages/machine-learning.scm b/gnu/packages/machine-learning.scm
index bd7a4fd81b..0b9ee4fa39 100644
--- a/gnu/packages/machine-learning.scm
+++ b/gnu/packages/machine-learning.scm
@@ -84,6 +84,7 @@ (define-module (gnu packages machine-learning)
   #:use-module (gnu packages crates-io)
   #:use-module (gnu packages crates-tls)
   #:use-module (gnu packages crates-web)
+  #:use-module (gnu packages curl)
   #:use-module (gnu packages databases)
   #:use-module (gnu packages dejagnu)
   #:use-module (gnu packages documentation)
@@ -640,7 +641,7 @@ (define-public guile-aiscm-next
   (deprecated-package "guile-aiscm-next" guile-aiscm))
 
 (define-public llama-cpp
-  (let ((tag "b4549"))
+  (let ((tag "b5013"))
     (package
       (name "llama-cpp")
       (version (string-append "0.0.0-" tag))
@@ -648,19 +649,19 @@ (define-public llama-cpp
        (origin
          (method git-fetch)
          (uri (git-reference
-               (url "https://github.com/ggerganov/llama.cpp")
+               (url "https://github.com/ggml-org/llama.cpp")
                (commit tag)))
          (file-name (git-file-name name tag))
          (sha256
-          (base32 "1xf2579q0r8nv06kj8padi6w9cv30w58vdys65nq8yzm3dy452a1"))
-         (patches
-          (search-patches "llama-cpp-vulkan-optional.patch"))))
+          (base32 "0s73dz871x53dr366lkzq19f677bwgma2ri8m5vhbfa9p8yp4p3r"))))
       (build-system cmake-build-system)
       (arguments
        (list
       #:configure-flags
-       #~(list "-DBUILD_SHARED_LIBS=ON"
+       #~(list #$(string-append "-DGGML_BUILD_NUMBER=" tag)
+               "-DBUILD_SHARED_LIBS=ON"
                "-DGGML_VULKAN=ON"
+               "-DLLAMA_CURL=ON"
                "-DGGML_BLAS=ON"
                "-DGGML_BLAS_VENDOR=OpenBLAS"
                (string-append "-DBLAS_INCLUDE_DIRS="
@@ -690,32 +691,17 @@ (define-public llama-cpp
               (substitute* "ggml/src/ggml-vulkan/vulkan-shaders/vulkan-shaders-gen.cpp"
                 (("\"/bin/sh\"")
                  (string-append "\"" (search-input-file inputs "/bin/sh") "\"")))))
-           (add-after 'unpack 'disable-unrunable-tests
+           (add-after 'unpack 'fix-tests
              (lambda _
               ;; test-eval-callback downloads ML model from network, cannot
              ;; run in Guix build environment
               (substitute* '("examples/eval-callback/CMakeLists.txt")
                 (("COMMAND llama-eval-callback")
-                 "COMMAND true llama-eval-callback"))))
-           (add-before 'install 'install-python-scripts
-             (lambda _
-               (let ((bin (string-append #$output "/bin/")))
-                 (define (make-script script)
-                   (let ((suffix (if (string-suffix? ".py" script) "" ".py")))
-                     (call-with-input-file
-                         (string-append "../source/" script suffix)
-                       (lambda (input)
-                         (call-with-output-file (string-append bin script)
-                           (lambda (output)
-                             (format output "#!~a/bin/python3\n~a"
-                                     #$(this-package-input "python")
-                                     (get-string-all input))))))
-                     (chmod (string-append bin script) #o555)))
-                 (mkdir-p bin)
-                 (make-script "convert_hf_to_gguf")
-                 (make-script "convert_llama_ggml_to_gguf")
-                 (make-script "convert_hf_to_gguf_update.py"))))
-           (add-after 'install-python-scripts 'wrap-python-scripts
+                 "COMMAND true llama-eval-callback"))
+              ;; Help it find the test files it needs
+              (substitute* "tests/test-chat.cpp"
+                (("\"\\.\\./\"") "\"../source/\""))))
+           (add-after 'install 'wrap-python-scripts
             (assoc-ref python:%standard-phases 'wrap))
            (add-after 'install 'remove-tests
              (lambda* (#:key outputs #:allow-other-keys)
@@ -723,12 +709,13 @@ (define-public llama-cpp
              (string-append (assoc-ref outputs "out") "/bin")
              "^test-")))))))
-      (inputs (list python vulkan-headers vulkan-loader))
-      (native-inputs (list pkg-config shaderc bash))
+      (inputs (list curl glslang python python-gguf
+                    vulkan-headers vulkan-loader))
+      (native-inputs (list pkg-config shaderc bash-minimal))
       (propagated-inputs
       (list python-numpy python-pytorch python-sentencepiece openblas))
       (properties '((tunable? . #true))) ;use AVX512, FMA, etc. when available
-      (home-page "https://github.com/ggerganov/llama.cpp")
+      (home-page "https://github.com/ggml-org/llama.cpp")
       (synopsis "Port of Facebook's LLaMA model in C/C++")
       (description "This package provides a port to Facebook's LLaMA collection
 of foundation language models.  It requires models parameters to be downloaded
diff --git a/gnu/packages/patches/llama-cpp-vulkan-optional.patch b/gnu/packages/patches/llama-cpp-vulkan-optional.patch
deleted file mode 100644
index 43a49b6a02..0000000000
--- a/gnu/packages/patches/llama-cpp-vulkan-optional.patch
+++ /dev/null
@@ -1,38 +0,0 @@
-Author: Danny Milosavljevic <dannym <at> friendly-machines.com>
-Date: 2025-01-29
-License: Expat
-Subject: Make Vulkan optional
-
-See also: <https://github.com/ggerganov/llama.cpp/pull/11494>
-
-diff -ru orig/llama.cpp/ggml/include/ggml-vulkan.h llama.cpp/ggml/include/ggml-vulkan.h
---- orig/llama.cpp/ggml/include/ggml-vulkan.h  2025-01-29 10:24:10.894476682 +0100
-+++ llama.cpp/ggml/include/ggml-vulkan.h  2025-02-07 18:28:34.509509638 +0100
-@@ -10,8 +10,6 @@
- #define GGML_VK_NAME "Vulkan"
- #define GGML_VK_MAX_DEVICES 16
- 
--GGML_BACKEND_API void ggml_vk_instance_init(void);
--
- // backend API
- GGML_BACKEND_API ggml_backend_t ggml_backend_vk_init(size_t dev_num);
- 
-diff -ru orig/llama.cpp/ggml/src/ggml-vulkan/ggml-vulkan.cpp llama.cpp/ggml/src/ggml-vulkan/ggml-vulkan.cpp
---- orig/llama.cpp/ggml/src/ggml-vulkan/ggml-vulkan.cpp  2025-01-29 10:24:10.922476480 +0100
-+++ llama.cpp/ggml/src/ggml-vulkan/ggml-vulkan.cpp  2025-01-29 22:33:19.955087552 +0100
-@@ -8174,8 +8174,13 @@
-     /* .iface = */ ggml_backend_vk_reg_i,
-     /* .context = */ nullptr,
- };
--
--    return &reg;
-+    try {
-+        ggml_vk_instance_init();
-+        return &reg;
-+    } catch (const vk::SystemError& e) {
-+        VK_LOG_DEBUG("ggml_vk_get_device_count() -> Error: System error: " << e.what());
-+        return nullptr;
-+    }
- }
- 
- // Extension availability
-- 
2.49.0