Actions: AutonomicPerfectionist/llama.cpp

Python check requirements.txt
7 workflow run results

Change requirement of last backend being CPU to requiring its default…
Python check requirements.txt #7: Commit 2217b02 pushed by AutonomicPerfectionist
March 15, 2024 03:25 · 5m 37s · mpi-heterogenous

llama : fix integer overflow during quantization (#6063)
Python check requirements.txt #6: Commit 4755afd pushed by AutonomicPerfectionist
March 15, 2024 01:03 · 5m 42s · master

Update to use backend GUID and changed signatures
Python check requirements.txt #5: Commit 95c3511 pushed by AutonomicPerfectionist
March 12, 2024 17:40 · 5m 44s · mpi-heterogenous

ci : remove tidy-review (#6021)
Python check requirements.txt #4: Commit 306d34b pushed by AutonomicPerfectionist
March 12, 2024 16:50 · 5m 45s · master

Fix draft thread args and remove grads from mpi eval_init
Python check requirements.txt #3: Commit 005f9cb pushed by AutonomicPerfectionist
February 5, 2024 23:16 · 5m 18s · mpi-heterogenous

Vulkan Intel Fixes, Optimizations and Debugging Flags (#5301)
Python check requirements.txt #2: Commit e920ed3 pushed by AutonomicPerfectionist
February 3, 2024 19:05 · 5m 47s · master

ggml : add ggml_vdotq_s32 alias (#4715)
Python check requirements.txt #1: Commit e39106c pushed by AutonomicPerfectionist
December 31, 2023 19:59 · 5m 59s · master