
PI_ERROR_BUILD_PROGRAM_FAILURE error when running Ollama using ipex-llm on 12450H CPU #12597

Closed
qadzhang opened this issue Dec 23, 2024 · 11 comments

@qadzhang

Hello,

The CPU is an Intel Core i5-12450H, with graphics driver version 32.0.101.6325.
The installed software is ipex-llm[cpp], and the Ollama version is 0.4.6.

The installation was successful, but an error occurs while loading the model, before inference starts.

time=2024-12-23T23:18:56.511+08:00 level=INFO source=routes.go:1248 msg="Listening on 127.0.0.1:11434 (version 0.4.6-ipexllm-20241223)"
time=2024-12-23T23:18:56.511+08:00 level=INFO source=common.go:49 msg="Dynamic LLM libraries" runners=[ipex_llm]

time=2024-12-23T23:09:28.726+08:00 level=INFO source=server.go:619 msg="llama runner started in 3.77 seconds"
The program was built for 1 devices
Build program log for 'Intel(R) UHD Graphics':
-11 (PI_ERROR_BUILD_PROGRAM_FAILURE)Exception caught at file:D:/actions-runner/release-cpp-oneapi_2024_2/_work/llm.cpp/llm.cpp/llama-cpp-bigdl/ggml/src/ggml-sycl.cpp, line:3775


@qiuxin2012
Contributor

Which model are you using?

@qadzhang
Author

I'm using qwen2.5:7b.

@qiuxin2012

@qiuxin2012
Contributor

This looks similar to #12598; we are fixing it.

@qiuxin2012
Contributor

@qadzhang You can update ipex-llm[cpp] to 2.2.0b20241226 tomorrow and try again.

@qadzhang
Author

Thank you for your efforts.

I upgraded the version and then tested qwen2.5:7b, qwen2.5:0.5b, qwen2:0.5b, bge-m3, and gemma2:9b.

Among them, qwen2:0.5b and gemma2:9b run normally, while the other three report errors.

When running qwen2.5:0.5b and qwen2.5:7b, the error sometimes appears right at the start and sometimes only after a few steps, but it always appears eventually:

The program was built for 1 devices
Build program log for 'Intel(R) UHD Graphics':
-11 (PI_ERROR_BUILD_PROGRAM_FAILURE)Exception caught at file:D:/actions-runner/release-cpp-oneapi_2024_2/_work/llm.cpp/llm.cpp/llama-cpp-bigdl/ggml/src/ggml-sycl.cpp, line:3781


When running bge-m3, an error is reported as soon as it starts loading, with the following error message:

llama_new_context_with_model: graph splits = 2
time=2024-12-27T01:18:42.734+08:00 level=WARN source=runner.go:894 msg="%s: warming up the model with an empty run - please wait ... " !BADKEY=loadModel
D:\actions-runner\release-cpp-oneapi_2024_2_work\llm.cpp\llm.cpp\llama-cpp-bigdl\src\llama.cpp:17622: GGML_ASSERT(seq_id < n_tokens && "seq_id cannot be larger than n_tokens with pooling_type == CLS or RANK") failed
time=2024-12-27T01:18:42.906+08:00 level=INFO source=server.go:614 msg="waiting for server to become available" status="llm server error"
time=2024-12-27T01:18:43.157+08:00 level=ERROR source=sched.go:455 msg="error loading llama server" error="llama runner process has terminated: GGML_ASSERT(seq_id < n_tokens && "seq_id cannot be larger than n_tokens with pooling_type == CLS or RANK") failed"
[GIN] 2024/12/27 - 01:18:43 | 500 | 1.9490714s | 127.0.0.1 | POST "/api/embeddings"


(llm-cpp) C:\Users\zc\llama-cpp>ollama serve
2024/12/27 01:28:07 routes.go:1197: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY:localhost,127.0.0.1 OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:D:\ollama_db OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
time=2024-12-27T01:28:07.908+08:00 level=INFO source=images.go:753 msg="total blobs: 20"
time=2024-12-27T01:28:07.909+08:00 level=INFO source=images.go:760 msg="total unused blobs removed: 0"
[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.

[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.

  • using env: export GIN_MODE=release
  • using code: gin.SetMode(gin.ReleaseMode)

[GIN-debug] POST /api/pull --> ollama/server.(*Server).PullHandler-fm (5 handlers)
[GIN-debug] POST /api/generate --> ollama/server.(*Server).GenerateHandler-fm (5 handlers)
[GIN-debug] POST /api/chat --> ollama/server.(*Server).ChatHandler-fm (5 handlers)
[GIN-debug] POST /api/embed --> ollama/server.(*Server).EmbedHandler-fm (5 handlers)
[GIN-debug] POST /api/embeddings --> ollama/server.(*Server).EmbeddingsHandler-fm (5 handlers)
[GIN-debug] POST /api/create --> ollama/server.(*Server).CreateHandler-fm (5 handlers)
[GIN-debug] POST /api/push --> ollama/server.(*Server).PushHandler-fm (5 handlers)
[GIN-debug] POST /api/copy --> ollama/server.(*Server).CopyHandler-fm (5 handlers)
[GIN-debug] DELETE /api/delete --> ollama/server.(*Server).DeleteHandler-fm (5 handlers)
[GIN-debug] POST /api/show --> ollama/server.(*Server).ShowHandler-fm (5 handlers)
[GIN-debug] POST /api/blobs/:digest --> ollama/server.(*Server).CreateBlobHandler-fm (5 handlers)
[GIN-debug] HEAD /api/blobs/:digest --> ollama/server.(*Server).HeadBlobHandler-fm (5 handlers)
[GIN-debug] GET /api/ps --> ollama/server.(*Server).PsHandler-fm (5 handlers)
[GIN-debug] POST /v1/chat/completions --> ollama/server.(*Server).ChatHandler-fm (6 handlers)
[GIN-debug] POST /v1/completions --> ollama/server.(*Server).GenerateHandler-fm (6 handlers)
[GIN-debug] POST /v1/embeddings --> ollama/server.(*Server).EmbedHandler-fm (6 handlers)
[GIN-debug] GET /v1/models --> ollama/server.(*Server).ListHandler-fm (6 handlers)
[GIN-debug] GET /v1/models/:model --> ollama/server.(*Server).ShowHandler-fm (6 handlers)
[GIN-debug] GET / --> ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] GET /api/tags --> ollama/server.(*Server).ListHandler-fm (5 handlers)
[GIN-debug] GET /api/version --> ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
[GIN-debug] HEAD / --> ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] HEAD /api/tags --> ollama/server.(*Server).ListHandler-fm (5 handlers)
[GIN-debug] HEAD /api/version --> ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
time=2024-12-27T01:28:07.910+08:00 level=INFO source=routes.go:1248 msg="Listening on 127.0.0.1:11434 (version 0.4.6-ipexllm-20241226)"
time=2024-12-27T01:28:07.910+08:00 level=INFO source=common.go:49 msg="Dynamic LLM libraries" runners=[ipex_llm]


@qiuxin2012

@qiuxin2012
Contributor

@qadzhang I have reproduced your error; we will look into it.

qiuxin2012 assigned sgwhat and unassigned leonardozcm on Dec 27, 2024
@qiuxin2012
Contributor

I tested qwen2.5:0.5b with ipex-llm 2.2.0b20250102, and it works fine now.
Please update your ipex-llm to 2.2.0b20250102, clean your ollama links, run init-ollama again, and then try again.
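
For reference, the update sequence would look roughly like this (a sketch only, assuming the llm-cpp conda environment and the C:\Users\zc\llama-cpp link directory shown in your logs; adjust paths to your setup):

pip install --pre --upgrade ipex-llm[cpp]
cd C:\Users\zc\llama-cpp
(delete the ollama link files created by the previous init-ollama run)
init-ollama.bat
ollama serve

The pip upgrade should pull 2.2.0b20250102 or newer.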

@qadzhang
Author

qadzhang commented Jan 6, 2025

I've tried qwen2.5 again and it does work now.

However, I still get errors when using the bge-m3 model for embeddings.

If you want to reproduce it, you can run the following program:

import ollama

# Request an embedding for a short sentence from the bge-m3 model.
text = "I am learning at the group company's training center today"
response = ollama.embeddings(model="bge-m3", prompt=text)
embedding = response["embedding"]
print('Vector dimension:', len(embedding))
print('First 10 numbers:', embedding[:10])


(llm-cpp) C:\Users\zc\llama-cpp>ollama serve
2025/01/06 20:13:58 routes.go:1197: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY:localhost,127.0.0.1 OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:D:\ollama_db OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
time=2025-01-06T20:13:58.362+08:00 level=INFO source=images.go:753 msg="total blobs: 20"
time=2025-01-06T20:13:58.363+08:00 level=INFO source=images.go:760 msg="total unused blobs removed: 0"
[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.

[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.

  • using env: export GIN_MODE=release
  • using code: gin.SetMode(gin.ReleaseMode)

[GIN-debug] POST /api/pull --> ollama/server.(*Server).PullHandler-fm (5 handlers)
[GIN-debug] POST /api/generate --> ollama/server.(*Server).GenerateHandler-fm (5 handlers)
[GIN-debug] POST /api/chat --> ollama/server.(*Server).ChatHandler-fm (5 handlers)
[GIN-debug] POST /api/embed --> ollama/server.(*Server).EmbedHandler-fm (5 handlers)
[GIN-debug] POST /api/embeddings --> ollama/server.(*Server).EmbeddingsHandler-fm (5 handlers)
[GIN-debug] POST /api/create --> ollama/server.(*Server).CreateHandler-fm (5 handlers)
[GIN-debug] POST /api/push --> ollama/server.(*Server).PushHandler-fm (5 handlers)
[GIN-debug] POST /api/copy --> ollama/server.(*Server).CopyHandler-fm (5 handlers)
[GIN-debug] DELETE /api/delete --> ollama/server.(*Server).DeleteHandler-fm (5 handlers)
[GIN-debug] POST /api/show --> ollama/server.(*Server).ShowHandler-fm (5 handlers)
[GIN-debug] POST /api/blobs/:digest --> ollama/server.(*Server).CreateBlobHandler-fm (5 handlers)
[GIN-debug] HEAD /api/blobs/:digest --> ollama/server.(*Server).HeadBlobHandler-fm (5 handlers)
[GIN-debug] GET /api/ps --> ollama/server.(*Server).PsHandler-fm (5 handlers)
[GIN-debug] POST /v1/chat/completions --> ollama/server.(*Server).ChatHandler-fm (6 handlers)
[GIN-debug] POST /v1/completions --> ollama/server.(*Server).GenerateHandler-fm (6 handlers)
[GIN-debug] POST /v1/embeddings --> ollama/server.(*Server).EmbedHandler-fm (6 handlers)
[GIN-debug] GET /v1/models --> ollama/server.(*Server).ListHandler-fm (6 handlers)
[GIN-debug] GET /v1/models/:model --> ollama/server.(*Server).ShowHandler-fm (6 handlers)
[GIN-debug] GET / --> ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] GET /api/tags --> ollama/server.(*Server).ListHandler-fm (5 handlers)
[GIN-debug] GET /api/version --> ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
[GIN-debug] HEAD / --> ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] HEAD /api/tags --> ollama/server.(*Server).ListHandler-fm (5 handlers)
[GIN-debug] HEAD /api/version --> ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
time=2025-01-06T20:13:58.364+08:00 level=INFO source=routes.go:1248 msg="Listening on 127.0.0.1:11434 (version 0.4.6-ipexllm-20250105)"
time=2025-01-06T20:13:58.364+08:00 level=INFO source=common.go:49 msg="Dynamic LLM libraries" runners=[ipex_llm]
time=2025-01-06T20:14:07.106+08:00 level=INFO source=gpu.go:221 msg="looking for compatible GPUs"
time=2025-01-06T20:14:07.106+08:00 level=INFO source=gpu_windows.go:167 msg=packages count=1
time=2025-01-06T20:14:07.106+08:00 level=INFO source=gpu_windows.go:183 msg="efficiency cores detected" maxEfficiencyClass=1
time=2025-01-06T20:14:07.106+08:00 level=INFO source=gpu_windows.go:214 msg="" package=0 cores=8 efficiency=4 threads=12
time=2025-01-06T20:14:07.159+08:00 level=INFO source=server.go:105 msg="system memory" total="31.7 GiB" free="26.7 GiB" free_swap="31.4 GiB"
time=2025-01-06T20:14:07.159+08:00 level=INFO source=memory.go:356 msg="offload to device" layers.requested=-1 layers.model=25 layers.offload=0 layers.split="" memory.available="[26.7 GiB]" memory.gpu_overhead="0 B" memory.required.full="1.1 GiB" memory.required.partial="0 B" memory.required.kv="12.0 MiB" memory.required.allocations="[1.1 GiB]" memory.weights.total="589.2 MiB" memory.weights.repeating="100.9 MiB" memory.weights.nonrepeating="488.3 MiB" memory.graph.full="32.0 MiB" memory.graph.partial="32.0 MiB"
time=2025-01-06T20:14:07.168+08:00 level=INFO source=server.go:401 msg="starting llama server" cmd="C:\Users\zc\llama-cpp\dist\windows-amd64\lib\ollama\runners\ipex_llm\ollama_llama_server.exe --model D:\ollama_db\blobs\sha256-daec91ffb5dd0c27411bd71f29932917c49cf529a641d0168496c3a501e3062c --ctx-size 2048 --batch-size 512 --n-gpu-layers 999 --threads 4 --no-mmap --parallel 1 --port 49828"
time=2025-01-06T20:14:07.173+08:00 level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2025-01-06T20:14:07.173+08:00 level=INFO source=server.go:580 msg="waiting for llama runner to start responding"
time=2025-01-06T20:14:07.173+08:00 level=INFO source=server.go:614 msg="waiting for server to become available" status="llm server error"
time=2025-01-06T20:14:07.204+08:00 level=INFO source=runner.go:956 msg="starting go runner"
time=2025-01-06T20:14:07.208+08:00 level=INFO source=runner.go:957 msg=system info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | RISCV_VECT = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | cgo(clang)" threads=4
time=2025-01-06T20:14:07.208+08:00 level=INFO source=.:0 msg="Server listening on 127.0.0.1:49828"
llama_model_loader: loaded meta data with 33 key-value pairs and 389 tensors from D:\ollama_db\blobs\sha256-daec91ffb5dd0c27411bd71f29932917c49cf529a641d0168496c3a501e3062c (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = bert
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.size_label str = 567M
llama_model_loader: - kv 3: general.license str = mit
llama_model_loader: - kv 4: general.tags arr[str,4] = ["sentence-transformers", "feature-ex...
llama_model_loader: - kv 5: bert.block_count u32 = 24
llama_model_loader: - kv 6: bert.context_length u32 = 8192
llama_model_loader: - kv 7: bert.embedding_length u32 = 1024
llama_model_loader: - kv 8: bert.feed_forward_length u32 = 4096
llama_model_loader: - kv 9: bert.attention.head_count u32 = 16
llama_model_loader: - kv 10: bert.attention.layer_norm_epsilon f32 = 0.000010
llama_model_loader: - kv 11: general.file_type u32 = 1
llama_model_loader: - kv 12: bert.attention.causal bool = false
llama_model_loader: - kv 13: bert.pooling_type u32 = 2
llama_model_loader: - kv 14: tokenizer.ggml.model str = t5
llama_model_loader: - kv 15: tokenizer.ggml.pre str = default
llama_model_loader: - kv 16: tokenizer.ggml.tokens arr[str,250002] = ["<s>", "<pad>", "</s>", "<unk>", ","...
llama_model_loader: - kv 17: tokenizer.ggml.scores arr[f32,250002] = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv 18: tokenizer.ggml.token_type arr[i32,250002] = [3, 3, 3, 2, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 19: tokenizer.ggml.add_space_prefix bool = true
llama_model_loader: - kv 20: tokenizer.ggml.token_type_count u32 = 1
llama_model_loader: - kv 21: tokenizer.ggml.remove_extra_whitespaces bool = true
llama_model_loader: - kv 22: tokenizer.ggml.precompiled_charsmap arr[u8,237539] = [0, 180, 2, 0, 0, 132, 0, 0, 0, 0, 0,...
llama_model_loader: - kv 23: tokenizer.ggml.bos_token_id u32 = 0
llama_model_loader: - kv 24: tokenizer.ggml.eos_token_id u32 = 2
llama_model_loader: - kv 25: tokenizer.ggml.unknown_token_id u32 = 3
llama_model_loader: - kv 26: tokenizer.ggml.seperator_token_id u32 = 2
llama_model_loader: - kv 27: tokenizer.ggml.padding_token_id u32 = 1
llama_model_loader: - kv 28: tokenizer.ggml.cls_token_id u32 = 0
llama_model_loader: - kv 29: tokenizer.ggml.mask_token_id u32 = 250001
llama_model_loader: - kv 30: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 31: tokenizer.ggml.add_eos_token bool = true
llama_model_loader: - kv 32: general.quantization_version u32 = 2
llama_model_loader: - type f32: 244 tensors
llama_model_loader: - type f16: 145 tensors
time=2025-01-06T20:14:07.425+08:00 level=INFO source=server.go:614 msg="waiting for server to become available" status="llm server loading model"
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 4
llm_load_vocab: token to piece cache size = 2.1668 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = bert
llm_load_print_meta: vocab type = UGM
llm_load_print_meta: n_vocab = 250002
llm_load_print_meta: n_merges = 0
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 8192
llm_load_print_meta: n_embd = 1024
llm_load_print_meta: n_layer = 24
llm_load_print_meta: n_head = 16
llm_load_print_meta: n_head_kv = 16
llm_load_print_meta: n_rot = 64
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 64
llm_load_print_meta: n_embd_head_v = 64
llm_load_print_meta: n_gqa = 1
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 1.0e-05
llm_load_print_meta: f_norm_rms_eps = 0.0e+00
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 4096
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 0
llm_load_print_meta: pooling type = 2
llm_load_print_meta: rope type = 2
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 8192
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: ssm_dt_b_c_rms = 0
llm_load_print_meta: model type = 335M
llm_load_print_meta: model ftype = F16
llm_load_print_meta: model params = 566.70 M
llm_load_print_meta: model size = 1.07 GiB (16.25 BPW)
llm_load_print_meta: general.name = n/a
llm_load_print_meta: BOS token = 0 '<s>'
llm_load_print_meta: EOS token = 2 '</s>'
llm_load_print_meta: UNK token = 3 '<unk>'
llm_load_print_meta: SEP token = 2 '</s>'
llm_load_print_meta: PAD token = 1 '<pad>'
llm_load_print_meta: CLS token = 0 '<s>'
llm_load_print_meta: MASK token = 250001 '[PAD250000]'
llm_load_print_meta: LF token = 6 '▁'
llm_load_print_meta: EOG token = 2 '</s>'
llm_load_print_meta: max token length = 48
ggml_sycl_init: GGML_SYCL_FORCE_MMQ: no
ggml_sycl_init: SYCL_USE_XMX: yes
ggml_sycl_init: found 1 SYCL devices:
get_memory_info: [warning] ext_intel_free_memory is not supported (export/set ZES_ENABLE_SYSMAN=1 to support), use total memory as free memory
llm_load_tensors: ggml ctx size = 0.32 MiB
llm_load_tensors: offloading 24 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 25/25 layers to GPU
llm_load_tensors: SYCL0 buffer size = 577.23 MiB
llm_load_tensors: SYCL_Host buffer size = 520.29 MiB
llama_new_context_with_model: n_ctx = 2048
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 10000.0
llama_new_context_with_model: freq_scale = 1
[SYCL] call ggml_check_sycl
ggml_check_sycl: GGML_SYCL_DEBUG: 0
ggml_check_sycl: GGML_SYCL_F16: no
found 1 SYCL devices:

llama_kv_cache_init: SYCL0 KV buffer size = 192.00 MiB
llama_new_context_with_model: KV self size = 192.00 MiB, K (f16): 96.00 MiB, V (f16): 96.00 MiB
llama_new_context_with_model: CPU output buffer size = 0.00 MiB
llama_new_context_with_model: SYCL0 compute buffer size = 13.01 MiB
llama_new_context_with_model: SYCL_Host compute buffer size = 5.01 MiB
llama_new_context_with_model: graph nodes = 801
llama_new_context_with_model: graph splits = 2
time=2025-01-06T20:14:08.512+08:00 level=WARN source=runner.go:894 msg="%s: warming up the model with an empty run - please wait ... " !BADKEY=loadModel
D:\actions-runner\release-cpp-oneapi_2024_2_work\llm.cpp\llm.cpp\ollama-llama-cpp\src\llama.cpp:17622: GGML_ASSERT(seq_id < n_tokens && "seq_id cannot be larger than n_tokens with pooling_type == CLS or RANK") failed
time=2025-01-06T20:14:08.678+08:00 level=INFO source=server.go:614 msg="waiting for server to become available" status="llm server error"
time=2025-01-06T20:14:08.928+08:00 level=ERROR source=sched.go:455 msg="error loading llama server" error="llama runner process has terminated: GGML_ASSERT(seq_id < n_tokens && "seq_id cannot be larger than n_tokens with pooling_type == CLS or RANK") failed"
[GIN] 2025/01/06 - 20:14:08 | 500 | 1.8233861s | 127.0.0.1 | POST "/api/embeddings"

@qiuxin2012

@qiuxin2012
Contributor

@qadzhang bge-m3 hits a different error; we are fixing it.

@sgwhat
Contributor

sgwhat commented Jan 9, 2025

Hi @qadzhang, we have fixed the embedding issue. You can install our latest ipex-llm Ollama via pip install --pre --upgrade ipex-llm[cpp] and run bge-m3 again.
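
A quick check after upgrading (a minimal sketch; 1024 is the expected bge-m3 vector size, per the bert.embedding_length = 1024 metadata in the log above):

import ollama

# Re-run the earlier embeddings call; with the fix, bge-m3 should return a
# 1024-dimensional vector instead of crashing the runner with the
# GGML_ASSERT on seq_id.
response = ollama.embeddings(model="bge-m3", prompt="hello world")
assert len(response["embedding"]) == 1024
print("bge-m3 embedding OK, dimension:", len(response["embedding"]))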

@qadzhang
Author

bge-m3 works now as well. Thank you all for your efforts.

@qiuxin2012 @sgwhat

qadzhang reopened this on Jan 11, 2025