ggml-webgpu: Fix compilation error in ggml_backend_webgpu_debug in debug mode#21798

Merged
reeselevine merged 1 commit into ggml-org:master from yomaytk:webgpu-fix-debug-function
Apr 13, 2026

Conversation

@yomaytk
Contributor

@yomaytk yomaytk commented Apr 12, 2026

Overview

This PR fixes a compilation error that occurs when building in debug mode (related to #21521).

llama.cpp/ggml/src/ggml-webgpu/ggml-webgpu.cpp:537:9: error: invalid argument
      type 'void' to unary expression
  537 |     if (!ggml_backend_webgpu_map_buffer(ctx, ctx->debug_host_buf, wgpu::MapMode::Read, 0,
      |         ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  538 |                                         ctx->debug_host_buf.GetSize())) {

Since ggml_backend_webgpu_map_buffer returns void, its result cannot be negated in a conditional. It seems that the call to ggml_backend_webgpu_check_wait_status at the end of ggml_backend_webgpu_map_buffer is sufficient for validation, so this PR removes the extra conditional check.

@yomaytk yomaytk requested a review from a team as a code owner April 12, 2026 08:33
@github-actions github-actions bot added the ggml (changes relating to the ggml tensor library for machine learning) and WebGPU labels Apr 12, 2026
@reeselevine
Contributor

Thanks for checking this and for the fix!

@reeselevine reeselevine merged commit bafae27 into ggml-org:master Apr 13, 2026
46 of 47 checks passed
@yomaytk yomaytk deleted the webgpu-fix-debug-function branch April 13, 2026 03:19
crodjer added a commit to crodjer/llama.cpp that referenced this pull request Apr 13, 2026
* origin/master:
  webui: MCP Diagnostics improvements (ggml-org#21803)
  Remove extra conditional check on debug mode. (ggml-org#21798)
  sycl: disable Q1_0 in backend and cleanup unused variables (ggml-org#21807)
  mtmd: fix crash when sending image under 2x2 pixels (ggml-org#21711)
  mtmd: qwen3 audio support (qwen3-omni and qwen3-asr) (ggml-org#19441)
  convert : force f16 or f32 on step3-vl conv weights (ggml-org#21646)
  mtmd: add gemma 4 test (vision + audio) [no ci] (ggml-org#21806)
  mtmd: add Gemma 4 audio conformer encoder support (ggml-org#21421)
  fix: Proper messages rendering for "Show raw output" (ggml-org#21672)
  docs: add guide on how to add multimodal support (ggml-org#21778)
cnsiva pushed a commit to saas-home/llama.cpp that referenced this pull request Apr 13, 2026
HermestoAizales pushed a commit to HermestoAizales/llama.cpp that referenced this pull request Apr 13, 2026
ArberSephirotheca pushed a commit to ArberSephirotheca/llama.cpp that referenced this pull request Apr 21, 2026