Commit baabbe5

fix(build): update Windows build environment setup for llama.cpp
Parent: aa56334

1 file changed: 3 additions & 4 deletions

File: .github/workflows/main.yml
@@ -309,11 +309,10 @@ jobs:
         run: |
           call "C:\Program Files (x86)\Intel\oneAPI\setvars.bat" intel64 --force
           call "C:\Program Files\Microsoft Visual Studio\2022\Enterprise\VC\Auxiliary\Build\vcvarsall.bat" x64
-          set CUDAHOSTCXX=cl.exe
+          set "PATH=%VCToolsInstallDir%bin\Hostx64\x64;%WindowsSdkBinPath%x64;%PATH%"
+          set "PATH=%PATH:C:\msys64\usr\bin;=%"
+          set "PATH=%PATH:C:\msys64\mingw64\bin;=%"
           make build/llama.cpp.stamp ${{ matrix.make }} LLAMA_ARGS="--target ggml"
-          #make build/llama.cpp.stamp ${{ matrix.make }} LLAMA_ARGS="--target common"
-          #make build/llama.cpp.stamp ${{ matrix.make }} LLAMA_ARGS="--target llama"
-          #make build/llama.cpp.stamp ${{ matrix.make }} LLAMA_ARGS="--target ggml-base"

       - name: copy backend modules to dist
         if: false #matrix.name == 'windows-gpu'
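The added `set "PATH=%PATH:C:\msys64\usr\bin;=%"` lines use cmd.exe's variable substitution syntax (`%VAR:search=replace%` with an empty replacement) to delete the MSYS2 tool directories from `PATH`, so the build resolves MSVC's `cl.exe` rather than MinGW binaries. A minimal Python sketch of that same deletion trick (the paths and helper name are illustrative, not part of the workflow):

```python
def strip_path_entries(path: str, unwanted: list[str]) -> str:
    """Remove specific directories from a Windows-style PATH string,
    mimicking cmd.exe's %PATH:<dir>;=% empty-replacement substitution."""
    for entry in unwanted:
        # cmd.exe replaces the literal substring "<dir>;" with nothing
        path = path.replace(entry + ";", "")
    return path

path = r"C:\msys64\usr\bin;C:\msys64\mingw64\bin;C:\Windows\system32"
cleaned = strip_path_entries(
    path, [r"C:\msys64\usr\bin", r"C:\msys64\mingw64\bin"]
)
print(cleaned)  # C:\Windows\system32
```

Note this is a literal-substring match, as in cmd.exe: an entry only disappears if it appears in `PATH` with a trailing semicolon, exactly as written.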

0 commit comments
