Commit 82345e8

cataphract and claude committed
solib_bootstrap: fix x86-64 ld.so jump clobbering entry address
The x86-64 inline asm restoring the kernel stack and jumping to ld.so:

    "mov %[sp], %%rsp\n"
    "xor %%edx, %%edx\n"   // required: rdx = 0 for ld.so startup ABI
    "jmpq *%[entry]\n"

GCC at -O0 allocated %[entry] (ldso_entry) to rdx, causing the xor to zero the
jump target before the jmpq executed → SIGSEGV at address 0x0 on every x86-64
ExecSolib launch.

The fix is to pin ldso_entry to rax via the "a" constraint. Using the "rdx"
clobber alone is not sufficient: GCC is permitted to allocate input operands
into clobbered registers because inputs are consumed before the asm fires. A
specific register constraint ("a" = rax) is the correct and optimization-safe
solution.

With the fix, GCC emits:

    mov %rcx, %rsp   ; stack_top in rcx (or any non-rax "r")
    xor %edx, %edx   ; zero rdx (harmless: entry is in rax)
    jmpq *%rax       ; jump to ldso_entry

Co-Authored-By: Claude Sonnet 4.6 (1M context) <noreply@anthropic.com>
1 parent e3c4da7 commit 82345e8

File tree: 1 file changed, +11 −4 lines


ext/solib_bootstrap.c

Lines changed: 11 additions & 4 deletions
@@ -337,13 +337,20 @@ noreturn void _dd_solib_bootstrap(void *stack_top) {
   // function is noreturn - we transfer control via inline asm.

 #ifdef __x86_64__
+  // Restore the original kernel stack and jump to ld.so's entry point.
+  // rdx must be 0 at ld.so startup (x86-64 ABI: rdx = rtld finalizer, 0 = none).
+  // The "a" constraint pins ldso_entry to rax, guaranteeing it is never in rdx
+  // (which the xor would clobber). Using a clobber on "rdx" alone is not
+  // sufficient: GCC is permitted to allocate inputs into clobbered registers
+  // because inputs are consumed before the asm fires. A specific constraint
+  // ("a" = rax) is the correct solution and is safe at any optimisation level.
   uintptr_t ldso_entry = bs_ldso.entry;
   __asm__ volatile(
-      "mov %0, %%rsp\n"
+      "mov %[sp], %%rsp\n"
       "xor %%edx, %%edx\n"
-      "jmp *%1\n"
-      :: "r"(stack_top), "r"(ldso_entry)
-      : "memory"
+      "jmpq *%[entry]\n"
+      :: [sp] "r"(stack_top), [entry] "a"(ldso_entry)
+      : "rdx", "memory"
   );
 #elif defined(__aarch64__)
   uintptr_t ldso_entry = bs_ldso.entry;
