Problem
Sandbox.mountBucket() returns success even when the underlying s3fs mount silently fails. The user-facing symptom is that the mount endpoint reports {"ok":true} but no FUSE filesystem is ever attached: writes "succeed" against a plain local directory and never reach R2, and the later unmountBucket() blows up with
fusermount: entry for <mount> not found in /etc/mtab
because there was nothing to unmount. The unmount error is the first visible signal of a problem that began at mount time.
I hit this via a typo in the bucket name, which caused R2 to reject the bucket check with 403 AccessDenied — yet the SDK treated the mount as successful.
Root cause
In packages/sandbox/src/sandbox.ts, executeS3FSMount invokes s3fs in its default (daemonising) mode:
const mountCmd = `s3fs ${shellEscape(bucket)} ${shellEscape(mountPath)} -o ${optionsStr}`;
const result = sessionId
  ? await this.execWithSession(mountCmd, sessionId, { origin: 'internal' })
  : await this.execInternal(mountCmd);
if (result.exitCode !== 0) {
  throw new S3FSMountError(
    `S3FS mount failed: ${result.stderr || result.stdout || 'Unknown error'}`
  );
}
s3fs forks a child to run the FUSE event loop and the parent exits 0 before the bucket check completes. When the child then fails its SigV4 bucket check (auth error, wrong bucket name, network error, …) it logs Exiting FUSE event loop due to errors and dies — but the parent has long since returned success, so executeS3FSMount never throws.
I confirmed this directly by running s3fs … -o curldbg -f -d against a misnamed bucket: the foreground mount surfaces the 403 immediately, while the default backgrounded invocation exits 0 and leaves no mountpoint, no s3fs process, and no entry in /proc/self/mountinfo.
Suggested fix (from Claude)
After invoking s3fs, verify the mount was actually established before returning. The cheapest reliable check is mountpoint -q <path> with a short retry loop to absorb the brief window between the fork and the FUSE filesystem appearing in the kernel's mount table. If the path never becomes a mountpoint, throw S3FSMountError with whatever stderr was captured (and, if possible, the s3fs log) so the caller sees a real error instead of a false ok.
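The retry loop could be sketched as below. Note this is a sketch, not the SDK's actual code: the RunCommand type stands in for the real execInternal/execWithSession helpers, and JSON.stringify is only rough shell quoting (real code should reuse shellEscape).

```typescript
type ExecResult = { exitCode: number; stdout: string; stderr: string };
type RunCommand = (cmd: string) => Promise<ExecResult>;

const sleep = (ms: number) => new Promise<void>((r) => setTimeout(r, ms));

// Poll `mountpoint -q <path>` until the kernel reports a real mountpoint
// there, or the retry budget runs out. `mountpoint -q` exits 0 only when
// the path is a mountpoint, which is exactly the signal the parent s3fs
// process fails to provide in daemonised mode.
async function waitForMountpoint(
  runCommand: RunCommand,
  mountPath: string,
  timeoutMs = 2000,
  intervalMs = 50
): Promise<boolean> {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    const res = await runCommand(`mountpoint -q ${JSON.stringify(mountPath)}`);
    if (res.exitCode === 0) return true;
    await sleep(intervalMs);
  }
  return false;
}
```

executeS3FSMount would then call this after the s3fs exec and throw S3FSMountError when it returns false, instead of trusting the parent's exit code.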
A defensive secondary measure: capture s3fs stderr to a tempfile via -o logfile=… (or shell redirection) so the thrown error can include the actual reason (403 AccessDenied, unable to connect, etc.) instead of the empty stdout/stderr the parent produces.
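A minimal sketch of that capture, assuming helper names of my own invention (buildS3fsCommand, readMountLog); only the s3fs options logfile and dbglevel are real:

```typescript
import { readFile } from 'node:fs/promises';
import { tmpdir } from 'node:os';
import { join } from 'node:path';

// Route s3fs's own log to a tempfile so a failed mount can report the real
// cause (e.g. 403 AccessDenied) instead of the parent's empty stderr.
function buildS3fsCommand(bucket: string, mountPath: string, baseOpts: string) {
  const logPath = join(tmpdir(), `s3fs-${Date.now()}.log`);
  // -o logfile=<file> sends s3fs messages to the file; dbglevel=info keeps
  // auth/network errors in the log without full debug noise.
  const cmd = `s3fs ${bucket} ${mountPath} -o ${baseOpts},logfile=${logPath},dbglevel=info`;
  return { cmd, logPath };
}

// On mount-verification failure, fold the log tail into the thrown error.
async function readMountLog(logPath: string, maxChars = 2000): Promise<string> {
  try {
    const text = await readFile(logPath, 'utf8');
    return text.slice(-maxChars);
  } catch {
    return '(no s3fs log captured)';
  }
}
```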
For your agent
Tasks
- Add a regression test: run mountBucket against a bogus bucket name and assert it throws S3FSMountError (currently it resolves)
- In executeS3FSMount, after the s3fs exec, poll mountpoint -q <mountPath> with a small retry budget (e.g. up to ~2s, 50ms intervals) before declaring success
- If the poll never sees a mountpoint, throw S3FSMountError and roll back the same way the existing catch does (delete password file, drop activeMounts entry); make sure the mkdir -p'd mount-point directory is also removed so we don't leave a stale empty dir behind