VSJ001/Distributed-NFS-File-Server-with-Scheduler-Based-Locking

Scheduler-Based File Locking for Fair Write Access in an NFS-Like Distributed File System

A concurrent, NFS-like client-server file system in Rust with a research extension exploring scheduler-based file locking for improving fairness under write contention.

When multiple clients concurrently request write access to the same file, the order in which write locks are granted is not hardwired to a single first-come-first-served queue. Instead, pending write requests are scheduled using classic operating system scheduling algorithms: FIFO, Round Robin (RR), Multi-Level Feedback Queue (MLFQ), and Completely Fair Scheduler (CFS). This allows analysis of the trade-offs between fairness, write latency, and server throughput in a distributed file system setting.

Language: Rust (2024 edition)

Usage

Build

cargo build

Server

Start the file server (binds to 127.0.0.1:7878 over UDP):

cargo run -p server

By default the server uses a FIFO write scheduler. To select a different scheduling policy, use the --scheduler flag:

cargo run -p server -- --scheduler fifo   # FIFO (default)
cargo run -p server -- --scheduler rr     # Round Robin
cargo run -p server -- --scheduler mlfq   # Multi-Level Feedback Queue
cargo run -p server -- --scheduler cfs    # Completely Fair Scheduler

Client

In a separate terminal, start the interactive client:

cargo run -p client

VFS Tests

cargo test -p vfs

Server Tests (concurrency + lock manager)

cargo test -p server

System Overview

At a high level, the system consists of:

  • Clients that issue file operations (read, write, lookup)
  • A file server that handles requests via RPC
  • A virtual file system backed by a single disk image file
  • A centralized file locking mechanism enforcing exclusive write access
  • A scheduler-based lock queue that determines which waiting client is granted the next write lock
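The scheduler-based lock queue can be pictured as a pluggable policy behind a common interface. The sketch below is illustrative, not the project's actual code: the trait name `WriteScheduler` matches the repository structure listing, but the method names and `ClientId` type are assumptions.

```rust
use std::collections::VecDeque;

// Hypothetical identifier for a client waiting on a write lock.
type ClientId = u64;

// Assumed shape of the scheduler interface: the lock manager enqueues
// waiting writers and asks the policy which client gets the lock next.
trait WriteScheduler {
    fn enqueue(&mut self, client: ClientId);
    fn pick_next(&mut self) -> Option<ClientId>;
}

// FIFO policy: grant the write lock strictly in arrival order.
struct FifoScheduler {
    queue: VecDeque<ClientId>,
}

impl FifoScheduler {
    fn new() -> Self {
        Self { queue: VecDeque::new() }
    }
}

impl WriteScheduler for FifoScheduler {
    fn enqueue(&mut self, client: ClientId) {
        self.queue.push_back(client);
    }
    fn pick_next(&mut self) -> Option<ClientId> {
        self.queue.pop_front()
    }
}

fn main() {
    let mut sched = FifoScheduler::new();
    sched.enqueue(1);
    sched.enqueue(2);
    // Arrival order is preserved under FIFO.
    assert_eq!(sched.pick_next(), Some(1));
    assert_eq!(sched.pick_next(), Some(2));
    assert_eq!(sched.pick_next(), None);
    println!("fifo ok");
}
```

RR, MLFQ, and CFS would implement the same trait with different queue disciplines, so the lock manager never needs to know which policy is active.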

Reads are served from the last committed version, while writes require exclusive access. Write locks are lease-based to prevent indefinite lock ownership.
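A lease bounds how long any one client can hold the write lock. A minimal sketch of the idea, with hypothetical names (the real lease logic lives in `server/src/lock.rs`):

```rust
use std::time::{Duration, Instant};

// Sketch of a write lease: the lock is granted for a bounded duration so
// a crashed or stalled client cannot hold it indefinitely.
struct Lease {
    holder: u64,        // client currently holding the write lock
    granted_at: Instant,
    duration: Duration, // how long the lease remains valid
}

impl Lease {
    fn new(holder: u64, duration: Duration) -> Self {
        Self { holder, granted_at: Instant::now(), duration }
    }

    // Once the lease expires, the lock manager may reclaim the lock and
    // hand it to whichever waiting client the scheduler picks next.
    fn expired(&self) -> bool {
        self.granted_at.elapsed() >= self.duration
    }
}

fn main() {
    let lease = Lease::new(42, Duration::from_millis(10));
    assert!(!lease.expired());
    std::thread::sleep(Duration::from_millis(20));
    assert!(lease.expired());
    println!("lease for client {} expired as expected", lease.holder);
}
```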

Multi-Client Concurrency

The server handles multiple clients concurrently using Rust's standard threading primitives. All mutable state (Vsfs, LockManager, pending write table) is wrapped in an Arc<Mutex<ServerState>>. The UDP socket is shared separately via Arc<UdpSocket> since sending only requires &self. For each incoming datagram, the main loop clones the Arcs and spawns a dedicated thread to dispatch the request, allowing the server to immediately accept the next packet without waiting for the previous one to finish. The LockManager ensures that concurrent writes to the same inode are serialized and data is never corrupted.
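The accept loop described above can be sketched as follows. This is a simplified stand-in, not the server's actual `main.rs`: `ServerState` is reduced to a counter, and a single datagram is handled so the example terminates, whereas the real server loops forever dispatching each packet.

```rust
use std::net::UdpSocket;
use std::sync::{Arc, Mutex};
use std::thread;

// Stand-in for the server's mutable state (Vsfs, LockManager, pending writes).
struct ServerState {
    requests_handled: u64,
}

// Receive one datagram and dispatch it on a fresh thread, mirroring the
// thread-per-request pattern; the real server does this in a loop.
fn handle_one(
    socket: &Arc<UdpSocket>,
    state: &Arc<Mutex<ServerState>>,
) -> std::io::Result<thread::JoinHandle<()>> {
    let mut buf = [0u8; 1500];
    let (len, peer) = socket.recv_from(&mut buf)?;
    // Clone the Arcs so the worker thread owns its own handles.
    let socket = Arc::clone(socket);
    let state = Arc::clone(state);
    Ok(thread::spawn(move || {
        // Hold the mutex only while touching shared state.
        state.lock().unwrap().requests_handled += 1;
        // Sending needs only &UdpSocket, so the socket Arc is shared, not locked.
        let _ = socket.send_to(&buf[..len], peer);
    }))
}

fn main() -> std::io::Result<()> {
    let server = Arc::new(UdpSocket::bind("127.0.0.1:0")?);
    let state = Arc::new(Mutex::new(ServerState { requests_handled: 0 }));
    let addr = server.local_addr()?;

    // A tiny client: send one request, then read back the echo.
    let client = UdpSocket::bind("127.0.0.1:0")?;
    client.send_to(b"write req", addr)?;

    handle_one(&server, &state)?.join().unwrap();

    let mut buf = [0u8; 1500];
    let (n, _) = client.recv_from(&mut buf)?;
    assert_eq!(&buf[..n], b"write req");
    assert_eq!(state.lock().unwrap().requests_handled, 1);
    println!("dispatch ok");
    Ok(())
}
```

Because `recv_from` happens on the main thread and only the dispatch runs on the spawned thread, the server can accept the next packet immediately.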

Persistent Disk Image

The disk image (vsfs.img) persists across server restarts. On startup, the server checks whether the image file already exists: if it does, it opens it with Vsfs::open() to reload all existing files and directories; if not, it creates a fresh image with Vsfs::create().
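The startup check amounts to an open-or-create branch. In this sketch a plain `File` stands in for the real `Vsfs` handle (the project would call `Vsfs::open()` / `Vsfs::create()` where noted):

```rust
use std::fs::{File, OpenOptions};
use std::path::Path;

// Returns the image handle plus whether an existing image was reloaded.
fn open_or_create(path: &str) -> std::io::Result<(File, bool)> {
    if Path::new(path).exists() {
        // Image already on disk: reopen it so all files/directories
        // persist across restarts (the real code calls Vsfs::open()).
        Ok((OpenOptions::new().read(true).write(true).open(path)?, true))
    } else {
        // First run: lay down a fresh image (Vsfs::create() in the project).
        Ok((File::create(path)?, false))
    }
}

fn main() -> std::io::Result<()> {
    let path = std::env::temp_dir().join("vsfs-demo.img");
    let path = path.to_str().unwrap().to_string();
    let _ = std::fs::remove_file(&path);

    let (_, existed) = open_or_create(&path)?;
    assert!(!existed); // first start: a fresh image is created
    let (_, existed) = open_or_create(&path)?;
    assert!(existed);  // simulated restart: the same image is reopened
    println!("persist ok");

    std::fs::remove_file(&path)?;
    Ok(())
}
```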

Conventions

  • Files: lower kebab-case. Example: file-name-1.txt

Repository Structure

.
  Cargo.toml                  # Workspace manifest
  README.md
  benchmarks/                 # Benchmark scripts, raw results, reproduction guide
    Cargo.toml
    README.md
    src/                      # Benchmark client implementation
    run_*.sh                  # Driver scripts (all, comparison, scheduler, server load, etc.)
    baseline_results/         # First-pass single-scheduler results
    results/                  # Comparison, high-confidence, server-load, timed-starvation runs
  common/                     # Shared protocol types and UDP wire format
    Cargo.toml
    src/lib.rs                # Request/Response enums, send_msg/recv_msg
  server/                     # File server (UDP, multi-client, lease-based locking)
    Cargo.toml
    src/
      main.rs                 # Event loop, Arc<Mutex<>> threading, request dispatch
      lock.rs                 # LockManager, lease expiry, WriteScheduler trait,
                              # FifoScheduler, RrScheduler, MlfqScheduler, CfsScheduler
      lib.rs                  # Crate root
    tests/
      lock_test.rs            # Lock manager unit tests
      multi_client_test.rs    # Multi-client concurrency tests
  client/                     # Interactive REPL client
    Cargo.toml
    src/main.rs               # CLI: lookup, create, mkdir, read, write, ls, rm
  vfs/                        # Very Simple File System (on-disk image)
    Cargo.toml
    src/
      lib.rs                  # Public API (create, open, lookup, read, write, etc.)
      superblock.rs           # Superblock and disk layout constants
      inode.rs                # Inode structure (file type, size, direct pointers)
      bitmap.rs               # Allocation bitmaps for inodes and data blocks
      dir.rs                  # Directory entry operations (lookup, add, remove)
      file.rs                 # File read/write with block-level I/O
      disk.rs                 # Raw disk image I/O abstraction
      error.rs                # Custom error types
    tests/integration.rs      # End-to-end VFS tests

About

Concurrent NFS-like distributed file server in Rust (UDP RPC, Arc<Mutex<>> threading, lease-based write locks) that applies classic OS scheduling algorithms (FIFO, RR, MLFQ, CFS) to write-lock arbitration. Includes a VSFS-based on-disk filesystem and an external benchmark suite measuring fairness, latency, and throughput.
