EBS (Elastic Block Store) is network-attached block storage for EC2. Think of it as a virtual hard drive in the cloud.
| Type | Use case | Max IOPS | Max throughput |
|---|---|---|---|
| gp3 (SSD) | General purpose (boot, apps) | 16,000 | 1,000 MB/s |
| gp2 (SSD) | General purpose (legacy) | 16,000 (burst) | 250 MB/s |
| io1/io2 (SSD) | High performance DBs | 64,000 / 256,000 | 1,000 MB/s |
| io2 Block Express | Critical production DBs | 256,000 | 4,000 MB/s |
| st1 (HDD) | Throughput-optimized (data warehouse) | 500 | 500 MB/s |
| sc1 (HDD) | Cold HDD (infrequent access) | 250 | 250 MB/s |
Rules:
- Only gp2/gp3, io1/io2 can be boot volumes
- HDD (st1/sc1) cannot be boot volumes
- gp3: IOPS and throughput are independent (provision separately) — better than gp2
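With gp3, IOPS and throughput are separate CLI flags. A minimal sketch (size, AZ, and values are placeholders):

```shell
# Create a gp3 volume with IOPS and throughput provisioned independently.
# gp3 includes a 3,000 IOPS / 125 MB/s baseline at no extra cost;
# anything above that is billed separately.
aws ec2 create-volume \
  --volume-type gp3 \
  --size 100 \
  --iops 6000 \
  --throughput 500 \
  --availability-zone us-east-1a
```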
EBS Multi-Attach (io1/io2 family only): attach the same EBS volume to multiple EC2 instances in the same AZ.
Use case: Cluster applications (Oracle RAC, Lustre), shared scratch space.
EBS Encryption:
- Uses KMS
- Snapshot of an encrypted volume is encrypted
- Volume created from an encrypted snapshot is encrypted
- Data in flight between EC2 and the volume is encrypted
Encrypting existing unencrypted volume:
Unencrypted Volume → Create Snapshot → Copy Snapshot (enable encryption)
→ Create Volume from encrypted snapshot → Attach to EC2
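The same workflow as AWS CLI commands (all IDs are placeholders; the copy step is where encryption is enabled):

```shell
# 1. Snapshot the unencrypted volume
aws ec2 create-snapshot --volume-id vol-0abc123 --description "pre-encryption"

# 2. Copy the snapshot with encryption enabled (uses the default EBS KMS key
#    unless --kms-key-id is supplied)
aws ec2 copy-snapshot \
  --source-snapshot-id snap-0abc123 \
  --source-region us-east-1 \
  --encrypted

# 3. Create a volume from the encrypted copy (same AZ as the instance)
aws ec2 create-volume \
  --snapshot-id snap-0def456 \
  --availability-zone us-east-1a

# 4. Attach it, then detach/delete the old unencrypted volume
aws ec2 attach-volume --volume-id vol-0def456 --instance-id i-0abc123 --device /dev/sdf
```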
EFS is a shared NFS file system — multiple EC2 instances (across AZs) can mount and use it simultaneously.
EC2 in AZ-A ─┐
EC2 in AZ-B ─┼─→ EFS (shared POSIX file system)
Lambda ───────┘
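Mounting EFS from an instance looks like this (filesystem ID and mount point are placeholders; the helper requires the amazon-efs-utils package, or plain NFSv4.1 works without it):

```shell
# With the EFS mount helper (amazon-efs-utils package)
sudo mkdir -p /mnt/efs
sudo mount -t efs fs-0123456789abcdef0:/ /mnt/efs

# Equivalent plain-NFS mount, no helper required
sudo mount -t nfs4 -o nfsvers=4.1 \
  fs-0123456789abcdef0.efs.us-east-1.amazonaws.com:/ /mnt/efs
```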
| Feature | EBS | EFS |
|---|---|---|
| Connection | One EC2 at a time (except Multi-Attach) | Many EC2s simultaneously |
| AZ | Tied to one AZ | Multi-AZ |
| Protocol | Block (no file path) | POSIX NFS |
| Scaling | Fixed size (provision upfront) | Auto-grows |
| Use for | OS disks, databases | Shared files, CMS, home directories |
EFS Storage Classes:
| Class | Use | Price |
|---|---|---|
| Standard | Frequently accessed | Higher |
| Standard-IA | Infrequently accessed | 47% lower |
| One Zone | Frequently accessed, one AZ | 47% lower |
| One Zone-IA | IA, one AZ | 92% lower |
EFS Lifecycle Policies: Automatically move files to IA after N days (similar to S3).
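Lifecycle policies are set per file system; a sketch with a placeholder ID (AFTER_30_DAYS is one of the fixed values the API accepts):

```shell
# Transition files not accessed for 30 days to the IA storage class
aws efs put-lifecycle-configuration \
  --file-system-id fs-0123456789abcdef0 \
  --lifecycle-policies "TransitionToIA=AFTER_30_DAYS"
```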
EFS Performance Modes:
- General Purpose: latency-sensitive (web serving, CMS) — default
- Max I/O: high parallelism (big data, media processing) — higher latency
EFS Throughput Modes:
- Bursting: throughput scales with size (1TB = 50MB/s baseline, burst to 100MB/s)
- Elastic: auto-scales throughput based on workload — recommended for unpredictable IO
- Provisioned: specify throughput independent of size
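Bursting-mode numbers scale linearly with stored data, per the 1 TB figures above; a quick check:

```shell
# Baseline 50 MB/s and burst 100 MB/s per TB stored (figures from above)
size_tb=2
echo "baseline: $(( size_tb * 50 )) MB/s"
echo "burst: $(( size_tb * 100 )) MB/s"
```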
Storage Gateway: a bridge between on-premises infrastructure and AWS cloud storage.
| Type | Protocol | Stores to | Use case |
|---|---|---|---|
| S3 File Gateway | NFS/SMB | S3 | File shares backed by S3, migration |
| FSx File Gateway | SMB | Amazon FSx for Windows | Windows file shares on-premises |
| Volume Gateway (Stored) | iSCSI | Primary on-prem, async backup to S3 | On-prem primary storage + S3 backup |
| Volume Gateway (Cached) | iSCSI | S3 as primary, cache frequently accessed locally | S3 as primary, low-latency access to frequent data |
| Tape Gateway | iSCSI VTL | S3 or Glacier | Replace physical tape library |
S3 File Gateway: On-premises apps access S3 as if it's a local NFS mount. Files stored as S3 objects.
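Clients mount the gateway's share like any NFS export (gateway IP and share path are placeholders):

```shell
# Mount an S3 File Gateway NFS share; files written here are stored
# as objects in the backing S3 bucket.
sudo mount -t nfs -o nolock,hard 192.168.1.10:/my-bucket /mnt/s3files
```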
- EBS is AZ-locked — can't attach to EC2 in different AZ. To move: snapshot → new volume in different AZ.
- gp3 > gp2 — always prefer gp3 for new volumes (cheaper, more control).
- EFS = NFS = shared access. EBS = block = single instance.
- EFS not available in all regions and doesn't support Windows (POSIX only).
- Instance Store (not EBS): physically attached to host, fastest I/O, data lost on stop/terminate. Use for buffers, caches, temp data.
- Storage Gateway always means hybrid cloud: a bridge from on-premises to AWS.
Q: Share files across multiple EC2 instances in different AZs? → EFS (NFS mount, multi-AZ).
Q: On-premises app needs to write files that end up in S3? → S3 File Gateway — NFS/SMB on-premises, backed by S3.
Q: Highest IOPS for critical database on EC2? → io2 Block Express EBS volume.
Q: EC2 needs fastest possible local storage (ephemeral OK)? → Instance Store (NVMe SSDs physically attached to host).