# flaws.cloud — Level 1

**Vulnerability:** S3 bucket with public "Everyone" read access
## Step 1 — DNS Recon with dig

### What is dig?
dig (Domain Information Groper) is a DNS lookup tool. You give it a domain, it asks DNS servers what records exist for that domain. Think of DNS as the internet's phonebook — dig lets you read that phonebook directly.
### Command breakdown, flag by flag

The full command is `dig +nocmd flaws.cloud any +multiline +noall +answer`:

| Part | What it does |
|---|---|
| `dig` | The tool itself |
| `+nocmd` | Suppresses the first line of output (the dig version/command echo); cleaner output |
| `flaws.cloud` | The domain you're querying |
| `any` | Query type — asks for ALL record types (A, NS, SOA, MX, etc.) instead of just one |
| `+multiline` | Formats records across multiple lines (especially useful for SOA — makes it readable) |
| `+noall` | Turns off ALL output sections by default |
| `+answer` | Then turns back on just the answer section — so you get only the actual records, no noise |

The `+noall +answer` combo is a filter: strip everything, then add back only what matters.
### Alternatives to dig

| Tool | Command | Notes |
|---|---|---|
| nslookup | `nslookup flaws.cloud` | Older, simpler, less info. Built into Windows too |
| host | `host flaws.cloud` | Even simpler, quick A/MX lookup |
| drill | `drill flaws.cloud any` | dig-like, used on some Linux distros |
| resolvectl | `resolvectl query flaws.cloud` | Modern systemd-based Linux |
| Online | dnschecker.org, mxtoolbox.com | GUI option when no CLI available |
For recon, dig is the standard — more control, more output.
## Step 2 — Reading the DNS Output

### What are all those IP addresses?
S3 is not one server — it's a massive distributed system. AWS has many physical servers across a region serving S3 content. When you query flaws.cloud, you get multiple IPs back because S3 load-balances across them. Any one of those IPs will serve the bucket — they're all equally valid endpoints for the same bucket.
This also means: if you visit 52.218.183.27 in a browser, you're hitting an AWS S3 endpoint server — it will respond but redirect you to aws.amazon.com/s3/ because it doesn't know which bucket you want (you need the hostname). That's why browsing directly to the IP doesn't give you the bucket directly.
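The load balancing is easy to verify by counting A records in the answer section. A minimal sketch, assuming captured-style output (the record format matches dig's, but the IPs here are illustrative placeholders — a live `dig flaws.cloud +noall +answer` will return different, rotating addresses):

```shell
# Illustrative answer section; S3 rotates the actual IPs constantly
answers='flaws.cloud. 5 IN A 52.218.183.27
flaws.cloud. 5 IN A 52.92.181.107'

# Count the A records: more than one IP for one name means load balancing
count=$(echo "$answers" | awk '$4 == "A"' | wc -l)
echo "$count"
```

Any of the listed IPs serves the same bucket; the count only tells you how many endpoints the resolver handed back this time.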
### Record types explained

| Record | What it means |
|---|---|
| A | Address record — maps domain → IPv4 address. These are the actual IPs of the S3 servers |
| NS | Name Server — tells you which DNS servers are authoritative for this domain. Here it's awsdns-* because Route 53 (AWS's DNS service) hosts this domain |
| SOA | Start of Authority — metadata about the zone itself (see below) |
## Step 3 — SOA Record Explained

```
flaws.cloud. 900 IN SOA ns-1890.awsdns-44.co.uk. awsdns-hostmaster.amazon.com. (
    1       ; serial
    7200    ; refresh (2 hours)
    900     ; retry (15 minutes)
    1209600 ; expire (2 weeks)
    86400   ; minimum (1 day)
)
```
The SOA record is the "ownership document" for the DNS zone. It's not for browsing — it's used by secondary DNS servers to know how/when to sync with the primary.
| Field | Value | Meaning |
|---|---|---|
| Primary NS | ns-1890.awsdns-44.co.uk. | The master DNS server for this zone |
| Admin email | awsdns-hostmaster.amazon.com. | Email of the zone admin (the first dot replaces @, so this is awsdns-hostmaster@amazon.com) |
| serial | 1 | Version number. Every time DNS records change, this increments. Secondary DNS servers compare this to know if they need to re-sync |
| refresh | 7200 (2 hrs) | How often secondary NS servers check the primary for updates |
| retry | 900 (15 min) | If the refresh fails, how long to wait before trying again |
| expire | 1209600 (2 weeks) | If a secondary can't reach the primary for this long, it stops answering for the zone entirely (considers itself stale) |
| minimum | 86400 (1 day) | Minimum TTL for negative caching — if a record doesn't exist, cache that "not found" answer for this long |
Why does this matter for recon? The NS records tell you the domain is on Route 53 (AWS), which means the site is almost certainly AWS-hosted. That narrows your attack surface before you've done anything else.
## Step 4 — Reverse DNS with nslookup

### What is nslookup?
nslookup (Name Server Lookup) queries DNS. You can use it forward (domain → IP) or reverse (IP → domain). Here it's used in reverse: given an IP, what hostname is it?
### What is .in-addr.arpa?

Reverse DNS is stored in a special zone called in-addr.arpa. The IP's octets are written backwards and .in-addr.arpa is appended. So 52.218.183.27 becomes `27.183.218.52.in-addr.arpa`.
The DNS system looks up that record and returns the hostname. This is called a PTR record (pointer record).
Why backwards? DNS names go from most specific on the left to most general on the right (host → domain → TLD). An IP address is the opposite: its most specific part (the host) is on the right. Reversing the octets makes the IP fit DNS's hierarchy.
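The reversal is mechanical and can be sketched in shell; `dig -x` performs the same transformation before querying (the IP below is the one from the A records above):

```shell
ip="52.218.183.27"

# Reverse the octets: 52.218.183.27 -> 27.183.218.52
rev=$(echo "$ip" | awk -F. '{print $4"."$3"."$2"."$1}')

# Append the reverse-DNS zone to get the name that holds the PTR record
ptr_name="${rev}.in-addr.arpa"
echo "$ptr_name"
```

`dig -x 52.218.183.27 +short` builds this name and returns the PTR in one step.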
### What does the result tell us?
s3-website-us-west-2.amazonaws.com — this IP belongs to an S3 static website endpoint in us-west-2. We now know:
- The bucket is hosted on S3
- It's in the us-west-2 region
- This is a website endpoint (not just a REST API endpoint)
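Extracting the region from that hostname is a one-liner; a small sketch assuming the `s3-website-<region>` naming pattern shown above:

```shell
ptr="s3-website-us-west-2.amazonaws.com"

# Strip the s3-website- prefix and the amazonaws.com suffix; what's left is the region
region=$(echo "$ptr" | sed -n 's/^s3-website-\(.*\)\.amazonaws\.com$/\1/p')
echo "$region"
```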
## Step 5 — S3 URL Formats

### Why do S3 buckets have domain-based URLs?
S3 bucket names must be globally unique across all of AWS. AWS uses the bucket name as a subdomain so each bucket gets its own URL automatically. The format is:
```
<bucket-name>.s3.amazonaws.com                   <- REST/API endpoint
<bucket-name>.s3-website-<region>.amazonaws.com  <- Website endpoint
```
**REST endpoint** (`<bucket-name>.s3.amazonaws.com`): For API access — returns XML listings, error messages in XML. Used by the AWS CLI and SDKs.
**Website endpoint** (`<bucket-name>.s3-website-<region>.amazonaws.com`): For browser access — returns actual HTML, supports index documents and custom error pages, no XML. You can't use the CLI against this endpoint.
### Is the format always the same?
Yes. The formula is always `<bucket-name>.s3.amazonaws.com` (REST) or `<bucket-name>.s3-website-<region>.amazonaws.com` (website). So if your bucket is named alexsusanu.com, the REST URL would be `alexsusanu.com.s3.amazonaws.com`.
This is also why bucket name squatting is possible — if someone else takes the name alexsusanu.com in S3, they "own" that URL.
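Since the formula is fixed, the candidate URLs can be assembled from just the bucket name and region; a sketch using the values found during recon:

```shell
bucket="flaws.cloud"
region="us-west-2"   # from the reverse-DNS lookup

# REST/API endpoint (what the AWS CLI talks to)
rest_url="http://${bucket}.s3.amazonaws.com"

# Website endpoint (what a browser should use)
website_url="http://${bucket}.s3-website-${region}.amazonaws.com"

echo "$rest_url"
echo "$website_url"
```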
## Step 6 — Accessing the Bucket

### Listing the bucket anonymously

The command is `aws s3 ls s3://flaws.cloud/ --no-sign-request`:

| Part | What it does |
|---|---|
| `aws s3 ls` | List contents of an S3 bucket (like `ls` in Linux) |
| `s3://flaws.cloud/` | The bucket URI. `s3://` is the scheme, `flaws.cloud` is the bucket name |
| `--no-sign-request` | Send the request without any AWS credentials — anonymous/unauthenticated access |
Why does --no-sign-request work here? Because the bucket ACL has "Everyone" read access enabled. This is the misconfiguration we're exploiting.
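The same open listing is visible without the AWS CLI at all: an anonymous GET against the REST endpoint (`curl http://flaws.cloud.s3.amazonaws.com/`) returns an XML `ListBucketResult`. A sketch that pulls object keys out of that XML — the response body below is a hand-trimmed illustration of the shape, not a live capture:

```shell
# Illustrative (trimmed) shape of an anonymous S3 REST listing response
listing='<ListBucketResult><Name>flaws.cloud</Name><Contents><Key>index.html</Key></Contents><Contents><Key>secret-e4443fc.html</Key></Contents></ListBucketResult>'

# Extract each <Key> element, then strip the tags to leave bare object names
keys=$(echo "$listing" | grep -o '<Key>[^<]*</Key>' | sed 's/<[^>]*>//g')
echo "$keys"
```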
### Downloading a file

`aws s3 cp s3://flaws.cloud/secret-e4443fc.html . --no-sign-request` — `cp` copies, and `.` means "current directory". Same as wget-ing a file.
## Full AWS S3 Cheatsheet

### Navigation

```
aws s3 ls                                  # list all YOUR buckets
aws s3 ls s3://bucket-name/                # list bucket contents
aws s3 ls s3://bucket-name/folder/         # list subfolder
aws s3 ls s3://bucket-name/ --recursive    # list everything recursively
```
### Read/Download

```
aws s3 cp s3://bucket-name/file.txt .         # download single file
aws s3 cp s3://bucket-name/ . --recursive     # download entire bucket
aws s3 sync s3://bucket-name/ ./local-dir/    # sync bucket to local dir
```
### Upload/Write (if you have write perms)

```
aws s3 cp file.txt s3://bucket-name/       # upload file
aws s3 rm s3://bucket-name/file.txt        # delete file
```
### Anonymous access (no credentials)

```
aws s3 ls s3://bucket-name/ --no-sign-request
aws s3 cp s3://bucket-name/file.txt . --no-sign-request
```
### With a specific credentials profile
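Assuming a profile named `myprofile` (a placeholder) has already been set up with `aws configure --profile myprofile`, usage looks like:

```shell
# Run any s3 command as a named profile instead of the default credentials
aws s3 ls s3://bucket-name/ --profile myprofile
aws s3 cp s3://bucket-name/file.txt . --profile myprofile
```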
### Credential management

```
aws configure                     # set up default creds interactively
aws configure --profile stolen    # set up named profile
cat ~/.aws/credentials            # view stored creds
cat ~/.aws/config                 # view config
```
### Set creds via environment variables (for one-off use)

```
export AWS_ACCESS_KEY_ID=AKIA...
export AWS_SECRET_ACCESS_KEY=abc123...
aws s3 ls                         # uses env vars, no profile needed
```
### Recon / Identity

```
aws sts get-caller-identity       # who am I? (account, user/role, ARN)
aws iam get-user                  # get user info
aws iam list-attached-user-policies --user-name USERNAME
aws iam list-user-policies --user-name USERNAME
```
### Key flags reference

| Flag | What it does |
|---|---|
| `--no-sign-request` | Anonymous — no credentials sent |
| `--profile name` | Use named credential profile |
| `--region us-east-1` | Specify region (use if default fails or bucket is in another region) |
| `--recursive` | Apply to all files in bucket/path |
| `--endpoint-url URL` | Override endpoint (for non-AWS S3-compatible storage like MinIO) |
## Vulnerability Summary
The flaw: S3 bucket ACL set to allow "Everyone" (public, anonymous) read access.
Why it matters: Anyone on the internet can list and download the contents without any credentials at all. No authentication required.
The fix: ACLs should be set to private. Use bucket policies to grant access only to specific principals (IAM users/roles). AWS now blocks public ACLs by default on new buckets — this setting is called "Block Public Access".
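A minimal sketch of the remediation, assuming you own the bucket (`my-bucket` is a placeholder) and have owner credentials configured:

```shell
# Turn on all four Block Public Access settings for the bucket,
# which overrides any public ACLs or public bucket policies
aws s3api put-public-access-block \
  --bucket my-bucket \
  --public-access-block-configuration \
    "BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true"
```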
Attack chain:
1. dig → find IPs → reverse DNS → identify it's S3 in us-west-2
2. Construct bucket URL: flaws.cloud.s3.amazonaws.com
3. aws s3 ls s3://flaws.cloud/ --no-sign-request → bucket listing is open
4. Download files → find secret-e4443fc.html → level 2 URL inside