[quick tip] the trick for ultra-fast S3 bucket discovery

Amazon S3 is a massively popular object storage service, and finding S3 buckets is a common task for bug bounty hunters and pentesters. The traditional way to find buckets is by bruteforcing common names: sending an HTTP request to s3.amazonaws.com/<bucket-name> for every word in a list, then checking the HTTP status code to see if the bucket exists. The problem? This is painfully slow. HTTP rides on TCP, which requires a handshake for every new connection (plus a TLS handshake for HTTPS), adding significant overhead.
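
To make that overhead concrete, here's a minimal sketch of the classic HTTP check (the wordlist.txt name is just a placeholder):

# Classic HTTP-based check: one TCP + TLS handshake per name.
# 404 = no such bucket; 200, 403 or 301 all mean the bucket exists.
while read -r name; do
  code=$(curl -s -o /dev/null -w '%{http_code}' "https://s3.amazonaws.com/$name")
  [ "$code" != "404" ] && echo "$name ($code)"
done < wordlist.txt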

This is where DNS comes in. By querying for bucket names directly via DNS, we can leverage the speed of the UDP protocol. Since UDP is connectionless, we can fire off millions of queries without waiting for a handshake. The performance difference isn't just a small improvement; it's a massive leap. 🚀

Let's compare the numbers. Using a standard tool like ffuf to bruteforce via HTTP, we can check 50,000 names in about 18 seconds, hitting a rate of roughly 2,800 requests/sec. That's fast, but not fast enough. With a specialized DNS tool like pugdns, we were able to query 500,000 names in just 2.8 seconds. That's a staggering rate of nearly 180,000 requests/sec. Based on these tests, the DNS method is over 60 times faster.
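
For reference, the HTTP side of that comparison looks something like this with ffuf (matching everything except the 404s that S3 returns for non-existent buckets; thread count is illustrative):

ffuf -u https://s3.amazonaws.com/FUZZ -w wordlist.txt -mc all -fc 404 -t 100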

Why Direct DNS Queries Don't Work

Before diving into the technique, it's important to understand why the obvious approach doesn't work. You might think you could just query bucketname.s3.amazonaws.com directly via DNS and check if it resolves. The problem is that the S3 zone answers every *.s3.amazonaws.com query, so AWS returns NOERROR regardless of whether the bucket exists or not.

Let's test this with a bucket we know exists (ifood) and one that definitely doesn't exist:

user ) echo ifood.s3.amazonaws.com | zdns A | jq '.results.A.status'
"NOERROR"
user ) echo blabla1337123nonexistentofc.s3.amazonaws.com | zdns A | jq '.results.A.status'
"NOERROR"

Both queries return NOERROR, making it impossible to distinguish between existing and non-existing buckets using this approach. This is where the CNAME trick comes in handy.

The Technique: Region-Specific CNAMEs

Whenever a bucket is located in a specific region (basically, any region other than the default us-east-1), its DNS record will resolve to a CNAME that includes the region name. This is a dead giveaway that the bucket exists.

Let's look at an existing bucket, ifood.s3.amazonaws.com, which is in sa-east-1 (São Paulo). We can use zdns to see what's happening:

user ) echo ifood.s3.amazonaws.com | zdns A | jq
{
  "name": "ifood.s3.amazonaws.com",
  "results": {
    "A": {
      "data": {
        "answers": [
          {
            "answer": "s3-sa-east-1-w.amazonaws.com.",
            "class": "IN",
            "name": "ifood.s3.amazonaws.com",
            "ttl": 20436,
            "type": "CNAME"
          },
          {
            "answer": "52.95.165.32",
            "class": "IN",
            "name": "s3-sa-east-1-w.amazonaws.com",
            "ttl": 5,
            "type": "A"
          }
        ]
      },
      "status": "NOERROR"
    }
  }
}

See that first CNAME answer? s3-sa-east-1-w.amazonaws.com. The sa-east-1 part tells us two things: the bucket exists, and it's hosted in São Paulo. Simple.
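
If you want to script this, a small sketch like the one below pulls the region straight out of the answer. The jq path follows the zdns output above; the sed pattern is mine and just matches the s3-<region>-w shape:

echo ifood.s3.amazonaws.com | zdns A \
  | jq -r '.results.A.data.answers[]? | select(.type == "CNAME") | .answer' \
  | sed -n 's/^s3-\([a-z][a-z]-[a-z]*-[0-9]\)-w\.amazonaws\.com\.$/\1/p'

For an existing regional bucket this prints the region (here, sa-east-1); for a non-existent one, the generic s3-1-w CNAME doesn't match the pattern and nothing is printed.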

Now, let's try a bucket that we know doesn't exist:

user ) echo nonexistentbucket69691337-blabla.s3.amazonaws.com | zdns A | jq
{
  "name": "nonexistentbucket69691337-blabla.s3.amazonaws.com",
  "results": {
    "A": {
      "data": {
        "answers": [
          {
            "answer": "s3-1-w.amazonaws.com.",
            "class": "IN",
            "name": "nonexistentbucket69691337-blabla.s3.amazonaws.com",
            "ttl": 42781,
            "type": "CNAME"
          },
          {
            "answer": "s3-w.us-east-1.amazonaws.com.",
            "class": "IN",
            "name": "s3-1-w.amazonaws.com",
            "ttl": 262,
            "type": "CNAME"
          },
          {
            "answer": "52.217.15.52",
            "class": "IN",
            "name": "s3-w.us-east-1.amazonaws.com",
            "ttl": 5,
            "type": "A"
          }
        ]
      },
      "status": "NOERROR"
    }
  }
}

The DNS resolution path is completely different. It points to a generic CNAME, s3-1-w.amazonaws.com, which then points to the generic us-east-1 endpoint. This is the default behavior for a non-existent bucket.

The Caveat: us-east-1

This technique works great for the majority of buckets. However, there's a big blind spot: buckets in us-east-1. Because this is the default region, many buckets hosted there will resolve to the exact same generic endpoint as a non-existent bucket. So, if you see the s3-w.us-east-1.amazonaws.com CNAME, you can't be sure if the bucket exists in us-east-1 or if it doesn't exist at all.
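
One way to shrink that blind spot is a cheap second pass: take only the ambiguous us-east-1 candidates and confirm them over HTTP, where the status code is unambiguous. A sketch, with a hypothetical us-east-1_candidates.txt list:

# 404 = the bucket really doesn't exist; 403 or 200 = it exists in us-east-1.
while read -r bucket; do
  code=$(curl -s -o /dev/null -w '%{http_code}' "https://$bucket.s3.amazonaws.com/")
  [ "$code" != "404" ] && echo "$bucket ($code)"
done < us-east-1_candidates.txt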

Putting It All Together: ultra-fast bucket discovery

Now that we understand the technique, let's see how to automate it for large-scale bucket discovery. The key is to combine a good wordlist with the DNS CNAME checking approach.

Here's how to do it with pugdns, which is specifically designed for this type of DNS bruteforcing:

# cat best-dns-wordlist.txt | sed 's/$/.s3.amazonaws.com./g' > wordlist.txt
# ./pugdns -interface enp6s0 -nameservers s3_ns.txt -retries 20 -domains wordlist.txt -retry-timeout 500ms -maxbatch 300000 -output bucket_brute.jsonl
# rg -v s3-1-w.amazonaws.com bucket_brute.jsonl | rg -v us-east-1 > found_buckets.txt
# cat found_buckets.txt | sort -u | wc -l
208703
# echo We found 208703 buckets in 2 minutes :)

The beauty of this approach is that you can process hundreds of thousands of potential bucket names in just a few seconds, and you'll only get back the ones that actually exist (minus the us-east-1 caveat). This makes it perfect for reconnaissance phases where you want to quickly identify valid targets without generating excessive HTTP traffic.
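
As a quick sanity check, you can also break the hits down by region straight from the surviving JSONL lines; the regex is mine and simply matches the s3-<region>-w CNAME shape, assuming the full answer records are present in each line:

rg -o 's3-[a-z]{2}-[a-z]+-[0-9]-w' found_buckets.txt | sort | uniq -c | sort -rn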

finishing it off

Despite the limitation with us-east-1, this is a powerful and stealthy way to validate S3 buckets during recon. By simply checking for a region-specific CNAME, you can confirm a bucket's existence at a massive scale without ever touching it directly via HTTP. It's another small trick that shows how much information is hiding in plain sight. Happy hunting!