I have deleted an S3 bucket exactly once and regretted it immediately. Back in 2022, I tore down a staging environment, and within a few hours someone else had claimed the same bucket name. A CloudFormation stack in another account kept happily writing logs to a bucket I no longer controlled. Not my favorite Friday.
AWS has finally shipped a real fix: account regional namespaces for S3 general purpose buckets. It took about seven years, which feels both absurd and very on-brand.
The Problem in 30 Seconds
S3 bucket names are globally unique across all AWS accounts. Delete a bucket and anybody can grab that name. And if you use a predictable naming pattern like myapp-us-east-1, someone can pre-register the equivalent name in a region you have not expanded to yet.
That is bucketsquatting. It sounds like one of those edge-case security problems right up until it happens to you.
I have seen this cause real messes: Terraform state files landing in the wrong bucket because somebody reused a workspace, CI pipelines pushing artifacts to a hijacked bucket name, and CloudFormation templates with region names baked in turning into attack paths.
The New Namespace Pattern
The fix is a new naming pattern that ties the bucket to your account:
<prefix>-<account-id>-<region>-an
For example:
myapp-123456789012-us-west-2-an
The -an suffix stands for “account namespace.” If another account tries to create a bucket that matches your account namespace pattern, AWS rejects it with InvalidBucketNamespace. It is simple, effective, and the kind of thing S3 should have had years ago.
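The pattern is regular enough to build and check mechanically. Here is a small helper I would keep next to any tooling that names buckets; the function names and the regex are my own, not anything AWS ships:

```python
import re

# <prefix>-<account-id>-<region>-an, per the pattern above.
# Prefix: lowercase letters, digits, hyphens; account: 12 digits;
# region: the usual AWS shape like us-west-2 or eu-west-1.
NAMESPACE_RE = re.compile(
    r"^(?P<prefix>[a-z0-9][a-z0-9-]*)"
    r"-(?P<account>\d{12})"
    r"-(?P<region>[a-z]{2}(-[a-z]+)+-\d)"
    r"-an$"
)

def namespaced_bucket_name(prefix: str, account_id: str, region: str) -> str:
    """Assemble an account-regional bucket name and validate the result."""
    name = f"{prefix}-{account_id}-{region}-an"
    if not NAMESPACE_RE.match(name):
        raise ValueError(f"not a valid account-namespace bucket name: {name}")
    return name

def is_namespaced(name: str) -> bool:
    """Check whether an existing bucket name already follows the pattern."""
    return NAMESPACE_RE.match(name) is not None
```

A check like is_namespaced is also handy for auditing: run it over the output of aws s3api list-buckets and you get an instant list of buckets still sitting in the global namespace.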
Creating Namespaced Buckets with the CLI
aws s3api create-bucket \
--bucket myapp-123456789012-us-east-1-an \
--bucket-namespace account-regional \
--region us-east-1
For regions outside us-east-1, you still need the location constraint:
aws s3api create-bucket \
--bucket myapp-123456789012-eu-west-1-an \
--bucket-namespace account-regional \
--region eu-west-1 \
--create-bucket-configuration LocationConstraint=eu-west-1
Terraform Example
If you manage S3 buckets with Terraform, this is the kind of change worth baking into your defaults:
data "aws_caller_identity" "current" {}
data "aws_region" "current" {}
resource "aws_s3_bucket" "artifacts" {
bucket = "artifacts-${data.aws_caller_identity.current.account_id}-${data.aws_region.current.name}-an"
bucket_namespace = "account-regional"
}
I would wrap this in a module so every team in the org uses it the same way:
module "s3_bucket" {
source = "./modules/s3-namespaced"
prefix = "artifacts"
}
The module just handles the naming convention for you. Less room for somebody to forget part of the pattern.
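For what it is worth, the module itself can stay tiny. A sketch of what I would put in it, using the bucket_namespace argument from the example above; the file path, variable, and output names are my own:

```hcl
# modules/s3-namespaced/main.tf — one possible shape for the module.
variable "prefix" {
  type        = string
  description = "Application prefix for the bucket name."
}

data "aws_caller_identity" "current" {}
data "aws_region" "current" {}

resource "aws_s3_bucket" "this" {
  # Assembles <prefix>-<account-id>-<region>-an so callers never
  # build the name by hand.
  bucket           = "${var.prefix}-${data.aws_caller_identity.current.account_id}-${data.aws_region.current.name}-an"
  bucket_namespace = "account-regional"
}

output "bucket_name" {
  value = aws_s3_bucket.this.bucket
}
```

Callers only ever pass a prefix, which is exactly the point: the one place the convention lives is the module.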
Enforcing It Across the Org with SCPs
AWS also added a new condition key, s3:x-amz-bucket-namespace, that you can use in Service Control Policies. This is the kind of guardrail I would put in place right away:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "RequireAccountNamespace",
"Effect": "Deny",
"Action": "s3:CreateBucket",
"Resource": "*",
"Condition": {
"StringNotEquals": {
"s3:x-amz-bucket-namespace": "account-regional"
}
}
}
]
}
This blocks anyone in the org from creating new buckets without the namespace. Existing buckets stay as they are, but nobody gets to skip the protection going forward.
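One subtlety worth knowing: IAM's negated operators like StringNotEquals evaluate to true when the condition key is missing from the request entirely, which is why a plain CreateBucket with no namespace at all still hits the Deny. A toy model of just that condition (my own sketch, not IAM's actual evaluation engine) makes the behavior concrete:

```python
def scp_denies_create_bucket(request_context: dict) -> bool:
    """Toy model of the Deny statement above, covering only the
    StringNotEquals condition. Negated operators match when the key
    is absent, so a request that omits the namespace is denied too."""
    value = request_context.get("s3:x-amz-bucket-namespace")
    return value != "account-regional"
```

So both a wrong value and no value at all get denied; only an explicit account-regional request passes.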
CloudFormation Migration
If you’re running CloudFormation, the change is small:
Resources:
ArtifactsBucket:
Type: AWS::S3::Bucket
Properties:
BucketName: !Sub "artifacts-${AWS::AccountId}-${AWS::Region}-an"
BucketNamespace: "account-regional"
The pseudo parameters AWS::AccountId and AWS::Region keep it clean. No hardcoded values, no weird string juggling.
The Migration Reality
This does not magically fix existing buckets. If you have a bucket called myapp-us-east-1 that has been around for three years, it is still in the global namespace and still exposed if you ever delete it.
The migration path is simple to describe, but a bit of a slog in practice:
- Create the new namespaced bucket
- Sync data with aws s3 sync
- Update all references (Terraform state, application configs, CI pipelines)
- Keep the old bucket around for a while to catch anything you missed
- Eventually delete the old one
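When you are doing this for more than a couple of buckets, I would generate the command sequence rather than type it. A small helper (my own; it emits commands for review instead of executing anything, and the flags follow the CLI examples earlier in the post):

```python
def migration_commands(prefix: str, account_id: str,
                       region: str, old_bucket: str) -> list[str]:
    """Emit the CLI steps for migrating one bucket to the namespaced
    pattern. Commands are returned as strings for review, not run."""
    new_bucket = f"{prefix}-{account_id}-{region}-an"
    create = (f"aws s3api create-bucket --bucket {new_bucket} "
              f"--bucket-namespace account-regional --region {region}")
    if region != "us-east-1":
        # Everywhere outside us-east-1 still needs the location constraint.
        create += f" --create-bucket-configuration LocationConstraint={region}"
    return [
        create,
        f"aws s3 sync s3://{old_bucket} s3://{new_bucket}",
        # References (Terraform state, app configs, CI) get updated by
        # hand between the sync and the eventual removal:
        f"aws s3 rb s3://{old_bucket}",
    ]
```

The gap between the sync and the rb is where "keep the old bucket around for a while" lives; I would not script that part.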
I am not rushing this for every bucket. For production buckets that are stable and not going anywhere, the risk stays low as long as you never delete them. I would start with buckets used by ephemeral environments, because those are the ones most likely to get recreated and squatted.
What About GCP and Azure?
Google Cloud Storage supports domain-named buckets tied to verified domain ownership, which partially addresses the problem. Azure Blob Storage scopes containers under storage accounts, so the global uniqueness problem is much smaller there.
AWS was the last major cloud provider to really address this. Better late than never, I guess.
My Takeaway
If you are creating new S3 buckets, use the namespace pattern from now on. Update your Terraform modules, CloudFormation templates, and CDK constructs. Add the SCP so the rule is enforced instead of living in a wiki page nobody reads.
For existing buckets, make a list of the ones tied to ephemeral environments and migrate those first. The rest can wait, but I would still get them onto the backlog so they do not disappear from view.
Seven years is a long time to wait for a fix, but at least this one is clean and easy to explain.