💡 The ARN typo that cost me an afternoon: The table was called user-sessions-prod. The ARN in my policy said user-session-prod. Missing one letter — the "s" in sessions. Lambda kept returning "Access Denied" with nothing useful in the error. I checked the policy document maybe fifteen times. Rebuilt the role twice. Opened a support ticket. Four hours in, I finally pasted the ARN from the DynamoDB console directly into the policy and diffed them character by character. One missing letter. IAM does not tell you which part of the ARN failed. It just says no.
Quick Security Checklist
Everything that went wrong with my account — every single item — maps to something on this list. I run through it now on every AWS account I touch, personal or work. If you do nothing else from this article, do these eight things.
- ☐ MFA enabled on root account
- ☐ Root access keys deleted (there shouldn't be any)
- ☐ IAM user created for daily use
- ☐ MFA enabled on IAM admin user
- ☐ No access keys in code repositories
- ☐ Budget alert configured
- ☐ CloudTrail enabled
- ☐ Old/unused credentials removed
The rest of this article walks through each one. Why it matters, how to set it up, what the actual console steps look like.
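Before reading on, you can spot-check the first two items from the command line. `aws iam get-account-summary` reports whether the root user has MFA and whether root access keys exist. A minimal sketch, assuming the AWS CLI is already configured for the account:

```bash
# AccountMFAEnabled        -> 1 means the root user has MFA turned on
# AccountAccessKeysPresent -> 0 means no root access keys exist (good)
aws iam get-account-summary \
  --query 'SummaryMap.[AccountMFAEnabled,AccountAccessKeysPresent]'
```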
IAM stands for Identity and Access Management. Short version: it's the system that decides who gets to do what inside your AWS account. Every single API call — spinning up an EC2 instance, reading a file from S3, invoking a Lambda — goes through IAM first. If the permission isn't there, the call fails.
Four things you need to know:
- Users — people. You, your coworker, the intern.
- Groups — bundles of users who need the same access
- Roles — permissions you hand to AWS services, not humans. Your Lambda function gets a role. Your EC2 instance gets a role.
- Policies — JSON documents that spell out exactly what's allowed and what isn't
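If you want to see what already exists in your account, each of the four has a list command in the CLI. A quick read-only look, assuming your credentials are allowed to read IAM:

```bash
# Users, groups, roles, and any custom (customer-managed) policies
aws iam list-users    --query 'Users[].UserName'
aws iam list-groups   --query 'Groups[].GroupName'
aws iam list-roles    --query 'Roles[].RoleName'
aws iam list-policies --scope Local --query 'Policies[].PolicyName'
```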
Step 1: Stop Using Root
Your AWS account comes with a root user. It can do everything. Close the account, change billing, delete every resource in every region. Think of it as the god-mode account. The credentials I leaked in my incident? They were attached to a user with AdministratorAccess, which is basically root-equivalent. That's why the attackers could spin up GPU instances across four regions simultaneously.
Don't use root for anything day-to-day. There are exactly three reasons to log in with it:
- When you first create the account
- If you need to change billing or payment info
- If you're closing the account
That's it. For everything else, make an IAM user.
Enable MFA on Root (Right Now)
Stop reading and do this first. Seriously. If the attackers who hit my account had faced an MFA prompt, the leaked keys alone wouldn't have been enough. Here's the process:
- Log in to AWS Console as root
- Click your account name (top right) then Security credentials
- Find Multi-factor authentication (MFA), click Assign MFA device
- Pick Authenticator app
- Scan the QR code with whatever authenticator you use — Google Authenticator, Authy, 1Password, doesn't matter
- Type two consecutive codes to confirm it's working
- Click Assign MFA
Now someone with your root password still can't get in without physical access to your phone. That alone would have saved me a very bad week.
Step 2: Create an Admin User
Once root is locked down with MFA, you need a separate IAM user for your actual daily work. This is the account you'll log in with every day.
- Open the IAM Console
- Click Users, then Create user
- Give it a name you'll recognize — I use `anurag-admin`
- Check Provide user access to the AWS Management Console
- Pick a strong password (or let AWS generate one)
- Click Next
Attach Permissions
- Select Attach policies directly
- Search for AdministratorAccess and check the box
- Click through to Create user
AWS gives you a sign-in URL specific to IAM users. It looks something like `123456789012.signin.aws.amazon.com/console`. Bookmark that URL. Use it from now on. The root login page is for emergencies only.
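If you ever need to script this, say for a second account you already have admin CLI access to, the same steps look roughly like this. The username and temporary password are placeholders:

```bash
# Create the daily-use admin user (placeholder name)
aws iam create-user --user-name anurag-admin

# Give it a console password and force a change on first login
aws iam create-login-profile --user-name anurag-admin \
  --password 'REPLACE-WITH-A-STRONG-TEMPORARY-PASSWORD' \
  --password-reset-required

# Same AdministratorAccess managed policy as the console steps above
aws iam attach-user-policy --user-name anurag-admin \
  --policy-arn arn:aws:iam::aws:policy/AdministratorAccess
```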
Enable MFA for the Admin User Too
Yes, this one too. The admin user has AdministratorAccess, which means it can do almost everything root can. MFA here is not optional.
- Click on the user you just created
- Open the Security credentials tab
- Under MFA, hit Assign MFA device
- Same steps as root — authenticator app, scan, verify, done
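You can also do this step without the console if you prefer. A sketch, where the device name, account ID, and the two codes are placeholders:

```bash
# Create a virtual MFA device and save its QR code locally
aws iam create-virtual-mfa-device \
  --virtual-mfa-device-name anurag-admin-mfa \
  --outfile mfa-qr.png --bootstrap-method QRCodePNG

# Scan mfa-qr.png with your authenticator app, then confirm
# with two consecutive codes from the app
aws iam enable-mfa-device --user-name anurag-admin \
  --serial-number arn:aws:iam::123456789012:mfa/anurag-admin-mfa \
  --authentication-code1 123456 --authentication-code2 654321
```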
Step 3: Understand Least Privilege
This is the part that would have limited the damage in my case. Least privilege means: every user, every role, every service gets only the permissions it actually needs. Nothing extra. Nothing "just in case."
My leaked keys had AdministratorAccess. The attackers could do anything. If those keys had been scoped to just S3 — which is all I was actually using at the time — the miners couldn't have launched EC2 instances at all. The blast radius would have been a messed-up bucket instead of an $8K bill.
Practically, this means: your developer working on Lambda doesn't get billing access. Your CI/CD pipeline deploying to S3 doesn't get EC2 permissions. Your monitoring tool reading CloudWatch logs gets read access, not write.
Creating Limited Users
Say you need a user that only touches S3. Nothing else. Here's how:
- IAM, then Users, then Create user
- Call it `s3-uploader`
- Skip console access — this account is for scripts, not humans
- Attach AmazonS3FullAccess
That user can read, write, and delete S3 objects across all your buckets. But it can't launch instances, can't invoke functions, can't touch databases. If these credentials get compromised, the damage is limited to S3.
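For the CLI-inclined, the same limited user takes two commands. `s3-uploader` is just the name from the steps above:

```bash
# Script-only identity: no console password, no login profile
aws iam create-user --user-name s3-uploader

# S3 everywhere, nothing else
aws iam attach-user-policy --user-name s3-uploader \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess
```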
Even More Limited: Custom Policies
Here's the thing about AmazonS3FullAccess — it covers every bucket in your account. All of them. If you've got a bucket with production data and a bucket with throwaway test files, that policy treats them the same. You probably don't want that.
So you write a custom policy instead:
- IAM → Policies → Create policy
- Switch to the JSON editor
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject"
      ],
      "Resource": "arn:aws:s3:::my-specific-bucket/*"
    },
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::my-specific-bucket"
    }
  ]
}
That JSON locks the user to my-specific-bucket and three specific actions. If these credentials end up on GitHub — which, yes, happens more often than anyone admits — the damage stops at one bucket. Not your whole account. Not your whole infrastructure.
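If you save that JSON as policy.json, wiring it up from the CLI looks roughly like this. The policy name is one I made up, and the account ID in the ARN will be your own:

```bash
# Create the customer-managed policy from the JSON above
aws iam create-policy --policy-name s3-one-bucket-rw \
  --policy-document file://policy.json

# Swap the broad managed policy for the scoped one
aws iam attach-user-policy --user-name s3-uploader \
  --policy-arn arn:aws:iam::123456789012:policy/s3-one-bucket-rw
aws iam detach-user-policy --user-name s3-uploader \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess
```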
Step 4: Access Keys (Programmatic Access)
This is the part that got me in trouble. When your code or scripts need to talk to AWS, they authenticate with access keys. Two pieces:
- Access Key ID — the public part, looks like `AKIAIOSFODNN7EXAMPLE`
- Secret Access Key — the private part. AWS shows it exactly once. If you lose it, you generate a new one.
Creating a key pair:
- IAM, then Users, then click your user
- Open the Security credentials tab
- Access keys section, click Create access key
- Pick your use case (CLI, application, third-party service)
- Copy both values somewhere safe immediately
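The CLI equivalent, plus a command that tells you whether a key is even still being used. The key ID below is AWS's documentation example, not a real one:

```bash
# Create the key pair; the SecretAccessKey in the response is shown only once
aws iam create-access-key --user-name s3-uploader

# Later: when was this key last used, and by which service?
aws iam get-access-key-last-used --access-key-id AKIAIOSFODNN7EXAMPLE
```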
The Golden Rules
I broke the first one. Learn from that.
- Never commit keys to Git. Add `.env` to your `.gitignore` before you write a single line of code. Bots scan public repos constantly. My keys were live on GitHub for maybe three days before the mining started. Three days.
- Never hardcode keys in source files. Use environment variables. Or better, use AWS Secrets Manager.
- Rotate keys every 90 days. Old keys are a liability sitting in config files you forgot about.
- Use roles instead of keys wherever you can. Roles don't have permanent credentials to leak. More on this in the next section.
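On the rotation point: it sounds tedious, but it is really four CLI calls. A rough version, again using the documentation example key ID rather than a real one:

```bash
# 1. See which keys exist and how old they are
aws iam list-access-keys --user-name s3-uploader

# 2. Create the replacement key and update your app or config to use it
aws iam create-access-key --user-name s3-uploader

# 3. Disable (don't delete yet) the old key and watch for breakage
aws iam update-access-key --user-name s3-uploader \
  --access-key-id AKIAIOSFODNN7EXAMPLE --status Inactive

# 4. Once nothing complains for a few days, delete it for good
aws iam delete-access-key --user-name s3-uploader \
  --access-key-id AKIAIOSFODNN7EXAMPLE
```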
Set Up AWS CLI Safely
If you do use access keys for the CLI, configure them properly:
aws configure
It prompts you for the access key, secret key, default region, and output format. Everything gets stored in ~/.aws/ (the keys themselves in ~/.aws/credentials). Treat that directory like a password vault — if someone copies those files, they have your keys.
If you work across multiple AWS accounts (personal, work, client), use profiles:
aws configure --profile work
aws configure --profile personal
# Use a specific profile
aws s3 ls --profile work
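And since "use environment variables" came up in the golden rules, these are the standard variables the CLI and SDKs read, nothing custom:

```bash
# Point the CLI and SDKs at a named profile for this shell session
export AWS_PROFILE=work

# Or supply credentials from the environment (e.g. injected by CI or a
# secret manager) so nothing is hardcoded in source files
export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
export AWS_SECRET_ACCESS_KEY='...'   # never committed, never hardcoded

# Sanity check: which identity am I actually using right now?
aws sts get-caller-identity
```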
Step 5: IAM Roles (The Right Way)
Roles are what I should have been using from the start. A role is like a user, but for AWS services — not for people. Your EC2 instance gets a role. Your Lambda function gets a role. No permanent credentials involved.
Why this matters:
- There are no keys to accidentally push to GitHub
- Credentials are temporary and rotate automatically
- AWS handles the credential lifecycle, not you
Example: EC2 Instance Role
You've got an EC2 instance that needs to pull files from S3. Instead of creating access keys and baking them into the instance, you create a role:
- IAM, then Roles, then Create role
- Trusted entity: AWS service
- Use case: EC2
- Click Next
- Attach AmazonS3ReadOnlyAccess
- Name it `ec2-s3-reader`
- Create the role
Then attach it to your running instance:
- EC2 Console, select the instance
- Actions, Security, Modify IAM role
- Pick `ec2-s3-reader`
- Save
Done. The AWS SDK on that instance automatically picks up the role credentials. No keys in config files, no keys in environment variables, no keys anywhere that someone could copy or commit. This is how it should work.
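Here's the same setup from the CLI, roughly. The one extra wrinkle is that EC2 attaches roles through an instance profile, and the instance ID below is a placeholder:

```bash
# Trust policy: only the EC2 service may assume this role
aws iam create-role --role-name ec2-s3-reader \
  --assume-role-policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }]
  }'

aws iam attach-role-policy --role-name ec2-s3-reader \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess

# EC2 picks roles up via an instance profile
aws iam create-instance-profile --instance-profile-name ec2-s3-reader
aws iam add-role-to-instance-profile \
  --instance-profile-name ec2-s3-reader --role-name ec2-s3-reader

# Attach it to a running instance (placeholder instance ID)
aws ec2 associate-iam-instance-profile \
  --instance-id i-0123456789abcdef0 \
  --iam-instance-profile Name=ec2-s3-reader
```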
Step 6: Groups for Easy Management
Managing permissions user by user gets painful fast once you have more than two or three people on an account. Groups fix this. You define the permissions once on the group, then just add or remove people.
- IAM, then User groups, then Create group
- Name it `developers`
- Attach the policies this group needs
- Add users
The groups I typically set up:
- admins — full access, two or three people max
- developers — Lambda, API Gateway, DynamoDB, CloudWatch
- data-team — S3, Glue, Athena, Redshift
- billing — Cost Explorer and Budgets only
New hire shows up, you drop them in the right group. Somebody leaves, you pull them out. No chasing individual policies across fifteen different users.
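Setting up the developers group from the CLI looks roughly like this. The managed policy names are AWS's own, but double-check they match what your team actually needs (I left API Gateway out to keep it short):

```bash
aws iam create-group --group-name developers

# Roughly "Lambda, DynamoDB, CloudWatch read" from the list above
aws iam attach-group-policy --group-name developers \
  --policy-arn arn:aws:iam::aws:policy/AWSLambda_FullAccess
aws iam attach-group-policy --group-name developers \
  --policy-arn arn:aws:iam::aws:policy/AmazonDynamoDBFullAccess
aws iam attach-group-policy --group-name developers \
  --policy-arn arn:aws:iam::aws:policy/CloudWatchReadOnlyAccess

# New hire shows up / somebody leaves
aws iam add-user-to-group      --group-name developers --user-name new-dev
aws iam remove-user-from-group --group-name developers --user-name new-dev
```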
Step 7: Monitor and Audit
All the permissions in the world don't help if nobody's watching. I had no monitoring when the miners hit. No alerts, no audit trail I was checking, no budget alarms. I found out because AWS emailed me. That's too late.
CloudTrail
CloudTrail records every API call in your account. Who did what, when, from which IP. The 90-day event history is on by default for every account; anything beyond that needs a trail you set up yourself.
- CloudTrail, then Event history — this shows recent API calls
- You can search by user, resource type, event name
For anything serious, set up a trail that dumps logs to an S3 bucket. After my incident, I went back through CloudTrail and could see exactly when the attacker started launching instances — 3:47 AM on a Saturday. I was asleep.
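From the CLI, a multi-region trail plus the kind of lookup that shows who launched instances and when. The bucket name is a placeholder, and the bucket needs a policy that lets CloudTrail write to it:

```bash
# A trail that keeps logs in S3 beyond the built-in event history
aws cloudtrail create-trail --name account-audit \
  --s3-bucket-name my-cloudtrail-logs-bucket --is-multi-region-trail
aws cloudtrail start-logging --name account-audit

# Who called RunInstances recently, and when?
aws cloudtrail lookup-events \
  --lookup-attributes AttributeKey=EventName,AttributeValue=RunInstances \
  --max-results 20
```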
IAM Credential Report
Go to IAM, then Credential report, then Download report. You get a CSV with:
- Every user and when they last logged in
- Every access key and when it was last used
- Who has MFA and who doesn't
I pull this once a month now. Any user who hasn't logged in for 90 days gets disabled. Any access key that hasn't been used in 60 days gets deleted. The credentials that got me hacked had been sitting unused for weeks before I put them in that .env file. Old credentials are just attack surface waiting to be exploited.
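The same report is available from the CLI. It comes back base64-encoded, so decode it before opening:

```bash
# Kick off report generation (takes a few seconds the first time)
aws iam generate-credential-report

# Download and decode into a CSV you can open in a spreadsheet
# (the decode flag may be -d, -D, or --decode depending on your base64)
aws iam get-credential-report --query Content --output text \
  | base64 --decode > credential-report.csv
```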
AWS Budgets
This is the thing that finally caught my incident. Not CloudTrail, not any fancy monitoring — a billing alert from AWS.
- Billing, then Budgets, then Create budget
- Type: Cost budget
- Set your monthly ceiling — $50 is reasonable for personal accounts
- Add alerts at 50%, 80%, and 100% of that number
When GPU instances started spinning up across four regions at once, the spending spiked immediately. A budget alert at $20 would have woken me up eight hours earlier than AWS support did. Eight hours of mining at those instance sizes is real money.
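The console route is easier here, but for completeness, the CLI shape is roughly this. Account ID, amount, and email are placeholders, and the field names come from the Budgets API, so check them against the current docs:

```bash
# budget.json: a $50/month cost ceiling
cat > budget.json <<'EOF'
{
  "BudgetName": "monthly-cap",
  "BudgetType": "COST",
  "TimeUnit": "MONTHLY",
  "BudgetLimit": { "Amount": "50", "Unit": "USD" }
}
EOF

# notifications.json: email me at 80% of actual spend
cat > notifications.json <<'EOF'
[
  {
    "Notification": {
      "NotificationType": "ACTUAL",
      "ComparisonOperator": "GREATER_THAN",
      "Threshold": 80,
      "ThresholdType": "PERCENTAGE"
    },
    "Subscribers": [
      { "SubscriptionType": "EMAIL", "Address": "you@example.com" }
    ]
  }
]
EOF

aws budgets create-budget --account-id 123456789012 \
  --budget file://budget.json \
  --notifications-with-subscribers file://notifications.json
```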
Total damage from one .env file in a public repo: $8,247. Eleven hours of crypto mining across four AWS regions on GPU instances I didn't know existed in my account. AWS waived most of it after I filed a support case and showed them the CloudTrail logs proving the activity was unauthorized. They didn't have to.