Security 📖 16 min read

Secure Your AWS Account with IAM

I found out about the $8K bill on a Tuesday morning. AWS support email, subject line: "Unusual Activity Detected." By then the miners had been running for 11 hours. Somebody — me, specifically — had pushed a .env file to a public GitHub repo three days earlier. The access keys inside had full admin privileges. No MFA anywhere. No budget alerts configured. I didn't even notice until AWS flagged it. This article is the setup I built after that week, once I stopped feeling sick about it.

🛠️ Before You Start

💻 Hardware: Any computer with a web browser
📦 Software: AWS account (Free Tier works), AWS CLI optional
⏱️ Estimated Time: 30-60 minutes

💡 The ARN typo that cost me an afternoon: The table was called user-sessions-prod. The ARN in my policy said user-session-prod. Missing one letter — the "s" in sessions. Lambda kept returning "Access Denied" with nothing useful in the error. I checked the policy document maybe fifteen times. Rebuilt the role twice. Opened a support ticket. Four hours in, I finally pasted the ARN from the DynamoDB console directly into the policy and diffed them character by character. One missing letter. IAM does not tell you which part of the ARN failed. It just says no.

Quick Security Checklist

Everything that went wrong with my account — every single item — maps to something on this list. I run through it now on every AWS account I touch, personal or work. If you do nothing else from this article, do these eight things.

  1. ☐ MFA enabled on root account
  2. ☐ Root access keys deleted (there shouldn't be any)
  3. ☐ IAM user created for daily use
  4. ☐ MFA enabled on IAM admin user
  5. ☐ No access keys in code repositories
  6. ☐ Budget alert configured
  7. ☐ CloudTrail enabled
  8. ☐ Old/unused credentials removed

The rest of this article walks through each one. Why it matters, how to set it up, what the actual console steps look like.

IAM stands for Identity and Access Management. Short version: it's the system that decides who gets to do what inside your AWS account. Every single API call — spinning up an EC2 instance, reading a file from S3, invoking a Lambda — goes through IAM first. If the permission isn't there, the call fails.

Four things you need to know:

  1. Users: identities for people (and scripts) that carry long-term credentials
  2. Groups: collections of users that share one set of permissions
  3. Roles: identities for AWS services and temporary access, with no permanent keys
  4. Policies: JSON documents that spell out which actions are allowed on which resources

Step 1: Stop Using Root

Your AWS account comes with a root user. It can do everything. Close the account, change billing, delete every resource in every region. Think of it as the god-mode account. The credentials I leaked in my incident? They were attached to a user with AdministratorAccess, which is basically root-equivalent. That's why the attackers could spin up GPU instances across four regions simultaneously.

Don't use root for anything day-to-day. Log in with it exactly three times:

  1. Right after creating the account, to enable MFA and set up your admin IAM user
  2. For the handful of tasks only root can do, like changing the support plan or closing the account
  3. In an emergency, when you've lost access to your IAM admin user

That's it. For everything else, make an IAM user.

Enable MFA on Root (Right Now)

Stop reading and do this first. Seriously. If the attackers who hit my account had faced an MFA prompt, the leaked keys alone wouldn't have been enough. Here's the process:

  1. Log in to AWS Console as root
  2. Click your account name (top right) then Security credentials
  3. Find Multi-factor authentication (MFA), click Assign MFA device
  4. Pick Authenticator app
  5. Scan the QR code with whatever authenticator you use — Google Authenticator, Authy, 1Password, doesn't matter
  6. Type two consecutive codes to confirm it's working
  7. Click Assign MFA

Now someone with your root password still can't get in without physical access to your phone. That alone would have saved me a very bad week.
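If you already have the CLI configured somewhere, you can sanity-check this later. A minimal sketch using the account summary call (any principal allowed iam:GetAccountSummary can run it):

Bash
# Returns 1 once the root user has an MFA device assigned, 0 otherwise
aws iam get-account-summary --query 'SummaryMap.AccountMFAEnabled'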

Step 2: Create an Admin User

Once root is locked down with MFA, you need a separate IAM user for your actual daily work. This is the account you'll log in with every day.

  1. Open the IAM Console
  2. Click Users, then Create user
  3. Give it a name you'll recognize — I use anurag-admin
  4. Check Provide user access to the AWS Management Console
  5. Pick a strong password (or let AWS generate one)
  6. Click Next

Attach Permissions

  1. Select Attach policies directly
  2. Search for AdministratorAccess and check the box
  3. Click through to Create user

AWS gives you a sign-in URL specific to IAM users. It looks something like 123456789012.signin.aws.amazon.com/console. Bookmark that URL. Use it from now on. The root login page is for emergencies only.
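If you'd rather script the same thing, here's a rough CLI sketch. The user name mirrors the console steps above and the password value is just a placeholder:

Bash
# Create the user, give it console access, attach the admin policy
aws iam create-user --user-name anurag-admin
aws iam create-login-profile --user-name anurag-admin \
  --password 'replace-with-a-long-random-password' --password-reset-required
aws iam attach-user-policy --user-name anurag-admin \
  --policy-arn arn:aws:iam::aws:policy/AdministratorAccess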

Enable MFA for the Admin User Too

Yes, this one too. The admin user has AdministratorAccess, which means it can do almost everything root can. MFA here is not optional.

  1. Click on the user you just created
  2. Open the Security credentials tab
  3. Under MFA, hit Assign MFA device
  4. Same steps as root — authenticator app, scan, verify, done

Step 3: Understand Least Privilege


This is the part that would have limited the damage in my case. Least privilege means: every user, every role, every service gets only the permissions it actually needs. Nothing extra. Nothing "just in case."

My leaked keys had AdministratorAccess. The attackers could do anything. If those keys had been scoped to just S3 — which is all I was actually using at the time — the miners couldn't have launched EC2 instances at all. The blast radius would have been a messed-up bucket instead of an $8K bill.

Practically, this means: your developer working on Lambda doesn't get billing access. Your CI/CD pipeline deploying to S3 doesn't get EC2 permissions. Your monitoring tool reading CloudWatch logs gets read access, not write.

Creating Limited Users

Say you need a user that only touches S3. Nothing else. Here's how:

  1. IAM, then Users, then Create user
  2. Call it s3-uploader
  3. Skip console access — this account is for scripts, not humans
  4. Attach AmazonS3FullAccess

That user can read, write, and delete S3 objects across all your buckets. But it can't launch instances, can't invoke functions, can't touch databases. If these credentials get compromised, the damage is limited to S3.
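The CLI version of the same user, as a quick sketch (no login profile is created, so there's no console access):

Bash
# Script-only user scoped to the managed S3 policy
aws iam create-user --user-name s3-uploader
aws iam attach-user-policy --user-name s3-uploader \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess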

Even More Limited: Custom Policies

Here's the thing about AmazonS3FullAccess — it covers every bucket in your account. All of them. If you've got a bucket with production data and a bucket with throwaway test files, that policy treats them the same. You probably don't want that.

So you write a custom policy instead:

  1. IAM → Policies → Create policy
  2. Switch to the JSON editor
Custom Policy
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject"
      ],
      "Resource": "arn:aws:s3:::my-specific-bucket/*"
    },
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::my-specific-bucket"
    }
  ]
}

That JSON locks the user to my-specific-bucket and three specific actions. If these credentials end up on GitHub — which, yes, happens more often than anyone admits — the damage stops at one bucket. Not your whole account. Not your whole infrastructure.
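If you want to create and attach that policy from the CLI instead of the console, a sketch looks like this. The policy name is arbitrary, and 123456789012 stands in for your own account ID:

Bash
# Save the JSON above as policy.json first
aws iam create-policy --policy-name s3-one-bucket \
  --policy-document file://policy.json
aws iam attach-user-policy --user-name s3-uploader \
  --policy-arn arn:aws:iam::123456789012:policy/s3-one-bucket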

Step 4: Access Keys (Programmatic Access)

This is the part that got me in trouble. When your code or scripts need to talk to AWS, they authenticate with access keys. Two pieces:

  1. An Access Key ID, which works like a username and isn't secret on its own
  2. A Secret Access Key, which is the actual secret; AWS shows it exactly once, at creation

Creating a key pair:

  1. IAM, then Users, then click your user
  2. Open the Security credentials tab
  3. Access keys section, click Create access key
  4. Pick your use case (CLI, application, third-party service)
  5. Copy both values somewhere safe immediately
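The CLI equivalent, assuming you're already authenticated as an admin. Same caveat as the console: the secret appears once in the response and never again:

Bash
# Create a key pair for the script-only user
aws iam create-access-key --user-name s3-uploader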

The Golden Rules

  1. Never put access keys in code, in config files that get committed, or anywhere near a repository
  2. Never share a key between people or projects; make a separate user or role instead
  3. Rotate keys regularly and delete any you aren't actively using
  4. Prefer roles over keys whenever the thing calling AWS is itself an AWS service

I broke the first one. Learn from that.

Set Up AWS CLI Safely

If you do use access keys for the CLI, configure them properly:

Bash
aws configure

It prompts you for the access key, secret key, default region, and output format. Everything gets stored in ~/.aws/. Treat that directory like a password vault — if someone copies those files, they have your keys.
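For reference, the stored credentials file looks roughly like this. The values below are the placeholder keys from the AWS documentation, not real credentials:

~/.aws/credentials
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFjEMI/K7MDENG/bPxRfiCYEXAMPLEKEY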

If you work across multiple AWS accounts (personal, work, client), use profiles:

Bash
aws configure --profile work
aws configure --profile personal

# Use a specific profile
aws s3 ls --profile work

Step 5: IAM Roles (The Right Way)

Roles are what I should have been using from the start. A role is like a user, but for AWS services — not for people. Your EC2 instance gets a role. Your Lambda function gets a role. No permanent credentials involved.

Why this matters:

  1. There are no long-lived keys to leak, commit, or rotate
  2. The credentials a role hands out are temporary and rotated automatically by AWS
  3. Revoking access is as simple as detaching the role

Example: EC2 Instance Role

You've got an EC2 instance that needs to pull files from S3. Instead of creating access keys and baking them into the instance, you create a role:

  1. IAM, then Roles, then Create role
  2. Trusted entity: AWS service
  3. Use case: EC2
  4. Click Next
  5. Attach AmazonS3ReadOnlyAccess
  6. Name it ec2-s3-reader
  7. Create the role

Then attach it to your running instance:

  1. EC2 Console, select the instance
  2. Actions, Security, Modify IAM role
  3. Pick ec2-s3-reader
  4. Save

Done. The AWS SDK on that instance automatically picks up the role credentials. No keys in config files, no keys in environment variables, no keys anywhere that someone could copy or commit. This is how it should work.
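To confirm the instance really is using the role, a quick check from a shell on the instance:

Bash
# Should report an assumed-role ARN containing ec2-s3-reader, not an IAM user
aws sts get-caller-identity
# Works with no configured keys, because the SDK falls back to the instance role
aws s3 ls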

Step 6: Groups for Easy Management

Managing permissions user by user gets painful fast once you have more than two or three people on an account. Groups fix this. You define the permissions once on the group, then just add or remove people.

  1. IAM, then User groups, then Create group
  2. Name it developers
  3. Attach the policies this group needs
  4. Add users

The groups I typically set up:

  1. admins: full access, MFA required, as few members as possible
  2. developers: access to the services the team actually builds on, no billing, no IAM
  3. read-only: auditors, dashboards, anyone who only needs to look

New hire shows up, you drop them in the right group. Somebody leaves, you pull them out. No chasing individual policies across fifteen different users.
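Here's the CLI sketch, in case you're scripting account setup. The PowerUserAccess policy is just an example of what a developers group might get, and the user name is a placeholder:

Bash
# Create the group, attach its policy once, then move people in and out
aws iam create-group --group-name developers
aws iam attach-group-policy --group-name developers \
  --policy-arn arn:aws:iam::aws:policy/PowerUserAccess
aws iam add-user-to-group --group-name developers --user-name some-developer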

Step 7: Monitor and Audit

All the permissions in the world don't help if nobody's watching. I had no monitoring when the miners hit. No alerts, no audit trail I was checking, no budget alarms. I found out because AWS emailed me. That's too late.

CloudTrail

CloudTrail records every API call in your account. Who did what, when, from which IP. New accounts usually have it turned on by default, but verify.

  1. CloudTrail, then Event history — this shows recent API calls
  2. You can search by user, resource type, event name

For anything serious, set up a trail that dumps logs to an S3 bucket. After my incident, I went back through CloudTrail and could see exactly when the attacker started launching instances — 3:47 AM on a Saturday. I was asleep.
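You can run the same kind of query from the CLI. For example, to see recent instance launches (Event history covers roughly the last 90 days):

Bash
# Who called RunInstances recently, and from where?
aws cloudtrail lookup-events \
  --lookup-attributes AttributeKey=EventName,AttributeValue=RunInstances \
  --max-results 20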

IAM Credential Report

Go to IAM, then Credential report, then Download report. You get a CSV with:

  1. When each user's password was last used and last changed
  2. Whether MFA is enabled for each user
  3. When each access key was created, last rotated, and last used

I pull this once a month now. Any user who hasn't logged in for 90 days gets disabled. Any access key that hasn't been used in 60 days gets deleted. The credentials that got me hacked had been sitting unused for weeks before I put them in that .env file. Old credentials are just attack surface waiting to be exploited.
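The same report is available from the CLI, which makes the monthly check easy to script:

Bash
# Generate the report, then download and decode it
aws iam generate-credential-report
aws iam get-credential-report --query Content --output text | base64 --decode > credential-report.csv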

AWS Budgets

This is the thing that finally caught my incident. Not CloudTrail, not any fancy monitoring — a billing alert from AWS.

  1. Billing, then Budgets, then Create budget
  2. Type: Cost budget
  3. Set your monthly ceiling — $50 is reasonable for personal accounts
  4. Add alerts at 50%, 80%, and 100% of that number

When GPU instances started spinning up across four regions at once, the spending spiked immediately. A budget alert at $20 would have woken me up eight hours earlier than AWS support did. Eight hours of mining at those instance sizes is real money.
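Budgets can be created from the CLI too, though the console is honestly easier. A sketch, with the account ID and email address as placeholders (double-check the JSON shapes against the budgets CLI reference before relying on this):

Bash
# $50 monthly cost budget with an email alert at 80% of actual spend
aws budgets create-budget --account-id 123456789012 \
  --budget '{"BudgetName":"monthly-ceiling","BudgetLimit":{"Amount":"50","Unit":"USD"},"TimeUnit":"MONTHLY","BudgetType":"COST"}' \
  --notifications-with-subscribers '[{"Notification":{"NotificationType":"ACTUAL","ComparisonOperator":"GREATER_THAN","Threshold":80},"Subscribers":[{"SubscriptionType":"EMAIL","Address":"you@example.com"}]}]'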

Total damage from one .env file in a public repo: $8,247. Eleven hours of crypto mining across four AWS regions on GPU instances I didn't know existed in my account. AWS waived most of it after I filed a support case and showed them the CloudTrail logs proving the activity was unauthorized. They didn't have to.
