Have you ever wondered who did what inside your AWS account? Maybe a bucket was deleted, a role was modified, or an instance was spun up out of nowhere. AWS gives you a built-in security camera for this exact purpose, and it's called CloudTrail.
In this post, we'll walk through setting up CloudTrail step by step so you can record, store, and review every API action that happens in your account.
What Is CloudTrail, and Why Should You Care?
Think of your AWS account as a house. Every door or window is an API, and every time someone opens one, CloudTrail makes a note of it.
This means you always know who did something, what they did, when they did it, and where they did it from.
By default, CloudTrail keeps a short history of events. But to really benefit, you'll want to create a trail that stores logs in Amazon S3, streams them to CloudWatch Logs, and gives you the ability to search or alert on unusual behavior.
The Services You'll Use
Log into the AWS Management Console, open CloudTrail, and navigate to Trails → Create trail.
- Name it something easy to remember, like acct-main-trail.
- Apply it to all regions so you don't miss activity outside your home region.
- Under log events, turn on Management events and select both read and write.
- Leave data events off for now unless you want deep logging for S3 or Lambda.
Conceptually, the trail configuration looks like this (the comments are for illustration only; JSON itself doesn't allow them):

{
  "TrailName": "acct-main-trail",
  "IsMultiRegionTrail": true,
  "IncludeGlobalServiceEvents": true,
  "EventSelectors": [
    {
      "ReadWriteType": "All",           // capture both reads and writes
      "IncludeManagementEvents": true,
      "DataResources": []               // add S3/Lambda ARNs here later
    }
  ]
}
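If you prefer the CLI, the same setup can be sketched with the commands below. The trail and bucket names follow the examples used in this post, and the bucket is assumed to already exist with a policy that allows CloudTrail to write to it (covered in the next section).

```shell
# Create a multi-region trail (assumes the S3 bucket already exists
# with a CloudTrail-friendly bucket policy attached).
aws cloudtrail create-trail \
  --name acct-main-trail \
  --s3-bucket-name ct-logs-123456789012-us-east-1 \
  --is-multi-region-trail \
  --include-global-service-events

# Event selectors are configured in a separate API call.
aws cloudtrail put-event-selectors \
  --trail-name acct-main-trail \
  --event-selectors '[{"ReadWriteType":"All","IncludeManagementEvents":true,"DataResources":[]}]'

# Newly created trails do not record anything until you start them.
aws cloudtrail start-logging --name acct-main-trail
```

Note that, unlike the console wizard, the CLI splits trail creation, event selectors, and logging activation into three separate calls.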
CloudTrail needs somewhere to drop its files. During trail creation:
- Choose Create new S3 bucket.
- Give it a unique name: ct-logs-<account-id>-<region>.
- Keep Block all public access enabled.
- Leave the default encryption, or step up to KMS if you need stronger control.
A consistent bucket naming convention like ct-logs-123456789012-us-east-1
makes cross-account log management much easier as your organization grows.
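If you'd rather create the bucket yourself, here is a rough CLI equivalent, using the example account ID and region from above. The bucket policy shown is the standard shape CloudTrail requires: permission to check the bucket ACL plus permission to write objects under the AWSLogs/ prefix.

```shell
# Example values from this post: account 123456789012, region us-east-1.
BUCKET=ct-logs-123456789012-us-east-1
ACCOUNT=123456789012

aws s3api create-bucket --bucket "$BUCKET" --region us-east-1

# Keep Block all public access enabled.
aws s3api put-public-access-block \
  --bucket "$BUCKET" \
  --public-access-block-configuration \
  BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true

# CloudTrail needs to read the bucket ACL and write log objects.
aws s3api put-bucket-policy --bucket "$BUCKET" --policy "{
  \"Version\": \"2012-10-17\",
  \"Statement\": [
    {
      \"Sid\": \"AWSCloudTrailAclCheck\",
      \"Effect\": \"Allow\",
      \"Principal\": {\"Service\": \"cloudtrail.amazonaws.com\"},
      \"Action\": \"s3:GetBucketAcl\",
      \"Resource\": \"arn:aws:s3:::$BUCKET\"
    },
    {
      \"Sid\": \"AWSCloudTrailWrite\",
      \"Effect\": \"Allow\",
      \"Principal\": {\"Service\": \"cloudtrail.amazonaws.com\"},
      \"Action\": \"s3:PutObject\",
      \"Resource\": \"arn:aws:s3:::$BUCKET/AWSLogs/$ACCOUNT/*\",
      \"Condition\": {\"StringEquals\": {\"s3:x-amz-acl\": \"bucket-owner-full-control\"}}
    }
  ]
}"
```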
These are optional but worth doing from the start:
- Log file validation — proves your logs weren't tampered with after the fact. One checkbox, no reason to skip it.
- SSE-KMS encryption — worth enabling if you're in a regulated environment. AWS will create a new key and policy for you automatically.
- CloudTrail Lake — skip for now unless you're ready to dive into advanced analysis. It's powerful but adds complexity.
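Log file validation can also be toggled and checked from the CLI. The ARN and start time below are placeholders; substitute your own trail ARN and the window you want to verify.

```shell
# Turn on log file validation for an existing trail.
aws cloudtrail update-trail \
  --name acct-main-trail \
  --enable-log-file-validation

# Later, confirm that delivered log files haven't been modified.
aws cloudtrail validate-logs \
  --trail-arn arn:aws:cloudtrail:us-east-1:123456789012:trail/acct-main-trail \
  --start-time 2024-01-01T00:00:00Z
```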
If you want real-time alerts, such as "notify me if CloudTrail is turned off," streaming to CloudWatch is the way to do it.
- Expand the CloudWatch Logs section during trail creation.
- Create a new log group, for example /aws/cloudtrail/acct-main-trail.
- Let AWS create the suggested IAM role for permissions.
# Create the log group manually if needed
aws logs create-log-group \
--log-group-name "/aws/cloudtrail/acct-main-trail" \
--region us-east-1
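If you created the log group yourself, you also need to attach it to the trail. The role ARN below is a placeholder for whatever role the console generated (or one you manage yourself); it must allow CloudTrail to write to the log group.

```shell
# Wire the log group to the trail. Note the trailing ":*" on the
# log group ARN, which CloudTrail expects.
aws cloudtrail update-trail \
  --name acct-main-trail \
  --cloud-watch-logs-log-group-arn "arn:aws:logs:us-east-1:123456789012:log-group:/aws/cloudtrail/acct-main-trail:*" \
  --cloud-watch-logs-role-arn "arn:aws:iam::123456789012:role/CloudTrail_CloudWatchLogs_Role"
```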
Click Create trail and your account is now under watch. CloudTrail will begin delivering log files to your S3 bucket within a few minutes of the first API activity.
Don't just assume it's working. Verify it:
- Open S3 in a new tab and create, then delete, a test bucket.
- Go back to CloudTrail → Event history and filter by s3.amazonaws.com. Within minutes your actions will appear.
- Check your S3 bucket for .gz log files organized in a time-based folder structure.
- If you enabled CloudWatch, open the log group and confirm new log streams are appearing.
If you don't see events within 15 minutes, verify the S3 bucket policy allows CloudTrail to write to it. A misconfigured bucket policy is the most common reason logs stop delivering silently.
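A quick way to diagnose this from the CLI is to check the trail's status, which reports both whether logging is active and the most recent delivery error, if any:

```shell
# IsLogging should be true; a non-empty LatestDeliveryError usually
# points to an S3 bucket policy or permissions problem.
aws cloudtrail get-trail-status --name acct-main-trail
```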
Cleanup If This Was Just Practice
- Delete the trail in CloudTrail.
- Delete the CloudWatch log group if you created one.
- Empty and delete the S3 bucket.
- Schedule deletion of the KMS key if you created one.
In real environments, CloudTrail should stay on permanently. Only clean it up in a sandbox to avoid clutter or small ongoing costs.
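The cleanup steps above can be sketched from the CLI like this, again using the example names from earlier in the post (replace the key ID placeholder with your own if you created a KMS key):

```shell
aws cloudtrail delete-trail --name acct-main-trail

aws logs delete-log-group \
  --log-group-name "/aws/cloudtrail/acct-main-trail"

# --force empties the bucket before removing it.
aws s3 rb s3://ct-logs-123456789012-us-east-1 --force

# KMS keys can't be deleted immediately; 7 days is the minimum window.
aws kms schedule-key-deletion --key-id <your-key-id> --pending-window-in-days 7
```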
Optional Enhancements
Once your baseline is solid, here's where to go next:
- Data events — add logging for critical S3 buckets or Lambda functions to capture object-level activity, not just management plane calls.
- CloudTrail Insights — detects unusual spikes in API activity automatically, useful for catching credential misuse early.
- CloudWatch alarms — set up metric filters and alarms for high-risk events like root account logins or IAM policy changes.
- Athena queries — run SQL directly against your log files in S3 for forensic analysis without standing up any additional infrastructure.
- S3 lifecycle rules — transition logs older than 90 days to Glacier to cut storage costs without losing your audit history.
-- Example: find all IAM changes in the last 7 days using Athena
SELECT
eventtime,
useridentity.arn,
eventname,
sourceipaddress
FROM cloudtrail_logs
WHERE eventsource = 'iam.amazonaws.com'
AND date_diff('day', from_iso8601_timestamp(eventtime), now()) <= 7
ORDER BY eventtime DESC;
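For the lifecycle rule mentioned above, a minimal sketch looks like this. The bucket name and 90-day threshold follow the examples in this post; CloudTrail delivers all objects under the AWSLogs/ prefix, so that's what the rule filters on.

```shell
# Transition CloudTrail log objects to Glacier after 90 days.
aws s3api put-bucket-lifecycle-configuration \
  --bucket ct-logs-123456789012-us-east-1 \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "ArchiveCloudTrailLogs",
      "Status": "Enabled",
      "Filter": {"Prefix": "AWSLogs/"},
      "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}]
    }]
  }'
```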
Wrapping Up
You've now built a baseline CloudTrail setup that gives you full visibility into your AWS account. With CloudTrail logging to S3, and optionally CloudWatch, Athena, or CloudTrail Lake, you can answer the questions that matter most:
- Who did it?
- What did they do?
- When did it happen?
- Where did the request come from?
- Which resource was affected?
In other words, you just installed a reliable security camera system for your AWS account.