How To Configure AWS VPC Flow Logs to CloudWatch

AWS VPC Flow Logs capture metadata about IP traffic going to and from network interfaces in your VPC. They are one of the most useful tools for network troubleshooting, security auditing, and compliance monitoring in AWS. By sending flow logs to CloudWatch Logs, you get real-time visibility into who is talking to what, which ports are open, and what traffic is being rejected by your security groups and NACLs.


This guide walks through configuring VPC Flow Logs with CloudWatch as the destination. We cover creating the log group, setting up the required IAM role, enabling flow logs at the VPC, subnet, and ENI level, querying logs with CloudWatch Logs Insights, building alarms, and using S3 as a cost-effective alternative for long-term storage.

Prerequisites

Before you begin, make sure you have the following in place:

  • An existing AWS VPC with at least one subnet and running instances
  • AWS CLI v2 installed and configured with credentials that have IAM and VPC permissions
  • An AWS account with permissions to create IAM roles, CloudWatch log groups, and VPC flow logs
  • Basic familiarity with VPC networking concepts – subnets, security groups, NACLs

Step 1: Create a CloudWatch Log Group

Flow logs need a destination. A CloudWatch log group is where all the flow log records land. Create a dedicated log group with a retention policy so you are not paying to store logs forever.

aws logs create-log-group --log-group-name /aws/vpc/flow-logs

Set a retention period. 30 days is a solid default for troubleshooting – adjust based on your compliance requirements:

aws logs put-retention-policy --log-group-name /aws/vpc/flow-logs --retention-in-days 30

Verify the log group was created with the correct retention:

aws logs describe-log-groups --log-group-name-prefix /aws/vpc/flow-logs

You should see the log group listed with retentionInDays set to 30:

{
    "logGroups": [
        {
            "logGroupName": "/aws/vpc/flow-logs",
            "creationTime": 1711094400000,
            "retentionInDays": 30,
            "metricFilterCount": 0,
            "arn": "arn:aws:logs:us-east-1:123456789012:log-group:/aws/vpc/flow-logs:*",
            "storedBytes": 0
        }
    ]
}

Step 2: Create an IAM Role for VPC Flow Logs

VPC Flow Logs needs an IAM role with permission to publish logs to CloudWatch. The role has a trust policy that allows the flow logs service to assume it, and a permission policy that grants write access to the log group.

Create a file for the trust policy:

vi /tmp/flow-logs-trust-policy.json

Add the following trust policy that allows the VPC Flow Logs service to assume this role:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "vpc-flow-logs.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

Create the IAM role using this trust policy:

aws iam create-role \
  --role-name VPCFlowLogsRole \
  --assume-role-policy-document file:///tmp/flow-logs-trust-policy.json

Now create the permission policy file that grants the role access to write to CloudWatch Logs:

vi /tmp/flow-logs-permission-policy.json

Add the permissions needed for creating log streams and putting log events:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents",
        "logs:DescribeLogGroups",
        "logs:DescribeLogStreams"
      ],
      "Resource": "*"
    }
  ]
}

Attach the permission policy to the role:

aws iam put-role-policy \
  --role-name VPCFlowLogsRole \
  --policy-name VPCFlowLogsPermission \
  --policy-document file:///tmp/flow-logs-permission-policy.json

Verify the role was created and the policy is attached:

aws iam get-role --role-name VPCFlowLogsRole --query 'Role.Arn' --output text

This returns the role ARN that you will use in the next step:

arn:aws:iam::123456789012:role/VPCFlowLogsRole

Step 3: Enable VPC Flow Logs

With the log group and IAM role ready, enable flow logs on your VPC. You can do this through the AWS Console or the CLI.

Enable via AWS Console

Follow these steps in the AWS Management Console:

  • Open the VPC dashboard and select Your VPCs
  • Select the VPC you want to monitor
  • Click the Flow logs tab at the bottom
  • Click Create flow log
  • Set Filter to All (captures both accepted and rejected traffic)
  • Set Maximum aggregation interval to 1 minute for near real-time data (or 10 minutes for lower cost)
  • Set Destination to Send to CloudWatch Logs
  • Select the log group /aws/vpc/flow-logs
  • Select the IAM role VPCFlowLogsRole
  • Click Create flow log

Enable via AWS CLI

First, get your VPC ID if you do not already have it:

aws ec2 describe-vpcs --query 'Vpcs[*].[VpcId,CidrBlock,Tags[?Key==`Name`].Value|[0]]' --output table

The output lists all VPCs with their IDs and CIDR blocks:

---------------------------------------------------------
|                      DescribeVpcs                     |
+------------------------+-----------------+------------+
|  vpc-0a1b2c3d4e5f67890 |  10.0.0.0/16    |  prod-vpc  |
|  vpc-0f9e8d7c6b5a43210 |  172.31.0.0/16  |  default   |
+------------------------+-----------------+------------+

Get the IAM role ARN:

ROLE_ARN=$(aws iam get-role --role-name VPCFlowLogsRole --query 'Role.Arn' --output text)

Create the flow log on the VPC. Replace vpc-0a1b2c3d4e5f67890 with your actual VPC ID:

aws ec2 create-flow-logs \
  --resource-type VPC \
  --resource-ids vpc-0a1b2c3d4e5f67890 \
  --traffic-type ALL \
  --log-destination-type cloud-watch-logs \
  --log-group-name /aws/vpc/flow-logs \
  --deliver-logs-permission-arn $ROLE_ARN \
  --max-aggregation-interval 60

A successful response returns the flow log ID:

{
    "ClientToken": "abc123-def456",
    "FlowLogIds": [
        "fl-0a1b2c3d4e5f67890"
    ],
    "Unsuccessful": []
}

Verify the flow log is active:

aws ec2 describe-flow-logs --filter Name=resource-id,Values=vpc-0a1b2c3d4e5f67890

The FlowLogStatus should show ACTIVE. It takes 5-10 minutes before the first log records appear in CloudWatch.

Step 4: Enable Flow Logs for a Subnet or ENI

VPC-level flow logs capture everything, which is great for broad visibility. But sometimes you only need logs for a specific subnet (like a public-facing subnet) or a single network interface (for debugging a specific instance).

Enable for a specific subnet

Replace the subnet ID with your target subnet:

aws ec2 create-flow-logs \
  --resource-type Subnet \
  --resource-ids subnet-0abc123def456789 \
  --traffic-type ALL \
  --log-destination-type cloud-watch-logs \
  --log-group-name /aws/vpc/flow-logs \
  --deliver-logs-permission-arn $ROLE_ARN \
  --max-aggregation-interval 60

Enable for a specific network interface

This is useful for troubleshooting traffic to a single EC2 instance or load balancer:

aws ec2 create-flow-logs \
  --resource-type NetworkInterface \
  --resource-ids eni-0abc123def456789 \
  --traffic-type REJECT \
  --log-destination-type cloud-watch-logs \
  --log-group-name /aws/vpc/flow-logs \
  --deliver-logs-permission-arn $ROLE_ARN \
  --max-aggregation-interval 60

Notice the --traffic-type REJECT filter above. For targeted debugging, you often only care about blocked traffic. This reduces log volume and cost significantly.

Step 5: Understanding VPC Flow Log Fields

Each flow log record contains metadata about a single network flow. Understanding these fields is essential for writing effective queries. Here is a breakdown of the default v2 format:

Field         Description
version       Flow log version (2 for the default format)
account-id    AWS account ID of the owner
interface-id  The ENI that the traffic was recorded on
srcaddr       Source IP address
dstaddr       Destination IP address
srcport       Source port number
dstport       Destination port number
protocol      IANA protocol number (6=TCP, 17=UDP, 1=ICMP)
packets       Number of packets in the flow
bytes         Number of bytes in the flow
start         Unix timestamp of the flow start
end           Unix timestamp of the flow end
action        ACCEPT or REJECT
log-status    OK, NODATA, or SKIPDATA

A raw flow log record looks like this:

2 123456789012 eni-0abc123def456789 10.0.1.25 10.0.2.50 49152 443 6 12 840 1711094400 1711094460 ACCEPT OK

This record tells us that 12 TCP packets (840 bytes) were accepted from 10.0.1.25:49152 to 10.0.2.50:443 (HTTPS) over a 60-second window. Protocol 6 is TCP.
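If you pull raw records out of a log stream (for example with aws logs get-log-events), a short awk one-liner can label the interesting fields by position. This is a sketch for the default v2 format only – custom formats reorder fields:

```shell
# Label the key fields of a default v2 flow log record (space-delimited).
# Positions: 4=srcaddr 5=dstaddr 6=srcport 7=dstport 8=protocol 10=bytes 13=action
echo '2 123456789012 eni-0abc123def456789 10.0.1.25 10.0.2.50 49152 443 6 12 840 1711094400 1711094460 ACCEPT OK' |
awk '{ printf "src=%s:%s dst=%s:%s proto=%s bytes=%s action=%s\n", $4, $6, $5, $7, $8, $10, $13 }'
```

This prints src=10.0.1.25:49152 dst=10.0.2.50:443 proto=6 bytes=840 action=ACCEPT, which matches the reading above.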

Step 6: Query Flow Logs with CloudWatch Logs Insights

CloudWatch Logs Insights lets you run SQL-like queries against your flow logs directly in the AWS Console. It is fast, scales automatically, and requires no additional infrastructure.

To access Logs Insights:

  • Open CloudWatch in the AWS Console
  • Select Logs Insights from the left menu
  • Choose the /aws/vpc/flow-logs log group
  • Set your time range and run queries

A basic query to see the most recent flow records:

fields @timestamp, srcAddr, dstAddr, srcPort, dstPort, protocol, action
| sort @timestamp desc
| limit 50

You can also run Logs Insights queries from the CLI. This is useful for scripting and automation:

aws logs start-query \
  --log-group-name /aws/vpc/flow-logs \
  --start-time $(date -d '1 hour ago' +%s) \
  --end-time $(date +%s) \
  --query-string 'fields @timestamp, srcAddr, dstAddr, action | filter action="REJECT" | sort @timestamp desc | limit 20'

This returns a query ID. Retrieve the results with:

aws logs get-query-results --query-id "YOUR_QUERY_ID"
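Because start-query is asynchronous, a script has to poll until the query status reaches Complete before the results are usable. A minimal polling loop might look like this (the placeholder query ID is an assumption – substitute the queryId returned by start-query):

```shell
# Poll until the Logs Insights query finishes, then print the results.
QUERY_ID="YOUR_QUERY_ID"   # assumption: taken from the start-query output
while true; do
  STATUS=$(aws logs get-query-results --query-id "$QUERY_ID" \
    --query 'status' --output text)
  [ "$STATUS" = "Complete" ] && break
  # Failed or Cancelled means there is nothing to wait for
  case "$STATUS" in Failed|Cancelled) echo "query ended: $STATUS" >&2; exit 1;; esac
  sleep 2
done
aws logs get-query-results --query-id "$QUERY_ID" --output json
```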

Step 7: Useful Flow Log Query Examples

These queries cover the most common troubleshooting and security monitoring scenarios. Paste them directly into CloudWatch Logs Insights with the /aws/vpc/flow-logs log group selected.

Find all rejected traffic

Rejected traffic means a security group or NACL is blocking the connection. This is the first query to run when troubleshooting connectivity issues:

fields @timestamp, srcAddr, dstAddr, srcPort, dstPort, protocol, action
| filter action = "REJECT"
| sort @timestamp desc
| limit 100

Top talkers by bytes transferred

Identify which source IPs are generating the most traffic. Useful for spotting unexpected data transfers or potential exfiltration:

stats sum(bytes) as totalBytes by srcAddr
| sort totalBytes desc
| limit 20

Monitor SSH access attempts

Track who is connecting (or trying to connect) to SSH on port 22. Append | filter action = "REJECT" to the query below to see only blocked brute-force attempts:

fields @timestamp, srcAddr, dstAddr, action
| filter dstPort = 22
| sort @timestamp desc
| limit 50

Traffic to a specific instance

Filter by the destination IP of a specific EC2 instance. Replace 10.0.1.25 with the private IP of the instance you are investigating:

fields @timestamp, srcAddr, srcPort, dstPort, protocol, action, bytes
| filter dstAddr = "10.0.1.25"
| sort @timestamp desc
| limit 100

Rejected traffic by destination port

See which destination ports are being probed or blocked. Common for detecting port scanning activity:

filter action = "REJECT"
| stats count(*) as rejections by dstPort
| sort rejections desc
| limit 20

Step 8: Create CloudWatch Alarms from Flow Logs

Queries are great for investigation, but alarms give you proactive notifications. You can create metric filters on the flow log group and trigger alarms when thresholds are exceeded.

Create a metric filter for rejected traffic

This metric filter counts every REJECT action in the flow logs:

aws logs put-metric-filter \
  --log-group-name /aws/vpc/flow-logs \
  --filter-name RejectedTrafficCount \
  --filter-pattern '[version, account_id, interface_id, srcaddr, dstaddr, srcport, dstport, protocol, packets, bytes, start, end, action="REJECT", log_status]' \
  --metric-transformations \
    metricName=RejectedPackets,metricNamespace=VPCFlowLogs,metricValue=1,defaultValue=0

Create an alarm on the metric

Trigger an alarm when rejected traffic exceeds 1000 events in a 5-minute period. Replace the SNS topic ARN with your notification topic:

aws cloudwatch put-metric-alarm \
  --alarm-name HighRejectedTraffic \
  --alarm-description "Alert when rejected VPC traffic exceeds threshold" \
  --metric-name RejectedPackets \
  --namespace VPCFlowLogs \
  --statistic Sum \
  --period 300 \
  --threshold 1000 \
  --comparison-operator GreaterThanThreshold \
  --evaluation-periods 1 \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:vpc-alerts \
  --treat-missing-data notBreaching

You can similarly create alarms for SSH-specific rejected traffic by adjusting the filter pattern to match dstport="22" and action="REJECT".

Create a metric filter for SSH rejections

This filter specifically tracks rejected SSH connections:

aws logs put-metric-filter \
  --log-group-name /aws/vpc/flow-logs \
  --filter-name SSHRejectedCount \
  --filter-pattern '[version, account_id, interface_id, srcaddr, dstaddr, srcport, dstport="22", protocol, packets, bytes, start, end, action="REJECT", log_status]' \
  --metric-transformations \
    metricName=SSHRejectedPackets,metricNamespace=VPCFlowLogs,metricValue=1,defaultValue=0

Then create an alarm on this metric the same way, with a lower threshold – even 50 SSH rejections in 5 minutes could indicate a brute-force attempt.
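As a sketch, the matching alarm could look like the following – the 50-event threshold, alarm name, and SNS topic ARN are assumptions to adjust for your environment:

```shell
# Alarm on a burst of rejected SSH connections (threshold is an assumption).
aws cloudwatch put-metric-alarm \
  --alarm-name SSHBruteForceSuspected \
  --alarm-description "Alert on a burst of rejected SSH connections" \
  --metric-name SSHRejectedPackets \
  --namespace VPCFlowLogs \
  --statistic Sum \
  --period 300 \
  --threshold 50 \
  --comparison-operator GreaterThanThreshold \
  --evaluation-periods 1 \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:vpc-alerts \
  --treat-missing-data notBreaching
```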

Step 9: Send VPC Flow Logs to S3 as a Cost Alternative

CloudWatch Logs is convenient for real-time queries, but it gets expensive at scale. For long-term storage and compliance, S3 is significantly cheaper. You can run both destinations simultaneously – CloudWatch for active monitoring and S3 for archival.

Create an S3 bucket for flow logs. The bucket name must be globally unique:

aws s3api create-bucket \
  --bucket my-vpc-flow-logs-123456789012 \
  --region us-east-1

Create a flow log that sends to S3. No IAM role is needed – AWS uses a bucket policy instead:

aws ec2 create-flow-logs \
  --resource-type VPC \
  --resource-ids vpc-0a1b2c3d4e5f67890 \
  --traffic-type ALL \
  --log-destination-type s3 \
  --log-destination arn:aws:s3:::my-vpc-flow-logs-123456789012 \
  --max-aggregation-interval 600

Notice the 10-minute aggregation interval (600 seconds) instead of 1 minute. For archival purposes, 10-minute granularity is fine and reduces the number of log files created.

Add a lifecycle policy to automatically transition logs to cheaper storage classes and eventually delete them:

aws s3api put-bucket-lifecycle-configuration \
  --bucket my-vpc-flow-logs-123456789012 \
  --lifecycle-configuration '{
    "Rules": [
      {
        "ID": "FlowLogRetention",
        "Status": "Enabled",
        "Filter": {"Prefix": ""},
        "Transitions": [
          {"Days": 30, "StorageClass": "STANDARD_IA"},
          {"Days": 90, "StorageClass": "GLACIER"}
        ],
        "Expiration": {"Days": 365}
      }
    ]
  }'

With this lifecycle policy, flow logs move to Infrequent Access after 30 days, Glacier after 90 days, and are deleted after 1 year. Adjust these values based on your compliance requirements.

You can query S3-stored flow logs using Amazon Athena by creating an external table that points to the S3 bucket. This gives you SQL query capability over archived logs without loading them back into CloudWatch.
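As a rough sketch, a table for the default v2 format could be created from the CLI like this – the database, table name, results prefix, and the S3 log path (which embeds your account ID and region) are all assumptions to adjust:

```shell
# Create an Athena external table over the archived flow logs (default v2 format).
# "start"/"end" are renamed flow_start/flow_end to avoid reserved words.
aws athena start-query-execution \
  --query-execution-context Database=default \
  --result-configuration OutputLocation=s3://my-vpc-flow-logs-123456789012/athena-results/ \
  --query-string "
CREATE EXTERNAL TABLE IF NOT EXISTS vpc_flow_logs (
  version int, account_id string, interface_id string,
  srcaddr string, dstaddr string, srcport int, dstport int,
  protocol int, packets bigint, bytes bigint,
  flow_start bigint, flow_end bigint, action string, log_status string
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ' '
LOCATION 's3://my-vpc-flow-logs-123456789012/AWSLogs/123456789012/vpcflowlogs/us-east-1/'
TBLPROPERTIES ('skip.header.line.count'='1')"
```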

Conclusion

VPC Flow Logs to CloudWatch gives you real-time network visibility across your AWS infrastructure. With the log group, IAM role, and flow logs configured, you can run ad-hoc queries with Logs Insights, build automated alarms for security events, and archive logs to S3 for long-term compliance at a fraction of the cost.

For production environments, enable flow logs at the VPC level with ALL traffic capture, set up alarms for rejected traffic patterns, and use S3 with lifecycle policies for anything you need to keep beyond 30 days. Pair flow logs with CloudWatch logging for EKS, or stream them to Elasticsearch, for a complete observability stack.
