Securing Cloud Credentials 101...and 201...and maybe a little bit of 301.

May 6, 2025 · 12 min

Introduction

As security engineers, we face a fundamental paradox: the tools we use to secure our infrastructure need credentials to function, yet these credentials themselves represent one of our greatest security risks. Whether you’re running DuckDB queries against cloud APIs, deploying security scanners, or orchestrating compliance checks, you’re handling sensitive credentials that could compromise your entire security posture if mismanaged.

Default CLI credential storage mechanisms—like plaintext files in ~/.aws/credentials or environment variables in shell history—were designed for convenience, not security. For security teams responsible for protecting cloud infrastructure, these defaults fall dangerously short.
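
To see the problem concretely, a quick local audit often turns up plaintext secrets within seconds (a sketch; the AKIA prefix matches AWS long-lived access key IDs):

# Look for long-lived keys on disk and in shell history
grep -l "aws_secret_access_key" ~/.aws/credentials 2>/dev/null
grep -hE "AKIA[0-9A-Z]{16}" ~/.bash_history ~/.zsh_history 2>/dev/null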

This guide walks through the credential security hierarchy, from basic local improvements to enterprise-grade zero-trust architectures. We’ll explore practical implementations that balance security with usability, because the most secure credential system is worthless if your team can’t actually use it.

Part 1: Understanding the Credential Landscape

Types of Credentials in Cloud Security

Security tools interact with various credential types, each with unique security considerations:

API Keys and Tokens

  • Long-lived, static credentials
  • Often have broad permissions
  • Common in legacy tools and integrations
  • Risk: Easily leaked through logs, repos, or memory dumps

Temporary Security Credentials

  • Short-lived (minutes to hours)
  • Generated through STS/token services
  • Ideal for automation and CI/CD
  • Risk: Still vulnerable during their lifetime

Service Account Keys

  • Used for machine-to-machine authentication
  • Often over-privileged
  • Difficult to rotate in production
  • Risk: Frequently forgotten and never rotated

Certificates and mTLS

  • Used for service authentication
  • Require PKI infrastructure
  • More complex but more secure
  • Risk: Certificate management overhead

The Shared Responsibility Model

Cloud providers secure the infrastructure, but credential security remains your responsibility:

Provider Responsibility:
├── Physical security of data centers
├── Network infrastructure security
├── Hypervisor and host OS security
└── API endpoint security

Your Responsibility:
├── Credential generation and storage
├── Access control and permissions
├── Rotation and lifecycle management
├── Audit logging and monitoring
└── Incident response for compromised credentials

Common Credential Leakage Vectors

Understanding how credentials leak helps design better defenses:

  1. Source Control: Credentials committed to Git repositories
  2. Logging Systems: Credentials in application logs or error messages
  3. Memory Dumps: Credentials extracted from process memory
  4. Shell History: Commands containing credentials saved in history files
  5. Configuration Files: Unencrypted credentials in config files
  6. CI/CD Pipelines: Credentials exposed in build logs
  7. Container Images: Credentials baked into Docker images
  8. Network Traffic: Credentials sent over unencrypted connections
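
Most of these vectors can be cheaply guarded at the source. For vector #1, a pre-commit hook is a good first line of defense; a minimal sketch, assuming the open-source gitleaks scanner is installed:

#!/bin/bash
# .git/hooks/pre-commit -- block commits containing likely secrets
if ! gitleaks protect --staged --verbose; then
    echo "Potential credential detected in staged changes -- commit blocked."
    exit 1
fi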

Part 2: Cloud Provider Native Solutions

AWS Deep Dive

AWS Vault Setup and Configuration

AWS Vault provides secure credential storage using your operating system’s keychain:

# Install AWS Vault
brew install aws-vault  # macOS
# or
wget https://github.com/99designs/aws-vault/releases/download/v6.6.0/aws-vault-linux-amd64
chmod +x aws-vault-linux-amd64
sudo mv aws-vault-linux-amd64 /usr/local/bin/aws-vault

# Add a profile
aws-vault add security-analyst

# Execute commands with temporary credentials
aws-vault exec security-analyst -- aws s3 ls
aws-vault exec security-analyst -- duckdb -c "SELECT * FROM read_csv_auto('s3://bucket/data.csv')"

# Configure session duration and MFA
aws-vault exec security-analyst \
  --duration=4h \
  --mfa-token=123456 \
  -- your-security-tool

# Use with assume role
aws-vault exec security-analyst \
  --assume-role-arn=arn:aws:iam::123456789012:role/SecurityAuditor \
  -- aws securityhub get-findings
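
aws-vault reads standard profiles from ~/.aws/config, so MFA and role settings can live there instead of on the command line. A sketch of a config that pairs with the commands above (the ARNs are placeholders):

[profile security-analyst]
region = us-east-1
mfa_serial = arn:aws:iam::123456789012:mfa/security-analyst

[profile security-auditor]
source_profile = security-analyst
role_arn = arn:aws:iam::123456789012:role/SecurityAuditor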

IAM Best Practices for Security Tools

Design your IAM strategy with security tools in mind. One caveat: the blanket deny-without-MFA statement below also blocks programmatic callers that cannot present MFA, so scope it to human principals or handle automation through assumed roles:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "SecurityToolReadOnly",
      "Effect": "Allow",
      "Action": [
        "iam:Get*",
        "iam:List*",
        "ec2:Describe*",
        "s3:GetBucketPolicy",
        "s3:GetBucketAcl",
        "s3:ListBucket",
        "cloudtrail:LookupEvents",
        "securityhub:Get*",
        "guardduty:Get*",
        "config:Describe*",
        "config:Get*"
      ],
      "Resource": "*"
    },
    {
      "Sid": "RequireMFAForSensitiveActions",
      "Effect": "Deny",
      "Action": "*",
      "Resource": "*",
      "Condition": {
        "BoolIfExists": {
          "aws:MultiFactorAuthPresent": "false"
        }
      }
    }
  ]
}
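
Before attaching a policy like this, it can be dry-run against specific API calls with the IAM policy simulator, which catches both missing permissions and over-grants (a sketch; assumes the policy is saved as policy.json):

# Verify the policy allows the reads the tooling needs -- and nothing more
aws iam simulate-custom-policy \
  --policy-input-list file://policy.json \
  --action-names ec2:DescribeInstances s3:GetBucketPolicy iam:CreateUser \
  --query 'EvaluationResults[].[EvalActionName,EvalDecision]' --output table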

Cross-Account Security Analysis

Implement secure cross-account access for multi-account environments:

# Set up cross-account role
aws iam create-role \
  --role-name SecurityAuditor \
  --assume-role-policy-document file://trust-policy.json

# Trust policy for security account
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::SECURITY_ACCOUNT_ID:root"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": {
          "sts:ExternalId": "unique-external-id-here"
        },
        "IpAddress": {
          "aws:SourceIp": ["10.0.0.0/8", "172.16.0.0/12"]
        }
      }
    }
  ]
}

# Use with AWS Vault
aws-vault exec security-account -- \
  aws sts assume-role \
  --role-arn arn:aws:iam::TARGET_ACCOUNT:role/SecurityAuditor \
  --role-session-name security-scan-$(date +%Y%m%d)

AWS SSO Integration

Modern approach using IAM Identity Center (the successor to AWS SSO):

# Configure AWS SSO
aws configure sso
# Follow prompts for SSO portal URL and region

# Use named profile with SSO
aws s3 ls --profile security-sso-profile

# Automated token refresh
aws sso login --profile security-sso-profile

# Integration with security tools
export AWS_PROFILE=security-sso-profile
./run-security-scan.sh
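
Some older tools cannot read SSO profiles directly. Recent AWS CLI v2 releases can export the short-lived credentials an SSO session resolves to, which bridges the gap without writing anything long-lived to disk (a sketch):

# Export resolved short-lived credentials as environment variables
eval "$(aws configure export-credentials --profile security-sso-profile --format env)"
./legacy-security-tool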

Azure Considerations

Service Principals vs Managed Identities

# Create service principal for security tools
az ad sp create-for-rbac \
  --name "security-scanner-sp" \
  --role "Security Reader" \
  --scopes /subscriptions/{subscription-id}

# Login with service principal
az login --service-principal \
  --username $AZURE_CLIENT_ID \
  --password $AZURE_CLIENT_SECRET \
  --tenant $AZURE_TENANT_ID

# Better: Use certificate-based authentication
az ad sp create-for-rbac \
  --name "security-scanner-sp" \
  --create-cert \
  --cert mycert \
  --keyvault myvault

az login --service-principal \
  --username $AZURE_CLIENT_ID \
  --tenant $AZURE_TENANT_ID \
  --password mycert.pem  # az login takes the PEM path via -p/--password

Azure Key Vault Integration

# Store credentials in Key Vault
az keyvault secret set \
  --vault-name SecurityVault \
  --name "api-key" \
  --value "sensitive-value"

# Retrieve in scripts
API_KEY=$(az keyvault secret show \
  --vault-name SecurityVault \
  --name "api-key" \
  --query value -o tsv)

# Use with managed identity (no credentials needed)
# When running on Azure VM/Container with managed identity
az login --identity
SECRET=$(az keyvault secret show \
  --vault-name SecurityVault \
  --name "api-key" \
  --query value -o tsv)

Device Code Flow for Headless Environments

# Initiate device code flow
az login --use-device-code

# Automated script example
#!/bin/bash
# Attempt identity-based auth first, fall back to device code
if ! az login --identity 2>/dev/null; then
    echo "Managed identity not available, using device code flow"
    az login --use-device-code
fi

# Run security scans
az security assessment list

GCP Security Patterns

Service Account Key Management

# Create service account for security tools
gcloud iam service-accounts create security-scanner \
  --display-name="Security Scanner Service Account"

# Grant minimal permissions
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="serviceAccount:security-scanner@PROJECT_ID.iam.gserviceaccount.com" \
  --role="roles/viewer"

# Create and download key (avoid if possible)
gcloud iam service-accounts keys create key.json \
  --iam-account=security-scanner@PROJECT_ID.iam.gserviceaccount.com

# Better: Use workload identity or impersonation
gcloud auth application-default login
gcloud config set auth/impersonate_service_account \
  security-scanner@PROJECT_ID.iam.gserviceaccount.com

Application Default Credentials (ADC)

# Set up ADC for local development
gcloud auth application-default login

# Use in code (automatic credential discovery)
# Python example:
from google.cloud import storage
client = storage.Client()  # Automatically uses ADC

# Override for specific environments
export GOOGLE_APPLICATION_CREDENTIALS="/path/to/key.json"

Workload Identity Federation

# Configure workload identity for GitHub Actions
gcloud iam workload-identity-pools create "github-pool" \
  --location="global" \
  --display-name="GitHub Pool"

gcloud iam workload-identity-pools providers create-oidc "github-provider" \
  --location="global" \
  --workload-identity-pool="github-pool" \
  --issuer-uri="https://token.actions.githubusercontent.com" \
  --attribute-mapping="google.subject=assertion.sub,attribute.repo=assertion.repository"

# Use in GitHub Actions
- uses: google-github-actions/auth@v1
  with:
    workload_identity_provider: 'projects/PROJECT_NUMBER/locations/global/workloadIdentityPools/github-pool/providers/github-provider'
    service_account: 'security-scanner@PROJECT_ID.iam.gserviceaccount.com'

Part 3: Federated Authentication

OIDC/SAML Integration Patterns

Implement identity federation for multi-cloud environments:

# Example: Okta SAML configuration for AWS
saml_providers:
  okta:
    metadata_url: "https://company.okta.com/app/metadata"
    role_mapping:
      SecurityEngineer: "arn:aws:iam::123456789012:role/SecurityEngineer"
      SecurityAnalyst: "arn:aws:iam::123456789012:role/SecurityAnalyst"
    session_duration: 14400  # 4 hours
    mfa_required: true

Token Refresh Strategies

import requests
from datetime import datetime, timedelta

class TokenManager:
    def __init__(self, token_endpoint, client_id, client_secret):
        self.token_endpoint = token_endpoint
        self.client_id = client_id
        self.client_secret = client_secret
        self.token = None
        self.expiry = None
    
    def get_token(self):
        # Refresh if expired or about to expire
        if not self.token or datetime.now() >= self.expiry - timedelta(minutes=5):
            self.refresh_token()
        return self.token
    
    def refresh_token(self):
        response = requests.post(
            self.token_endpoint,
            data={
                'grant_type': 'client_credentials',
                'client_id': self.client_id,
                'client_secret': self.client_secret,
                'scope': 'security.read'
            },
            timeout=10
        )
        response.raise_for_status()
        data = response.json()
        self.token = data['access_token']
        self.expiry = datetime.now() + timedelta(seconds=data['expires_in'])

Part 4: Enterprise-Grade Solutions with HashiCorp Vault

Vault Deployment Architecture

# Vault configuration with auto-unseal
storage "consul" {
  address = "127.0.0.1:8500"
  path    = "vault/"
}

listener "tcp" {
  address       = "0.0.0.0:8200"
  tls_cert_file = "/opt/vault/tls/cert.pem"
  tls_key_file  = "/opt/vault/tls/key.pem"
}

# AWS KMS auto-unseal (configure exactly one seal stanza; the Azure and
# GCP variants below are shown as alternatives)
seal "awskms" {
  region     = "us-east-1"
  kms_key_id = "alias/vault-unseal-key"
}

# Azure Key Vault auto-unseal
seal "azurekeyvault" {
  tenant_id      = "12345678-1234-1234-1234-123456789012"
  vault_name     = "vault-unseal-keyvault"
  key_name       = "vault-unseal-key"
}

# GCP Cloud KMS auto-unseal
seal "gcpckms" {
  project     = "vault-project"
  region      = "global"
  key_ring    = "vault-keyring"
  crypto_key  = "vault-unseal-key"
}

api_addr = "https://vault.company.com:8200"
cluster_addr = "https://vault.company.com:8201"

Dynamic Secrets for Cloud Providers

Configure Vault to generate temporary cloud credentials:

# Enable AWS secrets engine
vault secrets enable -path=aws aws

# Configure AWS secrets engine
vault write aws/config/root \
  access_key=AKIAXXXXXXXXXXXXXXXX \
  secret_key=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX \
  region=us-east-1

# Create role for security tools
vault write aws/roles/security-analyst \
  credential_type=iam_user \
  policy_document=-<<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "iam:ListUsers",
        "iam:ListRoles",
        "iam:GetAccountPasswordPolicy",
        "ec2:DescribeInstances",
        "ec2:DescribeSecurityGroups",
        "s3:ListAllMyBuckets",
        "s3:GetBucketPolicy",
        "cloudtrail:LookupEvents"
      ],
      "Resource": "*"
    }
  ]
}
EOF

# Generate temporary credentials
vault read aws/creds/security-analyst
# Returns a temporary access_key/secret_key pair whose lifetime is set by the role's TTL
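
In scripts, the returned lease can be scoped to exactly one scan and revoked the moment the work finishes, shrinking the exposure window further (a sketch using jq; field names follow the aws secrets engine's JSON output):

# Fetch one-shot credentials, run the scan, then revoke the lease
creds=$(vault read -format=json aws/creds/security-analyst)
export AWS_ACCESS_KEY_ID=$(echo "$creds" | jq -r '.data.access_key')
export AWS_SECRET_ACCESS_KEY=$(echo "$creds" | jq -r '.data.secret_key')
./run-security-scan.sh
vault lease revoke "$(echo "$creds" | jq -r '.lease_id')"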

Vault Agent for Credential Injection

# Vault agent configuration
pid_file = "./pidfile"

vault {
  address = "https://vault.company.com:8200"
}

auto_auth {
  method {
    type = "approle"
    
    config = {
      role_id_file_path = "/etc/vault/role-id"
      secret_id_file_path = "/etc/vault/secret-id"
      remove_secret_id_file_after_reading = true
    }
  }
  
  sink {
    type = "file"
    config = {
      path = "/tmp/vault-token"
    }
  }
}

template {
  # Note: `source` and `contents` are mutually exclusive; this example
  # embeds the template inline
  destination = "/etc/security-tools/config.json"
  perms       = "0600"
  
  contents = <<EOF
{
  "aws_access_key": "{{ with secret "aws/creds/security-analyst" }}{{ .Data.access_key }}{{ end }}",
  "aws_secret_key": "{{ with secret "aws/creds/security-analyst" }}{{ .Data.secret_key }}{{ end }}",
  "database_url": "{{ with secret "database/creds/readonly" }}postgresql://{{ .Data.username }}:{{ .Data.password }}@db.company.com/security{{ end }}"
}
EOF
}

Policy-Based Access Control

# Security team policy
# (the aws/creds grant appears at the bottom with TTL constraints; define
#  each path only once per policy)

path "database/creds/security-readonly" {
  capabilities = ["read"]
}

path "secret/data/security-tools/*" {
  capabilities = ["read", "list"]
}

path "pki/issue/security-tools" {
  capabilities = ["create", "update"]
}

# Audit requirements
path "sys/audit" {
  capabilities = ["read"]
}

# Restrict issuance TTLs. (Source-IP restrictions cannot be expressed in
# ACL path stanzas; enforce them on the auth method instead, as sketched
# after this policy.)
path "aws/creds/security-analyst" {
  capabilities = ["read"]
  allowed_parameters = {
    "ttl" = ["1h", "2h", "4h"]
  }
  required_parameters = ["ttl"]
  min_wrapping_ttl = "1h"
  max_wrapping_ttl = "24h"
}
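
Network restrictions belong on the authentication method. For example, an AppRole used by scanners can be bound to office CIDRs so that even a stolen token is useless elsewhere (a sketch; the role and policy names are assumptions):

# Bind both SecretID usage and issued tokens to trusted networks
vault write auth/approle/role/security-scanner \
  token_policies="security-team" \
  secret_id_bound_cidrs="10.0.0.0/8,172.16.0.0/12" \
  token_bound_cidrs="10.0.0.0/8,172.16.0.0/12" \
  token_ttl=1h token_max_ttl=4h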

Part 5: Practical Implementation

Environment-Specific Configurations

# Development environment
export VAULT_ADDR="https://vault-dev.company.com:8200"
export VAULT_NAMESPACE="dev/security"
export AWS_PROFILE="security-dev"

# Staging environment
export VAULT_ADDR="https://vault-staging.company.com:8200"
export VAULT_NAMESPACE="staging/security"
export AWS_PROFILE="security-staging"

# Production environment
export VAULT_ADDR="https://vault-prod.company.com:8200"
export VAULT_NAMESPACE="prod/security"
export AWS_PROFILE="security-prod"
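
Wrapping these in a small shell function prevents the classic mistake of running a production scan with staging variables still exported (a sketch):

# Usage: security_env dev|staging|prod
security_env() {
  local env="$1"
  export VAULT_ADDR="https://vault-${env}.company.com:8200"
  export VAULT_NAMESPACE="${env}/security"
  export AWS_PROFILE="security-${env}"
  echo "Switched to ${env}: VAULT_ADDR=${VAULT_ADDR}"
}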

CI/CD Pipeline Credential Management

# GitHub Actions example with OIDC
name: Security Scan
on:
  schedule:
    - cron: '0 0 * * *'

jobs:
  security-scan:
    runs-on: ubuntu-latest
    permissions:
      id-token: write
      contents: read
    
    steps:
    - uses: actions/checkout@v3
    
    # AWS authentication via OIDC
    - name: Configure AWS credentials
      uses: aws-actions/configure-aws-credentials@v2
      with:
        role-to-assume: arn:aws:iam::123456789012:role/GitHubSecurityScanner
        aws-region: us-east-1
    
    # Vault authentication
    - name: Authenticate to Vault
      uses: hashicorp/vault-action@v2
      with:
        url: https://vault.company.com:8200
        method: jwt
        role: github-security-scanner
        secrets: |
          database/creds/readonly username | DB_USERNAME ;
          database/creds/readonly password | DB_PASSWORD ;
          secret/data/security-tools/api-keys datadog | DATADOG_API_KEY
    
    - name: Run security scan
      run: |
        ./security-scan.sh

Container and Kubernetes Considerations

# Kubernetes deployment with Vault sidecar
apiVersion: apps/v1
kind: Deployment
metadata:
  name: security-scanner
spec:
  selector:
    matchLabels:
      app: security-scanner
  template:
    metadata:
      labels:
        app: security-scanner
      annotations:
        vault.hashicorp.com/agent-inject: "true"
        vault.hashicorp.com/role: "security-scanner"
        vault.hashicorp.com/agent-inject-secret-aws: "aws/creds/security-analyst"
        vault.hashicorp.com/agent-inject-template-aws: |
          {{ with secret "aws/creds/security-analyst" }}
          export AWS_ACCESS_KEY_ID="{{ .Data.access_key }}"
          export AWS_SECRET_ACCESS_KEY="{{ .Data.secret_key }}"
          {{ end }}
    spec:
      serviceAccountName: security-scanner
      containers:
      - name: scanner
        image: security-scanner:latest
        command: ["/bin/bash", "-c"]
        args:
          - source /vault/secrets/aws && ./run-scan.sh
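
The injector authenticates the pod via its Kubernetes service account, so that account must be bound to a Vault role ahead of time (a sketch; assumes the kubernetes auth method is already enabled and configured):

kubectl create serviceaccount security-scanner

vault write auth/kubernetes/role/security-scanner \
  bound_service_account_names=security-scanner \
  bound_service_account_namespaces=default \
  token_policies=security-team \
  token_ttl=1h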

Credential Rotation Automation

#!/usr/bin/env python3
import boto3
import hvac
from datetime import datetime, timedelta, timezone

class CredentialRotator:
    def __init__(self, vault_addr, vault_token):
        self.vault = hvac.Client(url=vault_addr, token=vault_token)
        self.iam = boto3.client('iam')
    
    def rotate_service_account_keys(self, service_account):
        # List existing keys
        keys = self.iam.list_access_keys(UserName=service_account)
        
        for key in keys['AccessKeyMetadata']:
            # boto3 returns timezone-aware datetimes; compare in UTC
            age = datetime.now(timezone.utc) - key['CreateDate']
            
            if age > timedelta(days=90):
                # Create new key (IAM allows at most two access keys per
                # user, so one slot must be free before this call)
                new_key = self.iam.create_access_key(UserName=service_account)
                
                # Store in Vault
                self.vault.secrets.kv.v2.create_or_update_secret(
                    path=f'service-accounts/{service_account}',
                    secret={
                        'access_key': new_key['AccessKey']['AccessKeyId'],
                        'secret_key': new_key['AccessKey']['SecretAccessKey'],
                        'created': datetime.now().isoformat()
                    }
                )
                
                # Delete old key
                self.iam.delete_access_key(
                    UserName=service_account,
                    AccessKeyId=key['AccessKeyId']
                )
                
                print(f"Rotated key for {service_account}")

Part 6: Security Analysis Tooling Integration

Securing DuckDB Connection Strings

import duckdb
from vault import get_secret  # local helper wrapping your Vault client (e.g. hvac)

# Secure credential retrieval
def get_secure_connection():
    # Get credentials from Vault
    aws_creds = get_secret('aws/creds/security-analyst')
    
    # Configure DuckDB with temporary credentials
    con = duckdb.connect()
    con.execute(f"""
        SET s3_access_key_id='{aws_creds['access_key']}';
        SET s3_secret_access_key='{aws_creds['secret_key']}';
        SET s3_region='us-east-1';
    """)
    
    return con

# Use secure connection
con = get_secure_connection()
result = con.execute("""
    SELECT * FROM read_csv_auto('s3://security-bucket/findings.csv')
    WHERE severity = 'CRITICAL'
""").fetchall()

Tailpipe Credential Configuration

# Secure Tailpipe configuration
sources:
  - name: aws_security_hub
    type: aws
    credential_source: vault
    vault_path: aws/creds/security-analyst
    regions:
      - us-east-1
      - us-west-2
    
  - name: azure_sentinel
    type: azure
    credential_source: managed_identity
    subscription_id: "{{ env.AZURE_SUBSCRIPTION_ID }}"
    
  - name: gcp_scc
    type: gcp
    credential_source: workload_identity
    project_id: "{{ env.GCP_PROJECT_ID }}"

outputs:
  - name: encrypted_s3
    type: s3
    bucket: security-findings
    encryption: aws:kms
    kms_key_id: "alias/security-findings-key"
    credential_source: iam_role

Query Result Encryption

import base64

import boto3
from cryptography.fernet import Fernet

class SecureResultStorage:
    def __init__(self, kms_key_id):
        self.kms = boto3.client('kms')
        self.kms_key_id = kms_key_id
    
    def encrypt_results(self, data):
        # Generate data encryption key
        dek = self.kms.generate_data_key(
            KeyId=self.kms_key_id,
            KeySpec='AES_256'
        )
        
        # Encrypt data with DEK
        fernet = Fernet(base64.urlsafe_b64encode(dek['Plaintext'][:32]))
        encrypted_data = fernet.encrypt(data.encode())
        
        return {
            'encrypted_data': encrypted_data,
            'encrypted_dek': dek['CiphertextBlob']
        }
    
    def decrypt_results(self, encrypted_data, encrypted_dek):
        # Decrypt DEK
        dek = self.kms.decrypt(CiphertextBlob=encrypted_dek)
        
        # Decrypt data
        fernet = Fernet(base64.urlsafe_b64encode(dek['Plaintext'][:32]))
        return fernet.decrypt(encrypted_data).decode()

Audit Logging for Credential Access

import logging
import json
from datetime import datetime

class CredentialAuditor:
    def __init__(self, log_destination):
        self.logger = logging.getLogger('credential_audit')
        handler = logging.FileHandler(log_destination)
        handler.setFormatter(logging.Formatter(
            '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
        ))
        self.logger.addHandler(handler)
        self.logger.setLevel(logging.INFO)
    
    def log_access(self, user, credential_type, resource, action):
        audit_event = {
            'timestamp': datetime.utcnow().isoformat(),
            'user': user,
            'credential_type': credential_type,
            'resource': resource,
            'action': action,
            'source_ip': self.get_source_ip(),
            'session_id': self.get_session_id()
        }
        
        self.logger.info(json.dumps(audit_event))
    
    def get_source_ip(self):
        # Placeholder: resolve from your request context or host metadata
        return 'unknown'
    
    def get_session_id(self):
        # Placeholder: resolve from your session/tracing context
        return 'unknown'
    
    def log_rotation(self, credential_id, old_version, new_version):
        rotation_event = {
            'timestamp': datetime.utcnow().isoformat(),
            'event_type': 'credential_rotation',
            'credential_id': credential_id,
            'old_version': old_version,
            'new_version': new_version
        }
        
        self.logger.info(json.dumps(rotation_event))
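
Application-level audit logs like this complement, rather than replace, the audit trail of the secrets backend itself. Vault can write its own tamper-evident log of every credential read with one command (a sketch):

# Enable Vault's file audit device; every request/response is HMAC-logged
vault audit enable file file_path=/var/log/vault_audit.log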

Part 7: Incident Response Scenarios

Credential Compromise Response

#!/bin/bash
# Emergency credential revocation script

COMPROMISED_USER=$1
INCIDENT_ID=$2

# Immediate actions
echo "[$(date)] Starting emergency revocation for user: $COMPROMISED_USER"

# 1. Revoke AWS access
aws iam list-access-keys --user-name "$COMPROMISED_USER" \
  --query 'AccessKeyMetadata[].AccessKeyId' --output text | tr '\t' '\n' | \
while read -r key; do
    aws iam update-access-key --user-name "$COMPROMISED_USER" --access-key-id "$key" --status Inactive
    echo "[$(date)] Deactivated AWS key: $key"
done

# 2. Revoke the compromised user's Vault tokens
# (looking up your own accessor would revoke the responder's token;
#  instead enumerate accessors and revoke those owned by the user)
vault list -format=json auth/token/accessors | jq -r '.[]' | \
while read -r accessor; do
    owner=$(vault token lookup -format=json -accessor "$accessor" 2>/dev/null | jq -r '.data.meta.username // empty')
    if [ "$owner" = "$COMPROMISED_USER" ]; then
        vault token revoke -accessor "$accessor"
    fi
done

# 3. Expire all active sessions
vault lease revoke -prefix aws/creds/

# 4. Rotate affected service accounts
./rotate-service-accounts.sh "$COMPROMISED_USER"

# 5. Update audit log
cat << EOF >> /var/log/security/incident-$INCIDENT_ID.log
Timestamp: $(date)
User: $COMPROMISED_USER
Actions taken:
- AWS keys deactivated
- Vault tokens revoked
- Active leases expired
- Service accounts rotated
EOF

Emergency Access Patterns

# Break-glass configuration
emergency_access:
  enabled: true
  
  providers:
    aws:
      role: "arn:aws:iam::123456789012:role/EmergencyAccess"
      requires_mfa: true
      session_duration: 3600  # 1 hour
      notification_channels:
        - [email protected]
        - pagerduty-security
    
    vault:
      policy: "emergency-response"
      auth_method: "userpass"
      requires_quorum: 2  # Requires 2 people
      auto_revoke: true
      ttl: "1h"
  
  audit:
    detailed_logging: true
    immutable_storage: true
    real_time_alerts: true

Credential Forensics

#!/usr/bin/env python3
import boto3
import json
from datetime import datetime, timedelta

def investigate_credential_usage(access_key_id, hours_back=24):
    cloudtrail = boto3.client('cloudtrail')
    
    # Look back for credential usage
    end_time = datetime.utcnow()
    start_time = end_time - timedelta(hours=hours_back)
    
    events = []
    
    # Query CloudTrail
    paginator = cloudtrail.get_paginator('lookup_events')
    page_iterator = paginator.paginate(
        LookupAttributes=[
            {
                'AttributeKey': 'AccessKeyId',
                'AttributeValue': access_key_id
            }
        ],
        StartTime=start_time,
        EndTime=end_time
    )
    
    for page in page_iterator:
        for event in page['Events']:
            events.append({
                'time': event['EventTime'],
                'event_name': event['EventName'],
                'source_ip': event.get('SourceIPAddress', 'Unknown'),
                'user_agent': event.get('UserAgent', 'Unknown'),
                'error_code': event.get('ErrorCode', None),
                'resources': event.get('Resources', [])
            })
    
    # Analyze patterns
    analysis = {
        'total_events': len(events),
        'unique_ips': len(set(e['source_ip'] for e in events)),
        'unique_operations': len(set(e['event_name'] for e in events)),
        'error_rate': len([e for e in events if e['error_code']]) / len(events) if events else 0,
        'suspicious_patterns': detect_suspicious_patterns(events)
    }
    
    return analysis

def detect_suspicious_patterns(events):
    patterns = []
    
    # Rapid API calls
    if len(events) > 1000:
        patterns.append('Unusually high API call volume')
    
    # Many distinct source IPs (a rough proxy for geographic spread)
    ips = [e['source_ip'] for e in events]
    if len(set(ips)) > 10:
        patterns.append('Access from an unusually large number of source IPs')
    
    # Unusual operations
    # Sensitive or destructive operations
    sensitive_ops = ['DeleteBucket', 'DeleteTrail', 'StopLogging',
                     'CreateUser', 'CreateAccessKey', 'PutUserPolicy']
    if any(e['event_name'] in sensitive_ops for e in events):
        patterns.append('Sensitive operations attempted')
    
    return patterns

~jared gore