SOC 2 Audit Log Requirements: What Needs to Be Logged
SOC 2 requires audit logs across your infrastructure and applications. Learn which events must be logged, how long to retain logs, and what evidence auditors check.
- CC7.1 and CC6.1 together require that security-relevant events are logged with sufficient detail to support investigation and accountability.
- Minimum log coverage: authentication events (success and failure), privileged access, data access, configuration changes, and API calls to cloud infrastructure.
- Logs must be tamper-resistant — stored separately from the systems generating them, with access restricted to prevent modification or deletion.
- Retention for security-relevant logs: minimum 12 months, with 90 days in hot storage for investigation access.
- Logs that nobody reads are not a control — alert routing and regular log review are required to demonstrate the monitoring control is operating.
Criteria and Rationale
Audit logging sits at the intersection of CC6.1 (logical access accountability) and CC7.1 (detection and monitoring). CC6.1 requires that user activity on systems be logged and that logs be protected from unauthorized modification. CC7.1 requires that the entity uses detection tools that identify security events — audit logs are the primary data source for those detection tools. Without comprehensive logs, your monitoring tools have nothing to analyze.
The purpose of audit logs in a SOC 2 context is threefold: accountability (attributing actions to specific users so that unauthorized or malicious actions can be identified), detection (providing data that anomaly monitoring and SIEM tools analyze to detect security events), and forensics (providing evidence for post-incident investigation to determine root cause, scope of impact, and affected data).
The standard does not enumerate a specific list of events that must be logged. Instead, the criteria require that events relevant to security, confidentiality, and availability be captured. Auditors use professional judgment — if a significant security event could have occurred and would not appear in your logs, that is a gap regardless of what your log list contains.
What Events Must Be Logged
Authentication events: all login attempts (success and failure) to all in-scope systems. Failed login attempts are often the first indicator of a brute force or credential stuffing attack. Log the user identifier, source IP, timestamp, authentication method, and success/failure status. This applies to your application, your cloud console, your SSO system, your VPN, your code repository, and your internal tooling.
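As an illustration, an authentication event carrying the fields above can be emitted as one JSON line per event. This is a minimal sketch; the helper name and field names are illustrative, not a required SOC 2 schema.

```python
import json
from datetime import datetime, timezone

def auth_event(user_id, source_ip, method, success):
    """Build one authentication log entry with the fields listed above.

    Field names are illustrative examples, not a mandated schema.
    """
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event_type": "authentication",
        "user_id": user_id,
        "source_ip": source_ip,
        "auth_method": method,  # e.g. "password", "sso", "mfa"
        "outcome": "success" if success else "failure",
    }

# One JSON object per line keeps the entries machine-parseable by log shippers.
print(json.dumps(auth_event("alice@example.com", "203.0.113.7", "sso", False)))
```

Emitting one self-contained JSON object per event, rather than free-form text, makes the success/failure status and source IP trivially queryable in a SIEM.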
Privileged access events: any use of elevated permissions — console access to production environments, sudo commands on production servers, IAM role assumption, database admin access. These events carry higher risk and require more rigorous logging. Include the action performed, the resource accessed, and the outcome. AWS CloudTrail captures IAM role assumptions and console sign-ins automatically.
Data access and modification events: reads and writes of sensitive data fields, bulk data exports, and deletion events. Application-level logging is required here — cloud infrastructure logs do not capture which rows were queried from your database or which files were downloaded from your application. Define in your logging policy what constitutes "sensitive data" (customer PII, financial records, health information) and ensure your application generates a log entry for access to those fields.
Configuration changes: changes to security settings, user permissions, network rules, and infrastructure configurations. AWS Config and CloudTrail capture infrastructure configuration changes. GitHub commit history captures code and configuration changes. Your CI/CD pipeline logs capture deployment events. Together these provide a comprehensive audit trail of changes to your production environment.
Log Storage and Tamper Resistance
Logs must be stored in a way that prevents unauthorized modification or deletion. If an attacker who has compromised your production environment can also delete the CloudTrail logs documenting their activity, the logs provide no forensic value. Tamper resistance is achieved by storing logs in a separate AWS account with restricted access (a dedicated logging account), enabling S3 Object Lock in compliance mode (WORM — write once, read many) on the CloudTrail bucket, and restricting API access to the logging account to a minimal set of principals.
The logging account should have no trust relationship with the production account that would allow production IAM principals to modify logs. CloudTrail can be configured to deliver logs to a cross-account S3 bucket — this is the AWS recommended architecture for audit-grade logging. Document the architecture and show the auditor that the logging S3 bucket is in a separate account with no delete permission for production account roles.
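As a sketch of that separation, the logging-account bucket policy below denies object deletion to every principal in the production account, keyed on the `aws:PrincipalAccount` condition. The account ID and bucket name are placeholders.

```python
import json

PROD_ACCOUNT_ID = "111111111111"  # hypothetical production account ID
LOG_BUCKET = "org-audit-logs"     # hypothetical bucket in the logging account

# Deny object deletion to every principal in the production account, so a
# compromised production role cannot erase its own audit trail.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyProductionDeletes",
        "Effect": "Deny",
        "Principal": "*",
        "Action": ["s3:DeleteObject", "s3:DeleteObjectVersion"],
        "Resource": f"arn:aws:s3:::{LOG_BUCKET}/*",
        "Condition": {"StringEquals": {"aws:PrincipalAccount": PROD_ACCOUNT_ID}},
    }],
}
print(json.dumps(policy, indent=2))
```

An explicit Deny wins over any Allow attached to a production role, which is exactly the property you want to show the auditor.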
Log integrity can also be validated using CloudTrail log file validation — CloudTrail signs each log file and you can verify that logs were not modified after delivery. Enable log file validation in your CloudTrail configuration and document this as part of your tamper-resistance controls.
Retention Requirements
SOC 2 does not mandate a specific log retention period, but industry standard and auditor expectation is 12 months for security-relevant logs, with 90 days in hot storage and the remainder in cold storage. Hot storage allows rapid search and investigation; cold storage (S3 Glacier, Azure Archive) provides cost-effective long-term retention.
Define retention periods in your logging policy by log category: authentication logs (12 months), CloudTrail API logs (12 months), application access logs (12 months), application error logs (90 days), performance metrics (30–90 days). Configure the technical retention settings to match the policy — S3 lifecycle policies for moving to Glacier and expiry settings for deletion.
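The policy-to-configuration mapping described above can be sketched as an S3 lifecycle configuration (the shape accepted by boto3's `put_bucket_lifecycle_configuration`). The prefix is a placeholder; the 90-day Glacier transition and 365-day expiry match the retention figures above.

```python
# S3 lifecycle configuration: hot storage for 90 days, then Glacier until the
# 12-month retention period ends, then deletion. Prefix is a placeholder.
lifecycle = {
    "Rules": [{
        "ID": "audit-log-retention",
        "Status": "Enabled",
        "Filter": {"Prefix": "cloudtrail/"},
        "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
        "Expiration": {"Days": 365},
    }]
}
# Applied with:
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="org-audit-logs", LifecycleConfiguration=lifecycle)
```

Keeping the numbers in one reviewable structure makes it easy to show the auditor that the technical settings match the written policy.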
If you operate under additional regulatory frameworks (HIPAA, PCI-DSS, SOX), longer retention may be required. HIPAA's six-year documentation retention requirement is commonly applied to audit logs related to PHI access. PCI-DSS requires 12 months of log retention with the most recent 3 months immediately available. SOX is commonly interpreted to require 7 years for financial system logs. Design your retention architecture to satisfy the most stringent applicable requirement.
Cloud Provider Audit Logs
Cloud providers include native audit logging services that form the foundation of SOC 2 logging compliance. AWS CloudTrail logs all API calls to AWS services across all regions — who called which API, from which IP, with what parameters, and what the result was. Enable CloudTrail in all regions (including regions where you have no active resources, to detect unexpected activity) and configure S3 delivery with log file validation.
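A minimal sketch of those trail settings, expressed as the keyword arguments accepted by boto3's `create_trail` call; the trail and bucket names are hypothetical.

```python
# Keyword arguments for boto3's cloudtrail.create_trail call, shown as a plain
# dict so the settings can be reviewed without AWS credentials.
trail_settings = {
    "Name": "org-audit-trail",         # hypothetical trail name
    "S3BucketName": "org-audit-logs",  # bucket in the dedicated logging account
    "IsMultiRegionTrail": True,        # cover all regions, active or not
    "EnableLogFileValidation": True,   # signed digest files for tamper checks
}
# Applied with:
# boto3.client("cloudtrail").create_trail(**trail_settings)
```

`IsMultiRegionTrail` covers regions with no active resources, and `EnableLogFileValidation` produces the signed digests referenced in the tamper-resistance section.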
Supplementing CloudTrail, AWS Config records configuration state changes for your AWS resources over time — it answers "what did this S3 bucket's configuration look like on this date?" where CloudTrail answers "who changed this S3 bucket on this date?" Together they provide a complete audit trail of your infrastructure state and the actions that produced it.
For GCP, Cloud Audit Logs (Admin Activity, Data Access, System Events) provide equivalent coverage. For Azure, the Azure Monitor Activity Log and Microsoft Entra ID (formerly Azure Active Directory) sign-in logs provide the equivalent. Configure each cloud provider's audit logging before your observation period begins — enabling CloudTrail after the observation period starts leaves a gap in your audit trail for the pre-enablement period.
Application-Level Logging
Cloud infrastructure logs capture what happens at the API and infrastructure level, but they do not capture application-level events — user sign-ins to your product, data queries, file downloads, admin actions within your application. These events require instrumentation within your application code.
Implement application-level audit logging for: user authentication (sign-in, sign-out, failed sign-in, MFA events), user account management (creation, modification, deletion, role changes), sensitive data access (viewing customer records, exporting reports), and administrative actions (configuration changes, billing modifications, permission grants). Structure your logs in a machine-readable format (JSON) with consistent fields: timestamp, user ID, action, resource ID, outcome, and source IP.
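A minimal helper along these lines, writing one JSON line per event with the consistent fields listed above; the field names and values are illustrative, not a mandated schema.

```python
import json
import sys
from datetime import datetime, timezone

REQUIRED_FIELDS = ("timestamp", "user_id", "action", "resource_id", "outcome", "source_ip")

def audit_log(user_id, action, resource_id, outcome, source_ip, stream=sys.stdout):
    """Write one structured audit entry with the consistent fields listed above."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "action": action,          # e.g. "export_report", "role_change"
        "resource_id": resource_id,
        "outcome": outcome,        # e.g. "success" or "denied"
        "source_ip": source_ip,
    }
    stream.write(json.dumps(entry) + "\n")
    return entry

audit_log("u-42", "export_report", "report-9", "success", "203.0.113.5")
```

Routing `stream` to stdout lets a container platform or log shipper forward the entries to the same aggregation pipeline as your infrastructure logs.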
Route application logs to the same centralized log aggregation system as your infrastructure logs. A SIEM or log management platform (Datadog Logs, CloudWatch Logs Insights, Elastic) that receives both application and infrastructure logs enables cross-system correlation — for example, correlating an unusual data export (application log) with the user's authentication pattern (Okta log) and their API call pattern (CloudTrail) to detect a data exfiltration scenario.
Evidence Auditors Collect
Auditors testing logging requirements will request: (1) your logging policy documenting what must be logged, retention periods, and tamper-resistance controls; (2) CloudTrail configuration screenshots showing it is enabled in all regions with cross-account S3 delivery and log file validation; (3) S3 bucket configuration screenshots for the logging bucket showing Object Lock, restricted access policy, and lifecycle rules; (4) examples of security-relevant log entries from the observation period demonstrating log coverage; and (5) your SIEM or log management dashboard showing log ingestion from all in-scope sources.
A common audit preparation activity is generating a log coverage map: list all in-scope systems and document what is logged for each, where those logs are sent, and how long they are retained. This map serves as evidence that you have thought systematically about log coverage rather than only logging the systems that are easiest to instrument.
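A coverage map can start as something as simple as a small table kept in code or a spreadsheet. The sketch below uses hypothetical systems and destinations standing in for your own inventory.

```python
# A minimal log coverage map: one row per in-scope system. Systems and
# destinations are hypothetical placeholders for your own inventory.
coverage_map = [
    {"system": "AWS (CloudTrail)", "events": "API calls",         "destination": "logging-account S3", "retention_months": 12},
    {"system": "Okta",             "events": "sign-ins, MFA",     "destination": "SIEM",               "retention_months": 12},
    {"system": "Product app",      "events": "auth, data access", "destination": "SIEM",               "retention_months": 12},
]

for row in coverage_map:
    print(f"{row['system']:<18} {row['events']:<18} {row['destination']:<22} {row['retention_months']} mo")
```

Keeping the map in a reviewable, versioned artifact makes it straightforward to hand to the auditor and to diff when new systems come into scope.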
Log review evidence — showing that logs are actually reviewed and not just stored — is also collected as part of CC7.2 and CC7.3 testing. The SIEM alert records, the security operations triage log, and the scheduled log review reports all demonstrate that logging is an active monitoring control, not a passive archive.
Frequently Asked Questions
Do we need to log every database query for SOC 2?
Can we use AWS CloudWatch Logs as our central log store?
What if our application does not currently have audit logging implemented?
How do we handle personally identifiable information (PII) in logs?
Does enabling CloudTrail in all regions matter if we only use us-east-1?
Automate your compliance today
AuditPath runs 86+ automated checks across AWS, GitHub, Okta, and 14 more integrations. SOC 2 and DPDP Act. Free plan available.
Start for free