
SOC 2 Audit Log Requirements: What Needs to Be Logged

SOC 2 requires audit logs across your infrastructure and applications. Learn which events must be logged, how long to retain logs, and what evidence auditors check.

Key Takeaways
  • CC7.1 and CC6.1 together require that security-relevant events are logged with sufficient detail to support investigation and accountability.
  • Minimum log coverage: authentication events (success and failure), privileged access, data access, configuration changes, and API calls to cloud infrastructure.
  • Logs must be tamper-resistant — stored separately from the systems generating them, with access restricted to prevent modification or deletion.
  • Retention for security-relevant logs: minimum 12 months, with 90 days in hot storage for investigation access.
  • Logs that nobody reads are not a control — alert routing and regular log review are required to demonstrate the monitoring control is operating.

Criteria and Rationale

Audit logging sits at the intersection of CC6.1 (logical access accountability) and CC7.1 (detection and monitoring). CC6.1 requires that user activity on systems be logged and that logs be protected from unauthorized modification. CC7.1 requires that the entity uses detection tools that identify security events — audit logs are the primary data source for those detection tools. Without comprehensive logs, your monitoring tools have nothing to analyze.

The purpose of audit logs in a SOC 2 context is threefold: accountability (attributing actions to specific users so that unauthorized or malicious actions can be identified), detection (providing data that anomaly monitoring and SIEM tools analyze to detect security events), and forensics (providing evidence for post-incident investigation to determine root cause, scope of impact, and affected data).

The standard does not enumerate a specific list of events that must be logged. Instead, the criteria require that events relevant to security, confidentiality, and availability be captured. Auditors use professional judgment — if a significant security event could have occurred and would not appear in your logs, that is a gap regardless of what your log list contains.

What Events Must Be Logged

Authentication events: all login attempts (success and failure) to all in-scope systems. Failed login attempts are often the first indicator of a brute force or credential stuffing attack. Log the user identifier, source IP, timestamp, authentication method, and success/failure status. This applies to your application, your cloud console, your SSO system, your VPN, your code repository, and your internal tooling.
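The fields above can be captured in a single structured entry. A minimal sketch in Python — the field names are illustrative, not a prescribed schema; align them with whatever logging convention your stack already uses:

```python
import json
from datetime import datetime, timezone

def auth_event(user_id, source_ip, method, success):
    """Build a structured authentication log entry.

    Illustrative field names -- the point is capturing user identifier,
    source IP, timestamp, authentication method, and outcome for every
    attempt, successful or not.
    """
    return json.dumps({
        "event_type": "authentication",
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "source_ip": source_ip,
        "auth_method": method,  # e.g. "password", "sso", "mfa"
        "outcome": "success" if success else "failure",
    })

# A failed attempt -- the kind of entry brute-force detection
# rules aggregate on (same source IP, many failures):
print(auth_event("jdoe", "203.0.113.7", "password", False))
```

Emitting both successes and failures in the same shape makes the failure-rate queries that detect credential stuffing trivial to write.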

Privileged access events: any use of elevated permissions — console access to production environments, sudo commands on production servers, IAM role assumption, database admin access. These events carry higher risk and require more rigorous logging. Include the action performed, the resource accessed, and the outcome. AWS CloudTrail captures IAM role assumptions and console sign-ins automatically.

Data access and modification events: reads and writes of sensitive data fields, bulk data exports, and deletion events. Application-level logging is required here — cloud infrastructure logs do not capture which rows were queried from your database or which files were downloaded from your application. Define in your logging policy what constitutes "sensitive data" (customer PII, financial records, health information) and ensure your application generates a log entry for access to those fields.

Configuration changes: changes to security settings, user permissions, network rules, and infrastructure configurations. AWS Config and CloudTrail capture infrastructure configuration changes. GitHub commit history captures code and configuration changes. Your CI/CD pipeline logs capture deployment events. Together these provide a comprehensive audit trail of changes to your production environment.

Log Storage and Tamper Resistance

Logs must be stored in a way that prevents unauthorized modification or deletion. If an attacker who has compromised your production environment can also delete the CloudTrail logs documenting their activity, the logs provide no forensic value. Tamper resistance is achieved through: storing logs in a separate AWS account with restricted access (a dedicated logging account), enabling S3 Object Lock with compliance mode (WORM — write once, read many) for CloudTrail logs, and restricting API access to the logging account to a minimum set of principals.

The logging account should have no trust relationship with the production account that would allow production IAM principals to modify logs. CloudTrail can be configured to deliver logs to a cross-account S3 bucket — this is the AWS recommended architecture for audit-grade logging. Document the architecture and show the auditor that the logging S3 bucket is in a separate account with no delete permission for production account roles.
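One way to express the "no delete permission for production" constraint is an explicit deny in the logging bucket's policy. A sketch of such a policy, built in Python — the account ID and bucket name are hypothetical placeholders:

```python
import json

# Hypothetical identifiers -- substitute your own.
LOG_BUCKET = "example-audit-logs"
PROD_ACCOUNT = "111111111111"

# Explicitly deny object deletion from the production account.
# CloudTrail only needs s3:PutObject to deliver log files, so
# nothing in production ever requires delete access here.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyProductionDeletes",
            "Effect": "Deny",
            "Principal": {"AWS": f"arn:aws:iam::{PROD_ACCOUNT}:root"},
            "Action": ["s3:DeleteObject", "s3:DeleteObjectVersion"],
            "Resource": f"arn:aws:s3:::{LOG_BUCKET}/*",
        }
    ],
}
print(json.dumps(bucket_policy, indent=2))
```

An explicit deny wins over any allow granted elsewhere, which is exactly the property you want to show the auditor.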

Log integrity can also be validated using CloudTrail log file validation — CloudTrail signs each log file and you can verify that logs were not modified after delivery. Enable log file validation in your CloudTrail configuration and document this as part of your tamper-resistance controls.
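In practice you run `aws cloudtrail validate-logs` and let AWS do the work — CloudTrail delivers signed digest files containing a SHA-256 hash per log file. The underlying check is conceptually just a hash comparison, sketched here:

```python
import hashlib

def log_file_unmodified(log_bytes: bytes, recorded_sha256: str) -> bool:
    """Conceptual sketch of log file integrity validation.

    CloudTrail's digest files record a SHA-256 hash for each delivered
    log file (and an RSA signature over the digest itself); validation
    recomputes the hash and compares. This function shows only the
    hash-comparison step, not the signature chain.
    """
    return hashlib.sha256(log_bytes).hexdigest() == recorded_sha256

log = b'{"Records": []}'
digest = hashlib.sha256(log).hexdigest()
assert log_file_unmodified(log, digest)             # untouched file passes
assert not log_file_unmodified(log + b" ", digest)  # any change fails
```
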

Retention Requirements

SOC 2 does not mandate a specific log retention period, but industry standard and auditor expectation is 12 months for security-relevant logs, with 90 days in hot storage and the remainder in cold storage. Hot storage allows rapid search and investigation; cold storage (S3 Glacier, Azure Archive) provides cost-effective long-term retention.

Define retention periods in your logging policy by log category: authentication logs (12 months), CloudTrail API logs (12 months), application access logs (12 months), application error logs (90 days), performance metrics (30–90 days). Configure the technical retention settings to match the policy — S3 lifecycle policies for moving to Glacier and expiry settings for deletion.
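The hot/cold split described above maps directly onto an S3 lifecycle rule. A sketch of that configuration as the dict you would pass to boto3's `put_bucket_lifecycle_configuration` (the prefix and rule ID are illustrative):

```python
# 90 days hot in S3 Standard, Glacier until day 365, then delete --
# matching the retention policy for security-relevant logs.
lifecycle_config = {
    "Rules": [
        {
            "ID": "audit-log-retention",
            "Status": "Enabled",
            "Filter": {"Prefix": "AWSLogs/"},  # CloudTrail's default prefix
            "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
            "Expiration": {"Days": 365},
        }
    ]
}
```

Keeping the numbers in one reviewed configuration (or Terraform resource) is itself evidence that the technical settings match the written policy.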

If you operate under additional regulatory frameworks (HIPAA, PCI-DSS, SOX), longer retention may be required. HIPAA requires 6-year retention for audit logs related to PHI access. PCI-DSS requires 12 months with 3 months immediately available. SOX requires 7 years for financial system logs. Design your retention architecture to satisfy the most stringent applicable requirement.
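"Most stringent applicable requirement" reduces to a max over the frameworks you operate under. A toy sketch — the figures mirror the paragraph above and should be verified against each framework's current text before relying on them:

```python
# Illustrative retention floors in months, per the frameworks
# discussed above; verify against each framework's current text.
RETENTION_MONTHS = {
    "SOC 2": 12,    # auditor expectation, not a mandated period
    "PCI-DSS": 12,  # with 3 months immediately available
    "HIPAA": 72,    # 6 years for PHI-related audit logs
    "SOX": 84,      # 7 years for financial system logs
}

def required_retention(frameworks):
    """Return the most stringent retention across applicable frameworks."""
    return max(RETENTION_MONTHS[f] for f in frameworks)

print(required_retention(["SOC 2", "PCI-DSS"]))  # → 12
print(required_retention(["SOC 2", "HIPAA"]))    # → 72
```
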

Cloud Provider Audit Logs

Cloud providers include native audit logging services that form the foundation of SOC 2 logging compliance. AWS CloudTrail logs all API calls to AWS services across all regions — who called which API, from which IP, with what parameters, and what the result was. Enable CloudTrail in all regions (including regions where you have no active resources, to detect unexpected activity) and configure S3 delivery with log file validation.

Supplementing CloudTrail, AWS Config records configuration state changes for your AWS resources over time — it answers "what did this S3 bucket's configuration look like on this date?" where CloudTrail answers "who changed this S3 bucket on this date?" Together they provide a complete audit trail of your infrastructure state and the actions that produced it.

For GCP, Cloud Audit Logs (Admin Activity, Data Access, System Events) provide equivalent coverage. For Azure, Azure Monitor Activity Log and Azure Active Directory Sign-In logs provide the equivalent. Configure each cloud provider's audit logging before your observation period begins — enabling CloudTrail after the observation period starts means you have a gap in your audit trail for the pre-enablement period.

Application-Level Logging

Cloud infrastructure logs capture what happens at the API and infrastructure level, but they do not capture application-level events — user sign-ins to your product, data queries, file downloads, admin actions within your application. These events require instrumentation within your application code.

Implement application-level audit logging for: user authentication (sign-in, sign-out, failed sign-in, MFA events), user account management (creation, modification, deletion, role changes), sensitive data access (viewing customer records, exporting reports), and administrative actions (configuration changes, billing modifications, permission grants). Structure your logs in a machine-readable format (JSON) with consistent fields: timestamp, user ID, action, resource ID, outcome, and source IP.
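A minimal emitter for that consistent-field shape, using only the Python standard library — winston, pino, or structlog give you the same result with less wiring; the field names below are illustrative:

```python
import json
import logging
import sys
from datetime import datetime, timezone

# Bare-bones JSON audit logger; a structured-logging library
# (structlog, winston, pino) replaces this boilerplate in practice.
logging.basicConfig(stream=sys.stdout, format="%(message)s", level=logging.INFO)
audit = logging.getLogger("audit")

def audit_event(user_id, action, resource_id, outcome, source_ip):
    """Emit one audit entry with the consistent fields named above."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "action": action,        # e.g. "record.view", "report.export"
        "resource_id": resource_id,
        "outcome": outcome,      # "success" | "denied" | "error"
        "source_ip": source_ip,
    }
    audit.info(json.dumps(entry))
    return entry

audit_event("u-12345", "report.export", "rpt-88", "success", "198.51.100.4")
```

The same function signature covers all four categories — authentication, account management, data access, and admin actions — which is what makes downstream querying uniform.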

Route application logs to the same centralized log aggregation system as your infrastructure logs. A SIEM or log management platform (Datadog Logs, CloudWatch Logs Insights, Elastic) that receives both application and infrastructure logs enables cross-system correlation — for example, correlating an unusual data export (application log) with the user's authentication pattern (Okta log) and their API call pattern (CloudTrail) to detect a data exfiltration scenario.
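The exfiltration-detection correlation described above can be sketched as a join on user ID across log sources. A toy version — the event shapes are illustrative, and a real SIEM rule would consider time windows and more signals:

```python
def flag_exfiltration_candidates(app_events, auth_events):
    """Flag users whose bulk export (application log) came from a
    source IP never seen in their authentication history (IdP log).
    A deliberately simplified cross-source correlation sketch.
    """
    known_ips = {}
    for e in auth_events:
        known_ips.setdefault(e["user_id"], set()).add(e["source_ip"])
    return [
        e["user_id"]
        for e in app_events
        if e["action"] == "bulk_export"
        and e["source_ip"] not in known_ips.get(e["user_id"], set())
    ]

auth = [{"user_id": "u1", "source_ip": "10.0.0.5"}]
app = [{"user_id": "u1", "action": "bulk_export", "source_ip": "203.0.113.9"}]
print(flag_exfiltration_candidates(app, auth))  # → ['u1']
```

The point is that neither log source alone raises the flag — only the combination does, which is why routing everything to one aggregation system matters.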

Evidence Auditors Collect

Auditors testing logging requirements will request: (1) your logging policy documenting what must be logged, retention periods, and tamper-resistance controls; (2) CloudTrail configuration screenshots showing it is enabled in all regions with cross-account S3 delivery and log file validation; (3) S3 bucket configuration screenshots for the logging bucket showing Object Lock, restricted access policy, and lifecycle rules; (4) examples of security-relevant log entries from the observation period demonstrating log coverage; and (5) your SIEM or log management dashboard showing log ingestion from all in-scope sources.

A common audit preparation activity is generating a log coverage map: list all in-scope systems and document what is being logged for each, where those logs are sent, and how long they are retained. This map serves as evidence that you have thought systematically about log coverage rather than only logging the systems that are easiest to instrument.

Log review evidence — showing that logs are actually reviewed and not just stored — is also collected as part of CC7.2 and CC7.3 testing. The SIEM alert records, the security operations triage log, and the scheduled log review reports all demonstrate that logging is an active monitoring control, not a passive archive.

Frequently Asked Questions

Do we need to log every database query for SOC 2?
No. Logging every database query is neither required nor practical at scale — it would generate enormous log volumes and make meaningful analysis difficult. SOC 2 requires logging of security-relevant events: privileged database access (admin logins, schema changes, bulk exports), and application-level access to sensitive data fields (handled at the application layer, not the database query layer). Routine SELECT queries by your application service accounts do not need to be individually logged.
Can we use AWS CloudWatch Logs as our central log store?
Yes. CloudWatch Logs is acceptable as a central log aggregation destination. Configure log groups with appropriate retention settings (90 days minimum for hot storage, export to S3 for longer retention). CloudWatch Logs Insights provides query capability for investigation. For more advanced correlation and alerting, CloudWatch can feed events to a SIEM or to AWS Security Hub for centralized security event management.
What if our application does not currently have audit logging implemented?
Implement application audit logging before your observation period begins. Infrastructure logs (CloudTrail, Okta) provide coverage for a limited set of events, but gaps in application-level logging (user data access, admin actions) are a common first-time SOC 2 finding. Prioritize logging for your most sensitive data operations. A structured logging library (winston, pino, structlog) and a log aggregation pipeline can be implemented in a focused sprint before the observation period starts.
How do we handle personally identifiable information (PII) in logs?
Log the fact that an event occurred and the user who performed it, but avoid logging the content of sensitive data in cleartext. For example: log "user 12345 exported 500 customer records at 14:32" rather than logging the customer records themselves. For authentication logs, log user IDs and timestamps but not passwords or MFA tokens. If PII must appear in logs for debugging purposes, implement log masking or tokenization, and restrict access to unmasked logs to authorized security personnel.
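Masking can be applied as a final step before an entry leaves the application. A minimal sketch — the set of sensitive field names is illustrative and should match your own schema:

```python
import json

# Illustrative field names -- adapt the set to your own schema.
SENSITIVE_FIELDS = {"email", "ssn", "password", "mfa_token"}

def redact(entry: dict) -> dict:
    """Replace sensitive field values with a mask before the entry
    leaves the application, so cleartext PII never reaches log storage."""
    return {
        k: ("[REDACTED]" if k in SENSITIVE_FIELDS else v)
        for k, v in entry.items()
    }

event = {"user_id": "12345", "action": "record.view",
         "email": "jane@example.com"}
print(json.dumps(redact(event)))
```

Redacting at the application boundary is simpler to audit than masking downstream, because no unmasked copy of the PII ever exists in the log pipeline.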
Does enabling CloudTrail in all regions matter if we only use us-east-1?
Yes. Enabling CloudTrail in all regions is important because attackers who gain access to your AWS account will sometimes use unexpected regions to create resources (launch cryptomining instances, exfiltrate data) precisely because those regions are not monitored. A multi-region trail configured to deliver to a single S3 bucket adds minimal cost but ensures you have visibility into activity anywhere in your account. AWS GuardDuty also works across all enabled regions, so multi-region CloudTrail pairs naturally with multi-region GuardDuty.

Automate your compliance today

AuditPath runs 86+ automated checks across AWS, GitHub, Okta, and 14 more integrations. SOC 2 and DPDP Act. Free plan available.

Start for free