SOC 2 for Node.js Apps: Logging, Auth, and Security Headers
SOC 2 controls for Node.js applications — covering structured logging with Winston/Pino, JWT auth best practices, security headers with Helmet, input validation, and dependency scanning.
- Structured logging with Winston or Pino and shipping to a centralized log store satisfies CC7.2 monitoring evidence.
- JWT authentication with short expiry (15 minutes), refresh tokens, and token rotation satisfies CC6.1 session management.
- Helmet.js adds a dozen-plus security headers in one line of code, satisfying CC6.7 application hardening controls.
- Input validation with Joi or Zod on all API endpoints prevents injection attacks and satisfies CC6.7.
- Dependency scanning with `npm audit` and Snyk in CI provides continuous CC7.1 vulnerability evidence.
- Rate limiting with express-rate-limit on auth endpoints prevents brute force attacks and satisfies CC6.1.
SOC 2 Controls in a Node.js Application
SOC 2 auditors do not typically review application source code directly, but they do review application-level controls: logging configuration, authentication implementation, and dependency management. Application-layer evidence — structured logs showing successful and failed authentication attempts, dependency scan results — is required evidence for CC7.2 and CC7.1 respectively.
This guide covers the specific Node.js/Express patterns and libraries that implement SOC 2 controls. The implementations are production-ready and can be added to an existing Express application incrementally, without requiring architectural changes.
Structured Logging (CC7.2)
Structured logs (JSON format) are far more useful for SOC 2 evidence than unstructured text logs. Use Pino (one of the fastest Node.js loggers) or Winston. Pino setup: `const logger = pino({ level: process.env.LOG_LEVEL || "info", formatters: { level: (label) => ({ level: label }) } })`. Log every API request with: `logger.info({ requestId, userId, method, path, statusCode, durationMs, ip, userAgent }, "request_completed")`.
Log security-relevant events explicitly: `logger.warn({ userId, ip, failReason }, "authentication_failed")`, `logger.info({ userId, ip }, "authentication_success")`, `logger.warn({ userId, resourceId, action }, "authorization_denied")`, `logger.info({ adminId, targetUserId, action }, "admin_action")`. These structured events are what SIEM systems and Datadog parse to generate security dashboards and alerts. Without explicit security event logging, your application generates noise but not actionable security evidence.
Ship logs to a centralized store. Use the Pino transport for Datadog: `pino-datadog` npm package. Or use the Winston Datadog transport. Configure log levels: DEBUG in development (never in production), INFO for normal operations, WARN for security events, ERROR for application errors. Set `LOG_LEVEL=info` in production via environment variable. Include a `correlationId` or `requestId` in every log line, generated at the middleware layer and passed through the request context using AsyncLocalStorage.
Authentication and Session Management (CC6.1)
JWT best practices for SOC 2: (1) Short-lived access tokens — set `expiresIn: "15m"` in `jwt.sign()`. 15-minute expiry limits the window of exposure for stolen tokens. (2) Refresh token rotation — issue a new refresh token with each access token refresh and invalidate the old one. Store refresh token hashes (not plaintext) in the database. (3) Audience and issuer validation — always set and validate `aud` and `iss` claims: `jwt.verify(token, secret, { audience: "api.yourapp.com", issuer: "auth.yourapp.com" })`.
(4) Algorithm specification — always specify the algorithm in verify: `jwt.verify(token, secret, { algorithms: ["HS256"] })`. Never use `algorithms: ["none"]`. (5) Token revocation — maintain a token blocklist in Redis for revoked tokens (logged out sessions, password reset, account compromise). Check the blocklist on each request before the route handler. (6) Secure cookie attributes — for web applications, set tokens as `httpOnly: true, secure: true, sameSite: "strict"` cookies. Never store JWTs in localStorage — they are accessible to JavaScript and vulnerable to XSS.
Implement account lockout: after 10 failed password attempts, lock the account for 15 minutes. Track failed attempts in Redis: ``await redis.incr(`failed_login:${userId}`)`` with a 15-minute TTL. Log the lockout event: `logger.warn({ userId, ip }, "account_locked_excessive_failures")`. Send an email notification to the user when their account is locked — this alerts legitimate users to unauthorized access attempts and is considered a positive security feature by auditors.
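The lockout logic can be sketched with an in-memory store standing in for Redis (same increment-with-TTL shape; the 10-attempt / 15-minute thresholds come from the text and should be tuned to your policy):

```javascript
const WINDOW_MS = 15 * 60 * 1000; // lockout window (15 minutes)
const MAX_FAILURES = 10;

// In-memory stand-in for Redis INCR + EXPIRE. Use Redis in production
// so failure counts are shared across application instances.
const failures = new Map(); // userId -> { count, resetAt }

function recordFailedLogin(userId, now = Date.now()) {
  const entry = failures.get(userId);
  if (!entry || now >= entry.resetAt) {
    // First failure, or previous window expired: start a fresh window.
    failures.set(userId, { count: 1, resetAt: now + WINDOW_MS });
    return { locked: false, count: 1 };
  }
  entry.count += 1;
  return { locked: entry.count >= MAX_FAILURES, count: entry.count };
}

function isLocked(userId, now = Date.now()) {
  const entry = failures.get(userId);
  return Boolean(entry && now < entry.resetAt && entry.count >= MAX_FAILURES);
}
```

When `recordFailedLogin` reports `locked: true`, that is the point to emit the `account_locked_excessive_failures` log event and trigger the user notification.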
Security Headers with Helmet (CC6.7)
Helmet.js sets security-related HTTP response headers automatically. Install: `npm install helmet`. In Express: `app.use(helmet())`. This one line sets a dozen-plus headers (the exact set varies by Helmet version), including: `Content-Security-Policy`, `X-Frame-Options: SAMEORIGIN`, `X-Content-Type-Options: nosniff`, `Referrer-Policy: no-referrer`, `Strict-Transport-Security`, `X-Permitted-Cross-Domain-Policies`, and others.
Configure Content-Security-Policy (CSP) explicitly rather than relying on Helmet defaults: `helmet.contentSecurityPolicy({ directives: { defaultSrc: ["'self'"], scriptSrc: ["'self'", "'nonce-{NONCE}'"], styleSrc: ["'self'", "fonts.googleapis.com"], imgSrc: ["'self'", "data:", "https:"], connectSrc: ["'self'", "api.yourapp.com"], frameAncestors: ["'none'"] } })`. A strict CSP prevents XSS attacks by restricting which scripts can execute on your pages.
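Put together, a hardened Helmet setup might look like the fragment below. It assumes an existing Express `app`; the domains are the placeholders from the directives above, and the per-request nonce middleware is one common way to fill the `nonce-` slot:

```javascript
const crypto = require("node:crypto");
const helmet = require("helmet");

// Generate a fresh nonce per response for the CSP script-src directive.
app.use((req, res, next) => {
  res.locals.cspNonce = crypto.randomBytes(16).toString("base64");
  next();
});

app.use(
  helmet({
    contentSecurityPolicy: {
      directives: {
        defaultSrc: ["'self'"],
        // Helmet accepts functions so the nonce can vary per response.
        scriptSrc: ["'self'", (req, res) => `'nonce-${res.locals.cspNonce}'`],
        styleSrc: ["'self'", "fonts.googleapis.com"],
        imgSrc: ["'self'", "data:", "https:"],
        connectSrc: ["'self'", "api.yourapp.com"],
        frameAncestors: ["'none'"],
      },
    },
  })
);
```

Server-rendered `<script>` tags then embed the same `res.locals.cspNonce` value, so only scripts you emitted can execute.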
Verify your security headers with SecurityHeaders.com (securityheaders.com) — paste your application URL and it grades your headers with a score of A–F. Target grade A or A+. Include a screenshot of this score in your audit evidence package as CC6.7 application hardening evidence. Re-run the check after each major application update to verify headers have not regressed.
Input Validation and Sanitization
Validate all API inputs with a schema library. With Zod (TypeScript-native): `const createUserSchema = z.object({ email: z.string().email(), name: z.string().min(1).max(100), role: z.enum(["admin", "member"]) })`. Validate in middleware: `const body = createUserSchema.safeParse(req.body); if (!body.success) { return res.status(400).json({ error: body.error.flatten() }); }`. This prevents invalid data from reaching business logic and protects against injection attacks.
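The validation step generalizes into a small middleware factory. This sketch assumes only Zod's `safeParse` interface (`{ success, data }` or `{ success, error }`), so any schema object exposing that shape works:

```javascript
// Middleware factory: validate req.body against any schema exposing
// Zod's safeParse interface, rejecting invalid requests with a 400.
function validateBody(schema) {
  return (req, res, next) => {
    const result = schema.safeParse(req.body);
    if (!result.success) {
      // Return a generic error; log the failure shape (never raw
      // values, which may contain PII) elsewhere for security review.
      return res.status(400).json({ error: "invalid_request" });
    }
    req.body = result.data; // use the parsed value (coerced, stripped)
    next();
  };
}
```

Usage with the schema above would be `app.post("/users", validateBody(createUserSchema), handler)`, so handlers only ever see validated data.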
For SQL-based databases, always use parameterized queries via your ORM or query builder (Prisma, TypeORM, Knex). Never concatenate user input into SQL strings. For MongoDB, reject or strip object keys that begin with `$` or contain `.`, since these can be interpreted as query operators; the `express-mongo-sanitize` middleware does exactly this. Avoid naive substring checks on the serialized body, which would also reject legitimate values that happen to contain `$`. Log validation failures with the input shape (not values) so security teams can detect automated scanning attempts.
Rate Limiting (CC6.7)
Install express-rate-limit: `npm install express-rate-limit`. For authentication endpoints: `const authLimiter = rateLimit({ windowMs: 15 * 60 * 1000, max: 10, standardHeaders: true, legacyHeaders: false, handler: (req, res) => { logger.warn({ ip: req.ip }, "rate_limit_exceeded_auth"); res.status(429).json({ error: "Too many login attempts" }); } })`. Apply to login, password reset, and MFA verification routes.
For general API rate limiting: `const apiLimiter = rateLimit({ windowMs: 60 * 1000, max: 100 })`. Use Redis-backed rate limiting for multi-instance deployments via the `rate-limit-redis` package (recent versions are configured with a `sendCommand` function wrapping your Redis client rather than the client instance itself, so check the README for the version you install). Memory-based rate limiting does not work when you have multiple Node.js instances behind a load balancer, because each instance keeps its own counters.
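For intuition, here is the fixed-window algorithm that express-rate-limit implements, reduced to a sketch (in-memory and keyed by a single string such as the client IP, so single-instance only, which is exactly the limitation described above):

```javascript
// Minimal fixed-window rate limiter: allow at most `max` hits per key
// within each `windowMs` window. A sketch of what express-rate-limit
// does internally, not a production implementation.
function createRateLimiter({ windowMs, max }) {
  const hits = new Map(); // key -> { count, resetAt }
  return function check(key, now = Date.now()) {
    const entry = hits.get(key);
    if (!entry || now >= entry.resetAt) {
      // New key or expired window: start counting from 1.
      hits.set(key, { count: 1, resetAt: now + windowMs });
      return { allowed: true, remaining: max - 1 };
    }
    entry.count += 1;
    return { allowed: entry.count <= max, remaining: Math.max(0, max - entry.count) };
  };
}
```

A Redis-backed store replaces the `Map` with shared `INCR`/`EXPIRE` calls, which is why it works across multiple instances.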
Log rate limit events and include them in your security dashboards. A spike in rate limit events on the auth endpoint is an early indicator of a credential stuffing attack. Configure a Datadog monitor: when rate limit events on `/auth/login` exceed 50 in 5 minutes, trigger a PagerDuty alert. This automated threat detection chain satisfies CC7.2 and CC7.3.
Dependency Security (CC7.1)
Run `npm audit` in CI to detect known vulnerabilities in dependencies. In GitHub Actions: add `npm audit --audit-level=critical` as a required step. This fails the build if any critical vulnerabilities exist in the dependency tree. Separately, Dependabot automatically opens PRs to update vulnerable dependencies; enable it in your GitHub repository under Settings → Code security and analysis.
Supplement npm audit with Snyk for deeper analysis: `snyk test --severity-threshold=high`. Snyk provides transitive dependency visibility and fix recommendations beyond what npm audit shows. Add `snyk monitor` to your deployment pipeline to continuously monitor the production dependency snapshot for newly published CVEs. Export the Snyk project report monthly as CC7.1 evidence.
Lock down dependency resolution: use `npm ci` instead of `npm install` in your Docker build and CI pipeline. `npm ci` installs the exact versions and integrity hashes recorded in `package-lock.json` rather than re-resolving semver ranges, which blunts supply chain attacks via transitive dependency substitution. Commit and version-control `package-lock.json`.
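The dependency checks above combine into a short CI fragment (e.g. a GitHub Actions `run` block). This is a sketch: the Snyk steps assume the Snyk CLI is installed and authenticated in the runner:

```shell
# Dependency security gate for CI.
npm ci                                  # exact lockfile install, verifies integrity hashes
npm audit --audit-level=critical        # fail the build on critical advisories
snyk test --severity-threshold=high     # deeper transitive-dependency analysis (requires auth)
snyk monitor                            # snapshot prod deps for newly published CVEs
```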
Secrets Management in Node.js
Never hardcode secrets in source code or in `.env` files committed to Git. Use `.env` for local development only, with a `.gitignore` entry. For production, inject secrets at runtime from a managed store such as AWS Secrets Manager or SSM Parameter Store; on Lambda, use the AWS Parameters and Secrets Lambda Extension. In ECS/EKS, inject secrets as environment variables from AWS Secrets Manager via the task definition `secrets` field.
For reading secrets safely in Node.js with AWS SDK v3: `const { SecretString } = await client.send(new GetSecretValueCommand({ SecretId: "prod/app/db-password" })); const dbPassword = JSON.parse(SecretString).password`. Cache secrets in process memory (do not re-fetch on every request). Implement a secrets rotation handler: when AWS Secrets Manager rotates a credential, your application needs to refresh its cached value. Use the AWS Secrets Manager Lambda rotation function templates for common databases.
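The cache-and-refresh pattern can be sketched generically. The `fetchSecret` function is injected, so in production it would wrap the AWS SDK v3 `GetSecretValueCommand` call shown above; the TTL (an assumed value here) lets rotated credentials propagate without a process restart:

```javascript
// Cache secrets in process memory with a TTL so rotated values
// eventually propagate. fetchSecret is injected (e.g. an AWS SDK call).
function createSecretCache(fetchSecret, ttlMs = 5 * 60 * 1000) {
  const cache = new Map(); // secretId -> { value, expiresAt }
  return async function getSecret(secretId, now = Date.now()) {
    const entry = cache.get(secretId);
    if (entry && now < entry.expiresAt) return entry.value; // cache hit
    const value = await fetchSecret(secretId); // fetch + refresh cache
    cache.set(secretId, { value, expiresAt: now + ttlMs });
    return value;
  };
}
```

A shorter TTL narrows the window in which a revoked credential is still served from cache, at the cost of more calls to the secrets store.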
Scan for secrets in your Node.js codebase with `npx secretlint .` or `detect-secrets scan` as a pre-commit hook. Add a GitHub Actions step that runs `trufflehog filesystem .` on every PR. These scans prevent developers from accidentally committing API keys, database passwords, or JWT secrets to the repository, a finding that is embarrassing at best and a security incident at worst.
Frequently Asked Questions
Do SOC 2 auditors review our Node.js code?
Should we use JSON Web Tokens or session cookies for SOC 2?
How do we handle SOC 2 logging without logging PII?
What is the easiest way to add audit logging to an existing Express app?
Does using TypeScript help with SOC 2 compliance?
Automate your compliance today
AuditPath runs 86+ automated checks across AWS, GitHub, Okta, and 14 more integrations. SOC 2 and DPDP Act. Free plan available.
Start for free