I've seen plenty of test suites that look green in CI but explode in production. Unit tests pass because the mocked database returns exactly what you told it to return. Then the real PostgreSQL instance throws a constraint violation you never anticipated.
Integration tests sit in that uncomfortable middle ground where theory meets messy reality. They're slower than unit tests, more brittle than you'd like, and if you're not careful, your CI pipeline turns into a 45-minute coffee break. But they catch the bugs that matter: the ones where two working components fail to work together.
I've built and maintained integration testing strategies for Node.js APIs, event-driven systems, and microservices. The hard part isn't writing the tests themselves. It's managing test data without drowning in fixtures, deciding what to mock without defeating the purpose, and keeping tests fast enough that people actually run them.
This is what I've learned.
What Are Integration Tests? (And What They're Not)
Integration tests verify that multiple components work together correctly. Where unit tests isolate a single function or class, integration tests exercise the boundaries between components: your API layer talking to the database, your service calling another service, your event producer publishing to Kafka.
The distinction matters because the testing strategy changes completely. Unit tests mock everything external. Integration tests use real dependencies where it makes sense.
Integration vs unit tests: Unit tests verify logic in isolation. Integration tests verify interactions between components. If you're testing a function that calculates shipping costs, that's a unit test. If you're testing an API endpoint that saves an order to PostgreSQL and publishes an event to Kafka, that's integration.
Integration vs E2E tests: End-to-end tests exercise the entire system from the user's perspective, usually through a browser or API client. Integration tests focus on subsystem boundaries. The line blurs, but a good rule: if you're spinning up the entire stack and clicking through a UI, it's E2E. If you're testing a REST API with a real database but mocked external services, it's integration.
I draw the boundary at network hops and user simulation. Integration tests can make network calls, but they test individual services or service pairs, not the whole chain from frontend to database.
Why Integration Testing Matters
Unit tests catch logic errors. Integration tests catch interface mismatches, serialization bugs, database constraint violations, and all the things that happen when two working components meet for the first time.
Here's a real scenario from a payment service I worked on: the unit tests passed. We mocked the database, and the order creation logic worked perfectly. Then in staging, the API threw 500 errors because the created_at column had a NOT NULL constraint and nothing was setting it. We'd assumed the ORM generated the timestamp on insert; it didn't, and our mocked database never surfaced the gap.
Integration tests would have caught it immediately.
The cost of bugs follows a predictable curve: fixing a bug caught by a unit test is cheap (you're already in the code). Fixing one caught by integration tests is more expensive (you need to reproduce the interaction, possibly spin up dependencies). Fixing one in E2E is expensive (you need the whole stack). Fixing one in production is a disaster (customer impact, incident response, post-mortem).
Integration tests live in the sweet spot: they catch real bugs before production, and they're faster and more focused than E2E.
The Integration Test Spectrum
Not all integration tests are created equal. I think of them as a spectrum from narrow (close to unit tests) to broad (close to E2E).
Level 1 – In-Process Integration: Multiple classes or modules working together, but external I/O is mocked. You're testing that your service layer calls your repository layer correctly, but the database is still a mock. This is barely integration testing, but it catches interface mismatches.
Level 2 – Out-of-Process Integration: Real database, message queue, or cache, but running locally or in a container. This is where I spend most of my integration testing effort. You're testing against PostgreSQL, Redis, or Kafka, but you're not calling external APIs or other services.
Level 3 – Service-to-Service: Multiple services running, making real network calls between them. Useful for microservices, but expensive to set up and maintain.
Level 4 – Contract Tests: Consumer-driven contracts using tools like Pact. Instead of spinning up both services, you verify that the consumer's expectations match the provider's actual behavior. This isn't strictly integration testing, but it solves the same problem.
I focus on Level 2 for most backend work. It gives you confidence in the database interactions, the schema, the constraints, and the query logic, without the overhead of running multiple services.
Integration Testing Strategies by Architecture
The testing strategy changes based on your architecture. What works for a monolith doesn't work for microservices.
Monolithic Applications
Monoliths are the easiest to test because everything runs in one process. Spin up a real database, seed some data, make API calls, verify the results.
I use Testcontainers to run PostgreSQL in Docker during tests. No need to install Postgres on every developer's machine or worry about conflicting versions.
Here's how I set up integration tests for a Node.js monolith:
// test/setup.js
const { GenericContainer } = require('testcontainers');
const { Pool } = require('pg');
let postgresContainer;
let dbPool;
// Start PostgreSQL container before tests
beforeAll(async () => {
postgresContainer = await new GenericContainer('postgres:16-alpine')
.withExposedPorts(5432)
.withEnvironment({
POSTGRES_USER: 'testuser',
POSTGRES_PASSWORD: 'testpass',
POSTGRES_DB: 'testdb',
})
.start();
const dbConfig = {
host: postgresContainer.getHost(),
port: postgresContainer.getMappedPort(5432),
user: 'testuser',
password: 'testpass',
database: 'testdb',
};
dbPool = new Pool(dbConfig);
// Run migrations
await runMigrations(dbPool);
}, 60000); // Container startup can take time
// Clean up after tests
afterAll(async () => {
await dbPool.end();
await postgresContainer.stop();
});
// Reset database between tests
afterEach(async () => {
await dbPool.query('TRUNCATE users, orders CASCADE');
});
module.exports = { getDb: () => dbPool };
This pattern gives you a real PostgreSQL instance, isolated per test run. The TRUNCATE in afterEach ensures tests don't pollute each other.
Microservices Architecture
Microservices are harder. You have service boundaries, network calls, and the question of how much of the system to spin up.
My rule: test one service at a time with a real database, and stub downstream services. Don't try to run the entire microservices mesh in your test suite.
Here's a test for an order service that calls a payment service:
// test/order-service.test.js
const request = require('supertest');
const nock = require('nock');
const app = require('../src/app');
const { getDb } = require('./setup');
describe('POST /orders', () => {
it('creates an order and charges payment', async () => {
// Stub the payment service
nock('http://payment-service')
.post('/charges')
.reply(200, { chargeId: 'ch_123', status: 'succeeded' });
const response = await request(app)
.post('/orders')
.send({
userId: 1,
items: [{ productId: 10, quantity: 2 }],
paymentMethod: 'card_abc',
})
.expect(201);
expect(response.body.orderId).toBeDefined();
expect(response.body.status).toBe('confirmed');
// Verify order was saved to database
const db = getDb();
const result = await db.query('SELECT * FROM orders WHERE id = $1', [
response.body.orderId,
]);
expect(result.rows[0].user_id).toBe(1);
expect(result.rows[0].total_amount).toBe(4000); // 2 items * $20
});
});
The payment service is stubbed with nock. The database is real. This catches schema issues, constraint violations, and serialization bugs without the complexity of running two services.
Event-Driven Systems
Event-driven architectures introduce asynchrony. You publish an event, and a consumer processes it sometime later. Integration tests need to account for that timing.
For Kafka-based systems, I use an in-memory broker for tests when possible, or Testcontainers for the real thing.
// test/event-processor.test.js
const { Kafka } = require('kafkajs');
const { KafkaContainer } = require('@testcontainers/kafka');
let kafkaContainer;
let kafka;
beforeAll(async () => {
  // KafkaContainer wires up the broker internals (listeners, ZooKeeper/KRaft)
  // so we don't have to hand-configure them on a GenericContainer
  kafkaContainer = await new KafkaContainer('confluentinc/cp-kafka:7.5.0').start();
  kafka = new Kafka({
    clientId: 'test-client',
    brokers: [`${kafkaContainer.getHost()}:${kafkaContainer.getMappedPort(9093)}`],
  });
}, 120000); // Kafka startup can take longer than the default timeout
afterAll(async () => {
await kafkaContainer.stop();
});
it('processes order.created events', async () => {
const producer = kafka.producer();
await producer.connect();
await producer.send({
topic: 'order.created',
messages: [{ value: JSON.stringify({ orderId: 123, userId: 1 }) }],
});
await producer.disconnect();
// Wait for event processing; a query result object is always truthy,
// so the condition has to check the row count
await waitFor(async () => {
  const result = await getDb().query('SELECT * FROM processed_orders WHERE order_id = 123');
  return result.rows.length > 0;
});
const result = await getDb().query('SELECT * FROM processed_orders WHERE order_id = 123');
expect(result.rows.length).toBe(1);
});
The waitFor helper, defined in the flakiness section later in this post, polls until the condition is met or times out. Asynchronous tests need explicit waits to avoid flakiness.
Test Data Management: The Hardest Part
The hardest part of integration testing is managing test data. You need realistic data to test against, but you also need isolation between tests and predictable state.
I've tried four strategies:
Strategy 1: Test fixtures and factories. Define reusable data factories that generate test objects. Good for creating complex object graphs without repetition.
// test/factories.js
const { faker } = require('@faker-js/faker');
function createUser(overrides = {}) {
return {
email: faker.internet.email(),
username: faker.internet.userName(),
createdAt: new Date(),
...overrides,
};
}
function createOrder(overrides = {}) {
return {
userId: overrides.userId || 1,
totalAmount: faker.number.int({ min: 1000, max: 50000 }),
status: 'pending',
createdAt: new Date(),
...overrides,
};
}
module.exports = { createUser, createOrder };
Factories let you generate data on-the-fly with realistic variation, while overriding specific fields for test cases.
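Factories pair well with a small persistence helper, so tests can reference database-generated ids instead of hardcoding them. A sketch (insertUser is hypothetical, and it inlines a stripped-down createUser so the example is self-contained):

```javascript
// Stand-in for the factory above, without the faker dependency
function createUser(overrides = {}) {
  return {
    email: `user-${Math.random().toString(36).slice(2)}@example.com`,
    username: 'testuser',
    createdAt: new Date(),
    ...overrides,
  };
}

// Build the object, insert it, and hand back the persisted row,
// including the database-generated id
async function insertUser(db, overrides = {}) {
  const user = createUser(overrides);
  const result = await db.query(
    'INSERT INTO users (email, username, created_at) VALUES ($1, $2, $3) RETURNING *',
    [user.email, user.username, user.createdAt]
  );
  return result.rows[0];
}
```

A test then reads naturally: `const user = await insertUser(getDb(), { email: 'dupe@example.com' });` and the returned row carries the real id for follow-up assertions.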
Strategy 2: Database seeding scripts. Load a known dataset before each test. Simple but can lead to brittle tests if the seed data changes.
Strategy 3: Snapshot/restore database state. Take a database snapshot, run tests, restore the snapshot. Fast for read-heavy tests, but doesn't work well for tests that write.
Strategy 4: Isolated test databases per suite. Each test suite gets its own database. Maximum isolation, but slower and more resource-intensive.
I use factories for most cases. They give me flexibility without coupling tests to a specific dataset.
Real Dependencies vs Mocks: Decision Framework
The big question in integration testing: what do you mock, and what do you run for real?
My framework:
Use real databases almost always. Databases are the core of most backend systems. Mocking them defeats the purpose. You want to catch constraint violations, migration issues, and query bugs. Testcontainers makes this easy.
Stub external APIs. Third-party APIs have rate limits, cost money, or depend on external state you don't control. Stub them with tools like nock (Node.js) or responses (Python).
Use in-memory alternatives when available. Redis can be replaced with an in-memory cache for tests. Message queues can use in-memory brokers. But only if the in-memory version behaves the same way.
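For Redis, the in-memory stand-in can be as small as a Map behind the handful of commands the application actually calls. A sketch (the ttlMs option is this sketch's own shape, not the node-redis API; adapt it to whatever client wrapper your code uses):

```javascript
// A Map-backed stand-in for the few Redis commands the app uses
function createMemoryCache() {
  const store = new Map();
  return {
    async set(key, value, options = {}) {
      const expiresAt = options.ttlMs ? Date.now() + options.ttlMs : null;
      store.set(key, { value, expiresAt });
      return 'OK'; // Mirrors Redis's reply for SET
    },
    async get(key) {
      const entry = store.get(key);
      if (!entry) return null;
      if (entry.expiresAt && Date.now() > entry.expiresAt) {
        store.delete(key); // Lazy expiry, like Redis's passive expiration
        return null;
      }
      return entry.value;
    },
    async del(key) {
      return store.delete(key) ? 1 : 0; // Redis DEL returns the count removed
    },
  };
}
```

The moment a test needs Redis-specific behavior (precise expiry semantics, Lua scripts, pub/sub), switch that test to a real container.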
Testcontainers for everything else. If you need the real thing and it runs in Docker, use Testcontainers. I've used it for PostgreSQL, MySQL, Redis, Kafka, and Elasticsearch.
Here's the trade-off matrix I use:
| Dependency Type | Real or Mock? | Why |
|---|---|---|
| Database (PostgreSQL, MySQL) | Real (Testcontainers) | Catch schema/constraint/migration issues |
| Cache (Redis) | Real or in-memory | In-memory is fine if you're not testing Redis-specific features |
| Message queue (Kafka, RabbitMQ) | Real (Testcontainers) | Event ordering and serialization matter |
| External API (Stripe, Twilio) | Mock (nock, WireMock) | Rate limits, cost, reliability |
| Internal microservice | Mock (nock) or contract test | Spinning up multiple services is expensive |
The performance impact is real. A test suite with real PostgreSQL and Kafka containers takes 2-3x longer than one with mocks. But it catches 10x more bugs.
On a recent project, switching from mocked Postgres to Testcontainers added 90 seconds to our test suite (from 45 seconds to 2:15). We caught four production bugs in the first week. Worth it.
Testcontainers: Real Dependencies Without Pain
Testcontainers is the best thing that's happened to integration testing. It spins up Docker containers for your tests, manages the lifecycle, and tears them down when you're done.
Here's a complete setup for PostgreSQL:
// test/testcontainers-setup.js
const { PostgreSqlContainer } = require('@testcontainers/postgresql');
const { Pool } = require('pg');
let container;
let pool;
async function setupDatabase() {
container = await new PostgreSqlContainer('postgres:16-alpine')
.withDatabase('testdb')
.withUsername('testuser')
.withPassword('testpass')
.start();
pool = new Pool({
host: container.getHost(),
port: container.getPort(),
database: container.getDatabase(),
user: container.getUsername(),
password: container.getPassword(),
});
// Run migrations
await pool.query(`
CREATE TABLE users (
id SERIAL PRIMARY KEY,
email VARCHAR(255) UNIQUE NOT NULL,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
)
`);
return pool;
}
async function teardownDatabase() {
await pool.end();
await container.stop();
}
module.exports = { setupDatabase, teardownDatabase };
The container starts on a random port, so parallel test runs don't conflict. It's isolated, disposable, and runs the same database engine and version you run in production.
Performance optimization: Container startup is slow (10-20 seconds for PostgreSQL). Reuse containers across tests when you can:
beforeAll(async () => {
pool = await setupDatabase();
}, 30000);
afterEach(async () => {
// Clean data, but keep container running
await pool.query('TRUNCATE users, orders CASCADE');
});
afterAll(async () => {
await teardownDatabase();
});
This runs one container for the entire suite, not one per test.
API Integration Testing
Testing REST APIs is the most common integration test I write. Spin up your application server, make HTTP requests, verify responses.
I use Supertest for Node.js:
// test/api/users.test.js
const request = require('supertest');
const app = require('../../src/app');
const { getDb } = require('../setup');
describe('User API', () => {
it('creates a user', async () => {
const response = await request(app)
.post('/users')
.send({ email: 'test@example.com', username: 'testuser' })
.expect(201);
expect(response.body.id).toBeDefined();
expect(response.body.email).toBe('test@example.com');
// Verify database state
const db = getDb();
const result = await db.query('SELECT * FROM users WHERE email = $1', [
'test@example.com',
]);
expect(result.rows.length).toBe(1);
});
it('returns 400 for duplicate email', async () => {
const db = getDb();
await db.query("INSERT INTO users (email, username) VALUES ('test@example.com', 'existing')");
await request(app)
.post('/users')
.send({ email: 'test@example.com', username: 'newuser' })
.expect(400);
});
it('requires authentication for user updates', async () => {
await request(app)
.patch('/users/1')
.send({ username: 'updated' })
.expect(401);
const validToken = 'Bearer valid-jwt-token'; // Placeholder: a real suite would mint this via its auth helper
await request(app)
.patch('/users/1')
.set('Authorization', validToken)
.send({ username: 'updated' })
.expect(200);
});
});
The pattern: make request, verify HTTP status, verify response body, verify database state. This catches serialization issues, validation logic, and database constraints.
Database Integration Testing Best Practices
Database tests need special care. You're testing against a stateful system, and tests can pollute each other.
Schema migrations in tests. Run your migrations before tests, the same way you run them in production. Don't manually create tables in test setup. If your migrations are broken, you want to know.
Isolating tests. Two strategies: transaction rollback or separate databases.
Transaction rollback is faster:
let client;
beforeEach(async () => {
const pool = getDb();
client = await pool.connect();
await client.query('BEGIN');
});
afterEach(async () => {
await client.query('ROLLBACK');
client.release();
});
Every test runs in a transaction that's rolled back after. Fast, but doesn't work if your application code manages transactions.
Separate databases are slower but foolproof:
beforeEach(async () => {
const pool = getDb();
await pool.query('TRUNCATE users, orders, payments CASCADE');
});
I use rollback when I can, TRUNCATE when I can't.
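One refinement on the TRUNCATE approach: hardcoded table lists drift as the schema grows. Here's a sketch of a helper that discovers tables from pg_tables instead (truncateAllTables is hypothetical, and the schema_migrations exclusion assumes that's what your migration tool's bookkeeping table is called):

```javascript
// Truncate every table in the public schema except the migration
// bookkeeping table, so cleanup doesn't need a hand-maintained list
async function truncateAllTables(pool, except = ['schema_migrations']) {
  const result = await pool.query(
    `SELECT tablename FROM pg_tables WHERE schemaname = 'public'`
  );
  const tables = result.rows
    .map((row) => row.tablename)
    .filter((table) => !except.includes(table));
  if (tables.length === 0) return [];
  // One TRUNCATE statement so foreign-key order doesn't matter
  await pool.query(`TRUNCATE ${tables.map((t) => `"${t}"`).join(', ')} CASCADE`);
  return tables;
}
```

Drop it into afterEach and new tables are covered automatically the moment a migration creates them.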
Testing database constraints. Constraints are logic that lives in the database, not your application. Test them explicitly:
it('enforces unique email constraint', async () => {
const db = getDb();
await db.query("INSERT INTO users (email) VALUES ('test@example.com')");
await expect(
db.query("INSERT INTO users (email) VALUES ('test@example.com')")
).rejects.toThrow(/duplicate key value/);
});
If your ORM or query builder swallows the error, your test will pass when it shouldn't.
Contract Testing: Consumer-Driven Contracts
Contract testing solves the service-to-service integration problem without running both services. The consumer defines what it expects from the provider, and both sides verify the contract independently.
I use Pact for contract tests.
Consumer side:
// test/pact/order-service.consumer.test.js
const { PactV3 } = require('@pact-foundation/pact');
const { getOrders } = require('../../src/clients/order-client');
const provider = new PactV3({
consumer: 'frontend',
provider: 'order-service',
});
describe('Order Service Contract', () => {
it('fetches orders for a user', async () => {
await provider
.given('user 123 has orders')
.uponReceiving('a request for orders')
.withRequest({
method: 'GET',
path: '/orders',
query: { userId: '123' },
})
.willRespondWith({
status: 200,
body: [
{ orderId: 1, totalAmount: 5000, status: 'completed' },
],
})
.executeTest(async (mockServer) => {
const orders = await getOrders(mockServer.url, 123);
expect(orders.length).toBe(1);
expect(orders[0].orderId).toBe(1);
});
});
});
The consumer test generates a contract file. The provider verifies it:
Provider side:
// test/pact/order-service.provider.test.js
const { Verifier } = require('@pact-foundation/pact');
const app = require('../../src/app');
const { getDb } = require('../setup');
describe('Order Service Provider', () => {
it('validates the contract', async () => {
  const server = app.listen(3000);
  try {
    await new Verifier({
      providerBaseUrl: 'http://localhost:3000',
      pactUrls: ['./pacts/frontend-order-service.json'],
      stateHandlers: {
        'user 123 has orders': async () => {
          // Seed database with test data
          await getDb().query(
            "INSERT INTO orders (user_id, total_amount, status) VALUES (123, 5000, 'completed')"
          );
        },
      },
    }).verifyProvider();
  } finally {
    server.close(); // Always release the port, even when verification fails
  }
});
});
Contract tests replace service-to-service integration tests. They're faster, more maintainable, and catch breaking changes before deployment.
When to use contract tests:
- Replace integration tests: When you have microservices and running multiple services in tests is too expensive.
- Complement integration tests: For critical service boundaries where you want both contract verification and full integration tests.
When not to use them:
- Single monolith: If you're not calling external services, stick with regular integration tests.
- Same team owns both sides: If the frontend and backend are maintained by the same team, you can refactor both at once. Contracts are more useful across team boundaries.
Handling Flaky Integration Tests
Integration tests are flakier than unit tests. They depend on external state, timing, and network behavior.
Common sources of flakiness:
Timing issues. Asynchronous operations complete at unpredictable times. Use explicit waits instead of arbitrary sleeps:
// Bad
await new Promise(resolve => setTimeout(resolve, 1000));
// Good
async function waitFor(condition, timeout = 5000) {
const start = Date.now();
while (Date.now() - start < timeout) {
if (await condition()) return;
await new Promise(resolve => setTimeout(resolve, 100));
}
throw new Error('Timeout waiting for condition');
}
await waitFor(async () => {
const result = await db.query('SELECT * FROM orders WHERE id = $1', [orderId]);
return result.rows.length > 0;
});
Shared state. Tests that depend on specific database state or container state will fail if run in a different order. Use beforeEach to reset state, and avoid global state.
External dependencies. If you're calling a real external API (you shouldn't be), it can fail or rate-limit you. Stub it.
Container startup timing. Testcontainers can be slow to start. Increase timeouts for beforeAll:
beforeAll(async () => {
container = await new PostgreSqlContainer().start();
}, 60000); // 60 second timeout
Retry with exponential backoff for operations that might fail transiently:
async function retryWithBackoff(fn, maxRetries = 3) {
for (let i = 0; i < maxRetries; i++) {
try {
return await fn();
} catch (error) {
if (i === maxRetries - 1) throw error;
await new Promise(resolve => setTimeout(resolve, 2 ** i * 1000));
}
}
}
Flakiness is a signal. If a test is flaky, it's usually because the test is too broad, depends on timing, or has hidden state. Fix the root cause instead of retrying forever.
Integration Testing in CI/CD Pipelines
Integration tests belong in CI, but they need special care because they're slower and need infrastructure.
I run integration tests in a separate CI stage after unit tests. If unit tests fail, there's no point running integration tests.
# .github/workflows/ci.yml (GitHub Actions example)
jobs:
  unit-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
      - run: npm install
      - run: npm run test:unit
  integration-tests:
    runs-on: ubuntu-latest
    needs: unit-tests
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
      - run: npm install
      - run: npm run test:integration
Parallel execution. Run tests in parallel to save time. Most test frameworks support this:
# Jest
jest --maxWorkers=4
# Mocha with parallel flag
mocha --parallel
Be careful with parallel tests that use shared databases. Either use transaction rollback or separate database instances per worker.
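For the separate-instances option, one sketch is to derive the database name from Jest's JEST_WORKER_ID environment variable, which Jest sets to '1', '2', and so on per worker (the helper itself is hypothetical):

```javascript
// Each parallel Jest worker gets its own database name, so suites
// running concurrently can't see each other's rows
function workerDatabaseName(base = 'testdb') {
  const workerId = process.env.JEST_WORKER_ID || '1'; // Falls back to '1' outside Jest
  return `${base}_${workerId}`;
}

module.exports = { workerDatabaseName };
```

The suite's setup then creates (or truncates) workerDatabaseName() before that worker's tests run, and every worker operates in full isolation.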
Test environment provisioning. Testcontainers handles this for you, but you need Docker available in CI. Most CI providers support it natively.
Performance targets. I aim for integration tests to complete in under 5 minutes. Longer than that, and developers stop running them locally. If your suite is slower, split it into critical and non-critical tests, or run non-critical tests less frequently.
Performance and Speed Optimization
Integration tests are slower than unit tests. That's fine, but you need to keep them fast enough to run frequently.
Parallelize test execution. Run tests in parallel across multiple CPU cores. Jest, pytest, and most modern test frameworks support this.
Selective test running. In a microservices setup, run tests only for the services that changed:
# Run tests for the order service only
npm run test:integration -- --grep "order-service"
Container reuse. Don't start a new database container for every test. Start one for the suite, truncate data between tests.
Trade-off: Speed vs confidence. You can make tests faster by mocking more dependencies. But you lose confidence. Find the balance that works for your team.
Here's what I've seen in practice:
| Approach | Test Suite Time | Bugs Caught |
|---|---|---|
| All mocks | 30 seconds | Low (interface bugs only) |
| Real database, mocked services | 2-3 minutes | High (schema, constraints, serialization) |
| Real database, real Kafka, mocked external APIs | 5-7 minutes | Very high (event ordering, async bugs) |
| Full E2E (all services) | 20+ minutes | Maximum (but too slow to run frequently) |
I aim for the middle ground: real database and real message queue, mocked external services. It's fast enough to run on every commit, but catches the bugs that matter.
Integration Testing Anti-Patterns
Things I've done wrong:
Testing too much. Integration tests that spin up 10 services and test the entire flow from frontend to database are E2E tests disguised as integration tests. They're slow, brittle, and hard to debug. Test service boundaries, not the entire system.
Shared mutable state across tests. Tests that depend on each other or on global state will fail when run in parallel or in a different order. Use beforeEach to reset state.
Over-reliance on mocks. If you mock the database, you're not testing integration. You're testing that your mocks return what you told them to return.
Ignoring test performance until CI is unbearable. Slow tests don't get run. Optimize as you go.
No test data cleanup strategy. Tests that leave data in the database will pollute future tests. Use transactions, truncate, or separate databases.
Tools and Frameworks
Here's what I use:
Test frameworks:
- Jest (Node.js): Built-in mocking, parallel execution, snapshot testing.
- pytest (Python): Fixtures, parametrization, excellent plugin ecosystem.
- JUnit (Java): The standard for Java testing.
HTTP testing:
- Supertest (Node.js): Clean API for testing Express/Fastify apps.
- RestAssured (Java): Fluent API for REST testing.
- requests (Python): Simple HTTP library, works well with pytest.
Testcontainers:
- @testcontainers/postgresql (Node.js): PostgreSQL containers.
- testcontainers-python (Python): Supports Postgres, MySQL, Redis, Kafka.
- Testcontainers (Java): The original, most mature implementation.
Contract testing:
- Pact: Consumer-driven contracts for microservices.
- Spring Cloud Contract: Contract testing for Spring Boot apps.
Database testing:
- Flyway: Database migration tool (Java, Node.js, Python).
- Liquibase: More flexible migration tool with rollback support.
Tool recommendation matrix:
| Language | Test Framework | HTTP Testing | Testcontainers | Contract Testing |
|---|---|---|---|---|
| Node.js | Jest | Supertest | @testcontainers/* | Pact |
| Python | pytest | requests | testcontainers-python | Pact |
| Java | JUnit | RestAssured | Testcontainers | Pact, Spring Cloud Contract |
Real-World Example: E-Commerce Order Flow
Let me tie it all together with a realistic example: testing an order creation flow that touches multiple components.
Scenario: User creates an order, which:
- Saves the order to PostgreSQL
- Charges the payment method (external API call)
- Publishes an order.created event to Kafka
- Decrements inventory in Redis
Here's the integration test:
// test/integration/order-flow.test.js
const request = require('supertest');
const nock = require('nock');
const app = require('../../src/app');
const { getDb, getKafka, getRedis } = require('../setup');
describe('Order Creation Flow', () => {
let db, kafka, redisClient;
beforeAll(async () => {
db = getDb();
kafka = getKafka();
redisClient = getRedis();
});
beforeEach(async () => {
// Seed test data
await db.query("INSERT INTO users (id, email) VALUES (1, 'user@example.com')");
await db.query("INSERT INTO products (id, name, price) VALUES (10, 'Widget', 2000)");
await redisClient.set('inventory:10', '100');
// Stub payment API
nock('https://payment-api.example.com')
.post('/charges')
.reply(200, { chargeId: 'ch_123', status: 'succeeded' });
});
afterEach(async () => {
await db.query('TRUNCATE users, orders, order_items CASCADE');
await redisClient.flushAll();
nock.cleanAll();
});
it('creates order, charges payment, publishes event, decrements inventory', async () => {
const response = await request(app)
.post('/orders')
.send({
userId: 1,
items: [{ productId: 10, quantity: 2 }],
paymentMethod: 'card_abc',
})
.expect(201);
const { orderId } = response.body;
expect(orderId).toBeDefined();
// Verify order in database
const orderResult = await db.query('SELECT * FROM orders WHERE id = $1', [orderId]);
expect(orderResult.rows[0].user_id).toBe(1);
expect(orderResult.rows[0].total_amount).toBe(4000); // 2 * $20
// Verify order items
const itemsResult = await db.query('SELECT * FROM order_items WHERE order_id = $1', [orderId]);
expect(itemsResult.rows.length).toBe(1);
expect(itemsResult.rows[0].product_id).toBe(10);
expect(itemsResult.rows[0].quantity).toBe(2);
// Verify payment was charged
expect(nock.isDone()).toBe(true);
// Verify Kafka event was published
const consumer = kafka.consumer({ groupId: 'test-group' });
await consumer.connect();
await consumer.subscribe({ topic: 'order.created', fromBeginning: true });
const messages = [];
await consumer.run({
eachMessage: async ({ message }) => {
messages.push(JSON.parse(message.value.toString()));
},
});
await waitFor(() => messages.length > 0);
expect(messages[0].orderId).toBe(orderId);
expect(messages[0].userId).toBe(1);
await consumer.disconnect();
// Verify inventory was decremented
const inventory = await redisClient.get('inventory:10');
expect(parseInt(inventory)).toBe(98); // 100 - 2
});
});
This test verifies the entire flow across four systems: PostgreSQL, an external payment API (stubbed), Kafka, and Redis. It catches serialization bugs, constraint violations, event publishing issues, and inventory logic errors.
It takes about 3 seconds to run. Fast enough for CI, realistic enough to catch real bugs.
Integration tests are where you find out if your system actually works. Unit tests verify logic. E2E tests verify user flows. Integration tests verify that the components you built in isolation can work together.
The hard parts are test data management, deciding what to mock, and keeping tests fast. Testcontainers solves the infrastructure problem. Factories solve the data problem. Discipline solves the performance problem.
I run integration tests on every commit. They've saved me from production bugs more times than I can count.
Tested environment: Node.js 20 LTS, Docker 25.0, Ubuntu 22.04