# Professional Python Testing with pytest

*Enterprise-grade testing patterns and production-proven techniques*

## Table of Contents

1. pytest Fixtures (Production-Grade Patterns)
2. Parameterized Tests (Enterprise Patterns)
3. pytest Plugins (Industrial Ecosystem)
4. Advanced pytest Patterns
5. Comparative Analysis
6. Complete Example Suite

## 1. pytest Fixtures (Production-Grade Patterns)

### Core Concepts
| Scope | Description | Use Case |
|---|---|---|
| `function` | Default scope; runs for each test | Isolated test data |
| `class` | Runs once per test class | Shared class resources |
| `module` | Runs once per module | Expensive setup |
| `session` | Runs once per test run | Global resources |
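As an illustration, here is a minimal sketch of how scope changes setup frequency (the fixture names are hypothetical):

```python
import pytest

@pytest.fixture(scope='session')
def app_config():
    # Built once for the entire test run.
    return {"env": "test"}

@pytest.fixture
def fresh_user(app_config):
    # Default function scope: rebuilt before every test that uses it.
    return {"name": "alice", "env": app_config["env"]}
```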
A typical layout places shared fixtures in layered `conftest.py` files:

```
tests/
├── conftest.py           # Project-wide fixtures
├── unit/
│   ├── conftest.py       # Unit test fixtures
│   └── test_models.py
└── integration/
    ├── conftest.py       # Integration fixtures
    └── test_api.py
```
### Advanced Fixtures

```python
import pytest

# Factory pattern with cleanup: tests can open any number of
# connections, and every one is closed during teardown.
@pytest.fixture
def db_connection_factory():
    connections = []

    def _create_connection(config):
        conn = Database.connect(config)
        connections.append(conn)
        return conn

    yield _create_connection
    # Teardown all connections
    for conn in connections:
        conn.close()
```
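The factory shines when a single test needs several connections; a usage sketch (the config dicts are hypothetical):

```python
def test_replication(db_connection_factory):
    # Both connections are closed automatically by the fixture's teardown.
    primary = db_connection_factory({"host": "primary.local"})
    replica = db_connection_factory({"host": "replica.local"})
    assert primary is not replica
```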
```python
# Parametrized fixture
@pytest.fixture(params=['memory', 'disk'])
def storage_backend(request):
    if request.param == 'memory':
        return MemoryStorage()
    return DiskStorage()
```
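Any test requesting `storage_backend` now runs once per parameter; a sketch assuming both backends expose `put`/`get`:

```python
def test_roundtrip(storage_backend):
    # Collected twice: test_roundtrip[memory] and test_roundtrip[disk].
    storage_backend.put("key", b"value")
    assert storage_backend.get("key") == b"value"
```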
### Best Practices

- Prefer `yield` fixtures over `addfinalizer` for cleaner teardown
- Use `autouse=True` sparingly: explicit is better than implicit
- Compose fixtures from other fixtures for maintainability
- Name fixtures after what they provide, not how they work
**Production Tip:** Use `pytest --setup-show` to visualize fixture execution order and dependencies.
## 2. Parameterized Tests (Enterprise Patterns)

### Basic Parametrization

```python
import pytest

@pytest.mark.parametrize('expression,expected', [
    ('3+5', 8),
    ('2*4', 8),
    ('6/2', 3),
    pytest.param('1/0', None,
                 marks=pytest.mark.xfail(raises=ZeroDivisionError)),
], ids=['add', 'multiply', 'divide', 'zero-division'])
def test_eval(expression, expected):
    assert eval(expression) == expected
```
### Advanced Techniques

```python
# Dynamic parameter generation
def pytest_generate_tests(metafunc):
    if 'dataset' in metafunc.fixturenames:
        metafunc.parametrize(
            'dataset',
            load_test_data_from_json('testcases.json'))
```
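`load_test_data_from_json` is a project helper rather than a pytest API; a minimal sketch might be:

```python
import json
from pathlib import Path

def load_test_data_from_json(path):
    # Expects the file to contain a JSON array of parameter values.
    return json.loads(Path(path).read_text())
```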
```python
# Hypothesis integration: property-based testing
from hypothesis import given, strategies as st

@given(st.integers(), st.integers())
def test_addition_commutative(a, b):
    assert a + b == b + a
```
### Visualization & Debugging

| Command | Purpose |
|---|---|
| `pytest -v` | Show full parameterized test names |
| `pytest -k "divide"` | Run specific parameterized cases |
| `pytest --tb=short` | Condensed traceback for failures |
**Performance Warning:** Large parameter sets can significantly increase test suite runtime. Consider splitting them across multiple test functions or using property-based testing.
## 3. pytest Plugins (Industrial Ecosystem)

### Essential Plugins

| Plugin | Purpose | Production Tip |
|---|---|---|
| `pytest-xdist` | Parallel test execution | Use `-n auto` for CPU core count |
| `pytest-cov` | Coverage reporting | Combine with `--cov-report html` |
| `pytest-mock` | Mocking integration | Use the `mocker` fixture |
| `pytest-asyncio` | Async test support | Mark async tests with `@pytest.mark.asyncio` |
| `pytest-docker` | Docker integration | Use fixture scopes carefully |
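For example, `pytest-mock`'s `mocker` fixture wraps `unittest.mock` and reverts every patch at teardown; a sketch (the `orders.gateway.charge` target and `checkout` function are hypothetical):

```python
def test_checkout_charges_card(mocker):
    # The patch is undone automatically when the test ends.
    charge = mocker.patch("orders.gateway.charge", return_value=True)
    checkout(order_id=42)  # hypothetical function under test
    charge.assert_called_once()
```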
### Custom Plugin Development

```python
# conftest.py
import pytest

def pytest_addoption(parser):
    parser.addoption(
        "--env",
        action="store",
        default="dev",
        help="Environment to run tests against",
    )

def pytest_configure(config):
    config.addinivalue_line(
        "markers",
        "integration: mark as integration test",
    )

def pytest_collection_modifyitems(config, items):
    # In CI, run only tests marked `integration`; skip everything else.
    if config.getoption("--env") == "ci":
        for item in items:
            if "integration" not in item.keywords:
                item.add_marker(
                    pytest.mark.skip(reason="Skipping non-integration in CI"))
```
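With this conftest in place, tests opt in via the registered marker; a hypothetical module:

```python
# test_payments.py
import pytest

@pytest.mark.integration
def test_end_to_end_checkout():
    ...
```

Running `pytest --env ci` then skips everything not marked `integration`.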
### Plugin Architecture Deep Dive

```python
# Example fixture-logging plugin
import pytest

@pytest.hookimpl(tryfirst=True)
def pytest_fixture_setup(fixturedef, request):
    """Log all fixture setups."""
    if hasattr(request.config, 'workerinput'):
        return  # Skip in xdist worker nodes
    print(f"Setting up fixture: {fixturedef.argname}")

def pytest_terminal_summary(terminalreporter):
    """Add a custom summary section."""
    passed = len(terminalreporter.stats.get('passed', []))
    terminalreporter.section("Custom Stats")
    terminalreporter.write_line(f"Tests passed: {passed}")
```
## 4. Advanced pytest Patterns

### Test Organization

```
tests/
├── __init__.py
├── unit/
│   ├── models/
│   │   ├── test_user.py
│   │   └── test_product.py
│   └── utils/
│       └── test_helpers.py
├── integration/
│   ├── api/
│   │   └── test_routes.py
│   └── database/
│       └── test_queries.py
└── acceptance/
    └── test_workflows.py
```
### Debugging Infrastructure

| Command | Purpose |
|---|---|
| `pytest --pdb` | Drop to debugger on failure |
| `pytest --trace` | Set trace at test start |
| `pytest --durations=10` | Show the 10 slowest tests |
| `pytest --lf` | Run last failed tests only |
### CI/CD Integration

```yaml
# GitHub Actions example
name: Python Tests
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ['3.8', '3.9', '3.10']
    steps:
      - uses: actions/checkout@v2
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v2
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install pytest pytest-xdist pytest-cov
          pip install -e .
      - name: Test with pytest
        run: |
          pytest tests/ --cov=myapp --junitxml=test-results.xml -n auto
      - name: Upload coverage
        uses: codecov/codecov-action@v1
```
## 5. Comparative Analysis

### pytest vs unittest

| Feature | pytest | unittest |
|---|---|---|
| Assertions | Plain `assert` statements | `assertEqual()`, `assertTrue()`, etc. |
| Fixtures | Flexible dependency injection | `setUp()`/`tearDown()` methods |
| Parametrization | Built-in support | Requires `TestCase` subclasses |
| Plugins | Rich ecosystem | Limited extension points |
### Migration Guide

- Start by running existing unittest tests with pytest (it runs them out of the box)
- Convert `TestCase` classes to plain functions gradually (see the sketch below)
- Replace `setUp`/`tearDown` with fixtures
- Convert assert methods to plain `assert` statements
- Adopt pytest features like parametrization
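A minimal before/after sketch of the conversion (`Calculator` is a hypothetical class under test):

```python
# Before: unittest style
import unittest

class TestCalculator(unittest.TestCase):
    def setUp(self):
        self.calc = Calculator()

    def test_add(self):
        self.assertEqual(self.calc.add(2, 3), 5)

# After: pytest style
import pytest

@pytest.fixture
def calc():
    return Calculator()

def test_add(calc):
    assert calc.add(2, 3) == 5
```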
**Migration Tip:** Use `pytest --disable-warnings` to hide the warnings summary during migration, or silence specific categories with `-W ignore::DeprecationWarning`.
## 6. Complete Example Suite

```python
# test_enterprise_patterns.py
import time
from datetime import date

import pytest
from sqlalchemy import create_engine, text
from sqlalchemy.orm import Session

@pytest.fixture(scope='module')
def db_engine():
    """Production-like database engine."""
    engine = create_engine('postgresql://test:test@localhost:5432/testdb')
    yield engine
    engine.dispose()

@pytest.fixture
def db_session(db_engine):
    """Transactional session with rollback."""
    connection = db_engine.connect()
    transaction = connection.begin()
    session = Session(bind=connection)
    yield session
    session.close()
    transaction.rollback()
    connection.close()

@pytest.mark.parametrize('start_date,days,expected', [
    (date(2023, 1, 1), 30, date(2023, 1, 31)),
    (date(2023, 2, 1), 28, date(2023, 3, 1)),
], ids=['january', 'february'])
def test_date_addition(db_session, start_date, days, expected):
    """Test date arithmetic through the database session."""
    result = db_session.execute(
        text("SELECT :start::date + :days::integer"),
        {"start": start_date, "days": days},
    ).scalar()
    assert result == expected

@pytest.mark.slow
def test_large_dataset_performance(db_session):
    """Performance test with realistic data volume."""
    test_data = generate_large_dataset(10_000)  # project-specific helper
    start = time.time()
    db_session.bulk_save_objects(test_data)
    db_session.commit()
    duration = time.time() - start
    assert duration < 2.0  # SLA requirement
```
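Because the performance test carries `@pytest.mark.slow`, fast feedback loops can deselect it with `pytest -m "not slow"` (register the marker, e.g. via `addinivalue_line` as in section 3, to avoid warnings).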
**Enterprise Consideration:** For mission-critical systems, combine pytest with contract-testing tools like Pact to verify service integrations.