Create Pytest Fixtures and Parametrized Tests for Comprehensive Test Coverage
Generate a full Pytest test suite with reusable fixtures, parametrized test cases, edge coverage, and advanced testing patterns.
The Prompt
Write a complete Pytest test suite for a [PYTHON_MODULE_NAME] module that contains the following functions/classes: [FUNCTION_OR_CLASS_LIST]. The module's primary responsibility is [MODULE_PURPOSE].
Please generate the following:
1. **Conftest Fixtures** (conftest.py):
- A session-scoped fixture for [EXPENSIVE_RESOURCE] (e.g., database connection, API client) with proper setup and teardown using yield.
- A function-scoped fixture that provides a fresh [TEST_DATA_OBJECT] for each test, built using a factory pattern.
- A fixture that mocks [EXTERNAL_DEPENDENCY] using `unittest.mock.patch` or `pytest-mock`.
- An autouse fixture for [SETUP_TASK] (e.g., resetting state, clearing caches).
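The four fixture types above can be sketched in a minimal `conftest.py`. This is an illustrative skeleton, not the generated output: `FakeDatabase`, `make_order`, and the dotted path `"payments.charge"` are hypothetical placeholders standing in for [EXPENSIVE_RESOURCE], [TEST_DATA_OBJECT], and [EXTERNAL_DEPENDENCY].

```python
# conftest.py -- illustrative sketch; swap the placeholder names for your own.
import pytest
from unittest import mock


class FakeDatabase:
    """Stand-in for an expensive resource such as a real DB client."""
    def __init__(self):
        self.connected = True

    def close(self):
        self.connected = False


@pytest.fixture(scope="session")
def db():
    """Session scope: connecting is expensive, so one instance is shared.
    Everything after `yield` is the teardown."""
    database = FakeDatabase()
    yield database
    database.close()


@pytest.fixture
def make_order():
    """Function scope + factory pattern: each test builds fresh,
    customizable data instead of sharing a mutable object."""
    def _make(item="widget", quantity=1):
        return {"item": item, "quantity": quantity}
    return _make


@pytest.fixture
def mock_payment():
    """Patches the external call so tests never hit a real service.
    "payments.charge" is a placeholder dotted path."""
    with mock.patch("payments.charge", return_value={"status": "ok"}) as m:
        yield m


@pytest.fixture(autouse=True)
def reset_state():
    """Autouse: runs around every test without being requested,
    useful for clearing caches or module-level state."""
    yield
```

A test then simply lists the fixtures it needs as parameters, e.g. `def test_checkout(db, make_order):`.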
2. **Parametrized Test Cases** using `@pytest.mark.parametrize`:
- Happy path tests with at least 4 input variations for [PRIMARY_FUNCTION].
- Edge case tests covering: empty inputs, boundary values for [BOUNDARY_FIELD], None/null handling, and [DOMAIN_SPECIFIC_EDGE_CASE].
- Error/exception tests verifying that [EXPECTED_EXCEPTIONS] are raised with correct messages.
- Use `pytest.param(..., id='descriptive-name')` for readable test IDs.
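The parametrize pattern the prompt asks for looks roughly like this. The `parse_quantity` function is a toy stand-in for [PRIMARY_FUNCTION] so the example is self-contained; the structure (tuples of inputs plus `pytest.param(..., id=...)`) is the part that transfers.

```python
import pytest


def parse_quantity(raw):
    """Toy placeholder for the function under test."""
    if raw is None:
        raise TypeError("quantity must not be None")
    value = int(raw)
    if value < 1:
        raise ValueError("quantity must be positive")
    return value


@pytest.mark.parametrize(
    ("raw", "expected"),
    [
        pytest.param("1", 1, id="smallest-valid"),
        pytest.param("42", 42, id="typical"),
        pytest.param(7, 7, id="already-int"),
        pytest.param("999999", 999999, id="large-value"),
    ],
)
def test_parse_quantity_happy_path(raw, expected):
    assert parse_quantity(raw) == expected


@pytest.mark.parametrize(
    ("raw", "exc", "message"),
    [
        pytest.param(None, TypeError, "must not be None", id="none-input"),
        pytest.param("0", ValueError, "must be positive", id="zero-boundary"),
        pytest.param("-5", ValueError, "must be positive", id="negative"),
    ],
)
def test_parse_quantity_errors(raw, exc, message):
    # `match` checks the exception message against a regex fragment.
    with pytest.raises(exc, match=message):
        parse_quantity(raw)
```

With descriptive IDs, a failure reports as `test_parse_quantity_errors[zero-boundary]`, which is readable at a glance.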
3. **Advanced Patterns**:
- Indirect parametrize to transform parameters through a fixture.
- `@pytest.fixture(params=[...])` for testing against multiple [VARIANTS] (e.g., database backends, serialization formats).
- Custom pytest markers for slow tests and integration tests.
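The three advanced patterns can be sketched together. The backend/serializer names are hypothetical [VARIANTS]; the mechanics (`indirect=True` routing parameters through a fixture, `params=` multiplying tests across variants, and `@pytest.mark.<name>` markers) are standard pytest.

```python
import json
import pytest


def make_backend(name):
    """Plain helper the fixture uses to transform a raw parameter."""
    return {"name": name, "ready": True}


@pytest.fixture
def backend(request):
    """Indirect parametrize target: receives the raw value via
    request.param and returns the transformed object."""
    return make_backend(request.param)


@pytest.mark.parametrize("backend", ["sqlite", "postgres"], indirect=True)
def test_backend_ready(backend):
    assert backend["ready"]


@pytest.fixture(params=["json", "repr"])
def serializer(request):
    """Parametrized fixture: every test that requests it runs once
    per serialization format."""
    return json.dumps if request.param == "json" else repr


def test_serializer_returns_text(serializer):
    assert isinstance(serializer({"a": 1}), str)


# Custom markers should be registered (pytest.ini or pyproject.toml,
# e.g. markers = ["slow: long-running tests"]) to avoid warnings.
@pytest.mark.slow
def test_expensive_path():
    assert True
```

Assuming `pytest-cov` is installed, a typical run command would be `pytest -v --cov=your_module --cov-report=term-missing`, and `-m "not slow"` skips the marked tests.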
Include docstrings explaining the rationale for each fixture's scope choice and provide a command to run the tests with verbose output and coverage reporting.
Tips for Better Results
Be explicit about what your functions accept and return; include type signatures if possible so the AI generates accurate assertions. Group related parametrize values into tuples with descriptive IDs to make test failure output immediately understandable. Always specify which external dependencies need mocking to avoid the AI generating tests that hit real services.
Use Cases
Python developers use this when building a robust test suite for existing or new modules, especially when they need high coverage with minimal boilerplate and want to follow Pytest best practices.