Professional Quality Assurance for Python Projects
Enhance your code quality with a modern toolchain for testing, static type checking, linting, dependency management, and security scanning.
Quality assurance in Python projects has evolved significantly over the years. Modern Python development requires a comprehensive approach that combines local development tools with robust CI/CD pipelines. This guide will walk you through setting up a professional-grade Python QA system, with detailed configurations and best practices.
Before diving into specific tools, it's crucial to understand the distinction between local development tools and CI pipeline checks. While CI ensures final verification, most quality checks should happen locally during development.
Local Development Tools: fast, automatic feedback while you write code, typically Ruff for linting and formatting, mypy for type checking, and quick pytest runs, all wired into pre-commit hooks.
CI Pipeline Checks: exhaustive verification on every push, repeating the local checks and adding the full test suite across supported Python versions, coverage reporting, and security scans (Bandit, Safety, deptry).
pytest has become the de facto standard for Python testing. Let's explore its advanced features and configuration best practices.
# Advanced pytest example
import pytest
from typing import Generator, Any

from your_app.services import process_user_async  # illustrative import; adjust to your application

@pytest.fixture(scope="session")
def database_connection() -> Generator[Any, None, None]:
    """Create a test database connection that's reused for the entire test session."""
    from your_app.db import create_connection

    connection = create_connection("test_db_url")
    yield connection
    connection.close()

@pytest.fixture
def user_data(database_connection: Any) -> dict:
    """Fixture that depends on another fixture."""
    return {
        "id": 1,
        "name": "Test User",
        "email": "test@example.com",
    }

@pytest.mark.asyncio  # Using the pytest-asyncio plugin
async def test_async_operation(user_data: dict) -> None:
    result = await process_user_async(user_data)
    assert result["status"] == "success"

@pytest.mark.parametrize("input,expected", [
    (1, 2),
    (2, 4),
    (3, 6),
])
def test_multiplication(input: int, expected: int) -> None:
    assert input * 2 == expected
A matching pytest.ini configuration:

[pytest]
addopts =
    --verbose
    --cov=your_package
    --cov-report=term-missing
    --cov-report=html
    --doctest-modules
    --strict-markers
testpaths = tests
python_files = test_*.py
python_classes = Test*
python_functions = test_*
markers =
    slow: marks tests as slow
    integration: marks integration tests
    asyncio: marks async tests
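With the markers registered above, groups of tests can be selected or skipped from the command line. A few illustrative invocations (the marker names come from the configuration above):

# Run everything except slow tests, stopping at the first failure
pytest -m "not slow" -x

# Re-run only the tests that failed on the previous run
pytest --lf

# Run just the integration tests with verbose output
pytest -m integration -v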
mypy is the reference type checker for Python, maintained under the python organization on GitHub. While pyright offers some advantages in IDE integration, mypy remains the standard choice for CI pipelines.
[mypy]
python_version = 3.11
plugins = pydantic.mypy
warn_return_any = True
warn_unused_configs = True
disallow_untyped_defs = True
disallow_incomplete_defs = True
check_untyped_defs = True
disallow_untyped_decorators = True
no_implicit_optional = True
warn_redundant_casts = True
warn_unused_ignores = True
warn_no_return = True
warn_unreachable = True
strict_equality = True

# Pydantic plugin configuration
[pydantic-mypy]
init_forbid_extra = True
init_typed = True
warn_required_dynamic_aliases = True
# Example with comprehensive type annotations
from typing import Any, TypeVar, Generic, Optional, Protocol
from dataclasses import dataclass

T = TypeVar('T', bound='Comparable')

class Comparable(Protocol):
    def __lt__(self, other: Any) -> bool: ...

@dataclass
class Item(Generic[T]):
    """Node in a sorted, singly linked list."""
    value: T
    next: Optional['Item[T]'] = None

    def insert(self, new_value: T) -> None:
        if new_value < self.value:
            # New value belongs before the current one: shift the current value down
            self.next = Item(self.value, self.next)
            self.value = new_value
        elif self.next is None:
            self.next = Item(new_value)
        else:
            self.next.insert(new_value)
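Running mypy over a module that uses this class catches type mismatches before the code ever runs. A small illustration (sorted_list.py is a hypothetical file name for the snippet above):

# Run with: mypy sorted_list.py
items = Item(3)          # inferred as Item[int]
items.insert(5)          # OK: int satisfies the Comparable bound
items.insert("five")     # mypy flags this as an incompatible argument type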
Ruff has reshaped Python linting by bundling the checks of many traditional tools (Flake8, isort, pydocstyle, and more) into a single, fast linter. It's written in Rust, and its authors report speedups of 10-100x over the tools it replaces.
[tool.ruff]
target-version = "py311"

# Enable all rules by default, then configure specific ones
select = ["ALL"]
ignore = [
    "D100",  # Missing docstring in public module
    "D104",  # Missing docstring in public package
    "FBT",   # Boolean positional arguments
    "T20",   # Use of print statements
]

# Allow autofix behavior
fix = true
unsafe-fixes = false

# Line length and other settings
line-length = 100
indent-width = 4

[tool.ruff.mccabe]
max-complexity = 10

[tool.ruff.isort]
known-first-party = ["your_package"]

[tool.ruff.flake8-bugbear]
extend-immutable-calls = ["fastapi.Depends", "fastapi.Query"]

[tool.ruff.per-file-ignores]
"tests/*" = ["S101", "PLR2004", "D103"]
"scripts/*" = ["INP001"]

[tool.ruff.pydocstyle]
convention = "google"
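With this configuration in place, the day-to-day commands are short; both read their settings from pyproject.toml:

# Lint the project and apply safe autofixes
ruff check . --fix

# Format the codebase (a drop-in replacement for Black)
ruff format .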
pre-commit ensures quality checks run automatically before each commit. This catches issues early in the development process.
# .pre-commit-config.yaml
repos:
  - repo: https://github.com/astral-sh/ruff-pre-commit
    rev: v0.1.9
    hooks:
      - id: ruff
        args: [--fix]
      - id: ruff-format
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.5.0
    hooks:
      - id: trailing-whitespace
      - id: end-of-file-fixer
      - id: check-yaml
      - id: check-added-large-files
      - id: check-toml
      - id: check-json
      - id: detect-private-key
  - repo: https://github.com/python-poetry/poetry
    rev: 1.7.0
    hooks:
      - id: poetry-check
      - id: poetry-lock
        args: [--check]
  - repo: local
    hooks:
      - id: mypy
        name: mypy
        entry: mypy
        language: system
        types: [python]
        require_serial: true
      - id: pytest
        name: pytest
        entry: pytest
        language: system
        pass_filenames: false
        always_run: true
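Activate the hooks once per clone, and optionally run them against the whole repository to establish a clean baseline:

# Install the git hook scripts
pre-commit install

# Run all hooks against every file (useful when first adopting the config)
pre-commit run --all-files

# Bump hook versions to their latest tags
pre-commit autoupdate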
Package management in Python has evolved significantly. While pip, pipenv, and Poetry have served the community well, a newer contender, uv, is raising the bar for speed and reliability.
uv's key advantages include a Rust implementation that resolves and installs packages far faster than pip, a pip-compatible command-line interface (uv pip ...) that drops into existing workflows, and built-in virtual environment management.
# Install uv
curl -LsSf https://astral.sh/uv/install.sh | sh
# Create a new virtual environment with uv
uv venv
# Install dependencies from requirements.txt
uv pip install -r requirements.txt
# Install the current project and its dependencies from pyproject.toml (editable mode)
uv pip install -e .
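uv also includes a pip-tools-style resolver, which is handy for producing fully pinned requirements from loose constraints. One possible workflow (requirements.in is just a conventional name for the unpinned input file):

# Compile pinned requirements from a loose requirements.in
uv pip compile requirements.in -o requirements.txt

# Make the environment match the pinned file exactly
uv pip sync requirements.txt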
uv works directly with standard pyproject.toml metadata, so no tool-specific manifest is required:

[project]
name = "your-project"
version = "0.1.0"
description = "Your project description"
requires-python = ">=3.9"
dependencies = [
    "fastapi>=0.100.0",
    "sqlalchemy>=2.0.0",
    "pydantic>=2.0.0",
]

[project.optional-dependencies]
dev = [
    "pytest>=7.0.0",
    "mypy>=1.0.0",
    "ruff>=0.1.0",
]

[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"
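With the optional dev group defined above, a development environment can be bootstrapped in a few commands (shown with uv, though plain pip accepts the same extras syntax):

# Create a virtual environment, activate it, and install the project with dev extras
uv venv
source .venv/bin/activate
uv pip install -e ".[dev]"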
Security in Python projects involves multiple layers of checking, from static code analysis to dependency scanning. Let's explore the essential security tools and their configurations.
Bandit performs static security analysis of your Python code. It searches for common security issues like hardcoded passwords, use of unsafe functions, and SQL injection vulnerabilities.
[tool.bandit]
# Directories to exclude from analysis
exclude_dirs = ["tests", "scripts"]

# Skip specific checks
skips = [
    "B101",  # Skip assert warnings
    "B404",  # Skip import subprocess warning
    "B603",  # Skip subprocess.Popen warning
]

targets = ["src"]

# Custom settings for specific tests
[tool.bandit.assert_used]
skips = ["test_*.py"]

[tool.bandit.hardcoded_tmp_directory]
tmp_dirs = ["/tmp", "/var/tmp"]
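Bandit does not discover pyproject.toml on its own, so point it at the configuration explicitly. Typical invocations:

# Scan the src tree using the settings from pyproject.toml
bandit -r src/ -c pyproject.toml

# Produce a machine-readable report for CI artifacts
bandit -r src/ -c pyproject.toml -f json -o bandit-report.json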
Safety checks your dependencies against a database of known security vulnerabilities and reports any affected versions. It's crucial to integrate this into both development and CI workflows.
# Running safety with common options
# (--ignore 42194 skips that specific vulnerability ID)
safety check \
    --full-report \
    --output text \
    --file requirements.txt \
    --ignore 42194 \
    --continue-on-error
safety check --update
Deptry helps maintain clean dependency trees by analyzing imports and requirements. It's particularly useful for identifying unused or miscategorized dependencies.
[tool.deptry]
extend_exclude = [
    "tests",
    "scripts",
]
ignore_transitive = [
    "pytest",
]
ignore_obsolete = [
    "typing-extensions",
]

[tool.deptry.per_rule_ignores]
# DEP002: declared but never imported; pytest-cov is loaded as a pytest plugin
# rather than imported, so deptry would otherwise flag it as unused.
DEP002 = [
    "pytest-cov",
]
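Running deptry is a single command from the project root; each finding is reported under a rule code that identifies the kind of dependency problem:

# Analyse imports against declared dependencies using the settings above
deptry .

# Rule codes reported by deptry:
#   DEP001  imported but not declared (missing dependency)
#   DEP002  declared but never imported (unused dependency)
#   DEP003  imported transitively (should be declared explicitly)
#   DEP004  development dependency imported by production code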
A robust CI/CD pipeline should combine all these tools while considering different environments and Python versions. Here's a complete GitHub Actions workflow that incorporates all our quality and security checks:
# .github/workflows/python-qa.yml
name: Python QA

on:
  push:
    branches: [ main, develop ]
  pull_request:
    branches: [ main, develop ]
  schedule:
    - cron: '0 0 * * 0'  # Weekly security scans

jobs:
  quality:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ['3.9', '3.10', '3.11']

    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 0  # Required for proper coverage reporting

      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v4
        with:
          python-version: ${{ matrix.python-version }}

      - name: Install Poetry
        uses: snok/install-poetry@v1
        with:
          version: 1.7.0
          virtualenvs-create: true
          virtualenvs-in-project: true

      - name: Load cached venv
        id: cached-poetry-dependencies
        uses: actions/cache@v3
        with:
          path: .venv
          key: venv-${{ runner.os }}-${{ matrix.python-version }}-${{ hashFiles('**/poetry.lock') }}

      - name: Install dependencies
        if: steps.cached-poetry-dependencies.outputs.cache-hit != 'true'
        run: poetry install --no-interaction --no-root

      - name: Run Ruff
        run: poetry run ruff check .

      - name: Run mypy
        run: poetry run mypy .

      - name: Run tests with coverage
        run: |
          poetry run pytest \
            --cov=your_package \
            --cov-report=xml \
            --cov-report=term-missing \
            --junitxml=pytest.xml

      - name: Upload coverage to Codecov
        uses: codecov/codecov-action@v3
        with:
          file: ./coverage.xml
          fail_ci_if_error: true

      - name: Security checks
        run: |
          poetry run bandit -r src/ -f json -o bandit-report.json
          poetry run safety check
          poetry run deptry .

      - name: Upload security report
        if: always()
        uses: actions/upload-artifact@v3
        with:
          name: security-report
          path: bandit-report.json

  deploy:
    needs: quality
    if: github.ref == 'refs/heads/main' && github.event_name == 'push'
    runs-on: ubuntu-latest
    steps:
      - name: Deploy to production
        run: |
          echo "Add your deployment steps here"
Remember the distinction between local and CI checks: keep fast, targeted feedback (linting, type checking, quick test runs) in pre-commit hooks on your machine, and reserve the exhaustive work (the full test matrix, coverage reporting, scheduled security scans) for the pipeline.
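One way to keep the two aligned is a small script that mirrors the CI steps on your machine. A minimal sketch (check.sh is a hypothetical helper; adjust paths and markers to your project):

#!/usr/bin/env bash
# check.sh: run locally the same checks the CI pipeline runs
set -euo pipefail

ruff check . --fix                     # lint with safe autofixes
ruff format .                          # format
mypy .                                 # static type checking
pytest -m "not slow"                   # fast test subset; CI runs the full matrix
bandit -r src/ -c pyproject.toml -q    # static security analysis
deptry .                               # dependency hygiene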
A robust Python quality assurance system combines local development tools with comprehensive CI/CD pipelines. By implementing these tools and configurations, you ensure consistent style and formatting, type-safe and well-tested code, early detection of security vulnerabilities, and a clean, intentional dependency tree.
Regular maintenance of these tools and configurations is crucial. Keep your dependencies updated, regularly review security policies, and adjust configurations as your project evolves.