Metrics
Application Kit provides request counting and metrics collection through decorators and dependencies.
Overview
The metrics system tracks request counts per endpoint and queues them to Redis for processing. This enables:
- Usage tracking per project
- Billing and quota management
Not for analytics
This metrics system is designed for billing and quota enforcement, not user behavior analytics. It counts API calls per project to track usage against subscription limits.
Usage
In FastAPI, use the AddJobToMetrics dependency:
"""FastAPI metrics example."""
from functools import partial
from fastapi import APIRouter, Depends
from application_kit.fastapi.fastapi import get_fastapi_app
from application_kit.fastapi.metrics import AddJobToMetrics
app = get_fastapi_app("api")
router = APIRouter()
app.include_router(router)
# Create a partial for your service. The first argument is one of the
# registered MetricsProduct values (see application_kit.metrics.registry).
add_stores_metric = partial(AddJobToMetrics, "STORES_SEARCHES")
@router.get(
"/search",
dependencies=[Depends(add_stores_metric("search_stores_public"))],
)
async def search() -> dict[str, list[str]]:
return {"results": []}
Note
The AddJobToMetrics dependency uses FastAPI's BackgroundTasks to send the job to Redis after the response is sent.
In Django, use the @count_request decorator:
"""Django metrics example."""
from django.http import HttpRequest, JsonResponse
from application_kit.django.decorators import authenticate_key, count_request
@authenticate_key()
@count_request("search_stores_public", "STORES_SEARCHES")
def search_view(request: HttpRequest) -> JsonResponse:
return JsonResponse({"results": []})
@authenticate_key()
@count_request("search_stores_public", "STORES_SEARCHES")
async def async_search_view(request: HttpRequest) -> JsonResponse:
return JsonResponse({"results": []})
In Django Ninja, use the @count_request decorator with @chain():
"""Django Ninja metrics example."""
from django.http import HttpRequest
from application_kit.django.decorators import authenticate_key, count_request
from application_kit.shinobi.api import WoosmapApi
from application_kit.shinobi.authentication import PublicKeyAuth
from application_kit.shinobi.decorators import chain, terminate
api = WoosmapApi(description="My API")
@api.get("/search", auth=[PublicKeyAuth()])
@chain(authenticate_key(), count_request("search_stores_public", "STORES_SEARCHES"))
@terminate()
def search(request: HttpRequest) -> dict[str, list[str]]:
return {"results": []}
Dynamic product/kind (context-based)
Use AddJobWithContextToMetrics when the (product, kind) pair depends on
request-time state — for example the resolved dataset flavour, whether the
token is public or private, or query flags. The dependency yields a typed
context for the endpoint to populate; after the endpoint returns, the
dependency calls assemble_product_and_kind on the context, validates the
result against the registry, and enqueues the metrics job.
"""FastAPI context-based metrics example.
Use ``AddJobWithContextToMetrics`` when the ``(product, kind)`` pair depends on
request-time state — e.g. the resolved dataset flavour, whether the token is
public or private, or query flags like ``force_geometry``.
The dependency yields a typed context for the endpoint to populate. After the
endpoint returns, the dependency calls ``assemble_product_and_kind`` on the
context, validates the result against the registry, and enqueues the metrics
job via ``BackgroundTasks``. The context's ``count`` attribute is forwarded
to the queue, so endpoints can emit element-based counters by assigning it.
Pair this with ``application_kit.metrics.testing.assert_context_exhaustive``
to prove every combination of Literal field values produces a registered
counter.
"""
from typing import Annotated, Literal
from fastapi import APIRouter, Depends
from application_kit.authenticator.types import ProjectTokenModel
from application_kit.fastapi.fastapi import get_fastapi_app
from application_kit.fastapi.metrics import AddJobWithContextToMetrics
from application_kit.metrics.registry import MetricsProduct
app = get_fastapi_app("api")
router = APIRouter()
app.include_router(router)
class AutocompleteMetricsContext:
"""Context emitted by the autocomplete endpoint.
Fields are ``Literal``-typed so the combination space is finite and
exhaustible in tests via ``typing.get_args``.
"""
count: int = 1
dataset: Literal["standard", "advanced"] = "standard"
key_kind: Literal["public", "private"] = "public"
def assemble_product_and_kind(self, token: ProjectTokenModel) -> tuple[MetricsProduct, str]:
return "LOCALITIES", f"autocomplete_{self.dataset}_{self.key_kind}"
AutocompleteMetrics = Annotated[
AutocompleteMetricsContext,
Depends(AddJobWithContextToMetrics(AutocompleteMetricsContext)),
]
@router.get("/autocomplete")
async def autocomplete(ctx: AutocompleteMetrics) -> dict[str, list[str]]:
# Resolve request-time state and push it onto the context. The dependency
# reads these fields post-yield to derive the final (product, kind) pair.
ctx.dataset = "advanced"
ctx.key_kind = "private"
suggestions: list[str] = []
ctx.count = len(suggestions) or 1
return {"suggestions": suggestions}
Invalid (product, kind) outputs are logged (metrics.invalid_counter) and
tagged on the Datadog root span so they surface in traces. In production
the request is not failed. Pass strict=True to AddJobWithContextToMetrics
in tests/staging to surface the miss as a 500 instead.
Testing a context exhaustively
Pair your context with typing.Literal fields and
application_kit.metrics.testing.assert_context_exhaustive to guarantee
every combination of field values maps to a registered counter:
"""Exhaustive test for a metrics context.
Covers every Literal field combination × token variant. If the context ever
returns a ``(product, kind)`` outside the registry — e.g. after renaming a
kind in the registry without updating the context — this test fails with a
reproducible row.
"""
from typing import Literal, get_args
from unittest.mock import MagicMock
from application_kit.authenticator.types import ProjectTokenModel
from application_kit.metrics.registry import MetricsProduct
from application_kit.metrics.testing import assert_context_exhaustive
class AutocompleteMetricsContext:
count: int = 1
dataset: Literal["standard", "advanced"] = "standard"
key_kind: Literal["public", "private"] = "public"
def assemble_product_and_kind(self, token: ProjectTokenModel) -> tuple[MetricsProduct, str]:
return "ADDRESS", f"autocomplete_{self.dataset}_{self.key_kind}"
def test_autocomplete_context_only_emits_registered_kinds() -> None:
tokens = [MagicMock(spec=ProjectTokenModel)]
assert_context_exhaustive(
AutocompleteMetricsContext,
tokens,
dataset=get_args(AutocompleteMetricsContext.__annotations__["dataset"]),
key_kind=get_args(AutocompleteMetricsContext.__annotations__["key_kind"]),
)
assert_context_exhaustive iterates the Cartesian product of all
field_domains across every token variant, calls
assemble_product_and_kind, and asserts each output is registered. A
failure reports the offending token, field assignment, and assembled pair.
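Under the hood, the helper is essentially a Cartesian-product walk over the declared field domains. A minimal sketch of that iteration (simplified; the real exhaust_context in application_kit.metrics.testing also builds the failure report):

```python
import itertools


def exhaust_context_sketch(context_factory, tokens, **field_domains):
    """Yield (token, fields, output) for every token x field combination."""
    names = list(field_domains)
    for token in tokens:
        for values in itertools.product(*(field_domains[n] for n in names)):
            # Fresh context per combination so state never leaks between rows.
            ctx = context_factory()
            fields = dict(zip(names, values))
            for name, value in fields.items():
                setattr(ctx, name, value)
            yield token, fields, ctx.assemble_product_and_kind(token)


class DemoContext:
    dataset = "standard"
    key_kind = "public"

    def assemble_product_and_kind(self, token):
        return "LOCALITIES", f"autocomplete_{self.dataset}_{self.key_kind}"


rows = list(
    exhaust_context_sketch(
        DemoContext,
        ["token"],
        dataset=("standard", "advanced"),
        key_kind=("public", "private"),
    )
)
assert len(rows) == 4  # 1 token x 2 datasets x 2 key kinds
```

Each yielded triple carries everything needed to reproduce a failing combination, which is what makes the assertion helper's error messages actionable.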
Parameters
AddJobToMetrics(product, kind, *, strict=False)
| Parameter | Type | Description |
|---|---|---|
| product | MetricsProduct | Registered product (Literal of VALID_COUNTERS keys). Typos are rejected by mypy. |
| kind | str | Registered kind under product (see application_kit.metrics.registry.VALID_COUNTERS). |
| strict | bool | When True, unregistered (product, kind) pairs raise InvalidCounterRequest at request time. Production default is False: the miss is logged and tagged on the Datadog root span, and the request is not failed. |
AddJobWithContextToMetrics(context_factory, *, strict=False)
| Parameter | Type | Description |
|---|---|---|
| context_factory | Callable[[], KindContext] | Zero-arg callable returning a fresh context implementing KindContext. Classes with no-arg constructors satisfy this; pass a lambda or functools.partial to close over extra arguments. |
| strict | bool | Same semantics as above. |
Endpoints declaring the dependency as a typed parameter receive the
context instance and may mutate ctx.count for element-based counters
(e.g. ctx.count = len(results)).
count_request(request_name, product, name_lambda=None, *, strict=False)
| Parameter | Type | Description |
|---|---|---|
| request_name | str | Registered kind under product (used as the counter kind unless name_lambda overrides). |
| product | MetricsProduct | Registered product (Literal of VALID_COUNTERS keys). |
| name_lambda | CounterNameLambda \| None | Optional callable to derive the kind at request time from product, endpoint, token kind and request kwargs. |
| strict | bool | Same semantics as FastAPI: unregistered pairs log and tag the Datadog root span in production; strict=True raises InvalidCounterRequest instead. |
Configuration
Add the metrics_queue Redis dependency to your application.json:
```json
{
  "dependencies": {
    "databases": [
      {
        "name": "metrics_queue",
        "type": "redis",
        "extras": {"queue_prefix": "counter"}
      }
    ]
  }
}
```
The queue_prefix determines the Redis key prefix for the metrics queue.
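As an illustration of the config shape being consumed, a service could read the prefix back out of its parsed application.json like this (a sketch; in practice application_kit's dependency loader handles this wiring):

```python
import json

APPLICATION_JSON = """
{
  "dependencies": {
    "databases": [
      {"name": "metrics_queue", "type": "redis", "extras": {"queue_prefix": "counter"}}
    ]
  }
}
"""


def metrics_queue_prefix(config: dict) -> str:
    """Locate the metrics_queue database entry and return its Redis key prefix."""
    databases = config["dependencies"]["databases"]
    queue = next(db for db in databases if db["name"] == "metrics_queue")
    return queue["extras"]["queue_prefix"]


config = json.loads(APPLICATION_JSON)
assert metrics_queue_prefix(config) == "counter"
```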
How It Works
1. When a request is processed, the decorator/dependency derives (product, kind): statically for AddJobToMetrics/count_request, or from the yielded context for AddJobWithContextToMetrics.
2. The pair is validated against VALID_COUNTERS in application_kit.metrics.registry. On a miss the error is logged (metrics.invalid_counter) and tagged on the Datadog root span; in strict=True mode the dependency raises instead.
3. A job is created with organization/project IDs, product, kind, counter and source, then pushed to a Redis queue under queue_prefix.
4. A background worker processes the queue and aggregates metrics.
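The validation step (validate, then log-and-drop or raise) can be sketched as follows. InvalidCounterRequest and the metrics.invalid_counter log name come from this doc; the one-entry registry is a stand-in:

```python
import logging

logger = logging.getLogger("metrics")

# Stand-in for application_kit.metrics.registry.VALID_COUNTERS.
VALID_COUNTERS = {"STORES_SEARCHES": frozenset({"search_stores_public"})}


class InvalidCounterRequest(Exception):
    """Raised in strict mode for unregistered (product, kind) pairs."""


def should_enqueue(product: str, kind: str, *, strict: bool = False) -> bool:
    """Return True when the job may be enqueued; drop or raise otherwise."""
    if kind in VALID_COUNTERS.get(product, frozenset()):
        return True
    if strict:
        raise InvalidCounterRequest(f"{product}/{kind} is not registered")
    # Production path: log, tag the trace, drop the job, keep the request alive.
    logger.error("metrics.invalid_counter", extra={"product": product, "kind": kind})
    return False


assert should_enqueue("STORES_SEARCHES", "search_stores_public") is True
assert should_enqueue("STORES_SEARCHES", "bogus") is False
```

The key design point is that a metrics misconfiguration in production never costs the caller a 5xx; it only costs the operator a counter, which the log and trace tag make visible.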
Count timing: FastAPI vs Django
The two frameworks count at different points of the request lifecycle, and the difference is observable when a view raises.
AddJobToMetrics and AddJobWithContextToMetrics enqueue the metrics
job after the endpoint returns, via BackgroundTasks. If the
endpoint raises, the dependency's generator is closed by FastAPI before
the yield resumes and no job is enqueued. Counters reflect
successful responses only.
count_request enqueues the job before the view function runs. If
the view raises, the hit has already been counted. This is a
carry-over from the original decorator and differs from the FastAPI
behaviour; it is intentional on legacy code paths where the hit should
be billed whether or not the response succeeded.
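The difference is easiest to see with two toy decorators; this illustrates the ordering only, not the library's implementation:

```python
jobs: list[str] = []


def count_after(handler):
    """FastAPI-style: enqueue only after the handler returns successfully."""
    def wrapper(*args, **kwargs):
        result = handler(*args, **kwargs)
        jobs.append("after")
        return result
    return wrapper


def count_before(handler):
    """Django count_request-style: enqueue before the handler runs."""
    def wrapper(*args, **kwargs):
        jobs.append("before")
        return handler(*args, **kwargs)
    return wrapper


@count_after
def ok_view():
    return "ok"


@count_before
def failing_view():
    raise RuntimeError("boom")


ok_view()
try:
    failing_view()
except RuntimeError:
    pass

# The failing Django-style view was still counted; the FastAPI-style
# counter only fired for the successful response.
assert jobs == ["after", "before"]
```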
When the decorator is the wrong tool
Most services that need a dynamic kind or a non-trivial count
(element-based counters, count = len(results), product/kind chosen from
resolved state) do not use count_request. They authenticate with
authenticate_key and call add_job_to_queue_for_token(project_token,
product, kind, count) from inside the view, after the billable work has
completed and the actual counts are known. That moves the count to the
end of the view lifecycle on Django too, and is the recommended pattern
when the simple static decorator doesn't fit. New code on FastAPI should
prefer AddJobWithContextToMetrics, which gets the same post-handler
timing for free.
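A sketch of that in-view pattern, with the queue call stubbed out. The add_job_to_queue_for_token signature matches what this doc describes, but its import path, the view body, and the store data are illustrative:

```python
recorded: list[tuple[str, str, int]] = []


def add_job_to_queue_for_token(project_token, product: str, kind: str, count: int) -> None:
    """Stub for the real queue call; records the job instead of pushing to Redis."""
    recorded.append((product, kind, count))


def search_view(project_token, query: str) -> dict[str, list[str]]:
    # Do the billable work first...
    results = [s for s in ("store-a", "store-b") if query in s]
    # ...then count once the element count is actually known.
    add_job_to_queue_for_token(
        project_token, "STORES_SEARCHES", "search_stores_private", len(results) or 1
    )
    return {"results": results}


assert search_view(object(), "store") == {"results": ["store-a", "store-b"]}
assert recorded == [("STORES_SEARCHES", "search_stores_private", 2)]
```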
API Reference
application_kit.fastapi.metrics
FastAPI dependencies for metrics job enqueueing.
Two flavours share the same core path:
- :class:AddJobWithContextToMetrics: context-based. The endpoint populates a typed context object and the dependency derives (product, kind) from it post-yield. Use when the kind depends on request-time state (token visibility, query flags, resolved dataset flavour, etc.).
- :class:AddJobToMetrics: static product/kind known at route definition time. Implemented as a thin subclass of :class:AddJobWithContextToMetrics with a built-in :class:JobContext carrying count. Endpoints that declare it as a typed Depends parameter can mutate ctx.count to emit element-based counters (e.g. count = len(results)).
AddJobToMetrics
AddJobToMetrics(product, kind, *, strict=False)
Bases: AddJobWithContextToMetrics[JobContext]
Static-kind dependency.
Thin wrapper over :class:AddJobWithContextToMetrics that instantiates a
:class:JobContext carrying the fixed (product, kind) pair. Endpoints
declaring this as a typed Depends parameter receive the JobContext
and may mutate ctx.count to emit element-based counters.
Source code in application_kit/fastapi/metrics.py
AddJobWithContextToMetrics
AddJobWithContextToMetrics(
context_factory, *, strict=False
)
Dependency that yields a context for the endpoint to populate,
then assembles (product, kind) and enqueues a metrics job.
:param context_factory: Zero-arg callable returning a fresh instance
implementing :class:~application_kit.metrics.context.KindContext.
Typically a lambda or functools.partial that closes over the
context's construction arguments — for example
lambda: MyContext(dataset=..., ...).
:param strict: When True, an unregistered (product, kind) pair
raises :class:~application_kit.metrics.exceptions.InvalidCounterRequest
(surfacing as a 500 via FastAPI). Intended for tests and staging.
When False (production default), the error is logged and tagged
on the Datadog root span, and the job is dropped so end-user requests
are not affected by metrics misconfiguration.
The context's count attribute is forwarded to
:func:add_job_to_queue_for_token.
Source code in application_kit/fastapi/metrics.py
JobContext
JobContext(product, kind)
Static context for :class:AddJobToMetrics.
Carries the route-level (product, kind) plus a mutable count so
endpoints can emit element-based counters by assigning to ctx.count.
Source code in application_kit/fastapi/metrics.py
application_kit.metrics.context
Context protocol for dynamic metrics kind assembly.
App developers implement :class:KindContext to derive (product, kind)
from per-request state (e.g. autocomplete dataset flavour, public/private key).
The FastAPI dependency :class:application_kit.fastapi.metrics.AddJobWithContextToMetrics
instantiates the context, yields it to the endpoint for field population, then
calls assemble_product_and_kind post-yield to enqueue the metrics job.
KindContext
Bases: Protocol
Derive a metrics (product, kind) pair from a project token.
Implementations hold per-request state as instance fields (typically
:class:typing.Literal-typed to enable exhaustive testing via
:func:application_kit.metrics.testing.exhaust_context).
count is forwarded to the metrics queue. Endpoints may mutate it for
element-based counting (e.g. ctx.count = len(results)). Implementations
should default it to 1.
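A plausible shape of the protocol, reconstructed from this description (the real definition lives in application_kit.metrics.context and may differ in detail):

```python
from typing import Protocol, runtime_checkable


@runtime_checkable
class KindContextSketch(Protocol):
    """Assumed shape of KindContext: a mutable count plus the assembly hook."""

    count: int

    def assemble_product_and_kind(self, token) -> tuple[str, str]: ...


class StoresContext:
    """Example implementation: static fields, defaulted count."""

    count: int = 1

    def assemble_product_and_kind(self, token) -> tuple[str, str]:
        return "STORES_SEARCHES", "all_stores"


# runtime_checkable only verifies attribute presence, but that is enough
# to sanity-check an implementation in a test.
assert isinstance(StoresContext(), KindContextSketch)
```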
application_kit.metrics.registry
Registry of all valid product/kind combinations for metrics counting.
Adding a new kind here is step 1. Step 2 is adding the corresponding pricing entry in the metrics service (counter/pricing_YYYY.py).
MetricsProduct
module-attribute
```python
MetricsProduct = Literal[
    "ADDRESS",
    "DATASETS",
    "DISTANCE",
    "DISTANCE_ASYNC",
    "DISTANCE_WITH_TOLLS",
    "DISTANCE_WITH_TRAFFIC",
    "GEOLOCATION",
    "INDOOR",
    "IP_USAGE",
    "LOCALITIES",
    "LOCALITIES_ADDRESS_UK",
    "LOCALITIES_IE",
    "MAPS",
    "RECO",
    "STORES",
    "STORES_DATABASE",
    "STORES_INTERNAL_USAGE",
    "STORES_SEARCHES",
    "TRAFFIC",
    "TRANSIT",
    "W3W",
    "ZONES_DATABASE",
]
```
VALID_COUNTERS
module-attribute
```python
VALID_COUNTERS = {
    "ADDRESS": frozenset(
        {
            "autocomplete",
            "autocomplete_private",
            "autocomplete_public",
            "details_full_private",
            "details_full_public",
            "details_location_private",
            "details_location_public",
            "geocode",
            "geocode_private",
            "geocode_public",
            "reverse_geocode",
            "reverse_geocode_private",
            "reverse_geocode_public",
        }
    ),
    "DATASETS": frozenset({"hosted_size", "intersect", "tile"}),
    "DISTANCE": frozenset(
        {
            "distance_matrix_elements_private",
            "distance_matrix_elements_public",
            "distance_matrix_elements_traffic_private_old",
            "distance_matrix_elements_traffic_public_old",
            "distance_matrix_request",
            "distance_matrix_request_private",
            "distance_matrix_request_public",
            "distance_matrix_request_traffic_private",
            "distance_matrix_request_traffic_public",
            "isochrone",
            "isochrone_private",
            "isochrone_public",
            "route",
            "route_private",
            "route_public",
            "route_traffic_private",
            "route_traffic_public",
        }
    ),
    "DISTANCE_ASYNC": frozenset(
        {
            "distance_matrix_async_elements",
            "distance_matrix_async_request",
        }
    ),
    "DISTANCE_WITH_TOLLS": frozenset({"tolls", "tolls_traffic"}),
    "DISTANCE_WITH_TRAFFIC": frozenset(
        {
            "distance_matrix_elements_traffic_private",
            "distance_matrix_elements_traffic_public",
            "distance_matrix_request_traffic_private",
            "distance_matrix_request_traffic_public",
            "route_traffic",
            "route_traffic_private",
            "route_traffic_public",
            "tolls",
            "tolls_traffic",
        }
    ),
    "GEOLOCATION": frozenset(
        {
            "position_private",
            "position_public",
            "stores_private",
            "stores_public",
            "timezone",
        }
    ),
    "INDOOR": frozenset(
        {
            "distance_matrix_elements",
            "load",
            "route",
            "route_waypoints",
            "search",
            "search_autocomplete",
            "search_autocomplete_distance_matrix",
            "search_distance_matrix",
            "update_poi",
            "upload_features",
            "upload_features_elements",
        }
    ),
    "IP_USAGE": frozenset({"partial", "success"}),
    "LOCALITIES": _build_localities_kinds(),
    "LOCALITIES_ADDRESS_UK": frozenset(
        {
            "details_address_uk_full",
            "details_address_uk_location",
            "geocode_address",
            "geocode_address_uk",
            "geocode_address_uk_full",
            "geocode_address_uk_location",
            "reverse_geocode_address_uk",
        }
    ),
    "LOCALITIES_IE": frozenset(
        {
            "details_address_ie",
            "details_postal_code_ie",
            "geocode_address_ie",
            "geocode_postal_code_ie",
            "reverse_geocode_address_ie",
            "reverse_geocode_postal_code_ie",
        }
    ),
    "MAPS": frozenset({"load", "load_satellite", "static_map"}),
    "RECO": frozenset(
        {
            "get_store_recommendation",
            "position_recommendation",
        }
    ),
    "STORES": frozenset({"map_load"}),
    "STORES_DATABASE": frozenset(
        {
            "import_stores",
            "remove_all_stores",
            "remove_store_by_id",
            "remove_stores",
        }
    ),
    "STORES_INTERNAL_USAGE": frozenset({"project_config", "search_stores_internal"}),
    "STORES_SEARCHES": frozenset(
        {
            "all_stores",
            "nearby_stores",
            "search_nearby_stores",
            "search_stores_private",
            "search_stores_public",
            "store_by_id",
        }
    ),
    "TRAFFIC": frozenset(
        {
            "distance_matrix_elements_private",
            "distance_matrix_elements_public",
            "matrix_request_private",
            "matrix_request_public",
            "route_request_private",
            "route_request_public",
        }
    ),
    "TRANSIT": frozenset({"route"}),
    "W3W": frozenset(
        {
            "autosuggest",
            "convert_to_address",
            "convert_to_address_uk",
            "convert_to_what_3_words",
        }
    ),
    "ZONES_DATABASE": frozenset(
        {
            "get_by_id",
            "import",
            "list",
            "remove",
            "remove_by_id",
            "update",
            "update_by_id",
        }
    ),
}
```
validate_counter
validate_counter(product, kind)
Validate a product/kind combination against the registry.
:raises InvalidCounterRequest: if the product is not registered or the kind is not registered for that product.
Source code in application_kit/metrics/registry.py
application_kit.metrics.testing
Test helpers for metrics counter registry and context exhaustion.
Consumers of application-kit use these helpers to guarantee their
:class:application_kit.metrics.context.KindContext implementations only
ever produce (product, kind) pairs registered in
:data:application_kit.metrics.registry.VALID_COUNTERS.
The typical pattern is to parametrise over the Cartesian product of the
context's :class:typing.Literal field domains (retrieved via
:func:typing.get_args) and a set of token variants, then assert every
assembled output is valid.
assert_valid_counter
assert_valid_counter(product, kind)
Assert (product, kind) is registered. Raises AssertionError on miss.
Converts :class:InvalidCounterRequest into :class:AssertionError so
pytest reports it as a test failure rather than an error, and preserves
the registry hint via __cause__.
Source code in application_kit/metrics/testing.py
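The conversion-with-cause pattern described above looks roughly like this (the registry lookup is replaced by a stand-in set):

```python
class InvalidCounterRequest(Exception):
    """Domain error for unregistered (product, kind) pairs."""


# Stand-in for the real VALID_COUNTERS lookup.
REGISTERED = {("STORES_SEARCHES", "all_stores")}


def assert_valid_counter_sketch(product: str, kind: str) -> None:
    """Re-raise the domain error as AssertionError, keeping it as __cause__."""
    try:
        if (product, kind) not in REGISTERED:
            raise InvalidCounterRequest(f"{product}/{kind} is not registered")
    except InvalidCounterRequest as err:
        # `raise ... from err` preserves the registry hint for pytest output.
        raise AssertionError(f"unregistered counter: {product}/{kind}") from err


try:
    assert_valid_counter_sketch("STORES_SEARCHES", "bogus")
except AssertionError as exc:
    assert isinstance(exc.__cause__, InvalidCounterRequest)
```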
iter_valid_kinds
iter_valid_kinds(product)
Yield registered kinds for a product, sorted for deterministic output.
Source code in application_kit/metrics/testing.py
exhaust_context
exhaust_context(context_factory, tokens, **field_domains)
Enumerate every (token, fields, output) triple for a context.
context_factory is any zero-arg callable returning a fresh instance
satisfying :class:KindContext — a class object with a no-arg
constructor is the typical case; functools.partial or a lambda
can close over extra construction arguments.
field_domains maps context field names to iterables of allowed values
(typically typing.get_args(SomeLiteral)). The Cartesian product of all
iterables plus tokens is generated; for each combination a fresh
context instance is built, fields are assigned, and
assemble_product_and_kind is invoked.
Source code in application_kit/metrics/testing.py
assert_context_exhaustive
assert_context_exhaustive(
context_factory, tokens, **field_domains
)
Run :func:exhaust_context and assert every output is registered.
Failure message includes the offending token, field assignment, and
assembled (product, kind) so the faulty combination is trivial to
reproduce.
:raises ValueError: if tokens is empty or any entry in field_domains
is empty; an empty iteration would pass silently and give a false
sense of coverage.
Source code in application_kit/metrics/testing.py
application_kit.django.decorators
count_request
count_request(
request_name, product, name_lambda=None, *, strict=False
)
Counts the request; it must be placed after the authenticate_key or authenticate_user decorator.
(product, kind) is validated against
:data:application_kit.metrics.registry.VALID_COUNTERS on every request.
On an unregistered pair the miss is logged (metrics.invalid_counter)
and tagged on the Datadog root span; in production the request is not
failed and the job is dropped. Pass strict=True to surface the miss
as an :class:InvalidCounterRequest instead.
Source code in application_kit/django/decorators.py