Custom ML, batch scoring & APIs
Service
When a spreadsheet is not enough: a small API, a recurring scoring job, document Q&A over your own corpus, or monitoring checks on batches, delivered as scoped engagements. Public reference implementations live under Portfolio; here the code targets your problem and your constraints.
Overview
This is the paid counterpart to the five ML-system projects on the site: churn-style serving, batch inference, RAG over internal documents, feature consistency between training and scoring, or drift and quality reports, each delivered as a defined milestone rather than a science experiment.
We agree inputs, outputs, hosting assumptions, and handover. You get repositories or deployment artefacts you can run, plus documentation an engineer can follow.
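To make one of the items above concrete: train/score feature consistency usually comes down to a small check that records the feature schema at training time and validates every scoring batch against it before the model runs. A minimal stdlib-only sketch, with hypothetical feature names and a deliberately simple type map (a real engagement would version this schema next to the model artefact):

```python
# Hypothetical schema captured at training time; in practice this would be
# written alongside the model artefact and versioned with it.
TRAIN_SCHEMA = {"tenure_months": "float", "plan": "str", "support_tickets": "int"}

# Accept ints where floats are expected, a common relaxation.
TYPE_MAP = {"float": (int, float), "int": int, "str": str}

def check_features(row: dict, schema: dict = TRAIN_SCHEMA) -> list:
    """Return a list of human-readable problems; empty means the row matches
    the training-time schema."""
    problems = []
    for name, typ in schema.items():
        if name not in row:
            problems.append(f"missing feature: {name}")
        elif not isinstance(row[name], TYPE_MAP[typ]):
            problems.append(f"wrong type for {name}: expected {typ}")
    # Extra fields often signal a silently changed upstream pipeline.
    for name in row.keys() - schema.keys():
        problems.append(f"unexpected feature: {name}")
    return problems
```

Running this check at the top of a scoring job turns silent feature drift into a loud, attributable failure, which is the point of the deliverable.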
What you get: Scoped build (e.g. API + model artefact, batch CLI, or monitoring report job), tests or checks where appropriate, and a short runbook.
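As a sense of scale for the "batch CLI" deliverable mentioned above: a batch scoring job can be as small as one script that loads a model artefact, scores a batch of rows, and emits results. A minimal stdlib-only sketch, where the weights, bias, and field names are hypothetical stand-ins for a real model artefact:

```python
import json
import math

# Hypothetical model artefact: a logistic-style scorer with fixed weights.
# In a real engagement these would be loaded from a versioned file.
WEIGHTS = {"tenure_months": -0.04, "support_tickets": 0.3}
BIAS = -0.5

def score_row(row: dict) -> float:
    """Return a churn-style probability in (0, 1) for one input row."""
    missing = WEIGHTS.keys() - row.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    z = BIAS + sum(w * float(row[k]) for k, w in WEIGHTS.items())
    return 1.0 / (1.0 + math.exp(-z))

def score_batch(rows: list) -> list:
    """Score a batch, keeping row ids alongside rounded scores."""
    return [{"id": r["id"], "score": round(score_row(r), 4)} for r in rows]

if __name__ == "__main__":
    batch = [
        {"id": "a1", "tenure_months": 24, "support_tickets": 1},
        {"id": "b2", "tenure_months": 2, "support_tickets": 5},
    ]
    print(json.dumps(score_batch(batch), indent=2))
```

The scoped build wraps exactly this shape in whatever your environment needs: input validation, logging, scheduling, and a runbook for whoever operates it.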
Who it is for: Teams that already know they need model-backed logic or retrieval, and want a finite slice of work before committing to a larger programme.