## Can We Deploy This?

### On-premise / air-gapped deployment
*InfoSec and data governance clearance*

| Status | Notes |
| --- | --- |
| ✔ | Local model hosting; AWS Bedrock supported |
| ✔ | Runs locally in R |
| Partial | R scripts run locally; the web tool is external |
| ✔ | Runs locally in Python |
| ✔ | Runs locally; requires a GPU |
| Partial | Docker available; the web app is cloud-hosted |
### Pipeline integration (Python + R)
*Drop-in to existing Scanpy/Seurat workflows*

| Status | Notes |
| --- | --- |
| Both | pip install (AnnData) + CyteTypeR (Seurat) |
| R only | Bioconductor package |
| R only | R scripts; no native Python package |
| Python only | PyPI package |
| Primarily Python | R via reticulate bridge |
| Primarily R | Accepts h5ad via web upload |
### GPU infrastructure required
*IT procurement and compute cost implications*

| Status | Notes |
| --- | --- |
| No | |
| No | |
| No | |
| No | |
| Effectively yes | Documented limitation of the method |
| No | |
### Commercial licensing
*Legal and procurement clarity*

| Status | Notes |
| --- | --- |
| Explicit commercial license | Open source for academia; commercial terms for industry |
| Open source | GPL-3; copyleft implications |
| Open source | GPL-3; copyleft implications |
| Open source | MIT-style |
| Open source | BSD 3-Clause |
| Open source | GPL-3; copyleft implications |
### Reproducibility
*Regulatory and QC requirement: same input, same output*

| Status | Notes |
| --- | --- |
| ✔ | Deterministic |
| ✔ | Deterministic |
| ✔ | Deterministic |
| ✔ | Deterministic |
| Partial | Stochastic training; seed-dependent |
| ✔ | Deterministic |
## Does It Reduce Risk in Our Decision-Making?

### Reference data dependency
*Operational overhead of curating and maintaining reference datasets per project*

| Status | Notes |
| --- | --- |
| Not required | Operates from expression data; accepts marker genes to guide annotation |
| Required | Accepts any custom labelled dataset |
| Not required | Built-in marker DB; custom markers via XLSX |
| Required | User-trained custom models supported |
| Required | Accepts any labelled dataset for training |
| Required | Limited to HuBMAP curated references only |
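The "not required" rows above rest on marker-driven annotation: a candidate label needs only a gene list, not a curated, labelled reference dataset. A toy sketch of the idea (the marker sets are common illustrative examples, not a validated panel, and real tools weight evidence far more carefully):

```python
# Toy marker-set scorer: a candidate cell type is defined by a gene list,
# so no labelled reference dataset has to be curated or maintained.
MARKERS = {
    "T cell": {"CD3D", "CD3E", "TRAC"},
    "B cell": {"CD79A", "CD79B", "MS4A1"},
    "Monocyte": {"CD14", "LYZ", "FCGR3A"},
}

def score_cluster(expressed_genes):
    """Rank candidate labels by the fraction of their markers detected."""
    expressed = set(expressed_genes)
    scores = {
        label: len(markers & expressed) / len(markers)
        for label, markers in MARKERS.items()
    }
    best = max(scores, key=scores.get)
    return best, scores

label, scores = score_cluster({"CD3D", "TRAC", "LYZ"})
# T cell matches 2 of 3 markers, monocyte 1 of 3, B cell 0.
assert label == "T cell"
```

Reference-based tools invert this trade: they need a labelled dataset per project, but inherit whatever granularity and disease coverage that reference happens to contain.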
### Evidence trail per annotation
*Defend calls in target review, regulatory, and cross-functional settings*

| Status | Notes |
| --- | --- |
| ✔ | Full reasoning chain: markers, literature, reviewer assessment |
| — | |
| — | |
| — | |
| — | |
| — | |
### Cell Ontology standardisation
*Consistent terminology across projects, sites, and CRO partners*

| Status | Notes |
| --- | --- |
| ✔ | Automatic CL ID assignment per annotation |
| — | |
| — | |
| Partial | CL IDs in encyclopedia; not in annotation output by default |
| — | |
| — | |
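Why CL IDs matter operationally: two sites reporting "T cell" and "T lymphocyte" mean the same thing, but only a shared ontology ID makes that machine-checkable. A toy normalisation step (the CL IDs are real Cell Ontology terms; the synonym table and function are illustrative, and a production pipeline would query the ontology itself):

```python
# Toy post-processing step: map free-text labels from different sites or
# CRO partners onto Cell Ontology IDs so reports share one vocabulary.
CL_IDS = {
    "t cell": "CL:0000084",
    "t lymphocyte": "CL:0000084",  # synonym collapses to the same ID
    "b cell": "CL:0000236",
}

def to_cl_id(label):
    """Return the CL ID for a free-text label, or None if unmapped."""
    return CL_IDS.get(label.strip().lower())

# Differently worded labels become comparable once standardised.
assert to_cl_id("T lymphocyte") == to_cl_id("t cell") == "CL:0000084"
assert to_cl_id("mystery population") is None
```

Unmapped labels returning `None` is deliberate: a gap should surface for review rather than silently pass through as a new term.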
### Disease context handling
*TME, inflammatory tissue, and disease-state datasets*

| Status | Notes |
| --- | --- |
| ✔ | Adapts reasoning to disease vs. healthy contexts natively |
| Partial | Only if the reference contains disease labels |
| Partial | SNV calling distinguishes malignant from healthy cells |
| Partial | Only if the model was trained on disease data |
| Partial | Only if the reference contains disease labels |
| — | Training data explicitly excludes cancer |
### Functional state resolution
*Distinguish actionable cell states from coarse labels*

| Status | Notes |
| --- | --- |
| ✔ | Activation, exhaustion, polarization with marker-level support |
| — | Constrained by reference label granularity |
| — | Constrained by database categories |
| Partial | Granularity depends on the model used |
| — | Constrained by training labels |
| Partial | Multi-level hierarchy (L1/L2) available |
### Cluster-level interrogation
*Query the biology behind each annotation call directly*

| Status | Notes |
| --- | --- |
| ✔ | Chat interface per cluster: ask biological questions, explore evidence |
| — | |
| — | |
| — | |
| — | |
| — | |