Neuroimaging Biomarkers · CNS Drug Development
Luciana Bonnot, PhD — LuMMiens Consulting
Sponsor-side strategy for neuroimaging-derived digital biomarkers (NdDB) and digital medical device evidence. Helping pharma teams make early, defensible decisions — before protocol commitments, vendor lock-in, or costly trial amendments.
20+ — Years CNS Neuroimaging
5 — Steps to Endpoint-Grade
6 — Identified Failure Modes
Not ready to reach out? Download the NdDB Readiness Map to assess your program first.
Where Does Your Program Stand?
Phase II NdDB adoption most often fails not because the biomarker signal is weak, but because the acquisition-to-output measurement chain is not controlled to the level the intended endpoint role requires.
The deeper cause is ownership. No single function owned the full governance chain — from clinical intent through acquisition, QC, and processing to reported outputs. Clinical development defined the endpoint. Imaging managed the measurement. Biostatistics wrote the SAP. Nobody owned the gaps between them, and nobody found out until commitments were already locked.
If that sounds familiar, you are not alone. Select the failure mode closest to your situation.
The NdDB's intended trial role has not been translated into a minimum adoption package before protocol and SAP commitments are locked.
Multi-site scanner variability, protocol inconsistencies, upgrades, and site drift are not controlled to the level the intended endpoint role requires — leaving the NdDB vulnerable to measurement noise that can overlap the biological signal.
The computation chain lacks version control, end-to-end QC lineage, and prespecified reprocessing rules needed for the intended decision role — even when the biomarker concept is clinically valid.
What counts as a change, who approves it, and how comparability is protected across updates has not been specified before the trial runs — creating late governance amendments and auditability gaps.
Vendor contracts do not embed the required audit access, delivery specifications, or change-control clauses — creating lock-in risk and uncontrolled pipeline drift during the study.
FDA clearance or CE marking is assumed to validate the trial claim, but the device intended use does not match the trial role — or SaMD lifecycle controls have not been applied proportionally to the intended endpoint criticality.
The Framework
A practical 3-step decision framework designed to avoid two common traps: overbuilding governance too early and underbuilding it until it is too late. Steps can be completed in a structured cross-functional discussion — before protocol design, vendor scope, and site plans are locked.
1. Triage — eligibility gate & output classification
2. Select — pipeline maturity grade
3. Apply AI — as a governance multiplier
Step 1 — Triage the Candidate Biomarker
Axis 1 — Eligibility gate
Is the intended clinical variable defined? Is the output unambiguous and prespecifiable? Is acquisition feasible across sites? Candidates that fail this gate should not proceed to classification or maturity assessment.
Axis 2 — Output classification
Routes the prespecified output to one of eight classes (A–H) based on clinical intent, establishment status, and radiology framing — from qualified-reader judgement (A) to QC/operational metrics (H). Classification applies per output variable, not per delivery package.
Key question
"Does this output unambiguously match the intended clinical variable for the prespecified trial role?"
Step 2 — Select the Pipeline Maturity Grade
Research-grade
Flexible workflows; outputs may evolve. Traceability limited to research practice. Credible only for exploratory analyses and hypothesis generation.
GCP-grade
Prespecified outputs, locked versions, centralized QC, end-to-end traceability from DICOM to reported outputs. Credible for secondary/supportive endpoints and stratification when fully controlled end-to-end.
Cleared / CE-marked software
Device-grade lifecycle controls. Strong documentation, change control, operational stability by design. Clearance alone does not validate the trial claim — version and intended use must align.
Key question
"Which trial roles are credible for this output, given the delivery system we can realistically govern?"
Step 3 — Apply AI as a Governance Multiplier
Locked AI model → Trial fit: HIGH
Controlled updates following a defined change-control process with versioned documentation. Lock the model version for the study period; introduce updates between studies with bridging evidence.
Continuously adapting AI → Trial fit: LOW
Continuous change undermines comparability and complicates GCP traceability unless tightly bounded and fully prespecified. If the output can vary over the life of the study because the model is not locked, avoid decision-driving use.
Key question
"Is version locking, update policy, and change control proportionate to the trial criticality of this output?"
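The three steps above compose into a simple decision rule: an output that fails the eligibility gate goes no further; one that passes is limited to the trial roles its delivery grade can credibly support; and an unlocked, continuously adapting model restricts it further to exploratory use. A minimal sketch in Python, where the field names, grade labels, and role mappings are illustrative assumptions rather than the framework's normative definitions:

```python
from dataclasses import dataclass

# Step 2 (assumed mapping): pipeline maturity grade -> trial roles it can
# credibly support, per the grade descriptions above.
CREDIBLE_ROLES = {
    "research": {"exploratory"},
    "gcp": {"exploratory", "secondary", "stratification"},
    "device": {"exploratory", "secondary", "stratification", "primary"},
}

@dataclass
class NddbOutput:
    name: str
    clinical_variable_defined: bool   # Step 1, Axis 1: eligibility gate
    output_prespecifiable: bool       # Step 1, Axis 1: eligibility gate
    acquisition_feasible: bool        # Step 1, Axis 1: eligibility gate
    maturity_grade: str               # Step 2: "research" | "gcp" | "device"
    model_locked: bool                # Step 3: model version locked for the study?

def credible_roles(out: NddbOutput) -> set[str]:
    """Return the trial roles this output can credibly serve; empty set
    means the candidate fails the eligibility gate and stops here."""
    # Step 1: a candidate failing any gate question does not proceed.
    if not (out.clinical_variable_defined
            and out.output_prespecifiable
            and out.acquisition_feasible):
        return set()
    roles = set(CREDIBLE_ROLES[out.maturity_grade])
    # Step 3: a continuously adapting, unlocked model should not drive
    # decisions; restrict it to exploratory use.
    if not out.model_locked:
        roles &= {"exploratory"}
    return roles
```

For example, a GCP-grade output with a locked model is credible for exploratory, secondary, and stratification roles; the same output with an unlocked model drops to exploratory only.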
Three Foundational Principles
Digital biomarkers are constructs.
Regulation applies to the device or software that generates or uses the measurement, and to the claims tied to its intended use — not to the signal itself.
Intended use in the trial drives burden.
Moving from exploratory use to stratification, enrichment, or a primary endpoint sharply increases expectations for prespecification, traceability, and change control.
Pipeline maturity is a deliberate decision.
The same candidate biomarker can be delivered via research-grade, GCP-grade, or device-grade pathways. Each enables different credible uses. Maturity grade applies to the delivery system, not the biomarker concept.
The Operating Model
A sponsor-side NdDB oversight and decision-visibility model built on three pillars: early readiness gates before protocol/SAP lock, prespecified change-control and comparability rules, and a small set of QC reliability deliverables from vendors and core labs. Implemented through roles, decision gates, and standard vendor outputs — without replacing core labs or building new in-house infrastructure.
1. Align Endpoint
2. Align Acquisition
3. Change Control
4. Exception Oversight
5. Capture Lessons
Before any pipeline, vendor, or protocol decision is made, the NdDB's intended trial role must be explicit — and shared across the functions that will execute it. Without this alignment, different teams hold different assumptions about what "reliable enough" means. Those assumptions collide after commitments are locked, when changes are expensive and options are constrained.
The goal of this step is a shared, documented definition of the endpoint role — and a named integrator accountable for connecting clinical intent to technical and operational constraints before the program is committed to a protocol, a vendor, or a delivery model.
What this step prevents
Late protocol and SAP amendments driven by misaligned endpoint expectations. The most expensive discovery in NdDB adoption is finding out — after protocol lock — that clinical intent, acquisition reality, and processing constraints were never reconciled.
The acquisition-to-output chain is rarely owned end-to-end by a single function. Sites, imaging core labs, and processing vendors each hold part of it — and mismatches between them can silently compromise endpoint stability even when the biomarker signal is clinically robust and the images are technically adequate.
This step confirms that what sites can realistically deliver, what the core lab will QC, and what the processing pipeline requires are mutually consistent — and that this has been verified before the protocol and vendor commitments are locked.
What this step prevents
Reprocessing cycles, unplanned site retraining, and the silent sample-size erosion that follows late QC failures. Particularly critical in psychiatry and rare diseases, where acquisition variability can easily overlap the biological signal.
Minor updates to software versions, QC rules, or processing settings can shift endpoint values. If no one has defined what constitutes a change, who approves it, and how comparability is protected before those changes occur, the endpoint becomes progressively harder to defend — regardless of how well it was designed.
This step establishes a prespecified change-control framework and traceability standard before the trial runs — so every endpoint value can be reconstructed, every change has an approver, and vendor relationships remain auditable throughout the study lifecycle.
What this step prevents
Late governance amendments, vendor lock-in, and unexplained shifts in endpoint values. Sponsors who define change control upfront resolve processing questions quickly — rather than opening protocol-level investigations mid-study.
Operational QC confirms that images were received and processed. It does not confirm that the endpoint remains stable and defensible under the governance standard required for the intended decision role. Drift, scanner upgrades, override patterns, and processing inconsistencies require a separate, sponsor-side view.
During execution, oversight is exception-based: the sponsor is engaged when reliability signals degrade, not as a continuous manual reviewer. Resolutions are documented so the endpoint remains reconstructable and audit-ready throughout the study.
What this step prevents
Incorrect Phase II conclusions driven by avoidable noise, drift, or undetected processing changes. Decision confidence improves because the measurement chain is monitored — not assumed stable. The value is highest in indications with subtle signals or small samples.
Each Phase II NdDB program generates hard-won knowledge: what controlled acquisition variability effectively, what governance rules held under pressure, and what the next program should establish earlier. Without a structured capture at key milestones, that knowledge stays informal — and the next program pays the same discovery costs.
This step converts program experience into reusable governance artifacts — reducing the effort and cost of the next engagement, and building portfolio-level readiness that compounds across indications.
What this step prevents
Each program repeating the same governance discovery costs. The operating structure established in one program becomes the backbone for the next — applicable across CNS indications and alongside other biomarker modalities with analogous measurement chains.
Structured Readiness Review
A structured readiness review reduces late-rework risk by aligning stakeholders on a shared definition of the NdDB, its trial role, the required pipeline maturity, and a proportionate evidence and governance plan — before commitments are locked.
Request a Readiness Review
Work With Me
Whether you need a structured readiness review before protocol lock, expert input on a specific governance challenge, or a longer engagement to design and implement the full operating model — I respond within one business day.
Luciana Bonnot, PhD
LuMMiens Consulting · France
Scientific consultant in neuroimaging biomarkers and digital medical device evidence strategy. 20+ years coordinating multi-site neuroimaging studies and working with cross-functional teams across neurology and psychiatry programs — translating imaging-derived measures into decision-ready outputs that are feasible across sites, traceable end to end, and aligned with the intended trial role.
Developer of the NdDB Readiness Map, a practical decision framework for Phase II sponsors. Certified Innovation Path on Digital Medical Devices — EIT Health. Full profile →
Not ready to reach out?
Download the NdDB Readiness Map — a practical decision framework for Phase II sponsors, grounded in peer-reviewed methodology.
Download the Readiness Map ↓