Based on delivery experience across multiple RoI programmes, many teams begin with a 10–18 week delivery assumption. In practice, that is often an optimistic view of the first usable version rather than the full path to a submission-ready register. Small standalone firms often land around 10–16 weeks, mid-sized institutions around 14–22 weeks, and large groups around 18–30+ weeks. The real timeline is the time needed to move from that early usable dataset to a version that can survive validation and correction cycles.
Most programmes move quickly at the start. Contracts are listed, vendors identified, fields filled. It feels controlled.
The friction usually appears when validation starts, and delays begin to emerge in handoffs between teams where ownership and sequencing break down.
The real RoI implementation timeline has three phases, and stabilisation dominates elapsed time
| Phase | What happens | Where time expands |
|---|---|---|
| Preparation (often 4–8 weeks in practice) | Scope, ownership defined | Sequencing gaps create downstream rework |
| Population (often 2–4 weeks after preparation) | Cross-functional reconciliation | Owner misalignment slows progress |
| Stabilisation (often 4–6 weeks in practice) | Validation, correction, resubmission | Iterative cycles extend elapsed time |
These are experience-based phase ranges observed across RoI programmes. They reflect how time is typically consumed in practice, not regulatory timelines, and total delivery can extend further when stabilisation triggers additional validation and rework cycles.
Copla Registry
Manage the RoI as a data lifecycle
RoI delivery often breaks down in the handoffs between preparation, population, and stabilisation. Copla Registry maintains consistency across stages and reduces rework from validation failures.
- Support structured RoI delivery from preparation to stabilisation
- Maintain consistency across records throughout the lifecycle
- Reduce rework caused by validation and data quality issues

Preparation (often 4–8 weeks in practice): where timelines are quietly set
This phase is often underestimated in early delivery assumptions because sequencing and ownership look settled before they are tested.
Teams decide early what to include, who owns what, and how records will be structured so work can begin.
Functional mapping adds another dependency. Teams often discover early that mapping critical and important functions is still unsettled, and that slows population.
Weak preparation rarely delays the start. It extends everything that follows.
Population (often 2–4 weeks after preparation): coordination replaces speed
Population looks straightforward at first.
This is often the phase teams have in mind when they make early delivery estimates, because visible progress is still happening.
That expectation rarely holds.
Each record depends on input from several teams, and progress slows when those inputs do not line up.
This is also where mandatory data fields become difficult to extract consistently, because no single team holds the full picture.
What typically blocks exit from population
Moving into stabilisation depends on whether the dataset can move forward without reopening earlier decisions.
In practice, programmes stall mainly because:
- identifiers are incomplete or inconsistent across templates
- ownership confirmation is missing across key records
A practical signal of progress is whether the dataset can pass a validation cycle without forcing teams back into earlier decisions.
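Both blockers can be pre-checked before a formal validation run. A minimal sketch in Python, assuming records are held as plain dictionaries; the field names are illustrative, not the actual EBA template columns:

```python
# Illustrative pre-validation check for the two common blockers:
# incomplete/inconsistent identifiers and missing ownership confirmation.
# Field names below are hypothetical, not EBA template codes.

def population_exit_blockers(records):
    """Return a list of issues that would block exit from population."""
    issues = []
    known_ids = {r["provider_id"] for r in records if r.get("provider_id")}
    for i, rec in enumerate(records):
        if not rec.get("provider_id"):
            issues.append(f"record {i}: missing provider identifier")
        if rec.get("parent_provider_id") and rec["parent_provider_id"] not in known_ids:
            issues.append(f"record {i}: references unknown provider "
                          f"{rec['parent_provider_id']!r}")
        if not rec.get("owner_confirmed"):
            issues.append(f"record {i}: ownership not confirmed")
    return issues

records = [
    {"provider_id": "PRV-001", "owner_confirmed": True},
    {"provider_id": "", "owner_confirmed": True},             # missing identifier
    {"provider_id": "PRV-003", "parent_provider_id": "PRV-999",
     "owner_confirmed": False},                               # dangling ref + no owner
]

for issue in population_exit_blockers(records):
    print(issue)
```

A clean (empty) result from a check like this is one concrete version of the "can pass a validation cycle without reopening decisions" signal.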
Stabilisation (often 4–6 weeks in practice): validation loops reset the timeline
Once validation starts, the dataset is tested against the EBA technical checks and validation rules for DORA RoI reporting.
At this stage, timelines move in cycles rather than forward steps:
validation error → adjustment → regeneration → revalidation → resubmission
A single failed pass can move the programme back from stabilisation into active remediation.
What resets elapsed time after a “complete” version
The point where timelines stretch most is often after a version that looks finished.
A dataset can appear complete, pass an initial validation, and still trigger a reset when changes are applied.
Illustrative scenario: one change that resets elapsed time
- A provider identifier is corrected after validation.
- Related records still reference the old value.
- Validation fails on re-run.
- Ownership confirmation is reopened.

One change moves the programme back into active remediation.
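The scenario above is, at bottom, a referential-integrity failure. A minimal sketch, with hypothetical field names, of why a single identifier correction ripples outward:

```python
# Hypothetical illustration of the reset: a provider identifier is corrected
# in the provider list, but a dependent record still carries the old value,
# so the next validation pass fails on a dangling reference.

providers = {"PRV-OLD": {"name": "Acme Cloud"}}
contracts = [{"contract_id": "C-1", "provider_id": "PRV-OLD"}]

def dangling_references(providers, contracts):
    """Contracts whose provider identifier no longer exists."""
    return [c["contract_id"] for c in contracts
            if c["provider_id"] not in providers]

# Before the correction, the dataset is internally consistent.
assert dangling_references(providers, contracts) == []

# The identifier is corrected in one place only...
providers["PRV-NEW"] = providers.pop("PRV-OLD")

# ...and revalidation now fails until every related record is updated too.
print(dangling_references(providers, contracts))  # ['C-1']
```

Until the related records are repaired, each revalidation run rediscovers the same break, which is why one correction can restart the whole cycle.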
What evidence shows the programme is ready for supervisory submission
Programmes that reach submission show repeatability.
In practice, readiness is visible when:
- regeneration no longer introduces new structural breaks
- validation passes consistently after incremental changes
- ownership for fixes is stable across teams
- outputs can be generated and packaged without rework
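The first two signals can be made testable by diffing successive generations of the output. A minimal sketch, assuming the register can be exported as rows of dictionaries; the export function here is a hypothetical stand-in:

```python
# Hypothetical repeatability check: regenerate the register twice and
# confirm the second run introduces no structural differences.
import json

def generate_register(records):
    # Stand-in for the real export step; sorting makes output deterministic.
    return sorted(records, key=lambda r: r["provider_id"])

def structural_diff(run_a, run_b):
    """Rows present in one generation but not the other."""
    a = {json.dumps(r, sort_keys=True) for r in run_a}
    b = {json.dumps(r, sort_keys=True) for r in run_b}
    return a ^ b  # symmetric difference: empty means the runs match

records = [{"provider_id": "PRV-001", "rank": 1},
           {"provider_id": "PRV-002", "rank": 2}]

first = generate_register(records)
second = generate_register(records)
print(len(structural_diff(first, second)))  # 0: regeneration is repeatable
```

An empty diff after an incremental change is the point at which regeneration stops being a source of new breaks.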
What “submission-ready” actually means in practice
- Complete = fields are filled and internally plausible
- Submittable and resilient to change = identifiers align across templates, validation checks pass, and updates do not break relationships
This also introduces a final dependency: the packaging and generation steps required for submission.
Keep the RoI stable under change
RoI datasets often break as updates introduce inconsistencies. Copla Registry maintains alignment across identifiers and relationships to keep the register submission-ready.
What actually drives RoI delivery timelines
- Scope size: number of contracts and providers
- Coordination model: number of teams involved
- Rework exposure: how often earlier decisions need to be revisited
Group environments increase cycles, not just scope
One internal service can support several entities. That same service can depend on external providers.
Fixes rarely stay local. A change made for one entity often requires checks across the group.
Supply-chain scope extends timelines because firms must identify and link all providers in the same ICT service supply chain.
For services supporting critical or important functions, they must also include the subcontractors that effectively underpin delivery.
Those relationships then have to be reported consistently through rank and upstream-link fields.
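That linkage requirement is also checkable: every subcontractor entry should point at an upstream provider that actually exists in the chain. A minimal sketch, with illustrative field names rather than the actual EBA template columns:

```python
# Illustrative supply-chain linkage check: every entry below rank 1
# (a subcontractor) must reference an upstream provider in the chain.
# Field names are illustrative, not EBA template codes.

chain = [
    {"provider_id": "PRV-001", "rank": 1, "upstream_id": None},
    {"provider_id": "SUB-001", "rank": 2, "upstream_id": "PRV-001"},
    {"provider_id": "SUB-002", "rank": 3, "upstream_id": "SUB-404"},  # broken link
]

def broken_links(chain):
    """Subcontractor entries whose upstream reference does not resolve."""
    ids = {e["provider_id"] for e in chain}
    return [e["provider_id"] for e in chain
            if e["rank"] > 1 and e["upstream_id"] not in ids]

print(broken_links(chain))  # ['SUB-002']
```

In group environments this check multiplies: the same chain may need to resolve consistently for every entity that consumes the service.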
Practical timeline ranges by institution type
| Institution type | Typical planning range |
|---|---|
| Small / standalone | ~10–16 weeks |
| Mid-sized | ~14–22 weeks |
| Large group | ~18–30+ weeks |
The early decisions that protect the RoI timeline
Programmes that hold their timelines make a small number of structural decisions early.
They define ownership clearly, agree how shared contracts will be represented, and sequence function mapping before large-scale data collection.
The same discipline becomes clearer when the register is treated as a structured dataset rather than a filing checklist, as shown in how structured RoI datasets are organised.
FAQ
Is 10–18 weeks a regulatory deadline?
No. It is a planning estimate used by institutions. The regulation defines structure and reporting expectations, not delivery timelines.

Why isn’t a “complete” register submittable?
Because completion is not the same as cross-template consistency. A register can be filled and still fail validation.

Why do shared contracts slow things down?
They require multiple confirmations across entities and functions, increasing reconciliation passes before validation.

What’s the typical rework loop after a validation error?
Adjust the data → repair identifiers → regenerate files → revalidate → resubmit.