The confusion usually starts with a simple assumption: if the templates are complete, the register is ready.
That holds right up until the European Banking Authority (EBA) reporting framework is applied. Then the same dataset that passed internal review starts failing validation checks it was never built to satisfy.
When discussing this transition point, I keep coming back to the same issue: teams are not separating the layers. The legal requirement, the templates, the data model, and the taxonomy get treated as one thing, when they are not.
Conflate those layers, and the register may look complete but still fail once reporting logic is applied.
Why the confusion happens
Templates are visible. They are structured, tangible, and easy to work with. That makes them feel like the register itself.
But templates are only one layer in a stack that includes legal requirements, structured data definitions, and reporting logic. Most teams work inside the template layer and assume the rest will follow.
The gap appears when the same dataset is interpreted differently — first as a completed worksheet, then as structured data subject to validation.
The reporting layers
| Layer | What it does | What it defines | Where confusion starts |
|---|---|---|---|
| Digital Operational Resilience Act (DORA) obligation | Creates the requirement to maintain and provide the register | Legal obligation and supervisory expectations | Treated as a documentation exercise |
| Implementing Technical Standards (ITS) templates | Define what must be reported | Fields, tables, and reporting instructions | Treated as a spreadsheet to complete |
| Data Point Model (DPM) | Defines how data is structured | Data points, relationships, identifiers, data dictionary | Mistaken as optional technical detail |
| Taxonomy (EBA reporting framework) | Translates the model into a reportable format | Machine-readable representation, validation logic | Confused with the data model itself |
| Output format (e.g. CSV) | Delivers the data to supervisors | File structure and packaging | Mistaken for the reporting framework |
The implementing technical standards (ITS) already make clear that the templates are linked structures with repeated identifiers that must align where required.
The core distinction
I would frame it this way:
- the templates describe what to report
- the Data Point Model (DPM) defines how that data fits together
- the taxonomy translates that structure into something systems can validate
The taxonomy is not “the DPM with another name”; it is the implementation layer derived from it.
Manage RoI across data & reporting layers
Copla Registry supports the full RoI lifecycle — from structured data capture aligned with the DPM to generation of outputs that meet EBA taxonomy and validation requirements.
What the Data Point Model (DPM) does
The DPM defines the structure of the dataset itself — the data points, the relationships between them, and the identifiers that connect records across tables.
This is where relational consistency lives. If records do not align here, no reporting layer will correct it.
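A minimal sketch of the kind of relational check the DPM implies. The table and column names here are hypothetical, not the actual DPM structure; the point is that an identifier used in one table must resolve to a record in another, and no downstream reporting layer can repair a dangling reference.

```python
# Hypothetical provider and contract tables; "provider_id" stands in for
# whatever identifier links records across tables in the real register.
providers = [
    {"provider_id": "PRV-001", "name": "Cloud Host A"},
    {"provider_id": "PRV-002", "name": "Payments SaaS B"},
]

contracts = [
    {"contract_id": "CTR-100", "provider_id": "PRV-001"},
    {"contract_id": "CTR-101", "provider_id": "PRV-999"},  # dangling reference
]

# Referential integrity check: every contract must point at a known provider.
known_ids = {p["provider_id"] for p in providers}
orphans = [c["contract_id"] for c in contracts if c["provider_id"] not in known_ids]

print(orphans)  # ['CTR-101']
```

If this check fails at the data layer, the failure will resurface later as a validation error, only with less context about where it came from.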
What the taxonomy does
The taxonomy takes that structured model and expresses it in a format that systems can process and validate.
It governs how values are represented (including coded values), how relationships must appear in the submission, and how validation rules are applied.
To me, the real issue is straightforward: the taxonomy assumes the data model is already correct. It does not repair structural weaknesses, but exposes them.
Where validation failures appear
On paper, the requirement looks manageable. Fill the templates, check the fields, submit the files.
That approach works at template level because templates are human-readable views, not the reporting structure itself. The problem appears when the same data is interpreted through validation rules.
A dataset can look complete and still fail because:
- values are not represented in the expected coded format
- relationships are not expressed consistently
- identifiers do not align across linked fields
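The three failure modes above can be sketched as simple checks. The field names, code list, and identifier pattern below are illustrative assumptions, not the actual EBA rules; the real coded values and formats come from the DPM data dictionary.

```python
import re

# Hypothetical code list and identifier pattern (not the EBA's actual rules).
ALLOWED_COUNTRY_CODES = {"DE", "FR", "NL"}
LEI_PATTERN = re.compile(r"^[A-Z0-9]{20}$")

record = {
    "entity_lei": "529900T8BM49AURSDO55",
    "country": "Germany",  # a readable label, not the expected coded value
    "linked_entity_lei": "529900T8BM49AURSDO55",
}

errors = []
# 1. Values must use the expected coded format.
if record["country"] not in ALLOWED_COUNTRY_CODES:
    errors.append("country: expected coded value, got a label")
# 2. Identifiers must be well-formed.
if not LEI_PATTERN.match(record["entity_lei"]):
    errors.append("entity_lei: malformed identifier")
# 3. Identifiers must align across linked fields.
if record["entity_lei"] != record["linked_entity_lei"]:
    errors.append("identifiers do not align across linked fields")

print(errors)  # ['country: expected coded value, got a label']
```

The record is "complete" in the template sense, yet it still produces a validation error, which is exactly the gap between human review and reporting logic.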
This is also where the simplicity of CSV output becomes misleading. CSV only defines how the data is delivered, not how it must be structured or validated. The actual complexity appears earlier, in how the dataset is prepared for submission.
The European Supervisory Authorities (ESAs) dry run confirmed this pattern. Most errors were missing mandatory data, followed by issues in consistency and relationships (EBA DORA RoI reporting FAQ).
Illustrative scenario: a provider label passes review but fails validation
A provider is selected in the template using a readable name from a dropdown. The entry looks correct, and internal review accepts it without issue.
When the same dataset is processed through the reporting framework, that field is expected to contain a specific coded value defined in the DPM.
If the dataset stores the label instead of the code, the record fails validation.
Nothing is missing. The meaning is still correct. But the representation no longer matches what the reporting framework expects.
This type of mismatch sits at the boundary between template completion and reporting logic, and it often appears alongside other field-level issues.
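The scenario can be reduced to a few lines. The label-to-code mapping below is made up for illustration; the real coded values are defined in the DPM dictionary. The design point is that the translation happens at capture time, so the stored value is already what validation expects.

```python
# Hypothetical mapping from dropdown labels to DPM-style coded values.
PROVIDER_TYPE_CODES = {
    "Cloud computing provider": "CLOUD",
    "Payment services provider": "PAYM",
}

def normalise_at_source(label: str) -> str:
    """Store the coded value at capture time, not the readable label."""
    try:
        return PROVIDER_TYPE_CODES[label]
    except KeyError:
        raise ValueError(f"unknown provider type label: {label!r}")

stored = normalise_at_source("Cloud computing provider")
print(stored)  # "CLOUD" — what validation expects, even though review saw the label
```

Storing `"Cloud computing provider"` instead of `"CLOUD"` is the failure described above: nothing missing, meaning intact, representation wrong.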
What teams need to change
The failure rarely sits in the output file itself. It shows up there, but it starts earlier — in how the dataset is built before reporting ever begins.
In practice, the issue is that two different activities get treated as one. Teams complete the templates and assume the structure behind them will hold. It often does not, which is a recurring pattern in how the RoI operates in practice across institutions.
The shift is less about adding new steps and more about changing where control sits. Identifiers need to be consistent before data reaches the templates. Coded values need to follow the Data Point Model (DPM) at source, not be mapped later. Relationships across tables need to be checked as part of building the dataset, not reconstructed during submission.
Once reporting logic is applied, those assumptions are no longer flexible. Small inconsistencies become visible very quickly.
Copla Registry
Structure RoI before reporting begins
RoI failures rarely start in the submission files; they start in how data is structured and maintained. Copla Registry helps enforce consistency at source before reporting logic is applied.
- Define consistent identifiers and relationships across the dataset
- Align structures with DPM and taxonomy requirements
- Prepare data for accurate, submission-ready reporting

FAQ

Is the taxonomy only relevant at submission stage?

No. That assumption causes most rework. The taxonomy reflects expectations that should already be built into the data. If those rules are ignored during implementation, issues surface late, when fixes are most expensive.

Why do registers pass internal review but fail EBA validation?

Internal reviews tend to focus on completeness at template level. EBA validation checks structure, consistency, and referential integrity across tables. A dataset can look complete and still be structurally broken.

Do small inconsistencies really matter?

Yes. A minor difference in an identifier or value can break relationships across tables. The taxonomy does not interpret intent; it validates exact matches. "Close enough" fails.

Do you need a dedicated system to handle the taxonomy requirements?

Not necessarily, but you need a structured data layer that respects the DPM. Ad hoc spreadsheets rarely maintain referential integrity at scale. The first reporting cycle tends to expose that quickly.

Where should institutions focus first: templates, model, or taxonomy?

Start with the data model. If the structure is correct, the taxonomy becomes a validation step. If the structure is wrong, no amount of formatting will fix it.