Warning: this is a work-in-progress demonstration tool.

This is the documentation for the Competency Answerability Dashboard prototype tool created by Open Innovations on behalf of the UK Parliamentary Digital Services.

Using this tool you can explore whether given competencies are answerable under the current and potential future designs of the Register(s) of Members' Financial Interests.

Please Note

The tool does not actually query the data contained in the registers. It has been designed to allow the impact of changes to the structure and content of the registers to be modelled. This modelling can then be used by those proposing changes to the registers as evidence of potential impact.

Tool Design

The tool contains a model of the questions being posed (referred to as competencies). You can review these on the competencies page.

Competencies are linked to features of the register(s). This linkage defines the features required to answer the competency. Features are further linked to rulesets and scopes. You can review the defined features on the features page for a given scope.

Scopes provide a context for the result of the tool. Concrete examples of scopes are the House of Commons and House of Lords. These contexts differ as the features available in the respective Registers of Members' Financial Interests will differ. We also define an Imagined scope which contains features and rulesets used for testing and demonstration purposes.

Finally, rulesets define a set of available features for a given scope. Competency answerability can be determined by cross-referencing the features required by the individual competency with the features provided by the ruleset. This is carried out in the context of a scope, as when considering answerability for the Commons, the dependency of a competency on a Lords feature is irrelevant.

The fundamental question this tool can answer is

For a given ruleset, is a question...

Answerable
All required features are available in the current ruleset
Unanswerable
Not all required features are available in the current ruleset
Unaskable
There are no features that exist in this ruleset or any other which could answer this question
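
To make that cross-referencing concrete, here is a minimal sketch of how the three outcomes could be computed. The function and parameter names are illustrative assumptions, not the generator's actual code; the real calculation happens during the site build described below.

    // Illustrative sketch only: names and shapes are assumptions, not the real code.
    type Answerability = "answerable" | "unanswerable" | "unaskable";

    function classify(
      required: string[],      // features the competency depends on (within the ruleset's scope)
      available: Set<string>,  // features made available by the ruleset under consideration
      allKnown: Set<string>,   // every feature defined for the scope, across all rulesets
    ): Answerability {
      if (required.every((f) => available.has(f))) return "answerable";
      // If none of the required features exist in any ruleset, the question cannot be asked at all.
      if (required.every((f) => !allKnown.has(f))) return "unaskable";
      return "unanswerable";
    }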

Data Model

erDiagram
    COMPETENCY ||--o{ FEATURE : dependency
    RULESET }o--o{ FEATURE : available
    COMPETENCY ||--o{ COMPETENCY: duplicate_of
    EVIDENCE ||--|| COMPETENCY: defines
    RULESET }o--|| SCOPE : scope
    FEATURE }o--|| SCOPE : provided_by
    FEATURE ||--o{ FEATURE: enables

    COMPETENCY {
        string ID FK "ID in evidence CSV file"
        string competency "Name of the competency"
        string notes "Optional notes"
    }

    RULESET {
        string ID "Refernence of the ruleset"
        string name "Name of the ruleset"
        string description "Brief description"
        int order "Allows sorting"
        boolean draft "Whether to make ruleset available"
    }

    FEATURE {
        string name PK "Fully qualified name"
        string description
        string notes
        string type "Type of field - e.g. text, date, checkbox"
        string format "Format of field - e.g. DD/MM/YYYY for dates"
        string units "Units for the feature"
    }

    SCOPE {
        string ID PK "Identifer of the scope"
    }

    EVIDENCE {
        string ID PK "Identifier of current evidence row - used as key for competency"
        string Duplicates "List of other competency IDs which this row duplicates"
        string Competency "Name of the competency"
        string RMFI_Category "Category in the Register of Members' Financial Interests"
    }

Competency

Competencies are derived from the evidence collected during a recent review.

The original evidence is downloaded to the site as src/_data/evidence.csv. This is automated by a GitHub Action (in .github/workflows/update-data.yml) which runs at 45 minutes past every third hour from 9 through 18, Monday to Friday.
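
For reference, a schedule trigger matching that description would look roughly like the fragment below; the cron expression is derived from the prose above rather than copied from the workflow file (GitHub Actions schedules run in UTC).

    on:
      schedule:
        # 45 minutes past 09:00, 12:00, 15:00 and 18:00, Monday to Friday
        - cron: '45 9-18/3 * * 1-5'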

The ID, Duplicates and Competency fields are used to populate the files in src/_data/competency/ with core reference data. The scripts/create_all_the_competencies.py script is used to populate or update the local references.

Competencies are mapped to features to enable the calculation of answerability. This is done in the src/_data/competency/ folder, with each competency having a separate data file. These are named per reference ID in the evidence CSV.
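
As a rough illustration, a mapping file might look something like the sketch below. The file name, field names and feature references are assumptions for illustration only; check the existing files in src/_data/competency/ for the real structure.

    # src/_data/competency/EX1.yml -- hypothetical example; real files are named after
    # the reference IDs in the evidence CSV, and these key names are assumptions
    competency: Which Members have declared an interest above a given value?
    notes: Illustrative only; not taken from the evidence.
    features:
      - commons.members_name
      - commons.category_1.section_1.amount_or_value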

The following fields can be defined:

Categories

The RMFI Category field from the src/_data/evidence.csv file is used to create dashboard sections. The order of these sections is defined in the src/_data/dashboard/categories.yml file.
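
The structure of that file is not reproduced here; a minimal sketch, assuming it is simply an ordered list of the RMFI category names used in the evidence, might be:

    # src/_data/dashboard/categories.yml -- assumed structure, for illustration only
    - Employment and earnings
    - Gifts, benefits and hospitality
    - Visits outside the UK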

Scope

Scopes distinguish between different contexts. They exist in two places.

Firstly, they are the root namespace of the feature definition. Practically, this means that they are the names of files in the src/_data/features/ folder of the git repo. Create a new file in this folder and define the features as key/value pairs to add a suite of features within a new scope.

Secondly, scopes are referenced in the ruleset files (found in the src/_data/rulesets/ folder). This defines the scope for the given ruleset, and features in scope namespaces other than this are removed from consideration. Scopes here must match the names of the scopes defined in the features folder.

In addition, the details of each scope are held in src/_data/scopes.yaml. At present, this is limited to the name.
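
A minimal sketch of that file, assuming a map of scope IDs to their names, could be:

    # src/_data/scopes.yaml -- assumed structure, for illustration only
    commons:
      name: House of Commons
    lords:
      name: House of Lords
    imagined:
      name: Imagined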

Feature

Features are derived from current or potential attributes of the register. These could be simple field definitions or they could be capabilities such as field-level validation. The concept is kept deliberately vague to allow for incorporation of as yet unforeseen edge cases.

They are defined in the src/_data/features/ folder, with a separate file per scope, as described above. Internally these files are object trees, with structure governed by the scope. The Commons scope, for example, has some features at the top level (e.g. commons.members_name) and others nested by category and section (e.g. commons.category_1.section_1.amount_or_value).
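
A cut-down sketch of what such a file could contain, using the commons examples mentioned above, is shown below. The file name, nesting and property values are assumptions based on the data model, not a copy of the real file.

    # src/_data/features/commons.yml -- illustrative sketch only
    members_name:
      description: Name of the Member
      type: text
    category_1:
      section_1:
        amount_or_value:
          type: text
          units: GBP    # hypothetical value, for illustration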

Lists of currently defined features can be found on the features pages.

No content is required within the feature definition, as the presence of the key is sufficient to define the feature. The commonly defined properties of the feature object are:

Ruleset

Rulesets make features available (or by omission, not available). Each ruleset belongs to a scope, allowing irrelevant information (e.g. features from another scope) to be omitted from consideration.

Rulesets are defined in the src/_data/rulesets/ folder, and contain the following properties:
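
As an illustration, a ruleset file might look roughly like the sketch below; the file name and key names are guessed from the data model above and may not match the real files.

    # src/_data/rulesets/current-commons.yml -- illustrative sketch only
    name: Current Commons register
    description: Features available in the register as currently published
    scope: commons      # must match a scope defined in src/_data/features/
    order: 1
    draft: false
    features:           # the "available" relationship from the data model
      - commons.members_name
      - commons.category_1.section_1.amount_or_value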

Site build

This section outlines some of the critical parts of the site build.

Ruleset

Each ruleset is emitted as a page using the src/features/list.tmpl.ts page file.

This constructs page data of the following form:

The values of this page data can be modified in the tmpl.js file.

The code to calculate features available in a given ruleset is defined in the generator. The function calculateFeatures takes a list of features and works out those that are transitively enabled, based on the value of the 'enables' key of the feature set.
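
A simplified sketch of that transitive calculation follows; the input shapes and names other than calculateFeatures are assumptions, and the real generator code may differ.

    // Sketch only: resolves the features transitively switched on via each feature's
    // 'enables' list, starting from those the ruleset makes available directly.
    interface FeatureDef {
      enables?: string[];
    }

    function calculateFeatures(
      selected: string[],                       // features listed by the ruleset
      definitions: Record<string, FeatureDef>,  // all feature definitions for the scope
    ): Set<string> {
      const enabled = new Set<string>();
      const queue = [...selected];
      while (queue.length > 0) {
        const name = queue.pop()!;
        if (enabled.has(name)) continue;
        enabled.add(name);
        for (const next of definitions[name]?.enables ?? []) queue.push(next);
      }
      return enabled;
    }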

The ruleset layout calls the dashboard.njk component to render the dashboard.

dashboard.njk renders the categories and then a styled unordered list to create the dashboard "lights". Each competency is rendered using the competency.njk component. This in turn calls the dependencies.js component, which is responsible for providing the data-required-features, data-score and aria-label attributes.

The "lights" are styled by css provided by the comptency.njk component.