Now that freshness is exposed in the interface, we need to examine whether what it says makes sense and look at how users react to it. In particular, there are some cases where the freshness process needs adjustment to avoid misleading users. Below I outline each problem and then give proposals for a solution.
...
Issues that have already been identified
- Exclude "Live", "As Needed" and "Never" datasets from the "no touch if already fresh" rule - DONE
- Discount edits made by HDX
- Restrict which metadata changes count as updates for freshness
- Offer an "archived" icon in addition to "fresh" to indicate a dataset that is old, up-to-date, and no longer being updated. At the moment these are being called fresh, which is technically true, but tends to present a lot of old data to users.
- What to do with Date of Dataset?
Discount edits made by HDX
Edits by HDX staff are typically made to fix issues and have no bearing on how up to date the data is, so they should be ignored by freshness. However, we need to consider what to do about edits to datasets maintained by HDX itself.
The edits that have been performed on a dataset can be seen by calling package_revision_list. One complication is that we must walk through the history of edits: someone outside HDX could make an update, followed shortly afterwards by someone within HDX. A naive implementation that looks only at the latest edit would miss the earlier external edit, which should count towards freshness.
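The walk through the revision history could be sketched as below. This is only an illustration, assuming that CKAN's package_revision_list returns revisions newest first, each with "author" and "timestamp" keys; the HDX_STAFF set of usernames is hypothetical.

```python
from datetime import datetime

# Hypothetical set of CKAN usernames belonging to HDX staff
HDX_STAFF = {"hdx_admin", "hdx_datateam"}

def last_external_update(revisions):
    """Return the timestamp of the most recent revision NOT made by HDX staff.

    `revisions` is assumed to come from package_revision_list, newest first.
    """
    for revision in revisions:
        if revision["author"] not in HDX_STAFF:
            # Skip past HDX fix-up edits so an earlier external edit still counts
            return datetime.fromisoformat(revision["timestamp"])
    return None  # no external edits at all

revisions = [
    {"author": "hdx_admin", "timestamp": "2017-06-02T10:00:00"},  # HDX fix-up, ignored
    {"author": "ocha_fiss", "timestamp": "2017-06-01T09:00:00"},  # counts for freshness
]
print(last_external_update(revisions))
```

The key point is that the loop does not stop at the first (HDX) revision but continues until it finds an external one.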
Restrict which metadata changes count as updates for freshness
Currently any dataset metadata change counts as an update from a freshness perspective. Our assumption is:
- Such changes are taken as signifying that the dataset maintainer has thought about the data and checked it
- If they had newer data, then we would expect them to put it into HDX while updating the dataset metadata
- The fact they haven't means the data is as up to date as possible
This proposal limits our assumption to certain fields - it becomes:
- Changes in certain metadata fields are taken as signifying that the dataset maintainer has thought about the data and checked it
- If they had newer data, then we would expect them to put it into HDX while updating these specific dataset metadata fields
- The fact they haven't means the data is as up to date as possible
The criteria for choosing the fields should be those that directly affect the underlying data or freshness calculation:
- Expected update frequency
- Dataset date
- Location?
- Source?
Note that if the number of fields is severely limited, this may render discounting edits by HDX unnecessary.
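The restriction could be implemented by comparing metadata snapshots on a whitelist of fields, along these lines. The field names here are illustrative placeholders, not necessarily the actual HDX schema keys.

```python
# Illustrative whitelist of metadata fields whose changes count as updates
FRESHNESS_FIELDS = {"data_update_frequency", "dataset_date", "groups", "dataset_source"}

def counts_as_update(old_metadata, new_metadata):
    """True if any whitelisted field differs between two metadata snapshots."""
    return any(old_metadata.get(f) != new_metadata.get(f) for f in FRESHNESS_FIELDS)

old = {"notes": "Survey data", "dataset_date": "2016-01-01"}
new = {"notes": "Survey data (revised description)", "dataset_date": "2016-01-01"}
print(counts_as_update(old, new))  # only the description changed, so this is False
```

Under this sketch, editing the description would no longer refresh the dataset, while editing the dataset date would.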
Points to consider:
- Expected update frequency is used to calculate freshness, but then if someone changes it from yearly to monthly, that doesn't indicate anything about the data having changed. If the dataset was delinquent with yearly update frequency, it should still be delinquent with monthly.
- Why should someone changing the dataset description be any less of an update from a freshness perspective than changing the dataset date?
- There doesn't seem to be a compelling reason to do a partial restriction of metadata changes counting for freshness - it's really all or nothing:
- either we regard any metadata change as someone indicating that the data is as fresh as it can be (as we originally envisaged)
- or we simply disregard metadata changes altogether from determination of freshness and rely solely on data changes - note that detecting file store changes specifically would need to be investigated
Offer an "archived" icon in addition to "fresh"
The data in some datasets refers to or covers a date or date period which is far in the past, but the data itself is as up to date as it could be and will not be updated again. For these cases, it makes sense to offer an archived icon instead of fresh (which would be the icon used at present for an expected update frequency of "never").
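The label selection described above reduces to a small rule, sketched here with assumed label strings and an assumed "never" frequency value:

```python
def dataset_label(is_fresh, update_frequency):
    """Pick the badge to show: 'archived' for fresh datasets that will never update."""
    if is_fresh and update_frequency == "never":
        return "archived"
    return "fresh" if is_fresh else "not fresh"
```

So a dataset with expected update frequency "never" that qualifies as fresh would display the archived icon rather than the fresh one.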
What to do with Date of Dataset?
The problem with Date of Dataset is twofold:
- What date(s) is it trying to represent?
- How should it be updated / how should contributors be encouraged to update it?
Discovering Other Issues
To discover other possible issues with how freshness is understood, the following strategy was applied:
- Take a random sample of datasets ensuring that among them are fresh, due, overdue, and delinquent datasets and that they represent a cross section of different organisations' datasets
- Evaluate what fresh and not fresh mean
- Determine if it is clear to users
- Collect any cases where the fresh label (or lack of it) is misleading
- Categorise misleading cases
With an overview of the misleading cases, we can consider terminology (such as fresh and not fresh) that accounts for those cases and provides clarity to users.
Misleading cases
The misleading cases are documented in the Google spreadsheet here and the resources for those datasets were all frozen and stored in GitHub for further analysis. From the full analysis, a subset of examples of specific cases were picked and coloured in red.
...
The dataset is marked as fresh. The data is as up to date as it can be, so it qualifies as fresh both on that criterion and on the fact it was updated within the expected update frequency of one year. However, the dataset date is 2010 and WorldPop's "date of production" says 2013 - would users regard data covering 2010 as fresh?
...
There is a draft document on Archiving/Versioning Best Practices here. We need a process whereby, when a dataset becomes delinquent, it is examined, and if it covers a past crisis or is essentially as up to date as it can be, its "Expected update frequency" is set to "Never" and it is labelled "Archived". For datasets that are already delinquent, we will have to go through them all to do this. For datasets that become delinquent in future, should we email maintainers, or is the current process (someone in DP looks at a dataset when it becomes delinquent) sufficient?

Depending upon how the data is structured, it may be possible to determine whether another dataset is the next in the data series. In that case, a dataset that would have been marked "Archived" could instead be labelled "Superseded", and it would be a nice feature to link to the newer dataset in the UI. While it might be technically possible to automate suggesting candidates for newer data, manual curation would likely be required to ensure the correct dataset is identified.
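The archiving decision for a delinquent-but-complete dataset could be sketched as follows. The function name, the metadata key and the successor argument are all hypothetical; the point is only that setting the expected update frequency to "Never" and choosing between the two labels is one step.

```python
def archive_label(dataset, successor=None):
    """Suggest a label for a delinquent dataset judged complete (a sketch).

    Setting the expected update frequency to "Never" stops the dataset being
    flagged as delinquent again; if a newer dataset in the series is known,
    "Superseded" is returned so the UI could link to it.
    """
    dataset["data_update_frequency"] = "Never"
    return "Superseded" if successor else "Archived"
```

For example, a 2014 dataset with a known 2015 successor would be labelled "Superseded"; one with no successor would be labelled "Archived".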
...