Introduction
We would like to determine how fresh the data on HDX is, for two purposes. First, we want to encourage data contributors to update their data regularly where applicable, and second, we want to be able to tell users of HDX how up to date the datasets they are interested in are.
There are two dates that data can have, and this can cause confusion, so we define them clearly here:
- Date of update: the last time the data was looked at to confirm it is up to date, i.e. it was examined according to the update frequency
- Date of data: the actual date of the data itself - an update could consist of just confirming that the data has not changed
It was determined that a new field was needed on resources in HDX. This field shows the last time the resource was updated; it has been implemented and released to production (HDX-4254). Related to that is ongoing work to make the field visible in the UI (HDX-4894).
Critical to data freshness is having an indication of the update frequency of the dataset. Hence, it was proposed to make the data_update_frequency field mandatory instead of optional, and to rename it to sound less onerous by adding "expected", i.e. expected update frequency (HDX-4919). It was confirmed that this field should stay at dataset level: our recommendation to data providers is that if a dataset has resources with different update frequencies, it should be divided into multiple datasets. Assuming the field is a dropdown, it could have the values: daily, weekly, fortnightly, monthly, quarterly, semiannually, annually, never. If a user chooses "never", something should pop up making it clear that this option is only for datasets whose data is static. We will have to audit datasets where people pick this option, as we don't want people choosing "never" simply to avoid committing to an expected update frequency.
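For illustration, a minimal sketch of how these dropdown values might map to an expected update interval in days (the names and structure here are assumptions for the sketch, not the HDX implementation):

```python
# Illustrative mapping of the proposed dropdown values to an expected
# update interval in days; None marks static ("never") datasets.
UPDATE_FREQUENCIES = {
    "daily": 1,
    "weekly": 7,
    "fortnightly": 14,
    "monthly": 30,
    "quarterly": 90,
    "semiannually": 180,
    "annually": 365,
    "never": None,  # static data: freshness checks do not apply
}
```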
A trigger has been created for Google spreadsheets that automatically updates the resource's last modified date when the spreadsheet is edited. This helps with monitoring the freshness of toplines and other resources held in Google spreadsheets, and we can encourage data contributors to use it where appropriate. Consideration has been given to doing something similar with Excel spreadsheets, but the support issues could become burdensome.
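The trigger itself runs inside the spreadsheet, but its net effect is simply to touch the resource record on HDX so that the last modified date moves. A minimal sketch of that call using CKAN's standard resource_patch action (the site URL, API key and resource id are placeholders):

```python
from datetime import datetime, timezone

import requests

HDX_SITE = "https://data.humdata.org"  # placeholder
API_KEY = "my-api-key"                 # placeholder
RESOURCE_ID = "my-resource-id"         # placeholder

def touch_resource(resource_id: str) -> None:
    """Patch a resource so CKAN records a new last modified time."""
    response = requests.post(
        f"{HDX_SITE}/api/3/action/resource_patch",
        json={
            "id": resource_id,
            "last_modified": datetime.now(timezone.utc).isoformat(),
        },
        headers={"Authorization": API_KEY},
    )
    response.raise_for_status()

touch_resource(RESOURCE_ID)
```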
A collaboration has been started with a team at Vienna University who are considering the issue of data freshness from an academic perspective. We will see what we can learn from them, but will likely proceed with a more basic and practical approach than the one they envisage. Specifically, they are looking at estimating the next change time for a resource based on its previous update history; this research is at an early stage and not yet ready for use in a production system.
Next Steps
The expected update frequency field requires further thought, particularly on the issue of static datasets, following which there will be interface design and development effort.
Once the field is in place, there are some simple improvements we can make that will have a positive impact on data freshness. For example, we should send an automated email reminder to data contributors if the update frequency window for any of their datasets is missed by a certain amount. Even for datasets with an update frequency of "never", there could be an argument for a very rare reminder just to confirm the data really is static. For the case where data is unchanged, we should give contributors the option to respond directly to the automated email to say so (perhaps by clicking a button in the message). Where data has changed, we would provide a link to the dataset that needs updating. We should consider if and how we batch emails when many datasets from one organisation need updating, so that contributors are not bombarded.
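A minimal sketch of the batching idea, assuming we already have the list of overdue datasets with maintainer emails (all field names are hypothetical):

```python
from collections import defaultdict
from email.message import EmailMessage

def build_reminders(overdue_datasets):
    """Build one batched email per organisation contact rather than one
    email per dataset. `overdue_datasets` is assumed to be an iterable of
    dicts with "organisation", "maintainer_email", "title" and "url" keys.
    """
    by_contact = defaultdict(list)
    for dataset in overdue_datasets:
        key = (dataset["organisation"], dataset["maintainer_email"])
        by_contact[key].append(dataset)

    messages = []
    for (org, email), datasets in by_contact.items():
        msg = EmailMessage()
        msg["To"] = email
        msg["Subject"] = f"{org}: {len(datasets)} HDX dataset(s) may need updating"
        body = "We think the following datasets are no longer fresh:\n"
        body += "\n".join(f"- {d['title']}: {d['url']}" for d in datasets)
        body += "\n\nIf the data is in fact unchanged, just reply to confirm."
        msg.set_content(body)
        messages.append(msg)
    return messages
```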
The number of datasets that are hosted outside of HDX is growing rapidly, and these represent a problem for data freshness as their update time may not be available. Rather than ignore them and concentrate only on HDX-hosted files, it was decided to work out what we can do to handle this situation. The easiest solution is to send a reminder to users according to the update frequency; the problem is that this would be sent irrespective of whether they have already updated, and so is potentially annoying.
Another way is to provide guidance to data contributors so that, as they consider how to upload resources, we steer them towards a technological solution that is helpful to us, e.g. using a Google spreadsheet with our update trigger added. We could investigate a fuller integration between HDX and Google spreadsheets, so that if a data provider clicks a button in HDX, it creates a resource pointing to a spreadsheet in Google Drive with the trigger already set up, opening automatically once they enter their Google credentials. We may also need to investigate other platforms, for example creating document alerts in OneDrive for Business and/or macros in Excel spreadsheets (although, as noted earlier, this might create a support headache).
Exploration is currently underway into the headers returned by HTTP requests. Sometimes these headers contain a Last-Modified field. The percentage of externally hosted resources for which this field is usefully populated needs to be measured. For resources where this field is not usable, we can resort to a nightly process that hashes the file pointed to by each resource URL and compares it with a previously stored hash.
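A rough sketch of the two checks, assuming a previously stored hash per resource URL (storage and scheduling omitted):

```python
import hashlib

import requests

def last_modified_header(url: str):
    """Return the HTTP Last-Modified header if the server provides one."""
    response = requests.head(url, allow_redirects=True, timeout=30)
    return response.headers.get("Last-Modified")  # None if absent

def file_hash(url: str) -> str:
    """Download the file in chunks and hash it, for servers that do not
    return a usable Last-Modified header."""
    digest = hashlib.md5()
    with requests.get(url, stream=True, timeout=30) as response:
        response.raise_for_status()
        for chunk in response.iter_content(chunk_size=1 << 20):
            digest.update(chunk)
    return digest.hexdigest()

def has_changed(url: str, stored_hash: str) -> bool:
    """Compare tonight's hash with the stored one; a difference means the
    externally hosted resource was updated since the last run."""
    return file_hash(url) != stored_hash
```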
Important Fields
Field | Description | Purpose |
---|---|---|
data_update_frequency | Dataset expected update frequency | Shows how often the data is expected to be updated or at least checked to see if it needs updating |
revision_last_updated | Resource last modified date | Indicates the last time the resource was updated irrespective of whether it was a major or minor change |
dataset_date | Dataset date | The date referred to by the data in the dataset. It changes when data for a new date comes to HDX so may not need to change for minor updates |
Dataset Aging Methodology
A resource's age can be measured as today's date minus its last update time. For a dataset, we take the lowest age of all its resources (i.e. the most recently updated resource determines the dataset's age). This value can be compared with the expected update frequency to determine an age status for the dataset.
Thought has previously gone into the classification of dataset age. Reviewing that work, the statuses used (up to date, due, overdue and delinquent) and the formulae for calculating them seem sound, so we will use them as a foundation. It is important to distinguish between what we report to users and data providers and what we need for our automated processing. For reporting purposes, the terminology we would use is simply fresh or not fresh. For contacting data providers, we must give them some leeway beyond the due date (technically the date after which the data is no longer fresh): the automated email would be sent on the overdue date rather than the due date (though in the email we would say that we think their data is not fresh and needs updating, rather than referring to states like overdue). The delinquent date would be used in an automated process that tells us it is time to manually contact the data provider to see whether they have any problems we can help with in updating their data.
Dataset age state thresholds (how old must a dataset be for it to have this status), where f is the expected update frequency in days:

Update Frequency | Up-to-date (fresh) | Due (not fresh) | Overdue (not fresh) | Delinquent (not fresh) |
---|---|---|---|---|
Daily | 0 days old | 1 day old (due_age = f) | 2 days old (overdue_age = f + 1) | 3 days old (delinquent_age = f + 2) |
Weekly | 0 - 6 days old | 7 days old (due_age = f) | 14 days old (overdue_age = f + 7) | 21 days old (delinquent_age = f + 14) |
Fortnightly | 0 - 13 days old | 14 days old (due_age = f) | 21 days old (overdue_age = f + 7) | 28 days old (delinquent_age = f + 14) |
Monthly | 0 - 29 days old | 30 days old (due_age = f) | 44 days old (overdue_age = f + 14) | 60 days old (delinquent_age = f + 30) |
Quarterly | 0 - 89 days old | 90 days old (due_age = f) | 120 days old (overdue_age = f + 30) | 150 days old (delinquent_age = f + 60) |
Semiannually | 0 - 179 days old | 180 days old (due_age = f) | 210 days old (overdue_age = f + 30) | 240 days old (delinquent_age = f + 60) |
Annually | 0 - 364 days old | 365 days old (due_age = f) | 425 days old (overdue_age = f + 60) | 455 days old (delinquent_age = f + 90) |
Never | Always | Never | Never | Never |
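Putting the thresholds together with the aging methodology above, a minimal sketch of the classification (function and variable names are illustrative, not HDX code):

```python
from datetime import date

# (overdue_age, delinquent_age) from the thresholds table, keyed by the
# expected update frequency f in days; "never" (None) is always fresh.
THRESHOLDS = {
    1: (2, 3),        # daily
    7: (14, 21),      # weekly
    14: (21, 28),     # fortnightly
    30: (44, 60),     # monthly
    90: (120, 150),   # quarterly
    180: (210, 240),  # semiannually
    365: (425, 455),  # annually
}

def dataset_status(resource_update_dates, f, today=None):
    """Classify a dataset: its age is the lowest age of its resources,
    i.e. the days since the most recent update date."""
    if f is None:  # update frequency "never": data is static
        return "up to date"
    today = today or date.today()
    age = (today - max(resource_update_dates)).days
    overdue_age, delinquent_age = THRESHOLDS[f]
    if age < f:
        return "up to date"
    if age < overdue_age:
        return "due"
    if age < delinquent_age:
        return "overdue"
    return "delinquent"

# A weekly dataset whose freshest resource is 16 days old is "overdue".
print(dataset_status([date(2016, 1, 1)], 7, today=date(2016, 1, 17)))
```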
Number of Files Locally and Externally Hosted
Type | Number of Resources | Percentage |
---|---|---|
File Store | 2,102 | 22% |
CPS | 2,459 | 26% |
HXL Proxy | 2,584 | 27% |
ScraperWiki | 162 | 2% |
Others | 2,261 | 24% |
Total | 9,568 | 100% |
Determining if a Resource is Updated
References
- Using the Update Frequency Metadata Field and Last_update CKAN field to Manage Dataset Freshness on HDX: https://docs.google.com/document/d/1g8hAwxZoqageggtJAdkTKwQIGHUDSajNfj85JkkTpEU/edit#
- Dataset Aging service: https://docs.google.com/document/d/1wBHhCJvlnbCI1152Ytlnr0qiXZ2CwNGdmE1OiK7PLzo/edit (code: https://github.com/luiscape/hdx-monitor-ageing-service)