
Introduction

We would like to be able to determine how fresh the data on HDX is, for two purposes. Firstly, we want to be able to encourage data contributors to update their data regularly where applicable, and secondly, we want to be able to tell users of HDX how up to date the datasets that interest them are.

Data can have two distinct dates, and this can cause confusion, so we define them clearly here:

  1. Date of update: The last time the data was updated, or was looked at to confirm that it is up to date. An update could consist of simply confirming that the data has not changed. Ideally, the history of updates corresponds with what is selected in the expected update frequency.
  2. Date of data: The actual date of the data itself, which may not change even when an update occurs.

When we talk about "update time", we are referring to option 1.
The method of determining whether a resource has been updated depends upon where the file is hosted. If it is hosted by HDX, the update time is recorded directly, but if it is hosted externally, there can be challenges in determining whether the file behind a URL has been updated. Early research exists on inferring an update frequency, and other approaches are being explored for obtaining and/or creating an update time.
 
Once we have an update time for a dataset's resources, we can calculate its age, and, combined with the expected update frequency, ascertain the freshness of the dataset.
 
Progress

It was determined that a new field was needed on resources in HDX. This field shows the last time the resource was updated; it has been implemented and released to production (HDX-4254). Related to that is ongoing work to make the field visible in the UI (HDX-4894).

Critical to data freshness is having an indication of the update frequency of the dataset. Hence, it was proposed to make the data_update_frequency field mandatory rather than optional, and to rename it to sound less onerous by adding "expected", i.e. expected update frequency (HDX-4919). It was confirmed that this field should stay at dataset level: our recommendation to data providers would be that if a dataset has resources with different update frequencies, it should be divided into multiple datasets. Assuming the field is a dropdown, it could have the values: daily, weekly, fortnightly, monthly, quarterly, semiannually, annually, never. It would be good to show a pop-up if the user chooses "never", making clear that this option is only for datasets whose data is static. We will have to audit datasets where people pick this option, as we don't want people choosing "never" simply to avoid committing to an expected update frequency.
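For illustration, a minimal sketch of how these dropdown values could map to expected ages in days (the field name data_update_frequency is from HDX; the specific day counts are assumptions for this sketch):

```python
# Hypothetical mapping of the proposed expected update frequency
# dropdown values to an expected age in days. "never" has no
# numeric age, so it is represented here as None.
UPDATE_FREQUENCIES = {
    "daily": 1,
    "weekly": 7,
    "fortnightly": 14,
    "monthly": 30,
    "quarterly": 90,
    "semiannually": 180,
    "annually": 365,
    "never": None,
}
```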

A trigger has been created for Google spreadsheets that automatically updates the resource's last modified date when the spreadsheet is edited. This helps with monitoring the freshness of toplines and other resources held in Google spreadsheets, and we can encourage data contributors to use it where appropriate. Consideration has been given to doing something similar with Excel spreadsheets, but the support burden could become onerous.
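To illustrate the HDX-side effect of such a trigger, here is a minimal sketch of the kind of CKAN API call that records a new last modified time for a resource. resource_patch is a standard CKAN action, but the API key and resource id below are placeholders, and whether last_modified can be set this way on HDX is an assumption; the real trigger is script code attached to the spreadsheet itself.

```python
from datetime import datetime, timezone

import requests

HDX_SITE = "https://data.humdata.org"   # HDX's CKAN instance
API_KEY = "your-api-key"                # placeholder credential
RESOURCE_ID = "resource-uuid"           # placeholder resource id

def touch_resource(resource_id: str) -> None:
    """Patch a resource's last_modified field to the current time.

    CKAN's resource_patch action updates only the fields supplied;
    everything else on the resource is left untouched.
    """
    response = requests.post(
        f"{HDX_SITE}/api/3/action/resource_patch",
        json={
            "id": resource_id,
            "last_modified": datetime.now(timezone.utc).isoformat(),
        },
        headers={"Authorization": API_KEY},
    )
    response.raise_for_status()

touch_resource(RESOURCE_ID)
```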

A collaboration has been started with a team at the University of Vienna who are considering the issue of data freshness from an academic perspective. We will see what we can learn from them, but will likely proceed with a more basic and practical approach than the one they envisage. Specifically, they are looking at estimating the next change time for a resource based on its previous update history; this is at an early stage of research and not ready for use in a production system just yet.

Next Steps

The expected update frequency field requires further thought, particularly on the issue of static datasets, following which there will be interface design and development effort.

Once the field is in place, there are some simple improvements we can make that will have a positive impact on data freshness. For example, we should send an automated mail reminder to data contributors if the expected update frequency window for any of their datasets is missed by a certain amount. Even for datasets with an update frequency of "never", there could be an argument for a very rare reminder just to confirm that the data really is static. For the case where data is unchanged, we should give contributors the option to respond directly to the automated mail to say so (perhaps by clicking a button in the message). Where data has changed, we would provide the link to the dataset that needs updating. We should also consider if/how we batch emails when many datasets from one organisation need updating, so that contributors are not bombarded; a sketch of such batching follows.
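A minimal sketch of that batching, assuming dataset records carry organization, maintainer_email and name fields (these names are placeholders) and using a print stub in place of the real mailer:

```python
from collections import defaultdict

def send_reminder(email, organization, dataset_names):
    # Placeholder for the real mailer: print the digest instead.
    print(f"To {email} ({organization}): please update "
          + ", ".join(dataset_names))

def batch_reminders(overdue_datasets):
    """Group overdue datasets by organisation and maintainer so each
    contributor receives a single digest rather than one mail per
    dataset."""
    digests = defaultdict(list)
    for ds in overdue_datasets:  # each ds: dict with the assumed keys
        digests[(ds["organization"], ds["maintainer_email"])].append(ds["name"])
    for (org, email), names in digests.items():
        send_reminder(email, org, names)
```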

The number of datasets hosted outside of HDX is growing rapidly, and these represent a problem for data freshness because their update time may not be available. Rather than ignore them and concentrate only on HDX-hosted files, it was decided to work out what we can do to handle this situation. The easiest solution is to send a reminder to users according to the update frequency; the problem is that this would be sent irrespective of whether they have already updated, and so could be annoying.

Another way is to provide guidance to data contributors so that, as they consider how to upload resources, we steer them towards a technological solution that is helpful to us, e.g. using a Google spreadsheet with our update trigger added. We could investigate a fuller integration between HDX and Google spreadsheets, so that if a data provider clicks a button in HDX, it creates a resource pointing to a spreadsheet in Google Drive with the trigger already set up, opening automatically once they enter their Google credentials. We may need to investigate other platforms, for example creating document alerts in OneDrive for Business and/or macros in Excel spreadsheets (although, as noted earlier, the latter might create a support headache).

Exploration is currently under way into the headers returned by HTTP requests. Sometimes these include a Last-Modified field. The percentage of externally hosted resources for which this field is usefully populated needs to be measured. For those resources where the field is not usable, we can resort to a nightly process that hashes the file pointed to by each resource URL and compares it with the stored hash.

Important Fields


Field                 | Description                       | Purpose
data_update_frequency | Dataset expected update frequency | Shows how often the data is expected to be updated, or at least checked to see if it needs updating
revision_last_updated | Resource last modified date       | Indicates the last time the resource was updated, irrespective of whether it was a major or minor change
dataset_date          | Dataset date                      | The date referred to by the data in the dataset; it changes when data for a new date comes to HDX, so may not change for minor updates


Dataset Aging Methodology

A resource's age can be measured as today's date minus its last update time. For a dataset, we take the lowest age of all its resources, i.e. the age of the most recently updated resource. This value can be compared with the expected update frequency to determine an age status for the dataset.
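A minimal sketch of this calculation, assuming each resource's update time is available as a datetime:

```python
from datetime import datetime, timezone

def dataset_age_days(resource_update_times, now=None):
    """Return a dataset's age in days: the lowest age among its
    resources, i.e. the age of the most recently updated one."""
    now = now or datetime.now(timezone.utc)
    return min((now - updated).days for updated in resource_update_times)
```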

 

Thought has previously gone into the classification of dataset age. Reviewing that work, the statuses used (up to date, due, overdue and delinquent) and the formulae for calculating them seem sound, so we will use them as a foundation. It is important to distinguish between what we report to our users and data providers and what we need for our automated processing. For the purposes of reporting, the terminology we would use is simply fresh or not fresh. For contacting data providers, we must give them some leeway from the due date (technically the date after which the data is no longer fresh): the automated email would be sent on the overdue date rather than the due date (and in the email we would tell the data provider that we think their data is not fresh and needs updating, rather than referring to states like overdue). The delinquent date would be used in an automated process that tells us it is time to manually contact the data providers to see if they have any problems we can help with regarding updating their data.


Dataset age state thresholds by update frequency (how old must a dataset be for it to have each status; f is the expected update frequency expressed in days):

Update Frequency | Up-to-date (Fresh) | Due (Fresh)                | Overdue (Not Fresh)                 | Delinquent (Not Fresh)
Daily            | 0 days old         | 1 day old (due_age = f)    | 2 days old (overdue_age = f + 2)    | 3 days old (delinquent_age = f + 3)
Weekly           | 0 - 6 days old     | 7 days old (due_age = f)   | 14 days old (overdue_age = f + 7)   | 21 days old (delinquent_age = f + 14)
Fortnightly      | 0 - 13 days old    | 14 days old (due_age = f)  | 21 days old (overdue_age = f + 7)   | 28 days old (delinquent_age = f + 14)
Monthly          | 0 - 29 days old    | 30 days old (due_age = f)  | 44 days old (overdue_age = f + 14)  | 60 days old (delinquent_age = f + 30)
Quarterly        | 0 - 89 days old    | 90 days old (due_age = f)  | 120 days old (overdue_age = f + 30) | 150 days old (delinquent_age = f + 60)
Semiannually     | 0 - 179 days old   | 180 days old (due_age = f) | 210 days old (overdue_age = f + 30) | 240 days old (delinquent_age = f + 60)
Annually         | 0 - 364 days old   | 365 days old (due_age = f) | 425 days old (overdue_age = f + 60) | 455 days old (delinquent_age = f + 90)
Never            | Always up-to-date  | Never due                  | Never overdue                       | Never delinquent
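A sketch of how these thresholds could be applied in code; the day counts are taken directly from the "days old" columns of the table above:

```python
# (due, overdue, delinquent) ages in days for each expected update
# frequency, from the table above.
THRESHOLDS = {
    1:   (1, 2, 3),        # daily
    7:   (7, 14, 21),      # weekly
    14:  (14, 21, 28),     # fortnightly
    30:  (30, 44, 60),     # monthly
    90:  (90, 120, 150),   # quarterly
    180: (180, 210, 240),  # semiannually
    365: (365, 425, 455),  # annually
}

def dataset_status(age_days, frequency_days):
    """Classify a dataset's age against its expected update frequency."""
    if frequency_days is None:  # "never": static data, always fresh
        return "up-to-date"
    due, overdue, delinquent = THRESHOLDS[frequency_days]
    if age_days < due:
        return "up-to-date"
    if age_days < overdue:
        return "due"
    if age_days < delinquent:
        return "overdue"
    return "delinquent"
```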

Number of Files Locally and Externally Hosted

Type        | Number of Resources | Percentage
File Store  | 2,102               | 22%
CPS         | 2,459               | 26%
HXL Proxy   | 2,584               | 27%
ScraperWiki | 162                 | 2%
Others      | 2,261               | 24%
Total       | 9,568               | 100%

Determining if a Resource is Updated

The method of determining whether a resource has been updated depends upon where the file is hosted. If it is in HDX, i.e. in the file store, the update time is readily available. If it is hosted externally, it is not as straightforward to find out whether the file pointed to by a URL has changed. It may be possible to use the Last-Modified field returned by an HTTP HEAD request, depending upon whether the hosting server supplies it. (A GET request downloads the body, i.e. the whole file, whereas a HEAD request returns only the header information.) We speculate that if the URL points to a file on a server like Apache or Nginx, the field will exist, but if the URL generates its result on the fly, the field will either not exist or will simply contain today's date. Hence, the usefulness of the field needs to be determined by measuring the percentage of datasets it correctly covers. According to the University of Vienna, the figure could be as high as 60%, but this must be verified.
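A minimal sketch of such a check using Python's requests library (the URL is a placeholder, and real code would also need to handle timeouts, redirects and servers that always report the current time):

```python
import requests

def last_modified(url):
    """Return the Last-Modified header for a url, or None if the
    server does not supply one. A HEAD request fetches headers only,
    so the file body is never downloaded."""
    response = requests.head(url, allow_redirects=True, timeout=30)
    return response.headers.get("Last-Modified")

print(last_modified("https://example.org/data.csv"))  # placeholder url
```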
 
An alternative approach discussed with the researchers is to download all the external URLs, hash the files and compare the hashes. The Vienna team had done some calculations and asserted that it would be too resource intensive to hash all of the files, mainly due to the time taken to download all of the URLs (and we would need to consider datasets like WorldPop, which are huge). However, we can apply logic so that we do not need to download all of the files every day (except on the first run), based on the due date described earlier. We could restrict the load further, if necessary, by checking datasets with a lower update frequency less often than nightly.
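A sketch of the hashing approach; downloads are streamed in chunks so that even very large files are never held fully in memory, and the stored hash is assumed to come from a previous run:

```python
import hashlib

import requests

def file_hash(url):
    """Download a url in chunks and return an MD5 hex digest, so
    even very large files need not fit in memory."""
    digest = hashlib.md5()
    with requests.get(url, stream=True, timeout=300) as response:
        response.raise_for_status()
        for chunk in response.iter_content(chunk_size=1 << 20):
            digest.update(chunk)
    return digest.hexdigest()

def has_changed(url, stored_hash):
    """Compare a freshly computed hash with the stored one."""
    return file_hash(url) != stored_hash
```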
 
The flowchart below represents the logical flow for each resource in HDX, which would run nightly (unless we need to reduce the load):
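Pending the flowchart, a sketch of the per-resource logic it would describe, reusing the last_modified and has_changed helpers sketched above (the field names on the resource record are assumptions):

```python
def check_resource(resource, stored_hash, is_due):
    """Nightly check for one resource: return True if an update was
    detected, False otherwise."""
    if resource["hosted_by_hdx"]:
        # File store: HDX records the update time directly, so no
        # probing is needed.
        return False
    header = last_modified(resource["url"])   # HEAD request, see above
    if header is not None:
        return header != resource.get("stored_last_modified")
    # No usable header: fall back to hashing, but only download the
    # file once the dataset is at least due, to limit load.
    if is_due:
        return has_changed(resource["url"], stored_hash)
    return False
```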
  

References

Using the Update Frequency Metadata Field and Last_update CKAN field to Manage Dataset Freshness on HDX:

https://docs.google.com/document/d/1g8hAwxZoqageggtJAdkTKwQIGHUDSajNfj85JkkTpEU/edit#

Dataset Aging service:

https://docs.google.com/document/d/1wBHhCJvlnbCI1152Ytlnr0qiXZ2CwNGdmE1OiK7PLzo/edit

https://github.com/luiscape/hdx-monitor-ageing-service


University of Vienna paper on methodologies for estimating the next change time for a resource based on its previous update history.

University of Vienna presentation on data freshness.