Introduction

We would like to be able to determine how fresh the data on HDX is, for two purposes. Firstly, we want to encourage data contributors to update their data regularly where applicable, and secondly, we want to be able to tell users of HDX how up to date the datasets they are interested in are.

There are two dates that data can have and this can cause confusion, so we define them clearly here:

  1. Date of update: the last time the data was looked at to confirm it is up to date. Ideally, the history of updates corresponds with the selected expected update frequency.
  2. Date of data: the actual date of the data itself. An update could consist of simply confirming that the data has not changed.

When we talk about "update time", we are referring to option 1.
The method of determining whether a resource has been updated depends upon where the file is hosted. If it is hosted by HDX, then the update time is recorded, but if it is hosted externally, then there can be challenges in determining whether a url has been updated. Early research work exists on inferring an update frequency, and other approaches are being explored for obtaining and/or creating an update time.
 
Once we have an update time for a dataset's resources, we can calculate its age and, combined with the update frequency, ascertain the freshness of the dataset.
 
Progress

The implementation of HDX freshness in Python reads all the datasets from HDX (using the HDX Python library) and then iterates through them one by one, performing the following sequence of steps:

  1. It gets the dataset's update frequency if it has one. If that update frequency is Never, then the dataset is always fresh.
  2. If not, it checks if the dataset and resource metadata have changed - this qualifies as an update from a freshness perspective. It compares the difference between the current time and update time with the update frequency and sets a status: fresh, due, overdue or delinquent.
  3. If the dataset is not fresh based on metadata, then the urls of its resources are examined. If they are internal urls (data.humdata.org - the HDX filestore, manage.hdx.rwlabs.org - CPS), then no further checking can be done, because when the files pointed to by these urls are updated, the HDX metadata is updated too.
  4. If they are urls with an adhoc update frequency (proxy.hxlstandard.org, ourairports.com), then freshness cannot be determined. Currently, there is no mechanism in HDX to specify adhoc update frequencies, but there is a proposal to add this to the update frequency options. At the moment, the freshness value for adhoc datasets is based on whatever has been set for update frequency, but these datasets can be easily identified and excluded from results if needed.
  5. If the url is externally hosted and not adhoc, then we can open an HTTP GET request to the file and check the header returned for the Last-Modified field. If that field exists, then we read the date and time from it and check if that is more recent than the dataset or resource metadata modification date. If it is, we recalculate freshness.
  6. If the resource is not fresh by this measure, then we download the file and calculate an MD5 hash for it. In our database, we store previous hash values, so we can check if the hash has changed since the last time we took the hash.
  7. There are some resources where the hash changes constantly because they connect to an api which generates a file on the fly. To identify these, we hash again and check whether the hash changes in the few seconds since the previous hash calculation. A sketch of these header and hash checks appears below.
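
As an illustration of steps 5 to 7, here is a minimal, synchronous sketch. The function name, parameters and return strings are illustrative assumptions, not taken from the hdx-data-freshness code, which runs asynchronously and persists hashes in its database.

import hashlib
import time
from email.utils import parsedate_to_datetime

import requests


def check_resource(url, metadata_modified, previous_hash=None):
    """Illustrative check of one externally hosted resource (steps 5-7).

    metadata_modified: naive UTC datetime of the last metadata change.
    previous_hash: MD5 hex digest stored by an earlier run, if any.
    """
    response = requests.get(url, stream=True, timeout=30)
    response.raise_for_status()

    # Step 5: prefer the Last-Modified header if the server provides one.
    last_modified = response.headers.get("Last-Modified")
    if last_modified:
        header_time = parsedate_to_datetime(last_modified).replace(tzinfo=None)
        if header_time > metadata_modified:
            return "updated (http header)"

    # Step 6: otherwise download the file and compare its MD5 hash with the
    # hash stored on the previous run.
    current_hash = hashlib.md5(response.content).hexdigest()
    if previous_hash is not None and current_hash != previous_hash:
        # Step 7: hash again a few seconds later; if the hash changes every
        # time, the url points at an api generating the file on the fly.
        time.sleep(3)
        repeat_hash = hashlib.md5(requests.get(url, timeout=30).content).hexdigest()
        if repeat_hash != current_hash:
            return "api (file generated on the fly)"
        return "updated (hash)"
    return "unchanged (hash stored for next run)"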

Since there can be temporary connection and download issues with urls, the code retries failed requests several times with increasing delays. Also, as there are many requests to be made, rather than performing them one by one, they are executed concurrently using the asynchronous functionality (asyncio) available in recent versions of Python. The code for the implementation is here: https://github.com/mcarans/hdx-data-freshness. It has tests with a high level of coverage.
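
The retry-with-backoff and concurrency ideas can be sketched roughly as follows using aiohttp; this is an illustration under assumed names, not the actual hdx-data-freshness code.

import asyncio

import aiohttp


async def fetch_last_modified(session, url, retries=3):
    """Fetch just the response headers for a url, retrying with increasing delays."""
    for attempt in range(1, retries + 1):
        try:
            async with session.get(url) as response:
                return url, response.headers.get("Last-Modified")
        except (aiohttp.ClientError, asyncio.TimeoutError):
            if attempt == retries:
                return url, None
            await asyncio.sleep(2 * attempt)  # back off a little longer each time


async def check_urls(urls):
    """Run all the header checks concurrently rather than one by one."""
    timeout = aiohttp.ClientTimeout(total=60)
    async with aiohttp.ClientSession(timeout=timeout) as session:
        return await asyncio.gather(*(fetch_last_modified(session, url) for url in urls))


# results = asyncio.run(check_urls(list_of_resource_urls))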

It produces some simple metrics, e.g. on a first run (empty database):

*** Resources ***
* total: 10205 *,
adhoc-revision: 3068,
internal-revision: 4921,
revision: 1829,
revision,api: 47,
revision,error: 86,
revision,hash: 192,
revision,http header: 62

*** Datasets ***
* total: 4440 *,
0: Fresh, Updated metadata: 1883,
0: Fresh, Updated metadata,revision,api: 15,
0: Fresh, Updated metadata,revision,hash: 100,
0: Fresh, Updated metadata,revision,http header: 8,
1: Due, Updated metadata: 1710,
2: Overdue, Updated metadata: 12,
3: Delinquent, Updated metadata: 361,
3: Delinquent, Updated metadata,revision,http header: 3,
Freshness Unavailable, Updated metadata: 348

1521 datasets have update frequency of Never

e.g. a second run one day later:

*** Resources ***
* total: 10207 *,
adhoc-nothing: 3068,
api: 7,
error: 84,
hash: 1,
internal-nothing: 4920,
internal-revision: 1,
nothing: 2115,
revision: 6,
same hash: 5

*** Datasets ***
* total: 4441 *,
0: Fresh, Updated api: 7,
0: Fresh, Updated hash: 1,
0: Fresh, Updated metadata: 3,
0: Fresh, Updated nothing: 1995,
1: Due, Updated nothing: 1711,
2: Overdue, Updated nothing: 12,
3: Delinquent, Updated nothing: 364,
Freshness Unavailable, Updated nothing: 348

1521 datasets have update frequency of Never

For more detailed analysis, the database it builds can be queried, e.g.

select count(*) from dbresources where url like '%ourairports%' and dataset_id in (select id from dbdatasets where fresh is null);
select count(*) from dbresources where url like '%ourairports%' and dataset_id in (select id from dbdatasets where update_frequency is null);


The two queries above return the same value, confirming that for 48 resources whose url contains "ourairports", the freshness value is not calculable because the update frequency of the dataset is not set. This is only possible for datasets created prior to the HDX release which made the update frequency (renamed expected update frequency) mandatory. On this subject, an indication of the update frequency of the dataset is critical to data freshness. Hence, it was proposed to make the data_update_frequency field mandatory instead of optional and to rename it to sound less onerous by adding "expected", i.e. expected update frequency (HDX-4919). It was confirmed that this field should stay at dataset level, as our recommendation to data providers is that if a dataset has resources with different update frequencies, it should be divided into multiple datasets. The field is a dropdown with the values: every day, every week, every two weeks, every month, every three months, every six months, every year and never. It has been implemented.

It was determined that a new field was needed on resources in HDX showing the last time the resource was updated. This has been implemented and released to production (HDX-4254). Related to that is ongoing work to make the field visible in the UI (HDX-4894).

A trigger has been created for Google spreadsheets that will automatically update the resource last modified date when the spreadsheet is edited. This helps with monitoring the freshness of toplines and other resources held in Google spreadsheets and we can encourage data contributors to use this where appropriate. Consideration has been given to doing something similar with Excel spreadsheets, but support issues could become burdensome.

A collaboration has been started with a team at Vienna University who are considering the issue of data freshness from an academic perspective. We will see what we can learn from them but will likely proceed with a more basic and practical approach than what they envisage. Specifically, they are looking at estimating the next change time for a resource based on previous update history, which is in an early stage of research so not ready for use in a real life system just yet.

Next Steps

Running data freshness has shown that there are many datasets with an update frequency of never. This is understandable because for a long time, it was the default option in the UI. As the data freshness database holds organisation information, the first step is to contact all organisations who have datasets with update frequency never and encourage them to put in a time period.

There needs to be a way to specify that although a dataset is updated, it is not according to any schedule, i.e. it is adhoc. However, since this would be an enticing option to pick, as it does not require much thought, it is proposed not to let data contributors choose it in the UI, but to wait for them to ask us, or for us to identify their dataset as stale and contact them about it (HDX-5046). Only administrators would be able to set this in the UI. Similarly, it is proposed to make never available only to administrators, because contributors may pick it simply because they don't want to commit to an expected update frequency. It is probable that they would pick every year instead, and when that timeframe passes and their dataset has not been updated, we would contact them, and they could then tell us it will never be updated.

An issue that needs to be addressed is where data freshness should run and where it should output its results. Currently it runs on a local computer with a local file (SQLite) database, but it uses a database framework (SQLAlchemy), so the database can be changed, as sketched after the list below. Consider that for the HDX UI to use freshness information, it needs access to the freshness data. Mailing providers when their data is overdue also requires access (unless this is added as a feature to data freshness itself). Also, the data team need to be able to read the data for reporting. There are two possibilities for data storage:

  1. Use a standalone database like PostgreSQL hosted somewhere to be determined
  2. Add data into metadata of datasets and resources
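
Because the implementation uses SQLAlchemy, moving from the local SQLite file to a hosted database as in option 1 is largely a matter of changing the connection url. A sketch follows; the database file name and the PostgreSQL url are placeholders, not the project's actual configuration.

from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

# Local run against a file database:
engine = create_engine("sqlite:///freshness.db")

# Switching to a hosted PostgreSQL instance is only a change of connection url
# (host, database name and credentials here are placeholders):
# engine = create_engine("postgresql://freshness:PASSWORD@db.example.org:5432/freshness")

Session = sessionmaker(bind=engine)
session = Session()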

For where to run data freshness, one possibility is the data team server, but if there are production processes reliant on it, then it may be better if it is on a server under the control of the developers.

As data freshness collects a lot of metadata, it could be used for more general reporting. If needed, the list of metadata collected could be extended. 

Once data freshness is running daily, there are some simple improvements we can make that will have a positive impact on data freshness. For example, we should send an automated mail reminder to data contributors if the update frequency time window for any of their datasets is missed by a certain amount. Even for ones which have an update frequency of "never", there could be an argument for a very rare mail reminder just to confirm data really is static. For the case where data is unchanged, we should give the option for contributors to respond directly to the automated mail to say so (perhaps by clicking a button in the message). Where data has changed, we would provide the link to the dataset that needs updating. We should consider if/how we batch emails if many datasets from one organisation need updating so they are not bombarded.

The number of datasets that are hosted outside of HDX is growing rapidly, and these represent a problem for data freshness as their update time may not be available. Rather than ignore them and concentrate only on HDX-hosted files, it was decided to work out what we can do to handle this situation. The easiest solution is to send a reminder to users according to the update frequency; the problem is that this would be sent irrespective of whether they have already updated, and so is potentially annoying.

Another way is to provide guidance to data contributors so that, as they consider how to upload resources, we steer them towards a technological solution that is helpful to us, e.g. using a Google spreadsheet with our update trigger added. We could investigate a fuller integration between HDX and Google spreadsheets, so that if a data provider clicks a button in HDX, it creates a resource pointing to a spreadsheet in Google Drive with the trigger already set up, once they enter their Google credentials. We may need to investigate other platforms, for example creating document alerts in OneDrive for Business and/or macros in Excel spreadsheets (although, as noted earlier, this might create a support headache).

Exploration is currently under way into the header returned by HTTP requests. Sometimes this header contains a Last-Modified field. The percentage of externally hosted resources for which this field is usefully populated needs to be measured. For those resources where the field is not usable, we can resort to a nightly process that hashes each file pointed to by each resource url and compares it with the hashes that have been stored.

Important fields


Field | Description | Purpose
data_update_frequency | Dataset expected update frequency | Shows how often the data is expected to be updated, or at least checked to see if it needs updating
revision_last_updated | Resource last modified date | Indicates the last time the resource was updated, irrespective of whether it was a major or minor change
dataset_date | Dataset date | The date referred to by the data in the dataset. It changes when data for a new date comes to HDX, so may not need to change for minor updates


Dataset Aging Methodology

A resource's age can be measured as today's date minus its last update time. For a dataset, we take the lowest age of all its resources, i.e. the age of its most recently updated resource. This value can be compared with the update frequency to determine an age status for the dataset, as sketched below.
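
A minimal sketch of the dataset age calculation; the function and parameter names are illustrative and not taken from the actual code.

from datetime import datetime


def dataset_age_days(resource_update_times, now=None):
    """A dataset takes the lowest age of its resources, i.e. the age of the
    most recently updated one."""
    now = now or datetime.utcnow()
    return min((now - update_time).days for update_time in resource_update_times)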

 

Thought has previously gone into the classification of the age of datasets. Reviewing that work, the statuses used (up to date, due, overdue and delinquent) and the formulae for calculating them are sound, so they have been used as a foundation. It is important to distinguish between what we report to our users and data providers and what we need for our automated processing. For the purposes of reporting, the terminology we would use is simply fresh or not fresh. For contacting data providers, we must give them some leeway from the due date (technically the date after which the data is no longer fresh): the automated email would be sent on the overdue date rather than the due date (and in the email we would tell the data provider that we think their data is not fresh and needs to be updated, rather than referring to states like overdue). The delinquent date would be used in an automated process that tells us it is time to manually contact the data providers to see if they have any problems we can help with regarding updating their data.


Dataset age state thresholds (how old must a dataset be for it to have this status). In the formulae, f is the expected update frequency expressed in days. Up-to-date and Due count as Fresh; Overdue and Delinquent count as Not Fresh.

Update Frequency | Up-to-date (Fresh) | Due (Fresh) | Overdue (Not Fresh) | Delinquent (Not Fresh)
Daily | 0 days old | 1 day old (due_age = f) | 2 days old (overdue_age = f + 1) | 3 days old (delinquent_age = f + 2)
Weekly | 0 - 6 days old | 7 days old (due_age = f) | 14 days old (overdue_age = f + 7) | 21 days old (delinquent_age = f + 14)
Fortnightly | 0 - 13 days old | 14 days old (due_age = f) | 21 days old (overdue_age = f + 7) | 28 days old (delinquent_age = f + 14)
Monthly | 0 - 29 days old | 30 days old (due_age = f) | 44 days old (overdue_age = f + 14) | 60 days old (delinquent_age = f + 30)
Quarterly | 0 - 89 days old | 90 days old (due_age = f) | 120 days old (overdue_age = f + 30) | 150 days old (delinquent_age = f + 60)
Semiannually | 0 - 179 days old | 180 days old (due_age = f) | 210 days old (overdue_age = f + 30) | 240 days old (delinquent_age = f + 60)
Annually | 0 - 364 days old | 365 days old (due_age = f) | 425 days old (overdue_age = f + 60) | 455 days old (delinquent_age = f + 90)
Never | Always | Never | Never | Never
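
Once a dataset's age is known, the thresholds above can be applied mechanically. The sketch below assumes, purely for illustration, that expected update frequencies are encoded as a number of days and that a missing frequency means the status cannot be calculated; the actual field encoding in HDX may differ.

# Thresholds in days, keyed by expected update frequency in days
# (values transcribed from the table above).
AGE_STATUSES = {
    1:   {"due": 1,   "overdue": 2,   "delinquent": 3},    # daily
    7:   {"due": 7,   "overdue": 14,  "delinquent": 21},   # weekly
    14:  {"due": 14,  "overdue": 21,  "delinquent": 28},   # fortnightly
    30:  {"due": 30,  "overdue": 44,  "delinquent": 60},   # monthly
    90:  {"due": 90,  "overdue": 120, "delinquent": 150},  # quarterly
    180: {"due": 180, "overdue": 210, "delinquent": 240},  # semiannually
    365: {"due": 365, "overdue": 425, "delinquent": 455},  # annually
}


def dataset_status(age_in_days, update_frequency_days):
    """Classify a dataset's age against the thresholds for its update frequency."""
    if update_frequency_days is None:
        return "Freshness Unavailable"
    if update_frequency_days not in AGE_STATUSES:  # assumed encoding of "never"
        return "Fresh"
    thresholds = AGE_STATUSES[update_frequency_days]
    if age_in_days < thresholds["due"]:
        return "Fresh"
    if age_in_days < thresholds["overdue"]:
        return "Due"
    if age_in_days < thresholds["delinquent"]:
        return "Overdue"
    return "Delinquent"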

Number of Files Locally and Externally Hosted

Type | Number of Resources | Percentage
File Store | 2,102 | 22%
CPS | 2,459 | 26%
HXL Proxy | 2,584 | 27%
ScraperWiki | 162 | 2%
Others | 2,261 | 24%
Total | 9,568 | 100%

Determining if a Resource is Updated

The method of determining whether a resource is updated depends upon where the file is hosted. If it is in HDX, i.e. in the file store, then the update time is readily available. If it is hosted externally, then it is not as straightforward to find out if the file pointed to by a url has changed. It is possible to use the Last-Modified field that is returned from an HTTP GET request, provided the hosting server supports it. (For performance reasons, we open a stream, so that we have the option to only read the header information rather than the entire file.) If it is a link to a file on a server like Apache or Nginx, then the field often exists, but if it is a url that generates a result on the fly, then it usually does not. Whether the field turns out to be valuable needs to be determined by examining the percentage of datasets it correctly covers. According to Vienna University the figure could be as high as 60%, but this must be verified.
 
An alternative approach discussed with the researchers is to download all the external urls, hash the files and compare the hashes. The Vienna team had done some calculations and asserted that it would be too resource intensive to hash all of the files, mainly due to the time needed to download all of the urls (and we would need to consider datasets like WorldPop, which are huge). However, we can apply logic so that we do not need to download all of the files every day (except on the first run), based on the due date described earlier. We could restrict the load further if necessary by checking datasets with a lower update frequency less often than nightly.
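
One way the due-date gating could look, as a small sketch; the function and parameter names are illustrative assumptions rather than the project's actual logic.

from datetime import datetime, timedelta


def needs_check(last_update_time, update_frequency_days, now=None):
    """Decide whether an externally hosted resource should be downloaded and
    hashed tonight: only once its dataset has reached its due date."""
    now = now or datetime.utcnow()
    due_date = last_update_time + timedelta(days=update_frequency_days)
    return now >= due_date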
 
The flowchart below represents the logical flow for each resource in HDX and would occur nightly (unless we need to reduce the load):
  

References

Using the Update Frequency Metadata Field and Last_update CKAN field to Manage Dataset Freshness on HDX:

https://docs.google.com/document/d/1g8hAwxZoqageggtJAdkTKwQIGHUDSajNfj85JkkTpEU/edit#

Dataset Aging service:

https://docs.google.com/document/d/1wBHhCJvlnbCI1152Ytlnr0qiXZ2CwNGdmE1OiK7PLzo/edit

https://github.com/luiscape/hdx-monitor-ageing-service


University of Vienna paper on methodologies for estimating next change time for a resource based on previous update history:
University of Vienna presentation of data freshness: