Introduction

...

The implementation of HDX freshness in Python reads all the datasets from HDX (using the HDX Python library) and then goes through a sequence of steps. Firstly, it gets the dataset's update frequency if it has one. If that update frequency is Never, then the dataset is always fresh. If not, it checks whether the dataset and resource metadata have changed - this qualifies as an update from a freshness perspective. It compares the difference between the current time and the update time with the update frequency and sets a status: fresh, due, overdue or delinquent. If the dataset is not fresh based on metadata, then the urls of the resources are examined. If they are internal urls (data.humdata.org - the HDX filestore, manage.hdx.rwlabs.org - CPS), then the HDX metadata is updated whenever the files pointed to by these urls change, so there is no further checking to be done. If they are urls with an adhoc update frequency (proxy.hxlstandard.org, ourairports.com), then freshness cannot be determined. Currently, there is no mechanism in HDX to specify adhoc update frequencies, but there is a proposal to add this to the update frequency options. At the moment, the freshness value for adhoc datasets is based on whatever has been set for update frequency, which may not be accurate, but these datasets can be easily identified and excluded from results if needed. If the url is externally hosted and not adhoc, then we can open an HTTP GET request to the file and check the header returned for the last_modified field. If that field exists, then we read the date and time from it and check whether it is more recent than the dataset or resource metadata modification date. If it is, we recalculate freshness. If the resource is not fresh by this measure, then we download the file and calculate an MD5 hash for it. In our database, we store previous hash values, so we can check whether the hash has changed since the last time we hashed the file. There are some resources where the hash changes constantly because they connect to an api which generates a file on the fly. To identify these, we hash again and check whether the hash changes immediately.
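
A rough sketch of this sequence is below. It is illustrative rather than the actual freshness code: the field handling, the host sets and the simplified two-state status helper are stand-in assumptions (the full due/overdue/delinquent thresholds appear under Dataset Aging Methodology).

```python
from datetime import datetime, timedelta, timezone
from urllib.parse import urlsplit

INTERNAL_HOSTS = {'data.humdata.org', 'manage.hdx.rwlabs.org'}  # filestore, CPS
ADHOC_HOSTS = {'proxy.hxlstandard.org', 'ourairports.com'}

def status_from_age(age: timedelta, frequency_days: int) -> str:
    # Simplified two-state placeholder; the full fresh/due/overdue/delinquent
    # thresholds are set out in the aging methodology table below.
    return 'fresh' if age.days < frequency_days else 'not fresh'

def dataset_freshness(dataset: dict, now: datetime) -> str:
    frequency_days = dataset.get('data_update_frequency')  # assumed already in days
    if frequency_days is None:
        return 'no update frequency'
    if frequency_days == 0:  # stand-in for "Never": static data is always fresh
        return 'fresh'
    # Dataset/resource metadata changes qualify as updates.
    status = status_from_age(now - dataset['metadata_modified'], frequency_days)
    if status == 'fresh':
        return status
    for resource in dataset.get('resources', []):
        host = urlsplit(resource['url']).netloc
        if host in INTERNAL_HOSTS:
            continue  # file changes already update HDX metadata
        if host in ADHOC_HOSTS:
            return 'adhoc'  # freshness cannot be determined
        # Externally hosted and not adhoc: fall through to the Last-Modified
        # header and hash checks sketched later in this document.
    return status
```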

It was determined that a new field was needed on resources in HDX. This field shows the last time the resource was updated; it has been implemented and released to production (HDX-4254). Related to that is ongoing work to make the field visible in the UI (HDX-4894).


Critical to data freshness is having an indication of the update frequency of the dataset. Hence, it was proposed to make the data_update_frequency field mandatory instead of optional, and to change its name to sound less onerous by adding "expected", i.e. expected update frequency (HDX-4919). It was confirmed that this field should stay at dataset level, as our recommendation to data providers would be that if a dataset has resources with different update frequencies, it should be divided into multiple datasets. Assuming the field is a dropdown, it could have the values: daily, weekly, fortnightly, monthly, quarterly, semiannually, annually, never. It would be good to have something pop up if the user chooses "never", making it clear that this option is for datasets whose data is static. We will have to audit datasets where people pick this option, as we don't want people choosing "never" because they don't want to commit to an expected update frequency.
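
For illustration, the dropdown values could map to an expected update frequency in days - the f used by the aging thresholds later in this document. This mapping is an assumption, not a confirmed design:

```python
# Hypothetical mapping from the proposed dropdown values to an expected
# update frequency in days (the "f" used by the aging thresholds below).
UPDATE_FREQUENCY_DAYS = {
    'daily': 1,
    'weekly': 7,
    'fortnightly': 14,
    'monthly': 30,
    'quarterly': 90,
    'semiannually': 180,
    'annually': 365,
    'never': None,  # static data: always fresh, should prompt for confirmation
}
```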

A trigger has been created for Google spreadsheets that will automatically update the resource last modified date when the spreadsheet is edited. This helps with monitoring the freshness of toplines and other resources held in Google spreadsheets and we can encourage data contributors to use this where appropriate. Consideration has been given to doing something similar with Excel spreadsheets, but support issues could become burdensome.

A collaboration has been started with a team at Vienna University who are considering the issue of data freshness from an academic perspective. We will see what we can learn from them but will likely proceed with a more basic and practical approach than what they envisage. Specifically, they are looking at estimating the next change time for a resource based on previous update history, which is in an early stage of research so not ready for use in a real life system just yet.

...


Next Steps

Contact all organisations who have datasets with update frequency Never.

Where should data freshness run, and where should it output its results, e.g. to a database or to HDX metadata? Consider that for the UI to use freshness information, it needs access, and the data team needs access for reporting. As an aside, more general reporting can be done, since data freshness runs every day and collects a lot of dataset metadata (the list of which could be extended).

Add an adhoc update_frequency option and only allow admins to set "Never" and "Adhoc".

The expected update frequency field requires further thought, particularly on the issue of static datasets, following which there will be interface design and development effort.

...


Field | Description | Purpose
data_update_frequency | Dataset expected update frequency | Shows how often the data is expected to be updated, or at least checked to see if it needs updating
revision_last_updated | Resource last modified date | Indicates the last time the resource was updated, irrespective of whether it was a major or minor change
dataset_date | Dataset date | The date referred to by the data in the dataset. It changes when data for a new date comes to HDX, so may not need to change for minor updates
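
For illustration, these fields could be read with the HDX Python library roughly as follows. The dataset id is hypothetical, and the Configuration import path and arguments vary between library versions, so treat this as a sketch rather than working code for any particular release:

```python
from hdx.configuration import Configuration  # import path differs across versions
from hdx.data.dataset import Dataset

Configuration.create(hdx_site='prod', hdx_read_only=True)
dataset = Dataset.read_from_hdx('example-dataset-id')     # hypothetical id
print(dataset.get('data_update_frequency'))  # expected update frequency
print(dataset.get('dataset_date'))           # date referred to by the data
for resource in dataset.get_resources():
    print(resource.get('revision_last_updated'))  # resource last modified date
```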


Dataset Aging Methodology

A resource's age can be measured as today's date minus its last update time. For a dataset, we take the lowest age of all its resources. This value can be compared with the update frequency to determine an age status for the dataset.
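
As a minimal sketch of this calculation (the function name is illustrative):

```python
# A resource's age is today's date minus its last update time; a dataset
# takes the lowest of its resources' ages, i.e. that of its most recently
# updated resource.
from datetime import datetime, timezone

def dataset_age_days(resource_update_times, now=None):
    now = now or datetime.now(timezone.utc)
    return min((now - updated).days for updated in resource_update_times)
```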

 

Thought has previously gone into the classification of the age of datasets. Reviewing that work, the statuses used (up to date, due, overdue and delinquent) and the formulae for calculating those statuses are sound, so they have been used as a foundation. It is important that we distinguish between what we report to our users and data providers and what we need for our automated processing. For the purposes of reporting, the terminology we would use is simply fresh or not fresh. For contacting data providers, we must give them some leeway from the due date (technically the date after which the data is no longer fresh): the automated email would be sent on the overdue date rather than the due date (but in the email we would tell the data provider that we think their data is not fresh and needs to be updated, rather than referring to states like overdue). The delinquent date would also be used in an automated process that tells us it is time to manually contact the data providers to see if they have any problems we can help with regarding updating their data.


Dataset age state thresholds (how old must a dataset be for it to have this status). Here f is the expected update frequency in days; an up-to-date dataset is fresh, while due, overdue and delinquent datasets are not fresh.

Update Frequency | Up-to-date | Due | Overdue | Delinquent
Daily | 0 days old | 1 day old (due_age = f) | 2 days old (overdue_age = f + 1) | 3 days old (delinquent_age = f + 2)
Weekly | 0 - 6 days old | 7 days old (due_age = f) | 14 days old (overdue_age = f + 7) | 21 days old (delinquent_age = f + 14)
Fortnightly | 0 - 13 days old | 14 days old (due_age = f) | 21 days old (overdue_age = f + 7) | 28 days old (delinquent_age = f + 14)
Monthly | 0 - 29 days old | 30 days old (due_age = f) | 44 days old (overdue_age = f + 14) | 60 days old (delinquent_age = f + 30)
Quarterly | 0 - 89 days old | 90 days old (due_age = f) | 120 days old (overdue_age = f + 30) | 150 days old (delinquent_age = f + 60)
Semiannually | 0 - 179 days old | 180 days old (due_age = f) | 210 days old (overdue_age = f + 30) | 240 days old (delinquent_age = f + 60)
Annually | 0 - 364 days old | 365 days old (due_age = f) | 425 days old (overdue_age = f + 60) | 455 days old (delinquent_age = f + 90)
Never | Always | Never | Never | Never
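
A minimal sketch of how these thresholds could be applied in code; the offsets are read straight from the table above, and the function and dict names are illustrative:

```python
# For an expected update frequency of f days, due_age = f, while the
# overdue/delinquent offsets vary with the frequency.
THRESHOLD_OFFSETS = {  # f (days): (overdue offset, delinquent offset)
    1: (1, 2),      # daily
    7: (7, 14),     # weekly
    14: (7, 14),    # fortnightly
    30: (14, 30),   # monthly
    90: (30, 60),   # quarterly
    180: (30, 60),  # semiannually
    365: (60, 90),  # annually
}

def age_status(age_days: int, f: int) -> str:
    """Return the age state for a dataset; only 'up to date' counts as fresh."""
    overdue_offset, delinquent_offset = THRESHOLD_OFFSETS[f]
    if age_days >= f + delinquent_offset:
        return 'delinquent'  # prompts manual contact with the provider
    if age_days >= f + overdue_offset:
        return 'overdue'     # prompts the automated email
    if age_days >= f:
        return 'due'
    return 'up to date'
```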

[Drawio diagram: Emails and Alerts]

Number of Files Locally and Externally Hosted

Type | Number of Resources | Percentage
File Store | 2,102 | 22%
CPS | 2,459 | 26%
HXL Proxy | 2,584 | 27%
ScraperWiki | 162 | 2%
Others | 2,261 | 24%
Total | 9,568 | 100%


Determining if a Resource is Updated

The method of determining whether a resource is updated depends upon where the file is hosted. If it is in HDX, i.e. in the file store, then the update time is readily available. If it is hosted externally, then it is not as straightforward to find out if the file pointed to by a url has changed. It is possible to use the last_modified field that is returned from an HTTP GET request, provided the hosting server supports it. (For performance reasons, we open a stream, so that we have the option to read only the header information rather than the entire file.) If the url is a link to a file on a server like Apache or Nginx, then the field often exists, but if it is a url that generates a result on the fly, then it does not. Whether the field turns out to be valuable needs to be determined by examining the percentage of datasets it correctly covers. According to Vienna University, the figure could be as high as 60%, but this must be verified.
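
A minimal sketch of that check, assuming the requests library (the function name is illustrative and the real implementation may differ):

```python
# Open a streamed GET so that only the headers need to be read, then parse
# any Last-Modified date for comparison with the metadata modification date.
from email.utils import parsedate_to_datetime
import requests

def url_last_modified(url):
    with requests.get(url, stream=True, timeout=30) as response:
        response.raise_for_status()
        header = response.headers.get('Last-Modified')
        return parsedate_to_datetime(header) if header else None
```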
 
An alternative approach discussed with the researchers is to download all the external urls, hash the files and compare the hashes. The Vienna team had done some calculations and asserted that it would be too resource intensive to hash all of the files, mainly due to the time needed to download all of the urls (and we would need to consider datasets like WorldPop, which are huge). However, we can apply logic so that we do not need to download all of the files every day (except on the first run), based on the due date described earlier. We could restrict the load further if necessary by checking datasets with a lower update frequency less often than nightly.
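
A minimal sketch of the hash comparison under the same assumptions (requests for the download, one stored hash per resource in our database; names illustrative). The second hash detects urls that generate a new file on every request, for which freshness cannot be determined this way:

```python
import hashlib
import requests

def md5_of_url(url):
    md5 = hashlib.md5()
    with requests.get(url, stream=True, timeout=30) as response:
        response.raise_for_status()
        for chunk in response.iter_content(chunk_size=65536):
            md5.update(chunk)
    return md5.hexdigest()

def resource_updated(url, stored_hash):
    new_hash = md5_of_url(url)
    if new_hash == stored_hash:
        return False  # unchanged since the last run
    if md5_of_url(url) != new_hash:
        return None   # hash changes on every request: api-generated file
    return True       # genuinely updated
```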
 
The flowchart below represents the logical flow for each resource in HDX and would occur nightly (unless we need to reduce the load):
[Drawio flowchart: Hashing Flowchart]

References

...

Attached file: freshness_hdx.pdf