Now that freshness is exposed in the interface, we need to examine whether what it says makes sense and look at how users react to it. In particular, there are some cases where the freshness process needs adjustment in order to avoid misleading users. Below I outline the problems and then give proposals for solutions.

...

The following issues were found prior to the start of this investigation:

  • Exclude "Live", "As Needed" and "Never" datasets from no touch if already fresh rule - DONE
  • Discount edits made by HDX
  • Restrict which metadata changes count as updates for freshness
  • Offer an "archived" icon in addition to "fresh" to indicate a dataset that is old, up-to-date, and no longer being updated. At the moment these are being called fresh, which is technically true, but tends to present a lot of old data to users.
  • Date of Dataset is used in different ways

Discount edits made by HDX

Edits by HDX staff are typically made to fix issues and have no bearing on how up to date the data is, so they should be ignored by freshness. However, we need to consider what to do about edits to datasets maintained by HDX itself.

The edits that have been performed on a dataset can be seen by looking at package_revision_list. One complication is that we must go through the full history of edits, because someone outside HDX could make an update, followed not long after by someone in HDX. A naive implementation that only looks at the latest edit would miss the earlier, external edit, which should count towards freshness.
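The history walk described above can be sketched as follows. The revision dict shape ("author" and "timestamp" keys, newest first) is based on the output of CKAN's package_revision_list action, and the HDX account names are placeholders, not real values:

```python
from datetime import datetime

# Placeholder HDX account names - the real list would come from configuration.
HDX_USERS = {"hdx-staff-1", "hdx-staff-2"}

def latest_external_edit(revisions, hdx_users=HDX_USERS):
    """Walk the full revision history (assumed newest first) and return the
    timestamp of the most recent edit NOT made by an HDX account, or None
    if every revision was made by HDX."""
    for revision in revisions:
        if revision.get("author") not in hdx_users:
            return datetime.fromisoformat(revision["timestamp"])
    return None
```

For example, an external edit followed shortly by an HDX fix would yield the external edit's timestamp, so the HDX edit is discounted without losing the one that matters for freshness.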

Restrict which metadata changes count as updates for freshness

Currently any dataset metadata change counts as an update from a freshness perspective. Our assumption is:

  • Such changes are taken as signifying that the dataset maintainer has thought about the data and checked it
  • If they had newer data, then we would expect them to put it into HDX while updating the dataset metadata
  • The fact they haven't means the data is as up to date as possible

This proposal limits our assumption to certain fields - it becomes:

  • Changes in certain metadata fields are taken as signifying that the dataset maintainer has thought about the data and checked it
  • If they had newer data, then we would expect them to put it into HDX while updating these specific dataset metadata fields
  • The fact they haven't means the data is as up to date as possible

The criteria for choosing the fields should be those that directly affect the underlying data or freshness calculation:

  • Expected update frequency
  • Dataset date
  • Location?
  • Source?

Note that if the number of fields is severely limited, this may render discounting edits by HDX unnecessary.
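The restriction could be implemented as a simple whitelist check when comparing old and new metadata. The field keys below are illustrative stand-ins for the HDX metadata schema, and the whitelist itself is the open question discussed above, not a settled list:

```python
# Assumed metadata keys for "Expected update frequency" and "Dataset date" -
# the actual HDX schema key names would need to be confirmed.
FRESHNESS_FIELDS = {"data_update_frequency", "dataset_date"}

def counts_as_update(old_metadata, new_metadata, fields=FRESHNESS_FIELDS):
    """Return True only if a whitelisted metadata field changed,
    so that edits to e.g. the description do not reset freshness."""
    return any(old_metadata.get(f) != new_metadata.get(f) for f in fields)
```

Under this sketch, editing the description leaves freshness untouched, while changing the dataset date counts as an update.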

Points to consider:

  • Expected update frequency is used to calculate freshness, but then if someone changes it from yearly to monthly, that doesn't indicate anything about the data having changed. If the dataset was delinquent with yearly update frequency, it should still be delinquent with monthly.
  • Why should someone changing the dataset description be any less of an update from a freshness perspective than changing the dataset date?
  • There doesn't seem to be a compelling reason to do a partial restriction of metadata changes counting for freshness - it's really all or nothing:
    • either we regard any metadata change as someone indicating that the data is as fresh as it can be (as we originally envisaged)
    • or we simply disregard metadata changes altogether from determination of freshness and rely solely on data changes - note that detecting file store changes specifically would need to be investigated

Offer an "archived" icon in addition to "fresh"

The data in some datasets refers to or covers a date or date period which is far in the past, but the data itself is as up to date as it could be and will not be updated again. For these cases, it makes sense to offer an archived icon instead of fresh (which would be the icon used at present for an expected update frequency of "never"). 

Date of Dataset is used in different ways

...

More on point 1 below in Confusing concepts related to Date of Dataset.

Discovering Other Issues

To discover other possible issues with how freshness is understood, the following strategy was applied:

  1. Take a random sample of datasets ensuring that among them are fresh, due, overdue, and delinquent datasets and that they represent a cross section of different organisations' datasets
  2. Evaluate what fresh and not fresh mean
  3. Determine if it is clear to users
  4. Collect any cases where the fresh label (or lack of it) is misleading
  5. Categorise misleading cases

With an overview of the misleading cases, we can reconsider the terminology we use, such as fresh and not fresh, so that it accounts for the misleading cases and provides clarity to users.

Misleading cases

The misleading cases are documented in the Google spreadsheet here and the resources for those datasets were all frozen and stored in GitHub for further analysis. From the full analysis, a subset of examples of specific cases were picked and coloured in red.

...

Confusing concepts related to freshness

The following are possible dates freshness could use:

  • What date or date period does the data in the dataset cover
  • The date the data in the dataset was last modified
    • Was the update significant or minor?
  • The date the metadata of the dataset was last modified
    • Was the change significant or relevant to any dates we report?

...

Given the different concepts being conflated in the term fresh, it makes sense to me to separate them. Instead of having just one icon "fresh" with a range of meanings to users, we should have multiple icons that make it clear the concept we are trying to get across. I propose the following categories:

  1. "Active" for data that has been recently updated or reviewed (where we need to decide what recent means eg. last 2 weeks) - using the "Last Modified" dataset field mentioned above. 
  2. "New" for newly created datasets (where we need to decide what new means eg. last 2 weeks) - using the "Dataset Created" dataset field. 
  3. "Up to date" for data that has been updated or reviewed within its expected update frequency (or is Live) -  using the "Last Modified" and "Expected update frequency" dataset fields. 
  4. "Current" for data that covers a date or date period close to the present - using the "Date Coverage" metadata field(s) described earlier.
  5. "Archived" for data that covers a date or date period far from the present, will not be updated again (expected update frequency=Never) and is as up to date as it can be - using the "Date Coverage", "Last Modified" and "Expected update frequency" dataset fields. 
  6. "Superseded" for data that covers a date or date period far from the present, will not be updated again (expected update frequency=Never) and for which another dataset exists with more current data - using the "Date Coverage", "Last Modified" and "Expected update frequency" dataset fields. While useful, is it feasible to identify these? 

The categories Current and Archived are obviously mutually exclusive. It makes sense for "Active" and "New" not to be used together ie. if a dataset is newly created, there is no need to identify it as "Active" as well. 
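The category assignment above can be sketched as a function of the dataset fields. The thresholds (14 days for "recent"/"new", coverage within a year for "close to the present") are assumptions for illustration, and "Up to date" and "Superseded" are omitted since they need the update-frequency mapping and duplicate detection respectively:

```python
from datetime import datetime, timedelta

RECENT = timedelta(days=14)  # "recent"/"new" threshold - to be decided

def categorise(created, last_modified, coverage_end, update_frequency, now):
    """Assign the proposed categories from dataset fields.
    'Current' vs 'Archived' and 'New' vs 'Active' are kept
    mutually exclusive, as proposed above."""
    categories = set()
    if now - created <= RECENT:
        categories.add("New")
    elif now - last_modified <= RECENT:  # "New" suppresses "Active"
        categories.add("Active")
    if now - coverage_end <= timedelta(days=365):  # assumed "close to present"
        categories.add("Current")
    elif update_frequency == "never":  # old coverage + never updated again
        categories.add("Archived")
    return categories
```

A newly created dataset with recent coverage would come out as {"New", "Current"}, while an old, never-to-be-updated dataset would come out as {"Archived"} only.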

...

Other than these exceptions, the categories can be assigned together ie. a dataset can be in multiple categories. However, if icons are displayed for all the categories a dataset is in, this could lead to too many being displayed at once (at least on the dataset list/search page) - we may need to experiment to see if this causes confusion and/or looks ugly. If so, I can think of two possibilities:

  1. We could consider limiting the dataset list/search page to show the most relevant category which means that there must be a priority for when multiple categories apply, for example for a dataset that is "New", "Current" and "Up to date", show "New" but not "Current" or "Up to date".
  2. We group the categories under fewer icons, for example the "New" icon would actually mean the dataset is in categories "New", "Current" and "Up to date" and the icon "Active" would mean it is "Active", "Current" and "Up to date".

In the dataset list/search UI, the categories could be sorted on. We need to decide a priority perhaps using a points-based system such that "New" is at the top, then "Active"+"Current"+"Up to date" is next etc. Additionally, it would be very useful to be able to filter by these different categories.
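A points-based priority like the one suggested could look as follows. The weights are placeholders to be tuned, not agreed values:

```python
# Placeholder weights: powers of two so that "New" always outranks any
# combination of the lower categories.
CATEGORY_POINTS = {"New": 8, "Active": 4, "Current": 2, "Up to date": 1,
                   "Archived": 0, "Superseded": 0}

def sort_key(categories):
    """Higher total points sorts nearer the top of the list/search page."""
    return -sum(CATEGORY_POINTS.get(c, 0) for c in categories)

datasets = [{"name": "a", "categories": {"Archived"}},
            {"name": "b", "categories": {"Active", "Current", "Up to date"}},
            {"name": "c", "categories": {"New", "Current", "Up to date"}}]
datasets.sort(key=lambda d: sort_key(d["categories"]))
# c (New, 11 points) sorts above b (7 points) above a (0 points)
```

Filtering by category would then just be a membership test against the same category sets.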

I do not suggest distinguishing major and minor updates because it is probably impossible to detect automatically.

By using the new "Last Modified" field, metadata updates will not be counted - we are only concerned with updates or reviews of data. Given the "Reviewed" button, we do not need to look at metadata changes to infer that a dataset's data has been checked. Ignoring all metadata changes negates the need to "discount edits by HDX" as a byproduct, but adds the technical problem of distinguishing updates of files in the filestore from other metadata updates in CKAN. Filestore updates, not metadata updates, should update the corresponding resource's "Last modified" field. I cannot see anything in the API to distinguish updates of the filestore from other updates - is there anything in the CKAN database?
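One heuristic worth investigating: CKAN resource dicts carry a "url_type" of "upload" for filestore resources and a "last_modified" value that is set when a file is uploaded. Whether "last_modified" reliably distinguishes filestore uploads from metadata edits is exactly the open question above, so this is a sketch under that assumption, not a confirmed mechanism:

```python
def data_changed(old_resources, new_resources):
    """Heuristic: treat a change in any uploaded resource's last_modified
    as a data (filestore) update rather than a metadata-only edit.
    Assumes CKAN resource dicts with 'id', 'url_type' and 'last_modified'."""
    old = {r["id"]: r.get("last_modified") for r in old_resources
           if r.get("url_type") == "upload"}
    return any(r.get("url_type") == "upload"
               and r.get("last_modified") != old.get(r["id"])
               for r in new_resources)
```

If this holds up against the database, comparing resource snapshots before and after an edit would let freshness react only to genuine data updates.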