
Process we seek to follow

Initial Call with the client (data contributor)

At this stage we seek to understand their expectations: what data they have and what type of visualization they believe they require. Note that we don't go into too much detail on what can actually be produced until we have seen the proposed data.

Output: a DataVis Brief, with a record and link kept on the datavis process control sheet.

Get the Data

We request that contributors load the data onto HDX. A key value-add of our dynamic datavis work is that it builds on HDX: once the data is on HDX, with our direct or indirect support, we plug it into a data visualisation so that it updates dynamically. Contributors can then reuse this datavis on other digital properties, ideally for operational purposes. The onus is on the contributor to keep updating the data so the datavis remains of value.
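As a rough illustration of what "plugging the data in" can look like, the sketch below fetches the download URL of a dataset's resource directly from HDX (which is built on CKAN), so a datavis can always read from the contributor's most recent upload. The dataset id used here is a placeholder for illustration, not a real HDX dataset.

```python
import requests

# HDX is built on CKAN, so a dataset's metadata (including the URLs of its
# resource files) can be fetched from the standard CKAN action API.
HDX_API = "https://data.humdata.org/api/3/action/package_show"

def latest_resource_url(dataset_id: str) -> str:
    """Return the download URL of the first resource of an HDX dataset."""
    response = requests.get(HDX_API, params={"id": dataset_id}, timeout=30)
    response.raise_for_status()
    resources = response.json()["result"]["resources"]
    return resources[0]["url"]

# "example-org-3w-data" is a hypothetical dataset id used for illustration.
if __name__ == "__main__":
    print(latest_resource_url("example-org-3w-data"))
```

A datavis built this way re-reads the resource URL on each refresh, so as long as the contributor keeps uploading new versions to the same dataset, the visualisation stays current without further work on our side.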


Review the Data

Once the data is on HDX, shared publicly or privately, we move to the "what is the state of the data" phase. At this point what is possible becomes far clearer, as we have an opportunity to review the data with someone from our data team, and it should become clear how much data cleaning is required. We rate datasets according to quality: if a dataset is flat, or of high quality with all the required metadata, we can treat it as "curated data". Unfortunately, many of the datasets that arrive on HDX do not initially meet this threshold and so require intervention by someone from our data team.

What is possible

Once the data is in a state where it can be, for example, overlaid on a map, graphed or compared with another dataset, the datavis possibilities become more apparent. At this stage we have a realistic call with the data contributor to discuss what datavis is possible given the data. Ideally, someone from our datavis team joins the call, together with the data scientist and someone from the main data team. Combining our team's input with ideas from the data contributor, we collectively confirm or redirect the client depending on the type of data and what works from a dynamic datavis perspective. We sometimes find that users have very detailed ideas about how they want their datavis to look. At this stage we seek to limit expectations to an MVP (minimal viable product), focusing on the core information to be highlighted.

Mockups

Following the call where we dig into what is possible, the next stage is for our datavis person to present a mockup of the proposed datavis. We use this approach to ensure we are broadly on track in terms of joint understanding before coding starts. This is a crucial phase, as clients can sometimes have widely different expectations; the mockup makes clear what the final datavis is expected to be. Note that if we are able to reuse an existing datavis at this stage, we simply plug the data in and should be able to generate the datavis quite easily. See examples: 3W.


Show and Tell – Mockups

At this stage we present the first mockup and take feedback. We seek to document any proposed changes, as code is written after this stage. We also reinforce the need for the client to keep their data up to date or, even better, to ensure their systems automatically update the source from which the datavis is derived. Link to MM as a case in point.


Show and Tell – Version 1 DataVis

At this stage we show version 1 of the datavis. We hope to only have to catch small changes, such as labelling or positioning. We again document these changes and confirm them with the client.


Show and Tell – Version 2 DataVis

This should be the final show and tell. The datavis should now meet the client's expectations and only very minor changes should be expected. At this stage we should be able to agree the release plan, including any tweets, a GIF or sometimes a guest blog.


Weaknesses in our current process

We don't have a system to manage this process. In addition, the process is slightly ad hoc, as we are still encountering edge cases that are sometimes not apparent at the beginning of the process. One weakness in our current process is the lack of a mechanism for making clear to the client that they need to take responsibility for updating the data. We seek to reinforce this message, but sometimes clients only want a datavis for a very specific occasion, event or funding objective and don't see the deeper benefit of plugging their operational data into these systems, as it often requires process / change management at the client end for which they have limited resources, capacity or skill. We don't currently have a process in place to review whether orgs with a datavis are maintaining their data, although we are soon to release a new set of features that focus more heavily on "data freshness". We do seek to make improvements to this process.

