Progress in research data metrics

It’s May! – and this may be the last post I write about our IRUS-based download project as an “alpha” as we edge ever closer to the “beta” stage.

But I wanted to start by highlighting the much earlier-stage citation work. Energised by a recent meeting, you may have seen two blog posts drawn from work we commissioned from Cameron Neylon – one on current international work on data citation over on the main Jisc RDM blog, the other here looking at the knotty and fascinating issues concerning the nature of citation. As a result I have already joined the DCIP working group and look forward to their support on our proposed collection of use cases – and there are other areas of work under consideration.

Continued excitement about Cameron Neylon’s discussion paper on data citation aside, we’re still working hard on our IRUS-based service for research data repositories – there are now 15 test sites actively sending download data and accessing statistics. Later this month we’ll be bringing representatives from these sites together for the first of what I hope will be regular meetings – allowing us to understand, at a very detailed level, how the data our service produces is used within institutions and research data centres.

Knowing this will of course help us to improve our pilot as it eases gently into “beta”, but we’re also delighted to be able to feed into the contemporaneous development of the COUNTER code of practice for research data. Long-time readers will recall that Project COUNTER helps our IRUS-based services to identify “real” downloads – filtering out things like repeated clicks and web spiders. This is a huge deal – of 172,416 “downloads” since the inception of our research data IRUS (at the time of writing), only 20,710 – around 12% – can be considered genuine once these rules are taken into account.
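To give a flavour of what this filtering involves, here is a minimal sketch of the two rules mentioned above – discarding known robots and collapsing repeated clicks on the same file within a short window. The agent list, the 30-second window, and the log format are all illustrative assumptions for this example, not the actual IRUS or COUNTER implementation.

```python
from datetime import datetime, timedelta

# Illustrative subset of crawler user agents (a real robot list is much longer)
ROBOT_AGENTS = {"googlebot", "bingbot", "ahrefsbot"}

# Assumed "double-click" window: repeat requests inside it count only once
DOUBLE_CLICK_WINDOW = timedelta(seconds=30)

def count_genuine(events):
    """Count genuine downloads from (timestamp, ip, user_agent, file_id) events,
    filtering robots and collapsing repeated clicks on the same file."""
    last_seen = {}  # (ip, user_agent, file_id) -> timestamp of last request
    genuine = 0
    for ts, ip, agent, file_id in sorted(events):
        if agent.lower() in ROBOT_AGENTS:
            continue  # discard known crawlers outright
        key = (ip, agent, file_id)
        prev = last_seen.get(key)
        last_seen[key] = ts
        if prev is not None and ts - prev <= DOUBLE_CLICK_WINDOW:
            continue  # repeated click within the window: don't count again
        genuine += 1
    return genuine
```

In practice the real rules are more involved (rolling windows, session handling, regularly updated robot lists), but the shape of the problem – raw hits in, a much smaller count of genuine downloads out – is the same.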

But are these rules (currently generalised for all repository contents) the right ones for research data? Already we’ve chosen to look at downloads at file rather than item level, but what other changes should we make? How should we address – for example – the growing use of research robots to analyse multiple datasets? This is some of what our intrepid repository managers will be debating, and is what COUNTER will eventually seek to codify.

We’re also pleased to report that test data has been successfully received by IRUS from both Figshare and Elsevier Pure. This is one of the final stages in the integration process, and means we will be able to incorporate download data from both these services (used by numerous institutions and, in the former case, individual researchers to share research data) in the very near future.

Those of you who have been following Jisc work on a research data shared service for the UK will note that our downloads service will be an integral component of the offer there. A benefit of using lightweight and widely-recognised standards is an ability to easily integrate across a range of platforms – so whatever you are using (other than, currently, Converis…) chances are we can get you set up to use the pilot service. Do get in touch if this sounds like something you would like to be involved in.
