The importance of sharing for HEIs

The University of Hong Kong has recently signed an agreement with Springer regarding a trial of Open Access to materials – as well as signing the Berlin Declaration.

While the detail of this particular agreement can be discussed – and has been raised on the JISC Repositories list – a posting from David Palmer at the University highlighted the strategic thinking that underlies the work being done in the area. See the webpage “Promoting Knowledge Exchange and Demonstrating Leadership in Communities Across the Region” on the University’s website.

This is an articulate and concise exploration of why knowledge exchange is important to an HE institution; the interconnectedness of the institution, staff, alumni, local community and the wider outside world; and the inseparability of knowledge exchange from teaching, learning and research.

PS And, thankfully, such strategic ambitions now do seem to be trumping the old “free-rider” argument: that commercial companies would be able to make money from research given to them for free. This objection was previously raised by publishers, without a hint of irony.

Bill Hubbard

PEER Baseline – why don’t authors deposit?

The recently released PEER baseline report on authors and users, regarding journals and repositories, from Loughborough University is packed full of very useful and very dense information and analysis, which will repay many close readings.

For advocates of open access, and in particular institutional repositories, one immediately interesting question is Q 22: What reservations do you have about placing your peer-reviewed journal articles in publicly available repositories?

Responses to this are quite fascinating. As a (very) rough analysis, putting together the figures seems to show that the most significant concern is a reluctance to put research publications in a repository where other materials have not been peer-reviewed, with nearly 50% considering this either very important or important.

Following close behind are concerns about infringing copyright and infringing embargo periods; concern about the paper not having been “properly edited by the publisher”; not knowing of a suitable repository; concern about plagiarism or unknown reuse; then not knowing how to deposit material in a repository and not knowing what a repository was. Other concerns are a step change down from these.

If, as advocates, we want to get more material into repositories, these might well be the key questions for advocacy to address. Interestingly, none of these is unanswerable or requires policy change or mandates: most revolve around a simple lack of knowledge.

For instance, the top concern, of sharing server-space with pre-prints, really revolves around a lack of knowledge as to how the open access repository system works. I doubt that academics really object to their words being held on adjacent tracks of a hard disk to non-peer-reviewed material. I suspect the real worry is that a user accessing the material will be presented with their hard-earned peer-reviewed material displayed alongside non-peer-reviewed material.

In other words it is the difference between storage and access.  Material can be deposited and stored in a repository, but users will access the material in a separate fashion and be able to separate out by subject, peer review status, etc.  If this distinction is not appreciated by an author, then they may well see the repository as both storage and access mechanism: whereas for almost all users the actual repository — and its accompanying content — will be reduced in use to a single cover sheet on the article that they actually want.

The concerns about copyright and embargo, again, are really a matter of the author being given the right information at the right time. Repository managers commonly use RoMEO to find out this information: there is a strong case for arguing that RoMEO’s API should be used more widely to embed the information directly into the deposit process. Or, at least, authors should be told that copyright and embargo information is readily available and that this need not be an issue for them. (Disclosure: we run RoMEO from the CRC here in Nottingham.)
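To make that concrete, here is a minimal sketch of how a deposit form might look up a journal’s archiving policy at submission time. The endpoint path, parameter name and response element shown reflect the RoMEO API as documented around this period and may have changed since; treat them as illustrative rather than definitive.

```python
import urllib.parse
import xml.etree.ElementTree as ET

# Illustrative endpoint: check the current SHERPA/RoMEO documentation
# for the live API URL and parameters.
ROMEO_API = "http://www.sherpa.ac.uk/romeo/api29.php"

def build_query(issn):
    """Build a RoMEO API lookup URL for a journal, keyed by ISSN."""
    return ROMEO_API + "?" + urllib.parse.urlencode({"issn": issn})

def parse_colour(xml_text):
    """Extract the publisher's RoMEO 'colour' (a summary of its
    self-archiving policy) from an API response document."""
    root = ET.fromstring(xml_text)
    el = root.find(".//romeocolour")
    return el.text if el is not None else None

# A deposit form could fetch and display the policy like so
# (requires urllib.request; omitted here to keep the sketch offline):
#   with urllib.request.urlopen(build_query("0953-1513")) as resp:
#       colour = parse_colour(resp.read())
```

Embedding a call like this at the point of deposit would put the copyright and embargo answer in front of the author at exactly the moment the concern arises.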

Concerns about plagiarism and how the material will be used can also be addressed.  Far from being an invitation to plagiarism, making materials openly accessible simply increases the chance that the plagiarist will be detected. Authors have always said they have been reassured when I have pointed this out: but it does seem a matter of someone personally addressing the concern.

For those concerned about depositing materials that have not been “properly edited” by the publisher, again the answer is information as to how the system works — allowing, in most cases, the deposit of the author’s final version, after peer review changes.

The other three highest concerns again revolve around a lack of information as to how the system works: not knowing of a suitable repository, not knowing how to deposit, and not knowing what an open access repository is.

Although Question 22 reveals a range of strongly felt concerns which stop authors using repositories, it is reassuring to note that none of them need be a showstopper: it’s just an argument for continued, repetitive, hard-slog advocacy of the basics.

Bill Hubbard

Open Access and Glacial Melting

Interesting to see the news stories on the discredited claim by the IPCC that the Himalayan glaciers could melt within 35 years. I think that the basis of Open Access has a lot to say about incidents such as the glacier-melt story. Open Access carries a clear message of transparency and accessibility, and even though this might be a well-known message for ourselves, I think it is always worth relating to stories such as this and repeating in public fora.

In other words, open access brings traceability to materials. If all research material, including (at least published) grey literature, were made open access as a matter of course, then tracing back references would be a matter of simple mouse clicks. In the case of the Himalayan glacier-melt, one or two mouse clicks away:
**   The IPCC claims glacier-melt and cites a 2005 report by the conservation group WWF. This reference is to an open access report, so one click brings it up.
**   The 2005 WWF report quotes a 1999 interview with the Indian glaciologist Syed Hasnain. Either the report merely mentions the interview, in which case the trail ends at a non-peer-reviewed source, or the interview itself was published, in which case one more click retrieves it.
**   Either way, the researcher has reached the source and can make their own judgement as to its validity.

And of course, if materials were referenced and openly accessible so that they could be checked at this speed, anyone writing or editing a report would be able to recheck all of their sources before the report was ever published, rather than, as at present, relying on notes taken from an interlibrary loan two years before, relying on a contributor to have checked their sources, or seeing that the source is a WWF report and assuming that the sources it used were peer-reviewed.

If reports and articles could have their references checked in this way by the sceptical scientific process, it could be argued that a subscription-based service would serve just as well, at least for those wealthy enough to have access to the world literature, while everyone else is shut out. But a comprehensive Open Access system makes checking available to everyone: references could be checked by journalists, concerned members of the public, students and others, as well as all academic researchers.

Open Access transparency should act to reassure the public that the decisions and actions taken on their behalf are based on a secure footing. If scientists and politicians are truly concerned about a loss of trust in the scientific process by the public, then Open Access is one way to reclaim that trust.

Bill Hubbard 

OA in Times Higher

An article by Zoe Corbyn has appeared in the Times Higher today (12th November 2009), reviewing the current state of open access and rehearsing some of the arguments for and against open access.  This is a long article (5,500 words) and given the wide readership of the Times Higher within UK HE has the potential to be a significant piece.

In spite of the advocacy work that has been done over the last 6 or 7 years, many academics are still unaware of open access and what it may mean for them.  Very often this is not because the information has not got through to them in the first place, but rather that without immediate application of the ideas, academics, quite naturally, forget. We all live in an information-rich environment, with so many calls on our attention that unless advocacy leads to immediate action, details and ideas can be lost in the barrage.

The advantage of such a piece in the Times Higher is that it can be read by many academics and other staff at the same time and start conversations in the coffee room or SCR; it comes with a certain badge of relevance conferred by the publication itself; and the reporting touches enough sensitive spots for people to sit up and take notice.

Some of the quotes are robust:

“‘Repositories are parasitic on the existing journal structure for their peer-review process,’ says Ian Russell, chief executive of the Association of Learned and Professional Society Publishers.”

and the debate represented likewise:

“However, the open-access movement counters that the journal structure itself can be seen as parasitic, profiting from the free peer-review services that academics provide.”

The article also quite fairly looks at some of the (largely unnecessary) divisions within the open access community between the promotion of gold or green routes.

It will be interesting to see what responses this generates. Other reports in the Times Higher and other general HE publications have tended to be far shorter, single-issue pieces, easily dismissed as minor items of specialist interest. This piece is far more wide-ranging in scope and may be enough to embed the topic as one of interest for everyone.

Given the scope of OA and other research communication developments (text-mining, access to grey literature, etc) it is vital that there is more general debate and reporting like this:  that research communications as a whole are seen as a proper and interesting topic to report on and for everyone to discuss. Ultimately, any change has to be made with the agreement and engagement of those concerned. A significant aim of advocacy has always been engagement of academics and other institutional staff with the debate itself: let us hope that this opens the wider debate.

Bill Hubbard 

Developing complexity and service response

Following from the release of a major upgrade to RoMEO during Open Access week, the Centre for Research Communications, which runs the SHERPA services RoMEO, JULIET and OpenDOAR, has now launched two User Surveys to gather feedback from the community – a survey for RoMEO and a survey for OpenDOAR and ROAR. These surveys are to help prepare for support of an increasingly diverse research communications environment.

As part of RoMEO we have always had a suggestion form for new publishers or for updating information and an active community of contributors and suggestions. However, we wanted to launch the current survey to more formally gather comment, opinion and wishes for the future development of RoMEO as the circumstance of its use changes over time.

Development of publisher contracts

The service originally developed to interpret publishers’ copyright transfer agreements for author self-archiving, and we want this to continue as the core of RoMEO. The system started with a single aim and could interpret, summarise and present information from this single viewpoint. As time has passed, the situation for archiving has grown more complex – and users’ needs have matched this. The growth of “hybrid” options for journals has made a single interpretation of a journal’s copyright contract impossible. Individual funders have come to agreements with some publishers for Open Access publishing, so that Open Access archiving rights (sometimes, but not always) also apply to work they have funded. Sometimes individual publishers have recognised and matched the requirements of some (but often not all) funding agencies, and allow their standard terms to be modified for those funders.

Complexity for Authors

All of this gives a far more complex environment for authors to work in and underlines the need for assistance in guiding authors through their options and responsibilities. It also presents real challenges to RoMEO in providing this. If any service is to be used successfully by end-users, then it has to reflect the users’ needs and fit into their workflow. If one of the current drivers for archiving work is compliance with funders’ mandates, then these need to be represented and permissions summarised.

However, many mandates focus on OA publication rather than archiving. Given the number of funding agencies and the complexity of their requirements (summarised in, and linked to, JULIET) as these apply to every publisher, the original, fairly clear RoMEO interface became quite crowded. Last week’s upgrade has attempted to deal with this by allowing “single funder views” of the data, as it were, but the diversity of possible approaches to the data remains. We are aware that, in practice, archiving in an institutional repository takes its place within a suite of options that needs to be presented with clarity and simplicity.

This is a reflection of a larger picture – how will this look in future? What is being developed in practice within institutions to deal with the requirements of funders, authors and publishers? From a strategic point of view, what can services like RoMEO give in support of wider access to information?


We have also released a survey for OpenDOAR and ROAR. These services, run by the CRC and University of Southampton respectively, share some aspects of work in analysing the world’s repositories, but exist as separate services with individual aims. ROAR has a focus on quantitative and statistical analysis of repositories and their holdings; OpenDOAR has a focus on qualitative analysis and policy and standards development. Each of the services has healthy feedback from its users, but again, we wanted to more formally gather comments from the community on the services as they will be used in a more diverse picture of repositories.

Development of Repository Environment – Full-text holdings

Here too the situation has become more complex over the years that they have been in operation. While the original aim, and distinctive difference, of open access repositories was that anyone could access the full text, for many repositories this has been bypassed by conflicting needs, so that for some the great majority of their content is merely metadata. Many, and probably most, repositories accept metadata-only entries: maybe driven by a concern to display high numbers of records irrespective of full-text links; or because the repository is used for internal purposes that require no more than metadata; or because there is the hope that at some point in the future there will be enough staff resource to chase down the full text.

Whatever the reason, the decision to accept metadata is a significant one. It means that many searches of open access repositories now end in a bibliographic entry with no access to the full-text article, or simply a link to it held on the publisher’s website. For the researcher looking for material, this effectively undercuts the rationale for searching repositories in the first place. It is hard for any advocate to engage researchers with open access as a distinctive and different service when the full-text content is not there.

Having said that, some drivers for the adoption of institutional repositories now seem sufficiently strong (at least, to some institutions) to match the original idea of full-text access. The use of the repository as an enhanced research publications database is one example: others include it as an administrative system for projected REF needs; other requirements may be met by full-text access on-campus, even if restricted off-campus.

While the continued growth of metadata-only records remains a significant challenge for advocates and the future use of the repository network, here too, developments take their place within a wider and more complex environment of different use, structure and purpose of repositories.

Development of Repository Environment – Open Access?

Again, one of the original distinctions was that the repositories should be openly accessible. The fact that many repositories are set up as closed access in some way (registration and password systems, even subscriptions or pay-per-view) but identify themselves as open access was one of the drivers in the establishment of OpenDOAR, with a policy of a human accessing each repository and sampling its holdings to check that what was being claimed was true. Since the start of OpenDOAR we have rejected between 25% and 33% of candidate repositories because they are out of scope: no full text at all, not open, test sites, junk data, etc.

The types of material held in repositories have grown to include research data, learning objects, varieties of grey literature, specialist collections, and others. Some of this content brings with it understandable restrictions on access while at the same time being appropriate for a repository-like collection and (partial) exposure.

Combined with the variety of purposes that repositories are accumulating, this means that repositories of different “flavours” now take their place in a more complex, interesting and ultimately more rewarding environment. While I believe that we cannot afford to lose sight of the key goal of access to full-text research, services offered by OpenDOAR and ROAR (among many others) have to change to reflect this and allow the diverse requirements of the users of repository content, and the diverse basis for repositories, to be reflected in the service they provide.

Future directions through complexity

The level to which this happens with these services is a reflection of the larger question. To what extent should we all continue to press for the original OA vision if this is at the expense of the easy growth of some alternatives (metadata repositories, partial access etc)? Should future development in the field be guided by what has proved popular and practical so far, if this fails to address the original goal of full-text open access and original method and goal of author-engagement and self-archiving? Do we set goals that are the natural extension of what we see developing, or aim for the more robust and clear vision that was articulated in Budapest and elsewhere?

Have your say in how some of the support services in this developing environment will themselves develop. Do contribute to the RoMEO survey and the OpenDOAR and ROAR survey. We will be interested to see your thoughts.

Bill Hubbard

Open Access Week – the challenge from the Wellcome Trust

This week has seen “Open Access Week” with large numbers of events, announcements and similar awareness-raising activities. It’s an excellent indication of the current environment that we can talk about having an open access week — an international open access week — quite seriously and have a sufficiently large number of events and engagement to back up the rhetoric.

JISC has been active in this, being a joint organiser of Open Access Week itself, with many of its projects putting on events or releasing updates, upgrades or announcements. JISC has released a booklet, which makes interesting reading, reviewing its achievements in its continued and long-term support of open access. The whole field has now been going for long enough for developments to be tracked over time. A summary of JISC’s achievements is available online, including the fact that it has been active in this area for over 10 years.

There have been several number-based announcements this week that on reflection are actually quite significant indicators of scale and pace — the University of Salford announcing the world’s 100th open-access mandate; OpenDOAR putting in its 1,500th repository; the fifth birthday of PLoS Medicine — all signs of the scale of open access and further evidence that this is very probably now truly an unstoppable movement.

If this is unstoppable, then whatever the timescale the alarm bell has to ring and businesses (not just publishers — including universities) have to accept that change is inevitable and plan quite carefully to deal with it.

For some years it has been apparent that significant change to traditional publication is coming in some form. Here I am including e-journals as pretty much a translation of traditional publishing into another medium, rather than a true change in product, process or business model: the true change has yet to roll out. Open access is just one thread in a changing environment of business and investment practices, public and academic expectations, and the requirements from other technical and social developments in scholarly communication.

As in any period of rapid evolution, some smaller, fragile players may disappear, often because it is in the nature of small, fragile players to be unstable. Some more major players will survive because their sheer size means that they can take an inefficiency hit during transition, while others will diminish because their size has brought inertia. But whatever the size, businesses will have to respond. In dealing with this larger change, at least there are business models available to help deal with that part of developments which is open access.

Four years ago the Wellcome Trust, after producing a report on open access publishing, introduced the idea that they would pay for open access publication as an additional charge, to give publishers additional income on top of normal subscriptions. This was not simply a reward for offering an open access option, but a deliberate offer to help fund a transition period while publishers experimented with and adopted true open access business models.

So far, evidence of any reduction in serials’ subscription costs as a result of additional open access income has been thin on the ground, with OUP being a notable exception. Publishers say, with some justification, that it can be difficult to balance a true pro-rata reduction in subscriptions against open access income; however, there is an existing and growing expectation among subscribers that change now has to be seen.
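The arithmetic of a pro-rata reduction is itself simple; the difficulty publishers cite lies elsewhere, in forecasting uptake and allocating income across titles. A minimal sketch of the calculation, using entirely hypothetical figures (no publisher’s actual accounts are drawn on here):

```python
def pro_rata_subscription(base_subscription, oa_articles, total_articles):
    """Reduce a journal's subscription price in proportion to the share
    of its articles whose costs were already met by open access fees."""
    oa_share = oa_articles / total_articles
    return base_subscription * (1 - oa_share)

# Hypothetical example: a 1,000-unit subscription for a journal where
# 150 of 600 articles were paid for by open access fees would fall
# to 750 units under a strict pro-rata adjustment.
price = pro_rata_subscription(1000.0, 150, 600)
```

The model itself is not in dispute; the Wellcome position described below amounts to asking publishers to report the inputs (OA uptake) and apply something like this adjustment visibly.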

It is for this reason that I think that one of the most significant developments this week has been a press release from the Wellcome Trust.

In this, Sir Mark Walport, Director of the Wellcome Trust, comments:

“We would like to see a commitment from publishers to show the uptake of their open access option and to adjust their subscription rates to reflect increases in income from open access fees,” says Sir Mark. “Some publishers, for example Oxford University Press, have already done this and we would like to see all publishers behave the same way.”

The fact that this view is now being openly stated – by those that are providing the funding – puts further pressure on the pace of change.

In terms of numbers, some truly significant figures are those from the Houghton Report, showing a financial benefit to the UK overall, simply from greater accessibility to research in the government-funded sector, of an additional £172 million per year. For higher education institutions, a shift from subscription to open access publishing has been identified as offering potential savings of £80 million. This report was produced in January 2009 and, with an openness to match its subject, the model itself was made available for use by anybody who wanted to apply different financial assumptions. To my knowledge, there has still not been a serious challenge to these original estimates.

In the coming squeeze on public finances, which will be deep and last long, it is inevitable that numbers like this will attract attention. It is likely that the change coming down the track will now come very fast and will require businesses on both sides of the equation to be inventive and agile in their response. The Wellcome Trust statement is one that cannot be ignored.


Confederation of OA Repositories

Today I signed JISC up as a founder member of the Confederation of Open Access Repositories, COAR (interim website here). There are members from North America, China and Japan, as well as Europe, so Norbert Lossau and Dale Peters from the DRIVER project, who have done the initial set-up work, are to be congratulated on getting us this far. For the remainder of 2009 you can still join COAR for the very reasonable price of 100 euros. The fees thereafter have not yet been set, but are likely to be higher, especially for members from rich countries.

The aim of COAR is “to enhance and progress the provision, visibility and application of research outputs through global networks of Open Access digital repositories”. This is clearly a key aim of JISC too, and so we are very pleased to be a founding member.

So, why might you want to join, especially when it’s not yet clear exactly what COAR will be doing in practical terms? I think that may be a good reason; early members will have a chance to shape the organisation’s direction and initial objectives. I’ll be honest and say that is one of the reasons that JISC has joined now, apart from strongly supporting the organisation’s aim of course.

To give you a flavour of the anticipated direction, key words in the discussion seemed to be interoperability, raising awareness, promoting OA and repositories, support for the repository community, and working with partners in closely related fields (research management and publishing, to name but two). What that will mean in practical terms, we have yet to see.

There was an extended discussion at the meeting about who can join COAR. If you’re interested, though, I suggest you email Dale Peters.

Neil Jacobs

Using institutional repositories to raise compliance

JISC’s work over the last few years in encouraging the growth of institutional repositories means that the UK now has an impressive and virtually unparalleled infrastructure of institutional repositories, covering almost the entire research base of UK higher education.

Of course the issue which faces us all in this area is one of content. The repositories are there, but the content — at least measured against the potential content — isn’t.

It is therefore an interesting development that, among funder policies requiring deposit, some require deposit in Funder-repositories. While I quite appreciate the political and organisational benefits of having a Funder-based repository, the experience of funder mandates so far is of low compliance. The Wellcome Trust reports a compliance rate of around 36%. Some of this lack of compliance is down to individual authors, and some down to publishers seemingly not fulfilling their contract to deposit in return for their open access publication fee.

The situation that we seem to have, therefore, is of an already existing network of repositories with institutional staff assigned to deal with deposit, but without any overriding incentive for authors to use them: and the development of a complementary network of Funder-repositories, where there is an incentive for authors to deposit, but with no on-site assistance and low compliance.

As I have suggested elsewhere, I think the best solution is to engage with institutional repository managers, who would be able to provide authors with on-the-spot assistance in depositing material, give person-to-person advice on the suitability of various materials for deposit, and, significantly, monitor and facilitate compliance.

Of course, the question is then what do the institutional repository managers — and the institutions — get out of it? This is where the collaborative nature of repository holdings comes in. If funders ask their authors to deposit into the institutional repository, then it is a simple matter for the Funder-based repository to harvest material (metadata and full-text) from the institutional repository.

The advantage of institutional deposit lies in the support and compliance checking that can come from institutional staff, and of course, the author having a “one-stop shop” for deposit. If all funders harvest material from institutional repositories, then the author only has one interface to learn. Where an institution offers mediated deposit, then they do not even have to do this — but can let repository staff deposit on their behalf.

Of course, this then brings benefits for the institution, in that it collects a record of the intellectual output of its own staff in its own repository, which can then be used to drive other services within the institution — the generation of publication records, facilitating collection of material for REF, generating staff-web pages, generating research group web pages, etc.

The fact that the material is open means that harvesting into a Funder-repository is straightforward. Effectively, it means that the institutional repository becomes a personally supported interface or ingest mechanism for the Funder-repository.
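The harvesting step described above would typically run over OAI-PMH, the standard protocol that most institutional repository platforms expose. The repository base URL and the funder set name below are placeholders; the verb and parameters (ListRecords, metadataPrefix, set) are defined by the OAI-PMH 2.0 specification. A minimal sketch of building such a request:

```python
import urllib.parse

def list_records_url(base_url, metadata_prefix="oai_dc", set_spec=None):
    """Build an OAI-PMH ListRecords request URL for a repository.
    ListRecords returns metadata for all items, optionally scoped
    to a named set (e.g. one funder's outputs)."""
    params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
    if set_spec:
        params["set"] = set_spec
    return base_url + "?" + urllib.parse.urlencode(params)

# A funder repository could poll each institutional repository like so
# (base URL and set name are hypothetical):
url = list_records_url("http://eprints.example.ac.uk/cgi/oai2",
                       set_spec="funder:wellcome")
```

In practice a harvester would page through results with the protocol’s resumption tokens and fetch the linked full text separately, but the one-request-per-repository shape is what makes the institutional repository such a cheap ingest route for funders.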

There is the issue that some Funder-repositories may require different metadata fields, or different metadata standards than a typical institutional repository. Again, a Funder-repository might require a particular format of deposit — such as XML.

These are certainly issues to consider, but balanced against the support and compliance which could come from such a system, surely an enhanced institutional deposit mechanism to match funders’ requirements is not beyond joint development?

One possible way forward would be for principal UK funders to agree a joint deposit-requirement and suggest this to be adopted by institutional repositories, in exchange for mandates requiring deposit within institutional repositories.

Bill Hubbard

Students’ use of research content

Students use research outputs: papers, books and so on. However, students are – like many other actual and potential users of research outputs – not familiar with the landscape of academic research, or the ways in which one can discover resources that are useful and relevant. (It is worth noting that the recent PRC report on SMEs’ access to research outputs cites sources suggesting that even professionals working in hi-tech SMEs have the same trouble.)

A recent JISC report by a team at UCLAN led by Stuart Hampton-Reeves sheds some light on how students discover and access research outputs. The report tells us a little more about the “Google Generation”, noting that “most students will go to their library catalogue first, then Google” (and not social networking sites) to discover research, and that they do not generally have a sophisticated understanding of peer review. The report adds to a growing body of evidence, including that from UCL’s Ciber group and an ongoing user observational study, on how we can improve the “discover-ability” and accessibility of research outputs.

It certainly seems that there are ways in which both tutors and libraries might better help students act as “apprentice researchers”, navigating the unfamiliar research landscape more effectively. Perhaps more fundamentally, infrastructure – including repositories – is making it possible to add more research outputs to the open web, which is where these users are.

Cross-linking between repositories

A thread on JISC-Repositories this week has been discussing whether to delete repository records when an academic leaves.  This set me thinking about such policies in general and how the interaction of different policies between repositories may affect access or collections in the long run. It is an example, I think, of the way that institutional repositories work best when seen as a network of interdependent and collaborative nodes that can be driven by their own needs but produce a more general collective system.

Our policy in Nottingham is that we see our repository as a collection of material produced by our staff. It follows that when a member of staff leaves, we do not delete their items: they remain a record of their research production while they were here.

More than that, authors should not expect such deletion even upon their request, except in very unusual circumstances.  If repositories are to be used as trusted sources of information, the stability of the records they hold is very important.

If authors have put material into the repository which includes their “back-catalogue” produced at previous institutions, then that is fine too — we will accept it and keep it. Strictly, they did not produce this material while employed at Nottingham, but if it is not openly accessible elsewhere, why not take it? It might be slightly anomalous to hold this material, but if it opens access to research information, that is the basis of what it’s all about.

I think there is a transition period here, while academics adopt the idea of depositing material. It seems likely that academics will put their back-catalogue to date into the first major repository that they use in earnest, if they have the right versions available. Thereafter, as this material should be kept safe and accessible, they can always link back to it. In other words, once they have deposited their back-catalogue, they are unlikely to want to do it again at every subsequent institution they move to, as long as they know it will be safe and that they can link to it. There is an advocacy theme here: helping researchers understand that repositories are linked, and that the repository – and the repository network – will serve them throughout their career.

For a newly-arrived member of staff with material in a previous institution’s repository, it all depends on the new institution’s collection policy: the institution might prefer them simply to deposit outputs they produce from that time on; to deposit all their own material again; or to create a virtual full record of outputs by copying the metadata and linking back to the full-text in the previous repository(ies). This in turn depends on whether the previous repositories are trusted to match the new institution’s own terms for access and preservation.
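The third option – a virtual full record – can be sketched very simply. This is a minimal illustration only, assuming records are plain dictionaries; the field names and URLs are hypothetical, not any particular repository platform’s schema:

```python
# Sketch of the "virtual full record" option: the new repository keeps its
# own copy of the metadata, but the full-text link continues to point at
# the trusted previous repository.

def make_virtual_record(previous_record, new_institution):
    """Copy metadata locally; leave the full-text URL pointing home."""
    return {
        "title": previous_record["title"],
        "creator": previous_record["creator"],
        "date": previous_record["date"],
        # The full-text stays where it is already openly accessible:
        "fulltext_url": previous_record["fulltext_url"],
        # Note who now also lists the item, for local records:
        "held_by": new_institution,
        "source_repository": previous_record["repository"],
    }

old = {
    "title": "Open access and the research record",
    "creator": "A. Researcher",
    "date": "2008",
    "fulltext_url": "http://eprints.example.ac.uk/123/1/paper.pdf",
    "repository": "eprints.example.ac.uk",
}

new_record = make_virtual_record(old, "University of Nottingham")
# new_record["fulltext_url"] still resolves to the previous repository
```

The point of the sketch is that, as far as the reader is concerned, the “click for full text” link works identically whether the bytes sit locally or in the previous institution’s repository.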

If the material is held in a repository without long-term assurance of durability – perhaps a commercial service – or if the previous repository cannot match the institution’s own level of service, then there would be a rationale for holding a local copy of the full-text. This copy might be held and exposed, or held in reserve in case of external service failure. Otherwise, simply linking back to the full-text held in the previous repository seems most practical if a full record is required.

If the previous repository is trusted to provide the same level of service in access, preservation and stability, then it does not really matter which URL or repository underlies the “click for full text” link. Academics can compile their list of publications drawing from the different institutions at which they have worked; repositories can hold their own copy of metadata records and link to external trusted repositories; and as far as the reader-user is concerned it is still “search for paper — find — click — get full-text paper”.

This kind of pragmatic approach may well mean that some duplicates (metadata record and/or full-text) get into the system by being held at more than one location. Duplication, or close-to-duplication, will have to become a non-issue. I cannot see that duplication can be completely avoided in future: it already happens. As such, handling close and exact duplicates is an issue we cannot avoid and must solve in some way as it inevitably arises. That is not to say that the publisher’s version will automatically become the “official” record in the way that it tends to be used now. We do not know how versions, variants and dynamic developments of papers will be used and regarded by researchers: we are just at the start of a period of change in research communications. Therefore, if a process offers solutions and benefits, the associated risks of duplication are not sufficient to dismiss the process as impractical.
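To show that handling duplicates is tractable rather than a reason to dismiss the approach, here is a minimal sketch of one common technique: grouping records by DOI where one exists, falling back to a normalised title plus year. The record structure is assumed for illustration; real services use more sophisticated matching:

```python
import re

def normalise_title(title):
    """Lower-case and collapse punctuation/whitespace for fuzzy matching."""
    return re.sub(r"[^a-z0-9]+", " ", title.lower()).strip()

def dedupe_key(record):
    """Prefer a DOI where present; otherwise fall back to title + year."""
    if record.get("doi"):
        return ("doi", record["doi"].lower())
    return ("title", normalise_title(record["title"]), record.get("year"))

def group_duplicates(records):
    """Group records that appear to describe the same paper."""
    groups = {}
    for rec in records:
        groups.setdefault(dedupe_key(rec), []).append(rec)
    return list(groups.values())

records = [
    {"title": "Open Access: A Review", "year": "2008",
     "repository": "eprints.nottingham.example.ac.uk"},
    {"title": "Open access - a review", "year": "2008",
     "repository": "eprints.elsewhere.example.ac.uk"},
]
groups = group_duplicates(records)
# the two records from different repositories fall into a single group
```

A harvester or aggregator running something like this can present one logical paper with several locations, which is exactly the behaviour the cross-linking model relies on.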

After all, what is the alternative? If, as repository managers, we start deleting records when folks leave and then have to create, import or ask the academic for a complete set of their outputs when they arrive at a new institution, I think we — and, significantly, the users of open access research — will very quickly get into a situation where we lose track of what is where.

Even if we try to create policies or systems to replace an old link (to the now-deleted full-text) with a new link (to the full-text in the new repository), I cannot see this working seamlessly, and things will get lost. In addition, subsequent moves by the author would create daisy chains of onward references which would be very fragile.

While the use of repository references in the citation of materials is a matter of research practice — and so one for resolution between researchers rather than by ourselves — I don’t think we should deliberately disrupt longer-term references to material. Rather, I would see the system building on existing stable records, with all institutional repositories able to play a part in the system-wide provision of information as stable sources.

Therefore, I would suggest that repositories should continue to hold staff items after they have left, as this helps fulfil their role as institutional assets and records. Repositories can accept an academic’s back-catalogue, even if it was not produced at the institution, as anomalous but in line with our joint overall aim of providing access to research information. Adopting standard practices will help reassure each institution that other repositories can be trusted with access and curation, and will allow stable cross-linking. Once a repository has material openly accessible then, given matching service levels, the whole system supports linking to that material, with additional duplicate copies needed only to meet local requirements. Overall, repositories can follow their institutional self-interest and still create a robust networked system.

Bill Hubbard