Category: Open access

New book imagines the archive of the future

Facet Publishing announce the publication of Archival Futures, edited by Caroline Brown.

It is widely acknowledged that the archival discipline is facing a time of change. The digital world has changed how records are created, used, stored and communicated. At the same time, there is increased public debate over issues such as ownership of and access to information, and its authenticity and reliability in a networked and interconnected world.

Archival Futures draws on the contributions of a range of international experts to consider the current archival landscape and imagine the archive of the future. Firmly rooted in current professional debate and scholarship, the book offers thought-provoking and accessible chapters that aim to challenge and inspire archivists globally and to encourage debate about their futures. Chapters cover the role of archives in relation to individuals, organisations, communities and society; how appraisal, arrangement, description and access might be affected in the future; changing societal expectations in terms of access to information, how information is exchanged, and how things are recorded and remembered; the impact of new technologies, including blockchain and automation; the place of traditional archives and what ‘the archive’ is or might become; the future role of the archive profession; and archives as authentic and reliable evidence.

Tom Nesmith (University of Manitoba) said:

‘Archives play a unique and powerful role in making the past available for an extraordinary array of current purposes. But do archives have a future, particularly given disruptive changes in communication technologies? Archival Futures addresses this and other challenges to find ways forward for the now pivotal role of archives in society.’

The book will appeal to an international audience of students, academics and practitioners in archival science, records management, and library and information science.

Caroline Brown is Programme Leader for the archives programmes at the Centre for Archive and Information Studies at the University of Dundee, where she is also University Archivist. She is Chair of the Archives and Records Association (UK & Ireland)’s Conference Committee, sits on its Professional Development Committee, having formerly served as Chair of its Education, Training and Development Committee, and is a member of the Executive Committee of ARA Scotland. She sits on the Section Bureau of the International Council on Archives Section on Archival Education and is active in ICA/SUV. She is an Arts and Humanities Research Council Peer Review College and Panel Member and has written and spoken on a range of archival and recordkeeping issues. She is the editor of Archives and Recordkeeping: Theory into practice (Facet, 2013).

Contributors
Jenny Bunn, University College London; Luciana Duranti, University of British Columbia; Joanne Evans, Monash University; Craig Gauld, University of Dundee; Victoria Lemieux, University of British Columbia; Michael Moss, Northumbria University; Gillian Oliver, Monash University; Sonia Ranade, The National Archives; Barbara Reed, consultant; Kate Theimer, writer, speaker and commentator; David Thomas, Northumbria University; Frank Upward, Monash University; Geoffrey Yeo, University College London.

 

Stay up-to-date with all the latest books from Facet by signing up to our mailing list


Making the case for open licensing in cultural heritage institutions

Facet Publishing have announced the release of Open Licensing for Cultural Heritage by Gill Hamilton and Fred Saunderson.


In the digital era, libraries, archives, museums and galleries are no longer constrained by the physical limitations of their buildings, analogue books, manuscripts, maps, paintings and artefacts. Cultural collections can now be safely distributed and shared globally. To ensure that the benefits of this ability to share are realised, cultural institutions must endeavour to provide free and open access to their digital collections. The tool for achieving this is open licensing.

Featuring real-world case studies from diverse education and heritage organizations, Open Licensing for Cultural Heritage digs into the concept of ‘open’ in relation to intellectual property. It explores the organizational benefits of open licensing and the open movement, including the importance of content discoverability, arguments for wider collections impact and access, the practical benefits of simplicity and scalability, and more ethical and principled arguments related to the protection of public content and the public domain.

The authors said,

“Openly sharing our knowledge, experience, content and culture for free is not a new concept. Sharing is an innate and natural part of our human character. Forward looking, inclusive, modern, relevant cultural heritage organizations must play a central role in supporting free, open access to culture at a global level. This is possible, practical and achievable with considered and informed application of an open licensing framework. Our book will provide readers with the insight, knowledge, and confidence to make a case for and implement an open licensing approach.”

Gill Hamilton is Digital Access Manager at the National Library of Scotland where she leads on access to the Library’s extensive digital collections, and oversees its resource discovery and library management systems.

Fred Saunderson is the National Library of Scotland’s Intellectual Property Specialist where he has responsibility for providing copyright and intellectual property advice and guidance, as well as coordinating licensing and re-use procedures.

 

Sign up to our mailing list to hear more about new and forthcoming books. Plus, receive an introductory 30% off a book of your choice – just fill in your details below and we’ll be in touch to help you redeem this special discount:*

 

*Offer not available to customers from USA, Canada, Australia, New Zealand, Asia-Pacific

Love Your Data Week Roundup

Last week Facet participated in Love Your Data Week, a 5-day international event to help researchers take better care of their data. We have gathered all the resources we published during the week below.


Image source: data (scrabble) by Flickr user justgrimes

New Open Access chapters

During the week, we made several new chapters from our research data management titles available Open Access. All the chapters can be downloaded below.

Supporting data literacy by Robin Rice and John Southall from The Data Librarian’s Handbook

Training researchers to manage data for better results, re-use and long-term access by Heather Coates from Dynamic Research Support for Academic Libraries

Specific interventions in the research process or lifecycle by Moira J Bent from Practical Tips for Facilitating Research

The lifecycle of data management by Sarah Higgins from Managing Research Data

A pathway to sustainable research data services: from scoping to sustainability by Angus Whyte from Delivering Research Data Management Services

Blogposts from Facet authors

Starr Hoffman explored the difference between research data and secondary data using the speed at which the DeLorean in Back to the Future will time jump as an example in her blogpost, Data Services and Terminology: Research Data versus Secondary Data

Robin Rice and John Southall provided practical advice for data librarians undertaking a reference consultation or interview to match users to the data required in their blogpost, Top tips for a data reference interview

Gillian Oliver talked about practical ways of ensuring you have a successful relationship with data in her blogpost, Five ways to love your data

Angus Whyte looked at what has changed in the world of research data management in the past three years in his blogpost, If data is loved so much, why is so much of it running around loose, dirty and in no fit state to get a job?

Sign up to our mailing list to hear more about new and forthcoming books:

If data is loved so much, why is so much of it running around loose, dirty and in no fit state to get a job?

Guest post by Angus Whyte, co-author of Delivering Research Data Management Services

Librarians have grown to love research data so much they can’t get enough of it! Well some at least have, and Love Your Data Week will help spread the love. Of course nobody loves data more than the researchers who produce it. Funders love it too; after all they pay for it to come into the world. If data is loved so much, why is so much of it running around loose, dirty and in no fit state to get a job? Is all that is needed a little more discipline?


Image source: data (lego) by Flickr user justgrimes

Three years ago when Delivering Research Data Management Services was first published, my co-authors Graham Pryor and Sarah Jones were working with colleagues in the Digital Curation Centre and in universities across the UK to help them get support for research data off the ground and into the roster of institutional service development. At the time, as Graham said in his introduction, institution-wide RDM services had “at last begun to gain a foothold”.

The (now open access) chapter titled “A pathway to sustainable research data services: from scoping to sustainability” described six phases, from envisioning and initiating, through discovering requirements, to design, implementation and evaluation. Across the UK sector as a whole, few institutions had got beyond the discovery phase. Some of the early adopters in the UK, US and Australia have case studies featured in the book, providing more fully-fledged examples of the mix of soft and hard service components that a ‘research data management service’ typically comprises. Broadly, these include support for researchers to produce Data Management Plans, tools and storage infrastructure for managing active data, support for selection and handover to a suitable repository for long-term preservation, and support for others to discover what data the institution has produced.

So what has changed? The last three years have seen evolution, consolidation and growth. According to one recent survey of European academic research libraries, almost all will be offering institutional RDM services within two years.[1] The mantra of FAIR data (findable, accessible, interoperable and reusable) has spurred a flurry of data policy-making by funders, journals and institutions.[2] Many organisations have yet to adopt one, but policy harmonisation is now a more pressing need than formulation. Data repositories have mushroomed, with re3data.org now listing about three times the number it did three years ago. Training materials and courses are becoming pervasive, and data stewardship is increasingly recognised as essential to data science.

The burgeoning development in each of these aspects of RDM does not hide the immaturity of the field; each aspect is the subject of international effort by groups like COAR (Confederation of Open Access Repositories) and the Research Data Alliance to consolidate and codify the organisational and technical knowledge needed to further join up services. European initiatives to establish ‘Research Infrastructures’ have demonstrated how this can be done, at least for some disciplines.

Over the same period, many institutions have learned to love ‘the cloud’, gaining scalability and flexibility by integrating cloud storage and computation services with their IT infrastructure. The same is not yet true of the higher-level RDM services that require academic libraries to collaborate with their IT and research office colleagues. Shared services are a trend that has seen some domain-focused data centres spread their disciplinary wings. Ambitious initiatives like the European Open Science Cloud pilot will tell us how far ‘up the stack’ cloud services to support open science can go to offer better value to science and society.[3]


Image source: 3D Cloud Computing by ccPix.com

The biggest challenges in 2013 are still big challenges now. Political and cultural change is messy, for a number of reasons. There is high-level political will to fund data infrastructure, as it is seen as essential for innovation as well as for research integrity. But the economic understanding to direct resources to where they are most needed, to ensure data is not only loved but properly cared for? That requires a better understanding of what kinds of care produce good outcomes, like citation and reuse. Evaluation studies have been thin on the ground and, perhaps as a result, funding for data infrastructure still tends to be short-term and piecemeal.

The book offers a comprehensive grounding in the issues and sources to follow up. Its basic premise is as true now as when it was published: keeping data requires a mix of generic and domain-specific stewardship competencies, together with organisational commitments and basic infrastructure.  The basic challenge is as true now as then; research domains are fluid and tribal, crossing national and international boundaries and operating to norms that tend to resist institutional containers.  But that has always been the case, and yet institutions and their libraries continue to adapt and survive.

By happy coincidence, the International Digital Curation Conference (IDCC17) is happening the week after Love Your Data Week. You can follow it as it happens on Twitter at #idcc17.

Dr Angus Whyte is a Senior Institutional Support Officer at the Digital Curation Centre, University of Edinburgh. He is responsible for developing online guidance and consultancy for research organisations to support their development of research data services. This is informed by studies of research data practices and stakeholder engagement in research institutions.

[1] Research Data Services in Europe’s Academic Research Libraries by Liber Europe

[2] Wilkinson, M. D., Dumontier, M., Aalbersberg, IJ. J., Appleton, G., Axton, M., Baak, A., et al. (2016). The FAIR Guiding Principles for scientific data management and stewardship. Scientific Data, 3, 160018

[3] European Open Science Cloud pilot


A pathway to sustainable research data services: from scoping to sustainability

As Love Your Data Week draws to a close we’ve got one final open access chapter to share. The chapter, A pathway to sustainable research data services by Angus Whyte, is part of Delivering Research Data Management Services. You can download the chapter here.

For one last chance to win one of our research data management books, share a tweet about why you (or your institution) are participating in Love Your Data Week 2017 using #WhyILYD17. More details about the prize draw are available here.


Data Services and Terminology: Research Data versus Secondary Data

Guest post by Starr Hoffman, editor of Dynamic Research Support for Academic Libraries.

Similar to the confusion between open access and open source, the terms research data and secondary data are sometimes confused in the academic library context. A large source of confusion is that the simple term “data” is used interchangeably for both of these concepts.

What is Research Data?

As research data management (RDM) has become a hot topic in higher education due to grant funding requirements, libraries have become involved. Federal grants now require researchers to include data management plans (DMPs) detailing how they will responsibly 1) make taxpayer-funded research data available to the public via open access (for instance, by depositing it in a repository) and 2) preserve it for the future. Because there are often gaps in campus infrastructure around RDM and open access, many academic libraries have stepped in to provide guidance with writing data management plans, finding appropriate repositories, and other good data management practices.

This pertains to original research data: data collected by the researcher during the course of their research. Research data may be observational (from sensors, etc.), experimental (gene sequences) or derived (data or text mining), among other types, and may take a variety of forms, including spreadsheets, codebooks, lab notebooks, diaries, artifacts, scripts, photos and many others. Data takes many forms not only in different disciplines, but in different methodologies and studies.

Example: Dr. Emmett “Doc” Brown performs a series of experiments in which he notes the exact speed at which a DeLorean will perform a time jump (88 MPH). This set of data is original research data.


Image source: Back to the Future by Graffiti Life, from Flickr user MsSaraKelly

What is Secondary Data?

Secondary data is usually called simply “data” or “datasets.” (For the sake of clarity, I prefer to refer to it as “secondary data.”) Unlike research data, secondary data is data that the researcher did not personally gather or produce during the course of their research. It is pre-existing data on which the researcher will perform their own analysis. Secondary data may be used either to perform original analyses or for replication (studies which follow the exact methodology of a previous study, in order to test the reliability of the results; replication may also be performed by following the same methodology but gathering a new set of original research data). Secondary data can also be joined to additional datasets, including datasets from different sources or joining with original research data.

Example: Let’s say that Marty McFly makes a copy of Doc Brown’s original data and performs a new analysis on it. The new analysis reveals that the DeLorean was only able to time-jump at the speed of 88 MPH due to additional variables (including a power input of 1.21 jigowatts). In this case, the dataset is secondary data.
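Joining a secondary dataset to original research data, as in the Marty McFly example above, is essentially a key-based merge. Here is a minimal, illustrative sketch in plain Python; the field names and values are hypothetical, echoing the Doc Brown example, not drawn from any real dataset:

```python
# Original research data: time-jump trials recorded by the researcher.
original = [
    {"trial": 1, "speed_mph": 85},
    {"trial": 2, "speed_mph": 87},
    {"trial": 3, "speed_mph": 88},
]

# Secondary data from another source, keyed on the same trial identifier
# (here, a hypothetical power-input reading per trial, in gigawatts).
secondary = {1: 1.0, 2: 1.1, 3: 1.21}

# Join: enrich each original record with the matching secondary value.
# Trials with no secondary reading would get None, like a left join.
combined = [
    {**row, "power_gw": secondary.get(row["trial"])} for row in original
]

for row in combined:
    print(row)
```

The same key-based logic underlies tools data librarians are more likely to use in practice, such as a merge in a statistics package or a SQL join; the point is simply that the secondary data must share an identifier with the original data.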

Reuse of Research Data

Another potential point of confusion is that one researcher’s original research data can be another researcher’s secondary data. For instance, in the example above, the same dataset is considered original research data for Doc Brown, but is secondary data for Marty McFly.

Back to the Future

Image source: Back to the Future by Flickr user Garry Knight

Data Services: RDM or Secondary Data?

The phrase “data services” can also be confusing, because it may encompass a variety of services. A potential menu of data services could include:

  • Assistance locating and/or accessing datasets.
    o This might pertain to vendor-provided data collections, consortial collections (such as ICPSR), locally-produced data (in an institutional repository), or publicly-accessible data (such as the U.S. census).
    o Because this service specifically focuses on accessing data, it by default pertains to secondary data.
  • Data management plan (DMP) assistance.
    o Typically only applies to original research data.
  • Data curation and/or RDM services.
    o These may include education on good RDM practices, assistance depositing data into an institutional repository (IR), assistance (or full-service) creating descriptive or other metadata, and more.
    o Typically only provided for original research data. However, if transformative work has been done to a secondary dataset (such as merging with additional datasets or transforming variables), data curation / RDM may be necessary.
  • Assistance with data analysis.
    o This service is more often provided for students than for faculty, but may include both groups.
    o Services may include providing analysis software, software support, methodological support, and/or analytical support.
    o May include support for both original research data and secondary data.

You Say “Data Are,” I Say “Data Is” …Let’s Not Call the Whole Thing Off!

So in the end, why does all this matter? The primary takeaway is to be clear, particularly when communicating about services the library will or won’t provide, about specific types of data. In many cases this will be obvious; for instance, “RDM” contains within it the term “research data” and is thus clear. Less clear is when a library department decides to provide “assistance with data”. What does this mean? What kind of assistance, and for what kind of data? Is the goal of the service to support good management of original research data? Or is the goal to support the finding and analysis of secondary data that the library has purchased? Or another goal altogether?

Clarity is key both to understanding each other and to clearly communicating emerging services to our researchers.

Starr Hoffman is Head of Planning and Assessment at the University of Nevada, Las Vegas, where she assesses many activities, including the library’s support for and impact on research. Previously she supported data-intensive research as the Journalism and Digital Resources Librarian at Columbia University in New York. Her research interests include the impact of academic libraries on students and faculty, the role of libraries in higher education and models of effective academic leadership. She is the editor of Dynamic Research Support for Academic Libraries. When she’s not researching, she’s taking photographs and travelling the world.


Specific interventions in the research process or lifecycle

It’s Love Your Data Week 2017 and today we have made Section 8 from Moira Bent’s Practical Tips for Facilitating Research available open access. A PDF of the Section, Specific interventions in the research process or lifecycle, can be downloaded here.

We will be releasing more Open Access chapters throughout Love Your Data Week and publishing blogposts from our authors. For a chance to win one of our research data management books, share a tweet about why you (or your institution) are participating in Love Your Data Week 2017 using #WhyILYD17. More details about the prize draw are available here.
