Category: Preservation

How can heritage institutions work with their communities to build broader, more inclusive and culturally relevant collections?

Facet Publishing have announced the release of Participatory Heritage, edited by Henriette Roued-Cunliffe and Andrea Copeland.

The internet as a platform for facilitating human organization without the need for organizations has, through social media, created new challenges for cultural heritage institutions. These challenges include, but are not limited to: how to manage copyright, ownership, orphan works, open data access to heritage representations and artefacts, crowdsourcing, cultural heritage amateurs, information as a commodity versus information as public domain, sustainable preservation, and attitudes towards openness.

Participatory Heritage uses a selection of international case studies to explore these issues. It demonstrates that in order for personal and community-based documentation and artefacts to be preserved and included in social and collective histories, individuals and community groups need the technical and knowledge infrastructures of support that formal cultural institutions can provide. In other words, both groups need each other.

The editors said, “It is our hope that this book will help information and heritage professionals learn from others who are engaging with participatory heritage communities”.

Henriette Roued-Cunliffe, DPhil is an Assistant Professor at the Royal School of Library and Information Science, University of Copenhagen, Denmark. She teaches and researches heritage data and information, and in particular how DIY culture is engaging with cultural heritage online and often outside of institutions. Her website is: roued.com.

Andrea Copeland is an Associate Professor in the Department of Library and Information Science in the School of Informatics and Computing at Indiana University, Indianapolis. Her research focus is public libraries and their relationship with communities, with a current emphasis on connecting the cultural outputs of individuals and community groups to a sustainable preservation infrastructure.

If data is loved so much, why is so much of it running around loose, dirty and in no fit state to get a job?

Guest post by Angus Whyte, co-author of Delivering Research Data Management Services

Librarians have grown to love research data so much they can’t get enough of it! Well, some at least have, and Love Your Data Week will help spread the love. Of course nobody loves data more than the researchers who produce it. Funders love it too; after all, they pay for it to come into the world. So if data is loved so much, why is so much of it running around loose, dirty and in no fit state to get a job? Is all that is needed a little more discipline?


Image source: data (lego) by Flickr user justgrimes

Three years ago when Delivering Research Data Management Services was first published, my co-authors Graham Pryor and Sarah Jones were working with colleagues in the Digital Curation Centre and in universities across the UK to help them get support for research data off the ground and into the roster of institutional service development. At the time, as Graham said in his introduction, institution-wide RDM services had “at last begun to gain a foothold”.

The (now open access) chapter titled “A pathway to sustainable research data services: from scoping to sustainability” described six phases, from envisioning and initiating, through discovering requirements, to design, implementation and evaluation. Across the UK sector as a whole, few institutions had got beyond the discovery phase. Some of the early adopters in the UK, US and Australia have case studies featured in the book, providing more fully fledged examples of the mix of soft and hard service components that a ‘research data management service’ typically comprises. Broadly, these include support for researchers to produce Data Management Plans, tools and storage infrastructure for managing active data, support for selection and handover to a suitable repository for long-term preservation, and support for others to discover what data the institution has produced.

So what has changed? The last three years have seen evolution, consolidation and growth. According to one recent survey of European academic research libraries, almost all will be offering institutional RDM services within two years.[1] The mantra of FAIR data (findable, accessible, interoperable and reusable) has spurred a flurry of data policy-making by funders, journals and institutions.[2] Many organisations have yet to adopt a policy, but policy harmonisation is now a more pressing need than formulation. Data repositories have mushroomed, with re3data.org now listing about three times the number it did three years ago. Training materials and courses are becoming pervasive, and data stewardship is increasingly recognised as essential to data science.

The burgeoning development in each of these aspects of RDM does not hide the immaturity of the field; each aspect is the subject of international efforts by groups like COAR (Confederation of Open Access Repositories) and the Research Data Alliance to consolidate and codify the organisational and technical knowledge needed to further join up services. European initiatives to establish ‘Research Infrastructures’ have demonstrated how this can be done, at least for some disciplines.

Over the same period, many institutions have learned to love ‘the cloud’, gaining scalability and flexibility by integrating cloud storage and computation services with their IT infrastructure. The same is not yet true of the higher-level RDM services that require academic libraries to collaborate with their IT and research office colleagues. Shared services are a trend that has seen some domain-focused data centres spread their disciplinary wings. Ambitious initiatives like the European Open Science Cloud pilot will tell us how far ‘up the stack’ cloud services to support open science can go to offer better value to science and society.[3]


Image source: 3D Cloud Computing by ccPix.com

The biggest challenges in 2013 are still big challenges now. Political and cultural change is messy, for a number of reasons. There is high-level political will to fund data infrastructure, as it is seen as essential for innovation as well as for research integrity. But the economic understanding to direct resources to where they are most needed, to ensure data is not only loved but properly cared for? That requires a better understanding of what kinds of care produce good outcomes, such as citation and reuse. Evaluation studies have been thin on the ground and, perhaps as a result, funding for data infrastructure still tends to be short-term and piecemeal.

The book offers a comprehensive grounding in the issues and sources to follow up. Its basic premise is as true now as when it was published: keeping data requires a mix of generic and domain-specific stewardship competencies, together with organisational commitments and basic infrastructure. The basic challenge is also as true now as then: research domains are fluid and tribal, crossing national and international boundaries and operating to norms that tend to resist institutional containers. But that has always been the case, and yet institutions and their libraries continue to adapt and survive.

By happy coincidence, the International Digital Curation Conference (IDCC17) is happening the week after Love Your Data Week. You can follow it as it happens on Twitter at #idcc17.

Dr Angus Whyte is a Senior Institutional Support Officer at the Digital Curation Centre, University of Edinburgh. He is responsible for developing online guidance and consultancy to research organisations, to support their development of research data services.  This is informed by studies of research data practices and stakeholder engagement in research institutions.

[1] Research Data Services in Europe’s Academic Research Libraries, a survey by LIBER (Association of European Research Libraries)

[2] Wilkinson, M. D., Dumontier, M., Aalbersberg, I. J., Appleton, G., Axton, M., Baak, A., et al. (2016). The FAIR Guiding Principles for scientific data management and stewardship. Scientific Data, 3, 160018

[3] European Open Science Cloud pilot

Sign up to our mailing list to hear more about new and forthcoming books:

Data Services and Terminology: Research Data versus Secondary Data

Guest post by Starr Hoffman, editor of Dynamic Research Support for Academic Libraries.

Much as open access is sometimes confused with open source, the terms research data and secondary data are sometimes confused in the academic library context. A large source of confusion is that the simple term “data” is used interchangeably for both of these concepts.

What is Research Data?

As research data management (RDM) has become a hot topic in higher education due to grant funding requirements, libraries have become involved. Federal grants now require researchers to include data management plans (DMPs) detailing how they will responsibly 1) make taxpayer-funded research data available to the public via open access (for instance, by depositing it in a repository) and 2) preserve it for the future. Because there are often gaps in campus infrastructure around RDM and open access, many academic libraries have stepped in to provide guidance with writing data management plans, finding appropriate repositories, and other good data management practices.

This pertains to original research data–that is, data that is collected by the researcher during the course of their research. Research data may be observational (from sensors, etc.), experimental (gene sequences), or derived (data or text mining), among other types, and may take a variety of forms, including spreadsheets, codebooks, lab notebooks, diaries, artifacts, scripts, photos, and many others. Data takes many forms not only in different disciplines, but in different methodologies and studies.

Example: Dr. Emmett “Doc” Brown performs a series of experiments in which he notes the exact speed at which a DeLorean will perform a time jump (88 MPH). This set of data is original research data.


Image source: Back to the Future by Graffiti Life, from Flickr user MsSaraKelly

What is Secondary Data?

Secondary data is usually called simply “data” or “datasets.” (For the sake of clarity, I prefer to refer to it as “secondary data.”) Unlike research data, secondary data is data that the researcher did not personally gather or produce during the course of their research. It is pre-existing data on which the researcher will perform their own analysis. Secondary data may be used either for original analyses or for replication (studies which follow the exact methodology of a previous study in order to test the reliability of its results; replication may also be performed by following the same methodology but gathering a new set of original research data). Secondary data can also be joined to additional datasets, whether datasets from different sources or the researcher’s own original research data.
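To make the joining idea concrete, here is a minimal sketch in Python with pandas, using invented values based on the post’s DeLorean example (the column names, values and the derived `jump_capable` variable are all hypothetical):

```python
import pandas as pd

# Secondary data: Doc Brown's trial results, which the new analyst
# did not collect themselves (hypothetical values)
trials = pd.DataFrame({
    "trial_id": [1, 2, 3],
    "speed_mph": [85, 88, 90],
})

# An additional dataset from a different source: power input per trial
power = pd.DataFrame({
    "trial_id": [1, 2, 3],
    "power_gw": [1.00, 1.21, 1.21],
})

# Join the two sources on a shared key, then derive a new variable
merged = trials.merge(power, on="trial_id", how="inner")
merged["jump_capable"] = (merged["speed_mph"] >= 88) & (merged["power_gw"] >= 1.21)

print(merged[["trial_id", "jump_capable"]])
```

Transformations like this (merging sources, deriving variables) are how a secondary dataset becomes, in effect, a new dataset in its own right.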

Example: Let’s say that Marty McFly makes a copy of Doc Brown’s original data and performs a new analysis on it. The new analysis reveals that the DeLorean was only able to time-jump at the speed of 88 MPH due to additional variables (including a power input of 1.21 jigowatts). In this case, the dataset is secondary data.

Reuse of Research Data

Another potential point of confusion is that one researcher’s original research data can be another researcher’s secondary data. For instance, in the example above, the same dataset is considered original research data for Doc Brown, but is secondary data for Marty McFly.


Image source: Back to the Future by Flickr user Garry Knight

Data Services: RDM or Secondary Data?

The phrase “data services” can also be confusing, because it may encompass a variety of services. A potential menu of data services could include:

  • Assistance locating and/or accessing datasets.
    o This might pertain to vendor-provided data collections, consortial collections (such as ICPSR), locally-produced data (in an institutional repository), or publicly-accessible data (such as the U.S. census).
    o Because this service specifically focuses on accessing data, it by default pertains to secondary data.
  • Data management plan (DMP) assistance.
    o Typically only applies to original research data.
  • Data curation and/or RDM services.
    o These may include education on good RDM practices, assistance with depositing data into an institutional repository (IR), assistance with (or full-service) creation of descriptive or other metadata, and more.
    o Typically only provided for original research data. However, if transformative work has been done to a secondary dataset (such as merging with additional datasets or transforming variables), data curation / RDM may be necessary.
  • Assistance with data analysis.
    o This service is more often provided for students than for faculty, but may include both groups.
    o Services may include providing analysis software, software support, methodological support, and/or analytical support.
    o May include support for both original research data and secondary data.

You Say “Data Are,” I Say “Data Is” …Let’s Not Call the Whole Thing Off!

So in the end, what does all this matter? The primary takeaway is to be clear, particularly when communicating about services the library will or won’t provide, about specific types of data. In many cases this will be obvious–for instance, “RDM” contains within it the term “research data” and is thus clear. Less clear is when a library department decides to provide “assistance with data.” What does this mean? What kind of assistance, and for what kind of data? Is the goal of the service to support good management of original research data? Or is the goal to support the finding and analysis of secondary data that the library has purchased? Or another goal altogether?

Clarity is key both to understanding each other and to clearly communicating emerging services to our researchers.

Starr Hoffman is Head of Planning and Assessment at the University of Nevada, Las Vegas, where she assesses many activities, including the library’s support for and impact on research. Previously she supported data-intensive research as the Journalism and Digital Resources Librarian at Columbia University in New York. Her research interests include the impact of academic libraries on students and faculty, the role of libraries in higher education and models of effective academic leadership. She is the editor of Dynamic Research Support for Academic Libraries. When she’s not researching, she’s taking photographs and travelling the world.

Sign up to our mailing list to hear more about our books:

Preserving our Heritage wins SAA Preservation Publication Award

Preserving our Heritage: Perspectives from antiquity to the digital age by Michele V Cloonan is the recipient of the 2016 Society of American Archivists’ Preservation Publication Award.

The book offers a unique compilation of key texts from a range of international contributors, charting the development of preservation from its origins to modern-day practice, and provides an overview of the longevity, reversibility, enduring value and authenticity of information preservation.

The Awards Committee said, “Preserving Our Heritage is undeniably a monumental achievement and a welcome contribution to the bookshelves of preservation professionals everywhere”.

Established in 1993, the SAA Preservation Publication Award recognises and acknowledges the author or editor of an outstanding published work related to archives preservation and, through this acknowledgement, encourages outstanding achievement by others.

Find out more about the book on the Facet Publishing website.

New edited collection on managing digital cultural objects

Facet Publishing have announced the publication of Managing Digital Cultural Objects: Analysis, discovery and retrieval, edited by Allen Foster and Pauline Rafferty, both at Aberystwyth University.

The book explores the analysis and interpretation, discovery and retrieval of a variety of non-textual objects, including image, music and moving image.

Bringing together chapters written by leading experts in the field, the first part of the book provides an overview of the theoretical and academic aspects of digital cultural documentation and considers both technical and strategic issues relating to cultural heritage projects, digital asset management and sustainability. The second part includes contributions from practitioners in the field, focusing on case studies from libraries, archives and museums. The third and final part considers social networking and digital cultural objects.

Managing Digital Cultural Objects: Analysis, discovery and retrieval draws on disciplines including information retrieval, library and information science (LIS), digital preservation, digital humanities, cultural theory, digital media studies and art history. The book argues that this multidisciplinary and interdisciplinary approach is both necessary and useful in the age of the ubiquitous and mobile web.

Key topics covered include:

  • Managing, searching and finding digital cultural objects
  • Data modelling for analysis, discovery and retrieval
  • Social media data as a historical source
  • Visual digital humanities
  • Digital preservation of audio content
  • Photos on social networking sites
  • Searching and creating affinities in web music collections
  • Film retrieval on the web.

The book will provide inspiration for students seeking to develop creative and innovative research projects at Masters and PhD levels and will be essential reading for those studying digital cultural object management. Equally, it should serve practitioners in the field who wish to create and develop innovative, creative and exciting projects in the future.

About the editors:

Allen Foster has a BA in Social History, a Master’s in Information Management and a PhD in Information Science. As Reader in Information Science, he has held various roles, including Head of Department for Information Studies, at Aberystwyth University. His research interests span the research process of Master’s and PhD students, the development of models for information behaviour and serendipity, and user experience of information systems, creativity and information retrieval. He has guest edited several journal special issues, is a regional editor for The Electronic Library and is a member of journal editorial boards, international panels and conference committees.

Dr Pauline Rafferty MA(Hons) MSc MCLIP is a Senior Lecturer and Director of Teaching and Learning at the Department of Information Studies, Aberystwyth University. She previously taught at the Department of Information Science, City University London, and in the School of Information Studies and Department of Media and Communication at the University of Central England, Birmingham.

Contributors:

Sarah Higgins, Aberystwyth University

Katrin Weller, GESIS Leibniz Institute for the Social Sciences

Hannah Dee, Aberystwyth University

Lorna Hughes, University of Glasgow

Lloyd Roderick, Aberystwyth University

Alexander Brown,  Aberystwyth University

Maureen Pennock, British Library

Michael Day, British Library

Will Prentice, British Library

Corinne Jörgensen, Florida State University (Emeritus)

Nicola Orio, University of Padua

Kathryn La Barre, University of Illinois at Urbana-Champaign

Rosa Inês de Novais Cordeiro, Federal Fluminense University, Rio de Janeiro

Long-awaited new edition of the Directory of Rare Book and Special Collections now available

Facet Publishing have announced the release of the 3rd edition of the Directory of Rare Book and Special Collections in the United Kingdom and the Republic of Ireland.

The Directory is the only publication to bring together rare book and special collections from all kinds of libraries across the UK and Ireland and is an essential research tool for researchers and librarians throughout the world.

Fully updated since the second edition was published in 1997, this comprehensive and up-to-date guide encompasses collections held in national libraries, academic libraries,  public libraries, subscription libraries, clergy libraries, libraries for other professions, school libraries, companies, London clubs, museums and archives, and libraries in stately homes.

Richard Ovenden, Bodley’s Librarian at the University of Oxford said, “The new edition is a long-awaited reference work which will help researchers identify the UK and Republic of Ireland’s great collections of research materials. It provides detailed and authoritative information and is a must for all serious researchers.”

Edited by Karen Attar, Curator of Rare Books and University Art at Senate House Library, the Directory:

  • contains a national, cross-sectoral overview of rare book and special collections
  • offers full contact details, and descriptions of rare book and named special collections including quantities and particular subject and language strengths
  • provides a quick and easy summary of individual libraries’ holdings
  • directs researchers to the libraries most relevant for them
  • assists libraries to evaluate their special collections according to a ‘unique and distinctive’ model
  • enables libraries to make informed decisions about acquisition and collaboration
  • helps booksellers and donors to target offers.

David Prosser, Executive Director of Research Libraries UK said, “Together, institutions in the UK and Ireland hold unrivalled special collections.  From our great National Libraries, through university collections to the smaller collections of specialist societies, cathedrals, historic homes, and museums we have a centuries-old tradition of collecting, preserving and giving access.  Scholars from around the world and across disciplinary differences rely on the treasures held by libraries listed in the Directory to pursue their research and help us make sense of the world in which we live.”

Should we equate preservation of cultural heritage with human rights?


Michele Cloonan, Dean Emerita and Professor at the Graduate School of Library & Information Science, Simmons College, and editor of Preserving our Heritage: Perspectives from antiquity to the digital age, writes about the destruction of cultural heritage in a new blog for CILIP. An extract is below:

While most of us don’t equate preservation with human rights, the relationship has been touched on since at least the nineteenth century, although the destruction of cultural heritage has taken place for as long as there has been heritage. In the nineteenth century the concept of human rights was considered in the context of war. The Swiss businessman and reformer Henri Dunant was an organiser of the First Geneva Conference for the Amelioration of the Condition of the Wounded Armies in the Field (1863-64) and a founder of the Red Cross (see his A Memory of Solferino [Geneva, Switzerland: International Committee of the Red Cross, 1986]).

At just about the same time as these activities were taking place in Europe, Francis Lieber, a German jurist who settled in the United States, prepared for the Union Army General Orders No. 100: Instructions for the Government of the Armies of the United States in the Field, better known as the Lieber Code; it established rules for the humane treatment of civilians in areas of conflict and forbade the execution of prisoners of war. Further, it sought the protection of works of art, scientific collections and hospitals in war-torn areas. These ideas were further developed in the Hague Peace Conferences of 1899 and 1907 and in the later Hague Convention for the Protection of Cultural Property in the Event of Armed Conflict (1954, and the 1999 Second Protocol). Excerpts of these codes, conventions and protocols are included in chapter 9 of my book Preserving Our Heritage: Perspectives from antiquity to the digital age (London: Facet, 2015).

Read the full blog on the CILIP website.