
Making the case for open licensing in cultural heritage institutions

Facet Publishing have announced the release of Open Licensing for Cultural Heritage by Gill Hamilton and Fred Saunderson.


In the digital era, libraries, archives, museums and galleries are no longer constrained by the physical limitations of their buildings or of their analogue books, manuscripts, maps, paintings and artefacts. Cultural collections can now be safely distributed and shared globally. To ensure that the benefits of this ability to share are realised, cultural institutions must endeavour to provide free and open access to their digital collections. The tool for achieving this is open licensing.

Featuring real-world case studies from diverse education and heritage organizations, Open Licensing for Cultural Heritage digs into the concept of ‘open’ in relation to intellectual property. It explores the organizational benefits of open licensing and the open movement, including the importance of content discoverability, arguments for wider collections impact and access, the practical benefits of simplicity and scalability, and more ethical and principled arguments related to the protection of public content and the public domain.

The authors said,

“Openly sharing our knowledge, experience, content and culture for free is not a new concept. Sharing is an innate and natural part of our human character. Forward looking, inclusive, modern, relevant cultural heritage organizations must play a central role in supporting free, open access to culture at a global level. This is possible, practical and achievable with considered and informed application of an open licensing framework. Our book will provide readers with the insight, knowledge, and confidence to make a case for and implement an open licensing approach.”

Gill Hamilton is Digital Access Manager at the National Library of Scotland where she leads on access to the Library’s extensive digital collections, and oversees its resource discovery and library management systems.

Fred Saunderson is the National Library of Scotland’s Intellectual Property Specialist where he has responsibility for providing copyright and intellectual property advice and guidance, as well as coordinating licensing and re-use procedures.

 

Sign up to our mailing list to hear more about new and forthcoming books. Plus, receive an introductory 30% off a book of your choice – just fill in your details below and we’ll be in touch to help you redeem this special discount:*

 

*Offer not available to customers from USA, Canada, Australia, New Zealand, Asia-Pacific


Cutting through the complexity of electronic resources management

Facet Publishing have announced the release of Alana Verminski and Kelly Marie Blanchat’s Fundamentals of Electronic Resources Management.

Electronic and digital resources are dynamic and ever-changing and there is an increasing demand for competent professionals to manage them. Fundamentals of Electronic Resources Management cuts through the complexity and serves as an invaluable introduction to those entering the field as well as a ready reference guide for current practitioners.

The authors said, “Electronic resources are a reality for libraries today, yet information professionals both experienced and new to the field face a steep learning curve when keeping up with the ever-evolving world of electronic resources management. This book aims to provide hands-on tools that can be used on the job, from beginning to end of the electronic resources lifecycle. It also provides information about the current marketplace and industry practices, putting the work of libraries into context with their external partners.”

The book covers:

  • the full range of purchasing options, from unbundling package subscriptions to pay per view
  • evaluating both new content and current resources
  • common clauses in licensing agreements and what they mean
  • selecting and managing open access resources
  • understanding methods of e-resources access authentication
  • using a triage approach to troubleshoot electronic resources access issues
  • the basic principles of usage statistics, and ways to use COUNTER reports when evaluating renewals
  • tips for activating targets in a knowledge base
  • marketing tools and techniques
  • clear explanations of jargon, important terms, and acronyms.

Alana Verminski is the collection development librarian at the Bailey/Howe Library at the University of Vermont.

Kelly Marie Blanchat is the electronic resources support librarian at Yale University Library.


Love Your Data Week Roundup

Last week Facet participated in Love Your Data Week, a 5-day international event to help researchers take better care of their data. We have gathered all the resources we published during the week below.


Image source: data (scrabble) by Flickr user justgrimes

New Open Access chapters

During the week, we made several new chapters from our research data management titles available Open Access. All the chapters can be downloaded below.

Supporting data literacy by Robin Rice and John Southall from The Data Librarian’s Handbook

Training researchers to manage data for better results, re-use and long-term access by Heather Coates from Dynamic Research Support for Academic Libraries

Specific interventions in the research process or lifecycle by Moira J Bent from Practical Tips for Facilitating Research

The lifecycle of data management by Sarah Higgins from Managing Research Data

A pathway to sustainable research data services: from scoping to sustainability by Angus Whyte from Delivering Research Data Management Services

Blogposts from Facet authors

Starr Hoffman explored the difference between research data and secondary data using the speed at which the DeLorean in Back to the Future will time jump as an example in her blogpost, Data Services and Terminology: Research Data versus Secondary Data

Robin Rice and John Southall provided practical advice for data librarians undertaking a reference consultation or interview to match users to the data required in their blogpost, Top tips for a data reference interview

Gillian Oliver talked about practical ways of ensuring you have a successful relationship with data in her blogpost, Five ways to love your data

Angus Whyte looked at what has changed in the world of research data management in the past three years in his blogpost, If data is loved so much, why is so much of it running around loose, dirty and in no fit state to get a job?

 


Data Services and Terminology: Research Data versus Secondary Data

Guest post by Starr Hoffman, editor of Dynamic Research Support for Academic Libraries.

Much as open access is sometimes confused with open source, the terms research data and secondary data are sometimes confused in the academic library context. A large source of confusion is that the simple term “data” is used interchangeably for both of these concepts.

What is Research Data?

As research data management (RDM) has become a hot topic in higher education due to grant funding requirements, libraries have become involved. Federal grants now require researchers to include data management plans (DMPs) detailing how they will responsibly 1) make taxpayer-funded research data available to the public via open access (for instance, by depositing it in a repository) and 2) preserve it for the future. Because there are often gaps in campus infrastructure around RDM and open access, many academic libraries have stepped in to provide guidance on writing data management plans, finding appropriate repositories, and other good data management practices.

This pertains to original research data – that is, data collected by the researcher during the course of their research. Research data may be observational (from sensors, etc.), experimental (gene sequences) or derived (data or text mining), among other types, and may take a variety of forms, including spreadsheets, codebooks, lab notebooks, diaries, artifacts, scripts, photos, and many others. Data takes many forms not only in different disciplines, but also in different methodologies and studies.

Example: For instance, Dr. Emmett “Doc” Brown performs a series of experiments in which he notes the exact speed at which a DeLorean will perform a time jump (88 MPH). This set of data is original research data.


Image source: Back to the Future by Graffiti Life from Flickr user MsSaraKelly

What is Secondary Data?

Secondary data is usually called simply “data” or “datasets.” (For the sake of clarity, I prefer to refer to it as “secondary data.”) Unlike research data, secondary data is data that the researcher did not personally gather or produce during the course of their research. It is pre-existing data on which the researcher will perform their own analysis. Secondary data may be used either to perform original analyses or for replication (studies which follow the exact methodology of a previous study in order to test the reliability of the results; replication may also be performed by following the same methodology but gathering a new set of original research data). Secondary data can also be joined to additional datasets, whether from different sources or from the researcher’s own original research data.

Example: Let’s say that Marty McFly makes a copy of Doc Brown’s original data and performs a new analysis on it. The new analysis reveals that the DeLorean was only able to time-jump at the speed of 88 MPH due to additional variables (including a power input of 1.21 jigowatts). In this case, the dataset is secondary data.
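To make the contrast concrete in code, here is a minimal sketch in Python using pandas. The trial values, column names and the power dataset are hypothetical illustrations, not taken from the post: Doc Brown’s measurements are original research data, while for Marty the same table is secondary data that he joins to an additional dataset for a new analysis.

import pandas as pd

# Doc Brown's original research data: observed DeLorean trials (hypothetical values)
original = pd.DataFrame({
    "trial": [1, 2, 3],
    "speed_mph": [85, 87, 88],
    "time_jump": [False, False, True],
})

# An additional dataset Marty obtains from another source: power input per trial (hypothetical)
power = pd.DataFrame({
    "trial": [1, 2, 3],
    "power_gigawatts": [1.00, 1.10, 1.21],
})

# For Marty, `original` is secondary data: he did not collect it himself.
# Joining it to the power dataset supports a new analysis of which variables
# are associated with a successful time jump.
secondary_analysis = original.merge(power, on="trial")
print(secondary_analysis[secondary_analysis["time_jump"]])

Whether the table counts as research data or secondary data depends on who is doing the analysis, which is the reuse point picked up below.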

Reuse of Research Data

Another potential point of confusion is that one researcher’s original research data can be another researcher’s secondary data. For instance, in the example above, the same dataset is considered original research data for Doc Brown, but is secondary data for Marty McFly.


Image source: Back to the Future by Flickr user Garry Knight

Data Services: RDM or Secondary Data?

The phrase “data services” can also be confusing, because it may encompass a variety of services. A potential menu of data services could include:

  • Assistance locating and/or accessing datasets.
    o This might pertain to vendor-provided data collections, consortial collections (such as ICPSR), locally produced data (in an institutional repository), or publicly accessible data (such as the U.S. census).
    o Because this service specifically focuses on accessing data, it by default pertains to secondary data.
  • Data management plan (DMP) assistance.
    o Typically only applies to original research data.
  • Data curation and/or RDM services.
    o These may include education on good RDM practices, assistance depositing data into an institutional repository (IR), assistance (or full-service) creating descriptive or other metadata, and more.
    o Typically only provided for original research data. However, if transformative work has been done to a secondary dataset (such as merging with additional datasets or transforming variables), data curation / RDM may be necessary.
  • Assistance with data analysis.
    o This service is more often provided for students than for faculty, but may include both groups.
    o Services may include providing analysis software, software support, methodological support, and/or analytical support.
    o May include support for both original research data and secondary data.

You Say “Data Are,” I Say “Data Is” …Let’s Not Call the Whole Thing Off!

So in the end, what does all this matter? The primary takeaway is to be clear, particularly when communicating about services the library will or won’t provide, about specific types of data. In many cases this will be obvious–for instance, “RDM” contains within it the term “research data” and is thus clear. Less clear is when a library department decides to provide “assistance with data.” What does this mean? What kind of assistance, and for what kind of data? Is the goal of the service to support good management of original research data? Or is the goal to support the finding and analysis of secondary data that the library has purchased? Or another goal altogether?

Clarity is key both to understanding each other and to clearly communicating emerging services to our researchers.

Starr Hoffman is Head of Planning and Assessment at the University of Nevada, Las Vegas, where she assesses many activities, including the library’s support for and impact on research. Previously she supported data-intensive research as the Journalism and Digital Resources Librarian at Columbia University in New York. Her research interests include the impact of academic libraries on students and faculty, the role of libraries in higher education and models of effective academic leadership. She is the editor of Dynamic Research Support for Academic Libraries. When she’s not researching, she’s taking photographs and travelling the world.


Practical guidance on using altmetrics to measure, share, connect and communicate research

Facet Publishing have announced the release of Altmetrics: A practical guide for librarians, researchers and academics, edited by Andy Tattersall.

This new book brings together experts in their fields to guide readers through the practical and technical aspects of altmetrics.

Altmetrics focuses on research artefact-level metrics that are not exclusive to traditional journal papers but also extend to book chapters, posters and data sets, among other items. It offers additional indicators of attention, review and impact that add highly responsive layers to the slower-to-accrue traditional research metrics.

Contributed to by leading altmetrics innovators including Euan Adie, founder and CEO of Altmetric.com, William Gunn, Head of Academic Outreach at Mendeley, and Ben Showers, author of the bestselling Library Analytics and Metrics, the book details the methods that can be employed to reach different audiences, even with only minimal resources.

The book explains the theory behind altmetrics and provides practical advice on using the increasing number of tools available for librarians and researchers to measure, share, connect and communicate research, including Academia.edu, Facebook, Mendeley, ResearchGate, Twitter, LinkedIn, YouTube, Figshare, Altmetric.com, SlideShare, Kudos and many more.

Anyone wanting to understand altmetrics and encourage others to use it will find this book essential reading, including library and information professionals working in higher education, researchers, academics, and higher education leaders and strategists.

The editor, Andy Tattersall, has also made a video to complement the book which provides a whistle-stop tour of altmetrics and associated tools. The video can be viewed here.

About the authors:
Andy Tattersall (editor) writes and gives talks about digital academia, learning technology, scholarly communications, open research, web tools, altmetrics and social media – in particular, their application for research, teaching, learning, knowledge management and collaboration. He is very interested in how we manage information and how information overload affects our professional and personal lives. His teaching interests lie in encouraging colleagues and students to use the many tools and technologies (quite often freely available) to aid them in carrying out research and collaboration within the academic and clinical setting. He is Secretary for the Chartered Institute of Library and Information Professionals – Multi Media and Information Technology Committee.

Euan Adie is founder and CEO of Altmetric.com, which supplies altmetrics data to funders, universities and publishers. Originally a computational biologist at the University of Edinburgh, in 2005 Euan developed postgenomic.com, which aggregated blog posts written by life scientists about published scholarly articles. This effort was supported by Nature Publishing Group, where he then worked in product management roles until starting Altmetric.com in 2011.

Claire Beecroft is a university teacher/information specialist at the School of Health and Related Research (ScHARR), the University of Sheffield. Claire currently teaches on a variety of courses within ScHARR and the wider university, including the Health Technology Assessment (HTA) MOOC (Massive Open Online Course) and the MScs in Health Informatics, Public Health and HTA. Claire’s key research interests are around e-learning, e-health, applications of Web 2.0 to healthcare, teaching of health informatics and information skills and support for NHS librarians and staff to develop key informatics skills. Her main teaching interests are around literature searching and evidence retrieval, critical appraisal, e-health, telemedicine, media portrayal of health research and health economics, and information study skills.

Dr Andrew Booth is Reader in Evidence Based Information Practice at the School of Health and Related Research (ScHARR), University of Sheffield. Between 2008 and 2014 he served as Director of Research Information (Outputs) for ScHARR, helping to prepare the school’s Research Excellence Framework submission. Prior to this he worked in a wide range of roles supporting research data management, information management and evidence-based practice and delivering writing workshops to researchers. With a background in information science, Andrew has a particular interest in bibliometrics and literature review. Andrew currently serves on the editorial boards of Systematic Reviews, Implementation Science and Health Information & Libraries Journal. Over his 33-year career to date in health information management and health services research he has authored four books and over 150 peer-reviewed journal articles.

Dr William Gunn is the Head of Academic Outreach for Mendeley. He attended Tulane University as a Louisiana Board of Regents Fellow, receiving his PhD in Biomedical Science from the Center for Gene Therapy at Tulane University. Frustrated with the inefficiencies of the modern research process, he left academia and established the biology programme at Genalyte, a novel diagnostics start-up, then joined Mendeley. Dr Gunn is an Open Access advocate, co-founder of the Reproducibility Initiative and serves on the National Information Standards Organisation (NISO) Altmetrics working group.

Ben Showers is a Digital Delivery Manager at the Cabinet Office, using digital technologies to transform government services and systems for the better. Previously, he worked at JISC where he was Head of Scholarly and Library Futures, working on projects that included a shared library analytics service, as well as projects exploring the future of library systems, digital libraries, usability and digitization. He is the author of Library Analytics and Metrics: using data to drive decisions and services (Facet Publishing, 2015).