At Facet Publishing we endeavour to commission and publish high quality, authoritative content for the information scholar and practitioner worldwide. We are committed to advancing the profession and publishing material that will prepare and inform students and researchers to meet the challenges of the future.
We support scholars and researchers throughout the publishing process, ensuring every book we publish is peer reviewed, available through green open access, optimised for discoverability, professionally designed, copyedited and printed at speed. Every title receives a worldwide, bespoke marketing push to maximise impact. Find out more about what we offer below.
Scholarly publications from leading researchers worldwide
For students, academics, early career and next generation researchers, we commission and publish scholarly research in monographs and edited collections from some of the leading scholars in the world. We aim to address the critical information issues of our time by commissioning current active research in established topics and adjacent fields; you can see some of our latest examples here.
All of our scholarly titles are peer reviewed by specifically selected scholars and we offer open/single blind/double-blind review depending on the wishes of our authors. For the iResearch series we have a bespoke editorial board. We also use our editorial advisory team, comprising thought leaders from around the world, in a variety of sectors as a sounding board for our list development ideas.
We know how important it is for our academics to upload their research to their institutional repository directly after publication in order to share their research/practice as widely as possible. In order to facilitate this, we have a green open access policy that supports an author’s right to voluntarily self-archive their work without embargo or payment. We are open and flexible with our authors and invite discussion of our policies.
We are committed to increasing the discoverability of our authors’ content. The full text of all our books is discoverable through Google Scholar and library discovery services. We aid discovery by individually indexing our book chapters with DOIs, adding carefully selected keywords and expertly chosen book trade subject codes. Our books are available in print and digitally throughout the world.
We are expert, agile marketers and ensure our titles are offered for review in leading relevant journals around the world. In addition, we target scholarly communities through social media to ensure that scholars from Mumbai to Jakarta and from Syracuse to Durban are aware of new content relevant to them. We select the most appropriate conferences and seminars and ensure that our authors’ content is represented to its target readership.
Care and quality
We pride ourselves on our attention to detail. As a small team we can be highly flexible and responsive. We are able to give our authors the care and attention they require from inception through to post publication. We work closely with our writers to develop their proposals, nurture them through the writing process and offer them the best editorial and production support that we can. We are quick to market, dynamic, and possess many years of combined experience across academic and professional publishing.
Talk to us
If you’d like to know more about how we can work with you and help get your original research published and brilliantly marketed in a rapid time frame, talk to Damian or Helen or come and chat with us at the iConference in Sheffield.
Helen Carley is Publishing Director at Facet Publishing and can be reached on firstname.lastname@example.org
Damian Mitchell is Commissioning Editor at Facet Publishing and can be reached on email@example.com
Guest post by Angus Whyte, co-author of Delivering Research Data Management Services
Librarians have grown to love research data so much they can’t get enough of it! Well some at least have, and Love Your Data Week will help spread the love. Of course nobody loves data more than the researchers who produce it. Funders love it too; after all they pay for it to come into the world. If data is loved so much, why is so much of it running around loose, dirty and in no fit state to get a job? Is all that is needed a little more discipline?
Three years ago when Delivering Research Data Management Services was first published, my co-authors Graham Pryor and Sarah Jones were working with colleagues in the Digital Curation Centre and in universities across the UK to help them get support for research data off the ground and into the roster of institutional service development. At the time, as Graham said in his introduction, institution-wide RDM services had “at last begun to gain a foothold”.
The (now open access) chapter, “A pathway to sustainable research data services: from scoping to sustainability”, described six phases, from envisioning and initiating, through discovering requirements, to design, implementation and evaluation. Across the UK sector as a whole, few institutions had got beyond the discovery phase. Some of the early adopters in the UK, US and Australia have case studies featured in the book, providing more fully-fledged examples of the mix of soft and hard service components that a ‘research data management service’ typically comprises. Broadly these include support for researchers to produce Data Management Plans, tools and storage infrastructure for managing active data, support for selection and handover to a suitable repository for long-term preservation, and support for others to discover what data the institution has produced.
So what has changed? The last three years have seen evolution, consolidation and growth. According to one recent survey of European academic research libraries, almost all will be offering institutional RDM services within two years. The mantra of FAIR data (findable, accessible, interoperable and reusable) has spurred a flurry of data policy-making by funders, journals and institutions. Many organisations have yet to adopt a policy, but policy harmonisation is now a more pressing need than formulation. Data repositories have mushroomed, with re3data.org now listing about three times the number it did three years ago. Training materials and courses are becoming pervasive, and data stewardship is increasingly recognised as essential to data science.
The burgeoning development in each of these aspects of RDM does not hide the immaturity of the field; each aspect is the subject of international effort by groups like COAR (Confederation of Open Access Repositories) and the Research Data Alliance to consolidate and codify the organisational and technical knowledge needed to further join up services. European initiatives to establish ‘Research Infrastructures’ have demonstrated how this can be done, at least for some disciplines.
Over the same period, many institutions have learned to love ‘the cloud’, gaining scalability and flexibility by integrating cloud storage and computation services with their IT infrastructure. The same is not yet true of the higher-level RDM services that require academic libraries to collaborate with their IT and research office colleagues. Shared services are a trend that has seen some domain-focused data centres spread their disciplinary wings. Ambitious initiatives like the European Open Science Cloud pilot will tell us how far ‘up the stack’ cloud services to support open science can go to offer better value to science and society.
The biggest challenges in 2013 are still big challenges now. Political and cultural change is messy, for a number of reasons. There is high-level political will to fund data infrastructure, as it is seen as essential for innovation as well as for research integrity. But the economic understanding to direct resources to where they are most needed, to ensure data is not only loved but properly cared for? That requires a better understanding of what kinds of care produce good outcomes, like citation and reuse. Evaluation studies have been thin on the ground and, perhaps as a result, funding for data infrastructure still tends to be short-term and piecemeal.
The book offers a comprehensive grounding in the issues and sources to follow up. Its basic premise is as true now as when it was published: keeping data requires a mix of generic and domain-specific stewardship competencies, together with organisational commitments and basic infrastructure. The basic challenge is as true now as then; research domains are fluid and tribal, crossing national and international boundaries and operating to norms that tend to resist institutional containers. But that has always been the case, and yet institutions and their libraries continue to adapt and survive.
By happy coincidence, the International Digital Curation Conference (IDCC17) is happening the week after Love Your Data Week. You can follow it as it happens on Twitter at #idcc17.
Dr Angus Whyte is a Senior Institutional Support Officer at the Digital Curation Centre, University of Edinburgh. He is responsible for developing online guidance and consultancy to research organisations, to support their development of research data services. This is informed by studies of research data practices and stakeholder engagement in research institutions.
Research Data Services in Europe’s Academic Research Libraries, LIBER Europe
Wilkinson, M. D., Dumontier, M., Aalbersberg, I. J., Appleton, G., Axton, M., Baak, A., et al. (2016). The FAIR Guiding Principles for scientific data management and stewardship
Sign up to our mailing list to hear more about new and forthcoming books:
As Love Your Data Week continues, today we have made a new chapter from Managing Research Data available Open Access. The chapter, The lifecycle of data management by Sarah Higgins, is available to download here.
We will be releasing more Open Access chapters throughout the week and publishing blogposts from our authors. For a chance to win one of our research data management books, share a tweet about why you (or your institution) are participating in Love Your Data Week 2017 using #WhyILYD17. More details about the prize draw are available here.
Guest post by Starr Hoffman, editor of Dynamic Research Support for Academic Libraries.
Similar to the confusion between open access and open source, the terms research data and secondary data are sometimes confused in the academic library context. A large source of confusion is that the simple term “data” is used interchangeably for both of these concepts.
What is Research Data?
As research data management (RDM) has become a hot topic in higher education due to grant funding requirements, libraries have become involved. Federal grants now require researchers to include data management plans (DMPs) detailing how they will responsibly make taxpayer-funded research data 1) available to the public via open access (for instance, by depositing it in a repository) and 2) preserved for the future. Because there are often gaps in campus infrastructure around RDM and open access, many academic libraries have stepped in to provide guidance on writing data management plans, finding appropriate repositories, and other good data management practices.
This pertains to original research data – that is, data collected by the researcher during the course of their research. Research data may be observational (from sensors, etc.), experimental (gene sequences), or derived (data or text mining), among other types, and may take a variety of forms, including spreadsheets, codebooks, lab notebooks, diaries, artifacts, scripts, photos, and many others. Data takes many forms not only in different disciplines, but in different methodologies and studies.
Example: Dr. Emmett “Doc” Brown performs a series of experiments in which he notes the exact speed at which a DeLorean will perform a time jump (88 MPH). This set of data is original research data.
What is Secondary Data?
Secondary data is usually called simply “data” or “datasets.” (For the sake of clarity, I prefer to refer to it as “secondary data.”) Unlike research data, secondary data is data that the researcher did not personally gather or produce during the course of their research. It is pre-existing data on which the researcher will perform their own analysis. Secondary data may be used either to perform original analyses or for replication (studies which follow the exact methodology of a previous study, in order to test the reliability of the results; replication may also be performed by following the same methodology but gathering a new set of original research data). Secondary data can also be joined to additional datasets, including datasets from different sources or joining with original research data.
Example: Let’s say that Marty McFly makes a copy of Doc Brown’s original data and performs a new analysis on it. The new analysis reveals that the DeLorean was only able to time-jump at the speed of 88 MPH due to additional variables (including a power input of 1.21 jigowatts). In this case, the dataset is secondary data.
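To make the distinction concrete in code, here is a minimal sketch of Marty’s secondary analysis using Python’s pandas library. The dataset, column names and values are all hypothetical, invented for illustration: Doc Brown’s trials are original research data for him, but for Marty they are secondary data, which he joins with an additional dataset before analysing.

```python
import pandas as pd

# Doc Brown's original research data: speed trials for time jumps
# (hypothetical values for illustration)
doc_brown_trials = pd.DataFrame({
    "trial": [1, 2, 3, 4],
    "speed_mph": [70, 80, 88, 95],
    "time_jump": [False, False, True, True],
})

# Marty reuses that dataset (secondary data, from his perspective)
# and joins it with his own measurements of power input per trial.
marty_power = pd.DataFrame({
    "trial": [1, 2, 3, 4],
    "power_gigawatts": [0.9, 1.0, 1.21, 1.21],
})

merged = doc_brown_trials.merge(marty_power, on="trial")

# New analysis on the secondary data: which trials reached both the
# threshold speed and sufficient power?
jumps = merged[(merged["speed_mph"] >= 88)
               & (merged["power_gigawatts"] >= 1.21)]
print(jumps["trial"].tolist())  # → [3, 4]
```

Note that joining the two tables produces a transformed dataset, which (as discussed below under data curation) may itself warrant RDM treatment.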
Reuse of Research Data
Another potential point of confusion is that one researcher’s original research data can be another researcher’s secondary data. For instance, in the example above, the same dataset is considered original research data for Doc Brown, but is secondary data for Marty McFly.
Data Services: RDM or Secondary Data?
The phrase “data services” can also be confusing, because it may encompass a variety of services. A potential menu of data services could include:
- Assistance locating and/or accessing datasets.
  - This might pertain to vendor-provided data collections, consortial collections (such as ICPSR), locally produced data (in an institutional repository), or publicly accessible data (such as the U.S. census).
  - Because this service specifically focuses on accessing data, it by default pertains to secondary data.
- Data management plan (DMP) assistance.
  - Typically only applies to original research data.
- Data curation and/or RDM services.
  - These may include education on good RDM practices, assistance depositing data into an institutional repository (IR), assistance (or full service) creating descriptive or other metadata, and more.
  - Typically only provided for original research data. However, if transformative work has been done to a secondary dataset (such as merging it with additional datasets or transforming variables), data curation/RDM may be necessary.
- Assistance with data analysis.
  - This service is more often provided for students than for faculty, but may include both groups.
  - Services may include providing analysis software, software support, methodological support, and/or analytical support.
  - May include support for both original research data and secondary data.
You Say “Data Are,” I Say “Data Is” …Let’s Not Call the Whole Thing Off!
So in the end, what does all this matter? The primary takeaway is to be clear, particularly when communicating about services the library will or won’t provide, about specific types of data. In many cases this will be obvious; for instance, “RDM” contains the term “research data” and is thus clear. Less clear is when a library department decides to provide “assistance with data.” What does this mean? What kind of assistance, and for what kind of data? Is the goal of the service to support good management of original research data? Or is the goal to support the finding and analysis of secondary data that the library has purchased? Or another goal altogether?
Clarity is key both to understanding each other and to clearly communicating emerging services to our researchers.
Starr Hoffman is Head of Planning and Assessment at the University of Nevada, Las Vegas, where she assesses many activities, including the library’s support for and impact on research. Previously she supported data-intensive research as the Journalism and Digital Resources Librarian at Columbia University in New York. Her research interests include the impact of academic libraries on students and faculty, the role of libraries in higher education and models of effective academic leadership. She is the editor of Dynamic Research Support for Academic Libraries. When she’s not researching, she’s taking photographs and travelling the world.
Facet Publishing have announced the release of Developing Digital Scholarship: Emerging practices in academic libraries
This new book, edited by Alison Mackenzie and Lindsey Martin, provides strategic insights drawn from librarians who are meeting the challenge of digital scholarship, utilizing the latest technologies and creating new knowledge in partnership with researchers, scholars, colleagues and students.
The impact of digital on libraries has extended far beyond its transformation of content: it has shaped the development of services, extended and enhanced access to research, and transformed teaching and learning systems. The fluidity of the digital environment can often be at odds with the more systematic approaches to development traditionally taken by academic libraries, and has led to a new generation of roles and shifting responsibilities, with staff training and development often playing ‘catch-up’. One of the key challenges to emerge is how best to demonstrate expertise in digital scholarship that draws on the specialist technical knowledge of the profession while maintaining and growing its relevance for staff, students and researchers.
Developing Digital Scholarship spans a wide range of contrasting perspectives, contexts, insights and case studies, which explore the relationships between digital scholarship, contemporary academic libraries and professional practice.
The editors said, “Our book demonstrates that there are opportunities to be bold, remodel, trial new approaches and reposition the library as a key partner in the process of digital scholarship.”
Alison Mackenzie is the Dean of Learning Services at Edge Hill University. Alison has been an active contributor to the development of the profession, having held roles on the SCONUL Board and as Chair of the Performance Measurement and Quality Strategy group. She is currently a member of the Northern Collaboration steering group and is co-editor of this book.
Lindsey Martin is the Assistant Head of Learning Services and is responsible for the learning technologies managed and supported by Learning Services. She has responsibility for the virtual learning environment and its associated systems, media production, classroom AV, and development of staff digital capability. Lindsey has worked in academic libraries for more than 20 years in a variety of roles. She has been active on the Heads of eLearning Forum (HeLF) Steering group for a number of years and is currently its Chair. She is co-editor of this book.
Facet Publishing have announced the release of Altmetrics: A practical guide for librarians, researchers and academics, edited by Andy Tattersall.
This new book brings together experts in their fields to guide readers through the practical and technical aspects of altmetrics.
Altmetrics focuses on research artefact level metrics that are not exclusive to traditional journal papers but also extend to book chapters, posters and data sets, among other items. It offers additional indicators of attention, review and impact that add highly responsive layers to the slower to accrue traditional research metrics.
With contributions from leading altmetrics innovators, including Euan Adie, founder and CEO of Altmetric.com, William Gunn, Head of Academic Outreach at Mendeley, and Ben Showers, author of the bestselling Library Analytics and Metrics, the book details the methods that can be employed to reach different audiences, even with only minimal resources.
The book explains the theory behind altmetrics and provides practical advice on using the increasing number of tools available for librarians and researchers to measure, share, connect and communicate research, including Academia.edu, Facebook, Mendeley, ResearchGate, Twitter, LinkedIn, YouTube, Figshare, Altmetric.com, SlideShare, Kudos and many more.
Anyone wanting to understand altmetrics and encourage others to use them will find this book essential reading, including library and information professionals working in higher education, researchers, academics, and higher education leaders and strategists.
The editor, Andy Tattersall, has also made a video to complement the book which provides a whistle-stop tour of altmetrics and associated tools. The video can be viewed here.
About the authors:
Andy Tattersall (editor) writes and gives talks about digital academia, learning technology, scholarly communications, open research, web tools, altmetrics and social media – in particular, their application for research, teaching, learning, knowledge management and collaboration. He is very interested in how we manage information and how information overload affects our professional and personal lives. His teaching interests lie in encouraging colleagues and students to use the many tools and technologies (quite often freely available) to aid them in carrying out research and collaboration within academic and clinical settings. He is Secretary for the Chartered Institute of Library and Information Professionals – Multi Media and Information Technology Committee.
Euan Adie is founder and CEO of Altmetric.com, which supplies altmetrics data to funders, universities and publishers. Originally a computational biologist at the University of Edinburgh, in 2005 Euan developed postgenomic.com, which aggregated blog posts written by life scientists about published scholarly articles. This effort was supported by Nature Publishing Group, where he then worked in product management roles until starting Altmetric.com in 2011.
Claire Beecroft is a university teacher/information specialist at the School of Health and Related Research (ScHARR), the University of Sheffield. Claire currently teaches on a variety of courses within ScHARR and the wider university, including the Health Technology Assessment (HTA) MOOC (Massive Open Online Course) and the MScs in Health Informatics, Public Health and HTA. Claire’s key research interests are around e-learning, e-health, applications of Web 2.0 to healthcare, teaching of health informatics and information skills and support for NHS librarians and staff to develop key informatics skills. Her main teaching interests are around literature searching and evidence retrieval, critical appraisal, e-health, telemedicine, media portrayal of health research and health economics, and information study skills.
Dr Andrew Booth is Reader in Evidence Based Information Practice at the School of Health and Related Research (ScHARR), University of Sheffield. Between 2008 and 2014 he served as Director of Research Information (Outputs) for ScHARR, helping to prepare the school’s Research Excellence Framework submission. Prior to this he worked in a wide range of roles supporting research data management, information management and evidence-based practice and delivering writing workshops to researchers. With a background in information science, Andrew has a particular interest in bibliometrics and literature review. Andrew currently serves on the editorial boards of Systematic Reviews, Implementation Science and Health Information & Libraries Journal. Over his 33-year career to date in health information management and health services research he has authored four books and over 150 peer-reviewed journal articles.
Dr William Gunn is the Head of Academic Outreach for Mendeley. He attended Tulane University as a Louisiana Board of Regents Fellow, receiving his PhD in Biomedical Science from the Center for Gene Therapy at Tulane University. Frustrated with the inefficiencies of the modern research process, he left academia and established the biology programme at Genalyte, a novel diagnostics start-up, then joined Mendeley. Dr Gunn is an Open Access advocate, co-founder of the Reproducibility Initiative and serves on the National Information Standards Organisation (NISO) Altmetrics working group.
Ben Showers is a Digital Delivery Manager at the Cabinet Office, using digital technologies to transform government services and systems for the better. Previously, he worked at JISC, where he was Head of Scholarly and Library Futures, working on projects that included a shared library analytics service, as well as projects exploring the future of library systems, digital libraries, usability and digitization. He is the author of Library Analytics and Metrics: using data to drive decisions and services (Facet Publishing, 2015).