This guest post from Sara Mannheimer and Ryer Banta is about introducing undergraduates to the foundations of research data management through something everyone can relate to—organizing personal digital files. You can read more about their experience in The Complete Guide to Personal Digital Archiving, which features their co-authored chapter, “Personal Digital Archiving as a Bridge to Research Data Management.”
In 2016, we were both working at Montana State University Library, but in totally different divisions. Sara was the Data Management Librarian, with a focus on research data management and data management planning for faculty and graduate students. Ryer was the Undergraduate Experience Librarian, focusing on information literacy instruction and support for undergraduate students. In many ways it would seem that we were living in very different parts of the library.
Although our jobs were quite different, we connected over our shared conviction that undergraduates would benefit from learning fundamental research data management skills. Undergraduates are entering a data-driven job market, where skills related to data management are in high demand. In industry, data scientists are working in a wide variety of sectors, and in academia, researchers are increasingly required to publish research data. Tailoring research data management lessons to undergraduates also served a key student population at MSU. Montana State University is a mid-sized university with about 16,500 total students, about 14,000 of whom are undergraduates. So we knew that there could be a big potential impact if we could figure out a meaningful way to help undergraduates build research data management skills.
As we began to design a useful and meaningful research data management lesson for undergraduates, our immediate challenge was making research data management relevant. Most undergraduates do not encounter research data on a regular basis, and we wanted to connect research data management to their daily life, their current schoolwork, or ideally both. The instructional principle of making lessons relevant may seem like common sense, but it is also supported by constructivist learning theory. We dipped our toes into this rich area of scholarship while developing our lesson, focusing on a couple of aspects of constructivist learning theory. For anyone developing learning experiences, we highly recommend dipping your toes, and even diving headlong, into constructivist learning theory and related theories.
Constructivist learning theory encompasses several principles, but we focused on the principles related to active, student-focused discovery. Two core tenets of constructivist learning theory specify that:
- New learning builds on prior knowledge. By tapping into students’ past experiences, educators can create a learning sequence that extends from prior knowledge to the current lesson to a lifelong pattern of curiosity and learning.
- Meaningful learning develops through “authentic” tasks. Activities conducted in class should simulate activities that students will use in their class assignments and in their real lives. This strategy ensures that the skills students learn in the classroom have direct relevance to their lives outside of the classroom.
Applying these tenets provided us with new insights about how to make research data management relevant for undergraduates. Given that new learning builds on prior knowledge, we aimed to understand students’ prior knowledge regarding data, tap into students’ past learning experiences, and then build upon that knowledge in the classroom. Given that meaningful learning develops through “authentic” tasks, we aimed to teach concrete, relatable skills that could be practiced both during instruction and afterwards. We wanted to position research data management skills in the context of students’ current lives, rather than promising a theoretical applicability to an abstract future career.
Taking a cue from constructivist learning theories, we realized that we could start with data that students already use and manage on a daily basis, specifically their digital files on their computers. At the same time, we also realized that many of the basic principles of research data management are also found in personal digital archiving practice. These dual realizations helped us focus our lesson on principles and practices that could be immediately applied to students’ digital files. In fact, in our lesson, we designed activities that got students started on reorganizing their files following personal digital archiving best practices. We organized our lesson into four key sections:
- Set the stage. Students describe the use, importance, and challenges of data within their discipline or other personally relevant contexts. This step helps prepare students to apply the lesson to their own lives.
- Basics of personal digital archiving. Students discover basic personal digital archiving strategies and principles that are also used to manage research data. This step provides a foundation of knowledge that informs in-class activities.
- Apply learning with activities. Students apply personal digital archiving strategies and principles to organize and document their own files and data. This step provides students with hands-on experience with personal digital archiving strategies.
- Debrief to connect personal digital archiving to research data management. Students reflect upon the value of the personal digital archiving principles and practices for their own personal data and discover the connection and similarities between personal digital archiving and research data management. This step allows students to process the lesson and consider future applications of the skills they learned.
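The chapter itself does not prescribe specific tooling, but one personal digital archiving best practice that transfers directly to research data management is a consistent, sortable file naming convention. The sketch below is a hypothetical illustration of that idea (the function name and naming pattern are our assumptions, not part of the lesson): ISO dates sort chronologically, hyphenated slugs avoid spaces, and explicit version numbers replace names like "essay_final_FINAL2".

```python
import re
from datetime import date

def archival_name(created: date, description: str, version: int, ext: str) -> str:
    """Build a consistent, sortable file name: YYYY-MM-DD_description_vNN.ext.

    ISO dates sort chronologically in any file browser, and an explicit
    version number documents revisions instead of ad-hoc suffixes.
    """
    # Reduce the free-text description to a lowercase, hyphen-separated slug.
    slug = re.sub(r"[^a-z0-9]+", "-", description.lower()).strip("-")
    return f"{created.isoformat()}_{slug}_v{version:02d}.{ext}"

print(archival_name(date(2016, 3, 14), "Biology 101 Lab Report", 2, "docx"))
# 2016-03-14_biology-101-lab-report_v02.docx
```

In a class activity, students could apply a convention like this to a handful of their own files, which makes the abstract principle of documentation immediately tangible.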
We have had success with this lesson, and we have found that teaching personal digital archiving practices can act as a bridge that connects key practices of research data management to students’ everyday lives. Personal digital archiving builds on students’ prior knowledge of their digital belongings, and allows students to learn through authentic tasks that have immediate relevance to their daily lives. We hope that other librarians and educators can adapt and reuse the basic instructional strategies that we developed in their own learning contexts. Critical thinking about managing digital materials—whether personal files or research data—is a foundational skill that will benefit students during their undergraduate education and in their future careers.
Ryer Banta is the information literacy and technology librarian at Centralia College (WA), where he manages digital resources and services, and helps learners develop information literacy and lifelong learning skills. His research interests include open education, instructional design, educational technology, information literacy, and user experience.
You can follow Ryer on Twitter @RyerBanta
Sara Mannheimer is the data librarian at Montana State University in Bozeman, where she facilitates research data management and sharing, and promotes digital scholarship using library collections and “big data” sources. Her research focuses on data management practices, data discovery, digital preservation, and the social, ethical, and technical issues surrounding data-driven research.
You can follow Sara on Twitter @saramannheimer
The Complete Guide to Personal Digital Archiving helps information professionals break down archival concepts and best practices into teachable solutions. Whether it’s an academic needing help preserving their scholarly records, a student developing their data literacy skills, or someone backing up family photos and videos to protect against hard-drive failure, this book will show information professionals how to offer assistance.
Find out more about the book and read a sample chapter here.
Join our mailing list
Sign up to our mailing list to hear more about new and forthcoming books.
Guest post by Selena Killick, co-author of the forthcoming book Putting Library Assessment Data to Work
With the move to increasing online provision within the education sector, and by association academic libraries, we have seen an explosion in big data sets. The initial COUNTER-compliant statistics containing electronic journal title downloads seem quite quaint now, as the granularity and scale of data harvesting has continued to expand. It is not uncommon for an academic library to have usage data for its electronic resources at an item level on an individual customer basis. The University of Huddersfield Library Impact Data Project was the first exploration of how library analytics could be used to identify the impact the library was having on student success. The researchers successfully identified a correlation between library usage and student success (Stone & Ramsden 2013). Coincidentally, this research was being replicated in Australia and the USA at the same time, all with similar results (Cox & Jantti 2012; Soria et al. 2013). Five years on, where are we now? And, if data really is the new oil, where should we drill next…?
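The studies cited above report correlations between usage and attainment; the cited papers describe their own methods, but the core calculation is a simple correlation coefficient. The sketch below computes Pearson's r on entirely invented figures (the data and function name are illustrative assumptions, not results from any of the projects mentioned):

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical figures: e-resource downloads per student and final grade (%).
downloads = [2, 15, 40, 8, 55, 30, 5, 70]
grades = [52, 58, 66, 55, 71, 64, 50, 78]
print(f"r = {pearson_r(downloads, grades):.2f}")
```

As the researchers themselves stress, a high r on data like this shows association, not causation: students who already engage more may both use the library more and attain better grades.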
The Ethical Debate
Before we look at where we could drill next, the question of whether we should drill at all needs to be considered. The library field is divided on this issue, with strong views on both sides. Quantitative data is not the only source of information, and it should not be used in isolation to evaluate library performance. That being said, it is a source of insight available to us, and we should consider carefully if and how we use it. Personally, I think that if we are able to use library analytics anonymously, ethically, transparently, legally, and with the goal of improving learner success, we should exploit the data to benefit our students. I know some will disagree with me, and I’m happy to debate the subject. As data analytics becomes an increasingly central part of the academic institutional infrastructure, libraries need to identify the role they will play in this arena or risk becoming obsolete and ultimately redundant.
The Next Drilling Expedition
Until now, library analytics research has focussed on student satisfaction, library usage and student success. The Jisc Library Data Labs project has worked with librarians to combine and visualise various data sets. SCONUL Statistics and National Student Survey (NSS) scores have been combined to see if students who are the most satisfied with library provision are studying at the universities with the largest library budgets (Baylis & Burke 2017). LibQUAL+ satisfaction scores have been combined with Association of College and Research Libraries (ACRL) statistics to see if there is a correlation between satisfaction, usage and expenditure in academic libraries (Hunter & Perret 2011).
As an increasing number of library services move online, the ability to harvest and analyse user data in the areas of enquiry handling and information literacy training is growing. Libraries are starting to use customer relationship management systems to manage enquiries received at help points (Killick 2017). Webchat transcripts between customers and librarians can be analysed to identify common enquiries with a view to improving the customer experience (Mungin 2017). Within the area of information literacy training provision, live online tuition and webinars are now being used by academic libraries. Delivery tools such as Adobe Connect can provide the library with data at the individual student level for live attendance and subsequent video views. With regard to data drilling, services are the new content, and it is only a matter of time before someone breaks ground.
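Mungin (2017) analysed chat transcripts qualitatively; a first quantitative pass at the same idea can be as simple as counting frequent terms across transcripts. The sketch below is a minimal, hypothetical illustration (the stopword list, function, and sample chats are all our own assumptions, not drawn from the cited study):

```python
import re
from collections import Counter

# A tiny illustrative stopword list; a real analysis would use a fuller one.
STOPWORDS = {"i", "the", "a", "to", "my", "is", "you", "can", "how", "do", "me"}

def common_terms(transcripts, n=3):
    """Count the most frequent non-stopword terms across chat transcripts."""
    words = []
    for text in transcripts:
        words += [w for w in re.findall(r"[a-z']+", text.lower())
                  if w not in STOPWORDS]
    return Counter(words).most_common(n)

chats = [
    "How do I renew my books online?",
    "I can't access the journal database from home.",
    "Can you help me renew a book?",
]
print(common_terms(chats))  # "renew" surfaces as the most common enquiry term
```

Even a crude frequency count like this can flag recurring enquiry topics (here, renewals) worth addressing through FAQs or interface changes, before investing in deeper qualitative coding.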
Within the wider academic sphere, the field of learning analytics has emerged, using big data to understand the characteristics of successful students with a view to optimising the learning environment (Rienties et al. 2017). As an important stakeholder in the learning environment, libraries are considering their role in supporting the learning analytics agenda (Oakleaf et al. 2017). The Library Integration in Institutional Learning Analytics (LIILA) Project is currently reviewing how libraries can support learning analytics. This one-year Institute of Museum and Library Services National Forum grant is working with a variety of international stakeholders, including librarians, system vendors, and policy makers. The project team hope to get to the position where libraries are culturally ready and technically able to engage in this arena. This is our opportunity to shape the use of library data in this field, ensuring its use is anonymous, legal, ethical, and transparent, with the goal of improving learner success. If we fail to engage in the debate, I suspect our publishers will bypass the library and pipe their usage data to the learning analytics community directly.
Selena Killick is co-author of the forthcoming book Putting Library Assessment Data to Work alongside Frankie Wilson. She has presented, published, and provided consultancy services to academic libraries on an international basis on library assessment for over 15 years. She is currently an editorial board member of the International Conference on Performance Measurement in Libraries. In 2003 she was part of the team that introduced LibQUAL+ to the UK in partnership with the Association of Research Libraries. Previously she has worked on the SCONUL Value & Impact Measurement Programme (VAMP) and the SCONUL Statistics e-measures pilot.
You can follow Selena on Twitter @selenakillick
Baylis, L. & Burke, S., 2017. Insights from Jisc & HESA Analytics Labs: An Agile, cross-institutional approach. In 12th International Conference on Performance Measurement in Libraries. Oxford.
Cox, B.L. & Jantti, M., 2012. Capturing business intelligence required for targeted marketing, demonstrating value, and driving process improvement. Library and Information Science Research, 34(4), pp.308–316.
Hunter, B. & Perret, R., 2011. Can Money Buy Happiness? A Statistical Analysis of Predictors for User Satisfaction. The Journal of Academic Librarianship, 37(5), pp.402–408.
Killick, S., 2017. Exploiting customer relationship management analytics to improve the student experience. In 12th International Conference on Performance Measurement in Libraries. Oxford.
Mungin, M., 2017. Stats Don’t Tell the Whole Story: Using Qualitative Data Analysis of Chat Reference Transcripts to Assess and Improve Services. Journal of Library and Information Services in Distance Learning, 11(1–2), pp.25–36.
Oakleaf, M. et al., 2017. Academic Libraries & Institutional Learning Analytics: One Path to Integration. Journal of Academic Librarianship, 43(5), pp.454–461.
Rienties, B. et al., 2017. A review of ten years of implementation and research in aligning learning design with learning analytics at the Open University UK. Interaction Design and Architecture(s) Journal, 33, pp.134–154.
Soria, M.K., Fransen, J. & Nackerud, S., 2013. Library Use and Undergraduate Student Outcomes: New Evidence for Students’ Retention and Academic Success. portal: Libraries and the Academy, 13, pp.147–164.
Stone, G. & Ramsden, B., 2013. Library Impact Data Project: looking for the link between library usage and student attainment. College and Research Libraries, 74(6), pp.546–559.