Marketing Team


Best practice in effective long-term data management: the FAIR principles

In this piece, we're joined by Arkivum's Marketing Manager, Tom Lynam, to discuss how adopting data management principles can help higher education and heritage organisations take further steps to embrace digital during this challenging period.

In the current climate, many businesses around the world are balancing survival through government-imposed lockdowns with looking ahead to a future return to an as-yet-unknown new normal.

Higher education and heritage organisations are no different; both have been heavily impacted by the changes. Many have been forced to quickly move services online, most of which had previously been delivered in person.

Online classes, tutorials and museum tours have become the norm for many. While much face-to-face activity will likely return once lockdown eases, this shift will undoubtedly have a longer-term effect: more of these services and more content will move permanently online and into digital formats.

This change will in turn give rise to more born-digital content than ever before, creating a pressing need for effective long-term data management. In short, it raises the question: “where and how do I store all this new content?”

Many may seek to answer this question by starting with a technology solution, but effective digital archiving and preservation is not simply about having the right tools (important though they are); it also means following best practice.

That’s where the FAIR data management principles come in.

What is FAIR?


FAIR (or Findable, Accessible, Interoperable and Reusable) is a set of guiding data management principles, first published in 2016, created by a diverse community of scholars, librarians, archivists, publishers and research funders who sought to drive change towards improved knowledge creation and sharing.

The principles were created to support the reuse of scholarly data through good data management, particularly by enhancing the ability of machines to find and use data automatically.

The four principles are as follows:

  • FINDABLE: Data and metadata should be easily findable by both humans and machines. This includes, for example, assigning a globally unique, persistent identifier, describing the data with rich metadata and indexing it correctly (a sketch of such a metadata record follows this list). This will only become more challenging, and more important, as the volume of data we need to store increases.

  • ACCESSIBLE: Data not only needs to be findable as described above; it also needs to be accessible. This can range from immediate open access through to clear details of how to obtain authentication or authorisation where required.

    Accessibility also goes beyond that: if data is accessed many years after it was first created, it must still exist, be free of corruption, and be in a format that can be opened on a modern machine.


  • INTEROPERABLE: Data should be interoperable; that is, it should use a formal, accessible, shared and broadly applicable language for knowledge representation. If data is siloed, parts of it can become inaccessible or slow to retrieve. This will only grow in importance (particularly for larger organisations) as more and more data is generated from multiple sources. Unifying these many sources, and potentially storage areas, and ensuring the data is interoperable will be key.


  • REUSABLE: In its simplest form, the purpose of data management is to ensure that stored data can be used at a later date for whatever purpose it is needed. By ensuring data is reusable (with clear usage guidelines), those who need to access it later can do so easily and efficiently, and with confidence that it is accurate, regardless of how long it has been stored or in what format. Higher education and heritage organisations must ensure that their stored knowledge has longevity and can be used by future students and customers seeking to learn in a particular area.
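
To make the four principles concrete, below is a minimal sketch (in Python) of what a FAIR-style metadata record and a simple fixity check might look like. The field names, identifier and URL are illustrative assumptions rather than a prescribed standard; real archives typically follow a published schema such as Dublin Core or DataCite.

```python
import hashlib
import json
from pathlib import Path

# Illustrative FAIR-style metadata record. Every field below is an
# assumption for this sketch, not a mandated schema.
record = {
    # FINDABLE: a globally unique, persistent identifier plus rich,
    # searchable descriptive metadata.
    "identifier": "doi:10.9999/example-dataset-001",  # hypothetical DOI
    "title": "Digitised lecture recordings, 2020 intake",
    "keywords": ["higher education", "born-digital", "lectures"],
    # ACCESSIBLE: how the data can be retrieved, and on what terms.
    "access_url": "https://archive.example.org/datasets/001",  # hypothetical
    "access_rights": "authenticated users only",
    # INTEROPERABLE: open, well-documented formats and vocabularies.
    "format": "video/mp4",
    "metadata_standard": "Dublin Core",
    # REUSABLE: a clear licence and provenance, so future users know what
    # they may do with the data and where it came from.
    "license": "CC-BY-4.0",
    "provenance": "Recorded by the university AV team, March 2020",
}

def fixity_checksum(path: Path) -> str:
    """Return a SHA-256 checksum of a file, read in chunks so large
    archive files do not need to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    print(json.dumps(record, indent=2))
```

Recomputing the checksum at retrieval time and comparing it against the value recorded when the file was archived is a common way of confirming that data is still free of corruption, as the accessibility principle demands.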

 

Why is FAIR so important?


FAIR is incredibly important to an ever-increasing range of industries that rely on good data management. We alluded to some of these pressures earlier in this post, and the challenge will only grow if left unchecked.

What the FAIR principles achieve, though, is clarity on what good data management looks like.

Data is also sometimes viewed as a burden rather than something that can add value to an organisation. Concerns such as “where do I store this particular data set?” or “where did I leave that file I need?” trump thoughts of how to maximise value from the data you own. By building robust data management processes, supported by the right technology, you can start to realise additional value from your data, rather than viewing it as a burden.

Finally, for higher education and heritage organisations, in many ways this conversation is about more than simply storing old information, or even gaining value from data. It’s about the responsibility and accountability for the long-term care and stewardship of that information. Future students and customers will expect or even assume that the information they want will be available when they want it.

Where to begin?


This post has largely explored what best practice can look like, and for many this might seem a daunting prospect. It is usually best to start small: inspect and investigate what you currently have in place, and build upon that.

Using guiding principles such as FAIR is a good start to asking the right questions about how you look after and manage your data. From a small beginning, you can start to build robust and effective processes, and ultimately move towards good data management. At the same time, you can leverage the tools available to facilitate and, in many cases, automate those processes.

At Arkivum, we work with many organisations in a range of industries, but each and every one of our customers has identified the importance of good practice in long-term data management. Our solution supports and enables the processes and practices that they have built to ensure the effective long-term management of their data.

About Arkivum

Arkivum is a leading provider of software, services and domain expertise for long-term data management and archiving. Our focus is on ensuring our customers' long-term data is secure, accessible and usable for as long as they need it.

Our robust data management processes, coupled with no data lock-in, enable us to provide our customers with a 100% data integrity guarantee. In addition, our customers can leverage our team of experts, who provide best practice guidance on long-term data management, including adhering to compliance and regulatory requirements, meeting digital preservation needs and maximising business value from data. Arkivum is ISO 27001 and ISO 9001 certified. To find out more about our long-term data management solutions, simply click here.

 
