The Library of Congress made an announcement earlier this week that has left some usually vocal library pundits speechless.
MARC is Dead! – RDA made irrelevant! – cries that can be heard rattling around the bibliographic blogo-twittersphere. My opinion is that this is an inevitable move, born of serious consideration and building on several initiatives that have been brewing for many months.
Bold though – very bold. I am sure there are many in the library community – people who have invested much of their careers in MARC and its slightly more hip cousin RDA – who are now suffering from vertigo as they feel the floor being pulled from beneath their feet.
The Working Group of the Future of Bibliographic Control, as it examined technology for the future, wrote that the Library community’s data carrier, MARC, is “based on forty-year-old techniques for data management and is out of step with programming styles of today.”
Many of the libraries taking part in the test [of RDA] indicated that they had little confidence RDA changes would yield significant benefits…
And on a more positive note:
The Library of Congress (LC) and its MARC partners are interested in a deliberate change that allows the community to move into the future with a more robust, open, and extensible carrier for our rich bibliographic data….
….The new bibliographic framework project will be focused on the Web environment, Linked Data principles and mechanisms, and the Resource Description Framework (RDF) as a basic data model.
There is still a bit of confusion there between a data carrier and a framework for describing resources. Linked Data is about linking descriptions of things, not necessarily transporting silos of data from place to place. But maybe I quibble a little too much at this early stage.
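To make that distinction concrete, here is a minimal sketch – Python and rdflib, chosen purely for illustration, with every URI a placeholder rather than a real identifier – of what linking descriptions of things looks like. The description of a book simply points out to identifiers that other parties describe and maintain, instead of carrying copies of their data around inside a self-contained record.

```python
# A minimal illustrative sketch using rdflib (an assumption; any RDF toolkit would do).
# All URIs below are placeholders, not real identifiers.
from rdflib import Graph, URIRef, Literal
from rdflib.namespace import DCTERMS

g = Graph()
book = URIRef("http://example.org/id/book/b1")

g.add((book, DCTERMS.title, Literal("An Example Title")))
# Links out to descriptions maintained elsewhere, rather than embedded copies of their data
g.add((book, DCTERMS.creator, URIRef("http://viaf.org/viaf/000000")))  # placeholder author identifier
g.add((book, DCTERMS.subject, URIRef("http://id.loc.gov/authorities/subjects/sh0000000")))  # placeholder subject heading

print(g.serialize(format="turtle"))
```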
So now what:
The Library of Congress will be developing a grant application over the next few months to support this initiative. The two-year grant will provide funding for the Library of Congress to organize consultative groups (national and international) and to support development and prototyping activities. Some of the supported activities will be those described above: developing models and scenarios for interaction within the information community, assembling and reviewing ontologies currently used or under development, developing domain ontologies for the description of resources and related data in scope, organizing prototypes and reference implementations.
I know that this is the way the LoC and the library community do things, but I do hope this doesn’t mean they will disappear into an insular huddle for a couple of years, only to re-emerge with something that is almost right yet misses some of the evolution going on around them over that period.
As with other recent announcements – the vote to openly share European Libraries’ data, the report from the W3C’s Library Linked Data Incubator Group, and now the report from the Stanford Linked Data Workshop – I welcome these developments. However, I warn those involved that these are great opportunities [to enable the valuable resources catalogued and curated by libraries over decades to become foundational assets of the future web] that can easily be squandered by not applying the open thinking that characterises successes in the web of data.
One very relevant example of the success of applying this open thinking and approach to the bibliographic world using Linked Data is the open publishing of the British National Bibliography (BnB). Readers of this blog will know that we at Talis have worked closely with the team at the BL in their ground-breaking work. The data model they produced is an example of one of those things that may induce that feeling of vertigo I mentioned. It doesn’t look much like a MARC record! I can assure the sceptical that, although it may be very different to what you are used to, it is easy to get your head around. (Drop us a line if you want some guidance.)
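For those wanting to see just how approachable it is, a handful of lines is enough to pull back a description and look at it – a quick sketch below, again Python and rdflib, with a placeholder URI standing in for a real BnB resource identifier.

```python
# A quick sketch of consuming a published Linked Data description.
# The URI below is a placeholder, not a real BnB identifier.
from rdflib import Graph, URIRef

resource = "http://example.org/id/resource/123"  # placeholder resource URI

g = Graph()
g.parse(resource)  # dereference the URI and parse the RDF it returns

# Print every property and value asserted about the resource
for predicate, obj in g.predicate_objects(URIRef(resource)):
    print(predicate, obj)
```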
As Talis host the BnB Linked Data for the BL, I can testify to the success of this work, which only launched in mid-July. Its use is growing rapidly, receiving just short of 2 million hits in the last month alone.
With the British Library, along with the National Libraries of Canada and Germany, quoted as partners with the LoC in this initiative, and their work referenced as an exemplar in the other reports I mention, I hold out great hope that things are headed in the right direction.
As comments to some of my previous posts attest, there is concern from some in the community of domain experts that this RDF stuff is too simple and lightweight and will not enable them to capture the rich detail that they need. They are missing a few points. Firstly, it is this simplicity that will help non-domain experts to understand, reference, and link to their rich resources. Secondly, RDF is more than capable of describing the rich detail they require, using several emerging ontologies including the RDA ontology, FRBR, etc. Finally, and most importantly, it is not a binary choice between widely comprehended simplicity and domain-specific detailed description. The RDF for a resource can, and probably should, contain both.
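To illustrate that last point, here is a small sketch – Python and rdflib again, with a made-up namespace standing in for an RDA/FRBR-style element set, not a reference to any published ontology – of one description carrying both kinds of statement side by side.

```python
# A sketch of the "both, not either/or" point: the same resource carries widely
# understood Dublin Core statements alongside richer domain-specific ones.
# The "rdaish" namespace is a hypothetical stand-in, not a published vocabulary.
from rdflib import Graph, URIRef, Literal, Namespace
from rdflib.namespace import DCTERMS

RDAISH = Namespace("http://example.org/rda-like/")  # hypothetical domain vocabulary

g = Graph()
g.bind("dct", DCTERMS)
g.bind("rdaish", RDAISH)

work = URIRef("http://example.org/id/work/w1")

# Simple, widely comprehended statements
g.add((work, DCTERMS.title, Literal("An Example Title")))
g.add((work, DCTERMS.creator, URIRef("http://viaf.org/viaf/000000")))  # placeholder identifier

# Richer, domain-specific statements about the same resource
g.add((work, RDAISH.extentOfText, Literal("xii, 348 pages")))
g.add((work, RDAISH.modeOfIssuance, Literal("single unit")))

print(g.serialize(format="turtle"))
```

A generalist application can happily consume the Dublin Core statements and ignore the rest, while a domain-aware one gets the richer detail from the very same graph.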
So Library of Congress, I welcome your announcement and offer a friendly reminder that you need to draw expertise not only from the forward-thinking library community but also from the wider Linked Data world. I am sure your partners from the British Library will reinforce this message.
This post was also published on the Talis Consulting Blog