Sometimes you need work to be done at the push of a button. Or, better yet, while you’re fast asleep or attending a social function. Our processes put you in control of the schedule.
We like holding hands. Our profiles and wikis will guide you in making decisions about how to manage your data. But having too many choices can be overwhelming. So we're happy to personally walk you through the process wherever you'd like our help.
Our data processing services center on enriching your records. Whether we're updating headings to their current forms (authority control), finding more complete records or fields (record upgrades), changing the call number schema (reclassification), or merging multiple records that describe the same materials (deduplication), we're making your data better in ways that enhance access to your collections.
Anything in a catalog record that a search term can be matched against is an access point. The more relevant access points a record contains, the more search results that item will appear in.
Authority control compares subject and name headings in your records against databases maintained by the Library of Congress and other authoritative sources. Our service standardizes your access points and cross-references them with synonyms and pseudonyms, creating a more complete web of access to your collections.
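In miniature, an authority file works like a lookup table: variant or obsolete forms of a heading point to the current authorized form. The sketch below is only an illustration of that idea (the dictionary and function are hypothetical, not part of our actual processing), using two real heading relationships: the "see" reference from Samuel Clemens to Mark Twain, and the 2010 LCSH change from "Cookery" to "Cooking."

```python
# Hypothetical miniature authority file: variant and obsolete headings
# map to their authorized forms, the way 4XX "see from" references do.
AUTHORITY = {
    "Clemens, Samuel Langhorne, 1835-1910": "Twain, Mark, 1835-1910",
    "Cookery": "Cooking",
}

def authorize(heading):
    """Return the current authorized form of a heading, if one is known."""
    return AUTHORITY.get(heading, heading)

print(authorize("Cookery"))  # Cooking
```

Real authority control is far more involved (normalization, subdivision handling, near-match review), but every update ultimately resolves a heading in your records to its authorized form in this way.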
Our process typically begins with your entire bibliographic file. We call this the "Base File," and all of your authority work going forward is built on its foundation. Once your bibliographic file has been processed, we know that you will continue to update your bibliographic database over time. We can work with you to determine a schedule for ongoing bibliographic record processing, which we call "CurCat" or "Current Cataloging," as well as for sending you updates to your authority records, which we call "Notification Service."
Your authority control profile has six sections. Most questions have one option highlighted, indicating our recommendation for standard processing; the other options represent variations in preference. Each question also has a pop-up synopsis that briefly explains key details and links through to additional information in our wiki.
There are no wrong answers. You can choose the options that make the most sense for your catalog, your discovery system, and your institution's local practices. You can select and deselect to your heart's content. If you’ve chosen two contradictory options, that’s an opportunity for us to reach out to you for clarification. You’re not on your own. We like to meet to answer questions and talk through the options while you and your staff are reviewing the profile or after you've made your initial selections.
We have built up an exhaustive wiki dedicated to our authority control service. From defining terminology and acronyms to providing in-depth explanations about each of our profile options, you can find your answers here.
The wiki is loaded with before-and-after examples to help you envision how each option will affect your records.
Where possible, we also cite our sources. You'll find links to standards and guidelines to clarify the reasoning behind our options.
Sometimes an interconnected web reference wiki is exactly what you need, and sometimes the orderly, linear progression of a book format just works better. Our authority control profile guide takes you through the profile from start to finish, with details and explanations for every question.
We've also produced an RDA profile guide to walk you through enrichment options for your AACR2 records. With more than 270 pages of combined content, these guides are available as PDF files or printed and bound for your reference.
The best way to know what you're getting is to see how our processing looks when we run your data. Take a stab at a customized profile or just request the default settings. Send us a sample set of your records. We typically recommend at least 1,000 records, and you can include more if it will help in your evaluation. We'll process a sample for you at no cost and with no obligation.
Throw in some tricky records. Let’s see how our system handles those for you. We have one client that periodically sends us a test record composed entirely of diacritics as part of their regular, ongoing processing. We love this! It allows the library to verify that our system continues to treat their diacritics correctly, and it helps us make sure we consistently meet their expectations.
Once you've run a sample or processed your catalog, how do you know what transpired? We offer a wide range of reports to help you track and understand the changes. Our informational reports tell you how many of this were updated to that. Our actionable reports tell you where to look in your records to verify near matches or improve your match rate on the next run. Finally, we have a process review report that summarizes, in five pages or so, the high points of what happened to your file.
You can opt to receive any or all of the reports. Some of our clients simply file away a few statistical reports for reference, while others use their favorite reports as a starting point for their own database maintenance efforts. It's entirely up to you.
Your records can always get better.
Libraries are always changing. You're adding, creating, defining new ways to serve your patrons. Our aim is to help you serve your patrons by making the content of your collections easier to find and access.
Is your catalog a jumble of AACR2 and RDA records trying to coexist? Our RDA enrichment process will refresh your AACR2 metadata by adding key RDA elements to create rich, new hybrid records.
Why? The changes in practice prescribed by RDA are aimed at future developments, including visions of linked data and library catalogs potentially operating in non-MARC environments. More practically, information such as media format now appears in different fields in newer records. By bringing the older content of your catalog up to the new standards, your records will display consistently in current discovery systems.
Take a look at our wiki and our RDA enrichment profile. See what options might help make your records more discoverable now and for your future plans.
Explore the RDA Wiki.
Linked open data promises to revolutionize access to library resources. But linked data is only open when the links are persistent, freely accessible, and authoritative. Where do you start? The answer is URI enrichment.
During authority control processing, we can add URI links from external controlled vocabulary databases. For instance, your name and subject headings may link to the Library of Congress (NAF and LCSH), the Virtual International Authority File (VIAF), or the International Standard Name Identifier (ISNI) database. Similarly, the content, media, and carrier (CMC) fields (336, 337, 338) may link to their corresponding RDA registries.
We follow current recommendations from the PCC Task Group on URIs in MARC to append URIs to bibliographic headings, placing NAF and LCSH URIs from fully matched headings in subfield $0, and VIAF and ISNI URIs in subfield $1.
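The placement rule above can be sketched in a few lines. This is a simplified illustration, not our production code: the field is modeled as a plain list of (subfield code, value) pairs, the function name is hypothetical, and the identifiers shown are the published LC and VIAF identifiers for Mark Twain.

```python
# Sketch of the PCC placement rule: subfield $0 carries the authority
# URI (LC NAF or LCSH), subfield $1 carries real-world-object
# identifiers such as VIAF. A heading field is modeled here as a
# simple list of (subfield_code, value) pairs.

NAF_BASE = "http://id.loc.gov/authorities/names/"
VIAF_BASE = "http://viaf.org/viaf/"

def enrich_heading(subfields, lccn=None, viaf_id=None):
    """Return a new subfield list with URI subfields appended."""
    enriched = list(subfields)
    if lccn:       # heading fully matched to an LC authority record
        enriched.append(("0", NAF_BASE + lccn))
    if viaf_id:    # corresponding VIAF cluster
        enriched.append(("1", VIAF_BASE + viaf_id))
    return enriched

heading = [("a", "Twain, Mark,"), ("d", "1835-1910.")]
print(enrich_heading(heading, lccn="n79021164", viaf_id="50566653"))
```

Only fully matched headings receive a $0; partial or near matches are left for review rather than linked, so every URI in your records points to a verified authority.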
Performing URI enrichment prior to converting your catalog to BIBFRAME or another linked data schema optimizes your metadata for the conversion. If your institution is ready to lead the way, let us run some samples to show you what we can do to help.
Depending on the materials in your collections, data such as tables of contents, summaries, author affiliation notes, and fiction profiles can create numerous additional access points in your bibliographic data. We work with two data partners whose strengths stand out with different types of collections. Give us a call to talk about what data sources might best enhance your catalog.
Whether your local schools prefer to reference Lexile Measures or Accelerated Reader Levels, enriching the records in your juvenile collections with reading level data can help younger patrons, along with their parents and teachers, select materials that offer the right degree of challenge. Materials at the right level build fluency and comprehension skills and encourage lifelong reading.
Brief acquisition records, CIP and other publisher records, order records, and spreadsheets may be inadequate sources of bibliographic data for your catalog, but they can be the perfect starting point for retrieving a full-featured copy record. You may also have full MARC records that lack elements you prefer to have in your catalog, like additional subject headings, non-Roman fields, or call numbers that match your classification system. Our automated search tools can query and find matching copy in external databases from the Library of Congress, OCLC, NLM, RLUK, and BDS, as well as our Backstage databases.
Our matching takes into account numerical data such as control numbers, ISBNs, and ISSNs in combination with text fields like titles, creators, editions, and publishers to find the best possible match. We can verify records on the data points that are important to you to filter out records that don't measure up.
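The combination of identifier matching plus text verification can be illustrated with a short sketch. This is a hypothetical simplification, not our actual matching engine: records are modeled as dictionaries, and the normalization and verification rules shown stand in for the configurable match points described above.

```python
import re

def normalize(text):
    """Lowercase and strip punctuation so text fields compare reliably."""
    return re.sub(r"[^a-z0-9 ]", "", text.lower()).strip()

def is_match(brief, candidate):
    """Accept a candidate copy record only if a standard number matches
    and the chosen verification fields also line up."""
    # Primary match point: any shared ISBN or a matching LCCN.
    ids_match = bool(
        set(brief.get("isbns", [])) & set(candidate.get("isbns", []))
        or (brief.get("lccn") and brief.get("lccn") == candidate.get("lccn"))
    )
    if not ids_match:
        return False
    # Verification points: title and edition must agree after normalization.
    return (normalize(brief["title"]) == normalize(candidate["title"])
            and normalize(brief.get("edition", "")) == normalize(candidate.get("edition", "")))

brief = {"isbns": ["9780143039433"], "title": "The Grapes of Wrath"}
full = {"isbns": ["9780143039433", "0143039431"], "title": "The grapes of wrath."}
print(is_match(brief, full))  # True
```

An identifier match alone is never enough; verifying on the text fields you choose is what filters out records that share a number but describe a different edition or format.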
With your new records, you’ll receive side-by-side reports that show which fields were used to find and validate the match or which fields were merged into your existing record.
As with all of our automation processes, the settings are up to you. We're happy to work through as many sample rounds as it takes to ensure that you’re happy with your new records.
Duplication is a fact of life in large databases. Whether you're combining separate collections in your library system, adding a new institution to your consortium, or simply dealing with duplication in records from vendors and other sources over the years, a deduplication process can significantly streamline your catalog.
As with a record upgrade process, we start by identifying your preferred match points in numerical and text fields, such as control numbers, ISBNs, titles, editions, material types, etc. Searching each record for potential duplicates within your catalog, we verify these matches on additional points to decide whether to merge two records into one. Generally, fields from one record are copied into the other, but we can also remove fields or deduplicate specific fields within a record, all according to your specifications.
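The merge step described above can be sketched as follows. This is an illustrative simplification under stated assumptions, not our production process: records are modeled as dictionaries mapping MARC tags to lists of field values, and the function name and default settings are hypothetical.

```python
def merge_records(keep, dup, copy_fields=("500", "650"), dedupe_within=True):
    """Merge selected fields from a duplicate record into the kept record,
    then optionally drop exact repeats of a field within the result."""
    merged = {tag: list(vals) for tag, vals in keep.items()}
    for tag in copy_fields:                    # copy per profile settings
        if dup.get(tag):
            merged.setdefault(tag, []).extend(dup[tag])
    if dedupe_within:                          # deduplicate within the record
        for tag, vals in merged.items():
            seen, unique = set(), []
            for v in vals:
                if v not in seen:
                    seen.add(v)
                    unique.append(v)
            merged[tag] = unique
    return merged

a = {"245": ["The Pearl /"], "650": ["Pearl fisheries--Fiction."]}
b = {"245": ["The Pearl /"], "650": ["Pearl fisheries--Fiction.", "Mexico--Fiction."]}
print(merge_records(a, b))
```

In this example the kept record gains the second subject heading from the duplicate, while the heading both records shared is not repeated. Which fields copy, which are removed, and how repeats are handled are all governed by your specifications.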
Programmatically deleting records from your catalog would make anyone uneasy. Before we start removing duplicate records, we run sample data as many times as it takes to be sure that the process is giving you the results you want. We give you easy-to-read spreadsheets and HTML reports with side-by-side comparisons. We identify near matches to aid in refining the matching process and to allow for manual corrections where the automation can't reliably verify a duplicate record.
You can continue to make changes to your online profile until you're satisfied that the settings are reliably producing the results you want before we begin to deduplicate your full database.
Large labeling or relabeling projects go more quickly when the label data is organized to print in the logical order that best fits your plan. Whether you're processing a large donation of materials, automating a collection for the first time, or reclassifying your library, the right label order can make all the difference.
Spine labels, smart or dumb barcodes, whatever you require, our automation team will work with you to format the labels just the way you want them. We'll review PDF proofs with you and make certain that everything is in order before transferring the data to our trusted printing partner for final production.
We can also apply labels for you. Cataloging projects can be paired with shelf-ready processing, and large collections can even be packed and shipped to you in shelflist order. Our on-site teams can also come to your location to relabel, interleave, reshelve, and move items as your project requires.
Perhaps your project involves altering data in a way that doesn't fit standard processing models. Whether you’re an old pro at scripts, macros, database queries, and programming or you have never thought about the possibilities of manipulating data in large batches, we have the tools and expertise to help.
Our expert programmers have created ways to massage data from one XML schema into another. We've parsed and split record data for catalogs with millions of items. We've altered records to fit local practices in cutting-edge experimental catalogs. If there’s something you’re not quite sure how to handle, give us a call. We'll guide you in the right direction and provide you with a solution tailored to your unique needs.
At Backstage, we stand behind our work with a guarantee that has no expiration date. Quality is the foundation of our success, and we're confident enough in our ability to do things right the first time that we're willing to stand behind our work forever. Our promise here is very simple:
We will correct to the client’s satisfaction, and at our expense, any problem with our services, no matter when such a problem comes to light.