Going Beyond DDA’s “They Clicked It → We Bought It → Done”: Assessing Ebook Use Pre- and Post-Purchase
  • Nicole Branch, Santa Clara University
  • Tina Chrzastowski, Santa Clara University
  • Jessica Harris, Santa Clara University
Document Type
Presentation
Publication Date
10-31-2016
Abstract

Ebook DDA (demand-driven acquisitions) programs have become common in academic libraries of all sizes. To establish a DDA program, a library creates potential subject collections by setting up profiles and loading matching e-records into its online catalog; when books are “used” (in an amazingly wide array of ways), the library buys the book, often after a certain threshold is met. This is a fairly seamless process that users are often unaware is happening. DDA assessment by libraries, however, is often limited to the obvious demarcation between those ebooks that are purchased (after meeting library thresholds) and those that remain as viewable records in the OPAC but are never purchased or never move beyond the library’s preset limit of free views. Ironically, the data lurking behind DDA programs are extensive, albeit often confusing and difficult to navigate. And while libraries are frequently content to know an ebook has been selected and purchased through a vetted “use” process, there is much more to learn from DDA.

This poster will present a clear guide to ebook use data, comparing a third-party vendor (EBL) and a publisher (Springer). EBL (using its in-house statistics) and Springer (using COUNTER) both provide ebook use data, but the two sets of data are surprisingly different. The research questions for this study are:

• What are the options for pre- and post-purchase DDA ebook assessment?

• How do COUNTER data differ from EBL’s in-house data?

• What are the positives and negatives of the wide variety of “use” definitions?

• Are there any comparable data points uniformly available for ebook use across platforms?

• Do COUNTER and EBL statistics provide sufficient data for assessing ebook usage?

• What are the implications of the lack of uniformity in the data provided by ebook vendors and publishers?

Much of the confusion surrounding post-purchase ebook assessment stems from the wide variety of data options provided by publishers and third-party vendors. How many chapter downloads might equal how many page views? Does a total book download trump all other use, or do minutes spent inside a book, with specific pages viewed, carry more weight? And why does it matter?

Ebook assessment matters because it offers new ways of evaluating and measuring how readers read and how books are read. Detailed DDA data are showing us what is possible to learn about ebook use. These data can show us when users dip in and out (pages skipped), when they judge a book by its preface (reading only the first few pages), or when they read the entire book (rarely!). Ebook data provide libraries with a peek inside how our users use books, something a print book could never tell us. By evaluating and analyzing ebook use data, libraries can begin to understand why users choose ebooks based on how they read them, insights that will greatly enhance ebook collection development going forward.

Comments

Conference: Library Assessment Conference (Arlington, VA)
Date Presented: Monday, October 31st, 2016

Citation Information
Nicole Branch, Tina Chrzastowski, and Jessica Harris. “Going Beyond DDA’s ‘They Clicked It → We Bought It → Done’: Assessing Ebook Use Pre- and Post-Purchase” (2016).
Available at: http://works.bepress.com/nicole-branch/11/