New & Noteworthy

Library Hi Tech News

ISSN: 0741-9058

Article publication date: 14 October 2013

Citation

(2013), "New & Noteworthy", Library Hi Tech News, Vol. 30 No. 8. https://doi.org/10.1108/LHTN-09-2013-0051

Publisher

Emerald Group Publishing Limited


Article Type: New & Noteworthy From: Library Hi Tech News, Volume 30, Issue 8

Guidance documents for electronic theses and dissertations available for review

Colleges and universities are steadily transitioning from traditional paper and microfilm to digital formats for the submission and dissemination of graduate theses and dissertations. While this move from print-based to Electronic Theses and Dissertations (ETDs) greatly enhances the accessibility and sharing of graduate student research, it also presents significant challenges for the academic libraries that must preserve this digital content.

The ETD Lifecycle Management project (http://metaarchive.org/imls) has released the draft Guidance Documents for Lifecycle Management of ETDs for public review. Funded by the Institute of Museum and Library Services (IMLS) and led by the University of North Texas, in partnership with the Networked Digital Library of Theses and Dissertations (NDLTD) and the Educopia Institute, the ETD Lifecycle Management project is promoting best practices and improving the capacity of academic libraries to preserve ETDs for future researchers.

Written by ETD program experts from several established and well-respected academic institutions (see below), the Guidance Documents are geared towards the full range of stakeholders in ETD programs from administrators to graduate schools to librarians to vendors. As indicated by the table of contents below, the Guidance Documents cover a range of curation topics that span the lifecycle for ETDs.

Table of contents

1. Guidance Documents for the Lifecycle Management of ETDs.

2. Guidelines for Implementing ETD Programs – Roles and Responsibilities.

3. Guide to Access Levels and Embargoes of ETDs.

4. Briefing on Copyright and Fair Use Issues in ETDs.

5. Guidelines for Collecting Usage Metrics and Demonstrations of Value for ETD Programs.

6. Managing the Lifecycle of ETDs: Curatorial Decisions and Practices.

7. Metadata for ETD Lifecycle Management.

8. Guide to ETD Program Planning and Cost Estimation.

9. Guide to Options for ETD Programs.

Interested ETD stakeholders can register to receive a copy of the guidance documents. By requesting the documents, reviewers voluntarily agree to provide the project with feedback to help improve them. Reviewers may comment on all of the documents or only on those most relevant to their areas of interest. Project staff will follow up with reviewers individually one month from the date they receive the documents, and will incorporate their suggestions into the final draft. The review period will close on December 31, 2013.

The guidance documents for Lifecycle Management of ETDs have been authored by ETD program experts from the University of North Texas, Virginia Tech, Rice University, Boston College, Indiana State University, Pennsylvania State University, and the University of Arizona. The documents were edited by representatives from the Educopia Institute, the MetaArchive Cooperative, and the Networked Digital Library of Theses and Dissertations.

Register to receive a copy of the guidance documents at: http://publishing.educopia.org/etd-lifecycle-guidance-documents/

The ETD Lifecycle Management project: http://metaarchive.org/imls

NISO publishes update to metrics data dictionary for libraries and information providers

The National Information Standards Organization (NISO) has announced the publication of the fifth edition of the standard ANSI/NISO Z39.7, “Information Services and Use: Metrics and Statistics for Libraries and Information Providers – Data Dictionary”. The purpose of the Z39.7 Data Dictionary is to assist the information community in the identification, definition, collection, and interpretation of statistical data used to describe the current status and condition of libraries in the USA. It absorbs many of the de facto definitions established in various national surveys and data collection programs to provide a body of valid and comparable data on American libraries.

Originally published in 1968 with the title Library Statistics, the standard has evolved through its subsequent editions, culminating in an online data dictionary and new title in the 2004 edition. In 2008, NISO moved the standard from periodic to continuous maintenance and established the Z39.7 Standing Committee to maintain the standard.

“With the standard under continuous maintenance, users can submit comments and suggestions for improvements and have them evaluated by the Z39.7 Standing Committee biannually for possible inclusion in the standard”, explained Nettie Lagace, NISO Associate Director for Programs. “When a sufficient number of changes have been accepted, a revision of the standard is presented for approval by the NISO Voting Members. This fifth edition includes all the accepted changes since the previous edition was published”.

“The importance of addressing our digital environment with integrative thinking is apparent in the new standard”, stated Martha Kyrillidou, Senior Director, ARL Statistics and Service Quality Programs, and Chair of the Z39.7 Standing Committee. “In addition to reformatting and better organization, the e-metrics that were introduced in the 2004 edition as a separate section have been updated and integrated into the body of the standard to make them easier to use. Additional data gathering tools were added and all survey references were updated”.

“In addition to evaluating suggestions from the community, the Z39.7 Standing Committee regularly reviews other metrics-related standards and best practices with an eye to continuously improving the Data Dictionary”, said Steve Hiller, Director of Assessment and Planning, University of Washington Libraries, member of the Z39.7 Standing Committee, and incoming Chair of the International Organization for Standardization (ISO) Subcommittee on Quality – Statistics and performance evaluation (TC 46/SC 8). “Currently, we are assessing the forthcoming revision to the ISO standard on International library statistics (ISO 2789) for areas where the standards can be better aligned and for proposed new statistics and methods”.

“Statistics collection is a critical process of libraries to document their service provision, their value, and changes in the use of services”, states Todd Carpenter, NISO’s Executive Director. “Standards in this area are especially important to ensure that data is consistently collected across time periods and also by different libraries so that data can be compared. Z39.7 is the key standard in the US for this purpose and the current revision ensures that the standard remains up-to-date with the changing environment in today’s libraries”.

The Z39.7 Data Dictionary is available in open access on the NISO web site. A downloadable PDF version of the standard is also available. Users of the standard are encouraged to submit suggestions to the Z39.7 Standing Committee at any time. Information on the continuous maintenance process is available from the Committee’s webpage at: http://www.niso.org/workrooms/z39-7.

The Z39.7 Data Dictionary: http://z39-7.niso.org/

PDF version: http://www.niso.org/apps/group_public/download.php/11282/Z39-7-2013_metrics.pdf

Higher Ed associations to build federated system for publicly funded research

The Association of Research Libraries (ARL), the Association of American Universities (AAU), and the Association of Public and Land-grant Universities (APLU) have announced the formation of a joint steering group to advance a proposed network of digital repositories at universities, libraries, and other research institutions across the US that will provide long-term public access to federally funded research articles and data.

The steering group will oversee a feasibility study, guide policy, and explore governance structures necessary for prototyping and implementing the network. This repository network, the SHared Access Research Ecosystem (SHARE), is being developed as one response to a White House directive instructing federal funding agencies to make the results of research they fund available to the public.

The SHARE Steering Group will be chaired by Rick Luce, associate vice president for research and dean of university libraries at University of Oklahoma, and Tyler Walters, dean of university libraries at Virginia Tech. Other members of the steering group include: Richard McCarty, provost and vice chancellor for academic affairs, Vanderbilt University; MacKenzie Smith, university librarian, University of California, Davis; Brad Wheeler, vice president for information technology and CIO, Indiana University; and Caroline Whitacre, vice president for research, Ohio State University. Additionally, Joyce Backus, associate director for library operations at the National Library of Medicine where PubMed Central resides, will serve as a National Institutes of Health liaison to the SHARE Steering Group.

For more information on SHARE: http://www.arl.org/share

Frontiers in massive data analysis: new report from USA National Research Council

From Facebook to Google searches to bookmarking a webpage in our browsers, today’s society generates an enormous amount of data. Some internet-based companies, such as Yahoo!, are even storing exabytes (10^18 bytes) of data. Like these companies and the rest of the world, scientific communities are also generating large amounts of data – mostly terabytes, and in some cases approaching petabytes – from experiments, observations, and numerical simulation. However, the scientific community, along with the defense enterprise, has been a leader in generating and using large data sets for many years. The issue that arises with data at this scale is how to handle it – including sharing the data, ensuring data security, working with different data formats and structures, dealing with highly distributed data sources, and more.

After a two-year effort, the US National Research Council’s Committee on the Analysis of Massive Data has released its final report, entitled “Frontiers in Massive Data Analysis”. The effort draws on perspectives from statistics, databases, machine learning, and systems, with input from both industry and academia. “Frontiers in Massive Data Analysis” presents the committee’s work to make sense of the current state of analysis for mining massive data sets, to identify gaps in current practice, and to develop methods to fill those gaps. The committee thus examines the frontiers of research enabling the analysis of massive data, including data representation and methods for including humans in the data-analysis loop. The report includes the committee’s recommendations, details concerning the types of data that build into massive data, and information on the seven computational giants of massive data analysis.

A pre-publication version of the report is available for free download at: http://www.nap.edu/catalog.php?record_id=18374

2014 National Agenda for Digital Stewardship now available

Effective digital preservation is vital to maintaining the public records necessary for understanding and evaluating government actions, the scientific evidence base for replicating experiments, building on prior knowledge, and the preservation of the nation’s cultural heritage. Substantial work is needed to ensure that today’s valuable digital content remains accessible, useful, and comprehensible in the future – supporting a thriving economy, a robust democracy, and a rich cultural heritage. The 2014 National Agenda for Digital Stewardship integrates the perspective of dozens of experts and hundreds of institutions, convened through the Library of Congress, to provide funders and other executive decision-makers with insight into emerging technological trends, gaps in digital stewardship capacity, and key areas for development. It is meant to inform, rather than replace, individual organizational efforts, planning, goals, and opinions. Its aim is to offer inspiration and guidance and suggest potential directions and areas of inquiry for research and future work in digital stewardship.

The Agenda outlines the challenges and opportunities related to digital preservation activities in four broad areas: organizational roles, policies, and practices; digital content areas; infrastructure development; and research priorities. The sections are arranged from the most comprehensive and encompassing topics to sequentially drill down to more specific challenges and recommendations. The organizational roles, policies, and practices section discusses the overarching challenges the digital preservation community faces. The digital content areas section highlights specific kinds of content that need attention. The infrastructure development section identifies opportunities and makes specific recommendations for how the digital preservation community can respond. The research priorities section provides detailed recommendations to prioritize resource allocation towards areas of research that are critical to the advancement of both basic understanding and the effective practice of digital preservation.

The inaugural “National Agenda” was released in July 2013, in conjunction with the first day of the Digital Preservation 2013 meeting in Washington, DC. Over the coming year, the National Digital Stewardship Alliance (NDSA) will work to promote the Agenda and explore educational and collaborative opportunities with all interested parties.

Download the full document: http://www.digitalpreservation.gov/ndsa/documents/2014NationalAgenda.pdf

National Digital Stewardship Alliance: http://www.digitalpreservation.gov/ndsa/

Latest news from the Scalable Preservation Environments project

The SCAPE project will develop scalable services for planning and execution of institutional preservation strategies on an open source platform that orchestrates semi-automated workflows for large-scale, heterogeneous collections of complex digital objects. SCAPE will enhance the state of the art of digital preservation in three ways: by developing infrastructure and tools for scalable preservation actions; by providing a framework for automated, quality-assured preservation workflows; and by integrating these components with a policy-based preservation planning and watch system. These concrete project results will be validated within three large-scale Testbeds from diverse application areas. The SCAPE project is co-funded by the European Union and coordinated by AIT Austrian Institute of Technology GmbH.

The project has recently published the fifth edition of the SCAPE newsletter. This issue includes a report on the results of the second-year EC review, which took place on April 18-19, 2013 at the AIT office in Vienna. Fourteen members of the SCAPE consortium presented the project results over two days. During the first day, the four technical sub-projects reported on progress in the morning, and several live technical demonstrations were given in the afternoon. Highlights included a demonstration of round-trip content profiling (c3po), preservation planning (PLATO), and preservation plan execution with an integrated repository (RODA). On the second day, Takeup and Project Management were presented, during which the project staff reported on a large number of publications and events in year two, as well as on the project’s exemplary use of resources (13 on-time and mostly public deliverables).

The fifth edition of the newsletter also includes news on upcoming training events; a tool highlight with video of the Matchbox open source toolset that provides decision-making support for various quality assurance tasks of digital libraries; an interview with Krešimir Ðuretec of the Vienna University of Technology; more information on the 4C project (Collaboration to Clarify the Costs of Curation); and an overview of SCAPE’s presence at Open Repositories 2013, the Joint Conference on Digital Libraries 2013 (JCDL2013), and the International Conference on Preservation of Digital Objects (iPres 2013).

Read the latest SCAPE project newsletter at: http://eepurl.com/BqMv1

SCAPE project home: http://www.scape-project.eu/

Backstage Library Works completes TEI encoding of Leeser correspondence

Backstage Library Works and the University of Pennsylvania Libraries have collaborated to bring full-text search and discovery to the entire collection of Isaac Leeser’s correspondence.

The Gershwind-Bennett Isaac Leeser Digital Repository features access to the personal papers and publications of Isaac Leeser, widely regarded as the foremost American Jewish leader in antebellum America. The repository is the first major collaborative effort undertaken by the Jesselson-Kaplan American Genizah project.

The Leeser site contains digital images of over 2,100 original letters. The Penn Libraries cataloged the correspondence according to local standards and transcribed the letters for legibility. Backstage then encoded each letter using TEI to allow for sophisticated full-text search and discovery. Links to facsimiles of the original letters are embedded in the TEI documents.
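The workflow described above – transcribe each letter, encode it in TEI, and embed links to facsimiles of the originals – can be sketched in miniature. The snippet below builds a minimal TEI P5-style document for one transcribed letter. The element choices follow general TEI conventions only; the actual Leeser encoding schema is not published here, and the title, transcription text, and facsimile URL are illustrative placeholders.

```python
import xml.etree.ElementTree as ET

TEI_NS = "http://www.tei-c.org/ns/1.0"
ET.register_namespace("", TEI_NS)  # serialize with a default namespace, no prefix

def tag(name):
    """Qualify an element name with the TEI namespace."""
    return f"{{{TEI_NS}}}{name}"

def encode_letter(title, transcription, facsimile_url):
    # Root <TEI> element with a minimal header identifying the letter
    tei = ET.Element(tag("TEI"))
    header = ET.SubElement(tei, tag("teiHeader"))
    file_desc = ET.SubElement(header, tag("fileDesc"))
    title_stmt = ET.SubElement(file_desc, tag("titleStmt"))
    ET.SubElement(title_stmt, tag("title")).text = title
    # <facsimile> links the encoded text to an image of the original document
    facsimile = ET.SubElement(tei, tag("facsimile"))
    ET.SubElement(facsimile, tag("graphic"), url=facsimile_url)
    # The transcription goes in the text body, where it becomes full-text searchable
    text = ET.SubElement(tei, tag("text"))
    body = ET.SubElement(text, tag("body"))
    ET.SubElement(body, tag("p")).text = transcription
    return ET.tostring(tei, encoding="unicode")

xml_doc = encode_letter(
    "Letter to Isaac Leeser (illustrative sample)",
    "Dear Sir, I write concerning the matter we discussed...",
    "https://example.org/facsimile/leeser-0001.jpg",  # placeholder URL
)
```

A production encoding would carry far richer metadata (correspondent names, dates, repository identifiers) and page-level facsimile zones, but the basic pairing of a searchable transcription with linked images is the same.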

David McKnight, Director of the Rare Book and Manuscript Library, indicates why Penn Libraries chose Backstage for the project: “In our desire to move the Leeser project forward, we had funding which enabled us to consider outsourcing the TEI encoding. The Penn Libraries had some experience working with Backstage so we decided to partner with them on this project. We worked closely with the Backstage team to ensure our project requirements were well understood. It turned out to be a success”.

The Leeser Digital Repository increases access to the Penn Libraries’ Dropsie College Isaac Leeser Collection for scholars around the world. The Penn Libraries’ materials are also placed in a larger context among other collections of Isaac Leeser’s personal papers and publications, providing a more complete understanding of the man and his impact as an American Jewish leader in antebellum America.

Additionally, the project provides an example of how digital technologies and metadata enhance access to dispersed archival documents and produce dynamic forms of discovery through full-text searchability of transcribed hand-written documents and printed works.

The Leeser Repository can be accessed at: http://ubuwebser.cajs.upenn.edu/

Backstage Library Works: http://www.bslw.com/

Video summarizes findings from survey of special collections in the UK and Ireland

Special collections and archives play a key role in the future of research libraries. Significant challenges, however, face institutions that wish to capitalize on this value by fully leveraging and making available the rich content of special collections to support research, teaching, and community engagement.

To help address these concerns, OCLC Research and Research Libraries UK (RLUK) collaborated to survey the special collections practices of RLUK members and OCLC Research Library Partnership institutions in the UK and Ireland. OCLC Research Program Officers Jackie Dooley and Merrilee Proffitt provide an overview of the survey in a video available on YouTube.

Complete findings from the survey and the resulting recommendations are published in the report, “Survey of Special Collections and Archives in the United Kingdom and Ireland”, freely available from the OCLC Research web site. The evidence-based assessment of the state of special collections in the UK and Ireland detailed in the report provides institutional leaders, curators, special collections staff, and archivists with both evidence and inspiration to plan for the much needed and deserved transformation of special collections. It also provides a backdrop for continued discussion, both within special collections and the larger library enterprise, about the role of special collections in an evolved information economy. In addition, it provides both evidence and a basis for action as part of the RLUK’s Unique and Distinctive Collections workstrand and OCLC Research’s Mobilizing Unique Materials theme.

“Survey of Special Collections and Archives in the United Kingdom and Ireland” builds on the foundation established by “Taking Our Pulse: The OCLC Research Survey of Special Collections and Archives”, a report published in 2010 that provides a rigorous, evidence-based appraisal of the state of special collections in the US and Canada. Together, the survey findings published in both reports establish a baseline for comparing practices in the US and Canada with those in the UK and Ireland, and help to pave the way for building on mutual strengths and planning for joint activities where warranted.

View the video overview of the survey: http://youtu.be/2O5ciBnJdLY

Download the full survey at: http://www.oclc.org/research/publications/library/2013/2013-01r.html

“Taking Our Pulse” (2010 report) [pdf]: http://www.oclc.org/content/dam/research/publications/library/2010/2010-11.pdf

2013 Joint Conference on Digital Libraries keynote speakers videos available

Three keynote speeches from the 2013 Joint Conference on Digital Libraries, which took place in Indianapolis on July 22-26, are now available on YouTube.

Clifford Lynch, Executive Director of CNI (Coalition for Networked Information), in “Building Social Scale Information Infrastructure: Challenges of Coherence, Interoperability and Priority”, addresses the changing nature of digital libraries, the shift to very large-scale systems, and the challenges of coherence and prioritization that result from this shift.

Jill Cousins, Executive Director of the Europeana Foundation, in “Why Europeana”, covers the development and current state of the Europeana project (including a look at important late-breaking developments).

David DeRoure, Professor of e-Research and Director of the interdisciplinary Oxford e-Research Centre, in “Social Machines of Science and Scholarship”, gives a wide-ranging talk on data-intensive scholarship and its implications for the future of scholarly communication systems.

Clifford Lynch: http://www.youtube.com/watch?v=71bE2y6i__M

Jill Cousins: http://www.youtube.com/watch?v=aaqJJ3GyazY

David DeRoure: http://www.youtube.com/watch?v=p_lFtP92GZI

1.7 Million Titles from HathiTrust Digital Library now live on DPLA Homepage

From the DPLA blog, September 3, 2013: “At the start of the summer we announced the introduction of the HathiTrust Digital Library as a DPLA Content Hub. Now, with the summer winding down, we’re excited to share that 1.7 million metadata records associated with almost 3.5 million of HathiTrust’s freely available books, journals, government documents, and more are now accessible on dp.la and through the DPLA Application Programming Interface (API). That doubles the DPLA’s offerings from 2.4 million records in April to almost 4.5 million today”.

In June, the HathiTrust Digital Library announced that it would partner with the recently launched Digital Public Library of America (DPLA) to expand discovery and use of HathiTrust’s public domain and other openly available content.

DPLA provides an online portal to freely available digital material held by libraries, archives, and museums across the US. By offering a unified discovery point for these disparate collections, DPLA aims to make readily available to the public the words, images, sounds, and objects of America’s shared cultural heritage.

“HathiTrust’s joining the Digital Public Library of America more than doubles the size of our unified collection, and – as so many have asked for – fills it with millions of books. We could not be more delighted. Over the last five years, HathiTrust has built an incredible digital infrastructure to store the scanned holdings of its many university and library partners, and we in turn look forward to providing a large general audience for these valuable works, and new pathways into them”, said Dan Cohen, DPLA’s Executive Director.

According to HathiTrust Executive Director John Wilkin, the partnership reflects the complementary nature of the two organizations. “The first priority of HathiTrust has always been preservation”, he said. “But to fulfill the preservation mission, we must provide access: content that can’t be found and used risks being forgotten”. Wilkin stressed that HathiTrust will continue to enhance its own discovery and access platform, first launched in 2008. But DPLA puts HathiTrust’s collection before a broader audience, alongside innovative search and use tools, including timelines, maps, and a growing number of apps.

Of HathiTrust’s nearly 11 million volumes, the metadata records associated with the almost 3.5 million that are freely available will be accessible on the web at dp.la, and through the DPLA API, making HathiTrust a DPLA “content hub”. (The digitized volumes themselves will continue to reside in HathiTrust.) The partnership makes HathiTrust the single largest DPLA content hub, in the company of institutions such as the Smithsonian, the National Archives, the New York Public Library, and many others.
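The paragraph above notes that the HathiTrust records are reachable through the DPLA API as well as the dp.la portal. As a rough sketch of what such a query might look like, the snippet below constructs a DPLA v2 item-search URL filtered to a single provider. The `provider.name` filter field, the `page_size` parameter, and the placeholder API key are assumptions that should be verified against the current DPLA API documentation; a real (free) API key is required to get results back.

```python
from urllib.parse import urlencode

# DPLA v2 item-search endpoint (per the DPLA API documentation at the time)
DPLA_API_BASE = "https://api.dp.la/v2/items"

def build_dpla_query(search_term, provider=None, api_key="YOUR_API_KEY", page_size=10):
    """Build a DPLA item-search URL.

    The 'provider.name' filter field and other parameter names are
    illustrative assumptions; check them against the live DPLA API docs.
    """
    params = {"q": search_term, "api_key": api_key, "page_size": page_size}
    if provider:
        # Restrict results to one content hub, e.g. HathiTrust
        params["provider.name"] = provider
    return DPLA_API_BASE + "?" + urlencode(params)

# Example: search the newly added HathiTrust records for theses
url = build_dpla_query("theses", provider="HathiTrust")
```

Fetching that URL (with a valid key) would return a JSON envelope of matching item records; the digitized volumes themselves, as the article notes, continue to reside in HathiTrust.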

“DPLA, like HathiTrust, was founded on the belief that digital collections in aggregate become much more valuable than the sum of their parts”, Wilkin said. This shared vision was a strong incentive to overcome barriers to the partnership. The HathiTrust metadata will be contributed under the terms of a Creative Commons “CC0” license, and Wilkin cites the support of OCLC, the worldwide library cooperative, for the contribution of records possibly derived from its WorldCat database. Sandy Yee, Chair of the OCLC Board of Trustees, explains that DPLA’s Data Use Best Practices, which request that users provide attribution to metadata providers, are in keeping with OCLC community data norms. Yee said, “We are very pleased to support the discovery of this rich aggregation of freely available texts via the DPLA. Their work and that of HathiTrust amplifies and extends the efforts of the thousands of library contributors to the OCLC cooperative”.

Search the newly added HathiTrust content at: http://dp.la/search?partner[]=HathiTrust

The Digital Public Library of America: http://dp.la/

OpenStax College free books adopted by almost 300 educational institutions

Free textbook publisher OpenStax College announced in August that nearly 300 educational institutions on four continents will use OpenStax textbooks for the coming school year.

“Our adoptions have almost doubled in the past four months, and we estimate we will save about 40,000 students more than $3.7 million in the coming school year”, said Richard Baraniuk, founding director of the Rice University-based publisher.

OpenStax College provides free, peer-reviewed textbooks. Its growing catalog includes titles for five of the most-attended introductory college courses – physics, sociology, anatomy and both majors and non-majors biology. OpenStax Colleges’ free books have been accessed online by more than 1.7 million people and downloaded more than 170,000 times since June 2012. “Word-of-mouth endorsements are really spurring adoptions”, said David Harris, editor-in-chief of OpenStax College. “That’s great news for students by helping reduce their college costs while giving them access to high quality, peer-reviewed textbooks”.

OpenStax College exceeded its first-year targets for both usage and market share. “One of the biggest lessons for us during this first year has been the extraordinary level of interest from faculty”, said Baraniuk, Rice’s Victor E. Cameron Professor of Engineering. “We knew students wanted these books, but we also found that many faculty want high-quality, affordable alternatives. That’s a significant driver for us”.

OpenStax College’s sixth title, Introductory Statistics, will debut in October, and introductory texts in five more subjects – precalculus, chemistry, economics, US history and psychology – will be added by 2015. OpenStax College has also added a second title, Introduction to Sociology, to its Apple iBooks catalog. Its first iBook, College Physics, debuted in November. iBooks versions of OpenStax titles have a nominal cost of $4.99 and include interactive graphics, videos, quizzes, demos and other immersive features that are not included in the free PDF version of the book.

“While many people want and need the free versions of our books, others want something more”, Harris said. “We’re committed to providing our books in as many formats as possible in order to pass on more savings and educational opportunities to students”. Harris said OpenStax College plans to make iBooks versions of all its titles. Low-cost print versions are also available for faculty and students who prefer a printed book. Harris said about 4,000 printed books were sold last year.

OpenStax College launched in February 2012 with a venture philanthropic model to offer free, high-quality, peer-reviewed, full-color textbooks for the 25 most heavily attended college courses in the nation. Its books are free online for everyone. OpenStax College is a non-profit initiative of Rice University and is made possible by the generous support of the William and Flora Hewlett Foundation, the Laura and John Arnold Foundation, the Bill and Melinda Gates Foundation, the 20 Million Minds Foundation, the Maxfield Foundation, the Calvin K. Kazanjian Foundation and the Leon Lowenstein Foundation.

For more information: http://openstaxcollege.org

Scientific knowledge navigation tools available free from AQnowledge

Johannes (Jan) Velterop, Managing Director of AQnowledge, Semantics for Science, recently invited feedback from researchers, students, and librarians on the unique features of the Utopia Documents semantic scientific interactive PDF-reader and the semantic AQnowledge Bookmarklet. These are free tools that enable researchers and students to navigate knowledge easily even when the scientific literature (or any other scientific content) being studied doesn’t itself provide link-outs or other navigation tools. Although these tools are currently being used by some publishers, they are completely publisher-independent and publisher-agnostic, giving researchers and students the additional benefits of opportunistic discovery and serendipity from whichever scientific content they are reading. The tools are optimized for the life and biomedical sciences, including biochemistry. Utopia Documents is available for Mac as well as Windows and Linux, and the AQnowledge Bookmarklet works in any browser.

The tools can be freely downloaded from the AQnowledge web site and freely shared. Although Utopia Documents invites users to register upon installation, this is not a requirement.

In related news, a trial of enrichment of biomedical articles in Elsevier’s Science Direct with the AQnowledge scientific laboratory products links box has been concluded successfully. AQnowledge has signed a new contract with Elsevier to supply scientific product links for display alongside appropriate abstracts and full-text articles in Science Direct, as a service to its readers. The AQnowledge scientific product links are now available not only at Science Direct but also at CiteULike, SciBite, Utopia Documents, and BioMed Central. Availability at more publishers and scientific sites will be announced soon.

Download the knowledge navigation tools at: http://aqnowledge.com/aqnowledge-semantic-knowledge-navigation-overview.html

AQnowledge home: http://aqnowledge.com/

Free user experience mapping guide from Adaptive Path

Adaptive Path is an experience design firm with studios in San Francisco and Austin. The firm helps companies create products and services that deliver great experiences for their customers. Adaptive Path was one of the first design firms focused on user experience, and over the last decade has done much to forward the practice and expose a wider audience to the concept.

In a recent posting on its blog, Adaptive Path released its “Guide to Experience Mapping” for free download. From the blog:

“In the guide, we provide a succinct overview of the process of mapping experiences in collaboration with your organization. Why collaborative? We firmly believe that the activity of mapping, not the artifact of the map, is the crucial ingredient to ensure that a better understanding of your customers will lead to change”.

An experience map is a strategic tool for capturing and presenting key insights into the complex customer interactions that occur across experiences with a product, service, or ecosystem. At the heart of an experience map lies the customer journey model, an archetypal journey created from an aggregate of all customers going from point A to point B as they attempt to achieve a goal or satisfy a need. The activity of mapping builds knowledge and consensus across teams and stakeholders, and the map as artifact allows you to create and support better customer experiences. In short, experience mapping is a journey that can involve and impact your entire organization.

Download the “Guide to Experience Mapping”: http://mappingexperiences.com/

Read more about Adaptive Path’s user experience design: http://adaptivepath.com/ideas
