Web archiving

Web archiving is the process of collecting portions of the World Wide Web to ensure the information is preserved in an archive for future researchers, historians, and the public. The largest web archiving organization based on a bulk crawling approach is the Internet Archive, which strives to maintain an archive of the entire Web. The International Web Archiving Workshop (IWAW), begun in 2001, has provided a platform to share experiences and exchange ideas. The later founding of the International Internet Preservation Consortium (IIPC), in 2003, has greatly facilitated international collaboration in developing standards and open source tools for the creation of web archives.

These developments, and the growing portion of human culture created and recorded on the web, combine to make it inevitable that more and more libraries and archives will have to face the challenges of web archiving. National libraries, national archives, and various consortia of organizations are also involved in archiving culturally important Web content. Commercial web archiving software and services are also available to organizations that need to archive their own web content for corporate heritage, regulatory, or legal purposes.

Collecting the web

Web archivists generally archive various types of web content including HTML web pages, style sheets, JavaScript, images, and video. They also archive metadata about the collected resources such as access time, MIME type, and content length. This metadata is useful in establishing authenticity and provenance of the archived collection.
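
Because that metadata is central to establishing provenance, even a minimal capture routine records it alongside the payload. The following sketch, using only the Python standard library, fetches a resource and notes its access time, MIME type, and content length; the URL and the output layout are illustrative, not any particular tool's format:

    import json
    import urllib.request
    from datetime import datetime, timezone

    def capture(url):
        """Fetch a resource and return its bytes plus archival metadata."""
        access_time = datetime.now(timezone.utc).isoformat()
        with urllib.request.urlopen(url) as resp:
            body = resp.read()
            mime_type = resp.headers.get_content_type()
            status = resp.status
        metadata = {
            "url": url,
            "access_time": access_time,   # when the capture happened
            "mime_type": mime_type,       # from the Content-Type header
            "content_length": len(body),  # actual bytes received
            "status": status,
        }
        return body, metadata

    if __name__ == "__main__":
        body, meta = capture("https://example.com/")
        print(json.dumps(meta, indent=2))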

Methods of collection

Remote harvesting

The most common web archiving technique uses web crawlers to automate the process of collecting web pages. Web crawlers typically access web pages in the same manner that users with a browser see the Web, and therefore provide a comparatively simple method of remotely harvesting web content. Examples of web crawlers used for web archiving include Heritrix, HTTrack, and Wget.
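
To make the harvesting process concrete, here is a minimal breadth-first crawler sketch using only the Python standard library. It stays within one host and stops after a page limit; production archival crawlers such as Heritrix add politeness delays, robots.txt handling, and WARC output. The seed URL and page limit are illustrative:

    from collections import deque
    from html.parser import HTMLParser
    from urllib.parse import urljoin, urlparse
    from urllib.request import urlopen

    class LinkParser(HTMLParser):
        """Collects the href targets of anchor tags."""
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                href = dict(attrs).get("href")
                if href:
                    self.links.append(href)

    def crawl(seed, max_pages=50):
        host = urlparse(seed).netloc
        queue, seen, pages = deque([seed]), {seed}, {}
        while queue and len(pages) < max_pages:
            url = queue.popleft()
            try:
                with urlopen(url) as resp:
                    html = resp.read().decode("utf-8", errors="replace")
            except OSError:
                continue                  # skip unreachable resources
            pages[url] = html             # the harvested copy
            parser = LinkParser()
            parser.feed(html)
            for href in parser.links:
                absolute = urljoin(url, href)
                if urlparse(absolute).netloc == host and absolute not in seen:
                    seen.add(absolute)
                    queue.append(absolute)
        return pages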

There are various free services which may be used to archive web resources "on demand" using web crawling techniques. These services include the Wayback Machine and WebCite.

Database archiving

Database archiving refers to methods for archiving the underlying content of database-driven websites. It typically requires the extraction of the database content into a standard schema, often using XML. Once stored in that standard format, the archived content of multiple databases can then be made available using a single access system. This approach is exemplified by the DeepArc and Xinq tools developed by the Bibliothèque nationale de France and the National Library of Australia respectively. DeepArc enables the structure of a relational database to be mapped to an XML schema, and the content exported into an XML document. Xinq then allows that content to be delivered online. Although the original layout and behavior of the website cannot be preserved exactly, Xinq does allow the basic querying and retrieval functionality to be replicated.
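
The following sketch illustrates the general idea rather than DeepArc's actual schema: rows from a relational table are exported into generic XML elements, from which a delivery tool could rebuild query access. The database path and table name are invented for illustration:

    import sqlite3
    import xml.etree.ElementTree as ET

    def export_table_to_xml(db_path, table):
        """Serialize every row of one table into a generic XML document."""
        conn = sqlite3.connect(db_path)
        conn.row_factory = sqlite3.Row
        root = ET.Element("table", name=table)
        # NOTE: the table name is interpolated directly, so it must be
        # trusted; a real exporter would validate it against the catalog.
        for row in conn.execute(f"SELECT * FROM {table}"):
            record = ET.SubElement(root, "record")
            for column in row.keys():
                field = ET.SubElement(record, "field", name=column)
                field.text = "" if row[column] is None else str(row[column])
        conn.close()
        return ET.tostring(root, encoding="unicode")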

Transactional archiving

Transactional archiving is an event-driven approach, which collects the actual transactions which take place between a web server and a web browser. It is primarily used as a means of preserving evidence of the content which was actually viewed on a particular website, on a given date. This may be particularly important for organizations which need to comply with legal or regulatory requirements for disclosing and retaining information.

A transactional archiving system typically operates by intercepting every HTTP request to, and response from, the web server, filtering each response to eliminate duplicate content, and permanently storing the responses as bitstreams.
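
One way to sketch such a system in Python is as WSGI middleware: every response the server actually sends is hashed, and only previously unseen bitstreams are written out. The class name, directory, and archive layout are illustrative, not any particular product's design; a production system would also record the request, headers, and a timestamp:

    import hashlib
    import os

    class TransactionalArchiver:
        """WSGI middleware that stores each distinct response bitstream."""

        def __init__(self, app, archive_dir="archive"):
            self.app = app
            self.archive_dir = archive_dir
            os.makedirs(archive_dir, exist_ok=True)
            self.seen = set()   # digests of already-stored responses

        def __call__(self, environ, start_response):
            body = b"".join(self.app(environ, start_response))
            digest = hashlib.sha256(body).hexdigest()
            if digest not in self.seen:          # filter out duplicates
                self.seen.add(digest)
                path = os.path.join(self.archive_dir, digest)
                with open(path, "wb") as f:      # store as a bitstream
                    f.write(body)
            return [body]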

Difficulties and limitations

Crawlers

Web archives which rely on web crawling as their primary means of collecting the Web are influenced by the difficulties of web crawling:

  • The robots exclusion protocol may request that crawlers not access portions of a website. Some web archivists may ignore the request and crawl those portions anyway.
  • Large portions of a website may be hidden in the Deep Web. For example, a results page behind a web form lies in the Deep Web because most crawlers cannot follow a link to it.
  • Crawler traps (e.g., calendars) may cause a crawler to download an infinite number of pages, so crawlers are usually configured to limit the number of dynamic pages they crawl. (Both the robots check and the trap limit are sketched after this list.)
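
The first and third safeguards can be sketched with the Python standard library: urllib.robotparser honours the robots exclusion protocol, and a simple counter caps how many dynamic URLs (those with query strings) are fetched, so calendar-style traps cannot run forever. The user-agent string and limit are illustrative:

    from urllib.parse import urlparse
    from urllib.robotparser import RobotFileParser

    MAX_DYNAMIC_PAGES = 1000      # illustrative trap limit

    def make_fetch_filter(site_root, user_agent="ExampleArchiveBot"):
        robots = RobotFileParser(site_root.rstrip("/") + "/robots.txt")
        robots.read()             # fetch and parse the site's robots.txt
        dynamic_count = 0

        def should_fetch(url):
            nonlocal dynamic_count
            if not robots.can_fetch(user_agent, url):
                return False      # excluded by the robots protocol
            if urlparse(url).query:
                dynamic_count += 1
                if dynamic_count > MAX_DYNAMIC_PAGES:
                    return False  # probably a crawler trap
            return True

        return should_fetch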

However, a native format web archive, i.e., a fully browsable web archive with working links, media, and so on, is only really possible using crawler technology.

The Web is so large that crawling a significant portion of it requires substantial technical resources, and it changes so fast that portions of a website may change before a crawler has even finished crawling it.

General limitations

Some web servers are configured to return different pages to web archiver requests than they would in response to regular browser requests.[2] This is typically done to fool search engines into directing more user traffic to a website, to avoid accountability, or to provide enhanced content only to those browsers that can display it.

Not only must web archivists deal with the technical challenges of web archiving, they must also contend with intellectual property laws. Peter Lyman[3] states that "although the Web is popularly regarded as a public domain resource, it is copyrighted; thus, archivists have no legal right to copy the Web". However, national libraries in some countries have a legal right to copy portions of the web under an extension of legal deposit.

Some publicly accessible private non-profit web archives, such as WebCite, the Internet Archive, and the Internet Memory Foundation, allow content owners to hide or remove archived content that they do not want the public to access. Other web archives are only accessible from certain locations or have regulated usage. WebCite cites a lawsuit against Google's caching, which Google won.[4]

Aspects of web curation

Web curation, like any digital curation, entails:

  • Certification of the trustworthiness and integrity of the collection content
  • Collecting verifiable Web assets
  • Providing Web asset search and retrieval
  • Semantic and ontological continuity and comparability of the collection content

Thus, besides the discussion of methods of collecting the Web, methods of providing access, certification, and organization must be included. A set of popular tools addresses these curation steps:

A suite of tools for web curation by the International Internet Preservation Consortium:

  • Heritrix - collecting web assets
  • NutchWAX - searching web archive collections
  • Wayback (open source Wayback Machine) - browsing web archive collections using NutchWAX
  • Web Curator Tool - selection and management of the web harvesting workflow

Other open source tools for manipulating web archives:

  • WARC Tools - for creating, reading, parsing, and manipulating WARC archives programmatically (a reading example follows this list)
  • Search Tools - for indexing and searching full-text and metadata within web archives
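
As a concrete example of programmatic manipulation, the sketch below reads a WARC file with the open source warcio Python library (pip install warcio), a different project from the WARC Tools suite named above; the filename is illustrative:

    from warcio.archiveiterator import ArchiveIterator

    # Iterate over the records of a (possibly gzipped) WARC file and
    # print the target URI and content type of each archived response.
    with open("example.warc.gz", "rb") as stream:
        for record in ArchiveIterator(stream):
            if record.rec_type == "response":
                uri = record.rec_headers.get_header("WARC-Target-URI")
                ctype = record.http_headers.get_header("Content-Type")
                print(uri, ctype)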

Free but not open source tools also exist:

  • The WARC Software Development Kit (WSDK) is a set of simple, compact, and highly optimized Erlang modules for manipulating (creating/reading/writing) the WARC ISO 28500:2009 file format.

References

  2. ^
  3. ^ Lyman (2002)
  4. ^ WebCite FAQ, webcitation.org

External links

  • International Internet Preservation Consortium (IIPC) - International consortium whose mission is to acquire, preserve, and make accessible knowledge and information from the Internet for future generations
  • International Web Archiving Workshop (IWAW) - Annual workshop that focuses on web archiving
  • National Library of Australia, Preserving Access to Digital Information (PADI)
  • Library of Congress - Web Archiving
  • Web archiving bibliography - Lengthy list of web-archiving resources
  • Julien Masanès, Bibliothèque Nationale de France - Towards continuous web archiving
  • Comparison of web archiving services
  • List of blogs about web archiving, 2015