Dead link

Link rot (or linkrot), also known as link death or link breaking, is an informal term for the process by which hyperlinks (either on individual websites or the Internet in general) point to web pages, servers or other resources that have become permanently unavailable. The phrase also describes the effects of failing to update out-of-date web pages that clutter search engine results. A link that does not work any more is called a broken link, dead link or dangling link.

Causes

A link may become broken for several reasons. The simplest and most common is that the website it points to no longer exists. The most common result of following a dead link is a 404 error, which indicates that the web server responded but the specific page could not be found.
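The basic check described above can be automated: request the URL and classify the HTTP status code. A minimal sketch using only Python's standard library (the classification labels are illustrative, not a standard taxonomy):

```python
from urllib import request, error

def classify_status(status):
    """Map an HTTP status code to a rough link-health label."""
    if 200 <= status < 300:
        return "ok"         # server found and served the resource
    if status == 404:
        return "broken"     # server responded, but the page is gone
    return "suspect"        # other errors (403, 410, 5xx, ...) need a closer look

def check_link(url, timeout=10):
    """Fetch a URL and classify it; DNS or connection failures count as broken."""
    try:
        req = request.Request(url, method="HEAD")
        with request.urlopen(req, timeout=timeout) as resp:
            return classify_status(resp.status)
    except error.HTTPError as e:        # server answered with an error status
        return classify_status(e.code)
    except (error.URLError, OSError):   # DNS failure, refused connection, timeout
        return "broken"
```

Note that `urlopen` follows redirects automatically, so a page that has merely moved (301/302) will still report `"ok"` once the new location answers.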

Some news sites contribute to the problem of link rot by keeping only recent news articles freely accessible, then removing them or moving them to a paid subscription area. This causes a heavy loss of supporting links in sites discussing newsworthy events and using news sites as references.

Another type of dead link occurs when the server that hosts the target page stops working or relocates to a new domain name. In this case the browser may return a DNS error, or it may display a site unrelated to the content sought. The latter can occur when a domain name is allowed to lapse and is subsequently re-registered by another party. Domain names acquired in this manner are attractive to those who wish to exploit the stream of unsuspecting visitors, inflating hit counters and PageRank.

A link might also be broken because of some form of blocking, such as a content filter or firewall. Dead links, commonplace on the Internet, can also arise on the authoring side, when website content is assembled, copied, or deployed without verifying the link targets, or is simply not kept up to date. Dead links can also occur when a website without clean URLs is reorganized.[1]

Prevalence

The 404 "Not Found" response is familiar to even the occasional web user. A number of studies have examined the prevalence of link rot on the web, in academic literature, and in digital libraries. In a 2003 experiment, Fetterly et al. discovered that about one link out of every 200 disappeared each week from the internet. McCown et al. (2005) discovered that half of the URLs cited in D-Lib Magazine articles were no longer accessible 10 years after publication, and other studies have shown link rot in academic literature to be even worse (Spinellis, 2003, Lawrence et al., 2001). Nelson and Allen (2002) examined link rot in digital libraries and found that about 3% of the objects were no longer accessible after one year.

Detection

Detecting link rot for a given URL is difficult using automated methods. If a URL returns an HTTP 200 (OK) response, it may be considered accessible, but the contents of the page may have changed and may no longer be relevant. Some web servers also return a soft 404: an error page served with a 200 (OK) response instead of the 404 that would signal the URL is no longer accessible. Bar-Yossef et al. (2004)[2] developed a heuristic for automatically discovering soft 404s.
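The idea behind such a heuristic can be sketched as follows: fetch the suspect URL and also a deliberately random, almost certainly nonexistent URL on the same host; if the server answers 200 for both and the two pages look alike, the suspect page is probably a soft 404. A rough sketch with a crude token-overlap similarity (the 0.9 threshold is an illustrative assumption, not the value from the paper):

```python
import random
import string

def similarity(a: str, b: str) -> float:
    """Jaccard similarity over whitespace-separated tokens."""
    ta, tb = set(a.split()), set(b.split())
    if not ta and not tb:
        return 1.0
    return len(ta & tb) / len(ta | tb)

def random_sibling(url: str) -> str:
    """A random path on the same host that almost certainly does not exist."""
    junk = "".join(random.choices(string.ascii_lowercase, k=24))
    return url.rstrip("/").rsplit("/", 1)[0] + "/" + junk

def looks_like_soft_404(page_body: str, junk_body: str,
                        threshold: float = 0.9) -> bool:
    """If the real page's body resembles what the server returns for a junk
    URL, it is probably an error page served with a 200 status."""
    return similarity(page_body, junk_body) >= threshold
```

In practice one would fetch `url` and `random_sibling(url)` with an HTTP client and pass the two bodies to `looks_like_soft_404`.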

Combating link rot

Because dead links reflect poorly both on the site that links and on the site linked to, multiple solutions are available to tackle them: some work to prevent them in the first place, others try to resolve them after they have occurred. Several tools have been developed to help combat link rot.

Server side

  • Avoiding unmanaged hyperlink collections
  • Avoiding links to pages deep in a website ("deep linking")
  • Using redirection mechanisms (e.g. "301: Moved Permanently") to automatically refer browsers and crawlers to the new location of a URL
  • Content management systems may offer built-in link management, e.g. updating links automatically when content is changed or moved on the site.
  • WordPress guards against link rot by replacing non-canonical URLs with their canonical versions.[3]
  • IBM's Peridot attempts to automatically fix broken links.
  • Permalinking prevents broken links by guaranteeing that the content will not move for the foreseeable future. Another form of permalinking is linking to a permalink that redirects to the actual content, so that even if the real content is moved, links pointing to the permalink stay intact.
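The redirection mechanism in the list above can be sketched server-side: a handler maps retired paths to their new locations and answers with "301: Moved Permanently". A minimal Python sketch (the paths in the mapping are hypothetical):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical mapping of retired URLs to their new homes.
MOVED = {
    "/old/article.html": "/articles/link-rot",
}

def redirect_for(path):
    """Return (status, location) for a request path: 301 with the new
    location if the page has moved, 404 if we know nothing about it."""
    if path in MOVED:
        return 301, MOVED[path]
    return 404, None

class Redirector(BaseHTTPRequestHandler):
    def do_GET(self):
        status, location = redirect_for(self.path)
        self.send_response(status)
        if location:
            self.send_header("Location", location)
        self.end_headers()
```

Running `HTTPServer(("localhost", 8080), Redirector).serve_forever()` would serve the redirects; browsers and crawlers that receive the 301 update their records to the new URL.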

User side

  • The Linkgraph widget gets the URL of the correct page based upon the old broken URL by using historical location information.
  • The Google 404 Widget employs Google technology to 'guess' the correct URL, and also provides the user a Google search box to find the correct page.
  • When a user receives a 404 response, the Google Toolbar attempts to assist the user in finding the missing page.[4]
  • One service[5] gathers and ranks alternate URLs for a broken link using Google Cache, the Internet Archive, and user submissions.[6] Typing its address to the left of a broken link in the browser's address bar and pressing enter loads a ranked list of alternate URLs or (depending on user preference) forwards immediately to the best one.[7]

Web archiving

To combat link rot, web archivists are actively engaged in collecting the Web, or particular portions of it, and ensuring the collection is preserved in an archive, such as an archive site, for future researchers, historians, and the public. The largest web archiving organization is the Internet Archive, whose goal is to maintain an archive of the entire Web. It takes periodic snapshots of pages, which can later be accessed for free and without registration via the Wayback Machine, either by typing in the URL or automatically through browser extensions.[8] National libraries, national archives and various consortia of organizations are also involved in archiving culturally important Web content.

Individuals may also use a number of tools that allow them to archive web resources that may go missing in the future:

  • WebCite, a tool specifically for scholarly authors, journal editors and publishers to permanently archive "on-demand" and retrieve cited Internet references (Eysenbach and Trudel, 2005).
  • Archive-It, a subscription service that allows institutions to build, manage and search their own web archive
  • Some social bookmarking websites[9][10] allow users to make online clones of any web page, creating a copy at an independent URL that remains online even if the original page goes down.
  • Google keeps a text-based cache (temporary copy) of the pages it has crawled, which can be used to read the information of recently removed pages. However, unlike archiving services, cached pages are not stored permanently.
  • The Wayback Machine, at the Internet Archive,[11] is a free website that archives old web pages. It does not archive websites whose owners have stated they do not want their website archived.
  • An analog of WebCite that also saves images and can archive pages from Web 2.0 sites (like Twitter).
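Archived copies like those above can be located programmatically: the Wayback Machine exposes an availability API at `https://archive.org/wayback/available?url=…` that returns JSON describing the closest archived snapshot. A sketch, assuming the response shape documented for that API:

```python
import json
from urllib import parse, request

API = "https://archive.org/wayback/available?url="

def closest_snapshot(payload: dict):
    """Extract the closest archived snapshot URL from an availability-API
    response dict, or None if nothing is archived."""
    snap = payload.get("archived_snapshots", {}).get("closest", {})
    return snap.get("url") if snap.get("available") else None

def find_archived_copy(url: str, timeout=10):
    """Query the Wayback Machine for an archived copy of a (possibly dead) URL."""
    with request.urlopen(API + parse.quote(url, safe=""), timeout=timeout) as resp:
        return closest_snapshot(json.load(resp))
```

A tool that detects a dead link could fall back to `find_archived_copy` and present the snapshot to the reader instead of a 404.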

Authors citing URLs

A number of studies have shown how widespread link rot is in academic literature (see the studies cited above). Authors of scholarly publications have developed best practices for combating link rot in their work:

  • Avoiding URL citations that point to resources on a researcher's personal home page (McCown et al., 2005)
  • Using Persistent Uniform Resource Locators (PURLs) and digital object identifiers (DOIs) whenever possible
  • Using web archiving services (e.g. WebCite) to permanently archive and retrieve cited Internet references (Eysenbach and Trudel, 2005).
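Both PURLs and DOIs in the list above work by indirection: the citation carries a stable identifier, and a resolver redirects to the current location, which the publisher can update whenever the content moves. Constructing the resolver URL from a DOI is mechanical (the DOI below is the illustrative example form, not a real citation):

```python
def doi_url(doi: str) -> str:
    """Build the canonical resolver URL for a DOI. The doi.org resolver
    answers with a redirect to the publisher's current copy, so the
    citation survives even if the article's hosting URL changes."""
    return "https://doi.org/" + doi.strip()
```

For example, `doi_url("10.1000/xyz123")` yields `https://doi.org/10.1000/xyz123`; only the resolver's mapping, not the published citation, needs updating when the article moves.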

External links

  • Future-Proofing Your URIs
  • "Fighting Linkrot", Jakob Nielsen's Alertbox, June 14, 1998.
  • Internet Archive and search engine caches
  • User-contributed databases of moved URLs
  • W3C Link Checker
  • Apache module that reports broken links.
  • A service that archives a copy of the referenced content and generates a link to an unalterable hosted instance of the page.