Consumer reviews and reports on scam companies, bad products and services
vivint
Vivint APX ALARM Class Action Lawsuit for Vivint Customers Provo, Utah
19th of Jul, 2011 by User485425
First of all, I cannot believe I haven't found any class action lawsuits against Vivint / APX; I see many complaints but no actual claims at this time. I did enjoy the services of Vivint (which was APX two years ago, at my initial contract signing). When my 2-year contract ended just a month ago, I decided I couldn't afford the luxury of such a service, so I made sure to call the monitoring company 6 months in advance stating I wanted to cancel my account. Their answer was that I must wait before cancelling. Later I found that I had a 30-day window of time to cancel before the account would automatically turn into another 1-year contract. I missed my window of cancellation by about 10 days; their answer was, sorry, didn't you read the contract, we cannot change the renewal process.

I figure I will file a class action lawsuit against Vivint because I found it virtually impossible to pinpoint the specific time frame accepted to cancel my account. They will not work with me whatsoever. The only way I can be removed from the ongoing contract is to be sent to collections upon a delinquency of payment; I have been informed by Vivint that I am still obligated to pay for the renewed year term regardless. Enough said on this note... If you wish to be included within this class action lawsuit, feel free to email me at and subject the email something like C.A.L or Lawsuit. When I file the claim I will go back through the emails and send you the link where you can add yourself onto the claim.

My argument is going to be entrapment: the inability to close an account because of a specific window of time in which the company must receive and acknowledge a handwritten cancellation for it to be considered closure. Vivint informed me that they have the ability to accept a cancellation under certain circumstances, although it sounded like it had to be something very serious, like being in a coma or being prosecuted. It shouldn't take such a drastic measure to simply say I want out... Mark
Comments
4852 days ago by Turk
A web hosting service is a type of Internet hosting service that allows individuals and organizations to make their own website accessible via the World Wide Web. Web hosts are companies that provide space on a server they own or lease for use by their clients, as well as providing Internet connectivity, typically in a data center. Web hosts can also provide data center space and connectivity to the Internet for servers they do not own to be located in their data center; this is called colocation, or "housing" as it is commonly known in Latin America and France.
The scope of web hosting services varies greatly. The most basic is web page and small-scale file hosting, where files can be uploaded via File Transfer Protocol (FTP) or a Web interface. The files are usually delivered to the Web "as is" or with little processing. Many Internet service providers (ISPs) offer this service free to their subscribers. People can also obtain Web page hosting from other, alternative service providers. Personal web site hosting is typically free, advertisement-sponsored, or inexpensive. Business web site hosting often has a higher expense.
Single page hosting is generally sufficient only for personal web pages. A complex site calls for a more comprehensive package that provides database support and application development platforms (e.g. PHP, Java, Ruby on Rails, ColdFusion, and ASP.NET). These facilities allow the customers to write or install scripts for applications like forums and content management. For e-commerce, SSL is also highly recommended.
The host may also provide an interface or control panel for managing the Web server and installing scripts as well as other modules and service applications like e-mail. Some hosts specialize in certain software or services (e.g. e-commerce). They are commonly used by larger companies to outsource network infrastructure to a hosting company.
Reliability and uptime



[Image: Multiple racks of servers]
The availability of a website is measured by the percentage of a year in which the website is publicly accessible and reachable via the Internet. This is different from measuring the uptime of a system: uptime refers to the system itself being online, but it does not take into account whether the system can actually be reached, as in the event of a network outage.
The formula to determine a system's availability is relatively easy: total time = 365 days per year * 24 hours per day * 60 minutes per hour = 525,600 minutes per year. To calculate how many minutes of downtime your system may experience per year, subtract your uptime guarantee from 1 and multiply the result by the total time in a year.
In this example I'll use 99.99%: (1 - 0.9999) * 525,600 = 52.56 allowable minutes of downtime per year.
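As a quick sketch of the same arithmetic (a minimal TypeScript example; the function name is my own):

const MINUTES_PER_YEAR = 365 * 24 * 60; // 525,600 minutes per year

// Allowable downtime, in minutes per year, for a given availability guarantee.
function downtimeMinutesPerYear(availability: number): number {
  return (1 - availability) * MINUTES_PER_YEAR;
}

console.log(downtimeMinutesPerYear(0.9999)); // 52.56, the "four nines" row in the table below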
The following table shows the translation from a given availability percentage to the corresponding amount of time a system would be unavailable per year, month, or week.

Availability %            Downtime per year   Downtime per month*   Downtime per week
90% ("one nine")          36.5 days           72 hours              16.8 hours
95%                       18.25 days          36 hours              8.4 hours
97%                       10.96 days          21.6 hours            5.04 hours
98%                       7.30 days           14.4 hours            3.36 hours
99% ("two nines")         3.65 days           7.20 hours            1.68 hours
99.5%                     1.83 days           3.60 hours            50.4 minutes
99.8%                     17.52 hours         86.23 minutes         20.16 minutes
99.9% ("three nines")     8.76 hours          43.2 minutes          10.1 minutes
99.95%                    4.38 hours          21.56 minutes         5.04 minutes
99.99% ("four nines")     52.56 minutes       4.32 minutes          1.01 minutes
99.999% ("five nines")    5.26 minutes        25.9 seconds          6.05 seconds
99.9999% ("six nines")    31.5 seconds        2.59 seconds          0.605 seconds
* For monthly calculations, a 30-day month is used.
A hosting provider's SLAs may include a certain amount of scheduled downtime per year so that maintenance can be performed on the systems. This scheduled downtime is often excluded from the SLA timeframe and needs to be subtracted from the total time when availability is calculated. Depending on the verbiage of an SLA, if the availability of a system drops below the level guaranteed in the signed SLA, the hosting provider will often give a partial refund for the time lost.
Types of hosting



[Image: A typical server rack, commonly seen in colocation centres]
Internet hosting services can run Web servers.
Many large companies that are not Internet service providers also need a computer permanently connected to the web so they can send email, files, etc. to other sites. They may also use the computer as a website host to provide details of their goods and services to anyone interested, and to let visitors place orders online.
Free web hosting service: offered by different companies with limited services, sometimes supported by advertisements, and often limited when compared to paid hosting.
Shared web hosting service: one's website is placed on the same server as many other sites, ranging from a few to hundreds or thousands. Typically, all domains may share a common pool of server resources, such as RAM and the CPU. The features available with this type of service can be quite extensive. A shared website may be hosted with a reseller.
Reseller web hosting: allows clients to become web hosts themselves. Resellers may function, for individual domains, under any combination of these listed types of hosting, depending on whom they are affiliated with as a reseller. Resellers' accounts vary tremendously in size, from their own virtual dedicated server to a colocated server. Many resellers provide a nearly identical service to their provider's shared hosting plan and provide the technical support themselves.
Virtual Dedicated Server: also known as a Virtual Private Server (VPS), divides server resources into virtual servers, where resources can be allocated in a way that does not directly reflect the underlying hardware. VPSs are often allocated resources in a one-server-to-many-VPSs relationship; however, virtualisation may be done for a number of reasons, including the ability to move a VPS container between servers. The users may have root access to their own virtual space. Customers are sometimes responsible for patching and maintaining the server.
Dedicated hosting service: the user gets his or her own Web server and gains full control over it (user has root access for Linux/administrator access for Windows); however, the user typically does not own the server. Another type of Dedicated hosting is Self-Managed or Unmanaged. This is usually the least expensive for Dedicated plans. The user has full administrative access to the server, which means the client is responsible for the security and maintenance of his own dedicated server.
Managed hosting service: the user gets his or her own Web server but is not allowed full control over it (user is denied root access for Linux/administrator access for Windows); however, they are allowed to manage their data via FTP or other remote management tools. The user is disallowed full control so that the provider can guarantee quality of service by not allowing the user to modify the server or potentially create configuration problems. The user typically does not own the server. The server is leased to the client.
Colocation web hosting service: similar to the dedicated web hosting service, but the user owns the colo server; the hosting company provides physical space that the server takes up and takes care of the server. This is the most powerful and expensive type of web hosting service. In most cases, the colocation provider may provide little to no support directly for their client's machine, providing only the electrical, Internet access, and storage facilities for the server. In most cases for colo, the client would have his own administrator visit the data center on site to do any hardware upgrades or changes.
Cloud hosting: a newer type of hosting platform that offers customers powerful, scalable and reliable hosting based on clustered, load-balanced servers and utility billing. A cloud-hosted website may be more reliable than alternatives, since other computers in the cloud can compensate when a single piece of hardware goes down. Also, local power disruptions or even natural disasters are less problematic for cloud-hosted sites, as cloud hosting is decentralized. Cloud hosting also allows providers (such as Amazon) to charge users only for the resources they consume, rather than a flat fee for the amount the user expects to use, or a fixed up-front hardware investment. On the other hand, the lack of centralization may give users less control over where their data is located, which could be a problem for users with data security or privacy concerns.
Clustered hosting: having multiple servers hosting the same content for better resource utilization. Clustered servers are a good solution for high-availability dedicated hosting, or for creating a scalable web hosting solution. A cluster may separate web serving from database hosting capability. (Web hosts often use clustered hosting for their shared hosting plans, since it simplifies managing many clients at scale.)
Grid hosting: this form of distributed hosting is when a server cluster acts like a grid and is composed of multiple nodes.
Home server: usually a single machine placed in a private residence that is used to host one or more web sites over a typically consumer-grade broadband connection. These can be purpose-built machines or, more commonly, old PCs. Some ISPs actively attempt to block home servers by disallowing incoming requests to TCP port 80 of the user's connection and by refusing to provide static IP addresses. A common way to attain a reliable DNS hostname is by creating an account with a dynamic DNS service; a dynamic DNS service will automatically change the IP address that a URL points to when the IP address changes.
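In rough outline, a dynamic DNS client is just a small program that watches the connection's public IP address and reports changes. A minimal TypeScript sketch, assuming a hypothetical update endpoint and hostname (api.ipify.org is a real public "what is my IP" service; fetch requires Node 18+ or a browser):

let lastIp = "";

async function syncIp(): Promise<void> {
  // Discover the current public IP address of this connection.
  const ip = (await (await fetch("https://api.ipify.org")).text()).trim();
  if (ip !== lastIp) {
    // Report the new address to the dynamic DNS provider (hypothetical URL).
    await fetch(`https://dyndns.example.net/update?hostname=myhome.example.net&ip=${ip}`);
    lastIp = ip;
  }
}

setInterval(syncIp, 5 * 60 * 1000); // re-check every five minutes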
Some specific types of hosting provided by web host service providers:
File hosting service: hosts files, not web pages
Image hosting service
Video hosting service
Blog hosting service
Pastebin
Shopping cart software
E-mail hosting service
Obtaining hosting

Web hosting is often provided as part of a general Internet access plan; there are many free and paid providers offering these types of web hosting.
A customer needs to evaluate the requirements of the application to choose what kind of hosting to use. Such considerations include database server software, scripting software, and operating system. Most hosting providers provide Linux-based web hosting which offers a wide range of different software. A typical configuration for a Linux server is the LAMP platform: Linux, Apache, MySQL, and PHP/Perl/Python. The web hosting client may want to have other services, such as email for their business domain, databases or multi-media services for streaming media. A customer may also choose Windows as the hosting platform. The customer still can choose from PHP, Perl, and Python but may also use ASP .Net or Classic ASP. Web hosting packages often include a Web Content Management System, so the end-user does not have to worry about the more technical aspects.
4852 days ago by Turk
An Internet hosting service is a service that runs Internet servers, allowing organizations and individuals to serve content to the Internet. There are various levels of service and various kinds of services offered.
A common kind of hosting is web hosting. Most hosting providers offer a combined variety of services. Web hosting services also offer e-mail hosting service, for example. DNS hosting service is usually bundled with domain name registration.
Web hosting technology has been causing some controversy lately, as Web.com claims, on the basis of its 19 patents, that it holds patent rights to some common hosting technologies, including the use of a web-based control panel to manage the hosting service. Hostopia, a large wholesale host, recently purchased a license to use that technology from Web.com for 10% of retail revenues. Web.com recently sued Go Daddy as well for similar patent infringement.[1]
Generic, yet rather powerful, kinds of Internet hosting provide a server where the clients can run anything they want (including web servers and other servers) and have Internet connections with good upstream bandwidth.
Types

Full-featured hosting
Full-featured hosting services include:
Dedicated hosting service, also called managed hosting service, where the hosting service provider owns and manages the machine, leasing full control to the client. Management of the server can include monitoring to ensure the server continues to work effectively, backup services, installation of security patches and various levels of technical support.
Virtual private server, in which virtualization technology is employed in order to allow multiple logical servers to run on a single physical server
Colocation facilities, which provide just the Internet connection, uninterruptible power and climate control, but let the client do his own system administration; this is the most expensive option
Cloud hosting, which can also be termed time-share or on-demand hosting, in which the user only pays for the system time and space used, and capacity can be quickly scaled up or down as computing requirements change.
Other
Limited or application-specific hosting services include:
Web hosting service
E-mail hosting service
DNS hosting service
Game servers
Wiki farms
Bandwidth cost

Internet hosting services include the required Internet connection; they may charge a flat rate per month or charge per bandwidth used — a common payment plan is to charge for the 95th percentile bandwidth.
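The 95th percentile method drops the top 5% of bandwidth samples, so short bursts are not billed. A minimal sketch in TypeScript, assuming the provider records one sample every five minutes (the function name is my own):

// Sort a month of 5-minute bandwidth samples and discard the top 5%;
// the highest remaining sample is the billable rate.
function billable95thPercentile(samplesMbps: number[]): number {
  const sorted = [...samplesMbps].sort((a, b) => a - b);
  const lastKept = Math.ceil(sorted.length * 0.95) - 1; // index of highest kept sample
  return sorted[lastKept];
}

const samples = [...Array(19).fill(5), 100]; // steady 5 Mbit/s with one brief spike
console.log(billable95thPercentile(samples)); // 5: the spike falls in the discarded top 5%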
4852 days ago by Turk
A domain name registrar is an organization or commercial entity, accredited by ICANN and a generic top-level domain (gTLD) registry to sell gTLDs, and/or by a country code top-level domain (ccTLD) registry to sell ccTLDs. It manages the reservation of Internet domain names in accordance with the guidelines of the designated domain name registries and offers such services to the public.
History

Until 1999, Network Solutions (NSI) operated the com, net, and org registries. In addition to the function of domain name registry operator, it was also the sole registrar for these domains. However, several companies had developed independent registrar services. One such company, NetNames, developed in 1996 the concept of a standalone commercial domain name registration service, selling domain registration and other associated services to the public. This effectively introduced the retail model to the industry, assigning a wholesale role to the registries. NSI assimilated this model, which ultimately led to the separation of registry and registrar functions.
In 1997, PGMedia filed an anti-trust suit against NSI, citing the root zone as an essential facility, and the US National Science Foundation (NSF) was joined to this action.[1] Ultimately, NSI was granted immunity from anti-trust litigation, but the litigation created enough pressure to restructure the domain name market.
In October 1998, following pressure from the growing domain name registration business and other interested parties, NSI's agreement with the United States Department of Commerce was amended. This amendment required the creation of a shared registration system that supported multiple registrars. This system officially commenced service on November 30, 1999 under the supervision of Internet Corporation for Assigned Names and Numbers (ICANN), although there had been several testbed registrars using the system since March 11, 1999. Since then, over 500 registrars have entered the market for domain name registration services.
Of the registrars who initially entered the market, many have continued to grow and outpace rivals. Go Daddy is the largest registrar. Other successful registrars include eNom, Tucows, Melbourne IT and Key-Systems. Registrars who initially led the market but were later surpassed by rivals include Network Solutions and Dotster.
Each ICANN-accredited registrar must pay a fixed fee of US$4,000 plus a variable fee. The sum of variable registrar fees is intended to total US$3.8 million. The competition created by the shared registration system enables end users to choose from many registrars offering a range of related services at varying prices.
Designated registrar

Domain registration information is maintained by the domain name registries, which contract with domain registrars to provide registration services to the public. An end user selects a registrar to provide the registration service, and that registrar becomes the designated registrar for the domain chosen by the user.
Only the designated registrar may modify or delete information about domain names in a central registry database. It is not unusual for an end user to switch registrars, invoking a domain transfer process between the registrars involved, which is governed by specific domain name transfer policies.
When a registrar registers a com domain name for an end-user, it must pay a maximum annual fee of US$7.34 to VeriSign, the registry operator for com, and a US$0.18 annual administration fee to ICANN. Most domain registrars price their services and products to address both the annual fees and the administration fees that must be paid to ICANN. Barriers to entry into the bulk registrar industry are high for new companies without an existing customer base.
Many registrars also offer registration through reseller affiliates. An end-user registers either directly with a registrar, or indirectly through one or more layers of resellers. As of 2010, the retail cost generally ranges from a low of about $7.50 per year to about $35 per year for a simple domain registration, although registrars often drop the price far lower – sometimes even free – when ordered with other products such as web hosting services.
The maximum period of registration for a domain name is 10 years. Some registrars offer longer periods of up to 100 years, but such offers involve the registrar renewing the registration for their customer; the 100-year registration would not be in the official registration database.
DNS hosting

Main articles: DNS hosting and DNS
Registration of a domain name establishes a set of delegation (name server, NS) records in the DNS servers of the parent domain, indicating the domain names (and, via glue records, the IP addresses) of the DNS servers that are authoritative for the domain. This provides merely a reference for how to find the domain data, not the actual domain data.
Registration of a domain does not automatically imply the provision of DNS services for the registered domain. Most registrars do offer DNS hosting as an optional free service for domains registered through them. If DNS services are not offered, or the end-user opts out, the end-user is responsible for procuring or self-hosting DNS services. Without DNS services for the domain, the registration is essentially useless for Internet services, although this situation is often encountered with domain parking and cybersquatting.
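One way to see whether a registered domain actually has working delegation is to query its NS records. A minimal sketch using Node.js's built-in resolver (run as an ES module; example.com is a placeholder):

import { promises as dns } from "node:dns";

const domain = "example.com";
const nameServers = await dns.resolveNs(domain); // the delegation set for the domain
console.log(`${domain} is served by:`, nameServers);
// If this lookup fails, the registration points nowhere and is useless
// for Internet services until DNS hosting is arranged.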
Domain name transfer

A domain name transfer is the process of changing the designated registrar of a domain name. ICANN has defined a Policy on Transfer of Registrations between Registrars.[2] The usual process of a domain name transfer is:
The end user verifies that the whois admin contact info is correct, particularly the email address; obtains the authentication code (EPP transfer code) from the old registrar, and removes any domain lock that has been placed on the registration. If the whois information had been out of date and is now updated, the end-user should wait 12-24 hours before proceeding further, to allow time for the updated data to propagate.
The end user contacts the new registrar with the wish to transfer the domain name to their service, and supplies the authentication code.
The gaining registrar must obtain express authorization from either the Registered Name Holder or the Administrative Contact. A transfer may only proceed if the gaining registrar receives confirmation of the transfer from one of these contacts. The authorization must be made via a valid Standardized Form of Authorization, which may be sent, for example, by e-mail to the e-mail addresses listed in the WHOIS record. After the Registered Name Holder or the Administrative Contact confirms the transfer, the new registrar electronically initiates the transfer of the domain using the authentication code (auth code).
The old registrar will contact the end user to confirm the authenticity of this request. The end user may have to take further action with the old registrar, such as returning to the online management tools, to re-iterate their desire to proceed, in order to expedite the transfer.
The old registrar will release authority to the new registrar.
The new registrar will notify the end user of transfer completion. The new registrar may have automatically copied over the domain server information, and everything on the website will continue to work as before. Otherwise, the domain server information will need to be updated with the new registrar.
After this process, the new registrar is the domain name's designated registrar. The process may take about five days. In some cases, the old registrar may intentionally delay the transfer as long as allowable. After transfer, the domain cannot be transferred again for 60 days, except back to the previous registrar.
It is unwise to attempt to transfer a domain immediately before it expires. In some cases, a transfer can take up to 14 days, meaning that the transfer may not complete before the registration expires. This could result in loss of the domain name registration and failure of the transfer. To avoid this, end users should either transfer well before the expiration date, or renew the registration before attempting the transfer.[1]
If a domain registration expires, irrespective of the reason, it can be difficult, expensive, or impossible for the original owner to get it back. After the expiration date, the domain status often passes through several management phases, often for a period of months; usually it does not simply become generally available.[3]
Transfer scams
Main article: Domain slamming
With the introduction of the shared registration system (SRS), many smaller registrars had to compete with each other. Some companies offered value-added services or used viral marketing, while others, such as VeriSign and the Domain Registry of America, attempted to trick customers into switching from their current registrar, using a practice known as domain slamming.
Many of these transfer scams involve a notice sent in the mail, fax, or e-mail. Some scammers contact end-users by telephone (because the contact information is available through WHOIS) to obtain more information. These notices would include information publicly available from the WHOIS database to add to the look of authenticity. The text would include legalese to confuse the end user into thinking that it is an official binding notice. Scam registrars go after domain names that are expiring soon or have recently expired. Expired domain names do not have to go through the authentication process to be transferred, as the previous registrar would have relinquished management rights of the domain name. Domain name expiry dates are readily available via WHOIS.
Drop catcher

A drop catcher is a domain name registrar who offers the service of attempting to quickly register a given domain name for a customer if that name becomes available—that is, to "catch" a "dropped" name—when the domain name's registration expires, either because the registrant does not want the domain anymore or because the registrant did not renew the registration on time.
Registrar rankings

Several organizations post market-share-ranked lists of domain name registrars and numbers of domains registered at each. The published lists differ in which top-level domains (TLDs) they use; in the frequency of updates; and in whether their basic data is absolute numbers provided by registries, or daily changes derived from Zone files.
The lists appear to all use at most 16 publicly available generic TLDs (gTLDs) that existed as of December 2009, plus .us. A February 2010 ICANN zone file access concept paper explains that most country code TLD (ccTLD) registries stopped providing zone files in 2003, citing abuse.
Published rankings and reports include:
Monthly (but with approximately a three-month delay), ICANN posts reports created by the registries of 16 gTLDs. These reports list absolute numbers of domains registered with each ICANN-accredited registrar.
Monthly (but with a three-month delay, as it relies on ICANN data), Dotandco.net publishes a list of registrars by volume.
Yearly (but covering only the period from 2002 to 2007), DomainTools.com, operated by Name Intelligence, Inc., published registrar statistics. Totals included .com, .net, .org, .info, .biz and .us. It cites "daily changes" (presumably from daily zone files) as the basis for its yearly aggregates, although it only lists quarterly changes.
4852 days ago by Turk
With a web browser, one can view web pages that may contain text, images, videos, and other multimedia and navigate between them via hyperlinks. Using concepts from earlier hypertext systems, British engineer and computer scientist Sir Tim Berners-Lee, now Director of the World Wide Web Consortium, wrote a proposal in March 1989 for what would eventually become the World Wide Web.[1] At CERN in Geneva, Switzerland, Berners-Lee and Belgian computer scientist Robert Cailliau proposed in 1990 to use "HyperText ... to link and access information of various kinds as a web of nodes in which the user can browse at will",[3] and publicly introduced the project in December.[4]
"The World-Wide Web was developed to be a pool of human knowledge, and human culture, which would allow collaborators in remote sites to share their ideas and all aspects of a common project."[5]
History

Main article: History of the World Wide Web
In the May 1970 issue of Popular Science magazine, Arthur C. Clarke was reported to have predicted that satellites would one day "bring the accumulated knowledge of the world to your fingertips" using a console that would combine the functionality of the Xerox machine, telephone, television and a small computer, allowing data transfer and video conferencing around the globe.[6]
In March 1989, Tim Berners-Lee wrote a proposal that referenced ENQUIRE, a database and software project he had built in 1980, and described a more elaborate information management system.[7]
With help from Robert Cailliau, he published a more formal proposal (on November 12, 1990) to build a "Hypertext project" called "WorldWideWeb" (one word, also "W3") as a "web" of "hypertext documents" to be viewed by "browsers" using a client–server architecture.[3] This proposal estimated that a read-only web would be developed within three months and that it would take six months to achieve "the creation of new links and new material by readers, [so that] authorship becomes universal" as well as "the automatic notification of a reader when new material of interest to him/her has become available." While the read-only goal was met, accessible authorship of web content took longer to mature, with the wiki concept, blogs, Web 2.0 and RSS/Atom.[8]
The proposal was modeled after the Dynatext SGML reader by Electronic Book Technology, a spin-off from the Institute for Research in Information and Scholarship at Brown University. The Dynatext system, licensed by CERN, was technically advanced and was a key player in the extension of SGML ISO 8879:1986 to Hypermedia within HyTime, but it was considered too expensive and had an inappropriate licensing policy for use in the general high energy physics community, namely a fee for each document and each document alteration.


[Image: This NeXT Computer, used by Tim Berners-Lee at CERN, became the first web server]


[Image: The CERN datacenter in 2010, housing some WWW servers]
A NeXT Computer was used by Berners-Lee as the world's first web server and also to write the first web browser, WorldWideWeb, in 1990. By Christmas 1990, Berners-Lee had built all the tools necessary for a working Web:[9] the first web browser (which was a web editor as well); the first web server; and the first web pages,[10] which described the project itself. On August 6, 1991, he posted a short summary of the World Wide Web project on the alt.hypertext newsgroup.[11] This date also marked the debut of the Web as a publicly available service on the Internet. The first photo on the web was uploaded by Berners-Lee in 1992, an image of the CERN house band Les Horribles Cernettes.
The Web as a "side effect" of 40 years of particle physics experiments: it has happened many times in the history of science that the most impressive results of large-scale scientific efforts appeared far away from the main directions of those efforts... After World War II, the nuclear centers of almost all developed countries became the places with the highest concentration of talented scientists. For about four decades many of them were invited to the international CERN Laboratories. So a specific kind of CERN intellectual culture was constantly growing from one generation of scientists and engineers to another. When the concentration of human talent per square foot of the CERN Labs reached critical mass, it caused an intellectual explosion. The Web, a crucial point in human history, was born... Nothing could be compared to it... We can't yet imagine the real scale of this recent upheaval, because there has never been such a fast-growing, multi-dimensional social-economic process in human history...[12]

The first server outside Europe was set up at SLAC to host the SPIRES-HEP database. Accounts differ substantially as to the date of this event. The World Wide Web Consortium says December 1992,[13] whereas SLAC itself claims 1991.[14][15] This is supported by a W3C document entitled A Little History of the World Wide Web.[16]
The crucial underlying concept of hypertext originated with older projects from the 1960s, such as the Hypertext Editing System (HES) at Brown University, Ted Nelson's Project Xanadu, and Douglas Engelbart's oN-Line System (NLS). Both Nelson and Engelbart were in turn inspired by Vannevar Bush's microfilm-based "memex", which was described in the 1945 essay "As We May Think".
Berners-Lee's breakthrough was to marry hypertext to the Internet. In his book Weaving The Web, he explains that he had repeatedly suggested that a marriage between the two technologies was possible to members of both technical communities, but when no one took up his invitation, he finally tackled the project himself. In the process, he developed three essential technologies:
a system of globally unique identifiers for resources on the Web and elsewhere, the Universal Document Identifier (UDI), later known as Uniform Resource Locator (URL) and Uniform Resource Identifier (URI);
the publishing language HyperText Markup Language (HTML);
the Hypertext Transfer Protocol (HTTP).[17]
The World Wide Web had a number of differences from other hypertext systems that were then available. The Web required only unidirectional links rather than bidirectional ones. This made it possible for someone to link to another resource without action by the owner of that resource. It also significantly reduced the difficulty of implementing web servers and browsers (in comparison to earlier systems), but in turn presented the chronic problem of link rot. Unlike predecessors such as HyperCard, the World Wide Web was non-proprietary, making it possible to develop servers and clients independently and to add extensions without licensing restrictions. On April 30, 1993, CERN announced[18] that the World Wide Web would be free to anyone, with no fees due. Coming two months after the announcement that the server implementation of the Gopher protocol was no longer free to use, this produced a rapid shift away from Gopher and towards the Web. An early popular web browser was ViolaWWW for Unix and the X Window System.
Scholars generally agree that a turning point for the World Wide Web began with the introduction[19] of the Mosaic web browser[20] in 1993, a graphical browser developed by a team at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign (NCSA-UIUC), led by Marc Andreessen. Funding for Mosaic came from the U.S. High-Performance Computing and Communications Initiative, a funding program initiated by the High Performance Computing and Communication Act of 1991, one of several computing developments initiated by U.S. Senator Al Gore.[21] Prior to the release of Mosaic, graphics were not commonly mixed with text in web pages and the Web's popularity was less than older protocols in use over the Internet, such as Gopher and Wide Area Information Servers (WAIS). Mosaic's graphical user interface allowed the Web to become, by far, the most popular Internet protocol.
The World Wide Web Consortium (W3C) was founded by Tim Berners-Lee after he left the European Organization for Nuclear Research (CERN) in October, 1994. It was founded at the Massachusetts Institute of Technology Laboratory for Computer Science (MIT/LCS) with support from the Defense Advanced Research Projects Agency (DARPA), which had pioneered the Internet; a year later, a second site was founded at INRIA (a French national computer research lab) with support from the European Commission DG InfSo; and in 1996, a third continental site was created in Japan at Keio University. By the end of 1994, while the total number of websites was still minute compared to present standards, quite a number of notable websites were already active, many of which are the precursors or inspiration for today's most popular services.
Connected by the existing Internet, other websites were created around the world, adding international standards for domain names and HTML. Since then, Berners-Lee has played an active role in guiding the development of web standards (such as the markup languages in which web pages are composed), and in recent years has advocated his vision of a Semantic Web. The World Wide Web enabled the spread of information over the Internet through an easy-to-use and flexible format. It thus played an important role in popularizing use of the Internet.[22] Although the two terms are sometimes conflated in popular use, World Wide Web is not synonymous with Internet.[23] The Web is a collection of documents and both client and server software using Internet protocols such as TCP/IP and HTTP.
Function

The terms Internet and World Wide Web are often used in every-day speech without much distinction. However, the Internet and the World Wide Web are not one and the same. The Internet is a global system of interconnected computer networks. In contrast, the Web is one of the services that runs on the Internet. It is a collection of textual documents and other resources, linked by hyperlinks and URLs, transmitted by web browsers and web servers. In short, the Web can be thought of as an application "running" on the Internet.[24]
Viewing a web page on the World Wide Web normally begins either by typing the URL of the page into a web browser, or by following a hyperlink to that page or resource. The web browser then initiates a series of communication messages, behind the scenes, in order to fetch and display it. As an example, consider the Wikipedia page for the World Wide Web, with the URL http://en.wikipedia.org/wiki/World_Wide_Web .
First, the browser resolves the server-name portion of the URL (en.wikipedia.org) into an Internet Protocol address using the globally distributed database known as the Domain Name System (DNS); this lookup returns an IP address such as 208.80.152.2. The browser then requests the resource by sending an HTTP request across the Internet to the computer at that particular address. It makes the request to a particular application port in the underlying Internet Protocol Suite so that the computer receiving the request can distinguish an HTTP request from other network protocols it may be servicing such as e-mail delivery; the HTTP protocol normally uses port 80. The content of the HTTP request can be as simple as the two lines of text
GET /wiki/World_Wide_Web HTTP/1.1
Host: en.wikipedia.org
The computer receiving the HTTP request delivers it to Web server software listening for requests on port 80. If the web server can fulfill the request it sends an HTTP response back to the browser indicating success, which can be as simple as
HTTP/1.0 200 OK
Content-Type: text/html; charset=UTF-8
followed by the content of the requested page. The Hypertext Markup Language for a basic web page looks like
<html>
<head>
<title>World Wide Web — Wikipedia, the free encyclopedia</title>
</head>
<body>
<p>The '''World Wide Web''', abbreviated as '''WWW''' and commonly known ...</p>
</body>
</html>
The web browser parses the HTML, interpreting the markup (<title>, <b> for bold, and such) that surrounds the words in order to draw that text on the screen.
Many web pages consist of more elaborate HTML which references the URLs of other resources such as images, other embedded media, scripts that affect page behavior, and Cascading Style Sheets that affect page layout. A browser that handles complex HTML will make additional HTTP requests to the web server for these other Internet media types. As it receives their content from the web server, the browser progressively renders the page onto the screen as specified by its HTML and these additional resources.
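To make the exchange above concrete, here is a minimal TypeScript (Node.js) sketch that performs it by hand over a raw TCP socket. Real applications would use fetch or an HTTP library instead, and note that Wikipedia nowadays answers a plain-HTTP request like this with a redirect to HTTPS:

import net from "node:net";

const host = "en.wikipedia.org";
const socket = net.connect(80, host, () => {
  // Exactly the two request lines shown above, plus Connection: close
  // so the server hangs up when the response is complete.
  socket.write(`GET /wiki/World_Wide_Web HTTP/1.1\r\nHost: ${host}\r\nConnection: close\r\n\r\n`);
});

socket.on("data", (chunk) => process.stdout.write(chunk)); // status line, headers, then the body
socket.on("end", () => socket.destroy());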
Linking
Most web pages contain hyperlinks to other related pages and perhaps to downloadable files, source documents, definitions and other web resources (this Wikipedia article is full of hyperlinks). In the underlying HTML, a hyperlink looks like
<a href="http://www.w3.org/History/19921103-hypertext/hypertext/WWW/">Early archive
of the first Web site</a>


[Image: Graphic representation of a minute fraction of the WWW, demonstrating hyperlinks]
Such a collection of useful, related resources, interconnected via hypertext links, is dubbed a web of information. Publication on the Internet created what Tim Berners-Lee first called the WorldWideWeb (in its original CamelCase, which was subsequently discarded) in November 1990.[3]
The hyperlink structure of the WWW is described by the webgraph: the nodes of the webgraph correspond to the webpages (or URLs), and the directed edges between them to the hyperlinks.
Over time, many web resources pointed to by hyperlinks disappear, relocate, or are replaced with different content. This leaves hyperlinks pointing at missing or changed targets, a phenomenon referred to in some circles as link rot; the hyperlinks affected by it are often called dead links. The ephemeral nature of the Web has prompted many efforts to archive web sites. The Internet Archive, active since 1996, is one of the best-known efforts.
Dynamic updates of web pages
Main article: Ajax (programming)
JavaScript is a scripting language that was initially developed in 1995 by Brendan Eich, then of Netscape, for use within web pages.[25] The standardized version is ECMAScript.[25] To overcome some of the limitations of the page-by-page model described above, some web applications also use Ajax (asynchronous JavaScript and XML). JavaScript delivered with the page can make additional HTTP requests to the server, either in response to user actions such as mouse clicks, or based on elapsed time. The server's responses are used to modify the current page rather than creating a new page with each response. Thus the server only needs to provide limited, incremental information. Since multiple Ajax requests can be handled at the same time, users can interact with a page even while data is being retrieved. Some web applications regularly poll the server to ask if new information is available.[26]
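A minimal sketch of the polling pattern just described, written in TypeScript for the browser with the modern fetch API rather than the original XMLHttpRequest (the /latest-news endpoint and the news element are hypothetical):

// Poll the server every 10 seconds and patch one region of the page
// in place instead of reloading the whole document.
async function refreshNews(): Promise<void> {
  const response = await fetch("/latest-news"); // hypothetical incremental endpoint
  const html = await response.text();
  document.getElementById("news")!.innerHTML = html; // modify the current page
}

setInterval(refreshNews, 10_000);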
WWW prefix
Many domain names used for the World Wide Web begin with www because of the long-standing practice of naming Internet hosts (servers) according to the services they provide. The hostname for a web server is often www, in the same way that it may be ftp for an FTP server, and news or nntp for a USENET news server. These host names appear as Domain Name System (DNS) subdomain names, as in www.example.com. The use of 'www' as a subdomain name is not required by any technical or policy standard; indeed, the first ever web server was called nxoc01.cern.ch,[27] and many web sites exist without it. Many established websites still use 'www', or they invent other subdomain names such as 'www2', 'secure', etc. Many such web servers are set up such that both the domain root (e.g., example.com) and the www subdomain (e.g., www.example.com) refer to the same site; others require one form or the other, or they may map to different web sites.
The use of a subdomain name is useful for load balancing incoming web traffic by creating a CNAME record that points to a cluster of web servers. Since only a subdomain, not the zone apex, can carry a CNAME record, the same result cannot be achieved by using the bare domain root.
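For example, one can check whether a site's www name is such an alias with Node.js's resolver (a sketch, run as an ES module; www.example.com stands in for a real host):

import { promises as dns } from "node:dns";

try {
  const targets = await dns.resolveCname("www.example.com");
  console.log("www is an alias (CNAME) for:", targets);
} catch {
  console.log("www has no CNAME record; it is served directly");
}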
When a user submits an incomplete website address to a web browser's address bar input field, some web browsers automatically try adding the prefix "www" to the beginning of it and possibly ".com", ".org" and ".net" at the end, depending on what might be missing. For example, entering 'microsoft' may be transformed to http://www.microsoft.com/ and 'openoffice' to http://www.openoffice.org. This feature started appearing in early versions of Mozilla Firefox, when it still had the working title 'Firebird', in early 2003; it derives from a much older practice in browsers such as Lynx.[28] It is reported that Microsoft was granted a US patent for the same idea in 2008, but only for mobile devices.[29]
The scheme specifiers (http:// or https://) in URIs refer to the Hypertext Transfer Protocol and to HTTP Secure respectively and so define the communication protocol to be used for the request and response. The HTTP protocol is fundamental to the operation of the World Wide Web; the added encryption layer in HTTPS is essential when confidential information such as passwords or banking information are to be exchanged over the public Internet. Web browsers usually prepend the scheme to URLs too, if omitted.
In English, www is pronounced by individually pronouncing the name of each character (double-u double-u double-u). Although some technical users pronounce it dub-dub-dub, this is not widespread. The English writer Douglas Adams once quipped in The Independent on Sunday (1999): "The World Wide Web is the only thing I know of whose shortened form takes three times longer to say than what it's short for," with Stephen Fry later pronouncing it in his "Podgrammes" series of podcasts as "wuh wuh wuh." In Mandarin Chinese, World Wide Web is commonly translated via a phono-semantic matching to wàn wéi wǎng (万维网), which satisfies www and literally means "myriad-dimensional net",[30] a translation that very appropriately reflects the design concept and proliferation of the World Wide Web. Tim Berners-Lee's web-space states that World Wide Web is officially spelled as three separate words, each capitalized, with no intervening hyphens.[31]
Use of the www prefix is declining as Web 2.0 web applications seek to brand their domain names and make them easily pronounceable.[32] As the mobile web grows in popularity, services like Gmail.com, MySpace.com, Facebook.com and Twitter.com are most often discussed without adding the www to the domain.
Privacy


Computer users, who save time and money, and who gain conveniences and entertainment, may or may not have surrendered the right to privacy in exchange for using a number of technologies including the Web.[33] For example: more than a half billion people worldwide have used a social network service,[34] and of Americans who grew up with the Web, half created an online profile[35] and are part of a generational shift that could be changing norms.[36][37] The social network Facebook progressed from U.S. college students to a 70% non-U.S. audience, but in 2009 estimated that only 20% of its members use privacy settings.[38] In 2010 (six years after co-founding the company), Mark Zuckerberg wrote, "we will add privacy controls that are much simpler to use".[39]
Privacy representatives from 60 countries have resolved to ask for laws to complement industry self-regulation, for education for children and other minors who use the Web, and for default protections for users of social networks.[40] They also believe data protection for personally identifiable information benefits business more than the sale of that information.[40] Users can opt in to features in browsers that clear their personal histories locally and block some cookies and advertising networks[41] but they are still tracked in websites' server logs, and particularly by web beacons.[42] Berners-Lee and colleagues see hope in accountability and appropriate use achieved by extending the Web's architecture to policy awareness, perhaps with audit logging, reasoners and appliances.[43]
In exchange for providing free content, vendors hire advertisers who spy on Web users and base their business model on tracking them.[44] Since 2009, they buy and sell consumer data on exchanges (lacking a few details that could make it possible to de-anonymize, or identify, an individual).[44][45] Hundreds of millions of times per day, Lotame Solutions captures what users are typing in real time, and sends that text to OpenAmplify, which then tries to determine, to quote a writer at The Wall Street Journal, "what topics are being discussed, how the author feels about those topics, and what the person is going to do about them".[46][47]
Microsoft backed away in 2008 from its plans for strong privacy features in Internet Explorer,[48] leaving its users (50% of the world's Web users) open to advertisers who may make assumptions about them based on only one click when they visit a website.[49] Among services paid for by advertising, Yahoo! could collect the most data about users of commercial websites, about 2,500 bits of information per month about each typical user of its site and its affiliated advertising network sites. Yahoo! was followed by MySpace with about half that potential and then by AOL–TimeWarner, Google, Facebook, Microsoft, and eBay.[50]
Security

The Web has become criminals' preferred pathway for spreading malware. Cybercrime carried out on the Web can include identity theft, fraud, espionage and intelligence gathering.[51] Web-based vulnerabilities now outnumber traditional computer security concerns,[52][53] and, as measured by Google, about one in ten web pages may contain malicious code.[54] Most Web-based attacks take place on legitimate websites, and most, as measured by Sophos, are hosted in the United States, China and Russia.[55] Among the most common of all malware threats are SQL injection attacks against websites.[56] Through HTML and URIs, the Web was vulnerable to attacks like cross-site scripting (XSS), which arrived with the introduction of JavaScript[57] and were exacerbated to some degree by Web 2.0 and Ajax web design, which favors the use of scripts.[58] Today, by one estimate, 70% of all websites are open to XSS attacks on their users.[59]
Proposed solutions vary to extremes. Large security vendors like McAfee already design governance and compliance suites to meet post-9/11 regulations,[60] and some, like Finjan, have recommended active real-time inspection of code and all content regardless of its source.[51] Some have argued that for enterprises to see security as a business opportunity rather than a cost center,[61] "ubiquitous, always-on digital rights management" enforced in the infrastructure by a handful of organizations must replace the hundreds of companies that today secure data and networks.[62] Jonathan Zittrain has said users sharing responsibility for computing safety is far preferable to locking down the Internet.[63]
Standards

Main article: Web standards
Many formal standards and other technical specifications and software define the operation of different aspects of the World Wide Web, the Internet, and computer information exchange. Many of the documents are the work of the World Wide Web Consortium (W3C), headed by Berners-Lee, but some are produced by the Internet Engineering Task Force (IETF) and other organizations.
Usually, when web standards are discussed, the following publications are seen as foundational:
Recommendations for markup languages, especially HTML and XHTML, from the W3C. These define the structure and interpretation of hypertext documents.
Recommendations for stylesheets, especially CSS, from the W3C.
Standards for ECMAScript (usually in the form of JavaScript), from Ecma International.
Recommendations for the Document Object Model, from W3C.
Additional publications provide definitions of other essential technologies for the World Wide Web, including, but not limited to, the following:
Uniform Resource Identifier (URI), which is a universal system for referencing resources on the Internet, such as hypertext documents and images. URIs, often called URLs, are defined by the IETF's RFC 3986 / STD 66: Uniform Resource Identifier (URI): Generic Syntax, as well as its predecessors and numerous URI scheme-defining RFCs;
HyperText Transfer Protocol (HTTP), especially as defined by RFC 2616: HTTP/1.1 and RFC 2617: HTTP Authentication, which specify how the browser and server authenticate each other.
Accessibility

Main article: Web accessibility
Access to the Web is for everyone regardless of disability—including visual, auditory, physical, speech, cognitive, and neurological. Accessibility features also help others with temporary disabilities like a broken arm or the aging population as their abilities change.[64] The Web is used for receiving information as well as providing information and interacting with society, making it essential that the Web be accessible in order to provide equal access and equal opportunity to people with disabilities.[65] Tim Berners-Lee once noted, "The power of the Web is in its universality. Access by everyone regardless of disability is an essential aspect."[64] Many countries regulate web accessibility as a requirement for websites.[66] International cooperation in the W3C Web Accessibility Initiative led to simple guidelines that web content authors as well as software developers can use to make the Web accessible to persons who may or may not be using assistive technology.[64][67]
Internationalization

The W3C Internationalization Activity assures that web technology will work in all languages, scripts, and cultures.[68] Beginning in 2004 or 2005, Unicode gained ground and eventually in December 2007 surpassed both ASCII and Western European as the Web's most frequently used character encoding.[69] Originally RFC 3986 allowed resources to be identified by URI in a subset of US-ASCII. RFC 3987 allows more characters—any character in the Universal Character Set—and now a resource can be identified by IRI in any language.[70]
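As a small illustration of the IRI-to-URI mapping, the standard encodeURI function (available in any ECMAScript runtime) percent-encodes the UTF-8 bytes of any non-ASCII characters, leaving a plain US-ASCII URI; the path here is an arbitrary example:

const iri = "https://example.com/万维网"; // an IRI with Universal Character Set characters
const uri = encodeURI(iri);              // percent-encode the UTF-8 bytes
console.log(uri); // https://example.com/%E4%B8%87%E7%BB%B4%E7%BD%91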
Statistics

Between 2005 and 2010, the number of Web users doubled, and was expected to surpass two billion in 2010.[71] According to a 2001 study, there were over 550 billion documents on the Web, mostly in the invisible Web, or Deep Web.[72] A 2002 survey of 2,024 million Web pages[73] determined that by far the most Web content was in English: 56.4%; next were pages in German (7.7%), French (5.6%), and Japanese (4.9%). A more recent study, which used Web searches in 75 different languages to sample the Web, determined that there were over 11.5 billion Web pages in the publicly indexable Web as of the end of January 2005.[74] As of March 2009, the indexable web contained at least 25.21 billion pages.[75] On July 25, 2008, Google software engineers Jesse Alpert and Nissan Hajaj announced that Google Search had discovered one trillion unique URLs.[76] As of May 2009, over 109.5 million websites operated.[77] Of these, 74% were commercial or other sites operating in the .com generic top-level domain.[77]
Statistics measuring a website's popularity are usually based either on the number of page views or associated server 'hits' (file requests) that it receives.
Speed issues

Frustration over congestion issues in the Internet infrastructure and the high latency that results in slow browsing has led to a pejorative name for the World Wide Web: the World Wide Wait.[78] Speeding up the Internet is an ongoing discussion over the use of peering and QoS technologies. Other solutions to reduce the congestion can be found at W3C.[79] Guidelines for Web response times are:[80]
0.1 second (one tenth of a second). Ideal response time. The user doesn't sense any interruption.
1 second. Highest acceptable response time. Download times above 1 second interrupt the user experience.
10 seconds. Unacceptable response time. The user experience is interrupted and the user is likely to leave the site or system.
Caching

If a user revisits a Web page after only a short interval, the page data may not need to be re-obtained from the source Web server. Almost all web browsers cache recently obtained data, usually on the local hard drive. HTTP requests sent by a browser will usually only ask for data that has changed since the last download. If the locally cached data is still current, it is reused. Caching helps reduce the amount of Web traffic on the Internet. The decision about expiration is made independently for each downloaded file, whether image, stylesheet, JavaScript, HTML, or whatever other content the site may provide. Thus even on sites with highly dynamic content, many of the basic resources need to be refreshed only occasionally. Web site designers find it worthwhile to collate resources such as CSS data and JavaScript into a few site-wide files so that they can be cached efficiently. This helps reduce page download times and lowers demands on the Web server.
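A minimal sketch of that practice, assuming hypothetical site-wide files /site.css and /site.js: because every page references the same two URLs, the browser downloads each file once and can serve later page views from its cache.
<head>
<title>Any page on the site</title>
<!-- shared, cacheable resources referenced by every page -->
<link rel="stylesheet" type="text/css" href="/site.css">
<script type="text/javascript" src="/site.js"></script>
</head>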
There are other components of the Internet that can cache Web content. Corporate and academic firewalls often cache Web resources requested by one user for the benefit of all. (See also Caching proxy server.) Some search engines also store cached content from websites. Apart from the facilities built into Web servers that can determine when files have been updated and so need to be re-sent, designers of dynamically generated Web pages can control the HTTP headers sent back to requesting users, so that transient or sensitive pages are not cached. Internet banking and news sites frequently use this facility. Data requested with an HTTP 'GET' is likely to be cached if other conditions are met; data obtained in response to a 'POST' is assumed to depend on the data that was POSTed and so is not cached.
4852 days ago by Turk
A web page or webpage is a document or information resource that is suitable for the World Wide Web and can be accessed through a web browser and displayed on a monitor or mobile device. This information is usually in HTML or XHTML format, and may provide navigation to other web pages via hypertext links. Web pages frequently subsume other resources such as style sheets, scripts and images into their final presentation.
Web pages may be retrieved from a local computer or from a remote web server. The web server may restrict access only to a private network, e.g. a corporate intranet, or it may publish pages on the World Wide Web. Web pages are requested and served from web servers using Hypertext Transfer Protocol (HTTP).
Web pages may consist of files of static text and other content stored within the web server's file system (static web pages), or may be constructed by server-side software when they are requested (dynamic web pages). Client-side scripting can make web pages more responsive to user input once on the client browser.
Contents
1 Colour, typography, illustration, and interaction
1.1 Dynamic behavior
2 Browsers
3 Elements
4 Rendering
5 URL
6 Viewing
7 Creation
8 Saving
9 See also
10 References
Colour, typography, illustration, and interaction

Web pages usually include information as to the colors of text and backgrounds and very often also contain links to images and sometimes other types of media to be included in the final view. Layout, typographic and color-scheme information is provided by Cascading Style Sheet (CSS) instructions, which can either be embedded in the HTML or can be provided by a separate file, which is referenced from within the HTML. The latter case is especially relevant where one lengthy stylesheet is relevant to a whole website: due to the way HTTP works, the browser will only download it once from the web server and use the cached copy for the whole site. Images are stored on the web server as separate files, but again HTTP allows for the fact that once a web page is downloaded to a browser, it is quite likely that related files such as images and stylesheets will be requested as it is processed. An HTTP 1.1 web server will maintain a connection with the browser until all related resources have been requested and provided. Web browsers usually render images along with the text and other material on the displayed web page.
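As an illustration (styles.css is a hypothetical file name), the two ways of supplying CSS look like this; the external form is the one the browser can download once and then reuse from cache across the site:
<!-- external stylesheet, referenced from within the HTML -->
<link rel="stylesheet" type="text/css" href="styles.css">
<!-- embedded stylesheet, carried inside each page -->
<style type="text/css">
body { background-color: white; color: black; }
</style>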
Dynamic behavior
Main article: dynamic web page
Client-side computer code such as JavaScript or code implementing Ajax techniques can be provided either embedded in the HTML of a web page or, like CSS stylesheets, as separate, linked downloads specified in the HTML. These scripts may run on the client computer, if the user allows.
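A brief sketch of both forms (script.js is a hypothetical file name; the embedded script simply rewrites a paragraph as the page loads):
<!-- separate, linked download specified in the HTML -->
<script type="text/javascript" src="script.js"></script>
<!-- embedded directly in the page -->
<p id="demo">Original text.</p>
<script type="text/javascript">
document.getElementById("demo").innerHTML = "Text replaced on the client.";
</script>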
Browsers

A web browser can have a graphical user interface, like Internet Explorer, Mozilla Firefox, Chrome and Opera, or can be text-based, like Lynx or Links.
Web users with disabilities often use assistive technologies and adaptive strategies to access web pages.[1] Users may be color blind, may be unable or unwilling to use a mouse (perhaps because of repetitive stress injury or motor-neurone problems), may be deaf and require audio to be captioned, may be blind and using a screen reader or braille display, or may need screen magnification.
Disabled and able-bodied users may disable the download and viewing of images and other media, to save time, network bandwidth or merely to simplify their browsing experience. Users of mobile devices often have restricted displays and bandwidth. Anyone may prefer not to use the fonts, font sizes, styles and color schemes selected by the web page designer and may apply their own CSS styling to the page.
The World Wide Web Consortium (W3C) and Web Accessibility Initiative (WAI) recommend that all web pages should be designed with all of these options in mind.
Elements

A web page, as an information set, can contain numerous types of information, which can be seen, heard, or interacted with by the end user:
Perceived (rendered) information:
Textual information: with diverse render variations.
Non-textual information:
Static images: raster graphics, typically GIF, JPEG or PNG; or vector formats such as SVG or Flash.
Animated images: typically animated GIF and SVG, but also Flash, Shockwave, or Java applets.
Audio: typically MP3, Ogg or various proprietary formats.
Video: WMV (Windows), RM (RealMedia), FLV (Flash Video), MPG, or MOV (QuickTime).
Interactive information: see interactive media.
For "on page" interaction:
Interactive text: see DHTML.
Interactive illustrations: ranging from "click to play" images to games, typically using script orchestration, Flash, Java applets, SVG, or Shockwave.
Buttons: forms providing alternative interface, typically for use with script orchestration and DHTML.
For "between pages" interaction:
Hyperlinks: standard "change page" reactivity.
Forms: providing more interaction with the server and server-side databases.
Internal (hidden) information:
Comments
Linked files, reached through hyperlinks (such as DOC, XLS, or PDF files).
Metadata with semantic meta-information, Charset information, Document Type Definition (DTD), etc.
Layout and style information: information about rendered items (such as image size attributes) and visual specifications, such as Cascading Style Sheets (CSS).
Scripts, usually JavaScript, complement interactivity and functionality.
Note: on server-side the web page may also have "Processing Instruction Information Items".
The web page can also contain dynamically adapted information elements, dependent upon the rendering browser or end-user location (through the use of IP address tracking and/or "cookie" information).
From a more general point of view, some grouped information elements, such as a navigation bar, are uniform across all pages of a website, like a standard. This kind of "website standard information" is supplied by technologies like web template systems.
Rendering

Web pages will often require more screen space than is available for a particular display resolution. Most modern browsers will place a scrollbar (a sliding tool at the side of the screen that allows the user to move the page up or down, or side-to-side) in the window to allow the user to see all content. Scrolling horizontally is less prevalent than vertical scrolling, not only because pages that scroll horizontally often do not print properly, but also because it inconveniences the user more: lines of text are horizontal, so scrolling back and forth for every line is far more awkward than scrolling after reading a whole screen. In addition, most computer keyboards have page up and down keys, and many computer mice have vertical scroll wheels, while the horizontal scrolling equivalents are rare.
When web pages are stored in a common directory of a web server, they become a website. A website will typically contain a group of web pages that are linked together, or have some other coherent method of navigation. The most important web page to have on a website is the index page. Depending on the web server settings, this index page can have many different names, but the most common is index.html. When a browser visits the homepage for a website, or any URL pointing to a directory rather than a specific file, the web server will serve the index page to the requesting browser. If no index page is defined in the configuration, or no such file exists on the server, either an error or directory listing will be served to the browser.
A web page can either be a single HTML file, or made up of several HTML files using frames or Server Side Includes (SSIs). Frames have been known to cause problems with web accessibility, copyright,[2] navigation, printing and search engine rankings,[3] and are now less often used than they were in the 1990s.[4][5] Both frames and SSIs allow certain content which appears on many pages, such as page navigation or page headers, to be repeated without duplicating the HTML in many files. Frames and the W3C recommended alternative of 2000, the <object> tag,[4] also allow some content to remain in one place while other content can be scrolled using conventional scrollbars. Modern CSS and JavaScript client-side techniques can also achieve all of these goals and more.
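For instance, a Server Side Include directive (header.html is a hypothetical shared fragment) lets the server insert one copy of common content into many pages as they are served:
<!-- the web server replaces the directive below with the contents of header.html -->
<!--#include virtual="/header.html" -->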
When creating a web page, it is important to ensure it conforms to the World Wide Web Consortium (W3C) standards for HTML, CSS, XML and other technologies. The W3C standards are in place to ensure that all browsers which conform to them can display identical content without any special consideration for proprietary rendering techniques. A properly coded web page will be accessible to many different browsers, old and new alike, to many display resolutions, and to users with audio or visual impairments.
URL

Main article: Uniform Resource Locator
Web pages today are increasingly dynamic. A dynamic web page is one that is created server-side when it is requested, and then served to the end-user. These types of web pages typically do not have a permalink, or static URL, associated with them. Today, this can be seen in many popular forums, online shops, and even on Wikipedia. This practice reduces the number of static pages by storing the relevant web page information in a database instead. Some search engines may have a hard time indexing a web page that is dynamic, so static web pages can be provided in those instances.
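As a hypothetical illustration, compare a dynamic URL, answered by assembling a page from a database on each request, with a static URL that maps to a file on the server:
http://www.example.org/forum/viewtopic.php?topic=1432 (dynamic; no stable permalink)
http://www.example.org/about.html (static)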
Viewing

In order to graphically display a web page, a web browser is needed. This is a type of software that can retrieve web pages from the Internet. Most current web browsers include the ability to view the source code. Viewing a web page in a text editor will also display the source code, not the visual product.
Creation

To create a web page, a text editor or a specialized HTML editor is needed. In order to upload the created web page to a web server, traditionally an FTP client is needed.
The design of a web page is highly personal. A design can be made according to one's own preference, or a premade web template can be used. Web templates let web page designers edit the content of a web page without having to worry about the overall aesthetics. Many people publish their own web pages using services like Tripod or Angelfire, which offer free page creation and hosting up to a certain size limit.
Another way of making a web page is to install specialized software such as a wiki, CMS, or forum package. These options allow for quick and easy creation of web pages, which are typically dynamic.
Saving

While one is viewing a web page, a copy of it is saved locally; this is what is being viewed. Depending on the browser settings, this copy may be deleted at any time, or stored indefinitely, sometimes without the user realizing it. Most GUI browsers provide options for saving a web page more permanently. These may include:
Save the rendered text without formatting or images, with hyperlinks reduced to plain text
Save the HTML as it was served — Overall structure preserved, but some links may be broken
Save the HTML with relative links changed to absolute ones so that hyperlinks are preserved
Save the entire web page — All images and other resources including stylesheets and scripts are downloaded and saved in a new folder alongside the HTML, with links to them altered to refer to the local copies. Other relative links changed to absolute
Save the HTML as well as all images and other resources into a single MHTML file. This is supported by Internet Explorer and Opera.[6] Other browsers may support this if a suitable plugin has been installed.
Most operating systems allow applications such as web browsers not only to print the currently viewed web page to a printer, but optionally to "print" to a file that can be viewed or printed later. Some web pages are designed, for example by use of CSS, with printing in mind, so that hyperlinks, menus and other navigation items, which would be useless on paper, are handled appropriately. Sometimes, the destination addresses of hyperlinks may be shown explicitly, either within the body of the page or listed at the end of the printed version. Web page designers may specify in CSS that non-functional menus, navigational blocks and other items may simply be absent from the printed version.
4852 days ago by Turk
HTML, which stands for HyperText Markup Language, is the predominant markup language for web pages. HTML elements are the basic building-blocks of webpages.
HTML is written in the form of HTML elements consisting of tags, enclosed in angle brackets (like <html>), within the web page content. HTML tags normally come in pairs like <h1> and </h1>. The first tag in a pair is the start tag, the second tag is the end tag (they are also called opening tags and closing tags). In between these tags web designers can add text, tables, images, etc.
The purpose of a web browser is to read HTML documents and compose them into visual or audible web pages. The browser does not display the HTML tags, but uses the tags to interpret the content of the page.
HTML elements form the building blocks of all websites. HTML allows images and objects to be embedded and can be used to create interactive forms. It provides a means to create structured documents by denoting structural semantics for text such as headings, paragraphs, lists, links, quotes and other items. It can embed scripts in languages such as JavaScript which affect the behavior of HTML webpages.
Web browsers can also refer to Cascading Style Sheets (CSS) to define the appearance and layout of text and other material. The W3C, maintainer of both the HTML and the CSS standards, encourages the use of CSS over explicitly presentational HTML markup.[1]
Contents
1 History
1.1 Origins
1.2 First specifications
1.3 Version history of the standard
1.3.1 HTML version timeline
1.3.2 HTML draft version timeline
1.3.3 XHTML versions
2 Markup
2.1 Elements
2.1.1 Element examples
2.1.2 Attributes
2.2 Character and entity references
2.3 Data types
2.4 Document type declaration
3 Semantic HTML
4 Delivery
4.1 HTTP
4.2 HTML e-mail
4.3 Naming conventions
4.4 HTML Application
5 Current variations
5.1 SGML-based versus XML-based HTML
5.2 Transitional versus strict
5.3 Frameset versus transitional
5.4 Summary of specification versions
6 Hypertext features not in HTML
7 WYSIWYG editors
8 See also
9 References
10 External links
History



[Image: The historic logo made by the W3C]
Origins


[Image: Tim Berners-Lee]
In 1980, physicist Tim Berners-Lee, who was a contractor at CERN, proposed and prototyped ENQUIRE, a system for CERN researchers to use and share documents. In 1989, Berners-Lee wrote a memo proposing an Internet-based hypertext system.[2] Berners-Lee specified HTML and wrote the browser and server software in the last part of 1990. In that year, Berners-Lee and CERN data systems engineer Robert Cailliau collaborated on a joint request for funding, but the project was not formally adopted by CERN. In his personal notes[3] from 1990 he lists[4] "some of the many areas in which hypertext is used" and puts an encyclopedia first.
First specifications
The first publicly available description of HTML was a document called HTML Tags, first mentioned on the Internet by Berners-Lee in late 1991.[5][6] It describes 20 elements comprising the initial, relatively simple design of HTML. Except for the hyperlink tag, these were strongly influenced by SGMLguid, an in-house SGML based documentation format at CERN. Thirteen of these elements still exist in HTML 4.[7]
Hypertext markup language is a markup language that web browsers use to interpret and compose text, images and other material into visual or audible web pages. Default characteristics for every item of HTML markup are defined in the browser, and these characteristics can be altered or enhanced by the web page designer's additional use of CSS. Many of the text elements are found in the 1988 ISO technical report TR 9537 Techniques for using SGML, which in turn covers the features of early text formatting languages such as that used by the RUNOFF command developed in the early 1960s for the CTSS (Compatible Time-Sharing System) operating system: these formatting commands were derived from the commands used by typesetters to manually format documents. However, the SGML concept of generalized markup is based on elements (nested annotated ranges with attributes) rather than merely print effects, along with the separation of structure and processing; HTML has been progressively moved in this direction with CSS.
Berners-Lee considered HTML to be an application of SGML. It was formally defined as such by the Internet Engineering Task Force (IETF) with the mid-1993 publication of the first proposal for an HTML specification: "Hypertext Markup Language (HTML)" Internet-Draft by Berners-Lee and Dan Connolly, which included an SGML Document Type Definition to define the grammar.[8] The draft expired after six months, but was notable for its acknowledgment of the NCSA Mosaic browser's custom tag for embedding in-line images, reflecting the IETF's philosophy of basing standards on successful prototypes.[9] Similarly, Dave Raggett's competing Internet-Draft, "HTML+ (Hypertext Markup Format)", from late 1993, suggested standardizing already-implemented features like tables and fill-out forms.[10]
After the HTML and HTML+ drafts expired in early 1994, the IETF created an HTML Working Group, which in 1995 completed "HTML 2.0", the first HTML specification intended to be treated as a standard against which future implementations should be based.[9] Published as Request for Comments 1866, HTML 2.0 included ideas from the HTML and HTML+ drafts.[11] The 2.0 designation was intended to distinguish the new edition from previous drafts.[12]
Further development under the auspices of the IETF was stalled by competing interests. Since 1996, the HTML specifications have been maintained, with input from commercial software vendors, by the World Wide Web Consortium (W3C).[13] However, in 2000, HTML also became an international standard (ISO/IEC 15445:2000). The last HTML specification published by the W3C is the HTML 4.01 Recommendation, published in late 1999. Its issues and errors were last acknowledged by errata published in 2001.
Version history of the standard
HTML version timeline
November 24, 1995
HTML 2.0 was published as IETF RFC 1866. Supplemental RFCs added capabilities:
November 25, 1995: RFC 1867 (form-based file upload)
May 1996: RFC 1942 (tables)
August 1996: RFC 1980 (client-side image maps)
January 1997: RFC 2070 (internationalization)
In June 2000, all of these were declared obsolete/historic by RFC 2854.
January 1997
HTML 3.2[14] was published as a W3C Recommendation. It was the first version developed and standardized exclusively by the W3C, as the IETF had closed its HTML Working Group in September 1996.[15]
HTML 3.2 dropped math formulas entirely, reconciled overlap among various proprietary extensions and adopted most of Netscape's visual markup tags. Netscape's blink element and Microsoft's marquee element were omitted due to a mutual agreement between the two companies.[13] A markup for mathematical formulas similar to that in HTML was not standardized until 14 months later in MathML.
December 1997
HTML 4.0[16] was published as a W3C Recommendation. It offers three variations:
Strict, in which deprecated elements are forbidden,
Transitional, in which deprecated elements are allowed,
Frameset, in which mostly only frame related elements are allowed;
Initially code-named "Cougar",[17] HTML 4.0 adopted many browser-specific element types and attributes, but at the same time sought to phase out Netscape's visual markup features by marking them as deprecated in favor of style sheets. HTML 4 is an SGML application conforming to ISO 8879 (SGML).[18]
April 1998
HTML 4.0[19] was reissued with minor edits without incrementing the version number.
December 1999
HTML 4.01[20] was published as a W3C Recommendation. It offers the same three variations as HTML 4.0 and its last errata were published May 12, 2001.
May 2000
ISO/IEC 15445:2000[21][22] ("ISO HTML", based on HTML 4.01 Strict) was published as an ISO/IEC international standard. In the ISO this standard falls in the domain of the ISO/IEC JTC1/SC34 (ISO/IEC Joint Technical Committee 1, Subcommittee 34 - Document description and processing languages).[21]
As of mid-2008, HTML 4.01 and ISO/IEC 15445:2000 are the most recent versions of HTML. Development of the parallel, XML-based language XHTML occupied the W3C's HTML Working Group through the early and mid-2000s.
HTML draft version timeline


[Image: Logo of HTML 5]
October 1991
HTML Tags,[5] an informal CERN document listing twelve HTML tags, was first mentioned in public.
June 1992
First informal draft of the HTML DTD,[23] with seven[24][25][26] subsequent revisions (July 15, August 6, August 18, November 17, November 19, November 20, November 22).
November 1992
HTML DTD 1.1 (the first with a version number, based on RCS revisions, which start with 1.1 rather than 1.0), an informal draft[26]
June 1993
Hypertext Markup Language[27] was published by the IETF IIIR Working Group as an Internet-Draft (a rough proposal for a standard). It was replaced by a second version[28] one month later, followed by six further drafts published by the IETF itself[29] that finally led to HTML 2.0 in RFC 1866.
November 1993
HTML+ was published by the IETF as an Internet-Draft and was a competing proposal to the Hypertext Markup Language draft. It expired in May 1994.
April 1995 (authored March 1995)
HTML 3.0[30] was proposed as a standard to the IETF, but the proposal expired five months later without further action. It included many of the capabilities that were in Raggett's HTML+ proposal, such as support for tables, text flow around figures and the display of complex mathematical formulas.[31]
W3C began development of its own Arena browser as a test bed for HTML 3 and Cascading Style Sheets,[32][33][34] but HTML 3.0 did not succeed for several reasons. The draft was considered very large at 150 pages, and the pace of browser development, as well as the number of interested parties, had outstripped the resources of the IETF.[13] Browser vendors, including Microsoft and Netscape at the time, chose to implement different subsets of HTML 3's draft features as well as to introduce their own extensions to it.[13] (See Browser wars.) These included extensions to control stylistic aspects of documents, contrary to the "belief [of the academic engineering community] that such things as text color, background texture, font size and font face were definitely outside the scope of a language when their only intent was to specify how a document would be organized."[13] Dave Raggett, who has been a W3C Fellow for many years, has commented for example, "To a certain extent, Microsoft built its business on the Web by extending HTML features."[13]
January 2008
HTML5 was published as a Working Draft by the W3C.[35]
Although its syntax closely resembles that of SGML, HTML5 has abandoned any attempt to be an SGML application and has explicitly defined its own "html" serialization, in addition to an alternative XML-based XHTML5 serialization.[36]
XHTML versions
Main article: XHTML
XHTML is a separate language that began as a reformulation of HTML 4.01 using XML 1.0. It continues to be developed:
XHTML 1.0,[37] published January 26, 2000, as a W3C Recommendation, later revised and republished August 1, 2002. It offers the same three variations as HTML 4.0 and 4.01, reformulated in XML, with minor restrictions.
XHTML 1.1,[38] published May 31, 2001, as a W3C Recommendation. It is based on XHTML 1.0 Strict, but includes minor changes, can be customized, and is reformulated using modules from Modularization of XHTML, which was published April 10, 2001, as a W3C Recommendation.
XHTML 2.0.[39] There is no XHTML 2.0 standard; XHTML 2.0 is only a draft document, and it is inappropriate to cite it as anything other than a work in progress. XHTML 2.0 is incompatible with XHTML 1.x and would therefore be more accurately characterized as an XHTML-inspired new language than as an update to XHTML 1.x.
XHTML5, which is an update to XHTML 1.x, is being defined alongside HTML5 in the HTML5 draft.[40]
Markup

HTML markup consists of several key components, including elements (and their attributes), character-based data types, character references and entity references. Another important component is the document type declaration, which triggers standards mode rendering.
The Hello world program, a common computer program employed for comparing programming languages, scripting languages and markup languages, is made of 9 lines of code in HTML, although newlines are optional:
<!DOCTYPE html>
<html>
<head>
<title>Hello HTML</title>
</head>
<body>
<p>Hello World!</p>
</body>
</html>
(The text between <html> and </html> describes the web page, and the text between <body> and </body> is the visible page content. The markup text '<title>Hello HTML</title>' defines the browser tab title.)
This Document Type Declaration is for HTML5. If the <!doctype html> declaration is not included, various browsers will revert to "quirks mode" for rendering.[41]
Elements
Main article: HTML element
HTML documents are composed entirely of HTML elements that, in their most general form have three components: a pair of tags, a "start tag" and "end tag"; some attributes within the start tag; and finally, any textual and graphical content between the start and end tags, perhaps including other nested elements. The HTML element is everything between and including the start and end tags. Each tag is enclosed in angle brackets.
The general form of an HTML element is therefore: <tag attribute1="value1" attribute2="value2">content</tag>. Some HTML elements are defined as empty elements and take the form <tag attribute1="value1" attribute2="value2">. Empty elements may enclose no content. The name of an HTML element is the name used in the tags. Note that the end tag's name is preceded by a slash character, "/", and that in empty elements the slash does not appear. If attributes are not mentioned, default values are used in each case.
Element examples
Header of the HTML document: <head>...</head>. Usually the title should be included in the head, for example:
<head>
<title>The title</title>
</head>
Headings: HTML headings are defined with the <h1> to <h6> tags:
<h1>Heading1</h1>
<h2>Heading2</h2>
<h3>Heading3</h3>
<h4>Heading4</h4>
<h5>Heading5</h5>
<h6>Heading6</h6>
Paragraphs:
<p>Paragraph 1</p> <p>Paragraph 2</p>
Line breaks:<br>. The difference between <br> and <p> is that 'br' breaks a line without altering the semantic structure of the page, whereas 'p' sections the page into paragraphs. Note also that 'br' is an empty element in that, while it may have attributes, it can take no content and it does not have to have an end tag.
<p>This <br> is a paragraph <br> with <br> line breaks</p>
Comments:
<!-- This is a comment -->
Comments can help understanding of the markup and do not display in the webpage.
There are several types of markup elements used in HTML.
Structural markup describes the purpose of text
For example, <h2>Golf</h2> establishes "Golf" as a second-level heading. Structural markup does not denote any specific rendering, but most web browsers have default styles for element formatting. Content may be further styled using Cascading Style Sheets (CSS).
Presentational markup describes the appearance of the text, regardless of its purpose
For example <b>boldface</b> indicates that visual output devices should render "boldface" in bold text, but gives little indication what devices that are unable to do this (such as aural devices that read the text aloud) should do. In the case of both <b>bold</b> and <i>italic</i>, there are other elements that may have equivalent visual renderings but which are more semantic in nature, such as <strong>strong text</strong> and <em>emphasised text</em> respectively. It is easier to see how an aural user agent should interpret the latter two elements. However, they are not equivalent to their presentational counterparts: it would be undesirable for a screen-reader to emphasize the name of a book, for instance, but on a screen such a name would be italicized. Most presentational markup elements have become deprecated under the HTML 4.0 specification, in favor of using CSS for styling.
Hypertext markup makes parts of a document into links to other documents
An anchor element creates a hyperlink in the document and its href attribute sets the link's target URL. For example the HTML markup, <a href="http://en.wikipedia.org/">Wikipedia</a>, will render the word "Wikipedia" as a hyperlink. To render an image as a hyperlink, an 'img' element is inserted as content into the 'a' element. Like 'br', 'img' is an empty element with attributes but no content or closing tag. <a href="http://example.org"><img src="image.gif" alt="descriptive text" width="50" height="50" border="0"></a>.
Attributes
Most of the attributes of an element are name-value pairs, separated by "=" and written within the start tag of an element after the element's name. The value may be enclosed in single or double quotes, although values consisting of certain characters can be left unquoted in HTML (but not XHTML).[42][43] Leaving attribute values unquoted is considered unsafe.[44] In contrast with name-value pair attributes, there are some attributes that affect the element simply by their presence in the start tag of the element,[5] like the ismap attribute for the img element.[45]
There are several common attributes that may appear in many elements:
The id attribute provides a document-wide unique identifier for an element. This is used to identify the element so that stylesheets can alter its presentational properties, and scripts may alter, animate or delete its contents or presentation. Appended to the URL of the page, it provides a globally unique identifier for the element, typically a sub-section of the page. For example, the ID "Attributes" in http://en.wikipedia.org/wiki/HTML#Attributes
The class attribute provides a way of classifying similar elements. This can be used for semantic or presentation purposes. For example, an HTML document might semantically use the designation class="notation" to indicate that all elements with this class value are subordinate to the main text of the document. In presentation, such elements might be gathered together and presented as footnotes on a page instead of appearing in the place where they occur in the HTML source. Class attributes are used semantically in microformats. Multiple class values may be specified; for example class="notation important" puts the element into both the 'notation' and the 'important' classes.
An author may use the style attribute to assign presentational properties to a particular element. It is considered better practice to use an element's id or class attributes to select the element from within a stylesheet, though sometimes this can be too cumbersome for a simple, specific, or ad hoc styling.
The title attribute is used to attach subtextual explanation to an element. In most browsers this attribute is displayed as a tooltip.
The lang attribute identifies the natural language of the element's contents, which may be different from that of the rest of the document. For example, in an English-language document:
<p>Oh well, <span lang="fr">c'est la vie</span>, as they say in France.</p>
The abbreviation element, abbr, can be used to demonstrate some of these attributes:
<abbr id="anId" class="jargon" style="color:purple;" title="Hypertext Markup Language">HTML</abbr>
This example displays as HTML; in most browsers, pointing the cursor at the abbreviation should display the title text "Hypertext Markup Language."
Most elements also take the language-related attribute dir to specify text direction, such as with "rtl" for right-to-left text in, for example, Arabic, Persian or Hebrew.[46]
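For example, a right-to-left Hebrew phrase can be embedded in an otherwise left-to-right English paragraph:
<p>The Hebrew greeting <span lang="he" dir="rtl">שלום</span> is written right-to-left.</p>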
Character and entity references
See also: List of XML and HTML character entity references and Unicode and HTML
As of version 4.0, HTML defines a set of 252 character entity references and a set of 1,114,050 numeric character references, both of which allow individual characters to be written via simple markup, rather than literally. A literal character and its markup counterpart are considered equivalent and are rendered identically.
The ability to "escape" characters in this way allows for the characters < and & (when written as &lt; and &amp;, respectively) to be interpreted as character data, rather than markup. For example, a literal < normally indicates the start of a tag, and & normally indicates the start of a character entity reference or numeric character reference; writing it as &amp; or &#x26; or &#38; allows & to be included in the content of an element or in the value of an attribute. The double-quote character ("), when used to quote an attribute value, must also be escaped as &quot; or &#x22; or &#34; when it appears within the attribute value itself. Equivalently, the single-quote character ('), when used to quote an attribute value, must also be escaped as &#x27; or &#39; (not as &apos; except in XHTML documents[47]) when it appears within the attribute value itself. If document authors overlook the need to escape such characters, some browsers can be very forgiving and try to use context to guess their intent. The result is still invalid markup, which makes the document less accessible to other browsers and to other user agents that may try to parse the document for search and indexing purposes for example.
Escaping also allows for characters that are not easily typed, or that are not available in the document's character encoding, to be represented within element and attribute content. For example, the acute-accented e (é), a character typically found only on Western European keyboards, can be written in any HTML document as the entity reference &eacute; or as the numeric references &#233; or &#xE9;, using characters that are available on all keyboards and are supported in all character encodings. Unicode character encodings such as UTF-8 are compatible with all modern browsers and allow direct access to almost all the characters of the world's writing systems.[48]
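A short sketch combining both uses: the first paragraph escapes the markup-significant characters < and &, and the second writes é in three equivalent ways:
<p>The test a &lt; b &amp;&amp; b &lt; c contains escaped characters.</p>
<p>caf&eacute;, caf&#233;, caf&#xE9;</p>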
Data types
HTML defines several data types for element content, such as script data and stylesheet data, and a plethora of types for attribute values, including IDs, names, URIs, numbers, units of length, languages, media descriptors, colors, character encodings, dates and times, and so on. All of these data types are specializations of character data.
Document type declaration
HTML documents are required to start with a Document Type Declaration (informally, a "doctype"). In browsers, the doctype helps to define the rendering mode—particularly whether to use quirks mode.
The original purpose of the doctype was to enable parsing and validation of HTML documents by SGML tools based on the Document Type Definition (DTD). The DTD to which the DOCTYPE refers contains a machine-readable grammar specifying the permitted and prohibited content for a document conforming to such a DTD. Browsers, on the other hand, do not implement HTML as an application of SGML and by consequence do not read the DTD. HTML5 does not define a DTD, because of the technology's inherent limitations, so in HTML5 the doctype declaration, <!doctype html>, does not refer to a DTD.
An example of an HTML 4 doctype is
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
This declaration references the DTD for the 'strict' version of HTML 4.01. SGML-based validators read the DTD in order to properly parse the document and to perform validation. In modern browsers, a valid doctype activates standards mode as opposed to quirks mode.
In addition, HTML 4.01 provides Transitional and Frameset DTDs, as explained below.
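For reference, the corresponding declarations, as given in the HTML 4.01 Recommendation, are:
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Frameset//EN" "http://www.w3.org/TR/html4/frameset.dtd">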
Semantic HTML

Main article: Semantic HTML
Semantic HTML is a way of writing HTML that emphasizes the meaning of the encoded information over its presentation (look). HTML has included semantic markup from its inception,[49] but has also included presentational markup such as <font>, <i> and <center> tags. There are also the semantically neutral span and div tags. Since the late 1990s, when Cascading Style Sheets were beginning to work in most browsers, web authors have been encouraged to avoid the use of presentational HTML markup with a view to the separation of presentation and content.[50]
In a 2001 discussion of the Semantic Web, Tim Berners-Lee and others gave examples of ways in which intelligent software 'agents' may one day automatically trawl the Web and find, filter and correlate previously unrelated, published facts for the benefit of human users.[51] Such agents are not commonplace even now, but some of the ideas of Web 2.0, mashups and price comparison websites may be coming close. The main difference between these web application hybrids and Berners-Lee's semantic agents lies in the fact that the current aggregation and hybridisation of information is usually designed in by web developers, who already know the web locations and the API semantics of the specific data they wish to mash, compare and combine.
An important type of web agent that does trawl and read web pages automatically, without prior knowledge of what it might find, is the Web crawler or search-engine spider. These software agents are dependent on the semantic clarity of web pages they find as they use various techniques and algorithms to read and index millions of web pages a day and provide web users with search facilities without which the World Wide Web would be only a fraction of its current usefulness.
In order for search-engine spiders to be able to rate the significance of pieces of text they find in HTML documents, and also for those creating mashups and other hybrids as well as for more automated agents as they are developed, the semantic structures that exist in HTML need to be widely and uniformly applied to bring out the meaning of published text.[52]
Presentational markup tags are deprecated in current HTML and XHTML recommendations and are illegal in HTML5.
Good semantic HTML also improves the accessibility of web documents (see also Web Content Accessibility Guidelines). For example, when a screen reader or audio browser can correctly ascertain the structure of a correctly marked-up document, it will not waste the visually impaired user's time by reading out repeated or irrelevant information.
Delivery

HTML documents can be delivered by the same means as any other computer file. However, they are most often delivered either by HTTP from a web server or by email.
HTTP
Main article: Hypertext Transfer Protocol
The World Wide Web is composed primarily of HTML documents transmitted from web servers to web browsers using the Hypertext Transfer Protocol (HTTP). However, HTTP is used to serve images, sound, and other content, in addition to HTML. To allow the Web browser to know how to handle each document it receives, other information is transmitted along with the document. This metadata usually includes the MIME type (e.g. text/html or application/xhtml+xml) and the character encoding (see Character encoding in HTML).
In modern browsers, the MIME type that is sent with the HTML document may affect how the document is initially interpreted. A document sent with the XHTML MIME type is expected to be well-formed XML; syntax errors may cause the browser to fail to render it. The same document sent with the HTML MIME type might be displayed successfully, since some browsers are more lenient with HTML.
The W3C recommendations state that XHTML 1.0 documents that follow guidelines set forth in the recommendation's Appendix C may be labeled with either MIME Type.[53] The current XHTML 1.1 Working Draft also states that XHTML 1.1 documents should[54] be labeled with either MIME type.[55]
HTML e-mail
Main article: HTML email
Most graphical email clients allow the use of a subset of HTML (often ill-defined) to provide formatting and semantic markup not available with plain text. This may include typographic information like coloured headings, emphasized and quoted text, inline images and diagrams. Many such clients include both a GUI editor for composing HTML e-mail messages and a rendering engine for displaying them. Use of HTML in e-mail is controversial because of compatibility issues, because it can help disguise phishing attacks, because it can confuse spam filters and because the message size is larger than plain text.
Naming conventions
The most common filename extension for files containing HTML is .html. A common abbreviation of this is .htm, which originated because some early operating systems and file systems, such as DOS and FAT, limited file extensions to three letters.
HTML Application
Main article: HTML Application
An HTML Application (HTA; file extension ".hta") is a Microsoft Windows application that uses HTML and Dynamic HTML in a browser to provide the application's graphical interface. A regular HTML file is confined to the security model of the web browser, communicating only with web servers and manipulating only webpage objects and site cookies. An HTA runs as a fully trusted application and therefore has more privileges, like creation/editing/removal of files and Windows Registry entries. Because they operate outside the browser's security model, HTAs cannot be executed via HTTP, but must be downloaded (just like an EXE file) and executed from the local file system.
Current variations

HTML is precisely what we were trying to PREVENT— ever-breaking links, links going outward only, quotes you can't follow to their origins, no version management, no rights management.
Ted Nelson[56]
Since its inception, HTML and its associated protocols gained acceptance relatively quickly. However, no clear standards existed in the early years of the language. Though its creators originally conceived of HTML as a semantic language devoid of presentation details,[57] practical uses pushed many presentational elements and attributes into the language, driven largely by the various browser vendors. The latest standards surrounding HTML reflect efforts to overcome the sometimes chaotic development of the language[58] and to create a rational foundation for building both meaningful and well-presented documents. To return HTML to its role as a semantic language, the W3C has developed style languages such as CSS and XSL to shoulder the burden of presentation. In conjunction, the HTML specification has slowly reined in the presentational elements.
There are two axes differentiating the current variations of HTML: SGML-based HTML versus XML-based HTML (referred to as XHTML) on one axis, and strict versus transitional (loose) versus frameset on the other.
SGML-based versus XML-based HTML
One difference in the latest HTML specifications lies in the distinction between the SGML-based specification and the XML-based specification. The XML-based specification is usually called XHTML to distinguish it clearly from the more traditional definition. However, the root element name continues to be 'html' even in the XHTML-specified HTML. The W3C intended XHTML 1.0 to be identical to HTML 4.01 except where limitations of XML over the more complex SGML require workarounds. Because XHTML and HTML are closely related, they are sometimes documented in parallel. In such circumstances, some authors conflate the two names as (X)HTML or X(HTML).
Like HTML 4.01, XHTML 1.0 has three sub-specifications: strict, transitional and frameset.
Aside from the different opening declarations for a document, the differences between an HTML 4.01 and XHTML 1.0 document—in each of the corresponding DTDs—are largely syntactic. The underlying syntax of HTML allows many shortcuts that XHTML does not, such as elements with optional opening or closing tags, and even EMPTY elements which must not have an end tag. By contrast, XHTML requires all elements to have an opening tag and a closing tag. XHTML, however, also introduces a new shortcut: an XHTML tag may be opened and closed within the same tag, by including a slash before the end of the tag like this: <br/>. The introduction of this shorthand, which is not used in the SGML declaration for HTML 4.01, may confuse earlier software unfamiliar with this new convention. A fix for this is to include a space before closing the tag, as such: <br />.[59]
To understand the subtle differences between HTML and XHTML, consider the transformation of a valid and well-formed XHTML 1.0 document that adheres to Appendix C (see below) into a valid HTML 4.01 document. To make this translation requires the following steps:
The language for an element should be specified with a lang attribute rather than the XHTML xml:lang attribute; XHTML uses XML's built-in language-defining attribute.
Remove the XML namespace (xmlns=URI). HTML has no facilities for namespaces.
Change the document type declaration from XHTML 1.0 to HTML 4.01. (see DTD section for further explanation).
If present, remove the XML declaration. (Typically this is: <?xml version="1.0" encoding="utf-8"?>).
Ensure that the document's MIME type is set to text/html. For both HTML and XHTML, this comes from the HTTP Content-Type header sent by the server.
Change the XML empty-element syntax to an HTML style empty element (<br/> to <br>).
Those are the main changes necessary to translate a document from XHTML 1.0 to HTML 4.01. To translate from HTML to XHTML would also require the addition of any omitted opening or closing tags. Whether coding in HTML or XHTML, it may be best always to include the optional tags within an HTML document rather than to remember which tags can be omitted.
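A minimal before-and-after sketch of those steps (the document content is illustrative):
<!-- XHTML 1.0, written to the Appendix C compatibility guidelines -->
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
<head><title>Example</title></head>
<body><p>Line one.<br />Line two.</p></body>
</html>
<!-- the same document translated to HTML 4.01 -->
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
<html lang="en">
<head><title>Example</title></head>
<body><p>Line one.<br>Line two.</p></body>
</html>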
A well-formed XHTML document adheres to all the syntax requirements of XML. A valid document adheres to the content specification for XHTML, which describes the document structure.
The W3C recommends several conventions to ensure an easy migration between HTML and XHTML (see HTML Compatibility Guidelines). The following steps can be applied to XHTML 1.0 documents only:
Include both xml:lang and lang attributes on any elements assigning language.
Use the empty-element syntax only for elements specified as empty in HTML.
Include an extra space in empty-element tags: for example <br /> instead of <br/>.
Include explicit close tags for elements that permit content but are left empty (for example, <div></div>, not <div />).
Omit the XML declaration.
By carefully following the W3C's compatibility guidelines, a user agent should be able to interpret the document equally as HTML or XHTML. For documents that are XHTML 1.0 and have been made compatible in this way, the W3C permits them to be served either as HTML (with a text/html MIME type), or as XHTML (with an application/xhtml+xml or application/xml MIME type). When delivered as XHTML, browsers should use an XML parser, which adheres strictly to the XML specifications for parsing the document's contents.
Transitional versus strict
HTML 4 defined three different versions of the language: Strict, Transitional (once called Loose) and Frameset. The Strict version is intended for new documents and is considered best practice, while the Transitional and Frameset versions were developed to make it easier to transition documents that conformed to older HTML specification or didn't conform to any specification to a version of HTML 4. The Transitional and Frameset versions allow for presentational markup, which is omitted in the Strict version. Instead, cascading style sheets are encouraged to improve the presentation of HTML documents. Because XHTML 1 only defines an XML syntax for the language defined by HTML 4, the same differences apply to XHTML 1 as well. The Transitional version allows the following parts of the vocabulary, which are not included in the Strict version:
A looser content model
Inline elements and plain text are allowed directly in: body, blockquote, form, noscript and noframes
Presentation related elements
underline (u) (deprecated; underlined text can be confused with a hyperlink)
strike-through (s)
center (deprecated; use CSS instead)
font (deprecated; use CSS instead)
basefont (deprecated; use CSS instead)
Presentation related attributes
background and bgcolor (both deprecated; use CSS instead) attributes for the body element (a required element according to the W3C)
align (deprecated; use CSS instead) attribute on div, form, paragraph (p) and heading (h1...h6) elements
align, noshade, size and width (all deprecated; use CSS instead) attributes on hr element
align (deprecated; use CSS instead), border, vspace and hspace attributes on img and object elements (caution: of the major browsers, only Internet Explorer supports the object element)
align (deprecated; use CSS instead) attribute on legend and caption elements
align and bgcolor (both deprecated; use CSS instead) on table element
nowrap (obsolete), bgcolor (deprecated; use CSS instead), width and height on td and th elements
bgcolor (deprecated; use CSS instead) attribute on tr element
clear (obsolete) attribute on br element
compact attribute on dl, dir and menu elements
type, compact and start (all deprecated; use CSS instead) attributes on ol and ul elements
type and value attributes on li element
width attribute on pre element
Additional elements in Transitional specification
menu (deprecated) list (no substitute, though an unordered list is recommended)
dir (deprecated) list (no substitute, though an unordered list is recommended)
isindex (deprecated; the element requires server-side support and is typically added to documents server-side; form and input elements can be used as a substitute)
applet (deprecated; use the object element instead)
language (obsolete) attribute on the script element (redundant with the type attribute)
Frame related entities
iframe
noframes
target (deprecated in the map, link and form elements) attribute on a, client-side image-map (map), link, form and base elements
The Frameset version includes everything in the Transitional version, as well as the frameset element (used instead of body) and the frame element.
Frameset versus transitional
In addition to the above transitional differences, the frameset specifications (whether XHTML 1.0 or HTML 4.01) specify a different content model, with frameset replacing body, containing either frame elements, or optionally noframes with a body.
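A minimal sketch of such a document (menu.html and content.html are hypothetical frame sources):
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Frameset//EN" "http://www.w3.org/TR/html4/frameset.dtd">
<html>
<head><title>Frameset example</title></head>
<frameset cols="25%,75%">
<frame src="menu.html">
<frame src="content.html">
<noframes>
<body><p>Your browser does not support frames.</p></body>
</noframes>
</frameset>
</html>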
Summary of specification versions
As this list demonstrates, the loose versions of the specification are maintained for legacy support. However, contrary to popular misconceptions, the move to XHTML does not imply a removal of this legacy support. Rather, the X in XML stands for extensible, and the W3C is modularizing the entire specification and opening it up to independent extensions. The primary achievement in the move from XHTML 1.0 to XHTML 1.1 is the modularization of the entire specification. The strict version of HTML is deployed in XHTML 1.1 through a set of modular extensions to the base XHTML 1.1 specification. Likewise, someone looking for the loose (transitional) or frameset specifications will find similar extended XHTML 1.1 support (much of it is contained in the legacy or frame modules). The modularization also allows separate features to develop on their own timetable. So, for example, XHTML 1.1 will allow quicker migration to emerging XML standards such as MathML (a presentational and semantic math language based on XML) and XForms, a new, highly advanced web-form technology intended to replace the existing HTML forms.
In summary, the HTML 4.01 specification primarily reined in all the various HTML implementations into a single clearly written specification based on SGML. XHTML 1.0 ported this specification, as is, to the new XML-defined specification. Next, XHTML 1.1 takes advantage of the extensible nature of XML and modularizes the whole specification. XHTML 2.0 will be the first step in adding new features to the specification in a standards-body-based approach.
Hypertext features not in HTML

HTML lacks some of the features found in earlier hypertext systems, such as typed links, source tracking, fat links and others.[60] Even some hypertext features that were in early versions of HTML have been ignored by most popular web browsers until recently, such as the link element and in-browser Web page editing.
Sometimes Web services or browser manufacturers remedy these shortcomings. For instance, wikis and content management systems allow surfers to edit the Web pages they visit.
WYSIWYG editors

There are some WYSIWYG editors (What You See Is What You Get), in which the user lays out everything as it is to appear in the HTML document using a graphical user interface (GUI). The editor renders this as an HTML document, so the author no longer needs extensive knowledge of HTML.
The WYSIWYG editing model has been criticized,[61][62] primarily because of the low quality of the generated code; there are voices advocating a change to the WYSIWYM model (What You See Is What You Mean).
WYSIWYG editors remain a controversial topic because of their perceived flaws, such as:
Relying mainly on layout as opposed to meaning, often using markup that does not convey the intended meaning but simply copies the layout.[63]
Often producing extremely verbose and redundant code that fails to make use of the cascading nature of HTML and CSS.
Often producing ungrammatical markup, commonly called tag soup.
Because a great deal of the information in HTML documents is not in the layout, the model has been criticized for its 'what you see is all you get' nature.[64]
Nevertheless, since WYSIWYG editors offer convenience over hand-coded pages as well as not requiring the author to know the finer details of HTML, they still dominate web authoring.
