HiSTORiES - Frendz Forum

dmRULZ



Guys, here UD and I will post histories of PC parts and peripherals that were unknown to you.

Visit regularly.


dmRULZ | Mar 27, 2009 4:26PM
History Of Hard Disk

The commercial usage of hard disk drives began in 1956 with the shipment of an IBM 305 RAMAC system including IBM Model 350 disk storage[1].

For many years, hard disk drives were large, cumbersome devices, more suited to the protected environment of a data center or large office than to a harsh industrial environment (due to their delicacy) or a small office or home (due to their size and power consumption). Before the early 1980s, most hard disk drives had 8-inch (195-210 mm) or 14-inch platters, required an equipment rack or a large amount of floor space (especially the large removable-media drives, which were frequently comparable in size to washing machines), and in many cases needed high-current and/or three-phase power hookups due to the large motors they used. Because of this, hard disk drives were not commonly used with microcomputers until after 1980, when Seagate Technology introduced the ST-506, the first 5.25-inch hard disk drive, with a formatted capacity of 5 megabytes.

The capacity of hard drives has grown exponentially over time. With early personal computers, a drive with a 20 megabyte capacity was considered large. During the mid to late 1990s, when PCs were capable of storing not just text files and documents but pictures, music, and video, internal drives were made with 8 to 20 GB capacities. As of mid 2008, desktop hard disk drives typically have a capacity of 500 to 750 gigabytes, while the largest-capacity drives are 2 terabytes.
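
To put "exponentially" in numbers: going from the ST-506's 5 MB in 1980 to a 2 TB drive in 2009 works out to a compound growth rate of roughly 56% per year. A quick back-of-the-envelope check (my own arithmetic, not from the original article):

```python
# Compound annual growth rate of HDD capacity, 1980 -> 2009,
# using the capacities quoted in the text above.
start_bytes = 5e6    # ST-506: 5 MB (1980), decimal units
end_bytes = 2e12     # largest drive: 2 TB (2009)
years = 2009 - 1980

cagr = (end_bytes / start_bytes) ** (1 / years) - 1
print(f"Capacity grew about {cagr:.0%} per year")  # about 56% per year
```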




1950s - 1970s
Main article: early IBM disk storage

The IBM 350 Disk File, invented by Reynold Johnson, was introduced in 1956 with the IBM 305 RAMAC computer. This drive had fifty 24-inch platters, with a total capacity of five million characters. A single head assembly with two heads was used to access all the platters, making the average access time very slow (just under one second).

The IBM 1301 Disk Storage Unit[2], announced in 1961, introduced the usage of a head for each data surface with the heads having self acting air bearings (flying heads).

The first disk drive to use removable media was the IBM 1311 drive, which used the IBM 1316 disk pack to store two million characters.

In 1973, IBM introduced the IBM 3340 "Winchester" disk drive, the first significant commercial use of low-mass, low-load heads with lubricated media. All modern disk drives now use this technology and/or derivatives thereof. Project lead designer Kenneth Haughton named it after the Winchester 30-30 rifle, because the developers had called it the "30-30" when it was planned to have two 30 MB spindles; however, the actual product shipped with two spindles for data modules of either 35 MB or 70 MB[3].


1980s - PC era

Internal drives became the system of choice on PCs in the 1980s. Most microcomputer hard disk drives in the early 1980s were not sold under their manufacturer's names, but by OEMs as part of larger peripherals (such as the Corvus Disk System and the Apple ProFile). The IBM PC/XT had an internal hard disk drive, however, and this started a trend toward buying "bare" drives (often by mail order) and installing them directly into a system.

External hard drives remained popular for much longer on the Apple Macintosh and other platforms. Every Mac made between 1986 and 1998 has a SCSI port on the back, making external expansion easy; also, "toaster" Compact Macs did not have easily accessible hard drive bays (or, in the case of the Mac Plus, any hard drive bay at all), so on those models, external SCSI disks were the only reasonable option.

1950s through 1990s

See: Five Decades of Disk Drive Industry Firsts[4], maintained by Disk/Trend, an HDD industry marketing consultancy.


1980s to present day
1980 - The world's first gigabyte-capacity disk drive, the IBM 3380, was the size of a refrigerator, weighed 550 pounds (about 250 kg), and had a price tag of $40,000.
1986 - Standardization of SCSI
1989 - Jimmy Zhu and H. Neal Bertram from UCSD proposed exchange decoupled granular microstructure for thin film disk storage media, still used today.
1991 - 2.5-inch 100 megabyte hard drive
1991 - PRML Technology (Digital Read Channel with 'Partial Response Maximum Likelihood' algorithm)
1992 - first 1.3-inch hard disk drive - HP Kittyhawk
1994 - IBM introduces Laser Textured Landing Zones (LZT)
1996 - IBM introduces GMR (Giant MR) Technology for read sensors
1998 - UltraDMA/33 and ATAPI standardized
1999 - IBM releases the Microdrive in 170 MB and 340 MB capacities
2002 - 137 GB addressing space barrier broken (see the note after this list)
2003 - Serial ATA introduced
2005 - First 500 GB hard drive shipping (Hitachi GST)
2005 - Serial ATA 3G standardized
2005 - Seagate introduces Tunnel MagnetoResistive Read Sensor (TMR) and Thermal Spacing Control
2005 - Introduction of faster SAS (Serial Attached SCSI)
2005 - First Perpendicular recording HDD shipped: Toshiba 1.8-inch 40/80 GB[5]
2006 - First 750 GB hard drive (Seagate)
2006 - First 200 GB 2.5" hard drive utilizing Perpendicular recording (Toshiba)
2006 - Fujitsu develops heat-assisted magnetic recording (HAMR) that could one day achieve one terabit per square inch densities.[6]
2007 - First 1 terabyte hard drive[7] (Hitachi GST)
2008 - First 1.5 terabyte hard drive[8] (Seagate)
2009 - First 2.0 terabyte hard drive[9] (Western Digital)
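
A note on the 2002 entry above: the 137 GB barrier comes from the original ATA scheme's 28-bit logical block addresses; with 512-byte sectors, a drive can expose at most 2^28 sectors. The 48-bit LBA introduced around that time raised the ceiling to roughly 144 petabytes. The arithmetic, as an illustration (my own sketch, not part of the original timeline):

```python
# Maximum addressable capacity for 28-bit and 48-bit LBA,
# assuming the classic 512-byte sector.
SECTOR_BYTES = 512

for bits in (28, 48):
    max_bytes = (2 ** bits) * SECTOR_BYTES
    print(f"{bits}-bit LBA: {max_bytes / 1e9:,.1f} GB")

# 28-bit LBA: 137.4 GB            <- the barrier broken in 2002
# 48-bit LBA: 144,115,188.1 GB    (about 144 PB)
```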

dmRULZ | Mar 27, 2009 4:27PM
History Of Internet

The Internet was the result of some visionary thinking by people in the early 1960s who saw great potential value in allowing computers to share information on research and development in scientific and military fields. J.C.R. Licklider of MIT first proposed a global network of computers in 1962, and moved over to the Defense Advanced Research Projects Agency (DARPA) in late 1962 to head the work to develop it. Leonard Kleinrock of MIT and later UCLA developed the theory of packet switching, which was to form the basis of Internet connections. Lawrence Roberts of MIT connected a Massachusetts computer with a California computer in 1965 over dial-up telephone lines. It showed the feasibility of wide area networking, but also showed that the telephone line's circuit switching was inadequate. Kleinrock's packet switching theory was confirmed. Roberts moved over to DARPA in 1966 and developed his plan for ARPANET. These visionaries and many more left unnamed here are the real founders of the Internet.
When Senator Ted Kennedy heard in 1968 that the pioneering Massachusetts company BBN had won the ARPA contract for an "interface message processor (IMP)," he sent a congratulatory telegram to BBN for their ecumenical spirit in winning the "interfaith message processor" contract.


The Internet, then known as ARPANET, was brought online in 1969 under a contract let by the renamed Advanced Research Projects Agency (ARPA) which initially connected four major computers at universities in the southwestern US (UCLA, Stanford Research Institute, UCSB, and the University of Utah). The contract was carried out by BBN of Cambridge, MA under Bob Kahn and went online in December 1969. By June 1970, MIT, Harvard, BBN, and Systems Development Corp (SDC) in Santa Monica, Cal. were added. By January 1971, Stanford, MIT's Lincoln Labs, Carnegie-Mellon, and Case-Western Reserve U were added. In months to come, NASA/Ames, Mitre, Burroughs, RAND, and the U of Illinois plugged in. After that, there were far too many to keep listing here.
Who was the first to use the Internet?
Charley Kline at UCLA sent the first packets on ARPANet as he tried to connect to Stanford Research Institute on Oct 29, 1969. The system crashed as he reached the G in LOGIN!


The Internet was designed in part to provide a communications network that would work even if some of the sites were destroyed by nuclear attack. If the most direct route was not available, routers would direct traffic around the network via alternate routes.

The early Internet was used by computer experts, engineers, scientists, and librarians. There was nothing friendly about it. There were no home or office personal computers in those days, and anyone who used it, whether a computer professional or an engineer or scientist or librarian, had to learn to use a very complex system.

Did Al Gore invent the Internet?
According to a CNN transcript of an interview with Wolf Blitzer, Al Gore said, "During my service in the United States Congress, I took the initiative in creating the Internet." Al Gore was not yet in Congress in 1969 when ARPANET started or in 1974 when the term Internet first came into use. Gore was elected to Congress in 1976. In fairness, Bob Kahn and Vint Cerf acknowledge in a paper titled Al Gore and the Internet that Gore has probably done more than any other elected official to support the growth and development of the Internet from the 1970s to the present.


E-mail was adapted for ARPANET by Ray Tomlinson of BBN in 1972. He picked the @ symbol from the available symbols on his teletype to link the username and address. The telnet protocol, enabling logging on to a remote computer, was published as a Request for Comments (RFC) in 1972. RFC's are a means of sharing developmental work throughout the community. The ftp protocol, enabling file transfers between Internet sites, was published as an RFC in 1973, and from then on RFC's were available electronically to anyone who had use of the ftp protocol.

Libraries began automating and networking their catalogs in the late 1960s independent from ARPA. The visionary Frederick G. Kilgour of the Ohio College Library Center (now OCLC, Inc.) led networking of Ohio libraries during the '60s and '70s. In the mid 1970s more regional consortia from New England, the Southwest states, and the Middle Atlantic states, etc., joined with Ohio to form a national, later international, network. Automated catalogs, not very user-friendly at first, became available to the world, first through telnet or the awkward IBM variant TN3270 and only many years later, through the web. See The History of OCLC
Ethernet, a protocol for many local networks, appeared in 1974, an outgrowth of Harvard student Bob Metcalfe's dissertation on "Packet Networks." The dissertation was initially rejected by the University for not being analytical enough. It later won acceptance when he added some more equations to it.


The Internet matured in the 70's as a result of the TCP/IP architecture first proposed by Bob Kahn at BBN and further developed by Kahn and Vint Cerf at Stanford and others throughout the 70's. It was adopted by the Defense Department in 1980 replacing the earlier Network Control Protocol (NCP) and universally adopted by 1983.

The Unix to Unix Copy Protocol (UUCP) was invented in 1978 at Bell Labs. Usenet was started in 1979 based on UUCP. Newsgroups, which are discussion groups focusing on a topic, followed, providing a means of exchanging information throughout the world. While Usenet is not considered part of the Internet, since it does not share the use of TCP/IP, it linked unix systems around the world, and many Internet sites took advantage of the availability of newsgroups. It was a significant part of the community building that took place on the networks.

Similarly, BITNET (Because It's Time Network) connected IBM mainframes around the educational community and the world to provide mail services beginning in 1981. Listserv software was developed for this network and later others. Gateways were developed to connect BITNET with the Internet and allowed exchange of e-mail, particularly for e-mail discussion lists. These listservs and other forms of e-mail discussion lists formed another major element in the community building that was taking place.

In 1986, the National Science Foundation funded NSFNet as a cross-country 56 Kbps backbone for the Internet. It maintained its sponsorship for nearly a decade, setting rules for non-commercial government and research uses.

As the commands for e-mail, FTP, and telnet were standardized, it became a lot easier for non-technical people to learn to use the nets. It was not easy by today's standards by any means, but it did open up use of the Internet to many more people in universities in particular. Other departments besides the libraries, computer, physics, and engineering departments found ways to make good use of the nets--to communicate with colleagues around the world and to share files and resources.

While the number of sites on the Internet was small, it was fairly easy to keep track of the resources of interest that were available. But as more and more universities and organizations--and their libraries-- connected, the Internet became harder and harder to track. There was more and more need for tools to index the resources that were available.

The first effort, other than library catalogs, to index the Internet was created in 1989, when Peter Deutsch and his crew at McGill University in Montreal created an archiver for ftp sites, which they named Archie. This software would periodically reach out to all known openly available ftp sites, list their files, and build a searchable index of the software. The commands to search Archie were unix commands, and it took some knowledge of unix to use it to its full capability.
McGill University, which hosted the first Archie, found out one day that half the Internet traffic going into Canada from the United States was accessing Archie. Administrators were concerned that the University was subsidizing such a volume of traffic, and closed down Archie to outside access. Fortunately, by that time, there were many more Archies available.
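
The idea behind Archie is easy to sketch: periodically fetch the file listing of every known FTP site, flatten the listings into one index, and answer queries by substring match against the collected names. A toy illustration of that index-and-search loop (the site names and listings below are invented placeholders; the real Archie spoke the FTP protocol to live servers):

```python
# Toy Archie: build a searchable index of (site, filename) pairs,
# then answer substring queries against it.

# Placeholder data standing in for periodic FTP directory listings.
site_listings = {
    "ftp.example.edu": ["gnuchess.tar.Z", "kermit.shar", "rfc1000.txt"],
    "ftp.example.ca": ["xv-3.10.tar.Z", "gnuchess-src.tar.Z"],
}

def build_index(listings):
    """Flatten per-site listings into one searchable index."""
    return [(site, name) for site, names in listings.items() for name in names]

def search(index, term):
    """Return every (site, filename) whose name contains the term."""
    term = term.lower()
    return [(site, name) for site, name in index if term in name.lower()]

index = build_index(site_listings)
for site, name in search(index, "gnuchess"):
    print(f"{site}: {name}")
```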


At about the same time, Brewster Kahle, then at Thinking Machines Corp., developed his Wide Area Information Server (WAIS), which would index the full text of files in a database and allow searches of the files. Several versions with varying degrees of complexity and capability were developed, but the simplest of these were made available to everyone on the nets. At its peak, Thinking Machines maintained pointers to over 600 databases around the world which had been indexed by WAIS. They included such things as the full set of Usenet Frequently Asked Questions files, the full documentation of working papers such as RFC's by those developing the Internet's standards, and much more. Like Archie, its interface was far from intuitive, and it took some effort to learn to use it well.

Peter Scott of the University of Saskatchewan, recognizing the need to bring together information about all the telnet-accessible library catalogs on the web, as well as other telnet resources, brought out his Hytelnet catalog in 1990. It gave a single place to get information about library catalogs and other telnet resources and how to use them. He maintained it for years, and added HyWebCat in 1997 to provide information on web-based catalogs.

In 1991, the first really friendly interface to the Internet was developed at the University of Minnesota. The University wanted to develop a simple menu system to access files and information on campus through their local network. A debate followed between mainframe adherents and those who believed in smaller systems with client-server architecture. The mainframe adherents "won" the debate initially, but since the client-server advocates said they could put up a prototype very quickly, they were given the go-ahead to do a demonstration system. The demonstration system was called a gopher after the U of Minnesota mascot, the golden gopher. The gopher proved to be very prolific, and within a few years there were over 10,000 gophers around the world. It took no knowledge of unix or computer architecture to use: in a gopher system, you type or click on a number to select the menu item you want.

Gopher's usability was enhanced much more when the University of Nevada at Reno developed the VERONICA searchable index of gopher menus. It was purported to be an acronym for Very Easy Rodent-Oriented Netwide Index to Computerized Archives. A spider crawled gopher menus around the world, collecting links and retrieving them for the index. It was so popular that it was very hard to connect to, even though a number of other VERONICA sites were developed to ease the load. Similar indexing software was developed for single sites, called JUGHEAD (Jonzy's Universal Gopher Hierarchy Excavation And Display).
Peter Deutsch, who developed Archie, always insisted that Archie was short for Archiver, and had nothing to do with the comic strip. He was disgusted when VERONICA and JUGHEAD appeared.


In 1989 another significant event took place in making the nets easier to use. Tim Berners-Lee and others at the European Laboratory for Particle Physics, more popularly known as CERN, proposed a new protocol for information distribution. This protocol, which became the World Wide Web in 1991, was based on hypertext--a system of embedding links in text to link to other text, which you have been using every time you selected a text link while reading these pages. Although started before gopher, it was slower to develop.

The development in 1993 of the graphical browser Mosaic by Marc Andreessen and his team at the National Center for Supercomputing Applications (NCSA) gave the protocol its big boost. Later, Andreessen moved to become the brains behind Netscape Corp., which produced the most successful graphical browser and server until Microsoft declared war and developed its own Internet Explorer.

MICHAEL DERTOUZOS
1936-2001

The early days of the web were a confused period as many developers tried to put their personal stamp on ways the web should develop. The web was threatened with becoming a mass of unrelated protocols that would require different software for different applications. The visionary Michael Dertouzos of MIT's Laboratory for Computer Science persuaded Tim Berners-Lee and others to form the World Wide Web Consortium in 1994 to promote and develop standards for the Web. Proprietary plug-ins still abound for the web, but the Consortium has ensured that there are common standards present in every browser.

Read Tim Berners-Lee's tribute to Michael Dertouzos.


Since the Internet was initially funded by the government, it was originally limited to research, education, and government uses. Commercial uses were prohibited unless they directly served the goals of research and education. This policy continued until the early 90's, when independent commercial networks began to grow. It then became possible to route traffic across the country from one commercial site to another without passing through the government funded NSFNet Internet backbone.

Delphi was the first national commercial online service to offer Internet access to its subscribers. It opened up an email connection in July 1992 and full Internet service in November 1992. All pretenses of limitations on commercial use disappeared in May 1995 when the National Science Foundation ended its sponsorship of the Internet backbone, and all traffic relied on commercial networks. AOL, Prodigy, and CompuServe came online. Since commercial usage was so widespread by this time and educational institutions had been paying their own way for some time, the loss of NSF funding had no appreciable effect on costs.

Today, NSF funding has moved beyond supporting the backbone and higher educational institutions to building the K-12 and local public library accesses on the one hand, and the research on the massive high volume connections on the other.

Microsoft's full scale entry into the browser, server, and Internet Service Provider market completed the major shift over to a commercially based Internet. The release of Windows 98 in June 1998 with the Microsoft browser well integrated into the desktop shows Bill Gates' determination to capitalize on the enormous growth of the Internet. Microsoft's success over the past few years has brought court challenges to their dominance. We'll leave it up to you whether you think these battles should be played out in the courts or the marketplace.

During this period of enormous growth, businesses entering the Internet arena scrambled to find economic models that worked. Free services supported by advertising shifted some of the direct costs away from the consumer, temporarily. Services such as Delphi offered free web pages, chat rooms, and message boards for community building. Online sales grew rapidly for products such as books, music CDs, and computers, but the profit margins are slim when price comparisons are so easy, and public trust in online security is still shaky. Business models that have worked well are portal sites, which try to provide everything for everybody, and live auctions. AOL's acquisition of Time-Warner was the largest merger in history when it took place and shows the enormous growth of Internet business! The stock market has had a rocky ride, swooping up and down as the new technology companies, the dot-coms, encountered good news and bad. The decline in advertising income spelled doom for many dot-coms, and a major shakeout followed as the survivors searched for better business models.

A current trend with major implications for the future is the growth of high-speed connections. 56K modems and the providers who supported them spread widely for a while, but they are the low end now. 56K is not fast enough to carry multimedia, such as sound and video, except in low quality. New technologies many times faster, such as cable modems and digital subscriber lines (DSL), are predominant now.

Wireless has grown rapidly in the past few years, and travellers search for the wi-fi "hot spots" where they can connect while they are away from the home or office. Many airports, coffee bars, hotels and motels now routinely provide these services, some for a fee and some for free.

The next big growth area is the surge towards universal wireless access, where almost everywhere is a "hot spot". Municipal wi-fi or city-wide access, WiMAX (offering broader range than wi-fi), Verizon's EV-DO, and other formats will joust for dominance in the USA in the months ahead. The battle is both economic and political.

Another trend that is beginning to affect web designers is the growth of smaller devices to connect to the Internet. Small tablets, pocket PCs, smart phones, game machines, and even GPS devices are now capable of tapping into the web on the go, and many web pages are not designed to work on that scale.

As Heraclitus said around 500 BC, "Nothing is permanent, but change!"

dmRULZ | Mar 27, 2009 4:28PM
History Of Mouse

How did your first mouse look?


At the beginning, the first mouse was a small wooden box. Exactly 40 years ago, the first computer mouse went public: on December 9, 1968, the computer pioneer Douglas C. Engelbart demonstrated his invention for the first time at a conference in San Francisco. Personal computers existed on the market long before the mouse caught on (remember the Kaypro, the metal-case, two-drive portable). Insiders were excited, but the market showed little enthusiasm.

Many years before the mouse reached the market

And it would take many more years before the mouse began its worldwide triumph. Today's computer mice have little in common with the first mouse, which was built by Bill English, the chief engineer of Engelbart's research center at the Stanford Research Institute (SRI). The prototype consisted of a small wooden box with runners, a red button to click, and wheels that converted the movements of the device into movement on the screen.

Steve Jobs: a commercial mouse for Apple, 1983


Fourteen years later, in 1983, Apple presented the Lisa, a computer whose graphical user interface was controlled with a mouse. Apple boss Steve Jobs had recognized the potential of the mouse, at that time still an expensive input device. The mouse developed at Xerox's legendary California research center, the Palo Alto Research Center (PARC), had been ignored by Xerox itself. Jobs made the ball mouse essential Apple equipment and took a license for the mouse, helping to create the Apple enthusiast community. For many years Apple thus dominated the print and publishing offices.

Logitech C7, 1985: a Swiss-made mouse for $99

Today Logitech is the market leader in Europe, ahead of Microsoft. Both compete hard in the US.

dmRULZ | Mar 27, 2009 4:31PM
History of Windows

Windows XP History


Windows NT 5.x, otherwise known as Windows 2000 and Windows XP, comprises the two most-used NT-based operating systems.



Windows 2000, released in Professional, Server, Advanced Server, and Datacenter Server flavors, began its development in late 1996 with a projected release date sometime in 1997. This, however, was a long shot, because there were so many features that needed to be added to, removed from, or fixed in Windows NT, such as improved directory services, plug-and-play support, and FAT32 support, just to name a few.



The first Windows NT 5.0 beta was released in 1997. Many new features were displayed, and Microsoft's goal at the time was to make NT 5 the ideal OS, a must-have upgrade to any previous version of Windows. There was just one issue: as Microsoft had dreaded, problems arose when trying to upgrade 9x to NT 5. As the company tried ever so hard to work in all the features it wanted, the release date kept getting pushed farther and farther back. The team was working on errorless networking, Windows Installer, NTFS 5.0, the Microsoft Management Console (MMC), file encryption, FAT32 support, and even more. It seemed like Microsoft was doing more than it could possibly handle.


Trying to make a late 1998 release date, Microsoft said that it would possibly cut some features from the OS. Features such as IntelliMirror and Active Directory were the features in question, but they found their way into NT 5 after all. At this time, it was also confirmed that the future of Windows would lie in the hands of Windows NT. NT 5 was intended to merge the home and business operating systems around the NT kernel.


Windows NT 5 was also now optimized for laptops with new hibernation support, hot docking, and hot-swapping technologies. These, as well as IntelliMirror, would be featured in Beta 2, which was supposed to appear in June of 1998.


Along the way, it was decided that Internet Explorer 5.0 would be integrated into Windows NT 5, now pending a release date in early 1999. NT was now to have a seamless integration of DirectX and the Internet and advanced storage features, among other features that need not be mentioned. Also, in May of 1998, the Active Directory was finally being completed.


Promises were still being made that Beta 2 would appear in June, but alas, that did not happen. It was decided that NT 5 Beta 2 would not be released with nearly as many features as the Windows development team would have liked, but this just made way for a Beta 3 release to follow it up, hopefully to fill in the holes left by Beta 2.


Finally, in August of 1998, Beta 2 was released. The features showcased in this build were the Personalized Start Menu, better hardware support, and a definite end to what became known as "DLL Hell". Also at around this time, Microsoft came to the realization that Windows NT 5 would be a strictly business-oriented operating system. Trying to integrate some of the features from Windows 98, for example WebTV for Windows, among others, was becoming more of a pain and, consequently, more of a delay.


In September of 1998, the release date was pushed back to late 1999, if not early 2000, and in October, the name "Windows 2000" was chosen for the OS.


In early 1999, it was decided that Windows 2000 would receive no more new features, and therefore, a release date of October 6th was in order. Beta 3 was due to be released in mid-April, and it was supposed to be a fairly stable operating system, also intended to be the best laptop OS on the market. Beta 3 was released on April 30th and was, in fact, a quite solid operating system. Release candidates were right around the corner.


Three release candidates later, on December 14, 1999, Windows 2000 build 2195 was designated as the last build of Windows 2000. The next day, it "went gold". Finally, in mid-February 2000, Windows 2000 shipped.


Microsoft Windows 2000 SP1 in a Nutshell

Provides users with updated driver sets
Corrects many reliability issues such as data corruption and memory loss
Fixes many Windows 2000 installation issues
Y2K compliance
Latest security hole fixes (Hot fixes)


Microsoft Windows 2000 SP2 in a Nutshell

Fixes many DHCP issues
Corrects SP1 setup issues
Installs 128-bit encryption
Latest security hole fixes (Hot fixes)


Microsoft Windows 2000 SP3 in a Nutshell

Adds 48-bit LBA Support for ATAPI Disk Drives in Windows 2000
Corrects SP2 printer issues
Latest security hole fixes (Hot fixes)


Microsoft Windows 2000 SP4 in a Nutshell

Improved program compatibility
Many access violations and stop errors fixed
Improvements to Active Directory
Latest security hole fixes (Hot fixes)


Even during the Windows 2000 development, Windows XP was being planned. XP is an OS that is taken one of two ways: a Windows 2000 clone filled with eye candy, or the greatest incarnation of Windows ever created. I wouldn't necessarily agree entirely with the latter, but Windows XP is quite a good OS bringing home and business users alike to the NT kernel.


Plans for Windows XP, then codenamed Neptune (for the home release) and Odyssey (for the business release), started in early 1999. It was decided that this version of Windows would be the one that would end the days of the Win9x kernel. The NT kernel was now here to stay. After one development release of Neptune, both Neptune and Odyssey were scrapped. Not much came out of this other than "feature" integration between Windows 2000 and Windows ME.


Early in 2000, Windows codenamed Whistler was born, which would eventually become Windows XP. In the first couple of builds, not much had really changed from Windows 2000. In fact, the About screens still said "Windows 2000". Finally, in build 2250, visual enhancements were being made. The rather cool watercolor theme (which I wish they hadn't scrapped) was introduced, as was the new Start panel. Development was really on its way. Aside from visual enhancements, actual features were being developed. In build 2257, Microsoft introduced the Personal Firewall, which is a quite handy tool in the final release of XP. A few builds later, not much had really changed. The OS was made a little more usable, but that was about it.


After a somewhat long wait, on October 31, 2000, Beta 1 was released to testers. It was also learned (probably to no one's surprise) that Microsoft would be integrating media player and instant messaging technologies into the OS. Other than that, there is not really much to say about Beta 1. However, early in January of 2001, Microsoft released Whistler build 2410, the build that introduced Windows Product Activation. This build also introduced the integration of Windows Media Player and Windows Messenger. New, fancier icon sets were being developed, and the OS was slowly transforming into the Windows XP that we know today. Grouping was available in Windows Explorer, fast user switching came to be, and Rover showed up as your "handy" search assistant. What more could you possibly ask for?


Not too long after, build 2416 was introduced. The Help and Support center was fancied up a bit and looked less like it did in Windows ME. Windows Media Player 8 was being tweaked, and the new icons were beginning to show up everywhere. Aside from that, there is nothing much to say about this build. Build 2419 was rapidly released after that, and it had the completed XP setup procedure as well as the Bliss wallpaper.


In February of 2001, Beta 2 was released, which introduced the new Luna interface. Shortly after the release, the name "Windows XP" was decided on. Many builds were released, improving on the UI and under-the-hood features. XP was coming together and was nearly complete. A couple of release candidates later, XP was completed, and on August 24, 2001, Windows XP, in both home and professional flavors, was released to manufacturing. On October 25, 2001, in New York City, the operating system was launched.


Microsoft Windows XP SP1 in a Nutshell

Improved program compatibility
USB 2.0 support added
Improved FireWire support
Many stop errors addressed
Latest security hole fixes (Hot fixes)


Microsoft Windows XP SP2 in a Nutshell

More improved program compatibility
Security issues with Internet Explorer addressed
Microsoft's Security Center makes sure your computer is not vulnerable to viruses and exploits
Internet Explorer now includes pop-up blocking and ActiveX blocking
Many more stop errors addressed
Latest security hole fixes (Hot fixes)

It's good that Windows XP came along.


LiMPBiZKiT | Mar 27, 2009 10:20PM
Gr8 work dude keep it up --->

starzlove | Mar 28, 2009 4:09AM
more..

amchi.mumbai | Mar 28, 2009 3:45PM
In CHIP magazine there are many more like these: monitor, viruses, HDD, etc.

Udrulz | Mar 28, 2009 4:37PM
Graph History of Monitor Data

See a graphical view of the metrics that you collected for an extended period of time. A Graph History window shows you graphs for long periods of time for any of the available metrics as compared to the System Monitor window, which shows you real-time data for the last hour. The Graph History window allows you to display one graph at a time. However, you can display more than one Graph History window to make comparisons between systems.

How the Graph History function works

The amount of data that is available to be graphed is determined by the settings that you selected from the Collection Services properties, specifically the Collection retention period. Use Operations Navigator to activate PM/400 over multiple systems. When you activate PM/400, you can use the Graph History function to see data that was collected days ago, weeks ago, or months ago. You go beyond the real-time monitor capabilities. You have access to summary data or detailed data. Without PM/400 enabled, the graph data field supports 1 to 7 days. With PM/400 enabled, you define how long your management collection objects remain on the system:
Detailed data
The length of time that management collection objects remain in the file system before they are deleted. You can select a specific time period in hours or days, or you can select Permanent. If you select Permanent, the management collection objects will not be automatically deleted.
Graph data
The length of time that the details and properties data shown in the Graph History window remain in the system before they are deleted. If you do not start PM/400, you can specify 1 to 7 days. If you do start PM/400, you can specify 1 to 30 days (see the sketch after this list). The default is one hour.
Summary data
The length of time that the data collection points of a graph can be displayed in the Graph History window or remain in the system before they are deleted. No details or properties data is available. You must start PM/400 to enable the summary data fields. The default is one month.
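
As a compact restatement of those rules, here is a small validation sketch (the function and parameter names are my own, not an IBM API; the ranges are the ones documented above):

```python
# Documented retention ranges for Graph History data: without PM/400,
# graph data keeps 1-7 days; with PM/400 enabled, 1-30 days.
def validate_graph_retention(days: int, pm400_enabled: bool) -> None:
    limit = 30 if pm400_enabled else 7
    if not 1 <= days <= limit:
        raise ValueError(f"graph data retention must be 1-{limit} days, got {days}")

validate_graph_retention(5, pm400_enabled=False)   # OK
validate_graph_retention(30, pm400_enabled=True)   # OK
# validate_graph_retention(10, pm400_enabled=False)  # would raise ValueError
```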

Collection points on the graph line are shown by three different graphics that correspond to the three levels of data that are available:
A square collection point means the data includes both the detailed information and properties information.
A triangular collection point represents summarized data that contains detailed information.
A circular collection point represents data that contains no detailed information or properties information.

To view the graph history of the data that you are monitoring, follow these steps:
1. Follow the Operations Navigator online help for starting Collection Services on either a single system or on a system group.
2. From the Start Collection Services - General page, select Start Performance Management/400 if needed.
3. Make changes to the other values for the collection retention period.
4. Click OK.
5. View the graph history by right-clicking either a system monitor or a Collection Services object and selecting Graph History.
6. Click Refresh to see the graphical view.

See managing Performance Management/400 for other tasks that you can perform with PM/400.

See monitor systems for other tools that you can use to monitor system performance.



Udrulz | Mar 28, 2009 4:39PM
THE COMPUTER AGE AND THE INVENTOR

KONRAD ZUSE (1910-1995)




Inventor of first working programmable computer (Z3, 1941)

Pronounced: "Conrud Tsoosay"


1935-1938 : Konrad Zuse builds Z1, world's first program-controlled computer. Despite certain mechanical engineering problems it had all the basic ingredients of modern machines, using the binary system and today's standard separation of storage and control. Zuse's 1936 patent application (Z23139/GMD Nr. 005/021) also suggests a von Neumann architecture (re-invented in 1945) with program and data modifiable in storage.

1941: Zuse completes Z3, world's first fully functional programmable computer.

1945 : Zuse describes Plankalkuel, world's first higher-level programming language, containing many standard features of today's programming languages. FORTRAN came almost a decade later. Zuse also used Plankalkuel to design world's first chess program.

1946 : Zuse founds world's first computer startup company: the Zuse-Ingenieurbüro Hopferau. Venture capital raised through ETH Zürich and an IBM option on Zuse's patents.

1949 : Wilkes and Renwick complete EDSAC (Cambridge, UK). Program and data both modifiable in storage, as suggested in Zuse's 1936 patent application, but not implemented in Z1-Z3.

1950: Despite having lost many years of work through the destruction of Berlin, Zuse leases world's first commercial computer (the Z4) to ETHZ, several months before the sale of the first UNIVAC.

1967: Zuse is the first to suggest that the universe itself is running on a grid of computers (digital physics); 1969 he publishes the book "Rechnender Raum" (Computing Space); in the new millennium such wild ideas have suddenly started to attract a lot of attention (e.g., see the "everything" archive).


Different eras of political history are frequently identified with royal dynasties, or great wars and revolutions.

Eras in the history of art and architecture may be distinguished by styles such as Renaissance, Gothic, Impressionist or Surrealist, and so on.

Techniques too have marked different eras over the centuries: from the primitive tools of the Stone Age to the Industrial Age, marked by steam and electrical power and the development of turbines and engines.

Today, we have entered a new era: the computer age – an age which owes everything to inventors.

Charles Babbage, an English mathematician, is considered to be the great-grandfather of the computer. Over 150 years ago, in 1840 to be exact, he designed a sophisticated calculating machine and called it the "Analytical Engine." As with many inventions, his creation was far in advance of its time.

It took another 100 years before the first computers were built, and as you know, they were huge and incredibly heavy. Take, for instance, the famous Mark I. It was one of the first electro-mechanical computers and was used during World War II by the U.S. Navy. In comparison to 20th-century systems, it could be likened to a battleship: 2.6 meters high, 16 meters wide, 2 meters deep, and weighing a massive 5 tons!

The machine – the hardware – could not develop without the software to match, of course. In this respect, two women mathematicians played key roles.

Ada Lovelace Byron, daughter of the poet Lord Byron, wrote in 1843 what today we'd call programs for Charles Babbage's "Analytical Engine." She was a pioneer and is considered to be the very first programmer in history. That's why, 130 years later, the U.S. Department of Defense gave her forename, Ada (A-D-A), to one of the most important programming languages in the world. It is used not only by the U.S. Army, Navy, and Air Force but also by big industry, universities, and other centers of research.

Grace Hopper, an American woman, invented the very first compiler in 1952, a program which translates a programming language into a form that can be understood by computers. It was a sensational breakthrough which opened doors to automatic programming and thus directly to contemporary personal computers (PCs).

Today, computers are at the center of thousands upon thousands of other inventions. They are the heartbeats of the modern world. Computers are everywhere – from kitchens to concrete mixers, from planes to pockets. They listen. They speak. They act. Never in world history has one invention had such an influence on humanity as a whole. Without the computer age, there would be no global awareness.

The Internet, in particular, has created a brand new environment. A new culture has been born – free, rapid, and universal – where people share their knowledge and expertise. Information and communication techniques have been turned upside down, distance has been eliminated, frontiers abolished. A tremendous interactive potential is burgeoning on our planet Earth today. Like it or lump it – none can stop it!

I would like to mention something concerning the Internet. The inventors in 1990 of the World Wide Web (WWW), which revolutionized the contemporary computer world, did not become millionaires. Briton Tim Berners-Lee and Belgian Robert Cailliau, both researchers at the European Centre for Nuclear Research (CERN) in Geneva, did not make any money through their invention of the WWW. They refused to patent it, fearing that doing so would make use of the Web prohibitively expensive and prevent its use worldwide. Thus, they passed up a fortune so that our world can learn and communicate today, and we should be grateful to them for their foresight.



The invention of the computer, with its multitude of programs and new information technologies, is transforming the traditional perception of an inventor. A more positive image is emerging. No longer personified by an eccentric male genius working alone in an attic, garage, or basement, today's inventors resemble more and more the millions of other scientists, industrial researchers, and entrepreneurs in workshops or laboratories, seated at a computer station. All use the "mouse" instead of a pencil, and their drawing boards are computer screens.

Women inventors have also contributed to this change in the traditional image of the inventor, particularly in certain fields such as chemistry, pharmaceuticals, biotechnology, not to speak of computer software.

In the USA, for instance, the number of women inventors with patents in the field of chemistry increased three-and-a-half times during the period from 1977 (2.8%) to 1988 (9.9%). It would be interesting indeed to see what further increases have taken place over the past 10 years.

Another popular fallacy is that the large majority of inventors are not only eccentric and male but also rather ancient! The truth is that, thanks to the computer, people are actually inventing more and more at an increasingly youthful age. In Silicon Valley, a 30-year-old inventor is considered already long in the tooth, and many newcomers to the inventive world are in their 20s. Some predict that in a few years' time, there'll be a new generation of 14-year-old millionaire inventors appearing in Silicon Valley!

Unfortunately, this new generation of inventors – women and very young people – is insufficiently present among representatives of most inventor associations worldwide. These are still run by people who, although totally dedicated to their work, were neither born nor grew up in the computer age. Therefore they find adaptation difficult. Information technology frequently passes them by. This is often a cause of very real problems.



Let's now consider some of the ways inventors can make use of the new technologies of the computer age.

We all know that inventors need a lot of information. Technological information contained in patent documents is essential at the very earliest stages of invention. It can avoid duplication in research work. It can provide ideas for further development of existing technology. It can also give a glimpse of the technological activities of competitors. That is why Patent Offices have put their patent documentation databases on the Internet. Access is not only fast, but easily accessible, and available 24 hours a day, 365 days a year.

It's also free in the sense that it doesn’t cost the inventor a single cent to consult such documentation! Time-consuming travel to Patent Offices or libraries storing patent documents is a thing of the past. The inventor also has access to much more data than through a single database. Obviously, the ideal is one huge library, containing millions of patent documents from all over the world.

The European Patent Office (EPO) has tried to create this world library of patent documents. I am glad to inform you that the IFIA Web site allows surfers to visit this EPO site, and through it, to jump to the major providers of patent information in the world, whether they be Patent Offices or private enterprises, such as IBM. A further advantage is the constant updating of all these databases by each of the providers. In brief, it's sufficient to click on one address, the EPO address, to access millions of documents: <http://www.european-patent-office.org/online/index.htm>.



For many inventors, the marketing stage often starts with a prototype to prove that the product works satisfactorily, and what's more, works safely. The greater a model's perfection, the greater the chances of selling a license to a manufacturer. But a professional prototype, as close to the final product as possible, can rapidly become extremely expensive.

One fantastic and inexpensive alternative to a physical prototype is a computerized model. Basically, it amounts to modelling the invention from all angles on a computer, with self-running commentary, demonstrations and animation of all the invention’s functions. The diskette or ZIP disk can be duplicated in as many copies as necessary, and sent via regular mail.

The computerized prototype can also be loaded onto a video tape and copies made. Busy executives – prospective investors, licensees or buyers – seem, however, to prefer a diskette which is easy to put into the computer, in addition to the fact that most offices do not have a TV and VCR. The video tape would seem more appropriate when presenting an invention at an exhibition or fair.

On the subject of invention shows, let me stress in passing that virtual exhibitions exist already. One of IFIA's members, the Hungarian Association of Inventors, even launched an international competition of inventions last March with a virtual jury, each member sitting serenely in front of his/her computer screen, somewhere around the world.



With the computer age upon us, we are also moving slowly but surely away from the traditional paper system of filing patent applications to the new electronic filing system – a rapid and cheap transmission system of text and image data.

Patent Offices are now engaged in preparing the necessary tools to assist inventors and other applicants in this form of electronic commerce. Naturally, their Web sites will have to provide links to reference material, technical guidelines and instructions on filing applications.

The Patent Cooperation Treaty (PCT), administered by the World Intellectual Property Organization (WIPO) in Geneva, provides inventors and industry with an advantageous route for obtaining patent protection worldwide. Starting from January 1, 1999, the PCT is offering a reduction of US$ 200 (two hundred) for every electronic filing. That's quite an encouragement to use this system!

However, no system is perfect. It still remains a fact that Patent Offices are faced with serious technical issues related to information security. Namely: How to ensure the security and authenticity of the transmission and exchange of unpublished – therefore confidential – data? The next question to arise is: Who will be responsible in case of third-party intrusions? The Patent Office? – or the applicant?

Because of the international nature of the patent system, it has been decided recently that all information security issues will be examined in the framework of WIPO.

To better understand some of the many issues involved, I would like to give two examples as described in a WIPO document discussed a few days ago in Geneva:

" ... any exchange between applicants and examiners requires excellent levels of security and data privacy. Furthermore, many of these activities require some assurance of the identity of one party or another. For example, if an applicant is exchanging information with an examiner, the examiner needs to know that the individual is indeed authorized to provide information, (e.g. proof of identity), and the applicant needs to be confident that he or she is indeed in contact with a patent examiner and not a clever hacker. [...]"

"The exchange of priority documents provides another interesting example. If a priority document is to be exchanged in electronic form, it needs to be validated by the originating party. In other words, the document needs to be signed to demonstrate its authenticity, it needs to have a guaranteed time stamp associated with the transaction, preferably by a third party (to prevent presumed or actual forgery of dates and times), and it needs to have some guarantee of accuracy, so that a party obtaining the document can tell if tampering occurred..."



Every now and then we hear some people say, "There's hardly anything left to invent. Everything has been invented already!" What a silly remark! You can be certain that inventors will continue inventing, and new discoveries will be made, right up to the very last minute before the world comes to an end! But to return to today: with the computer age, the possibilities of invention are endless, in all possible fields.

It has also been said that the computer will eventually invent the inventor. By that I mean that one day, the computer will replace the inventor. Up to a point, I must agree – but only to a certain extent. You can feed the computer with billions of data. One has even beaten a world chess champion. Nevertheless, the computer has no humanity, no imagination, no sensitivity or affectivity, and no inherent wisdom. Can it smell the perfume of a rose? ...interpret the color of a sunrise? Can it caress the cheek of a child? ...or savor the taste of Hong Kong's dim sum?! Above all it's a machine – a fantastic machine – but remember, it’s only a machine.

So let's not make a new god out of the computer, as some tend to do. But rather use its possibilities to a maximum ... and through it, try quite simply to build a better world. That should be our motto.


Note: Babbage (UK, around 1840) planned but was unable to build a non-binary, decimal, programmable machine. The binary ABC (US, 1942) of Atanasoff (of Bulgarian origin) and Eckert and Mauchly's decimal ENIAC (US, 1945/46) were special-purpose calculators, in principle like those of Schickard (1623), Pascal (1640), and Leibniz (1670), though faster (with tubes instead of gears; today we use transistors). None of these machines was freely programmable. Neither was Turing et al.'s Colossus (UK, 1943-45), which was used to break the Nazi codes. The first programmable machine built by someone other than Zuse was Aiken's MARK I (US, 1944), which was still decimal, without separation of storage and control.




In 1970, Peter's renowned atlas of world history already listed Zuse among the century's 30 most important figures, along with Einstein, Gandhi, Hitler, Lenin, Roosevelt, Mao, Picasso, etc. A fairly complete collection of Zuse's writings and pictures of his machines can be found near UDrulz.



Udrulz | Mar 28, 2009 4:44PM
Liquid crystal display

(From Wikipedia, the free encyclopedia)

Layers of a reflective twisted nematic liquid crystal display, front to back:
1. Polarizing filter film with a vertical axis to polarize light as it enters.
2. Glass substrate with ITO electrodes. The shapes of these electrodes will determine the dark shapes that will appear when the LCD is turned on or off. Vertical ridges etched on the surface are smooth.
3. Twisted nematic liquid crystals.
4. Glass substrate with common electrode film (ITO) with horizontal ridges to line up with the horizontal filter.
5. Polarizing filter film with a horizontal axis to block/pass light.
6. Reflective surface to send light back to viewer. (In a backlit LCD, this layer is replaced with a light source.)

[Figure: a subpixel of a color LCD]

[Figure: comparison of the OLPC XO-1 display (left) with a typical color LCD display. The images show 1×1 mm of each screen. A typical LCD addresses groups of 3 locations as pixels; the XO-1 display addresses each location as a separate pixel.]

A liquid crystal display (LCD) is a thin, flat display device made up of any number of color or monochrome pixels arrayed in front of a light source or reflector. It is often utilized in battery-powered electronic devices because it uses very small amounts of electric power.




Overview

Each pixel of an LCD typically consists of a layer of molecules aligned between two transparent electrodes, and two polarizing filters, the axes of transmission of which are (in most of the cases) perpendicular to each other. With no liquid crystal between the polarizing filters, light passing through the first filter would be blocked by the second (crossed) polarizer.

The surface of the electrodes that are in contact with the liquid crystal material are treated so as to align the liquid crystal molecules in a particular direction. This treatment typically consists of a thin polymer layer that is unidirectionally rubbed using, for example, a cloth. The direction of the liquid crystal alignment is then defined by the direction of rubbing. Electrodes are made of a transparent conductor called "ITO" or Indium Tin Oxide.

Before applying an electric field, the orientation of the liquid crystal molecules is determined by the alignment at the surfaces. In a twisted nematic device (still the most common liquid crystal device), the surface alignment directions at the two electrodes are perpendicular to each other, and so the molecules arrange themselves in a helical structure, or twist. Because the liquid crystal material is birefringent, light passing through one polarizing filter is rotated by the liquid crystal helix as it passes through the liquid crystal layer, allowing it to pass through the second polarized filter. Half of the incident light is absorbed by the first polarizing filter, but otherwise the entire assembly is transparent.

When a voltage is applied across the electrodes, a torque acts to align the liquid crystal molecules parallel to the electric field, distorting the helical structure (this is resisted by elastic forces, since the molecules are constrained at the surfaces). This reduces the rotation of the polarization of the incident light, and the device appears gray. If the applied voltage is large enough, the liquid crystal molecules in the center of the layer are almost completely untwisted, the polarization of the incident light is not rotated as it passes through the liquid crystal layer, and the light emerges polarized perpendicular to the second filter: it is blocked and the pixel appears black. By controlling the voltage applied across the liquid crystal layer in each pixel, light can be allowed to pass through in varying amounts, thus producing different levels of gray.


[Figure: LCD alarm clock]

The optical effect of a twisted nematic device in the voltage-on state is far less dependent on variations in the device thickness than that in the voltage-off state. Because of this, these devices are usually operated between crossed polarizers such that they appear bright with no voltage (the eye is much more sensitive to variations in the dark state than in the bright state). These devices can also be operated between parallel polarizers, in which case the bright and dark states are reversed. The voltage-off dark state in this configuration appears blotchy, however, because of small variations of thickness across the device.

Both the liquid crystal material and the alignment layer material contain ionic compounds. If an electric field of one particular polarity is applied for a long period of time, this ionic material is attracted to the surfaces and degrades the device performance. This is avoided either by applying an alternating current or by reversing the polarity of the electric field as the device is addressed (the response of the liquid crystal layer is identical, regardless of the polarity of the applied field).

When a large number of pixels is needed in a display, it is not technically feasible to drive each one directly, since each pixel would then require independent electrodes. Instead, the display is multiplexed. In a multiplexed display, electrodes on one side of the display are grouped and wired together (typically in columns), and each group gets its own voltage source. On the other side, the electrodes are also grouped (typically in rows), with each group getting a voltage sink. The groups are designed so that each pixel has a unique, unshared combination of source and sink. The electronics, or the software driving the electronics, then turns on sinks in sequence and drives sources for the pixels of each sink.
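As a concrete illustration, here is a minimal Python sketch of the scanning scheme just described: sinks (rows) are enabled one at a time while sources (columns) are driven for that row's pixels. The 4x4 frame and the drive_row stub are hypothetical, not any real controller's interface.

ROWS, COLS = 4, 4
frame = [
    [1, 0, 0, 1],
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [1, 0, 0, 1],
]  # target image: 1 = dark pixel

def drive_row(row_index, column_levels):
    # Stand-in for the drive electronics: sink one row, source its columns.
    print("row %d: columns driven to %s" % (row_index, column_levels))

def refresh(frame):
    # One refresh pass: enable each sink (row) in sequence and drive the
    # sources (columns) for exactly the pixels of that row.
    for r in range(ROWS):
        drive_row(r, frame[r])

refresh(frame)

Each pixel sees its drive voltage for only a fraction of the refresh pass, which is why multiplexed pixels must retain their state between passes.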


Specifications

Important factors to consider when evaluating an LCD monitor:

Resolution: The horizontal and vertical size expressed in pixels (e.g., 1024x768). Unlike CRT monitors, LCD monitors have a single native resolution at which the image is sharpest.
Dot pitch: The distance between the centers of two adjacent pixels. The smaller the dot pitch, the less granularity is present, resulting in a sharper image. Dot pitch may be the same both vertically and horizontally, or different (less common). A worked calculation appears after this list.
Viewable size: The size of an LCD panel measured on the diagonal (more specifically known as active display area).
Response time: The minimum time necessary to change a pixel's color or brightness, often split into rise and fall times. For LCD monitors it is measured as btb (black to black) or gtg (gray to gray), and these differing measurement methods make comparisons difficult.
Refresh rate: The number of times per second the monitor redraws the image it is given. A refresh rate that is too low can cause flickering and is more noticeable on larger monitors. Many high-end LCD televisions now have a 120 Hz refresh rate (current and former NTSC countries only). This allows for less distortion when movies filmed at 24 frames per second (fps) are viewed, due to the elimination of telecine (3:2 pulldown); 120 was chosen as the least common multiple of 24 fps (cinema) and 30 fps (TV).
Matrix type: Active or Passive.
Viewing angle: The maximum angle at which the image remains acceptable (more formally known as viewing direction).
Color support: How many colors are supported (more formally known as color gamut).
Brightness: The amount of light emitted from the display (more formally known as luminance).
Contrast ratio: The ratio of the intensity of the brightest bright to the darkest dark.
Aspect ratio: The ratio of the width to the height (for example, 4:3, 16:9 or 16:10).
Input ports (e.g., DVI, VGA, LVDS, or even S-Video and HDMI).
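Two of the figures above lend themselves to quick calculation. The following Python sketch derives pixels per inch and dot pitch from a native resolution and diagonal size (the 20.1-inch 1600x1200 panel is a made-up example), and confirms why 120 Hz suits both film and TV rates:

import math

h_px, v_px = 1600, 1200        # hypothetical native resolution
diag_in = 20.1                  # hypothetical viewable diagonal, inches

diag_px = math.hypot(h_px, v_px)
ppi = diag_px / diag_in         # pixels per inch along the diagonal
dot_pitch_mm = 25.4 / ppi       # centre-to-centre pixel spacing

print("%.1f PPI, dot pitch %.3f mm" % (ppi, dot_pitch_mm))

# 120 is the least common multiple of 24 fps (cinema) and 30 fps (TV):
print(24 * 30 // math.gcd(24, 30))   # -> 120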

Brief history

1888: Friedrich Reinitzer (1858-1927) discovers the liquid crystalline nature of cholesterol extracted from carrots (that is, two melting points and generation of colors) and published his findings at a meeting of the Vienna Chemical Society on May 3, 1888 (F. Reinitzer: Beiträge zur Kenntniss des Cholesterins, Monatshefte für Chemie (Wien) 9, 421-441 (1888)).[1]
1904: Otto Lehmann publishes his work "Liquid Crystals".
1911: Charles Mauguin describes the structure and properties of liquid crystals.
1936: The Marconi Wireless Telegraph company patents the first practical application of the technology, "The Liquid Crystal Light Valve".
1962: The first major English language publication on the subject "Molecular Structure and Properties of Liquid Crystals", by Dr. George W. Gray.[2]
1962: Richard Williams of RCA found that liquid crystals had some interesting electro-optic characteristics and he realized an electro-optical effect by generating stripe-patterns in a thin layer of liquid crystal material by the application of a voltage. This effect is based on an electro-hydrodynamic instability forming what is now called “Williams domains” inside the liquid crystal.[3]
1964: In the fall of 1964 George H. Heilmeier, then working at RCA Laboratories on the effect discovered by Williams, achieved the switching of colors by field-induced realignment of dichroic dyes in a homeotropically oriented liquid crystal. Practical problems with this new electro-optical effect led Heilmeier to continue work on scattering effects in liquid crystals, culminating in the first operational liquid crystal display, based on what he called the dynamic scattering mode (DSM). Application of a voltage to a DSM display switches the initially clear transparent liquid crystal layer into a milky, turbid state. DSM displays could be operated in transmissive or reflective mode, but they required a considerable current for their operation.[4][5][6]
Pioneering work on liquid crystals was undertaken in the late 1960s by the UK's Royal Radar Establishment at Malvern. The team at RRE supported ongoing work by George Gray and his team at the University of Hull, who ultimately discovered the cyanobiphenyl liquid crystals (which had the right stability and temperature properties for application in LCDs).

1968: NCR's John L. Janning makes pioneering contributions to LCD construction (NCR History; retrieved 2008-01-24).
1970: On December 4, 1970, the twisted nematic field effect in liquid crystals was filed for patent by Hoffmann-LaRoche in Switzerland (Swiss patent No. 532 261), with Wolfgang Helfrich and Martin Schadt (then working for the Central Research Laboratories) listed as inventors.[4] Hoffmann-La Roche then licensed the invention to the Swiss manufacturer Brown, Boveri & Cie, which produced displays for wristwatches during the 1970s, and to the Japanese electronics industry, which soon produced the first digital quartz wristwatches with TN-LCDs and numerous other products. James Fergason at the Westinghouse Research Laboratories in Pittsburgh, while working with Sardari Arora and Alfred Saupe at the Kent State University Liquid Crystal Institute, filed an identical patent in the USA on April 22, 1971.[7] In 1971 Fergason's company ILIXCO (now LXD Incorporated) produced the first LCDs based on the TN effect, which soon superseded the poor-quality DSM types thanks to lower operating voltages and lower power consumption.
1972: The first active-matrix liquid crystal display panel was produced in the United States by T. Peter Brody.[8]
2008: LCD TVs become the mainstream, with a 50% share of the roughly 200 million TVs forecast to ship globally in 2008.[9]
A detailed description of the origins and the complex history of liquid crystal displays from the perspective of an insider during the early days has been published by Joseph A. Castellano in "Liquid Gold, The Story of Liquid Crystal Displays and the Creation of an Industry" [10].

The same history seen from a different perspective has been described and published by Hiroshi Kawamoto, available at the IEEE History Center.[11]


Color displays

[Figure: Simulation of an LCD monitor up close]

In color LCDs each individual pixel is divided into three cells, or subpixels, which are colored red, green, and blue, respectively, by additional filters (pigment filters, dye filters and metal oxide filters). Each subpixel can be controlled independently to yield thousands or millions of possible colors for each pixel. CRT monitors employ a similar subpixel structure via phosphors, although the analog electron beam used in CRTs does not hit exact subpixels.

Color components may be arrayed in various pixel geometries, depending on the monitor's usage. If software knows which geometry a given LCD uses, this can be exploited to increase the apparent resolution of the monitor through subpixel rendering, a technique especially useful for text anti-aliasing; a sketch of the idea follows.
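To make the idea concrete, here is a hedged Python sketch of naive subpixel rendering on a striped RGB panel: each pixel's three subpixels are treated as independent horizontal samples, tripling horizontal addressability for monochrome detail. The coverage input is hypothetical, and real implementations add filtering to suppress the color fringes this naive version would produce.

def subpixel_render(coverage):
    # coverage: per-subpixel ink coverage in [0, 1], length = 3 * pixel count.
    pixels = []
    for i in range(0, len(coverage), 3):
        r, g, b = coverage[i:i + 3]
        # Each channel dims in proportion to how much of its subpixel is inked.
        pixels.append((1 - r, 1 - g, 1 - b))
    return pixels

print(subpixel_render([0.0, 0.5, 1.0, 1.0, 0.5, 0.0]))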

To reduce smudging in a moving picture when pixels do not respond quickly enough to color changes, so-called pixel overdrive may be used.


Passive-matrix and active-matrix addressed LCDs

[Figure: A general purpose alphanumeric LCD, with two lines of 16 characters.]

LCDs with a small number of segments, such as those used in digital watches and pocket calculators, have individual electrical contacts for each segment. An external dedicated circuit supplies an electric charge to control each segment. This display structure is unwieldy for more than a few display elements.

Small monochrome displays, such as those found in personal organizers or older laptop screens, have a passive-matrix structure employing super-twisted nematic (STN) or double-layer STN (DSTN) technology (DSTN corrects a color-shifting problem with STN), or color-STN (CSTN), in which color is added by an internal color filter. Each row or column of the display has a single electrical circuit. The pixels are addressed one at a time by row and column addresses. This type of display is called passive-matrix addressed because the pixel must retain its state between refreshes without the benefit of a steady electrical charge. As the number of pixels (and, correspondingly, columns and rows) increases, this type of display becomes less feasible. Very slow response times and poor contrast are typical of passive-matrix addressed LCDs.

High-resolution color displays such as modern LCD computer monitors and televisions use an active matrix structure. A matrix of thin-film transistors (TFTs) is added to the polarizing and color filters. Each pixel has its own dedicated transistor, allowing each column line to access one pixel. When a row line is activated, all of the column lines are connected to a row of pixels and the correct voltage is driven onto all of the column lines. The row line is then deactivated and the next row line is activated. All of the row lines are activated in sequence during a refresh operation. Active-matrix addressed displays look "brighter" and "sharper" than passive-matrix addressed displays of the same size, and generally have quicker response times, producing much better images.
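The row-at-a-time refresh can be sketched in the same style as the passive-matrix example above; the difference is that each active-matrix pixel stores its voltage (on a capacitor behind its TFT) and holds it after the row line is deactivated. The 4x4 array is again hypothetical.

ROWS, COLS = 4, 4
held = [[0.0] * COLS for _ in range(ROWS)]   # per-pixel stored voltage

def refresh_active(frame):
    for r in range(ROWS):             # activate row lines in sequence
        for c in range(COLS):         # drive every column line for this row
            held[r][c] = frame[r][c]  # TFT conducts: the pixel samples its voltage
        # row line deactivated: TFTs open and the pixels hold their charge

refresh_active([[1, 0, 0, 1], [0, 1, 1, 0], [0, 1, 1, 0], [1, 0, 0, 1]])
print(held)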


Active matrix technologies

A Casio 1.8" colour TFT liquid crystal display which equips the Sony Cyber-shot DSC-P93A digital compact camerasMain article: TFT LCD, Active-matrix liquid crystal display

Twisted nematic (TN)
Twisted nematic displays contain liquid crystal elements which twist and untwist at varying degrees to allow light to pass through. When no voltage is applied to a TN liquid crystal cell, the light is polarized to pass through the cell. In proportion to the voltage applied, the liquid crystal cells untwist, changing the polarization and blocking the light's path. By properly adjusting the level of the voltage, almost any gray level or transmission can be achieved.

For a more comprehensive description refer to the section on the twisted nematic field effect.
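For readers who want a quantitative handle on this behaviour, the transmission of a 90-degree twisted cell between crossed polarizers in the off state is commonly modelled by the Gooch-Tarry expression; this is a standard textbook sketch rather than anything from the article above, with d the cell gap, Δn the birefringence and λ the wavelength:

$$ \frac{T}{T_0} \;=\; 1 \;-\; \frac{\sin^2\!\left(\frac{\pi}{2}\sqrt{1+u^2}\right)}{1+u^2}, \qquad u = \frac{2\,d\,\Delta n}{\lambda} $$

Cell gaps are usually chosen near the first Gooch-Tarry minimum, u = √3, where the incoming polarization is rotated most completely and the bright state is purest.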


In-plane switching (IPS)
In-plane switching is an LCD technology which aligns the liquid crystal cells in a horizontal direction. In this method, the electrical field is applied through each end of the crystal, but this requires two transistors for each pixel instead of the single transistor needed for a standard thin-film transistor (TFT) display. This results in blocking more transmission area, thus requiring a brighter backlight, which will consume more power, making this type of display less desirable for notebook computers.


Vertical alignment (VA)
Vertical alignment displays are a form of LCD in which the liquid crystal material naturally rests in a vertical state, removing the need for the extra transistors used in IPS. When no voltage is applied, the liquid crystal cells remain perpendicular to the substrate, creating a black display. When a voltage is applied, the liquid crystal cells tilt to a horizontal position, parallel to the substrate, allowing light to pass through and create a white display. VA liquid crystal displays provide some of the same advantages as IPS panels, particularly an improved viewing angle and improved black level.


Quality control
Some LCD panels have defective transistors, causing permanently lit or unlit pixels, commonly referred to as stuck pixels or dead pixels respectively. Unlike integrated circuits (ICs), LCD panels with a few defective pixels are usually still usable, and because LCD panels are much larger than ICs it is economically prohibitive to discard a panel over just a few defective pixels. Manufacturers' standards for the maximum acceptable number of defective pixels vary greatly. At one point, Samsung held a zero-tolerance policy for LCD monitors sold in Korea.[12] Currently, though, Samsung adheres to the less restrictive ISO 13406-2 standard.[13] Other companies have been known to tolerate as many as 11 dead pixels in their policies.[14] Dead pixel policies are often hotly debated between manufacturers and customers. To regulate the acceptability of defects and to protect the end user, ISO released the ISO 13406-2 standard.[15] However, not every LCD manufacturer conforms to the ISO standard, and the standard is quite often interpreted in different ways.


[Figure: Examples of defects in LCDs]

LCD panels are more likely to have defects than most ICs due to their larger size. In one example, a 300 mm SVGA LCD has 8 defects while a 150 mm wafer has only 3 defects; yet 134 of the 137 dies on the wafer will be acceptable, whereas rejection of the LCD panel would mean a 0% yield. The standard is much higher now due to fierce competition between manufacturers and improved quality control. An SVGA LCD panel with 4 defective pixels is usually considered defective, and customers can request an exchange for a new one. Some manufacturers, notably in South Korea, where some of the largest LCD panel manufacturers such as LG are located, now offer a "zero defective pixel guarantee", an extra screening process which can then determine "A" and "B" grade panels. Many manufacturers will replace a product with even one defective pixel. Even where such guarantees do not exist, the location of defective pixels is important. A display with only a few defective pixels may be unacceptable if the defective pixels are near each other. Manufacturers may also relax their replacement criteria when defective pixels are in the center of the viewing area.
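The yield comparison above is simple arithmetic; a small sketch using the numbers quoted in the text:

dies_total, dies_good = 137, 134
print("IC yield: %.1f%%" % (100.0 * dies_good / dies_total))   # ~97.8%

panel_defects = 8
panel_yield = 100.0 if panel_defects == 0 else 0.0  # a single panel is accepted or rejected outright
print("Panel yield: %.0f%%" % panel_yield)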

LCD panels also have defects known as mura, which look like a small-scale crack with very small changes in luminance or color.[16]


Zero-power (bistable) displays
The zenithal bistable device (ZBD), developed by QinetiQ (formerly DERA), can retain an image without power. The crystals may exist in one of two stable orientations ("black" and "white"), and power is required only to change the image. ZBD Displays is a spin-off company from QinetiQ which manufactures both grayscale and color ZBD devices.

A French company, Nemoptic, has developed another zero-power, paper-like LCD technology which has been mass-produced since July 2003. This technology is intended for use in applications such as Electronic Shelf Labels, E-books, E-documents, E-newspapers, E-dictionaries, Industrial sensors, Ultra-Mobile PCs, etc. Zero-power LCDs are a category of electronic paper.

Kent Displays has also developed a "no power" display that uses polymer-stabilized cholesteric liquid crystals (ChLCD). The major drawback of the ChLCD is its slow refresh rate, especially at low temperatures.

In 2004 researchers at the University of Oxford demonstrated two new types of zero-power bistable LCDs based on Zenithal bistable techniques.[17]

Several bistable technologies, like the 360° BTN and the bistable cholesteric, depend mainly on the bulk properties of the liquid crystal (LC) and use standard strong anchoring, with alignment films and LC mixtures similar to the traditional monostable materials. Other bistable technologies (e.g., Binem technology) are based mainly on the surface properties and need specific weak-anchoring materials.


Drawbacks

[Figure: Laptop LCD screen viewed at an extreme angle.]

LCD technology still has a few drawbacks in comparison with some other display technologies:

While CRTs are capable of displaying multiple video resolutions without introducing artifacts, LCDs produce crisp images only in their "native resolution" and, sometimes, fractions of that native resolution. Attempting to run LCD panels at non-native resolutions usually results in the panel scaling the image, which introduces blurriness or "blockiness" and is susceptible in general to multiple kinds of HDTV blur. Many LCDs are incapable of displaying very low resolution screen modes (such as 320x200) due to these scaling limitations.
Although LCDs typically have more vibrant images and better "real-world" contrast ratios (the ability to maintain contrast and variation of color in bright environments) than CRTs, they do have lower contrast ratios than CRTs in terms of how deep their blacks are. A contrast ratio is the ratio between a completely on (white) and off (black) pixel, and LCDs can suffer "backlight bleed", where light (usually seen around corners of the screen) leaks out and turns black into gray. However, as of December 2007, the very best LCDs can approach the contrast ratios of plasma displays in terms of delivering a deep black. (A small worked example appears after this list.)
LCDs typically have longer response times than their plasma and CRT counterparts, especially older displays, creating visible ghosting when images rapidly change. For example, when moving the mouse quickly on an LCD, multiple cursors can sometimes be seen.
Some LCDs have significant input lag. If the lag delay is large enough, such displays can be unsuitable for fast and time-precise mouse operations (CAD, FPS gaming) as compared to CRT displays or smaller LCD panels with negligible amounts of input lag. Short lag times are sometimes emphasized in marketing.
LCD panels using TN tend to have a limited viewing angle relative to CRT and plasma displays. This reduces the number of people who can conveniently view the same image; laptop screens are a prime example. Usually the image darkens when viewed from below and lightens when viewed from above. Many panels, such as 22" and 24" LCDs based on IPS, MVA, or PVA technology, have much improved viewing angles; typically the color only gets a little brighter when viewed at extreme angles.
Consumer LCD monitors tend to be more fragile than their CRT counterparts. The screen may be especially vulnerable due to the lack of a thick glass shield as in CRT monitors.
Dead pixels can occur when the screen is damaged or pressure is put upon the screen; few manufacturers replace screens with dead pixels for free.
Horizontal and/or vertical banding is a problem in some LCD screens. This flaw occurs as part of the manufacturing process, and cannot be repaired (short of total replacement of the screen). Banding can vary substantially even among LCD screens of the same make and model. The degree is determined by the manufacturer's quality control procedures.
The cold-cathode fluorescent lamps often used for backlights contain mercury.
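To illustrate the contrast-ratio point made earlier in this list, here is a short sketch with hypothetical luminance figures: the ratio collapses as backlight bleed raises the black level, even with peak white unchanged.

white = 300.0                      # hypothetical peak white, cd/m^2
for black in (0.3, 1.0, 3.0):      # hypothetical black levels, cd/m^2
    print("black %.1f cd/m^2 -> contrast %d:1" % (black, round(white / black)))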



LCD technologies
List of LCD matrices
TFT LCD
Transflective liquid crystal display – adaptation to environment brightness
Active-matrix liquid crystal display (AMLCD)
Anisotropic Conductive Film
Backlight
HDTV Blur

Other display technologies
Comparison of display technology
Cathode ray tube (CRT)
Digital Light Processing (DLP)
Field emission display (FED)
Light-emitting diode (LED)
Liquid crystal on silicon (LCOS)
Organic light-emitting diode (OLED)
Plasma display panel (PDP)
Surface-conduction electron-emitter display (SED)
Vacuum fluorescent display (VFD)

Display applications
Television and digital television
Liquid crystal display television (LCD TV)
LCD projector
Computer monitor

Manufacturers
Acer (company)
AU Optronics
Barco
BenQ
Casio
Chi Mei Optoelectronics
CoolTouch Monitors
Corning Inc.
Eizo
Epson
Fujitsu
HP
International Display Works
JVC
Lenovo
LG.Philips LCD
LXD Incorporated
Medion
NEC Display Solutions
Panasonic (Matsushita)
Polaroid Corporation
Samsung Electronics
Sharp Corporation
S-LCD
Sony
Toshiba
Viewsonic
Vizio
Xerox

References
^ Tim Sluckin: Ueber die Natur der kristallinischen Flüssigkeiten und flüssigen Kristalle (The early history of liquid crystals), Bunsen-Magazin, 7.Jahrgang, 5/2005
^ George W. Gray, Stephen M. Kelly: "Liquid crystals for twisted nematic display devices", J. Mater. Chem., 1999, 9, 2037–2050
^ R. Williams, “Domains in liquid crystals,” J. Phys. Chem., vol. 39, pp. 382–388, July 1963
^ a b Castellano, Joseph A. (2006), "Modifying Light", American Scientist 94 (5): pp. 438-445
^ G. H. Heilmeier and L. A. Zanoni, “Guest-host interactions in nematic liquid crystals. A new electro-optic effect,” Appl. Phys. Lett., vol. 13, no. 3, pp. 91–92, 1968
^ G. H. Heilmeier, L. A. Zanoni, and L. A. Barton, “Dynamic scattering: A new electrooptic effect in certain classes of nematic liquid crystals,” Proc. IEEE, vol. 56, pp. 1162–1171, July 1968
^ Modifying Light. American Scientist Online.
^ Brody, T.P., "Birth of the Active Matrix", Information Display, Vol. 13, No. 10, 1997, pp. 28-32.
^ Full HD To Net 58% Of LCD TV Market In 2008; 40 Inch Plus Prices to Stabilize.
^ LIQUID GOLD, The Story of Liquid Crystal Displays and the Creation of an Industry, 2005 World Scientific Publishing Co. Pte. Ltd., ISBN 981-238-956-3
^ Hiroshi Kawamoto: The History of Liquid-Crystal Displays, Proc. IEEE, Vol. 90, No. 4, April 2002
^ Samsung to Offer 'Zero-PIXEL-DEFECT' Warranty for LCD Monitors. Forbes.com (December 30, 2004). Retrieved on 2007-09-03.
^ What is Samsung's Policy on dead pixels?. Samsung (February 5, 2005). Retrieved on 2007-08-03.
^ Display (LCD) replacement for defective pixels - ThinkPad. Lenovo (June 25, 2007). Retrieved on 2007-07-13.
^ What is the ISO 13406-2 standard for LCD screen pixel faults?. Anders Jacobsen's blog (January 4, 2006).
^ EBU – TECH 3320, "User requirements for Video Monitors in Television Production", EBU/UER, May 2007, p. 11.
^ Dr Chidi Uche. Development of bistable displays. University of Oxford. Retrieved on 2007-07-13.

External links - Tutorials
Animated tutorial of LCD technology by 3M
History and Physical Properties of Liquid Crystals by Nobelprize.org
Definitions of basic terms relating to low-molar-mass and polymer liquid crystals (IUPAC Recommendations 2001)
An intelligible introduction to liquid crystals from Case Western Reserve University
Liquid Crystal Physics tutorial from the Liquid Crystals Group, University of Colorado
Introduction to liquid crystals from the Liquid Crystal Technology Group, Oxford University
Liquid Crystals & Photonics Group - Ghent University (Belgium), good tutorial
Liquid Crystals Interactive Online (not updated since 1999)
Liquid Crystal Institute Kent State University
Liquid Crystals a journal by Taylor&Francis
Molecular Crystals and Liquid Crystals a journal by Taylor&Francis
Hot-spot detection techniques for ICs
What are liquid crystals? from Chalmers University of Technology, Sweden

General information
How LCDs are made, an interactive demonstration from AUO (LCD manufacturer).
Development of Liquid Crystal Displays: Interview with George Gray, Hull University, 2004 – Video by the Vega Science Trust.
History of Liquid Crystals – Presentation and extracts from the book Crystals that Flow: Classic papers from the history of liquid crystals by its co-author Timothy J. Sluckin
Display Technology, by Geoff Walker in the September 2001 issue of Pen Computing
Overview of 3LCD technology, Presentation Technology
LCD Module technical resources and application notes, Diamond Electronics
LCD Phase and Clock Adjustment, Techmind offers a free test screen to get better LCD picture quality than the LCD's "auto-tune" function.
How to clean your LCD screen About.com: PC Support
TFT Central: LCD Monitor Reviews, Specs, Articles and News
Pictures of lamps for projectors Interlight
Interfacing Alphanumeric LCD to Microcontroller


Udrulz
PM [51188]
Rank:Junior Member
Collection
| Options | Thanx | Like(5) | Quote |   Mar 28, 2009 4:46PM
Processor

"What is the difference between the Pentium, Celeron, and Athlon processors?" To best answer this question, we must first understand a bit of processor history, starting with Intel's flagship processor, the Pentium.

In 1993, Intel brought the PC to a new level with the Pentium processor. The first Pentium processor ran at an astounding 60 MHz, had 3.3 million transistors, and performed 100 million instructions per second (MIPS). Although no one today refers to the first Pentium processor as a "Pentium 1", it was the first of four generations of Pentium processors developed by Intel.

Once the first Pentium processor's technology became dated, the Pentium 2 was introduced. Starting at 233 MHz, the Pentium 2 followed in its predecessor's footsteps and was designed to run from 233 MHz to 450 MHz. At about the same time, the Intel Celeron processor was introduced; it was identical to the Pentium 2 except that it was positioned as a "lower end" processor because of two main differences: a smaller cache and a slower bus speed. Cache is a special part of the processor which helps process frequently used information faster; bus speed (also known as FSB, or front side bus) determines the speed at which all parts of the computer communicate with each other, and it has a dramatic effect on the overall speed of the computer. In comparison, the Pentium 2 had a 100 MHz bus, whereas the lower-end Celeron operated on a 66 MHz bus.

Not too long after the introduction of the Celeron, the first Pentium 3 processor replaced the Pentium 2, running at 450 MHz. Both the Pentium 3 and Celeron processors are still in production today, reaching speeds of 1260 MHz (1.26 GHz) and beyond. The Pentium 3 bus was first rated at 100 MHz but was increased to 133 MHz beginning with the 500 MHz model -- also known as the "500EB".

Even though AMD has been around for quite some time, AMD did not come into the spotlight until the introduction of the Athlon processor. At around the same time that Intel introduced its 600 MHz Pentium 3 processors, AMD wowed the world with the Athlon. The Athlon processor not only ran programs just as well as the Intel Pentium 3 and its predecessors -- its effective bus speed was also twice that of the Pentium 3. AMD achieved this with a double-pumped bus that transfers data twice per clock cycle, even though the bus clock (MHz rating) was the same. AMD became a success story with the Athlon processor and, like Intel, began producing a lower-cost processor with less cache -- the AMD Duron. In comparison with Intel's Celeron, the Duron still had a 200 MHz bus, while the Celeron ran at a mere 66 MHz.

Today's Processors

As of late, AMD has changed its processor architecture once more and introduced a new line of Athlon processors: the Athlon XP. While still an Athlon processor, the major difference is that the Athlon XP does not use the conventional MHz rating to describe its speed. This is because AMD believes that a raw MHz rating would understate its true performance and wishes to change public perception. For those who insist on raw MHz numbers, AMD claims a 25% performance increase for its XP 1900+ compared with a Pentium 4 running at 1900 MHz.

As word of AMD's success spread, Intel introduced the Pentium 4 at a groundbreaking 1400 MHz. Although the Pentium 4 offers a 400 MHz bus, the processor is still roughly twice as expensive as AMD's Athlon XP, which performs roughly the same, if not better.

Summary

The most important point about AMD's Duron and Athlon processors is that they are half the price of Intel's Celeron and Pentium 3 and 4 processors and still outperform them in almost every instance.

AMD processors are also popular because they are extremely overclockable: that is -- with a good motherboard -- they are able to run faster than their rated speed. For example: I own an AMD Athlon 850 MHz but have it overclocked and running at 1000 MHz with a 266 MHz bus. While overclocking is a topic for the more technically inclined, I will simply note it as another reason for AMD's popularity. (The arithmetic is sketched below.)
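The arithmetic behind that example: core clock is the front-side-bus clock multiplied by an internal multiplier. The sketch below assumes a 100 MHz FSB clock with an 8.5x multiplier for stock, and a 133 MHz FSB clock (266 MHz effective on the double-pumped Athlon bus) with a 7.5x multiplier for the overclock; these figures are plausible reconstructions, not quoted from the author.

def core_clock(fsb_mhz, multiplier):
    # Core clock = FSB clock x internal multiplier.
    return fsb_mhz * multiplier

print(core_clock(100, 8.5))   # 850.0 MHz stock (assumed settings)
print(core_clock(133, 7.5))   # 997.5 MHz, i.e. roughly the 1000 MHz overclock (assumed)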

Here's a quick reference chart of processor speeds and their respective front side bus (FSB) ratings:

Processor    Speed (MHz)    Front Side Bus (FSB)
Pentium 1    60 - 233       66 MHz
Pentium 2    233 - 450      100 MHz
Celeron      300 and up     66 MHz up to the 766 MHz model; 100 MHz from 800 MHz
Pentium 3    450 and up     100 MHz up to the 500 MHz model; 133 MHz from 533 MHz
Athlon       600 - 1400     200 or 266 MHz

Udrulz
PM [51188]
Rank:Junior Member
Collection
| Options | Thanx | Like(5) | Quote |   Mar 28, 2009 4:50PM
The History of Monitor Lizards

As the monitors spread across the Earth experiencing different habitats and climates they diversified. Over many millions of years this process has resulted in the emergence of at least seventy or eighty (probably many thousands of) species. Some of them appeared to have died out quickly, whilst other, apparently ancient, species have survived until the present. Many monitor lizards appear to have evolved comparatively recently. It would be nice to know where the monitor lizards first came from, what the early species looked like, how they behaved and why they died out.

Fossils provide us with a tantalising glimpse of a world that we will never know. They have the unerring ability to create more questions than they answer. The chances of an animal or plant being preserved as a fossil are extremely slight. It must be covered with a protective layer as soon as it dies, and many millions of years later it must somehow get back onto, or close to, the surface of the rock. Then somebody has to find it. The vast majority of known fossils come from marine organisms; only a tiny proportion are of terrestrial vertebrates, and the monitor lizards are poorly represented there. All the monitor fossils I have seen have been unexceptional. Our knowledge of this family before the dawn of civilisation comes from fossilised remnants which are sometimes nothing more than a single vertebra or a fragment of jaw. Often it is very difficult to tell what sort of animal a scrap of bone belonged to over 80 million years ago, with the result that whilst one author will consider a fossil bone to be that of a monitor lizard, the next may claim that it is in fact a piece of prehistoric tortoise. Fossil records of monitor lizards from Africa and Australia are very rare, probably reflecting the unsuitable conditions that existed for fossilisation rather than the scarcity of the animals. Thus this little "history" of the monitor lizards must be taken with a large pinch of salt. Except where indicated, the following account follows Estes (1983).

We do know that by 300 million years ago at least three major groups of reptiles had established themselves on Earth. The synapsids included the lizard-like pelycosaurs, some of which closely resembled the monitor lizards of today (e.g. Varanosaurus from what were then the swamps of Texas), and the therapsids, which may have survived to the present in the form of modern mammals. The anapsids include the living turtles and tortoises and other orders, all of which had died out by 250 million years ago. The diapsids gave rise to dinosaurs and other ruling reptiles as well as birds, crocodiles, the tuatara, snakes and lizards. True monitor lizard-like animals (varanoids) appeared in the Late Jurassic, about 180 million years ago. Aigialosaurs were small aquatic lizards that were probably closely related to the monitors. They gave rise to the mosasaurs, a diverse group of water lizards growing up to about 15m long that roamed the seas for over 100 million years before dying out altogether (Cox et al 1988; Zug 1994; Steel 1996). The large mosasaurs are the biggest lizards known to have ever existed. As they disappeared, monitor lizards first appeared on the land.

According to the available evidence, monitor lizards and their close relatives the heloderms (the Gila monster and its kin) and lanthanotids (earless monitors) probably originated in northern Asia at least 90 million years ago (Pregill et al 1986). At this time the reign of the dinosaurs was coming to an end and flowers had begun to cover the Earth. The oldest monitor lizards known are from Mongolia: Telmasaurus grangeri, Saniwides mongoliensis and Estesia mongoliensis. All of them must have been quite similar to modern monitor lizards in appearance, but the last possessed grooved teeth which probably transmitted venom in the same manner as modern-day Gila monsters (Pregill et al 1986, Norell et al 1992). The exact relationship between these lizards and the modern heloderms and varanids is not clear.

Early fossils tentatively identified as monitor lizards have also been found in Alberta and Wyoming in North America. Most authorities agree that this part of America was still attached to Asia when monitor lizards appeared. Palaeosaniwa canadensis lived at least 70 million years ago and probably reached a total length of about 240cm (Gilmore 1928). These lizards had long, backward-pointing, serrated teeth that show grooves similar to those of the Mongolian Estesia. Although they too must have been very similar to present-day monitor lizards, their inclusion in the family Varanidae has been questioned.

The oldest fossils definitely identified as belonging to the monitor lizard family belong to the once widespread genus Saniwa, which appeared at least 55 million years ago. Described species include Saniwa ensidens, S. grandis and S. crassa from Wyoming, S. paucidens from Wyoming and Utah, S. brooksi from California, S. orsmaelensis from Belgium, and unidentified species from France, New Mexico, Wyoming and Nebraska. Apart from differences in size there is little to distinguish these fossils from each other, or from living monitor lizards. Saniwa may not have survived for long in Europe, but they persisted in North America until at least 15 million years ago.

The living genus, Varanus, does not appear in the fossil record until about 25 million years ago. The oldest fossils, from Kazakhstan, are too fragmentary to be assigned to species (Reshetov et al. 1978, Lungu et al. 1989). About 20 million years ago Varanus rusingensis inhabited Rusinga Island (now in Lake Victoria) and other areas of Kenya (Clos 1995). This monitor was very similar to the living African monitors V. niloticus, V. exanthematicus and V. albigularis. Like the living species, it probably fed largely on molluscs and reached a length of at least 2m. It may have led a semi-aquatic existence in forests, much like V. niloticus ornatus.

The oldest European monitor is Varanus hofmanni, about 10 million years younger, which is known from France, Spain and Germany. At the same time the closely related Iberovaranus catalonicus lived in Spain and Portugal, and Varanus pronini lived in Kazakhstan (Zerova & Ckhikvadze 1986). V. marathonensis appeared at least 5 million years ago and is known from Greece, Hungary and Turkey. An unconfirmed record of this species from Italy suggests that monitor lizards may have survived in Europe until less than a million years ago. Orlov & Tuniev (1986) suggest that V. marathonensis was very closely related to the living V. griseus and to V. darevskii, which lived in Tajikistan about 5 million years ago (Levshakova 1986). V. semjonovi is known from Ukraine, and another species, V. lungui, has been described from Moldavia (Zerova & Ckhikvadze 1986, Lungu et al 1989). At this time a very large monitor lizard approaching 3m in length, V. sivalensis, lived in India. Other extinct fossil species include V. hooijeri, a close relative of the present-day V. olivaceus, which lived on Flores less than 5 million years ago, and possibly V. bolkayi, known to have inhabited Java and Timor about 2 million years ago (these fossils may represent the living species V. salvator (Auffenberg 1981)).

Unfortunately virtually nothing is known of the monitor lizards' history in Australia. The earliest fossils known come from South Australia and are around 10 million years old (Estes 1984). Fossil vertebrae of a species similar to V. giganteus from New South Wales are less than 2 million years old. On immunological evidence, Baverstock et al (1994) suggest that monitor lizards reached Australia from south-east Asia less than 20 million years ago. When monitor lizards reached Australia, something very strange happened to them. Throughout the world fossil monitors appear as large or medium-sized lizards, but few, if any, ever exceeded 300cm in length. In Australia both gigantic and dwarf monitor lizards evolved. Megalania (or Varanus) prisca was the largest land-dwelling lizard that has ever lived. Adults may have weighed over 600kg and measured more than 7m in length. They appear to have been widespread in Australia (remains have been found in New South Wales, Queensland and South Australia). This immense goanna is not a long-dead and buried species: it may have survived until less than 25,000 years ago and is believed to have preyed upon the giant ancestors of kangaroos and wombats. Giant goannas may also have preyed on early human settlers, who must have regarded their extinction with great relief, even if they did not play a direct role in its demise themselves (Owen 1860, 1880, Anderson 1931, Hecht 1975, Rich 1985, Molnar 1990). Artists' impressions of the giant goanna often do not take into account the fact that this enormous monitor lizard may have had a bony crest on top of its head. Other ancient Australian goannas include an unidentified species that lived in South Australia 5 million years ago and had very large, blunted teeth (Archer & Wade 1976), and the fossil V. emeritus from Queensland, which may represent another extinct species. Recent fossils of the living lace and sand goannas have been recovered from cave deposits in Victoria (Wells et al 1984).

Whilst some of the Australian monitor lizards became massive, the more successful ones adopted the opposite strategy: they shrank and diversified to form a unique group of diminutive varanids that spread throughout Australia and then began to move northwards (Storr 1980). To date they have not got very far: to the south of New Guinea and a few islands in the Timor Sea. Nevertheless, they are a very young group of lizards and already account for two-thirds of the living species of the Varanidae in Australia, and a third of the family worldwide. The larger monitor lizards have also persisted in Australia, with at least 9 species living there today.

Today at least 46 species of monitor lizards are known to exist in Africa, Asia and Australasia. Baverstock et al (1994) suggest that all living species have evolved from a common ancestor within the last 45 million years. The confusion over the extinct varanids is unlikely ever to be resolved fully, but ample opportunities remain to study the surviving species. It is hoped that by examining many different characteristics of the living monitors it will be possible to work out how they are related to each other, and that eventually their dispersal routes and the chronological order in which the species evolved will become apparent.


