
McAfee VirusScan Plus 2008's
interface strives to be novice-friendly, with cute green check marks and red crosses. Unfortunately, while the interface is novice-friendly, the program is not. Some advanced configurations are available only after drilling down two or three menus. During installation, McAfee asks for user input on an important settings question, but the small dialog box doesn't offer enough information to fully explain the ramifications of the settings.

Attempting to cover every security function, this suite handles more than viruses by adding a firewall, hard drive cleaner, disk defragmenter, and network security manager. By including everything and the kitchen sink, McAfee VirusScan Plus 2008 feels bloated, and it uses more memory than we would like. Help is convoluted, and there's a chasm between the novice and expert user. We simply found McAfee too complex for novice users and not as advanced-user-friendly as similar suites.

Download with below link

Code:

http://depositfiles.com/files/k0tginj99







How Bluetooth Works






There are lots of different ways that electronic devices can connect to one another. For example:
• Many desktop computer systems have a CPU unit connected to a mouse, a keyboard, a printer and so on.
• A personal digital assistant (PDA) will normally connect to the computer with a cable and a docking cradle.
• A TV will normally connect to a VCR and a cable box, with a remote control for all three components.
• A cordless phone connects to its base unit with radio waves, and it may have a headset that connects to the phone with a wire.
• In a stereo system, a CD player and other audio devices connect to the receiver, which connects to the speakers.
When you use computers, entertainment systems or telephones, the various pieces and parts of the systems make up a community of electronic devices. These devices communicate with each other using a variety of wires, cables, radio signals and infrared light beams, and an even greater variety of connectors, plugs and protocols.
The art of connecting things is becoming more and more complex every day. We sometimes feel as if we need a Ph.D. in electrical engineering just to set up the electronics in our homes! In this article, we will look at a completely different way to form the connections, called Bluetooth. Bluetooth is wireless and automatic, and has a number of interesting features that can simplify our daily lives.
The Problems
When any two devices need to talk to each other, they have to agree on a number of points before the conversation can begin. The first point of agreement is physical: Will they talk over wires, or through some form of wireless signals? If they use wires, how many are required -- one, two, eight, 25? Once the physical attributes are decided, several more questions arise:
• Information can be sent 1 bit at a time in a scheme called serial communications, or in groups of bits (usually 8 or 16 at a time) in a scheme called parallel communications. A desktop computer uses both serial and parallel communications to talk to different devices: Modems, mice and keyboards tend to talk through serial links, while printers tend to use parallel links (see the sketch after this list).
• All of the parties in an electronic discussion need to know what the bits mean and whether the message they receive is the same message that was sent. In most cases, this means developing a language of commands and responses known as a protocol. Some types of products have a standard protocol used by virtually all companies so that the commands for one product will tend to have the same effect on another. Modems fall into this category. Other product types each speak their own language, which means that commands intended for one specific product will seem like gibberish if received by another. Printers are like this, with multiple standards like PCL and PostScript.
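To make the serial/parallel distinction concrete, here is a minimal, purely illustrative Python sketch (not real hardware I/O) that sends the same byte one bit at a time over a single notional wire and then all at once over eight notional wires:

Code:

def to_bits(byte):
    """Return the 8 bits of a byte, most significant bit first."""
    return [(byte >> i) & 1 for i in range(7, -1, -1)]

def send_serial(byte):
    # One wire: the bits go out one after another.
    for bit in to_bits(byte):
        print("serial wire:", bit)

def send_parallel(byte):
    # Eight wires: all bits are presented in the same clock cycle.
    print("parallel wires:", to_bits(byte))

send_serial(0x4B)
send_parallel(0x4B)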
Companies that manufacture computers, entertainment systems and other electronic devices have realized that the incredible array of cables and connectors involved in their products makes it difficult for even expert technicians to correctly set up a complete system on the first try. Setting up computers and home entertainment systems becomes terrifically complicated when the person buying the equipment has to learn and remember all the details to connect all the parts. In order to make home electronics more user friendly, we need a better way for all the electronic parts of our modern life to talk to each other. That's where Bluetooth comes in.
Bluetooth Basics
Bluetooth is a standard developed by a group of electronics manufacturers that allows any sort of electronic equipment -- from computers and cell phones to keyboards and headphones -- to make its own connections, without wires, cables or any direct action from a user. Bluetooth is intended to be a standard that works at two levels:
• It provides agreement at the physical level -- Bluetooth is a radio-frequency standard.
• It also provides agreement at the next level up, where products have to agree on when bits are sent, how many will be sent at a time and how the parties in a conversation can be sure that the message received is the same as the message sent.


The more than 1,000 companies belonging to the Bluetooth Special Interest Group want to let Bluetooth's radio communications take the place of wires for connecting peripherals, telephones and computers.
Other Wireless Connections
There are already a couple of ways to get around using wires. One is to carry information between components via beams of light in the infrared spectrum. Infrared refers to light waves of a lower frequency than human eyes can receive and interpret. Infrared is used in most television remote control systems, and with a standard called IrDA (Infrared Data Association) it's used to connect some computers with peripheral devices. For most of these computer and entertainment purposes, infrared is used in a digital mode -- the signal is pulsed on and off very quickly to send data from one point to another.
Infrared communications are fairly reliable and don't cost very much to build into a device, but there are a couple of drawbacks. First, infrared is a "line of sight" technology. For example, you have to point the remote control at the television or DVD player to make things happen. The second drawback is that infrared is almost always a "one to one" technology. You can send data between your desktop computer and your laptop computer, but not your laptop computer and your PDA at the same time.
These two qualities of infrared are actually advantageous in some regards. Because infrared transmitters and receivers have to be lined up with each other, interference between devices is uncommon. The one-to-one nature of infrared communications is useful in that you can make sure a message goes only to the intended recipient, even in a room full of infrared receivers.
The second alternative to wires, cable synchronizing, is a little more troublesome than infrared. If you have a Palm Pilot, a Windows CE device or a Pocket PC, you know about synchronizing data. In synchronizing, you attach the PDA to your computer (usually with a cable), press a button and make sure that the data on the PDA and the data on the computer match. It's a technique that makes the PDA a valuable tool for many people, but synchronizing the PDA with the computer and making sure you have the correct cable or cradle to connect the two can be a real hassle.
The Bluetooth Solution
Bluetooth is intended to get around the problems that come with both infrared and cable synchronizing systems. The hardware vendors, which include Siemens, Intel, Toshiba, Motorola and Ericsson, have developed a specification for a very small radio module to be built into computer, telephone and entertainment equipment. From the user's point of view, there are three important features to Bluetooth:
• It's wireless. When you travel, you don't have to worry about keeping track of a briefcase full of cables to attach all of your components, and you can design your office without wondering where all the wires will go.
• It's inexpensive.
• You don't have to think about it. Bluetooth doesn't require you to do anything special to make it work. The devices find one another and strike up a conversation without any user input at all.



Bluetooth Frequency
Bluetooth communicates on a frequency of 2.45 gigahertz, which has been set aside by international agreement for the use of industrial, scientific and medical devices (ISM).

A number of devices that you may already use take advantage of this same radio-frequency band. Baby monitors, garage-door openers and the newest generation of cordless phones all make use of frequencies in the ISM band. Making sure that Bluetooth and these other devices don't interfere with one another has been a crucial part of the design process.
Why is it called Bluetooth?
Harald Bluetooth was king of Denmark in the late 900s. He managed to unite Denmark and part of Norway into a single kingdom then introduced Christianity into Denmark. He left a large monument, the Jelling rune stone, in memory of his parents. He was killed in 986 during a battle with his son, Svend Forkbeard. Choosing this name for the standard indicates how important companies from the Baltic region (nations including Denmark, Sweden, Norway and Finland) are to the communications industry, even if it says little about the way the technology works.
Avoiding Interference: Low Power
One of the ways Bluetooth devices avoid interfering with other systems is by sending out very weak signals of 1 milliwatt. By comparison, the most powerful cell phones can transmit a signal of 3 watts. The low power limits the range of a Bluetooth device to about 10 meters, cutting the chances of interference between your computer system and your portable telephone or television. Even with the low power, the walls in your house won't stop a Bluetooth signal, making the standard useful for controlling several devices in different rooms.


With many different Bluetooth devices in a room, you might think they'd interfere with one another, but it's unlikely. On the next page, we'll see why.
Avoiding Interference: Hopping
It is unlikely that several devices will be on the same frequency at the same time, because Bluetooth uses a technique called spread-spectrum frequency hopping. In this technique, a device will use 79 individual, randomly chosen frequencies within a designated range, changing from one to another on a regular basis. In the case of Bluetooth, the transmitters change frequencies 1,600 times every second, meaning that more devices can make full use of a limited slice of the radio spectrum. Since every Bluetooth transmitter uses spread-spectrum transmitting automatically, it’s unlikely that two transmitters will be on the same frequency at the same time. This same technique minimizes the risk that portable phones or baby monitors will disrupt Bluetooth devices, since any interference on a particular frequency will last only a tiny fraction of a second.
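As a rough illustration of why collisions are rare, the sketch below models two piconets hopping independently across the 79 channels at 1,600 hops per second. The hop pattern here is just pseudo-random - it is not the real Bluetooth hopping algorithm - but the collision rate it produces conveys the idea.

Code:

import random

CHANNELS = 79            # 1-MHz channels in the ISM band
HOPS_PER_SECOND = 1600   # each piconet changes frequency 1,600 times a second

def hop_sequence(seed, hops):
    """Pseudo-random stand-in for a piconet's hop pattern."""
    rng = random.Random(seed)
    return [rng.randrange(CHANNELS) for _ in range(hops)]

piconet_a = hop_sequence(seed=1, hops=HOPS_PER_SECOND)
piconet_b = hop_sequence(seed=2, hops=HOPS_PER_SECOND)

collisions = sum(1 for a, b in zip(piconet_a, piconet_b) if a == b)
print(collisions, "of", HOPS_PER_SECOND, "hops collided")
# Roughly 1 hop in 79 collides, and each collision lasts only 625 microseconds.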

When Bluetooth-capable devices come within range of one another, an electronic conversation takes place to determine whether they have data to share or whether one needs to control the other. The user doesn't have to press a button or give a command -- the electronic conversation happens automatically. Once the conversation has occurred, the devices -- whether they're part of a computer system or a stereo -- form a network. Bluetooth systems create a personal-area network (PAN), or piconet, that may fill a room or may encompass no more distance than that between the cell phone on a belt-clip and the headset on your head. Once a piconet is established, the members randomly hop frequencies in unison so they stay in touch with one another and avoid other piconets that may be operating in the same room.
Example: Networks
Let’s take a look at how the Bluetooth frequency hopping and personal-area network keep systems from becoming confused. Let’s say you’ve got a typical modern living room with the typical modern stuff inside. There’s an entertainment system with a stereo, a DVD player, a satellite TV receiver and a television; there's a cordless telephone and a personal computer. Each of these systems uses Bluetooth, and each forms its own piconet to talk between main unit and peripheral.
The cordless telephone has one Bluetooth transmitter in the base and another in the handset. The manufacturer has programmed each unit with an address that falls into a range of addresses it has established for a particular type of device. When the base is first turned on, it sends radio signals asking for a response from any units with an address in a particular range. Since the handset has an address in the range, it responds, and a tiny network is formed. Now, even if one of these devices should receive a signal from another system, it will ignore it since it’s not from within the network. The computer and entertainment system go through similar routines, establishing networks among addresses in ranges established by manufacturers. Once the networks are established, the systems begin talking among themselves. Each piconet hops randomly through the available frequencies, so all of the piconets are completely separated from one another.
Now the living room has three separate networks established, each one made up of devices that know the address of transmitters it should listen to and the address of receivers it should talk to. Since each network is changing the frequency of its operation thousands of times a second, it’s unlikely that any two networks will be on the same frequency at the same time. If it turns out that they are, then the resulting confusion will only cover a tiny fraction of a second, and software designed to correct for such errors weeds out the confusing information and gets on with the network’s business.
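The address-range handshake described above can be pictured with a toy sketch like this (the address range, class names and method names are invented purely for illustration; real Bluetooth inquiry and paging are considerably more involved):

Code:

# Hypothetical address range a manufacturer might assign to cordless phones.
PHONE_RANGE = range(0x1000, 0x2000)

class Handset:
    def __init__(self, address):
        self.address = address

    def respond_to_inquiry(self, wanted_range):
        # Only answer if our address falls in the range the base is asking for.
        return self.address if self.address in wanted_range else None

class Base:
    def __init__(self):
        self.piconet = []

    def inquire(self, devices, wanted_range):
        for device in devices:
            reply = device.respond_to_inquiry(wanted_range)
            if reply is not None:
                self.piconet.append(reply)   # a tiny network is formed

base = Base()
nearby = [Handset(0x1234), Handset(0x9ABC)]  # the second belongs to another system
base.inquire(nearby, PHONE_RANGE)
print([hex(addr) for addr in base.piconet])  # only the handset joins: ['0x1234']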
Example: Half/Full Duplex
Most of the time, a network or communications method either works in one direction at a time, called half-duplex communication, or in both directions simultaneously, called full-duplex communication. A speakerphone that lets you either listen or talk, but not both, is an example of half-duplex communication, while a regular telephone handset is a full-duplex device. Because Bluetooth is designed to work in a number of different circumstances, it can be either half-duplex or full-duplex.
The cordless telephone is an example of a use that will call for a full-duplex (two-way) link, and Bluetooth can send data at more than 64,000 bits per second in a full-duplex link -- a rate high enough to support several human voice conversations. If a particular use calls for a half-duplex link -- connecting to a computer printer, for example -- Bluetooth can transmit up to 721 kilobits per second (Kbps) in one direction, with 57.6 Kbps in the other. If the use calls for the same speed in both directions, a link with 432.6-Kbps capacity in each direction can be made.
Bluetooth Specs
Here are some specification details from the Bluetooth Web site:
• The devices in a piconet share a common communication data channel. The channel has a total capacity of 1 megabit per second (Mbps). Headers and handshaking information consume about 20 percent of this capacity.
• In the United States and Europe, the frequency range is 2,400 to 2,483.5 MHz, with 79 1-MHz radio frequency (RF) channels. In practice, the range is 2,402 MHz to 2,480 MHz. In Japan, the frequency range is 2,472 to 2,497 MHz with 23 1-MHz RF channels.
• A data channel hops randomly 1,600 times per second between the 79 (or 23) RF channels.
• Each channel is divided into time slots 625 microseconds long.
• A piconet has a master and up to seven slaves. The master transmits in even time slots, slaves in odd time slots (see the timing sketch after this list).
• Packets can be up to five time slots wide.
• Data in a packet can be up to 2,745 bits in length.
• There are currently two types of data transfer between devices: SCO (synchronous connection oriented) and ACL (asynchronous connectionless).
• In a piconet, there can be up to three SCO links of 64,000 bits per second each. To avoid timing and collision problems, the SCO links use reserved slots set up by the master.
• Masters can support up to three SCO links with one, two or three slaves.
• Slots not reserved for SCO links can be used for ACL links.
• One master and slave can have a single ACL link.
• ACL is either point-to-point (master to one slave) or broadcast to all the slaves.
• ACL slaves can only transmit when requested by the master.
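Several of the timing figures above fit together neatly. The short sketch below checks them with simple arithmetic, using only the numbers quoted in the list; nothing here is a real Bluetooth implementation.

Code:

SLOT_SECONDS = 625e-6                  # each time slot is 625 microseconds
slots_per_second = 1 / SLOT_SECONDS
print(round(slots_per_second))         # 1600, matching the hop rate

# Master transmits in even slots, slaves in odd slots (time-division duplex).
def who_transmits(slot_number):
    return "master" if slot_number % 2 == 0 else "a slave"

print([who_transmits(n) for n in range(4)])  # ['master', 'a slave', 'master', 'a slave']

# Three 64,000 bit/s SCO voice links fit comfortably inside the ~1 Mbps channel,
# leaving room for headers, handshaking and ACL data.
sco_payload = 3 * 64_000
channel_capacity = 1_000_000
print(sco_payload / channel_capacity)  # 0.192, i.e. under 20% of the channel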

How BIOS Works




One of the most common uses of Flash memory is for the basic input/output system of your computer, commonly known as the BIOS (pronounced "bye-ose"). On virtually every computer available, the BIOS makes sure all the other chips, hard drives, ports and CPU function together.
Every desktop and laptop computer in common use today contains a microprocessor as its central processing unit. The microprocessor is the hardware component. To get its work done, the microprocessor executes a set of instructions known as software (see How Microprocessors Work for details). You are probably very familiar with two different types of software:
• The operating system - The operating system provides a set of services for the applications running on your computer, and it also provides the fundamental user interface for your computer. Windows 98 and Linux are examples of operating systems. (See How Operating Systems Work for lots of details.)
• The applications - Applications are pieces of software that are programmed to perform specific tasks. On your computer right now you probably have a browser application, a word processing application, an e-mail application and so on. You can also buy new applications and install them.
It turns out that the BIOS is the third type of software your computer needs to operate successfully. In this article, you'll learn all about BIOS -- what it does, how to configure it and what to do if your BIOS needs updating.
What BIOS Does
The BIOS software has a number of different roles, but its most important role is to load the operating system. When you turn on your computer and the microprocessor tries to execute its first instruction, it has to get that instruction from somewhere. It cannot get it from the operating system because the operating system is located on a hard disk, and the microprocessor cannot get to it without some instructions that tell it how. The BIOS provides those instructions. Some of the other common tasks that the BIOS performs include:
• A power-on self-test (POST) for all of the different hardware components in the system to make sure everything is working properly
• Activating other BIOS chips on different cards installed in the computer - For example, SCSI and graphics cards often have their own BIOS chips.
• Providing a set of low-level routines that the operating system uses to interface to different hardware devices - It is these routines that give the BIOS its name. They manage things like the keyboard, the screen, and the serial and parallel ports, especially when the computer is booting.
• Managing a collection of settings for the hard disks, clock, etc.
The BIOS is special software that interfaces the major hardware components of your computer with the operating system. It is usually stored on a Flash memory chip on the motherboard, but sometimes the chip is another type of ROM.

BIOS uses Flash memory, a type of ROM.
When you turn on your computer, the BIOS does several things. This is its usual sequence:
1. Check the CMOS Setup for custom settings
2. Load the interrupt handlers and device drivers
3. Initialize registers and power management
4. Perform the power-on self-test (POST)
5. Display system settings
6. Determine which devices are bootable
7. Initiate the bootstrap sequence
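As a purely illustrative outline - a real BIOS is firmware stored in ROM, not Python - the seven steps above could be sketched like this, with invented function and setting names:

Code:

# Purely illustrative: the step names mirror the numbered list above.
def bios_startup(cmos_settings, attached_devices):
    print("1. CMOS settings:", cmos_settings)
    print("2. loading interrupt handlers and device drivers")
    print("3. initialising registers and power management")
    print("4. running POST on:", attached_devices)
    print("5. displaying system settings")
    boot_order = [d for d in cmos_settings["boot_order"] if d in attached_devices]
    print("6. bootable devices, in order:", boot_order)
    print("7. bootstrapping from", boot_order[0])

bios_startup({"boot_order": ["floppy", "hard disk"]}, ["hard disk", "floppy"])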
The first thing the BIOS does is check the information stored in a tiny (64 bytes) amount of RAM located on a complementary metal oxide semiconductor (CMOS) chip. The CMOS Setup provides detailed information particular to your system and can be altered as your system changes. The BIOS uses this information to modify or supplement its default programming as needed. We will talk more about these settings later.
Interrupt handlers are small pieces of software that act as translators between the hardware components and the operating system. For example, when you press a key on your keyboard, the signal is sent to the keyboard interrupt handler, which tells the CPU what it is and passes it on to the operating system. The device drivers are other pieces of software that identify the base hardware components such as keyboard, mouse, hard drive and floppy drive. Since the BIOS is constantly intercepting signals to and from the hardware, it is usually copied, or shadowed, into RAM to run faster.
Booting the Computer
Whenever you turn on your computer, the first thing you see is the BIOS software doing its thing. On many machines, the BIOS displays text describing things like the amount of memory installed in your computer, the type of hard disk and so on. It turns out that, during this boot sequence, the BIOS is doing a remarkable amount of work to get your computer ready to run. This section briefly describes some of those activities for a typical PC.
After checking the CMOS Setup and loading the interrupt handlers, the BIOS determines whether the video card is operational. Most video cards have a miniature BIOS of their own that initializes the memory and graphics processor on the card. If they do not, there is usually video driver information on another ROM on the motherboard that the BIOS can load.
Next, the BIOS checks to see if this is a cold boot or a reboot. It does this by checking the value at memory address 0000:0472. A value of 1234h indicates a reboot, and the BIOS skips the rest of POST. Anything else is considered a cold boot.
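The reboot check can be pictured as follows (a sketch using a simulated memory map; a real BIOS reads the word at physical address 0000:0472 directly):

Code:

# Simulated low-memory word at offset 0x0472 (a real BIOS reads physical memory).
memory = {0x0472: 0x1234}   # 0x1234 here means "warm reboot"

def is_warm_reboot(mem):
    return mem.get(0x0472) == 0x1234

if is_warm_reboot(memory):
    print("reboot detected: skipping the rest of POST")
else:
    print("cold boot: running the full POST, including the RAM read/write test")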
If it is a cold boot, the BIOS verifies RAM by performing a read/write test of each memory address. It checks the PS/2 ports or USB ports for a keyboard and a mouse. It looks for a peripheral component interconnect (PCI) bus and, if it finds one, checks all the PCI cards. If the BIOS finds any errors during the POST, it will notify you by a series of beeps or a text message displayed on the screen. An error at this point is almost always a hardware problem.
The BIOS then displays some details about your system. This typically includes information about:
• The processor
• The floppy drive and hard drive
• Memory
• BIOS revision and date
• Display
Any special drivers, such as the ones for small computer system interface (SCSI) adapters, are loaded from the adapter, and the BIOS displays the information. The BIOS then looks at the sequence of storage devices identified as boot devices in the CMOS Setup. "Boot" is short for "bootstrap," as in the old phrase, "Lift yourself up by your bootstraps." Boot refers to the process of launching the operating system. The BIOS will try to initiate the boot sequence from the first device. If the BIOS does not find a device, it will try the next device in the list. If it does not find the proper files on a device, the startup process will halt. If you have ever left a floppy disk in the drive when you restarted your computer, you have probably seen the resulting error message.


The BIOS has tried to boot the computer off of the floppy disk left in the drive. Since it did not find the correct system files, it could not continue. Of course, this is an easy fix. Simply pop out the disk and press a key to continue.
Configuring BIOS
In the previous list, you saw that the BIOS checks the CMOS Setup for custom settings. Here's what you do to change those settings.
To enter the CMOS Setup, you must press a certain key or combination of keys during the initial startup sequence. Most systems use "Esc," "Del," "F1," "F2," "Ctrl-Esc" or "Ctrl-Alt-Esc" to enter setup. There is usually a line of text at the bottom of the display that tells you "Press ___ to Enter Setup."
Once you have entered setup, you will see a set of text screens with a number of options. Some of these are standard, while others vary according to the BIOS manufacturer. Common options include:
• System Time/Date - Set the system time and date
• Boot Sequence - The order that BIOS will try to load the operating system
• Plug and Play - A standard for auto-detecting connected devices; should be set to "Yes" if your computer and operating system both support it
• Mouse/Keyboard - "Enable Num Lock," "Enable the Keyboard," "Auto-Detect Mouse"...
• Drive Configuration - Configure hard drives, CD-ROM and floppy drives
• Memory - Direct the BIOS to shadow to a specific memory address
• Security - Set a password for accessing the computer
• Power Management - Select whether to use power management, as well as set the amount of time for standby and suspend
• Exit - Save your changes, discard your changes or restore default settings

CMOS Setup
Be very careful when making changes to setup. Incorrect settings may keep your computer from booting. When you are finished with your changes, you should choose "Save Changes" and exit. The BIOS will then restart your computer so that the new settings take effect.
The BIOS uses CMOS technology to save any changes made to the computer's settings. With this technology, a small lithium or Ni-Cad battery can supply enough power to keep the data for years. In fact, some of the newer chips have a tiny 10-year lithium battery built right into the CMOS chip!
Updating Your BIOS
Occasionally, a computer will need to have its BIOS updated. This is especially true of older machines. As new devices and standards arise, the BIOS needs to change in order to understand the new hardware. Since the BIOS is stored in some form of ROM, changing it is a bit harder than upgrading most other types of software.
To change the BIOS itself, you'll probably need a special program from the computer or BIOS manufacturer. Look at the BIOS revision and date information displayed on system startup or check with your computer manufacturer to find out what type of BIOS you have. Then go to the BIOS manufacturer's Web site to see if an upgrade is available. Download the upgrade and the utility program needed to install it. Sometimes the utility and update are combined in a single file to download. Copy the program, along with the BIOS update, onto a floppy disk. Restart your computer with the floppy disk in the drive, and the program erases the old BIOS and writes the new one. You can find a BIOS Wizard that will check your BIOS at BIOS Upgrades.
Major BIOS manufacturers include:
• American Megatrends Inc. (AMI)
• Phoenix Technologies
• ALi
• Winbond
As with changes to the CMOS Setup, be careful when upgrading your BIOS. Make sure you are upgrading to a version that is compatible with your computer system. Otherwise, you could corrupt the BIOS, which means you won't be able to boot your computer. If in doubt, check with your computer manufacturer to be sure you need to upgrade.


CHIPSETS


A chipset or "PCIset" is a group of microcircuits that orchestrate the flow of data to and from key components of a PC. This includes the CPU itself, the main memory, the secondary cache and any devices situated on the ISA and PCI buses. The chipset also controls data flow to and from hard disks, and other devices connected to the IDE channels. While new microprocessor technologies and speed improvements tend to receive all the attention, chipset innovations are, in fact, equally important.
Although there have always been other chipset manufacturers - such as SIS, VIA and Opti - for many years Intel's "Triton" chipsets were by far the most popular. Indeed, the introduction of the Intel Triton chipset caused something of a revolution in the motherboard market, with just about every manufacturer using it in preference to anything else. Much of this was down to the ability of the Triton to get the best out of both the Pentium processor and the PCI bus, together with its built-in master EIDE support, enhanced ISA bridge and ability to handle new memory technologies like EDO and SDRAM. However, the new PCI chipsets' potential performance improvements would only be realised when used in conjunction with BIOSes capable of taking full advantage of the new technologies on offer.
During the late 1990s things became far more competitive, with Acer Laboratories (ALI), SIS and VIA Technologies all developing chipsets designed to operate with Intel, AMD and Cyrix processors. 1998 was a particularly important year in chipset development, with what had become an unacceptable bottleneck - the PC's 66MHz system bus - finally being overcome. Interestingly, it was not Intel but rival chipmakers that made the first move, pushing Socket 7 chipsets to 100MHz. Intel responded with its 440BX, one of many chipsets to use the ubiquitous Northbridge/Southbridge architecture. It was not long before Intel's hold on the chipset market loosened further still, and again, the company had no-one but itself to blame. In 1999, its single-minded commitment to Direct Rambus DRAM (DRDRAM) left it in the embarrassing position of not having a chipset that supported the 133MHz system bus speed its latest range of processors were capable of. This was another situation its rivals were able to exploit, and in so doing gain market share.
The following charts the evolution of Intel chipsets over the years, from the time of its first Triton chipset. During this time there have also been a number of special chipsets optimised for the Pentium Pro or designed for use with notebook PCs.
Triton 430FX
Introduced in early 1995, the 82430FX - to give it its full name - was Intel's first Triton chipset and conformed to the PCI 2.0 specification. It introduced support for EDO memory configurations of up to 128MB and for pipelined burst cache and synchronous cache technologies. However, it did not support a number of emerging technologies such as SDRAM and USB and was superseded in 1996 - little more than a year after its launch - by a pair of higher performance chipsets.
Triton 430VX
The Triton 430VX chipset conforms to the PCI 2.1 specification, and is designed to support Intel's Universal Serial Bus (USB) and Concurrent PCI standards. With the earlier 430FX, a bus master (on the ISA or PCI bus), such as a network card or disk controller, would lock the PCI bus whenever it transferred data in order to have a clear path to memory. This interrupted other processes, and was inefficient because the bus master would never make full use of the 100 MBps bandwidth of the PCI bus. With Concurrent PCI, the chipset can wrest control of the PCI bus from an idle bus master to give other processes access on a timeshare basis. Theoretically, this should allow for data transfer rates of up to 100 MBps, 15% more than the 430FX chipset, and smooth intensive PCI tasks such as video playback when bus masters are present.
The 430VX chipset was aimed fairly and squarely at the consumer market. It was intended to speed up multimedia and office applications, and it was optimised for 16-bit operation. Furthermore, it was designed to work with SDRAM, a special type of memory that's optimised for intensive multimedia processing. Although the performance gains are slight for this type of RAM over EDO RAM, the advantage is that it can operate efficiently from a single Dual In-line Memory Module (DIMM) and does not need to be paired.
The 430VX provided improved EDO memory timings, which were supposed to allow cacheless systems to be built without compromising performance, at least compared to a PC with asynchronous cache. In practice, though, most manufacturers continued to provide at least some secondary cache, with most using synchronous cache to maximise performance.
Triton 430HX
The Triton 430HX chipset is geared towards business machines and was developed with networking, video conferencing and MPEG video playback in mind. It supports multiple processors, has been optimised for 32-bit operation and to work with large memory arrays (up to 512MB) and provides error control (ECC) facilities on the fly when 32-bit parity SIMMs are used. The 430HX does not support SDRAM.
The biggest difference between the HX and VX chipsets is the packaging. Where the VX consists of four separate chips, all built using the traditional plastic quad flat packaging, the HX chipset comprises just two chips, the 82439HX System Controller (SC), which manages the host and PCI buses, and the 82371SB PIIX3 for the ISA bus and all the ports.
The SC comes in a new ball grid array (BGA) packaging which reduces overall chip size and makes it easier to incorporate onto motherboard designs. It exerts the greatest influence on the machine's CPU performance, as it manages communications between the CPU and memory. The CPU has to be fed data from the secondary cache as quickly as possible, and if the necessary data isn't already in the cache, the SC fetches it from main memory and loads it into the cache. The SC also ensures that data written into cache by the CPU is "flushed" back into main memory.
The PIIX3 chip manages the many processes involved in getting data into and out of RAM from the other devices in the PC. It provides two EIDE channels, both of which can accept two drives. IDE drives contain most of the controlling circuitry built into the hard disk itself, so the PIIX is mainly responsible for shifting data from the drives into RAM and back as quickly as possible. It also provides two 115,200bit/s buffered serial ports, an error correcting Enhanced Parallel Port, a PS/2 mouse port and a keyboard controller. The PIIX also supports additional connections that many motherboards have yet to adopt as the norm, such as a Universal Serial Bus connector and an infrared port.
Triton 430TX
The Triton 430TX includes all the features found on the earlier chipsets, including Concurrent PCI, USB support, aggressive EDO RAM timings and SDRAM support. It is optimised for MMX processors and is designed to be used in both desktop and mobile computers.
The Triton 430TX also continues the high-integration two-chip BGA packaging first seen with the 430HX chipset, comprising the 82439TX System Controller (MTXC) and the 82371AB PCI ISA IDE Xcelerator (PIIX4). The former integrates the cache and main memory DRAM control functions and provides bus control to transfers between the CPU, cache, main memory, and the PCI Bus. The latter is a multi-function PCI device implementing a PCI-to-ISA bridge function, a PCI IDE function, a Universal Serial Bus host/hub function, and an Enhanced Power Management function.
The overall architecture divides functionality between the System Controller and the Peripheral Bus Controller components - which are often referred to as the "Northbridge" and "Southbridge" chipsets respectively.

The TX incorporates the Dynamic Power Management Architecture (DPMA) which reduces overall system power consumption and offers intelligent power-saving features like suspend to RAM and suspend to disk. The TX chipset also supports the new Ultra DMA disk protocol which enables a data throughput of 33 MBps from the hard disk drive to enhance performance in the most demanding applications.
440LX
The 440LX (by this time Intel had dropped the term "Triton") was the successor to the Pentium Pro 440FX chipset and was developed by Intel to consolidate on the critical success of the Pentium II processor launched a few months earlier. The most important feature of the 440LX is support for the Accelerated Graphics Port (AGP), a new, fast, dedicated bus designed to eliminate bottlenecks between the CPU, graphics controller and system memory, which will aid fast, high-quality 3D graphics.
Other improvements with the LX are more like housekeeping, bringing the Pentium II chipset up to the feature set of the 430TX by providing support for SDRAM and Ultra DMA IDE channels. The chipset includes the Advanced Configuration and Power Interface (ACPI), allowing quick power down and up, remote start-up over a LAN for remote network management, plus temperature and fan speed sensors. The chipset also has better integration with the capabilities of the Pentium II, such as support for dynamic execution and processor pipelining.
440EX
The 440EX AGPset, based on the core technology of the 440LX AGPset, is designed for use with the Celeron family of processors. It is ACPI-compliant and extends support for a number of advanced features such as AGP, UltraDMA/33, USB and 66MHz SDRAM, to the "Basic PC" market segment.
440BX
The PC's system bus had been a bottleneck for too long. Manufacturers of alternative motherboard chipsets had made the first move, pushing Socket 7 chipsets beyond Intel's 66MHz. Intel's response came in April 1998, with the release of its 440BX chipset, which represented a major step in the Pentium II architecture. The principal advantage of the 440BX chipset is support for a 100MHz system bus and 100MHz SDRAM. The former 66MHz bus speed is supported, allowing the BX chipset to be used with older (233MHz-333MHz) Pentium IIs.
The 440BX chipset features Intel's Quad Port Acceleration (QPA) to improve bandwidth between the Pentium II processor, the Accelerated Graphics Port, 100-MHz SDRAM and the PCI bus. QPA combines enhanced bus arbitration, deeper buffers, open-page memory architecture and ECC memory control to improve system performance. Other features include support for dual processors, 2x AGP, and the Advanced Configuration Interface (ACPI).
440ZX
The 440ZX is designed for lower cost form factors without sacrificing the performance expected from an AGPset, enabling 100MHz performance in form factors like microATX. With footprint compatibility with the 440BX, the 440ZX is intended to allow OEMs to leverage BX design and validation investment to produce new systems to meet entry level market segment needs.
440GX
Released at the same time as the Pentium II Xeon processor in mid-1998, the 440GX chipset was an evolution of the 440BX AGPset intended for use with Xeon-based workstations and servers. Built around the core architecture of its 440BX predecessor, the 440GX includes support for both Slot 1 and Slot 2 implementations, a 2x AGP expansion slot, dual CPUs and a maximum of 2GB of memory.
Importantly, the chipset supports full speed backside bus operation, enabling the Pentium II Xeon's Level 2 cache to run at the same speed as the core of the CPU.

810 AGPset
Formerly codenamed "Whitney", the 810 AGPset finally reached the market in the summer of 1999. It is a three-chip solution comprising the 82810 Graphics Memory Controller Hub (GMCH), 82801 I/O Controller Hub (ICH) and 82802 Firmware Hub (FWH) for storing the system and video BIOS. A break from tradition is that these components don't communicate with each other over the PCI bus. Instead, they use a dedicated 8-bit 266 MBps proprietary bus, thereby taking load off the PCI subsystem. The SDRAM memory interface is also unusual in that it runs at 100MHz irrespective of the system bus speed. There's no ISA support, but it could be implemented if a vendor added an extra bridge chip.
At the time of its launch, there were two versions of the 810 - the 82810 and 82810-DC100. The former is a 66MHz part with no graphics memory, while the latter is a 100MHz-capable chip with support for 4MB of on-board graphics memory. The Direct AGP graphics architecture uses 11MB of system memory for frame buffer, textures and Z-buffer if no display cache is implemented. This drops to 7MB if the display cache is implemented. The whole configuration is known as Direct Video Memory technology. Also incorporated in the chipset is an AC-97 CODEC, which allows software modem and audio functionality. Vendors can link this to an Audio Modem Riser (AMR) slot to facilitate future plug-in audio or modem upgrades.
In the autumn of 1999 a subsequent version of the chipset - the 810E - extended support to processors with a 133MHz system bus. The Intel 810E chipset features a unique internal gear arbitration, allowing it to run seamlessly with 66MHz, 100MHz and 133MHz processor buses.
As the cost of processors comes down, the marginal cost of the motherboard, graphics and sound subsystems becomes an increasingly important factor in vendors' efforts to hit ever-lower price points. However, high levels of integration can be a double-edged sword: they reduce vendors' bill-of-materials (BOM) costs, but also limit their capability for product differentiation. Many manufacturers defer their decisions on graphics and sound options to late in the production cycle in order to maintain a competitive marketing advantage. Given that other highly integrated solutions - such as Cyrix's Media GX - haven't fared particularly well in the past, the 810 AGPset represents a bold move on Intel's part and one that signals the company's determination to capture a greater share of the "value PC" market which had been effectively ceded to AMD and Cyrix over the prior couple of years.
820 chipset
Originally scheduled to be available concurrently with the Pentium III processor in the spring of 1999, Intel's much delayed 820 chipset was finally launched in November that year. Those delays - which had left Intel in the position of not having a chipset that supported the 133MHz system bus speed its latest range of processors were capable of - were largely due to delays in the production of Direct Rambus DRAM (DRDRAM), a key component in Intel's 133MHz platform strategy.
Direct RDRAM memory provides a memory bandwidth capable of delivering 1.6 GBps of maximum theoretical memory bandwidth - twice the peak memory bandwidth of 100MHz SDRAM systems. Additionally, the 820's support for AGP 4x technology allows graphics controllers to access main memory at more than 1 GBps - twice that of previous AGP platforms. The net result is the significantly improved graphics and multimedia handling performance expected to be necessary to accommodate future advances in both software and hardware technology.
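A quick back-of-the-envelope check of those figures, assuming PC800 Direct RDRAM (a 16-bit channel running at 800 million transfers per second) and a 64-bit SDRAM bus:

Code:

# Peak theoretical bandwidths behind the figures quoted above.
rdram_bandwidth = 800e6 * 2               # 2 bytes per transfer -> 1.6 GBps
pc100_bandwidth = 100e6 * 8               # 8-byte bus at 100MHz -> 0.8 GBps
print(rdram_bandwidth / 1e9)              # 1.6
print(rdram_bandwidth / pc100_bandwidth)  # 2.0, i.e. twice 100MHz SDRAM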
The 820 chipset employs the Accelerated Hub Architecture that is offered in all Intel 800 series chipsets - the first chipset architecture to move away from the traditional Northbridge/Southbridge design. It supports a bandwidth of 266 MBps and, with its optimised arbitration rules which allow more functions to run concurrently, delivers significantly improved audio and video handling. The chipset's three primary components are:
• Memory Controller Hub
• I/O Controller Hub, and
• Firmware Hub.

The Memory Controller Hub provides a high-performance interface for the CPU, memory and AGP and supports up to 1GB of memory via a single channel of RDRAM using 64-, 128- and 256-Mbit technology. With an internal bus running at 1.6 GBps and an advanced buffering and queuing structure, the Memory Controller Hub balances system resources and enables concurrent processing in either single or dual processor configurations.
The I/O Controller Hub forms a direct connection from the PC's I/O devices to the main memory. This results in increased bandwidth and significantly reduced arbitration overhead, creating a faster path to main memory. To capitalise further on this faster path to main memory, the 820 chipset features an integrated AC97 controller in addition to an ATA66 drive controller, dual USB ports and PCI add-in cards.
The Firmware Hub stores system and video BIOS and includes a first for the PC platform - a hardware-based random number generator. The Intel RNG provides truly random numbers through the use of thermal noise - thereby enabling stronger encryption, digital signing and security protocols. This is expected to be of particular benefit to the emerging class of e-commerce applications.
The i820 hadn't long been on the market before Intel - recognising that the price of RDRAM was likely to remain high for some time - designed and released an add-on chip, the 82805 Memory Translator Hub (MTH), which, when implemented on the motherboard, allowed the use of PC100 SDRAM. Sitting between the i820's Memory Controller Hub (MCH) and the RDRAM memory slots, the MTH chip translates the Rambus memory protocol that's used by RDRAM into the parallel protocol required by SDRAM, thereby allowing the i820 to use this much more price-attractive memory.
Within a few months, a bug in the MTH component came to light. This was serious enough to cause Intel to recall all MTH-equipped i820-motherboards. Since it wasn't possible to replace the defective chip Intel took the extraordinary step of giving every owner of an MTH-equipped i820 motherboard a replacement non-MTH motherboard as well as RDRAM to replace the SDRAM that was used before!
815 chipset
The various problems that had so delayed the introduction of Direct Rambus DRAM (DRDRAM), finally resulted in Intel doing what it had been so reluctant to do for so long - release a chipset supporting PC133 SDRAM. In fact, in mid-2000, it announced two such chipsets - formerly codenamed "Solano" - the 815 Chipset and the 815E Chipset.
Both chipsets use Intel's Graphics and Memory Controller Hub (GMCH). This supports both PC133 and PC100 SDRAM and provides onboard graphics, with a 230MHz RAMDAC and limited 3D acceleration. This gives system integrators the option of using the on-board graphics - and system memory - for lower cost systems or upgrading via an external graphics card for either AGP 4x or AGP 2x graphics capabilities.
Additionally, and like the 820E Chipset before it, the 815E features a new I/O Controller Hub (ICH2) for greater system performance and flexibility. This provides an additional USB controller, a Local Area Network (LAN) Connect Interface, dual Ultra ATA /100 controllers and up to six-channel audio capabilities. Integrating a Fast Ethernet controller directly into the chipsets makes it easier for computer manufacturers and system integrators to implement cost-effective network connections into PCs. The ICH2's enhanced AC97 interface supports full surround-sound for Dolby Digital audio found on DVD and simultaneously supports a soft modem connection.
850 chipset
Designed in tandem with the Pentium 4 processor, Intel's 850 Chipset represents the next step in the evolution of the Intel Hub Architecture - the successor to the previous Northbridge/Southbridge technology - first seen on the 810 Chipset. Comprising the 82850 Memory Controller Hub (MCH) and 82801BA I/O Controller Hub (ICH2), the new chipset's principal features are:
• a 400MHz system bus
• dual RDRAM memory channels, operating in lock step to deliver 3.2 GBps of memory bandwidth to the processor
• support for 1.5V AGP4x technology, allowing graphics controllers to access main memory at over 1 GBps - twice the speed of previous AGP platforms
• two USB controllers, doubling the bandwidth available for USB peripherals to 24 MBps over four ports
• dual Ultra ATA/100 controllers support the fastest IDE interface for transfers to storage devices.
To ensure maximum performance, the system bus is balanced with the dual RDRAM channels at 3.2 GBps, providing 3x the bandwidth of platforms based on Pentium III processors and allowing better concurrency for media-rich applications and multitasking.
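The arithmetic behind that balance is straightforward (assuming a 64-bit front-side bus and PC800 RDRAM channels, i.e. 16 bits at 800 million transfers per second each):

Code:

# The "400MHz" Pentium 4 bus is an 8-byte-wide bus quad-pumped to 400 MT/s.
fsb_bandwidth = 400e6 * 8                 # 3.2 GBps to/from the processor
rdram_channel = 800e6 * 2                 # one PC800 RDRAM channel: 1.6 GBps
memory_bandwidth = 2 * rdram_channel      # two channels in lock step: 3.2 GBps
print(fsb_bandwidth == memory_bandwidth)  # True - bus and memory are balanced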
In the autumn of 2002, some 18 months after the i850 was first introduced, the i850E variant was released, extending the capabilities of the chipset to support Hyper-Threading, a 533MHz system bus and PC1066 memory, for Pentium 4 class processors.
i845 chipset
The fact that system builders were obliged to use expensive DRDRAM - by virtue of the absence of any Pentium 4 chipsets supporting conventional SDRAM - had been an issue ever since the Pentium 4's launch at the end of 2000. The situation changed during the course of 2001, with chipmakers SiS and VIA both releasing Pentium 4 chipsets with DDR SDRAM support. Although this was a move of which Intel disapproved, it did have the effect of boosting the appeal of the Pentium 4, whose sales hitherto had been disappointing.
In the summer of 2001 Intel eventually gave in to market pressures and released their 845 chipset - previously codenamed "Brookdale" - supporting Pentium 4 systems' use of PC133 SDRAM. Whilst the combination of i845 and PC133 SDRAM meant lower prices - given that the speed of the memory bus was about three times slower than that of the Pentium 4 system bus - it also meant significantly poorer performance than that of an i850/DRDRAM based system. The reason the i845 didn't support faster DDR SDRAM at this time was apparently because they were prevented from allowing this until the start of the following year by the terms of a contract they'd entered into with Rambus, the inventors of DRDRAM.
Sure enough, at the beginning of 2002 Intel re-released the i845 chipset. The new version - sometimes referred to as the i845D - differs from its predecessor only in respect of its memory controller, which now supports PC1600 and PC2100 SDRAM - sometimes referred to as DDR200 and DDR266 respectively - in addition to PC133 SDRAM. It had reportedly been Intel's original intention for the i845 chipset to support only DDR200 SDRAM - capable of providing a maximum bandwidth of 1600MBps. However, the boom in the use of DDR SDRAM - and the consequent dramatic fall in prices - caused a rethink and the subsequent decision to extend support to DDR266 (maximum bandwidth 2100MBps). The fact that the company was prepared to make this decision even though it was bound to adversely impact the market share of its i850 chipset appears to indicate that the company's apparent infatuation with DRDRAM is well and truly over.
The 400MHz system bus of the i845 solution enables up to 3.2GBps of memory bandwidth to the Pentium 4 processor. Compare this with the up to 1 GBps of data transfer possible from PC133 SDRAM and it is clear why faster DDR SDRAM makes such a difference to overall system performance. Other features of the i845 chipset include a 1.5V 4x AGP interface providing over 1 GBps of graphics bandwidth, a 133MBps PCI interface, support for four USB ports, six-channel audio, a generally unused LAN connect interface, dual ATA-100 controllers and CNR support.
The i845 is Intel's first chipset to use Flip Chip BGA packaging for the chip itself. This improves heat conductivity between the Memory Controller Hub (MCH) and its heatsink, which is required for proper operation. It is also the first MCH built using a 0.18-micron process; earlier versions had been 0.25-micron. The smaller die allows another first - the incorporation of a Level 3-like write cache, significantly increasing the speed at which the CPU is able to write data. It is expected that the transition to 0.13-micron MCH/Northbridges will enable this idea to be further developed, to the point where chipsets include much larger, genuine Level 3 caches on the MCH itself. The i845 further capitalises on the performance advantage realised by its high-speed write cache by the provision of deep data buffers. These play an important role in helping the CPU and write cache to sustain their high data throughput levels.
A number of newer versions of the i845 chipset were subsequently released, all supporting the USB 2.0 interface (which increases bandwidth up to 40 times over the previous USB 1.1 standard):
• The i845G chipset, incorporating a new generation of integrated graphics - dubbed Intel Extreme Graphics - and targeted at the high-volume business and consumer desktop market segments.
• The i845E chipset, which works with discrete graphics components
• The i845GL chipset, designed for Celeron processor-based PCs.
i845GE chipset
The i845GE chipset was designed and optimised to support Hyper-Threading, Intel's innovative technology that achieves significant performance gains by allowing a single processor to be treated as two logical processors. Whilst not the first i845 chipset to support HT technology, it was the first in which that support was actually implemented, being launched at the same time as Intel's first HT-enabled desktop CPU, the 3.06GHz Pentium 4 unveiled in late 2002.
As well as supporting a faster, 266MHz version of Intel's Extreme Graphics core, the i845GE also supports a system bus speed of either 400 or 533MHz, up to DDR333 main memory and offers maximum display (digital CRT or TV) flexibility through an AGP4x connector.
The i845PE and i845GV chipsets are lower-spec variants of the i845GE, the former having no integrated graphics and the latter limiting both the Intel Extreme Graphics core and main memory support to DDR266 SDRAM.
Intel E7205 chipset
At the end of 2002, Intel announced the launch of a dozen Intel Xeon processor family products, including new processors, chipsets and platforms for Intel-based servers and workstations. Amongst these was one single-processor chipset, the E7205, formerly codenamed Granite Bay.
For some time the most viable way of balancing the bandwidth between the Pentium 4 CPU and its memory subsystem had been to couple the i850E chipset with dual-channel RDRAM. However, given the price and availability issues surrounding high-density RDRAM modules, this was a far from ideal solution. Despite - as its server/workstation-class chipset nomenclature implies - not originally being intended for desktop use, the E7205 chipset was to provide an answer to this dilemma. With a specification which includes support for:
• Dual Channel DDR266 memory bus (4.2GBps memory bandwidth)
• 400/533MHz FSB support (3.2GBps - 4.2GBps FSB bandwidth)
• AGP 8x
• USB 2.0, and
• integrated LAN.
it didn't take long for the motherboard manufacturers to produce boards based on the new chipset.
The E7205's memory controller is fully synchronous, meaning that the memory in E7205-based motherboards is clocked at the rate equal to the FSB frequency. Consequently, only DDR200 SDRAM may be used with CPUs supporting a 400MHz FSB and only DDR266 SDRAM with processors supporting a 533MHz FSB. The E7205 does not support DDR333 SDRAM.
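In other words, the memory type is dictated by the FSB speed. A small sketch of that pairing, plus the dual-channel bandwidth arithmetic behind the 4.2GBps figure quoted above (assuming two 64-bit DDR channels):

Code:

# The E7205 runs memory synchronously with the front-side bus, so the DDR
# speed is dictated by the FSB (figures from the text above).
allowed_memory = {400: "DDR200", 533: "DDR266"}   # FSB MHz -> required DDR type

def dual_channel_bandwidth(transfers_per_second):
    # two 64-bit (8-byte) channels
    return 2 * transfers_per_second * 8

print(allowed_memory[533])                  # DDR266
print(dual_channel_bandwidth(266e6) / 1e9)  # about 4.26, i.e. the "4.2GBps" above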
With the Pentium 4 family destined to make the transition to an 800MHz Quad Pumped Bus - at which time the CPU's bus bandwidth will increase to 6.4GBps - it appears reasonable to assume that the likely way for memory subsystems to have comparable bandwidth will be the continued use of dual-channel DDR SDRAM. To that extent, the E7205 can be viewed as a prototype of the Canterwood and Springdale chipsets slated to appear in 2003.
i875P chipset
Originally, Intel had planned to introduce a 800MHz FSB in the context of the Prescott, the upcoming 90nm Pentium 4 core. However, in the event this was brought forward to the spring of 2003. The rationale was to extend the Pentium 4's performance curve within the confines of their current 0.13-micron process, without having to increase clock speeds to unsustainable levels. The transition from 533MHz to 800MHz FSB was aided and abetted by an associated new chipset platform, the 875P chipset, formerly codenamed Canterwood.
A 64-bit 800MHz FSB provides 6.4GBps of bandwidth between the Memory Controller Hub (or Northbridge) and the CPU. In a move that appears to further reduce the strategic importance of DRDRAM in Intel's product planning, and that had been signalled by the earlier E7205 chipset, the memory subsystem the 875P uses to balance bandwidth between the Memory Controller Hub (MCH) and memory banks is dual-channel DDR SDRAM, in all of the DDR400, DDR333 and DDR266 variants.
Currently, there are two different strategies being employed in dual-channel memory controllers: one in which each memory bank has its own memory channel and an arbiter distributes the load between them, and the other in which a wider memory channel is created, thereby "doubling up" on standard DDR's 64-bit data paths. The i875P employs the latter technique, with each pair of installed DIMMs acting as a 128-bit memory module, able to transfer twice as much data as a single-channel solution, without the need for an arbiter.
As a consequence, dual channel operation is dependent on a number of conditions being met, Intel specifying that motherboards should default to single-channel mode in the event of any of these being violated:
• DIMMs must be installed in pairs
• Both DIMMs must use the same density memory chips
• Both DIMMs must use the same DRAM bus width
• Both DIMMs must be either single-sided or dual-sided.
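Expressed as a simple check (the field names here are invented for illustration; a real BIOS would read the equivalent properties from each DIMM's SPD data):

Code:

# Field names are invented for illustration only.
def channel_mode(dimms):
    if len(dimms) != 2:
        return "single-channel"
    a, b = dimms
    same_density = a["density"] == b["density"]
    same_width = a["bus_width"] == b["bus_width"]
    same_sides = a["double_sided"] == b["double_sided"]
    if same_density and same_width and same_sides:
        return "dual-channel"
    return "single-channel"   # any violation drops back to single-channel mode

pair = [{"density": "256Mbit", "bus_width": 64, "double_sided": False},
        {"density": "256Mbit", "bus_width": 64, "double_sided": False}]
print(channel_mode(pair))     # dual-channel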
The 875P chipset also introduces two significant platform innovations:
• Intel Performance Acceleration Technology (PAT), and
• Communications Streaming Architecture (CSA).
PAT optimises memory access between the processor and system memory for platforms configured with both the new 800MHz FSB and dual-channel DDR400 memory. CSA is a new communications architecture that creates a dedicated link from the Memory Controller Hub (MCH) to the network interface, thereby offloading network traffic from the PCI bus. Used in conjunction with the new Intel PRO/1000 CT Desktop Connection gigabit Ethernet controller, CSA is claimed to double the networking bandwidth possible with traditional PCI bus-based solutions.
Additionally, the 875P chipset includes a high-performance AGP 8x graphics interface, integrated Hi-Speed USB 2.0, optional ECC support for users who demand memory data reliability and integrity, and dual independent DMA audio engines, enabling a user to make a PC phone call whilst at the same time playing digital music streams. The chipset is also Intel's first to offer native Serial ATA (SATA), with a special version designated by the "-R" suffix adding RAID - albeit only RAID 0 (data striping) - support.

i865 chipset
If the i875 chipset can be viewed as the logical successor to the i850E, then its mainstream variant, the i865 chipset - formerly codenamed Springdale - can be viewed as the logical successor to the i845 series of chipsets. Not only do the i875/i865 chipsets represent a huge technological leap compared to their predecessors, but the performance gap between the pair of recent chipsets is significantly less than it was between the i850E and the i845 family.
There is a clear trend in PC hardware towards parallel processes, epitomised by Intel's Hyper-Threading technology. However, there are other examples of where performing several tasks at the same time is preferable to carrying out a single task quickly. Hence the increasing popularity of small RAID arrays and now the trend towards dual-channel memory subsystems.
Of the two strategies employed in dual-channel memory controllers described earlier - a separate channel per bank with an arbiter, or a single wider 128-bit channel - the i865's Memory Controller Hub, in common with the i875P, employs the latter, the same conditions for dual-channel operation also applying.
The i865 memory controller is the same as that used by the i875P chipset, supporting:
• Hyper Threading
• Dual 64-bit DDR memory channels
• Communication Streaming Architecture bus for gigabit Ethernet
and capable of being paired with either the ICH5 or ICH5R chip - which handles things like the 10/100 Ethernet interface, 6-channel AC97 audio interface, USB 2.0, the PCI bus, etc. - to provide the following additional features:
• 8 USB 2.0 ports
• Dual independent Serial ATA ports
The ICH5R also provides software RAID for Serial ATA drives.
The upshot is that - unlike the i875P - i865 chipsets are available in three different versions:
• i865P: supports DDR266 and DDR333 memory only and doesn't support the 800MHz FSB.
• i865PE: as per i865P, plus 800MHz FSB and DDR400 memory support.
• i865G: as per i865PE, plus Intel's integrated graphics core.
While the i865G's graphics core is the same as was featured on the i845G chipset, its performance will be faster, due both to a faster memory subsystem and a higher working frequency of the graphics core itself.
The following table compares a number of major characteristics of the i865P chipset with a selection of Intel's other recent Hyper-Threading chipset offerings.


i925X PCI Express chipset
In the summer of 2004 Intel introduced a new family of chipsets that they claimed brought the most profound changes in PC platform architecture in more than a decade. The relative positioning of the chipsets - codenamed Alderwood and Grantsdale - is similar to that of the Canterwood and Springdale chipsets which preceded them. The 925X PCI Express chipset is the higher-end of the two, boasting a number of specific performance enhancements and being designed to deliver the ultimate gaming experience when coupled with Pentium 4 Extreme Edition CPUs.
The new chipsets are designed for use with the latest Prescott-cored Pentium 4 CPUs, designated by the new numeric model naming scheme - initially the 560 at 3.6GHz, down to the 520 at 2.8GHz. They will therefore only be used in motherboards that support Intel's innovative LGA775 package, which facilitates a direct electrical connection between the chip module substrate and the motherboard that the company claims will provide the robust power and signal delivery needed for future performance headroom.
All the new chipsets support Hyper-Threading, an 800MHz FSB and dual-channel DDR2-533 memory and enable a broad spectrum of new platform capabilities:
• Intel High Definition Audio enables multistreaming, 7.1 surround sound and dynamic jack retasking in a groundbreaking PC audio solution that provides performance comparable to high-end consumer electronics (CE) equipment.
• Intel Matrix Storage Technology provides the performance benefits of RAID 0 for media-intensive applications and the added protection of RAID 1 for critical digital media files and data on just two drives.
• The I/O Controller Hub 6 (ICH6R version) supports four 1.5Gbps Serial ATA (SATA) ports with Advanced Host Controller Interface (AHCI) capability, enabling Native Command Queuing for enhanced storage performance.
• Four PCI Express x1 high-speed expansion ports are ready for Gigabit Ethernet and future applications, including multiple TV tuners implemented in a single card.
• Intel Wireless Connect Technology enables users to create or expand a wireless network without external access point hardware. Intel Wireless Connect Technology requires a specific Intel 9XX Express Chipset and a separate Intel wireless LAN solution to operate.
Intel's new Flex memory system introduces some welcome flexibility, with dual-channel operation no longer being restricted to identical memory modules bought in matched pairs. Now the requirement is simply for the same amount of memory - whatever the configuration - in each of the two available banks.
Foremost amongst the innovations is the introduction of the PCI Express (PCX) bus technology. As digital video content becomes ever more important in today’s electronic universe, no single aspect of the personal computing platform requires as much performance increase as the graphics interface.
The new chipsets address this need in the shape of the revolutionary 16x PCI Express graphics interface, as its name implies, an aggregation of 16 lanes. This provides the increased bandwidth and scalability necessary to tackle the most demanding multimedia tasks, with up to four times the theoretical maximum bandwidth over previous generation AGP8X-based solutions - up to 4 GBps of peak bandwidth per direction and up to 8 GBps concurrent bandwidth.
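The "4 GBps per direction" figure follows from the per-lane signalling rate of first-generation PCI Express. The sketch below is simple arithmetic based on the standard figures of 2.5GT/s per lane with 8b/10b encoding; the constant names are illustrative.

Code:

# How the "up to 4 GBps per direction" figure for 16x PCI Express follows from
# the per-lane signalling rate (PCI Express 1.x: 2.5GT/s with 8b/10b encoding).

LANE_RATE_GTPS = 2.5           # wire bits per second per lane, in billions
ENCODING_EFFICIENCY = 8 / 10   # 8b/10b: 10 wire bits carry 8 data bits
LANES = 16

per_lane_GBps = LANE_RATE_GTPS * ENCODING_EFFICIENCY / 8   # 0.25 GB/s per lane
per_direction = per_lane_GBps * LANES                      # 4 GB/s per direction
concurrent = per_direction * 2                             # 8 GB/s both directions

print(per_direction, concurrent)   # 4.0 8.0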
AGP is unceremoniously consigned to history, the new chipsets providing no AGP interface at all. In time 1x PCX will replace the decade-old PCI standard.

i915 Express chipsets
Announced at the same time as the i925X Express, the i915 Express chipset family - codenamed Grantsdale and comprising the i915P and i915G chipsets - has the same features as its sibling, with the exception of some of the i925X's specific performance improvements.
The principal differences between the i915 and i925X chipsets are in graphics and memory support. The i915 supports traditional dual-channel DDR memory as well as the more expensive DDR2 variety. In addition, the i915G chipset includes an integrated Intel Graphics Media Accelerator 900, optimised for Microsoft DirectX 9 and capable of providing dual independent display capability with support for the latest 16:9 ratio monitors, in addition to conventional 4:3 displays.

The 3D graphics pipeline is broken up into four major stages: geometry processing, setup (vertex processing), texture application and rasterisation. The Intel GMA 900 is optimised to use the Intel Pentium 4 processor for software-based geometry processing (such as transform and lighting) defined by Microsoft DirectX 9.
The Intel GMA 900 handles the remaining three stages, including converting vertices to pixels, applying textures to pixels, and rasterisation — the application of lighting and other effects to produce the final pixel value. From the rasterisation stage the Intel GMA 900 writes the final pixel value to the frame buffer for display. Intel GMA 900 includes two independent display pipelines that enable operation of dual displays.
The Intel GMA 900 utilises a shared memory architecture, its support for dual-channel DDR2/533-MHz memory ensuring the memory bandwidth so critically important for quality and performance.


Pic 1: the inner view of a mouse
Pic 2: how a touchscreen works

INPUT DEVICES

Keyboards
A computer keyboard is an array of switches, each of which sends the PC a unique signal when pressed. Two types of switch are commonly used: mechanical and rubber membrane. Mechanical switches are simply spring-loaded "push to make" types, so when pressed down they complete the circuit and then break it again when released. These are the type used in clicky keyboards with plenty of tactile feedback.
Membranes are composed of three sheets: the first has conductive tracks printed on it, the second is a separator with holes in it and the third is a conductive layer with bumps on it. A rubber mat over this gives the springy feel. When a key is pressed, it pushes the two conductive layers together to complete the circuit. On top is a plastic housing which includes sliders to keep the keys aligned.
An important factor for keys is their force displacement curve, which shows how much force is needed to depress a key, and how this force varies during the key's downward travel. Research shows most people prefer a force of 80g to 100g, though games consoles may go to 120g or higher while other keys could be as light as 50g.
The keys are connected up as a matrix, and their row and column signals feed into the keyboard's own microcontroller chip. This is mounted on a circuit board inside the keyboard, and interprets the signals with its built-in firmware program. A particular key press might signal as row 3, column B, so the controller might decode this as an A and send the appropriate code for A back to the PC. These "scan codes" are defined as standard in the PC's BIOS, though the row and column definitions are specific only to that particular keyboard.
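As a hypothetical illustration of the decoding step described above, the snippet below maps a row/column closure onto a scan code. The wiring positions are invented for the example; the scan code values shown follow the common PC set, but a real keyboard's firmware uses its own matrix layout.

Code:

# Illustrative sketch of keyboard firmware mapping a matrix position to a scan
# code. The row/column wiring here is invented; real keyboards define their own.

KEYMAP = {
    (3, 'B'): 0x1E,   # e.g. row 3, column B decoded as the scan code for 'A'
    (3, 'C'): 0x1F,   # 'S'
    (3, 'D'): 0x20,   # 'D'
}

def decode_keypress(row, column):
    """Translate a row/column closure into the scan code sent to the PC."""
    return KEYMAP.get((row, column))

print(hex(decode_keypress(3, 'B')))   # 0x1e, the code the PC interprets as 'A'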
Increasingly, keyboard firmware is becoming more complex as manufacturers make their keyboards more sophisticated. It is not uncommon for a programmable keyboard, in which some keys have switchable multiple functions, to need 8KB of ROM to store its firmware. Most programmable functions are executed through a driver running on the PC.
A keyboard's microcontroller is also responsible for negotiating with the keyboard controller in the PC, both to report its presence and to allow software on the PC to do things like toggling the status light on the keyboard. The two controllers communicate asynchronously over the keyboard cable.
Many "ergonomic" keyboards work according to one principle; angling the two halves of the main keypad to allow the elbows to rest in a more natural position. Apple's Adjustable Keyboard has a wide, gently sloping wrist rest, and splits down the middle, enabling the user to find the most comfortable typing angle. It has a detachable numeric keypad so the user can position the mouse closer to the alphabetic keys. Cherry Electrical sells a similar split keyboard for the PC. The keyboard which sells in the largest volumes (and is one of the cheapest) is the Microsoft Natural Keyboard. This also separates the keys into two halves and its undulating design is claimed to accommodate the natural curves of the hand.



Mice
In the early 1980s the first PCs were equipped with the traditional user input device - a keyboard. By the end of the decade, however, a mouse had become essential for PCs running the GUI-based Windows operating system.


The commonest mouse used today is the opto-mechanical type. Its ball is steel for weight and rubber-coated for grip, and as it rotates it drives two rollers, one each for x and y displacement. A third spring-loaded roller holds the ball in place against the other two.
These rollers then turn two disks with radial slots cut in them. Each disk rotates within a photo-detector cell, and each cell contains two offset light emitting diodes (LEDs) and light sensors. As the disk turns, the sensors see the light appear to flash, showing movement, while the offset between the two light sensors shows the direction of movement.
Also inside the mouse are a switch for each button, and a microcontroller which interprets the signals from the sensors and the switches, using its firmware program to translate them into packets of data which are sent to the PC. Serial mice use voltages of 12V and an asynchronous protocol from Microsoft comprising three bytes per packet to report x and y movement plus button presses. PS/2 mice use 5V and an IBM-developed communications protocol and interface.
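The offset sensor pair on each axis is, in effect, a quadrature encoder: the two signals are 90 degrees out of phase, and the order in which they change reveals the direction of rotation. The sketch below shows the general decoding principle, not any particular mouse's firmware.

Code:

# Sketch of quadrature decoding: the two offset sensors produce square waves
# 90 degrees out of phase, and the order of transitions gives the direction.

# Valid (sensor_a, sensor_b) states in sequence for one direction of travel
FORWARD = [(0, 0), (0, 1), (1, 1), (1, 0)]

def step_direction(prev_state, new_state):
    """Return +1, -1 or 0 counts of displacement for one sensor transition."""
    if prev_state == new_state:
        return 0
    i = FORWARD.index(prev_state)
    if FORWARD[(i + 1) % 4] == new_state:
        return +1      # moved one slot forward
    if FORWARD[(i - 1) % 4] == new_state:
        return -1      # moved one slot backward
    return 0           # illegal jump (a missed transition)

print(step_direction((0, 0), (0, 1)))   # +1
print(step_direction((0, 1), (0, 0)))   # -1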
1999 saw the introduction of the most radical mouse design advancement since its first appearance way back in 1968 in the shape of Microsoft's revolutionary IntelliMouse. Gone are the mouse ball and other moving parts inside the mouse used to track the mouse's mechanical movement, replaced by a tiny complementary metal oxide semiconductor (CMOS) optical sensor - the same chip used in digital cameras - and an on-board digital signal processor (DSP).
Called the IntelliEye, this optical sensor emits a red glow beneath the mouse to capture high-resolution digital snapshots at the rate of 1,500 images per second, which the DSP compares, translating the changes into on-screen pointer movements. The technique, called image correlation processing, executes 18 million instructions per second (MIPS) and results in smoother, more precise pointer movement. The absence of moving parts means the mouse's traditional enemies - such as food crumbs, dust and grime - are all but completely avoided. The IntelliEye works on nearly any surface, such as wood, paper, and cloth - although it does have some difficulty with reflective surfaces, such as CD jewel cases, mirrors, and glass.
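Image correlation boils down to asking, for each new snapshot, which small shift best lines it up with the previous one. The brute-force toy below illustrates the idea under simplified assumptions; a real sensor's DSP does the equivalent in hardware on every frame.

Code:

# Toy version of image correlation: try every small (dx, dy) shift of the new
# frame against the previous one and report the shift with the lowest error.

def best_shift(prev, curr, max_shift=2):
    """Find the (dx, dy) offset that best aligns two small greyscale frames."""
    h, w = len(prev), len(prev[0])
    best, best_err = (0, 0), float("inf")
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            err, n = 0, 0
            for y in range(h):
                for x in range(w):
                    sy, sx = y + dy, x + dx
                    if 0 <= sy < h and 0 <= sx < w:
                        err += (prev[y][x] - curr[sy][sx]) ** 2
                        n += 1
            if n and err / n < best_err:
                best_err, best = err / n, (dx, dy)
    return best

prev = [[4 * y + x + 1 for x in range(4)] for y in range(4)]   # a textured patch
curr = [[4 * y + x     for x in range(4)] for y in range(4)]   # same patch, one pixel right
print(best_shift(prev, curr))   # (1, 0): the image has shifted one pixel right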

Touchscreens
A touchscreen is an intuitive computer input device that works by simply touching the display screen, either with a finger or a stylus, rather than typing on a keyboard or pointing with a mouse. Computers with touchscreens have a smaller footprint, and can be mounted in smaller spaces; they have fewer movable parts, and can be sealed. Touchscreens may be built in or added on. Add-on touchscreens are external frames with a clear see-through touchscreen which mount onto the monitor bezel and have a controller built into their frame. Built-in touchscreens are internal, heavy-duty touchscreens mounted directly onto the CRT tube.
The touchscreen interface - whereby users navigate a computer system by touching icons or links on the screen itself - is the simplest, most intuitive and easiest to learn of all PC input devices, and is fast becoming the interface of choice for a wide variety of applications, such as:
• Public Information Systems: Information kiosks, tourism displays, and other electronic displays are used by many people that have little or no computing experience. The user-friendly touchscreen interface can be less intimidating and easier to use than other input devices, especially for novice users, making information accessible to the widest possible audience.
• Restaurant/POS Systems: Time is money, especially in a fast paced restaurant or retail environment. Because touchscreen systems are easy to use, overall training time for new employees can be reduced. And work can get done faster, because employees can simply touch the screen to perform tasks, rather than entering complex key strokes or commands.
• Customer Self-Service: In today's fast-paced world, waiting in line is one of the few things that has yet to speed up. Self-service touchscreen terminals can be used to improve customer service at busy stores, fast service restaurants, transportation hubs, and more. Customers can quickly place their own orders or check themselves in or out, saving them time, and decreasing wait times for other customers.
• Control / Automation Systems: The touchscreen interface is useful in systems ranging from industrial process control to home automation. By integrating the input device with the display, valuable workspace can be saved. And with a graphical interface, operators can monitor and control complex operations in real-time by simply touching the screen.
• Computer Based Training: Because the touchscreen interface is more user-friendly than other input devices, overall training time for computer novices, and therefore training expense, can be reduced. It can also help to make learning more fun and interactive, which can lead to a more beneficial training experience for both students and educators.
Any touchscreen system comprises the following three basic components:
• a touchscreen sensor panel, which sits above the display and generates appropriate voltages according to precisely where it is touched
• a touchscreen controller, which processes the signals received from the sensor and translates them into touch event data that is passed to the PC's processor, usually via a serial or USB interface
• a software driver, which provides an interface to the PC's operating system and translates the touch event data into mouse events, essentially enabling the sensor panel to "emulate" a mouse (sketched below).
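A minimal sketch of the driver's translation step follows. The 12-bit controller range, screen size and event names are assumptions made for illustration, not any particular driver's API.

Code:

# Minimal sketch of the driver's job: scale raw controller coordinates to
# screen pixels and report them as mouse events. The event names and the
# controller's 12-bit range are assumptions for illustration only.

SCREEN_W, SCREEN_H = 1024, 768
RAW_MAX = 4095                      # assumed 12-bit controller output

def touch_to_mouse(raw_x, raw_y, touching):
    x = raw_x * (SCREEN_W - 1) // RAW_MAX
    y = raw_y * (SCREEN_H - 1) // RAW_MAX
    event = "button_down" if touching else "button_up"
    return {"event": event, "x": x, "y": y}

# A touch near the horizontal centre, about a quarter of the way down the screen
print(touch_to_mouse(2048, 1024, True))   # {'event': 'button_down', 'x': 511, 'y': 191}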
The first touchscreen was created by adding a transparent surface to a touch-sensitive graphic digitizer, and sizing it to fit a computer monitor. Initially, the purpose was to increase the speed at which data could be entered into a computer. Subsequently, several types of touchscreen technologies have emerged, each with its own advantages and disadvantages that may, or may not, make it suitable for any given application:
Resistive touchscreens respond to the pressure of a finger, a fingernail, or a stylus. They typically comprise a glass or acrylic base that is coated with electrically conductive and resistive layers. The thin layers are separated by invisible separator dots. When operating, an electrical current is constantly flowing through the conductive material. In the absence of a touch, the separator dots prevent the conductive layer from making contact with the resistive layer. When pressure is applied to the screen the layers are pressed together, causing a change in the electrical current. This is detected by the touchscreen controller, which interprets it as a vertical/horizontal coordinate on the screen (x- and y-axes) and registers the appropriate touch event.
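Because the driven layer behaves as a voltage divider, the voltage picked up by the other layer is proportional to the touch position along that axis. The sketch below shows the scaling step under assumed values (a 10-bit ADC and an 800x600 screen); it is not a specific controller's firmware.

Code:

# Sketch of turning a resistive panel's readings into coordinates: the voltage
# sensed on each axis is proportional to the touch position along that axis.

ADC_MAX = 1023                  # assumed 10-bit analogue-to-digital converter
SCREEN_W, SCREEN_H = 800, 600   # assumed display resolution

def touch_coordinates(adc_x, adc_y):
    """Scale the two ADC readings (one per driven axis) to pixel coordinates."""
    x = adc_x / ADC_MAX * (SCREEN_W - 1)
    y = adc_y / ADC_MAX * (SCREEN_H - 1)
    return round(x), round(y)

print(touch_coordinates(512, 256))   # (400, 150): half-way across, a quarter down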
Resistive type touchscreens are generally the most affordable. Although clarity is less than with other touchscreen types, they're durable and able to withstand a variety of harsh environments. This makes them particularly suited for use in POS environments, restaurants, control/automation systems and medical applications.
Infrared touchscreens are based on light-beam interruption technology. Instead of placing a layer on the display surface, a frame surrounds it. The frame assembly is comprised of printed wiring boards on which the opto-electronics are mounted and is concealed behind an IR-transparent bezel. The bezel shields the opto-electronics from the operating environment while allowing the IR beams to pass through. The frame contains light sources - or light-emitting diodes - on one side, and light detectors - or photosensors - on the opposite side. The effect of this is to create an optical grid across the screen. When any object touches the screen, the invisible light beam is interrupted, causing a drop in the signal received by the photosensors. Based on which photosensors stop receiving the light signals, it is easy to isolate a screen coordinate.
Infrared touch systems are solid state technology and have no moving mechanical parts. As such, they have no physical sensor that can be abraded or worn out with heavy use over time. Furthermore, since they do not require an overlay - which can be broken - they are less vulnerable to vandalism and also extremely tolerant of shock and vibration.

Surface Acoustic Wave technology is one of the most advanced touchscreen types. SAW touchscreens work much like their infrared brethren except that sound waves, not light beams, are cast across the screen by transducers. Two sound waves, one emanating from the left of the screen and another from the top, move across the screen's surface. The waves continually bounce off reflectors located on all sides of the screen until they reach sensors located on the opposite side from where they originated.
When a finger touches the screen, the waves are absorbed and their rate of travel thus slowed. Since the receivers know how quickly the waves should arrive relative to when they were sent, the resulting delay allows them to determine the x- and y-coordinates of the point of contact and the appropriate touch event to be registered. Unlike other touchscreen technologies, the z-axis (depth) of the touch event can also be calculated: if the screen is touched with more than usual force, the water in the finger absorbs more of the wave's energy, thereby delaying it more.
Because the panel is all glass and there are no layers that can be worn, Surface Acoustic Wave touchscreens are highly durable and exhibit excellent clarity characteristics. The technology is recommended for public information kiosks, computer based training, or other high traffic indoor environments.
Capacitive touchscreens consist of a glass panel with a capacitive (charge storing) material coating its surface. Unlike resistive touchscreens, where any object can create a touch, they require contact with a bare finger or conductive stylus. When the screen is touched by an appropriate conductive object, current from each corner of the touchscreen is drawn to the point of contact. This causes oscillator circuits located at corners of the screen to vary in frequency depending on where the screen was touched. The resultant frequency changes are measured to determine the x- and y- co-ordinates of the touch event.
Capacitive type touchscreens are very durable, and have a high clarity. They are used in a wide range of applications, from restaurant and POS use to industrial controls and information kiosks.
The table below summarises the principal advantages/disadvantages of each of the described technologies.









Pic 1: CRT monitor
Pic 2: inner geometry
Pic 3: dot trio, aperture grill and slotted mask respectively
Pic 4: buttons
Pic 5: the range of view

CRT MONITORS

In an industry in which development is so rapid, it is somewhat surprising that the technology behind monitors and televisions is over a hundred years old. Whilst confusion surrounds the precise origins of the cathode-ray tube, or CRT, it's generally agreed that German scientist Karl Ferdinand Braun developed the first controllable CRT in 1897, when he added alternating voltages to the device to enable it to send controlled streams of electrons from one end of the tube to the other. However, it wasn't until the late 1940s that CRTs were used in the first television sets. Although the CRTs found in modern day monitors have undergone modifications to improve picture quality, they still follow the same basic principles.
The demise of the CRT monitor as a desktop PC peripheral had been long predicted, and not without good reason:
• they're heavy and bulky
• they're power hungry - typically 150W for a 17in monitor
• their high-voltage electric field, high- and low frequency magnetic fields and x-ray radiation have proven to be harmful to humans in the past
• the scanning technology they employ makes flickering unavoidable, causing eye strain and fatigue
• their susceptibility to electro-magnetic fields makes them vulnerable in military environments
• their surface is often either spherical or cylindrical, with the result that straight lines do not appear straight at the edges.
Whilst competing technologies - such as LCDs and PDPs - had established themselves in specialist areas, there are several good reasons to explain why the CRT was able to maintain its dominance in the PC monitor market into the new millennium:
• phosphors have been developed over a long period of time, to the point where they offer excellent colour saturation at the very small particle size required by high-resolution displays
• the fact that phosphors emit light in all directions means that viewing angles of close to 180 degrees are possible
• since an electron current can be focused to a small spot, CRTs can deliver peak luminances as high as 1000 cd/m2 (or 1000 nits)
• CRTs use a simple and mature technology and can therefore be manufactured inexpensively in many industrialised countries
• whilst the gap is getting smaller all the time, they remain significantly cheaper than alternative display technologies.
However, by 2001 the writing was clearly on the wall and the CRT's long period of dominance appeared finally to be coming to an end. In the summer of that year Philips Electronics - the world's largest CRT manufacturer - had agreed to merge its business with that of rival LG Electronics, Apple had begun shipping all its systems with LCD monitors and Hitachi had closed its $500m-a-year CRT operation, proclaiming that "there are no prospects for growth of the monitor CRT market". Having peaked at a high approaching $20 billion in 1999, revenues from CRT monitor sales were forecast to plunge to about half that figure by 2007.

Anatomy

Most CRT monitors have cases about as deep as the screen is wide, raising the question: what is it that's inside a monitor that requires as much space as a PC's system case itself?
A CRT is essentially an oddly-shaped, sealed glass bottle with no air inside. It begins with a slim neck and tapers outward until it forms a large base. The base is the monitor's "screen" and is coated on the inside with a matrix of thousands of tiny phosphor dots. Phosphors are chemicals which emit light when excited by a stream of electrons: different phosphors emit different coloured light. Each dot consists of three blobs of coloured phosphor: one red, one green, one blue. These groups of three phosphors make up what is known as a single pixel.

In the "bottle neck" of the CRT is the electron gun, which is composed of a cathode, heat source and focusing elements. Colour monitors have three separate electron guns, one for each phosphor colour. Images are created when electrons, fired from the electron guns, converge to strike their respective phosphor blobs.
Convergence is the ability of the three electron beams to come together at a single spot on the surface of the CRT. Precise convergence is necessary as CRT displays work on the principle of additive coloration, whereby combinations of different intensities of red, green and blue phosphors create the illusion of millions of colours. When each of the primary colours is added in equal amounts they will form a white spot, while the absence of any colour creates a black spot. Misconvergence shows up as shadows which appear around text and graphic images.


The electron gun radiates electrons when the heater is hot enough to liberate electrons (negatively charged) from the cathode. In order for the electrons to reach the phosphor, they have first to pass through the monitor's focusing elements. While the radiated electron beam will be circular in the middle of the screen, it has a tendency to become elliptical as it spreads towards the screen's outer areas, creating a distorted image in a process referred to as astigmatism. The focusing elements are set up in such a way as to initially focus the electron flow into a very thin beam and then - having corrected for astigmatism - direct it in a specific direction. This is how the electron beam lights up a specific phosphor dot, the electrons being drawn toward the phosphor dots by a powerful, positively charged anode located near the screen.
The deflection yoke around the neck of the CRT creates a magnetic field which controls the direction of the electron beams, guiding them to strike the proper position on the screen. This starts in the top left corner (as viewed from the front) and flashes on and off as it moves across the row, or "raster", from left to right. When it reaches the edge of the screen, it stops and moves down to the next line. Its motion from right to left is called horizontal retrace and is timed to coincide with the horizontal blanking interval so that the retrace lines will be invisible. The beam repeats this process until all lines on the screen are traced, at which point it moves from the bottom to the top of the screen - during the vertical retrace interval - ready to display the next screen image.
Since the surface of a CRT is not truly spherical, the beams which travel to the centre of the display are foreshortened, while those that travel to the corners of the display are comparatively longer. This means that the period of time beams are subjected to magnetic deflection varies according to their direction. To compensate, CRTs have a deflection circuit which dynamically varies the deflection current depending on the position at which the electron beam should strike the CRT surface.
Before the electron beam strikes the phosphor dots, it travels through a perforated sheet located directly in front of the phosphor. Originally known as a "shadow mask", these sheets are now available in a number of forms, designed to suit the various CRT tube technologies that have emerged over the years. They perform a number of important functions:
• they "mask" the electron beam, forming a smaller, more rounded point that can strike individual phosphor dots cleanly
• they filter out stray electrons, thereby minimising "overspill" and ensuring that only the intended phosphors are hit
• by guiding the electrons to the correct phosphor colours, they permit independent control of brightness of the monitor's three primary colours.

When the beam impinges on the front of the screen, the energetic electrons collide with the phosphors that correspond to the pixels of the image that's to be created on the screen. When this happens, each phosphor is illuminated to a greater or lesser extent, and light is emitted in the colour of the individual phosphor blobs. Their proximity causes the human eye to perceive the combination as a single coloured pixel.
Resolution and refresh rate
The most important aspect of a monitor is that it should give a stable display at the chosen resolution and colour palette. A screen that shimmers or flickers, particularly when most of the picture is showing white (as in Windows), can cause itchy or painful eyes, headaches and migraines. It is also important that the performance characteristics of a monitor be carefully matched with those of the graphics card driving it. It's no good having an extremely high performance graphics accelerator, capable of ultra high resolutions at high flicker-free refresh rates, if the monitor cannot lock onto the signal.
Resolution is the number of pixels the graphics card is describing the desktop with, expressed as a horizontal by vertical figure. Standard VGA resolution is 640x480 pixels. This was pretty much obsolete by the beginning of the new millennium, when the commonest CRT monitor resolutions were SVGA and XGA - 800x600 and 1024x768 pixels respectively.
Refresh rate, or vertical scanning frequency, is measured in Hertz (Hz) and represents the number of frames displayed on the screen per second. Too few, and the eye will notice the intervals in between and perceive a flickering display. It is generally accepted - including by standards bodies such as VESA - that a monitor requires a refresh rate of 75Hz or above for a flicker-free display. A computer's graphics circuitry creates a signal based on the Windows desktop resolution and refresh rate. This signal is known as the horizontal scanning frequency (HSF) and is measured in kHz. A multi-scanning or "autoscan" monitor is capable of locking on to any signal which lies between a minimum and maximum HSF. If the signal falls outside the monitor's range, it will not be displayed.
Thus, the formula for calculating a CRT monitor's maximum refresh rate is:
VSF = HSF / number of horizontal lines x 0.95, where
VSF = vertical scanning frequency (refresh rate) and HSF = horizontal scanning frequency.
So, a monitor with a horizontal scanning frequency of 96kHz at a resolution of 1280x1024 would have a maximum refresh rate of:
VSF = 96,000 / 1024 x 0.95 = 89Hz.
If the same monitor were set to a resolution of 1600x1200, its maximum refresh rate would be:
VSF = 96,000 / 1200 x 0.95 = 76Hz.
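The same formula expressed as a small function reproduces both worked examples; the 0.95 factor is the allowance for the vertical retrace interval mentioned in the formula above.

Code:

# The refresh rate formula above as a small function, reproducing the two
# worked examples (the 0.95 factor allows for the vertical retrace interval).

def max_refresh_hz(hsf_khz, vertical_lines):
    return hsf_khz * 1000 / vertical_lines * 0.95

print(round(max_refresh_hz(96, 1024)))   # 89 (1280x1024)
print(round(max_refresh_hz(96, 1200)))   # 76 (1600x1200)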
Interlacing
Back in the 1930s, TV broadcast engineers had to design a transmission and reception system that satisfied a number of criteria:
• functioned in harmony with the electricity supply system
• was economic with broadcast radio wave bandwidth
• could produce an acceptable image on the CRT displays of the time without undue flicker.
The mains electricity supply in Europe and the USA was 50Hz and 60Hz respectively, and an acceptable image frame rate for portraying motion in cinemas had already been established at 24fps. At the time it was not practical to design a TV system that operated at either of the mains electricity rates at the receiver end and, in any case, the large amount of broadcast bandwidth required would have been uneconomical. Rates of 25fps and 30fps would reduce the broadcast space needed to within acceptable bounds, but updating images at those rates on a phosphor-type CRT display would produce an unacceptable level of flickering.
The solution the engineers came up with was to split each TV frame into two parts, or "fields", each of which would contain half the scan lines from each frame. The first field - referred to as either the "top" or "odd" field - would contain all the odd numbered scan lines, while the "bottom" or "even" field would contain all the even numbered scan lines. The electron gun in the TV's CRT would scan through all the odd rows from top to bottom, then start again with the even rows, each pass taking 1/50th or 1/60th of a second in Europe or the USA respectively.
This interlaced scanning system proved to be an effective compromise. In Europe it amounted to an effective update frequency of 50Hz, reducing the perception of flicker to within acceptable bounds whilst at the same time using no more broadcast bandwidth than a 25fps (50 fields per second) system. The reason it works so well is due to a combination of the psycho-visual characteristics of the Human Visual System (HVS) and the properties of the phosphors used in a CRT display. Flicker perceptibility depends on many factors including image size, brightness, colour, viewing angle and background illumination and, in general, the HVS is far less sensitive to flickering detail than to large area flicker. The effect of this, in combination with the fact that phosphors continue to glow for a period of time after they have been excited by an electron beam, is what creates the illusion of the two fields of each TV frame merging together to create the appearance of complete frames.
There was a time when whether or not a PC's CRT monitor was interlaced was as important an aspect of its specification as its refresh rate. However, for a number of years now these displays have been designed for high resolution computer graphics and text and with shorter persistence phosphors, making operation in interlaced mode completely impractical. Moreover, by the new millennium many alternative display technologies had emerged - LCD, PDP, LEP, DLP etc. - that were wholly incompatible with the concept of interlaced video signals.
Dot pitch
The maximum resolution of a monitor is dependent on more than just its highest scanning frequencies. Another factor is dot pitch, the physical distance between adjacent phosphor dots of the same colour on the inner surface of the CRT. Typically, this is between 0.22mm and 0.3mm. The smaller the number, the finer and better resolved the detail. However, trying to supply too many pixels to a monitor without a sufficient dot pitch to cope causes very fine details, such as the writing beneath icons, to appear blurred.
There's more than one way to group three blobs of coloured phosphor - indeed, there's no reason why they should even be circular blobs. A number of different schemes are currently in use, and care needs to be taken in comparing the dot pitch specification of the different types. With standard dot masks, the dot pitch is the centre-to-centre distance between two nearest-neighbour phosphor dots of the same colour, which is measured along a diagonal. The horizontal distance between the dots is 0.866 times the dot pitch. For masks which use stripes rather than dots, the pitch equals the horizontal distance between two same-coloured stripes. This means that the dot pitch on a standard shadow mask CRT should be multiplied by 0.866 before it is compared with the dot pitch of these other types of monitor.
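The 0.866 factor is simply the geometry of the triad layout (the cosine of 30 degrees). A tiny worked example of the conversion, with an illustrative function name:

Code:

# The 0.866 conversion mentioned above: the horizontal dot spacing of a
# shadow-mask triad is cos(30 degrees) = 0.866 times its diagonal dot pitch,
# which is the figure to compare against an aperture grill's stripe pitch.

def comparable_horizontal_pitch(diagonal_dot_pitch_mm):
    return diagonal_dot_pitch_mm * 0.866

print(comparable_horizontal_pitch(0.25))   # roughly 0.217mm, vs a quoted 0.25mm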
Some monitor manufacturers publish a mask pitch instead of a dot pitch. However, since the mask is about 1/2in behind the phosphor surface of the screen, a 0.21mm mask pitch might actually translate into a 0.22mm phosphor dot pitch by the time the beam strikes the screen. Also, because CRT tubes are not completely flat, the electron beam tends to spread out into an oval shape as it reaches the edges of the tube. This has led to some manufacturers specifying two dot pitch measurements, one for the centre of the screen and one for its outermost edges.
Overall, the difficulty in directly comparing the dot pitch values of different displays means that other factors - such as convergence, video bandwidth and focus - are often a better basis for comparing monitors than dot pitch.
Dot trio
The vast majority of computer monitors use circular blobs of phosphor and arrange them in triangular formation. These groups are known as "triads" and the arrangement is a dot trio design. The shadow mask is located directly behind the phosphor layer - each perforation corresponding with phosphor dot trios - and assists in masking unnecessary electrons, avoiding overspill and resultant blurring of the final picture.
Because the distance between the source and the destination of the electron stream is smaller towards the middle of the screen than at the edges, the corresponding area of the shadow mask gets hotter. To prevent it from distorting - and redirecting the electrons incorrectly - manufacturers typically construct it from Invar, an alloy with a very low coefficient of expansion.

This is all very well, except that the shadow mask used to avoid overspill occupies a large percentage of the screen area. Where there are portions of mask, there's no phosphor to glow and less light means a duller image.
The brightness of an image matters most for full-motion video, and with multimedia becoming an increasingly important market consideration a number of improvements have been made to make dot-trio mask designs brighter. Most approaches to minimising glare involve filters that also affect brightness. The newer schemes filter out the glare without affecting brightness as much.
Toshiba's Microfilter CRT places a separate filter over each phosphor dot and makes it possible to use a different colour filter for each colour dot. Filters over the red dots, for example, let red light shine through, but they also absorb other colours from ambient light shining on screen - colours that would otherwise reflect off as glare. The result is brighter, purer colours with less glare. Other companies are offering similar improvements. Panasonic's Crystal Vision CRTs use a technology called dye-encapsulated phosphor, which wraps each phosphor particle in its own filter and ViewSonic offers an equivalent capability as part of its new SuperClear screens.

Aperture Grill
In the 1960s, Sony developed an alternative tube technology known as Trinitron. It combined the three separate electron guns into one device: Sony refers to this as a Pan Focus gun. Most interesting of all, Trinitron tubes were made from sections of a cylinder, vertically flat and horizontally curved, as opposed to conventional tubes using sections of a sphere which are curved in both axes. Rather than grouping dots of red, green and blue phosphor in triads, Trinitron tubes lay their coloured phosphors down in uninterrupted vertical stripes.


Consequently, rather than use a solid perforated sheet, Trinitron tubes use masks which separate the entire stripes instead of each dot - and Sony calls this the "aperture grill". This replaces the shadow mask with a series of narrow alloy strips that run vertically across the inside of the tube. Their equivalent measure to a shadow mask's dot pitch is known as "stripe pitch". Rather than using conventional phosphor dot triplets, aperture grill-based tubes have phosphor lines with no horizontal breaks, and so rely on the accuracy of the electron beam to define the top and bottom edges of a pixel. Since less of the screen area is occupied by the mask and the phosphor is uninterrupted vertically, more of it can glow, resulting in a brighter, more vibrant display.
Aperture grill monitors also confer advantages with respect to the sharpness of an image's focus. Since more light can pass through an aperture grill than a shadow mask, it means that bright images can be displayed with less current. The more current needed to write an image to the screen, the thicker the electron beam becomes. The consequence of this is that the electron beam illuminates areas around the spot for which it is intended, causing the edges of the intended image to blur.
Because aperture grill strips are very narrow, there's a possibility that they might move, due to expansion or vibration. In an attempt to eliminate this, horizontal damper wires are fitted to increase stability. This reduces the chances of aperture grill misalignment, which can cause vertical streaking and blurring. The downside is that because the damper wires obstruct the flow of electrons to the phosphors, they are just visible upon close inspection. Trinitron tubes below 17in or so get away with one wire, while the larger models require two. A further downside is mechanical instability. A tap on the side of a Trinitron monitor can cause the image to wobble noticeably for a moment. This is understandable given that the aperture grill's fine vertical wires are held steady in only one or two places horizontally.
Mitsubishi followed Sony's lead with the design of its similar Diamondtron tube.


Slotted mask
Capitalising on the advantages of both the shadow mask and aperture grill approaches, NEC has developed a hybrid mask type which uses a slot-mask design borrowed from a TV monitor technology originated in the late 1970s by RCA and Thorn. Virtually all non-Trinitron TV sets use elliptically-shaped phosphors grouped vertically and separated by a slotted mask.
In order to allow a greater amount of electrons through the shadow mask, the standard round perforations are replaced with vertically-aligned slots. The design of the trios is also different, and features rectilinear phosphors that are arranged to make best use of the increased electron throughput.





The slotted mask design is mechanically stable due to the criss-cross of horizontal mask sections but exposes more phosphor than a conventional dot-trio design. The result is not quite as bright as with an aperture grill but much more stable and still brighter than dot-trio. It is unique to NEC, and the company capitalised on the design's improved stability in early 1996 when it fitted the first ChromaClear monitors to come to market with speakers and microphones and claimed them to be "the new multimedia standard".
Enhanced Dot Pitch
Developed by Hitachi, EDP is the newest mask technology, coming to market in late 1997. This takes a slightly different approach, concentrating more on the phosphor implementation than the shadow mask or aperture grill.



On a typical shadow mask CRT, the phosphor trios are more or less arranged equilaterally, creating triangular groups that are distributed evenly across the inside surface of the tube. Hitachi has reduced the distance between the phosphor dots on the horizontal, creating a dot trio that's more akin to an isosceles triangle. To avoid leaving gaps between the trios, which might reduce the advantages of this arrangement, the dots themselves are elongated, so are oval rather than round.
The main advantage of the EDP design is most noticeable in the representation of fine vertical lines. In conventional CRTs, a line drawn from the top of the screen to the bottom will sometimes "zigzag" from one dot trio to the next group below, and then back to the one below that. Bringing adjacent horizontal dots closer together reduces this and has an effect on the clarity of all images.
Electron beam
If the electron beam is not lined up correctly with the shadow mask or aperture grille holes, the beam is prevented from passing through to the phosphors, thereby causing a reduction in pixel illumination. As the beam scans it may sometimes regain alignment and so succeed in passing through the mask/grille to reach the phosphors. The result is that the brightness rises and falls, producing a wavelike pattern on the screen, referred to as moiré. Moiré patterns are often most visible when a screen background is set to a pattern of dots, for example a grey background consisting of alternating black and white dots. The phenomenon is actually most common in monitors with improved focus techniques, as monitors with poor focus have a wider electron beam and therefore have more chance of hitting the target phosphors instead of the mask/grille. In the past the only way to eliminate moiré effects was to defocus the beam, but a number of monitor manufacturers have now developed techniques to increase the beam size without degrading the focus.




A large part of the efforts being directed at improving the CRT's image are aimed at creating a beam with less spread, so that the beam can address smaller individual dots on the screen more accurately - that is, without impinging on adjacent dots. This can be achieved by forcing the beam through smaller holes in the electron gun's grid assembly - but at the cost of decreasing the image's brightness. Of course, this can be countered by driving the cathode with a higher current so as to liberate more electrons. However, doing this causes the barium that is the source of the electrons to be consumed more quickly and so reduces the life of the cathode.
Sony's answer to this dilemma is SAGIC, or small aperture G1 with impregnated cathode. This comprises a cathode impregnated with tungsten and barium material whose shape and quantity has been varied so as to avoid the high current required for a denser electron beam consuming the cathode. This arrangement allows the first element in the grid - known as G1 - to be made with a much smaller aperture, thus reducing the diameter of the beam that passes through the rest of the CRT. By early 1999 the technology had helped Sony reduce its aperture grill pitch to 0.22mm - down from the 0.25mm of conventional Trinitron tubes - the tighter beam and narrower aperture grill working together to provide a noticeably sharper image.




In addition to dot size, control over dot shape is also essential, and the electron gun must correct errors that occur naturally due to the geometry of the tube for optimal performance. The problem arises because the angle at which the electron beam strikes the screen must necessarily vary across the screen's width and height. For dots in the centre of the screen, the beam comes straight through the electron gun and, undeflected by the yoke, strikes the phosphor at a perfect 90 degrees. However, as the beam scans closer to the edges of the screen, it strikes the phosphor at an angle, with the result that the area illuminated becomes increasingly elliptical as the angle changes. The effect is even worse in the corners - especially with screens which aren't perfectly flat - when the dot grows in both directions. If image quality isn't to suffer, it's essential that the monitor's electronics compensate for the problem.
By using additional components in the electron gun, it's possible to alter the shape of the beam itself in sync with the sweeping of the beam across the screen. In effect, the beam is made elliptical in the opposite direction so that the final dot shape on the screen remains circular.
Controls
Not so long ago, advanced controls were found only on high-end monitors. Now, even budget models boast a wealth of image correction controls. This is just as well since the image fed through to the monitor by the graphics card can be subject to a number of distortions. An image can sometimes be too far to one side or appear too high up on the screen or need to be made wider, or taller. These adjustments can be made using the horizontal or vertical sizing and positioning controls. The most common of the "geometric controls" is barrel or pincushion, which corrects the image from dipping in or bowing out at the edges. Trapezium correction can straighten sides which slope in together, or out from each other. Parallelogram corrections will prevent the image from leaning to one side, while some models even allow the entire image to be rotated.

On-screen controls are also making more common appearances these days. These are superimposed graphics which appear on the screen (obscuring parts of the main image), usually indicating what is about to be adjusted. It's much the same as a TV set superimposing, say, a volume bar whilst the sound is being adjusted. There's no standard for on-screen graphics, so consequently there's a huge range of icons, bars, colours and sizes in use. Some are much better than others. The whole point, however, is to make adjustments as intuitive, as quick and as easy as possible.
Design
By the beginning of 1998 15in monitors were gradually slipping to bargain-basement status, and the 17in size, an excellent choice for working at 1024x768 (XGA) resolution, was moving into the slot reserved for mainstream desktops. At the high end, a few 21in monitors were offering resolutions as high as 1800x1440.
In late 1997 a number of 19in monitors appeared on the market, with prices and physical sizes close to those of high-end 17in models, offering a cost-effective compromise for high resolution. A 19in CRT is a good choice for 1280x1024 (SXGA) - the minimum resolution needed for serious graphics or DTP, and the power user's minimum for business applications. It's also a practical minimum size for displaying at 1600x1200 (UXGA), although bigger monitors are preferable for that resolution.
One of the main problems with CRTs is their bulk. The larger the viewable area gets, the more the CRT's depth increases. The long-standing rule of thumb was that a monitor's depth matched its diagonal CRT size. CRT makers had been trying to reduce the depth by increasing the angle of deflection within the tube. However, the more the beam is deflected, the harder it is to maintain focus. Radical measures deployed included putting the deflection coils inside the glass CRT; they normally sit around the CRT's neck.
The result of this development effort is the so-called "short-neck" CRT. In early 1998 17in short-neck monitors measuring around 15in deep reached the market. The downside was that the new design had a tendency to degrade images, especially at a screen's corners and edges. This was addressed by improvements in the technology the following year with the introduction of tube designs employing a 100-degree deflection tube - in place of conventional 90-degree tubes - and narrower electron gun assemblies. The consequent increase in the beam deflection angle allowed the gun to be placed closer to the screen without the penalty of any image distortion. The result was a new rule of thumb that short-necked monitors should be about two inches shorter than their diagonal size.
The shape of a monitor's screen is another important factor. The three most common CRT shapes are spherical (a section of a sphere, used in the oldest and most inexpensive monitors), cylindrical (a section of a cylinder, used in aperture-grille CRTs), and flat square (a section of a sphere large enough to make the screen nearly flat).
Flat square tube (FST) is an industry standard term used since 1997 to describe shadow mask monitors that have minimal curvature (but still a curvature) of the monitor tube. They also have a larger display area - closer to the tube size - and nearly square corners. There's a design penalty for a flatter, squarer screen, as the less of a spherical section the screen surface is, the harder it is to control the geometry and focus of the displayed images. Modern monitors use microprocessors to apply techniques like dynamic focusing to compensate for the flatter screen.
FSTs require the use of a special alloy, Invar, for the shadow mask. The flatter screen means that the shortest beam path is in the centre of the screen. This is the point where the beam energy tends to concentrate, and consequently the shadow mask gets hotter here than at the corners and sides of the display. Uneven heating across the mask can make it expand and eventually warp and buckle. Any distortion in the mask means that its holes no longer register with the dot triplets on the screen and image quality will be reduced. Invar alloy is used in the best monitors as it has a low coefficient of expansion.
By 2000, monitors that used alternative mask technologies were available with completely flat screens. The principal advantage of a truly flat surface is minimal glare and a more realistic-looking image. However, these benefits are gained at the cost of accentuating the problem of the electron beam striking the screen's edges at an angle and so becoming elliptical. Furthermore, the use of perfectly flat glass gives rise to an optical illusion caused by the refraction of light, resulting in the image looking concave. As a result, many tube manufacturers employ a double-layer glass surface, the inner surface of which introduces a curve that counters the concave appearance. The downside of this is that it reduces brightness - and sometimes contrast - and can give rise to warping at the screen's corners.
Sound facilities have become commonplace on many PCs, requiring additional loudspeakers and possibly a microphone too. The "multimedia monitor" avoids lots of separate boxes and cables by building in loudspeakers of some sort, maybe a microphone and in some cases a camera for video conferencing. At the back of these monitors are connections for a sound card. However, the quality of these additional components is often questionable, adding only a few pounds to the cost of manufacture. For high quality sound nothing beats decent external speakers which can also be properly magnetically shielded.
Another development which has become increasingly available since the launch of Microsoft's Windows 98, which brought with it the necessary driver software, is USB-compliant CRTs. The Universal Serial Bus applies to monitors in two ways. First, the monitor itself can use a USB connection to allow screen settings to be controlled with software. Second, a USB hub can be added to a monitor (normally in its base) for use as a convenient place to plug in USB devices such as keyboards and mice. The hub provides the connection to the PC.
Digital CRTs
Nearly 99 percent of all video displays sold in 1998 were connected using an analogue VGA interface, an ageing technology that represents the minimum standard for a PC display. In fact, today VGA represents an impediment to the adoption of new flat panel display technologies, largely because of the added cost for these systems to support the analogue interface. Another fundamental problem is the degradation of image quality that occurs when a digital signal is converted to analogue, and then back to digital, before driving an analogue-input LCD display.
The autumn of 1998 saw the formation of Digital Display Working Group (DDWG) - including computer industry leaders Intel, Compaq, Fujitsu, Hewlett-Packard, IBM, NEC and Silicon Image - with the objective of delivering a robust, comprehensive and extensible specification of the interface between digital displays and high-performance PCs. In the spring of 1999 the DDWG approved the first version of the Digital Visual Interface (DVI) specification based on Silicon Image's PanelLink technology, using a Transition Minimised Differential Signaling (TMDS) digital signal protocol.
Whilst primarily of benefit to flat panel displays - which can now operate in a standardised all-digital environment without the need to perform an analogue-to-digital conversion on the signals from the graphics card driving the display device - the DVI specification potentially has ramifications for conventional CRT monitors too.
Most complaints of poor image quality on CRTs can be traced to incompatible graphics controllers on the motherboard or graphics card. In today's cost-driven market, marginal signal quality is not all that uncommon. The incorporation of DVI with a traditional analogue CRT monitor will allow monitors to be designed to receive digital signals, with the necessary digital-to-analogue conversion being carried out within the monitor itself. This will give manufacturers added control over final image quality, making differentiation based on image quality much more of a factor than it has been hitherto. However, the application of DVI with CRT monitors is not all plain sailing.

One of the drawbacks is that since it was originally designed for use with digital flat panels, DVI has a comparatively low bandwidth of 165MHz. This means that a working resolution of 1280x1024 could be supported at up to an 85Hz refresh rate. Although this isn't a problem for LCD monitors, it's a serious issue for CRT displays. The DVI specification supports a maximum resolution of 1600x1200 at a refresh rate of only 60Hz - totally unrealistic in a world of ever increasing graphics card performance and ever bigger and cheaper CRT monitors.
The solution is the provision of additional bandwidth overhead for horizontal and vertical retrace intervals - facilitated through the use of two TMDS links. With such an arrangement digital CRTs compliant with VESA's Generalised Timing Formula (GTF) would be capable of easily supporting resolutions exceeding 2.75 million pixels at an 85Hz refresh rate. However, implementation was to prove to be difficult, with noise, reflections, skew and drive limitations within DVI chips making it difficult to achieve the theoretical potential of a dual DVI link. In the event it was not until 2002 that the first dual-link DVI graphics cards began to emerge.
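As a rough sanity check of these figures, the pixel clock a display mode needs can be estimated by multiplying the active pixel count by the refresh rate and an allowance for the horizontal and vertical blanking intervals. The short Python sketch below uses an assumed 25% blanking overhead rather than the exact GTF timings, so the numbers are approximate, but it shows why 1280x1024 at 85Hz and 1600x1200 at 60Hz fit within a single 165MHz TMDS link while a 2.75-million-pixel mode at 85Hz needs a dual link.

Code:

# Rough pixel-clock estimate: active pixels x refresh rate x blanking allowance.
# The 1.25 blanking factor is an assumption; real GTF/CVT timings vary by mode.
def pixel_clock_mhz(width, height, refresh_hz, blanking_factor=1.25):
    return width * height * refresh_hz * blanking_factor / 1e6

SINGLE_LINK_MHZ = 165.0   # one TMDS link
DUAL_LINK_MHZ = 330.0     # two TMDS links combined

for w, h, hz in [(1280, 1024, 85), (1600, 1200, 60), (1920, 1440, 85)]:
    clk = pixel_clock_mhz(w, h, hz)
    link = ("single link" if clk <= SINGLE_LINK_MHZ
            else "dual link" if clk <= DUAL_LINK_MHZ
            else "beyond dual link")
    print(f"{w}x{h} @ {hz}Hz needs roughly {clk:.0f}MHz -> {link}")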
Another problem is that digitally scaling a monitor's refresh rate is more expensive than using a traditional analogue multisync design. This could lead to digital CRTs being more costly than their analogue counterparts. An alternative is for digital CRTs to have a fixed frequency and resolution, like an LCD display, thereby eliminating the need for multisync technology.
DVI anticipates that in the future screen refresh functionality will become part of the display itself. New data will need to be sent to the display only when changes to the data need to be displayed. With a selective refresh interface, DVI can maintain the high refresh rates required to keep a CRT display ergonomically pleasing while avoiding an artificially high data rate between the graphics controller and the display. Of course, a monitor would have to employ frame buffer memory to enable this feature.
The first DVI-compliant controller designed specifically for implementation in digital CRT monitors came to market during 2001, and by the end of that year DVI had become firmly established and sales of flat panels had surged to such an extent that prices fell dramatically. However, it remained unclear how relevant DVI was going to be for conventional monitors, with some convinced that DVI was the future of CRT technology and others remaining sceptical.




LightFrame technology
CRT monitors and TVs have each been optimised for the applications they've traditionally been used for. The former excel at displaying close-up, high-resolution content such as text, while the larger dot pitch and higher light output of lower-resolution TV screens make them better suited to rendering low-resolution photographic content such as film, intended for viewing at a distance.
TVs can use an extremely high beam current to produce vivid images, and take advantage of a phenomenon called "pixel blooming", in which adjacent pixels illuminate one another, thereby achieving a higher level of brightness. Another TV technique is "peaking", which artificially sharpens a video signal's light/dark transitions.
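Peaking, in particular, amounts to a high-frequency boost applied to the luminance signal so that dark-to-light transitions are steepened, with a slight undershoot and overshoot either side of an edge. The Python fragment below is a purely illustrative sketch of the idea - the kernel and gain are arbitrary choices, not any manufacturer's actual circuit.

Code:

# Illustrative 'peaking' filter: emphasise light/dark transitions in a line of
# 8-bit luma samples by adding back a scaled high-pass component.
def peak(line, gain=0.5):
    out = []
    for i in range(len(line)):
        left = line[max(i - 1, 0)]
        right = line[min(i + 1, len(line) - 1)]
        highpass = line[i] - (left + right) / 2.0   # local detail around sample i
        out.append(max(0, min(255, line[i] + gain * highpass)))
    return out

# A soft dark-to-light ramp acquires visible under/overshoot, i.e. a crisper edge.
print(peak([16, 16, 60, 120, 180, 235, 235]))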
The problem is that neither of these techniques is appropriate on high-resolution PC monitors, as they would result in a performance degradation in traditional computer applications - such as word processing and spreadsheets. Consequently, PC users have had to live with TV-quality applications often appearing flat, dull and lifeless when displayed on a CRT monitor. Of course, the rise of home video editing, DVD playback on the desktop and even video content on the Web means this deficiency has become increasingly unacceptable.
Philips' answer to the problem came in the shape of their unique and innovative LightFrame technology, first revealed in late 2000. In essence, LightFrame seeks to simulate the output performance of a TV screen on a PC monitor, theoretically delivering the best of both worlds.
It comprises a software application and an integrated circuit embedded in the monitor which work together to selectively raise brightness and sharpness. The software transmits the co-ordinates of the selected screen area to the monitor by writing instructions into the last line of the video signal. A proprietary integrated circuit in the monitor reads these instructions, boosts sharpness and brightness in the selected area, and then blanks out the instruction line so that it never appears on screen. Non-selected portions of the screen are unaffected by the process.
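Philips has not published the encoding it uses, so the following Python fragment is hypothetical - purely an illustration of the general idea of packing a highlight rectangle's co-ordinates into the pixel values of the final scanline, with a marker that lets the monitor's circuitry recognise and then blank the data.

Code:

# Hypothetical illustration only: pack a highlight rectangle's co-ordinates into
# the last scanline as byte values. Philips' real LightFrame encoding is proprietary.
MARKER = [0x4C, 0x46]          # made-up 'LF' marker for the monitor to detect

def encode_last_line(x, y, w, h, line_width=1280):
    payload = MARKER + [
        x >> 8, x & 0xFF, y >> 8, y & 0xFF,
        w >> 8, w & 0xFF, h >> 8, h & 0xFF,
    ]
    # Pad the rest of the line with black so the data line is effectively invisible.
    return payload + [0] * (line_width - len(payload))

print(encode_last_line(200, 150, 640, 480)[:10])   # marker plus co-ordinate bytes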
Extensive testing has confirmed that LightFrame does not adversely affect monitor life. Modern monitors have improved phosphors, designed for high light output. Though the peak brightness of a highlighted area is strongly increased, the average brightness - a determining factor for cathode deterioration - is not normally increased. In any case, LightFrame employs a special Automatic Beam Limiter (ABL) circuit to keep a monitor's maximum average brightness within acceptable levels.
A year after the technology was first introduced, LightFrame 2 was launched, offering automatic detection and enhancement of applications that benefit from the technology. This was followed in the summer of 2002 by the announcement of LightFrame 3, boasting the ability to automatically enhance up to 16 images simultaneously in Microsoft's Internet Explorer and up to eight when using photo-viewing applications. Interestingly, Philips intends to migrate LightFrame 3 to its LCD monitors too.
LightFrame works by identifying a rectangular screen area for highlighting. On occasions, certain backgrounds or borders prevent a photo or video from being detected automatically. In such cases it's necessary to highlight it manually. This is accomplished by dragging a rectangle to encompass the selected area, or, to select an entire window, by a single click of the mouse.


Safety standards
In the late 1980s concern over possible health issues related to monitor use led Swedac, the Swedish testing authority, to make recommendations concerning monitor ergonomics and emissions. The resulting standard was called MPR1. This was amended in 1990 to the internationally adopted MPR2 standard, which called for the reduction of electrostatic emissions with a conductive coating on the monitor screen.
In 1992 a further standard, TCO92, was introduced by the Swedish Confederation of Professional Employees (TCO). The emission levels in TCO92 were based on what monitor manufacturers thought was possible rather than on any particular safety level, while MPR2 had been based on what they could achieve without a significant cost increase. As well as setting stiffer emission limits, TCO92 required monitors to meet the international EN60950 standard for electrical and fire safety. Subsequent TCO standards were introduced in 1995 and again in 1999.
Apart from Sweden, the main impetus for safety standards has come from the US. In 1993, VESA initiated its DPMS standard, or Display Power Management Signalling. A DPMS compliant graphics card enables the monitor to achieve four states: on, standby, suspend and off, at user-defined periods. Suspend mode must draw less than 8W so the CRT, its heater and its electron gun are likely to be shut off. Standby takes the power consumption down to below about 25W, with the CRT heater usually left on for faster resuscitation.
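On a Linux/X11 desktop the same standby/suspend/off timeouts can be driven from software via the standard xset utility; the snippet below is a minimal sketch, assuming an X session is running and xset is on the path.

Code:

# Minimal sketch: enable DPMS and set standby/suspend/off timeouts in seconds.
import subprocess

subprocess.run(["xset", "+dpms"], check=True)
subprocess.run(["xset", "dpms", "600", "900", "1200"], check=True)
# -> standby after 10 minutes, suspend after 15, off after 20
print(subprocess.run(["xset", "q"], capture_output=True, text=True).stdout)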
VESA has also produced several standards for plug-and-play monitors. Known under the banner of DDC (Display Data Channel), they should in theory allow your system to figure out and select the ideal settings, but in practice this very much depends on the combination of hardware.
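What actually travels over the DDC wires is a small block of EDID data describing the monitor. The sketch below decodes a few of the fixed fields of a 128-byte EDID 1.x block - the header, manufacturer ID, product code and image size - and is only a partial parser for illustration; the sysfs path shown in the comment is typical of Linux but varies by system.

Code:

# Minimal sketch: decode a few fields from a raw 128-byte EDID block,
# the data a DDC-capable monitor reports to the host.
def decode_edid(edid):
    assert edid[:8] == bytes([0x00, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0x00]), "bad header"
    vid = (edid[8] << 8) | edid[9]          # three packed 5-bit letters, 'A' = 1
    maker = "".join(chr(((vid >> s) & 0x1F) + ord("A") - 1) for s in (10, 5, 0))
    return {
        "manufacturer": maker,
        "product_code": edid[10] | (edid[11] << 8),
        "edid_version": f"{edid[18]}.{edid[19]}",
        "max_image_size_cm": (edid[21], edid[22]),   # horizontal x vertical
    }

# On many Linux systems a real dump can be read from sysfs, e.g.:
#   with open("/sys/class/drm/card0-VGA-1/edid", "rb") as f:
#       print(decode_edid(f.read()))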
EPA Energy Star is a power saving standard, mandatory in the US and widely adopted in Europe, requiring a mains power saving mode drawing less than 30W. Energy Star was initiated in 1993 but really took hold in 1995 when the US Government, the world's largest PC purchaser, adopted a policy to buy only Energy Star compliant products.
Other relevant standards include:
• ISO 9241 part 3, the international standard for monitor ergonomics
• EN60950, the European standard for the electrical safety of IT equipment
• the German TUV/EG mark, which means a monitor has been tested to both of the above standards, as well as the German standard for basic ergonomics (ZH/618) and MPR2 emission levels.
TCO standards
In 1995, TCO modified the requirements for visual ergonomics and added a range of conditions covering environmental issues, including the use of certain chemicals in manufacturing and the recycling of components. The most stringent standard so far, and the result of collaboration between TCO (The Swedish Confederation of Professional Employees), Naturskyddsforeningen (The Swedish Society for Nature Conservation) and NUTEK (The National Board for Industry and Technical Development in Sweden), TCO95 became the first global environmental labelling scheme. It was more comprehensive than the German Blue Angel label and more exacting than the ISO international standards. The display, system unit and keyboard can be certified separately, and the manufacturer's environmental policy is addressed at every stage from production to disposal. Over and above TCO92, the product may not contain cadmium or lead; the plastic housing must be of biodegradable material and free of brominated flame retardants; and the production process must avoid the use of CFCs (freons) and chlorinated solvents. The emission and power-saving requirements remain unaltered, although picture performance and luminance uniformity have been addressed.
TCO standards also require that screens be treated with conductive coatings to reduce the static charge on the monitor. Although static electricity generated on the front surface of a CRT has been alleged to be a factor in a number of health risks, this has yet to be confirmed.
TCO99 is the latest iteration of the standard. It doesn't change the emission levels from those in the previous versions, but it does alter the testing procedures to close certain loopholes. The new approval mainly concentrates on improving the visual ergonomics requirements, including better luminance uniformity and contrast. There is also a new requirement that screen colour temperature adjustment, when present, should be accurate.
To reduce eye fatigue caused by image flicker, the minimum required refresh rate is increased to 85Hz for displays of less than 20in (with 100Hz recommended) and to a minimum of 75Hz for 20in or greater. Although harder to control, screen contrast in the office environment is also addressed. To help manufacturers achieve the right balance between anti-reflection treatment and the amount of light reaching the user, a minimum diffuse reflectance level of 20% is specified.
More exacting attention is paid to power saving and environmental impact, with TCO99-certified monitors saving up to 50% more energy than TCO95 displays. There's a different requirement for monitors with USB hubs, which can suspend at 15W and restart in three seconds; non-USB monitors must suspend at 5W. Manufacturing requirements are more stringent too. No chlorinated solvents may be used and product vendors must provide corporate and domestic customers with a recycling path using a competent recycling body.
Ergonomics
Whilst the quality of the monitor and graphics card - and in particular the refresh rate at which the combination can operate - is of crucial importance in ensuring that users who spend long hours in front of a CRT monitor can do so in as much comfort as possible, it is not the only factor that should be considered. Physical positioning is also important, and expert advice in this area has recently been revised. Previously it was thought that the centre of the monitor should be at eye level. It is now believed that, to reduce fatigue as much as possible, the top of the screen should be at eye level, with the screen between 0 and 30 degrees below the horizontal and tilted slightly upwards. Achieving this arrangement with furniture designed to the previous rules is not easy, however, without causing other problems with seating position and, for example, the comfortable positioning of keyboard and mouse. It is also important to sit directly in front of the monitor rather than to one side, and to locate the screen so as to avoid reflections and glare from external light sources.