CRT MONITORS

In an industry in which development is so rapid, it is somewhat surprising that the technology behind monitors and televisions is over a hundred years old. Whilst confusion surrounds the precise origins of the cathode-ray tube, or CRT, it's generally agreed that German scientist Karl Ferdinand Braun developed the first controllable CRT in 1897, when he added alternating voltages to the device to enable it to send controlled streams of electrons from one end of the tube to the other. However, it wasn't until the late 1940s that CRTs were used in the first television sets. Although the CRTs found in modern-day monitors have undergone modifications to improve picture quality, they still follow the same basic principles.
The demise of the CRT monitor as a desktop PC peripheral had been long predicted, and not without good reason:
• they're heavy and bulky
• they're power hungry - typically 150W for a 17in monitor
• their high-voltage electric field, high- and low-frequency magnetic fields and X-ray radiation have in the past proven to be harmful to humans
• the scanning technology they employ makes flickering unavoidable, causing eye strain and fatigue
• their susceptibility to electro-magnetic fields makes them vulnerable in military environments
• their surface is often either spherical or cylindrical, with the result that straight lines do not appear straight at the edges.
Whilst competing technologies - such as LCDs and PDPs - had established themselves in specialist areas, there are several good reasons why the CRT was able to maintain its dominance in the PC monitor market into the new millennium:
• phosphors have been developed over a long period of time, to the point where they offer excellent colour saturation at the very small particle size required by high-resolution displays
• the fact that phosphors emit light in all directions means that viewing angles of close to 180 degrees are possible
• since an electron current can be focused to a small spot, CRTs can deliver peak luminances as high as 1000 cd/m2 (or 1000 nits)
• CRTs use a simple and mature technology and can therefore be manufactured inexpensively in many industrialised countries
• whilst the gap is getting smaller all the time, they remain significantly cheaper than alternative display technologies.
However, by 2001 the writing was clearly on the wall and the CRT's long period of dominance appeared finally to be coming to an end. In the summer of that year Philips Electronics - the world's largest CRT manufacturer - had agreed to merge its business with that of rival LG Electronics, Apple had begun shipping all its systems with LCD monitors and Hitachi had closed its $500m-a-year CRT operation, proclaiming that "there are no prospects for growth of the monitor CRT market". Having peaked at a high of approaching $20 billion in 1999, revenues from CRT monitor sales were forecast to plunge to about half that figure by 2007.
Anatomy

Most CRT monitors have case depths about as deep as the screen is wide, raising the question "what is it that's inside a monitor that requires as much space as a PC's system case itself?"
A CRT is essentially an oddly-shaped, sealed glass bottle with no air inside. It begins with a slim neck and tapers outward until it forms a large base. The base is the monitor's "screen" and is coated on the inside with a matrix of thousands of tiny phosphor dots. Phosphors are chemicals which emit light when excited by a stream of electrons, and different phosphors emit different coloured light. The dots are arranged in groups of three: one red, one green and one blue. Each group of three phosphors makes up a single pixel.
In the "bottle neck" of the CRT is the electron gun, which is composed of a cathode, heat source and focusing elements. Colour monitors have three separate electron guns, one for each phosphor colour. Images are created when electrons, fired from the electron guns, converge to strike their respective phosphor blobs.
Convergence is the ability of the three electron beams to come together at a single spot on the surface of the CRT. Precise convergence is necessary as CRT displays work on the principle of additive coloration, whereby combinations of different intensities of red, green and blue phosphors create the illusion of millions of colours. When the three primary colours are added in equal amounts they form a white spot, while the absence of any colour creates a black spot. Misconvergence shows up as shadows which appear around text and graphic images.
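As an illustration of additive coloration, the following Python sketch (the intensity scale and function name are purely illustrative) shows how equal amounts of the three primaries produce white, and their absence black:

    def additive_mix(red, green, blue):
        # Each intensity is a fraction between 0.0 (beam off) and 1.0
        # (full beam current); the result is an 8-bit RGB colour value.
        return tuple(round(255 * level) for level in (red, green, blue))

    print(additive_mix(1.0, 1.0, 1.0))  # (255, 255, 255) - a white spot
    print(additive_mix(0.0, 0.0, 0.0))  # (0, 0, 0) - a black spot
    print(additive_mix(1.0, 1.0, 0.0))  # (255, 255, 0) - yellow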
The electron gun radiates electrons when the heater is hot enough to liberate electrons (negatively charged) from the cathode. In order for the electrons to reach the phosphor, they have first to pass through the monitor's focusing elements. While the radiated electron beam will be circular in the middle of the screen, it has a tendency to become elliptical as it is deflected towards the screen's outer areas, creating a distorted image in a process referred to as astigmatism. The focusing elements are set up in such a way as to initially focus the electron flow into a very thin beam and then - having corrected for astigmatism - direct it in a specific direction. This is how the electron beam lights up a specific phosphor dot, the electrons being drawn toward the phosphor dots by a powerful, positively charged anode located near the screen.
The deflection yoke around the neck of the CRT creates a magnetic field which controls the direction of the electron beams, guiding them to strike the proper position on the screen. This starts in the top left corner (as viewed from the front) and flashes on and off as it moves across the row, or "raster", from left to right. When it reaches the edge of the screen, it stops and moves down to the next line. Its motion from right to left is called horizontal retrace and is timed to coincide with the horizontal blanking interval so that the retrace lines will be invisible. The beam repeats this process until all lines on the screen are traced, at which point it moves from the bottom to the top of the screen - during the vertical retrace interval - ready to display the next screen image.
Since the surface of a CRT is not truly spherical, the beams which travel to the centre of the display traverse a shorter path, while those that travel to the corners of the display traverse a comparatively longer one. This means that the period of time beams are subjected to magnetic deflection varies according to their direction. To compensate, CRTs have a deflection circuit which dynamically varies the deflection current depending on the position at which the electron beam should strike the CRT surface.
Before the electron beam strikes the phosphor dots, it travels through a perforated sheet located directly in front of the phosphor. Originally known as a "shadow mask", these sheets are now available in a number of forms, designed to suit the various CRT tube technologies that have emerged over the years. They perform a number of important functions:
• they "mask" the electron beam, forming a smaller, more rounded point that can strike individual phosphor dots cleanly
• they filter out stray electrons, thereby minimising "overspill" and ensuring that only the intended phosphors are hit
• by guiding the electrons to the correct phosphor colours, they permit independent control of brightness of the monitor's three primary colours.
When the beam impinges on the front of the screen, the energetic electrons collide with the phosphors that correspond to the pixels of the image to be created on the screen. When this happens, each phosphor is illuminated to a greater or lesser extent, and light is emitted in the colour of the individual phosphor blobs. Their proximity causes the human eye to perceive the combination as a single coloured pixel.
Resolution and refresh rate
The most important aspect of a monitor is that it should give a stable display at the chosen resolution and colour palette. A screen that shimmers or flickers, particularly when most of the picture is showing white (as in Windows), can cause itchy or painful eyes, headaches and migraines. It is also important that the performance characteristics of a monitor be carefully matched with those of the graphics card driving it. It's no good having an extremely high performance graphics accelerator, capable of ultra high resolutions at high flicker-free refresh rates, if the monitor cannot lock onto the signal.
Resolution is the number of pixels with which the graphics card describes the desktop, expressed as a horizontal-by-vertical figure. Standard VGA resolution is 640x480 pixels. This was pretty much obsolete by the beginning of the new millennium, when the commonest CRT monitor resolutions were SVGA and XGA - 800x600 and 1024x768 pixels respectively.
Refresh rate, or vertical scanning frequency, is measured in Hertz (Hz) and represents the number of frames displayed on the screen per second. Too few, and the eye will notice the intervals in between and perceive a flickering display. It is generally accepted - including by standards bodies such as VESA - that a monitor requires a refresh rate of 75Hz or above for a flicker-free display. A computer's graphics circuitry creates a signal based on the Windows desktop resolution and refresh rate. This signal is known as the horizontal scanning frequency (HSF) and is measured in kHz. A multi-scanning or "autoscan" monitor is capable of locking on to any signal which lies between a minimum and maximum HSF. If the signal falls outside the monitor's range, it will not be displayed.
Thus, the formula for calculating a CRT monitor's maximum refresh rate is:
VSF = HSF / number of horizontal lines x 0.95, where
VSF = vertical scanning frequency (refresh rate) and HSF = horizontal scanning frequency.
So, a monitor with a horizontal scanning frequency of 96kHz at a resolution of 1280x1024 would have a maximum refresh rate of:
VSF = 96,000 / 1024 x 0.95 = 89Hz.
If the same monitor were set to a resolution of 1600x1200, its maximum refresh rate would be:
VSF = 96,000 / 1200 x 0.95 = 76Hz.
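The formula is easily expressed in code. The following Python sketch (the function name is illustrative) reproduces the two examples above:

    def max_refresh_rate(hsf_hz, vertical_lines, retrace_factor=0.95):
        # VSF = HSF / number of horizontal lines x 0.95, where the 0.95
        # factor allows for the vertical retrace interval.
        return hsf_hz / vertical_lines * retrace_factor

    print(round(max_refresh_rate(96_000, 1024)))  # 89 - max Hz at 1280x1024
    print(round(max_refresh_rate(96_000, 1200)))  # 76 - max Hz at 1600x1200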
Interlacing
Back in the 1930s, TV broadcast engineers had to design a transmission and reception system that satisfied a number of criteria:
• functioned in harmony with the electricity supply system
• was economical in its use of broadcast radio bandwidth
• could produce an acceptable image on the CRT displays of the time without undue flicker.
The mains electricity supply in Europe and the USA was 50Hz and 60Hz respectively, and an acceptable image frame rate for portraying motion in cinemas had already been established at 24fps. At the time it was not practical to design a TV system that operated at either of the mains electricity frequencies at the receiver end and, in any case, the large amount of broadcast bandwidth required would have been uneconomical. Rates of 25fps and 30fps would reduce the broadcast space needed to within acceptable bounds, but updating images at those rates on a phosphor-type CRT display would produce an unacceptable level of flickering.
The solution the engineers came up with was to split each TV frame into two parts, or "fields", each of which would contain half the scan lines from each frame. The first field - referred to as either the "top" or "odd" field - would contain all the odd numbered scan lines, while the "bottom" or "even" field would contain all the even numbered scan lines. The electron gun in the TV's CRT would scan through all the odd rows from top to bottom, then start again with the even rows, each pass taking 1/50th or 1/60th of a second in Europe or the USA respectively.
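The field split is straightforward to sketch in Python (using the European 625-line frame as an example):

    def interlace(frame_lines):
        # Split a frame's scan lines into the two fields described above.
        odd_field = list(range(1, frame_lines + 1, 2))   # "top" field
        even_field = list(range(2, frame_lines + 1, 2))  # "bottom" field
        return odd_field, even_field

    # A European 625-line frame: each field carries half the lines and is
    # drawn in 1/50th of a second, giving 25 complete frames per second.
    top, bottom = interlace(625)
    print(len(top), len(bottom))  # 313 312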
This interlaced scanning system proved to be an effective compromise. In Europe it amounted to an effective update frequency of 50Hz, reducing the perception of flicker to within acceptable bounds whilst at the same time using no more broadcast bandwidth than a 25fps (50 fields per second) system. The reason it works so well is due to a combination of the psycho-visual characteristics of the Human Visual System (HVS) and the properties of the phosphors used in a CRT display. Flicker perceptibility depends on many factors including image size, brightness, colour, viewing angle and background illumination and, in general, the HVS is far less sensitive to flickering detail than to large area flicker. The effect of this, in combination with the fact that phosphors continue to glow for a period of time after they have been excited by an electron beam, is what creates the illusion of the two fields of each TV frame merging together to create the appearance of complete frames.
There was a time when whether or not a PC's CRT monitor was interlaced was as important an aspect of its specification as its refresh rate. However, for a number of years now these displays have been designed for high resolution computer graphics and text, and with shorter persistence phosphors, making operation in interlaced mode completely impractical. Moreover, by the new millennium many alternative display technologies had emerged - LCD, PDP, LEP, DLP etc. - that were wholly incompatible with the concept of interlaced video signals.
Dot pitch
The maximum resolution of a monitor is dependent on more than just its highest scanning frequencies. Another factor is dot pitch, the physical distance between adjacent phosphor dots of the same colour on the inner surface of the CRT. Typically, this is between 0.22mm and 0.3mm. The smaller the number, the finer and better resolved the detail. However, trying to supply too many pixels to a monitor whose dot pitch is too coarse to cope causes very fine details, such as the writing beneath icons, to appear blurred.
There's more than one way to group three blobs of coloured phosphor - indeed, there's no reason why they should even be circular blobs. A number of different schemes are currently in use, and care needs to be taken in comparing the dot pitch specification of the different types. With standard dot masks, the dot pitch is the centre-to-centre distance between two nearest-neighbour phosphor dots of the same colour, which is measured along a diagonal. The horizontal distance between the dots is 0.866 times the dot pitch. For masks which use stripes rather than dots, the pitch equals the horizontal distance between two same coloured strips. This means that the dot pitch on a standard shadow mask CRT should be multiplied by 0.866 before it is compared with the dot pitch of these other types of monitor.
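The conversion is simple; a Python sketch follows (the function name is illustrative; the 0.866 factor is cos 30°, a consequence of the triangular dot layout):

    def comparable_horizontal_pitch(quoted_pitch_mm, mask_type):
        # Dot-trio masks quote a diagonal centre-to-centre distance, so
        # multiply by 0.866 to obtain the horizontal equivalent; stripe
        # masks quote the horizontal pitch directly.
        if mask_type == "dot":
            return quoted_pitch_mm * 0.866
        return quoted_pitch_mm

    print(comparable_horizontal_pitch(0.25, "dot"))     # ~0.217mm
    print(comparable_horizontal_pitch(0.25, "stripe"))  # 0.25mm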
Some monitor manufacturers publish a mask pitch instead of a dot pitch. However, since the mask is about 1/2in behind the phosphor surface of the screen, a 0.21mm mask pitch might actually translate into a 0.22mm phosphor dot pitch by the time the beam strikes the screen. Also, because CRT tubes are not completely flat, the electron beam tends to spread out into an oval shape as it reaches the edges of the tube. This has led to some manufacturers specifying two dot pitch measurements, one for the centre of the screen and one for its outermost edges.
Overall, the difficulty in directly comparing the dot pitch values of different displays means that other factors - such as convergence, video bandwidth and focus - are often a better basis for comparing monitors than dot pitch.
Dot trio
The vast majority of computer monitors use circular blobs of phosphor and arrange them in triangular formation. These groups are known as "triads" and the arrangement is a dot trio design. The shadow mask is located directly behind the phosphor layer - each perforation corresponding with a phosphor dot trio - and assists in masking unnecessary electrons, avoiding overspill and resultant blurring of the final picture.
Because the distance between the source and the destination of the electron stream is smaller towards the middle of the screen than at the edges, the corresponding area of the shadow mask gets hotter. To prevent it from distorting - and redirecting the electrons incorrectly - manufacturers typically construct it from Invar, an alloy with a very low coefficient of expansion.
This is all very well, except that the shadow mask used to avoid overspill occupies a large percentage of the screen area. Where there are portions of mask, there's no phosphor to glow and less light means a duller image.
The brightness of an image matters most for full-motion video, and with multimedia becoming an increasingly important market consideration, a number of improvements have been made to make dot-trio mask designs brighter. Most approaches to minimising glare involve filters that also reduce brightness; the newer schemes filter out the glare without affecting brightness as much.
Toshiba's Microfilter CRT places a separate filter over each phosphor dot and makes it possible to use a different colour filter for each colour dot. Filters over the red dots, for example, let red light shine through, but they also absorb other colours from ambient light shining on screen - colours that would otherwise reflect off as glare. The result is brighter, purer colours with less glare. Other companies are offering similar improvements. Panasonic's Crystal Vision CRTs use a technology called dye-encapsulated phosphor, which wraps each phosphor particle in its own filter and ViewSonic offers an equivalent capability as part of its new SuperClear screens.
Aperture Grill

In the 1960s, Sony developed an alternative tube technology known as Trinitron. It combined the three separate electron guns into one device: Sony refers to this as a Pan Focus gun. Most interesting of all, Trinitron tubes were made from sections of a cylinder, vertically flat and horizontally curved, as opposed to conventional tubes using sections of a sphere which are curved in both axes. Rather than grouping dots of red, green and blue phosphor in triads, Trinitron tubes lay their coloured phosphors down in uninterrupted vertical stripes.
Consequently, rather than use a solid perforated sheet, Trinitron tubes use masks which separate the entire stripes instead of each dot - and Sony calls this the "aperture grill". This replaces the shadow mask with a series of narrow alloy strips that run vertically across the inside of the tube. Their equivalent measure to a shadow mask's dot pitch is known as "stripe pitch". Rather than using conventional phosphor dot triplets, aperture grill-based tubes have phosphor lines with no horizontal breaks, and so rely on the accuracy of the electron beam to define the top and bottom edges of a pixel. Since less of the screen area is occupied by the mask and the phosphor is uninterrupted vertically, more of it can glow, resulting in a brighter, more vibrant display.
Aperture grill monitors also confer advantages with respect to the sharpness of an image's focus. Since more light can pass through an aperture grill than a shadow mask, it means that bright images can be displayed with less current. The more current needed to write an image to the screen, the thicker the electron beam becomes. The consequence of this is that the electron beam illuminates areas around the spot for which it is intended, causing the edges of the intended image to blur.
Because aperture grill strips are very narrow, there's a possibility that they might move, due to expansion or vibration. In an attempt to eliminate this, horizontal damper wires are fitted to increase stability. This reduces the chances of aperture grill misalignment, which can cause vertical streaking and blurring. The down side is that because the damper wires obstruct the flow of electrons to the phosphors, they are just visible upon close inspection. Trinitron tubes below 17in or so get away with one wire, while larger models require two. A further down side is mechanical instability: a tap on the side of a Trinitron monitor can cause the image to wobble noticeably for a moment. This is understandable given that the aperture grill's fine vertical wires are held steady in only one or two places horizontally.
Mitsubishi followed Sony's lead with the design of its similar Diamondtron tube.
Slotted mask

Capitalising on the advantages of both the shadow mask and aperture grill approaches, NEC has developed a hybrid mask type which uses a slot-mask design borrowed from a TV monitor technology originated in the late 1970s by RCA and Thorn. Virtually all non-Trinitron TV sets use elliptically-shaped phosphors grouped vertically and separated by a slotted mask.
In order to allow a greater amount of electrons through the shadow mask, the standard round perforations are replaced with vertically-aligned slots. The design of the trios is also different, and features rectilinear phosphors that are arranged to make best use of the increased electron throughput.
The slotted mask design is mechanically stable due to the criss-cross of horizontal mask sections but exposes more phosphor than a conventional dot-trio design. The result is not quite as bright as with an aperture grill but much more stable and still brighter than dot-trio. It is unique to NEC, and the company capitalised on the design's improved stability in early 1996 when it fitted the first ChromaClear monitors to come to market with speakers and microphones and claimed them to be "the new multimedia standard".
Enhanced Dot Pitch

Developed by Hitachi, EDP is the newest mask technology, coming to market in late 1997. This takes a slightly different approach, concentrating more on the phosphor implementation than the shadow mask or aperture grill.
On a typical shadow mask CRT, the phosphor trios are more or less arranged equilaterally, creating triangular groups that are distributed evenly across the inside surface of the tube. Hitachi has reduced the distance between the phosphor dots on the horizontal, creating a dot trio that's more akin to an isosceles triangle. To avoid leaving gaps between the trios, which might reduce the advantages of this arrangement, the dots themselves are elongated, so are oval rather than round.
The main advantage of the EDP design is most noticeable in the representation of fine vertical lines. In conventional CRTs, a line drawn from the top of the screen to the bottom will sometimes "zigzag" from one dot trio to the next group below, and then back to the one below that. Bringing adjacent horizontal dots closer together reduces this effect and improves the clarity of all images.
Electron beam

If the electron beam is not lined up correctly with the holes in the shadow mask or aperture grille, it is prevented from passing through to the phosphors, causing a reduction in pixel illumination. As the beam scans it may sometimes regain alignment and so succeed in passing through the mask/grille to reach the phosphors. The result is that the brightness rises and falls, producing a wave-like pattern on the screen, referred to as moiré. Moiré patterns are often most visible when a screen background is set to a pattern of dots, for example a grey background consisting of alternate black and white dots. The phenomenon is actually more common in monitors with improved focus techniques, as monitors with poor focus have a wider electron beam and therefore more chance of hitting the target phosphors instead of the mask/grille. In the past the only way to eliminate moiré effects was to defocus the beam, but a number of monitor manufacturers have since developed techniques to increase the beam size without degrading the focus.
A large part of the efforts being directed at improving the CRT's image are aimed at creating a beam with less spread, so that the beam can address smaller individual dots on the screen more accurately - that is, without impinging on adjacent dots. This can be achieved by forcing the beam through smaller holes in the electron gun's grid assembly - but at the cost of decreasing the image's brightness. Of course, this can be countered by driving the cathode with a higher current so as to liberate more electrons. However, doing this causes the barium that is the source of the electrons to be consumed more quickly and so reduces the life of the cathode.
Sony's answer to this dilemma is SAGIC, or small aperture G1 with impregnated cathode. This comprises a cathode impregnated with tungsten and barium material whose shape and quantity has been varied so as to avoid the high current required for a denser electron beam consuming the cathode. This arrangement allows the first element in the grid - known as G1 - to be made with a much smaller aperture, thus reducing the diameter of the beam that passes through the rest of the CRT. By early 1999 the technology had helped Sony reduce its aperture grill pitch to 0.22mm - down from the 0.25mm of conventional Trinitron tubes - the tighter beam and narrower aperture grill working together to provide a noticeably sharper image.
In addition to dot size, control over dot shape is also essential: for optimal performance, the electron gun must correct errors that arise naturally from the geometry of the tube. The problem arises because the angle at which the electron beam strikes the screen must necessarily vary across the screen's width and height. For dots in the centre of the screen, the beam comes straight through the electron gun and, undeflected by the yoke, strikes the phosphor at a perfect 90 degrees. However, as the beam scans closer to the edges of the screen, it strikes the phosphor at an angle, with the result that the area illuminated becomes increasingly elliptical as the angle changes. The effect is even worse in the corners - especially with screens which aren't perfectly flat - where the dot grows in both directions. If image quality isn't to suffer, it's essential that the monitor's electronics compensate for the problem.
By using additional components in the electron gun, it's possible to alter the shape of the beam itself in sync with the sweeping of the beam across the screen. In effect, the beam is made elliptical in the opposite direction so that the final dot shape on the screen remains circular.
Controls

Not so long ago, advanced controls were found only on high-end monitors. Now, even budget models boast a wealth of image correction controls. This is just as well, since the image fed through to the monitor by the graphics card can be subject to a number of distortions. An image can sometimes be too far to one side, appear too high up on the screen, or need to be made wider or taller. These adjustments can be made using the horizontal or vertical sizing and positioning controls. The most common of the "geometric controls" is barrel or pincushion correction, which stops the image from dipping in or bowing out at the edges. Trapezium correction can straighten sides which slope in together, or out from each other. Parallelogram corrections will prevent the image from leaning to one side, while some models even allow the entire image to be rotated.
Making more common appearances these days, too, are on-screen controls. These are superimposed graphics which appear on the screen (obscuring parts of the main image), usually indicating what is about to be adjusted. It's the same as a TV set superimposing, say, a volume bar whilst the sound is being adjusted. There's no standard for on-screen graphics, so there's a huge range of icons, bars, colours and sizes in use - some much better than others. The whole point, however, is to render adjustments as intuitive, quick and easy as possible.
Design

By the beginning of 1998, 15in monitors were gradually slipping to bargain-basement status, and the 17in size, an excellent choice for working at 1024x768 (XGA) resolution, was moving into the slot reserved for mainstream desktops. At the high end, a few 21in monitors were offering resolutions as high as 1800x1440.
In late 1997 a number of 19in monitors appeared on the market, with prices and physical sizes close to those of high-end 17in models, offering a cost-effective compromise for high resolution. A 19in CRT is a good choice for 1280x1024 (SXGA) - the minimum resolution needed for serious graphics or DTP, and the power user's minimum for business applications. It's also a practical minimum size for displaying at 1600x1200 (UXGA), although bigger monitors are preferable for that resolution.
One of the main problems with CRTs is their bulk. The larger the viewable area gets, the more the CRT's depth increases. The long-standing rule of thumb was that a monitor's depth matched its diagonal CRT size. CRT makers had been trying to reduce the depth by increasing the angle of deflection within the tube. However, the more the beam is deflected, the harder it is to maintain focus. Radical measures deployed included putting the deflection coils inside the glass CRT; they normally sit around the CRT's neck.
The result of this development effort is the so-called "short-neck" CRT. In early 1998 17in short-neck monitors measuring around 15in deep reached the market. The downside was that the new design had a tendency to degrade images, especially at a screen's corners and edges. This was addressed by improvements in the technology the following year with the introduction of tube designs employing a 100-degree deflection tube - in place of conventional 90-degree tubes - and narrower electron gun assemblies. The consequent increase in the beam deflection angle allowed the gun to be placed closer to the screen without the penalty of any image distortion. The result was a new rule of thumb that short-necked monitors should be about two inches shorter than their diagonal size.
The shape of a monitor's screen is another important factor. The three most common CRT shapes are spherical (a section of a sphere, used in the oldest and most inexpensive monitors), cylindrical (a section of a cylinder, used in aperture-grille CRTs), and flat square (a section of a sphere large enough to make the screen nearly flat).
Flat square tube (FST) is an industry standard term used since 1997 to describe shadow mask monitors that have minimal curvature (but still a curvature) of the monitor tube. They also have a larger display area - closer to the tube size - and nearly square corners. There's a design penalty for a flatter, squarer screen, as the less of a spherical section the screen surface is, the harder it is to control the geometry and focus of the displayed images. Modern monitors use microprocessors to apply techniques like dynamic focusing to compensate for the flatter screen.
FSTs require the use of a special alloy, Invar, for the shadow mask. The flatter screen means that the shortest beam path is in the centre of the screen. This is the point where the beam energy tends to concentrate, and consequently the shadow mask gets hotter here than at the corners and sides of the display. Uneven heating across the mask can make it expand and eventually warp and buckle. Any distortion in the mask means that its holes no longer register with the dot triplets on the screen and image quality will be reduced. Invar alloy is used in the best monitors as it has a low coefficient of expansion.
By 2000, monitors that used alternative mask technologies were available with completely flat screens. The principal advantages of a truly flat screen are minimal glare and images with a more realistic appearance. However, these benefits come at the cost of accentuating the problem of the electron beam's elliptical shape where it strikes the screen at its edges. Furthermore, the use of perfectly flat glass gives rise to an optical illusion caused by the refraction of light, resulting in the image looking concave. As a result, many tube manufacturers employ a double-layer glass surface, the inner surface of which introduces a curve that counters the concave appearance. The downside of this is that it reduces brightness - and sometimes contrast - and can give rise to warping at the screen's corners.
Sound facilities have become commonplace on many PCs, requiring additional loudspeakers and possibly a microphone too. The "multimedia monitor" avoids lots of separate boxes and cables by building in loudspeakers of some sort, maybe a microphone and in some cases a camera for video conferencing. At the back of these monitors are connections for a sound card. However, the quality of these additional components is often questionable, adding only a few pounds to the cost of manufacture. For high quality sound nothing beats decent external speakers which can also be properly magnetically shielded.
Another development which has become increasingly available since the launch of Microsoft's Windows 98, which brought with it the necessary driver software, is USB-compliant CRTs. The Universal Serial Bus applies to monitors in two ways. First, the monitor itself can use a USB connection to allow screen settings to be controlled with software. Second, a USB hub can be added to a monitor (normally in its base) for use as a convenient place to plug in USB devices such as keyboards and mice. The hub provides the connection to the PC.
Digital CRTs
Nearly 99 percent of all video displays sold in 1998 were connected using an analogue VGA interface, an ageing technology that represents the minimum standard for a PC display. In fact, today VGA represents an impediment to the adoption of new flat panel display technologies, largely because of the added cost for these systems to support the analogue interface. Another fundamental problem is the degradation of image quality that occurs when a digital signal is converted to analogue, and then back to digital, before driving an analogue-input LCD display.
The autumn of 1998 saw the formation of Digital Display Working Group (DDWG) - including computer industry leaders Intel, Compaq, Fujitsu, Hewlett-Packard, IBM, NEC and Silicon Image - with the objective of delivering a robust, comprehensive and extensible specification of the interface between digital displays and high-performance PCs. In the spring of 1999 the DDWG approved the first version of the Digital Visual Interface (DVI) specification based on Silicon Image's PanelLink technology, using a Transition Minimised Differential Signaling (TMDS) digital signal protocol.
Whilst primarily of benefit to flat panel displays - which can now operate in a standardised all-digital environment without the need to perform an analogue-to-digital conversion on the signals from the graphics card driving the display device - the DVI specification potentially has ramifications for conventional CRT monitors too.
Most complaints of poor image quality on CRTs can be traced to incompatible graphics controllers on the motherboard or graphics card. In today's cost-driven market, marginal signal quality is not all that uncommon. The incorporation of DVI with a traditional analogue CRT monitor will allow monitors to be designed to receive digital signals, with the necessary digital-to-analogue conversion being carried out within the monitor itself. This will give manufacturers added control over final image quality, making differentiation based on image quality much more of a factor than it has been hitherto. However, the application of DVI with CRT monitors is not all plain sailing.
One of the drawbacks is that since it was originally designed for use with digital flat panels, DVI has a comparatively low bandwidth of 165MHz. This means that a working resolution of 1280x1024 could be supported at up to an 85Hz refresh rate. Although this isn't a problem for LCD monitors, it's a serious issue for CRT displays. The DVI specification supports a maximum resolution of 1600x1200 at a refresh rate of only 60Hz - totally unrealistic in a world of ever increasing graphics card performance and ever bigger and cheaper CRT monitors.
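The arithmetic behind these limits can be sketched as follows, assuming for illustration a blanking overhead of roughly 25% on top of the active pixels (actual GTF timings vary):

    def approx_pixel_clock_mhz(h_pixels, v_pixels, refresh_hz, overhead=1.25):
        # Approximate pixel clock required for a given display mode.
        return h_pixels * v_pixels * refresh_hz * overhead / 1e6

    print(round(approx_pixel_clock_mhz(1280, 1024, 85)))  # ~139 - within a 165MHz link
    print(round(approx_pixel_clock_mhz(1600, 1200, 60)))  # ~144 - just within
    print(round(approx_pixel_clock_mhz(1600, 1200, 85)))  # ~204 - needs a dual link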
The solution is the provision of additional bandwidth overhead for horizontal and vertical retrace intervals - facilitated through the use of two TMDS links. With such an arrangement digital CRTs compliant with VESA's Generalised Timing Formula (GTF) would be capable of easily supporting resolutions exceeding 2.75 million pixels at an 85Hz refresh rate. However, implementation was to prove to be difficult, with noise, reflections, skew and drive limitations within DVI chips making it difficult to achieve the theoretical potential of a dual DVI link. In the event it was not until 2002 that the first dual-link DVI graphics cards began to emerge.
Another problem is that it's more expensive to digitally scale the refresh rate of a monitor than to use a traditional analogue multisync design. This could lead to digital CRTs being more costly than their analogue counterparts. An alternative is for digital CRTs to have a fixed frequency and resolution, like an LCD display, thereby eliminating the need for multisync technology.
DVI anticipates that in the future screen refresh functionality will become part of the display itself. New data will need to be sent to the display only when changes to the data need to be displayed. With a selective refresh interface, DVI can maintain the high refresh rates required to keep a CRT display ergonomically pleasing while avoiding an artificially high data rate between the graphics controller and the display. Of course, a monitor would have to employ frame buffer memory to enable this feature.
The first DVI-compliant controller designed specifically for implementation in digital CRT monitors came to market during 2001, and by the end of that year DVI had become firmly established and sales of flat panels had surged to such an extent that prices fell dramatically. However, it remained unclear how relevant DVI was going to be for conventional monitors, with some convinced that DVI was the future of CRT technology and others remaining sceptical.
LightFrame technology

CRT monitors and TVs have each been optimised for the applications they've traditionally been used for. The former excel at displaying close-up, high-resolution content such as text, while lower-resolution TV screens' larger dot pitch and higher light output make them more suitable for rendering low-resolution moving images such as film, intended for viewing at a distance.
TVs can use an extremely high beam current to produce vivid images, and take advantage of a phenomenon called "pixel blooming", in which adjacent pixels illuminate one another, thereby achieving a higher level of brightness. Another TV technique is "peaking", which artificially sharpens a video signal's light/dark transitions.
The problem is that neither of these techniques is appropriate on high-resolution PC monitors, as they would result in a performance degradation in traditional computer applications - such as word processing and spreadsheets. Consequently, PC users have had to live with TV-quality applications often appearing flat, dull and lifeless when displayed on a CRT monitor. Of course, the rise of home video editing, DVD playback on the desktop and even video content on the Web means this deficiency has become increasingly unacceptable.
Philips' answer to the problem came in the shape of their unique and innovative LightFrame technology, first revealed in late 2000. In essence, LightFrame seeks to simulate the output performance of a TV screen on a PC monitor, theoretically delivering the best of both worlds.
It comprises a software application and an integrated circuit embedded in a monitor which work together to selectively raise brightness and sharpness. The software transmits co-ordinates of the selected screen area to the monitor by writing instructions on the last line of the video signal. These are translated by a proprietary integrated circuit in the monitor to boost sharpness and brightness in the selected area before being blanked out. Non-selected portions of the screen are unaffected by the process.
Extensive testing has confirmed that LightFrame does not adversely affect monitor life. Modern-day monitors have improved phosphors, designed for high light output. Though the peak brightness of a highlighted area is strongly increased, the average brightness - a determining factor for cathode deterioration - is not normally increased. In any case, LightFrame employs a special Automatic Beam Limiter (ABL) circuit to keep a monitor's maximum average brightness within acceptable levels.
A year after the technology was first introduced, LightFrame 2 was launched, offering automatic detection and enhancement of applications that benefit from the technology. This was followed in the summer of 2002 by the announcement of LightFrame 3, boasting the ability to automatically enhance up to 16 images simultaneously in Microsoft's Internet Explorer and up to 8 when using photo-viewing applications. Interestingly, Philips intend to migrate LightFrame 3 to its LCD monitors too.
LightFrame works by identifying a rectangular screen area for highlighting. On occasions, certain backgrounds or borders prevent a photo or video from being detected automatically. In such cases it's necessary to highlight it manually. This is accomplished by dragging a rectangle to encompass the selected area, or, to select an entire window, by a single click of the mouse.
Safety standards

In the late 1980s concern over possible health issues related to monitor use led Swedac, the Swedish testing authority, to make recommendations concerning monitor ergonomics and emissions. The resulting standard was called MPR1. This was amended in 1990 to the internationally adopted MPR2 standard, which called for the reduction of electrostatic emissions with a conductive coating on the monitor screen.
In 1992 a further standard, entitled TCO, was introduced by the Swedish Confederation of Professional Employees. The emission levels in TCO92 were based on what monitor manufacturers thought was possible rather than on any particular safety level, while MPR2 had been based on what they could achieve without a significant cost increase. As well as setting stiffer levels for emission it required monitors to meet the international EN60950 standard for electrical and fire safety. Subsequent TCO standards were introduced in 1995 and again in 1999.
Apart from Sweden, the main impetus for safety standards has come from the US. In 1993, VESA initiated its DPMS standard, or Display Power Management Signalling. A DPMS compliant graphics card enables the monitor to achieve four states: on, standby, suspend and off, at user-defined periods. Suspend mode must draw less than 8W so the CRT, its heater and its electron gun are likely to be shut off. Standby takes the power consumption down to below about 25W, with the CRT heater usually left on for faster resuscitation.
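The four DPMS states and the power figures mentioned above can be summarised as follows (a sketch; limits for the "on" and "off" states are not defined here):

    # Approximate maximum power draw (watts) in each DPMS state.
    DPMS_POWER_LIMITS_W = {
        "on": None,      # full operation - no limit applies
        "standby": 25,   # CRT heater usually left on for faster resuscitation
        "suspend": 8,    # CRT, heater and electron gun likely shut off
        "off": None,     # not specified above
    }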
VESA has also produced several standards for plug-and-play monitors. Known under the banner of DDC (Display Data Channel), they should in theory allow your system to figure out and select the ideal settings, but in practice this very much depends on the combination of hardware.
EPA Energy Star is a power saving standard, mandatory in the US and widely adopted in Europe, requiring a mains power saving mode drawing less than 30W. Energy Star was initiated in 1993 but really took hold in 1995 when the US Government, the world's largest PC purchaser, adopted a policy to buy only Energy Star compliant products.
Other relevant standards include:
• ISO 9241 part 3, the international standard for monitor ergonomics
• EN60950, the European standard for the electrical safety of IT equipment
• the German TUV/EG mark, which means a monitor has been tested to both standards, in addition to the German standard for basic ergonomics (ZH/618) and MPR2 emission levels.
TCO standards
In 1995, TCO modified the requirements for visual ergonomics and added a range of conditions to cover environmental issues, including the use of certain chemicals in manufacturing and the recycling of components. The most stringent standard so far, and the result of collaboration between the TCO (The Swedish Confederation of Professional Employees), Naturskyddsforeningen (The Swedish Society for Nature Conservation) and NUTEK (The National Board for Industry and Technical Development in Sweden), TCO95 became the first global environmental labelling scheme. It was more comprehensive than the German Blue Angel label and more exacting than the ISO international standards. The display, system unit and keyboard can be certified separately and the manufacturer's environmental policy is addressed at every stage from production to disposal. Over and above TCO92, the product may not contain cadmium or lead, the plastic housing must be of biodegradable material and free of brominated flame retardants and the production process must avoid use of CFCs (freons) and chlorinated solvents. The emission and power saving requirements remain unaltered although picture performance and luminance uniformity have been addressed.
TCO standards also require that screens be treated with conductive coatings to reduce the static charge on the monitor. Although static electricity generated on the front surface of a CRT has been alleged to be a factor in a number of health risks, it has not yet been confirmed.
TCO99 is the latest iteration of the standard. It doesn't change the emission levels from those in the previous versions, but it does alter the testing procedures to deal with certain loopholes. The new approval mainly concentrates on improving the visual ergonomics requirements, including better luminance uniformity and contrast. There is also a new requirement that screen colour temperature adjustment, when present, should be accurate.
To reduce eye fatigue caused by image flicker, the minimum required refresh rate is increased to 85Hz for displays of less than 20in, with 100Hz recommended, and to a minimum of 75Hz for 20in or greater. Although harder to control, there are also measures to address the problem of screen contrast in the office environment. To help manufacturers achieve the right balance between anti-reflection treatment and the minimum amount of light reaching the user, a minimum diffuse reflectance level of 20% is specified.
More exacting attention is paid to power saving and environmental impact, with TCO99-certified monitors saving up to 50% more energy than TCO95 displays. There's a different requirement for monitors with USB hubs, which can suspend at 15W and restart in three seconds; non-USB monitors must suspend at 5W. Manufacturing requirements are more stringent too. No chlorinated solvents may be used and product vendors must provide corporate and domestic customers with a recycling path using a competent recycling body.
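Expressed as simple rules, the TCO99 refresh and suspend requirements look like this (a Python sketch based on the figures above; function names are illustrative):

    def tco99_min_refresh_hz(diagonal_inches):
        # 85Hz minimum below 20in (100Hz recommended); 75Hz at 20in or above.
        return 75 if diagonal_inches >= 20 else 85

    def tco99_suspend_limit_w(has_usb_hub):
        # Monitors with USB hubs may suspend at 15W; others must suspend at 5W.
        return 15 if has_usb_hub else 5

    print(tco99_min_refresh_hz(17))       # 85
    print(tco99_suspend_limit_w(False))   # 5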
Ergonomics
Whilst the quality of the monitor and graphics card - and in particular the refresh rate at which the combination can operate - is of crucial importance in ensuring that users spending long hours in front of a CRT monitor can do so in as much comfort as possible, it is not the only factor that should be considered. Physical positioning is also important, and expert advice has recently been revised in this area. Previously it had been thought that the centre of the monitor should be at eye level. It is now believed that to reduce fatigue as much as possible, the top of the screen should be at eye level, with the screen between 0 and 30 degrees below the horizontal and tilted slightly upwards. However, achieving this arrangement with furniture designed in accordance with the previous rules is not easy without causing other problems with respect to seating position and, for example, the comfortable positioning of keyboard and mouse. It is also important to sit directly in front of the monitor rather than to one side, and to locate the screen so as to avoid reflections and glare from external light sources.