
RAID Levels Comparison

Summary Comparison of RAID Levels

Below you will find a table that summarizes the key quantitative attributes of the various RAID levels for easy comparison. For the full details on any RAID level, see its own page, accessible here. For a description of the different characteristics, see the discussion of factors differentiating RAID levels. Also be sure to read the notes that follow the table:

[Table: Comparison of all RAID Levels]

Notes on the table:

  • For the number of disks, the first few valid sizes are shown; you can figure out the rest from the examples given in most cases. Minimum size is the first number shown; maximum size is normally dictated by the controller. RAID 01/10 and RAID 15/51 must have an even number of drives, minimum 6. RAID 03/30 and RAID 05/50 require a drive count that is the product of two integers: the number of sub-arrays (at least 2) times the number of drives per sub-array (at least 3), for a minimum of 6.
  • For capacity and storage efficiency, “S” is the size of the smallest drive in the array, and “N” is the number of drives in the array. For the RAID 03 and 30, “N0” is the width of the RAID 0 dimension of the array, and “N3” is the width of the RAID 3 dimension. So a 12-disk RAID 30 array made by creating three 4-disk RAID 3 arrays and then striping them would have N3=4 and N0=3. The same applies for “N5” in the RAID 05/50 row.
  • Storage efficiency assumes all drives are of identical size. If this is not the case, the universal computation (array capacity divided by the sum of all drive sizes) must be used. (A short calculation sketch illustrating these formulas follows these notes.)
  • Performance rankings are approximations and to some extent, reflect my personal opinions. Please don’t over-emphasize a “half-star” difference between two scores!
  • Cost is relative and approximate, of course. In the real world it will depend on many factors; the dollar signs are just intended to provide some perspective.
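
To make the capacity and efficiency formulas concrete, here is a minimal Python sketch. It assumes the standard capacity rules (RAID 5 gives up one drive's worth of space to parity; each RAID 3 sub-array in a RAID 30 does the same) and mirrors the 12-disk RAID 30 example above; the function names and the 100 GB drive size are mine, chosen only for illustration.

```python
# Minimal sketch of the capacity and storage efficiency calculations
# described in the notes above. All drives are assumed identical (size S).

def raid5_capacity(n, s):
    """RAID 5 with N drives of size S: one drive's worth of space holds parity."""
    return (n - 1) * s

def raid30_capacity(n3, n0, s):
    """RAID 30: N0 striped RAID 3 sub-arrays, each N3 drives wide."""
    return n0 * (n3 - 1) * s

def storage_efficiency(array_capacity, drive_sizes):
    """Universal computation: array capacity divided by the sum of all drive sizes."""
    return array_capacity / sum(drive_sizes)

# The 12-disk RAID 30 example from the notes: three 4-disk RAID 3 arrays,
# striped together (N3 = 4, N0 = 3), using hypothetical 100 GB drives.
cap = raid30_capacity(n3=4, n0=3, s=100)      # 900 GB usable
eff = storage_efficiency(cap, [100] * 12)     # 0.75, i.e. 75%
print(cap, eff)
```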

Access Time

One of the most commonly quoted performance statistics for CD-ROM drives is access time. As with most commonly used performance metrics, it is abused at least as much as it is used properly. Curiously, access time is used extensively in quoting the specs of CD-ROM drives, but is virtually never mentioned with respect to hard disks. With hard drives it is much more common to see quotes of the other metrics that combine to make up access time. (This despite the fact that the two types of drive access data in very similar ways…)

Access time is meant to represent the amount of time it takes from the start of a random read operation until the data starts to be read from the disk. It is a composite metric, really being composed of the following other metrics:

  • Speed Change Time: For CLV (constant linear velocity) drives, the time for the spindle motor to change to the correct speed.
  • Seek Time: The time for the drive to move the heads to the right location on the disk.
  • Latency: The amount of time for the disk to turn so that the right information spins under the read head.

Although access time is made up of the time for these separate operations, this doesn’t mean that you can simply add these other measurements together to get access time. The relationship is more complex than this because some of these items can happen in parallel. For example, there is no reason that the speed of the spindle motor couldn’t be varied at the same time that the heads are moved (and in fact this is done).
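
Since access time is a composite of the components above rather than their simple sum, a rough way to see the relationship is to treat the spindle speed change and the seek as overlapping, and then add the rotational latency. The sketch below is only a simplified illustrative model; the numbers in it are hypothetical, not measured values.

```python
# A simplified, illustrative model of CD-ROM access time based on the
# components listed above. The overlap of spindle speed change and head
# seek is modeled with max(); real drives are more complicated.

def avg_rotational_latency_ms(rpm):
    """Average latency is roughly half a revolution (a standard approximation)."""
    ms_per_rev = 60000.0 / rpm
    return ms_per_rev / 2

def access_time_ms(speed_change_ms, seek_ms, rpm):
    # Speed change and seek can proceed in parallel, so the slower of the
    # two dominates; the disk must then rotate the data under the head.
    return max(speed_change_ms, seek_ms) + avg_rotational_latency_ms(rpm)

# Hypothetical numbers purely for illustration:
print(access_time_ms(speed_change_ms=75, seek_ms=80, rpm=2000))  # ~95 ms
```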

The access time of CD-ROM drives in general depends on the rated “X” speed of the drive, although this can and does vary widely from drive to drive. The oldest 1X drives generally had truly abysmal access times, often exceeding 300 ms; as drives have become faster and faster, access times have dropped, and now are below 100 ms on the top-end drives.

Note that while faster “X” rated drives have lower access times, this is due to improvements in the three component metrics listed above. Some of the improvement (latency, for example) comes automatically from spinning the disk at 8X instead of 1X. Seek time, on the other hand, is independent of the spin speed of the disk, which is why some 8X drives will have much better access time performance than other 8X drives.

Even the fastest CD-ROM drives are significantly slower than even the slowest hard disks; access time on a high-end CD-ROM is still going to be four or five times higher than that of a high-end disk drive. This is just the nature of the device; CD-ROM drives are based on technology originally developed for playing audio CDs, where random seek performance is very unimportant. CDs do not have cylinders like a hard disk platter, but rather a long continuous spiral of bits, which makes finding specific pieces of data much more difficult.

Even though access time is important in some ways, its importance is generally vastly overstated by the people who sell CD-ROM drives. Random access performance is one component of overall CD-ROM performance, and how essential it is depends on what you are doing with your drive. However, even if high random access performance is important, you must bear in mind that far fewer random reads are done, in general, to a CD than to a hard disk.

Another point is that manufacturers are not always consistent in how they define their averages. Some companies may use different testing methods, and some may even exaggerate in order to make their drives look much better than they actually are. A small difference in quoted access time is not usually going to make any noticeable real-world difference. For most purposes, a drive with a 100 ms access time is going to behave the same as one with a 110 ms access time. It’s usually better at that point to differentiate them based on other performance characteristics or features (or price).


101-Key “Enhanced” Keyboard Layout

In 1986, IBM introduced the IBM PC/AT Model 339. Included in this last AT-family system was the new Enhanced 101-key keyboard. Little did IBM realize at the time, perhaps, but this 101-key keyboard would become the de-facto standard for keyboards for the next decade and beyond. Even today’s Windows keyboards and fancy variants with extra buttons and keys are based on this layout.

[Image: 101-key "enhanced" keyboard]

The “Enhanced” keyboard was electrically the same as the 84-key AT keyboard, but featured a radically redesigned key layout. The major changes included these:

  • Dedicated Cursor and Navigation Keys: Finally, separate keys were provided for cursor control and navigation. This enabled the numeric keypad to be used along with the cursor and navigation keys. The cursor keys were also made into an “inverted-T” configuration for easier switching between “Up” and “Down” with a single finger.
  • Relocated Function Keys: The function keys were moved from the left-hand side of the keyboard to a row along the top, and divided into groups of four for convenience. While many users had been asking for this, they found that sometimes the grass really isn’t greener on the other side of the fence, as I discuss below…
  • Relocated <Esc> and <Caps Lock> Keys: The <Esc> key was moved back to the left-hand side of the keyboard, though it was placed up above the main typing area. The <Caps Lock> key was moved above the left <Shift> key.
  • Extra Function Keys: Two additional function keys, <F11> and <F12> were added to the keyboard.
  • Extra <Ctrl> and <Alt> Keys: Additional <Ctrl> and <Alt> keys were added on the right side of the <Space Bar>.
  • Extra Numeric Keypad Keys: The numeric keypad was fitted with an additional <Enter> key, as well as the “/” (divide operator) that had been missing up to that point.

Compared to the 84-key keyboard, the Enhanced keyboard layout was perceived by most users to be far superior. It was an immediate hit despite its one obvious inferiority to the AT keyboard: the smaller main <Enter> key. (The <Space Bar> is also a bit smaller.) Obviously, some of the changes made with the Enhanced keyboard are undeniable improvements. However, others are in this author’s opinion good examples of the old warning: “be careful what you ask for”…

Many PC users, after having complained for years about changes they wanted made to the PC keyboard layout, found they weren’t all that happy with them once their wish was granted! Having never complained about the issues that were changed with the Enhanced keyboard myself, I found some of the changes quite frustrating–and I later discovered that I was not alone. My personal beefs with this layout involve the locations of the following:

  • Left <Ctrl> Key: With the older layout, the left-hand <Ctrl> key is readily accessible, and it is used by computer enthusiasts dozens, if not hundreds of times a day. (For example, cut, copy and paste are universal functions with standard Windows short-cuts of <Ctrl>+X, <Ctrl>+C and <Ctrl>+V respectively.) The new design puts the <Ctrl> key at the bottom of the keyboard, below the main typing area, requiring a move of the entire left hand to reach it. And while having the <Caps Lock> key above the left <Shift> may be of use to some, I use the <Caps Lock> key maybe once or twice a month; how about you? :^) Overall, a really bad swap in my opinion.
  • Function Keys: Having the function keys on the left-side of the keyboard makes them easy to reach, particularly in combination with the <Shift>, <Ctrl> and <Alt> keys. Again, these are frequently used keys which are hard to reach when above the keyboard; most combinations that used to be simple with one hand now require two. For example, a command I use frequently when writing is <Ctrl>+<F6>, the Microsoft Word (and FrontPage) function to switch between documents. Compare the motion required to type this combination on an Enhanced keyboard to what was required with the function keys on the left side and the <Ctrl> key above the <Shift> key. Also consider <Alt>+<F4>, the standard combination to close a Windows application… and so it goes.
    The real irony, of course, is that the “on-screen labels corresponding to function keys”, which is what caused people to want the function keys along the top of the keyboard, disappeared from software applications many years ago!
  • <Esc> Key: This key is still a reach with the Enhanced design. Compare how often you use the <Esc> key in a day to the number of times you type a backwards quote or tilde! Again, a poorly-considered decision.

Despite these limitations, the 101-key keyboard remains the standard (actually, the 104-key Windows keyboard is the standard now, but the two layouts are nearly identical). Of course, countless variations of the basic design exist. A common modification is to enlarge the <Enter> key back to its “84-key layout size”, and squeeze the backslash / vertical-pipe key between the “=/+” key and the <Backspace>. An improvement in my estimation!

As for me, rather than curse the darkness, I lit a candle: I use a 124-key Gateway Anykey programmable keyboard with function keys both above and to the left of the main typing area, and a large main <Enter> key. I relocate the left <Ctrl> to where it belongs and the <Caps Lock> key somewhere out of the way where it belongs. :^) I swap the <Esc> key and the backquote/tilde key as well. Ah, freedom. :^)


Cyrix 5x86 CPU

Cyrix 5x86 (“M1sc”)

Despite having the same name as AMD’s 5x86 processor, the Cyrix 5x86 is a totally different animal. While AMD designed its 5x86 by further increasing the clock on the 486DX4, Cyrix took the opposite approach by modifying its M1 processor core (used for the 6x86 processor) to make a “lite” version to work on 486 motherboards. As such, the Cyrix 5x86 in some ways resembles a Pentium OverDrive (which is a Pentium core modified to work in a 486 motherboard) internally more than it resembles the AMD 5x86. This chip is probably the hardest to classify as either fourth or fifth generation.

The 5x86 employs several architectural features that are normally found only in fifth-generation designs. The pipeline is extended to six stages, and the internal architecture is 64 bits wide. It has a larger (16 KB) primary cache than the 486DX4 chip. It uses branch prediction to improve performance.

The 5x86 was available in two speeds, 100 and 120 MHz. The 5x86-120 is the most powerful chip that will run in a 486 motherboard–it offers performance comparable to a Pentium 90 or 100. The 5x86 is still a clock-tripled design, so it runs in 33 and 40 MHz motherboards. (The 100 MHz version will actually run at 50×2 as well, but was normally run at 33 MHz.) It is a 3.45 volt design intended for a Socket 3 motherboard, though it will run in an earlier 486 socket if a voltage regulator is used. I have heard that some motherboards will not run this chip properly, so you may need to check with Cyrix if trying to use this chip in an older board. These chips have been discontinued by Cyrix but are still good performers, and for those with a compatible motherboard, as good as you can get. Unfortunately, they are extremely difficult to find now.

Look here for an explanation of the categories in the processor summary table below, including links to more detailed explanations.

General Information

  • Manufacturer: Cyrix
  • Family Name: 5x86
  • Code Name: "M1sc"
  • Processor Generation: Fourth
  • Motherboard Generation: Fourth
  • Versions: 5x86-100; 5x86-120
  • Introduced: 1996?
  • Variants and Licensed Equivalents: (none listed)

Speed Specifications

  • Memory Bus Speed (MHz): 33 / 50 (5x86-100); 40 (5x86-120)
  • Processor Clock Multiplier: 3.0 / 2.0 (5x86-100); 3.0 (5x86-120)
  • Processor Speed (MHz): 100 (5x86-100); 120 (5x86-120)
  • "P" Rating: P75 (5x86-100); P90 (5x86-120)

Benchmarks

  • iCOMP Rating: ~610 (5x86-100); ~735 (5x86-120)
  • iCOMP 2.0 Rating: ~67 (5x86-100); ~81 (5x86-120)
  • Norton SI: 264 (5x86-100); 316 (5x86-120)
  • Norton SI32: ~16 (5x86-100); 19 (5x86-120)
  • CPUmark32: ~150 (5x86-100); ~180 (5x86-120)

Physical Characteristics

  • Process Technology: CMOS
  • Circuit Size (microns): 0.65
  • Die Size (mm^2): 144
  • Transistors (millions): 2.0

Voltage, Power and Cooling

  • External or I/O Voltage (V): 3.45
  • Internal or Core Voltage (V): 3.45
  • Power Management: SMM
  • Cooling Requirements: Active heat sink

Packaging

  • Packaging Style: 168-Pin PGA
  • Motherboard Interface: Socket 3; or 168-Pin Socket, Socket 1, Socket 2 (with voltage regulator)

External Architecture

  • Data Bus Width (bits): 32
  • Maximum Data Bus Bandwidth (Mbytes/sec): 127.2 (5x86-100); 152.6 (5x86-120)
  • Address Bus Width (bits): 32
  • Maximum Addressable Memory: 4 GB
  • Level 2 Cache Type: Motherboard
  • Level 2 Cache Size: Usually 256 KB
  • Level 2 Cache Bus Speed: Same as Memory Bus
  • Multiprocessing: No

Internal Architecture

  • Instruction Set: x86
  • MMX Support: No
  • Processor Modes: Real, Protected, Virtual Real
  • x86 Execution Method: Native

Internal Components

  • Register Size (bits): 32
  • Pipeline Depth (stages): 6
  • Level 1 Cache Size: 16 KB Unified
  • Level 1 Cache Mapping: 4-Way Set Associative
  • Level 1 Cache Write Policy: Write-Through, Write-Back
  • Integer Units: 1
  • Floating Point Unit / Math Coprocessor: Integrated
  • Instruction Decoders: 1
  • Branch Prediction Buffer Size / Accuracy: unknown
  • Write Buffers: unknown
  • Performance Enhancing Features: (none listed)
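
The clock and bandwidth figures in the table follow directly from the memory bus speed, the clock multiplier and the 32-bit data bus. The short sketch below reproduces them; it assumes the table's "Mbytes/sec" figures are binary megabytes (2^20 bytes), which is how the 127.2 and 152.6 values work out.

```python
# Reproducing two derived values from the table above.
# Assumption: the "Mbytes/sec" figures are binary megabytes (2^20 bytes).

def core_clock_mhz(bus_mhz, multiplier):
    """Processor speed = memory bus speed x clock multiplier."""
    return bus_mhz * multiplier

def bus_bandwidth_mbytes(bus_mhz, bus_width_bits):
    """Peak data bus bandwidth = bus speed x bus width, in binary Mbytes/sec."""
    return bus_mhz * 1_000_000 * (bus_width_bits // 8) / 2**20

print(core_clock_mhz(33.33, 3))          # ~100 MHz (5x86-100)
print(core_clock_mhz(40, 3))             # 120 MHz (5x86-120)
print(bus_bandwidth_mbytes(33.33, 32))   # ~127.2 (matches the table)
print(bus_bandwidth_mbytes(40, 32))      # ~152.6 (matches the table)
```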


History of NTFS

Overview and History of NTFS

In the early 1990s, Microsoft set out to create a high-quality, high-performance, reliable and secure operating system: Windows NT. The goal of this operating system was to allow Microsoft to get a foothold in the lucrative business and corporate market–at the time, Microsoft’s operating systems were MS-DOS and Windows 3.x, neither of which had the power or features needed for Microsoft to take on UNIX or other “serious” operating systems. One of the biggest weaknesses of MS-DOS and Windows 3.x was that they relied on the FAT file system. FAT provided few of the features needed for data storage and management in a high-end, networked, corporate environment. To avoid crippling Windows NT, Microsoft had to create for it a new file system that was not based on FAT. The result was the New Technology File System or NTFS.

It is often said (and sometimes by me, I must admit) that NTFS was “built from the ground up”. That’s not strictly an accurate statement, however. NTFS is definitely “new” from the standpoint that it is not based on the old FAT file system. Microsoft did design it based on an analysis of the needs of its new operating system, and not based on something else that they were attempting to maintain compatibility with, for example. However, NTFS is not entirely new, because some of its concepts were based on another file system that Microsoft was involved with creating: HPFS.

Before there was Windows NT, there was OS/2. OS/2 was a joint project of Microsoft and IBM begun in the late 1980s; the two companies were trying to create the next big success in the world of graphical operating systems. They succeeded, to some degree, depending on how you are measuring success. :^) OS/2 had some significant technical accomplishments, but suffered from marketing and support issues. Eventually, Microsoft and IBM began to quarrel, and Microsoft broke from the project and started to work on Windows NT. When they did this, they borrowed many key concepts from OS/2’s native file system, HPFS, in creating NTFS.

NTFS was designed to meet a number of specific goals. In no particular order, the most important of these are:

  • Reliability: One important characteristic of a “serious” file system is that it must be able to recover from problems without data loss resulting. NTFS implements specific features to allow important transactions to be completed as an integral whole, to avoid data loss, and to improve fault tolerance.
  • Security and Access Control: A major weakness of the FAT file system is that it includes no built-in facilities for controlling access to folders or files on a hard disk. Without this control, it is nearly impossible to implement applications and networks that require security and the ability to manage who can read or write various data.
  • Breaking Size Barriers: In the early 1990s, FAT was limited to the FAT16 version of the file system, which only allowed partitions up to 4 GiB in size. NTFS was designed to allow very large partition sizes, in anticipation of growing hard disk capacities, as well as the use of RAID arrays.
  • Storage Efficiency: Again, at the time that NTFS was developed, most PCs used FAT16, which wastes a significant amount of disk space due to slack (see the sketch after this list). NTFS avoids this problem by using a very different method of allocating space to files than FAT does.
  • Long File Names: NTFS allows file names to be up to 255 characters, instead of the 8+3 character limitation of conventional FAT.
  • Networking: While networking is commonplace today, it was still in its relatively early stages in the PC world when Windows NT was developed. At around that time, businesses were just beginning to understand the importance of networking, and Windows NT was given some facilities to enable networking on a larger scale. (Some of the NT features that allow networking are not strictly related to the file system, though some are.)
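
To make the slack point concrete, the sketch below compares how much space a single small file consumes under a large cluster size versus a small one. The specific cluster sizes (32 KB, typical of a large FAT16 partition, and 4 KB, a common NTFS default) are illustrative values supplied here, not figures from this article.

```python
# Illustrative slack-space comparison. Cluster sizes are typical examples:
# FAT16 on a partition near its size limit uses 32 KB clusters, while NTFS
# commonly uses 4 KB clusters.

def allocated_bytes(file_size, cluster_size):
    """Space actually consumed: file size rounded up to whole clusters."""
    clusters = -(-file_size // cluster_size)   # ceiling division
    return clusters * cluster_size

file_size = 10_000   # a 10 KB file
for label, cluster in (("FAT16, 32 KB clusters", 32 * 1024),
                       ("NTFS, 4 KB clusters", 4 * 1024)):
    used = allocated_bytes(file_size, cluster)
    print(f"{label}: {used} bytes allocated, {used - file_size} bytes of slack")
```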

Of course, there are also other advantages associated with NTFS; these are just some of the main design goals of the file system. There are also some disadvantages associated with NTFS, compared to FAT and other file systems–life is full of tradeoffs. :^) In the other pages of this section we will fully explore the various attributes of the file system, to help you decide if NTFS is right for you.

For their part, Microsoft has not let NTFS lie stagnant. Over time, new features have been added to the file system. Most recently, NTFS 5.0 was introduced as part of Windows 2000. It is similar in most respects to the NTFS used in Windows NT, but adds several new features and capabilities. Microsoft has also corrected problems with NTFS over time, helping it to become more stable, and more respected as a “serious” file system. Today, NTFS is becoming the most popular file system for new high-end PC, workstation and server implementations. NTFS shares the stage with various UNIX file systems in the world of small to moderate-sized business systems, and is becoming more popular with individual “power” users as well.


The PC Guide
Site Version: 2.2.0 – Version Date: April 17, 2001
© Copyright 1997-2004 Charles M. Kozierok. All Rights Reserved.

This is an archive of Charles M. Kozierok’s PCGuide (pcguide.com) which disappeared from the internet in 2018. We wanted to preserve Charles M. Kozierok’s knowledge about computers and are permanently hosting a selection of important pages from PCGuide.