
RAID Levels 03 and 30

RAID Levels 0+3 (03 or 53) and 3+0 (30)

Common Name(s): The most confusing naming of any of the RAID levels. :^) In an ideal world, this level would be named RAID 0+3 (or 03) or RAID 3+0 (30). Instead, the number 53 is often used in place of 03 for reasons I have never been able to determine, and worse, 53 is often actually implemented as 30, not 03. As always, verify the details of the implementation to be sure of what you have.

Technique(s) Used: Byte striping with dedicated parity combined with block striping.

Description: RAID 03 and 30 (though often called 53 for a reason that utterly escapes me) combine byte striping, parity and block striping to create large arrays that are conceptually difficult to understand. :^) RAID 03 is formed by putting into a RAID 3 array a number of striped RAID 0 arrays; RAID 30 is more common and is formed by striping across a number of RAID 3 sub-arrays. The combination of parity, small-block striping and large-block striping makes analyzing the theoretical performance of this level difficult. In general, it provides performance better than RAID 3 due to the addition of RAID 0 striping, but closer to RAID 3 than RAID 0 in overall speed, especially on writes. RAID 30 provides better fault tolerance and rebuild performance than RAID 03, but both depend on the “width” of the RAID 3 dimension of the array relative to the RAID 0 dimension: the more parity drives, the lower the capacity and storage efficiency, but the greater the fault tolerance. See the examples below for more explanation of this.

Most of the characteristics of RAID 0+3 and 3+0 are similar to those of RAID 0+5 and 5+0. RAID 30 and 03 tend to be better for large files than RAID 50 and 05.

Controller Requirements: Generally requires a high-end hardware controller.

Hard Disk Requirements: Number of drives must be able to be factored into two integers, one of which must be 2 or higher and the other 3 or higher (you can make a RAID 30 array from 10 drives but not 11). Minimum number of drives is six, with the maximum set by the controller.

Array Capacity: For RAID 03: (Size of Smallest Drive) * (Number of Drives In Each RAID 0 Set) * (Number of RAID 0 Sets – 1). For RAID 30: (Size of Smallest Drive) * (Number of Drives In Each RAID 3 Set – 1) * (Number of RAID 3 Sets).

For example, the capacity of a RAID 03 array made of 15 18 GB drives arranged as three five-drive RAID 0 sets would be 18 GB * 5 * (3-1) = 180 GB. The capacity of a RAID 30 array made of 21 18 GB drives arranged as three seven-drive RAID 3 sets would be 18 GB * (7-1) * 3 = 324 GB. The same 21 drives arranged as seven three-drive RAID 3 sets would have a capacity of 18 GB * (3-1) * 7 = “only” 252 GB.
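The capacity arithmetic above can be sketched in a few lines of Python; the drive counts and sizes are just the article's own examples:

```python
def raid03_capacity(smallest_gb, drives_per_raid0_set, num_raid0_sets):
    # RAID 03: a RAID 3 array whose "members" are RAID 0 sets,
    # so one whole set's worth of capacity goes to parity
    return smallest_gb * drives_per_raid0_set * (num_raid0_sets - 1)

def raid30_capacity(smallest_gb, drives_per_raid3_set, num_raid3_sets):
    # RAID 30: a RAID 0 stripe across RAID 3 sets,
    # so each set gives up one drive to parity
    return smallest_gb * (drives_per_raid3_set - 1) * num_raid3_sets

print(raid03_capacity(18, 5, 3))   # 15 drives: three 5-drive RAID 0 sets -> 180
print(raid30_capacity(18, 7, 3))   # 21 drives: three 7-drive RAID 3 sets -> 324
print(raid30_capacity(18, 3, 7))   # 21 drives: seven 3-drive RAID 3 sets -> 252
```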

Storage Efficiency: For RAID 03: ( (Number of RAID 0 Sets – 1) / Number of RAID 0 Sets). For RAID 30: ( (Number of Drives In Each RAID 3 Set – 1) / Number of Drives In Each RAID 3 Set).

Taking the same examples as above, the 15-drive RAID 03 array would have a storage efficiency of (3-1)/3 = 67%. The first RAID 30 array, configured as three seven-drive RAID 3 sets, would have a storage efficiency of (7-1)/7 = 86%, while the other RAID 30 array would have a storage efficiency of, again, (3-1)/3 = 67%.
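The efficiency formulas reduce to simple ratios; a sketch using the same configurations:

```python
def raid03_efficiency(num_raid0_sets):
    # one RAID 0 set out of N holds parity, so (N-1)/N of raw space is usable
    return (num_raid0_sets - 1) / num_raid0_sets

def raid30_efficiency(drives_per_raid3_set):
    # one drive per RAID 3 set holds parity
    return (drives_per_raid3_set - 1) / drives_per_raid3_set

print(round(raid03_efficiency(3) * 100))  # three RAID 0 sets -> 67 (%)
print(round(raid30_efficiency(7) * 100))  # seven-drive RAID 3 sets -> 86 (%)
```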

Fault Tolerance: Good to very good, depending on whether it is RAID 03 or 30, and the number of parity drives relative to the total number. RAID 30 will provide better fault tolerance than RAID 03.

Consider the two different 21-drive RAID 30 arrays mentioned above: the first one (three seven-drive RAID 3 sets) has higher capacity and storage efficiency, but can only tolerate three maximum potential drive failures; the one with lower capacity and storage efficiency (seven three-drive RAID 3 sets) can handle as many as seven drive failures, if they are in different RAID 3 sets. Of course few applications really require tolerance for seven independent drive failures! And of course, if those 21 drives were in a RAID 03 array instead, failure of a second drive after one had failed and taken down one of the RAID 0 sub-arrays would crash the entire array.
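The survival rule for RAID 30 can be checked with a small sketch: the array survives as long as no single RAID 3 set loses more than one drive.

```python
from collections import Counter

def raid30_survives(failed_drives):
    # failed_drives: list of (set_index, drive_index) pairs;
    # each RAID 3 set can rebuild from exactly one lost drive
    per_set = Counter(set_idx for set_idx, _ in failed_drives)
    return all(count <= 1 for count in per_set.values())

# seven 3-drive RAID 3 sets: one failure in every set is survivable
print(raid30_survives([(s, 0) for s in range(7)]))   # True
# two failures in the same set kill that set, and with it the whole stripe
print(raid30_survives([(0, 0), (0, 1)]))             # False
```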

Availability: Very good to excellent.

Degradation and Rebuilding: Relatively little for RAID 30 (though more than RAID 10); can be more substantial for RAID 03.

Random Read Performance: Very good, assuming RAID 0 stripe size is reasonably large.

Random Write Performance: Fair.

Sequential Read Performance: Very good to excellent.

Sequential Write Performance: Good.

Cost: Relatively high due to the requirements for a hardware controller and a large number of drives; however, storage efficiency is better than RAID 10, and no worse than that of other RAID levels that include redundancy.

Special Considerations: Complex and expensive to implement.

Recommended Uses: Not as widely used as many other RAID levels. Applications include data that requires the speed of RAID 0 with fault tolerance and high capacity, such as critical multimedia data and large database or file servers. Sometimes used instead of RAID 3 to increase capacity as well as performance.


The PC Guide
Site Version: 2.2.0 – Version Date: April 17, 2001
© Copyright 1997-2004 Charles M. Kozierok. All Rights Reserved.

This is an archive of Charles M. Kozierok’s PCGuide (pcguide.com) which disappeared from the internet in 2018. We wanted to preserve Charles M. Kozierok’s knowledge about computers and are permanently hosting a selection of important pages from PCGuide.

RAID Levels 01 and 10

RAID Levels 0+1 (01) and 1+0 (10)

Common Name(s): RAID 0+1, 01, 0/1, “mirrored stripes”, “mirror of stripes”; RAID 1+0, 10, 1/0, “striped mirrors”, “stripe of mirrors”. Labels are often used incorrectly; verify the details of the implementation if the distinction between 0+1 and 1+0 is important to you.

Technique(s) Used: Mirroring and striping without parity.

Description: The most popular of the multiple RAID levels, RAID 01 and 10 combine the best features of striping and mirroring to yield large arrays with high performance in most uses and superior fault tolerance. RAID 01 is a mirrored configuration of two striped sets; RAID 10 is a stripe across a number of mirrored sets. RAID 10 and 01 have been increasing dramatically in popularity as hard disks become cheaper and the four-drive minimum is legitimately seen as much less of an obstacle. RAID 10 provides better fault tolerance and rebuild performance than RAID 01. Both array types provide very good to excellent overall performance by combining the speed of RAID 0 with the redundancy of RAID 1 without requiring parity calculations.

RAID Levels 01 and 10

This illustration shows how files of different sizes are distributed between the drives on an eight-disk RAID 0+1 array using a 16 kiB stripe size for the RAID 0 portion. As with the RAID 0 illustration, the red file is 4 kiB in size; the blue is 20 kiB; the green is 100 kiB; and the magenta is 500 kiB, with each vertical pixel representing 1 kiB of space. The large, patterned rectangles represent the two RAID 0 “sub-arrays”, which are mirrored using RAID 1 to create RAID 0+1. The contents of the striped sets are thus identical. The diagram for RAID 1+0 would be the same except for the groupings: instead of two large boxes dividing the drives horizontally, there would be four large boxes dividing the drives vertically into mirrored pairs. These pairs would then be striped together to form level 1+0. Contrast this diagram to the ones for RAID 0 and RAID 1.

Controller Requirements: Almost all hardware controllers will support one or the other of RAID 10 or RAID 01, but often not both. Even low-end cards will support this multiple level, usually RAID 01. High-end cards may support both 01 and 10.

Hard Disk Requirements: An even number of hard disks with a minimum of four; maximum dependent on controller. All drives should be identical.

Array Capacity: (Size of Smallest Drive) * (Number of Drives ) / 2.

Storage Efficiency: If all drives are the same size, 50%.
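Capacity and efficiency here are simpler than for the parity levels; a minimal sketch of the capacity rule:

```python
def raid10_capacity(smallest_drive, num_drives):
    # mirroring halves usable capacity; drives must come in pairs
    assert num_drives >= 4 and num_drives % 2 == 0
    return smallest_drive * num_drives // 2

print(raid10_capacity(18, 8))  # eight 18 GB drives -> 72 GB usable (50%)
```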

Fault Tolerance: Very good for RAID 01; excellent for RAID 10.

Availability: Very good for RAID 01; excellent for RAID 10.

Degradation and Rebuilding: Relatively little for RAID 10; can be more substantial for RAID 01.

Random Read Performance: Very good to excellent.

Random Write Performance: Good to very good.

Sequential Read Performance: Very good to excellent.

Sequential Write Performance: Good to very good.

Cost: Relatively high due to large number of drives required and low storage efficiency (50%).

Special Considerations: Low storage efficiency limits potential array capacity.

Recommended Uses: Applications requiring both high performance and reliability and willing to sacrifice capacity to get them. This includes enterprise servers, moderate-sized database systems and the like at the high end, but also individuals using larger IDE/ATA hard disks on the low end. Often used in place of RAID 1 or RAID 5 by those requiring higher performance; may be used instead of RAID 1 for applications requiring more capacity.



RAID Levels Comparison

Summary Comparison of RAID Levels

Below you will find a table that summarizes the key quantitative attributes of the various RAID levels for easy comparison. For the full details on any RAID level, see its own page, accessible here. For a description of the different characteristics, see the discussion of factors differentiating RAID levels. Also be sure to read the notes that follow the table:

Comparison of all RAID Levels

Notes on the table:

  • For the number of disks, the first few valid sizes are shown; you can figure out the rest from the examples given in most cases. Minimum size is the first number shown; maximum size is normally dictated by the controller. RAID 01/10 must have an even number of drives, minimum 4; RAID 15/51 must also have an even number, minimum 6. RAID 03/30 and 05/50 can only have sizes that are a product of two integers, minimum 6.
  • For capacity and storage efficiency, “S” is the size of the smallest drive in the array, and “N” is the number of drives in the array. For the RAID 03 and 30, “N0” is the width of the RAID 0 dimension of the array, and “N3” is the width of the RAID 3 dimension. So a 12-disk RAID 30 array made by creating three 4-disk RAID 3 arrays and then striping them would have N3=4 and N0=3. The same applies for “N5” in the RAID 05/50 row.
  • Storage efficiency assumes all drives are of identical size. If this is not the case, the universal computation (array capacity divided by the sum of all drive sizes) must be used.
  • Performance rankings are approximations and to some extent, reflect my personal opinions. Please don’t over-emphasize a “half-star” difference between two scores!
  • Cost is relative and approximate, of course. In the real world it will depend on many factors; the dollar signs are just intended to provide some perspective.
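The “universal computation” for storage efficiency mentioned in the notes (array capacity divided by the sum of all drive sizes) can be sketched as follows; the mixed-size mirror is a hypothetical example, not from the table:

```python
def storage_efficiency(array_capacity, drive_sizes):
    # works for any RAID level and any mix of drive sizes
    return array_capacity / sum(drive_sizes)

# hypothetical: a RAID 1 mirror of an 18 GB and a 36 GB drive can only
# use 18 GB, so efficiency drops below the usual 50%
print(round(storage_efficiency(18, [18, 36]) * 100))  # 33 (%)
```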


Access Time

One of the most commonly quoted performance statistics for CD-ROM drives is access time. As with most commonly used performance metrics, it is abused at least as much as it is used properly. Curiously, access time is used extensively in quoting the specs of CD-ROM drives, but is virtually never mentioned with respect to hard disks. With hard drives it is much more common to see quotes of the other metrics that are combined to make up access time. (This despite how similarly the devices access data…).

Access time is meant to represent the amount of time it takes from the start of a random read operation until the data starts to be read from the disk. It is a composite metric, really being composed of the following other metrics:

  • Speed Change Time: For CLV drives, the time for the spindle motor to change to the correct speed.
  • Seek Time: The time for the drive to move the heads to the right location on the disk.
  • Latency: The amount of time for the disk to turn so that the right information spins under the read head.

Although access time is made up of the time for these separate operations, this doesn’t mean that you can simply add these other measurements together to get access time. The relationship is more complex than this because some of these items can happen in parallel. For example, there is no reason that the speed of the spindle motor couldn’t be varied at the same time that the heads are moved (and in fact this is done).
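That overlap can be modeled with a simple sketch; the millisecond figures below are invented for illustration, not drive specs:

```python
def access_time_ms(speed_change_ms, seek_ms, latency_ms):
    # simplified model: the spindle speed change happens in parallel with
    # the seek, so only the longer of the two counts; latency then follows
    return max(speed_change_ms, seek_ms) + latency_ms

# naive addition would give 80 + 90 + 30 = 200 ms; overlapping the speed
# change with the seek saves 80 ms in this made-up example
print(access_time_ms(80, 90, 30))  # 120
```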

The access time of CD-ROM drives in general depends on the rated “X” speed of the drive, although this can and does vary widely from drive to drive. The oldest 1X drives generally had truly abysmal access times, often exceeding 300 ms; as drives have become faster and faster, access times have dropped, and now are below 100 ms on the top-end drives.

Note that while faster “X” rated drives have lower access times, this is due to improvements that reduce the three metrics listed above that contribute to access time. Some of it (latency for example) is reduced when you spin the disk at 8X instead of 1X. On the other hand, seek time improvement is independent of the spin speed of the disk, which is why some 8X drives will have much better access time performance than other 8X drives, for example.
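The latency part of that improvement is mechanical: average rotational latency is the time for half a revolution, so spinning a given track eight times as fast cuts it by a factor of eight. The RPM figure below is illustrative, not a drive spec:

```python
def avg_latency_ms(rpm):
    # average rotational latency = time for half a revolution, in ms
    return 0.5 * 60_000 / rpm

print(round(avg_latency_ms(500), 1))      # ~1X CLV near the inner tracks: 60.0
print(round(avg_latency_ms(500 * 8), 1))  # same track at 8X spin: 7.5
```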

Even the fastest CD-ROM drives are significantly slower than even the slowest hard disks; access time on a high-end CD-ROM is still going to be four or five times higher than that of a high-end disk drive. This is just the nature of the device; CD-ROM drives are based on technology originally developed for playing audio CDs, where random seek performance is very unimportant. CDs do not have cylinders like a hard disk platter, but rather a long continuous spiral of bits, which makes finding specific pieces of data much more difficult.

Even though access time is important in some ways, its importance is generally vastly overstated by the people that sell CD-ROM drives. Random access performance is one component of overall CD-ROM performance, and how essential it is depends on what you are doing with your drive. However, even if high random access performance is important, you must bear in mind that there are far fewer random reads done, in general, to a CD than to a hard disk.

Another point is that manufacturers are not always consistent in how they define their averages. Some companies may use different testing methods, and some may even exaggerate in order to make their drives look much better than they actually are. A small difference in quoted access time is not usually going to make any noticeable real-world difference. For most purposes, a drive with a 100 ms access time is going to behave the same as one with a 110 ms access time. It’s usually better at that point to differentiate them based on other performance characteristics or features (or price).



101-Key “Enhanced” Keyboard Layout

In 1986, IBM introduced the IBM PC/AT Model 339. Included in this last AT-family system was the new Enhanced 101-key keyboard. Little did IBM realize at the time, perhaps, but this 101-key keyboard would become the de-facto standard for keyboards for the next decade and beyond. Even today’s Windows keyboards and fancy variants with extra buttons and keys are based on this layout.

101-key "enhanced" keyboard

The “Enhanced” keyboard was electrically the same as the 84-key AT keyboard, but featured a radically redesigned key layout. The major changes included these:

  • Dedicated Cursor and Navigation Keys: Finally, separate keys were provided for cursor control and navigation. This enabled the numeric keyboard to be used along with the cursor and navigation keys. The cursor keys were also made into an “inverted-T” configuration for easier switching between “Up” and “Down” with a single finger.
  • Relocated Function Keys: The function keys were moved from the left-hand side of the keyboard to a row along the top, and divided into groups of four for convenience. While many users had been asking for this, they found that sometimes the grass really isn’t greener on the other side of the fence, as I discuss below…
  • Relocated <Esc> and <Caps Lock> Keys: The <Esc> key was moved back to the left-hand side of the keyboard, though it was placed up above the main typing area. The <Caps Lock> key was moved above the left <Shift> key.
  • Extra Function Keys: Two additional function keys, <F11> and <F12> were added to the keyboard.
  • Extra <Ctrl> and <Alt> Keys: Additional <Ctrl> and <Alt> keys were added on the right side of the <Space Bar>.
  • Extra Numeric Keypad Keys: The numeric keypad was fitted with an additional <Enter> key, as well as the “/” (divide operator) that had been missing up to that point.

Compared to the 84-key keyboard, the Enhanced keyboard layout was perceived by most users to be far superior. It was an immediate hit despite its one obvious inferiority to the AT keyboard: the smaller main <Enter> key. (The <Space Bar> is also a bit smaller.) Obviously, some of the changes made with the Enhanced keyboard are undeniable improvements. However, others are, in this author’s opinion, good examples of the old warning: “be careful what you ask for”…

Many PC users, after having complained for years about changes they wanted made to the PC keyboard layout, found they weren’t all that happy with them once their wish was granted! Having never complained about the issues that were changed with the Enhanced keyboard myself, I found some of the changes quite frustrating–and I later discovered that I was not alone. My personal beefs with this layout involve the locations of the following:

  • Left <Ctrl> Key: With the older layout, the left-hand <Ctrl> key is readily accessible, and it is used by computer enthusiasts dozens, if not hundreds, of times a day. (For example, cut, copy and paste are universal functions with standard Windows short-cuts of <Ctrl>+X, <Ctrl>+C and <Ctrl>+V respectively.) The new design puts the <Ctrl> key below the main keyboard, requiring a move of the entire left hand to reach it. And while having the <Caps Lock> key above the left <Shift> may be of use to some, I use the <Caps Lock> key maybe once or twice a month; how about you? :^) Overall, a really bad swap in my opinion.
  • Function Keys: Having the function keys on the left-side of the keyboard makes them easy to reach, particularly in combination with the <Shift>, <Ctrl> and <Alt> keys. Again, these are frequently used keys which are hard to reach when above the keyboard; most combinations that used to be simple with one hand now require two. For example, a command I use frequently when writing is <Ctrl>+<F6>, the Microsoft Word (and FrontPage) function to switch between documents. Compare the motion required to type this combination on an Enhanced keyboard to what was required with the function keys on the left side and the <Ctrl> key above the <Shift> key. Also consider <Alt>+<F4>, the standard combination to close a Windows application… and so it goes.
    The real irony, of course, is that the “on-screen labels corresponding to function keys”, which is what caused people to want the function keys along the top of the keyboard, disappeared from software applications many years ago!
  • <Esc> Key: This key is still a reach with the Enhanced design. Compare how often you use the <Esc> key in a day to the number of times you type a backwards quote or tilde! Again, a poorly-considered decision.

Despite these limitations, the 101-key keyboard remains the standard (actually, the 104-key Windows keyboard is the standard now, but the two layouts are nearly identical). Of course, countless variations of the basic design exist. A common modification is to enlarge the <Enter> key back to its “84-key layout size”, and squeeze the backslash / vertical-pipe key between the “=/+” key and the <Backspace>. An improvement in my estimation!

As for me, rather than curse the darkness, I lit a candle: I use a 124-key Gateway Anykey programmable keyboard with function keys both above and to the left of the main typing area, and a large main <Enter> key. I relocate the left <Ctrl> to where it belongs and the <Caps Lock> key somewhere out of the way where it belongs. :^) I swap the <Esc> key and the backquote/tilde key as well. Ah, freedom. :^)



Cyrix 5×86 CPU

Cyrix 5×86 (“M1sc”)

Despite having the same name as AMD’s 5×86 processor, the Cyrix 5×86 is a totally different animal. While AMD designed its 5×86 by further increasing the clock on the 486DX4, Cyrix took the opposite approach by modifying its M1 processor core (used for the 6×86 processor) to make a “lite” version to work on 486 motherboards. As such, the Cyrix 5×86 in some ways resembles a Pentium OverDrive (which is a Pentium core modified to work in a 486 motherboard) internally more than it resembles the AMD 5×86. This chip is probably the hardest to classify as either fourth or fifth generation.

The 5×86 employs several architectural features that are normally found only in fifth-generation designs. The pipeline is extended to six stages, and the internal architecture is 64 bits wide. It has a larger (16 KB) primary cache than the 486DX4 chip. It uses branch prediction to improve performance.

The 5×86 was available in two speeds, 100 and 120 MHz. The 5×86-120 is the most powerful chip that will run in a 486 motherboard–it offers performance comparable to a Pentium 90 or 100. The 5×86 is still a clock-tripled design, so it runs in 33 and 40 MHz motherboards. (The 100 MHz version will actually run at 50×2 as well, but normally was run at 33 MHz.) It is a 3 volt design and is intended for a Socket 3 motherboard. It will run in an earlier 486 socket if a voltage regulator is used. I have heard that some motherboards will not run this chip properly so you may need to check with Cyrix if trying to use this chip in an older board. These chips have been discontinued by Cyrix but are still good performers, and for those with a compatible motherboard, as good as you can get. Unfortunately, they are extremely difficult to find now.

Look here for an explanation of the categories in the processor summary table below, including links to more detailed explanations.

General Information

Manufacturer

Cyrix

Family Name

5×86

Code name

"M1sc"

Processor Generation

Fourth

Motherboard Generation

Fourth

Version

5×86-100

5×86-120

Introduced

1996?

Variants and Licensed Equivalents

Speed Specifications

Memory Bus Speed (MHz)

33 / 50

40

Processor Clock Multiplier

3.0 / 2.0

3.0

Processor Speed (MHz)

100

120

"P" Rating

P75

P90

Benchmarks

iCOMP Rating

~610

~735

iCOMP 2.0 Rating

~67

~81

Norton SI

264

316

Norton SI32

~16

19

CPUmark32

~150

~180

Physical Characteristics

Process Technology

CMOS

Circuit Size (microns)

0.65

Die Size (mm^2)

144

Transistors (millions)

2.0

Voltage, Power and Cooling

External or I/O Voltage (V)

3.45

Internal or Core Voltage (V)

3.45

Power Management

SMM

Cooling Requirements

Active heat sink

Packaging

Packaging Style

168-Pin PGA

Motherboard Interface

Socket 3; or 168-Pin Socket, Socket 1, Socket 2 (with voltage regulator)

External Architecture

Data Bus Width (bits)

32

Maximum Data Bus Bandwidth (Mbytes/sec)

127.2

152.6

Address Bus Width (bits)

32

Maximum Addressable Memory

4 GB

Level 2 Cache Type

Motherboard

Level 2 Cache Size

Usually 256 KB

Level 2 Cache Bus Speed

Same as Memory Bus

Multiprocessing

No

Internal Architecture

Instruction Set

x86

MMX Support

No

Processor Modes

Real, Protected, Virtual Real

x86 Execution Method

Native

Internal Components

Register Size (bits)

32

Pipeline Depth (stages)

6

Level 1 Cache Size

16 KB Unified

Level 1 Cache Mapping

4-Way Set Associative

Level 1 Cache Write Policy

Write-Through, Write-Back

Integer Units

1

Floating Point Unit / Math Coprocessor

Integrated

Instruction Decoders

1

Branch Prediction Buffer Size / Accuracy

!? entries / !? %

Write Buffers

!?

Performance Enhancing Features
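As a consistency check on the bandwidth figures in the table above (assuming, as the numbers suggest, one 4-byte transfer per bus clock, with the result expressed in binary megabytes):

```python
def max_bus_bandwidth_mb(bus_mhz, bus_width_bytes=4):
    # one transfer per bus clock; result in binary MB (2**20 bytes) per second
    return bus_mhz * 1_000_000 * bus_width_bytes / 2**20

print(round(max_bus_bandwidth_mb(100 / 3), 1))  # 33.3 MHz bus -> 127.2
print(round(max_bus_bandwidth_mb(40.0), 1))     # 40 MHz bus   -> 152.6
```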



History of NTFS

Overview and History of NTFS

In the early 1990s, Microsoft set out to create a high-quality, high-performance, reliable and secure operating system. The goal of this operating system was to allow Microsoft to get a foothold in the lucrative business and corporate market–at the time, Microsoft’s operating systems were MS-DOS and Windows 3.x, neither of which had the power or features needed for Microsoft to take on UNIX or other “serious” operating systems. One of the biggest weaknesses of MS-DOS and Windows 3.x was that they relied on the FAT file system. FAT provided few of the features needed for data storage and management in a high-end, networked, corporate environment. To avoid crippling Windows NT, Microsoft had to create for it a new file system that was not based on FAT. The result was the New Technology File System or NTFS.

It is often said (and sometimes by me, I must admit) that NTFS was “built from the ground up”. That’s not strictly an accurate statement, however. NTFS is definitely “new” from the standpoint that it is not based on the old FAT file system. Microsoft did design it based on an analysis of the needs of its new operating system, and not based on something else that they were attempting to maintain compatibility with, for example. However, NTFS is not entirely new, because some of its concepts were based on another file system that Microsoft was involved with creating: HPFS.

Before there was Windows NT, there was OS/2. OS/2 was a joint project of Microsoft and IBM in the late 1980s; the two companies were trying to create the next big success in the world of graphical operating systems. They succeeded, to some degree, depending on how you are measuring success. :^) OS/2 had some significant technical accomplishments, but suffered from marketing and support issues. Eventually, Microsoft and IBM began to quarrel, and Microsoft broke from the project and started to work on Windows NT. When they did this, they borrowed many key concepts from OS/2’s native file system, HPFS, in creating NTFS.

NTFS was designed to meet a number of specific goals. In no particular order, the most important of these are:

  • Reliability: One important characteristic of a “serious” file system is that it must be able to recover from problems without data loss resulting. NTFS implements specific features to allow important transactions to be completed as an integral whole, to avoid data loss, and to improve fault tolerance.
  • Security and Access Control: A major weakness of the FAT file system is that it includes no built-in facilities for controlling access to folders or files on a hard disk. Without this control, it is nearly impossible to implement applications and networks that require security and the ability to manage who can read or write various data.
  • Breaking Size Barriers: In the early 1990s, FAT was limited to the FAT16 version of the file system, which only allowed partitions up to 4 GiB in size. NTFS was designed to allow very large partition sizes, in anticipation of growing hard disk capacities, as well as the use of RAID arrays.
  • Storage Efficiency: Again, at the time that NTFS was developed, most PCs used FAT16, which wastes significant disk space due to slack. NTFS avoids this problem by using a very different method of allocating space to files than FAT does.
  • Long File Names: NTFS allows file names to be up to 255 characters, instead of the 8+3 character limitation of conventional FAT.
  • Networking: While networking is commonplace today, it was still in its relatively early stages in the PC world when Windows NT was developed. At around that time, businesses were just beginning to understand the importance of networking, and Windows NT was given some facilities to enable networking on a larger scale. (Some of the NT features that allow networking are not strictly related to the file system, though some are.)
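The slack problem mentioned in the storage-efficiency goal is easy to quantify: a file wastes whatever is left of its last, partially filled cluster, and FAT16 on a large partition uses large clusters (32 KiB on a 2 GiB partition). The 4 KiB cluster size below is meant only as a representative small-cluster comparison:

```python
def slack_bytes(file_size, cluster_size):
    # space wasted in the file's last, partially filled cluster
    if file_size == 0:
        return 0
    return (-file_size) % cluster_size

# a 1 KiB file on a FAT16 partition with 32 KiB clusters wastes 31 KiB
print(slack_bytes(1024, 32 * 1024))  # 31744
# the same file with small 4 KiB clusters wastes only 3 KiB
print(slack_bytes(1024, 4 * 1024))   # 3072
```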

Of course, there are also other advantages associated with NTFS; these are just some of the main design goals of the file system. There are also some disadvantages associated with NTFS, compared to FAT and other file systems–life is full of tradeoffs. :^) In the other pages of this section we will fully explore the various attributes of the file system, to help you decide if NTFS is right for you.

For their part, Microsoft has not let NTFS lie stagnant. Over time, new features have been added to the file system. Most recently, NTFS 5.0 was introduced as part of Windows 2000. It is similar in most respects to the NTFS used in Windows NT, but adds several new features and capabilities. Microsoft has also corrected problems with NTFS over time, helping it to become more stable, and more respected as a “serious” file system. Today, NTFS is becoming the most popular file system for new high-end PC, workstation and server implementations. NTFS shares the stage with various UNIX file systems in the world of small to moderate-sized business systems, and is becoming more popular with individual “power” users as well.



Commodore Plus/4

Plus/4 – 121 colors in 1984!

Model:           Commodore Plus/4 

Manufactured:    1984 

Processor:       7501/8501 ~0.88 MHz when the raster beam is on the
visible screen and ~1.77 MHz the rest of the time. (The TED chip
generates the processor frequency.) The resulting speed is roughly
equal to the vic-20's: a PAL vic-20 is faster than this NTSC machine,
but a PAL Plus/4 is just a little faster than a PAL vic-20.

Memory:          64 KB (60671 bytes available in Basic)

Graphics:        TED 7360 (Text Editing Device 7360 HMOS)
          
Hi-Resolution:   320x200                 
                 Colors: 121 (All can be visible at the same time)     
                 Hardware reverse display of characters     
                 Hardware blinking
                 Hardware cursor
                 Smooth scrolling
                 Multicolor 160x200
                 (No sprites)

Sound:           TED (7360)
                 2 voices (two tones or one tone + noise)

"OS":            Basic 3.5

Built-in         Tedmon
software:        "3-plus-1" = word processor, spreadsheet, database and
                 graphs.
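Where does the 121-color figure come from? The TED offers 16 base colors, each at 8 luminance levels: 16 x 8 = 128 combinations, but black looks the same at all 8 luminances, so there are 128 - 7 = 121 distinct colors. Here is a minimal Basic 3.5 sketch that cycles the border through every combination; the COLOR statement's parameter order (source, color 1-16, luminance 0-7) and source number 4 for the border are as I remember them from the Basic 3.5 manual, so verify against your own machine:

```
10 REM STEP THE BORDER THROUGH ALL 16 COLORS AT ALL 8 LUMINANCES
20 FOR L=0 TO 7
30 FOR C=1 TO 16
40 COLOR 4,C,L : REM SOURCE 4 = BORDER
50 FOR D=1 TO 200 : NEXT D : REM SHORT DELAY
60 NEXT C
70 NEXT L
```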

History and thoughts

The Plus/4 was called the 264 as a prototype (January 1984) and was supposed to have customer-selectable built-in software. But Commodore decided to ship all units with the same built-in software and renamed the computer the Plus/4 (June 1984). (The reason for the long delay was that Commodore’s factories were busy producing C64s.) There were other versions available of (more or less) the same “TED” computer: The C16 looks like a black vic-20 with white keys but is the same computer as the Plus/4, only with no built-in software (except for Tedmon), just 16KB of RAM, and no RS232. It looks like a vic-20 because Commodore intended it as a replacement for the vic-20 when that machine was cancelled in 1984. There was also a C116 with the same features as the C16, but it looked like a Plus/4 with rubber keys. About 400,000 Plus/4s were made (compared to 2.5 million vic-20s and something like 15 million C64s).

There was really only one reason the Plus/4 wasn’t more popular: the C64! Commodore was, in a way, competing with itself. Let’s list the benefits of the two computers:

 Plus/4:
   * 121 colors (compared to c64's 16)
   * Very powerful basic
   * Built in machine language monitor
   * A little faster
   * Built in software
   * Lower price

 C64:
   * Sprite graphics
   * Better sound
   * Lots of software available
   * All your friends have one
   * Your old vic-20 tape recorder will work without an adapter
   * Your old vic-20 joysticks will work without adapters

Well, which would you choose?

Well, Basic 3.5 is quite powerful. It has commands for graphics, sound, disk operations, error handling and so on. I counted 111 commands/functions (compared to 70 for the C64). On the C64, POKE and PEEK are the only way to access graphics, sprites and sound. And with most of those registers being two bytes wide and the chips a bit complex to set up, that is quite troublesome and time-consuming in Basic. Drawing graphics with lines, circles etc. using only Basic on the C64 is just about impossible (or would take a year!) On the other hand, if Basic programming doesn’t interest you, and you would rather copy pirated games from your friends, then the C64 is your computer… I mean back then! 😉
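To give a feel for what this means in practice, here is a minimal Basic 3.5 sketch using the built-in graphics and sound statements. The exact parameter meanings (color source, coordinates, SOUND duration in 1/60-second units) are from memory of the Basic 3.5 manual, so treat it as an illustration rather than a tested listing:

```
10 GRAPHIC 1,1            : REM HI-RES MODE, CLEAR SCREEN
20 CIRCLE 1,160,100,60    : REM CIRCLE AROUND THE SCREEN CENTER
30 DRAW 1,0,0 TO 319,199  : REM DIAGONAL LINE ACROSS THE SCREEN
40 VOL 8 : SOUND 1,600,60 : REM VOICE 1 TONE FOR ABOUT A SECOND
50 GETKEY A$ : GRAPHIC 0  : REM WAIT FOR A KEY, BACK TO TEXT MODE
```

On the C64 the same effect would take dozens of POKEs to the VIC-II and SID registers, plus bitmap math in Basic for the circle and line.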

There were more reasons than just the C64 for the Plus/4’s lack of success. There are many theories about this on the internet, so instead of just repeating them, I would like to contribute another one: the strange names! Why on earth name the same line of computers so differently? The Plus/4, C16 and C116 are more compatible with each other than a vic-20 with and without memory expansion! And they even look different! I would have made two different computers: “TED-64” (the Plus/4) and “TED-16” (the C16, but in a Plus/4 case).

They would also have normal joystick and tape ports (or adapters included with the computer). The 3-plus-1 software could have been left out and sold separately on a cartridge to bring down the price of the computer; it could still have been bundled with the computer at a reduced price if you wanted it. This way the original 264 idea of customer-selectable included software could have been realized, with all the selectable software on different cartridges.


My impressions

I have only just got the Plus/4, but my impression of it so far is very positive. It’s little and neat. I like the Basic and the graphics. The computer has a very “Commodore” feeling. I would say it’s like a mix of the vic-20 (for the simplicity, the single graphics/sound chip and the default colors), the C64 (for the similar graphics) and the C128 (for the powerful Basic and the similarities with the 128’s VDC chip features, like blinking). The Plus/4 also has the Esc codes that the C128 has, and the machine language monitor is almost the same. At the same time, the Plus/4 is simple and easy to get an overview of, like the vic-20. I think it’s a well designed computer. The only thing I don’t like about the Plus/4 is the lack of a Restore key, but there are work-arounds (Run/Stop+Reset, for example). I have written some more tips about this in the manuals below.

The same people designing the Plus/4 (except for one) later designed the C128.

If you plan to get a Plus/4, you might want to know that the 1541 disk drive works, and the video cable is the same as for the C64 (at least for composite video and sound, which my cable uses). But for joysticks you need to make a little adapter, and also for the tape recorder (unless it is the black type that has a built-in adapter).

My Plus/4 is an NTSC machine with a 110V power supply, and living in Sweden I needed to buy a 220->110V converter. The Plus/4, unlike the C64, does not need the mains frequency from the PSU, so a simple converter that outputs 110V at 50Hz is fine. My Plus/4 has a square power plug; others have a round one, in which case a European C64 power supply could be used instead. There are of course PAL Plus/4s as well, but I got mine for free and I like the NTSC display too. No BIG border around the screen like on all PAL Commodores. The NTSC Plus/4 also has slightly faster key repeat, so it feels a little faster even though the PAL version runs faster. BUT - there is MUCH more PAL software available, it seems…


This is an archive of pug510w’s Dator Museum which disappeared from the internet in 2017. We wanted to preserve the knowledge about the Commodore Plus/4 and are permanently hosting a copy of Dator Museum.

Commodore Plus/4 Service Manual

Convert FAT Disks to NTFS

This article describes how to convert FAT disks to NTFS. See the Terms sidebar for definitions of FAT, FAT32 and NTFS. Before you decide which file system to use, you should understand the benefits and limitations of each of them.

Changing a volume’s existing file system can be time-consuming, so choose the file system that best suits your long-term needs. If you decide to use a different file system, you must back up your data and then reformat the volume using the new file system. However, you can convert a FAT or FAT32 volume to an NTFS volume without formatting the volume, though it is still a good idea to back up your data before you convert.

Note  Some older programs may not run on an NTFS volume, so you should research the current requirements for your software before converting.

Choosing Between NTFS, FAT, and FAT32

You can choose between three file systems for disk partitions on a computer running Windows XP: NTFS, FAT, and FAT32. NTFS is the recommended file system because it is more powerful than FAT or FAT32, and includes features required for hosting Active Directory as well as other important security features. You can use features such as Active Directory and domain-based security only by choosing NTFS as your file system.

Converting to NTFS Using the Setup Program

The Setup program makes it easy to convert your partition to the new version of NTFS, even if it used FAT or FAT32 before. This kind of conversion keeps your files intact (unlike formatting a partition).

Setup begins by checking the existing file system. If it is NTFS, conversion is not necessary. If it is FAT or FAT32, Setup gives you the choice of converting to NTFS. If you don’t need to keep your files intact and you have a FAT or FAT32 partition, it is recommended that you format the partition with NTFS rather than converting from FAT or FAT32. (Formatting a partition erases all data on the partition and allows you to start fresh with a clean drive.) However, it is still advantageous to use NTFS, regardless of whether the partition was formatted with NTFS or converted.

Converting to NTFS Using Convert.exe

A partition can also be converted after Setup by using Convert.exe, as described below.

To find out more information about Convert.exe

1. After completing Setup, click Start, click Run, type cmd, and then press ENTER.
2. In the command window, type help convert, and then press ENTER.

Information about converting FAT volumes to NTFS is then displayed, as summarized below.
Converting FAT volumes to NTFS

To convert a volume to NTFS from the command prompt

1. Open Command Prompt: click Start, point to All Programs, point to Accessories, and then click Command Prompt.
2. In the command prompt window, type: convert drive_letter: /fs:ntfs

For example, typing convert D: /fs:ntfs would convert drive D: to the NTFS format, leaving its files intact. You can convert FAT or FAT32 volumes to NTFS with this command.

Important  Once you convert a drive or partition to NTFS, you cannot simply convert it back to FAT or FAT32. You will need to reformat the drive or partition, which will erase all data on it, including programs and personal files.

Commodore Computer History Archive

As I train our new computer systems engineers I have found that few of them know anything about the Commodore home computer systems. In the early 1990s, when I first started getting into electronics and computers, Commodores were everywhere. By the mid 90s they were ancient relics. I always had five or six laying around the shop; most were given to me by customers for spare parts. The majority of them had no issues, they were just outdated. For fun, and to train new guys, we repaired many of them over the years. As time went on, fewer and fewer of our computer systems engineers had any experience with Commodores. Today, virtually no one under 35 knows what a Commodore computer system is.

The MOS 6502 chip

The reason a 15-year-old could work on a Commodore was that the systems were all based around simple CPUs. The MOS 6502 was very easy to diagnose and repair. All I needed to work on the circuits was a simple analog volt meter and a reference voltage. Digital voltmeters were very expensive in the 1990s; I don’t think we had one until the late 90s.

Most prominent home computer and video game systems of the 1980s had a MOS 6502 or a derivative inside them. These derivative chips were called the 650x, or the 6502 family of chips. The Commodore VIC-20, Commodore 64, Apple II, Atari 800, Atari 2600 and NES all had 6502 or 650x chips in them. Almost every major system made from the mid 1970s to the mid 1980s had a connection to the 6502 family. By the late 1980s, newer and faster chips from Motorola and Intel replaced the MOS 6502 family as the go-to processor.

Commodore History Disappearing

While I train new field engineers here at Karls Technology I have been looking online for reference materials about Commodores. Back in the 1990s, reference material was available at the library, in hobby magazines and on BBSs. Today, I find very little good reference material about Commodores, MOS or the 6502 family of chips. Previously, you could find people who worked for MOS, Commodore or GMT around the internet. As those engineers of yesterday pass away, their knowledge of the history of computing leaves us.

Before the days of blogs, much of the early computing history was recorded on early engineers’ personal websites. Those websites have gone offline or were hosted by companies that no longer exist.

Computer History Archive

Because this knowledge is leaving us, and much of it exists only offline, we decided to start archiving Commodore, 6502-family and other early computer history information. We will scan and post below any knowledge we find in offline repositories. In addition, any historical personal websites about early computer history from yesteryear will be archived here. Our goal is to document as much early computer history as possible.

Text Editing Device TED 7360 Datasheet

Commodore Plus/4 Specifications

Commodore Plus/4 Service Manual

Commodore Semiconductor Group’s Superfund Site from the EPA

Designing Calm Technology by Mark Weiser, Xerox, 1995.