Tag Archives: hard drive

Back Up Your Computer

A failing hard drive is one of the worst things that can happen to your computer, because of all the important documents, photos, and more saved on it. Backing up your computer is highly recommended by both Apple and Microsoft technical support, and it will ease your mind greatly. It’s also simple to do!

For Macs:

You can use either iCloud or the Mac’s built-in backup feature, Time Machine. From there, you can restore anything lost or accidentally deleted from your computer’s hard drive. This is an easy go-to without the hassle of purchasing an external hard drive. However, when Time Machine’s backup storage is full, the oldest saved files will be removed. And if your computer is ever stolen or destroyed, you’ll lose everything.

For Microsoft:

Grab yourself an external hard drive and connect it to your computer so it can access your files. Then back up your computer using File History:

  1. Start Menu
  2. Settings
  3. Update & Security
  4. Backup

For help with any hard drive problems on either a Mac or a Windows PC, call us at 1-800-620-5285. Karls Technology is a nationwide computer service company with offices in many major cities. This blog post was brought to you by our staff at the Frisco Computer Repair Service. If you need computer repair in Frisco, TX, please call or text the local office at (469) 299-9005.

Free Up Hard Drive Space

No one enjoys their computer running slower than usual. It’s a nuisance, especially if you use it for business. But there are plenty of ways to speed up your computer simply by freeing up your hard drive space.

Check how your storage is being used by going to Settings, then System, then Storage. Click on whichever drive is running out of space to see a breakdown of what it contains, and click on each item listed for more detail. That way, you can clear up hard drive space by deleting unnecessary files.

Also, use Storage Sense to free up space. This feature deletes unneeded files automatically. You can also run it manually: click “Change how we free up space automatically”, and under “Free up space now”, click Clean Now.

Don’t forget to use Disk Cleanup, which you can find by searching from the Start menu (Windows icon). Deleting temporary internet files, downloads, and the contents of the Recycle Bin through Disk Cleanup is one of the easiest ways to free up space on your hard drive.

You can find other ways to help alleviate issues with your hard drive by checking out Windows Central.

For help with space issues or any other hard drive problem, call us at 1-800-620-5285. Karls Technology is a nationwide computer service company with offices in many major cities. This blog post was brought to you by our staff at the Frisco Computer Repair Service. If you need computer repair in Frisco, TX, please call or text the local office at (469) 299-9005.

Hard Drive Air Filters

Air Filters

Nearly all hard disk drives have two air filters. One filter is called the recirculating filter, and the other is called either a barometric or breather filter. These filters are permanently sealed inside the drive and are designed never to be changed for the life of the drive, unlike many older mainframe hard disks that had changeable filters. Many mainframe drives circulate air from outside the drive through a filter that must be changed periodically.

A hard disk on a PC system does not circulate air from inside to outside the HDA, or vice versa. The recirculating filter that is permanently installed inside the HDA is designed to filter only the small particles of media scraped off the platters during head takeoffs and landings (and possibly any other small particles dislodged inside the drive). Because PC hard disk drives are permanently sealed and do not circulate outside air, they can run in extremely dirty environments (see Figure 1-6).

FIG. 1-6  Air circulation in a hard disk.

The HDA in a hard disk is sealed but not airtight. The HDA is vented through a barometric or breather filter element that allows for pressure equalization (breathing) between the inside and outside of the drive. For this reason, most hard drives are rated by the drive’s manufacturer to run in a specific range of altitudes, usually from -1,000 to +10,000 feet above sea level. In fact, some hard drives are not rated to exceed 7,000 feet while operating because the air pressure would be too low inside the drive to float the heads properly. As the environmental air pressure changes, air bleeds into or out of the drive so that internal and external pressures are identical. Although air does bleed through a vent, contamination usually is not a concern, because the barometric filter on this vent is designed to filter out all particles larger than 0.3 micron (about 12 µ-in) to meet the specifications for cleanliness inside the drive. You can see the vent holes on most drives, which are covered internally by this breather filter. Some drives use even finer-grade filter elements to keep out even smaller particles.

Hard Disk Temperature Acclimation

To allow for pressure equalization, hard drives have a filtered port to bleed air into or out of the HDA as necessary. This breathing also enables moisture to enter the drive, and after some period of time, it must be assumed that the humidity inside any hard disk is similar to that outside the drive. Humidity can become a serious problem if it is allowed to condense — and especially if the drive is powered up while this condensation is present. Most hard disk manufacturers have specified procedures for acclimating a hard drive to a new environment with different temperature and humidity ranges, especially for bringing a drive into a warmer environment in which condensation can form. This situation should be of special concern to users of laptop or portable systems with hard disks. If you leave a portable system in an automobile trunk during the winter, for example, it could be catastrophic to bring the machine inside and power it up without allowing it to acclimate to the temperature indoors.

The following text and Table 1.3 are taken from the factory packaging that Control Data Corporation (later Imprimis and eventually Seagate) used to ship its hard drives:

If you have just received or removed this unit from a climate with temperatures at or below 50°F (10°C) do not open this container until the following conditions are met, otherwise condensation could occur and damage to the device and/or media may result. Place this package in the operating environment for the time duration according to the temperature chart.

Table 1.3  Hard Disk Drive Environmental Acclimation Table.
Previous Climate Temp.     Acclimation Time
+40°F (+4°C)               13 hours
+30°F (-1°C)               15 hours
+20°F (-7°C)               16 hours
+10°F (-12°C)              17 hours
0°F (-18°C)                18 hours
-10°F (-23°C)              20 hours
-20°F (-29°C)              22 hours
-30°F (-34°C) or less      27 hours

As you can see from this table, a hard disk that has been stored in a colder-than-normal environment must be placed in the normal operating environment for a specified amount of time to allow for acclimation before it is powered on.
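If you want to automate the check, the table converts naturally into a small lookup. Below is a minimal sketch in Python; the function name is ours, the thresholds and hours come straight from Table 1.3, and the “above 50°F” shortcut follows the packaging text quoted above:

```python
# Acclimation times from Table 1.3, keyed by the previous storage
# temperature in degrees Fahrenheit.
ACCLIMATION_HOURS = [
    (40, 13), (30, 15), (20, 16), (10, 17),
    (0, 18), (-10, 20), (-20, 22),
]

def acclimation_hours(prev_temp_f: float) -> int:
    """Hours to acclimate a drive previously stored at prev_temp_f."""
    if prev_temp_f > 50:            # above 50°F (10°C), no wait is required
        return 0
    for threshold, hours in ACCLIMATION_HOURS:
        if prev_temp_f >= threshold:
            return hours
    return 27                       # -30°F (-34°C) or less

print(acclimation_hours(25))        # 16 hours (the +20°F row applies)
```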


This is an archive of Alasir Enterprise’s MicroHouse PC Hardware Library Volume I: Hard Drives by Rhett M. Hollander (alasir.com) which disappeared from the internet in 2017. We wanted to preserve Rhett M. Hollander’s knowledge about hard drives and are permanently hosting a selection of important pages from alasir.com.

Hard Drive Sector Format and Structure

The basic unit of data storage on a hard disk is the sector. The name “sector” comes from the mathematical term, which refers to a “pie-shaped” angular section of a circle, bounded on two sides by radii and the third by the perimeter of the circle. On a hard disk containing concentric circular tracks, that shape would define a sector of each track of the platter surface that it intercepted. This is what is called a sector in the hard disk world: a small segment along the length of a track. At one time, all hard disks had the same number of sectors per track, and in fact, the number of sectors in each track was fairly standard between models. Today’s advances have allowed the number of sectors per track (“SPT”) to vary significantly, as discussed here.

In the PC world, each sector of a hard disk can store 512 bytes of user data. (There are some disks where this number can be modified, but 512 is the standard, and found on virtually all hard drives by default.) Each sector, however, actually holds much more than 512 bytes of information. Additional bytes are needed for control structures and other information necessary to manage the drive, locate data and perform other “support functions”. The exact details of how a sector is structured depends on the drive model and manufacturer. However, the contents of a sector usually include the following general elements:

  • ID Information: Conventionally, space is left in each sector to identify the sector’s number and location. This is used for locating the sector on the disk. Also included in this area is status information about the sector. For example, a bit is commonly used to indicate if the sector has been marked defective and remapped.
  • Synchronization Fields: These are used internally by the drive controller to guide the read process.
  • Data: The actual data in the sector.
  • ECC: Error correcting code used to ensure data integrity.
  • Gaps: One or more “spacers” added as necessary to separate other areas of the sector, or provide time for the controller to process what it has read before reading more bits.

Note: In addition to the sectors, each containing the items above, space on each track is also used for servo information (on embedded servo drives, which is the design used by all modern units).

The amount of space taken up by each sector for overhead items is important, because the more bits used for “management”, the fewer overall that can be used for data. Therefore, hard disk manufacturers strive to reduce the amount of non-user-data information that must be stored on the disk. The term format efficiency refers to the percentage of bits on each disk that are used for data, as opposed to “other things”. The higher the format efficiency of a drive, the better (but don’t expect statistics on this for your favorite drive to be easy to find!).
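To put a number on format efficiency, here is a back-of-the-envelope calculation in Python. The overhead byte counts are invented purely for illustration; real figures vary by drive model and, as noted, are rarely published:

```python
# Hypothetical per-sector layout (all overhead byte counts are made up).
USER_DATA_BYTES = 512
OVERHEAD_BYTES = {
    "id_info": 10,
    "sync_fields": 12,
    "ecc": 40,
    "gaps": 18,
}

total = USER_DATA_BYTES + sum(OVERHEAD_BYTES.values())
efficiency = USER_DATA_BYTES / total
print(f"{USER_DATA_BYTES} data bytes of {total} total: "
      f"format efficiency ~ {efficiency:.1%}")   # ~86.5%
```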

One of the most important improvements in sector format was IBM’s creation of the No-ID Format in the mid-1990s. The idea behind this innovation is betrayed by the name: the ID fields are removed from the sector format. Instead of labeling each sector within the sector header itself, a format map is stored in memory and referenced when a sector must be located. This map also contains information about what sectors have been marked bad and relocated, where the sectors are relative to the location of servo information, and so on. Not only does this improve format efficiency, allowing up to 10% more data to be stored on the surface of each platter, it also improves performance. Since this critical positioning information is present in high-speed memory, it can be accessed much more quickly. “Detours” in chasing down remapped sectors are also eliminated.
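Conceptually, a No-ID-style format map is just an in-memory table that resolves a logical sector number to a physical location, so no ID field ever has to be read from the platter. The sketch below is purely illustrative; the layout and names are ours, not IBM’s:

```python
# Hypothetical format map: logical sector -> (track, physical slot).
# Remapped defects are resolved in memory, with no on-disk "detour".
format_map = {
    0: (0, 0),
    1: (0, 1),
    2: (0, 3),   # physical slot 2 was marked bad and remapped to slot 3
}

def locate(logical_sector: int) -> tuple[int, int]:
    return format_map[logical_sector]

print(locate(2))   # (0, 3): the remap costs a lookup, not a disk read
```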


The PC Guide
Site Version: 2.2.0 – Version Date: April 17, 2001
© Copyright 1997-2004 Charles M. Kozierok. All Rights Reserved.

This is an archive of Charles M. Kozierok’s PCGuide (pcguide.com) which disappeared from the internet in 2018. We wanted to preserve Charles M. Kozierok’s knowledge about computers and are permanently hosting a selection of important pages from PCGuide.

Hard Drive Error Correcting Code (ECC)

The basis of all error detection and correction in hard disks is the inclusion of redundant information and special hardware or software to use it. Each sector of data on the hard disk contains 512 bytes, or 4,096 bits, of user data. In addition to these bits, an additional number of bits are added to each sector for the implementation of error correcting code or ECC (sometimes also called error correction code or error correcting circuits). These bits do not contain data; rather, they contain information about the data that can be used to correct any problems encountered trying to access the real data bits.

There are several different types of error correcting codes that have been invented over the years, but the type commonly used on PCs is the Reed-Solomon algorithm, named for researchers Irving Reed and Gustave Solomon, who first discovered the general technique that the algorithm employs. Reed-Solomon codes are widely used for error detection and correction in various computing and communications media, including magnetic storage, optical storage, high-speed modems, and data transmission channels. They have been chosen because they are easier to decode than most other similar codes, can detect (and correct) large numbers of missing bits of data, and require the fewest extra ECC bits for a given number of data bits. Look in the memory section for much more general information on error detection and correction.

When a sector is written to the hard disk, the appropriate ECC codes are generated and stored in the bits reserved for them. When the sector is read back, the user data read, combined with the ECC bits, can tell the controller if any errors occurred during the read. Errors that can be corrected using the redundant information are corrected before passing the data to the rest of the system. The system can also tell when there is too much damage to the data to correct, and will issue an error notification in that event. The sophisticated firmware present in all modern drives uses ECC as part of its overall error management protocols. This is all done “on the fly” with no intervention from the user required, and no slowdown in performance even when errors are encountered and must be corrected.
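The encode-on-write, correct-on-read cycle is easy to demonstrate with the third-party Python package reedsolo. This is a sketch under the assumption that the package is installed (pip install reedsolo); note that the exact return type of decode() has varied between versions:

```python
from reedsolo import RSCodec

# RSCodec(10) appends 10 ECC bytes, enough to correct up to 5 bad bytes.
rsc = RSCodec(10)
sector = bytes(range(32))                # stand-in for 512 bytes of user data
stored = bytearray(rsc.encode(sector))   # what gets "written to disk"

stored[7] ^= 0xFF                    # simulate a corrupted byte on read
decoded = rsc.decode(stored)[0]      # recent versions return a tuple
assert bytes(decoded) == sector      # the error was corrected silently
```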

The capability of a Reed-Solomon ECC implementation is based on the number of additional ECC bits it includes. The more bits that are included for a given amount of data, the more errors that can be tolerated. There are multiple trade-offs involved in deciding how many bits of ECC information to use. Including more bits per sector of data allows for more robust error detection and correction, but means fewer sectors can be put on each track, since more of the linear distance of the track is used up with non-data bits. On the other hand, if you make the system more capable of detecting and correcting errors, you make it possible to increase areal density or make other performance improvements, which could pay back the “investment” of extra ECC bits, and then some. Another complicating factor is that the more ECC bits included, the more processing power the controller must possess to process the Reed-Solomon algorithm. The engineers who design hard disks take these various factors into account in deciding how many ECC bits to include for each sector.
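To see the trade-off in rough numbers, suppose a track holds a fixed budget of raw bytes; every extra ECC byte per sector reduces how many sectors fit. All figures below are invented for illustration:

```python
TRACK_BYTES = 300_000        # hypothetical raw capacity of one track

def sectors_per_track(ecc_bytes: int, other_overhead: int = 40,
                      data_bytes: int = 512) -> int:
    return TRACK_BYTES // (data_bytes + other_overhead + ecc_bytes)

for ecc in (20, 40, 80):
    n = sectors_per_track(ecc)
    print(f"{ecc:>2} ECC bytes/sector -> {n} sectors "
          f"-> {n * 512 // 1024} KiB of user data per track")
```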

The PC Guide
Site Version: 2.2.0 – Version Date: April 17, 2001
© Copyright 1997-2004 Charles M. Kozierok. All Rights Reserved.

This is an archive of Charles M. Kozierok’s PCGuide (pcguide.com) which disappeared from the internet in 2018. We wanted to preserve Charles M. Kozierok’s knowledge about computers and are permanently hosting a selection of important pages from PCGuide.

Hard Drive Spindle Speed

As hard disks become more advanced, virtually every component in them is required to do more and work harder, and the spindle motor is no exception. As discussed in detail here, increasing the speed at which the platters spin improves both positioning and transfer performance: the data can be read off the disk faster during sequential operations, and rotational latency–the time that the heads must wait for the correct sector number to come under the head–is also reduced, improving random operations. For this reason, there has been a push to increase the speed of the spindle motor, and more than at any other time in the past, hard disk spin speeds are changing rapidly.

At one time all PC hard disks spun at 3,600 RPM; in fact, for the first 10 years of the PC’s existence, that was all there was. One reason for this is that their designs were based on the old designs of large, pre-PC hard disks that used AC motors, and standard North American AC power is 60 Hz, or 3,600 cycles per minute: 3,600 RPM. In the early 1990s manufacturers began to realize how much performance could be improved by increasing spindle speeds. The next step up from 3,600 RPM was 4,500 RPM; 5,400 RPM soon followed and became a standard for many years. From there speeds have steadily marched upwards. Usually, faster PC hard disk speeds “debut” on SCSI drives that are used in higher-performance applications, and then filter down to IDE/ATA a few years later. At one time 7,200 RPM spindles were only found on top-of-the-line SCSI drives; they are now being used in consumer IDE/ATA disks sold at retail while SCSI has moved on to loftier heights. This table shows the most common PC spindle speeds, their associated average rotational latency, and their typical applications as of early 2000:

Spindle Speed (RPM)   Average Latency (Half Rotation) (ms)   Typical Current Applications
3,600                 8.3                                     Former standard, now obsolete
4,200                 7.1                                     Laptops
4,500                 6.7                                     IBM Microdrive, laptops
4,900                 6.1                                     Laptops
5,200                 5.8                                     Obsolete
5,400                 5.6                                     Low-end IDE/ATA, laptops
7,200                 4.2                                     High-end IDE/ATA, low-end SCSI
10,000                3.0                                     High-end SCSI
12,000                2.5                                     High-end SCSI
15,000                2.0                                     Top-of-the-line SCSI

Note: Hard disks for laptops and specialty applications come in a wide variety of spindle speeds, even beyond the several speeds listed above. I have not exhaustively researched and listed these here.
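The latency column is pure arithmetic: on average, the drive must wait half a revolution for the right sector to come around. A quick check in Python (the function name is ours) reproduces the table:

```python
def avg_rotational_latency_ms(rpm: int) -> float:
    """Average latency = time for half a revolution, in milliseconds."""
    return 0.5 * 60_000 / rpm

for rpm in (3_600, 5_400, 7_200, 10_000, 15_000):
    print(f"{rpm:>6} RPM -> {avg_rotational_latency_ms(rpm):.1f} ms")
# 3,600 -> 8.3; 5,400 -> 5.6; 7,200 -> 4.2; 10,000 -> 3.0; 15,000 -> 2.0
```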

Increasing spindle motor speed creates many design challenges, particularly aimed at keeping vibration and heat under control. As discussed here, when the motor spins faster these become more of an issue; some high-end drives have very serious heat, vibration and noise problems that require special mounting and cooling work to allow them to run without problems. To some extent, there is a trade-off between spindle speed on the one hand and heat and noise on the other. Engineers generally focus on keeping these matters under control, and usually improve them significantly after the first generation of drives at any given spindle speed. However, in some applications, using a slower and quieter drive can make sense.


The PC Guide
Site Version: 2.2.0 – Version Date: April 17, 2001
© Copyright 1997-2004 Charles M. Kozierok. All Rights Reserved.

This is an archive of Charles M. Kozierok’s PCGuide (pcguide.com) which disappeared from the internet in 2018. We wanted to preserve Charles M. Kozierok’s knowledge about computers and are permanently hosting a selection of important pages from PCGuide.

RAID Levels 03 and 30

RAID Levels 0+3 (03 or 53) and 3+0 (30)

Common Name(s): The most confusing naming of any of the RAID levels. :^) In an ideal world, this level would be named RAID 0+3 (or 03) or RAID 3+0 (30). Instead, the number 53 is often used in place of 03 for reasons I have never been able to determine, and worse, 53 is often actually implemented as 30, not 03. As always, verify the details of the implementation to be sure of what you have.

Technique(s) Used: Byte striping with dedicated parity combined with block striping.

Description: RAID 03 and 30 (though often called 53 for a reason that utterly escapes me) combine byte striping, parity and block striping to create large arrays that are conceptually difficult to understand. :^) RAID 03 is formed by putting into a RAID 3 array a number of striped RAID 0 arrays; RAID 30 is more common and is formed by striping across a number of RAID 3 sub-arrays. The combination of parity, small-block striping and large-block striping makes analyzing the theoretical performance of this level difficult. In general, it provides performance better than RAID 3 due to the addition of RAID 0 striping, but closer to RAID 3 than RAID 0 in overall speed, especially on writes. RAID 30 provides better fault tolerance and rebuild performance than RAID 03, but both depend on the “width” of the RAID 3 dimension of the drive relative to the RAID 0 dimension: the more parity drives, the lower capacity and storage efficiency, but the greater the fault tolerance. See the examples below for more explanation of this.

Most of the characteristics of RAID 0+3 and 3+0 are similar to those of RAID 0+5 and 5+0. RAID 30 and 03 tend to be better for large files than RAID 50 and 05.

Controller Requirements: Generally requires a high-end hardware controller.

Hard Disk Requirements: Number of drives must be able to be factored into two integers, one of which must be 2 or higher and the other 3 or higher (you can make a RAID 30 array from 10 drives but not 11). Minimum number of drives is six, with the maximum set by the controller.

Array Capacity: For RAID 03: (Size of Smallest Drive) * (Number of Drives In Each RAID 0 Set) * (Number of RAID 0 Sets – 1). For RAID 30: (Size of Smallest Drive) * (Number of Drives In Each RAID 3 Set – 1) * (Number of RAID 3 Sets).

For example, the capacity of a RAID 03 array made of 15 18 GB drives arranged as three five-drive RAID 0 sets would be 18 GB * 5 * (3-1) = 180 GB. The capacity of a RAID 30 array made of 21 18 GB drives arranged as three seven-drive RAID 3 sets would be 18 GB * (7-1) * 3 = 324 GB. The same 21 drives arranged as seven three-drive RAID 3 sets would have a capacity of 18 GB * (3-1) * 7 = “only” 252 GB.

Storage Efficiency: For RAID 03: ( (Number of RAID 0 Sets – 1) / Number of RAID 0 Sets). For RAID 30: ( (Number of Drives In Each RAID 3 Set – 1) / Number of Drives In Each RAID 3 Set).

Taking the same examples as above, the 15-drive RAID 03 array would have a storage efficiency of (3-1)/3 = 67%. The first RAID 30 array, configured as three seven-drive RAID 3 sets, would have a storage efficiency of (7-1)/7 = 86%, while the other RAID 30 array would have a storage efficiency of, again, (3-1)/3 = 67%.
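These formulas are easy to mistype, so here they are as Python functions (names ours) reproducing the worked examples from the text:

```python
def raid03_capacity(smallest_gb, drives_per_r0_set, num_r0_sets):
    # RAID 03: one whole RAID 0 set's worth of space goes to parity.
    return smallest_gb * drives_per_r0_set * (num_r0_sets - 1)

def raid30_capacity(smallest_gb, drives_per_r3_set, num_r3_sets):
    # RAID 30: one drive per RAID 3 set goes to parity.
    return smallest_gb * (drives_per_r3_set - 1) * num_r3_sets

def raid30_efficiency(drives_per_r3_set):
    return (drives_per_r3_set - 1) / drives_per_r3_set

print(raid03_capacity(18, 5, 3))      # 180 GB from 15 drives
print(raid30_capacity(18, 7, 3))      # 324 GB from 21 drives
print(raid30_capacity(18, 3, 7))      # 252 GB from the same 21 drives
print(f"{raid30_efficiency(7):.0%}")  # 86%
```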

Fault Tolerance: Good to very good, depending on whether it is RAID 03 or 30, and the number of parity drives relative to the total number. RAID 30 will provide better fault tolerance than RAID 03.

Consider the two different 21-drive RAID 30 arrays mentioned above: the first one (three seven-drive RAID 3 sets) has higher capacity and storage efficiency, but can only tolerate three maximum potential drive failures; the one with lower capacity and storage efficiency (seven three-drive RAID 3 sets) can handle as many as seven drive failures, if they are in different RAID 3 sets. Of course, few applications really require tolerance for seven independent drive failures! And of course, if those 21 drives were in a RAID 03 array instead, failure of a second drive after one had failed and taken down one of the RAID 0 sub-arrays would crash the entire array.

Availability: Very good to excellent.

Degradation and Rebuilding: Relatively little for RAID 30 (though more than RAID 10); can be more substantial for RAID 03.

Random Read Performance: Very good, assuming RAID 0 stripe size is reasonably large.

Random Write Performance: Fair.

Sequential Read Performance: Very good to excellent.

Sequential Write Performance: Good.

Cost: Relatively high due to requirements for a hardware controller and a large number of drives; storage efficiency is better than RAID 10 however and no worse than any other RAID levels that include redundancy.

Special Considerations: Complex and expensive to implement.

Recommended Uses: Not as widely used as many other RAID levels. Applications include data that requires the speed of RAID 0 with fault tolerance and high capacity, such as critical multimedia data and large database or file servers. Sometimes used instead of RAID 3 to increase capacity as well as performance.


The PC Guide
Site Version: 2.2.0 – Version Date: April 17, 2001
© Copyright 1997-2004 Charles M. Kozierok. All Rights Reserved.

This is an archive of Charles M. Kozierok’s PCGuide (pcguide.com) which disappeared from the internet in 2018. We wanted to preserve Charles M. Kozierok’s knowledge about computers and are permanently hosting a selection of important pages from PCGuide.

RAID Levels 01 and 10

RAID Levels 0+1 (01) and 1+0 (10)

Common Name(s): RAID 0+1, 01, 0/1, “mirrored stripes”, “mirror of stripes”; RAID 1+0, 10, 1/0, “striped mirrors”, “stripe of mirrors”. Labels are often used incorrectly; verify the details of the implementation if the distinction between 0+1 and 1+0 is important to you.

Technique(s) Used: Mirroring and striping without parity.

Description: The most popular of the multiple RAID levels, RAID 01 and 10 combine the best features of striping and mirroring to yield large arrays with high performance in most uses and superior fault tolerance. RAID 01 is a mirrored configuration of two striped sets; RAID 10 is a stripe across a number of mirrored sets. RAID 10 and 01 have been increasing dramatically in popularity as hard disks become cheaper and the four-drive minimum is legitimately seen as much less of an obstacle. RAID 10 provides better fault tolerance and rebuild performance than RAID 01. Both array types provide very good to excellent overall performance by combining the speed of RAID 0 with the redundancy of RAID 1 without requiring parity calculations.

This illustration shows how files of different sizes are distributed between the drives on an eight-disk RAID 0+1 array using a 16 kiB stripe size for the RAID 0 portion. As with the RAID 0 illustration, the red file is 4 kiB in size; the blue is 20 kiB; the green is 100 kiB; and the magenta is 500 kiB, with each vertical pixel representing 1 kiB of space. The large, patterned rectangles represent the two RAID 0 “sub-arrays”, which are mirrored using RAID 1 to create RAID 0+1. The contents of the striped sets are thus identical. The diagram for RAID 1+0 would be the same except for the groupings: instead of two large boxes dividing the drives horizontally, there would be four large boxes dividing the drives vertically into mirrored pairs. These pairs would then be striped together to form level 1+0. Contrast this diagram to the ones for RAID 0 and RAID 1.

Controller Requirements: Almost all hardware controllers will support one or the other of RAID 10 or RAID 01, but often not both. Even low-end cards will support this multiple level, usually RAID 01. High-end cards may support both 01 and 10.

Hard Disk Requirements: An even number of hard disks with a minimum of four; maximum dependent on controller. All drives should be identical.

Array Capacity: (Size of Smallest Drive) * (Number of Drives ) / 2.

Storage Efficiency: If all drives are the same size, 50%.
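As a quick sanity check of the capacity and efficiency figures (function name ours):

```python
def raid10_capacity(smallest_gb: float, num_drives: int) -> float:
    # RAID 01/10: half the drives hold mirror copies of the other half.
    assert num_drives >= 4 and num_drives % 2 == 0
    return smallest_gb * num_drives / 2

print(raid10_capacity(18, 8))   # 72.0 GB from eight 18 GB drives (50%)
```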

Fault Tolerance: Very good for RAID 01; excellent for RAID 10.

Availability: Very good for RAID 01; excellent for RAID 10.

Degradation and Rebuilding: Relatively little for RAID 10; can be more substantial for RAID 01.

Random Read Performance: Very good to excellent.

Random Write Performance: Good to very good.

Sequential Read Performance: Very good to excellent.

Sequential Write Performance: Good to very good.

Cost: Relatively high due to large number of drives required and low storage efficiency (50%).

Special Considerations: Low storage efficiency limits potential array capacity.

Recommended Uses: Applications requiring both high performance and reliability and willing to sacrifice capacity to get them. This includes enterprise servers, moderate-sized database systems and the like at the high end, but also individuals using larger IDE/ATA hard disks on the low end. Often used in place of RAID 1 or RAID 5 by those requiring higher performance; may be used instead of RAID 1 for applications requiring more capacity.


The PC Guide
Site Version: 2.2.0 – Version Date: April 17, 2001
© Copyright 1997-2004 Charles M. Kozierok. All Rights Reserved.

This is an archive of Charles M. Kozierok’s PCGuide (pcguide.com) which disappeared from the internet in 2018. We wanted to preserve Charles M. Kozierok’s knowledge about computers and are permanently hosting a selection of important pages from PCGuide.

RAID Levels Comparison

Summary Comparison of RAID Levels

Below you will find a table that summarizes the key quantitative attributes of the various RAID levels for easy comparison. For the full details on any RAID level, see its own page, accessible here. For a description of the different characteristics, see the discussion of factors differentiating RAID levels. Also be sure to read the notes that follow the table:

Comparison of all RAID Levels

Notes on the table:

  • For the number of disks, the first few valid sizes are shown; you can figure out the rest from the examples given in most cases. Minimum size is the first number shown; maximum size is normally dictated by the controller. RAID 01/10 and RAID 15/51 must have an even number of drives, with minimums of 4 and 6 respectively. RAID 03/30 and 05/50 can only have sizes that are a product of two integers, minimum 6.
  • For capacity and storage efficiency, “S” is the size of the smallest drive in the array, and “N” is the number of drives in the array. For the RAID 03 and 30, “N0” is the width of the RAID 0 dimension of the array, and “N3” is the width of the RAID 3 dimension. So a 12-disk RAID 30 array made by creating three 4-disk RAID 3 arrays and then striping them would have N3=4 and N0=3. The same applies for “N5” in the RAID 05/50 row.
  • Storage efficiency assumes all drives are of identical size. If this is not the case, the universal computation (array capacity divided by the sum of all drive sizes) must be used.
  • Performance rankings are approximations and to some extent, reflect my personal opinions. Please don’t over-emphasize a “half-star” difference between two scores!
  • Cost is relative and approximate, of course. In the real world it will depend on many factors; the dollar signs are just intended to provide some perspective.

The PC Guide
Site Version: 2.2.0 – Version Date: April 17, 2001
© Copyright 1997-2004 Charles M. Kozierok. All Rights Reserved.

This is an archive of Charles M. Kozierok’s PCGuide (pcguide.com) which disappeared from the internet in 2018. We wanted to preserve Charles M. Kozierok’s knowledge about computers and are permanently hosting a selection of important pages from PCGuide.

Access Time

One of the most commonly quoted performance statistics for CD-ROM drives is access time. As with most commonly used performance metrics, it is abused at least as much as it is used properly. Curiously, access time is used extensively in quoting the specs of CD-ROM drives, but is virtually never mentioned with respect to hard disks. With hard drives it is much more common to see quotes of the other metrics that are combined to make up access time. (This despite how similarly the devices access data…).

Access time is meant to represent the amount of time it takes from the start of a random read operation until the data starts to be read from the disk. It is a composite metric, really being composed of the following other metrics:

  • Speed Change Time: For CLV drives, the time for the spindle motor to change to the correct speed.
  • Seek Time: The time for the drive to move the heads to the right location on the disk.
  • Latency: The amount of time for the disk to turn so that the right information spins under the read head.

Although access time is made up of the time for these separate operations, this doesn’t mean that you can simply add these other measurements together to get access time. The relationship is more complex than this because some of these items can happen in parallel. For example, there is no reason that the speed of the spindle motor couldn’t be varied at the same time that the heads are moved (and in fact this is done).
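A toy model makes the point about overlap: if the spindle speed change and the seek proceed in parallel, the total is governed by the longer of the two plus the latency, not by their sum. The numbers here are made up purely for illustration:

```python
# Illustrative only: assumes speed change and seek fully overlap,
# with rotational latency paid after both complete.
def access_time_ms(speed_change: float, seek: float, latency: float) -> float:
    return max(speed_change, seek) + latency

print(access_time_ms(speed_change=40.0, seek=80.0, latency=15.0))  # 95.0
# A naive sum would give 135.0 ms, overstating the estimate.
```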

The access time of CD-ROM drives in general depends on the rated “X” speed of the drive, although this can and does vary widely from drive to drive. The oldest 1X drives generally had truly abysmal access times, often exceeding 300 ms; as drives have become faster and faster, access times have dropped, and now are below 100 ms on the top-end drives.

Note that while faster “X” rated drives have lower access times, this is due to improvements that reduce the three metrics listed above that contribute to access time. Some of it (latency for example) is reduced when you spin the disk at 8X instead of 1X. On the other hand, seek time improvement is independent of the spin speed of the disk, which is why some 8X drives will have much better access time performance than other 8X drives, for example.

Even the fastest CD-ROM drives are significantly slower than even the slowest hard disks; access time on a high-end CD-ROM is still going to be four or five times higher than that of a high-end disk drive. This is just the nature of the device; CD-ROM drives are based on technology originally developed for playing audio CDs, where random seek performance is very unimportant. CDs do not have cylinders like a hard disk platter, but rather a long continuous spiral of bits, which makes finding specific pieces of data much more difficult.

Even though access time is important in some ways, its importance is generally vastly overstated by the people who sell CD-ROM drives. Random access performance is one component of overall CD-ROM performance, and how essential it is depends on what you are doing with your drive. However, even if high random access performance is important, you must bear in mind that there are far fewer random reads done, in general, to a CD than to a hard disk.

Another point is that manufacturers are not always consistent in how they define their averages. Some companies may use different testing methods, and some may even exaggerate in order to make their drives look much better than they actually are. A small difference in quoted access time is not usually going to make any noticeable real-world difference. For most purposes, a drive with a 100 ms access time is going to behave the same as one with a 110 ms access time. It’s usually better at that point to differentiate them based on other performance characteristics or features (or price).


The PC Guide
Site Version: 2.2.0 – Version Date: April 17, 2001
© Copyright 1997-2004 Charles M. Kozierok. All Rights Reserved.

This is an archive of Charles M. Kozierok’s PCGuide (pcguide.com) which disappeared from the internet in 2018. We wanted to preserve Charles M. Kozierok’s knowledge about computers and are permanently hosting a selection of important pages from PCGuide.