"More memory, more memory, I don’t have enough memory!" Today, memory is one of the
most popular, easy, and inexpensive ways to upgrade a computer. As the computer’s CPU
works, it stores information in the computer’s memory. The rule of thumb is that the more
memory a computer has, the faster it will operate.
To identify memory within a computer, look for several thin rows of small circuit boards
sitting vertically, packed tightly together near the processor. Figure 1.25 shows where memory
is located in a system.
Parity checking is a rudimentary error-checking scheme in which the chips on a module are
lined up and their bits grouped into sets, numbered starting at 0: all of the bits numbered n,
one from each chip, form one set. If even parity is used, for example, the 1 bits in the set are
counted up. If the total comes out even, the parity bit is set to 0, because the count is already
even; if it comes out odd, the parity bit is set to 1 to even up the count. You can see that this
is effective only for determining that there was an error in the set of bits (it catches only an
odd number of flipped bits), and there is no indication of where the error is or how to fix it.
This is error checking, not error correction. When an error is found, the entire system can lock
up and display a memory parity error. Enough of these errors and you need to replace the
memory. If that doesn’t fix the problem, good luck.
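The even-parity arithmetic described above can be sketched in a few lines. This is a minimal illustration of the counting rule, not the circuitry an actual memory controller uses; the function names are made up for this example.

```python
# A minimal sketch of even-parity checking on one set of bits:
# detection only, with no way to locate or repair the bad bit.

def even_parity_bit(data_bits):
    """Return the parity bit that makes the total count of 1s even."""
    return sum(data_bits) % 2    # 0 if already even, 1 to even up an odd count

def parity_ok(data_bits, parity_bit):
    """True if the stored parity bit still matches the data."""
    return (sum(data_bits) + parity_bit) % 2 == 0

byte = [1, 0, 1, 1, 0, 0, 1, 0]      # four 1 bits: the count is already even
p = even_parity_bit(byte)            # so the parity bit is 0
print(parity_ok(byte, p))            # True

byte[3] ^= 1                         # simulate a single-bit memory error
print(parity_ok(byte, p))            # False: error detected, but not located
```

Note that flipping any second bit would make the check pass again, which is exactly why parity can only flag an odd number of bad bits.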
In the early days of personal computing, almost all memory was parity-based. Compaq was
one of the first manufacturers to employ non-parity RAM in their mainstream systems. As
quality has increased over the years, parity checking in the RAM subsystem has become rarer.
If parity checking is not supported, a module will generally have fewer chips, usually one
fewer per column of RAM.
The next step in the evolution of memory error detection is known as Error Checking and
Correcting (ECC). If memory supports ECC, check bits are generated and stored with the
data. An algorithm is performed on the data and its check bits whenever the memory is
accessed. If the result of the algorithm is all zeros, then the data is deemed valid and processing
continues. ECC can detect single- and double-bit errors and actually correct single-bit errors.
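To make the detect-and-correct idea concrete, here is a small sketch using a classic Hamming(7,4) code, where the check-bit algorithm yields a syndrome that is zero for valid data and otherwise points at the bad bit. This is the textbook scheme, not the exact code any particular ECC module implements, and the function names are illustrative.

```python
# Single-bit error correction with a Hamming(7,4) code.
# Codeword positions 1..7 hold: p1 p2 d1 p3 d2 d3 d4, where each
# parity bit covers the positions whose index has that bit set.

def hamming_encode(d):
    """Encode 4 data bits (0/1 list) into a 7-bit codeword."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4    # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4    # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4    # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming_decode(c):
    """Return (corrected data bits, 1-based error position or 0)."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # recheck positions 1, 3, 5, 7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # recheck positions 2, 3, 6, 7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]   # recheck positions 4, 5, 6, 7
    syndrome = s1 + 2 * s2 + 4 * s3  # zero means the data is deemed valid
    if syndrome:
        c[syndrome - 1] ^= 1         # flip the bad bit to correct it
    return [c[2], c[4], c[5], c[6]], syndrome

codeword = hamming_encode([1, 0, 1, 1])
codeword[4] ^= 1                     # simulate a single-bit memory error
data, pos = hamming_decode(codeword)
print(data, pos)                     # recovers [1, 0, 1, 1]; error was at position 5
```

The "all zeros means valid" check in the text corresponds to the syndrome being zero here; a nonzero syndrome both detects the error and names its position, which is what lets ECC correct single-bit errors rather than merely report them.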
In the following sections, we’ll outline the four major types of computer memory—DRAM,
SRAM, ROM, and CMOS—as well as memory packaging.
DRAM
DRAM is dynamic random access memory. (This is what most people are talking about when they
mention RAM.) When you expand the memory in a computer, you are adding DRAM chips. You
use DRAM to expand the memory in the computer because it’s cheaper than any other type of
memory. Dynamic RAM chips are cheaper to manufacture than other types because they are less
complex. Dynamic refers to the memory chips’ need for a constant update signal (also called a
refresh signal) in order to keep the information that is written there. If this signal is not received
every so often, the information will cease to exist. Currently, there are four popular implementations
of DRAM: SDRAM, DDR, DDR2, and Rambus (RDRAM).
SDRAM
The original form of DRAM had an asynchronous interface, meaning that it derived its clocking
from the actual inbound signal, paying attention to the electrical aspects of the waveform, such
as pulse width, to set its own clock to synchronize on the fly with the transmitter. Synchronous
DRAM (SDRAM) shares a common clock signal with the transmitter of the data. The computer’s
system bus clock provides the common signal that all SDRAM components use for each
step to be performed.
This characteristic ties SDRAM to the speed of the FSB and the processor, eliminating the
need to configure the CPU to wait for the memory to catch up. Every time the system clock
ticks, one bit of data can be transmitted per data pin, limiting the bit rate per pin of SDRAM
to the corresponding numerical value of the clock’s frequency. With today’s processors interfacing
with memory using a parallel data-bus width of 8 bytes (hence the term 64-bit processor),
a 100MHz clock signal produces 800MBps. That’s megabytes per second, not megabits.
Such memory is referred to as PC100, because throughput is easily computed as eight times
the rating.
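The PC100 arithmetic above is simple enough to check directly: one transfer per clock tick, 8 bytes per transfer. The helper name below is made up for this back-of-the-envelope sketch.

```python
# Peak SDR SDRAM throughput: one operation per clock cycle,
# 8 bytes (64 bits) moved per operation.

def sdr_throughput_mbps(clock_mhz, bus_bytes=8):
    """Peak throughput in megabytes per second."""
    return clock_mhz * bus_bytes

print(sdr_throughput_mbps(100))      # 800 MBps, sold as PC100
print(sdr_throughput_mbps(133))      # 1064 MBps, sold as PC133
```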
DDR
Double Data Rate (DDR) SDRAM earns its name by doubling the transfer rate of ordinary
SDRAM by double-pumping the data, which means transferring it on both the rising and
falling edges of the clock signal. This obtains twice the transfer rate at the same FSB clock
frequency. Increasing the clock frequency is what creates heating issues with newer components,
so keeping the clock rate the same is an advantage. The same 100MHz clock gives a DDR
SDRAM system the effective throughput of a 200MHz clock compared to a single data rate
(SDR) SDRAM system.
You can use this new frequency in your computations or simply remember to double your
results for SDR calculations, producing DDR results. For example, with a 100MHz clock, two
operations per cycle, and 8 bytes transferred per operation, the data rate is 1600MBps. Now
that throughput is becoming a bit trickier to compute, the industry uses this final figure to
name the memory modules instead of the frequency, which was used with SDR. This makes
the result seem many times better, while it’s really only twice as good. In this example, the
module is referred to as PC1600. The chips that go into making PC1600 modules are named
after the perceived double-clock frequency: DDR-200.
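The DDR calculation is the SDR one with a factor of 2 for the two clock edges. Again, the helper name is just for this sketch.

```python
# Peak DDR SDRAM throughput: two operations per clock cycle
# (rising and falling edges), 8 bytes per operation.

def ddr_throughput_mbps(clock_mhz, bus_bytes=8):
    return clock_mhz * 2 * bus_bytes

print(ddr_throughput_mbps(100))      # 1600 MBps: PC1600 modules, DDR-200 chips
```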
DDR2
Think of the 2 in DDR2 as yet another multiplier of 2 in the SDRAM technology, using a
lower peak voltage to keep power consumption down (1.8V vs. the 2.5V of DDR and others).
Still double-pumping, DDR2, like DDR, uses both sweeps of the clock signal for data transfer.
Internally, DDR2 further splits each clock pulse in two, doubling the number of operations it
can perform per FSB clock cycle. Through enhancements in the electrical interface and buffers,
as well as through adding off-chip drivers, DDR2 nominally produces four times what SDR
is capable of producing.
However, DDR2 suffers from enough additional latency over DDR that identical throughput
ratings find DDR2 at a disadvantage. Once frequencies develop for DDR2 that do not exist for
DDR, however, DDR2 could become the clear SDRAM leader, although DDR3 is nearing
release. Continuing the preceding example and initially ignoring the latency issue, DDR2 using
a 100MHz clock transfers data in four operations per cycle and still 8 bytes per operation, for
a total of 3200MBps.
Just like DDR, DDR2 names its chips based on the perceived frequency. In this case, you
would be using DDR2-400 chips. DDR2 carries on the final-result method for naming modules
but cannot simply call them PC3200 modules because those already exist in the DDR world.
DDR2 calls these modules PC2-3200. The latency consideration, however, means that DDR’s
PC3200 offering is preferable to DDR2’s PC2-3200. After reading the "RDRAM" section, consult
Table 1.2, which summarizes how each technology in the "DRAM" section would achieve
a transfer rate of 3200MBps, even if only theoretically. For example, SDR PC400 doesn’t exist.
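The whole running example reduces to one formula: peak rate equals clock frequency times operations per cycle times 8 bytes. The sketch below reproduces the 3200MBps figure for each technology, including the purely theoretical SDR PC400; the function name is invented for this illustration.

```python
# Generalized peak-throughput formula for the SDRAM family:
# clock (MHz) x operations per clock cycle x bus width in bytes.

def peak_mbps(clock_mhz, ops_per_cycle, bus_bytes=8):
    return clock_mhz * ops_per_cycle * bus_bytes

print(peak_mbps(400, 1))   # SDR:  theoretical PC400          -> 3200
print(peak_mbps(200, 2))   # DDR:  DDR-400 chips, PC3200      -> 3200
print(peak_mbps(100, 4))   # DDR2: DDR2-400 chips, PC2-3200   -> 3200
```

Note how each doubling of operations per cycle halves the clock needed for the same 3200MBps, which is the pattern Table 1.2 summarizes.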