
How a CPU Works? The Nitty Gritty

Started by March 21, 2007 03:02 AM
25 comments, last by python_regious 17 years, 5 months ago
I'm not entirely sure, but I know it involves quartz
Quote: Original post by nobodynews
I was thinking about this recently while some people in a group I'm in are taking a digital circuitry class which involved making a 4-bit ALU (exactly as you describe). I knew that a, say, 64-bit adder built as just a series of 1-bit full adders would have too long of a propagation delay. Just now, I came up with a method for adding that acts somewhat like a Fast Fourier Transform algorithm and I wanted to know if it was similar to what is done in real processors.

Say you have two 64-bit numbers A and B and you wish to generate the sum C. Break apart A, B, and C in half into A1, A2, B1, B2, C1, and C2 where X1 is the lower order bits and X2 is the higher order bits.

A1 + B1 = C1 w/ or w/o Carry
A2 + B2 + Carry = C2a
A2 + B2 = C2b

Once C2a and C2b are calculated, it simply becomes a matter of outputting the version of C2 that corresponds to there being a carry or not.

This method can be broken down further. Divide A1, B1, and C1 in half and perform the same technique as before. Continue until you reach a size where the propagation delay is negligible.

My question is: is this how it's done, or is there a better trick?
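The recursive scheme described above can be sketched in Python (the function name `cs_add` is mine; note that in real hardware the two upper-half additions run in parallel, while here they are simply computed one after the other):

```python
def cs_add(a, b, n, cin=0):
    """Add two n-bit values carry-select style; return (sum, carry_out)."""
    if n == 1:
        # Base case: an ordinary 1-bit full adder.
        total = a + b + cin
        return total & 1, total >> 1
    half = n // 2
    mask = (1 << half) - 1
    # Lower halves: A1 + B1 = C1, with or without carry out.
    lo, c = cs_add(a & mask, b & mask, half, cin)
    # Upper halves, both cases precomputed (in hardware, in parallel):
    hi0, c0 = cs_add(a >> half, b >> half, half, 0)  # C2b: no carry in
    hi1, c1 = cs_add(a >> half, b >> half, half, 1)  # C2a: carry in
    # The lower half's carry out acts as the multiplexer select line.
    hi, cout = (hi1, c1) if c else (hi0, c0)
    return (hi << half) | lo, cout
```

For example, `cs_add(0xFFFF, 0x0001, 16)` wraps around to a sum of 0 with carry out 1, exactly as a 16-bit adder would.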

Interesting. I came up with this idea three days ago.

I've sketched out a couple of circuits which (purport to) implement the scheme. I am no kind of expert on digital schematics. There may well be errors.

[Schematic: 1-bit carry-select adder]
In this one, A and B come in from the left, Cin comes in from the top, the sum goes out to the right, and Cout goes out to the bottom. In this 1-bit adder case, there is no speed-up because we still need to propagate the carry in order to select which sum we want, and it happens that we use the same circuitry (two ands and an or) as we'd use in a conventional full adder.
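The 1-bit case can be written out at the gate level (a sketch of my reading of the schematic, not a transcription of it): both sums and both carries are precomputed, and Cin drives a 2:1 mux built from the two ANDs and an OR mentioned above.

```python
def full_adder_select(a, b, cin):
    """1-bit carry-select adder: precompute both cases, mux on cin."""
    # Case cin = 0:
    s0, c0 = a ^ b, a & b
    # Case cin = 1:
    s1 = 1 ^ (a ^ b)
    c1 = (a ^ b) | (a & b)
    # 2:1 multiplexer -- two ANDs and an OR, as noted in the post:
    s = (cin & s1) | ((1 - cin) & s0)
    cout = (cin & c1) | ((1 - cin) & c0)
    return s, cout
```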

[Schematic: 8-bit carry-select adder built from three 74283s and a 74157]
This one uses three 74283 4-bit full adders and a 74157 multiplexer (it chooses between two sets of 4 wires). Both the 74283 and 74157 have a gate delay of 4. The upshot of this is that the total gate delay of the circuit is 8. The gate delay of two 74283's placed in series (i.e. the conventional ripple carry adder) is also 8. Result: No speed up.
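The delay arithmetic can be laid out explicitly (the gate delays of 4 for the 74283 and 74157 are taken from the post above; they are abstract gate-delay counts, not nanoseconds):

```python
ADDER_DELAY = 4  # 74283 4-bit full adder, per the post
MUX_DELAY = 4    # 74157 quad 2:1 multiplexer, per the post

# Ripple carry: the upper 74283 must wait for the lower one's carry out.
ripple_8bit = ADDER_DELAY + ADDER_DELAY

# Carry select: all three 74283s settle in parallel, then the mux picks.
carry_select_8bit = ADDER_DELAY + MUX_DELAY

assert ripple_8bit == carry_select_8bit == 8  # no speed-up, as observed
```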

I conjecture that the minimum gate delay of any n-bit multiplexer is equal to the minimum gate delay of any n-bit full adder.
Quote: Original post by trzy
Zmurf: To understand this, you'll want to study digital logic.


AH YES, that was the word I was looking for! It escaped my mind for some reason when I wanted to make this post.

Digital logic is what I want to find information about; thanks everyone for all the links and info. I'll most likely be studying at ANU (an Australian university) next year, and I'm definitely going to be looking into some of these courses. I think most of them are prescribed anyway for Software Engineering. I'd take Computer Science, but I just can't be bothered going for the UAI of 98 ...

If it's possible for us to conceive it, then it must be possible.
For a slightly different perspective, consider reading Feynman's Lectures on Computation, which covers things more from the physics-of-information point of view.
Quote: Original post by Endar
Implementing a processor? That would have been useful.


Since that is something you're going to do regularly? ;-)

Actually, we did get to implement a CPU in a language called Kreds (I heard it's become visual since). That was pretty interesting :) But it's more fun than it is useful.
You're not alive until you've written microcode using toggle switches for an ALU implemented in discrete 74-series logic :-)

I think it's useful, because it gives you an understanding of subjects such as write hazards, race conditions, instruction latencies, etc.
enum Bool { True, False, FileNotFound };
Quote: Original post by hplus0603
You're not alive until you've written microcode using toggle switches for an ALU implemented in discrete 74-series logic :-)

I think it's useful, because it gives you an understanding of subjects such as write hazards, race conditions, instruction latencies, etc.


:-( I've only entered microcode into the instruction cache of some god-awful 8-bit CPU with hex switches.

You may find actual IC design interesting too (I know I did), though I can't recommend any books on it, as I can't find mine. Though, I suppose it all depends on whether you're interested in the layouts of circuit elements (no, that nice circuit diagram you have doesn't translate directly to what's on an IC), deep sub-micron effects and so on.
If at first you don't succeed, redefine success.

This topic is closed to new replies.
