
How Does a CPU Work? The Nitty Gritty

Started by March 21, 2007 03:02 AM
25 comments, last by python_regious 17 years, 7 months ago
I understand the basics of how a CPU works, like the theory behind it. But I'm finding it hard to get information on how the electrical pulses running through the buses allow the processor to execute instructions, how the computer caches data, and how it stores information on the hard drive. Can anyone help me find some information on the actual science behind the computer? Thanks in advance.
If it's possible for us to conceive it, then it must be possible.
I recommend reading Computer Organization and Design. It may not be the cheapest option, but it's the standard textbook on the matter and serves as the intro to the nitty gritty for many thousands of students every year. I loved the book, and I considered it and the course built around it to be one of the most significant parts of my computer science education.

On the other side, you should probably get a good operating systems textbook as well. No recommendations on that one, I'm afraid -- I wasn't too fond of the text we used.
SlimDX | Ventspace Blog | Twitter | Diverse teams make better games. I am currently hiring capable C++ engine developers in Baltimore, MD.
If that book is too much for you, this book is a lot smaller and covers pretty much the same ground.
[size="2"]Don't talk about writing games, don't write design docs, don't spend your time on web boards. Sit in your house write 20 games when you complete them you will either want to do it the rest of your life or not * Andre Lamothe
When we studied it last year I didn't use a textbook... the course was structured so that we started with simple logic gates and how they're built from transistors, then looked at how those could be combined to make things like binary adders and registers.

Divide and conquer is the approach to take. A 32-bit adder is made up of 1-bit adders. RAM is made of millions of individual bit-stores. And so on.
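
For illustration, here's a minimal Verilog sketch of exactly that divide-and-conquer idea: a 1-bit full adder, then a wider adder built by chaining copies of it and passing the carry along. The module and signal names are invented for this example.

// A 1-bit full adder: the basic building block.
module full_adder(input a, input b, input cin,
                  output sum, output cout);
    assign sum  = a ^ b ^ cin;                  // XOR of the three inputs
    assign cout = (a & b) | (cin & (a ^ b));    // carry passed to the next stage
endmodule

// A ripple-carry adder built by chaining WIDTH copies of the 1-bit adder.
// Set WIDTH to 32 for the 32-bit case; the structure doesn't change.
module ripple_adder #(parameter WIDTH = 8)
                    (input  [WIDTH-1:0] a,
                     input  [WIDTH-1:0] b,
                     input              cin,
                     output [WIDTH-1:0] sum,
                     output             cout);
    wire [WIDTH:0] carry;
    assign carry[0] = cin;
    genvar i;
    generate
        for (i = 0; i < WIDTH; i = i + 1) begin : stage
            full_adder fa(a[i], b[i], carry[i], sum[i], carry[i+1]);
        end
    endgenerate
    assign cout = carry[WIDTH];
endmodule

Chaining stages like this gives the simple ripple-carry form; it works, but as a later reply in this thread points out, it's not how the fastest adders are built.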

Richard "Superpig" Fine - saving pigs from untimely fates - Microsoft DirectX MVP 2006/2007/2008/2009
"Shaders are not meant to do everything. Of course you can try to use it for everything, but it's like playing football using cabbage." - MickeyMouse

Quote: Original post by Promit
On the other side, you should probably get a good operating systems textbook as well. No recommendations on that one, I'm afraid -- I wasn't too fond of the text we used.

Modern Operating Systems? This was the book we used in my operating systems course, and it was pretty good. But I got more out of actually implementing a simple OS in the lab rather than just reading about one. Same goes for the architecture course. Once you have to implement a simple single- and multi-cycle pipelined processor in Verilog, you really get down and dirty with the details.
Quote: Original post by Zipster
But I got more out of actually implementing a simple OS in the lab rather than just reading about one. Same goes for the architecture course. Once you have to implement a simple single- and multi-cycle pipelined processor in Verilog, you really get down and dirty with the details.


That's just not fair. I did both the Architecture and Operating Systems subjects (architecture I unfortunately didn't pay any attention in, but I was able to pass), but we didn't do anything approaching implementing a processor in Verilog.

In the architecture subject, all we did was MIPS stuff. And the labs had no relation to the lectures. That was annoying. Perhaps if they had, I would have paid more attention to the lectures.

In OS, we did a few relevant things, like process scheduling and creating a basic file system and file allocation table. The hardest was writing a basic Linux shell, and, to be honest, I didn't learn anything in that prac which I have used since.

Implementing a processor? That would have been useful.

Anyway, that was kind of OT, but, yeah, it may not be quite what you're looking for, but "Modern Operating Systems" was a good book.
[size="2"][size=2]Mort, Duke of Sto Helit: NON TIMETIS MESSOR -- Don't Fear The Reaper
Quote: Original post by superpig
[...]Divide and conquer is the approach to take. A 32-bit adder is made up of 1-bit adders. RAM is made of millions of individual bit-stores. And so on.
While it's possible to build an N-bit adder out of 1-bit full adders, it's generally not the best way to do it: the carry has to ripple through every stage, so the latency is too high (real designs use faster schemes such as carry-lookahead).
Anyway, another proper name for this is "digital logic". Once you understand transistors, it's not too big a leap to logic gates and flip-flops (the simplest unit of memory), and once you have those, it's almost trivial to make an ALU (arithmetic logic unit, which does addition and so on). For something simple, you'd just compute every possible type of operation (addition, shift, etc.) and then have a multiplexer that picks which value to output based on the instruction bits in the opcode/microcode.
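
As a rough illustration of that mux-based approach, here's a toy Verilog ALU (the names, widths, and opcode encoding are all invented for the example): every operation is computed in parallel, and the opcode bits select which result is driven out.

// Toy ALU: compute all operations at once, let a multiplexer
// (the case statement) pick the result named by the opcode bits.
module tiny_alu(input      [7:0] a,
                input      [7:0] b,
                input      [1:0] op,      // 2 opcode bits -> 4 operations
                output reg [7:0] result);
    always @(*) begin
        case (op)
            2'b00: result = a + b;        // add
            2'b01: result = a - b;        // subtract
            2'b10: result = a & b;        // bitwise AND
            2'b11: result = a << 1;       // shift left by one
        endcase
    end
endmodule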

Implementing pipelining and caching is an advanced topic because of all the exceptions and corner cases (making sure you flush things at the correct time, and so on), but really it's an extension of the above: when the logic unit needs to read memory, you'd send the read to the cache subsystem instead of the memory interface subsystem. When the cache gets the request, it would compare its tag registers (the "addresses cached here" bookkeeping) against the requested address, and then either return the value from its cache registers or forward the request to the memory subsystem to read a full cache line into one of its register sets.
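
To make the tag-compare step concrete, here's a heavily simplified Verilog sketch of a single cache line's hit logic (all names and widths are invented, and a real cache has multiple lines/ways and also handles writes, refills, and invalidation). The stored tag register is the "address cached here" bookkeeping; if it matches the upper bits of the requested address and the line is valid, that's a hit, otherwise the request goes on to the memory subsystem.

// One cache line's hit logic: the line holds data plus a tag register
// recording which address that data came from.
module cache_line(input  [31:0]  req_addr,    // address the core wants to read
                  input  [27:0]  stored_tag,  // 'address cached here' register
                  input          valid,       // does this line hold real data?
                  input  [127:0] line_data,   // the cached 16-byte line
                  output         hit,         // 1 = serve from cache, 0 = go to memory
                  output [127:0] read_data);
    wire [27:0] req_tag = req_addr[31:4];     // drop the offset within the 16-byte line
    assign hit       = valid && (stored_tag == req_tag);
    assign read_data = line_data;             // only meaningful when hit is 1
endmodule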

If you've never used Verilog or VHDL, you might want to download a simulator from someplace like Xilinx (they make FPGAs that let you actually do this stuff, and they even have a beginner kit that looks really nice for $150, which is very inexpensive for a hardware development board). Once you mess around with the hardware description languages, this stuff can become a lot clearer.
"Walk not the trodden path, for it has borne it's burden." -John, Flying Monk
Quote: Original post by Endar
That's just not fair. I did both the Architecture and Operating Systems subjects (architecture I unfortunately didn't pay any attention in, but I was able to pass), but we didn't do anything approaching implementing a processor in Verilog.

In the architecture subject, all we did was MIPS stuff. And the labs had no relation to the lectures. That was annoying. Perhaps if they had, I would have paid more attention to the lectures.

In OS, we did a few relevant things, like process scheduling and creating a basic file system and file allocation table. The hardest was writing a basic Linux shell, and, to be honest, I didn't learn anything in that prac which I have used since.

Implementing a processor? That would have been useful.

Anyway, that was kind of OT, but, yeah, it may not be quite what you're looking for, but "Modern Operating Systems" was a good book.


My architecture course's lab had us build the ALU for a processor using MAX+Plus II... that was the most interesting thing I ever did in those classes. Aside from that, Architecture I and II were horribly boring. OS was neat though.
Quote: Original post by Zmurf
I understand the basics of how a CPU works, like the theory behind it. But I'm finding it hard to get information on how the electrical pulses running through the buses allow the processor to execute instructions, how the computer caches data, and how it stores information on the hard drive. Can anyone help me find some information on the actual science behind the computer?


Do you mean the mathematical foundation, or the principles behind a current register-based computer? Those are two different things.

If you mean the latter, the best way is to write a few test programs in Intel-syntax assembly and learn from them.

Do you know what a register is? Have you heard terms like "flat (linear) memory model"?

The Intel developer manuals are quite a nice source of information about current CPUs. Ars Technica also has detailed articles.

If you'd like to see how actual numeric operations are done, the best way would be to look into my Java libraries for 128-bit integer arithmetic. They are straightforward, and were written with easy conversion to an ASM/hardware implementation in mind.
Well, this is weird. Where I'm working they use Xilinx FPGAs, and I'd thought about putting together a small board that I could connect to a PC and download CPU designs to, with the idea of writing an article about CPU design. I was thinking of nothing more complex than a Z80. Back in 1991 I did much the same thing for my final-year degree project, a configurable co-processor, only that was an ISA board that plugged into a PC expansion slot. Also, the FPGA-as-co-processor idea is starting to appear in supercomputer design circles.

Skizz

