How Does A Hard Drive Work?


Hard disk dissection

The average laptop in the shops for around $500 has somewhere in the region of 60GB of storage. You see that figure and think ‘wow, imagine all the movies, songs, images, files and documents I could save on that baby’, right?

But did you ever think about how it actually gets stored?

If you were to stack the equivalent capacity of CDs in front of you, you’d need around 90 discs, a pile that would stand close to a metre tall in their cases. You can fit everything on those CDs onto that hard drive. Truly amazing for an invention that has its origins in the 1950s, when IBM’s room-sized RAMAC drive held a mere 5MB.

 

How Does a Hard Drive Work – The Basics

[Image: hard drive parts]

In order to fully understand a hard drive you have to know how one works physically. Inside, there are discs stacked one on top of the other, spaced a few millimetres apart. These discs are called platters. Polished to a mirror shine and incredibly smooth, they can hold vast amounts of data.

Next we have the arm, which reads and writes data on the disc. It stretches out over the platter and sweeps from centre to edge, reading and writing data through its tiny heads, which hover just above the platter. On the average domestic drive the arm can oscillate around 50 times per second; on many high-spec machines used for complex workloads this figure can rise into the thousands.

Hard drives use magnetism to store information, just like old cassette tapes did. Each head contains a tiny coil of copper wire, since a coil is easy to magnetise and demagnetise with electric current.

Storage and Operation

[Image: hard drive sectors and tracks]

When you save a file, the ‘write’ head on the arm writes the data onto the platter as it spins at high speed, typically 5,400 or 7,200 RPM. The data doesn’t just go anywhere, though: the computer must be able to locate the file later, and the write must not interfere with or delete any other information already on the drive.

For this reason, platters are divided into tracks and sectors. The tracks are the long circular divisions highlighted here in yellow; they are like the ‘tracks’ on a music record. The sectors are small sections of those tracks, and one is highlighted blue in the picture. There are thousands of tracks from centre to edge of the platter, each split into many sectors.
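To get a feel for how tracks and sectors add up to a drive’s capacity, here is a quick sketch of the classic cylinder-head-sector arithmetic. Every geometry figure below is invented for illustration, not the specs of any real drive:

```python
# Classic CHS (cylinder-head-sector) capacity arithmetic.
# All figures here are made-up examples, not a real drive's geometry.
SECTOR_SIZE = 512        # bytes stored in one sector (traditional size)
TRACKS_PER_SIDE = 16383  # concentric tracks from centre to edge
SECTORS_PER_TRACK = 63   # sectors carved out of each track
SIDES = 16               # recording surfaces (two per platter, 8 platters)

capacity_bytes = SECTOR_SIZE * SECTORS_PER_TRACK * TRACKS_PER_SIDE * SIDES
print(f"Capacity: {capacity_bytes / 10**9:.1f} GB")  # Capacity: 8.5 GB
```

Real drives complicate this with zoned recording (outer tracks hold more sectors than inner ones), but the tracks-times-sectors logic is the same idea.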

In Operation

When you open a file, a program or really anything on your PC, the hard drive must find it. So let’s say that you open an image. The CPU tells the hard drive what you’re looking for. The drive spins up, moves its head to the right track and finds the image within a few milliseconds. It then ‘reads’ the image and sends it to the CPU; the time this takes is called the ‘read time’ (or access time). Then the CPU takes over and sends the image on its way to your screen.
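Here is a quick back-of-the-envelope sketch of why that lookup takes milliseconds rather than nanoseconds: on average the platter must spin half a revolution before the right sector passes under the head (this ignores seek time, the movement of the arm itself):

```python
# Average rotational latency: the time for half a revolution of the platter.
def avg_rotational_latency_ms(rpm):
    seconds_per_revolution = 60.0 / rpm
    return seconds_per_revolution / 2 * 1000  # half a turn, in milliseconds

for rpm in (5400, 7200, 15000):
    print(f"{rpm} RPM -> {avg_rotational_latency_ms(rpm):.2f} ms")
```

Even a 15,000 RPM enterprise drive waits around 2 ms per access on average, which is why mechanical drives feel slow next to solid-state storage.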

Let’s say you edited the image. Now those changes must be saved. When you click ‘Save’, all of that information is shot to the CPU, which in turn sorts it (processes it) and sends it to the hard drive for storage. The hard drive spins up and the arm uses its ‘write’ heads to overwrite the previous image with the new one. Job done.

That is what that buzzing disc in your computer gets up to all day. Now, as I do with most of my articles here on MUO I shall leave you with a friendly word of advice:

If you want to look inside to further understand how a hard drive works, do so with an old one. There are a few reasons for this.

  • Once you pop open that drive, the tamper seals over the screws will break and tell the manufacturer you have been poking around in there. Your warranty is void immediately; many drives actually have this warning printed on the side.
  • Drives are expensive and carry a lot of important data, so don’t just pop open the family PC to have a go at it. Pick up an old one on eBay.

Tell us what YOU know about HDDs and share your thoughts 🙂

What Is A Processor Core?


Every computer has a processor, whether it’s a small efficiency processor or a large performance powerhouse, or else it wouldn’t be able to function. Of course, the processor, also called the CPU or Central Processing Unit, is an important part of a functioning system, but it isn’t the only one.

Today’s processors are almost all at least dual-core, meaning that the entire processor itself contains two separate cores with which it can process information. But what are processor cores, and what exactly do they do?

What Are Cores?


A processor core is a processing unit which reads in instructions to perform specific actions. Instructions are chained together so that, when run in real time, they make up your computer experience. Literally everything you do on your computer has to be processed by your processor. Whenever you open a folder, that requires your processor. When you type into a word document, that also requires your processor. Things like drawing the desktop environment, the windows, and game graphics are the job of your graphics card — which contains hundreds of processors to quickly work on data simultaneously — but to some extent they still require your processor as well.

How They Work


The designs of processors are extremely complex and vary widely between companies and even models. Their architectures — currently “Ivy Bridge” for Intel and “Piledriver” for AMD — are constantly being improved to pack in the most amount of performance in the least amount of space and energy consumption. But despite all the architectural differences, processors go through four main steps whenever they process instructions: fetch, decode, execute, and writeback.

Fetch

The fetch step is what you expect it to be. Here, the processor core retrieves instructions that are waiting for it, usually from some sort of memory. This could include RAM, but in modern processors the instructions are usually already waiting inside the processor cache. The processor has a small register called the program counter which essentially acts as a bookmark, letting the processor know where the last instruction ended and the next one begins.

Decode

Once it has fetched the instruction, the core goes on to decode it. Instructions often involve multiple areas of the processor core, such as the arithmetic units, and the core needs to figure out which ones. Each instruction has a portion called an opcode which tells the core what should be done with the information that follows it. Once the core has figured this all out, the different areas of the core itself can get to work.

Execute


The execute step is where the processor knows what it needs to do, and actually goes ahead and does it. What exactly happens here varies greatly depending on which areas of the processor core are being used and what information is put in. As an example, the processor can do arithmetic inside the ALU, or Arithmetic Logic Unit. This unit can connect to different inputs and outputs to crunch numbers and get the desired result. The circuitry inside the ALU does all the magic, and it’s quite complex to explain, so I’ll leave that for your own research if you’re interested.
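As a very rough sketch, the ALU can be pictured as a box that routes its two inputs through one of several operations, chosen by a control signal. The operation names and encoding here are invented for illustration:

```python
# Toy ALU: the control signal selects which operation the inputs go through.
def alu(control, a, b):
    operations = {
        "ADD": lambda x, y: x + y,
        "SUB": lambda x, y: x - y,
        "AND": lambda x, y: x & y,  # bitwise, as a real ALU works on bits
        "OR":  lambda x, y: x | y,
    }
    return operations[control](a, b)

print(alu("ADD", 6, 7))  # 13
print(alu("AND", 6, 3))  # 2
```

A hardware ALU computes all of these results at once in parallel circuitry and uses the control signal to pick which one reaches the output, but the input-select-output shape is the same.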

Writeback

The final step, called writeback, simply places the result of what’s been worked on back into memory. Where exactly the output goes depends on the needs of the running application, but it often stays in processor registers for quick access, since the following instructions often use it. From there, it is kept on hand until parts of that output need to be processed again, at which point it may be written out to RAM.
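The four stages can be strung together as a toy interpreter. The instruction format, opcodes and four-register file below are invented for illustration and don’t correspond to any real instruction set:

```python
# Toy fetch-decode-execute-writeback loop over an invented instruction set.
memory = [("LOAD", 0, 5), ("LOAD", 1, 7), ("ADD", 2, 0, 1), ("HALT",)]
registers = [0] * 4
pc = 0  # program counter: the bookmark for the next instruction

while True:
    instruction = memory[pc]                    # fetch
    pc += 1
    opcode, operands = instruction[0], instruction[1:]  # decode
    if opcode == "HALT":
        break
    if opcode == "LOAD":                        # execute: put a constant in a register
        dest, result = operands
    elif opcode == "ADD":                       # execute: sum two registers
        dest, a, b = operands
        result = registers[a] + registers[b]
    registers[dest] = result                    # writeback

print(registers)  # [5, 7, 12, 0]
```

Running it loads 5 and 7 into the first two registers, adds them into the third, and halts, leaving the registers as `[5, 7, 12, 0]`.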

It’s Just One Cycle

This entire process is called an instruction cycle. These instruction cycles happen ridiculously fast, especially now that we have powerful processors with high frequencies. Additionally, a multi-core CPU runs this cycle on every core at once, so data can be crunched roughly as many times faster as your CPU has cores, compared with being stuck on a single core of similar performance. CPUs also have optimized instruction sets hardwired into the circuitry which can speed up familiar instructions sent to them. A popular example is SSE.
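A rough sketch of that per-core speedup: a CPU-bound job split into chunks and handed to a pool of worker processes, one per core by default. The prime-counting workload is just an arbitrary example:

```python
# Splitting a CPU-bound job across cores with a process pool.
from concurrent.futures import ProcessPoolExecutor

def count_primes(bounds):
    """Count primes in [lo, hi) by trial division (deliberately naive)."""
    lo, hi = bounds
    def is_prime(n):
        return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))
    return sum(1 for n in range(lo, hi) if is_prime(n))

if __name__ == "__main__":
    # Four chunks; the pool spreads them across the available cores.
    chunks = [(1, 25_000), (25_000, 50_000), (50_000, 75_000), (75_000, 100_000)]
    with ProcessPoolExecutor() as pool:
        total = sum(pool.map(count_primes, chunks))
    print(total)  # number of primes below 100,000
```

The speedup is bounded by the slowest chunk and by how much of the work can actually run in parallel (Amdahl’s law), which is why doubling the cores rarely doubles real-world performance.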

Conclusion


Don’t forget that this is a very simple description of what processors do; in reality they are far more complex and do a lot more than we realize. The current trend is that processor manufacturers are trying to make their chips as efficient as possible, which includes shrinking the transistors. Ivy Bridge’s transistors are a mere 22nm across, and there’s still a bit to go before researchers encounter a physical limit. Imagine all this processing occurring in such a small space. We’ll see how processors improve once we get that far.

Where do you think processors will go next? When do you expect to see quantum processors, especially in personal markets? Let us know in the comments below!

By Danny Stieben makeuseof.com