CPU architectures

Computer Architectures:

Early computers could only calculate outputs using fixed, hard-wired instructions. In the 1940s, John von Neumann and Alan Turing both proposed the stored program concept. In this architecture a program must be loaded into main memory before it can be executed by the processor. The instructions are fetched one at a time, then decoded and executed sequentially by the processor. The sequence of instructions can only be changed by a conditional or unconditional jump instruction.
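The fetch-decode-execute cycle described above can be sketched as a short simulation. The instruction names and memory layout here are invented for illustration; a real processor works on binary machine code, not Python tuples.

```python
# Stored-program sketch: program and data share one memory, and the
# processor fetches, decodes and executes instructions one at a time.
memory = [
    ("LOAD", 10),   # address 0: load the value at address 10 into the accumulator
    ("ADD", 11),    # address 1: add the value at address 11
    ("JUMPZ", 5),   # address 2: conditional jump to address 5 if accumulator is 0
    ("STORE", 12),  # address 3: store the accumulator at address 12
    ("HALT", None), # address 4
    ("HALT", None), # address 5: jump target
    None, None, None, None,   # unused addresses 6-9
    3,              # address 10: data
    4,              # address 11: data
    0,              # address 12: result goes here
]

pc = 0              # program counter
acc = 0             # accumulator
while True:
    opcode, operand = memory[pc]       # fetch and decode
    pc += 1                            # execution is sequential by default
    if opcode == "LOAD":
        acc = memory[operand]
    elif opcode == "ADD":
        acc += memory[operand]
    elif opcode == "STORE":
        memory[operand] = acc
    elif opcode == "JUMPZ" and acc == 0:
        pc = operand                   # only a jump changes the sequence
    elif opcode == "HALT":
        break

print(memory[12])  # 3 + 4 = 7
```

Note that the program and its data sit in the same list: this is exactly the shared memory the stored program concept requires.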

Von Neumann Architecture:

The most common architecture is the Von Neumann architecture. In this architecture, instructions and data are stored in a common main memory and transferred over a single shared bus. Because the bus can only carry one item at a time, the architecture suffers from bottlenecking.

  • Used in PCs, laptops, servers and high-performance computers.
  • Data and instructions share the same memory. Both use the same word length.
  • One bus makes the control unit design simpler and makes it much cheaper.

Harvard Architecture:

The other main model is the Harvard architecture, which separates data and instructions into separate memories accessed over different buses. Instructions and data therefore don't compete for the same bus, and different memory sizes and word lengths can be used for data and instructions. Harvard principles are used in specialist embedded systems and digital signal processing, where speed takes priority over simplicity of design.

  • Used in digital signal processing, microcontrollers and in embedded systems such as microwave ovens and watches.
  • Instructions and data are held in separate memories which may have different word lengths. Free data memory can’t be used for instructions, and vice versa.
  • Separate buses allow parallel access to data and instructions.
  • Control unit for two buses is more complicated and expensive.
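The separation described above can be illustrated with a variation of a simple processor simulation. This is a sketch, not real hardware: the two Python lists stand in for physically separate instruction and data stores, each reached over its own bus.

```python
# Harvard sketch: instructions and data live in distinct memories,
# which may use different word lengths (e.g. 16-bit instructions,
# 8-bit data). Instruction names are invented for illustration.
instruction_memory = [
    ("LOAD", 0),    # load data address 0 into the accumulator
    ("ADD", 1),     # add the value at data address 1
    ("STORE", 2),   # store the result at data address 2
    ("HALT", None),
]
data_memory = [5, 6, 0]   # separate store; cannot hold instructions

pc, acc = 0, 0
while True:
    opcode, operand = instruction_memory[pc]   # fetched over the instruction bus
    pc += 1
    if opcode == "LOAD":
        acc = data_memory[operand]             # fetched over the data bus
    elif opcode == "ADD":
        acc += data_memory[operand]
    elif opcode == "STORE":
        data_memory[operand] = acc
    elif opcode == "HALT":
        break

print(data_memory[2])  # 5 + 6 = 11
```

Because the two memories are distinct, an instruction fetch and a data access can happen in the same cycle; the trade-off, as noted above, is that free space in one memory cannot be used by the other.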

Contemporary Processor architectures:

Modern CPU chips often incorporate aspects of both von Neumann and Harvard architecture. In desktop computers there is one main memory holding both data and instructions, but cache memory is divided into an instruction cache and a data cache, so at cache level instructions and data are accessed in the Harvard style. Some digital signal processors have multiple parallel data buses and one instruction bus.

CISC AND RISC:

In complex instruction set computers (CISC), a large instruction set is used to accomplish tasks in as few lines of assembly language as possible. A CISC instruction combines a "load/store" instruction with the instruction that carries out the actual calculation.

  • Quicker to code programs.
  • The compiler has very little work to do to translate a high-level language statement into machine code.
  • Because the code is relatively short, very little RAM is required to store the instructions.

In reduced instruction set computers (RISC), a minimum number of very simple instructions, each taking one clock cycle, are used to accomplish all the required operations, working through multiple general-purpose registers.

  • The hardware is simpler to build with fewer circuits needed for carrying out complex instructions.
  • Because each instruction takes the same amount of time, i.e. one clock cycle, pipelining is possible.
  • RAM is now much cheaper, and RISC’s use of RAM and software allows better performance and processors at less cost.
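The difference can be illustrated by computing the same multiplication both ways. The assembly-like mnemonics in the comments are a made-up notation for illustration, stepped through by plain Python.

```python
# CISC style: a single instruction (e.g. "MULT A, B") both fetches the
# operands from memory and performs the calculation.
data = {"A": 6, "B": 7}

def cisc_mult(dest, src):
    """One complex instruction: load, multiply and store combined."""
    data[dest] = data[dest] * data[src]

cisc_mult("A", "B")                     # MULT A, B

# RISC style: the same work as a sequence of simple one-cycle
# instructions, each doing one thing via general-purpose registers.
data = {"A": 6, "B": 7}
registers = {}
registers["r1"] = data["A"]             # LOAD  r1, A
registers["r2"] = data["B"]             # LOAD  r2, B
registers["r3"] = registers["r1"] * registers["r2"]   # MUL r3, r1, r2
data["A"] = registers["r3"]             # STORE A, r3

print(data["A"])  # 6 * 7 = 42
```

The CISC version is shorter to write, while the RISC version uses uniform single-step instructions, which is what makes pipelining straightforward.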

Multicore and parallel systems:

Multicore processors are able to distribute workload across multiple processor cores, thus achieving significantly higher performance by performing several tasks in parallel. They are therefore known as parallel systems. Many personal computers and mobile devices now have more than one core. Supercomputers have thousands of cores.

Software has to be specially written to take advantage of multiple cores. For example, browsers such as Google Chrome and Mozilla Firefox can run several concurrent processes.

Co-processor systems:

A co-processor is an extra processor used to support the primary processor (the CPU). Co-processors generally carry out only a limited range of functions.

GPU:

A Graphics Processing Unit (GPU) is a specialised electronic circuit that is very efficient at manipulating computer graphics and image processing. It consists of thousands of small, efficient cores and can process large blocks of visual data simultaneously.

A GPU can act together with a CPU to accelerate scientific, engineering and other applications. They are used in numerous devices ranging from mobile phones and tablets to cars, drones and robots.
