Instruction sets
• Computer architecture taxonomy.
• Assembly language.
© 2008 Wayne Wolf, Overheads for Computers as Components, 2nd ed.
von Neumann architecture
• Memory holds data and instructions.
• Central processing unit (CPU) fetches instructions from memory.
• Separate CPU and memory distinguishes a programmable computer.
• CPU registers help out: program counter (PC), instruction register (IR), general-purpose registers, etc.
CPU + memory
[Figure: CPU and memory connected by address and data lines. The PC holds 200, and the instruction stored at memory address 200, ADD r5,r1,r3, has been fetched into the IR.]
The von Neumann model
So where is the Input/Output?
[Figure: the CPU connected by buses to input and output devices.]
The Harvard Architecture (1)
• Harvard architecture is a computer architecture with physically separate storage and signal pathways for instructions and data.
• The term originated from the Harvard Mark I relay-based computer, which stored instructions on punched tape (24 bits wide) and data in electromechanical counters (23 digits wide). These early machines had limited data storage, entirely contained within the data processing unit, and provided no access to the instruction storage as data, making loading and modifying programs an entirely offline process.
The Harvard Architecture (2)
• In a computer with a von Neumann architecture (and no cache), the CPU can be either reading an instruction or reading/writing data from/to the memory.
• Both cannot occur at the same time, since the instructions and data use the same bus system.
• In a computer using the Harvard architecture, the CPU can read an instruction and perform a data memory access at the same time, even without a cache.
• A Harvard architecture computer can thus be faster for a given circuit complexity, because instruction fetches and data accesses do not contend for a single memory pathway.
The Harvard Architecture (3)
• In a Harvard architecture, there is no need to make the two memories share characteristics. In particular, the word width, timing, implementation technology, and memory address structure can differ.
• In some systems, instructions can be stored in read-only memory, while data memory generally requires read-write memory.
• Instruction memory is often wider than data memory (for example, mid-range PIC microcontrollers such as the PIC16C8X fetch 14-bit instruction words but operate on 8-bit data).
Harvard architecture
[Figure: Harvard architecture. The CPU, with its PC, has separate address and data connections to a program memory and to a data memory.]
Harvard Architecture Example
[Figure: block diagram of the PIC16C8X microcontroller.]
Modified Harvard Architecture
• The Modified Harvard architecture is very like the Harvard architecture but provides a pathway between the instruction memory and the CPU that allows words from the instruction memory to be treated as read-only data.
• This allows constant data, particularly text strings, to be accessed without first having to be copied into data memory, thus preserving more data memory for read/write variables.
• Special machine language instructions are provided to read data from the instruction memory.
• Standards-based high-level languages, such as C, do not support the Modified Harvard architecture, so inline assembly or non-standard extensions are needed to take advantage of it (see the sketch after this list).
• Most modern computers that are documented as Harvard architecture are, in fact, Modified Harvard architecture.
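As a sketch of the idea, the fragment below (armasm-style ARM syntax; the area and label names are made up for illustration) assembles a constant string into the read-only code section, i.e. into the instruction memory, and reads it back as ordinary data with a PC-relative load. On cores such as the AVR, the same access requires a dedicated instruction (LPM), which C toolchains expose only through non-standard extensions such as avr-libc's PROGMEM.

        AREA    modharvard, CODE, READONLY
getch   ADR     r0, msg         ; PC-relative address of the constant string
        LDRB    r1, [r0]        ; read one byte of it as read-only data
        MOV     pc, lr          ; return to the caller
msg     DCB     "hello", 0      ; text string stored alongside the instructions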
von Neumann vs. Harvard
• Harvard can’t use self-modifying code.
• Harvard allows two simultaneous memory fetches.
• Most DSPs use Harvard architecture for streaming data:
• greater memory bandwidth;
• more predictable bandwidth.
RISC vs. CISC
• Complex instruction set computer (CISC):
• many addressing modes;
• many operations.
• Reduced instruction set computer (RISC):
• load/store (see the sketch below);
• pipelinable instructions.
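A small sketch of the load/store style in ARM assembly (the registers chosen are arbitrary): only loads and stores touch memory, the ALU operation works purely on registers, and each instruction stays simple enough to pipeline. A CISC machine with memory addressing modes might fold the same work into a single register-memory add.

        LDR     r0, [r4]        ; load the first operand from memory
        LDR     r1, [r5]        ; load the second operand from memory
        ADD     r0, r0, r1      ; ALU operation uses registers only
        STR     r0, [r4]        ; store the result back to memory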
Instruction set characteristics
• Fixed vs. variable length.
• Addressing modes.
• Number of operands.
• Types of operands (ARM example below).
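For instance, the classic 32-bit ARM instruction set makes one set of choices along these axes: fixed-length instructions, several addressing modes, three-operand data processing, and both register and immediate operand types. A short illustrative fragment (register numbers are arbitrary):

        MOV     r3, #1          ; immediate operand
        LDR     r0, [r4]        ; register-indirect addressing
        LDR     r1, [r4, #8]    ; base-plus-offset addressing
        ADD     r2, r0, r1      ; three register operands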
Programming model
• Programming model: registers visible to the programmer.
• Some registers are not visible (IR).
Multiple implementations
• Successful architectures have several implementations:
• varying clock speeds;
• different bus widths;
• different cache sizes;
• etc.
Assembly language
• One-to-one with instructions (more or less).
• Basic features:
• One instruction per line.
• Labels provide names for addresses (usually in first column).
• Instructions often start in later columns.
• Columns run to end of line.
ARM assembly language example
label1  ADR     r4,c
        LDR     r0,[r4]         ; a comment
        ADR     r4,d
        LDR     r1,[r4]
        SUB     r0,r0,r1        ; comment
This fragment loads the values of c and d from memory (ADR puts each variable's address in r4) and leaves c - d in r0.
Pseudo-ops
• Some assembler directives don’t correspond directly to instructions (see the example below):
• Define current address.
• Reserve storage.
• Constants.
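A brief sketch of such directives in armasm-style syntax (the symbol, section, and label names are made up for illustration; other assemblers spell these differently, e.g. GNU as uses .equ, .word, .space, and .org to set the current address):

NVALS   EQU     8                       ; constant: a symbolic name, no storage allocated
        AREA    vars, DATA, READWRITE   ; directive that controls where the following lines are placed
c       DCD     5                       ; allocate and initialize one word
d       DCD     7
buf     SPACE   16                      ; reserve 16 bytes of storage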