Introduction to Assembly Language

08/31/2001

A Special Note:

A knowledge of Digital Circuitry is a very important asset to understanding the depths of CPU Architecture as it relates to Assembly Language. As will be explained later, the CPU Architecture is composed of Digital Circuitry, and all Assembly Language is solidly based on understanding that Architecture. It is this direct link that may cause problems when attempts are made to pursue the knowledge and understanding of Assembly Language for any CPU. This turns out to be quite reasonable, in that the manner in which a CPU decodes its instructions is by a collection of Gates, Flip-Flops, Data Latches, Counters, and Timers. More about this will be explained later.

A Second Special Note:

It has been my experience in teaching these concepts over the years that if we can first understand how a CPU accesses memory with the many different "Addressing Modes", the rest will come a lot easier. I firmly believe that this is the heart and core of where we need to concentrate.


By the way, "CPU" stands for "Central Processing Unit", the part of the computer that actually carries out the instructions, and in many ways the heart of the computer itself.

Group #1 - Introduction

Group #2 - 256 Possible Combinations

Group #3 - Simplest possible CPU Architecture

Group #4 - Some basic Assembly Language Commands

Group #5 - Certain Scenarios

Group #6 - Some actual CPU codes vs Binary Architecture

Group #7 - Easy way to do Hexadecimal Codes


Introduction:

As often as not, when the average individual tries to learn Assembly Language, it seems at first like some kind of an impossible task. This is especially so if you don't have any help, or perhaps the right kind of help. I will make a bold statement: there are ways to approach this subject that can really make a difference, whether you are learning this by yourself or in the classroom. I'm going to make another bold statement as well: I think that far too much time and emphasis is placed on primitive binary structure at the beginning. I will say that binary code is at the core of all that we will discuss, but much of what we will need can be learned as we go. Then, I will appear to contradict myself right off the bat. However, we will soon see that this might not be such a contradiction after all.

First, the contradiction:

  1. In binary code there is a simple premise: every time you add one binary bit to the combination, the possibilities double. For example, with 2 binary bit positions there are 4 possibilities (00, 01, 10, 11), with 3 binary bit positions there are 8 possibilities, and with 4 binary bit positions there are 16 possibilities.
  2. Now it doesn't take a mathematician to continue this out to even more positions and possibilities, but let's bypass this for a while so that we can get down to the meat of our subject.
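The doubling rule above can be demonstrated in a few lines. Here is a small sketch (in Python, used throughout these notes purely for illustration, since we have not yet picked a real CPU):

```python
# Each additional bit doubles the number of possible bit patterns: 2**n.
combinations = {bits: 2 ** bits for bits in range(1, 9)}
for bits, count in combinations.items():
    print(f"{bits} bit(s): {count} possible patterns")

# With 8 bits we get 256 distinct patterns, which is enough to give
# every instruction of a simple 8-bit CPU its own unique code.
```

Notice that the 8-bit row gives exactly the 256 combinations discussed in the next section.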

Now, consider what we have just said, where 8 binary bits would allow 256 possible configurations:

  1. If we need a binary computer to do various operations, an 8-bit configuration would allow us to contrive 256 different instructions. We might be able to do a considerable number of things with 256 instructions, although at the beginning it might be a little difficult to imagine that we would need this many.
  2. To begin with, let's reiterate something about computers from the simplest to the most complex:
    A computer only does 2 things: It either moves data, or manipulates data (some say "mangles data").
    1. Considering this to be actually true, then what is the value of so many possible instructions?
    2. Well, the point is that if we are going to have the computer move some data (actually copy, in most cases), we will need first of all to have some means of identifying where the data is coming from, and where it is going to.
    3. Consider the possibility that there are many possible means of identifying any address, such as we are so familiar with already.
      1. Local address, as in your neighborhood, or as in your township.
      2. Local address, as in your county, or as in your state.
      3. How about on your block, just 3 houses down the street, where you don't know and don't care what the actual address is?
      4. How about an address that you don't actually know, but an acquaintance of yours does?
      5. How about not the Post Office address, but the address of the box number you have at that address?
      6. How about a P.O. Box number as an address?
      7. Ok, here's my point, when addressing memory inside of a computer, there are a multitude of possible methods of determining just where in memory you might gain access by some addressing scheme. And this is only dealing with where (which is addressing) to gain access of the data.
    4. Of course there are other considerations, like if we can store and access data, what can we do with it?
      1. If the data is numbers, we might want to add/subtract/multiply/or divide with new numbers, or some other mathematical manipulation. Obviously, these would not be accomplished by just a few selections.
        1. This data may be our finances, addresses, or phone numbers.
        2. This data may relate to lighting and alarms for private homes or businesses.
        3. This data may relate to the positioning of automated equipment or machinery.
      2. There are many occasions where we need to move from one location in our program to another. After all, we need to realize that even our instructions need to reside somewhere (like in memory).
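All of those everyday "ways of giving an address" have direct counterparts in how a CPU addresses memory. Here is a hypothetical sketch (the addresses and values are invented for illustration, not taken from any real CPU):

```python
# A toy memory: 256 cells, one byte each, indexed by address.
memory = [0] * 256
memory[0x10] = 42      # the data itself lives at address 0x10
memory[0x20] = 0x10    # address 0x20 holds a *pointer* to that data

# Direct addressing: the instruction names the address outright
# (like giving the full street address).
value_direct = memory[0x10]

# Indirect addressing: the instruction names a location that holds
# the address (like the acquaintance who knows where to go).
value_indirect = memory[memory[0x20]]

# Relative/indexed addressing: a known base plus an offset
# (like "3 houses down the street from here").
base, offset = 0x0D, 3
value_relative = memory[base + offset]

print(value_direct, value_indirect, value_relative)
```

All three reach the same cell here; the point is that the CPU offers several distinct ways of saying *where* the data is.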


Please look at the CPU considerations in "A Basic Introduction to Computer Systems (in 4 parts)" on this site. I believe that the absolute most important thing to consider in learning Assembly Language is centered in what is known as "Addressing Modes" (also on this site). These form the heart and core of really understanding "how" a CPU accesses memory, and will take some time and effort to master. I have also found that the best way to understand these Addressing Modes is to start with the Architecture of a CPU. The reason for this is actually quite simple, in that the Addressing Modes are centered around the Architecture itself.

For starters, let's look at a representation of the simplest possible CPU Architecture:

  • It needs to have a method of working through a set of instructions, which need to be located somewhere in memory.
  • It needs to have one or more means of pointing to a memory location as a binary address.
  • It needs to have some kind of various status indicators (they are called "Flags") for decisions to be made. We would refer to these as "what if" situations, usually based on things like:
    • "true vs false" responses
    • whether a number is positive or negative
    • whether to go forward or back up
  • There would need to be some means of acting on those Flags, to be able to carry out the desired "what if" scenarios.
  • These may be a provision for "Branching" off to another section of the program, or may carry out some mathematical process, or some other modification process.
  • There may be a provision for interacting with an outside process, equipment, or machinery.
  • There would need to be some kind of (one or more) temporary storage within the CPU itself.
  • There would need to be some means of controlling the sequence of events required to carry out an instruction.
  • There would need to be some means of timing these sequences of events required to carry out an instruction.
  • As it turns out, most of the CPU Architecture is simply what we know in the common Digital World as:
    • Data Latches,
    • Counters that can simply increment or decrement,
    • Bi-Directional Data Lines,
    • MUX and De-MUX,
    • Tri-State control of the Address Lines and the Bi-Directional Data Lines that are common between Digital Devices.
  • The Address Lines would need to be controlled by one or more sets of Tri-Stated Data Latches.
  • There is one more common aspect of most CPU Architecture, and that is a special Digital Device known as an "Arithmetic-Logic-Unit" (ALU), which can be used as a device for a number of "what-if" scenarios.
  • A more advanced unit, which can perform complex mathematical functions, would be a "Calculator" that works in concert with the main CPU. This device carries the name "Co-Processor", because it exists as a co-element in the CPU functions.
  • There needs to be a special Logic Block for "Instruction Decode".
  • There needs to be a special Data Latch configuration to control the Address Lines for accessing the Instructions, and this is commonly called either the "Program Counter" or the "Instruction Pointer", or such as one might call it.
  • There is usually a special Data Register for temporary storage, called the "Accumulator".
  • There are usually some "General Purpose Registers", which in many cases can even have several different personalities. These might be used for either selecting Address Lines, or even Data Storage.
  • Even though the use of Flags can be sometimes a little confusing, it might help to understand that these "Flags" are simply held in a basic Data Latch as simple binary values.
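To tie the pieces of that list together, here is a minimal fetch-decode-execute sketch. The opcodes below are invented for illustration only (no real CPU uses these exact codes), but the parts are the ones named above: a Program Counter, an Accumulator, a Flag held as a simple binary value, and an instruction-decode step:

```python
# Invented opcodes for an imaginary 8-bit CPU (illustration only).
LOAD, ADD, STORE, HALT = 0x01, 0x02, 0x03, 0xFF

memory = [0] * 256
# Tiny program: load the value at 0x10, add the value at 0x11,
# store the result at 0x12, then halt.
memory[0:7] = [LOAD, 0x10, ADD, 0x11, STORE, 0x12, HALT]
memory[0x10], memory[0x11] = 7, 5

pc = 0             # Program Counter (Instruction Pointer)
acc = 0            # Accumulator: temporary storage inside the CPU
zero_flag = False  # a status Flag, just one binary value in a latch

while True:
    opcode = memory[pc]              # fetch
    if opcode == HALT:
        break
    operand = memory[pc + 1]
    if opcode == LOAD:               # decode + execute
        acc = memory[operand]
    elif opcode == ADD:
        acc = (acc + memory[operand]) & 0xFF   # stay within 8 bits
    elif opcode == STORE:
        memory[operand] = acc
    zero_flag = (acc == 0)           # update the Flag after each step
    pc += 2                          # step to the next instruction

print("result at 0x12:", memory[0x12])
```

Every element here maps back to the Architecture list: the loop is the sequencing/timing control, the `if/elif` chain is the Instruction Decode block, and `pc`, `acc`, and `zero_flag` are simply Data Latches with special jobs.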

  • Organization of Assembly Language Commands within the design of the CPU:


    Let's look at some examples of basic Assembly Language Commands, and how binary bit patterns are used in contrasting codes:



    Now let's see what more we can accomplish in understanding Assembly Language by certain scenarios:

  • First, let's consider why in the world there would be such a multitude of what appear to be nearly identical LOAD/STORE instructions. The real reason might sound a little mysterious at first, but after some very careful consideration, perhaps we can make some sense out of these mysteries. Two things to do first: review the comments that were made about addresses at the beginning of this presentation, and print out "Addressing Modes" now for future reference as we go along.
  • At this point it would be well to consider that there are a number of situations that we have been faced with, that are quite normal in everyday life, but for some reason an identical situation with computer codes blows us away.
  • So, let's start with some good practical real-world examples that we can later relate to. We will use some scenarios of a "Boss" vs "Worker" to illustrate our point, and in almost all cases they will be moving a series of boxes from one location to another, and where sometimes the destination may be entirely elsewhere.
  • Scenario #1: We have a row of numbered boxes that need to be moved from another location that is only 50 ft away, and we have one boss and one worker to accomplish this task.

    1. Boss instructs the worker to pick up box #1, take the box over to the new location, put box #1 down at the new destination location, in a specific place.
    2. Boss now instructs worker to go back to the row of boxes, pick up box #2, take the box over to the new location, and put the box down in the new destination location, but obviously in a specific place other than where box #1 was placed. Process is continued until they run out of boxes.
    3. Obviously, this is a real "brute force" method, which is terribly inefficient and screams for a better way (especially if there are lots of boxes to be moved), because it was necessary to direct this operation for each individual box.

    Scenario #2: We have a row of numbered boxes that need to be moved from another location that is only 50 ft away, and we have one boss and one worker to accomplish this task, but this time we have a better approach.

    1. Boss instructs the worker to pick up (one at a time) the boxes in sequential order, and then place them in the same sequential order at the destination address, until 10 boxes have been moved.
    2. Boss leaves, and the worker does the necessary task. The overall process is now more efficient.
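The difference between Scenario #1 and Scenario #2 is exactly the difference between straight-line code and a loop. A quick sketch (in Python, purely as an illustration of the idea):

```python
source = list(range(1, 11))   # ten numbered "boxes"
destination = []

# Scenario #1 -- brute force: the boss issues one explicit
# instruction per box.
destination.append(source[0])
destination.append(source[1])
# ...and so on, one line of direction for every single box.

# Scenario #2 -- a loop: one instruction, repeated until all the
# boxes have been moved; the boss can leave.
destination = []
for box in source:
    destination.append(box)

print(destination)
```

In CPU terms, Scenario #2 is a loop built from a counter register and a conditional Branch, which is far more efficient than writing out every move.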

    Scenario #3: We have a warehouse with lots of materials, organized by rows and rows of storage bins. The worker does not yet know what materials are to be moved, nor their location(s). The worker also does not know ahead of time how many are to be moved, nor their destination. The information needed is available on a particular clipboard hanging in a specific location. This information will provide the location of the materials to be gotten, how many, and where they are going.

    Scenario #4: In this situation, we have a large warehouse with two workers handling the materials that are being moved from one area in the warehouse to another area. There are two clipboards in use, with one for each worker. The first worker, who goes and gets the material, uses one clipboard to tell where to find the items, and how many items are to be transferred. The second worker has a second clipboard that tells where the items received are to be stored, and also how many to expect (consider the potential problem if either one of the two workers were given the wrong count of materials).

    Scenario #5: We have a driver who needs to take the truck over to a particular location, and pick up some furniture. The driver however, does not know the area, so another individual goes along as a "navigator".

    Scenario #6: We have the driver again, who needs to go pick up another load of furniture from an unknown address, except that it is 3 houses down the street from where the last pickup was made.

    Scenario #7: We have a "scavenger hunt", where we go to a selected location to find where we should go next.

    I think that we can see that there's nothing really complicated about any of these scenarios, and if we go about this carefully, we can figure out some of these types of CPU codes in exactly the same manner, and without getting lost.

    By the way, in the warehouse, if the boxes were heavy we could have used a handcart. In the CPU we would call that "temporary storage", as with the Register called the Accumulator. A truck, used as a transport, is also "temporary storage". The clipboard(s) that had the information pertaining to where these items are would be represented in a CPU as simply a Register (or Registers) that could reference Addresses in Memory. Also on that clipboard was information that, in a CPU, could be held in a Register as the Count of items to be transferred. It would be essential to have these related thoughts in mind as we pursue just how a Processor (CPU) does this. Also, remember that a CPU Register may be simply a Data Latch, used as a Digital Data Register.
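The clipboard and the scavenger hunt can both be sketched directly. In the following illustration (all addresses and values invented), the "clipboard" is a few memory cells holding a source address, a count, and a destination address, which the worker reads into registers; the scavenger hunt of Scenario #7 is a chain of pointers, each cell naming the next stop:

```python
memory = [0] * 256

# The "clipboard" (Scenario #3): one known spot in memory records
# where the items are, how many there are, and where they go.
memory[0x00] = 0x40   # source address of the items
memory[0x01] = 5      # how many items to move
memory[0x02] = 0x80   # destination address
memory[0x40:0x45] = [10, 20, 30, 40, 50]   # the items themselves

# The worker reads the clipboard into CPU "registers", then moves
# the items one at a time using indirect (pointer-based) addressing.
src, count, dst = memory[0x00], memory[0x01], memory[0x02]
for i in range(count):
    memory[dst + i] = memory[src + i]   # the Accumulator's handcart job

# Scenario #7, the "scavenger hunt": each location holds the
# address of the next stop, until we reach the prize.
memory[0xA0] = 0xB0
memory[0xB0] = 0xC0
memory[0xC0] = 99
prize = memory[memory[memory[0xA0]]]
print(memory[0x80:0x85], prize)
```

Nothing here is more complicated than the warehouse stories; the mystery of the "nearly identical" LOAD/STORE instructions is simply that each one answers the where-question in a different one of these ways.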


    Now, we can begin to look at some actual CPU codes and watch just how this is done. And while we're at it, we'll take a special look at those binary codes that give us an insight into the Architecture of the CPU at hand.


    There is an easier way to learn and use Hexadecimal Codes!

    To start with, "Hexadecimal" simply combines "hexa" (6) and "decimal" (10): 6 + 10 = 16, which is why hexadecimal counting uses 16 values per digit instead of 10.
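The practical payoff is that each hexadecimal digit stands for exactly 4 binary bits, so an 8-bit byte is always exactly 2 hex digits. A quick demonstration:

```python
# Each hex digit covers 4 bits, so one byte (8 bits) is 2 hex digits.
for value in (10, 15, 16, 255):
    print(f"{value:3d} decimal = {value:08b} binary = {value:02X} hex")

# The digits beyond 9 are the letters A-F (10 through 15),
# so the largest byte value, 255, is written FF.
largest_byte = int("FF", 16)
```

This is why programmers prefer hex over long binary strings: 11111111 and FF say exactly the same thing, but FF is much easier to read and to compare at a glance.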



    AssyLnge.html - SfE-DCS, ddf - 08/31/2001