Compilation, Interpretation, Virtualisation etc. How Computer Programming Works - PART 1

How computer programs run

Historically, people wrote code in machine languages. A piece of code is simply a set of structured strings that tell the computer what to do.

You see, a computer (or specifically the processor) is an electronic device that is made up of smaller electronic components. These components have natural properties that enable them to react to electricity in certain ways.

Take a transistor, for example, a crucial component of a computer system. It is made up of semiconductors, materials that can either allow or block the flow of electricity based on certain conditions like temperature, light, applied voltage and so on. That makes them stupid enough for us to manipulate as we wish (although not without extra effort on our side). You can cause electric current to flow through one end while shutting it off at the other. These are the fundamental operations of logic gates, e.g. "send me an electric signal (1) and I will act as if I didn't see it (0)", otherwise known as a "NOT gate". You can combine these components to do calculations, store signals (or appear to store them) or even play Forza 5. In other words, computers are weird.
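To make that concrete, here is a minimal Python sketch that models gates as little functions on 1s and 0s and combines them into a half adder, one of the simplest "calculations" you can build this way (the function names are just illustrative, not any real library):

```python
def NOT(a):
    # "Send me a 1 and I will act as if I didn't see it (0)"
    return 0 if a else 1

def AND(a, b):
    # Output 1 only when both inputs are 1
    return 1 if (a and b) else 0

def XOR(a, b):
    # Output 1 when exactly one input is 1
    return (a + b) % 2

def half_adder(a, b):
    # Combining gates already gives you arithmetic:
    # the sum bit and the carry bit of adding two bits
    return XOR(a, b), AND(a, b)

print(NOT(1))          # 0
print(half_adder(1, 1))  # (0, 1): 1 + 1 = binary 10
```

Chain enough of these together and you get adders, memory cells and, eventually, Forza 5.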

Anyway, these processors we are (or were) talking about have billions of these weird components. Eventually we have a lot of 1s and 0s going around, and they need to come from somewhere - Enter Programming!

The job (or torment) of the programmer is to pass these streams of 1s and 0s to the computer so that it can accomplish certain tasks. Take a look:

100011 00011 01000 00000 00001 000100

and yes, it's a real thing (Wikipedia).

The above code (yes, it is code) is telling the machine to put a value somewhere (a register), but not before loading it from something (a memory block) located somewhere (68 blocks away from the address held in another register).
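To see how the machine reads that string of bits, here is a small Python sketch that slices the 32-bit word into its fields, assuming the MIPS I-type layout the Wikipedia example uses (6-bit opcode, two 5-bit register fields, 16-bit offset):

```python
# The instruction from above, spacing removed
word = "100011 00011 01000 00000 00001 000100".replace(" ", "")

# MIPS I-type layout: opcode(6) | rs(5) | rt(5) | offset(16)
opcode = int(word[0:6], 2)    # 35, the "lw" (load word) operation
rs     = int(word[6:11], 2)   # register holding the base address
rt     = int(word[11:16], 2)  # register the value gets loaded into
offset = int(word[16:32], 2)  # how far from the base address to look

print(opcode, rs, rt, offset)  # 35 3 8 68
```

In other words: load the word sitting 68 blocks past the address in register 3, and put it into register 8.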

The Memory (cool kids call it RAM) is set out as an array of blocks (or cells), each of which can contain 8 bits (i.e. 8 of those 1s and 0s arranged in a row). And you guessed it, it is also made up of those semiconductors we talked about.
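You can picture memory exactly like that in code: an array of 8-bit cells. Python's built-in `bytearray` behaves this way (the 256-cell size here is just an arbitrary choice for the sketch):

```python
memory = bytearray(256)    # 256 cells, each holding exactly 8 bits

memory[68] = 0b10100101    # store one arrangement of 1s and 0s in cell 68
print(memory[68])          # 165, the decimal value of those 8 bits

# Each cell really is limited to 8 bits: values must fit in 0..255
try:
    memory[0] = 256
except ValueError:
    print("does not fit in one cell")
```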

A register is a temporary storage location much closer to the processor than the memory blocks are, so retrieving and inserting data (actually, bits) is much faster.

This task, as you can imagine, is tedious, error-prone and just outright wicked. As if that's not enough, add to that the fact that different processors have different ways they want you to arrange their 1s and 0s, the so-called Instruction Set. And the group of machine designs to which a given instruction set applies is generalised as the Instruction Set Architecture (ISA).

So there came Assembly Languages. They were aimed at making the task much easier. So instead of writing 10110000, which originally meant "move a value into a certain register" (Wikipedia), you now write "mov al", that is: move so-and-so into the AL register, where so-and-so is a value in binary, decimal or hexadecimal form. The AL register is a general-purpose register often used for arithmetic operations. It is an 8-bit register, which means it can only hold 8 bits.

A complete instruction might look like

mov al, 16h                    ;Move 16h (i.e. 0x16 in hexadecimal) into the AL register.
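To see what that one instruction actually does, here is a toy Python sketch of a pretend AL register; the dictionary and the `mov` function are made up for illustration, but the masking mirrors how a real 8-bit register simply cannot hold more than 8 bits:

```python
registers = {"al": 0}   # our pretend 8-bit AL register

def mov(reg, value):
    # Keep only the low 8 bits, since AL can hold just 8 bits
    registers[reg] = value & 0xFF

mov("al", 0x16)               # mov al, 16h
print(hex(registers["al"]))   # 0x16

mov("al", 0x123)              # too big for 8 bits...
print(hex(registers["al"]))   # 0x23 -- the high bits are lost
```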

Now the problem looks solved - except it's not! All we had was a simplified representation of those 1s and 0s; we still had to manually (and carefully) move things around. And if you were to move the wrong thing, just hope the project is not for the CIA. Moreover, there is no way we could ever build fancy web apps by moving bits into registers. So our ancestors needed to come up with something.