What is the simplest form of information you could transmit as a message?
In Computer Science, this ON or OFF value is known as a bit, and can be thought of as the literal presence or absence of an electrical charge within a transistor.
Our modern computer processors are made of billions of these transistors, each of which is on or off at a given moment in time, and it is on this foundation that our entire world of computing is built!
Computers take the simple ON or OFF values of a transistor, and scale them up to create the complexity we know and love.
Given each bulb can be in one of two states (on or off), how many total combinations are possible with 8 light bulbs?
Given there are 8 bulbs, and 2 possible states for each, this scales to 2^8 = 256 possibilities.
Rather than lightbulbs, in computer science we use 1 to represent ON, and 0 to represent OFF.
The 0 or 1 is known as a bit.
A cluster of 8 bits is known as one byte, with values 0 to 255 (256 possible values).
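If you want to check the scaling for yourself, here is a small sketch in Python (the language choice is just for illustration):

```python
# Each extra bit doubles the number of possible combinations,
# so n bits give 2**n distinct values.
for n_bits in (1, 2, 4, 8):
    print(n_bits, "bits ->", 2 ** n_bits, "combinations")

# A byte holds values 0 to 255; Python can show the binary form directly.
print(bin(255))            # '0b11111111' - all 8 bits on
print(int("11111111", 2))  # 255 - converting binary text back to decimal
```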
Watch the following video for an introduction to binary and other related number systems.
Heliosphere (2017): Binary - How to make a computer: Part II
Complete the following table of number conversions
As powerful as computers can be using bits and bytes, humans don’t intuitively operate on that level; we use characters and words. There needed to be a way for letters (and thereby words) to be represented somehow within a computer.
To achieve this, an arbitrary lookup table, called the ASCII table, was agreed on in the 1960s. This table still forms the basis of much of today’s character representation. Its significance is that a “string of letters” is really just a sequence of binary that, according to the lookup table, represents alphanumeric text.
Sourced from: Weiman, D. (2010). ASCII Conversion Chart. http://web.alfredstate.edu/faculty/weimandn/miscellaneous/ascii/ascii_index.html
Convert the following binary into its ASCII text representation.
01110100 - 01101000 - 01101001 - 01110011 00100000 - 01101001 - 01110011 - 00100000 01110011 - 01101100 - 01101111 - 01110111 00100000 - 01100111 - 01101111 - 01101001 01101110 - 01100111
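If you want to check your answer, a few lines of Python will do the conversion (the helper name decode_binary is made up for this sketch):

```python
def decode_binary(groups: str) -> str:
    # Strip the '-' separators, then convert each 8-bit group from base 2
    # and look the character up with chr() (Python's built-in code point lookup).
    return "".join(chr(int(g, 2)) for g in groups.replace("-", " ").split())

print(decode_binary("01001000 01101001"))  # -> 'Hi'
```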
ASCII was incredibly useful and opened up a world of computing to a lot of people, but it has significant limitations. ASCII was originally a 7-bit code, commonly extended to a full 8-bit / 1-byte table, which means there are at most 256 possible characters. While this is generally fine for Latin-based languages such as English, it imposes serious restrictions on multilingual computing.
The solution was the development of the Unicode standard, first published in 1991. Unicode began as a 16-bit lookup table (65,536 possible values) and has since grown well beyond that. While this means it can take 2 or more bytes to store a letter, the cost of data storage has fallen enough that this is not a major problem. The upside is that characters from Asian and many other scripts can now be represented.
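As a quick illustrative aside (not something you need for the course), Python makes it easy to see code points and storage costs; note that modern encodings such as UTF-8 use a variable number of bytes per character:

```python
# ord() returns a character's Unicode code point.
print(ord("A"))   # 65   - the same value it has in ASCII
print(ord("€"))   # 8364 - well beyond ASCII's 256-value range

# The number of bytes a character costs depends on the encoding.
print(len("A".encode("utf-8")))   # 1 byte
print(len("€".encode("utf-8")))   # 3 bytes
print(len("€".encode("utf-16")))  # 4 (2 bytes plus a 2-byte byte-order mark)
```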
To look at the Unicode table for yourself, check out
Please watch this excellent introduction to Unicode.
Computerphile (2013) Characters, Symbols and the Unicode Miracle by Tom Scott
So far we’ve been talking in binary, which is a base 2 number system (2 possible values per place column). We are all familiar with the decimal number system, or base 10. In Computer Science there are other number systems you need to be familiar with, which we will now look at.
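Python’s built-in conversion functions give a quick feel for these number systems (a small sketch, using 200 as an arbitrary example):

```python
n = 200
print(bin(n))  # '0b11001000' - base 2 (binary)
print(oct(n))  # '0o310'      - base 8 (octal)
print(hex(n))  # '0xc8'       - base 16 (hexadecimal)

# int() converts back from a string in any given base.
print(int("11001000", 2), int("310", 8), int("c8", 16))  # 200 200 200
```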
Lookup table
Tom Scott has another excellent introduction to this concept in the following Computerphile video.
Computerphile (2014) Floating Point Numbers by Tom Scott
A good written summary can be found here
floating-point-gui.de (undated): Floating Point Numbers
For our purposes, you don’t need to be able to do manual conversions with floating point numbers; you just need to understand the concept, its limitations, and the workarounds (as Tom Scott outlines in his video).
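To see the limitation for yourself, try the classic example below; the workarounds shown (tolerant comparison, or a decimal type) are the usual ones:

```python
# 0.1 cannot be stored exactly in binary, so tiny errors creep in.
print(0.1 + 0.2)         # 0.30000000000000004
print(0.1 + 0.2 == 0.3)  # False!

# Workaround 1: compare within a tolerance rather than for exact equality.
import math
print(math.isclose(0.1 + 0.2, 0.3))  # True

# Workaround 2: use a decimal type when exactness matters (e.g. money).
from decimal import Decimal
print(Decimal("0.1") + Decimal("0.2"))  # 0.3 exactly
```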
In the examples we have looked at so far, we have used the entire byte to represent a number: 8 bits for values 0 to 255. In reality computers need to cater for negative values as well, so the most significant bit is reserved to indicate the sign (positive or negative) of the number. This system is known as two’s complement, or having a signed integer. Using the full size of the binary number for positive-only values is known as having an unsigned integer.
How this looks in a conversion table is as follows:
Notice that this means the range extends one further on the negative side than the positive. For an 8-bit integer, the decimal values range from −128 to +127.
Converting negative decimals to two’s complement binary (example using −13):
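The usual three-step procedure, worked through for −13 in 8 bits:
1. Write the positive value in binary: 13 = 0000 1101
2. Invert every bit: 1111 0010
3. Add 1: 1111 0011, which is −13 in two’s complement.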
Converting two’s complement binary to negative decimal (example using 1110 1110):
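The reverse direction uses the same trick:
1. The leading 1 tells us the value is negative.
2. Invert every bit: 1110 1110 → 0001 0001
3. Add 1: 0001 0010 = 18, so 1110 1110 represents −18.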
Two’s complement arithmetic
With two’s complement, binary addition and subtraction work quite simply. Some examples (the top row of each sum shows the carries; any carry out of the top bit is discarded):
         1 1
      0100 0100   (+68)
    + 0000 1100   (+12)
      ---------
      0101 0000   (+80)

     11     1
      0100 0100   (+68)
    + 1111 0100   (-12)
      ---------
      0011 1000   (+56)

       111 1
      0000 1100   (+12)
    + 1011 1100   (-68)
      ---------
      1100 1000   (-56)
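If you’d like to experiment, here is a minimal Python sketch of 8-bit two’s complement (the helper names are made up for illustration):

```python
BITS = 8
MASK = (1 << BITS) - 1  # 0b11111111: keeps results to 8 bits

def to_twos_complement(n: int) -> str:
    """Show n as an 8-bit two's complement pattern."""
    return format(n & MASK, "08b")

def from_twos_complement(bits: str) -> int:
    """Interpret an 8-bit pattern as a signed value."""
    n = int(bits, 2)
    return n - (1 << BITS) if bits[0] == "1" else n

print(to_twos_complement(-12))  # 11110100

# Addition 'just works': the mask discards the carry out of the top bit.
result = (68 - 12) & MASK
print(format(result, "08b"), "=", from_twos_complement(format(result, "08b")))
# 00111000 = 56
```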
Assume you are using two’s complement binary numbers.
Looking for more practice? This website has a number of online quizzes for you to convert between number systems and practice your binary arithmetic.
If you’ve used Photoshop, you have probably seen colours expressed as #FF0000. You should now be able to recognise this type of number as hexadecimal.
What is the value of the number? Colours in the computer are split into RGB (Red, Green, Blue), with one unsigned byte (256 values) for each.
So, #FF0000 is actually: Red = FF (255), Green = 00 (0), Blue = 00 (0), i.e. pure red.
Note: 256^3 = 16,777,216 possible colour combinations
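A small sketch of pulling a colour apart in Python (the helper name split_rgb is made up):

```python
# Split a hex colour like '#FF0000' into its three unsigned bytes.
def split_rgb(colour: str):
    colour = colour.lstrip("#")
    return tuple(int(colour[i:i + 2], 16) for i in (0, 2, 4))

print(split_rgb("#FF0000"))  # (255, 0, 0) - maximum red, no green, no blue
print(256 ** 3)              # 16777216 possible colours
```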
Computers store time internally as the number of seconds that have elapsed since an arbitrarily agreed epoch (zero-point) of midnight, 1st January 1970 UTC.
32-bit computers take their name from the fact that their internal calculations are performed using an integer size of 32 bits. A signed 32-bit integer has a range of −2,147,483,648 to 2,147,483,647.
That means that a little after 2 billion seconds have elapsed from the start of the 1970s, a 32-bit computer will be unable to accurately store an integer that represents the time! In fact, it will clock over from being 1970 plus 2 billion seconds to being 1970 minus 2 billion seconds! When do we reach this limit? 03:14:07 UTC on 19 January 2038!
The subsequent second, any computer still running a 32-bit signed system will clock over to 13 December 1901, 20:45:52.
While your personal computer may be a 64-bit system, so you might think you are safe, there are a lot of systems we all rely on that still have 32-bit internals. This is particularly true of embedded systems in transportation infrastructure, electrical grid control, pumps for water and sewer systems, internal chips in cars and other machinery, and even a lot of Android mobile phones (though admittedly the chances of one of those still being in use in 20 years are slim!). If you research the “2038 problem” you’ll discover just how many critical systems are still vulnerable.
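You can see the limit for yourself with a few lines of Python (note that converting negative timestamps may raise an error on some platforms):

```python
from datetime import datetime, timezone

INT32_MAX = 2**31 - 1  # largest signed 32-bit value: 2,147,483,647

# The last second a signed 32-bit timestamp can represent:
print(datetime.fromtimestamp(INT32_MAX, tz=timezone.utc))
# 2038-01-19 03:14:07+00:00

# One second later the value wraps to the most negative 32-bit integer:
print(datetime.fromtimestamp(-(2**31), tz=timezone.utc))
# 1901-12-13 20:45:52+00:00
```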
So we’ve seen that binary can be used to store numbers, text, and colour codes, but we can use binary for much more than storing values; binary also forms the basis of all the logic functionality that occurs within computers.
We do this through what are commonly known as logic gates. All gates can be simplified down to the three of AND, OR, and NOT, but there are six gates we will learn to love in this course:
The following video provides a great introduction into how you can easily create your own logic gates and how they work.
Heliosphere (2016) Relays and Logic Gates - How to Make a Computer: Part I https://www.youtube.com/watch?v=fB85NrUBBhQ
To help you try to remember what the various symbols look like, it might be helpful to remember the ANDroid way (cheesy I know)…
Play the NAND game - Build logic circuits from the very basics upward.
Be aware that, like PEMDAS in mathematics, an order of precedence exists for equations involving gates. The order of precedence is: brackets first, then NOT, then AND, then OR.
Logic equations can either use the written name of the relevant logic gates, or they could be expressed using boolean notation as per the following table. Unfortunately there are several different notations that you may come across.
There are three different ways of representing logic circuits that you need to be able to convert between for this course.
For example, an AND gate could be represented four different ways:
Logic equation using boolean notation: XY = Z
Logic equation using gate names: X AND Y = Z
X = NOT A AND B OR A AND NOT B
X = (A || B) & (!C || B)
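One way to tabulate an equation like this is to loop over every input combination; here is a minimal Python sketch (variable names assumed) for X = (A OR B) AND (NOT C OR B):

```python
from itertools import product

print("A B C | X")
for a, b, c in product([0, 1], repeat=3):
    x = (a or b) and ((not c) or b)  # the equation above
    print(a, b, c, "|", int(x))
```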
“A computer processor does moronically simple things — it moves a byte from memory to register, adds a byte to another byte, moves the result back to memory. The only reason anything substantial gets completed is that these operations occur very quickly. To quote Robert Noyce, ‘After you become reconciled to the nanosecond, computer operations are conceptually fairly simple.’” *
Modern CPUs just take the idea of logic circuits made from transistors and scale it up!
Real Engineering (2016) Transistors - The Invention That Changed The World
As this video demonstrates, it is perfectly possible to build a modern computer from ordinary discrete transistors, for those so inclined…
Computerphile (2017): MegaProcessor
The functions a CPU can perform are not complicated; its power comes from its speed. CPUs are simply millions or billions of logic gates working together.
What is the speed of a typical CPU today? What does that speed “mean”?
A CPU can add, subtract, multiply, divide, load from memory, save to memory. From those simple building blocks we get the computers we have today.
It’s hard to find explanations of what happens inside a CPU pitched at the correct level for this course. These videos are the best I’ve found to date, but they still introduce a level of complexity beyond what you need for the course.
To save class time, I recommend watching these videos outside of class, taking notes as you do. We will then discuss them in class and address any questions that arise.
CrashCourse (2017): How Computers Calculate - the ALU: Crash Course Computer Science #5
CrashCourse (2017): Registers and RAM: Crash Course Computer Science #6
CrashCourse (2017): The Central Processing Unit (CPU): Crash Course Computer Science #7
Another possible alternative explanation is the following video, which uses an imagined CPU called the Scott CPU to simplify things.
In One Lesson (2013): How a CPU Works https://www.youtube.com/watch?v=cNN_tTXABUA (20m41)
The key things for you to appreciate are the CPU cycle and the interaction of the key components within and beyond the CPU.
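To make the cycle concrete, here is a toy Python sketch of fetch-decode-execute; the instruction set is entirely made up for illustration, and real CPUs encode instructions as binary rather than tuples:

```python
# A tiny 'program' in memory: each instruction is (operation, arg1, arg2).
memory = [
    ("LOAD", 0, 68),     # put 68 into register 0
    ("LOAD", 1, 12),     # put 12 into register 1
    ("ADD", 0, 1),       # register 0 = register 0 + register 1
    ("PRINT", 0, None),  # output register 0
    ("HALT", None, None),
]
registers = [0, 0]
pc = 0  # program counter: the address of the next instruction

while True:
    op, a, b = memory[pc]  # FETCH the instruction the program counter points at
    pc += 1                # advance to the next instruction
    # DECODE the operation and EXECUTE it
    if op == "LOAD":
        registers[a] = b
    elif op == "ADD":
        registers[a] += registers[b]
    elif op == "PRINT":
        print(registers[a])  # prints 80, just like our binary addition earlier
    elif op == "HALT":
        break
```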
Comparing C to machine language https://www.youtube.com/watch?v=yOyaJXpAYZQ (10m00)
Diagrammatically, we could create a simple representation of the internals of a CPU and its connections as follows (this is the level of complexity for the course):
We have looked at a lot of different types of memory. Let’s briefly compare each type to make some generalisations about their different properties.
| Memory type | Speed | Capacity | Cost | Storage duration |
| --- | --- | --- | --- | --- |
| CPU register | Very fast | Very small (a few bytes) | Very expensive (built into the CPU) | For immediate use |
| Cache | Very fast | Small (a few MB) | Very expensive | Immediate use |
| RAM | Fast | Large-ish (8 GB) | About USD 1c/MB | Short term (seconds to minutes) |
| SSD | Moderate | Large (100s of GB) | About USD 0.2c/MB | Long term, non-volatile |
| HDD | Slow | Large (TBs) | Cheap! About USD 0.005c/MB | Long term, non-volatile |
| Tape | Excruciatingly slow | Many TBs or petabytes | Extremely cheap! | Several years |
We continue our abstraction journey upward. We started with the humble electron, called it a bit, grouped bits into bytes and logic circuits, performed operations on them with ALUs inside CPUs, and stored the results in memory. Finally we move to the software that manages all this hardware for us: the operating system.
An Operating System (OS) can be defined as a set of programs that manage computer hardware resources and provide common services for application software. The operating system acts as an interface between the hardware and the programs requesting I/O.
That said, we will be looking at operating systems in more depth in unit 6.
For now, there is just one question: what are the main functions of an operating system?
What are the hardware resources that require managing? What are some of the services an OS provides to applications? Rather than repeating myself, see my notes in unit 6 for this as it is all addressed there.
Finally, after we have an operating system to manage the hardware, we get to finally run our application software to “do stuff”!
What are some of the common application uses available with computing? Some common categories include:
Can you name a few of the main ones in each category? What is a key differentiator between them?
What are some of the more common features across most applications?
Which features are provided by the OS, and which by the application?
Computer Science Illuminated by Nell Dale & John Lewis (page numbers based on 6th edition):