
Why Computers Work

Written by Olga Cooperman

Illustrated by Mia Lodit

Explaining “how” something works is explaining the method; explaining “why” something works is explaining the reason it works. Why was the computer originally invented? Why do computers use 0 and 1? Why can computers do so many different things? The answers to these questions may surprise you and lead you to see everyday objects in a new light.

Introduction

The legendary physicist Richard Feynman once proposed a thought experiment:

“If, in some cataclysm, all of scientific knowledge were to be destroyed, and only one sentence passed on to the next generations of creatures, what statement would contain the most information in the fewest words?” Feynman’s own answer was the following statement: “All things are made of atoms—little particles that move around in perpetual motion, attracting each other when they are a little distance apart, but repelling upon being squeezed into one another.” You may suggest a different concept that would induce and accelerate scientific discovery – but what an interesting question to ponder! I would vote for the Hindu-Arabic number system that we use today. In Europe, Roman numerals were in use for 1800 years, and not much scientific progress happened during that time. Doing arithmetic with Roman numerals on paper is all but impossible, so calculations had to be carried out on an abacus.

 

Let’s contemplate Richard Feynman’s thought experiment further and apply it to the field of computer science. The information revolution changed our society in many ways, from new jobs to the way we live, play, and interact with the world. At the heart of this revolution is the universal digital computer – a single device that can do almost anything. But what if, in some cataclysm, all knowledge of how computers work were lost? What would it take for the next generations of creatures to rebuild the information age from ruins? What would it take to reinvent the modern digital computer from first principles?

 

In order to understand how computers work – or, more importantly, why they work – we need to explore three key ideas: Information, Computation, and Digital Logic. By the end of this short journey, you will have a profound understanding of how the little device in your hand is able to assist you in so many daily tasks.


What would it take to rebuild the information age from ruins?

Part I: INFORMATION

Monkey vs. Wikipedia

In everyday use, the word “information” refers to facts or knowledge. This common notion is challenged by the “Monkey vs. Wikipedia” riddle.   

 

Which source communicates more information: Wikipedia or a monkey typing random characters on a computer? If the answer were “Wikipedia”, this would not be a very interesting riddle. But how can a monkey possibly produce more information than literally the most popular source of information? To answer this question, we need a more precise definition of information.

Monkey vs. Wikipedia 

Everything is Information

Information is a very odd concept. It is not an abstract idea that you cannot touch or feel, such as wisdom or justice. It is a real physical thing. However, it is unlike any other physical thing because it can take on any form. Different forms or representations of information can be seamlessly converted into one another. Consider a piece of music. It can be represented as a sequence of musical notes on paper, as an arrangement of neurons in your brain, as sound waves produced by musical instruments, as electromagnetic waves transmitted through the air to a radio receiver, as a computer file, or as tracks on a vinyl record. At the same time, it is not possible to separate information from its physical representation – information always exists in some physical embodiment. Therefore, anything in the physical world is information.


Information has many physical forms

Information and Meaning

When we think about information, we often assume that it must mean something. However, meaning depends on the audience and on the context. We can arbitrarily assign any meaning to any word. One word may have different meanings in different languages. The meaning can be incomprehensible outside of a very specific situation. For example, a DNA molecule is meaningless outside of cellular structures capable of reading it – yet it still contains the blueprint of an organism. Information in itself does not have any intrinsic meaning. Information is simply an arrangement or a pattern of some elements, such as symbols, sounds, or atoms.

Elements can be thought of as an alphabet. As we know, alphabets come in different sizes: 26 English letters, 118 chemical elements, 4 DNA bases, 7 main musical notes, or 3 primary colors of light. Alphabets can be converted or translated into one another: we can translate a Chinese text into English and English into Arabic. But what is the shortest possible alphabet that can be used to make patterns? A single letter would not be sufficient; we need at least two. But we don’t need more than two – two letters are enough to make all kinds of different patterns. Therefore, the simplest possible alphabet consists of two letters (e.g., 0 and 1). Because we can easily translate one alphabet into another, any information can be represented as a pattern of 0s and 1s. The entire Universe can be represented as a string of 0s and 1s.
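To make this concrete, here is a minimal Python sketch of translating one alphabet into the two-letter one: each character of a short text is mapped to its 8-bit pattern of 0s and 1s and then recovered again.

```python
# A minimal sketch: any alphabet (here, text characters) can be translated
# into a pattern of 0s and 1s and back again without losing anything.

def text_to_bits(text: str) -> str:
    """Translate each character into its 8-bit pattern of 0s and 1s."""
    return "".join(format(byte, "08b") for byte in text.encode("utf-8"))

def bits_to_text(bits: str) -> str:
    """Reassemble the original characters from the 0/1 pattern."""
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode("utf-8")

pattern = text_to_bits("Hi")
print(pattern)                # 0100100001101001
print(bits_to_text(pattern))  # Hi
```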


Information is a pattern

Quantifying Information

Anything and everything in the physical world is information: trees and clouds, the moon and the sun. But how can we measure the amount of information in these things? How can we compare the Black Square painting by Malevich to The Dream painting by Dali to nursery rhymes to a theoretical physics book? I will give you a hint. Think about summarizing or compressing these objects.


Black Square (Malevich)


The Dream (Dali)

A painting is a pattern of colors. Large blocks of one color can be compressed more easily than a rarely repeating variety of colors and shades. A book is a pattern of words. A children’s book is likely to contain many repetitions and a limited vocabulary compared to a physics book. While a children’s book can be compressed into a shorter summary without impacting your understanding, a dense physics book may need to be read word for word. If we were to translate these patterns of colors and words into the universal alphabet of 1s and 0s, we could figure out whether a picture is really worth a thousand words.

 

It is worth noting that the size of an object plays a relatively small role in determining its information content. Think about a fat book containing the letter A printed one billion times. This book can be summarized in only a few words: “letter A repeated one billion times”. For this reason, two computer files of a similar size may contain very different amounts of information. However, the sizes of the zipped (compressed) versions of these two files are a good proxy for comparing their information content.
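As an illustration, here is a minimal sketch that compares two equally long “books” by compressing them; using zlib as the compressor is just an assumption for the demonstration.

```python
import random
import zlib

# Two "books" of the same length: one is the letter A repeated a million times,
# the other is a million random bytes. The compressed sizes serve as a rough
# proxy for how much information each one actually contains.
repetitive = b"A" * 1_000_000
random.seed(0)
noisy = bytes(random.randrange(256) for _ in range(1_000_000))

print(len(zlib.compress(repetitive)))  # on the order of a kilobyte - almost no information
print(len(zlib.compress(noisy)))       # roughly a megabyte - essentially incompressible
```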

 

Another way of thinking about the amount of information is assessing how surprised we are by it. A completely white page is boring because it contains very little information compared to an intricate drawing. Think about the following examples - in which case would you pay greater attention to the process?

  • Hearing about your friend’s latest vacation vs. hearing unexpected news about rap star Cardi-B moving into your neighborhood

  • Listening to a favorite song vs. hearing a gunshot in the middle of the night

  • Watching a sunset vs. watching a meteor shower

 

Something unpredictable or surprising contains more information.

Surprise as a Measure of Information

The amount of information we receive from an information source depends on two things:

  1. how many different patterns (messages) the source can generate

  2. how surprised we are, on average, with these messages.

 

To illustrate this idea, let’s consider two information sources: a fire alarm and a coin. Each object can produce only two messages: “fire” or “no fire” for the alarm, and “heads” or “tails” for the coin. Does this mean they are equally informative? Not really!

 

A fire alarm can produce two messages: “fire” or “no fire”. The “no fire” message is expected most of the time. In contrast, the event of a fire is unexpected and highly surprising. However, the probability of a fire is low. On average, we expect very little information from a fire alarm. We barely notice it!

 

A coin also produces two messages: “heads” or “tails”. Both outcomes are equally likely and, therefore, equally surprising. A coin, on average, communicates more information than a fire alarm. If we were to bet a large sum of money on heads, we would watch the flipping of the coin with great attention!

Entropy as a Measure of Information

Claude Shannon is known as the father of information theory. He realized that the entropy of a physical system is analogous to the amount of information in that system. In physics, entropy is a measure of all the possible ways we can rearrange the parts of a system without changing the system as a whole. For example, changing the positions of individual molecules of air will not affect the overall temperature and pressure in a room. Air molecules have a high degree of freedom to move around – air is a high-entropy system. There are many ways for an egg to be broken, but there is only one way for an egg to be whole. Therefore, a broken egg has higher entropy than a whole egg. Information is also a measure of possibilities. A fire alarm can generate two messages, but most of the time it will generate only the “no fire” message. A coin has more possibilities because it generates its two messages with equal frequency. The concept of information entropy was proposed by Claude Shannon in 1948. Shannon discovered a formula for quantifying the entropy of an information source, but for our purposes we are interested in the idea rather than in exact calculations.
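For readers who do want to see a calculation, here is a minimal sketch of Shannon’s entropy formula, H = −Σ p·log₂(p), applied to the two sources above; the 0.1% chance of fire is an assumed, purely illustrative number.

```python
from math import log2

def entropy(probabilities):
    """Shannon entropy in bits: H = -sum(p * log2(p))."""
    return -sum(p * log2(p) for p in probabilities if p > 0)

# Assumed probability of "fire" on any given day - purely illustrative.
fire_alarm = [0.001, 0.999]   # "fire", "no fire"
fair_coin = [0.5, 0.5]        # "heads", "tails"

print(round(entropy(fire_alarm), 4))  # ~0.0114 bits per message
print(round(entropy(fair_coin), 4))   # 1.0 bit per message
```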

There are many ways for an egg to be broken, but there is only one way for an egg to be whole.

Monkey vs. Wikipedia

Now we have sufficient background to answer the Monkey vs. Wikipedia question. English text is fairly predictable. After the first few letters you can often guess the rest of the word. After a few words you can often guess the rest of the sentence. A string of random characters is unpredictable and therefore cannot be shortened or summarized in any way. The monkey produced the maximum possible amount of information for the length of its text! A random text may not mean anything, but remember, information does not have any intrinsic meaning. It does not need to mean anything to count as information. Therefore, the monkey wins this competition with Wikipedia. A random text always contains the maximum possible amount of information for its length.

Information Takeaways

Everything in the physical world can be thought of as information. Information is a pattern of elements. Any representation of information can be converted into the simplest pattern of 0s and 1s. The state of the entire Universe can be represented as a pattern of 0s and 1s.

Part II: COMPUTATION

What is Computation?

What do we mean by computation? Arithmetic is a common example, but driving a car and playing chess are also computations. Less obviously, a tropical weather system and an embryo developing in a womb are computations too. All these processes have three things in common. First, they follow a set of rules. The rules may be man-made, such as traffic laws or the rules of chess, or they may come from nature, such as the laws of fluid mechanics or biological mechanisms. Second, these processes use external inputs. A driver observes other cars on the road; a chess player observes his opponent’s moves. A weather system is driven by incoming energy from the sun, and a developing embryo receives nutrients from the mother. Third, all these processes use rules and inputs to progress to – in other words, to compute – their next state. If we were to take periodic snapshots of these processes as they evolve, the progression between the frames would be a computation. As you can see, computation is a very general concept. Quoting a computational pioneer, Stephen Wolfram: “All processes, whether they are produced by human effort or occur spontaneously in nature, can be viewed as computations… The evolution of a system is a computation.” Computation is any rule-based process.
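To make the phrase “the evolution of a system is a computation” concrete, here is a minimal sketch of a rule-based process: a one-dimensional cellular automaton whose next state is computed from its current state by a fixed rule (Wolfram’s rule 110 is assumed here purely for illustration).

```python
# A minimal sketch of rule-based evolution: each cell's next value is determined
# by its current value and its two neighbors, according to a fixed rule table.
RULE = {"111": "0", "110": "1", "101": "1", "100": "0",
        "011": "1", "010": "1", "001": "1", "000": "0"}   # Wolfram's rule 110

state = "0000000010000000"
for _ in range(8):
    print(state.replace("0", ".").replace("1", "#"))
    padded = "0" + state + "0"                              # treat the edges as 0
    state = "".join(RULE[padded[i - 1:i + 2]] for i in range(1, len(padded) - 1))
```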

Building Blocks of Computation

Let’s define two important concepts related to computation: functions and algorithms. A function is a relation between inputs and outputs in which each input corresponds to exactly one output. Conceptually, everything around us is a function. A washing machine transforms dirty socks into clean socks. A car transforms fuel into kinetic energy and waste heat. A transformation may be achieved in different ways, by using different algorithms. An algorithm is a specific, step-by-step procedure for generating outputs from given inputs. A function may be implemented through many different algorithms. Algorithms are the essence of computation.
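As a minimal sketch of the distinction, here is one function (sorting a list of numbers) implemented by two different algorithms; both produce the same output for the same input.

```python
# One function - "return the numbers in ascending order" - two different algorithms.

def selection_sort(items):
    """Repeatedly pick the smallest remaining element."""
    result, remaining = [], list(items)
    while remaining:
        smallest = min(remaining)
        remaining.remove(smallest)
        result.append(smallest)
    return result

def merge_sort(items):
    """Split the list in half, sort each half, then merge the halves."""
    if len(items) <= 1:
        return list(items)
    mid = len(items) // 2
    left, right = merge_sort(items[:mid]), merge_sort(items[mid:])
    merged = []
    while left and right:
        merged.append(left.pop(0) if left[0] <= right[0] else right.pop(0))
    return merged + left + right

print(selection_sort([3, 1, 2]))  # [1, 2, 3]
print(merge_sort([3, 1, 2]))      # [1, 2, 3]
```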


Fig. 1: Algorithm

Finite State Machines

When we think about simple computing machines, what comes to mind? Maybe a calculator or an abacus? A turnstile is not typically viewed as a computing device – but remember, computation is not only about arithmetic. A turnstile is a computing device with a fancy name: a Finite State Machine. This type of machine can change, or transition, from one state to another in response to external inputs (such as a coin or a push) and its current internal state (locked or unlocked). Finite State Machines are all around us: a traffic light, a clock, an elevator, a washing machine, a lamp, a lock, and so on. Finite State Machines are purpose-built. We cannot reconfigure an elevator to wash socks. If we want to add new capabilities, we need to fully redesign and rebuild the device. These computing machines are not universal.


Fig. 2: Turnstile State Transition Diagram
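Here is a minimal sketch of the turnstile as a Finite State Machine: a transition table maps the pair (current state, input) to the next state. The state and event names are illustrative.

```python
# A minimal sketch of a Finite State Machine: the turnstile from Fig. 2.
# The transition table maps (current state, input) to the next state.
TRANSITIONS = {
    ("locked", "coin"): "unlocked",
    ("locked", "push"): "locked",
    ("unlocked", "push"): "locked",
    ("unlocked", "coin"): "unlocked",
}

state = "locked"
for event in ["push", "coin", "push", "push"]:
    state = TRANSITIONS[(state, event)]
    print(f"{event:>4} -> {state}")
# push -> locked, coin -> unlocked, push -> locked, push -> locked
```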

Hilbert’s “Decision Problem”

A universal computer was invented by accident. The person who came up with the idea was not trying to invent a computer. He was responding to a question raised at the 1928 International Congress of Mathematicians: “Is it possible to create a general algorithm for deciding whether any given mathematical proposition is true or false?” Ultimately, Alan Turing proved that no such general algorithm exists. But in order to answer this question, he first needed to formally define the notion of an “algorithm”. What is, precisely, an algorithm? Instead of proposing a definition, Turing invented a conceptual device to serve as a model of computation.

Turing Machine

A Turing machine is a conceptual device that consists of a Finite State Machine connected to an infinite tape. The device moves along the tape left and right, reading and writing symbols. As you may remember, a Finite State Machine can change or transition from one state to another in response to external inputs and its current internal state. The states of a Turing machine are labeled with letters, such as A, B, and C. Inputs are symbols read from the tape. The machine also writes symbols back to the tape based on its state transition rules. It can produce virtually any pattern of symbols on the tape, given the right set of rules and enough time. The Turing machine was proposed as a formal definition of algorithm. As such, the Turing machine is a model of what is physically computable.

 

Ultimately, a computation is just a game with symbols. Any computation that could be performed by a human involves reading and writing down numbers according to some rules kept in memory. The Turing machine is a definition of computability.


Fig. 3: Turing Machine
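Below is a minimal sketch of a Turing machine in Python; the rule table is an assumed toy example that flips every bit on the tape and halts at the first blank cell.

```python
# A minimal sketch of a Turing machine with a toy rule table.
# Rules map (state, symbol read) to (symbol to write, head move, next state).
RULES = {
    ("A", "0"): ("1", +1, "A"),
    ("A", "1"): ("0", +1, "A"),
    ("A", "_"): ("_",  0, "HALT"),   # "_" marks a blank cell
}

tape = list("10110") + ["_"]
head, state = 0, "A"
while state != "HALT":
    write, move, state = RULES[(state, tape[head])]
    tape[head] = write
    head += move

print("".join(tape))   # 01001_
```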

Alan Turing

Turing’s 1936 paper on computability, which described a conceptual device now known as the Turing machine, became the foundation of computer science.  Alan Turing is also known as the father of artificial intelligence. In his other seminal paper, “Computing Machinery and Intelligence”, he described a test of a machine's ability to exhibit intelligence indistinguishable from human intelligence. He started his paper with "I propose to consider the question 'Can machines think?'”

During World War II, Alan Turing worked on cryptanalysis for the British government. He was instrumental in deciphering German codes and, consequently, in shortening the deadliest of all wars. Sadly, Alan Turing was later convicted by the British authorities for homosexuality, disgraced, sentenced to hormone therapy, and stripped of his security clearance. He committed suicide in 1954, at the age of 41, reportedly by eating a poisoned apple. The apple in the Apple logo has a bite taken out of it, which would have made a wonderful tribute to Alan Turing. However, the artist who designed the logo did not know how Alan Turing died; he added the bite to prevent the fruit from being mistaken for a cherry. Alan Turing was posthumously pardoned only in 2013, and the Bank of England issued a £50 note featuring Alan Turing in 2021.

Turing Apple

Computation Takeaways

The Turing Machine is a model of computation because computation is just a game of symbols. Input is converted into output according to a rule. Input and output can be represented as patterns of 1s and 0s. Therefore, computation can be thought of as flipping 1s and 0s according to a rule.

 

Conceivably, "the ultimate question of life, the Universe and everything" is a string of 1s and 0s. So is the answer to the question, a binary string, which could be the number 42 in a decimal representation. Elon Musk once mused on a podcast that the Universe is the ultimate answer, but to what question?  The essence of computation is simple and can be automated. But what should we care to compute? Questions are much harder than answers.


Fig. 4: Computing The Ultimate Answer


Part III: DIGITAL LOGIC

Computational Systems

What kind of system can we make computations with? With any kind of system. If a computation is any rule-based process, then all systems around us are running computations. The weather is a computational system, but we don’t know how to leverage it for our human purposes. We do not know how to tell the weather what to compute. Throughout modern history, people have tried to create calculating machines. Early designs were bulky mechanical devices intended to perform only specific arithmetic operations. At present, scientists are experimenting with natural systems. Molecular computers and quantum computers exist today at a “proof of concept” level, but they are difficult to control and operate, and they target specific types of problems. When the concept of a digital computer was first introduced by 22-year-old Claude Shannon, it changed the world in a very short time. It was a simple, practical, and elegant idea that paved the way for limitless applications. A critical feature of a digital computer is its universality: it can perform any computation that can be done in principle. Although technology seems to advance at an ever-increasing rate, the fundamental design of a digital computer has not changed since it was first formally described by Shannon in his MIT master’s thesis. All modern computers perform computations using Boolean algebra and electrical circuits – this approach is called digital logic.

Digital Logic

Let’s explain both terms – digital and logic. Digital means discrete, distinct, or individually separated. In contrast, analog means uninterrupted or continuous. Discrete objects include the five individual fingers of a hand, the individual steps of a staircase, and a two-position on/off light switch. The corresponding analog objects are the connected bones of a fish’s fin, a continuous ramp, and a dimmer light switch. The nature of information is digital. The smallest discrete unit of information is the binary digit, also known as a “bit”. A bit has two states: 1 and 0. Information is a sequence of bits.


Fig. 5: Digital Objects vs. Analog Objects


Logic is the study of valid reasoning. Aristotle wanted to conceptualize the thinking process and describe the rules of inference – what conclusion necessarily follows from given premises. Socrates is mortal because he is a man, and all men are mortal. Socrates is mortal because that is how we defined “man”. Logic is logical simply by definition. Logic is concerned only with the form of an argument; it is not interested in the content or the subject of the argument. “If p then q; p; therefore, q.” Actual sentences, or “propositions”, are replaced with the labels “p” and “q”. This generalized approach is the first step towards automating the thinking process. However, logical operations performed by computers have little to do with thinking or reasoning in a human sense. Digital logic simply defines the rules for making a selection between two alternatives: 1 and 0.

Binary Number System

Humans use the decimal number system, while computers use the binary number system. The decimal number system uses 10 symbols, from 0 to 9. The value of a digit depends on its position – whether it is in the ones place, the tens place, the hundreds place, and so on. Each place is a power of 10, increasing from right to left: 1, 10, 100, 1000, etc. Any quantity can be represented concisely using only 10 symbols. This provides a significant advantage over non-positional systems, which require many symbols. This system also lends itself to easy arithmetic calculations on paper, such as addition with carry and long division. The decimal positional system was first invented in India around the 6th century. Before that, numbers were not of much use except for labeling quantities. Euclid’s geometry textbook, The Elements, written in about 300 BCE, contains essentially no calculations with numerals. Fibonacci learned about the decimal number system during his travels and described it in his book Liber Abaci in 1202. The new system was finally adopted in Europe only in the 15th century, giving rise to rapid scientific and engineering advancements.

 

The binary number system works in a similar way to the decimal system. The difference is that it uses two symbols, 0 and 1, and each place value is a power of 2: 1, 2, 4, 8, 16, etc. Any quantity can be represented using only two symbols. Binary numbers are much longer than decimal numbers; e.g., 2022 is equivalent to 11111100110. The key advantage of binary is that it can be easily modeled using any 2-state system, such as an on/off switch or the magnetized and demagnetized areas of a magnetic strip. Implementing binary arithmetic on a physical device is much easier than manipulating 10 decimal symbols.

 

 

Fig. 6: Binary and Decimal Number Systems

(101010 = 2 + 8 + 32 = 42)
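Here is a minimal sketch of positional notation in code: a decimal number is converted to its binary digits by repeated division by 2, and a binary string is converted back by summing its place values (powers of 2).

```python
# A minimal sketch of converting between decimal and binary positional notation.

def to_binary(n: int) -> str:
    """Repeatedly divide by 2 and collect the remainders."""
    digits = ""
    while n > 0:
        digits = str(n % 2) + digits
        n //= 2
    return digits or "0"

def from_binary(bits: str) -> int:
    """Each 1 contributes its place value, a power of 2."""
    return sum(2 ** i for i, bit in enumerate(reversed(bits)) if bit == "1")

print(to_binary(2022))        # 11111100110
print(from_binary("101010"))  # 42
```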


In 1703, Gottfried Leibniz described the “simplest system of all”, which uses only the characters 0 and 1 and “proceeds by twos”. He noted that this system is the most fundamental for doing calculations and does not require learning anything by heart, such as multiplication tables: one times one is one, and one times zero is zero. This is all one needs to remember to multiply two binary numbers. Leibniz believed that a similar system had been devised 4000 years earlier by Fuxi, an ancient Chinese king and philosopher – the method of counting by 0 and 1 was lost and rediscovered by Leibniz after a great interval of time.

Boolean Algebra and Logic Gates

George Boole was a self-taught British mathematician who was curious about the relationship between mathematics and logic. He envisioned a special kind of math for describing the laws of thought and wanted to give a mathematical form to Aristotelian logic. Instead of “If p then q”, he wanted to use algebra-like expressions, such as “p + q”. He invented a language for manipulating propositions, which is known as Boolean algebra. Propositions are either true or false, which is expressed as 1 and 0. Propositions can be combined using three operators: “AND”, “OR”, “NOT”. The operator “AND” is denoted by a multiplication sign, “OR” by a plus sign, and “NOT” by a minus sign.

In Boolean algebra, 1 + 1 = 1

 

The result of a Boolean operation is dictated by “truth tables”.  For example, if two true statements are combined using AND, the result is a true statement. However, if one of the statements is false, then the result is false.
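Here is a minimal sketch of the three truth tables in code, with 1 standing for true and 0 for false (note how, with the OR operator, 1 + 1 = 1):

```python
# A minimal sketch of Boolean operators as arithmetic on 0 and 1.
def AND(p, q): return p * q          # true only if both are true
def OR(p, q):  return min(p + q, 1)  # true if at least one is true (1 + 1 = 1)
def NOT(p):    return 1 - p          # true becomes false, false becomes true

for p in (0, 1):
    for q in (0, 1):
        print(f"p={p} q={q}  AND={AND(p, q)}  OR={OR(p, q)}  NOT p={NOT(p)}")
```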

 

 Fig. 7: Truth Tables


George Boole died in 1864 at the age of 49. He was caught in the rain on his way to teach a class, lectured in soaked clothes, and later developed pneumonia. Claude Shannon learned about Boolean algebra in an undergraduate philosophy course. In 1937, as a graduate student at MIT, Shannon realized that by using 1s and 0s the principles of arithmetic and logic could be combined. Boolean operators can be used as building blocks for arithmetic calculations. The picture below illustrates how an algorithm for adding two binary numbers can be implemented with the Boolean operators AND, OR and NOT.


Fig. 8: Adding two binary numbers using Boolean operators


Shannon proved that any mathematical operation can be reduced to a pattern of Boolean operations, which could be implemented using electrical circuits with on/off switches.  A closed switch, which allows the current to flow, would represent “1”, while an open switch would signify "0". An electrical circuit designed to perform a Boolean operation is called a logic gate.
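In the same spirit as Fig. 8, here is a minimal sketch of binary addition built only from AND, OR and NOT functions; the exact gate arrangement in the original figure may differ from this one.

```python
# A minimal sketch: binary addition assembled from the three Boolean operators.
def AND(a, b): return a * b
def OR(a, b):  return min(a + b, 1)
def NOT(a):    return 1 - a

def XOR(a, b):
    # Exclusive OR, built from the three basic gates.
    return OR(AND(a, NOT(b)), AND(NOT(a), b))

def full_adder(a, b, carry_in):
    """Add two bits plus an incoming carry; return (sum bit, carry out)."""
    s = XOR(XOR(a, b), carry_in)
    carry_out = OR(AND(a, b), AND(carry_in, XOR(a, b)))
    return s, carry_out

def add_binary(x: str, y: str) -> str:
    """Add two binary strings one column at a time, right to left."""
    x, y = x.zfill(len(y)), y.zfill(len(x))
    carry, bits = 0, []
    for a, b in zip(reversed(x), reversed(y)):
        s, carry = full_adder(int(a), int(b), carry)
        bits.append(str(s))
    if carry:
        bits.append("1")
    return "".join(reversed(bits))

print(add_binary("1011", "110"))  # 10001  (11 + 6 = 17)
```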

 

Imagine a simple circuit with two switches connecting a battery and a lamp. An “AND” logic gate is built by connecting the two switches in series. If either one of them is open, the light will be off; both switches must be closed for the light to be on. As per the AND truth table, both A and B must be true for the result to be true. An “OR” logic gate is implemented by connecting the two switches in parallel. The light will be on if either one of the switches is closed, because the current can take either of the alternative paths. As per the OR truth table, if either A or B is true, the result is true. Finally, a “NOT” logic gate is implemented by using a switch to create a short circuit that bypasses the lamp. If the switch is closed, the current bypasses the lamp and the light is off: if the input is 1, the output is 0. Switches are critical to building logic gates. In modern computers, transistors are used as switches for electrical signals.


Fig. 9: Logic Gates (switch closed = "1", open = "0"; light on = "1", off = "0")

All digital computers use logic gates to transform binary inputs into binary outputs. This works because the essence of computation is transforming inputs into outputs and any information can be represented as a binary string. The only purpose of logic gates is to flip 1s and 0s in a controlled and predictable way. 

Logic gates are very simple, yet computers perform many complex computations. The complexity is built up in steps and layers, through both hardware and software. A series of logic gates connected in the right way is needed to implement a simple arithmetic operation, such as addition. But once a specific pattern of logic gates is designed, it can be burnt into a dedicated computer chip, mass-produced, and used over and over again. Software can perform more complex calculations by repeatedly executing the low-level operations built into the hardware. For example, multiplication can be represented as repeated addition. At the software level, this repeated addition can be packaged as a Multiply command (function) and then reused without giving much thought to the underlying steps. In short, all computer processes are decomposable into a handful of basic operations, which, in turn, are decomposable into Boolean operations implemented with logic gates. Logic gates are implemented using electrical circuits, with transistors acting as switches.
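A minimal sketch of that layering: multiplication packaged as repeated addition, so that higher-level code never has to think about the additions underneath (real processors use faster methods, such as shift-and-add, but the layering principle is the same).

```python
# A minimal sketch of building a higher-level operation from a lower-level one.
def multiply(a: int, b: int) -> int:
    """Multiply by adding `a` to an accumulator `b` times."""
    total = 0
    for _ in range(b):
        total += a      # the only primitive used is addition
    return total

print(multiply(6, 7))   # 42
```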

 

In modern computers, switching circuits are not assembled from discrete components but printed directly onto a silicon chip. This makes it possible to produce tiny computers and integrate them into everyday objects. A baby camera, a thermostat, and a doorbell have become universal computers. As such, they can be hacked, reprogrammed, and used for nefarious purposes.

 

Claude Shannon had a brilliant and creative mind that imagined digital logic and, later, information theory. The cruel irony is that Shannon spent the last two decades of his life suffering from Alzheimer’s disease and died in 2001 in a nursing home.

Design of a Programmable Computer

Early computers were hard-wired to perform a particular task. Alan Turing and John von Neumann envisioned a machine that could perform any task by reading instructions from memory. John von Neumann took this idea further by creating a modular design for a physical computer with components reminiscent of a Turing machine: a Central Processing Unit (the Finite State Machine), extendable memory (the infinite tape), and input/output devices (the moving head). Von Neumann’s design is a general concept of a programmable computer, not a specific engineering solution. The key idea of the design is that a specific algorithm does not need to be built as a fixed arrangement of logic gates. Instead, it can be expressed as a set of instructions stored in memory. These instructions utilize the basic operations hardwired into the CPU. Data can also be read from and written back to memory.

 

The components of the von Neumann architecture include:

 

  • Arithmetic Logic Unit (ALU) - a pattern of logic gates designed to perform basic arithmetic and logical operations, such as multiply, add, subtract, load, store

  • Registers – a pattern of logic gates designed to store intermediate results as well as instructions and data loaded from memory

  • Central Processing Unit (CPU) – a place where ALU, Registers and Control Unit are connected by a “bus” or a group of wires

  • Control Unit – a pattern of logic gates designed to control the order in which ALU and registers read from or write data to the bus

  • Memory Unit – where programs and data are stored, e.g., as magnetized and demagnetized areas on a hard drive (magnetic bits)

  • Input / Output devices - keyboard, mouse, monitor, printer, etc.
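To tie these components together, here is a minimal sketch of the stored-program idea: a toy machine (not any real instruction set) in which instructions and data share one memory, and a control loop fetches, decodes, and executes instructions using a pair of registers.

```python
# A minimal sketch of a stored-program machine: instructions and data live in
# the same memory; the control loop fetches, decodes, and executes.
memory = [
    ("LOAD", 0, 9),    # register 0 <- contents of memory cell 9
    ("LOAD", 1, 10),   # register 1 <- contents of memory cell 10
    ("ADD",  0, 1),    # register 0 <- register 0 + register 1
    ("STORE", 0, 11),  # memory cell 11 <- register 0
    ("HALT", 0, 0),
    0, 0, 0, 0,        # unused cells 5-8
    20, 22,            # data: cells 9 and 10
    0,                 # cell 11: the result goes here
]

registers = [0, 0]
pc = 0                                   # program counter
while True:
    op, a, b = memory[pc]                # fetch and decode
    pc += 1
    if op == "LOAD":
        registers[a] = memory[b]
    elif op == "ADD":
        registers[a] += registers[b]
    elif op == "STORE":
        memory[b] = registers[a]
    elif op == "HALT":
        break

print(memory[11])   # 42
```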


Fig. 10: Von Neumann's Architecture

Conclusion

Computers can do so many different things because computation and information are the most fundamental and generic concepts. Any physical thing can be thought of as information and represented as a pattern of 0s and 1s. Any rule-based process can be thought of as a computation and represented by an algorithm on a Turing machine. The Turing machine was proposed as a formal definition of algorithm. As such, the Turing machine is a model of what is physically computable. Von Neumann’s architecture is a general concept of a programmable computer, which emphasizes that data and instructions can be stored in the computer’s memory separately from the fixed hardware structure.

 

Computers use 1 and 0 because:

  • Any information can be represented as a pattern of ones and zeros.

  • By using 1 and 0, the principles of arithmetic and logic could be combined.

  • One and zero are easy to implement using on/off switches or magnetized and demagnetized areas of a magnetic tape.

 

These simple ideas emerged from a long history of human thought. Curious but obscure and disconnected at first, these ideas eventually crystallized, converged, and became the foundation of our everyday technology. They made possible easy access to knowledge, nearly instantaneous communication, autonomous driving, navigation, speech recognition, virtual reality, and many other useful and convenient things. But the next chapter of this story will be even more incredible. The information age has changed not only our society but our understanding of reality itself.


In a 1989 essay, the visionary physicist John Archibald Wheeler proposed the “it from bit” doctrine: information sits at the core of the physical world. This intriguing idea prompted an ongoing scientific and philosophical debate. Is information just a mathematical tool and a useful metaphor – or is it the ultimate source of everything physical?

 

In 2002, the physicist and computer scientist Stephen Wolfram championed a computational paradigm for all branches of science. “If one views the evolution of a system as a computation, then each step is taking some amount of computational effort. Traditional mathematics suggests that this effort is unnecessary.” It should be possible to find the outcome of the evolution with much less effort by using a formula (e.g., computing planetary orbits). However, for most common systems, such as markets, biological organisms, and cognition, no traditional mathematical formulas have ever been found. Computational irreducibility means that there may be no way to predict how a system will behave except by going through as many steps of computation as the system itself. This idea introduces a new limit on the predictive power of science – a startling statement, given that the main objective of science is to make predictions.

 

Another tantalizing principle proposed by Wolfram is computational equivalence. The only objective thing that characterizes intelligence is the ability to perform sophisticated computations. The fluid dynamics of the weather has the same computational sophistication as our brain. Moreover, it is possible to get sophisticated computation in systems with a very simple construction. In other words, we may not need to peer into remote galaxies to find non-human intelligence; it may be all around us.

 

Information and computation are, possibly, the most important and provocative ideas that emerged from the research into the nature of reality.  

Timeline of Big Ideas


This is the direct lineage of ideas leading to the modern digital computer. There were many other deserving contributors to the field whose inventions were either dead ends (e.g., Charles Babbage) or who did not formalize and describe their methods for others to build on (e.g., Konrad Zuse).

Bibliography

  • The Pattern on the Stone: The Simple Ideas That Make Computers Work by W. Daniel Hillis

  • The Elements of Computing Systems: Building a Modern Computer from First Principles by Noam Nisan and Shimon Schocken

  • A New Kind of Science by Stephen Wolfram

  • An Investigation of the Laws of Thought by George Boole

  • Computing Machinery and Intelligence by Alan Turing

  • A Symbolic Analysis of Relay and Switching Circuits by Claude Shannon

  • Explanation of the binary arithmetic by Gottfried Wilhelm Leibniz (downloaded from http://www.leibniz-translations.com/binary.htm on 2/1/2022)

  • Computation and the Future of the Human Condition by Stephen Wolfram

  • The Science of Information: From Language to Black Holes by Professor Benjamin Schumacher (The Great Courses)
