This is a story in four parts of mathematics, logic, science and philosophy. I hope you enjoy reading it as much as I have enjoyed writing it.
Part 1: Search for a Universal Machine
In 1985, physicist Stephen Wolfram made a remarkable claim: that the number 110, used as a kind of colouring code for filling in the next row of a grid, could serve as a computer as powerful as any we have ever built. "Rule 110", according to Wolfram, is a "universal machine". Specifically, it is "Turing-complete", which means it can be programmed with the same instructions (software) and input data, and produce the same result, as any other Turing-complete computer - which is to say, all general-purpose computers. That was Wolfram's claim, at least.
Wolfram had been developing and studying these simple rules - whose numbering scheme has come to be known as the "Wolfram code" - in the years leading up to his 2002 magnum opus A New Kind of Science. The principle is relatively simple, and I will try to explain it in words and then visually. Imagine you have a sheet of paper containing a grid of squares. You colour some in at random along the top row. This is your "input string". You then colour in the next row by following a simple rule for each square, according to the three squares above it (the square directly above and one on either side of it). If you think of a coloured-in square as a 1 and an empty square as a 0, there are eight possible combinations for any segment of three squares, which you can write as: 111, 110, 101, 100, 011, 010, 001, 000. For each rule, you colour in the square below some of these combinations and leave blank the square below the others. There are 256 (2^8) different colouring schemes, which we call rules, and you can work out the colouring scheme for each rule by writing its number in binary below the 8 combinations, like this example for rule 30 (image courtesy Wolfram MathWorld):
Along the top we see each of the possible patterns that might appear above a given square. Along the bottom we see the number 30 written out in binary (00011110). In the middle we see that squares corresponding to a 1 in the rule number are coloured in, while squares corresponding to a 0 are left blank. By following this colouring rule, line after line, a strange pattern emerges, as we see here for an input string consisting of a single coloured-in square in the middle of our top row (left, from Wolfram MathWorld) and after a few hundred iterations (right, my own render):
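If you prefer to see the procedure spelled out in code rather than in pictures, here is a minimal sketch in Python (my own illustration - the function and variable names are arbitrary, not part of any standard library):

```python
def rule_table(rule_number):
    """Map each 3-cell neighbourhood (a tuple of 0s and 1s) to the next cell,
    by reading off the rule number's eight binary digits."""
    bits = f"{rule_number:08b}"                      # e.g. rule 30 -> "00011110"
    neighbourhoods = [(1, 1, 1), (1, 1, 0), (1, 0, 1), (1, 0, 0),
                      (0, 1, 1), (0, 1, 0), (0, 0, 1), (0, 0, 0)]
    return {n: int(b) for n, b in zip(neighbourhoods, bits)}

def step(row, table):
    """Compute the next row; squares beyond the edges are treated as blank (0)."""
    padded = [0] + row + [0]
    return [table[(padded[i - 1], padded[i], padded[i + 1])]
            for i in range(1, len(padded) - 1)]
```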
These rules are called the elementary cellular automata. Further examples and a more thorough explanation are available from the Wolfram MathWorld website. They are a fascinating topic in their own right, but for now let us return to Wolfram's peculiar claim that one of these rules - Rule 110 - is capable of universal computation. I have generated my own output for the first few iterations of Rule 110 below:
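A few more lines are enough to reproduce an output like this yourself, reusing the sketch above (the rendering characters are an arbitrary choice of mine):

```python
table = rule_table(110)
row = [0] * 40 + [1]          # a single coloured square near the right edge
for _ in range(20):
    print("".join("#" if cell else "." for cell in row))
    row = step(row, table)
```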
The variably sized triangular shapes that emerge in the pattern are referred to as "spaceships", or more specifically for simple triangles like these, "gliders". They earn that name because of the way they seem to 'fly' across the grid over time - an impressively 2-dimensional feat for structures that emerge from a purely 1-dimensional row evolving line by line. Wolfram saw that gliders of different types or sizes, moving at different speeds, could crash into each other and stop, or be destroyed, or cause a new glider to emerge from the wreckage. Certain patterns in the initial input string (or occurring naturally later) could become 'emitters' from which new gliders would launch periodically. More importantly, he noticed that certain patterns or 'structures' in the input string were stable, and would be conserved from line to line - at least until a glider crashed into them and caused a change. Wolfram hypothesised that an input string could be constructed that took advantage of the interactions between gliders, emitters and structures to simulate the essential parts of a computer - to store data, and to work through a series of instructions to perform computations.
This was Wolfram's contention, but as of 1991 (six years after the initial conjecture) he hadn't been able to prove it. He certainly could not demonstrate a working Rule 110-based computer! As his attention became increasingly stretched by ballooning responsibilities as president of his growing company (Wolfram Research, makers of Mathematica), he assigned the task of proving the universality of Rule 110 to an enthusiastic young assistant named Matthew Cook, just 21 years old at the time. Over the coming months, as Cook struggled with ever more sophisticated approaches to the problem, his enthusiasm faded, and eventually he confessed to Wolfram that he thought the problem was unsolvable. Wolfram disagreed, and encouraged Cook to keep working on it. Cook complied, and spent the next several years carefully identifying and cataloguing the different types of glider that emerged from the rule, in the hope of understanding their behaviour more thoroughly. By 1994 he had gathered enough information to suggest that a solution, however far off, might really be possible. All the right interactions between the gliders seemed to be there to make a universal machine, but constructing a mathematical proof required far more rigorous reasoning than the vague assertion of having the right pieces. A working computer based on Rule 110 still seemed far away.
By 1998, Cook was convinced that he had solved it. He presented his results, including the details of his proof, at a Santa Fe Institute conference that year, to the extreme displeasure of his employer. Wolfram Research, acting as a direct proxy for Stephen Wolfram, sued Cook for violating his non-disclosure agreement, which asserted intellectual property rights over all research conducted by company employees. Wolfram successfully obtained an injunction against publication of Cook's results, and everything went quiet for a while. A New Kind of Science was published in 2002, explaining Cook's proof in rough detail, and in 2004 the original proof was finally published in Wolfram's own journal, Complex Systems. The years since have seen incremental improvements to Cook's work, mainly in reducing the overhead of the simulation. The universality of Rule 110 remains an esoteric area of research, and there are still no general methods for turning an arbitrary computer program into an input string. The science is there, so to speak, but the technology is still waiting to be built.
Part 2: Biology and Universality
In many ways the most exciting consequence of Cook's proof is not that we may be able to use it to build computers, but that the mechanism is so simple that nature has almost certainly already built them. Patterns resembling other elementary cellular automata have already been observed in nature. You might compare our example Rule 30 to the pattern displayed by Conus textile shells:
Any situation involving a 1-dimensional pattern (a line) being copied and changed may be explainable as a cellular automaton. The mechanism of deriving each cell from the three cells above it is reminiscent of many physical processes, from the simple extrusion of a blade of grass from a stem, to the complex process of DNA transcription into RNA, repeated billions of times every second (click for video):
The observation of cellular automata in nature need not be so direct. The versatile Rule 184 has been found to model the way that falling particles deposit onto a surface, the way that particles and antiparticles mutually annihilate on collision, and even the way that traffic congestion builds and disperses on a busy road. The question is not whether nature creates cellular automata; it is simply a matter of finding them. Matthew Cook needed an input string of a few million cells - around one megabyte of data - to demonstrate universality. Although this figure has been substantially improved upon by subsequent research, it is an amount easily dwarfed by many common biological processes. The Paris japonica flower, to take an extreme example, has DNA containing around 149 billion base pairs. Since each base pair can be represented by two bits, this flower's DNA transcription process can be said to have an input string of roughly 37 gigabytes!
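To give a feel for how one rule can wear so many hats, here is Rule 184 read as a toy traffic model (a common interpretation of the rule, though the code itself is just my own sketch): each 1 is a car, each 0 is empty road, and a car advances one cell per time step whenever the cell ahead of it is empty.

```python
import random

def rule184_step(road):
    """One time step of Rule 184 on a circular road: 1 = car, 0 = empty."""
    n = len(road)
    return [1 if (road[i] == 1 and road[(i + 1) % n] == 1)   # blocked car stays put
            or (road[i] == 0 and road[(i - 1) % n] == 1)     # car behind moves in
            else 0
            for i in range(n)]

road = [1 if random.random() < 0.4 else 0 for _ in range(60)]  # 40% traffic density
for _ in range(15):
    print("".join(">" if cell else "." for cell in road))
    road = rule184_step(road)
```

Run it a few times with different densities: below about 50% occupancy the jams dissolve, while above it they persist and drift backwards, much as they do on real motorways.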
The endless folds of DNA and RNA encodings could comfortably be hiding computational systems in their depths. The consequence is that many natural processes which we have assumed to be regular, mechanical and predictable may in fact be the product of careful calculations executed simultaneously by trillions of molecular computers. Our powers of prediction over nature may very well turn out to be limited not by the progress of scientific research, but by a much deeper problem: the inherent unpredictability of computational systems. In computer science, the "halting problem" is recognised as a fundamental limit of computing: there is no general procedure that can examine an arbitrary program (its source code and input string) and decide whether that program will ever terminate. If computational systems arise in nature, the halting problem would seem to impose a similar limitation on our ability to predict how and when vital components of life will cease to function. Is death really inevitable? The problem may turn out to be undecidable.
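The argument for why no such procedure can exist fits in a few lines. Suppose, hypothetically, someone handed us a function halts(program, data) that always answered correctly; the following sketch (the oracle is imaginary - no such function can be written) forces it into a contradiction:

```python
def paradox(program_source):
    """Feed a program its own source code, then do the opposite of whatever
    the hypothetical oracle halts() predicts."""
    if halts(program_source, program_source):   # oracle says: "it will terminate"...
        while True:                             # ...so loop forever instead
            pass
    else:                                       # oracle says: "it will loop forever"...
        return                                  # ...so terminate immediately

# Asking halts(source_of_paradox, source_of_paradox) now has no consistent answer:
# whichever verdict the oracle returns, paradox() does the opposite.
```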
Part 3: A Logical Conclusion
The halting problem, first identified by Alan Turing in 1936, turned out to be a special case of a more general limitation on reasoning proven by the logician Kurt Gödel five years earlier. To understand where Gödel was coming from, we need to look back earlier still, to a mammoth effort by Alfred North Whitehead and Bertrand Russell to formalise the rules of mathematics into a consistent logical system. They published their work between 1910 and 1913 in a three-volume tome called Principia Mathematica, which to this day provides an important framework for the unification of mathematical research. Their work was widely celebrated, and seemed to close many logical gaps that had been left as unanswered questions in mathematics - but this logical soundness came at the cost of some expressive power.
One such cost was a weakening of set theory, which is closely related to computation and the construction of rigorous mathematical proofs. Set theorists had been plagued by a number of "paradoxes" involving self-reference: does the set of all sets that do not contain themselves contain itself? Russell and Whitehead closed the self-referential gap by denying that it exists. They assigned a numerical order to each level of nesting in a set. A set that contained no sets, only regular objects, was of order 0. A set that contained only order 0 sets (and perhaps regular objects) was of order 1. A set that contained order 1 sets was of order 2, and so on. By modifying set theory so that a set could only contain sets of lower order than itself, or regular objects, all such paradoxes could be avoided. Principia Mathematica systematically tamed the existing self-reference in mathematics and, with considerable fanfare, was presented as both consistent (no false statement could be derived from it) and complete (every true statement in mathematics could be derived from it).
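To make the notion of 'order' concrete, here is a toy sketch (my own illustration, using nested Python frozensets to stand in for sets of sets):

```python
def order(s):
    """Order 0 for a set of plain objects; otherwise one more than the
    highest-order set found among its members."""
    member_sets = [m for m in s if isinstance(m, frozenset)]
    if not member_sets:
        return 0
    return 1 + max(order(m) for m in member_sets)

fruit  = frozenset({"apple", "pear"})     # order 0: only regular objects
basket = frozenset({fruit, "banana"})     # order 1: contains an order-0 set
shelf  = frozenset({basket})              # order 2: contains an order-1 set
print(order(fruit), order(basket), order(shelf))   # prints: 0 1 2
```

In Principia's scheme a set may only ever contain sets of strictly lower order, so a question like "does this set contain itself?" cannot even be written down.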
Gödel, an expert in the area of formal proof, found the claim to be specious. He noticed that the claim to completeness - that the foundations set out in Principia Mathematica were powerful enough to derive all true statements in mathematics - was itself a self-referential claim which could not be proven. His reasoning went something like this:
1. Imagine the set of all possible true statements in mathematics.
2. We want to prove the claim that "the set of all true statements in mathematics can be derived from the rules of Principia Mathematica".
3. If our claim is true, then it must be part of the set of true mathematical statements we imagined in (1).
4. Since our claim contains a reference to the set of all true statements, it cannot be part of the set of statements that can be derived from Principia Mathematica, from which only non-self-referential statements can be derived.
5. Principia Mathematica must then be either inconsistent or incomplete - and quite possibly both.
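Gödel's real proof does not deal in sets of statements at all, but in a single, carefully constructed sentence that talks about its own provability. In modern notation (a compressed paraphrase on my part, not the form the argument takes above), one builds a sentence G for which the system itself proves:

\[ G \;\leftrightarrow\; \neg\,\mathrm{Prov}_{PM}\!\left(\ulcorner G \urcorner\right) \]

where Prov_PM expresses "is provable in Principia Mathematica" and ⌜G⌝ is the numerical code of the sentence G. If the system could prove G it would end up proving a contradiction; if it is consistent, then G is true but unprovable, and the system is incomplete.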
Douglas Hofstadter illustrates the way that a set of rules in a formal system can expand to attempt to fill the space represented by all true statements in that system, but will never be able to fill it, by drawing the shape of the derivations as a tree (image taken from Gödel, Escher, Bach):
Though the picture does not show it, the true shape of the tree, which is generated by a repeated application of a simple set of rules, is fractal in nature. The edges are not blurry, but are nevertheless impossible to define, because the pattern is self-similar at every level, no matter how closely you look. This property highlights the infinite detail that can occur in a sufficiently powerful formal system, which still somehow takes on a recognisable shape.
Notice that this kind of reasoning applies equally to any logical system that claims to be consistent and complete, not just Principia Mathematica. The general principle is called Gödel's incompleteness theorem, and its proof - which is significantly more nuanced than the plain-English version I have presented here! - is quite irrefutable, and more than a little bit eerie. It says, in essence, that it is impossible to formalise truth. Truth will always escape from any attempt to capture it in a framework of rules, no matter how broad and powerful those rules may be. With truth as the cornerstone of knowledge, we are forced against our will to scale back the hunt for ultimate knowledge. Science will prove no grand unified theory of everything, because it inherently lacks the tools to do so. Objective truth simply cannot exist, not even theoretically.
Part 4: Irreducible Complexity
None of this is to say that the scientific approach of reductionist positivism - to break apart a problem, observe the behaviour of its parts, and find logically consistent explanations - has no place. Science, and its more practically-minded cousin Technology, have been responsible for the most important advances ever made by humanity. We are conquering disease, and a growing proportion of the species is becoming part of a global communications network that allows any member to communicate with any other, instantly, from just about anywhere. The achievements of science and technology are not in question here. Yet the once seemingly boundless potential of understanding the world, and harnessing its power, by looking at how its parts interact may find the limits of its influence far nearer than anyone expected. If there are computational systems in nature, then the best we can hope for from scientific enquiry is to identify them. It will be a different field of study that sheds light on how these systems work, what they do, and whether we can use that knowledge to predict or control them.
It is through this channel of research that we will arrive, inexorably, back at Gödel's incompleteness theorem. It starts by trying to model the system. The behaviour and output of a system of computation - whether it be semiconducting silicon in your desktop computer or a plant's protein chain being transcribed and altered by a sophisticated enzyme - cannot, in the general case, be reduced to a simpler problem or solved in less time. It can only be computed, which is, unsurprisingly, exactly what these computational systems do. We can create useful abstractions (isomorphisms, in the language of formal systems) of the behaviour we identify, as we have done with our own computers by creating programming languages, and further languages layered on top of them. This is certainly useful for understanding (and recreating) certain abstract patterns in a way that makes sense to our own ideas about logic, but it will always be an unsatisfying, partial understanding. Moreover, it will never be complete in a formal sense - our isomorphisms will always eventually meet some observed phenomenon that seems to be inconsistent with the current theory.
Eventually, we will all have to accept, as mathematicians did in the 1930s, that logical consistency is a property of our own subjective interpretations, not an inherent property of any system. The system is what it is. We can infer a set of rules and axioms and say that it is consistent with the observed evidence - merely claiming that we have not yet found any evidence which is inconsistent - but we rarely consider how weak this statement really is. It is enough to describe many observed physical phenomena in a way that is useful to other humans, but it inevitably breaks down in the face of universality, where cause and effect are not related by any reducible physical mechanism. Building up digestible theorems based on the observed behaviour of a universal machine will never yield a higher understanding of it. We should learn from history and not find ourselves working at the same hopeless task as Whitehead and Russell: attempting to capture all of mathematics in a few axioms and rules.
Fortunately we can study these systems in isolation with simulated cellular automata, which give us a way of exploring the strange internal landscape of our own logic. It is my contention that this new form of investigation, which looks deep into the structure of our own thought rather than the physical structure of matter, will be the next stage for humanity in our unending pursuit of understanding.
Bibliography
Weisstein, Eric W. "Elementary Cellular Automaton." From MathWorld - A Wolfram Web Resource.
Shalizi, Cosma (2002) "Review: A New Kind of Science", Center for the Study of Complex Systems, University of Michigan.
Cook, Matthew (2004) "Universality in Elementary Cellular Automata", Complex Systems, 15(1), 1–40.
Martínez, Genaro J. et al. "Reproducing the cyclic tag system developed by Matthew Cook with Rule 110 using the phases fi_1"
Hofstadter, Douglas (1979) "Gödel, Escher, Bach: An Eternal Golden Braid", Basic Books.