### In the beginning was the code

##### March 15, 2013 by Jürgen Schmidhuber

*There is a fastest, optimal, most efficient way of computing all logically possible universes, including ours — if ours is computable (no evidence against this). Any God-like “Great Programmer” with any self-respect should use this optimal method to create and master all logically possible universes.*

*At any given time, most of the universes computed so far that contain yourself will be due to one of the shortest and fastest programs computing you. This insight allows for making non-trivial predictions about the future. We also obtain formal, mathematical answers to age-old questions of philosophy and theology.*

**Transcript of Jürgen Schmidhuber’s TEDx talk at UHasselt, Belgium, Nov. 10, 2012**

I will talk about the simplest explanation of the universe. The universe is following strange rules. Einstein’s relativity. Planck’s quantum physics. But the universe may be even stranger than you think. And even simpler than you think.

**Is the universe being created by a computer program?**

Many scientists are now taking seriously the possibility that the entire universe is being computed by a computer program, as first suggested in 1967 by the legendary Konrad Zuse, who also built the world’s first working general computer between 1935 and 1941. [1]

Zuse’s 1969 book *Calculating Space* discusses how a particular computer, a cellular automaton, might compute all elementary particle interactions, and thus the entire universe.
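Zuse’s cellular-automaton picture is easy to sketch. The toy below is not Zuse’s actual model — it uses the well-known elementary Rule 110 purely as a stand-in — but it shows the core idea: one simple local rule, applied identically everywhere, drives the whole “universe.”

```python
# Toy stand-in for Zuse's picture (not his actual model): a one-dimensional
# cellular automaton in which a single local rule, reused by every cell,
# updates the whole "universe". Rule 110 is chosen as a simple example.

RULE = 110  # the 8-bit lookup table, encoded as an integer

def step(cells):
    """Update every cell from its left/self/right neighborhood (wrapping)."""
    n = len(cells)
    return [
        (RULE >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

row = [0] * 15 + [1] + [0] * 15   # a single "particle" on an empty tape
row = step(row)
# After one update the lone live cell has grown leftward, as Rule 110 does:
assert [i for i, c in enumerate(row) if c] == [14, 15]
```

Note that every cell re-uses the same `step` subprogram — the same sense in which, in Zuse’s picture, every particle re-uses one piece of code.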

The idea is that every electron behaves the same, because all electrons re-use the same subprogram over and over again.

First consider the virtual universe of a video game with a realistic 3D simulation. In your computer, the game is encoded as a program, a sequence of ones and zeroes. Looking at the program, you don’t see what it does. You have to run it to experience it.

Reality still has higher resolution than video games. But soon you won’t see a difference anymore, since every decade, simulations are becoming 100–1000 times better, because computing power per Swiss franc is growing by a factor of 100–1000 per decade.

A few decades imply a factor of a billion. Soon, we’ll be able to simulate very convincing heavens and hells. It will seem quite plausible that the real world itself also is just a simulation.

To a man with a hammer, everything looks like a nail. To a man with a computer, everything looks like a computation.

Skeptics might say: What about quantum physics, and Heisenberg’s uncertainty principle, and Bell’s inequality? Don’t they imply that the universe cannot be produced by a deterministic program? Not at all. Bell himself knew well that deterministic universes including deterministically computable observers are fully compatible with all available physical observations.

**The universe as the sum of all mathematics**

When my brother Christof was a teenager in the early 1980s in Munich, he told me and others: the universe, or quantum multiverse, is the sum of all mathematics. I believe he is the reason why such ideas emerged in Munich.

He was younger than me. He still is. He also was smarter than me. He went on to become a physicist at Munich, Caltech, Princeton, and CERN, and he lived in Berne next door to where Einstein lived.

It took me a while to understand what my brother meant. In 1996, I formalized his idea in terms of computation. I generalized Everett’s many-worlds theory, pointing out that there is a very *short and fast* program that not only computes our own universe, or multiverse, but also all other logically possible universes, even those with different physical laws. For example, universes with anti-gravity.

In fact, there is a *fastest*, optimal, most efficient way of computing all logically possible universes, including ours — if ours is computable (no evidence against this).

The optimal method can be programmed with only ten lines of code. I wrote it down for you — here it is! [Holds up a piece of paper.] [2]

Any God-like “Great Programmer” with some self-respect should use this optimal method to create and master all logically possible universes.

Suppose he runs it for a while. At some point, many of the executed programs will have computed universes that contain *you*! You, as you are sitting here and staring at me with incredulous eyes.

You could even become a “Great Programmer” yourself, using the optimal method [holds note up again] to simulate all possible universes in nested fashion. (But this would not necessarily help to figure out the future faster than by waiting for it to happen. The computer on which to run this program would have to be built within our universe, and as a small part of the latter would be unable to run as fast as the universe itself.)

Anyway, now it’s easy to see that due to the nature of the optimal method, at any given time, *most* of the universes computed so far that do contain *yourself* will be due to one of the shortest and fastest programs computing: YOU.

**Predictions**

This insight allows for making non-trivial predictions about the future. There are many possible futures of your past so far. Which one is going to happen? Answer: given the probability distribution induced by the optimal method, most likely one of the few regular, non-random futures with a fast and short program.[3] (Because the weird futures where suddenly the rules change and everything dissolves into randomness are fundamentally harder to compute, even by the optimal method. Random stuff by definition does not have a short program.)
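A crude way to build intuition for this (it is only an illustrative stand-in, not Schmidhuber’s formal Speed Prior) is to use a general-purpose compressor as a rough proxy for “length of the shortest program”: a lawful, regular history compresses to a short description, while a history that dissolves into randomness does not.

```python
import random
import zlib

# Compressed length as a rough proxy for "length of the shortest program".
# This is an illustrative stand-in, not the formal Speed Prior.
regular = b"01" * 500   # a lawful, periodic history: 1000 bytes of pattern
rng = random.Random(0)
noisy = bytes(rng.getrandbits(8) for _ in range(1000))  # a "random" history

short = len(zlib.compress(regular))
long_ = len(zlib.compress(noisy))
assert short < long_   # the regular history has a far shorter description
```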

This implies that the decay of neutrons, widely believed to be random, most likely is not random, but pseudo-random, like the decimal expansion of PI, which looks random, but isn’t, because it is computable by a short and fast program.
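The pi example can be made concrete: Gibbons’ unbounded spigot algorithm streams the decimal digits of pi from a handful of lines, so the “random-looking” stream is fully determined by a short program.

```python
def pi_digits(count):
    """First `count` decimal digits of pi via Gibbons' unbounded spigot."""
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    digits = []
    while len(digits) < count:
        if 4 * q + r - t < n * t:
            digits.append(n)                      # next digit is safe to emit
            q, r, n = 10 * q, 10 * (r - n * t), (10 * (3 * q + r)) // t - 10 * n
        else:                                     # otherwise refine the state
            q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                (q * (7 * k + 2) + r * l) // (t * l), l + 2)
    return "".join(map(str, digits))

assert pi_digits(10) == "3141592653"   # looks random, but is fully determined
```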

**Why quantum computing may never scale**

The optimal method also implies that quantum computation will never work well, essentially because it is consuming so many basic computational resources. I first made this prediction a dozen years ago. Since then there has not been any progress in practical quantum computation, despite lots of efforts. (The biggest number factored into its prime factors by any existing quantum computer is still 15.) Quantum computation is sexy, but dead.

What about free will? Free will is overrated. In my group at the Swiss AI Lab IDSIA, we often program simulated worlds including simulated observers with simulated artificial brains. Through pseudo-random trial and error they even learn from experience to become smarter over time, acting as if they had free will. They have no idea that every thought in their artificial neural networks is computed by a deterministic program. (In a way, they do have free will — it’s just deterministically computed free will.)

**Computational theology**

Nevertheless, computer science is now giving us formal, mathematical answers to old questions of philosophy and theology. One of the results of my Computational Theology is this: *your* own life must be very important in the grand scheme of things.

You may think that your life is insignificant, because you are so small, and the universe is so big. But given the Great Programmer’s optimal way of computing all universes, it is probably very hard to edit *your* life (or mine) out of our particular universe: Any program that produces a universe like ours, but without *you*, is probably much longer and slower (and thus less likely) than the original program that includes you.

So with high probability, your life essentially has to be this way, with all of its ups and downs. *Your* life is *not* insignificant. It seems to be an *indispensable* part of the grand scheme of things.[4]

This is compatible with religions claiming that “all is one,” “everything is connected to everything.” May this thought lift you up in times of frustration.

**Footnotes:**

1. A recent KurzweilAI news article mentioned somewhat related ideas by Max Tegmark (1997/1998). How does Schmidhuber’s approach differ?

“My paper on all computable universes called ‘A computer scientist’s view of life, the universe, and everything’ got submitted/published in 1996/1997,” Schmidhuber told KurzweilAI.

“Back then, Max also was based in Munich (at LMU). He put forth this somewhat vague and not really formally well-defined notion of a mathematics-based ensemble of universes.

“He assumed a uniform prior distribution on this ensemble, which unfortunately cannot even exist, as there is no uniform distribution on countably infinite things. Over the years, Max and I had quite a few little chats about this :-) . I think a mathematical analysis of this type really must focus on the formally well-defined, limit-computable mathematical structures/universes.

“Max also completely ignores computation time, while the talk above is all about computation time, which makes a big difference between easy-to-compute and hard-to-compute universes, and greatly affects their probabilities, and thus the most likely futures of observers inhabiting them. I also addressed such differences in an additional 2000 paper on all formally describable universes (and also in the 2012 survey paper for H. Zenil’s book, *A Computable Universe*).

“I also wrote that I suspect my brother Christof Schmidhuber is the real reason why such ideas emerged in Munich. At the age of 17 he declared that the universe is the sum of all math, inhabited by observers who are mathematical substructures (private communication, Munich, 1981).

“As he went on to become a theoretical physicist at LMU Munich, Caltech, Princeton, and CERN, discussions with him about the relation between superstrings and bitstrings became a source of inspiration for writing both the first paper and later ones based on computational complexity theory, which seems to provide the natural setting for his more math-oriented ideas (private communication, Munich 1981-86; Caltech 1987-93; Princeton 1994-96; Berne/Geneva 1997–; compare his notion of “mathscape”).”

2. The preprint of a recent overview paper by Schmidhuber includes pseudocode (a simplified generic version) for the ten lines of code mentioned in the talk (see also slides):

**FAST Algorithm**

    for i := 1, 2, 3, ... do
        run each program p with l(p) ≤ i for at most 2^(i − l(p)) steps,
        then reset the storage modified by p
    end for

[here l(p) denotes the length of program p, a bitstring]

Schmidhuber explains: “This is essentially a variant of Leonid Levin’s universal search (1973), but without the search aspect. The code systematically lists and runs all possible programs in interleaving fashion. It can be shown that it computes each particular universe as quickly as this universe’s (typically unknown) fastest program, save for a constant factor that does not depend on the universe size.

From this asymptotically optimal method, we can derive an *a priori* probability distribution on possible universes called the Speed Prior. It reflects the *fastest* way of describing objects, not necessarily the *shortest*. (BTW, note that any general search in program space for the solution to a sufficiently complex problem will create many inhabited universes as byproducts.)”
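The scheduling behind FAST can be sketched directly. The sketch below models only the step budgets — what each program actually computes is left abstract — and shows the key property: in phase i, every bitstring program p with l(p) ≤ i gets a fresh budget of 2^(i − l(p)) steps, so shorter programs receive exponentially more time.

```python
from itertools import product

def programs(max_len):
    """All bitstring programs of length 1..max_len, shortest first."""
    for n in range(1, max_len + 1):
        for bits in product("01", repeat=n):
            yield "".join(bits)

def fast_budgets(phases):
    """Step budget granted to each program in the final phase of FAST.

    Storage is reset every phase, so the budget of phase i supersedes
    earlier ones; what a program actually runs is abstracted away here.
    """
    budget = {}
    for i in range(1, phases + 1):
        for p in programs(i):
            budget[p] = 2 ** (i - len(p))
    return budget

b = fast_budgets(4)
assert b["0"] == 8      # a 1-bit program gets 2^(4-1) steps in phase 4
assert b["0000"] == 1   # a 4-bit program gets 2^(4-4) = 1 step
# Phase i costs sum over all p of 2^(i - l(p)) = i * 2^i steps in total,
# so each doubling of total time roughly doubles every program's budget.
```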

3. Assume you are running computations for all universes in parallel, says Schmidhuber. Some contain you at a given time. So among those universes computed so far that contain you, which are the most likely ones — that is, what’s your most likely future? In a Bayesian framework, the Speed Prior permits non-trivial answers to questions of this type.

4. “Because most likely the universe to which you owe your current existence has a high *a priori* probability,” explains Schmidhuber. “Other possible variants of your life are less likely because they are harder to compute, even by the optimal method.”

*The insights mentioned in this talk were first published between 1996 and 2000, and further popularized in the new millennium. Detailed mathematical papers as well as popular high-level summaries can be downloaded from Schmidhuber’s **overview site on all computable universes**.*

*For information on a Master’s Degree in Artificial Intelligence through courses taught by Schmidhuber and colleagues, visit this site: **Master’s Degree in Informatics with a Major in Intelligent Systems***

*For new jobs for postdocs and PhD students in Schmidhuber’s research group, visit this site: http://www.idsia.ch/~juergen/eu2013.html*

## Comments (62)

May 24, 2013 by iomicrab

I wonder how computation time could be an issue if worlds like ours were computed: the first thing such code would have to realize is “time”.

Hence, no time would be needed ø:)

Instant eternity would be that code’s nature, and time its first result: a collection of logical branches.

May 1, 2013 by GAUSS

I’m going to start a religion where we all worship the Great Programmer, whose chief prophet is Alan Turing.

April 1, 2013 by luke

SCI-FI meanderings:

Evidence for hierarchical evolution via our DNA isn’t necessarily what it appears. A third option I don’t hear discussed is the possibility that our DNA was generated / observed / computed elsewhere. This article (and others like it) might encourage this type of exploration.

With significant computational power, the “time” factor that often separates paradigms falls.

DNA is code. I reuse code all the time in my projects. I have code from projects many years ago that gets inserted into brand-new programs I build today. A programming forensics expert might say my 1.3 code for my latest project has gone through “significant” changes (due to patterns in the code), but in reality I was merely reusing pre-existing code. This project hadn’t changed much at all (only from 1.0 to 1.3).

We (and our DNA) are the last ‘instance’ or iteration. What we observe in our DNA doesn’t mean it occurred in this iteration. We could reboot humans on Mars with a given DNA. Future generations thousands of years forward could lose historical record of the move to Mars. Might they one day incorrectly assume all the history in the DNA occurred on Mars?

Food for thought.

April 30, 2013 by hexkid

This analogy of DNA to code falls down, though, as we don’t just have the latest version of the DNA; we also have multiple branches from the parental forms (i.e., a bit like multiple forks on GitHub), and so we can identify the changes that have occurred along each branch and reconstruct the phylogenetic tree.

Whereas in a piece of code it is possible to completely refactor/replace a given subroutine, which will look completely different from the previous version, there is no such facility in the genetic code. It has to progress by changing individual letters while always keeping a functional version of the code; the only other possibility is the duplication of an existing gene, which can then diverge from the original.

April 1, 2013 by eldras

What matters is what we can build, and how what we build protects survival and diminishes pain. If the multiverse is infinite, or if there is infinite regression, we can’t capture the omniverse, and almost all its laws may remain unknowable. ’t Hooft states that Nature is faster than Man can ponder, and generates more laws faster than we can imagine.

March 25, 2013 by Matt Montgomery

Some interpretations of quantum mechanics say that wave function collapses do occur. Discrete many world ensemble theories say that the universe splits in two whenever a binary measurement of particle spin is made somewhere. For example, encode spin up = 0, spin down = 1. Such measurements occur trillions of times per second. So we get an enormous number of different possible universe histories.

But now Jürgen Schmidhuber says that some of those histories must have much higher probability than others, because certain non-random patterns of measurements like 0101010101… are much easier to compute by the fastest method of computing all histories than random patterns like 001010001010100111010… That’s the basis of his counter-intuitive predictions, and that’s why his theory satisfies the essential criterion of being falsifiable.

Now imagine time will prove him right. Then his papers of 1997, 2000, the Speed Prior paper at COLT 2002, and maybe even this blog and our comments, will become legendary.

March 20, 2013 by Cloudswrest

Does this mean we’re in “the best of all possible worlds,” to paraphrase Leibniz, via Pangloss?

March 22, 2013 by Matt Montgomery

Interesting question. It seems that according to Schmidhuber’s theory, we are probably in the “best of all possible worlds” if the “best world” is the “computationally simplest world that allows us to exist.” But is it?

Is this a new interpretation of Leibniz?

May 7, 2013 by Daniel

That doesn’t immediately follow from his theory. It does follow, however, that we are much more likely to be in a more perfect (i.e. easier to compute) universe than in a less perfect universe, since the more perfect ones have a higher a priori probability.

March 19, 2013 by Brad

The quantum nature of the universe seems particularly to harken to computability. Indeterminacy means that nothing need be calculated until it interacts in a significant way with something else (“observed”). What a massively effective efficiency / data compression mechanism! The majority of particles in the universe thus don’t need to be calculated every ‘cycle.’

March 19, 2013 by Devon

So if we are in a computer program, is the unit of time between computational iterations (as all programs I have ever written must have) equal to the Planck length divided by the speed of light? L / (L / t) = t??? Any physicists who can give this a really fast no for some simple reason?

March 18, 2013 by EP

This guy makes one particularly strange claim given his views: that it takes more computation to have your life happening than not (as justification for your life being important). It doesn’t take any more computation to have humans running around, since even the electrons in an idle rock are whizzing around and interacting with neighbors all the time. The particles themselves are already doing all the computation they can, essentially.

March 18, 2013 by Occam Razor

I think this is a misunderstanding. Of course the computationally cheapest universe is the one that does nothing at all. But clearly you are in another universe, where your life does happen. There are many different universes with different variants of your life. There are even more without you. But universes without you are irrelevant for you. That’s called the anthropic principle. According to his web site, the anthropic principle just says that the probability of finding yourself in a universe compatible with your existence is 1. In the talk he asks: “So among those universes computed so far that contain you, which are the most likely ones, that is, what’s your most likely future?”

March 17, 2013 by Phil Osborn

“You could even become a “Great Programmer” yourself, using the optimal method [holds note up again] to simulate all possible universes in nested fashion. (But this would not necessarily help to figure out the future faster than by waiting for it to happen. The computer on which to run this program would have to be built within our universe, and as a small part of the latter would be unable to run as fast as the universe itself.)”

So, if you created a program to simply eat clock cycles, then is it likely that your “lifespan” is short or long? A good optimization might include terminating such “people” before they wasted too much time. A simple optimization might yield one answer, while a more complex one might do better but take longer itself to run. Examples of this abound in practical computing, and while we can do sampling or preview the entire process of, say, video compression, the final choice of which algorithm to employ is itself subject to uncertainty. The question of optimization in decision theory is another good focus on this issue. There was a researcher at one of the CyberArts or Meckler VR conferences in the early ’90s who claimed to have developed the ultimate decision algorithm that included the time for choice of how to determine the method of choice of which algorithm to employ, through an infinite, somehow collapsed set of implied iterations. I have my doubts about that one.

Just as an aside, back in the early ’80s, I attended a Halloween party as a Karma Warrior, wearing as a pendant an etched silicon disk. My story was that I had discovered that the universe was a simulation and that those people who were game wreckers – exposing that fact – were automatically selected for deletion. However, I stumbled across this evidence in the course of designing this circuit that was itself capable of doing auto-modeling of universe-calculation algorithms. Turning the circuit on created a simulation of the simulation we live in, generating an infinite computational demand, which inherently slowed down all calculations in the immediate vicinity, creating a kind of fog of probability into which I could disappear temporarily, long enough that the master computer would reset for that area and I would not be included in the reset. The disk itself did not have to actually “run.” It only had to exist with its model of the program to force the system into a local reset.

However, the system was fully smart enough that its internal check functions would know that someone was tweaking its own reality, and thus planes would crash blocks away and gas mains would spontaneously ignite while I was getting the hell out of Dodge. Since I had essentially nothing to lose, my goal was to try to identify how I could bargain with the system for my existence, or even for moving up a level or two in the game… So far, so good.

March 25, 2013 by Peter

Very entertaining! Write the novel.

March 28, 2013 by MatthewQ

As the other poster says: write the novel.

I say: or someone else will.

March 16, 2013 by Ed

I don’t buy it.

He makes a generic prediction – that quantum computers won’t pan out because of the computational complexity that they would entail – but completely neglects quantum computers in real life! Figure:

1. Quantum electrodynamics: works on the ‘sum over all histories’ principle, where leptons and photons determine their behavior by going through all possible histories, and what ‘occurs’ is the average of these histories.

2. Quantum adiabatics: the ability of quantum particles to tunnel through higher energy states to settle on lower ones.

We see #1 all around us, and #2 is responsible for a number of natural quantum computing systems. In fact, we wouldn’t be alive except for proteins having the ability to go through an astronomical number of states and settle on a very good local (or perhaps global) minimum, in a very short amount of time.

So it’s sort of puzzling to me, what he’s saying. In any case, I’d really like to see how he responds to the above two points. And – if D-Wave has its way, we’ll probably get experimental confirmation of how much we can harness quantum computation in the next few years or so.

Ed

March 18, 2013 by Al Nex

Ed,

You’re confusing what quantum computing is. Sure, quantum physics happens all around us, all the time. But the main thing that makes a quantum computer special is isolation from the environment. In Schmidhuber’s view, this isolation is directly related to clock cycles. The more isolated a system is, the more independent its behavior becomes from the rest of the Universe, and thus the more clock cycles it consumes. What Schmidhuber is saying is that there is a limit beyond which further isolation cannot be achieved. And indeed, nowhere in the Universe – even in the intergalactic voids – do we find quantum isolation anywhere near the order of magnitude necessary for large-scale quantum computation.

About your point 2, adiabatic quantum computing (AQC) is, as you pointed out, quantum computing without isolation. However, this lack of isolation comes at a price – greatly reduced computational ability. Sure, proteins ‘evaluate’ a large number of configurations before settling on a ground state. But the number of configurations they sample is still finite, and far far less than what a full-blown quantum computer would be capable of.

And the thing is, if you want a more powerful AQC, you either need many more qubits or more isolation, and both of them run into the same problems as #1. There is no free lunch – if you want the level of power that full QC would provide, you need extreme isolation.

March 21, 2013 by Pete

Thanks for mentioning quantum computing. I am not an expert in that field, but I believe D-Wave’s QCs (plus the fact that they work with Lockheed Martin and the CIA) should be examined with great attention.

We already have real QCs now, despite academia’s “elites” telling us that QCs are not yet possible. Time for grassroots QC research movements.

March 21, 2013 by Al Nex

It has been examined with great attention. What it’s claimed to be doing is quantum annealing (QA). In general, QA is more powerful than some classical algorithms (or so we think). However, its power is still a far cry from true quantum computing with fully entangled qubits and no decoherence. True QC is still far in the future, and it’s what the people you call ‘elitists’ are talking about.

Some of the technologies that D-Wave uses are proprietary, but some external review of the chip has been done and it seems that it is using quantum effects. It is extremely hard to verify this though, even if a full description of the chip were available.

March 16, 2013 by Tom

If, for some reason, God/TheGreatProgrammer/whatever uses a non-optimal method to simulate this universe, wouldn’t this allow for us to predict the future in our universe by using the optimal method?

“You could even become a “Great Programmer” yourself, using the optimal method [holds note up again] to simulate all possible universes in nested fashion. (But this would not necessarily help to figure out the future faster than by waiting for it to happen. The computer on which to run this program would have to be built within our universe, and as a small part of the latter would be unable to run as fast as the universe itself.)”

March 18, 2013 by Occam Razor

But even then you’ll inherit the slowdown of your non-optimal universe containing your local computer programmed to simulate universes in optimal fashion.

March 28, 2013 by MatthewQ

That makes my head hurt :-[

March 15, 2013 by Snake Oil Baron

No free will means the fact that we suffer to stay alive is just a detail created by the “great programmer”. Those whom he allowed to be depressed enough to kill themselves, he or his software wanted out of the way. Those whom he allowed to think about suicide all the time but not do it because their desire to protect loved ones was computed to be just high enough, were needed by the software or the programmer. Neither suicide victim nor survivor deserves praise or blame for their choice because they had no choice.

So the great programmer is a jerk and to top it off he gave these deterministic bots the subjective sense with which to suffer rather than just be calculated to appear to suffer. I don’t think very highly of this great programmer. He should shut down this waste of resources and go back where he came from.

March 27, 2013 by Durabys

Yes. This is starting to remind me of the Merovingian in The Matrix: “Choice is an illusion created between those with power and those without,” and “You see, there is only one constant. One universal. It is the only real truth. Causality. Action, reaction. Cause and effect.”

What that guy is talking about is a predictable, static universe with zero free will, where all “sentient” beings are just elaborate clockwork machines. I think that when he is at home he behaves like an absolutist tyrant to his own family, because that is the only explanation I have for where he got such ideas about the universe.

March 27, 2013 by Bri

Ah, Merv! I love that line. It’s so true: “Choice is an illusion created between those with power and those without.” Ultimately you are indirectly referring to my problem with the premise of this article.

Yes, we do have free will. Yes, what we choose is deterministic. That point is played out in the movie. If you really follow the story, Neo is inevitable. The Oracle sees this and “unbalanced the equation.” The problem lies with mathematics. It is only a descriptive language. A single particle can be destined in an infinite number of ways. There is no way to fully describe all possible outcomes with mathematics. The particles themselves do the ultimate calculations that determine the universe. Mathematics is a pale reflection of the ultimate computational substrate. Mathematics can produce beautiful, concise formulas that are highly descriptive. E = mc² is a classic example. It works most of the time, but it does fall apart under certain conditions. What the “sum total of all mathematics” might be seems to be an unanswerable question in and of itself. There is an excellent article in Scientific American’s March issue that addresses the impossibility of calculating all the mathematically descriptive formulas for every particle interaction.

It refers to a unity principle: examining the problem as a whole rather than trying to describe all its individual particles. In a nutshell, we do this with problems like fluid dynamics: we use different formulas that describe macroscopic behavior. And there’s the rub. It’s a deterministic universe, but we can’t know all the variables precisely. The Matrix movie plays the same theme. Neo can’t be controlled, yet the outcome is expected. The “cause” of his unpredictable behavior (as seen by the AI/robotic world of the Matrix) is his love for Trinity, as opposed to his love of Zion – something the machines couldn’t account for, or should I say the Architect didn’t account for.

March 28, 2013 by MatthewQ

I don’t think I’m a jerk for not segregating my stomach bacteria according to those that worked harder or whatever. In truth, such a ‘great programmer’ would probably view us as a whole and not as individual data points unless one of us did something exceedingly interesting. Such a programmer would be so far beyond primitive creatures like you and me that it would be impossible to assume or figure out anything at all about its motivations. Certainly, primitive human descriptions (jerk for example) would not apply at all.

March 15, 2013 by Bri

I don’t know; I must be from Belgium.

March 15, 2013 by Bri

Mathematics is just a language that describes relationships. Take a magnetic sphere the size of a marble. The smallest unit that can make a cube is four marbles on a side. It’s a function of space. Just because it can be described mathematically is irrelevant to the fact that it’s the lowest state for a cube.

March 21, 2013 by Pete

I wish to add that all relationships (of all structures in the universe) are mathematical.

We need more computerized mathematics education (such as Wolfram’s software).

Instead of math contests and prizes for humans (which cause greed, petty human vanity, and jealousy – people in such systems do not want to share all of their discoveries), I propose we create robot mathematicians.

March 21, 2013 by Pete

No more jealousy, no more vanity, no more hoarding, and faster discovery/breakthrough rate.

We will arrive at the Singularity faster.

March 28, 2013 by MatthewQ

I agree with the sentiment but… Just go out on the street and see how many people you can get to go along with that. Go to the Middle East and take a good deep breath and shout ‘I’ve got a good idea- just stop fighting!’

Humans will never be that rational.

March 15, 2013 by Ockham Razor

This is Occam’s Razor to the extreme. The shortest and fastest (and perhaps most elegant) description of everything that’s possible.

Normally such theories of everything have no predictive power, because they predict everything and nothing. But this seems different, because of the computation speed issue.

In 2000 there already was a lot of quantum computation hype. But he boldly predicted that according to his model quantum computation won’t scale. So far he’s been right.

I’d love to see QC working. So I don’t like his prediction. But what if he keeps being right? Unlike some other TOEs, his is falsifiable. We just need to build a reliable large scale quantum computer!

March 15, 2013 by JC

Ok, so if you want to mess with the Great Programmer and make the computation more inefficient, just flip a coin for every decision you make from now on?

March 17, 2013 by Jerry

If the universe is encoded in a small handful of logic/math, you are incapable of rebelling because all your actions and inactions arise from that small rule set. If you “decided” to start flipping coins for your future actions then that decision and future consequences are still all derived from those fundamental line(s) of code.

This very conversation and “awareness” of this reply is just following a strict set of rules that appears complex and random until you follow it back to the beginning, where you end up with the same initial boot code ;)

March 21, 2013 by Pete

I want to see how this concept (a large, complex structure arising from a few lines of code) relates to the concept of “embryonic manufacture”.

In the future, complex robots and their internal software may need to be grown through an embryonic process.

March 28, 2013 by MatthewQ

You could put it to the test ;-)

But I think it would be an inherent property of sentience to rebel against such randomness. I think you wouldn’t last very long flipping your coin. You’d want to see a film or have a meal or some hot person would offer you sex and you’d chuck the coin flipping out the window.

So, sure, if all humans began flipping coins, it might upset some local balance but in practice it could never ever happen.

April 6, 2013 by Pithiest

How does he know he has the optimal program? This sounds a lot like the proof for god, wherein god is defined as perfect, a perfect god must actually exist, and therefore the existence of god has been proved. It also reminds me of Voltaire’s “Candide”: the best of all possible worlds. Also, just because someone makes a bold prediction doesn’t mean they actually know anything.

March 15, 2013 by tedhowardnz

I just read the guy’s paper, and it is weird.

He posits Occams razor to support his hypothesis, but the numbers involved are mind numbing.

If you simply assume that the universe is what it is, with a certain set of probability functions defining interactions, then the numbers involved come to about 10^220 quantum states in the time since the universe began. To run the sort of general computational algorithm he is talking about requires roughly raising that number to its own power of computational states.

Why would anyone opt for such a functionally complex explanation, however compact its mathematical shorthand?

Does not compute in my world (and I have spent 40 years programming computers in practical situations).

March 16, 2013 by Editor

In “Computational capacity of the universe,” http://arxiv.org/pdf/quant-ph/0110141.pdf, MIT prof. Seth Lloyd estimates that the universe could have performed ≈ 10^120 operations in its history so far. He also notes that all the man-made computers in the world have performed no more than ≈ 10^31 ops over the last two years (this was published in 2001), and no more than approximately twice this amount in the history of computation.

April 10, 2013 by Pete

Could you tell us the number of operations performed by man-made computers from 2001 to today?

April 10, 2013 by Editor

Sorry, I don’t have that information.
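
The orders of magnitude in Lloyd’s estimates above can be checked with a few lines of arbitrary-precision arithmetic. This is only illustrative back-of-the-envelope math, not from the article:

```python
# Seth Lloyd's estimates (orders of magnitude only):
#   ~10^120 elementary operations performed by the universe in its history,
#   ~2 * 10^31 operations by all man-made computers ever (as of 2001).
universe_ops = 10**120
manmade_ops = 2 * 10**31

# Python integers have arbitrary precision, so the ratio is exact: 5 * 10^88.
ratio = universe_ops // manmade_ops
print(f"universe / man-made ~ 10^{len(str(ratio)) - 1}")  # prints: universe / man-made ~ 10^88
```

In other words, as of 2001 the universe was ahead of all human computation by a factor of roughly 10^88.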

March 15, 2013 by tho

In the industrial era, the universe was a big mechanism. Now it is like a computer. In the future it will be like whatever major technology we discover. And there will always be super-inflated egos who will declare they have got it.

But it’s all a big joke, made even funnier by the seriousness of its priesthood.

March 15, 2013 by Rob

It’s official: these guys are not only in the ivory tower, they’ve gotten lost in it. They state they have pseudo-intelligences running around in pseudo-worlds, and based on this they can predict that reality is a simulation? This is crank science at its worst. Which seems to be the general direction of science over the last few decades. Hypothesize, test, observe, note results: this is science. Not modeling and half-guesses. No hypothesis based on modeling is valid until and unless observational data supports the models.

March 16, 2013 by smb12321

I agree wholeheartedly. These “ideas” inevitably introduce non-scientific elements – Oriental mysticism, New Age bull, a Great Programmer. And just because they have built an elaborate structure with accompanying philosophy and mathematics does in no way validate their claims. In the end, they are still mind games.

So many of our presumptions are guided by human psychology, hopes and history. This has the ring of “aliens made the Pyramids” with lots of bells and whistles.

March 18, 2013 by Occam Razor

The “Great Programmer” is rather irrelevant for this line of reasoning. Schmidhuber wrote under “Limitations of the Great Programmer” that he need not be smart. His job is almost trivial, since the optimal algorithm is so simple. I think he is just a straw man to acknowledge still-open questions: where does the top level of the hierarchy of nested universes come from? Where do computability and logic come from?

April 2, 2013 by Codie Petersen

A 7th grader’s science fair project.

March 22, 2013 by Matt Montgomery

It’s more than just a mind game, because he has made falsifiable predictions, as required by empirical science. For now, the ongoing absence of working quantum computers empirically supports his theory.

April 17, 2013 by A4i

Well, probably intelligent beings from this Earth have already reached the Singularity, but our ancestors were not in their ranks. Ask yourself: is it useful for the Universe for a barbarian to be elevated to God-like status?

March 15, 2013 by Boristabby

Ah. There is that presumptive word again: TRUE. Was it once true, is it still true, will it be true tomorrow as we know more?

Is there only conceptual truth?

Literal truth has proven elusive.

Are we better advised to just grab and run with what we see today?

Silentknowing (oneness) may serve you better? (:-) Cheers!

March 15, 2013 by silentrage

Is there actually any evidence that the simplest explanation/most efficient solution is always the most likely to be true? Or is that our bias?

I don’t see the logic in: we can compute some universes, therefore all universes must be computable, therefore this universe must be a computation. It seems very illogical to me.

March 15, 2013 by Calum

Good question. Occam’s Razor may be a good way to select a working hypothesis, but is it any guide to reality?

March 15, 2013 by John

Of course it isn’t.

March 15, 2013 by Occam Razor

So far Occam’s Razor has been a great guide to reality, hasn’t it? There was a time when people had no idea that many of their observations can be described by a simple mathematical equation, in other words, a short program. Then they discovered gravity and other simple laws to compress the observations. Occam’s Razor at work!
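
The compression idea in this comment can be made concrete. A toy sketch (my own illustration, not from the article): data governed by a simple law compresses far better than lawless data, because the compressor effectively finds the short description behind it.

```python
import random
import zlib

# "Observations" governed by a simple law (uniform motion: position = 7 * t, mod 256).
lawful = bytes((7 * t) % 256 for t in range(1000))

# Lawless observations: uniformly random bytes, with no short description.
random.seed(0)
lawless = bytes(random.randrange(256) for _ in range(1000))

# The lawful data compresses to a fraction of its size; the random data does not.
print(len(zlib.compress(lawful)), len(zlib.compress(lawless)))
```

zlib only exploits repetition, a crude stand-in for true Kolmogorov complexity, but the asymmetry already shows why short programs single out lawful observations.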

March 25, 2013 by Ali

Occam’s Razor, as far as we know, works in OUR universe; we can’t prove that it should work in all universes.

March 25, 2013 by Editor

How do we know?

March 26, 2013 by Ali

That’s the point: we don’t know.

March 15, 2013 by Pete

Try this:

We have several explanations, from simplest to most complex.

If the simplest (and I guess also the most complex) is not valid, perhaps the central one will be the most likely (to be true) one?

BTW, in the future, everyone will have femtoscale quantum computers, and hi-def reality simulations can be run routinely.

Simulation/calculation is always better than speculation, I believe.

March 28, 2013 by MatthewQ

I agree somewhat with this. It is unlikely that there would be just one being running simulations. Some coders would create the most efficient method; others would create something like Microsoft Windows.

We wouldn’t really know from inside that we were designed by a piss-poor programmer.

What is an uncomfortable thought is that the good programmers got good by creating trash that they deleted. Even though a simulation was substandard, it probably had uncountable trillions and googagillions of sentient beings in it, but it did not please the programmer, so into the recycle bin it went.

The uncomfortable bit to me is- I assume this has already happened. If not to us then to other simulations elsewhere. Sort of makes any sort of human holocaust look almost pleasant in comparison.

April 25, 2013 by Rob Falgiano

True on the holocaust part, except the beings wiped out by being dragged to the trash bin would presumably be instantly annihilated and thus would experience no pain or even knowledge of their death. They’d exist, and then they wouldn’t.