A team of physicists led by Mir Faizal at the University of British Columbia has demonstrated that the universe cannot be a computer simulation, according to research published in October 2025[1].
The key findings show that reality requires non-algorithmic understanding that cannot be simulated computationally. The researchers used mathematical theorems from Gödel, Tarski, and Chaitin to prove that a complete description of reality cannot be achieved through computation alone[1:1].
The team proposes that physics needs a “Meta Theory of Everything” (MToE): a non-algorithmic layer above the algorithmic one that determines truth from outside the mathematical system[1:2]. This would help investigate phenomena like the black hole information paradox without violating mathematical rules.
“Any simulation is inherently algorithmic – it must follow programmed rules,” said Faizal. “But since the fundamental level of reality is based on non-algorithmic understanding, the universe cannot be, and could never be, a simulation”[1:3].
Lawrence Krauss, a co-author of the study, explained: “The fundamental laws of physics cannot exist inside space and time; they create it. This signifies that any simulation, which must be utilized within a computational framework, would never fully express the true universe”[2].
The research was published in the Journal of Holography Applications in Physics[1:4].


Again, I really appreciate how deep you’ve gone into this. I haven’t dealt with these topics for many years, and even then I mostly dealt with the actual physical system of a single cell, not what you can build out of them. However, I think that’s where the core of the issue lies anyway.
So you ran a simulation of those neurons?
LIF neurons can be physically implemented by combining classic MOSFETs with redox cells. Like: Pt/Ta/TaOx with x < 1. Or with hafnium oxide or zirconium oxide instead of the tantalum oxide.
The oxygen vacancies in the oxide form tiny conductive filaments a few atoms thick. While the I-V curve is technically continuous, the number of different currents you can actually measure is limited. Shot noise even plays a significant role, because the discreteness of electrons matters at these scales.
Under absolutely perfect conditions, you can maybe distinguish 300 states. On a chip at room temperature, maybe 20 to 50. If you want to switch fast, it’s 5 to 20.
That’s not continuous; it’s only quasi-continuous. It’s still cool, but not outside the mathematical scope of the theorems used in the paper.
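To put rough numbers on it, here’s a back-of-the-envelope sketch of the kind of estimate I mean (all values are made-up illustrations, not from any datasheet):

```python
# Back-of-the-envelope: how many conductance states can you tell apart
# if adjacent states must sit a few noise standard deviations apart?
# All numbers below are illustrative assumptions, not measured values.
g_min = 1e-6      # minimum device conductance (S)
g_max = 1e-4      # maximum device conductance (S)
sigma = 5e-7      # read-noise standard deviation (S)
separation = 4    # require ~4 sigma between adjacent states

n_states = int((g_max - g_min) / (separation * sigma)) + 1
print(n_states)   # -> 50 with these numbers: quasi-continuous at best
```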
And yes, continuity is not everything. You’re right that busy beaver numbers are not computable in principle. But this applies to neuromorphic computing just the same.
But it doesn’t. No such extension can be meaningfully defined. If the busy beaver function could be calculated, it could be used to solve the halting problem: run any n-state machine for BB(n) steps, and if it hasn’t halted by then, it never will. That’s impossible for purely logical reasons, independently of what you use for computation (a brain, neuromorphic computing, or anything else). Approximations would be incredibly slow, as the busy beaver function grows faster than any computable function.
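To make that reduction concrete, here’s a sketch in Python. The `bb()` oracle and the `machine` interface are hypothetical stand-ins, and the point is precisely that `bb()` cannot exist as a computable function:

```python
def halts(machine, n_states):
    """Decide halting for an n-state Turing machine, GIVEN a busy
    beaver oracle. bb() and the machine interface are hypothetical."""
    limit = bb(n_states)   # max steps any halting n-state machine takes
    for _ in range(limit):
        if machine.halted():
            return True
        machine.step()
    # By definition of BB(n), every halting n-state machine halts
    # within bb(n_states) steps, so this machine runs forever.
    return machine.halted()
```

So an oracle for the busy beaver numbers would decide halting for every machine of that size, which is exactly why no system, analog or digital, can compute it.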
I’m not sure I understand what you mean by states. Do you mean measured externally? Or does part of the system discretize the signals? Or are you saying that while the driving fields may be continuous, the molecular structure enforces some sort of granularity on the signals?
You seem to know much more than me on the hardware side.
The last time I looked at hardware I came across “ferroelectric synapses” which do the STDP learning. I think it had something to do with the way electric dipoles align when a voltage is applied. I don’t think it requires measurement at any step, and it’s continuous whether or not we have good enough hardware to measure those changes.
Yes. A very slow and very inaccurate one. I had to approximate the parallelization by setting a time step, numerically computing the potentials of every neuron and synapse, and then moving on to the next time step and repeating.
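The core of it looked roughly like this minimal LIF sketch (illustrative parameters, not my actual code):

```python
import numpy as np

# Minimal time-stepped LIF network: every neuron is updated once per
# global time step, which serializes what real hardware does in parallel.
# All parameter values are illustrative assumptions.
n, dt = 100, 1e-4                    # neurons, time step (s)
tau, v_rest = 0.02, -65e-3           # membrane time constant (s), rest (V)
v_th, v_reset = -50e-3, -70e-3       # threshold and reset (V)
r = 1e7                              # membrane resistance (ohm)
w = np.random.randn(n, n) * 1e-10    # synaptic weights (A per spike)
i_ext = np.random.rand(n) * 2e-9     # constant external drive (A)
v = np.full(n, v_rest)

for step in range(10_000):
    spiked = v >= v_th
    v[spiked] = v_reset
    i_syn = w @ spiked               # propagate this step's spikes
    # Euler step of: tau * dV/dt = -(V - V_rest) + R * I
    v += dt / tau * (-(v - v_rest) + r * (i_ext + i_syn))
```

The fixed dt is exactly where the inaccuracy comes from: every spike gets snapped onto the time grid, so spike-time differences smaller than dt simply vanish.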
I should state more clearly that I think it’s the temporal aspects of continuity that lead to undecidable behavior rather than just the number of states a neuron has.
Because each neuron in a neuromorphic net runs in parallel with all the others, the signals produced by one neuron will not necessarily be in sync with the signals of any other neuron. As in, theoretically no two neurons are ever really firing at the exact same time.
As I previously stated, since timing is everything for STDP, the time difference can be very significant when a neuron receives multiple inputs in a short time window and fires.
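For reference, the textbook pair-based STDP rule shows how sensitive the update is to exact spike times (parameter values are just illustrative, and real hardware may implement variants):

```python
import math

# Pair-based exponential STDP. dt = t_post - t_pre in seconds.
# Learning rates and time constants are illustrative assumptions.
a_plus, a_minus = 0.01, 0.012
tau_plus, tau_minus = 0.02, 0.02

def stdp_dw(dt):
    """Weight change for one pre/post spike pair separated by dt."""
    if dt > 0:     # pre fired before post: potentiate
        return a_plus * math.exp(-dt / tau_plus)
    if dt < 0:     # post fired before pre: depress
        return -a_minus * math.exp(dt / tau_minus)
    return 0.0

print(stdp_dw(0.001), stdp_dw(-0.001))  # a 1 ms shift flips the sign
```

The update depends on the exact real-valued spike-time difference, so quantizing time onto a simulation grid quantizes the learning as well.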
An additional thing to note: in more advanced, multi-compartment models built on Hodgkin-Huxley dynamics, one can account for multiple synapses along the same dendritic tree, which absolutely makes timing matter more. Input to a synapse near the soma causes a localized change in ion concentrations that can stop the propagation of signals from the farther reaches of the dendrite. And if a far signal were to propagate to that synapse just prior to an input signal, that signal might not be strong enough to get through.
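For a sense of the dynamics involved, here’s the standard single-compartment Hodgkin-Huxley integration (textbook squid-axon parameters with a made-up constant drive; modeling a dendritic tree would mean coupling many such compartments, which is beyond this sketch):

```python
import numpy as np

# Single-compartment Hodgkin-Huxley neuron, forward Euler.
# Standard squid-axon parameters; external drive is a made-up constant.
c_m = 1.0                            # membrane capacitance (uF/cm^2)
g_na, g_k, g_l = 120.0, 36.0, 0.3    # peak conductances (mS/cm^2)
e_na, e_k, e_l = 50.0, -77.0, -54.4  # reversal potentials (mV)

def alpha_n(v): return 0.01 * (v + 55) / (1 - np.exp(-(v + 55) / 10))
def beta_n(v):  return 0.125 * np.exp(-(v + 65) / 80)
def alpha_m(v): return 0.1 * (v + 40) / (1 - np.exp(-(v + 40) / 10))
def beta_m(v):  return 4.0 * np.exp(-(v + 65) / 18)
def alpha_h(v): return 0.07 * np.exp(-(v + 65) / 20)
def beta_h(v):  return 1 / (1 + np.exp(-(v + 35) / 10))

v, m, h, n = -65.0, 0.05, 0.6, 0.32  # resting state
dt, i_ext = 0.01, 10.0               # step (ms), drive (uA/cm^2)

for _ in range(int(50 / dt)):        # simulate 50 ms
    i_na = g_na * m**3 * h * (v - e_na)
    i_k = g_k * n**4 * (v - e_k)
    i_l = g_l * (v - e_l)
    v += dt / c_m * (i_ext - i_na - i_k - i_l)
    m += dt * (alpha_m(v) * (1 - m) - beta_m(v) * m)
    h += dt * (alpha_h(v) * (1 - h) - beta_h(v) * h)
    n += dt * (alpha_n(v) * (1 - n) - beta_n(v) * n)
```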
Depending on how the hardware is built, I’d imagine you could get similar effects from the proximity of electrical signals in a neurochip, where local signaling causes non-trivial effects in the system.
Anyway, I’ve realized that I likely don’t know enough to say with real certainty whether spiking neural nets are uncomputable or not. This is the most rigorous explanation of my thoughts I can write right now:
I think the problem is still uncomputable even with fully precise measurements simply due to the continuity and timing I mentioned before, but I guess I don’t have enough knowledge on the topic to prove it so perhaps I’m wrong.
I think someone else in this comment section mentioned analog computing (which I thought included neuromorphic hardware) being capable of non-algorithmic computation, so they might have more answers than me on the topic of what non-algorithmic means.
…would it? I don’t think you can derive a solution to the halting problem from knowing the longest finite runtime of the set of machines with n states.
A function for the busy beaver numbers would only tell you that there exists some machine with n states that halts after a certain number of steps. It cannot be used to determine if any specific machine of that size halts or not, just that at least one does and it takes x number of steps.
Hell, it doesn’t even tell you what input would make a machine halt at that many steps, only that there is at least one input for which you get that output.
So I think that means that if, by some miracle, you were able to construct an oracle for the busy beaver numbers, you wouldn’t really solve the halting problem, yes? (Again, way outside my expertise, but still fascinating.)