When Friston was elected a Fellow of the Royal Society in 2006, the academy described his impact on studies of the brain as “revolutionary” and said that more than 90 percent of papers published in brain imaging used his methods. Two years ago, the Allen Institute for Artificial Intelligence, a research outfit led by AI pioneer Oren Etzioni, calculated that Friston is the world’s most frequently cited neuroscientist. He has an h-index — a metric used to measure the impact of a researcher’s publications — nearly twice the size of Albert Einstein’s. Last year Clarivate Analytics, which over more than two decades has successfully predicted 46 Nobel Prize winners in the sciences, ranked Friston among the three most likely winners in the physiology or medicine category.


Photoreceptors exemplify the principle of optimization, an idea, gaining ever wider traction among researchers, that certain key features of the natural world have been honed by evolution to the highest possible peaks of performance, the legal limits of what Newton, Maxwell, Pauli, Planck et Albert will allow. Scientists have identified and mathematically anatomized an array of cases where optimization has left its fastidious mark, among them the superb efficiency with which bacterial cells will close in on a food source; the precision response in a fruit fly embryo to contouring molecules that help distinguish tail from head; and the way a shark can find its prey by measuring micro-fluxes of electricity in the water a tremulous millionth of a volt strong — which, as Douglas Fields observed in Scientific American, is like detecting an electrical field generated by a standard AA battery “with one pole dipped in the Long Island Sound and the other pole in waters off Jacksonville, Fla.” In each instance, biophysicists have calculated, the system couldn’t get faster, more sensitive or more efficient without first relocating to an alternate universe with alternate physical constants.


White torture, often referred to as “white room torture,” is a type of psychological torture aimed at complete sensory deprivation and isolation. A prisoner is held in a cell that deprives them of all senses and identity. [...]

Visually, the prisoner is deprived of all color. Their cell is completely white: the walls, floor and ceiling, as well as their clothes and food. Fluorescent tubes are positioned above the occupant in such a way that no shadows are cast.

Aurally, the cell is soundproofed and devoid of any sound, voices or social interaction. Guards stand in silence, wearing padded shoes to avoid making any noise. Prisoners cannot hear anything but themselves.

In terms of taste and smell, the prisoner is fed white food — classically, unseasoned rice — to deprive them of these senses. Further, all surfaces are smooth, robbing them of touch.

Detainees are often held for months, or even years. The effects of white torture are well-documented in a number of testimonials. Typically, prisoners become depersonalized, losing their sense of personal identity after extended periods of isolation, which can cause hallucinations or even psychotic breaks.


Idea: a professional high-quality illustration of a giraffe dragon chimera...

Symbols that DALL·E generates:


Idea: an armchair in the shape of an avocado...

Symbols that DALL·E generates:


The bit represents a logical state with one of two possible values. These values are most commonly represented as either "1" or "0", but other representations such as true/false, yes/no, +/−, or on/off are also commonly used.
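These equivalent representations can be made concrete in a few lines of Python. This is an illustrative sketch, not part of the quoted definition; the `bit` helper is a hypothetical name chosen here.

```python
# A bit holds one of two values; 1/0, True/False, and on/off are
# interchangeable labels for the same logical state.
on, off = 1, 0
assert on == True and off == False  # Python bools are just 1 and 0

# Six bits in a row form a binary numeral: the pattern 101010 equals 42.
value = 0b101010
assert value == 42

def bit(n, k):
    """Return bit k (0 = least significant) of the integer n."""
    return (n >> k) & 1

# Reading the pattern back out, least significant bit first:
assert [bit(value, k) for k in range(6)] == [0, 1, 0, 1, 0, 1]
```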


[Wheeler] studied with Niels Bohr, taught Richard Feynman, and boned up on relativity with his friend and colleague Albert Einstein. John Archibald Wheeler's fascinating life brings us face to face with the central characters and discoveries of modern physics. He was the first American to learn of the discovery of nuclear fission, later coined the term "black hole," led a renaissance in gravitation physics, and helped to build Princeton University into a mecca for physicists. From nuclear physics, to quantum theory, to relativity and gravitation, Wheeler's work has set the trajectory of research for half a century.


Now I am in the grip of a new vision, that Everything Is Information. The more I have pondered the mystery of the quantum and our strange ability to comprehend this world in which we live, the more I see possible fundamental roles for logic and information as the bedrock of physical theory.


For the past two decades, though, [Wheeler] has pursued a far more provocative idea, something he calls genesis by observership. Our observations, he suggests, might actually contribute to the creation of physical reality. To Wheeler we are not simply bystanders on a cosmic stage; we are shapers and creators living in a participatory universe.

Wheeler's hunch is that the universe is built like an enormous feedback loop, a loop in which we contribute to the ongoing creation of not just the present and the future but the past as well.


An axiom, postulate, or assumption is a statement that is taken to be true, to serve as a premise or starting point for further reasoning and arguments. [...]

As defined in classic philosophy, an axiom is a statement that is so evident or well-established that it is accepted without controversy or question.


In addition to explaining why quantum physicists find so many examples of interconnectedness when they plumb the depths of matter, Bohm's holographic universe explains many other puzzles. One is the effect consciousness seems to have on the subatomic world. As we have seen, Bohm rejects the idea that particles don't exist until they are observed. But he is not in principle against trying to bring consciousness and physics together. He simply feels that most physicists go about it the wrong way, by once again trying to fragment reality and saying that one separate thing, consciousness, interacts with another separate thing, a subatomic particle.

Because all such things are aspects of the holomovement, he feels it has no meaning to speak of consciousness and matter as interacting. In a sense, the observer is the observed. The observer is also the measuring device, the experimental results, the laboratory, and the breeze that blows outside the laboratory. In fact, Bohm believes that consciousness is a more subtle form of matter, and the basis for any relationship between the two lies not in our own level of reality, but deep in the implicate order.


Depending on an observer’s relative motion or their position within a gravitational field, that observer would experience time passing at a different rate than that of another observer.


That's been one of my mantras — focus and simplicity. Simple can be harder than complex: You have to work hard to get your thinking clean to make it simple. But it's worth it in the end because once you get there, you can move mountains.


Similarly, [David Bohm] believes that dividing the universe up into living and nonliving things also has no meaning. Animate and inanimate matter are inseparably interwoven, and life, too, is enfolded throughout the totality of the universe. Even a rock is in some way alive, says Bohm, for life and intelligence are present not only in all of matter, but in "energy," "space," "time," "the fabric of the entire universe," and everything else we abstract out of the holomovement and mistakenly view as separate things.


Classical science generally divides things into two categories: those that possess order in the arrangement of their parts and those whose parts are disordered, or random, in arrangement. Snowflakes, computers, and living things are all ordered. The pattern a handful of spilled coffee beans makes on the floor, the debris left by an explosion, and a series of numbers generated by a roulette wheel are all disordered.

As Bohm delved more deeply into the matter he realized there were also different degrees of order. Some things were much more ordered than other things, and this implied that there was, perhaps, no end to the hierarchies of order that existed in the universe. From this it occurred to Bohm that maybe things that we perceive as disordered aren't disordered at all. Perhaps their order is of such an "indefinitely high degree" that they only appear to us as random (interestingly, mathematicians are unable to prove randomness, and although some sequences of numbers are categorized as random, these are only educated guesses).


In mathematical problem solving, the solution to a problem (such as a proof of a mathematical theorem) exhibits mathematical elegance if it is surprisingly simple and insightful yet effective and constructive. Such solutions might involve a minimal amount of assumptions and computations, while outlining an approach that is highly generalizable. Similarly, a computer program or algorithm is elegant if it uses a small amount of code to great effect.


It is not uncommon in the history of science that new ways of thinking are what finally allow longstanding issues to be addressed. But I have been amazed by just how many issues central to the foundations of the existing sciences I have been able to address by using the idea of thinking in terms of simple programs. For more than a century, for example, there has been confusion about how thermodynamic behavior arises in physics. Yet from my discoveries about simple programs I have developed a quite straightforward explanation. And in biology, my discoveries provide for the first time an explicit way to understand just how it is that so many organisms exhibit such great complexity. Indeed, I even have increasing evidence that thinking in terms of simple programs will make it possible to construct a single truly fundamental theory of physics, from which space, time, quantum mechanics and all the other known features of our universe will emerge.


There's one kind of opinion I'd be very afraid to express publicly. If someone I knew to be both a domain expert and a reasonable person proposed an idea that sounded preposterous, I'd be very reluctant to say "That will never work."

Anyone who has studied the history of ideas, and especially the history of science, knows that's how big things start. Someone proposes an idea that sounds crazy, most people dismiss it, then it gradually takes over the world.

Most implausible-sounding ideas are in fact bad and could be safely dismissed. But not when they're proposed by reasonable domain experts. If the person proposing the idea is reasonable, then they know how implausible it sounds. And yet they're proposing it anyway. That suggests they know something you don't. And if they have deep domain expertise, that's probably the source of it.

Such ideas are not merely unsafe to dismiss, but disproportionately likely to be interesting. When the average person proposes an implausible-sounding idea, its implausibility is evidence of their incompetence. But when a reasonable domain expert does it, the situation is reversed. There's something like an efficient market here: on average the ideas that seem craziest will, if correct, have the biggest effect. So if you can eliminate the theory that the person proposing an implausible-sounding idea is incompetent, its implausibility switches from evidence that it's boring to evidence that it's exciting.


Conway's Game of Life is a cellular automaton that is played on a 2D square grid. Each square (or "cell") on the grid can be either alive or dead, and they evolve according to the following rules:

Any live cell with fewer than two live neighbours dies (referred to as underpopulation).

Any live cell with more than three live neighbours dies (referred to as overpopulation).

Any live cell with two or three live neighbours lives, unchanged, to the next generation.

Any dead cell with exactly three live neighbours comes to life.

The initial configuration of cells can be created by a human, but all generations thereafter are completely determined by the above rules. The goal of the game is to find patterns that evolve in interesting ways – something that people have now been doing for over 50 years.
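The four rules above fit in a few lines of code. The sketch below is one common implementation style, assumed here rather than taken from the text: live cells are stored as a set of (x, y) coordinates, and each generation is computed by counting live neighbours.

```python
from collections import Counter

def step(live):
    """Apply Conway's four rules once, returning the next generation."""
    # Count how many live neighbours each cell has, by letting every
    # live cell vote for its eight surrounding cells.
    counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # The rules collapse to: a cell is alive next generation if it has
    # exactly three live neighbours, or is already alive and has two.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# The "blinker", a famous oscillator: a bar of three cells that flips
# between horizontal and vertical every generation.
blinker = {(0, 1), (1, 1), (2, 1)}
assert step(blinker) == {(1, 0), (1, 1), (1, 2)}
assert step(step(blinker)) == blinker
```

Note how underpopulation (fewer than two neighbours) and overpopulation (more than three) never need to be spelled out: every fate other than "two or three neighbours while alive, or exactly three while dead" is death by omission.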


The defining characteristic of all scientific knowledge, including theories, is the ability to make falsifiable or testable predictions. The relevance and specificity of those predictions determine how potentially useful the theory is.


The Austrian physicist Erwin Schrödinger was one of the founders of quantum mechanics. But he's most famous for something he never actually did: a thought experiment involving a cat.

He imagined taking a cat and placing it in a sealed box with a device that had a 50% chance of killing the cat in the next hour. At the end of that hour he asked, "What is the state of the cat?"

Common sense suggests that the cat is either alive or dead. But Schrödinger pointed out that, according to quantum physics, the cat is equal parts alive and dead at the same time. It's only when the box is opened that we see a single definite state. Until then, the cat is a blur of probability: half one thing and half the other.


However, the double-slit experiment's mind-boggling conclusions don't end there. In recent years, technology has allowed scientists to perform a fascinating variation of the test. Its results call into question the perception of time itself.

This is like a high-tech version of the double-slit experiment. Electrons are fired towards a barrier with two slits in it. But the scientists can delay their decision about whether to observe the electrons until after they've passed through the slits but before they've hit the screen.

It's as though I'm on a baseball field and there's a baseball being pitched towards the barrier with the slits in it. But my eyes are closed, so it goes through and it behaves like a wave. But then, at the last second before it hits the screen, I open my eyes and decide to observe it.

At that moment the electrons, in essence, become particles, and seemingly always were particles from the time they left the electron gun. So it's as though they went back in time to before they went through the slits and decided to go through one or the other, not through both, as they would have if they'd been behaving like waves. That's really crazy!

That's the enigma: our choice of what experiment to do determines the prior state of the electron. Somehow or other we've had an influence on it that appears to travel backward in time.


I'm going to think of a number between 1 and 100. You have to guess it. You can only ask true/false questions. Let's play!

You: Is it more than 50?

Me: No

You: Is it more than 25?

Me: Yes

You: Is it more than 38?

Me: Yes

You: Is it more than 44?

Me: No

You: Is it more than 41?

Me: Yes

You: Is it 42?

Me: Congratulations! You guessed the number.
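The dialogue above is binary search: each true/false question halves the remaining range, so a hidden number between 1 and 100 always falls in at most 7 questions. A sketch of the questioner's strategy (the midpoint choices here are one natural convention and match the transcript's questions closely, though not exactly):

```python
def guess(is_more_than, lo=1, hi=100):
    """Find a hidden number in [lo, hi] using 'is it more than m?' queries.

    is_more_than(m) plays the role of the answerer, returning True or False.
    Returns the number found and how many questions were asked.
    """
    questions = 0
    while lo < hi:
        mid = (lo + hi) // 2       # split the remaining range in half
        questions += 1
        if is_more_than(mid):
            lo = mid + 1           # "Yes": the number is above mid
        else:
            hi = mid               # "No": the number is mid or below
    return lo, questions

secret = 42
number, asked = guess(lambda m: secret > m)
assert number == 42
assert asked <= 7  # ceil(log2(100)) = 7 questions always suffice
```

Each answer carries one bit of information, which is why roughly log2(100) ≈ 6.6, rounded up to 7, questions are enough for any number the answerer picks.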