The brain is a machine: a device that processes information.
And yet, somehow, it also has a subjective experience of at least some of that information. Whether we're talking about the thoughts and memories swirling around on the inside, or awareness of the stuff entering through the senses, somehow the brain experiences its own data. It has consciousness.
What exactly is this consciousness stuff?
Here's a more pointed way to pose the question: can we build it?
People once thought that if you made a computer complicated enough it would just sort of 'wake up' on its own. But that hasn't panned out (so far as anyone knows). Apparently, the vital spark has to be deliberately designed into the machine.
I've made my own entry into that race, a framework for understanding consciousness called the Attention Schema theory. The theory suggests that consciousness is no bizarre byproduct: it's a tool for regulating information in the brain.
In this article I'll conduct a thought experiment. Let's see if we can construct an artificial brain, piece by hypothetical piece, and make it conscious. The task could be slow and each step might seem incremental, but with a systematic approach we could find a path that engineers can follow.
Imagine a robot equipped with camera eyes. Let's pick something mundane for it to look at: a tennis ball. If we can build a brain to be conscious of a tennis ball, just that, then we'll have made the essential leap.
What information should be in our build-a-brain to start with? Clearly, information about the ball. Light enters the eye and is translated into signals. The brain processes those signals and builds up a description of the ball. Of course, I don't mean literally a picture of a ball in the head. I mean the brain constructs information such as colour, shape, size and location. It constructs something like a dossier, a dataset that's constantly revised as new signals come in. This is sometimes called an internal model.
In the real brain, an internal model is always inaccurate (it's schematic) and that inaccuracy is important. It would be a waste of energy and computing resources for the brain to construct a detailed, scientifically accurate description of the ball. So it cuts corners. Colour is a good example of that. In reality, millions of wavelengths of light mix together in different combinations and reflect from different parts of the ball. The eyes and the brain, however, simplify that complexity into the property of colour. Colour is a construct of the brain. It's a caricature, a proxy for reality, and it's good enough for basic survival.
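To make the idea of an internal model concrete, here is a minimal sketch in Python. The names and fields (BallModel, colour, position, update) are hypothetical illustrations, not part of the theory; the point is only that the model is a small, schematic dossier that gets revised as new sensory signals arrive, and that it deliberately ignores most of physical reality.

```python
from dataclasses import dataclass

@dataclass
class BallModel:
    """A schematic dossier about the ball: a few coarse properties,
    not a physically accurate description of wavelengths and surfaces."""
    colour: str = "unknown"        # a simplified label, not a spectrum
    shape: str = "unknown"
    size_cm: float = 0.0
    position: tuple = (0.0, 0.0)

    def update(self, signals: dict) -> None:
        # Revise the dossier from whatever new sensory signals arrived.
        # Anything the model does not track is simply ignored; the model
        # cuts corners on purpose.
        for key, value in signals.items():
            if hasattr(self, key):
                setattr(self, key, value)

# Example: the camera reports new signals and the dossier is revised.
ball = BallModel()
ball.update({"colour": "green", "shape": "round", "size_cm": 6.7,
             "position": (2.0, 1.5)})
```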
But the brain does more than construct a simplified model. It constructs vast numbers of models, and those models compete with each other for resources. The scene might be cluttered with tennis racquets, a few people, the trees in the distance: too many things for the brain to process in depth all at the same time. It needs to prioritise.
That focussing is called attention. I confess that I don't like the word attention. It has too many colloquial connotations. What neuroscientists mean by attention is something specific, something mechanistic. A particular internal model in the brain wins the competition of the moment, suppresses its rivals, and dominates the brain's outputs.
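In computational terms this is simply a winner-take-all selection among competing internal models. The sketch below is an illustrative assumption, not the brain's actual algorithm: each model carries a salience score, the highest score wins, and the winner dominates downstream processing while its rivals are suppressed.

```python
def select_attended_model(models: dict) -> str:
    """Winner-take-all: the model with the highest salience wins the
    competition of the moment and suppresses its rivals."""
    winner = max(models, key=lambda name: models[name]["salience"])
    for name in models:
        if name != winner:
            models[name]["suppressed"] = True  # rivals get less processing
    return winner

scene = {
    "tennis_ball": {"salience": 0.9, "suppressed": False},
    "racquet":     {"salience": 0.4, "suppressed": False},
    "trees":       {"salience": 0.1, "suppressed": False},
}
print(select_attended_model(scene))  # -> "tennis_ball"
```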
All of this gives a picture of how a normal brain processes the image of a tennis ball. And so far there's no mystery. With a computer and a camera we could, in principle, build all of this into our robot. We could give our robot an internal model of a ball and an attentional focus on the ball. But is the robot conscious of the ball in the same subjective sense that you might be? Would it claim to have an inner feeling? Some scholars of consciousness would say yes. Visual awareness arises from visual processing.
I would say no. We've taken a first step, but we have more work to do.
Let's ask the robot. As long as we're doing a build-a-brain thought experiment, we might as well build in a linguistic processor. It takes in questions, accesses the information available in the robot's internal models, and on that basis answers the questions.
We ask: "What's there?"
It says: "A ball."
We ask: "What are the properties of the ball?"
It says: "It's green, it's round, it's at that location." It can answer that because the robot contains that information. Now we ask: "Are you aware of the ball?"
It says: "Cannot compute."
Why? Because the machine accesses the internal model that we've given it so far and finds no relevant information. Plenty of information about the ball. No information about what awareness is. And no information about itself. After all, we asked it: "Are you aware of the ball?" It doesn't even have information on what this "you" is, so of course it can't answer the question. It's like asking your digital camera: "Are you aware of the picture?" It doesn't compute in that domain.
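The linguistic processor can be caricatured as a lookup over whatever internal models exist. The function and the hard-coded questions below are hypothetical, but they capture the logic: if no model contains information relevant to a question, the only answer available is "Cannot compute."

```python
def answer(question: str, models: dict) -> str:
    """Answer a question using only the information in the internal models."""
    if question == "What's there?":
        return ", ".join(models) if models else "Cannot compute."
    if question == "What are the properties of the ball?" and "ball" in models:
        b = models["ball"]
        return f"It's {b['colour']}, it's {b['shape']}, it's at {b['position']}."
    if question == "Are you aware of the ball?":
        # There is no model of awareness, and no model of a 'you' to be
        # aware: nothing relevant can be retrieved.
        return "Cannot compute."
    return "Cannot compute."

models = {"ball": {"colour": "green", "shape": "round", "position": (2.0, 1.5)}}
print(answer("Are you aware of the ball?", models))  # -> Cannot compute.
```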
But we can fix that. Let's add another component, a second internal model. What we need now is a model of the self.
A self-model, like any other internal model, is information put together in the brain. The information might include the physical shape and structure of the body, information about personhood, autobiographical memories. And one particularly important part of the human self-model is called the body schema.
The body schema is the brain's internal model of the physical self: how it moves, what belongs to it, what doesn't, and so on. This is a complex and delicate piece of equipment, and it can be damaged.
Now that our build-a-brain has a self-model as well as a model of the ball, let's ask it more questions.
We say: "Tell us about yourself."
It replies: "I'm a person. I'm standing at this location, I'm this tall, I'm this wide, I grew up in Buffalo, I'm a nice guy," or whatever information is available in its internal self-model.
Now we ask: "What's the relationship between you and the ball?"
Uh oh. The machine accesses its two internal models and finds no answer. Plenty of information about the self. Plenty of separate information about the ball. No information about the relationship between the two, or even about what the question means.
We ask: "Are you aware of the ball?"
It says: "Cannot compute."
Why have internal models at all? The real use of a brain is to have some control over yourself and the world around you. But you can't control anything if you don't have a good, updated dossier on it: what it is, what it's doing, and what it might do next. Internal models keep track of things that are useful to monitor. So far we've given our robot a model of the ball and a model of itself. But we've neglected the third obvious component of the scene: the complex relationship between the self and the ball.
The robot is focusing its attention on the ball. That's a resource that needs to be controlled intelligently. Clearly it's an important part of the ongoing reality that the robot's brain needs to monitor. Let's add a model of that relationship and see what it gives us.
Alas, we can no longer dip into standard neuroscience. Whereas we have decades of research on internal models of concrete things such as tennis balls, there's virtually nothing on internal models of attention. It hadn't occurred to scientists that such a thing might exist. Still, there's no particular mystery about what it might look like. To build it into our robot, we once again need to decide what information should be present. Presumably, like the internal model of the ball, it would describe general, abstracted properties of attention, not microscopic physical details.
For example, it might describe attention as a mental possession of something, or as something that empowers you to react. It might describe it as something located inside you. All of these are general properties of attention. But this internal model probably wouldn't contain details about such things as neurons, or synapses, or electrochemical signals: the physical nuts and bolts. The brain doesn't need to know about that stuff, any more than it needs a theoretical grasp of quantum electrodynamics in order to call a red ball red. To the visual system, colour is just a thing on the surface of an object. And so, according to the information in this internal model, attention is a thing without any physical mechanism.
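One way to picture the content of that model is as another small dossier, this time about the act of attending rather than about the ball. The fields below are illustrative guesses at the kind of abstracted information involved, not a specification from the theory; note what is deliberately absent: any description of neurons, synapses or signals.

```python
from dataclasses import dataclass

@dataclass
class AttentionSchema:
    """A schematic model of the robot's own attention. Like the ball model,
    it is a caricature: useful, abstract, and silent about mechanism."""
    target: str = "none"             # what is currently attended
    located_in: str = "me"           # described as something inside the self
    kind: str = "mental possession"  # an abstract relationship, not a process
    enables_action: bool = True      # it empowers the self to react
    physical_mechanism: object = None  # the model contains no mechanism at all

schema = AttentionSchema(target="ball")
```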
With our latest model, we've given the machine an incomplete, slightly inaccurate picture of its own process of attention: its relationship to that ball. When we ask: "What's the relationship between you and the ball?" the machine accesses its internal models and reports the available information. It says: "I have mental possession of the ball."
That sounds promising. We ask: "Tell us more about this mental possession. What's the physical mechanism?" And then something strange happens.
The models don't contain detailed specifications of how attention is implemented. Why would they? So the build-a-brain can answer only on the basis of its own incomplete knowledge. When asked, naive people don't say that colour is an interaction between millions of light wavelengths and their eyes: they say it is a property of an object. After all, that's how their brain represents colour. Similarly, the build-a-brain will describe its model's own abstractions as if they somehow floated free of any specific implementation, because that's (at best) how they are represented within its self-model.
It says: "There is no physical mechanism. It just is. It is non-physical and it's located inside me. Just as my arms and legs are physical parts of me, there's also a non-physical part of me. It mentally possesses things and allows me to act with respect to those things. It's my consciousness."
We built the robot, so we know why it says that. It says that because it's a machine accessing internal models, and whatever information is contained in those models it reports to be true. And it's reporting a physically incoherent property, a non-physical consciousness, because its internal models are blurry, incomplete descriptions of physical reality.
We know that, but it doesn't. It possesses no information about how it was built. Its internal models don't contain the information: "By the way, we're a computing device that accesses internal models, and our models are incomplete and inaccurate." It's not even in a position to report that it has internal models, or that it's processing information at all.
Just to make sure, we ask it: "Are you positive you're not just a computing machine with internal models, and that's why you claim to have awareness?"
The machine, accessing its internal models, says: "No, I'm a person with a subjective awareness of the ball. My awareness is real and has nothing to do with computation or information."
The theory explains why the robot refuses to believe the theory. And now we have something that begins to sound spooky. We have a machine that insists it's no mere machine. It operates by processing information while insisting that it doesn't. It says it has consciousness and describes it in the same ways that we humans do. And it arrives at that conclusion by introspection: by a layer of cognitive machinery that accesses internal models. The machine is captive to its internal models, so it can't arrive at any other conclusion.
Admittedly, a tennis ball is a bit limited. Yet the same logic could apply to anything. Awareness of a sound, a recalled memory, self-awareness: the build-a-brain experiment shows how a brain could insist, "I am aware of X."
And that claim is the central question of consciousness.
But we don't need magic to explain the phenomenon. Brains insist they have consciousness. That insistence is the result of introspection: of cognitive machinery accessing deeper internal information. And an internal model of attention, like the one we added to our build-a-brain, would contain exactly that information.
There is a real thing that we call attention: a wildly complex, beautifully adapted method of focusing the brain's resources on a limited set of signals. Attention is important. Without it, we would be paralysed by the glut of information pouring into us. But there's no point having it if you can't control it. A basic principle of control theory is this: to control something, the system needs an internal model of it. To monitor and control its own attention, the brain builds an attention schema. This is like a map of attention. It contains simplified, slightly distorted information about what attention is and what it is doing at any particular moment.
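That control-theory principle can be illustrated with a toy feedback loop. The controller below is a hypothetical sketch, not the theory's actual mechanism: the schema supplies the system's best (and possibly wrong) estimate of where attention currently is, and control consists of comparing that estimate with where attention ought to be and nudging it accordingly.

```python
def control_attention(schema: dict, desired_target: str, shift_attention) -> None:
    """Toy feedback controller: the system can only steer attention by
    consulting its own simplified model of attention (the schema)."""
    estimated_target = schema["target"]      # where the model says attention is
    if estimated_target != desired_target:
        shift_attention(desired_target)      # issue a corrective command
        schema["target"] = desired_target    # and revise the model accordingly

schema = {"target": "trees"}
control_attention(schema, "ball",
                  shift_attention=lambda t: print(f"shifting attention to {t}"))
```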
Without an attention schema, the brain could no longer claim it has consciousness. It would have no information about what consciousness is, and wouldn't know how to answer questions about it. It would lack information about how the self relates to anything in the world. What is the relationship between me and the ball? Cannot compute.
More crippling than that, it would lose control over its own focus. Like trying to navigate a city without a map, it would be left to navigate the complexities of attention without a model of attention. And even beyond that, lacking a construct of consciousness, the brain would be unable to attribute the same property to other people. It would lose all understanding that other people are conscious beings who make conscious decisions.
Consciousness matters. Unlike many modern attempts to explain it away, the Attention Schema theory does exactly the opposite of trivialising or dismissing it: it gives consciousness a place of importance.
As long as scholars think of consciousness as a magic essence floating inside the brain, it won't be very interesting to engineers. But if it's a crucial set of information, a kind of map that allows the brain to function correctly, then engineers may want to know about it.
Suddenly it becomes an incredibly useful tool for the machine.