‘Learning Loops’ by Geoff Mulgan

The following excerpt was extracted from Mulgan, Geoff. Big Mind: How Collective Intelligence Can Change Our World (pp. 70-75). Princeton University Press. Kindle Edition.


“Learning Loops

I EARLIER DESCRIBED THE ETYMOLOGIES of the words intelligence and collective, and showed that they have at their core a notion of choice within contexts of possibility and uncertainty. Any being faces an infinity of choices and no certainty about the future. We use all the elements of intelligence to help us understand what our choices really are, drawing on the limited data available to us as well as the mental models we have developed or acquired.

This mental task can be thought of in probabilistic terms. At every step, we try to make sense of the probability distribution of different outcomes. Are we at risk of attack? Will it rain? Will my friend still be my friend? If I build my home in this way, will it survive a storm? To make sense of these choices, any intelligence has to assess the possibility space lying ahead of it and the likely probability distribution. Is the context similar to one we’ve encountered before? Do our existing categories or concepts apply?
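
To make the probabilistic framing concrete, here is a minimal sketch (mine, not Mulgan’s) of how new evidence reshapes a distribution over outcomes via Bayes’ rule; the weather example, priors, and likelihoods are all invented for illustration.

```python
# Minimal sketch of probabilistic belief revision (illustrative numbers only).
# Prior beliefs about tomorrow's weather, revised after observing dark clouds.

priors = {"rain": 0.3, "dry": 0.7}        # what we expect before new data
likelihoods = {"rain": 0.8, "dry": 0.2}   # assumed P(dark clouds | outcome)

# Bayes' rule: posterior is proportional to likelihood times prior.
unnormalized = {k: likelihoods[k] * priors[k] for k in priors}
total = sum(unnormalized.values())
posterior = {k: v / total for k, v in unnormalized.items()}

print(posterior)  # {'rain': ~0.63, 'dry': ~0.37}: the possibility space has shifted
```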

Our default is to depend on what we already know and change only incrementally, in small adjacent steps. This is the logic of evolution, in which big changes tend to be the consequence of many small changes rather than giant leaps.

The same is true of learning. Some learning is algorithmic, some is experimental, and much is sequential — what you can learn depends on what you have already learned. The computer scientist Leslie Valiant writes of such tools as “elimination algorithms” and “Occam,” and what he calls “ecorithms” that help an organism cope with an environment by learning. “Cognitive concepts,” he writes, “are computational in that they have to be acquired by some kind of algorithmic learning process, before or after birth. Cognitive concepts are, equally, statistical in that the learning process draws its basic validity from statistical evidence — the more evidence we see for something the more confident we will be in it.” These models and inputs are considered in what has variously been called the mind’s eye, ego tunnel, or conscious present, where new data intersect with our longer-term memories, the moment between a known past and unknown future.
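
Valiant’s claim that “the more evidence we see for something the more confident we will be in it” can be illustrated with a standard conjugate update; the Beta-Bernoulli sketch below is a textbook device, not Valiant’s own algorithm, and the observations are invented.

```python
# Sketch: confidence grows with evidence (Beta-Bernoulli updating).
# Belief that a proposition holds is tracked as a Beta(alpha, beta) distribution.

def update(alpha: float, beta: float, observations: list[bool]) -> tuple[float, float]:
    """Each confirming observation raises alpha; each disconfirming one raises beta."""
    for confirms in observations:
        if confirms:
            alpha += 1
        else:
            beta += 1
    return alpha, beta

alpha, beta = 1.0, 1.0  # uniform prior: no evidence either way yet
alpha, beta = update(alpha, beta, [True] * 8 + [False] * 2)

mean = alpha / (alpha + beta)  # point estimate of the proposition's truth
print(f"estimate={mean:.2f} after {int(alpha + beta - 2)} observations")
# Further observations concentrate the distribution, which is to say they
# raise our confidence in the estimate.
```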

The mark of any intelligent creature, institution, or system is that it is able to learn. It may make mistakes, but it won’t generally repeat them. That requires an ability to organize intelligence into a series of loops, which have a logical and hierarchical relationship to each other.

First-loop learning is what we recognize as everyday thought. It involves the application of thinking methods to definable questions, as we try to analyze, deconstruct, calculate, and process using heuristics or frameworks. We begin with models of how the world works as well as models of thinking, and then we gather data about the external and internal worlds, based on categories. Then we act and observe when the world does or does not respond as expected, and adjust our actions and the details of our models in response to the data.

These first-loop processes of interpretation and action are imperfect. Much is known about confirmation biases, along with our failures to think probabilistically or logically. But the first loop helps to correct our intuitions: it supplies what Daniel Kahneman calls System 2, the considered thought that checks the otherwise-automatic use of System 1 intuitions. This kind of thinking helps us get by most of the time. It is functional, practical, useful, and relatively easy. The combination of facts and models is what enables life to function, and is how our brains work most of the time.

Within organizations, explicit processes for learning can dramatically improve performance. Later, I will discuss the procedures used in the airline industry to learn from crashes or near misses, hospitals that regularly review data and lessons learned, and factories that empower workers to fix problems. What’s more remarkable is how many institutions lack even basic learning loops of this kind, and so continue to make unnecessary mistakes, assume facts that aren’t true, and deny the obvious.

Second-loop learning becomes relevant when the models no longer work or there are too many surprises. It may be necessary to generate new categories because the old ones don’t work (imagine a group that has moved from a desert environment to a temperate mountain zone), and it may be necessary to generate a new model, for example to understand how the stars move. This second loop also involves the ability to reflect on goals and means.

This is often what we mean by creativity: seeing in new ways, spotting patterns, and generating frames. Arthur Schopenhauer wrote that “the task is not so much to see what no one yet has seen but to think what nobody yet has thought about that which everybody sees.” Saul Bellow was implying something similar when he spoke of the role of art as something that alone can penetrate some of the “seeming realities of this world. There is another reality, the genuine one, which we lose sight of. This other reality is always sending us hints, which without art, we can’t receive. … Art has something to do with an arrest of attention in the midst of distraction.” That arrest, a slowing down of thought and then a speeding up, takes us to a new way of categorizing and modeling the world around us. In handling high-dimensional problems, we frequently try to accumulate multiple frames and categories, see an issue from many angles, and then keep these all in mind simultaneously. This is hard work, and the hardness rises exponentially with the number of frames in play.

The relationship between first- and second-loop learning is fuzzy. Sometimes we have to take risks to find new ideas and new categories, even when our current models appear to be working well. This is the well-known trade-off between exploitation and exploration. Exploitation of what we already know is predictable and usually sensible. But if we never explore, we risk stagnation or at least missing out on new opportunities. So to thrive, we have to sometimes take risks, accept failures and bad decisions, and deliberately go off track and take a route that appears less optimal. Think, for example, of trying out a new restaurant rather than one we know and love. Because exploration is so essential to learning, a surprising finding of research on decision making is that people who are inconsistent sometimes end up performing better than ones who are consistent.
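
The exploitation/exploration trade-off described here is studied formally as the multi-armed bandit problem. The sketch below uses an epsilon-greedy rule, one common strategy among many, with the restaurant example carried over; the names, ratings, and reward model are invented for illustration.

```python
import random

# Epsilon-greedy choice between a known favorite and an untried option.
# With probability epsilon we explore at random; otherwise we exploit the best known.

def choose(estimates: dict[str, float], epsilon: float = 0.1) -> str:
    if random.random() < epsilon:
        return random.choice(list(estimates))   # explore: deliberately go off track
    return max(estimates, key=estimates.get)    # exploit: the known favorite

estimates = {"old favorite": 4.2, "new bistro": 0.0}  # running average ratings
visits = {name: 0 for name in estimates}

for _ in range(1000):
    pick = choose(estimates)
    # The true quality is unknown to the chooser; here the new place happens to be better.
    reward = random.gauss(4.5 if pick == "new bistro" else 4.0, 0.5)
    visits[pick] += 1
    estimates[pick] += (reward - estimates[pick]) / visits[pick]  # incremental mean

print(estimates)  # with enough exploration, the genuinely better option is found
```

A pure exploiter here would never visit the new bistro at all, which is exactly the stagnation the paragraph warns against.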

Third-loop learning involves the ability to reflect on and change how we think — our underlying ontologies, epistemologies, and types of logic. At its grandest, this may involve the creation of a system of science, or something like the growth of independent media or spread of predictive analytics.

We recognize third-loop learning to have happened when a radically new way of thinking has become normal, with its own tools and methods, and its own view of what is and what matters.

Figure 4. The Loops of Intelligent Learning
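
One way to read the figure is as three nested feedback loops. The runnable toy below is my own schematic of that hierarchy, not a reproduction of the figure: a first loop tunes parameters within a fixed model, a second loop reframes the model family when surprises persist, and the third loop is only gestured at in a closing comment.

```python
# Toy of the loop hierarchy (an invented schematic, not the book's Figure 4).
# The environment yields y = 2x + 1 for a while, then shifts to y = x * x.

def make_world():
    for t in range(100):
        x = (t % 10) + 1
        yield t, x, (2 * x + 1) if t < 50 else x * x

# Each model family: (prediction function, gradient of the prediction in w).
models = {
    "linear":    (lambda x, w: w * x,     lambda x: x),
    "quadratic": (lambda x, w: w * x * x, lambda x: x * x),
}

name, w, window = "linear", 0.0, []
for t, x, y in make_world():
    predict, grad = models[name]
    err = y - predict(x, w)
    w += 0.2 * err / grad(x)                    # first loop: adjust parameters
    window = (window + [abs(err)])[-10:]
    if len(window) == 10 and sum(window) / 10 > 8 and name == "linear":
        name, w, window = "quadratic", 0.0, []  # second loop: new categories, new model
        print(f"t={t}: surprises exceed tolerance; reframing as a {name} model")

print(f"final model: {name}, fitted weight = {w:.2f}")
# A third loop would question how models are proposed and judged at all,
# for example replacing error-driven fitting with a different way of knowing.
```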

Most fundamental social change also involves just such third-loop learning — not just doing new things to others. This is the implication of Audre Lorde’s famous comment that we cannot use the master’s tools to dismantle the master’s house (and I examine it in more detail in chapter 16).

But we also see third-loop learning at a more mundane level, when individuals decide to live and think in a different way, such as by committing themselves to regular meditation.

All recognizably intelligent people and groups can adjust their behavior in response to surprises, adjust the categories they use more occasionally, and on rare occasions, adjust the ways in which they think. Indeed, the psychological growth of any individual involves passing through all three loops repeatedly, as the individual learns about the world, and also reevaluates their place in the world and how to conceive of it. Similar patterns are visible in groups and organizations, with the great majority of activity taking place within the first loop, the occasional use of second-loop learning to generate new categories and frames, and much more occasionally, a change to the whole cognitive model.

These basic characteristics of learning — iterative, driven by error and surprise, and with a logical flow from the small to the large — have been more fully embraced in some fields and societies than in others. The acceptance of error in science, along with the encouragement of surprise and discovery, is a fundamental trait of rational, enlightened societies. Being adept at all three loops helps the individual or organization to cope with multiple types of thought, choosing the right tool for the right task. Science advances through designing and testing hypotheses and theories. Philosophy involves questions — as Immanuel Kant put it, “Every answer given on principles of experience begets a fresh question.” Art involves exploration.

Our brains work through analogy, metaphor, and the search for commonalities as well as through linear logic. For example, a good musical understanding can neither be acquired nor demonstrated by setting out arguments. It is instead best displayed and learned by playing music with feeling and understanding, and makes sense only in the context of a culture. Indeed, meaning only arises from cultures and large-scale uses rather than from simple correspondence of the kind that a search engine provides, and we can all recognize the difference between knowledge that is general (like the theories of physics) and knowledge that is by its nature particular, like the qualities of a particular person, poem, or tree.

Someone capable only of linear logic or first-order learning can appear dumb even if in other respects they are clever. So far, computing has proven better at these first-loop tasks than it has at second- or third-order learning. It can generate answers much more easily than it generates questions. Computers are powerful tools for playing chess, but not for designing games. Networked computers can help shoppers find the cheapest products or simulate a market, but offer little help to the designers of economic or business strategy. Similarly networked computing can help get people onto the streets — but not to run a revolution.3

This is territory where rapid advances may be possible; already we have many tools for generalizing from observed paired judgments, as well as for pattern recognition and generation, and huge sums are being invested in new forms of computing. For organizations, the challenge is to structure first-, second-, and third-order learning in practical ways, given scarce resources. The simple solution is to focus on first-order learning with periodic scans, helped by specialists, to review the need for second- or third-order learning, such as for changing categories or frames. We can visualize this as the combination of lines with loops — straight lines, or focused thinking, combined with periodic loops to assess, judge, or benchmark.

This approach also helps to make sense of how all practical intelligence strikes a balance between inclusion and exclusion. Some types of data and patterns are attended to; others are ignored. This selectivity applies to everything from natural language to sensors. More is not always better; more inputs, more analysis, and even more sophistication can impede action. Practical intelligence has to select. There is a parallel issue for machine intelligence. Selection is vital for many complex processing tasks that are too big for even the fastest supercomputers if tackled comprehensively. This is why so much emphasis in artificial intelligence has been placed on selection heuristics or Bayesian “priors” that help to shrink the pool of possibilities to attend to.
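
A sketch may help make the selection point concrete: below, a prior acts as a selection heuristic, pruning which hypotheses are examined at all. The hypotheses, probabilities, and threshold are invented for illustration.

```python
# Sketch: a prior as a selection heuristic (invented hypotheses and numbers).
# Rather than scoring every hypothesis exhaustively, attend only to those
# the prior makes plausible enough to be worth the computation.

prior = {  # plausibility weights, roughly normalized (illustrative only)
    "burst pipe": 0.45, "rain leak": 0.35, "spilled drink": 0.20,
    "meteor strike": 1e-6, "elaborate hoax": 1e-7,
}

ATTENTION_THRESHOLD = 0.01  # below this, a hypothesis is not even examined

def expensive_evaluation(hypothesis: str) -> float:
    """Stand-in for costly evidence-gathering about one hypothesis."""
    return len(hypothesis) * 0.01  # placeholder score, illustrative only

candidates = [h for h, p in prior.items() if p >= ATTENTION_THRESHOLD]
scores = {h: expensive_evaluation(h) for h in candidates}
print(f"examined {len(candidates)} of {len(prior)} hypotheses:", scores)
```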

Again, we complicate to understand and simplify to act. We may use wide networks for gathering information about options, and then use a small group or individual to make decisions under uncertainty.

Understanding better how focus is managed may turn out to be key to understanding collective intelligence — and how a shifting balance is achieved between wide peripheral vision and attention to many signals along with the focus needed for action.

But it’s intriguing to reflect that there can be no optimum balance between these three loops. In principle, any intelligent group needs a capacity for all three. It’s impossible, though, to know what the right balance between them should be. It’s easy to imagine an organization locked only into first-loop learning (many banks or firms like Enron have been). Yet it’s also possible to imagine organizations devoting too much scarce leadership time to second- and third-loop learning, reinventing their cognitive maps at a cost in terms of present performance.

In stable environments, the first-loop reasoners will tend to do best. In unstable ones, where any item of knowledge has a half-life of decay, the groups with more capacity to reimagine their categories and thinking modes may adapt better. In principle, any group should optimize for stable environments with a well-suited division of labor, until signs of change appear, at which point they should devote more to scanning, rethinking options and strategies, and mobilizing resources so that these can be redirected to opportunities and threats. But it is inherently impossible to know what the best balance is except in retrospect.

—–

1. This framework draws on and extends the famous distinction made by Argyris and Schön between single-loop learning, which learns from new facts but doesn’t question the goal or logic being followed, and double-loop learning, which asks the bigger questions. See Chris Argyris and Donald A. Schön, Organizational Learning (Reading, MA: Addison-Wesley, 1978). For a similar framework, see James March, “Exploration and Exploitation in Organizational Learning,” Organization Science 2, no. 1 (1991): 71–87.

2. A modern equivalent of the Turing test would probably want to assess abilities to reason at these three levels — rather than just the ability to appear like a human. For the third level it might ask whether the machine intelligence can generate a novel Fragestellung, the German word sometimes glossed as a worldview but literally the posing of a question, a framing that makes it possible to see things and ask in new ways.

3. I like this comment from Steven Pinker on whether robots will produce literature: “Intelligent systems often best reason by experiment, real or simulated: they set up a situation whose outcome they cannot predict beforehand, let it unfold according to fixed causal laws, observe the results, and file away a generalization about what becomes of such entities in such situations. Fiction, then, would be a kind of thought experiment, in which agents are allowed to play out plausible interactions in a more-or-less lawful virtual world and an audience can take mental notes of the results. Human social life would be a ripe domain for this experiment-driven learning because the combinatorial possibilities in which their goals may coincide and conflict (cooperating or defecting in prisoner’s dilemmas, seeking long-term or short-term mating opportunities, apportioning resources among offspring) are so staggeringly vast as to preclude strategies for success in life being either built-in innately or learnable from one’s own limited personal experience.” Steven Pinker, “Toward a Consilient Study of Literature,” Philosophy and Literature 31, no. 1 (2007): 172.”
