[This is post 2 in the "Structure and Cognition" series; links to all the posts can be found here]
The doctrine of signatures was a theory of medicine dating back to the 1400s and 1500s that attempted to cure disease using treatments that resembled either the patient’s symptoms or the afflicted body part. For example, walnuts might treat afflictions of the mind because they resemble the brain, and foxes’ lungs might be used for respiratory problems because foxes were thought to be particularly fit animals.
We’ve come a long way, medically, since the 1400s, but it seems pretty likely that this way of thinking is still with us. It’s probably implicated in conspiracy theorizing, where complicated effects are thought to require complicated causes (Leman & Cinnirella, 2007). I think it might also be part of the reason people tend to view those who disagree with them politically as evil – the evil effects of [other party’s] policies must be due to evil causes. But I’m going to play it a bit safer than venturing into politics in my second post. A much less controversial candidate for a perception influenced by the doctrine of signatures is the assumption that complex human behavior implies complex cognitive mechanisms.
Over the next few posts, I want to discuss the possibility
that much of what appears to be the result of complex mechanisms can in fact be
explained as the result of simple mechanisms often operating in the context of
particular environmental structures. The goal of this post is to demonstrate
that apparently complex behavior can arise from simple mechanisms operating in
conjunction with the environment.
Braitenberg (1984) demonstrates our tendency to ascribe
complicated motives to simple behavior in his (highly recommended) book Vehicles.
He begins his construction of a series of simple hypothetical machines with one
that is equipped with a sensor, which enables it to detect a single cue in
its environment, say temperature, and a connected motor, which runs to the
extent that the sensor is activated.
Vehicle 2 expands on vehicle 1 by adding a second sensor and motor, placing one of each on either side. There are two forms of this vehicle: vehicle 2a has its sensors connected to the motors on the same side of its body; vehicle 2b has the wires crossed, such that the left sensor is attached to the right motor and vice versa. When 2a’s front is exposed to a lot of what excites its sensors, say, a light source, it will shoot forward and may collide with the source. However, if one side’s sensors are more excited than the other’s, it will turn away and flee. Vehicle 2b does the opposite: because its wires are crossed, the more excited side drives the far motor, turning the vehicle toward the source and accelerating it into a charge.
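To make the wiring concrete, here’s a minimal sketch of vehicle 2 in Python. Braitenberg specifies only the connections – each sensor drives a motor in proportion to its excitation – so the sensor placement, the inverse-square light intensity, and all the constants here are my own illustrative choices.

```python
import math

def sensor_reading(x, y, heading, side, light_x, light_y):
    """Light intensity at one sensor; side is +1 (left) or -1 (right).

    Sensors sit a little ahead of and to the side of the vehicle's
    center; intensity falls off with squared distance to the light.
    """
    sx = x + math.cos(heading) - side * 0.5 * math.sin(heading)
    sy = y + math.sin(heading) + side * 0.5 * math.cos(heading)
    return 1.0 / (1.0 + (sx - light_x) ** 2 + (sy - light_y) ** 2)

def step(x, y, heading, light_x, light_y, crossed, dt=0.1):
    """One update of a two-sensor, two-motor vehicle.

    crossed=False wires each sensor to the same-side motor (vehicle 2a);
    crossed=True wires each sensor to the opposite motor (vehicle 2b).
    """
    left = sensor_reading(x, y, heading, +1, light_x, light_y)
    right = sensor_reading(x, y, heading, -1, light_x, light_y)
    left_motor, right_motor = (right, left) if crossed else (left, right)
    speed = (left_motor + right_motor) / 2.0
    heading += (right_motor - left_motor) * dt  # a faster right motor turns the vehicle left
    return (x + speed * math.cos(heading) * dt,
            y + speed * math.sin(heading) * dt,
            heading)
```

Iterating step() with crossed=False sends the vehicle curving away from the light; with crossed=True it homes in on the light and speeds up as it approaches. Nothing in the code represents “fear” or “aggression” – those are descriptions we project onto two wires.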
Moving on from hypothetical examples, Herbert Simon, in The
Sciences of the Artificial, describes the following example of apparent behavioral
complexity resulting from simple rules:
“We watch an ant make his laborious way across a wind- and
wave-molded beach. He moves ahead, angles to the right to ease his climb up a
steep dunelet, detours around a pebble, stops for a moment to exchange
information with a compatriot. So as not to anthropomorphize about his
purposes, I sketch the path on a piece of paper. It is a sequence of irregular,
angular segments – not quite a random walk, for it has an underlying sense of direction,
of aiming for a goal.” (Simon, 1996, p. 51)
Simon suggests that someone looking at the paper might
assume the path was made by a skier on their way down a mountain or a sailboat
buffeted by winds. But all that is ostensibly going on is the ant pursuing a
general sense of direction toward its home and making simple local deviations
when confronted with obstacles. Or, as Simon puts it:
“Viewed as a geometric figure, the ant’s path is irregular,
complex, hard to describe. But its complexity is a complexity in the surface of
the beach, not a complexity in the ant.”
I’ve personally employed a similarly simple technique to navigate a complex environment. When I was younger, I went to a museum with a pitch-black maze, where I was instructed to find my way out by placing my hand on the left wall and walking, turning whenever the wall turned so as to maintain contact with it. This simple strategy, given the specific structure of a maze, suffices to solve a complex wayfinding problem. (Later, in a computer science course, this experience came in handy when an assignment required programming a maze-solving algorithm.)
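For the curious, here’s a minimal sketch of that wall-following rule on a grid maze (the encoding is mine, not the actual assignment). At every step you try to turn left first, then go straight, then right, then back – which is what keeping your left hand on the wall amounts to. It assumes the exit is reachable by wall-following, i.e., that the maze’s walls are connected to the outer boundary.

```python
# Headings ordered clockwise: up, right, down, left (rows grow downward).
DIRS = [(-1, 0), (0, 1), (1, 0), (0, -1)]

def solve_maze(grid, start, start_dir, exit_cell):
    """Left-hand rule: prefer turning left, then straight, right, back.

    grid[r][c] is True for open cells and False for walls; start_dir
    indexes DIRS. Assumes exit_cell is reachable by wall-following.
    """
    (r, c), d = start, start_dir
    path = [(r, c)]
    while (r, c) != exit_cell:
        for turn in (-1, 0, 1, 2):  # left, straight, right, U-turn
            nd = (d + turn) % 4
            nr, nc = r + DIRS[nd][0], c + DIRS[nd][1]
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc]:
                r, c, d = nr, nc, nd
                path.append((r, c))
                break
    return path

# A tiny test maze: True = open, False = wall.
maze = [
    [False, False, False, False, False],
    [False, True,  True,  True,  False],
    [False, True,  False, True,  False],
    [False, True,  False, True,  True ],
    [False, False, False, False, False],
]
print(solve_maze(maze, start=(1, 1), start_dir=1, exit_cell=(3, 4)))
```

The walker never plans; it just reapplies one local rule, and the maze’s structure does the rest.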
Note that the apparent complexity here results from the interaction between the simple rules of behavior and the environment. If the environment in which Braitenberg’s vehicles found themselves were uniform, with no light or temperature differences, their behavior would not vary. Likewise, if the ant were traveling across a flat, featureless beach, its path would be far less complex.
Incidentally, when the environment effectively becomes “the
behavior of other agents following the same simple rules,” you can often get
emergent complexity of a much higher degree. For example, people walking in crowds, each following a simple rule of minimizing obstacles on the way to a target location, often spontaneously organize into two opposing lanes (Moussaïd, Helbing, & Theraulaz, 2011). It’s easy to see the pressure at work if you imagine a single person in the wrong lane: they have to dodge every oncoming walker, and would have a much easier time simply shifting into the lane beside them.
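Here’s a toy lattice version of that dynamic (my own construction – Moussaïd et al. model continuous space with a richer heuristic). Each agent knows one rule: walk forward, and side-step to a free adjacent row when blocked. Agents in rows dominated by oncoming traffic keep getting bumped sideways, while agents in rows flowing their way rarely are, so in typical runs the rows drift toward carrying a single direction of traffic.

```python
import random

def lane_demo(width=8, length=40, n_per_dir=40, steps=500, seed=1):
    """Bidirectional flow in a periodic corridor of width x length cells.

    Agents are [row, col, direction] with direction +1 or -1 along the
    corridor. No agent knows anything about lanes.
    """
    rng = random.Random(seed)
    cells = rng.sample([(r, c) for r in range(width) for c in range(length)],
                       2 * n_per_dir)
    agents = [[r, c, +1 if i < n_per_dir else -1]
              for i, (r, c) in enumerate(cells)]
    occupied = {(r, c) for r, c, _ in agents}
    for _ in range(steps):
        for agent in rng.sample(agents, len(agents)):
            r, c, d = agent
            ahead = (r, (c + d) % length)
            if ahead not in occupied:          # path clear: keep walking
                occupied.discard((r, c))
                occupied.add(ahead)
                agent[1] = ahead[1]
            else:                              # blocked: try a side-step
                free = [r2 for r2 in (r - 1, r + 1)
                        if 0 <= r2 < width and (r2, c) not in occupied]
                if free:
                    r2 = rng.choice(free)
                    occupied.discard((r, c))
                    occupied.add((r2, c))
                    agent[0] = r2
    for r in range(width):  # crude lane measure: mean direction per row
        ds = [d for rr, _, d in agents if rr == r]
        print(r, round(sum(ds) / len(ds), 2) if ds else "empty")

lane_demo()
```

Rows whose mean direction sits near +1 or -1 are carrying one-way traffic – lanes that no one decided to form.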
And as has been well documented, ants are paradigmatic examples of this form of emergent complexity: they efficiently organize anthills with specialized compartments, allocate work according to the colony’s supply and demand, and even farm aphids for food, all via the interaction of simple, unconscious rules applied across many ants.
Of course, these interactions don’t always work out for the best. If members of a crowd implicitly follow a behavioral strategy of running in panic once a certain number of their neighbors start running, this can lead to a stampede (though in certain circumstances, this is almost certainly adaptive behavior).
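The standard way to formalize this is a threshold model in the style of Granovetter; the uniform threshold distribution and the constants below are purely illustrative. Each person runs once the fraction of the crowd already running reaches their personal threshold, and a handful of panicked individuals can tip everyone.

```python
import random

def stampede(n=100, max_threshold=0.5, initial_panic=3, seed=0):
    """Threshold cascade in a dense crowd where everyone sees everyone.

    Person i starts running once the running fraction of the crowd
    reaches thresholds[i]; sweeps repeat until no one new panics.
    """
    rng = random.Random(seed)
    thresholds = [rng.uniform(0, max_threshold) for _ in range(n)]
    running = [i < initial_panic for i in range(n)]
    changed = True
    while changed:
        changed = False
        frac = sum(running) / n
        for i in range(n):
            if not running[i] and frac >= thresholds[i]:
                running[i] = True
                changed = True
    return sum(running)

print(stampede())                   # jumpy crowd: typically everyone ends up running
print(stampede(max_threshold=1.5))  # calmer crowd: the same trigger usually fizzles
```

The trigger is identical in both runs; whether it becomes a stampede depends entirely on how the thresholds are distributed across the crowd.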
Similar behavior can be seen in animals. Butail, Bartolini, and Porfiri (2013) found that schools of fish could be remotely “controlled” by adding a robotic fish to the tank and directing its movements, causing the rest of the fish to follow. In this way, the school acts almost as a single organism, with each fish serving as a separate sensor that might detect predators or food in its vicinity and alert the larger body.
Pillot et al. (2011) attached a vibrating collar to a sheep and trained it to associate the vibration with food becoming available in a certain location. When the sheep was returned to the herd and received the signal, it ran toward the location where it expected the food. Despite lacking collars or any learned knowledge of the food’s availability, the rest of the herd often followed immediately. Pillot and colleagues investigated the decision process behind whether to pursue the departing sheep by varying the number of naïve sheep in the herd, and discovered that the sheep were sensitive both to the departing sheep and to the number of other sheep that were not leaving.
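That decision process is easy to sketch as a quorum response; the functional form and constants below are made up for illustration and are not Pillot and colleagues’ fitted model. Each grazing sheep’s chance of joining the departure rises with the number already departed and falls with the number still grazing:

```python
import random

def herd_departure(n=16, k=0.01, alpha=2.0, beta=0.2, seed=0):
    """One trained sheep departs; the rest follow probabilistically.

    Per-step follow probability grows with the number departed and
    shrinks with the number staying (illustrative constants).
    """
    rng = random.Random(seed)
    departed, staying = 1, n - 1
    history = [departed]
    while staying and len(history) < 500:
        p = min(1.0, k * departed ** alpha / (1 + beta * staying))
        joiners = sum(rng.random() < p for _ in range(staying))
        departed += joiners
        staying -= joiners
        history.append(departed)
    return history

print(herd_departure())  # counts typically crawl at first, then snowball
```

Each follower raises the pressure on those still grazing, which is why a whole herd can end up sprinting after one sheep with a vibrating collar.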
Thomas Schelling describes an occasion where he was waiting
in the wings before a talk that was supposed to have a large audience. He could
see the first few rows from where he was standing and, as the time approached
for the talk to begin, it looked like no one was in attendance. After he was
introduced, he bemusedly took the stage and realized that there were 800 people
in the hall, all crammed into the back, with the front 13 rows completely
empty. When he asked the hosts why the audience had been seated so inefficiently, they responded that there had been no official seating policy – the arrangement was due to the choices of the audience themselves.
Schelling (2006) entertains several hypotheses about how this outcome might have come about and suggests that everyone was following a simple rule: avoid sitting in the first occupied row. People might prefer to be closer to the front, or have no preference at all except to avoid being in the front. Everyone might be happier if the whole group could shift forward by 12 rows, but anyone who individually moved forward would be violating the preference to avoid being in the front. Or, as Schelling puts it, “How well each does for himself in adapting to his social environment is not the same thing as how satisfactory a social environment they create for themselves.”
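The rule is simple enough to simulate. In this sketch the hall dimensions and the arrival process are my assumptions (Schelling supplies only the rule): each arrival sits as close to the front as possible without ever sitting ahead of the frontmost occupied row, so the first arrival’s choice anchors everyone after them.

```python
import random

def fill_hall(rows=30, seats_per_row=40, n_people=800, seed=2):
    """Row 0 is the front. Nobody will sit ahead of the frontmost
    occupied row; subject to that, everyone sits as far forward as
    possible. The first arrival, facing an empty hall, picks a
    back-ish row (my assumption about the anchor).
    """
    rng = random.Random(seed)
    occupancy = [0] * rows
    frontmost = rng.randint(rows // 2, rows - 1)
    occupancy[frontmost] = 1
    for _ in range(n_people - 1):
        row = frontmost
        while row < rows and occupancy[row] >= seats_per_row:
            row += 1
        if row == rows:     # the back is packed solid: someone finally
            frontmost -= 1  # has to accept a row nearer the front
            row = frontmost
        occupancy[row] += 1
    return occupancy

occ = fill_hall()
empty = next(i for i, x in enumerate(occ) if x > 0)
print(empty, "front rows empty;", sum(occ), "people packed in behind")
```

The front stays empty not because anyone wants it empty, but because no individual will volunteer to be the person everyone else sits behind.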
Simple rules can yield extremely useful and complex behaviors when they are used in the right environment. On the other hand, a mismatch between the environment and the simple rule can cause problems: because the rules are so simple, they often lack the flexibility to adjust to situations that trigger them but in which they perform poorly. This, at least, is the gist of the heuristics-and-biases research program in the psychology of judgment and decision making, which (controversially) argues that humans often rely on heuristics in situations where engaging more complex processing would produce better decisions.
The conclusion reached by this work is responsible for the popular view that people are irrational. That conclusion has been hotly debated, and one major point of contention is how useful more complex cognitive processes, like explicit reasoning, can hope to be in achieving rationality (for some definitions of rationality). I think this debate touches on important but subtle foundations of cognitive science, which often remain obscure unless you have the prior knowledge to contextualize them. This series of posts is largely my attempt to summarize that debate, supplying that context where I have been able to identify it.
References:
Braitenberg, V. (1984). Vehicles: Experiments in synthetic psychology. Cambridge, MA: MIT Press.
Butail, S., Bartolini, T., & Porfiri, M. (2013). Collective response of zebrafish shoals to a free-swimming robotic fish. PLoS ONE, 8(10), e76123.
Leman, P. J., & Cinnirella, M. (2007). A major event has a major cause: Evidence for the role of heuristics in reasoning about conspiracy theories. Social Psychological Review, 9(2), 18–28.
Moussaïd, M., Helbing, D., & Theraulaz, G. (2011). How simple rules determine pedestrian behavior and crowd disasters. Proceedings of the National Academy of Sciences, 108(17), 6884–6888.
Pillot, M. H., Gautrais, J., Arrufat, P., Couzin, I. D., Bon, R., & Deneubourg, J. L. (2011). Scalable rules for coherent group motion in a gregarious vertebrate. PLoS ONE, 6(1), e14487.
Schelling, T. C. (2006). Micromotives and macrobehavior (Rev. ed.). New York: W. W. Norton.
Simon, H. A. (1996). The sciences of the artificial (3rd ed.). Cambridge, MA: MIT Press.