Intelligence as Awareness and Response
The most primordial kind of intelligence is awareness of relevant features in the environment, coupled with responses to that information. This environmental awareness-response type of intelligence only makes sense in the light of goals (“relevant” to what?) – from a single-celled organism responding to the presence of food by consuming it, to a human noticing that a plant is dry and watering it.
Evolution itself has acquired a great deal of intelligence; DNA is the transmissible record of the information evolution has acquired about the environment, from the perspective of billions of organisms with future existence as a “goal.” Simple organisms are still very viable, but the computational process of evolution has revealed that increasingly complex organisms that extract a great deal of information (and energy) from their surrounding systems are also extremely viable, especially over short time frames. Organisms have evolved increasingly complex neural systems and senses that reach into new domains of relevant information. Humans have created instruments that do the same.
Games vary in the amount of “luck” that is available. A solved game presents no opportunities for randomness, no luck – but even very complex games present different amounts of luck depending on the level of play. One measure of the luck available in a game is the distance from the best player to the ideal player; as chess becomes computationally solved, its skill component overtakes its luck component. Games, like awareness-response intelligence, only make sense in the context of goals. Awareness-response intelligence extracts as much information as it can about the world relevant to its goals so that “luck” is as small a factor as possible. Sources of apparent randomness must be controlled, studied until predictable, or, if neither is possible, responded to with optimal probabilistic strategies.
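The last point can be made concrete with a toy sketch. In rock-paper-scissors, a uniformly random strategy is the optimal probabilistic response to an opponent you cannot model: it guarantees an expected payoff of zero no matter how biased the opponent is. A minimal illustration in Python – the function names and the particular opponent bias are illustrative assumptions, not anything from the text:

```python
import random

MOVES = ("rock", "paper", "scissors")
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def payoff(mine, theirs):
    """+1 for a win, -1 for a loss, 0 for a tie."""
    if mine == theirs:
        return 0
    return 1 if BEATS[mine] == theirs else -1

def biased_opponent(rng):
    """A heavily exploitable opponent: plays rock 60% of the time."""
    return rng.choices(MOVES, weights=[0.6, 0.2, 0.2])[0]

rng = random.Random(0)
rounds = 100_000
total = sum(payoff(rng.choice(MOVES), biased_opponent(rng))
            for _ in range(rounds))
print(f"mean payoff of uniform play: {total / rounds:+.4f}")  # hovers near 0
```

A player who could model the bias would exploit it (always playing paper); the uniform player merely breaks even. That is exactly what an optimal probabilistic strategy promises when a source of randomness can neither be controlled nor predicted.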
The “intelligence” apparently contained in complex economies (the “invisible hand” of the market, ecosystems) is of the awareness-response type.
Intelligence as Extracting and Communicating Aboutness
Human intelligence includes awareness and response to the environment, but adds a new feature – extracting information from other intelligences, and communicating information as well. This type of intelligence also makes no sense apart from the concept of “goals” – and part of the complexity of the problem of communication and aboutness-extraction is that communication partners have some shared and some competing goals.
Sometimes people comment that they’re surprised that the problem of computer translation of languages has not been solved. Extracting the meaning from language – the story or concept communicated by the words – is an extremely hard problem that humans specialize in. It is not inherent to intelligence itself. A self-modifying super-intelligent being with vast computational resources could likely achieve its existence and reproduction goals with only the awareness-response type of intelligence – language and communication might prove to be a hindrance (see Watts 2006).
Humans tend to focus only on aboutness-extraction intelligence when evaluating others’ intelligence. What we mostly desire from an AI (or other conversation partner) is that it understand what our stories are “about,” and prove it by responding with a story with similar aboutness (see Schank 1990). This is why the “Turing test” for artificial intelligence is unfortunate. When listening to others, humans mostly listen long enough to extract an “aboutness,” and then search their memories for a story with a similar aboutness – and this aboutness is often a complex relation or “moral” with little relation to the naively-construed “topic” of the communication. (I have it on good authority that mathematical communication shares this feature.) Only rarely do humans respond to communication by modifying their own stories and models of the world.
The aboutness-extraction type of intelligence is a consequence of human intelligence being shared across many brains, and even outside of brains, in written language and artifacts. The most important part of the human environment is other people, and communication with them is the only source of culturally preserved intelligence – knowledge that would take individual brains far too long to figure out on their own (see Boyd et al. 2013 on the “lost European explorer experiment”). Note that intelligence can actually be stored in culture, just as it can be stored in DNA, without the organisms involved having any understanding as to why it works. Awareness-response intelligence and aboutness-extraction intelligence work together to form culture.
Items of culture encode and contain intelligence relevant to human goals. But they are themselves entities under selection, and as they increase in complexity, they begin to display intelligence of the awareness-response type (non-conscious). Successful institutions evolve mechanisms for their own maintenance, such as awe-inspiring religious music or the hazing rituals used by fraternities. Items of culture rely on humans for reproduction, and their existence and reproduction goals depend on not being destructively detrimental to human carriers – but only in the very long run, as with contagious pathogens.
Mathematics is a type of symbol-mediated aboutness extraction and communication. The necessity of communicating suspected patterns to other minds limits the kind of information that can be communicated; minds may experience visual insights, for instance, that are difficult to express in any form of communication. Specialized forms of communication, including mathematics and jargons, trade off better compression and understanding within a group for poorer communicability outside that group.
The Flynn Effect is likely the result of the recent proliferation of culturally-transmitted tools that assist with aboutness extraction from symbolic systems.
Intelligence as Compression
A special type of intelligence is the organization of complexity into a simpler, less resource-intensive form. This is what is called “insight,” and it is pleasurable for humans even when not relevant to survival (see Hudson 2011). The complementary tool, humor, offers a pleasant sensation as a reward for weeding inconsistencies out of one’s model (see Hurley et al. 2013), though, like compression and music, it has many social applications. Compression is likely one of the regularities underlying subjective aesthetic judgment.
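The compression view can be given a toy measurement. Using an off-the-shelf compressor (zlib) as a crude stand-in for a pattern-finding mind, data generated by a simple rule shrinks dramatically, while patternless data barely shrinks at all. A hedged sketch – the function name and the example inputs are illustrative assumptions, not anything from the text:

```python
import random
import zlib

def compressed_ratio(data: bytes) -> float:
    """Compressed size as a fraction of original size (lower = more pattern found)."""
    return len(zlib.compress(data, 9)) / len(data)

rng = random.Random(0)
patterned = b"abcd" * 2_500                                      # a simple generative rule
patternless = bytes(rng.randrange(256) for _ in range(10_000))   # no rule to find

print(f"patterned:   {compressed_ratio(patterned):.3f}")    # a tiny fraction of the original
print(f"patternless: {compressed_ratio(patternless):.3f}")  # stays near (or slightly above) 1.0
```

The compressor “gets the insight” in the first case – it finds the short rule behind 10,000 bytes – and fails in the second, where there is no rule to find. This is only an analogy, of course: zlib finds repetition, not natural selection or Milankovitch cycles.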
This type of intelligence makes sense even without reference to goals – but reducing complexity only matters given limited processing power and storage space, a constraint as important to human intelligence as the fact that cognition is distributed across many separate minds. An elegant compression is often itself the “aboutness” that is communicated in symbolic systems.
Language itself acts as a store of information, again shared among minds, and language tends toward compression. As stories and concepts are shared, they become more compressed, until they reach the final stage: a metonym (see Ellis 1989), a single word that represents a story or concept that conversation partners are expected to understand. A word is the ultimate tl;dr for human communication. As awareness-response intelligence increases through humans acquiring new senses via technology, language grows to fill the space of understanding. New models compress complex, messy observations into cheaper, cleaner, more useful patterns: natural selection, Milankovitch cycles, game theory. But notice that an explosion in awareness-response intelligence without human limitations would not need or perhaps even benefit from language or the compressions expressed in it; a super AI would likely notice natural selection and Milankovitch cycles and game theory without any language at all.
Consciousness in the sense of the subjective experience of self-awareness is likely orthogonal to each kind of intelligence.
Boyd, Robert, Peter Richerson, and Joseph Henrich. “The Cultural Evolution of Technologies: Facts and Theories.” In Cultural Evolution: Society, Technology, Language, and Religion, eds. Peter J. Richerson and Morten H. Christiansen. MIT Press, 2013.
Ellis, Bill. “When is a Legend? An Essay in Legend Morphology.” In The Questing Beast: Perspectives on Contemporary Legend, Vol. IV, eds. Gillian Bennett and Paul Smith. Sheffield Academic Press, 1989.
Hudson, Nicholas J. “Musical beauty and information compression: Complex to the ear but simple to the mind?,” BMC Research Notes 2011, 4:9, doi:10.1186/1756-0500-4-9.
Hurley, Matthew, Daniel Dennett, and Reginald Adams, Jr. Inside Jokes: Using Humor to Reverse-Engineer the Mind. MIT Press, 2013.
Schank, Roger. Tell Me A Story: A New Look at Real and Artificial Memory. Scribner’s, 1990.
Watts, Peter. Blindsight. Tor Books, 2006.