There has always been a generation gap. Perhaps Socrates really did describe the youth of his day with these words: “The children now . . . have bad manners, contempt for authority, they show disrespect to their elders.” And somehow, those young people who exasperated their parents and teachers really do turn out to be the leaders of tomorrow, and the world manages to survive, generation after generation.
But many teachers have noticed real changes in their students in the last ten to twenty years, changes particularly in the way they think. Many have wondered whether they’re just imagining things, whether the alleged changes are just anecdotal evidence from the teachers’ lounge, or whether there’s really something happening.
Turns out there is.
They Do Think Differently—And Why
A number of studies have demonstrated that today’s American students do in fact think differently from the way their parents did. We’re not talking here about what they think; kids have always been more idealistic than their parents, and they’ve always wanted to challenge the limits that their authorities place on them. We’re talking about how they think, about how their brains physically operate. And to no one’s surprise, the phenomenon is directly related to the way they get their information.
The rise of the web in the last fifteen to twenty years and its nearly universal invasion of the lifestyles of young people in particular have dramatically changed the way we as a culture get our information. Newspapers are dying; nobody buys their music on “albums” anymore; the stacks in libraries are accumulating dust. Whenever we want to know something, our first choice is to go online and find it. It’s faster, it’s easier, and it’s electronically searchable. There’s a reason that Google is the most heavily trafficked website[1]—and that its brand name is now in effect a verb that means “to find out.”
It would be foolish to argue that the technology is a bad thing. The web has taken the world by storm because it brings advantages to the users; that’s the way the free market works. The web is a better mousetrap.
But it has also changed the way we get our information—or, to word it differently, the way we learn. There are two significant characteristics to this change that have already begun to affect the way we think, the way our brains physically work.
First, we feel as though we don’t need to remember or know things anymore. The web is ubiquitous. It’s on pretty much every computer—desktop, laptop, notebook; it’s on our cell phones. We can access it anytime.[2] Some have suggested that this changes our need to learn things at all; if we can just look up whatever we need to know, then why clutter our brains with memorized stuff?[3]
A recent study demonstrated that students remembered random facts better when they were told that the computer would not save their notes, and less well when they believed they could look the ideas up later. The study authors wrote, “Participants did not make the effort to remember when they thought they could later look up the trivia statement they had read.”[4] It should not surprise us that the shape of our media will change the way we receive and process our information. “Human memory is adapting to new communications technology.”[5]

There is a second, even more significant characteristic to the change. Web users have gotten accustomed to receiving their information in bites rather than complete meals. They read disjointed snippets[6] and move on. They rarely engage in deep reading, what we used to call “getting lost in the book.” Bloggers have learned to keep their postings brief; readers won’t scroll down through several screens to read the whole piece. In his seminal book The Shallows: What the Internet Is Doing to Our Brains, Nicholas Carr begins with a personal observation:
I used to find it easy to immerse myself in a book. . . . That’s rarely the case anymore. Now my concentration starts to drift after a page or two. I get fidgety, lose the thread, begin looking for something else to do.[7]
He goes on to describe “the online life” as “a permanent state of distractedness”[8]; “we are evolving from being cultivators of personal knowledge to being hunters and gatherers in the electronic data forest.”[9] The rest of his book makes the case that our web-induced preference for skimming rather than reading is rewiring our brains, changing neural connections and thus changing the way we think.[10] This in turn closes certain intellectual doors—not necessarily permanently, since the brain that rewired itself once can rewire itself again—but it does affect our thinking in ways that are difficult to reverse.
How Do We Respond?
What do we educators do about this shift?
The answer to that question depends on our evaluation of the shift: Is it a good thing or a bad thing, something we want to encourage or something we want to resist? Or is it neither, simply a phenomenon that we need to consider as we design our teaching strategies and tactics?
Is This a Good Thing or a Bad Thing?
As we’ve noted already, the internet is successful primarily because it brings advantages; we can do things with it that we could never do without it.[11] Further, we can do basic tasks much more quickly and efficiently than we ever could before.[12] Voice over IP (VoIP) technologies, such as Skype, have enabled inexpensive live audio and video conversation around the world; remote surgeries have saved lives in areas where medical access was limited or nonexistent. We can’t be Luddites about something that has brought such great good.
So let’s confine ourselves to the specific changes in thinking that we’ve identified. Is it good that we memorize less and reference more? And is it good that we scan more than we read deeply, that we consume information in small bites?
Conserving Our Brain Capacity
At first it seems sensible not to use up our brain capacity with things that we can easily look up. A previous generation applied that concept to logarithmic tables with no negative cultural ramifications. But there are at least two problems with the assertion as it stands. First, it assumes that we have something called brain capacity—that memorizing one thing takes up “space” that is now not available for something else.[13] In fact that does not appear to be the case.[14]
Of course there are limitations on what we can learn. First, we’re limited to things that we have the intelligence to understand. (I’m still wrestling with the concept of the Trinity, and so is everyone else.) And second, we’re limited by the amount of time we have to memorize things. But there is no evidence that the brain has a quantitative limit to what it can store—or at least, one that we’re likely to bump up against in our lifetimes.[15] Memorizing one thing doesn’t mean that your storage capacity has been decreased; your brain is not a chip.[16]
The second problem with the assertion is that it assumes that memorization is simply storage and not activity. We have every indication that memory is more like a muscle than a storage container;[17] as you memorize things, you get better at memorizing other things. Even memorizing a list of random numbers, rather than wasting storage space, increases the likelihood that you’ll be able to memorize other, more useful things.
In short, the entire analogy behind our assumption is flawed. The brain doesn’t work like that.
The implications are significant. If we yield to the temptation to let the web be our storage device and look stuff up when we need to know it,[18] then we are letting our brains sit idle unnecessarily. And if the brain is in some sense comparable to a muscle that benefits from exercise, then we are literally making ourselves stupid.[19]
Further, research indicates that the brain doesn’t place information into long-term memory until it has chewed on it for a while. People who look everything up as they need to know it don’t remember anything for long.
The Gettysburg Address isn’t just a text on the syllabus to be invoked at test time. The cadences and assertions should be internalized forever. The danger of Google is that it’s so convenient that it turns the materials of history, science, literature, art, and politics into information, not learning. In a Google-ized classroom, we lose the practice of education-as-formation. And the more we let search engines function in student work, the less we can expect that students will remember our instruction once the semester ends.[20]
By its very nature, web surfing inhibits the transfer of information from short-term into long-term memory.[21]
When we read a book, the information faucet provides a steady drip, which we can control by the pace of our reading. . . . With the Net, we face many information faucets, all going full blast. Our little thimble overflows as we rush from one faucet to the next. We’re able to transfer only a small portion of the information to long-term memory, and what we do transfer is a jumble of drops from different faucets, not a continuous, coherent stream from one source.[22]
Why should we care if the students remember the specifics? Because the most important thinking we do is the exercise of judgment. We need to make complicated decisions, sometimes involving morals and sometimes involving mere efficiency. This is exactly the kind of thinking that computers don’t do well.[23] But when we make such decisions, we’re involved in the higher levels of Bloom’s taxonomy, specifically synthesis.[24] And to do synthesis, we have to have something to synthesize—namely, multiple data points in their larger context and with an understanding of their relative significance. This is not something you can look up just as you need to make the decision. It needs to get turned over in your brain for a while.[25] That’s why good decision making comes with experience; the decision maker has not merely gathered data but has had time to put things into perspective and to assimilate individual data points into the larger picture.[26]
What we’re talking about here is the biblical concept of wisdom, the ability to act on knowledge in ways that bring success. Can we expect students whose brains only surf to find and succeed in the will of God?
The Death of Deep Reading
What about the second change—that we prefer scanning and rapidly passing judgment on small bites of information rather than reading deeply? There’s certainly an immediate time advantage to reading a paragraph or two as opposed to a lengthy treatise. Further, studies indicate that the kind of reading that the web encourages strengthens abilities in mental coordination and rapid decision making, among other things. But are there advantages to deep reading that affect our ability to think clearly and wisely?
We’re really talking about two different ways of thinking. There’s linear thinking that works through a process or a logical chain to arrive at a conclusion. And there’s associative or networked thinking that rapidly moves from one idea to a related one and then jumps to a third.[27] The first type of thinking is good for proofs and decisions; the second is good for creativity.
And our brains do not work the same way in the two cases. A study conducted at Washington University’s Dynamic Cognition Lab in 2009 demonstrated that when we read stories, our brains immerse themselves in the situation in ways they otherwise do not, actually simulating mentally what we are reading on the page.[28] We cannot function adequately without that deeper sort of thinking; “the depth of our intelligence hinges on our ability to transfer information from working memory to long-term memory and weave it into conceptual schemas.”[29]
Every medium has its strengths and weaknesses; every medium develops some cognitive skills at the expense of others. Although the visual capabilities of . . . the Internet may develop impressive visual intelligence, the cost seems to be deep processing: mindful knowledge acquisition, inductive analysis, critical thinking, imagination, and reflection.[30]
Which way of thinking is better? We obviously need both. When ad agencies fail to break out of linear thought processes, they don’t come up with ideas like the Geico gecko. And when we’re trying to make an important decision through associative thinking, we fail to anticipate consequences or implications and thus make foolish choices. It’s a simple matter of using the right tool for the job. And that means we need to give our students facility in the use of both tools, both types of thinking as well as an understanding of when each is the right tool.
Absence of deep reading has significance beyond just the intellectual. A constant temptation in intellectual pursuits is to value efficiency above beauty—to learn quickly and thoroughly but not to smell the roses along the way. We are esthetic creatures as well as intellectual ones, and our feeling capacity needs to be fed as much as our thinking. Deep reading gives us an opportunity to savor words and ideas, to roll them around on our tongue, so to speak—to appreciate words as works of art as well as conveyors of truth. Skimming web content and moving quickly on tends to deprive us of that esthetic experience.[31] If we engage in it exclusively, we make ourselves less than we could be, less than we should be. We distort the image of God.
Where to Now?
Anyone who thinks he can stand astride the path of the juggernaut of cultural change and stop it with either nattering or brute force has not learned from history. Both Hitler’s Third Reich and Mao’s Cultural Revolution underestimated the force of the movement toward individual responsibility and freedom that had begun all the way back at the American Revolution, and the result was that they quickly became self-parodies; the “Reich” lasted barely a dozen years, a tiny fraction of the promised thousand, and Mao’s Cultural Revolution became only a distant memory as the pragmatic Chinese leadership embraced limited capitalism within a few years after Mao’s death. Today, significant parts of China, most especially Shanghai, are extensively Westernized. We will be similarly frustrated if we try to denounce and reverse the effects of the Third Wave, the information revolution. Furthermore, there are relatively few who would even want to do such a thing, considering the benefits that the revolution has brought.
Can we embrace the technology yet still limit its negative consequences? Can we use it in ways that ameliorate our weaknesses without destroying our strengths?
It’s hard to say with certainty. History teaches us that major cultural shifts bring bad as well as good, and no society has ever been able to be completely selective about that. Further, since most evangelicals are premillennial in their theology, they interpret the Bible as teaching that in the last days conditions will degenerate rather than improve. Many conservative Christians, then, while advocating good stewardship, would still expect general cultural conditions to degenerate as the return of Christ approaches.
Well, then, how can we be good stewards of a mixed blessing? How can we take advantage of all the teaching opportunities that the internet and related technologies provide and prepare our students to use the technologies wisely in their lives while minimizing the negative effects that we have identified above?
Exercise Your Concentrator
While students should be encouraged to use their time efficiently as they research—for example, skimming in order to identify the best potential resources on a given topic—they should also be expected to maintain their facility in deep reading. They should be assigned regular lengthy readings and follow-up exercises that evaluate the success of their effort—for example, writing careful summaries and evaluations of the complex ideas included in the reading assignment. In preparation for the assignment, teachers should advise the students that success will require undivided concentration, turning off the TV, not replying to text messages, not stopping to check email. The teacher may even require such limitations as a part of the assignment. An hour or two a week spent focusing will not leave the student in a social wilderness, and it will help prepare him for future times of necessary concentration.
Think Big
Every teacher struggles with assigning writing. We all believe that we should assign lots of writing, but we also know how long it takes to grade those assignments well. While very tightly focused writing assignments—“Summarize this article in one tweet”—are easier to grade, the student needs to write about big ideas, complex ideas, in order to engage all of his thinking capacity. He needs to write at least one major research paper[32] during his secondary schooling—preferably at least one every year through high school—to solidify his mind’s ability to hold a significant amount of research in his head, prioritize it, summarize it, evaluate it, and then set down his ideas cogently in writing.
The key to such assignments is the necessity of thinking about big ideas, complex ideas, all at once, rather than merely reacting to single, concisely stated ideas one at a time. As noted earlier, we’re talking about what Bloom’s taxonomy identifies as synthesis.
What if we don’t make these efforts? What if we adapt our teaching to fit the rewired thinking engendered by technology? Well, will there be any complicated decisions to make in the future? It’s hard to imagine that history will reverse itself and start calling for simpler decisions rather than more complex ones. As factions grow ever more diverse, as weapons grow ever more destructive, as tactics grow ever more devious—in short, as the consequences grow ever more catastrophic—can we afford to proceed by trial and error? Will throwing in a few more advisors with fast-twitch brains lead to wiser decisions and more stable outcomes? Won’t we need somebody who can surround all the contributing factors with his mind, corral them, tame them, and find a way through?
We’ll need people who can think deeply in every area of life: in business, in local and national government, and most certainly in the church. Economic prosperity, national survival, cultural stability, and most especially spiritual growth depend on it.
We can’t exhaust the teaching ideas and tactics here. In short, the teacher needs to model and require large-scale thinking from the students, thinking that involves comprehension, focus, careful analysis. For such exercises to stimulate the kind of brain formation that makes such thinking facile, the exercises need to be frequent and regular.
None of this emphasis needs to render the student inept at scanning or multitasking. The brain can retain both kinds of wiring, so long as it’s exercised in both kinds of thinking. Tomorrow’s wise leaders will certainly need both.
by Dr. Dan Olinger
[1] http://www.alexa.com/topsites
[6] The best examples of this kind of activity are Twitter feeds, Facebook status updates, and blog entry descriptions in RSS aggregators, such as Google Reader.
[7] (New York: Norton, 2010), 5.
[11] As just one example, the phenomenon of crowdsourcing enables the creation of products that were not economically feasible without the technology. Examples include Wikipedia and various layers on Google Earth, such as Panoramio. With the latter, the user can see multiple photographs of virtually any accessible location on the globe, and the system was put together in a matter of weeks.
[12] Moving from a shelf of physical volumes of The Reader’s Guide to Periodical Literature to online serial databases is indisputably a step up for researchers.
[13] Clive Thompson makes this assumption when he concludes from the fact that “we’re remembering fewer and fewer basic facts these days” that “we’re running out of memory.” “Your Outboard Brain Knows All,” Wired, October 2007.
[14] Carr, Shallows, 191–92.
[15] “The amount of information that can be stored in long-term memory is virtually boundless.” Torkel Klingberg, The Overflowing Brain: Information Overload and the Limits of Working Memory, trans. Neil Betteridge (Oxford: Oxford University Press, 2009), 36, cited in Carr, Shallows, 192.
[16] Carr, Shallows, 124.
[18] This concept has been called “just-in-time” (JIT) education, after the low-inventory system used in many highly profitable companies. In inventory management, JIT makes sense since inventory has both acquisition and storage costs. If there is no mental cost to acquiring and storing knowledge, however, there is no advantage to JIT learning. Further, if there is a mental cost to not storing knowledge, then JIT is a very bad idea.
[19] Nicholas Carr, author of The Shallows, first gained notoriety by publishing an article entitled “Is Google Making Us Stupid?” Atlantic, July/August 2008.
[20] Mark Bauerlein, “Google Memory,” The Chronicle of Higher Education, July 17, 2011.
[21] Carr, Shallows, 123–24. Carr cites the research of John Sweller, Instructional Design in Technical Areas (Camberwell: Australian Council for Educational Research, 1999).
[23] Skeptics should evaluate the suggestions of any word processing software’s grammar checker in order to be convinced. Computers can give the impression of intelligence—an early example was Eliza, a parody of Rogerian psychoanalysis—but even the proponents of this kind of programming call it “artificial” intelligence for a reason. The creator of Eliza, Joseph Weizenbaum, was horrified when actual Rogerians recommended using his software seriously. Ibid., 201–5.
[24] Synthesis was the fifth of six levels of the cognitive domain in Bloom’s original taxonomy. Bloom’s work, originally done in 1956, was revised in the 1990s. The most significant change in the cognitive domain was exchanging levels 5 and 6, with some renaming. The old fifth level, “synthesis,” is now the sixth level, “creating,” having changed places with “evaluating.”
[25] Short-term memory is capable of holding only a very few ideas simultaneously: two to four, with the actual number probably being closer to two than to four. Sweller, 4–5, cited by Carr, Shallows, 124.
[27] Carr aptly describes the way net surfing exemplifies this second type. Shallows, 122.
[28] The study itself is available on commercial online databases: Nicole K. Speer et al., “Reading Stories Activates Neural Representations of Visual and Motor Experiences,” Psychological Science 20, 8 (August 2009), 989–99.
[29] Carr, Shallows, 124.
[30] Patricia M. Greenfield, “Technology and Informal Education: What Is Taught, What Is Learned,” Science 323 (2009), 69–71, at 71.
[31] Carr, Shallows, 221.
[32] By “major,” I mean at least fifteen pages double-spaced. Longer papers won’t kill him. And cut-and-paste graphics don’t count toward the page total.