My daughter is Laura Ingalls Wilder, my son is a garbage can.
Yesterday before the kids' school day started, I went to the dentist. I didn't think it would take that long, and so the kids opted to come with me, rather than go to early morning child care.
My crown, via flickr
Long story short, we ended up leaving there about 9:30, and by the time we got out the door, my daughter was pretty upset. I walked her to her class; they were having a meeting, and everybody was in the main room. She put her guitar and lunchbox down in the back room, but couldn't go out: she was too weepy, and even though she hadn't missed the activity she thought they'd already be doing, she didn't want everyone looking at her when she went back out.
We spent a few minutes talking about it; finally, when it seemed clear that I couldn't get her to go out, I offered to take her to Starbucks with me for a quick drink, and bring her back before her activity started. So we left, went to Starbucks, and came back about twenty minutes later. By now, she was fine, and went running off to activities (in this case, sculpture class), no problem.
front porch, via flickr
Right as she was heading to class, my phone rang. The school office was calling: Elizabeth's female classmates wanted to make sure she was all right, and to encourage her to come back.
This is a crew that's been together for the last three years, and includes several girls who've been together since nursery school. Now, it was touching that her girlfriends wanted to make sure Elizabeth was all right. But the other thing that struck me was that they'd go en masse to the office and ask that someone call me. When I was in fourth grade, the office was definitely someplace you did NOT want to go; certainly you didn't go there with your friends and ask for a favor.
leaf spiral, via flickr
It's not that the girls assume that they can go anywhere, or are unsupervised: Peninsula is unstructured in the same way that a medieval village is unstructured, which is to say it has a minimum of formal regulation but a ton of custom, and its residents are guided by an awareness that they have to get along because they're going to be living together for a long time. (There are also all kinds of rules about boundaries, what trees you can climb and which ones you can't-- apparently there are marks on the ones you can't climb, but I've never seen them.) But they took for granted that they could use the office to check on their friend, and that no one would find that strange.
It says a lot about how the kids view teachers and staff, and how they see their own place at the school.
I've long been a fan of the IDEO talks, so it'll be a real pleasure-- and a real challenge-- to give one.
Know How Talks at IDEO
Thu, 11/06/08 5:00 pm
Free and open to the public
See bottom for venue, schedule, and more details
Alex Soojung-Kim Pang: The End of Cyberspace
The concept of cyberspace-- an alternate dimension of information, accessible from computers, that was separate from and superior to the physical world-- has helped shape the way we think about everything from the design of online environments, to intellectual property law, to predictions about the future of cities, work, and space. I want to explain how the idea of cyberspace came to be so compelling, and chart where it's going.
Cyberspace has its origins in science fiction, video games, the mythology of the Western frontier, and other cultural sources. But it became powerful because it helped us make sense of the emerging relationship between people, information, and the Web, in an era defined by desktop computers, modest Internet connections, and graphical interfaces. Cyberspace was an artifact of a particular moment in the cultural history of human-computer interaction.
So what happens to the concept of cyberspace as the character of our interactions with computers and information changes? What happens when we move toward an always-on, mobile, ubiquitous future? I argue that the notion of cyberspace will become obsolete. As Gene Becker puts it, "cyberspace was a separate place from our world only because the necessary bridging technologies didn’t exist. Now that they do ... cyberspace is coming to us." Given how influential the idea of cyberspace was, it's worth asking what its obsolescence will mean, and what might come after cyberspace.
Alex Soojung-Kim Pang is a Research Director at the Institute for the Future, where he leads projects on the future of science, and an Associate Fellow at the Saïd Business School at Oxford University, where he works with students interested in futures and forecasting. Before becoming a futurist, Alex studied history and sociology of science at the University of Pennsylvania. He is writing a book on the end of cyberspace (http://www.endofcyberspace.com); his earlier projects include histories of Victorian solar eclipse expeditions; Buckminster Fuller and the geodesic dome; and the development of the Apple mouse.
Upcoming Know How Talks
This will be the last talk for 2008.
*The Know How Talks are usually held on Thursdays at 5:00,
in IDEO's Palo Alto cafe next to our lobby at 100 Forest.
Enter from the alley between Alma St and High St.
The talks are open to the public. No need to RSVP.
[Reposted from my Red Herring blog, 2005]
When modern architecture emerged in the first years of the last century, it threw down a gauntlet at the feet of traditional neoclassical and academic architecture. Modernism's style was stripped-down and functional. It celebrated the beauty of machines and the art of engineering, and expressed itself in concrete and steel, rather than brick and wood. Most important, it declared that the future would never again look like the past: from now on, architecture would be about innovation and change, not about working with timeless principles and eternal proportions.
Implicitly at first, and then consciously, architectural exhibits became predictions. Buckminster Fuller's Dymaxion house, first exhibited in 1927, exemplifies how modern architecture backed into the futures business. The Dymaxion house was a hexagonal structure, suspended from a central load- and services-bearing column. Virtually everything in it was made of aircraft-grade metal. The house wouldn't be built on-site, like traditional houses; instead, it would be mass-produced, like cars or cans of peas, and delivered to owners.
Soon "the home of the future" became a stock element of every architectural exhibit, World's Fair, forward-looking corporate display, or popular magazine special issue. (Even World War II couldn't derail them: a 1943 brochure showed a couple admiring a neighborhood of modern houses under the caption, "After total war can come total living.") Sporting automated kitchens, robot butlers, furniture that you washed with a high-pressure hose, and helipads (the long, sad story of why we don't have personal helicopters or jet packs will have to wait for another time), these houses were sleek temples of convenience, promises of a world in which the home would be as frictionless and worry-free as a department store.
Of course, almost none of this has come to pass. Instead, the "home of the future" projects serve as textbook examples of how you can get the future wrong, and why.
From the Examiner:
Thank goodness my daughter wanted to be Laura Ingalls Wilder this year. My son's a garbage can-- really, a garbage can with the bottom cut out so he can walk. He first wanted to be a garbage can on wheels, and have me push him around the neighborhood, but that idea died a quick death.
Halloween costumes have gotten out of hand. Gather any group of parents and you'll quickly hear about how the choices of costumes have gone from witch and princess to sexy witch and pouty porn princess.
I spent a really stimulating day yesterday at the Tinkering as a Mode of Knowledge conference, listening and talking to people like Dale Dougherty (founder of Make Magazine, the Maker Faire, etc.), Mitch Resnick (MIT Media Lab), Rick Prelinger (the Prelinger Library and online film collection), Anne Balsamo, and others. We're meeting for part of today, but I wanted to start reflecting on yesterday's discussion; in particular, I want to get at the question of what tinkering is. Is it a unified body of practices? Is it a distinct set of skills? Is it a historical moment? Is it just a trendy name? This is something we spent a fair amount of time discussing, formally and informally, and the answer is: it's all of those. I also think there are a couple of other important things that define tinkering.
What is Tinkering?
You can define tinkering in part in contrast to other activities. Mitch Resnick, for example, talks about how traditional technology-related planning is top-down, linear, structured, abstract, and rules-based, while tinkering is bottom-up, iterative, experimental, concrete, and object-oriented. (Resnick is very big on creating toys that invite tinkering.)
Anne Balsamo and Perry Hoberman have looked at a wide variety of tinkering activities, ranging from circuit bending to paper prototyping to open source to blogging. They argue that these varied activities are unified by a common set of principles or practices. (The following are just highlights.)
Tinkering isn't so much a specific set of technical skills; tinkerers tend to take a pretty instrumental view of knowledge. You pick up just enough knowledge about electronics, textiles, metals, programming, or paper-folding to figure out how to do what you want. Tinkering certainly respects skill, but skills are a means, not an end: mastery isn't the point, as it is for professionals. Competence and completion are.
Is Tinkering Shallow or Deep?
One of the things I talked with several people (Mike Kuniavsky in particular) about was how historically specific tinkering is. The deeper question is, is this just a flash in the pan, a trendy name without any substance underneath? The answer we came up with is that this is like a musical style, both the product of specific historical forces, and an expression of something deeper and more fundamental. (Think of jazz: you can talk about how it emerges in the early 20th century out of blues, ragtime, and other previous musical forms, reflects particular sociological and historical trends, and is guided by certain assumptions about beauty and what music is; but at the same time, it definitely expresses a deeper impulse to create music.)
Think of the historically contingent forces shaping tinkering first. I see several things influencing it:
No doubt there are other sources you could point to-- microentrepreneurship or the growth of "jobbies," the presence of an infrastructure that supports the sharing and tracking of unique handmade things (from eBay to ThingLink).
Does Tinkering Matter?
That's a pretty varied list. And it suggests that tinkering is more than a local, Valley, geek leisure thing.
First, tinkering is a powerful form of learning. Even if it doesn't stress mastery of skills, tinkering does emphasize learning how to use your hands, learning how to use materials, and engaging with the physical world rather than the world of software or Second Life-- though tinkering does share the sensibility that lots of kids bring to programs and virtual worlds: you just get in there, hit buttons, and see what happens.
This really matters because you can be creative with stuff in ways you can't with bits, and because the more you understand the possibilities and limitations of materials-- or more abstractly, the more you learn how to develop that knowledge-- the smarter you become. In this respect, it dovetails with "a little-noticed movement in the world of professional design and engineering" that Gregg Zachary wrote about a few weeks ago: "a renewed appreciation for manual labor, or innovating with the aid of human hands." (I write about this at greater length on End of Cyberspace.)
Second, tinkering is forward-looking. It's partly about how we'll use and interact with technologies in the future. As much as any loose movement can be described this way, tinkering is a set of anticipatory practices, aimed at developing a sensibility about the future. It's a way to develop skills that are going to matter in the Conceptual Age, in the ubiquitous computing world. As we move into a world in which we can manufacture things as cheaply as we print them, the skills that tinkerers develop-- not just their ability to play with stuff, or to use particular tools, but to share their ideas and improve on the ideas of others-- will be huge. (I talk about this some in an article in Samsung's DigitAll Magazine.)
Finally, tinkering is an expression of the nature of our engagement with technology. If you buy the argument of Andy Clark that we are natural-born cyborgs, you can see tinkering as a form of co-evolution with technology, or a kind of symbiotic activity.
[Reposted from my Red Herring blog, 2005]
Recently BBC World had an article on baby blogs-- blogs that parents keep about their children, the digital equivalent of baby books. Coincidentally, that same day I posted my 500th entry on my blog about my children, which I started soon after getting a digital camera. Like most articles about blogs, its substantive points were mixed up with a measure of alarmism and technical naivete. Some of it was taken up with worries about what unmentionable things pedophiles could do with those cute baby pictures, and fretting over how revealing details about your child's daily routine isn't very smart. (Hello? Ever heard of password protection?)
The article also suggested that baby blogs were invasions of privacy. What if, twenty years from now, the merest acquaintance could read about your child's potty-training exploits, or their first visit to Grandma's house? Wouldn't making those details of your child's life available to people they barely know violate their privacy, and make it harder for them to get dates? (At this point in the article I wanted to pump my arm and shout "Yessss!" My five-year-old daughter is only in nursery school, and already I've guaranteed that she'll spend her college years undistracted by a social life.)
My efforts to archive my children's lives stand in stark contrast to the scanty documentation of my own past. My entire childhood is preserved in just under two hundred pictures, a few letters, and a couple of yearbooks: it all fits in a single box. In contrast, I can take two hundred pictures of my daughter at a single birthday party. The constantly falling cost of digital media lowers the barriers to recording everyday events, and to preserving every last picture and audio file. At my current rate, each of my children is in danger of having me take 50,000 pictures of them by the time they turn 18.
Of course, parenting is one long invasion of privacy, but the idea of baby blogs coming back to haunt their subjects later in life is still an interesting one. Technology promises to take a ritual that had traditionally been a painful but very limited rite of passage-- the baby books shown to the fiance, the clever candids shown at the wedding reception-- and make it into a full-time affair.
It also shows that the relationship between privacy and technology is really pretty complex. Worries about technology affecting privacy are perfectly reasonable; but worries about specific technologies are often misplaced. To really know what to worry about, you have to think a bit more about what privacy is, and how technology can affect it.
[Reposted from my Red Herring blog, 2005]
The aging of the Baby Boom generation is a perfect example of what Peter Schwartz calls an "inevitable surprise." For years futurists have been talking about it, warning that it would be an event of tremendous importance. But most companies haven't taken it very seriously: like the new millennium, it always seemed distant, even as it got closer.
This lapse is made more peculiar by the fact that it's so easy to see. If you were born between 1946 and 1964, you're a boomer. You're part of the story, and there are 80 million of you in the United States. But most boomers think of themselves as getting older, not old: they're not about to cross into that social Hell of polyester clothes and retirement homes, to say nothing of boredom, inactivity, poor health, and looming mortality. Most of us would say, that's not going to be me. I've spent my whole life being active, and I'm damned if I'm going to just shuffle offstage now.
Here's the thing: we won't have to. Instead of giving in to old age, boomers are going to dramatically change what aging is. They're going to use their money and political clout to alter our perceptions of age, the way elders live, and their place in society, the economy, and politics. Boomers will have as great an effect on our notions of aging as they had on youth in the 1960s. Indeed, so big are these changes that historian Theodore Roszak speaks of a "longevity revolution" as important as the Industrial Revolution.
I'm at a conference on "Tinkering as a Mode of Knowledge: Production in the Digital Age," at the Carnegie Foundation for the Advancement of Teaching. There are about 40 of us in a second-floor open conference room, including some really awesome people. Given my interest in DIY science, the relationship between craft knowledge and formal knowledge, and the history of scientific practice, this should be a really interesting couple of days, and one that can feed into several of the different projects I've got going on.
[Reposted from the Red Herring blog, ca. 2005.]
Let me begin with a confession. I spend most of my working life in front of a computer, and I suspect a fair amount of that time is wasted. I check my e-mail several times an hour. I regularly scan my RSS feeds for new posts. I visit news sites, just in case they've updated the list of breaking news stories. I can follow hyperlinks from one end of the Internet to the other if I'm not careful.
It's all the electronic equivalent of bouncing your leg up and down, or ripping a napkin apart. And I don't need to be this wired. It doesn't help my work or thinking; to the contrary, these information-era equivalents of nervous tics are just distractions. Yet I do them.
I'm hardly alone. Some of my friends lead lives that require Blackberries; others have Blackberries that take over their lives. A recent Yahoo-OMD study of 28 people forced to go offline for two weeks shows how dependent—both in the functional and the emotional sense—people become on being connected. According to The Atlantic Monthly, "Across the board, participants reported withdrawal-like feelings of loss, frustration, and disconnectedness after the plug was pulled." Indeed, "[t]he temptation to go online was so great that the participants were offered 'life lines'—one-time, one-task forays onto the Web—to ease their pain." Add to this the recent Pew Internet Survey study that found that Internet users are spending more time online, and less watching TV, and you get a picture of growing numbers of people turning productivity tools into weapons of self-distraction.
It's just the latest evidence confirming the truism that we live in an age of information overload. How did this happen? And is it going to get worse?
Up at the Carnegie Foundation for the Advancement of Teaching this evening, for the opening of a conference on tinkering. It looks like it's going to be a really fascinating event. There are lots of cool people, it's a wonderful subject, and the venue is really nice.
She was especially proud of the blood on the spear. Apparently it makes the stuffie look REALLY ferocious. I hope I don't need to take her to a psychologist....
[A repost from my old Red Herring blog.]
Futurists live with a paradox. On the one hand (as they are the first to admit), it is impossible to predict the future. On the other, it is more important today than ever to try.
In a world that changes slowly, prediction is easy and uninteresting: the future will be much like the present, and the only real uncertainties are natural disasters like famine or drought. In a rapidly-changing world, in contrast, prediction is hard but important. The value of knowing the future, in other words, increases in proportion to its impossibility.
Futurists discovered this problem in the 1960s, when the modern field got its start. At the time, futurists thought that with enough computing power and the right programs, it really would be possible to predict the future, or at least assign statistical probabilities to major events. After a few years, though, two things became clear. You couldn't predict the future. And you didn't need to.
Specific events are impossible to predict because so many nearly-random factors can influence them: call this the "for want of a nail" problem, after the famous proverb. History is filled with grand events that turn on some small hinge—a last-minute decision, a missed connection, the failure of reinforcements to arrive in time. The future will be full of them, too.
But even if you couldn't answer fortune teller-level questions, futurists could see the broad outlines of the Future: the world that's shaped by demographics, long-term economic patterns, geopolitics, and culture. Just as the Annales school of historians argued that the longue durée—the grand patterns of history shaped by climate, culture, demographics, and economics—was more important than the short-term world of politics or biography, so now did futurists argue that they could chart the coastline of the future.
Today at lunch I heard a really fascinating talk on certainty and knowledge by Robert Burton, author of a recent book called On Being Certain. It's a terrific project, because it makes a radical but to my mind entirely plausible case about the nature of certainty-- that it's not the product of logical operations, but an emotional state whose inner workings are (for now at least) forever mysterious.
But first, let's back up a bit. For a long time psychologists have mapped (and argued over) the differences between knowing things, and knowing that you know them-- between cognition and metacognition. The work of Ben Libet in the 1980s showed that there's a "readiness potential"-- a gap of several hundred milliseconds between when we make a decision, and when we become aware of the decision. More recently, John-Dylan Haynes' work shows that this gap can, under some circumstances, be a lot longer.
There are also interesting examples of people being certain about things under circumstances where certainty is not really possible. William James, in his study of religious experience, talked about "felt knowledge" and mystical states: "Although so similar to states of feeling," he wrote, "mystical states seem to those who experience them to be also states of knowledge." Deja vu and cortical stimulation both generate a feeling of certainty, even in the absence of evidence. Baseball players talk about being able to see the ball and adjust their swings accordingly, when neurological studies show that they can't possibly have enough time to track the ball, figure out how to change their stance or the angle of the bat or the speed of the swing, and then execute. Ditto for tennis players and other athletes who rely on lightning-fast reflexes; by that logic, many of those sports shouldn't exist at all.
All of this leads to Burton's argument that there's a really big disconnect between knowledge and certainty. Here's the book blurb:
In On Being Certain, neurologist Robert Burton challenges the notions of how we think about what we know. He shows that the feeling of certainty we have when we "know" something comes from sources beyond our control and knowledge. In fact, certainty is a mental sensation, rather than evidence of fact. Because this "feeling of knowing" seems like confirmation of knowledge, we tend to think of it as a product of reason. But an increasing body of evidence suggests that feelings such as certainty stem from primitive areas of the brain, and are independent of active, conscious reflection and reasoning. The feeling of knowing happens to us; we cannot make it happen.
What does this mean? Looking at 2+2 = 4 gives us both a correct answer and a feeling that it's right (metacognition); looking at Einstein's theory of special relativity does not. Likewise, optical illusions are things that we can logically know work one way, but that emotionally feel wrong.
We normally think of thoughts as logical and reasonable, but they consist of lots of things: sensory inputs, biological predispositions, prior experience, and mental sensations. All of these are flexible, contingent, fragile, constructed, and otherwise... uncertain. Certainty isn't a logical conclusion; it's an emotional state. "Certainty and other feelings of conviction," Burton says, "are neither conscious choices nor even thought processes. They are mental sensations that arise out of involuntary brain mechanisms that, like love or anger, function independently of reason." Later in the talk, he added, "The feeling of knowing operates as an intermediary between the world, and your conscious thoughts... We think of the feeling of knowing as the logical result of a line of reasoning, but it's actually the other way around."
However, while that "a-ha" feeling of certainty is not something you can arrive at through logical means, it can be trained. After all, physicists do look at the theory of relativity, and have that a-ha feeling.
I suspect you could take this idea and blend it into things like the sociology of science-- how does it mesh with concepts like tacit knowledge?-- and Kuhn's arguments about the psychological dimension of paradigm shifts (something that hasn't gotten a lot of attention among later readers).
Martin Dodge and Rob Kitchin's 2001 book, The Atlas of Cyberspace, is now available as a free PDF. It's a huge file, of course, and I still think the book itself is well worth owning, even though the idea of an "atlas" of cyberspace enshrines a concept that's worth challenging.
[Reprinted from my Red Herring column, 2004.]
I've had my own blog since late 2002. The post with the largest number of comments isn't my hilarious, cutting review of Matrix Reloaded; it's not my insightful analysis of Andy Clark's Natural-Born Cyborgs; it isn't even my post about Danish train stations. No, the post that has inspired the largest number of comments is one about Super Nanny, a reality TV show.
And most of the comments don't have anything to do at all with my post. Instead, the commenters are just venting about child-rearing, praising the show, or saying how much they love super nanny Jo Frost. I'm the equivalent of the bartender: I put out the nuts and wipe down the bar. Except this time, the patrons have brought their own bottles.
How did this happen? And why does it matter? The answer to the first question is easy: Google. For reasons that I can't divine, a search on "Super Nanny" returns my post about the show as the #2 result. And why does it matter? In its own small way, it's an unexpected, but illuminating, example of user reinvention, the phenomenon wherein people take a technology or medium intended for one purpose, and remake it for themselves.
About two weeks ago, I lost my glasses. In the house. It was the weirdest thing. I lose pens all the time (and if they're really expensive, usually find them); leave my iPod headphones in clothes; have two watches because one of them is often off on some mysterious excursion; but I never lose my glasses.
However, this time, I took them off, put on contacts, found some reading glasses (talk about something that I lose!), then... I have no idea what. The glasses were abducted by aliens, or something.
I had been thinking about buying some new ones anyway, and figured I needed to get my vision checked; and I have enough contacts and reading glasses to function perfectly well, so it wasn't the end of the world.
Since high school I've worn wire-framed glasses: aviators, round standard issue minority professional glasses, or variations of those two. I decided to go with something different this time. Black Oakley metal frames, black on the outside, red on the inside.
I'm glad I've got them. Not only do I like the look, but I'm really glad to be able to see in the distance, then read something in front of me, without having to look down, reach in my jacket for my reading glasses, reach in another pocket for them, possibly look elsewhere, put them on, and then look down and read. It's bad UI for your eyes.
In the last couple of days, I've gotten better at reading without my reading glasses; not great, but between my eyes reshaping themselves a bit, and my brain getting better at puzzling out fuzzy shapes and interpreting them as words, I'm fairly functional. It's a reminder of just how much our sense of the world, and our senses, function at an intersection of technologies, bodies, and brains. We think of vision as something that's pretty straightforward, but in fact it's not: there's a complicated collaboration between glasses or contacts (or glasses and contacts); our eyes (and the muscles that adjust the shape of our eyes); and our brain's ability to process the signals it receives, and the control it can exert over how the eye behaves.
This is a really interesting example of how technology might affect perception: a new study indicating that people who grew up watching black-and-white television dreamed (and still dream) in black and white, while people who grew up watching color TV dream in color:
While almost all under 25s dream in colour, thousands of over 55s, all of whom were brought up with black and white sets, often dream in monochrome - even now.
The findings suggest that the moment when Dorothy passes out of monochrome Kansas and awakes in Technicolor Oz may have had more significance for our subconscious than we literally ever dreamed of.
Eva Murzyn, a psychology student at Dundee University who carried out the study, said: "It is a fascinating hypothesis.
"It suggests there could be a critical period in our childhood when watching films has a big impact on the way dreams are formed.
"What is even more interesting is that before the advent of black and white television all the evidence suggests we were dreaming in colour."...
Research from 1915 through to the 1950s suggested that the vast majority of dreams are in black and white but the tide turned in the sixties, and later results suggested that up to 83 per cent of dreams contain some colour.
Since this period also marked the transition between black-and-white film and TV and widespread Technicolor, an obvious explanation was that the media had been priming the subjects' dreams.
[Icelander] Palme Vidar, with the wisdom of 73 years, is equally ruminative. "This is a small country," he says. "We have always swung between feast and famine. There have been terrible times before, too, when the sheep bubble burst and the herring fleet failed. We always hang on. And you know, we were not going in a good direction. When I was a boy, if you went to the harbour to fish and you got wet, you could not fish again until the next day, because you had only one pair of trousers. Today people have too many trousers."
If you can survive both a sheep bubble (what a concept), and your herring fleet failing... AND to top it off, dealing with too many trousers-- well, you can get through anything.
A few years ago I wrote an online column for Red Herring. The gig was interesting, but after a change of editorial regime, they decided to stop the experiment. The pieces all kind of disappeared after a while, and I realized that some of them were actually pretty good. Heaven knows I spent plenty of time on them.
So if for no other reason than to have easily-accessible copies of them, I'm going to start reposting them here. Most were from 2004, so they might seem a bit dated; but I think some of the ideas are still worth playing with.
Knowledge is power. For a long time we thought it was something immaterial, cerebral, almost otherworldly. No less a figure than Plato argued that the world of things and appearances was but a dim reflection of another world of ideal types, more real than reality itself. But Plato's theory is too good for this world. Knowledge is also things, and actions.
One of the key events in twentieth-century philosophy was the discovery that the Platonic model of knowledge was incomplete. In mathematics, Kurt Gödel demonstrated that mathematics could never be a perfectly self-contained, exhaustively proven system. For decades, philosophers and mathematicians had worked to find secure foundations for mathematics; Gödel's incompleteness theorem showed that the search was fruitless.
The critique continued in philosophy. Cambridge University's Ludwig Wittgenstein, arguably the twentieth century's most influential philosophical mind, argued that the meaning of language arises from its use, rather than from its logical properties. A few years later, British philosopher Michael Polanyi coined the term "tacit knowledge" to describe things that we can know but can't effectively communicate. Tacit knowledge, Polanyi argued, is an important component of skilled work, and even shapes activities that we traditionally have thought of as entirely logical (like science).
Historian Thomas Kuhn's Structure of Scientific Revolutions took Polanyi one step further, and opened up a whole new front in the assault on traditional notions of knowledge. Structure reconceived science as a puzzle-solving activity guided by a mix of formal methods and cultural norms, and punctuated by dizzying revolutions and paradigm shifts. Sociologists of science, cultural anthropologists, literary and gender theorists all used Kuhn as an inspiration for their critiques of objectivity.
You would think that after all this, the Platonic model of knowledge would be dead and gone. But it lives on in information technologies.
Newsweek reports on research by UCLA neuroscientist Gary Small on the impact of information technologies on our brains. (He must have a good literary agent, since the article is triggered by his new book, iBRAIN.) Small is director of the UCLA Center on Aging and the author of three previous books (two with the word "bible" in them-- he has a really good agent).
Is technology changing our brains? A new study by UCLA neuroscientist Gary Small adds to a growing body of research that says it is. And according to Small's new book, "iBRAIN: Surviving the Technological Alteration of the Modern Mind," a dramatic shift in how we gather information and communicate with one another has touched off an era of rapid evolution that may ultimately change the human brain as we know it. "Perhaps not since early man first discovered how to use a tool has the human brain been affected so quickly and so dramatically," he writes. "As the brain evolves and shifts its focus towards new technological skills, it drifts away from fundamental social skills."...
To see how the Internet might be rewiring us, Small and colleagues monitored the brains of 24 adults as they performed a simulated Web search, and again as they read a page of text. During the Web search, those who reported using the Internet regularly in their everyday lives showed twice as much signaling in brain regions responsible for decision-making and complex reasoning, compared with those who had limited Internet exposure. The findings, to be published in the American Journal of Geriatric Psychiatry, suggest that Internet use enhances the brain's capacity to be stimulated, and that Internet reading activates more brain regions than printed words. The research adds to previous studies that have shown that the tech-savvy among us possess greater working memory (meaning they can store and retrieve more bits of information in the short term), are more adept at perceptual learning (that is, adjusting their perception of the world in response to changing information), and have better motor skills.
Of course, we already know about neurological plasticity. The argument that information technologies and our brains mold each other is also not new. Reading, Maryanne Wolf argues, turns us into reader-cyborgs (my term, not hers): over time, neural plasticity and technology work to create a human-textual (or human-alphabetic) symbiosis. Kristof Nyiri writes about how "thinking with a word processor" differs from thinking with pen and paper. (Andy Clark's work, particularly his Natural Born Cyborgs, is also a good tool for getting at these issues.)
The interesting new thing is the question of how this will affect humans over the longer run, and whether we can talk about it as evolution.
Small says these differences are likely to be even more profound across generations, because younger people are exposed to more technology from an earlier age than older people. He refers to this as the brain gap. On one side, what he calls digital natives—those who have never known a world without e-mail and text messaging—use their superior cognitive abilities to make snap decisions and juggle multiple sources of sensory input. On the other side, digital immigrants—those who witnessed the advent of modern technology long after their brains had been hardwired—are better at reading facial expressions than they are at navigating cyberspace. "The typical immigrant's brain was trained in completely different ways of socializing and learning, taking things step-by-step and addressing one task at a time," he says. "Immigrants learn more methodically and tend to execute tasks more precisely."
The last trace of a Mercedes dealership, High and Hamilton, Palo Alto.
In the Chronicle:
Wal-Mart, the nation’s largest private employer, long criticized for its workplace policies, is a “more-honest employer” of part-time workers than colleges that employ thousands of adjunct faculty members. That was the harsh message delivered to a group of college human-resources officials here on Monday by one of their own: Angelo-Gene Monaco, associate vice president for human resources and employee relations at the University of Akron....
“We helped create a highly educated part of the working poor, and it’s starting to get attention from outsiders,” he said, noting that unions are trying to organize part-timers, and lawmakers in nearly a dozen states are examining the issue.... “We rely on them for a very important function, and we assume that they will continue to accept mistreatment in return.”
It's not at all unusual to see parents around the school-- unlike the schools I went to, where the only time you saw a parent on campus was when someone was in serious trouble.
Right before this picture was taken, my son walked by and asked me what I was doing there, but really had no particular interest in the answer. It was curious that I was there, but not strange enough to deserve more than a second's thought.
Balloon Juice has a very nice piece on Paul Krugman's Nobel Prize, and the long-term payoff of taking controversial stands that you really believe in. Indeed, that seems to be a theme this year: As Science reported (quoted in Balloon Juice):
"Fluorescent proteins have revolutionized medical research,” says oncologist and imaging expert John Frangioni of Harvard Medical School in Boston... [but if 2008 Nobel Laureate Osamu] Shimomura’s pursuit of jellyfish fluorescence were funded today, says [chemist Marc] Zimmer, it would be more likely to earn scorn than anything else.
Years ago, when I was helping the Institute look for new offices, I visited Gate 3, a "work club" across the Bay in Emeryville. It was a wonderfully cool space, and I really loved the vision: the space was part open office, part meeting space, and part members-only club, with a downstairs cafe and space for social events. Unfortunately, it was ahead of its time, and eventually it folded. (The creators of Gate 3 seem to be trying to bring the idea back in North Carolina.)
The idea of offices for drop-in work has continued to fascinate me, though it seems clear that they're hard to get off the ground. So I was pleased to see that Ophelia Chong (who is probably the only person who'd think to work Cole Porter lyrics into a piece on temporary workspaces) has a nice piece in 404 about an effort to create such spaces in Los Angeles.
Los Angeles is a city of re-invention and of hyphenates...Our resumes can be compared to layers upon layers of paint that is never allowed to dry, because we are constantly changing the perception of who we are.
Our definition of employment is about re-invention as well; we are historically a nomadic work force, and because of this our freelance workforce is the highest in the country, 36-38%, almost 20% higher than the rest of the country. We are nomads that travel from village to village selling our wares and services, client to client with a laptop in tow....
In the new economy the idea of full time employment has moved towards working on a series of projects as a subcontractor; in Los Angeles we are more accustomed to this form of employment than most of the country, which is why BLANKSPACES does not have to explain its purpose, we get it.
Christopher Buckley has endorsed Barack Obama, and as always he's thoughtful and funny about it:
Let me be the latest conservative/libertarian/whatever to leap onto the Barack Obama bandwagon. It’s a good thing my dear old mum and pup are no longer alive. They’d cut off my allowance.
Or would they? But let’s get that part out of the way. The only reason my vote would be of any interest to anyone is that my last name happens to be Buckley—a name I inherited. So in the event anyone notices or cares, the headline will be: “William F. Buckley’s Son Says He Is Pro-Obama.” I know, I know: It lacks the throw-weight of “Ron Reagan Jr. to Address Democratic Convention,” but it’ll have to do.
I write about people, technology, and the worlds they make.
I'm a senior consultant at Strategic Business Insights, a Menlo Park, CA consulting and research firm. I also have two academic appointments: I'm a visitor at the Peace Innovation Lab at Stanford University, and an Associate Fellow at Oxford University's Saïd Business School. (I also have profiles on LinkedIn, Google Scholar and Academia.edu.)
I began thinking seriously about contemplative computing in the winter of 2011 while a Visiting Researcher in the Socio-Digital Systems Group at Microsoft Research, Cambridge. I wanted to figure out how to design information technologies and user experiences that promote concentration and deep focused thinking, rather than distract you, fracture your attention, and make you feel dumb. You can read about it on my Contemplative Computing Blog.
My book on contemplative computing, The Distraction Addiction, will be published by Little, Brown and Company in 2013. (It will also appear in Dutch and Russian.)
My latest book, and the first book from the contemplative computing project. The Distraction Addiction is published by Little, Brown and Co. You can find it at your local bookstore, or order it through Amazon, Barnes & Noble or IndieBound.
My first book, Empire and the Sun: Victorian Solar Eclipse Expeditions, was published by Stanford University Press in 2002 (order via Amazon).