I have a day with no meetings. Owing to the combination of the Institute being a pretty meeting-driven place, and my own distracting sociability, this is a rare thing. Not one to be wasted.
Everyone loves groups. What's better (in America at least) than being part of a "team"? Collaboration is cool. (Is there a word that's been rehabilitated more completely than "collaboration"? Fifty years ago, someone who "collaborated" wasn't a good person, but a traitor.) Collective intelligence is the solution to the world's problems. Smart mobs are... mobbish, perhaps, but also smart, and that's what matters.
Groups are powerful... but for all their power, they're also fragile. University of Washington academics Will Felps and Terence Mitchell constructed a very interesting experiment to show just how fragile they are, by demonstrating the effect of "bad apples" on the effectiveness of small groups.
Groups of four college students were organized into teams and given a task: make some basic management decisions in 45 minutes. To motivate the teams, they were told that whichever team performed best would be awarded $100 per person. What they didn't know, however, was that in some of the groups, the fourth member of the team wasn't a student. He was an actor hired to play a bad apple, one of these personality types:
- The Depressive Pessimist will complain that the task that they're doing isn't enjoyable, and make statements doubting the group's ability to succeed.
- The Jerk will say that other people's ideas are not adequate, but will offer no alternatives himself. He'll say "you guys need to listen to the expert: me."
- The Slacker will say "whatever", and "I really don't care."
The conventional wisdom in the research on this sort of thing is that none of this should have had much effect on the group at all. Groups are powerful. Group dynamics are powerful. And so groups dominate individuals, not the other way around. There's tons of research, going back decades, demonstrating that people conform to group values and norms.
But Felps found the opposite.
Invariably, groups that had the bad apple would perform worse. And this despite the fact that there were people in some groups who were very talented, very smart, and very likeable. Felps found that the bad apple's behavior had a profound effect -- groups with bad apples performed 30 to 40 percent worse than other groups.
A paper describing the experiment, "How, when, and why bad apples spoil the barrel: Negative Members and Dysfunctional Groups," is available as a PDF.
Thanks to Mathias for the link.
Sequestered myself in Cafe Zoë in an attempt to make some more progress on a report I need to get to clients.
Write write write write....
I'm at home today, as my daughter came down with strep. When I picked her up from school yesterday afternoon, she was on the couch in child care, looking pretty drained. We spent part of last night at the pediatrician's, getting her and her brother swabbed, and dosed up with amoxicillin.
She woke up today and was pretty out of it. Her brother insisted that he was at death's door, until he remembered that his class was going ice skating today. Then all of a sudden: Miracle Recovery!
He tested negative last night, seemed no worse than usual. Since I know Elizabeth will rest better if she's alone (obviously I'm here; she's alone in the same way nobility are alone when servants are still in the room), I decided to take him into school.
Elizabeth is now on the couch. She watched Nausicaä of the Valley of the Wind this morning, and is now on to The Cat Returns. She likes Hayao Miyazaki in normal times, but for some reason, when she's under the weather, escapist movies featuring young female heroines especially appeal to her. Just one of those inexplicable girl things.
Fortunately, she's old enough, and independent enough, and also not sick enough, for me to actually be able to work.
I'm spending the morning at Cafe Zoë, writing to a lot of people. I never expected, when I started working as a futurist, that I would have to calculate what time it was in Beijing and Budapest, and make sure to get some e-mails out while people are still in their offices or awake. But that's my life these days.
I've been coming to this cafe for a couple years now (actually, a quick check of my external memory-- aka the blog archive-- reveals it's been four years and one month), and this morning I discovered a new function. I got to the counter, realized I didn't have any money, and apologized and told them I'd be back.
"It's okay," the owner said. "You can owe us. It's not the first time you're here." She pulled out a book with IOU on the front-- I guess there are plenty of people who come here a little absent-minded-- and wrote down my order.
It makes perfect sense. Unless I want to never come back here, I'm good for the $3.60. And they want to keep me as a regular customer, so it's a reasonable risk for them.
Fortunately they seem to be doing pretty well, despite the downturn: there are a core group of us who are here regularly, and they seem now to have multiple clienteles at different times of day: stroller jogger moms in the morning, people coming in for lunch, freelancers and people who work from home but don't want to work at home (hello!), and people from nearby businesses, popping in for a cup of coffee. It's a real slice of the neighborhood, and very nice to see.
The Institute's new future of science Web site is now live. For the last couple years we've been running the project under the name X2-- an historical reference to the X Club, a group I've long found fascinating-- but we've updated the name to Signtific, and rolled out a new, much more user-friendly Web site.
No time to stop and relax, though. We've also nearly finished development of a custom version of the online mapping tool that I started using last year (here are copies of my paper spaces and end of cyberspace maps, for example), which promises to be pretty amazing. So no rest for the wicked.
I've been working on a think-piece on the future of futures work. (It's an expansion of questions I started asking in my piece on design and futures.) It's organized around a simple question: If you were to invent a discipline of futures and forecasting today, organized to deal with today's problems, and drawing on current science, what would it look like? Would it be just like the field today? Would it look for weak signals, produce roadmaps and scenarios, and seek to influence strategy and policy?
I suspect the answer is no. No, I'm confident-- using the term as Robert Burton would warn it should be used-- that the answer is no. Now I'm trying to explain where I think the field will, or ought to, go.
One of the things I'm thinking through is the role of expert knowledge and accountability in futures work. We claim to be experts about a bunch of things, most notably about how to think about the future in ways that can better inform the present. But the work of Philip Tetlock (which I've mentioned before) suggests that claims of expert knowledge, particularly when it comes to dealing with the future, are highly suspect.
Tetlock's argument is nicely summarized by Louis Menand in a New Yorker review:
It is the somewhat gratifying lesson of Philip Tetlock’s new book, “Expert Political Judgment: How Good Is It? How Can We Know?” (Princeton; $35), that people who make prediction their business—people who appear as experts on television, get quoted in newspaper articles, advise governments and businesses, and participate in punditry roundtables—are no better than the rest of us. When they’re wrong, they’re rarely held accountable, and they rarely admit it, either. They insist that they were just off on timing, or blindsided by an improbable event, or almost right, or wrong for the right reasons. They have the same repertoire of self-justifications that everyone has, and are no more inclined than anyone else to revise their beliefs about the way the world works, or ought to work, just because they made a mistake. No one is paying you for your gratuitous opinions about other people, but the experts are being paid, and Tetlock claims that the better known and more frequently quoted they are, the less reliable their guesses about the future are likely to be. The accuracy of an expert’s predictions actually has an inverse relationship to his or her self-confidence, renown, and, beyond a certain point, depth of knowledge. People who follow current events by reading the papers and newsmagazines regularly can guess what is likely to happen about as accurately as the specialists whom the papers quote. Our system of expertise is completely inside out: it rewards bad judgments over good ones.
Tetlock got a statistical handle on his task by putting most of the forecasting questions into a “three possible futures” form. The respondents were asked to rate the probability of three alternative outcomes: the persistence of the status quo, more of something (political freedom, economic growth), or less of something (repression, recession). And he measured his experts on two dimensions: how good they were at guessing probabilities (did all the things they said had an x per cent chance of happening happen x per cent of the time?), and how accurate they were at predicting specific outcomes. The results were unimpressive. On the first scale, the experts performed worse than they would have if they had simply assigned an equal probability to all three outcomes—if they had given each possible future a thirty-three-per-cent chance of occurring. Human beings who spend their lives studying the state of the world, in other words, are poorer forecasters than dart-throwing monkeys, who would have distributed their picks evenly over the three choices.
Tetlock also found that specialists are not significantly more reliable than non-specialists in guessing what is going to happen in the region they study. Knowing a little might make someone a more reliable forecaster, but Tetlock found that knowing a lot can actually make a person less reliable. “We reach the point of diminishing marginal predictive returns for knowledge disconcertingly quickly,” he reports. “In this age of academic hyperspecialization, there is no reason for supposing that contributors to top journals—distinguished political scientists, area study specialists, economists, and so on—are any better than journalists or attentive readers of the New York Times in ‘reading’ emerging situations.” And the more famous the forecaster the more overblown the forecasts. “Experts in demand,” Tetlock says, “were more overconfident than their colleagues who eked out existences far from the limelight.”
The obvious questions are: how relevant is this work to what we futurists do? And are our current attempts to explain that no, we can't predict the future, but our work is still valuable, sufficient in the light of work like Tetlock's?
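Tetlock's first measure, calibration, is easy to make concrete: for every probability an expert assigns, did events given an x per cent chance actually happen about x per cent of the time? Here's a minimal Python sketch of that check, along with a Brier score (mean squared error between stated probability and outcome); the forecast data is invented for illustration:

```python
# A sketch of the calibration check Menand describes. Each forecast is a
# (assigned_probability, happened) pair; the numbers below are made up.
from collections import defaultdict

def calibration(forecasts):
    """For each probability level, return the observed frequency of the event."""
    bins = defaultdict(list)
    for prob, happened in forecasts:
        bins[round(prob, 1)].append(1 if happened else 0)
    return {p: sum(v) / len(v) for p, v in sorted(bins.items())}

def brier(forecasts):
    """Mean squared error between stated probability and outcome (0 or 1)."""
    return sum((p - (1 if h else 0)) ** 2 for p, h in forecasts) / len(forecasts)

# Ten forecasts made at 80% confidence, eight of which came true:
forecasts = [(0.8, True)] * 8 + [(0.8, False)] * 2
print(calibration(forecasts))  # {0.8: 0.8} -- perfectly calibrated
```

A well-calibrated forecaster's dictionary maps each probability to roughly itself; Tetlock's experts did worse than assigning a flat one-third to every outcome.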
On my way to Tampa! I have copies of "Miami Vice" and "Wild Things" on my iPhone for cultural reference. And some Carl Hiaasen. If Florida isn't like it is in the movies and books, I'll be really bummed.
I leave tomorrow for the Association of University Research Parks winter conference, in St Petersburg, Florida.
This is the first time I've traveled anywhere with my iPhone, and already it's having an impact. Rather than putting the address or phone number of the hotel in my trusty Moleskine notebook, I put the hotel, Supershuttle, airline, and a couple local art museums in my address book, and created a new group called "Alex's Current Trip." I figure whenever I go somewhere, I can fill it with local stuff. It should be handy.
I also find myself doing two things differently when I create addresses. First, I grab the complete address, not just enough to tell a cab driver. And second, I don't bother to copy the directions. Why? Because I figure that I'll use the map program and built-in GPS to generate directions when I'm on the ground. But to do that, I need good (i.e., comprehensive) street address information. Thanks to the map program, my personal economy of information has changed. I don't need directions. I need the information that will help me generate accurate directions.
I'm staying at the Renaissance Vinoy, which is one of the few hotels to have a marina, golf course, AND tennis. Not that I'll use anything more sophisticated than a bar or hot tub. And for some reason the pictures remind me of the Hollywood Tower Hotel. It's within walking distance of two decent-looking museums (alas, the Salvador Dalí museum is not one of them), though I'm not sure I'll have time to swing by either one. But I know they're there.
One thing I wish I could do with iCal is set up an event that has several different dates associated with it. So, for example, if I'm going on a business trip, I'd like an event (or a reminder) a week before that says "Take everything to the dry cleaner / shoe repair." Five days before, "Read c.v.s of people you're meeting." Two days before, "Find suitcase and do laundry." The day before, a whole slew of things: pack clothes, print out confirmations, check weather, etc., etc.. I don't want to have to create these; I want them to be automatically generated when I create a trip.
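The trip-relative reminders I have in mind are easy to sketch. Here's a minimal Python illustration; iCal itself isn't involved, this just computes the reminder dates from a departure date, and the task list is simply the one above:

```python
# A sketch of auto-generated trip reminders: each task fires a fixed number
# of days before departure. The tasks and offsets are from the post above;
# the departure date below is just an example.
from datetime import date, timedelta

CHECKLIST = [
    (7, "Take everything to the dry cleaner / shoe repair"),
    (5, "Read c.v.s of people you're meeting"),
    (2, "Find suitcase and do laundry"),
    (1, "Pack clothes, print confirmations, check weather"),
]

def trip_reminders(departure):
    """Return (date, task) pairs, earliest first, for a given departure date."""
    return [(departure - timedelta(days=d), task) for d, task in CHECKLIST]

for when, task in trip_reminders(date(2009, 2, 9)):
    print(when.isoformat(), "-", task)
```

Creating a "trip" event would just mean running this over the departure date and dropping each pair into the calendar as its own reminder.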
[Reposted from my Red Herring blog, 2005.]
Whatever happened to the paperless office? A decade ago, futurists and pundits were confident that personal computers, CD-ROMs, and the Internet would render books and magazines obsolete, turn paper money and checks into curiosities, and bring about the paperless office. Of course, none of these predictions has come true. Books and magazines are still around, and while total paper use has declined in the last couple years, offices use more paper now than before personal computers became commonplace.
What this suggests is that the relationship between the paper and electronic worlds is more complicated than we first thought. On its own, this is hardly surprising: it's a truism that we overestimate the magnitude of technological changes in the near term, and underestimate them in the long run. But understanding why early predictions about the death of paper haven't come true will help us map out some of the possible futures of paper in the coming age of pervasive computing. This is a world in which computers are small and cheap enough to be embedded in virtually any built object; information can be associated with everyday objects and places; and networking technology allows devices to communicate and cooperate on behalf of their owners.
If paper was supposed to be made obsolete by the personal computer, what future could paper have in a world in which computers fly off the desktop and are everywhere, in everything?
[Reposted from my Red Herring blog, 2005]
When modern architecture emerged in the first years of the last century, it threw down a gauntlet at the feet of traditional neoclassical and academic architecture. Modernism's style was stripped-down and functional. It celebrated the beauty of machines and the art of engineering, and expressed itself in concrete and steel, rather than brick and wood. Most important, it declared that the future would never again look like the past: from now on, architecture would be about innovation and change, not about working with timeless principles and eternal proportions.
Implicitly at first, and then consciously, architectural exhibits became predictions. Buckminster Fuller's Dymaxion house, first exhibited in 1927, exemplifies how modern architecture backed into the futures business. The Dymaxion house was a hexagonal structure, suspended from a central load- and services-bearing column. Virtually everything in it was made of aircraft-grade metal. The house wouldn't be built on-site, like traditional houses; instead, it would be mass-produced, like cars or cans of peas, and delivered to owners.
Soon "the home of the future" became a stock element of every architectural exhibit, World's Fair, forward-looking corporate display, or popular magazine special issue. (Even World War II couldn't derail them: a 1943 brochure showed a couple admiring a neighborhood of modern houses under the caption, "After total war can come total living.") Sporting automated kitchens, robot butlers, furniture that you washed with a high-pressure hose, and helipads (the long, sad story of why we don't have personal helicopters or jet packs will have to wait for another time), these houses were sleek temples of convenience, promises of a world in which the home would be as frictionless and worry-free as a department store.
Of course, almost none of this has come to pass. Instead, the "home of the future" projects serve as textbook examples of how you can get the future wrong, and why.
It's not at all unusual to see parents around the school-- unlike the schools I went to, where the only time you saw a parent on campus was when someone was in serious trouble.
Right before this picture was taken, my son walked by and asked me what I was doing there, but really had no particular interest in the answer. It was curious that I was there, but not strange enough to deserve more than a second's thought.
Years ago, when I was helping the Institute look for new offices, I visited Gate 3, a "work club" across the Bay in Emeryville. It was a wonderfully cool space, and I really loved the vision: the space was part open office, part meeting space, and part members-only club, with a downstairs cafe and space for social events. Unfortunately, it was ahead of its time, and eventually it folded. (The creators of Gate 3 seem to be trying to bring the idea back in North Carolina.)
The idea of offices for drop-in work has continued to fascinate me, though it seems clear that they're hard to get off the ground. So I was pleased to see that Ophelia Chong (who is probably the only person who'd think to work Cole Porter lyrics into a piece on temporary workspaces) has a nice piece in 404 about an effort to create such spaces in Los Angeles.
Los Angeles is a city of re-invention and of hyphenates...Our resumes can be compared to layers upon layers of paint that is never allowed to dry, because we are constantly changing the perception of who we are.
Our definition of what employment is about re-invention as well, we are historically a nomadic work force and because of this our freelance workforce is the highest in the country, 36-38%, almost 20% higher than the rest of the country. We are nomads that travel from village to village selling our wares and services, client to client with a laptop in tow....
In the new economy the idea of full time employment has moved towards working on a series of projects as a subcontractor, in Los Angeles we are more accustomed to this form of employment than most of the country, which is why BLANKSPACES does not have to explain it's purpose, we get it.
I'm settled in at the Omni Hotel, in lovely downtown Philadelphia. Actually, I'm not kidding: I'm across the street from Independence Park, near Independence Hall, the Philosophical Society, and other monuments of early American history.
I'm going to spend part of the morning with friends from school, then head back downtown to the Chemical Heritage Foundation. See the room I'm working in, rest up, then workshop time-- third one in a week, which I think is a personal record.
I'm in a limo, headed from the Toronto airport-- the big international one-- to Waterloo. It should be about an hour's drive.
I slept some on the flight, but very little. I worked on the fine details of my workshop for this evening, tweaking the questions a little bit and adjusting the process. Since I'll get to Waterloo around 4:15, and my workshop is at 7, I'm basically on.
Just passed a sign with distances to Waterloo and London. It took me a second to realize that 1) the distances were in kilometers, and 2) since this is Canada, I shouldn't get too excited at the idea of being 170 anythings from "London."
I think this is only my third trip to Canada. I don't remember the first trip. One of my great uncles was Korean ambassador to Canada for a few years in the 1960s, and we came up to visit him when I was two or three. The next time was just over 15 years ago, when I came up to Toronto for a history of science conference. And now I'm here again. Kind of strange that I should have been to England more times in my life than Canada, given how much closer it is.
It looks a lot like the States, except place names are resolutely English: Winston Churchill Boulevard, Kitchener, Trafalgar. (Are there any places in Canada or Australia-- not to mention Botswana, India, or Trinidad-- named Blair or Thatcher?) I wonder if that'll feel strange to me, or be an interesting hybrid state.
I'm on the SuperShuttle to SFO. It's the middle of the night-- I got about 90 minutes of serious sleep before I had to get up and get ready for the ride-- and I'm now on my way to Canada and South Africa.
First I'm in Waterloo, Canada for three nights, attending a conference on science in the 21st century at the Perimeter Institute; then I'll be in Johannesburg from the 12th to the 14th for the International Association of Science Parks annual meeting.
I'm doing workshops at both of them, and both promise to be very interesting events. And I've never been to Africa, so it'll be interesting to see even the little bit of South Africa that I'll see from the conference.
However, no matter how cool the journey, 3:30 a.m. is a brutal time to start it.
I'm trying to finish a conclusion to a big report, and often find that I think better when I stand.
There's definitely something about writing on a big space that is psychologically different from writing on a piece of paper. And when you're standing it's easier to pace around, look at things from different angles, and throw a lot of ideas up on the wall.
I'm back in my hotel, after the workshop at NUS. The workshop went quite well: it was an excellent group, and we got some very good ideas and scenario work out of them.
For me, these things are exhausting. Not only does each one require several days of prep but they demand a full day of being ON, which is pretty draining. In the room you have to be hyperactively engaging, listen carefully to everyone, draw people out, convince the skeptics, synthesize the conversations, etc., etc.. Plus beforehand you've got to think like an events planner (should these tables be moved? do we have enough water? will the air conditioner make too much noise?) and roadie (how do I move these tables?).
And before that, you've got to plan out every step of the day-- not so much with the expectation that you can operate the day with military precision, but to give you a clear enough sense of what you're doing to make it possible for you to successfully improvise when something unexpected happens (like when you're scheduled to restart at 1:30, but the waiters only bring out the main lunch course at 1:20).
Even for me, who was described by a college housemate as having two emotions, on and off (she later added a third, strobe), it requires a lot of energy.
But I really like doing these workshops-- not because they're easy, but precisely because they're hard work, and several different kinds of work. The technology for supporting them is changing rapidly, and there are some huge opportunities to do interesting new things. And a good workshop has some of the best of teaching, which I think I'll always regard as the noblest of activities.
I'm going to rest up for a bit, then go have dinner at a Japanese restaurant in CHIJMES.
Since we moved into our house in 2001, we've used part of the garage as a home office. Actually, functionally speaking much of the house is a home office at one time or another, but my desk and books are in the garage. Some of my books, at least: I've long had more books than is good for me, and not enough space for them, so at least half of them have been in a storage shed or the Institute. (An occupational hazard: my father and stepmother have a two-story octagonal library in their house, and have also filled the basement with books!)
I've long dreamed of having enough space for all my books. A couple weekends ago, we went to Ikea and bought some shelving. We bought it right before I went to Europe, so we didn't get it assembled before I left; but on Saturday we got it built. Finally, I've got space for all my books. I've got to put two rows on each of the shelves, but I've had to do that since Berkeley, so I'm used to it.
my daughter alphabetizing books, via flickr
So now I have bookcases and working space on three sides: the armoire, the new tall bookcases beside those, and the short white bookcases forming the other arm of the U. Heaven.
my son in my new intellectual control center, via flickr
I'll spend the next few days happily alphabetizing the books, then figuring out the ideal way to arrange them around me. Actually, I'm not likely to ever find an ideal system; I'll keep reorganizing them forever, as projects come and go.
Update: A Finnish friend informs me that the design for the Ikea bookcases I just bought is, shall we say, an homage to bookcases long sold by a Finnish company, Lundia. Their Web site doesn't seem to have an English section, but their designs-- particularly their chairs-- look edgier than most Ikea furniture these days. Maybe the difference is that Ikea design, for all its Swedish origins, is now a generic global modern, manufactured in and designed to appeal to buyers in China and Copenhagen alike, while Lundia's is more purely Finnish.
My day starts in earnest now. I never got back to sleep, so I spent a couple hours doing e-mail and reading, and making sure my various alarms work. (They do.)
I'm meeting someone at 9 (in a couple minutes), then another person at 10.
I actually had quite a good conversation last night at the pub-- we spent a while talking about an article I'm supposed to be writing about the future of futures, and it was one of those drunken states in which you manage to think through a bunch of things all of a sudden. Incredibly, I pretty much remember it all. Usually it's only a plane ride or gigantic amount of coffee that puts me in that state.
I finished up things at the National Academies, and am on the 6:00 Acela to New York.
As a friend of mine put it, the Acela rocks. It's basically a nice European train, which makes sense, since this is just about the only part of the country that could support train service of this sort. And yes it's expensive, but the Keck is about 7 minutes from Union Station, and my hotel in New York is two blocks from Penn Station; so even though JetBlue or the Delta shuttle is cheaper, once you figure the time and cost of getting out to Dulles or Reagan, up to JFK or LaGuardia, and then back into midtown, it's easily a wash.
Today's meeting was pretty good. We got a lot of useful criticism, which from a group of very smart scientists and VCs is what you want. If you just get faint praise, or worse yet no reaction at all, you know you're in really serious trouble. Only really promising projects are worth tearing into.
Tomorrow I'm spending the morning in New York. I'm meeting a friend who's an IP lawyer, a hedge fund guy, and a collaboratories designer, and by a remarkable set of coincidences, they all work within view of (or literally within) the New York Public Library.
One of those strange things.
Then I'm back to Philadelphia, and on the plane home.
I'm in 30th Street Station, waiting for my (now delayed) train to Washington DC. This is not an unfamiliar situation: I spent a lot of time in 30th Street Station when I was living here, as it was my portal back home to Virginia, up to Boston to the MIT archives, or other points along the Northeast Corridor.
I'm on Flight 188, about half an hour outside Philadelphia. I worked for a while, napped fitfully, then woke up again and am doing some more stuff.
Not quite long enough a flight to enter a deep version of the Airplane Creative Zone-- some of my best ideas seem to come to me on the long overnight flights to Europe-- but I did make some headway in an article I'm writing for one of my colleagues at Oxford, on a future of futures. Essentially I'm trying to lay out what our work would look like if we were to create the field from scratch, and took into account what brain scientists and psychologists have learned in the last twenty years about the way people think about the future.
I'm at SFO, about to catch United 188 to Philadelphia. I'm on a slightly crazy trip this week. I'm in Philadelphia tomorrow, meeting with people at the Chemical Heritage Foundation; Wednesday I'm in Washington, for a National Academies meeting; Thursday I'm in New York, to meet with various people at the New York Public Library and elsewhere.
Except for dinner with my brother, it's all future of science-related, all the time. The project has pretty much taken over my life, which is just what I wanted to have happen.
As is my wont, I'm on the redeye, and will step off the plane and into a full day of meetings. I'm going to spend as much of the flight as I can refining the talk I'm giving in Washington (I'm nothing if not predictably obsessive about these things), as the rest of my trip just requires being sharp and interesting. And while I tell myself I do this mainly to prove how much of a road warrior I am-- and how young-- the fact is, I prefer to have the few extra hours with my kids than to spend an extra night on the road. Perhaps when they're older none of us will feel like this is so valuable, but for now it definitely is. I suspect the kids think so, too.
I made it to the airport in twenty minutes, and remembered my travel mug this time (I forgot it when I went to Malaysia and Singapore). So so far, things are going well.
I think with this trip I'll get into 100K territory on my frequent flyer miles. This year I'm probably spending close to two months on the road-- broken up into several big trips and lots of little ones, but still, the days add up. I'm already taking the kids to Europe this summer, but I should think about another trip with them. I feel like they're not traveling enough. By the time I was my daughter's age, I'd spent two years in Brazil, and been to Korea once; of course, my parents were divorced, so things kind of balance out.
I'm going to Oxford this summer for the workshop on imagining business. I'll be talking about "paper spaces," the large, often room-sized roadmaps, timelines, and other documents the Institute uses in its workshops.
I've put a PDF of the paper online; I may experiment with putting a copy on Google Docs, and using Zotero to manage the citations (though that seems iffy, given that I often write pretty long footnotes). Whatever environment I use, the piece is likely to undergo substantial revision over the next couple months, as I know there are a couple parts of the argument I want to expand. Here's the introduction:
This article is about paper spaces: room-sized maps, timelines, and charts used to develop, record and share ideas. When used in collaborative work, paper spaces support both focused, creative activity—the creation of a strategy roadmap, the outlines of a software development project, etc.—and informal social goals, like the development of a sense of community or common vision. These are essentially very large pieces of paper, but the term "paper spaces" (borrowed from computer-aided design) highlights several things. First, we're used to thinking of things made of paper as physical objects whose qualities help shape the experience of reading, but it's useful to pay attention to their spatial and architectural qualities as well. Large visuals aren't just things: they're spaces that possess some of the qualities of desks or offices. IFTF workshops exploit their scale and physicality to promote social activity between workshop participants. In this case, the spatiality of paper is fairly self-evident; but many of our interactions with paper, books, and writing have a spatial quality. Scholars could gain much by analyzing print media using conceptual tools from architecture, design, and human-computer interaction, as well as literary theory and book history.
Second, studying paper spaces helps us understand the role that visualizations play in contemporary organizations. Historians have used studies of visual media and visual thinking to expand our understanding of science, technology, and other fields. The business world is supersaturated with visualizations—everything from advertisements, to PowerPoint presentations, to org charts, to brands, to workflows and flow charts—and studying those images could bring similar benefits. At the same time, it warns us against taking too passive or formal a view of visual tools in business, of treating them like paintings on a wall. In the way users interact with them-- they're annotated, extended, argued over, and played with-- they're more like Legos than landscapes. The process of creating maps, and the maps themselves, both reflect a set of attitudes about how to understand and prepare for the future, one that emphasizes user involvement, and the need for actors to develop and possess shared visions of the future. Finally, the term "paper spaces" highlights their hybrid, ephemeral quality. They work because they're simultaneously interactive media and workspace, but their lives are brief and easy to overlook: they are designed to support idea- and image-making, but leave little trace of themselves.
To illustrate how paper spaces work, this article will focus on their use in a specific context: in expert workshops and roadmapping exercises conducted at the Institute for the Future (IFTF), a Silicon Valley-based think-tank. The article begins with an overview of information spaces, and a brief look at IFTF's local culture and research practices. Next, it looks in detail at our expert workshops and facilitated exchanges, and describes how they're organized, what they aim to accomplish, and how they work. It then discusses how paper spaces support the co-creation of knowledge about the future, and a sense of group solidarity. Finally, it argues that paper spaces are ubiquitous: most of our interactions with texts and other media have a spatial dimension that affects the ways we read, think, and create.
The piece is currently a relatively svelte 5000 words long; I figure it'll hit 6000-7000 before I'm done. There are two big things I still have to do.
First, I have to build out the discussion of how working with (or in) paper spaces generates group solidarity, or a sense of common identity and purpose among participants.
Second, I hadn't planned on doing this, but my experience working with ZuiPrezi has made me think I should make explicit something I had planned to leave implicit: that the paper spaces I describe will become extinct in the foreseeable future. When I was in Malaysia, I used ZuiPrezi in one of my workshops, and it was a terrific experience; and it leads me to believe that we're not far off from being able to replicate most, if not all, of the social functionalities of paper spaces in digital, projected tools. Thinking about what has made paper spaces work well has been essential for making them obsolete, and I think I'm going to add a section explicitly laying out what a digital system has to do in order to work as well as paper.
In my sound bite, I reveal that I like paper because it's harder for me to break paper than the screen on my Nokia N95.
I played the piece for my kids this morning before I took them to school. At the end of it, my son came up to me and said, "You know, Dad, you really do drop your stuff a lot." Gee, thanks kid.
[To the tune of Handsome Boy Modeling School, "The Projects (PJays)," from the album "So...How's Your Girl?".]
I've been in Malaysia and Singapore this week, conducting workshops on the future of science and innovation. It's been a very interesting week, talking to scientists in Penang and Kuala Lumpur about the future of science, and what role they see Malaysia playing in that future. The people I've been talking to are pretty convinced that Malaysia, which has a respectable but not world-class scientific community, can evolve into a global player in science in the next couple decades. They don't want to emulate American and European institutions: you won't see multi-billion dollar particle accelerators here any time soon. But they're pretty aware that cloud computing, cheap genomics, and other inexpensive research tools will lower the economic barriers to developing world-class competence in some important fields.
So I was especially struck by Gregg Zachary's latest column in the New York Times, which asks, "might cheap science from low-wage countries help keep American innovators humming?" At least a few policy analysts and scholars studying global trends in science think that the United States can profit from the growth of scientific excellence in the developing world.
Americans have long profited from low-cost manufactured goods, especially from Asia. The cost of those material “inputs” is now rising. But because of growing numbers of scientists in China, India and other lower-wage countries, “the cost of producing a new scientific discovery is dropping around the world,” says Christopher T. Hill, a professor of public policy and technology at George Mason University.
American innovators — with their world-class strengths in product design, marketing and finance — may have a historic opportunity to convert the scientific know-how from abroad into market gains and profits. Mr. Hill views the transition to “the postscientific society” as an unrecognized bonus for American creators of new products and services.
Mr. Hill’s insight, which he first described in a National Academy of Sciences journal article last fall, runs counter to the notion that the United States fails to educate enough of its own scientists and that “shortages” of them hamper American competitiveness.
The opposite may actually be true. By tapping relatively low-cost scientists around the world, American innovators may actually strengthen their market positions....
Precisely because the gap between basic science and commercial innovations is large, Mr. Hill’s postscientific society makes sense to innovators on the front lines. One implication for the future is that the United States “won’t have to import so many scientists,” says Stephen D. Nelson, associate director of policy programs at the American Association for the Advancement of Science.
The association, which for decades has generally favored policies to expand the ranks of American scientists, is devoting a portion of its annual policy seminar next month to talk about the “postscience” situation.
Industry, meanwhile, is adapting to a world where scientific goods can come from anywhere — and fewer scientists work on abstract problems unrelated to the market. “It is no accident that many corporate labs have fallen apart,” Sean M. Maloney, executive vice president of Intel, says. “They were science farms looking for problems.”
What is this post-scientific society that Hill writes about? As he explains it,
A post-scientific society will have several key characteristics, the most important of which is that innovation leading to wealth generation and productivity growth will be based principally not on world leadership in fundamental research in the natural sciences and engineering, but on world-leading mastery of the creative powers of, and the basic sciences of, individual human beings, their societies, and their cultures.
Just as the post-industrial society continues to require the products of agriculture and manufacturing for its effective functioning, so too will the post-scientific society continue to require the results of advanced scientific and engineering research. Nevertheless, the leading edge of innovation in the post-scientific society, whether for business, industrial, consumer, or public purposes, will move from the workshop, the laboratory, and the office to the studio, the think tank, the atelier, and cyberspace.
There are growing indications that new innovation-based wealth in the United States is arising from something other than organized research in science and engineering. Companies based on radical innovations, exemplified by network firms such as Google, YouTube, eBay, and Yahoo, create billions in new wealth with only modest contributions from industrial research as it has traditionally been understood. Huge and successful firms like Wal-Mart, FedEx, Dell, Amazon.com, and Cisco have grown to be among the largest in the world, not as much by mastering the intricacies of physics, chemistry, or molecular biology as by structuring human work and organizational practices in radical new ways. The new ideas and concepts that support these post-scientific society companies are every bit as subtle and important as the fundamental natural science and engineering research findings that supported the growth of firms such as General Motors, DuPont, and General Electric in the past half century. But innovation in these two generations of firms is fundamentally different.
The piece is well worth reading, as it has a number of provocative implications for science policy, innovation policy, and education. Essentially, Hill is arguing that a decline in America's monopoly on science-- even if that does happen-- is not to be lamented any more than the shrinking of the agricultural workforce: it doesn't reflect a weakness, but a more fundamental shift to a different kind of economy, in which the sources of value aren't facts, but what you do with them.
--and I've been up for a couple hours working. I tend to run on nerves on business trips, and this one is no different; combine that with the time difference, and it means I'm falling asleep at what for me are radically early times, and getting up before the crack of dawn.
Time for a shower.
[To the tune of Alanis Morissette, "Uninvited," from the album "City of Angels".]
At a recent Institute for the Future conference, Mike Chorost remarked that devices like the BlackBerry are basically designed to give us ADD, and one of the challenges we face in the future is to design tools that aren't quite so disruptive or addictive. In that vein, ABA Journal reports that a New York law firm has banned BlackBerries and smart phones in meetings.
A law firm in suburban New York City has banned electronic devices from major meetings to prevent distractions caused by cell phones and BlackBerrys.
The six-month-old "no-device policy" at the Long Island law firm of Meltzer, Lippe, Goldstein & Breitstone is intended to prevent even vibrations from incoming calls and e-mail messages from interrupting the flow of business....
At routine meetings, new guidelines allow participants to bring electronic devices but require them to step out into the hall when an essential call or e-mail demands an immediate response.
According to Newsday,
The "no-device policy" came about, says partner Ira R. Halperin, as the steady buzzes and vibrations signaling a new call or e-mail were increasingly interfering with meeting-goers' focus.
And you're not fooling anyone by trying to unobtrusively thumb out a response as you hold your BlackBerry under the table, says Halperin, co-head of the corporate law group, who admits to having been quite an offender himself.
At Slate, law blogger Philip Carter comments,
In my practice, and my work in/around government, I've seen this problem too. Big time. I'm certainly guilty of excessive BlackBerry usage. I even have colleagues (including some at Slate) who read their BlackBerries and thumb out messages while driving—a massive risk for them, and for their companies who may be held liable for anything that happens while they're reading/sending work e-mail.... I think we've gone too far—and that the quality of our counsel actually suffers because we are moving too fast and responding too quickly. We need to slow down.
I'm on my way to southern California tonight. I'll be there for a couple meetings at the National Academies.
I'm in San Jose airport, which I think is the most business-oriented airport I travel through. San Francisco has lots of tourists, as well as business people; Oakland I only see late at night when I'm doing the redeye to DC, and everyone looks the same at 11 PM. San Jose, in contrast, seems like it's 90% lawyers, Intel and Cisco people, and other high-tech types. Of course there are some tourists or families, but the proportion of people checking their Blackberries and talking on their Bluetooth headsets is much higher than SFO or OAK.
My flight is seriously delayed, but that just means I'm working on my talk in the airport rather than in my hotel room. Business travel is an odd combination of going somewhere, and ignoring your surroundings.
I don't think I'll be able to get to Disneyland, except possibly one evening between the first and second meetings. This is a shame, as I'm very fond of Tomorrowland, and consider it an essential destination for any futurist. There's nowhere else quite like it-- and certainly the future shows no sign of being like it.
[To the tune of Bruce Hornsby, "Every Little Kiss," from the album "The Way It Is".]
My son and a friend of his are having a playdate at my house this afternoon.
I'm taking this as an occasion to try out the new USB headset that just arrived, and to do some work on my article on paper spaces. I'm not quite as negligent as, say, Homer Simpson in "Treehouse of Horror," and I figure that so long as no one is crying and nothing is breaking, I probably don't need to involve or concern myself with what's going on.
[Blogged with Flock]
Atul Gawande has a terrific article in last week's New Yorker on an information technology that, after several years' testing, looks like it could transform intensive care. It's mainly been used in the reduction of line infections, which Gawande explains are
so common that they are considered a routine complication. I.C.U.s put five million lines into patients each year, and national statistics show that, after ten days, four per cent of those lines become infected. Line infections occur in eighty thousand people a year in the United States, and are fatal between five and twenty-eight per cent of the time, depending on how sick one is at the start. Those who survive line infections spend on average a week longer in intensive care.
This new technology was developed a few years ago by Johns Hopkins professor Peter Pronovost. After the first trial using it in a hospital,
The results were so dramatic that they weren’t sure whether to believe them: the ten-day line-infection rate went from eleven per cent to zero. So they followed patients for fifteen more months. Only two line infections occurred during the entire period. They calculated that, in this one hospital... [it] had prevented forty-three infections and eight deaths, and saved two million dollars in costs.
For years we've heard that information technology could solve some of the most intractable problems with our health care system, and this seems to make good on that promise. So what is this technology?
Not a gigantic database, or RFID tags in unconscious patients, or steerable needles (which boffins at UC Berkeley are now working on); but pieces of paper listing the steps you're supposed to take when doing something. You know what they are.
So why are they good-- good to the point of being able to save lots of lives and millions of dollars in an average hospital? Checklists offer
two main benefits, Pronovost observed. First, they helped with memory recall, especially with mundane matters that are easily overlooked in patients undergoing more drastic events. (When you’re worrying about what treatment to give a woman who won’t stop seizing, it’s hard to remember to make sure that the head of her bed is in the right position.) A second effect was to make explicit the minimum, expected steps in complex processes. Pronovost was surprised to discover how often even experienced personnel failed to grasp the importance of certain precautions. In a survey of I.C.U. staff taken before introducing the ventilator checklists, he found that half hadn’t realized that there was evidence strongly supporting giving ventilated patients antacid medication. Checklists established a higher standard of baseline performance.
Tools like checklists aren't just accidental media containing information; when you look at how they're used, they turn out to be aids to memory, objects that help standardize what can be chaotic practices. Under some circumstances, they're tools for diffusing practices and raising standards.
The power of checklists rests in their simplicity, particularly the simplicity of their use. Documents behave predictably. That predictability, I would argue, is in turn important for their incorporation into work practices. With a checklist, you can easily see which steps have been followed: it's a bit like how strips of paper in air traffic control centers serve as tools for tracking who has responsibility for a plane.
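The virtues Pronovost and Gawande describe-- memory aid, explicit minimum steps, visible completion state-- show up even in a toy model. Here's a minimal sketch in Python; the class and its methods are my own illustration, not anything from the article, and the steps loosely paraphrase the line-infection checklist Gawande describes:

```python
# A checklist is just an ordered list of steps plus a completion state
# that anyone can read at a glance -- that's the whole trick.

LINE_INFECTION_STEPS = [
    "wash hands with soap",
    "clean the patient's skin with antiseptic",
    "put sterile drapes over the patient",
    "wear sterile mask, hat, gown, and gloves",
    "put a sterile dressing over the insertion site",
]

class Checklist:
    def __init__(self, steps):
        self.steps = list(steps)
        self.done = [False] * len(self.steps)

    def check(self, step):
        """Mark a step as completed."""
        self.done[self.steps.index(step)] = True

    def remaining(self):
        """The steps not yet done -- the paper analogue of unticked boxes."""
        return [s for s, d in zip(self.steps, self.done) if not d]

    def complete(self):
        return all(self.done)

cl = Checklist(LINE_INFECTION_STEPS)
cl.check("wash hands with soap")
print(cl.remaining())   # the overlooked steps are immediately visible
print(cl.complete())    # False until every step is checked
```

The point of the sketch is that `remaining()` requires no judgment or memory: like glancing at a paper form, it makes the minimum expected steps, and any omission, legible to anyone in the room.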
This will probably be just a throwaway line in the book, or a paragraph at most, but I've been thinking a bit about RSIs and computer-related injuries as an example of the fractured manner in which we've tried to bridge the gap between the physical and digital worlds.
Of course, you can injure yourself carrying firewood, herding sheep, wrangling children, or doing a million other things in the real world. But as I understand it, people get RSIs when one of two things happens: either when computers (or more precisely, keyboards, mice, and monitors and their relationships to the body) force users to do something that their body objects to; or when computers remove a physical constraint that prevented users from performing the same action for a long time.
This isn't necessarily a problem caused by badly-designed computers. One of my colleagues sent around this bit (allegedly) from the New England Journal of Medicine:
A healthy 29-year-old medical resident awoke one Sunday morning with intense pain in the right shoulder. He did not recall any recent injuries or trauma and had not participated in any sports or physical exercise recently....
[H]e had bought a new Nintendo Wii (pronounced "wee") video-game system and had spent several hours playing the tennis video game.... In the tennis video game, the player makes the same arm movements as in a real game of tennis. If a player gets too engrossed, he may "play tennis" on the video screen for many hours. Unlike in the real sport, physical strength and endurance are not limiting factors.
The problem with the Wii isn't that it makes you do something really unnatural. But in the real world, few of us can play tennis for four or five hours straight; a Wiimote, in contrast, is light enough to make that possible.
The photo of a Cisco no-cubicle office in the recent San Jose Mercury News article set off my alarm bells, however. The no-cubicle environment in the picture is an ergonomic nightmare. I can’t believe the article didn’t discuss this downside to the wonders of the new office.
I called Lisa Voge-Levin, an ergonomic consultant who helps companies design healthy work environments, and asked her to look at the Cisco photo with me.... [She reported that the armchairs, lack of eye-level monitors, and absence of tables for drinks and accessories] contributes to neck and back injuries including muscle and tendon strain as well as such serious injuries as ruptured discs. She also notes that in such an environment, it is hard to control lighting, glare, or noise; all can lead to headaches.
The Mercury News has an article about Cisco's new open plan offices:
Like other valley stalwarts, including Intel and Sun Microsystems, Cisco is casting aside the cubicle culture that has thrived in the United States since the late 1960s. In its place, the company is embracing a new workplace design that saves space and money, and encourages collaboration among co-workers.
Cisco is not the first to forsake the cube. Younger companies such as Google and VMware have created open office spaces that still retain assigned seating. But as the valley's largest employer - and with 6 million square feet of South Bay real estate - Cisco's decision reflects a push for efficiency and a trend that emphasizes the bottom line.
"It's a competitive world," said John Scouffas, principal and designer for Gensler, the San Francisco-based architectural firm that's been redesigning workplaces in valley companies, including Cisco's. "Collaboration has been shown to spark innovation and speed product to market."...
[C]ubes are becoming dinosaurs.
"Cubes have had their day," said Michael Joroff, senior lecturer at the Massachusetts Institute of Technology's School of Architecture + Planning. "They were established at a time when work was done head down, by yourself. More and more, work is collaborative."
Clearly, the trend is gaining ground. Gensler architects confirmed they are also working with Intel, Intuit, Hewlett-Packard and Network Appliance. Already at Sun, about 56 percent of the workforce, or about 19,900 employees, work without assigned office space.
There's nothing wrong with the article per se, but haven't these arguments in favor of open space been around since at least 2000?
A little while ago, Kevin Kelly suggested that the habit of sitting at desks might be "a short-term anomaly" that we would abandon in the future. This got me thinking: what is the ergonomic history of writing and thinking? Five hundred years ago, what kinds of spaces did philosophers or essayists construct for themselves; how were they furnished; and how did they work in them? There are lots of pictures of scholars or saints at work-- Saint Jerome in his study and all that-- but how idealized are those? How well do they reflect what scholars actually did?
I asked Anthony Grafton what had been written on the subject, and he suggested, among other works, Gadi Algazi's 2003 article, "Scholars in Households: Refiguring the Learned Habitus, 1480–1550." It's a really excellent piece of work, and it'll resonate with anyone who ever writes within sight of children's toys, or revises articles on nap drives. (Perhaps it's no coincidence that Algazi's Web page mentions that he has three children!) Here's the abstract:
Until the fifteenth century, celibacy was the rule among Christian scholars of northwestern Europe. Celibacy was a major element of the codified cultural representation of the scholar and his specific way of life, sustained by peculiar institutional arrangements and daily routines. Founding family households implied therefore a major reorganization of the scholar’s way of life. Broadly speaking, this involved refashioning the scholarly habitus (understood as a system of durable and transposable social dispositions), redefining social relations, and developing the necessary material infrastructure. The paper focuses on three aspects of this process during a period characterized by uncertainty and experimentation. It discusses the structure of scholars’ families, arguing that at least until the middle of the sixteenth century, received models still persisted, while new viable models for articulating family reproduction with the transmission of scholarly dispositions had not yet crystallized. It then turns to the reorganization of domestic space, focusing on the different uses of the study to manage social distance and regulate domestic relations. Finally, among the different manifestations of the scholarly habitus, it argues that the emotional detachment of learned men was itself a learned habit. The well-documented discussion of competing options for organizing scholars’ family households and cultivating an acquired nature in academic settings provides an exceptional occasion to examine the way a group habitus is reshaped and to explore the cultural work involved in this process.
Of course, there's Dora Thornton's The Scholar in His Study: Ownership and Experience in Renaissance Italy, which I've encountered a couple times, but never looked at with this particular subject in mind.
There's also some work on commercial and mercantile calculation and writing. I think Alfred Crosby talks some about this in one of his books, and of course there's JoAnne Yates' Control Through Communication, which is full of interesting detail on 19th-century business information practices.
I write about people, technology, and the worlds they make.
I'm a senior consultant at Strategic Business Insights, a Menlo Park, CA consulting and research firm. I'm also a visitor at the Peace Innovation Lab at Stanford University. (I also have profiles on LinkedIn, Google Scholar and Academia.edu.)
I began thinking seriously about contemplative computing in the winter of 2011 while a Visiting Researcher in the Socio-Digital Systems Group at Microsoft Research, Cambridge. I wanted to figure out how to design information technologies and user experiences that promote concentration and deep focused thinking, rather than distract you, fracture your attention, and make you feel dumb. You can read about it on my Contemplative Computing Blog.
My book on contemplative computing, The Distraction Addiction, was published by Little, Brown and Company in 2013.
My next book, Rest: Why Working Less Gets More Done, is under contract with Basic Books. Until it's out, you can follow my thinking about deliberate rest, creativity, and productivity on the project Web site.
The Distraction Addiction
My latest book, and the first book from the contemplative computing project. The Distraction Addiction is published by Little, Brown and Co. It's been widely reviewed and garnered lots of good press. You can find your own copy at your local bookstore, or order it through Barnes & Noble, Amazon (check B&N first, as it's usually cheaper there), or IndieBound.
The book has also appeared in Spanish, Dutch, Chinese, and Korean editions.
Empire and the Sun
My first book, Empire and the Sun: Victorian Solar Eclipse Expeditions, was published by Stanford University Press in 2002 (order via Amazon).