Bernie: I believe that's the secret of my success, Mike. I never ever thought outside the box!
Mike: Wow! Really?
Bernie: Yup.... Of course, it helps if you own the box.
Mike: I would imagine.
Turns out Douglas Rushkoff was right-- you should get back in the box!
Interesting concept: Tweet The Future. It looks like it's relatively simple: from what I can tell, it searches Twitter for tweets with the word "future" in them.
Of course, by focusing on things that explicitly mention the future, it misses tweets that are about the future but don't use the word-- drkiki's forecast that "1000 dollar genome sequencing will be here in under 5 years" can't be caught by the filter, and only one of twenty recent tweets by IFTF actually uses the word "future"-- but it's an interesting start.
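From the description above, the filter appears to amount to a simple keyword match. Here's a minimal sketch of that kind of filter (the sample tweets and the matching rule are my own assumptions for illustration, not the tool's actual code):

```python
def mentions_future(tweet: str) -> bool:
    """Naive keyword filter: case-insensitive substring match on 'future'."""
    return "future" in tweet.lower()

# Illustrative tweets: the second is about the future but never uses the word.
tweets = [
    "The future of work is distributed",
    "1000 dollar genome sequencing will be here in under 5 years",
    "Futures thinking should start with the present",
]

caught = [t for t in tweets if mentions_future(t)]
# The genome-sequencing forecast slips through, just like drkiki's tweet.
```

A filter like this catches anything containing the substring (including "futures"), but has no way to recognize a forecast that never uses the word, which is exactly the limitation noted above.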
Don't know the creator, but perhaps I should. He's in the neighborhood. And he's actually onto something useful. Some friends of mine and I have been talking about creating a system that would aggregate futures-related material from blogs, Twitter, del.icio.us, etc. We've been calling it "social scanning," as it would basically be a Web 2.0 upgrade to the scanning that all futurists already do. (I talk about the concept in greater depth in my recent think-piece on the future of the field.)
A cautionary tale for meeting facilitators and organizers everywhere:
Thousands of employees were evacuated from an office complex after an employee spotted a suspicious black box with lights, wires and a timer in a conference room, and called for help.
Employees streamed out of the JPMorgan Chase & Co. offices in Columbus, Ohio - only for investigators to figure out what the device was: a timing device used to keep presentations short....
Several people were overcome by summer heat in the parking lot during Tuesday's evacuation and were treated by paramedics.
It's things like this that make me think that I should disconnect my computer and go back to writing letters on nice stationery.
And I was just about to start thinking nice things about Tweet the Future.
I'm going back to thinking about how to apply Tim Berners-Lee's vision of the Linked Web to improve the lives of disorganized professionals (not ones who are personally disorganized, but professions that aren't connected together very tightly by gatekeeping institutions, training programs, journals, etc.). And I may not come back.
Just watch it, if you haven't already.
As Ron Miller comments, "This is a fascinating talk and what jumped out at me was how excited he was about all of this. Rather than being jaded after 20 years in the field, he's genuinely pumped to take this to the next level."
An idea I can really relate to, on parenthood and personhood:
It’s true that we ought to make it easier to parent because that would be good for children, and therefore good for society. But there’s another claim about parenting that feminists since Friedan have been pretty bad at making, and that is that it’s good for parents. Not having liked myself much as a child, I didn’t know how to love children until I became a mother. Mothering my children, I mothered myself, and shed a layer of callowness. My children’s fiercely animal bodies brought me alive to my own. And when I watched them see all the things I had trained myself not to see—poverty, racism, sexism, filthy public spaces—I felt responsible and ashamed. In short, I flourished, in the Aristotelian sense of the term: I became more politically aware, more physically attuned—more human.
Paul Graham has a nice post on the different ways managers and "makers" divide up time:
There are two types of schedule, which I'll call the manager's schedule and the maker's schedule. The manager's schedule is for bosses. It's embodied in the traditional appointment book, with each day cut into one hour intervals. You can block off several hours for a single task if you need to, but by default you change what you're doing every hour.
When you use time that way, it's merely a practical problem to meet with someone. Find an open slot in your schedule, book them, and you're done.
Most powerful people are on the manager's schedule. It's the schedule of command. But there's another way of using time that's common among people who make things, like programmers and writers. They generally prefer to use time in units of half a day at least. You can't write or program well in units of an hour. That's barely enough time to get started.
When you're operating on the maker's schedule, meetings are a disaster. A single meeting can blow a whole afternoon, by breaking it into two pieces each too small to do anything hard in. Plus you have to remember to go to the meeting.
When I read this, I thought, this explains why I found meetings so disruptive to my days. I'm a pretty social person, but I find myself increasingly aware of the need to create large blocks of time during which I can really get into a subject, and planning my days so all calls and meetings are loaded into a certain period, rather than spread throughout the day.
Graham's essay also echoes the distinction E. P. Thompson made between time-oriented work and task-oriented work in his classic article on time consciousness in the early Industrial Revolution, "Time, Work-Discipline, and Industrial Capitalism." Pre-industrial work, Thompson argued, was task-oriented: whether you worked in the fields or in town, the rhythm of your working day wasn't determined by a clock, but by Nature and the work you needed to get done. With the rise of the factory system, and the growing specialization of labor within factories, the rhythms of work were defined not by organic tasks, but by machines and the factory itself: you worked a certain number of hours a day, and then you stopped. Work was no longer task-oriented, but time-oriented.
Of course, there are types of work that have always remained task-oriented, even when we're measuring or regulating or standardizing them using time. Cooking is one. Parenting is another. Babies are as demanding as any factory-owner, but as any new parent will tell you, they run very much on their own clocks. But today, when the two are at odds, task-orientation loses out to time-orientation: managers set meeting times for subordinates, some of whom are likely to be young mothers. As Judith Shulevitz argues,
The politics of time are hugely significant for women because the temporality of motherhood is strictly at odds with the temporality of work... Motherhood follows not just a pre-industrial schedule but a biological one as well. (The two are related.) Women have to have their babies before they become infertile, and once their children are born, they have to meet their needs then, not later. As we learn more about the psychological and physiological benefits to a baby of being soundly attached to a mother or father figure, the importance of love for brain development, not just personality formation, we get an ever clearer sense of the cost to children of depriving their parents of the means to spend time with them, especially when they’re young. Under current social arrangements, however, motherhood and fatherhood clocks clash with most career clocks, so parents who spend that time often pay a high price for doing so.
One of the things I think I'm going to have to do more ruthlessly is control my time: not just "manage" it better, but think more clearly about what kinds of time I need. I've done this pretty well for space and other resources, but time is something that I've tended to think of merely as a scarce but relatively undifferentiated resource. High time, as it were, to figure out how I can better balance tasks and time, and the different kinds of discipline required to satisfy each.
Really interesting piece in the New York Times on studies the military is conducting on why some people have a better sense for danger than others.
The study complements a growing body of work suggesting that the speed with which the brain reads and interprets sensations like the feelings in one’s own body and emotions in the body language of others is central to avoiding imminent threats.
“Not long ago people thought of emotions as old stuff, as just feelings — feelings that had little to do with rational decision making, or that got in the way of it,” said Dr. Antonio Damasio, director of the Brain and Creativity Institute at the University of Southern California. “Now that position has reversed. We understand emotions as practical action programs that work to solve a problem, often before we’re conscious of it. These processes are at work continually, in pilots, leaders of expeditions, parents, all of us.”...
So what are the factors that seem to affect the ability to detect problems early?
Experience matters, of course: if you have seen something before, you are more likely to anticipate it the next time. And yet, recent research suggests that something else is at work, too.
Small differences in how the brain processes images, how well it reads emotions and how it manages surges in stress hormones help explain why some people sense imminent danger before most others do.
Studies of members of the Army Green Berets and Navy Seals, for example, have found that in threatening situations they experience about the same rush of the stress hormone cortisol as any other soldier does. But their levels typically drop off faster than less well-trained troops, much faster in some cases....
The men and women who performed best in the Army’s I.E.D. detection study had the sort of knowledge gained through experience, according to a preliminary analysis of the results; but many also had superb depth perception and a keen ability to sustain intense focus for long periods. The ability to pick odd shapes masked in complex backgrounds — a “Where’s Waldo” type of skill that some call anomaly detection — also predicted performance on some of the roadside bomb simulations....
Veterans say that those who are most sensitive to the presence of the bombs not only pick up small details but also have the ability to step back and observe the bigger picture: extra tension in the air, unusual rhythms in Iraqi daily life, oddities in behavior.
Of course, this is a different scale of futures thinking where I work, but I always wonder whether there are things we futurists can take away from such studies that would improve our work, or help us heighten its impact.
One of the more exciting things I worked on when I was at IFTF was a project with Kitchen Budapest, an innovation lab in Hungary that Anthony and I kind of stumbled across. I've been using their fantastic presentation tool Prezi for a while, and it's now getting some well-deserved attention.
I've been waiting a while for this functionality, and am glad it's finally out!
William Shatner reads Sarah Palin's resignation speech as poetry (listen for the bongo drums).
It's like Peggy Noonan, Jack London, and William Faulkner wandered into the woods with three buttons of peyote and one typewriter, and only this speech emerged.
And she wrote this speech! In advance, on paper! What does any of it mean? It is amazing. Twenty years ago she could competently describe a dog race, three years ago she could articulate a position on the abortion issue, and this weekend she composed a resignation speech by throwing culture war stock phrases into a hat and dumping it upside down on a copy of The Paranoid Style in American Politics [ed: great book! really holds up!].
Okay, now I'm going to work.
From The Guardian:
A group of eminent economists has written to the Queen explaining why no one foresaw the timing, extent and severity of the recession.
The three-page missive... was sent after the Queen asked, during a visit to the London School of Economics, why no one had predicted the credit crunch.
What did they come up with? They explain that
the failure to foresee the timing, extent and severity of the crisis and to head it off, while it had many causes, was principally a failure of the collective imagination of many bright people, both in this country and internationally, to understand the risks to the system as a whole.
Sounds perfectly plausible. The question is, what would help improve that collective imagination?
This bit from the TierneyLab:
“Academics, like teenagers, sometimes don’t have any sense regarding the degree to which they are conformists.”
So says Thomas Bouchard, the Minnesota psychologist known for his study of twins raised apart, in a retirement interview with Constance Holden in the journal Science....
The strength of this urge to conform can silence even those who have good reason to think the majority is wrong. You’re an expert because all your peers recognize you as such. But if you start to get too far out of line with what your peers believe, they will look at you askance and start to withdraw the informal title of “expert” they have implicitly bestowed on you. Then you’ll bear the less comfortable label of “maverick,” which is only a few stops short of “scapegoat” or “pariah.”
A remarkable first-hand description of this phenomenon was provided a few months ago by the economist Robert Shiller, co-inventor of the Case-Shiller house price index. Dr. Shiller was concerned about what he saw as an impending house price bubble when he served as an adviser to the Federal Reserve Bank of New York up until 2004.
So why didn’t he burst his lungs warning about the impending collapse of the housing market? “In my position on the panel, I felt the need to use restraint,” he relates. “While I warned about the bubbles I believed were developing in the stock and housing markets, I did so very gently, and felt vulnerable expressing such quirky views. Deviating too far from consensus leaves one feeling potentially ostracized from the group, with the risk that one may be terminated.”
My wife is running a half marathon this weekend in San Francisco. Myself, I'm good for an hour in the weight room or pool, but as a rule I don't tackle any distance greater than about a mile without a bicycle.
I'd meant to link earlier today to her Team in Training page and recommend you support a very good cause, but umm, things got a little busy. Still, it remains a good cause, and I encourage you to pledge!
A few days ago I wondered if shamelessness was the new virtue. Yesterday I didn't quite find an answer, but I did find another great example of how information transparency doesn't necessarily lead to greater probity and economy, but to greater recklessness or shamelessness. This is from Dan Ariely's Predictably Irrational, pp. 16-17:
[I]n 1993, federal securities regulators forced companies, for the first time, to reveal details about the pay and perks of their top executives. The idea was that once pay was in the open, boards would be reluctant to give executives outrageous salaries and benefits....
But guess what happened. Once salaries became public information, the media regularly ran special stories ranking CEOs by pay. Rather than suppressing the executive perks, the publicity had CEOs in America comparing their pay with that of everyone else. The result?... [I]n 1976 the average CEO was paid 36 times as much as the average worker. In 1993, the average CEO was paid 131 times as much.... Now the average CEO makes about 369 times as much as the average worker-- about three times the salary before executive compensation went public.
There's a great catfight going on between Matt Taibbi (who has turned into a mad cross between Upton Sinclair, Michael Lewis, and Lenny Bruce) and Claudia Deutsch about Goldman Sachs' plan to pay big bonuses to its people again. Matt rips apart a post titled "Congratulations, Goldman-- And I Wish You Many, Many More."
The defense of Goldman seems to boil down to, yes they have all sorts of connections, and yes they got tons of money from the government, but they're honest about it.
It makes me wonder: At one time, back in the day, we thought that the Internet and other information technologies would create transparency, make it harder to hide corruption, and thus force powerful people to behave better.
But we've essentially run an experiment for a decade testing this hypothesis, and it seems to me that it hasn't worked out that way.
Instead of forcing corruption underground, the Internet has forced shamelessness aboveground-- and indeed, has turned it into a virtue. So the Goldman execs may be dickheads, money-grubbing asses, and willing to sell their grandmothers if the price is right, but they don't pretend to be anything else. So they're welcome to their bonuses.
Bruce Sterling points out the parallels between the instructions I provide in the "Evil Futurists Guide to World Domination" (henceforth EFG2WD), and religion. Of course he's right. I should have thought of it earlier.
It’s a little odd that Pang doesn’t seem to realize that he is describing religion here. His “evil futurist” is a morally-certain holy prophet with a scripture. Social figures of this sort carry out practically every tactic that Pang describes, and that scheme’s been working grandly for millennia.
But on the upside, this'll be good for another dozen really dense footnotes citing works in the psychology of religion and apocalyptic prophecy literature. Win!
For a while now, I've been working on a think-piece on what futures would look like if it started now: if instead of starting during the Cold War, in the middle of enthusiasm for social engineering, computer programming, and rationalistic visions of future societies, futures was able to draw on neuroscience and neuroeconomics, behavioral psychology, simulation, and other fields and tools.
One of the things I've kept coming back to is that, if you take seriously the criticisms or warnings of people like Nassim Taleb on the impossibility of prediction, Philip Tetlock and J. Scott Armstrong on the untrustworthiness of expert opinion, Robert Burton on the emotional bases of certainty, Gary Marcus and Daniel Gilbert on the mind, etc., you could end up with a radically skeptical view of the whole enterprise of futures and forecasting. Or, read another way, you end up with a primer for how to be an incredibly successful futurist, even while you're a shameless fraud, and always wrong.
I've finished a draft of the serious article [PDF], so now it's time for the next project: The Evil Futurists' Guide to World Domination: How to be Successful, Famous, and Wrong. It would be too depressing to write a book-length study, so I'll just post it here.
(This exercise is, by the way, an illustration of Pang's Law, that the power of an idea can be measured by how outrageously-- yet convincingly-- it can be misused. Think of Darwin's ideas morphing into Social Darwinism or being appropriated by the Nazis, or quantum physics being invoked by New Age mystics. And yes, I know Pang's Law will never be as cool as the Nunberg Error, but I do what I can.)
Full essay in the extended post.
The citations are all real. But no, I don't really mean a single word of it. Yet, I wonder....
In Outliers Malcolm Gladwell writes that it takes about 10,000 hours to master something-- computer programming, classical violin, tennis, what have you. I've been working as a futurist for almost a decade; I don't know if I've done 10,000 hours of decent work, but I have some feel for how the field works, and what we're good at.
About a year ago-- okay, more like two years ago-- Angela Wilkinson, a friend who runs the scenario planning master classes at the Saïd Business School, invited me to write a think-piece about the field. I took it as an occasion to run a thought experiment: if you were to start with a clean sheet of paper-- if there was no Global Business Network, no IFTF, no organized or professionalized efforts to forecast the future-- what would the field look like? What kinds of problems would it tackle? What kinds of science would it draw on? And how would it try to make its impact felt?
As I got into it, I concluded that a new field would look very different from the one I've worked in for the last decade. This essay (it's a PDF, about 260kb) is a first draft at an effort to explain where I think we could go. Lots of what I talk about will be familiar to my colleagues, and indeed to anyone reasonably well-read; but I think there's utility in synthesis and summary, if only to see connections between literatures and chart one's next steps.
All the usual caveats apply: it's unpublished, it's unfinished, it doesn't reflect the thinking of any of the various institutions I'm associated with, all the errors are mine, there are plenty of things I could have talked about but didn't. The usual invitation applies too: comments are very welcome. I could keep tinkering with it, but at this stage I think it's more useful for me to take a step back, work on some other things, and return to it with fresh eyes.
Angela had in mind something quick, short, and provocative. I definitely missed the first two. Angela, I'm sorry to have kept you waiting.
Update, 22 July 2009: I've posted a slightly updated version of the essay, and also reproduced the introduction below the jump.
In the course of my reading (or browsing or opportunistic strip-mining) of the literature on behavioral economics, the psychology of the future, studies of certainty, and other things, I keep having the same thought: The whole point of this literature is to explain why people listen to the neocons.
There's no logical reason, after the last eight years, that anyone should ever take anything that William Kristol (to take one example) says seriously. But the neocons don't appeal to logic: they appeal to those parts of our brains that respond to blinding certainty, simple arguments, and self-confidence, not complexity, contingency, and modesty.
Once you realize that, it all makes sense. If you want to be constantly rewarded for being consistently wrong, study their careers.
I remember back in the late '90s when Ira Katznelson, an eminent political scientist at Columbia, came to deliver a guest lecture to an economic philosophy class I was taking. It was a great lecture, made more so by the fact that the class was only about ten or twelve students and we got to ask all kinds of questions and got a lot of great, provocative answers.
Anyhow, Prof. Katznelson described a lunch he had with Irving Kristol back during the first Bush administration. The talk turned to William Kristol, then Dan Quayle's chief of staff, and how he got his start in politics. Irving recalled how he talked to his friend Harvey Mansfield at Harvard, who secured William a place there as both an undergrad and graduate student; how he talked to Pat Moynihan, then Nixon's domestic policy adviser, and got William an internship at The White House; how he talked to friends at the RNC and secured a job for William after he got his Harvard Ph.D.; and how he arranged with still more friends for William to teach at UPenn and the Kennedy School of Government.
With that, Prof. Katznelson recalled, he then asked Irving what he thought of affirmative action. "I oppose it", Irving replied. "It subverts meritocracy."
From the start, the more confident advisers found more buyers for their advice, and this caused the advisers to give answers that were more and more precise as the game progressed. This escalation in precision disappeared when guessers simply had to choose whether or not to buy the advice of a single adviser. In the later rounds, guessers tended to avoid advisers who had been wrong previously, but this effect was more than outweighed by the bias towards confidence.
The findings add weight to the idea that if offering expert opinion is your stock-in-trade, it pays to appear confident.
SciBarCamp is done. Other than a lot of excellent leftover Pakistani food, a surprising amount of beer, and a photo set on Flickr, you'd never know we hosted 60+ people for two days. Time for a bit of reflection.
Wednesday morning, as I was getting the Institute's conference space ready for SciBarCamp-- hauling tables, moving chairs, trying to figure out how to get sixty people into our large conference room, calculating how many and what kinds of signs we needed to put up to help guests find the wifi, bathrooms, etc.-- I overheard someone say, "What I love about these things is that you don't have to do any preparation. You just show up."
Events like these may look like they're spontaneous and free, but that's only because someone has set up the environment in which they take place. That labor shouldn't really be visible to the participants-- like all infrastructure, its purpose is to be useful, not to call attention to itself-- but it is essential to the success of even the loosest and most improvisational event. To make a brief comparison to music: the most brilliant jazz improvisers, people like Keith Jarrett and Ornette Coleman, aren't brilliant because they just get up onstage and do whatever comes into their heads: they're brilliant because they've played for thousands and thousands of hours, are highly disciplined, have great training... and bring all that to the concert hall. [Update: See Fred Kaplan on creativity in jazz.] Likewise, when I travel, I like to be able to wander around and explore things; but I can do that because I carry a pack with all kinds of things that come in handy under a variety of circumstances. (Preparation is likewise important for biking, cooking, and other things.)
The Institute's conferences are scripted to the minute, the presentations are rehearsed endlessly, group exercises are agonized over. There's a lot of top-down structure, because we have a lot of content to share, and because it's hard for most people to think about the future in an orderly way. People, we assume, need the structure we provide in order to translate our work into terms that will be useful to them. So the bar camp model is one that I find very interesting.
But the camp isn't just the absence of organization: that wouldn't be a bar camp, it would just be chaos. There is structure here, and I want to understand what it is.
I was talking to Jamie McQuay, one of the organizers of this year's camp and a veteran of the bar camp scene, about the ingredients for a successful bar camp. He said that the two things you really need are free space (which saves the organizers money and time, and cuts down on the number of sponsors you have to look for), and interesting people. Tantek Çelik, a camp veteran, told me that all you really need are physical and virtual spaces-- a conference venue and a wiki.
But my sense is that there's more to it than that.
There's a cultural element to the camps that I think is important. People here are veterans of academic meetings, scientific society conferences, and industry trade shows, and know that world well enough to be intelligently dissatisfied by it. (I had a professor who said you couldn't rebel effectively against Catholicism unless you had been educated by Jesuits. Not Franciscans or Dominicans, mind you-- Jesuits. Truly, give me the child until he's seven, and he's ours forever.) When you have an event that's a mirror-world of the traditional conference, you need to know what the traditional conference is like, so you can do the opposite. I would draw a comparison to Wikipedia. One of the usually unacknowledged reasons Wikipedia works is because people know, or think they know, what encyclopedia articles are supposed to sound like: readers and creators alike share a basic understanding of what they should be doing.
I also suspect a good bar camp requires some minimum number of people who are veterans of the camp scene, and can catalyze others and acculturate novices. I'm not sure what that number is. Tantek said that return attendees are like culture in yogurt, which I think is a good comparison.
I think there are also some practical things that you can do that I've listed after the jump. None are especially profound, but they'd all make the event work better, and are worth paying attention to. But what else is there? Besides physical and virtual space, interesting people, a familiarity with conventional conferences, and perhaps some elusive bare minimum of people who've been to bar camps before, are there other things that a successful camp needs?
I'm supposed to be taking some of the summer off, finishing the book and a couple articles, but like Michael Corleone in The Godfather, every time I think I'm out they pull me back in. I was at the first day of SciBarCamp today, playing local host / fixer / keeping an eye on the furniture. Sean Mooney (who in addition to being a former professor at Indiana University, was a World Wrestling Federation announcer) gave a very interesting talk about current challenges in bioinformatics.
A fair amount of Sean's talk dealt with the technical challenges of creating federated databases, the differing demands of bench scientists and funders-- the former want tools for managing and analyzing data in today's problems, while the latter want to attack Big Questions-- and the issues involved in getting people to share their data. The issues aren't so much philosophical or competitive, but practical: people believe in sharing data, and once they're done with it are generally willing to share so long as it doesn't put a burden on them.
But as Sean was talking about how different labs used different procedures for similar experiments, and how those differences manifested themselves in the ways they produced and consumed data (at least, this is what I took away from his talk-- he might have meant something completely different), a thought came to me. Projects intended to let scientists share data assume that data can be converted into something like the reagents or instruments labs buy from suppliers-- a commodity that you don't have to think about, you just use. But what if data can't be black-boxed this way? Or, more specifically, what if only really uninteresting data-- the kind that everyone understands very well, the kind that's solidly in the realm of normal science-- can be cleaned up, repackaged, commodified and standardized, and put online into generally-usable databases?
On one hand, this idea might seem stupid. After all, science is science: data is data, and facts about nature are true no matter where they're created. That makes them scientific. On the other hand, if you buy the argument of people like Harry Collins, scientific research is as much a craft as a-- well, a science. Databases tend to reflect the specific, local interests of researchers, working on particular problems. This tends to work against the generalizability of data: the more it's a product of craft, and an object tailored to a particular job, the harder it'll be to make it useful to other people.
So depending on how much databases are expressions of craftwork and problem-solving and bricolage, and how much they reflect a timeless, placeless crystallization of nature's order, they're going to be less or more easily poured into big projects to reuse data.
After I stopped being slightly alarmed (can I really be that predictable?), I was pretty impressed.
The New York Times has a piece ("Future Vision Banished to the Past") about the likely destruction of Kisho Kurokawa’s Nakagin Capsule Tower, a "rare built example of Japanese Metabolism, a movement whose fantastic urban visions became emblems of the country’s postwar cultural resurgence."
Nakagin Capsule Tower, from the New York Times
The building, built in 1972, is now in lousy shape (what a surprise for an architecturally distinctive building employing innovative construction technology), but the author argues that
the building’s demolition would be a bitter loss. The Capsule Tower is not only gorgeous architecture; like all great buildings, it is the crystallization of a far-reaching cultural ideal. Its existence also stands as a powerful reminder of paths not taken, of the possibility of worlds shaped by different sets of values.
Founded by a loose-knit group of architects at the end of the 1950s, the Metabolist movement sought to create flexible urban models for a rapidly changing society. Floating cities. Cities inspired by oil platforms. Buildings that resembled strands of DNA. Such proposals reflected Japan’s transformation from a rural to a modern society. But they also reflected more universal trends, like social dislocation and the fragmentation of the traditional family, influencing generations of architects from London to Moscow.
Like lots of twentieth-century architectural movements, the Metabolists were at least as influential for their ideas as their actual buildings. (I remember studying them along with Archigram and Team X in David Brownlee's Art History 481B-- probably the most important class I took in college, given how often I use what I learned in it.) A lot of the more outlandish ideas from this period were never meant to be built-- drawings of walking cities were stimulating reflections on the nature of building in an impermanent world, but totally impractical-- but they made other, more prolific architects think differently about their work and the issues it raises.
In a way, I wonder if there's a useful comparison to be drawn between movements like these, or projects that remain forever on the drawing board but get talked about, and futurists and their work. Most of us don't build things, or write software, or craft strategies; the scenarios we write are intended to be provocations or stimulations (a hedge against them being wrong, which to one degree or another they inevitably are), and at best we have an indirect but positive influence on other people.
Composed of 140 concrete pods plugged into two interconnected circulation cores, the structure was meant as a kind of bachelor hotel for businessmen working in the swanky Ginza neighborhood of Tokyo.
Inside, each apartment is as compact as a space capsule. A wall of appliances and cabinets is built into one side, including a kitchen stove, a refrigerator, a television and a tape deck. A bathroom unit, about the size of an airplane lavatory, is set into an opposite corner. A big porthole window dominates the far end of the room, with a bed tucked underneath....
Each of the concrete capsules was assembled in a factory, including details like carpeting and bathroom fixtures. They were then shipped to the site and bolted, one by one, onto the concrete and steel cores that housed the building’s elevators, stairs and mechanical systems.
In theory, more capsules could be plugged in or removed whenever needed. The idea was to create a completely flexible system, one that could be adapted to the needs of a fast-paced, constantly changing society. The building became a symbol of Japan’s technological ambitions, as well as of the increasingly nomadic existence of the white-collar worker.
It's amazing how much work and expense goes into making the first example of something modular and standardized.
Of course, the great irony of building and construction standardization is that it hasn't produced a revolution in architecture. If anything, it's made it easier to throw up thousands of neo-Spanish colonial (or American colonial, or frontier, or postmodern-via-Miami Vice) houses in California's Central Valley, outside Phoenix, or in the suburban rings around Atlanta. Kurokawa was right that modularity and flexibility would suit "the needs of a fast-paced, constantly changing society;" but when married to the reality of real estate development, and the unreality of the mortgage market in the 2000s, the result was a kind of architecture very different from what the Metabolists imagined-- a useful reminder for futurists that what we think of as "exogenous" factors often have a bigger impact on the futures we're trying to understand than the factors we do pay attention to.
As Kevin Conley explains,
The director, who had no permits to film or to stop traffic, hooked a camera to the front bumper of a Mercedes-Benz (in the only bit of film trickery, the sound of the motor was played by a five-speed Ferrari) and filmed the entire movie in a single cinema-verité take: He drove through the streets of Paris at five in the morning, through red lights, around the Arc de Triomphe, down the Champs-Élysées, against one-way traffic, over sidewalks, at speeds up to 140 miles per hour. The film ends after nine terrifying minutes when the driver parks the car in Montmartre and a blonde comes up the stairs toward Sacre Coeur. (It was a date.) After the first showing, the director was arrested for endangering public safety.
I suspect if you're familiar with Paris, this is even more terrifying a spectacle.
...on the blog NCBI ROFL. NCBI is the National Center for Biotechnology Information, and its Web site has a number of scientific journal databases.
Among its finds are published articles on such cutting-edge subjects as "Disco clothing, female sexual motivation, and relationship status," which concluded that
females are aware of the social signal function of their clothing and that they in some cases alter their clothing style to match their courtship motivation. In particular, sheer clothing -although rare in the study- positively correlated with the motivation for sex.
It may be just me, but the only reason I can imagine this article being written is to help nerdy guys get laid.
However, I think the article title "Inappropriate use of a titanium penile ring: An interdisciplinary challenge for urologists, jewelers, and locksmiths" (umm, LOCKSMITHS???) may be the best thing ever written.
[Update: Made it onto Boing Boing!]
[D]ata sets themselves do not really convey any specific meaning. Meaning can be inferred from how the data compare to expectations or previously published data, but numbers in enterprise applications or spreadsheets cannot explain the strategies Intel and its customers are employing or the uncertainties they are facing. Decentralized organizations must find a means of transmitting business context; in other words, instead of transmitting mere data sets, they must transmit information and intelligence from employees who have it to employees who need it to make decisions and plans. (Jay Hopman, "Using Forecasting Markets to Manage Demand Risk," Intel Technology Journal 11:2 (May 2007), emphasis added.)
My son has started tutoring in reading. He's not as strong a reader as we'd like, or as strong as he'd like. So twice a week we take him to a reading expert. She's a former Peninsula teacher, and is actually someone who taught my wife when she was a child.
His enthusiasm is striking, because when I was a kid, getting tutored was a Bad Thing. Certainly you didn't look forward to it, or expect it to be fun. I don't know if this is a general change in kids' attitudes, or something specific to this area, or an extension of their general Peninsula-bred love of school. My kids look forward to Monday coming around so they can go back to school, and my daughter and her friends always complain about the end of the school year, so those attitudes probably influence their attitudes toward tutoring. And my son has known Marion (his tutor) for ages, and that made him more excited to be working with her.
And while I haven't done any surveys, my sense is that a lot more of my kids' friends are doing things that in an earlier age might have been seen as remedial, and not talked much about. At least two or three of my son's friends have worked with Marion, which goes a long way to normalizing it. And for kids who already are taking music lessons, are in swimming clubs or little league, or doing lots of other scheduled things, tutoring or speech therapy probably doesn't seem like anything out of the ordinary.
So he'd better be reading Tolstoy by September, or I'm going to ask for my money back.
I write about people, technology, and the worlds they make.
I'm a senior consultant at Strategic Business Insights, a Menlo Park, CA consulting and research firm. I'm also a visitor at the Peace Innovation Lab at Stanford University. (I also have profiles on LinkedIn, Google Scholar and Academia.edu.)
I began thinking seriously about contemplative computing in the winter of 2011 while a Visiting Researcher in the Socio-Digital Systems Group at Microsoft Research, Cambridge. I wanted to figure out how to design information technologies and user experiences that promote concentration and deep focused thinking, rather than distract you, fracture your attention, and make you feel dumb. You can read about it on my Contemplative Computing Blog.
My book on contemplative computing, The Distraction Addiction, was published by Little, Brown and Company in 2013.
My next book, Rest: Why Working Less Gets More Done, is under contract with Basic Books. Until it's out, you can follow my thinking about deliberate rest, creativity, and productivity on the project Web site.
The Distraction Addiction
My latest book, and the first book from the contemplative computing project. The Distraction Addiction is published by Little, Brown and Co. It's been widely reviewed and garnered lots of good press. You can find your own copy at your local bookstore, or order it through Barnes & Noble, Amazon (check B&N first, as it's usually cheaper there), or IndieBound.
The Spanish edition
The Dutch edition
The Chinese edition
The Korean edition
Empire and the Sun
My first book, Empire and the Sun: Victorian Solar Eclipse Expeditions, was published by Stanford University Press in 2002 (order via Amazon).