Hollee (really?) Actman Becker has a great, heartfelt, and smart piece about Instagram beauty contests (as terrible as they sound, read about them yourself), and the responsibility parents have to help their kids use technology in smart ways:
we are failing our children by not giving them the tools they need to properly navigate this scary new world, and by not monitoring their interactions in this world closely enough once we do….
Because the minute we give our kids an iphone or ipod or any other gadget that puts technology quite literally in the palms of their hands, we become responsible for whatever happens next….
We potty train our kids, teach them good table manners, spend 10 minutes deciphering the food label on a candy bar before we let them eat it. And yet, we set our kids up on social media, and then for all intents and purposes, we hang them out to dry.
Checking our kids’ news feeds to see what they are viewing, scrolling through their profiles to see what they’re posting, investigating the people who want to follow them, finding out who they’ve given their password to and monitoring all of their accounts (because most kids have more than one instagram account in case you didn’t know) doesn’t make us helicopter parents.
It makes us smart parents.
As the father of a girl who just turned 14 yesterday, I say: Read the whole thing.
Derek Thompson has a short interview with Steve Blank about Facebook's IPO and what it means for Silicon Valley:
Facebook's success has the unintended consequence of leading to the demise of Silicon Valley as a place where investors take big risks on advanced science and tech that helps the world. The golden age of Silicon valley is over and we're dancing on its grave. On the other hand, Facebook is a great company. I feel bittersweet.
Why is that?
Silicon Valley historically would invest in science, and technology, and, you know, actual silicon. If you were a good VC you could make $100 million. Now there's a new pattern created by two big ideas. First, for the first time ever, you have computer devices, mobile and tablet especially, in the hands of billions of people. Second is that we are moving all the social needs that we used to do face-to-face, and we're doing them on a computer.
And this trend has just begun.
In other words, opportunities to make lots of money quickly on bubblicious things tend to draw attention away from hard things that offer more enduring value. Makes perfect sense. Blank also makes this interesting observation:
I see my students trying to commercialize really hard stuff. But the VCs are only going to be interested in chasing the billions on their smart phones. Thank God we have small business research grants from the federal government, otherwise the Chinese would just grab them….
The four most interesting projects in the last five years are Tesla, SpaceX, Google Driving, and Google Goggles. That is one individual, Elon Musk, and one company, Google, doing all four things that are truly Silicon Valley-class disruptive…. Thank God for federal government grants, and the NIH, and Musk, and Google.
I think TED talks are the worst example of modern faux-intellectualism. Audience flattering, based on ego and personality, dripping with self-congratulation, they contribute to one of the great lies of our time, which is that the truth is entertaining and can be contained in bite-sized, ready-for-television aphorisms. The reality is that progress is hard, that knowledge making is a long and dispiriting slog, and that when ideas and solutions appear pat, cute, easy, or triumphant, they’re almost certainly wrong.
Mainly this is an excuse to trot out my favorite Bart Simpson quote: "To those who say there are no easy answers, I say you're not looking hard enough!"
As someone who's given TEDx talks, yet is occasionally put off by just how much buzz these talks generate (or really, how ready some speakers are to point to hit counts as proof that They Are Taken Seriously), I can understand the criticism.
Yet there can be value in struggling to take a complex project and at the very least, show people enough of it to make them think that it would be worth investing their time and attention to see the whole thing. Good TED talks aren't like music videos; they're like movie previews.
I love Felix Salmon's work. He's one of the best financial reporters in the business, someone who's got tons of insight and an ability to explain complicated, obscure but important things to a general audience.
So I was pained to see him get a small but significant detail wrong in his piece about Marc Andreessen. Among Andreessen's achievements, Felix writes, is that "he's dragging Silicon Valley into the world of philanthropy, where it’s historically been very weak."
Umm. No. Absolutely wrong.
Bill Hewlett and Dave Packard, and their families, have been philanthropists for decades. The value of their gifts to Stanford University exceeds the Stanford family’s original endowment (or so I was told by some development folks there). The Lucile Packard Children’s Hospital (where both my children were born), the open space trusts that have kept a significant part of Silicon Valley from turning into places to park Range Rovers in front of McMansions-- not to mention a variety of locally-famous schools, charitable foundations, etc. etc. ad infinitum-- have all benefitted from the work of Hewlett, Packard, the Varian family, and many others.
Too elitist? Fundraisers for schools sound too self-serving? Maybe Santa Clara U.’s social innovation prizes, and its goal of improving the lives of a billion poor by 2020, are a bit more to one’s liking.
Indeed, you might make the case that Marc learned about the value of philanthropy from his spouse, whose family real estate business shaped Silicon Valley, and whose family foundation has shaped Silicon Valley in different ways.
I think the idea that the Valley isn't interested in philanthropy comes from extrapolating the example of Steve Jobs, who famously was uninterested in it. However, what you have to realize is how much Steve was the exception to the rule; indeed, you'd have to be someone of Steve Jobs' caliber (in other words, only Steve Jobs) to get away with it.
So Marc’s not inventing a new tradition. If anything, he’s doing a great job of showing how new money legitimates itself by imitating old money.
And yes, here in the Valley money made selling klystrons and calculators-- anything before about 1990-- is Old Money. Those dollars might as well have been printed by Rembrandt.
I decided to download Blogsy, a blog editor for the iPad, and give it a try. I've lamented the apparent absence of decent Typepad editors (indeed, I still pine for the old days of Ecto), but this one looks pretty promising.
I spent yesterday at the Being Human conference in San Francisco, about which I'll have more to say shortly. It was a very interesting time, and quite well-done.
A good, lengthy piece in the New York Times uses iPhone manufacturing as a window into the growth of Asian factories, and the relative decline of American manufacturing capabilities. Well worth a read.
Apple had redesigned the iPhone’s screen at the last minute, forcing an assembly line overhaul. New screens began arriving at the plant near midnight.
A foreman immediately roused 8,000 workers inside the company’s dormitories, according to the executive. Each employee was given a biscuit and a cup of tea, guided to a workstation and within half an hour started a 12-hour shift fitting glass screens into beveled frames. Within 96 hours, the plant was producing over 10,000 iPhones a day.
“The speed and flexibility is breathtaking,” the executive said. “There’s no American plant that can match that.”
Similar stories could be told about almost any electronics company — and outsourcing has also become common in hundreds of industries, including accounting, legal services, banking, auto manufacturing and pharmaceuticals.
But while Apple is far from alone, it offers a window into why the success of some prominent companies has not translated into large numbers of domestic jobs. What’s more, the company’s decisions pose broader questions about what corporate America owes Americans as the global and national economies are increasingly intertwined.
“Companies once felt an obligation to support American workers, even when it wasn’t the best financial choice,” said Betsey Stevenson, the chief economist at the Labor Department until last September. “That’s disappeared. Profits and efficiency have trumped generosity.”
Vannevar Bush's Memex, Ted Nelson's concept of hypertext, Paul Baran's invention of packet-switching, the TCP/IP protocol stack, Tim Berners-Lee's HTML... What do we do with all this? Do we create magnificent works of art? Finally achieve world peace and global happiness?
No, we create Virtual Bikini Wrestling.
Humans. Are. Doomed.
[Don't ask me how I got to it. Actually, I was buying an ebook about Twitter and churches, and the same online publisher was selling "pictures" from recent VBW matches. It's such a weird concept, I couldn't resist clicking through.]
To walk through an airport with Bruce Schneier is to see how much change a trillion dollars can wreak. So much inconvenience for so little benefit at such a staggering cost. And directed against a threat that, by any objective standard, is quite modest. Since 9/11, Islamic terrorists have killed just 17 people on American soil, all but four of them victims of an army major turned fanatic who shot fellow soldiers in a rampage at Fort Hood. (The other four were killed by lone-wolf assassins.) During that same period, 200 times as many Americans drowned in their bathtubs. Still more were killed by driving their cars into deer. The best memorial to the victims of 9/11, in Schneier’s view, would be to forget most of the “lessons” of 9/11. “It’s infuriating,” he said…. “We’re spending billions upon billions of dollars doing this—and it is almost entirely pointless. Not only is it not done right, but even if it was done right it would be the wrong thing to do.”...
What the government should be doing is focusing on the terrorists when they are planning their plots. “That’s how the British caught the liquid bombers,” Schneier says. “They never got anywhere near the plane. That’s what you want—not catching them at the last minute as they try to board the flight.”
The camera, incidentally, looks pretty amazing-- an appealing blend of retro (it looks like the slide viewers my dad had in the 1970s) and hyper-modern.
Here's the dissertation, if you're really interested.
And seriously, check out the interactive picture gallery. It's pretty amazing.
...and to put John Hagel and John Seely Brown on retainer to help explain how people really read on their Blackberry.
(Read about Murdoch's "I read this critical email on my Blackberry, so hey how could I have read it all the way through?" excuse.)
New York Magazine has a piece on the opening of Vogue's new online archive that only narrowly misses fainting with excitement.
Vogue's much-hyped archive website goes live today, and as promised, it contains every single page from every issue dating back to the magazine's American debut in 1892. According to Vogue's press release, the site is searchable by decade, brand, designer, and photographer; you can also sort results by articles, images, covers, or ads. It's a wildly impressive undertaking to organize such a massive amount of information, and bravo to Vogue for providing a great tool for researching the historical context of moments in fashion and society.
Felix Salmon adds:
The ad content alone is hugely valuable: it’s almost impossible, right now, to follow the course of, say, Dolce & Gabbana’s ad campaigns from inception to date, or to follow the career of a star model or photographer as they move back and forth between expensive editorial shoots and much more expensive ad shoots. The Vogue database even allows you to search by individual garment — again, giving you the opportunity to see how a certain pair of shoes, say, is portrayed in different contexts.
The people who really need this kind of information are professionals who happily spend $131 on lunch and will be equally happy to spend the same amount on a month’s access to the Vogue archives.
It'll be interesting to see if there are institutional subscriptions-- will the Fashion Institute of Technology or RISD give their students access?
I had an old DVD player in my car, and I broke it open, and dug out one of the lenses. There are two lenses, a curvy one, and a flatter one. You want the flatter of the two.
I took the lens, drilled a hole in an old Maker Faire badge, and cemented it in. The lens is just above the "2010."
The results can be really cool.
You do have to tinker with it a lot to get a good picture-- if the lens is off-center everything gets messed up, you have to be careful to focus it properly, and of course there's huge vignetting-- but that's part of the pleasure of it.
Now on to making a pinhole lens for my Nikon D3100!
Hard to believe, but it looks that way. I poked around a little with Ecto this afternoon, and it looks like it's now abandonware. I like Mars Edit well enough-- the Flickr integration is nice, in particular-- but it has some strange omissions: no dedicated button for links, for example, which makes me shake my head every time I write something.
But it seems impossible to me that Mars Edit is pretty much the whole game now. Can that be right?
And where the heck is the good blog editor for the iPad?
Has anyone written an article on the term "real time"-- where it comes from, how it's been used in the last few decades, and what it means today? In particular, I'm interested in how and when the "realness" of things like computer networks and information flows became more "real" than that of more "natural" everyday human life.
This CNBC report on how oil companies are using Army and Marine Corps psy ops-- psychological operations-- veterans to help them defeat anti-fracking campaigns is pretty interesting.
In a session entitled “Designing a Media Relations Strategy To Overcome Concerns Surrounding Hydraulic Fracturing,” Range Resources communications director Matt Pitzarella spoke about “overcoming stakeholder concerns” about the fracking process.
“We have several former psy ops folks that work for us at Range because they’re very comfortable in dealing with localized issues and local governments,” Pitzarella said. “Really all they do is spend most of their time helping folks develop local ordinances and things like that. But very much having that understanding of psy ops in the Army and in the Middle East has applied very helpfully here for us in Pennsylvania.”...
“Download the U.S. Army-slash-Marine Corps Counterinsurgency Manual, because we are dealing with an insurgency,” [manager of external affairs for Anadarko Petroleum Matt] Carmichael said. “There’s a lot of good lessons in there and coming from a military background, I found the insight in that extremely remarkable.”
Given that advocacy groups here in the U.S. are interested in how social media and novel organizational forms are used in other parts of the world, it should be no surprise that corporations would look to psy ops to help counter them. After all, the field does claim to be able to deal with problems like these.
I'm sure that the importation of expertise developed in Iraq, Afghanistan and countless "small wars" in the last half century is also proof that Sharia law has now taken over in Pennsylvania oil country. Those Amish think they look so innocent...
My wife and son are both dyslexic, so I was interested in this piece in IO9 about a new font designed to help dyslexics read more effectively:
Dyslexie was created by Christian Boer, a dyslexic graphic designer from the Netherlands. The font incorporates a number of typographical features that make it harder for the brains of dyslexics to rotate, swap, mirror, and otherwise confuse letters while they're reading.
Take the letters "p," "b" and "d," for example.... In many fonts, these letters look very much the same, such that by rotating and mirroring them they can be used more or less interchangeably.
In Boer's font, however, the boldness of each letter's base is increased, granting each character a "weight" that hints at its proper orientation. Notice also that the space enclosed by each letter (what is referred to in typography as a character's "bowl" or "loop") is shaped just a little differently than that of the other two. These subtle typographical cues may not seem like much, but they go a long way in helping your brain recognize which letters belong where when they appear in words and sentences.
Scientific American adds:
One of the first things he did was increase the boldness of letters at their bases, to make them appear weighted, causing readers' brains to know not to flip them upside down, as can occur with "p" and "d." Boer also enlarged the openings of various letters, such as "a" and "c," to make them more distinguishable from one another, and increased the length of "the tail" of other letters, like the "g" and "y." He also put certain letters at a slant so that they would appear to be in italics, like the "j," a tactic to increase the brain’s ability to distinguish it from the letter "i." Finally, he boldfaced capital letters and punctuation, and provided ample space between letters and words, to allow the brain more time to compute the letters and begin forming them into words and sentences.
Earlier today I went by the Apple Store, and had a look at the memorial.
the memorial wall and iphone 4s line, via flickr
It's developed into quite the phenomenon. Someone (Apple? someone else?) put out a bunch of Post-Its and pens, and people have been leaving messages. Most of them are impressively heartfelt.
what if god was one of us, via flickr
This evening I went back-- my wife, after a saintly exercise of patience, got a new phone-- and noticed that the store's glowing Apple logo lit up some of the messages. A nice spontaneous piece of design.
the apple at night, via flickr
How easy am I to please? Yesterday I took one of my old Airport Express stations-- an older wifi station that, ever since I moved up to Airport Extreme, I've treated with contempt-- and reconfigured it so I could stream music from my iPad or computer to my stereo.
It's not like I couldn't plug either of those devices into the stereo before, but the point is, now I don't have to: I have the richer sound of my stereo, the vastness of my music collection, and the ease of it being wireless. It's surprisingly liberating; but then again, if you think back to what it was first like to have wifi, maybe it shouldn't feel surprising at all.
Today, I took the other Airport Express, and hooked it up to some Cambridge Soundworks powered speakers, and set them on the other side of the garage. Serious DIY surround sound. (Subwoofers are things of beauty.)
It's a good reminder that unless they're actually BROKEN, old technologies probably can still have a use. Their problem may not be that they're obsolete, but that you're not imaginative enough to figure out what to do with them.
Update: Check out these two chatbots having a conversation.
Second update: Incredibly, I blogged about this on the pre-anniversary of the day Skynet will become self-aware.
I've had my iPad for a couple of days now, and the most interesting thing is how much it's like using a laptop. Mainly it's a matter of size, the fact that I'm using a keyboard with it, and that I'm doing more laptop/work-y things. Safari also functions differently on the iPad than on the iPhone: you can actually use Facebook and Typepad through the browser, for example, rather than using the dedicated apps. (And unfortunately the Typepad app doesn't seem to have a landscape mode.)
Oddly, the Typepad page doesn't have a WYSIWYG mode on the iPad, but that's just a chance to brush up on my HTML tags.
Interestingly, I need to be a little more attentive to ergonomics and posture when I'm using the iPad + keyboard combination: it's easy to put the iPad slightly to one side and end up craning my neck to see it, or to set the keyboard in what turns out to be an inconvenient place.
But for some things, I'm already starting to favor the iPad over my beloved laptop. The fact that it starts up instantly makes it a little more appealing. We'll soon see if I can travel and write with it without losing any important functionality....
This afternoon I finally gave in and bought an iPad 2. Perhaps "gave in" is not quite the right term for my feelings about the purchase, but still.... It's my gift to myself for winning a contract for the contemplative computing book, so I partly justify it on the grounds that I'll be able to write with it.
I have high hopes that I can leave my laptop at home more often, and take this and a Bluetooth keyboard when I go travelling, to the library, etc. That won't always be the case; in fact, I suspect that the relationship between my MacBook and the iPad will be like the relationship between my car and my bike. I need the car for certain kinds of tasks, but I can substitute the bike for shorter trips, and when I think out my day carefully enough; a large technology is really a hedge against having to think too much.
Anyway, the Bluetooth keyboard works like a charm, and I have high hopes that I'll be able to make this relationship work.
I've spent the last few weeks working on a book proposal around contemplative computing. It's been a great, absorbing experience, so naturally an article on the growing respectability of self-publishing would catch my eye.
With Bowker reporting an "explosive growth" of 169% last month in "non-traditional" publishing, it's not just vanity projects that are taking the self-publishing route these days. Amazon announced last week that John Locke had sold 1,010,370 Kindle books using Kindle Direct Publishing, making him the first self-published author to join the "Kindle Million Club", alongside the likes of Stieg Larsson and James Patterson. Meanwhile, self-published authors Louise Voss and Mark Edwards currently top Amazon.co.uk's Kindle bestseller list, and say they're selling up to 1,900 copies a day of their jointly-written thriller, Catch Your Death. Faulkner award-winning author John Edgar Wideman last year chose to publish his new collection of short stories through Lulu.com; the site, offering authors an 80/20 revenue split, has published over 1.1 million authors to date, adding 20,000 titles to its catalogue a month. Writers around the world are getting their books to readers – and getting paid for it – without a publisher standing in between. Self-publishing, it seems, is becoming respectable.
Many of the authors this Guardian article talks about are established writers with fan bases: their name recognition means that they're going to be sought out by readers, and that they don't have to compete as hard as first-time authors.
So what's changed recently? According to one author who's selling a lot online,
"Two major developments have had a hugely beneficial impact on self-publishing. Firstly, changes in technology, in particular the adoption of ebooks by the mainstream thanks to Amazon's Kindle, the iPad, etc... If you're a self-publishing author today, you have a vast audience waiting, and a decent number of professional channels through which you can easily make your work available. I personally know authors who are doing this to great effect – some are making over $10,000 every month! Secondly, the advent of social networking has had an incredible effect."
Word of mouth matters a lot for both printed and electronic books. And as another author makes clear, for professional writers, this isn't just about disintermediation, or being free of the shackles of the editorial process: to be a success in the self-publishing market, you need to
"Write the best book you can, hire a real editor to make it better. Have it professionally copy-edited to remove typos. Get a real cover artist – if you're not a professional artist, don't do your own cover. Get that book into ebook form. Start promoting, and start on your next book. Repeat, repeat, repeat."
So essentially self-publishing is "create your own virtual publishing house."
For years I've used Ecto, and loved it. However, for the last few months I've had problems with it: it's crashing or hanging up constantly, and that's getting in my way.
So what should I switch to? I tried Mars Edit a long time ago, and have the vague memory that it was all right. I've installed the ScribeFire extension, and will play with that a bit (though I'm not seeing an ability to set categories, which may be a deal-breaker). But I liked having a stand-alone blog editor for offline writing, and would like to find another one.
Yesterday I dropped my iPhone 3G, and of course I didn't have the case on, so I managed to damage the cellular antenna (the wifi and cellphone parts of the phone have different antennas). After a couple of hours, it was clear that the phone reception was pretty messed up, so this morning I went to the Genius Bar and had them take a look at it.
They were able to get some of the connectivity back, but I decided it was time to get another phone. So I got an iPhone 4, a case that I'll ACTUALLY KEEP ON THE PHONE AT ALL TIMES, and a screen protector.
I also decided to get a bluetooth keyboard, which works with this generation of iPhone. I'm still hopeful that I can dispense with the laptop when I'm not doing really high-intensity editorial work or writing, and the keyboard is an essential component in making any such plan come true-- whether it's with the iPhone, an iPad, or something else. There's just no comparison between how quickly I can type, and my pecking on the little thumb-sized virtual keyboards.
I doubt I'll be able to dispense with the Macbook often, but I'm going to think seriously about what things I actually use on the Mac when I'm working, and how I could replicate them on some combination of mobile device, keyboard, and cloud.
I've done some writing in Google Docs, and I think that for at least parts of my book project, and many articles, I could use it instead of Word; I also need to look at having a setup where I have master copies of Word or other documents living in the cloud, and accessed by various devices. (Actually, for things like my personal library of PDFs of scholarly articles, I wonder if iCloud would actually be useful.)
I've also got to find a case for the keyboard, and a stand for the iPhone-- and preferably have both functionalities in the same object, which I suspect I'll have to commission from someone.
In a field still deeply shaped by arcane traditions and turf wars, when it comes to assessing what actually works — and which tidbits of information make it into the president’s daily brief — politics and power struggles among the 17 different American intelligence agencies are just as likely as security concerns to rule the day.
What if the intelligence community started to apply the emerging tools of social science to its work? What if it began testing and refining its predictions to determine which of its techniques yield useful information, and which should be discarded?... “We still don’t really know what works and what doesn’t work,” said Baruch Fischhoff, a behavioral scientist at Carnegie Mellon University. “We say, put it to the test. The stakes are so high, how can you afford not to structure yourself for learning?”...
Fischhoff and a who’s who of social scientists from psychology, business, and policy departments hope to foment a similar revolution in the intelligence world. Their most radical suggestion could have far-reaching effects and is already being slowly implemented: systematically judge the success rates of analyst predictions, and figure out which approaches actually work. Is intuition more useful than computer modeling? Is game theory better for some situations, and on-the-ground social analysis more accurate elsewhere?...
That remains only a proposal so far, but the Intelligence Advanced Research Projects Activity, or IARPA — a two-year-old agency that funds experimental ideas — is already trying a novel way to generate imaginative new steps to make predictions better. It is funding an unusual contest among academic researchers, a forecasting competition that will pit five teams using different methods of prediction against one another.
Of course, one can argue-- and indeed many of my fellow futurists will argue-- about what constitutes a "working" forecast, and some may go so far as to claim that even a completely wrong forecast can be useful under the right circumstances.
Submitted without comment:
Patrick Markey, Charlotte Markey, "Pornography-seeking behaviors following midterm political elections in the United States: A replication of the challenge hypothesis," Computers in Human Behavior, v. 27 n. 3 (May 2011).
The current study examined a prediction derived from the challenge hypothesis; individuals who vicariously win a competition of rank order will seek out pornography relatively more often than individuals who vicariously lose a competition. By examining Google keyword searches during the 2006 and 2010 midterm elections in the United States, the relative popularity of various pornography keyword searches was computed for each state and the District of Columbia the week after each midterm election. Consistent with previous research examining presidential elections and the challenge hypothesis, individuals located in traditionally Republican states tended to search for pornography keywords relatively more often after the 2010 midterm election (a Republican victory) than after the 2006 midterm election (a Democratic victory). Conversely, individuals located in traditionally Democratic states tended to search for pornography relatively less often following the 2010 midterm election than they did following the 2006 midterm election.
The power of the social sciences!
There's a brief, but kind of stunning, interview in The Economist with what I can only assume is a Turing machine pretending to be Google economist (and former UC Berkeley Information School dean) Hal Varian. It's hard to fit so much stupidity into three paragraphs, but this interview manages to do it.
There's a recent study out of the University of Michigan, where they had a team of students find answers to a set of questions using materials in the campus library. Then another team had to answer the same set of questions using Google. It took them 7 minutes to answer the questions on Google and 22 minutes to answer them in the library. Think about all the time saved! Thirty years ago, getting answers was really expensive, so we asked very few questions. Now getting answers is cheap, so we ask billions of questions a day, like “what is Jennifer Aniston having for breakfast?” We would have never asked that 30 years ago.
Now, I don't want to overstate the case-- that kind of speed really matters if you're in a trivia contest-- but the assumption that intelligence is equivalent to search, that effectively all the answers to all our questions have already been answered somewhere, is just wrong.
And the claim that "thirty years ago, getting answers was really expensive, so we asked very few questions" raises a few questions of its own-- most notably, what the fuck does this mean?? Thirty years ago was 1981-- that is, my senior year of high school, and eight years after Varian had received his Ph.D. Ronald Reagan was starting his first term as President. Granted, Reagan was not known as the most inquisitive fellow, but I don't recall feeling that the ability to answer questions was significantly limited by costs, economic or temporal or otherwise. You can claim that the quality of answers improves with the volume of information available (a claim that is highly problematic outside a few technical, numbers-friendly fields), but did we really ask so few questions thirty years ago? Huh?
Then there's this:
If you look at the history of the world, up until 1700 nothing much happened--GDP growth per capita was essentially flat. Then the wonderful Industrial Revolution happened and things took off. And now it’s easy to make predictions about the future. What rich people have now, middle class people will have in twenty years.
Notice the conflation of economic growth with "stuff happening," the mix of Galtian trickle-down economics and early Fukuyama-like teleology. Nothing happened before 1700, and prediction is easy. My life as an historian and futurist has clearly been wasted. But at least I can find out what Jennifer Aniston is having for breakfast.
After working on this for a couple weeks, I've reached that familiar point with the contemplative computing article (or mini-monolith, as it's well over 10,000 words) where it's not yet completely finished, but I need to put it down for a little bit, and go do other things. I have a couple editors who are ready to kill me if I don't deliver on other work, and it would be good to get a little critical distance from the piece.
This time I'm trying an experiment: I've put the article up on Google Docs, and made it public. You can read it here, though it may take forever to load, as Gdocs tends to choke on large files, so I've also posted a PDF.
The introduction is below. Naturally, comments are welcome.
The phrase "contemplative computing" sounds oxymoronic. Information technologies today do many things, but they do not make us more contemplative. Instead, they interrupt and distract us; they throw up swarms of real-time data that obscure our long-term perspective; they encourage us to spread our attention across a range of activities and devices—Web pages, documents and presentations, emails, phone calls, text messages, etc. etc. ad infinitum. Some look to technological solutions (e.g., better filtering tools or "distraction-free" software) or better personal management (exemplified by the GTD—"Getting Things Done" movement) to give them balance; a few take digital sabbaths, and simply leave their digital lives behind for a day a week. I believe, however, that we can create information technology that does not distract us from the world, but invites us to engage with it more thoroughly, thoughtfully, and profoundly. In this article I will describe what contemplative computing could be; why it is an appealing and achievable design goal and attitude to devices; and how we can get there. My argument will unfold as follows.
I first explain why contemplation is valuable, and how contemplative practices have been applied in fields as diverse as military training and psychotherapy. I then look more closely at contemplation itself: contrary to the popular perception of it as a solitary, passive state, I argue that contemplation is active, skilled, embodied, and social. From this, I develop a set of design principles for contemplative computing. These are intended for both designers and users, for neither has complete control over the way people use computers; indeed, contemplative computing requires being contemplative about computing-- learning to think about how and why we use technologies in particular ways, and how to improve our relationships with our devices. I explain how an approach to information technologies that emphasizes engagement, self-experimentation, and embodied cognition; that skillfully uses spatiality and sparse design; and that rewards challenges, acknowledges obliquity, and allows for mind wandering and reentry, would help us begin to deal with the problems created by today's information technologies and our interactions with them.
Thus this article is an effort to demonstrate how we can design with values in mind [Harper, Rodden, Rogers, and Sellen, 2006]. The project also seeks to answer the call proposed by Levy to develop new means for contemplation in creative and scholarly life [Levy 2007] in response to growing time and productivity pressures [Menzies and Newson 2007]. As computers make their way into more and more parts of our everyday lives, we need to understand how tools initially built for the scientific laboratory or office may be ill-suited to the home or family; how the objectives of efficiency and optimization may not work in environments characterized by irreducible uncertainty and ambiguity. Given the ubiquity of computers and their power to influence our lives, it makes sense to think about how they can be designed and used to better promote our abilities to see, act in, and improve the world, and to improve ourselves.
Contemplation offers a variety of benefits. A contemplative stance can help people be more creative; deal with complex problems that require months or years to solve; and is essential to long-term happiness. Contemplation promotes both self-sufficiency and close, questioning observation of the world, and both are particularly valuable in this moment in the history of technology. We need to develop personal tools to better control information technologies, and to see how technologies that often are described as irresistible and inevitable are really shaped by human decisions and choices (or the failure to make such decisions). Contemplative computing can help with both of these urgent tasks.
I'm giving a talk tomorrow at the Centre for Creativity in Professional Practice at City University London. Essentially it's a live performance of the paper spaces article, though I'll also try to tie in some of what I've learned from the contemplative computing project as well.
Here's the abstract:
In this talk Alex will explore the relationship between space and media. We normally treat spaces and media as different things, but our interaction with such communicative media as newspapers, paintings, books, and maps has an important embodied, physical dimension to it, which can be exploited to support collaborative work. This talk will describe how large-scale media, such as wall-sized maps and floor-to-ceiling whiteboards, support collaboration in four cases: analog circuit design, Buckminster Fuller's World Game, emergency tabletop exercises, and expert workshops conducted by futurists. Alex will talk about how each invites participation, annotation, and reinterpretation by users as opposed to passive consumption, and how they support the generation of common knowledge. Finally, he will show how in the near future we will be able to design digital tools that better support collaboration.
I've also put together a Prezi because it's a talk about spaces and media and collaboration... and because that's what I do.
This is going on in Johannesburg:
Hundreds of [traffic] lights have been damaged by thieves targeting the machines' sim cards, which are then used to make mobile phone calls worth millions of South African rand.
More than two-thirds of 600 hi-tech lights have been affected over the past two months, according to the Johannesburg Roads Agency, causing traffic jams, accidents and frustration for motorists.
The traffic lights use sim cards and modems to send and receive information over GPRS, a system intended to save time and manpower by alerting the road agency's head office when any lights malfunction. According to Thulani Makhubela, a spokesman for the agency, the robberies have been "systematic and co-ordinated", possibly by a syndicate. An internal investigation has now been launched.
"They know which signals to target," Makhubela added. "They clearly have information."
Wow. Real world, meet ubicomp!
Krugman's latest post about Google:
scammers and spammers are doing their best to game the search engine, and in the process making it less useful to the rest of us. And people are turning to other search engines that are less affected, precisely because they’re less pervasive and the scammers and spammers haven’t adapted to them.
This makes me think of sex.... I’m not quite sure what search-engine sex would involve. But Google apparently needs some.
To be fair, it's not the worst analogy: Friedman would have cast it as an insight that came while sharing a Cinnabon with a Mumbai cab driver in Lufthansa business class.
The Economist (of course) has an excellent article on the creation of (and the creativity behind) the Hungarian microcar, which flourished in the early Cold War.
After the Soviet takeover in the late 1940s, the Russians set up Comecon, the Council for Mutual Economic Assistance, to co-ordinate economic relations across the Soviet bloc. In line with the principles of socialist planning, each country was ordered to make certain products but not others. Czechoslovakia, Poland and Romania were allowed to make cars, but Hungary was forbidden to, probably because it had no existing car industry. But a national vehicle, like a national airline, was a symbol of patriotic pride, especially in eastern Europe. Hungary’s Communists were soon determined that Hungary should have its own cars. Accordingly, they quickly found a very Hungarian solution. The way around the Comecon restrictions was via the Magyar speciality known as the kiskapu, or “little gate”. When one door closes, the kiskapu usually opens, often lubricated by an envelope of bank-notes. So under Communism, if, for example, X was forbidden, something rather like X—but not actually identical to it and arguably something else—was surely permitted, because only X was forbidden.
The answer to the prohibition on the manufacture of cars, Hungary’s Communist leaders decided, was to make an enclosed geared vehicle with a steering-wheel and petrol engine that transported people in safety, but did not qualify as a car because it was too small: the microcar. As the Magyar microcar was not actually a car, it could drive through both the kiskapu and the thickets of Communist bureaucracy, or so the argument went, and so the microcar did, with some success.
Of course, Hungarian engineers are known for being wildly ingenious under the right conditions, and sometimes the wrong ones-- "A Hungarian, the old joke goes, is someone who enters a revolving door behind you but comes out in front"-- and the piece has some useful things to say about the role that constraints can play in stimulating creativity. But this isn't just an historical footnote:
Nowadays the Magyar microcars are a footnote in automotive history. But they were much more than engineering whimsy. The principles underlying their economy of design, ease of production and simplicity remain relevant today. Currently, about half the world’s population lives in cities, a figure projected to rise to 60% by 2030. Even if urban developers provide decent public-transport networks, many people will still want their own cars. This is especially true of the rapidly growing new middle classes of India and China. Microcars, which have fewer emissions, cost less to run and take up less parking space, are the obvious answer. Hence the Tata Nano, launched with much fanfare in India a year or so ago.
Andy Clark argues:
[W]e seem to be entering an age in which cognitive prosthetics (which have always been around in one form or another) are displaying a kind of Cambrian explosion of new and potent forms. As the forms proliferate, and some become more entrenched, we might do well to pause and reflect on their nature and status. At the very least, minds like ours are the products not of neural processing alone but of the complex and iterated interplay between brains, bodies, and the many designer environments in which we increasingly live and work.
Ann Blair suggests, not so fast:
[We assume] that modern technology is creating a problem that our culture and even our brains are ill equipped to handle. We stand on the brink of a future that no one can ever have experienced before.
But is it really so novel? Human history is a long process of accumulating information, especially once writing made it possible to record texts and preserve them beyond the capacity of our memories. And if we look closely, we can find a striking parallel to our own time: what Western Europe experienced in the wake of Gutenberg’s invention of printing in the 15th century, when thousands upon thousands of books began flooding the market, generating millions of copies for sale. The literate classes experienced exactly the kind of overload we feel today — suddenly, there were far more books than any single person could master, and no end in sight.
In the New York Times, Edinburgh philosopher Andy Clark has a nice essay on embodied cognition. If you're familiar with his book Natural Born Cyborgs, you'll already know the outlines of his argument; but it includes this update:
Most of us gesture (some of us more wildly than others) when we talk... [and it seems that] bodily motions may themselves be playing some kind of active role in our thought process. In experiments where the active use of gesture is inhibited, subjects show decreased performance on various kinds of mental tasks. Now whatever is going on in these cases, the brain is obviously deeply implicated! No one thinks that the physical handwavings are all by themselves the repositories of thoughts or reasoning. But it may be that they are contributing to the thinking and reasoning, perhaps by lessening or otherwise altering the tasks that the brain must perform, and thus helping us to move our own thinking along.
It is noteworthy, for example, that the use of spontaneous gesture increases when we are actively thinking a problem through, rather than simply rehearsing a known solution. There may be more to so-called “handwaving” than meets the eye.
More on this at Contemplative Computing.
Over the last couple years I've lost about fifty pounds. As nerdy as this will sound, while I was a fat kid and spent my adult life overweight, it was only in the last two years, when 1) I started to worry that it was now or never-- that my condition in my 40s would determine how long I would live and what kind of life I would have, and 2) I could make it into as much a cerebral challenge as a physical one, that I managed to take off the weight.
By cerebral I mean this: in order to get past the various things that had kept me from losing weight in the past, it was necessary for me to read a lot about nutrition and dieting, dive into the literature on obesity and satiety, and think about how what I'd learned from behavioral economics could be applied to weight loss. At a certain point, I realized that the challenge of losing weight was a classic futures problem: complex, uncertain, requiring all kinds of near-term tradeoffs for long-term benefits, and hard to sustain. Could my training as a futurist, I wondered, help me lose weight? Conversely, could I learn something about futures problems through the experience of losing weight?
I think the answer to both is yes, and I've written an article-- available as a PDF-- that explains those answers in detail.
The piece is also kind of personal because it's a bit of an intellectual pivot. On one hand, it's the first article that draws on my reading on mindfulness and contemplative practices, and tries to apply that work to futures. There are lots of futurists who have been interested in meditation and Eastern religions-- it's at least as common among Bay Area futurists as 5.11 Tactical shirts-- but there hasn't been much explicit use of the idea of mindfulness as a tool for thinking about the future. Partly, I think, that reflects a certain suspicion that writers on contemplative practice display toward thinking about the future, a suspicion that I try to argue is misplaced. But I've come to believe that mindfulness and attention to the now is an essential starting-point for seeing how the future could unfold.
On the other hand, mindfulness and contemplation are a big part of what I'm going to be working on next year at Microsoft Research. I'm going there to start a project on contemplative computing, a form of computing that doesn't fracture your attention and capacity to think long thoughts, but protects and supports them. It's become clear that, in our headlong rush to become more connected and accessible, we're accidentally eroding our capacity to think about complicated problems for long periods. For stockbrokers, pundits, ER doctors, elementary school teachers, and other people whose lives are all about speed and instant reaction, this may not be an issue at all; but for people who are creative for a living, the destruction of our ability to concentrate is a great loss.
Some people have tried to deal with the problem by going off Facebook, taking "digital sabbaths," and otherwise taking a break from digital devices and the digital world. While I certainly understand the impulse, I don't like it, for a few reasons. First, in the long run it's impractical: a movement designed to give us a break from our mobile devices and laptops is going to have trouble dealing with a hyperconnected world of pervasive computing. Second, I actually like being connected, and don't want to live without my digital augmentarium. Third, while I'm as much in danger of being distracted by the Web and Facebook as anyone, there are also times when I can use devices to be creative and reach that mental state of "flow." Finally, the digital sabbath movement implicitly accepts the idea that information technologies have to be this way, and that humans and tools are opposites. In contrast, I buy Andy Clark's idea that we're natural born cyborgs, and my instinct is that the future will offer great opportunities to design information technologies that are better able to support concentration and contemplation-- in other words, to learn how to create tools that help us be better, more focused cyborgs. Figuring out what those tools could look like, and how to design them, is the big task I'll be taking up in Cambridge.
Today, as part of my wife's birthday and in preparation for our sabbatical in Cambridge, we bought a new camera, a Nikon D5000. I've had compact digital cameras for years-- I keep one in my pocket pretty much all the time-- so this is a big change in terms of technical quality and sophistication.
Naturally I played around with it this afternoon, and took a few pictures in the backyard.
backyard, via flickr
Since we also had a birthday dinner tonight, the camera went there, too.
birthday cake, via flickr
While the D5000 is supposed to be a step down from the more professional D90, it's been years since I had a "serious" camera-- e.g., one heavy enough to hurt if you hit someone over the head with it, and sturdy enough to take a picture of them on the ground afterwards.
my brother in law and me, via flickr
Between the ability to shoot some very fast pictures (the autofocus and latency are awesome), and the ability to turn off all the controls, put an old Nikon lens on the camera, and really see what happens when you play around with exposure, f-stop, and aperture, I think it's going to be a long time before I feel like I exhaust all the possibilities of the camera.
The one thing I know I have to resist is falling into Camera Guy World-- you know, the land where people obsess over different models of lenses, filters, etc. I just don't want to get sucked into that world, because I know I'd waste a lot of time and energy there. I just want to take good pictures, and not spend 50% of my time chasing after that last 2% of value. Better to spend time out in the field, or taking pictures of the kids.
Though they may not feel that way!
my daughter voting for no more cameras, via flickr
Interesting concept: make a video pieced together from YouTube videos in which the camera is dropped.
As Petapixel comments, "What’s amazing is how seamlessly the clips are stitched together, making it difficult to discern where one clip ends and the next begins." The end result is a video that feels like it's of one camera being dropped in one part of the world, and picked up in another. Very neat.
For people familiar with trends in progressive education, some of what Sir Ken Robinson talks about will sound familiar, but he's wonderfully well-organized and a good speaker. He's also quite critical of overmedication for ADHD, which he regards largely (but not entirely) as an artifact of schools' need to medicate kids to regulate their behavior (and, one might add, of the existence of drugs-- other than Nyquil and gin-- to do so). I especially loved this line:
Attention deficit increases as you move east across the country. People start losing attention in Oklahoma, they can hardly think straight in Arkansas, and by the time they reach Washington, they've lost it completely.
As an ex-Southerner, I find this delightfully anti-Southern. (I expect Sir Ken isn't from Manchester.)
The other thing I really like about this talk is that it presents a wonderfully updated version of graphic recording, something I've spent years working with. (If you're not familiar with the practice, David Sibbet's new book, Visual Meetings, is a terrific overview.) The animators, Cognitive Media, are essentially scribing the talk, but they do so with greater polish than a live recorder can manage, and they make judicious use of animation as well. The result, I think, is pretty cool.
They've got many more animated maps on their blog, should you care to take the whole morning off.
I've recently been interested in the subject of unanticipated or unintended consequences. Most of my interest has been fueled by a sense that arguments of the "nobody could have predicted this massive, now-obvious consequence of actions I took" type are becoming more popular: think how often they've been deployed in the aftermath of the Iraq war, the financial meltdown, Deepwater Horizon, etc.
Of course, unanticipated consequences can be good things too, as I noticed this morning. We recently bought a new vacuum cleaner, one of the bagless cyclonic kinds with the transparent canister. I mainly liked the fact that there were no bags, and that it was less than 20 years old. But my kids turn out to really like it too: so much so, in fact, that they're actually cleaning their rooms when friends come over.
Why? Because as they vacuum their rooms, the canister turns into a "tornado of grossness," as one of my son's friends put it.
Making the canister transparent turns vacuuming into entertainment-- and because it combines technology, loud noises, visual effects, and gross stuff, it's irresistible to young boys.
I have no idea if the designers have kids, but: well done.
Dan Ariely has a good post about why our current "productivity tools" generate time-wasting or addictive behavior: he looks to B. F. Skinner's work on "schedules of reinforcement" that found that random rewards inspired more work than predictable rewards. (It got more work out of rats, anyway. Come to think of it, it also works for graduate students.)
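Out of curiosity, here's a little back-of-the-envelope simulation-- my own toy sketch, not anything from Ariely or Skinner-- of why that difference matters. The "agent" below keeps checking email and only gives up once the silence lasts much longer than any dry spell it saw while interesting messages were still arriving; under a variable schedule, long droughts are normal, so the checking persists far longer after the rewards stop:

```python
import random

def checks_after_rewards_stop(schedule, ratio=10, reward_phase=200, trials=2000):
    """Toy extinction model. During the reward phase an "interesting email"
    arrives either every ratio-th check (fixed schedule) or with probability
    1/ratio on each check (variable schedule). Then rewards stop. The agent
    keeps checking until its current dry spell is twice as long as the longest
    gap it saw while rewards were still coming-- i.e., until the silence looks
    genuinely unusual. Returns the average number of checks made after the
    rewards stopped."""
    persistence = 0
    for _ in range(trials):
        longest_gap, gap = 1, 0
        # Reward phase: note the longest gap between interesting emails.
        for check in range(1, reward_phase + 1):
            gap += 1
            rewarded = ((check % ratio == 0) if schedule == "fixed"
                        else (random.random() < 1 / ratio))
            if rewarded:
                longest_gap = max(longest_gap, gap)
                gap = 0
        # Extinction phase: keep checking until the dry spell looks abnormal.
        extra = 0
        while gap < 2 * longest_gap:
            gap += 1
            extra += 1
        persistence += extra
    return persistence / trials

random.seed(0)
print("fixed schedule:   ", checks_after_rewards_stop("fixed"))     # ~20 extra checks
print("variable schedule:", checks_after_rewards_stop("variable"))  # several times more
```

Crude as it is, it captures why unpredictable payoffs are so much better than predictable ones at keeping us hitting refresh.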
Ariely comments that Skinner's work
gives me a better understanding of my own e-mail addiction, and more important, it might suggest a few means of escape from this Skinner box and its variable schedule of reinforcement. One helpful approach I’ve discovered is to turn off the automatic e-mail-checking feature. This action doesn’t eliminate my checking email too often, but it reduces the frequency with which my computer notifies me that I have new e-mail waiting (some of it, I would think to myself, must be interesting, urgent, or relevant). Another way I am trying to wean myself from continuously checking email (a tendency that only got worse for me when I got an iPhone), is by only checking email during specific blocks of time. If we understand the hold that a random schedule of reinforcement has on our email behavior, maybe, just maybe we can outsmart our own nature.
There's also this observation about Skinner's own work habits.
Skinner had a trick to counterbalance daily distractions: As soon as he arrived at his office, he would write 800 words on whatever research project he happened to be working on—and he did this before doing anything else. Granted, 800 words is not a lot in the scheme of things but if you think about writing 800 words each day you would realize how this small output can add up over time.
This is something I try to do, but I need to be more disciplined about it. There aren't THAT many e-mails waiting for me in the morning that require my immediate attention, and I suspect that I'm actually more likely to lose track of tasks or not reply to a message if I read it, think to myself "I'll deal with this later," then set it aside. For me, the in-box is not nearly as effective a place to stack tasks as, say, a physical pile (or even better, a written list in my little Moleskine notebook).
A review of the new reality TV series, Plain Jane. Whatever you think of the concept, this description is brilliantly written:
Combining elements of makeover fantasies, petal-strewn dating programs, Japanese game shows, magazine columns of the snag-a-man Cosmo sort, and primitive folklore, Plain Jane (the CW, Wednesdays at 9 p.m. ET) brushes the pleasure receptors with an odd texture of fluff....
[Host Louise] Roe advertises herself as a fashion journalist and stylist "hailing from London but now based in L.A.," and she plays her role with perfection. She is faintly alien, plausibly posh, strategically tacky, and Britishly skinny, her eyelashes thicker than her forearms.... Plain Jane, Roe says, will be "transformed into a new woman with the style and confidence to surprise the man of her dreams on a romantic date." The real star of the show is that concept—a mission statement so clear and concise that you can practically feel the mineral water fizzing in the pitch meeting.
I write about people, technology, and the worlds they make.
I'm a senior consultant at Strategic Business Insights, a Menlo Park, CA consulting and research firm. I'm also a visitor at the Peace Innovation Lab at Stanford University. (I also have profiles on LinkedIn, Google Scholar and Academia.edu.)
I began thinking seriously about contemplative computing in the winter of 2011 while a Visiting Researcher in the Socio-Digital Systems Group at Microsoft Research, Cambridge. I wanted to figure out how to design information technologies and user experiences that promote concentration and deep focused thinking, rather than distract you, fracture your attention, and make you feel dumb. You can read about it on my Contemplative Computing Blog.
My book on contemplative computing, The Distraction Addiction, was published by Little, Brown and Company in 2013.
The Distraction Addiction
My latest book, and the first book from the contemplative computing project. The Distraction Addiction is published by Little, Brown and Co. It's been widely reviewed and garnered lots of good press. You can find your own copy at your local bookstore, or order it through Barnes & Noble, Amazon (check B&N first, as it's usually cheaper there), or IndieBound.
The Spanish edition
The Dutch edition
Empire and the Sun
My first book, Empire and the Sun: Victorian Solar Eclipse Expeditions, was published by Stanford University Press in 2002 (order via Amazon).