posted March 15 2011
the 3d wasteland
In 1961, newly elected FCC chairman Newton N. Minow (actual name) famously declared that television was a “vast wasteland”. Minow believed that television had the potential to be something truly great, but then, as now, it was too full of superficial, meaningless garbage and grating, obnoxious advertisements.
Fifty years after Minow’s speech, I couldn’t help but think of him as I toured the exhibition floor of the Penny Arcade Expo and evaluated the plethora of 3D technology on display. I might be misremembering last year’s PAX, but it seemed like the only 3D technology on the floor then was a small section of NVIDIA’s booth, a timid, tentative offering that nevertheless managed to draw a perpetual crowd. This year, NVIDIA was pushing 3D as hard as it could. Its booth—more accurately described as a small mall of game technology, such was its size and layout—was dominated by 3D. The booth was fronted by a 103-inch 3D-capable monitor. Passersby could pick up any one of a dozen pairs of 3D glasses to experience the effect for themselves.
That experience, I’m sorry to say, is disappointing. It’s not that it doesn’t work; I’m practically stereoblind, and even I was able to perceive some depth in the displays. But just because a thing works, that doesn’t mean that it’s worth your time. 3D projection is taxing on two fronts: it requires a variety of technological trade-offs to work properly, and makes unusual, often uncomfortable demands of an observer’s visual system.
The biggest trade-off is brightness. One way or another, the 3D setup has to send two separate images, one to each of your eyes. This means that the image is going to look, at best, half as bright as it would under 2D viewing conditions. This may be acceptable in a movie theater, where you’re sitting in front of a massive screen that pours tons of light into an otherwise completely dark environment, but it’s terrible in the context of a living room or a show floor. With my 3D glasses on, screen images appeared dismally murky. Details were lost and some scenes became entirely unintelligible. The goggles felt awkward to wear, especially over my eyeglasses. In short, it’s an uncomfortable viewing experience of questionable value, to say nothing of the cost of the technology. No person in his right mind would spend the hundreds, possibly even thousands, of dollars necessary to play a 3D game in his own home.
One intriguing alternative is Nintendo’s 3DS, which uses a variation of the “hologram” cards that were popular in the 80s (you know, hold it at one angle, you see one picture, hold it at a different angle, you see another). When you hold the 3DS at just the right angle, the screen beams a separate image into each eye, creating a depth percept without the need for expensive monitors or dim glasses. The image is bright, and lining yourself up is simple. That the effect works at all is extremely impressive, but I still question the practicality. If you’re like me, you move around when you play games, and if you move the 3DS out of its narrow “sweet spot,” you’ll get a double image instead of 3D. The approach is promising, but it’s worth keeping in mind that it will only work with a handheld system, and cannot be adapted for a television.
Now let’s talk vision science. For the record, what I have to say here applies to 3D movies as well. 3D projection is not natural, or as we might say in the lab, it’s not ecologically valid. Your eyes regularly perform two related but independent actions: convergence and focusing. Hold up a finger at arm’s length, then move it toward your face. As you keep your eyes on your finger, you’ll notice two things. One, your surroundings will fall out of focus as your finger gets closer to your face. Two, your eyes might start to feel weird. This is because they’re converging at a fairly extreme angle. Essentially, you’re crossing your eyes. Out in the real world, focus and convergence always change in tandem. But in a 3D movie or game, your point of focus is constant (the screen), while convergence changes depending on the contents of the scene. I don’t suppose you’ve seen the trailer for Pirates of the Caribbean 4? The trailer is in 3D, and like most trailers, it cuts rapidly from shot to shot. The depth plane changes every few seconds, and it’s completely exhausting. The human visual system simply isn’t built to process the world this way.
3D suffers from artistic problems, too. Designers and directors have had over a hundred years to learn how light, color, texture, and spatial arrangement affect a scene. We have no such body of knowledge for the manipulation of depth (da Vinci and Picasso notwithstanding). Nobody designing 3D games, or directing 3D movies, really knows how to use 3D effectively. I watched someone play World of Warcraft in 3D for about fifteen minutes. Every time he killed a monster, a large notification would float over the screen at the front of the depth plane, obstructing the ongoing action. The effect was distracting and ugly. And we’re talking about Blizzard! A company that spends years meticulously honing its games for the optimal playing experience, one of the few companies that actually conducts rigorous research into these sorts of issues. If Blizzard can’t do it right, who can?
Since I’m not stereoscopically normal (neither is Penny Arcade’s Mike Krahulik, apparently), I went out of my way to ask other people at the show how they felt about 3D. Most people at the NVIDIA booth seemed unimpressed. They complained about the subtlety of the 3D effect, the dimness of the images, eye strain, the inability to spectate if you weren’t wearing 3D glasses, and the general awkwardness of these systems. People were more positive when asked about the Nintendo 3DS, which makes a certain amount of sense, as Nintendo’s system is much less cumbersome and produces brighter images.
So that’s the shape of 3D: a wasteland of finicky technology, dark, muddled images, and uncomfortable customers who can barely manage to feign enthusiasm for the duration of a show, let alone hours in the living room. That kid playing World of Warcraft? He wasn’t playing it because it was 3D. He was playing it because he loves World of Warcraft. If 3D is to have any kind of future, it needs to create compelling experiences that you can’t get in 2D, and I’m not sure that such a thing is possible.
During my hours in the exhibition hall, I also caught Ubisoft’s Child of Eden. More accurately, it caught me, as well as everyone else who passed within sight of it. Child of Eden looks like something out of the year 2045, a rhythm-based first-person shooter that relies on the Xbox Kinect for interaction. Players fly through the game’s trippy environments simply by waving their hands. I didn’t get a chance to play it for myself, but even as a spectator, the experience was incredibly immersive. The flowing visuals and organic pace of action are mesmerizing, and seeing players control the game without a physical controller was downright arresting. An awe-inspiring piece all around, and not a 3D goggle in sight.
posted January 4 2011
child's play and a bit about data visualization
The books have closed on Child’s Play 2010, and this year’s total is a truly awe-inspiring 2.3 million dollars. With 2010 included, Child’s Play has cumulatively raised just shy of nine million dollars. Nine million dollars over the last eight years, every single cent of it helping to improve the life of a sick kid. If that’s not something to be proud of, I don’t know what is.
But I didn’t sit down in front of the computer today to talk to you about that. I do that enough. Instead, we’re going to talk about this sweet chart I made. Incidentally, at the time of this writing, Googling “sweet chart I made” returns this as the first result. Who am I to disagree?
Last year’s chart was put together with Numbers. I’m generally very happy with Numbers—certainly much happier than I ever was with the sluggish, bloated, obtuse mess that is Microsoft Excel—but the chart I produced last year has some problems. The spacing on the x-axis looks weird, and that’s a poor way to format a date anyway. Since the key shows the annual totals, it kind of defeats the point of the chart. And why did I go with a filled line chart? Because every year has many missing data points, and a filled chart was the only way to get Numbers to draw each year as a connected line.
This year’s chart was put together with R and ggplot2. Here’s what I like about it, and what I don’t.
What I Like
- R and ggplot2. I can’t recommend R highly enough. It’s fast, flexible, powerful, and oh yeah, free. I now use it for all my data analysis needs. I intentionally gave myself a hellish, badly formatted CSV file to work with here, just to see if R could beat it into shape. No sweat. As for ggplot2, it’s overkill for some situations, but a great solution for most. Maps, anyone?
- The date axis. It’s nicely labeled, with every major gridline representing exactly one week. Look closely, and you’ll see that the minor gridlines split the weeks into days.
- I ditched the legend, and instead placed year labels at the ends of their respective lines. Extra special audience challenge: do this in Excel without killing yourself after fifteen minutes.
- Cumulative total is computed on the fly and automatically added to the plot’s title.
- In fact, the whole plot is defined programmatically, even the year labels, so adding in 2011’s data should be a cinch. (A rough sketch of the approach follows this list.)
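For the curious, here’s a minimal sketch of how a plot like this might come together. The numbers and column names are made up for illustration (this isn’t my actual script or the real Child’s Play data), but the bones are the same: project every year onto a common date axis, label the lines directly, and compute the total in code.

```r
library(ggplot2)
library(scales)

# Invented observations: the date, the campaign year it belongs to, and
# the running total at that point. Column names are mine, not the CSV's.
donations <- data.frame(
  date   = as.Date(c("2009-11-05", "2009-11-19", "2009-12-24",
                     "2010-11-04", "2010-11-18", "2010-12-23")),
  year   = factor(c(2009, 2009, 2009, 2010, 2010, 2010)),
  amount = c(50000, 400000, 1780000, 90000, 600000, 2300000)
)

# Overlay the years by projecting every date into one dummy year, so that
# November 5 of any campaign lands on the same spot on the x-axis.
donations$common <- as.Date(format(donations$date, "2010-%m-%d"))

# Year labels sit at the end of their lines, replacing the legend.
ends <- do.call(rbind, lapply(split(donations, donations$year),
                              function(d) d[which.max(d$common), ]))

total <- sum(ends$amount)  # cumulative total, computed on the fly

ggplot(donations, aes(common, amount, colour = year)) +
  geom_line() +
  geom_text(data = ends, aes(label = year), hjust = -0.2,
            show.legend = FALSE) +
  scale_x_date(date_breaks = "1 week", date_minor_breaks = "1 day",
               date_labels = "%b %d") +
  scale_y_continuous(labels = dollar) +
  guides(colour = "none") +
  labs(title = sprintf("Child's Play donations (cumulative total: %s)",
                       dollar(total)),
       x = NULL, y = "Amount raised")
```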
What I Don’t Like
- There are too many colors on this thing. ggplot2 computes those colors by finding equally spaced hues around the color wheel, so as more colors are added, the difference between them gets smaller. I’m using these colors to keep each line visually separate, but why? Do you need to see every data point of every year? One alternative would be to color in the current year and the previous year, and turn all others a shade of dark gray (sketched after this list).
- The larger problem, though, is that this plot doesn’t serve much of a point. I don’t have enough data to get an accurate sense of how quickly Child’s Play accumulates funds. Look at where the lines start. The early years start at $0, but the spread runs up to nearly $500,000. Is that variability a reflection of larger corporate donations kicking off the fundraiser, or is it that Child’s Play runs year-round, and the charity is taking more money in during the non-holiday months of the year? In short, the only reliable data points in the plot are the totals, in which case a simple table could tell you just as much.
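If I do go the gray route, it’s a small job with scale_colour_manual, which maps each year to an explicit color. A toy sketch with invented data:

```r
library(ggplot2)

years <- factor(2007:2010)

# Current and previous year keep real hues; everything older goes gray.
pal <- setNames(rep("gray30", length(levels(years))), levels(years))
pal[c("2009", "2010")] <- c("steelblue", "firebrick")

# Invented toy data, two points per year, just enough to render the idea.
toy <- data.frame(
  day    = rep(c(0, 50), times = 4),
  amount = c(0, 9e5, 0, 1.1e6, 0, 1.8e6, 0, 2.3e6),
  year   = rep(years, each = 2)
)

ggplot(toy, aes(day, amount, colour = year)) +
  geom_line() +
  scale_colour_manual(values = pal)
```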
Still, it’s a fun exercise. I certainly learned a lot about R while working on this, and that’ll pay off in the future. Maybe I’ll tackle Boston’s weather data next.
posted December 20 2010
classification images and perceptual learning
As of a few weeks ago, I am a published author! The paper is titled “Perceptual learning of oriented gratings as revealed by classification images”, which is…a mouthful. Don’t get me wrong, I’m very proud of this. Designing the experiment, collecting the data, and writing the paper all added up to a tremendous learning experience, and the final product is a solid piece of work. Still, it’s not exactly beach reading. So, herein and forthwith, a plain-English explanation of this thing I just published. Yes, you too can understand science!
Before you can really understand what this paper is about, you need to understand perceptual learning. I’ve talked about this before, but here’s another quick primer. Learning of any sort requires practice, whether the goal is to recite all 50 state capitals from memory, ride a unicycle, or perhaps most interestingly, do both at once. In these examples, learning involves the parts of your brain that handle memory, motor skills, or both. Likewise, practice can also change the parts of your brain responsible for vision. When you perform a difficult visual task again and again (like, say, a dentist looking for cavities or a hunter looking for deer), the neurons responsible for processing this visual information become more refined, better at representing the important aspects of the task. It’s not that you simply understand what you’re looking at in a different way (which would be a change in strategy), it’s that you are literally getting better at seeing. Perceptual learning can enable a person to detect tiny changes in an object’s position, make a person more sensitive to detecting motion, enhance contrast sensitivity, and many other things.
So how do we measure perceptual learning? Typically we’ll sit you down in a dark, quiet room and give you a difficult visual task to do. At first you probably won’t be very good, and your answers will be random. But after making thousands of these visual decisions over several days, you will get better at it, and we’ll be able to measure that improvement. Usually we’ll boil all these trials down to a simple summary, like percent correct. The point is that we’ll sit you down for an hour at a time and have you complete 1,000 trials of an experiment, and out of all that data we’ll extract perhaps a handful of useful numbers.
This very common approach comes with some limitations. First, it seems a bit wasteful to sit someone down for a full hour and have only a few useful numbers to show for it. More importantly, though, remember that I’m using your behavior (how well you do on the task) to draw conclusions about what’s going on in your brain. This is problematic, because while I can see that you’re learning the task, I can’t say what, exactly, is being learned. This is one of the major debates in the field. Are you getting better at the task because your brain is becoming more sensitive to the important parts of the task, or because your brain is getting better at filtering out the parts that don’t matter?
Suppose that instead of walking away with a handful of numbers, I was able to produce an image from your data, a picture of the mental template you were using as you performed the task. To get an idea of what I mean, look at these images from a 2004 paper by Kontsevich and Tyler, charmingly titled, “What makes Mona Lisa smile?” The researchers were interested in what aspects of the Mona Lisa’s face influence her famously ambiguous smile. To answer this question, Kontsevich and Tyler took a picture of the Mona Lisa and then added what we call “visual noise,” basically frames of TV static. As you can see from the black and white images, the random noise alters the Mona Lisa’s expression in various small ways. Participants in the study were simply asked to classify whether all these different Mona Lisas looked happy or sad. Once that was done, all of the noise that was classified as making the Mona Lisa “happy” could be averaged together and then laid back on top of Mona Lisa, producing a “happy” Mona Lisa (right color image). Ditto all the “sad” noise, producing a sad Mona (left color image). The basic message? The mystery of Mona Lisa is all in the mouth.
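The arithmetic behind that averaging trick is refreshingly simple. Here’s a toy sketch in R of the general recipe, not Kontsevich and Tyler’s actual code; the image size is invented and the “responses” are random placeholders where a real observer’s choices would go:

```r
set.seed(1)

n_trials <- 5000
npix     <- 64 * 64  # toy image size, not the study's actual resolution

# Each trial presents a fresh field of Gaussian noise over the base image.
noise <- matrix(rnorm(n_trials * npix), nrow = n_trials)

# Pretend responses: 1 = "happy", 0 = "sad". In a real experiment these
# come from the observer; here they are random placeholders.
response <- rbinom(n_trials, 1, 0.5)

# The classification image is the average noise field on "happy" trials
# minus the average on "sad" trials: noise features that pushed the
# observer toward "happy" survive the averaging, everything else cancels.
ci       <- colMeans(noise[response == 1, ]) -
            colMeans(noise[response == 0, ])
ci_image <- matrix(ci, nrow = 64)

image(ci_image, col = gray.colors(256))  # render as a grayscale picture
```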
Kontsevich and Tyler managed to produce these classification images very efficiently by layering the noise on top of the picture. But if you wanted to produce a classification image of the Mona Lisa purely from noise, you would need tens of thousands of trials before you started to get something that looked like a woman’s face. This presents a problem for those of us who study perceptual learning, because we usually like to examine how perceptual learning takes shape over the course of a few days. Therefore, we need a way to produce a good classification image purely from noise, and out of one hour’s worth of data. That way we can examine how the image changes day to day.
That’s really the whole point of our paper here: can we find a way to make good classification images from very little data, and if so, can we then analyze the classification image to figure out what, exactly, is changing as perceptual learning occurs?
We decided to apply this concept to a pretty classic task in the perceptual learning field: the detection of an oriented grating in noise. Oriented grating is just another way of saying “tilted stripe”. For an hour a day across 10 training days, we’ll have you look at stimuli that are either 100% noise, or a mixture of noise and grating. The grating is always tilted to the same angle, until day 11, when we rotate everything 90 degrees (we call this the transfer session, since we want to see if your learning transfers to a grating of a different angle). In the image below, you see, from left to right, the training grating, the transfer grating, an example of what pure noise looks like, and an example of a noise/grating mixture.
The trick to producing classification images from small amounts of data is to simplify your stimuli as much as possible. In most studies gratings look more like the image at the left, but in our study we’ve eliminated all shades of gray, and kept our stimuli very low-resolution. Each stimulus is just 16x16 pixels, but blown up to about the size of a Post-it note on the screen. This way, instead of each stimulus being composed of several thousand pixels and several hundred colors, ours have just 256 pixels and two colors.
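To give you a feel for it, here’s roughly how such a stimulus could be generated in R. The diagonal stripe pattern and the mixing proportion are placeholders of my own, not our exact stimulus parameters:

```r
set.seed(2)

n <- 16  # the stimulus is a 16x16 grid of pixels

# A 45-degree square-wave "grating": stripe identity depends on the sum
# of the row and column indices, giving diagonal bands of black and white.
idx     <- outer(1:n, 1:n, "+")
grating <- (idx %/% 2) %% 2  # alternating diagonal stripes, 0 or 1

# Binary noise: each pixel is independently black or white.
noise <- matrix(rbinom(n * n, 1, 0.5), n, n)

# A signal trial mixes the two: each pixel is drawn from the grating with
# some probability, otherwise from the noise. Still strictly two colors.
p_signal     <- 0.3  # invented mixing proportion
from_grating <- matrix(rbinom(n * n, 1, p_signal), n, n) == 1
stimulus     <- ifelse(from_grating, grating, noise)

image(stimulus, col = c("black", "white"), axes = FALSE)
```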
And it works! Here are classification images from one of our subjects for Day 1, Day 10, and the Transfer Day (“T”). As you can see, the classification image is fairly indistinct on Day 1, shows a much clearer stripe pattern by Day 10, and then gets worse again when the orientation is changed for the Transfer session (all images have been rotated to the same orientation for ease of comparison). In case the effect still isn’t clear, the right column applies some extra filtering to the images on the left to enhance the effect. And remember, while all our stimuli used only black or white pixels, the classification images have shades of gray because they represent an averaging of all those stimuli.
Our next hurdle was how, exactly, to measure the “goodness” of each classification image. Our solution was to calculate the Pearson correlation between each classification image and the target grating. In other words, a fairly straightforward measure of how well these two pictures match up, with 1.0 being a perfect score. Once you have that, you can see how the correlations change over time. In our data, image quality clearly improves for about six days, and then levels off. When we change the orientation of the target grating, performance drops back to square one:
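In code, that quality metric amounts to a single line once both pictures are flattened into vectors. A sketch with placeholder matrices standing in for the real images:

```r
# Placeholder stand-ins for a real classification image and its target.
ci     <- matrix(rnorm(256), 16, 16)                # fake 16x16 image
target <- matrix(rep(c(1, 1, -1, -1), 64), 16, 16)  # fake striped grating

# Flatten both pictures to vectors, take the Pearson correlation.
quality <- cor(as.vector(ci), as.vector(target))    # 1.0 = perfect match
```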
My more eagle-eyed readers might notice something ironic about all this. Did we just go through all this trouble to create a classification image, only to convert the image back to a simple number again? But wait, there’s more. What if I created a grating at every possible orientation, and then calculated how well a person’s classification image correlated with each of those? I end up with what we call a tuning function. Ordinarily you can only get those if you’re plugging wires into animals’ brains and directly measuring the activity of their neurons. But have a look:
The red line represents the tuning function for Day 10 classification images. See where the red line intersects with the line labeled “45°”? That’s the equivalent of the Day 10 data point in the previous figure. But because I’ve gone through the trouble of creating this classification image and measuring its correlation with 180 different gratings, I can see much more than a single data point. I can see that there’s a big improvement immediately around the orientation that was trained, but that this improvement rapidly drops off as you move farther from it. In fact, most of the red line is below zero, indicating that the fit between the classification image and these other gratings has actually decreased, or been inhibited.
Then there’s the blue line, which represents the tuning function for Transfer Day. Its spike is 90 degrees away from the red line’s, which you’d expect. But the spike is also smaller and broader, indicating that learning to detect one grating transfers imperfectly to others. Also of interest, you might notice that the blue line has a noticeable “bump” near the spike of the red line. This suggests that even when subjects are told to search for this new Transfer grating, they are using a mental template that is tuned for what they had been trained on previously.
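The tuning function is nothing more than that same correlation computed 180 times, once per candidate orientation. Here’s a self-contained sketch; the grating generator, spatial frequency, and fake “Day 10” image are all invented for illustration, so it mimics the shape of the analysis rather than reproducing our figure:

```r
n    <- 16    # image size, as before
freq <- 0.25  # spatial frequency in cycles per pixel, made up

# A sinusoidal grating tilted `theta` degrees from vertical.
make_grating <- function(theta, n, freq) {
  rad <- theta * pi / 180
  xy  <- expand.grid(x = 1:n, y = 1:n)
  matrix(sin(2 * pi * freq * (xy$x * cos(rad) + xy$y * sin(rad))), n, n)
}

# A fake "Day 10" classification image: the trained 45-degree grating
# plus some residual noise.
set.seed(3)
ci <- make_grating(45, n, freq) + matrix(rnorm(n * n, sd = 0.5), n, n)

# Correlate the image with a grating at every orientation from 0 to 179.
tuning <- sapply(0:179, function(theta) {
  cor(as.vector(ci), as.vector(make_grating(theta, n, freq)))
})

plot(0:179, tuning, type = "l",
     xlab = "Grating orientation (degrees)",
     ylab = "Correlation with classification image")
abline(v = 45, lty = 2)  # the trained orientation
```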
Lastly, no two people are ever going to produce identical classification images, which makes the images useful for revealing individual differences and strategies. Early on in this study, we noticed that some subjects produced very nice-looking images and showed very clear learning trends. Others seemed to produce murkier images and flat or even reversed learning effects (they seemed to get worse over time, not better). What to do with subjects like this is always a thorny question. So, we split the subjects into two groups: learners and non-learners. We found, somewhat surprisingly, that the so-called “non-learners” actually started off with better images than the “learners,” but that they never seemed to improve beyond their starting point. Our analyses showed that the non-learners tended to focus on the center of the stimuli. Their classification images looked like bright blobs, not stripes. It’s a perceptual strategy that worked well for them initially, but failed to generate measurable learning. Meanwhile, the learners started off a bit worse, but seemed to incorporate more of the stimulus into their decisions, and thus produced better classification images. In other words, the non-learners were lazy, and we were able to see and quantify this thanks to the classification images.
That, ladies and gentlemen, is the story: an efficient way to pull a picture out of a person’s brain, and what that picture can tell us about how that person learns. In closing, I’d like to point out that this thing didn’t spring fully formed from our minds. It was accepted for publication as of October, but for three years before that it was a work in progress. More than that, it was one huge learning experience for me. How to program the stimuli, how to design the experiment so that subjects don’t fall asleep in the middle of it, how to detect a cheater, how to analyze the data, how to slice and dice the data, how to ask questions, how to get answers, how to write it up, and how to handle the revisions, all of this was a learn-on-the-job deal. I’d particularly like to thank my advisor on this project, Aaron Seitz, for all his help and guidance.
posted October 20 2010
the amazing thing
I’m taking a class about professional issues in psychological science, in other words, a class in how to be a real scientist. Giving talks is as much a part of the professional life of a scientist as running a lab and applying for grants, so yesterday was talk day. The goal was to give a talk about your research lasting no more than ten minutes, and I hit it on the dot. The talk was generally well-received, which was gratifying, as it will likely evolve into my eventual dissertation defense. Then the professor spoke. “Really great job,” she said, “but, Jon, why should I care about this topic?”
Ah yes. That. I knew I had forgotten something. To be fair, I’m not likely to give this talk to audiences who are utterly unfamiliar with the topic in question, which in this case is something called perceptual learning. But I’m hedging here. I can play devil’s advocate with myself until the sun implodes, but really, I should have just found an extra thirty seconds to explain why my field matters. Good scientists explain their work clearly and concisely. Great scientists—your Carl Sagans, Oliver Sackses, Steven Pinkers, and Albert Einsteins—make the point of their work so obvious and accessible that anyone can understand it.
To the above list of Great Scientific Communicators we should certainly add Dr. Jean Berko Gleason. Dr. Gleason is one of the world’s preeminent psycholinguists, and if you don’t know what a psycholinguist is, I’ll let her explain. All those clips are wonderful (and yes, she is exactly like that in person), particularly for how clearly she explains herself. Multiple branches of nuanced research laid out—zip, boom, bonjour—in half a minute. After watching those clips no one is left wondering why psycholinguistics is an important field. We all understand why Dr. Gleason has allowed psycholinguistics to fill her life, and how it’s relevant in our own.
So what about my field? What’s perceptual learning?
Perceptual learning is a process by which we get better at perceiving things over time. We’re not interested in changes in strategy or decision making (usually), rather, true perceptual learning means that you are literally getting better at seeing. With practice, you can become more sensitive to contrast, motion, or a thousand other things. Take my dentist, for example. Since I’m a graduate student with a bargain basement dental plan, my dental work is performed by fourth and fifth year students at my university’s dental school. Before the end of every visit, a supervising dentist double-checks the student’s work. During one recent checkup, my student dentist told his supervisor that he hadn’t found any cavities in my mouth. The supervisor took one of her tiny metal hooks in hand and glanced it over my molar for what could not have been more than a second. “There’s some decay there. We’ll need to schedule a follow-up to treat it.” She, the experienced dentist, had seen plainly what the less experienced student could not, because her extensive training had made her more sensitive to the tiny precursor of a cavity on my tooth.
It’s not just dentists, of course. Radiologists have to hunt through murky x-rays to find fractured bones and dangerous tumors. Baggage screeners need to be able to spot a knife amongst shampoo and sweaters. Jewelers peer inside diamonds, looking for perfection. Perceptual learning makes the difference between an amateur and an expert.
Perceptual learning is important because it demonstrates that experience and practice can create dramatic change in the brain. If you have a stroke, can we help you recover? Yes. As the developed world grows older on average, what can we do to protect the brain from the ravages of age? Back in ’86, two researchers showed that you could turn an eighty-year-old into a twenty-year-old, given enough practice. We spend increasingly large amounts of time playing video games. Some decry this as a waste of time, a habit fated to create a nation of ADHD-addled zombies. But the science says video games improve our visual capacities. Hell, what are you doing right now? How did you get so good at reading? How are you able to process letters, words, and whole sentences with such blinding speed? In part, reading is a product of perceptual learning.
Over the summer, I participated in a colleague’s experiment, one that involved what’s called a texture discrimination task. Imagine a grid of horizontal dashed lines. Somewhere in the grid, off in your peripheral vision, three of the dashed lines have been turned into diagonal slashes. Sometimes the slashes are arranged vertically in the grid, and sometimes horizontally. The whole stimulus is displayed for a fraction of a second, and your job is to say whether the slashes were vertical or horizontal. On the first day of the experiment, the task was impossible. My performance was so poor that I might as well have had my eyes closed. It was the same story on the second day. After three weeks of training, however, my performance bordered on perfect. The exciting thing, the amazing thing, was that I could feel my perception changing over time. What had once been impossible became trivially easy. The slashes, once as jumbled and fleeting as a snowflake in a blizzard, now jumped out at me as clearly as a snowball in summer.
As scientists we can argue at length about why and how my perception changed. Was it a low-level change in highly specific sets of neurons, or was it a broader change in the way I allocate attention? Will this transfer to other tasks, and what does that mean? Was I detecting the slashes, or filtering out the horizontal lines? It’s easy to get bogged down in the methodology and specifics, and lose sight (har har) of the big picture. Whatever the reasons, there is no doubt in my mind that my perception changed over those three weeks, simply as a result of diligent practice. That’s an amazing thing, a powerful thing. That’s why I do what I do.
posted March 4 2010
start your mornings with b. f. skinner
Every Psychology 101 course will spend a week or two on the principles of learning. The larger question being addressed is: How can a person’s thoughts or behaviors be changed in an enduring way? In discussing how this question has been studied by psychologists, the lesson invariably starts with the example of Pavlov’s famous drooling dogs and ends with B. F. Skinner and his quirky pigeons. How quirky were they, you ask? Skinner successfully trained his birds to play ping-pong, and even secured military funding to see if he could train them to act as bomb guidance systems. Project Pigeon, as it was called, actually worked, to an extent. True story.
Students don’t have much trouble with the drooling dogs and bombardiering birds. Nor do they have much difficulty mastering the concepts of positive reinforcement, in which you are given something desirable (food, money, etc.) to reinforce a target behavior, and punishment, in which you experience something unpleasant whenever you perform an undesired behavior. Confusion doesn’t set in until negative reinforcement is brought up. Negative reinforcement, just like positive reinforcement, increases the likelihood that a target behavior will occur (in contrast, punishment decreases the likelihood of a behavior occurring, and isn’t very good at creating long-term behavior change). The difference is that while positive reinforcement introduces something pleasant to reinforce behavior, negative reinforcement works by removing something unpleasant. It’s a tricky concept to teach because it’s difficult to think of examples in which you’re removing something unpleasant without also introducing something pleasant. As it happens, I’m living just such an example right now.
I am not a morning person. I never have been, and I probably never will be. I am not one for whom the dawn is its own reward. It takes a supreme effort and an elaborate system of alarms to wrench myself out of bed every morning. During holidays my circadian rhythms inevitably slide toward the nocturnal. I eventually find myself falling asleep at 4:00AM and waking up at the crack of noon. Nevertheless, I can count on one hand the number of times I’ve overslept for an early appointment. If I absolutely have to be somewhere at 7:00AM, I’ll be there. It’s just how I was raised.
For the past two weeks I’ve been arriving at the lab by 7:30AM. Initially I did this because I had to; a subject in an experiment I’m running could only come in at 8:00AM. But the subject in question finished up last week, and now I’m coming in early by choice. You may be wondering how I’ve managed to sustain such a miraculous change in my behavior, and believe me, “miraculous” is definitely the word here.
It has everything to do with the B Line. Anyone who lives in Boston knows what I’m talking about. The B Line is by far the slowest, most crowded, and least reliable subway line in the city. That it serves the most densely residential sections of the city but runs the fewest trains is a topic for another day. The end result is that rush hour on the B Line is a nightmare. Moreover, the B Line remains crowded for huge swathes of the day, even as the ridership on other branches of the Green Line thins to almost nothing. The thought of having to stuff myself into a B Line car for what is, all things considered, a short commute fills me with dread.
It turns out that if I can get myself onto the train by, say, 7:00AM, all of these problems go away. The trains haven’t had the chance to get backed up, seating is plentiful, and the ride is quantifiably faster. The whole experience is far less aversive, so much so that I’m actually willing to push against twenty-seven years of night owl habits to change my behavior. Negative reinforcement in action, ladies and gentlemen.
There are positive reinforcers as well, of course. There are more hours in my day and I’m more productive (the only other time in my life when I managed to maintain this schedule also happens to be the time I wrote daily). Since I’m in so early, I don’t feel bad about dodging the evening rush hour by leaving at 4:00. The positive reinforcers are obvious, but it’s the negative reinforcement of avoiding a horrible commute that gets me up in the morning. Skinner tends to get a bad rap these days, but there’s no denying that the man was on to something.
posted February 19 2010
keep calm and carry on
Imagine that it’s 1939 and your country is about to enter World War II. Further imagine that you work for your country’s government, and you have been charged with designing a series of posters with the goal of calming the public in the event of mounting catastrophe. Also, you are British. Immersed in such a situation and thrown a dash of inspiration, you might—I say again, might—come up with something half as brilliant as what was actually designed for this purpose: “Keep Calm and Carry On”.
“Keep Calm” was the final poster in a series of three commissioned by the Ministry of Information. It was intended to be used only if Britain was invaded by the Germans. I mean, really, how stiff upper lip can you get? There you are knee-deep in Nazis and the King’s message to you is simply, “Keep calm and carry on.” Of course, Britain was never invaded and this third poster never saw the light of day (though one supposes that this might have been an ideal message during the Blitz). The poster was forgotten by history until 2000, when an errant copy turned up at Barter Books of Northumberland.
The bold color, stark typography, minimalistic Crown symbol, and wonderfully succinct slogan combined to create something that was both emblematic of the era and perfectly British, all in just five words. Suddenly I remembered that the Southerner’s birthday had just passed (don’t look at me like that, he’s not big on presents) and that he has an affection for all things Royal. It became clear that I had no choice but to buy him a copy for his own especial privilege and certain knowledge.[1]
Here’s where things get tricky. Crown Copyright on “Keep Calm and Carry On” expired more than twenty years ago. Combined with the poster’s simple design and immense popularity,[2] this means that there are a lot of knock-offs floating around the internet. All of them feature the Crown, but only some of them use an accurate font. Since the poster is almost all text, the accuracy of the typeface is critical. Barter Books claims to have the original poster. I have no reason to disbelieve them, but some of their merchandise uses a font that is obviously different from the original’s (note the way the letter “C” terminates, as well as differences in the “M”). The match is close, but not close enough, which is deeply confusing. KeepCalmAndCarryOn.com (even more rhyming than the original!) seems to employ the same near-miss typeface, except on the book (weird, right?). Don’t even get me started on the cheap and highly inaccurate reproductions available on Amazon, which appear to use Adobe’s Myriad Pro. While Myriad has the benefit of being free, it looks nothing like the original 1939 letterforms (particularly noticeable on the letters “K”, “C”, and “M”).
So, Mr. Amateur Typographer, you might be thinking, what’s the correct typeface, then? The answer, I think, is that it doesn’t exist. Given the way that posters were produced in 1939 and the limited set of letters that “Keep Calm” employs, it’s more likely that the text was drawn by hand specifically for the job. This means that the only accurate type sample is on the original poster itself. I found one vendor on eBay who had gorgeous, accurate prints for sale, but because I live in a disreputable neighborhood the print went missing somewhere between the confirmed delivery and my arriving home. Luckily enough, Wikipedia’s version of the poster appears to be a direct copy of the original, and even better, it’s in SVG format. This means that provided you have a suitable vector graphics application and a decent print shop, you can make your own crisp, typographically accurate copy in any color or size you like. As it happens, I have both, so today I’ll be able to present the Southerner with his long-overdue birthday gift. 24x36 inches big, violently red, and defiantly British.
[1] For why that sentence is funny, please see Paragraph VI of the Charter of Maryland, 1632.

[2] Some people believe that “Keep Calm and Carry On” is too popular, as this thread on Apartment Therapy indicates. One commenter goes so far as to write, “I couldn’t stand to have my Keep Calm print at home anymore, it seems like such a cliché.” This is, of course, idiotic. In my opinion, a classic never goes out of style.