The look on her face was frantic, her motions frenetic as she exited her office. Somehow I knew she would be approaching my desk. “What’s that?” I asked, removing my headphones and trying to remain calm. I already dreaded the answer. “The site’s down. No one can log on,” said our CEO again in a panic-stricken voice.
As I absorbed those words—words feared and loathed by developers everywhere—I opened the file where I suspected I’d find the culprit. I had a pretty good guess: the one I had just deployed to our production environment. Before long, our phone began to ring, the oncology clinics that depend on our software to care for their patients every bit as frantic as our CEO.
I quickly fixed the bug and committed the changes, watching anxiously as our deploy script spat out its log messages. I switched back to my web browser and refreshed. The page loaded successfully, and I went outside for some fresh air. I paced back and forth in front of our building, my hunched shoulders refusing to relax. The weight I felt on my chest constrained my breathing.
As with most of the colleagues I related this story to, this was neither the first nor the last time I sacrificed my own physical and emotional health for the fleeting promise of startup work: the chance to get in early at a company destined for an enormous IPO. But this time, my body’s warning signs were impossible to ignore.
I had always assumed that a brain scan taken while I worked would read like an aerial view of a forest fire: intense, bright orange and red activity engulfing the whole area. In reality, however, one region in particular is uniquely triggered. The prefrontal cortex, the roughly fist-sized, foremost region of the brain that sits behind the forehead on the left and right hemispheres, is at work when we sit at our computers and bang out code. Scientists note that this region is responsible for “executive function,” an umbrella term that includes everything from organizing and planning to goal setting, problem-solving, and abstract thinking.
The prefrontal cortex consumes a disproportionately large amount of energy for its size: more than six of every 100 calories you eat are reserved for this cerebral region, impressive if you consider the number of other bodily systems vying for that energy. Unfortunately for the typical startup worker, the performance of the prefrontal cortex is also directly linked to our sleep habits.
In a recent study from the Sleep Neuroimaging Research Program at the University of Pittsburgh School of Medicine, researchers found that the prefrontal cortex is preferentially impaired following a night of particularly poor sleep. In other words, the sleep-deprived brain diverts its resources away from more energy-consuming and higher-order regions like the prefrontal cortex and toward areas like the basal ganglia, which is responsible for vital life functions such as swallowing and breathing.
Put another way: while you may think you’re building your advantage by skipping out on sleep in order to get more work done, your brain will eventually be too starved to be of use.
Fortunately, the brain can begin to be renewed after even a single night of what the University of Pittsburgh researchers refer to as “recovery sleep”—the deep, dreamless kind that leaves you sleeping until the afternoon on Saturdays.
This research seemed to confirm my own findings. I found that the cognitive toll of sleep deprivation was most evident when I was working closely with a coworker. In these pair programming sessions, I struggled not only to write coherent code, but to communicate the deeper intentions of my work. This was the point at which I became the most distressed: no amount of coffee could diminish the difficulty of putting words together.
Suddenly, I realized that something had to give. I was forced to come to terms with how unsustainable my work-life balance had become. At one point, I had thought that my capacity for productivity was limitless. But now both science and my own body were directly contradicting the myth of the superhuman startup worker.
Coming to the realization that people have limitations is much easier than concretely recognizing our own. We spend so much of our time communicating with computers that it’s easy to begin expecting the same superhuman things from ourselves as we do from our machines. Extraordinary feats like 99 percent-plus uptime, a flawless ability to perform complex calculations, and an ever-expanding memory become our goals.
But in order to continue to work, I ultimately had to put a greater focus on taking care of myself. This turned out to be both more difficult and more rewarding than I had anticipated.
Knowing how self-defeating my cycle of lost sleep and ever-increasing need for rest was, I decided to propose a conference talk on this very subject. I knew that I would have to speak from the experience of having shifted the focus from software to my own well-being, and this talk would keep me accountable. When I received the invitation to speak, I had about three months to prepare, which allowed for a deep exploration of the effectiveness of my habits at work and at home.
My two immediate goals were to get a full eight hours of sleep every night and to explore meditation. I found sleep to be the easier of the two to implement, and after I came to terms with the anxiety of not learning quickly enough, I was able to sleep well regularly. But while I had always been drawn to the idea of meditation, I found it difficult to incorporate into my life consistently. It helped me to redefine meditation not necessarily as a religious or spiritual practice, but rather, a single-minded focus on one thing. In this case, my breath.
I found that running the automated test suite on our application’s Ruby code gave me a perfect opportunity to pull my hands away from the keyboard, place them on my knees, straighten my posture, close my eyes, and begin breathing deeply. Prone as I am to racing thoughts, this practice helped me not only manage the stress I felt during the day, but also to focus on a single train of thought more consistently. Once I found this window of time to meditate, more began to crop up: starting up my machine, waiting for my lunch to heat up in the microwave, watching my local server start the Rails environment. Anywhere I had time to check my phone, I had time to breathe.
It wasn’t the easiest habit to cultivate, though. At first, my immediate impulse was to check Twitter, open my email, or switch back to the code I was testing and try to anticipate a failure before reading the command line output. I thought these impulses were saving me from focusing on my erratic and sometimes chaotic thoughts, but I came to realize that over time, they were just adding to the chaos. The more often I overcame the instinct to switch immediately to a new task, the more prolonged my sense of calm was when I went to take deep breaths.
To my surprise, this sense of calm led to an increased awareness of my level of work-related stress and its effects on my emotional health. That is, the slowing of my thoughts allowed me to pay especially close attention to my moods, my energy, my ability to engage and to communicate well, and the overall sense of personal satisfaction I derived from my work.
I had always enjoyed my job, and was exhilarated by the initial investment it required in forward-thinking technologies. Recognizing a dramatic jump in my learning curve from week to week had given me a profound source of motivation. But when I began meditating, slowing my thoughts to a speed at which they could form a coherent stream, I realized I was overwhelmed by all those extra hours. My emotional burnout was rivaling my sleep-deprivation-induced physical one.
In the long term, I believe—and the research I uncovered supports—that focusing on my emotional stability was perhaps more effective at ensuring my success as a developer than staying up all night reading books on object-oriented design patterns would have been. But in the interim, I started to miss the previous pace of my learning progression. I wondered if I would ever rediscover my passion for programming.
I felt better than I had in months.
I would love to say that this pause, this perspective-broadening sabbatical I took from the overwhelming workload of the web, fundamentally changed my career and the way I work. But ultimately, it can be difficult to justify spending too much time recovering from burnout when our industry continues to fetishize productivity and the breakneck pace of web technologies.
Like most things in life, it’s about striking a balance: an equilibrium between the frantic tempo of our industry and our own internal rhythms. This year, I learned that my body knows this balance—and it will help me find it if I listen.
I’m thinking about successful new communication channels, and how we talk about what’s in them. On Twitter, we say tweets. In the blogosphere and on Facebook, posts; also rants, reviews, and flames. Facebook has likes and now everything has links.
But I note the entire absence of “content”; the word, I mean. Yay! I’ve loathed it ever since its first PowerPoint-pitch appearance, meaning “shit we don’t actually care about but will attract eyeballs and make people click on ads”. Except they don’t say “people”, they say “users”, a symptom of another attitude problem.
With every year that passes, it’s increasingly clear that the appearance of “content” in any business plan is a symptom of (likely fatal) infection by cluelessness; and a good predictor of failure.
The language is on my side. Nobody calls Hollywood’s output “content” (or Bollywood’s either): they’re movies and flicks, with a lovable posse of modifiers: horror, chick, war, Elvis, zombie, romance, slasher, Bond.
Publishers produce novels and epics and mysteries and bodice-rippers and procedurals and memoirs and hatchet jobs.
Musicians make songs and symphonies and anthems and albums and jams and (along with DJs) sets. Theater companies put on plays: musicals, tragedies, comedies, farces.
“Content” has the stink of failure; of hustlers building businesses they don’t actually care about. Which is icky and usually doesn’t pay off.
Enough with the negative findings, because there’s something important and positive to say here: If you’re building something that’s used for communication, and you find that people are using an idiomatic name for what they’re sending and receiving, you’re probably on to something.
But if you’re about “generating content” you’re dead.
The next Deathmøle album is gonna be pretty proggy I guess
We are excited to unveil a couple of experimental data-driven visualizations that literally map 400,000 hours of U.S. television news. One of our collaborating scholars, Kalev Leetaru, applied “fulltext geocoding” software to our entire television news research service collection. These algorithms scan the closed captioning of each broadcast looking for any mention of a location anywhere in the world, disambiguate each mention using the surrounding discussion (Springfield, Illinois vs. Springfield, Massachusetts), and ultimately map each location. The resulting CartoDB visualizations provide what we believe is one of the first large-scale glimpses of the geography of American television news, beginning to reveal which areas receive outsized attention and which are neglected.
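To make the idea concrete, here is a minimal, purely illustrative sketch of that kind of processing—not the actual software Kalev used. It scans caption text for known place names and, when a name is ambiguous, prefers the candidate whose disambiguating context words appear nearby. The tiny gazetteer and the context-window size are invented for the example.

```typescript
// Minimal sketch of fulltext geocoding over closed captions (illustrative only;
// the real pipeline is far more sophisticated).

interface Place {
  name: string;           // surface form found in captions, e.g. "Springfield"
  lat: number;
  lon: number;
  contextHints: string[]; // nearby words that support this candidate, e.g. "illinois"
}

// A tiny, hypothetical gazetteer; a real one would hold millions of entries.
const gazetteer: Record<string, Place[]> = {
  springfield: [
    { name: "Springfield", lat: 39.78, lon: -89.65, contextHints: ["illinois", "lincoln"] },
    { name: "Springfield", lat: 42.1, lon: -72.59, contextHints: ["massachusetts", "boston"] },
  ],
  chicago: [{ name: "Chicago", lat: 41.88, lon: -87.63, contextHints: [] }],
};

// Scan a caption transcript, resolve each mention, and return mappable points.
function geocodeCaptions(transcript: string, windowSize = 10): Place[] {
  const words = transcript.toLowerCase().split(/\W+/);
  const results: Place[] = [];

  words.forEach((word, i) => {
    const candidates = gazetteer[word];
    if (!candidates) return;

    // Use the surrounding discussion to disambiguate (Springfield, IL vs. MA).
    const context = words.slice(Math.max(0, i - windowSize), i + windowSize);
    const scored = candidates
      .map(place => ({
        place,
        score: place.contextHints.filter(hint => context.includes(hint)).length,
      }))
      .sort((a, b) => b.score - a.score);

    results.push(scored[0].place);
  });

  return results;
}

// Example: picks the Illinois Springfield because "illinois" appears nearby.
console.log(geocodeCaptions("visited Springfield Illinois before flying to Chicago"));
```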
Select a TV station and time window to view their representations of places.
Keep in mind that as you explore, zoom in, and click the locations in these pilot maps, you are going to find a lot of errors. Those range from errors in the underlying closed captioning (“two Paris of shoes”) to locations that are paired with onscreen information (a mention of “Springfield” while a map of Massachusetts is displayed on the screen). Thus, as you click around, you’re going to find that some locations work great, while others have a lot more error, especially small towns with common names.
What you see here represents our very first experiment with revealing the geography of television news and required bringing together a bunch of cutting-edge technologies that are still very much active areas of research. While there is still lots of work to be done, we think this represents a tremendously exciting prototype for new ways of interacting with the world’s information by organizing it geographically and putting it on a map where it belongs!
Virtual Machines: Unlocking Media for Research
In addition to our public web-based research service, we are facilitating scholars like Kalev and other researchers in applying advanced data treatments to our entire collection, at a speed and scale beyond any individual’s capacity. As responsible custodians of an enormous collection of television news content created by others, we endeavor to secure their work within the context of our library. Therefore, rather than lending out copies of large portions of the collection for study, researchers instead work in our “virtual reading room,” where they may run their computer algorithms on our servers within the physical confines of the Archive. We hope our evolving demonstrations of this “queries in, results out” process may help forge a new model for how exceptional public-interest value can be derived from media without challenging their value and integrity to their creators.
The Knight Foundation and other insightful donors are providing critical support in our ongoing efforts to open television news and join with others in re-visioning how digital libraries can respectfully address the educational potential of other diverse media. We hope you will consider lending your support.
I find great stuff on the Internet Archive all the time, and now I can use a tool called CratePlayer to create playlists from archive.org movie and audio files. For example, I want to play a bunch of old Christmas movies at my holiday party this year so I found some cartoons and added them to a Crate. Now all I have to do is hook my computer up to the TV, press play, and poof! Instant entertainment!
CratePlayer is a curation tool that lets you gather audio and video content from online sources into collections that can be played and shared. When they approached us about incorporating Internet Archive items into their platform, we said “yes!” and gave them some pointers about accessing archive.org content. Off they went, and in short order they had it all working.
Try using their bookmarklet as you’re poking around among archive.org audio and video content. It’s easy to use and might help you keep track of all the great things you find.
Step 1: Brew some espresso.
I made a double, which in hindsight was too much. One is enough.
Step 2: Oatmeal
I used Cinnamon & Spice because I thought it would complement the espresso nicely.
Step 3: Make mistakes
THAT’S RIGHT, PUT THE OATMEAL IN THE ESPRESSO AND STIR
Step 4: CONSUME
[image removed at request of public health authority]
It was tastier than I expected. The sugar and spices from the oatmeal sweetened the espresso, and the espresso did a fine job of preparing the oatmeal. It still came out slightly bitter, but if you like espresso (I do) it was a pleasant bitterness.
We just received a shipment of Internet Archive T-shirts. They have the Internet Archive logo on the front and a choice of slogans on the back. They come in S, M, L, XL, and XXL.
We know you’ve been waiting, so get ‘em while they last. You can see them and other great Internet Archive gear at https://store.archive.org/
As I write this I’m angry at the CBC, Canada’s national broadcaster, for their shoddy, shallow coverage of reformgovernmentsurveillance.com (let’s say “RGS” for short). But the trap they fell into is probably attractive to many flavors of media.
The 6PM news report opened with a few seconds of Zuckerberg saying he thought the government was blowing it in this space, then another few words from Zoe Lofgren talking about the NSA putting American business at a disadvantage. (Do ya think?!)
Then there was a sudden 180° shift into hard polemics, with a snotty British professor opining that it was all the companies’ fault because they were sucking up the information, and the NSA wouldn’t come after it if the companies weren’t collecting it, would they? There was some more yammering about hypocrisy, then they went to the next story.
Indeed, most (not all) of the RGS companies track a whole lot of information about a whole lot of people, and use it to help sell ads and make money.
So, that granted:
What about, you know, the issues? The light cast by the Snowden documents has revealed an egregiously intrusive snooping regime that contravenes what a lot of us understood our constitutions to say. Could the coverage at least acknowledge that this is seen by many as a problem?
News flash: A lot of the information that’s useful in selling advertising (clickstream, search terms, likes) is irrelevant to the spooks. They want to know whom you’re talking to and what you’re saying; everything else is secondary.
These tax-funded banditos put taps in privately-owned fiber trunks between data centers to pull out raw application data with no regard to who or what it was about.
There were some other news angles you’d think would be interesting. First of all, how did this get wrangled? Getting the egomaniacs on all these exec teams to line up and play nice is a hell of an achievement by, well, someone; who? Also, why isn’t Amazon there? Also, why is Apple’s logo weirdly missing from the headline space? Those are interesting, but they’d have required reporting, not just scraping up a random academic to shovel world-weary cynicism.
I know this will sound sort of old-fashioned, but I can tell you from experience that Google is bulging with people who think that everyone’s freedom to say anything to anyone anywhere using the Internet makes the world a significantly better place, and who are deeply angry with the spooks’ shotgun blast at everyone’s presumption of privacy.
So, granted: Pervasive surveillance is bad for the companies who make money on the Internet. That’s a side-effect of being bad for the Internet. And being bad for freedom.
Me: I’m in favor of the rule of laws not men; and not shadowy three-letter-agencies either. So I’m proud to be a teeny-tiny part of the work at places like the IETF on doing a better job at protecting privacy. But at the end of the day this is a political problem, and it’s useful (I think) to have big Internet companies as political allies.
I looked it up: “A traditional German alcoholic drink for which a rum-soaked sugarloaf is set on fire and drips into mulled wine”. It was a tasty treat, on offer at Vancouver’s Christmas Market, itself a treat for the eyes, so I took pictures.
There were little kids singing carols: Cute overload!
There were Croatian dancers and an old-fashioned merry-go-round, and lots of booths selling bright things.
Some of the decorations were worth zooming in on.
I’ve been to real Christmas Markets, in Würzburg and Antwerp, and they’re good fun. You have to watch out for that Feuerzangenbowle though; it’s hot and spicy and delicious and goes down smooth as anything; you may not notice it sneaking up on you till it’s too late.
Playthrough of one of the tracks off the upcoming Deathmøle record! Jackson B7 deluxe -> Axe FX II -> computer
Today is a great day. A new potential client has come knocking on your door, and they’d like to consider you for a project. Thrilling as it may be, your excitement quickly turns to anxiety as you realize that the next thing they want to know is “how much will it cost?”
Here begins the great struggle of web business development. You need to know what you’re building before you can know how much it costs to build it. But accurately mapping out the scope of a project could take weeks of focused effort. That’s probably not something you can give away whenever you get a request for a quote. So what do you do? Make up numbers and hope you’re on target? Undershoot and you may land a job that cripples your business. Overshoot, and you may unnecessarily send that new client packing.
Fortunately, it doesn’t have to be this way. Defining the challenges, solutions, and strategies for the project to come is some of the most valuable work you will do for your client. Not only is that time worth paying for, but the resulting deliverables will be critical to the success of the project, regardless of who they hire to complete the next phase.
Let’s look at how we can structure a pre-project research phase that will ensure that—on completion—everyone’s ready to hit the ground running with design and development. By the end, our new client will know more about their organization and web project than ever before, and you’ll be able to create a much more accurate budget for subsequent work.
So there it is. That unread email with the subject line Fancy Organization Request for Proposal.
It’s a website redesign: no surprise there. And, wait—oh miraculous day—they even told you their budget. They have $100,000, which is enough money to do some real stuff. But what stuff do they want? Poring over the pages, you realize quickly that it’s not so clear. Some of the requirements they state, in fact, seem vague, strange, or misguided. As much as these folks are hoping you can take their RFP and give them a precise quote and a plan of action, you know in your bones that you need more clarity—and clarity rarely comes easily.
So instead of putting together their requested $100,000 proposal, what do I do? I put together a $20,000 one. This, my friends, is unexpected. Gutsy, even! This is not what they’ve asked for. The gall. The nerve. The chutzpah!
Alright, let’s acknowledge this right now: there is risk involved in this approach. If I turn in something that looks entirely different from what the client is requesting, they cannot compare apples to apples with the other proposals. This may not fit into their (potentially rigid) RFP process. Our proposal may get tossed out in the very first round, just for being different.
On the other hand, this may be exactly what makes us stand out. Rob Harr from Sparkbox has a terrific line that I’ve started using myself: “I’m going to embrace the fact that I’ll know more about your project tomorrow than I do today.” For some clients, that may be a refreshing dose of honesty and creative thinking. And I don’t know about you, but those are the clients I really want.
One caveat: though we’re holding off on producing an accurate second phase budget until later, we still need to address the general cost of future work in some way. It’s a good idea to take a stab at a ballpark range for the second phase, with the understanding that it’s a bit of a shot in the dark. That way everyone will at least have a sense of the magnitude of a second phase, and can plan accordingly.
So what are we actually pitching in this smaller project? Well, it’s the first part of the bigger project, naturally. Depending on the nature of the project, it may require different tasks and deliverables. But we’ll likely include things like meetings, interviews, information architecture recommendations, branding analysis, a copywriting style guide, a content audit, wireframes, and style prototypes/style tiles. Whatever we end up doing, we’ll compile all of the research and conclusions we draw in the specification document, which is the central deliverable we provide at the end of the phase.
When putting together a standalone research phase, the trick is to focus on work that will help you more clearly define the project. That way you’ll have a well-formed plan for a second phase, and can provide the client with a much more accurate budget for that subsequent work.
In all likelihood, I don’t need to sell you on research. Understanding what we’re going to design and build before we begin means we can create better things, more efficiently. But why, when we have the opportunity to sign a big contract, would we opt to sign a small one? Why, indeed!
You know what’s scary? Handing a big wad of money to a stranger. That’s what a big initial contract is like for a potential client. A smaller introductory research project lets a new client wade in ankle-deep before the big plunge.
Not only are they making a smaller commitment of time and money but—by the end of the project—they’ll know if you’re a good fit for them. If everyone decides to part ways at the end of this phase, they’ll still have valuable deliverables to help them jump-start the project with a different team.
And this goes for you, too. There’s nothing worse than signing on for a year-long project with a new client, and then realizing a month in that it’s a bad match. The pre-project project lets you assess the relationship in a low-risk environment, and decide if it’s worth continuing.
From a business development perspective, this initial research project has a lot of appeal, too. Sure, you’re not landing a big ticket contract just yet, but bear with me on this one.
If you’re a small company like ours, you likely don’t have a dedicated sales person. This means that responding to RFPs is costly, and before long you’ll have to start making some hard decisions about which proposals you can afford to write, and which you can’t.
The nice thing about research phase proposals is that they tend to be very similar from project to project. By definition, these proposals don’t include very much about the specifics of the work being done, so a chunk of well-written boilerplate in your proposal gets you a lot more mileage. Investing less time with each proposal means that you can respond to more proposals. Wider net, less effort—without sacrificing quality. Yay!
In my experience, when you don’t have a proven track record with a client, selling a $10,000–20,000 project is a lot easier than selling a $100,000–200,000 project. This little research project helps you get a foot in the door with that new client, and prove your worth without resorting to something devaluing like spec work.
As this phase is nearing completion, you’ll be able to create a much more accurate budget for phase two. Because your research has generated a well-informed project definition, there will be much less guesswork, and a far greater understanding of the project’s requirements. In effect, you’re getting paid to write the best proposal ever. And you should be! The insight into the project and organization you’re providing is vital work that ensures that no one will be jumping into the project blindly and simply hoping for the best. That’s because now you’ll have something that every project desperately needs, but surprisingly few actually have. You’ll have a plan.
With over three years of responsive web design in our collective portfolios, we now have a solid set of design patterns for making websites work on small devices. But what about larger screens?
It’s become common for sites to employ a liquid design for smaller breakpoints, which allows the content to expand and contract as necessary to make the most of the available screen width. At the opposite end of the spectrum, however, many of those same sites have a maximum width of 960 pixels or so, which can leave a lot of unused pixels on a contemporary desktop display.
Designing for the big screen can be complicated—negative space, scale, density, and layout devices such as grids, modules, and columns can be factors in managing hierarchy and emphasis.
Large screens are also generally shaped in a wide landscape orientation, a poor fit for the traditional vertically scrolled webpage. As with smaller screens, there are a wide variety of screen sizes and resolutions—but in the case of larger screens, the differences are often magnified, ranging from ultra-light 11-inch laptops to 30-inch desktop monitors.
Given these conditions, it’s not surprising that many desktop layouts (like this one) are designed to suit a 1024x768 resolution. It’s a leftover from an earlier era, when designs were constrained to the screen resolution that was most prevalent amongst users. Today, with the majority of desktop users on screens that are wider than 1024 pixels, a maximized browser window can turn that carefully considered 960-pixel layout into a monolith in a field of whitespace.
More people are accessing the internet with a mobile device every year, and so it makes sense to concentrate budgets and timelines on creating good user experiences for smaller screens. Mobile layouts can be perfectly usable on larger devices, but the same cannot always be said for desktop layouts viewed on small screens.
But by embracing large screens, designers have the opportunity to work within a larger fold, presenting the user with more content simultaneously, lessening scrolling on longer pages, and creating a richer, more expansive user experience. And by using the same practices we developed to adapt layouts to smaller screens and identifying some common patterns for large screens, we need not necessarily introduce extra cost or time to our projects.
As with any design, the first consideration when approaching larger breakpoints is content. Long- and short-form writing, photography, ecommerce, video, or web applications may benefit from different approaches in different ways.
Photography, search results, and other content presented in grid format are easy candidates for wide screens. Showing as much content as the screen can accommodate allows a user to quickly scan and compare results.
On the other hand, long-form reading is a challenge for wider breakpoints. Long line lengths can make it difficult to follow the text from line to line, while short line lengths can introduce a sense of jumpiness or acceleration, breaking a reader’s rhythm and pacing.
To make reading more comfortable, a designer needs to balance the width of the text column (the measure) against the size and line-height (leading) of each line of text. Classically, an appropriate count for a single column of text is seven to 10 words (Josef Müller-Brockmann) or 45 to 75 characters (Robert Bringhurst). Put another way, Bringhurst also notes that the measure of a conventional book column is about 30 times the type size used, but that this number may range from 20 to 40 times the size of the type.
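As a quick worked example (the 18-pixel body size is my assumption, not a figure from the article), Bringhurst’s rules of thumb give:

$$30 \times 18\ \mathrm{px} = 540\ \mathrm{px}, \qquad (20\text{–}40) \times 18\ \mathrm{px} = 360\text{–}720\ \mathrm{px}$$

Both are far short of a maximized desktop window, which is exactly why letting a single text column simply grow with the viewport breaks down.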
Wider columns can use more line-height to make it easier to follow the text from line to line, but too much line-height can cause lines to drift apart, resembling a college research paper. Similarly, as the text size in a column grows larger, the number of lines that can be presented vertically on the screen grows smaller, increasing the need for scrolling and breaking the reader’s immersion. Simply scaling the text for larger breakpoints is a limited solution.
The Great Discontent demonstrates how a site can use art direction to adapt to larger screens without necessarily filling every single pixel in the browser window. Each article expands its feature art at the top to fill the viewport, resulting in a striking full-bleed effect upon first viewing. The main content of each article is set in a relatively narrow main column, but sidebars, pull quotes, and inline art expand beyond the central column. Breaking the content out of the main column creates an asymmetrical shape which complements the full-width artwork at the top—creating the illusion of a full-window experience without compromising legibility. Large images like these can come at a cost, though, as a balance between image quality and the overall page weight needs to be considered.
The recently relaunched Roger Ebert site deals with large breakpoints by simply scaling up the maximum width of the page and the page elements proportionately. In theory this might work, but the execution is not entirely successful. Elements such as headers scale up vertically as well as horizontally, meaning the amount of content displayed within the fold is drastically reduced. Inexplicably, main body copy on the more text-heavy pages does not scale up in proportion to the other page elements, so it seems dwarfed in comparison, in addition to being set in a size that is too small for the main column measure.
Using the extended margins of larger screens for related or tangential content, such as Medium’s comments layout, is another idea that seems well suited for long-form publishing. When the main content column is maximized on smaller screens, it moves aside to reveal the comments area; on larger screens, the comments are revealed in the available margin space.
I’ve also always liked Grantland’s use of the lower right column for footnotes, which takes advantage of wider screens while maintaining focus on a readable central column. Photographs, figures, asides, quotes, and other related content can be extended out into the margins of wider viewports. This allows a designer to extend the vertical grid outward to create variety while preserving the flow of the main text.
Newer CSS features like columns and regions could be useful tools to enhance long-form reading on wider screens. CSS-based columns are now supported across most new browsers, and could be deployed within sections of an article to maximize screen usage while maintaining a good measure for text readability. If you have a large screen, for example, see my column-based demo of this article.
As a progressive enhancement measure, older browsers that do not support these features could be restricted to a single column of appropriate measure.
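One possible way to apply that kind of progressive enhancement is sketched below in TypeScript: detect column support and only then opt the article into a multi-column layout, so older browsers keep the single, comfortable measure. The `.multi-column` class and the 60em viewport threshold are assumptions for this example, not part of the original article.

```typescript
// Sketch: opt into CSS multi-column layout only where the browser supports it.

function enableColumnsWhereSupported(): void {
  const article = document.querySelector("article");
  if (!article) return;

  const supportsColumns =
    typeof CSS !== "undefined" &&
    (CSS.supports("column-count", "2") || CSS.supports("-webkit-column-count", "2"));

  // Only use columns on wide viewports, so the measure stays readable.
  const isWideViewport = window.matchMedia("(min-width: 60em)").matches;

  if (supportsColumns && isWideViewport) {
    article.classList.add("multi-column"); // the stylesheet sets column-count, column-gap, etc.
  }
  // Otherwise, older or narrower browsers simply keep the single column of appropriate measure.
}

document.addEventListener("DOMContentLoaded", enableColumnsWhereSupported);
```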
Breaking content into chunks allows users to quickly and efficiently process information on content-heavy pages, and it’s a natural fit for responsive designs, because it allows content to be easily stacked hierarchically or arranged in columns for different breakpoints.
The advantage of this technique for large screens is that each chunk or band of content can use a different layout to optimize for legibility or impact. A good example of this approach is the Manchester City Council site, which uses different groups of modules in restricted widths together with a full-width photography chunk to create impact and emotion. The layout adapts fluidly to different viewports while retaining an appropriate width and layout for the content of each chunk.
Juliana Bicycles uses a more visual approach to content chunking, combining horizontal bands with flexible tiles to create a rich and compelling responsive site that also scales to large screen widths. Navigation is recast as a full-window carousel with rich background photographs. Content is presented in modular blocks, and gutters that appear between tiles are removed in tablet and mobile screen sizes. A paper texture background fills in empty tile spaces and also helps fill out the screen at the largest breakpoint. Using image-based modules in this way can be expensive from a bandwidth perspective, but is a great way to get the user to navigate quickly by showing rather than telling.
The obvious advantage of a big screen is the ability to see a lot of content at one time.
With collection-based content such as photos, tiling can be an effective way to fill large screens. We see this every day when searching Google Images—the results spread out to fill the viewport, presenting a large variety to choose from in a single scan.
Pinterest also uses a tiling layout for images, with the addition of text and whitespace to mitigate what could be an overly busy layout. On larger screens the image preview modules seem to tile indefinitely. For a collection site, where the user experience is about quickly collecting and marking favorites, filling the viewport with thumbnails makes it easier to scan and creates a satisfying sense of fullness.
Uniqlo uses a wide, tiled-image layout that also looks well-designed and spacious. Items are chunked together with large headers acting as bumpers between sets to add breathing room. Tiling the products across a wide area allows shoppers to quickly compare items visually. At the same time, the whitespace, model photos, and variety in scale add refinement to the overall look and feel and help reinforce the point that design is an important differentiator in Uniqlo’s product line.
Neither Pinterest nor Google Images is responsive or adaptive—both employ a separate site for mobile users. Uniqlo, meanwhile, adapts only to larger screens; small screens get the narrowest desktop layout. While these sites may not be complete models for responsive design, we can look to them as examples of how to expand this type of content.
Another interesting technique for larger screens is based more on classic print design, rather than restructuring or manipulating content to fill the browser.
Institut Choiseul confines the content of each page to a structured grid in the center of the window, but effectively stakes out a large screen presence by extending a field of color from the logo and main page content outward toward the left edge of the viewport. The Back to Top link appears in the lower left corner of the viewport when the page is scrolled, a small touch that stakes out the entire window for the page. The strong grid and large fields of color give the site a sober, logical tone that evokes the International Design style of the 1950s and 1960s.
Kanselarij der Nederlandse Orden has a similar style, with asymmetrical bands of color that provide the background to a centered flexible grid. Because the grid expands as a percentage of the total window width, the content also plays a part in filling the screen, but the boxy color fields add a level of sophistication to what is otherwise a fairly ordinary layout.
Small effects such as a color tone or texture in the background, or removing boxy lines from the edges of a layout, can go a long way toward creating a sense of completeness in the maximized window. Creative use of asymmetry instead of skinny, tower-like layouts can also keep readers from drowning in white margins.
By simply extending common techniques for adapting content to smaller breakpoints, we can see plenty of opportunities for larger breakpoints as well. Sites that use a strong grid will have an easier time of it, as a well-structured grid should have no problem expanding into a wider space.
Obviously the most important consideration in any design is the content, and so that must be the basis for any effort to expand a design to fill a wide screen. For long reads, it’s more important to create a good rhythm and flow so that the text can be read without distraction. For photographs or graphics, space and scale contribute directly to impact. Government and service-oriented sites must provide easy access to tasks and information. Ecommerce sites need to make it easy for consumers to evaluate and purchase products. A layout’s density should reflect the site’s tone—more density for a more active experience, less for a slower, more thoughtful tone. Much like framing a photograph, filling out the viewport can make a design seem bigger and bolder, just as framing a design in generous whitespace can make it seem more elegant or precious.
It may be true that desktop users have the luxury of resizing the browser window if all that whitespace makes them uncomfortable, unlike users of smaller devices. It may also be true that not all desktop users browse with large or full-screen windows. But as with mobile, we shouldn’t make assumptions about which devices are used to view our content now, and especially in the future. Large screens, in some cases, can provide both enhanced usability for users and a richer palette for designers. It’s up to us to take advantage of these expanded borders.
When it comes to building apps, we often assume our users are very much like us. We picture them with the latest devices, the most recent software, and the fastest connections. And while we may maintain a veritable zoo of older devices and browsers for testing, we spend most of our time building from the comfort of our modern, always-online desktop devices.
For us, a connection failure or slow service is a temporary problem that warrants nothing more than an error message. From this perspective, it is tempting to think of connectivity, mobile or otherwise, as something that will solve itself over time, as we get more network coverage and faster service. And that works, as long as our users stay above ground in large, well-developed—but not overly crowded—cities.
But what happens once they descend into the subway, board a plane, travel over land a bit, or go live in the countryside? Or when they stand in the wrong corner of a room or simply find themselves part of a huge crowd? Our carefully constructed app experiences become sources of frustration, because we rely so fully on that ephemeral link back to the servers.
This reliance ignores a fundamental truth: Offline is simply a fact of life. If you’re mobile, you’ll be offline at some point. It’s okay, though. There are ways to deal with it.
Web apps used to be completely dependent on the server: it did all the work, and the client just displayed the result. Any disruption in your connection was a major problem: if you were offline, you couldn’t use your app.
That problem is solved, in part, by richer clients, where more of the application logic runs in the browser—like Google Docs, for example. But for a proper offline-first experience, you also want to store the data in the front end, and you want it to sync to the server’s data store. Happily, in-browser databases are maturing, and there are an increasing number of solutions for this—like derby.js, Lawnchair, Hoodie, Firebase, remotestorage.io, Sencha touch, and others—so solving the technical aspects is getting easier.
But we have bigger, and much weirder, fish to fry: designing apps and their interfaces for intermittent connectivity leads to an abundance of new scenarios and problems.
There are of course a few precedents for offline-first UX, and one of them is especially prevalent: your email inbox and outbox. Emails go into your outbox, even when you’re offline. Once you’re online, they get sent. It’s simple and unobtrusive, and it just works.
For incoming email, the experience is similarly smooth: once you reconnect, new emails from the server appear at the top of your inbox. In between, you’ve got a more or less complete local copy of all your emails up to this point, so you’re never stuck with an empty app. All three scenarios (client push fails, client pull/server push fails, availability of local data when offline) are well handled.
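A minimal sketch of that outbox pattern, assuming a hypothetical `/api/messages` endpoint and using localStorage plus the browser’s online/offline events; a real app would use a proper in-browser database and a sync library, but the shape is the same.

```typescript
// Sketch of an "outbox": writes are queued locally and flushed when connectivity returns.

interface OutboxMessage {
  id: string;
  body: string;
  queuedAt: number;
}

const OUTBOX_KEY = "outbox";

function loadOutbox(): OutboxMessage[] {
  return JSON.parse(localStorage.getItem(OUTBOX_KEY) ?? "[]");
}

function saveOutbox(messages: OutboxMessage[]): void {
  localStorage.setItem(OUTBOX_KEY, JSON.stringify(messages));
}

// Placeholder transport: the "/api/messages" endpoint is an assumption for this sketch.
async function sendToServer(message: OutboxMessage): Promise<void> {
  const response = await fetch("/api/messages", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(message),
  });
  if (!response.ok) throw new Error(`send failed: ${response.status}`);
}

// Always queue first, so nothing is lost if the network call fails.
function send(body: string): void {
  const message: OutboxMessage = { id: crypto.randomUUID(), body, queuedAt: Date.now() };
  saveOutbox([...loadOutbox(), message]);
  void flushOutbox();
}

async function flushOutbox(): Promise<void> {
  if (!navigator.onLine) return; // quietly wait; the "online" event will trigger a retry

  for (const message of loadOutbox()) {
    try {
      await sendToServer(message);
      saveOutbox(loadOutbox().filter(m => m.id !== message.id));
    } catch {
      break; // still unreachable; keep the remaining messages queued
    }
  }
}

// Flush whenever the browser reports that connectivity is back.
window.addEventListener("online", () => void flushOutbox());
```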
The experience of using a website or app when offline should be a lot better, less frustrating, and more empowering. We need the UX equivalent of responsive web design: a strong catalog of guides and patterns for a disconnected, multi-device world.
Most web apps have two connectivity-related points of failure: client push and client pull/server push. Depending on what your app does, you might want to queue outgoing changes and retry them once the connection returns, fall back to locally stored data for reading, or simply tell people what will and won’t work while they’re offline.
Other issues arise when the connectivity state changes during use—for example, when the server wants to push a change to the object or view that the user is currently looking at, or even editing. This would require you to decide whether to interrupt people mid-task, merge the incoming change quietly, or hold it back until they’re done.
Let’s take a look at some real-world examples.
Going offline while using Google Docs in a browser other than Chrome can be quite frustrating: you can’t edit your document. And while reading is still possible, copying parts of it isn’t. You can’t do anything with your text or spreadsheet—not even copy it to another program to continue working there. And yet, this is actually an improvement over past versions, where a large overlay would notify you of the offline state and prevent you from even seeing your work.
This is a common occurrence in both native and web apps: data you’ve only just accessed suddenly becomes unavailable when you lose your connection. If possible, apps should retain their last state and make their data available, even if it can’t be modified. This requires keeping local data to fall back to if the server can’t be reached, so your users are never stranded with an empty app.
Stop treating a lack of connectivity like an error. Your app should be able to handle disconnections and get on with business as gracefully as possible. Don’t show views you can’t fill with data, and make sure error messages hit the right tone. Take Instagram: when a person can’t post a photo, Instagram calls it a failure—instead of reassuring the user that the image isn’t lost, it’s just going to be posted later. No big deal. You might even want to reword your interface depending on the app’s connection state, such as turning “save” into “save locally.”
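One small, hedged sketch of that connection-aware rewording (the `#save-button` element and the label strings are assumptions for this example): swap the button’s label and hint text whenever the browser reports a change in connectivity, rather than surfacing an error.

```typescript
// Sketch: reword the interface based on connection state instead of showing errors.

function updateSaveButton(): void {
  const button = document.querySelector<HTMLButtonElement>("#save-button");
  if (!button) return;

  button.textContent = navigator.onLine ? "Save" : "Save locally";
  button.title = navigator.onLine
    ? "Your changes will be saved to the server."
    : "You're offline. Changes are kept on this device and will sync later.";
}

window.addEventListener("online", updateSaveButton);
window.addEventListener("offline", updateSaveButton);
document.addEventListener("DOMContentLoaded", updateSaveButton);
```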
You might sometimes need to block whole features completely, but more often, you won’t need to. For example, an app might still let people create and edit things locally, disabling only the actions that genuinely require the server, such as sharing or publishing.
If your app offers collaborative editing or some other form of simultaneous use on multiple devices, you will likely create conflicting versions of objects at some point. We can’t prevent this, but we can provide easily usable conflict-resolution UIs for people who might not even understand what a sync conflict is.
Take Evernote, whose business is heavily based on syncing notes: conflicts are resolved by simply concatenating both versions of the note. On anything longer than a couple of lines, this requires an inordinate amount of cognitive effort and subsequent editing.
Draft, on the other hand, has managed to make conflict resolution between collaborators simple and beautiful. It shows both versions and their differences in three separate columns, and each difference has an “accept” and an “ignore” button. Intuitive and visually appealing conflict resolution, at least for text, is definitely possible.
Detailed change-by-change resolution isn’t always necessary. In many cases you just need to provide a nice interface for highlighting differences and allowing the user to choose which version wins in this specific conflict.
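One way to drive such an interface, sketched below: compare two versions field by field, present only the fields that differ, and let the person pick a winner per field. The shape of the record and the field names are invented for the example.

```typescript
// Sketch: build a per-field "pick a winner" view for two conflicting versions of a record.

type Version = Record<string, string>;

interface FieldConflict {
  field: string;
  local: string;
  remote: string;
}

// Collect only the fields that actually differ; identical fields need no attention.
function diffVersions(local: Version, remote: Version): FieldConflict[] {
  const fields = new Set([...Object.keys(local), ...Object.keys(remote)]);
  return [...fields]
    .filter(field => (local[field] ?? "") !== (remote[field] ?? ""))
    .map(field => ({ field, local: local[field] ?? "", remote: remote[field] ?? "" }));
}

// Apply the user's per-field choices ("local" or "remote") to produce the resolved record.
function resolve(
  local: Version,
  remote: Version,
  choices: Record<string, "local" | "remote">
): Version {
  const merged: Version = { ...remote, ...local };
  for (const conflict of diffVersions(local, remote)) {
    merged[conflict.field] = choices[conflict.field] === "remote" ? conflict.remote : conflict.local;
  }
  return merged;
}

// Example: the UI would render one row per conflict with "accept"/"ignore" buttons.
const conflicts = diffVersions(
  { title: "Offline first", color: "blue" },
  { title: "Offline First", color: "blue" }
);
console.log(conflicts); // [{ field: "title", local: "Offline first", remote: "Offline First" }]
```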
There are other types of conflicts awaiting us, however, and many of them won’t be text based: disputed map marker positions, bar chart colors, lines in a drawing, and endless other things we haven’t even thought of yet.
But not all technical problems need technical solutions. Consider two waiters with wireless ordering devices in a large, multi-story restaurant. One is connected to the restaurant’s server. The other is on the very top floor, where his connection fails temporarily. Both wait tables that order the same bottle of rare, expensive wine. The offline waiter’s device cannot know about this conflict. What it can do, however, is be aware of the risk of conflict (low stock number, its own offline state) and advise the waiter to give an appropriate reply to the table (“Oh, exquisite choice. Let me see if that’s still available”).
In some cases, apps can preemptively take low-overhead action to give users a better experience later. When Google Maps detects I’m on wifi in a different country than usual, it could quickly cache my surroundings for the likely case of later offline or roaming use.
In many cases, however, content is too large to cache preemptively—for example, a video from a news site. In these cases, users must make the explicit decision to sync locally, which would require them to download the video to their device and view it in a different application. Any context that video had online—like related information or relevant comment threads—is now lost, as is the opportunity for users themselves to comment.
All of these examples were client push, but there’s the server push aspect as well: what can we do when the server updates a user’s active view, and pushes data that can’t conveniently be added to the top of a list? Chronological data often causes this problem.
For example, if you use iMessage on several devices, messages are sometimes displayed out of chronological order when syncing. iMessage could sort them in the correct order—they are timestamped, after all—but instead it shows them in the order in which they arrived on the device. This makes them highly noticeable, but is also terribly confusing.
Imagine the more intuitive way of doing it: messages are always shown in chronological order, regardless of when they arrive. This sounds more sensible at first, but means you may have to scroll back in time to read a message that just arrived, because it was sent in response to something much older. Worse, you might not even notice it, since it pops into existence somewhere you’re probably not looking.
If you display data chronologically and the sequence of the data itself is meaningful, like in a chat (as opposed to email, which can be threaded), offline capabilities pose a problem: the most recently transmitted data is not necessarily the newest, and may therefore appear somewhere users won’t expect it. You could maintain context and sequence, but your interface also needs to let users know where in time the new content is.
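A minimal sketch of that compromise: keep the list sorted by the time each message was sent, but flag anything that arrived after the last render so the interface can point the reader to it. The `arrivedAt` and `highlight` fields are my invention for this example.

```typescript
// Sketch: show messages in the order they were sent, but mark late arrivals so
// people can be pointed to new content that landed "in the past."

interface Message {
  id: string;
  sentAt: number;     // when the sender wrote it
  arrivedAt: number;  // when this device actually received it
  body: string;
}

function renderOrder(messages: Message[], lastRenderedAt: number) {
  return messages
    .slice()
    .sort((a, b) => a.sentAt - b.sentAt) // chronological by send time, not arrival
    .map(message => ({
      ...message,
      // Anything that arrived since the last render gets highlighted, even if it
      // slots in far above the bottom of the conversation.
      highlight: message.arrivedAt > lastRenderedAt,
    }));
}
```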
Many of these examples are text based, and even if they aren’t (like a map marker), some of them could conceivably have a text-based helper (like a list of map markers next to the map), which can simplify sync-related updates and notifications.
However, we know the number, diversity, and complexity of web applications will continue to increase, as will the types of data their users handle. Some will be collaborative, most will be usable on multiple devices, and many will introduce new and exciting syncing issues. It makes sense to study them, and to develop a common vocabulary for offline scenarios and their solutions.
As we started asking developers from all over the world about these issues, we were surprised at how many people suddenly opened up about their tales of offline woe—realizing they’d had similar problems in the past, but never spoken to others about them. Most battled it out alone, gave up, or put it off, but all secretly wished they had somewhere to turn for offline app advice.
We don’t need to be anonymous, though. We can look to John Allsopp’s call, 13 years ago, to embrace the web as a fluid medium full of unknowns, and to “accept the ebb and flow of things.” Today we realize this extends beyond screen sizes and aspect ratios, feature support and rendering implementations, and holds true even for our work’s very connection to the web itself.
In this even more fluid and somewhat more daunting reality, we’ll all need each other’s help. We should make sure that we, and those who follow us, are equipped with reliable tools and patterns for the uncertainties of the increasingly mobile world—both for the sake of our users and for our own. Web development is complicated enough without wasting extra time reinventing wheels.
To help each other and future generations of designers, developers, and user interface experts, we are inviting you to join the discussion at offlinefirst.org. Our eventual goal is to create an offline handbook that includes UX patterns and anti-patterns, technology options, and research on users’ mental models—creating a repository of knowledge to draw from and contribute to, so our collective efforts and experiences don’t go to waste.
For now, we need to hear from you: about your experiences in this field, your knowledge of tools, your tips and tricks, or even just your challenges. Solving them won’t be easy, but it will improve our users’ experiences—wherever and whenever they need our services. Isn’t that why we’re here?
What happened was, Paul Hoffman, Lauren, and I were sitting up talking about privacy, looking at a WordPress blog, and this weird thing happened: We typed in its address with “https:” at the front, and it showed up as locked/HTTPS in some browsers but not others. It took quite a bit of poking around to figure out.
First, wordpress.com is perfectly happy to accept secure HTTPS connections. Good for them! (Although for something as intensely personal as blogging, I think there’s a strong case that they should force HTTPS for all connections from platforms that can handle it, i.e. anything later than XP).
The culprit turned out to be the blog’s default theme, which pulls in a script over plain, unencrypted HTTP—the classic “mixed content” problem. First, it’s really lame that wordpress.com is doing this kind of insecure crap in the default themes that huge numbers of people use, creating privacy risks that don’t need to exist. Second, this is a common class of error; it happens all over the place. So, what should a browser do when it happens?
Here’s a screen shot:
That’s interesting: Safari goes ahead and runs the insecure code. But, in compensation, it doesn’t show either the lock symbol or “https”.
I suppose you can make an argument that this is Apple It-Just-Works thinking, but the silent auto-security-downgrade makes me a little uncomfortable.
IE 11 on Windows 7, to be precise. (Screenie courtesy of Paul Hoffman.)
It shows the secure HTML but no lock, and there’s a little notice at the bottom offering to “Show all content.” This is praiseworthy, except that the word “content” is a symptom of laziness: they know perfectly well that the security problem is a script to be run, so why not say so? I’d be way more willing to “display insecure content” than “run insecure code”.
By the way, this is quite a bit different from IE10, which showed the lock and didn’t offer any way I could see to run the insecure code.
It shows the lock and, like IE, doesn’t run the script.
But beside the lock in the address bar there’s a little shield thingie you can click on. Firefox still says “content” (it’s a script, dammit) but “blocking” is more accurate than IE’s “not displaying”.
For the longest time, I thought Chrome, which shows the lock, just silently suppressed the unsafe bits. But then I noticed that it has a shield thingie too.
Chrome’s English is a little klunky (“includes script from unauthenticated sources”, sounds like a Russian Bond villain), but it’s perfectly comprehensible and way more accurate than any of the other browsers at informing a human what’s going on. On the other hand, I don’t like the shield hiding where it’ll never be seen, off at the right edge of the address bar.
The browser is trying to help ordinary civilian humans make a potentially dangerous choice, and it’s just not those people’s jobs to know this shit. Yeah, we can quibble about handling corner cases elegantly, and yeah, they matter.
But if your app is doing this to people, then you’re doing it wrong.
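If you want a quick way to check whether your own pages have this problem, here is a rough sketch; browser devtools and Content-Security-Policy reporting are the real tools for this, and the example URL in the comment is invented.

```typescript
// Rough self-check: list script tags loaded over plain HTTP on an HTTPS page.

function findInsecureScripts(): string[] {
  if (window.location.protocol !== "https:") return [];

  return Array.from(document.querySelectorAll<HTMLScriptElement>("script[src]"))
    .map(script => script.src)          // src is the resolved absolute URL
    .filter(src => src.startsWith("http:"));
}

console.log(findInsecureScripts()); // e.g. ["http://s.example.com/widget.js"]
```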
Rick Prelinger’s Lost Landscapes of San Francisco is a movie happening that brings old-time San Francisco footage and our community together in an interactive, crowd-driven event. It will be shown in the majestic Internet Archive building, and your ticket donation will benefit the Internet Archive, which suffered a major fire in November. Please give generously to support the rebuilding effort.
December 18, 2013
300 Funston Ave.
San Francisco, CA 94118
Lost Landscapes returns for its 8th year, bringing together both familiar and unseen archival film clips showing San Francisco as it was and is no more. Blanketing the 20th-century city from the Bay to Ocean Beach, this screening includes newly discovered images of Playland and Sutro Baths; the waterfront; families living and playing in their neighborhoods; detail-rich streetscapes of the late 1960s; the 1968 San Francisco State strike; Army and family life in the Presidio; buses, planes, trolleys, and trains; and a selected reprise of greatest hits.
As usual, the viewers make the soundtrack — audience members are asked to identify places and events, ask questions, share their thoughts, and create an unruly interactive symphony of speculation about the city we’ve lost and the city we’d like to live in.