Villains, henchmen and UX for the layman

I’ve been preparing the first lecture of my upcoming course at the School of Visual Concepts. I’ll be introducing UX as a discipline and talking about how UX works in projects. One of the things that excites me about teaching this class is that, unlike my conference talks, I won’t be teaching specialists how to deepen their practice. I’ve got an audience of professionals who aren’t UX folks now and probably never will be. They just want to learn a little roll-up-your-sleeves UX so they can do their jobs better.

This gives me a great excuse to revisit my understanding of UX and reframe it in terms of other roles and projects. There’s a huge difference between how the UX field frames itself versus what fundamentals are actually helpful for beginning practitioners. It’s not useful to spend class time quibbling about ontological bullshit or presenting JJG’s elements of user experience. What’s useful is to learn how to just make the actual thing you’re building not suck.

So I went out to see what UX-to-layman resources I could borrow and crib, and I discovered something of a wasteland. There’s a lot of great UX-to-UX stuff out there, but it’s very deeply embedded in an expert’s framing.  If you’re already a practitioner, it’s straightforward to figure out what kind of information you need and then go get it. Unfortunately, if you’re not a UX practitioner, you don’t have that framing and vocabulary to help you discover how to get things done.

This is the root of the professional cognitive dissonance between what UX does and what people think UX does. Unless we’re in very collaborative environments (small teams, lean methodologies, or just plain lucky), the only thing most of our co-workers see is output deliverables. They see polished wireframes, but they don’t see the thinking: piles of crumpled-up iterative sketches and whiteboard marker stains. They see us talking to users, but they don’t see how we consolidate that data into meaningful insights and map it out. They don’t see our process tools.

“First of all, the deliverable is not the thinking. It’s a record of the thinking and an attempt to share that thinking.” — Jeff Gothelf

If wireframes are all someone sees from UX, that’s all they’ll know to do. When a short-handed project manager or developer needs to step up and design something, they’ll sit down to make wireframes, and it’s a huge pain in the ass: it turns out the design doesn’t integrate here, there’s a showstopping dead end there, and there’s a jungle of tradeoffs everywhere. It’s easy to google and find wireframing tools, but it’s a lot harder to google and find out that even if you think you’re only designing one page, tricks like scribbling out the high-level workflow can make things go a lot more smoothly. You need to employ UX processes, which requires navigating the ambiguity between UX outcome, UX process and UX output.

UX outcome is what we want to achieve, such as the goals that we’re helping users satisfy. When we set an outcome, we’re ensuring that what we’re building solves the problem we’ve identified. We’re balancing the whole product as it is experienced by the user, the business, and the technology. This is the part that keeps us focused and tells us what to build and test.

[Photo: a handwritten user story]

This user story is an example of an outcome. The outcome is stated as a user goal, not as a set of features. This approach makes the outcome a problem that can be solved any number of different ways. As a bonus, it translates well into a testable, measurable task that we can use to make sure our product has an actual impact on users.

UX process is how we break down problems and think through them to arrive at a solution that fits the outcome. These methodologies allow us to take diverse inputs, sort through them, and make informed design decisions. Our toolkit has got all kinds of handy shit in it. This toolkit might look like a heap of post-its and scribbled user notes, but these are actually precision instruments for defining the problem, externalizing our cognition, identifying patterns, ripping through iterations, yada yada. This toolkit is how we draw a straight line from our starting point all the way to UX outcome.

Post-it mental models and affinity diagrams are great not just because they make it easier to consolidate user interviews and pick out patterns, but because they externalize the process and make it easy for entire teams to collaborate on the analysis.

Personas are handy for distilling what we know about our users into memorable artifacts. This one-pager shows our two users that this app needs to support, and brings to the forefront the problem of henchmen losing phones and foiling plans. We now know that secrecy is a problem we need to solve.

UX output is how we represent and communicate insights and designs. It’s a physical thingymajig that everyone on the team can look at and say “oh, so that’s what we’re gonna do”. It’s a boundary object that creates the common understanding needed to get the engines pointed in the right direction.

UX output could be a findings deck, a wireframe, or even an app. In this example, a rough sketch shows how a Snapchat-inspired feature enables the evil villain to share plans with his henchmen while ensuring secrecy.

I want to see more UX resources that make it all about collaboration and opening up user-centered design processes. Yes, it means decentralizing the UX power and lightening up the deliverables, but the end results shine. It just looks like an ordinary whiteboard, but in truth it’s a highly sophisticated social mechanism for externalizing problem-solving processes into a collaborative space. You know, so magic can happen.

Posted in Uncategorized

Content choreography done right

I always have an eye out for good responsive resources. I come across a lot of developer-oriented information and tools, but it’s the conceptual frameworks in this still-young field that really excite me. Some of the best stuff I’ve found has come from the content strategists. In particular, I love this idea of content choreography, which provides a workable framework for designing a flexible content strategy. Trent Walton presents the concept nicely here:

“Media queries can be used to do more than patch broken layouts: with proper planning, we can begin to choreograph content proportional to screen size, serving the best possible experience at any width.”

Jordan explains how to build this out in his rundown on using the flexbox CSS property, along with a great demo that makes everything instantly click. It’s also a great tool for explaining to stakeholders and newbies to the responsive game exactly what content choreography means.
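To make the idea concrete, here’s a minimal sketch of content choreography with flexbox. The class names and breakpoints are my own invention (not from Jordan’s demo): the HTML source order never changes, and media queries re-choreograph the pieces at each width.

```css
/* Assumed markup: <body class="page"> containing
   <main class="main">, <aside class="search">, <aside class="sidebar">. */
.page {
  display: flex;
  flex-direction: column;    /* smartphone first: a single column */
}

.search {
  display: none;             /* smartphone: hide search, lean on location */
}

@media (min-width: 600px) {
  /* tablet: search jumps to the top of the stack */
  .search {
    display: block;
    order: -1;               /* flexbox reorders visually, no markup changes */
  }
}

@media (min-width: 1024px) {
  /* desktop: two columns, search settles back into the sidebar */
  .page   { flex-direction: row; }
  .main   { flex: 3; }
  .search { order: 0; flex: 1; }
}
```

The trick is the `order` property: the visual hierarchy shifts per viewport while the source order (and with it, screen-reader and keyboard order) stays put — which is also why `order` should be used sparingly, only where the visual re-choreography genuinely serves the context of use.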

I’ve not seen very many sites execute content choreography in a meaningful way, but AMC Theatres comes pretty close. It’s gotten a lot of buzz for its sophisticated responsive UX. It’s super tappy, the nav moves to the bottom, and it leans heavily on location services.

But here’s the big deal: content choreography is in full effect here! The information hierarchy shifts at each viewport. For example, let’s look at the search bar. It’s in the usual place at desktop, it’s very prominent at tablet, and it’s gone at smartphone. This makes perfect sense when the user’s needs and context of use are placed front and center: instead of cutting out features and functionality, they prioritized by contextual use case.

Users on desktops will know to look to the right sidebar to find search, and the full-featured website makes it less likely that they’ll need it anyway.


On the other hand, users on tablets might want to know where to go to see a specific movie, but aren’t necessarily out and about, so they have a few more options open to them.


Smartphone users are more likely to be out, about and looking for a movie right now, in which case it makes more sense to leverage location services and prioritize movies playing nearby.




Tools to check responsive and mobile sites

Does this ever happen to you? “I want to check out a mobile site but I’m simply too lazy to reach for my phone and type in the URL.” Or simply “I just want to see what it looks like on a tablet.” Here are a couple of different tools that have helped me out.

Tool #1:

This site emulates desktop, iPad and iPhone viewports and orientations. It does not emulate the user agents so the most you’ll get is an idea of the responsiveness, which frankly you can probably do by resizing the browser window. Still, the ability to quickly toggle to the exact window size is pretty handy.



Tool #2: Chrome Developer Tools.

It turns out Chrome has a little widget built in to override the user agent, meaning that you can emulate specific devices. It doesn’t work perfectly, so if you want 100% accuracy you’ll have to get your hands on the actual mobile device, but it’s not bad for getting the general idea.

To use it, hit F12 (or Command-Option-J on a Mac) to bring up developer tools, then click the gear in the bottom right corner. You’ll see this overlay. Click on the Overrides tab and pick out your desired user agent. Be sure to check “fit in window”.

Anybody find a better way of doing this? Hit me in the comments.


Delta’s customer experience surprise

When I was flying back from Michigan this week, I had an unexpectedly inspiring layover at MSP. It hosts Delta’s customer experience experiment, and it’s really nicely done, with thoughtful UX that pays close attention to the customer’s environmental context and needs.

It’s an attractive, open and inviting environment, and the retail area follows suit: you’re immediately invited to browse, and by the way, perhaps you could use a cup of coffee. Every seat at every table has its own iPad.

The first thing the app asks you to do is select what flight you’re waiting for, and then as you use it, you can access flight updates at any time by sliding up the tab at the bottom. I like this because in addition to being useful, it assures the customer that their needs are accounted for and gets them comfortable. And ready to pull out their wallets.


You can browse the menu and order food from your seat.

There are, of course, appropriate cross-sells.

You can pay for it by swiping your card in the reader built into the station.

Finally, they bring you your order, like magic.

One thing I’m curious about, but didn’t test, is what happens if you try to order something when it’s clear you don’t have enough time to eat before your flight boards.

All in all, it’s a very thoughtful and slick system. However, I was a bit disappointed at how long it took me to get my order; I got my coffee after 10 minutes, and the restaurant was hardly busy. Ben Brignell sums it up best — it really does all come down to service.


UX guidelines for retail displays

My favorite thing about working right in downtown Seattle is the amazing people-watching opportunities. A few months ago, I noticed a crowd of people in front of a Nordstrom’s window. It turned out to be Nordstrom’s Seattle Music touch window (with cool clothing and merchandise casually arranged in the window behind it, natch). People could browse through a timeline of Seattle bands, and selecting a band would blast a sample of their music. It was amazingly effective at getting people to stop in their tracks to explore, play, and reminisce with strangers.

While public displays seem to tap right into a rich vein of human-computer interaction research (Malcolm McCullough’s book on pervasive computing isn’t a bad place to start), I’m not finding nearly enough cross-channel retail case studies that walk the walk. In particular, I had a hard time finding best practices and case studies for retail displays and retail kiosks. I’d like to share a few resources and tips that helped me.

What’s so tricky about kiosks? In-store retail kiosks don’t have the standard use cases and don’t map to the same audiences, context of use, or end goal as a typical retail web experience. The basic questions you’d ask – who’s the audience? where/how are they going to use it? what’s the end goal? – don’t have obvious answers. However, it’s all in the approach.

Who’s the audience?

This is a question you should ask on every project, but in the case of retail kiosks, it’s a bit of a trick question. Customers aren’t actually the sole users (nor necessarily the primary ones), nor is the interaction necessarily a one-to-one relationship between human and computer. Customer service representatives might use them to look up information or demonstrate features, or there might be handoffs between the customer and the rep. For example, GE’s interactive Grid Explorer enables representatives to work with customers to discover solutions. The audience might even be curious passers-by. Once you know who your audience is, you can figure out how best to serve them.

Where are they going to use it?

Think about the physical environment. They might be in the middle of a retail location, with the hustle and bustle of strangers milling about. Should the kiosk attract attention of strangers and invite participation, like HP’s interactive jam session display? If that’s the case, think of ways to make it inviting and playful. Should the kiosk be more private, like Verizon’s bill payment kiosk? Ensure that a stranger can’t steal someone’s identity via a glance at the screen.

How are they using it?

Is it a task-oriented kiosk where the user needs to stand there until something is done, like checking in for a flight? Help the user complete tasks by breaking it up into small, logical steps…really small, logical steps.

Is it more of an exploratory kiosk, like an interactive museum exhibit? Eric Socolofsky puts it best: “Like many other user experiences, museum exhibits must offer quick entry and intuitive interaction—often allowing the visitor to move beyond initial engagement to a deep exploration of a given topic or phenomenon”. Make sure the navigation and information architecture doesn’t assume a linear workflow, but also accommodates people walking up and walking away throughout the entire experience. It should always be easy to pick up where someone else left off, meaning that the navigation shouldn’t be unnecessarily deep or complex.

What is the end goal?  

Task-oriented interfaces might have a simpler end goal housed inside the digital environment, such as “buy this thing” or “click that link”. But the public nature of kiosks opens up the possibility of different types of end goals that reside in the physical world. Examples include encouraging the customer to find a representative or go find a specific product, such as in the digital signage concept that frog created for Intel. The end goal might even be for multiple people to work and interact together, in which case, how do you get strangers to talk to each other?

Pinning down what exactly you want the user to walk away with can make it easier to design the interaction not just between human and device, but also between other humans and the environment.

Here’s some more food for thought:

Do you have any good case studies of retail kiosk design? Please share them in the comments.


How should we train incoming UX practitioners?


I’ve been on both sides of the hiring equation — I’ve been the hopeful new UXer trying to kickstart my career, and I’ve been the interviewer on the other side of the hiring table. Along the way, I’ve learned a few things and formed a few strong opinions about how to keep UX valuable and how to make sure the brilliant and energetic folks joining our field have the tools they need. Read more at Johnny Holland:

When I entered the job market, bright-eyed and clutching a newly-minted Human-Computer Interaction diploma, I was confident that a lush future lay ahead of me. I had a serious rude awakening when I hit the real world. It turned out that despite my brand-new degree in human-computer interaction, I wasn’t the well-rounded practitioner that I needed to be. I was far from a UX craftsman.

Luckily, I landed a job by the seat of my pants, and even more luckily, I scored a fantastic mentor. She was whip-smart, patient and supportive, and she shaped me into a bona fide strategist and user experience architect. This story ends happily for me, but I was lucky, and that’s frightening. It shouldn’t be a matter of luck whether a hardworking user experience professional can learn how to produce quality work; it should be standard and expected.


little big details: weather maps

Architecting user experience entails the big work of carving out spaces for users to work within, but the tiny little details matter just as much. For this reason, I love Little Big Details as a source of inspiration, and it reminds me of one of my favorite little big details.

As a bicyclist, I love weather radar maps that help me guess when storms will break. My go-to radar map now has little tips that appear when I hover, giving me the precise level of zoom I want. This is even better for my particular use case, because sometimes I want to know whether I can bike out within the next 15 minutes (street level), and sometimes within the next 4 hours (state or country level).


In Defense of Doing It the Hard Way

I’m a published author! My article “In Defense of Doing it the Hard Way” has been published in the Interactions March+April issue as part of the Evaluation and Usability Forum. ACM allows me to post a version in my personal collection as well, so here it is!

The job of a user-research professional is undoubtedly a hard one. Understanding problems, getting the right sample of people in our labs, extracting insights from data, and evangelizing the user’s needs can make for challenging work. At the same time, rewards abound in this profession: the joy of diving into a new topic, engrossing conversations with some of the hundreds of people that pass through my lab, and of course the aha moments—those glimmers of awesomeness alone more than make up for any difficulties. But every now and then I wish it were all a little…easier.

In the heat of the workweek, I’ve been tempted by quick fixes and shortcuts. A glance at the battlefield of user research tells me I’m not alone. It seems as if every week I read about some paradigm-shattering new tool that promises to blow my mind, crunch all of my data by 5 o’clock, and have dinner on the table by 7. Tools like these are often pitched to us, an eager audience of open-minded, tired, bored, inexperienced, or budget-starved user-experience evaluators.

These promises are rarely fulfilled. I still end up spending hours hunched over my computer, or I don’t get the insights I was hoping for, or the quality of my work just plain suffers. After many failed experiments, I’m starting to think that these gimmicks and borrowed techniques from other fields amount to shortcuts, and shortcuts are not exactly formulas for success. Worse, I’m concerned that the quality of our work as a whole suffers: Every time we cut corners, we deliver subpar work that waters down the value that user research can offer.

We might not intend to skimp on our work, or we might feel pressured to cut corners in our quest to deliver more work more quickly, but no matter how you slice it, shortcuts aren’t actually doing us any favors. Shortcuts don’t help us produce good work, and if we strive to produce good work, shortcuts don’t actually save time. We have to do it the hard way.

What Do Shortcuts and Cut Corners Look Like?

Let’s get something out of the way first: When picking on shortcuts, I’m not targeting appropriate guerrilla user research. I have no issue with designers or one-man bands who just want to know how to improve their products. They don’t need to do it the hard way, and when they are ready to do it the hard way, they’ll approach their work differently, either by learning new skills or bringing in a seasoned researcher.

Rather, this discussion is intended for people whose primary focus is user research, day in and day out, whose job is to learn more about users and to understand their context. Solid user research requires both sweat and diligent work. Whether we are in the lab running usability studies or out in the field conducting ethnographic research, our core value as user-research professionals lies in our deep understanding of context, our analytical skills, and our ability to bring empiricism into the product-development process.

To put it another way, user-research professionals get hired not just because we are good at excavating truth, but also because we have a knack for mapping the knowns and unknowns around those truths, finding new points to investigate, and communicating the core truths that we learn in a way that’s helpful and productive. When we do that, we can help a design plow all the way to the other end of development, through shifting requirements and slippery scopes, without ever losing focus on the needs that the design was built to address.

It’s our job to ensure that rigor backs our process, and that we are actually being as precise in our measurements as we think we are. Unfortunately, in our line of work there are many opportunities to deceive ourselves into thinking we can save time, energy, or money without sacrificing the precision and accuracy of our work. Anything that doesn’t require much sweat, plodding, or careful attention to detail is a shortcut, whatever form that shortcut may take. Sometimes a shortcut promises to reduce the amount of time we spend planning and executing studies. Other times a shortcut claims to make analyzing data easier. Still other times, a shortcut takes the form of a misapplied tool.

The Shotgun Shortcut: Executing Studies Ineffectively

Generally, the greater the amount of time that a shortcut claims to save me, the more suspicious I am of the shortcut. A case in point: I’m highly suspicious of unmoderated open card sorts, in which remote participants are given a heap of cards and asked to sort them into categories and label the categories. Hosting card sorts online saves time and resources by allowing users to complete them at their leisure and without need for professional attention. However, this adaptation comes at a cost. It sacrifices the main benefit of that particular research methodology, namely access to the rich, qualitative verbal report from our participants that helps us understand the way in which they construe the world. With this understanding, we can address the why of things. Unmoderated card sorts can’t give you this why; they can give you only the what and the when.

One rebuttal is that if we run a large enough sample, we can call it quantitative data. But this is still qualitative data; in adapting this methodology poorly, we lose its intent and its strength. When we mechanize it and remove that ability to follow our participant’s train of thought, we shove all that beautiful qualitative data into a quantitative box. The result is a monster pile of data, stripped of context and of any good foothold from which we can understand what these categories and labels mean to the user and their work.

I have put online card sorting to good use before, of course: in validating a preexisting idea. Only after I’ve interviewed enough people and had in-person sorts and developed a prototype of a navigation tree can I bring the online card sort out of the toolbox and test the ideas I’ve come up with. The context and the why are still missing, but that’s not what I’m trying to get at in this particular study. I fill in those gaps by triangulating data collected from other studies. Of course, once we undertake the difficult task of piecing together data from different sources, we are no longer cutting corners.

The Drive-by Analysis Shortcut: Skimping on Thinking

From fear of analysis paralysis (spending too much time poring over data, needlessly turning over stones, and beating long-dead horses), we can swing to the other extreme and rush through analysis. I have learned that when something does need to be examined thoroughly, nothing can substitute for the grunt work of teasing out answers and squinting our eyes to see if the puzzle is yet complete.

Web analytics readily fall into this trap. They are indispensable and you will have to pry them from my cold dead fingers, but they are stunningly easy to screw up. Here is a basic example that many of us have grappled with: “Time on page is up 20 percent since last month!” If we take statistics like these at face value, we might consider it a win, but we need to dig deeper to figure out what the numbers mean. What kind of design changes have we made since then? Do people spend more time on that page because it takes longer to get stuff done? Did our South Korean users abandon us, taking with them their stunningly high-speed Internet? Increased time on page is just one of many deceptively simple numbers that, without context, raises more questions than it answers.
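A toy illustration of why a bare average misleads (the numbers below are synthetic, invented purely for the example): two very different months can both show up in a dashboard as “time on page up 20 percent.”

```python
# Synthetic per-visit time-on-page samples, in seconds (invented for illustration).
last_month = [30, 35, 40, 45, 50]   # mean = 40s

# Scenario A: readers genuinely more engaged across the board.
engaged = [36, 42, 48, 54, 60]      # mean = 48s, i.e. +20%

# Scenario B: most visits unchanged, but one user got badly stuck.
stuck = [30, 35, 40, 45, 90]        # mean = 48s, also +20%

def mean(xs):
    return sum(xs) / len(xs)

for label, sample in [("last month", last_month),
                      ("engaged", engaged),
                      ("stuck", stuck)]:
    # The headline metric is identical for A and B; the max hints at the difference.
    print(f"{label}: mean={mean(sample):.0f}s, max={max(sample)}s")
```

Both scenarios produce the same headline number, and only looking past the mean (at the distribution, at session recordings, at what changed in the design) tells you whether to celebrate or worry.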

No single bit of analytics data can stand by itself, and analytics is at its most powerful when it’s part of a holistic picture of users, their goals, and their environment. When we bring together insights from other qualitative and quantitative studies, our understanding of the truth sharpens. Because of this, I approach analytics in much the same way that I approach salt: as an essential seasoning to (almost) every main dish.

For example, while planning an in-person usability test, I first sniff out the goals of the study and what we need to learn, and then find ways to season those questions with analytics. If we’re concerned that a new form field will frustrate users, before we conduct a usability test we compare before-and-after numbers on form-completion rates, exit page destinations, and time on page. This helps identify things to watch for in the lab. If analytics show a pattern of folks heading en masse for the “about our company” link, we can keep a special eye on our in-person participants’ behavior and expectations around that link.

At the end of the day, our job is to solve problems. Analytics, like anything else, is a means to an end. Taking an iterative, systematic, and rigorous approach to problem solving yields a clearer connection between the problem, the research done around it, and the design that gets to the root of the issue.

The Square Peg in a Round Hole Shortcut: Using the Wrong Tool for the Job

After moving to a new apartment in an unfamiliar part of town, I drove to work using a familiar route that added five miles to my daily commute. I did this not just once or twice, but every day for over a month. I vaguely knew that there was another road to town, but I was afraid of getting lost, so I didn’t venture out of my comfort zone. In user research, it’s also tempting to stay inside our comfort zone and stick to tried-and-true methodologies even when they are not appropriate for the job. I’ve watched professional user researchers adapt them, stretch them, and hack them together with other methodologies. Inevitably, these “franken-methodologies” resemble a snake with legs stapled onto it, sadly attempting something for which it was never built.

We are not always as careful as we should be when planning research. Once we have identified a research need, it’s fantastically handy to have a wide range of tools and approaches to pick from to address that need. However, different tools answer our questions from different angles, and sometimes we simply pick the wrong angle, ending up with empty or inaccurate answers. All methodologies bias research in some way; when we understand what our bias is, we can take steps to address it. Because of this, it’s essential to understand the strengths and weaknesses of our tools, and what implications that holds for our results.

And here’s a good example: Eye tracking, in all of its popular glory, is a notoriously misapplied methodology. Eye-tracking technology monitors where and for how long people’s eyes fixate on a target. The original idea back in the day was to learn how people read and to correlate eye fixation with cognition. It was long the exclusive tool of labs with very deep pockets, but times have changed, and at UX conferences these days you can’t throw a rock without hitting an eye-tracking vendor. These vendors claim to deliver the power of the eye-tracking lab at dirt-cheap prices. Eye-tracking presentations and seminars (often given by said vendors) spring up like weeds, offering “eye-tracking 101” and “eye-tracking boot camp.” It’s not so expensive, they promise, and not so hard. Anybody can do it.

Great! What’s the catch? Well, eye tracking in UX is based on the premise that the resulting heat maps will reveal thoughts that users don’t verbalize, because they are not conscious of their attention processes. Unfortunately, the heat-map data does not actually represent the user’s mental processes. Like chocolate cake, you have to bake it before you eat it. Cognitive scientists understand this. When they use eye-tracking studies to learn how we process information, they actively take account of all relevant work, no matter the methodology or the discipline. When vendors promote eye tracking as easy and accessible, they gloss over that work, and because the heat maps look scientific, we fall for it.

It’s easy to understand why eye-tracking maps are so easily mistaken for findings. Humans intuit that data is messy, so if it looks nice, it must be analysis-ready. Unfortunately, because eye-tracking is so deceptively easy, it enables enormous fallacies in user research. It’s marvelous at proving other people wrong (“See, I told you green wouldn’t work”), proving our own points (“If the button were red, people would see it”), drawing shaky conclusions (“It’s not that people don’t want to use it, it’s that they don’t see it”) and discrediting our profession (“This isn’t so hard. Remind me again why we’re paying an expert to do this?”).

Like the other shortcuts I’ve mentioned here, eye tracking gives a dangerous amount of latitude for anybody to make their own guesses and draw their own conclusions. Eye-tracking data seems very approachable, and it looks fun to play with. However, its data is stripped of all meaning and context, and when we take it at face value, we run the risk of drawing unsubstantiated conclusions. Unfortunately, our clients may also mistake eye-tracking data for insights, and it’s our responsibility to ensure that they don’t draw unsubstantiated conclusions either. Our clients (who are not trained in the fine art of considering data in a holistic context) need solid information to make solid business decisions. In supporting that need, we must ensure that our insights are rich and that they provide information our clients can trust.

Certainly, eye-tracking studies can be used constructively in our research. But this requires them to be carefully written, carefully moderated and observed, and very carefully analyzed. The results must be situated in an existing understanding of the user’s intentions and workflows. In short, a successful eye-tracking study calls for a skilled practitioner with a sixth sense for subtleties.

And really, if you’re that good at making sense of patterns of user behavior, you probably don’t actually need eye tracking to succeed. Everything that you can learn from eye tracking at this point you can learn using simpler, cheaper methods. If you actually do all of that work, it’s no longer a shortcut. You’re back to doing sweaty labor.

What Can We Do, Then?

These are pitfalls for new and seasoned user researchers alike. Folks new to the field, including those transitioning from research in guerrilla-style environments, might inappropriately adapt techniques they already know, or they might address weak points in their research by Band-Aiding over them. Seasoned practitioners might tire of dealing with stakeholders who don’t care about deep, rich data, so they might look for ways to develop more insights faster, or yield to bad compromises.

Shortcuts, in all their varied and sneaky and tricky disguises, can entrap anybody along the entire spectrum of experience and cause our work to suffer. Even if our enthusiastic adoption of shiny things distracts us from noticing the weaknesses in our research, others will notice the problems. That seed grows into distrust of our individual work, and it can spread into distrust of the user-research profession as a whole.

Many shortcuts share the shiny allure of modern, sophisticated-looking technology, but in the end they are a poor substitute for our critical-thinking skills. They might look good, but they are compromises, and they don’t replace the fundamental skills we should be developing. These skills are not new; they are the skills polished by curious people across all scientific fields: Once we have made an observation and defined the problem, we form a hypothesis and test it. Those skills take a lifetime to perfect, and when we are neck-deep in fads, we can’t hone them. We might suffer the illusion that superficial understandings will suffice, and we might conclude that our restless minds are at their sharpest when wielding the newest of an endless series of gadgets, but in reality we’re letting the best things about us—our curiosity and our intellect—waste away.

This is perhaps the saddest thing about shortcuts. While we’re leaping from gimmick to gimmick, we forget the reason we started poking around and asking questions and knitting our brows in puzzlement. We forget about the basic human need, as old as the wheel, to understand the world and its people. This is a huge undertaking. We should do it right.

About the Author

Leanna is the User Research Coordinator at ITHAKA and a problem-solver by trade. As part of her calling to create holistic and delightful experiences, she manages research studies, conducts social experiments on teammates, and juggles between quantitative and qualitative analysis.

© ACM, 2012. This is the author’s version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in interactions, Vol. 19, Issue 2 (March + April 2012).



we now feature a face to face ordering system!

Just a small reminder that the best solution isn’t always something fancy.


Taken at the Muskegon Subway.


The Irish Tavern: Why cultural context is everything

The wonderful thing about travel is that it constantly reminds me that culture seeps into everything. Culture is everything.

In Dublin, seeing the “Favourite Fried Chicken” restaurant made me chortle. Why? Because fried chicken is an American thing, and with that spelling, you can bet the owner isn’t serving authentic Southern fried chicken. The owner either didn’t consider the cultural context (grabbed the idea at face value) or doesn’t have access to it (hasn’t spent enough time in Georgia).

Churros con chocolate, or little fritters dipped into hot chocolate.

In the United States, “churros” are served in high-end tapas bars. Not so in Spain – they’re served at kiosks, I guess? After a week of intensive research, I still couldn’t reliably find hot, delicious, fresh churros.

In Spain, I saw a bar called “The Irish Tavern”. There’s no such thing as an Irish tavern. There are Irish pubs, which originated as public drinking houses. Instead of appealing to the homesick, the name serves as a giant red flag that the owner doesn’t understand what makes an excellent Irish pub. There will be beer, but what kind? Will there be music? Will it be sociable? Probably not. I’ll just go have some tapas instead.

Context is everything. Culture is everything. Everything is culturally situated. By “culture” I don’t just mean “east vs. west” or “American vs. European”; I also mean all the different, weird, wonderful, teeming cultures that we engage in every day. A heavy metal concert reflects one type of culture. Someone rocking out at that concert may get up early the next day and put on a suit to enact another type of culture. Culture is a shared set of attitudes and meanings that a group works within, and cultures are everywhere.

Understanding cultural context and wielding that understanding is essential in the practice of user experience. Take deliverables. The sprint team loves sketches. Should you give a rough sketch to an executive as a strategy document? Well, obviously not. But should you give a sprint team a polished 50-page strategy brief during a sprint planning session? That’s an equally egregious failure to understand cultural context.

What about usability findings presentations? I give at least one internal presentation every week, sometimes to small teams, sometimes to the entire company, sometimes to small bands of business line owners. A little while back, I gave an internal presentation to 300 people. It was an FYI-style summary of usability findings with an invitation to contact me with questions. I then gave the same FYI-style presentation to a band of product owners. Did that go well? Nope. Cultural context is everything.

How about the user experience strategies you create? Let’s say you’re working for a funky shoe company targeted at teens. Your company would like you to embed promotions for the Twitter account around the site, using it to push coupons and specials to your teenage consumers. Sure, you can do that, but that’s the least of your worries. The first question you should ask is: Will that work? Hmm, maybe, but it’s complicated. danah boyd – a leading scholar on youth culture – highlights some of the questions you’d have to ask about teens and Twitter: What kinds of teens are they? Geeks? Celebrity followers? Are they American? It’s only after exploring the cultural context that tweeting teens navigate that you can put together a meaningful Twitter strategy (or not).

Cultural context is everything.

One of my very favorite Interaction’12 talks was from Eric Dahl, on the topic of cultural design. Eric puts it beautifully in the blurb for the talk:

The products and services we design and deploy are embedded within a culture and not just a context. Culture is an important concept that is often overlooked by designers. We need to think beyond users’ goals, needs, desires, emotions, context, psychology and principles of design; we need to start designing from a place of culture.

Another thought-provoking resource on culture is F.S. Michaels’ Monoculture: How One Story is Changing Everything. BrainPickings has a good write-up summarizing the central premise of the book here. It argues that our overarching culture serves as a template from which we tell stories about ourselves, others and society. Although it doesn’t delve much into the microcultures we participate in (which is more in the realm of interest to user experience practitioners), it’s worth reading.

What cultures do the people you’re designing for participate in?
