- Keith Brewster
It was 2006: a year that saw the launch of Twitter, the North American release of the Nintendo Wii, and the demotion of Pluto from a planet to a dwarf planet (RIP 🙏). My friends and I went to the theater to watch an exciting new fantasy movie called Pan's Labyrinth. It's a great movie—still one of my all-time favourites—but the trailer for the film was a little misleading. As the narrator speaks over Javier Navarrete's haunting score, we're treated to glimpses of the film's fantastical creatures and mythopoeic set pieces. We learn about the fable at the center of the story; a tale as starry-eyed as it is born from the nadir of human cruelty. It's a trailer packed with mysticism and intrigue; however, it leaves out one fairly significant detail: the movie is entirely in Spanish.
Also the trailer didn't (and couldn't) prepare me for whatever the hell the Pale Man is.
Between playing in a band and working six years in a factory with dangerous noise levels (and a generally lax attitude towards ear protection), my hearing is not great. I've been watching shows and movies with subtitles for as long as I can remember, so this wasn't my first rodeo. However, I can imagine that it was a fairly jarring experience for anyone who wasn't prepared to spend two hours reading the movie they came to watch. Though the trailer seemingly tiptoed around this small detail, the omission was likely necessitated by the challenge of marketing non-English media to a North American audience (despite the film receiving a hilariously long 22-minute standing ovation at the Cannes Film Festival).
Pan's Labyrinth is generally regarded as one of the greatest fantasy films of all time, so it's safe to say that this classic case of misdirection was ultimately forgiven. Though we've collectively decided to give a pass to Guillermo del Toro, there is a set of tactics being used in much more nefarious ways across the internet. These practices have been given the name Dark Patterns (also a great name for a synthwave band and/or a bad true crime novel). These dangerous design patterns are often employed to control a user's behaviour through deception: tricking users into doing something that they didn't mean to do, or intentionally obfuscating critical paths (like canceling a subscription).
The goal of developing an application is—first and foremost—getting people to use it (and hopefully enjoy using it). Humans already have so little time in the day that it's becoming increasingly difficult to grab a small piece of attention between their morning and evening doomscrolling sessions. Fortunately, creating an app that's both useful and delightful to use will keep users coming back—this is the power of good UX. Unless you're serving an incredibly specific niche, chances are you have competitors. If your application is frustrating or cumbersome, you'll drive your audience straight into their welcoming arms.
Infinite scroll? Challenge accepted.
The second goal of developing an application is usually to make money. However, this can sometimes be fundamentally at odds with the first goal. Making money means lowering churn rate. Lowering churn rate means stopping users from leaving your service. A good UX should provide a user with an easily navigable unsubscribe flow, but we don't want to lose those users, right? Maybe we just massage some of the language, tuck away our cancel membership feature deep in some inconsequential application settings page—we could probably squeeze a little more money out of these people who want to leave, and isn't money a good thing?
The Power of UX
Design is a language. When we see a button with an X on it, we know it's going to close something. If you decided to build an app using the shrimp emoji 🦐 as your close button, you would (rightfully) be run out of town with pitchforks and torches. The language we choose should guide users through the different journeys of our applications—that's why the X in UX stands for eXperience (I know experience doesn't start with an X, and honestly the world would be a better place if it did). And like any experience, it can be good or bad.
It's not just a matter of dotting the i's and crossing the t's; every micro-behaviour in every part of your app is another opportunity to create a bad experience. We need to combine design, performance, and intuitiveness to craft delightful experiences—missing any of these pieces will negatively impact how people use and interpret your application. A bad design hurts credibility, bad performance affects bounce rates, and a lack of intuitiveness will make for a frustrating experience. It's what I'm now affectionately referring to as the Triforce of UX.
This is entirely the intellectual property of Nintendo™, please don't sue me.
I once worked on a website where I was being pushed to add in a bunch of social features. The problem is that giving users the ability to upload a profile picture or add a friend doesn't inherently create a delightful experience—there are mega-corporations, with the world's smartest minds constantly working to reduce users to an algorithm, that still struggle to build a social app that isn't constantly scrutinized by the people using it. Anyways, it turns out that people going online to find manuals for medical equipment didn't really care about writing a bio, so the requested social features were eventually abandoned entirely.
It's important to note that there's a significant difference between bad UX and harmful UX, and this is where dark patterns come into play. Because UX is so intimately rooted in how users behave in our applications, it can be twisted into deceitful methods. Expected behaviours can be subverted, and misdirection can be used to coerce a desirable result. We could increase the prominence of a call to action to draw attention away from that 10px link for skipping past the behaviour we're hoping to encourage.
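To make the misdirection idea concrete, here's a minimal sketch in TypeScript. Everything in it—the dialog shapes, the style fields, and the "twice the font size" threshold—is invented for illustration; real dark patterns are detected by far subtler means, but the asymmetry in visual weight is the core of the trick.

```typescript
// A hypothetical model of two dialog designs. The field names and
// numbers are assumptions made up for this sketch.
interface ActionStyle {
  label: string;
  fontSizePx: number;
  isPrimaryColor: boolean;
}

// Misdirection: a huge, colourful call to action next to a 10px escape hatch.
const darkDialog: ActionStyle[] = [
  { label: "Yes, upgrade me!", fontSizePx: 24, isPrimaryColor: true },
  { label: "skip", fontSizePx: 10, isPrimaryColor: false },
];

// An honest dialog gives both choices comparable visual weight.
const honestDialog: ActionStyle[] = [
  { label: "Upgrade", fontSizePx: 16, isPrimaryColor: true },
  { label: "No thanks", fontSizePx: 16, isPrimaryColor: false },
];

// A crude heuristic: flag a dialog when one action is rendered at more
// than twice the size of another.
function looksLikeMisdirection(actions: ActionStyle[]): boolean {
  const sizes = actions.map((a) => a.fontSizePx);
  return Math.max(...sizes) > 2 * Math.min(...sizes);
}

console.log(looksLikeMisdirection(darkDialog)); // true
console.log(looksLikeMisdirection(honestDialog)); // false
```

The heuristic is deliberately simplistic—prominence also comes from colour, contrast, and placement—but it captures the point: the design isn't neutral, it's weighted toward the outcome the business wants.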
Nothing drives me crazier than ads in Gmail. Ignoring the fact that Google is a $1,978B company, Gmail insists on nesting ads in your inbox that look exactly like every other email. In the mobile app, you can multi-select emails by tapping on the sender's round badge. However, tapping anywhere on an ad (including where you'd tap for multi-select) will open it. This means that when I'm trying to quickly tag a bunch of emails for deletion, my fat fingers will almost always end up opening one of the ads.
Not only is nesting these ads egregious, they have the audacity to recommend me a coding bootcamp.
I had a gym membership at one of the local gyms in my city. It was down the street from where I lived, and it was only $10 a month—both highly attractive features to a broke college student with no car. Even after graduating, my first programming gig paid $14/hour CAD (or around $10.95 USD) so an upgraded gym experience wasn't exactly in the budget. I eventually decided to move to the other end of town, and I lost 50% of the reasons to continue going to that gym.
I phoned them up to cancel my membership, only to be informed that I would need to physically denounce my continued tenure at their establishment in person. With no other choice, I went to the gym to have the exact same conversation (but in a slightly different medium). They asked me why I would possibly want to leave—was my health not worth $10 a month (plus an additional $15 on a quarter-yearly cadence for maintenance fees)? Realizing that they were going to try and shame me into staying, I lied and said I was moving an hour away, which I figured was a reasonable enough distance that it would not encourage any follow-up questions. At that point, I was finally able to renounce my allegiance to the gym and headed home $10 a month richer.
You thought the TOS on software was bad.
What the gym did to me is actually a combination of dark patterns. The first was trying to make it difficult to cancel my membership—this is lovingly referred to as the Roach Motel. This is when a situation is easy to get into, but challenging to get out of. This can be accomplished by making it difficult to find the cancellation flow, or by creating layers of complexity to opt-out. The second is Confirmshaming. This pattern is when language is used to guilt or shame a user for a choice. Sure, I guess they could cancel my gym membership, because clearly I didn't care about my health and wanted to die younger.
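The asymmetry at the heart of the Roach Motel can be sketched in a few lines of TypeScript. The flows and step counts below are invented to mirror my gym story, and the "more than twice as many steps out as in" threshold is an arbitrary assumption—the real tell is simply that leaving is made disproportionately harder than joining.

```typescript
// Hypothetical flows modelled on the gym story above; the steps and
// the threshold are assumptions for illustration only.
const signupFlow: string[] = ["Enter payment details"];

const cancelFlow: string[] = [
  "Phone the front desk",
  "Be told cancellation is in-person only",
  "Travel to the location",
  "Survive the confirmshaming interview",
  "Sign the cancellation form",
];

// A crude heuristic: if leaving takes far more steps than joining,
// the flow smells like a Roach Motel.
function looksLikeRoachMotel(stepsIn: number, stepsOut: number): boolean {
  return stepsOut > 2 * stepsIn;
}

console.log(looksLikeRoachMotel(signupFlow.length, cancelFlow.length)); // true
```

A fair flow would keep the two lists roughly the same length: if you can subscribe with one tap, you should be able to cancel with one tap.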
Not only are these frustrating experiences, but they're also harmful—what if I had moved inconveniently far away? Would I still be expected to make the trip to cancel my membership? It's a David and Goliath scenario; these companies have endless resources to experiment with which techniques provide the most desirable outcome. Without proper education, we can't identify when a company is manipulating us into a specific action. That's why Harry Brignull coined the term dark patterns—we can use it as a resource to protect ourselves from these harmful UX practices, and hopefully begin to hold companies more accountable for their behaviour.
I won't go through every dark pattern in this article, but I encourage you to read through the full list. Not only is it a great point of reference for what not to do, but it's enlightening to see the patterns we've subconsciously experienced for years given an identity. Manipulating a user's behaviour isn't inherently nefarious (after all, we want to help guide predictable behaviour in our applications), but it's a valuable thing to be able to identify when a company is trying to pull a fast one on you. If we can demand better protection for our privacy, we should be able to demand companies cut out their tomfoolery. Oh, and stop putting ads in my inbox, Gmail.