This will be a three-part series on ethics in design. In this first part, we’ll cover why context matters when we discuss design ethics. Designer or not, this should be an interesting read: anyone who works on a product influences its shape and its final outcome.
Let’s start with a question:
What is ethics?
“Ethics is the discipline concerned with what is morally good and bad and morally right and wrong.”
Encyclopedia Britannica
We could say that “ethical design” means “designing for good,” but as soon as you go deeper than that, things start to get muddy. What is “good,” and who gets to decide?
We all know the cliché of the shallow designer who only cares about aesthetics and making things pretty, disregarding anything deeper than that.

As a designer, however, you have certain responsibilities. You’re choosing to impact the people who use your products, and you can help or hurt them on many different levels. Even if you’re not actively trying to hurt anyone, you’re still responsible for what you put out into the world and can’t act surprised if it’s used in harmful ways.
For example, facial recognition technology will always carry the biases of the people involved in its development and of the data it was trained on.

On the other hand:
- You need that paycheck at the end of the month
- You have bosses, product managers, and OKRs
- It’s easy to succumb to peer pressure
So you’ll need to balance:
- Your own economic survival
- The interests of the company you’re working for
- The well-being of your users
Users and apps
At the bottom of the barrel, we have companies that fully embrace unethical design. They persuade and pressure users into doing things they don’t actually want to do; they spy on their users to gather more data and cross-sell; they use addictive design patterns to make their apps hard to stop using. That last practice has been documented and sold by Nir Eyal in his book “Hooked: How to Build Habit-Forming Products”, where he explains in detail how you can use his “hook model” to keep users in a never-ending loop.

Seeing his “hook model” laid out, with its triggers and variable rewards, immediately brings to mind casinos and online gambling websites, which is a huge red flag. Admittedly, he does cover the ethics of using this approach, but writing a 200-page book and then dedicating 20 pages to “please use it for good” is simply irresponsible. And it brings us back to the question from the start of the article.
Who gets to decide what’s “good”?
You’re probably familiar with the mindfulness meditation apps that have gained popularity in the last few years, following the “#selfcare” trend and growing awareness of the side effects of antidepressants.

On the surface, these apps are “objectively good”: they aim to help people cope with anxiety and depression. A deeper look shows that they, too, manipulate users into using them. They use streaks to get you to meditate every day and allow your friends to “nudge” you if they see you breaking your streak. For some users, this creates even more pressure and anxiety instead of helping them cope.
Even worse, some research shows that around 8% of people who practice mindfulness meditation experience a worsening of their symptoms. Marketing it as a cure-all for everyone is simply wrong.
If all you have is a hammer…
One of the issues here is the obsession with using technology to “solve problems.” Before you start working on a problem, you need to ask yourself:
- Is this a problem worth solving?
- Does it have to be solved with an app?
- Am I the person who should be solving this problem?
“Can we do it” vs “Should we do it”
Before asking ourselves whether we can make something, we should ask why we are making it. Some technologies just aren’t worth it. Perhaps they bring a marginal improvement to people’s lives and make them slightly more comfortable, but there are trade-offs. Sure, facial recognition will help you unlock your phone faster, and a smart speaker will make it easier to play music, but is it worth it if it means handing over your data and enabling corporations to use it to target marginalized groups?

Question of intent
Even if you’re working on something you think is harmless, there’s still a risk. GitHub is a recent example. On the surface, it’s pretty neutral: it lets you host your code and gives you version control. In reality, it turned out the company has contracts with U.S. Immigration and Customs Enforcement (ICE), which is infamous for denying asylum to refugees, separating undocumented children from their parents, and keeping them in inhumane conditions.

Several of GitHub’s employees protested and, after being ignored by management, resigned. Those who stayed have different reasons: they think they can change things from the inside, they have their own families to think about, or it simply doesn’t bother them that much. The lesson: everything you build, no matter how “neutral” or “objective” you think it is, can be used in harmful ways.
Beyond “users”
It’s often suggested that to combat this, you need a more empathetic approach, such as IDEO’s human-centered design, defined as:
A process that starts with the people you’re designing for and ends with new solutions that are tailor-made to suit their needs
https://www.designkit.org/human-centered-design
The problem is that the focus remains on “users,” and as soon as you’re focusing on something, you’re also ignoring something else. So if you choose to focus on your users, how will your design affect:
- People who are not part of your target audience?
- Society as a whole?
- Non-humans, such as animals and plants?
- Our environment and the planet?
For example, gig economy apps like Uber and Deliveroo brag about disrupting their industries and offering fast, flexible services for their end users. On the other hand, that means their drivers and couriers don’t get a pension, sick pay, holiday entitlement, or parental leave.
Airbnb’s pitch is that it helps travelers book affordable spaces connected to local culture, while giving hosts an easy, flexible way to rent out their homes. It also accelerates gentrification and makes it harder for lower-income locals to buy or rent property, because it becomes very profitable for the wealthy to buy up apartments and rent them out by the night.

One of my favorite examples of laser-focusing on one factor while ignoring others is US presidential candidate Elizabeth Warren’s call to make the military “go green” and work on “environmentally friendly missiles.” Because it’s okay to invade and murder as long as we achieve net-zero carbon emissions.
This brings to mind Mike Monteiro’s quote:
“A broken gun is better designed than a working gun.”
When something is designed to hurt people, designing it “well” would just hurt people more.
In the next part of the series, we’ll cover different ways you can hurt people, intentionally or unintentionally.