I’m Scott Bryan, a designer writing about ethics in tech. You’re reading this because you, most likely, signed up for my weekly newsletter, 1-2-3 Tech Miscellany.
If you find yourself here via some other serendipitous path, feel free to subscribe below.
Each week, I’ll introduce you to 1 ethical dilemma, 2 good news stories and 3 random quotes, tweets or other oddities.
A number of readers have told me that this newsletter no longer appears in their inbox.
To ensure it reaches your inbox every week, and is not automatically filtered into another folder, please do one or both of the following:
add email@example.com to your contacts
move this email from your ‘promotions’ folder (if using Gmail) to your inbox
Of course, if you no longer wish to receive my newsletter, you are welcome to unsubscribe. To do so, click on the link at the bottom of this email.
Reading time: 3 minutes
Part 1 - Ethical Dilemma
Is there a problem with bad language in tech?
We trust the technology we use to be a helpful companion in our lives. But the “behind-the-scenes” language of the tech industry suggests this trust isn’t always merited.
This linguistic shortcoming is captured in an anecdote from Anna Wiener's Uncanny Valley, a memoir about seven reluctant years spent working in Silicon Valley:
“In the spring, the startup released a new feature called Addiction. It was an inspired decision, executed brilliantly by the engineers. Every company wanted to build an app that users were looking at multiple times a day. They wanted it to be sticky - the stickiest. Addiction quantified and reinforced this anxiety and obsession.
The novelty of Addiction was exciting, but the premise made me uneasy. App addiction wasn’t something I wanted to encourage. Addiction was a generational epidemic.
I brought up my qualms to Kyle.
"We may as well call our funnel reports Anorexia," I said. "Let’s start calling churn rates Suicides."
"I hear you," he said. "The question of addiction is a big thing in gaming. It’s nothing new. But I don’t see any incentive for it to change. We already call our customers ‘users.’"
(Anna Wiener, Uncanny Valley)
In Tech We Trust
The tech industry cares not for your intention, but rather your attention. It manipulates your behaviour toward its goals rather than your own.
Companies use “dashboards” and “metrics” to measure their success, counting “number of clicks”, “time on site”, and “total conversions” to make sure their methods are working.
Sometimes they speak from a battleground, where users are “engaged” and “targeted” — “called to action” from the “front lines” and “trenches” to help with the “blitzscale.”
Other times, it’s a call from the sausage factory, where “eyeballs” are “segmented” and “funnelled” into a “channel.”
Success is also measured in epidemiological terms. The “virality” of something online is seen as a positive metric, applied equally to a funny cat video and to a deliberate misinformation campaign.
This feels wrong. And in the midst of a pandemic, it’s especially so.
Ultimately, this “behind-the-scenes” language of the tech industry feels disrespectful. It lacks humanity and empathy.
But how can it change?
The Limits Of Language
A century ago, philosopher Ludwig Wittgenstein wrote, “the limits of my language mean the limits of my world.” Rephrased, we might say, “the limits of our language mean the limits of our empathy.”
The tech industry’s limited choice of language suggests a limited empathy for our true goals — the kind of goals we’ll regret not having achieved on our deathbed. Goals like learning to play the piano, or spending more time with our kids.
As this disparity shows, the metaphors we use shape our thinking and our responsibility. So the industry needs to re-engineer the way it talks about “users”. It must find more human words for human beings.
Instead of counting “number of clicks”, they could count “number of goals achieved.” Instead of “time on site”, it could be “time on task.” Instead of “total conversions” what about “total personal intentions converted?”
“Viral” might be a good metaphor to describe the spread of misinformation, but not to describe everything that’s wildly popular on the web. It makes the term feel benign, almost fun. It makes us feel immune to the dangers of the disease of misinformation, forgetting we can, in fact, be carriers, helping to spread it.
These hurdles aren’t technical. They’re cultural. Anna Wiener explains why:
“I talked about compassionate analytics. He talked about optimizing. I wanted a team of tender hearts. He wanted a team of machines.”
“The industry’s pervasive idealism was increasingly dubious. Tech, for the most part, wasn’t progress. It was just business.”
The required redesign of language is not yet in fashion. It's business as usual on the linguistic front.
Nonetheless, it’s worth simply drawing attention to the problem, knowing, like Emerson, that “sometimes a scream is better than a thesis.”
Part 2 - Good news you may not have heard
1. Nutrition Labelling For Online Privacy
A lengthy terms-and-conditions agreement that no one reads or understands, because it’s designed to be obtuse, is no longer an acceptable practice.
But it can't stop there.
Every single app and website should be compelled to reveal information on data collection and tracking — clearly and concisely.
But it's still not enough.
The App Store continues to allow a Saudi Arabian app that lets husbands track their wives, for example. This shows that self-labeling is a helpful start, but not a long-term solution.
Such a requirement needs to be independently regulated and not left to the whim of individual companies.
Read more on Vox
2. Brainiacs in tech
MIT Technology Review just released its 2020 list of the world’s top innovators under the age of 35.
My personal favourite is Bo Li.
Most developments around AI relate to its role in the digital space. Li’s research, however, focuses on exposing and fixing flaws in how AI functions in the physical world.
For example, Li arranged small black-and-white stickers on a stop sign in a graffiti-like pattern. They were visible to the human eye, but appeared random and didn’t obscure the sign’s clear instruction to stop. Yet the arrangement was deliberately designed to make an autonomous vehicle do the opposite: read the sign as a speed limit and continue driving at 45 mph.
Li is devising ways to make AI better able to defend against such attacks.
Read more about Li and the other innovators at MIT Technology Review
Part 3 - Quotes, Tweets And Other Oddities
"Columbus 'discovered' America the same way our parents 'discovered' Facebook: a long-inhabited place was utterly ruined upon their arrival." - Nat Baimel
Until next week, thanks for reading.