Clicks Against Humanity

Gina Costanza Johnson
17 min read · Aug 25, 2021


The Questionable Ethics of Persuasive Design

The evolution of social media and the stealth development of intelligent, sophisticated technology, built by the largest corporations in the world such as Facebook and Apple, continues to revolutionize the globe's ability to connect in ways never before possible. But these technologies no longer align with humanity's best interests, ideologies, and belief systems. This is most evident in the abuse of personal data collection, the abject failure of social media networks to regulate fake news and misinformation, and, worse yet, surveillance with little to no consequence. These unethical practices erode society's optimism and trust in our institutions and democracy, all in the fight for human attention. This article explores persuasive design and its negative impact on humanity today.

B.J. Fogg's Persuasive Technology: Using Computers to Change What We Think and Do

Persuasive technology takes the form of apps, interactive features and triggers, notifications, recommendations, and more. It marries traditional methods of persuasion, using information, incentives, and even coercion, with the new capabilities of digital devices to change user behavior. The conceptualization of persuasive design was pioneered by B.J. Fogg, whose book Persuasive Technology: Using Computers to Change What We Think and Do provides a foundation for the study of computers as persuasive technology. Fogg also published the Fogg Behavior Model (FBM), a framework for analyzing and studying human behavior. The FBM describes three conditions that must converge for a behavior to occur: 1.) motivation, 2.) ability, and 3.) a prompt. Motivation is influenced by pleasure, pain, hope, fear, and social acceptance or rejection.
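To make the model concrete, here is a minimal sketch of the B = MAP idea in Python. This is my own illustration, not Fogg's formalism: the numeric scales, the threshold, and the multiplicative scoring are assumptions chosen purely for demonstration. The point it captures is that a behavior fires only when a prompt arrives while motivation and ability are jointly high enough.

```python
# Illustrative sketch of the Fogg Behavior Model (B = MAP): a behavior occurs
# only when a prompt arrives while motivation and ability together clear an
# activation threshold. Scales and threshold are assumptions, not Fogg's values.

from dataclasses import dataclass

@dataclass
class UserState:
    motivation: float  # 0.0 (none) to 1.0 (very high); hope, fear, social acceptance...
    ability: float     # 0.0 (very hard) to 1.0 (effortless); one-tap actions score high

ACTIVATION_THRESHOLD = 0.5  # illustrative only

def behavior_occurs(state: UserState, prompt_received: bool) -> bool:
    """A behavior happens when motivation and ability jointly clear the
    threshold at the same moment a prompt (trigger) is delivered."""
    if not prompt_received:
        return False  # no prompt, no behavior, however motivated the user is
    return state.motivation * state.ability >= ACTIVATION_THRESHOLD

# Example: a mildly motivated user facing a one-tap "like" button
print(behavior_occurs(UserState(motivation=0.6, ability=0.9), prompt_received=True))   # True
print(behavior_occurs(UserState(motivation=0.6, ability=0.9), prompt_received=False))  # False
```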

Designed Prompts as Triggers

Persuasive design is carefully calibrated to extract our most valuable resource: our attention. Fogg's behavior model maps directly onto today's social media trigger tactics, which are designed to capture users' attention and encourage action. These triggers have proven to be the most effective method for attracting our attention; without a craftily designed call-to-action platform, social media companies would not survive. This leaves us questioning their integrity and their unethical behavior, all for the sake of monetizing human attention.

Ethically questionable persuasive design is not new; it has simply advanced over time. Vance Packard's The Hidden Persuaders, published in 1957, exposed the advertising industry's psychologically manipulative techniques for selling products. Packard explored advertisers' use of consumer motivational research and other psychological techniques, including subliminal tactics, to manipulate expectations and elicit desire for products. He identified eight "compelling needs" that advertisers promise their products will fulfill: 1.) emotional security, 2.) reassurance of worth, 3.) ego gratification, 4.) creative outlets, 5.) love objects, 6.) a sense of power, 7.) roots, and 8.) immortality. According to Packard, these needs are so strong that people buy products merely to gratify them. The book encourages the reader to question the morality of these techniques.

The Hidden Persuaders by Vance Packard

Social media giants are reactively and stealthily designing tools and triggers that elicit action in real time. The digital landscape is so congested with messaging and information that platforms are locked in a constant fight for human attention in order to maintain and grow advertising revenue. This battle for user attention is called "attention economics." Design built around it aims to retain human attention, a scarce commodity, and creates an online environment engineered to increase users' propensity to buy products or services, allowing advertisers to capitalize on users' vulnerabilities in digital spaces.

Attention Economics: The Battle for Human Attention

Can I Have Your Attention?

As Matthew Crawford wrote in The New York Times: attention is a resource; a person has only so much of it. And yet we've auctioned off more and more of our public space to private commercial interests, with their constant demands on us to look at the products on display or simply absorb some bit of corporate messaging. Lately, our self-appointed disruptors have opened up a new frontier of capitalism, complete with its own frontier ethic: to boldly dig up and monetize every bit of private head space by appropriating our collective attention. In the process, we've sacrificed silence, the condition of not being addressed. And just as clean air makes it possible to breathe, silence makes it possible to think.

To overcome attention scarcity, social media companies inundate users with information, forcing them to choose where to allocate their attention. Herbert Simon, the behavioral and cognitive scientist, predicted this dynamic decades ago.

The wealth of information means a dearth of something else: a scarcity of whatever it is that information consumes. What information consumes is rather obvious: it consumes the attention of its recipients. Hence a wealth of information creates a poverty of attention and a need to allocate that attention efficiently among the overabundance of information sources that might consume it.

Simon lived in a period radically different from our own: a world without the internet, smartphones, and ubiquitous personal devices. Yet his theory foreshadowed today's technological reality. Global access to the internet and smart technology has fundamentally restructured our relationship with information and the environment in which we access it.

Stand Out of Our Light

James Williams, a former Google employee and the author of Stand Out of Our Light: Freedom and Resistance in the Attention Economy, describes his philosophy on the subject:

For most of human history, we've lived in environments of information scarcity. In those contexts, the implicit goal of information technologies has been to break down the barriers between us and information. Because information was scarce, any new piece of it represented a novel addition to your life.

On this front, our information technologies have been widely successful; they have indeed broken down the majority of barriers to information flow. Information is now cheap and easily accessible. In fact, it has become so abundant that each new piece no longer represents a valuable addition to your life.

This results in a form of value inversion between information and attention: as the marginal value of information falls, the marginal value of attention rises. In the age of information abundance, what is scarce, and therefore in demand, is the attention we can give to any particular piece of information.

The Algorithms Take Over

Attention scarcity would not be problematic if our attention were not commoditized for profit. But once we direct our attention toward websites and social media platforms and begin to personalize our experience, handing over our data and our media-consumption interests, the algorithm takes over. The platform collects your data, which in turn powers the algorithm and informs content recommendations meant to keep you onsite as long as possible. Time spent onsite equates to realized revenue for the media company.
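To illustrate that loop, here is a hypothetical sketch of an engagement-optimized recommender: behavioral data feeds a score whose only objective is predicted time-on-site. The field names, the scoring heuristic, and the numbers are invented for illustration; no real platform's code is implied.

```python
# Hypothetical sketch of an engagement-optimized recommender: the ranking
# objective is predicted time-on-site, not usefulness to the user.
# Field names and the scoring heuristic are illustrative assumptions.

from typing import Dict, List

def predicted_watch_seconds(item: Dict, user_history: Dict[str, float]) -> float:
    """Score an item by how long this user is expected to stay with it,
    based on affinity to topics they have lingered on before."""
    affinity = user_history.get(item["topic"], 0.0)   # seconds previously spent on this topic
    return item["avg_watch_seconds"] * (1.0 + affinity / 600.0)

def build_feed(candidates: List[Dict], user_history: Dict[str, float], k: int = 3) -> List[Dict]:
    # Rank purely by expected engagement; nothing here asks whether the
    # content serves the user's own goals.
    return sorted(candidates, key=lambda it: predicted_watch_seconds(it, user_history), reverse=True)[:k]

candidates = [
    {"id": 1, "topic": "outrage", "avg_watch_seconds": 45},
    {"id": 2, "topic": "howto",   "avg_watch_seconds": 30},
    {"id": 3, "topic": "outrage", "avg_watch_seconds": 60},
]
history = {"outrage": 1200.0, "howto": 120.0}  # the user's own clicks power the ranking
print([item["id"] for item in build_feed(candidates, history)])  # [3, 1, 2]
```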

As the internet began to scale, the algorithms harvested vast amounts of user data while people navigated the web blindly, unaware that nothing online comes for free. Jaron Lanier, digital visionary, philosopher, and one of the architects of today's digital ecosystem, asserts that the illusion of "free information" lies at the root of our current digital existential crisis. He believes that "free information" was born out of a paradoxical commitment in Silicon Valley, and he expounds on this by recounting his past: "There was this very strong culture in the 1980s and '90s demanding that everything be free. But the problem is, at the same time there was also this worship of Silicon Valley entrepreneurship. So it was a case of two ideologies that each, by themselves, made sense, but combined created this third outcome," known as the "finance through third parties" model, or in plainer terms, financing through advertising. This has normalized a perspective among users in which they largely scoff at paying for internet services (e.g., reading the news, searching the web, using social network sites). Of course, the natural consequence of this "finance through advertising" model is that the technology companies who operate these services must collect vast amounts of user data in order to be profitable. Thus, an economy of user data has emerged on a vast scale.

Jaron Lanier

Tristan Harris, a former design ethicist at Google and co-founder of the Center for Humane Technology, builds on Lanier's ideas, stating in a 2017 TED Talk that "a handful of tech companies control billions of minds every day." The handful of companies Harris references are Facebook, Twitter, and Google. He often expands on this, noting that "these companies prey on our psychology for their own profit," and he calls for "a design renaissance in which tech instead encourages us to live out the timeline we want." Attention scarcity and attention commodification create a system of perverse incentives.

Tristan Harris

The attention merchants must aggressively compete for users' attention in order to monetize it. Every second a person spends scrolling through their Facebook feed is a second not spent watching YouTube, and every second a person spends watching YouTube is a second not spent snapping. As a result, these companies are locked in an arms race for the attention of users. One could conceptualize this as a marketplace for users' attention. What is especially pernicious about this marketplace is that it is all-inclusive; it does not respect the boundaries between our online and offline lives. Facebook, then, does not simply compete with other social networking sites, but with any form of communication or entertainment, online or offline, that is not delivered through its platform. To some, this may seem hyperbolic, but it is an economic reality, one that is not lost on the technology companies themselves. Netflix CEO Reed Hastings, for example, remarked that the media-streaming giant's biggest competitors were not merely Amazon and Hulu, but rather "sleep" and the other "range of things you did to relax and unwind, hang out and connect."

Persuasive Technology and Persuasive Design

When considering the ethical issues around persuasive technology, one needs to differentiate between two types of persuasion: persuasive design and algorithmic persuasion. Persuasive design relies on a small team of designers who embed specific design features, like an autoplay prompt in a video player, for the purpose of maximizing engagement. Engagement in this context refers to the amount of time you spend interacting with a product, message, or advertisement. Algorithmic persuasion is also designed to maximize engagement, but it differs in that it uses data, AI, and algorithms to tailor content. Thus, persuasive design is the "behavioral" method of maximizing engagement, while algorithmic persuasion is the data-informed "algorithmic" method.

Persuasive design relies on manipulating human psychology to achieve engagement goals. James Williams writes that "these 'engagement' goals are petty, subhuman goals." It would be impossible to list every form of persuasive design; however, Tristan Harris highlights several of its major principles. The first is intermittent variable rewards, a phenomenon documented by the psychologist B. F. Skinner. An intermittent variable reward is a habit-forming mechanism in which a person's action, like pulling the lever on a slot machine, is tied to an unpredictable reward, in that case money. As Harris argues, "several billion people have a slot machine in their pocket." Here, rather than money, the variable rewards are social and psychological: push notifications, likes, retweets, and tags. Every time a person picks up their smartphone, the act of clicking, swiping, and scrolling is psychologically connected to the possibility of a reward and a quick dopamine hit when someone likes their photo or shares their post. Sometimes this addictive reward emerges passively from the design; other times it is consciously engineered.
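The slot-machine mechanic is simple enough to sketch in a few lines of Python. The reward probability below is an arbitrary assumption; the unpredictability, not the specific odds, is what makes the checking compulsive.

```python
# Illustrative sketch of an intermittent (variable-ratio) reward schedule,
# the slot-machine mechanic Harris compares to pull-to-refresh and notifications.
# The reward probability is an arbitrary assumption for demonstration.

import random

REWARD_PROBABILITY = 0.3  # on any given check, ~30% chance of a "hit" (like, mention, new follower)

def check_phone() -> bool:
    """One pull of the lever: sometimes there's a social reward, usually not.
    The unpredictability, not the reward size, drives habitual checking."""
    return random.random() < REWARD_PROBABILITY

random.seed(0)
checks = [check_phone() for _ in range(10)]
print(checks)                                   # an irregular pattern of hits and misses
print(f"{sum(checks)} rewards in {len(checks)} checks")
```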

A disconcerting example of such consciously engineered rewards comes from Sean Parker, the former president of Facebook, who publicly stated that Facebook continually asks its developers, "How do we consume as much of your time and conscious attention as possible?" In pursuit of this, Facebook designed its interface with interactive features such as the "like" button to give users that little dopamine hit and further propagate participation on the platform. The goal is a platform that functions as a social validation feedback loop, exploiting human vulnerabilities throughout the entire user experience. The effectiveness of this approach shows up in our media consumption behavior. The average American checks their phone approximately once every 12 minutes, around 80 checks per day. Millennials check theirs even more often, approximately 150 times a day. Harris asks, "Why do we do this? Are we making 150 conscious choices?"

Sean Parker, Former President of Facebook

There are several other forms of persuasive design, all of which mesh into one holistically addictive social experience. Social reciprocity, for example, is a social behavior often exploited by technology for the sake of engagement. Snapchat does this through snap streaks. A streak begins when two friends exchange photos on three consecutive days, and it keeps building as long as they keep snapping each day; miss a single day and the streak disappears. Features like this prey upon young people's sense of social reciprocity in order to keep them using the platform. There are reports of kids turning to their parents and friends to keep up their streaks when they are unable to do so themselves. Psychologists warn parents that this feature may cause kids to create a friendship hierarchy "that can leave some teens afraid to disappoint others if they drop a streak or petrified about any change in status." A feature like snap streaks not only hooks users but fundamentally redefines what it means to be a friend by quantifying and gamifying relationships. There is a litany of other examples of persuasive design, such as the infinite scroll.

Harris cites the infamous “bottomless bowl” study conducted by Brian Wansink from Cornell. In this experiment, some participants were asked to eat soup from a bowl being slowly and imperceptibly refilled from the bottom. He demonstrated that these participants consumed 73% more soup than the control group despite the fact that they did not believe they had consumed more. Harris argues that tech companies utilize this same “bottomless” principle in their product design. “News feeds are purposely designed to auto-refill with reasons to keep you scrolling. This is also why video and social media sites like Netflix, YouTube, or Facebook autoplay the next video after a countdown instead of waiting for you to make a conscious choice in the event you don’t.” Of course, this is not an exhaustive list of examples. However, they should be sufficient to illustrate the extent to which these features are meticulously designed with the goal of persuasion in mind.
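As a rough illustration (not any platform's actual code), the "bottomless" principle reduces to two defaults: refill the feed before the user ever reaches its end, and play the next item unless the user actively opts out. The function names and thresholds below are invented for the sketch.

```python
# Schematic sketch of the "bottomless" feed principle: the feed refills itself
# before the user can reach a natural stopping point, and the next video plays
# by default. Names and thresholds are illustrative, not any real platform's API.

FEED_PAGE_SIZE = 10
PREFETCH_MARGIN = 3          # refill when the user is within 3 items of the end
AUTOPLAY_COUNTDOWN_SECONDS = 5

def maybe_refill(feed: list, current_index: int, fetch_more) -> list:
    """Top the bowl up before the user notices it was ever finite."""
    if len(feed) - current_index <= PREFETCH_MARGIN:
        feed = feed + fetch_more(FEED_PAGE_SIZE)
    return feed

def next_action(user_clicked_stop: bool) -> str:
    """Unless the user makes a conscious choice to stop, the default is more."""
    return "stop" if user_clicked_stop else f"autoplay next after {AUTOPLAY_COUNTDOWN_SECONDS}s countdown"

feed = list(range(10))
feed = maybe_refill(feed, current_index=8, fetch_more=lambda n: list(range(10, 10 + n)))
print(len(feed))                              # 20 -- the bottom has quietly moved
print(next_action(user_clicked_stop=False))
```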

Where persuasive design is the "behavioral" method of persuasion, algorithmic persuasion is a technologically driven method, made possible through data, AI, and algorithms. Sometimes the effect is indirect, as with curated content and recommendation features. Other times it is more direct, as with A/B testing. Consider the indirect case first. When designed to optimize the "time spent" metric, curated feeds like Facebook's News Feed and content recommendation engines tend to surface whatever will keep users on the platform. As Jaron Lanier points out, most algorithms used in tech are "adaptive," meaning that they make use of randomness to see whether small changes result in behavior that maximizes a given metric. He provides the following example:

Let's suppose an algorithm is showing you an opportunity to buy socks or stocks about five seconds after you see a cat video that makes you happy. An adaptive algorithm will occasionally perform an automatic test to find out what happens if the interval is changed to, say, four and a half seconds. Did that make you more likely to buy? If so, that timing adjustment might be applied not only to your future feed but to the feeds of thousands of other people who seem correlated with you because of anything from color preferences to driving patterns.

His example illustrates how far data and AI-powered algorithms have advanced in systematizing persuasion at scale. Each user's behavior on these platforms helps persuade everyone else; it is a round-the-clock experiment in which we are all participants. What makes this more concerning is that it works even when it preys on the worst of human nature. Lanier points out that outrageous, negative, and sensationalized content tends to get more clicks and views than gracious, positive, and nuanced content, so it is not surprising that social media has become a profoundly toxic place. While human vulnerabilities are partly to blame, the persuasive tactics of the attention economy are nearly impossible to overcome. The features are so precisely optimized to earn your attention that media companies win more often than not, and they continue to remain profitable.
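A crude sketch of the kind of adaptive tweak Lanier describes: perturb a single timing parameter, keep whichever value yields a better purchase rate, and roll the winner out. The metric, the simulated user response, and the update rule below are all invented for illustration.

```python
# Rough sketch of an "adaptive" algorithm in Lanier's sense: randomly perturb a
# timing parameter, measure whether the purchase metric improves, keep the winner.
# The simulated response and the update rule are illustrative assumptions.

import random

def measure_purchase_rate(delay_seconds: float) -> float:
    """Stand-in for observing real users; here we pretend 4.5s happens to convert best."""
    return max(0.0, 0.10 - 0.02 * abs(delay_seconds - 4.5)) + random.gauss(0, 0.005)

def adapt_delay(current_delay: float, trials: int = 50) -> float:
    best_delay, best_rate = current_delay, measure_purchase_rate(current_delay)
    for _ in range(trials):
        candidate = best_delay + random.choice([-0.5, 0.5])   # small automatic test
        rate = measure_purchase_rate(candidate)
        if rate > best_rate:                                   # keep whatever "works"
            best_delay, best_rate = candidate, rate
    return best_delay  # then rolled out to users who look statistically similar

random.seed(1)
print(round(adapt_delay(current_delay=5.0), 1))  # drifts toward the delay that sells most
```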

A/B Testing

The second, more direct, form of algorithmic persuasion is A/B testing. An A/B test is a randomized controlled experiment. Users are divided into two groups: the control group sees the standard user interface, while the treatment group sees a slightly modified version. The modification is small, but by comparing how the two groups behave, engineers and researchers can identify its effect.

According to Seth Stephens-Davidowitz in his book Everybody Lies, the A/B test was first used by Google engineers in 2000 to determine the optimal number of search results to display per page. They ran a test in which the treatment group was shown twenty links per page while the control group was shown the standard ten. The "engineers then compared the satisfaction of the two groups based on how frequently they returned to Google."

This is the key lesson of A/B testing: the metrics technology companies care about are the metrics these tests optimize for, and those metrics, once again, align with "time spent." What makes this troubling is that these tests have numerous advantages over traditional randomized experiments. They are quick, inexpensive, and easily replicated. Engineers do not need to recruit or pay participants. A simple program can ensure perfect randomization, and it can be run on thousands of users without them even realizing it. All of this is made possible through data.
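The mechanics are easy to sketch: deterministically hash users into control and treatment, serve two variants, and compare the metric the company actually optimizes for. Everything below, the experiment name, the metric, the data, is a generic illustration, not any company's pipeline.

```python
# Generic sketch of an A/B test: deterministic hash-based assignment, two
# interface variants, and a comparison on the metric being optimized (here,
# minutes of time spent -- an assumption for illustration).

import hashlib
from statistics import mean

def assign_variant(user_id: str, experiment: str) -> str:
    """Hash-based assignment: cheap, reproducible, and invisible to the user."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "treatment" if int(digest, 16) % 2 == 0 else "control"

def analyze(sessions) -> dict:
    """Compare mean time-on-site between the two groups."""
    groups = {"control": [], "treatment": []}
    for s in sessions:
        groups[assign_variant(s["user_id"], "blue_shade_test")].append(s["minutes_on_site"])
    return {name: round(mean(vals), 2) for name, vals in groups.items() if vals}

sessions = [
    {"user_id": "u1", "minutes_on_site": 34.0},
    {"user_id": "u2", "minutes_on_site": 41.5},
    {"user_id": "u3", "minutes_on_site": 28.0},
    {"user_id": "u4", "minutes_on_site": 52.0},
]
print(analyze(sessions))  # ship whichever variant "wins" on time spent
```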

According to Stephens-Davidowitz, Google engineers ran approximately seven thousand A/B tests in 2011. One designer even quit after Google tested forty-one marginally different shades of blue. Facebook goes further still, currently running around a thousand A/B tests per day. Once again, the experiments we all unknowingly take part in are designed for a purpose. Stephens-Davidowitz concludes that through testing, Facebook was able to figure out that making a particular button a particular color gets people to come back to the site more often, so they change the button to that color. Then they may figure out that a particular font gets people to come back more often, so they change the text to that font. Pretty soon, Facebook becomes a site optimized to maximize how much time people spend on the platform. In other words, find enough winners of A/B tests and you have an addictive site.

Persuasively designed features and triggers are everywhere, engineered to maximize attention, and the result is distraction. Williams outlines two distinct, yet not mutually exclusive, forms of distraction: functional and existential. Functional distraction is the most immediate and intuitive form. It attacks what Williams calls the "spotlight" of our attention, the moment-to-moment direction of our cognitive focus. This distraction draws us away from the daily tasks and goals we have set for ourselves and replaces them with endless feeds, recommended videos, and notifications. It is what happens when we cannot read a book for more than ten minutes without checking our phones. By design, the algorithms know when you have been idle longer than usual; to win back your attention, they push out notifications that are nearly impossible to ignore. Once you click, you are sent down a rabbit hole that becomes less and less relevant the longer you stay. The consequence of this distraction is clear: it drowns out our ability to complete the tasks we find important. Distraction equates to "time spent" rather than "time well spent." What is particularly pernicious is that it permeates the rest of our lives.
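The re-engagement pattern mentioned above can be sketched as a simple rule. This is hypothetical, not any platform's documented behavior: notice when a user's silence exceeds their usual rhythm, then send a notification crafted to pull them back. The threshold and message text are invented for illustration.

```python
# Hypothetical sketch of an idle-triggered re-engagement rule: if a user has been
# away noticeably longer than their typical gap between visits, push a notification
# designed to pull them back. Thresholds and message text are invented.

from datetime import datetime, timedelta

def should_notify(last_active: datetime, typical_gap: timedelta, now: datetime) -> bool:
    """Fire when the current silence exceeds 1.5x this user's normal rhythm."""
    return (now - last_active) > typical_gap * 1.5

def build_notification(username: str) -> str:
    # Social bait tends to reclaim attention better than informational bait.
    return f"{username}, you have new activity waiting for you"

now = datetime(2021, 8, 25, 21, 0)
last_active = datetime(2021, 8, 25, 18, 0)
if should_notify(last_active, typical_gap=timedelta(hours=1), now=now):
    print(build_notification("alex"))
```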

Williams explains that "exposure to repeated notifications can create mental habits that train users to interrupt themselves, even in the absence of the technologies themselves." In other words, technology does not just distract us; it trains us to be distracted. These technologies do not compete only with other online sources for our attention, but with offline sources as well, and this adds another dimension to the landscape: they compete not merely with external sources for our attention, but with internal ones, the thoughts, tasks, and goals within our own minds. This daily functional distraction feeds a more serious problem: existential distraction.

Williams calls this "the 'starlight' of our attention, or our ability to navigate 'by the stars' of our higher values or goals." Existential distraction occurs when we become so enamored with technology, so hooked on dopamine hits, Facebook likes, and endless feeds, that we begin to lose our ability to construct an identity anchored to our own ideals and values. In that state, people are no longer autonomous or free. Our information and communication technologies may not produce behavior this extreme, but they play on exactly the same dangerous circuitry.

Stephens-Davidowitz quotes Harris: "There are a thousand people on the other side of the screen whose job it is to break down the self-regulation you have." What could we be doing with our time if we did not check our phones 150 times a day?

Existential distraction not only obscures our ability to pursue our goals but also dilutes those goals when we do manage to pursue them. This is predicated on the concept of co-shaping, in which society's values shape technology, and technology in turn shapes society's values. It is no surprise, then, that heavy technology use is associated with mental health issues. Studies of social media platforms indicate that time spent on site harms academic performance and relationships and reduces offline time spent with family, friends, and communities.

To reinvent social media and technology so that they align with our values and ideologies, we must work toward collective action. We need systemic reform that shifts technology corporations toward serving the public interest first and foremost, and we need to think bigger about how much systemic change is possible and how to harness the collective will of the people. Ultimately it comes down to setting the right rules. It is difficult for any one actor to optimize for well-being and alignment with society's values while other players are still competing for finite resources and power; without rules and guardrails, the most ruthless actors win. That is why legislation and policy are necessary, and why social media leaders must reinvent and reimagine design norms, moving toward norms that reward the betterment of people, not profit.

References

Bosker, Bianca. "What Will Break People's Addictions to Their Phones?" The Atlantic, Atlantic Media Company, 6 Jan. 2017, www.theatlantic.com/magazine/archive/2016/11/the-binge-breaker/501122/.

Harris, Tristan. “How Technology Hijacks People’s Minds.” 2016. http://www.tristanharris.com/essays/.

Harris, Tristan. "How a Handful of Tech Companies Control Billions of Minds Every Day." TED, 2017, www.ted.com/talks/tristan_harris_how_a_handful_of_tech_companies_control_billions_of_minds_every_day.

Hastings, Reed. "Netflix Declares War On Sleep, Its Biggest Competitor." Newsweek, 2017, https://www.newsweek.com/netflix-binge-watch-sleep-deprivation-703029.

Gold, Jodi. "Experts Warn Parents How Snapchat Can Hook In Teens with Streaks." ABC News, 2017, https://abcnews.go.com/Lifestyle/experts-warn-parents-snapchat-hook-teens-streaks/story?id=48778296.

Lanier, Jaron. Ten Arguments for Deleting Your Social Media Accounts Right Now. New York: Henry Holt and Company, 2018.

Crawford, Matthew B. "The Cost of Paying Attention." The New York Times, 7 Mar. 2015, https://www.nytimes.com/2015/03/08/opinion/sunday/the-cost-of-paying-attention.html.

Packard, Vance. The Hidden Persuaders. New York: David McKay Co, 1957.

Parker, Sean. "Ex-Facebook President Sean Parker: Site Made to Exploit Human 'Vulnerability.'" The Guardian, 2017, https://www.theguardian.com/technology/2017/nov/09/facebook-sean-parker-vulnerability-brain-psychology.

Simon, Herbert. "Designing Organizations for an Information-Rich World." In Computers, Communications, and the Public Interest, 40–41. Baltimore: Johns Hopkins Press, 1971.

Stephens-Davidowitz, Seth. Everybody Lies: Big Data, New Data, and What the Internet Can Tell Us About Who We Really Are. New York: Dey Street Books, 2017.

Wansink, Brian. “Bottomless Bowls: Why Visual Cues of Portion Size May Influence Intake.” Obesity Research 13, no. 1 (2005): 93–100.

"We Need to Have an Honest Talk About Our Data." Wired, 2018, https://www.wired.com/story/interview-with-jaron-lanier/.

Williams, James. Stand Out of Our Light: Freedom and Resistance in the Attention Economy. Cambridge: Cambridge University Press, 2018.

Wu, Tim. The Attention Merchants. New York: Alfred A. Knopf, 2016.


Gina Costanza Johnson

Digital Media Change Agent | Digital Philanthropist | Digital Design Ethicist | Humane Technology Advocate