Excerpted from "Ten Arguments for Deleting Your Social Media Accounts Right Now" by Jaron Lanier. Published with permission.



Something entirely new is happening in the world. Just in the last five or ten years, nearly everyone started to carry a little device called a smartphone on their person all the time that's suitable for algorithmic behavior modification. A lot of us are also using related devices called smart speakers on our kitchen counters or in our car dashboards. We're being tracked and measured constantly, and receiving engineered feedback all the time. We're being hypnotized little by little by technicians we can't see, for purposes we don't know. We're all lab animals now.

Algorithms gorge on data about you, every second. What kinds of links do you click on? What videos do you watch all the way through? How quickly are you moving from one thing to the next? Where are you when you do these things? Who are you connecting with in person and online? What facial expressions do you make? How does your skin tone change in different situations? What were you doing just before you decided to buy something or not? Whether to vote or not?

All these measurements and many others have been matched up with similar readings about the lives of multitudes of other people through massive spying. Algorithms correlate what you do with what almost everyone else has done.

The algorithms don't really understand you, but there is power in numbers, especially in large numbers. If a lot of other people who like the foods you like were also more easily put off by pictures of a candidate portrayed in a pink border instead of a blue one, then you probably will be too, and no one needs to know why. Statistics are reliable, but only as idiot demons.

Are you sad, lonely, scared? Happy, confident? Getting your period? Experiencing a peak of class anxiety?

So-called advertisers can seize the moment when you are perfectly primed and then influence you with messages that have worked on other people who share traits and situations with you.

I say “so-called” because it's just not right to call direct manipulation of people advertising. Advertisers used to have a limited chance to make a pitch, and that pitch might have been sneaky or annoying, but it was fleeting. Furthermore, lots of people saw the same TV or print ad; it wasn't adapted to individuals. The biggest difference was that you weren't monitored and assessed all the time so that you could be fed dynamically optimized stimuli — whether “content” or ad — to engage and alter you.

Now everyone who is on social media is getting individualized, continuously adjusted stimuli, without a break, so long as they use their smartphones. What might once have been called advertising must now be understood as continuous behavior modification on a titanic scale.

Please don't be insulted. Yes, I am suggesting that you might be turning, just a little, into a well-trained dog, or something less pleasant, like a lab rat or a robot. That you're being remote-controlled, just a little, by clients of big corporations. But if I'm right, then becoming aware of it might just free you, so give this a chance, okay?

A scientific movement called behaviorism arose before computers were invented. Behaviorists studied new, more methodical, sterile, and nerdy ways to train animals and humans.

One famous behaviorist was B. F. Skinner. He set up a methodical system, known as a Skinner box, in which caged animals got treats when they did something specific. There wasn't anyone petting or whispering to the animal, just a purely isolated mechanical action — a new kind of training for modern times. Various behaviorists, who often gave off rather ominous vibes, applied this method to people. Behaviorist strategies often worked, which freaked everyone out, eventually leading to a bunch of creepy “mind control” sci-fi and horror movie scripts.

An unfortunate fact is that you can train someone using behaviorist techniques, and the person doesn't even know it. Until very recently, this rarely happened unless you signed up to be a test subject in an experiment in the basement of a university's psychology building. Then you'd go into a room and be tested while someone watched you through a one-way mirror. Even though you knew an experiment was going on, you didn't realize how you were being manipulated. At least you gave consent to be manipulated in some way. (Well, not always. There were all kinds of cruel experiments performed on prisoners, on poor people, and especially on racial targets.)

This book argues in ten ways that what has become suddenly normal — pervasive surveillance and constant, subtle manipulation — is unethical, cruel, dangerous, and inhumane. Dangerous? Oh, yes, because who knows who's going to use that power, and for what?


You may have heard the mournful confessions from the founders of social media empires, which I prefer to call “behavior modification empires.”

Here's Sean Parker, the first president of Facebook:

We need to sort of give you a little dopamine hit every once in a while, because someone liked or commented on a photo or a post or whatever…. It's a social-validation feedback loop… exactly the kind of thing that a hacker like myself would come up with, because you're exploiting a vulnerability in human psychology…. The inventors, creators — it's me, it's Mark [Zuckerberg], it's Kevin Systrom on Instagram, it's all of these people — understood this consciously. And we did it anyway… it literally changes your relationship with society, with each other… It probably interferes with productivity in weird ways. God only knows what it's doing to our children's brains.

Here's Chamath Palihapitiya, former vice president of user growth at Facebook:

The short-term, dopamine-driven feedback loops we've created are destroying how society works…. No civil discourse, no cooperation; misinformation, mistruth. And it's not an American problem — this is not about Russian ads. This is a global problem… I feel tremendous guilt. I think we all knew in the back of our minds — even though we feigned this whole line of, like, there probably aren't any bad unintended consequences. I think in the back, deep, deep recesses of our minds, we kind of knew something bad could happen… So we are in a really bad state of affairs right now, in my opinion. It is eroding the core foundation of how people behave by and between each other. And I don't have a good solution. My solution is I just don't use these tools anymore. I haven't for years.

Better late than never. Plenty of critics like me have been warning that bad stuff was happening for a while now, but to hear this from the people who did the stuff is progress, a step forward.

For years, I had to endure quite painful criticism from friends in Silicon Valley because I was perceived as a traitor for criticizing what we were doing. Lately I have the opposite problem. I argue that Silicon Valley people are for the most part decent, and I ask that we not be villainized; I take a lot of fresh heat for that. Whether I've been too hard or too soft on my community is hard to know.

The more important question now is whether anyone's criticism will matter. It's undeniably out in the open that a bad technology is doing us harm, but will we — will you, meaning you — be able to resist and help steer the world to a better place?

Companies like Facebook, Google, and Twitter are finally trying to fix some of the massive problems they created, albeit in a piecemeal way. Is it because they are being pressured or because they feel that it's the right thing to do? Probably a little of both.

The companies are changing policies, hiring humans to monitor what's going on, and hiring data scientists to come up with algorithms to avoid the worst failings. Facebook's old mantra was “Move fast and break things,” and now they're coming up with better mantras and picking up a few pieces from a shattered world and gluing them together.

This book will argue that the companies on their own can't do enough to glue the world back together. Because people in Silicon Valley are expressing regrets, you might think that now you just need to wait for us to fix the problem. That's not how things work. If you aren't part of the solution, there will be no solution.

This first argument will introduce a few key concepts behind the design of addictive and manipulative network services. Awareness is the first step to freedom.


Parker says Facebook intentionally got people addicted, while Palihapitiya is saying something about the negative effects on relationships and society. What is the connection between these two mea culpas?

The core process that allows social media to make money and that also does the damage to society is behavior modification. Behavior modification entails methodical techniques that change behavioral patterns in animals and people. It can be used to treat addictions, but it can also be used to create them. The damage to society comes because addiction makes people crazy. The addict gradually loses touch with the real world and real people. When many people are addicted to manipulative schemes, the world gets dark and crazy.

Addiction is a neurological process that we don't understand completely. The neurotransmitter dopamine plays a role in pleasure and is thought to be central to the mechanism of behavior change in response to getting rewards. That is why Parker brings it up.

Behavior modification, especially the modern kind implemented with gadgets like smartphones, is a statistical effect, meaning it's real but not comprehensively reliable; over a population, the effect is more or less predictable, but for each individual it's impossible to say. To a degree, you're an animal in a behaviorist's experimental cage. But the fact that something is fuzzy or approximate does not make it unreal.
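The statistical character of this kind of influence is easy to demonstrate in miniature. The sketch below is purely illustrative, with invented numbers (a 10 percent baseline rate and a 3 percent nudge): for any single simulated person the outcome looks like a coin flip, yet across a large population the aggregate lift is steady and predictable.

```python
import random

def measure_lift(population=100_000, base_rate=0.10, nudge=0.03, seed=1):
    """Compare a nudged group to a control group of the same size.

    Each simulated person acts with probability base_rate, plus
    nudge if they received the engineered stimulus. All numbers
    here are invented for illustration.
    """
    rng = random.Random(seed)
    nudged = sum(rng.random() < base_rate + nudge for _ in range(population))
    control = sum(rng.random() < base_rate for _ in range(population))
    return (nudged - control) / population

# No individual outcome is predictable, but the population-level
# lift reliably lands near the 3 percent that was engineered in.
print(f"measured lift: {measure_lift():.3f}")
```

Increasing `population` tightens the estimate, which is exactly the "power in numbers" described above: the manipulator never needs to understand any one person.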

Originally, food treats were the most common reward used in behaviorist experiments, though the practice goes back to ancient times. Every animal trainer uses them, slipping a little treat to a dog after it has performed a trick. Many parents of young children do it, too.

One of the first behaviorists, Ivan Pavlov, famously demonstrated that he didn't need to use real food. He would ring a bell when a dog was fed, and eventually the dog would salivate upon hearing the bell alone.

Using symbols instead of real rewards has become an essential trick in the behavior modification toolbox. For instance, a smartphone game like Candy Crush uses shiny images of candy instead of real candy to become addictive. Other addictive video games might use shiny images of coins or other treasure.

Addictive pleasure and reward patterns in the brain — the “little dopamine hit” cited by Sean Parker — are part of the basis of social media addiction, but not the whole story, because social media also uses punishment and negative reinforcement.

Various kinds of punishment have been used in behaviorist labs; electric shocks were popular for a while. But just as with rewards, it's not necessary for punishments to be real and physical. Sometimes experiments deny a subject points or tokens.

You are getting the equivalent of both treats and electric shocks when you use social media.

Most users of social media have experienced catfishing (which cats hate), senseless rejection, being belittled or ignored, outright sadism, or all of the above, and worse. Just as the carrot and stick work together, unpleasant feedback can play as much of a role in addiction and sneaky behavior modification as the pleasant kind.


When Parker uses the phrase “every once in a while,” he's probably referring to one of the curious phenomena that behaviorists discovered while studying both animals and people. If someone gets a reward — whether it's positive social regard or a piece of candy — whenever they do a particular thing, then they'll tend to do more of that thing. When people get a flattering response in exchange for posting something on social media, they get in the habit of posting more.

That sounds innocent enough, but it can be the first stage of an addiction that becomes a problem both for individuals and society. Even though Silicon Valley types have a sanitized name for this phase, “engagement,” we fear it enough to keep our own children away from it. Many of the Silicon Valley kids I know attend Waldorf schools, which generally forbid electronics.

Back to the surprising phenomenon: it's not that positive and negative feedback work, but that somewhat random or unpredictable feedback can be more engaging than perfect feedback.

If you get a piece of candy immediately every time you say please as a child, you'll probably start saying please more often. But suppose once in a while the candy doesn't come. You might guess that you'd start saying please less often. After all, it's not generating the reward as reliably as it used to.

But sometimes the opposite thing happens. It's as if your brain, a born pattern finder, can't resist the challenge. “There must be some additional trick to it,” murmurs your obsessive brain. You keep on pleasing, hoping that a deeper pattern will reveal itself, even though there's nothing but bottomless randomness.

It's healthy for a scientist to be fascinated by a pattern that doesn't quite make sense. Maybe that means there's something deeper to be discovered. And it's a great tool to exploit if you're writing a script. A little incongruity makes a plot or a character more fascinating.

But in many situations it's a terrible basis for fascination. The allure of glitchy feedback is probably what draws a lot of people into crummy “codependent” relationships in which they aren't treated well.

A touch of randomness is more than easy to generate in social media: because the algorithms aren't perfect, randomness is intrinsic. But beyond that, feeds are usually calculated to include an additional degree of intentional randomness. The motivation originally came from basic math, not human psychology.

Social media algorithms are usually “adaptive,” which means they constantly make small changes to themselves in order to try to get better results; “better” in this case meaning more engaging and therefore more profitable. A little randomness is always present in this type of algorithm.

Let's suppose an algorithm is showing you an opportunity to buy socks or stocks about five seconds after you see a cat video that makes you happy. An adaptive algorithm will occasionally perform an automatic test to find out what happens if the interval is changed to, say, four and a half seconds. Did that make you more likely to buy? If so, that timing adjustment might be applied not only to your future feed, but to the feeds of thousands of other people who seem correlated with you because of anything from color preferences to driving patterns. Adaptive algorithms can get stuck sometimes; if an algorithm gets no further benefits from further small tweaks to its settings, then further small tweaks won't stick. If changing to four and a half seconds makes you less likely to buy socks, but five and a half seconds also makes sales less likely, then the timing will remain at five seconds. On the basis of available evidence, five seconds would be the best possible time to wait. If no small random change helps, then the algorithm stops adapting. But adaptive algorithms aren't supposed to stop adapting.

Suppose changing even more might improve the result? Maybe two and a half seconds would be better, for instance. But incremental tweaks wouldn't reveal that, because the algorithm got stuck at the five-second setting. That's why adaptive algorithms also often include a sparser dose of greater randomness. Every once in a while an algorithm finds better settings by being jarred out of merely okay settings.
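For readers who want to see this stuck-then-jarred dynamic concretely, here is a minimal sketch in Python. Everything in it is invented for illustration (the two-peaked response curve, the step sizes, the jump probability); no real feed works this simply. A tuner that only makes small tweaks settles on the merely okay five-second peak, while an occasional large random jump lets it discover the better setting near two and a half seconds.

```python
import math
import random

def response_rate(delay):
    # Invented engagement curve: a broad, modest peak near 5.0
    # seconds and a taller, narrower peak near 2.5 seconds.
    return (0.5 * math.exp(-((delay - 5.0) ** 2))
            + 0.9 * math.exp(-((delay - 2.5) ** 2) / 0.2))

def adapt(steps=3000, jump_probability=0.1, seed=0):
    rng = random.Random(seed)
    delay, best = 5.0, response_rate(5.0)   # start at the "stuck" setting
    for _ in range(steps):
        if rng.random() < jump_probability:
            candidate = rng.uniform(0.0, 10.0)             # rare large jump
        else:
            candidate = delay + rng.choice((-0.25, 0.25))  # small tweak
        score = response_rate(candidate)
        if score > best:            # keep only settings that test better
            delay, best = candidate, score
    return delay, best

tuned_delay, tuned_score = adapt()
print(f"settled near {tuned_delay:.2f}s (score {tuned_score:.2f})")
```

The jump plays the role of the "sparser dose of greater randomness": calling `adapt(jump_probability=0.0)` in this sketch leaves the tuner stuck at the five-second setting forever, because every small tweak tests worse.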

Adaptive systems often include such a leaping mechanism. An example is the occurrence of useful mutations in natural evolution, which is usually animated by more incremental selection-based events in which the genes from an individual are either passed along or not. A mutation is a wild card that adds new possibilities, a jarring jump. Every once in a while a mutation adds a weird, new, and enhancing feature to a species.

Neuroscientists naturally wonder whether a similar process is happening within the human brain. Our brains surely include adaptive processes; brains might be adapted to seek out surprises, because nature abhors a rut.

When an algorithm is feeding experiences to a person, it turns out that the randomness that lubricates algorithmic adaptation can also feed human addiction. The algorithm is trying to capture the perfect parameters for manipulating a brain, while the brain, in order to seek out deeper meaning, is changing in response to the algorithm's experiments; it's a cat-and-mouse game based on pure math. Because the stimuli from the algorithm don't mean anything, because they genuinely are random, the brain isn't adapting to anything real, but to a fiction. That process — of becoming hooked on an elusive mirage — is addiction. As the algorithm tries to escape a rut, the human mind becomes stuck in one.

The pioneers of the online exploitation of this intersection of math and the human brain were not the social media companies, but the creators of digital gambling machines like video poker, and then of online gambling sites. Occasionally, pioneers of the gambling world complain about how social media companies ripped off their ideas and made more money, but mostly they talk about how social media is helping them identify the easiest marks.


Social networks bring in another dimension of stimuli: social pressure.

People are keenly sensitive to social status, judgment, and competition. Unlike most animals, people are not only born absolutely helpless, but also remain so for years. We only survive by getting along with family members and others. Social concerns are not optional features of the human brain. They are primal.

The power of what other people think has proven to be intense enough to modify the behavior of subjects participating in famous studies like the Milgram Experiment and the Stanford Prison Experiment. Normal, noncriminal people were coerced into doing horrible things, such as torturing others, through no mechanism other than social pressure.

On social networks, the manipulation of social emotions has been the easiest way to generate rewards and punishments. That might change someday, if drones start dropping actual candy from the sky when you do what the algorithm wants, but for now it's all about feelings that can be evoked in you — mostly, feelings regarding what other people think.

For instance, when we are afraid that we might not be considered cool, attractive, or high-status, we don't feel good. That fear is a profound emotion. It hurts.

Everybody suffers from social anxiety from time to time, and every child has encountered a bully who used social anxiety as a weapon of torture, probably because behaving like a bully lessened the chances that the bully might become a target. That's why people, even those who would normally be decent, tend to pile on to a victim of social anxiety torture. They're so afraid of the very real pain that social anxiety brings that they can lose sight of their better natures for a moment.

That's not to say that all social emotions are negative. We can also experience camaraderie, sympathy, respect, admiration, gratitude, hope, empathy, closeness, attraction, and a world of other positive feelings when we interact with other people. On the negative side, we might feel fear, hostility, anxiety, resentment, repulsion, jealousy, and a desire to ridicule.

If socially evoked emotion can function as punishment or reward, then is reward or punishment more effective at changing people? This question has been studied for a long time, and it seems that the answer varies according to the population being studied and the situation. Here's a study that suggests that young children respond better to reward than punishment, though the reverse might be the case after age twelve. Here's another study that suggests that punishment is more effective than reward for manipulating college students. Here's a summary of research indicating that affirmation works better to motivate adult workers. It might be that the nature of the task determines which type of feedback is more effective, as does the way the task is described.

A corpus of academic research compares the powers of positive and negative feedback, but that is not the key question for the design of commercial social media platforms, which are primarily concerned with reducing costs and increasing performance, thereby maximizing profit. Whether or not positive feedback might in theory be more effective in certain cases, negative feedback turns out to be the bargain feedback, the best choice for business, so it appears more often in social media.

Negative emotions such as fear and anger well up more easily and dwell in us longer than positive ones. It takes longer to build trust than to lose trust. Fight-or-flight responses occur in seconds, while it can take hours to relax.

This is true in real life, but it is even more true in the flattened light of algorithms.

There is no evil genius seated in a cubicle in a social media company performing calculations and deciding that making people feel bad is more “engaging” and therefore more profitable than making them feel good. Or at least, I've never met or heard of such a person.

The prime directive to be engaging reinforces itself, and no one even notices that negative emotions are being amplified more than positive ones. Engagement is not meant to serve any particular purpose other than its own enhancement, and yet the result is an unnatural global amplification of the “easy” emotions, which happen to be the negative ones.


In the bigger picture, in which people must do more than conform in order for our species to thrive, behaviorism is an inadequate way to think about society. If you want to motivate high-value and creative outcomes, as opposed to undertaking rote training, then reward and punishment aren't the right tools at all. There's a long line of researchers studying this topic, starting with Abraham Maslow in the 1950s and continuing with many others, including Mihaly Csikszentmihalyi (joined by writers like Daniel Pink). Instead of applying the simple mechanisms of behaviorism, we need to think about people in more creative ways, if we expect them to be creative. We need to foster joy, intellectual challenge, individuality, curiosity, and other qualities that don't fit into a tidy chart.

But there's something about the rigidity of digital technology, the on-and-off nature of the bit, that attracts the behaviorist way of thinking. Reward and punishment are like one and zero. It's not surprising that B. F. Skinner was a major player in the earliest days of digital networking, for instance. He saw digital networks as an ideal way to train a population for the kind of utopia he sought, where we'd all just finally behave. One of his books was called Beyond Freedom and Dignity. Beyond!

The term “engagement” is part of the familiar, sanitized language that hides how stupid a machine we have built. We must start using terms like “addiction” and “behavior modification.” Here's another example of sanitized language: We still call the customers of social media companies “advertisers” — and, to be fair, many of them are. They want you to buy a particular brand of soap or something. But they might also be nasty, hidden creeps who want to undermine democracy. So I prefer to call this class of person a manipulator.

Sorry, soap sellers…. Actually, I can report, the people at companies like Procter & Gamble are just fine — I've met a bunch of them — and their world would be happier if they weren't beholden to social media companies.

Back in its earliest days, online advertising really was just advertising. But before long, advances in computing happened to coincide with ridiculously perverse financial incentives, as will be explained in the next argument. What started as advertising morphed into what would better be called “empires of behavior modification for rent.” That transformation has often attracted new kinds of customers/manipulators, and they aren't pretty.

Unfortunately, manipulators can't get any result they want with equal ease. You can't pay social media companies to help end wars and make everyone kind. Social media is biased, not to the Left or the Right, but downward. The relative ease of using negative emotions for the purposes of addiction and manipulation makes it relatively easier to achieve undignified results. An unfortunate combination of biology and math favors degradation of the human world. Information warfare units sway elections, hate groups recruit, and nihilists get amazing bang for the buck when they try to bring society down.

The unplanned nature of the transformation from advertising to direct behavior modification caused an explosive amplification of negativity in human affairs. We'll return to the higher potency of negative emotions in behavior modification many times as we explore the personal, political, economic, social, and cultural effects of social media.


Addiction is a big part of the reason why so many of us accept being spied on and manipulated by our information technology, but it's not the only reason. Digital networks genuinely deliver value to us. They allow for great efficiencies and convenience. That's why so many of us worked so hard to make them possible.

Once you can use a pocket device to order rides and food and find out where to meet your friends right away, it's hard to go back. It's hard to remember that people with rare medical conditions used to have no way of finding other people in the same boat, so there was no one to talk to about unusual problems. What a blessing that it has become possible.

But the benefits of networks only appear when people use the same platform. If no one wanted to be an Uber driver, then your Uber app would accomplish exactly nothing. If no one wants to be on your dating app, then, once again, nothing.

The unfortunate result is that once an app starts to work, everyone is stuck with it. It's hard to quit a particular social network and go to a different one, because everyone you know is already on the first one. It's effectively impossible for everyone in a society to back up all their data, move simultaneously, and restore their memories at the same time.

Effects of this kind are called network effects or lock-ins.

They're hard to avoid on digital networks.

Originally, many of us who worked on scaling the internet hoped that the thing that would bring people together — that would gain network effect and lock-in — would be the internet itself. But there was a libertarian wind blowing, so we left out many key functions. The internet in itself didn't include a mechanism for personal identity, for instance. Each computer has its own code number, but people aren't represented at all. Similarly, the internet in itself doesn't give you any place to store even a small amount of persistent information, any way to make or receive payments, or any way to find other people you might have something in common with.

Everyone knew that these functions and many others would be needed. We figured it would be wiser to let entrepreneurs fill in the blanks than to leave that task to government. What we didn't consider was that fundamental digital needs like the ones I just listed would lead to new kinds of massive monopolies because of network effects and lock-in. We foolishly laid the foundations for global monopolies. We did their hardest work for them. More precisely, since you're the product, not the customer of social media, the proper word is “monopsonies.” Our early libertarian idealism resulted in gargantuan, global data monopsonies.

One of the main reasons to delete your social media accounts is that there isn't a real choice to move to different social media accounts. Quitting entirely is the only option for change. If you don't quit, you are not creating the space in which Silicon Valley can act to improve itself.


Addiction gradually turns you into a zombie. Zombies don't have free will. Once again, this result isn't total but statistical. You become more like a zombie, more of the time, than you otherwise would be.

There's no need to believe in some myth of perfect people who are completely free of addictions. They don't exist. You're not going to become perfect or perfectly free, no matter how many self-help books you read or how many addictive services you quit.

There's no such thing as perfectly free will. Our brains are constantly changing their ways to adapt to a changing environment. It's hard work, and brains get tired! Sometimes they take a break, zone out, and run on autopilot. But that's different from being driven by hidden manipulators.

We modify each other's behavior all the time, and that's a good thing. You'd have to be insensitive and uncaring to not change how you act around someone in response to how that person reacts. When mutual behavior modification gets good, it might be part of what we talk about when we talk about love. We don't have to think of free will as a supernatural intervention in our universe. Maybe free will exists when our adaptation to each other and the world has an exceptionally creative quality.

So the problem isn't behavior modification in itself. The problem is relentless, robotic, ultimately meaningless behavior modification in the service of unseen manipulators and uncaring algorithms.

Hypnosis might be therapeutic so long as you trust your hypnotist, but who would trust a hypnotist who is working for unknown third parties? Who? Apparently billions of people.

Consider the billions of dollars taken in by Facebook, Google, and the rest of the so-called digital advertising industry every month. The vast majority of that money comes from parties who are seeking to change your behavior, and who believe they are getting results. Many of these behavior changes are similar to the ones that television ads try to provoke, like getting you to buy a car or go to a café.

But, despite in some ways knowing more about you than you know about yourself, the companies don't always know the identities of the advertisers, the parties who are benefiting from manipulating you. Tech company lawyers have testified under oath that the companies couldn't have known when Russian intelligence services sought to disrupt elections or foment divisions to weaken societies, for instance.

I find that paranoid thinking is generally counterproductive. It disempowers you. But consider the present situation. We know that social media has been successfully deployed to disrupt societies, and we know that the price to do so is remarkably low. We know that relevant companies take in an astounding amount of money and that they don't always know who their customers are. Therefore, there are likely to be actors manipulating us — manipulating you — who have not been revealed.

To free yourself, to be more authentic, to be less addicted, to be less manipulated, to be less paranoid… for all these marvelous reasons, delete your accounts.
