Why A Technology’s Intent Doesn’t Matter

Over at Vox, Todd VanDerWerff has a fantastic explanation and takedown of authorial intent. Vox recently published an interview with Sopranos showrunner David Chase in which he reveals the meaning of that show's famously enigmatic ending. While many fans assumed the abrupt cut-to-black that ended the series meant Tony Soprano had died, Chase confirms (sort of) that Tony is, in fact, still alive within the universe he created. VanDerWerff's point is that fans should trust their own reading over the author's.

Likewise, VanDerWerff argues that the insistence of Hello Kitty's creator that the iconic cartoon is not a cat but a little girl should be meaningless if individuals and the culture as a whole perceive the character as a cat:

For many critics — including myself — the most important thing about a work is not what the author intends but what the reader gleans from it. Authorial intent is certainly interesting, but it’s not going to get me to stop calling Hello Kitty a cat.

It's a fantastic read, but for me it raised a different side of authorial intent. In the vein of the Hello Kitty problem–in which the answer is largely binary–it reminded me of Steve Wilhite. Wilhite, the inventor of the Graphics Interchange Format (GIF), clarified that GIF should be pronounced with a soft g, making it a homophone of Jif peanut butter.

I disagree with Wilhite's statement, mostly because within the acronym the g stands for a hard-g word: Graphics. Therefore the acronym GIF should be pronounced with a hard g, not as its inventor intends.

A more difficult issue arises when we tackle a vaguer question, like the meaning of the Sopranos ending. For example: What does it mean to "fave" a tweet? Farhad Manjoo took on this problem earlier in the week after Twitter announced it would be experimenting with a Facebook-esque update that refers you to tweets faved by enough of your friends or followers, effectively turning the fave into the algorithmically important Facebook Like.

When it wasn’t being used as a bookmark to help you remember links for later, pressing “favorite” on a tweet was the digital equivalent of a nod, a gesture that is hard to decipher. The fav derived its power from this deliberate ambiguity.

The fave, much like the "poke" before it, is left deliberately obscure so users can find their own meaning for it. We see this in the trendy app Yo, which allows you to send a single message to a friend: "Yo." While likely conceived as a simple joke, the app became a jokey way to contact friends but also found improvised uses, such as alerting Israelis to incoming rockets.

A technology's meaning, much like that of the art VanDerWerff discusses, is largely left up to its users. When Wilhite developed the GIF format in 1987, it was intended as a faster way to upload color images. Today, it's an entirely different form of communication. Much like image macros–which were initially meant to speed up communication on image boards–GIFs have become a shorthand for the entire range of human emotion. They're a way to share movie clips,

Source: moviepsycho.tumblr.com

teach planetary science,

Source: Buzzfeed

or simply show agreement.

Source: Awesomegifs.com

The format is so easy to use–and so much easier on bandwidth than the streaming video Wilhite could only dream of–that his original intent for the technology is irrelevant.

Culture at large is subject to each of our interpretations and uses for it. Technology is often considered different only because history has tended to view technological innovations as specific solutions to specific problems. Social Construction of Technology (SCOT) theories focus entirely on weeding out physical explanations for a technology's existence and narrowing it down to socioeconomic ones. While popular among social scientists, this approach quickly loses relevance once you realize that the problems a given technology ends up solving often have nothing to do with the original questions it set out to answer.

When Étienne Lenoir developed the internal combustion engine (ICE) in 1858, he originally sold it to printers and factories as a replacement for human crank workers. Consider the problems the same invention–albeit heavily adjusted–solves now. The ICE is now a part of the very fabric of our world, moving humans across the planet and enabling the kind of rapid innovation that could handle the population explosion of the 20th century. The ICE was meant to liberate factory bosses from feeding another mouth, and it instead liberated mankind from its comparatively shackling reliance on the steam engine and the horse.

Source: Wikimedia Commons

If we were analyzing the ICE through a SCOT lens, we would only examine the societal or economic problems that preceded its invention and how the ICE approached them as a solution. We'd have to ignore the almost unlimited utility of the basics of Lenoir's design to solve a multitude of problems Lenoir himself could never have dreamt of.

In fact, Lenoir did design a wagon powered by his original crude engine, but he soured on the project after one of his prototypes was lifted by Tsar Alexander II, and he went to work on motorboats instead. This shift in priorities was heralded by Popular Mechanics, which called it the end of the steam age. Of course it wasn't, but that narrow-minded focus highlights how technology can transcend the intent of either the creator or the culture that tries to frame it.

How The Internet Isolates Us From Dissent

The opportunity for everyone to customize their online media intake is often cited as a powerful enabler of confirmation bias. If you only read The Drudge Report, for example, you might find the world you live in far scarier and more dramatic than it really is. And if you choose to habitually read The Drudge Report, chances are you already view the world that way by default.

It's increasingly rare to find places where differing viewpoints meet, and our society is polarized to a glaring degree as a result. At the height of this summer's Israeli-Palestinian war, Betaworks data scientist Gilad Lotan took samples of Pro-Israeli and Pro-Palestinian tweets and found that, in conversation and news consumption, the two groups largely isolated themselves from any dissenting view.


Source: Gilad Lotan/Medium via Vox

The chart above visualizes this effect. In the top-left green nexus, UN and BBC links converge with Pro-Palestinian tweeters, with rare intersections with the large Pro-Israeli blue nexus below it (lines indicate communications between Twitter accounts). Both sides find themselves not simply disagreeing on the core facts of the issue, but rarely allowing themselves to hear what the other side is saying.
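
Lotan built his map from Twitter interaction data, and you can surface the same kind of clustering in any interaction graph with off-the-shelf community detection. The sketch below is not his actual pipeline, just a minimal illustration using networkx on a handful of hypothetical accounts:

```python
# Toy sketch (not Lotan's pipeline): detect polarized clusters in an
# interaction graph with off-the-shelf community detection. Edges stand in
# for retweets or mentions between hypothetical accounts.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.Graph()
camp_a = ["a1", "a2", "a3", "a4"]   # one densely connected camp
camp_b = ["b1", "b2", "b3", "b4"]   # another densely connected camp
G.add_edges_from((u, v) for i, u in enumerate(camp_a) for v in camp_a[i + 1:])
G.add_edges_from((u, v) for i, u in enumerate(camp_b) for v in camp_b[i + 1:])
G.add_edge("a1", "b1")              # the lone cross-camp conversation

for i, community in enumerate(greedy_modularity_communities(G)):
    print(f"community {i}: {sorted(community)}")
# With almost no cross-camp edges, the algorithm recovers the two camps,
# the same structure Lotan's visualization makes visible.
```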

The frightening effect of this polarized world, according to a recent Pew study, is that average people are far less likely to discuss societal topics at all. Using last year's Snowden-leaked revelations about the NSA as a starting point, Pew polled over a thousand adults and found that the norm for most people is self-censorship.

If the topic of the government surveillance programs came up in these settings, how willing would you be to join in the conversation?

In the chart above, you can see that the vast majority of people are far more comfortable discussing the revelations in impermanent, face-to-face settings than on social media. "This challenges the notion that social media spaces might be considered useful venues for people sharing views they would not otherwise express when they are in the physical presence of others," writes Pew.

What effect does this self-censorship have on online society at large? Certain venues (forums, Reddit, etc.) are far better at facilitating longform discussions of political and socioeconomic issues, but they also provide a heavy degree of moderation and anonymity. Why would people be afraid to discuss such complicated topics on platforms like Facebook and Twitter, and why, when they do, are they usually only reaching out to people who agree with them?

I wouldn't be the first person to note that online discourse is a cesspool for the angry, ill-mannered adolescent in all of us. The separation of a broadband connection works much like the separation of a car window; you sling epithets at or about other people that you never would to their face.

This toxic environment for dissent creates what we see in Lotan's graph: people cluster among friendly opinions while simultaneously pushing any dissent out of their conversation. While social media often seems like a hivemind, it is actually a collection of hiveminds all avoiding each other.

For those who recognize the futility of participation–"the most logical move is not to play," so to speak–the incentive is to keep their mouths shut. It creates what political scientists call a "spiral of silence." Silence, as it happens, is the lowest common denominator; the man with no voice creates no enemies.

The fact that this phenomenon is more pronounced online reflects each of us becoming the purveyor of our own mini media outlet. Most people are like Jimmy Fallon: if they ever talk about politics, they keep it as safe as possible. This is different from being indifferent to politics, but it does project an image of apathy.

The rest of us, apparently, are either like Jon Stewart or Greg Gutfeld. We may enjoy talking about politics, but it takes bravery to say something our audience doesn’t want to hear. This forces us and them to create an echo chamber of opinion, otherwise known as a circlejerk.

Our jerking, however, is often so vile and disgusting we push out those who might usefully partake, be they those who strongly dissent with our message or those yet to form one. This alienating effect hurts everyone involved: the average person fears joining in while two extremes rotate around themselves with only a sliver of a Venn diagram appearing. Instead of utilizing this great resource to have productive conversations, we’re making ourselves even more isolated to our own detriment.

What's more, the very infrastructure of the web itself could be making things worse. Eli Pariser, the CEO of Upworthy, coined the term "filter bubble" to describe the funneling effect most algorithms create when they offer recommendations based on your previous behavior. So if you click on more Libertarian links on Facebook than links representing any other viewpoint, Facebook will hide dissenting viewpoints from you. Facebook, like Google Search, needs to drive traffic in order to be successful, so it feeds into your confirmation bias. By doing so, Pariser argues, "the Internet is showing us what it thinks we want to see, but not necessarily what we need to see."
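
To make that funneling concrete, here is a minimal sketch of an engagement-driven recommender. It is not Facebook's or Google's actual ranking logic, just an illustration of how optimizing for past clicks crowds dissenting links out of a feed; the stories, viewpoints, and click history are all hypothetical:

```python
# Toy sketch of the funneling effect Pariser describes (not Facebook's or
# Google's actual ranking logic): rank stories by how often the user has
# already clicked that viewpoint, so past clicks crowd out dissenting links.
from collections import Counter

def rank_feed(stories, click_history):
    """stories: list of (headline, viewpoint); click_history: viewpoints clicked."""
    clicks = Counter(click_history)
    # Sort by the user's demonstrated appetite for each viewpoint.
    return sorted(stories, key=lambda story: clicks[story[1]], reverse=True)

stories = [
    ("Markets need less regulation", "libertarian"),
    ("The case for single-payer", "progressive"),
    ("A split court ruling, explained", "neutral"),
]
history = ["libertarian", "libertarian", "libertarian", "neutral"]

for headline, viewpoint in rank_feed(stories, history):
    print(f"{viewpoint:12} {headline}")
# Every click tilts the next ranking further toward what the user already
# believes, which is the bubble Pariser is describing.
```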

This structural flaw worsens the problem of biased news consumption. It's no accident that the most popular accounts in Gilad Lotan's study of the Israeli-Palestinian debate on Twitter are media outlets: Haaretz and The Jerusalem Post for the Pro-Israel side, the BBC for the Pro-Palestinian. We're not just weeding out people who disagree with us; we're limiting the facts we allow ourselves to see.

When your exposure to information that challenges your views is limited–whether by your own choices or by a website's business model–your views become inherently more extreme, once again encouraging you to ignore dissent and encouraging others to either find their own node or not participate at all. When you isolate yourself from views or facts that make you uncomfortable, you're participating in the utter destruction of online discourse. Society has been gifted an amazing venue for public debate, and we can only save it by resisting the base instincts to pleasure our egos or destroy others.

 

How Generation X Became America’s Middle Child

Generation X–loosely defined as those born between 1962 and 1980–has the great misfortune of being sandwiched between two seemingly more iconic generations (full disclosure: I'm 25). Baby Boomers are still largely defined by their social movements and the umbrella of societal change they grew up beneath. Kennedy, The Beatles, Vietnam, Watergate, and everything else I learned from Forrest Gump are mostly as iconic as they are because they occurred during a Gutenberg-esque expansion in mass media, namely television.

Millennials, likewise, are able to exhibit their nostalgia with a ubiquity heretofore unseen thanks to the sudden spread of online media. We are fully enabled to relive our childhood through the expansive catalog of Surge soft drink commercials and Legends of the Hidden Temple reruns available to us at any given moment. Much like the wistful Boomer nostalgia that has ensnared television and film for the past three decades, the Internet is sickeningly full of baited cues for Millennials' soft-hearted wish fulfillment that our childhood will never end.

 

Generation X, by comparison, is somewhat lost in the valley between the two revolutions. Sure, mass culture of every decade for the last century can be relived online, but it will always be Millennials who were the first to be so familiar with its inner workings. In fact, Generation X falls neatly in the middle of adoption of new technologies, never as slow as Boomers but not quite keeping pace with Millennials. According to Pew research, Generation X sits between the two on nearly every major issue and lifestyle measure.


Info source: Pew Research. Image source: CNN Money.

Because they exist in this gulf between the largest generation ever (Millennials) and the former largest generation ever (Boomers), Xers are understandably downtrodden about being glossed over in lazy think pieces like this one. For Gizmodo, writer Mat Honan pens the article "Generation X Is Sick of Your Bullshit" and proceeds to lay claim to most of the trophies picked off by his generation's older and younger siblings:

Generation X is a journeyman. It didn’t invent hip hop, or punk rock, or even electronica (it’s pretty sure those dudes in Kraftwerk are boomers) but it perfected all of them, and made them its own. It didn’t invent the Web, but it largely built the damn thing. Generation X gave you Google and Twitter and blogging; Run DMC and Radiohead and Nirvana and Notorious B.I.G. Not that it gets any credit.

While his boasting is a bit much, Honan is not incorrect. Like every generation before and after it, GenX has much to lay claim to when accounting for its highest achievers. And Generation X's effect on the Internet should not go unnoticed. In their twenties, Xers populated early IRC rooms and Usenet groups, setting the standard for Reddit, Twitter, and other popular services founded by GenXers. While those early communities never reached mass appeal, Generation X can and should be credited with creating social media.

Sadly, however, these innovations have come too late for the generation that designed them. As has been the case since the 1950s, the money to be made on the media explosion of the last two decades lies with the kids. Millennials and Boomers on average waited longer to get married and have children than Generation X did, meaning Xers' era of disposable income was shorter than that of the generations around them and therefore less worthy of nostalgic remembrances on CNN or listicles on Buzzfeed.

What's more, Generation X is mostly seen as a bunch of cynical, slacker crybabies. From the rise of gangster rap to grunge to Cameron Crowe to The Real World–all the way up through Dave Eggers and the murdering of 80s franchises like The Smurfs and Transformers–Generation X has less of a restorative capability than Boomers or Millennials simply because it lacks a defining historical moment. The Challenger disaster? The deaths of Kurt Cobain or Tupac Shakur? Tragedies all, but they fail to match the historical arc of the Vietnam era that preceded them or the War on Terror that arrived in their adulthood.

Calling Generation X a "transitional" generation would be too easy; all generations are, by their nature, a transition from one era to another. Xers, however, have the existential misfortune of being placed between two theatrical generations defined not just by their culture but by the media they use(d) to experience it and the societal moments that shaped it. The Cambrian-style explosion of media experienced by Boomers in the 50s and 60s and by Millennials in the aughts and today cannot be matched by the in-between evolution of the Star Wars generation, which had only a bit of time with each before it got to watch the former die and the latter explode.

There is no home for Generation X. They watch TV like their parents did (but more often) and use the Internet like their kids do (but less often). Michael Harris, author of the new book The End of Absence, told Quartz: "If you were born before 1985, then you know what life is like both with the internet and without. You are making the pilgrimage from Before to After."

While Harris has an optimistic view of that generation's importance ("if we're the last people in history to know life before the internet, we are also the only ones who will ever speak, as it were, both languages"), it seems likely Generation X will be remembered much like the Silent Generation. Those born and raised during the Great Depression and World War II earned that moniker by largely escaping the economic strife and war met by both their parents and their children–so much so that they also earned the name "The Lucky Few." The adult cast of Mad Men, for example, is largely made up of the Silent Generation (consider that Sally Draper is a Baby Boomer).

In the rise of mass media, Generation X is likewise strewn between two great events. Born after the rise of television and too early for the rise of online media, they are not quite "The Lucky Few" and more "The Unlucky, Unattended Few." They simply grew up between two great eras of novelty, the kind purposefully targeted at people in their youth and then retreaded for their elderly pleasure. For this, they may go unnoticed: the Silent Generation can be credited with the Civil Rights movement and landing on the moon, but good luck taking those accomplishments away from the Boomer Industrial Complex.

 

Similarly, considering how strongly my fellow Millennials cling to their cultural artifacts, it may be difficult to remind them that Steve Jobs and Barack Obama were not born in 1989, nor were the icons of early-2000s culture handcrafted by startling prodigies.

The interplay between generations relies mostly on collective consciousness: what we deem important to our culture and how we assign it to the calendars of our own lives. Something like 9/11 or the Kennedy assassination, for example, puts a distinct divide between "before" and "after" because it is a singular incident. It makes narrative sense to treat such a thing as the start of a different era, even if the designation of that era is largely perfunctory and insignificant.

The rise of any given media form is, often enough, too blurry for groupthink to use as such a cultural boundary. The truth is no single generation can or should lay claim to any societal movement, as anything truly important to the culture will span many generations and be adapted and changed by each. Generation X, lacking the sort of historical mile marker that typically ends or begins a chapter in a textbook, is trapped between an indistinct dawn and an even vaguer dusk. In the words of that GenX spokesman Tyler Durden, "we're the middle children of history…We have no great war. No great depression. Our great war is a spiritual war."

 

As Generation X grows older, its members will likely be no more and no less influential in finance, culture, or politics than those of any generation before or after. That's what generations do; they move forward through time, carrying as much baggage as gifts. Every generation is a revolution. What Generation X may never have, however, is anyone telling them so.

 

 

Only Online Communities Themselves Can End Abuse and Harassment

Vitriol and harassment have become such a staple of the digital landscape as to barely merit mention, and that's a damn shame. The fact that sexual harassment, threats, and cyberbullying are so pervasive (especially for women) should make addressing them a top priority for major social media sites like Twitter and Facebook. But can these corporations be trusted to handle an issue that is simultaneously so widespread and so evasive?

Researchers out of Boston University think they've found the way. Analyzing how rumor and hate speech spread across social networks, both online and IRL, they found the only real solution is to "quarantine" the perpetrators not just from your Following list but from the community altogether. "A level of quarantine can always be achieved that will immediately suppress the spread of offensive messages," the team wrote, "and [this] level of quarantine is independent of the number of offenders spreading the message."
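
To see what that claim looks like in practice, here is a toy diffusion sketch, not the BU team's actual model: an offensive message spreads along follower edges unless its spreaders are quarantined, and the accounts and edges below are entirely hypothetical:

```python
# Toy simulation of the quarantine idea (not the BU team's actual model):
# an offensive message spreads along follower edges, but quarantined
# accounts never forward it, so the cascade dies no matter how many
# offenders start it.
def spread(followers, seeds, quarantined):
    """followers maps user -> list of followers; returns every user reached."""
    reached, frontier = set(seeds), list(seeds)
    while frontier:
        user = frontier.pop()
        if user in quarantined:          # quarantined users can't pass it on
            continue
        for follower in followers.get(user, []):
            if follower not in reached:
                reached.add(follower)
                frontier.append(follower)
    return reached

followers = {
    "troll1": ["b", "c"], "troll2": ["c"],
    "b": ["d"], "c": ["e"], "d": [], "e": [],
}
print(spread(followers, {"troll1", "troll2"}, quarantined=set()))
print(spread(followers, {"troll1", "troll2"}, quarantined={"troll1", "troll2"}))
# The first call reaches the whole network; the second never leaves the seeds.
```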

But can a top-down approach create such an environment? In its write-up on the BU paper, the MIT Technology Review proposes using bots and algorithms "to pick out tweets that are likely to be offensive and then quarantining the authors" or possibly even "a peer review model in which people rate the offensiveness of tweets and those responsible for the content deemed most offensive are quarantined."

To a programmer, every problem is a bug. But this is a social problem, and it requires social cures, not engineers and code. Even if we push aside the massive holes in the BU and MIT solutions (abusers can readily create new accounts, and regular users might grow uncomfortable with the idea that bots may censor them at any moment), the most efficient and successful way to combat cyberbullying is to change the community itself.

This is what Riot Games, creators of the enormously popular League of Legends, found when they created "Tribunals" of users to review reports of abuse and decide whether the reported behavior was truly offensive. Riot then matched the Tribunals' decisions against those made by a panel of PhDs from the social sciences. The decisions matched more than 80% of the time, despite the fact that most players are teenagers.

Such a solution might be too communal for a much larger site like Twitter (League of Legends sports a still-impressive 27 million daily users against Twitter's 232 million). But that doesn't mean any site is too big to moderate. Engineers from the Electronic Frontier Foundation have begun work on Block Together, a beta app built on the Twitter API that creates shared lists of users whom others have repeatedly blocked or muted for abuse. Its predecessor, Flaminga, likewise arms users with each other's knowledge, turning online abuse into a shared experience that users can fight together rather than something isolated users face alone with few options.
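
The mechanics behind pooled blocklists are simple. The sketch below is not Block Together's or Flaminga's actual code or API, just the general idea, with a hypothetical min_reports threshold deciding how many subscribers must block an account before it is blocked for everyone:

```python
# Sketch of the shared-blocklist idea behind tools like Block Together
# (not their actual code or API): subscribers pool the accounts they have
# each blocked for abuse, so one person's block can protect everyone.
from collections import Counter

def shared_blocklist(individual_blocklists, min_reports=2):
    """Block an account for all subscribers once min_reports of them block it.

    min_reports is a hypothetical threshold, not a setting from either tool.
    """
    counts = Counter(account for blocklist in individual_blocklists
                     for account in blocklist)
    return {account for account, n in counts.items() if n >= min_reports}

alice = {"@harasser1", "@spambot"}
bob = {"@harasser1", "@harasser2"}
carol = {"@harasser2", "@harasser1"}

print(shared_blocklist([alice, bob, carol]))
# {'@harasser1', '@harasser2'} end up blocked for every subscriber,
# including people the harassers have not targeted yet.
```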

When her beloved father passed away earlier this month, Zelda Williams took to Instagram and Twitter to share her grief and her loving memories of Robin Williams. While the vast majority of the replies were condolences, some found it necessary to taunt Ms. Williams about her father's death, even sending images photoshopped to look like her father's asphyxiated corpse. Williams was understandably horrified and frustrated that such a solemn occasion would invite such filth, and she abandoned her Twitter and Instagram feeds altogether. In response, Twitter promised a change to its abuse policies.

Williams' story is unique only because of her family name; her situation is all too common, and it is representative of how users typically deal with such online harassment: pack up and leave. Were Williams to follow Twitter's abuse guidelines, she would have to block and mute not just each individual account terrorizing her but then each account those users created to replace the ones she blocked.

In fact, shortly before announcing her departure from social media, Williams appears to have attempted to report two users to Twitter for abuse, sending the since-deleted message "Please report @PimpStory @MrGoosebuster. I'm shaking. I can't. Please. Twitter requires a link and I won't open it. Don't either. Please."

Had a proactive community, armed with tools like Block Together and its predecessor Flaminga, not turned the all-too-common blind eye to the abusive messages being sent her way, it could have quickly silenced the trolls through the kind of quarantine the Boston University team found so effective. Likewise, if Twitter had policies more befitting a site of its size–perhaps even mirroring YouTube and Facebook by hiring "safety officers" to manually find and report abuse–an informed populace could prevent such virulent behavior from reaching its target.

Such tools and community outreach are what can minimize the total reach of cyberbullies. In fact, it's not so different from how public schools and workplaces deal with bullying or sexual harassment. The aim of seminars and conferences and, at the primary level, incentives to report is to change the attitude and the environment itself. Schools especially have realized that the answer is not simply hiring more teachers or imposing harsher punishments on bullies, but fostering proactive change within the student body itself.

Cyberbullying is only different in that it's far easier to hide. However, social media can often serve as a bullhorn for victims of online abuse to share and shame their tormentors for their behavior, minimizing the damage they can do to others. By fostering a more positive scene less welcoming to those who take satisfaction in others' anxiety and stress, online communities can resolve the contradiction engineers and programmers have struggled with for years: making online life free from both restriction and abuse.

How “The Cable Guy” Predicted Our Digital Society

If you ever want a quaint look at the not-so-distant past, spend some time watching or reading old warnings about our culture's obsession with TV. A few years ago, I did a short write-up on a 1994 Washington Post feature about a family's addiction to television. It follows all four members of the family and their dutifully followed TV schedules, interspersed with school programs meant to get kids to watch less TV:

“It is now 9:10 a.m. in the Delmar house. Fifty minutes have gone by since the alarm. Four TVs have been turned on. It will be another 16 hours before all the TVs are off and the house is once again quiet.

By the sink, Bonnie continues to watch “Regis & Kathie Lee.”

At the table, Ashley and Steven watch Speedy Gonzales in “Here Today, Gone Tamale.”

Looking at them, it’s hard to imagine three happier people.”

The reason I see the treatment of such a lifestyle as a novelty (or even a danger) as "quaint" is that those warnings concerned the inactivity and social alienation a TV addiction can cause. While not entirely wrong, these warnings for parents and kids now seem like people using sandbags to fight a coming tsunami. The Internet has created an overwhelming cavalcade of content that also breeds inactivity and social alienation, but it is more than accepted. In fact, TV itself has largely been recast as the most challenging art form of the past decade, with some deeming it "the new novel."

It's with this in mind that I found the 1996 Jim Carrey and Matthew Broderick movie The Cable Guy all the more absurd. Its central plot follows a deranged and lonely cable worker (Carrey) going full stalker on an unassuming customer (Broderick). Through deception and an undying devotion to making friends, the titular guy ruins the life of his target.

Near the beginning, when Carrey invites Broderick to see a massive TV satellite dish, he somewhat surprisingly predicts the future of our content addiction (this scene is from the end of the film, but Carrey simply repeats the speech he gives earlier).

It's funny, of course, to find an oddly prescient narrative in what is by all accounts a simple 90s comedy, not so out of league with other Jim Carrey features like Dumb and Dumber or the Ace Ventura films.

The central point of the film, however, is that this nameless cable guy–who only ever offers TV character names in lieu of his own–is yearning for companionship because he received none as a child. In flashbacks, we see him left by his alcoholic mother in front of the boob tube (I'd provide a clip if I could find one; the film is available on Netflix Instant). So instead of being raised by a caring matriarch, he's raised by I Love Lucy and Bonanza.

Not exactly a unique narrative; the idea of TV as babysitter has been around since the latchkey days of the 1980s. However, TV has transformed itself from something to distract your kids with into something to actively do with your kids. It has functionally replaced going to the movies as a family event. Of course, what even The Cable Guy could never predict is the booming market for apps that pacify children. Whatever the warnings of the 1990s looked like, they simply could not have anticipated tablet holsters for baby seats and the ever-growing market of children's games on the App Store.

Much less could they have known this addiction would spread to adults. To paraphrase Allen Ginsberg, I saw the best minds of my generation destroyed by Candy Crush, starving hysterical Flappy Bird, dragging themselves through the Kim Kardashian Hollywood tips and tricks at dawn.

I say all of this not as some puritanical elitist; I find my smartphone as addicting as anyone finds theirs. Nor do I even see this as an inherently negative thing: the most popular apps on any major app store are messaging services, meaning we're connecting now more than ever.

But what does it say about us that we grow more and more comfortable with each method of disconnection presented to us? The future, if Mark Zuckerberg is to be believed, lies with us strapping on VR headsets and immersing ourselves even deeper into realities either completely fantastical (TV, movies, video games) or so distant from our own lives as to be de facto fictional (sports, award shows, news events).

(Note: That last parenthetical got me thinking. Could you imagine being able to strap on a VR headset and be immersed on the streets of Ferguson or the hills of Mosul?)

With every technological development, we find that what "convenience" actually means is being more and more removed from the physical spaces we inhabit and the people we inhabit them with. In his speech, Carrey's character lists shopping and Mortal Kombat as two things we will someday be able to do from the privacy of our own homes. But at the time–1996, when the Internet was just about to burst from its nerddom trappings into mainstream ubiquity–these were things we had to do IRL. There were no other options. Sitting around playing an SNES with friends was a hallmark of a 90s childhood. Now millions of kids can play Battlefield or Titanfall with each other whilst remaining completely alone. Malls, formerly the center of social life for American adolescents, are dying at record rates as kids find it more comfortable to buy clothes on Amazon and talk to friends on Snapchat, Facebook, or whatever else may come.

Why, after hundreds of thousands of years of cognitive expansion accomplished largely by socializing, is our priority now dealing with as few people as possible? It is a common trope of sci-fi movies that aliens invade Earth and find we are killing our planet, that we are the disease to be cured. It almost appears as if we now believe that ourselves.

Of course, as I hinted at before, most of the technology at hand is actually meant to remove the static of socializing. The text of a conversation on WhatsApp or Kik is direct and clear, free of the trappings of a face-to-face conversation (although also free of the cues body language can bring). We have found our physical existences to be tiresome and want only the ego to come across. While the inventions of writing and telecommunications made us figuratively closer as a species and a culture, they are simultaneously, and literally, driving us apart.

At the end of The Cable Guy, Carrey's madman dangles himself over the same satellite dish we see earlier in the film. As police choppers surround him, he confronts the cause of his desperate need to socialize.

Having realized his intense loneliness is driven by his longing for his mother’s attention, he plunges himself towards the satellite dish.

Broderick grabs hold of him mid-fall, as Carrey gives the ludicrously well-delivered line "somebody has to kill the babysitter." Broderick drops him, and Carrey's body cuts the feed just as the nameless town learns the verdict of a widely watched televised trial (presumably meant to echo the then-relevant O.J. Simpson trial).

The "babysitter," as Carrey calls it, has expanded in ways no one could have imagined. Even the remotest among us are cradled in the embrace of escapism and digital love. Ben Smith, editor-in-chief of Buzzfeed (itself a warlord of digital consumption), once said "technology is no longer a section in the newspaper. It's the entire culture."

And we don't really seem to mind. In fact, we all seem quite delighted with the prospect of driverless cars, VR adventures, and same-day drone delivery. But much like unattended children, we find ourselves fighting stress and alienation with things that often enough cause both, forcing the question: From what are we, all of us in the 21st-century maelstrom of voided humanity, hiding? What is all this for?

 

When The War Comes Home

One of the glorious benefits we as Americans get to enjoy is endless war with little direct societal impact. Well, little impact if you discount dead veterans, traumatized veterans, neglected veterans, and a staggering national debt. But compared to most other countries–including the ones we bomb–we as a citizenry can largely avoid whatever atrocities our foreign policy may produce, justified or not.

But on the streets of Ferguson, Missouri, for the past four days we’ve been witness to exactly how that war can reach us in a very real way. People, reporters, and politicians have been shocked, shocked I tell you, to see tanks, armored vehicles, and combat-ready police forces marching on largely peaceful protesters calling for answers in the shooting of unarmed 18-year-old Michael Brown.

Of course, as is usually the case with the mainstream press, it takes one of their own being directly involved for a story to get the attention it deserves.

And just where did all this weaponry come from? After more than a decade of war, the Pentagon and weapons manufacturers found themselves inundated with a surplus of unused military-grade combat gear, and they found a perfect market in the overflowing police budgets of suburban America.

Credit: NYTimes.com

The above map, from a New York Times report back in June, shows the overflow of military gear distributed throughout the US:

“The equipment has been added to the armories of police departments that already look and act like military units. Police SWAT teams are now deployed tens of thousands of times each year, increasingly for routine jobs. Masked, heavily armed police officers in Louisiana raided a nightclub in 2006 as part of a liquor inspection. In Florida in 2010, officers in SWAT gear and with guns drawn carried out raids on barbershops that mostly led only to charges of ‘barbering without a license.'”

The question in Ferguson, then, becomes not just why the St. Louis County PD–whose jurisdiction does not include the city of St. Louis–is reacting this way, but also what these forces were preparing for. Experts and scholars have pointed out that even the looting late Sunday night could've been prevented and halted with basic policing tactics. Instead, the police have escalated the situation by breaking out armored vehicles, donning riot gear, and firing tear gas and rubber bullets at everyone–including the press.

What we now have is a police culture driven by paranoia and a level of worst-case-scenario preparedness never seen before. Crime across the country is down and, despite mass media coverage, even so-called "active shooter situations" remain rare. The above map shows Oklahoma, for some god-forsaken reason, stocking up on military gear more than any other state. Meanwhile, crime-ridden Detroit can't respond to a 911 call within an hour. Your local, small-town police force is ready for a war that isn't coming.

While an argument of "better safe than sorry" could be, and surely has been, made for the militarization of police forces (see the literal occupation of Watertown, MA after the Boston Marathon bombings), what it gives police is the increased ability to overreact. When you strap flak jackets and grenade launchers on someone, they're going to behave as if they're in a war zone, because they certainly must feel they are. If the most powerful weapon you give a security guard is a plastic badge and a can of mace, he'll only pull the mace out in extreme circumstances. When the most powerful weapon you give a cop is an Abrams tank, suddenly the tear gas and stationed snipers don't seem so extreme.

This is how the imagery of the wars we've waged has been brought onto our streets and into our neighborhoods. A flood of post-war profiteering met the general paranoia of post-9/11 America, and militarized police forces are their despondent lovechild.

Why You Care About Oberyn Martell And Not Lorna Wing

(Note: Spoilers Ahead for Game of Thrones fans)

Don't feel entirely guilty for not knowing who Lorna Wing was. Wing, who was 85, was a founder of the National Autistic Society and coined the term "Asperger's syndrome" (it was not, as many assume, coined by Hans Asperger). Having an autistic daughter herself, she devoted her life to understanding the disorder, popularizing the notion that autism exists as a spectrum rather than a single point. Her 1981 paper, Asperger's Syndrome: A Clinical Account, remains one of the most thorough analyses of ASD and set what would become the diagnostic standard for autism. While her accomplishments are of innumerable benefit to society, especially to families dealing with autism, Wing was obscure as a public figure, netting no magazine covers or Top Trending status.

She passed away on Friday, June 6. Five days prior, millions of American households watched Oberyn Martell have his life struck from him in dramatic fashion. Martell, a prince of Dorne known as "The Red Viper," was killed in a duel with Ser Gregor Clegane. He was simultaneously championing the supposedly regicidal Tyrion Lannister and seeking vengeance for the rape and murder of his sister, Elia, when Clegane confessed to that murder while crushing Oberyn's skull.

The difference between the Prince of Dorne and the founder of the National Autistic Society is pretty clear; one is a very fictional person from a very fictional world, and the other is a hero of medicine and psychology. One was created by the deliciously twisted mind of George R. R. Martin, and the other was born, had children, then died.

Yet the question of why millions mourned Martell and not Wing has an obvious answer: they didn't know Wing. Despite having the advantage of actually existing, Lorna Wing never had a chance of being as famous as Martell. We watched (or read of) Martell's whoring, his dalliances, his eight bastard daughters. We watched him fight for a character, Tyrion, whom we've known far longer and cared for even more, and in defense of his sister's honor. Because he was protecting two innocents in a literal trial by combat, watching him die in such brutal and shocking fashion took an emotional toll.

Game of Thrones and the A Song of Ice and Fire novels are the perfect venue for learning how we connect with and mourn for fictional characters. Martin, the author of the fantasy series, kills characters with such flippancy and frequency that the mantra "All Men Must Die" (or "valar morghulis," if you're up on your High Valyrian) might as well be the official slogan. You can mourn every character death (including the direwolves) at digital graveyards. The coffers of YouTube are full of "reaction videos" wherein we see people crying real tears for fake people.

And Game of Thrones is different from shows with similarly passionate followings, like Breaking Bad or The Sopranos. Walter White existed in our reality. He was a guy who drove an Aztek with a missing hubcap and worried about his medical debt. Daenerys Targaryen doesn't exist in anything like our reality. Arya Stark doesn't know what a middle school is, but there we are, rooting for this 12-year-old to murder.

So where do our feelings toward fictional characters blend with our feelings toward real humans? We say we mourned Oberyn Martell more than Lorna Wing because we knew Martell better. But how do we "know" a fictional character?

Sociologists and Malcolm Gladwell types will be familiar with Dunbar's number. Based on research he did in the early 1990s, anthropologist Robin Dunbar proposed that humans are limited by brain capacity in how many social connections they can usefully make. Studying how social connections are processed in the orbitomedial prefrontal cortex, Dunbar concluded that the "mean group size" for an average human is 148.

What that means is the average person can only make and maintain 148 social connections, be they friends, family, co-workers, teachers, or your dentist. Sure, you can certainly know of more people (as your Facebook friend list may tell you), but in total the average limit of people you can truly know is set at 148.

In fact, Dunbar has updated his research in this area to include the world of social networks, where it's not so odd to have well over 500 "friends." He found what most of us already knew: your friend count is pointless vanity. And since the average adult Facebook user has over 300 friends, over half of your connections on Facebook are pointless.

Does this neurological limitation extend to how we feel about fictional characters? It's hard to say. In a 2012 Cornell study on the social networks of mythological characters, two researchers built a method for determining whether a storyline was based on real events, simply by analyzing how the characters interact and whether those interactions mirror how real human connections behave. So while there is validity in comparing how humans interact to how fictional beings interact, it's difficult to say what happens between the two worlds.
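
As a rough illustration of that approach (and not the researchers' actual method), you can compute the statistical fingerprints that real social networks tend to show, such as high clustering and positive degree assortativity, for a network of characters; the character graph below is hypothetical:

```python
# Toy version of the general approach (not the researchers' actual method):
# measure properties that real social networks tend to exhibit, such as
# high clustering and positive degree assortativity, on a character network.
import networkx as nx

# Hypothetical character interaction graph: an edge means two characters
# share a scene or speak to one another.
edges = [
    ("Oberyn", "Tyrion"), ("Oberyn", "Gregor"), ("Tyrion", "Jaime"),
    ("Tyrion", "Cersei"), ("Jaime", "Cersei"), ("Cersei", "Gregor"),
]
G = nx.Graph(edges)

print("average clustering:", round(nx.average_clustering(G), 3))
print("degree assortativity:", round(nx.degree_assortativity_coefficient(G), 3))
# The closer these values sit to those measured in real social networks,
# the more "socially realistic" the cast of characters behaves.
```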

In her aptly titled book Why Do We Care About Literary Characters?, Brit-lit scholar Blakey Vermeule posits that we might actually learn more about fictional characters than we do about the people around us. Because of the narrative focus of certain books, we get a front-row seat to another person's psyche when we read a novel, something you may never get from your 148. This is also why reading fiction can make you more empathetic.

When a personal connection is driven into you by an author's sheer force of will, it emulates how you would feel if these characters were real. But does emulation equal reality when it comes to your prefrontal cortex? Does the social part of your brain prepare you for human relationships with figments of someone else's imagination, or does it merely treat books as training for the real deal?

The core piece that's missing, of course, is that Oberyn Martell never knew you. Because you only know of Oberyn Martell or Lorna Wing or whoever the next big celebrity death is from a viewpoint of consumption, it is not a social relationship. While they can change you, you can never change fictional characters. It's this void that fan fiction aims to fill. Even in the most interactive video games, there is very little a player can do to change the intents and attitudes of the characters around him (I should probably say that I've never played Heavy Rain, but my understanding is you can affect events, not characters). So even when we become a character in the storyline, it still falls short of being a social experience–the necessary requirement for the kind of prefrontal cortex activity that counts someone toward your 148 limit.