Dan Harmon And Honesty As Schtick

I should probably start this by stating upfront that I am a devoted Harmenian–that is, a die-hard fan of Community and Rick & Morty creator Dan Harmon’s podcast Harmontown. I’ve listened to every episode–some multiple times–and know every facet, every guest star, and every quotable moment by heart. Aside from the A Song of Ice and Fire books, it’s the only fandom I partake in with an almost religious ferocity.

If you’ve never heard the show, the first thing to say about it is that it isn’t a “show” in the traditional sense. There is little preparation done by Harmon or any of his frequent guests, including Whose Line regular Jeff Davis, fellow podcaster and Harmon’s bride-to-be Erin McGathy, and “half-ubiquitous” comedian and Silicon Valley star Kumail Nanjiani. The only one who does prepare is perennial straight man and “Dungeon Master to the Stars” Spencer Crittenden, a former audience member Harmon pulled on-stage to run the show’s weekly round of Dungeons & Dragons.

Source: LA Weekly

Harmontown is effectively a comedic stream of consciousness, wherein host, guests, and audience members alike are encouraged to be open and honest, even when it isn’t in the name of comedy. Harmon is as likely to give his anarchistic opinions on the Ferguson protests as he is to make up song parodies on the fly or confess to being the previous owner of a Real Doll. He’ll admit to being an alcoholic and a racist as quickly as he’ll admit to hating Space Jam.

What makes the show so engaging, however, is not just Harmon’s free-wheeling attitude, but the seeming devotion he has to encouraging others to do the same. He’ll frequently bring up audience members to discuss secrets, regrets, and whatever else may ail the mind. One audience member might confess to being torn between careers as an improv comic and a plastic surgeon. Another might discuss their preferred clothing of skin-tight polyester body suits. In the latest episode, an audience member admitted to sneaking into bars and stealing the money out of drunk girls’ purses during a period when they were homeless.

Between all of this are dick jokes, necrophilia jokes, improvised Superman movie plots, or such classic tunes as “Pringles Dick”, “Chicken Noodle Dick”, and “Poopy Sand”. Like many podcasts, it is filled with inside jokes, nerdy references, and the bawdy humor of an open-mic stand-up who’s realized he still has five minutes left.

Despite this, the honesty of Harmon, his audience, and his guests is the main draw of the show for myself and others. It is refreshing to find a character of show business whose primary interest is not protecting his own ego, but forming an active relationship with fans. It’s often enough that even a devoted listener like myself might find himself made uncomfortable by some revelation or even a moment of spontaneous ill repute; Harmon is often sexist to McGathy on-stage and has openly admitted to being somewhat of an emotionally abusive partner.

Even so, this opportunity to watch a volcano of humanity and comedic instinct erupt for two hours a week is unmissable. Television talk shows are mostly pablum, manila-colored packages of celebrities’ and public figures’ best possible image. Reality TV betrays its name with every false narrative, every producer-controlled conflict, and every forced attempt to make celebrities real. Even other podcasts rarely confront the truth of the matters they discuss, with the rare exception perhaps being Marc Maron’s famed WTF podcast.

This is an illness that regularly forces its way into the daily lives of us, the consumers. With each carefully composed tweet and Facebook post, we are emulating the practices of Hollywood in merely showing off who we want to be. We are the director and producer of our own public perception.

This cultural malaise of false self-representation is the antithesis to Harmontown. Yes, there are bits and, yes, everyone on stage knows they’re being recorded, but the ideal of complete and utter authenticity is led by Harmon himself. Audience members feel so willing to put their freshest scars on display because Harmon not only asks them to, but also does so himself.

This was put on display in the documentary about the podcast’s nationwide tour, entitled simply Harmontown. The movie, directed by Neil Berkeley, is as stylized and composed as the podcast is neither. It’s an attempt to define Harmon’s allure as well as what a “Harmenian” is. While it settles on a rather safe definition of “a nerd full of love”, I think it’s a bit narrower than that.


Dan Harmon is “real” in the sense that he rarely thinks before opening his mouth. What’s more, he faces consequences for it; his off-set antics were one of the reported reasons Harmon was fired from Community ahead of its fourth season, and similar outbursts, many of them on Harmontown, have landed him in hot water (he once compared the fourth season to watching his family be raped).

He and his audience often see this as Dan being Dan. A Harmenian, as I see it, is drawn to the show because here, finally, is an individual who praises the creative life while simultaneously exposing himself to criticism, much like us. This isn’t license to be an asshole, but it does reaffirm the estranged relationship with the outside world many of us feel for crimes no bigger than simply being ourselves.

Perhaps that’s why a large section of the film is spent on Harmon attempting to discover ways to change while keeping his own soul intact. I was reminded of this ideal watching the Netflix original series BoJack Horseman, about a washed-up, alcoholic TV star attempting to better his life. The central question of the show is whether, to paraphrase a common Harmon idiom, the noose of our own regret and our own actions ever loosens.

Harmon eventually settles on the idea that, no, he can’t change. But simply by asking himself the question, he invites the audience–most of them in their twenties–to ask it of themselves. How wide a berth are we willing to give society when forsaking who we currently are and who we want to be?

What this all means is we like listening to Dan Harmon for the same kind of reason people like Larry the Cable Guy or Jeff Dunham: his shtick. It just so happens that his gimmick is being a real person, warts and all, and relating to real people in the same manner. Simply because this is the draw to Harmontown does not make it any less genuine or less worthwhile. In the cheesiest way possible, it means he’s building a community, not a fanbase, around empathy and understanding of the greatness and flaws within each cog of humanity. Harmontown empowers every listener not just to be themselves but to find methods of acceptance between one another.

It’s also really, really funny.


Blockades and IPOs: China’s Great Firewall gives its companies an advantage over Western counterparts

Originally posted on PandoDaily:


China is making it even harder for Western companies to bring their services into the country. The New York Times reports that the Chinese government has “draped a darker shroud over Internet communications in recent weeks” to “tighten internal security,” and in doing so it’s made it even harder for companies like Google to offer many of their services in the country.

Google’s services were all blocked in China in the lead-up to the anniversary of the Tiananmen Square protests, according to a site focused on censorship in the country, and the Times reports that the blockade has stayed in place ever since. The Chinese government is particularly clever in its efforts to make Google’s services unusable — instead of merely blocking them outright, it sometimes allows searches to go through, creating the illusion that the problem is with Google’s services:

The Chinese authorities typically allow a tiny fraction of searches and…


Miley Covers Zeppelin And A Generational Struggle Unfolds

Above is a fairly remarkable cover of Led Zeppelin’s passionate ballad “Babe, I’m Gonna Leave You” by notable provocateur Miley Cyrus. Her version is actually quite apt and decidedly lo-fi, with the singer straddling the fence between Robert Plant’s sheer power and his passive androgyny.

As one might expect, Miley Cyrus is not the first pick most Zeppelin fans would make for a cover artist. Her dueling careers as first a Disney Channel star and now as a twerking, little-clothed agent of pop sexuality make her the precise antithesis to the hard-rocking, masculine endeavor that is Getting The Led Out.


Indeed, her deft performance here was met with the predictable vitriol and abusive sentiments that have sadly become part of the natural landscape of the Internet.

“One day,” writes user Ausgebombt, “when you’re washed up, fully drugged out and broke, you will finally realize you were never an artist, never a musician, but rather just a set of tits for a producer to dangle on stage for millions of idiots to throw money at (like a common stripper). People will still be listening to Led Zeppelin decades after they forget your name.”

Led Zeppelin does hold an odd place in American culture. While definitively “dad rock”, they’ve extended themselves past the Deep Purples of the world yet haven’t quite attained the same “coolness” harvested by distinctly glam acts like David Bowie or Queen.

Part of this has to do with their dual existence as fodder for stoned mystics and backyard BBQ soundtrack. As Chuck Klosterman writes in his excellent Killing Yourself To Live, Led Zeppelin holds a unique ubiquity among American male rock fans:

For whatever reason, there is a point in the male maturation process when the music of Led Zeppelin sounds like the perfect actualization of the perfectly cool you…You simply think “Wow. This shit is perfect. In fact, this record is vastly superior to all other forms of music on the entire planet, so this is all I will ever listen to, all the time”…And this is your Zeppelin Phase, and it has as much to do with your own personal psychology as it does with the way John Paul Jones played the organ on “Trampled Under Foot”…Led Zeppelin is unkillable, even if John Bonham was not.

Hence the straightforward abuse being sent towards Miley. Zeppelin not only captured a certain sound (dubbed “The Sound” by Grantland’s Steve Hyden) but a mysterious ethos about being male and wishing you were a rock star.


Miley, on the other hand, represents everything that is not that ethos. While Led Zeppelin were not the kings of authenticity (“Babe I’m Gonna Leave You” was originally penned by a Berkeley student in the ’50s), Miley is so forthrightly contrived she often seems to be at odds with her previous and future self.

Considering her start as Hannah Montana–a character already in existential throes between her normal and rock star selves–Miley Cyrus has had to overcome quite a number of unfair expectations. Her performance at the 2013 VMAs was as calculated a decision as doing a verse on a Snoop Dogg song or jumping on stage with The Flaming Lips. She’s throwing pasta at the wall to see what sticks, and in doing so creating and abolishing archetypes a typical celebrity might strive to own.


This catch-all tactic of redefinition is a risky one; Lady Gaga long ago ran out of ways to shock anyone but people who thought her albums would still sell. But what these reinventions inherently are is something red-blooded rock fans despise: pretension.

This, of course, is ironic as rock and roll mastered the art of crafting an identity. Before the indie waves of the 1980s told everyone you could just as easily perform in a T-shirt, rock was spandex’s biggest customer. From the moment The Beatles donned faux-military coats to the rise of the mohawk all the way up to Kurt Cobain showing up on Headbangers Ball in a dress, rock–masculine, bravado-driven rock–has long been obsessed with image. The same kind of people who might trash Miley for being yet another plastic pop star will sing along to a leather-clad lad literally named Nikki Sixx or defend the drumming skills of a man in kitten make-up.


But this is not how rock fans remember it or even believe it to be. Despite Alex Turner’s mod affectations or the Mumford house sigil of Flannel n’ Beards, there is a hard-wrought belief that rock music is more real than pop. This is why Miley’s crossover cover of Zeppelin is so infuriating. Despite her obvious vocal talents, she is, to borrow the phrasing of one brave anonymous commenter, “just a set of tits for a producer to dangle on stage for millions of idiots to throw money at.”

That sentiment is first and most importantly sexist. Male rock fans readily criticize the use of sexuality by female pop stars, then we wonder why so few of the top-earning rock acts are women; who would want to play for such a hostile audience?

Second, as stated before, male rock stars are all pretension. All performance is pretension. There’s no more authenticity in Robert Plant’s lace vests than in a symphony conductor’s coattails or Katy Perry’s firework brassiere.

Not realizing this core lesson of pop culture is the American rock fan’s greatest mistake. Miley is no more and no less an inorganic creation than any great rock band. When I think of Miley, I think of this video of her covering a Melanie classic:


The song is almost predictive of the world she would soon introduce herself to. Whether she wanted to be a “true artist” or not, she’s become indebted to the same machinery this song decries. The comments on this performance contain far fewer death threats and far more commiseration about missing “this side” of Miley. It’s a decidedly stripped-down performance and therefore will inherently “seem more authentic”, but it, too, is a mechanized exploration of image.

And perhaps that’s why the protective love of Led Zeppelin against Miley Cyrus is so unhinged. Led Zeppelin never quite had “phases”. They may have tweaked their sound from album to album, but never with the massive swings Miley exhibits from Hannah Montana to “Party In The USA” to Bangerz. David Bowie, for example, lost a core of his rock audience when he abandoned the Ziggy Stardust pose he held through the early 1970s and adopted a dance-heavy sound.


This is where we encounter the idea of “the sellout”. In the view of people for whom this is a problem, there is no greater sin–and most pop music by definition fits the bill. Metallica, for example, had a distinct change in sound after the 1988 masterpiece …And Justice For All. The next album, self-titled but dubbed “The Black Album”, is notably softer in tonality and production, far more accessible for an audience just then abandoning hair metal and embracing harder acts like Soundgarden and Pearl Jam.

Because Miley Cyrus distinctly changes herself to find who she is/what is the most marketable, she’s immediately a sellout without hope of redemption. The greatest acts of all time have also challenged such notions–Bob Dylan has had at least 12 distinct phases by my count and The Beatles had at least four. But she pays for it because of the volatility of her performance and where it places her on a musical scale: directly opposed to all a Led Zeppelin fan knows to be good and true.


World’s first 3D-printed car unveiled in Chicago

Ben Branstetter:

You wouldn’t download a car…

Originally posted on WHNT.com:


(WGN)– In a matter of two days, history was made at Chicago’s McCormick Place, as the world’s first 3D printed electric car—named Strati, Italian for “layers”– took its first test drive.

“Less than 50 parts are in this car,” said Jay Rogers from Local Motors.
Rogers’ company is part of the team that developed the engineering process to manufacture an entire car with carbon fiber plastic and print it with a large 3D printer set up at McCormick Place by Cincinnati Incorporated.

Oak Ridge National Laboratory also collaborated on the concept that could bring custom printed cars to the marketplace by 2015.

“You could think of it like Ikea, mashed up with Build-A-Bear, mashed up with Formula One,” Rogers told us.

The concept of Strati began just six months ago, before being brought to the showroom floor of the International Manufacturing Technology Show.

Attendees got a first-hand look at…


Do We Love To Watch Tragedy?

Yesterday morning, for the 9th year in a row, MSNBC re-aired its broadcast footage of the 9/11 terrorist attacks. Starting from the collision of the first plane into the North Tower of the World Trade Center at 8:46 AM and ending with the collapse of that same tower, the rerun of this national tragedy–which started in 2006–is billed as a “Living History” broadcast.

Dan Abrams, then General Manager of MSNBC and now leader of Abrams Media (Mediaite, The Mary Sue, etc.), faced much criticism for this programming decision. Gawker has called the tradition “PTSD-inducing” and many on Twitter have dubbed it “death porn”.

Dan Abrams, for his part, has defended the decision. In 2011, during the 10th anniversary of the attacks, he wrote:

No one was forced to watch MSNBC coverage. I watched it for the fourth year in a row. Many others will have chosen to change the channel. But in a world where cable news is often consumed with internecine and sometimes invented squabbles, seeing one of the most important moments in American history as it aired, in real time, seems to be exactly what cable news can and should do best.

I, too, have made a small tradition of watching the coverage if I can. I’ve also spent time on Youtube watching news break of the JFK assassination, the death of John Lennon, and the Columbine shootings. These are monumental historical moments and, with the historical record so easily accessible, it’s an invaluable if difficult opportunity to even simulate the experience of having history unfold upon you.

In no time at all, you can relive any number of disasters, natural or otherwise. What separates this practice from listening to FDR’s speech shortly after the surprise bombing of Pearl Harbor? In fact, recreating the historical record is exactly what we call history. When we visit Gettysburg, for example, most of what we learn comes from very personal accounts of the deadliest battle in US history.

The passage of time plays a major role, as Abrams points out in his defense, but so does the graphic nature of the content. Footage of 9/11 is quite dramatic and shocking, but it’s not what we might call violent. Footage of people jumping from the towers is certainly more violent, the famous “Falling Man” photo being the most salient example. In its recent write-up about the publishing history of the image, Motherboard recounts the public outcry that built the image’s notoriety:

Readers were incensed. Had the press no decency? Tasteless, crass, voyeuristic. From the Times to the Memphis Commercial Appeal, dailies pulled the image and were forced to go on the immediate defensive as they wiped the image from their online records. Don Delillo didn’t use the image on the cover of his 2006 novel Falling Man, though in 2007, the Times would run it on the front of the Book Review. But mostly the image hasn’t been seen in print since 2001. Drew has called it “the most famous photograph no one has seen.”

That said, there wasn’t a newspaper in the country (or in the world) that fretted over publishing more large-scale images of the tragedy. Fireballs rising from the towers or the antenna of the North Tower descending into smoke and debris were and remain very commonplace. Although troubling, they merely seem to represent the thousands of deaths occurring in that moment. They allow us to subconsciously pretend these events aren’t happening, that lives aren’t being quite literally crushed before us, while still experiencing the event from a safe distance. Seeing one person fall to their death, however, feels a bit too personal.


Distance, in fact, also seems to play a major role in how we observe such events. Newspapers will readily publish images of brutal violence abroad they would never run if the victim were from the United States. How many gruesome scenes of car bombing or tsunami victims have we seen plainly laid on the top fold of The Wall Street Journal?

Peter Maass for The Intercept:

It is a different thing when the victims are ours. When it comes to our own citizens, the consequences of war are preferably represented in elliptical ways that do not show torn flesh or faces of the newly dead. Instead, we see townspeople lining up and saluting as a hearse drives by, we hear the sound of taps at a funeral, we remember the flag as it was placed in a brave widow’s hands, or we see a wounded veteran with a handful of pills for PTSD.

When Malaysia Airlines Flight 17 was shot down over the battlefields of eastern Ukraine, images from the scene were grisly and morbid. In response, cable TV news outlets blurred out the bodies of the passengers, even if that meant presenting nothing more than a formless cloud of pixelated grays and whites, giving viewers no more information than a picture of a cloudy sky.

Source: CNN

Print and online media had no such restraint. Buzzfeed compiled the uncensored images for your viewing (with a click-to-view trigger warning). So did TIME.

So the moving (as opposed to still), personal image of death is the media’s limit, but is it ours? Liveleak has famously made an entire business out of having little to no censorship, meaning there exists a strong audience for brutally violent real-world content. When video footage surfaced of the moments shortly before a 9-year-old girl killed an instructor with an Uzi in a gun range accident, many Redditors asked of the video, “Soooo….where’s the Liveleak version?”

This hunger for “snuff” footage isn’t new to the Internet. In 1963, many newspapers went with the now-famous photograph of Thích Quảng Đức’s self-immolation in protest of the persecution of Buddhist monks by the South Vietnamese government.

Source: Wikimedia Commons

Shortly afterwards, the release of the Zapruder footage of JFK’s assassination would become the most recognizable piece of media surrounding that event, brain viscera and all.

Both images were very important to their respective stories. Thích Quảng Đức’s message was the brutality of his death; simply writing that a man burned himself in protest does not carry the power of his message like the iconic photo. The Zapruder film is possibly the most analyzed piece of media ever.

But the Internet’s insistence on spreading graphic media–even media of no notable news or historical value–speaks to a core interest within the zeitgeist that has nothing to do with historical moments or messages of philosophy. Millennials like myself might remember being quietly introduced to Rotten.com, perhaps the most famous early aggregator of violent content online. In a 2001 profile of the site, Salon cited Rotten’s average daily traffic as 200,000 unique visitors–a tidy sum for that time.


As I remember it, Rotten was an endurance test. Kids scrolling through the site in a computer lab might as well have been having a staring contest. I still remember the image that made me swear off the site: a man’s face lying in the grass–sans the rest of his head–after being sheared off by a helicopter propeller.

Rotten’s founders see it differently, billing the website as a statement against censorship online:

We cannot dumb the Internet down to the level of playground. Rotten dot com serves as a beacon to demonstrate that censorship of the Internet is impractical, unethical, and wrong. To censor this site, it is necessary to censor medical texts, history texts, evidence rooms, courtrooms, art museums, libraries, and other sources of information vital to functioning of free society.

What drove us there? Is it any different than watching a massive terrorist attack unfold or bodies wash on shore after a hurricane? Why do we (a large number of us, anyway) actively seek out the grotesque ends of a story, newsworthy or not?

The question’s been bugging me since the release of two videos showing the beheading of American journalists James Foley and Steve Sotloff. Many sites, including Twitter, Youtube, and Liveleak have banned the videos or images from the videos.

Many applauded the move. Chris Taylor at Mashable wrote:

When we look at something so shocking it’s impossible to erase from our brains, we give power to the person who wants to gain the notoriety of having made you look. Every time you choose not to look, not to share a link, or to share another remembrance instead, you’re restoring a little bit of decency to the Internet and removing power from the perpetrator.

Peter Maass from earlier would disagree:

I wish we didn’t have to ask these questions — that there were no loathsome images to flash on our screens — and I wish we didn’t have a responsibility to look and think deeply. But we do, if the depravity of war is to be understood and, hopefully, dealt with.

It’s a difficult question to handle. Foley and Sotloff’s families had to deal with the fact that this was how many Americans knew their sons:

Source: New York Post

Certainly there’s no need to subject them to more reminders of their sons’ gruesome demise.

At the same time, however, Maass makes a good point: Images can bring to life the harshness of the realities of war in a way text simply can’t, giving appropriate gravity to how we form our views. In the Dalton Trumbo novel Johnny Got His Gun, the severely maimed main character begs for the opportunity to be toured through every senate and parliament to show off his tortured existence as a sample of the realities of war:

Remember this. Remember this well you people who plan for war. Remember this you patriots you fierce ones you spawners of hate you inventors of slogans. Remember this as you have never remembered anything else in your lives.

Dragging the reality of war before the eyes of the public, however, can also have unintended consequences. When images and videos are as free as they are now–and social media often mutilating the truth into a slippery abstract–they risk being taken and abused for nefarious purposes. In her epic 2002 essay about war photography, Susan Sontag wrote in The New Yorker:

To the militant, identity is everything. And all photographs wait to be explained or falsified by their captions. During the fighting between Serbs and Croats at the beginning of the recent Balkan wars, the same photographs of children killed in the shelling of a village were passed around at both Serb and Croat propaganda briefings. Alter the caption: alter the use of these deaths.

Indeed, the image below spread around the Internet as a true photograph of a Syrian boy resting between the graves of both his parents, fatalities of that country’s ongoing civil war.

Chances are you’ve seen this very, very viral photo of what was purported to be a little boy from Syria sleeping between the graves of his parents. Well, it was staged.

Source: Buzzfeed

It was, in fact, the work of a Saudi photographer.

The photo was taken by photographer Abdel Aziz Al-Atibi. The boy in the photo is his nephew and it was taken for a conceptual art project that Al-Taibi was working on.

Source: Also Buzzfeed

What about images more immersive than simple 2-D video and photographs? Project Syria uses virtual reality to simulate for anyone the streets of Aleppo during a bombing raid or daily life in a refugee camp. Developed by documentarian Nonny de la Pena, the “experience” uses photographs, videos, and personal accounts of a single bombing from 2013 to recreate the traumatic experience of war as accurately as possible:

She was invited to the World Economic Forum in Davos–where titans of industry and government meet to discuss such things as war and disasters–in a Trumbo-esque wish to have those in power witness the wars they choose. This is also not the first such experience de la Pena has created; she’s also recreated stories of hunger on the streets of LA and of being a Gitmo prisoner.

If there is value in her experiences, why would anyone submit themselves to be traumatized? Images of beheadings or terrorist attacks already have a negative effect on our minds and health. In a UCI study from 2011, researchers found being subjected to images of 9/11 did affect individuals’ mindsets and increased their overall stress (PDF) over the long term. A report published in the British Journal of Psychiatry found the same thing, with the effect inversely related to geographic distance from the event.

If we follow de la Pena’s view, perhaps the emotional effect of the news is less a warning for news media (as the UCI study states) and more a warning for news consumers. The world is gory and depressing. If we blind ourselves to that, are we not whitewashing human existence?

Not every murder or horrible incident needs to be unveiled. There’s not much public benefit in actually watching a 9-year-old girl become an accidental murderer. But watching the towers fall on 9/11–or even the devastating view of people leaping to their deaths rather than facing the flames–may actually inspire the appropriate amount of woe and misery such an event should yield.

The common refrain that “9/11 changed everything” is often mocked as being another sign of the American blindness to world events. Many of us willingly see these things–for cathartic fascination, rubbernecking, or otherwise–but many more of us choose to abandon the world around us as simply too morbid. The realities of our complex civilization can easily be hidden if you’d like, but that can have devastating effects on us culturally.

Source: Cagle.com

The average news consumer is already far too egocentric; if publishing nauseatingly vicious photos actually drives a news story through the thick egos we all carry, then perhaps it’s more than snuff.

Then again, text can often accomplish the same thing a photo can. Despite the widespread censorship of the ISIS beheading videos, the story of them has had wider penetration among the American public than any other news story of the last five years. But was it not driven by the cover stories on websites and newspapers of James Foley dressed in orange, stood on an ethereal desert hillside with a literal masked villain waving a knife at him?

You should feel miserable about James Foley, Steve Sotloff, and Flight MH17. We should feel compelled to question a culture that encourages a small child to practice firing an Uzi. We should be as fully aware of the horrors wrought by colonialism and globalization as we can. We Americans especially should, at the very least, allow ourselves to bear slight witness to the atrocities our decadence pays for.

This does not mean we need to turn every crime blotter into Rotten or the backpages of 4chan. Nor does it mean we should be apologists for purposeful exhibitions of violence and gore, from crush videos to bumfighting. But if we want to be world citizens, if we want to actively feel compelled to question the morals of our leaders and the fortitude of our enemies, nothing can heighten our sensitivity like allowing ourselves to experience it, if even from the comfort of our safely-guarded homes.

Why A Technology’s Intent Doesn’t Matter

Over at Vox, Todd VanDerWerff has a fantastic explanation and takedown of authorial intent. In a recent Vox interview, Sopranos showrunner David Chase revealed (sort of) the meaning of that show’s famously enigmatic ending: while many fans assumed the abrupt cut-to-black that ended the series meant Tony Soprano had died, Chase says Tony is, in fact, still alive within the universe he created. VanDerWerff’s point is that fans should follow their own analysis anyway.

Likewise, VanDerWerff argues that the insistence of Hello Kitty’s creator that the iconic cartoon is not a cat but a little girl should be meaningless if individuals and the culture as a whole observed the character as a cat:

For many critics — including myself — the most important thing about a work is not what the author intends but what the reader gleans from it. Authorial intent is certainly interesting, but it’s not going to get me to stop calling Hello Kitty a cat.

It's a fantastic read, but for me it raised a different sort of authorial intent. In the vein of the Hello Kitty problem, in which the answer is largely binary, it reminded me of Steve Wilhite. Wilhite, the inventor of the Graphics Interchange Format (GIF), clarified that GIF should be pronounced with a soft g, making it a homophone of the Jif in Jif peanut butter.

I disagree with Wilhite's statement, mostly because within the acronym the g stands for a hard-g word: Graphics. Therefore the acronym GIF should be pronounced with the hard g, whatever its inventor intends.

A more difficult issue arises when we tackle a vaguer question, like the meaning of the Sopranos ending. For example: what does it mean to "fave" a tweet? Farhad Manjoo tackled this problem earlier in the week, after Twitter announced it would experiment with a Facebook-esque update that surfaces tweets faved by enough of your friends or followers, effectively turning the fave into the algorithmically important Facebook Like.

When it wasn’t being used as a bookmark to help you remember links for later, pressing “favorite” on a tweet was the digital equivalent of a nod, a gesture that is hard to decipher. The fav derived its power from this deliberate ambiguity.

The fave, much like the "poke" before it, is left deliberately obscure in meaning so users can find their own use for it. We see the same in the trendy app Yo, which lets you send a single message to a friend: "Yo." While likely conceived as a simple joke, the app became a low-stakes way to ping friends, and it also found wholly unintended uses, such as alerting Israelis to incoming missiles.

Technology's intent, much like that of the art VanDerWerff discusses, is largely left up to the user. When Wilhite developed the GIF format in 1987, it was intended as a faster way to upload color images. Today it's an entirely different form of communication. Much like image macros, which were initially meant to speed up communication on image boards, GIFs have become a shorthand for the entire range of human emotion. The GIF is a way to share movie clips,

Source: moviepsycho.tumblr.com

teach planetary science,

Source: Buzzfeed

or simply show agreement.

Source: Awesomegifs.com

The format is so easy to use, and so much easier on bandwidth than the streaming video Wilhite could only dream of, that his original intent for the technology is irrelevant.

Culture at large is subject to each of our interpretations and uses for it. Technology is often considered different only because history has tended to view technological innovations as specific solutions to specific problems. Social Construction of Technology (SCOT) theories focus on setting aside purely physical explanations for a technology's existence and tracing it instead to socioeconomic causes. While popular among social scientists, this approach quickly loses relevance once you realize that the problems a technology ends up solving often have nothing to do with the questions it originally set out to answer.

When Étienne Lenoir developed the internal combustion engine (ICE) in 1858, he originally sold it to printers and factories as a replacement for human crank workers. Consider the problems the same invention, albeit heavily adjusted, solves now. The ICE is now a part of the very fabric of our world, shifting humans across the planet and enabling the kind of quick innovation that could handle the population explosion of the 20th century. The ICE was meant to liberate factory bosses from feeding another mouth; instead it liberated mankind from its comparatively shackled reliance on the steam engine and the horse.

Source: Wikimedia Commons

If we were analyzing the ICE through a SCOT lens, we would only examine the societal and economic problems that preceded its invention and how the ICE approached them as a solution. We would have to ignore the almost unlimited utility of the basics of Lenoir's design, which solved a multitude of problems Lenoir himself could never have dreamt of.

In fact, Lenoir did design a wagon powered by his original crude engine, but he became frustrated with it after one of his prototypes was lifted by Tsar Alexander II, and he went to work on motorboats instead. This shift of priorities was heralded by Popular Mechanics at the time, which called it the end of the steam age. Of course it wasn't, but that narrow-minded focus highlights how technology can transcend the intent of both its creator and the culture that tries to frame it.

How The Internet Isolates Us From Dissent

The opportunity for everyone to customize their online media intake is often heralded as a terrible enabler of confirmation bias. If you only read The Drudge Report, for example, you might find the world you live in far scarier and more dramatic than it really is. And if you choose to habitually read The Drudge Report, chances are you already view the world that way by default.

It's increasingly rare to find places where differing viewpoints meet, polarizing our society to a glaring degree. At the height of this summer's Israeli-Palestinian war, Betaworks data scientist Gilad Lotan sampled Pro-Israeli and Pro-Palestinian tweets and found that, in both conversation and news consumption, the two groups overwhelmingly isolated themselves from any dissenting view.


Source: Gilad Lotan/Medium via Vox

The chart above visualizes this effect. In the top-left green nexus, UN and BBC links converge with Pro-Palestinian tweeters, with only rare intersections with the large Pro-Israeli blue nexus below it (lines indicate communications between Twitter accounts). The two sides not only disagree on the core facts of the issue; they rarely allow themselves to hear what the other is saying.
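Lotan's finding, that communication edges in the Twitter graph rarely cross community lines, can be quantified with a simple ratio. The sketch below uses invented account names and labels purely for illustration; it is not Lotan's actual methodology, which relied on clustering a large real-world Twitter network:

```python
def cross_group_ratio(edges, group):
    """Fraction of communication edges that cross group lines."""
    cross = sum(1 for a, b in edges if group[a] != group[b])
    return cross / len(edges)

# Hypothetical toy data: accounts labeled by the side they mostly amplify.
group = {"alice": "pro_a", "bob": "pro_a", "carol": "pro_b", "dave": "pro_b"}
edges = [
    ("alice", "bob"),    # within pro_a
    ("bob", "alice"),    # within pro_a
    ("carol", "dave"),   # within pro_b
    ("alice", "carol"),  # the lone cross-group exchange
]

print(cross_group_ratio(edges, group))  # 0.25
```

A value near zero means the hiveminds are, as Lotan found, almost entirely avoiding each other.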

The frightening effect of this polarized world, according to a recent Pew study, is that average people are far less likely to discuss societal topics at all. Using last year's Snowden-leaked revelations about the NSA as a starting point, Pew polled over a thousand adults and found that the norm for most people is self-censorship.

If the topic of the government surveillance programs came up in these settings, how willing would you be to join in the conversation?

In the chart above, the vast majority of people are far more comfortable discussing the revelations in impermanent, in-person settings. "This challenges the notion that social media spaces might be considered useful venues for people sharing views they would not otherwise express when they are in the physical presence of others," writes Pew.

What effect does this self-censorship have on online society at large? Certain venues (forums, Reddit, etc.) are far better at facilitating longform discussion of political and socioeconomic issues, but they also provide a heavy degree of moderation and anonymity. Why are people afraid to discuss such complicated topics on platforms like Facebook and Twitter, and why, when they do, do they usually reach out only to people who agree with them?

I wouldn't be the first person to note that online discourse is a cesspool for the angry, ill-mannered adolescent in all of us. The separation of a broadband connection works like the separation of a car window: you sling epithets at or about people that you never would to their face.

This toxic environment for dissent creates what we see in Lotan's graph: people cluster among friendly opinions while pushing any dissent out of the conversation. While social media often seems like a hivemind, it is actually a collection of hiveminds all avoiding each other.

For those who recognize the futility of participation ("the most logical move is not to play," so to speak), the incentive is to keep their mouths shut. It creates what political scientists call a "spiral of silence." Silence, as it happens, is the lowest common denominator; the man with no voice creates no enemies.

That this phenomenon is more pronounced online reflects how each of us has become the purveyor of our own mini media outlet. Most people are like Jimmy Fallon: if they ever talk about politics, they keep it as safe as possible. That is different from being indifferent to politics, but it does project an image of apathy.

The rest of us, apparently, are either like Jon Stewart or Greg Gutfeld. We may enjoy talking about politics, but it takes bravery to say something our audience doesn't want to hear, so we and they end up building an echo chamber of opinion, otherwise known as a circlejerk.

Our jerking, however, is often so vile and disgusting that we push out those who might usefully partake, whether they strongly dissent from our message or have yet to form an opinion at all. This alienating effect hurts everyone involved: the average person fears joining in while two extremes rotate around themselves, their Venn diagram overlapping only by a sliver. Instead of using this great resource for productive conversation, we're making ourselves ever more isolated, to our own detriment.

Once more, the very infrastructure of the web itself could be worsening things. Eli Pariser, the CEO of Upworthy, coined the term "Filter Bubble" to describe the funneling effect most algorithms create when they offer recommendations based on your previous behavior. If you click on more libertarian links on Facebook than links representing any other viewpoint, Facebook will hide dissenting viewpoints from you. Facebook, like Google Search, needs to drive traffic in order to be successful, so it feeds into your confirmation bias. By doing so, Pariser argues, "the Internet is showing us what it thinks we want to see, but not necessarily what we need to see."
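Pariser's feedback loop is easy to caricature in code. The ranking rule below is a deliberately naive assumption for illustration, not Facebook's or Google's actual algorithm: every click on a viewpoint boosts that viewpoint in your future feed, so dissenting links steadily sink out of sight:

```python
from collections import Counter

def rank_feed(stories, click_history):
    """Order stories by how often the user has clicked that viewpoint before."""
    affinity = Counter(click_history)
    return sorted(stories, key=lambda s: affinity[s["viewpoint"]], reverse=True)

# Hypothetical stories and click history, invented for illustration.
stories = [
    {"title": "Minimum wage hikes help workers", "viewpoint": "progressive"},
    {"title": "Deregulation spurs growth", "viewpoint": "libertarian"},
]
clicks = ["libertarian", "libertarian", "progressive"]  # past behavior

feed = rank_feed(stories, clicks)
print([s["viewpoint"] for s in feed])  # ['libertarian', 'progressive']
```

Each click reinforces the ranking that produced it, which is exactly the funnel Pariser warns about: the bubble tightens without the user ever choosing to tighten it.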

This structural flaw worsens the problem of biased news consumption. It’s no mistake the most popular accounts in Gilad Lotan’s study of the Israel/Palestinian debate on Twitter are media outlets: Haaretz and The Jerusalem Post for the Pro-Israel side, BBC for the Pro-Palestinian. We’re not just weeding out people who disagree with us; we’re limiting the facts we allow ourselves to see.

When your exposure to information that challenges your views is limited, whether by yourself or by a website's business model, your views become inherently more extreme, once again encouraging you to ignore dissent and encouraging others to either find their own node or not participate at all. When you isolate yourself from views or facts that make you uncomfortable, you're participating in the utter destruction of online discourse. Society has been gifted this amazing venue for public debate, and we can save it only by resisting the base instincts to pleasure our egos or destroy others.