
Blockades and IPOs: China’s Great Firewall gives its companies an advantage over Western counterparts

Originally posted on PandoDaily:


China is making it even harder for Western companies to bring their services into the country. The New York Times reports that the Chinese government has “draped a darker shroud over Internet communications in recent weeks” to “tighten internal security,” and in doing so it’s made it even harder for companies like Google to offer many of their services in the country.

Google’s services were all blocked in China in the lead-up to the anniversary of the Tiananmen Square protests, according to a site focused on censorship in the country, and the Times reports that the blockade has stayed in place ever since. The Chinese government is particularly clever in its efforts to make Google’s services unusable — instead of merely blocking them outright, it sometimes allows searches to go through, creating the illusion that the problem is with Google’s services:

The Chinese authorities typically allow a tiny fraction of searches and…


Miley Covers Zeppelin And A Generational Struggle Unfolds

Above is a fairly remarkable cover of Led Zeppelin's passionate ballad "Babe, I'm Gonna Leave You" by notable provocateur Miley Cyrus. Her version is apt and decidedly lo-fi, with the singer straddling the fence between Robert Plant's sheer power and his passive androgyny.

As one might expect, Miley Cyrus is not the first pick most Zeppelin fans would make for a cover artist. Her dueling careers, first as a Disney Channel star and now as a twerking, scantily clad agent of pop sexuality, make her the precise antithesis to the hard-rocking, masculine endeavor that is Getting The Led Out.

 

Indeed, her deft performance here was met with the predictable vitriol and abusive sentiments that have sadly become part of the natural landscape of the Internet.

“One day,” writes user Ausgebombt, “when you’re washed up, fully drugged out and broke, you will finally realize you were never an artist, never a musician, but rather just a set of tits for a producer to dangle on stage for millions of idiots to throw money at (like a common stripper). People will still be listening to Led Zeppelin decades after they forget your name.”

Led Zeppelin does hold an odd place in American culture. While definitively "dad rock", they've extended themselves past the Deep Purples of the world yet haven't quite attained the same "coolness" harvested by distinctly glam acts like David Bowie or Queen.

Part of this has to do with their dual existence as fodder for stoned mystics and backyard BBQ soundtrack. As Chuck Klosterman writes in his excellent Killing Yourself To Live, Led Zeppelin holds a unique ubiquity among American male rock fans:

For whatever reason, there is a point in the male maturation process when the music of Led Zeppelin sounds like the perfect actualization of the perfectly cool you…You simply think “Wow. This shit is perfect. In fact, this record is vastly superior to all other forms of music on the entire planet, so this is all I will ever listen to, all the time”…And this is your Zeppelin Phase, and it has as much to do with your own personal psychology as it does with the way John Paul Jones played the organ on “Trampled Under Foot”…Led Zeppelin is unkillable, even if John Bonham was not.

Hence the straightforward abuse being sent towards Miley. Zeppelin captured not only a certain sound (dubbed "The Sound" by Grantland's Steve Hyden) but also a mysterious ethos about being male and wishing you were a rock star.

 

Miley, on the other hand, represents everything that is not that ethos. While Led Zeppelin were not the kings of authenticity ("Babe I'm Gonna Leave You" was originally penned by a Berkeley student in the 1950s), Miley is so forthrightly contrived she often seems to be at odds with her past and future selves.

Considering her start as Hannah Montana–a character already in existential throes between her normal and rock star selves–Miley Cyrus has had to overcome quite a number of unfair expectations. Her performance at the 2013 VMAs was as calculated a decision as doing a verse on a Snoop Dogg song or jumping on stage with The Flaming Lips. She's throwing pasta at the wall to see what sticks, and in doing so creating and abolishing archetypes a typical celebrity might strive to own.

 

This catch-all tactic of redefinition is a risky one; Lady Gaga long ago ran out of ways to shock anyone but the people who thought her albums would still sell. But such reinventions are inherently something red-blooded rock fans despise: pretension.

This, of course, is ironic as rock and roll mastered the art of crafting an identity. Before the indie waves of the 1980s told everyone you could just as easily perform in a T-shirt, rock was spandex’s biggest customer. From the moment The Beatles donned faux-military coats to the rise of the mohawk all the way up to Kurt Cobain showing up on Headbanger’s Ball in a dress, rock–masculine, bravado-driven rock–has long been obsessed with image. The same kind of people who might trash Miley for being yet another plastic pop star will sing along to a leather-clad lad literally named Nikki Sixx or defend the drumming skills of a man in kitten make-up.

 

But this is not how rock fans remember it or even believe it to be. Despite Alex Turner's mod affectations or the Mumford house sigil of Flannel n' Beards, there is a deeply held belief that rock music is more real than pop. This is why Miley's crossover cover of Zeppelin is so infuriating. Despite her obvious vocal talents, she is, to recall the phrasing of one brave anonymous commenter, "just a set of tits for a producer to dangle on stage for millions of idiots to throw money at."

That sentiment is, first and most importantly, sexist. Male rock fans readily criticize female pop stars for their use of sexuality, and then we wonder why so few of the top-earning rock acts are women; who would want to play for such a hostile audience?

Second, as stated before, male rock stars are all pretension. All performance is pretension. There’s no more authenticity in Robert Plant’s lace vests than in a symphony conductor’s coattails or Katy Perry’s firework brassiere.

Not realizing this core lesson of pop culture is the American rock fan's greatest mistake. Miley is no more and no less an inorganic creation than any great rock band. When I think of Miley, I think of this video of her covering a Melanie classic:

 

The song is almost predictive of the world she would soon introduce herself to. Whether she wanted to be a "true artist" or not, she's become indebted to the same machinery this song decries. The comments on this performance contain far fewer death threats and far more commiseration about missing "this side" of Miley. It's a decidedly stripped-down performance and therefore will inherently "seem more authentic", but it, too, is a mechanized exploration of image.

And perhaps that's why the protective defense of Led Zeppelin against Miley Cyrus is so unhinged. Led Zeppelin never quite had "phases". They may have tweaked their sound from album to album, but never with the massive swings Miley exhibits from Hannah Montana to "Party In The USA" to Bangerz. David Bowie, for example, lost a core of his rock audience when he abandoned the Ziggy Stardust pose he held through the 1970s and adopted a dance-heavy sound.

 

This is where we encounter the idea of "the sellout". In the view of people for whom this is a problem, there is no greater sin–and most pop music by definition fits the bill. Metallica, for example, had a distinct change in sound after the 1988 masterpiece …And Justice For All. The next, self-titled album, dubbed "The Black Album", is notably softer in tonality and production, far more accessible for an audience just then abandoning hair metal and embracing harder acts like Soundgarden and Pearl Jam.

Because Miley Cyrus distinctly changes herself to find who she is/what is most marketable, she's immediately a sellout without hope of redemption. The greatest acts of all time have also challenged such notions–Bob Dylan has at least 12 distinct phases by my count and The Beatles had at least four. But she pays for it because of the volatility of her performance and where it places her on the musical scale: directly opposed to all a Led Zeppelin fan knows to be good and true.


World’s first 3D-printed car unveiled in Chicago

Ben Branstetter:

You wouldn’t download a car…

Originally posted on WHNT.com:


(WGN)– In a matter of two days, history was made at Chicago’s McCormick Place, as the world’s first 3D printed electric car—named Strati, Italian for “layers”– took its first test drive.

“Less than 50 parts are in this car,” said Jay Rogers from Local Motors.
Roger’s company is part of the team that developed the engineering process to manufacture an entire car with carbon fiber plastic and print it with a large 3D printer set up at McCormick Place by Cincinnati Incorporated.

Oakridge National Laboratory also collaborated on the concept that could bring custom printed cars to the marketplace by 2015.

“You could think of it like Ikea, mashed up with Build-A-Bear, mashed up with Formula One,” Rogers told us.

The concept of Strati began just six months ago, before being brought to the showroom floor of the International Manufacturing Technology Show.

Attendees got a first-hand look at…


Do We Love To Watch Tragedy?

Yesterday morning, for the 9th year in a row, MSNBC re-aired its broadcast footage of the 9/11 terrorist attacks. Starting from the collision of the first plane into the North Tower of the World Trade Center at 8:46 AM and ending with the collapse of that same tower, the rerun of this national tragedy–which started in 2006–is billed as a “Living History” broadcast.

Dan Abrams, then General Manager of MSNBC and now leader of Abrams Media (Mediaite, The Mary Sue, etc.), faced much criticism for this programming decision. Gawker has called the tradition "PTSD-inducing" and many on Twitter have dubbed it "death porn".

Dan Abrams, for his part, has defended the decision. In 2011, during the 10th anniversary of the attacks, he wrote:

No one was forced to watch MSNBC coverage. I watched it for the fourth year in a row. Many others will have chosen to change the channel. But in a world where cable news is often consumed with internecine and sometimes invented squabbles, seeing one of the most important moments in American history as it aired, in real time, seems to be exactly what cable news can and should do best.

I, too, have made a small tradition of watching the coverage if I can. I’ve also spent time on Youtube watching news break of the JFK assassination, the death of John Lennon, and the Columbine shootings. These are monumental historical moments and, with the historical record so easily accessible, it’s an invaluable if difficult opportunity to even simulate the experience of having history unfold upon you.

In no time at all, you can relive any number of disasters, natural or otherwise. What separates this practice from listening to FDR’s speech shortly after the surprise bombing of Pearl Harbor? In fact, recreating the historical record is exactly what we call history. When we visit Gettysburg, for example, most of what we learn comes from very personal accounts of the deadliest battle in US history.

The passage of time plays a major role, as Abrams points out in his defense, but so does the graphic nature of the content. Footage of 9/11 is quite dramatic and shocking, but it's not what we might call violent. Footage of people jumping from the towers is certainly more violent; the famous "Falling Man" photo is the most salient example. In its recent write-up about the publishing history of the image, Motherboard recounts the public outcry over the image's notoriety:

Readers were incensed. Had the press no decency? Tasteless, crass, voyeuristic. From the Times to the Memphis Commercial Appeal, dailies pulled the image and were forced to go on the immediate defensive as they wiped the image from their online records. Don Delillo didn’t use the image on the cover of his 2006 novel Falling Man, though in 2007, the Times would run it on the front of the Book Review. But mostly the image hasn’t been seen in print since 2001. Drew has called it “the most famous photograph no one has seen.”

That said, there wasn't a newspaper in the country (or in the world) that fretted over publishing the larger-scale images of the tragedy. Fireballs rising from the towers or the antenna of the North Tower descending into smoke and debris were and remain commonplace. Although troubling, they merely seem to represent the thousands of deaths occurring in that moment. They allow us to subconsciously pretend these events aren't happening, that lives aren't being quite literally crushed before us, while still experiencing the event from a safe distance. Seeing one person fall to their death, however, feels a bit too personal.

 

Distance, in fact, also seems to play a major role in how we observe such events. Newspapers will readily publish images of brutal violence abroad that they would never run if the victims were American. How many gruesome scenes of car-bombing or tsunami victims have we seen laid plainly above the fold of The Wall Street Journal?

Peter Maass for The Intercept:

It is a different thing when the victims are ours. When it comes to our own citizens, the consequences of war are preferably represented in elliptical ways that do not show torn flesh or faces of the newly dead. Instead, we see townspeople lining up and saluting as a hearse drives by, we hear the sound of taps at a funeral, we remember the flag as it was placed in a brave widow’s hands, or we see a wounded veteran with a handful of pills for PTSD.

When Malaysian Airlines Flight 17 was shot down over the battlefields of eastern Ukraine, images from the scene were grisly and morbid. In response, cable TV news outlets blurred out the bodies of the passengers, even if that meant presenting nothing but a formless cloud of pixelated grays and whites, giving viewers no more information than a picture of a cloudy sky.

Source: CNN

Print and online media had no such restraint. Buzzfeed compiled the uncensored images for your viewing (with a click-to-view trigger warning). So did TIME.

So the moving (as opposed to still), personal image of death is the media's limit, but is it ours? Liveleak has famously made an entire business out of having little to no censorship, meaning there exists a strong audience for brutally violent real-world content. When video footage surfaced of the moments shortly before a 9-year-old girl killed an instructor with an Uzi in a gun range accident, many Redditors asked of the video, "Soooo….where's the Liveleak version?"

This hunger for "snuff" footage isn't new to the Internet. In 1963, many newspapers went with the now-famous photograph of Thích Quảng Đức's self-immolation in protest of the persecution of Buddhist monks by the South Vietnamese government.

Source: Wikimedia Commons

Shortly afterwards, the released Zapruder footage of JFK's assassination would become the most recognizable piece of media surrounding that event, brain viscera and all.

Both images were very important to their respective stories. Thích Quảng Đức’s message was the brutality of his death; simply writing that a man burned himself in protest does not carry the power of his message like the iconic photo. The Zapruder film is possibly the most analyzed piece of media ever.

But the Internet's insistence on spreading graphic media–even media with no notable news or historical value–speaks to a core interest within the zeitgeist for images that have nothing to do with historical moments or messages of philosophy. Millennials like myself might remember being quietly introduced to Rotten.com, perhaps the most famous early aggregator of violent content online. In a 2001 profile of the site, Salon cited Rotten's average daily traffic as 200,000 unique visitors–a tidy sum for that time.


As I remember it, Rotten was an endurance test. Kids scrolling through the site in a computer lab might as well have been having a staring contest. I still remember the image that made me swear off the site: A man’s face lying in the grass–sans the rest of his head–after being whipped off by a helicopter propeller.

Rotten’s founders see it differently, billing the website as a statement against censorship online:

We cannot dumb the Internet down to the level of playground. Rotten dot com serves as a beacon to demonstrate that censorship of the Internet is impractical, unethical, and wrong. To censor this site, it is necessary to censor medical texts, history texts, evidence rooms, courtrooms, art museums, libraries, and other sources of information vital to functioning of free society.

What drove us there? Is it any different from watching a massive terrorist attack unfold or bodies wash ashore after a hurricane? Why do we (a large number of us, anyway) actively seek out the grotesque ends of a story, newsworthy or not?

The question's been bugging me since the release of two videos showing the beheadings of American journalists James Foley and Steve Sotloff. Many sites, including Twitter, Youtube, and Liveleak, have banned the videos or images taken from them.

Many applauded the move. Chris Taylor at Mashable wrote:

When we look at something so shocking it’s impossible to erase from our brains, we give power to the person who wants to gain the notoriety of having made you look. Every time you choose not to look, not to share a link, or to share another remembrance instead, you’re restoring a little bit of decency to the Internet and removing power from the perpetrator.

Peter Maass, quoted earlier, would disagree:

I wish we didn’t have to ask these questions — that there were no loathsome images to flash on our screens — and I wish we didn’t have a responsibility to look and think deeply. But we do, if the depravity of war is to be understood and, hopefully, dealt with.

It's a difficult question to handle. Foley and Sotloff's families had to deal with the fact that this was how many Americans came to know their sons:

Source: New York Post

Certainly there’s no need to subject them to more reminders of their sons’ gruesome demise.

At the same time, however, Maass makes a good point: Images can bring to life the harshness of the realities of war in a way text simply can’t, giving appropriate gravity to how we form our views. In the Dalton Trumbo novel Johnny Got His Gun, the severely maimed main character begs for the opportunity to be toured through every senate and parliament to show off his tortured existence as a sample of the realities of war:

Remember this. Remember this well you people who plan for war. Remember this you patriots you fierce ones you spawners of hate you inventors of slogans. Remember this as you have never remembered anything else in your lives.

Dragging the reality of war before the eyes of the public, however, can also have unintended consequences. When images and videos are as free as they are now–with social media often mutilating the truth into a slippery abstraction–they risk being taken and abused for nefarious purposes. In her epic 2002 essay about war photography in The New Yorker, Susan Sontag wrote:

To the militant, identity is everything. And all photographs wait to be explained or falsified by their captions. During the fighting between Serbs and Croats at the beginning of the recent Balkan wars, the same photographs of children killed in the shelling of a village were passed around at both Serb and Croat propaganda briefings. Alter the caption: alter the use of these deaths.

Indeed, the image below spread around the Internet as a true photograph of a Syrian boy resting between the graves of both his parents, fatalities of that country’s ongoing civil war.


Source: Buzzfeed

It was, in fact, the work of a Saudi photographer.

The photo was taken by photographer Abdel Aziz Al-Atibi. The boy in the photo is his nephew and it was taken for a conceptual art project that Al-Taibi was working on.

Source: Also Buzzfeed

What about images more immersive than simple 2-D video and photographs? Project Syria uses virtual reality to simulate for anyone the streets of Aleppo during a bombing raid or daily life in a refugee camp. Developed by documentarian Nonny de la Pena, the "experience" uses photographs, videos, and personal accounts of a single bombing from 2013 to recreate the traumatic experience of war as accurately as possible:

She was invited to the World Economic Forum in Davos–where titans of industry and government meet to discuss such things as war and disasters–in a Trumbo-esque wish to have those in power witness the wars they choose. Nor is this the first such experience de la Pena has created; she has also recreated stories of hunger on the streets of LA and of being a Gitmo prisoner.

Even if there is value in her experiences, why would anyone voluntarily submit to being traumatized? Images of beheadings or terrorist attacks already have a negative effect on our minds and health. In a UCI study from 2011, researchers found that being subjected to images of 9/11 did affect individuals' mindsets and increased their overall stress (PDF) over the long term. A report published in the British Journal of Psychiatry found the same thing, with the effect inversely related to geographic distance from the event.

If we follow de la Pena's view, perhaps the emotional effect of the news is less a warning for news media (as the UCI study frames it) than a warning for news consumers. The world is gory and depressing. If we blind ourselves to that, are we not whitewashing human existence?

Not every murder or horrible incident needs to be unveiled. There’s not much public benefit in actually watching a 9-year-old girl become an accidental murderer. But watching the towers fall on 9/11–or even the devastating view of people leaping to their deaths rather than facing the flames–may actually inspire the appropriate amount of woe and misery such an event should yield.

The common refrain that "9/11 changed everything" is often mocked as another sign of American blindness to world events. Many of us willingly see these things–out of cathartic fascination, rubbernecking, or otherwise–but many more of us choose to abandon the world around us as simply too morbid. The realities of our complex civilization can easily be hidden if we'd like, but that can have devastating effects on us culturally.

Source: Cagle.com

The average news consumer is already far too geocentric; if publishing nauseatingly vicious photos actually drives a news story through the thick egos we all carry, then perhaps it's more than snuff.

Then again, text can often accomplish the same thing a photo can. Despite the widespread censorship of the ISIS beheading videos, the story has had wider penetration among the American public than any other news story of the last five years. But was it not driven by the cover stories on websites and newspapers showing James Foley dressed in orange, standing on an ethereal desert hillside with a literal masked villain waving a knife at him?

You should feel miserable about James Foley, Steve Sotloff, and Flight MH17. We should feel compelled to question a culture that encourages a small child to practice firing an Uzi. We should be as fully aware of the horrors wrought by colonialism and globalization as we can be. We Americans especially should, at the very least, allow ourselves to bear slight witness to the atrocities our decadence pays for.

This does not mean we need to turn every crime blotter into Rotten or the backpages of 4chan. Nor does it mean we should be apologists for purposeful exhibitions of violence and gore, from crush videos to bumfighting. But if we want to be world citizens, if we want to actively feel compelled to question the morals of our leaders and the fortitude of our enemies, nothing can heighten our sensitivity like allowing ourselves to experience it, even if from the comfort of our safely guarded homes.

Why A Technology’s Intent Doesn’t Matter

Over at Vox, Todd VanDerWerff has a fantastic explanation and takedown of authorial intent. Vox recently interviewed Sopranos showrunner David Chase, and in the interview he reveals his meaning in that show's famously enigmatic ending. While many fans assumed the abrupt cut-to-black that ended the series meant Tony Soprano had died, Chase confirms (sort of) to Vox that Tony is, in fact, still alive within the universe he created. VanDerWerff's point is that fans should follow their own analysis.

Likewise, VanDerWerff argues that the insistence of Hello Kitty's creator that the iconic cartoon is not a cat but a little girl should be meaningless if individuals and the culture as a whole perceive the character as a cat:

For many critics — including myself — the most important thing about a work is not what the author intends but what the reader gleans from it. Authorial intent is certainly interesting, but it’s not going to get me to stop calling Hello Kitty a cat.

It's a fantastic read, but, for me, it brought up a different facet of authorial intent. In the vein of the Hello Kitty problem–in which the answer is largely binary–it reminded me of Steven Wilhite. Wilhite, the inventor of the Graphics Interchange Format (GIF), clarified that GIF should be pronounced with a soft g, making it a homophone of Jif peanut butter.

I disagree with Wilhite's statement, mostly because within the acronym the g stands for a hard-g word: Graphics. Therefore the acronym GIF should be pronounced with a hard g, not as its inventor intends.

A more difficult issue arises when we turn to a vaguer definition, like the meaning of the Sopranos ending. For example: what does it mean to "fave" a tweet? Farhad Manjoo tackled this problem earlier in the week after Twitter announced it would be experimenting with a Facebook-esque update that would surface tweets faved by enough of your friends or followers, effectively turning the fave into the algorithmically important Facebook Like.

When it wasn’t being used as a bookmark to help you remember links for later, pressing “favorite” on a tweet was the digital equivalent of a nod, a gesture that is hard to decipher. The fav derived its power from this deliberate ambiguity.

The fave, much like the "poke" before it, is left deliberately obscure in meaning so users can find their own meaning for it. We see this in the trendy app Yo, which allows you to send one single message to a friend: "Yo." While likely meant as a simple joke, the app became a somewhat jokey way to contact friends but also worked as an improvised way to, say, alert Israelis to incoming rockets.

Technology's intent, much like the art VanDerWerff discusses, is largely left up to the user's intent for it. When Wilhite developed the GIF format in 1987, it was intended as a faster way to upload color images. Presently, it's an entirely different form of communication. Much like image macros–which were initially meant to speed up communication on image boards–GIFs have become a shorthand for the entire range of human emotion. It's a way to share movie clips,

Source: moviepsycho.tumblr.com

teach planetary science,

Source: Buzzfeed

or simply show agreement.

Source: Awesomegifs.com

The format is so widely used and so easy to use–and easier on bandwidth than the streaming video Wilhite could only dream of–that his original intent for the technology is irrelevant.

Culture at large is subject to each of our interpretations and uses for it. Technology is often considered different only because history has tended to view technological innovations as specific solutions to specific problems. In Social Construction of Technology (SCOT) theories, the focus is entirely on weeding out purely physical reasons for a technology's existence and narrowing its explanation down to socioeconomic ones. While popular among social scientists, this theory quickly becomes irrelevant when you realize that the problems a specific technology ends up solving often have nothing to do with the original questions it set out to answer.

When Étienne Lenoir developed the internal combustion engine (ICE) in 1858, he originally sold it to printers and factories as a replacement for human crank workers. Consider the problems the same invention–albeit heavily adjusted–solves now. The ICE is now a part of the very fabric of our world, shifting humans across the planet and allowing for the kind of quick innovation that could handle the population explosion of the 20th century. The ICE was meant to liberate factory bosses from feeding another mouth, and it instead liberated mankind from its comparatively shackling reliance on the steam engine and the horse.

Source: Wikimedia Commons

If we were analyzing the ICE through a SCOT lens, we would only examine the societal or economic problems that preceded its invention and how the ICE approached them as a solution. We'd have to ignore the almost unlimited utility of the basics of Lenoir's design to solve a multitude of problems Lenoir himself could never have dreamt of.

In fact, Lenoir did design a wagon powered by his original crude engine, but he grew frustrated with it after one of his prototypes was lifted by Tsar Alexander II, and he went to work on motorboats instead. This transition of priorities was heralded by Popular Mechanics at the time, which called it the end of the steam age. Of course it wasn't, but that narrow-minded focus highlights how technology can transcend the intent of either its creator or the culture that tries to frame it.

How The Internet Isolates Us From Dissent

The opportunity for everyone to customize their online media intake is often decried as a terrible enabler of confirmation bias. If you only read The Drudge Report, for example, you might find the world you live in far scarier and more dramatic than it really is. And if you choose to habitually read The Drudge Report, chances are you already view the world that way by default.

It's increasingly rare to find places where differing viewpoints meet, and that polarizes our society to a glaring degree. At the height of this summer's Israeli-Palestinian war, Betaworks data scientist Gilad Lotan took samples of pro-Israeli and pro-Palestinian tweets and found that, in conversation and news consumption, the two groups more often than not isolated themselves from any dissenting view.


Source: Gilad Lotan/Medium via Vox

The chart above visualizes this effect. In the top-left green nexus, UN and BBC links converge with pro-Palestinian tweeters, which only rarely intersect with the large pro-Israeli blue nexus below (lines indicate communications between Twitter accounts). Both sides find themselves not simply disagreeing on the core facts of the issue but rarely allowing themselves to hear what the other side is saying.
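
To make the shape of an analysis like Lotan's concrete, here is a minimal sketch in Python using networkx. It is my own illustration with toy, hypothetical account names, not Lotan's actual data or methodology: accounts become nodes, a mention or retweet becomes an edge, and community detection then shows how few edges cross between the clusters.

```python
# Minimal sketch of a Lotan-style interaction graph (toy data, not his actual pipeline).
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Each pair is (account that tweeted, account it mentioned or retweeted) -- hypothetical examples.
interactions = [
    ("pro_isr_1", "jerusalempost"), ("pro_isr_2", "haaretz"), ("pro_isr_1", "pro_isr_2"),
    ("pro_pal_1", "bbc"), ("pro_pal_2", "bbc"), ("pro_pal_1", "pro_pal_2"),
    ("pro_isr_1", "pro_pal_1"),  # a rare cross-group interaction
]

G = nx.Graph()
G.add_edges_from(interactions)

# Detect communities purely from the interaction structure.
communities = list(greedy_modularity_communities(G))

# Count edges whose endpoints fall in different communities -- the "sliver" of overlap.
membership = {node: i for i, com in enumerate(communities) for node in com}
cross_edges = sum(1 for u, v in G.edges() if membership[u] != membership[v])

print(f"{len(communities)} communities, {cross_edges} cross-community edges "
      f"out of {G.number_of_edges()} total")
```

On real Twitter data, the telltale sign of the polarization Lotan describes would be a cross-community edge count that is tiny relative to the within-community total.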

The frightening effect of this polarized world, according to a recent Pew study, is that average people are far less likely to discuss societal topics at all. Using last year's Snowden-leaked revelations about the NSA as a starting point, Pew polled over a thousand adults and found that the default for most people is self-censorship.

If the topic of the government surveillance programs came up in these settings, how willing would you be to join in the conversation?

In the chart above, you can see that the vast majority of people are far more comfortable discussing the revelations in impermanent, face-to-face settings than on social media. "This challenges the notion that social media spaces might be considered useful venues for people sharing views they would not otherwise express when they are in the physical presence of others," writes Pew.

What effect does this self-censorship have on online society at large? Certain venues (forums, Reddit, etc.) are far better at facilitating longform discussions of political and socioeconomic issues, but they also provide a heavy degree of moderation and anonymity. Why would people be afraid to discuss such complicated topics on platforms like Facebook and Twitter, and why, when they do, are they usually only reaching out to people who agree with them?

I wouldn't be the first person to note that online discourse is a cesspool for the angry, ill-mannered adolescent in all of us. The separation of a broadband connection works much like the separation of a car window: you sling epithets at or about other people that you never would to their face.

This toxic environment for dissent creates what we see in Lotan's graph: people cluster among friendly opinions while simultaneously keeping any dissent from entering their conversation. While social media often seems like a hivemind, it is actually a collection of hiveminds all avoiding each other.

For those who recognize the futility of participation–"the most logical move is not to play", so to speak–the incentive is to keep their mouths shut. It creates what political scientists call a "spiral of silence". Silence, as it happens, is the lowest common denominator; the man with no voice creates no enemies.

That this phenomenon is more pronounced online reflects how each of us has become the purveyor of our own mini media outlet. Most people are like Jimmy Fallon: if they ever talk about politics, they keep it as safe as possible. This is different from being indifferent to politics, but it does project an image of apathy.

The rest of us, apparently, are either like Jon Stewart or Greg Gutfeld. We may enjoy talking about politics, but it takes bravery to say something our audience doesn’t want to hear. This forces us and them to create an echo chamber of opinion, otherwise known as a circlejerk.

Our jerking, however, is often so vile and disgusting that we push out those who might usefully partake, be they people who strongly dissent from our message or people yet to form an opinion. This alienating effect hurts everyone involved: the average person fears joining in while the two extremes orbit themselves, overlapping in only the thinnest sliver of a Venn diagram. Instead of utilizing this great resource to have productive conversations, we're making ourselves even more isolated, to our own detriment.

What's more, the very infrastructure of the web itself could be worsening things. Eli Pariser, the CEO of Upworthy, coined the term "Filter Bubble" to describe the funneling effect most algorithms create when they offer recommendations based on a user's previous behavior. So if you click on more Libertarian links on Facebook than links representing any other viewpoint, Facebook will hide dissenting viewpoints from you. Facebook, like Google Search, needs to drive traffic in order to be successful, so it feeds into your confirmation bias. By doing so, Pariser argues, "the Internet is showing us what it thinks we want to see, but not necessarily what we need to see."
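
To illustrate the funneling Pariser describes, here is a toy sketch in Python. It is my own illustration with hypothetical class and viewpoint names, not any platform's actual ranking code: a feed ranker that simply boosts whatever viewpoint a user has clicked before will, after a handful of clicks, push dissenting items out of view.

```python
# Toy illustration of a "filter bubble" feedback loop -- not any real platform's algorithm.
from collections import defaultdict

class ToyFeed:
    def __init__(self):
        # How often the user has clicked each viewpoint tag.
        self.clicks = defaultdict(int)

    def record_click(self, viewpoint: str) -> None:
        self.clicks[viewpoint] += 1

    def rank(self, items: list[tuple[str, str]]) -> list[tuple[str, str]]:
        # items are (headline, viewpoint); score each item by past clicks on its viewpoint.
        return sorted(items, key=lambda item: self.clicks[item[1]], reverse=True)

feed = ToyFeed()
for _ in range(5):
    feed.record_click("libertarian")   # the user keeps clicking one viewpoint
feed.record_click("progressive")

items = [("Tax cuts explained", "libertarian"),
         ("Case for single-payer", "progressive"),
         ("Centrist budget deal", "centrist")]

# Unfamiliar and dissenting viewpoints sink to the bottom of the feed.
print(feed.rank(items))
```

Even this crude scoring rule reproduces the dynamic Pariser warns about: the more one viewpoint gets clicked, the higher it ranks, and the higher it ranks, the more it gets clicked.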

This structural flaw worsens the problem of biased news consumption. It's no accident that the most popular accounts in Gilad Lotan's study of the Israeli/Palestinian debate on Twitter are media outlets: Haaretz and The Jerusalem Post on the pro-Israeli side, the BBC on the pro-Palestinian side. We're not just weeding out people who disagree with us; we're limiting the facts we allow ourselves to see.

When your exposure to information that challenges your views is limited–whether by your own choices or by a website's business model–your views become inherently more extreme, once again encouraging you to ignore dissent and encouraging others to either find their own node or not participate at all. When you isolate yourself from views or facts that make you uncomfortable, you're participating in the utter destruction of online discourse. Society has been gifted with this amazing venue for public debate, and we can only save it by resisting the base instincts to flatter our egos or destroy others.

 

How Generation X Became America’s Middle Child

Generation X–loosely defined as those born between 1962 and 1980–has the great misfortune of being sandwiched between two seemingly more iconic generations (full disclosure: I'm 25). Baby Boomers are still largely defined by their social movements and the umbrella of societal change they grew up beneath. Kennedy, The Beatles, Vietnam, Watergate, and everything else I learned from Forrest Gump are mostly as iconic as they are because they occurred during a Gutenberg-esque expansion in mass media, namely television.

Millennials, likewise, are able to exhibit their nostalgia with a ubiquity heretofore unseen, thanks to the sudden spread of online media. We are fully enabled to relive our childhood through the expansive catalog of Surge soft drink commercials and Legends of the Hidden Temple reruns available to us at any given moment. Much like the wistful Boomer nostalgia that has ensnared television and film for the past three decades, the Internet is sickeningly full of baited cues for Millennials' soft-hearted wish fulfillment that our childhood will never end.

 

Generation X, by comparison, is somewhat lost in the valley between the two revolutions. Sure, mass culture of every decade of the last century can be relived online, but it will always be Millennials who were the first to be so familiar with its inner workings. In fact, Generation X falls neatly in the middle on adoption of new technologies, never as slow as Boomers but not quite keeping pace with Millennials. According to a Pew study, Generation X falls between the two on nearly every major issue and lifestyle measure.


Info source: Pew Research. Image source: CNN Money

Because they exist in this gulf between the largest generation ever (Millennials) and the former largest generation ever (Boomers), Xers are understandably downtrodden about being passed over in lazy think pieces like this one. For Gizmodo, writer Mat Honan pens the article "Generation X is Sick Of Your Bullshit" and proceeds to lay claim to most of the trophies picked off by his generation's older and younger siblings:

Generation X is a journeyman. It didn’t invent hip hop, or punk rock, or even electronica (it’s pretty sure those dudes in Kraftwerk are boomers) but it perfected all of them, and made them its own. It didn’t invent the Web, but it largely built the damn thing. Generation X gave you Google and Twitter and blogging; Run DMC and Radiohead and Nirvana and Notorious B.I.G. Not that it gets any credit.

While his boasting is a bit much, Honan is not incorrect. Like every generation before and after it, GenX has much to lay claim to when accounting for its most accomplished members. And Generation X's effect on the Internet should not go unnoticed. In their twenties, Xers populated early IRC rooms and Usenet forums, setting the standard for Reddit, Twitter, and other popular services founded by GenXers. While those early services never reached mass appeal during GenX's youth, Generation X can and should be credited with creating social media.

Sadly, however, these innovations have come too late for the generation that designed them. As has been the case since the 1950s, the money to be made on the media explosion of the last two decades lies with the kids. Millennials and Boomers on average waited longer to get married and have children than Generation X did, meaning Generation X's era of disposable income was shorter than that of the generations around it, and therefore less worthy of nostalgic remembrances on CNN or listicles on Buzzfeed.

What's more, Generation X is mostly seen as cynical, slacker crybabies. From the rise of gangster rap to grunge to Cameron Crowe to The Real World–all the way up through Dave Eggers and the murdering of 80s franchises like The Smurfs and Transformers–Generation X has less restorative capability than Boomers or Millennials simply because it lacks a defining historical moment. The Challenger disaster? The deaths of Kurt Cobain or Tupac Shakur? While tragedies all, they fail to match the historic arc of the Vietnam era that preceded them or the War on Terror of their adulthood.

Calling Generation X a "transitional" generation would be too easy; all generations are, by their nature, a transition from one era to another. Xers, however, have the existential misfortune of being placed between two theatrical generations defined not just by their culture but by the media they use(d) to experience it and the societal moments that impacted it. The Cambrian-style eruption of media experienced by both Boomers in the 50s and 60s and Millennials in the aughts and now cannot be matched by the in-between evolution of the Star Wars generation, which had only a bit of time with both before it got to watch the former die and the latter explode.

There is no home for Generation X. They watch TV like their parents did (but more often) and use the Internet like their kids do (but less often). Michael Harris, author of the new book The End of Absence, told Quartz: "If you were born before 1985, then you know what life is like both with the internet and without. You are making the pilgrimage from Before to After."

While Harris has an optimistic view of that generation's importance ("if we're the last people in history to know life before the internet, we are also the only ones who will ever speak, as it were, both languages"), it seems likely Generation X will be remembered much like "the Silent Generation". Those born and raised during the Great Depression and World War II earned that moniker by largely escaping the economic strife and war met by both their parents and their children–so much so that they also earned the name "The Lucky Few". The adult cast of Mad Men, for example, is largely made up of the Silent Generation (consider that Sally Draper is a Baby Boomer).

In the rise of mass media, Generation X is likewise stranded between two great events. Born after the rise of television and too early for the rise of online media, they are not so much "The Lucky Few" as "The Unlucky, Unattended Few". They simply grew up between two great eras of novelty, the kind of thing purposefully targeted at people in their youth and then retreaded for their pleasure in old age. For this, they may go unnoticed: the Silent Generation can be credited with the Civil Rights movement and the moon landing, but good luck taking those accomplishments away from the Boomer Industrial Complex.

 

Similarly, considering how strongly my fellow Millennials cling to their cultural artifacts, it may be difficult to remind them that Steve Jobs and Barack Obama were not born in 1989, nor were the icons of early-2000s culture handcrafted by startling prodigies.

The interplay between generations relies mostly on collective consciousness: what we deem important to our culture and how we assign it to the calendars of our own lives. Something like 9/11 or the Kennedy assassination, for example, puts a distinct divide between "before" and "after" because it is a singular incident. It narratively makes sense to treat such a thing as the start of a different era, even if the designation of that era is largely perfunctory and insignificant.

The rise of any given media form is, often enough, too blurry for groupthink to use as such a cultural boundary. The truth is that no single generation can or should lay claim to any societal movement, as anything truly important culturally will span many generations and be adapted and changed by each. Generation X, lacking the sort of historical mile marker that typically ends or begins a chapter in a textbook, is trapped between an indistinct dawn and an even vaguer dusk. In the words of that GenX spokesman Tyler Durden, "we're the middle children of history…We have no great war. No great depression. Our great war is a spiritual war."

 

As Generation X ages, it is likely it will be no more and no less influential in finance, culture, or politics than any generation before or after it. That's what generations do; they move forward through time, carrying as much baggage as gifts. Every generation is a revolution. What Generation X may never have, however, is anyone telling them so.