Chris Rusak

Pinyon air.

No journey has affected me more than my recent trip to Death Valley National Park. This, especially since a vast portion of the park was just hit by flash flooding, mudslides, and washouts that punished and barricaded several major access roads, now closed.

The area I visited in the southwest quadrant, itself already hampered by a 54-mile detour around the sinkhole-pockmarked direct route, contains the park’s two highest mountain peaks and the one I went to hike: Wildrose.

Death Valley, Furnace Creek area at sea level, from above
On the Wildrose Peak Trail, elevation around 8000′
Thick, temporal morning haze

Several callings led me there, the usuals, the things I find myself needing, craving, needing more and more. Desert silence, astonishing, even and especially for those who have already heard it. Over the everpresent freeway din surrounding my life each day, that undertone of desert silence, only several hours away, has bobby-pinned my presence to the present many a fretted night.

And the desert night, the galaxies it holds for us, cupped in its canyons, rushing with stars shot around the azimuth. Heavens! Everyone should see the Milky Way in situ.

And the morning. Solely nature. Unfurling. A real warming-up. Slower than any nagging urban pestilence gnawing at our time.

But, I truly needed to get out of Los Angeles and into a desert forest to breathe. I just wanted to breathe, deeply.

Surrounded by pinyon pines, the air is an exhaust of sun-struck sap and sagebrush particulate freshly crushed underfoot, redoubled with every motion through this mist. I could not stop breathing. I could not hold my breathing, though I pained to hold each lump of savory breath.

The morning I left Wildrose Canyon I knew that in a few hours I’d be back in the world of auto exhaust, of silicone gaskets, traffic and needless standstills. And decisions.

I’ve been in Los Angeles just over two years now. I worked hard to get here. I accomplished my initial goals, having graduated from the University of California, beyond honorably. These days now were to be peppered with excited executions of other goals, the ones I set up long before I recently discovered pockmarked routes in those dreams that led me here.

I’ve been in Los Angeles just over two years now, and I don’t think it’s for me. I don’t think I want to stay here, in its recognition frenzy, its faux glitters, its exhaustion of incessant climbing, its climbing rents and frenzied brown/green/orange/steel/glass gentrification. Its scorching white and purple and pink LED façades, stripping itself like Vegas week by week. Its incredibly volatile egos hinged to self with trends, and cocaine, and reliable cycles of rejection. I feel like I moved to the San Francisco I just left, to a place becoming inexplicably expensive for its proximity to high-contrasty, graphically designed cookie-cutter dens of reclaimed wooden commercial districts. And its increasingly wooden people starstruck on the dramas of building handheld digital worlds. Invasive silicon beaches overtaking neighborhoods. And an incredibly steep social curve, toward which everyone resignedly seems to point as they excitedly suggest getting together again moments before they get themselves together in their car, then delete your texts and number. Or so it seems.

And, honestly, it’s not just Los Angeles — it’s California, and cities in general. Or so it seems.

And I hate to even say that — to even encompass that whole thought “I want to move out to the country” — but it’s been frustratingly true.

Death Valley was a good place to go and face these thoughts, notions nicking at my mind while I was wrapping up UCLA. It was a good place to go and imagine my dreams dying, of letting go of long-held long-term goals like gallery representation, critical importance, recognition, authorship, relevance. Making a difference and paying my bills. Of somehow trying to figure out how to sustain an intense artistic life in an increasingly oppressive capital world, without giving up more things to get there. Like steady money. Security, but not the oppressive faux flagship-city kind. It was a good place to go and imagine leaving urbanity and its toxicities, replaced with quieter nights away from police spotlights helicoptering the azimuth of my neighborhood. Of not following so much of what comes through the tubes. Of a different morning-kind-of illumination ritual. Of letting every expectation and hope thus far burn down and flood, and hiking a different path around the wreck, one on which I never imagined heading.

But what will you do?

Does it matter?

Eye-level with the clouds
Wildrose Peak, Death Valley National Park
September 2015

The morning I hiked to Wildrose Peak, 9064′ above sea level with pause-inducing gain, the most fantastic thing happened. A virga, at that altitude seemingly right beside me. That weekend the park was under every conceivable atmospheric warning to which Southern Californians are now accustomed, but the threat of rain was real, and it had briefly thunderstormed the night before. Still, here I was, almost two miles sky-high with a 360º view of serious geologies on an otherwise astoundingly clear day, now, suddenly, pockmarked with dense greyish shoals. And it started to rain, except it didn’t, because though I could feel the faintest sensations of moisture against my face, this rain didn’t hit the ground. It wasn’t suspended; I could hear it, like those initial moments of a weeping San Francisco fog coming through air, but it was gone as soon as it had materialized. In those few feet between my eyes and my toes, the lightest rain sublimated. So strange. Real? I lay down on the ground and watched rain melt away almost as soon as it precipitated. To think, sitting over the driest valley in the land, a rainstorm in a truncated near-body-length of space.

Anyhow, convinced I was suffering from altitude hallucinations, I shoved some protein and fruit in my gullet and started my descent, just in case this phenomenon was a harbinger of something to come. It wasn’t until a few days later, home, after doing some research, that I even learned what a virga was, that a virga was real, and that I had experienced a small one at an absurdly perfect moment in time on just the right point in space.

Perhaps this is Los Angeles’s time, and perhaps for artists this will be its space. And there lies one of my hot current conflicts. Environmentally and emotionally, I feel like I’m living in a city where I’m sublimating, in a battle between my bleeding sinuses and the brake dust, and a turf war between sirens and almost non-existent moments of silence. Industrially — and there’s more to be said, elsewhere, about this — pursuing art, in our time, seems to be the most batshit crazy thing anyone can do. (Chew on that pitch with me until later.)

The funny thing is that every morning I wake up, now, I feel like I’m having a psychotic break. The daily news, the coming ecological apocalypse, my checking account, why it is that I’m drawn to apartments with horrific neighbors, another Twitter redesign. But sitting on a mountaintop surrounded by airborne water re-disappearing back into air, I really thought I was nuts. Yet gleefully. I had one of the most ridiculous, rare natural occurrences happening right above my head and I nevertheless doubted it. Necessarily? And lately, I just can’t help but wonder if Los Angeles is a virga, too.

Out of Wildrose Canyon, Death Valley National Park
September 2015

Why do people lose their minds over Facebook?

Making the rounds today is a Guardian article on journalist Laurie Penny’s banishment from Facebook for using a pseudonym; Facebook mandates that users playing within its concentration camp do so only while displaying their “real name.” This should tip off the masses that Facebook is foundationally interested in your identity, not your participation or your content. But we need to ask, as Facebook’s real name policy has been steadily inspiring outcry for months now: why do the people who claim to need a pseudonymous Internet presence also seem outraged at their banishment, as if they had a natural right to use Facebook? There is much to parse here.

Facebook claims its “‘real name culture’ creates more accountability,” which is a corporate culture’s attempt at branding its product as safe. Safety, especially in the post-9/11 world, holds Most Favored Possession status in the capitalist socioeconomic commodity hierarchy. Consumers want products, but only if they are safe. Cars, swing sets, plastic bottles, pharmaceuticals — the moment products are shown or become known to be unsafe, they become hard to sell and are recalled. Parents especially hate unsafe products; when celebrities counter-endorse products as unsafe — say, vaccines — suddenly consumption wanes. Studies show that introducing brand name goods to children at young ages fosters loyalty to those products. Even Major League Baseball knows this. In her essay, “Understanding Loyalty and Motivation of Professional Sports Fans,” Victoria Wilkins underscores why corporations need little consumers’ attention: “Appeal to children. A true bond that lasts a lifetime starts in youth. A child will become a fan of a team … and will retain fandom throughout his or her lifetime.” Team Facebook, aware of the digital future children face, having bought out the futures of Teams Friendster, Myspace, and Google+, knows it must advertise a safe digital playing field if it wants to convince parents to allow their children to sign up.



The claim that real names will create a safer Internet experience is specious at best, a distraction. The claim certainly serves its theatrical purposes, much as the Transportation Security Administration purports to create a safer travel experience, despite what the masses experience and know.

And despite what Facebook claims about its purpose — “to give people the power to share and make the world more open and connected” — it is a publicly traded, billion-dollar corporation, operating to make a profit. Facebook is a mall, selling its visitors’ presence to advertisers and selling its visitors’ actions as the content-product that visitors come to consume. The whole operation is also one of surveillance, in that every action within the mall is seen and recorded. This is why Facebook wants children in its mall: to accurately learn and record what other loyalties these new consumers are psychologically forming at each milestone of their youth.

Unlike traditional shopping malls, however, Facebook demands you identify yourself, with your real identity, before entry, just like the TSA. Facebook, to this effect, is a private, privileged space. Yet the TSA doesn’t want to know your real name; the TSA — and Facebook — both want to know your legal name. There is a difference. Drag queens know this. Chinese academics know this. Everyone reading this knows that who you really are is more akin to the words that your lover calls you than the ones the Internal Revenue Service does. Just like government administration, Facebook wants to keep its accounting records organized by single legal identities. It wants to account for historical facts. But, unlike the IRS, it seeks to retain an inescapable lifetime accounting of actual persons’ expressions and interactions.

I’ll take the IRS instead any day.

Can you imagine driving up to a dank gas station, late at night, and the greasy cashier who has been staring you down the aisles refuses to sell you $10 worth of gas and a bottle of water, paying cash, unless you show some state ID?

“We prefer to thank you by your real name instead of calling you ‘ma’am.’ Sorry, just store policy.”

Has the Internet become so desperate that Facebook is the last gas station, late at night, on the information superhighway? Do you really need to piss beside everyone else congregating in its ripe restroom?




If you are someone who claims to be oppressed, to be at risk “of rape and death threats,” if you need anonymity while communicating with your associates, and there are well-lit, locking-door pissholes across the street that care not whether your real name is Mark Zuckerberg or RuPaul, why are you getting mad at the gas station for being so dank?

Would you send your child in to pay instead?

“Pick up mommy some cherry vape, too, sweetie. I love you!”

If the government banished the public free-speech right to call yourself or anyone else by a real name, a truer description of the person we embody, and instead mandated self-representation solely by the words scribed on a State’s legal document, the uproar would be furious. Hopefully people would loathe the loss of a civil right. The question of the State’s motive would surely publicly arise.

It seems foolish for any public business to demand one’s legal identity in order to consume its advertised free products, but the sign stating so is on the door. Don’t like it?

And yet, the vociferous complaints in the media describe users’ defensive opposition to Facebook’s real name policy and not to its larger purported motives.

Which makes me wonder: Given the growing evidence that social media corporations are less about connecting people and more about creating data-surveillance profits, and given Facebook’s especially egregious, known efforts at such, why are the ejected oppressed users so hellbent on getting back into the concentration camp from which they’ve just been thrown out? I’m not trying to suggest these oppressed users are victimizing themselves; I am instead alarmed that there is something so perniciously attractive about Facebook that its corporately bullied users keep trying to get back behind the bully’s fences, and, worse, believe they should attack the fences.

Why would you purchase the pleasure products of an oppressive business?

Why do you want to hide your identity and earnestly use a platform whose whole business model is recording history so they can perpetually identify you?

Is there no alternative venue?

Internet Distraction and the New Sight of Terror

The concept of terrorism, the language of terrorism, and the spectacle of terrorism all repeatedly graze across contemporary senses, the buzzword’s conceits fueling a whirling international discourse wrestling between freedom and safety, a self-preservative echoey cacophony itself the anthracite off which media industries’ smokestacks puff. More than a decade after that fateful September day, its mourning television broadcasts scorched onto memories like the conflagration that it was — plasma hot, irretrievable, terminal — that ante meridiem commentary of news anchors reverberates today for anyone who watched the destruction occur in the confines of their home theater. It took only forty-five seconds after Good Morning America had returned from a commercial break and Diane Sawyer had informed viewers of “some sort of explosion at the World Trade Center” before her co-anchor Charlie Gibson used the word “terrorists” in reference to the 1993 bombings at the site. An encircling helicopter transmitted real-time video of rising smoke while Gibson’s commentary stressed that facts about its cause were scant. Eleven minutes and forty-five seconds later, terrorists flew a second catastrophic plane straight into the live shot. Gibson, unnaturally calm even for the strictest of journalists and speaking over an audible background audience of a painfully thunderstruck production staff, immediately assessed the moment as evidently a “concerted effort to attack” the building. In fact, the whole viewing public had just been indelibly attacked. Months later, in his 2002 State of the Union address, his first, broadcast live as is national tradition, President George W. Bush officially ruminated on the event. “[F]ellow citizens,” Bush asserted in his opening, “as we gather tonight, our nation is at war, our economy is in recession, and the civilized world faces unprecedented dangers [raising his voice and gesticulating] yet the state of our nation has never been stronger.” Bush’s thumping fist-to-lectern assertion received twenty-five seconds of amenable applause. The Commander-in-chief later foresaw and foretold the American future: “our war against terror is only beginning,” he said, despite the fact that warring against terror has always been a regular feature not only of American history, but of humanity in general [1]. Reporting the next day, David E. Sanger of the New York Times emulated well what radio listeners and television viewers had heard and what their minds surely captured: Sanger’s report used variants of the word “terror” fifteen times, once in its headline and in eleven of its forty-five paragraphs, accurately echoing the President’s own broken-record monologue. Though the words “war against terror” might have sounded like a resolute promise to manufacture a nation’s economic recovery and to secure a homeland’s freedoms, the phrase instead resounds today like a smokescreen, a puffy clarion call for the masses to stay tuned after the following commercial messages. Thereafter and still, the language of terrorism loudly infects mass media, acting, especially for the generation who witnessed 9/11, not just as an audible touchstone of loss and fear, but more exploitatively as a strategic trigger for distractive transfixion.

In a war against a nebulous concept the actual enemy can never be in sight. Thus, the necessitation of warring and the concept of terrorism must be commodified, generally by states or private organizations. The war on terror has been accordingly staged on multiple battlefields, in the American mind, on foreign soils, against foreign souls, and now inside an international consciousness digitally interconnected by live 24-hour broadcasts on digital television and the Internet. This war’s executive producers cast the personalities of various combatants to personify or iconize an opposition, relying on a perpetual conflation of events, not necessarily worldly, nor international, nor tragic, with the terror concept. Summarizing America’s experience with this concept in The History of Terrorism: From Antiquity to Al Qaeda, Arnaud Blin of the French Institute for Strategic Analysis notes that in the immediacy after the Twin Towers fell, the Bush administration inculpated Iraq, and by association Saddam Hussein, “even though nothing suggested that country was involved in this act of terrorism” [2]. Gibson’s similarly brisk on-air appraisal had connected his aforementioned nod to the 1993 “terrorists” with his designation of the event as an “attack”; Bush’s brisk appraisal, however, sidestepped hard evidence and instead theatrically connected the tragedy to an icon already vilified as a terrorist. Both men capitalized on lingually provoked emotional tension in order to reify it in service to the production of their goods: the former, commitment to a forthcoming newscast; the latter, assent to forthcoming policy. Since terror is a social resource mined from human emotion, used historically as a “tool of enslavement and guarantor of mass obedience,” the nebulousness of a war against its creation can fizz away once an antagonist is placed on stage, in a spotlight, holding a weapon; yet, this personification and iconization of terror simultaneously creates objects for its loci, operatively transacted throughout mass media, thereby perpetuating it within the human community [3]. Plainly, a war framed against terror necessarily births the supposed enemy it claims to battle.

One grander problem with warring and terrorism in the Internet age is the rapid sleight of hand with which symbolic weapons are issued and by which humanity’s more pernicious enemies can be obscured. This phenomenon, certainly nothing new, now primarily occurs on a technologically refined, composite iteration of mass media’s ancestral modes of dissemination — newspaper, radio, film, and television — coupled onto the telecommunication industry. As the invention of the telegraph led to both primitive analog and wireless telecommunication networks, enabling the postwar entertainment technology complex, the introduction of home computers, the unrelenting refinement of their scalability, and the subsequent boom of microprocessor chips correspondingly gave rise to today’s expansive digital Internet, a remarkable always-on network sustaining convenient handheld and portable electronics. The early edition paper, the walkie-talkie, the cinema reel, and the boob tube, and more importantly their industries, have, sometimes unwillingly, become one. News headlines, horror movies, and soap operas illuminate the same transmissive juncture as phone calls, emails, and text messages. This endoparasitic subsumption of journalism, entertainment, and telephony essentially consolidates them into one phenomenal entity. The ultimate result is a significant expansion of what philosophers Max Horkheimer and Theodor Adorno describe as the culture industry. The two assert in their seminal Dialectic of Enlightenment that the culture industry — today amalgamated as the Internet — distractingly exploits consumer attention by stoking desire through its promises of amusement, leisure, and social interaction, an excellently reliable set of diversionary activities [4]. For-profit social media corporations like Facebook and Twitter demonstrate the symbiosis and expeditious dissemination of amusement content, leisure opportunities, and ostensible sociability that Internet technology enables. The consequential “heightened competition” between such agents of the culture industry for consumers’ attention fosters media spectacle, or “technologically mediated events, in which media forms … process events in spectacular ways,” a phenomenon particularly exemplified by the Internet community’s transfixion during recent catastrophic natural disasters and riots protesting police brutality. Moreover, the political stump now resembles a variety show. Media-framed political spectacles, evoking Gong Show farce with their mugging for extremists to provoke audience reactions, hope for a responsive outcry throughout the Internet, thirsty for virality — the precious commodity of mass-transfixed attention — which cares not whether it scores a ten for effort or is gonged for idiocy, but cares only that it has been seen, heard, and discussed. For consumers, whose attention once oscillated between media formats, or at least benefited from clear section headings and commercial breaks set between news and nonsense, and who relied on news anchors or emcees speaking between reports and soap opera acts, the divisions between veracity and invention are now, in a world of endless breaking news and scripted-reality TV, distinctly unclear. The systemic sameness Horkheimer and Adorno alarmingly underscore throughout their dialectic has ballooned; the Internet’s ubiquitous infinite scroll acts like a perpetual paragraph without end, creatively blurring the edges of each original subject and always promising the next point for attention, ultimately delivering this same mode of distraction over and over again [5].

And this incessancy has a purpose.

For those who choose, portable electronic devices can persistently update their owners with personalized digital events and curated headlines — the push notification — constant audible, haptic, and flashing visual pulls of consumers’ attention back toward the illuminated digital stream of consciousness. While Horkheimer and Adorno lamented advertising’s devouring of the “landscape … a mere background for signboards and symbols,” push notifications effortlessly beckon to onlookers in real space, drawing their consciousness back to a new landscape equally slathered with promoted consumerism, back to more reflexive, recursive distraction [6]. The viability of this new landscape, too, depends on its users’ incessant participation. Disconcertingly and dangerously, the new monoliths of the culture industry meticulously control their channels of dissemination, thus mediating their users’ interaction with information at-large and effecting societal shifts of consciousness. The inevitable result is a mode of digitally negotiated human interaction operable as an immediate, worldwide megaphone that broadcasts knee-jerk or exactingly premeditated communications, thrusting the actors, agents, and scenes of today’s spectacular events into the visual foreground while enabling event producers to step off into the shadows.

This light-speed, yet mediated interaction of the Internet fuels more than desire, the amusingly fantastical, and the questionably true. In one of Adorno’s later essays, “Culture industry reconsidered,” he displays prescient pings about the Internet age in his explication of what exactly the culture industry was at the time. Writing when film was still “the central sector of the culture industry,” Adorno notes how “the expression ‘industry’ is not to be taken too literally. It refers to the standardization of the thing itself … and to the rationalization of distribution techniques, but not strictly to the production process” [7]. This claim warrants correlative transposition to the present day.

Previously, a film-centric culture industry relied on the “extensive division of labor” to create its salable product — a “star system” of actors to iconize each film, thoroughly assembled by “industrial forms of organization” which, unlike autonomous artists crafting their artwork through most of its fruition, disunite the film’s technical components, distribute these to factory-like groups of disparate workers, who then manufacture its elements through mechanical means and later assemble an actual filmic thing [8]. But, as Adorno emphasizes, the filmic thing is merely the base commodity around which culture circulates, and this industrialized circulation wields the most influence against the societal subjectivity it encircles. The phenomenon of amusement instead “becomes an ideal,” and so the filmic thing itself not only amuses; its numerous ancillary modes — the trailers, the teasers, the press releases, the movie posters, the star scandals, talk show appearances, the troubled production, the tabloid machine, and plethoric red-carpet fêting — also measure out steady morsels of amusement to incessantly stoke consumer chatter and, hopefully, sell a ticket to the exhibition [9]. Journalistic news and highbrow criticism complement this cycle, too. This total labor force of yesteryear’s culture industry did far more than filmic manufacturing. In fact, its heavily controlled star system deliberately sought to negotiate consumer attention between film and then-nascent television by manipulating the press to modulate celebrities’ images, which consequentially modulated consumers’ relationship to celebrities, which ultimately modulated consumers’ relationships to themselves [10]. Horkheimer and Adorno saw this as manipulating the “identity of a species” by depreciating individuality, or the encouraging of homogeny [11]. Or, the emergence of persons and things but only through sameness.

Today’s culture industry operates equivalently, perhaps more deceptively. Hollywood-like vertical integration persists and grows: conglomerates like Comcast and AT&T control not only the similarly organized production and distribution of media, but also the physical electronic networks over which consumers receive that content. Profit-hungry Internet companies vehemently attempt to strangle that distribution network, reusing a tactic of early film studios, which asphyxiated independent movie theaters in metropolitan markets by restricting their access to high-quality, high-demand content, then aggressively acquired them as they failed. Recent public reaction to the threat of throttled data distribution and rumored concomitant price hikes naturally played out over the very Internet networks to be asphyxiated, becoming commercial and political spectacle itself. Senator Ted Cruz, a current presidential hopeful, famously compared, through social media, net neutrality, a federal mandate for corporations to provide equitable data delivery speeds to their paying-by-choice subscribers, to the healthcare coverage of Obamacare, a federal mandate for consumers to purchase services from corporations in an effort to spur equitable access to insurance, an apples-to-oranges juxtaposition that received much Twitter-gonging and satirical re-appropriation for what manifested as nothing more than inane self-promotion. Meanwhile, the very enemies of net neutrality synchronously reported on the hysteria. For example, a week after Cruz’s quotable outburst, reporter Jane C. Timm critically outlined the senator’s faulty logic in her article “Ted Cruz won’t back down on net neutrality argument.” Timm, however, wrote for msnbc.com, whose parent company, NBCUniversal, had 49% of its ownership stake purchased by Comcast the year prior. While Cruz spouted off against net neutrality, Comcast made contributions to the Jobs, Growth & Freedom Fund, a political action committee bearing Cruz’s face as spokesperson and from which he financially benefits. Comcast obviously benefits from the increased web traffic and advertising exposure across its divisions, stoked by the whole event it helped fund and create. Cruz, acting as a prominent member of the Internet star system, helps displace public consciousness about the enemy and the crime and instead keeps it focused on the spectacle. Furthermore, this performance deliberately shields an underlying terror within the culture industry — the wide-reaching and overwhelming concentration of media and communication controlled by behemoth corporations.

Yet, consumers satiate themselves with endless culture industry diversions, a “means of putting things out of mind … even when on display” [12]. Adorno, speaking of the culture industry’s capacity to inject consumers with fabricated desire, flatly highlights this paradoxical, absurd instance of the public’s eating from the hand that chokes them, how culture “deludes them with false conflicts … [solved] only in appearance” [13]. Accordingly, the technologically mediated phenomenon of net neutrality now seems insignificant: encircling rhetoric about controlling the speed at which Internet content is delivered moves the public to erupt as it distracts from the fact that they have already been manipulated, through an amusing sleight of outrage, by a culture industry whose weapon instead is the very manipulation of communication and media itself. The fantastical cycle of amusement, the industry itself, whirls speedily and largely unquestioned around the masses desirous of a freely breathing Internet. Today, using sharpened tools of distraction and shining fanatical stars, the culture industry is primed for mass exploitation through persistent attention-grabbing (mis)information.

Horkheimer and Adorno’s observation of the old culture industry façade — “formal freedom is guaranteed for everyone” — could be the perceived or desired sign hanging on the front door of the Internet now that culture industrialists must yield to a concept of net neutrality, recently enacted by the Federal Communications Commission, which will regulate Internet service as telephony [14]. “However,” the philosophers continue, “all find themselves enclosed … within a system,” which the two delineate as the various community institutions that had historically acted as “the most sensitive instrument of social control.” New members of the growing digitally connected global community do find themselves unwittingly entering a system whereby social interactions are controlled by unseen algorithms, the ultimate form of rationalized decision-making and a cornerstone of corporate and political administration. Horkheimer and Adorno discuss the danger of “mathematical formalism,” under which algorithmic administration certainly falls, and given their assertion that “industrialism makes souls into things,” the industrial algorithm, if appropriately employed by Internet corporations, can help catalyze, through the standardization of social activity, the kinds of shifts in social consciousness that have led previous generations toward harm [15].

Nevertheless, the varied auras of social media’s function and purpose help parry notice of its manipulative and exploitative capacities, especially its communicative un-freeness. A recent study by a multi-university group of researchers reveals the efficacy of this phenomenon. Motahhare Eslami et al. examine how algorithms affect habitual Facebook users’ social perceptions and compare them to their perceptions of the platform. Notably, the researchers point out how nearly two-thirds of participants did not know Facebook’s primary content aggregating feature, the News Feed, is algorithmically controlled, and rather believed that “every single [interaction] from their friends … appeared in their News Feed.” Accordingly, a large percentage of users are likely unaware that social media algorithms can strengthen class bias, redouble racial profiling, and unwittingly censor user-requested data through erroneous electronic “moral judgements, such as [in the removal of] terms deemed to be related to child pornography.” Upon revelation of these facts, though, one participant’s reaction is illuminating: “It’s kind of intense, it’s kind of waking up in ‘the Matrix‘ [sic] in a way. I mean you have what you think as your reality of like what they choose to show you. [...] So you think about how much, kind of, control they have…”. This cinematic comparison is telling since it acknowledges not only the expanded culture industry’s roots, as well as an analogy of social media audience membership to the filmic group experience of watching a screen, but also the latent control lurking right behind the production. Facebook, in fact, makes no effort to conceal its control over its users; in Forbes, Kashmir Hill describes a 2012 “emotion manipulation” experiment Facebook facilitated on 689,003 of its users, whose consent for such subjection the corporation failed to acquire until four months after the social experiment ended. Besides revealing Facebook’s questionable ethical stance, “[t]he experiment manipulated the extent to which people were exposed to emotional expressions in their News Feed,” revealing how “emotional states can be transferred to others via emotional contagion, leading people to experience the same emotions without their awareness” [16]. The fact that Facebook has 1.44 billion monthly users, or approximately 20% of the world’s population, emphasizes the immense power consolidated in this one key corporation of the culture industry. This concentrated attention toward a single technological entity, its worldwide spatial reach, and its efficacious expediency are dangerous: Eslami et al. discovered the more troubling effects algorithmic filtering stokes, namely that a significant percentage of participants believed, in cases where content from close friends was no longer being displayed in their News Feed, “that friends had dropped them due to political disagreements or their unappealing behavior,” or, flatly, that such silence denoted their own inadequacy beyond mere digital interaction, resulting from being “not interpersonally close enough.” Other participants plainly expressed anger or frustration at what they viewed as a violation of “their expectations” about Facebook’s product, being treated like experimental mice, “being lied to,” admitting the unseen filters “affect their behavior.” Ultimately, participants realized that the News Feed is operative social manipulation. Though social media might seem for some to be a free technological exchange of discourse, its many users mistake the controlled cinematic Internet screen for unmediated human interaction.

Willingly disrupting human interaction, however, is a key aim and feature of terror.

In his essay “Terror’s Atomization of Man,” Horkheimer and Adorno’s associate Leo Löwenthal offers a phenomenology of terror as an affective force in the concentration camp and in society in the wake of the Holocaust. Writing long before digital social media, when news traveled fastest through the vocal grapevine, Löwenthal notes operative similarities in terror’s effects between the two domains. He begins plainly: terror has deep roots “in the trends of modern civilization, and especially in the pattern of modern economy” [17]. He distinguishes an “interruption of the causal relation between what a person does and what happens to him” as key in an operation which seeks “dehumanization through the total integration of the population into collectivities, then depriving them of the psychological means of direct communication in spite of — rather because of — the tremendous communications apparatus to which they are exposed” [18]. By manipulating social interaction, severely influencing one’s subjective experience by disrupting the traditional expectations and operations of interpersonal communication, he believes, the Nazis were able to transform “a human being … into a unit of atomized reactions” [19]. This function, however, operated on both the victims and the perpetrators, shaping the latter’s mentality to stoke, justify, and enable the performance of psychological and physical violence behind camp walls, and to provoke a phenomenal assumption that their complicit manufacturing of terror would mine their own “self-perpetuation” [20]. Löwenthal identifies this as a type of “cultural monopoly,” an industrialization that equates human beings to raw goods or merchandise [21]. Outside of the concentration camp confines, he sees cultural monopolization and the steady improvement of technology together increasingly rendering human beings “largely superfluous,” which he views as another “pre-condition of terror” [22]. Thus, separating individuals by disrupting communication and categorically generalizing them, simply like groups of objects inside resulting cultural monopolies, prepares them “to accept the most insane ideologies and patterns of domination and persecution” as it provides those entities bearing totalitarian desires “a road to power and an object for its exercise” [23]. This effort of emotional control, “the systematic modification of the ideas and feelings of the masses,” Löwenthal points out, was Adolf Hitler’s operative bedrock [24].

Despite the chronological distance from the world’s most recent catastrophic event, because today’s always-on culture industry loudly reports tragic events as spectacle, threats of unprecedented danger are never far from the horizon. Terrors unrealized and freedom’s fragility are kept perpetually in view. In his 1965 essay, “Threats to Freedom,” part of his Critique of Instrumental Reason, Horkheimer peers at the concept of freedom from several angles and attempts to outline “manifestations of [its] regression” in society that result from social and technological change [25]. Notably, Horkheimer indirectly juxtaposes the socioeconomic demands placed upon citizens of centuries past beside the technological imposition citizens face today, drawing out the distinction that, between those two abstract historical periods, the ideological “limitation of freedom had caused the further development of freedom,” especially experienced in the bourgeoisie-proletarian dichotomy [26]. He believes the intervening “technological revolution” fails to improve and undergird living conditions for most, and he intimates that it rather atomizes society further [27]. He provides two interesting examples: first, television, which radically changes how children discover their world — discovery through a “screen and its images” — and de-prioritizes interaction with one’s parents; second, courtship, evidenced by a “pamphlet giving the young man rules” for interaction with women, which explains how to quantitatively assess them and make a “rational” choice to successfully find a bride [28]. The former example still rings true today: a recent report shows “[o]ver a third of children under the age of 1 have used a device like a smartphone or tablet,” with a majority of their parents employing Internet devices as a means “to calm their child.” Horkheimer sees this technologically mediated formative engagement with one’s external world as negatively resulting in intellectual passivity [29]. And Horkheimer’s latter example merely typifies an analog algorithm, a program of rules and calculations to derive a result: primitive technology, as simple as a kitchen recipe, but certainly prototypical of programmatic News Feed interactivity. Irrespective of whichever technology transacts the data, though, Horkheimer qualifies mass media as “suggestion and manipulation … bound up with the active supplying of information” [30].

Technological progress, Horkheimer proceeds, continually facilitates surveillance [31]. And surveillance significantly equips its agents to control the surveilled. Culture industry corporations, actively supplying parts of the world with their daily information, monitor their users’ expressions, interests, and digital relationships shared on or transmitted through their platforms. Essentially, consumers freely proffer complete dossiers of themselves. This starkly contrasts with the burden that previous generations of watchers faced in assembling complex profiles of individuals or groups. A person’s sentiments, movements through space, employment activity, affiliations, and other minutiae are effortlessly cataloged each time a user updates their digital presence. Hitherto privileged or secret diaries are now transacted freely, serialized amongst entertainment and news. Social media’s auras of amusement and interaction distract and cajole consumers into exchanging their own daily information for participation in the Internet star system, into donating data in order to retain the privilege of being seen. Though the divide between mass murder and mass media is wide — Hitler’s intentions were unequivocally malicious while the culture industry’s motivation is outwardly capitalistic — the current monopolization of consumer attention and communication, and its consolidation through increasingly intrusive technological interaction, should jar those individuals who are unwilling to consider a comparison of history to the present and who instead stare gapingly into their battery-powered, handheld, black mirrors of the Internet. The culture industry’s growing intimate examination of consumers’ behavior increasingly renders them vulnerable to social atomization, and this consequence of a watched Internet community is clear: Eslami et al. reveal that almost half of their participants, in addition to experiencing personal behavioral changes as a result of the algorithmically triggered disruption in communicative causality, also report digital familial disconnection — that is to say, the experience of atomizing alienation between biological family members due to reduced exposure to their digital activity — perceiving this phenomenon as Facebook’s flawed categorization of “people.” The emotion manipulation experiment, “given the massive scale of social networks such as Facebook,” similarly alerted its researchers, who believe that in terms of digital emotional sway, “even small effects can have large aggregated consequences” [32]. The social media algorithm, once programmed by its corporation, efficiently operates as a surveilling digital biographer and social arbiter, recording data, assessing comportment, matchmaking new relationships, and choosing the news to feed to its consumers: commercial manipulation with great effect on subjective and interpersonal consciousness. Today’s media landscape, simultaneously more controlled by its producers and more personalized for each consumer, is primed for a considerably surreptitious use of imagery and rhetoric, and methods of media distraction, often constructed as means of entertainment, increasingly operate in society as systematic terroristic acts themselves. Since the advancement of Internet technology seems to continue on a meteoric trajectory, now venturing into the realm of wearable biometric interfaces, the threat to human consciousness through cultural manipulation is real.

While concentration camps have heretofore been physical spaces filled with people forcibly contained, surveilled, atomized, and tortured, in the current epoch of humans increasingly digitizing their lives, their communities transcending oceans as much as existing across multiple server farms, it might be time to question how terror can operate on large concentrations of digitized consciousness. A need for barbed wire or force might be giving way to free participation and distraction. To the extent the Internet and social media facilitate cultural exchange and communication, they also put their consumers under producers’ constant spotlight. Digital presence is an always-illuminated duplex of watching and being watched. For some, this is already terroristic enough.

Participating on the Internet is still voluntary, but to function and live without it is becoming increasingly difficult, especially once one has become accustomed to it. And to avoid surveillance in a post-9/11 world is nearly impossible. Horkheimer indicates how surveillance’s “influence … on our speech is evident,” that knowledge of its ongoing occurrence changes communicative behavior; moreover, “great words,” he continues, like “freedom, lose their meaning,” especially in repetition. Horkheimer offers an anecdote on this latter disintegration:

A while back I received a well-meaning pamphlet on educational reform, with the request that I go through it very carefully. On the first page the word freedom was used thirteen times. In my answer I said that if I should find the word honesty used thirteen times in a business advertisement, I would surely buy nothing from such a store [33].

Consider, now, Bush’s aforementioned State of the Union address, his language of terrorism. Consider the rhetorical sway of drumbeating “terror,” “terrorism,” “terrorists,” ceaselessly through the media. Consider the metronomic tension of a ticking clock in a silent room hushed by heightened emotions. Consider the nuisance of a nervously tapping stranger’s finger in a quieted public space. Repetitive sameness considerably accumulates distractive transfixion over time. Willed ignorance or selective unconsciousness will nevertheless muffle an unrelenting clock’s tick, de-powering its ability to affect us; consider the ignorance one must exercise to silence the incessant news accounts that report what once were merely aggravated crimes, but now market to audiences better as domestic terrorism. Sameness of sound — which repetitive rhetoric triggers, too — promotes an active mode of unconsciousness, an intellectual passivity, an acquiescence to its existence, resignation to its cycle.

But the clock still ticks.

With those twenty-five seconds of acquiescent applause, Congress audibly assented to the power a leader needs in a time of crisis. Bush’s rhetoric launched into its echoey drumbeat, a years-long reliance on constant reiteration of words like “terror” and “freedom” to structure his media engagement with the public. Despite the fact that a conceptualization of freedom is as nebulous as one of terror, mankind respectively seeks and relies on the use of both in order to live [34]. Whether he was actively aware of this or not, Bush’s conflating drumbeat shaped national consciousness and pacified national sentiment, hoping to unite a nation. But to corral people is also to contain something which stands outside them, and to corral people is to corral power.

And so we fought.

Bush’s tick-tock language, however, masked a terror of its own, one at the time dependent on a passive public unable to scrutinize it. Though the post-9/11 war on terror appeared to have an “advantage of being … free of all moral ambiguity,” since then scores of individuals have revealed the questionable machinations it fueled [35]. In particular, former United States Army intelligence analyst Chelsea Manning, now jailed for espionage after leaking an astounding trove of classified documents to the public, wrote in the New York Times about the United States government’s wartime manipulation of the media. Likewise, former CIA contractor Edward Snowden exposed the concurrent construction of a mass surveillance sponge that soaks up much of the public’s exposure on the Internet, especially from social media, categorizing it as “extremely questionable surveillance for reasons entirely unrelated to national security.” The backbone of this digital panopticon is powered by for-profit corporations, in particular Amazon, which furnishes the CIA with remote server farms, as well as other behemoths like AT&T and, of course, Facebook. Today’s Stasi is vertically integrated with not just the culture industry but the whole economic system around which it whirls. Consumers’ interactions with social media sites create the streams of content delivered to other consumers to fuel incessant interactivity, the necessary resource to create revenue; synchronously, from these incessant streams of content, terror-weary intelligence agencies purportedly derive the intelligence content delivered to anti-terror operations, a resource necessarily fueling a war against terror. Although Löwenthal warned technology would ultimately lead to human beings’ superfluousness, industry now thrives dependably on his other fear, merchandised human interaction. Consumers who manufacture the content fed to other consumers exist as a workforce exploited for their creativity, remunerated primarily with free participation in a system of cyclical amusement, and this labor complements the self-perpetuation of an ever-hungry intelligence machine, whose expanding structure props up the whole system on the backend.

The consumer is the product. That is terror.

Bush’s language also masked depraved motivations within his administration. Torture, terror reified through physical violence, became a valued resource in the Bush war machine. On December 10, 2014, the United States Senate Select Committee on Intelligence, chaired by Senator Dianne Feinstein, released its heavily redacted, unclassified “Findings and Conclusions” from the Committee Study of the CIA’s Detention and Interrogation Program, commonly referred to as the CIA Torture Report. In her Foreword, Feinstein notes her own memory scorched by 9/11:

I recall vividly watching the horror of that day, to include the television footage of innocent men and women jumping out of the World Trade Center towers to escape the fire. The images, and the sounds as their bodies hit the pavement far below, will remain with me for the rest of my life.

She acknowledges the indelible effect this living visual “context” holds on public consciousness, its reverberant shock and undulation through the intelligence and defense communities, a collective fear of more terror. And she summarizes how, underneath the fear of these undulations, suspected terrorists detained by the United States “were tortured.” The Report describes gruesome treatments — sleep and food deprivation, waterboarding, medically unnecessary “rectal feeding” — euphemistically termed “enhanced interrogation techniques” by the CIA yet akin to Nazi concentration camp protocol. Though the Committee repeatedly finds such torture to be defensively worthless, The New York Review of Books contributor Mark Danner reveals worse: CIA administration ordered the repeated methodical torture of post-9/11 detainees despite “the strenuous objections of the interrogators” charged with its undertaking, and:

the use of those techniques, in this brutal, appalling extended fashion, had let them prove, to their satisfaction, that [a detainee] didn’t know what [administrators] had been convinced that he did know. It had nothing to do with him giving more information as he was waterboarded. The use of these techniques let them alleviate their own anxiety.

Danner’s assessment demonstrates Löwenthal’s paradigm of terror manufactured to placate its perpetrator’s hand. Danner also describes how a “hysteria” — about torture’s lawfulness, use, and efficacy — resounding inside the Bush administration also intentionally played out in the media, often hinged on the faulty classified intelligence extracted from the atomized reactions of tortured detainees [36]. This substantiates Adorno’s construct of distractive false desires: spectacle utilized to positively stoke public sentiment on the necessity or performance of torture all while actually helping to deflect from the fact that the performance was already underway.

As a consequence of the American torture program, the recursion of terror is very visible on the public’s horizon. While 9/11 was minimally witnessed on the Internet at the time, today’s terrorist organizations take maximum advantage of its broadcast spectrum. Instigated by not only the CIA torture program but also decades of American imperialism, and fueled by hateful malevolence, organizations like the Islamic State (ISIS) have now embraced American creativity in service to their motives of terror. Their recent murders of hostages, particularly American journalist James Foley, reignited fears when slickly produced, high-definition short films of the executions were released directly on the Internet and gained immediate virality. In his recent article “Islamic State and its increasingly sophisticated cinema of terror,” Los Angeles Times reporter Jeffrey Fleishman keenly identifies ISIS’s tactics as ones with “Hollywood aesthetics … stylized for a world wired to social media.” He claims that, beyond inciting the public’s fear and rage, ISIS hopes, through recruitment spurred by such productions, to grow its membership, attracting especially the impressionably young and disenfranchised. He describes the awful, methodically filmed immolation of Jordanian pilot Lt. Moaz Kasasbeh, a devastating montage of highly choreographed visual narrative and footage of Syrians alleged to have been killed by U.S. coalition forces. Frighteningly, as his article points out, the films’ draw is an “apocalyptic” “perverted ideology”: polished real-life performances of Hollywood’s reliably crude narratives — violence, horror, and, from a Muslim’s perspective as victim, retaliation — meant to advertise a “primitive brand” of governance in contrast to that of the free world. Writing about the films for Al Akhbar English, the blog of Egypt’s second-largest daily newspaper, reporter Islam Sakka explains how “the group has moved from recording events with regular cameras, to becoming a media strategy organization whose task is to shape the group’s upcoming messages to the world.” Sakka even underscores the conceptual, subjective underpinnings of their working style: “All sounds were intentionally muted to create an atmosphere of anticipation. This is how ISIS wanted it, and this is how we unconsciously deal with it.” And their strategy is effective, attracting to their operations, with ancillary social media spectacle, new, young enlistees not only from Europe and the Middle East, but America, too. On one hand, young recruits purchasing the propaganda must assume that in order to win the capital of freedom from torture and imperialism, it must be wrested from or extinguished in another, an iteration of the bourgeoisie-proletarian dichotomy; that is to say, the creation of their freedom through the fatal limitation of it elsewhere. On the other hand, exposure to filmic imagery of actual torture — arguably torture itself directed at the viewer, like the live shot of an airplane destroying thousands of lives — exhibited amongst the disruptive communication systems of social media, proves to be paving roads toward terrorist ideologies of dominance which advocate insane persecution, roads on which these new willing enlistees assent to being the exploited objects for its exercise.

As technology continues to enable new methods for the manipulation of consciousness and the manifestation of terror, a recommendation for preventative limitations or deterrent restrictions might seem fitting. But Horkheimer warns against attempts to reverse exposure to or eschew technology: “flight into the past is no help to the freedom that is being threatened” [37]. Still, he believes “the human individual” is fading into “a world of numbers [that] is becoming the only valid one” [38]. As younger generations increasingly encounter digital Internet technologies crafted of atomic zeroes and ones to be as natural as life itself, humanity becomes more identifiable by its digital avatars; the current pervasive phenomenon of the selfie, the obsessive sharing of one’s own image on social media, even demonstrates, as Horkheimer notes, how self-awareness is being sublated by an industrialized “corporate mentality” [39]. Considering culture industry corporations’ continued push into the developing world and their glowing presence in undereducated communities, and considering that these corporations’ proprietary technologies — particularly their cherished and valued structural algorithms — generally remain heavily guarded, these communities are the most susceptible to terror operations hiding behind digital avatars, to corporate and government propaganda appearing as non-fictive, and to the algorithmic myopia of censored or meticulously curated content appearing as unfettered. Such tactics could greatly affect local intersubjectivity and foment, intentionally or otherwise, communal atomization. As terror organizations like ISIS expand their technological capacities, upgrade their media production facilities, and grow their proprietary distribution networks and social media sites — often graphically designed to indiscernibly mimic their Western analogues — the division between economic capitalism and terror capitalism, too, blurs, a divide which might be glaring to an American purview, but deceptively exploitable elsewhere.

The progressive normalization of digitally negotiated relationships — the standardization of embodying a digital presence submitted for its rationalized, distributed exhibition — risks damaging a perceptive richness natural to unmediated human-to-human interaction. Today’s culture industry, its fuel of virality, its promise of incessancy, and most of all its capacity for stardom, invites everyone, anyone, to take their own picture and submit it to the spotlight. On the infinite scroll of the Internet, the selfie exists beside the celebrity, it fits snugly above the riots and below the crimes. Each uploaded image of the self is an insertion into its cinema. But transfixed in the confines of cinema, affixed to a screen waiting to subsume heroes and villains, audiences never truly see themselves.

[1] Chaliand, Gérard, and Arnaud Blin, eds. The History of Terrorism: From Antiquity to Al Qaeda. Trans. Edward Schneider, Kathryn Pulver, and Jesse Browner. Berkeley: University of California Press, 2007. pp. vii-viii.
[2] Ibid p. 412.
[3] Ibid p. vii.
[4] Horkheimer, Max, and Theodor W. Adorno. Dialectic of Enlightenment: Philosophical Fragments. Stanford, Calif.: Stanford UP, 2002. pp. 113-126.
[5] Ibid p. 94.
[6] Ibid pp. 131-133.
[7] Adorno, Theodor W. The Culture Industry: Selected Essays on Mass Culture. London: Routledge, 1991. p. 87.
[8] Ibid pp. 86-87.
[9] Horkheimer and Adorno p. 115.
[10] Mann, Denise. “The Spectacularization of Everyday Life: Recycling Hollywood Stars and Fans in Early Television Variety Shows.” Private Screenings: Television and the Female Consumer. Lynn Spigel and Denise Mann, eds. Minneapolis: University of Minnesota Press, 1992.
[11] Horkheimer and Adorno p. 116.
[12] Ibid pp. 113-116.
[13] Adorno p. 90.
[14] Horkheimer and Adorno p. 120.
[15] Ibid pp. 20-21.
[16] Kramer, Adam D. I., Jamie E. Guillory, and Jeffrey T. Hancock. “Experimental Evidence of Massive-Scale Emotional Contagion through Social Networks.” Proceedings of the National Academy of Sciences 111.24 (2014): 8788-8790.
[17] Löwenthal, Leo. “Terror’s Atomization of Man.” German 20th-Century Philosophy: The Frankfurt School. Schirmacher, Wolfgang, ed. New York: Continuum, 2000. p. 81.
[18] Ibid p. 82.
[19] Ibid p. 83.
[20] Ibid pp. 83-84.
[21] Ibid pp. 85-86, 90.
[22] Ibid p. 89.
[23] Ibid pp. 89-90.
[24] Hitler, Adolf. Quoted in Löwenthal, pp. 90-91.
[25] Horkheimer, Max. Critique of Instrumental Reason. London; New York: Verso, 2012. pp. 156, 138-139.
[26] Ibid pp. 136-140.
[27] Ibid p. 140.
[28] Ibid p. 141.
[29] Ibid p. 140.
[30] Ibid.
[31] Ibid p. 142.
[32] Kramer, Guillory, and Hancock.
[33] Horkheimer p. 142.
[34] Ibid p. 144.
[35] Chaliand and Blin p. 415.
[36] Ibid.
[37] Horkheimer p. 140.
[38] Ibid p. 157.
[39] Ibid pp. 157-158.

Art school.


UCLA Reject

…you just saved 100% on life assurance by switching from Simco.