#these extreme moral undergirdings
rollercoasterwords · 1 year
Text
gonna try not to go on and on because i have other things i told myself i'd work on this morning but it truly is such a strange experience seeing how much of like...online/popular discourse about morality echoes conservative religion. like. idk man maybe it's the perspective of having been raised in a conservative religious environment that was very centered around stuff like guilt and control and then growing older and losing that religion and separating myself from that environment but. it's just crazy to me how much supposedly "woke" or "progressive" rhetoric is almost indistinguishable from the conservative christian rhetoric i was raised with at its core. like:
idea that "sin"/perceived bad actions means a person is fundamentally "sinful"/evil/bad
idea that consuming any "sinful"/bad/"problematic" media/literature/art makes YOU a bad person
using morally weighted labels to discourage critical thought or nuance. like. if something is labeled Evil by the Group then you must accept it as Evil. if something is labeled as Good by the Group then you must accept it as good
emphasis on punishment + guilt!! if you fuck up you must Get On Your Knees and Repent
there's definitely more to be said but...yeah! idk it's just kind of scary to me that if you replace the word "problematic" with the word "sinful" a lot of the rhetoric i see coming from ostensibly progressive circles could literally be something i'd hear sitting in the pews of my old church lol
83 notes
bestworstcase · 1 year
Note
I'm sold on some sort of detente with Salem being part of the godbros' reckoning, but wonder if this series' Tolkienish beats (gold macguffins, fellowship, Cinder & Salem's Sauron-Morgoth parallel, etc) are causing *tons* of fans to write the notion off as Saruman-style undermining of The Cause-especially with Raven & Leo's examples for Team Oz pundits to point at.
ok so full disclosure this:
[two images]
plus being able to identify gandalf, orlando bloom, That One Dwarf, and gollum in stills from the peter jackson films, represents the sum total of what i know about LOTR. i remember, vaguely, reading a rwby meta post along the lines of “cinder is getting worse because tolkien themes” but i’m afraid i did not retain any of the actual argument because, as i recall, the analysis of cinder’s arc up to the point when the post was written did not strike me as terribly insightful or, like, accurate as to what was going on with her narratively.
all of which is to say, there very well might be a connection that is staggeringly obvious to actual LOTR fans. but 1. tolkien bores me to absolute tears so i’m working off the LOTR wiki pages, which i have just skimmed, and 2. my general impression is that large swaths of the rwby fandom do not know how to distinguish between “common genre conventions” and “deliberate, meaningful allusion to another story” and 3. two out of three things in that list are, um, stock fantasy tropes, and nothing in the wiki pages for morgoth or sauron seems… remotely like salem or cinder? so my inclination is to conclude this is another winter-is-jadis situation. gfbfjc sorry
honestly i think the degree of fandom-wide resistance to the possibility of salem (or cinder, to a lesser extent) undergoing a villain -> hero arc mostly comes down to the expectations of the genre. the looming implacable evil sorceress doesn’t… negotiate. the heroes don’t decide she maybe has a point about the gods who sent the chosen one to go to war with her. like this is not how the story is supposed to go—and of course rwby is playing with that deliberately in order to criticize the simplicity of the moral framework undergirding the genre, but i think a lot of fans saw the familiar surface and took for granted that salem (and cinder) would function in the narrative in an archetypical way.
[which, to be fair, i did too—lost fable completely blindsided me because i also kind of went, yeah okay evil manipulative ancient witch, sure, i know this song—but then to be less fair, i also paid attention to lost fable.]
and then on top of it there’s also, like, this is an… unusually vindictive and bloodthirsty fandom. i don’t think i’ve ever encountered a fandom with so many mainline, high-profile bloggers who participate in the humiliation fantasies about the main villain getting taken down a peg, ideally in the most emotionally sadistic way possible, on the grounds that she’s a spoiled brat throwing a tantrum.
it’s… weird, and the constant repetition of that reading by the otherwise mostly reasonable mainstream has really profoundly distorted the fandom’s ability to like… parse what’s happening on screen, to the point that things like salem using the normal, universally-agreed-upon term for a type of magic gets twisted into “proof” that salem doesn’t think modern humans are people. hfghvs like this fandom is SO determined to interpret everything salem does through the most batshit, extreme lens imaginable, for… some… reason. and with that being the normal background radiation for discussions about her, like, of course “salem villain -> hero arc imminent” sounds absurd. don’t you know she kicks puppies and eats the hearts of babies for breakfast??
6 notes
Text
Why Batman Doesn't Kill
(and also why he doesn't use guns).
One of the perennial questions which plagues Batman discourse is "why doesn't he just shoot them?" It's the first question people ask when getting into the character and the most common critique leveled against him. And honestly, for as much as I hate the question (because it's been asked and answered so many times) I do get why it's being asked.
Because on a surface level? It seems like he *should*. Batman is already a person who is characterized by his extreme hard stance on crime and his willingness to use morally suspect means (brutal violence, fear and torture*) to accomplish his goal. And at this point the Joker has killed like a million people? It seems weird and arbitrary that the Batman isn't willing to use lethal force on that guy while he's throwing people off buildings and running them over with his car**. It draws attention to how much the format depends on characters like the Joker *not* dying and so drains the believability out of the story.
But to understand why Batman won't kill (and why he won't use guns) you have to understand what Batman is trying to accomplish.
Batman's goal isn't to end crime. This is a common misconception stemming from a SparkNotes read of his origin (crime kills kid's parents, kid vows to end crime). He's not going out every night and beating up criminals because they are *breaking the law*. He's not Judge Dredd, and his goal isn't to make a world where everyone is too frightened of him to disobey the state. He's not going to go after someone for draft dodging or beat up a starving orphan for stealing bread. He doesn't care if you jaywalk, and he's not going to stop you from writing ACAB on the police station. He *fights* crime, but that's ultimately in service to a larger goal.
Similarly, Batman isn't trying to make the world a better place. Or, he is, but not in a generic, maximum-utilitarianism way. Batman has a *specific* cause he's fighting for, and while reducing global poverty and world hunger *helps* with that (he does do a lot of philanthropy), it's not the most direct method of achieving his goal.
So what is Batman trying to accomplish? Simple: he wants to make sure no one ever has to go through what he did. Bruce's entire world was shattered by his parents' death, and he swore to make a world where no one else has to go through that. It's that trauma *and* that empathy which undergird all his actions. It's why he stops people from beating their wives and why he stops the Joker from murdering people.*** He does this because he understands, truly understands, how much these things can hurt people, and he wants to protect people from that pain.
And it's also why he can't murder the Joker (or anyone else). Because he understands how truly awful it is to murder someone. How much it hurts those left behind. (He doesn't use guns because of the trauma of losing his parents to a gun more than any sense of empathy tbh).
*Side note, a good Batman story shouldn't have torture. In general a sign of a bad Batman story is how much *harm* Batman is doing in pursuit of his goals and how cruel he is about it. As this essay will show, Batman is driven by radical empathy.
**Side Note, any Batman story which has him use obvious lethal force (hitting people with his car) or guns is *also* a bad adaptation.
*** It's also why he sends his rogues to Arkham. Bruce's empathy drives him even more than his trauma does, and so he is always hoping his villains will get better.
2 notes
gothhabiba · 4 years
Text
People--including people who purport to be critical about these things--just don't realise how deeply & inextricably Orientalist narratives, imagery, and art are tied to colonial violence, both historically and today (in fact assuming ignorance is the most generous interpretation possible).
Images such as Ingres' "Grande Odalisque" furthered curiosity about the "East" and demand for "Eastern" goods and narratives even as they asserted Europeans' right to authoritatively depict, categorise, judge, and rule the "Orient." Thus--& a lot of people writing about Orientalism get this wrong--it's not just how Orientalists depict the "Orient" (as lush, sensual, exotic, lazy, libidinous, hedonistic, and potentially cruel & depraved), but also the authority they project in depicting the "Orient" at all. This authority is what leads into an "academic Orientalism" (Said, Orientalism) that categorises and taxonomises "races" into "natural" categories based on their bodies and the moral qualities that supposedly correspond with or arise from them ("physiological-moral classification"). In other words, the problem isn't just what Orientalists decided that the "Orient" was--it's that they decided that they got to decide, that they had sole right and authority to decide, what the "Orient" was (after artificially dividing "Europe" from the "Orient" in the first place).
Orientalist paintings (along with travel narratives and fiction, of which the "harem" was a popular genre, & which get more disturbing than I want to get into) played a major part, not only in portraying the Orient as libidinal and depraved, but also in cementing that aforementioned authority. These narratives provided justification for the material domination that enabled and inspired their production. They spread a type of curiosity about the "East" that led to more travellers producing more narratives, more soldiers, a greater sense of ownership. Untangling the relationship between cultural & socio-economic or political hegemony is of course beyond the scope of this post, but the point is that Orientalist art is deeply entangled in histories of domination, even besides being racist in what it depicts (as well as misogynistic--again, narrative depictions of women in harems get extremely gritty, & I can't believe that they didn't have real consequences in terms of sexual violence).
In summation: if you see a painting of a sexy naked lady lounging around in what are clearly North African or “Middle Eastern” environs with associated props, please think critically for approximately three seconds before sharing it. Maybe even as many as seven seconds.
Readings:
Edward Said, Orientalism (above quotes are from pp. 118-9 of this edition)
--Culture and Imperialism
Isra Ali, "The harem fantasy in nineteenth-century Orientalist paintings" discusses the "use of depictions of gender and sexuality to undergird the political project of colonialism" (message me with your email for a pdf)
She also cites Deepa Kumar's Islamophobia and the Politics of Empire in talking about how depictions of Islam as uniquely barbaric in its treatment of women (furthered by harem painting and literature) are "very much in play today" in the context of the so-called War on Terror
Aimillia Ramli, "Contemporary criticism on the representation of female travellers of the Ottoman harem in the 19th century: a review" discusses how women's harem narratives may take part in "the hegemonic power-relation that has characterized the relationship between the European and the racialized Other"
310 notes
arcticdementor · 3 years
Link
The hilarious headline in the Daily Beast yesterday read like a cross of Clickhole and Izvestia circa 1937: “Is Glenn Greenwald the New Master of Right-Wing Media? FROM HIS MOUTH TO FOX’S EARS?”
The story, fed to poor Beast media writer Lloyd Grove by certain unnamed embittered personages at the Intercept, is that their former star writer Greenwald appears on, and helps provide content for — gasp! — right-wing media! It’s nearly the exclusive point of the article. Greenwald goes on TV with… those people! The Beast’s furious journalisming includes a “spot check” of the number of Fox items inspired by Greenwald articles (“dozens”!) and multiple passages comparing Greenwald to Donald Trump, the ultimate insult in #Resistance world. This one made me laugh out loud:
In a self-perpetuating feedback loop that runs from Twitter to Fox News and back again, Greenwald has managed, like Trump before him, to orchestrate his very own news cycles.
This, folks, is from the Daily Beast, a publication that has spent much of the last five years huffing horseshit into headlines, from Bountygate to Bernie’s Mittens to classics like SNL: Alec Baldwin's Trump Admits 'I Don't Care About America'. The best example was its “investigation” revealing that three of Tulsi Gabbard’s 75,000 individual donors — the late Princeton professor Stephen Cohen, peace activist Sharon Tennison, and a person called “Goofy Grapes” who may or may not have worked for Russia Today host Lee Camp — were, in their estimation, Putin “apologists.”
For years now, this has been the go-to conversation-ender for prestige media pundits and Twitter trolls alike, directed at any progressive critic of the political mainstream: you’re a Republican! A MAGA-sympathizer! Or (lately), an “insurrectionist”! The Beast in its Greenwald piece used the most common of the Twitter epithets: “Trump-defender.” Treachery and secret devotion to right-wing politics are also the default explanation for the growing list of progressives making their way onto Fox of late, from Greenwald to Kyle Kulinski to Aaron Mate to Jimmy Dore to Cornel West.
The truth is, Trump conservatives and ACLU-raised liberals like myself, Greenwald, and millions of others do have real common cause, against an epistemic revolution taking hold in America’s political and media elite. The traditional liberal approach to the search for truth, which stresses skepticism and free-flowing debate, is giving way to a reactionary movement that Plato himself would have loved, one that believes knowledge is too dangerous for the rabble and must be tightly regulated by a priesthood of “experts.” It’s anti-democratic, un-American, and naturally unites the residents of even the most extreme opposite ends of our national political spectrum.
Follow the logic. Isikoff, who himself denounced the Steele dossier, and said in the exchange he essentially agreed with Meier’s conclusions, went on to wonder aloud how right a thing could be, if it’s being embraced by The Federalist and Tucker Carlson. Never mind the more salient point, which is that Meier was “ignored by other media” because that’s how #Resistance media deals with unpleasant truths: it blacks them out, forcing reporters to spread the news on channels like Fox, which in turn triggers instant accusations of unreliability and collaborationism.
It’s a Catch-22. Isikoff’s implication is a journalist can’t make an impact if the only outlet picking up his or her work is The Federalist, but “reputable” outlets won’t touch news (and sometimes will even call for its suppression) if it questions prevailing notions of Conventional Wisdom.
These tactics have worked traditionally because for people like Meier, or myself, or even Greenwald, who grew up in the blue-leaning media ecosystem, there’s nothing more ominous professionally than being accused of aiding the cause of Trump or the right-wing. It not only implies intellectual unseriousness, but racism, sexism, reactionary meanness, greed, simple wrongness, and a long list of other hideous/evil characteristics that could render a person unemployable in the regular press. The label of “Trump-defender” isn’t easily removed, so most media people will go far out of their way to avoid even accidentally incurring it.
The consistent pattern with the Trump-era press, which also happens to be the subject of so many of those Greenwald stories the Beast and the Intercept employees are complaining about, is that information that is true but doesn’t cut the right way politically is now routinely either non-reported or actively misreported.
Whether it’s Hunter Biden’s laptop or the Brian Sicknick affair or infamous fictions like the “find the fraud” story, the public increasingly now isn’t getting the right information from the bulk of the commercial press corps. That doesn’t just hurt Trump and conservatives, it misinforms the whole public. As Thomas Frank just pointed out in The Guardian, the brand of politicized reporting that informed the lab-leak fiasco risks obliterating the public’s faith in a whole range of institutions, a disaster that would not be borne by conservatives alone.
But this is only a minor point, compared to the more immediate reason the constant accusations of treachery and Trumpism aimed at dissenters should be ignored.
From the embrace of oligarchical censorship to the aggressive hawking of “noble lies” like Russiagate to the constant humbugging of Enlightenment values like due process to the nonstop scolding of peasants unschooled in the latest academic jargon, the political style of the modern Democratic mainstream isn’t just elitist and authoritarian, it’s almost laughably off-putting. In one moment it’s cheering for a Domestic War on Terror and in the next, declaring war on a Jeopardy contestant flashing the “A-OK” sign. It’s Dick Cheney meets Robin DiAngelo, maybe the most loathsome conceivable admixture. Who could be surprised a politically diverse group finds it obnoxious?
During the Trump years conventional wisdom didn’t just take aim at Trumpism. The Beltway smart set used the election of Trump to make profound arguments against traditional tenets of democracy, as well as “populism,” (which increasingly became synonymous with “the unsanctioned exercise of political power by the unqualified”), and various liberal traditions undergirding the American experiment. Endless permutations of the same argument were made over and over. Any country in which a Trump could be elected had a “too much democracy” problem, the “marketplace of ideas” must be a flawed model if it leads to people choosing Trump, the “presumption of innocence” was never meant to apply to the likes of Trump, and so on.
By last summer, after the patriotic mania of Russiagate receded, the newest moral panic that the kente-cloth-clad Schumers and Pelosis were suddenly selling, in solidarity with famed progressive change agents like Bank of America, PayPal, Apple, ComCast, and Alphabet, was that any nation capable of electing Trump must always have been a historically unredeemable white supremacist construct, the America of the 1619 Project. The original propaganda line was that “half” of Trump supporters were deplorable racists, then it was all of them, and then, four years in, the whole country and all its traditions were deemed deplorable.
Now, when the statues of Washington, Jefferson, Lincoln and Roosevelt came down, there was a new target, separate and apart from Trump. The whole history of American liberalism was indicted as well, denounced as an ineffectual trick of the oppressor, accomplishing nothing but giving legitimacy to racial despotism.
The American liberalism I knew growing up was inclusive, humble, and democratic. It valued the free exchange of ideas among other things because a central part of the liberal’s identity was skepticism and doubt, most of all about your own correctitude. Truth was not a fixed thing that someone owned, it was at best a fleeting consensus, and in our country everyone, down to the last kook, at least theoretically got a say. We celebrated the fact that in criminal courts, we literally voted to decide the truth of things.
This new elitist politics of the #Resistance era (I won’t ennoble it by calling it liberalism) has an opposite view. Truth, they believe, is properly guarded by “experts” and “authorities” or (as Jon Karl put it) “serious people,” who alone can be trusted to decide such matters as whether or not the Hunter Biden laptop story can be shown to the public. A huge part of the frustration that the general public feels is this sense of being dictated to by an inaccessible priesthood, whether on censorship matters or on the seemingly daily instructions in the ear-smashing new vernacular of the revealed religion, from “Latinx” to “birthing persons.”
In the tone of these discussions is a constant subtext that it’s not necessary to ask the opinions of ordinary people on certain matters. As Plato put it, philosophy is “not for the multitude.” The plebes don’t get a say on speech, their views don’t need to be represented in news coverage, and as for their political choices, they’re still free to vote — provided their favorite politicians are removed from the Internet, their conspiratorial discussions are banned (ours are okay), and they’re preferably all placed under the benevolent mass surveillance of “experts” and “professionals.”
Add the total absence of a sense of humor and the inability of “moral clarity” politics to co-exist with any form of disagreement, and there’s a reason why traditional liberals are suddenly finding it easier to talk with old conservative rivals on Fox than the new authoritarian Snob-Lords at CNN, MSNBC, the Daily Beast or The Intercept. For all their other flaws, Fox types don’t fall to pieces and write group letters about their intolerable suffering and “trauma” if forced to share a room with someone with different political views. They’re also not terrified to speak their minds, which used to be a virtue of the American left (no more).
From the moment Donald Trump was elected, popular media began denouncing a broad cast of characters deemed responsible. Nativists, misogynists and racists were first in line, but from there they started adding new classes of offender: Greens, Bernie Bros, “both-sidesers,” Russia-denialists, Intellectual dark-webbers, class-not-racers, anti-New-Normalers, the “Substackerati,” and countless others, casting every new group out with the moronic admonition that they’re all really servants of the “far right” and “grifters” (all income earned in service of non-#Resistance politics is “grifting”). By now conventional wisdom has denounced everyone but its own little slice of aristocratic purity as the “far right.”
3 notes
Text
Progress in the Time of Coronavirus
Two nights ago former Vice President Joe Biden said that the people of the United States wanted “results not a revolution” in response to Senator Bernie Sanders’s call for overhauling the US-American economy and healthcare systems as a response to the Coronavirus pandemic. In stating this consultant-written soundbite, Joe Biden revealed his political inadequacy (vulnerability) for the times we live in, and telegraphed his weakness in the coming general election. 
We live in a time of extremes, and the COVID-19 crisis is no exception. I cannot remember another moment in my lifetime where this kind of slow-moving disaster paralyzed the country. Although I was just a child when 9/11 happened, I remember the acuity of the crisis, unlike this chronic build up of anxiety and doubt we are currently experiencing. 
I want to respond to the Vice President’s myopia with a call to not only meet the demands of the crisis but also to see beyond it and begin to address the structural violence that undergirds the failures of our healthcare and economic systems to adequately prepare us for COVID-19. As Bernie has said, repeatedly, now is a time for solidarity, and to pay careful attention to the needs of those most vulnerable in our society. We are only as safe as the least among us. 
To that end, I propose that in any action we take to remedy this crisis we center on the needs of marginalized populations, with special care and attention given to rural communities like those in Indian Country, the Black Belt, and Appalachia; communities that remained economically depressed and marginalized prior to the arrival of COVID-19 and the subsequent disruptions in the modern global economy. If we center the needs and challenges of these communities during the crisis, we will be better able to assure that no issue is forgotten or cast aside. 
In addition to matters of quarantine and other issues that arise on an hourly basis, we need to expand welfare benefits universally with additional actions such as utility payment moratoriums, rent and mortgage suspensions, and student debt forgiveness so that those most burdened by the crisis are freed from daily living expenses. 
As the People’s Policy Project has recommended, I would like to see states and the federal government purchase cheap stocks to create a sovereign wealth fund that pays US-Americans dividends in the future. 
We need to pass a comprehensive single-payer health system (Medicare-for-All) that guarantees healthcare for diagnosis and treatment to all residents of this country, and if that fails, nationalize private hospitals and medical practices to fight the disease at no upfront cost to patients. 
We should direct the armed forces to begin setting up temporary hospitals and producing ICU equipment (in particular ventilators) in quantity for states and municipalities to purchase at cost for overwhelmed hospitals and health systems. 
We should explore requiring corporations to have a three-month ‘rainy day’ fund for future crises. Why should a family be expected to have three months of expenses saved while airlines are allowed to spend down cash reserves on stock buybacks, which are a blatant manipulation of the market? Any bailout for carbon-intensive industries should require decarbonization and, regardless of industry, require the placement of workers on the company’s board in the codetermination model laid out by Senator Warren. 
We need a Green New Deal stimulus package focused on decarbonization and public transit with limited to no funding for automobile infrastructure and gasoline consumption. New subway and high speed rail lines MUST be central to any transportation and infrastructure package passed by congress. 
We should expand provider training programs and subsidize the cost of medical, nursing, and pharmacy education, bringing the price of public professional schools for these degrees down to the level of our international peers, if not eliminating it entirely. 
Finally I would like to see the US Congress (or potentially large states such as California, New York, or Texas) create a public lab to mass produce cheap generic pharmaceutical drugs, and use revenue to subsidize clinical education for providers. 
There are doubtless other ideas that should be included here that I’ve forgotten. However, the point is that we must not let this opportunity for “big structural change” go to waste. Indeed, it is our moral obligation to address the maladies that allowed the situation to get this dire if we are to avoid this kind of pandemic again in the future. 
11 notes
ruminativerabbi · 7 years
Text
Choosing Life
One of the most often-repeated talmudic aphorisms notes that “all Israel is responsible for each other,” a noble sentiment, surely, but one that most of us sort into a series of larger and smaller concentric circles of obligation: we feel most of all responsible for our closest family members, then for more distant relatives, then for neighbors, then for the other members of the faith- or ethnic group with which we identify the most strongly, then for the other citizens of our country, and then for all humankind. Some of us go one step further too and include the inanimate world as well in the scheme, promoting the conviction that, as part of collective humanity, we have a kind of corporate responsibility not solely for the other residents of our planet, but for the planet itself. Eventually, in some distant age, the circle will probably widen to an even greater extent to allow for feelings of responsibility directed towards the inhabitants of the other populated planets with whom we will eventually learn that we share our galaxy.
None of the above will strike anyone as at all controversial. And even if some specific people might well order their circles differently—feeling more responsible for neighbors than for distant relatives, for example—the concept itself that responsibility for people other than ourselves inheres in our humanness hardly sounds like an even remotely debatable proposition.  But saying wherein that responsibility lies exactly—and how that sense of responsibility should manifest itself in law—is another question entirely.
The system of common law that undergirds American jurisprudence holds that one commits a crime by doing some specific illegal thing, not by failing to do something...and this principle applies even when the thing not done involves coming to the aid of someone in distress. There are, however, several exceptions to the rule. If you yourself are personally responsible for the other party being in distress, then you are in most cases deemed responsible to come to that person’s aid. Parents have an obligation to rescue their children from peril, as do all who act in loco parentis (like the administration of a child’s school or the staff in a child’s summer camp). Spouses have an obligation to rescue each other from peril. Employers have an obligation to come to the aid of workers imperiled in the workplace. In some places, property owners have a similar obligation of responsibility towards people invited onto that property; in others, that obligation extends even to unwanted visitors like trespassers. Otherwise, there is no specific legal obligation to come to the assistance of others. And this is more or less the way American law has evolved as well: other than with respect to the specific exceptions mentioned above, people are generally deemed to have no legal obligation to assist others in distress, much less to risk their own wellbeing to do so. Indeed, although all fifty states, plus the District of Columbia, have some version of so-called Good Samaritan laws on the books that protect individuals who come to the aid of others from subsequent prosecution in the event of mishap, Vermont is the sole state to have a law on the books that actually requires citizens to step forward to help when they see others in peril or distress.
These last several weeks, I have been following with great interest the case in Taunton, Massachusetts, that culminated in the involuntary-manslaughter conviction of Michelle Carter, 20, in a case that involved a suicide at which she was not present and which her lawyers argued she therefore could not have prevented, but for which the court deemed her in some sense responsible nonetheless.
The story is a sad one from every angle. It revolves around two troubled teenagers who met while vacationing in Florida and subsequently began an intense twenty-first-century-style relationship that was primarily conducted digitally through emails and text messages. (After returning home to Massachusetts, the two only met in person on a few rare occasions.) But digital though their relationship may primarily have been, it was clearly intense. She was suffering from an eating disorder; he suffered from suicidal tendencies and had actually attempted suicide before, which fact he disclosed to her. At first, she was supportive and encouraged him to seek counseling and not to harm himself. But at a certain point, she became convinced—and we are talking about a teenager here not a mature adult, let alone a trained therapist—she became convinced that suicide was the right solution to her friend Conrad’s problems. And so she encouraged him, as any friend would, to do the right thing. “You can’t think about it,” she texted him on the day of his death. “You have to do it. You said you were gonna do it. Like I don’t get why you aren’t.”
Convinced that she was right, Conrad drove his truck to the secluded end of a K-Mart shopping plaza and prepared to asphyxiate himself with the fumes from his truck’s exhaust. And then we get to the crucial moment, at least in the judge’s opinion. Suddenly afraid and apparently wishing not to die, or at least willing to rethink his commitment to dying, Conrad got out of the truck and phoned his friend Michelle. Her response was as clear as it was definitive: “Get back in,” she told him. And he did, dying soon after that.
Was she responsible for his death? You could say that she did not force his hand in any meaningful way, that she merely told him to do something that he could have then chosen not to do. Or you could say that she had his life in her hands at that specific moment and that she chose to do nothing—not to phone his parents, for example, or 911—and thus was at least in some sense complicit in his death. “The time is right and you’re ready…just do it, babe,” she had texted him earlier in the day. That, in the judge’s opinion, was just talk. But when Conrad’s life was hanging in the balance—when he was standing outside the cab of his truck and talking on the phone as the cab filled with poison gas—and she could have exerted herself to save his life but chose instead to encourage him to get back in—that, the judge ruled, constituted involuntary manslaughter in the Commonwealth of Massachusetts. And, as a result, Michelle Carter is facing up to twenty years in prison.
Involuntary manslaughter, defined as behavior manifesting extreme indifference to human life that leads to the death of another, seemed to the judge to suit the details of this case. But what if Michelle truly believed that Conrad would be better off dead, that the dead rest in peace or reside in a state of ongoing bliss in heaven? What if she felt it was his right to die on his own terms and when he wished? What if she felt that he was right in his estimation that he would be better off dead than alive? Should she be held responsible for his death if she truly believed any of the above? As someone who has spoken in public countless times about the dead resting in peace and their souls being resident in paradise beneath the protective wings of God’s enduring presence, there is something deeply unsettling for me personally even to ask that question aloud! Surely, I could answer, the metaphoric lyricism we bring to our efforts to deprive death of its sting isn’t meant to be taken literally, and certainly not to the point at which the taking of an unhappy person’s life feels like a justifiable decision! But what if someone did take it literally and then chose not to act because of it—should that be considered criminal behavior?
I am neither a lawyer nor a legal scholar, so I won’t offer an opinion about the verdict in the Carter case per se, other than to note that it is controversial and may ultimately be reversed. But the burning question that smolders at the core of the matter—what it ultimately means to be responsible for another—is not one that anyone who claims to be a moral person can be at peace leaving unanswered. The sense of the interconnectedness of all living things that we reference so blithely in daily discourse can serve as the platform on which all who would enter the discussion may stand. And, surely, the responsibility towards others that develops naturally from our sense of society as a network of closer and more distant relationships is undeniably real. Still, it feels oddly difficult—both morally and apparently legally—to say precisely what it means for one individual to bear responsibility for another.
The most-often repeated commandment in the Torah requires the faithful to be kind to the stranger, to the “other,” to the person who is not yourself…which is all people. Theologically, being solicitous of the wellbeing of others is a way of acknowledging the image of God stamped on all humankind. Whether criminalizing the willful decision to look away from that divine image is a good idea, or a legally sound one, is a decision for jurists, not rabbis. But the notion that all behavior that shows disrespect, disregard, or contempt for others—and thus denies the principle that all human life is of inestimable value regardless of any individual’s circumstances—is inconsonant with the ethical values that should undergird society is something we all can and should affirm. When the Torah commands that the faithful Israelite “choose life” over death, it is specifically commanding that the faithful ever be ready to step into the breach to save a life in peril and thus to affirm our common createdness in God and the responsibility towards each other that derives directly from that belief.
Quote
Spadefoot toad tadpoles eat other tadpoles. Some snail mothers lay two sets of eggs—one intended to hatch and the other intended to be eaten by those who hatch. (Schutt calls these “kids’ meals.”) Cichlid fish that practice mouthbrooding, in which the parents protect their eggs and babies from predators by carrying them around inside their mouths, sometimes get hungry and swallow their own children. Whoops! In what Schutt calls “an extreme act of parental care,” black lace-weaver spider moms call their hungry babies over and allow their children to eat them alive. But, as Schutt points out, we are discriminating about what we call cannibalism. Historical claims about savages eating one another are sometimes wildly hyperbolic, or lacking in proof, Schutt finds. Accusations are often undergirded by racism and opportunism. When Spain was exploring and exploiting the Caribbean during the 1500s, a 1510 papal decree that Christians were morally justified in punishing cannibals meant that on “islands where no cannibalism had been reported previously, man-eating was suddenly determined to be a popular practice.” Yet Renaissance-era Europeans who would have been disgusted by reports of people-eating rituals in faraway places nonetheless practiced what Schutt calls medicinal cannibalism. “Upper-class types and even members of the British Royalty ‘applied, drank or wore’ concoctions prepared from human body parts,” Schutt writes. And blood was consumed to treat epilepsy. So what does cannibalism look like in a culture that doesn’t attach as much stigma to it? Like many other peoples, the Chinese practiced survival cannibalism during wars and famines; an imperial edict in 205 B.C. even made it permissible for “starving Chinese” to exchange “one another’s children, so that they could be consumed by non-relatives.” But, according to historical sources cited by Schutt, the Chinese also practiced “learned cannibalism.” In Chinese books written during Europe’s Middle Ages, human flesh was occasionally cited as an exotic delicacy. In times of great hunger or when a relative was sick, children would sometimes cut off their flesh and prepare it in a soup for their elders. One researcher found “766 documented cases of filial piety” spanning more than 2,000 years. “The most commonly consumed body part was the thigh, followed by the upper arm”; the eyeball was banned by edict in 1261.
“Cannibalism” from Slate
thejesusreport · 7 years
Text
THE END TIMES ARE HERE: TRUMP, BREXIT AND ISIS AND WHAT THEY MEAN FOR THE FUTURE
December 1, 2016 (RE-BLOG from Avenger-Equalizer blog) 
THE END TIMES ARE HERE: TRUMP, BREXIT AND ISIS AND WHAT THEY MEAN FOR THE FUTURE 
 BY STEPHEN M. THERIAULT 
 “Let no man deceive you by any means: for that day shall not come, except there come a falling away first, and that man of sin be revealed, the son of perdition; who opposeth and exalteth himself above all that is called God, or that is worshipped; so that he as God sitteth in the temple of God, shewing himself that he is God.” 2 Thessalonians 2:3-4 (emphasis added) 
Those who are students of the Book of Revelation will appreciate the timeliness of my message. For at no time in recorded history has a more sinister man become leader of the free world! As I have previously said, I strongly assert that Donald J. Trump is the anti-Christ and the “son of perdition” who has been revealed in this extraordinary year of 2016. Trump, a fittingly charismatic, if morally bankrupt, leader, most clearly opposes God’s commands given through Jesus Christ and has exalted himself above God, morality and righteousness. But the above passage also forewarns us that before the “Beast” or anti-Christ comes into the world and announces himself, there will be a falling away of the faithful, which we can see all over the western world, with churches closing and losing members and the vast majority of our youth claiming they “don’t believe in organized religion.” In counseling and talking with young people, I heard that phrase time and time again, yet I can’t criticize or judge, as Jesus taught us that formal churches tend to be greedy and self-serving, pastors looking mostly for “filthy lucre” and propounding the “Gospel of abundance,” claiming that if you pray and believe and give dollars to the church you will not only be “saved” but will financially and materially prosper. The abundance fallacy is pulled from one lone passage in the New Testament, but Jesus was not a believer in the power of material wealth and the efficacy of the love of, and hoarding of, money. But that is another essay, and I want to get back to the coming of the End Times, which is ushered in by the faithlessness of this generation and by the coming of the anti-Christ and his cohort (the beast and false prophet of Revelation).
So now we have step one, the falling away of the faithful, which is undeniably happening as we have lost most of our Judeo-Christian and Biblical principles undergirding our culture and legal system. And we have step two, the revealing of the son of perdition, Mr. Trump. Step three is the key, the coming of Jesus Christ, the King of kings, and Lord of lords. The next thing is the most terrifying war we’ve ever encountered, the Battle of Armageddon. The good thing about this war is that Jesus wins and the Beast and False Prophet are cast down to the Lake of Fire and Brimstone (hell) where Satan has already been cast. I know most of you “5% Christians” are skeptical that this really will happen, but who would have predicted that Donald Trump would be President? But it’s happening and the war will happen and those written in the book of life (the True Church) will be saved along with a few faithful men (apostles, prophets and martyrs) and 144,000 male virgins who will act as eunuchs, serving the Court of Jesus Christ and His Bride the True Church. This is all in the Book of Revelation. I know this is hard to believe, but it’s definitely coming and we better be ready for the end as we never know when Jesus will appear on His white horse, commanding all the legions of angels and those few men who are on His side in the battle. Those whose names are not written in the Book of Life and who have worshipped the beast or Satan, or have the mark of the evil one on their bodies, will be killed and cast down to the Lake of Fire with the anti-Christ and “false prophet” (who represents the false male church which gives the first Beast (or anti-Christ) power through his charisma). Too surreal to be true? 9/11 ushered in a period of tribulation and fear the world had not known since the plagues of the Middle and Dark Ages. So who would have predicted the World Trade Center towers would be totally destroyed in one hour? (me!)
Revelation speaks of this and I wrote about it and predicted it in my book, The Practical Guide to Real Christianity, in late 2000/early 2001. In this article, I have purposely not quoted the Bible extensively in the interest of brevity and as an incentive for readers to pick up the Source of all inspiration, the Holy Scriptures, and take a look for themselves. The impending terrorist threat from “Islamic Extremism” in the form of “ISIS” also affirms that the time is near for the final battle. I proffer that radical jihad and the European meltdown which is “Brexit” are further examples of the world finally throwing off stability and sane governance and starting to look to extremism, scapegoating and populist demagogues to fulfill their lust for radical change and a new tolerance and even love for hatred, violence, racism, extremism, xenophobia, Islamophobia and other forms of narrow-minded hate and bigotry. Our masses are deluded enough to elect a raging racist, bigoted power monger in the U.S. and in Europe cast away a half-century of unity and cooperation, while claiming to fear a formless radical element which can’t win a war but can inflict much terror and bloodshed. ISIS itself is not the end of the world, but Trump’s response to it might just be! And European chaos over Brexit, refugees and money troubles keeps it from being a sane check on the unbridled spiteful, fear-mongering lunacy of Donald Trump. Armageddon is claimed to take place in the Middle East, and that makes perfect sense. But we need the Son of Man on the White Horse to take charge, defeat the anti-Christ and the evil extremists, and rule with a rod of iron, a benign dictatorship of Biblical proportions, covering the whole earth. This is what Revelation predicts and I, for one, believe in the inerrant Scriptures. We are in the end times and the coming of Christ and the Final Battle looms over us like an impending dark storm!
For those men who have been judged unworthy, I offer you mercy but not salvation. Women and girls of the True Church, the time is coming for justice and your reward. Men have hated you, hated your softness and femininity, and they treated you horribly, causing your hardening and cynicism, and they are judged for it, as are those women who have conspired against God. (Women will not get the Lake of Fire, which was prepared for the devil and his henchmen.) Don’t be sad for men’s destruction as this excising of the male cancer is necessary to provide you with an unpolluted Kingdom of Heaven. The honeymoon awaits. The war has just begun. “The wisdom of this world is foolishness with God. For it is written, He taketh the wise in their own craftiness.” 1 Corinthians 3:19 “For what shall it profit a man, if he shall gain the whole world, and lose his own soul?” Mark 8:36 “Be not deceived; God is not mocked: for whatsoever a man soweth, that shall he also reap. For he that soweth to his flesh shall of the flesh reap corruption; but he that soweth to the Spirit shall of the Spirit reap life everlasting.” Galatians 6:8
STEPHEN THERIAULT IS AUTHOR OF THE PRACTICAL GUIDE TO REAL CHRISTIANITY AND IS ORGANIZER/FOUNDER OF INTERNATIONAL CITIZENS AGAINST CORRUPTION AND OVERDEVELOPMENT (ICACO), AMERICAN CITIZENS AGAINST CORRUPTION (ACAC) AND AMERICANS AGAINST OVERDEVELOPMENT (AAO), AMONG OTHERS, WHICH CAN BE FOUND ON FACEBOOK.COM. HIS WEBSITE AND BLOG: WWW.THEAVENGER.US. THE “AVENGER/EQUALIZER BLOG” IS ALSO ON FACEBOOK (AVENGER/EQUALIZER BLOG) AND TUMBLR (AVENGER-EQUALIZER BLOG). GO TO: WWW.TUMBLR.COM/BLOG/AVENGER-EQUALIZER
bluewatsons · 5 years
Text
Martyn Pickersgill, The social life of the brain: Neuroscience in society, 61 Curr Sociol 322 (2013)
Abstract
Neuroscience is viewed by a range of actors and institutions as a powerful means of creating new knowledge about our selves and societies. This article documents the shifts in expertise and identities potentially being propelled by neuroscientific research. It details the framing and effects of neuroscience within several social domains, including education and mental health, discussing some of the intellectual and professional projects it has animated therein (such as neuroethics). The analysis attends to the cultural logics by which the brain is sometimes made salient in society; simultaneously, it points towards some of the parameters of the territory within which the social life of the brain plays out. Instances of societal resistance and agnosticism are discussed, which may render problematic sociological research on neuroscience in society that assumes the universal import of neuroscientific knowledge (as either an object of celebration or critique). This article concludes with reflections on how sociotechnical novelty is produced and ascribed, and the implications of this.
Introduction
Promissory discourse that takes neuroscience as its focus frames the study of the brain as a powerful means of providing new ways for understanding our selves and societies. Neuroscientific research, including brain imaging studies, contributes to longstanding debates concerning free will, morality and madness. In so doing, it reignites the same kinds of questions about selfhood and its physical corollaries and determinants that fired the imaginations of nineteenth- and twentieth-century novelists and filmmakers, social commentators and policy actors. Increasingly, the terms and concepts of neuroscience are evident within domains and discourses that are only loosely connected to biomedicine; a consequence of this is that existing professional practices are sometimes challenged, and new social engagements and relations can be animated.
The reasons why neuroscientific research has operated in this way and had such effects on the production, circulation and deployment of knowledge are diverse. However, the historian Fernando Vidal (2009) has emphasized especially trends in cultural discourse regarding individualism, the import of identity politics, the salience of biomedical facts and the portability of (neuro)images in enabling and sustaining diverse interests in neuroscience. Sociological attention to the social life of the brain has thus been directed by the challenges (and, some believe, opportunities) neuroscience presents to the social sciences as a consequence of its causal tropes and presumed policy implications – and has been further engendered by the kinds of rapprochements with other forms of praxis that I have pointed to above (and will characterize in more detail below).
Reflecting in particular on the impact of neuroscience on mental health, social theorist Nikolas Rose has suggested that ‘we’ have now become ‘neurochemical selves’, understanding thought, feeling and behaviour as being mediated through the brain (Rose, 2007: 188). Similarly, for Thornton, ‘the language of the brain is a powerful vocabulary through which citizens today express their anxieties, articulate their hopes and dreams, and rationalize their disappointments’ (Thornton, 2011: 150). This is regarded as being, in part, a consequence of the ‘saturation of public discourse with biological and neurological ways of thinking’ (Thornton, 2011: 112). Indeed, Pitts-Taylor speculates as to whether seeing ‘ourselves in neuronal terms may be becoming a duty of biomedical citizenship’ (Pitts-Taylor, 2010: 649) and Holmer Nadesan (2002) suggests that brain science promotes new regimes of surveillance, bolsters the power of already privileged groups and extends governmentality. Such formulations, all explicitly indebted to the work of Michel Foucault, vividly capture some of the broad shifts in identity that the meta-narratives of neuroscience seem to have entrained. However, as Thomson (2005) argued for Foucauldian analyses of psychology in society, there is a risk that these conceptualizations overstate the transformations in subjectivity that have occurred with and through neuroscience (Bröer and Heerings, in press; Choudhury et al., 2012; Pickersgill et al., 2011).
In this article I juxtapose reflections on neuroscience – developed through and informed by research continuously undertaken since 2005 under the aegis of several projects (supported by the AHRC, ESRC, Newby Trust and Wellcome Trust) on the sociology of neuroscience and mental health1 – with accounts and analyses in other social science literature, in order to document the social life of the brain. Through the article, I undertake an examination of the power of neuroscience, and of the shifting understandings of expertise and identity potentially being propelled by neuroscientific research.2 The account presented aims to be sensitive to the sociotechnical innovations that have come with (and supported) the increasing attention of researchers and wider publics to the neurological, but is at the same time mindful of the limits within which these take place. This analysis thus suggests that the very premise on which much social scientific analysis of the brain is undertaken (i.e. the pervasive biomedical and cultural import of neuroscience) needs to be more closely scrutinized, underscoring the importance of examining how novelty is produced (including the role of sociological work itself in producing the significance of ‘new’ objects of study). The article thus intersects with wider debates within (medical) sociology and science and technology studies (STS) on the role of the promissory and the co-construction of expectations and agendas between social science analysts and the actors and practices being analysed (Barben et al., 2008; Brosnan, 2011; Burchell, 2009; Calvert and Martin, 2009; Hedgecoe and Martin, 2008; Macnaghten et al., 2005; Molyneux-Hodgson and Meyer, 2009).
What is neuroscience?
Characterizing neuroscience is somewhat akin to attempting to define the social sciences; any attempt risks reducing a diverse set of discourses and practices into a ‘manageable’ rubric that perhaps effaces as much as it reveals, and presents as unified that which is highly heterogeneous. Nevertheless, we can advance a broad definition: i.e. that neuroscience is the scientific study of the nervous system, including the brain. It is at once a conglomeration of scholarly traditions (molecular biology, biochemistry, medical physics and so on) as well as a discipline in its own right (much like gender studies, or STS). Many neuroscientists are concerned with both the structure and function of neurological matter: what are the different bits of the brain, and what role do they play in both basic and more sophisticated physiological and cognitive processes? So-called ‘lesion studies’ have played an important role in neurological research (a tradition which now sometimes comes within the broad purview of ‘the neurosciences’), and today genetic and cellular approaches are employed alongside computational perspectives. Neurosurgery, pharmacological disruption of neural circuits and molecular techniques are all common features of contemporary neuroscience.
Investigators have taken as their focus matters as diverse as addiction, emotion, learning, memory, sensation, sleep and, of course, psychopathology. We can thus see that the scope of neuroscience is extremely wide-ranging, and so too is its ambition. The following extract, taken from the first page of the introductory chapter of one popular undergraduate neuroscience textbook, illustrates these aspirations:
It is human nature to be curious about how we see and hear; why some things feel good and others hurt; how we move; how we reason, learn, remember, and forget; the nature of anger and madness. These mysteries are starting to be unravelled by basic neuroscience research. (Bear et al., 2007: 4)
This short extract reveals three key points. First, neuroscience commonly takes as its subject the human itself; this contrasts with older pursuits associated with the pursuit of cerebral knowledge, such as craniometry, which were specifically underpinned by efforts to empirically validate differences between Europeans and other hominids (Carson, 1999). Second, in spite of this, the notion of humanity undergirding much neuroscientific research is informed by modernist ‘Western’ assumptions about the singularity of human nature. Last, human nature is understood as constituted through somatic structures and processes.
In recent years, neuroscientists have come to be concerned not solely with the neurological but with sociality. The subdiscipline of ‘social neuroscience’, for instance, examines how the brain mediates social interaction and cognition (Young, 2012). This field exemplifies the jurisdictional challenge some regard neuroscience as posing to the traditional social sciences (and the humanities) (e.g. Duster, 2006), and the ways in which it reconfigures the boundaries between taken-for-granted ontological distinctions regarding ‘biology’ and ‘society’.
Similar work articulating brain science and the study of social life has, however, also been undertaken by sociologists (Franks, 2010; TenHouten, 1997; for analysis, see Von Scheve, 2011). As Webster suggested for genomics, such scholarship means that we must confront, or at least contemplate, a situation wherein the ‘very object of social science analysis – the social – is no longer clearly defined’ (Webster, 2005: 234). Research at the interface between sociology and neuroscience continues longstanding attempts by the social sciences to emphasize the soma (Dingwall et al., 2003), as well as problematizing simplistic characterizations of these disciplines being, a priori, ‘opposed’ to one another. Further, the fact that rapprochements between brain and social science are deemed possible and workable reminds us of the extent to which neuroscientists themselves can incorporate quite elaborate ontological imaginaries within their experimental praxis (Pickersgill, 2009). Indeed, those working in the realm of ‘critical neuroscience’ seek explicitly to create a more ‘reflexive scientific practice’ in neuroscientific research through the incorporation of insights from the social sciences and humanities (Choudhury et al., 2009: 61).
Neuroscientists, then, are not all blithely deterministic, with dreams of reducing all of human experience to the circulation of cerebral blood and the alchemy of neurotransmitters. Yet, a research focus on neuroscience and the neurological does, in many cases, result in an assignment of ontogenic privilege to the brain in reflections on and assumptions regarding personhood. However, whilst such investigations are commonly understood as helping to enjoin new understandings of selves as brain, Fernando Vidal has cogently argued the inverse: that ‘the ideology of brainhood’, which he regards as ‘the quality or condition of being a brain’, has in fact ‘impelled neuroscientific investigation much more than it resulted from it’ (Vidal, 2009: 5). For Vidal, emerging ideas about brainhood can be traced back to the mid-eighteenth century. This insight – i.e. that claims for the transformational nature of neuroscience may themselves be problematic – provides an organizing framework for the subsequent analysis presented here.
Attending to images
In terms of the attention of sociologists, it is neuroimaging that has formed the dominant focus of empirical and conceptual work. Technologies like PET (positron emission tomography) and fMRI (functional magnetic resonance imaging) measure, respectively, blood flow or oxygen consumption, and the results are used to make inferences about brain activity. However, just as it would be inaccurate to consider survey research as the sole methodology within the social sciences, many neuroscientists do not undertake imaging research but rather make use of a wide range of tools and methods (such as those noted above). Further, whilst imaging techniques are widely used, critique remains regarding how they are and should be employed. As anthropologist Joseph Dumit has shown, a given neuroimaging technology builds ‘assumptions into its architecture and thus can appear to confirm them, while … reinforcing them’ (Dumit, 2004: 81). Neuroscientists themselves have likewise questioned the ‘puzzlingly high correlations’ seen in the results of imaging studies, and hence problematized the significance of these for casting light on the phenomena under study (Vul et al., 2009).
Imaging is perhaps also the most visible aspect of neuroscience within wider cultural discourse, and the most ubiquitous icon of neuroscientific power today appears to be the brain scan. Often compelling, these evocative representations of cerebral matter are ciphers over which a variety of professionals and publics have come to lay their own understandings of personhood. Frequently described as ‘pictures’ of the brain, though actually representations of complex statistical data, they are produced by technologies such as PET and MRI. Such techniques are commonly understood as granting unmediated and unproblematic access to what Anne Beaulieu (2000) calls the ‘space inside the skull’. Nonetheless, these scans are produced through the work of human actors existing within what is sometimes a tense social matrix that involves a range of investigators and research participants.
Structural, as well as micro-sociological, power relations mediate the relations between scientists and their subjects. In the UK, university and NHS ethics committees may place particular limitations on the actions of investigators that govern their contact with research participants, and, in the US, a ‘complicated landscape’ of human research protection has a powerful influence on the mechanisms of research (Kulynych, 2002: 346). These ‘regimes of normativity’ (Pickersgill, 2012a) not only shape the nature of relationality in science, but the emotions experienced by participants and investigators in the process. In particular, neuroscientists are under new pressure to scrutinize decisions regarding whether to disclose incidental findings from scans (e.g. the presence of tumours). In 2005, the US National Institutes of Health co-organized a conference with Stanford University, on the ‘Detection and Disclosure of Incidental Findings in Neuroimaging Research’; since then, a plethora of literature has been published in science and ethics journals on this matter, with little sense of universal consensus emerging as a consequence. Diverse forms of responsibilization are thus occurring within neuroscience; however, precisely who is responsible for what and why are not readily resolvable questions. Consequently, the legitimacy of those institutions and individuals exercising power over research processes and participants remains ambiguous. In the next section, I discuss further some of the normative dimensions of neuroscience (as perceived by sociologists and others). Through doing so, I aim to cast light on some of the sociotechnical work involved and the negotiation of ‘novelty’ essential to it.
Neuroscience and the normative
Science, technology and medicine are aggregates of actors, discourses and practices that exert profound influence on the functioning of societies and enactment of social relations. The diverse dimensions and forms of power structuring contemporary neuroscience, and the authority of neuroscience itself, are key sociological problematics: from the micro scale of scientists interacting with those who participate in their studies, to the macro, where we can see symbolic and economic investments in brain science in order to bolster already dominant military regimes (e.g. Board on Army Science and Technology, 2009).
The links between the armed forces and science have always been complex (Bud and Gummett, 1999), with military-funded research sometimes contributing both to enhancing the destructive potential of nations, and to the improvement of health and societal infrastructure internationally. The relationship between neuroscience and the military is a case study of this moral complexity; for instance, as Lichtman and Sanes (2006) have argued, the joint sponsoring of research into the nervous system by the British Army and the British Medical Research Council during the Second World War led to what has been regarded as pioneering work. As Moreno (2006) shows, neuroscience today commands the attention of defence agencies seeking to enhance soldiers’ capabilities (for instance, via pharmaceuticals or devices aimed at heightening cognition) and improve national security (e.g. through neurotechnological deception-detection) (see also Tennison and Moreno, 2012).
In the latter case, developments in neurologic means of lie detection are simultaneously sustained by ‘post-9/11 anxiety’ and contribute to the production of ‘models of the brain that reinforce social notions of deception, truth, and deviance’ (Littlefield, 2009: 365). Equally, expectations of pharmaceutical intervention are powered by longstanding hopes and research trajectories alongside fictional tropes which legitimate attempts to actualize them (Wolf-Meyer, 2009). As such, the sponsorship of and focus on neuroscientific research by the military can be understood as resulting from the novel import the science is taken to have, which is itself reliant on pre-existing ideas about possible futures articulated through a range of media. At the same time, the novelty of neuroscience is re-embedded within cultural discourse as a consequence of symbolic and material investments.
The entanglements between neuroscience and the state have been taken by some commentators to be important – not least because these may be regarded as contradicting public-facing statements by funders about the explicitly health-related focus of government sponsorship of neuroscientific research. As a result of this moral claims-making, the relationships they revolve around are deserving of greater attention by sociologists. However, the military is not the only field of practice within which the normative aspects of neuroscience are constructed and debated. A new discipline of ‘neurolaw’ is today being forged, which both reflects and furthers expectations about the (increasing) relevance of neuroscience to law (Pickersgill, 2011a). Neuroimages appear to be gaining traction within the courts – especially in the US, where the role of neurologic knowledge in law is expected by many to increase (e.g. Gurley and Marcus, 2008; Jones and Shen, 2012).
This is hardly a new phenomenon: representations of brains have been used to substantiate psychiatric arguments for almost as long as the technologies of their production have been expedient to use (Kulynych, 1996). It is not unexpected, either, given the longstanding recourse of law to technoscience in order to render transparent that which seems irresolvably opaque (Jasanoff, 2006). Yet, despite historical precedent, the admissibility and reliability of neuroscientific evidence have been contested (see Patel et al., 2007). Although the question of whether ‘the use of neuroscientific evidence in the legal system will expand is an open and hotly debated question’ (Jones and Shen, 2012: 351), those individuals concerned with ‘neurolaw’ direct attention to the potential of neuroscience to impact on law – with some actively campaigning for the reshaping of legal systems and processes in light of neuroscientific insights. Such endeavours raise their own normative questions, given the ways in which anticipatory ethical discourse can play a key role in the development of sociotechnical futures, closing off certain avenues of technological development and governance regimes and legitimizing others (Hedgecoe and Martin, 2003). Perhaps with this in mind, ethicists have urged neuroscientists to ‘avoid inadvertently fueling [sic] misconceptions about the power and promise of neuroimaging’ (Kulynych, 2002: 355).
A related academic project that has been impelled by neuroscience is the discipline of ‘neuroethics’. The emergence of neuroethics has come at a time when ethicists have come to play an increasing role in the moral order of science, including attending to scientific and technological innovation and seeking to anticipate and analyse cultural and moral concerns before they concretize and individual or societal harm results. This reflexive governance of science (Braun et al., 2010) seeks to shape the future through practices of anticipation and consequent intervention (e.g. through professional codes of conduct, ethics training for scientists and recommendations by public bioethics bodies).
Anticipatory ethics is exemplified by neuroethics; this discipline engages actively in empirical and conceptual analyses that seek to identify and manage the emerging and potential consequences of neuroscience, answering the ‘host of new questions’ (Farah, 2012: 571) ethicists claim scientific research raises. At the same time, neuroethics aims to adjudicate existing concerns, such as enduring questions around addiction, brain death and informed consent, which are viewed as newly significant in light of the expanding scope of neuroscientific research and its perceived novelty (see, for example, Carter and Hall, 2011; Fuchs, 2006; Glannon, 2011; Illes et al., 2006). As other commentators have noted, this gives ‘a “hip” feel’ to neuroethics (Conrad and De Vries, 2011: 301); this has in turn enjoined critique of the discipline from both other bioethicists and social scientists, who have questioned the novel nature of the questions it asks and the degree to which it constructs neuroscience as innovative and transformative in order to provide support for the neuroethical enterprise (Conrad and De Vries, 2011; Brosnan, 2011; Martin et al., 2011; Rose, 2007).
In sum, then, we can see that the potential of and excitement associated with neuroscience animate new normative debates and areas of research (e.g. neuroethics) – reflecting a wider ‘preoccupation with interdisciplinarity’ (Barry et al., 2008: 21) within universities and beyond. At the same time, this assumed potentiality propels sociotechnical work between neuroscience and the professions (such as law), and ethical discussions about its consequences. By capitalizing on the perceived novel nature of neurologic knowledge, this work further substantiates and legitimates interdisciplinary praxis, endeavours to mutate expertise and claims to novelty. In what follows, I attend more closely to the implications for imagined, ascribed and experienced identities that are produced through these shifting configurations of authority and expertise.
Biomedical identities and the cultures of care
As we have seen, neuroscience and its products are increasingly playing roles in (potentially and partially) reconfiguring ecologies of expertise in a range of contexts. In particular, this is through the production of hybrid knowledge and the absorption of neuroscientific concepts and data into sites that are distant from the laboratory. Universities are investing in and attracting sponsorship for initiatives and centres that seek to articulate neuroscience and diverse disciplines (like economics, music, philosophy and policy) with one another (Littlefield and Johnson, 2012; Schüll and Zaloom, 2011). These serve as sites within which different perspectives and methodologies might be brought into alignment and made to ‘stick’ (Molyneux-Hodgson and Meyer, 2009) together. In the process, they perform what, following Barry et al. (2008), we might call ‘ontological work’ (Pickersgill, 2011a).
As well as law and ethics, the promissory potential of neuroscience and the performance of ontological work is also apparent in (biomedicalized) debates around children and education. Here, neurologic narratives can be located within popular media and self-help books, as well as within policy documents and scientific literature on development. Within these texts, young people are constructed as what Choudhury et al. (2012) characterize as ‘neurological adolescents’, while the teaching profession is figured as a set of work practices that have much to benefit from neuroscience. Neuroscientific research is regarded by some of its practitioners as an endeavour that ‘is revealing, and will continue to reveal, a great deal about the developing brain’ (Blakemore et al., 2011: 6), providing ‘insights to guide innovative policies and practices’ (Shonkoff, 2011: 983). Parents, fuelled by desires that their offspring ‘exceed norms of behaviour and intellect’ (Nadesan, 2002: 413), can be active consumers of such discourses that present the brains of babies, children and young adults as plastic objects to be cared for, but also as biological constants that are determinative of behaviour and subjectivity. This exemplifies what Thornton (2011) has pointed to as a wider role of brain images in the public sphere, which act ‘to substantiate biological determinism’ whilst simultaneously supporting ‘claims that emphasize individual agency and responsibility’ (Thornton, 2011: 4; see also Pitts-Taylor, 2010). Such narratives implicitly – and sometimes explicitly – enjoin parents and teachers to take on board the lessons of neuroscience, (re)configuring their ascribed and perhaps enacted expertise in the process.
In line with widespread interest in what some have called ‘neuroeducation’ (e.g. Battro et al., 2008), the international publisher Elsevier has introduced a new title as part of their popular line of ‘Trends in …’ journals. This periodical, Trends in Neuroscience and Education, seeks to ‘bridge the gap between our increasing basic cognitive and neuroscience understanding of learning and the application of this knowledge in educational settings’ (Elsevier, 2012). Evoking narratives of scientific revolution, it is argued that:
Just as 200 years ago, medicine was little more than a mixture of bits of knowledge, fads and plain quackery without a basic grounding in a scientific understanding of the body, and just as in the middle of the nineteenth century, Hermann von Helmholtz, Ernst Wilhelm von Brücke, Emil Du Bois-Reymond and a few others got together and drew up a scheme for what medicine should be (i.e., applied natural science), we believe that this can be taken as a model for what should happen in the field of education. In many countries, education is merely the field of ideology, even though we know that how children learn is not a question of left or right political orientation. (Elsevier, 2012)
Yet, whilst the novelty and potential import of neuroscience is widely acknowledged, professionals at the coalface of practice, and even associations examining the role of neuroscientific research in (for instance) education, problematize and deconstruct the ascribed salience of the brain (Pickersgill et al., 2011). As the Royal Society put it in their report on ‘Neuroscience: Implications for Education and Lifelong Learning’:
There is great public interest in neuroscience, yet accessible high quality information is scarce. We urge caution in the rush to apply so-called brain-based methods, many of which do not yet have a sound basis in science. There are inspiring developments in basic science although practical applications are still some way off. (Royal Society, 2011: v)
This ‘great public interest’ appears most evidently instantiated through the media, where brain scans are readily apparent – especially in regard to discourses of health and illness. Images of brains, both normal and pathological, abound within magazines, books, newspapers and on television, with iridescent colours used to highlight regions that are ‘active’ or ‘hot’ and, it is assumed, thereby linked in some fundamental way to the subject of the commentary (Dumit, 2004). Images perform important rhetorical work (Burri, 2012; Joyce, 2008); once produced, they can become central entities around which systems of authority and control orbit. In so doing, they at once contribute to the reification of some identities whilst reconfiguring others in sometimes surprising ways. The remarks and observations images are juxtaposed with in diverse media are often flagrantly promissory, extolling the power of the new brain sciences to reveal the secrets of the mind and cast fresh light on health, well-being and the very meaning of human nature. Nevertheless, these may be taken as entertaining, rather than profound, by audiences – or sometimes simply ignored (Pickersgill et al., 2011).
Particularly in regard to mental health, neuroscientific research and the images produced through it can also serve to bolster existing professional claims; psychiatric expertise is supported through the credibility attached to neuroscience in a range of domains. As suggested above, Rose perhaps overstates his case when he argues that whilst once ‘our desires, moods and discontents might previously have been mapped onto a psychological space, they are now mapped upon the body itself, or one particular organ of the body – the brain’ (Rose, 2007: 188). However, it seems clear that individuals diagnosed with mental disorder are often identified by a range of actors as being, in part, their brains, as a consequence of both the proliferation of neuroimages within popular media and the pharmacological treatments that are regularly prescribed to them to manage unruly subjectivities (Buchman et al., in press; Karp, 2006).
This use of psychopharmaceuticals is tightly bound up with the success of the third edition of the US Diagnostic and Statistical Manual of Mental Disorders (DSM-III, released 1980) (Healy, 2004). In particular, newly developed psychiatric drugs in the 1980s were trialled as specific treatments for the novel DSM-III diagnoses; following these, lavish symposia were sponsored through which the results of such studies could be disseminated. At the same time, DSM-III itself was heavily promoted internationally. The marketing efforts of both drug companies and the APA (American Psychiatric Association) were thus mutually beneficial: they powerfully reinforced both the legitimacy of the new diagnostic text and the categories it contained, and the efficacy of the compounds ostensibly developed to treat them. In recent years, the development of the Internet has enabled forms of direct-to-consumer advertising (Fox et al., 2006), further contributing to ways in which agents are subjectified as individuals living with mental ill-health.
Psychopharmaceuticals, and the neurobiological narratives that make sense of and are articulated through their promotion and consumption, are also ‘leaking’ (Lovell, 2006: 138) into spaces wherein they are employed to enhance normal functioning (rather than solely treat pathology). ADHD (attention deficit hyperactivity disorder) medication is a classic example of this, and wakefulness-promoting drugs are also important cases. The ‘off-label’ use of drugs in this way (e.g. by overworked students and business people) carries ‘the potential to reify and reinforce … dominant and hegemonic [cultural] narratives’ (Mamo and Fishman, 2001: 13). Chemical enhancement is indicative of a broader societal move away from viewing dependence on pharmaceuticals as a weakness, to instead considering their consumption as desirable or even essential in order to fully participate in society (Martin et al., 2011; Williams et al., 2008). The ‘duty to be well’ that Monica Greco (1993) has characterized as an animating theme within twentieth-century ‘Western’ society can now be understood as intimately entwined with what we might think of as a ‘duty to do well’ – and pharmaceuticals provide one of several means through which this might be achieved (though how many people will use them in this way remains an open question; Coveney, 2011).
At the same time, though, criticism of the use of pharmaceuticals in this way is rife, not just for enhancement but for therapy as well (if a straightforward distinction between these can in fact be drawn). Lifestyle and psychosocial interventions for health and well-being continue to hold considerable traction within the UK and other ‘Western’ nations. Indeed, the implosion of selfhood into the brain has paradoxically acted to legitimize a range of applications of psychosocial knowledge, including intervention in families and children deemed ‘at risk’ of developing mental pathology and producing social disorder (Pickersgill, 2009; Walsh, 2011). The very fact that the brain is viewed as potentially shaped through ‘the environment’ underscores the continued salience of this ontological realm within public mental health. Indeed, neurobiological research may be reflexively rejected by psychiatrists and psychologists who read onto it assumptions of determinacy that undermine their therapeutic goals (Helén, 2011; Pickersgill, 2011b).
Further, in the UK at least, the ‘socio-medical organisation of psychiatric disorder’ (Prior, 1991: 403) seems to be shifting: developments in mental health policy today seek to prioritize psychological interventions, rather than drug therapies, with techniques such as cognitive behavioural therapy (CBT) increasingly lauded, funded and consumed. This is occurring in tandem with disinvestment in treatments for mental illness by the pharmaceutical industry (Pickersgill, 2012b).3 One of the most significant biopolitical engines that has customarily been understood as driving the processes by which subjective experience comes to be mapped upon the brain thus could now be powering down.4
Discussion
As we have seen, neuroscience is frequently regarded as a powerful set of practices that can reveal important information about our brains and our selves. Such ascriptions of profundity reconfigure neuroscientists as not solely purveyors of fresh knowledge, but also co-producers (Jasanoff, 2004) of sociality as they enable and come to be embedded in new relationships with the health and other professions, patients and wider publics. With these developments come shifts in the flow of capital; neuroscience is a resource from which a new ‘promissory bioeconomy’ (Martin et al., 2008) is being formed. Neuroscience, through its construction as a novel epistemological practice, is clearly impacting on society in varied, subtle and sometimes far-reaching ways.
Alongside the perceived novelty of neuroscience sits its capacity to (re)legitimate older forms of knowing and acting. This raises questions regarding whether neuroscience may lend credibility to counter-modern (Beck, 1992) knowledge claims: for instance, to what extent might neuroscientific research (re)inscribe gender and ethnic differences onto biological sex? Alongside its modernist efforts to contribute to classical concerns of liberal democracies (health, education, justice), neuroscience therefore may also have implications for the exercise of new forms of power and control. This has not gone unnoticed by some academics and social groups: alongside celebrations of neuroscience sit passionate critiques labelling it as ‘reductionist’. The questions of whether new forms of discrimination may be engendered by neurologic conceptions of morality, deviance and sickness demand attention (cf. Kerr and Cunningham-Burley, 2000).
However, particular outcomes cannot be assumed. Further, it is not clear that neuroscientific knowledge is truly transformative, nor even that the celebratory, promissory and critical discourses associated with it are novel and unexpected. Rather, for over a century the scientific exploration of the brain has engendered both public and political interest, with many of the hopes invested in techniques like PET once instantiated within research involving technologies such as electroencephalography (EEG) (Borck, 2001, 2008; Hagner and Borck, 2001). Indeed, such tools produced particular models of the psyche (Borck, 2005) that can today be understood as the scaffolding upon which twenty-first-century neurologic constructs have been assembled.
We must, then, be wary: not only of claims from neuroscientists and other actors about the potentiality of studies of the brain and the innovations they can and should engender, but also of highly theorized social scientific accounts that might over-play the novelty and import of neuroscience. As Webster (drawing on Barry, 2001) notes for genomics, it can be ‘difficult to identify novelty in-itself, since there is much evidence that shows how the same technoscience can be positioned and repositioned as old and new, depending on the networks and audiences it seeks to embrace or mobilise’ (Webster, 2005: 236, emphasis in original; see also Webster, 2002). Accordingly, it is problematic to view neuroscience as ‘intrinsically and necessarily transformative’ (Webster, 2005: 237), or, equally, as utterly lacking in social agency. Rather, the potential of neuroscientific research to (re)shape society is inextricably bound up in the social life of the brain: how, when and where is the neurological ‘consumed’, and in what settings and how does it gain traction and legitimacy?
As Vidal shows us, ‘brainhood predated reliable neuroscientific discoveries, and constituted a motivating factor of the research that, in turn, legitimized it’ (Vidal, 2009: 14). Today, social scientists themselves may play a role in the legitimization of neuroscience. Analysts are increasingly expected to play a ‘formative role’ (Barben et al., 2008: 983) in innovation through, for instance, collaboration and anticipatory governance (Molyneux-Hodgson and Meyer, 2009). This explicit positioning of social science within innovation policy has been broadly, but cautiously, welcomed by anthropologists, sociologists and STS scholars (Macnaghten et al., 2005), who today occupy ambivalent subject positions between ‘collaborator’, ‘contributor’ and ‘critic’ with and of scientific research and technological development (Calvert and Martin, 2009). Yet, such interdisciplinary working and engagement requires social scientists to be ‘involved explicitly in the construction of possible futures’ (Barben et al., 2008: 993); this future-orientated collaborative debate and practice is necessarily intertwined with discourses of novelty that legitimate interdisciplinarity. In turn, dialogue around the transformative potential of ‘new’ science, technology and medicine ‘plays an important role in informing and shaping social science research agendas’ (Hedgecoe and Martin, 2008: 817). This has created anxieties within social science communities regarding the extent to which researchers are serving to create further societal legitimacy for technoscientific practice (Burchell, 2009).
These wider debates resonate with the specific challenges indicated here regarding research into the place and role of neuroscience in society. Existing work has often drawn on Foucauldian insights around governmentality, emphasizing the kinds of self-making that encounters with the brain result in (Dumit, 2004; Nadesan, 2002; Pitts-Taylor, 2010; Rose, 2007; Thornton, 2011; Vrecko, 2006). This scholarship is rich and important, helping us to better understand the impacts of neuroscience on subjectivity. Yet, Foucault expressly argued for the possibilities of societal resistance to dominant discourses, which in the sociology of neuroscience have, to date, been less well documented and explored. Indeed, the very idea of a dominance/resistance binary presupposes widespread individual and institutional engagement with neuroscience and the neurological. Such an imaginary leaves insufficient room for elaborations of sociotechnical and subjective spaces wherein the brain is simply not present. As I have sought to describe here, the power of neuroscience is evident in a variety of social realms; yet, at the same time, it is also sometimes resisted or ignored. Moreover, the potency of neurologic technoscience is itself often activated by longstanding cultural tropes. This underscores a central question: how can social scientists engage with ‘new’ developments whilst recognizing that such novelty is itself socially constructed and may in fact be partly constituted through sociological engagement?
This article aims to articulate a problematic, rather than defend a solution. However, some further reflection on this question is important to set out here. In particular, sociological research must refrain from assuming novelty, even when case studies originate from or are propelled by a general understanding (held by the investigator, or evinced from them as apparent elsewhere) that some form of sociotechnical practice can be taken to be ‘novel’. In effect, I regard some kind of ‘methodological impartiality’ (cf. Bloor, 1991) as central to investigations into ‘novel’ technoscience. Perhaps of equal import, research must engage empirically (using a range of methods) with the matter of novelty: when, why and how is a particular realm of science or a kind of technological artefact deployed as ‘novel’ and to what epistemological or societal ends? Claims made through such research should likewise be reflexive about how partial they are: for instance, neuroscientists themselves might refuse to see their work as novel, whilst reporters often do, and wider publics may be ambivalent or agnostic. In bearing these issues in mind, sociologists might better chart the complex terrain from which science and technology emerge and within which they function, without inappropriately reifying the praxis under examination. However, this remains an incomplete response to the question detailed above: it demands further sociological engagement and debate.
Conclusion
In this article, I have aimed to chart some of the key features of the current social life of the brain, whilst also documenting why, how and by whom the brain is (sometimes) taken to be important. The resonance of neuroscience in society is noteworthy to be sure; however, the brain itself might best be understood as being what I have elsewhere called an object of ‘mundane significance’ (Pickersgill et al., 2011). By this I mean that the brain – and, hence, studies of its function and structure – has a taken-for-granted salience that is synthesized through the considerable sociotechnical work that goes into figuring it as novel, yet simultaneously neuroscientific research is often distant from everyday professional and health practice. Considering the brain and the discourses and techniques that help to constitute it in this light acknowledges sociotechnical innovation but nevertheless remains mindful of the ways in which the ‘inventiveness’ (Barry, 2001) of neuroscience is itself socially produced. In so doing, cultural analyses of neuroscience might better distinguish themselves from biomedical and entrepreneurial discourses that likewise ascribe transformative effects to neurologic knowledge.
Notes
1. Data collected include mental health policy documents, reports from bodies such as the Royal Society, journal articles in scientific and professional journals, (participant) observation at events pertaining to neuroscience, mental health and society (including clinical conferences and public engagement events), and interviews, focus groups and informal conversations with scientists, clinicians, service users and wider publics (including teachers, counsellors and members of the clergy).
2. The article does not provide a general schematic of the social, economic and epistemological drivers of neuroscience, which have been documented elsewhere (e.g. Beaulieu, 2001; Dumit, 2004; Littlefield and Johnson, 2012; Rose, 2007; Vidal, 2009).
3. For more on the economic, scientific and social aspects of disinvestment in psychopharmacology, see Chandler (in press).
4. Social scientists have shown extensively, through a wide range of richly ethnographic and detailed historical studies, how profoundly the pharmaceutical industry has shaped personal experience in the twentieth and twenty-first centuries; see Dumit (2012), Healy (2004), Jenkins (2009) and Petryna et al. (2006).
References
Barben D, Fisher E, Selin C and Guston DH (2008) Anticipatory governance of nanotechnology: Foresight, engagement, and integration. In: Hackett EJ, Amsterdamska O, Lynch M and Wajcman J (eds) The Handbook of Science and Technology Studies, 3rd edn. Cambridge, MA: MIT Press, pp. 979–1000.
Barry A (2001) Political Machines: Governing a Technological Society. London: Athlone Press.
Barry A, Born G and Weszkalnys G (2008) Logics of interdisciplinarity. Economy and Society 37(1): 20–49.
Battro AM, Fischer KW and Léna PJ (eds) (2008) The Educated Brain: Essays in Neuroeducation. Cambridge and New York: Cambridge University Press.
Bear MF, Connors BW and Paradiso MA (2007) Neuroscience: Exploring the Brain, 3rd edn. Baltimore and Philadelphia: Lippincott Williams and Wilkins.
Beaulieu A (2000) The space inside the skull: Digital representations, brain mapping and cognitive neuroscience in the decade of the brain. PhD Thesis, University of Amsterdam.
Beaulieu A (2001) Voxels in the brain. Social Studies of Science 31(5): 635–680.
Beck U (1992) Risk Society: Towards a New Modernity. London: Sage.
Blakemore S-J, Dahl RE, Frith U and Pine DS (2011) Developmental cognitive neuroscience. Developmental Cognitive Neuroscience 1(1): 3–6.
Bloor D (1991) Knowledge and Social Imagery, 2nd edn. Chicago: University of Chicago Press.
Board on Army Science and Technology (BAST) (2009) Opportunities in Neuroscience for Future Army Applications. Washington, DC: The National Academies Press.
Borck C (2001) Electricity as a medium of psychic life: Electrotechnological adventures into psychodiagnosis in Weimar Germany. Science in Context 14(4): 565–590.
Borck C (2005) Writing brains: Tracing the psyche with the graphical method. History of Psychology 8(1): 79–94.
Borck C (2008) Recording the brain at work: The visible, the readable, and the invisible in electroencephalography. Journal of the History of the Neurosciences 17(3): 367–379.
Braun K, Moore A, Herrmann SL and Könninger S (2010) Science governance and the politics of proper talk: Governmental bioethics as a new technology of reflexive government. Economy and Society 39(4): 510–533.
Bröer C and Heerings M (in press) Neurobiology in public and private discourse: The case of adults with ADHD. Sociology of Health and Illness.
Brosnan C (2011) The sociology of neuroethics: Expectational discourses and the rise of a new discipline. Sociology Compass 5(4): 287–297.
Buchman DZ, Borgelt EL, Whiteley L and Illes J (in press) Neurobiological narratives: Experiences of mood disorder through the lens of neuroimaging. Sociology of Health and Illness.
Bud R and Gummett P (eds) (1999) Cold War, Hot Science: Applied Research in Britain’s Defence Laboratories 1945–1990. London: Harwood Academic Publishers.
Burchell K (2009) A helping hand or a servant discipline? Interpreting non-academic perspectives on the roles of social science in participatory policy-making. Science, Technology and Innovation Studies 5(1): 49–61.
Burri RV (2012) Visual rationalities: Towards a sociology of images. Current Sociology 60(1): 45–60.
Calvert J and Martin P (2009) The role of social scientists in synthetic biology. EMBO Reports 10(3): 201–204.
Carson J (1999) Minding matter/mattering mind: Knowledge and the subject in nineteenth-century psychology. Studies in History and Philosophy of Biological and Biomedical Sciences 30(3): 345–376.
Carter A and Hall W (2011) Addiction Neuroethics: The Promises and Perils of Neuroscience Research on Addiction. Cambridge: Cambridge University Press.
Chandler DJ (in press) Something’s got to give: Psychiatric disease on the rise and novel drug development on the decline. Drug Discovery Today.
Choudhury S, Nagel SK and Slaby J (2009) Critical neuroscience: Linking neuroscience and society through critical practice. BioSocieties 4(1): 61–77.
Choudhury S, McKinney KA and Merten M (2012) Rebelling against the brain: Public engagement with the ‘neurological adolescent’. Social Science and Medicine 74(4): 565–573.
Conrad EC and De Vries R (2011) Field of dreams: A social history of neuroethics. In: Pickersgill M and Van Keulen I (eds) Sociological Reflections on the Neurosciences. Bingley: Emerald, pp. 299–324.
Coveney CM (2011) Cognitive enhancement? Exploring modafinil use in social context. In: Pickersgill M and Van Keulen I (eds) Sociological Reflections on the Neurosciences. Bingley: Emerald, pp. 203–228.
Dingwall R, Nerlich B and Hillyard S (2003) Biological determinism and symbolic interaction: Hereditary streams and cultural roads. Symbolic Interaction 26(4): 631–644.
Dumit J (2004) Picturing Personhood: Brain Scans and Biomedical Identity. Princeton, NJ and Oxford: Princeton University Press.
Dumit J (2012) Drugs for Life: How Pharmaceutical Companies Define Our Health. Durham, NC: Duke University Press.
Duster T (2006) Comparative perspectives and competing explanations: Taking on the newly configured reductionist challenge to sociology. American Sociological Review 71(1): 1–15.
Elsevier (2012) Trends in Neuroscience and Education. Available at: www.elsevier.com/journals/trends-in-neuroscience-and-education/2211–9493# (accessed 9 January 2013).
Farah MJ (2012) Neuroethics: The ethical, legal, and societal impact of neuroscience. Annual Review of Psychology 63: 571–591.
Fox N, Ward K and O’Rourke A (2006) A sociology of technology governance for the information age: The case of pharmaceuticals, consumer advertising and the internet. Sociology 40(2): 315–334.
Franks DD (2010) Neurosociology: The Nexus between Neuroscience and Social Psychology. New York: Springer.
Fuchs T (2006) Ethical issues in neuroscience. Current Opinion in Psychiatry 19(6): 600–607.
Glannon W (2011) Brain, Body and Mind: Neuroethics with a Human Face. Oxford: Oxford University Press.
Greco M (1993) Psychosomatic subjects and the ‘duty to be well’: Personal agency within medical rationality. Economy and Society 22(3): 357–372.
Gurley JR and Marcus DK (2008) The effects of neuroimaging and brain injury on insanity defences. Behavioral Sciences and the Law 26(1): 85–97.
Hagner M and Borck C (2001) Mindful practices: On the neurosciences in the twentieth century. Science in Context 14(4): 507–510.
Healy D (2004) Let Them Eat Prozac: The Unhealthy Relationship between the Pharmaceutical Industry and Depression. New York: New York University Press.
Hedgecoe A and Martin P (2003) The drugs don’t work: Expectations and the shaping of pharmacogenetics. Social Studies of Science 33(3): 327–364.
Hedgecoe AM and Martin PA (2008) Genomics, STS, and the making of sociotechnical futures. In: Hackett EJ, Amsterdamska O, Lynch M and Wajcman J (eds) The Handbook of Science and Technology Studies, 3rd edn. Cambridge, MA: MIT Press.
Helén I (2011) Is depression a brain disorder? Neuroscience in mental health care. In: Pickersgill M and Van Keulen I (eds) Sociological Reflections on the Neurosciences. Bingley: Emerald, pp. 123–152.
Illes J, DeVries R, Cho MK and Schraedley-Desmond P (2006) ELSI priorities for brain imaging. American Journal of Bioethics 6(2): W24–W31.
Jasanoff S (ed.) (2004) States of Knowledge: The Co-production of Science and Social Order. New York: Routledge.
Jasanoff S (2006) Just evidence: The limits of science in the legal process. Journal of Law, Medicine and Ethics 34(2): 328–341.
Jenkins JH (ed.) (2009) Pharmaceutical Self: The Global Shaping of Experience in an Age of Psychopharmacology. Santa Fe, NM: SAR Press.
Jones OD and Shen FX (2012) Law and neuroscience in the United States. In: Spranger TM (ed.) International Neurolaw: A Comparative Analysis. Berlin: Springer, pp. 349–380.
Joyce KA (2008) Magnetic Appeal: MRI and the Myth of Transparency. Ithaca, NY and London: Cornell University Press.
Karp DA (2006) Is It Me or My Meds? Living with Antidepressants. Cambridge, MA: Harvard University Press.
Kerr A and Cunningham-Burley S (2000) On ambivalence and risk: Reflexive modernity and the new human genetics. Sociology 34(2): 283–304.
Kulynych J (1996) Brain, mind, criminal behaviour: Neuroimages as scientific evidence. Jurimetrics 36(3): 235–244.
Kulynych J (2002) Legal and ethical issues in neuroimaging research: Human subjects protection, medical privacy, and the public communication of research results. Brain and Cognition 50(3): 345–357.
Lichtman JW and Sanes JR (2006) Translational neuroscience during the Second World War. Journal of Experimental Biology 209(18): 3485–3487.
Littlefield M (2009) Constructing the organ of deceit: The rhetoric of fMRI and brain fingerprint- ing in post-9/11 America. Science, Technology and Human Values 34(3): 365–392.
Littlefield M and Johnson J (eds) (2012) The Neuroscientific Turn: Transdisciplinarity in the Age of the Brain. Ann Arbor: University of Michigan Press.
Lovell AM (2006) Addiction markets: The case of high-dose buprenorphine in France. In: Petryna A, Lakoff A and Kleinman A (eds) Global Pharmaceuticals: Ethics, Markets, Practices. Durham, NC: Duke University Press, pp. 136–170.
Macnaghten P, Kearnes M and Wynne B (2005) Nanotechnology, governance, and public delib- eration: What role for the social sciences? Science Communication 27(2): 1–24.
Mamo L and Fishman JR (2001) Potency in all the right places: Viagra as a technology of the gendered body. Body and Society 7(4): 13–35.
Martin P, Brown N and Turner A (2008) Capitalizing hope: The commercial development of umbilical cord stem cell banking. New Genetics and Society 27(2): 127–143.
Martin PA, Pickersgill M, Coveney CM and Williams SJ (2011) Pharmaceutical cognitive enhancement: Interrogating the ethics, addressing the issues. In: Segev I and Markram H (eds) Augmenting Cognition. Lausanne: EPFL Press, pp. 179–192.
Molyneux-Hodgson S and Meyer M (2009) Tales of emergence: Synthetic biology as a scientific community in the making. BioSocieties 4(2–3): 129–145.
Moreno JD (2006) Mind Wars: Ethics, National Security and the Brain. Washington, DC: Dana Press.
Nadesan MH (2002) Engineering the entrepreneurial infant: Brain science, infant development toys, and governmentality. Cultural Studies 16(3): 401–432.
Patel P, Cidis Meltzer C, Mayberg HS and Levine K (2007) The role of imaging in United States courtrooms. Neuroimaging Clinics of North America 17(4): 557–567.
Petryna A, Lakoff A and Kleinman A (2006) Global Pharmaceuticals: Ethics, Markets, Practices. Durham, NC: Duke University Press.
Pickersgill M (2009) Between soma and society: Neuroscience and the ontology of psychopathy. BioSocieties 4(1): 45–60.
Pickersgill M (2011a) Connecting neuroscience and law: Anticipatory discourse and the role of sociotechnical imaginaries. New Genetics and Society 30(1): 27–40.
Pickersgill M (2011b) ‘Promising’ therapies: Neuroscience, clinical practice, and the treatment of psychopathy. Sociology of Health and Illness 33(3): 448–464.
Pickersgill M (2012a) The co-production of science, ethics and emotion. Science, Technology and Human Values 37(6): 579–603.
Pickersgill M (2012b) What is psychiatry? Engaging with complexity in mental health. Social Theory and Health 10(4): 328–347.
Pickersgill M, Cunningham-Burley S and Martin P (2011) Constituting neurologic subjects: Neuroscience, subjectivity and the mundane significance of the brain. Subjectivity 4(3): 346–365.
Pitts-Taylor V (2010) The plastic brain: Neoliberalism and the neuronal self. Health 14(6): 635–652.
Prior L (1991) Mind, body and behaviour: Theorisations of madness and the organisation of therapy. Sociology 25(3): 403–421.
Rose N (2007) The Politics of Life Itself: Biomedicine, Power, Subjectivity in the Twenty-first Century. Princeton, NJ and Oxford: Princeton University Press.
Royal Society (2011) Neuroscience: Implications for Education and Lifelong Learning (Brain Waves Module 2). London: The Royal Society. Available at: royalsociety.org/uploadedFiles/ Royal_Society_Content/policy/publications/2011/4294975733.pdf (accessed 9 January 2013).
Schüll ND and Zaloom C (2011) The shortsighted brain: Neuroeconomics and the governance of choice in time. Social Studies of Science 41(4): 515–538.
Shonkoff JP (2011) Protecting brains, not simply stimulating minds. Science 333(6045): 982–983.
TenHouten W (1997) Neurosociology. Journal of Social and Evolutionary Systems 20(1): 7–37.
Tennison MN and Moreno JD (2012) Neuroscience, ethics, and national security: The state of the art. PLOS Biology 10(3). Epub. DOI:10.1371/journal.pbio.1001289.
Thomson M (2005) Psychological Subjects: Identity, Culture, and Health in Twentieth-century Britain. Oxford: Oxford University Press.
Thornton DJ (2011) Brain Culture: Neuroscience and Popular Media. New Brunswick, NJ: Rutgers University Press.
Vidal F (2009) Brainhood, anthropological figure of modernity. History of the Human Sciences 22(1): 5–36.
Von Scheve C (2011) Sociology of neuroscience or neurosociology? In: Pickersgill M and Van Keulen I (eds) Sociological Reflections on the Neurosciences. Bingley: Emerald, pp. 255–278.
Vrecko S (2006) Folk neurology and the remaking of identity. Molecular Interventions 6(6): 300–303.
Vul E, Harris C, Winkielman P and Pashler H (2009) Puzzlingly high correlations in fMRI studies of emotion, personality, and social cognition. Perspectives on Psychological Science 4(3): 274–290.
Walsh C (2011) Youth justice and neuroscience: A dual-use dilemma. British Journal of Criminology 51(1): 21–39.
Webster A (2002) Innovative health technologies and the social: Redefining health, medicine and the body. Current Sociology 50(3): 443–457.
Webster A (2005) Social science and a post-genomic future: Alternative readings of genomic agency. New Genetics and Society 24(2): 227–238.
Williams SJ, Seale C, Boden S et al. (2008) Waking up to sleepiness: Modafinil, the media and the pharmaceuticalisation of everyday/night life. Sociology of Health and Illness 30(6): 839–855.
0 notes
gravitascivics · 5 years
COMPETING HISTORIES
One piece of background information undergirding much of what this blog has to say is the ongoing “debate” between establishment educators and academic educators.  A simple rundown of what divides them can take the following form:  the establishment favors a natural rights point of view, while academia favors critical theory.  
The natural rights view summarily supports individuals holding basic rights and that they can behave to exercise those rights as long as such behavior does not interfere with others enjoying their basic rights.  Critical theory believes society is ruled and controlled by monied interests and the resulting political/economic system is set up and sustained to advance those interests.  
Each side strives to move education toward recognizing the governmental/political, national perspective it harbors, to pursue the policies it sees as ideal, and to delegitimize the related beliefs of the other side.  This leads to a different set of policy preferences:
·        For establishment educators, policy should support and be directed toward preparing students for the competitive environment that characterizes the economy.  
·        For critical theorists, policy should encourage the citizenry toward overthrowing the exploitive society – this can be done peacefully – so it can advance to a general state of equality in which all will equally share in the economy’s benefits.
As a short expression of these opposing ideas, one can simply state what each side’s trump value is:  natural rights holds liberty – or what the literature calls natural liberty or equal condition – and critical theory holds equality – or what the literature calls equal results.  As a point of context, the current political landscape seems to indicate that the electorate is swinging toward one camp or the other by abandoning what is usually called the “political center.”  Another way to describe this:  both sides are getting closer to the extremes – nationalism and socialism.
As for this posting, the aim is to see how each side treats educational history in the US.  If either can have its view of the past as the general view, this would lead, among the electorate, to agreements over many assumptions one makes in promoting one set of policies as opposed to the other set.  This writer believes that that would be an important achievement by either side as it would “grease” the way toward being successful in many policy debates.
But before explaining that, here is a contextual point that needs to be mentioned.  One should not view education as a consumer service.  Instead, one should see it as a discipline; in order to succeed, it demands that a reasonable amount of effort be exerted by students.  This point bears mentioning because of its varied implications.  When reformers consider changes in education, they need to remember that this basic relationship – between students and subject matter – should dictate important parameters as to what should be considered.
Often, those who discuss reform, be they establishment advocates or critical theorists, do not take this fact into consideration.  How an educational system is run, therefore, goes a long way in determining whether the resulting service is reasonable in the effort it demands of students, parents, or other involved adults, such as teachers and school administrators.  With those ground rules, what do these opposing sides view as being the educational history of the nation?  Here is a short answer.
Naturally, both sides have differing historical accounts they favor.  So, how can one see these dual views separately and in comparison to each other?  A central concern of critical theorists can be used to begin this review.  That is:  why, to begin with, does an exploitive nation, such as the US, have a public-school system?  Here are the two opposing answers to this basic question.
The establishment believes that, on the face of it, public schooling allows the masses to become educated and, through that experience, to improve their competitive posture.  In turn, this, with hard work, can lead to the ever higher levels of reward that lucrative careers accrue.  In short, education can lead to equal opportunity, assuming the educational system works sufficiently well.  
This assumption is one of many that critical theorists question.  Critical theorists do not accept that the above overall aim is true.  They argue that an alternate motivation was at play when the rich agreed to sign up and support a public-school system, an essential element in instituting such a system.  
On the critical theory side, one can summarize its view by stating that public schooling came about due to how rich people saw a vast population.  That population, mostly poor, would have little to no motivation to favor, uphold, or otherwise support the existing, exploitive polity and its economic system.  Given their lack of any meaningful stake in what was, the masses were left to their own devices not only to be antagonistic toward that system but also, through lawlessness and outright rebellion, to undermine it.
Chief among the upper class’s concerns were behaviors that aimed at destruction.  Yes, religion could be counted on to stave off some of these beliefs or feelings, but on a day-to-day basis, a more institutionalized effort could be made to promote a message of how good the system was and, better still, that message could be imparted early in life.  
Not only could such a schooling institution do these things, but it could also socialize people to accept such exploitive practices as forcing the working class to submit to the factory system, in which manufacturers forced workers to work for a subsistence wage.  By the way, schools used the factory model to physically structure their operations.[1]  
The promoters of public schooling could and finally did sell this essential message to the rich.  It has been further supported, critical theorists would say, by the underfunding with which public schooling has struggled in the years since its inception.  Of course, because of funding formulas across the country, one can readily see the results of such funding when one compares the physical resources of schools in lucrative areas to those of, say, inner-city schools where struggling people live.
That, in a few sentences, summarizes why critical theorists argue the rich – in this nation and in just about all nations that have public education systems – supported initiating and funding such expensive operations among the states of the US. Of course, lately the rich are thinking of ways to lighten that financial burden.  Critical theorists are apt to see such “reforms” in this light, and one can easily see why the critical theorists might question the motivation behind moves toward some alternate school models such as charter schools.[2]  
On the other side, natural rights advocates favor a different history.  They might see some of the above counterproductive practices – e.g., the factory system model for structurally arranging schools – but usually chalk those up to how people generally saw education at a given time.  They instead look to and emphasize those actors and movements that strove to make education available to the public.  
And they proudly point out those cases where people of modest means have been successful in an array of professional careers.  For example, the head of Amazon went to high school in the Miami-Dade school system.  
In addition, they like to tout the efforts of such historical characters as Horace Mann who in the 1800s “believed passionately in the value of public schools and common education for the children of Massachusetts and defended his proposal in flowery … terms.”[3]  But, one should note an issue that bogged down a harmonious beginning to the public-school system.  
Among the populace, initial efforts found a lot of disagreement over any normative elements being included in the resulting public-school curriculum.  What one needs to keep in mind is that initial efforts at public schooling emanated from a time when there was broad concern over any educational effort, especially those involving moral content.  
Most insisted any resulting curriculum should include moral content based on sectarian foundations.  But, the founders of the public system faced a public strongly divided – for one thing, Roman Catholics who objected to the Protestant bias started their own generally affordable system.[4]  
Despite this divergence, American schooling evolved into a homogeneous product.  Here is Toni Massaro’s summary:
Despite this resistance, compulsory schooling came to dominate the educational landscape during the 1900s.  By 1918, every state had adopted a compulsory school act.  In the 1920s, Cremin reports, “over ninety percent of American children between the ages of seven and thirteen were reported as enrolled in school,” and the vast majority of them were enrolled in public elementary and secondary schools.  Moreover, the content of this education became reasonably, though never completely, coherent.[5]
This writer argues that the resulting curriculum, up until roughly the 1960s, had a strong moral component.  Cremin lists various moral strains that characterized the content among the various state systems; these include New Testament values, Poor Richard’s Almanac wisdom, and the Federalist Papers’ political messaging.  This writer feels this ended in the sixties and that a more natural rights bias has become dominant, instilling a less communal, content-based curriculum that generally avoids dealing with moral questions.
Part of adopting federation theory to guide educational efforts, especially in civics, is a call for a communal curricular base that actively deals with moral questions, but on a secular basis.  This blog has argued that the federalist view can and should be seen as a compromise between both the natural rights view and critical theory.  Again, the nation seems to be edging toward the extremes of these views and that, in turn, can result in dire consequences.
[1] Here’s the analogy:  The raw material was students; they were submitted to timed sessions, molded and passed along the assembly line of different workers/teachers, and delivered at the end of the assembly line as finished – educated – products.  In addition, production areas, the classrooms, were to be arranged in neat rows to better keep oversight on how well the material was progressing.
[2] These semi-public, semi-private schools are often seen as merely a means of draining public resources away from regular public schools in order to provide cheaper semi-private schools – ones that usually lack the oversight to which regular schools are subjected – to affluent families.
[3] Toni M. Massaro, Constitutional Literacy (Durham, NC:  Duke University Press 1993), 12.
[4] For an account of this division, see Robert Gutierrez, “Some Initial Rancor over Education,” Gravitas:  A Voice for Civics, December 1, 2017, accessed July 25, 2019, https://gravitascivics.blogspot.com/2017/12/some-initial-rancor-over-education.html .
[5]  Toni M. Massaro, Constitutional Literacy, 14.
friend-clarity · 6 years
Suggestions for the new Ontario sex-ed curriculum
Barbara Kay: July 3, 2018
Doug Ford’s victory was in some measure due to his promise — I believe a heartfelt one — to repeal the sex ed curriculum in Ontario schools. I assume there’s a replacement program in the works. A sex-ed vacuum is not politically tenable, or even what most conservative parents want.
What principles will undergird a Doug Ford inspired curriculum? I’d suggest four guidelines for his consideration.
First, take sex ed out of the hands of ideologues and activists. Constitute a task force made up of a variety of stakeholders, involving both liberal and conservative parents (including parents of LGBT students), disinterested scientific authorities and, yes, religious representatives, to hammer out recommendations for a sex ed paradigm, in which science is separated from theory, and in which proponents of morality and modesty-based sex ed have a voice and a vote.
Second, revisit the underlying premise in sex ed today that all children must learn everything under the sun that touches on sexuality from the state.
In 2012 “Sex: a Tell-All Exhibition,” produced by the publicly supported Montreal Science Centre, opened at the Ottawa Museum of Science and Technology after touring Montreal and Regina. Schoolchildren aged 12 and over were scheduled to see it. One feature of the exhibit was a video animation of a male and female masturbating. This isn’t something adolescents need graphic (or any) instruction in. More disconcerting, youngsters would be watching this in the presence of strangers who were watching them. Forcing sexual imagery on children in the company of unknown adults is unethical; there is a whiff of voyeurism behind that video’s conception. (Protest led to a minimum entry age of 16 being established before the event opened.)
Of course this exhibit was not part of the Ontario sex ed curriculum. But it speaks to an assumption amongst some of our cultural elites that the more one knows about sex, and the more openly it is handled, the better and healthier it is. I’m neither religious nor prudish, but I vigorously reject that assumption, and so do many other conservatives.
Nothing would be lost, for example, if sex ed in early grades was taught in sex-specific groups. I believe modesty is a natural trait in children and that single-sex forums provide a more appropriate environment for discussion of intensely private issues, like masturbation, menstruation and wet dreams.
Third, there is the question of readiness. Children can be taught the facts of biology quite early, but there is no need to engage young children in detailed discussion of sexual preferences before they fully understand the nature of sexual desire. It is obviously appropriate to warn against internet porn and social media perils at a fairly early age, and the reality of same-sex couples (including parents of students) openly acknowledged, but full engagement in the nature of sexual desire in all its diversity and detail is best left for adolescence.
Finally, nowhere is the need for distinction between science and theory more urgently required than in the area of transgenderism.
Much of what children are learning about transgenderism today, at a very tender age, is not science-based, but activist-dictated theory that can result in psychological harm.
In California last year, at an upscale school, a Sacramento schoolteacher sent a gender-dysphoric boy into the washroom, where he changed into girl’s clothing and re-emerged as a “girl.” The teacher told her kindergarten students to use the child’s new girl name. Parents were outraged when their children came home distressed, with some children reportedly “crying, afraid they were turning into the opposite sex.”
This is abuse of professional authority, a dreadful manipulation of children’s self-confidence and sense of objective reality. Indeed, “gaslighting” is not too strong a word for that teacher’s gender-bender theatrics.
Any classroom discussion of transgenderism must be handled with extreme sensitivity and restraint. The theory of gender fluidity should not be introduced at all until students have the critical thinking skills to distinguish between theory and scholarship. The doubt-encouraging “Genderbread” charts, which attempt to explain differences between gender identity, sexual preference and biological sex and have been brought into Ontario classrooms by some teachers, should disappear altogether.
Individual cases of dysphoria in young children should be a matter between parents and their chosen therapist. “Watch and wait” should be the default approach in schools. Children are capable of accommodating the kind of difference they may encounter in genuinely dysphoric classmates without an entire program spent on what is in fact extremely rare.
State-sponsored sex ed is only one component of a child’s education. If they believe in social-engineering theories, let progressive parents teach their kids “social construction” and “gender fluidity” at home.
But responsible sex ed has, as its first obligation, to respect parents as the first line of protection from experimentation with their children’s minds, and to Do No Harm. I hope that credo will be the sex-education watchword in Ontario on Doug Ford’s political watch.
[email protected] Twitter.com/BarbaraRKay
Messianism & Moral Radicalism (After the Holocaust) (Steven Schwarzschild)
Avoiding him for years, I knew that reading Steven Schwarzschild’s Pursuit of the Ideal was going to be for me an unpleasant professional chore. He wrote an important article about “Jewish aesthetics” that appears in the volume. But that was all I was going to read, and really it too is a strange intellectual exercise. That essay occupied some three paragraphs in the preface to The Shape of Revelation, where it left an unfavorable impression. I highlighted there the confluence of Judaism and high modernism presented by Schwarzschild against postmodernism, Andy Warhol, and pop-art. That was years ago already. This spring, reading the entire volume ended up being one of the very last things I needed to do in the process of completing draft-chapters on ethics and messianism for new research, in which he will occupy at most a footnote or two. Not without an interest of their own, the essays collected in Pursuit of the Ideal only confirm what I already thought. There is something badly awry, albeit instructive, in this body of work. A radical thinker, a moral radical, Schwarzschild did not grasp very well the tension between the ideal and the real, which he posed in stark binary terms. For him, it was one or the other, with no in-between. As post-Holocaust Jewish thought, it is oddly obtuse to the problem of human suffering.
Schwarzschild invoked messianism years before it was brought into vogue in critical theory, and for that alone his work is worth a look. But to what purpose did he stick his hands into that fire? For someone writing well into the 1980s against political and religious Zionism, maybe Schwarzschild should have known better than to play the messianism card. To be sure, he was utterly prescient in his radical critique of the State of Israel, in particular Jewish racism in Israel, but for the wrong reasons. His critique of Israel conformed to a larger pattern of thought privileging idealism and ethical idealism, transformed into an uncompromising form of moral radicalism that he positioned over against “reality.” Compare him, by contrast, to post-Holocaust thinkers like Richard Rubenstein, Eliezer Berkovits, and Emil Fackenheim. Politically, they were, of course, conservative to the extreme about Israel. Writing so soon after 1945, Israel was for them a precious icon of life against death. And yet, for all that, the post-Holocaust thinkers understood the State of Israel more or less realistically, more or less historically. In the end, they did not look upon Israel through an idealist-eschatological lens. As they imagined it, Israel was for them a real place meant to meet real human problems in the here and now. From deep within “the religious,” they understood that God was a problem, not an ethical foundation of faith, as per Schwarzschild. They shared none of his austere theological confidence. For them the only important datum was the Jewish people, Israel after Auschwitz.
Associated with ethical socialism, messianism was for Schwarzschild the be-all and end-all of Judaism. Looking to the future and only to the future, messianism stood for ethical perfection. Ethics and messianism are radical and uncompromising figures of thought strictly opposed to reality (understood to be imperfect, compromised, and cruel). Itself always in pursuit of the ideal, Judaism was marshalled to teach that the world as it is is never what God wants it to be (pp.209-11). This is the essential teaching of “pure monotheism,” presented as the truth of Judaism, as a “clear cut norm,” “authoritative tradition,” with “no doubt,” “in fact,” as “historic fact” (cf. 3, 129, 130, 209, 235, 244, 247). Hyperbolic to the extreme, the essence of Judaism is no longer “ethical monotheism,” that moderate bourgeois inheritance from nineteenth century liberal Judaism. Much more stringently, Schwarzschild distilled the essence of Judaism into “moral radicalism,” which he posed against ethical eudaemonism and the Aristotelean mean (chapter 8).
As an aesthetic figure, moral radicalism may have been marked by an austere and alien grandeur, but, conceptually, it was mangled at the root. Let’s start with how Schwarzschild consistently confused moral radicalism (i.e. absolute and uncompromising commitments to justice, mercy, and the virtue of humility) with religious radicalism (loving God to the point of excess, the desire to imitate God, to see God, to confront God without any mediating intermediary). Reading Deuteronomy 6:5 in order to lampoon liberal religion, he demanded to know: is it possible to love God with “a moderate part of one’s heart”? Schwarzschild has just pulled a bait and switch. If we are to assume that ethics speaks to right human action or to individual virtue and social relations, then the commandment in Deuteronomy to love God with all one’s heart, soul, and might is not ethical per se (p.139). Unethical to the extreme, a “foolish pietism,” to turn all one’s attention to God is to turn one’s eye away from human beings.
Schwarzschild makes the same elision between ethics and religion when writing about Maimonides. Noting the ethical dictum at the end of the Guide that to know God is to practice grace, justice, and righteousness in this world, Schwarzschild fails to comment that this highpoint in the text is the precise moment at which the soul separates from the body, allowing the perfected intellect to unite with the Active Intellect. We see this too in Schwarzschild’s reading of The Eight Chapters, also by Maimonides, his commentary to the ethical tractate Pirkei Avot. Here again what Schwarzschild himself calls the religious “extremism” in the effort to “see” God certainly outstrips Aristotelean moderation. About this, he was right about Maimonides. But this might be better understood to constitute the limits of Maimonidean ethics, or at best their consummation at the limit. Schwarzschild doesn’t seem to grasp that this extreme comportment is no longer ethical. Politics and ethics are a condition for intellectual perfection, not its telos. Transcending interhuman and character ethics, the relation is religious-prophetic (pp.142-3).
What about the individual, whom, according to Maimonides, Torah must always accommodate? While one can appreciate the religious intensity of the spiritual stretch of the vision, it would be a category mistake to call it ethical as conventionally understood, even as an "infinite task." In Schwarzschild there is an intense disregard, if not for human life as such, then for the value of individual human life as a particular actuality. The contempt shown for "the vulgus" and hoi polloi is consistent and withering. Viewed from an eschatological point of view, for Schwarzschild the value of a single human individual life, a suffering human life, pales before "the ideal." As a general statement, this holds true: moral radicalism tout court is unsympathetic as such to human weakness, even to the human suffering that, one might have thought, would have propelled it in the first place.
Under the surface, the strong pathos that inflects this transformation of ethical idealism into moral radicalism betrays a unique form of affective volatility, undoubtedly pressured by the Holocaust. A mix of heart and heartlessness, the fever pitch of Schwarzschild's moral radicalism explains why his discussion of the dispute between R. Akiva and ben Petura seems to go off the rails. In this well-known debate in tractate Bava Metzia of the Babylonian Talmud, two men are in the desert, one in possession of just enough water for one person to live, but only at the expense of the other. Ben Petura would have them share the water, meaning that both men will die in the desert. The opposing view is Akiva's. In the Bavli, it is clear that either one person or no person will survive this zero-sum ordeal. Akiva, typically regarded in Jewish tradition as winning the argument, rules that, in order to survive this state of emergency, the owner of the water should keep it and preserve his own life, even at the expense of the other.
Raising the stakes to that higher pitch of moral radicalism, Schwarzschild brings a post-Holocaust swerve to this classical conundrum. In his re-reading, the "model situation" under consideration is "actually" one in which it is unclear whether either man will survive the passage through the desert. Bending the text in light of "cool analysis" and the "history of Jewish martyrdom" leads Schwarzschild to conclude that both men will die in the desert, either as martyrs or as "frustrated survivors." Bringing the conversation to bear on our own contemporary condition, Schwarzschild now complains, almost suddenly, that no one today wants to talk about "the possibility of future martyrdom." He is distressed by the thought that in "our times," "no one seems to have any idea anymore about having to draw the line somewhere, beyond which, as a limit, one may not go without losing one's humanity, though, perhaps saving one's life – not to speak of lesser sacrifices" (pp. 132-3). This is the view that we have no choice but to die sometime, and that one should always prefer one's own death rather than cause the death of another person. Schwarzschild conflates this with killing, which he insists "Judaism" forbids absolutely. Having further conflated killing and murder, he arrives at an appeal to martyrdom that is uniquely his own. "May my soul die the death of the righteous and my end be like theirs." Against post-Holocaust theology, Schwarzschild calls "psychopathic" its will to survive at every cost after Auschwitz. Instead he embraces the example of the poet Yehudah Halevy and his willingness to suffer and to die (pp.134-5). The martyr dies not for the sake of one's fellow human being; the martyr dies for the sake of God's name.
Guiding the moral radicalism undergirding this idiosyncratic appeal to martyrdom is the firm and unshakeable trust in the God of Israel. Schwarzschild trusts that God, even if we are to call God cruel, saw to the survival of the Jewish people during the Holocaust and will always see to the survival of the Jewish people (cf. pp 86, 89, 94-8 in chapter 4, “On the Theology of Jewish Survival”). This is the confidence that God will extricate humanity out of the inextricable muck of material existence (p.223). Sorting through old arguments about the soteriological effect of human action versus divine grace, Schwarzschild rejects apocalyptic modes of religious thought (the notion that redemption comes only by way of sudden catastrophic rupture) and also utopianism (defined as the false belief that the world can be made perfect). Against Scholem, Schwarzschild thinks one can separate both concepts from pure messianism (i.e. the notion that there is “some organic relationship between human history and its end” allowing us to grasp that ethical action is a necessary but insufficient condition for ethical perfection).
After the Holocaust, the confidence is fragile. Undercutting his own thesis, Schwarzschild could only say that to pin one's hope for salvation upon some sudden, apocalyptic divine act of grace and only grace would depend upon an experience so "horrifying," "morally atrocious" and "experientially painful" that one would never want the messiah to come in the first place (pp.225-6, "On Jewish Eschatology"). But surely he must have understood that the Holocaust was exactly that — horrifying, morally atrocious, and experientially painful in the extreme. As much as he wants to subvert Scholem's famous thesis that messianism is a theory of catastrophe, in writing these lines Schwarzschild only highlights the degree to which the difference between messianism and apocalypticism is too fine a line, if not altogether non-existent. For Schwarzschild, the pursuit of the ideal bears up over the weight of brute human reality. In this, it might be more to the point to concede that ethical perfection and moral radicalism are not worth the radically unbearable price of radical suffering.
Furthermore, when framed as radical ethical idealism, messianism is an eminently falsifiable belief. On the one hand, we have already suggested that Schwarzschild is critical of political and religious Zionism. On the other hand, he insists that "statements about life in the Messianic era or in the world to come are not logically self-contradictory or morally counterproductive and, in addition, produce this-worldly ethical injunctions which can both function empirically and be approved of morally" (p.222). With an eye on right-wing religious radicalism in the State of Israel today, it is hard to follow this logic. As Schwarzschild himself knew well enough, statements about messianism can be ethically counterproductive, even morally atrocious. The so-called difference between messianism and false messianism is another thin line.
Another muddle: the pursuit of the ideal depends upon a view of the world according to which "reality" is too irredeemably rotten to function as a platform for the very ethical action it demands in relation to standards of messianism, perfectionism, and moral radicalism. Instead of determining ethical action in this world, these essays give way to an extreme form of religious love for the sake of the God of Israel. Unto death, the thinking here is no longer ethical, if by ethics we mean the morally ambiguous terrain of human virtues and interpersonal relationships. None of this reflects "cool analysis." It is all rather hot to the touch. The more coherent counterclaim would be that the moral universe of this world is not given to the Platonic, mathematical precision that Schwarzschild wants for ethical action. Ours is an experience of the world too often overwhelmed by "horrifying experience" and moral atrocity. To work one's way around in this world requires the kind of accommodation that Schwarzschild associates with Aristotelean ethics, even as the refusal to accommodate to the reality of this world lies at the heart of this post-Holocaust brand of ethical idealism and moral radicalism.
If Schwarzschild's thinking is muddled, it is because it bangs up against ontological muddles, not just conceptual ones. This is to say that he further confuses relations between things that are perhaps already mingled at the root: between what's real and ideal, between is and ought, between what is permitted and forbidden, between messianism and apocalypticism, between religion and ethics, from which there is no way out except perhaps through death. Does this explain the appeal of martyrdom? About the confused or confusing understanding of the relation between the ideal and the real, this is what I wrote about Schwarzschild's essay on Jewish art and modernism in the preface to The Shape of Revelation:
Writing at mid-century after the triumph of modernism…Steven Schwarzschild allows plastic art into Judaism; although once again, word trumps image in order to contrive a uniquely Jewish approach to visual art. Schwarzschild cites the Shulkhan 'Arukh to explain that Jewish law permits nonmimetic plastic expression. According to a gloss by Moses Isserles (1520–72), "There are those who hold that images of man or a dragon are not prohibited unless they are complete with all their limbs, but the shape of a head by itself or a body without a head is in no wise forbidden." Schwarzschild relates this opinion back to the rationalism of Moses Maimonides, who rejected any attempt to stand for the human soul, God's very image. As Schwarzschild puts it, "To represent physical appearance as the whole person is, therefore, a misrepresentation. The converse is also true: a 'misrepresentation' will in fact be a true depiction. . . . To represent the empirical as tout court is, therefore, also to misrepresent God."
 Deference to the authority of Jewish law notwithstanding, this stilted approach to art trips up on German philosophical idealism. Schwarzschild continues to patrol the anxious boundary separating art from reality, the ideal from the empirical. He wants to avoid that instance in which two objects, the original and its artistic reproduction, appear exactly identical. According to Schwarzschild, art is no longer art when the artwork neither adds to nor detracts from the real world. “What in truth is the difference between a pop art duplicate . . . of a soup can and the original on a display shelf? Hegel was surely right when he held that whenever and wherever the idea is believed to have become identical with the real, the ‘death of art’ has occurred.” Couched in the idealist critique of empirical reality and in a modernism already at war with postmodernism, Schwarzschild’s discussion of the Shulkhan ‘Arukh proudly concludes, “We have thus deduced two of the chief principles of twentieth-century modern art—abstraction and distortion.” Leaving copy realism far behind, abstract art nihilates physical semblance, whereas distortion detracts from and adds to it.
Modern art and Judaism are thereby forced to confirm each other on the basis of philosophical constructs regarding the fixed difference between appearance, representation, and reality. Mostly, the entire exercise remains fundamentally arbitrary, as seen in Schwarzschild's embrace of Rembrandt, El Greco, and Modigliani, while excluding classical Greek art, its Renaissance revival, French pointillism, and American pop art. The author seems to think that the shadows and light casting human figures in Rembrandt's paintings obviate their realistic character, while Warhol's soup cans violate the fragile boundary between real and ideal. Spurious at best, such judgment reflects a rearguard modernism, not halakhic principle. Kandinsky is far less dogmatic. "Approaching it in one way," he writes, "I see no essential difference between a line one calls 'abstract' and a fish. But an essential likeness." Line and fish are "living beings," each with latent capacities. A "miracle," these capacities are made manifest and radiant by the environment of their composition. And this despite the equally essential difference that a fish "can swim, eat, and be eaten."
For all that he turned to modernist abstract art, the philosopher was unable, as the artist was able, to negotiate the difference between the abstract or ideal and the real, between a picture of a can of soup and an actual can of soup. In this, Schwarzschild refused to distinguish between spiritual and physical need. Unable to tolerate the distortions of ethical life, the pursuit of the ideal cannot accommodate itself sympathetically to human weakness. The best one could say is that the thinking at work in these essays is hyperbolic, unreal like abstract art. But even Kandinsky's art touched in some way upon the world. Perhaps Schwarzschild's thought was also touched by the world, or rather burnt by the real. By his own admission, this form of thought does not belong to this world. The pursuit can only end by falling in on itself, instantiating the problem with moral radicalism in the first place.
orunife-blog · 7 years
My Take on The Natural Law vs Positivism Debate
Several schools of thought have arisen, and many arguments and debates have been raging, concerning this not-so-simple question. To really understand the philosophy of law, one needs a deep and detailed understanding of the legal process. One also needs a tendency towards philosophical thinking to truly engage in any intellectual and meaningful debate on the complexity of law. Of the many schools of thought on this topic, the most notable are the natural law scholars and the positivists. These two camps could not be more different in their philosophies and views concerning how the law should function and what role it should play in the larger society. https://www.lawteacher.net/free-law-essays/common-law/natural-law-theory-legal-positivism-law-essays.php
The distinction between natural law and positivism exists only in a broad sense, because even within each school of thought there are more liberal and more conservative factions. A very broad spectrum of academic thought and difference exists within both natural law and positivist theory. However, some general underlying principles make it easier for academics and philosophers to define where they stand in their writing.
Some form of moral or spiritual principle has long been understood to undergird the law. Moral authority is very important in legislating, so that a law seen as immoral cannot be legislated. Many changes and modifications have had to be made to natural law theory in order to adapt it to the modern world, because heavy criticism arose against the theory when anarchy and disorder were being permitted and justified in its name. However, natural law has helped a great deal in dealing with war criminals and former dictators who had committed crimes against humanity with impunity.
Positivists have leveled the strongest criticism at natural law theory, arguing that the law should not be influenced or affected by morality. In their view, the law should instead serve as a determinant or source of morality. They claim that because morality is not objective, the law should make no concessions to thoughts or conduct that can be seen as extra-legal. However, many argue that positivism leaves a lot of room for extreme behavior and unjust conduct. Positivism also fails to consider the restrictions of language and context in legal enactments, implying that positivist law can be interpreted in different ways based on differing opinions on fine points such as the context of language use. In spite of all this, positivism continues to garner acclaim and popularity in the world of academia and legal philosophy, and it remains one of the most basic legal theories in history.
The argument between natural law and positivist theorists continues to rage as the role of law and its function in society are continually reconsidered. Criticisms and counter-criticisms are ongoing, and theoretical principles are being layered upon one another to create a detailed and extensive body of knowledge on these subjects. Both camps have won a great deal of credence and respect, and a new generation of natural law and positivist theorists can be expected to emerge before long.
sawomensmedia · 7 years
IAT206 draft
As the western world grows ever more progressive, advocating for feminism and equality for all, some eastern countries still struggle to break out of norms encapsulated by tradition and culture, leading to the oppression of groups within a country. Saudi Arabia is one country whose politics are so tied to its beliefs that a violation of one is seen as being as bad as breaking the law. For women in the country, these moral codes of conduct restrict their freedom as they strive to survive in a patriarchal society. While foreigners may see its laws as absurd, for those who have been abiding by these laws their whole lives they are viewed as quite normal; yet because of these laws women are "further denied autonomy by discriminatory 'personal status laws'" (Sakr, 2002, pg. 822). Though women have spoken out against these discriminatory rulings in the past, many are unfortunately dismissed. Platforms for discussion exist only sparingly, as even the simple act of a woman speaking about women's rights in public is gravely frowned upon (Sakr, 2011, pg. 1). But now, with access to the internet, cultural struggles and the need for change can reach an international audience through different forms of media. Despite this opportunity, due to laws and moral beliefs, much of the media that comes out of the country still excludes women, leading to a prevalent lack of representation of women's rights within the country. The only way women were freely represented was through acts of protest, most recently against an ongoing ban on women's driving. This was the case until late December of 2016, when a music video was posted to YouTube, one which, unlike anything else that had come out of Saudi Arabia, featured primarily women, acting in ways no Saudi Arabian government official would have allowed. The video itself, though fun and playful, held a heavy message laden with signs and images in support of Saudi Arabian women's freedom.
Through semiotic analysis, the use of images can bring to light the political message that is often hidden away and the narrative it plays in an ever-changing world. In this sense, the music video has become a powerful medium for empowering women, allowing change to occur within the country and in the minds of those who view the content. This music video goes far beyond simple entertainment and exists as a strong political statement, and with the support of notable figures it has proven effective. 8ies Studios, the creators of the music video, have crossed a boundary no one in the country had dared to, allowing civil matters to be tackled before international eyes. The influence and emergence of women as empowering figures in Saudi Arabian music videos create an outlet for political statements, enabling women to challenge rampant social issues, progress beyond gender restrictions, and expose political biases.
The case of representation is relevant to any movement towards something progressive, articulating the need for equality among members of society. Within Saudi Arabia, the problem lies not in the misrepresentation of women but in the complete lack of it. Due to laws restricting the freedoms of women in the nation, they have little to no power to express their right to equality. What Arab women experience can be described as a "double jeopardy," because not only are they subject to "the widespread restrictions on civic and political participation," but they are further denied autonomy by discriminatory "personal status laws" (Sakr, 2002, pg. 2). Specifically, laws like the ban on women driving, as well as the prohibition on going outside independently without a male chaperone, restrict what little freedom they possess. Currently, a majority of music videos that come out of Saudi Arabia show exclusively men and contain no women. Though some can be found featuring girls, adult women seem to be absent from all but one video.
The music video for "Lw Alai" by Hamad Al Qatta, directed by Ryan Zahran, is one example of this exclusive use of men. The video starts off with two men talking amongst themselves, both in traditional Saudi Arabian wear, a red and white keffiyeh and a black agal on their heads, dressed in thawbs, standing in front of what seems to be a temple. Entering the building, the scene cuts to a room full of men sitting in a circle with a patterned carpet in the middle. As different shots are taken around the room, the singer, Hamad Al Qatta, starts the song off playing it on an oud, a traditional Saudi Arabian instrument. The men surrounding him, some holding prayer beads, listen intently, almost as if partaking in a religious ritual. Quickly, however, the scene shifts from static to a much more dynamic tone; more expression is articulated on the singer's face as the men around him dance. Later the setting changes: rather than a religious building, it becomes a concert hall where everyone is dressed in much more modern clothing. Even then, however, women are absent from the video. Throughout the video there is a visual transition from traditional setting and attire to ones that are much more modern and widely understood. While it mimics a passage of time for the men of the country, showcasing traditional values as well as modern ones, it completely excludes the representation of women within the country.
[Two images from the Women2Drive campaign]
With this extreme exclusion of women from mainstream media, the only form in which they are represented is in images of protest, most notably the protest against the ban on women driving. The images above are two taken from the Women2Drive campaign. Some participants, though still wearing a veil, remained topless throughout the protest. However, the question remains: why do women care so much about driving? For a state where "the ideology undergirding the monarchy defines the legitimate ruler as one who will enforce standards of Islamic conduct upon the individual for the good of the community as a whole" (Doumato, 2016), the ban on driving remains a way to preserve the traditional roles of women. Nonetheless, in progressive times, "tensions have grown over how society wishes to define the standards for these Islamic and social values" (Doumato, 2016). Women2Drive stands to fight against this ban on driving, opening the doorway for progressive action against a culture that limits the roles women play in social settings. Another image produced by Women2Drive is based on the American wartime propaganda poster encouraging women to join the working class during World War II. In this reworking of the original, the woman is dressed in a niqab revealing only the eyes, and the words "We can drive" are added beneath the original message of "We can do it." Women2Drive takes this American feminist message and rewrites it to fit a Saudi Arabian audience. Though these images serve well as mediums for political messages, the seriousness of the situation can often be dismissed; the playfulness of music videos, by contrast, can hint at that seriousness, allowing the message to be received through the façade placed by the visuals.
In December 2016, a new music video named "Hwages" by Majedalesa, directed by Majed al-Esa, was posted on YouTube and gained international attention. For the first time, a music video from Saudi Arabia featured not only women, but women acting freely outside without any visible supervision from men. Since its release, the video has been "viewed more than 2 million times and has become the subject of widespread debate in Saudi Arabia — as well as considerable celebration" (Taylor, 2017). But why has it become such a viral phenomenon, and what makes it so controversial? The video starts off as six women enter a van, all sitting in the back while a child seats himself in the driver's seat. From this first scene, the protest against the ban on women driving is already set. Keeping to cultural ideals, the scene technically does not break any traditional laws, since it keeps women away from the wheel. Nonetheless, it presents the ridiculous image of a child driving rather than the adults in the car. Fighting the cultural ban on women driving, it makes its audience question whether the ban is reasonable, as it "emphasizes the hypocrisy of disapproving Saudi men, who can drive freely and travel abroad as they wish" (Taylor, 2017). In another scene, the women, though still dressed in their full niqabs, wear bright colorful shoes and dresses while they skateboard and ride scooters. The scene cuts to two males looking on and waving their fingers in disapproval. This imagery again breaks the norms of how traditional Saudi Arabian women are seen. Free from their male guardians, they act freely and happily on their own, an act seen as disgusting and even offensive by some Saudi citizens. One tweet by Saudi citizen Hassan al-Ghamdi announced, "The director offends the Muslim women in our country. Where are our preachers to deny this?"

Perhaps the scene that speaks most directly against the patriarchal ruling of the country is one in which the women are bowling in a carnival setting. For a very short moment, the pins are shown to have pictures of men on them before the video cuts to a long shot of a woman knocking the pins down. Though it is unclear who the men in the pictures are, the message still exists, using this metaphor of knocking down pins to highlight the need for change towards more equal treatment of males and females within the country. The video is also laden with references to American politics, namely the frequent use of a caricature of recently elected president Donald Trump. As the title "Hwages" roughly translates to "concerns," the video seems to speak not only to the citizens of its own country but also to an American audience. Moya Crocket, a writer for The Stylist, poses the question in response: "Can you stand in moral judgement of other countries' treatment of women, when you elected this man as president?" (2017). Aside from the colourful, playful visuals, the lyrics also speak very strongly against Saudi Arabia's oppression of women. Roughly translated as "May men go extinct, they have caused us psychiatric diseases," the lyrics are a reference to a Saudi Arabian protest song released in 2014. While on their own the aggressive lyrics may come off as harsh, paired with the visuals of the song they become almost satirical, while still relaying the message that progressive change needs to take place in Saudi Arabia.
The group behind the video is 8ies Studios, a Saudi Arabia-based media company specializing in photography, video, and audio production. While the use of women in a music video is new to both the company and Saudi Arabian media as a whole, they are no strangers to the production of highly viral videos. In late December of 2015, they released a video called "Barbs" by Majedalesa, which has now gained over 40 million views. It became such a viral sensation that it "launched a huge dance craze in the Arab world, leading to the arrest of two in Abu Dhabi who uploaded a video of their own that showed them dancing in military uniform" (Taylor, 2017). Though "Barbs" has no political message to communicate, it exemplifies the studio's ability to capture a large audience and its understanding of how to create a viral sensation. 8ies Studios acts unafraid of government ruling and has created a platform where cultural problems seen as controversial can be freely communicated. Even their website's mission statement claims, "We believe that there are no limits to anything you can do, and no border for any dreams to come true. We build relations and create attitudes. We make an effort and work on another. We look for change and hope to gain." While no official comment has been made by Majed al-Esa, the director of the video, it is clear that a political message is being told, allowing the voices of women in Saudi Arabia to be heard and their struggles against their culture to be expressed.
"Hwages" has taken off not only as an international sensation in its depiction of Saudi Arabian women but, more importantly, has gained political traction, sparking controversial debate. Its controversial ideals have caused many to react disapprovingly, and in response "hardline Islamists have taken to social media to rant against the pop song, decrying it as 'cheap', 'inappropriate' and 'disgusting'" (JPOST.COM STAFF, 2017). For these fundamentalists, the music video goes against many of their moral and cultural laws of how women should act, existing as a blatant perversion of their beliefs. But despite the backlash it has received, the music video has gained an overwhelming amount of positive feedback, as well as support for its protest against women's oppression, not only from ordinary Saudi citizens but from powerful political figures as well. Not long after its release, "Amera al-Taweel, the 33-year-old ex-wife of prominent Saudi prince Al-Waleed bin Talal, shared the video to her twitter" (Taylor, 2017), and "one of the oldest newspapers in the country, Al-Bilad, offered its own praise for the video, noting that 'the new generation of women is different from the past'" (Taylor, 2017). Whether good or bad, the music video has allowed conversation to take place, as its circulation throughout the internet creates "active participation in policy debates and articulation of new perspectives" (Sakr, 2002). As a medium, the video has gone beyond its purpose of driving entertainment towards its viewers to make a political statement as well. Nonetheless, the vast amount of support the video has gained shows that, for Saudi Arabia, an era of cultural change may be approaching.
"Hwages" has paved a new way for media, and specifically music videos, to represent women and their need to be heard. Existing as a driving force behind progressive movement in a country where conservative beliefs and mindsets are central to its politics, this music video has allowed a new perspective to be freely discussed. For Saudi Arabia, this could very possibly lead towards a time of change for the women of the country. Representation within the video has allowed women to be empowered as figures who exist as individuals rather than as second-class citizens. Seen on such a public scale, acts formerly seen as taboo may lose their grip on women, allowing them to act freely. For women to be taken seriously in this regard, it is obvious that representation matters within the media form used to express their freedom. And though images of real-life protest and outrage can instill awareness, music videos can reach a much larger audience through their ability to communicate a message through semiotics. "Hwages" successfully infiltrates Saudi Arabian media, creating an uprising of voices to be heard. If more music videos were to follow 8ies Studios' example, changes would no doubt come to Saudi Arabia, leading to both political and cultural shifts allowing both men and women to live freely. Already its influence has travelled overseas: in India, a music video titled "Alpha Female," directed by Sasha Rainbow, takes heavy influence from "Hwages" to spread the message of female empowerment. Masked in colourful visuals and playful tones, this political act stands above many others, fostering a mindset that enlightens its international audience while gathering support within the country for human rights. As a music video it not only entertains but radically challenges the country and its government to think differently.
Nonetheless, the issue of allowing women to drive is only a gateway into the world of restrictions Saudi Arabian women face. In the end, the issue goes beyond allowing women to drive; it is about allowing them the freedom to act independently as individuals, free from a patriarchal society. Change is starting to occur within Saudi Arabia, and with considerable national and international support behind the video's positive message, this feminist movement will only be propelled forward. Perhaps in the near future, this eastern society will gain as much freedom as those in the west.
References:
Khalife, L. (2017, February 10). A Saudi feminist music video inspired an Indian version. Retrieved March 06, 2017, from http://stepfeed.com/a-saudi-feminist-music-video-inspired-an-indian-version-6123
Kraidy, M. M. (2012). Contention and Circulation in the Digital Middle East: Music Video As Catalyst. Television & New Media, 14(4), 271-285. doi:10.1177/1527476412463450
Sakr, N. (2004). Women and media in the Middle East: power through self-expression. London: I.B. Tauris.
Sakr, N. (2002). The Status of Women in the Developing World. Volume 69, Number 3 (Fall 2002), pp. 821-850. Retrieved January 24, 2017, from https://epay.newschool.edu/C21120_ustores/web/product_detail.jsp?PRODUCTID=5228
Taylor, A. (2017, January 03). A music video featuring skateboarding women has Saudi Arabia entranced. Retrieved March 06, 2017, from https://www.washingtonpost.com/news/worldviews/wp/2017/01/03/a-music-video-featuring-skateboarding-women-has-saudi-arabia-entranced/?utm_term=.4cbb5fe44cb3
JPOST.COM STAFF. (2017). WATCH: 'God, rid us of men' implores viral Saudi feminist song. Retrieved March 06, 2017, from http://www.jpost.com/Middle-East/WATCH-God-rid-us-of-men-implores-viral-Saudi-feminist-song-477445