Prophets of Doom: I’m feeling lucky!
RUDIMENTARY creatures of blood and flesh. You touch my mind, fumbling in ignorance, incapable of understanding. […] Organic life is nothing but a genetic mutation, an accident. Your lives are measured in years and decades. You wither and die. We are eternal. The pinnacle of evolution and existence. Before us, you are nothing. Your extinction is inevitable. We are the end of everything. […] We impose order on the chaos of organic evolution. You exist because we allow it. And you will end because we demand it.
— Sovereign, Mass Effect.
Let us all bow down before our new machine overlords. Even if they’re still a mere foetus, it might be good to begin practising our kowtowing early; to make a good impression, you know? After all, this is a matter of life and death on a planetary scale. We flick that switch, and it’s all over. We will be devoured by the flash of the Singularity; humanity’s bright flame suffocated by a much larger fire we set ourselves. The drama alone makes this narrative appealing; it has a poetic resonance harkening back generations: the creation undoing its creator. Despite this appealing dramaturgy, it is a narrative that has received its fair share of criticism and backlash. I have certainly critiqued it, and will likely continue doing so. However, misunderstand me correctly – as we say in Swedish – the risks associated with AI development, as with any powerful technology, are worth taking seriously. One must simply make sure to worry about the right things. In other words, it’s not that these ‘doomers’ are concerned about emergent AI tech that rubs me the wrong way. It is what they are worried about; how they understand the potential risks.
I read a tweet by Yann LeCun – head of AI research at Meta – that, in many ways (and likely more ways than he intended), sums up the current climate in the ChatGPT debate:
AI doomism is quickly becoming indistinguishable from an apocalyptic religion. Complete with prophecies of imminent fire and brimstone caused by an omnipotent entity that doesn’t actually exist.
It might sound harsh – but it’s fair. I’ll allow it. As one of these galaxy-brained prophets of doom concluded, if push comes to shove, we must be ready to nuke anyone who continues with unsupervised AI development. This might sound like an extreme conclusion, but really, it is the best way forward: at least humanity, as a species, has a chance to survive a nuclear holocaust; we shan’t be so lucky when the Terminators come to… erhm, terminate us. This AI doomsday narrative has been around for a while. It is by no means anything new, as evidenced by the many voices now speaking up, in tandem, arguing that this needs to be stopped. It’s not a conspiracy of any kind, but rather a narrative and discussion that has been going on for decades in tech communities; and frankly, it’s a conversation that had no need to leave those particular circles.
However, I don’t intend to let the other side of the aisle off so easily, either. In the tweet above, these ‘doomers’ are likened to an apocalyptic religion. Setting aside the ‘soft’ optimists – i.e. those who simply think something like AI isn’t that big of an issue, which is a supremely naïve position to take – I want to turn to the more active proponents of AI. These are the folks who not only think that the suggestion of a six-month moratorium is tantamount to anathema, but who typically wish to accelerate development. Why? Because it will lead us to the land of milk and honey, of course. AI will be a potent tool. Think of the problems it can solve! No more resource scarcity; no more disease; no more death; humanity can finally take its place in the cosmos.
There are of course people out there who espouse completely reasonable beliefs on artificial intelligence. Still, I think it is fair to say that these are not the folks who take up any space in T H E  D I S C O U R S E. I do not mean to implicate Professor LeCun as some techno-utopian here – I am not familiar enough with his positions to say. Instead, I wanted to foreground his tweet because he’s 100% correct, and 50% wrong. The whole discourse surrounding artificial intelligence is steeped in this kind of ideological framing – it’s the tool to end all tools, or it’s the tool to end us tools. Many parallels have been drawn between the development of nuclear weapons and where we currently are with AI, typically something to the effect of ‘it’s like nuclear weapons on steroids’. However, few folks follow this particular analogy through.
So, let’s do that for a moment.
Nukes undoubtedly added a whole slew of risks for the planet as a whole, not just humanity. We’ve all heard the old cliché that ‘humanity, for the first time, has the ability to destroy itself’, usually in conjunction with Oppenheimer quoting the Bhagavad Gita, “Now I am become death, the destroyer of worlds”. It’s a well-known narrative, often placed front and centre when telling ‘The Story of Humanity: 1945–1991’. And it tracks pretty well with the folks so profoundly concerned about AI’s ability to ctrl+z humanity as a whole. Cause for concern, no? What is often forgotten, however, is the buck-wild idealism and optimism that harnessing the atom generated. “Soon!” the quaint 1950s radio voice heralded, “You, too, will have an atomically powered home, and atomic appliances, and a car powered by your very own reactor!” Looking back at it, it sounds completely unhinged, but the era heralded by humanity seemingly mastering the atom (and ‘seemingly’ is doing a lot of work in that sentence) made anything seem possible! Science would lead us to a new world, and a more decent world, where folks had the chance to work; where youth had a future, and everybody owned a uranium-powered blender. In short: Atomic Utopia.
Only this was not to be, as all of us without a plutonium-powered shaver or hair-dryer know. The atomic optimism had far overshot its mark: mainly because we knew oh-so-little about the atom – far, far less, it turns out, than we realised. If ‘AI is like nukes on steroids’, what does this say about the more optimistic, and even utopian, narratives? If atomic optimism overshot its mark, is AI optimism halfway to the moon by now? One would be tempted to conclude that this might be the case, on balance: AI won’t build us the land of hopes and dreams, but it will nonetheless pose a significant threat. So the so-called ‘doomers’ are correct? Well, not really. The nuclear analogy isn’t as apt as it appears at first glance.
Firstly, harnessing the atom is harnessing a literal force of nature; building an AI is attempting to create a machine that thinks with us; for us; alongside us. In both these ways, AI and nukes are extremely different, and the analogy begins to break down on some fundamental level. For example – and without wishing to lean into AI hype – an artificial intelligence, should one truly be created, could do much more than splitting the atom ever could. Don’t get me wrong, the idea of near-endless energy is a tempting one, and would be a real game-changer, but even then the act of atom-splitting would be far less ubiquitous. Similarly, the processes themselves are very different: triggering an atomic chain reaction vs. making an algorithm self-aware (if that’s even what it takes? Or, indeed, if that is even possible?). The former had been theorised and calculated ahead of its practical implementation; the latter is still shrouded in diffuse philosophy of mind. The two become absurd to compare. Secondly, I think it is safe to say that AI will be far more ubiquitous than nuclear reactors, or weapons, ever could be. It is a much more flexible tool, and its myriad potential areas of application change how we will relate to what it can and can’t do, or what it should or shouldn’t do. None of the above even touches on the context in which either of these technologies was created: who owns them, who controls them, and so on.
In other words, it is time to abandon clunky analogies that seem appropriate at first glance. It is also time to leave behind entrenched ideological positions where the only outcomes that seem to ‘matter’ are whether AI will turn out to be lawful good or chaotic evil. Viewing emergent technologies through such a binary lens completely absolves those responsible for whatever happens next of any responsibility, shifting it instead onto the AI as the primary actor. This is, of course, not how things stand. It recontextualises technological development as some uncontrollable force of nature; a physical force akin to gravity, electromagnetism, or the weak and strong nuclear forces. From this perspective, there is not much we can do beyond building enough shelters to weather these cosmic winds of ‘progress’; or, perhaps worse yet, ushering in our new-fangled God no matter who stands in the way. I have repeatedly noted this framing of technological advancement as a natural or physical phenomenon in my research, and it is particularly present in this whole debate: AI’s existence is treated as binary – it will either come to be or it won’t – and whatever happens after that is simply out of ‘our’ ‘collective’ hands.
Focusing on the worst-case/best-case outcomes is a clear giveaway, especially when the timelines lie in some far-flung, often unspecified, future. It is worth noting that the narrative behind the drastic solutions mentioned above for curtailing AI’s development skips over all the time between the “now” and the “at some point in the future”. This gives the impression that we flip the switch and are instantly Thanos’d, creating a sense of urgency reminiscent of a nuclear flash of light, whilst wholly absolving anyone of responsibility for what took place between the initial switch and the proverbial flash. Not only does it let those who ought to face the music off the hook, but it also robs everyone else of agency.
And, of course, if you’re on the most optimistic side of the spectrum, all of the above is a non-issue!
Understanding that technology is more often than not an inherently emergent phenomenon – that is to say, that it doesn’t exist as a stable category, but is reiterated and changed and developed even as it exists – is critical to understanding how best to proceed. To illustrate with an example: humans have had hammers for a long time. However, a stone age hammer, a bronze age hammer, and a contemporary hammer all look very different. The materials change, the shape of the head changes, and so on. Even uses change and evolve: from hammers as tools, to war hammers. Over time, ‘hammer’ is not a stable category. The same goes for effectively any technology. Thus, by looking at AI solely teleologically – from the perspective of its end – you miss the vast gap of everything that happens in between. Especially when the telos is at some far-flung point in the future.
And it is within this gap that the vulnerable fall.
Slavoj Žižek once said that,
‘If there is no God, then everything is permitted’. [… T]his statement is simply wrong. […] It is precisely if there is a God that everything is permitted to those […] who perceive themselves as instruments […] of the Divine Will. If you posit or perceive or legitimise yourself as a direct instrument of Divine Will, […] petty moral considerations disappear. How can you even think in such narrow terms when you are a direct instrument of God? [1]
This quote has really stood out to me throughout my research, mainly because I have observed that this sentiment is a well-established and oft-recurring phenomenon. The debate around AI – what it can and can’t do; what it might change; and perhaps most importantly, who it will affect and how – is too important a discussion to be dominated by the most extreme ideological positions; because that is what these are, whether you believe in AI-salvation or advocate the use of WMDs. These positions have previously been called longtermist, due to their focus on the extreme long term. This is a discussion I shan’t get into here, however.
Instead, I want to attempt to re-focus the conversation on the sort of questions that we must, truly, be asking ourselves when faced with powerful emergent technologies. As I’ve mentioned several times in this text, I support a moratorium on AI (sans nukes, however…). Still, such a moratorium is only effective if used appropriately. I have alluded, briefly, above to some critical questions, such as thinking about who this impacts – in practical terms – and what can be done to mitigate adverse impacts on such groups and communities, whilst also using the newfound potential of these technologies to bring about a more equitable and just society. These are essential things, truly, and their importance cannot be overstated (though it is often dismissed; or worse: forgotten).
I think that what is missing most in any discussion surrounding emerging technologies, especially potentially powerful technologies like AI (whether strong or weak), is a discussion around the macro perspective: what is the actual value to society of creating these technologies? It appears to be taken for granted today that these innovations exist to disrupt markets; disrupt society; disrupt the status quo. Disruption is the name of the game, and has been for a long time – though the more on-the-nose language has been dialled back, especially by larger corporations as they face more public scrutiny. None of this means, however, that the underlying attitudes have changed. STS scholar Fred Turner, I think, summarises it well when he says,
I think if you imagine yourself as a disruptor, then you don’t have to imagine yourself as a responsible builder. You don’t have to imagine yourself as a member; as a citizen; as an equal. You’re the disruptor: your job is to disrupt and do whatever it takes to disrupt. Your job is not to help build the state; [to] help build the community; [to] build the infrastructure that made your life possible in the first place. […] I think it’s a matter of how you imagine yourself, and that this is where disruption as an ideology is such a problem. It makes it very difficult to build continuity and community, and […] any egalitarian kind of society. [2]
As we can see with the latest debate around ChatGPT and the subsequent AI-hypefest, governing bodies are playing catch-up. It’s technological whack-a-mole: innovators ‘disrupt’, and legislators reactively clean up any mess left behind.
Bringing the discussion of AI risk into focus – and by extension, highlighting the core societal questions that ought to be asked more often when it comes to technological development – is a good thing, overall. But as with much else pertaining to technological development, much potential is lost through the exclusionary nature of the debate itself: navel-gazing conversations about human extinction at some undisclosed point in the future at the hands of totally-not-Skynet, whilst people are suffering right now, and many more are at risk of suffering, should these technologies be rolled out without a care beyond disruption. It may sound hyperbolic, but this is not the first time we have seen catastrophic short-term effects from short-sighted tech innovation rolled out under the banner of disruption in the name of techno-optimism. Indeed, if the past decade has shown us anything, it’s that one does not have to wait for the far-flung future to risk the existence of a whole people: the Rohingya know this well enough already. A moratorium on AI development is probably a good idea. Still, it’s equally critical that the time isn’t spent discussing nonsensical thought experiments like Roko’s Basilisk or the ‘paperclip problem’. As it stands, the public debate is handled by prophets, priests and acolytes, who are most likely LARPing a schism over techno-orthodoxy; the underlying assumptions remain shared among them. Critically, these same folks view these developments in purely teleological terms. On our road to their vision of techno-enlightenment, some of us mere mortals will have to die, but this is a sacrifice they are willing to make. As long as our AI overlords don’t snuff us out when we finally get there. After all, we are but rudimentary creatures of blood and flesh.
[1] Slavoj Žižek, in “The Pervert’s Guide to Ideology”, dir. Sophie Fiennes.
[2] Fred Turner, in “Bay Area Disrupted” by Andreas Bick. See here: https://vimeo.com/110557774
Technological inevitability.
One of the great platitudes which are very popular today, when we are confronted with acts of violence [is …]: “If there is no God, then everything is permitted”. [… T]his statement is simply wrong.
Even a brief look at our predicament today clearly tells us this. It is precisely if there is a God that everything is permitted to those […] who perceive themselves as instruments […] of the Divine Will.
If you posit or perceive or legitimise yourself as a direct instrument of Divine Will, […] petty moral considerations disappear. How can you even think in such narrow terms when you are a direct instrument of God?
Slavoj Žižek
The Illuminati. Globe-trotting action. Robot arms! Honestly, Deus Ex: Human Revolution has everything you’d want from a cyberpunk video game: from robust role-playing elements that really flesh out the world, to a compelling (if totally absurd) narrative. And did I mention the robot arms?? It’s no wonder that it’s considered a classic in the genre and certainly an excellent piece of cyberpunk media. It has been recommended by many of the folks I work with and research alongside here in Sweden as a solid sci-fi title — a game that strongly plays into the fantasy of advanced human augmentation technologies, and robot arms! — while also being cited as a source of inspiration. This, simultaneously, makes perfect sense, and makes no sense at all. Going by how it is discussed, the game presents a positive view of the future: as something cool and remarkable, something worth striving for. Thematically, however, it turns out the game is far, far, far more critical than this. Corporate espionage and global conspiracies set into motion with the end goal of controlling the world population; the New World Order. Rising racism and social tensions between the augmented and the ‘natural’ humans. Lies within lies. It’s a cyberpunk game, after all. Cyberpunk! A genre that, by definition, requires everything to be awful. How can such criticisms, whether structural critiques or critiques of tech-use, go over the heads of so many people? Indeed, a friend recently told me that, as much as he loved Deus Ex, its main downside was that it is “kinda anti-transhumanist” in its messaging.
As trope-ridden as Deus Ex is — a near-requirement for cyberpunk works, it seems — it carefully navigates tough topics to truly highlight the issues, dangers, problems, and pitfalls of the potential technologies of the future. And it is key to highlight here that the game, like much cyberpunk, isn’t so much against technologies or a more technological future, but rather attempts — to varying degrees of success — to offer a structural critique of technology and society. In this sense, it is not so much anti-transhumanist as it is anti-large-corporations-using-technology-as-a-means-of-consolidating-power-and-resources-for-their-own-gain-ism, though this, of course, is far less snappy. As captured in the epithet “high tech, low life”, cyberpunk’s central thesis is that technology is an extension of society and its present social systems, relations, and schema. If such systems remain unjust, riddled with inequities, and corrupt, then no matter how fantastic(al) any technological developments may be, they will invariably end up making the world worse. The hammer that can help build a house can just as easily bash your head open. Portraying this tension is something good cyberpunk ought to do well — being the central thesis and all — and Deus Ex cannot be faulted on this front. Perhaps, if anything, at times it is a bit too heavy-handed in “““implying””” the aforementioned thematic elements. “Why did you save the innocent hostages, Jensen?? Saving my corporate secrets is far more important than their peon-lives!” your totally-not-corrupt-or-evil-CEO-boss booms across your cyber-cochlear implants. Subtle? As subtle as a boomer meme.
Even then, the picture is not all too skewed, either. The technologies are impressive, and important developments in many ways. They save lives; they change otherwise difficult lives. ROBOT ARMS! Granted, the negative aspects of these technologies, hands down (no pun intended), receive far more screen time, but even then, the evils that are shown are not shown to stem from the technologies themselves; rather, the emergence of such fantastical technologies has led to an increased articulation of the inequities and injustices already present in the neoliberal societies that we know and… erhm, ‘love’. The genre is often said to be one that criticises the present from the perspective of the future, and while it cannot be accused, by and large, of not criticising enough, perhaps its greatest weakness remains a general failure to present meaningful alternatives, let alone coherent ones.
Imperfect but clear: that’s how I would summarise my experience with Deus Ex: Human Revolution, and indeed much cyberpunk media. With this in mind, how could it meaningfully act as an inspiration among the typically entrepreneurial, oft-libertarian-leaning techno-enthusiasts I work with? Sure, sure, seeing past the technologies themselves as the evils unleashed upon society is to be expected. After all, technology is a mere tool, and the tools in question heighten the problems already present. However, this fails to account for the fact that the majority of these individuals believe that enterprise and market economics is not merely one way to usher in a technologically enlightened future, but the best way of doing so. If cyberpunk media like Deus Ex posits, albeit implicitly, that capitalism generally and neoliberalism more specifically is the root of the dystopian futures they present, how can the same neoliberal logics be repurposed to produce the opposite effect?
Some of it can be put down to misreading the works in question, accidentally or wilfully; or even to hubris, that human trait much sci-fi likes to invoke: “When I do things, it will be different. I will do it right!” one may declare; “it had to be me; someone else might have gotten it wrong”, to mix science fiction works. Even then, I suspect these answers are far too simplistic. In essence, the folks I work with aren’t so stupid, naïve, or filled with hubris, and it would be a disservice to assume such a surface-level reading is, at the core of things, correct. Instead, I propose we shift the perspective on things. Consider this: what if the technological development in question isn’t the goal – what if it’s inevitable?
Among scholars working with temporality, the past, present, and future are often considered multiplicitous. That is to say, it is less accurate to speak of the past, or the present, than of pasts, presents, and futures. Each present moment is experienced differently, from different vantage points, and with different perspectives and understandings of the past, the collected information from which is extrapolated into different futures. Equally, the future may hold some new information — a new historical discovery, an archaeological find, or philosophical insight — that sheds new light on the past, further shifting the understanding of the present and its extrapolation into the future. Even for each individual, the past-present-future dynamic is not set in stone. Both on a broad societal scale and on the individual level, temporality remains one of multiplicities.
What, then, happens when we throw notions of the inevitable into this already fluid mix? It creates perceived anchor-points in history. In essence, progress, as it were, is understood more as a Civilisation V technology tree than as many potential paths to tread. All societies begin with the wheel, then move on to animal husbandry, or bows, or sailing, and inevitably end with ICBMs, space colonies, and robot arms (!). This is not how things work in practice. Technology and innovation are far from a linear tree where progress is so nicely demarcated, and where the direction runs merely from left (less complex, more ‘primitive’ technologies) to right (highly complex, highly technical). More accurately, innovation, development, and progress are the results of environmental, geographical, social, cultural, and a whole slew of other factors: live in the mountains without any really accessible roads? Then the wheel will be far less useful to you. No animals around to husband? Guess you’re stuck tilling the fields yourself. An ideology of technological inevitability, on the other hand, means that all roads lead to Rome; there may be many paths to tread, but them ROBOT ARMS (!) will be, whichever path we take. What follows is an argumentum ad fatum; an appeal to destiny; the notion that all paths are laid out, and there is nothing anyone can do about anything — let’s just sit back and enjoy the ride.
This appeal to destiny invariably takes on a transcendental, even divine, character: that of an ever-moving, entirely unstoppable, otherworldly force, pushing us forward towards progress, whatever that may be. Like the storm winds carrying Walter Benjamin’s Angel of History towards the unseen future, the inevitability of technological progress won’t stop for anyone or anything, and it sure as hell won’t stop for any silly moral considerations. No matter the outcome in practice, these technologies will be, so there’s no point in worrying about it. This, like religious or spiritual prophecy, is an ideological stance, coated in terms of rationality: statistically speaking, it is bound to happen; given enough time, anything and everything must happen. Who are you to question the Divine Plan?
This betrays the first clearly visible ideological marker of any such technological prophet: timescale. Timescale is a funny thing like that. It can be used to justify near everything and anything. It gets at the question of what responsibilities we who exist have towards those who do not yet. Depending on the timescale chosen, near anything can be justified: if we do not do an evil now, another, even bigger evil will happen in the far future! This approach, longtermism, has been brilliantly deconstructed by Phil Torres, so I won’t dwell too much on it. Taken together with the inevitability of progress, the evils in the present or near future are merely inevitable evils that we cannot do much about anyway – evils which, besides, will justify themselves in the future once destiny is fulfilled. To paraphrase the opening quote, if you view yourself as being caught up in an inevitable change, “petty” moral considerations do not only disappear, they become veritably absurd to consider. What will happen is inevitable either way. What is there to do about it?
“Some of you will die, but that’s a sacrifice I’m willing to make”, as the meme goes: sucks to be you if you end up on the receiving end of the future cyberpunk hellscape. You can rest easy in the knowledge that in the future future, people will have it grand, though. It’s a relaxing thought. Ironically, today’s proponents of such technologies – inspired by the warnings, yet ignoring the genuine risks they present and the ethical questions they raise – are also involved in developing these very technologies: a pretty solid way to achieve the status that allows you to reap the benefits of this future; to soar high above the clouds, rather than squat in the dank neon-soaked streets many thousand floors below. The irony here, of course, is that this serves to reproduce the very material conditions that lead to the aforementioned hellscape. Here, perhaps even more ironically, the major shortcoming of cyberpunk media rears its head once again: the critique is good and fair, but it never presented an alternative. The present state of things is what we get until we reach our inevitable transcendence, the much-hoped-for Singularity: and the faster we proverbially move towards it, the less time people will even have to suffer. Material improvements in the present are moot when future transcendence is inevitable. Just suck it up and trust the CEO when he says that it is better for the world if you save his company secrets at the cost of the hostages’ lives.
It’s not like he has anything to gain from the situation…
The Spirit of Gift-Giving.
 “A fine Māori proverb runs:
    ‘Ko maru kai atu
    Ko maru kai mai,
    Ka ngohe ngohe.’
Give as much as you receive and all is for the best.”
— Marcel Mauss, The Gift, 1966 [1925]
I have spent the last couple of days running around like a headless chicken attempting to find a present for my brother and the lady in his life. Now, this is the first time I am meeting her, and I have not seen my brother in two years – and even in a life marred by pandemics and lockdowns, much can happen; much can and has changed, and that realisation did not bring me any closer to figuring out what it is I would like to give them. In a sense, this is unfortunate, as it risks reducing gift-giving and gift exchange to an exercise in frustration, expenditure, and ultimately consumer capitalism. It is a common trope that the gifting bonanza that Christmas is marketed as is, indeed, nothing but marketing. That it is an exercise in buying stuff we don’t need to satisfy corporations’ bottom lines; an expectation to spend and buy that has been fluffed up to maximise others’ profits. Certainly, I believe there is truth in this. Corporations and companies go ballistic with marketing and advertising this time of year. As they are so often referred to in the Anglophone world, ‘the Holidays’ make up a critical sales period. However, like a good anthropologist, I must also remind myself that things are rarely, if ever, so simple. Gift-giving, and the ritualised nature often associated with it, predates consumer capitalism; predates this so-called ‘modern’ world; predates even Christmas as a concept. And perhaps more importantly, gift-giving fulfils an essential social function. In fact, I would even argue that this very loss of understanding and appreciation of its deeper socio-symbolic purpose is precisely what has allowed sales and marketing to go awry, as meaning is lost and replaced by consumption.
Okay, what am I talking about? While spending all this time poking around (or, perhaps more accurately, racking my brains) for gift ideas, my brain instead spits out unhelpful advice pertaining to one Marcel Mauss and his anthropological classic: The Gift. As any anthropologist would argue, the act of gift-giving is never as simple as merely giving a gift. It is always laden with symbolic and social meanings; overlapping and complexly layered, deeply interwoven with other cultural and social practices and assumptions, and thus not easily separated out. From how the packaging of a gift can matter more than the item itself, to how gifts can be means of flexing power and status, a gift is never merely a gift. It is always more. What matters more here are the immaterial – perhaps even metaphysical – qualities that a gift harbours.
When a gift is given, or exchanged, a link is established between the giver and the receiver: the act itself is symbolic, emotionally laden, even spiritual. It has ritual associated with it, whether one is expected to refuse it, or how one ought to act when receiving it, and so on. I very much like the Māori concept of hau (pronounced [hoh]) as a means of explaining this link. Mauss quotes a Māori man, at length, in his essay:
I shall tell you about hau. […] Suppose you have some particular object, taonga, and you give it to me; you give it to me without a price. We do not bargain over it. Now I give this thing to a third person who after a time decides to give me something in repayment for it, and he makes me a present of something [i.e. another taonga]. Now this taonga I received from him is the spirit – the hau – of the taonga I received from you and which I passed on to him. The taonga which I receive on account of the taonga that came from you, I must return to you. It would not be right on my part to keep these taonga whether they were desirable or not. I must give them to you since they are the hau of the taonga you gave to me.
What is hau? As Tamati Ranaipiri, quoted by Mauss above, outlines, it is the spirit of the gift. It is the metaphysical something-more that comes with the act of gifting an object. The object is not merely the object: it is the object and this intangible quality. A ‘spirit’ that ties the giver and the receiver together in a loop of reciprocity. I give, you receive; you give, I receive, but the hau is never exactly the same. The books, as it were, are never fully balanced, so this cycle of reciprocity remains.
This cycle of reciprocity is also key to what differentiates gift-giving from mere bartering, or trade. Bartering or trade fulfils a predominantly economic purpose. In contrast, gift-giving is a social practice: it creates this bond of reciprocity between people, groups, or communities. Gift-giving, in this sense, is also not limited to particularly extravagant or even material goods: buying friends a pint with the understanding that ‘it will even out in time’ is a perfect example of a mundane form of gift exchange. Gift-giving binds people together.
It was once said that there is no such thing as society, but only individuals (and perhaps families). Never has an observation been so incorrect. A fallacious and potentially dangerous development seen across many societies is the supreme reign of the ‘individual’; the belief or perception of personhood as a bounded whole. ‘Society’, in this view, is not so much a collective as it is a collection. The issue lies with, as is so often the case, the base presumption. As much as legal and philosophical traditions construct us all as ‘individuals’, and thus we are inculcated into this line of thinking, this framing remains altogether too simplistic. Anthropologists often speak of cultures as individualistic or dividualistic – and though this implies a binary, it is more accurately a sliding scale. What an individual is, I suspect most readers will already know. A dividual, on the other hand? Perhaps this is more unclear.
In essence, a dividual is the opposite of an individual: a concept of personhood in which the person is defined not as a bounded whole, but instead by their social connections: as the sum total of all social relationships, kinship networks, clan and community belonging, and other such affiliations. It may sound radical, but it is by no means a merely theoretical position. However, as mentioned above, it is also not binary. All cultures exist on a sliding individual-dividual scale (among other scales…), and just as no culture is purely dividualistic, so no culture is purely individualistic – no matter what scholastic traditions might maintain. We are who we are, but we are also defined by our networks, connections, families, friends, religious affiliations (or lack thereof), political organising, neighbourhoods, classes, professions… to mention a few. We, as persons, are not mere individuals; we are also a nexus of other connections. We are the communities we are a part of, as much as we are ourselves; indeed, we are not a collection of individuals, we are a collective and individuals at the same time.
Gift-giving and reciprocity take on more profound meaning and deeper significance when considered through the lens of dividualism. The reciprocity loops that bind people together – family, friends, acquaintances, even strangers – into a broader community or society are made and maintained by various types of gift-giving and exchange. The hau, as it were, can be boiled down and simplified into a broader idea of society, a community spirit, a sense of belonging or of home. In his original writing, Mauss pointed out that the importance of social programmes, on a national level, is not merely economic – to redistribute resources – but also to create a sense of reciprocity towards society at large; what Mauss called the moral imperative of gift-giving. This point has been picked up and expanded upon by others, such as the late David Graeber: that kindness and generosity work to create and maintain community and social cohesion.
This isn’t limited to the macro-est of scales, but is equally vital on the smallest of social scales: neighbourhoods, families, or friendships. The genuine importance of gift-giving during holidays like Christmas is that reciprocal links are maintained, even strengthened, and new ones have a chance of being made during this designated time of ritual gift exchange. It is not merely the material gifts being exchanged, but also that little extra, the hau as the Māori refer to it, that truly matters. This moral imperative of gift exchange, then, is what has been lost in a lot of ways, paving the way for much more cynical evaluations of Christmas and its gifts as mere consumption. It is an easy trap to fall into – even I become frustrated struggling to find a gift for someone – mainly because the whole moral aspect of giving has been broadly undermined; the view of society as something bigger than an individual, something that needs to be collectively maintained, has been fading away. Just as it has been neglected that we are not merely individuals – that we are a mix of ourselves and connections external to us – so has the hau; the spirit of the gift. A step in the right direction here, of course, is to remind oneself of the humbling nature of dividualism: I am not merely myself, but also the society or community of which I am a part.
What I do, and have done, in life is not something that has been purely of my own doing – indeed, there’s a meme declaring that “I exist without my consent”. All this is a result of my parents, my sibling, neighbours, friends, school teachers, sports clubs, and communities that all, in their own way, raised me and shaped me. Even now, the research that I do and the PhD I am pursuing is not a dream I could possibly have pursued by myself. Without my family’s assistance and support; without friends to lean on, and people that care for me; without my supervisors; even without the people that I work with here in Sweden (whom we anthropologists call informants or interlocutors); and of course without my own will and drive to pursue this research to begin with, it could not succeed. It is indeed my work, insofar as I am producing it, but it is also a collective effort. In the opening chapter of Marcus Aurelius’ Meditations, he spends time and ink giving thanks for the various blessings he has received – to be born healthy, to have had loving parents, caring friends, invested tutors and teachers, capable and virtuous mentors, and so forth – recognising that even the Roman Emperor himself ought to remain humble. Now, I am no Roman Emperor, but if humble pie is a suitable snack for him, it sure as hell is for me, too: an exercise in reminding oneself of those people and communities around you that make you who you are – to all these people I say thank you.
And remind myself of hau, the spirit of the gift.
Merry Christmas.
Dreamers at fault.
'The basic lesson of psychoanalysis [ … is] that we are responsible for our dreams. Our dreams stage our desires, and our desires are not objective facts. We created them. We sustain them. We are responsible for them.'
Slavoj Žižek, The Pervert’s Guide to Ideology.
What does it mean to dream of a better world? We have all done it at some point. The drunken, likely lively, conversation with a friend; those animated discussions during which you solve all the world’s problems. Having imbibed a thing or two, you finally see things clearly: you see how they really are, and you see how they ought to be. You have envisioned Utopia – or your own personal Utopia, at least. Problem solved, right? Let’s pack it in, boys. We’re all done! I, for one, have found myself in these situations perhaps more times than I wish to admit (or perhaps I revel in the deeply studentikos – student-like, as we say in Swedish – nature of it all). Admission or no, solving the world’s problems with some napkin-maths rarely actually solves them. Indeed, practice cannot be left out of the picture, though leaving it out is undoubtedly what often ends up happening. To paraphrase my favourite raccoon-cum-philosopher, the truly interesting question is not the question that Hollywood and games and popular media show us – that is: the revolution itself – but instead what happens after[1]. Once the party is over, and the janitor rocks up on Monday to mop the confetti, pick up the empty bottles, and dispose of the blood, guts, and brains. What then? Rather than asking that question as such, I’d like to explore one of the reasons why I suspect it is asked far less often. Why does the first day of Utopia make us so uncomfortable? It’s almost as if we don’t want it at all.
I have recently been part of a reading and discussion group on Utopia: conceptions of Utopia, particularly considering Utopia through various lenses, such as ethics, economics, or culture. It is not a purely academic group, but rather aims to include clever folks from any number of backgrounds, synthesising a fruitful and eye-opening discussion through this multiplicity of perspectives. And it has been just that! One of many takeaways I have noted thus far is the tendency to perceive Utopia as something abstracted: to treat it as a blueprint, while at the same time denying its materiality. I mean to say here that Utopia is only imagined in complete terms: a snapshot of someone’s perfectly balanced society at some point out of time. In essence: denying Utopia its critical temporal angle.
Indeed, temporality is critical to Utopian thinking. It is common to conclude something along the lines of: Utopia can never truly exist; it is just a guiding light for how society might improve. However, I think there is a grave, if subtle, mistake in this perception. It is not that Utopia cannot truly exist, but rather that Utopia always exists at some point in the future. Utopia’s temporal angle signals the end of history (NOT YOU). What do I mean by this? In abstracted terms, Utopia signifies a society or social system in absolute harmony towards some end goal (pleasure, individual freedom, economic production, etc.) – the end goal in this case isn’t all too relevant. Due to its perfect harmony, Utopia will not change: what has been reached is the state of the world, such as it is, forever. To briefly turn to Walter Benjamin, it is the point in the future where history is redeemed, thus ceasing to exist. Beyond Utopia, nothing further changes. History ends.
This, of course, means that Utopia, as a place or system or however you prefer to conceptualise it, necessarily exists at some point in the future, and necessarily always in the future, whether it is realised or not. If it is not realised, it is in the future as it has not been reached yet; once realised, the future – in historical terms – no longer needs to exist.
Imagining this moment is profoundly anxiety-inducing and is thus something most would wish not to think about. It is, indeed, why the after-the-revolution is rarely shown – and this is especially true should such a revolution be successful! Why do I believe this is something people would rather not ponder? It is often said that of course there are no novels or stories set in Utopia, because it would be challenging to make a story about it: there can be no stakes. This is a standpoint I, too, have stood by and uttered. But I wish here to amend my previous ignorance, and put forth an argument that such stories, done well, might be more akin to reading something like Nabokov’s Lolita: deeply uncomfortable; uncanny; anxiety-inducing. Albeit for different reasons than Nabokov’s literary masterpiece!
To unpick this, I want to briefly turn to psychoanalyst Jacques Lacan. To paraphrase Lacan, anxiety arises when you know that something is expected of you, but you are unable to understand what that is. To link this back to Utopian imaginaries, we must look at desire. Critically, Lacan clarifies that desire is not merely desire, but more specifically, the desire to desire. In other words, we never only desire the object of our focus as such; we also desire the sensation of desiring it. Put simply: the feeling of unhappiness when we get something we have coveted for a long time is a manifestation of anxiety arising not from gaining our object of desire, but from losing the sense of desire itself. Typically, this leads us to want or desire something else instead. Therefore, a healthy sense of fulfilment is contingent on never truly being complete, never truly being whole; the contradiction is that if we have all we desire, we lose the critical core that our desires constitute.
A fulfilled Utopia produces the same contradiction on a broad social level. The very sense Utopia conceptually summons presupposes its incompleteness. Utopia presupposes its own state of becoming; it presupposes a perfect balance maintained at its event horizon. The perfect harmony, in this sense, includes a continuation of history, not a need to transgress historicity, or to move beyond it ‘out of time’, as it were. Anything more complete would induce a form of socio-cultural anxiety at the loss of desire on a collective, societal level.
Thinking beyond Utopia, so to speak, is therefore not only difficult, but in itself undesirable. The looping back of history unto itself is a source of anxiety, as societal desire must, by definition, be annihilated. This, I maintain, is the reason why thinking of Utopia in more concrete terms – as something even achievable – isn’t desirable at all. Instead, it must be maintained as a point in the future, looming on the horizon of possibilities. This implied unwillingness to actually reach Utopia is something I note in my own research and experience with broadly speaking Utopian groups. For example, one of the larger groups I work with, transhumanists, speaks of the future in transformational terms. It is telling that they do not refer to themselves as posthumanists[2]. Instead, the universe and mankind are spoken of in terms of becoming: transforming our humanity, towards the posthuman, but never really wishing to reach the posthuman state as such. In my particular research, the Utopian event horizon is the much-theorised technological Singularity. However, in practical terms, for each new technology, each new implant, and each new modification, there is always a next one, ad infinitum. In fact, the enhancements and modifications, I suspect, will be incorporated into the perception of the “mere” human. The posthuman ideal will always remain on the horizon, forever pushed forward into the future, as humanity moves into the future.
We will never reach this posthuman ideal, because reaching it is not the point. The desire is for the future in itself, so the notion of a future must always be maintained.
[1] Netflix’s Snowpiercer takes a solid shot at this, though I am certain there are other examples. Nonetheless, it is fair to say that this is very much in the minority of popular media portrayals of revolution or otherwise dramatic/drastic societal change.
[2] Though, of course, posthumanism exists, it is not a techno-utopian movement or ideology in the same way that transhumanism is.
Together, in Cyberspace.
'Cyberspace. A consensual hallucination experienced daily by billions of legitimate operators, in every nation, by children being taught mathematical concepts... A graphic representation of data abstracted from the banks of every computer in the human system. Unthinkable complexity. Lines of light ranged in the nonspace of the mind, clusters and constellations of data. Like city lights, receding.'
William Gibson, Neuromancer, 1984.
‘IRL’ is such a weird phrase. ‘In real life’, meant to demarcate an online interaction from an off-line one, to make real the DMZ between cyberspace and space. Baked into the very words is the implication that whatever might happen online is not real. The virtual, relegated to the immaterial, the abstract, the non-real. Subsumed as a separate category, somehow nestled under the real, yet not a part of it. ‘IRL’ denotes the apparent un-reality of online interaction. It’s an absurd phrase, indeed: a contradiction upheld by Luddite biases that digital space cannot possibly compete with the real. Yet over the past two years, one would be hard-pressed to maintain this clear distinction between IRL and virtual space; to maintain, with a straight face, the fundamental meaninglessness of virtual interaction implied by the virtual’s exclusion from ‘real life’. Families have bid farewell to their elders via cameras and screens; friendships have been maintained through incessant typing and texting; lovers kept together by voice recordings and selfies; all of them in cyberspace. Screens, texts, cameras, and mere words are not the same as the last hug from a loved one, nor indeed the friendly pat on the back from a friend, or a lover’s gentle kiss – for how could they be? Nonetheless, dismissing virtual spaces so outright is like rejecting the ‘realness’ of a soldier’s last letter home, or of a grey inscription on a tombstone. So, are virtual spaces any less meaningful?
The distinction between ‘real’ and ‘virtual’ is already fallacious. Following Gilles Deleuze, ‘the real’ – that is to say, the realm in which human experience and meaning is negotiated – has two constituent parts: the Virtual and the Actual. The Actual is the physical spaces and places around us humans, such as they are: our streets, cities, homes, and shelters. In shorthand, the Actual is constituted by that which materially is. The Virtual is somewhat more difficult to grasp (pun very much intended). It encapsulates the immaterial realms in which humans create and recreate meaning. This is a label most commonly applied (predominantly by us anthropologists) to the spirit realms of various animist peoples, communities, and societies[1]. In a nutshell: these spirit realms are understood to exist, and to have a direct impact on the Actual world. Nonetheless, such realms cannot be directly acted upon by humans. They can be entered through shamanistic rites and rituals, but only by a select few; yet we – or rather our spirits – forever roam these distant shores. Just as the Actual and the Virtual constitute the Real, so do these spirit realms constitute a part of the world, as it is understood by varying animist cultures.
Why might this matter? I am sure you have long figured out the parallel I am drawing. Still, for the record, here it is nonetheless: virtual spaces – cyberspace – are real spaces, and spaces in which humans create meaning. Cyberspace is, in the words of Marc Augé, anthropological space. Anthropological spaces are the spaces in which humans make sense of the world, physically and metaphysically: negotiate their identities; make sense of their place in the world; establish and re-establish their kinship ties. Augé contrasts this with what he calls non-places: spaces designed not for humans to make sense of, or to linger in, but rather merely to move through. In a non-place, meaning is not made: it is the corridors of an airport terminal before you reach your gate and finally meet your loved ones; it is the passageways of a local mall that you must move through to get to the store you seek; or indeed the long stretches of highway, roads that may as well be entirely virtual as they flit past us outside the car window. Cyberspace, it is safe to say, is not a non-place, though non-places undoubtedly do exist online. Looking at each blog like this one, each community group on Facebook, the organically organised ‘Twitterspheres’, or the Subreddits dedicated to the most obscure hobbies, cyberspace is filled to the brim with anthropological spaces: where nerds share their love and build community and belonging around extremely niche board games, or where a movement of guerrilla gardeners organise and plan the next step of their green urban revolution… Or indeed, where we all, just some months past, were forced to celebrate birthdays, anniversaries, weddings, and births.
So again, why does this even matter? Subsuming virtual spaces under the rubric of ‘not in real life’ carries risks not only on broader societal levels, but also on micro-scales. We can already see the impact that memes and trolls have had on broad sociopolitical discourse when dismissed as the actions of teenagers who are just taking the piss, rather than recognised as the social movements that they are. Populist politicians have gained support in precisely these spaces because these virtual spaces are indeed anthropological spaces. Meaning is negotiated within them, and what happens there impacts the material world. We might not stand in these virtual fields, among these wireframe cities, but nonetheless we carry out actions there: each abstracted screen name and avatar is a person, doing a thing.
On a smaller, or more individualised, scale the implications are, not surprisingly, more personal, but they also run the risk of leading to ever more alienation – a matter all the more apparent as long as our political leaders retain this sense of social distance, and remain unwilling or incapable of taking these spaces seriously – that is to say, as spaces that are just as material as streets, town squares, or parliamentary halls. The broader issues here are immense, as a large portion of “us” digital natives (and I count myself among them) operate perhaps not only on a different level, but within a different dimension from the generations before us. I cannot count the times that socialising with friends in an online video game, or in the communities that sprang up around it, has been waved away as a waste of time, or as something unreal, while I and many others made genuine friends, and maintained even more friendships, through this virtual dimension. How many of us do not know the pain of seeing an old video-game friend’s profile read ‘last online: 1,127 days ago’. It matters to us precisely because it was real.
We may not be wearing VR goggles tethered to pastiche cassette-futuristic computer-tables to chat with our nans on Facebook, nor do we ‘plug ourselves in’ and cruise through wireframed cityscapes while video-calling friends, families, or lovers stuck in their own lockdowns, or merely located beyond our horizon. Though this development may yet come, sooner than we might have expected. Presently, however, we do not, as it were, interact with virtual spaces as we do with actual spaces: we do not see them, taste them, smell them, or touch them. But they are felt: emotionally and socially. Cyberspace, such as it is, remains elusive, but this doesn’t mean we do not operate within it; in some parts of the world we are arguably inside of it, constantly. A permanent backdrop that we feel, to the life that we see. It is our Spirit Realm: the realm that exists and impacts our actual world, the two bleeding together; screens are our portals, and cyberspace makes itself known to us not necessarily by its material dimension (for this does exist: undersea cables, satellites, server farms) but by the social connection, togetherness, and meaning production that we carry with us, like we carry a smartphone.
There we are. Together, in cyberspace.
[1] Do note: this is an extreme condensation of a whole subfield of anthropology, and I summarise a field that has studied groups from Amazonia, to Greenland, to Papua New Guinea, to Siberia. In other words, it would be an understatement to call this an extremely small nutshell. Nonetheless, the broad principles stand.
A paragon of their kind.
“In rode the Lord of the Nazgûl. A great black shape against the fires beyond he loomed up, grown to a vast menace of despair. In rode the Lord of the Nazgûl, under the archway that no enemy ever yet had passed, and all fled before his face.
[…]
And in that very moment, away behind in some courtyard of the city, a cock crowed. Shrill and clear he crowed, recking nothing of war nor of wizardry, welcoming only the morning that in the sky far above the shadows of death was coming with the dawn. And as if in answer there came from far away another note. Horns, horns, horns, in dark Mindolluin’s sides they dimly echoed. Great horns of the north wildly blowing. Rohan had come at last.”
Lord of the Rings: The Return of the King. Chapter IV – The Siege of Gondor.
Heroes have always fascinated me, both who they are as people and their actions; what makes one ascend to such a status? It is a topic I've been mulling over for quite some time, and my interest in these stories has been rekindled not least by all that has been going on for the past year or so: doctors and healthcare workers hailed as heroes, or the heroic acts of delivery drivers and other such, often invisible, 'unqualified' workers. What makes a hero is rather difficult to put one's finger on. People seem to instinctively know what a hero is, at least within their own social and cultural framing, but often struggle with really pinning down the criteria. We see a hero when we hear the stories of a hero, when the act is laid out in front of us, but it is far more challenging to know what a hero is in the abstract. Indeed, what are the criteria, and what might such an abstracted checklist say about a culture? Or, perhaps more importantly, what might heroes say about how we remember?
I suspect much of my fascination with heroes stems from my deep-seated love for Tolkien's legendarium and how his writing made heroes. There is much to say on the topic in general, far too much to discuss here (as is my usual cop-out), as his work features heroes aplenty. They are an inalienable part of the works' thematic richness: making flesh, so to speak, the core theme of good fighting evil. Evil has its villains, and within such a framing, the good need their heroes. One of my favourite heroic scenes in both the movies and the novels is the arrival of the Rohirrim to lift the siege of Minas Tirith. It is a remarkable moment, both narratively and for its heroic aesthetics. It is truly a stand-out piece, and in Peter Jackson's masterful movie adaptations the whole segment has an otherworldly feel to it; the sense of a divine intervention [1] made manifest through near-absurd levels of heroism, and gosh-darn it, if it doesn't get me every time!
“Arise, arise!” Théoden, king of Rohan, begins his pre-battle speech; his men eventually chant “death!” in response. “Death” the host chants, and death they bring, as they merge into one entity, a singular collective of 10,000 riders: 60,000 legs barrelling down the gentle slopes of Pelennor Fields. A flash flood of death and destruction, momentum and cohesion maintained by rage and a burning hatred for their enemies and the darkness they bring. The whole scene is deliciously epic.
The text is littered with heroic tropes – the pre-battle speech (the logistics of which I still question), or, as my father once noted: "this chanting for death just shows the power of mass psychology". Buzzkill, certainly, but not wrong. Nonetheless, I think this misses the point. The scene is a narrative constructed and filled with culturally and socially coded ideals to signal the heroism of the actions themselves. Hero stories are hero stories because they are treated as such. In a blunt sense, heroes are always constructed by narratives; arguably not meant to be accurate, but rather meant to show the transcendental nature of the action or actor(s). As a result, the actors ascend to impossible heights, with expectations that cannot be realistically maintained in reality; or perhaps more accurately, that cannot be maintained by the merely human. Instead, their heroic transcendence is maintained by ideology, not by their material existence, and Tolkien, in his text, perhaps inadvertently, makes an important distinction: it is not Théoden-king who has arrived with 10,000 riders: rather, Rohan had come at last.
What do I mean by this? A hero is transcendental by definition, a symbolic paragon of their kind; an individual or group of individuals who have done something that so perfectly aligns with overarching cultural values and morals that through these actions they transcend their mere humanity and, in effect, become part of the cultural Big Other. However, a human is still only human, and such transcendence into the overhanging moral structure is, practically speaking, impossible. It is always a product of narrative: it is always a story; an inspirational tale; a noble lie.
There are a surprising number of commonalities between the heroic figure and what's been called The King's Two Bodies. In the European monarchical tradition, there has at times existed a clear distinction between a monarch's corporeal body – the blood-and-flesh, breathing, eating, shitting, imperfect human body – and his body politic: the transcendental body, the perfect body, the body imbued with divine might, the body chosen by God. The symbolic division between the individual and the power they hold is just as clear in the present with, for example, the different connotations between Boris Johnson-the-man and the Prime Minister of the United Kingdom, or indeed Joe Biden-the-man and the President of the United States of America. Such a leader-follower dynamic seeks to establish a sense of divine kinship, a sense of relatedness to the leader in question, by invoking the Nation, a belief, or other ideological constructs. The leader-figure becomes, in effect, a material embodiment of the dominant cultural-ideological environment; a figure imbued with charismatic power, which, following Max Weber, belongs to an inspirational individual whom people want to follow without promise of reciprocity.
Nonetheless, the tension between the body that shits and the body blessed by God cannot be ignored. A hero is not the same as a leader, of course. The hero is a paragon, the embodiment of the moral-cultural framework that defines them. However, both leader-figures and hero-figures are maintained by the existence of a second body. This creates a Two-Body Problem. Heroes invariably suffer from a tension that arises between their transcendental heroic self and their excremental human self, and this is a tension that cannot be reconciled effectively. The only course of action is to ignore the tension, as is often done: to ignore the historical individual and instead embrace the mythic figure. The individual subject must be destroyed and replaced by the mythic, the transcendental.
Winston Churchill wrote that "history will be kind to me, for I intend to write it," a statement which, by and large, turned out to be true. The Churchillian figure is now a trope, and the heroic figure in modern British society par excellence. Nonetheless, even with Churchill's considerable success, his human self has not been fully exorcised. Criticism of his rampant racism, his colonial policies, and his (lack of) action during the Bengal Famine still exists in public discourse, and becomes a point of contestation, as it painfully highlights the tension between the historical and the mythic figure. If the ol' British Bulldog was such a remarkably heroic figure, a shining beacon, he cannot be shown to have filth on his underbelly. Even here the Two-Body Problem remains, and short of complete destruction of the historical record (challenging in its own right), it cannot be circumvented – and this is not to mention the implications for a heroic figure who is still alive (ritualistic hero-murder might be a somewhat charged suggestion…).
That being said, the destruction of the individual subject can also be read as more metaphorical than literal. The most obvious way of destroying the individual subject is by placing the subject within a collective: to celebrate heroic acts carried out by a group, over those specifically made by an individual. Folding the individual into a collective, which may include any number of otherwise questionable individuals, shifts the focus away from how these individuals may or may not have behaved in the rest of their lives, and instead emphasises the actions of the group, allowing the heroic actions to speak for themselves. The group, as a collective entity, also ceases to exist eventually, insofar as it cannot be maintained indefinitely, and thus circumvents the Two-Body Problem.
The Rohirrim's chant before their charge at the gates of Minas Tirith was, in a way, correct. "Death," they chanted; and literally and figuratively they brought death with them – that is to say, the destruction of the individual in favour of the collective cohesion that they created during their charge, just as much as the literal death of crushed Orc-skulls underhoof. Théoden may have been their king, but the charge was not carried out by him. The Charge of the Rohirrim, as a collective, brings to the fore the body politic of the people of Rohan: their courage, their sacrifice, their loyalty, their heroism. Much can be gleaned from this approach to heroism and commemoration – in short: deeds not men! – for without Rohan's collective heroism, Minas Tirith would surely have fallen.
… As long as they don’t raise a goddamned statue of Théoden.
Selected Bibliography
Yurchak, Alexei. 2015. "The Bodies of Lenin: The Hidden Science of Communist Sovereignty."
Michelutti, Lucia. 2013. "'We Are All Chávez': Charisma as an Embodied Experience."
[1] Likely not by accident, and all the more fitting given that the charge of the Rohirrim was allegedly inspired by the lifting of the siege of Vienna in 1683 by the Holy League.
readyaiminquire · 3 years
The Future as Vapor.
‘The semiotic phantoms, bits of deep cultural imagery that have split off and taken on a life of their own.’
              William Gibson, The Gernsback Continuum.
  I’ve been thinking a lot about time lately. Not wholly sure as to why, perhaps it’s because we’ve just moved from one year to another, and taking stock is only natural; or perhaps because of the peculiar nature of the year that has just ended, with its pandemic, lockdowns, and the many challenges and tragedies borne out of it. Perhaps my research and its focus on time and temporality makes me particularly vulnerable to this sort of introspection; perhaps I am just predisposed to it? Likely, it is a mixture of all of these, but I already digress from the main point I was making, which is, quite simply: I have been thinking a lot about time lately. I’d wager the year that has just been, and which doesn’t feel as if it has fully ended quite yet, has a lot to do with it. My soundtrack for 2020, if there was such a thing, has undoubtedly been vaporwave, dyschronous ‘trapped-in-a-loop’ music for a year where everything stood still: a semi-ironic haunting from the past with empty, tinny beats and retro-synths, just mangled enough to sound new, but not too mangled so as to lose its retro-80s soundscape. It is, as absurd as it sounds, Muzak with teeth. The ironic resurrection of a dead aesthetic, brought back with a vengeance and with a purpose.
Vaporwave gets its name from 'vaporware', software that never was. Vaporware is software that has been announced, sometimes even showcased, but which then disappeared into some development maelstrom and seemingly vanished from view. It is never cancelled, never proclaimed dead and left to rest in the pile of 'what could have been', but always kept alive – a zombified software – as a potential. Its nonexistence-with-a-side-of-potential is precisely what makes vaporware vaporware. What does vaporwave take from this? The music is a form of Muzak, seemingly generic elevator music perfect for blending into the background but never meant to be listened to. While this implies a vaporware existence (existence in nonexistence; or rather nonexistence in existence), vaporwave has more to it than that. It is precisely its purposefully meaningless soundscape that gives vaporwave its ability to critique. Often made up of repeating synth riffs, tinny beats, and sometimes sounds or jingles reminiscent of 1980s and 1990s TV and radio commercials, the genre has not modelled itself on Muzak by accident. It is an echo of a past that has long disappeared into memory, even into cultural memory; a haunting that reminds its listeners of what was, through its twisted soundscape of an otherwise well-trodden cultural form. The genre is best described as music optimised for abandoned malls.
Vaporwave is the audial version of a ruin. Or rather, it is the erection of a folly among ruins, a means to highlight the absurdity of the action itself. Its soundscape exists as a reminder of a past that promised a future that never appeared; its central thesis – if it were to have one – is that we live surrounded by the ruins of this future-that-never-was. Crucially, and this gets at the heart of the present predicament, we live and operate among these cultural ruins strictly because we have been unable to reconfigure these cultural building blocks into something new. The ruined landscape of a future that never existed has only come to pass because it has not been replaced by the new. Instead, the orientation has shifted to focusing on the past in the present, not the future ahead of us. The emergence of vaporwave in the present is thus by no means a result of the pandemic, the lockdowns, and the perceived stalling of time; rather, it predates them. The pandemic has likely brought such feelings of standstill to the fore, but it by no means created them.
This essay was prompted by a post on Reddit. Paraphrasing, the poster said something to the effect of 'I don't want to play the video games from when I was a kid, I want to feel like I did when I played the video games from when I was a kid.' This, again, gets at the heart of the predicament. That feeling many of us remember from the past is one we have not felt in a long time – myself included. Indeed, video games are a fantastic case study for this development. Using an example from my own experience: I remember when I first played World of Warcraft. I know, your mental image of me as the narrator just shifted substantially, but bear with me. The massively multiplayer online roleplaying game (MMORPG) wasn't new by the time WoW was released. Still, it had never been done quite so well: the graphics were fantastic (at the time…), the level of interaction, the fluidity and connectivity of the world, the social aspects and community building… the list goes on. The game was an adventure, and I (and countless millions of others) couldn't get enough of it. It was an unrivalled experience in many ways. Nothing like it had existed before. It was a completely new cultural artefact. It invoked a sense of future-shock.
WoW is, in addition, an interesting example, as its original (well, almost original) game was re-released in 2019 to thunderous applause and a community bracing itself for another nerdgasm. The re-release was undoubtedly popular, it was undoubtedly fun, but it wasn't the same. The feeling it had evoked in the past was no longer there. The future-shock with which it had once been densely packed had melted into air. This disconnect has even been picked up by parts of the community. A debate has raged between players who wish for no changes to be made to the original, for it to be released in its 'pure' state (some changes had been made around specific mechanics, bugs that were never ironed out originally had been fixed, and so forth), and players who call not for a recreation of the original game, but for a recreation of the feeling of the original game.
But this is the issue with nostalgia. The original feeling of something unique, the future-shock as it were (or what German historian Reinhart Koselleck called the Überraschung; lit. surprise), cannot by definition be re-created; it must be created anew, with something new. The tragedy faced in the present, then, is that the dominant form among popular cultural media is that of nostalgia: a hankering for past experiences, not for the experiences themselves but for the feeling of wonder that came with them: the surprise when playing your first 3D video game, or when first using a smartphone, or at the choice of music on an iPod (not to mention that the songs never skipped if you bumped it!). In many ways, this sense of surprise and wonder has been lost, even as innovation has sped up. Computing is faster than ever. Technology is near-ubiquitous in some parts of the world, yet nothing new seems to come from it. It is the same experiences, but faster, or in higher fidelity – occasionally this even folds back onto itself: vaporwave being a prime example – the mockery of a past cultural form that is only made possible with new technologies and innovations. In short, for all this new potential, nothing new is created.
Much has been written on what has caused this predicament, be it Mark Fisher’s argument that the foundations for innovative cultural forms have all been eroded with the rise of neoliberal capitalism, Franco ‘Bifo’ Berardi’s analysis that the future has disappeared because social imaginaries have been eroded with the rise of global techno-capitalism, or indeed Fredric Jameson’s take that capital is too effective at rehabilitating the radically new. To varying degrees, these thinkers (and others) speak to the problem of nostalgia, specifically how the marketing of nostalgia is but a logical conclusion. In the present neo-liberal configuration, innovating is a risk, especially within the realm of culture and pop-culture. It is much safer, and more in line with the underpinning profit motive, to repackage and re-sell old cultural forms as nostalgia and pastiche: think of the Star Wars universe's resurrection yet again, or indeed the example above with the re-release of WoW.
'Fine', you say, 'you're right', you concede, 'but what's the problem?' you finally ask. The issue with nostalgia becoming one of the main pop-cultural articulations is that it reorients the present away from the future and towards a past long gone. A lack of future orientation, in turn, drains much of the hope surrounding societal and cultural development and innovation. To frame this less abstractly: it is hardly news that scientific research and literature, typically in the form of science fiction, exist in a feedback loop. They both take inspiration from one another. Scientific breakthroughs lead authors to push the boundaries of the imaginable, which in turn inspires scientists, engineers, and inventors to make science fiction science reality. In the words of William Gibson: 'There are bits of the literal future right here, right now, if you know how to look for them. Although I can't tell you how; it's a non-rational process.' Just think of how many present innovations and inventions we had already seen on shows like Star Trek. Lacking this future orientation, in short, invariably leads to a form of social and cultural stagnation. Let me be clear here: this is not a piece lamenting the 'fall' of some romanticised Western culture or some such nonsense. Instead, much of our present social, political, and cultural order is underpinned by a futural orientation, insofar as it is a belief in a future that drives engagement, innovation, and creativity; that creates future-shock. Why bother changing anything if 'this is it'? It is precisely this process that 'Bifo' Berardi described as the slow cancellation of the future, and that the late Mark Fisher referred to when he asked, "Is there no alternative?"
When I say that nostalgia has become the dominant cultural form, this is what I mean. The conventional means of artistic production have been subsumed under an unmoving profit motive. As a result, real, shocking, surprising innovation cannot take place. But I do not wish to end on such a conclusion, as merely pointing at a problem isn't necessarily helpful. Instead, new and radically different forms of production must be discovered. Fredric Jameson calls such an exercise cognitive mapping: the process of resituating oneself in the cultural landscape and thus gaining a new perspective. To continue a metaphor: to move out of the ruins and into new vistas to regroup, reshape, and ultimately rebuild. The first step is to realise the impasse faced; the second is to do something about it. This process can already be seen in some spaces, especially among grassroots movements like the makers' movement, citizen scientists, and other groups – be they tech-focused or artists' collectives. What ought not be understated, on the other hand, is the importance of ensuring such a shift takes place, lest we end up reading our own collective epitaph:
‘[…]
And on the pedestal, these words appear:
My name is Ozymandias, King of Kings;
Look on my Works, ye Mighty, and despair!
Nothing beside remains. Round the decay
Of that colossal Wreck, boundless and bare
The lone and level sands stretch far away.’
              Ozymandias by Percy Bysshe Shelley, 1818.
readyaiminquire · 4 years
Blood for the Blood God.
The year of our Lord two-thousand and twenty, or 20-20 in common vernacular, has been a wild ride. It's been the kind of year when time compresses and six months simultaneously feels like six weeks and six years. The year started with an almost-war, a continent almost burning to the ground, then a pandemic, and now we're almost back where we started: a(nother) continent is on fire, the pandemic is coming back for its own electric boogaloo, and perhaps this year will include a war after all. To misquote the LEGO Movie: everything is awful. What may be at the top of most of our shit-lists at the moment is the growth of COVID-19 infections, despite what has felt like a constant bombardment of information, PSAs, commentary, and debate surrounding this global pandemic.
Most countries had a time-out over the summer, but now we're headed back into the ring, so to speak, to see how this next round plays out. This long and rather mixed metaphor is, in effect, to say that across the globe people are deeply aware not only of the COVID-19 virus, but of the risks associated with it, and the threat it poses to society. This, to my mind, raises one question: what brought people to swarm shops once lockdown was eased? What caused such a quick return, and willingness to return, to business-as-usual: to offices, to pubs, to shops, to restaurants? With everybody being aware of the risks that still hover above us, surely one would expect to see much more caution? Here, I will argue that under capitalism shopping – and consumption more generally – functions as a cultural equivalent to sacrificial rites, and that under late capitalism more specifically, this form of sacrifice becomes more closely tied to the individual subject. With the uncertainty hanging above us all at the moment, sacrificial rites as a means to pacify a Divine Other become a completely rational thing to do – despite the apparent risks of breaking social distancing measures, individual action becomes key to managing the uncertainty of the present future.
We're all aware of the general functioning of a capitalist economy, specifically how it is prone to crises when there isn't enough growth, and how keeping the machinery going through spending in one form or another is therefore key. I am not going to comment on or analyse this because, frankly, I am not qualified for that particular discussion. If you want to read a critique of capitalism, growth, and crises, I might suggest turning to someone like David Harvey and his work on the 'spatial fix'.
Indeed, as much as our current economic-political system maintains its economic imperative through spending and the flow of capital, so, too, does it create sociocultural imperatives. Though these imperatives have emerged to support and work in concert with the broader economic imperatives, they exist in a separate arena, of course. While the economic arena is driven by the cold, harsh calculus of P&Ls, the social and cultural have a different currency: meaning. Anthropologist Danny Miller makes the case for shopping – that is, the leisure activity of spending hard-earned cash on 'frivolous' or luxury items – being the equivalent of a sacrificial rite in contemporary capitalist societies.
This is a bold statement, you might think, but Miller's argument is rather convincing. Sacrifice, firstly, shouldn't be understood by its action, but rather by its purpose. The equivalents of 'sacrifice' across cultures may therefore look wildly different, but they fulfil the same function. What Miller argues is that through shopping, "the labour of production is turned into the process of consumption". In other words, shopping is done specifically to spend the money we have made in order to consume. The purpose of sacrifice is to establish or maintain a link with a divine entity or otherwise larger-than-human forces. This connection exists to elicit protection, pacification, or otherwise positive outcomes for the society which engages in said sacrificial rites. In the case of contemporary capitalism, what is sacrificed is money, which we earn with our bodies (labour), to maintain the economy as a near-divine force. In turn, The Economy takes care of our future income: through economic booms. Viewed from this perspective, shopping doesn't function so differently from a farmer sacrificing some of his harvest to ensure larger harvests down the line.
This consumption, Miller notes, shouldn't be read as "mere" consumption, or as consumption born from pure pragmatism (indeed, not all buying of goods constitutes shopping). The shopping/sacrifice that he discusses is one that from its very inception is understood as either an improvement or, at the very least, a maintenance of society at large. The object of consumption is used to constitute a material connection to the divine force. This material connection is indeed key, as we must understand the sacrifice to lie both in the material object being consumed and in the act of consumption itself. In other words, the performance of shopping is equally important. This might explain why online shopping doesn't quite scratch the same itch: it lacks performativity. It is, in a sense, closer to "mere" consumption. This sounds far-fetched, without a doubt, and extremely abstracted, but bear with me.
One of the defining aspects of late capitalism is that everything either has been commodified or is potentially understood as a commodity: from good ol' resources, to human labour, to more abstract concepts like personal identity. By consuming goods – the clothes we wear, where we buy food, the restaurants we frequent, and so on – we do not only consume the goods themselves, but we also use this pattern of consumption as a means to establish, re-establish, and reproduce our personal identities. As Jill Fisher notes: "[T]he late capitalist economy has created a structure in which our lives and bodies have been violently commodified".
Understanding this degree of commodification through Marilyn Strathern's seminal work The Self in Self-Decoration, a potentially hidden set of processes begins to emerge. Strathern argues that decorating the body doesn't necessarily serve to highlight the body itself, but to hide it. Just as "the body hides the inner self", Strathern "argue[s] that the physical body is disguised by decorations precisely because the self is one of their messages". In more straightforward English, decorating the body serves to hide it specifically so that one's 'true self' – what typically cannot be seen – can emerge; one's individual subjectivity.
Applying this to late capitalism, the consumption of goods becomes a means through which we assert our sense of individual subjectivity (and take note of this being individual; it will be important later). The consumption of goods, therefore, establishes a metaphysical connection between ourselves and capital, as it is only through capital that we are capable of asserting our own independent selves. Shopping thus becomes the necessary prerequisite to such consumption: the act of sacrificing our hard-earned cash, facilitating the consumption that connects us with the Divine Other of Capital.
How does this relate to the COVID-19 experience? As I mentioned at the start, people are, broadly speaking, aware of the risks that such a pandemic poses. However, much of this is undermined by several uncertainties in how this information is both presented and understood: uncertainties regarding the virus itself, the economy, the social impact, and the future itself. Typically, scientific (or specialist) knowledge has existed to legitimise governmental or state action; however, in times of great(er) uncertainty, this paradigm breaks down and such legitimation cannot take place. What we, as subjects, are left with is a sense of uncertainty and a feeling that something needs to be done, but without any clear sense of what this ought to be.
As anthropologist Mary Douglas outlines in her work on risk, the risk calculus has been individualised, like much of society at large, after the emergence of neoliberalism. The doing of the something mentioned above therefore falls to the individual, rather than any collective, though what this something is remains unclear. Here, the link between the individual and the Divine Other comes into focus. Much like the uncertainty that surrounds the virus itself, there is also a lot of uncertainty around how capital actually works: most people broadly understand capitalist economic structures, but not beyond the general. Seen from this perspective, the drive to go out and shop – to buy new clothes, go to restaurants or pubs, and in general to spend money – becomes not so much an articulation of 'Western overconsumption', but a genuinely sympathetic and rational drive to re-assert some control over a situation marred by feelings of uncertainty and a lack of direction for individual action. This latter point is particularly damning in late capitalism, given the onus placed on individual choice as being valued above all else; the collective action required to handle a pandemic like this requires the opposite of the sociocultural responses that many of us have been inculcated to understand as responses at all.
However, there is without a doubt a hidden dimension to this sacrifice, one far more implicit and therefore not as clear, particularly as it is a result of circumstance rather than design. By engaging in our ritual shopping, we're opening the door to additional COVID-19 spread. The culturally driven 'need' to maintain our connection with Capital (spurred on and reinforced by politicians, pundits, and indeed capital itself) becomes detrimental to what we, through these individual actions, are attempting to achieve. Instead, we're entering a stage of meta-sacrifice, whereby we carry out the rites to ritually exchange our hard-earned cash for goods to consume, but, due to the sheer scale of shopping and consumption taking place, we are also indirectly sacrificing the weakest in society: the elderly, those with underlying conditions, and so on. This individually driven response to our collective uncertainties appears, then, to come with the implicit acceptance that some individuals will simply be lost in the process.
At the end of the day, we understand the intricate processes of neither economics nor epidemiology, and alas we find ourselves in a moment where the economists and epidemiologists themselves do not have clear ideas of what will happen next. We're stuck in a quagmire of uncertainty, with a need for individual action. Shopping, despite the continued threat of COVID-19 and a second wave emerging as I write this, is not merely an outlet of individualistic greed or rabid hyper-consumerism. Instead, understood through the framework of sacrifice – as a rite to pacify a Divine Other and, through an all-important individualisation of such action, to re-establish our own connection with this Other – shopping and consumption emerge as a response to the uncertainty that hangs over us all. Haven't we been told that shopping and spending money might keep the (alas, inevitable) economic crisis at bay? But at what additional cost, specifically a cost we might not see directly? If blood is for the blood god, capital is without a doubt for Capital.
Selected bibliography
Douglas, M. 1994. Risk and Blame: Essays in Cultural Theory. Milton Park: Routledge.
Fisher, J. 2002. "Tattooing the Body, Marking Culture." Body & Society 8(4), pp. 91–107.
Miller, D. 2013. A Theory of Shopping. Hoboken: Wiley.
Strathern, M. 1979. "The Self in Self-Decoration." Oceania 49(4), pp. 241–257.
readyaiminquire · 4 years
Part 3 - Unimaginable by design.
This is the third part of the rewrite of my thesis from 2019. Here I take a slightly different approach: rather than rehashing the same arguments from my previous works, I instead use the same data to argue for something new. Hopefully this will be as enjoyable, if not more so!
You can find the introduction here, part 1 here, and part 2 here.
How does someone build something that, for all intents and purposes, they are incapable of imagining or visualising? This is at the core of Mark Fisher's work on cultural hauntology, itself derived from the work of French philosopher Jacques Derrida. Our current experiences are haunted, it is said, by our past experiences and our future anticipations. However, losing the ability to fully anticipate a future in which substantial change has taken place would imply the inability to bring such a future into being. Looking over my experience working with transhumanists, biohackers, tech-enthusiasts, and self-avowed futurists, among others, in Sweden made me think not only about whether Fisher's cultural diagnosis was correct – which, to be up-front, I do think it was – but perhaps more importantly, about how to break out of such a cultural impasse. Fisher himself states that to fix this disjointed time, we must first recognise that it is indeed disjointed, and from there attempt to find solutions to put it back together. Only later did it dawn on me that this is what these Swedish techno-utopians were working towards, though likely not consciously. Their focus on building a new future, a better future, while remaining notoriously vague as to what this might entail, came into new focus. The trust put in new technologies, while knowledge of the future remains scant (as neither they nor I own a bona fide crystal ball), is, I would argue, exactly the point. What is being built, in other words, is not the future per se, but rather a new context: opportunities to experience the world in ways that are currently unimaginable, and through such experiences, to imagine new futures.
Robotic eyes to see the world in a new light.
Stagnation, cancelled futures, and where we go from here.
Mark Fisher's work on hauntology is very clearly rooted in that of Jacques Derrida, the man who coined the term itself. Derrida observed that we never truly experience anything as fully present; everything that is, is always coloured by past experiences and anticipations of the future. Music paints a very clear picture of this: a single note holds no melodic quality, but is simply a note. It gains these qualities only when understood in the context of the preceding notes and in anticipation of future notes. The melody is thus 'haunted' by that which no longer exists, and by that which does not yet exist. This interplay, Derrida argues, exists across all our experiences. We always experience them as an interplay between past, present, and future.
Fisher's use of hauntology is much more specific, though. He refers to a type of cultural hauntology, in which the phenomenology – or the feeling – of time itself is disjointed. The past (and often the futures imagined in the past) bleeds into the present, making it ever more challenging to delineate between 'past' times, our experientially present time, and anticipated new futures. To borrow a phrase from Fisher, the future has been cancelled. This cancellation, Fisher is careful to point out, was not sudden, though he argues that it started sometime around the 1980s or 1990s (indeed, pinning an exact date on such a sociocultural development will always be folly). What Fisher does observe, however, is the emergence of neoliberal capitalism and, with it, the beginning of this slow cancellation of the future. Neoliberalism, he argues, makes all other developments subservient to its own profit motive, as a means of reproducing the system itself. While this doesn't make the system completely impervious to change, it does make change much less likely to take place organically.
It is important to understand that developments as a whole have not stagnated; rather, there exists a systemic and cultural stagnation. The phenomenology of time is that of standstill. For example, while digital technologies have made enormous strides, these new technological capabilities are, by and large, not deployed to do anything new. Rather, they remain subservient to neoliberal logics, and therefore operate instead to make already established processes and sociocultural modes faster, and by extension more efficient. Examples of this in practice are the digital addition of crackle to music to make a digital file sound as if it is played on an LP (a largely obsolete piece of technology), or the production of nostalgic remakes of movies from the 1980s or 1990s. Marx famously wrote that all things in history appear twice, first as tragedy and then as farce; cultural forms appear first genuinely, and then as nostalgic pastiche. As a result, truly new futures become harder and harder to imagine.
How might such a cultural impasse be broken? It is important to delve deeper into what the phenomenology of time is. German historian Reinhart Koselleck once argued that what makes people experience a historical period as distinct is its existing within a complex knot of new developments and easily anticipated repetition, which constitutes a "specific historical temporality": a specific experience of the now as different from the past (and indeed, different from an anticipated future). This is, in effect, why the 1970s might feel like an era in itself, distinct from both the 60s and the 80s, themselves distinct eras, on a phenomenological level. Koselleck places much emphasis on the "surprise" (Überraschung) as the process through which one era comes to feel experientially distinct from another. Once these surprises have been lived through in their original uniqueness, they become part of a framework of repeatability, and are therefore added to a kind of "horizon of expectation". What makes different eras feel different is, according to Koselleck, the result of this process of accumulation.
To break out of his diagnosed impasse, Fisher emphasised the need to first recognise the impasse itself, though he prescribed no clear roadmap, highlighting instead the importance of local contexts. Koselleck's focus on the surprise, I think, serves as a good framing. It is not far off Alain Badiou's capital-E Event, which he identified as the driver behind cultural change. Badiou defined the Event straightforwardly as the moment after which the world can never be the same again. The parallel between an Event and Koselleck's Überraschung is clear, and serves as a useful framing for how such a cultural hauntology can be circumvented: to rediscover the ability to be surprised.
  Future
I met with Patrick, an older gentleman, in Stockholm. He worked out of a shared workspace focused very much on start-ups, aiming to connect ambitious entrepreneurs and to foster innovation. The offices themselves felt like they had been modelled on something from a cyberpunk novel: stepping in from the grey and rainy Stockholm streets (one might even be reminded of the opening lines of Neuromancer: that the sky above the city "was the color of television, tuned to a dead channel") through a corridor leading to a lift that took me to the heart of the building. Irregularly shaped, with a platform suspended in its centre, the ceiling a skylight, people milling around, a lot of buzz. I eventually met Patrick, perhaps in his mid-60s, a stark contrast to the hive of otherwise young entrepreneurs buzzing around us. We moved through the building, past meeting rooms encased in glass, until we finally found a quiet corner in which we could speak – and within a comfortable distance of a coffee machine (this was Sweden, after all). "Everything in the building is linked to our key cards; from meeting rooms, to the locks, lifts, and even the vending and coffee machines", Patrick told me, excited to be working in a space that seemed to really lean into integrating technology ever more into our daily lives. "Coffee?" he asked, waving his hand by a machine; it powered up.
Patrick looked delighted, as I was there to speak to him specifically about his apparent Jedi-coffee powers. See, beneath the skin of his left hand, nestled in the soft flesh between his thumb and forefinger, was a small NFC chip – and this is what I had ostensibly come to speak to him about. I suppose the question on my mind then was the one I often encounter when I reveal my own implant: "why?". Patrick: "It's an inevitable development, isn't it? Technology just keeps getting better and bigger and faster", and "with modern medicine, and later computers, it was only a matter of time before this [gesturing at his phone] would be integrated in the body!" This 'argumentum ad inevitability' is one that many of the people I have worked with bring up, in one form or another. The logic goes, in a nutshell, that technological innovation, by definition, solves problems. Therefore, as technology grows and improves, it will solve more problems: the implication being that technology will eventually be all-encompassing. I will not dwell much on this here, as I have discussed it elsewhere. Instead, as Patrick very much believed in this technologically driven future, I want to unpack the notion itself. What will it be?
Here we reach a degree of vagueness which permeated many of my conversations with these Swedish techno-utopians. Following the logic outlined above, this imagined future was largely understood to be a good future, or perhaps more accurately as having the potential to be good. Indeed, much of their present efforts are directed towards ensuring the 'correct' use of future digital technologies (again, something I have discussed at length previously). Nonetheless, the perceived or imagined goodness of this potential future is worth dwelling on, specifically because of its vagueness. Another informant I spoke to, Jacob, made sure to highlight the importance of working on these kinds of projects because he wanted to "make sure my little ones grow up in a better world than this, and sure as hell not a worse one". Yet another informant put it very succinctly: "there is no inherent end goal; it's all fluid. It's fluid because we don't yet know what it is we can do". These approaches are all teeming with an inherent positivity towards technology and its potential.
Yet, beyond this positive feeling towards technology – this view of its seemingly limitless positive potential, as long as all get invested and channel some of Gilles Deleuze's wisdom that "there is no need to fear or hope, but only to look for new weapons" – there is a stark lack of clarity as to what exactly this future might look like. This stands in stark contrast to the potentially horrific outcomes of technology gone awry, on which ample articles, books, lectures, and presentations have been written. Thought experiments with names such as the Paperclip Problem exist: clearly defined, yet to a casual listener seemingly absurd in scope and specificity. During my three months conducting fieldwork, the clearest vision of the future presented to me was at a transhumanist conference here in London, TransVision 2019, at which the organiser merely described the future as having the potential to bring about a world of plenty.
Yet, no-one offers much clarity as to what any of that means.
 The futures that never came to be.
If the future, such as my informants seem to imagine it, cannot be described with much clarity, some answers may be found in the past, where (presumably) the inspiration for these projects lies. Fred Turner reminds us that the metaphor of digital technologies as having inherently liberating qualities is a relatively recent one, and did not fully take root until the 1980s or 1990s. It was thus simultaneously surprising and unsurprising that Ethan, a university student at Lund and probably my youngest informant, cited the video game franchise Deus Ex as a key inspiration. Deus Ex, solidly a piece of cyberpunk media, often frames the conflicts and risks associated with human augmentation: the division of humans into different groups, the 'pure' versus the 'augmented', and so on – deep-rooted risks and issues which, in one shape or another, we tackle in contemporary society, though with different categories and labels. When pressed, Ethan, surrounded by lab equipment in his student dorm, highlighted the potential that he saw in the technology: despite the bleak world presented by Deus Ex, he focused more on what could be instead. Deus Ex, and cyberpunk as a genre, is a cautionary tale, should one choose to read it as such.
Not surprisingly, many informants cited science fiction as a source of inspiration – the famous drive from science fiction to science fact. References, beyond the one mentioned above, included Star Trek and Star Wars, as well as many comic books. This, again unsurprisingly, was deeply dependent on their age group. While Ethan referred to a contemporary video game franchise, Jacob referred to the Iron Man comics he read as a kid. However, despite such gaps, the takeaway was always very similar if not the same: not to focus on what technology was used for in these various settings, but rather on what it could be used for instead. The clearest, and perhaps most on-the-nose, example of this came from a speaker at the transhumanist conference, quoting Arthur C. Clarke's three laws:
When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.
The only way of discovering the limits of the possible is to venture a little way past them into the impossible.
Any sufficiently advanced technology is indistinguishable from magic.
This opens up a discussion around inspiration taken from the past – from many different pasts! – but it also makes very clear that the futures imagined in the past are not compatible with the future my informants are seeking to build as a contemporary one.
This drive towards a new type of future, or a future that feels like a new era in some sense or another, is articulated especially strongly in sentiments around the importance of involvement and, put very bluntly, of doing things within the various communities. This is a longstanding pillar among several techno-utopian groups, especially those focusing more heavily on grassroots involvement. My informants all showed how much they valued direct involvement, from decrying a general lack of investment in maintaining broader community relations and events, to phrases such as "theory is nothing if you don't put it into practice". Returning to Ethan, who is exemplary of this stance:
“Some people come on the forums, or in a YouTube-comment section or whatever, and just talk about how amazing this or that would be. Well, have you done anything? No? Your ideas aren’t that original, so at least try to make something with them. Try to make a difference, so that these things can actually become reality.”
I have mentioned before that my informants hold themselves to an ideal initially put forward by the architect, futurist, and many more things, R. Buckminster Fuller. Bucky Fuller put forward the idea of the comprehensive designer: someone who can, put bluntly, 'step outside' of the current system and structures to view them from a novel position. These comprehensive designers are by definition hard to classify, because the very idea is to not be classifiable; flexibility through societal illegibility. These are, in theory, the type of people who hold the potential to be true innovators. Though this is a problematic ideal for many reasons, the notion of attempting to live up to a broader ideal to change and build something new for the future does highlight a certain, at least implicit, understanding of the current cultural predicament à la Fisher.
 Old habits die hard
There is the fundamental problem of imagining yourself as being able to 'step outside' of a system to view it from some neutral point in nowhere. If there is anything my favourite raccoon-cum-philosopher has taught me, it is that we can never step out of our ideology, because it is, by definition, inside of us. As he says, we are "already eating from the trashcan all the time". This predicament becomes painfully clear among my informants. One of the most prevalent ways of speaking about innovation, and about building, testing, or disseminating new technologies, is squarely through the lens of the contemporary entrepreneur, both in practice and in aesthetics. It is telling, indeed, that my earlier vignette was set squarely in one of these entrepreneur centres in Stockholm, and it is far from the only time this became relevant, or even central, to my experience with the people I worked with.
Three of my main informants, Harrison, Jacob, and Samuel, own their own companies focusing on selling and implanting the microchips in Sweden. Harrison, in addition, is a quite prolific speaker on the subject of transhumanism both in Sweden and in Europe, while Jacob is heavily involved in other forms of body modification. Much of it is, very clearly, centred around an entrepreneurial sphere. The same can also be said about many of the people I met. At the two chipping events I attended in Stockholm – both organised by Samuel – many of the attendees spoke about the commercial applications, potential, and excitement of their implants, while others referred to the implants as really useful PR stunts, either for their own personal brands or within their wider professional life (I remember that one of the only two women I managed to speak to used it as a way to leverage her image within an otherwise deeply male-dominated field).
This also became abundantly clear when attending TransVision 2019 in London, where every speaker either had their own book coming out or owned their own company, and some attendees had even come to find start-ups worth investing in. Going back to my conversation with Patrick, he went as far as to compare the modern entrepreneurial spirit with the spirit of discovery among scientists in the 20th century. The new discoverers were, as it was told to me, the likes of Steve Jobs, Elon Musk, and indeed anyone who has the grit and drive to commit to new technologies and finds ways to push these out into society. In addition, other informants, Ethan among them, spoke of future developments in very clear market logics and metaphors. Specifically, when discussing the risks of creating an 'underclass' of non-augmented humans, the response was very much "sure, as the technology develops, only the rich will have the resources to make use of it, but as things go on, the technology will become cheaper, and more accessible. That is nothing but a temporary step, and the future past that will be better than today".
The entrepreneurial metaphors really just highlight how deep the neoliberal/capitalist logics run – what other writers have called the "Silicon Valley ideology". This, again, is closely tied to Bucky Fuller's ideal, but it also inherently serves to undermine it. Though some individuals may have the appearance of stepping beyond the bounds of what is believed to be possible (refer back to Arthur C. Clarke's laws), the inherent ideological framing remains, and such an operation still takes place very much within an established socio-political hegemony. The fundamental framing is still capitalist – and this without going into a discussion about, say, Elon Musk the symbol versus Musk the person.
Spoiler alert: he’s not Tony Stark.
  Purposeful unclarity
It is worth returning to Fisher here. Our fundamental predicament, as he saw it, is not the difficulty of imagining a future, but of imagining new futures. In an oft-quoted line attributed either to Žižek or to Fredric Jameson, it is easier to imagine the end of the world than the end of capitalism. The future, of course, remains, but it remains painfully constant. Herein lies the issue for my informants in Sweden, and likely many others within the same groups and communities: how to create the space in which a sense of newness can emerge. I argue that it is not surprising that their ideal futures are so ill-defined, for lack of a better phrase. The lack of clarity is indeed the point.
Given what has been outlined above, there emerges a clear tension between the will to create a new future – one better than today, a future of plenty, so to speak – and how this future is articulated. Either an image is painted with disappointingly few pixels, or the means through which the future might be created come through already well-established and at times problematic logics. The entrepreneurial ideal, the comprehensive designer, and what is at its very base a neoliberal logic remain extremely clear across all these movements, not only in words but also in action. Not only are the new discoverers and inventors compared to successful entrepreneurs, but most people operate within what can broadly be called a start-up space.
However, turning this perception on its head, it would not be unreasonable to think that these groups themselves have a feeling that they do indeed struggle to imagine a new future, at which point vagueness becomes a necessity. They do not stop believing in a better future being possible, but they recognise the difficulties they face in describing what one might look like. The rejection of a clear view of the future is, to some extent, proof of the accuracy of Fisher's diagnosis, but it is also extremely telling of how such a cultural impasse may finally be broken.
Fisher himself told us that perhaps the only way to break the current loop is to recognise that time itself is out of whack, and, once this is recognised, to deploy appropriate measures to "mend" time. Based on my own fieldwork, however, it appears this step isn't entirely necessary. My informants have not explicitly recognised there being a hauntological component to either their day-to-day life or their ideology. Nonetheless, they move past this as a matter of course and instead begin to focus on creating this (admittedly) undefined future.
The problem with this approach is that it simply pushes the question back: if we don't know what to build, what do we build? A shift in focus becomes key here: it is not about creating a new future, but rather about creating the context in which a new future can develop. What Koselleck called a surprise – Überraschung – is what is sought after, as what surprises us is also what delineates the phenomenology of time itself; what separates the feeling of one time from another. The technologies they strive for – human augmentation, human-computer interfacing, AI, and so on – are technologies whose outcomes we cannot quite predict and much less truly imagine. Replacing the human eye with a cybernetic eye capable of seeing more than just the visible spectrum of light creates a fundamentally different way in which we interact with the world at large, and imagining the impact it will have is near-impossible: it would literally require us to imagine a new colour.
While the true aim is a new future, the practical aim is about creating a context in which a surprise can take place: the context in which society can broadly move forward into a new phenomenological era of time; to not only move into a future, but to move into a new future.
  Conclusion
Mark Fisher declared that the future has been cancelled; that as a result of neoliberal logics, the cultural capability to imagine anything new beyond what already exists, socioculturally speaking, has been lost. Time is a funny thing in that respect, as it is often thought of as linear, one era leading to another. When Fisher says that time is out of joint, it is not that time does not keep flowing – of course it does. Today still turns into tomorrow. The phenomenology of time, on the other hand, has stalled: time might keep flowing, but not much changes. In fact, the past is capitalised on, repackaged, and resold as a product of nostalgia and pastiche. Time keeps flowing, but culture almost feels regressive. German historian Reinhart Koselleck argued that how we perceive history is contingent on a horizon of expected experiences, and what breaks such an experience is the introduction of that which has not been expected: a surprise – the Überraschung. This mirrors the work of Alain Badiou and the capital-E Event. What produces change, or at least the feeling of difference from yesterday to today, is how we might be surprised by something. This is what I argue my informants work to bring about. While they use the language of "the future" to position their aims, what such a future is remains painfully unclear. Even with such a lofty goal in mind, the language, the articulation of their work, and many of the spaces they inhabit remain (perhaps painfully) mundane. They are entrepreneurs, they are public speakers, they have their own start-ups or book deals. In a word, they attempt to capitalise on this vision. Despite these shortcomings, what cannot be denied is the drive to continue forward and to keep developing their ideas, and how these communities and groups internally place a high premium on those practically involved in developing new ideas or technologies. The lack of clarity about the future is somewhat purposeful; there is an acceptance that they cannot imagine what lies ahead, perhaps because they recognise their own inability to look past contemporary ideologies. What they recognise, most likely implicitly, is that they require surprise: something that cannot be imagined, that throws the world on its head and forces new perspectives to emerge.
How do you build what you can’t imagine? You don’t; you build that which allows you to imagine something new.
 Key references
BADIOU, A. 2003. Saint Paul: The Foundation of Universalism (trans. R. Brassier). Stanford: Stanford University Press.
DELEUZE, G. 1992. Postscript on the Societies of Control. October 59, 3–7.
FISHER, M. 2009. Capitalist Realism: Is There No Alternative? London: Zero Books.
FISHER, M. 2012. What is Hauntology? Film Quarterly 66(1), 16–24.
FISHER, M. 2014. Ghosts of My Life: Writings on Depression, Hauntology and Lost Futures. London: Zero Books.
SCUCCIMARRA, L. 2008. Semantics of Time and Historical Experience: Remarks on Koselleck’s “Historik”. Contributions to the History of Concepts 4(2), 160–175.
Rehabilitating cyberpunk: Altered Carbon, past critiques, and a call to nature.
Note: Some spoilers ahead.
There is no need to fear or hope, but only to look for new weapons.
              Gilles Deleuze
We’ve seen a re-emergence of cyberpunk over the past few years. From Blade Runner 2049, the sequel to Blade Runner, to the upcoming videogame Cyberpunk 2077, the genre appears to be making a comeback. What might cause a genre like cyberpunk – distinguished by its cassette-futurist aesthetic, its grittiness, and its overall negative view of the future – to re-emerge? The genre reached its zenith in the 90s and largely faded from popular view throughout the 2000s. It is important here to distinguish between the ‘original’ cyberpunk genre, a deeply ambitious project to produce societal and cultural change, and the cyberpunk aesthetics we see today. As someone extremely interested in these ideas and imaginations of the future, I find cyberpunk infinitely fascinating, and though I revel in its new popularity, I couldn’t help but notice a strong thematic shift away from cyberpunk’s original ambitions and towards a much more vapid and generalised aesthetic. Most recently, I found myself puzzled by Netflix’s adaptation of Altered Carbon. Season one seemingly had everything and remained largely true to the genre’s roots. This all changed with season two, which brought out an undercurrent that had been present throughout the first season without being made a central plot point: that of technologically induced immortality, and humanity’s ‘natural’ state of existence. In this post, I want to look at this thematic shift in the genre and its implications for the wider cyberpunk project. I also want to consider the implications of ‘declawing’ a subversive genre as it re-emerges as a mere simulacrum of itself. This is by no means unique to cyberpunk as a genre, but I wish to use it here as a more general example, with the show Altered Carbon more specifically as a case study. It is time to investigate how a subversive genre is culturally rehabilitated.
As a genre, cyberpunk has its roots in the 1980s and can be said to have been a reaction against the corporate aesthetics of the 1980s and 1990s. It arose as a form of cultural critique against the global-unity-through-consumerism narrative that gained traction around this time and took off after the fall of the USSR in 1991. William Gibson’s 1984 novel Neuromancer is widely credited with solidifying the themes, tropes, and aesthetics of cyberpunk, though it is by no means the first cyberpunk work created; Ridley Scott’s 1982 Blade Runner comes to mind. Some cultural theorists have argued that the most important aspect of the genre, and why it gained such a large and diverse following, stems from it being set in the future. Typically, historical novels critique contemporary society through the lens of the past, whereas cyberpunk imagined a future through which it critiqued the present. Cyberpunk was thus unfettered by needing to be framed in the past, allowing it to simultaneously appear hopeless and dystopian while offering hope for the future – as what it portrayed could still be changed.
Though I can’t give a complete and detailed rundown here (this video does a good job of that already), it was cyberpunk’s flexibility as a type of roadmap for the future – one that both broke down barriers and allowed a contextualisation of the present – that made it such an ambitious and powerful project. The French philosopher Gilles Deleuze once wrote of technology that “there is no need to fear or hope, but only to look for new weapons”: technology is neither an oppressive nor a liberating force, but simply a force of change. What we need is to look for new ways to operate within social, cultural, economic, or political framings changed by further technological development. As a result, cyberpunk authors, artists, and scholars often looked to break down barriers in a double sense: through interdisciplinary and shared work, but also in the work itself. Such a breaking down of barriers is exemplified in the image of the cyborg. The cybernetically enhanced human – part human, part machine – became a powerful image for how we can consider ourselves, and played with the very concept of the ‘human’ as a specific thing. As the anthropologist Aaron Parkhurst points out, there is a deeply ingrained idea that the body is a sacred entity in itself, and as a result, joining the body with anything external (e.g. digital technology) corrupts it; makes it ‘unnatural’. This distinction is fickle, of course, and is fundamentally challenged by the image of the cyborg. As Donna Haraway argues in her seminal A Cyborg Manifesto, our sense of belonging and affinity ought to come not from sameness, but through differences.
People had already suggested I watch Netflix’s Altered Carbon while I was researching my master’s dissertation, as it appeared very relevant to the research I was, and still am, interested in. As a disclaimer, what I cover here is the Netflix adaptation of Altered Carbon; I have not read the novel published in 2002, so I can’t comment on that. The discrepancies in the show are most evident in season two, but first we must understand the central technology in this universe: the ‘stack’. Stacks are disks inserted into your neck shortly after birth, and they store your consciousness. Through the stacks, humanity has invented its own immortality. Despite such technological developments, however, all is not well in the world. As this is cyberpunk, corporate greed, corruption, and social strife are all widespread. This brings us to the rebellion. Led by Quellcrist Falconer, this revolutionary gang is not so much against the aforementioned social strife and hardship, but instead seeks to undo the stacks – to undo immortality.
This is exemplary of the thematic and tonal shift that has, for lack of a better word, rehabilitated cyberpunk as a genre. Falconer’s reasoning, condensed, goes thusly: the stacks were invented for good; the world, as it is right now, is not good; in fact, it appears to have gotten worse, multiplying human suffering across several lifetimes. It is the stacks that are the problem, or: it is humanity’s foray into the ‘unnatural’ state of technology-induced immortality that is at fault. We have, in a nutshell, our unnatural state of being to blame for the strife, and must therefore seek to return to nature in order to again find an acceptable balance. This ignores one key issue, however: the social structures that were present before the stacks have themselves been amplified by the technology. From Falconer’s perspective, what she wants to return to is not a time when there was no suffering, but rather a time when she was ignorant of it.
Seen from this perspective, Falconer’s revolutionary project is a remarkably conservative one, though I’m not sure this was at all intended. More nefariously, by pointing to humanity’s foray into a state of unnaturalness as the focal point of all the badness in the universe, it implies that the structures present prior to its invention are themselves in balance with this imagined state of nature. In other words, the social hierarchies, economic injustices, and political repression are painted as being natural. Through this naturalness they are implied to be, if not something good, then at least something morally acceptable. The themes that cyberpunk set out to critique have folded back on themselves: the genre suddenly defends what it once criticised and rejects what it once embraced.
Mark Fisher, in his book Capitalist Realism, argues that neoliberal capitalism’s apparent longevity, despite its tendency towards economic crisis and social strife, is rooted in its ability to repurpose critique against the system, incorporate the critique at face value, and by extension ‘declaw’ it. It is worth noting that this is not unique to the cyberpunk genre; as Fisher writes, most, if not all, cultural modes are often repurposed in this way. By producing pop culture that appears to critique the system while we consume it like a commodity – participating in the system – we buy not only entertainment but, more specifically, moral relief. The paradox, thus, is that by partaking in the system we wish to critique, we trick ourselves into feeling as if we have critiqued it, leading to a sense of empty fulfilment. My favourite raccoon-turned-philosopher Slavoj Žižek puts this in even starker terms, with Starbucks: by paying more for a cup of coffee, but being told that part of the excess we’re spending goes towards fair trade coffee, we are not simply buying the coffee – we’re buying moral relief.
In this sense, resistance itself becomes central to the system, ironically because there are always new (and pre-packaged) battles to be waged. How does this relate to technology? And more specifically: technology today? The rise of modern cyberculture is itself rooted in the countercultural movement of the 1960s and 70s, and our collective ideas of technological progress hinge on the idea that technology must fundamentally be liberating – a remarkably subversive ideal. We’re getting more and more used to living in a world where the likes of Facebook, Google, and other major corporations suffer data breaches, leak data, lose data, sell data, and try to influence our lives, politics, and so on. How our lived realities function is changing. Thus, reinventing a genre subversive to its core (why a genre needed to be reinvented, rather than something entirely new emerging, is another interesting discussion relating to Mark Fisher’s ideas of cultural hauntology; I’ve written something tangentially related to this here) while shifting the focus away from the socioeconomic system and towards a debate of technology vs. nature fulfils the need for resistance while also declawing whatever potential such resistance may have. In the end, Altered Carbon sends the message that it is not the political system in which its characters exist that is the problem, but rather technology – and even then, it is not as if we are going to be convinced to stop using Facebook any time soon.
  Selected references
FISHER, M. 2009. Capitalist Realism: Is There No Alternative? London: Zero Books.
TURNER, F. 2008. From Counterculture to Cyberculture: Stewart Brand, the Whole Earth Network, and the rise of digital utopianism. Chicago: University of Chicago Press.
DELEUZE, G. 1992. Postscript on the Societies of Control. October 59, 3–7.
HARAWAY, D. 2014. Anthropocene, Capitalocene, Chthulucene: Staying with the Trouble. Anthropocene: Arts of Living on a Damaged Planet (Presentation).
ŽIŽEK, S. 2012. The Pervert’s Guide to Ideology (transcript/subtitles).
The COVID games, or: why Sweden must lose.
[Cover image: a callback to the famous Swedish WWII propaganda poster “En Svensk Tiger” – a wordplay meaning both “a Swedish tiger” and “a Swede keeps silent” – here featuring a raccoon, or tvättbjörn in Swedish, which literally translates to “wash-bear”.]
All makt åt Tegnell!1 – or so one might think the crowds chant in Sweden. HRH King Carl XVI Gustaf can move over; the State Epidemiologist is now running the show. At least this is the picture one might get when reading much of the media criticism of Sweden’s approach to the COVID-19 pandemic. Sweden has set itself apart from most other countries, European or otherwise, in its response to the pandemic, specifically by taking a much softer approach. This ‘lunacy’ is likened to driving a speeding car towards the edge of a cliff – while blindfolded. The so-called “Swedish Experiment” has included much softer measures, trusting the people of Sweden to follow social distancing rules and to deploy a people’s common sense [Folkvett], to quote our Prime Minister, in their day-to-day life. Much of Swedish society continues unhindered. Most schools remain open, with the exception of colleges and higher education; most offices are open, though it is recommended to work from home if possible; bars, restaurants, and shops have been told to enforce social distancing measures, but remain accessible. Sweden hasn’t even formally closed its borders (though it should be noted that all nations bordering Sweden have closed theirs, de facto closing Sweden’s borders). Is this response a gross misreading of the gravity of the situation? Is Sweden, its governmental and state bureaucracies, simply inept when it comes to dealing with such a crisis? Is Sweden carrying out a large-scale social experiment on its population, one likely to end in tragedy? These are some of the questions I hope to answer here.
I argue here that the reaction to Sweden, and the dubbing of its response as the “Swedish Experiment”, is another case of what other scholars and analysts have dubbed Sweden-bashing: a means of being overly critical of Sweden’s (perhaps ironic) nonconformity when responding to global crises, and one often deployed to demarcate sides. This is something that goes back to the Cold War and has taken many forms over the years, including criticisms of Sweden’s non-alignment during the Cold War, of Swedish state intervention and social programmes, and even of the response to the 2015 refugee crisis. It is important to consider that this discourse is often, though of course not always, external, and typically comes with ulterior political motives. This is something I will touch upon later, but for now we need to unpack not what Sweden’s response is, but why it is what it is.
  Culture, risks & hazards.
The criticism levied against Sweden often hinges on risk. Either Sweden is taking too big a risk with its response, or it is not taking the risks seriously enough. For a more detailed discussion of the maths behind the pandemic in Sweden, read this excellent post by a friend of mine; here we will look at the social and cultural processes behind Sweden’s response. First, we need to define some terms: the difference between risks and hazards. The pandemic is better understood as a collection of hazards – things with negative societal implications (social instability, economic damage, death, and so on) – while the risks associated with the pandemic are the perceived chances of various hazards coming to pass. However, both risks and hazards are culturally coded, insofar as what we perceive as a risk or a hazard depends on our social and cultural positionality. In short, ideas of risk produce ideas of hazards. Take an extreme example: a community lives at the edge of a cliff, and has done so for generations; the cliff is an integrated part of their social and cultural milieu. Because the community has organised itself in such a way as to reduce the chance of people falling off the cliff, living at its edge is perceived to carry minimal risk. And with the risk being (near-)zero, the cliff is not conceptualised as a hazardous object. Risk, in other words, produces hazards, and if the risk isn’t conceptualised as such, neither is the hazard.
How does this apply to the Swedish response? Surely the pandemic itself is conceptualised as a hazard? This is correct – or, more accurately, as a collection of hazards: health hazards, social hazards, economic hazards, and so on. First, we must look at the Swede’s relationship to the state. In their book Är Svensken Människa? (Is the Swede Human?), historians Henrik Berggren and Lars Trägårdh define the Swedish sociopolitical hegemony as “state-individualism”. Through cooperation between State and People, a strong state-controlled framework is created in which the People exist and operate on remarkably individualistic (sometimes bordering on libertarian) assumptions. This symbiosis may appear contradictory, but it rests on a deeply rooted cultural trust between State and People – likely with its roots in Sweden’s social democratic tradition. Most relevant to the discussion here is the strong will to cooperate with the State, making most forms of incentive through coercion unnecessary. Cooperation as the operative function between State and People creates a specific framework in which risks and hazards are understood.
Let’s turn to a specific example in the Swedish response: no lockdown. While some restrictions have been introduced, as mentioned above, daily life in Sweden continues largely unhindered. Shops are open; people go to restaurants; (most) children go to school. The foremost hazard to be avoided – in Sweden and elsewhere – is an uncontrolled spread of the virus that may result in a collapse of the healthcare system and a skyrocketing death toll. From the perspective of an uncooperative population, harsh lockdown measures seem to make sense, despite the other hazards that come with more draconian responses (social, economic, mental health, to name a few). With a cooperative population, on the other hand, the risks and hazards of allowing society to remain open shift, and a complete shutdown instead presents a larger risk and hazard than working with the proverbial People. In the Swedish context, a shutdown presents stronger economic and social risks than self-policing presents risks of uncontrolled spread.
  Experiments?
The above example considers just one dimension of the Swedish response to the spread of COVID-19, but it nonetheless serves to paint a wider picture. The so-called “Swedish Experiment” is not so much a State response as it is a National response, insofar as it is underpinned by a strong sense of cooperation between the State and the People. The pervasive narrative of Sweden as being either particularly unprepared or otherwise not taking the response seriously does not hold up to scrutiny. If anything, an argument can be made that Sweden’s response is one of the more thought-through responses, as it incorporates a strong understanding of the country’s own societal structures – not to mention the State’s confidence in sticking to its plan despite the external criticism levied against it.
Calling the Swedish response an experiment seems to imply that what everybody else is doing isn’t one. In other words, the typical responses we’ve seen – lockdown, coerced social distancing, the closing of shops, offices, and schools, and so on – are implied to be tried-and-tested methods, while Sweden’s softer approach is painted as the result of some crazed social experiment; all this in spite of there being no randomised controlled evidence that supports lockdown as beneficial to halting epidemic spread. It is more accurate to say that all nations are experimenting with their responses. A pandemic of this scale has not been dealt with in living memory, so there are no “tried-and-tested methods”. Everything is an experiment. The general anxiety about the future illustrates this point succinctly: When will lockdown end? What will happen to the economy? Will there be a second wave? The answer to these, and many more, questions is that we simply do not know. If everything is an experiment, and even the most widely used response comes with a high cost and plenty of uncertainty to boot, how come Sweden is so strongly criticised?
The famed anthropologist Mary Douglas, in her work on risk and social meaning, argued that while the underpinning function of risk-thinking is to limit ‘bads’ from happening to a community, it more specifically limits such outcomes by controlling social behaviour. The individual who takes unnecessary risks is chastised and can therefore be upheld as an example to the community as a whole: don’t behave this way, because it is risky, and even if it doesn’t result in a bad outcome this time, you are nonetheless castigated for acting out of line. The reaction to Sweden’s response is not so blunt as a 1:1 comparison; I rather think it more subtle than that. Sweden is heavily criticised not to bring Sweden in line with other responses, but rather to show the necessity of stricter responses to the pandemic. In other words, Sweden is used to justify one’s own response to the pandemic – as a stand-in for the individual in a community who is behaving recklessly. By reacting like this to the Swedish response, by painting it as an outrageous experiment, you serve to justify your own harsher forms of lockdown or social distancing. Sweden becomes the stand-in for ‘that guy’.
This mentality – justifying in-group behaviour by creating a narrative of a deviant Sweden – is not only incorrect, but also dangerous. As outlined above, each country faces different challenges when responding to a crisis like a pandemic: perhaps it has an ageing population, or an underfunded healthcare system, or high unemployment, or a culture of large families living together. Each factor, and many more, will invariably affect the necessary response, and as seen in the graphs below (taken from the Financial Times), every country has a unique curve:
[Figure: country-by-country COVID-19 trajectories, taken from the Financial Times.]
Therefore, each country is required to have a response that fits its own circumstances, and comparing statistics across widely different countries, responses, and circumstances before everything is over doesn’t truly tell us anything about how (in)effective any single response has been. By creating a narrative in which there is one ‘perfect’ response (harsh limitations, state coercion, and so on), you assume that any response outside of this narrow definition is wrong. The fallacy at play here – “we must be right, therefore they must be wrong” – is potentially detrimental or even dangerous, as it may serve to shut out more contextually effective responses to the virus. Experimentation, counter to how it is typically presented, remains a necessary component in finding the best solutions to such a pandemic – and it is something we are all engaged in, whether we like to think of it as such or not.
  Room for improvement.
All this said, even with the above in mind, we need to be mindful not to uphold Sweden’s response as perfect. There have been shortcomings, mistakes have been made, and more will likely be made. It is the nature of the situation, but this is true for any country right now. In the case of Sweden, not communicating the potential threat early on resulted in a slow response in some areas, specifically in banning visits to old people’s homes – places which have now been decimated by the virus. The response has also largely assumed a monolithic Swedish population – in other words, assuming that all in Sweden fall under a normative Swedish cultural framework – often forgetting marginalised groups such as ethnic minorities, or people in poverty who struggle to comply with social distancing measures due to cramped housing or an inability to work from home. The response hasn’t been perfect, and it would be intellectually dishonest to argue this to be the case.
Besides, there are also important ethical questions that must be discussed. Sweden’s response appears to rest on the assumption that a particular number of people will die due to the pandemic and that nothing can be done about this. The conclusion, therefore, is that it is better to have these deaths take place in the short term rather than the long term. It also (seemingly) operates with herd immunity as the end-goal, to avoid a potential second wave of the pandemic after the summer months. These are dilemmas that must be discussed and dissected – not only for their scientific accuracy, but also for their underlying ethics. This points towards the elephant in the room: it is too early to tell whether the Swedish response (or any other) is effective at all, and these conclusions likely cannot be drawn for another year, if not longer. The point, however, is that just because one response to the pandemic is different from many others doesn’t necessarily mean it is (or will be shown to be) ineffective.
  Conclusions.
The Swedish response, dubbed the “Swedish Experiment”, has been largely chastised for being a non-response, a bad response, or otherwise one lacking in ethics. However, it is important to understand that every nation facing a threat such as COVID-19 comes at the issue from a different starting point, and while the end-goal is to minimise the hazards posed by such a pandemic, neither the path to success nor the end-goal itself is singular. We would do well to realise that while we are all experimenting in our responses, we should incorporate various social and cultural factors into the response in order to create one that is as effective as possible for that specific context. The danger of chastising any response that doesn’t fit a perceived linear narrative is that we miss out on truly effective responses, as such chastising serves to limit thinking beyond that narrative. Sweden’s response may be unorthodox, and it isn’t without its problems, but rather than focusing on that specifically, it might be worth taking inspiration both from Sweden’s response to the pandemic and from the State’s willingness to craft a response that incorporates an understanding of its relation to the People, and of people’s relation to each other. There can never be a single silver bullet for a situation like this; more optimistically, there may be many.
1: A meme that spread across Swedish social media; it’s a play on a line from Astrid Lindgren’s children’s book The Brothers Lionheart, in which the forces of the main villain, Tengil, use the greeting “All makt åt Tengil!”.
Edited for clarification 13/04/2022.
Selected references
  Fox, N. J. (1999) Postmodern reflections on ‘risks’, ‘hazards’ and life choices. In: D. Lupton (eds.) (1999) Risk and sociocultural theory: new directions and perspectives. Cambridge: Cambridge University Press. Pp. 12-34.
  Lupton, D. (2005) Sociology and risk. In: D. Lupton (eds.) (2005) Risk. Abingdon: Routledge. Pp. 11-24.
Berggren, H. and Trägårdh, L. (2015) Är svensken människa? Norstedts: Elib.
  Marklund, C. (2016) From “false” neutrality to “true” socialism: US “Sweden-bashing” during the Later Palme Years, 1973-1986. In: Journal of Transnational Studies 7 (1).
Special thanks to Prof. David Goldsmith for his insight into the numbers behind the pandemic in Sweden. Read more here!
Old Pandemic, New Future
"There is great chaos under the heavens; the situation is excellent" -- Mao Zedong
Even a broken clock is right twice a day, and I think we may have found one of the two times Mao was correct. If you want to produce societal change, few things are as effective as a good dose of crisis. In the context of Mao and wider revolutionary politics, the fact that a revolution often stems from chaos is tautological to some, though let's not forget that chaotic times affect all societies - and only a minority of these end up with a 'typical' political revolution. Wars, natural disasters, famines, and of course epidemics and pandemics have often left the world both scarred and changed. These capital-E Events create a break with the past and require those who live through them to reimagine the world, to understand it differently - and it would be naïve to imagine we will go through the current COVID-19 pandemic without permanent changes to society and culture. Of particular interest right now, however, is the social response. While the virus is unquestionably deadly, it is far less deadly than some of the historical headline pandemics. We are, in a sense, going through the motions of an apocalyptic Hollywood-esque event without effectively risking that sort of cataclysm (though, for anyone who loses a loved one, such a tragedy should never be diminished). In addition, this pandemic has highlighted how unprepared society at large appears to be for these sorts of shocks, and it therefore has the potential to eventually raise more fundamental questions - what should society be to avoid this in the future? What should our future be?
Let me explain. Firstly, we need to understand hauntology. Originally coined by Jacques Derrida, the term posits that we never experience things as fully present; they are always either reflected through the past or distorted by the future. We can only make sense of any present moment by comparing it to the past and by anticipating the future. A very basic example of this is music. Taken in isolation, any one note lacks melodic quality - it is only by comparing a note to previous notes and anticipating future notes that we make any sense of the melody itself. The melody is never fully present but only emerges through an interplay of past, present, and future. Taking a step back, all our experiences are like this: we can only make sense of the present by looking to the past and anticipating the future. In this sense, our experiences are haunted - by that which no longer exists, and by that which does not yet exist. Hauntology, as a word, is itself hauntological (to be really meta about it), being a mixture of "haunted" and "ontology", ontology being the philosophical study of the nature of being. This becomes even clearer in French, Derrida's native language, where the H is silent, so hauntology and ontology are pronounced the same. The only way to tell the difference between the two is by writing them down; the H therefore gains a hauntological quality.
The way hauntology is applied to culture is of course far more specific. The cultural theorist Mark Fisher, one of my favourite thinkers, popularised the term in cultural theory. He argued that our pasts haunt how we imagine the future, something typically most visible in our popular culture. Specifically, he referred to how we - often paradoxically - turn to our pasts to relive our anticipation of futures that never were. What Fisher effectively argues is that because of neoliberalism we have reached a cultural impasse; we are incapable of imagining new futures. Neoliberalism demands short-term solutions, and as a result new imagined futures are never created. Even in areas where development is accelerating, technology for example, these new developments don't open up new modes of cultural exploration, but rather are subservient to pre-established cultural modes. In this sense, Fisher argues, it is only logical that we have turned to nostalgia: reboots, remixes of old music, or aesthetics that seem to return in cycles. And the new technology that exists today is subservient to these cultural modes, often - again paradoxically - using new techniques and practices to reproduce the old. In Fisher's own words, "the future has been cancelled." What Fisher writes here harkens back to Fredric Jameson and Slavoj Žižek's notion that "it is easier to imagine the end of the world than the end of capitalism", insofar as capitalism, and specifically neoliberalism, has positioned itself as the "be-all-end-all" ideology. There is nothing beyond; there cannot be anything beyond.
  The future is indeed cancelled.
If we can imagine the end of the world but not the end of capitalism, what happens during the dress rehearsal for the apocalypse? We live, so we are told, in a post-ideological society. We are, as Žižek puts it, no longer subjects of duty - serving society, expected to sacrifice ourselves if needed, and so on - but rather subjects of pleasure. What we do is chase our pleasures, consume our pleasures, and through that we serve society. Just as neoliberalism has made new developments subservient to the wider ideological hegemony, so too has late capitalism made 'the world' conceptually subservient. Capitalism isn't a part of this world; this world is a part of capitalism. Functionally, however, it is clear that this is not the case. Recurring crises - economic, environmental, and now pandemic - show that late capitalism is indeed not a superstructure, but rather maintains an illusion of such - a form of internalised ideology. What we're seeing here is a sort of reversed "equivocation", to borrow a word from Eduardo Viveiros de Castro, in which two conceptually distinct words have come to be understood as ontologically the same. This, of course, creates a paradox.
As we continue with our dress rehearsal for the apocalypse, it has become abundantly clear to me - and I believe to many others - that our society is simply not geared towards dealing with these sorts of wide-spanning crises and shocks. This unpreparedness is both systemic, as the system itself seems largely unable to absorb an external shock, and social: people are largely unable to make sense of what is going on and instead attempt, as much as possible, to continue as they have. Indeed, going back to Fisher, what are the alternatives? With capitalism as a system and the world as such being seen as effectively the same, seeing the façade of our world crack will invariably damage how we view capitalism as a system. Perhaps, then, there will be an apocalypse? A world will be destroyed in our combined cultural imaginations - and the death of such a world will hopefully leave space for a new one. What exactly might come from this is impossible to speculate on. Some of the slew of social changes that have already taken effect are likely to remain, and these will, in turn, produce additional effects as social conventions are reimagined, reconstituted, and codified. It is even possible, of course, that the changes and the new future that may come of this will be worse than what we have today - a factor that shouldn't be ignored. However, the hopelessness of Fisher's hauntological analysis might finally be shattered. With this crisis, we may be able to dream once more of a new future - and as long as we can imagine, we can at least give ourselves hope.
Selected references:
BADIOU, A. 2003. Saint Paul: The Foundation of Universalism (trans. R. Brassier). Stanford: Stanford University Press.
FISHER, M. 2012. What is Hauntology? Film Quarterly 66(1), 16–24.
FISHER, M. 2014 Ghosts of My Life: Writings on Depression, Hauntology and Lost Futures. London: Zero Books.
VIVEIROS DE CASTRO, E. 2004 Perspectival Anthropology and the Method of Controlled Equivocation. Tipiti 2, 3–22.
ŽIŽEK, S. 2012. The Pervert’s Guide to Ideology (transcript/subtitles). Found online here: https://zizek.uk/the-perverts-guide-to-ideology-transcriptsubtitles/ (Last accessed: 16/03/2020).
Part 2 - The future was yesterday.
This is the second (somewhat delayed) post on microchipping, transhumanism, and techno-utopian ideas, based on my fieldwork in Sweden. The first part can be found HERE, and the introductory post HERE. Enjoy, and stay tuned for Part 3!
The sun was shining over Sweden; summer, it seemed, had finally come. The beaches along the city were packed with people of all ages enjoying the cool sea and the bright skies. I was sitting at a café in the marina waiting for Jacob, a local Helsingborger and entrepreneur who had been one of the early proponents of microchip implants in Sweden - though he later informed me that he had stumbled into the whole thing as an off-shoot of his interest in body modification (something made abundantly clear by several visible tattoos and a handful of piercings). His reason for pursuing these particular implants? Childhood sci-fi, of course! But it did not take long for the implants to morph into something more than a futuristic 'cool factor'. Soon they began to represent the gateway to a new future - a better future, as he saw it. It was a future he needed to work for, to be involved in. "This technology, and many others, will inevitably shape the future," he told me about the current implants, "and even if it's not these chips specifically, I hope they develop further, it starts here. Or it started here, I should say." Tomorrow was already being shaped, being outlined, being developed. "Sure, I'm a businessman. I like to see myself as a pragmatist. I don't care much about the ideology here - biohacker, transhumanists, whatever. Not my problem. It doesn't matter. There are bigger things to worry about than infighting over details." Despite this pragmatic approach, he made it clear to me that he wasn't just in it for the money, though that did without a doubt help: "I want my little ones to grow up in a better world than we have - and sure as hell not one that's worse". Despite the inherent techno-optimism - the recurring conviction across most people I spoke to that technology can reshape the world for the better - it also began to emerge that this future world wasn't a given. It wasn't necessarily bound to happen. What was inevitable was technology and its development, but not its application.
The future needed to be built, and to some extent, fought for.
These techno-utopists weren't just philosophers waiting for the inevitable to come to them. Ideologically speaking, it was even frowned upon not to have any practical involvement. Technological direct action was the order of the day. This involvement could take many different shapes - for example, like Jacob above, entrepreneurialism. Some people were running community events, while others fancied themselves inventors. The possibilities and overlap were, of course, endless. What mattered the most, however, was your involvement. A person who exemplifies this mindset is Ethan - in his own words, in the midst of his "small side project" to crack telepathic communication. The sci-fi inspiration is very clearly visible, Ethan even saying as much, having drawn inspiration from the video game series Deus Ex. It is also important to understand that this wasn't some crackpot idea; Ethan had undoubtedly done his research, as outlined in the previous post. So, he set off to create a new polymer that should in theory be sensitive enough to pick up on this - mixing chemicals in his student dorm room. Though some of this makes Ethan sound a bit mad, I think it is important to highlight the absolute seriousness of the endeavour. His solution may not be perfect, but the idea that he can create something - find some solution - that someone else can then build upon is central to the ideology. It is how the community itself engages with the world to build the future they hope for. In Ethan's own words: "some people come to the forums, or a YouTube comment section, and just talk about how amazing this or that would be. Well, have you done anything? No? Your ideas aren't that original, so at least try to make something with them. Try to make a difference, so that these things can actually become a reality."
Despite this focus on doing, there remains a sense of the inevitable wrapped up in the overall worldview. It is, however, related to technological development, and not so much to what technology is used for. That is what needs to be ensured. If looking at the world today is anything to go by, according to my informants, we have a lot of work ahead of us. As things stand now, innovation is understood to be too centralised in the halls of governance, or among multinational corporations. In this sense, remaining uninvolved is not only naïve but dangerous. Given technology's near-limitless potential, leaving it to "somebody else" to develop and innovate is tantamount to signing away any claim on the future - especially if this power is concentrated among global corporations and centralised states, often understood as having dubious intents at best. This may come across as somewhat paranoid, but my informants made the case that one only needs to look around: recurring data leaks, targeted ads, tracking technologies - all of this linked to a phone you almost invariably carry with you. Add to this that we often lack insight into how these many systems (from targeted ads to location tracking, and beyond) function. This "black boxing" serves to obscure the true extent to which digital practices might affect us and society beyond us. In short, the case that was made to me is that as much as these potentially Orwellian technologies are already upon us, so is the digital extension of ourselves. The data gathered constitutes what my informants, and other researchers, call a digital body, or data body. As one phrased it: "I think all of this would change if people saw data as themselves. It's not just their data, and it's not just data, it is them." Put bluntly, there is no ontological difference between our selves and our data selves. The great tragedy of our predicament is, therefore, that we are effectively cut off from these digital extensions of ourselves by the centralised nature of current digital practices. We cannot understand ourselves as fully free subjects as long as we are tethered to a data-body in chains.
The solution is quite straightforward: a democratisation of technology. This is, in effect, what the practical involvement outlined above is all about. This countercultural streak is by no means new within these types of movements - biohacking, DIY science, and other forms of open science often operate with an intent to resist and change currently established structures. Even the very early days of Silicon Valley were shaped by a willingness to stand against what was then seen as hyperrationalised social structures - in simpler terms, a bureaucratic system where the individual has no choice but to follow the path the system expects them to take. In short, imagine the rigid social expectations of the 1950s and 1960s. This approach, sometimes called "soft resistance", a means of subverting already established practices, is still as present today as it was at the height of the countercultural movement in the 60s. The focus has of course changed. The concern with reconnecting with your digital body is a symptom of the wider systemic resistance: specifically, a resistance against a power structure which has, unlike that of the 1950s and 1960s, melted into the background. I mentioned above that Silicon Valley used to be a centre of this anti-bureaucracy rebellion, specifically hoping to turn technology against these structures. The late Mark Fisher, aka K-Punk, a fantastic cultural theorist, outlines in his book Capitalist Realism that capitalism as a system is outstanding at incorporating critiques against it and declawing them. Continuing with the Silicon Valley example: despite its origins as a countercultural project, this too has been incorporated by capital. Capital has assimilated, as Delfanti summarises, "the normative [drive] towards horizontality, participation, cooperation, giving, flat hierarchies and networking" without disrupting its systemic reproduction. In other words, it has 'declawed' this former resistance - but as is evident from our data selves, the socially oppressive tendencies of the past remain, even if they are obscured.
More concretely, this manifests itself in the archetype of the comprehensive designer. This figure, itself a callback to the 'original' counterculture, was coined by R. Buckminster Fuller, a leading thinker at the time. The comprehensive designer is not an engineer, nor a designer or inventor; neither an academic nor an architect, not a captain or a leader. Rather, they are all of these, while at the same time none. There exists an inherent universality to the comprehensive designer, who is understood as having the ability to step outside of the system and see it as a whole. This not only makes them highly adept at understanding society, but also exceptionally well suited to changing it, specifically because they're capable of disengaging from the bureaucratic structures that invariably need to classify them. I would argue that no one can step outside of their own cosmology or ideology - indeed, as Žižek says, "the tragedy of our predicament when we are within ideology is that when we think that we escape it into our dreams, at that point we are within ideology" - but this is not the point. Instead, the comprehensive designer acts as the ideal individual subject my informants are aiming to be. The comprehensive designer is, in effect, the resister par excellence.
  Selected references:
  DELFANTI, A. 2013. Biohackers: The Politics of Open Science, Pluto Press.
FISHER, M. 2009. Capitalist Realism: Is There No Alternative? Zero Books.
  FULLER, R. B. 1963. Ideas and Integrities: A spontaneous autobiographical disclosure. Englewood Cliffs, NJ: Prentice-Hall.
  LYON, D. 2010. Liquid Surveillance: The Contribution of Zygmunt Bauman to Surveillance Studies. International Political Sociology 4, 325–338.
  NAFUS, D. & J. SHERMAN 2014. This one does not go up to 11: The quantified self movement as an alternative big data practice. International Journal of Communication 8, 1784–1794.
  TAMMINEN, S. & E. HOLMGREN 2016. The Anthropology of Wearables: The Self, The Social, and the Autobiographical. Ethnographic Praxis in Industry Conference Proceedings 2016, 154–174.
  ŽIŽEK, S. 2012. The Year of Dreaming Dangerously. Verso.
ŽIŽEK, S. 2012. The Pervert's Guide to Ideology (transcript/subtitles). Found online here: https://zizek.uk/the-perverts-guide-to-ideology-transcriptsubtitles/ (Last accessed: 16/03/2020).
  Cover image source:
Arthur Sadlos @ Deviantart: https://www.deviantart.com/artursadlos/art/Cyberpunk-City-683952796
‘Did my phone buzz?’ - Social media & distraction.
Social media is a funny thing - many polemics have decried what our feeds, apps, and accounts are doing to our minds: our ability to focus, to think, to reason, or to form genuine connections. "Oh, won't anyone think of the children?!" the Void is asked. Some of these arguments are without question important to consider - it would be very naïve to pretend social media had no effects at all - and once considered, we may find that some of the effects social media has on society aren't preferable ("Oh, won't anyone think of our democracy?!"). Perhaps not surprisingly, just as many texts have come to the defence of social media, highlighting its liberating potential, its ability to keep communities connected, to produce connections between diaspora communities and their ancestral homes, or how it may increase our ability to create meaningful connections in a world that is far more globalised than it's ever been.
With this text, I am not so much writing a piece against social media, nor am I coming to its defence as such. Instead, I have been hoping to get a better understanding of how social media is distracting, mainly by tracking my usage, observations, and interactions with it. I have done this mainly in the form of a fieldwork journal, often with several entries per day. The data I collected through my little autoethnographic experiment became the foundation for this whole analysis, though of course, I have fleshed it out with work by other scholars as well.
Much of the debate around social media and its distracting elements focuses on algorithms, format and design, feedback loops, notifications, and so on. Algorithms, in particular, are painted as the ubiquitous "bad guy" in any scenario, with some compelling arguments even being made that users operate from the fear of not being seen in others' news feeds - a terrifyingly Deleuzian concept. Others yet worry that we are being hypermodulated - that so much information is being thrown at us at such speed that we cannot create a cohesive social narrative amid the disjointedness of newsfeeds. Eventually, our attention span is ground to dust.
While my own ethnographic experience doesn't necessarily disprove any of the above, it does point towards another 'driver' in the distraction game. To best explain this, it's important to look not just at how I use social media today, but at how such usage has changed - how it differs - from the past. I've always found it difficult to ignore messages - this is what they're designed for, I suppose - but there's been a very clear change in how often this interrupts or distracts me from whatever I am doing. It was far less of a problem five or ten years ago - but that might just be stating the obvious. We have more apps than ever demanding our attention. All of it has simply become more: more Facebook notifications, WhatsApp, Instagram, Twitter, Reddit, and the list goes on.
However, the point I'm trying to make here is not one of volume. Of course, distraction becomes far more prominent the more things there are to be distracting; this goes without saying. What I want to get at is not the individual distraction produced by individual apps, but rather how 'social media', understood in a more holistic sense, creates an even larger pull - how apps link, connect, overlap, and seamlessly allow you to switch from one to the other: receive a WhatsApp message with a Reddit link, which you later share through Facebook Messenger (now a separate app from the main Facebook app), perhaps even downloading the picture from Reddit and tweeting it out. All of these available avenues, and the remarkable overlap between them, lead, from a practical standpoint, to 'social media' being one 'thing' rather than separate platforms. I have decided to call this particular development 'platform collapse', hearkening to danah boyd's work on context collapse (the process by which distinct social contexts collapse into one on social media networks).
Having tracked my own social media usage, it has become clear that I, and many people I interact with through social media, use it as a bounded whole rather than as individual platforms. All the while, this 'collapse' can be materially represented by one simple object: the smartphone, through which most of these interactions take place. However, there is still a social aspect that ties this whole nexus of platforms and connectedness together: the general expectation to always be connected, to always be reachable. Put simply, sociality is just as important as how it is communicated. In practice, the process of 'platform collapse' functions as a force multiplier: more people can reach you through more channels, with a socially (or arguably culturally) weighted expectation of at least a swift response.
In spite of some sort of interaction with social media being the first and last social act of almost every day, it is worth noting that not all social media interactions are created equal. Though social media as a whole is tethered to these distracting elements, it is possible to ignore certain people, emails, or calls. As Danny Miller puts it, there is an ideology of friendship. In my own ethnographic experience, some friendships require a sense of constant, yet passive, interaction - similar to what the scholar Mirca Madianou has called ambient co-presence. Having family in Sweden, Malaysia, and America, not to mention friends across even more countries, social media is perhaps the only means of effectively maintaining certain close friendships or specific familial ties. These, therefore, have a stronger distractive potential than others.
Throughout this small auto-ethnographic experiment, I have grounded social media's distracting potential mainly in two phenomena: what I have called 'platform collapse', and the ideology of friendship. Together, these two aspects create a nexus in which individual platforms, apps, connections, and means of communication begin to break down, only to turn into a larger, holistic 'social media'. The ideology of friendship, in turn, often experienced through forms of ambient co-presence, adds a layer to this always-connectedness, making it near impossible to ignore. Though algorithms, hypermodulation, and surely many more factors have undeniable effects on distraction and attention, what this short piece points to more specifically is what pulls me to social media interactions, rather than what keeps me interacting.
Perhaps without this social media distraction, I would finally get around to reading War & Peace.
But what would the social cost be?
Selected references:
Bucher, T. 2012. Want to Be on the Top? Algorithmic Power and the Threat of Invisibility on Facebook. New Media & Society 14 (7): 1164–80.
Costa, E. 2018. Affordances-in-Practice: An Ethnographic Critique of Social Media Logic and Context Collapse. New Media & Society 20 (10): 3641–56.
Madianou, M. 2016. Ambient Co-Presence: Transnational Family Practices in Polymedia Environments. Global Networks 16 (2): 183–201.
Miller, D. 2017. The Ideology of Friendship in the Era of Facebook. HAU: Journal of Ethnographic Theory 7 (1): 377–95.
Pettman, D. 2016A. Hypermodulation (or the Digital Mood Ring). In Infinite Distraction: Paying Attention to Social Media. Cambridge: Polity.
Pettman, D. 2016B. Slaves to the Algorithm. In Infinite Distraction: Paying Attention to Social Media. Cambridge: Polity.
Hippies, communes, Star Wars, and the future.
While I am duct-taping my life back together after Christmas and New Year - and with a to-do list longer than a Leonard Cohen song - I wanted to do a quick-fire piece to bridge things over while I am completing the next part of my Microchipping series. Enjoy!
I’ve also endeavoured to keep spoilers to a minimum - but read at your own risk all the same.
An old man hangs from a mechanical arm that moves him like a puppet in his underground lair. His eyes are white, faded, clearly dead. Before him, our two heroes lie, burned by the lightning shot out of the Emperor's hands. He cackles at them as he realises he no longer needs to convert either of them to the dark side - he no longer needs to steal one of their bodies to continue his maniacal reign. No, he announces, with a bond between them so rare and so strong, he can corrupt that power and fully restore himself. He can circumvent what the mechanical puppet-arm only succeeded in doing as a pathetic imitation. With such a rarity, he can bend nature to his will.
He will corrupt it.
This is at least how I remember one of the scenes at the climax of Star Wars: The Rise of Skywalker. I finally had the opportunity to watch it over Christmas, and it left me with a lot of questions - many of which I shan't air here. Nonetheless, after speaking to family and friends about it, one recurring comment (again, among many that I will not mention here) was the perceived absurdity of the story itself. It came across as ridiculous, a rehash, or simply uninteresting. A part of me was inclined to agree - where had I seen, read, or heard this story in the past? Was it the original trilogy? Probably; after all, several story beats are clearly the same - but there is more to it.
This is when I realised: it was literally a story from a different era. A different time. It was, in many ways, based on the same foundation as the original trilogy, and this foundation was and remains a very particular product of its time. In essence, the story is a conflict between those who use technology as a means of controlling what is around them - getting further from 'Nature' (to 'transcend' nature, if you will) in order to control it - and those who seek to use technology and innovation as a means of getting closer to nature. The Sith and the Jedi very clearly embody this dichotomy, something made clear in the scene I outlined above: machines (implied to be an unnatural and pathetic imitation of life) versus using the very rare, special, and most importantly 'natural' bond between our two heroes as a resource to 'fix' the Emperor's predicament.
The concept of using technology to control, reshape, or otherwise 'change' what we perceive to be the natural order around us is not particularly novel today. However, the idea of using technology - something often imagined as being squarely in the realm of the man-made, and therefore 'unnatural' by definition - to get closer to nature might be a bit tougher to grasp. It's important to appreciate that the Jedi and their allies are not Luddites. There's no rejection of technology here - after all, the lightsabre is still pretty damned advanced. However, structuring the Jedi and their duties, aims, rules, and so on as a monastic order is exactly the point. They do not reject technology, as long as it furthers their understanding of what 'becoming one with the Force' means. This is summed up by Obi-Wan when he calls the lightsabre "an elegant weapon for a more civilised age" - the mass production and industrialisation of conflict surely horrified him.
The original Star Wars trilogy was very much conceived in a time when this particular perspective was not only emergent but beginning to take deep hold of a particular scene on the West Coast of the US. In his book From Counterculture to Cyberculture, Fred Turner outlines how today's ideas of technology as something fundamentally and inherently liberating developed - perhaps most clearly seen in what other scholars have called "the Silicon Valley ideology". Perhaps surprisingly, in the 1960s the computer (though but a distant great-great-great-something grandfather of what we use today) was understood as something far more oppressive. During the 1960s Berkeley student protests, for example, students wore punch cards around their necks to signify how they were nothing more than cogs in an ever-growing machine, one which forced them to do whatever the wider system demanded. The development of the computer was seen as heralding a new stage of bureaucratic centralisation, one which struck fear into the hearts of these young students: a new dawn of mechanisation (or digitalisation) that would crush their dreams.
With the emergence of the 1960s and 1970s countercultural movements - in particular, the hippies - an ideology of 'returning to nature' began taking root. The enemy, if you will, remained centralised bureaucratic power. This sparked the founding of many, many communes on what was understood to be untouched land, with the express purpose of reforming society from scratch. It was among these communalists that the liberating potential of technological innovation first found fertile soil. New structures and materials (famously, the geodesic dome) and simple computing devices (as basic as calculators) mixed with the ideas and ideals of writers like Buckminster Fuller, with systems thinking, and likely with a shedload of LSD; all that was needed was something to tie it all together - the Whole Earth Catalogue. Though I won't go into the details of the WEC here (it's a really interesting story, though!), suffice it to say that the Catalogue functioned like a proto-social-networking tool, mixing all of the above ideas with ads for the latest tech or tools for these communards. It, in effect, tied the fabric together into what can largely be understood as a movement (or network forum; see my text here for more detail).
The point is, as these communes failed one by one, these hippies eventually returned to California - and in particular the Bay Area - and brought these ideas back with them. Compressing the history somewhat: these ideas spread throughout this particular social stratum, allowing a whole new positive and liberating view of technology to take hold. Suddenly, computers and the technology of the future would not enslave us under the yoke of technocratic oppression; they would emancipate us in ways we could never even imagine - some even imagined they would usher in a new era of post-humans: humans, but more.
In George Lucas' original trilogy, it becomes very clear why this Sith-Jedi dichotomy remains so strong: thematically, it ties into the whole debate and ideology outlined above. And in 1977 it was still very much a live debate - technology was, by our standards, still deeply rudimentary. The questions remained in the air, and it is clear which side Lucas stood on. After all, one side uses technology to oppress a whole galaxy, committing genocide and forcefully enslaving just about anything that isn't them; the Empire also drew strongly on Nazi imagery. The Rebels, though no Luddites, use technology as a means of liberation, as a path to awaken the connection between people (of all races, species, etc.), to bring us closer to one another. In Episode VI, with the help of teddy bears in a redwood forest, they not only destroy the Imperial forces but allow the Jedi to re-emerge as an order - is this not clear signposting of nature reclaiming its central position, signalling to the Empire that it cannot win (that it by extension invokes ideas of the 'noble savage' is even less surprising)?
The problem with this is, of course, that the world has moved on. As far as technology and innovation go, the techno-utopian ideology of Silicon Valley has largely won out, and as a result, the concern about technology vis-à-vis nature or society has shifted. We are now more afraid of algorithms tracking our movements and usage, of the gathering and sale of personal data, of the influence and manipulation that automated systems might mete out. The current top-of-mind concern is not that technology will bring back goose-stepping soldiers in black leather trenchcoats, but that it will oppress us through very different means - and perhaps even oppress us without our noticing, until it is too late (assuming we realise at all). The popularity of shows like Black Mirror very clearly reflects what, within the current popular discourse around technology, concerns us.
Instead, The Rise of Skywalker very much returned to the old framing: violently oppressive technology versus a more humanising technology. This debate is in many ways dated; it doesn't connect with the current cultural imagination and instead alienates its audience. Good fantasy and sci-fi holds up some sort of mirror to society and critiques it in some way or another (some more subtly than others) - The Rise of Skywalker, and in many ways the rest of the 'new' trilogy, holds up a mirror to what society was, and signals to us that it is still very much stuck in 1983.
Selected References
Delfanti, A. 2013. Biohackers: The Politics of Open Science, Pluto Press.
Haraway, D. 2014. Anthropocene, Capitalocene, Chthulucene: Staying with the Trouble. Anthropocene: Arts of Living on a Damaged Planet (Presentation)
Turner, F. 2008. From Counterculture to Cyberculture: Stewart Brand, the Whole Earth Network, and the rise of digital utopianism. University of Chicago Press.
Part 1 - Homo Liberaretur
This is the first complete post in my series on human augmentation from the perspective of microchips. You can read the introductory post here, and the follow-up post will (eventually) be linked here as well. This particular post deals with discussions around the human body, what it is, and how this particular view can be problematised. Enjoy!
Note: All names are pseudonyms.
I was in Lund, a city revolving around its university. It is very near to where I grew up, so it seemed an appropriate place to find myself early on in my research. I was here to meet Harrison, grabbing a fika - meeting socially over a coffee, something unmistakably Swedish and a good way to get to know someone new. Harrison was a self-avowed transhumanist: openly, vocally, and proudly. He held a deeply rooted moral conviction that not only does technology inherently lead to progress, but that if there emerged any technology capable of improving the human body or experience - the human condition, in short - there exists a moral obligation to use it. It later became clear that Harrison's idea of 'improvement' was mainly rooted in the concept of extending the capabilities of the human body: seeing more wavelengths, hearing higher pitches, lifting heavier boxes, living longer. Adding technology, even replacing otherwise functional parts of our bodies, was, according to my interlocutor, not only the next (and seemingly obvious) step in human evolution, but our fundamental raison d'être. After all, he told me, putting down a now-empty cup, "we have always improved ourselves. Clothes, therapeutic tattoos, eye-glasses; and now wearables. Why stop outside the body? The human body is weak, fallible, and even though nature did an amazing job getting us this far, this obviously can't be it. This can't be the endpoint. Why should we be confined to this meat bag [sv: köttpåse] forever?"
'Meat bag' is a particularly jarring phrase. It certainly makes me wince every time I read it, if only internally. There is something deeply undignified, rudimentary, and substandard about the phrase, made doubly uncomfortable by its describing the human body - my body. My knee-jerk reaction was to reject the idea outright. The human body isn't so thoughtlessly thrown together, so messy and disorganised, decaying and dysfunctional; it most certainly isn't something as disgusting and tragic as a sack of meat. It later became clear that Harrison wasn't the only one to describe the body in these terms - these exact terms, in some cases. The whole sentiment, it seemed, was that the body we have today isn't the body we ought to have; it's not 'even our final form'. During the same meeting, Harrison flatly told me that we are not what we have always been; evolution is a fact of life, of existence even, and we therefore shouldn't be so attached to our current bodies. "Just as something has come before us," Harrison said, sipping a second cappuccino, "monkeys and all that, something must come after us. There must necessarily be something post-human." Though the statement, such as it is, holds up, there was something rehearsed about it, something overly thought-through, especially in the way he mixed English and Swedish as if to make the statement quotable. "Why not let evolution do its job?" I asked, naively. Based on what I had read previously, and what Harrison had told me, this was nothing short of a ridiculous suggestion. The answer was what I had, in many ways, expected: "Because evolution doesn't know what it's doing. It happens, random mutations, and it happens slowly. We have no control over it, we don't know what's going to happen, where things will go. It's just not worth that risk."
He seemed excited that I had brought up this particular question, and quickly produced his phone. "Have you read or heard of Max More?" he asked while frantically poking around on the screen. Finding what he wanted, he handed it to me: "read this". I must admit that I only skimmed it at the time, and only read it in detail a few days later. The piece, A Letter to Mother Nature, is one of the most influential texts in the transhumanist movement. It is clear why. It reads like a declaration of independence from biology itself. "We will no longer tolerate the tyranny of ageing", it pontificates, further demanding that "we will no longer be a slave to our genes". Though More does somewhat thank this anthropomorphic Mother Nature for getting us this far, he makes it clear it is better if she retires (or else).
Max More makes it extremely clear in his piece that this sense of human empowerment - emancipation, even - will stem from science and technology. This is how we will take control of our futures, of our collective destiny. When I found myself up in Stockholm a short time later, I had the opportunity to meet Peter, a friend and colleague of Harrison. He had, without a doubt, bought into More's argument. "You drank the Kool-Aid," I said, jokingly. "Yes," Peter replied, "and this time the only danger is that not enough people will!" We sat in a shared working space in the centre of Stockholm. It was a high-tech building, organised as a top-of-the-line office for technology and innovation start-ups; the hope was to bring together entrepreneurs of all stripes to create a community out of which technological innovation could emerge. He showed me around while we looked for a free meeting room. The vast open space, interspersed with fish-tank meeting rooms, made the place feel like a slice of Silicon Valley in the heart of Stockholm. Peter was a marketing man, and he had only really bought into transhumanism once he met Harrison, but he was by now a convert in everything but name: "I wouldn't go as far as to call myself a transhumanist. Why? I mean, I'm a bit older, I'm out of the loop in many ways, but I try to keep up. The point is, I don't dare to call myself a transhumanist, because I wouldn't have the balls to lead the charge into these new things. I'm a follower, I'm convinced without a doubt, but I won't take that final plunge." The proverbial plunge being human augmentation. In the most quotable way, Peter told me that technology is the future simply because "technology is clean, and biology is messy". Besides, I was assured, we are already engaging in these forms of augmentation, specifically within the medical sphere: pacemakers, hip replacements, cochlear implants, to name just a few. These are all already being done, are major surgeries, and all perform a kind of "magic, but through science! Now all we need to figure out is how to take it to the next step."
When questioned about taking things to the "next step" - especially when I brought up the discomfort around discussions of 'human improvement', given historical implications ranging from eugenics to concentration camps - the answer more often than not took a stance at the perfectly opposite end of that spectrum. In other words, not to reconcile, but to outright dismiss. I was back in Lund now, meeting with a student at the university: Ethan. He was heavily involved with DIY science, and had developed a love for human augmentation after first being exposed to it through popular media, "in particular Deus Ex, the video game". We sat at a cafe on a side street on one of the first properly sunny days of the year, and despite the wind, the narrow medieval streets of Lund made it feel like high summer. Over a beer, our conversation went on. "There is this thing in the game where the characters communicate telepathically, and I always thought that was so cool," Ethan said, with a glimmer in his eye. "I always wanted to replicate that" - which, it turns out, he is attempting. This seemed outrageous to me; after all, he was 'just' a physics undergraduate. He agreed, but continued: "it's not really about completely doing it now, but someone's got to start. I read a paper that showed that when you're reading, your throat muscles produce micro-twitches as if you were speaking. I'm trying to build electrodes that are sensitive enough to pick up the twitches, and then software to translate that to, like, a Word document or something". Simply starting the process, making the first breakthrough, was good enough.
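To make Ethan's idea a little more concrete: the software half of such a project boils down to fairly classic signal processing. Below is a minimal sketch of my own - an illustration, not Ethan's actual code - of how one might flag candidate 'twitches' in a surface-EMG recording. The sampling rate, band edges, and threshold are all assumptions chosen for illustration.

```python
# A minimal sketch of twitch detection in a throat-EMG recording.
# All parameters (sampling rate, band edges, threshold) are illustrative
# assumptions, not values from Ethan's project.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1000  # assumed sampling rate, Hz

def detect_twitches(emg, fs=FS, threshold=3.0):
    # Band-pass to the range where surface-EMG energy typically sits.
    b, a = butter(4, [20, 450], btype="bandpass", fs=fs)
    filtered = filtfilt(b, a, emg)
    # Rectify and smooth into an amplitude envelope (50 ms moving average).
    window = int(0.05 * fs)
    envelope = np.convolve(np.abs(filtered), np.ones(window) / window, mode="same")
    # Flag samples where the envelope rises well above the recording's baseline.
    baseline = np.median(envelope)
    return envelope > threshold * baseline

# emg = np.load("throat_recording.npy")  # hypothetical recording
# active_samples = detect_twitches(emg)
```

Mapping those flagged windows onto letters or words - the part that would make it 'telepathy' - is of course the genuinely hard problem, and well beyond a few lines of Python.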
"But aren't you concerned when it comes to even adding or taking things away from people, from humans?" I asked him. It was clearly a question he had been asked a hundred times before: "I don't think it's reasonable to throw out an option completely just because it can be used for bad ends. I don't think anyone reasonable wants to force people to get these augmentations; it's more about giving people options. To expand what you can and cannot do, and to allow people to really be what they want to be". The ideal sought, at the end of the day, is what many within the movement call morphological freedom: the freedom to take on any form of existence you would like. On this note, a few months after my meeting with Ethan, I had the opportunity to see Max More speak at a conference in London. In his talk, he outlined morphological freedom as a fundamental human right that had been ignored for too long. If you exist, you have an inalienable right to take on any form you wish, he insisted. Simply abiding by the limitations - completely arbitrary ones, he added - that nature has set for us has no innate value or reason. It is, therefore, our responsibility to find a way to overcome them: the next step in our evolution. Evolution has brought us to the brink of being able to take control, so taking this control must now be humanity's collective raison d'être.
At this stage, it might seem a bit confusing how any of this has anything to do with microchipping - these high-minded ideals, this talk of techno-utopianism, liberation from the 'shackles of nature', or morphological freedom. Going back to my fika with Harrison, I asked him just this question. After all, his involvement with popularising micro-implants was why we had met to begin with, and the meeting had immediately taken a turn for the utopian. "These?" he said, pointing at his left hand as if there were something to see. "Oh, they're just toys, neither here nor there. They're just the beginning of something, the first steps, nothing to flaunt [sv: ingenting att hänga i julgranen]". Harrison, however, was very careful to point out that it wasn't very long ago that wearable technologies were just as rudimentary, and though we now have smartwatches and fitness trackers, they are still in their infancy - "not even these would have gotten very far if people hadn't used them". It appears, then, that much of the high-minded idealism aims to convince people of the same future my interlocutors like to imagine.
The recurring image is of the techno-utopian future as inevitable, but also as being held back by people either not believing in, or not realising, its inevitability. When speaking to people outside of the movement, the reactions to something as minimally intrusive as a microchip in the hand (bearing in mind that it is injected through a syringe, in a procedure that doesn't take much longer than 15 seconds) are either neutral or steeped in horror. During a conversation with my sister and brother-in-law about my implant, my sister was made so uncomfortable by the very idea of it all that she refused to touch the part of my hand where the chip was implanted. A bit melodramatic, perhaps, but by no means unique. If not met with acute visceral discomfort, it is often understood to be unnecessary. This highlights the main tension that the movement is up against: that between therapy and enhancement. While the medical field is often held up as evidence that we are 'already doing this' when it comes to implants, as outlined above, the difference lies not only in context but also in intent. A pacemaker, for example, might equate to an artificial heart, but it is nonetheless only administered when a 'regular' heart isn't working. In other words, it is therapeutic, and therapy inherently means bringing bodily functions in line with a normative socio-cultural view of what the human body is, and thereby what it ought to be able to do (have a beating heart, walk, breathe, see, hear, and so on). Enhancement exists to bring the capabilities of the human body above this normative view, thereby causing a 'break' between how we view medical intervention and how we view 'frivolous' improvement.
Harrison, Peter, and Ethan - and many more - are up against more than just technological limitations; they're up against wider social and cultural forces. As they see it, they are working towards a solution to a problem that society at large doesn't recognise. STS professor Stephen Hilgartner has called these types of movements 'socio-technical vanguards': people who formulate, and act to realise, particular visions of the future - specifically visions that have not yet been accepted by wider society. However, the ideas being floated within these communities aren't as radical as you might first believe. Though it is traditionally held that the Western view of the body and self is squarely rooted in the so-called Cartesian split (i.e. that body and mind are two different 'entities', and the mind controls the body), researchers have not only begun questioning this but have even started noticing a very clear shift towards a 'datafied' understanding of the body. The theory goes that everything about the body can be quantified, down to cell structures and DNA; even your mind is no different. In other words, you're simply made up of code.
This shift has some clear implications for the typical transhumanist view of the body. It creates a more 'level' playing field for communicating this particular version of the body. Arguing that we're not much different from computers - our bodies hardware running particular software - becomes more grounded in the pop-sci understanding of the latest research. What follows is an understanding of the body as exchangeable as a matter of course. It is, after all, not much more than a (granted, rather long) extension of the currently dominant discourse. But is it that simple?
Scholar and thinker Donna Haraway reminds us that it's not only our current worldviews that matter, but also which worldviews led to their development. As she puts it, it matters what "worlds world worlds". With the growth of wearables, social media, and other algorithmic systems, there has been an inevitable shift in how engineers and developers understand the body, one that necessitated quantification. As this spread to other communities (Quantified Self springs to mind as a very clear example), it became more and more ingrained. However, there are some problems with this, given that this understanding is grounded in particular views from Enlightenment philosophy and the scientific revolution - meaning, in short, that it is far from as neutral as it might appear. As yet others have argued, this limiting scope of the body also helps to reproduce its own views. To take a specific example: the choices made as to which metrics a fitness tracker tracks invariably come with the assumption that those aspects need improving - steps taken, heart rate, sleeping patterns, and so on. Simultaneously, any such decision will discount other 'metrics', and by extension make (perhaps implicit) decisions as to what it means to be human.
It is through this process that speaking of a 'technologically improved human' becomes problematic, as we are never asked to ponder the questions: Whose human? Whose improvement? Whose technology? It does indeed matter what "worlds world worlds", and what this shows more than anything is that these movements, organisations, and people do not operate according to an objective understanding of anything, but rather under a very specific ideology.
An ideology of techno-utopianism.
Selected References
Cerqui, D. 2002. The future of humankind in the era of human and computer hybridization: An anthropological analysis. Ethics and Information Technology.
Delfanti, A. 2013. Biohackers: The Politics of Open Science, 130–140. Pluto Press.
Haraway, D. 2014. Anthropocene, Capitalocene, Chthulucene: Staying with the Trouble. Anthropocene: Arts of Living on a Damaged Planet (Presentation). (http://opentranscripts.org/transcript/anthropocene-capitalocene-chthulucene/, accessed 05/09/19).
Hilgartner, S. 2015. Capturing the imaginary: Vanguards, visions and the synthetic biology revolution. In Science and Democracy: Making Knowledge and Making Power in the Biosciences and Beyond (eds) S. Hilgartner, C. Miller & R. Hagendijk, 33–55. New York: Routledge.
More, M. 2013. A Letter to Mother Nature. In The Transhumanist Reader: Classical and Contemporary Essays on the Science, Technology, and Philosophy of the Human Future, 449–450.
Tamminen, S. & E. Holmgren 2016. The Anthropology of Wearables: The Self, The Social, and the Autobiographical. Ethnographic Praxis in Industry Conference Proceedings 2016, 154–174.
Part 0.5 - Initialising, please wait...
This is a re-write of my master's thesis, better suited to a blog. As such, you can expect some changes and, more importantly, expect it to be far more approachable than the otherwise heavy and academic work it is derived from. If you want to read the original work, you can find it here; otherwise, sit back and enjoy!
I was sitting in a basement lecture hall at Birkbeck College in Bloomsbury, London. Just a few steps from the anthropology department at UCL, it felt appropriate that my last days 'in the field' were so close to 'home'. I was attending a transhumanist conference organised by the London Futurists, themselves a part of the global Humanity+ network. It was a weekend of techno-utopianism, futurism, AI, human augmentation, quantum computing - everything that might, in a broad sense, fit under the topic of "future technology". During a break, I found myself in conversation with a self-avowed transhumanist and entrepreneur from the U.S. After he heard I was researching practices of human augmentation, he asked me: "Do you think all transhumanists are optimists?" Sure, I figured. Over the past few months I had been exposed to so many different strands of techno-optimism that I had started taking it for granted. Part of me even figured that you had to be an optimist. "I don't think they are," my American interlocutor told me. "I mean, sure, ideologically they are, optimists in technology. But I don't think they are optimists. All they do is complain, and think and overthink, and talk, and over-talk. No-one has dared to do anything in years. They're not optimists, they're just optimistic about technology."
I have often found myself thinking back to this particular conclusion, and the more I have thought about it, the more I believe this American transhumanist to be correct. In many ways, this isn't a story solely about optimists, though some of the people and characters I came across are undoubtedly optimists. Rather, it is a story about optimism in technology. This techno-optimism will hopefully shine through in the text more clearly than the number of times I refer to it directly would suggest. This view of technology, in turn, has great implications for how these individuals perceive the future: of society, and of humanity as a whole. It is also a story of the body, and how it is - put bluntly - lacklustre. Before we get into any of this, though, it is important to fully understand what "this" is. Fleshing out transhumanism, biohacking, human augmentation, and implants - to name a few - will be the focus of this post. It will lack some of the narrative elements of the other posts in this series, but a solid understanding of the background is fundamental to everything that follows. Always bear in mind that things go much deeper than descriptions and explanations; there is only ever so much that can be translated to paper, and what is transferred is invariably affected by what the author wishes to say and, perhaps more importantly, how the author wishes to say it.
I had spent four months in Sweden, originally to study microchipping practices. A media buzz had been created around this 'sub-culture' (of sorts; more on that later) in late 2018, and much of it was put down to a type of early-adopter mentality - something I will, in hindsight, proclaim to be the lazy answer. This became the starting point for my adventure. I worked with and among transhumanists, biohackers, and self-proclaimed tech-enthusiasts, only to be surprised early on that there exists no 'chipping community' as such. I was often told outright that there was no community at all. Given media reports, but also the existence of discussion boards and social media groups, this seemed unlikely. The truth lay somewhere in between. The 'community' as I had imagined it did not exist; these practices of body modification, augmentation, or experimentation existed as part of several groups. Transhumanism, in short, is a philosophy that hopes to improve the human condition through the marriage of humanity and technology. Author Mark O'Connell writes that its end goal is "[the] total emancipation from biology itself". BioHackers, on the other hand, aim to maximise the body's potential, often by 'hacking' biology (hence the name). These 'hacking' practices range from optimising diets, workout routines, or sleep patterns to using nootropics. This often overlaps with life-logging - keeping a log of any number of metrics to better understand yourself. The key difference between these two groups, then, is often put down to the practical: unlike transhumanism, BioHacking can't just be theoretical but requires practical involvement. The final group is general tech-enthusiasts - or tech-nerds, as some put it. While they might share ideas or practices with both transhumanists and BioHackers, they don't identify as either; they're a more fluid group in this sense, more easily defined by what they are not than by what they are.
It quickly became clear during my time with these different people that they simultaneously were and weren't a community. They worked in a sphere where they would exchange ideas, thoughts, and developments, while also granting each other legitimacy as, to some degree, mutually supportive communities. This is nothing new, especially not within the world of technology and innovation (see Fred Turner's book From Counterculture to Cyberculture for more details, where he calls these types of groups "network forums"). Though they aren't one community, for the sake of brevity I will continue to refer to them as one.
The glue that keeps these people together in Sweden is the relatively new practice of microchipping. Though chips have been used in animals for around 15 years, it wasn't until 2007 that a club in Barcelona started offering RFID implants to its VIP customers. Through various twists and turns the practice arrived in Sweden in 2014 - though some of the people I worked with recall implanting themselves and their friends as far back as 2011. In 2014, in a basement in Stockholm, I was told, six friends met up after hearing about these implants through internet message boards. They had ordered the first chips from the U.S. and were ready to begin: "It was me and Samuel, you know Samuel, and four or five others that met that night. It was our first chipping party. I think we might have been the first to get our implants in Sweden then, but I'm not sure," Harrison told me when we met for a coffee. These individuals, though tied together by their shared interest in this new technology, would later part ways; already in 2014, they represented the various branches of 'chippers' that I would engage with in 2019.
The implants, though sometimes spoken about in terms of human augmentation, are more mundane than that might imply - compare a biplane with the SR-71 spy plane; in this analogy, the implants are the biplane. Though there are several different versions of the implants - Samuel ironically noted that "there are about eight different standards" - there are (at this time, at least) two main models: the MIFARE Classic and the NTAG216, at around 13mm x 3mm and 12mm x 2mm respectively. In theory, the implants can be used for anything that uses contactless technology: travel cards, bank cards, access control (keys, locks, doors, passwords, etc.), and general information storage. However, the chips remain technologically primitive, meaning much of the functionality depends on the surrounding infrastructure. In practice, what can be done is entirely dependent on what companies and organisations allow the chips to do. Though the question I get most is whether you can use the chips to pay, this is still impossible, as banks don't allow it. The same goes for most travel cards, though there are some exceptions, like Statens Järnvägar (SJ) in Sweden. And on top of all of this you have the different standards, further inhibiting use.
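To give a sense of just how mundane the technology is, here is roughly what 'talking' to such a tag looks like in software - a minimal sketch of my own, assuming a USB NFC reader supported by the Python nfcpy library, not anything my interlocutors necessarily used. It simply reports what kind of tag it finds, its UID, and any NDEF records (such as a stored contact card).

```python
# A minimal sketch: detect a tag and dump its stored NDEF records.
# Assumes a USB NFC reader supported by nfcpy (pip install nfcpy).
import nfc

def on_connect(tag):
    print("Found tag:", tag)             # str(tag) includes the product type
    print("UID:", tag.identifier.hex())  # the tag's unique identifier
    if tag.ndef:                         # only NDEF-formatted tags expose records
        for record in tag.ndef.records:
            print("Record:", record)
    return True                          # hold the connection until the tag is removed

with nfc.ContactlessFrontend("usb") as clf:
    clf.connect(rdwr={"on-connect": on_connect})
```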
What do you use it for, then? Most use it for access control, computer passwords, and information storage (often business cards and the like). Part of the reason these uses are especially popular is their ease of implementation: most phones can transfer the relevant information. There are more complicated applications - integrating the chip with your smart home appliances, starting cars, or even opening home-built lockboxes - but these often require much more involvement and know-how. I spent some time playing around with using the chip to remotely access my personal server, with some success. However, in the words of one of the people I worked with, it would "often be simpler with a normal button".
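The access-control case is the simplest to illustrate. A hedged sketch, again assuming nfcpy: the reader checks the presented tag's UID against a whitelist and would, in a real set-up, drive a relay on the lock. The UID below is made up for illustration.

```python
# A minimal access-control sketch: act only for known implant UIDs.
# The UID below is hypothetical; in practice you would enrol real tag UIDs.
import nfc

AUTHORISED_UIDS = {"04a13c62d55080"}  # hypothetical enrolled implant

def check_tag(tag):
    uid = tag.identifier.hex()
    if uid in AUTHORISED_UIDS:
        print("access granted")  # in a real set-up: drive the relay/GPIO on the lock
    else:
        print("access denied")
    return True

with nfc.ContactlessFrontend("usb") as clf:
    while True:  # wait for the next presented tag, indefinitely
        clf.connect(rdwr={"on-connect": check_tag})
```

It is worth noting that UID checks like this are convenience rather than security - UIDs can be cloned - which is part of why banks and transit operators are reluctant to let third-party chips into their infrastructure.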
To end this section, I want to point out a couple of things. It is worth knowing, as you read this, that I don't identify as a transhumanist, a BioHacker, or as otherwise 'involved' in the chipping community. Rather, I see myself as a sympathetic observer: sympathetic to what they want to achieve, but also critical of some of their methods. This will invariably colour my account. In addition, I will focus on some specific themes: the body, the future, and society. This is not to say that there aren't other themes that would be equally, if not more, fascinating to look at. The ones I have come across are:
Gender: especially noteworthy as, in my months in the field, I only spoke to two women. I think why this would be fascinating speaks for itself.
Risk: this is a topic that comes up a lot. However, there seems to be a pattern of either dismissing risks as minor or elevating them to existential ones. The lack of a middle ground implies that there is work to be done there.
Mundanity: some research has already been done on the mundane uses of things like fitness trackers and other wearable technologies. Looking at how body implants, modifications, and ideas of human augmentation are used among groups that lack any strong ideological underpinnings (i.e. 'mundane' uses, for lack of a better word) calls for more insight.
This about wraps up this segment, and I hope you will stay with me for the ride ahead. I think you might be in for something very different from what you first envisioned.
Suggested introductory readings:
O’Connell, M. 2017. To Be a Machine: Adventures among cyborgs, utopians, hackers, and the futurists solving the modest problem of death. Granta.
Petersén, M. 2019. The Swedish human microchipping phenomenon.
Tegmark, M. 2017. Life 3.0: Being human in the age of Artificial Intelligence. Penguin books.
Turner, F. 2008. From Counterculture to Cyberculture: Stewart Brand, the Whole Earth Network, and the rise of digital utopianism. University of Chicago Press.