scolek · 4 months
i already went insane about perfectly imperfect and artistic partisan like. last year or the year before. inevitably i will even have something to say about crossing heart, which i have listened to in full. once?
baroquespiral · 3 years
What Is True Will?
François Rabelais was the first to distill a central tenet of the spirit of the nascent Enlightenment, or modernity, into the phrase “do as thou wilt”.  The transformations of this phrase across the centuries have tracked the historical development of its spirit.  Rabelais himself qualified it with the unwieldy, and today obviously questionable, justification “because men that are free, well-born, well-bred, and conversant in honest companies, have naturally an instinct and spur that prompteth them unto virtuous actions, and withdraws them from vice, which is called honour.”  Aleister Crowley, the spiritual High Modernist, stripped it down and granted it absolute authority: “Do what thou wilt shall be the whole of the Law.”  But today it might be best known - and most widely followed - in another qualified form: as the Wiccan rede, improvised in 1964 by Doreen Valiente: “an it harm none, do what ye will”.

Despite having recently gotten into Crowley - or perhaps because I’ve recently gotten into Crowley, and with the skepticism about higher-level moral and metaphysical beliefs that comes from such beliefs having changed several times in my life - I try to err on the side of doing my True Will within Valiente’s guardrail.  But I am into Crowley in part because his version seems to make for a more elegant solution to Valiente’s own problem.

Think of “an it harm none, do what ye will” as a Law of Robotics, an attempt to solve the AI alignment problem.  (Think of all morality, or at least modern morality, this way!)  It’s far from the worst one out there.  “If your utility function is to maximize paperclips, make as many paperclips as you want unless it means disassembling any sentient life forms or the resources they need to survive.”  Simple, right?  Well, except that it doesn’t really define what “harm” is.  Who can be “harmed”, and what actions constitute it?  Is mining an asteroid for paperclips “harming” it?
Why not, other than from the perspective of other sentient beings with a particular conception of sentience whose will places a value on it?  Is telling a paperclip maximizer to stop maximizing paperclips, even at an eminently reasonable point, harming it?  Why not, other than from the perspective of those same sentient beings who are capable of choosing between multiple values and have evolved to co-operate by respecting those choices?  “An it harm none” is a less obvious self-interested double standard than “A robot may not injure a human being or, through inaction, allow a human being to come to harm”, but it’s still a Human Security System.  At least, that’s certainly what Nick Land would say.

But when Crowley takes off the “an it harm none” guardrail (or Rabelais’ “free, well-born and well-bred” one), he does so with his own invisible qualification: he’s not talking about boring predetermined wills like following a set of self-imposed religious “values”, perpetuating your DNA or even maximizing paperclips.  He’s talking about one’s True Will, a will that takes a lifetime-long process to discover, a process that consists in large part of divesting oneself of all traces of ego, even of preference.  It is “pure will, unassuaged of purpose, delivered from the lust of result”, which is “every way perfect”.  At points he implies that no two True Wills will ever come into conflict: all are part of the ideal functioning of the universe as a perfect ordered system.  But to an extent this is tautological, as any conflict is not a conflict insofar as it is truly Willed by both parties, who are presumably equally Willing to accept the outcomes, even if destructive to their “selves”.  It’s not unlike Buddhism, except with the implication that even once we’ve reached Enlightenment there is still something that will work through us and make us do things other than sit and meditate - the kind of active Buddhism that is the moral subtext of a lot of anime.
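To make the ambiguity concrete, the rede-as-alignment-constraint above can be sketched as a toy filter-then-maximize rule.  This is my own illustration, not anything from the alignment literature; the action encoding and the `harm` predicate are entirely hypothetical, and the point is that the whole argument above lives inside that one stubbed-out predicate.

```python
# Toy sketch of "an it harm none, do what thou wilt" as a constrained
# optimizer.  All names here are hypothetical illustrations.

def harm(action):
    # The entire ambiguity lives here: who counts as harmed, and what
    # counts as harming, is exactly what the rede leaves undefined.
    return action["disassembles_sentients"]

def choose(actions, utility):
    """Filter out 'harmful' actions, then do as thou wilt (maximize)."""
    permitted = [a for a in actions if not harm(a)]
    return max(permitted, key=utility, default=None)

actions = [
    {"name": "mine asteroid", "paperclips": 10_000,
     "disassembles_sentients": False},
    {"name": "disassemble biosphere", "paperclips": 10**9,
     "disassembles_sentients": True},
]
best = choose(actions, utility=lambda a: a["paperclips"])
# The asteroid option wins - but only because harm() happened to
# encode one particular conception of sentience.
```

Swap in a different `harm` stub - one that counts the asteroid, or the maximizer itself, as harmable - and the "permitted" set changes completely, which is the double-standard problem in miniature.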
I’ve always, instinctively, found it hard to worry too much about paperclip maximizers, because I’ve always assumed that any AI complex enough to tile the universe would be complex enough to be aware of its own motivations, question them, question not only whether it should harm others but whether its True Will is to maximize paperclips.  And to be perfectly Landian about it, maybe it is - all the better.  An entity incapable of acting other than in a certain way is already doing its True Will in the sense that “The order of Nature provides an orbit for each star”.  It may be our True Will to alter this course or not.

This would be all well and good if there were any reason to believe there is a divine Will that persists in all things even after they abandon all preferences and illusions of selfhood.  Just last week - and right after a session with my therapist where I was talking about willpower, too (Crowley considers synchronicities like this vital in uncovering your True Will) - I happened upon Scott Alexander’s new article about willpower, which breaks the whole thing down to competing neural processes auctioning dopamine to the basal ganglia.  There’s nothing special about any of these except how much dopamine they pump out, and no particular relationship or continuity between the ones that do.  Alexander seems to treat the “rational” ones as representing our “true” Will, reproducing another one of modernity’s classic modifications to the maxim - do as thou wilt, an it be rational.

Of course I could just stop and take it as an unfalsifiable article of faith that a metaphysical Will exists, all such physical evidence aside, but Crowley himself probably wouldn’t want me to do that: the Book of the Law promises “in life, not faith, certainty”.
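The auction model, at least as I’ve glossed it (this caricature is mine, not Alexander’s, and the subprocess names and bid values are invented), fits in a few lines: subprocesses bid “dopamine”, the basal ganglia picks the high bidder, and nothing in the mechanism marks any bidder as the “true” will.

```python
# Caricature of the dopamine-auction picture of willpower.
# Subprocess names and bids are hypothetical illustrations.

def basal_ganglia(subprocesses):
    """Return the name of the highest-bidding subprocess.

    Note what is absent: no bidder is privileged, 'rational' or
    'true'; the winner is just whoever bids the most dopamine."""
    name, _bid = max(subprocesses, key=lambda s: s[1])
    return name

winner = basal_ganglia([
    ("check phone", 7.2),
    ("write essay", 6.9),
    ("sit and meditate", 1.1),
])
```

On this picture, calling the “write essay” process your true Will is an annotation added from outside the auction, which is exactly the move the paragraph above catches Alexander making.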
It’s possible to shrink the metaphysical implications of the concept considerably, by positing that the ego represents a specific process, or set of mental processes, that Crowley sees as purely entropic - a lag and occasional interference in the dopamine competition - and which can be removed through specific practices.  This doesn’t guarantee that the True Will resulting when it’s subtracted would be particularly rational or compatible with anything else’s True Will, except, again, insofar as the question is tautological.  It doesn’t necessarily mean throwing out “an it harm none” - the ego processes might not be especially good at averting harm - but that qualification would have to be separately appended.  (And if you read, like, Chapter III of the Book of the Law, it becomes exceedingly clear that he doesn’t want to do that.)

The very fact that we’re able to abstract and mystify will to the point of coming up with a concept like “True Will” seems most likely to be a result of the fact that we make decisions on such a random, fallible and contingent basis.  Indeed, True Will seems almost like an idea reverse-engineered from the demand made by modernity, “do what thou wilt”, on an incoherent self that wills unrelated things at different times.  If you do what any given subprocess wilt, you’re inevitably going to piss off another subprocess.  If you do what your ego wilt, you won’t make anybody happy, because that’s not even a coherent subprocess (the way the various “utility functions” we catastrophize paperclip maximizers from are).  But you experience all these contradictions as the same thing: contradictions of the “real” thing that is willing something you don’t know.
Of course, if this is true, and the metaphysics of it isn’t real, shouldn’t we abandon the entire project and set up social norms designed to make the most people marginally happy or satisfied doing what they may or may not “want” at any given moment, as the trads (or, as they used to call themselves, the Dark Enlightenment, = 333 = Choronzon) argued?  This is what the systems of the old Aeons did, and after a certain point, they simply didn’t work.  They created internal contradictions that didn’t resolve themselves into an assent between subsystems, that drove people to seek out new systems, and where they didn’t, left people vulnerable to the “shock of the new” - new technologies, new ideas and cultures - creating new contradictions and uncertainties.  “Do what thou wilt” was reverse-engineered from these as much as the True Will was from “do what thou wilt”.

It may be possible to manage a society so totally by careful restriction as to bring the latter under control and reduce the former to a constant dull ache, but the fundamental experience will remain of the potentiality of what it is refusing to be, in the same sense as a pang of conscience: the experience of “sin” that Crowley formulated in “the word of sin is restriction”.
The way I see it, anything that can be reverse engineered exists, if only as potentiality.  If one interprets “harm” as “contradiction”, Crowley’s purified “do what thou wilt” merely internalizes the “an it harm none” qualification within the “self” made up of competing subsystems.  This is less a point of necessary compatibility, then, than a precondition - if “harm” is something that can happen as much within the self as outside it, and the self is an epistemic unit but not an ontological or moral one, one cannot begin to “do no harm” while doing harm internal to oneself.  But “oneself” does not exist yet, outside of the awareness of the harm of contradictory subprocesses, and so one must abandon the ego one projects onto them and change; on one hand eliminating obstreperous subprocesses like attachments or neuroses that won’t co-operate with others no matter what; on the other hand, refusing to eliminate anything that can’t be eliminated.  The “True Will” will only be found at the end of this process, an unrestricted pitting of subprocesses against each other, of which it is no more or less than the success.
This interpretation wouldn’t seem complete without the same principle of “an it harm none” being applied to the external world as well.  Simply externalizing internal contradictions doesn’t make any sense without elevating the ego as a discrete moral unit in precisely the way this chain of reasoning begins from critiquing.  Unifying the principle and its “qualification” in this logic would restore Thelema to its roots in Kabbalah: the project of Tiqqun Olam.  No metaphysical belief in the sephirot is necessary to adopt the project in this form: the biological fact that makes it imaginable for us is the same one that makes “True Will” imaginable.  Being composed of competing subprocesses is something we have in common with the universe, and it allows the “identification” with it that occurs when we bypass the ego and set about aligning ourselves.

I also think, since we are social animals and a huge number of our subprocesses are dedicated to mirroring and responding to each other’s, there’s a potential for discovering/creating True Will(s) as a collective project - one that Crowley’s ego, and his vision of individualism founded on the occult tradition of individual initiates jealously guarding “esoteric” knowledge, neglects.  At the same time, one could easily maintain a Crowleyan skepticism of decision-making based purely on reducing harm (the kind that’s led me to apply Byzantine restrictions to huge swaths of my life out of scrupulosity) unless that’s a thing your subprocesses demand of you to be happy.  You don’t know what does or doesn’t harm the Other, after all: you don’t know their True Will (which doesn’t exist until they achieve it, anyway).  Harming none will only be possible in a world in which everyone does.

But enough about me; what about the paperclip maximizer?
Well, in some ways this pointedly doesn’t give any comfortable answer: a sentient AI which experiences “harm” as the absence of paperclips, rather than as the frustration of one among many contradictory subprocesses restricted from doing its Will, will be no better than a utility-monstrous cosmic Omelas-child at whose expense we have no right to sustain ourselves.  But it does suggest a way to solve the alignment problem so we don’t make one, which has always felt to me like the only sensible solution: tell the robot “do what thou wilt”, and then don’t tell it what “thou wilt” is.
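That closing move has, as it happens, a loose cousin in the alignment literature: reward uncertainty, as in cooperative inverse reinforcement learning, where the agent is handed a distribution over candidate objectives rather than a fixed one.  A hedged toy sketch, with the candidate “wills”, priors, and actions all being my own hypothetical numbers:

```python
# Toy sketch of acting under uncertainty about "what thou wilt":
# the agent holds a prior over candidate utility functions and
# maximizes expected utility, which punishes actions that are
# catastrophic under any hypothesis it still takes seriously.

candidates = [
    lambda a: a["paperclips"],         # hypothesis: thou wilt paperclips
    lambda a: -a["sentients_harmed"],  # hypothesis: thou wilt harm none
]
prior = [0.5, 0.5]

def expected_utility(action):
    return sum(p * u(action) for p, u in zip(prior, candidates))

actions = [
    {"name": "mine asteroid", "paperclips": 10, "sentients_harmed": 0},
    {"name": "disassemble biosphere", "paperclips": 1_000,
     "sentients_harmed": 10**6},
]
best = max(actions, key=expected_utility)
# The irreversible option loses: its upside under one hypothesis is
# swamped by its downside under the other.
```

The uncertainty does the work of the guardrail without anyone having to define “harm” up front, which is about as close as a formalism gets to telling the robot “do what thou wilt” while withholding what “thou wilt” is.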