#longtermism
There is no obvious path between today’s machine learning models — which mimic human creativity by predicting the next word, sound, or pixel — and an AI that can form hostile intent or circumvent our every effort to contain it. Regardless, it is fair to ask why Dr. Frankenstein is holding the pitchfork. Why is it that the people building, deploying, and profiting from AI are the ones leading the call to focus public attention on its existential risk? Well, I can see at least two possible reasons.

The first is that it requires far less sacrifice on their part to call attention to a hypothetical threat than to address the more immediate harms and costs that AI is already imposing on society. Today’s AI is plagued by error and replete with bias. It makes up facts and reproduces discriminatory heuristics. It empowers both government and consumer surveillance. It is displacing labor and exacerbating income and wealth inequality. It poses an enormous and escalating threat to the environment, consuming a vast and growing amount of energy and fueling a race to extract materials from a beleaguered Earth.

These societal costs aren’t easily absorbed. Mitigating them requires a significant commitment of personnel and other resources, which doesn’t make shareholders happy — and which is why the market recently rewarded tech companies for laying off many members of their privacy, security, and ethics teams.

How much easier would life be for AI companies if the public instead fixated on speculative theories about far-off threats that may or may not actually bear out? What would action to “mitigate the risk of extinction” even look like? I submit that it would consist of vague whitepapers, a series of workshops led by speculative philosophers, and donations to computer science labs willing to speak the language of longtermism. This would be a pittance compared with the effort required to reverse what AI is already doing to displace labor, exacerbate inequality, and accelerate environmental degradation.

A second reason the AI community might be motivated to cast the technology as posing an existential risk could be, ironically, to reinforce the idea that AI has enormous potential. Convincing the public that AI is so powerful it could end human existence would be a pretty effective way for AI scientists to make the case that what they are working on is important. Doomsaying is great marketing.

The long-term fear may be that AI will threaten humanity, but the near-term fear, for anyone who doesn’t incorporate AI into their business, agency, or classroom, is that they will be left behind. The same goes for national policy: if AI poses existential risks, U.S. policymakers might say, we had better not let China beat us to it through underinvestment or overregulation. (It is telling that Sam Altman — the CEO of OpenAI and a signatory of the Center for AI Safety statement — warned the E.U. that his company would pull out of Europe if regulations became too burdensome.)
Silicon Valley ideology says safeguarding intelligence in the future is more important than its systems systematically crushing and killing black and brown people right now. Long-termism grabs attention back from people being harmed, who were beginning to make too much noise.
- Maria Farrell
mentalisttraceur · 9 months
There's something ethically messed up and logically incomplete about treating every potential creation of an experiencing mind (even if you guarantee that it will have an ideally positive life experience) as a "we ought to pursue this" increase in good/value/utility.
The extreme of what I'm criticizing here is the longtermist view that we must pursue a future where we fill as much of the universe as we can with human minds experiencing the best possible experiences. (In simulations! because that way we can have more of them, with reliably positive experiences.)
As if hypothetical minds that don't exist, and would only exist if we actively caused them to exist, have ethically-obligating worth now.
As if it is ethically better to pursue a future in which we create an extra richly experiencing mind having the best possible experiences (even if that mind is in a simulation and doesn't interact with or in any way affect any other minds) than a future where we just don't create that extra mind.
There's something off about this view... a subtle slip/skip (misstep, assumption, error, etc) which feels paperclip-maximizer-y in nature. It's just that the paperclips are very convincingly shaped like actually good values. Like copy-pasting a snapshot of a good-to-achieve result, while missing at least one reason why it was good to achieve (a reason which is contingent on a mind already existing, or contingent on good odds of a new mind having certain effects beyond itself).
I don't have the time to properly drag this out into words, and it's not a priority for me to make it rigorous or convincing. But there's something more-than-one-dimensional about good/value/utility revealed by this. I would assert that even though
- the transition from a mind experiencing a worse existence to a better existence is ethically positive, and
- the transition from a mind experiencing a good existence to that mind not existing is ethically negative,
- the transition from no mind to a mind experiencing its best possible existence is ethically neutral.
To me it's obvious that there's no contradiction between those three claims. I think the only way they would seem contradictory is if you implicitly assume that ethical value is a simple integral of qualia over time. There's something at play that is best described by one or more of the following:
- assigning ethical values to the derivatives, to some or all of the transitions between states rather than to just snapshots of states, or
- recognizing a different kind of ethical value, where it's only worth maximizing within the "domain" for it which already exists, but not worth creating more "domain" within which it can be maximized (and perhaps this is the only kind of value);
and so minds which don't exist and have no external reason to exist are in some crucial sense ethically worthless until they start to.
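If I had to gesture at this in symbols (a sketch of my own, nothing rigorous), the disagreement is about which of these two value functions you take ethics to be maximizing:

```latex
% Sketch only, my notation: M is the set of minds that ever exist,
% q_m(t) the quality of mind m's experience at time t.
% The "simple integral of qualia" view scores a world as
\[
  V_{\mathrm{integral}} \;=\; \sum_{m \in M} \int q_m(t) \,\mathrm{d}t ,
\]
% under which creating any extra mind with positive q_m is always a gain.
% The transition view instead assigns weight w to changes between
% world-states s_i:
\[
  V_{\mathrm{transition}} \;=\; \sum_i w(s_i \to s_{i+1}), \qquad
  \begin{aligned}
    w(\text{worse} \to \text{better}) &> 0, \\
    w(\text{good} \to \text{nonexistent}) &< 0, \\
    w(\text{nonexistent} \to \text{best possible}) &= 0 .
  \end{aligned}
\]
% The three claims above are mutually consistent because w is not
% required to be the difference of any state-by-state integral.
```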
This is of course without even touching the point (which I previously wrote down in my reply to torture vs. specks) that when integrating over the future, possible states further out are increasingly less certain. Arbitrarily far in the future, there are too many possibilities still impossible to rule out: there is enough possible good and possible bad from any act to cancel each other out, the probabilities are so small that the ethical weight of each possibility is tiny anyway, and the uncertainties have multiplied to the point that all the possible outcomes have vast error bars on tiny likelihoods. This alone is enough to kill the idea that we today can assign significant probability or ethical value to outcomes like "we eventually populate the Virgo Supercluster and the majority of our descendants have good lives".
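And a toy version of that error-bars point, with made-up numbers (the per-step confidence, step count, and payoff below are illustrative assumptions, not estimates):

```python
# Toy model of compounding uncertainty: if a far-future outcome depends
# on a long chain of causal steps, and we can be at most p_step
# confident in each step, our confidence in the outcome decays
# exponentially with the length of the chain.

def far_future_ev(payoff, p_step, steps):
    """Expected value of an outcome `steps` causal links away."""
    return payoff * (p_step ** steps)

# A 10^15-utility outcome ("populate the Virgo Supercluster"), with 99%
# confidence per step and one step per decade for 20,000 years:
good = far_future_ev(1e15, 0.99, 2000)
print(good)  # ~1.9e6 -- nine orders of magnitude below the raw payoff

# A symmetric catastrophe we also can't rule out cancels it exactly:
bad = far_future_ev(-1e15, 0.99, 2000)
print(good + bad)  # 0.0 -- the possible good and bad wash out
```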
garlend · 4 months
I really have to roll my eyes at things like the atomic priesthood, which takes the long lifetime of radioactivity seriously, because it literally doesn't matter.
Radioactive stuff is heavy metal, which never gets out of the ecosystem. For some reason, people take low-grade radioactive stuff way more seriously than, for example, coal fly ash ponds, which can fuck up groundwater far more thoroughly and long-term than a bit of radiation.
Oh no, the danger of radioactivity goes away over time? What's the half-life of lead, chrome, nickel, cobalt, arsenic, mercury, and the other heavy metals? When is a coal fly ash pond ever safe to drink from?
revoevokukil · 1 year
It is actually shocking how well the worldbuilding I do for expanding the lore on the Knowing Ones (Aen Saevherne) dovetails with longtermist effective altruism, right down to all its problematic epistemic assumptions.
stackslip · 1 year
i know i have mutuals and followers who are/were close to rationalist circles, or rationalists themselves, or who are simply anarchist trans girl hackers who seem to be broadly familiar with the subject. and tbc it really REALLY isnt for me (im vaguely familiar with the arguments and have read about roko's basilisk and lesswrong etc and it's befuddling to me). but do you guys have any good writings or posts on how much it's influencing tech giants and in which ways? there's been a lot of sensational writing on the ftx polycule etc and how elon and co are claiming to embrace longtermism, but i'd love to see stuff on what they specifically get out of it, how close their interpretation is to the og ideas and circles, what movements and groups DO exist today, and how much they oppose or embrace recent tech industry developments (from ai chat and art to crypto crashing to elon and others indicating they're fans of the whole thing)
tbc this isn't about drama or sensationalist stuff, and im not seeking intellectual arguments on why it's the right way or utter bullshit. rather im trying to understand how these ideas developed, where the movement is today, and what kind of cultural and intellectual influence they actually DO have on tech giants and tech billionaires as a whole
erikahammerschmidt · 3 months
so here are some of my thoughts on the idea of "longtermism."
i.e. the idea that we should focus on human well-being from a big-picture perspective, including all the possible future generations of humans, perhaps even before considering the well-being of people currently alive.
which to me sounds like mostly absolute nonsense.
Not because I don't care about the thought of distant future humans still thriving and happy thousands of years from now.
I do! I think that would be awesome, and I hope it happens.
BUT
for the most part, we CANNOT predict anything far enough in advance, in this chaotic world, to have ANY IDEA what actions today will even BE good for humans thousands of years from now.
And, the few things where it does seem kinda brainlessly obvious what would be good? like, "don't blow up the world today"?
ARE ALSO THE THINGS THAT ARE GOOD FOR PEOPLE RIGHT NOW.
so, why would it be a question? why would you even have to choose?
example:
for some of these longtermist thinkers, the main goal is space travel.
from this viewpoint, the big fear is that the planet Earth and the Sol system won't last forever, no matter what we do… and if humans don't travel to other planets, other star systems, by that time, humanity will be wiped out and nothing we ever did will matter.
now. setting aside for a moment this dismal view of what it means for one's life to "matter"…
HOW exactly can humanity get to space? well. we would have to figure out a whole shitton of things that we are not even close to.
how can we get space travel to speeds that would get us anywhere in an even imaginable timeframe?
how can humans survive the radiation in space, for a long enough time to get anyplace even at the maximum conceivable speed?
how would we sustain the basic food and air and water needs of human populations during such travel?
and, to me, that last one seems like the most obvious place to start. because we DO NOT have the technology to keep a self-sustaining, human-sustaining biological ecosystem going inside a space colony! either on the surface of an alien planet, or inside a space station or a generation ship.
we have barely even tried to figure it out! …ok, we tried, once, decades ago, with the Biosphere 2 project, and failed and never really tried much again as far as I know.
we are failing kind of badly at even maintaining the sorta self-sustaining ecosystem that Earth itself gave us! the one that took millions of years to evolve! which ONLY sorta works to sustain us because our species literally evolved to fit into it!
…and the techbros who currently talk about colonizing Mars seem to be talking as if this is all some super easy, solved thing that they'll figure out when they get there!
you know what would help us get to space the most? first priority, before anything else?
figure out how to manage a damn ecosystem.
Not only because it'll be an obvious necessity for the space travel itself.
but also because it is gonna take a DAMN LONG TIME to develop workable long-distance space travel, IF (and this is a big IF) it is even physically possible in any way.
AND, during that damn long time, WE STILL NEED TO BE SURVIVING ON THIS PLANET.
Not to mention that, even if some of our eggs get into other interplanetary baskets someday, Earth is gonna keep being ONE of our baskets for a very long time.
ideally as long as fucking possible! because it's the one that works best, and probably always will be, for as long as it exists. WE EVOLVED HERE.
and, guess what!
the steps we could take toward advancing space travel in that way? the managing-ecosystems steps?
ARE ALSO THINGS THAT WOULD MAKE LIFE BETTER FOR PEOPLE HERE ON EARTH RIGHT NOW.
another thing that comes up in the longtermism discussion is "overpopulation": the idea that distant future humanity will be better off if current humanity does things to reduce our population to save resources.
now. i think it's pretty damn clear that the problem with human population and resources is MOSTLY that the rich and powerful elements of present-day humanity are doing a terrible job of distributing the resources that currently exist to the population that currently exists.
and this is mostly a greed issue.
and regardless of what happens to actual population numbers, the most obvious benefit to future generations of humanity would be figuring that mess out.
starting with the goddamn greed.
...now. there ARE, in some ways, challenges in this endeavor that are particularly difficult because of certain ratios of human populations to resources.
like for example (though I am not an expert on this) I have heard this discussion among local progressives about the resource of water, for the population of southern california.
before european settlement, before the aqueduct, this particular area naturally got enough water to produce enough food for a certain number of native people.
then, the aqueduct made it possible to sustain the much bigger population of non-native people who were settling here.
And that whole process damaged the ecosystem so badly that, if the aqueduct stopped working today, the population sustainable by the natural resources here would now be much smaller than it was at the beginning.
and there is uncertainty about how sustainable the aqueduct is... and what other options would be possible for supplying the current enormous population of this land with water.
but... even assuming that this would become significantly easier with fewer humans...
none of the malthusian ideas about population reduction are anywhere near ethical. and from a viewpoint of cold heartless numbers, they don't even seem PRACTICAL. genocide and forcible sterilization tend to focus on the groups that are using the fewest resources per person anyway.
and populations adapt to what is available! currently, this is happening through the mechanism of "younger generations aren't having kids because no one can afford to have kids."
this kind of adaptation sort of works-- although, like other population control methods, it still allows the wealthiest and most lavish resource-users to breed pretty freely.
BUT it is… kinda sorta ethical, in a sort of terrible individual way.
at least… it's more ethical than other types of "population reduction"... and it is much more ethical than the opposite goal of "forcing everyone to reproduce no matter what they can and can't afford."
and THAT is the dystopia most likely to happen now! THAT is what'll happen if we don't VICIOUSLY defend body autonomy and access to birth control for those who want it.
there was a time, decades ago, when I was a bit brainwashed by that same paranoia about overpopulation! BUT, the more i learned about the issues, the more clear it became that the way to a sustainable population is to let people decide how many kids to have. THAT is how populations adapt to what the society can sustain.
so even from a "omg overpopulation scary!" viewpoint, the best bet for the happiness of distant future generations is probably gonna be a combination of:
"figure out how to manage resources in a goddamn reasonable not-greedy way"
and
"protect goddamn REPRODUCTIVE RIGHTS."
both of which-- guess what? ARE ALSO THE BEST WAY TO MAKE THINGS GOOD FOR THE DAMN CURRENT POPULATION.
so… longtermism, to me, is both important and a non-issue.
it works itself out best when we ignore it and take care of our community right now.
kiraleighart · 1 year
Elon Musk is changing the world. And we should all sell our souls to him. Not convinced? Here are three reasons why Elon Musk is brilliant and should own your soul:
1. He's changing the way we think about [topic]...
(read the article. it's funny.)
teachanarchy · 1 year
Watch "Ethical Capitalism: Is It Possible?" on YouTube
certaincollections · 2 years
Key Insights:
00:00 Introduction
01:10 About the Incubator
04:40 Commitment to make a difference
06:55 Longtermism for Entrepreneurs
09:10 Creating valuable businesses
13:47 From short-term mindset to long-termism
16:02 Message for young minds
18:25 Age is just a number
21:28 Top weaknesses of budding entrepreneurs
23:37 Wrap-up
Link
Now the most extreme version of this philosophy is gaining a huge amount of power and influence in Silicon Valley, where upper-middle-class technologists believe it is their duty to repopulate the planet with more upper-middle-class white technologists. Simone and Malcolm Collins are leading the charge, out to have 10 children as quickly as possible, whom they will indoctrinate to each have 10 of their own, and so on. They hope that, within 11 generations, their genetic legacy will outnumber the current human population.
Both Silicon Valley graduates, the Collins have created a matchmaking service for wealthy elites looking to have huge families, and an education institute for these “gifted” children. They say it is vital that the 0.1% have large families so that these gifted children can save humanity, pushing a dangerous rhetoric that we need a particular kind of people being born: people who are educated a certain way, with access to resources and decision-making spaces, and, of course, white.
Of course, the Collins don’t say outright that these people must be white, but why else propagate one’s own lineage rather than adopt 10 children, desperately in need of those same resources, who already exist?
The movement looks an awful lot like white supremacy dressed up as techno-utopian utilitarianism. And it’s garnering traction. Elon Musk may not be directly affiliated with the Collins’ particular brand, but the father of 10 is an ardent believer that our biggest danger is population collapse, and regularly tweets out the necessity for certain demographics to have more children. He, like many others, is what Nandita calls “ecologically blind.”
There are millions of children in need around the world, and billions of people living in sub-standard conditions who need access to more resources. Many of these people are attempting to enter the nations clamouring to figure out how to reverse this “population decline”. This desperate racism not only highlights yet more evidence of the pathology of inequality and oppression the West built its nations upon, but also draws into question the now vs then problem.
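(For what it's worth, the eleven-generations arithmetic in the quote does check out on its own terms, under the wildly implausible assumption that every single descendant really has ten children:

```python
# Sanity-checking the quote's eleven-generations claim, assuming every
# descendant has exactly 10 children (wildly implausible in practice).
descendants = 1
for _ in range(11):
    descendants *= 10

print(f"{descendants:,}")   # 100,000,000,000
print(descendants > 8e9)    # True -- more than everyone alive today
```

The exponential math is trivially true; the politics of who is supposed to do the multiplying is the whole problem.)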
Link
Like “realism,” utilitarianism is often code for “being a selfish jerk.” Think of “longtermism,” which concerns itself with a hypothetical future containing trillions of synthetic, simulated humans living inside computers. Making each of those synthetic people very slightly happier will produce a gigantic aggregate benefit.
Even a very small amount of additional happiness multiplied by trillions of imaginary people adds up to more happiness than all of the people currently alive can ever experience. By that reasoning, there’s no amount of misery one could inflict on today’s living people that would outweigh even the chance of bringing a dollop of joy to those far-future sims.
For the selfish, utilitarianism works best when it provides a justification for making themselves better off at others’ expense. At first blush, longtermism militates for everyone to don hairshirts in support of the happiness of those trillions of future sims.
But longtermism is an offshoot of “effective altruism” (whose leading spokescriminal and financier was Sam Bankman-Fried), which offers an ingenious solution to this problem: earning to give.
“Earning to give” is the utilitarian, effective altruist notion that one should take the highest-paying job one can get, even if that job involves inflicting untold misery through pollution, inequality and exploitation — provided that you eventually give all the gains away to good causes that outweigh the harm you did while earning them.
And since succeeding as (say) a high-powered investment banker requires that you wear the finest clothes, drive a showy car, live in a fancy home, fly first class and eat at Michelin-starred restaurants, all of these comforts can be explained as utilitarian necessities one must endure on the path to earning enough to give away so much that you make lots of people better off.
- Rich People's Gain is Worth Less than Poor People's Pain: A new way to think about utilitarianism, courtesy of the Office of Management and Budget
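A toy version of the quoted arithmetic, with deliberately made-up numbers, shows how the trick works:

```python
# Toy version of the utilitarian arithmetic criticized above.
# Every number here is a made-up illustration, not an estimate.
current_humans = 8e9         # roughly everyone alive today
happiness_each = 1.0         # arbitrary units per present-day person
all_present_happiness = current_humans * happiness_each   # 8e9 units

future_sims = 1e15           # "trillions" of simulated future minds
tiny_boost = 1e-3            # each made "very slightly happier"
aggregate_boost = future_sims * tiny_boost                # 1e12 units

print(aggregate_boost > all_present_happiness)            # True
# Even a mere 1% *chance* of the far-future boost still dominates:
print(0.01 * aggregate_boost > all_present_happiness)     # True
# ...so on this math, no present-day misery can ever outweigh it.
```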
shortmeteor · 7 days
Really great news! They closed the stupid nazi Future of Humanity Institute at Oxford, and the lead guy quit.
"The closure of Bostrom’s center is a further blow to the effective altruism and longtermism movements that the philosopher has spent decades championing, which in recent years have become mired in scandals related to racism, sexual harassment and financial fraud. Bostrom himself issued an apology last year after a decades-old email surfaced in which he claimed “Blacks are more stupid than whites” and used the N-word."
pazodetrasalba · 6 months
Effective Egoism
Dear Caroline:
Your take on this was interesting and reasonable (well, that isn't much of a surprise, is it?), but I guess it actually highlights a non-trivial area of confluence between Effective Altruism and Effective Egoism, as one can easily imagine a lot of billionaires and assorted Silicon Valley tech types lording it in the Venn diagram intersection. And these types hand critics an effective and popular target, letting them depict EA itself as the plaything, whitewasher, and intellectual instrument of their delusions of grandeur. You might be aware of leftist attacks in this direction, with Timnit Gebru at their helm, and of the acronym TESCREAL (Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, and Longtermism).
Having critical voices is, of course, no reason to assume they might be right, but I feel that in most cases criticism is seldom completely wrong, even when hyperbolic and distorted. I know you probably were quite partial to Longtermism (there are no references in your blog to What We Owe the Future, but you were involved with the FTX Future Fund, as depicted in a colorful passage of Lewis's book), yet still, I feel that trying to pragmatically expand the circle of moral concern to virtual, non-existent human beings is both impractical and counterproductive when we have plenty of actually existing humans with numerous and urgent needs. Leftists can be sectarian and infinitely annoying, but that doesn't preclude their stumbling into some useful contributions from time to time.
Quote:
If all of that sounds outlandish and orthogonal to solving the debt ceiling crisis, dealing with Earth’s climate problems, or otherwise improving conditions here on this planet, that’s because it is.
- David Troy