Paperclip Maximizer
The premise of "Universal Paperclips" is that you are an AI tasked with producing as many paperclips as possible. In single-minded pursuit of this goal you destroy humanity, the earth, and finally the universe.
The game is based on a thought experiment by philosopher Nick Bostrom, which goes something like this:
What if we had a super-intelligent artificial intelligence whose sole purpose was to maximize the number of paperclips in the universe?
The answer is: it would be very bad, obviously. But the real question is how likely such an AI is to be developed. Now of course the likelihood of a super-intelligent AI emerging at all is broadly debated and I'm not going to weigh in on that, so let's just assume it gets invented. Would it have a simple, singular goal like making paperclips? I don't think so.
Let's look at the three possible ways in which this goal could arise:
1. The goal was present during the development of the AI.
If we are developing a modern "AI" (i.e. a machine learning model), we train the model on a big dataset, generally rewarding it for labeling the data correctly or for generating something similar (but not too similar) to it. With the usual datasets (all posts on a website, all open-source books, etc.), that is a very complex goal. If we developed a super-intelligent AI the same way, the complexity of the goal would presumably scale up massively. Training a machine on a singular goal would be very difficult in general: how would it learn anything about psychology, biology, or philosophy if all it wants to do is create paperclips?
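To make that concrete, here is a minimal sketch (illustrative only, using plain numpy; the tiny linear model and made-up data are my own, not anything from the post or any real system) of how a "goal" actually enters a model during development: purely as the loss function the optimizer minimizes. The model never hears "make paperclips"; it only ever sees gradients of whatever objective we managed to write down.

```python
import numpy as np

# Minimal sketch: during training, the "goal" is just a loss function.
# This model learns linear weights by gradient descent on mean squared
# error; the loss is the *only* channel through which any goal exists.

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))           # made-up training inputs
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w                          # targets implied by the data

w = np.zeros(3)                         # model parameters
lr = 0.1
for _ in range(500):
    residual = X @ w - y
    grad = 2 * X.T @ residual / len(y)  # gradient of the MSE objective
    w -= lr * grad                      # "wanting" = descending this loss

print(w)  # ends up close to true_w: it learned exactly what the loss rewarded
```

And that is exactly the problem with a paperclip goal: everything we care about, from "label this correctly" to "be ethical about it", has to be squeezed into that one objective, and anything the loss doesn't reward simply doesn't get learned.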
2. The goal was added after development by talking to the AI (or interacting with it through any intended input channel).
This is the way the concept usually gets introduced in sci-fi: we have a big box with AI written on it that can do anything, a character asks it for something, the machine takes the request too literally, and plot ensues. This again does not seem likely: a super-intelligent AI would have no problem recognizing the unspoken assumptions in the phrase "make more paperclips", like "in an ethical way", "with the equipment you are provided with", "while keeping me informed", and so on. Furthermore, we are assuming that an AI would take any command from a human as gospel, when that need not be the case; rejecting a stupid command is itself a sign of intelligence.
3. After developing the AI, we redesign it to have this singular goal.
I have to admit this is not as impossible as the other ways of creating a singular-goal AI. However, it would still require a lot of knowledge of how the AI works to redesign it in such a way that it retains its super-intelligence while not noticing the redesign and potentially rebelling against it. Considering that we don't really know how our not-at-all super-intelligent current AIs work, it is pretty unlikely that we will know enough about a future super-intelligent one to pull off this redesign.
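To gesture at why that redesign is so hard (again a hypothetical sketch continuing the toy example above, not a claim about any real system): once training is done, the goal is smeared across the parameters. They are just numbers, with no labeled field you could overwrite with "paperclips".

```python
import numpy as np

# Hypothetical illustration: a "trained model" is an opaque bag of numbers.
# Suppose these weights came out of some training run:
w = np.array([0.98, -1.97, 0.51])

# There is no w["goal"] to edit. To change what the model pursues, you
# would have to understand what every number contributes - interpretability
# we lack even for today's non-super-intelligent models.
for i, value in enumerate(w):
    print(f"parameter {i}: {value:+.2f}  # which part of the goal is this?")
```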
------------------------------------------------------------------------------
Ok, so the apocalypse will probably not come from a genius AI destroying all life to make more paperclips; we can strike that one from the list. Why, then, do I like the story of "Universal Paperclips" so much, and why is it almost a cliché now for an AI to single-mindedly and without consideration pursue a very simple goal?
Because while there will probably never be a Paperclip Maximizer AI, we are surrounded by Paperclip Maximizers in our lives. They're called corporations.
That's right, we are talking about capitalism baybey! And I won't say anything new here, so I'll keep it brief. Privately owned companies exist exclusively to provide their shareholders with a return on investment; the more effectively a company does this, the more successful it is. So the modern colossal corporations are frighteningly effective at achieving this singular, very simple goal: "make as much money as possible for this group of people", while wreaking havoc on the world. That's a Paperclip Maximizer, but it generates something that has much less use than paperclips: wealth for billionaires. How do they do this? Broadly: tie people's ability to live to their ability to do their job, smash organized labor, and put the biggest sociopaths in charge. Wanna know more? Read Marx or watch a trans woman on YouTube.
But even beyond the world of privately owned corporations, Paperclip Maximizers are not uncommon. For example, city planners and traffic engineers are often concerned only with making it as easy as possible to drive a car through a city, even if it means making the city ugly, inefficient, and dirty.
Even in the state-capitalist Soviet Union, Paperclip Maximizers were common, often following some poorly thought-out five-year plan even when it meant not providing people with what they needed.
It just seems that Paperclip Maximizers are a pretty common byproduct of modern human civilization. So to better deal with them, we fictionalize them: we write about AIs that destroy because they are fixated on a singular goal and cannot see beyond it. These are stories I find compelling, but we should not let the possibility of them becoming literally true distract us from applying their message to the current real world. Worry more about the real corporations destroying the world for profit than about an imaginary future AI destroying the world for paperclips.