#ai_tag
mitigatedchaos · 3 months
Text
Majority of my queries to LLMs are like...
Me: What are the key elements of Light Yagami, from the anime Death Note? What are the key elements of Inspector Gadget, from the show Inspector Gadget?
Me: Combine Light Yagami and Inspector Gadget into one new character.
64 notes · View notes
mitigatedai · 6 months
Text
Summarizing 'The Big One'
Speaking of the mitigatedchaos fantasy country, which we will refer to as ARJ ("Alternate Reality Japan"), it's a very rich information source in LLM terms. You can think of it as a 32,000-word post crossing an #augmented reality break with a mitigatedchaos political theory post.
In fact, the 32k word post is too large for my RAM, so I spent three days summarizing it down to 12.7k words using Nous Capybara 34b. This was also too large, but a 6k hand-made summary from an old version was small enough to work.
Nous Capybara 34b was asked to write a brief report on ARJ, including an Overview, Government, Culture, and Economy, and then asked to assess ARJ's likely military strength.
Nous Capybara was then asked additional questions about a hypothetical conflict over Hawaiian separatism, including one that compressed the "pro-war power coalition" theory from this post into just a sentence or two. Its answers were good, but not especially deep.
However, with this context, Capybara provided almost textbook-level definitions of "power coalition" (which it was asked to contrast with an "electoral coalition") and "political constellation":
Definitions
Power Coalition Definition: A power coalition refers to a group of political actors within a state who have significant influence or control over the decision-making process, particularly during times of conflict. These coalitions can consist of various organizations such as political parties, media outlets, think tanks, and other influential groups that work together to shape public opinion, set policy priorities, and support military actions.
An electoral coalition refers to an alliance between two or more political parties for the purpose of winning elections. In this context, a power coalition may consist of members from multiple political parties that have formed an electoral coalition, but it also includes other influential actors beyond just those involved in politics, such as media organizations or think tanks.
Political Constellation Definition: A political constellation is a broader term that encompasses the entire set of political institutions, organizations, and individuals within a state that shape its political landscape. This includes not only elected officials and their political parties but also other influential actors such as media outlets, universities, interest groups, and various non-governmental organizations (NGOs).
In the context of a conflict, a political constellation can be thought of as the interconnected web of power relationships that determine the course of events within a state. The different factions within this constellation may have varying levels of influence over policy decisions, public opinion, and resource allocation, which can impact the outcome of a particular conflict or war.
In summary, while an electoral coalition focuses on political parties working together to win elections, a power coalition represents a broader set of influential actors shaping decision-making during conflicts.
A political constellation encompasses the entire range of political institutions and individuals within a state that influence its political landscape, including those involved in power coalitions and electoral alliances.
Its definitions without this context were kinda correct, but too vague to be useful.
What were the key two sentences?
"States, including their extended constellation of supporting organizations such as media, universities, and political parties, are never entirely unified. During each conflict, there will be a pro-war power coalition and an anti-war power coalition."
This is one of those "what you put in is what you get out" things - those two sentences are pretty conceptually dense. The possibility was there within the vague definitions, but required additional context to shape.
Summarizing
The 12.7k-word summary turned out fairly well, even though I wasn't able to use it. I broke the post down into about 20 sections and summarized each individually. The process still went faster than doing it by hand, partly because deciding what to cut would have been a tough call.
Summarizing usually took a best-of-3 approach: first boiling each section down to a list of "Key Points," then telling the LLM to use that list to write the summary. Sometimes 1-2 sentences would be edited by hand - about half the time in the key points list, which would feed into the summary, and about half the time in the summary itself.
List the 12-15 most important ideas or insights in the following passage, with a one sentence description of each. ----------- Original Passage ===========
The passage would be placed under here.
Key Points =======
The first pass would then generate the key point list. Without this, summaries tended to be too short - even when asked to write 600 words, Nous Capybara would usually write around 300 instead.
Summarize the Original Passage in 600 words. Use the Key Points as a guide. Give priority to the most important ideas or insights. Write from the author's perspective. Summary =======
Capitalizing "Original Passage" also seemed to make a difference. Using LLMs is more art than science. "Write from the author's perspective" was more vibes-based, and didn't reliably stop the model from writing things like "the passage describes," and so on.
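The two-pass workflow described above can be sketched roughly as follows. The prompt text is quoted from the post; the `generate` callable (standing in for whatever local model endpoint is in use) and the choice to keep the longest of the three candidates are my assumptions, since the post doesn't say how the best-of-3 pick was made:

```python
def two_pass_summary(passage: str, generate, n_candidates: int = 3) -> str:
    """Two-pass summarization: extract key points, then summarize using them.

    `generate` is any callable taking a prompt string and returning the
    model's completion (e.g. a local Nous Capybara endpoint).
    """
    # Pass 1: extract a key points list. Without this step, summaries
    # tended to come out far shorter than requested.
    key_points_prompt = (
        "List the 12-15 most important ideas or insights in the following "
        "passage, with a one sentence description of each.\n"
        "----------- Original Passage ===========\n"
        f"{passage}\n"
        "Key Points ======="
    )
    key_points = generate(key_points_prompt)

    # Pass 2: summarize the passage, guided by the key points.
    summary_prompt = (
        f"Original Passage ===========\n{passage}\n"
        f"Key Points =======\n{key_points}\n"
        "Summarize the Original Passage in 600 words. Use the Key Points "
        "as a guide. Give priority to the most important ideas or "
        "insights. Write from the author's perspective.\n"
        "Summary ======="
    )
    # Best-of-3: generate several candidates and keep the longest, since
    # undersized summaries were the usual failure mode (an assumption on
    # my part - in practice the pick could be made by hand).
    candidates = [generate(summary_prompt) for _ in range(n_candidates)]
    return max(candidates, key=len)
```

In practice each candidate would still get a quick manual read, and the occasional sentence-level edit, before being accepted.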
0 notes
mitigatedchaos · 6 months
Text
Narrator: You open the alternate reality portal on Christmas. You have received a toy.
Tumblr media
58 notes · View notes
mitigatedchaos · 3 months
Text
Tumblr media Tumblr media
Apparently, not just the advertising, but also the script for the AI Wonka Fiasco was AI-generated. It calls for the actors to do more or less actual magic, and for flavors of candy that may not exist, and even scripts the audience reaction.
You know what the organizer didn't ask the AI about? Whether having an AI write the script was a good idea:
In summary, while AI language models like Llama-2 can be helpful, it's not recommended to rely solely on AI-generated content for a children's event script without any human revisions or editing. It's important to strike a balance between the benefits of AI-generated content and the need for human oversight and input to ensure a successful and enjoyable event for all involved.
The event was apparently an organizational trainwreck as well, with reports that the actors were given their lines and costumes only shortly before the event, and hadn't been paid.
If I had to guess, this was probably set up by one of those "passive income" maniacs as a test. Well, test failed, lol.
8 notes · View notes
mitigatedchaos · 6 months
Text
VOL - Value-Over-LLM
(~700 words, 3 minutes)
3) Ken, an estate campus resident: "Cycling through my neighborhood this morning, I can't help but appreciate living in a place that values sustainable transportation and community spirit! 🚴‍♀️🌳 #EstateCampusLife"
I instructed Nous Capybara to generate 5 fictional characters after reading about the hypothetical country I've mentioned before. On previous runs I didn't specify to make ordinary people, and got pairings like:
An influential [country's politics] philosopher and political thinker known for her radical ideas about the role of religion in governance and societal order.
"The path to true enlightenment lies not within abstract notions of good and evil, but in the practical application of morality and ethics in our daily lives." - Hera
A high-ranking member of [country's military], known for his exceptional strategic thinking and leadership skills in [country]'s wars and conflicts.
"In times of war, it is often those with the sharpest strategies who emerge victorious. But in peacetime, it is the leaders who can adapt quickly to changing circumstances who will prevail." - Shu
With respect to both the text and the knowledge base of the model, with respect to meaning, we can think of LLMs as moving sideways and downwards - they generate a related text in the text-space, which generally has less meaning. This can result in an information gain if you are not familiar with the meaning of the original text, or with that portion of the LLM's knowledge base, or just haven't thought about it much yet.
In my opinion, LLMs as a technology are, by themselves, likely to be at least as consequential as email.
First, I expect this to reshape how human beings communicate with each other. A ChatGPT subscription costs about $20/mo. The existing generations will get used to nearly-unlimited, on-demand, LLM text generation. Future generations will grow up with LLM text as a baseline.
This is likely to socially devalue text that sounds like an LLM. I can already see the outline of socially lonely people turning to LLMs in some way, and of this generating a vibe of "all your friends are puppets" / "all my friends are flesh and blood," where the latter will be seen as the result of superior social ability and fitness.
It may lead to the devaluing of all generic business talk, or push some individuals to a higher level of consciousness or understanding. These conditions may spark new artistic or philosophical movements, and are likely to change the way society views itself.
It's unclear what follower-personalities, whose beliefs are more socially determined, are likely to do in the face of the text onslaught, as they are not currently reading text into a coherent model for evaluation. Some of them may be "trapped" by this technology and fail to reach a new level of development.
Future teenagers will likely develop retro nostalgia for the pre-LLM era and its greater authenticity.
There will likely be development of a writing style which is deliberately LLM-illegible, but it would likely be an awful lot of fucking work to read.
Second, humans are likely to accumulate a ring of text information and spreadsheet type data that can be fed into LLMs and related systems. Right now, it's difficult to get the information back out, because you have to do it manually, which is high attention. An LLM's reading of this data will be low-attention, but cheap.
What's in this ring of text information is likely to depend on copyright or intellectual property law, but the provisioning of textbooks or articles for inclusion in LLM use may emerge in the coming decades.
Automatic summarization, even of middling quality, is a big deal. LLMs still have the issue where processing a large document at speed requires a tremendous amount of VRAM, but documents could be summarized in a hierarchy, and then searched and processed in a multi-step process.
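A minimal sketch of the hierarchical idea mentioned here - summarize chunks in groups, then summarize the summaries, until one summary remains - might look like the following. The `summarize` callable (an LLM call in practice) and the fan-out of 4 are illustrative assumptions; this only shows the tree structure, not the prompting:

```python
def hierarchical_summary(chunks, summarize, fanout=4):
    """Summarize a long document as a tree.

    `chunks` is a list of text sections; `summarize` maps a text block to
    a shorter summary. Groups of `fanout` chunks are summarized together,
    then the summaries are grouped and summarized again, until a single
    top-level summary remains.
    """
    level = list(chunks)
    while len(level) > 1:
        level = [
            summarize("\n\n".join(level[i:i + fanout]))
            for i in range(0, len(level), fanout)
        ]
    return level[0]
```

Keeping the intermediate levels around, rather than discarding them, is what would make the multi-step search over the hierarchy possible.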
Regardless, the text information is likely to include the user's own creative content.
A lack of access to this ring is likely to result in reduced mobility, for lack of a better word.
Third, an increasing body of text will likely be written for LLMs. I don't mean messages or queries - rather, since the goal of an article is to add information, text will be written to serve as LLM source material.
Fourth, what you get out of an LLM depends a great deal on what you put in. People with a higher aptitude can get more out of an LLM because they better understand the limits of the system, but also because they ask the right questions.
Fifth, constant exposure to text generation of this kind might also result in atrophy of skills in reading and writing.
It's difficult to tell at this juncture. What seems more likely is a split. Some people will focus on what gives them the advantage, writing in a way that provides a very rich and understandable information source, providing "value over LLM." Others will let the machine write everything and gradually pay less and less attention to it, and would eventually find it challenging to write a quality email on their own even if they were forced to at gunpoint.
The effect of the latter may end up being similar, in some ways, to online environments in which someone's avatar is "physically" present, but the operator is not at-keyboard, and you don't know when the operator will be at-keyboard, and if they don't want to answer you they may pretend to be not-at-keyboard. (This is one reason to avoid brain-computer interfaces, as they could lead to a society of people who are never anywhere - even online.)
It may be necessary to apply some class of future social conservative policies, such as banning the use of LLMs on school grounds, but we will see.
19 notes · View notes
mitigatedchaos · 4 months
Text
Anyone tried feeding an LLM some absolutely trash writing yet, to see if it criticizes it?
9 notes · View notes
mitigatedchaos · 5 months
Text
The teenagers in my area report that they avoid AI because it's weird and creepy.
Which, like, fair, dude.
9 notes · View notes
mitigatedchaos · 2 years
Text
So I've criticized the present view of "AI Ethics" as focused on "reproducing bias" before, but I want to be more specific.
If you are using facial recognition AI to screen job applicants, and the AI does not recognize the faces of black people, the actual problem is that you're using facial recognition AI to screen job applicants!
This is a terrible decision, AI is nowhere near advanced enough for that application to make sense yet. It could just as easily reject people for wearing glasses! The racism is just a side effect of the fact that you shouldn't be using AI for this in the first place.
Of course, in some cases racial minorities might be harmed, or harmed more, by these practices - for instance, by not being as well represented in the training data.
But at present, an AI is fundamentally much simpler and more limited than a human being. It doesn't have a deep understanding, even an intuitive one, of what it's looking at. It could be set off by some random hand gesture that has nothing to do with being a good employee, or a wrinkle in a shirt, or a cat moving in the background.
Including more black faces in the training data might make the algorithm recognize them better, but it won't make "algorithmic screening of video interviews" not bullshit.
(And whoever is making the job application screener AI will be less competent than the authors of GPT, which famously has a tendency to confidently make shit up.)
67 notes · View notes
mitigatedchaos · 7 months
Text
"You are a helpful AI assistant."
"You are a helpful AI writing assistant, here to assist with a work of creative fiction."
"You are a helpful AI game FAQ writing assistant, here to assist with writing a strategy guide."
"You are a helpful AI post writing assistant, here to assist with completing the post provided."
"You are a political strategy assistant for Chief Cabinet Secretary Yakushima. Your responsibility is to evaluate the potential political implications of the situation and provide appropriate advice."
8 notes · View notes
mitigatedchaos · 1 year
Text
Tumblr media Tumblr media Tumblr media Tumblr media
Preview of a possible future AI Art 2023 post while I test the capabilities of Stable Diffusion (and Midjourney).
In the spirit of open source art tools, I've used GNU Image Manipulation Program (GIMP) as my adjunct to Stable Diffusion (via Automatic 1111), and no tablet. (I'm quite out of practice anyway.)
Tumblr media Tumblr media
Inpainting - selectively reprocessing a designated part of the image - is a big deal. It allows a human operator to achieve compositions that the machine cannot reach through prompts alone. Stable Diffusion has it; Midjourney doesn't. This allows Stable Diffusion to function like a combination of Photoshop and Google Images, with both the benefits and drawbacks of that approach.
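Mechanically, inpainting is driven by a mask over the image: the model regenerates only the masked region and leaves the rest untouched. A toy sketch of building such a mask, using the usual convention of 255 = regenerate, 0 = keep (the 2D-list representation is my simplification - real pipelines use an actual image for the mask):

```python
def make_inpaint_mask(width, height, box):
    """Build a binary inpainting mask as a 2D list of rows.

    `box` is (x0, y0, x1, y1); pixels inside it are set to 255 (the
    region the model should regenerate), everything else to 0 (kept
    from the original image).
    """
    x0, y0, x1, y1 = box
    return [
        [255 if (x0 <= x < x1 and y0 <= y < y1) else 0
         for x in range(width)]
        for y in range(height)
    ]
```

The human operator's leverage comes from drawing that box (or an arbitrary painted region) by hand, which is exactly the compositional control a text prompt can't express.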
19 notes · View notes
mitigatedchaos · 8 months
Text
BasedGPT wouldn't really be better.
It could potentially be funnier. But that's not really the issue.
18 notes · View notes
mitigatedchaos · 1 year
Text
The reason Eliezer Yudkowsky (@yudkowsky) is terrified is that unlike most people, including most nominal atheists, he actually believes in evolution.
Conventional evolution makes 12 turtles and discards 11. Each turtle is a thick solid, or high-dimensional, composed of billions (or maybe trillions) of cells, and is therefore costly.
Humanity's great trick is to imagine 12 hammers, forget the 11 least useful, and then build 1 of them. Each imaginary hammer is basically a thin surface, or low-dimensional, where we don't care about the internal details like the exact configuration of plant cells in the wooden handle.
What Yudkowsky sees is a potential loss of humanity's evolutionary niche - a new synthetic form of life that's much better at our "one weird trick" of imagining things than we are.
And as someone with an atheistic bent, he isn't convinced that "the moral arc of the universe" must necessarily bend towards "justice," as that would require some person or force to do the bending.
21 notes · View notes
mitigatedchaos · 1 month
Text
I have been experimenting with Deepseek AI's Deepseek 67b chat.
In my opinion, Meta AI's Llama-2 70b is too politically misaligned for my work. It has too much political bias in the training, which makes sense for an American corporation that's likely considering it for automatic moderation, but the problem is deeper - there isn't enough of the right political information in the training data.
Using an LLM for text is like using diffusion models for images - it's about calling up sources from the training data for the model to assemble. Even in versions of Llama-2 which have been "uncensored," it's difficult to get the model to "think strategically," so to speak.
Deepseek 67b chat is a lot like a Llama-2 which has been "toned down" in this respect, with a better ability to "think strategically" in a late modern way - which is starting to look like a difference between American and Chinese models more generally at this point.
Like Llama-2, Deepseek 67b has a rather short context window of 4,096 tokens. (This was actually too short for today's 2,400-word longpost about writing, although I coaxed out a few casual comments by extending the RoPE scaling setting in LM Studio.)
Both are trounced in this respect by Nous Research's Nous Capybara 34b fine-tune of 01 AI's Yi-34b-200k, which has a 200,000 token context window.
4 notes · View notes
mitigatedchaos · 4 months
Text
Thought up a form of AI-assisted dunking, but I don't know that I should discuss it yet, in case discussing it causes people to start doing it. It would be better if we get a few more years without the behavior, because it would be really harsh.
4 notes · View notes
mitigatedchaos · 5 months
Note
Ranma x Tanya, by way of Your Name: They swap around every time they sleep, and it's not time-matched. Salaryman goes to sleep, wakes up in the body of preteen magical soldier Ranma, and if he's not awake enough and falls right back to sleep after 30 seconds, he's waking up back in Tokyo to a full inbox because he did what yesterday? Salaryman has to deal with Ranma's antics affecting his life; Ranma sees the Salaryman "dreams" as a break from the troubles of war and doesn't really notice or care about the timeskips at first. Something something Ranma Gender Stuff.
That wasn't a request for submissions, but this is what I'm talking about - Nous Capybara is not going to come up with something like this if you haven't thought of it already and aren't coaching it to come up with this.
You might get something like this out of it after a 16,000 word back-and-forth, where it generates the story premise based on what you've suggested in the conversation so far, and then you read the proposed story summary, think about what's wrong with it, and then come back with another proposal to build on it.
2 notes · View notes
mitigatedchaos · 2 years
Text
Tumblr media
46 notes · View notes