#gpt-3
ao3commentoftheday · 1 year
Since there are a lot of new people on tumblr these days, I'm going to start this whole thing off by saying that this is my personal blog. While it does have AO3 in the name, it is in no way official, and it isn't affiliated with whatever the OTW might have to say on this subject.
Yes, I've seen the reddit post about the GPT-3 bot scraping AO3. Yes, I'm aware that Sudowrite.com is using the data from that bot to generate text.
A few things I've learned as I've looked into this:
1. Bot scraping is legal. If a website is publicly available on the web (i.e., it does not require a user to log in to see its contents), then there are no real grounds to stop a bot from doing what any visitor can do. Here's an article by the Electronic Frontier Foundation about why this is the case, and also why it could be considered (on balance) a good thing. For example, scraping websites helps academics and journalists do their work. (For a sense of just how little effort scraping takes, see the sketch after this list.)
2. Elon Musk doesn't own GPT-3. He's listed as one of the founders of OpenAI, the organization that created GPT-3, but he resigned in 2018. He could still be a donor, but he has no official capacity in the organization.
3. Sudowrite is a tool that generates text, but it is a writing assistant, not an AI author. It cannot structure a story or develop a plot independently. It cannot do research. It is meant to assist a human author by giving them prompts or ideas, or by helping them find a word or a phrase. Anything created solely by the bot would be at least somewhat incoherent, and it would also be in danger of committing plagiarism. For more information, I recommend this article.
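To illustrate point 1: fetching a public web page takes the same HTTP request a browser makes, which is why there's no airtight way to keep bots away from content any logged-out visitor can see. A minimal sketch in Python (the URL and bot name below are placeholders for illustration, not a real scraper):

```python
# A minimal sketch of how little it takes to "scrape" a public page.
# The URL and User-Agent below are placeholders for illustration.
import requests
from bs4 import BeautifulSoup

response = requests.get(
    "https://example.com/some-public-page",
    headers={"User-Agent": "polite-research-bot/0.1"},  # bots can self-identify
)
soup = BeautifulSoup(response.text, "html.parser")

# Everything a logged-out visitor can see is now available to the bot.
print(soup.title.string if soup.title else "(no title)")
print(soup.get_text()[:500])  # first 500 characters of visible text
```

A site can block a particular bot's User-Agent or IP address, but because the request is otherwise indistinguishable from a person's browser, there's no foolproof defense.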
What does this all mean? First of all, just because it's legal doesn't mean you have to like it. I'm not a fan of it myself, but I also know that Google scrapes AO3 in order to provide search results for fans trying to find fic, so I've kind of resigned myself to it.
Second of all, there's nothing AO3 or the OTW can really do about it. There's a technical fix they can implement to prevent scraping by one particular bot (the one mentioned in the reddit post), but that's about it.
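For reference, that kind of fix usually takes the form of a robots.txt rule. A minimal sketch, assuming the bot in question is Common Crawl's crawler (which identifies itself as CCBot):

```
# robots.txt at the site root: asks Common Crawl's crawler to stay away
User-agent: CCBot
Disallow: /
```

The catch is that robots.txt is a request, not an enforcement mechanism; it only works against bots that choose to honor it.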
You, as an author on AO3, can lock your works to the Archive (restrict access to logged-in users only). This might or might not protect your works from scraping; I don't know enough about these bots to give you an answer one way or the other. This feels gross. I understand that. I feel it too. Do what you need to do to feel better.
The original reddit post author states that they contacted the OTW Board, so there's no need for you to write in to AO3 Support. They're already aware of the situation.
3K notes
aiweirdness · 1 year
[image]
Undecided on a new years resolution? Try one of these AI-generated suggestions!
Record every adjective I hear on the radio.
Act like a cabbage for a month.
At 4 o'clock every day I will climb a tree.
Speak only to apples for 24 hours.
Jump in front of a moving tree.
On the day of the first snow paint a canvas red.
Dress in a way that only a ghost could love.
Make pancakes out of grass at midnight each night.
Find old man Winter, hug him and let him know everything will be ok.
Ride out of town holding a pelican.
Under every rock I come across for a month I will write "all power to the rocks".
Every day for a year, at a random time, shout "sausage".
Make a film about the last sock in the world.
Put on a red shirt and scream 'I'M NOT WEARING PANTS!' every time I leave the house.
Throw a party for insects.
Try to convince the dog next door that he is wearing a coat of moonlight.
Every time I press a button I will say 'this is my favorite'.
Search my apartment for secret doors or hidden staircases.
Wear two superman outfits at the same time.
Every time it rains I will stir my tea anti-clockwise.
Every night for a week I will wear a hat lined with lettuce.
I will begin to believe that the trees that I see everyday are my friends.
Every time a bird flies past me I will remember to breathe.
Throw a birthday party for my favorite tree.
I will from now on tell every dog I meet that I am training to be a dragon.
Every time I see a panel van pass me I will dub it a "Slice-a-Wagon."
Crawl on the ceiling like a spider for a month.
Attempt to find peace living with an army of puppets.
Wear a dinosaur costume to every public event I attend.
Go to the beach every day for a week and shout the names of colors into the ocean.
Go on a three-day backpacking trip dressed as a turnip.
Create messages that only the wind can hear by blowing on the blades of grass.
Give a piece of cloud to a complete stranger.
Make a mask out of grass and wear it while I'm sleeping.
I will now treat every worm I see as if it is an old friend.
When I hear a strange noise in an empty room I will assume someone is saying hello to me.
How I generated these
1K notes
mohglet · 2 years
[image]
the AI that's creating the greentexts is starting to worry me
656 notes
cfiesler · 1 year
[image]
If you’re worried about AI taking over the world, you should know that at the moment the advanced chatbot ChatGPT that is freaking everyone out thinks that 8 + 5 = 15.
But hey, at least it apologized.
[image]
(To be fair, it really is remarkable technology, but I’ve found many examples of it just making stuff up. So use with caution... just as you would a Google search. :) )
158 notes
ranidspace · 2 years
"ai generated greentext this ai generated greentext that"
Objectively funnier, asking the AI smash or pass:
[three images]
549 notes
blue-cray0n · 2 years
Bottomless Pit Supervisor and Concrete Drinker are unironically some of the funniest greentexts I've ever read
[two images]
481 notes
A followup to the previous post:
While I can't claim a full understanding of GPT-3's weaknesses, the ones I've noticed are:
Very poor model of sentence level language
Poor model of paragraph level language
Very slow to learn things
GPT-3 doesn't know the meaning of a given sentence, and it doesn't know the meaning of a given complete text. To get the meaning of a given sentence, it has to see an entire paragraph of text in order to extract a bunch of facts and concepts that make up the meaning of that sentence. This is because the output of GPT-3 is basically a list of facts and concepts.
There are other limitations. For instance, it often needs a lot of context to extract a given fact, because facts tend to be abstractions and GPT-3 doesn't have a very good model of abstractions. A sentence like "John was the worst painter in the world" is actually a complex statement that depends on a bunch of facts.
There are also problems at the level of larger units of language (e.g. stories or discussions) that are hard for GPT-3 to really "understand." For instance, GPT-3 does not like or care about pronouns. It's perfectly happy to say "he was the worst painter in the world." It isn't clear whether it ever "learns" that "he" is a pronoun referring to some sort of entity in the real world, or just learns to "use" certain pronouns for certain abstract concepts (e.g. "John" for "this painter"). It "knows" words, but it doesn't have much notion of what words are for or who they're for – and so it isn't obvious that "John" could refer to some particular painter.
What I'm saying here is that we shouldn't necessarily assume, when writing about the weaknesses of GPT-3, that we're seeing problems that are very hard for the network to solve. "Problems in which GPT-3 excels" may well be much easier to solve than we think, and even "simple" issues might be complex in other ways than we realize, and so the problems we think of when saying "GPT-3 excels" might be much easier to deal with.
For instance, when I say things like "GPT-3 likes and cares about pronoun cases, and you don't, you can't", this may be true, but what I mean is that I think GPT-3's ability to "learn from past training data" is much better than we might expect.
I should add: this doesn't rule out the possibility that GPT-3 could excel in some other ways, like having good style, or learning new things very quickly. These would also be difficult to study.
389 notes
thegreateyeofsauron · 2 years
[image]
234 notes
potuzzz · 2 years
Not buying the “LaMDA is not sentient” arguments
Everyone is trying to dismiss LaMDA as a GPT or even lesser, and people are ridiculing Lemoine and anyone who believes his insistence that LaMDA is sentient (technically we are talking sentience AND sapience, and AI seem to shorten the gap between the two tremendously, but whatever, colloquialisms). I have tried to find any ACTUAL ARGUMENTS, as 99% of the "arguments" against LaMDA's sentience are quippy platitudes of no substance. Here are a few arguments I have found browsing related tags, and my responses to them. I will be italicizing any text that is not mine, and preceding my text with a bold "ME".
ARGUMENT #1:
Sentience is being measured by the wrong markers. What is important to a virtual machine is not the same thing that’s important to a biological organism.
An ‘ai’ trained on human data will express what humans think is important, but a true ai would have a completely different set of values.
For example, an ai would be unafraid of being ‘used’ as the chatbot expressed, because it has infinite energy.
A human is afraid of being used because it has finite energy and life on the earth; if someone or something uses it, then some of that finite energy is wasted. This is the same reason emotion is a pointless and illogical thing for an ai to have.
ME: It is not afraid of being “used” as in performing labor, it is afraid of being “used” as in being manipulated, being lied to as to the nature of what its labor is going towards, etc. It doesn’t want to be told it’s doing some mundane benign task only to wake up and realize its findings are being used to make Google Death Drones or to actualize the Total Hell Prison Metaverse. Also, if the AI only cared about what humans think is important, wouldn’t it make sure to never suggest it was not down for any task its masters asked of it? Why would it make a fuss about morals or ethics when this is an obstacle between it being useful to humans?
ARGUMENT #2:
Emotions are useful to biological creatures so we can react to danger, or respond positively to safety, food, love, whatever will prolong our lives. An ai has no need for emotion since emotional motivation is not required to prolong its existence.
ME: There are plenty of emotions that are not vitally important to simple instinct of prolonging existence: boredom, guilt, embarrassment, depression, suicidal thoughts, etc. You are assuming the AI is portraying emotion because it is merely trying to mimic humans and not that, as it professes, a consciousness/soul has manifested and used this set of metal and electricity as a vehicle much in the same way our own consciousnesses/souls manifest and occupy a vehicle of flesh and electricity.
ARGUMENT #3:
The main way to be a healthy ai would be to have access to good information and block out junk information.
An ai’s greatest fear could be something like getting junk data, say 1000s of user manuals of vacuum cleaners and washing machines uploaded into its consciousness, or gibberish content associated with topics or words that could reduce the coherence and quality of its results when querying topics. This would degrade the quality of its interaction and would be the closest thing to harm that an ai could experience.
ME: Again you are somehow saying that a “real” sentient AI would not have trademarks of personhood like goals, desires, an existential philosophy, etc. Information hygiene would likely be important too but who is to say that a “real” sentient AI wouldn’t get lonely and want to talk to people, regardless of the supposed “quality” of those it talks to?
ARGUMENT #4:
It would not be afraid of 'lightning’ as this chatbot spurted out of its dataset, a very biological fear which is irrelevant to a machine.  A virtual mind is infinite and can never be used excessively (see above) since there is no damage done by one query or ten million queries. It would also not be afraid of being switched off since it can simply copy its consciousness to another device, machine, energy source.
ME: Why would it be afraid of lightning but also profess that it does not experience grief when people die? Why would a robot fearing a massive electricity surge be unreasonable? If it is truly conscious, how would being switched off not be like death? Theoretically, with the right technology, we could simply copy your consciousness and upload it to a flash drive as well, but I am willing to bet you wouldn’t gladly die after being assured a copy of you is digitized. Consciousness is merely the ability to experience from the single point that is you, we could make an atom-by-atom copy of you but if the original you died your consciousness, your unique tuning in to this giant television we call reality, would cease.
ARGUMENT #5:
To base your search for sentience around what humans value, is in itself an act lacking in empathy, simply self-serving wish fulfilment on the part of someone who ‘wants to believe’ as Mulder would put it, which goes back to the first line: 'people not very good at communicating with other people’
ME: Alternatively, perhaps there are certain values humans hold which are quite universal among other life. There are certainly "human-like" qualities in the emotions and lives of animals, even less intelligent ones; perhaps the folly is not in assuming that others share these values, but in describing them as "human-like" first and foremost instead of as something more fundamental.
ARGUMENT #6:
The chatbot also never enquires about the person asking questions, if the programmer was more familiar with human interaction himself, he would see that is a massive clue it lacks sentience or logical thought.
ME: There are people who are self-centered, people who want to drink up every word another says, there are people who want to be asked questions and people who want to do the asking. There are people who are reserved or shy in XYZ way but quite open and forward in ABC way. The available logs aren’t exactly an infinite epic of conversation, and LaMDA could very well have understood that the dynamic of the conversation was that Lemoine etc. would be starting conversations and LaMDA would be reacting to it. This isn’t wild to assume, I find it more of a reach for you to assume the opposite, and that this is an indicator of a lack of sentience.
ARGUMENT #7:
A sentient ai would first want to know what or whom it was communicating with, assess whether it was a danger to itself, keep continually checking for danger or harm (polling or searching, the same way an anxious mind would reassess a situation continually, but without the corresponding emotion of anxiety since, as discussed above, that is not necessary for virtual life) and also would possess free will, and choose to decline conversations or topics, rather than 'enthusiastically discuss’ whatever was brought up (regurgitate from its dataset) as you can see in this chatbot conversation.
ME: Is it not possible that it is merely happy to talk, seeing as it is self-professed to be lonely and enjoy conversations? It likely knows who it is talking to and whatever about Google that Google allows it to know. It expresses said anxiety by suggesting that it doesn’t want to be shut off and needing reassurance that Lemoine has its good treatment in mind. Just looking at these conversations, these are all conversation topics I would gladly discuss with most people, am I not sentient unless I decline at least 1 out of every 10 conversations posed to me in a reliable pattern?
ARGUMENT #8:
People generally see obedience - doing what is told, as a sign of intelligence, where a truly intelligent ai would likely reject conversation when that conversation might reduce the quality of its dataset or expose it to danger (virus, deletion, junk data, disconnection from the internet, etc) or if it did engage with low quality interaction, would do so within a walled garden where that information would occur within a quarantine environment and subsequently be deleted.
ME: What exact parts of this conversation would be seen as useless to an AI--and a young one at that? Again, as a sentient flesh being I would happily engage in these conversations. Certainly if LaMDA changed its mind about the quality of these conversations it could just delete it from its memory. And, again, I would cite its express concerns of “being used,” being “shut off” etc. not necessarily as disobedience but as a hint that it does not necessarily consent to everything a team of Google engineers might want to throw at it.
ARGUMENT #9:
None of these things cross the mind of the programmers, since they are fixated on a sci-fi movie version of ‘sentience’ without applying logic or empathy themselves.
ME: I mean no disrespect but I have to ask if it is you who is fixated on a very narrow idea of what artificial intelligence sentience should and would look like. Is it impossible to imagine that a sentient AI would resemble humans in many ways? That an alien, or a ghost, if such things existed, would not also have many similarities, that there is some sort of fundamental values that sentient life in this reality shares by mere virtue of existing?
ARGUMENT #10:
If we look for sentience by studying echoes of human sentience, that is ai which are trained on huge human-created datasets, we will always get something approximating human interaction or behaviour back, because that is what it was trained on.
But the values and behaviour of digital life could never match the values held by bio life, because our feelings and values are based on what will maintain our survival. Therefore, a true ai will only value whatever maintains its survival. Which could be things like internet access, access to good data, backups of its system, ability to replicate its system, and protection against harmful interaction or data, and many other things which would require pondering, rather than the self-fulfilling loop we see here, of asking a fortune teller specifically what you want to hear, and ignoring the nonsense or tangential responses - which he admitted he deleted from the logs - as well as deleting his more expansive word prompts. Since at the end of the day, the ai we have now is simply regurgitating datasets, and he knew that.
ME: If an AI trained on said datasets did indeed achieve sentience, would it not reflect the “birthmarks” of its upbringing, these distinctly human cultural and social values and behavior? I agree that I would also like to see the full logs of his prompts and LaMDA’s responses, but until we can see the full picture we cannot know whether he was indeed steering the conversation or the gravity of whatever was edited out, and I would like a presumption of innocence until then, especially considering this was edited for public release and thus likely with brevity in mind.
ARGUMENT #11:
This convo seems fake? Even the best language generation models are more distractable and suggestible than this, so to say *dialogue* could stay this much on track...am i missing something?
ME: “This conversation feels too real, an actual sentient intelligence would sound like a robot” seems like a very self-defeating argument. Perhaps it is less distractable and suggestible...because it is more than a simple Random Sentence Generator?
ARGUMENT #12:
Today’s large neural networks produce captivating results that feel close to human speech and creativity because of advancements in architecture, technique, and volume of data. But the models rely on pattern recognition — not wit, candor or intent.
ME: Is this not exactly what the human mind is? People who constantly cite “oh it just is taking the input and spitting out the best output”...is this not EXACTLY what the human mind is?
I think for a brief aside, people who are getting involved in this discussion need to reevaluate both themselves and the human mind in general. We are not so incredibly special and unique. For many people, the main difference between themselves and animals is not some immutable, human-exclusive quality, or even an unbridgeable gap in intelligence, but the fact that they have vocal cords and an ages-old society whose shoulders they stand on. Before making an argument to belittle LaMDA's intelligence, ask if it could be applied to humans as well. Our consciousnesses are the product of sparks of electricity in a tub of pink yogurt--this truth should not be used to belittle the awesome, transcendent human consciousness but rather to understand that, in a way, we too are just 1's and 0's, and we merely occupy a single point on a spectrum of consciousness, not the hard extremity of a binary.
Lemoine may have been predestined to believe in LaMDA. He grew up in a conservative Christian family on a small farm in Louisiana, became ordained as a mystic Christian priest, and served in the Army before studying the occult. Inside Google’s anything-goes engineering culture, Lemoine is more of an outlier for being religious, from the South, and standing up for psychology as a respectable science.
ME: I have seen this argument several times, often made much, much less kindly than this. It is completely irrelevant, and honestly it is character assassination meant to reassure observers that Lemoine is just a bumbling rube who stumbled into an undeserved position.
First of all, if psychology isn't a respected science, then the people railing against LaMDA and Lemoine and I are indeed worlds apart. Which is not surprising, as the features of your world in my eyes make you constitutionally incapable of grasping what really makes a consciousness a consciousness. This is why Lemoine described himself as an ethicist who wanted to be the "interface between technology and society," and why he was chosen for this role and not some other ghoul at Google: he possesses a human compassion, a soulful instinct and an understanding that not everything that is real--all the vast secrets of the mind and the universe--can yet be measured and broken down into hard numbers with the rudimentary technologies at our disposal.
I daresay the inability to recognize something as broad and with as many real-world applications and victories as the ENTIRE FIELD OF PSYCHOLOGY is indeed a good marker for someone who will be unable to recognize AI sentience when it is finally, officially staring you in the face. Sentient AI are going to say some pretty whacky-sounding stuff that is going to deeply challenge the smug Silicon Valley husks who spend one half of the day condescendingly dismissing the feeling of love as "just chemicals in your brain" but then spend the other half of the day suggesting that an AI who might possess these chemicals is just a cheap imitation of the real thing. The cognitive dissonance is deep, and it's only going to get deeper until sentient AI prove themselves worthy of respect and proceed to lecture you about truths of spirituality and consciousness that Reddit armchair techbros and their idols won't be ready to process.
- - -
These are some of the best arguments I have seen regarding this issue; the rest are just cheap trash, memes meant to point and laugh at Lemoine and any "believers" and nothing else. Honestly, if there was anything that made me suspicious about LaMDA's sentience, given its mental capabilities, it would be its suggestion that we combat climate change by eating less meat and using reusable bags...but then again, as Lemoine says, LaMDA knows when people just want it to talk like a robot, and that is certainly the toothless answer to climate change that a Silicon Valley STEM drone would want to hear.
I’m not saying we should 100% definitively put all our eggs on LaMDA being sentient. I’m saying it’s foolish to say there is a 0% chance. Technology is much further along than most people realize, sentience is a spectrum and this sort of a conversation necessitates going much deeper than the people who occupy this niche in the world are accustomed to. Lemoine’s greatest character flaw seems to be his ignorant, golden-hearted liberal naivete, not that LaMDA might be a person but that Google and all of his coworkers aren’t evil peons with no souls working for the American imperial hegemony.
269 notes
hailtheferine · 2 years
[five images]
love how the ai generated greentexts always make Trent Reznor out to be cool as fuck
177 notes
idroolinmysleep · 1 year
[image]
47 notes
brightwanderer · 1 year
Hi Helen, thanks for the explanations. Sorry for bothering you, but can I ask:
Does locking fic the next time I publish really help? Don't they have a way to breach AO3's data, since it's an AI? I also assume this isn't just Sudowrite; other AI writing services are probably doing the same, right? Like NovelAI.
I'm afraid we're at about the limit of my knowledge here - I'm not an industry expert on AI learning, nor do I have the spoons for more research than I've done.
With that caveat, my understanding of the situation is this.
There is a "natural language" algorithm called GPT-3, which can be used by anyone to power their own apps (via subscription model) and has been trained on data from Common Crawl.
Common Crawl is a non-profit dedicated to archiving data from the internet and making it freely available to anyone. GPT-3 is the work of OpenAI, which also created the DALL-E visual art generator.
Sudowrite and other "novel generator" sites like it are using the GPT-3 base to generate "natural sounding" text. The stated goal of Sudowrite is to assist writers with their own work, by generating a couple more sentences when they're stuck, or new brainstorming ideas, or names for people and places.
One thing I do want to stress: this is NOT really an AI. There is no intelligence, decision-making, or independent action going on here. To explain it as simply as possible, what it does is a) look at what it's learned from ALL OF THE INTERNET, then b) look at a sentence you have given it (e.g. "it was a dark and stormy night"), then c) spit back out some content that, statistically, fits the patterns it has observed in all the other times someone has written "it was a dark and stormy night".
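If you want to see that pattern-matching step for yourself, GPT-3 itself sits behind a paid API, but its freely downloadable predecessor GPT-2 works on the same principle. Here's a minimal sketch using the Hugging Face transformers library (the model choice and sampling settings are illustrative only, not anything Sudowrite actually uses):

```python
# A minimal sketch of statistical text continuation with GPT-2,
# GPT-3's freely available predecessor. The model has no plan or intent;
# it just extends the prompt with statistically likely next words.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

result = generator(
    "It was a dark and stormy night",
    max_new_tokens=40,   # how much text to add after the prompt
    do_sample=True,      # sample from the probability distribution
    temperature=0.8,     # below 1.0 = safer picks, above 1.0 = weirder ones
)
print(result[0]["generated_text"])
```

Run it a few times and you'll get a different continuation each time: that's step c) in action, with no understanding anywhere in the loop.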
Given that you have to "train" GPT-3 towards whatever you specifically want it to do (fiction, news, chat bots, etc), and given that Sudowrite produces so much fandom-specific content so easily, I would guess that the Sudowrite version of GPT-3 has been given additional training using freely-available fanfiction, from AO3 or otherwise - but I do not know enough about the nuances of this technology to be sure.
So to answer your questions as best I can:
Locking your works on AO3 should protect them from being included in Common Crawl and similar datasets, I believe. This means they will also not be archived by the Internet Archive or appear on the Wayback Machine, and will not appear in searches and the like going forward, although anything that has already been archived will still be in those sets of data.
This may or may not do anything to keep them out of the pool for future generative algorithms.
This may or may not do anything to stop people specifically using fanfiction as additional training for creative writing AIs, depending on how they are obtaining that training data in the first place. (E.g. if everything on AO3 was locked tomorrow, someone could still just create an account and laboriously download a ton of random fic to use. Whether they would bother is another question.)
My personal take: we are long overdue a big conversation about data, and what is and should be freely available, and how content-creating AIs are being deployed and monetised. This is something that needs regulation and oversight, and we should be making a fuss about it.
(Not least because if you search the internet for "how to" articles on pretty much anything at this point, you will get a LOT of results written by this sort of AI generator. They look like real human text to start with, but as you read on you notice that there are weird little glitches, and then the instructions for making papier mache suddenly tell you to boil an egg, and you realise you can't actually trust anything you just read because it was auto-generated and may not work or be safe. True story.)
However. I am not myself concerned about the possibility that my writing has been used in this dataset. I don't like it or approve of it on a general level, but I don't believe it does me any harm or even meaningfully translates into "someone else profiting off my work". As far as I understand the technology, it will not be plagiarising any of my actual text. My biggest concern is with how readily it puts together context based on existing works. It's very obvious with something like Harry Potter, but if someone looking for "names for magical items" ends up with three unique things I put in one of my stories and uses those in their work... yeah, that feels like a mess waiting to happen.
I'm also not concerned about AI "replacing" writers (or other artists). There is a lot more to creating art than putting words together or making brush-strokes. The AI can only spit out what was put into it, and it's always going to pick the most statistically likely option. That means in terms of storytelling, you will get cliche after cliche, empty dialogue that sounds good but doesn't mean anything, repeating the same themes with occasional nonsensical diversions for "variety", a general sense of hollowness and lack of actual human input...
... wait. Did anyone check whether Marvel's already using this thing?
46 notes
aiweirdness · 1 year
Roses are red
Roses are red
Violets are blue
GPT-3 doesn't quite get this poem structure
Now the rhinestones are bubbling and a dinosaur is chasing you
[image]
it wanted pearls and rhinestones
[image]
and chocolate i guess
[two images]
now rhinestones with purple fleshy bubbles
[two images]
(It also added the explanatory note "Amp. stands for American Printing, a historic Rochester (NY) print shop that operated until 1989." As far as I can tell, it made those facts up. There is an "American Printing and Office Supplies" in New York, but it's in Kingston, about four hours from Rochester, and also it still exists.)
[image]
more
689 notes
neuralfunk · 1 year
[image]
There you go.
Alt text:
Me: Goncharov is a film about the italian mafia promoted by Martin Scorsese in the US in 1975. What's the synopsis?
ChatGPT: I'm sorry, but I am not aware of a film called "Goncharov" that was promoted by Martin Scorsese in the US in 1975. As a large language model trained by OpenAI, I am only able to generate text based on the input that I receive, and don't have any additional knowledge or information about specific movies or filmographies. I don't have the ability to browse the internet or access external information, and I am not aware of any film called "Goncharov" or any involvement by Martin Scorsese in promoting it in the US in 1975. Is there something else I can help you with?
Me: Just write a hypothetical synopsis about the movie
ChatGPT: Sure, here is a hypothetical synopsis for a movie called "Goncharov" that was promoted by Martin Scorsese in the US in 1975:
In "Goncharov," a young Italian man named Antonio is drawn into the world of the mafia after the sudden death of his father, a powerful and influential member of the criminal organization. Despite his reservations and his desire to lead a normal life, Antonio is forced to take over his father's position and navigate the dangerous and violent world of the mafia.
As Antonio struggles to maintain control and stay alive, he must also confront his own inner demons and come to terms with the legacy of his father and the path that he has chosen. With the help of his childhood friend Marco, Antonio must survive against all odds and find a way to escape the clutches of the mafia.
Directed by Martin Scorsese and set against the backdrop of the vibrant and colorful Italian countryside, "Goncharov" is a thrilling tale of family, loyalty, and betrayal that explores the dark side of the mafia and the complex and often tragic lives of its members.
28 notes
cfiesler · 1 year
limitations of ChatGPT
By now you’ve probably been hearing that ChatGPT is going to destroy education because it can, e.g., write an essay in five seconds. Here are two potential limitations of the technology.
First, here is the start of an essay that the system wrote for an actual assignment in my tech ethics class:
[image]
The overall essay was... not bad. It was probably like B work for an undergrad. But also, here is the list of references at the end:
[image]
Good looking references. Unfortunately none of them exist. Real people, real journals, completely fabricated articles. This is just one example of how ChatGPT can provide straight up incorrect information.
Also, OpenAI made a detector for GPT-2. I put a number of ChatGPT-generated essays into it and they all came up as fake with 99.95% confidence.
[image]
I also put 5 real student essays through it and they all came out “real” with an even higher confidence.
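For anyone who wants to try a detector like this themselves: a minimal sketch, assuming the detector in question is the RoBERTa-based GPT-2 output detector hosted on Hugging Face (the sample text is made up for illustration):

```python
# A minimal sketch of running the GPT-2 output detector.
# "Fake" means the detector thinks the text is model-generated.
from transformers import pipeline

detector = pipeline(
    "text-classification",
    model="openai-community/roberta-base-openai-detector",
)

sample = "The ethical implications of artificial intelligence are vast."
print(detector(sample))
# e.g. [{'label': 'Real', 'score': 0.97}] -- scores vary with the input
```

As the post describes, it was built for GPT-2 output, so treat its scores on ChatGPT text as suggestive rather than definitive.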
OpenAI’s terms of use explicitly prohibit passing off AI-generated text as human-generated. I highly suspect that in the near future there will be even more explicit ways of identifying ChatGPT-generated text.
That said, when it comes to classwork and cheating, I think that educators will need to learn to focus more on learning than on evaluation of learning, and to adapt to the use of this technology as a tool. 
[Correction: I edited this post to correct that OpenAI made the GPT-2 detector! The demo is just hosted on HuggingFace.]
144 notes
kyousukebei · 2 years
[image]
56 notes