#openai
softwaring · 3 months
Text
[two images]
this reply kills me 😭 article link
66K notes
8pxl · 1 month
Text
PSA: Tumblr/WordPress is preparing to start selling our user data to Midjourney and OpenAI.
you have to MANUALLY opt out of it as well.
[four images]
to opt out on desktop, click your blog ➡️ blog settings ➡️ scroll til you see visibility options and it’ll be the last option to toggle.
to opt out on mobile, click your blog ➡️ scroll then click visibility ➡️ toggle opt out option.
if you’ve already opted out of showing up in google searches, it’s preselected for you. if you don’t have the option available, update your app or close your browser/refresh a few times. important to note you also have to opt out for each blog you own separately, so if you’d like to prevent AI scraping your blog i’d really recommend taking the time to opt out. (source)
31K notes
jv · 1 month
Text
And about the AI opt-out setting Tumblr will deploy tomorrow, I'll just leave you with the words of the person in charge of the heist:
Andrew Spittle, Automattic's head of AI, replied: "We will notify existing partners on a regular basis about anyone who's opted out since the last time we provided a list. [...] I believe partners will honor this based on our conversations with them to this point."
So Tumblr will notify OpenAI and Midjourney that you wish your posts not to be used, and it's entirely up to them to comply or not. And as we know, we are talking about some of the least ethical companies on the internet (which honestly is kind of a feat), so that cute little setting will more than probably do nothing, if that.
7K notes
Text
FYI artists and writers: some info regarding tumblr's new "third-party sharing" (aka selling your content to OpenAI and Midjourney)
You may have already seen the post by @staff regarding third-party sharing and how to opt out. You may have also already seen various news articles discussing the matter.
But here's a little further clarity re some questions I had, and you may too. Caveat: Not all of this is on official tumblr pages, so it's possible things may change.
(1) "I heard they already have access to my data and it doesn't really matter if I opt out"
From the 404 article:
A new FAQ section we reviewed, titled “What happens when you opt out?”, states: “If you opt out from the start, we will block crawlers from accessing your content by adding your site on a disallowed list. If you change your mind later, we also plan to update any partners about people who newly opt-out and ask that their content be removed from past sources and future training.”
So please, go click that opt-out button.
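(For the technically curious: a crawler "disallowed list" like the one the FAQ describes is conventionally published as a robots.txt file, which well-behaved crawlers check before fetching anything. Here's a rough sketch of how you could check a blog's status yourself with Python's standard library. The blog URL is a placeholder; GPTBot and CCBot are OpenAI's and Common Crawl's documented crawler names.)

```python
from urllib.robotparser import RobotFileParser

# A crawler "disallowed list" is conventionally published as robots.txt;
# well-behaved crawlers fetch it and skip whatever it disallows for them.
BLOG = "https://example-blog.tumblr.com"  # placeholder blog URL

rp = RobotFileParser(f"{BLOG}/robots.txt")
rp.read()  # download and parse the live robots.txt

# GPTBot is OpenAI's documented crawler; CCBot is Common Crawl's.
for agent in ("GPTBot", "CCBot", "*"):
    verdict = "allowed" if rp.can_fetch(agent, f"{BLOG}/") else "blocked"
    print(f"{agent}: {verdict}")
```

Keep in mind a robots.txt only asks crawlers to stay out; honoring it is up to the crawler, which is exactly why the opt-out plus contractual promises matter.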
(2) Some future user: "I've been away from tumblr for months, and I just heard about all this. I didn't opt out before, so does it make a difference anymore?"
Another internal document shows that, on February 23, an employee asked in a staff-only thread, “Do we have assurances that if a user opts out of their data being shared with third parties that our existing data partners will be notified of such a change and remove their data?” Andrew Spittle, Automattic’s head of AI replied: “We will notify existing partners on a regular basis about anyone who's opted out since the last time we provided a list. I want this to be an ongoing process where we regularly advocate for past content to be excluded based on current preferences. We will ask that content be deleted and removed from any future training runs. I believe partners will honor this based on our conversations with them to this point. I don't think they gain much overall by retaining it.”
It should make a difference! Go click that button.
(3) "I opted out, but my art posts have been reblogged by so many people, and I don't know if they all opted out. What does that mean for my stuff?"
This answer is actually on the support page for the toggle:
This option will prevent your blog's content, even when reblogged, from being shared with our licensed network of content and research partners, including those that train AI models.
And some further clarification by the COO and a product manager:
zingring: A couple people from work have reached out to let me know that yes, it applies to reblogs of "don't scrape" content. If you opt out, your content is opted out, even in reblog form.
cyle: yep, for reblogs, we're taking it so far as "if anybody in the reblog trail has opted out, all of the content in that reblog will be opted out", when a reblog could be scraped/shared.
So not only your reblogged posts, but anyone who contributed in a reblog (such as posts where someone has been inspired to draw fanart of the OP) will presumably be protected by your opt-out. (A good reason to opt out even if you yourself are not a creator.)
Furthermore, if you, the OP, were offline and didn't know about the opt-out: if someone who contributed to a reblog is opted out, then your original work is also protected. (Which makes it very tempting to contribute "scrapeable content" now whenever I reblog from an abandoned/disused blog...) There's a sketch of this rule in code just below.
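In code terms, the rule cyle describes is simple. A minimal sketch, with made-up data shapes rather than Tumblr's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class TrailEntry:
    blog: str
    opted_out: bool  # has "Prevent third-party sharing" enabled

def reblog_is_scrapeable(trail: list[TrailEntry]) -> bool:
    # Per cyle above: "if anybody in the reblog trail has opted out,
    # all of the content in that reblog will be opted out".
    return not any(entry.opted_out for entry in trail)

# Example: the OP never opted out, but one contributor did, so the
# whole reblog (including the OP's original work) is protected.
trail = [TrailEntry("abandoned-op", False), TrailEntry("fanartist", True)]
print(reblog_is_scrapeable(trail))  # False
```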
(4) "What about deleted blogs? They can't opt out!"
I was told by someone (not official) that he read "deleted blogs are all opted-out by default". However, he didn't recall the source, and I can't find it, so I can't guarantee that info. If I get more details - like if/when tumblr puts up that FAQ as reported in the 404 article - I will add it here as soon as I can.
Edit: tumblr has updated their help page for the option to opt out of third-party sharing! It now states:
The content which will not be shared with our licensed network of content and research partners, including those that train AI models, includes:
• Posts and reblogs of posts from blogs who have enabled the "Prevent third-party sharing" option.
• Posts and reblogs of posts from deleted blogs.
• Posts and reblogs of posts from password-protected blogs.
• Posts and reblogs of posts from explicit blogs.
• Posts and reblogs of posts from suspended/deactivated blogs.
• Private posts.
• Drafts.
• Messages.
• Asks and submissions which have not been publicly posted.
• Post+ subscriber-only posts.
• Explicit posts.
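If it helps to see that list as one rule, here's a rough sketch in Python. The field names are invented for illustration; this is my reading of the help page, not Tumblr's code:

```python
# Blog-level states whose posts, and reblogs of them, are excluded.
EXCLUDED_BLOG_STATES = {
    "prevent_third_party_sharing",  # the opt-out toggle
    "deleted",
    "password_protected",
    "explicit",
    "suspended_or_deactivated",
}

# Post-level categories that are excluded regardless of blog state.
EXCLUDED_POST_KINDS = {
    "private", "draft", "message", "unposted_ask_or_submission",
    "post_plus_subscriber_only", "explicit",
}

def post_is_shared(blog_state: str, post_kind: str) -> bool:
    """True only if neither the blog nor the post falls in an excluded bucket."""
    return (blog_state not in EXCLUDED_BLOG_STATES
            and post_kind not in EXCLUDED_POST_KINDS)

print(post_is_shared("active", "public"))   # True
print(post_is_shared("deleted", "public"))  # False
```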
So no need to worry about your old deleted blogs that still have reblogs floating around. *\o/*
But for your existing blogs, please use the opt out option. And a reminder of how to opt out, under the cut:
The opt-out toggle is in Blog Settings, and please note you need to do it for each one of your blogs / sideblogs.
On dashboard, the toggle is at https://www.tumblr.com/settings/blog/blogname [replace "blogname" as applicable] down by Visibility:
[image]
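And since that settings URL follows a fixed pattern, a quick snippet to print the settings page for every blog you own (blog names below are placeholders):

```python
# The documented settings-URL pattern; one page per blog/sideblog.
MY_BLOGS = ["main-blog", "art-sideblog", "fandom-sideblog"]  # placeholders

for name in MY_BLOGS:
    print(f"https://www.tumblr.com/settings/blog/{name}")
```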
For mobile, you need the most recent update of the app. (Android version 33.4.1.100, iOS version 33.4.) Then go to your blog tab (the little person icon), and then the gear icon for Settings, then click Visibility.
[image]
Again, if you have a sideblog, go back to the blog tab, switch to it, and go to settings again. Repeat as necessary.
If you do not have access to the newest version of the app for whatever reason, you can also log into tumblr in your mobile browser. Same URL as per desktop above, same location.
Note you do not need to change settings in both desktop and the app, just one is fine.
I hope this helps!
3K notes
phoenixyfriend · 3 months
Text
Also, did you know that the reason the NYT can sue OpenAI with an expectation of success is that the AI cites its sources about as well as James Somerton?
It regurgitates long sections of paywalled NYT articles verbatim, and then cites them wrong, if at all. It's not just a matter of stealing traffic and clicks, etc., but also illegal redistribution and damaging the NYT's reputation for journalistic integrity by misquoting or citing incorrectly.
OpenAI cannot claim fair use under these circumstances lmao.
3K notes
sreegs · 1 month
Text
I guess it's real:
archive.is is failing to get around the paywall, sorry. You can either register to read it for free, or read the PDF here. Feel free to reblog with a more permanent link: https://www.swisstransfer.com/d/e0b6eaf6-07d5-4a1e-a90c-50efd4f07157
Biggest takeaways are:
It's happening
There will be a switch to opt out
1K notes
Text
One assessment suggests that ChatGPT, the chatbot created by OpenAI in San Francisco, California, is already consuming the energy of 33,000 homes. It’s estimated that a search driven by generative AI uses four to five times the energy of a conventional web search. Within years, large AI systems are likely to need as much energy as entire nations.

And it’s not just energy. Generative AI systems need enormous amounts of fresh water to cool their processors and generate electricity. In West Des Moines, Iowa, a giant data-centre cluster serves OpenAI’s most advanced model, GPT-4. A lawsuit by local residents revealed that in July 2022, the month before OpenAI finished training the model, the cluster used about 6% of the district’s water.

As Google and Microsoft prepared their Bard and Bing large language models, both had major spikes in water use — increases of 20% and 34%, respectively, in one year, according to the companies’ environmental reports. One preprint suggests that, globally, the demand for water for AI could be half that of the United Kingdom by 2027.

In another, Facebook AI researchers called the environmental effects of the industry’s pursuit of scale the “elephant in the room”. Rather than pipe-dream technologies, we need pragmatic actions to limit AI’s ecological impacts now.
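(Rough arithmetic on the scale of those claims; the per-household and per-search figures below are assumed round numbers for illustration, not sourced values:)

```python
# Back-of-envelope: what "the energy of 33,000 homes" means in absolute terms.
HOMES = 33_000
KWH_PER_HOME_PER_YEAR = 10_500  # assumed average US household consumption

total_gwh_per_year = HOMES * KWH_PER_HOME_PER_YEAR / 1_000_000  # kWh -> GWh
print(f"~{total_gwh_per_year:.0f} GWh/year")  # ~347 GWh/year

# The quoted 4-5x multiplier, against an assumed ~0.3 Wh per
# conventional web search:
per_search_wh = 0.3
print(f"AI-driven search: ~{4 * per_search_wh:.1f} to {5 * per_search_wh:.1f} Wh per query")
```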
1K notes
camembertlythere · 2 years
Text
[two images]
21K notes
clini-calia · 2 years
Text
[image]
18K notes
kawaoneechan · 1 month
Text
[image]
591 notes
Text
The moral injury of having your work enshittified
[image]
This Monday (November 27), I'm appearing at the Toronto Metro Reference Library with Facebook whistleblower Frances Haugen.
On November 29, I'm at NYC's Strand Books with my novel The Lost Cause, a solarpunk tale of hope and danger that Rebecca Solnit called "completely delightful."
[image]
This week, I wrote about how the Great Enshittening – in which all the digital services we rely on become unusable, extractive piles of shit – did not result from the decay of the morals of tech company leadership, but rather, from the collapse of the forces that discipline corporate wrongdoing:
https://locusmag.com/2023/11/commentary-by-cory-doctorow-dont-be-evil/
The failure to enforce competition law allowed a few companies to buy out their rivals, or sell goods below cost until their rivals collapsed, or bribe key parts of their supply chain not to allow rivals to participate:
https://www.engadget.com/google-reportedly-pays-apple-36-percent-of-ad-search-revenues-from-safari-191730783.html
The resulting concentration of the tech sector meant that the surviving firms were stupendously wealthy, and cozy enough that they could agree on a common legislative agenda. That regulatory capture has allowed tech companies to violate labor, privacy and consumer protection laws by arguing that the law doesn't apply when you use an app to violate it:
https://pluralistic.net/2023/04/12/algorithmic-wage-discrimination/#fishers-of-men
But the regulatory capture isn't just about preventing regulation: it's also about creating regulation – laws that make it illegal to reverse-engineer, scrape, and otherwise mod, hack or reconfigure existing services to claw back value that has been taken away from users and business customers. This gives rise to Jay Freeman's perfectly named doctrine of "felony contempt of business-model," in which it is illegal to use your own property in ways that anger the shareholders of the company that sold it to you:
https://pluralistic.net/2023/11/09/lead-me-not-into-temptation/#chamberlain
Undisciplined by the threat of competition, regulation, or unilateral modification by users, companies are free to enshittify their products. But what does that actually look like? I say that enshittification is always precipitated by a lost argument.
It starts when someone around a board-room table proposes doing something that's bad for users but good for the company. If the company faces the discipline of competition, regulation or self-help measures, then the workers who are disgusted by this course of action can say, "I think doing this would be gross, and what's more, it's going to make the company poorer," and so they win the argument.
But when you take away that discipline, the argument gets reduced to, "Don't do this because it would make me ashamed to work here, even though it will make the company richer." Money talks, bullshit walks. Let the enshittification begin!
https://pluralistic.net/2023/11/22/who-wins-the-argument/#corporations-are-people-my-friend
But why do workers care at all? That's where phrases like "don't be evil" come into the picture. Until very recently, tech workers participated in one of history's tightest labor markets, in which multiple companies with gigantic war-chests bid on their labor. Even low-level employees routinely fielded calls from recruiters who dangled offers of higher salaries and larger stock grants if they would jump ship for a company's rival.
Employers built "campuses" filled with lavish perks: massages, sports facilities, daycare, gourmet cafeterias. They offered workers generous benefit packages, including exotic health benefits like having your eggs frozen so you could delay fertility while offsetting the risks normally associated with conceiving at a later age.
But all of this was a transparent ruse: the business-case for free meals, gyms, dry-cleaning, catering and massages was to keep workers at their laptops for 10, 12, or even 16 hours per day. That egg-freezing perk wasn't about helping workers plan their families: it was about thumbing the scales in favor of working through your entire twenties and thirties without taking any parental leave.
In other words, tech employers valued their employees as a means to an end: they wanted to get the best geeks on the payroll and then work them like government mules. The perks and pay weren't the result of comradeship between management and labor: they were the result of the discipline of competition for labor.
This wasn't really a secret, of course. Big Tech workers are split into two camps: blue badges (salaried employees) and green badges (contractors). Whenever there is a slack labor market for a specific job or skill, it is converted from a blue badge job to a green badge job. Green badges don't get the food or the massages or the kombucha. They don't get stock or daycare. They don't get to freeze their eggs. They also work long hours, but they are incentivized by the fear of poverty.
Tech giants went to great lengths to shield blue badges from green badges – at some Google campuses, these workforces actually used different entrances and worked in different facilities or on different floors. Sometimes, green badge working hours would be staggered so that the armies of ragged clickworkers would not be lined up to badge in when their social betters swanned off the luxury bus and into their airy adult kindergartens.
But Big Tech worked hard to convince those blue badges that they were truly valued. Companies hosted regular town halls where employees could ask impertinent questions of their CEOs. They maintained freewheeling internal social media sites where techies could rail against corporate foolishness and make Dilbert references.
And they came up with mottoes.
Apple told its employees it was a sound environmental steward that cared about privacy. Apple also deliberately turned old devices into e-waste by shredding them to ensure that they wouldn't be repaired and compete with new devices:
https://pluralistic.net/2023/09/22/vin-locking/#thought-differently
And even as they were blocking Facebook's surveillance tools, they quietly built their own nonconsensual mass surveillance program and lied to customers about it:
https://pluralistic.net/2022/11/14/luxury-surveillance/#liar-liar
Facebook told employees they were on a "mission to connect every person in the world," but instead deliberately sowed discontent among its users and trapped them in silos that meant that anyone who left Facebook lost all their friends:
https://www.eff.org/deeplinks/2021/08/facebooks-secret-war-switching-costs
And Google promised its employees that they would not "be evil" if they worked at Google. For many googlers, that mattered. They wanted to do something good with their lives, and they had a choice about who they would work for. What's more, they did make things that were good. At their high points, Google Maps, Google Mail, and of course, Google Search were incredible.
My own life was totally transformed by Maps: I have very poor spatial sense, need to actually stop and think to tell my right from my left, and I spent much of my life at least a little lost, and often very lost. Google Maps is the cognitive prosthesis I needed to become someone who can go anywhere. I'm profoundly grateful to the people who built that service.
There's a name for the phenomenon in which you care so much about your job that you endure poor conditions and abuse: it's called "vocational awe," a term coined by Fobazi Ettarh:
https://www.inthelibrarywiththeleadpipe.org/2018/vocational-awe/
Ettarh uses the term to apply to traditionally low-waged workers like librarians, teachers and nurses. In our book Chokepoint Capitalism, Rebecca Giblin and I talked about how it applies to artists and other creative workers, too:
https://chokepointcapitalism.com/
But vocational awe is also omnipresent in tech. The grandiose claims to be on a mission to make the world a better place are not just puffery – they're a vital means of motivating workers who can easily quit their jobs and find a new one to put in 16-hour days. The massages and kombucha and egg-freezing are not framed as perks, but as logistical supports, provided so that techies on an important mission can pursue a shared social goal without being distracted by their balky, inconvenient meatsuits.
Steve Jobs was a master of instilling vocational awe. He was full of aphorisms like "we're here to make a dent in the universe, otherwise why even be here?" Or his infamous line to John Sculley, whom he lured away from Pepsi: "Do you want to sell sugar water for the rest of your life or come with me and change the world?"
Vocational awe cuts both ways. If your workforce actually believes in all that high-minded stuff, if they actually sacrifice their health, family lives and self-care to further the mission, they will defend it. That brings me back to enshittification, and the argument: "If we do this bad thing to the product I work on, it will make me hate myself."
The decline in market discipline for large tech companies has been accompanied by a decline in labor discipline, as the market for technical work grew less and less competitive. Since the dotcom collapse, the ability of tech giants to starve new entrants of market oxygen has shrunk techies' dreams.
Tech workers once dreamed of working for a big, unwieldy firm for a few years before setting out on their own to topple it with a startup. Then, the dream shrank: work for that big, clumsy firm for a few years, then do a fake startup that makes a fake product that is acquihired by your old employer, as an incredibly inefficient and roundabout way to get a raise and a bonus.
Then the dream shrank again: work for a big, ugly firm for life, but get those perks, the massages and the kombucha and the stock options and the gourmet cafeteria and the egg-freezing. Then it shrank again: work for Google for a while, but then get laid off along with 12,000 co-workers, just months after the company does a stock buyback that would cover all those salaries for the next 27 years:
https://pluralistic.net/2023/09/10/the-proletarianization-of-tech-workers/
Tech workers' power was fundamentally individual. In a tight labor market, tech workers could personally stand up to their bosses. They got "workplace democracy" by mouthing off at town hall meetings. They didn't have a union, and they thought they didn't need one. Of course, they did need one, because there were limits to individual power, even for the most in-demand workers, especially when it came to ghastly, long-running sexual abuse from high-ranking executives:
https://www.nytimes.com/2018/10/25/technology/google-sexual-harassment-andy-rubin.html
Today, atomized tech workers who are ordered to enshittify the products they take pride in are losing the argument. Workers who put in long hours, missed funerals and school plays and little league games and anniversaries and family vacations are being ordered to flush that sacrifice down the toilet to grind out a few basis points towards a KPI.
It's a form of moral injury, and it's palpable in the first-person accounts of former workers who've exited these large firms or the entire field. The viral "Reflecting on 18 years at Google," written by Ian "Hixie" Hickson, vibrates with it:
https://ln.hixie.ch/?start=1700627373
Hixie describes the sense of mission he brought to his job, the workplace democracy he experienced as employees' views were both solicited and heeded. He describes the positive contributions he was able to make to a commons of technical standards that rippled out beyond Google – and then, he says, "Google's culture eroded":
Decisions went from being made for the benefit of users, to the benefit of Google, to the benefit of whoever was making the decision.
In other words, techies started losing the argument. Layoffs weakened worker power – not just to defend their own interests, but to defend the users' interests. Worker power is always about more than workers – think of how the 2019 LA teachers' strike won greenspace for every school, a ban on immigration sweeps of students' parents at the school gates, and other community benefits:
https://pluralistic.net/2023/04/23/a-collective-bargain/
Hixie attributes the changes to a change in leadership, but I respectfully disagree. Hixie points to the original shareholder letter from the Google founders, in which they informed investors contemplating their IPO that they were retaining a controlling interest in the company's governance so that they could ignore their shareholders' priorities in favor of a vision of Google as a positive force in the world:
https://abc.xyz/investor/founders-letters/ipo-letter/
Hixie says that the leadership that succeeded the founders lost sight of this vision – but the whole point of that letter is that the founders never fully ceded control to subsequent executive teams. Yes, those executive teams were accountable to the shareholders, but the largest block of voting shares were retained by the founders.
I don't think the enshittification of Google was due to a change in leadership – I think it was due to a change in discipline, the discipline imposed by competition, regulation and the threat of self-help measures. Take ads: when Google had to contend with one-click adblocker installation, it had to constantly balance the risk of making users so fed up that they googled "how do I block ads?" and then never saw another ad ever again.
But once Google seized the majority of the mobile market, it was able to funnel users into apps, and reverse-engineering an app is a felony (felony contempt of business-model) under Section 1201 of the Digital Millennium Copyright Act. An app is just a web-page wrapped in enough IP to make it a crime to install an ad-blocker.
And as Google acquired control over the browser market, it was likewise able to reduce the self-help measures available to browser users who found ads sufficiently obnoxious to trigger googling "how do I block ads?" The apotheosis of this is the yearslong campaign to block adblockers in Chrome, which the company has sworn it will finally do this coming June:
https://www.tumblr.com/tevruden/734352367416410112/you-have-until-june-to-dump-chrome
My contention here is not that Google's enshittification was precipitated by a change in personnel via the promotion of managers who have shitty ideas. Google's enshittification was precipitated by a change in discipline, as the negative consequences of heeding those shitty ideas were abolished thanks to monopoly.
This is bad news for people like me, who rely on services like Google Maps as cognitive prostheses. Elizabeth Laraki, one of the original Google Maps designers, has published a scorching critique of the latest GMaps design:
https://twitter.com/elizlaraki/status/1727351922254852182
Laraki calls out numerous enshittificatory design-choices that have left Maps screens covered in "crud" – multiple revenue-maximizing elements that come at the expense of usability, shifting value from users to Google.
What Laraki doesn't say is that these UI elements are auctioned off to merchants, which means that the business that gives Google the most money gets the greatest prominence in Maps, even if it's not the best merchant. That's a recurring motif in enshittified tech platforms, most notoriously Amazon, which makes $31b/year auctioning off top search placement to companies whose products aren't relevant enough to your query to command that position on their own:
https://pluralistic.net/2023/04/25/greedflation/#commissar-bezos
Enshittification begets enshittification. To succeed on Amazon, you must divert funds from product quality to auction placement, which means that the top results are the worst products:
https://pluralistic.net/2023/11/06/attention-rents/#consumer-welfare-queens
The exception is searches for Apple products: Apple and Amazon have a cozy arrangement that means that searches for Apple products are a timewarp back to the pre-enshittification Amazon, when the company worried enough about losing your business to heed the employees who objected to sacrificing search quality as part of a merchant extortion racket:
https://www.businessinsider.com/amazon-gives-apple-special-treatment-while-others-suffer-junk-ads-2023-11
Not every tech worker is a tech bro, in other words. Many workers care deeply about making your life better. But the microeconomics of the boardroom in a monopolized tech sector rewards the worst people and continuously promotes them. Forget the Peter Principle: tech is ruled by the Sam Principle.
As OpenAI went through four CEOs in a single week, lots of commentators remarked on Sam Altman's rise and fall and rise, but I only found one commentator who really had Altman's number. Writing in Today in Tabs, Rusty Foster nailed Altman to the wall:
https://www.todayintabs.com/p/defective-accelerationism
Altman's history goes like this: first, he founded a useless startup that raised $30m, only to be acquired and shuttered. Then Altman got a job running Y Combinator, where he somehow failed at taking huge tranches of equity from "every Stanford dropout with an idea for software to replace something Mommy used to do." After that, he founded OpenAI, a company that he claims to believe presents an existential risk to the entire human race – which he structured so incompetently that he was then forced out of it.
His reward for this string of farcical, mounting failures? He was put back in charge of the company he mis-structured despite his claimed belief that it will destroy the human race if not properly managed.
Altman's been around for a long time. He founded his startup in 2005. There've always been Sams – of both the Bankman-Fried varietal and the Altman genus – in tech. But they didn't get to run amok. They were disciplined by their competitors, regulators, users and workers. The collapse of competition led to an across-the-board collapse in all of those forms of discipline, revealing the executives for the mediocre sociopaths they always were, and exposing tech workers' vocational awe for the shabby trick it was from the start.
[image]
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2023/11/25/moral-injury/#enshittification
556 notes
gen-z-superheroes · 7 months
Text
[image]
"Stopncii.org is a free tool designed to support victims of Non-Consensual Intimate Image (NCII) abuse."
"Revenge Porn Helpline is a UK service supporting adults (aged 18+) who are experiencing intimate image abuse, also known as, revenge porn."
"Take It Down (ncmec.org) is for people who have images or videos of themselves nude, partially nude, or in sexually explicit situations taken when they were under the age of 18 that they believe have been or will be shared online."
442 notes
odinsblog · 28 days
Text
[four video clips]
These are demo videos made from prompts on OpenAI’s Sora. It’s similar to how you would prompt ChatGPT and get text or a still image output, but with Sora the output is video. (source)
I cynically believe that by November, Sora will have perfected its algorithm enough to make the upcoming 2024 election online ads … very interesting.
And even after the terrible job that Facebook, Instagram and Twitter (never calling it x) did in the 2016 elections and Brexit, they somehow still decided to cut back on their departments that could at least theoretically curtail attempts at political disinformation.
Anyway, be forewarned: Social media manipulation and disinformation campaigns are very real things. Don’t believe everything you see on social media. Slightly similar A.I. deepfake technologies already exist. (example) (example) (example) (example)
211 notes
jv · 7 months
Text
Once again, US tech bros totally misinterpret how much your regular EU politician, or citizen, cares about their products:
Don't threaten us with a good time, Sama.
15K notes
blackkatdraws2 · 1 month
Text
They are my lifeline
[individual drawings below]
[four images]
210 notes
tyote · 1 month
Text
"Hey guys, Tumblr here. We're gonna sell your data to an AI company. But there'll be an opt-out. Except we've already been scraping data, and even scraped some stuff we shouldn't have, so... we're really hoping the people we're selling this to will honor the user's data preferences retroactively."
[six images]
Source: (requires an account but the article is completely free)
110 notes