#Data curation in us
orientalsolutions · 11 months
Text
Data curation and Data Mining company
We at Oriental Solutions, the best data curation and data mining company in Chennai, India, are primarily engaged in offering healthcare and medical transcription services to other companies. Apart from that, we also work in the automotive and consumer goods/services verticals.
Oriental Solutions is a leading company based in Chennai, India, specializing in comprehensive data solutions. With expertise in data curation, mining, and migration, we provide efficient document management and image curation services. Our skilled team excels in medical transcription, secretarial services, billing, and coding, ensuring accurate and reliable outcomes. Trust Oriental Solutions for top-quality data summarization and a wide range of medical-related services. Experience excellence and professionalism with us.
0 notes
overtake · 9 months
Text
red bull: daniel’s bad habits from mclaren are fixed, he did well in the sim, and his tire test was so good that we immediately knew he was ready to be in a car.
alpha tauri & yuki: daniel’s feedback on the car has been instrumental, and he performed so well in difficult race conditions even though he didn’t have any upgrades and had some bad luck.
rando twitter user who doesn’t have the data, didn’t know liam lawson’s name three weeks ago, and constantly insults the way red bull quickly drops drivers who don’t perform: they’re only letting him drive for pr even though he’s a washed up failure.
230 notes · View notes
short666bread · 1 year
Note
I'm so happy to see u around this site again. I hope things are getting easier or nicer for you and many happy memories are to come.
You can do it!
*and have new equipment !! :-)
I hope things are good for you as well!!! I’m doing my best
12 notes · View notes
snickerdoodlles · 11 months
Text
*pinches nose bridge* even if there weren't 6 degrees of separation between AO3 and generative AI, has anyone in this tag even considered that if it were possible for individuals to fuck up generative AI or their training datasets just by writing a/b/o fic, then fascists, bigots, or even just internet trolls could and would fuck it up worse with hate speech?
#honestly my first thought here is that you lot need to take a statistics class #you're not even data bombing??????? #ao3 is such a small fraction in the common crawl data even as a whole. it *can't* #and it's currently requesting to be left out of that anyways now hello?????? #not that that even fucking matters??????? #ao3 is not used to train AI #the *common crawl* was used in the first stage of training some AIs #which happened to include ao3 amongst the TERABYTES of information within it #and it's not like the common crawl is the only thing used to train these models?? #it's literally just the low quality bulk to beef up the training data #not to mention at that stage all the data is broken down into strings of integers #the LLM's not even learning *your* words it's literally just learning words #this is just the base stage training there's still 3 more stages of training for AIs after that #all of which use much more curated data #some of those stages might include common crawl data but…no? not really highly unlikely not really useful #it's a web scrape it's low quality by definition #like. Wikipedia is *right there* and much more useful to them #ao3 just isn't good training data #a/b/o isn't even 'corrupting' AI??????????? #it'd be corrupting AI if 'knot' was associated with it over like. rope knots or something #or if it had a predisposition to spitting out omegaverse unprompted #but the examples I've seen are just Literally people asking it to write omegaverse #…a LLM giving you exactly what you ask for for even a niche topic means it's acting exactly the way its trainers want it to #not that that's even my fucking point here #i get the frustrations behind AI training datasets but we as individuals can't fuck these things up and that's a *good* thing
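(For anyone wondering what "broken down into strings of integers" looks like in practice, here's a deliberately tiny sketch, with raw UTF-8 bytes standing in for a real tokenizer. Actual LLM tokenizers use byte-pair encoding, which merges frequent byte sequences into a larger vocabulary, but the principle is the same: the model only ever sees integer IDs, not "your words".)

```python
# Toy illustration: training text becomes a sequence of integer IDs.
# Real tokenizers (byte-pair encoding) start from bytes like this and then
# merge frequent sequences into larger vocabulary entries.

def byte_level_tokenize(text: str) -> list[int]:
    """Map a string to integers (here: its raw UTF-8 byte values)."""
    return list(text.encode("utf-8"))

def byte_level_detokenize(token_ids: list[int]) -> str:
    """Map the integer sequence back to text."""
    return bytes(token_ids).decode("utf-8")

sample = "alpha/beta/omega dynamics"
ids = byte_level_tokenize(sample)
print(ids[:10])                    # first ten token IDs, e.g. [97, 108, 112, ...]
print(byte_level_detokenize(ids))  # round-trips back to the original text
```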
4 notes · View notes
pierswife · 1 year
Text
God once again it feels so freeing to just have my small little corner for selfship aside from my main like... Idk but it's helped my mental so much keeping everything separate
6 notes · View notes
yuls-real-consort · 1 year
Text
YEAH SEX IS GOOD AND ALL BUT CONSIDER: STATIONERY SETS
I REST MY CASE
2 notes · View notes
eggydev · 1 year
Text
also, if I download my private data from tungl from my previous account (and if it contains all my posts marked as "remember" etc.), then I might end up deleting it after all. I'll have to see though...
1 note · View note
cowboyishbabe · 1 year
Text
The older i get the less social media is for me ig
1 note · View note
bisexualbaker · 7 months
Text
Why do people keep recommending Dreamwidth as a Tumblr alternative, when Dreamwidth and Tumblr are so different?
To be flat-out honest, it's because Dreamwidth has so many things that Tumblr users say they want, even if it's also lacking a lot of features that Tumblr users have come to love:
Dreamwidth has incredibly lax content hosting rules. I'd say that it's slightly more restrictive than AO3, but only just slightly, and only because AO3's abuse team has been so overwhelmed and over-worked. Otherwise, the hosting policies are pretty similar. You want to go nuts, show nuts? You can do that on Dreamwidth.
In fact, Dreamwidth is so serious about "go nuts, show nuts", it gave up the ability to accept transactions through PayPal in 2009 to protect our ability to do that. (It's also one reason why Dreamwidth doesn't have an app: Dreamwidth will never be beholden to Apple's content rules this way.)
Dreamwidth cares about your privacy; it doesn't sell your data, and barely collects any to begin with. As far as I'm aware, it only collects what it needs to run the site. The owners have also spoken out on behalf of internet privacy many times, and are prepared to put their money where their mouth is.
No ads. Ever. Period. They mean it. Dreamwidth is entirely user funded.
Posts viewed in reverse chronological order; no algorithm, opt-in or otherwise. No algorithm at all. No "For You" or "Suggested" page. You still entirely create and curate your own experience.
The ability to make posts that only your "mutuals", or even only a specific subset of your "mutuals", can see. Want to make a post that's only open to Bonnie, Clyde, Butch, and Cassidy? You can do that! Want to make a post that's only open to Bonnie and Butch, but Clyde and Cassidy can't see shit? You can do that, too!
The owners have forsworn NFTs and the blockchain in general. Not as big a worry now as it was even a year ago, but still good to know!
We are explicitly the customers of Dreamwidth. Dreamwidth wants to make us happy, so any changes they make (and they do make changes) are made with us in mind, and after exploring as many possibilities as they can.
Dreamwidth is very transparent about their policies and changes. If you want to know why they're making a specific change, or keeping or getting rid of a feature, they will tell you. You don't have to find out ten months later that they're locked into a contract to keep it for a year (cough cough Tumblr Live cough cough).
So those are some things that Tumblr users would probably love about Dreamwidth.
Another reason Dreamwidth keeps being recommended is that a significant portion of the Age 30+ crowd spent a lot of earlier fandom years on a site known as LiveJournal. Dreamwidth may not be much like Tumblr, but it started out as a code fork of LiveJournal, so it will be very familiar to anyone who spent any time there. Except better.
Finally, we're recommending Dreamwidth because some of the things that Tumblr users want are just... not going to happen on the web as it is now. Image hosting is the big one for this. Maybe in the future, data storage will be much cheaper, and Dreamwidth will be able to host as much as we all want for a pittance that a fraction of the userbase will happily pay for everyone, but right now that's just not possible.
Everywhere you want to go that hosts a lot of images will either be running lots of ads, selling your data, or both.
Dreamwidth knows how much it costs to host your data, and has budgeted for that. They are hosting within their means, within our means.
Dreamwidth is the closest thing we may ever get to AO3 as a social media platform. One of the co-owners is from, and still in, fandom; she knows our values, because they are also her values. It may as well be the Blogsite Of Our Own.
5K notes · View notes
ao3org · 3 months
Text
Update on "No Fandom" tags
AO3 Tag Wranglers recently began testing processes for updating canonical tags (tags that appear in the auto-complete and the filters) that don’t belong to any particular fandom (commonly known as No Fandom tags). We have already begun implementing some of the decisions made during the earliest discussions. By the time this post is published, you may have already noticed some changes we have made.

Several canonical tags are slated to be created or renamed, and we will also be adjusting the subtag and metatag relationships between some tags to better aid Archive users in filtering. Please keep in mind that many of these changes are large and require a lot of work to identify and attach relevant tags, so it will likely take some time to complete. We ask that you please be patient with us while we work!

While we will not be detailing every change we make under the new process, we will be making periodic posts with updates on those changes we believe are most likely to prove helpful for users looking to tag or filter works with the new or revised tags and to avoid confusion as to why changes are being made.
New Canonicals!
1. Edging
For a long while, there has been some confusion caused by the fact that we have a canonical for Edgeplay, but not for Edging, which has led to some unintentional mistagging and other challenges. Consequently, we will be creating a canonical tag for Edging with the format Orgasm Edging, and this new canonical tag will be subtagged to Orgasm Control. Relatedly, we will be reorganizing the Orgasm Control tag tree to allow for easier and more straightforward filtering, and renaming Edgeplay to add clarity. You’ll find more details regarding these changes in the Renamed and Reorganized Canonicals section below.
2. Generative AI
We have canonized three tags related to Generative AI. 
Created Using Generative AI
AI-Generated Text
AI-Generated Images 
All tags which make mention of specific Generative AI tools will be made a synonym of the most relevant AI-Generated canonical. Additionally, please note that AI-Generated Text and AI-Generated Images will be subtagged to Created Using Generative AI.

How to Use These To Filter For/Filter Out Works Tagged as Using Generative AI:

❌ Filtering Out: To filter out all works that use tags about being created with AI, add Created Using Generative AI to the “other tags to exclude” field in the works filter. This will also exclude works making use of the subtags AI-Generated Text and AI-Generated Images. If you wish to exclude either the Images or Text tags only, you can do so by excluding either AI-Generated Text or AI-Generated Images.
☑️ Filtering For: Add Created Using Generative AI to the “other tags to include” field in the works filter. This will also automatically include works making use of the subtags AI-Generated Text and AI-Generated Images. If you wish to filter for Images or Text only, you can do so by including either AI-Generated Text or AI-Generated Images.
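(If it helps to picture how the metatag/subtag behaviour works, here's a small illustrative sketch in plain Python — a toy tag hierarchy, not AO3's actual code: filtering on a metatag automatically covers every subtag beneath it.)

```python
# Toy model of metatag -> subtag filtering (illustrative only, not AO3's implementation).
METATAGS = {
    "Created Using Generative AI": {"AI-Generated Text", "AI-Generated Images"},
    "Orgasm Control": {"Orgasm Edging", "Orgasm Delay", "Orgasm Denial"},
}

def expand(tag):
    """A metatag stands for itself plus all of its subtags."""
    return {tag} | METATAGS.get(tag, set())

def matches(work_tags, include=frozenset(), exclude=frozenset()):
    """Keep a work only if it hits every included tag (or a subtag of it) and no excluded one."""
    included_ok = all(work_tags & expand(tag) for tag in include)
    excluded_ok = not any(work_tags & expand(tag) for tag in exclude)
    return included_ok and excluded_ok

work = {"AI-Generated Text", "Fluff"}
print(matches(work, exclude={"Created Using Generative AI"}))  # False: the subtag is caught by the metatag
print(matches(work, include={"Created Using Generative AI"}))  # True: including the metatag pulls in its subtags
```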
As a reminder, the use of these tools in the creation of works is not against AO3's ToS. These new tags exist purely to help folks curate their own experience on the Archive. If you would like to see more information about AO3’s policies regarding AI-generated works, please see our News post from May 2023 on AI and Data Scraping on the Archive.
Renamed and Reorganized Canonicals!
3. Edgeplay

As mentioned above, we will be renaming Edgeplay to clarify the tag's meaning, given that it is often confused for Edging. This tag will be decanonized and made a synonym of Edgeplay | High Risk BDSM Practices. It will be removed as a subtag of Sensation Play and be subtagged instead directly to BDSM.

Please note if you have made use of the Edgeplay tag on your works or wish to continue to use it in the future, you are still welcome to do so. The tag Edgeplay will be made a synonym of the new canonical, so all works tagged with Edgeplay now or in the future will fall under the new tag so that they’re still easy for users to find. If you have made it a favorite tag, it will be transferred automatically when we make this change.
4. Orgasm Delay/Denial

The tag Orgasm Delay/Denial will be decanonized and made a synonym of Orgasm Control to help limit confusion with the more specific Orgasm Delay and Orgasm Denial canonicals. Tags that are currently synonyms of Orgasm Delay/Denial are being analyzed and moved to Orgasm Control, Orgasm Delay, Orgasm Denial, or Orgasm Edging. The revised structure for this tree will feature Orgasm Control as the top-level metatag, with Orgasm Edging, Orgasm Delay, and Orgasm Denial as its subtags. So, if you wish to filter for all these tags at once, you can do so just by filtering for Orgasm Control.
5. Female Ejaculation

Female Ejaculation will be decanonized and made a synonym of Squirting and Vaginal Ejaculation. We hope this new phrasing will be more inclusive and clear, and will make the tag easier to find whether users are searching for Squirting or the previous canonical. All current synonyms of Female Ejaculation, including Squirting, will also be made synonyms of Squirting and Vaginal Ejaculation. You may continue to tag your works as suits your preferences, and we will make sure these tags are made synonyms of the new canonical so that your work can be found in the filters for it.
These are just some of the changes being implemented. While we won’t be announcing every change, you can expect similar updates in the future as we continue to work toward improving the Archive experience. So if you have an interest in the changes we’ll be making, you can follow us on Twitter @ao3_wranglers or keep an eye on this Tumblr for future announcements. Thank you for your patience and understanding as we continue our work!
(From time to time, ao3org posts announcements of recent or upcoming wrangling changes on behalf of the Tag Wrangling Committee.)
3K notes · View notes
not-terezi-pyrope · 5 months
Text
Often when I post an AI-neutral or AI-positive take on an anti-AI post I get blocked, so I wanted to make my own post to share my thoughts on "Nightshade", the new adversarial data poisoning attack that the Glaze people have come out with.
I've read the paper and here are my takeaways:
Firstly, this is not necessarily or primarily a tool for artists to "coat" their images like Glaze; in fact, Nightshade works best when applied to carefully selected "archetypal" images, ideally ones that were themselves generated with generative AI from a prompt for the generic concept to be attacked (which is what the authors did in their paper). Also, the image has to be explicitly paired with a specific text caption optimized to have the most impact, which would make it pretty annoying for individual artists to deploy.
While the intent of Nightshade is to have maximum impact with minimal data poisoning, in order to attack a large model there would have to be many thousands of poisoned samples in the training data. Obviously, if you have a webpage that you created specifically to host a massive gallery of poisoned images, that can be fairly easily blacklisted, so you'd have to have a lot of patience and resources in order to hide these well enough that they proliferate into the training datasets of major models.
The main use case for this, as suggested by the authors, is to protect specific copyrights. The example they use is that of Disney specifically releasing a lot of poisoned images of Mickey Mouse to prevent people from generating art of him. As a large company like Disney would be more likely to have the resources to seed Nightshade images at scale, this sounds like the most plausible large-scale use case to me, even if web artists could crowdsource some sort of similar generic campaign.
Either way, the optimal use case of "large organization repeatedly using generative AI models to create images, then running them through another resource-heavy AI model to corrupt them, then hiding them on the open web, to protect specific concepts and copyrights" doesn't sound like the big win for freedom of expression that people are going to pretend it is. This is the case for a lot of discussion around AI, and I wish people would stop flagwaving for corporate copyright protections, but whatever.
The panic about AI resource use in terms of power/water is mostly bunk (AI training is done once per large model, and in terms of industrial production processes, using a single airliner flight's worth of carbon output for an industrial model that can then be used indefinitely to do useful work seems like small fry in comparison to all the other nonsense that humanity wastes power on). However, given that deploying this at scale would be a huge compute sink, it's ironic to see anti-AI activists for whom that is a talking point hyping this up so much.
In terms of actual attack effectiveness: like Glaze, this once again relies on analysis of the feature space of current public models such as Stable Diffusion. This means that effectiveness is reduced on other models with differing architectures and training sets. However, also like Glaze, it looks like the overall "world feature space" that generative models fit to is generalisable enough that this attack will work across models.
That means that if this does get deployed at scale, it could definitely fuck with a lot of current systems. That said, once again, it'd likely have a bigger effect on indie and open source generation projects than on the massive corporate monoliths, who are probably working to secure proprietary data sets, like I believe Adobe Firefly did. I don't like how these attacks concentrate power upwards.
The generalisation of the attack doesn't mean that this can't be defended against, but it does mean that you'd likely need to invest in bespoke measures; e.g. specifically training a detector on a large dataset of Nightshade-poisoned images in order to filter them out, spending more time and labour curating your input dataset, or designing radically different architectures that don't produce a comparably similar virtual feature space. I.e. the effect of this being used at scale wouldn't eliminate "AI art", but it could potentially cause a headache for people all around and limit accessibility for hobbyists (although presumably curated datasets would trickle down eventually).
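(To make the first of those bespoke measures concrete, here's a rough sketch of the "train a detector, then filter the corpus" step — purely illustrative, with a stand-in detector; the real work would be training that classifier on known Nightshade-poisoned samples.)

```python
# Sketch of dataset curation with an assumed pretrained poison detector.
# The detector itself (a classifier scoring how likely an image/caption pair
# is poisoned) is the hard, expensive part and is only stubbed out here.

def filter_training_set(samples, poison_score, threshold=0.9):
    """Drop any (image_path, caption) pair the detector flags as likely poisoned."""
    return [s for s in samples if poison_score(s) < threshold]

# Usage with a dummy stand-in detector:
dummy_detector = lambda sample: 0.99 if "mickey" in sample[1].lower() else 0.01
clean = filter_training_set(
    [("img1.png", "a dog in a park"), ("img2.png", "mickey mouse fan art")],
    dummy_detector,
)
print(clean)  # only the unflagged sample remains
```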
All in all a bit of a dick move that will make things harder for people in general, but I suppose that's the point, and what people who want to deploy this at scale are aiming for. I suppose with public data scraping that sort of thing is fair game I guess.
Additionally, since making my first reply I've had a look at their website:
Used responsibly, Nightshade can help deter model trainers who disregard copyrights, opt-out lists, and do-not-scrape/robots.txt directives. It does not rely on the kindness of model trainers, but instead associates a small incremental price on each piece of data scraped and trained without authorization. Nightshade's goal is not to break models, but to increase the cost of training on unlicensed data, such that licensing images from their creators becomes a viable alternative.
Once again we see that the intended impact of Nightshade is not to eliminate generative AI but to make it infeasible for models to be created and trained without a corporate money-bag to pay licensing fees for guaranteed clean data. I generally feel that this focuses power upwards and is overall a bad move. If anything, this sort of model, where only large corporations can create and control AI tools, will do nothing to help counter the economic displacement without worker protection that is the real issue with AI systems deployment, but will exacerbate the problem of the benefits of those systems being more constrained to said large corporations.
Kinda sucks how that gets pushed through by lying to small artists about the importance of copyright law for their own small-scale works (ignoring the fact that processing derived metadata from web images is pretty damn clearly a fair use application).
1K notes · View notes
I've seen a couple of comments from someone about paying Tumblr for stuff that I want to address. I'm not going to mention the person who made these comments because I'm not trying to pick a fight, but I think they're worth talking about. The comments in question are: "you think user money is anything compared to advertisers" and, in a pinned post, telling people not to give money to Tumblr.
The thing is, user money can definitely be something compared to advertisers. There are multiple ways that an online company (in general, not just Tumblr) can make money, but let's break them down into three categories:
A. From the users - selling merchandise, subscriptions, premium packages, asking for donations, etc.
B. From advertisers - selling views and space on the platform to companies that use it to try and sell stuff to the users
C. From data - selling information about the user base to other companies that might use it in a whole bunch of dodgy and malicious ways, or just try to find better ways to sell stuff to us
All three of these are viable ways for a company to make money, and many companies use some combination of the above. What matters is what the company sees as their PRIMARY method of making money, because that is what drives their corporate decisions.
If none of the methods are making money, the company will shut down, and I don't want Tumblr to shut down - I like this hellsite.

If option B is what makes them the most money, then they will make business decisions that make the platform look better to advertisers, and this is likely to drive everything in a more algorithm-centric direction and give users fewer options to curate their own experience. If option C is what makes them the most money, then they will focus on features that enable privacy invasion and data harvesting. If option A is what makes them the most money, then they have to think about how to keep the users spending that money.

Now, option A doesn't always lead to good outcomes - in mobile/online games it can end up as loot box gambling add-ins and pay-to-win options, but thankfully Tumblr isn't the sort of site where loot box mechanics would make a lot of sense. Which makes it more likely they'll go the other option: delivering the features that users want to keep them coming back and paying for subscriptions.
I would much rather Tumblr goes for option A than options B or C because it means that Tumblr is more likely to put the user base first when making decisions instead of advertisers. We just need to show them that it's a viable option.
Tumblr is trying what online games have done for years - crabs and checkmarks are the equivalent of horse armour DLCs and cosmetics. They're trying to make the business work through microtransactions. If enough people spend a small amount, it can add up to a large amount of money. The point of crab day is to send a message to Tumblr that option A is viable so that they make the choice to focus on that. If everyone goes, "No, don't spend money on Tumblr, you're nothing compared to advertisers," then it becomes a self-fulfilling prophecy and Tumblr will have to go with options B or C if they want to keep making money.
I'm not giving Tumblr money out of naivety or because I think they're somehow deserving - I'm giving them my money because I would much rather they make money directly from me and give them an incentive to provide features I like, than by making the site worse so that they can exploit me.
2K notes · View notes
lackadaisycats · 4 months
Note
Hey Tracy! Have you heard about the new AI called Sora? Apparently it can now create 2D and 3D animations as well as hyper-realistic videos. I’ve been getting into animation and trying to improve my art for years since I was 7, but now seeing that anyone can create animation/works in just a matter of seconds by typing in a couple of words, it’s such a huge slap in the face to people who actually put the time and effort into their works and it’s so discouraging! And it has me worried about what’s going to happen next for artists and many others, as well. There’s already generated voices, generated works stolen from actual artists, generated music, and now this! It’s just so scary that it’s coming this far.
Yeah, I've seen it. And yeah, it feels like the universe has taken on a 'fuck you in particular' attitude toward artists the past few years. A lot of damage has already been done, and there are plenty of reasons for concern, but bear in mind that we don't know how this will play out yet. Be astute, be justifiably angry, but don't let despair take over. --------
One would expect that the promo clips that have been dropping lately represent some of the best of the best-looking stuff they've been able to produce. And it's only good-looking on an extremely superficial level. It's still riddled with problems if you spend even a moment observing. And I rather suspect, prior to a whole lot of frustrated iteration, most prompts are still going to get you camera-sickness inducing, wibbly-wobbly nonsense with a side of body horror.
Will the tech ultimately get 'smarter' than that and address the array of typical AI giveaways? Maybe. Probably, even. Does that mean it'll be viable in quite the way it's being marketed, more or less as a human-replacer? Well…
A lot of this is hype, and hype is meant to drive up the perceived value of the tech. Executives will rush to be early adopters without a lot of due diligence or forethought because grabbing it first like a dazzled chimp and holding it up like a prize ape-rock makes them look like bleeding-edge tech geniuses in their particular ecosystem. They do this because, in turn, that perceived value may make their company profile and valuations go up too, which makes shareholders short-term happy (the only kind of happy they know). The problem is: how much actual functional value will it have? And how long does it last? Much of it is the same routine we were seeing with blockchain a few years ago: number go up. Number go up always! Unrealistic, unsustainable forever-growth must be guaranteed in this economic clime. If you can lay off all of your people and replace them with AI, number goes up big and never stops, right?
I have some doubts. ----------------------
The chips also haven't landed yet with regards to the legality of all of this. Will these adopters ultimately be able to copyright any of this output trained on datasets comprised of stolen work? Can computer-made art even be copyrighted at all? How much of a human touch will be required to make something copyright-able? I don't know yet. Neither do the hype team or the early adopters.
Does that mean the tech will be used but will have to be retrained on the adopter's proprietary data? Yeah, maybe. That'd be a somewhat better outcome, at least. It still means human artists make specific things for the machine to learn from. (Watch out for businesses that use 'ethical' as a buzzword to gloss over how many people they've let go from their jobs, though.)
Will it become industry standard practice to do things this way? Maybe. Will it still require an artist's sensibilities and oversight to plan and curate and fix the results so that it doesn't come across like pure AI trash? Yeah, I think that's pretty likely.
If it becomes standard practice, will it become samey, and self-referential and ultimately an emblem of doing things the cookie-cutter way instead of enlisting real, human artists? Quite possibly.
If it becomes standard industry practice, will there still be an audience or a demand or a desire for art made by human artists? Yes, almost certainly. With every leap of technology, that has remained the case.
------------------
TL;DR Version:
I'm not saying with any certainty that this AI blitz is a passing fad. I think we're likely to experience a torrential amount of generative art, video, voice, music, programming, and text in the coming years, in fact, and it will probably irrevocably change the layout of the career terrain. But I wouldn't be surprised if it was being overhyped as a business strategy right now. And I don't think the immensity of its volume will ever overcome its inherent emptiness.
What I am certain of is that it will not eliminate the innate human impulse to create. Nor the desire to experience art made by a fellow soul. Keep doing your thing, Anon. It's precious. It's authentic. It will be all the more special because it will have come from you, a human.
911 notes · View notes
Text
When Facebook came for your battery, feudal security failed
When George Hayward was working as a Facebook data-scientist, his bosses ordered him to run a “negative test,” updating Facebook Messenger to deliberately drain users’ batteries, in order to determine how power-hungry various parts of the apps were. Hayward refused, and Facebook fired him, and he sued:
https://nypost.com/2023/01/28/facebook-fires-worker-who-refused-to-do-negative-testing-awsuit/
If you’d like an essay-formatted version of this post to read or share, here’s a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2023/02/05/battery-vampire/#drained
Hayward balked because he knew that among the 1.3 billion people who use Messenger, some would be placed in harm’s way if Facebook deliberately drained their batteries — physically stranded, unable to communicate with loved ones experiencing emergencies, or locked out of their identification, payment method, and all the other functions filled by mobile phones.
As Hayward told Kathianne Boniello at the New York Post, “Any data scientist worth his or her salt will know, ‘Don’t hurt people…’ I refused to do this test. It turns out if you tell your boss, ‘No, that’s illegal,’ it doesn’t go over very well.”
Negative testing is standard practice at Facebook, and Hayward was given a document called “How to run thoughtful negative tests” regarding which he said, “I have never seen a more horrible document in my career.”
We don’t know much else, because Hayward’s employment contract included a non-negotiable binding arbitration waiver, which means that he surrendered his right to seek legal redress from his former employer. Instead, his claim will be heard by an arbitrator — that is, a fake corporate judge who is paid by Facebook to decide if Facebook was wrong. Even if he finds in Hayward’s favor — something that arbitrators do far less frequently than real judges do — the judgment, and all the information that led up to it, will be confidential, meaning we won’t get to find out more:
https://pluralistic.net/2022/06/12/hot-coffee/#mcgeico
One significant element of this story is that the malicious code was inserted into Facebook’s app. Apps, we’re told, are more secure than real software. Under the “curated computing” model, you forfeit your right to decide what programs run on your devices, and the manufacturer keeps you safe. But in practice, apps are just software, only worse:
https://pluralistic.net/2022/06/23/peek-a-boo/#attack-helicopter-parenting
Apps are part of what Bruce Schneier calls “feudal security.” In this model, we defend ourselves against the bandits who roam the internet by moving into a warlord’s fortress. So long as we do what the warlord tells us to do, his hired mercenaries will keep us safe from the bandits:
https://locusmag.com/2021/01/cory-doctorow-neofeudalism-and-the-digital-manor/
But in practice, the mercenaries aren’t all that good at their jobs. They let all kinds of badware into the fortress, like the “pig butchering” apps that snuck into the two major mobile app stores:
https://arstechnica.com/information-technology/2023/02/pig-butchering-scam-apps-sneak-into-apples-app-store-and-google-play/
It’s not merely that the app stores’ masters make mistakes — it’s that when they screw up, we have no recourse. You can’t switch to an app store that pays closer attention, or that lets you install low-level software that monitors and overrides the apps you download.
Indeed, Apple’s Developer Agreement bans apps that violate other services’ terms of service, and they’ve blocked apps like OG App that block Facebook’s surveillance and other enshittification measures, siding with Facebook against Apple device owners who assert the right to control how they interact with the company:
https://pluralistic.net/2022/12/10/e2e/#the-censors-pen
When a company insists that you must be rendered helpless as a condition of protecting you, it sets itself up for ghastly failures. Apple’s decision to prevent every one of its Chinese users from overriding its decisions led inevitably and foreseeably to the Chinese government ordering Apple to spy on those users:
https://pluralistic.net/2022/11/11/foreseeable-consequences/#airdropped
Apple isn’t shy about thwarting Facebook’s business plans, but Apple uses that power selectively — they blocked Facebook from spying on Iphone users (yay!) and Apple covertly spied on its customers in exactly the same way as Facebook, for exactly the same purpose, and lied about it:
https://pluralistic.net/2022/11/14/luxury-surveillance/#liar-liar
The ultimate, irresolvable problem of Feudal Security is that the warlord’s mercenaries will protect you against anyone — except the warlord who pays them. When Apple or Google or Facebook decides to attack its users, the company’s security experts will bend their efforts to preventing those users from defending themselves, turning the fortress into a prison:
https://pluralistic.net/2022/10/20/benevolent-dictators/#felony-contempt-of-business-model
Feudal security leaves us at the mercy of giant corporations — fallible and just as vulnerable to temptation as any of us. Both binding arbitration and feudal security assume that the benevolent dictator will always be benevolent, and never make a mistake. Time and again, these assumptions are proven to be nonsense.
Image: Anthony Quintano (modified) https://commons.wikimedia.org/wiki/File:Mark_Zuckerberg_F8_2018_Keynote_%2841118890174%29.jpg
CC BY 2.0: https://creativecommons.org/licenses/by/2.0/deed.en
[Image ID: A painting depicting the Roman sacking of Jerusalem. The Roman leader's head has been replaced with Mark Zuckerberg's head. The wall has Apple's 'Think Different' wordmark and an Ios 'low battery' icon.]
Next week (Feb 8-17), I'll be in Australia, touring my book *Chokepoint Capitalism* with my co-author, Rebecca Giblin. We'll be in Brisbane on Feb 8, and then we're doing a remote event for NZ on Feb 9. Next is Melbourne, Sydney and Canberra. I hope to see you!
https://chokepointcapitalism.com/
4K notes · View notes
godbirdart · 9 days
Text
"all our devices are listening to us and they track you everywhere you go online" okay yeah i Do believe this to be true but whoever is collecting my data to target ads at me fucking SUCKS!!!!
my ads cycle through the following across each and every damn platform:
outdoor power tools
mens birkenstock sandals
mens tropical-themed polo shirts
electric heaters / furnaces
some kind of farming ad
ride-on lawn mowers
febreze plug in [i suspect it is because i call my cat "stinky"]
kubota tractors
baby essentials [i also call my cat "baby"]
obligatory truck commercial
i really want to know how the tech companies took "furry artist that lives alone with his cat and spends hours looking at anime men and playing video games" and spun it into ads curated for a lost dad in Home Depot
245 notes · View notes
cleolinda · 11 months
Text
Threads update so you don’t have to:
I have not followed many people yet. As such, Threads’ algorithm is populating my feed with anything it can lay hands on. I have to assume it is not, in fact, using Instagram data, because if it were, my feed would look A SHIT TON BETTER THAN THIS. If you like American ball-centric sports, you are valid and I am happy for you, but I Do Not, and my feed is football and basketball posts as far as the eye can see. Also, streaming services’ PR accounts. Lotta Disney properties and Netflix. Something called Wasted posts constantly, I don’t know what it is. Someone who has a cooking show on cable. Various actors. Oh God, this is everything I don’t follow on Twitter. Why am I here.
I sat there muting accounts for like five minutes and then I had an existential crisis and stared at the wall for a moment. If all I’m doing is muting, why am I even there? If I weren’t on it to report back, would I just close the thing and never return? Will it get better as more people join? Part of my problem is that I’ve stayed off Twitter so long that I’m not even sure I enjoy the microblogging format, period, anymore. Maybe I’ll get back into the rhythm of it? But apparently I think in paragraphs, not short sentences. I like having room to clarify what I’m saying. I like the (comparatively) slower turnover of my dash.
The worst part is that you’d say, if you want your feed to be calmer, then curate your experience. Unfollow people. But I can’t, because Threads is shoving strangers at me. So is Twitter these days. Maybe Threads will become The Conversation™ the way Twitter used to be, something worth slogging through to feel “in the know,” but right now, from a usability perspective, it’s not.
1K notes · View notes