#software testing risks
compunnelinc · 1 day
Text
What Is Risk-Based Testing in QA and How to Prioritize Tests?
In the dynamic world of software development, ensuring quality while managing tight deadlines is crucial. Risk-based testing (RBT) emerges as a powerful strategy in Quality Assurance (QA) to address this challenge. This blog delves into the essentials of RBT, explaining how it focuses on identifying and mitigating the most significant risks in a software project. Discover the core principles of RBT, its benefits, and practical steps to implement this approach effectively. Learn how to prioritize your tests based on risk assessment, ensuring that your testing efforts are both efficient and impactful. Whether you're a QA professional or a project manager, this guide will equip you with the knowledge to enhance your testing strategy and deliver high-quality software with confidence.
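As a rough sketch of the idea (the feature names, weights, and 1-5 scales below are hypothetical, not taken from the linked post), risk-based prioritization usually comes down to scoring each area by likelihood of failure and impact of failure, then spending testing effort in order of that score:

```python
# Hypothetical sketch of risk-based test prioritization.
# Each area gets a likelihood (how likely it is to fail) and an
# impact (how bad a failure would be), both on a 1-5 scale.

test_areas = [
    {"name": "payment processing", "likelihood": 4, "impact": 5},
    {"name": "user profile page",  "likelihood": 2, "impact": 2},
    {"name": "login / auth",       "likelihood": 3, "impact": 5},
    {"name": "report export",      "likelihood": 3, "impact": 2},
]

for area in test_areas:
    # A common, simple scoring model: risk = likelihood x impact.
    area["risk_score"] = area["likelihood"] * area["impact"]

# Test the riskiest areas first and most thoroughly.
for area in sorted(test_areas, key=lambda a: a["risk_score"], reverse=True):
    print(f'{area["name"]:20} risk={area["risk_score"]}')
```

The exact numbers matter less than the ordering they produce: high-likelihood, high-impact areas get the deepest testing, while low-risk areas get a lighter pass.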
Read more: https://www.compunnel.com/blogs/risk-based-testing-in-qa-prioritizing-tests-for-maximum-impact/
0 notes
caramel12345 · 6 months
Text
HR Innovations & Growth | FYI Solutions
Innovate your HR practices for sustained growth and competitive advantage. Visit today https://fyisolutions.com
0 notes
news4nose · 9 months
Link
Did you know that hidden, harmful commands can lurk within normal-looking code? Instead of relying on copy-pasting, take the time to learn and understand the commands you're using. Learning the basics of command-line operations can go a long way in helping you.
0 notes
garymdm · 11 months
Text
Poor data quality significantly hinders banks' ability to present accurate regulatory reports
Recent research conducted by Aite Group highlighted the link between poor data quality and regulatory penalties, further creating a case for financial services organisations to improve their data quality. Poor data quality can lead to an adverse conclusion by regulators about a bank's ability to accurately monitor business risks. Focus on data…
0 notes
Text
#Blog | Risk-based testing is an important approach to software testing as it helps ensure that testing efforts focus on the most critical areas of the software or system. It helps improve the effectiveness and efficiency of the entire process.
0 notes
Text
Three AI insights for hard-charging, future-oriented smartypantses
MERE HOURS REMAIN for the Kickstarter for the audiobook for The Bezzle, the sequel to Red Team Blues, narrated by @wilwheaton! You can pre-order the audiobook and ebook, DRM free, as well as the hardcover, signed or unsigned. There’s also bundles with Red Team Blues in ebook, audio or paperback.
Living in the age of AI hype makes demands on all of us to come up with smartypants prognostications about how AI is about to change everything forever, and wow, it's pretty amazing, huh?
AI pitchmen don't make it easy. They like to pile on the cognitive dissonance and demand that we all somehow resolve it. This is a thing cult leaders do, too – tell blatant and obvious lies to their followers. When a cult follower repeats the lie to others, they are demonstrating their loyalty, both to the leader and to themselves.
Over and over, the claims of AI pitchmen turn out to be blatant lies. This has been the case since at least the age of the Mechanical Turk, the 18th-century chess-playing automaton that was actually just a chess player crammed into the base of an elaborate puppet that was exhibited as an autonomous, intelligent robot.
The most prominent Mechanical Turk huckster is Elon Musk, who habitually, blatantly and repeatedly lies about AI. He's been promising "full self driving" Teslas in "one to two years" for more than a decade. Periodically, he'll "demonstrate" a car that's in full self-driving mode – which then turns out to be a canned, recorded demo:
https://www.reuters.com/technology/tesla-video-promoting-self-driving-was-staged-engineer-testifies-2023-01-17/
Musk even trotted out an autonomous, humanoid robot on-stage at an investor presentation, failing to mention that this mechanical marvel was just a person in a robot suit:
https://www.siliconrepublic.com/machines/elon-musk-tesla-robot-optimus-ai
Now, Musk has announced that his junk-science neural interface company, Neuralink, has made the leap to implanting neural interface chips in a human brain. As Joan Westenberg writes, the press have repeated this claim as presumptively true, despite its wild implausibility:
https://joanwestenberg.com/blog/elon-musk-lies
Neuralink, after all, is a company notorious for mutilating primates in pursuit of showy, meaningless demos:
https://www.wired.com/story/elon-musk-pcrm-neuralink-monkey-deaths/
I'm perfectly willing to believe that Musk would risk someone else's life to help him with this nonsense, because he doesn't see other people as real and deserving of compassion or empathy. But he's also profoundly lazy and is accustomed to a world that unquestioningly swallows his most outlandish pronouncements, so Occam's Razor dictates that the most likely explanation here is that he just made it up.
The odds that there's a human being beta-testing Musk's neural interface with the only brain they will ever have aren't zero. But I give it the same odds as the Raelians' claim to have cloned a human being:
https://edition.cnn.com/2003/ALLPOLITICS/01/03/cf.opinion.rael/
The human-in-a-robot-suit gambit is everywhere in AI hype. Cruise, GM's disgraced "robot taxi" company, had 1.5 remote operators for every one of the cars on the road. They used AI to replace a single, low-waged driver with 1.5 high-waged, specialized technicians. Truly, it was a marvel.
Globalization is key to maintaining the guy-in-a-robot-suit phenomenon. Globalization gives AI pitchmen access to millions of low-waged workers who can pretend to be software programs, allowing us to pretend to have transcended capitalism's exploitation trap. This is also a very old pattern – just a couple decades after the Mechanical Turk toured Europe, Thomas Jefferson returned from the continent with the dumbwaiter. Jefferson refined and installed these marvels, announcing to his dinner guests that they allowed him to replace his "servants" (that is, his slaves). Dumbwaiters don't replace slaves, of course – they just keep them out of sight:
https://www.stuartmcmillen.com/blog/behind-the-dumbwaiter/
So much AI turns out to be low-waged people in a call center in the Global South pretending to be robots that Indian techies have a joke about it: "AI stands for 'absent Indian'":
https://pluralistic.net/2024/01/29/pay-no-attention/#to-the-little-man-behind-the-curtain
A reader wrote to me this week. They're a multi-decade veteran of Amazon who had a fascinating tale about the launch of Amazon Go, the "fully automated" Amazon retail outlets that let you wander around, pick up goods and walk out again, while AI-enabled cameras totted up the goods in your basket and charged your card for them.
According to this reader, the AI cameras didn't work any better than Tesla's full-self driving mode, and had to be backstopped by a minimum of three camera operators in an Indian call center, "so that there could be a quorum system for deciding on a customer's activity – three autopilots good, two autopilots bad."
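Taken at face value, the "quorum system" the reader describes is just a majority vote over independent human judgments. A minimal sketch, assuming a simple best-two-out-of-three rule (the labels are invented for illustration, not Amazon's actual categories):

```python
from collections import Counter

def quorum_decision(judgments):
    """Majority vote over independent operator judgments.
    Returns the winning label, or None if there is no clear majority."""
    counts = Counter(judgments)
    label, votes = counts.most_common(1)[0]
    return label if votes > len(judgments) / 2 else None

# Three camera operators each label what they think the customer did.
print(quorum_decision(["took_item", "took_item", "put_back"]))   # -> "took_item"
print(quorum_decision(["took_item", "put_back", "no_action"]))   # -> None (no majority)
```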
Amazon got a ton of press from the launch of the Amazon Go stores. A lot of it was very favorable, of course: Mister Market is insatiably horny for firing human beings and replacing them with robots, so any announcement that you've got a human-replacing robot is a surefire way to make Line Go Up. But there was also plenty of critical press about this – pieces that took Amazon to task for replacing human beings with robots.
What was missing from the criticism? Articles that said that Amazon was probably lying about its robots, that it had replaced low-waged clerks in the USA with even-lower-waged camera-jockeys in India.
Which is a shame, because that criticism would have hit Amazon where it hurts, right there in the ole Line Go Up. Amazon's stock price boost off the back of the Amazon Go announcements represented the market's bet that Amazon would evert out of cyberspace and fill all of our physical retail corridors with monopolistic robot stores, moated with IP that prevented other retailers from similarly slashing their wage bills. That unbridgeable moat would guarantee Amazon generations of monopoly rents, which it would share with any shareholders who piled into the stock at that moment.
See the difference? Criticize Amazon for its devastatingly effective automation and you help Amazon sell stock to suckers, which makes Amazon executives richer. Criticize Amazon for lying about its automation, and you clobber the personal net worth of the executives who spun up this lie, because their portfolios are full of Amazon stock:
https://sts-news.medium.com/youre-doing-it-wrong-notes-on-criticism-and-technology-hype-18b08b4307e5
Amazon Go didn't go. The hundreds of Amazon Go stores we were promised never materialized. There's an embarrassing rump of 25 of these things still around, which will doubtless be quietly shuttered in the years to come. But Amazon Go wasn't a failure. It allowed its architects to pocket massive capital gains on the way to building generational wealth and establishing a new permanent aristocracy of habitual bullshitters dressed up as high-tech wizards.
"Wizard" is the right word for it. The high-tech sector pretends to be science fiction, but it's usually fantasy. For a generation, America's largest tech firms peddled the dream of imminently establishing colonies on distant worlds or even traveling to other solar systems, something that is still so far in our future that it might well never come to pass:
https://pluralistic.net/2024/01/09/astrobezzle/#send-robots-instead
During the Space Age, we got the same kind of performative bullshit. On The Well David Gans mentioned hearing a promo on SiriusXM for a radio show with "the first AI co-host." To this, Craig L Maudlin replied, "Reminds me of fins on automobiles."
Yup, that's exactly it. An AI radio co-host is to artificial intelligence as a Cadillac Eldorado Biarritz tail-fin is to interstellar rocketry.
Back the Kickstarter for the audiobook of The Bezzle here!
If you’d like an essay-formatted version of this post to read or share, here’s a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2024/01/31/neural-interface-beta-tester/#tailfins
1K notes · View notes
luminlogic · 2 years
Text
Best-in-Class Software for Product Lifecycle Development
By accelerating the creation and circulation of documents, we help sell your product. Our specialists will create risk management plans and keep records. Contact us today.
0 notes
iww-gnv · 3 months
Text
As firms increasingly rely on artificial intelligence-driven hiring platforms, many highly qualified candidates are finding themselves on the cutting room floor.

Body-language analysis. Vocal assessments. Gamified tests. CV scanners. These are some of the tools companies use to screen candidates with artificial intelligence recruiting software. Job applicants face these machine prompts – and AI decides whether they are a good match or fall short.

Businesses are increasingly relying on them. A late-2023 IBM survey of more than 8,500 global IT professionals showed 42% of companies were using AI screening "to improve recruiting and human resources". Another 40% of respondents were considering integrating the technology.

Many leaders across the corporate world hoped AI recruiting tech would end biases in the hiring process. Yet in some cases, the opposite is happening. Some experts say these tools are inaccurately screening some of the most qualified job applicants – and concerns are growing the software may be excising the best candidates.

"We haven't seen a whole lot of evidence that there's no bias here… or that the tool picks out the most qualified candidates," says Hilke Schellmann, US-based author of The Algorithm: How AI Can Hijack Your Career and Steal Your Future, and an assistant professor of journalism at New York University. She believes the biggest risk such software poses to jobs is not machines taking workers' positions, as is often feared – but rather preventing them from getting a role at all.
98 notes · View notes
freckledboss · 3 months
Text
Pepper isn't one for tampering with things, especially mechanical and technological devices she's not familiar with. But the lab was empty, and she was merely checking in on the progress Tony had made when it happened. A sudden flash of light, blinding and intense. Then her surroundings became rather dark, body going limp, mind slipping into unconsciousness. This perhaps was a classic case of being in the wrong place at the wrong time. Or maybe fate had a hand to play. No matter how it came about, one thing is for certain. She may be in danger if this isn't sorted out.
The redhead stirs to the sound of birds chirping, and the smell of nature all around. Am I outside? Her head feels a little sore, memory is a bit fuzzy. She slowly sits up and blinks into the bright sunlight, a hand lifting to help see better.
Everything looks different... she glances down and realizes she's sitting in a pile of hay? This is definitely not the lab, or Stark Industries.
Pepper stands then, a tad unbalanced until she gains her footing--stilettos and dirt do not mix--and is able to really examine the place. A barn... I'm inside a barn? How... Panic is beginning to creep inside, but she refuses to let it consume her just yet. There has to be a logical explanation.
"Mr. Stark, if this is some sort of new simulation software you're testing out, I would appreciate the heads up next time," assuming, of course, this is an illusion. "Or maybe warn me not to enter the lab if you're planning on experimenting with new equipment? You know how I feel about potentially hazardous situations..." because lets face it, Tony and risk go hand in hand. "Mr. Stark?"
@mr-tony-stark
75 notes · View notes
Text
Coral restoration efforts usually involve transplanting tiny corals, cultivated in nurseries, on to damaged reefs.
However the work can be slow and costly, and only a fraction of the reefs at risk are getting help.
In the shallow waters of the Abrolhos Islands, [Marine biologist Taryn Foster] is testing a system she hopes will revive reefs more quickly...
It involves grafting coral fragments into small plugs, which are inserted into a moulded base. Those bases are then placed in batches on the seabed....
Ms Foster has formed a start-up firm called Coral Maker and hopes that a partnership with San Francisco-based engineering software firm Autodesk will accelerate the process further.
Their researchers have been training an artificial intelligence to control collaborative robots (cobots), which work closely alongside humans.
"Some of these processes in coral propagation are just repetitive pick and place tasks, and they're ideally suited to robotic automation," says Ms Foster.
A robotic arm can graft or glue coral fragments to the seed plugs. Another places them in the base, using vision systems to make decisions about how to grab it.
"Every piece of coral is different, even within the same species, so the robots need to recognise coral fragments and how to handle them," says Nic Carey, senior principal research scientist at Autodesk.
"So far, they're very good at handling the variability in coral shapes."
76 notes · View notes
scientia-rex · 4 months
Note
Hello!
Would you be willing to take a look at this paper and share your thoughts?
https://www.nature.com/articles/s41591-022-02012-w
Specifically Fig. 2a - Obesity - Relation between daily steps over time and incident chronic disease. The graph has the most fascinating increase in obesity rates at around 8000 steps per day but the researchers do not mention it at all in their results or discussion.
My hypothesis would be that overweight individuals are more likely to be aware that 8-10k steps is the recommended amount, given that the participants are Fitbit wearers? But presumably Fitbit wearers are semi-uniformly aware of the recommended # of steps per day?
Thank you for any insights.
Honestly, I wouldn't put a lot of stock in that association until or unless it's confirmed by repeated studies. Looking at their data analysis, they ran a SHIT TON of analyses, and when you do that, you need to divvy up the level of Type I error risk across ALL the analyses--which gets hellaciously complicated when you're looking at the kind of analyses they're doing, and I'm not shocked they're mostly like "look, the P-value here is less than 0.001, it's significant, OK?" but I'm not even kidding--that might not be significant enough when you consider how many tests they ran. Researchers running multiple tests to try and find something significant is a notorious problem in science because they rarely disclose how many nonsignificant findings they got before they found a significant one, and what you are supposed to do if you run the numbers multiple times (pulling out one variable, adding another in) is again raise the P-value bar proportionate to how many times you crunched the data. So if I'm willing to accept a 5% risk of a false positive result (the typical p < 0.05 we know and love) and I run 100 different analyses, which is not at all difficult to do with modern software, I am supposed to now tighten my threshold for a significant result to p < 0.0005. No one does that. Which means that a lot of "significant" findings in research aren't, by the strict mathematical criteria we're supposed to use in order to make statistical analysis valid.
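For anyone who wants to see the arithmetic, the correction being described (divide your false-positive budget by the number of tests, so 0.05 / 100 = 0.0005) is easy to simulate. This is a minimal sketch, assuming 100 tests run on pure noise:

```python
# Toy illustration of the multiple-comparisons problem described above.
# With 100 independent tests on pure noise, a p < 0.05 threshold will
# "find" roughly 5 significant results by chance alone.

import random

random.seed(1)

n_tests = 100
alpha = 0.05

# Under the null hypothesis (nothing real going on), p-values are
# uniformly distributed on [0, 1].
p_values = [random.random() for _ in range(n_tests)]

false_positives = sum(p < alpha for p in p_values)
print(f"Uncorrected: {false_positives} 'significant' results out of {n_tests}")

# Bonferroni-style correction: divide alpha by the number of tests,
# which is where the 0.05 / 100 = 0.0005 threshold comes from.
bonferroni_alpha = alpha / n_tests
survivors = sum(p < bonferroni_alpha for p in p_values)
print(f"Corrected (p < {bonferroni_alpha}): {survivors} survive")
```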
Which isn't to say that couldn't be genuinely meaningful at the population level, just that I doubt it is. And if I'm not convinced by the data and I don't see a reason why that has face validity, it goes in my "huh" file and I move on. Sometimes you need to revisit the "huh" file but most of those results drop into nothingness because they were nothing to begin with.
31 notes · View notes
jtargaryen18 · 2 years
Text
Things to Think About Post-Roe
Regardless of whether you live in a state where abortion is still legal or not, here are some things to think about moving forward. Most of us, me included, aren't old enough to remember a time before Roe. But it is going to impact all of us women, trans men, non-binary, and others along with those who love them.
1. Remove all period trackers/apps from your phone. Use a physical pocket calendar instead and use an easy-to-remember phrase like "lunch with (friend or SO)" or "return library books."
2. If you go to any facility for reproduction healthcare services, don't take your phone, or tracking data will place you there.
3. If you go to a facility for healthcare services, remember to wear a mask. Facial recognition software is a growing technology.
4. If you miscarry, don't tell your healthcare providers if you took anything. The treatment for abortion and miscarriage is the same. Those who miscarry can and will be criminally investigated and charged. There have been reports of them being turned in by their healthcare providers. BIPOC are at the highest risk for this and drug testing.
5. Incognito browsers are a weak defense. Deleting browsing data helps but isn't a failsafe. Use a VPN. Always.
6. Be careful who you trust. The father may not want that pregnancy or be ready for it. Tell me he (or someone else) wouldn't want the $10k bounty some states are offering for turning you in.
7. Many are posting links to viable organizations and networks to help those with unwanted pregnancies. Donate what you can. Vet them first.
561 notes · View notes
sixstringphonic · 1 year
Text
“A recent Goldman Sachs study found that generative AI tools could, in fact, impact 300 million full-time jobs worldwide, which could lead to a ‘significant disruption’ in the job market.”
“Insider talked to experts and conducted research to compile a list of jobs that are at highest-risk for replacement by AI.”
Tech jobs (Coders, computer programmers, software engineers, data analysts)
Media jobs (advertising, content creation, technical writing, journalism)
Legal industry jobs (paralegals, legal assistants)
Market research analysts
Teachers
Finance jobs (Financial analysts, personal financial advisors)
Traders (stock markets)
Graphic designers
Accountants
Customer service agents
"’We have to think about these things as productivity enhancing tools, as opposed to complete replacements,’ Anu Madgavkar, a partner at the McKinsey Global Institute, said.”
What will be eliminated from all of these industries is the ENTRY LEVEL JOB.  You know, the jobs where newcomers gain valuable real-world experience and build their resumes?  The jobs where you’re supposed to get your 1-2 years of experience before moving up to the big leagues (which remain inaccessible to applicants without the necessary experience, which they can no longer get, because so-called “low level” tasks will be completed by AI).
There’s more...
Wendy’s to test AI chatbot that takes your drive-thru order
“Wendy’s is not entirely a pioneer in this arena. Last year, McDonald’s opened a fully automated restaurant in Fort Worth, Texas, and deployed more AI-operated drive-thrus around the country.”
BT to cut 55,000 jobs with up to a fifth replaced by AI
“Chief executive Philip Jansen said ‘generative AI’ tools such as ChatGPT - which can write essays, scripts, poems, and solve computer coding in a human-like way - ‘gives us confidence we can go even further’.”
Why promoting AI is actually hurting accounting
“Accounting firms have bought into the AI hype and slowed their investment in personnel, believing they can rely more on machines and less on people.”
Will AI Replace Software Engineers?
“The truth is that AI is unlikely to replace high-value software engineers who build complex and innovative software. However, it could replace some low-value developers who build simple and repetitive software.”
75 notes · View notes
Text
Hypothetical AI election disinformation risks vs real AI harms
I'm on tour with my new novel The Bezzle! Catch me TONIGHT (Feb 27) in Portland at Powell's. Then, onto Phoenix (Changing Hands, Feb 29), Tucson (Mar 9-12), and more!
You can barely turn around these days without encountering a think-piece warning of the impending risk of AI disinformation in the coming elections. But a recent episode of This Machine Kills podcast reminds us that these are hypothetical risks, and there is no shortage of real AI harms:
https://soundcloud.com/thismachinekillspod/311-selling-pickaxes-for-the-ai-gold-rush
The algorithmic decision-making systems that increasingly run the back-ends to our lives are really, truly very bad at doing their jobs, and worse, these systems constitute a form of "empiricism-washing": if the computer says it's true, it must be true. There's no such thing as racist math, you SJW snowflake!
https://slate.com/news-and-politics/2019/02/aoc-algorithms-racist-bias.html
Nearly 1,000 British postmasters were wrongly convicted of fraud by Horizon, the faulty AI fraud-hunting system that Fujitsu provided to the Post Office. They had their lives ruined by this faulty AI, many went to prison, and at least four of the AI's victims killed themselves:
https://en.wikipedia.org/wiki/British_Post_Office_scandal
Tenants across America have seen their rents skyrocket thanks to Realpage's landlord price-fixing algorithm, which deployed the time-honored defense: "It's not a crime if we commit it with an app":
https://www.propublica.org/article/doj-backs-tenants-price-fixing-case-big-landlords-real-estate-tech
Housing, you'll recall, is pretty foundational in the human hierarchy of needs. Losing your home – or being forced to choose between paying rent or buying groceries or gas for your car or clothes for your kid – is a non-hypothetical, widespread, urgent problem that can be traced straight to AI.
Then there's predictive policing: cities across America and the world have bought systems that purport to tell the cops where to look for crime. Of course, these systems are trained on policing data from forces that are seeking to correct racial bias in their practices by using an algorithm to create "fairness." You feed this algorithm a data-set of where the police had detected crime in previous years, and it predicts where you'll find crime in the years to come.
But you only find crime where you look for it. If the cops only ever stop-and-frisk Black and brown kids, or pull over Black and brown drivers, then every knife, baggie or gun they find in someone's trunk or pockets will be found in a Black or brown person's trunk or pocket. A predictive policing algorithm will naively ingest this data and confidently assert that future crimes can be foiled by looking for more Black and brown people and searching them and pulling them over.
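A toy simulation makes that feedback loop concrete. This is a deliberately simplified sketch of the dynamic, not any vendor's actual model: two neighborhoods with identical underlying offense rates, where one simply starts out with more patrols.

```python
# Toy model of the predictive-policing feedback loop described above.
# Two neighborhoods with the SAME true offense rate; neighborhood A
# simply starts out with more patrols, so more of its offenses get detected.

import random

random.seed(0)

true_rate = 0.1                    # identical underlying offense rate in both places
patrols = {"A": 0.8, "B": 0.2}     # initial share of patrol attention
detected = {"A": 0, "B": 0}

for year in range(10):
    for hood in ("A", "B"):
        # Offenses only enter the dataset when a patrol is there to see them.
        offenses = sum(random.random() < true_rate for _ in range(1000))
        detected[hood] += int(offenses * patrols[hood])
    # "Predictive" step: allocate next year's patrols by past detections.
    total = detected["A"] + detected["B"]
    patrols = {h: detected[h] / total for h in ("A", "B")}

print(detected, patrols)
# A ends up with several times B's recorded "crime" and keeps the lion's
# share of patrols, purely because it was watched more to begin with.
```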
Obviously, this is bad for Black and brown people in low-income neighborhoods, whose baseline risk of an encounter with a cop turning violent or even lethal is already high. But it's also bad for affluent people in affluent neighborhoods – because they are underpoliced as a result of these algorithmic biases. For example, domestic abuse that occurs in full detached single-family homes is systematically underrepresented in crime data, because the majority of domestic abuse calls originate with neighbors who can hear the abuse take place through a shared wall.
But the majority of algorithmic harms are inflicted on poor, racialized and/or working class people. Even if you escape a predictive policing algorithm, a facial recognition algorithm may wrongly accuse you of a crime, and even if you were far away from the site of the crime, the cops will still arrest you, because computers don't lie:
https://www.cbsnews.com/sacramento/news/texas-macys-sunglass-hut-facial-recognition-software-wrongful-arrest-sacramento-alibi/
Trying to get a low-waged service job? Be prepared for endless, nonsensical AI "personality tests" that make Scientology look like NASA:
https://futurism.com/mandatory-ai-hiring-tests
Service workers' schedules are at the mercy of shift-allocation algorithms that assign them hours that ensure that they fall just short of qualifying for health and other benefits. These algorithms push workers into "clopening" – where you close the store after midnight and then open it again the next morning before 5AM. And if you try to unionize, another algorithm – that spies on you and your fellow workers' social media activity – targets you for reprisals and your store for closure.
If you're driving an Amazon delivery van, an algorithm watches your eyeballs and tells your boss that you're a bad driver if it doesn't like what it sees. If you're working in an Amazon warehouse, an algorithm decides if you've taken too many pee-breaks and automatically dings you:
https://pluralistic.net/2022/04/17/revenge-of-the-chickenized-reverse-centaurs/
If this disgusts you and you're hoping to use your ballot to elect lawmakers who will take up your cause, an algorithm stands in your way again. "AI" tools for purging voter rolls are especially harmful to racialized people – for example, they assume that two "Juan Gomez"es with a shared birthday in two different states must be the same person and remove one or both from the voter rolls:
https://www.cbsnews.com/news/eligible-voters-swept-up-conservative-activists-purge-voter-rolls/
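The failure mode is easy to reproduce: match on nothing but name and date of birth, and common names collide. A hypothetical sketch with invented records (not the actual purge software):

```python
# Naive "same person" matching of the kind described above:
# name + date of birth, nothing else. Common names collide.

registrations = [
    {"state": "TX", "name": "Juan Gomez",  "dob": "1980-05-14"},
    {"state": "GA", "name": "Juan Gomez",  "dob": "1980-05-14"},  # a different Juan Gomez
    {"state": "TX", "name": "Maria Silva", "dob": "1975-11-02"},
]

seen = {}
flagged_for_removal = []

for rec in registrations:
    key = (rec["name"], rec["dob"])
    if key in seen:
        # The naive rule treats this as a duplicate registration
        # and flags the voter for purging.
        flagged_for_removal.append(rec)
    else:
        seen[key] = rec

print(flagged_for_removal)
# -> the Georgia Juan Gomez gets flagged, even though he's a different person.
```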
Hoping to get a solid education, the sort that will keep you out of AI-supervised, precarious, low-waged work? Sorry, kiddo: the ed-tech system is riddled with algorithms. There's the grifty "remote invigilation" industry that watches you take tests via webcam and accuses you of cheating if your facial expressions fail its high-tech phrenology standards:
https://pluralistic.net/2022/02/16/unauthorized-paper/#cheating-anticheat
All of these are non-hypothetical, real risks from AI. The AI industry has proven itself incredibly adept at deflecting interest from real harms to hypothetical ones, like the "risk" that the spicy autocomplete will become conscious and take over the world in order to convert us all to paperclips:
https://pluralistic.net/2023/11/27/10-types-of-people/#taking-up-a-lot-of-space
Whenever you hear AI bosses talking about how seriously they're taking a hypothetical risk, that's the moment when you should check in on whether they're doing anything about all these longstanding, real risks. And even as AI bosses promise to fight hypothetical election disinformation, they continue to downplay or ignore the non-hypothetical, here-and-now harms of AI.
There's something unseemly – and even perverse – about worrying so much about AI and election disinformation. It plays into the narrative that kicked off in earnest in 2016, that the reason the electorate votes for manifestly unqualified candidates who run on a platform of bald-faced lies is that they are gullible and easily led astray.
But there's another explanation: the reason people accept conspiratorial accounts of how our institutions are run is because the institutions that are supposed to be defending us are corrupt and captured by actual conspiracies:
https://memex.craphound.com/2019/09/21/republic-of-lies-the-rise-of-conspiratorial-thinking-and-the-actual-conspiracies-that-fuel-it/
The party line on conspiratorial accounts is that these institutions are good, actually. Think of the rebuttal offered to anti-vaxxers who claimed that pharma giants were run by murderous sociopath billionaires who were in league with their regulators to kill us for a buck: "no, I think you'll find pharma companies are great and superbly regulated":
https://pluralistic.net/2023/09/05/not-that-naomi/#if-the-naomi-be-klein-youre-doing-just-fine
Institutions are profoundly important to a high-tech society. No one is capable of assessing all the life-or-death choices we make every day, from whether to trust the firmware in your car's anti-lock brakes, the alloys used in the structural members of your home, or the food-safety standards for the meal you're about to eat. We must rely on well-regulated experts to make these calls for us, and when the institutions fail us, we are thrown into a state of epistemological chaos. We must make decisions about whether to trust these technological systems, but we can't make informed choices because the one thing we're sure of is that our institutions aren't trustworthy.
Ironically, the long list of AI harms that we live with every day are the most important contributor to disinformation campaigns. It's these harms that provide the evidence for belief in conspiratorial accounts of the world, because each one is proof that the system can't be trusted. The election disinformation discourse focuses on the lies told – and not why those lies are credible.
That's because the subtext of election disinformation concerns is usually that the electorate is credulous, fools waiting to be suckered in. By refusing to contemplate the institutional failures that sit upstream of conspiracism, we can smugly locate the blame with the peddlers of lies and assume the mantle of paternalistic protectors of the easily gulled electorate.
But the group of people who are demonstrably being tricked by AI is the people who buy the horrifically flawed AI-based algorithmic systems and put them into use despite their manifest failures.
As I've written many times, "we're nowhere near a place where bots can steal your job, but we're certainly at the point where your boss can be suckered into firing you and replacing you with a bot that fails at doing your job"
https://pluralistic.net/2024/01/15/passive-income-brainworms/#four-hour-work-week
The most visible victims of AI disinformation are the people who are putting AI in charge of the life-chances of millions of the rest of us. Tackle that AI disinformation and its harms, and we'll make conspiratorial claims about our institutions being corrupt far less credible.
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2024/02/27/ai-conspiracies/#epistemological-collapse
Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en
144 notes · View notes
callmearcturus · 2 years
Text
the best handheld gaming options today
ohyesididnotjustdothat: Idk if u were serious or if this is even a good time, but that rec list for handhelds sounds like it'd be baller ngl
yeah sure, i like to do the research so others don't have to. it's turning into my gimmick
Let's do this by category:
Best Starter Device
Best Midtier Device
"But What About Sony" Device
Best Thing If You Don't Want To Invest In A Steam Deck
Best Starter Device/Low Budget: Miyoo Mini
as soon as this fucking thing came out, everyone was talking about it and I can see why. This thing is adorable, it's sturdy, it is the most pocketable device in the space, and it reliably plays anything up to the PSX generation. It has shoulder buttons on the back, comes with a very decent user interface out of the box, and that screen is both gorgeous and it has extra lamination so it's well protected.
And it sells for less than 70$. There's a very loving fanbase around the Miyoo Mini with people pushing the hardware to its limits and supporting it, so it's a good pick.
Best Midtier Device: Retroid Pocket 2+
the + is important, this is a refresh and update to the original RP2 with a lot of improvements.
Full disclosure, I wanted to get myself a Powkiddy RGB10 Max 2 (a mouthful) because it has super good ergonomics and a huge screen. But it's not about the screen, it's what you do with it, and the RP2+ is just head and shoulders above the competition and tends to be priced at 99$, which is cheaper than the competition as well. That this thing exists while the Anbernic RG552 has the audacity to ask for 225$ is staggering.
This fucking thing will play SNES, GBA, PSX, sure. But thanks to its chipset, it'll also play N64, Nintendo DS, Dreamcast. There is a community-maintained GDoc with testing of the device with advice on how to wring the best results out of the RP2+. Also it's one of the only devices where the manufacturer lets you buy spare parts if something breaks, which is something more folks should do. Also this thing is getting active updates both officially and from the community. Like.... it's a good fucking purchase.
This is what I'm getting and I'm planning to explore the NDS and Dreamcast libraries for the first time.
"But I want to play PSP as well as older generations!": CFW PS Vita
This was my first foray into the space and I fucking love it. The amount of shit you can do with a custom firmware (CFW) Vita is mind-boggling. I just downloaded a free port of Fallout 2 from the Homebrew Browser that runs on the Vita itself. Last night, I opened Adrenaline, which is Sony's own PSP emulation software, and booted up Chrono Cross. There is a very sneaky little app I won't name that literally just lets you directly, cleanly download any PS Vita game onto the device.
The screen is fucking beautiful, the device feels luxury, it's got bluetooth, it can run older libraries all the way up to N64 (though it chugs on some GBA games for some reason).
And while a lot of other custom handhelds are trying to get PSP games to run, this is the device that was BUILT to run them, and it cannot be beat. No fucking with settings, no swapping cores in hopes another will run better. It just works for that amazing fucking library. And setting up the CFW is very fun and has a great step by step guide online.
Honorable Mention: CFW 3DS/3DS XL
One of the things these handhelds struggle with is emulation of the dual screen Nintendo devices. It's super easy to emulate the top screen, and for many games the bottom screen isn't actually necessary to play. But for the ones where it is, there are few viable handhelds you can really use.
But much like the Vita, the 3DS has a vibrant and thriving mod community. Once you have it set up, it's the most reliable avenue for the Nintendo catalog, and with Nintendo shutting down support for it, that's two generations of extremely large catalogs at risk of disappearing.
Also, once you've got it set up, you can go to hshop, pick the game you want to play, scan a QR code with your 3DS, and just install it. The pirate game is so fucking advanced. The only reason I don't play mine more is the battery life bugs me. If I put my vita to sleep, in a month it'll still have power. If I put my 3DS to sleep, it's dead in 2 days.
The "I Don't Mind Spending For the Best Option That Is Fully Futureproofed" Option: The AYN Odin
Starting price 230$ before shipping and you'll be waiting at minimum 3 months for it to arrive.
This will play everything. This is right now the only device to reliably run the PS2/PSP generation. It will also flawlessly stream Game Pass, PS Now (or whatever they're calling it these days), Stadia, Moonlight, Parsec, and Nvidia's service I forget the name of. If you're daring, you can sideload windows and play a respectable amount of full PC games. It has very good controls, super nice shoulder buttons, a turbo mode with a cooling fan, and that enormous fucking screen.
This is the device every motherfucker wants but you will be paying and you will be waiting to get your hands on it. It's the best device you can get without investing in a Steam Deck. (Also you can very easily get away with the Odin Lite over the Odin Base or Pro unless you are planning to sideload Windows, just FYI.)
Final Thoughts: There's a lot of good options out there and a lot of people covering them, but I personally like Retro Game Corps the most. He has in depth reviews of goddamn everything as well as video guides to setting up specific devices, learning Retroarch, exploring firmware options as they release, and WRITTEN GUIDES to everything on his site. Dude is a marvel. I would watch his stuff to see what sounds good, then watch other reviews for the same device. (This is what talked me out of the Powkiddy and into the RP2+.)
458 notes · View notes
luminlogic · 2 years
Text
Improve Your Operational Efficiency with Our Tools
Today, most project management professionals use project management software to plan, execute and control projects. Reach your goals quickly and efficiently with the help of our experts. Contact us today.
0 notes