#Trust Science
lounesdarbois · 4 months
Text
Tumblr media
6 notes · View notes
dafukdidiwatch · 1 year
Text
Tumblr media
Pick the devil you know or the devil you don’t.
2 notes · View notes
Text
Tumblr media
Please continue #stayingsafe as we’re in the midst of a very bad #CovidSummerSurge2023.
This powerful image is courtesy of #CovidDataReport out of #Philadelphia. He does an amazingly intelligent job of presenting real science and honest facts, minus any political or financially driven governmental nonsense. No time for ignorance, #CovidDeniers or #Covidminimalists. His channel on YouTube is the real deal with no BS.
Check out his excellent channel at:
youtube
1 note · View note
its-blorbin-time · 5 months
Text
Graphic designer Will Graham: I drag my cursor to the font selection box. After careful deliberation I choose comic sans. This will inflame the Chesapeake Hacker’s sophisticated sensibilities and provoke an attack. This is my graphic design. This is my passion.
888 notes · View notes
Text
Palantir’s NHS-stealing Big Lie
Tumblr media
I'm on tour with my new, nationally bestselling novel The Bezzle! Catch me in TUCSON (Mar 9-10), then SAN FRANCISCO (Mar 13), Anaheim, and more!
Tumblr media
Capitalism's Big Lie in four words: "There is no alternative." Looters use this lie for cover, insisting that they're hard-nosed grownups living in the reality of human nature, incentives, and facts (which don't care about your feelings).
The point of "there is no alternative" is to extinguish the innovative imagination. "There is no alternative" is really "stop trying to think of alternatives, dammit." But there are always alternatives, and the only reason to demand that they be excluded from consideration is that these alternatives are manifestly superior to the looter's supposed inevitability.
Right now, there's an attempt underway to loot the NHS, the UK's single most beloved institution. The NHS has been under sustained assault for decades – budget cuts, overt and stealth privatisation, etc. But one of its crown jewels has been stubbornly resistant to being auctioned off: patient data. Not that HMG hasn't repeatedly tried to flog patient data – it's just that the public won't stand for it:
https://www.theguardian.com/society/2023/nov/21/nhs-data-platform-may-be-undermined-by-lack-of-public-trust-warn-campaigners
Patients – quite reasonably – do not trust the private sector to handle their sensitive medical records.
Now, this presents a real conundrum, because NHS patient data, taken as a whole, holds untold medical insights. The UK is a large and diverse country and those records in aggregate can help researchers understand the efficacy of various medicines and other interventions. Leaving that data inert and unanalysed will cost lives: in the UK, and all over the world.
For years, the stock answer to "how do we do science on NHS records without violating patient privacy?" has been "just anonymise the data." The claim is that if you replace patient names with random numbers, you can release the data to research partners without compromising patient privacy, because no one will be able to turn those numbers back into names.
It would be great if this were true, but it isn't. In theory and in practice, it is surprisingly easy to "re-identify" individuals in anonymous data-sets. To take an obvious example: we know the two dates on which former PM Tony Blair was given a specific treatment for a cardiac emergency, because this happened while he was in office. We also know Blair's date of birth. Check any trove of NHS data for a person who matches those three facts and you've found Tony Blair – and all the private data contained alongside those public facts is now in the public domain, forever.
Not everyone has Tony Blair's reidentification hooks, but everyone has data in some kind of database, and those databases are continually being breached, leaked or intentionally released. A breach from a taxi service like Addison Lee or Uber, or from Transport for London, will reveal the journeys that immediately preceded each prescription at each clinic or hospital in an "anonymous" NHS dataset, which can then be cross-referenced to databases of home addresses and workplaces. In an eyeblink, millions of Britons' records of receiving treatment for STIs or cancer can be connected with named individuals – again, forever.
Re-identification attacks are now considered inevitable; security researchers have made a sport out of seeing how little additional information they need to re-identify individuals in anonymised data-sets. A surprising number of people in any large data-set can be re-identified based on a single characteristic in the data-set.
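The linkage attack described above takes only a few lines to sketch. This is a toy illustration with invented records and dates (not real NHS data, and not Blair's actual treatment dates): anyone who holds the "anonymised" release plus a handful of public facts can simply join the two.

```python
# Toy linkage attack: re-identifying a record in an "anonymised" release by
# matching on quasi-identifiers (date of birth + known treatment dates).
# All records and dates below are invented for illustration.

# "Anonymised" medical release: names replaced with random IDs.
medical = [
    {"id": 7241, "dob": "1953-05-06", "treatment_dates": {"2003-10-19", "2004-10-01"}},
    {"id": 1138, "dob": "1953-05-06", "treatment_dates": {"2010-02-11"}},
    {"id": 9090, "dob": "1971-03-22", "treatment_dates": {"2003-10-19"}},
]

# Public knowledge about one person (e.g. news coverage of a sitting PM).
known = {"dob": "1953-05-06", "treatment_dates": {"2003-10-19", "2004-10-01"}}

# Match on date of birth plus the publicly known treatment dates.
matches = [r for r in medical
           if r["dob"] == known["dob"]
           and known["treatment_dates"] <= r["treatment_dates"]]

assert len(matches) == 1   # unique match: the "anonymous" record is re-identified,
print(matches[0]["id"])    # and every field attached to ID 7241 is now attributable
```

No names were needed: two dates and a birthday were enough to single out one record, which is exactly why "just remove the names" fails.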
Given all this, anonymous NHS data releases should have been ruled out years ago. Instead, NHS records are to be handed over to the US military surveillance company Palantir, a notorious human-rights abuser and supplier to the world's most disgusting authoritarian regimes. Palantir – founded by the far-right Trump bagman Peter Thiel – takes its name from the evil wizard Sauron's all-seeing orb in Lord of the Rings ("Sauron, are we the baddies?"):
https://pluralistic.net/2022/10/01/the-palantir-will-see-you-now/#public-private-partnership
The argument for turning over Britons' most sensitive personal data to an offshore war-crimes company is "there is no alternative." The UK needs the medical insights in those NHS records, and this is the only way to get at them.
As with every instance of "there is no alternative," this turns out to be a lie. What's more, the alternative is vastly superior to this chumocratic sell-out, was Made in Britain, and is the envy of medical researchers the world 'round. That alternative is "trusted research environments." In a new article for the Good Law Project, I describe these nigh-miraculous tools for privacy-preserving, best-of-breed medical research:
https://goodlawproject.org/cory-doctorow-health-data-it-isnt-just-palantir-or-bust/
At the outset of the covid pandemic, Oxford's Ben Goldacre and his colleagues set out to perform realtime analysis of the data flooding into NHS trusts up and down the country, in order to learn more about this new disease. To do so, they created Opensafely, an open-source database that was tied into each NHS trust's own patient record systems:
https://timharford.com/2022/07/how-to-save-more-lives-and-avoid-a-privacy-apocalypse/
Opensafely has its own database query language, built on SQL, but tailored to medical research. Researchers write programs in this language to extract aggregate data from each NHS trust's servers, posing medical questions of the data without ever directly touching it. These programs are published in advance on a git server, and are preflighted on synthetic NHS data on a test server. Once the program is approved, it is sent to the main Opensafely server, which then farms out parts of the query to each NHS trust, packages up the results, and publishes them to a public repository.
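The flow described above can be sketched as a toy federated query. This is not Opensafely's real query language or API – just a minimal illustration, with invented records, of the pattern where the approved program travels to each trust's data and only aggregates ever come back:

```python
# Toy sketch of the federated pattern described above (not Opensafely's real
# API): the approved query runs inside each trust, returns only aggregate
# counts, and the central server just packages the totals. Invented data.

# Each trust holds its own patient records; raw rows never leave the trust.
trusts = {
    "trust_a": [{"age": 71, "on_statins": True},  {"age": 44, "on_statins": False}],
    "trust_b": [{"age": 68, "on_statins": True},  {"age": 80, "on_statins": True}],
}

def run_query_locally(records):
    """The pre-approved 'program': computes an aggregate, returns no raw rows."""
    cohort = [r for r in records if r["age"] >= 65]
    return {"cohort_size": len(cohort),
            "on_statins": sum(r["on_statins"] for r in cohort)}

# The central server farms the query out and publishes only the combined result.
results = {name: run_query_locally(recs) for name, recs in trusts.items()}
total = {key: sum(r[key] for r in results.values())
         for key in ("cohort_size", "on_statins")}
print(total)  # {'cohort_size': 3, 'on_statins': 3}
```

The researcher gets the answer ("how many over-65s are on statins?") without any third party ever holding a single patient record, which is the whole trick.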
This is better than "the best of both worlds." This public scientific process, with peer review and disclosure built in, allows for frequent, complex analysis of NHS data without giving a single third party access to a single patient record, ever. Opensafely was wildly successful: in just months, Opensafely collaborators published sixty blockbuster papers in Nature – science that shaped the world's response to the pandemic.
Opensafely was so successful that the Secretary of State for Health and Social Care commissioned a review of the programme with an eye to expanding it to serve as the nation's default way of conducting research on medical data:
https://www.gov.uk/government/publications/better-broader-safer-using-health-data-for-research-and-analysis/better-broader-safer-using-health-data-for-research-and-analysis
This approach is cheaper, safer, and more effective than handing hundreds of millions of pounds to Palantir and hoping they will manage the impossible: anonymising data well enough that it is never re-identified. Trusted Research Environments have been endorsed by national associations of doctors and researchers as the superior alternative to giving the NHS's data to Peter Thiel or any other sharp operator seeking a public contract.
As a lifelong privacy campaigner, I find this approach nothing short of inspiring. I would love for there to be a way for publishers and researchers to glean privacy-preserving insights from public library checkouts (such a system would prove an important counter to Amazon's proprietary god's-eye view of reading habits); or BBC podcasts or streaming video viewership.
You see, there is an alternative. We don't have to choose between science and privacy, or the public interest and private gain. There's always an alternative – if there wasn't, the other side wouldn't have to continuously repeat the lie that no alternative is possible.
Tumblr media
Name your price for 18 of my DRM-free ebooks and support the Electronic Frontier Foundation with the Humble Cory Doctorow Bundle.
Tumblr media
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2024/03/08/the-fire-of-orodruin/#are-we-the-baddies
Tumblr media
Image: Gage Skidmore (modified) https://commons.m.wikimedia.org/wiki/File:Peter_Thiel_(51876933345).jpg
CC BY-SA 2.0 https://creativecommons.org/licenses/by-sa/2.0/deed.en
521 notes · View notes
nyancrimew · 1 year
Text
im a scientist in the sense that i often fuck around and sometimes find out
2K notes · View notes
awesomecooperlove · 13 days
Text
LIES, LIES, LIES ENDLESS LIES… TIME TO WAKE UP HUMANITY
👩🏼‍🔬🤥👨🏻‍🔬
315 notes · View notes
poorly-drawn-mdzs · 5 months
Text
Tumblr media
+3 friendship with Jin Ling: He actively tries to dissuade you from further embroiling yourself in the homosexual allegations.
[First] Prev <–-> Next
912 notes · View notes
reality-detective · 5 months
Text
Tumblr media
465 notes · View notes
ophilosoraptoro · 1 year
Text
Tumblr media
1K notes · View notes
Text
A new landmark study has found that access to gender-affirming healthcare significantly reduces rates of depression, gender dysphoria, and suicidality among transgender people.
While it’s no secret that providing gender-affirming care to transgender individuals who ask for it can greatly benefit their well-being, an increase in transphobic rhetoric and bans on gender-affirming healthcare has prompted thorough medical studies into the impact of such care.
Now, brand new research conducted in Melbourne, Australia, has found that allowing transgender people to access the care they’re after can reduce suicidality by a stunning 55%.
As part of the first-ever randomized controlled trial (RCT) on gender-affirming care, researchers took 64 transgender and gender-diverse adults who had been looking to start testosterone therapy and randomly split them into a treatment group and a control group.
While the treatment group was allowed to begin hormone therapy that week, the control group waited three months for their treatment to begin.
Before the study began, both groups were evaluated on depression, gender dysphoria, and suicidality. Three months later, the two groups were evaluated again.
RCTs for medical care can often be hard to conduct due to practical and ethical concerns. However, the researchers found a way to run an RCT for this study by incorporating a shorter follow-up period: rather than giving the control group a placebo drug, or no treatment at all, they were simply given a longer wait time.
The results showed a notable decrease in gender dysphoria, depression, and – most significantly – suicidality.
The group that received gender-affirming care right away saw a 55% reduction in suicidality compared to a 5% drop within the control group.
Depression scores in the treatment group decreased by half, while gender dysphoria rates also significantly decreased.
Breaking down their findings, researchers Brendan J. Nolan MBBS, Sav Zwickl, PhD, and Peter Locke wrote: “There was a statistically significant decrease in gender dysphoria in individuals with immediate [access to gender-affirming care] compared with delayed initiation of testosterone therapy.”
“A clinically significant decrease in depression and a decrease in suicidality also occurred with immediate testosterone therapy.”
“The findings of this trial suggest that testosterone therapy significantly decreases gender dysphoria, depression, and suicidality in transgender and gender-diverse individuals desiring testosterone therapy.”
Tumblr media
Of course, this isn’t the first time that research has shown significant drops in depression and suicide rates among transgender individuals who receive gender-affirming care.
A 2022 medical study showed that young transgender people who have access to puberty blockers are 73% less at risk of suicide and report improved well-being.
But, as anti-trans activists advocate for further bans on gender-affirming care, one of the key arguments is that the evidence in support of the care isn’t up to scratch with GRADE (Grades of Recommendation, Assessment, Development, and Evaluation) standards.
That's why research like this landmark RCT is so significant to the transgender community and its allies as the fight for their healthcare rights rumbles on.
451 notes · View notes
storm-of-feathers · 3 months
Text
bpd stands for big penis disorder
137 notes · View notes
cy-fi-theansweris42 · 1 month
Text
Ghostbusters Frozen Empire spoilers ahead
I've seen some criticism over Phoebe's actions in Frozen Empire, more specifically over the point when she separates her spirit from her body, calling it "a dumb decision", that "she's smarter than that", that it's "out of character for her", and just like.
First of all, remember she's 15??? She's a teenager??
But also I want to argue that it does make sense for her character, especially after everything that happened in Frozen Empire leading up to that moment.
Before we talk about Frozen Empire, we have to remember her story arc from Afterlife.
In Afterlife, Phoebe is uprooted from where the family was previously living, and we know she has trouble making friends. Trevor tells her to tell him some of the jokes she's been practicing, and her mom tells her to "not be herself" (jokingly/affectionately, but it's still said). She struggles to connect with people.
Then she finds out her grandfather was a scientist, like her. He was a Ghostbuster, something that she connects to. She compares herself to Egon and is upset when she finds out her mom didn't tell her that her grandfather was a scientist like her. Then, most importantly here, her mom says that Phoebe "found herself" at the farm.
During Afterlife, Phoebe's story is about finding herself and connecting to who her grandfather was, a scientist, a Ghostbuster.
Flashforward to Frozen Empire.
Now they're in New York, and even though it's been 3 years since the events of Afterlife, it doesn't seem like Phoebe has really made any new friends. She's still friends with Podcast, but he lives in Summerville and is only there for the summer. She has her family, but she doesn't have any friends in the city. Her focus seems to be on being a Ghostbuster, because that's where she found herself; that's what she wants to do.
And then it's being taken away from her.
She's suddenly told that she can't be a Ghostbuster because she's too young, and all the while the rest of her family is continuing to go out on busts and leaving her behind. There are things going on, and she's told she can't help, which just drives home the fact that the thing she loves, the thing she wants to do, has been taken away from her.
And then, while upset from this, she makes a friend, Melody. It's someone she can talk to, that she seems to enjoy being around. She doesn't know that Melody has an agenda, that Melody is going to use her (not hating on Melody, that's just literally what happens). To her, Melody is a friend and someone she can talk to that distracts her from how another part of her life has been taken away.
Now I don't know if anything has been confirmed, but Phoebe comes off as neurodivergent (honestly I would not be surprised if she was intentionally written that way). One thing when you're neurodivergent and struggle to make friends is like, when someone acts like they want to be around you, to be your friend, you might not catch any signs that something's up.
So with that in mind, remember that Phoebe had no reason to suspect anything was going on with Melody. Before separating her soul from her body, Phoebe just thought she was finding a way to exist on the same plane as her friend. It was a way to just be with her friend for just a couple of minutes. Remember, it was just for two minutes.
So, during a time when Phoebe had something important to her taken away, a part of her identity taken away, she made a connection with someone she trusted and believed to be her friend. Of course she'd trust her, of course she'd try to essentially hang out with her friend for just a couple minutes.
She didn't know what was going to happen next.
107 notes · View notes
e-kamski-cyberlifeceo · 5 months
Text
Yes you're "following" me, but would you let me perform human testing on you? It's for science, I promise
122 notes · View notes
failed-inspection · 4 months
Text
Tumblr media
"Back home they thought I'd vanished without a trace, but I found space, I found space!"
('I Found Space' by Miracles of Modern Science)
Hii hii, this is my first time ever actually drawing Sliver of Straw, at least as an iterator, alongside my first time ever drawing a void work, had a blast with this! (Also gosh I just really like the krita lighting brush tools!)
Bonus no text version
Tumblr media
75 notes · View notes
Text
The real AI fight
Tumblr media
Tonight (November 27), I'm appearing at the Toronto Metro Reference Library with Facebook whistleblower Frances Haugen.
On November 29, I'm at NYC's Strand Books with my novel The Lost Cause, a solarpunk tale of hope and danger that Rebecca Solnit called "completely delightful."
Tumblr media
Last week's spectacular OpenAI soap-opera hijacked the attention of millions of normal, productive people and nonconsensually crammed them full of the fine details of the debate between "Effective Altruism" (doomers) and "Effective Accelerationism" (AKA e/acc), a genuinely absurd debate that was allegedly at the center of the drama.
Very broadly speaking: the Effective Altruists are doomers, who believe that Large Language Models (AKA "spicy autocomplete") will someday become so advanced that they could wake up and annihilate or enslave the human race. To prevent this, we need to employ "AI Safety" – measures that will turn superintelligence into a servant or a partner, not an adversary.
Contrast this with the Effective Accelerationists, who also believe that LLMs will someday become superintelligences with the potential to annihilate or enslave humanity – but they nevertheless advocate for faster AI development, with fewer "safety" measures, in order to produce an "upward spiral" in the "techno-capital machine."
Once-and-future OpenAI CEO Altman is said to be an accelerationist who was forced out of the company by the Altruists, who were subsequently bested, ousted, and replaced by Larry fucking Summers. This, we're told, is the ideological battle over AI: should we cautiously progress our LLMs into superintelligences with safety in mind, or go full speed ahead and trust to market forces to tame and harness the superintelligences to come?
This "AI debate" is pretty stupid, proceeding as it does from the foregone conclusion that adding compute power and data to the next-word-predictor program will eventually create a conscious being, which will then inevitably become a superbeing. This is a proposition akin to the idea that if we keep breeding faster and faster horses, we'll get a locomotive:
https://locusmag.com/2020/07/cory-doctorow-full-employment/
As Molly White writes, this isn't much of a debate. The "two sides" of this debate are as similar as Tweedledee and Tweedledum. Yes, they're arrayed against each other in battle, so furious with each other that they're tearing their hair out. But for people who don't take any of this mystical nonsense about spontaneous consciousness arising from applied statistics seriously, these two sides are nearly indistinguishable, sharing as they do this extremely weird belief. The fact that they've split into warring factions on its particulars is less important than their unified belief in the certain coming of the paperclip-maximizing apocalypse:
https://newsletter.mollywhite.net/p/effective-obfuscation
White points out that there's another, much more distinct side in this AI debate – as different and distant from Dee and Dum as a Beamish Boy and a Jabberwock. This is the side of AI Ethics – the side that worries about "today’s issues of ghost labor, algorithmic bias, and erosion of the rights of artists and others." As White says, shifting the debate to existential risk from a future, hypothetical superintelligence "is incredibly convenient for the powerful individuals and companies who stand to profit from AI."
After all, both sides plan to make money selling AI tools to corporations, whose track record in deploying algorithmic "decision support" systems and other AI-based automation is pretty poor – like the claims-evaluation engine that Cigna uses to deny insurance claims:
https://www.propublica.org/article/cigna-pxdx-medical-health-insurance-rejection-claims
On a graph that plots the various positions on AI, the two groups of weirdos who disagree about how to create the inevitable superintelligence are effectively standing on the same spot, and the people who worry about the actual way that AI harms actual people right now are about a million miles away from that spot.
There's that old programmer joke, "There are 10 kinds of people, those who understand binary and those who don't." But of course, that joke could just as well be, "There are 10 kinds of people, those who understand ternary, those who understand binary, and those who don't understand either":
https://pluralistic.net/2021/12/11/the-ten-types-of-people/
What's more, the joke could be, "there are 10 kinds of people, those who understand hexadecenary, those who understand pentadecenary, those who understand tetradecenary [und so weiter] those who understand ternary, those who understand binary, and those who don't." That is to say, a "polarized" debate often has people who hold positions so far from the ones everyone is talking about that those belligerents' concerns are basically indistinguishable from one another.
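The joke generalises because the numeral "10" always denotes the base itself, whatever base you read it in – a one-line check:

```python
# "10" in base b is 1*b + 0, i.e. b itself: 0b10 == 2, ternary "10" == 3,
# 0x10 == 16, und so weiter.
for base in range(2, 17):
    assert int("10", base) == base

print([int("10", b) for b in (2, 3, 16)])  # [2, 3, 16]
```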
The act of identifying these distant positions is a radical opening up of possibilities. Take the indigenous philosopher chief Red Jacket's response to the Christian missionaries who sought permission to proselytize to Red Jacket's people:
https://historymatters.gmu.edu/d/5790/
Red Jacket's whole rebuttal is a superb dunk, but it gets especially interesting where he points to the sectarian differences among Christians as evidence against the missionary's claim to having a single true faith, and in favor of the idea that his own people's traditional faith could be co-equal among Christian doctrines.
The split that White identifies isn't a split about whether AI tools can be useful. Plenty of us AI skeptics are happy to stipulate that there are good uses for AI. For example, I'm 100% in favor of the Human Rights Data Analysis Group using an LLM to classify and extract information from the Innocence Project New Orleans' wrongful conviction case files:
https://hrdag.org/tech-notes/large-language-models-IPNO.html
Automating "extracting officer information from documents – specifically, the officer's name and the role the officer played in the wrongful conviction" was a key step to freeing innocent people from prison, and an LLM allowed HRDAG – a tiny, cash-strapped, excellent nonprofit – to make a giant leap forward in a vital project. I'm a donor to HRDAG and you should donate to them too:
https://hrdag.networkforgood.com/
Good data-analysis is key to addressing many of our thorniest, most pressing problems. As Ben Goldacre recounts in his inaugural Oxford lecture, it is both possible and desirable to build ethical, privacy-preserving systems for analyzing the most sensitive personal data (NHS patient records) that yield scores of solid, ground-breaking medical and scientific insights:
https://www.youtube.com/watch?v=_-eaV8SWdjQ
The difference between this kind of work – HRDAG's exoneration work and Goldacre's medical research – and the approach that OpenAI and its competitors take boils down to how they treat humans. The former treats all humans as worthy of respect and consideration. The latter treats humans as instruments – for profit in the short term, and for creating a hypothetical superintelligence in the (very) long term.
As Terry Pratchett's Granny Weatherwax reminds us, this is the root of all sin: "sin is when you treat people like things":
https://brer-powerofbabel.blogspot.com/2009/02/granny-weatherwax-on-sin-favorite.html
So much of the criticism of AI misses this distinction – instead, this criticism starts by accepting the self-serving marketing claim of the "AI safety" crowd – that their software is on the verge of becoming self-aware, and is thus valuable, a good investment, and a good product to purchase. This is Lee Vinsel's "Criti-Hype": "taking press releases from startups and covering them with hellscapes":
https://sts-news.medium.com/youre-doing-it-wrong-notes-on-criticism-and-technology-hype-18b08b4307e5
Criti-hype and AI were made for each other. Emily M Bender is a tireless cataloger of criti-hypeists, like the newspaper reporters who breathlessly repeat "completely unsubstantiated claims (marketing)…sourced to Altman":
https://dair-community.social/@emilymbender/111464030855880383
Bender, like White, is at pains to point out that the real debate isn't doomers vs accelerationists. That's just "billionaires throwing money at the hope of bringing about the speculative fiction stories they grew up reading – and philosophers and others feeling important by dressing these same silly ideas up in fancy words":
https://dair-community.social/@emilymbender/111464024432217299
All of this is just a distraction from real and important scientific questions about how (and whether) to make automation tools that steer clear of Granny Weatherwax's sin of "treating people like things." Bender – a computational linguist – isn't a reactionary who hates automation for its own sake. On Mystery AI Hype Theater 3000 – the excellent podcast she co-hosts with Alex Hanna – there is a machine-generated transcript:
https://www.buzzsprout.com/2126417
There is a serious, meaty debate to be had about the costs and possibilities of different forms of automation. But the superintelligence true-believers and their criti-hyping critics keep dragging us away from these important questions and into fanciful and pointless discussions of whether and how to appease the godlike computers we will create when we disassemble the solar system and turn it into computronium.
The question of machine intelligence isn't intrinsically unserious. As a materialist, I believe that whatever makes me "me" is the result of the physics and chemistry of processes inside and around my body. My disbelief in the existence of a soul means that I'm prepared to think that it might be possible for something made by humans to replicate something like whatever process makes me "me."
Ironically, the AI doomers and accelerationists claim that they, too, are materialists – and that's why they're so consumed with the idea of machine superintelligence. But it's precisely because I'm a materialist that I understand these hypotheticals about self-aware software are less important and less urgent than the material lives of people today.
It's because I'm a materialist that my primary concerns about AI are things like the climate impact of AI data-centers and the human impact of biased, opaque, incompetent and unfit algorithmic systems – not science fiction-inspired, self-induced panics over the human race being enslaved by our robot overlords.
Tumblr media
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2023/11/27/10-types-of-people/#taking-up-a-lot-of-space
Tumblr media
Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en
288 notes · View notes