#deep fake ai
tightjeansjavi · 8 months
Text
deepfake AIs are so incredibly FUCKED UP.
If I see anyone spreading that deep fake of Pedro, it’s on sight. I don’t care. This AI shit is getting way too out of hand and if you participate in it? Why??
It’s not some “silly fun little thing”
It’s ruining people’s lives.
It’s invading someone’s privacy.
It’s stealing art from creators.
Please don’t partake in it.
62 notes · View notes
kp777 · 1 year
Text
By Gene Marks
The Guardian Opinions
April 9, 2023
Everyone seems to be worried about the potential impact of artificial intelligence (AI) these days. Even technology leaders including Elon Musk and the Apple co-founder Steve Wozniak have signed a public petition urging OpenAI, the makers of the conversational chatbot ChatGPT, to suspend development for six months so it can be “rigorously audited and overseen by independent outside experts”.
Their concerns about the impact AI may have on humanity in the future are justified – we are talking some serious Terminator stuff, without a Schwarzenegger to save us. But that’s the future. Unfortunately, there’s AI being used right now that is already starting to have a big impact on – and even financially destroy – businesses and individuals. So much so that the US Federal Trade Commission (FTC) felt the need to issue a warning about an AI scam which, according to this NPR report, “sounds like a plot from a science fiction story”.
But this is not science fiction. Using deepfake AI technology, scammers last year stole approximately $11m from unsuspecting consumers by fabricating the voices of loved ones, doctors and attorneys requesting money from their relatives and friends.
“All [the scammer] needs is a short audio clip of your family member’s voice – which he could get from content posted online – and a voice-cloning program,” the FTC says. “When the scammer calls you, he’ll sound just like your loved one.”
And these incidents aren’t limited to just consumers. Businesses of all sizes are quickly falling victim to this new type of fraud.
That’s what happened to a bank manager in Hong Kong, who received deepfaked calls, purportedly from a bank director, requesting a transfer. The calls were so convincing that he eventually transferred $35m, and never saw the money again. A similar incident occurred at a UK-based energy firm, where an unwitting employee transferred approximately $250,000 to criminals after being deepfaked into believing the request came from the CEO of the firm’s parent company. The FBI is now warning businesses that criminals are using deepfakes to create “employees” online for remote-work positions in order to gain access to corporate information.
Deepfake video technology has been growing in use over the past few years, mostly targeting celebrities and politicians like Mark Zuckerberg, Tom Cruise, Barack Obama and Donald Trump. And I’m sure that this election year will be filled with a growing number of very real-looking fake videos that will attempt to influence voters.
But it’s the potential impact on the many unsuspecting small business owners I know that worries me the most. Many of us have appeared in publicly accessible videos, be it on YouTube, Facebook or LinkedIn. But even those who haven’t appeared in videos can have their voices “stolen” by fraudsters copying outgoing voicemail messages, or even by making pretend calls to engage a target in conversation with the sole objective of recording their voice.
This is worse than malware or ransomware. If used effectively it can turn into significant, immediate losses. So what do you do? You implement controls. And you enforce them.
This means that no financial manager in your business should be allowed to undertake a financial transaction – such as a transfer of cash – based on an incoming phone call alone. Every such request requires a call back to verify the source, even when it appears to come from the CEO of the company.
And just as importantly, no transaction over a certain predetermined amount should be authorized without the prior written approval of multiple executives in the company. Of course, there must also be written documentation – a signed request or contract – that underlies the transaction request.
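To make the idea concrete, here is a minimal sketch of what such a check could look like in code. It is purely illustrative – the function names, the $10,000 threshold and the two-approver rule are assumptions for the example, not figures from the article.

```python
from dataclasses import dataclass, field

APPROVAL_THRESHOLD = 10_000  # hypothetical limit; above this, extra sign-off is required
REQUIRED_APPROVERS = 2       # hypothetical rule: written approval from at least two executives

@dataclass
class TransferRequest:
    amount: float
    requested_by: str
    callback_verified: bool = False      # caller identity confirmed via a call back
    written_documentation: bool = False  # signed request or contract on file
    executive_approvals: list[str] = field(default_factory=list)

def may_execute(req: TransferRequest) -> bool:
    """Return True only if the transfer satisfies every control."""
    if not req.callback_verified:
        return False  # never act on an incoming call alone, even one "from the CEO"
    if not req.written_documentation:
        return False  # every transfer needs underlying written documentation
    if req.amount > APPROVAL_THRESHOLD and len(set(req.executive_approvals)) < REQUIRED_APPROVERS:
        return False  # large transfers need sign-off from multiple executives
    return True

# Example: a large transfer backed by only one executive approval is rejected.
request = TransferRequest(amount=35_000, requested_by="finance_manager",
                          callback_verified=True, written_documentation=True,
                          executive_approvals=["cfo"])
print(may_execute(request))  # False
```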
These types of controls are easier to implement in a larger company that has more structure. But accountants at smaller businesses often find themselves victims of management override, which can best be summed up as “I don’t care what the rules are, this is my business, so transfer the cash now, dammit!” If you’re a business owner reading this, then please: establish rules and follow them. It’s for your own good.
So, yes, AI technology like ChatGPT presents some terrifying future risks for humanity. But that’s the future. Deepfake technology that imitates executives and spoofs employees is here right now and will only increase in frequency.
128 notes · View notes
rejectingrepublicans · 2 months
Text
#Republicans Lie About Everything
77 notes · View notes
gwiji23 · 2 months
Text
29 notes · View notes
odinsblog · 6 months
Text
Considering how Twitter, YouTube and Facebook have terminated most of the departments responsible for limiting the spread of dis/misinformation, it’s going to be an exceptionally bad election season in 2024 – even worse than the 2016 election. The example here is intentionally glaring in the differences between Biden’s fake and real words, but far more subtle deepfakes are coming (or are already here). Even if you think that YOU can always spot the difference, others may not be able to, and do not underestimate your own ability to be fooled by something that you tacitly support or subconsciously agree with. And that’s without even getting into troll farms. So please be wary of the media you consume.
Please help spread awareness and share.
38 notes · View notes
prisoner89 · 2 years
Text
Pulling me deeper, with hooks in my back.
427 notes · View notes
stra-tek · 3 days
Text
Welcome to AI bizarro world, with trailers for 1950s versions of...
Deep Space 9
Star Trek: Voyager
The Next Generation
Wrath of Khan
8 notes · View notes
royalteachitchat · 2 months
Text
Deep Fake Technology in Action. This is how they do it...
16 notes · View notes
recursive360 · 3 months
Text
NOTE: NBC needs to green light ASAP a 2 hour LAW & ORDER: SVU w/ special guest star: Taylor Swift. Everyone would watch it. It would be topical and shine more public awareness on this issue. Plus, the ratings would be HUGE!!!
11 notes · View notes
simplegenius042 · 2 months
Text
youtube
Surprised I missed a MatPat video spreading awareness of how corporations use propaganda to get people to support AI art and writing (among other things).
Keep your wits about you, everyone. I’ll also be looking into any news regarding WGA and SAG-AFTRA on the voice acting and gaming side of things.
9 notes · View notes
jonostroveart · 2 months
Text
Deep Truth: Trump is now claiming that the circulating videos showing him making verbal glitches and other random babblings are all AI-generated deepfakes. Is there anything that the lying SOB won’t brazenly lie about? (sigh!)
5 notes · View notes
kp777 · 5 months
Text
By Olivia Rosane
Common Dreams
Dec. 26, 2023
"If people don't ultimately trust information related to an election, democracy just stops working," said a senior fellow at the Alliance for Securing Democracy.
As 2024 approaches and with it the next U.S. presidential election, experts and advocates are warning about the impact that the spread of artificial intelligence technology will have on the amount and sophistication of misinformation directed at voters.
While falsehoods and conspiracy theories have circulated ahead of previous elections, 2024 marks the first time that it will be easy for anyone to access AI technology that could create a believable deepfake video, photo, or audio clip in seconds, The Associated Press reported Tuesday.
“I expect a tsunami of misinformation,” Oren Etzioni, an AI expert and University of Washington professor emeritus, told the AP. “I can’t prove that. I hope to be proven wrong. But the ingredients are there, and I am completely terrified.”
"If a misinformation or disinformation campaign is effective enough that a large enough percentage of the American population does not believe that the results reflect what actually happened, then Jan. 6 will probably look like a warm-up act."
Subject matter experts told the AP that three factors made the 2024 election an especially perilous time for the rise of misinformation. The first is the availability of the technology itself. Deepfakes have already been used in elections. The Republican primary campaign of Florida Gov. Ron DeSantis circulated images of former president Donald Trump hugging former White House Coronavirus Task Force chief Anthony Fauci as part of an ad in June, for example.
"You could see a political candidate like President [Joe] Biden being rushed to a hospital," Etzioni told the AP. "You could see a candidate saying things that he or she never actually said."
The second factor is that social media companies have reduced the number of policies designed to control the spread of false posts and the number of employees devoted to monitoring them. When billionaire Elon Musk acquired Twitter in October of 2022, he fired nearly half of the platform's workforce, including employees who worked to control misinformation.
Yet while Musk has faced significant criticism and scrutiny for his leadership, Jesse Lehrich, co-founder of Accountable Tech, told the AP that other platforms appear to have used his actions as an excuse to be less vigilant themselves. A report published by Free Press in December found that Twitter (now X), Meta, and YouTube rolled back 17 policies between November 2022 and November 2023 that targeted hate speech and disinformation. For example, X and YouTube retired policies around the spread of misinformation concerning the 2020 presidential election and the lie that Trump in fact won, and X and Meta relaxed policies aimed at stopping Covid-19-related falsehoods.
"We found that in 2023, the largest social media companies have deprioritized content moderation and other user trust and safety protections, including rolling back platform policies that had reduced the presence of hate, harassment, and lies on their networks," Free Press said, calling the rollbacks "a dangerous backslide."
Finally, Trump, who has been a big proponent of the lie that he won the 2020 presidential election against Biden, is running again in 2024. Since 57% of Republicans now believe his claim that Biden did not win the last election, experts are worried about what could happen if large numbers of people accept similar lies in 2024.
"If people don't ultimately trust information related to an election, democracy just stops working," Bret Schafer, a senior fellow at the nonpartisan Alliance for Securing Democracy, told the AP. "If a misinformation or disinformation campaign is effective enough that a large enough percentage of the American population does not believe that the results reflect what actually happened, then Jan. 6 will probably look like a warm-up act."
The warnings build on the alarm sounded by watchdog groups like Public Citizen, which has been advocating for a ban on the use of deepfakes in elections. The group has petitioned the Federal Election Commission to establish a new rule governing AI-generated content, and has called on the body to acknowledge that the use of deepfakes is already illegal under a rule banning "fraudulent misrepresentation."
"Specifically, by falsely putting words into another candidate's mouth, or showing the candidate taking action they did not, the deceptive deepfaker fraudulently speaks or act[s] 'for' that candidate in a way deliberately intended to damage him or her. This is precisely what the statute aims to proscribe," Public Citizen said.
The group has also asked the Republican and Democratic parties and their candidates to promise not to use deepfakes to mislead voters in 2024.
In November, Public Citizen announced a new tool tracking state-level legislation to control deepfakes. To date, laws have been enacted in California, Michigan, Minnesota, Texas, and Washington.
"Without new legislation and regulation, deepfakes are likely to further confuse voters and undermine confidence in elections," Ilana Beller, democracy campaign field manager for Public Citizen, said when the tracker was announced. "Deepfake video could be released days or hours before an election with no time to debunk it—misleading voters and altering the outcome of the election."
Our work is licensed under Creative Commons (CC BY-NC-ND 3.0). Feel free to republish and share widely.
10 notes · View notes
Text
now and then is pure camp
11 notes · View notes
saltofmercury · 8 months
Text
really fucked up but someone used AI/edited Pedro’s face on some guy’s only fans video and they’re literally saving and sharing it im just so disgusted like ????
10 notes · View notes
Link
Pro-Trump conservatives are making a push to win over Black voters for the former president with images generated by artificial intelligence.
Conservatives have published dozens of false AI-generated images, also known as “deepfakes,” designed to deceive the public, featuring Black voters wearing pro-Trump paraphernalia and standing around him, according to the BBC (British Broadcasting Corporation).
3 notes · View notes