So, you get on Facebook because your friends are there, but you don’t leave because you can’t take your friends with you.
If there was interoperability, those switching costs would come down. If you could leave Facebook but continue to stay in touch with the communities, the customers, the family members, and the friends that you value there, then Facebook would, first of all, have to work to keep your business.
What were the causes and consequences of the 2023 tech layoffs?
The employment market is always changing: certain occupations become outdated as technology and automation improve, even as new opportunities open up. In today's competitive economy, it is critical to stay adaptable and up to date on the latest developments. While the tech sector still has a long way to go in terms of equity, the present layoffs represent a significant step back. Recruitment teams have become redundant as hiring freezes set in and IT businesses opt for mass layoffs, which also suggests that hiring itself is starting to be mechanised. The layoffs were caused in part by tech companies' plans to invest in artificial intelligence (AI) and automation.
"Major technology companies signed a pact on Friday to voluntarily adopt "reasonable precautions" to prevent artificial intelligence (AI) tools from being used to disrupt democratic elections around the world.
Executives from Adobe, Amazon, Google, IBM, Meta, Microsoft, OpenAI, and TikTok gathered at the Munich Security Conference to announce a new framework for how they respond to AI-generated deepfakes that deliberately trick voters.
Twelve other companies - including Elon Musk's X - are also signing on to the accord...
The accord is largely symbolic, but targets increasingly realistic AI-generated images, audio, and video "that deceptively fake or alter the appearance, voice, or actions of political candidates, election officials, and other key stakeholders in a democratic election, or that provide false information to voters about when, where, and how they can lawfully vote".
The companies aren't committing to ban or remove deepfakes. Instead, the accord outlines methods they will use to try to detect and label deceptive AI content when it is created or distributed on their platforms.
It notes the companies will share best practices and provide "swift and proportionate responses" when that content starts to spread.
Lack of binding requirements
The vagueness of the commitments and lack of any binding requirements likely helped win over a diverse swath of companies, but disappointed advocates who were looking for stronger assurances.
"The language isn't quite as strong as one might have expected," said Rachel Orey, senior associate director of the Elections Project at the Bipartisan Policy Center.
"I think we should give credit where credit is due, and acknowledge that the companies do have a vested interest in their tools not being used to undermine free and fair elections. That said, it is voluntary, and we'll be keeping an eye on whether they follow through." ...
Several political leaders from Europe and the US also joined Friday’s announcement. European Commission Vice President Vera Jourova said while such an agreement can’t be comprehensive, "it contains very impactful and positive elements". ...
[The Accord and Where We're At]
The accord calls on platforms to "pay attention to context and in particular to safeguarding educational, documentary, artistic, satirical, and political expression".
It said the companies will focus on transparency to users about their policies and work to educate the public about how they can avoid falling for AI fakes.
Most companies have previously said they’re putting safeguards on their own generative AI tools that can manipulate images and sound, while also working to identify and label AI-generated content so that social media users know if what they’re seeing is real. But most of those proposed solutions haven't yet rolled out and the companies have faced pressure to do more.
That pressure is heightened in the US, where Congress has yet to pass laws regulating AI in politics, leaving companies to largely govern themselves.
The Federal Communications Commission recently confirmed AI-generated audio clips in robocalls are against the law [in the US], but that doesn't cover audio deepfakes when they circulate on social media or in campaign advertisements.
Many social media companies already have policies in place to deter deceptive posts about electoral processes - AI-generated or not...
[Signatories Include]
In addition to the companies that helped broker Friday's agreement, other signatories include chatbot developers Anthropic and Inflection AI; voice-clone startup ElevenLabs; chip designer Arm Holdings; security companies McAfee and TrendMicro; and Stability AI, known for making the image-generator Stable Diffusion.
Notably absent is another popular AI image-generator, Midjourney. The San Francisco-based startup didn't immediately respond to a request for comment on Friday.
The inclusion of X - not mentioned in an earlier announcement about the pending accord - was one of the surprises of Friday's agreement."
-via EuroNews, February 17, 2024
--
Note: No idea whether this will actually do much of anything (would love to hear from people with experience in this area on how significant this is), but I'll definitely take it. Some of these companies may even mean it! (X/Twitter almost definitely doesn't, though).
Still, like I said, I'll take it. Any significant move toward tech companies self-regulating AI is a good sign, as far as I'm concerned, especially a large-scale and international effort. Even if it's a "mostly symbolic" accord, the scale and prominence of this accord is encouraging, and it sets a precedent for further regulation to build on.
Suuuuuuuuper random (au, mundane)
Magnus, to Alec: "Would you like to go out with me sometime? And before you answer, I realize that I'm in a bit of a power position here, what with just having hired your sister to work for my company, so I would just like to make this clear. Your answer has no bearing whatsoever on the employment and career status of Miss Isabelle. You can absolutely say no. I will not hold your sister as leverage or anything like that. In fact, Raphael would shoot me should I ever succumb to that level."
Izzy, whispering to Raphael: "He is kidding, right?"
Raphael: "No. He had that condition put into my contract."
Izzy: "Seriously?"
Raphael: "You get used to his brand of crazy eventually."
A thing I haven't seen discussed is that as more and more AI gets used for evil, more and more ways to trick it or break it are being (and will be) developed. But simultaneously, AI is also being pushed as an accessibility solution for people with disabilities, often replacing universal design, personal adaptation, or human/animal assistance. Eventually there's going to be a tipping point where the strategies and technology developed to combat AI become so necessarily widespread that they also impact the usability of those accessibility tools. Everyone working to defend against abuses of AI also needs to be working to maintain or create the avenues of access that are being forced into the realm of AI.
The most obvious and recognizable example of this: every time I see a post about someone fucking with a driverless vehicle by drawing something on the road or whatever, I think, "OK, but everybody is also out here telling me that's going to be the wonderful future for me as a blind person who will never be able to drive."
If you're going to develop software to prevent AI understanding your images, you'd also better be finding ways to force people to write image descriptions or text-based alternatives.
Auto-captioning might not always be great but it's better than none at all.
Anyway, all of this to say they've now invented a robot guide dog (because of course they have) and I need to go lie down.