#eu ai act
aquitainequeen · 1 year
Text
The first legislation in the world dedicated to regulating AI could become a blueprint for others to follow. Here’s what to expect from the EU’s AI Act. The word ‘risk’ is often seen in the same sentence as ‘artificial intelligence’ these days. While it is encouraging to see world leaders consider the potential problems of AI, along with its industrial and strategic benefits, we should remember that not all risks are equal. On June 14, the European Parliament voted to approve its own draft proposal for the AI Act, a piece of legislation two years in the making, with the ambition of shaping global standards in the regulation of AI.
Read more from Nello Cristianini!
50 notes
jcmarchi · 2 months
Text
Tech policy is only frustrating 90% of the time
New Post has been published on https://thedigitalinsider.com/tech-policy-is-only-frustrating-90-of-the-time/
Many technologists stay far away from public policy. That’s understandable. In our experience, most of the time when we engage with policymakers there is no discernible impact. But when we do make a difference to public policy, the impact is much bigger than what we can accomplish through academic work. So we find it fruitful to engage even if it feels frustrating on a day-to-day basis.
In this post, we summarize some common reasons why many people are cynical about tech policy and explain why we’re cautiously optimistic. We also announce some recent writings on tech policy as well as an upcoming event for policymakers in Washington D.C., called AI Policy Precepts.
Some people want more tech regulation and others want less. But both sides seem to mostly agree that policymakers are bad at regulating tech: because they don’t have tech expertise; or because tech moves too rapidly for law to keep up; or because policymakers are bad at anticipating the effects of regulation. 
While these claims have a kernel of truth, they aren’t reasons for defeatism. It’s true that most politicians don’t have deep technical knowledge. But their job is not to be subject matter experts. The details of legislation are delegated to staffers, many of whom are experts on the subject. Moreover, much of tech policy is handled by agencies such as the Federal Trade Commission (FTC), which do have tech experts on their staff. There aren’t enough, but that’s being addressed in many ways. Finally, while federal legislators and agencies get the most press, a lot happens on the state and local levels.
Besides, policy does not have to move at the speed of tech. Policy is concerned with technology’s effect on people, not the technology itself. And policy has longstanding approaches to protecting humans that can be adapted to address new challenges from tech. For example, the FTC has taken action in response to deceptive claims made by AI companies under its existing authority. Similarly, the answer to AI-enabled discrimination is the enforcement of long-established anti-discrimination law. Of course, there are some areas where technology poses new threats, and that might require changes to laws, but that’s relatively rare.
In short, there is nothing exceptional about tech policy that makes it harder than any other type of policy requiring deep expertise. If we can do health policy or nuclear policy, we can do tech policy. Of course, there are many reasons why all public policy is slow and painstaking, such as partisan gridlock, or the bias towards inaction built into the structure of the government due to checks and balances. But none of these factors are specific to tech policy.
To be clear, we are not saying that all regulations or policies are useful—far from it. In past essays, we have argued against specific proposals for regulating AI. And there’s a lot that can be accomplished without new legislation. The October 2023 Executive Order by the Biden administration tasked over 50 agencies with 150 actions, showing the scope of existing executive authority.
We work at Princeton’s Center for Information Technology Policy. CITP is home to interdisciplinary researchers who look at tech policy from different perspectives. We have also begun working closely with the D.C. office of Princeton’s School of Public and International Affairs. Recently, we have been involved in a few collaborations on informing tech policy:
Foundation model transparency reports: In a Stanford-MIT-Princeton collaboration, we propose a structured way for AI companies to release key information about their foundation models. We draw inspiration from transparency reporting in social media, financial reporting, and FDA’s adverse event reporting. We use the set of 100 indicators developed in the 2023 Foundation Model Transparency Index.
We analyze how the 100 indicators align with six existing proposals on AI: Canada’s Voluntary Code of Conduct for generative AI, the EU AI Act, the G7 Hiroshima Process Code of Conduct for AI, the U.S. Executive Order on AI, the U.S. Foundation Model Transparency Act, and the U.S. White House voluntary AI commitments. 43 of the 100 indicators in our proposal are required by at least one proposal, with the EU AI Act requiring 30 of the 100 proposed indicators. 
We also found that transparency requirements in government policies can lack specificity: they do not detail how precisely developers should report quantitative information, establish standards for reporting evaluations, or account for differences across modalities. We provide an example of what Foundation Model Transparency Reports could look like to help sharpen what information AI developers must provide. Read the paper here. 
New Jersey Assembly hearing on deepfakes: Last month, Sayash testified before the New Jersey Assembly on reducing harm from deepfakes. We were asked to provide our opinion on four bills creating penalties and mitigations for non-consensual deepfakes. The hearing included testimonies from four experts in intellectual property, tech policy, civil rights, and constitutional law. 
We advocated for collecting better evidence on the impact of AI-generated deepfakes, content provenance standards to help prove that a piece of media is human-created (as opposed to watermarking to prove it is AI-generated), and bolstering defenses on downstream surfaces such as social media. We also cautioned against relying too much on the non-proliferation of powerful AI as a solution—as we’ve argued before, it is likely to be infeasible and ineffective. Read the written testimony here.
Open models and open research: We submitted a response to the National Telecommunications and Information Administration on its request for comments on openness in AI, in collaboration with various academic and civil society members. Our response built on our paper and policy brief analyzing the societal impact of open foundation models. We were happy to see this paper being cited in responses by several industry and civil society organizations, including the Center for Democracy and Technology, Mozilla, Meta, and Stability AI. Read our response here.
We also contributed to a comment to the copyright office in support of a safe harbor exemption for generative AI research, based on our paper and open letter (signed by over 350 academics, researchers, and civil society members). Read our comment here.
AI safety and existential risk. We’ve analyzed several aspects of AI safety in our recent writing: the impact of openness, the need for safe harbors, and the pitfalls of model alignment. Another major topic of policy debate is on the existential risks posed by AI. We’ve been researching this question for the past year and plan to start writing about it in the next few weeks.
AI policy precepts. CITP has launched a non-partisan program to explore the core concepts, opportunities, and risks underlying AI that will shape federal policymaking for the next ten years. The sessions will be facilitated by Arvind alongside CITP colleagues Matthew Salganik and Mihir Kshirsagar. The size is limited to about 18 participants, with policymakers drawn from Congressional offices and federal agencies. We will explore predictive and generative AI, moving beyond familiar talking points and examining real-world case studies. Participants will come away with frameworks to address future challenges, as well as the opportunity to establish relationships with a cohort of policymakers. See here for more information and here to nominate yourself or a colleague. The deadline for nomination is this Friday, April 5.
We thank Mihir Kshirsagar for feedback on a draft of this post.
0 notes
aitalksblog · 3 months
Text
EU Artificial Intelligence Act: An Overview
(Images made by author with Microsoft Copilot) On March 13, 2024, the European Parliament took a historic step by approving the Artificial Intelligence Act (AI Act). This landmark legislation marks the world’s first comprehensive regulatory framework for AI, aiming to ensure responsible development and use of this powerful technology within the European Union (EU). This blog post provides an…
0 notes
mgeist · 4 months
Text
The Law Bytes Podcast, Episode 191: Luca Bertuzzi on the Making of the EU Artificial Intelligence Act
European countries reached agreement late last week on a landmark legislative package to regulate artificial intelligence. AI regulation has emerged as a key issue over the past year as the explosive growth of ChatGPT and other generative AI services has sparked legislation, lawsuits and national consultations. The EU AI Act is heralded as the first of its kind and as a model for Canadian AI…
0 notes
dye-it-rouge-et-noir · 3 months
Text
Hello! I come with some good news! In the EU, a law has been passed restricting generative AIs (particularly ones that pose a high risk). I hope other areas in the world follow suit, but this is huge! It'll take a while to be applied as well, though this is still a major win for everyone.
26K notes
michellesanches · 2 months
Text
Latest AI Regulatory Developments:
As artificial intelligence (AI) continues to transform industries, governments worldwide are responding with evolving regulatory frameworks. These regulatory advancements are shaping how businesses integrate and leverage AI technologies. Understanding these changes and preparing for them is crucial to remain compliant and competitive. Recent Developments in AI Regulation: United Kingdom: The…
1 note
judas-isariot · 3 months
Text
HOLY SMOKE, guys, one of my reports about AI was selected to be presented at the April EU meeting about AI.
This meeting will determine whether we enforce rules about AI and make more laws to restrict its use (some of them already exist in France and Germany). This can go both ways (wasted time, or huge).
Anyways, the one sentence of my report that was kept says (translated):
"AI-generated images should be considered worthless, as only what is made by human hands should be sold."
Guys, I spent almost a year having my report, PDFs and emails ignored. But in the end it wasn't in vain; the AI plagiarism prevention team has done it!
0 notes
businessfotos · 6 months
Text
Common Sense vs. Artificial Intelligence – 1:1
The game has only just begun and still holds some exciting twists in store, but for now the EU has managed to equalize. With the "AI Act" there is now a draft law that is set to become European law and that, for the first time, lays down rules for the development and use of AI. Where the tech firms were previously playing unsupervised in the sandbox, …
0 notes
t-jfh · 11 months
Text
The world is scrambling to take back control of a risky technology.
The EU Artificial Intelligence Act
1 note
dduane · 3 months
Text
Sorry, the original EuroNews link seems to have gone away. Here's a CNN one instead.
...The EU AI Act outlaws social scoring systems powered by AI and any biometric-based tools used to guess a person’s race, political leanings or sexual orientation. It bans the use of AI to interpret the emotions of people in schools and workplaces, as well as some types of automated profiling intended to predict a person’s likelihood of committing future crimes.

Meanwhile, the law outlines a separate category of “high-risk” uses of AI, particularly for education, hiring and access to government services, and imposes a separate set of transparency and other obligations on them. Companies such as OpenAI that produce powerful, complex and widely used AI models will also be subject to new disclosure requirements under the law.

It also requires all AI-generated deepfakes to be clearly labeled, targeting concerns about manipulated media that could lead to disinformation and election meddling.
5K notes
Text
Apple to EU: “Go fuck yourself”
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2024/02/06/spoil-the-bunch/#dma
There's a strain of anti-anti-monopolist that insists that they're not pro-monopoly – they're just realists who understand that global gigacorporations are too big to fail, too big to jail, and that governments can't hope to rein them in. Trying to regulate a tech giant, they say, is like trying to regulate the weather.
This ploy is cousins with Jay Rosen's idea of "savvying," defined as: "dismissing valid questions with the insider's, 'and this surprises you?'"
https://twitter.com/jayrosen_nyu/status/344825874362810369?lang=en
In both cases, an apologist for corruption masquerades as a pragmatist who understands the ways of the world, unlike you, a pathetic dreamer who foolishly hopes for a better world. In both cases, the apologist provides cover for corruption, painting it as an inevitability, not a choice. "Don't hate the player. Hate the game."
The reason this foolish nonsense flies is that we are living in an age of rampant corruption and utter impunity. Companies really do get away with both literal and figurative murder. Governments really do ignore horrible crimes by the rich and powerful, and fumble what rare, few enforcement efforts they assay.
Take the GDPR, Europe's landmark privacy law. The GDPR establishes strict limitations of data-collection and processing, and provides for brutal penalties for companies that violate its rules. The immediate impact of the GDPR was a mass-extinction event for Europe's data-brokerages and surveillance advertising companies, all of which were in obvious violation of the GDPR's rules.
But there was a curious pattern to GDPR enforcement: while smaller, EU-based companies were swiftly shuttered by its provisions, the US-based giants that conduct the most brazen, wide-ranging, illegal surveillance escaped unscathed for years and years, continuing to spy on Europeans.
One (erroneous) way to look at this is as a "compliance moat" story. In that story, GDPR requires a bunch of expensive systems that only gigantic companies like Facebook and Google can afford. These compliance costs are a "capital moat" – a way to exclude smaller companies from functioning in the market. Thus, the GDPR acted as an anticompetitive wrecking ball, clearing the field for the largest companies, who get to operate without having to contend with smaller companies nipping at their heels:
https://www.techdirt.com/2019/06/27/another-report-shows-gdpr-benefited-google-facebook-hurt-everyone-else/
This is wrong.
Oh, compliance moats are definitely real – think of the calls for AI companies to license their training data. AI companies can easily do this – they'll just buy training data from giant media companies – the very same companies that hope to use models to replace creative workers with algorithms. Create a new copyright over training data won't eliminate AI – it'll just confine AI to the largest, best capitalized companies, who will gladly provide tools to corporations hoping to fire their workforces:
https://pluralistic.net/2023/02/09/ai-monkeys-paw/#bullied-schoolkids
But just because some regulations can be compliance moats, that doesn't mean that all regulations are compliance moats. And just because some regulations are vigorously applied to small companies while leaving larger firms unscathed, it doesn't follow that the regulation in question is a compliance moat.
A harder look at what happened with the GDPR reveals a completely different dynamic at work. The reason the GDPR vaporized small surveillance companies and left the big companies untouched had nothing to do with compliance costs. The Big Tech companies don't comply with the GDPR – they just get away with violating the GDPR.
How do they get away with it? They fly Irish flags of convenience. Decades ago, Ireland started dabbling with offering tax-havens to the wealthy and mobile – they invented the duty-free store:
https://en.wikipedia.org/wiki/Duty-free_shop#1947%E2%80%931990:_duty_free_establishment
Capturing pennies from the wealthy by helping them avoid fortunes they owed in taxes elsewhere was terribly seductive. In the years that followed, Ireland began aggressively courting the wealthy on an industrial scale, offering corporations the chance to duck their obligations to their host countries by flying an Irish flag of convenience.
There are other countries who've tried this gambit – the "treasure islands" of the Caribbean, the English channel, and elsewhere – but Ireland is part of the EU. In the global competition to help the rich to get richer, Ireland had a killer advantage: access to the EU, the common market, and 500m affluent potential customers. The Caymans can hide your money for you, and there's a few super-luxe stores and art-galleries in George Town where you can spend it, but it's no Champs Elysees or Ku-Damm.
But when you're competing with other countries for the pennies of trillion-dollar tax-dodgers, any wins can be turned into a loss in an instant. After all, any corporation that is footloose enough to establish a Potemkin Headquarters in Dublin and fly the trídhathach can easily up sticks and open another Big Store HQ in some other haven that offers it a sweeter deal.
This has created a global race to the bottom among tax-havens to also serve as regulatory havens – and there's a made-in-the-EU version that sees Ireland, Malta, Cyprus and sometimes the Netherlands competing to see who can offer the most impunity for the worst crimes to the most awful corporations in the world.
And that's why Google and Facebook haven't been extinguished by the GDPR while their rivals were. It's not compliance moats – it's impunity. Once a corporation attains a certain scale, it has the excess capital to spend on phony relocations that let it hop from jurisdiction to jurisdiction, chasing the loosest slots on the strip. Ireland is a made town, where the cops are all on the take, and two thirds of the data commissioner's rulings are eventually overturned by the federal court:
https://www.iccl.ie/digital-data/iccl-2023-gdpr-report/
This is a problem among many federations, not just the EU. The US has its onshore-offshore tax- and regulation-havens (Delaware, South Dakota, Texas, etc), and so does Canada (Alberta), and some Swiss cantons are, frankly, batshit:
https://lenews.ch/2017/11/25/swiss-fact-some-swiss-women-had-to-wait-until-1991-to-vote/
None of this is to condemn federations outright. Federations are (potentially) good! But federalism has a vulnerability: the autonomy of the federated states means that they can be played against each other by national or transnational entities, like corporations. This doesn't mean that it's impossible to regulate powerful entities within a federation – but it means that federal regulation needs to account for the risk of jurisdiction-shopping.
Enter the Digital Markets Act, a new Big Tech specific law that, among other things, bans monopoly app stores and payment processing, through which companies like Apple and Google have levied a 30% tax on the entire app market, while arrogating to themselves the right to decide which software their customers may run on their own devices:
https://pluralistic.net/2023/06/07/curatorial-vig/#app-tax
Apple has responded to this regulation with a gesture of contempt so naked and broad that it beggars belief. As Proton describes, Apple's DMA plan is the very definition of malicious compliance:
https://proton.me/blog/apple-dma-compliance-plan-trap
Recall that the DMA is intended to curtail monopoly software distribution through app stores and mobile platforms' insistence on using their payment processors, whose fees are sky-high. The law is intended to extinguish developer agreements that ban software creators from informing customers that they can get a better deal by initiating payments elsewhere, or by getting a service through the web instead of via an app.
In response, Apple has instituted a junk fee it calls the "Core Technology Fee": EUR 0.50/install for every installation over 1m. As Proton writes, as apps grow more popular, using third-party payment systems will grow less attractive. Apple has offered discounts on its eye-watering payment processing fees: a mere 20% for the first payment and 13% for renewals. Compare this with the normal – and far, far too high – payment processing fees the rest of the industry charges, which run 2-5%. On top of all this, Apple has lied about these new discounted rates, hiding a 3% "processing" fee in its headline figures.
As Proton explains, paying 17% fees and EUR0.50 for each subscriber's renewal makes most software businesses into money-losers. The only way to keep them afloat is to use Apple's old, default payment system. That choice is made more attractive by Apple's inclusion of a "scare screen" that warns you that demons will rend your soul for all eternity if you try to use an alternative payment scheme.
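To make that arithmetic concrete, here is a rough back-of-envelope sketch in Python. The 13% + 3% fee rates and the EUR 0.50 Core Technology Fee come from Proton's analysis above; the subscription price and the developer's pre-fee margin are made-up illustrative assumptions, not real figures:

```python
# Back-of-envelope sketch of the fee arithmetic described above. The fee
# rates and the EUR 0.50 Core Technology Fee are from Proton's analysis;
# the price and margin are illustrative assumptions.

price = 2.99    # monthly subscription price in EUR (assumed)
margin = 0.25   # developer's profit margin before platform fees (assumed)

old_take = price * 0.30                  # old default: 30% commission
new_take = price * (0.13 + 0.03) + 0.50  # 13% renewal + 3% processing + CTF

print(f"developer margin before fees: EUR {price * margin:.2f}")  # ~0.75
print(f"old default Apple take:       EUR {old_take:.2f}")        # ~0.90
print(f"'discounted' DMA-terms take:  EUR {new_take:.2f}")        # ~0.98
# Under these assumptions the 'discounted' terms cost more than the old 30%
# cut, which is how a low-priced subscription app becomes a money-loser.
```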
Apple defends this scare screen by saying that it will protect users from the intrinsic unreliability of third-party processors, but as Proton points out, there are plenty of giant corporations who get to use their own payment processors with their iOS apps, because Apple decided they were too big to fuck with. Somehow, Apple can let its customers spend money with Uber, McDonald's, Airbnb, Doordash and Amazon without terrorizing them about existential security risks – but not with mom-and-pop software vendors or publishers who don't want to hand 30% of their income over to a three-trillion-dollar company.
Apple has also reserved the right to cancel any alternative app store and nuke it from Apple customers' devices without warning, reason or liability. Those app stores also have to post a one-million euro line of credit in order to be considered for iOS. Given these terms, it's obvious that no one is going to offer a third-party app store for iOS and if they did, no one would list their apps in it.
The fuckery goes on and on. If an app developer opts into third-party payments, they can't use Apple's payment processing too – so any users who are scared off by the scare screen have no way to pay the app's creators. And once an app creator opts into third party payments, they can never go back – the decision is permanent.
Apple also reserves the right to change all of these policies later, for the worse ("I am altering the deal. Pray I don't alter it further" -D. Vader). They have warned developers that they might change the API for reporting external sales and revoke developers' right to use alternative app stores at its discretion, with no penalties if that screws the developer.
Apple's contempt extends beyond app marketplaces. The DMA also obliges Apple to open its platform to third party browsers and browser engines. Every browser on iOS is actually just Safari wrapped in a cosmetic skin, because Apple bans third-party browser-engines:
https://pluralistic.net/2022/12/13/kitbashed/#app-store-tax
But, as Mozilla puts it, Apple's plan for this is "as painful as possible":
https://www.theverge.com/2024/1/26/24052067/mozilla-apple-ios-browser-rules-firefox
For one thing, Apple will only allow European customers to run alternative browser engines. That means that Firefox will have to "build and maintain two separate browser implementations — a burden Apple themselves will not have to bear."
(One wonders how Apple will treat Americans living in the EU, whose Apple accounts still have US billing addresses – these people will still be entitled to the browser choice that Apple is grudgingly extending to Europeans.)
All of this sends a strong signal that Apple is planning to run the same playbook with the DMA that Google and Facebook used on the GDPR: ignore the law, use lawyerly bullshit to chaff regulators, and hope that European federalism has sufficiently deep cracks that it can hide in them when the enforcers come to call.
But Apple is about to get a nasty shock. For one thing, the DMA allows wronged parties to start their search for justice in the European federal court system – bypassing the Irish regulators and courts. For another, there is a global movement to check corporate power, and because the tech companies do the same kinds of fuckery in every territory, regulators are able to collaborate across borders to take them down.
Take Apple's app store monopoly. The best reference on this is the report published by the UK Competition and Markets Authority's Digital Markets Unit:
https://assets.publishing.service.gov.uk/media/63f61bc0d3bf7f62e8c34a02/Mobile_Ecosystems_Final_Report_amended_2.pdf
The devastating case made by the DMU report was key to crafting the DMA – but it also inspired a US law aimed at forcing app markets open:
https://www.congress.gov/bill/117th-congress/senate-bill/2710
And a Japanese enforcement action:
https://asia.nikkei.com/Business/Technology/Japan-to-crack-down-on-Apple-and-Google-app-store-monopolies
And action in South Korea:
https://www.reuters.com/technology/skorea-considers-505-mln-fine-against-google-apple-over-app-market-practices-2023-10-06/
These enforcers gather for annual meetings – I spoke at one in London, convened by the Competition and Markets Authority – where they compare notes, form coalitions, and plan strategy:
https://www.eventbrite.co.uk/e/cma-data-technology-and-analytics-conference-2022-registration-308678625077
This is where the savvying breaks down. Yes, Apple is big enough to run circles around Japan, or South Korea, or the UK. But when those countries join forces with the EU, the USA and other countries that are fed up to the eyeballs with Apple's bullshit, the company is in serious danger.
It's true that Apple has convinced a bunch of its customers that buying a phone from a multi-trillion-dollar corporation makes you a member of an oppressed religious minority:
https://pluralistic.net/2024/01/12/youre-holding-it-wrong/#if-dishwashers-were-iphones
Some of those self-avowed members of the "Cult of Mac" are willing to take the company's pronouncements at face value and will dutifully repeat Apple's claims to be "protecting" its customers. But even that credulity has its breaking point – Apple can only poison the well so many times before people stop drinking from it. Remember when the company announced a miraculous reversal to its war on right to repair, later revealed to be a bald-faced lie?
https://pluralistic.net/2023/09/22/vin-locking/#thought-differently
Or when Apple claimed to be protecting phone users' privacy, which was also a lie?
https://pluralistic.net/2022/11/14/luxury-surveillance/#liar-liar
The savvy will see Apple lying (again) and say, "this surprises you?" No, it doesn't surprise me, but it pisses me off – and I'm not the only one, and Apple's insulting lies are getting less effective by the day.
Image: Alex Popovkin, Bahia, Brazil from Brazil (modified) https://commons.wikimedia.org/wiki/File:Annelid_worm,_Atlantic_forest,_northern_littoral_of_Bahia,_Brazil_%2816107326533%29.jpg
CC BY 2.0 https://creativecommons.org/licenses/by/2.0/deed.en
--
Hubertl (modified) https://commons.wikimedia.org/wiki/File:2015-03-04_Elstar_%28apple%29_starting_putrefying_IMG_9761_bis_9772.jpg
CC BY-SA 4.0 https://creativecommons.org/licenses/by-sa/4.0/deed.en
595 notes
veganism · 4 months
Text
The genocide is also experimentation on living beings
Israel is currently testing new weapons in Gaza, some of which will soon be sold globally as "battle-tested," according to Antony Loewenstein, an author who has written a widely acclaimed book on the issue.
For years, the Israeli defense sector has used Palestine as a laboratory for new weapons and surveillance tech, he told Anadolu, adding that this is also the case in the current ongoing war on Gaza.
One of the main reasons why "many nations, democracies and dictatorships support Israeli occupation" of Palestine is because it allows them to buy these "battle-tested" weapons, asserted Loewenstein, author of The Palestine Laboratory: How Israel Exports the Technology of Occupation Around the World.
Another aspect of Israel's war on Gaza has been the use of artificial intelligence technology, he said.
According to Loewenstein, AI has been one of the key targeting tools used by the Israeli military in its deadly campaign of airstrikes, leading to mass killings of Palestinians (now over 28,500) and damage on an unprecedented scale.
The current war on Gaza is "inarguably one of the most consequential and bloody," he said.
He described Israel's use of AI against Palestinians as "automated murder," stressing that this model "will be studied and copied by other nation-states" and Tel Aviv will sell them these technologies as tried and tested weapons.
In the last 50 years, Israel has exported hi-tech surveillance tools to at least 130 countries around the world.
To maintain its illegal occupation of the West Bank and East Jerusalem, and blockade of the Gaza Strip, Israel has developed a range of tools and technologies that have made it the world's leading exporter of spyware and digital forensics tools.
But analysts say the intelligence failure during the Oct. 7, 2023 Hamas attacks casts doubt on Tel Aviv's technological capabilities.
Israel's reliance on technology "is an illusion of safety, while imprisoning 2.3 million people under endless occupation," said Loewenstein, who is Jewish and holds Australian and German nationalities.
He described Israel's response in Gaza as "apocalyptic," stressing that the killings of Palestinian civilians, including children and women, is "on a scale of indiscriminate slaughter."
- 'BLOOD MONEY'
Loewenstein, who is also a journalist, said Israel has honed its weapons and technology expertise over decades as an occupying power, acting with increasing impunity in the Palestinian territories.
This led a small country like Israel to become one of the top 10 arms dealers in the world, he said, adding that Israeli arms sales in 2021 were "the highest on record, surging 55% over the previous two years to $11.3 billion."
In his book, Loewenstein explores thoroughly Israel's ties with autocracies and regimes engaged in mass displacement campaigns, and governments slinking their way into phones.
The Israeli NSO Group sold its well-known Pegasus software to numerous governments, a spyware tool for phones that gives access to the entire content, including conversations, text messages, emails and photos even when the device is switched off.
Israeli drones were first tested over Gaza, the besieged enclave that Loewenstein referred to as "the perfect laboratory for Israeli ingenuity in domination."
Surveillance technology developed in Israel has also been sold to the US in the form of watch towers now used on the border with Mexico.
The EU's border agency Frontex is known to have used Israeli drone technology to monitor refugees.
Loewenstein explains in his book that the EU has partnered with leading Israeli defense companies to use its drones, "and of course years of experience in Palestine is a key selling point."
"So again, one sees how there are so many examples of nations that are wanting to copy what Israel is doing in their own area in their own country on their own border," he said.
These technologies "are sold by Israel as battle-tested," he said.
In other words, he contends that Palestinians essentially have become "guinea pigs," and despite some nations and the UN publicly criticizing the Israeli occupation, in reality "they're desperate for this technology for themselves for their own countries."
"And that's how in fact, the Palestine laboratory has been so successful for Israel for so long," he said.
In his exhaustive probe into Israel's dealings with arms sales around the world, he noted that the country has monetized the occupation of Palestine, by selling weapons, spyware tools and technologies to repressive regimes such as Rwanda during the genocide in 1994 and to Myanmar during its genocide against the Muslim Rohingya people in 2017.
"This to me is blood money. I mean, there's no other way to see that and again, as someone Jewish, who has spent many, many years reporting on this conflict, both within Israel and Palestine but also elsewhere, it's deeply shameful that Israel is making huge amounts of money from the misery of others," he said.
"This is not a legacy that I can be proud of."
- 'NO NATION ACTUALLY HOLDING ISRAEL TO ACCOUNT'
Profiting from misery is to some extent the nature of what capitalism has always been about, but Israel does this with a great deal of impunity, "because Israel does what it wants," said Loewenstein.
"There is no accountability, there is no transparency, there is no nation actually holding Israel to account," he added.
Israel's regime is shielded from any political backlash for years to come because nations are reliant on Israeli weapons and spyware, said the author.
Israel may not be the only player employing surveillance technology that leads to human rights violations, but it still plays a dominant role, which is why Loewenstein insists that it deserves singular attention.
Israel's foreign policy has always been "amoral and opportunistic," he said, calling on all nations to take a stand and hold Israel accountable, and acknowledge that the world is buying what Israel is selling.
129 notes
jcmarchi · 3 months
Text
Will the EU’s AI Act Set the Global Standard for AI Governance?
New Post has been published on https://thedigitalinsider.com/will-the-eus-ai-act-set-the-global-standard-for-ai-governance/
In an unprecedented move, the European Parliament officially passed the Artificial Intelligence Act (AI Act), a comprehensive set of regulations designed to govern the rapidly evolving field of artificial intelligence. This groundbreaking legislation, marking a first in the realm of AI governance, establishes a framework for managing AI technologies while balancing innovation with ethical and societal concerns.
With its strategic focus on risk assessment and user safety, the EU AI Act serves as a potential blueprint for future AI regulation worldwide. As nations grapple with the technological advancements and ethical implications of AI, the EU’s initiative could bring a new era of global digital policy making.
The EU AI Act: A Closer Look
The journey of the EU AI Act began in 2021, conceived against the backdrop of a rapidly advancing technological landscape. It represents a proactive effort by European lawmakers to address the challenges and opportunities posed by artificial intelligence. This legislation has been in the making for several years, undergoing rigorous debate and revision, reflecting the complexities inherent in regulating such a dynamic and impactful technology.
Risk-Based Categorization of AI Technologies
Central to the Act is its innovative risk-based framework, which categorizes AI systems into four distinct levels: unacceptable, high, medium, and low risk. The 'unacceptable' category includes AI systems deemed too harmful for use in European society, leading to their outright ban. High-risk AI applications, such as those used in law enforcement or critical infrastructure, will face stringent regulatory scrutiny.
The Act sets out clear compliance requirements, demanding transparency, accountability, and respect for fundamental rights. Meanwhile, medium and low-risk AI applications are subject to less stringent, but nonetheless significant, oversight to ensure they align with EU values and safety standards.
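As a rough illustration of how the tiering works, here is a minimal sketch in Python of the four levels as a lookup table; the example use cases are one illustrative reading of the Act's published examples, not an official or complete classification:

```python
# Minimal sketch of the AI Act's four-tier risk model as a lookup table.
# Tier assignments are illustrative readings of the examples reported
# above, not an official classification.

RISK_TIERS = {
    "unacceptable": [   # banned outright
        "social scoring",
        "biometric inference of race, politics, or sexual orientation",
    ],
    "high": [           # strict duties: risk assessment, data quality, human oversight
        "hiring tools",
        "education admissions",
        "access to government services",
        "law enforcement",
        "critical infrastructure",
    ],
    "medium": [         # lighter but significant transparency obligations
        "general-purpose chatbots",
    ],
    "low": [            # minimal oversight
        "spam filters",
    ],
}

def tier_of(use_case: str) -> str:
    """Return the first tier whose example list contains the use case."""
    for tier, examples in RISK_TIERS.items():
        if use_case in examples:
            return tier
    return "unlisted"

print(tier_of("social scoring"))  # -> unacceptable
```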
Key Prohibitions and Regulations for AI Applications
The Act specifically prohibits certain uses of AI that are considered a threat to citizens’ rights and freedoms. This includes AI systems used for indiscriminate surveillance, social scoring, and manipulative or exploitative purposes. In the realm of high-risk AI, the legislation imposes obligations for risk assessment, data quality control, and human oversight.
These measures are designed to safeguard fundamental rights and ensure that AI systems are transparent, reliable, and subject to human review. The Act also mandates clear labeling of AI-manipulated content, often referred to as ‘deepfakes’, to prevent misinformation and uphold informational integrity.
This segment of the legislation represents a bold attempt to harmonize technological innovation with ethical and societal norms, setting a precedent for future AI regulation on a global scale.
Industry Response and Global Implications
The EU AI Act has elicited a diverse array of responses from the technology sector and legal community. While some industry leaders applaud the Act for providing a structured framework for AI development, others express concerns about the potential for stifling innovation. Notably, the Act’s focus on risk-based regulation and ethical guardrails has been largely seen as a positive step towards responsible AI usage.
Companies like Salesforce have emphasized the importance of such regulation in building global consensus on AI principles. On the other hand, concerns have been raised about the Act’s ability to keep pace with rapid technological changes.
The EU AI Act is poised to significantly influence global trends in AI governance. Much like the General Data Protection Regulation (GDPR) became a de facto standard in data privacy, the AI Act could set a new global benchmark for AI regulation. This legislation could inspire other countries to adopt similar frameworks, contributing to a more standardized approach to AI governance worldwide.
Additionally, the Act’s comprehensive scope may encourage multinational companies to adopt its standards universally, to maintain consistency across markets. However, there are concerns about the competitive landscape, particularly in how European AI companies will fare against their American and Chinese counterparts in a more regulated environment. The Act’s implementation will be a crucial test of Europe’s ability to balance the promotion of AI innovation with the safeguarding of ethical and societal values.
Challenges and the Path Ahead
One of the primary challenges in the wake of the EU AI Act is keeping pace with the rapid evolution of AI technology. The dynamic nature of AI presents a unique regulatory challenge, as laws and guidelines must continually adapt to new advancements and applications. This pace of change could potentially render aspects of the Act outdated if they are not flexible and responsive enough. Furthermore, there is a concern about the practical implementation of the Act, especially in terms of the resources required for enforcement and the potential for bureaucratic complexities.
To effectively manage these challenges, the Act will need to be part of a dynamic regulatory framework that can evolve alongside AI technology. This means regular updates, revisions, and consultations with a broad range of stakeholders, including technologists, ethicists, businesses, and the public.
The concept of a ‘living document’, which can be modified in response to technological and societal shifts, is essential for the regulation to remain relevant and effective. Additionally, fostering an environment of collaboration between AI developers and regulators will be critical to ensuring that innovations can flourish within a safe and ethical framework. The path ahead is not just about regulation, but about building a sustainable ecosystem where AI can develop in a manner that aligns with societal values and human rights.
As the EU embarks on this pioneering journey, the global community will be closely observing the implementation and impact of this Act, potentially using it as a model for their own AI governance strategies. The success of the EU AI Act will depend not only on its initial implementation but on its ability to adapt and respond to the ever-changing landscape of artificial intelligence.
You can find the EU’s AI Act navigator here.
0 notes
tournament-announcer · 3 months
Note
Hey ik this is not tournament related, but in case you didn't know and want to spread the word, Tumblr is selling everybody's data to AI companies.
Here is the staff post about it https://www.tumblr.com/loki-valeska/743539907313778688
And a post with more information and how to opt out https://www.tumblr.com/khaleesi/743504350780014592/
Hi thanks for the information and sorry for my late reply. I was a bit low on spoons this week and I wanted to form thoughts about this.
Because the thing is, I am doing a PhD at an AI department in real-life. Not in generative AI, in fact I’m partly doing this because I distrust how organisations are currently using AI. But so this is my field of expertise and I wanted to share some insights.
First of all yes do try to opt out. We have no guarantee how useful that’s going to be, but they don’t need to be given your data that easily.
Secondly, I am just so confused as to why? Why would you want to use tumblr posts to train your model? Everyone in the field surely knows about the garbage in, garbage out rule? AI models that need to be trained on data are doing nothing more than making statistical predictions based on the data they’ve seen. Garbage in, garbage out therefore refers to the fact that if your data is shit, your results will also be shit. And like not to be mean but a LOT of tumblr posts are not something I would want to see from a large language model.
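(If you want to see garbage in, garbage out for yourself, here's a tiny hypothetical demo, using scikit-learn and a synthetic dataset: the same model trained once on clean labels and once on shuffled "garbage" labels.)

```python
# Garbage in, garbage out: identical models, clean vs. shuffled labels.
# Synthetic data; purely illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)

rng = np.random.default_rng(0)
garbage_labels = rng.permutation(y_train)   # destroy the signal in the labels
garbage = LogisticRegression(max_iter=1000).fit(X_train, garbage_labels)

print("accuracy, trained on clean labels:  ", clean.score(X_test, y_test))    # high
print("accuracy, trained on garbage labels:", garbage.score(X_test, y_test))  # ~chance
```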
Thirdly I've seen multiple posts encouraging people to use nightshade and glaze on their art, but also posts wondering what exactly these programs do to your art. The thing is, generative AI models are kinda stupid: they just learn to associate certain patterns in pictures with certain words. However, these patterns are often not patterns we'd want them to pick up on. An example would be a model that you want to differentiate between pictures of birds and dogs, but instead of learning to look for, say, wings, it learns that pictures of birds usually have a blue sky as background, and so a picture of a bird in the grass will be labelled as 'dog'.
So what glaze and nightshade are more or less doing is exploiting this stupidness by changing a few pixels in your art that will give it a very different label when an AI looks at it. I can look up papers for people who want to know the details, but this is the essence of it.
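For anyone who wants a bit more than "changing a few pixels": here is a toy numpy sketch of the underlying idea, in the spirit of the fast gradient sign attacks from the adversarial-examples literature. This is emphatically not what glaze or nightshade actually do internally (they're far more sophisticated); it only shows how a tiny per-pixel change can flip a simple model's label.

```python
# Toy adversarial perturbation against a linear classifier: move every
# pixel by a tiny amount in the direction that most changes the score.
# Illustrative only; real tools like Glaze/Nightshade are far more complex.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=784)        # weights of a toy linear classifier
image = rng.uniform(size=784)   # a fake 28x28 "artwork", flattened

score = w @ image               # score > 0 -> label A, score < 0 -> label B

# Smallest uniform per-pixel step that flips the label: nudge each pixel
# against the sign of its weight (times the current label's sign).
eps = 1.01 * abs(score) / np.abs(w).sum()
adversarial = image - eps * np.sign(w) * np.sign(score)

print("per-pixel change:", eps)  # tiny, visually negligible
print("original label:  ", "A" if score > 0 else "B")
print("perturbed label: ", "A" if w @ adversarial > 0 else "B")
```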
To see how much influence this might have on your art, see this meme I made a few years ago based on the paper ”Intriguing properties of neural networks”, Figure 5 by Szegedy et al. (2013)
Finally, staff said in that post that they gave us the option to opt out because of the maybe upcoming AI Act in Europe. I was under the impression that they should give us this opportunity because of the GDPR, and that the AI Act is supposed to be more about the use of AI and less about the creation and data aspect. Nevertheless, this shows that the EU has a real ability to influence these kinds of things. The European Parliament elections are coming up this year, so please go vote, and also read up beforehand on what the parties are saying about AI and other technologies (next to everything else you care about). (This is also relevant for other elections of course, but the EU has a good track record on this topic.)
Anyway sorry for the long talk, but as I said this is my area and so I felt the need to clarify some things. Feel free to send me more asks if you want to know something specific!
95 notes
abnerkrill · 6 months
Text
Petition: establish AI regulations
EU people (but possible to sign from elsewhere in the world) please add your name to this petition for human-centric and culture-friendly AI regulation. Needs 47,000 more votes as of the time of posting. From the petition:
'Summary: In an open letter, the Authors' Rights Network (Netzwerk Autorenrechte) calls on the German government as well as the French and Italian leaders to reconsider their stance on the (non-)regulation of AI, to take a stand against the massive damaging effects of unregulated AI applications based on theft, to protect people and authors from data theft and disinformation and to reflect on values such as trust, democracy and justice.
++ Open letter on the subject of France, Germany's and Italy's position on the planned EU Artificial Intelligence Act ++
Dear Chancellor Olaf Scholz (Germany),
Dear Federal Minister of Economic Affairs and Climate Action Robert Habeck,
Dear Federal Minister for Digital and Transport Volker Wissing,
Dear President Emmanuel Macron (France), 
Dear Prime Minister Giorgia Meloni (Italy):
It is with great concern that we, the members of the Netzwerk Autorenrechte, which represents authors and translators in the book sector from 15 organisations in the D-A-CH region, observe Germany's, France's and Italy's new position on the AI Act proposal. This new position runs counter to the consensus previously reached by EU Member States on the legal regulation of AI, in particular with regard to transparency and liability obligations for developers of generative technology.
According to reports from Euractiv on 19 November 2023, Germany – under the lead of the Digital Ministry and the Federal Ministry for Economic Affairs and Climate Action, and together with France and Italy – wants to push for "obligatory self-regulation" instead of legally binding regulation. In the position of these three countries, there are no sanctions for safety incidents such as copyright, authors' rights and data protection violations, insufficient labeling, or the circumvention of ethical standards.
Reason
Dear Chancellor, dear Vice Chancellor, dear Federal Minister,
dear Mr President of France, dear Prime Minister of Italy:
We urge you to change your position, which currently favors supposed economic advantages to the detriment of sustainable legal rules. Your position sends a fatal signal to everyone in the cultural sectors and to all people in Europe: namely, that you're willing to protect the same tech companies that illegitimately make use of cultural works and citizens' data for their own profits – rather than protecting the people whose work and private data have made these foundation models and generative applications possible in the first place.
The consequences of your position would be devastating. Generative technology is already threatening numerous jobs. We can already observe several harmful “business models” based on AI products and an increase in disinformation. It's been proven that generative AI uses unlawfully obtained works without the knowledge or consent of the works' authors. Without legal regulation, generative technologies will accelerate the theft of artistic work and data. They'll increase discrimination and the falsification of information, including damage to reputations. And they'll significantly contribute to climate change. The more legally deregulated generative products reach the market, the more irreparable the loss of trust in texts, images, and information will become for society as a whole.
We urge you to return to the values of trust, democracy, and justice. We're standing on the threshold of an evolution, of one of the most decisive moments in history. Will we regulate the machines that are using humans in order to replace them? Or will we choose the short-sighted ideology of money?
We trust you have the political resolve to do the right thing.
Berlin, 24 November 2023'
132 notes
mariacallous · 6 months
Text
The European Union today agreed on the details of the AI Act, a far-reaching set of rules for the people building and using artificial intelligence. It’s a milestone law that, lawmakers hope, will create a blueprint for the rest of the world.
After months of debate about how to regulate companies like OpenAI, lawmakers from the EU’s three branches of government—the Parliament, Council, and Commission—spent more than 36 hours in total thrashing out the new legislation between Wednesday afternoon and Friday evening. Lawmakers were under pressure to strike a deal before the EU parliament election campaign starts in the new year.
“The EU AI Act is a global first,” said European Commission president Ursula von der Leyen on X. “[It is] a unique legal framework for the development of AI you can trust. And for the safety and fundamental rights of people and businesses.”
The law itself is not a world-first; China’s new rules for generative AI went into effect in August. But the EU AI Act is the most sweeping rulebook of its kind for the technology. It includes bans on biometric systems that identify people using sensitive characteristics such as sexual orientation and race, and the indiscriminate scraping of faces from the internet. Lawmakers also agreed that law enforcement should be able to use biometric identification systems in public spaces for certain crimes.
New transparency requirements for all general purpose AI models, like OpenAI's GPT-4, which powers ChatGPT, and stronger rules for “very powerful” models were also included. “The AI Act sets rules for large, powerful AI models, ensuring they do not present systemic risks to the Union,” says Dragos Tudorache, member of the European Parliament and one of two co-rapporteurs leading the negotiations.
Companies that don’t comply with the rules can be fined up to 7 percent of their global turnover. The bans on prohibited AI will take effect in six months, the transparency requirements in 12 months, and the full set of rules in around two years.
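For a sense of scale, here is a quick illustrative calculation; the 7 percent ceiling and the 6/12/~24-month phase-ins come from the reporting above, while the company turnover and the entry-into-force date are hypothetical placeholders.

```python
# Illustrative only: penalty ceiling and phase-in dates for a hypothetical
# company. Turnover and start date are made up.
from datetime import date

global_turnover_eur = 50_000_000_000            # hypothetical company
print(f"maximum fine: EUR {0.07 * global_turnover_eur:,.0f}")  # 3.5 billion

def add_months(d: date, months: int) -> date:
    """Shift a date by whole months, rounded to the first of the month."""
    y, m = divmod(d.month - 1 + months, 12)
    return date(d.year + y, m + 1, 1)

start = date(2024, 8, 1)                        # assumed entry into force
print("bans on prohibited AI from:   ", add_months(start, 6))
print("transparency rules from:      ", add_months(start, 12))
print("full set of rules from: around", add_months(start, 24))
```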
Measures designed to make it easier to protect copyright holders from generative AI and require general purpose AI systems to be more transparent about their energy use were also included.
“Europe has positioned itself as a pioneer, understanding the importance of its role as a global standard setter,” said European Commissioner Thierry Breton in a press conference on Friday night.
Over the two years lawmakers have been negotiating the rules agreed today, AI technology and the leading concerns about it have dramatically changed. When the AI Act was conceived in April 2021, policymakers were worried about opaque algorithms deciding who would get a job, be granted refugee status or receive social benefits. By 2022, there were examples that AI was actively harming people. In a Dutch scandal, decisions made by algorithms were linked to families being forcibly separated from their children, while students studying remotely alleged that AI systems discriminated against them based on the color of their skin.
Then, in November 2022, OpenAI released ChatGPT, dramatically shifting the debate. The leap in AI’s flexibility and popularity triggered alarm in some AI experts, who drew hyperbolic comparisons between AI and nuclear weapons.
That discussion manifested in the AI Act negotiations in Brussels in the form of a debate about whether makers of so-called foundation models such as the one behind ChatGPT, like OpenAI and Google, should be considered as the root of potential problems and regulated accordingly—or whether new rules should instead focus on companies using those foundational models to build new AI-powered applications, such as chatbots or image generators.
Representatives of Europe’s generative AI industry expressed caution about regulating foundation models, saying it could hamper innovation among the bloc’s AI startups. “We cannot regulate an engine devoid of usage,” Arthur Mensch, CEO of French AI company Mistral, said last month. “We don’t regulate the C [programming] language because one can use it to develop malware. Instead, we ban malware.” Mistral’s foundation model 7B would be exempt under the rules agreed today because the company is still in the research and development phase, Carme Artigas, Spain's Secretary of State for Digitalization and Artificial Intelligence, said in the press conference.
The major point of disagreement during the final discussions that ran late into the night twice this week was whether law enforcement should be allowed to use facial recognition or other types of biometrics to identify people either in real time or retrospectively. “Both destroy anonymity in public spaces,” says Daniel Leufer, a senior policy analyst at digital rights group Access Now. Real-time biometric identification can identify a person standing in a train station right now using live security camera feeds, he explains, while “post” or retrospective biometric identification can figure out that the same person also visited the train station, a bank, and a supermarket yesterday, using previously banked images or video.
Leufer said he was disappointed by the “loopholes” for law enforcement that appeared to have been built into the version of the act finalized today.
European regulators' slow response to the emergence of the social media era loomed over discussions. Almost 20 years elapsed between Facebook's launch and the Digital Services Act—the EU rulebook designed to protect human rights online—taking effect this year. In that time, the bloc was forced to deal with the problems created by US platforms, while being unable to foster its smaller European challengers. "Maybe we could have prevented [the problems] better by earlier regulation," Brando Benifei, one of two lead negotiators for the European Parliament, told WIRED in July. AI technology is moving fast. But it will still be many years until it's possible to say whether the AI Act is more successful in containing the downsides of Silicon Valley's latest export.
83 notes