#alone the fact that it's not allowed in the eu because of data privacy concerns should make people think twice about signing up
swedna · 5 years
The US government is sharpening its antitrust scrutiny of Big Tech. In a sign that formal inquiries could be forthcoming, the Justice Department and Federal Trade Commission last week divvied up antitrust oversight for Apple Inc., Amazon.com Inc., Facebook Inc. and Alphabet Inc.’s Google. While the timing caught some by surprise, the companies have been preparing for this moment for a while, hiring lawyers and lobbyists and publicly making their case. History may be on their side: Corporate breakups are a significant, and rare, undertaking for the US government.
The last major breakup of a monopoly was AT&T in 1982. Microsoft was ordered split up by a federal judge in 2000 after the Justice Department sued the software company in 1998, a decision that was reversed on appeal. In the past few years, there has been a groundswell of calls to at least rein in, if not break up, technology companies that are seen by some as having become too big and powerful in many ways.
US Senator Elizabeth Warren has made perhaps the most detailed case for breaking up and regulating the four companies, but the Massachusetts Democrat is not alone in her aggressive views on the industry. Any formal investigation will take a long time. Government scrutiny of Microsoft lasted years.
As all sides dig in, here’s a look at some of the issues the government could home in on as it builds its case, and some of the ways the companies might argue their way out.
APPLE
The Case: Though perhaps best known for the iPhone, Apple’s vulnerability is the App Store, which supports tools and games for mobile devices and increasingly holds the key to the company’s growth. Apple and Google control more than 95% of all mobile app spending by consumers in the US, meaning that most developers must work with them to reach millions of smartphone users. Spotify Technology SA is among those that have long complained about the 30% cut Apple takes of app sales, claiming it amounts to an effective tax on competitors. Some smaller developers sued Apple this week, claiming the App Store suppresses competition. The European Union is already preparing to look into the case and Warren, in her proposal, said the App Store should be separated from the rest of Apple because its own apps compete with those of outside developers.
Last month the Supreme Court ruled that a large antitrust case, regarding App Store pricing, could proceed.
The Defence: Chief Executive Officer Tim Cook is adamant that Apple is “not a monopoly” in any way. He points to its share of the smartphone market in the U.S., which is only about 30%, and its even lower share in PCs. “We don’t have a dominant position in any market,” he said in an interview Tuesday with CBS. Apple has long sought to distinguish itself from the rest of big tech, from providing tighter security on its hardware and software to highlighting that it doesn’t monetize users’ data. Regarding the App Store, Apple recently added a new section to its website to show the benefits it provides to developers, such as reaching a huge global audience and handling payment and identity details, thereby taking friction out of the sign-up process.
Apple, along with Google, has also pointed out its ability to filter out fake apps and malicious software, making its store more secure.
FACEBOOK
The Case: Facebook has received a barrage of government criticism over the past year due to the sheer amount of data it collects on some 2 billion users and its control over content posted on the platform. Digital advertising, where Facebook and Google together control about 60% of spending in the US, is also an area where regulators might seek to base a case. Warren and Representative David Cicilline, the Democratic chairman of the House’s antitrust panel, have suggested that Facebook’s acquisitions of Instagram and WhatsApp might need to be unwound. Chris Hughes, one of the company’s co-founders, recently wrote at length why he thought Facebook is a monopoly run by a chief executive officer whose “influence is staggering.”
The Defence: Facebook CEO Mark Zuckerberg has said he welcomes regulation, but that breaking up the company wouldn’t address legislators’ privacy and data concerns. In fact, Facebook argues that its size actually enables it to do the things that regulators want to see, such as better policing of content, which would be much harder if Instagram or WhatsApp were separate companies. Furthermore, in an age where the government is concerned about the rise of China, Facebook argues that the US needs its big tech companies. Breaking up Facebook would simply clear the way for Chinese tech companies, which don’t share American values, to step in and dominate, Zuckerberg has said. As for digital advertising, Facebook notes that it doesn’t even lead the market.
eMarketer pegs its share at 22% of the U.S. market in 2019. Facebook’s $55 billion in advertising revenue last year was less than half of Google’s. And other social media companies are growing in key demographics, like Snapchat, which said earlier this year that it now reaches 90% of 13- to 24-year-olds in the US.
GOOGLE
The Case: Google appears to be the company facing the most serious antitrust scrutiny now, after the FTC closed a two-year investigation in 2013 without any action. The company has a large or majority market share in several important industries: search, digital advertising, and mobile operating systems. And Google could face concerns similar to Apple’s over its own app store.
Google commands about 37% of the US digital ad market, according to eMarketer, where it sells much of the online real estate available to advertisers, such as search ads and spots that play before YouTube videos. In search, where Google has 90% of the market, opponents claim the company can manipulate rankings to favor its own listings. And Google’s Android smartphone operating system is by far the most popular in the world, with 85% of the market, outpacing Apple’s iOS with 14%, according to IDC.
The company has already gone several rounds of antitrust scrutiny with the European Union and has been forced to pay billions of dollars in three different cases, including a record $5 billion fine for the practice of tying its search and browser tools to Android. Google has also made many enemies among smaller businesses, ranging from media to advertising technology companies that are now assembling evidence to help the Justice Department, according to a person familiar with the situation.
The Defence: Google has well-defined arguments ready to push back, honed over years of doing battle with the EU and the FTC. To counter the claim of favoritism in search, Google argues that it’s only trying to surface the best information faster for users. Google is appealing the $1.7 billion EU fine for stifling competition in the online advertising market. Regulators’ demands for Android will force Google to change its business model and start charging customers for the software, rather than giving it to handset makers for free, the company claims. Google may also argue that the global nature of the internet means it doesn’t actually have the power that its critics say it does, pointing to companies such as Amazon and Tencent Holdings Ltd. The internet has low barriers to entry, and if someone builds a better search engine or advertising system, customers can easily switch over, the argument goes. Google likes to say that “competition is only one click away.”
AMAZON
The Case: The antitrust debate about Amazon focuses on the retailing giant’s perceived dominance of e-commerce, where it has nearly 50% of U.S. online sales. Since it’s both a retailer and a marketplace for third-party sellers, Amazon has drawn scrutiny over whether it uses its clout and huge amount of sales data to give itself a leg up over smaller vendors – an issue the EU is already investigating. Warren has suggested that Amazon, like Apple, should be barred from competing with other players that sell on its marketplace. Regulators may also look into Amazon’s fulfillment practice, in which the company handles all aspects of fulfilling customer orders from shipping to packing and storing, according to Vox.
The problem for regulators is that Amazon often charges much lower fees than competing platforms. Amazon Prime could also be a target. While consumers may love the fact that they can get free shipping on a wide range of goods and services with the subscription program, the FTC has shown interest in the question of whether bundling these services allows Amazon to undercut competitors on price, according to Vox.
The Defence: Every time Amazon buys a company in another sector, whether groceries, pharmaceuticals, or logistics, it sends a jolt of fear through that industry. But Amazon claims it actually holds only a small percentage of the total retail market in the US and faces formidable competition from the likes of Walmart Inc. The company, also a frequent target of Senator Bernie Sanders of Vermont for its treatment of workers, prides itself on being able to keep prices low for consumers.
webbygraphic001 · 5 years
Internet Censorship is Here: How Far Will it Go?
Within hours of the recent mass shooting at a New Zealand mosque by a far-right terrorist, the country’s authorities were scrambling to ensure a sickening video the murderer streamed on Facebook was barred from the nation’s screens. Due to the nature of the Internet, the task of removal proved very difficult. But eventually, the government succeeded — using controversial tactics usually associated with Internet censorship by authoritarian regimes.
For some, the action of one highly democratic nation was a worrying reminder that Internet freedom should not be taken for granted. For others it was a triumph of taste and decency over a Wild West online community that still refuses to accept regulation while simultaneously failing to take responsibility for its actions.
a billion Internet users are barely aware that Facebook and Google exist
Versions of this debate are being played out around the world, as authorities, online companies, journalists and web professionals try to strike a balance between free speech and protecting Internet users from highly offensive — and potentially also subversive — content. The spread of “fake news”, alleged attempts by foreign powers to meddle in elections, and the age-old difficulty of defining what should be permitted in a free society, are all part of this debate.
With the technology and the excuses for Internet censorship already in place, it’s a debate that will shape the future of the Web. Or should that be ‘futures’, plural?
Full Censorship Can Be Achieved
In China, a billion Internet users are barely aware that Facebook and Google exist. Authorities have no difficulty in ensuring unpleasant content is not seen on the search engines and social media platforms that are available there: The Christchurch video was blocked just as effectively as disturbing footage of the Tiananmen Square massacre is, because the Chinese government has built a system of highly effective controls on the Internet known as “the Great Firewall of China”.
Officially called the Golden Shield Project, China’s system of Internet controls has made fools of the experts who said that the Internet could not be tamed or censored. Jon Penney, a Fellow at Harvard’s Berkman Center for Internet & Society and Toronto’s Citizen Lab, told Open Democracy recently that although China’s technology is not yet fully understood by the west, it is:
…among the most technically sophisticated Internet filtering/censorship systems in the world.
“Basically, access to the Internet in China is provided by eight Internet Service Providers, which are licensed and controlled by the Ministry of Industry and Information Technology,” he said. “These ISPs are important, because we’re learning that they do a lot of the heavy lifting in terms of content filtering and censorship.”

Controlling ISPs was one crucial brick of that firewall that allowed New Zealand to take the Christchurch killer’s video down. Indeed, what was controversial for many was the use of such an approach — and the fact that the government used a set of unpublished ‘blacklists’ of the sites it required to be blocked. Kalev Leetaru, a big data expert, wrote on Forbes: “The secret nature of the blacklist and opaque manner in which the companies decided which websites to add to the list or how to appeal an incorrect listing, echoed similar systems deployed around the world in countries like China.”
A Different Internet
China’s great firewall also tracks and filters keywords used in search engines; blocks many IP addresses; and can ‘hijack’ the Domain Name System to ensure attempts to access banned sites draw a blank. This is thought to be done at ISP level, but also further along the system as well, ensuring that browsing even a permitted foreign site from within China can be frustratingly slow. But with sites such as Google, Facebook, Twitter and Wikipedia blocked, most Chinese users simply view an entirely different Internet and App ecosystem.
most Chinese users simply view an entirely different Internet
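The DNS ‘hijacking’ described above can be illustrated with a toy resolver. This is a minimal sketch, not any ISP’s actual implementation; all hostnames and IP addresses are hypothetical, drawn from reserved documentation ranges:

```python
# Toy model of DNS hijacking by a censoring ISP: queries for blacklisted
# hostnames are answered by the censor itself instead of being forwarded
# to the real DNS hierarchy.
SINKHOLE_IP = "203.0.113.1"           # stand-in address for the censor's server
BLOCKED_DOMAINS = {"banned.example"}  # hypothetical secret blacklist

def resolve(hostname: str, upstream: dict) -> str:
    """Return the censor's answer for blocked names, else the real record
    (or NXDOMAIN if the name doesn't exist)."""
    if hostname in BLOCKED_DOMAINS:
        return SINKHOLE_IP  # the 'hijacked' answer
    return upstream.get(hostname, "NXDOMAIN")
```

From the user’s side, the banned site simply draws a blank or lands on the wrong server, which is exactly the behavior the article describes.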
Adrian Shahbaz, the research director for technology and democracy at Freedom House, an independent watchdog for democracy, says other authoritarian regimes — including Saudi Arabia and the United Arab Emirates — are already showing interest in China’s technology and censorship system. Russia is building its own version, which will allow it to totally isolate the domestic web from the rest of the Internet; ostensibly, this is to ensure the country’s ability to defend itself from a “catastrophic cyber attack”.
There are concerns that this censorship will spread to the West, where attempts to clamp down on hate speech, and to stop foreign ‘trolls’ pushing fake news in a bid to cause instability and influence elections, mean there is no shortage of justification for introducing controls. French President Emanuel Macron and US President Donald Trump are among the democratic leaders who have threatened crackdowns in the last few months alone.
Censorship or Responsible Regulation?
ISP controls and direct censorship are not the only threats to a unified and ‘free’ internet. With most people consuming their Internet through just a few very popular social media platforms or mainstream news providers, governments can also lean directly on these companies. Singapore — a country that admittedly sits in the bottom 30 of the Press Freedom Index — has just introduced a new “anti-fake news law” allowing authorities in the city-state to remove articles deemed to breach government regulations.
The country’s prime minister said the law will require media outlets to correct fake news articles, and “show corrections or display warnings about online falsehoods so that readers or viewers can see all sides and make up their own minds about the matter.”
Internet giants such as Facebook, Twitter and Google have their Asia headquarters in Singapore and are expected to come under pressure to aid implementation, meaning that those sites could look different when viewed from the city-state. Singapore may not be known for its freedom of speech, but its approach is telling as to how less authoritarian regimes — and those without China’s technology — can impose a creeping web censorship by leaning on the big tech companies that deliver most of what Internet users see.
The Singaporean premier added that “in extreme and urgent cases, the legislation will also require online news sources to take down fake news before irreparable damage is done.” It is not hard to imagine these words coming from a Western leader, or a judge.
Facebook is Already on Board
Facebook itself, after coming under intense pressure over the use of the site to spread everything from dubious news reports to videos promoting suicide, has now joined the calls for regulation. “From what I’ve learned, I believe we need new regulation in four areas: harmful content, election integrity, privacy and data portability,” Mark Zuckerberg said in a statement recently.
Copyright as Censorship
On the subject of data, Zuckerberg cited Europe’s GDPR — a set of regulations governing the use and storage of personal data — as an example to follow. But it is another EU law, passed in recent weeks, that threatens further Internet fragmentation.
The new Copyright Directive will require tech firms to automatically screen for and remove unauthorised copyrighted material from their platforms. Many campaigners have argued the directive will be harmful to free expression, since the only way to guarantee compliance is to simply block any user-generated content that references other copyrighted material in any way, including criticism, remixes, or even simple quotes.
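A deliberately naive sketch of such an upload filter makes the problem concrete. This assumes exact fingerprint matching with a hypothetical rightsholder registry; real systems would need fuzzy perceptual matching:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    # Exact SHA-256 hashing is the simplest possible stand-in for the
    # perceptual fingerprints a real filter would use.
    return hashlib.sha256(data).hexdigest()

# Hypothetical registry of fingerprints supplied by rightsholders.
CLAIMED_WORKS = {fingerprint(b"bytes of some copyrighted work")}

def allow_upload(data: bytes) -> bool:
    """Block any upload whose fingerprint matches a claimed work."""
    return fingerprint(data) not in CLAIMED_WORKS
```

Note that this exact-match version would miss a remix that changes a single byte; closing that gap means fuzzier matching, which inevitably starts catching quotation and criticism too. That trade-off is precisely what campaigners object to.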
until now, people have been relatively free to publish material online and then suffer the consequences
While the EU directive aims to bolster quality online news journalism by banning its wholesale re-use, sites that rely on user-generated content could end up looking very different when viewed from within Europe, compared to the US for example. Experts talk of a “splintering”, which means that there will effectively be different Internets in different jurisdictions.
Copyright enforcement, of course, is not censorship. And there have always been categories of images, for example, that are illegal in most jurisdictions. But until now, people have been relatively free to publish material online and then suffer the consequences, as was the case in the days of print. Proponents of tighter controls at source argue that simply removing material from sites once it is known to be illegal is a never-ending and ultimately pointless task, especially in the face of organized ’trolls’ who can re-post at will.
During the first 24 hours after the Christchurch attack, Facebook removed 1.5 million re-posts of the murderer’s video, for example. It was only the introduction of controls at ISP level that finally blocked it in New Zealand, at least.
The Human Element
“Extremist content” and “fake news” look set to be the next targets for politicians who favor stricter Internet controls, or, as they may argue, greater responsibility from ISPs and major websites. Unlike copyright, this is at least partially subjective, and would require real people, employed by the authorities, to decide what is acceptable on our screens. China, naturally, already employs an army of such censors; it even pays another large group to post material that is explicitly favorable to its policies.
Leetaru said: “Like New Zealand’s recent blocking efforts, China’s system officially exists for the same reason: to block access to disturbing content and content that would disrupt social order. In the Chinese case, however, the system has famously morphed to envelope all content that might threaten the government’s official narratives or call into question its actions.
“In New Zealand’s case, website censorship was limited to a small set of sites allegedly hosting sensitive content relating to the attack. Yet, the government’s apparent comfort with instituting such a nation-wide ban so swiftly and without debate reminds us of how Chinese-style censorship begins.”
Can’t imagine it happening? Britain’s government recently published a ‘White Paper’ — a way of signalling possible legislation — which proposed that social media companies should be forced to take down, within 24 hours, “unacceptable material” that “undermines our democratic values and principles”.
What Constitutes Fake News?
Exactly what constitutes “fake news” has always been open to interpretation: during election campaigns, some democratic leaders have already learned that it is a good label to discredit critical reports with. In Russia, fake news was banned recently, and is defined as anything that “exhibits blatant disrespect for the society, government, official government symbols, constitution or governmental bodies of Russia.”
One area that is being actively targeted in Europe is “extremist” material fostering violence or hatred. In Germany, which already has a system to force platforms to remove “hate speech,” this has recently included the censure of a woman who posted pictures of the Iranian women’s volleyball team to contrast their attire in the 1970s (shorts and vests) and now (headscarves and long sleeves).
The following joke was deemed hateful enough to land the poster a social media ban: “Muslim men are taking a second wife. To finance their lives, Germans are taking a second job.”
Another area that Western governments are showing increasing concern about is private groups that carefully regulate membership, designed to allow like-minded people to share their views unchallenged. Already, there have been calls for Facebook to clamp down on these closed groups or “echo chambers”, on the grounds that they are able to serve undiluted misinformation without challenge. While these requests may once again sound reasonable, it is unclear what would constitute an echo chamber and what kind of ‘misinformation’ could be considered unacceptable — or indeed, who would decide that.
How to Beat the Censors
For those wanting to beat EU copyright laws and, for example, see a meme their friend in California is ‘lol-ing’ about, a virtual private network (VPN) should be a good solution. Already recommended by many security experts, VPNs are encrypted proxy servers that hide your own IP address and can make it look like you are browsing from a different country. For occasional use, even a public proxy site (a ‘browser within a browser’) may well work.
There are various levels of VPN – an in-depth look at the options is available here. However, sophisticated censorship systems such as the Great Firewall of China are capable of detecting VPN use and blocking that too.
A popular alternative to VPN use is the Tor browser, which is designed with anonymity in mind. Although experts rate Tor’s privacy features (and therefore its anti-censorship abilities) higher than VPNs, Tor can also be blocked. What’s more, you have to install the browser on your device and using Tor does not hide the fact that you are using Tor. Both Tor and VPNs are illegal in some countries and their use could put you at risk.
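As a concrete illustration, routing traffic from Python’s requests library through a locally running Tor client is a one-line proxy setting. This is a minimal sketch that assumes Tor is listening on its default SOCKS port, 9050; actually making the request additionally needs the third-party requests and PySocks packages installed:

```python
def tor_proxies(port: int = 9050) -> dict:
    """Proxy mapping for the requests library that routes traffic through
    a local Tor client. The socks5h scheme (note the 'h') sends DNS
    resolution through Tor too, so a censoring resolver never even sees
    which hostname was asked for."""
    proxy = f"socks5h://127.0.0.1:{port}"
    return {"http": proxy, "https": proxy}

# Usage (needs a running Tor client plus requests + PySocks installed):
#   import requests
#   r = requests.get("https://check.torproject.org/", proxies=tor_proxies())
```

The Tor Browser bundle uses a different default SOCKS port (9150), so the port is left as a parameter rather than hard-coded.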
Tor is also the gateway of preference for accessing the Deep Web or Dark Web — which are also used heavily by activists and journalists who are trying to circumvent curbs on their freedom of expression. In a detailed article explaining how to access and use the Dark Web, technology journalist Conor Shiels says:
The Deep Web has been heralded by many as the last bastion of internet privacy in an increasingly intrusive age, while others consider it one of the evilest places on the internet.
The Deep Web is technically any site not indexed by search engines. Such sites would be an obvious place for private groups to base themselves if they are thrown off Facebook or even banned — although of course they may find it harder to recruit new members if they remain hidden from the casual user.
Although the Deep or Dark Web is a popular place for illegal activity, it is not illegal in itself. For those seeking an uncensored experience, it remains a place hidden from the authorities, but of course, the flip side is that you will be hiding your own postings from the vast majority of web users. This aspect of censorship will perhaps be the hardest to bypass as authorities move to cut off the most popular sites and platforms from certain news, views and activities.
Featured image via Unsplash.
Source from Webdesigner Depot http://bit.ly/2HFm27a from Blogger http://bit.ly/2HyL1sw
neptunecreek · 5 years
The Public Deserves a Return to the 2015 Open Internet Order
Congress is actively debating how to fix the FCC’s repeal of the net neutrality rules. But the first bills offered (H.R. 1101 (Walden), H.R. 1006 (Latta), and H.R. 1096 (McMorris Rodgers)) focus narrowly on the “bright line” rules of no blocking, no throttling, and no paid prioritization. A major problem with this approach is that the public supported the 2015 Open Internet Order, and a huge array of parties (with the exception of basically just AT&T, Comcast, and Verizon) supported Title II reclassification, because of what else was protected. Privacy, competition, and public safety are all worse off when all you do is ban three basic tactics.
Restoring the entirety of the 2015 Open Internet Order means protecting the vital components to keeping the Internet a free and open platform. If Congress decides to act, it should not shortchange the American public. Unfortunately, that appears to be where the House of Representatives is heading right now.
The Rules Need To Cover Anticompetitive Zero Rating
A straight ban on blocking, throttling and paid prioritization would leave out important limits on the practice of exempting certain traffic from data caps, also known as “zero rating.” This practice can be used to drive Internet users to the ISP’s own content or favored partners, squelching competition. A recent Epicenter.works multi-year study on zero-rating practices in the EU has found that countries that allow zero-rating plans have more expensive wireless services than countries that do not. It also found that when ISPs engage in zero-rating practices, only large companies are able to maintain the market relationships needed to be zero-rated. In addition, we already knew how zero rating can be used in anti-competitive ways and discriminates against low-income users, which is why EFF supported California’s ban on most harmful zero-rating practices.
Ignoring the harms that anticompetitive zero rating does to net neutrality is essentially just doing the bidding of AT&T, which has regularly leveraged its data caps in an anticompetitive way. It is worth noting that the current administration is concerned that AT&T intends to use tactics like this to privilege Time Warner content over that of competitors. Antitrust law may be unsuited to address this problem. As we see more and more of these kinds of vertical mergers, we need rules on zero rating to protect consumers.
While the 2015 Open Internet Order’s “general conduct rule” covering zero rating was too vague, a narrower alternative, like that in California’s net neutrality law, would ensure lower prices and keep ISPs from steering users to privileged websites and services. FCC staff had in fact found AT&T’s and Verizon’s zero rating practices to be in violation of the 2015 Order under the general conduct rule (which is not included in the three bills that have been introduced), but those investigations were terminated by FCC Chairman Ajit Pai before initiating the process to repeal net neutrality.
People Want Their ISP Privacy Rights Back
When all broadband access companies were classified under Title II of the Communications Act, Section 222 of the Act gave users a legal right to privacy when we use broadband communications. It also imposed a duty on your ISP to protect your personal information. In light of major wireless and broadband companies’ creation of a black market for bounty hunters (and everyone else) to be able to purchase the physical location of any American, it’s really important to restore these privacy rights. Over 90 percent of Americans feel that they have lost control of their personal information when they use the Internet, so restoring ISP privacy rules should be a part of any new legislation.
Congress made a huge mistake when it reversed and prohibited the widely supported FCC privacy rules that stemmed from the 2015 Open Internet Order. Congress still appears headed in the wrong direction on consumer privacy when it openly entertains preempting strong state privacy laws (such as Illinois’ BIPA and California’s CCPA) not just on behalf of big ISPs but also at the request of Google and Facebook. But should a new communications law come into focus, reinstating Section 222’s protections would yield a huge benefit to users. ISPs are the only entities that are able to track your entire Internet experience because you have to tell them where you want to go. Virtual private networks (VPNs) offer a partial fix at best. It makes little sense for Congress to ignore consumer privacy laws it already has on the books and not reapply them to broadband access companies once again.
We Need More Competition in Broadband Internet, Which the 2015 Open Internet Order Promoted
Dozens of small ISPs wrote the Federal Communications Commission (FCC) and asked them not to abandon the Open Internet Order because it provided a clear framework to address anticompetitive conduct by the largest players in the market (AT&T, Comcast, and Verizon). Specifically, being classified as a common carrier under Title II of the Act applied Section 251, which required ISPs to “interconnect” with other broadband providers and related market players in good faith. This prevents large players from leveraging their size to harm the competition.
The dispute made most famous by comedian John Oliver was between Comcast and Netflix, where Comcast demanded new payments from Netflix simply because it had the leverage. Large ISPs regularly misrepresent the cost of providing access to video services from their competitors; when Comcast demanded new fees, its estimated cost was a fraction of a penny per hour of HD video viewed, and falling. Other, less publicized disputes exist, including two between Comcast and unknown edge providers that came to light in a court filing after the passage of California’s net neutrality law (SB 822). Ultimately, what this boils down to is whether interconnection charges become a rent-seeking opportunity for big ISPs, as they have in many parts of the world.
The other pro-competition outcome of classifying broadband companies under Title II was the application of Section 224 of the Communications Act, otherwise known as “pole attachment rights.” Under the Open Internet Order, anyone selling broadband access was given a legal right to access infrastructure such as the poles outside your home that run wires. Given that close to 60 to 80 percent of the cost of deploying a network can be attributed to local civil works like digging up the roads, equal access to infrastructure already built helps reduce the cost of market entry. Knowing this cost barrier, it should surprise no one that when an ISP owns the infrastructure it will categorically deny access to competitors, much like AT&T did with Google Fiber. Today, under the Restoring Internet Freedom Order, only telephone companies (like AT&T and Verizon) and cable television companies (like Comcast) have legal rights to infrastructure. New entrants that sell competitive broadband access, like Common Networks of Alameda, are forced to explore more difficult workarounds, such as asking residents to offer a portion of their rooftops.
Public Safety Needs a Referee
Despite the fact that Verizon has admitted fault for throttling California firefighters’ data and trying to upsell them during one of the worst fires in the state’s history, the FCC has done nothing to proactively address the problem. This is despite the problem remaining unresolved in Santa Clara County months after the fact. And that is because without its Title II authority under Section 201 and Section 202, the FCC can do literally nothing about Verizon’s conduct. Such an outcome raised serious questions at the D.C. Circuit’s oral arguments on the Restoring Internet Freedom Order, as judges openly questioned the FCC’s wisdom in letting first responders navigate this field alone, despite the FCC’s legal duty to address public safety. As Santa Clara County’s attorney Danielle Goldstein pointed out during oral arguments, it is not rational to expect public safety entities to come to the FCC after an emergency occurs. Given the life and death matters involved, avoiding this issue carries extreme risks of recurrence, not because ISPs are bad actors, but because it is not their job to figure out the balancing act between their for-profit duties and the less profitable needs of public safety. That has always been a government responsibility.
There is more at stake in the battle for net neutrality than preventing ISPs from blocking, throttling, or engaging in paid prioritization. Bills that are limited to those three rules ignore the high-speed cable monopoly problem that tens of millions of Americans face, and how a lack of privacy protections harms broadband adoption. These bills miss the larger impact of the 2015 rules and ask the public, which overwhelmingly opposed the Restoring Internet Freedom Order, to accept only a fraction of its benefits. The public deserves better.
from Deeplinks https://ift.tt/2NgrcJa
0 notes
sheminecrafts · 5 years
Text
Most Facebook users still in the dark about its creepy ad practices, Pew finds
A study by the Pew Research Center suggests most Facebook users are still in the dark about how the company tracks and profiles them for ad-targeting purposes.
Pew found three-quarters (74%) of Facebook users did not know the social networking behemoth maintains a list of their interests and traits to target them with ads, only discovering this when researchers directed them to view their Facebook ad preferences page.
A majority (51%) of Facebook users also told Pew they were uncomfortable with Facebook compiling the information.
Meanwhile, more than a quarter (27%) said the ad preferences listing Facebook had generated did not describe them very accurately, or at all.
The researchers also found that 88% of polled users had some material generated for them on the ad preferences page. Pew’s findings come from a survey of a nationally representative sample of 963 U.S. Facebook users ages 18 and older, which was conducted between September 4 and October 1, 2018, using GfK’s KnowledgePanel.
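For a sense of how precise figures like the 74% and 51% above are, a back-of-the-envelope margin-of-error check on a sample of 963 can be done with the standard simple-random-sample formula (a rough approximation; Pew’s actual panel methodology uses weighting, so its published margins will differ slightly):

```python
# Rough 95% margin of error for a simple random sample,
# widest at an observed proportion of p = 0.5.
import math

def margin_of_error(n, p=0.5, z=1.96):
    # z * sqrt(p * (1 - p) / n)
    return z * math.sqrt(p * (1 - p) / n)

moe = margin_of_error(963)
print(f"{moe:.1%}")  # about 3.2%
```

So a reported figure like 74% is, under this approximation, accurate to within roughly plus or minus 3 percentage points.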
In a Senate hearing last year, Facebook founder Mark Zuckerberg claimed users have “complete control” over both information they actively choose to upload to Facebook and data about them the company collects in order to target ads.
But the key question remains how Facebook users can be in complete control when most of them don’t know what the company is doing. This is something U.S. policymakers should have front of mind as they work on drafting a comprehensive federal privacy law.
Pew’s findings suggest Facebook’s greatest ‘defence’ against users exercising what little control it affords them over information its algorithms link to their identity is a lack of awareness about how the Facebook adtech business functions.
After all, the company markets the platform as a social communications service for staying in touch with people you know, not a mass surveillance people-profiling ad-delivery machine. So unless you’re deep in the weeds of the adtech industry there’s little chance for the average Facebook user to understand what Mark Zuckerberg has described as “all the nuances of how these services work”.
Having a creepy feeling that ads are stalking you around the Internet hardly counts.
At the same time, users being in the dark about the information dossiers Facebook maintains on them is not a bug but a feature of the company’s business, which directly benefits by minimizing the proportion of people who opt out of having their interests categorized for ad targeting because they have no idea it’s happening. (And relevant ads are likely more clickable, and thus more lucrative, for Facebook.)
Hence Zuckerberg’s plea to policymakers last April for “a simple and practical set of — of ways that you explain what you are doing with data… that’s not overly restrictive on — on providing the services”.
(Or, to put it another way: If you must regulate privacy let us simplify explanations using cartoon-y abstraction that allows for continued obfuscation of exactly how, where and why data flows.)
From the user point of view, even if you know Facebook offers ad management settings it’s still not simple to locate and understand them, requiring navigating through several menus that are not prominently sited on the platform, and which are also complex, with multiple interactions possible. (Such as having to delete every inferred interest individually.) 
The average Facebook user is unlikely to look past the latest few posts in their newsfeed, let alone go proactively hunting for a boring-sounding ‘ad management’ setting and spend time figuring out what each click and toggle does (in some cases users are required to hover over an interest in order to view a cross that indicates they can in fact remove it, so there’s plenty of dark pattern design at work here too).
And all the while Facebook is putting a heavy sell on, in the self-serving ad ‘explanations’ it does offer, spinning the line that ad targeting is useful for users. What’s not spelt out is the huge privacy trade off it entails — aka Facebook’s pervasive background surveillance of users and non-users.
Nor does it offer a complete opt-out of being tracked and profiled; rather its partial ad settings let users “influence what ads you see”. 
But influencing is not the same as controlling, whatever Zuckerberg claimed in Congress. So, as it stands, there is no simple way for Facebook users to understand their ad options because the company only lets them twiddle a few knobs rather than shut down the entire surveillance system.
The company’s algorithmic people profiling also extends to labelling users as having particular political views, and/or having racial and ethnic/multicultural affinities.
Pew researchers asked about these two specific classifications too — and found that around half (51%) of polled users had been assigned a political affinity by Facebook; and around a fifth (21%) were badged as having a “multicultural affinity”.
Of those users who Facebook had put into a particular political bucket, a majority (73%) said the platform’s categorization of their politics was very or somewhat accurate; but more than a quarter (27%) said it was not very or not at all an accurate description of them.
“Put differently, 37% of Facebook users are both assigned a political affinity and say that affinity describes them well, while 14% are both assigned a category and say it does not represent them accurately,” it writes.
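Pew’s “put differently” figures are simply the product of the share of users assigned a political affinity and the accuracy splits among them, which a couple of lines of arithmetic confirm:

```python
# Reproducing Pew's combined figures from the two percentages above:
# 51% of users were assigned a political affinity, and of those,
# 73% said it fit them well while 27% said it did not.
assigned = 0.51
accurate, inaccurate = 0.73, 0.27

well_described = assigned * accurate      # assigned AND say it fits
badly_described = assigned * inaccurate   # assigned AND say it doesn't

print(round(well_described, 2), round(badly_described, 2))  # 0.37 0.14
```

That reproduces the 37% and 14% of all Facebook users cited in the report.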
Use of people’s personal data for political purposes has triggered some major scandals for Facebook’s business in recent years. Such as the Cambridge Analytica data misuse scandal — when user data was shown to have been extracted from the platform en masse, and without proper consents, for campaign purposes.
In other instances Facebook ads have also been used to circumvent campaign spending rules in elections. Such as during the UK’s 2016 EU referendum vote when large numbers of ads were non-transparently targeted with the help of social media platforms.
And indeed to target masses of political disinformation to carry out election interference. Such as the Kremlin-backed propaganda campaign during the 2016 US presidential election.
Last year the UK data watchdog called for an ethical pause on use of social media data for political campaigning, such is the scale of its concern about data practices uncovered during a lengthy investigation.
Yet the fact that Facebook’s own platform natively badges users’ political affinities frequently gets overlooked in the discussion around this issue.
For all the outrage generated by revelations that Cambridge Analytica had tried to use Facebook data to apply political labels on people to target ads, such labels remain a core feature of the Facebook platform — allowing any advertiser, large or small, to pay Facebook to target people based on where its algorithms have determined they sit on the political spectrum, and do so without obtaining their explicit consent. (Yet under European data protection law political beliefs are deemed sensitive information, and Facebook is facing increasing scrutiny in the region over how it processes this type of data.)
Of those users who Pew found had been badged by Facebook as having a “multicultural affinity” — another algorithmically inferred sensitive data category — 60% told it they do in fact have a very or somewhat strong affinity for the group to which they are assigned; while more than a third (37%) said their affinity for that group is not particularly strong.
“Some 57% of those who are assigned to this category say they do in fact consider themselves to be a member of the racial or ethnic group to which Facebook assigned them,” Pew adds.
It found that 43% of those given an affinity designation are said by Facebook’s algorithm to have an interest in African American culture; the same share (43%) is assigned an affinity with Hispanic culture, while one in ten are assigned an affinity with Asian American culture.
(Facebook’s targeting tool for ads does not offer affinity classifications for any other cultures in the U.S., including Caucasian or white culture, Pew also notes, thereby underlining one inherent bias of its system.)
In recent years the ethnic affinity label that Facebook’s algorithm sticks to users has caused specific controversy after it was revealed to have been enabling the delivery of discriminatory ads.
As a result, in late 2016, Facebook said it would disable ad targeting using the ethnic affinity label for protected categories of housing, employment and credit-related ads. But a year later its ad review systems were found to be failing to block potentially discriminatory ads.
The act of Facebook sticking labels on people clearly creates plenty of risk — be that from election interference or discriminatory ads (or, indeed, both).
Risk that a majority of users don’t appear comfortable with once they realize it’s happening.
And therefore also future risk for Facebook’s business as more regulators turn their attention to crafting privacy laws that can effectively safeguard consumers from having their personal data exploited in ways they don’t like. (And which might disadvantage them or generate wider societal harms.)
Commenting about Facebook’s data practices, Michael Veale, a researcher in data rights and machine learning at University College London, told us: “Many of Facebook’s data processing practices appear to violate user expectations, and the way they interpret the law in Europe is indicative of their concern around this. If Facebook agreed with regulators that inferred political opinions or ‘ethnic affinities’ were just the same as collecting that information explicitly, they’d have to ask for separate, explicit consent to do so — and users would have to be able to say no to it.
“Similarly, Facebook argues it is ‘manifestly excessive’ for users to ask to see the extensive web and app tracking data they collect and hold next to your ID to generate these profiles — something I triggered a statutory investigation into with the Irish Data Protection Commissioner. You can’t help but suspect that it’s because they’re afraid of how creepy users would find seeing a glimpse of the true breadth of their invasive user and non-user data collection.”
In a second survey, conducted between May 29 and June 11, 2018 using Pew’s American Trends Panel with a representative sample of all U.S. adults who use social media (including Facebook and other platforms like Twitter and Instagram), Pew researchers found social media users generally believe it would be relatively easy for the platforms they use to determine key traits about them from the data they have amassed about their behaviors.
“Majorities of social media users say it would be very or somewhat easy for these platforms to determine their race or ethnicity (84%), their hobbies and interests (79%), their political affiliation (71%) or their religious beliefs (65%),” Pew writes.
While less than a third (28%) believe it would be difficult for the platforms to figure out their political views, it adds.
So even while most people do not understand exactly what social media platforms are doing with information collected and inferred about them, once they’re asked to think about the issue most believe it would be easy for tech firms to join data dots around their social activity and make sensitive inferences about them.
Commenting generally on the research, Pew’s director of internet and technology research, Lee Rainie, said its aim was to try to bring some data to debates about consumer privacy, the role of micro-targeting of advertisements in commerce and political activity, and how algorithms are shaping news and information systems.
Update: Responding to Pew’s research, Facebook sent us the following statement:
We want people to understand how our ad settings and controls work. That means better ads for people. While we and the rest of the online ad industry need to do more to educate people on how interest-based advertising works and how we protect people’s information, we welcome conversations about transparency and control.
from iraidajzsmmwtv https://tcrn.ch/2RuM1G4 via IFTTT
0 notes
The act of Facebook sticking labels on people clearly creates plenty of risk — be that from election interference or discriminatory ads (or, indeed, both).
Risk that a majority of users don’t appear comfortable with once they realize it’s happening.
And therefore also future risk for Facebook’s business as more regulators turn their attention to crafting privacy laws that can effectively safeguard consumers from having their personal data exploited in ways they don’t like. (And which might disadvantage them or generate wider societal harms.)
Commenting about Facebook’s data practices, Michael Veale, a researcher in data rights and machine learning at University College London, told us: “Many of Facebook’s data processing practices appear to violate user expectations, and the way they interpret the law in Europe is indicative of their concern around this. If Facebook agreed with regulators that inferred political opinions or ‘ethnic affinities’ were just the same as collecting that information explicitly, they’d have to ask for separate, explicit consent to do so — and users would have to be able to say no to it.
“Similarly, Facebook argues it is ‘manifestly excessive’ for users to ask to see the extensive web and app tracking data they collect and hold next to your ID to generate these profiles — something I triggered a statutory investigation into with the Irish Data Protection Commissioner. You can’t help but suspect that it’s because they’re afraid of how creepy users would find seeing a glimpse of the the truth breadth of their invasive user and non-user data collection.”
In a second survey, conducted between May 29 and June 11, 2018 using Pew’s American Trends Panel and of a representative sample of all U.S. adults who use social media (including Facebook and other platforms like Twitter and Instagram), Pew researchers found social media users generally believe it would be relatively easy for social media platforms they use to determine key traits about them based on the data they have amassed about their behaviors.
“Majorities of social media users say it would be very or somewhat easy for these platforms to determine their race or ethnicity (84%), their hobbies and interests (79%), their political affiliation (71%) or their religious beliefs (65%),” Pew writes.
While less than a third (28%) believe it would be difficult for the platforms to figure out their political views, it adds.
So even while most people do not understand exactly what social media platforms are doing with information collected and inferred about them, once they’re asked to think about the issue most believe it would be easy for tech firms to join data dots around their social activity and make sensitive inferences about them.
Commenting generally on the research, Pew’s director of internet and technology research, Lee Rainie, said its aim was to try to bring some data to debates about consumer privacy, the role of micro-targeting of advertisements in commerce and political activity, and how algorithms are shaping news and information systems.
Update: Responding to Pew’s research in a statement, Facebook said:
We want people to see better ads — it’s a better outcome for people, businesses, and Facebook when people see ads that are more relevant to their actual interests. One way we do this is by giving people ways to manage the type of ads they see. Pew’s findings underscore the importance of transparency and control across the entire ad industry, and the need for more consumer education around the controls we place at people’s fingertips. This year we’re doing more to make our settings easier to use and hosting more in-person events on ads and privacy.
source https://techcrunch.com/2019/01/16/most-facebook-users-still-in-the-dark-about-its-creepy-ad-practices-pew-finds/
0 notes
fmservers · 5 years
Text
Most Facebook users still in the dark about its creepy ad practices, Pew finds
A study by the Pew Research Center suggests most Facebook users are still in the dark about how the company tracks and profiles them for ad-targeting purposes.
Pew found three-quarters (74%) of Facebook users did not know the social networking behemoth maintains a list of their interests and traits to target them with ads, only discovering this when researchers directed them to view their Facebook ad preferences page.
A majority (51%) of Facebook users also told Pew they were uncomfortable with Facebook compiling the information.
And more than a quarter (27%) said the ad preference listing Facebook had generated did not represent them very accurately, or at all.
The researchers also found that 88% of polled users had some material generated for them on the ad preferences page. Pew’s findings come from a survey of a nationally representative sample of 963 U.S. Facebook users ages 18 and older, conducted from September 4 to October 1, 2018, using GfK’s KnowledgePanel.
In a senate hearing last year Facebook founder Mark Zuckerberg claimed users have “complete control” over both information they actively choose to upload to Facebook and data about them the company collects in order to target ads.
But the key question remains: how can Facebook users be in complete control when most of them don’t know what the company is doing? This is something U.S. policymakers should have front of mind as they work on drafting a comprehensive federal privacy law.
Pew’s findings suggest Facebook’s greatest ‘defence’ against users exercising what little control it affords them over information its algorithms link to their identity is a lack of awareness about how the Facebook adtech business functions.
After all, the company markets the platform as a social communications service for staying in touch with people you know, not a mass surveillance people-profiling ad-delivery machine. So unless you’re deep in the weeds of the adtech industry there’s little chance for the average Facebook user to understand what Mark Zuckerberg has described as “all the nuances of how these services work”.
Having a creepy feeling that ads are stalking you around the Internet hardly counts.
At the same time, users being in the dark about the information dossiers Facebook maintains on them is not a bug but a feature for the company’s business — which directly benefits by being able to minimize the proportion of people who opt out of having their interests categorized for ad targeting because they have no idea it’s happening. (And relevant ads are likely more clickable and thus more lucrative for Facebook.)
Hence Zuckerberg’s plea to policymakers last April for “a simple and practical set of — of ways that you explain what you are doing with data… that’s not overly restrictive on — on providing the services”.
(Or, to put it another way: If you must regulate privacy let us simplify explanations using cartoon-y abstraction that allows for continued obfuscation of exactly how, where and why data flows.)
From the user point of view, even if you know Facebook offers ad management settings it’s still not simple to locate and understand them, requiring navigating through several menus that are not prominently sited on the platform, and which are also complex, with multiple interactions possible. (Such as having to delete every inferred interest individually.) 
The average Facebook user is unlikely to look past the latest few posts in their newsfeed, let alone go proactively hunting for a boring-sounding ‘ad management’ setting and spend time figuring out what each click and toggle does (in some cases users are required to hover over an interest in order to view a cross that indicates they can in fact remove it, so there’s plenty of dark pattern design at work here too).
And all the while Facebook is putting a heavy sell on, in the self-serving ad ‘explanations’ it does offer, spinning the line that ad targeting is useful for users. What’s not spelt out is the huge privacy trade off it entails — aka Facebook’s pervasive background surveillance of users and non-users.
Nor does it offer a complete opt-out of being tracked and profiled; rather its partial ad settings let users “influence what ads you see”. 
But influencing is not the same as controlling, whatever Zuckerberg claimed in Congress. So, as it stands, there is no simple way for Facebook users to understand their ad options because the company only lets them twiddle a few knobs rather than shut down the entire surveillance system.
The company’s algorithmic people profiling also extends to labelling users as having particular political views, and/or having racial and ethnic/multicultural affinities.
Pew researchers asked about these two specific classifications too — and found that around half (51%) of polled users had been assigned a political affinity by Facebook; and around a fifth (21%) were badged as having a “multicultural affinity”.
Of those users who Facebook had put into a particular political bucket, a majority (73%) said the platform’s categorization of their politics was very or somewhat accurate; but more than a quarter (27%) said it was not very or not at all an accurate description of them.
“Put differently, 37% of Facebook users are both assigned a political affinity and say that affinity describes them well, while 14% are both assigned a category and say it does not represent them accurately,” it writes.
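Pew’s combined figures are straightforward to reproduce by multiplying the two survey shares together. A quick check (the variable names below are illustrative labels, not Pew’s own terminology):

```python
# Reproducing Pew's combined figures from the two shares quoted above.
# Variable names are illustrative labels, not Pew's terminology.
assigned_political_affinity = 0.51  # share of users Facebook labeled politically
said_accurate = 0.73                # of those labeled: very/somewhat accurate
said_inaccurate = 0.27              # of those labeled: not very/not at all accurate

print(f"{assigned_political_affinity * said_accurate:.0%}")    # prints 37%
print(f"{assigned_political_affinity * said_inaccurate:.0%}")  # prints 14%
```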
Use of people’s personal data for political purposes has triggered some major scandals for Facebook’s business in recent years. Such as the Cambridge Analytica data misuse scandal — when user data was shown to have been extracted from the platform en masse, and without proper consents, for campaign purposes.
In other instances Facebook ads have also been used to circumvent campaign spending rules in elections. Such as during the UK’s 2016 EU referendum vote when large numbers of ads were non-transparently targeted with the help of social media platforms.
And indeed to target masses of political disinformation to carry out election interference. Such as the Kremlin-backed propaganda campaign during the 2016 US presidential election.
Last year the UK data watchdog called for an ethical pause on use of social media data for political campaigning, such is the scale of its concern about data practices uncovered during a lengthy investigation.
Yet the fact that Facebook’s own platform natively badges users’ political affinities frequently gets overlooked in the discussion around this issue.
For all the outrage generated by revelations that Cambridge Analytica had tried to use Facebook data to apply political labels on people to target ads, such labels remain a core feature of the Facebook platform — allowing any advertiser, large or small, to pay Facebook to target people based on where its algorithms have determined they sit on the political spectrum, and do so without obtaining their explicit consent. (Yet under European data protection law political beliefs are deemed sensitive information, and Facebook is facing increasing scrutiny in the region over how it processes this type of data.)
Of those users who Pew found had been badged by Facebook as having a “multicultural affinity” — another algorithmically inferred sensitive data category — 60% told it they do in fact have a very or somewhat strong affinity for the group to which they are assigned; while more than a third (37%) said their affinity for that group is not particularly strong.
“Some 57% of those who are assigned to this category say they do in fact consider themselves to be a member of the racial or ethnic group to which Facebook assigned them,” Pew adds.
It found that 43% of those given an affinity designation are said by Facebook’s algorithm to have an interest in African American culture; the same share (43%) is assigned an affinity with Hispanic culture, while one-in-ten are assigned an affinity with Asian American culture.
(Facebook’s targeting tool for ads does not offer affinity classifications for any other cultures in the U.S., including Caucasian or white culture, Pew also notes, thereby underlining one inherent bias of its system.)
In recent years the ethnic affinity label that Facebook’s algorithm sticks to users has caused specific controversy after it was revealed to have been enabling the delivery of discriminatory ads.
As a result, in late 2016, Facebook said it would disable ad targeting using the ethnic affinity label for protected categories of housing, employment and credit-related ads. But a year later its ad review systems were found to be failing to block potentially discriminatory ads.
The act of Facebook sticking labels on people clearly creates plenty of risk — be that from election interference or discriminatory ads (or, indeed, both).
Risk that a majority of users don’t appear comfortable with once they realize it’s happening.
And therefore also future risk for Facebook’s business as more regulators turn their attention to crafting privacy laws that can effectively safeguard consumers from having their personal data exploited in ways they don’t like. (And which might disadvantage them or generate wider societal harms.)
Commenting about Facebook’s data practices, Michael Veale, a researcher in data rights and machine learning at University College London, told us: “Many of Facebook’s data processing practices appear to violate user expectations, and the way they interpret the law in Europe is indicative of their concern around this. If Facebook agreed with regulators that inferred political opinions or ‘ethnic affinities’ were just the same as collecting that information explicitly, they’d have to ask for separate, explicit consent to do so — and users would have to be able to say no to it.
“Similarly, Facebook argues it is ‘manifestly excessive’ for users to ask to see the extensive web and app tracking data they collect and hold next to your ID to generate these profiles — something I triggered a statutory investigation into with the Irish Data Protection Commissioner. You can’t help but suspect that it’s because they’re afraid of how creepy users would find seeing a glimpse of the true breadth of their invasive user and non-user data collection.”
In a second survey, conducted between May 29 and June 11, 2018 using Pew’s American Trends Panel with a representative sample of all U.S. adults who use social media (including Facebook and other platforms like Twitter and Instagram), Pew researchers found social media users generally believe it would be relatively easy for the platforms they use to determine key traits about them based on the data they have amassed about their behaviors.
“Majorities of social media users say it would be very or somewhat easy for these platforms to determine their race or ethnicity (84%), their hobbies and interests (79%), their political affiliation (71%) or their religious beliefs (65%),” Pew writes.
While less than a third (28%) believe it would be difficult for the platforms to figure out their political views, it adds.
So even while most people do not understand exactly what social media platforms are doing with information collected and inferred about them, once they’re asked to think about the issue most believe it would be easy for tech firms to join data dots around their social activity and make sensitive inferences about them.
Commenting generally on the research, Pew’s director of internet and technology research, Lee Rainie, said its aim was to try to bring some data to debates about consumer privacy, the role of micro-targeting of advertisements in commerce and political activity, and how algorithms are shaping news and information systems.
Via Natasha Lomas https://techcrunch.com
0 notes
johnaculbreath · 5 years
Text
The top 10 law and tech news stories of 2018
By Jason Tashea
Posted December 21, 2018, 6:30 am CST
What were 2018's most important legal tech stories? ABA Journal legal affairs writer Jason Tashea takes a look back on the biggest news in law and technology, including Facebook’s never-ending missteps, the creation of new data privacy standards and the destruction of federal net neutrality.
1. The Fourth Amendment evolves for a digital era.
One of the year’s U.S. Supreme Court barnburners was Carpenter v. United States, in which a 5-4 court found that a warrant was necessary when seeking geolocation data about a suspect from a service provider.
“Given the unique nature of cellphone location records, the fact that the information is held by a third party does not by itself overcome the user’s claim to Fourth Amendment protection,” Chief Justice John G. Roberts Jr. wrote in the majority opinion.
At the ABA Annual Meeting in Chicago this year, Electronic Privacy Information Center senior counsel Alan Butler told the ABA Journal that Carpenter is a major inflection point for the Fourth Amendment.
“I think many will look back on Fourth Amendment cases as before Carpenter and after Carpenter,” he said.
Even with this win for privacy advocates, there are plenty of unresolved issues as law enforcement agencies wrangle with technology and their search powers. For example, it was reported this year that the FBI and police in North Carolina are using “reverse” search warrants to collect cellphone location data of individuals near crime scenes. These warrants lack specificity and do not identify suspects.
2. Data protection gets some teeth.
Speaking of inflection points, data privacy got one May 25, when the European Union’s General Data Protection Regulation went into effect.
The GDPR, which replaced a 1995 EU directive, covers topics as diverse as a right to be forgotten and an individual’s ability to confront automated decision-making systems.
Enforcement of the law will be done through data protection authorities, which are government agencies in the EU member states. Failure to comply could be devastating—a company could be fined up to 4 percent of its global annual revenue.
In June, California passed the California Consumer Privacy Act of 2018, the country’s strictest consumer privacy law.
The law applies to any company that does business in California and has gross revenues above $25 million; annually buys, receives or sells personal information of 50,000 or more consumers, households or devices; or derives 50 percent or more of its annual revenue from selling personal information.
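Read as a rule, that applicability test is a disjunction of three thresholds layered on a does-business-in-California requirement. A minimal sketch of the logic (not legal advice; the field names and simplifications here are assumptions, not statutory language):

```python
from dataclasses import dataclass

@dataclass
class Business:
    in_california: bool                     # does business in California
    gross_revenue_usd: float                # gross annual revenue
    ca_consumer_records: int                # consumers/households/devices per year
    revenue_share_from_selling_data: float  # fraction of revenue, 0.0 to 1.0

def ccpa_applies(b: Business) -> bool:
    """Meeting any one of the three thresholds triggers the law."""
    if not b.in_california:
        return False
    return (b.gross_revenue_usd > 25_000_000
            or b.ca_consumer_records >= 50_000
            or b.revenue_share_from_selling_data >= 0.5)
```

Notably, the third prong means a small data broker that earns most of its revenue selling personal information is covered even if it falls far below the revenue and record-count thresholds.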
Tracking parts of the EU’s GDPR, the CCPA gives consumers access to their data, the power to have that data deleted and the ability to opt out of having their data sold. California passed a separate law protecting the data collected and transmitted by internet-enabled devices.
The need for these laws and smarter cybersecurity was underscored by the hack of hotel company Marriott International revealed in November, which spilled the data of as many as 500 million customers, ranging from names to contact information to passport numbers. Experts told Wired the breach was as big as it was because it went on for about four years; it was a vulnerability Marriott inherited when it bought Starwood Hotels and Resorts Worldwide in 2016.
3. Facebook’s terrible, horrible, no good, very bad year.
Already under fire for its role propagating misinformation that may have affected the 2016 presidential election, Facebook got more bad news this year.
In March, the New York Times detailed the links between the social network giant and London-based Cambridge Analytica, a consulting firm that had been involved with the presidential campaigns of U.S. Sen. Ted Cruz (R-Texas) and then-candidate Donald Trump. The company had access to personal data of about 87 million Facebook users and was able to mine that data in order to better target potential voters.
Shortly after this news broke, the Federal Trade Commission opened an investigation into Facebook’s privacy practices. The results of that investigation are still forthcoming. Company CEO Mark Zuckerberg and Chief Operating Officer Sheryl Sandberg have since testified in front of Congress and the European Parliament.
In December, a British Parliamentary committee made public internal Facebook emails illustrating how the company would favor some companies and punish others. Reaction was harsh on both sides of the Atlantic. Claude Moraes, a member of the European Parliament, called for tougher regulations of the “possible monopoly” of social media giants.
In the States, Columbia University law professor Tim Wu tweeted that the recent revelations should bring more attention to Facebook’s antitrust and anti-competitive behavior.
The antitrust / competition implications of the latest Facebook revelations need be discussed more by the media. Among other things, I think they clearly add to the case that the acquisition of WhatsApp was illegal; also suggest exclusionary conduct against Twitter/Vine
— Tim Wu (@superwuster) December 5, 2018
4. Net neutrality, we hardly knew ye.
On June 11, net neutrality ceased to exist at the federal level.
Originally passed in 2015 and upheld by the U.S. Court of Appeals for the D.C. Circuit in 2016, the rules required that internet service providers like AT&T and Comcast treat all web traffic equally. This meant that, for example, providers would not be allowed to “throttle” or change the speed with which a person accesses a website. The repeal allows ISPs to block or slow some online traffic. In other cases, the provider can negotiate with a website for “fast lanes” to users.
There was a last-minute attempt by Congress to save the rule. While the Senate passed a bill in a rare showing of bipartisan support, the legislation did not gain traction in the House.
With no federal rules on the books, California passed its own net neutrality law this year. The Department of Justice sued to stop the state’s rules from going into effect. The case is ongoing.
California is not alone. Governors in six states signed executive orders supporting net neutrality principles and three states enacted net neutrality legislation this year, according to the National Conference of State Legislatures.
5. Trump makes a Chinese phone company great again.
Chinese telecom company ZTE Corp. was saved from collapse this year.
The company, which uses American technology, pleaded guilty in 2017 to exporting that technology to Iran and North Korea, agreeing to pay a $1.19 billion penalty. In April, the U.S. Department of Commerce determined that ZTE made false statements on its compliance documents. This led the U.S. to ban American companies from exporting products to ZTE for seven years. In response, ZTE suspended its major operations.
In May, President Donald Trump stated he would work with Chinese president Xi Jinping to end the ban. Under a new settlement, the ban was lifted in July.
The Senate version of the National Defense Authorization Act for Fiscal Year 2019 blocked the settlement, but a similar bill in the House that the president ultimately signed let the deal stand.
Beyond working around U.S. sanctions, many believe that the company—which has close ties to the Chinese government—and its products are a national security threat. The National Defense Authorization Act largely banned the use of ZTE and rival Chinese company Huawei’s products by U.S. agencies and government contractors.
“I think both Huawei, ZTE and multiple other Chinese companies pose a threat to our national interests—our national economic interests, and our national security interests,” U.S. Sen. Marco Rubio (R-Fla.) said on CBS’ Face the Nation this month.
In December, Japan banned the two companies’ hardware from its 4G networks. Canada, where Huawei’s CFO was arrested this month at the behest of the American government, is considering a similar ban.
6. Mega media mergers are proposed.
Telecommunications and media companies made big moves in 2018.
In June, T-Mobile filed with the Federal Communications Commission to buy rival Sprint for $26 billion. The companies are respectively the third and fourth biggest cellular carriers in the country. In September, the FCC announced it would take more time to review the proposed deal. While speaking at a conference in Barcelona, J. Braxton Carter, T-Mobile chief financial officer, said the deal may close in the first half of 2019, according to Reuters.
Walt Disney Co. had announced its plan to buy 21st Century Fox for $52.4 billion in stock back in 2017, but Comcast set off a bidding war this year. Once the dust settled, Disney won with a $71.3 billion counteroffer.
The deal is expected to close in the early part of 2019. Reuters reports that Brazil’s antitrust regulator released a report in December that raised concerns about concentration of power and market control if the merger went through.
The third major deal is in a protracted legal dispute. A federal judge greenlighted AT&T’s proposed $85 billion purchase of Time Warner in June, but the Department of Justice has fought against the decision. In a filing, attorneys for the DOJ argued that U.S. District Judge Richard Leon of Washington, D.C., ignored “fundamental principles of economics and common sense” in making his decision.
The case is on appeal at the U.S. Court of Appeals for the D.C. Circuit, and arguments took place earlier in December.
7. Apple and Samsung call a truce.
The heavyweight matchup between smartphone producers is finally over.
Dating back to 2011, the litigation surrounded design and utility patents and involved multiple retrials and appeals, including one round at the U.S. Supreme Court. The appeals largely pertained to significant jury awards for Apple, including one from a 2012 trial totaling $1 billion. Those damages were lowered to $930 million after a retrial.
In June, the companies finally settled outstanding claims and counterclaims in the Northern District of California. U.S. District Judge Lucy Koh signed the order dismissing with prejudice, meaning the claims cannot be brought again.
“This settlement marks the official end of the ‘smartphone patent wars,’” Brian Love, an assistant professor at the Santa Clara University School of Law, told CNET. “So, it seems like an opportune time to ask: After almost a decade of litigation, what was accomplished? I’d say very little.”
8. Manufacturers of self-driving cars temporarily pause tests after pedestrian fatality.
In March, an autonomous vehicle owned by ride-sourcing company Uber hit and killed a woman in Tempe, Arizona. It was likely the first known pedestrian fatality attributed to self-driving technology.
On the night of the incident, the car was traveling under the posted speed limit of 45 miles per hour. The vehicle did have a human driver in the driver’s seat; however, she was watching a television show on her phone at the time of the accident, according to a report by the Tempe Police Department.
Uber quickly settled with the victim’s family for an undisclosed amount, according to TechCrunch.
After the incident, Arizona Gov. Doug Ducey suspended Uber’s ability to test its technology on the state’s public streets. Uber also stopped tests of its self-driving cars in Pittsburgh, San Francisco and Toronto.
The criminal investigation was transferred to the Yavapai County Attorney, and the case is still under review, according to Penny Cramer, administrative assistant to Yavapai County Attorney Sheila Polk.
In December, Uber stated its intention to restart its tests. However, the vehicles will only be used on a path between two of the company’s Pittsburgh offices and will not be tested at night or in wet conditions. The cars will also be capped at 25 mph.
9. Legal tech acquisitions carry on.
Screenshot courtesy of Lexicata’s Twitter feed.
It’s been another year of significant legal technology mergers.
In January, Avvo was acquired by Internet Brands, which already owned the Martindale-Nolo Legal Marketing Network. This summer, Internet Brands announced it would discontinue Avvo Legal Services, a fixed-cost service, and in October, the company rebranded its legal offerings as Martindale-Avvo.
Legal research company Fastcase purchased legal technology company Docket Alarm and Law Street Media, a legal news site, marking an entrance into the media market. According to a press release at the time, Law Street will be retooled and relaunched in the second quarter of 2019 to highlight national and state legal news complemented by analytics provided by Fastcase’s other products.
In April, court technology company Tyler Technologies acquired analytics firm Socrata. In October, cloud computing platform Clio announced its acquisition of Lexicata, a cloud-based client intake and management tool.
In November, legal consulting and technology company Elevate Services bought data analysis and consulting company LexPredict, and in December it acquired contract management company Sumati Group.
This is just a small sample of this year’s numerous acquisitions.
10. Mugshots.com’s alleged owners get their own mugshots.
Thomas Keesee, Palm Beach County Sheriff’s Office; Sahar Sahid, Broward County Sheriff’s Office.
In a new step against online extortion schemes, the four alleged owners and operators of Mugshots.com were arrested and extradited to California this summer. The men face charges of extortion, money laundering and identity theft.
Mugshots.com and other similarly situated websites operate “depublishing” schemes where they collect public mugshot and arrest records and publish them to their site. When someone asks for the photo to be taken down, the website demands a fee. For many, paying one site leads to the mugshot appearing on a different website, according to an affidavit filed May 10.
The California Attorney General’s Office alleged in its release that over a three-year period, the defendants collected more than $64,000 in removal fees from approximately 175 individuals with California billing addresses; and during the same period collected more than $2 million in removal fees from approximately 5,703 individuals.
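For scale, the AG’s figures work out to a similar average fee per victim in both groups (rough division only; the reported totals are “more than” floors, so these are minimum averages):

```python
# Rough per-victim averages from the California AG's reported figures.
ca_fees, ca_victims = 64_000, 175             # California billing addresses
total_fees, total_victims = 2_000_000, 5_703  # all victims, same period

print(round(ca_fees / ca_victims))        # prints 366 (dollars per CA victim)
print(round(total_fees / total_victims))  # prints 351 (dollars per victim)
```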
States, including California, have attempted to rein in these websites, to dubious effect. According to the Pew Charitable Trusts, 18 states have passed laws to restrict mugshot websites; California banned charging money to take down photos in 2014.
But Pew points out that, because of the industry’s response and the lackluster action taken by law enforcement up until now, the success of these laws has been limited.
The top 10 law and tech news stories of 2018 republished via ABA Journal Daily News - Business of Law
theinvinciblenoob · 6 years
Link
The European Union’s executive body has signed up tech platforms and ad industry players to a voluntary  Code of Practice aimed at trying to do something about the spread of disinformation online.
Something, just not anything too specifically quantifiable.
According to the Commission, Facebook, Google, Twitter, Mozilla, some additional members of the EDIMA trade association, plus unnamed advertising groups are among those that have signed up to the self-regulatory code, which will apply in a month’s time.
Signatories have committed to taking not exactly prescribed actions in the following five areas:
Disrupting advertising revenues of certain accounts and websites that spread disinformation;
Making political advertising and issue based advertising more transparent;
Addressing the issue of fake accounts and online bots;
Empowering consumers to report disinformation and access different news sources, while improving the visibility and findability of authoritative content;
Empowering the research community to monitor online disinformation through privacy-compliant access to the platforms’ data.
Mariya Gabriel, the European commissioner for digital economy and society, described the Code as a first “important” step in tackling disinformation. And one she said will be reviewed by the end of the year to see how (or, well, whether) it’s functioning, with the door left open for additional steps to be taken if not. So in theory legislation remains a future possibility.
“This is the first time that the industry has agreed on a set of self-regulatory standards to fight disinformation worldwide, on a voluntary basis,” she said in a statement. “The industry is committing to a wide range of actions, from transparency in political advertising to the closure of fake accounts and demonetisation of purveyors of disinformation, and we welcome this.
“These actions should contribute to a fast and measurable reduction of online disinformation. To this end, the Commission will pay particular attention to its effective implementation.”
“I urge online platforms and the advertising industry to immediately start implementing the actions agreed in the Code of Practice to achieve significant progress and measurable results in the coming months,” she added. “I also expect more and more online platforms, advertising companies and advertisers to adhere to the Code of Practice, and I encourage everyone to make their utmost to put their commitments into practice to fight disinformation.”
Earlier this year a report by an expert group established by the Commission to help shape its response to the so-called ‘fake news’ crisis called for more transparency from online platforms, as well as urgent investment in media and information literacy education to empower journalists and foster a diverse and sustainable news media ecosystem.
Safe to say, no one has suggested there’s any kind of quick fix for the Internet enabling the accelerated spread of nonsense and lies.
Including the Commission’s own expert group, which offered an assorted pick’n’mix of ideas — set over various and some not-at-all-instant-fix timeframes.
Though the group was called out for failing to interrogate evidence around the role of behavioral advertising in the dissemination of fake news — which has arguably been piling up. (Certainly its potential to act as a disinformation nexus has been amply illustrated by the Facebook-Cambridge Analytica data misuse scandal, to name one recent example.)
The Commission is not doing any better on that front, either.
The executive has been working on formulating its response to what its expert group suggested should be referred to as ‘disinformation’ (i.e. rather than the politicized ‘fake news’ moniker) for more than a year now — after the European parliament adopted a Resolution, in June 2017, calling on it to examine the issue and look at existing laws and possible legislative interventions.
Elections for the European parliament are due next spring and MEPs are clearly concerned about the risk of interference. So the unelected Commission is feeling the elected parliament’s push here.
Disinformation — aka “verifiably false or misleading information” created and spread for economic gain and/or to deceive the public, and which “may cause public harm” such as “threats to democratic political and policymaking processes as well as public goods such as the protection of EU citizens’ health, the environment or security”, as the Commission’s new Code of Practice defines it — is clearly a slippery policy target.
And online, multiple players are implicated and involved in its spread.
But so too are multiple, powerful, well resourced adtech players incentivized to push to avoid any political disruption to their lucrative people-targeting business models.
In the Commission’s voluntary Code of Practice signatories merely commit to recognizing their role in “contributing to solutions to the challenge posed by disinformation”. 
“The Signatories recognise and agree with the Commission’s conclusions that “the exposure of citizens to large scale Disinformation, including misleading or outright false information, is a major challenge for Europe. Our open democratic societies depend on public debates that allow well-informed citizens to express their will through free and fair political processes,” runs the preamble.
“[T]he Signatories are mindful of the fundamental right to freedom of expression and to an open Internet, and the delicate balance which any efforts to limit the spread and impact of otherwise lawful content must strike.
“In recognition that the dissemination of Disinformation has many facets and is facilitated by and impacts a very broad segment of actors in the ecosystem, all stakeholders have roles to play in countering the spread of Disinformation.”
“Misleading advertising” is explicitly excluded from the scope of the code — which also presumably helped the Commission convince the ad industry to sign up to it.
Though that further risks muddying the waters of the effort, given that social media advertising has been the high-powered vehicle of choice for malicious misinformation muck-spreaders (such as Kremlin-backed agents of societal division).
The Commission is presumably trying to split the hairs of maliciously misleading fake ads (still bad because they’re not actually ads but malicious pretenders) and good old fashioned ‘misleading advertising’, though — which will continue to be dealt with under existing ad codes and standards.
Also excluded from the Code: “Clearly identified partisan news and commentary”. So purveyors of hyper biased political commentary are not intended to get scooped up here, either. 
Though again, plenty of Kremlin-generated disinformation agents have masqueraded as partisan news and commentary pundits, and from all sides of the political spectrum.
Hence, we must again assume, the Commission including the requirement to exclude this type of content where it’s “clearly identified”. Whatever that means.
Among the various ‘commitments’ tech giants and ad firms are agreeing to here are plenty of firmly fudgey sounding statements that call for a degree of effort from the undersigned. But without ever setting out explicitly how such effort will be measured or quantified.
For example:
The Signatories recognise that all parties involved in the buying and selling of online advertising and the provision of advertising-related services need to work together to improve transparency across the online advertising ecosystem and thereby to effectively scrutinise, control and limit the placement of advertising on accounts and websites belonging to purveyors of Disinformation.
Or
Relevant Signatories commit to use reasonable efforts towards devising approaches to publicly disclose “issue-based advertising”. Such efforts will include the development of a working definition of “issue-based advertising” which does not limit reporting on political discussion and the publishing of political opinion and excludes commercial
And
Relevant Signatories commit to invest in features and tools that make it easier for people to find diverse perspectives about topics of public interest.
Nor does the code exactly nail down the terms it’s using to set goals — raising tricky and even existential questions like who defines what’s “relevant, authentic, and authoritative” where information is concerned?
Which is really the core of the disinformation problem.
And also not an easy question for tech giants — which have sold their vast content distribution farms as neutral ‘platforms’ — to start to approach, let alone tackle. Hence their leaning so heavily on third party fact-checkers to try to outsource their lack of any editorial values. Because without editorial values there’s no compass; and without a compass how can you judge the direction of tonal travel?
And so we end up with very vague suggestions in the code like:
Relevant Signatories should invest in technological means to prioritize relevant, authentic, and authoritative information where appropriate in search, feeds, or other automatically ranked distribution channels
Only slightly less vague and woolly is a commitment that signatories will “put in place clear policies regarding identity and the misuse of automated bots” on the signatories’ services, and “enforce these policies within the EU”. (So presumably not globally, despite disinformation being able to wreak havoc everywhere.)
Though here the code only points to some suggestive measures that could be used to do that — and which are set out in a separate annex. This boils down to a list of some very, very broad-brush “best practice principles” (such as “follow the money”; develop “solutions to increase transparency”; and “encourage research into disinformation”… ).
And set alongside that uninspiringly obvious list is another — of some current policy steps being undertaken by the undersigned to combat fake accounts and content — as if they’re already meeting the code’s expectations… so, er…
Unsurprisingly, the Commission’s first bite at ‘fake news’ has attracted some biting criticism for being unmeasurably weak sauce.
A group of media advisors — including the Association of Commercial Television in Europe, the European Broadcasting Union, the European Federation of Journalists and International Fact-Checking Network, and several academics — are among the first critics.
Reuters reports them complaining that signatories have not offered measurable objectives to monitor the implementation. “The platforms, despite their best efforts, have not been able to deliver a code of practice within the accepted meaning of effective and accountable self-regulation,” it quotes the group as saying.
Disinformation may be a tough, multi-pronged, multi-dimensional problem but few would try to argue that an overly dilute solution will deliver anything at all — well, unless it’s kicking the can down the road that you’re really after.
The Commission doesn’t even seem to know exactly what the undersigned have agreed to do as a first step, with the commissioner saying she’ll meet signatories “in the coming weeks to discuss the specific procedures and policies that they are adopting to make the Code a reality”. So double er… !
The code also only envisages signatories meeting annually to discuss how things are going. So no pressure for regular collaborative moots vis-a-vis tackling things like botnets spreading malicious disinformation then. Not unless the undersigned really, really want to.
Which seems unlikely, given how their business models tend to benefit from engagement — and disinformation-fuelled outrage has shown itself to be a very potent fuel on that front.
As part of the code, these adtech giants have at least technically agreed to make information available to the Commission on request — and generally to co-operate with its efforts to assess how/whether the code is working.
So, if public pressure on the issue continues to ramp up, the Commission does at least have a route to ask for relevant data from platforms that could, in theory, be used to feed a regulation that’s worth the paper it’s written on.
Until then, there’s nothing much to see here.
via TechCrunch
endenogatai · 3 years
Text
Apple takes aim at adtech hysteria over iOS app tracking change
Apple has used a speech to European lawmakers and privacy regulators today to come out jabbing at what SVP Craig Federighi described as dramatic, “outlandish” and “false” claims being made by the adtech industry over a forthcoming change to iOS that will give users the ability to decline app tracking. 
The iPhone maker had been due to introduce the major privacy enhancement to the App Store this fall but delayed until early 2021 after the plan drew fire from advertising giants.
Facebook, for example, warned the move could have a major impact on app makers which rely on its in-app advertising network to monetize on iOS, as well as some impact on its own bottom line.
Since then four online advertising lobby groups have filed an antitrust complaint against Apple in France — seeking to derail the privacy changes on competition grounds.
Facebook isn’t happy about Apple’s upcoming ad tracking restrictions
However Apple made it clear today that it’s not backing down.
Federighi described online tracking as privacy’s “biggest” challenge — saying its forthcoming App Tracking Transparency (ATT) feature represents “the front line of user privacy” as far as it’s concerned.
“Never before has the right to privacy — the right to keep personal data under your own control — been under assault like it is today. As external threats to privacy continue to evolve, our work to counter them must, too,” he said in the speech to the European Data Protection & Privacy Conference.
The aim of ATT is “to empower our users to decide when or if they want to allow an app to track them in a way that could be shared across other companies’ apps or websites”, according to Federighi.
Civic society’s objection to the adtech industry’s tracking ‘dark art’ is that it sums to hellishly opaque mass surveillance of the mainstream Internet.
While harms attached to the practice include the risk of discrimination; manipulation of vulnerable groups; and election interference, to name a few.
Federighi took clear aim in his own attack — returning to a descriptor that Apple’s CEO Tim Cook used in a speech to an earlier European privacy conference back in 2018.
“The mass centralization of data puts privacy at risk — no matter who’s collecting it and what their intentions might be,” he warned. “So we believe Apple should have as little data about our customers as possible. Now, others take the opposite approach.
“They gather, sell, and hoard as much of your personal information as they can. The result is a data-industrial complex, where shadowy actors work to infiltrate the most intimate parts of your life and exploit whatever they can find — whether to sell you something, to radicalize your views, or worse.”
Since Cook wooed EU lawmakers by denouncing the “data-industrial complex” — and simultaneously lauding Europe’s pro-privacy approach to digital regulation — scores of individual and collective complaints have been lodged against the adtech infrastructure that underpins behavioral advertising under the EU’s General Data Protection Regulation (GDPR).
Yet regional regulators still haven’t taken any enforcement action over these adtech complaints. Turning the cookie-tracking tanker clearly isn’t a cake walk.
And while the adtech lobby may have been heartened by remarks made yesterday by Commission EVP and competition chief Margrethe Vestager — who told the OECD Global Competition Forum that antitrust enforcers should be “vigilant so that privacy is not used as a shield against competition” — there was a sting in the tail. Vestager also expressed support for a ‘superprofiling’ case against Facebook in Germany, which combines the streams of privacy and competition in new and interesting ways, dubbing the piece of regulatory innovation “inspiring and interesting”.
Antitrust case against Facebook’s ‘super profiling’ back on track after German federal court ruling
Federighi urged Europe’s lawmakers to screw their courage to the sticking place where privacy is concerned.
“Through GDPR and other policies — many of which have been implemented by Commissioner Jourová, Commissioner Reynders, and others here with us today — Europe has shown the world what a privacy-friendly future could look like,” he said, lathering on the kind of ‘geopolitical influencer’ praise that’s particularly cherished in Brussels.
He also reiterated Apple’s support for a GDPR-style “omnibus privacy law in the U.S.” — something Cook called for two years ago — aka: a law that “empowers consumers to minimize collection of their data; to know when and why it is being collected; to access, correct, or delete that data; and to know that it is truly secure”.
“It’s already clear that some companies are going to do everything they can to stop [ATT] — or any innovation like it — and to maintain their unfettered access to people’s data. Some have already begun to make outlandish claims, like saying that ATT — which helps users control when they’re tracked — will somehow lead to greater privacy invasions,” he went on, taking further sideswipes at Apple’s adtech detractors.
“To say that we’re skeptical of those claims would be an understatement. But that won’t stop these companies from making false arguments to get what they want. We need the world to see those arguments for what they are: a brazen attempt to maintain the privacy-invasive status quo.”
In another direct appeal to EU lawmakers, Federighi suggested ATT “reflects both the spirit and the requirements of both the ePrivacy Directive, and the planned updates in the draft ePrivacy Regulation” — displaying a keen insight into the (oftentimes fraught) process of EU policymaking. (The ePrivacy update has in fact been stalled for years — so the subtle suggestion in Apple’s appeal is its technology levers being flipped to enable greater user privacy could help unblock the EU’s bunged up policy levers.)
“ATT, like ePrivacy, is about giving people the power to make informed choices about what happens to their data. I hope that the lawmakers, regulators, and privacy advocates here today will continue to stand up for strong privacy protections like these,” he added.
Earlier in the speech Federighi also made some plainer points: Likening ATT to the Intelligent Tracking Prevention (ITP) feature Apple added to its Safari browser back in 2017 — pointing out that despite similar objections from adtech then the industry as a whole has posted revenue increases every year since.
“Just as with ITP, some in the ad industry are lobbying against these efforts — claiming that ATT will dramatically hurt ad-supported businesses. But we expect that the industry will adapt as it did before — providing effective advertising, but this time without invasive tracking,” he said.
“Of course, some advertisers and tech companies would prefer that ATT is never implemented at all. When invasive tracking is your business model, you tend not to welcome transparency and customer choice,” he added, taking another swipe at the industry’s motives for objecting to more choice and privacy for iOS users.
At the same time Federighi did acknowledge that the iOS switch to requiring user permission for app tracking “is a big change from the world we live in now”.
Of course it’s one that will likely bring transitionary pain to iOS developers, too.
But on this his messaging stood firm: He made it clear Apple may wield the stick at developers who don’t get with its user privacy upgrade program, warning: “Early next year, we’ll begin requiring all apps that want to do that to obtain their users’ explicit permission, and developers who fail to meet that standard can have their apps taken down from the App Store.”
It was interesting to note that the speech contained both specific appeals to regional lawmakers to stay the course in regulating to protect data and privacy; and more amorphous appeals to (unnamed) competitors — to follow Apple’s lead and innovate around privacy.
But if you’re a tech giant being accused of anti-competitive behaviour by a self-interested adtech clique, framing your desire for increased competition in the (lucrative) business of enhancing user privacy is a nice rebuttal.
“We don’t define success as standing alone. When it comes to privacy protections, we’re very happy to see our competitors copy our work, or develop innovative privacy features of their own that we can learn from,” said Federighi.
“At Apple, we are passionate advocates for privacy protections for all users. We love to see people buy our products. But we would also love to see robust competition among companies for the best, the strongest, and the most empowering privacy features.”
Of course if more iOS developers have to rely on in-app subscriptions to monetize their wares, because users refuse app tracking, it’ll mean more money passing through the pearly App Store gates and straight into Apple’s coffers. But that’s another story.
Apple’s iOS 14 will give users the option to decline app ad tracking
The Apple SVP also took gentle aim at any EU policymakers who may be imagining it’s a clever idea to crack open the pandora’s box of end-to-end encryption — urging them to strengthen the bloc’s commitment to robust security. Duh.
The backstory here is there’s been some recent chatter around the topic. Last month a draft resolution made by the Council of the European Union triggered press coverage that suggested EU legislators are on the cusp of banning e2e encryption.
Although, to be fair, the only ‘b’ word the Commission has used so far is ‘balanced’ — when it said its new EU security strategy will “explore and support balanced technical, operational and legal solutions, and promote an approach which both maintains the effectiveness of encryption in protecting privacy and security of communications, while providing an effective response to serious crime and terrorism”.
“I also hope that you will strengthen Europe’s support for end-to-end encryption. Apple strongly supported the European Parliament when it proposed a requirement that the ePrivacy Regulation support end-to-end encryption, and we will continue to do so,” Federighi added, tone set to ‘don’t disappoint’.
What’s all this about Europe wanting crypto backdoors?
The Commission doesn’t even seem to know exactly what the undersigned have agreed to do as a first step, with the commissioner saying she’ll meet signatories “in the coming weeks to discuss the specific procedures and policies that they are adopting to make the Code a reality”. So double er… !
The code also only envisages signatories meeting annually to discuss how things are going. So no pressure for regular collaborative moots vis-a-vis tackling things like botnets spreading malicious disinformation then. Not unless the undersigned really, really want to.
Which seems unlikely, given how their business models tend to benefit from engagement — and disinformation-fuelled outrage has shown itself to be a very potent fuel on that front.
As part of the code, these adtech giants have at least technically agreed to make information available to the Commission on request — and generally to co-operate with its efforts to assess how/whether the code is working.
So, if public pressure on the issue continues to ramp up, the Commission does at least have a route to ask for relevant data from platforms that could, in theory, be used to feed a regulation that’s worth the paper it’s written on.
Until then, there’s nothing much to see here.
Source: https://bloghyped.com/tech-and-ad-giants-sign-up-to-europes-first-weak-bite-at-fake-news/
robertrluc85 · 6 years
What the Heck is GDPR? (and How to Make Sure Your Blog Is Compliant)
Ever get that feeling that something’s just waiting to bite you on the ass?
A disturbance in the force that you just can’t put your finger on?
You’re sure it’s not your anniversary?
Your kid’s piano recital?  
Maybe it’s the cable bill.
Dammit.
You can’t place what it is, but something’s waving a red flag.
For bloggers, that brain worm might just be the GDPR.
Niggling away at you like an unscratchable itch.
In a way, that’s good: You know enough about GDPR to be worried.
But in case you’re in the category of “blissfully unaware,” we’ll take a look at what the GDPR is all about.
And why it absolutely CAN affect you and your blog.
Table of Contents
GDPR 101
The Five GDPR Basics You Absolutely Must Know
The Six GDPR Core Principles
Warning: Beware of These Three Dangerous Myths about GDPR
Four Common Blogging Activities That Could Put You in the GDPR Firing Line
How Some Bloggers Can Dodge the GDPR Bullet
The $64,000 Question: Is Your Blog in Scope?
Three Totally Legitimate Approaches to Tackling GDPR (Including One That’s Super Easy)
Seven Easy Steps Toward GDPR Compliance
Stop Hiding Under the Pillow and Get Ahead of GDPR
Disclaimer: I’m not a lawyer. The information below is absolutely not legal advice. But it might just save you a ton of worry and expense.
GDPR 101
GDPR is currently taking Europe by storm.
It’s the General Data Protection Regulation — a new data privacy law being introduced by the European Union — and it’s a bit of a game-changer.
It comes fully into force on May 25, 2018.
Yep, that looming deadline might just be lighting up your radar.
It affects people across the globe, not just in Europe. And some forward-thinking folks have been working on preparing themselves for the last year or two.
Well done them. Straight to the top of the class.
But the truth of the matter is that many people have been just the slightest bit “mañana, mañana” about the whole thing.
Now that the countdown can be measured in days, some people are getting a touch, well, panicky.
It’s like that school assignment that you had a year to write.
Here you are, “T-minus-one and counting,” and you’re staring at a blank page.
And that’s due, in no small part, to the fact that GDPR appears complex, and there are still some gray areas.
We are all struggling to interpret some of the details of the regulation.
But some things are clear — so in case GDPR is entirely new to you, let’s hit the basics.
The Five GDPR Basics You Absolutely Must Know
It applies to anyone who processes “personal data” — Most obviously, that’s things like names, email addresses and other types of “personally identifiable information”;
It creates significant new responsibilities — If you process personal data, you are now truly responsible and accountable for its security and the way it is used;
It has a global reach — It might be an EU law, but it can apply to anyone, regardless of their location;
It doesn’t just apply to traditional businesses — The principles are concerned with what you do with other people’s data, not who you are or why you do it;
There are eye-watering fines for non-compliance — up to €20 million ($24m) or 4% of global revenue, whichever is higher.
So the GDPR’s scope is surprisingly wide-ranging. It could easily apply to you.
It gives data regulators powers to apply unprecedented financial penalties.
And crucially, it’s becoming extremely high-profile. The Facebook/Cambridge Analytica scandal alone has elevated the subject of data privacy to mainstream debate.
So it’s worth spending a little time to try to understand the key principles that the GDPR is attempting to achieve.
The Six GDPR Core Principles
The central principles of the GDPR are not new.
They expand on existing European Union data protection regulations, and most folks might generally consider them to fall into the category of “quite a good idea, really” (from the consumer perspective, at least).
So let’s break them down one by one.
Principle #1: Lawfulness, Fairness and Transparency
You must process personal data in a way that is lawful, fair and transparent.
“Lawfulness” has a specific meaning under the GDPR. There are six legitimate, lawful grounds for processing personal data. You must satisfy at least one of these six criteria before your data processing is “lawful.”  
The first and most obvious lawful basis for processing personal data is consent — that is, where the individual has specifically agreed (usually via one or more checkable boxes) that you may use their data in a specific way. More on consent later.
The majority of the other lawful grounds will be less relevant to bloggers. They include situations where it is essential for you to process personal data to fulfill a contract with the consumer, or if you are required by law to collect specific data (such as information required for tax records).
But the sixth and final lawful basis is relevant:
It can be lawful to process personal data without the individual’s consent if it is in your legitimate interest as a Controller to do so.
This is the subject of heated debate — because it appears to provide a convenient catch-all for controllers. (More on controllers later, but assume for now that the controller is you!)
It’s certainly not a catch-all, but it is an acknowledgement that data privacy is not absolute.
There should be a balance between the individual’s right to data privacy and the controller’s legitimate interest in running their blog, business or whatever.  
“Legitimate interest” is most likely to be used where consent is not appropriate or feasible.
Examples might include:
Storing IP addresses in server logs for the detection and prevention of fraud.
Using non-privacy-intrusive cookies (such as Google Analytics).
Storing personal data in backups to allow a blog to be restored following a technical issue.
These scenarios highlight that in some situations (such as preventing fraud), it would defeat the purpose to let consumers block the processing. In others, it would simply be unworkable to try to gain consent in advance.
It will typically apply where your data processing involves minimal risk or impact to the individual’s privacy, and it is of a type that the individual might reasonably expect you to undertake.  
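As an illustration of what minimal-risk processing can look like in practice, here’s a Python sketch of truncating visitor IP addresses before long-term storage. To be clear, this is my own illustrative example, not anything the GDPR prescribes; the /24 and /48 prefix lengths are assumptions you’d tune to your own risk assessment.

```python
import ipaddress

def anonymize_ip(raw_ip: str) -> str:
    """Zero the host portion of an IP address before storing it long-term.

    Keeping only the network prefix (/24 for IPv4, /48 for IPv6 here --
    both illustrative choices) preserves enough signal for abuse and
    fraud detection while reducing the privacy impact on visitors.
    """
    ip = ipaddress.ip_address(raw_ip)
    prefix = 24 if ip.version == 4 else 48
    network = ipaddress.ip_network(f"{raw_ip}/{prefix}", strict=False)
    return str(network.network_address)

print(anonymize_ip("203.0.113.77"))  # -> 203.0.113.0
```

The idea is simply that the less identifiable the data you hold, the easier it is to argue your processing is proportionate.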
That said, we can be clear that “legitimate interest” is not:
Carte blanche to do whatever you fancy without consumers’ knowledge.
A justification for collecting data that you know full well your consumers would not consent to.
Those scenarios would not be lawful, fair or transparent.
Anyone planning to rely on the “legitimate interest” lawful basis will need to familiarize themselves with the detail of the regulation because there are specific requirements, such as the need to conduct a Legitimate Interests Assessment.
“Fairness” is not specifically defined in the regulation, but on any definition it overlaps significantly with lawfulness and transparency.
All of the regulation guidance suggests that fair processing involves ensuring that it does not have any unjustified adverse effects on the individual, and that data is used in ways that the individual might reasonably expect, given your relationship with them.
In short, if you are being open and transparent about how you process data, then you will almost inevitably be processing it “fairly.”
Examples of unfair processing might include:
Deceiving consumers about your real identity.
Attempting to hide the true purpose of your data processing behind swathes of small print or unnecessarily formal legal language.
Trying to hoodwink consumers in any way into providing their data.
“Transparency” is a fundamental and recurring theme throughout the regulation. You are expected to be conspicuously open and honest about what data you collect and what you propose to do with it.
More on transparency later.
Principle #2: Data Is Only Used for Specified, Legitimate Purposes
You must only use personal data for the specific purposes that you have declared.
Closely related to the concept of transparency, this principle demands that you may not collect data for one purpose, and then go on to use it in a different way.
Let’s take the example of a “Sign Up to Receive This Free Report” offer.
On the face of it, the individual is providing their email address so that you can send them the report. That’s it.
You cannot then add their email to your mailing list and send them other promotional material unless you’ve made it clear at the point of sign-up that that’s what you intend to do.
Principle #3: Limited to What Is Required to Achieve the Stated Purposes
You must collect only the minimum amount of personal data required to achieve your stated objective.
This is the concept of data minimization.
If you collect personal data to allow you to send blog notifications by email, then the minimum information you require is an email address. “Name” is probably fine too (for the purpose of personalizing your emails), but collecting anything else could be seen as excessive.
So if, in the same scenario, you also collect cell phone number, gender and age, then you need to be very clear why that information is necessary to allow you to send blog notifications.
Principle #4: Accurate and Up-To-Date
You must take all reasonable steps to ensure that any data you collect is accurate and kept up-to-date.
The risks to individuals’ data privacy are clearly increased where that data contains inaccuracies. Incorrect email addresses are a prime example of where other personal data can be inadvertently disclosed or leaked.
You are therefore obliged to address data inaccuracies without delay — incorrect data must be rectified, or deleted.
In practice, if someone contacts you to update their email address, you should take action on it without undue delay.
But being proactive is also important — for example, if you are getting regular bounce-backs from addresses on your mailing list, then this should be telling you something. Periodically checking your list and removing bounced addresses is highly recommended.
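To make that concrete, here’s a minimal Python sketch of the kind of list hygiene described above. The record shape and the bounce threshold are hypothetical; in practice your mailing-list tool tracks hard bounces for you.

```python
# Hypothetical mailing-list records; real email tools (MailChimp,
# AWeber, etc.) track hard bounces for you.
subscribers = [
    {"email": "alice@example.com", "hard_bounces": 0},
    {"email": "bob@example.com", "hard_bounces": 3},
    {"email": "carol@example.com", "hard_bounces": 1},
]

MAX_HARD_BOUNCES = 2  # the threshold is a policy choice, not a GDPR rule

def prune_bounced(records, limit=MAX_HARD_BOUNCES):
    """Keep only addresses that haven't repeatedly hard-bounced."""
    return [r for r in records if r["hard_bounces"] <= limit]

print([r["email"] for r in prune_bounced(subscribers)])
# -> ['alice@example.com', 'carol@example.com']
```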
Principle #5: Time Limited
You must only hold personal data for as long as is required to achieve the stated objective.
It’s central to the concept of fairness that data is not retained for any longer than required to achieve the purpose for which you collected it.
Data retention also has implications for accuracy. If you’re still storing customer address data that you collected five years ago, the chances are that a significant proportion of that stale data is now inaccurate.
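A retention rule like this can be enforced with a few lines of code. The sketch below is illustrative Python, and the one-year period is an assumption; the appropriate retention period is whatever you can actually justify for your stated purpose.

```python
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=365)  # assumed policy; justify your own period

def purge_expired(records, now=None):
    """Drop records held longer than the stated retention period.

    Each record is assumed to carry a 'collected' timestamp.
    """
    now = now or datetime.now(timezone.utc)
    return [r for r in records if now - r["collected"] <= RETENTION]

# Example: a two-year-old record is dropped, a fresh one is kept.
now = datetime(2018, 5, 25, tzinfo=timezone.utc)
records = [
    {"email": "stale@example.com", "collected": datetime(2016, 5, 25, tzinfo=timezone.utc)},
    {"email": "fresh@example.com", "collected": datetime(2018, 5, 1, tzinfo=timezone.utc)},
]
print([r["email"] for r in purge_expired(records, now=now)])  # -> ['fresh@example.com']
```

Running something like this on a schedule means stale data cleans itself up instead of relying on you remembering to do it.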
Principle #6: Data Must Be Processed Securely
You must process personal data in a way that ensures appropriate security.
The security of the data you hold is clearly pivotal to the whole objective of the GDPR. You are responsible for ensuring that there exist appropriate technical and organizational measures to protect against unauthorized access, loss, alteration and disclosure.
That said, you’re not expected to be Fort Knox.
But you are expected to take steps that are proportionate to the sensitivity of the data that you collect, and the risk to the individuals concerned were the data to be lost or disclosed.
Basic precautions would include:
Not storing consumers’ data on a portable device like a smartphone (especially if you’re the type who regularly leaves it in a cab on a Friday night).
Never sharing system login details with others.
Password-protecting any office files that contain personal data.
Using encrypted (https) connections for your blog (while this isn’t specifically required by the GDPR, it’s an all-around good idea).
That’s obviously not an exhaustive list, but you get the point.
All of the specific requirements contained within the GDPR are based upon these six principles.
By keeping these principles in mind, you should never deviate too far away from what the GDPR expects from you, even if you’re not an expert in the details of the regulation.
The problem is, there’s a certain amount of GDPR misinformation doing the rounds too.
Warning: Beware of These Three Dangerous Myths about GDPR
GDPR is new, and there’s a huge amount of speculation about how it will be applied in practice.
So let’s deal with some of the emerging myths.
Myth #1: I’m Not Based in the EU so It Doesn’t Affect Me
Don’t be fooled. That’s not the point.
The regulation protects consumers within the EU, regardless of where in the world the person who collects their data is based.
Anyone who runs a blog that is available to consumers within any of the EU Member States is potentially affected.
There are subtly different rules for controllers outside the EU, but regardless of whether you operate out of London, Milan or New York, GDPR needs to be on your radar.
At the very least, you will need to take an informed position on the subject, and that means having a plan.
Myth #2: I’m a Blogger, Not a Business, so It Doesn’t Apply
A swing and a miss.
While there are some provisions aimed specifically at organizations, the core accountability applies to anyone considered to be a “Data Controller.”
A Data Controller is the person responsible for “determining the purpose” of processing.
And it can be anybody — an individual or a business.
Long story short, if you are the person who decides to collect the data, or decides what data is collected and why, then you are a Data Controller — regardless of whether you are operating as a business in the normal sense of the word.
Bloggers. Micro-businesses. Non-profits. Charities. Hobbyists.
All potentially covered.
I’ll get into why I say “potentially” later.
Myth #3: There’s an Exemption for Anyone with Fewer Than 250 Employees
Nope.
I’ve seen this one doing the rounds a lot, and it’s based on a very lazy interpretation of the rules.
If you process personal data and have fewer than 250 employees, you may have an exemption from one very specific administrative reporting requirement.
It is absolutely not a general exemption.
GDPR can apply if you have no employees at all.  
Four Common Blogging Activities That Could Put You in the GDPR Firing Line
As a blogger, you might feel that you’re not in the habit of collecting people’s personal data.
From there, it’s a very short walk to convincing yourself that GDPR is not your concern.
But think again — there are a number of very common blogging activities that can put you in the GDPR firing line.
#1. Collecting Email Addresses
Without doubt, this is the clearest scenario in which the GDPR can apply to bloggers.
Sure as eggs is eggs, names and email addresses are personal data.
If you invite people to give you this information — such as on a mailing list sign-up or via an online contact form — then you have a responsibility for that data.
As we’ll see later, this doesn’t of itself guarantee that the full force of the GDPR will apply, but it does mean that you are potentially affected.
#2. Using WordPress (or Another Content Management System)
Don’t misunderstand me, I’m a big fan of WordPress.
One of its biggest selling points is just how much it does for you straight out of the box.
But that can be a double-edged sword — would you know if WordPress was collecting/processing personal data in the background?
Possibly not.
Well it can, and it does:
With blog commenting enabled, WordPress will by default require all commenters to submit their names and email addresses before they can comment. This is personal data.
WordPress will set web cookies for anyone who logs into your site or submits a comment. The GDPR specifically states that cookies are potentially personal data.
All plugins that you install on your WordPress site give you additional functionality (that’s why you use them) — and every one of those plugins has the potential to collect personal data.
#3. Using Any Type of Web Tracking or Profiling
Use the Facebook pixel for tracking page views and conversions?
Track who opens your MailChimp or AWeber campaign emails?
Use Google Analytics to understand web traffic?
Each of these, to one extent or another, involves profiling the behavior of identifiable individuals, and is potentially within the GDPR’s remit.  
#4. Using a Web Host That Logs Visitors’ IP Addresses
It’s extremely common practice for your web server to record, in its server logs, the IP addresses of anyone who visits your blog.
Now there’s nothing the matter with that, because it can actually help to protect against malicious attacks and unauthorized access.
But IP addresses are personal data as far as the GDPR is concerned.
So, while you might not consider yourself to be actively collecting personal data, there’s a very good chance that, in reality, you are.
How Some Bloggers Can Dodge the GDPR Bullet
We’ve already seen that the core factor in determining whether the GDPR applies to you is whether or not you process personal data. It’s what the GDPR calls the “material scope” of the regulation.
But that’s not the only consideration.
We also need to consider what the GDPR calls “territorial scope” — and it’s this territorial scope that might allow some bloggers to dodge the GDPR bullet.
Territorial scope is EU-speak for the geographic limitation of the GDPR.
We’ve already touched on this in our first dangerous myth above.
The regulation protects the interests of consumers within the EU — regardless of whether the individual/business that collects their data is based in the EU or not.
So the real question is not where you are based — rather it is where your intended consumers are based.
A US-based blog can be caught within the scope of the GDPR if it in any way targets consumers in the EU.
But to be clear, if you can legitimately argue that your blog falls outside the territorial scope of the GDPR, the regulation will not apply to you — and none of the requirements, responsibilities or fines apply.
Some folks will, understandably, see this as a GDPR get-out-of-jail-free card.
Just be wary…
The GDPR makes a clear distinction between Data Controllers (remember, that’s probably you) who are based in the EU and those based outside the EU. It boils down to this:
Data Controllers in the EU are within the territorial scope, and the GDPR applies.
Data Controllers outside the EU are subject to the GDPR rules if they “offer goods and services” to individuals within the EU.
This distinction will be crucial for many bloggers.
It introduces the concept of your intended target audience.
If your blog is genuinely targeted at a non-EU audience and you don’t, in reality, process the data of EU consumers, then you have a potential exemption from the entirety of the GDPR.
But it’s important to understand that this is a gray area.
The actual wording of the regulation refers to whether “the Controller envisages offering goods and services to data subjects in the Union.”
If you blog about childcare in San Francisco, then I’d argue that you’re on pretty solid ground. It doesn’t have any obvious relevance to EU consumers, and it would seem fair to argue that you don’t “envisage offering a service” to them.
On the other hand, blog on a subject that’s not limited by location (such as the cool new features on the iPhone X), and that argument might not fly. Your content is just as relevant to EU consumers as it is to anyone else, and you probably have no real intention of limiting your readership.
So it’s going to depend very much on the nature of your blog.
Factors to bear in mind:
While there is no definition of what constitutes a “service,” it is highly likely that blogging will count as one (the UK data regulator has strongly implied to us that blogging is clearly an information “service”).
It is irrelevant whether or not your consumers pay you for your service;
Just because you have a blog that can be accessed from the EU does not necessarily mean that you intend to offer your services in the Union.
Some specific factors will strongly imply that you do intend to offer your services in the EU — such as offering payments in a European currency, having localized domain names (such as .eu or .co.uk), or offering local phone number options.
Importantly, if in reality you DO process the personal data of EU consumers (let’s say by having people with .co.uk email addresses on your email list), then it’s hard to argue that you don’t envisage offering a service to them.
Because you’re already actually doing it.
The $64,000 Question: Is Your Blog in Scope?
Coming to a conclusion about whether your blog falls within the scope of the GDPR is something that only you can do.
It will depend on the exact nature of your blog, the data you capture, and your target audience.
And there are areas that are not perfectly clear-cut when you apply them to blogging.
Just keep in mind that it’s human nature to try to shoehorn your own blog into one of the limited exemptions to the rules.
If you offer a service to consumers in the EU and, by so doing, process information that qualifies as “personal data,” then, at face value, the GDPR will apply.
If you’re in any doubt, the wise approach is to have a plan to tackle it.
Three Totally Legitimate Approaches to Tackling GDPR (Including One That’s Super Easy)
Let’s assume that the GDPR applies to you and your blog.
What now?
Strikes me that people are going to take one of three approaches that extend beyond simply pretending it’s not happening.
Approach #1: Do Nothing (aka “Wait and See”)
Let me be clear here: “Do nothing” is not the same as “ignore it.”
Ignoring it would be bad. It needs to be on your radar.
But depending on your approach to risk, you might well choose the “wait and see” method.
Day 1 GDPR compliance would be awesome — but pragmatically, it can take time, effort and potentially expense.
And realistically, you are unlikely to come to the attention of the data regulators unless you actually experience a data breach or someone chooses to make a complaint against you.
So why not just wait for the dust to settle and see what everyone else does?
Pros:
You buy yourself some time.
Provided you keep your ear to the ground, you’ll get to see how the regulators approach enforcing the rules in practice.
The specifics of how to be compliant can only get clearer over time — so you can possibly avoid going down a variety of rabbit holes in the meantime.
Cons:
This is undeniably a higher risk option.
You will technically be non-compliant on Day 1 (albeit along with much of the rest of the world).
Technically, you could be fined in the event of a data breach, such as your WordPress site being hacked.
Depending on your brand visibility, your reputation is at risk if you’re simply unprepared for things like individuals’ requests for access to data — and that might bring you to the attention of the regulators.
Regulators are likely to have little sympathy for people who have made no apparent effort to comply.
It’s hard for me to wholeheartedly advocate the “wait and see” approach — because it feels reactive, and maybe I’m a bit risk-averse.
But there is arguably a place for it if you understand and accept the risks.
That said, some of the risks can be mitigated, which leads me to the second approach.
Approach #2: Show Willingness by Implementing Some Quick Wins
While full GDPR compliance is going to be complex for some, there’s likely to be some low-hanging fruit to be had.
Not only will it start you off on a path toward full compliance, but you’ll also demonstrate a commitment to data privacy — and you might be surprised how much you’re already doing.
If you do nothing more than revisit your consent processes and publish a privacy policy on your blog, you will still be making a significant step towards compliance.
(Check out my Seven Easy Steps Toward GDPR Compliance below, which suggest what some of these approaches might look like.)
Pros:
Significantly lower risk than doing nothing.
Relatively low effort, time and cost.
Simply reviewing your privacy risks will put you in greater control.
It promotes a data privacy mindset that will inform your future decisions.
Practically, you are even less likely to attract the attention of regulators.
Cons:
Quick wins alone are unlikely to make your blog fully compliant.
You will need to commit some time and effort to evaluate your risks and liabilities.
My guess is that “showing willingness” will be where many bloggers and small businesses will be when the GDPR comes into force.
Approach #3: Go the Full Nine Yards and Aim for Complete GDPR Compliance
In an ideal world, full GDPR compliance from Day 1 is clearly the place to be.
It minimizes risk and — to those who know what to look for — demonstrates your credibility and professionalism.
For simple blogs and small online businesses, full compliance might be perfectly achievable, because simplicity is your friend.
Pros:
All privacy risks will be closely managed.
You won’t be caught off-guard in the event of a personal data inquiry or, worse, a complaint.
All other things being equal, you get to sleep at night.
Cons:
Will require time and effort to understand the full requirements of the GDPR.
May involve cost to bring processes and technology into line.
Seven Easy Steps Toward GDPR Compliance
The actual GDPR regulation itself is a horribly impenetrable document.
It runs to over 250 pages, with 99 main provisions (“Articles”) and 173 supplementary “recitals.”
And they wonder why people don’t read it.
Unless you’re a lawyer, you’ll likely come away from it feeling just a little overwhelmed.
But if you can master the concepts and the six core principles, you’ll see that there are a number of discrete, tangible things that you can do toward compliance.
And some of them are pretty pain-free.
#1. Make a Personal Data Inventory
Spend 30 minutes just brainstorming and documenting the types of personal data that you collect.
Then you’ll begin to understand where your actual liabilities are.
Make sure you consider:
The information you actually ask people for, in particular names and emails via contact forms and blog subscriptions.
The information that might be collected by your systems — if you use Google Analytics or Facebook remarketing, you will have some thinking to do about the fact that these applications use cookies. If you use WordPress or another CMS, it’s worth investigating whether you’re setting cookies that you don’t know about.
Only when you’ve identified how you collect data can you start to address whether you need to take further action.
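One quick way to feed that inventory, sketched below in standard-library Python, is to fetch your own homepage and list the cookies the server tries to set. Note the caveats: this only surfaces server-set cookies — JavaScript-set ones (Google Analytics, the Facebook pixel) won’t appear here and need checking in your browser’s developer tools — and the URL is a placeholder.

```python
from urllib.request import urlopen

def set_cookie_headers(url):
    """Fetch a page and return the raw Set-Cookie headers it sends."""
    with urlopen(url) as resp:
        return resp.headers.get_all("Set-Cookie") or []

def cookie_names(headers):
    """Pull just the cookie names out of raw Set-Cookie header values."""
    return [h.split(";", 1)[0].split("=", 1)[0].strip() for h in headers]

# Example with canned headers; in real use you'd call
# set_cookie_headers("https://yourblog.example") instead.
sample = [
    "wordpress_test_cookie=WP+Cookie+check; path=/",
    "comment_author_1a2b=Jane; expires=Thu, 01 Jan 2026 00:00:00 GMT",
]
print(cookie_names(sample))  # -> ['wordpress_test_cookie', 'comment_author_1a2b']
```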
#2. Publish a GDPR-Compliant Privacy Policy
Publishing a privacy policy is the most tangible thing you can do to demonstrate your commitment to data privacy.
It’s your opportunity to:
Outline what types of data you collect and specifically how you intend to use it — including who that data might be shared with.
Detail what types of cookies are used on your blog.
Describe what steps you take to ensure that the data is secure.
Highlight exactly what individuals are consenting to, how they provide consent and, importantly, how they may withdraw their consent in the future.
Explain the rights that individuals have over their data (the GDPR gives individuals a range of new rights, including the right to access their data and the “right to be forgotten”).
If you already have a privacy policy, you may already have much of this covered. But it’s unlikely that your policy will be GDPR-compliant without some form of amendment. If nothing else, you will need to add the range of data access rights that consumers have.
And just publishing your privacy policy is not enough.
You need to stick to it.
And make sure that anyone else working on your behalf sticks to it, too.
Your GDPR protection is only as strong as its weakest link.
Feel free to check out my own privacy policy as a guide to what should be included. You’ll find other great examples on the web, but I’m confident mine is firmly on the right track.
That disclaimer again: I’m not a lawyer, and this is not legal advice. And please don’t just copy my policy — it’s not polite, and your policy needs to reflect what you do, not what I do!
#3. Be Crystal-Clear about Consent
A lot of people who talk about GDPR seem to think that consent is the silver bullet for all GDPR problems.
It’s not.
Consent is just one of six lawful grounds for collecting personal data under the GDPR, and it won’t always be the most appropriate one to rely on.
That said, it IS important.
Where consumers are volunteering personal information (such as online contact forms and blog sign-ups) you must ask for their specific consent if there is no other legal ground for processing that data.
This will usually mean having one or more checkable “consent” boxes on all sign-up forms.
Important things to consider:
People must be able to tell what they’re consenting to — vague and generalized statements about what you intend to do with the data will not cut it. (The days of “we collect data to improve your experience” are gone!).
Your privacy policy is the place for this information, and your readers must have the opportunity to read the policy before they are asked for consent.
Consent must be given as an “affirmative action” — so it is not acceptable to use a pre-checked consent box. Any consent checkbox must be unchecked by default (some email marketing services, such as MailChimp, make this easy with built-in GDPR features).
You must only use the information gained via consent for the reasons you gave when consent was given.
You should always take advantage of the “double opt-in” options that are found within campaign management tools like MailChimp. Double opt-in requires the individual to confirm their initial request before their data is added to your mailing list. It will also usually give you a means of demonstrating when consent was given.
#4. Stop Collecting Data You Don’t Need
Data minimization is the way to go.
Do you really need someone’s cell phone number to send them blog updates?
Probably not.
The more data you collect, the more data you’re responsible for.
If you can’t justify why you’re asking for a particular piece of data, don’t ask for it.
And if you already hold data that you don’t need (or can’t justify), now is the time to dispose of it. (Securely, of course!)
#5. Make Sure Your Blog Is Super-Secure
One of the core objectives of the GDPR is to keep personal data secure.
You can directly influence this by making sure that you are taking basic, common-sense security precautions such as:
Never sharing your blog’s login credentials with anyone else.
Always using strong passwords.
Removing the default “admin” user account on WordPress blogs.
Using a reputable security plugin to prevent unauthorized access.
Physically protecting data stored on removable storage such as USB sticks and external hard drives.
All of these things form the basis of the “how we protect your data” section of a privacy policy.
#6. Use a Reputable Web Host
You are most likely using some form of third-party web hosting for your blog — either shared hosting or maybe VPS.
By providing the servers that your blog runs on, that 3rd party hosting company becomes a “Data Processor” in GDPR terms — because they are processing data on your behalf.
You are effectively subcontracting the technical hosting activities to them.
As a result, they have access to any personal data that is stored on your blog — and they are therefore quite capable of being the weak link in the chain.
A reputable web host will be only too happy to talk to you about the security processes that they have in place, their security accreditations, and so on.
The best ones already have GDPR-compliant conditions within their standard terms of service, or will offer you a personalized data processing agreement on request.
This is important, because the GDPR expects you to have a written agreement with anybody who acts as a Data Processor on your behalf — especially if it involves processing that takes place outside the EU.
So choose your web host wisely.
And be prepared to find a different provider if you don’t get the answers you need.
#7. Check Your Google Analytics Configuration
Okay, this is a bit specific, but it might be the difference between compliance and non-compliance for some simple blogs.
Google Analytics uses cookies to track when people visit your blog. They enable GA to distinguish one visitor from another.
But, when set up correctly, GA cookies are likely to be seen as “non-privacy-intrusive,” which means that you do not need to get prior specific consent to use them (which, believe me, would be a technical minefield).
For this exclusion to apply, though, you need to be careful:
It’s important that you haven’t implemented the (optional) User ID functionality within GA. User ID allows you to identify a particular individual even if they view your blog from different devices. You should know if you’re using this functionality, because it’s not enabled by default, and you would have had to implement it manually.
You should take advantage of the “anonymizeIP” function that GA provides, which has the effect of obscuring part of visitors’ IP addresses when the data is stored at Google. Note that this is switched OFF by default, but can be activated by adding a simple parameter to your GA tracking code (the exact code depends on which version of the Google Analytics code you’re using — analytics.js or gtag.js). If you’re using a plugin for analytics, you might find this option in the plugin settings.
You should make sure that you never (intentionally or inadvertently) include personal data within page URLs that are sent to GA. Not only is this bad for GDPR, it’s also a breach of Google’s Terms of Service.
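The first two points above are configuration switches; the comments below show GA’s documented one-liners for IP anonymization. The third point can be automated: the `scrubPagePath` function is my own illustrative sketch (not a GA feature) that drops any query-string parameter whose value looks like an email address before the path is handed to your tracking code:

```javascript
// IP anonymization is a one-line change in the real GA snippets:
//   analytics.js:  ga('set', 'anonymizeIp', true);
//   gtag.js:       gtag('config', 'UA-XXXXX-Y', { anonymize_ip: true });

// Illustrative sketch: strip email-shaped values from a page path before it
// is sent to any analytics tool. Anything containing "@" after URL-decoding
// is treated as personal data and dropped.
function scrubPagePath(path) {
  const [base, query] = path.split('?');
  if (!query) return path;
  const kept = query
    .split('&')
    .filter((pair) => !/@/.test(decodeURIComponent(pair)));
  return kept.length > 0 ? base + '?' + kept.join('&') : base;
}
```

So a thank-you page hit like `/thanks?email=jo%40example.com&src=twitter` would be reported as `/thanks?src=twitter`.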
For a handy visual reminder of the seven steps, see the infographic below:

[Infographic: What the Heck is GDPR? (and How to Make Sure Your Blog Is Compliant) — from SmartBlogger.com]
  Stop Hiding Under the Pillow and Get Ahead of GDPR
Like it or not, the GDPR could affect you.
Even if you’re not in the EU.
While regulators are extremely unlikely to start handing out huge fines on Day 1, smart bloggers will see this as an opportunity to get their data processes properly nailed down.
Get on the front foot and you’ll have a better, deeper understanding of the value of the data that you hold, and the responsibility (and accountability) that you have for that information.
And frankly, even if the GDPR doesn’t apply to you, it’s a strong indication of where data privacy is going — so why not embrace the principles anyway?
It may seem a million miles away from why you pour your heart and soul into blogging. You blog to inform, to inspire, to share your passion.
But you’re also responsible to your loyal followers for the information they entrust to you.
So don’t lose sleep over it. Get ahead of it.
Because when you do, your blog will be stronger than ever.
Paul Long is a small business web designer, WordPress enthusiast and self-confessed data freak based in the UK. He currently spends his days helping folk to tread the fine line between GDPR denial and meltdown. For further actionable guidance, check out his free GDPR Action Plan for small businesses.
The post What the Heck is GDPR? (and How to Make Sure Your Blog Is Compliant) appeared first on Smart Blogger.
cherylxsmith · 6 years
What the Heck is GDPR? (and How to Make Sure Your Blog Is Compliant)
Ever get that feeling that something’s just waiting to bite you on the ass?
A disturbance in the force that you just can’t put your finger on?
You’re sure it’s not your anniversary?
Your kid’s piano recital?  
Maybe it’s the cable bill.
Dammit.
You can’t place what it is, but something’s waving a red flag.
For bloggers, that brain worm might just be the GDPR.
Niggling away at you like an unscratchable itch.
In a way, that’s good: You know enough about GDPR to be worried.
But in case you’re in the category of “blissfully unaware,” we’ll take a look at what the GDPR is all about.
And why it absolutely CAN affect you and your blog.
Table of Contents
GDPR 101
The Five GDPR Basics You Absolutely Must Know
The Six GDPR Core Principles
Warning: Beware of These Three Dangerous Myths about GDPR
Four Common Blogging Activities That Could Put You in the GDPR Firing Line
How Some Bloggers Can Dodge the GDPR Bullet
The $64,000 Question: Is Your Blog in Scope?
Three Totally Legitimate Approaches to Tackling GDPR (Including One That’s Super Easy)
Seven Easy Steps Toward GDPR Compliance
Stop Hiding Under the Pillow and Get Ahead of GDPR
Disclaimer: I’m not a lawyer. The information below is absolutely not legal advice. But it might just save you a ton of worry and expense.
GDPR 101
GDPR is currently taking Europe by storm.
It’s the General Data Protection Regulation — a new data privacy law being introduced by the European Union — and it’s a bit of a game-changer.
It comes fully into force on May 25, 2018.
Yep, that looming deadline might just be lighting up your radar.
It affects people across the globe, not just in Europe. And some forward-thinking folks have been working on preparing themselves for the last year or two.
Well done them. Straight to the top of the class.
But the truth of the matter is that many people have been just the slightest bit “mañana, mañana” about the whole thing.
Now that the countdown can be measured in days, some people are getting a touch, well, panicky.
It’s like that school assignment that you had a year to write.
Here you are, “T-minus-one and counting,” and you’re staring at a blank page.
And that’s due, in no small part, to the fact that GDPR appears complex, and there are still some gray areas.
We are all struggling to interpret some of the details of the regulation.
But some things are clear — so in case GDPR is entirely new to you, let’s hit the basics.
The Five GDPR Basics You Absolutely Must Know
It applies to anyone who processes “personal data” — Most obviously, that’s things like names, email addresses and other types of “personally identifiable information”;
It creates significant new responsibilities — If you process personal data, you are now truly responsible and accountable for its security and the way it is used;
It has a global reach — It might be an EU law, but it can apply to anyone, regardless of their location;
It doesn’t just apply to traditional businesses — The principles are concerned with what you do with other people’s data, not who you are or why you do it;
There are eye-watering fines for non-compliance — up to €20 million ($24m) or 4% of global revenue, whichever is higher.
So the GDPR’s scope is surprisingly wide-ranging. It could easily apply to you.
It gives data regulators powers to apply unprecedented financial penalties.
And crucially, it’s becoming extremely high-profile. The Facebook/Cambridge Analytica scandal alone has elevated the subject of data privacy to mainstream debate.
So it’s worth spending a little time to try to understand the key principles that the GDPR is attempting to achieve.
The Six GDPR Core Principles
The central principles of the GDPR are not new.
They expand on existing European Union data protection regulations, and most folks might generally consider them to fall into the category of “quite a good idea, really” (from the consumer perspective, at least).
So let’s break them down one by one.
Principle #1: Lawfulness, Fairness and Transparency
You must process personal data in a way that is lawful, fair and transparent.
“Lawfulness” has a specific meaning under the GDPR. There are six legitimate, lawful grounds for processing personal data. You must satisfy at least one of these six criteria before your data processing is “lawful.”  
The first and most obvious lawful basis for processing personal data is consent — that is, where the individual has specifically agreed (usually via one or more checkable boxes) that you may use their data in a specific way. More on consent later.
The majority of the other lawful grounds will be less relevant to bloggers. They include situations where it is essential for you to process personal data to fulfill a contract with the consumer, or if you are required by law to collect specific data (such as information required for tax records).
But the sixth and final lawful basis is relevant:
It can be lawful to process personal data without the individual’s consent if it is in your legitimate interest as a Controller to do so.
This is the subject of heated debate — because it appears to provide a convenient catch-all for controllers. (More on controllers later, but assume for now that the controller is you!)
Well, it’s certainly not a catch-all, but it is an acknowledgement that data privacy is not absolute.
There should be a balance between the individual’s right to data privacy and the controller’s legitimate interest in running their blog, business or whatever.  
“Legitimate interest” is most likely to be used where consent is not appropriate or feasible.
Examples might include:
Storing IP addresses in server logs for the detection and prevention of fraud.
Using non-privacy-intrusive cookies (such as Google Analytics).
Storing personal data in backups to allow a blog to be restored following a technical issue.
These scenarios highlight that in some situations (such as preventing fraud), consumers must not be permitted to prevent processing. In others, it would simply be unworkable to try to gain consent in advance.
It will typically apply where your data processing involves minimal risk or impact to the individual’s privacy, and it is of a type that the individual might reasonably expect you to undertake.  
That said, we can be clear that “legitimate interest” is not:
Carte blanche to do whatever you fancy without consumers’ knowledge.
A justification for collecting data that you know full well your consumers would not consent to.
Those scenarios would not be lawful, fair or transparent.
Anyone planning to rely on the “legitimate interest” lawful basis will need to familiarize themselves with the detail of the regulation because there are specific requirements, such as the need to conduct a Legitimate Interests Assessment.
“Fairness” is not specifically defined in the regulation, but on any definition it overlaps significantly with lawfulness and transparency.
All of the regulation guidance suggests that fair processing involves ensuring that it does not have any unjustified adverse effects on the individual, and that data is used in ways that the individual might reasonably expect, given your relationship with them.
In short, if you are being open and transparent about how you process data, then you will almost inevitably be processing it “fairly.”
Examples of unfair processing might include:
Deceiving consumers about your real identity.
Attempting to hide the true purpose of your data processing behind swathes of small print or unnecessarily formal legal language.
Trying to hoodwink consumers in any way into providing their data.
“Transparency” is a fundamental and recurring theme throughout the regulation. You are expected to be conspicuously open and honest about what data you collect and what you propose to do with it.
More on transparency later.
Principle #2: Data Is Only Used for Specified, Legitimate Purposes
You must only use personal data for the specific purposes that you have declared.
Closely related to the concept of transparency, this principle demands that you may not collect data for one purpose, and then go on to use it in a different way.
Let’s take the example of a “Sign Up to Receive This Free Report” offer.
On the face of it, the individual is providing their email address so that you can send them the report. That’s it.
You cannot then add their email to your mailing list and send them other promotional material unless you’ve made it clear at the point of sign-up that that’s what you intend to do.
Principle #3: Limited to What Is Required to Achieve the Stated Purposes
You must collect only the minimum amount of personal data required to achieve your stated objective.
This is the concept of data minimization.
If you collect personal data to allow you to send blog notifications by email, then the minimum information you require is an email address. “Name” is probably fine too (for the purpose of personalizing your emails), but collecting anything else could be seen as excessive.
So if, in the same scenario, you also collect cell phone number, gender and age, then you need to be very clear why that information is necessary to allow you to send blog notifications.
Principle #4: Accurate and Up-To-Date
You must take all reasonable steps to ensure that any data you collect is accurate and kept up-to-date.
The risks to individuals’ data privacy are clearly increased where that data contains inaccuracies. Incorrect email addresses are a prime example of where other personal data can be inadvertently disclosed or leaked.
You are therefore obliged to address data inaccuracies without delay — incorrect data must be rectified, or deleted.
In practice, if someone contacts you to update their email address, you should take action on it without undue delay.
But being proactive is also important — for example, if you are getting regular bounce-backs from addresses on your mailing list, then this should be telling you something. Periodically checking your list and removing bounced addresses is highly recommended.
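That periodic cleanup is easy to automate. This hypothetical helper assumes you can export your list with a per-subscriber bounce count; the threshold of three bounces is arbitrary:

```javascript
// Sketch: prune addresses that keep bouncing. "bounceCount" is a field you
// would populate from your email service's bounce reports; subscribers with
// no recorded bounces are kept.
function pruneBounced(subscribers, maxBounces = 3) {
  return subscribers.filter((s) => (s.bounceCount || 0) < maxBounces);
}
```

Run something like this monthly and you are addressing the accuracy principle proactively rather than waiting for a complaint.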
Principle #5: Time Limited
You must only hold personal data for as long as is required to achieve the stated objective.
It’s central to the concept of fairness that data is not retained for any longer than required to achieve the purpose for which you collected it.
Data retention also has implications for accuracy. If you’re still storing customer address data that you collected five years ago, the chances are that a significant proportion of that stale data is now inaccurate.
Principle #6: Data Must Be Processed Securely
You must process personal data in a way that ensures appropriate security.
The security of the data you hold is clearly pivotal to the whole objective of the GDPR. You are responsible for ensuring that there exist appropriate technical and organizational measures to protect against unauthorized access, loss, alteration and disclosure.
That said, you’re not expected to be Fort Knox.
But you are expected to take steps that are proportionate to the sensitivity of the data that you collect, and the risk to the individuals concerned were the data to be lost or disclosed.
Basic precautions would include:
Not storing consumers’ data on a portable device like a smartphone (especially if you’re the type who regularly leaves it in a cab on a Friday night).
Never sharing system login details with others.
Password-protecting any office files that contain personal data.
Using encrypted (https) connections for your blog (while this isn’t specifically required by the GDPR, it’s an all-around good idea).
That’s obviously not an exhaustive list, but you get the point.
All of the specific requirements contained within the GDPR are based upon these six principles.
By keeping these principles in mind, you should never deviate too far away from what the GDPR expects from you, even if you’re not an expert in the details of the regulation.
The problem is, there’s a certain amount of GDPR misinformation doing the rounds too.
Warning: Beware of These Three Dangerous Myths about GDPR
GDPR is new, and there’s a huge amount of speculation about how it will be applied in practice.
So let’s deal with some of the emerging myths.
Myth #1: I’m Not Based in the EU so It Doesn’t Affect Me
Don’t be fooled. That’s not the point.
The regulation protects consumers within the EU, regardless of where in the world the person who collects their data is based.
Anyone who runs a blog that is available to consumers within any of the EU Member States is potentially affected.
There are subtly different rules for controllers outside the EU, but regardless of whether you operate out of London, Milan or New York, GDPR needs to be on your radar.
At the very least, you will need to take an informed position on the subject, and that means having a plan.
Myth #2: I’m a Blogger, Not a Business, so It Doesn’t Apply
A swing and a miss.
While there are some provisions aimed specifically at organizations, the core accountability applies to anyone considered to be a “Data Controller.”
A Data Controller is the person responsible for “determining the purpose” of processing.
And it can be anybody — an individual or a business.
Long story short, if you are the person who decides to collect the data, or decides what data is collected and why, then you are a Data Controller — regardless of whether you are operating as a business in the normal sense of the word.
Bloggers. Micro-businesses. Non-profits. Charities. Hobbyists.
All potentially covered.
I’ll get into why I say “potentially” later.
Myth #3: There’s an Exemption for Anyone with Fewer Than 250 Employees
Nope.
I’ve seen this one doing the rounds a lot, and it’s based on a very lazy interpretation of the rules.
If you process personal data and have fewer than 250 employees, you may have an exemption from one very specific administrative reporting requirement.
It is absolutely not a general exemption.
GDPR can apply if you have no employees at all.  
Four Common Blogging Activities That Could Put You in the GDPR Firing Line
As a blogger, you might feel that you’re not in the habit of collecting people’s personal data.
From there, it’s a very short walk to convincing yourself that GDPR is not your concern.
But think again — there are a number of very common blogging activities that can put you in the GDPR firing line.
#1. Collecting Email Addresses
Without doubt, this is the clearest scenario in which the GDPR can apply to bloggers.
Sure as eggs is eggs, names and email addresses are personal data.
If you invite people to give you this information — such as on a mailing list sign-up or via an online contact form — then you have a responsibility for that data.
As we’ll see later, this doesn’t of itself guarantee that the full force of the GDPR will apply, but it does mean that you are potentially affected.
#2. Using WordPress (or Another Content Management System)
Don’t misunderstand me, I’m a big fan of WordPress.
One of its biggest selling points is just how much it does for you straight out of the box.
But that can be a double-edged sword — would you know if WordPress was collecting/processing personal data in the background?
Possibly not.
Well it can, and it does:
With blog commenting enabled, WordPress will by default require all commenters to submit their names and email addresses before they can comment. This is personal data.
WordPress will set web cookies for anyone who logs into your site or submits a comment. The GDPR specifically states that cookies are potentially personal data.
All plugins that you install on your WordPress site give you additional functionality (that’s why you use them) — and every one of those plugins has the potential to collect personal data.
#3. Using Any Type of Web Tracking or Profiling
Use the Facebook pixel for tracking page views and conversions?
Track who opens your MailChimp or AWeber campaign emails?
Use Google Analytics to understand web traffic?
Each of these, to one extent or another, involves profiling the behavior of identifiable individuals, and is potentially within the GDPR’s remit.  
#4. Using a Web Host That Logs Visitors’ IP Addresses
It’s extremely common practice for your web server to record, in its server logs, the IP addresses of anyone who visits your blog.
Now there’s nothing the matter with that, because it can actually help to protect against malicious attacks and unauthorized access.
But IP addresses are personal data as far as the GDPR is concerned.
So, while you might not consider yourself to be actively collecting personal data, there’s a very good chance that, in reality, you are.
How Some Bloggers Can Dodge the GDPR Bullet
We’ve already seen that the core factor in determining whether the GDPR applies to you is whether or not you process personal data. It’s what the GDPR calls the “material scope” of the regulation.
But that’s not the only consideration.
We also need to consider what the GDPR calls “territorial scope” — and it’s this territorial scope that might allow some bloggers to dodge the GDPR bullet.
Territorial scope is EU-speak for the geographic limitation of the GDPR.
We’ve already touched on this in our first dangerous myth above.
The regulation protects the interests of consumers within the EU — regardless of whether the individual/business that collects their data is based in the EU or not.
So the real question is not where you are based — rather it is where your intended consumers are based.
A US-based blog can be caught within the scope of the GDPR if it in any way targets consumers in the EU.
But to be clear, if you can legitimately argue that your blog falls outside the territorial scope of the GDPR, the regulation will not apply to you — and none of the requirements, responsibilities or fines apply.
Some folks will, understandably, see this as a GDPR get-out-of-jail-free card.
Just be wary…
The GDPR makes a clear distinction between Data Controllers (remember, that’s probably you) who are based in the EU and those based outside the EU. It boils down to this:
Data Controllers in the EU are within the territorial scope, and the GDPR applies.
Data Controllers outside the EU are subject to the GDPR rules if they “offer goods and services” to individuals within the EU.
This distinction will be crucial for many bloggers.
It introduces the concept of your intended target audience.
If your blog is genuinely targeted at a non-EU audience and you don’t, in reality, process the data of EU consumers, then you have a potential exemption from the entirety of the GDPR.
But it’s important to understand that this is a gray area.
The actual wording of the regulation refers to whether “the Controller envisages offering goods and services to data subjects in the Union.”
If you blog about childcare in San Francisco, then I’d argue that you’re on pretty solid ground. It doesn’t have any obvious relevance to EU consumers, and it would seem fair to argue that you don’t “envisage offering a service” to them.
On the other hand, blog on a subject that’s not limited by location (such as the cool new features on the iPhone X), and that argument might not fly. Your content is just as relevant to EU consumers as it is to anyone else, and you probably have no real intention of limiting your readership.
So it’s going to depend very much on the nature of your blog.
Factors to bear in mind:
While there is no definition of what constitutes a “service,” it is highly likely that blogging will count as one (the UK data regulator has strongly implied to us that blogging is clearly an information “service”).
It is irrelevant whether or not your consumers pay you for your service.
Just because you have a blog that can be accessed from the EU does not necessarily mean that you intend to offer your services in the Union.
Some specific factors will strongly imply that you do intend to offer your services in the EU — such as offering payments in a European currency, having localized domain names (such as .eu or .co.uk), or offering local phone number options.
Importantly, if in reality you DO process the personal data of EU consumers (let’s say by having people with .co.uk email addresses on your email list), then it’s hard to argue that you don’t envisage offering a service to them.
Because you’re already actually doing it.
The $64,000 Question: Is Your Blog in Scope?
Coming to a conclusion about whether your blog falls within the scope of the GDPR is something that only you can do.
It will depend on the exact nature of your blog, the data you capture, and your target audience.
And there are areas that are not perfectly clear-cut when you apply them to blogging.
Just keep in mind that it’s human nature to try to shoehorn your own blog into one of the limited exemptions to the rules.
If you offer a service to consumers in the EU and, by so doing, process information that qualifies as “personal data,” then, at face value, the GDPR will apply.
If you’re in any doubt, the wise approach is to have a plan to tackle it.
Three Totally Legitimate Approaches to Tackling GDPR (Including One That’s Super Easy)
Let’s assume that the GDPR applies to you and your blog.
What now?
Strikes me that people are going to take one of three approaches that extend beyond simply pretending it’s not happening.
Approach #1: Do Nothing (aka “Wait and See”)
Let me be clear here: “Do nothing” is not the same as “ignore it.”
Ignoring it would be bad. It needs to be on your radar.
But depending on your approach to risk, you might well choose the “wait and see” method.
Day 1 GDPR compliance would be awesome — but pragmatically, it can take time, effort and potentially expense.
And realistically, you are unlikely to come to the attention of the data regulators unless you actually experience a data breach or someone chooses to make a complaint against you.
So why not just wait for the dust to settle and see what everyone else does?
Pros:
You buy yourself some time.
Provided you keep your ear to the ground, you’ll get to see how the regulators approach enforcing the rules in practice.
The specifics of how to be compliant can only get clearer over time — so you can possibly avoid going down a variety of rabbit holes in the meantime.
Cons:
This is undeniably a higher risk option.
You will technically be non-compliant on Day 1 (albeit along with much of the rest of the world).
You could be fined in the event of a data breach, such as your WordPress site being hacked.
Depending on your brand visibility, your reputation is at risk if you’re simply unprepared for things like individuals’ requests for access to data — and that might bring you to the attention of the regulators.
Regulators are likely to have little sympathy for people who have made no apparent effort to comply.
It’s hard for me to wholeheartedly advocate the “wait and see” approach — because it feels reactive, and maybe I’m a bit risk-averse.
But there is arguably a place for it if you understand and accept the risks.
That said, some of the risks can be mitigated, which leads me to the second approach.
Approach #2: Show Willingness by Implementing Some Quick Wins
While full GDPR compliance is going to be complex for some, there’s likely to be some low-hanging fruit to be had.
Not only will it start you off on a path toward full compliance, you’re also demonstrating a commitment to data privacy — and you might be surprised how much you’re already doing.
If you do nothing more than revisit your consent processes and publish a privacy policy on your blog, you will still be making a significant step towards compliance.
(Check out my Seven Easy Steps Toward GDPR Compliance below, which suggest what some of these approaches might look like.)
Pros:
Significantly lower risk than doing nothing.
Relatively low effort, time and cost.
Simply reviewing your privacy risks will put you in greater control.
It promotes a data privacy mindset that will inform your future decisions.
Practically, you are even less likely to attract the attention of regulators.
Cons:
Quick wins alone are unlikely to make your blog fully compliant.
You will need to commit some time and effort to evaluate your risks and liabilities.
My guess is that “showing willingness” will be where many bloggers and small businesses will be when the GDPR comes into force.
Approach #3: Go the Full Nine Yards and Aim for Complete GDPR Compliance
In an ideal world, full GDPR compliance from Day 1 is clearly the place to be.
It minimizes risk and — to those who know what to look for — demonstrates your credibility and professionalism.
For simple blogs and small online businesses, full compliance might be perfectly achievable, because simplicity is your friend.
Pros:
All privacy risks will be closely managed.
You won’t be caught off-guard in the event of a personal data inquiry or, worse, a complaint.
All other things being equal, you get to sleep at night.
Cons:
Will require time and effort to understand the full requirements of the GDPR.
May involve cost to bring processes and technology into line.
Seven Easy Steps Toward GDPR Compliance
The actual GDPR regulation itself is a horribly impenetrable document.
It runs to over 250 pages, with 99 main provisions (“Articles”) and 173 supplementary “recitals.”
And they wonder why people don’t read it.
Unless you’re a lawyer, you’ll likely come away from it feeling just a little overwhelmed.
But if you can master the concepts and the six core principles, you’ll see that there are a number of discrete, tangible things that you can do toward compliance.
And some of them are pretty pain-free.
#1. Make a Personal Data Inventory
Spend 30 minutes just brainstorming and documenting the types of personal data that you collect.
Then you’ll begin to understand where your actual liabilities are.
Make sure you consider:
The information you actually ask people for, in particular names and emails via contact forms and blog subscriptions.
The information that might be collected by your systems — if you use Google Analytics or Facebook remarketing, you will have some thinking to do about the fact that these applications use cookies. If you use WordPress or another CMS, it’s worth investigating whether you’re setting cookies that you don’t know about.
Only when you’ve identified how you collect data can you start to address whether you need to take further action.
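If it helps to make that brainstorm concrete, here's one way you might jot down a first-pass inventory. Every entry below is an invented example, not a checklist — your blog's real data sources will differ:

```javascript
// A first-pass personal data inventory, sketched as a simple structure.
// The sources and data types here are illustrative examples only.
const dataInventory = [
  { source: "contact form",      data: ["name", "email"],         collectedBy: "me" },
  { source: "blog subscription", data: ["email"],                 collectedBy: "me" },
  { source: "Google Analytics",  data: ["cookies", "IP address"], collectedBy: "third party" },
];

// Seeing it written down tells you where your liabilities actually are.
for (const entry of dataInventory) {
  console.log(`${entry.source}: ${entry.data.join(", ")} (${entry.collectedBy})`);
}
```

Even a rough list like this makes the "third party" rows jump out — those are the ones where you'll need to do some digging into someone else's cookie and data practices, not just your own.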
#2. Publish a GDPR-Compliant Privacy Policy
Publishing a privacy policy is the most tangible thing you can do to demonstrate your commitment to data privacy.
It’s your opportunity to:
Outline what types of data you collect and specifically how you intend to use it — including who that data might be shared with.
Detail what types of cookies are used on your blog.
Describe what steps you take to ensure that the data is secure.
Highlight exactly what individuals are consenting to, how they provide consent and, importantly, how they may withdraw their consent in the future.
Explain the rights that individuals have over their data (the GDPR gives individuals a range of new rights, including the right to access their data and the “right to be forgotten”).
If you already have a privacy policy, you may already have much of this covered. But it’s unlikely that your policy will be GDPR-compliant without some form of amendment. If nothing else, you will need to add the range of data access rights that consumers have.
And just publishing your privacy policy is not enough.
You need to stick to it.
And make sure that anyone else working on your behalf sticks to it, too.
Your GDPR protection is only as strong as its weakest link.
Feel free to check out my own privacy policy as a guide to what should be included. You’ll find other great examples on the web, but I’m confident mine is firmly on the right track.
(That disclaimer again: I’m not a lawyer, and this is not legal advice. And please don’t just copy my policy — it’s not polite, and your policy needs to reflect what you do, not what I do!)
#3. Be Crystal-Clear about Consent
A lot of people who talk about GDPR seem to think that consent is the silver bullet for all GDPR problems.
It’s not.
Consent is just one of six lawful grounds for collecting personal data under the GDPR, and it won’t always be the most appropriate one to rely on.
That said, it IS important.
Where consumers are volunteering personal information (such as online contact forms and blog sign-ups) you must ask for their specific consent if there is no other legal ground for processing that data.
This will usually mean having one or more checkable “consent” boxes on all sign-up forms.
Important things to consider:
People must be able to tell what they’re consenting to — vague and generalized statements about what you intend to do with the data will not cut it. (The days of “we collect data to improve your experience” are gone!)
Your privacy policy is the place for this information, and your readers must have the opportunity to read the policy before they are asked for consent.
Consent must be given as an “affirmative action” — so it is not acceptable to use a pre-checked consent box. Any consent checkbox must be unchecked by default (Some email providers like MailChimp make this easy with built-in GDPR features).
You must only use the information gained via consent for the reasons you gave when consent was given.
You should always take advantage of the “double opt-in” options that are found within campaign management tools like MailChimp. Double opt-in requires the individual to confirm their initial request before their data is added to your mailing list. It will also usually give you a means of demonstrating when consent was given.
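To see what "affirmative action" means in practice, here's a minimal sketch of how a sign-up handler might check consent. The field names are made up for illustration; in real life this logic lives inside your CMS or email tool:

```javascript
// Only an explicit `true` — from an actively checked, unchecked-by-default
// box — counts as consent. Missing values or loose strings like "yes" fail.
function validateConsent(submission) {
  if (submission.consentGiven !== true) {
    return { ok: false, reason: "explicit consent required" };
  }
  // Record when consent was given, so you can demonstrate it later.
  // This is the same idea as the audit trail a double opt-in gives you.
  return { ok: true, consentAt: new Date().toISOString() };
}
```

One thing worth noticing: a pre-checked box would still send `consentGiven: true`, so a server-side check alone can't prove the user took an affirmative action. That's why the GDPR puts the burden on the form itself being unchecked by default.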
#4. Stop Collecting Data You Don’t Need
Data minimization is the way to go.
Do you really need someone’s cell phone number to send them blog updates?
Probably not.
The more data you collect, the more data you’re responsible for.
If you can’t justify why you’re asking for a particular piece of data, don’t ask for it.
And if you already hold data that you don’t need (or can’t justify), now is the time to dispose of it. (Securely, of course!)
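As a sketch of the principle, here's what pruning a record down to only the fields you can justify might look like. The subscriber fields are invented for the example:

```javascript
// Keep only the fields you can justify collecting; drop everything else.
function minimize(record, justifiedFields) {
  const kept = {};
  for (const field of justifiedFields) {
    if (field in record) kept[field] = record[field];
  }
  return kept;
}

const subscriber = { email: "reader@example.com", name: "Ana", phone: "555-0100" };
minimize(subscriber, ["email", "name"]);
// → { email: "reader@example.com", name: "Ana" } — the phone number is gone
```

The nice side effect of an explicit "justified fields" list is that it forces the question this step is really asking: if a field isn't on the list, why were you collecting it?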
#5. Make Sure Your Blog Is Super-Secure
One of the core objectives of the GDPR is to keep personal data secure.
You can directly influence this by making sure that you are taking basic, common-sense security precautions such as:
Never sharing your blog’s login credentials with anyone else.
Always using strong passwords.
Removing the default “admin” user account on WordPress blogs.
Using a reputable security plugin to prevent unauthorised access.
Physically protecting data stored on removable storage such as USB sticks and external hard drives.
All of these things form the basis of the “how we protect your data” section of a privacy policy.
#6. Use a Reputable Web Host
You are most likely using some form of third-party web hosting for your blog — either shared hosting or maybe VPS.
By providing the servers that your blog runs on, that third-party hosting company becomes a “Data Processor” in GDPR terms — because they are processing data on your behalf.
You are effectively subcontracting the technical hosting activities to them.
As a result, they have access to any personal data that is stored on your blog — and they are therefore quite capable of being the weak link in the chain.
A reputable web host will be only too happy to talk to you about the security processes that they have in place, their security accreditations, and so on.
The best ones already have GDPR-compliant conditions within their standard terms of service, or will offer you a personalized data processing agreement on request.
This is important, because the GDPR expects you to have a written agreement with anybody who acts as a Data Processor on your behalf — especially if it involves processing that takes place outside the EU.
So choose your web host wisely.
And be prepared to find a different provider if you don’t get the answers you need.
#7. Check Your Google Analytics Configuration
Okay, this is a bit specific, but it might be the difference between compliance and non-compliance for some simple blogs.
Google Analytics uses cookies to track when people visit your blog. They enable GA to distinguish one visitor from another.
But, when set up correctly, GA cookies are likely to be seen as “non-privacy intrusive,” which means that you do not need to get prior specific consent to use them (which, believe me, would be a technical minefield).
For this exclusion to apply, though, you need to be careful:
It’s important that you haven’t implemented the (optional) User ID functionality within GA. User ID allows you to identify a particular individual even if they view your blog from different devices. You should know if you’re using this functionality, because it’s not enabled by default, and you would have had to implement it manually.
You should take advantage of the “anonymizeIP” function that GA provides, which has the effect of obscuring part of visitors’ IP addresses when the data is stored at Google. Note that this is switched OFF by default, but can be activated by adding a simple parameter to your GA tracking code (the exact code depends on which version of the Google Analytics code you’re using — analytics.js or gtag.js). If you’re using a plugin for analytics, you might find this option in the plugin settings.
You should make sure that you never (intentionally or inadvertently) include personal data within page URLs that are sent to GA. Not only is this bad for GDPR, it’s also a breach of Google’s Terms of Service.
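For reference, here's roughly what the IP-anonymization switch looks like in each version of the tracking snippet. The property IDs are placeholders, and Google's snippets do change over time, so double-check against the current Google Analytics documentation before copying anything:

```javascript
// analytics.js — set the anonymizeIp field after creating the tracker:
ga('create', 'UA-XXXXX-Y', 'auto');
ga('set', 'anonymizeIp', true);

// gtag.js — pass anonymize_ip as a config parameter instead:
gtag('config', 'GA_MEASUREMENT_ID', { anonymize_ip: true });
```

If you use a WordPress analytics plugin rather than pasting the snippet yourself, look for an "anonymize IP" toggle in its settings instead of editing code.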
For a handy visual reminder of the seven steps, check out the infographic “What the Heck is GDPR? (and How to Make Sure Your Blog Is Compliant)” from SmartBlogger.com.
Stop Hiding Under the Pillow and Get Ahead of GDPR
Like it or not, the GDPR could affect you.
Even if you’re not in the EU.
While regulators are extremely unlikely to start handing out huge fines on Day 1, smart bloggers will see this as an opportunity to get their data processes properly nailed down.
Get on the front foot and you’ll have a better, deeper understanding of the value of the data that you hold, and the responsibility (and accountability) that you have for that information.
And frankly, even if the GDPR doesn’t apply to you, it’s a strong indication of where data privacy is going — so why not embrace the principles anyway?
It may seem a million miles away from why you pour your heart and soul into blogging. You blog to inform, to inspire, to share your passion.
But you’re also responsible to your loyal followers for the information they entrust to you.
So don’t lose sleep over it. Get ahead of it.
Because when you do, your blog will be stronger than ever.
Paul Long is a small business web designer, WordPress enthusiast and self-confessed data freak based in the UK. He currently spends his days helping folk to tread the fine line between GDPR denial and meltdown. For further actionable guidance, check out his free GDPR Action Plan for small businesses.
The post What the Heck is GDPR? (and How to Make Sure Your Blog Is Compliant) appeared first on Smart Blogger.
Most Facebook users still in the dark about its creepy ad practices, Pew finds
A study by the Pew Research Center suggests most Facebook users are still in the dark about how the company tracks and profiles them for ad-targeting purposes.
Pew found three-quarters (74%) of Facebook users did not know the social networking behemoth maintains a list of their interests and traits to target them with ads, only discovering this when researchers directed them to view their Facebook ad preferences page.
A majority (51%) of Facebook users also told Pew they were uncomfortable with Facebook compiling the information.
More than a quarter (27%) said the ad preference listing Facebook had generated did not represent them very accurately, or at all.
The researchers also found that 88% of polled users had some material generated for them on the ad preferences page. Pew’s findings come from a survey of a nationally representative sample of 963 U.S. Facebook users ages 18 and older which was conducted between September 4 to October 1, 2018, using GfK’s KnowledgePanel.
In a Senate hearing last year, Facebook founder Mark Zuckerberg claimed users have “complete control” over both information they actively choose to upload to Facebook and data about them the company collects in order to target ads.
But the key question remains how Facebook users can be in complete control when most of them don’t know what the company is doing. This is something U.S. policymakers should have front of mind as they work on drafting a comprehensive federal privacy law.
Pew’s findings suggest Facebook’s greatest ‘defence’ against users exercising what little control it affords them over information its algorithms link to their identity is a lack of awareness about how the Facebook adtech business functions.
After all the company markets the platform as a social communications service for staying in touch with people you know, not a mass surveillance people-profiling ad-delivery machine. So unless you’re deep in the weeds of the adtech industry there’s little chance for the average Facebook user to understand what Mark Zuckerberg has described as “all the nuances of how these services work”.
Having a creepy feeling that ads are stalking you around the Internet hardly counts.
At the same time, users being in the dark about the information dossiers Facebook maintains on them, is not a bug but a feature for the company’s business — which directly benefits by being able to minimize the proportion of people who opt out of having their interests categorized for ad targeting because they have no idea it’s happening. (And relevant ads are likely more clickable and thus more lucrative for Facebook.)
Hence Zuckerberg’s plea to policymakers last April for “a simple and practical set of — of ways that you explain what you are doing with data… that’s not overly restrictive on — on providing the services”.
(Or, to put it another way: If you must regulate privacy let us simplify explanations using cartoon-y abstraction that allows for continued obfuscation of exactly how, where and why data flows.)
From the user point of view, even if you know Facebook offers ad management settings it’s still not simple to locate and understand them, requiring navigating through several menus that are not prominently sited on the platform, and which are also complex, with multiple interactions possible. (Such as having to delete every inferred interest individually.) 
The average Facebook user is unlikely to look past the latest few posts in their newsfeed, let alone go proactively hunting for a boring-sounding ‘ad management’ setting and spending time figuring out what each click and toggle does (in some cases users are required to hover over an interest in order to view a cross that indicates they can in fact remove it, so there’s plenty of dark pattern design at work here too).
And all the while Facebook is putting a heavy sell on, in the self-serving ad ‘explanations’ it does offer, spinning the line that ad targeting is useful for users. What’s not spelt out is the huge privacy trade off it entails — aka Facebook’s pervasive background surveillance of users and non-users.
Nor does it offer a complete opt-out of being tracked and profiled; rather its partial ad settings let users “influence what ads you see”. 
But influencing is not the same as controlling, whatever Zuckerberg claimed in Congress. So, as it stands, there is no simple way for Facebook users to understand their ad options because the company only lets them twiddle a few knobs rather than shut down the entire surveillance system.
The company’s algorithmic people profiling also extends to labelling users as having particular political views, and/or having racial and ethnic/multicultural affinities.
Pew researchers asked about these two specific classifications too — and found that around half (51%) of polled users had been assigned a political affinity by Facebook; and around a fifth (21%) were badged as having a “multicultural affinity”.
Of those users who Facebook had put into a particular political bucket, a majority (73%) said the platform’s categorization of their politics was very or somewhat accurate; but more than a quarter (27%) said it was not very or not at all an accurate description of them.
“Put differently, 37% of Facebook users are both assigned a political affinity and say that affinity describes them well, while 14% are both assigned a category and say it does not represent them accurately,” it writes.
Use of people’s personal data for political purposes has triggered some major scandals for Facebook’s business in recent years. Such as the Cambridge Analytica data misuse scandal — when user data was shown to have been extracted from the platform en masse, and without proper consents, for campaign purposes.
In other instances Facebook ads have also been used to circumvent campaign spending rules in elections. Such as during the UK’s 2016 EU referendum vote when large numbers of ads were non-transparently targeted with the help of social media platforms.
And indeed to target masses of political disinformation to carry out election interference. Such as the Kremlin-backed propaganda campaign during the 2016 US presidential election.
Last year the UK data watchdog called for an ethical pause on use of social media data for political campaigning, such is the scale of its concern about data practices uncovered during a lengthy investigation.
Yet the fact that Facebook’s own platform natively badges users’ political affinities frequently gets overlooked in the discussion around this issue.
For all the outrage generated by revelations that Cambridge Analytica had tried to use Facebook data to apply political labels on people to target ads, such labels remain a core feature of the Facebook platform — allowing any advertiser, large or small, to pay Facebook to target people based on where its algorithms have determined they sit on the political spectrum, and do so without obtaining their explicit consent. (Yet under European data protection law political beliefs are deemed sensitive information, and Facebook is facing increasing scrutiny in the region over how it processes this type of data.)
Of those users who Pew found had been badged by Facebook as having a “multicultural affinity” — another algorithmically inferred sensitive data category — 60% told it they do in fact have a very or somewhat strong affinity for the group to which they are assigned; while more than a third (37%) said their affinity for that group is not particularly strong.
“Some 57% of those who are assigned to this category say they do in fact consider themselves to be a member of the racial or ethnic group to which Facebook assigned them,” Pew adds.
It found that 43% of those given an affinity designation are said by Facebook’s algorithm to have an interest in African American culture, while the same share (43%) is assigned an affinity with Hispanic culture. One in ten are assigned an affinity with Asian American culture.
(Facebook’s targeting tool for ads does not offer affinity classifications for any other cultures in the U.S., including Caucasian or white culture, Pew also notes, thereby underlining one inherent bias of its system.)
In recent years the ethnic affinity label that Facebook’s algorithm sticks to users has caused specific controversy after it was revealed to have been enabling the delivery of discriminatory ads.
As a result, in late 2016, Facebook said it would disable ad targeting using the ethnic affinity label for protected categories of housing, employment and credit-related ads. But a year later its ad review systems were found to be failing to block potentially discriminatory ads.
The act of Facebook sticking labels on people clearly creates plenty of risk — be that from election interference or discriminatory ads (or, indeed, both).
Risk that a majority of users don’t appear comfortable with once they realize it’s happening.
And therefore also future risk for Facebook’s business as more regulators turn their attention to crafting privacy laws that can effectively safeguard consumers from having their personal data exploited in ways they don’t like. (And which might disadvantage them or generate wider societal harms.)
Commenting about Facebook’s data practices, Michael Veale, a researcher in data rights and machine learning at University College London, told us: “Many of Facebook’s data processing practices appear to violate user expectations, and the way they interpret the law in Europe is indicative of their concern around this. If Facebook agreed with regulators that inferred political opinions or ‘ethnic affinities’ were just the same as collecting that information explicitly, they’d have to ask for separate, explicit consent to do so — and users would have to be able to say no to it.
“Similarly, Facebook argues it is ‘manifestly excessive’ for users to ask to see the extensive web and app tracking data they collect and hold next to your ID to generate these profiles — something I triggered a statutory investigation into with the Irish Data Protection Commissioner. You can’t help but suspect that it’s because they’re afraid of how creepy users would find seeing a glimpse of the true breadth of their invasive user and non-user data collection.”
In a second survey, conducted between May 29 and June 11, 2018 using Pew’s American Trends Panel and of a representative sample of all U.S. adults who use social media (including Facebook and other platforms like Twitter and Instagram), Pew researchers found social media users generally believe it would be relatively easy for social media platforms they use to determine key traits about them based on the data they have amassed about their behaviors.
“Majorities of social media users say it would be very or somewhat easy for these platforms to determine their race or ethnicity (84%), their hobbies and interests (79%), their political affiliation (71%) or their religious beliefs (65%),” Pew writes.
Less than a third (28%), it adds, believe it would be difficult for the platforms to figure out their political views.
So even while most people do not understand exactly what social media platforms are doing with information collected and inferred about them, once they’re asked to think about the issue most believe it would be easy for tech firms to join data dots around their social activity and make sensitive inferences about them.
Commenting generally on the research, Pew’s director of internet and technology research, Lee Rainie, said its aim was to try to bring some data to debates about consumer privacy, the role of micro-targeting of advertisements in commerce and political activity, and how algorithms are shaping news and information systems.
Update: Responding to Pew’s research, Facebook sent us the following statement:
We want people to understand how our ad settings and controls work. That means better ads for people. While we and the rest of the online ad industry need to do more to educate people on how interest-based advertising works and how we protect people’s information, we welcome conversations about transparency and control.
Can the world ever really keep terrorists off the internet?
After London’s most recent terror attacks, British Prime Minister Theresa May called on countries to collaborate on internet regulation to prevent terrorism planning online. May criticized online spaces that allow such ideas to breed, and the companies that host them.
May did not identify any companies by name, but she could have been referring to the likes of Google, Twitter and Facebook. In the past, British lawmakers have said these companies offer terrorism a platform. She also might have been referring to smaller companies, like the developers of apps like Telegram, Signal and Wickr, which are favored by terrorist groups. These apps offer encrypted messaging services that allow users to hide communications.
May is not alone in being concerned about attacks on citizens. After her comments on Sunday, U.S. President Donald Trump vowed to work with allies and do whatever it takes to stop the spread of terrorism. He did not, however, specifically mention internet regulation.
British Prime Minister Theresa May has said it is time to say ‘enough is enough’ when it comes to tackling terrorism.
President Donald Trump addressed the London terror attacks during an event at Ford’s Theatre in Washington, DC.
Internet companies and other commentators, however, have pushed back against the suggestion that more government regulation is needed, saying weakening everyone’s encryption poses different public dangers. Many have also questioned whether some regulation, like banning encryption, is possible at all.
Because the internet is geographically borderless, nearly any message can have a global audience. Questions about online regulation have persisted for years, especially regarding harmful information. As a law professor who studies the impact of the internet on society, I believe the goal of international collaboration is incredibly complicated, given global history.
Some control is possible
While no one country has control over the internet, it is a common misconception that the internet cannot be regulated. In fact, individual countries can and do exert significant control over the internet within their own borders.
In 2012, for example, the Bashar al-Assad regime shut down the internet for all of Syria. According to Akamai Technologies, an internet monitoring company, the country went entirely offline on Nov. 29, 2012. The internet blackout lasted roughly three days.
Akamai traffic data supports @renesys observation (http://t.co/uxC2ZhTo) that Syria is effectively off the Internet pic.twitter.com/haNHwb5y
— StateOfTheInternet (@akamai_soti) November 29, 2012
China aggressively blocks access to more than 18,000 websites, including Facebook, Google, The New York Times and YouTube. While there are some limited workarounds, the Chinese government regularly targets and eliminates them.
French courts have prohibited the display and sale of Nazi materials online in France by Yahoo’s online auction service. After losing a legal case, Yahoo banned the sale of Nazi memorabilia from its website worldwide, though it denied that the move was in direct response to the court ruling.
Even in the United States, local governments have shut down mobile data and cellphone service during protests. In addition, the United States reportedly either is developing or has developed its own internet “kill switch” for times of national crisis.
International collaboration
These types of regulation efforts aren’t limited to individual governments. Groups of countries have successfully collaborated to pursue common goals online.
The Global Privacy Enforcement Network, for example, is a network of representatives from nearly 50 countries including the United States, Australia, the United Kingdom and Germany. The GPEN works to develop shared enforcement practices related to internet privacy and has reviewed many companies’ online privacy policies. When the GPEN discovers websites or apps that violate a country’s privacy laws, it informs the administrators or developers and encourages them to follow those laws. The group can recommend countries take enforcement action against websites or apps that do not comply.
The European Union, made up of 28 countries, has also worked to regulate harmful messages on the internet. In 2016, the European Commission announced a joint agreement with internet companies Facebook, Microsoft, Twitter and YouTube. Among other things, the companies agreed to create clear and rapid processes for reviewing potentially objectionable information and removing it if need be.
At the UN
In addition, the United Nations has been pursuing general global regulation of the internet. The U.N.'s first Working Group on Internet Governance was created in 2004 to propose models for global internet regulation.
Unfortunately, the working group has not been able to agree on how to create new transnational bodies with rule-setting or regulatory power over the internet. Each country has different views on the global political issues raised by the internet’s vast reach. While some countries can find common ground, it may be nearly impossible to create a worldwide model that harmonizes all of these perspectives.
The farthest the U.N. has gotten so far has been creating the Internet Governance Forum, which brings together governments, private companies and individuals to address questions about internet regulation. The group has discussed and reported on internet access, human rights and free speech issues. These discussions are an opportunity to exchange experiences and views, but there are no negotiated outcomes, rules or laws that come from the IGF.
Finding widespread common ground on internet-based issues will likely only become more difficult as the U.K. exits from the EU and the U.S. takes increasingly nationalist positions. Even so, the experiences of smaller groups of countries may inform a broader effort as global policies on terrorism shift, and the world’s approach to internet regulation changes with it.
This article was originally published on The Conversation. Read the original article.
0 notes
stopkingobama · 7 years
UK attacks have UN planning global control of Internet
Shontavia Johnson, Drake University
After London’s most recent terror attacks, British Prime Minister Theresa May called on countries to collaborate on internet regulation to prevent terrorism planning online. May criticized online spaces that allow such ideas to breed, and the companies that host them.
[Video: British Prime Minister Theresa May says it is time to say 'enough is enough' when it comes to tackling terrorism.]
May did not identify any companies by name, but she could have been referring to the likes of Google, Twitter and Facebook. In the past, British lawmakers have said these companies offer terrorism a platform. She also might have been referring to smaller companies, like the developers of apps like Telegram, Signal and Wickr, which are favored by terrorist groups. These apps offer encrypted messaging services that allow users to hide communications.
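The encryption these apps rely on is what makes blanket regulation technically hard: only the communicating parties hold the keys, so there is no central copy for a platform or government to inspect. The core idea can be sketched with a toy one-time pad in Python (real messaging apps use far more sophisticated protocols, such as the Signal protocol; this simplified scheme is only an illustration of why intercepted ciphertext is unreadable without the key):

```python
import secrets

def xor_cipher(message: bytes, key: bytes) -> bytes:
    # XOR each message byte with the corresponding key byte.
    # With a truly random, single-use key of equal length (a one-time pad),
    # the ciphertext reveals nothing about the plaintext to an eavesdropper.
    return bytes(m ^ k for m, k in zip(message, key))

plaintext = b"meet at noon"
# Random key known only to the two communicating parties.
key = secrets.token_bytes(len(plaintext))

ciphertext = xor_cipher(plaintext, key)
# Applying the same XOR with the same key decrypts: (m ^ k) ^ k == m.
recovered = xor_cipher(ciphertext, key)
assert recovered == plaintext
```

A government or platform intercepting `ciphertext` without `key` learns nothing but the message length, which is why proposals to "ban" or weaken encryption amount to changing the mathematics available to everyone, not just to terrorists.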
May is not alone in being concerned about attacks on citizens. After her comments on Sunday, U.S. President Donald Trump vowed to work with allies and do whatever it takes to stop the spread of terrorism. He did not, however, specifically mention internet regulation.
[Video: President Donald Trump addresses the London terror attacks during an event at Ford's Theatre in Washington, DC.]
Internet companies and other commentators, however, have pushed back against the suggestion that more government regulation is needed, saying weakening everyone’s encryption poses different public dangers. Many have also questioned whether some regulation, like banning encryption, is possible at all.
Because the internet is geographically borderless, nearly any message can have a global audience. Questions about online regulation have persisted for years, especially regarding harmful information. As a law professor who studies the impact of the internet on society, I believe the goal of international collaboration is incredibly complicated, given global history.
Some control is possible
While no one country has control over the internet, it is a common misconception that the internet cannot be regulated. In fact, individual countries can and do exert significant control over the internet within their own borders.
In 2012, for example, the Bashar al-Assad regime shut down the internet for all of Syria. According to Akamai Technologies, an internet monitoring company, the country went entirely offline on Nov. 29, 2012. The internet blackout lasted roughly three days.
China aggressively blocks access to more than 18,000 websites, including Facebook, Google, The New York Times and YouTube. While there are some limited workarounds, the Chinese government regularly targets and eliminates them.
French courts have prohibited the display and sale of Nazi materials online in France by Yahoo’s online auction service. After losing a legal case, Yahoo banned the sale of Nazi memorabilia from its website worldwide, though it denied that the move was in direct response to the court ruling.
Even in the United States, local governments have shut down mobile data and cellphone service during protests. In addition, the United States reportedly either is developing or has developed its own internet “kill switch” for times of national crisis.
International collaboration
These types of regulation efforts aren’t limited to individual governments. Groups of countries have successfully collaborated to pursue common goals online.
The Global Privacy Enforcement Network, for example, is a network of representatives from nearly 50 countries including the United States, Australia, the United Kingdom and Germany. The GPEN works to develop shared enforcement practices related to internet privacy and has reviewed many companies’ online privacy policies. When the GPEN discovers websites or apps that violate a country’s privacy laws, it informs the administrators or developers and encourages them to follow those laws. The group can recommend countries take enforcement action against websites or apps that do not comply.
The European Union, made up of 28 countries, has also worked to regulate harmful messages on the internet. In 2016, the European Commission announced a joint agreement with internet companies Facebook, Microsoft, Twitter and YouTube. Among other things, the companies agreed to create clear and rapid processes for reviewing potentially objectionable information and removing it if need be.
At the UN
In addition, the United Nations has been pursuing general global regulation of the internet. The U.N.'s first Working Group on Internet Governance was created in 2004 to propose models for global internet regulation.
Unfortunately, the working group has not been able to agree on how to create new transnational bodies with rule-setting or regulatory power over the internet. Each country has different views on the global political issues raised by the internet’s vast reach. While some countries can find common ground, it may be nearly impossible to create a worldwide model that harmonizes all of these perspectives.
The farthest the U.N. has gotten so far is the creation of the Internet Governance Forum, which brings together governments, private companies and individuals to address questions about internet regulation. The group has discussed and reported on internet access, human rights and free speech issues. These discussions are an opportunity to exchange experiences and views, but no negotiated outcomes, rules or laws come out of the IGF.
Finding widespread common ground on internet-based issues will likely only become more difficult as the U.K. exits from the EU and the U.S. takes increasingly nationalist positions. Even so, the experiences of smaller groups of countries may inform a broader effort as global policies on terrorism shift, and the world’s approach to internet regulation changes with it.
Shontavia Johnson, Professor of Intellectual Property Law, Drake University
This article was originally published on The Conversation. Read the original article.