#i respect the dedication but Google this is how misinformation spreads.
wasabikitcat · 18 days
Text
I know we all hate Fandom Wiki when it comes to its usage as an actual wiki for various media because the website design is dog shit, but I feel like we need to at least respect it for its role as a complete Wild West for 8 year olds on the internet to create elaborate fanons for their ideas about theoretical reboots and spinoffs and video game tie-ins of random kids shows. They're out there making full show bibles and scripts and 5 year business plans for their spin-offs of Fairly Oddparents and Veggietales for no one but themselves, just as god intended the internet to be used for. We need to design a better website for them to put this shit on, because it's a shame that Fandom has a stranglehold on the market of entire wikis made exclusively for things that are entirely made up and only exist in the brains of like 3 random kids on the internet.
20 notes · View notes
stellar-stag · 7 years
Text
The Problem With Tech
Disclaimer: The opinions reflected in this essay are my own and do not represent LinkedIn, Inc. For any questions regarding LinkedIn, please direct inquiries to [email protected].
(don't wanna be sued or fired, so...)
Juicero is the hot new joke right now. A startup offering a $400 juicer (or juice bag squeezer, I suppose) that has an internet connection and QR scanner to keep you from drinking anything that's spoiled or recalled, with the distinct side effect of being legally unable to obtain your juice if the scanner breaks, your internet or Juicero's server goes down, someone hacks the juicer, etc etc. When a company has to issue a statement asking people to not manually squeeze their product despite it being both easy and the purpose of the product, something has gone horribly awry.
And we laugh and we mock, but underneath it all I feel that this is an issue that's perfectly representative of some of THE crucial issues in West Coast Tech right now (I would say Silicon Valley, but Seattle and Los Angeles companies are equally guilty). Some of you must be wondering how this could have possibly gotten through four rounds of investing, extensive design and user testing, and release before these issues came up.
It's not that they didn't know. It's that they didn't care.
There's a lot of factors that led us to this point, but I'm going to narrow it down to what I think are the four biggest: An idolization of "intelligence" as a supreme moral good, a conflation of success with intelligence, a lack of personal responsibility for consequence, and a widespread sense of complacency.
To start off: I am a software developer working for LinkedIn, and I currently live with three other software developers: two of them work for Google, one works for Facebook. Our broader local social circle consists almost exclusively of developers, mostly Google but also Uber, Infer, Palantir, and the like. And while I can't claim this is a universal attitude amongst tech, at least amongst this group, everything is an optimization problem. Playing board games, especially euro games, is an excruciating process where they can take upwards of twenty minutes to take a single turn, taking the time to analyze decision trees and model other players' strategies and decisions. But they also seem to be completely ignorant of board games as a social function. My roommate who optimizes most is, as a direct result, very good at board games. But when players act against him to prevent him from winning, because he always wins if we don't stage an intervention, he protests. When someone makes a move he's deemed suboptimal, and it ends up being a benefit to them over what he thought was optimal because he didn't anticipate it, he still couches it in language of "the wrong choice" or "what they should have done".
Because this group values optimization, efficiency, and intellect above all else. Obviously, this has a lot of issues: "intelligence" as a quality has a long and storied history of being used to denigrate others and justify oppression, despite being, like anything else, a collection of loosely related skills that people can have varying amounts of practice at, and in practice far less important than dedication and a willingness to practice and learn. Intelligence, as the public regards it, seems to mean "skill at mathematics, logic, memory, and reading comprehension, as well as rate of skill acquisition in these areas". But when it's treated as a general marker of value, we start getting problems.
This ties into the next point: Tech regards success as a marker of intellect, and therefore as a moral good. When Elon Musk joined Trump's advisory board, there were arguments about whether it was good or bad: was it lending legitimacy to Trump and cozying up, or an attempt at harm minimization? Regardless, people protested and boycotted, and I saw a former classmate respond that we "mustn't shame the smartest people in the country". And that really stuck with me. Putting aside the Tesla, which is admittedly a massive advancement toward renewable energy vehicles, and the advisory board debate, Musk has adopted some intensely strange causes as his goals, such as brain uploading and other transhumanist projects, which some might argue shows a disregard for accessibility or practicality, while simultaneously discouraging those who work in the Tesla manufacturing plants from unionizing by attempting to placate them with frozen yogurt. He also cast the union as an unjust tyrant and the company as its powerless, oppressed victim by likening the dispute to David and Goliath. Panem et circenses, indeed.
In short, there is much about Musk to criticize. To claim he should receive immunity from this criticism by virtue of intellect is concerning to say the least, but it's an idea that's present in the tech community at large, from the rationalists at LessWrong.org to the Effective Altruism movement, and on down to the people who, in complete seriousness, advocate for Silicon Valley to lead the world, with Elon Musk as CEO of the United States. The form differs, but the underlying idea remains the same: the best thing one can be is smart, and since we are successful, we are the smartest and therefore the best.
However, despite feeling responsible enough for the well-being of the world to oh-so-magnanimously offer to take the reins and save the common masses from themselves, tech has a consistent problem with personal accountability. Facebook was, and remains, a prime means of spreading misinformation. But it took massive outcry for them to cop to their complicity in this matter or to take action. And this manifests in so, so many ways. One of my roommates refuses to act as though the rising costs of living in the Bay Area are detrimental, claiming that the influx of tech into SF is harmless because "cities are made to house people" and "tech has buses to get employees to work, so that lower income workers are driven further away from work isn't a problem" (ignoring the historical and cultural issues at play in gentrification, a rising sense of entitlement, and the fact that most tech companies only offer such luxurious benefits to their salaried and full time employees, not the contractors or part time workers, a.k.a. the workers who make the least, have the most trouble securing consistent transportation to work, and are most necessary to the upkeep of the offices and the benefits they provide while receiving the least respect and compensation. But hey, at least the buses have WiFi so you can work while you commute!)
And that's not the worst example. An acquaintance, who has thankfully moved very, very far away, once attended our weekly board game nights. He was a software engineer for Facebook. For those unaware, ad revenue is the prime, and essentially only, stream of revenue for Facebook. As part of compensation, workers receive ad credits, to be used for ads on Facebook. And this acquaintance once had an idea. He convinced his fellows to pool their credits together, and with it, he purchased an advertisement with the following stipulation: This ad would be served to all women in the Bay Area within the age range of roughly 23-30 or so. The content of the ad was simply his picture and the phrase "Date <acquaintance's name>" (at least, as he related it to me. I thankfully never witnessed the ad directly).
Now, given the fact that tech is incredibly male dominated and hostile to women, one would think this ad is at best tone-deaf and at worst horrifying. And yet, he related this to me in candor, treating this all as a joke that had gone awry. When I raised the possibility that this was literally harassment, regardless of any potential joking intent, I was met with blank stares and an insistence that it was hilarious and not serious (of which I remain unconvinced). Granted, one of the women targeted by the ad was his ex-girlfriend, who lodged a complaint, and the acquaintance was subsequently fired for his conduct after a massive scandal about the potential issues regarding the invasiveness of targeted advertising and how it contributes to a culture of exclusion.
Just kidding! There was a single local story about it where he was kept anonymous and he got a slap on the wrist and a book deal about his experiences dating in Silicon Valley as a software engineer. The book can be purchased on Amazon and while I haven't read it, nothing about the title, description, or author bio implies to me that he is even remotely repentant, beyond a vague sense that his missteps are due to being *socially awkward, but in an endearing way* as opposed to, you know, actively curating and supporting a toxic environment for women.
And it might seem as though these examples are simply bad eggs, but they really aren't. They're just symptoms of an industry that looks at a lack of diversity and, rather than seriously examine why women don't stay in industry and how the culture they so take pride in is complicit, decide that obviously it's just that being programmers didn't occur to women, so we've just got to make programming seem fun and feminine, right? Just lean in, women! Just grit your teeth, prepare yourself for an unending nightmare of disrespect and abuse, and lean in! And that's not even remotely approaching the severe underrepresentation of black and Latinx people in tech.
But I digress.
Where does this aversion to responsibility come from? There are so many possibilities. But the one most unique to West Coast Tech is the corporate culture, or perhaps, the lack thereof. It's a land of man-buns, flip flops, and company t-shirts. My roommate owns a combination bottle opener and USB drive, proudly emblazoned with Facebook’s logo. The brogrammer is alive and thriving. And to be completely fair, this culture is actually something I quite like about working in tech (The casual part, not the acting like a college freshman part). That I may be frank in my discussions with my co-workers, swear profusely and use emojis in email, and casually discuss my mental health with the man three steps above me in the corporate hierarchy (and two below the top) is quite refreshing. But it has drawbacks.
I attended a college that required a minor in the humanities, and had as its mission statement to educate people in STEM who would understand the impact of their work on society. But so many people just viewed those requirements as an obstacle, or just took economics and got the takeaway of how to best impact markets. And most colleges don't even pay lip service to such a goal. So I worry that "casual" is code for "unwilling to examine potential harm caused by one's actions". That the culture is why harassment can be seen as "just a joke". Why anyone who feels unwelcome is just "too uptight". Why people can be reasonably othered and rejected in interviews because of "a lack of culture fit". And without a willingness to accept responsibility for the consequences of actions, nothing will change.
This ties into the final point: the complacency. Everyone in tech wants to be seen as changing the world. But I'm also privy to the conversations we have in private, and you know what we care about more? Compensation. It's pretty rare that someone I know will come home from work and express that hey, their company is working on something that will legitimately help so many people. More often, we have discussions about who has the better offices, or the best snacks, or the best free meals. I like to think I'm a kind person, but is that really true? I may profess to be aware, but I still own no fewer than ten garments with LinkedIn's logo on them. I still take full benefit of all of the compensation, including free breakfast, lunch, and dinner, and great insurance, and a free gym. I still just used my ludicrous paycheck to purchase a condo instead of anything magnanimous or truly worthwhile. And my fellows are much the same.
The irony that I wrote this entire essay, on company time, on a company device, because today is the one Friday per month we get to devote to professional development, a day that is discounted in work estimates because we are expected to do something other than our normal duties (read: not come to work), is not lost on me.
I touched earlier on the Effective Altruism movement, which is made up primarily of tech and tech-adjacent workers. I remain somewhat critical of the movement, for a number of reasons. First, there is a focus on its own impact that nonetheless continues the trend of disavowing consequences. One of the most notorious discussions in Effective Altruist groups is how to avoid a theoretical AI that could eliminate humanity. That conversation stays in the wheelhouse of safely testing AIs that don't seem to be anywhere close to reality, rather than more concrete examples of how tech reinforces power imbalances, like, say, advertising algorithms that reinforce racist stereotypes. My second criticism is that many of the metrics used by EA to measure the effectiveness of charities are purely monetary: how much of what goes in goes back out. This ignores other factors, such as raising awareness, operational costs at various sizes and scales, and the question of how directly money even translates into benefit. The good done per dollar is not considered, merely dollar preservation from donor to donee. Furthermore, that the natural extension of Effective Altruism is that, in order to be a good person, the best thing one can do is obtain a high-paying job (such as one in tech) and donate money, rather than donate time by volunteering, strikes me as convenient justification rather than honest analysis.
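To make that metrics point concrete, here is a toy comparison. Every number and charity name below is invented purely for illustration: ranking by dollar preservation alone can come out the exact opposite of ranking by benefit produced per donated dollar.

```python
# Toy illustration (all figures invented): "pass-through" ranking vs. benefit-per-dollar ranking.
charities = {
    # name: (fraction of donations passed to programs, cost in dollars per unit of benefit delivered)
    "LeanOps Fund": (0.95, 400.0),   # very low overhead, but each unit of benefit is expensive
    "FieldWork Org": (0.70, 60.0),   # heavier operational costs, far cheaper to deliver benefit
}

def pass_through_rank(data):
    """What a purely monetary metric rewards: dollar preservation from donor to donee."""
    return sorted(data, key=lambda name: data[name][0], reverse=True)

def benefit_per_dollar_rank(data):
    """What the essay argues actually matters: units of benefit produced per donated dollar."""
    return sorted(data, key=lambda name: data[name][0] / data[name][1], reverse=True)

print(pass_through_rank(charities))        # ['LeanOps Fund', 'FieldWork Org']
print(benefit_per_dollar_rank(charities))  # ['FieldWork Org', 'LeanOps Fund']
```

Under the invented numbers, the "efficient" charity by overhead ratio is the less effective one per dollar, which is the gap the purely monetary metric misses.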
This excellent article (which by and large inspired this one) touches on many of these issues, but I would like to highlight one statement in particular: “Solving these problems is hard, and made harder by the fact that the real fixes for longevity don’t have the glamour of digitally enabled immortality.” As Emily Dreyfuss points out, Silicon Valley has very little interest in actually bringing about progress. Silicon Valley is trying to sell you on the idea of progress. They want to peddle you a “The Jetsons”-style future, but instead of the post-scarcity society that has mastered space travel, they want you to buy Rosie the Robot Maid. Helpful? Sure. Revolutionary? Hardly.
It's perhaps unrealistic to expect tech to actually do the hard, thankless work to improve the world, but it's certainly not unfair to expect them to at least be honest. LinkedIn is more benign than most tech companies: it is, for all intents and purposes, a resume book masquerading as a social network. The adage goes that "if you're not the customer, you're the product," and that rings true in tech. In exchange for use of the site, people surrender their information to the company to be sold as potential customers to advertisers. At least with LinkedIn, that's the expectation and goal. People give LinkedIn their resume and employment information and LinkedIn, in turn, lets recruiters look for leads. The users more or less expect and want this, because they joined for the express purpose of finding job opportunities. But that this is benign doesn't mean it is revolutionary or radical. It remains useful only to white-collar employees. Blue-collar workers have no use for LinkedIn, and we can hardly claim to be changing the world of employment when the people who need us most can't benefit from the services we offer.
Could I go and find a company that does nobler work, or enter academia to advance at least the collective knowledge of humanity in some way? Sure. Will I? No. I am selfish, and don't want to give up my cushy job, and cushy benefits. And I'm not the only one.
The most interesting thing to me about the Juicero debacle is how, with even the slightest forethought, they could have actually done something impressive. Consider the As-Seen-On-TV gadgets: I mean, who really needs a one-handed spaghetti twirler, right? Well, people with motor control issues or disabilities, is who. People who struggle with tasks most consider trivial. But people don't care about that, they care about what can be marketed, so we instead act as though the world is simply excessively clumsy and hope that someone who really needs that extra help sees it.
So, consider the Juicero bag. Reporters have noted, laughingly, jokingly, that the bag is exceedingly easy to squeeze and thus remove juice from. It's so simple, it requires hardly any effort! Someone went through the process of designing a bag meant to dispense its contents far more easily than other bags, as well as a device to automate the squeezing. Now, I don't have motor control issues or disabilities, but I'm willing to bet: someone who does? Or who can't easily get, say, an orange juice carton from the fridge, open the top, and lift the heavy, irregular object at just such an angle in just such a location for just such a time, all to get themselves a cup of juice? Yeah, I bet someone, somewhere, saw this and thought: finally, I can actually get myself juice without needing help or preparation.
And Juicero made this device, slapped an internet connection, QR Codes and a $400 price tag onto it, and marketed it as being the future of juice, vulnerabilities and use cases be damned. And I want to scream.
Because in the end, they cared more about being successful than being helpful.
Unfortunately, identifying the issues is one problem; addressing them is another. I'm not sure how to even begin tackling these. But we have to. People in tech, myself included, need to take responsibility for our culture and creations. We have a moral duty to do better. To be better. The internet is, at its core, a wonderful tool for the accessibility of information. But like all tools, it can be and is misused, and we're the ones who let it happen. We need to fix this.
6 notes · View notes
ageloire · 6 years
Text
Facebook Has Published Its Internal Community Standards and Will Roll out a New Appeals Process
Facebook has published its internal enforcement guidelines. 
These guidelines -- or community standards, as they're also known -- are designed to help human moderators decide what content should (not) be allowed on Facebook. Now, the social network wants the public to know how such decisions are made.
"We decided to publish these internal guidelines for two reasons," wrote Facebook's VP of Global Product Management Monica Bickert in a statement. "First, the guidelines will help people understand where we draw the line on nuanced issues."
"Second," the statement continues, "providing these details makes it easier for everyone, including experts in different fields, to give us feedback so that we can improve the guidelines – and the decisions we make – over time."
Facebook's content moderation practices have been the topic of much discussion and, at times, contention. At CEO Mark Zuckerberg's congressional hearings earlier this month, several lawmakers asked about the removal or suppression of certain content that they believed was based on political orientation.
And later this week, the House Judiciary Committee will host yet another hearing on the "filtering practices of social media platforms," where witnesses from Facebook, Google, and Twitter have been invited to testify -- though none have confirmed their attendance.
What the Standards Look Like
According to a tally from The Verge reporter Casey Newton, the newly-released community standards total 27 pages, and are divided into six main sections:
Violence and Criminal Behavior
Safety
Objectionable Content
Integrity and Authenticity
Respecting Intellectual Property
Content-Related Requests
Within these sections, the guidelines delve deeper into the moderation of content that might promote or indicate things like threats to public safety, bullying, self-harm, and "coordinating harm."
That last item is particularly salient for many, following the publication of a New York Times report last weekend on grave violence in Sri Lanka that is said to have been ignited at least in part by misinformation spread on Facebook within the region.
According to that report, Facebook lacks the resources to combat this weaponization of its platform, due to some degree to its lack of Sinhalese-speaking moderators (one of the most common languages in which this content appears).
As promised, I wanted to briefly explain why I hope anyone who uses Facebook — in any country — will make time for our Sunday A1 story on how the newsfeed algorithm has helped to provoke a spate of violence, as we’ve reconstructed in Sri Lanka: https://t.co/hrpBSMfb4t
— Max Fisher (@Max_Fisher) April 23, 2018
But beyond that, many sources cited in the story say that even when this violence-inciting content is reported or flagged by concerned parties on Facebook, they're told that it doesn't violate community standards.
Within the guidelines published today, an entire page is dedicated to "credible violence," which includes "statements of intent to commit violence against any person, groups of people, or place (city or smaller)" -- which describes much of the violence reported in the New York Times story that Facebook was allegedly used to incite. 
Bickert wrote in this morning's statement that Facebook's Community Operations team -- which has 7,500 content reviewers (and which the company has said it hopes to more than double this year) -- currently oversees these reports and does so in over 40 languages.
Whether that includes Sinhalese and other languages spoken in the regions affected by the violence described in the New York Times report was not specified.
A New Appeals Process
The public disclosure of these community standards will be followed by the rollout of a new appeals project, Bickert wrote, that will allow users and publishers to contest decisions made about the removal of content they post.
To start, the statement says, appeals will become available to those whose content was "removed for nudity/sexual activity, hate speech or graphic violence." Users whose content is taken down will be notified and provided with the option to request further review into the decision.
That review will be done by a human moderator, Bickert wrote, usually within a day. And if it's been determined that the decision was made in error, it will be reversed and the content will be re-published.
In the weeks leading up to Zuckerberg's congressional testimony earlier this month, Facebook released numerous statements about its policies and practices, and the changes it would be making to them. It was predicted that the motivation behind those numerous statements was to leave as little room for speculation as possible among lawmakers.
There's some probability that this most recent statement was made, this appeals process introduced, and these standards published in anticipation of potential questions being asked at this week's hearings.
In addition to the aforementioned House Judiciary Committee hearing on "filtering practices" tomorrow, Facebook CTO Mike Schroepfer will testify before UK Parliament's Digital, Culture, Media and Sport Committee. (Dr. Aleksandr Kogan -- the Cambridge University professor behind the data-harvesting app who eventually sold personal user information to Cambridge Analytica -- testified before that same committee this morning.)
How or if this statement and the content review guidelines are raised within these events remains to be seen -- but it's important to again note that one reason for their release, according to Bickert, was to allow feedback from the public, whether that comes from daily Facebook users or topical experts.
It's possible that in the wake of Facebook's unfolding and continuous scrutiny, all of its policies will continue to evolve.
Featured image credit: Facebook
from Marketing https://blog.hubspot.com/marketing/facebook-internal-community-standards-appeals-process
0 notes
dorothydelgadillo · 5 years
Text
Facebook Outlines New Steps to Fight Misinformation in News Feeds
Facebook is continuing to make moves to strengthen its platform in an effort to build trust with its users and advertisers.
In just 2019, they’ve made great progress in fighting misinformation and boosting the authority of content posted on the platform.
This week, they posted a slew of announcements to their Newsroom blog regarding the steps they’ve taken, and what they’re planning to do moving forward to limit the spread of “fake news,” hate speech, and other content that violates Facebook’s Community Standards.
Essentially, Facebook is planning to build upon its “Remove, Reduce, Inform” process first announced in 2016.  
To understand the basics of the 3-step plan, here’s a short video Facebook created upon the initial announcement:
Here’s a breakdown of how Facebook hopes to expand upon this process to strengthen their ability to control misleading or harmful content on the platform:
Removing Harmful Content
While Facebook has many measures in place to prevent content that violates its Community Standards from being posted, some can still slip through the cracks.
When that happens, the goal is to catch and remove it as quickly as possible - but doing so has presented challenges.
This problem persists most frequently in closed Facebook Groups. Facebook does have systems in place that catch around 95% of these infractions, but it has found it needs to do more to hold Groups accountable for upholding Facebook’s Community Standards.
In response, they have put two new standards in place.
First, they added a “Recent Updates” tab to their Community Standards site. This tab will show any standards that have been added or updated in the last two months.
Because Facebook’s terms can change so frequently, it's important that users have access to up-to-date information on what is considered a violation.
To boost compliance, Facebook is holding group admins more accountable for violations of Community Standards.
“Starting in the coming weeks, when reviewing a group to decide whether or not to take it down, we will look at admin and moderator content violations in that group, including member posts they have approved, as a stronger signal that the group violates our standards.”
Additionally, Facebook is introducing a new tool called Group Quality to increase transparency surrounding what it considers to be a “violation” of its Community Standards and when it enforces them.
The feature will show a historical overview of content that has been removed or flagged for violations, and will show a section for any false news posted in the group. It’s currently unclear if only group admins can see this feature, or if all group members are able to view it.
Reducing Misleading Content
This section refers to content that is problematic or annoying, like clickbait, spam, or fake news - but doesn’t necessarily violate Facebook’s Community Guidelines.
For example, you could share a blog article on “scientific research” on why the Earth is flat, and even if the content isn’t factual, it doesn’t violate Facebook’s standards.
This “grey” area between what’s just misleading and what’s truly harmful is why Facebook has had such a hard time limiting the spread of misinformation on its platform.
While False News is referenced in the Community Guidelines, it also explains why this has been such a struggle for the platform, and how Facebook approaches the issue:
“Reducing the spread of false news on Facebook is a responsibility that we take seriously. We also recognize that this is a challenging and sensitive issue. We want to help people stay informed without stifling productive public discourse. There is also a fine line between false news and satire or opinion. For these reasons, we don't remove false news from Facebook but instead, significantly reduce its distribution by showing it lower in the News Feed.”
Contrary to popular belief, the right to Free Speech doesn’t apply to private companies like Facebook (I didn’t realize that needed to be said until I saw Twitter’s response to this news).  
So while Facebook wants to allow all users to express themselves on the platform, they are well within their rights to put additional guidelines in place to enhance the content posted to the platform.
To expand on these efforts, they’ve set the following new standards in place:
Collaborating with outside experts to boost fact-checking efforts: A main reason misinformation spreads so rapidly is that there is so much of it that there aren’t enough eyes in the world to keep on top of it all. To create a strategy that will truly discredit false news and promote trustworthy content, Facebook is consulting academics, fact-checking experts, journalists, survey researchers and civil society organizations to build a system that works. They explain this process further in the video below:
Expanding its third-party fact checking program: Now, The Associated Press will be “expanding its efforts by debunking false and misleading video misinformation and Spanish-language content appearing on Facebook in the US”.
Limiting the reach of groups that repeatedly share misinformation: Groups that have consistently been flagged by fact-checkers for sharing misleading articles won’t have as much exposure in members’ News Feeds. This provides additional incentives for admins to police this content on their own, and also limits members from seeing the posts unless they seek them out in the group itself.
Introducing “Click Gap” into their News Feed Ranking System: “Click Gap” is a new feature that will help determine how posts show up in your feed. For SEO professionals, this is very similar to how Google evaluates inbound and outbound links directed to your website pages. Facebook states that “Click-Gap looks for domains with a disproportionate number of outbound Facebook clicks compared to their place in the web graph. This can be a sign that the domain is succeeding on News Feed in a way that doesn’t reflect the authority they’ve built outside it and is producing low-quality content.” A rough sketch of the idea follows this list.
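Facebook hasn't published the mechanics of Click-Gap beyond the description above, so the snippet below is only a toy illustration of the general idea: compare a domain's share of Facebook-referred clicks against its footprint in the wider web link graph. The field names, the floor value, and the ratio itself are all hypothetical, not Facebook's actual formula.

```python
from dataclasses import dataclass

@dataclass
class DomainStats:
    """Toy inputs; both fields are hypothetical, illustrative measurements."""
    fb_click_share: float   # fraction of all outbound Facebook clicks that go to this domain
    web_graph_share: float  # fraction of inbound links this domain has across the open web

def click_gap_score(stats: DomainStats, floor: float = 1e-6) -> float:
    """Naive 'click gap': how much larger a domain's Facebook click share is
    than its share of the wider web graph. A high ratio suggests the domain's
    reach comes almost entirely from News Feed rather than earned authority."""
    return stats.fb_click_share / max(stats.web_graph_share, floor)

# A site huge on Facebook but nearly invisible elsewhere scores high...
print(click_gap_score(DomainStats(fb_click_share=0.02, web_graph_share=0.0001)))  # 200.0
# ...while an established publisher with broad inbound links scores near 1.
print(click_gap_score(DomainStats(fb_click_share=0.02, web_graph_share=0.015)))   # ~1.33
```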
Informing the Facebook Community
In order to truly make an impact, it’s important to not only limit the spread of misinformation, but also educate the public to inform their own perspective.
The problem with social media, and the internet in general, is that if you see a headline or read a statistic, you’re inclined to believe it.
Because Facebook users can post whatever they want, it makes it difficult to separate what’s credible and what isn’t.
So, rather than simply take reactive measures by removing and limiting the exposure of misinformation on Facebook, it’s important for people to know how credible these sources are before they decide to believe them.
In the past year, Facebook has added features to help users determine a source's credibility on their own. Now, they’ve announced expansions of these tools that will provide even more transparency into these sources.
First, Facebook’s Context Button is getting some new features. This tool was first launched in April 2018, and provides people with more background information about the publishers and articles that appear in their News Feed so they can better decide what to read, trust and share.
Now, Facebook is adding Trust Indicators to the Context Button. These were created by the Trust Project, an association of news organizations dedicated to fairness, accuracy, and transparency in news reporting. Moving forward, articles on Facebook will be evaluated against the Trust Indicators' eight core standards to assess credibility. Additionally, the Context Button will now apply to photos that have been reviewed by third-party fact-checkers.
Facebook’s Page Quality tab will also be getting new information to help users better understand if the page is biased or trustworthy. This will be continually updated over time as new information is identified - but Facebook is starting off this month by updating the page’s status with respect to how often they post “clickbait” articles.
Final Thoughts
Whether it’s Facebook’s fault or not, it’s undeniable that the spread of misinformation has gotten out of hand.
It’s true that everyone is entitled to their own opinion, but it’s important that people draw their conclusions based on facts, not false information.
These updates have important implications for marketers as well. If we want prospects and customers to trust our information, we should be mindful of the groups we’re in, the articles we share, and the content we put out.
For example, if you post an article that has exaggerated statistics or information from a source that has the potential to be flagged by Facebook, it might reflect poorly on your brand even if you thought it was factual before you posted it.
These updates are a reminder that the industry is changing, and it’s likely that social media will soon have similar restrictions to what you’d see on cable television. This isn’t necessarily a bad thing, but does have the potential to disrupt the current way you use social media to market your business.
So, as always, stay on top of the changes, and be proactive in adapting your strategy.
from Web Developers World https://www.impactbnd.com/blog/facebook-outlines-steps-to-fight-misinformation-in-news-feeds
0 notes
deniscollins · 6 years
Text
Dozens at Facebook Unite to Challenge Its ‘Intolerant’ Liberal Culture
A group of Facebook employees have formed a Political Diversity group that criticizes the organization for having a liberal bias in choosing what sites to ban. Meanwhile others have said Facebook, out of fear of being seen as biased, has let too many right-wing groups flourish on the site. If you were a Facebook executive, how would you address these political bias claims?
The post went up quietly on Facebook’s internal message board last week. Titled “We Have a Problem With Political Diversity,” it quickly took off inside the social network.
“We are a political monoculture that’s intolerant of different views,” Brian Amerige, a senior Facebook engineer, wrote in the post, which was obtained by The New York Times. “We claim to welcome all perspectives, but are quick to attack — often in mobs — anyone who presents a view that appears to be in opposition to left-leaning ideology.”
Since the post went up, more than 100 Facebook employees have joined Mr. Amerige to form an online group called FB’ers for Political Diversity, according to two people who viewed the group’s page and who were not authorized to speak publicly. The aim of the initiative, according to Mr. Amerige’s memo, is to create a space for ideological diversity within the company.
The new group has upset other Facebook employees, who said its online posts were offensive to minorities. One engineer, who declined to be identified for fear of retaliation, said several people had lodged complaints with their managers about FB’ers for Political Diversity and were told that it had not broken any company rules.
Another employee said the group appeared to be constructive and inclusive of different political viewpoints. Mr. Amerige did not respond to requests for comment.
The activity is a rare sign of organized dissent within Facebook over the company’s largely liberal workplace culture. While the new group is just a sliver of Facebook’s work force of more than 25,000, the company’s workers have in the past appeared less inclined than their peers at other tech companies to challenge leadership, and most have been loyalists to its chief executive, Mark Zuckerberg.
But over the past two years, Facebook has undergone a series of crises, including the spread of misinformation by Russians on its platform and the mishandling of users’ data. Facebook has also been accused of stifling conservative speech by President Trump and Senator Ted Cruz, Republican of Texas, among others. This month, the social network barred the far-right conspiracy theorist Alex Jones, a move that critics seized on as further evidence that the company harbors an anti-conservative bias.
Within Facebook, several employees said, people have argued over the decisions to ban certain accounts while allowing others. At staff meetings, they said, some workers have repeatedly asked for more guidance on what content the company disallows, and why. Others have said Facebook, out of fear of being seen as biased, has let too many right-wing groups flourish on the site.
The dispute over employees’ political ideology arose a week before Sheryl Sandberg, Facebook’s chief operating officer, is scheduled to testify at a Senate hearing about social media manipulation in elections. A team helping Ms. Sandberg get ready for the hearing next Wednesday has warned her that some Republican lawmakers may raise questions about Facebook and biases, according to two people involved in the preparations.
On Tuesday, Mr. Trump again brought up the issue of bias by tech companies with tweets attacking Google. In remarks later in the day, he widened his focus to include Twitter and Facebook.
Those companies “better be careful because you can’t do that to people,” Mr. Trump said. “I think that Google, and Twitter and Facebook, they are really treading on very, very troubled territory and they have to be careful. It is not fair to large portions of the population.”
Facebook has long been viewed as a predominantly liberal company. Mr. Zuckerberg and Ms. Sandberg have donated to Democratic politicians, for example, and have supported issues such as immigration reform.
The social network has sometimes struggled to integrate conservatives into its leadership. Palmer Luckey, the founder of Oculus, the maker of virtual reality goggles that Facebook acquired, was pressured to leave the company last year, months after news spread that he had secretly donated to an organization dedicated to spreading anti-Hillary Clinton internet memes. And Peter Thiel, an outspoken supporter of Mr. Trump, has faced calls for his resignation from Facebook’s board.
Mr. Zuckerberg publicly defended Mr. Thiel last year, saying that he valued Mr. Thiel and that it was important to maintain diversity on the board. In an appearance before Congress this year, Mr. Zuckerberg responded to a question about anticonservative bias by saying he wanted Facebook to “be a platform for all ideas.”
In May, Facebook announced that former Senator Jon Kyl, an Arizona Republican, would lead an inquiry into allegations of anticonservative bias on the social network. New employees also go through training that describes how to have respectful conversations about politics and diversity.
Other Silicon Valley companies, including Google, have also experienced a wave of employee activism over diversity. If tech companies are willing to adjust their workplaces to make underrepresented groups more welcome, some employees argue, they should extend the same regard to those who do not fit the liberal-leaning Silicon Valley mold.
Mr. Amerige, who started working at Facebook in 2012, said on his personal website that he followed philosophical principles laid out by the philosopher and writer Ayn Rand. He posted the 527-word memo about political diversity at Facebook on Aug. 20.
On issues like diversity and immigration, he wrote, “you can either keep quiet or sacrifice your reputation and career.”
Mr. Amerige proposed that Facebook employees debate their political ideas in the new group — one of tens of thousands of internal groups that cover a range of topics — adding that this debate would better equip the company to host a variety of viewpoints on its platform.
“We are entrusted by a great part of the world to be impartial and transparent carriers of people’s stories, ideas and commentary,” Mr. Amerige wrote. “Congress doesn’t think we can do this. The president doesn’t think we can do this. And like them or not, we deserve that criticism.”
0 notes
Text
Why I Wouldn’t Take Google’s Depression Test
At the end of August, Google decided to make available directly on its site (through a “knowledge panel”) the ability to take a depression screening quiz. We know a thing or two about online depression screening quizzes, because I put one of the first interactive depression screening quizzes online back in 1996, long before Google even existed.
Here’s the thing… Depression screening tests — like the PHQ-9 that Google is now offering on its website — are super helpful tools to give a person a little more insight into the possibility of having a serious mental illness. The problem with Google offering it is that this mega-marketing company is collecting your health data. Do you really want Google to have this kind of sensitive information about your mood?
Depression screening quizzes are great tools. They help a person learn whether they have symptoms commonly associated with clinical depression. They can then take those results to their family physician or a mental health professional to discuss further. Nobody (much) questions the usefulness of these kinds of tests.
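For context on what a screener like the PHQ-9 actually computes: it's nine items, each self-rated 0 to 3, summed into a 0-27 total that maps onto published severity bands. The sketch below uses those standard bands; it's an illustration of the scoring arithmetic only, not a clinical or diagnostic tool, and a real screener would also flag the self-harm item for immediate follow-up regardless of the total.

```python
def score_phq9(answers):
    """Sum nine 0-3 self-ratings and map the total onto the standard PHQ-9
    severity bands. Simplified illustration only -- not a diagnostic tool."""
    if len(answers) != 9 or any(a not in (0, 1, 2, 3) for a in answers):
        raise ValueError("PHQ-9 expects nine answers, each rated 0-3")
    total = sum(answers)
    if total <= 4:
        severity = "minimal"
    elif total <= 9:
        severity = "mild"
    elif total <= 14:
        severity = "moderate"
    elif total <= 19:
        severity = "moderately severe"
    else:
        severity = "severe"
    return total, severity

print(score_phq9([1, 2, 1, 0, 2, 1, 1, 0, 0]))  # (8, 'mild')
```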
But what happens when you give your health data to a mega-data company like Google? Here’s what Google says about your privacy when taking the depression test online:
All your answers are kept confidential. […]
Google respects the privacy and sensitivity of these results. No individual data linking you to your answers will be used by Google without your consent. Some anonymized data may be used in aggregate to improve your experience.
You’re apparently giving your consent to have Google use your data when you get started with the quiz — there’s no apparent way to opt out of having Google collect your quiz answers for its own data usage. This isn’t really highlighted when you take the quiz — you have to click on a little arrow to find this out. I find this a bit unnerving personally.
Why would you trust Google with your health data? Google is a huge online marketing monopoly that has an iron grip on what people see when they search for information online — both in content and in video (through YouTube).
What’s NAMI Doing Here?
I guess to make people feel better about taking a quiz that’s been available online for more than a decade, Google partnered with a non-profit that works in the area of mental illness, the National Alliance on Mental Illness (NAMI). This is no dig on NAMI, but NAMI is not a scientific organization, nor does it have much to do with the PHQ-9. It is an organization that does great, amazing work from a family perspective on mental illness. But why only NAMI specifically? Why didn’t Google reach out to more than just one non-profit in mental illness to help with this effort?
There are literally hundreds of non-profits dedicated to ending the stigma of mental illness, and many who have done really great work in the past few years. For instance, Bring Change to Mind has really changed the modern conversation, in my mind, about mental illness. And Mental Health America has also worked very hard in this area of education and helping to reduce the stigma of mental illness. And that’s to name just two out of hundreds.
But only NAMI was chosen to help with this effort, which seems a little unfair to me.1
Trusted Results for Over Two Decades
The good news is that you don’t have to rely on Google for your depression test or in order to take a depression quiz online. We’ve been offering a number of different depression quizzes online since the 1990s, and we DO NOT collect your quiz results for anything other than scientific research — and then only when you specifically and voluntarily OPT-IN (not opt-out).2
Check them out below:
Quick Depression Test
Depression Quiz — Our standard 18-question depression test
All of our mental health quizzes — Over 3 dozen!
We applaud Google’s efforts in helping to disseminate more information about mental illness. But Google is a search engine, technology behemoth, and most of all, marketing company. They shouldn’t be providing information directly to consumers about these concerns, but rather directing people to the best information online. When they step over the line to become a publisher of mental health information, they need to be held to the same standards of other online health publishers.3
Today, they are not, and so the depression data they collect on you may very well be added to your existing online marketing profile. That may not be a concern to you today. But it may be in the future, when such data is used to make decisions about things you thought weren’t connected (such as getting the best rate on a mortgage, or applying for life insurance).
You don’t have to use our depression tests, but I would highly recommend not using Google’s either.
Footnotes:
1. But monopolies don’t need to worry about fairness, since they have the entire market and can pretty much do whatever they want.
2. We haven’t published anything on our quiz data, but we are working on such research now.
3. Who’s on their advisory or editorial board overseeing this information? Who’s their Editor-in-Chief? How is scientific information vetted for accuracy? There’s a good reason to ask: Google has a history of spreading misinformation about mental illness.
from World of Psychology https://psychcentral.com/blog/archives/2017/09/01/why-i-wouldnt-take-googles-depression-test/
0 notes