#not in an ironic or meta way she is rewarded for being an awful piece of shit
yuzensoniktornavida · 2 years
Text
Zoya literally shouldn't have gotten anything she got, fuck the ending of Rule of Wolves
arkus-rhapsode · 4 years
Note
Ok. So what do you think of the latest chapters of Fairy Tail 100 Years Quest? What about what's happening to the characters? I think you're so pissed that you've even stopped reading.
Hoo boy... *Cracks knuckles* We may be here for a while.
So before I start, let me make clear that I actually do keep up with 100YQ, though not out of any genuine interest anymore. I now stay with it for the ironic enjoyment. I could say EZ has dumb stuff too, but at this point, if you’re Hiro Mashima’s main audience, it’s fine. No, 100YQ is unique in that it has WTF moments that transcend mere FT fandom sensibilities.
Before I dig in, I’m gonna lay out some formatting rules, because we’re covering A LOT. I’m gonna be fair and split this up into the positive and negative aspects of the series, because I at least try to be intellectually honest in what I do.
To avoid going on tangents and jumping around, I’m gonna go in chronological order of events. This will not be an overview of the series up to this point, because that’s stuff I’ve already talked about. Instead, I’m gonna start from the point where this went from genuine interest to ironic interest. That being the Whited Out FT Guild.
Positives:
The concept of the wood dragon god having a kingdom on his back is really cool worldbuilding. It’s actually something I really liked about the sea dragon god as well: having a realm reliant on the dragon gives people a reason to revere them. The water dragon god controlling the tides while the wood dragon god supports a whole city on his back actually makes them seem like god figures, and it adds to the lore of Earthland in a way that Ishgar and Alvarez sorely failed at.
Laxus punching out Kyria. Petty? Yes, since Kyria is my least favorite dragon slayer. However, a lot of the whited-out FT mages were getting jobbered like crazy or just handed unceremonious defeats, so Laxus actually seeming like an obstacle was good.
The cat twist with Touka is actually a funny bit of trolling, and it was one of the few times there was effective foreshadowing, with Touka having a tail. (Too bad it was stuck in the meandering Gajeel plot.)
The Dragon Eater guild is a much better final villain army than the Spriggan 12. The 12 had little structure as to who the stronger members were, resulting in multiple Spriggans feeling like major disappointments.
Most Mashima villain organizations tend to be broken up like this: Boss -> Special Units (e.g., the Spriggan 12, the Nine Demon Gates, the Element 4) -> Fodder Units.
No one cares when the fodder of a villain group is beaten, as they’re just faceless minions. When you get to the special unit, though, that’s when there are actual obstacles and the villains start becoming more like characters. But Hiro has been bad at this when dealing with bigger organizations.
He never used to have to worry about telling you which members of a villain group’s special unit were more powerful than the others, because he was working with units composed of only a handful of characters, such as Team Lyon, the Element 4, and Death’s Head.
You could say they were all roughly around the same power threshold. Where the Spriggan 12 royally failed, however, was that there were 12 of them in that unit, and they were all just given the blanket label of being on the same level as the number 1 Wizard Saint. Yeah... that’s a check so large that Hiro could not cash it.
Hiro even seemed to retroactively acknowledge this by stating that August, Irene, and Larcade were the three strongest, to cover his ass for the fact that the other Spriggans were jobbered far more easily than anyone supposedly equal to the number 1 saint should be.
The Dragon Eaters, however, are opponents we’re gradually introduced to over the course of the storyline, and we actually see a demonstration of what a group of them can do, instead of one of them trying to take on all of Team Natsu at once. We see that Skullion’s team is roughly equal to Team Natsu, giving us a gauge for Dragon Eater strength. Then we get Wraith and Nebal, underlings who aren’t part of Skullion’s three-man team, implying they’re of a weaker variety and thus serving as stepping stones to the fight with Skullion. They also introduce us to the Black Dragon Slayer Cavalry, members above Skullion’s team, which gives us an idea of what strength level the audience should expect when they show up.
There’s a reason why the Yonkou crews in One Piece have so many ranks. You’re not gonna care about the minor thugs, but by making a distinction between the Headliners and the Disasters in Kaidou’s crew, you ensure the audience won’t feel disappointed if a Headliner is beaten by a weaker character like, say, Usopp, while still knowing they’re more than a foot soldier, so the win means something.
Now time for the negatives:
The concept of White Out is not awful, and it’s actually a fairly interesting concept for a villain motivation in FT. However, the White Witch is one of the most transparently evil characters in the series, so you know she’s doing this morally ambiguous thing simply because she’s evil. Imagine if this were about humans or royals who feared the growing power of mages, or a mage disillusioned by the likes of Zeref or the GMG, where it seems like magic is endless, and how that’s a threat to the world. No, the White Witch really just seems to want to be this grand manipulator, and she actively enjoys calling her whited-out people puppets.
However, there’s also the fact that the whiting out doesn’t make much sense. Some characters seem like mind-controlled puppets, like Juvia, while others are basically the same except they’re evil now, like Gajeel, Mira, Elfman, and Laxus. And some are dumb jokes, like Jellal.
So there’s no consistency to this brainwashing. The only other time I’ve seen a mind-control plot like that in media is Yugioh GX. Some people act like they’ve been brainwashed into something different, like Alexis, but then people like Bastion and a lot of the gag members of the Society, like Rose and Bob, act as if they’re not affected at all.
Yeah, this White Witch plot feels distractingly ripped off from the Society of Light from Yugioh GX.
The concept of Team Natsu vs. FT in the vein of the Fighting Festival arc is dumb from two standpoints. First, from a story standpoint: the Battle of Fairy Tail arc worked because everyone was willingly fighting to free the frozen girls. That let characters show shades of their motivations, like Alzack being willing to mow through his own comrades for Bisca, or the Thunder God Tribe protecting Laxus so he’d be the last man standing. There was a tangible reward on the line that motivated the characters to act the way they did.
Here, the characters are clearly fighting against their will because of an intangible force. This white magic makes them slaves, and they’re fighting because of “white doctrine,” something they only believe because of brainwashing. As such, you just want to see Natsu and the gang beat them up, stop the White Witch, and free them. There’s no force or intrigue that makes the audience care about both sides, like watching Alzack vs. Jet and Droy, where you know both sides want to save their partners but only one can. Instead, people only care because of a surface-level “friend turned evil” device. It takes the B- and C-list cast of the FT guild and makes them props.
And from a meta standpoint, there is no tension, because this is Team Natsu after the final battle with Acnologia. The team is leagues ahead of so many guild members, like Macao, Reedus, Max, etc., so the only real threat is the S-Class mages. That makes the big page spread of evil FT in cult robes dull, since only about three of them are actually gonna matter.
Then there’s Wraith. While Nebal was a boring, generic crazy guy, Wraith was actually really interesting at first. His ghost magic allowed for an interesting fight, and his possession having real limitations on how effective it could be made for a cool skill.
But then came the reveal about him and Makarov. I rolled my eyes at that point, but then I saw the previews and thought, maybe this’ll be the best thing to come out of this series. Everyone wants to know more about the FT of the past, around Team Makarov’s time.
But all the potential of young Makarov, young Porlyusica, and the rest of their team is put on fast-forward, as they’re all suddenly thinking about leaving. Still, maybe the reveal with Wraith could be interesting. I saw a lot of good theories, like Wraith being Makarov’s half-brother, or Wraith being a son of Makarov and Porlyusica who was killed by Ivan.
Well... any theory would’ve been better than what we got: Wraith was some random-ass mage, and when they say he and Makarov are “related,” it’s because the bonds of FT are the real family, and they transcend death itself.
...Gag me...
And then Wraith just fucks off into the afterlife, because we can’t actually have a fight end through the protagonist’s ingenuity. No, the villain just has to take themselves out because of feels. Isn’t that right, August and Irene?
In conclusion
Those are my thoughts, as brief and coherent as I could make them. If you wanna know my feelings on 100YQ, they can basically be summed up as FT being FT. If you expected more, you’re gonna be disappointed. But if you genuinely love the world and characters regardless of Hiro’s writing, you’ll probably still enjoy it regardless of what I’m saying.
trendingnewsb · 6 years
Text
The Dismal Science Remains Dismal, Say Scientists
When Hristos Doucouliagos was a young economist in the mid-1990s, he got interested in all the ways economics was wrong about itself—bias, underpowered research, statistical shenanigans. Nobody wanted to hear it. “I’d go to seminars and people would say, ‘You’ll never get this published,’” Doucouliagos, now at Deakin University in Australia, says. “They’d say, ‘this is bordering on libel.’”
Now, though? “The norms have changed,” Doucouliagos says. “People are interested in this, and interested in the science.” He should know—he’s one of the reasons why. In the October issue of the prestigious Economic Journal, a paper he co-authored is the centerpiece among a half-dozen papers on the topic of economics’ own private replication crisis, a variation of the one hitting disciplines from psychology to chemistry to neuroscience.
The paper inhales more than 6,700 individual pieces of research, all meta-analyses that themselves encompass 64,076 estimates of economic outcomes. That’s right: It’s a meta-meta-analysis. And in this case, Doucouliagos never meta-analyzed something he didn’t dislike. Of the fields covered in this corpus, half were statistically underpowered—the studies didn’t have enough data to reliably detect the effects they claimed to find. And most of the ones that were powerful enough overestimated the size of the effects they purported to show. Economics has a profound effect on policymaking and on our understanding of human behavior. For a science, this is, frankly, dismal.
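To see why low power plus a significance filter inflates effect sizes, here’s a minimal simulation in Python — my own illustration, not code from the paper, with made-up sample sizes and effect size:

```python
# Toy simulation (illustrative only, not from the paper): underpowered studies
# that clear p < 0.05 systematically overestimate the true effect ("winner's curse").
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_effect = 0.2      # small true effect, in standard-deviation units (assumed)
n_per_group = 30       # small samples -> low statistical power (assumed)
n_studies = 20_000

published = []
for _ in range(n_studies):
    treated = rng.normal(true_effect, 1.0, n_per_group)
    control = rng.normal(0.0, 1.0, n_per_group)
    t_stat, p_value = stats.ttest_ind(treated, control)
    if p_value < 0.05 and t_stat > 0:   # only positive, significant results "publish"
        published.append(treated.mean() - control.mean())

print(f"true effect:               {true_effect:.2f}")
print(f"power (share significant): {len(published) / n_studies:.2f}")   # ~0.11
print(f"mean published effect:     {np.mean(published):.2f}")           # ~0.64, about 3x too big
```

With roughly 10 percent power, the estimates that survive the significance filter average around three times the true effect, which is exactly the exaggeration pattern the meta-meta-analysis flags.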
One of the authors of the paper is John Ioannidis, head of the Meta Research Innovation Center at Stanford. As the author of a 2005 paper with the shocking title “Why Most Published Research Findings Are False,” Ioannidis is arguably the replication crisis’ chief inquisitor. Sure, economics has had its outspoken critics. But now the sheriff has come to town.
For a field coming somewhat late to the replication crisis party, it’s ironic that economics identified its own credibility issues early. In 1983 Edward Leamer, an economist at UCLA, published a lecture he called “Let’s Take the Con Out of Econometrics.” Leamer took his colleagues to task for the then-new practice of collecting data through observation and then fitting it to a model. In practice, Leamer said, econometricians fit their data against thousands of statistical models, found the one that worked the best, and then pretended that they were using that model all along. It’s a recipe for letting bias creep in.
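Leamer’s “con” is easy to reproduce. Here’s a toy sketch (my own, not Leamer’s; the counts are arbitrary assumptions): test twenty unrelated regressors against a pure-noise outcome, keep whichever specification “works,” and a significant result turns up most of the time.

```python
# Toy specification search (illustrative only): on pure noise, trying many
# models and reporting the best one inflates the false-positive rate.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_obs, n_specs, n_datasets = 100, 20, 5_000

hacked_hits = 0
for _ in range(n_datasets):
    y = rng.normal(size=n_obs)                      # outcome: pure noise
    candidates = rng.normal(size=(n_specs, n_obs))  # 20 regressors, all unrelated to y
    p_values = [stats.pearsonr(x, y)[1] for x in candidates]
    if min(p_values) < 0.05:                        # report whichever spec "worked"
        hacked_hits += 1

print("nominal false-positive rate: 0.05")
print(f"rate after spec search:      {hacked_hits / n_datasets:.2f}")  # ~1 - 0.95**20 = 0.64
```

Nothing in any individual regression is fraudulent; the bias comes entirely from searching and then reporting only the winner, which is why the pre-registration discussed below matters.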
At about the same time as Leamer wrote his paper, Colin Camerer—today an economist at Caltech—was getting pushback for his interest in reproducibility. “One of my first papers, in the 1980s, has all of the data and the instructions printed in the journal article. Nowadays it would all be online,” Camerer says. “I was able to kind of bully the editor and say, ‘This is how science works.’” Observe, hypothesize, experiment, collect data, repeat.
Over time, things improved. By 2010, the field was undergoing a “credibility revolution,” says Esther Duflo, an economist at MIT and editor of the American Economic Review. A few top journals began to sniff out shenanigans like p-hacking, massaging data for favorable outcomes. They asked for complete datasets to be posted online, and for pre-registered research plans (so investigators can’t change their hypotheses after the fact). To publish in these journals, economists now have to submit the actual code they used to carry out their analysis, and unlike the old days it has to work on someone else’s computer.
Yes, open data, available code, and pre-registration don’t always guarantee reproducibility. “If I pick up Chrissy Teigen’s cookbook, it might not taste the same as it does at her house,” says Camerer, “even though she’s only 10 miles away and was shopping at the same store.” In 2015, economists at the Federal Reserve and Department of the Treasury tried to replicate 67 papers using data and code from the original authors; they were able to do it without calling the authors for help for just 22. It was a little grim.
One thing that did help economics: an increasing reliance on experimental data over empirical or observational research. Randomized controlled trials in the lab and in the field are getting more common. In another big-deal paper, this one for the prestigious journal Science, Camerer’s team attempted to replicate 18 articles from two top journals. And the results were—well, let’s say the glass was half-full. All were statistically powerful enough to see the effect they purported to, and 11 out of 18 had “a significant effect in the same direction as the original study.”
Maybe more importantly, though, everyone was on board with the concept. “When somebody says ‘I want to replicate your study,’ usually it’s like when the IRS calls and says they want to check your math,” Camerer says. “But when we sent out letters to 18 groups saying, ‘We’re going to replicate your study,’ every one of them was quite cooperative.”
The problem is that only a few journals and subfields in economics have been willing to take up the new standards of controlled trials, openness, and reproducibility that other social sciences—behavioral psychology, most notably—have largely embraced. “Adoption of improved practices is idiosyncratic and governed by local norms,” Camerer says.
That leaves an awful lot of economics—and after failures like the inability to predict the housing crisis and ongoing political disagreements about things as fundamental as taxes and income levels, economics seems a little hard to trust. That’s where big meta-studies of meta-analyses come in, like the one Doucouliagos did with Ioannidis and Tom Stanley. This is the kind of work Ioannidis now specializes in—evaluating not just individual studies, like Camerer’s reproducibility paper, but entire bodies of literature, capturing all the data and stats embedded in many meta-analyses at once. In this case, that wasn’t randomized controlled trials. “The vast majority of available data are observational data, and this is pretty much what was included in these meta-analyses,” Ioannidis says.
The sort-of good news? According to his team, economics isn’t that bad. Sure, the statistical power was way too low and the bias was toward exaggerating effect sizes. “We have seen that pattern in many other fields,” Ioannidis says. “Economics and neuroscience have the same problem.” (So, OK, not great news for fans of brainscan studies.) But that also shows that Ioannidis isn’t just trying to nuke economics out of pique. “Not being an economist, hopefully I avoided the bias of having a strong opinion about any of these topics,” he says. “I just couldn’t care less about what was proposed to have been found.”
That paper should at least red-flag, then, the fact that while at the most elite level and in some fields, economics is working out its issues, elsewhere the familiar problems remain. The grungy spadework of reproducing other research still isn’t rewarded by journal editors and tenure committees. Scientists still want to land papers in top-shelf journals, and journals still want to publish “good” results—which is to say, statistically significant, positive findings. “People are likely to publish their most significant or most positive results,” Ioannidis says. It’s called data-dredging.
Science is supposed to have mechanisms for self-correction, and work to bridge the credibility gap across different fields shows self-correction in action. Still, though, you’d like to see economics farther along, maybe, instead of getting its lapels grabbed by Ioannidis. “We’re not very good at understanding how the brain works. We’re not that great on models of human nature and connections to anthropology,” Camerer says. “But economists are really good at understanding incentives and how we create systems to produce an outcome.”
And yet credibility-increasing incentives don’t yet exist within economics itself.
Journals and funding agencies have been slow, cautious even. Universities and institutions aren’t paying people or tenuring them for the work. “Fields like statistics or psychology are sending strong signals that they care about people working on research transparency,” says Fernando Hoces de la Guardia, a fellow at the Berkeley Initiative for Transparency in the Social Sciences. “You don’t see any of these folks placing in top economics departments.” When he sent me a relevant paper by a colleague, Hoces de la Guardia pointed out that it wasn’t his colleague’s “job market paper,” the piece of research a PhD student would use to find a job.
“One of the problems in raising these sorts of issues is finding the journal space for it,” Doucouliagos says. “You’re going to have bright scholars who would like to address these issues, but they’re worried about being seen as Cassandras.” But maybe, unlike Cassandra, if enough researchers and standard-setters see value in critiquing their own fields, they’ll actually be believed, and better equipped to survive the future.