RT @RSocPublishing: The insignificance of the p-value: how wide-spread misinterpretation leads to false results: https://t.co/YkM1HUCfTI
@chimbo23 Everything is wrong -see https://t.co/RPt9iHJLCS @Physio_Ben @laurent_bannock
As usual, Fisher vindicated. https://t.co/sO8pRkKsFZ
RT @david_colquhoun: I've asked @richarddmorey and @lakens to produce an example in which false positive rate is lower than the minimum in h…
@anish_koka No. This https://t.co/RPt9iHJLCS @RogueRad
So would I :-) I recently answered 3 comments on the paper, from people who appeared not to have read it https://t.co/1oxGP2CmgF https://t.co/1HoM3vUnCG
RT @david_colquhoun: A really good open access journal. I certainly can't grumble about the number of people who've read this https://t.co/…
A really good open access journal. I certainly can't grumble about the number of people who've read this https://t.co/tNBlxLsOG6 https://t.co/MulzhzNXDJ
I responded to some comments on my P value paper https://t.co/B9KqtaQ3gW Sometimes I wish people would read it before commenting.
@moritzkoerber It is based on nothing of the sort. Please read before criticising! https://t.co/B9Kqtayspo @richarddmorey
RT @david_colquhoun: @iwashyna Prob with them is that they are all based on NHST -don't say anything about false positive rate https://t.co…
I've asked @richarddmorey and @lakens to produce an example in which false positive rate is lower than the minimum in https://t.co/B9Kqtayspo
@TheMMP1 I know about them -famous for false positives -try https://t.co/B9Kqtayspo @diar_fattah
@AlexeiKotlar I'd be interested to hear whether you were taught about false positives in grad school. Often not so https://t.co/B9Kqtayspo
@AlexeiKotlar you can get a handful of positives for anything -for good statistical reasons https://t.co/B9KqtaQ3gW
.@MarcusMunafo why not format BioRxiv papers properly so they are readable? https://t.co/kwQykDStAb Like this https://t.co/tgP9sENOKt
RT @david_colquhoun: @CaulfieldTim I can't believe my trivial review is coming up to 150 k hits. https://t.co/B9Kqtayspo The ultimate demo…
@CaulfieldTim I can't believe my trivial review is coming up to 150 k hits. https://t.co/B9Kqtayspo The ultimate demolition of altmetrics
@DrEstherHobson Try https://t.co/qNUenRFhvC or https://t.co/lVfVpeXm3u or the real paper, https://t.co/B9KqtaQ3gW Any help?
@iwashyna Prob with them is that they are all based on NHST -don't say anything about false positive rate https://t.co/B9Kqtayspo @statsepi
@markruddy well spec/sens is used only for screening tests. Equiv for sig tests is power/sig level -cf Figs 1 & 2 in https://t.co/B9Kqtayspo
@bobbury it's explained in https://t.co/lVfVpfeXs4 and https://t.co/B9Kqtayspo
https://t.co/drIAaAxtTg The P Value Debate just starting to accelerate. Read. Mark. Learn. Inwardly Digest. -dlj.
A well-written paper pertaining to the danger of p-values in science (it's an open-access paper) https://t.co/JMUnT8NPo9
@RogueRad is that a problem for https://t.co/B9Kqtayspo ? All I assume is that priors above 0.5 are unacceptable
RT @david_colquhoun: @RogueRad You might find that Figs 1 and 2 https://t.co/B9Kqtayspo clarify the analogy. Or https://t.co/qNUenRWSUc @l…
@RogueRad You might find that Figs 1 and 2 https://t.co/B9Kqtayspo clarify the analogy. Or https://t.co/qNUenRWSUc @larshdm
RT @david_colquhoun: That was a first draft of the paper https://t.co/B9Kqtayspo https://t.co/uQuLbBjGfV
That was a first draft of the paper https://t.co/B9Kqtayspo https://t.co/uQuLbBjGfV
RT @scottishwormboy: why p<0.05 is not significant, or why we make a fool of ourselves 30% of the time. http://t.co/ZEXkWqFssb
RT @david_colquhoun: V large effects in initial trial unlikely to be replicated, but better if P<0.001 in first trial. No surprise https://…
V large effects in initial trial unlikely to be replicated, but better if P<0.001 in first trial. No surprise https://t.co/B9Kqtayspo https://t.co/AtqILHzOaf
P(a single p<0.05 result is wrong) > 0.3; Need p ≤ 0.001 for false positive < 0.05. paper @david_colquhoun: https://t.co/WyydC143QD #stats
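The ">30% wrong" figure comes from a simple tree calculation. A minimal sketch, assuming the illustrative numbers used in discussions of the paper (prior 0.1, power 0.8, threshold 0.05 are assumptions for the example, not the only possible choices):

```python
# Tree calculation behind the ">30% wrong" claim.
# All three inputs are illustrative assumptions:
alpha = 0.05   # significance threshold
power = 0.8    # probability of detecting a real effect
prior = 0.1    # fraction of tested hypotheses that are actually true

false_pos = alpha * (1 - prior)   # null true, but test comes out "significant"
true_pos = power * prior          # effect real, and detected

# False discovery rate: of all "significant" results, what fraction is false?
fdr = false_pos / (false_pos + true_pos)
print(round(fdr, 2))  # 0.36, i.e. wrong more than a third of the time
```

With a more sceptical prior the rate is worse still; only with p ≤ 0.001 does the false positive risk fall near 5% for these numbers.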
Two weeks since the @aeonmag piece. It's given a spurt in downloads of the original paper -now over 22,000 https://t.co/tNBlxLKq4G https://t.co/aAYFSya5h8
@StortSkeptic even one of them wouldn't do very well https://t.co/B9Kqtayspo
The p-value problem / “An investigation of the false discovery rate and the misinterpretation of p-values | Open…” https://t.co/0vDb0yDxVz #science #ReadLater
A taste of the material the Royal Society is releasing these days: https://t.co/9QopSDWfab
Another good critique of p-values. Are economists right after all to say p<.10 is evidence ;-) https://t.co/q9od9KYG8U
@botminds Explaining that is the whole point of what I wrote -see https://t.co/lVfVpfeXs4 and https://t.co/B9Kqtayspo @DrBrocktagon
@DrBrocktagon ie the lower arm in Fig 2 https://t.co/B9Kqtayspo But to get false positive rate you need upper arm too
@RogerKerry1 see, for example, https://t.co/B9Kqtayspo @PrestonsHealth
@GerardHarbison as in significance testing/p values. And we should be able to agree. https://t.co/bMgZcyeNgU https://t.co/qvTi3nkgF6
An investigation of the false discovery rate and the misinterpretation of p-values https://t.co/bSBvvL2eqP
@aheblog "If you do a sig test, just state the p-value and give the effect size and conf int. But be aware..." https://t.co/RPt9iHJLCS
RT @david_colquhoun: .@aheblog I didn't say that P values have no role. On the contrary they should be given always, just not misinterpreted htt…
.@aheblog I didn't say that P values have no role. On the contrary they should be given always, just not misinterpreted https://t.co/FSTLju0kdp
@kevinmarks Yes, that's excellent See also https://t.co/SmrTpNobEu and the right version https://t.co/FSTLju0kdp @RetractionWatch
RT @david_colquhoun: .@JenniRodd Intrigued by allusion to meaning of 'significant'. Most people get it wrong https://t.co/lVfVpfeXs4 & ht…
.@JenniRodd Intrigued by allusion to meaning of 'significant'. Most people get it wrong https://t.co/lVfVpfeXs4 & https://t.co/B9Kqtayspo
RT @david_colquhoun: and it's worse than Ioannidis said. See section 10 https://t.co/AI1cAIjJSG https://t.co/T1c0PplLJt
and it's worse than Ioannidis said. See section 10 https://t.co/AI1cAIjJSG https://t.co/T1c0PplLJt
A statistically significant (p<0.05) article. It also references xkcd, so it can't be bad (p<0.1). https://t.co/3CI60LkrOt https://t.co/zNWeaHhIZ6
An investigation of the false discovery rate and the misinterpretation of p-values https://t.co/WLl9eUA5cZ #PValues #StatisticalLiteracy
An investigation of the false discovery rate and the misinterpretation of p-values | Open Science - https://t.co/ZtdMkhoa3K
It's hard to explain why the sig/non-sig framework leads to high false positives in the lit, but RSOS does it so well here, using #rstats ! https://t.co/0GKK3aXGhp
ICYMI: There is no such thing as 'significance' https://t.co/DuxeMypB6T Kudos to @david_colquhoun
RT @david_colquhoun: @oncology_bg I did NOT suggest abandoning P values. On the contrary https://t.co/FSTLju0kdp @ercowboy @RichardLehman1…
@oncology_bg I did NOT suggest abandoning P values. On the contrary https://t.co/FSTLju0kdp @ercowboy @RichardLehman1 @MartinStockler
RT @david_colquhoun: .@oncology_bg I suggested abandon sig/non-sig dichotomy, not abandon P values https://t.co/FSTLju0kdp @RichardLehman1
.@oncology_bg I suggested abandon sig/non-sig dichotomy, not abandon P values https://t.co/FSTLju0kdp @RichardLehman1
Though @david_colquhoun argues convincingly for importance of more consideration of false discovery rate https://t.co/JKqszrsGLQ https://t.co/ASTy9ywp2R
If you use p = 0.05… you will be wrong at least 30% of the time. by @david_colquhoun https://t.co/3WxMEiavMu
If you use p = 0.05… you will be wrong at least 30% of the time. by @david_colquhoun https://t.co/sl6r0B720t
If you use p = 0.05… you will be wrong at least 30% of the time. by @david_colquhoun https://t.co/2PPuFIjGEc
@greg_ashman @david_colquhoun @C_Hendrick again, you are forgetting the base rate and answering the wrong Q. Read: https://t.co/zlsI5VU3xG
@DantonQu To get the false pos rate you need an alternative hypothesis as well as null, the upper arms in Figs 1,2 https://t.co/B9Kqtayspo
An investigation of the false discovery rate and the misinterpretation of p-values https://t.co/8noaAPVEF5 Some 86% of the findings aren't really findings
An investigation of the false discovery rate and the misinterpretation of p-values https://t.co/JJZkBErevE #Statistics
An investigation of the false discovery rate and the misinterpretation of p-values https://t.co/08NuBDE13n
@NicholasCork My latest is good example. A trivial unoriginal paper -amazingly cited https://t.co/tNBlxLKq4G Far more than my real work
RT @NU_FAME: Is a p < 0.05 enough? https://t.co/A7rsu8SCvG Discussion here at #NUMedEdDay16 #MedEd @NUFeinbergMed #MedicalEducation
Is a p < 0.05 enough? https://t.co/A7rsu8SCvG Discussion here at #NUMedEdDay16 #MedEd @NUFeinbergMed #MedicalEducation
Nice, but they should really look at P=0.05 (or whatever), not P<0.05 which is what PPV does. PPV is overoptimistic https://t.co/AI1cAIjJSG https://t.co/7rZwOaR4FQ
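The P=0.05 vs P<0.05 distinction can be seen in a rough simulation. This is a sketch, not the paper's own code: the sample size (16 per group), effect size (1 SD), simulation counts, and the narrow p-band are all arbitrary choices for illustration, picked so that power is roughly 0.8.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, d, N = 16, 1.0, 200_000   # 16 per group, true effect 1 SD, 200k sims per hypothesis

def pvals(effect):
    """Two-sided two-sample t-test p-values for N simulated experiments."""
    a = rng.standard_normal((N, n))
    b = rng.standard_normal((N, n)) + effect
    t = (b.mean(1) - a.mean(1)) / np.sqrt((a.var(1, ddof=1) + b.var(1, ddof=1)) / n)
    return 2 * stats.t.sf(np.abs(t), df=2 * n - 2)

p_null, p_real = pvals(0.0), pvals(d)   # equal numbers of null and real effects

# "P<0.05" view (what PPV uses): fraction of all significant results that are false
sig_false = (p_null < 0.05).sum() / ((p_null < 0.05).sum() + (p_real < 0.05).sum())

# "P=0.05" view: among results with p in a narrow band just under 0.05
in_band = lambda p: (p > 0.045) & (p < 0.05)
band_false = in_band(p_null).sum() / (in_band(p_null).sum() + in_band(p_real).sum())
print(round(sig_false, 2), round(band_false, 2))
```

Even with a 50:50 prior, the fraction of false positives among results with p just under 0.05 is several times larger than the fraction among all results with p < 0.05, which is why PPV-style calculations come out overoptimistic.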
RT @david_colquhoun: That is terrible (and "PPV" as usually calculated, is overoptimistic -see https://t.co/AI1cAIjJSG ) https://t.co/kHMtN…
That is terrible (and "PPV" as usually calculated, is overoptimistic -see https://t.co/AI1cAIjJSG ) https://t.co/kHMtNFZZcW
Truly amazed: my false positive paper has passed 20,000 pdf downloads. Ultimate proof that #altmetrics is nonsense https://t.co/tNBlxLKq4G
On p-value abuse in statistics: "If you use p=0.05 ... you will be wrong at least 30% of the time" https://t.co/9cU7TkyozH -fantastic paper!
By coincidence just re-reading the paper linked below. Key reading for anyone doing statistics in ecology! https://t.co/cg8vojF3N4
.@keithfrankish Not AFAIK. You can try the paper https://t.co/B9Kqtayspo or blog, https://t.co/67isGX9TeO or video https://t.co/qNUenRWSUc
@bradpwyble that's a huge question. All I can do is suggest you might be interested in https://t.co/B9Kqtayspo https://t.co/Anx4M4gM8F
RT @david_colquhoun: It seems that @NEJM has reviewers who are statistically illiterate https://t.co/B9Kqtayspo https://t.co/zbfnPY8mm7
Always worth another look: False discovery rate and the misinterpretation of p-values https://t.co/KJz5ttDfP0