@seis_matters @LizzieGadd Another example. 1. Fundamental result in single molecule analysis (hard maths), 126 citations since 1990 https://t.co/IbFYbsS8KZ 2. Simple simulation of t tests. 397 citations and 259,194 downloads since 2014 https://t.co/zbPxD
@lakens Sorry. I think that's quite wrong. For a start, it uses the p-less-than definition of "positive", not the p-equals one. See https://t.co/RIhb7qalax Secondly, it ignores the alternative hypothesis. Simulation of t tests gives 27% false positives, not 2.5% S
RT @Abraham_RMI: @david_colquhoun @raj_mehta @aminadibi @david_colquhoun , it's a tweetorial. Maybe if anybody needs more information they c…
@david_colquhoun @raj_mehta @aminadibi @david_colquhoun , it's a tweetorial. Maybe if anybody needs more information they can consult your papers: 1 👉🏼 https://t.co/eXgonpE6EL , 2 👉🏼 https://t.co/oBcUwX8LMA and 3 👉🏼 https://t.co/TfLGleY7yJ
@eLifeCommunity @dirnagl @eLife Recommended: The p value wars (again): https://t.co/23CHxnupoo An investigation of the false discovery rate and the misinterpretation of p-values: https://t.co/Z7SbkpvitM @RSocPublishing Why Most Published Research Findi
@ZachariahNKM @MarcusMunafo @mc_hankins That's a version of Fig 3 in https://t.co/zbPxDmhz2l The probability of observing any specified value is 0 for any continuous distribution. 1/2
@ZachariahNKM @MarcusMunafo @mc_hankins I'm not sure what you mean by a "band of p values"? I first got into this by doing simulations (in 2014) https://t.co/nEVWyAHc3W Please criticise the assumptions I made in the simulations (I didn't get round to looki
RT @ClinicalRehab: DW. We publish 60-70 randomised trials a year, & reject well over that number. One major reason is small numbers (less t…
@learnfromerror @georgizgeorgiev Simple simulations show they are up to the task: https://t.co/zbPxDmhz2l I know you don't like anything Bayesian, but Bayes doesn't go away because you choose to ignore it.
DW. We publish 60-70 randomised trials a year, & reject well over that number. One major reason is small numbers (less than 20-25 in each group) as uncertainty is too large, notwithstanding power calculation. Today I found support for this policy! http
RT @medevidenceblog: An investigation of the false discovery rate and the misinterpretation of p-values | Royal Society Open Science https:…
An investigation of the false discovery rate and the misinterpretation of p-values | Royal Society Open Science https://t.co/yOgm31kpB8
RT @pmgjones: The fact that this @NEJM article on statistical reporting (https://t.co/msjV4v8kKK) doesn’t reference @david_colquhoun’s trea…
The fact that this @NEJM article on statistical reporting (https://t.co/msjV4v8kKK) doesn’t reference @david_colquhoun’s treatise on false discovery rates (https://t.co/7NAWfqaDOa) is incredible. David’s article is recommended reading for all researchers a
An investigation of the false discovery rate and the misinterpretation of p-values. https://t.co/7jaaD6FqkL
An investigation of the false discovery rate and the misinterpretation of p-values | Royal Society Open Science https://t.co/VUVdWWJxW7
@RemyLevin Well, if you're doing NHST and you're underpowered, then any significant effect you find will have an inflated effect size, which is sort of along the lines of your argument. See Fig 7 here https://t.co/NzIsDsSB0o
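A minimal sketch of that inflation effect (the settings are illustrative assumptions, not the paper's code: two-sample t-tests, true effect 0.5 SD, n = 10 per group, so power is well under 0.8):

```python
# Hedged illustration of effect-size inflation when only "significant"
# results survive an underpowered design.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_d, n, sims = 0.5, 10, 20_000     # assumed: 0.5 SD effect, n = 10/group
significant_d = []
for _ in range(sims):
    a = rng.normal(0.0, 1.0, n)       # control group
    b = rng.normal(true_d, 1.0, n)    # treated group, true effect 0.5 SD
    res = stats.ttest_ind(b, a)
    if res.pvalue < 0.05:             # keep only the "discoveries"
        pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
        significant_d.append((b.mean() - a.mean()) / pooled_sd)

print(f"true effect: {true_d}")
print(f"mean effect among significant results: {np.mean(significant_d):.2f}")
# Prints roughly 1.1: the significant subset more than doubles the true 0.5.
```

The significance filter only passes samples whose observed effect cleared the critical value, so the surviving estimates are biased upwards.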
RT @david_colquhoun: @mr_3_smith oops https://t.co/kjz63SVggM but I've done more since then, mostly not nearly as popular. Summary at https:…
@mr_3_smith oops https://t.co/kjz63SVggM but I've done more since then, mostly not nearly as popular. Summary at https://t.co/DJpL0Hi0xZ
@Lester_Domes @JWSBayes @AndrewPGrieve @stephensenn @f2harrell @cjackstats @paulpharoah @raj_mehta @reverendofdoubt @bogdienache @venkmurthy @THilalMD @EpiEllie @d_spiegel I did say spike *prior*. Insofar as the spike is the maximally skeptical prior for the
RT @stephensenn: @david_colquhoun @JadePinkSameera @Twitter @METRICStanford @Lester_Domes @StatModeling @vamrhein You have recommended 0.00…
@david_colquhoun @JadePinkSameera @Twitter @METRICStanford @Lester_Domes @StatModeling @vamrhein You have recommended 0.001 or three sigma as a threshold https://t.co/jfgF1lrIja
@SkeptPsych @stephensenn @f2harrell @JWSBayes @learnfromerror Yes, I did that in the 2017 paper. But simulations show the problem much more directly: section 10 in https://t.co/kjz63SVggM shows FPR = 0.26 when you observe p = 0.047 in a well-powered experiment.
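A minimal sketch of a p-equals simulation in that spirit (the settings are assumptions for illustration, not the paper's script: n = 16 per group, a 1 SD effect when one exists, prior P(real effect) = 0.5, giving power around 0.78):

```python
# Among experiments that return p close to 0.047, how many actually came
# from a true null?
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n, sims = 16, 100_000
null_hits = real_hits = 0
for _ in range(sims):
    is_null = rng.random() < 0.5          # assumed prior P(real effect) = 0.5
    effect = 0.0 if is_null else 1.0      # 1 SD effect when real (power ~0.78)
    a = rng.normal(0.0, 1.0, n)
    b = rng.normal(effect, 1.0, n)
    p = stats.ttest_ind(b, a).pvalue
    if 0.045 < p < 0.05:                  # "p-equals": p came out near 0.047
        if is_null:
            null_hits += 1
        else:
            real_hits += 1

fpr = null_hits / (null_hits + real_hits)
print(f"false positive risk at p ~ 0.047: {fpr:.2f}")
# Comes out around 0.26, not 0.05: p just under 0.05 is weak evidence.
```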
@mariepkrabbe @Pottegard See also the illustrative worked examples here: https://t.co/8EGJCxc4bf
RT @RSocPublishing: Into statistics during #BSW19? If, like many, you use p-values, you need to read this https://t.co/h3ZjI5dgrZ by @david…
Into statistics during #BSW19? If, like many, you use p-values, you need to read this https://t.co/h3ZjI5dgrZ by @david_colquhoun published in #RSOS https://t.co/1YeybYJN1q
RT @briandavidearp: An investigation of the false discovery rate and the misinterpretation of p-values | Royal Society Open Science https:/…
An investigation of the false discovery rate and the misinterpretation of p-values | Royal Society Open Science https://t.co/R05FEV8JVN
@f2harrell And, although my suggested solution to the p-value problem tests a point null, you get similar results without that assumption, eg Valen Johnson's UMPBT approach. https://t.co/Wvvwxc4ujc
Paper #11 - thank you @paul_f_oreilly for a great talk on multiple testing and a brilliant lay summary of Bayes, but thank you especially for pointing out this paper by @david_colquhoun about the misinterpretation of p-values. Fantastic find! https://t.co/3
RT @BenMazer: If you run an underpowered study with modest pre-test probability, what is the chance -- before p-hacking or publication bias…
@TheNewStats @nitopconference well a) is a contribution to it, but b) is a statement of a problem, not a solution to it IMO. I looked at the inflation effect in 2014 for the case where there IS a real effect https://t.co/HxGmFDHjnw but it's surely not a big p
@mellojonny That isn't a p value problem. You can just use the tree diagram approach for that, eg 16:20 in https://t.co/d2jhKOc5T1 or section 3 in https://t.co/vsOvW5Mv93
@AndrewPGrieve @iwashyna @learnfromerror Very good table from @learnfromerror . I also like this figure from the paper by @david_colquhoun that exemplifies something similar (Full text 👉🏼 https://t.co/y7RV4s14s8 ). https://t.co/E97zKOoqqU
An investigation of the false discovery rate and the misinterpretation of p-values https://t.co/smxeTqm6uQ
But there are now methods for a wide range of distributions, I believe
@robertjwest The assumptions are exactly those made by the t test, as is clear from simulations (2014 https://t.co/B9Kqtayspo ) They are justified most explicitly in 2018 paper https://t.co/S2FMesTFu8
@katiehgreenaway @siminevazire Cool. I recall @david_colquhoun suggesting that even when alpha is set to .05 and power to .80 (perhaps ideal conditions), the false discovery rate can be 36% (assuming prior prob of hyp being true = .10) https://t.co/WWG3RHB
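Those numbers follow from the tree-diagram arithmetic; a quick check (the 1000-test framing is just for illustration):

```python
# Tree-diagram check of the quoted numbers: alpha = 0.05, power = 0.80,
# and 10% of tested hypotheses actually true.
prior, alpha, power, tests = 0.10, 0.05, 0.80, 1000

true_positives  = power * prior * tests          # 0.80 * 100 = 80
false_positives = alpha * (1 - prior) * tests    # 0.05 * 900 = 45
fdr = false_positives / (false_positives + true_positives)
print(f"false discovery rate: {fdr:.2f}")        # 45 / 125 = 0.36
```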
@doctorcaldwell @stendec6 @DanielBayley80 @valle_erling @vincentconnolly @somersetwyvern @mgtmccartney @TheIHI @bengoldacre @fgodlee There's a similar issue with researchers and P values. https://t.co/7VlHcxV9IY
@invisiblecomma @altmetric @Protohedgehog The only reason that a rather unoriginal paper like https://t.co/B9Kqtayspo has an altmetric score of 1500 is that a lot of people use "tests of significance"
@DRMacIver I am reasonably sure this is incorrect. See: https://t.co/czfMTPwctZ
@sTeamTraen The two different approaches to calculation of the likelihood ratio (and hence FPR) are shown in Fig 1 of https://t.co/B9Kqtayspo P-less-than uses areas. P-equals uses probability densities
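A sketch of the two calculations (the concrete settings are assumptions for illustration: two-sample t-test, n = 16 per group, a true effect of 1 SD, observed p = 0.047):

```python
# Two ways to form the likelihood ratio for "real effect" vs null.
from scipy import stats

n, effect = 16, 1.0
df = 2 * n - 2
ncp = effect * (n / 2) ** 0.5          # noncentrality of the t statistic
p_obs = 0.047
t_obs = stats.t.isf(p_obs / 2, df)     # |t| that gives two-sided p = 0.047

# p-less-than: ratio of tail AREAS, i.e. power / alpha
power = stats.nct.sf(t_obs, df, ncp) + stats.nct.cdf(-t_obs, df, ncp)
lr_areas = power / p_obs

# p-equals: ratio of probability DENSITIES of |t| at the observed value
dens_alt  = stats.nct.pdf(t_obs, df, ncp) + stats.nct.pdf(-t_obs, df, ncp)
dens_null = 2 * stats.t.pdf(t_obs, df)
lr_densities = dens_alt / dens_null

print(f"p-less-than LR ~ {lr_areas:.1f}, p-equals LR ~ {lr_densities:.1f}")
# Roughly 16 vs roughly 3: the area version overstates the evidence from the
# particular p value observed; the density version gives FPR ~ 0.26 at prior
# odds of 1, matching the simulation above.
```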
@sTeamTraen Simulation is easier to understand. See section 10 in the 2014 paper https://t.co/B9Kqtayspo
@cdchu @rwyeh @SripalBangalore @StatModeling @f2harrell It's the inverse probability that I'm interested in - false discovery rate. This is a good read, which takes your analogy with diagnostic tests https://t.co/eM4f0Ql45Q
@PeterSellei Reading on the subject https://t.co/fnxqhLrkqa https://t.co/opWmtWdIjL
RT @cenaptech: "If you use p=0.05 to suggest that you have made a discovery, you will be wrong at least 30% of the time. If, as is often t…
"If you use p=0.05 to suggest that you have made a discovery, you will be wrong at least 30% of the time. If, as is often the case, experiments are underpowered, you will be wrong most of the time. " https://t.co/rKV7F93nls
@alc_anthro @DominicFurniss the 2014 paper, done by simulation mostly https://t.co/B9Kqtayspo (I apologise for using FDR rather than FPR in earlier stuff)
@alc_anthro @DominicFurniss Yes, but in order to get the risk of a false positive you also have to consider what happens when the null hypothesis is false! See Fig 2 in the 2014 paper https://t.co/B9Kqtayspo
How valuable is the P value? 🤔 https://t.co/QNaNvmuwN3
An investigation of the false discovery rate and the misinterpretation of p-values https://t.co/cbWgTNBLcu
RT @BenJaneFitness: @dnunan79 @david_colquhoun @EvidenceLive Thanks for sharing...source URL https://t.co/NbiIX3YvIo
@dnunan79 @david_colquhoun @EvidenceLive Thanks for sharing...source URL https://t.co/NbiIX3YvIo
@greg_ashman Have you read this article, Greg? https://t.co/zlsI5VU3xG
An investigation of the false discovery rate and the misinterpretation of p-values (https://t.co/tOdLgneXMZ) helps me understand Why Most Published Research Findings Are False (https://t.co/1BL5zvgVWk)
@fredericschutz @RomainStuder This paper also explains things very clearly: https://t.co/a1TEWHg4jF
@SalCross @carlsmythe aha now we are getting to the point. Yes of course there are predatory journals. I've been going on about them for years. But my first paper in RS Open Science just passed 35,000 pdf downloads https://t.co/B9Kqtayspo
RT @david_colquhoun: @lakens Agree it's useful, as shown in Figs 3, 5 and 6 in my 2014 paper. Easy to simulate: https://t.co/Vf2Ykm4qD5
@lakens Agree it's useful, as shown in Figs 3, 5 and 6 in my 2014 paper. Easy to simulate: https://t.co/Vf2Ykm4qD5
RT @nephmatt: This paper has transformed my thinking on significance testing and p values 👌 #fridaynightnerdtweet @GalbraithN - the paper w…
@VinayPrasad82 Agree but also important to note that p < 0.05 is rarely good enough and is often not evidence of a new discovery! 😃 @david_colquhoun @GalbraithN https://t.co/U7WZSMJ9Um
This paper has transformed my thinking on significance testing and p values 👌 #fridaynightnerdtweet @GalbraithN - the paper we were talking about @csainsbury - you will like it too @dhb1987 - you probs won't 😉 https://t.co/U7WZSMJ9Um
@DrSylviaMcLain well there's an early version on my blog. The papers are open access (obv) 2014 https://t.co/B9Kqtayspo and 2017 https://t.co/VbYkbRgQkw That one comes with a web calculator: https://t.co/3kcOOZXxtn
@learnfromerror I'm uncertain, but the paper seems to be following the logic of Colquhoun '14 regarding the long-term false discovery rate? https://t.co/5OeqQq2Lg4
while working to include genetics in biofundamentals, I was reminded to read (and take to heart) https://t.co/MURBJecbJ3 @biofundamentals
RT @david_colquhoun: @deevybee Briefly: that's because type 1 errors are not the only positives. To get false positive risk you also need t…
@deevybee Briefly: that's because type 1 errors are not the only positives. To get false positive risk you also need the true positives (as illustrated by simulation in 2014 https://t.co/B9Kqtayspo and analytically in 2017 https://t.co/Nt5pgP1id1 )
@david_colquhoun @stephensenn This is not how your paper https://t.co/2L9Gm7NALT reads: The FDR simply builds on the probability of (non)replication of the p-values https://t.co/W2sjFjmhN5
RT @david_colquhoun: Weird. 33k downloads of my 2014 paper on false positives https://t.co/B9Kqtayspo And 15k downloads of preprint versio…
I guess statisticians already have their own opinions and are reluctant to read others' (honourable exception: @stephensenn) https://t.co/so3KLz8WqN
Weird. 33k downloads of my 2014 paper on false positives https://t.co/B9Kqtayspo And 15k downloads of preprint versions of 2017 paper https://t.co/Nt5pgP1id1 before it was even published. Yet it's been hard to get opinions from statisticians about it.
RT @Aaron_Spivak: I reject the 10% prior assumption, but an interesting read: FDR and misinterpretation of p-values https://t.co/TF52FStjtm
David Colquhoun expertly outlines the problem with using p-values to determine significance here. "If you use p=0.05 to suggest that you have made a discovery, you will be wrong at least 30% of the time." https://t.co/yuJi0YYX7D
RT @TuQmano: @wsosaescudero Check out the reference they sent me about "false positives" @JMarquezP https://t.co/1yza6BmtMj https://t.co/g7…
@wsosaescudero Check out the reference they sent me about "false positives" @JMarquezP https://t.co/1yza6BmtMj https://t.co/g7EC9P5Lpi
@scienmag Read this - An investigation of the false discovery rate and the misinterpretation of p-values. By David Colquhoun @david_colquhoun https://t.co/4xUgapMi5E
@PWGTennant @melb4886 @katetilling @statsmethods Agreed. That's partly why I used simulation to look at false positive risk in 2014. https://t.co/B9Kqtayspo But important to do the algebra too, as in 2017 https://t.co/5enFeWZmM7