@TomKindlon @david_colquhoun Thanks, Tom. Very interesting read – as is his paper on the misinterpretation of p-values: https://t.co/FXQvYRr75E I wish David could be persuaded to investigate the PACE trial and other flawed bio-psychosocial research.
@JohnTuckerPhD @matthewherper That costs $40. But the decline effect is well known: it's partly statistical (eg Fig 7 https://t.co/B9Kqtayspo ) and partly cheating/hubris
Excellent review of p-value misinterpretation and false discovery rate—must-read for scientists and drug developers https://t.co/izXCuE2EnO
@DrAndyHolt @cragcrest and, still more interesting, the analogy with tests of significance https://t.co/B9Kqtayspo and https://t.co/0xgF2SNY8B
@DrAndyHolt .@cragcrest Yes indeed, though I find the tree diagram approach a clearer way to explain it, eg Fig 1 https://t.co/B9KqtaQ3gW
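The tree-diagram argument comes down to three numbers. A minimal sketch of the calculation, using the illustrative values from the 2014 paper (prior 0.1, threshold 0.05, power 0.8 — not universal constants):

```python
def false_discovery_rate(prior, alpha, power):
    """Tree-diagram calculation: fraction of 'significant' results that are false.

    prior : probability that a tested hypothesis is actually true
    alpha : significance threshold (type I error rate)
    power : probability of detecting a real effect
    """
    false_pos = (1 - prior) * alpha   # no real effect, but p < alpha anyway
    true_pos = prior * power          # real effect, and it was detected
    return false_pos / (false_pos + true_pos)

# Worked example: of 1000 tests, 100 real effects, 80 detected,
# 45 false positives -> 45/125 of 'discoveries' are false
print(false_discovery_rate(0.1, 0.05, 0.8))  # -> 0.36
```

In other words, with these assumptions more than a third of "significant" results are false positives, even though the significance level is 5%.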
@omaclaren @EJWagenmakers @learnfromerror @smartin2018 @richarddmorey @JeffRouder Not at all. I do standard t tests on data that obey exactly the assumptions of the t test (eg Fig 4 in https://t.co/B9Kqtayspo )
@liversedge Guessing you like maths… This (open) paper is particularly relevant given very tiny sport studies https://t.co/qGNJPtM6pY
2) Please show me where Lüroth, Student or Fisher used your postulate. 3) You are following Jeffreys, but his approach is not necessary https://t.co/khZ4bsYXr9
1) A necessary postulate in your derivation of the posterior is not necessary to the t-test and was not used by those who first derived it. https://t.co/khZ4bsYXr9
Picking a fight with @HL327 on diagnostic methods is as unwise as my picking one with you on ion channels. Tread carefully https://t.co/PLNe19Gdvv
@stephensenn Figure 4 in https://t.co/B9Kqtayspo is exactly the assumptions of the t test
@HL327 @learnfromerror @omaclaren @lakens @uri_sohn @JnfrLTackett @TonyLFreitas @analisereal @ShlomoArgamon @StatModeling @lukasvermeer so are you saying that the analysis of diagnostic screening is wrong? It's absolutely standard! (Fig 1 in https://t.co/B
I reject the 10% prior assumption, but an interesting read: FDR and misinterpretation of p-values https://t.co/TF52FStjtm
@omaclaren and he takes the example of diagnostic screening tests, which is my starting point for sig tests https://t.co/B9Kqtayspo
RT @david_colquhoun: P value wars! A preprint of my paper is now on arXiv http://t.co/YGxPXEj4L8 I hope lively discussion will ensue
@lakens Even if you can define a p-value accurately, it's not easy to see what it means. I explain it as the lower limb in Fig 2 https://t.co/B9Kqtayspo
@brembs 2014 https://t.co/B9Kqtayspo and 2017: https://t.co/F2ZVMw9vmZ
@stephspiel @brembs @lakens In other words, p-values tell you only about the lower limb in Fig 2 (and Fig 1) in https://t.co/B9Kqtayspo but you need the upper limb too
An investigation of the #FalseDiscovery rate and the #misinterpretation of #PValues https://t.co/8KVvyqXc9l | via @RSocPublishing
@Nuno_H_Franco My first P value paper was published in Roy Soc Open Science before it even had an IF. Now over 30,000 pdf downloads https://t.co/B9Kqtayspo
@jjodx Thanks for the ref. See https://t.co/yRZyenuF9w and Table 2 in https://t.co/rZ9RB3aX7Q P<0.001 reduces false positives compared with 0.05
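The claim that p<0.001 cuts false positives can be illustrated with the same tree-diagram arithmetic. A sketch under the assumption that the prior (0.1) and power (0.8) stay fixed, which is optimistic since power falls as the threshold tightens:

```python
def fdr(prior, alpha, power):
    # fraction of 'significant' results that are false positives
    fp = (1 - prior) * alpha
    return fp / (fp + prior * power)

print(round(fdr(0.1, 0.05, 0.8), 3))   # -> 0.36 at the conventional threshold
print(round(fdr(0.1, 0.001, 0.8), 3))  # -> 0.011 at p < 0.001
```

Tightening the threshold by a factor of 50 cuts the false discovery rate from over a third to about 1%, at least under these assumptions.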
@omaclaren @stephensenn @RSSAnnualConf Well, the postulate is observations that obey the assumptions of the t test exactly (Fig 4 in https://t.co/B9Kqtayspo ) - the ideal case
@kierisi @deevybee Mine too. Took 3 days or so from zero to write stuff for https://t.co/B9Kqtayspo
No, I don't advocate any fixed threshold - more detail in https://t.co/j5DpudHzAz But agree about the citation :-) https://t.co/hogcrjOhy9
RT @BahaNick: @raybbecker @david_colquhoun makes a good case for 0.001 cut off. https://t.co/mu2W4ul0RX surprised not cited here
@daniel_bilar @IgorCarron Agreed. And his conclusions are very similar to those found more simply in https://t.co/B9Kqtayspo and https://t.co/7QQW0WNPGb
@chrisdc77 @AlxEtz Well, I wrote about precisely the same topic in 2014. Not sure how you can say it's irrelevant https://t.co/B9Kqtayspo
@AlxEtz (a) arXiv accepts stats (it is where I put https://t.co/B9Kqtayspo before pub) and (b) bioRxiv accepts all biol https://t.co/ACPeM6RcdW
Well, I don't think any fixed threshold is a good idea. Update here https://t.co/ACPeM6RcdW https://t.co/hogcrjOhy9
@raybbecker @david_colquhoun makes a good case for 0.001 cut off. https://t.co/mu2W4ul0RX surprised not cited here
@jjodx See @david_colquhoun's excellent paper https://t.co/Bn624tTia0 Not sure what impact two replicated studies with p = 0.05 have on the FDR
@siminevazire @the100ci @hardsci The problem is that published effect sizes are very often too big: eg Fig 7 in https://t.co/B9Kqtayspo so power calcs rarely work
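The inflation effect referred to here (Fig 7; often called the winner's curse) is easy to reproduce: in underpowered experiments, the subset of results that reach significance necessarily overestimates the true effect. A simulation sketch, with arbitrary illustrative choices of effect size and sample size:

```python
import numpy as np

rng = np.random.default_rng(42)
true_effect = 0.5        # true mean difference, in SD units
n = 10                   # per-group n -> badly underpowered
significant = []
for _ in range(5000):
    a = rng.normal(0.0, 1.0, n)
    b = rng.normal(true_effect, 1.0, n)
    diff = b.mean() - a.mean()
    se = np.sqrt(a.var(ddof=1) / n + b.var(ddof=1) / n)
    # |t| > 2.1 approximates p < 0.05 for roughly 18 degrees of freedom
    if abs(diff / se) > 2.1:
        significant.append(diff)

# The significant subset overestimates the true effect, so a power
# calculation based on published effect sizes will be too optimistic.
print(f"true effect: {true_effect}, mean significant effect: {np.mean(significant):.2f}")
```

Only noisy estimates that happen to be large clear the significance bar, which is why power calculations seeded with published effects so often fail.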
@wendympatterson I was very happy with Royal Society Open Science https://t.co/B9Kqtayspo
@jonroiser @learnfromerror @deevybee partly because of the inflation effect (Fig 7 in https://t.co/B9Kqtayspo )
@jonroiser @learnfromerror @deevybee And judging by nearly 30k pdf downloads for 1st paper, someone is taking note https://t.co/tNBlxLKq4G
@jonroiser @deevybee I think you are transposing conditionals - see https://t.co/B9Kqtayspo https://t.co/lVfVpfeXs4 and https://t.co/nYXhoE0g7u
@torwager @CaulfieldTim That is clear in https://t.co/B9Kqtayspo and https://t.co/5lHrwz9Ghh Problem is you don't know the prior. I like the reverse Bayesian approach
Citations depend mainly on the number of people in the field. The high citations of https://t.co/B9Kqtayspo (a trivial review) show that https://t.co/6Tn3gF3aYe
@professor_dave When I published, free for me and readers, in RSOpen Science, it was too young to have an IF. Just look at downloads https://t.co/tNBlxLKq4G
Royal Society Open Science is a great example of the future of publishing https://t.co/B9Kqtayspo https://t.co/NdzCrijhLt
@Nuno_H_Franco You may have a good estimate of the prior probability in the case of screening tests https://t.co/B9KqtaQ3gW but essentially never for sig tests
An investigation of the false discovery rate and the misinterpretation of p-values https://t.co/cshTWhQgtA https://t.co/ZdW1duEQrm
@NickTyler4 A huge amount has been written about that. I did quote Fisher in the first paper https://t.co/B9Kqtayspo
@emilyncosta For something longer, this one is also good: https://t.co/ASy9YRVZZ2
@shravanvasishth @schneiderleonid Sure (Fig 3 in https://t.co/B9Kqtayspo ). You CAN conclude something, as Fisher showed 100 years ago.
I've spent about 12 hours writing about these problems today - for my sequel to https://t.co/B9Kqtayspo Expect some disagreements https://t.co/9qYsWJhUOW
@data42morrow Interesting read as well https://t.co/YxLB629HKz
@herzoghal wow, that's bad! I think this paper highlights the problem with under powered studies well https://t.co/lqQ2NZB4jQ
Best article in #statistics and required reading for any #experimental researcher who found something significant. https://t.co/0geLyEZ9Rr
@politicory I think this article gives a good explanation! https://t.co/CaQk0gPuDp First sentence is overly simplistic but probably close to true!
I suspect that @SkeptPsych has not bothered to read https://t.co/B9Kqtayspo https://t.co/0Z3S959Bja
@cochranetrain @CochraneUK Good ! You might find these useful https://t.co/B9Kqtayspo and https://t.co/qNUenRWSUc and https://t.co/lVfVpfeXs4
@rand18m @michelaccad Not all, but it's done systematically: "...never use the word ‘significant’." https://t.co/5sYrBX9nUS
@ChristosArgyrop @anish_koka @statsguyuk All the things you mention are additional problems that make things still worse than I found in https://t.co/B9Kqtayspo
@ChristosArgyrop @anish_koka @statsguyuk So what is wrong with https://t.co/B9Kqtayspo ? If you don't believe it, please say why
An investigation of the false discovery rate and the misinterpretation of p-values | Open Science https://t.co/PkNBKxnnXP
An investigation of the false discovery rate and the misinterpretation of p-values https://t.co/PkNBKxnnXP
@AndrewPiFactori I was talking about much more fundamental Q interpretations in sec 10 https://t.co/AI1cAIjJSG @AndrewPGrieve @stephensenn
@Neuro_Skeptic wouldn't effect size inflation in underpowered studies work in the opposite direction? Eg Fig 7 in https://t.co/HqVeb0WNKb
Here's @david_colquhoun putting it better than I ever could https://t.co/9N9f4Jsjn4
.@AndrewPiFactori Sorry, that's not what I was talking about. I was alluding to the interpretation in sec 10 https://t.co/AI1cAIjJSG
"Ioannidis bias" by @mfuggetta https://t.co/hq0jZQGyxO I'm not totally convinced. Should surely use p=, not P< https://t.co/AI1cAIjJSG
The misinterpretation of p <0.05 is, alone, enough to account for many unreliable results. https://t.co/B9Kqtayspo https://t.co/PFIs0tE942
V good, but still no mention of p<0.05 myth and false pos rate @bengoldacre @carlheneghan https://t.co/B9Kqtayspo https://t.co/MYElAckN76
@chartgerink it's not as simple as that https://t.co/AI1cAIjJSG @Protohedgehog
@Protohedgehog Obviously p-values alone can't tell you the false positive rate. The effect of sample size depends on p= vs p< https://t.co/AI1cAIjJSG @chartgerink
@VandekerckhoveJ Obviously not. My views are in https://t.co/B9Kqtayspo and in https://t.co/lVfVpfeXs4
@sTeamTraen I tried to straddle both camps, and despite flak from @lakens, it's been quite well received https://t.co/tNBlxLKq4G
@ChristosArgyrop If you think there is something wrong with https://t.co/B9Kqtayspo, please say what it is
@rahatheart1 I think the false positive rate is worse than Ioannidis says - see sec 10 in https://t.co/AI1cAIjJSG @ChristosArgyrop @lakens @DanMarkMD
@lakens Oh sorry, but that's absolutely not true - eg Goodman 1995, or https://t.co/B9Kqtayspo @ChristosArgyrop @stephensenn
Also uses the unsatisfactory P<0.05 criterion & ignores the fact that the false positive rate is insensitive to power when p= is used https://t.co/AI1cAIjJSG https://t.co/HiNp9o3NU6
An investigation of the false discovery rate and the misinterpretation of p-values https://t.co/e47U1jobBh Great discussion on p values
RT @david_colquhoun: That's quite like my Fig 7 https://t.co/AI1cAIjJSG Effect has been known for years, but still ignored https://t.co/Oc3…