"If you use p=0.05...you will be wrong at least 30% of the time." Must read on misinterpreting P-values https://t.co/FF79MtgNRW via @RSocPublishing
@greg_slodkowicz you want to think only of the lower arms of the tree diagrams, Figs 1 & 2. You need the upper arm too https://t.co/B9Kqtayspo
@Heinonmatti I think: give P and CI, but change the description of P thus https://t.co/FSTLju0kdp (still over-optimistic if the prior is small enough)
@Heinonmatti yes, you don't know the prior, but you can put a minimum on the false positive rate assuming only that prior ≤ 0.5 https://t.co/B9Kqtayspo
@AndrewPGrieve what do you think this does? https://t.co/B9Kqtayspo (but at least one error in madpage piece)
@Research_Tim I'm not talking about parameter estimation, but the false positive rate (FDR) - that's what's needed for significance tests https://t.co/B9Kqtayspo
@david_colquhoun @stephensenn @learnfromerror Let me add a bit more context https://t.co/ODR6LdHeIi
An investigation of the false discovery rate and the misinterpretation of p-values https://t.co/whPQVkg0G1 via @RSocPublishing
@e_astronomer but what's your prior? My take on that is at https://t.co/B9Kqtayspo
Widespread misunderstanding about how to interpret a p-value, and the problem that many experiments are underpowered https://t.co/tDrPsHvNqv
@ThomasChesney yes - no fixed significance level. Give the exact P and CI, but interpret it roughly as here https://t.co/FSTLju0kdp https://t.co/wL5lOgRtv5
RT @davidjglassMD: Read both of these; 1 of 2 An investigation of the false discovery rate and the misinterpretation of p-values https://…
Read both of these; 1 of 2 An investigation of the false discovery rate and the misinterpretation of p-values https://t.co/LqcBr78oPx
@statwonk Neither - we were talking about this https://t.co/B9KqtaQ3gW @michaelhoffman @timtriche @pzmyers
RT @spurll: Really good overview of the problem with significance testing in science https://t.co/FfnKa2Btdg
Really good overview of the problem with significance testing in science https://t.co/FfnKa2Btdg
@IdiotTracker It is stated clearly in the paper -see https://t.co/RPt9iHJLCS and https://t.co/FSTLju0kdp @pzmyers
@IdiotTracker No again. P close to 0.05 implies a false positive rate of AT LEAST 26% - see section 10 in https://t.co/B9Kqtayspo @pzmyers
@IdiotTracker That's in https://t.co/B9Kqtayspo I can't do it in 140 characters given that you misdefine P @pzmyers
Uhuh. You're making the error of the transposed conditional. Please read https://t.co/B9Kqtayspo before digging more https://t.co/yLGvqOVj7P
@IdiotTracker No. I said false positive rate would be huge. Confusing that with P value is a statistical howler https://t.co/B9Kqtayspo
.@IdiotTracker what on earth do you mean? They are an attribute of any test of significance. Try reading https://t.co/B9Kqtayspo & come back
@HealthEvidence That definition of p leads to wrong conclusions and a high false positive rate https://t.co/hZTsdXd3Ew
Good, though what they call PPV (yuk) isn't quite right IMO -see section 10 https://t.co/AI1cAIjJSG https://t.co/U8yxrA38ho
@amusebarf Absolutely NO!! False pos rate much higher. See https://t.co/B9Kqtayspo @BMJ_Open
Understanding the wicked problem of false positives in data analysis. https://t.co/PWCNPE8dIb
RT @Campbell_MD: Teach or study an applied science? This gem is a must read... @david_colquhoun https://t.co/ZeO86DrOEN
Teach or study an applied science? This gem is a must read... @david_colquhoun https://t.co/ZeO86DrOEN
That's arguable. Though even with a prior of 0.5, an observation of P = 0.047 implies a 26% false positive rate https://t.co/B9Kqtayspo https://t.co/v80QBciYNZ
@jodyaberdein yes, of course. The real problem is the word "significant". It should be banned altogether. IMO https://t.co/RPt9iHJLCS
RT @iamdavecampbell: Easy reading: ’An investigation of the false discovery rate and the misinterpretation of p-values’: https://t.co/SFfhC…
RT @david_colquhoun: Hmm doesn't do it in right way -see section 10 in https://t.co/AI1cAIjJSG https://t.co/5woNE8dOst
Hmm, doesn't do it the right way - see section 10 in https://t.co/AI1cAIjJSG https://t.co/5woNE8dOst
RT @jimmyzliu: "If you use p = 0.05 to suggest that you have made a discovery, you will be wrong at least 30% of the time." http://t.co/SZ7…
@canuckinarabia Though I'd maintain that the right approach is as in sec 10 https://t.co/AI1cAIjJSG rather than the tree diagram
@canuckinarabia Table 1 is an algebraic version of the tree diagram, Fig 2 in https://t.co/B9Kqtayspo Animated in https://t.co/qNUenRWSUc
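The tree-diagram arithmetic behind "wrong at least 30% of the time" can be sketched in a few lines. The numbers below (1000 tests, prior P(real) = 0.1, power 0.8, threshold 0.05) are the illustrative values used in the paper's Fig 2 example, not universal constants:

```python
# Tree-diagram (p < 0.05) calculation of the false positive rate.
# Assumed illustrative inputs: 1000 tests, 10% of tested hypotheses real,
# power 0.8, significance threshold 0.05.
n_tests = 1000
prior = 0.1    # fraction of tested hypotheses that are actually real
power = 0.8    # P(p < 0.05 | real effect)   -- lower arm of the tree
alpha = 0.05   # P(p < 0.05 | no effect)     -- upper arm of the tree

real = n_tests * prior           # 100 tests of real effects
null = n_tests - real            # 900 tests of true nulls
true_pos = power * real          # 80 genuine discoveries
false_pos = alpha * null         # 45 false alarms
fpr = false_pos / (true_pos + false_pos)
print(f"false positive rate = {fpr:.0%}")  # 36%
```

With these inputs 45 of the 125 "discoveries" are false, i.e. a 36% false positive rate despite every single test passing p < 0.05.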
But it neglects the fact that the Type 1 error rate isn't what's helpful. You need the false positive rate https://t.co/B9Kqtayspo https://t.co/aSdZzSvjWQ
Nice, but it neglects the false positive rate. See, for example, https://t.co/B9Kqtayspo I left a comment https://t.co/Lgic46YujT
@newaccount212 I discuss that in sec 12 https://t.co/Pzcqv9LKAF Doesn't convince everyone, but makes sense to me
how right... https://t.co/wmFlul18w4
@Alexis_Verger Compare citations: https://t.co/B9Kqtayspo (trivial but trendy) with https://t.co/HYU8BhxCj7 @jodyaberdein (orig but hard)
@stevenhill @elebelfiore My paper with the biggest impact is trivial and non-original, just timely https://t.co/B9Kqtayspo Impact corrupts!
RT @david_colquhoun: @hildabast I think that's best way to explain false positive rates in sig tests too https://t.co/B9Kqtayspo #EvidenceL…
RT @david_colquhoun: @hildabast tho in case of sig tests, that approach is overoptimistic https://t.co/AI1cAIjJSG @d_spiegel @kimvie #Evide…
@hildabast though in the case of sig tests, that approach is over-optimistic https://t.co/AI1cAIjJSG @d_spiegel @kimvie #EvidenceLive
@hildabast I think that's best way to explain false positive rates in sig tests too https://t.co/B9Kqtayspo #EvidenceLive @d_spiegel @kimvie
RT @david_colquhoun: Yes he is. How many people understand the false positive rate? https://t.co/B9KqtaQ3gW https://t.co/mYuXgjVUFo
Yes he is. How many people understand the false positive rate? https://t.co/B9KqtaQ3gW https://t.co/mYuXgjVUFo
@KathrynAsbury1 "never, ever, use the word 'significant'" :-) https://t.co/B9Kqtayspo
RT @david_colquhoun: .@JimJohnsonSci Not true, eg https://t.co/B9Kqtayspo was in jnl so new it didn't have an IF. 122,000 full text views.…
.@JimJohnsonSci Not true, eg https://t.co/B9Kqtayspo was in jnl so new it didn't have an IF. 122,000 full text views. 18,000 pdf downloads
@katiehopperton good article on p-values (again :P haha) https://t.co/Sp2lXKkBUQ
RT @david_colquhoun: hmm bar chart with one pesky asterisk, at https://t.co/MShOgfANea Please read https://t.co/B9Kqtayspo
Hmm - a bar chart with one pesky asterisk, at https://t.co/MShOgfANea Please read https://t.co/B9Kqtayspo
@raphaels7 no account of false positives in the sense of https://t.co/B9Kqtayspo
Rediscovered this great paper by @david_colquhoun in @royalsociety on The Misinterpretation of p-values https://t.co/VhhwjfMUp4
RT @david_colquhoun: @jvrbntz point. It considers only P<0.05 rather than P = 0.05 so it is overoptimistic -see https://t.co/AI1cAIjJSG
@jvrbntz point. It considers only P<0.05 rather than P = 0.05 so it is overoptimistic -see https://t.co/AI1cAIjJSG
Thanks for an excellent talk today about misinterpretation of p-values: https://t.co/Wr7ovN3y7r https://t.co/mzqiQDJZ9m
@learnfromerror two reasons why the false positive rate isn't irrelevant https://t.co/STKoQl6y5p and https://t.co/d0W7jaeLP2
@peterscott1965 2 pm tomorrow. You can always read the paper https://t.co/B9Kqtayspo , or the video https://t.co/G6mtrnWrnj
@caio_maximino 2/n but you CAN give a minimum FPR, for P(real) = 0.5. I suggest something like https://t.co/FSTLju0kdp
.@skepchicks @pzmyers It isn't even clear that the tumour rate was increased. P=0.04 implies a high false positive rate https://t.co/B9Kqtayspo
@reinboth the following article might also interest you https://t.co/Yz3zRl84It Many thanks to @JoernLoviscach for the link
@ucfagls why not? It's so arbitrary as to be meaningless, and because it ignores false pos rate https://t.co/RPt9iHJLCS
Interesting article about the misunderstanding of P values. P=0.05 does not necessarily mean significance https://t.co/6xrWfu7SJR
False discovery rate ~30% with p<0.05 😳 Real problem for research @david_colquhoun @ameeraxpatel @JerryT88 https://t.co/qf4PCziAIK
RT @AMCELL: An investigation of the false discovery rate and the misinterpretation of p-values https://t.co/tYdb8eGGKO
An investigation of the false discovery rate and the misinterpretation of p-values https://t.co/tYdb8eGGKO
.@StephenSerjeant or, more recently, this trivium would be rated ridiculously highly https://t.co/tNBlxLKq4G
@UoSCHEBS well, my argument in https://t.co/B9Kqtayspo is essentially Bayesian. It's the invented bits I can't stomach @economeager
On p-values and false positives - interesting for when you have a moment. https://t.co/hRh8042xbC
An investigation of the false discovery rate and the misinterpretation of p-values https://t.co/37otq9zbqQ
RT @david_colquhoun: If you observe p=0.047, with prior of 0.5, the false pos rate isn't 5% as he says, but 26% https://t.co/AI1cAIjJSG htt…
If you observe p=0.047, with prior of 0.5, the false pos rate isn't 5% as he says, but 26% https://t.co/AI1cAIjJSG https://t.co/ioZJaxQ9R4
@jvrbntz If you observe p=0.047, with prior of 0.5, the false positive rate isn't 5% as he says, but 26% https://t.co/AI1cAIjJSG
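The 26% comes from the paper's p-equals argument: condition on the p value actually observed (p = 0.047), not on the whole p < 0.05 tail, via the likelihood ratio for H1 vs H0. A rough stdlib-only sketch, using a normal approximation in place of the paper's t-test simulation (so it lands near, not exactly at, 26%); the setup (power 0.8 at alpha = 0.05, prior 0.5) follows the paper's example:

```python
import math

def phi(x):
    """Standard normal density."""
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

# Assumed setup: two-sided test, power 0.8 at alpha = 0.05, prior P(real) = 0.5.
z_obs = 1.985            # two-sided z corresponding to p ~= 0.047
mu = 1.96 + 0.8416       # mean of z under H1 that gives power ~0.8 at alpha 0.05

# p-equals approach: compare the likelihood of the OBSERVED p under each
# hypothesis. Under H0 the two-sided p is uniform; the ratio of densities
# reduces to the ratio of the |z| densities at z_obs.
f0 = phi(z_obs) + phi(-z_obs)           # density of |z| under H0
f1 = phi(z_obs - mu) + phi(z_obs + mu)  # density of |z| under H1
lr = f1 / f0                            # likelihood ratio favouring H1, ~2.6

prior = 0.5
fpr = (1 - prior) / ((1 - prior) + lr * prior)
print(f"false positive rate ~ {fpr:.0%}")   # close to the paper's 26%
```

The point of the sketch: even with prior odds of 1, a p value just under 0.05 gives only modest evidence for H1 (likelihood ratio under 3), so the chance the "discovery" is false is far above 5%.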
@CoyneoftheRealm yes, but not enough. Confidence intervals tell you little about what really matters, the false positive rate https://t.co/B9Kqtayspo
@david_colquhoun You say the data favour H1 more than H0 when H0 is more likely than H1! https://t.co/5eqrP0nWAS https://t.co/dh7A95bSLe
RT @jvrbntz: An investigation of the false discovery rate and the misinterpretation of p-values https://t.co/UBtG8N8nV9 #pvalue https://t.c…
An investigation of the false discovery rate and the misinterpretation of p-values https://t.co/UBtG8N8nV9 #pvalue https://t.co/obAfPmyLry
Something all sabermetricians should re-read: https://t.co/ao64Xd5sE1
.@JPdeRuiter Perfect example. https://t.co/B9Kqtayspo has more citations than https://t.co/Fk2lFB7BQK @stephanneuhaus1 @Marcia4Science
@Nuno_H_Franco My suggestions here https://t.co/RPt9iHJLCS
@economeager Agreed. That's what I was trying to do in https://t.co/B9Kqtayspo
Never use the word 'Significant' - interesting paper on the flaw in using the p-value to claim success https://t.co/RN73Itijhd
@Erdenschimmer the problem with stopping rules is quite different from the problem that I discuss https://t.co/B9KqtaQ3gW @economeager
@SJBPhysio_sport @AdamMeakins Here's today's one: https://t.co/msKZpqzjp4 but maybe: https://t.co/4V0h4bwRNg or https://t.co/GsNWxNWL0Z
Nice approachable article about statistics https://t.co/y3VA2AaXSD
RT @janzilinsky: “If you use p=0.05 to suggest that you have made a discovery, you will be wrong at least 30% of the time” https://t.co/JJJ…
“If you use p=0.05 to suggest that you have made a discovery, you will be wrong at least 30% of the time” https://t.co/JJJHWKSU4S
Changed several bits between https://t.co/YGxPXEiwoy and https://t.co/B9KqtaQ3gW But pity that arXiv has no comments https://t.co/Hb7tUTJ1so