It is known that non-representative data is a source of bias. Note that a poorly formulated problem statement can also be the root cause. https://t.co/8l1I2B5GlP
@JonelleElgaway I find the youarewithinthenorms website difficult to navigate. https://t.co/GFaTR2S1u1
@ben_golub @aryehazan here's an example: programmers thought it was a good idea to use money as a proxy for healthcare need and ended up making the problem they were trying to solve worse (this is very common when algorithms are used to solve social problems)
@Christi57479482 @matteocontini2 @micheleboldrin There are rivers of scientific literature on the subject, and the debate remains open. Examples: -https://t.co/uyJzcBdeN4 -https://t.co/GeWGqS3CHl
RT @MaraAlyseGH: Definitely incorporating this into my intro to health care systems course next semester, along with this one. https://t.co…
Definitely incorporating this into my intro to health care systems course next semester, along with this one. https://t.co/8ZSNngAFv6
RT @RyansMom2: This paper is from 2019 and still a must read for my students: https://t.co/rKFBS08zg1
This paper is from 2019 and still a must read for my students: https://t.co/rKFBS08zg1
@Scobleizer I’m sorry about your loss, and I know we have had this discussion before. My issue is with putting AI out there before vetting it appropriately. I want it to not cause more harm. How many patients were harmed by this: https://t.co/swk7tkMyS0 or
9- Mitigate race and gender bias. Since data often under-represents minority groups, bias may exist; be aware of this and evaluate its effects. Check: https://t.co/zdS7t3T1xD
Thanks @Dr_Hightower for drawing attention to this @ScienceMagazine paper about bias in AI during your @ncats_nih_gov #CTSA presentation. https://t.co/0h1PKfGmBF
This cost is very real. Obermeyer et al. (https://t.co/cNus1kUMC5) consider admissions to a high-risk care management program. Failing to land on the Pareto frontier in that case means admitting fewer patients from underserved groups *and* fewer high-risk patients.
This is not true. Here are two large ones in healthcare. https://t.co/pWX7HD37Hl https://t.co/eKvxaqCfQz
3. "Dissecting racial bias in an algorithm used to manage the health of populations" Science, 2019. by @oziadias @brianwpowers Christine Vogeli @m_sendhil https://t.co/PeQDbwWXZi https://t.co/Xe7g2VBUFI
RT @NEJM_AI: Addressing the algorithmic bias in health care could increase the percentage of Black patients receiving additional help from…
Indeed, more of this type of content please ... using #AIinHealthcare to support #healthequity and #HealthForAll. #aiforgood #ResponsibleAI
Addressing the algorithmic bias in health care could increase the percentage of Black patients receiving additional help from 17.7% to 46.5%. The study calls for a shift from predicting costs to predicting health outcomes: https://t.co/8oczEPVjsx
This paper from @oziadias highlights racial bias in a widely used health care algorithm predicting health care costs rather than actual illness. This results in Black patients being sicker than White patients at a given risk score: https://t.co/8oczEPVjsx
Contribution 4: We demonstrate our framework on the healthcare dataset released by Obermeyer et al. [Science 2019 https://t.co/ORjYTHwh6F] + provide semi-synthetic experiments to clarify how to improve fairness by moving to a multi-target setting
RT @mphermosilla: It's an honor to listen to the author of one of the most emblematic papers on algorithmic bias from the use of proxy variables @ozia…
It's an honor to listen to the author of one of the most emblematic papers on algorithmic bias from the use of proxy variables, @oziadias, who highlighted the importance of integrating domain knowledge and data to build fair models #FAccT2023 ht
@AmandaCoston @d19fe8 @zstevenwu As a motivating example, consider Obermeyer et al.'s work (https://t.co/wYG8M2YIP4), which shows how using cost of care as a proxy for medical need leads to discriminatory outcomes. (2/9)
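The mechanism these tweets describe can be sketched in a few lines. This is a purely hypothetical simulation (not the paper's data or model; the group names, access factor, and distributions are all made up for illustration): when one group incurs lower costs for the same underlying need, ranking patients by predicted cost under-selects that group even though race is never used as a feature.

```python
# Hypothetical simulation: cost as a label can encode unequal access to care,
# even when group membership is excluded from the features.
import random

random.seed(0)

patients = []
for group in ("A", "B"):  # group B has less access to care (assumed for illustration)
    for _ in range(5000):
        need = random.gauss(0, 1)               # latent true health need, same for both groups
        access = 1.0 if group == "A" else 0.6   # group B spends less per unit of need
        cost = access * need + random.gauss(0, 0.3)
        patients.append({"group": group, "need": need, "cost": cost})

def frac_b_in_top(key, frac=0.1):
    """Share of group B among the top `frac` of patients ranked by `key`."""
    top = sorted(patients, key=lambda p: p[key], reverse=True)
    top = top[: int(len(patients) * frac)]
    return sum(p["group"] == "B" for p in top) / len(top)

# Ranking by cost under-selects group B; ranking by true need does not.
print("share of group B flagged when ranking by cost:", frac_b_in_top("cost"))
print("share of group B flagged when ranking by need:", frac_b_in_top("need"))
```

Both groups have identical need distributions, so any gap between the two printed shares comes entirely from the choice of label.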
"Millions of black people affected by Racial Bias in healthcare algorithms" https://t.co/56pEqnoIYe
RT @atulbutte: We’ve got to make sure we’re building fair models, models that don’t use proxies and shortcuts that short-change our populat…
RT @hmkyale: Terrific piece by @oziadias @m_sendhil and colleagues…Dissecting racial bias in an algorithm used to manage the health of popu…
RT @EricTopol: A widely used #AI algorithm that predicts health risk exemplifies automated racial bias https://t.co/F55XmYOuZW @oziadias @U…
RT @brianwpowers: New paper w/ @oziadias, @m_sendhil, and Christine Vogeli out today in @sciencemagazine. Does a predictive algorithm wid…
The seminal algorithmic bias paper by Obermeyer in @ScienceMagazine 2019 #CIC23 https://t.co/JeKcodA5Nc
RT @APournamdari: A commercial algorithm used to identify high risk patients for allocation of healthcare resources carried significant rac…
Dissecting racial bias in an algorithm used to manage the health of populations https://t.co/mrIEr2IDzQ
@michael_nielsen These issues are percolating in the medical world and have been for a while. For example there was a paper by Obermeyer et al that demonstrated how focusing on a proxy metric could lead to very inequitable outcomes when deciding whether to
RT @ujue: In 2019, racial bias was discovered in the algorithm used to predict medical care for 200 million people a year in the US…
In 2019, racial bias was discovered in the algorithm used to predict medical care for 200 million people a year in the US. The bias arose because the algorithm used patients' history of healthcare costs for its predictions ht
@Abebab Denial of care and rationing of healthcare services by race (Obermeyer et al) https://t.co/csDS8ueik6
Be Aware 👇🏿 👇🏿 👇🏿
RT @avicgoldfarb: @WinnaPig @IRPP @perreaux @professor_ajay @joshgans Here's some research underlying the ideas of system-level change (rat…
@WinnaPig @IRPP @perreaux @professor_ajay @joshgans Here's some research underlying the ideas of system-level change (rather than point solutions in existing systems) to reduce bias: https://t.co/r5uz5KB2FQ; https://t.co/FXDUnOlwvr; https://t.co/IRaXycEf6x
Dissecting racial bias in an algorithm used to manage the health of populations https://t.co/SqQNkM6Gr0
For example, @oziadias, @brianwpowers, Christine Vogeli, and @m_sendhil demonstrated bias against Black patients when a care management algorithm was targeted based on health care costs (aka utilization): https://t.co/uvmuNFOUNO 7/N
RT @hammaadadam1: This hidden information on race can have serious consequences. For example, the seemingly innocuous choice of output labe…
This hidden information on race can have serious consequences. For example, the seemingly innocuous choice of output label can propagate existing health disparities if the chosen label encodes historical inequity! https://t.co/QCfjU7dwOL
Dissecting racial bias in an algorithm used to manage the health of populations #aibias #algorithmicbias #health https://t.co/3sHL48NCXt
Really interesting paper on biases in algorithms! https://t.co/39oDFAd63A
@lastpositivist Um...something similar already happened... https://t.co/G5KLHEz5ve
@GeoffSchrecker @sib313 @DrBekkiR @DrSimonHodes @ksb79 @DrSelvarajah @NHSEngland @Parody_RCGP @DrNeenaJha @vanmellaerts @christheeagle1 @rcgp @Azeem_Majeed @Carolynyjohnson Here's the full article 'Dissecting racial bias in an algorithm used to manage the health of populations'
@sacksdaniel @KhoaVuUmn Structural racial biases, e.g.: https://t.co/q95j8alVNW
As @oziadias @brianwpowers @m_sendhil and co-authors show, data gaps can lead to bias and consequences for those not represented. "Solutions" based on One Medical data probably will not generalize. 2/2 https://t.co/gKyFGwvNA2
RT @MarilynHeineMD: Sloppy Use of #ML is Causing a ‘Reproducibility Crisis’ “Malik says he is most worried about the prospect of misapplie…
Sloppy Use of #ML is Causing a ‘Reproducibility Crisis’ “Malik says he is most worried about the prospect of misapplied #AI #algorithms causing real-world consequences, such as unfairly denying someone medical care.“ Witness: https://t.co/HrmZnUw1ND http
RT @mariadearteaga: In these cases, mitigating bias is very much aligned with the goal of improving utility estimation. A great example is…
In these cases, mitigating bias is very much aligned with the goal of improving utility estimation. A great example is the work “Dissecting racial bias” by @oziadias et al. https://t.co/aL9cNfl1vH 6/
RT @BMLeeJR: This data, when broadly collected and not properly analyzed, can have life-altering impacts on people and communities that are…
@wesyang I thought the idea was that imperfect algorithmic prediction models would perpetuate bias? A la Obermeyer https://t.co/Za0cz2gPW9 ?
Side note [3/] This famous paper by @oziadias (https://t.co/CKYAarvg87) does a great job of demonstrating how label bias can be interrogated and accounted for in a calibration-based fairness evaluation
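The calibration-style evaluation mentioned above can be illustrated with a minimal synthetic sketch (not the study's data; the group names, score distribution, and illness gap are all invented for the example): compare average true illness across groups *within the same risk-score bin*. If one group is systematically sicker at the same score, the label the score was trained on is a biased proxy.

```python
# Illustrative check of group-wise calibration against a true-health outcome.
# All numbers are synthetic and chosen only to make the pattern visible.
import random

random.seed(1)

rows = []
for group in ("A", "B"):
    gap = 0.0 if group == "A" else 0.5   # assumed: group B is sicker at any given score
    for _ in range(20000):
        score = random.random()          # algorithm's risk score in [0, 1)
        illness = 3 * score + gap + random.gauss(0, 0.5)
        rows.append((group, score, illness))

def mean_illness(group, lo, hi):
    """Average true illness for `group` among patients with score in [lo, hi)."""
    vals = [ill for g, s, ill in rows if g == group and lo <= s < hi]
    return sum(vals) / len(vals)

# Same score bin, different average illness => the score is miscalibrated
# with respect to true health, pointing at bias in the training label.
for lo in (0.0, 0.5):
    hi = lo + 0.5
    print(f"score in [{lo},{hi}): A={mean_illness('A', lo, hi):.2f}  B={mean_illness('B', lo, hi):.2f}")
```

The key design point is that the comparison conditions on the score itself, so it detects label bias without needing access to the model's internals.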
RT @OpinionCovid: @mcdonnellk3 @Musicwallaby31 This optum? UnitedHealthcare? The world’s largest insurance company by premiums? https://t.…
@mcdonnellk3 @Musicwallaby31 This Optum? UnitedHealthcare? The world’s largest insurance company by premiums? https://t.co/aiCfJumgGf
RT @2plus2make5: Hello! I am looking for DATA SCIENCE PAPERS WHICH RESULTED IN (positive) REAL WORLD CHANGE eg: https://t.co/0dDHzbDUr4 N…
RT @ACMQueue: Dissecting racial bias in an algorithm used to manage the health of populations https://t.co/HccVtwE6aH
@WeitzmanInst @SanjaybMDPhD @CrossRiverStrat @CHWNational @hwangc01 @DrKamLeigh @YalePsych @IncludedHealth @PennNursing @ceejhcenter @Driantong Studies from Dr. Basu regarding tech (esp AI) and equity: https://t.co/IvLiPyuRRs https://t.co/lzl2A5LzEw https:
Alarming research published in @ScienceMagazine has found that #AITechnology in #Healthcare has been discriminating against millions of black people when allocating healthcare to patients. ➡️Read the full research article here: https://t.co/fbhSN8nGHv
RT @innodim: Resources curated for today’s #AIMART (articles, books, podcast, video) 🧵 https://t.co/Uox4xAYPKz https://t.co/at7bk2Tztk…
RT @AmmahStarr: Get your learning on with @innodim 👇🏿👇🏿👇🏿👇🏿
More visual aids: https://t.co/jPFOo28vHr @NIHCMfoundation #AIMART https://t.co/yEt1a5CybS
Get your learning on with @innodim 👇🏿👇🏿👇🏿👇🏿
Resources curated for today’s #AIMART (articles, books, podcast, video) 🧵 https://t.co/Uox4xAYPKz https://t.co/at7bk2Tztk https://t.co/gxqroKG16D https://t.co/abuXZuBZBw https://t.co/i4aYBnwI2O https://t.co/dE5F40rf1f https://t.co/Ow6zmyUtRv https
While risk stratification models can help personalize care, their use can worsen health inequities when models mis-estimate risk for some groups - as shown by @oziadias https://t.co/TGIoN6b4bf
@xcx_loona @razzrboy @arjunsubgraph @JanelleCShane This paper is a great example of that point: https://t.co/Kpm2c5bDK9 The algorithm exhibited racial bias, even though race was explicitly NOT used in the algorithm. Only by looking deeply into the code and