DNNs are easily fooled … oldie but goodie https://t.co/GdS4jQ85mT #ai #Security
RT @GaryMarcus: Happy birthday deep neural nets! You are (in your celebrated rebirth) nearly 10 years old now, and still utterly fooled by…
@Vera_Lucia_Rap Check out the abstract! https://t.co/a4Gzq2St4d
Another ridiculous example involves a DL system confusing blue and orange wavy lines for starfish (https://t.co/KKIOh1YXdI). AI tools require a degree of human control and vigilance to identify problems; maybe some of them are just funny, but others can lea
RT @FilipoGiovanni: @giannis_daras @GaryMarcus This reminds me a lot of adversarial noise https://t.co/dE0yLDInFJ or images https://t.co/W…
@giannis_daras @GaryMarcus This reminds me a lot of adversarial noise https://t.co/dE0yLDInFJ or images https://t.co/WivA53sxuh found with CNN classifiers
why do we trust deep nets for anything again? 😂 https://t.co/q4vpzeYBRf https://t.co/7DfcyIc71M
RT @TheDevilOps: @la_oraculo This one's a classic... https://t.co/iczZDFIR43
@la_oraculo This one's a classic... https://t.co/iczZDFIR43
The problem arises when algorithms are automated without any human involvement. They can be manipulated by feeding them incorrect information, making them behave in erroneous and harmful ways. https://t.co/KsNoxwkn1r
RT @danielrussruss: Looks like they should've tried a *deeper* neural net à la @ykilcher https://t.co/y59qj6fcPr https://t.co/IvB9MY0kJA
Looks like they should've tried a *deeper* neural net à la @ykilcher https://t.co/IvB9MY0kJA
ANI is often deeply mistaken… let's be clearer: ANI = Artificial Narrow Intelligence, anything that isn't AGI + the hard limits of data-driven logic https://t.co/cXRwLkQFLa ht @savvyRL @scaleai @allen_ai
RT @nordicgeo: Beyond questions re training data, data privacy, & AI for weaponry and killer robots, let’s not forget the critical question…
😂
RT @jmhessel: "Natural Adversarial Objects"... aka "Errors" ? 😅 . cool work either way! https://t.co/RhvDEzW4g4
Beyond questions re training data, data privacy, & AI for weaponry and killer robots, let’s not forget the critical question of > #AI for #Whom and to what ends? #AIforElmo & a great forum to start w is the #DreamTeam @ @Inspired__Minds @IntHe
Wonderful
"Natural Adversarial Objects"... aka "Errors" ? 😅 . cool work either way!
Happy birthday deep neural nets! You are (in your celebrated rebirth) nearly 10 years old now, and still utterly fooled by images like this. Nice work from @savvyRL @scaleAI/ @allen_ai https://t.co/2nSTFP6tJ7, redoubling @anh_ng8 @jeffclune @jasonyo htt
Input images that were used to fool a deep neural network. Related article (2014): https://t.co/yzsQhfcBPq https://t.co/aXXWxgGaHW
RT @drscotthawley: For my book on classification I’ll be writing about adversarial examples (& misclassification), and for my course this f…
For my book on classification I’ll be writing about adversarial examples (& misclassification), and for my course this fall I’ll be teaching a bit on GANs,...and sometimes we make connections when we get confused about stuff 😉. Thanks @theshawwn for k
Credit to @drscotthawley for pointing out this line of reasoning, a connection I had never made before. https://t.co/lqtJxD5cu3 Notice that these look not-dissimilar from a collapsed generator. That's what G is trying to do this whole time!
@neuro_data @skornblith @tyrell_turing @GaryMarcus Aren't humans the oracle for the ground truth? If an image has been perturbed such that humans get it wrong, then it *is* "wrong". Otherwise, the kind of adversarial images that ANNs classify with high co
@MSFTIssues 2. Confidence scores are highly vulnerable to adversarial attacks. We have known for years that adversarial examples that produce high confidence scores despite misclassification exist. See for example these two papers: https://t.co/4NrOCNKudh
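The fragility of confidence scores that the tweet above describes can be illustrated with a toy gradient-sign perturbation in the spirit of FGSM. This is a minimal sketch, not code from either cited paper: the linear "classifier", its weights, and the input are all made up for illustration, and a real DNN would need its gradient computed by backpropagation rather than read off directly.

```python
def score(x, w, b):
    """Confidence-like score: positive means class A, negative means class B."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

def fgsm_step(x, w, eps):
    """For a linear score the input gradient is just w, so stepping each
    feature against sign(w) is the fastest way to drive the score down."""
    return [xi - eps * sign(wi) for xi, wi in zip(x, w)]

w = [0.9, -0.4, 0.7]          # hypothetical learned weights
x = [0.5, 0.2, 0.3]           # hypothetical input, confidently class A
clean_score = score(x, w, 0.0)        # 0.58: solidly class A
x_adv = fgsm_step(x, w, eps=0.3)      # each feature moved by at most 0.3
adv_score = score(x_adv, w, 0.0)      # -0.02: prediction flips to class B
print(clean_score, adv_score)
```

A bounded per-feature nudge flips the predicted class, which is why a raw confidence score is a poor guarantee of correctness.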
Someone attached the arxiv paper to an ai meme he had made.... We are at peak AI https://t.co/zdwf7xiX8E
One day a piece of AI-generated abstract expressionism is gonna show up at the @MuseumModernArt, and it will be as confusing as the ones made by Barnett Newman. Source: https://t.co/6AYZ8jLBkl https://t.co/HhFyK5puXN
RT @EffingBoring: I've actually been doing research on this for one of my clients; there is no hope for professional wrestling and porn-det…
I've actually been doing research on this for one of my clients; there is no hope for professional wrestling and porn-detecting algos. A great paper: "Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images" https://
RT @todesking: A paper that uses evolutionary algorithms to generate images that DNNs misrecognize; it's full of great images http://t.co/zivo0AT74n http://t.co/fCFhNxwtAo
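The evolutionary approach mentioned above can be sketched as a simple (1+1) hill climb. This is an illustrative assumption-laden toy, not the paper's actual setup: the `confidence` function here is a hand-made stand-in for a DNN's class score (it rewards an alternating stripe pattern no human would call an object), and the mutation scheme is the simplest one that works.

```python
import random

def confidence(img):
    """Stand-in for a DNN class confidence: rewards an alternating
    high/low pixel pattern that looks like noise to a human."""
    return sum(p if i % 2 == 0 else 1.0 - p for i, p in enumerate(img)) / len(img)

def mutate(img, rate=0.25):
    """Perturb one random pixel, clipped to [0, 1]."""
    out = list(img)
    i = random.randrange(len(out))
    out[i] = min(1.0, max(0.0, out[i] + random.uniform(-rate, rate)))
    return out

def evolve(size=16, steps=2000, seed=0):
    """(1+1) hill climb: keep a mutation only if the classifier likes it more."""
    random.seed(seed)
    best = [random.random() for _ in range(size)]
    best_score = confidence(best)
    for _ in range(steps):
        cand = mutate(best)
        s = confidence(cand)
        if s > best_score:
            best, best_score = cand, s
    return best, best_score

img, conf = evolve()
print(f"evolved 'fooling image' confidence: {conf:.3f}")
```

Run against a real classifier instead of the stand-in, the same loop produces the paper's unrecognizable high-confidence images: the optimizer only ever sees the score, never anything a human would recognize.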
tl;dr of this paper: If you want to really fuck with your robot friends just show them the original visualizer from Windows Media Player https://t.co/QPAu5XkZwc
RT @chazfirestone: Humans can decipher directly encoded "fooling images" (based on work reported here https://t.co/2mb5mQJuBA) https://t.co…
Humans can decipher indirectly encoded "fooling images" (based on work reported here https://t.co/2mb5mQJuBA) https://t.co/GoPj9zVEYF
Humans can decipher directly encoded "fooling images" (based on work reported here https://t.co/2mb5mQJuBA) https://t.co/hpDAyenx9F
RT @simonmcmahon19: @drpuffa Attended your talk @ Brisbane AI last night. Loved it. Was wanting to ask your opinion on adversarial design &…
@drpuffa Attended your talk @ Brisbane AI last night. Loved it. Was wanting to ask your opinion on adversarial design & the problem of false negatives/positives in NNs (e.g. https://t.co/TL7eXBWi1U & similar work). Do you think it's a security or l
[1412.1897] Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images https://t.co/x9FFoCtc2h
Deep neural networks are easily fooled: High confidence predictions for unrecognizable images (from arxiv) https://t.co/hAO45IgOVU
« The AI uprising doesn’t appear as likely now, does it? » 😂 Footnotes [1] https://t.co/wu1snKA2ib [2] https://t.co/OfAFUEmgZ7 [3] https://t.co/PdJvQtAgYP... [4] https://t.co/HFxXXKUoyi https://t.co/8P7qflLVBy
Why we shouldn't blindly trust machine learning AI https://t.co/xmxsvXCniN https://t.co/B8j4JechsF https://t.co/NzZkHrZIiM
RT @Sistemabigdata: Clune's impactful paper on how #deeplearning algorithms are easily fooled https://t.co/RC0gafHsAN #machinelearning… htt…
RT @moorejh: Clune's impactful paper on how #deeplearning algorithms are easily fooled https://t.co/WpoasS1WtM #machinelearning #GPTP2017 h…
RT @fadis_: A paper showing that it is easy to create images humans can't recognize at all but that a neural-network classifier insists must be a particular class, pointing out that what neural networks see and what humans see are quite different https://t.co/LT…
I thought this would be the panda example, but it wasn't https://t.co/4tKxjko5xM
A paper showing that it is easy to create images humans can't recognize at all but that a neural-network classifier insists must be a particular class, pointing out that what neural networks see and what humans see are quite different https://t.co/LThPgMj0RX
RT @Liberationtech: Deep Neural Networks Are Easily Fooled: They Might Accidentally Label You a Terrorist with 99.9% Confidence [pdf] https…