AI-screened eye pics diagnose childhood autism with 100% accuracy

  • Wogi@lemmy.world · 9 months ago

    100% accuracy is troubling. This is literally Statistics 101 stuff: they tell you in no uncertain terms to never, never trust 100% accuracy.

    You can be certain to some value of p, and that number is never 0. p = 0.001 is suspicious as fuck, but doable. p = 0.05 is great if you have a decent sample size.

    They had fewer than 1000 participants.

    I just don’t trust it. Neither should they. Neither should you. At least not until someone else recreates the experiments and also finds this AI to be 100% accurate.
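
    To put a rough number on that uncertainty: here’s a minimal sketch (not from the article; a held-out test set of about 150 participants is a guess, given “fewer than 1000 participants”) of how much room for error remains even when every held-out case is classified correctly.

    ```python
    # Even a "perfect" score on a finite test set leaves real uncertainty.
    # Rule of three: 0 errors in n trials gives roughly 3/n as the one-sided
    # 95% upper bound on the true error rate.
    from scipy.stats import beta

    n_test = 150      # hypothetical held-out set size (a guess)
    n_correct = 150   # 100% observed accuracy
    alpha = 0.05

    # One-sided 95% Clopper-Pearson lower bound on the true accuracy
    lower = beta.ppf(alpha, n_correct, n_test - n_correct + 1)
    print(f"95% lower bound on true accuracy: {lower:.3f}")              # ~0.980
    print(f"Rule-of-three upper bound on error rate: {3 / n_test:.3f}")  # 0.020
    ```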

    • eggymachus@sh.itjust.works · 9 months ago

      What they’re saying, as far as I can tell, is that after training the model on 85% of the dataset, it predicted whether a participant had an ASD diagnosis (as a binary choice) 100% correctly for the remaining 15%. I don’t think this is unheard of, but I’ll agree that a replication would be nice to rule out systematic errors. If the images from the ASD and TD sets were taken with different cameras, for instance, that could introduce an invisible difference between the datasets that an AI could converge on. I would expect them to control for stuff like that, though.
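
      As a rough sketch of that kind of evaluation (placeholder tabular data and logistic regression standing in for the study’s retinal photographs and model; only the 85/15 split and the binary label come from the comment above):

      ```python
      # Hold out 15% of the data, train on the rest, and score binary ASD/TD
      # predictions on the held-out part. Everything here is a stand-in.
      from sklearn.datasets import make_classification
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import accuracy_score
      from sklearn.model_selection import train_test_split

      # Stand-in dataset (the thread says fewer than 1000 participants)
      X, y = make_classification(n_samples=900, n_features=20, random_state=0)

      X_train, X_test, y_train, y_test = train_test_split(
          X, y, test_size=0.15, stratify=y, random_state=0
      )

      model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
      print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
      ```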

      • dragontamer@lemmy.world · 9 months ago

        “I would expect them to control for stuff like that, though.”

        What was the problem with that male vs female deep-learning test a few years ago?

        Wasn’t it that all the males were photographed earlier in the day, so the sun in the background was at one angle, while all the females were photographed later in the day, with the sun at a different angle? It turned out the deep-learning AI had effectively been trained on the window in the background.

        100% accuracy almost certainly means this kind of effect happened. No one gets a perfect score; all good tests should be at least a “little bit” shoddy.
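
        A toy illustration of that failure mode (nothing here is from the study; a made-up “brightness” feature stands in for the lighting/camera confound): the classifier scores essentially 100% on held-out data even though the genuine features are pure noise.

        ```python
        # Dataset confound demo: the label leaks through an acquisition artifact
        # (e.g. lighting/camera), so the classifier looks perfect while learning
        # nothing about the subject itself.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import accuracy_score
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        n = 1000
        y = rng.integers(0, 2, size=n)                    # the "diagnosis" label
        real_signal = rng.normal(size=(n, 10))            # genuine features: pure noise
        brightness = y + rng.normal(scale=0.01, size=n)   # confound tracking the label
        X = np.column_stack([real_signal, brightness])

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.15, random_state=0)
        model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
        print("accuracy:", accuracy_score(y_te, model.predict(X_te)))  # ~1.0, for the wrong reason
        ```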

      • dirtdigger@lemmy.world · 9 months ago

        You need to report two numbers for a classifier, though. I can create a classifier that catches all cases of autism just by saying that everybody has autism. You also need a false positive rate.
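
        For example (toy numbers, nothing from the study), the always-says-autism classifier catches every case, but the second number exposes it:

        ```python
        # The "everybody has autism" classifier: catches every ASD case,
        # but flags every TD participant too.
        import numpy as np
        from sklearn.metrics import confusion_matrix

        y_true = np.array([1] * 100 + [0] * 100)   # toy labels: 100 ASD, 100 TD
        y_pred = np.ones_like(y_true)               # always predict "ASD"

        tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
        sensitivity = tp / (tp + fn)            # 1.0 -- every ASD case caught
        false_positive_rate = fp / (fp + tn)    # 1.0 -- every TD participant flagged
        print(sensitivity, false_positive_rate)
        ```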