A reference to a passage from Eric Topol's book Deep Medicine, reposted on his Twitter account today to provoke thought:
"We already accept black boxes in medicine. For example, electroconvulsive therapy is highly effective for severe depression, but we have no idea how it works. Likewise, there are many drugs that seem to work, even though no one can explain how. As patients we willingly accept this human type of black box, so long as we feel better or have good outcomes. Should we do the same for AI algorithms? Pedro Domingos would, telling me that he'd prefer one "that's 99 percent accurate but is a black box" over "one that gives me explanation information but is only 80 percent accurate.""