If you’re eager for our AI overlords to come take all the difficult choices out of your daily routine (do I want hazelnut coffee or a Frappuccino??), a new article at MIT’s Technology Review might give you pause.
Turns out AI is quickly becoming downright incredible at making complex decisions and predictions. The only problem is that no one quite seems to know how the technology does what it does, and that could be a problem. For instance, one medical AI called Deep Patient has the uncanny ability to pinpoint patients who might develop schizophrenia, a notoriously difficult disease to predict. The AI can’t explain its reasoning, though, so its diagnoses can’t really be used to justify treatment. While we can all breathe a sigh of relief that doctors aren’t foisting meds on patients based solely on a machine’s say-so (yet!), to use the technical term, it’s pretty freaking weird. “We can build these models,” says Joel Dudley, who leads an AI research team at Mount Sinai, “but we don’t know how they work.” Check out the long read here while we confirm that Skynet is still not sentient. ML