
Microsoft coolly apologizes for its “critical oversight” (aka unleashing a racist chatbot)

In a blog post, Microsoft took the blame for deploying its controversial chatbot—an AI experiment that went south very quickly—without accounting for how it could backfire (see: tweeting racist epithets).

But, as one might expect, the company was quick to backpedal, pointing out that AI can’t be improved without missteps:

Looking ahead, we face some difficult – and yet exciting – research challenges in AI design. AI systems feed off of both positive and negative interactions with people. In that sense, the challenges are just as much social as they are technical. We will do everything possible to limit technical exploits but also know we cannot fully predict all possible human interactive misuses without learning from mistakes. To do AI right, one needs to iterate with many people and often in public forums. 
