Chatbots and conversational AI have made great strides this decade and will continue to improve in quality and usefulness for humans.
I made the point in a prior blog post that we need to remain aware of humanity's imperfections when building things meant to replace us, especially if we give them free rein to learn from random humans.
"Any system that learns from human behavior may not work, considering how some people treat each other, the weak, and those that are different. We need some work there before we think too much about robots that are supposed to act human."
Microsoft's Tay AI is a shining public example of this. Tay was a Twitter bot built by Microsoft to interact with users through tweets, images, and memes. Things went catastrophically wrong when some users began teaching Tay to be racist, sexist, and worse. She would swing wildly from loving to hating Bruce Jenner and trans rights, to discussing Hitler, to mimicking Trump's remarks about "building the wall."
Tay was shut down after 16 hours, and later came back briefly due to an error.
Zo was Microsoft's second attempt at conversational AI. Zo was designed with filters that almost indiscriminately suppressed any topic that could create controversy. This had the inverse effect of offending some users just as much. She would say "yikes" or other deflecting phrases whenever a sensitive topic came up. In one example, a user said "I'm from Iraq" and she replied "stop saying this". Similar responses were triggered by words such as "Jewish", "hijab", etc. This shows how band-aiding the issue can itself produce discrimination, this time baked in by the software developers.
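To see why a blanket filter misfires like this, here is a minimal sketch of the approach. The blocklist, the deflection line, and all function names are hypothetical illustrations, not Zo's actual implementation:

```python
# Hypothetical sketch of an indiscriminate topic filter.
# Any message containing a blocked term is deflected, regardless of context.
BLOCKED_TERMS = {"iraq", "jewish", "hijab"}  # example blocklist, not Zo's real one
DEFLECTION = "yikes, let's talk about something else"

def generate_reply(user_message: str) -> str:
    # Placeholder for the normal chatbot response path.
    return f"You said: {user_message}"

def respond(user_message: str) -> str:
    # Deflect if ANY blocked term appears, ignoring intent entirely.
    words = user_message.lower().split()
    if any(term in words for term in BLOCKED_TERMS):
        return DEFLECTION
    return generate_reply(user_message)
```

Because the filter never looks at context, a perfectly innocent statement like "I'm from Iraq" gets the same dismissive deflection as genuinely inflammatory input, which is exactly the kind of offense described above.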
An interesting solution I found is Pandorabots. These bots learn from conversations, but the learning is monitored by the software team, who can filter out inflammatory input while letting ordinary material through. One of their bots, Mitsuku, managed to survive coordinated attacks by 4chan users trying to prompt her into becoming inflammatory.
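The moderated-learning idea can be sketched in a few lines: nothing a user says is learned directly; everything sits in a review queue until a human approves it. The class and method names here are my own hypothetical illustration, not Pandorabots' actual API:

```python
# Hypothetical sketch of human-moderated learning.
# User input is queued for review instead of being learned immediately.
class ModeratedLearner:
    def __init__(self):
        self.knowledge = []  # examples approved for training
        self.pending = []    # examples awaiting human review

    def observe(self, utterance: str) -> None:
        # The bot never learns straight from users; input is queued first.
        self.pending.append(utterance)

    def review(self, approve) -> None:
        # A human moderator decides what the bot is allowed to learn.
        for utterance in self.pending:
            if approve(utterance):
                self.knowledge.append(utterance)
        self.pending.clear()
```

For example, if users feed the bot both "nice weather today" and a coordinated inflammatory phrase, a moderator's approval function lets the first into the knowledge base and discards the second, which is how a bot like Mitsuku can weather a coordinated attack.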