
Blog 6: Thoughts on Social Robotics and Emotive Displays

Will robots bully us back?

I'm basing this blog on a question that popped into my head today: how many emotions will we allow a social robot to 'feel' and emote? Should our future robots be systems that only exude unfettered happiness? Should they be a jerk like a cat? Should they act as if they love us, or act in emotionally abrasive ways?

The easiest answer boils down to making sure our robot companions fit the roles they are designed to fill. We would all want a nice robot, but at what cost? If a social robot dealt with violent criminals or was involved in punitive tasks, would it punish you with a smile? In an angry manner? I hope that robotics never takes on a punitive role in our lives, but it's worth examining the ethics of it.

A constant undercurrent in my blogs is the fear of losing humanity to automation. As humans are automated out of jobs and we lose the emotion found in menial activities, will we become bored and over-coddled by gentle robots? Will that result in frustration or violence as we get fed up with insensitively nice robots?

As we've seen through experiments in conversational AI, robots can be funny and interesting to interact with when given autonomy and machine learning. I am curious to see how companies and designers handle the challenge of making robots interesting and layered like humans while also avoiding conflict and harm. Even well-intentioned people make mistakes and hurt the ones they love; I predict it will be very easy for a well-intentioned robot to harm its owner, so I worry even more about a robot with a wise attitude.

Nothing in this blog is conclusive, because we are only at the tip of the iceberg for social robotics and AI. I hope that the people building these systems are also concerned about how their robots' moods will affect ours, and how the fabric of our society may change as more and more interactions become human-robot rather than human-human. Lastly, any system that learns from human behavior may not work well, considering how some people treat each other, the weak, and those who are different. We need some work there before we think too hard about robots that are supposed to act human.

This is how someone decided to treat a friendly hitchhiking robot.
