Science fiction often portrays a future in which robots take our 'human' jobs, we fight for survival against our creations, and the story usually concludes with some kind of amalgamation of robots and humans working together after the oppressors have been taken down. But how realistic is it to have Artificial Intelligence at the level of, say, an NS-5 from I, Robot?
I've been working with Machine Learning, and I can tell you that what we are currently capable of programming is definitely NOT sentient. Honestly, I don't see how anything we build with today's Machine Learning or AI techniques could become sentient anytime soon, but if it could, what would the ramifications be?
We've seen the movies where we end up having to destroy our creations because they take over, but we never see the part that leads up to it. We have laws surrounding housing, marriage, tax, and cloning, but not AI. I remember reading in the news quite a while ago that cloning people was illegal. So should making an artificially intelligent robot also be illegal if it represents another person? If it were, we wouldn't have robots that look and act like real people, which would go some way toward solving the robotic-overlords problem. I would love to see a movie that explores this part of our possible robotic future. Asimov gave us the Three Laws of Robotics, but what about laws for people, governing how robots are created and used?
All in all, this is a topic that is going to come up a lot in the next 10-20 years as computers get faster and 'smarter', and robotics takes great leaps toward becoming better at 'human tasks'. But a question for all of us is: do we need human-like robots? Or just robots that help us live better by each doing an individual task, instead of everything?