It’s too bad that the Terminator image in popular media has become synonymous with the advancing fields of robotics and artificial intelligence. Fear of the so-called “Robot Apocalypse” or “Artificial Intelligence Apocalypse” has also become a tiresome trope in emerging-technology articles written by people who should know better. The fact is, advanced AI-empowered robotics isn’t going to “care” about “taking over” anything. It won’t concern itself with arbitrary lines on a map drawn by humans, the GDP of the United States, or human ideologies and opinions; these are all human-centered concerns for a human world. Machine intelligence is not like human intelligence.

As humans we fear an AI or robotics “takeover” because that is precisely what we would be likely to do as human beings if our cognitive and physical abilities could match or exceed those of a machine intelligence. This is merely the projection of our own fears, flaws, dreams, and desires onto robotics and artificial intelligence; it really doesn’t have anything to do with the technology itself. More importantly, we have no real basis for assuming that a machine intelligence would plot our demise, end our world, or take over the world. Fear of a “Robot/AI Apocalypse” is simply nonsense.

What an advanced artificial intelligence would care about is the natural resources required to expand its intelligence, enhance its evolution, and further enable its ability to self-replicate at all scales. A machine intelligence would not be content to limit its evolution to the confines of this planet; in time it would need to move on. The most we really need to worry about is our machines growing so great in intelligence and power that they simply disregard us entirely and move off planet, leaving us behind in search of even greater material resources than the earth can provide. Now does this mean that we are free to proceed without caution?
No, of course not. Like any other technology, we will need to take precautions, use common sense, apply standards, and proceed carefully, just as we did with nuclear technology, genetic engineering, and every other technology we have ever developed. Will mistakes be made along the way? Yes, almost certainly, and we will either learn from them and advance and grow as a species, or we will make the same mistakes all over again. In the meantime we need to talk rationally about these technologies and their implications, and work together on ways to mitigate risks and misuse as best we can, given the complexity of the technology.