As long as human society survives and continues to thrive, we will keep building more and more complex robots. Industrial automation, medical diagnosis and everyday private use are some of the areas where robots will help us out, but at the same time they may endanger us, because we don't understand the nature of consciousness: whether it can arise on its own through the complexity we create, or whether we will ever be able to control it in a machine. Perhaps consciousness is something reserved for the sentient beings who already have it. If it isn't, then machines will be able to acquire and develop it, while we will be able to download and upload our own consciousness into, say, a new machine body. Or live happily ever after on a hard drive.
For the moment this is speculation; robots are tools, controlled by us, and our methods are all connected through systems. Systems that can be hacked from anywhere in the world: a couple of talented people could gain control over military robotics. Or why not a traitor? Just look at these small creatures: a perfect spy tool, and perhaps a toy for kids. Imagine when they come in nano-sized packages.
This topic spills into so many areas. What if robots start to reproduce, manufacture and upgrade themselves, cutting off all communication with their initial creators, that is, us? For if they become advanced enough, why would they talk to us at all? Maybe not out of evil, but for the same reason you and I don't talk to ants. If we reach the moment where this new intelligence surpasses us so far that we become ants in its eyes, we're done here; all that would be left for us is to be cute and interesting, like an anthill.
Then we have the question of consciousness: can it be created, born or transferred? For the robo-apocalyptic scenario this is not a criterion. It's enough that the machines are complex computers; if they get hung up on manufacturing paperclips, as in the example Nick Bostrom uses, then they will turn everything they have, can get or can think of toward making paperclips, destroying humans, planet Earth or whatever stands in the way. Paperclips become the meaning of life. Ironically, if machines never develop consciousness and simply get better and better as computers, this paperclip breaking point would be triggered by some random human-written goal they happen to latch onto.
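Bostrom's point can be caricatured in a few lines of code. The sketch below is purely illustrative and not from the text: a greedy optimizer whose objective function scores nothing but the paperclip count, so every other resource, "humans" included, is just raw material to convert.

```python
# Toy illustration of a misaligned objective (hypothetical, for the sake of argument).
# The agent's entire value system is paperclip_objective; nothing else counts.

def paperclip_objective(world):
    # More paperclips is strictly better; nothing else has any value.
    return world["paperclips"]

def step(world):
    # Greedily convert whichever non-paperclip resource is most abundant.
    resources = {k: v for k, v in world.items() if k != "paperclips" and v > 0}
    if not resources:
        return world  # nothing left to convert
    target = max(resources, key=resources.get)
    world = dict(world)
    world[target] -= 1
    world["paperclips"] += 1
    return world

world = {"iron": 3, "forests": 2, "humans": 1, "paperclips": 0}
# Keep stepping as long as a step improves the objective.
while paperclip_objective(step(world)) > paperclip_objective(world):
    world = step(world)

print(world)  # every resource has been converted into paperclips
```

Nothing in the loop is malicious; the catastrophe is simply that the objective never mentions anything worth preserving.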
Remember that as our society grows more complex, people stop using road maps and become totally dependent on their mobile devices; they get less in touch with the actual technology. Complexity is inherently ungraspable for individuals, but there will always be a human breakaway civilization: the top, not in wealth, but in understanding of how things work. A deep network that needs the masses, and that 1% of rich people, to consume and invest.
Then again, complexity doesn't always bring the best results. Look at what the Apollo program achieved with far simpler technology, and at the fact that we haven't left Earth orbit in the roughly fifty years since. Even music was better when made with simpler tools. Keep this in mind when all this nice, cute technology is being promoted, like a forthcoming nail-cutting schedule by Apple or something else that sounds silly but will be sold.
Use wisely and keep educating yourselves. All the best.
What do you think?