Kurzweil opined about the notion of spiritual machines, but what of moral machines? Can we design and build artificial intelligence with an innate sense of ethics and morality? If morality developed in an AI, would it be one we recognize? In his essay, NYU Professor Gary Marcus explores the need to design ethics and morality into the architecture of future intelligent machines.
“Within two or three decades the difference between automated driving and human driving will be so great you may not be legally allowed to drive your own car, and even if you are allowed, it would be immoral of you to drive, because the risk of you hurting yourself or another person will be far greater than if you allowed a machine to do the work. That moment will be significant not just because it will signal the end of one more human niche, but because it will signal the beginning of another: the era in which it will no longer be optional for machines to have ethical systems.”
The next couple of decades will be interesting from a philosophical and ethical point of view. I’ve discussed AI free speech and self-driving cars before, but the concept of morality and ethics in AI goes back to Asimov (though his treatment was extremely reductive and overly simplistic, I credit him with introducing the concept).
My little robot buddy standing in solidarity with our electric toothbrushes