There’s a new term that I expect will soon become a regular topic of conversation – transhuman. It sounds like a new gender category, but it’s far from it.
Transhuman refers to the integration of technology into the human body. It’s akin to genetic and cell technologies like CRISPR or stem cell therapies, but much more invasive. A transhuman is someone who has integrated technology into their body in a way that substantially augments their mental or physical capabilities, or in many cases both. Perhaps you’re more familiar with the term cyborg, the mix of man and machine, although becoming transhuman doesn’t necessarily require embedding a machine in one’s body.
While it sounds like an amazing thing to happen, and in some ways it can be, it’s also quite scary. Here are some of the benefits that could result from being transhuman, and why I also think it could be a cause for alarm.
Now, look, let’s start with the three fundamental Rules of Robotics – the three rules that are built most deeply into a robot’s positronic brain.
One, a robot may not injure a human being, or, through inaction, allow a human being to come to harm.
Two, a robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
Three, a robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
These are Isaac Asimov’s easily recognizable and famous Three Laws of Robotics as laid out in his collection of short stories – I, Robot. For science fiction aficionados, these are easily identified and, most likely, committed to memory. Unfortunately for me, I just recently learned these laws. Sure, I’d heard them paraphrased many times and referenced in numerous books, but I never knew the true context in which they were used by Asimov. Now that I know the context, the rules are even more poignant and relevant in my mind.
What do you do when one of your favorite authors finishes another installment in a book series that you really like? You stop what you’re doing and move that book to the top of your reading list.
That’s what happened to me recently when William Hertling, the author of the Singularity Series, finished the fourth installment, titled The Turing Exception. The Turing Exception picks up 10 years after the completion of The Last Firewall. In addition to introducing the effects of advanced nanotech, it adds another layer to the artificial intelligence mix: the ability to upload your mind to a computer. It makes for some interesting plot dynamics and gives you even more to think about if (and when) the technology becomes available. There are some vexing moral dilemmas presented, which Hertling leaves for the reader to ponder on their own.
Though I’m not that into TED talks, I saw another one recently that features an interview with Elon Musk. It’s worth the 20 minutes to hear him talk about the big ideas he’s pursuing related to electric cars (Tesla Motors), solar energy (SolarCity), and space exploration (SpaceX).
He impresses me, and here’s why.
I’m a huge fan of William Hertling’s Singularity book series. Since reading the books, I’ve started following his blog, where he often talks about artificial intelligence (AI) and the coming of the Singularity – that point in time when AI achieves greater-than-human intelligence.
This weekend, he wrote an article in response to articles by Ramez Naam, author of one of my favorite books, Nexus (and its sequel, Crux), about how and when the Singularity may occur. In short, Naam suggests that the Singularity is a ways off and won’t happen overnight. While Hertling more or less agrees with Naam, his concern centers on the risks of an advanced AI. His biggest point is that we should be addressing the complicated ethical issues surrounding AI now, so we are prepared for the Singularity when it occurs.
I’d be wading into water that is way over my head if I were to offer an opinion on how and when AI advancements will occur. I just don’t understand the technology well enough. However, I’ve read enough sci-fi books over the last 18 months regarding the Singularity that I agree we need to start discussions about the risks and ethical issues of a smarter-than-human AI sooner rather than later, so we are prepared for it when it happens. I’m not suggesting I should be a part of the discussions; there are people who know far more about AI than I do who should be addressing the issues. I’m just suggesting they happen.
Either way, here’s a link to all three articles. I suggest you give them a read if you have any interest in the advancement of AI and the emergence of the Singularity.
Finally, for good measure, here are my reviews of Hertling’s and Naam’s books on AI and the Singularity. They’re longer-form reading, but worth it if you want to understand the paths along which AI could evolve: