Thoughts on AI and the coming Singularity

I’m a huge fan of William Hertling’s Singularity book series. Since reading the books, I’ve started following his blog, where he often writes about artificial intelligence (AI) and the coming of the Singularity – that point in time when AI achieves greater-than-human intelligence.

This weekend, he wrote an article in response to articles by Ramez Naam, author of one of my favorite books, Nexus (and its sequel, Crux), about how and when the Singularity may occur. In short, Naam suggests that the Singularity is a ways off and won’t happen overnight. While Hertling more or less agrees with Naam, his concern is more about the risks of an advanced AI. His biggest point is that we should be addressing the complicated ethical issues surrounding AI now, so that we are prepared for the Singularity when it occurs.

I’d be wading into water way over my head if I were to offer an opinion on how and when AI advancements will occur. I just don’t understand the technology well enough. However, I’ve read enough sci-fi books about the Singularity over the last 18 months that I agree we need to start discussing the risks and ethical issues of a smarter-than-human AI sooner rather than later, so we are prepared for it when it happens. I’m not suggesting I should be a part of the discussions – there are people who know way more about AI than I do who should be addressing the issues. I’m just suggesting they happen.

Either way, here are links to all three articles. I suggest you give them a read if you have any interest in the advancement of AI and the emergence of the Singularity.

Finally, for good measure, here are my reviews of Hertling’s and Naam’s books on AI and the Singularity. They’re longer-form reading, but worth it if you want to understand the paths along which AI could evolve:
