If you could write the 3 Laws of Robotics today

Over the holiday weekend I took some time to sort through my books and found several favorites. Growing up, I devoured every book by Isaac Asimov I could get my hands on. His tales of robots and futuristic technology captured my imagination, but it was his 3 Laws of Robotics that really stuck with me.

The 3 Laws state:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

These laws made intrinsic sense to my young mind. Of course, we would want to protect humans from harm as we develop more advanced #AI and robotics, right?
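Just for fun, here is how literally that precedence could be encoded: the laws are checked in order, so a lower-numbered law always wins. This is only a toy sketch in Python; every name in it (the `Action` fields, the `evaluate` function) is invented for illustration and not taken from any real robotics framework.

```python
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool = False           # would doing this injure a human?
    inaction_harms_human: bool = False  # would *not* doing it allow harm?
    ordered_by_human: bool = False      # did a human order it?
    endangers_robot: bool = False       # does it risk the robot itself?

def evaluate(action: Action) -> str:
    """Check the laws in priority order; the first one that applies decides."""
    # First Law: no harm to humans, by action or by inaction.
    if action.harms_human:
        return "refuse (First Law)"
    if action.inaction_harms_human:
        return "must act (First Law, inaction clause)"
    # Second Law: obey human orders, now that the First Law is satisfied.
    if action.ordered_by_human:
        return "obey (Second Law)"
    # Third Law: self-preservation carries the lowest priority.
    if action.endangers_robot:
        return "avoid (Third Law)"
    return "no law applies"

# An order that endangers the robot: the Second Law outranks the Third.
print(evaluate(Action(ordered_by_human=True, endangers_robot=True)))  # obey (Second Law)
```

Of course, the hard part Asimov kept returning to is everything this sketch waves away: how a robot would ever decide what counts as “harm” in the first place.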

Today we have incredibly sophisticated AI like large language models that can generate written content, have conversations, and more. I often wonder what Asimov would think of these developments. Would he see them as a natural progression of AI that doesn’t inherently violate his 3 Laws? Or would he want to establish guardrails on some uses of generative AI to uphold the spirit of not harming humans?

While we can’t bring Asimov back to life and ask him, I think we should thoughtfully consider limits and ethical guidelines as we build ever more powerful AI. We want these tools to help propel humanity forward, not contribute to misinformation, job displacement, or other potential pitfalls.

Asimov was an endless optimist when it came to technology’s potential. But he also wove cautionary tales to remind us that wisdom and responsibility should guide creation. As he once said, “The saddest aspect of life right now is that science gathers knowledge faster than society gathers wisdom.” Perhaps we can take a balanced approach, allowing AI advancement while establishing reasonable guardrails. That seems like a future Isaac Asimov could appreciate: one where wisdom keeps pace with scientific advancement so we steer clear of any abyss.

How would you write the 3 Laws today?

Great food for thought, @WJRyan! I love the quote you shared from Asimov and think it still applies, especially with the AI developments happening right now.

Great post, Bill @WJRyan!
Isaac was a brilliant mind!
Even with reasonable guardrails there will be contrarians and controversy. As a weak comparison, we don’t even know who really owns the internet, do we? It is with mindfulness, not mindlessness, that we hope choices are made with integrity, wisdom, and the protection of human beings/all beings.

I’ve heard there were more than 3 laws… I have no idea of the fourth, but here’s the Fifth Law of Robotics, from Nikola Kesarovski: “A robot must know it is a robot.” It is presumed that a robot has a definition of the term, or a means to apply it to its own actions.

So the 4th Law was introduced in Lyuben Dilov’s 1974 novel Icarus’s Way (a.k.a. The Trip of Icarus): “A robot must establish its identity as a robot in all cases.” And to add more fun, Asimov introduced the Zeroth Law, so named to continue the pattern where lower-numbered laws supersede the higher-numbered ones: a robot must not harm humanity. The robotic character R. Daneel Olivaw was the first to give the Zeroth Law a name, in the novel Robots and Empire; however, the character Susan Calvin articulates the concept in the short story “The Evitable Conflict”. For those interested: Three Laws of Robotics - Wikipedia
