April 28, 2022

AI Safety Engineering - Dr. Roman Yampolskiy


Today’s guest has committed many years of his life to trying to understand Artificial Superintelligence and the security concerns associated with it. Dr. Roman Yampolskiy is a computer scientist (with a Ph.D. in behavioral biometrics) and an Associate Professor at the University of Louisville. He is also the author of the book Artificial Superintelligence: A Futuristic Approach. Today he joins us to discuss AI safety engineering. You’ll hear about some of the safety problems he has discovered in his 10 years of research, his thoughts on accountability and ownership when AI fails, and whether he believes it’s possible to enact any real safety measures in light of the decentralization and commoditization of processing power. You’ll discover some of the near-term risks of not prioritizing safety engineering in AI, how to make sure you’re developing AI safely, and which organizations are deploying it in a way that Dr. Yampolskiy believes to be above board.

Key Points From This Episode:

  • An introduction to Dr. Roman Yampolskiy, his education, and how he ended up in his current role.
  • Insight into Dr. Yampolskiy’s Ph.D. dissertation in behavioral biometrics and what he learned from it.
  • A definition of AI safety engineering.
  • The two subcomponents of AI safety: systems we already have and future AI.
  • Thoughts on whether or not there is a greater need for guardrails in AI than in other forms of technology.
  • Some of the safety problems that Dr. Yampolskiy has discovered in his 10 years of research.
  • Dr. Yampolskiy’s thoughts on the need for some type of AI security governing body or oversight board.
  • Whether it’s possible to enact any sort of safety in light of the decentralization and commoditization of processing power.
  • Solvable problem areas.
  • Trying to negotiate the tradeoff between enabling AI to have creative freedom and being able to control it.
  • Thoughts on whether there will come a time when we must decide whether to go past the point of no return with AI superintelligence.
  • Some of the near-term risks of not prioritizing safety engineering in AI.
  • What led Dr. Yampolskiy to focus on this area of AI expertise.
  • How to make sure you’re developing AI safely.
  • Thoughts on accountability and ownership when AI fails, and the legal implications of this.
  • Other problems Dr. Yampolskiy has uncovered.
  • Thoughts on the need for a greater understanding of the implications of AI work, and whether this is a feasible solution.
  • Use cases or organizations that are deploying AI in a way that Dr. Yampolskiy believes to be above board.
  • Questions that Dr. Yampolskiy would be asking if he were on an AI development safety team.
  • How you can measure progress in safety work.

Stream the full episode below, or head here to select your favorite listening app and view the full transcript.

Tweetables:

  • “Long term, we want to make sure that we don’t create something which is more capable than us and completely out of control.” — @romanyam
  • “This is the tradeoff we’re facing: either it is going to be very capable, independent, and creative, or we can control it.” — @romanyam
  • “Maybe there are problems that we really need Superintelligence for. In that case, we have to give it more creative freedom, but with that comes the danger of it making decisions that we will not like.” — @romanyam
  • “The more capable the system is, the more it is deployed, the more damage it can cause.” — @romanyam
  • “It seems like it’s the most important problem; it’s the meta-solution to all the other problems. If you can make friendly, well-controlled superintelligence, everything else is trivial. It will solve it for you.” — @romanyam

Links Mentioned in Today’s Episode:

  • Dr. Roman Yampolskiy
  • Artificial Superintelligence: A Futuristic Approach
  • Dr. Roman Yampolskiy on Twitter
