<h5 align="left">Douglas Blank, Head of Research, Comet ML</h5>
</div>
@@ -1601,7 +1600,7 @@ <h5 align="left">Douglas Blank, Head of Research, Comet ML</h5>
<div class="col-md-12 v-center">
<h6>Talk Abstract</h6>
<p>
-                While Deep Learning has achieved remarkable advancements, I believe its deployment requires a shift in perspective. Just as the Hippocratic Oath guides medical practice, a fundamental ethical framework is crucial for responsible AI deployment. This talk delves into the critical question: Can your AI system adhere to the principle of 'Do No Harm'? We will explore the risks of releasing your AI project into the wild, considering the ethical implications alongside the technical advancements.
+                In 1942, Isaac Asimov introduced the Three Laws of Robotics as a literary ethical framework to explore robot safety and prevent harm to humans. Until recently, these concepts were purely theoretical in relation to real AI. More than 80 years later, however, the challenge of creating a robust ethical and safety layer for autonomous systems is a pressing reality. In this presentation, we will explore the core ideas behind Asimov's laws and conduct interactive, hands-on demonstrations that utilize and challenge current Deep Learning (DL) techniques. By examining the application and inherent limitations of modern safety protocols in DL systems, we will consider Three New Laws of AI designed for contemporary intelligent systems.