Keynote: AI for formal verification; formal verification for AI
DAY 1
9:00–9:45
[Recorded Session]
For over a decade, it has been known that formal verification workflows are sufficient to create software that is free of exploitable bugs; they also appear to be necessary. AI systems are rapidly improving in their ability to assist with these workflows and to make them accessible to engineers with less specialist training. While AI can assist with many other workflows as well, some form of formal verification still seems necessary to make AI reliable enough to provide a net benefit to cybersecurity. AI could also be used to formally verify properties at levels of abstraction other than functional correctness, from the concurrency of distributed systems down to the electromagnetics of the hardware. Within a few years, this will vastly expand the scope of what is considered “practical” to formally verify, and most forms of cyberattack will become history. At the same time, formal verification and cybersecurity have crucial roles to play in ensuring and assuring that increasingly powerful AI systems do not go rogue and cause a global catastrophe. Our future could be bright, but our communities need to work together.
Location: Track 1 (Hall B)
Category: Keynote
Speakers
David A. Dalrymple (davidad)
David A. Dalrymple is a highly regarded research scientist currently leading the Safeguarded AI programme at ARIA (the UK’s Advanced Research and Invention Agency). His expertise lies in AI safety and computational neuroscience, and he is known for his contributions to the development of mathematically grounded, human-auditable AI models. These models aim to ensure the safe deployment of AI technologies while maximizing their potential to benefit society.
Dalrymple has an impressive background that includes co-inventing the Filecoin protocol and developing the Hypercerts mechanism for public goods funding at Protocol Labs. He has also worked as a Senior Software Engineer at Twitter and held a research fellowship at Oxford University focused on technical AI safety.
In his current role, Dalrymple is exploring the development of AI systems with strong safety guarantees, comparable to those in industries like nuclear power and aviation. His work is vital in addressing the growing need for reliable and ethical AI deployment in critical contexts.