Advancing best-in-class large models, compute-optimal RL agents, and more transparent, ethical, and fair AI systems. The thirty-sixth International Conference on Neural Information Processing Systems (NeurIPS 2022)
Research. Published 1 December 2022. Authors: Julien Perolat, Bart De Vylder, Daniel Hennes, Eugene Tarassov, Florian Strub and Karl Tuyls. DeepNash learns to play Stratego.
Research. Published 6 December 2022. Authors: Yoram Bachrach and János Kramár. Agents cooperate better by communicating and negotiating, and sanctioning broken promises helps keep them honest.
Research. Published 8 December 2022. Authors: The AlphaCode team. Note: This blog was first published on 2 Feb 2022. Following the paper's publication in Science
Earlier today we announced some changes that will accelerate our progress in AI and help us develop more capable AI systems safely and responsibly.
Responsibility & Safety. Published 24 April 2023. Authors: Iason Gabriel and Kevin McKee. Drawing from philosophy to identify fair principles for ethical AI.
Research towards AI models that can generalise, scale, and accelerate science. Next week marks the start of the 11th International Conference on Learning Representations (ICLR),
Responsibility & Safety. Published 25 May 2023. Authors: Toby Shevlane. New research proposes a framework for evaluating general-purpose models against novel threats.