Publications

Community-based Reconstruction and Simulation of a Full-scale Model of Region CA1 of Rat Hippocampus
Armando Romani
Alberto Antonietti
Davide Bella
Julian Budd
Elisabetta Giacalone
Kerem Kurban
Sára Sáray
Marwan Abdellah
Alexis Arnaudon
Elvis Boci
Cristina Colangelo
Jean-Denis Courcol
Thomas Delemontex
András Ecker
Joanne Falck
Cyrille Favreau
Michael Gevaert
Juan B. Hernando
Joni Herttuainen
Genrich Ivaska
Lida Kanari
Anna-Kristin Kaufmann
James King
Pramod Kumbhar
Sigrun Lange
Huanxiang Lu
Carmen Alina Lupascu
Rosanna Migliore
Fabien Petitjean
Judit Planas
Pranav Rai
Srikanth Ramaswamy
Michael W. Reimann
Juan Luis Riquelme
Nadir Román Guerrero
Ying Shi
Vishal Sood
Mohameth François Sy
Werner Van Geit
Liesbeth Vanherpe
Tamás F. Freund
Audrey Mercer
Felix Schürmann
Alex M. Thomson
Michele Migliore
Szabolcs Káli
Henry Markram
The CA1 region of the hippocampus is one of the most studied regions of the rodent brain, thought to play an important role in cognitive functions such as memory and spatial navigation. Despite a wealth of experimental data on its structure and function, it has been challenging to reconcile information obtained from diverse experimental approaches. To address this challenge, we present a community-driven, full-scale in silico model of the rat CA1 that integrates a broad range of experimental data, from synapse to network, including the reconstruction of its principal afferents, the Schaffer collaterals, and a model of the effects that acetylcholine has on the system. We tested and validated each model component and the final network model, and made input data, assumptions, and strategies explicit and transparent. The unique flexibility of the model allows scientists to address a range of scientific questions. In this article, we describe the methods used to set up simulations that reproduce and extend in vitro and in vivo experiments. Among several applications, we focus on theta rhythm, a prominent hippocampal oscillation associated with various behavioral correlates, and use our computer model to reproduce and reconcile experimental findings. Finally, we make data, code, and model available through the hippocampushub.eu portal, which also provides an extensive set of analyses of the model and a user-friendly interface to facilitate adoption and usage. This neuroscience community-driven model represents a valuable tool for integrating diverse experimental data and provides a foundation for further research into the complex workings of the hippocampal CA1 region.
A MC-based anthropomorphic test case for commissioning model-based dose calculation in interstitial breast 192-Ir HDR brachytherapy.
Vasiliki Peppa
Rowan M. Thomson
Gabriel P. Fonseca
Choonik Lee
Joseph N. E. Lucero
Firas Mourtada
Frank‐André Siebert
Javier Vijande
Panagiotis Papagiannis
PURPOSE To provide the first clinical test case for commissioning of 192Ir brachytherapy model-based dose calculation algorithms (MBDCAs) according to the AAPM TG-186 report workflow. ACQUISITION AND VALIDATION METHODS A computational patient phantom model was generated from a clinical multi-catheter 192Ir HDR breast brachytherapy case. Regions of interest (ROIs) were contoured and digitized on the patient CT images and the model was written to a series of DICOM CT images using MATLAB. The model was imported into two commercial treatment planning systems (TPSs) currently incorporating an MBDCA. Identical treatment plans were prepared using a generic 192Ir HDR source and the TG-43-based algorithm of each TPS. This was followed by dose-to-medium-in-medium calculations using the MBDCA option of each TPS. Monte Carlo (MC) simulation was performed in the model using three different codes and information parsed from the treatment plan exported in DICOM radiation therapy (RT) format. Results were found to agree within statistical uncertainty and the dataset with the lowest uncertainty was assigned as the reference MC dose distribution. DATA FORMAT AND USAGE NOTES The dataset is available online at http://irochouston.mdanderson.org/rpc/BrachySeeds/BrachySeeds/index.html, https://doi.org/10.52519/00005. Files include the treatment plan for each TPS in DICOM RT format, reference MC dose data in RT Dose format, as well as a guide for database users and all files necessary to repeat the MC simulations. POTENTIAL APPLICATIONS The dataset facilitates the commissioning of brachytherapy MBDCAs using TPS-embedded tools and establishes a methodology for the development of future clinical test cases. It is also useful to non-MBDCA adopters for intercomparing MBDCAs and exploring their benefits and limitations, as well as to brachytherapy researchers in need of a dosimetric and/or a DICOM RT information parsing benchmark. Limitations include specificity in terms of radionuclide, source model, clinical scenario, and MBDCA version used for its preparation.
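The reference dose data in this set are distributed in DICOM RT Dose format. As a minimal sketch of how a database user might inspect one of those files, assuming the pydicom package (the filename below is a placeholder, not a file named in the dataset):

```python
# Minimal sketch: read a DICOM RT Dose file and recover the dose grid.
# "reference_mc_dose.dcm" is a placeholder filename (assumption).
import pydicom

ds = pydicom.dcmread("reference_mc_dose.dcm")
# RT Dose stores an integer grid plus a scale factor (DoseGridScaling).
dose = ds.pixel_array * float(ds.DoseGridScaling)
print("grid shape:", dose.shape)
print("units:", ds.DoseUnits)  # typically "GY"
print("max dose:", dose.max())
```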
Modeling and Simulation of Neocortical Micro- and Mesocircuitry. Part II: Physiology and Experimentation
James B. Isbister
András Ecker
Christoph Pokorny
Sirio Bolaños-Puchet
Daniela Egas Santander
Alexis Arnaudon
Omar Awile
Natali Barros-Zulaica
Jorge Blanco Alonso
Elvis Boci
Giuseppe Chindemi
Jean-Denis Courcol
Tanguy Damart
Thomas Delemontex
Alexander Dietz
Gianluca Ficarelli
Michael Gevaert
Joni Herttuainen
Genrich Ivaska
Weina Ji
Daniel Keller
James King
Pramod Kumbhar
Samuel Lapere
Polina Litvak
Darshan Mandge
Fernando Pereira
Judit Planas
Rajnish Ranjan
Maria Reva
Armando Romani
Christian Rössert
Felix Schürmann
Vishal Sood
Aleksandra Teska
Anil Tuncel
Werner Van Geit
Matthias Wolf
Henry Markram
Srikanth Ramaswamy
Michael W. Reimann
Cortical dynamics underlie many cognitive processes and emerge from complex multi-scale interactions, which can be studied in large-scale, biophysically detailed models. We present a model comprising eight somatosensory cortex subregions, 4.2 million morphologically and electrically detailed neurons, and 13.2 billion local and long-range synapses. In silico tools enabled reproduction and extension of complex laboratory experiments under a single parameterization, providing strong validation. We reproduced millisecond-precise stimulus responses, stimulus encoding under targeted optogenetic activation, and selective propagation of stimulus-evoked activity to downstream areas. The model’s direct correspondence with biology generated predictions about how multiscale organisation shapes activity. We predict that structural and functional recurrency increases towards deeper layers and that stronger innervation by long-range connectivity increases local correlated activity. The model also predicts the role of inhibitory interneuron types in stimulus encoding, and of different layers in driving layer 2/3 stimulus responses. Simulation tools and a large subvolume of the model are made available.
Raising the Bar for Certified Adversarial Robustness with Diffusion Models
Thomas R. Altstidl
David Dobre
Björn M. Eskofier
Leo Schwinn
Certified defenses against adversarial attacks offer formal guarantees on the robustness of a model, making them more reliable than empirical methods such as adversarial training, whose effectiveness is often later reduced by unseen attacks. Still, the limited certified robustness that is currently achievable has been a bottleneck for their practical adoption. Gowal et al. and Wang et al. have shown that generating additional training data using state-of-the-art diffusion models can considerably improve the robustness of adversarial training. In this work, we demonstrate that a similar approach can substantially improve deterministic certified defenses. In addition, we provide a list of recommendations to scale the robustness of certified training approaches. One of our main insights is that the generalization gap, i.e., the difference between the training and test accuracy of the original model, is a good predictor of the magnitude of the robustness improvement when using additional generated data. Our approach achieves state-of-the-art deterministic robustness certificates on CIFAR-10 for the
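The generalization-gap predictor mentioned above is simple to operationalize; a minimal sketch, assuming hypothetical PyTorch data loaders (the names below are illustrative, not the authors' code):

```python
import torch

def generalization_gap(model, train_loader, test_loader, device="cpu"):
    """Train accuracy minus test accuracy; per the abstract, a useful
    predictor of the robustness gain from adding generated data."""
    def accuracy(loader):
        model.eval()
        correct, total = 0, 0
        with torch.no_grad():
            for x, y in loader:
                pred = model(x.to(device)).argmax(dim=1)
                correct += (pred == y.to(device)).sum().item()
                total += y.numel()
        return correct / total
    return accuracy(train_loader) - accuracy(test_loader)
```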
Responses of pyramidal cell somata and apical dendrites in mouse visual cortex over multiple days
Colleen J Gillon
Jérôme A. Lecoq
Jason E. Pina
Ruweida Ahmed
Yazan N. Billeh
Shiella Caldejon
Peter Groblewski
Timothy M. Henley
India Kato
Eric Lee
Jennifer Luviano
Kyla Mace
Chelsea Nayan
Thuyanh V. Nguyen
Kat North
Jed Perkins
Sam Seid
Matthew T. Valley
Ali Williford
Timothy P. Lillicrap
Joel Zylberberg
Eliminating Space Scanning: Fast mmWave Beam Alignment with UWB Radios
Ju Wang
Xi Chen
Due to their large bandwidth and impressive data speed, millimeter-wave (mmWave) radios are expected to play a key role in the 5G and beyond (e.g., 6G) communication networks. Yet, to release mmWave’s true power, the highly directional mmWave beams need to be aligned perfectly. Most existing beam alignment methods adopt an exhaustive or semi-exhaustive space scanning, which introduces up to seconds of delay. To eliminate the need for complex space scanning, this article presents an Ultra-wideband (UWB)-assisted mmWave communication framework, which leverages the co-located UWB antennas to estimate the best angles for mmWave beam alignment. One major challenge of applying this idea in the real world is the barrier of limited antenna numbers. Commercial-Off-The-Shelf (COTS) devices are usually equipped with only a small number of UWB antennas, which are not enough for the existing algorithms to provide an accurate angle estimation. To solve this challenge, we design a novel Multi-Frequency MUltiple SIgnal Classification (MF-MUSIC) algorithm, which extends the classic MUltiple SIgnal Classification (MUSIC) algorithm to the frequency domain and overcomes the antenna limitation barrier in the spatial domain. Extensive real-world experiments and numerical simulations illustrate the advantage of the proposed MF-MUSIC algorithm. MF-MUSIC uses only three antennas to achieve an angle estimate that differs from the state-of-the-art 16-antenna-based method by a mere 0.15° (a relative difference of 3.6%).
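The paper's frequency-domain extension is not reconstructed here, but a minimal NumPy sketch of the classic MUSIC pseudo-spectrum that MF-MUSIC builds on (assuming a uniform linear array with half-wavelength spacing) looks like this:

```python
import numpy as np

def music_spectrum(snapshots, n_sources, d_over_lambda=0.5,
                   angles_deg=np.linspace(-90, 90, 361)):
    """Classic MUSIC pseudo-spectrum for a uniform linear array.

    snapshots: (n_antennas, n_snapshots) complex received samples.
    """
    m = snapshots.shape[0]
    # Sample covariance of the array output.
    r = snapshots @ snapshots.conj().T / snapshots.shape[1]
    _, eigvecs = np.linalg.eigh(r)        # eigenvalues sorted ascending
    noise = eigvecs[:, : m - n_sources]   # noise subspace
    spectrum = np.empty(angles_deg.shape)
    for i, theta in enumerate(np.deg2rad(angles_deg)):
        a = np.exp(-2j * np.pi * d_over_lambda * np.arange(m) * np.sin(theta))
        spectrum[i] = 1.0 / np.real(a.conj() @ noise @ noise.conj().T @ a)
    return spectrum

# The angle estimate is the spectrum's peak, e.g.:
# theta_hat = angles_deg[np.argmax(music_spectrum(x, n_sources=1))]
```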
On Codex Prompt Engineering for OCL Generation: An Empirical Study
Seif Abukhalaf
Mohammad Hamdaqa
The Object Constraint Language (OCL) is a declarative language that adds constraints and object query expressions to Meta-Object Facility (MOF) models. OCL can provide precision and conciseness to UML models. Nevertheless, the unfamiliar syntax of OCL has hindered its adoption by software practitioners. LLMs, such as GPT-3, have made significant progress in many NLP tasks, such as text generation and semantic parsing. Similarly, researchers have improved performance on downstream tasks by fine-tuning LLMs for the target task. Codex, a GPT-3 descendant by OpenAI, has been fine-tuned on publicly available code from GitHub and has proven able to generate code in many programming languages, powering the AI pair programmer Copilot. One way to take advantage of Codex is to engineer prompts for the target downstream task. In this paper, we investigate the reliability of the OCL constraints generated by Codex from natural language specifications. To achieve this, we compiled a dataset of 15 UML models and 168 specifications from various educational resources. We manually crafted a prompt template with slots populated with the UML information and the target task, in a prefix format that the model completes with the generated OCL constraint. We used both zero- and few-shot learning methods in the experiments. The evaluation is reported by measuring the syntactic validity and the execution accuracy of the generated OCL constraints. Moreover, to get insight into how close or natural the generated OCL constraints are compared to human-written ones, we measured the cosine similarity between the sentence embeddings of the correctly generated and human-written OCL constraints. Our findings suggest that enriching the prompts with the UML information of the models and enabling few-shot learning increases the reliability of the generated OCL constraints. Furthermore, the results reveal a close similarity, based on sentence embeddings, between the generated OCL constraints and the human-written ones in the ground truth, implying a level of clarity and understandability in the OCL constraints generated by Codex.
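As an illustration of the prompt-engineering setup described above, a template with slots for the UML context and the specification can be assembled in prefix format, optionally preceded by few-shot examples. The template wording and helper names below are assumptions for illustration, not the authors' exact artifact:

```python
import numpy as np

# Hypothetical prompt template (assumption): the model is expected to
# complete the prefix with an OCL constraint.
TEMPLATE = (
    "-- UML context:\n{uml}\n"
    "-- Specification: {spec}\n"
    "-- OCL constraint:\n"
)

def build_prompt(uml, spec, few_shot=()):
    """Zero-shot if few_shot is empty; otherwise prepend worked
    (uml, spec, ocl) examples before the final query."""
    shots = "".join(TEMPLATE.format(uml=u, spec=s) + ocl + "\n\n"
                    for u, s, ocl in few_shot)
    return shots + TEMPLATE.format(uml=uml, spec=spec)

def cosine_similarity(u, v):
    """Similarity between sentence embeddings of a generated and a
    human-written constraint, as used in the paper's evaluation."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))
```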
Conditional Permutation Invariant Flows
Berend Zwartsenberg
Adam Ścibior
Matthew Niedoba
Vasileios Lioutas
Yunpeng Liu
Justice Sefas
Setareh Dabiri
Jonathan Wilder Lavington
Trevor Campbell
We present a novel, conditional generative probabilistic model of set-valued data with a tractable log density. This model is a continuous normalizing flow governed by permutation equivariant dynamics. These dynamics are driven by a learnable per-set-element term and pairwise interactions, both parametrized by deep neural networks. We illustrate the utility of this model via applications including (1) complex traffic scene generation conditioned on visually specified map information, and (2) object bounding box generation conditioned directly on images. We train our model by maximizing the expected likelihood of labeled conditional data under our flow, with the aid of a penalty that ensures the dynamics are smooth and hence efficiently solvable. Our method significantly outperforms non-permutation invariant baselines in terms of log likelihood and domain-specific metrics (offroad, collision, and combined infractions), yielding realistic samples that are difficult to distinguish from real data.
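A minimal sketch of the permutation-equivariant structure described above, with elementwise closures standing in for the learnable per-element and pairwise networks (an assumption for illustration):

```python
import numpy as np

def equivariant_dynamics(x, g, h):
    """Set vector field: per-element term g plus mean-aggregated pairwise
    term h. Permuting the rows of x permutes the output rows identically,
    which is the equivariance the flow's ODE dynamics require.
    x: (n, d); g maps (n, d) -> (n, d); h maps (n, n, d) -> (n, n, d)."""
    pairwise = h(x[:, None, :] - x[None, :, :]).mean(axis=1)  # (n, d)
    return g(x) + pairwise

# Toy stand-ins for the deep networks:
x = np.random.randn(5, 2)
out = equivariant_dynamics(x, g=np.tanh, h=np.tanh)
assert out.shape == x.shape
```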
Fast and Attributed Change Detection on Dynamic Graphs with Density of States
Shenyang Huang
Jacob Danovitch
MatSci-NLP: Evaluating Scientific Language Models on Materials Science Language Tasks Using Text-to-Schema Modeling
Yurun Song
Santiago Miret
AfriQA: Cross-lingual Open-Retrieval Question Answering for African Languages
Odunayo Ogundepo
Tajuddeen Gwadabe
Clara E. Rivera
Jonathan H. Clark
Sebastian Ruder
Bonaventure F. P. Dossou
Abdoulahat Diop
Claytone Sikasote
Gilles HACHEME
Happy Buzaaba
Ignatius Ezeani
Rooweither Mabuya
Salomey Osei
Chris Emezue
Albert Kahira
Shamsuddeen Hassan Muhammad
Akintunde Oladipo
Abraham Toluwase Owodunni
Atnafu Lambebo Tonja
Iyanuoluwa Shode
Akari Asai
Tunde Oluwaseyi Ajayi
Clemencia Siro
Stephen Arthur
Mofetoluwa Adeyemi
Orevaoghene Ahia
Aremu Anuoluwapo
Oyinkansola Awosan
Chiamaka Ijeoma Chukwuneke
Bernard Opoku
A. Ayodele
Verrah Akinyi Otiende
Christine Mwase
Boyd Sinkala
Andre Niyongabo Rubungo
Daniel Ajisafe
Emeka Felix Onwuegbuzia
Habib Mbow
Emile Niyomutabazi
Eunice Mukonde
Falalu Lawan
Ibrahim Ahmad
Jesujoba Oluwadara Alabi
Martin Namukombo
Mbonu Chinedu
Mofya Phiri
Neo Putini
Ndumiso Mngoma
Priscilla A. Amuok
Ruqayya Nasir Iro
Sonia Adhiambo
Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models
Aarohi Srivastava
Abhinav Rastogi
Abhishek Rao
Abu Awal Md Shoeb
Abubakar Abid
Adam Fisch
Adam R. Brown
Adam Santoro
Aditya Gupta
Adrià Garriga-Alonso
Agnieszka Kluska
Aitor Lewkowycz
Akshat Agarwal
Alethea Power
Alex Ray
Alex Warstadt
Alexander W. Kocurek
Ali Safaya
Ali Tazarv
Alice Xiang
Alicia Parrish
Allen Nie
Aman Hussain
Amanda Askell
Amanda Dsouza
Ambrose Slone
Ameet Rahane
Anantharaman S. Iyer
Anders Johan Andreassen
Andrea Madotto
Andrea Santilli
Andreas Stuhlmüller
Andrew M. Dai
Andrew La
Andrew Lampinen
Andy Zou
Angela Jiang
Angelica Chen
Anh Vuong
Animesh Gupta
Anna Gottardi
Antonio Norelli
Anu Venkatesh
Arash Gholamidavoodi
Arfa Tabassum
Arul Menezes
Arun Kirubarajan
Asher Mullokandov
Ashish Sabharwal
Austin Herrick
Avia Efrat
Aykut Erdem
Ayla Karakaş
B. Ryan Roberts
Bao Sheng Loe
Barret Zoph
Bartłomiej Bojanowski
Batuhan Özyurt
Behnam Hedayatnia
Behnam Neyshabur
Benjamin Inden
Benno Stein
Berk Ekmekci
Bill Yuchen Lin
Blake Howald
Bryan Orinion
Cameron Diao
Cameron Dour
Catherine Stinson
Cedrick Argueta
Cesar Ferri
Chandan Singh
Charles Rathkopf
Chenlin Meng
Chitta Baral
Chiyu Wu
Chris Callison-Burch
Christopher Waites
Christian Voigt
Christopher D Manning
Christopher Potts
Cindy Ramirez
Clara E. Rivera
Clemencia Siro
Colin Raffel
Courtney Ashcraft
Cristina Garbacea
Damien Sileo
Dan Garrette
Dan Hendrycks
Dan Kilman
Dan Roth
C. Daniel Freeman
Daniel Khashabi
Daniel Levy
Daniel Moseguí González
Danielle Perszyk
Danny Hernandez
Danqi Chen
Daphne Ippolito
Dar Gilboa
David Dohan
David Drakard
David Jurgens
Debajyoti Datta
Deep Ganguli
Denis Emelin
Denis Kleyko
Deniz Yuret
Derek Chen
Derek Tam
Dieuwke Hupkes
Diganta Misra
Dilyar Buzan
Dimitri Coelho Mollo
Diyi Yang
Dong-Ho Lee
Dylan Schrader
Ekaterina Shutova
Ekin Dogus Cubuk
Elad Segal
Eleanor Hagerman
Elizabeth Barnes
Elizabeth Donoway
Ellie Pavlick
Emanuele Rodolá
Emma Lam
Eric Chu
Eric Tang
Erkut Erdem
Ernie Chang
Ethan A Chi
Ethan Dyer
Ethan Jerzak
Ethan Kim
Eunice Engefu Manyasi
Evgenii Zheltonozhskii
Fanyue Xia
Fatemeh Siar
Fernando Martínez-Plumed
Francesca Happé
Francois Chollet
Frieda Rong
Gaurav Mishra
Genta Indra Winata
Gerard de Melo
Germán Kruszewski
Giambattista Parascandolo
Giorgio Mariani
Gloria Xinyue Wang
Gonzalo Jaimovitch-Lopez
Gregor Betz
Guy Gur-Ari
Hana Galijasevic
Hannah Kim
Hannah Rashkin
Hannaneh Hajishirzi
Harsh Mehta
Hayden Bogar
Henry Francis Anthony Shevlin
Hinrich Schuetze
Hiromu Yakura
Hongming Zhang
Hugh Mee Wong
Ian Ng
Isaac Noble
Jaap Jumelet
Jack Geissinger
Jackson Kernion
Jacob Hilton
Jaehoon Lee
Jaime Fernández Fisac
James B Simon
James Koppel
James Zheng
James Zou
Jan Kocon
Jana Thompson
Janelle Wingfield
Jared Kaplan
Jarema Radom
Jascha Sohl-Dickstein
Jason Phang
Jason Wei
Jason Yosinski
Jekaterina Novikova
Jelle Bosscher
Jennifer Marsh
Jeremy Kim
Jeroen Taal
Jesse Engel
Jesujoba Oluwadara Alabi
Jiacheng Xu
Jiaming Song
Jillian Tang
Joan Waweru
John Burden
John Miller
John U. Balis
Jonathan Batchelder
Jonathan Berant
Jörg Frohberg
Jos Rozen
Jose Hernandez-Orallo
Joseph Boudeman
Joseph Guerr
Joseph Jones
Joshua B. Tenenbaum
Joshua S. Rule
Joyce Chua
Joyce Hui Ping Chua
Kamil Kanclerz
Karen Livescu
Karl Krauth
Karthik Gopalakrishnan
Katerina Ignatyeva
Katja Markert
Kaustubh Dhole
Kevin Gimpel
Kevin Omondi
Kristen Chiafullo
Ksenia Shkaruta
Kumar Shridhar
Kyle McDonell
Kyle Richardson
Laria Reynolds
Leo Gao
Li Zhang
Liam Dugan
Lianhui Qin
Lidia Contreras-Ochando
Louis-Philippe Morency
Luca Moschella
Lucas Lam
Lucy Noble
Ludwig Schmidt
Luheng He
Luis Oliveros-Colón
Luke Metz
Lütfi Kerem Senel
Maarten Bosma
Maarten Sap
Maartje Ter Hoeve
Maheen Farooqi
Manaal Faruqui
Mantas Mazeika
Marco Baturan
Marco Marelli
Marco Maru
Maria Jose Ramirez-Quintana
Marie Tolkiehn
Mario Giulianelli
Martha Lewis
Martin Potthast
Matthew L Leavitt
Matthias Hagen
Mátyás Schubert
Medina Orduna Baitemirova
Melody Arnaud
Melvin McElrath
Michael Andrew Yee
Michael Cohen
Michael Gu
Michael Ivanitskiy
Michael Starritt
Michael Strube
Michał Swędrowski
Michele Bevilacqua
Michihiro Yasunaga
Mihir Kale
Mike Cain
Mimee Xu
Mirac Suzgun
Mitch Walker
Mo Tiwari
Mohit Bansal
Moin Aminnaseri
Mor Geva
Mozhdeh Gheini
Mukund Varma T
Nanyun Peng
Nathan Andrew Chi
Nayeon Lee
Neta Gur-Ari Krakover
Nicholas Cameron
Nicholas Roberts
Nick Doiron
Nicole Martinez
Nikita Nangia
Niklas Deckers
Niklas Muennighoff
Nitish Shirish Keskar
Niveditha S. Iyer
Noah Constant
Noah Fiedel
Nuan Wen
Oliver Zhang
Omar Agha
Omar Elbaghdadi
Omer Levy
Owain Evans
Pablo Antonio Moreno Casares
Parth Doshi
Pascale Fung
Paul Pu Liang
Paul Vicol
Pegah Alipoormolabashi
Peiyuan Liao
Percy Liang
Peter W Chang
Peter Eckersley
Phu Mon Htut
Pinyu Hwang
Pi-Bei Hwang
Piotr Miłkowski
Piyush Patil
Pouya Pezeshkpour
Priti Oli
Qiaozhu Mei
Qing Lyu
Qinlang Chen
Rabin Banjade
Rachel Etta Rudolph
Raefer Gabriel
Rahel Habacker
Ramon Risco
Raphaël Millière
Rhythm Garg
Richard Barnes
Rif A. Saurous
Riku Arakawa
Robbe Raymaekers
Robert Frank
Rohan Sikand
Roman Novak
Roman Sitelew
Ronan Le Bras
Rosanne Liu
Rowan Jacobs
Rui Zhang
Russ Salakhutdinov
Ryan Andrew Chi
Seungjae Ryan Lee
Ryan Stovall
Ryan Teehan
Rylan Yang
Sahib Singh
Saif Mohammad
Sajant Anand
Sam Dillavou
Sam Shleifer
Sam Wiseman
Samuel Gruetter
Samuel R. Bowman
Samuel Stern Schoenholz
Sanghyun Han
Sanjeev Kwatra
Sarah A. Rous
Sarik Ghazarian
Sayan Ghosh
Sean Casey
Sebastian Bischoff
Sebastian Gehrmann
Sebastian Schuster
Sepideh Sadeghi
Shadi Hamdan
Sharon Zhou
Shashank Srivastava
Sherry Shi
Shikhar Singh
Shima Asaadi
Shixiang Shane Gu
Shubh Pachchigar
Shubham Toshniwal
Shyam Upadhyay
Shyamolima Shammie Debnath
Siamak Shakeri
Simon Thormeyer
Simone Melzi
Sneha Priscilla Makini
Soo-Hwan Lee
Spencer Torene
Sriharsha Hatwar
Stanislas Dehaene
Stefan Divic
Stefano Ermon
Stella Biderman
Stephanie Lin
Stephen Prasad
Steven Piantadosi
Stuart Shieber
Summer Misherghi
Svetlana Kiritchenko
Swaroop Mishra
Tal Linzen
Tal Schuster
Tao Li
Tao Yu
Tariq Ali
Tatsunori Hashimoto
Te-Lin Wu
Théo Desbordes
Theodore Rothschild
Thomas Phan
Tianle Wang
Tiberius Nkinyili
Timo Schick
Timofei Kornev
Titus Tunduny
Tobias Gerstenberg
Trenton Chang
Trishala Neeraj
Tushar Khot
Tyler Shultz
Uri Shaham
Vedant Misra
Vera Demberg
Victoria Nyamai
Vikas Raunak
Vinay Venkatesh Ramasesh
vinay uday prabhu
Vishakh Padmakumar
Vivek Srikumar
William Fedus
William Saunders
William Zhang
Wout Vossen
Xiang Ren
Xiaoyu Tong
Xinran Zhao
Xinyi Wu
Xudong Shen
Yadollah Yaghoobzadeh
Yair Lakretz
Yangqiu Song
Yasaman Bahri
Yejin Choi
Yichi Yang
Yiding Hao
Yifu Chen
Yonatan Belinkov
Yu Hou
Yufang Hou
Yuntao Bai
Zachary Seid
Zhuoye Zhao
Zijian Wang
Zijie J. Wang
Zirui Wang
Ziyi Wu
Language models demonstrate both quantitative improvement and new qualitative capabilities with increasing scale. Despite their potentially transformative impact, these new capabilities are as yet poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 450 authors across 132 institutions. Task topics are diverse, drawing problems from linguistics, childhood development, math, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers on BIG-bench, across model sizes spanning millions to hundreds of billions of parameters. In addition, a team of human expert raters performed all tasks in order to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; social bias typically increases with scale in settings with ambiguous context, but this can be improved with prompting.