My research is in the area of neural network accelerator systems. Specifically, I
- build neuromorphic, neurosynaptic, and brain-inspired chips and systems,
- program applications for these systems, and
- develop hardware-aware learning algorithms that run efficiently in silicon.
My research is based on the premise that we can compute more efficiently in silicon by studying the architecture of the brain. My work integrates chip architecture, design, and implementation with deep learning, drawing significant inspiration from neurobiology.
IBM Research page
Google Scholar page
Research Biography
I am a principal research scientist and manager in the brain-inspired computing group at IBM Research - Almaden, near San Jose, CA. My research interests are in the domain of neuromorphic, neurosynaptic, and brain-inspired systems, where I have helped build some of the most advanced hardware neural network systems, including NorthPole and TrueNorth.
I was a postdoc in Bioengineering at Stanford University until 2010, where I worked on the Neurogrid project, which aims to provide neuroscientists with a desktop supercomputer for modeling large networks of spiking neurons. I received my Ph.D. in Bioengineering from the University of Pennsylvania in 2006 and my B.S.E. in Electrical Engineering from Arizona State University in 2000.
chip gallery
full publication list
Select Publications
R. Appuswamy, M. DeBole, B. Taba, S. Esser, A. Cassidy, et al. (2024). "Breakthrough low-latency, high-energy-efficiency LLM inference performance using NorthPole". IEEE Conference on High Performance Extreme Computing (HPEC). IBM Research Blog
F. Akopyan, W. Risk, J.V. Arthur, A. Cassidy, M. DeBole, C. Ortega Otero, et al. (2024). "Breakthrough edge AI inference performance using NorthPole in 3U VPX form factor". IEEE Conference on High Performance Extreme Computing (HPEC).
A.S. Cassidy, J.V. Arthur, F. Akopyan, A. Andreopoulos, R. Appuswamy, P. Datta, et al. (2024). "IBM NorthPole: An Architecture for Neural Network Inference with a 12nm Chip". IEEE International Solid-State Circuits Conference (ISSCC).
Invited Paper
D.S. Modha, F. Akopyan*, A. Andreopoulos*, R. Appuswamy*, J.V. Arthur*, A.S. Cassidy*, P. Datta*, M.V. DeBole*, S.K. Esser*, C. Ortega Otero*, J. Sawada*, B. Taba*, A. Amir, D. Bablani, P. Carlson, M. Flickner, R. Gandhasri, G. Garreau, M. Ito, J. Klamo, J. Kusnitz, N. McClatchey, J. McKinstry, Y. Nakamura, T. Nayak, W. Risk, K. Schleupen, B. Shaw, J. Sivagnaname, D. Smith, I. Terrizzano, T. Ueda (2023). "Neural inference at the frontier of energy, space, and time". Science 19 October 2023, 329-35. *These authors contributed equally.
Featured Story
D.S. Modha, F. Akopyan*, A. Andreopoulos*, R. Appuswamy*, J.V. Arthur*, A.S. Cassidy*, P. Datta*, M. V. DeBole*, S.K. Esser*, C. Ortega Otero*, J. Sawada*, B. Taba*, et al (2023). "IBM NorthPole Neural Inference Machine". 2023 IEEE Hot Chips 35 Symposium (HCS). *These authors contributed equally.
S.K. Esser, P.A. Merolla, J.V. Arthur, et al. (2016). "Convolutional Networks for Fast, Energy-Efficient Neuromorphic Computing". PNAS 113(41), 11441-11446 (arXiv preprint).
Featured Commentary
S.K. Esser, R. Appuswamy, P. Merolla, J.V. Arthur, D.S. Modha (2015). "Backpropagation for energy-efficient neuromorphic computing". Adv. Neural Inf. Process. Syst. 28 (NeurIPS). (Spotlight presentation)
Ranked among top ~5% of submissions
P.A. Merolla*, J.V. Arthur*, R. Alvarez-Icaza*, A.S. Cassidy*, J. Sawada*, F. Akopyan*, B.L. Jackson*, N. Imam, C. Guo, Y. Nakamura, B. Brezzo, I. Vo, S.K. Esser, R. Appuswamy, B. Taba, A. Amir, M.D. Flickner, W.P. Risk, R. Manohar, D.S. Modha (2014). "A million spiking-neuron integrated circuit with a scalable communication network and interface". Science 8 August 2014, 668-73. *These authors contributed equally.
Cover plus Feature Story
B.V. Benjamin, P. Gao, E. McQuinn, S. Choudhary, A.R. Chandrasekaran, J. Bussat, R. Alvarez-Icaza, J.V. Arthur, P.A. Merolla, K. Boahen (2014). "Neurogrid: A Mixed-Analog-Digital Multichip System for Large-Scale Neural Simulations". Proceedings of the IEEE.
J.V. Arthur, K.A. Boahen (2011). "Silicon-neuron design: A dynamical systems approach". IEEE Transactions on Circuits and Systems I: Regular Papers, 58(5), 1034-1043.
P. Merolla, J. Arthur, F. Akopyan, N. Imam, R. Manohar, D.S. Modha (2011). "A digital neurosynaptic core using embedded crossbar memory with 45pJ per spike in 45nm". IEEE Custom Integrated Circuits Conference (CICC).
J.V. Arthur, K.A. Boahen (2007). "Synchrony in silicon: The gamma rhythm". IEEE Transactions on Neural Networks, 18(6), 1815-1825.
J.V. Arthur, K.A. Boahen (2006). "Learning in Silicon: Timing is Everything". Adv. Neural Inf. Process. Syst. 18 (NeurIPS). (Oral presentation)
Ranked among top 2% of submissions
J.V. Arthur, K.A. Boahen (2004). "Recurrently Connected Silicon Neurons with Active Dendrites for One-Shot Learning". International Joint Conference on Neural Networks (IJCNN).
See all of my publications
Some Past and Present Colleagues and Collaborators
Paul Merolla (mkone.ai) | Rodrigo Alvarez (Elysium Robotics) | Andrew Cassidy (IBM) | Filipp Akopyan (IBM)
Jun Sawada (IBM) | Bryan Jackson (DE Shaw) | Carlos Ortega Otero (IBM) | Rajit Manohar (Yale)
Dharmendra Modha (IBM) | Kwabena Boahen (Stanford) | Michael DeBole (IBM) | Steve Esser (IBM)
Brian Taba (IBM) | Jean-Marie Bussat