My research is in the area of neuromorphic and neurosynaptic systems. Specifically, I
build neuromorphic and neurosynaptic systems,
program applications for these systems, and
develop hardware-aware learning algorithms that run efficiently in silicon.
My research is based on the premise that we can compute more efficiently in silicon by studying the architecture of the brain. My work integrates chip design and architecture with deep learning, drawing significant inspiration from neurobiology.
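The basic unit in the spiking neural systems mentioned throughout this page is the spiking neuron. As a purely illustrative sketch (the parameters and dynamics below are simplified for exposition and are not the neuron model of any particular chip), a leaky integrate-and-fire neuron integrates its input, leaks charge over time, and emits a spike when it crosses a threshold:

```python
# Minimal leaky integrate-and-fire (LIF) neuron sketch.
# Parameters (leak, threshold) are illustrative only, not taken
# from TrueNorth, Neurogrid, or any other specific system.

def simulate_lif(inputs, leak=0.9, threshold=1.0):
    """Return a binary spike train for a sequence of input currents."""
    v = 0.0            # membrane potential
    spikes = []
    for x in inputs:
        v = leak * v + x       # leaky integration of input
        if v >= threshold:     # threshold crossing -> emit spike
            spikes.append(1)
            v = 0.0            # reset membrane potential
        else:
            spikes.append(0)
    return spikes

print(simulate_lif([0.5, 0.5, 0.5, 0.0, 0.6, 0.6]))
```

Because communication happens only on the sparse spike events rather than on every value at every step, hardware built around such neurons can be far more energy-efficient than conventional dense computation.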
I am a research scientist in the brain-inspired computing group at IBM Research - Almaden, near San Jose, CA. My research interests are in neuromorphic and neurosynaptic systems; I have helped build many of the largest hardware spiking neural systems, including TrueNorth.
I worked as a postdoc in Bioengineering at Stanford University until 2010 on the Neurogrid project, which aims to provide neuroscientists with a desktop supercomputer for modeling large networks of spiking neurons. I received my Ph.D. in Bioengineering from the University of Pennsylvania in 2006 and my B.S.E. in Electrical Engineering from Arizona State University in 2000.
Select Publications
M.V. DeBole, B. Taba, A. Amir, et al. (2019). "TrueNorth: Accelerating From Zero to 64 Million Neurons in 10 Years". IEEE Computer.
J.L. McKinstry, D.R. Barch, D. Bablani, et al. (2018). "Low Precision Policy Distillation with Application to Low-Power, Real-time Sensation-Cognition-Action Loop with Neuromorphic Computing". arXiv:1809.09260.
J.L. McKinstry, S.K. Esser, R. Appuswamy, et al. (2018). "Discovering Low-Precision Networks Close to Full-Precision Networks for Efficient Embedded Inference". arXiv:1809.04191.
S.K. Esser, P.A. Merolla, J.V. Arthur, et al. (2016). "Convolutional Networks for Fast, Energy-Efficient Neuromorphic Computing". PNAS 113 (41), 11441-11446 (arXiv preprint).
S.K. Esser, R. Appuswamy, P. Merolla, J.V. Arthur, D.S. Modha (2015). "Backpropagation for energy-efficient neuromorphic computing". Adv. Neural Inf. Process. Syst. 28 (NIPS). Spotlight presentation, ranked among the top ~5% of submissions.
P.A. Merolla*, J.V. Arthur*, R. Alvarez-Icaza*, A.S. Cassidy*, J. Sawada*, F. Akopyan*, B.L. Jackson*, N. Imam, C. Guo, Y. Nakamura, B. Brezzo, I. Vo, S.K. Esser, R. Appuswamy, B. Taba, A. Amir, M.D. Flickner, W.P. Risk, R. Manohar, D.S. Modha (2014). "A million spiking-neuron integrated circuit with a scalable communication network and interface". Science, 8 August 2014, 668-673. *These authors contributed equally.
Cover plus Feature Story
B.V. Benjamin, P. Gao, E. McQuinn, S. Choudhary, A.R. Chandrasekaran, J. Bussat, R. Alvarez-Icaza, J.V. Arthur, P.A. Merolla, K. Boahen (2014). "Neurogrid: A Mixed-Analog-Digital Multichip System for Large-Scale Neural Simulations". Proceedings of the IEEE.
J.V. Arthur, P.A. Merolla, F. Akopyan, et al. (2012). "Building block of a programmable neuromorphic substrate: A digital neurosynaptic core". Neural Networks (IJCNN), International Joint Conference on.
J.V. Arthur, K.A. Boahen (2011). "Silicon-neuron design: A dynamical systems approach". Circuits and Systems I: Regular Papers, IEEE Transactions on 58(5), 1034-1043.
P. Merolla, J. Arthur, F. Akopyan, N. Imam, R. Manohar, D.S. Modha (2011). "A digital neurosynaptic core using embedded crossbar memory with 45pJ per spike in 45nm". Custom Integrated Circuits Conference (CICC), IEEE.
J.V. Arthur, K.A. Boahen (2007). "Synchrony in silicon: The gamma rhythm". Neural Networks, IEEE Transactions on 18(6), 1815-1825.
J.V. Arthur, K.A. Boahen (2006). "Learning in Silicon: Timing is Everything". Adv. Neural Inf. Process. Syst. 18 (NIPS). Oral presentation, ranked among the top 2% of submissions.
J.V. Arthur, K.A. Boahen (2004). "Recurrently Connected Silicon Neurons with Active Dendrites for One-Shot Learning". Neural Networks (IJCNN), International Joint Conference on.
See all of my publications
Some Past and Present Colleagues and Collaborators