Geoffrey Burr

Neuromorphic Technologies for Next-Generation Cognitive Computing

In a world where the amount of data being created is increasing exponentially, the quest to use data in a productive way is the driving force for cognitive computing – systems that can “learn at scale, reason with purpose, and interact with humans naturally” [1]. Such systems could potentially assist humans in making appropriate yet unbiased decisions, quickly and decisively, within a wide variety of social, health, business, military, scientific, and other important application areas.

IBM Research is developing a Technology Roadmap for Cognitive Computing, focused on improving the performance, capabilities, and energy efficiency of next-generation systems through innovations in algorithms, architectures, circuits, and device technology.

In the near term (the next five years), we anticipate steady improvements in Deep Machine Learning (Deep-ML) systems, enabled by accelerators and approximate-computing techniques implemented with established device technologies. While energy-efficient chips such as TrueNorth [2] have been introduced for low-power forward evaluation of already-trained artificial neural networks, the time-consuming training of such networks must still be performed offline using power-hungry CPUs or GPUs. To address this, we are researching customized hardware for accelerating Deep-ML training, built around large arrays of inherently analog non-volatile memory devices [3-6].
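The appeal of such arrays is that the matrix-vector multiplications dominating neural-network training map directly onto the physics of a crossbar: weights are stored as analog device conductances, and Ohm's and Kirchhoff's laws compute the product in one parallel step. A minimal numerical sketch of that idea (illustrative parameters only, not IBM's actual design):

```python
import numpy as np

rng = np.random.default_rng(0)

# Each signed weight is stored as a pair of device conductances (G+, G-),
# since a physical conductance cannot be negative.
g_plus = rng.uniform(0.0, 1.0, size=(4, 3))   # conductances, arbitrary units
g_minus = rng.uniform(0.0, 1.0, size=(4, 3))
weights = g_plus - g_minus                    # effective signed weight matrix

# Input activations are encoded as voltages applied to the crossbar rows.
x = rng.uniform(-1.0, 1.0, size=3)

# Ohm's law gives a current G*V at each device; Kirchhoff's current law sums
# the currents along each column. The whole matrix-vector product emerges in
# a single analog step, in parallel across all devices.
y_ideal = weights @ x

# Analog devices are imperfect: conductances drift and reads are noisy, so
# training algorithms must tolerate limited precision. A crude noise model:
noisy_weights = weights + rng.normal(0.0, 0.01, size=weights.shape)
y_noisy = noisy_weights @ x

print("ideal: ", y_ideal)
print("noisy: ", y_noisy)
```

The key point of the sketch is the last property: the analog result need only be accurate enough for gradient-based learning to converge, which is what makes imperfect non-volatile memory devices viable as synaptic weights.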

In the far term, we expect to see continuous-learning machines that enable brand-new functionalities. Machine Intelligence (MI) may enable robots that can move among us without explicit programming, operating in the natural world on their own by forming and extending models of the world they perceive around them, and then performing actions to achieve useful yet complex goals. Building on the Hierarchical Temporal Memory (HTM) algorithm developed by Jeff Hawkins [7], IBM Research has begun developing its own improved set of MI algorithms, which we describe as Context-Aware Learning (CAL).

[1] IBM whitepaper, “Computing, cognition, and the future of knowing" (https://ibm.biz/BdHErb)
[2] P. A. Merolla et al., Science 345(6197), 668-673 (2014)
[3] G. W. Burr et al., IEDM 2014, T29.5
[4] G. W. Burr et al., IEEE Trans. Electron Dev. 62(11), 3498 (2015)
[5] G. W. Burr et al., IEDM 2015, T4.4
[6] J.-W. Jang et al., IEEE Electron Dev. Lett. 36(5), 457 (2015)
[7] J. Hawkins and S. Blakeslee, On Intelligence (2004)
 

About the panel member:

Geoffrey W. Burr received his Ph.D. in Electrical Engineering from the California Institute of Technology in 1996 under the supervision of Professor Demetri Psaltis. Since that time, Dr. Burr has worked at IBM Research – Almaden in San Jose, California, where he is currently a Principal Research Staff Member. He has worked in a number of diverse areas, including holographic data storage, photon echoes, computational electromagnetics, and nanophotonics. Dr. Burr's current research interests include storage-class memory, non-volatile memories, and cognitive computing. A Senior Member of IEEE, Geoff is also a member of OSA, SPIE, MRS, Eta Kappa Nu, and Tau Beta Pi, and has served on the Editorial Team for the Emerging Research Devices section of the International Technology Roadmap for Semiconductors since early 2012.