Dan Hammerstrom

Embedded Computing


Embedded computing is becoming an important application space. Complex embedded systems are used in applications ranging from IoT, health care, and security to automobiles. These embedded platforms process large amounts of sensor data. Not only has the volume of sensor data increased exponentially, but the complexity of the required analysis and the subsequent system response has also grown dramatically. Moore's Law is slowing and will end soon, and power scaling (Dennard scaling) has already stopped, significantly limiting the speed and type of analyses that can be performed on power-constrained platforms.

At DARPA I started two programs to address the problem of processing huge quantities of complex sensor data. The first was UPSIDE (Unconventional Processing of Signals for Intelligent Data Exploitation), which leverages analog computation based on probabilistic abstractions. In addition to implementations using analog CMOS, the program is also investigating computation based on the physics of nanoscale devices. UPSIDE processing is non-digital and fundamentally different from current digital processing, so it is not subject to the same power and speed limitations. Systems developed by the UPSIDE program, based on analog CMOS, are demonstrating three orders of magnitude improvement in power efficiency and an additional two orders of magnitude improvement in speed.

UPSIDE allows highly efficient hardware to be used at the front end of sensor data processing. However, converting such data into knowledge also requires complex analysis at the back end of the processing pipeline. The second program I started at DARPA is the Cortical Processor, whose goal is to take machine learning to the next level: rapid, real-time learning and the capture of, and inference over, temporal information and complex structure in the data. These algorithms are then mapped to power-efficient architectures. The approach combines neuroscience-inspired techniques with more traditional machine learning to create new hybrid models.
About the panel speaker:

Dan Hammerstrom was a Program Manager at DARPA from March 2012 to March 2016. He came to DARPA from Portland State University, where he is a Professor in the Electrical and Computer Engineering (ECE) Department.
From 1977 to 1980, Dr. Hammerstrom was an Assistant Professor in the Electrical Engineering Department at Cornell University. In 1980 he joined Intel in Oregon, where he worked on computer architecture and VLSI design. In 1988 he founded Adaptive Solutions, Inc., which specialized in high-performance silicon technology (the CNAPS chip set) for image processing, neural network emulation, and pattern recognition. In 1998 he joined the Oregon Graduate Institute, and he moved to Portland State in 2005.

Dr. Hammerstrom is a Life Fellow of the Institute of Electrical and Electronics Engineers (IEEE). He has been a Visiting Scientist at the Royal Institute of Technology in Stockholm, Sweden, and at the NASA Ames Research Center. He currently holds a joint faculty appointment with Halmstad University in Halmstad, Sweden. He received a PhD in Electrical Engineering from the University of Illinois at Urbana-Champaign in 1977.