NRC Research Associate Programs
Fellowships Office
Policy and Global Affairs



RAP opportunity at the National Institute of Standards and Technology (NIST)

Training, Optimization and Benchmarking of Hardware Neural Networks

Location

Physical Measurement Laboratory, Applied Physics Division

Opportunity: 50.68.62.C0834
Location: Boulder, CO

NIST only participates in the February and August reviews.

Advisers

Name                     Email                       Phone
Sonia Mary Buckley       sonia.buckley@nist.gov      303-497-6639
Adam Nykoruk McCaughan   adam.mccaughan@nist.gov     303-497-5487

Description

Machine learning algorithms, such as deep learning, have revolutionized our ability to solve pattern recognition and other traditionally "human" problems. However, such algorithms still do not capture all of the attributes of intelligence, and they are power- and resource-hungry when implemented on traditional computers. These issues have led engineers to develop new hardware for AI based on a diverse set of emerging devices, including photonic, memristive, magnetic, and superconducting technologies. Many of these systems are designed around analog or mixed-signal processing rather than digital processing, greatly increasing operating speed and energy efficiency. Despite these important advances, significant research challenges remain before these hardware platforms can be adopted. One of the biggest is the incompatibility of the new hardware with traditional training algorithms such as backpropagation. The goal of this project is to develop and demonstrate a general training technique that can be implemented natively across this diversity of hardware neural networks [1]. Research opportunities include implementing physical models of different hardware platforms, extending the technique to spiking hardware such as Intel's Loihi or to commercial FPGAs, and implementing newly developed continual-learning and few-shot-learning benchmark tasks [2].

 

[1] A. N. McCaughan, B. G. Oripov, N. Ganesh, S. W. Nam, A. Dienstfrey, and S. M. Buckley, “Multiplexed gradient descent: Fast online training of modern datasets on hardware neural networks without backpropagation,” APL Mach. Learn. 1, 026118 (2023).

 

[2] J. Yik et al., "NeuroBench: Advancing neuromorphic computing through collaborative, fair and representative benchmarking," arXiv:2304.04640 (2023).
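
For intuition, here is a minimal sketch, not code from [1], of the general perturbation-based idea behind backpropagation-free training such as multiplexed gradient descent: perturb all weights simultaneously, measure the change in a single global cost signal, and correlate that change with each weight's perturbation to estimate the gradient. The toy least-squares problem, the SPSA-style sign perturbations, and all parameter values below are illustrative assumptions, written in NumPy rather than for any particular hardware platform.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy problem (illustrative only): fit a linear map W to data (X, y).
    X = rng.normal(size=(64, 8))          # inputs
    W_true = rng.normal(size=(8, 1))      # ground-truth weights
    y = X @ W_true                        # targets

    W = np.zeros((8, 1))                  # trainable weights
    eps, lr = 1e-3, 1e-2                  # perturbation amplitude, learning rate

    def cost(W):
        # In hardware this would be a measured scalar, not a computed one.
        return float(np.mean((X @ W - y) ** 2))

    for step in range(5000):
        # Perturb every weight at once with a random +/- eps sign pattern.
        delta = rng.choice([-1.0, 1.0], size=W.shape)
        dC = cost(W + eps * delta) - cost(W - eps * delta)
        # Correlating the global cost change with each weight's perturbation
        # yields a stochastic (SPSA-style) estimate of the gradient.
        g_hat = dC / (2 * eps) * delta
        W -= lr * g_hat                   # ordinary gradient-descent step

    print(f"final MSE: {cost(W):.6f}")    # approaches 0 on this toy problem

Because the only feedback required is a scalar cost, the same loop applies when the forward pass is a physical analog, photonic, or superconducting system rather than a simulation, which is what makes this family of methods attractive for the hardware platforms described above.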

Keywords
AI; hardware for AI; spiking neural networks; semiconductors; machine learning; lifelong learning

Eligibility

Citizenship: Open to U.S. citizens
Level: Open to Postdoctoral applicants

Stipend

Base Stipend: $82,764.00
Travel Allotment: $3,000.00
Supplementation: none listed