Anup Das

(He/Him)

Associate Professor, Electrical and Computer Engineering Department

Hello! I am an Associate Professor in the Department of Electrical and Computer Engineering at Drexel University, where I direct the Distributed, Intelligent and Scalable COmputing (DISCO) Lab. I received my PhD in Embedded Systems from the National University of Singapore in 2014. I was a researcher in the Neuromorphic Computing Group at IMEC, Netherlands (2015 - 2017) and a post-doctoral fellow in the School of Electronics and Computer Science (ECS) at the University of Southampton, UK (2014 - 2015). Between 2004 and 2011, I worked at LSI Corporation and STMicroelectronics as a senior IC design engineer.

My research interests are in hardware-software co-design for neuromorphic and in-memory/near-memory computing. I am a Senior Member of the IEEE and a Member of the ACM.

download cv

Selected Publications:

Real-Time Scheduling of Machine Learning Operations on Heterogeneous Neuromorphic SoC [GitHub Code]

Learning in Feedback-driven Recurrent Spiking Neural Networks using full-FORCE Training [GitHub Code]

Education.

Research.

Spike-Based Learning Algorithms

In this project, we are developing biology-inspired algorithms to train spiking neural networks that are deployed in computer vision and bio-signal processing applications.

Compared to learning rules for analog and rate-coded networks, spike-based learning rules remain limited. In our recent work, we developed a new learning algorithm, called full-FORCE, to train reservoir computing architectures built from recurrent spiking neural networks. The proposed training procedure generates targets for both the recurrent reservoir and readout layers, then uses the recursive least squares-based First-Order and Reduced Control Error (FORCE) algorithm to fit the activity of each layer to its target. We use full-FORCE to model many dynamical systems. [paper][code]
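To illustrate the core mechanism, the recursive least squares (RLS) step at the heart of FORCE-style training can be sketched as follows. This is a minimal sketch of the standard RLS weight update, not the lab's full-FORCE implementation; all variable names are ours.

```python
import numpy as np

def force_rls_step(P, w, r, target):
    """One FORCE-style recursive least squares update.

    P      : running inverse-correlation matrix of the rates (N x N)
    w      : readout weights (N,)
    r      : current vector of filtered firing rates (N,)
    target : desired readout value at this time step
    Returns the updated (P, w) and the pre-update readout z.
    """
    z = w @ r                      # current readout
    k = P @ r                      # gain direction
    c = 1.0 / (1.0 + r @ k)        # normalization factor
    P = P - c * np.outer(k, k)     # rank-1 update of P
    w = w - c * (z - target) * k   # reduce the readout error
    return P, w, z
```

Iterating this update while the reservoir runs drives the readout error toward zero; full-FORCE additionally generates targets for the recurrent layer itself, not just the readout.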

Design of System Software to Facilitate Real-Time Neuromorphic Computing, NSF RTML Award, 2019-2023
Effect of Cyclical Intermittent Hypoxia on Lung Cancer Progression, Miami VA, 2022-2023

Neuromorphic Compiler and Run-time

In this project, we are developing the system software to compile and run machine learning codes on many-core neuromorphic hardware.

Executing a program on a computer involves several steps: compilation, resource allocation, and run-time mapping. Although these steps are well defined for mainstream computers, no prior work has investigated them in a systematic manner for neuromorphic systems. We are developing compiler toolchains to translate a user's machine learning program into low-level languages that can be interpreted by neuromorphic systems. We are developing a common representation across different platforms, a resource optimization strategy to improve program performance, and an Operating System-like framework that will allow programmers to easily deploy machine learning programs on neuromorphic systems. In our recent work, we have developed a real-time scheduler to schedule machine learning applications, either individually or concurrently, on a heterogeneous neuromorphic SoC. [paper][code]
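To make the scheduling step concrete, here is a toy greedy scheduler that assigns each operation to whichever core would finish it earliest. It is a hypothetical illustration of heterogeneous scheduling, not the scheduler from the paper; the core names, relative-speed model, and cost function are all assumptions.

```python
def schedule(ops, cores):
    """Greedily map operations onto heterogeneous cores.

    ops   : list of (op_name, work) pairs, in release order
    cores : dict mapping core_name -> relative speed (work units per time unit)
    Returns the placement list [(op, core, start, finish)] and the makespan.
    """
    free = {name: 0.0 for name in cores}  # time at which each core becomes idle
    plan = []
    for op, work in ops:
        # pick the core with the earliest finish time for this operation
        best = min(cores, key=lambda c: free[c] + work / cores[c])
        start = free[best]
        free[best] = start + work / cores[best]
        plan.append((op, best, start, free[best]))
    return plan, max(free.values())
```

For example, with cores = {"big": 2.0, "little": 1.0} and ops = [("conv1", 4), ("conv2", 4), ("fc", 2)], both convolutions land on the big core and the fully-connected layer on the little core, for a makespan of 4.0 time units.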

Design of System Software to Facilitate Real-Time Neuromorphic Computing, NSF RTML Award, 2019-2023
Architecting the Hardware-Software Interface for Neuromorphic Computers, DOE CAREER Award, 2021-2026

Hardware-Software Co-Design

In this project, we are using hardware-software co-design principles to optimize the software and hardware stacks for neuromorphic systems.

Hardware-software co-design is a system design paradigm where system-level objectives such as cost, performance, power, and reliability are met by exploiting the synergism of hardware and software through their concurrent design and optimization. As in many electronic system designs, we are using hardware-software co-design to optimize the software and hardware for neuromorphic systems. In our recent work, we propose NeuroXplorer, a hardware-software co-design framework for implementing SNNs on neuromorphic hardware. The key idea of NeuroXplorer is to jointly optimize the system software and hardware, including the number of cores, the number of neurons per core, the synaptic capacity of each core, the interconnect configuration, and the routing algorithm for a given SNN application. [paper]
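The kind of design-space sweep such a framework performs can be sketched, in highly simplified form, as a search over candidate hardware configurations. This sketch considers only the core count and neurons per core against an abstract cost function; the real framework's parameters (synaptic capacity, interconnect, routing) are far richer, and all names here are ours.

```python
from itertools import product

def explore(snn_neurons, core_counts, neurons_per_core, cost):
    """Return the cheapest (cost, cores, npc) configuration that can
    host an SNN with `snn_neurons` neurons, or None if none fits.

    core_counts, neurons_per_core : candidate values to enumerate
    cost : callable (cores, npc) -> scalar hardware cost
    """
    best = None
    for cores, npc in product(core_counts, neurons_per_core):
        if cores * npc < snn_neurons:
            continue  # not enough physical neurons for this SNN
        c = cost(cores, npc)
        if best is None or c < best[0]:
            best = (c, cores, npc)
    return best
```

With, say, a 1000-neuron SNN and total silicon neurons as the cost, the search discards infeasible configurations and keeps the cheapest feasible one.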

Software Infrastructure for Programming and Architectural Exploration of Neuromorphic Computing Systems, NSF CSSI Award, 2022-2026
Facilitating Dependable Neuromorphic Computing: Vision, Architecture, and Impact on Programmability, NSF CAREER Award, 2020-2025

Many-Core Neuromorphic Hardware Development

In this project, we are exploring hardware architectures that mimic the functionality of a human brain and prototype such architectures on FPGA.

Neuromorphic systems are designed as many-core architectures, where each core can implement a fixed number of neurons and synapses. Neuromorphic cores are interconnected using an on-chip interconnect such as a Network-on-Chip (NoC). In our recent work, we have developed a heterogeneous many-core hardware design with big and little cores to map different machine learning models. For a neuromorphic core, we have explored crossbar designs, where synaptic weights are stored in non-volatile memory elements such as Phase-Change Memory (PCM) and Oxide-Based Resistive RAM (RRAM). We are also exploring other design alternatives such as a micro-brain, where each core has three layers of fully-connected feedforward neurons and a fixed number of lateral connections. [paper]
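A crossbar core of this kind can be modeled, very roughly, as a matrix-vector multiply whose weights are first quantized to the limited number of conductance levels a PCM or RRAM cell can store. The sketch below is a behavioral model for illustration only, not the hardware design; the number of levels and the uniform quantization scheme are assumptions.

```python
import numpy as np

def crossbar_mvm(weights, spikes, levels=16):
    """Behavioral model of an analog crossbar core.

    weights : (inputs x outputs) synaptic weight matrix
    spikes  : binary input spike vector (length = inputs)
    levels  : number of programmable conductance levels per cell
    Each weight is snapped to the nearest of `levels` uniform steps
    (modeling limited PCM/RRAM precision), then column currents are
    accumulated -- the analog dot product the crossbar performs.
    """
    w_max = np.abs(weights).max()
    step = w_max / (levels - 1)
    q = np.round(weights / step) * step   # quantize to device levels
    return q.T @ spikes                   # per-output accumulated current
```

Sweeping `levels` in such a model gives a quick, first-order view of how limited device precision degrades inference accuracy before committing to a detailed circuit simulation.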

Software Framework for SNN on FPGA, Accenture, 2022-2024
Facilitating Dependable Neuromorphic Computing: Vision, Architecture, and Impact on Programmability, NSF CAREER Award, 2020-2025

I am interested in research that intersects computer architecture and machine learning.

Dependable Neuromorphic Computing

In this project, we are improving the dependability of neuromorphic hardware through software and hardware based optimization techniques.

This project addresses broad research questions with far-reaching implications in dependable neuromorphic computing: What are the reliability issues in neuromorphic architectures, and how can they be modeled? How do these reliability issues manifest as errors and impact the performance of machine learning algorithms? How can error tolerance in these algorithms be improved by exploiting the error resilience and self-repair properties of the brain, and how can reliability issues in neuromorphic architectures be proactively mitigated to avoid errors in the first place? [paper]

Facilitating Dependable Neuromorphic Computing: Vision, Architecture, and Impact on Programmability, NSF CAREER Award, 2020-2025
Online Performance Monitoring of Neuromorphic Services, NSF CNS Award, 2021-2024

Publications.

Title: Real-Time Scheduling of Machine Learning Operations on Heterogeneous Neuromorphic SoC [GitHub Code]

Author: A. Das

Conference: 20th ACM-IEEE International Conference on Formal Methods and Models for System Design (MEMOCODE)

Date: October 13-14, 2022

Title: Built-In Functional Testing of Analog In-Memory Accelerators for Deep Neural Networks

Author: A. Mishra, A. Das and N. Kandasamy

Journal: Electronics, 11, 2592

Date: August, 2022

Contact.

anup(dot)das(at)drexel(dot)edu
skype: anup_lsic
(215) 895 2847
  • DISCO Lab,
  • Electrical and Computer Engineering Department,
  • Drexel University
  • 3101 Market Street, Suite 236,
  • Philadelphia, PA 19104, USA