Sunday, 21 May 2023

Full Day Tutorial

  • Practical Ising Machines for Solving Hard Discrete Optimization Problems

    Over the last decade, techniques for solving difficult (NP-complete and NP-hard) optimization problems using specialized analog hardware have emerged. Such approaches leverage analog dynamics and physics to find good solutions to discrete optimization problems, potentially much faster than traditional algorithms. Classical digital optimization approaches, especially implementations on specialized hardware, have also enjoyed a resurgence. This tutorial, which features leading researchers in the area, will provide a detailed introduction to the field, summarize the current state of the art, and outline future directions.

    • Hitachi R&D Group, Japan

    • Stanford University, USA

    • University of California at Berkeley, USA

    • Yale University, USA
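
    The problems these machines target are typically cast as minimizing an Ising energy E(s) = -sum_{i<j} J_ij s_i s_j - sum_i h_i s_i over spins s_i in {-1, +1}. As a purely illustrative software baseline (not any specific hardware covered in the tutorial), here is a simulated-annealing sketch in Python:

```python
import math
import random

def ising_energy(spins, J, h):
    """E(s) = -sum_{i<j} J[i][j]*s_i*s_j - sum_i h[i]*s_i."""
    n = len(spins)
    e = -sum(h[i] * spins[i] for i in range(n))
    for i in range(n):
        for j in range(i + 1, n):
            e -= J[i][j] * spins[i] * spins[j]
    return e

def anneal(J, h, steps=5000, t_hot=2.0, t_cold=0.01, seed=0):
    """Single-spin-flip simulated annealing with geometric cooling."""
    rng = random.Random(seed)
    n = len(h)
    spins = [rng.choice([-1, 1]) for _ in range(n)]
    e = ising_energy(spins, J, h)
    for k in range(steps):
        t = t_hot * (t_cold / t_hot) ** (k / (steps - 1))
        i = rng.randrange(n)           # propose one spin flip
        spins[i] = -spins[i]
        e_new = ising_energy(spins, J, h)
        if e_new <= e or rng.random() < math.exp((e - e_new) / t):
            e = e_new                  # accept the flip
        else:
            spins[i] = -spins[i]       # reject: undo the flip
    return spins, e
```

    For a 4-spin ferromagnetic chain (J = 1 between neighbors, h = 0), the annealer settles into one of the two fully aligned ground states with energy -3; analog Ising machines aim to reach such minima far faster by letting physics perform the descent.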

Half Day Tutorials

  • Charge and Current Sensing Analog Design: from Front-End to In-Sensor Machine Learning

    This tutorial will review the fundamentals of sizing the analog front-end and signal acquisition chain for miniaturized solid-state sensors read out in charge and current mode. Circuit topologies from basic to advanced will be presented, with special focus on CMOS technologies and on key tradeoffs such as noise (pA) vs. bandwidth (MHz) and capacitance (nF) vs. speed (ns) and stability. Requirements and solutions from different application domains (capacitive MEMS, biosensors, radiation detectors) will be discussed from a unified point of view. Charge-mode analog design can also provide an alternative route to digital accelerators for area- and energy-efficient machine learning embedded in the sensor.
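
    The capacitance tradeoff mentioned above can be made concrete with a back-of-the-envelope charge-amplifier calculation. The numbers below are illustrative and not taken from the tutorial: the ideal output step is Q/C_f, while the kT/C-limited equivalent noise charge (ENC) grows as sqrt(kTC), so shrinking C_f raises the gain but tightens speed and stability margins.

```python
import math

# Illustrative numbers only, not from the tutorial.
q_in = 1e-15                     # 1 fC input charge packet
c_f = 100e-15                    # 100 fF feedback capacitor

v_out = q_in / c_f               # ideal charge-amplifier output step: Q / C_f

k_B, T = 1.380649e-23, 300.0     # Boltzmann constant, room temperature
v_noise = math.sqrt(k_B * T / c_f)           # kT/C noise voltage (rms)
enc = math.sqrt(k_B * T * c_f) / 1.602e-19   # equivalent noise charge, in electrons

print(f"{v_out * 1e3:.1f} mV step, {v_noise * 1e6:.0f} uV rms, ENC ~ {enc:.0f} e-")
```

    With these values the output step is 10 mV and the kT/C-limited ENC is on the order of 130 electrons; the sqrt(kTC) dependence is one face of the capacitance-vs-noise tradeoff the tutorial addresses.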

  • How to Build Open-Source Neuromorphic Hardware and Algorithms

    The brain is the perfect place to look for inspiration to develop more efficient neural networks. While the cost of training large-scale deep learning models runs into millions of dollars, our brains are somehow equipped to process an abundance of signals from our sensory periphery within a power budget of approximately 10-20 watts. Much of the brain's incredible efficiency can be attributed to how biological neurons encode data in the time domain as spiking action potentials.

    This tutorial will take a hands-on approach to training spiking neural networks (SNNs) and to designing neuromorphic accelerators that can process these models. With the advent of open-source neuromorphic training libraries and electronic design automation tools, we will conduct hands-on coding sessions to train SNNs, and attendees will subsequently design a lightweight neuromorphic accelerator in the SKY130 process. Participants will leave with practical skills for applying principles of neuroscience to deep learning and hardware acceleration in building the next generation of machine intelligence.

    • UC Santa Cruz, USA

    • Delft University of Technology, Netherlands
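
    The time-domain spike encoding described above is usually modeled with a leaky integrate-and-fire (LIF) neuron, the building block most open-source SNN training libraries expose. A minimal plain-Python sketch (illustrative only, not taken from the tutorial materials):

```python
def lif_neuron(inputs, beta=0.9, threshold=1.0):
    """Discrete-time leaky integrate-and-fire neuron.

    Membrane update: U[t] = beta * U[t-1] + I[t].
    When U crosses the threshold, the neuron emits a spike and the
    membrane is reduced by the threshold (soft reset).
    """
    u, spikes = 0.0, []
    for current in inputs:
        u = beta * u + current           # leaky integration
        if u >= threshold:
            spikes.append(1)
            u -= threshold               # soft reset after firing
        else:
            spikes.append(0)
    return spikes
```

    Driving it with a constant sub-threshold current illustrates rate coding: lif_neuron([0.3] * 5) integrates for three steps and fires on the fourth.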

  • The Impact of Climate Change on Agriculture, and of Agriculture on Climate Change: CAS as Enabling Technology for Mitigating Them

    The first part of the tutorial will present how climate change is negatively affecting agricultural production, putting future food security at risk and causing substantial economic losses across the agri-food chain.

    Climate change is contributing substantially to food insecurity: food prices are rising, while production and quality decrease in the wake of destructive weather events. In addition, climate change increases energy prices and raises the water consumption of crops due to drought. Competition for land may rise as some areas become climatically unsuitable for production.

    • Synopsys, Chile

    • Politecnico di Torino, Italy

  • CMOS Circuit Techniques for Wireline Transmitters Operating at 112G and Higher

    Hyperscale data centers are going through a paradigm shift as technologies like Artificial Intelligence (AI) and edge compute require them to support exponential growth in data volume. This volume of network traffic demands an increase in bandwidth to 400G, now enabled by 112G Ethernet as the interconnect of choice, with next-generation architectures being designed to operate at 224 Gbps. These data rates pose extreme challenges for the entire transceiver. This tutorial examines the challenges posed on the transmitter and discusses techniques used in its various blocks to overcome the challenges of data transmission at 100s of Gbps.

    • Synopsys, Canada

    • Synopsys, Canada
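
    One representative transmitter technique at these rates is feed-forward equalization (FFE), which pre-distorts the symbol stream to compensate for channel loss. A minimal 3-tap (pre-, main-, post-cursor) sketch with illustrative tap weights, not values from the tutorial:

```python
def tx_ffe(symbols, c_pre=-0.1, c_main=0.8, c_post=-0.1):
    """3-tap TX feed-forward equalizer:
    y[n] = c_pre * x[n+1] + c_main * x[n] + c_post * x[n-1].
    Tap magnitudes sum to 1 here, modeling a peak-power constraint.
    """
    x = [0] + list(symbols) + [0]        # zero-pad the boundaries
    return [c_pre * x[n + 2] + c_main * x[n + 1] + c_post * x[n]
            for n in range(len(symbols))]
```

    An isolated '1' comes out as the tap vector itself: tx_ffe([0, 1, 0]) yields [-0.1, 0.8, -0.1], the pre/post-cursor de-emphasis that sharpens the pulse seen at the receiver.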

  • Linear Periodically Time-Varying Systems: Analysis and Applications

    Advances in circuit design techniques over the last few decades and the ubiquitous presence of sampling and analog-to-digital conversion mean that nearly all analog front-end chains are really linear periodically time-varying (LPTV) systems. Linear time-invariant (LTI) systems are taught in basic undergraduate courses, and the principles therein are widely used by engineers for the analysis of circuits and systems. The corresponding principles and analysis techniques for LPTV systems are inaccessible to most engineers, largely because of difficult notation. Usually one ends up analyzing an equivalent LTI system that represents the average behavior of the LPTV system. While such representations may be adequate for circuits like choppers and low-bandwidth PLLs, they are necessarily incomplete and may miss important phenomena such as the translation of signals and noise from one frequency to another.

    • IIT Madras, India

    • IIT Madras, India
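
    The frequency translation that an averaged LTI model misses is easy to demonstrate numerically. Chopping a tone at bin 10 with a ±1 square wave at bin 32 moves its energy to bins 22 and 42 (plus odd-harmonic images), leaving nothing at the original frequency. This NumPy sketch is illustrative, not code from the tutorial:

```python
import numpy as np

N = 256
n = np.arange(N)
f_sig, f_chop = 10, 32                  # cycles per record
x = np.cos(2 * np.pi * f_sig * n / N)   # input tone at bin 10

# +/-1 square wave at bin 32; the half-sample offset avoids sampling
# the wave exactly at its zero crossings
chop = np.sign(np.cos(2 * np.pi * f_chop * (n + 0.5) / N))

X = np.abs(np.fft.rfft(x * chop)) / N   # spectrum of the chopped signal
```

    The dominant bins of X are 22 and 42 (f_chop -/+ f_sig), while X[10] is zero to numerical precision: the tone has been translated, not merely scaled, which no single LTI transfer function can describe.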

  • Memristive Digital Processing-in-Memory

    Memristive technologies are candidates to replace conventional memory technologies and storage-class memories, and they are also widely explored for neuromorphic applications. This tutorial focuses on a different attractive capability of memristors: their ability to perform logic and arithmetic operations using stateful logic techniques. With stateful logic, data storage and computation can be combined in the memory array to enable a novel non-von Neumann architecture in which both operations are performed within a memristive Memory Processing Unit (mMPU).
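
    As a behavioral illustration of stateful logic (the resistance-switching physics is abstracted away here), consider the NOR gate used in MAGIC-style designs: the output memristor is initialized to logic '1' in one cycle, then conditionally switched to '0' by the input states in the next. Since NOR is functionally complete, any logic function can be composed this way:

```python
def stateful_nor(a, b):
    """Behavioral model of a stateful NOR gate: the output memristor
    starts at logic '1' (initialization cycle) and is switched to '0'
    during the execution cycle iff either input memristor stores '1'."""
    out = 1                  # initialization cycle
    if a == 1 or b == 1:     # execution cycle: conditional switching
        out = 0
    return out

# NOR is universal, so further gates compose from it alone:
def stateful_not(a):
    return stateful_nor(a, a)

def stateful_or(a, b):
    return stateful_not(stateful_nor(a, b))
```

    In an mMPU, each such gate is a sequence of voltage pulses applied to rows and columns of the crossbar, so the truth table above is evaluated in place, without moving the operands out of memory.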

  • Brain-Inspired Hardware and Algorithm Co-Design for Low-Power Online Training on the Edge

    Our digital society is shifting to an era of pervasive and specialized edge-computing systems. Deep learning (DL) is supporting this revolution by enabling unprecedented performance for a wide range of pattern classification and regression applications. However, the conventional von Neumann hardware architecture and training algorithms are not optimally suited to the low-power and real-time requirements of edge-computing devices.

    Event-based neuromorphic technologies aim to overcome this problem by removing the computational abstractions and better exploiting the physics of the substrate while running online algorithms that require spatially and temporally local weight updates.

    This tutorial will introduce a co-design approach between recent event-based algorithms, scalable emerging memory devices, and circuits that together holistically fulfill these requirements. Specifically, it will focus on how to exploit the physics of resistive memory to implement event-driven gradient-based quantization-aware local learning on-chip. This includes methods to increase the bit-resolution of memristive devices, implement solutions for solving temporal credit assignment, and derive local error-based learning rules.
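
    A generic example of the kind of spatially and temporally local update such schemes build on is a three-factor rule with an eligibility trace: each synapse combines its own trace of pre-synaptic activity, the post-synaptic activity, and a broadcast scalar error. The sketch below is a generic illustration, not the tutorial's specific algorithm:

```python
def synapse_step(w, trace, pre_spike, post_spike, error, decay=0.9, lr=0.1):
    """One time step of a generic three-factor local learning rule.

    trace: synapse-local eligibility trace of pre-synaptic spikes.
    The weight update uses only quantities available at the synapse
    (trace, post-synaptic activity) plus one broadcast error scalar,
    so no global weight transport or backpropagation-through-time
    is required.
    """
    trace = decay * trace + pre_spike          # temporally local memory
    w = w + lr * error * trace * post_spike    # spatially local update
    return w, trace
```

    Because the trace decays between events, credit can be assigned to a pre-synaptic spike even when the error signal arrives a few time steps later, which is one common route to the temporal credit assignment problem mentioned above.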

  • Emerging ML-AI Techniques for Analog EDA

    In recent years, machine learning has been extensively applied to the modeling and optimization of integrated circuits. While learning techniques are seamlessly added to existing digital synthesis flows, learning-based analog EDA faces more challenges and lags behind its digital counterpart, because analog design is usually performed in a customized, manual fashion.

    The objective of this tutorial is to provide an overview of recent progress in applying machine learning to analog EDA. State-of-the-art learning and optimization techniques for the modeling and design of analog ICs are presented and discussed, along with practical considerations, challenges, and opportunities of ML for analog EDA.
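
    To make the problem setting concrete, analog sizing can be framed as black-box optimization of a simulator-evaluated figure of merit. The sketch below uses a hypothetical simulate() stand-in (a toy quadratic; a real flow would invoke a circuit simulator) and plain random search, the naive baseline that the learning-based methods surveyed in this tutorial improve upon:

```python
import random

def simulate(params):
    """Hypothetical stand-in for a circuit simulation: returns a figure
    of merit for a candidate sizing. Here a toy quadratic with its
    optimum at (w, l) = (2.0, 0.5); in practice this would be a SPICE
    run evaluating gain, bandwidth, power, etc."""
    w, l = params
    return -((w - 2.0) ** 2 + (l - 0.5) ** 2)

def random_search(n_iter=2000, seed=1):
    """Naive black-box sizing loop: sample candidates, keep the best."""
    rng = random.Random(seed)
    best, best_fom = None, float("-inf")
    for _ in range(n_iter):
        cand = (rng.uniform(0.0, 5.0), rng.uniform(0.0, 2.0))
        fom = simulate(cand)
        if fom > best_fom:
            best, best_fom = cand, fom
    return best, best_fom
```

    Learning-based flows replace this blind sampling with surrogate models or learned policies that decide which candidate to simulate next, cutting the number of expensive simulator calls.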

  • Analysis and Design of Bio-Inspired Circuits with Locally-Active Memristors

    As established by the second law of thermodynamics, an isolated system cannot support complex phenomena. Conversely, a system that exchanges energy with its environment may exhibit complex behaviors, provided some of its constitutive components can amplify infinitesimal fluctuations in energy under suitable polarization, a property known as local activity [1]. The theory of local activity [2] explains the mechanisms underlying complex phenomena in any open physical system, including the emergence of an all-or-none spike in the axon membrane of a neuron [3] and the symmetry-breaking phenomena appearing in homogeneous reaction-diffusion networks from cellular biology [4]. The existence of solid-state memristor devices [5] which, like the sodium and potassium ion channels [6] of a biological axon membrane, may operate in the local activity domain under appropriate bias conditions [7]-[8], opens up new opportunities to synthesise circuits and systems that, operating according to biological principles, may outperform traditional computing structures in time and energy efficiency [9]. This tutorial aims to shed light on the precious role that nonlinear circuit and system theory [10]-[11] will assume in the years to come in supporting circuit designers as they explore the full potential of locally-active memristors in bio-inspired electronics. In particular, it will be shown how resorting to this theory, and especially to the concepts of local activity and nonlinear dynamics [12], is fundamental both for gaining a thorough understanding of the mechanisms underlying the rich dynamics of these devices and for developing systematic methods to design bio-inspired circuits with locally-active memristors.

    • Technische Universität Dresden, Germany

    • TU Dresden, Germany
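
    A concrete entry point to local activity is its DC special case: an operating point where the differential conductance dI/dV is negative (negative differential resistance, NDR) is locally active, since the device can there amplify small fluctuations. The toy N-shaped I-V curve below is illustrative only, not a device model from the tutorial:

```python
def di_dv(i_of_v, v, dv=1e-6):
    """Numerical differential conductance dI/dV at bias point v."""
    return (i_of_v(v + dv) - i_of_v(v - dv)) / (2 * dv)

def cubic_iv(v):
    """Toy N-shaped I-V characteristic with an NDR region
    (dI/dV = 3v^2 - 1 < 0 for |v| < 1/sqrt(3)); illustrative only."""
    return v ** 3 - v

# Sweep the bias range and collect the locally-active (NDR) points
ndr_points = [v / 100 for v in range(-200, 201)
              if di_dv(cubic_iv, v / 100) < 0]
```

    Biasing a device inside such a region, together with reactive elements, is what allows a memristor circuit to sustain oscillations or spikes instead of merely dissipating energy; the general frequency-domain local-activity test extends this idea beyond DC.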

  • Material and Physical Reservoir Computing for Beyond-CMOS Electronics

    Traditional computing is based on an engineering approach that imposes logical states and a computational model upon a physical substrate. Physical or material computing, on the other hand, harnesses and exploits the inherent, naturally occurring properties of a physical substrate to perform a computation. To do so, reservoir computing is often used as a computing paradigm. In this tutorial, you will learn what reservoir computing is and how to use it for computing with emerging devices and fabrics. You will also learn about the current state of the art and about the opportunities and challenges for future research. The tutorial is relevant for anybody interested in beyond-CMOS and beyond-von-Neumann architectures, ML, AI, neuromorphic systems, and computing with novel devices and circuits.
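
    In software, the reservoir computing recipe is: drive a fixed, random dynamical system with the input and train only a linear readout on its states. A minimal echo state network sketch in NumPy (illustrative parameters, not code from the tutorial), trained to predict a sine wave one step ahead:

```python
import numpy as np

rng = np.random.default_rng(0)
n_res, washout = 50, 50
u = np.sin(0.2 * np.arange(500))            # input signal
target = np.roll(u, -1)                     # one-step-ahead prediction task

w_in = rng.uniform(-0.5, 0.5, n_res)        # fixed, untrained input weights
w = rng.uniform(-0.5, 0.5, (n_res, n_res))  # fixed, untrained reservoir
w *= 0.9 / np.max(np.abs(np.linalg.eigvals(w)))  # spectral radius ~0.9

# Drive the reservoir and collect its states
x = np.zeros(n_res)
states = []
for t in range(len(u) - 1):
    x = np.tanh(w @ x + w_in * u[t])
    states.append(x.copy())
states = np.array(states)

# Train only the linear readout via ridge regression (skip the washout)
S = states[washout:]
y = target[washout:len(u) - 1]
w_out = np.linalg.solve(S.T @ S + 1e-6 * np.eye(n_res), S.T @ y)
mse = float(np.mean((S @ w_out - y) ** 2))
```

    Only w_out is learned; the reservoir itself stays untouched. This is exactly what makes the paradigm attractive for physical substrates: the material provides the fixed nonlinear dynamics, and learning reduces to fitting one linear layer.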

Tutorial Schedule

Coffee breaks are scheduled during both the morning and afternoon sessions.

Morning Tutorials

1A: How to Build Open-Source Neuromorphic Hardware and Algorithms

1B: Charge and Current Sensing Analog Design: from Front-End to In-Sensor Machine Learning

1C: The Impact of Climate Change on Agriculture, and of Agriculture on Climate Change: CAS as Enabling Technology for Mitigating Them

1D: CMOS Circuit Techniques for Wireline Transmitters Operating at 112G and Higher

1E: Analysis and Design of Bio-Inspired Circuits with Locally-Active Memristors

Afternoon Tutorials

2A: Linear Periodically Time-Varying Systems: Analysis and Applications

2B: Brain-Inspired Hardware and Algorithm Co-Design for Low-Power Online Training on the Edge

2C: Emerging ML-AI Techniques for Analog EDA

2D: Memristive Digital Processing-in-Memory

2E: Material and Physical Reservoir Computing for Beyond-CMOS Electronics

Full Day Tutorial

F: Practical Ising Machines for Solving Hard Discrete Optimization Problems