Alsemy

Recent Publications

Accelerating DTCO with a Sample-Efficient Active Learning Framework for TCAD Device Modeling

Authors: Chanwoo Park*, Junghwan Park*, Premkumar Vincent, Hyunbo Cho (*Equal contribution)

Publication: ACM/IEEE Design Automation Conference (DAC)

Date: June 2024

Design-Technology Co-Optimization (DTCO) can be significantly accelerated by employing Neural Compact Models (NCMs). However, the effective deployment of NCMs requires a substantial amount of training data for accurate device modeling. This paper introduces an Active Learning (AL) framework designed to enhance the efficiency of both device modeling and process optimization, particularly addressing the challenges of time-intensive Technology Computer-Aided Design (TCAD) simulations. The framework employs a ranking algorithm that assesses metrics such as the expected variance from the neural tangent kernel (NTK), TCAD simulation time, and the complexity of I-V curves. This strategy considerably reduces the number of required simulations while maintaining high accuracy. Demonstrating the effectiveness of our AL framework, we achieved a 28.5% improvement in MSE within a 30-minute time budget for device modeling, and an 86.7% reduction in the data points required for process optimization of a 51-stage ring oscillator (RO). These results offer a streamlined, adaptable solution for rapid device modeling and process optimization in various DTCO applications.
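
The abstract does not spell out how the ranking combines its three criteria, so the sketch below is only one plausible reading: a weighted sum of min-max-normalized scores, where high expected NTK variance and high I-V curve complexity raise a candidate's priority and long estimated simulation time lowers it. The weights, normalization, and function names are illustrative assumptions, not the paper's method.

```python
import numpy as np

def acquisition_ranking(ntk_variance, sim_time, iv_complexity,
                        weights=(1.0, 1.0, 1.0)):
    """Rank candidate TCAD simulation points for active learning.

    Hypothetical scoring: favor candidates whose behavior the surrogate
    is most uncertain about (expected NTK variance) and whose I-V curves
    are most complex, while penalizing long estimated simulation times.
    """
    def minmax(x):
        # Normalize each criterion to [0, 1] so the weights are comparable.
        x = np.asarray(x, dtype=float)
        span = x.max() - x.min()
        return (x - x.min()) / span if span > 0 else np.zeros_like(x)

    w_var, w_time, w_cplx = weights
    score = (w_var * minmax(ntk_variance)
             + w_cplx * minmax(iv_complexity)
             - w_time * minmax(sim_time))
    return np.argsort(-score)  # candidate indices, highest priority first
```

In an AL loop, the top-ranked candidates would be simulated in TCAD, added to the training set, and the uncertainty estimates recomputed before the next ranking round.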

Inducing Point Operator Transformer: A Flexible and Scalable Architecture for Solving PDEs

Authors: Seungjun Lee, Taeil Oh

Publication: Proceedings of the AAAI Conference on Artificial Intelligence

Date: February 2024

Solving partial differential equations (PDEs) by learning the solution operators has emerged as an attractive alternative to traditional numerical methods. However, implementing such architectures presents two main challenges: flexibility in handling irregular and arbitrary input and output formats, and scalability to large discretizations. Most existing architectures either constrain the input formats they can handle or are infeasible to scale to large inputs and outputs. To address these issues, we introduce an attention-based model called an inducing-point operator transformer (IPOT). Inspired by inducing-point methods, IPOT is designed to handle any input function and output query while capturing global interactions in a computationally efficient way. By decoupling the input/output discretizations from the processor through a smaller latent bottleneck, IPOT offers flexibility in processing arbitrary discretizations and scales linearly with the size of inputs/outputs. Our experimental results demonstrate that IPOT achieves strong performance with manageable computational complexity on a wide range of PDE benchmarks and real-world weather forecasting scenarios, compared to state-of-the-art methods.
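
As a concrete picture of the encode-process-decode idea, here is a minimal sketch assuming standard multi-head attention: a fixed set of learned latent (inducing) tokens cross-attends to the input points, is processed by self-attention, and is then queried at arbitrary output coordinates. Dimensions, layer counts, and the omission of feed-forward and normalization layers are simplifications for illustration, not the paper's configuration.

```python
import torch
import torch.nn as nn

class InducingPointOperatorSketch(nn.Module):
    """Illustrative encode-process-decode model in the spirit of IPOT."""

    def __init__(self, in_dim, query_dim, out_dim,
                 d_model=128, n_latents=64, n_heads=4):
        super().__init__()
        # Fixed number of learned latent (inducing) tokens.
        self.latents = nn.Parameter(torch.randn(n_latents, d_model))
        self.embed_in = nn.Linear(in_dim, d_model)
        self.embed_query = nn.Linear(query_dim, d_model)
        self.encode = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.process = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.decode = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.readout = nn.Linear(d_model, out_dim)

    def forward(self, inputs, queries):
        # inputs:  (B, N_in, in_dim)     -- arbitrary input discretization
        # queries: (B, N_out, query_dim) -- arbitrary output coordinates
        B = inputs.size(0)
        z = self.latents.unsqueeze(0).expand(B, -1, -1)
        x = self.embed_in(inputs)
        z, _ = self.encode(z, x, x)    # encoder: latents attend to inputs
        z, _ = self.process(z, z, z)   # processor: self-attention on latents
        q = self.embed_query(queries)
        y, _ = self.decode(q, z, z)    # decoder: queries attend to latents
        return self.readout(y)
```

Because attention over the inputs and outputs always goes through the fixed-size latent set, the encoder costs O(N_in · n_latents) and the decoder O(N_out · n_latents), which is where the linear scaling in the input/output sizes comes from.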

2024

Accelerating DTCO with a Sample-Efficient Active Learning Framework for TCAD Device Modeling (To be presented)

Chanwoo Park*, Junghwan Park*, Premkumar Vincent, Hyunbo Cho (*Equal contribution) | ACM/IEEE Design Automation Conference (DAC) | June 2024

Inducing Point Operator Transformer: A Flexible and Scalable Architecture for Solving PDEs

Seungjun Lee, Taeil Oh | Proceedings of the AAAI Conference on Artificial Intelligence | February 2024

2023

NPC-NIS: Navigating Semiconductor Process Corners with Neural Importance Sampling

Hong Chul Nam, Chanwoo Park | NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World | December 2023

DAT: Leveraging Device-Specific Noise for Efficient and Robust AI Training in ReRAM-based Systems

Chanwoo Park, Jongwook Jeon, Hyunbo Cho | SISPAD | September 2023

FlowSim: An Invertible Generative Network for Efficient Statistical Analysis under Process Variations

Chanwoo Park*, Hong Chul Nam*, Jihun Park, Jongwook Jeon (*Equal contribution) | SISPAD | September 2023

Performance Evaluation of Strain Effectiveness of Sub-5 nm GAA FETs with Compact Modeling based on Neural Networks

Ji Hwan Lee, Kihwan Kim, Kyungjin Rim, Soogine Chong, Hyunbo Cho, Saeroonter Oh | IEEE EDTM | March 2023

Neural Compact Modeling: Motivation, State of the Art, Future Perspectives

Hyunbo Cho | IEEE EDTM | March 2023

Hierarchical Mixture-of-Experts approach for neural compact modeling of MOSFETs

Chanwoo Park, Premkumar Vincent, Soogine Chong, Junghwan Park, Ye Sle Cha, Hyunbo Cho | Solid-State Electronics Volume 199, 108500 | January 2023

2022

A novel methodology for neural compact modeling based on knowledge transfer

Ye Sle Cha, Junghwan Park, Chanwoo Park, Soogine Chong, Chul-Heung Kim, Chang-Sub Lee, Intae Jeong, Hyunbo Cho | Solid-State Electronics Volume 198, 108450 | December 2022

2021

Knowledge-based neural compact modeling towards autonomous technology development

Soogine Chong | MOS-AK | August 2021
