About Me
I am an assistant professor in the Department of Electrical Engineering and Computer Science (EECS) at Daegu Gyeongbuk Institute of Science and Technology (DGIST), South Korea, where I am a faculty member of the DGIST Distributed Artificial Intelligence Lab. My research interests lie broadly in distributed AI, federated and collaborative learning, privacy-preserving machine learning, and on-device learning.
I received my B.S. and M.S. degrees in Electrical Engineering from KAIST, and my Ph.D. degree in Electrical and Computer Engineering from USC, where I was fortunate to be advised by Prof. Salman Avestimehr. I was a Ph.D. Research Intern at Microsoft Research, Redmond in 2021, and a Staff Research Engineer at the Samsung Cellular & Multimedia Labs, Samsung Semiconductor Inc., where I worked on on-device learning and federated learning to improve 5G/6G cellular systems.
Recent News
- (2024-02) I joined the Electrical Engineering and Computer Science Department (EECS) at DGIST as an assistant professor (tenure-track)!!
- (2023-08) Our paper “Universal Auto-encoder Framework for MIMO CSI Feedback” is accepted to the 2023 IEEE Global Communications Conference (Globecom).
- (2022-11) Our paper “Securing Secure Aggregation: Mitigating Multi-Round Privacy Leakage in Federated Learning” is accepted to the AAAI Conference on Artificial Intelligence (AAAI 2023).
- (2022-10) Our paper “LightVeriFL: Lightweight and Verifiable Secure Federated Learning” is accepted to the Workshop on Federated Learning: Recent Advances and New Challenges (in conjunction with NeurIPS 2022) as an oral presentation!!
- (2022-09) I joined Samsung Labs at San Diego as a Staff Research Engineer.
- (2022-07) I successfully defended my Ph.D. dissertation, “Coding Centric Approaches for Scalable, Efficient, and Privacy-preserving Machine Learning in Large-scale Distributed System”!!
- (2022-02) Our paper “FedSpace: An Efficient Federated Learning Framework at Satellites and Ground Stations” is on arXiv.
- (2022-01) Our paper “LightSecAgg: a Lightweight and Versatile Design for Secure Aggregation in Federated Learning” is accepted to MLSys 2022, the Fifth Conference on Machine Learning and Systems.
- (2022-01) Our paper “Securing Secure Aggregation: Mitigating Multi-Round Privacy Leakage in Federated Learning” is accepted to the International Workshop on Trustable, Verifiable and Auditable Federated Learning, in conjunction with AAAI 2022 (FL-AAAI-22).
- (2021-10) Our paper “Secure Aggregation for Buffered Asynchronous Federated Learning” is accepted to the NeurIPS 2021 Workshop on New Frontiers in Federated Learning.
- (2021-09) Our paper “LightSecAgg: Rethinking Secure Aggregation in Federated Learning” is accepted to the 2021 IEEE Information Theory Workshop (ITW 2021).
- (2021-06) I started a Ph.D. internship at Microsoft Research, Redmond, where I am working on the project “Distributed Machine Learning Training in Space”.
- (2021-06) Our paper “Securing Secure Aggregation: Mitigating Multi-Round Privacy Leakage in Federated Learning” is on arXiv.
- (2021-02) Our paper “FedML: A Research Library and Benchmark for Federated Machine Learning” (a shorter version of the FedML white paper) won the Baidu Best Paper Award at the NeurIPS 2020 Workshop on Scalability, Privacy, and Security in Federated Learning.
- (2021-01) Our paper “Turbo-Aggregate: Breaking the Quadratic Aggregation Barrier in Secure Federated Learning” is accepted to the IEEE Journal on Selected Areas in Information Theory (JSAIT).
- (2021-01) Our paper “CodedPrivateML: A Fast and Privacy-Preserving Framework for Distributed Machine Learning” is accepted to the IEEE Journal on Selected Areas in Information Theory (JSAIT).
- (2020-10) Our paper “Byzantine-Resilient Secure Federated Learning” is accepted to the IEEE Journal on Selected Areas in Communications (JSAC).
- (2020-09) Our paper “A Scalable Approach for Privacy-Preserving Collaborative Machine Learning” is accepted to the Conference on Neural Information Processing Systems (NeurIPS 2020)!