About me


Ninghao Liu
School of Computing
University of Georgia
616 Boyd Graduate Studies Research Center
Athens, GA, 30602

Email: ninghao dot liu at uga dot edu

Background

Dr. Ninghao Liu is an assistant professor in the School of Computing at the University of Georgia. He received his Ph.D. in Computer Science from Texas A&M University in 2021, under the supervision of Dr. Xia Hu. He received his M.S. degree in Electrical and Computer Engineering from Georgia Institute of Technology in 2015 and his B.S. degree in Information Engineering from South China University of Technology in 2014.

  • I am always looking for self-motivated PhD students. If you are interested in working with me, please feel free to email me. Please check here for more details.

Research (Google Scholar)

My research work spans Explainable AI (XAI), Large Foundation Models, Graph Mining, Model Fairness, Recommender Systems, and Outlier Detection. I have published refereed papers at recognized venues such as KDD, WWW, ICML, ICLR, NeurIPS, WSDM, IJCAI, CIKM, ICDM, etc. My work won the Outstanding Paper Award at ICML 2022, and was shortlisted for the Best Paper Award at WWW 2019 and ICDM 2019.

I have a deep interest in Explainable AI (XAI), a journey that started with a group presentation of a 2016 research paper asking "should we trust machine learning models?". At the time, I was working on security-related topics (anomaly detection, to be more specific). Besides realizing that explainability is a crucial aspect of anomaly/outlier detection, I also started exploring whether explainability and security are just two sides of the same coin (this is now widely known, though)! As I learned more concepts of Trustworthy AI, this perspective expanded, and I came to believe that explainability lies at the heart of not only security but also other key principles such as fairness. My passion for this subject has only grown as I mentor students, and I expect it will continue.

A major challenge in XAI research is opening the black box of deep models. Inspired by "embedding is all you need", I became motivated to explain the embedding spaces within deep models. Coming from the data mining community, where graph data is a common focus, I chose graph models as a starting point. This soon expanded to general deep models, where I tried to use natural language tokens to understand the semantics within latent spaces (unfortunately, in 2018 we did not have tools like LLMs, so the process was quite a struggle). Over time, my interest in post-hoc explanation evolved into developing inherently interpretable models (see "is a single vector enough?").

After working on and reading so many XAI papers, I kept asking myself: is XAI really helping us? This concern was exacerbated by the recent rise of large language models, where many XAI techniques—and even some of the assumptions behind them—seemed to lose their relevance. Through early explorations into LLM explainability (WWW'24, NAACL'24, CIKM'24), my lab is now focusing on making XAI more usable in this evolving landscape. I am deeply grateful to my collaborators and to NSF, IES, and UGA for their generous support. We are always open to new collaborations.

News

  • 2024/10: Glad to be part of the National GenAI Center team receiving a five-year, $10 million grant from IES.

  • 2024/09: Glad to receive the Engineering and Computing Convergence Collaboration grant from the College of Engineering, UGA.

  • 2024/07: Glad to receive an IES grant on “Student Reasoning Patterns in Science” with collaborators from UGA and ETS.

  • 2024/07: One paper accepted by CIKM 2024.

  • 2024/06: One paper accepted by AMIA 2024 Annual Symposium.

  • 2024/05: Two papers accepted by ACL 2024.

  • 2024/05: Two papers accepted by ICML 2024.

  • 2024/03: XAI is not just for visualization! Check out our recent work: “Usable XAI: 10 Strategies Towards Exploiting Explainability in the LLM Era”

  • 2024/03: One paper accepted by NAACL 2024 for Oral presentation. Congrats to Xuansheng!

  • 2024/02: One paper accepted by COLING 2024.

  • 2024/02: One paper accepted by CVPR 2024.

  • 2024/01: One paper accepted by The Web Conference (WWW) 2024. Congrats to Xuansheng!

  • 2024/01: One paper accepted by ICLR 2024.

  • 2024/01: Check out our new survey of LLM Interpretability accepted by ACM TIST!

  • 2023/10: Two student abstracts accepted by AAAI 2024.

  • 2023/10: One paper accepted by BIBM 2023.

  • 2023/09: One paper accepted by NeurIPS 2023. Congratulations to Yucheng for his third accepted work this year!

  • 2023/09: One paper accepted by ICDM 2023.

  • 2023/08: Two papers accepted by CIKM 2023.

  • 2023/07: One paper accepted by ECAI 2023.

  • 2023/06: Two papers accepted by ECML-PKDD 2023.

  • 2023/05: One paper accepted by TKDE, titled “Improving Generalizability of Graph Anomaly Detection Models via Data Augmentation”.

  • 2023/04: One paper accepted by ICML 2023.

  • 2023/04: One paper accepted by SIGKDD Explorations.

  • 2023/04: One paper accepted by AIED 2023 (International Conference on Artificial Intelligence in Education, 2023).

  • 2023/03: Check our recent survey paper about Graph Prompting methods.

  • 2023/02: Check out our recent paper that utilizes ChatGPT for text data augmentation.

  • 2022/12: One paper accepted by SDM 2023.

  • 2022/11: Two papers accepted by AAAI 2023.

  • 2022/10: Two papers accepted by WSDM 2023.

  • 2022/08: Honored to receive the NSF IIS Core grant “Collaborative Research: III: Small: Graph-Oriented Usable Interpretation” as the UGA PI.

  • 2022/08: One paper accepted by CIKM 2022.

  • 2022/07: Check out our survey paper on machine learning fairness, available at TKDD.

  • 2022/07: Our paper “G-Mixup: Graph Data Augmentation for Graph Classification” is awarded an Outstanding Paper Award at ICML 2022!

  • 2022/05: One paper accepted by KDD 2022.

  • 2022/05: One paper accepted by ICML 2022.

  • 2022/03: Our workshop proposal on “Data Science and Artificial Intelligence for Responsible Recommendations” has been accepted by KDD 2022. We welcome your submissions and attendance.

  • 2022/01: One paper accepted by ICLR 2022.

  • 2022/01: Two papers accepted by TheWebConf (WWW) 2022.

  • 2022/01: Our review of Interpretability in Graph Neural Networks is available online.

  • 2021/12: One paper about anomaly detection accepted by SDM 2022.

  • 2021/12: Invited to serve as a PC member of KDD 2022 and ICML 2022.

  • 2021/09: Invited to serve as a SPC member of AAAI 2022.

  • 2021/06: Our survey paper Adversarial Attacks and Defenses: An Interpretation Perspective is available at SIGKDD Explorations Newsletter.

  • 2021/06: Invited to serve as a PC member of WSDM 2022.

  • 2021/04: Invited to serve as a PC member of CIKM 2021 and NeurIPS 2021.

  • 2021/02: One paper about learning credible DNNs is available at KAIS.

  • 2020/12: One paper accepted by AAAI 2021.

  • 2020/12: Invited to serve as a PC member of ICML 2021.

  • 2020/12: Invited to serve as a PC member of KDD 2021.

  • 2020/11: Successfully defended my Ph.D. dissertation!

  • 2020/10: One paper accepted by WSDM 2021.