Research Topics: Large Language Models (WebConf'24, ACL'24), LLM Safety (In preparation for NAACL'25), Multimodal Models (ACL'24), Recommender Systems and Dynamic Graph Mining (KDD'23), Social Network Analysis (CIKM'24, KDD'23), Fair Graph Mining (CIKM'24).
Advisor: Dr. Srijan Kumar
Research Intern
J.P. Morgan AI Research
Research Topics: Multimodal Large Language Models (MLLMs), Infographics Understanding.
Research Intern
Adobe Inc.
Research Topics: Multimodal Large Language Model (MLLM) Fine-tuning, Web UI and Video Tutorial Understanding.
Research Intern
Microsoft Research Asia
Research Topics: Large Language Models (EMNLP'24, ICML'24, ICML'23, AAAI'23), LLM Agents (EMNLP'24, ICML'24), Scientometric Analysis (In preparation for NAACL'25), Computational Social Science, Misinformation Detection (KDD'22, AAAI'22), Few-shot Learning (ACL'24, AAAI'23), Explainable AI (AAAI'22).
Advisors: Dr. Xiting Wang, Dr. Jindong Wang, and Dr. Xing Xie
Undergraduate Research Assistant
UCLA
Research Topics: Large Language Models (EMNLP'24), Graph Neural Networks and Data Mining (WWW'23), LLM Fine-tuning (Under Review at KDD'25), Recommender Systems (WWW'23).
Advisors: Dr. Yizhou Sun and Dr. Wei Wang
Software Engineer Intern
Amazon
Worked on the IAR team within Fulfillment By Amazon (FBA)
Designed and implemented IAR Manual Analysis, a scalable and efficient workflow built on AWS Step Functions and AWS Lambda; the service automates the aggregation of data points from multiple sources, such as Amazon S3 and DynamoDB, for SageMaker ML model training and handles over 16,000 requests per summary stage (see the illustrative sketch after this list);
Automated the deployment of the workflow across all AWS Realms (EU/FE/NA) through CloudFormation;
Established a DataCraft pipeline to enable automatic data ingestion from DynamoDB into the Andes dataset catalog, promoting broader internal adoption of these datasets across cross-functional teams and enhancing data accessibility;
Performed ablation analysis on the inventory reconciliation model, identifying key bottlenecks and improving model performance.
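The aggregation step referenced above can be pictured as a Lambda handler invoked by a Step Functions state: it pulls raw records from S3, enriches them with DynamoDB items, and returns a summary for the next state. The sketch below is purely illustrative; the bucket, table, and event-field names are hypothetical placeholders and do not reflect the internal IAR service.

```python
# Illustrative sketch of a Step Functions-invoked Lambda that aggregates
# data points from S3 and DynamoDB for a downstream SageMaker training job.
# All resource names and event fields are hypothetical.
import json
import boto3

s3 = boto3.client("s3")
dynamodb = boto3.resource("dynamodb")


def handler(event, context):
    # Step Functions passes the partition to summarize in the state input.
    bucket = event["bucket"]        # hypothetical input field
    key = event["metrics_key"]      # hypothetical input field
    table = dynamodb.Table(event["table_name"])

    # Pull raw data points from S3 (one JSON record per line).
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")
    records = [json.loads(line) for line in body.splitlines() if line]

    # Enrich each record with its corresponding DynamoDB item.
    enriched = []
    for rec in records:
        item = table.get_item(Key={"request_id": rec["request_id"]}).get("Item", {})
        enriched.append({**rec, **item})

    # Return a compact summary; Step Functions forwards it to the next state
    # (e.g., writing a training manifest for SageMaker).
    return {"count": len(enriched), "records": enriched}
```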
Education
Ph.D. in Computer Science
Georgia Institute of Technology (GaTech)
Research focus on Large Language Models, Multimodal Learning, and Social Computing
I studied the foundational concepts of neural networks and deep learning. By the end, I was familiar with the significant technological trends driving the rise of deep learning and could build, train, and apply fully connected deep neural networks; implement efficient (vectorized) neural networks; identify key parameters in a neural network's architecture; and apply deep learning to my own applications.
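As a toy illustration of the "vectorized fully connected network" idea mentioned above (not course material), a forward pass can be run over an entire batch with matrix operations instead of per-example loops; the layer sizes and data below are arbitrary placeholders.

```python
# Toy illustration of a vectorized forward pass through a 2-layer fully
# connected network; layer sizes and inputs are arbitrary placeholders.
import numpy as np

rng = np.random.default_rng(0)

# Batch of 32 examples with 8 features each.
X = rng.normal(size=(32, 8))

# Parameters for a hidden layer of 16 units and a single output unit.
W1, b1 = rng.normal(size=(8, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 1)), np.zeros(1)

def relu(z):
    return np.maximum(z, 0.0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# The whole batch flows through each layer as one matrix multiplication,
# rather than looping over individual examples.
hidden = relu(X @ W1 + b1)           # shape (32, 16)
probs = sigmoid(hidden @ W2 + b2)    # shape (32, 1)
print(probs.shape)
```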
Gained an in-depth understanding of the specific mechanics of Bitcoin
Understood Bitcoin's real-life applications and learned about attacks that can destroy Bitcoin, as well as Ethereum, smart contracts and Dapps, and alternatives to Bitcoin's Proof-of-Work consensus algorithm
Object-oriented programming (OOP) specifies relationships between functions and the objects they act on, helping manage complexity in code. This intermediate-level course introduced OOP in R using the S3 and R6 systems: S3 as a day-to-day R programming tool that simplifies some of the functions you write, and R6 for industry-specific analyses, working with web APIs, and building GUIs.