Research

Building AI systems that improve themselves while remaining under control. Focused on multi-agent systems and self-improving AI through synthetic data, brain-inspired neural architectures, and continual learning.

Research Directions

Primary Focus

LLM / VLM / VLA

Multi-agent interaction, self-improving AI, continual learning, and efficient model memory: how LLM/VLM/VLA agents collaborate, self-improve through generate-validate loops, and maintain knowledge efficiently over time.

Primary Focus

Synthetic Data

Synthetic data generation, model collapse dynamics, data curation methods, and safety-oriented data pipelines for self-improving AI.

Architecture

Transformer memory mechanisms, efficient architectures, multimodal architectures, and scalable designs. Research on how models store, retrieve, and reason over information at scale.

Continual Learning

Continual learning for LLM/VLM agents, efficient model memory, catastrophic forgetting mitigation, difficulty-aware replay, and curriculum strategies for deployed AI systems that need to adapt over time.

Recent Publications

Featured

Multimodal Synthetic Data Finetuning and Model Collapse

Zizhao Hu et al.

2025 · ACM International Conference on Multimodal Interaction (ICMI)

Static Key Attention in Vision

Zizhao Hu et al.

2024 · Preprint

Lateralization MLP: A Simple Brain-inspired Architecture for Diffusion

Zizhao Hu et al.

2024 · Preprint

Academic Service

Reviewer: NeurIPS 2024, ICLR 2024-2025, ICML 2024-2025

Active contributor to peer review at top-tier venues.

Current Position

PhD Student • USC

Lab: GLAMOUR Lab

Fellowship: MOVE @ Handshake AI (Alumni)

Advisors: J. Thomason, M. Rostami

Affiliation: GLAMOUR Lab, USC ISI

Graduation: 2027