Zizhao Hu

Available for Collaboration

AI Researcher · PhD Student at USC · GLAMOUR Lab · MINDS · MOVE Fellow

“Understanding how LLMs remember and forget—then using that knowledge to build faster, leaner inference and better training data.”

— Zizhao Hu, PhD Student at USC · GLAMOUR Lab & MINDS Group

LLM Memorization

How LLMs remember & forget — unlearning, KV-cache management, continual learning, and reasoning under memory constraints

Inference Optimization

Efficient attention, KV-cache compression, sparse & low-rank methods for faster, leaner LLM serving at scale

Synthetic Data Curation

Generate-validate pipelines, quality filtering, model-collapse prevention, and safety-oriented data curation for LLM training
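A generate-validate pipeline, in miniature. This is an illustrative sketch only: `generate` and `is_valid` are hypothetical stand-ins for a real LLM generator and a real quality filter (which might check factuality, deduplication, or safety).

```python
# Toy generate-validate loop for synthetic data curation (illustrative only).
def generate(seed: int) -> str:
    # Stand-in for an LLM call; odd seeds produce "noisy" samples.
    return f"sample-{seed}" if seed % 2 == 0 else f"noisy-{seed}"

def is_valid(sample: str) -> bool:
    # Stand-in for a quality filter (factuality, dedup, toxicity, ...).
    return sample.startswith("sample-")

def curate(n: int) -> list:
    # Keep only generations that pass validation.
    return [s for s in (generate(i) for i in range(n)) if is_valid(s)]

curated = curate(6)
# Noisy generations are filtered out; only validated samples remain.
```

The key design point is the filter between generation and the training set: validation is what keeps self-generated data from degrading the model (model collapse).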

My Vision

| 🧠 Human | Stage | 🤖 AI |
| --- | --- | --- |
| Natural selection shapes the newborn brain's wiring and topology | Base | Architecture & pretraining shape the model's initial weights |
| Guided learning forms task-based memory and skills | Childhood | SFT builds task-specific skills through curated instruction |
| Sleep consolidates memory — replaying, pruning, strengthening | Sleep | KV management consolidates context — evicting, compressing, retaining |
| Short-term and long-term memory store and retrieve knowledge | Memory | KV cache as dynamic context-based weights; model weights as permanent storage |
| Real-world feedback refines intuition and adapts behavior | Experience | Continual learning updates both KV (context) and weights (parameters) |
| Build tools — books, calculators — to extend cognition | Tools | RAG & tool use augment models with external knowledge |
| Diverse attempts, verified by outcomes, drive evolution | Evolve | Diversity + verification: generate, verify, and improve |
| Decompose goals into subgoals and plan multi-step actions | Goals | Chain-of-thought & agentic planning decompose complex tasks |
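The Sleep/Memory analogy above — a KV cache that evicts the least-recently-attended context when full — can be sketched as a toy policy. This is illustrative only; `ToyKVCache` is a hypothetical class, not a real serving-stack API.

```python
from collections import OrderedDict

class ToyKVCache:
    """Toy KV cache with least-recently-attended eviction (illustrative only)."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.entries = OrderedDict()  # token position -> (key, value)

    def put(self, pos: int, key: str, value: str) -> None:
        # When full, evict the entry that was attended to least recently.
        if pos not in self.entries and len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)
        self.entries[pos] = (key, value)
        self.entries.move_to_end(pos)

    def attend(self, pos: int):
        # Mark an entry as recently used and return its key/value pair.
        kv = self.entries.get(pos)
        if kv is not None:
            self.entries.move_to_end(pos)
        return kv

cache = ToyKVCache(capacity=3)
for i in range(5):
    cache.put(i, f"k{i}", f"v{i}")
# The two oldest positions have been evicted; the last three remain.
```

Real systems use richer signals than recency (attention scores, compression, retention heuristics), but the consolidation idea is the same: keep what the model still attends to, discard or compress the rest.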

Specialized AI will develop distinct memory profiles — just as human experts develop domain intuition. Diversity with verification is how both human societies and AI systems evolve.