Continual Learning
Enabling deployed AI agents to learn continuously without catastrophic forgetting. My research develops methods for continual learning in LLM/VLM agents, efficient model memory, difficulty-aware replay, and curriculum strategies, allowing agentic systems to evolve with new data and interactions while preserving prior knowledge. This capability is essential for any AI system that operates in the real world.
Key Research Topics
Continual Learning for LLM/VLM Agents
Adapting continual learning to large-scale language and vision-language models. Research on how deployed agents can absorb new knowledge, learn from user interactions, and adapt to domain shifts without full retraining — enabling AI systems that grow smarter over their lifetime.
Efficient Model Memory
How models store, compress, retrieve, and forget information efficiently. Research on KV-cache optimization, memory-augmented architectures, retrieval-augmented generation, episodic memory for agents, and parameter-efficient representations that maximize knowledge per byte of VRAM.
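One simple instance of bounded agent memory is a key-value episodic store with least-recently-used eviction, which caps the memory footprint while keeping frequently accessed entries alive. This is a minimal sketch of the general idea, not a specific system from my work; the class and method names are illustrative.

```python
from collections import OrderedDict


class EpisodicMemory:
    """Bounded key-value episodic store with least-recently-used (LRU)
    eviction -- one simple way to cap an agent's memory footprint."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()  # insertion order doubles as recency order

    def write(self, key, value):
        if key in self.store:
            self.store.move_to_end(key)  # overwrite refreshes recency
        self.store[key] = value
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict least recently used entry

    def read(self, key):
        if key not in self.store:
            return None
        self.store.move_to_end(key)  # reading refreshes recency
        return self.store[key]
```

Real KV-cache and retrieval systems use richer eviction scores (attention mass, recency-frequency mixes) than plain LRU, but the capacity-bounded read/write interface is the same.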
Catastrophic Forgetting Mitigation
Developing methods that enable neural networks to learn new tasks without destroying performance on previously learned ones. Combining replay-based, regularization-based, and architecture-based strategies for robust knowledge retention.
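Two of the strategy families above can be sketched in a few lines: a reservoir-sampling replay buffer (replay-based) and an EWC-style quadratic penalty that anchors parameters to their post-task values, weighted by Fisher importance (regularization-based). This is a minimal illustration of the standard techniques, not the exact methods used in my work; the names `ReplayBuffer` and `ewc_penalty` are illustrative.

```python
import numpy as np


def ewc_penalty(theta, theta_star, fisher, lam=1.0):
    """EWC-style penalty: 0.5 * lam * sum_i F_i * (theta_i - theta*_i)^2,
    discouraging drift in parameters important to previous tasks."""
    return 0.5 * lam * np.sum(fisher * (theta - theta_star) ** 2)


class ReplayBuffer:
    """Reservoir sampling keeps a bounded, uniform sample of the
    stream seen so far, for rehearsal alongside new-task batches."""

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.data = []
        self.seen = 0
        self.rng = np.random.default_rng(seed)

    def add(self, item):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append(item)
        else:
            j = self.rng.integers(0, self.seen)  # uniform in [0, seen)
            if j < self.capacity:
                self.data[j] = item  # replace with probability capacity/seen

    def sample(self, k):
        idx = self.rng.choice(len(self.data),
                              size=min(k, len(self.data)), replace=False)
        return [self.data[i] for i in idx]
```

In training, the total loss would be the new-task loss on a mixed batch of fresh and replayed examples, plus the penalty term.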
Difficulty-Aware Replay (DREAM)
Prioritizing difficult, boundary-adjacent samples in experience replay buffers. By focusing on the most informative examples, DREAM achieves better performance than random replay while using a smaller memory footprint.
Curriculum Continual Learning
Ordering training tasks and samples intelligently to maximize positive transfer and minimize interference. Research on how task sequencing and difficulty progression affect continual learning outcomes.
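A basic easy-to-hard curriculum is just a sort by a difficulty score, optionally paired with a pacing schedule that reveals growing prefixes of the sorted data (a simple self-paced variant). This is a minimal sketch of the generic idea, not a specific method from my work; `curriculum_order` and `pace` are illustrative names.

```python
def curriculum_order(samples, difficulty, pace=None):
    """Order samples easy-to-hard by a difficulty score.

    If `pace` (a list of fractions in (0, 1]) is given, return one
    training stage per fraction: stage t contains the easiest
    pace[t] fraction of the sorted samples.
    """
    ordered = sorted(samples, key=difficulty)
    if pace is None:
        return ordered
    stages = []
    for frac in pace:
        n = max(1, int(frac * len(ordered)))  # always reveal at least one sample
        stages.append(ordered[:n])
    return stages
```

An anti-curriculum (hard-first) ordering falls out of the same sketch by negating the difficulty score, which makes the two orderings easy to compare experimentally.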
Evaluation & Benchmarking
Developing comprehensive evaluation frameworks for continual learning that go beyond simple accuracy metrics. Measuring forward transfer, backward transfer, forgetting rates, and computational efficiency across both classical and LLM settings.
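The standard transfer metrics can be computed from an accuracy matrix R, where R[i, j] is accuracy on task j after training on task i (the formulation popularized by GEM, Lopez-Paz & Ranzato 2017). The sketch below is a minimal illustration; the function name and dictionary keys are my own.

```python
import numpy as np


def cl_metrics(R, baseline=None):
    """Continual-learning metrics from accuracy matrix R (T x T).

    avg_accuracy:      mean accuracy over all tasks after the final task.
    backward_transfer: mean change on earlier tasks after training ends
                       (negative values indicate forgetting).
    forward_transfer:  mean zero-shot gain on task j just before training
                       it, relative to a per-task `baseline` accuracy.
    """
    R = np.asarray(R, dtype=float)
    T = R.shape[0]
    metrics = {
        "avg_accuracy": R[-1].mean(),
        "backward_transfer": np.mean([R[-1, j] - R[j, j] for j in range(T - 1)]),
    }
    if baseline is not None:
        metrics["forward_transfer"] = np.mean(
            [R[j - 1, j] - baseline[j] for j in range(1, T)]
        )
    return metrics
```

Reporting these alongside wall-clock and memory cost gives a fuller picture than final accuracy alone.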
Related Work
DREAM-C2L: Continual Learning Framework
Open-source framework for continual learning research with difficulty-aware sample ordering, replay-based retention methods, and reproducible HPC experiment pipelines.