Riyasat Ohib
Ph.D. Candidate, Georgia Institute of Technology
I am a Ph.D. Candidate in the Department of ECE at the Georgia Institute of Technology, advised by Dr. Vince Calhoun and Dr. Sergey Plis. My doctoral research focuses on sparse learning across diverse paradigms, including supervised deep learning, multimodal learning, federated learning, and reinforcement learning. I have had the opportunity to explore these ideas through internships at Meta AI (FAIR), Dolby Labs, and Cohere. Currently, I am interning at Google DeepMind, where I work on representation alignment in diffusion models.
I have broad interests in learning algorithms and intelligence, and I’m always eager to discuss research—feel free to reach out!
Research and Work Experience
Research Intern, Google DeepMind
Fall 2025 – Spring 2026
Diffusion-model representation engineering and alignment, with a focus on analysis, controllability, interpretability, and safety.
Research Intern, Cohere
Fall 2024
Inference-time activation sparsity techniques for large language models (LLMs).
Research Intern, Dolby Laboratories
Summer 2024
Efficient fine-tuning methods for LLMs using probabilistic layer selection.
Research Intern, Meta AI (FAIR)
Summer 2022
At Meta AI (FAIR), I researched signal-processing-based techniques for sparse deep learning. My neural-network sparsity library was integrated into the facebookresearch/fairscale repository.
Education
Ph.D. Student, Georgia Institute of Technology
Aug 2021 - Present
Research in learning algorithms and sparse learning across domains.
Dissertation: Principled Sparsity for Efficient Deep Learning Across Computational Paradigms.
CGPA 4.0/4.0
Master's
Aug 2019 - May 2021
Master's thesis on Explicit Group Sparse Projection.
CGPA 4.0/4.0
News
| Date | News |
|---|---|
| Sep 08, 2025 | Excited to join Google DeepMind as a Research Intern! I will be working on model representation analysis and alignment, with applications to safety. |
| Mar 05, 2025 | New work on sparse model adapters is out: *Exploring Sparse Adapters for Scalable Merging of Parameter Efficient Experts* was accepted at COLM 2025. |
| Sep 25, 2024 | Our latest work, *Efficient Reinforcement Learning by Discovering Neural Pathways*, was accepted at NeurIPS 2024. |
| Sep 03, 2024 | Excited to join the model efficiency team at Cohere as a Research Intern! |
| May 20, 2024 | Joining the Advanced Technologies group at Dolby Laboratories as a Ph.D. Research Intern! I will be working on novel efficient fine-tuning methods for both LLMs and multimodal VLMs. |
| Mar 05, 2023 | Preliminary work on communication-efficient federated learning was accepted at the ICLR 2023 Sparse Neural Networks workshop; the full paper is out on arXiv. |