Riyasat Ohib

Ph.D. Candidate, Georgia Institute of Technology


I am a Ph.D. Candidate in the Department of ECE at the Georgia Institute of Technology, advised by Dr. Vince Calhoun and Dr. Sergey Plis. My doctoral research focuses on sparse learning across diverse paradigms, including supervised deep learning, multimodal learning, federated learning, and reinforcement learning. I’ve had the opportunity to explore these ideas through internships at Meta AI (FAIR), Dolby Laboratories, and Cohere. Currently, I’m interning at Google DeepMind, where I’m working on representation alignment in diffusion models.

I have broad interests in learning algorithms and intelligence, and I’m always eager to discuss research—feel free to reach out!

Research and Work Experience

Research Intern, Google DeepMind

Fall 2025 - Spring 2026

Diffusion model representation engineering and alignment, with a focus on analysis, controllability, interpretability, and safety.

Research Intern, Cohere

Fall 2024

Inference-time activation sparsity techniques for large language models (LLMs).

Research Intern, Dolby Laboratories

Summer 2024

Efficient fine-tuning method for LLMs using probabilistic layer selection.

Research Intern, Meta AI (FAIR)

Summer 2022

Researched signal-processing-based techniques for sparse deep learning. My neural network sparsity library was integrated into the facebookresearch/fairscale repository.

Graduate Research Assistant

Fall 2019 - Present

Working with Dr. Vince Calhoun and Dr. Sergey Plis at the Georgia Institute of Technology.

Education

Ph.D. Candidate, Georgia Institute of Technology

Aug 2021 - Present

Research in learning algorithms and sparse learning across domains.

Dissertation: Principled Sparsity for Efficient Deep Learning Across Computational Paradigms.

CGPA 4.0/4.0

Master's, Georgia Institute of Technology

Aug 2019 - May 2021

Research and Master's thesis on Explicit Group Sparse Projection.

CGPA 4.0/4.0


news

Sep 08, 2025 Excited to join Google DeepMind as a Research Intern! Will be working on model representation analysis and alignment with applications to safety.
Mar 05, 2025 New work on sparse model adapters is out: Exploring Sparse Adapters for Scalable Merging of Parameter Efficient Experts was accepted at COLM 2025.
Sep 25, 2024 Our latest work, Efficient Reinforcement Learning by Discovering Neural Pathways, was accepted at NeurIPS 2024.
Sep 03, 2024 Excited to join the model efficiency team at Cohere as a Research Intern!
May 20, 2024 Joining the Advanced Technologies group at Dolby Laboratories as a Ph.D. Research Intern! Will be working on novel efficient fine-tuning methods for both LLMs and multimodal VLMs.
Mar 05, 2023 Preliminary work on communication-efficient federated learning accepted at the ICLR 2023 Sparse Neural Networks workshop; the full paper is out on arXiv.

selected publications

  1. COLM
    Exploring Sparse Adapters for Scalable Merging of Parameter Efficient Experts
    Samin Yeasar Arnob, Riyasat Ohib, Sergey M. Plis, and 3 more authors
    COLM, 2025
  2. NeurIPS
    Efficient Reinforcement Learning by Discovering Neural Pathways
    Samin Yeasar Arnob, Riyasat Ohib, Sergey M. Plis, and 3 more authors
    NeurIPS, 2024
  3. arXiv
    Unmasking Efficiency: Learning Salient Sparse Models in Non-IID Federated Learning
    Riyasat Ohib, Bishal Thapaliya, Gintare Karolina Dziugaite, and 3 more authors
    arXiv, 2024
  4. ICLR SNN
    SalientGrads: Sparse Models for Communication Efficient and Data Aware Distributed Federated Training
    Riyasat Ohib, Bishal Thapaliya, Pratyush Reddy , and 3 more authors
    ICLR Sparse Neural Networks Workshop, 2023
  5. TMLR
    Explicit Group Sparse Projection with Applications to Deep Learning and NMF
    Riyasat Ohib, Nicolas Gillis, Niccolò Dalmasso, and 3 more authors
    Transactions on Machine Learning Research, 2022
  6. NeurIPS Off-RL
    Single-Shot Pruning for Offline Reinforcement Learning
    Samin Yeasar Arnob, Riyasat Ohib, Sergey Plis, and 1 more author
    NeurIPS Offline RL Workshop, 2021
  7. ICLR HAET
    Grouped Sparse Projection for Deep Learning
    Riyasat Ohib, Nicolas Gillis, Sergey Plis, and 1 more author
    ICLR Hardware Aware Efficient Training Workshop, 2021