@@ -25,41 +25,70 @@ permalink: /research/
  <div class="research-area-content">
  <h2>Interactive Reinforcement Learning</h2>
  <p>
- We develop methods for robots to learn complex tasks through interactive feedback from humans.
- Our work focuses on efficient learning from human corrections, incorporating human preferences,
- and enabling robots to actively query humans for guidance during learning and execution.
+ We develop methods for robots to learn complex manipulation tasks through hierarchical reinforcement
+ learning and imitation learning. Our work on <strong>Impedance Primitive-Augmented Hierarchical RL</strong>
+ (ICRA 2025) augments high-level RL policies with impedance control primitives to solve long-horizon
+ sequential tasks, enabling compliant and adaptive skill acquisition. We also investigate robust imitation
+ learning, including methods that exploit <strong>mixed-quality demonstrations</strong> (Beyond the Teacher,
+ ICRA 2026) and <strong>progress-aligned data curation</strong> (PACER, CoRL Workshop 2025) to improve
+ generalization from imperfect human teachers, as well as stochastic encodings (RISE, IROS Workshop 2025)
+ for robust policy learning.
  </p>
+ <div class="pub-links-inline">
+ <a href="https://ieeexplore.ieee.org/abstract/document/11128462" target="_blank" class="pub-link-text">ImpHRL (ICRA'25)</a>
+ <a href="https://focaslab.github.io/beyondtheteacher/" target="_blank" class="pub-link-text">Beyond the Teacher (ICRA'26)</a>
+ <a href="https://openreview.net/forum?id=gaYyBvP2Rz" target="_blank" class="pub-link-text">PACER</a>
+ <a href="https://openreview.net/forum?id=GEexdUmA67" target="_blank" class="pub-link-text">RISE</a>
+ </div>
  </div>
  <div class="research-area-image">
- <img src="/assets/img/research_areas/robot_learning .gif" alt="Interactive RL">
+ <img src="/assets/img/publication_preview/icra_imphrl_25.gif" alt="Impedance Hierarchical RL">
  </div>
  </div>
 
  <div class="research-area">
  <div class="research-area-content">
  <h2>Foundational Models for Robotics</h2>
  <p>
- We explore how large-scale pre-trained models can be leveraged for robotic manipulation and navigation.
- Our research includes vision-language models for task understanding, diffusion models for motion generation,
- and adapting foundation models to physical robot systems with limited data.
+ We explore how large-scale pre-trained vision-language models can enable natural, flexible
+ robot-human interaction. Our work on <strong>OVITA</strong> (Open-Vocabulary Interpretable Trajectory
+ Adaptations, RA-L 2025) uses vision-language models to adapt robot trajectories in real time based on
+ open-vocabulary human language instructions, providing interpretable modifications without retraining.
+ We also develop <strong>DiffusionPack</strong> (NeurIPS Workshop 2025), a diffusion-based approach for
+ complex bin-packing manipulation tasks that incorporates custom human preferences, bridging language,
+ world models, and physical robot execution.
  </p>
+ <div class="pub-links-inline">
+ <a href="https://ieeexplore.ieee.org/abstract/document/11150730" target="_blank" class="pub-link-text">OVITA (RA-L'25)</a>
+ <a href="https://anurag1000101.github.io/projects/IISC/" target="_blank" class="pub-link-text">Project Site</a>
+ <a href="https://openreview.net/forum?id=uReNc199fG" target="_blank" class="pub-link-text">DiffusionPack</a>
+ </div>
  </div>
  <div class="research-area-image">
- <img src="/assets/img/research_areas/language_robotics .gif" alt="Foundation Models ">
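The impedance-primitive idea added above can be illustrated with a minimal sketch (all names, gains, and the primitive format are hypothetical, not the paper's API): a high-level policy selects a primitive, i.e. a stiffness/damping/goal triple, and a low-level Cartesian impedance law turns it into forces.

```python
import numpy as np

def impedance_force(x, x_dot, primitive):
    """Cartesian impedance law: F = K (x_goal - x) - D x_dot.

    `primitive` is a dict a high-level policy would choose from a small
    library; its keys (stiffness, damping, goal) are illustrative only.
    """
    K = np.diag(primitive["stiffness"])  # stiffness gains per axis
    D = np.diag(primitive["damping"])    # damping gains per axis
    return K @ (primitive["goal"] - x) - D @ x_dot

# One hypothetical primitive from the library: a compliant reach.
soft_reach = {"stiffness": [50.0, 50.0],
              "damping": [10.0, 10.0],
              "goal": np.array([0.4, 0.2])}

# Current end-effector state: position and velocity.
F = impedance_force(np.array([0.3, 0.2]), np.array([0.1, 0.0]), soft_reach)
# F = [50*0.1 - 10*0.1, 0.0] = [4.0, 0.0]
```

Lower stiffness values yield more compliant contact, which is why exposing (K, D) to the high-level policy, rather than raw torques, makes long-horizon contact-rich tasks easier to learn.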
+ <img src="/assets/img/publication_preview/ovita_25.gif" alt="OVITA Trajectory Adaptation">
  </div>
  </div>
 
  <div class="research-area">
  <div class="research-area-content">
  <h2>Safe and Compliant Human-Robot Interaction</h2>
  <p>
- Safety and compliance are essential when robots work alongside humans. We develop adaptive control strategies,
- variable impedance methods, and formal verification techniques to ensure safe physical interaction. Our research
- ensures robots can adapt their behavior based on human intent and environmental constraints.
+ Safety and compliance are essential when robots work alongside humans. Our work on <strong>SafeDMPs</strong>
+ (ICRA 2026) integrates Signal Temporal Logic (STL)-based formal safety specifications directly into Dynamic
+ Movement Primitives, enabling adaptive robot motions that are provably safe during physical human-robot
+ contact. We also develop <strong>certified reinforcement learning for variable impedance control</strong>
+ (ICRA 2026), which combines Lyapunov-based stability certificates with RL to achieve optimal, safe impedance
+ modulation during interaction. Our stability-aware PI² framework (ICRA Workshop 2025) further extends these
+ ideas for safe interaction under uncertainty.
  </p>
+ <div class="pub-links-inline">
+ <a href="https://arxiv.org/abs/2509.16482" target="_blank" class="pub-link-text">SafeDMPs (ICRA'26)</a>
+ <a href="https://tiwari-pranav.github.io/SafeDMPs/" target="_blank" class="pub-link-text">Project Site</a>
+ <a href="https://openreview.net/forum?id=Xj3V96qTpf" target="_blank" class="pub-link-text">CoRL Workshop</a>
+ </div>
  </div>
  <div class="research-area-image">
- <img src="/assets/img/research_areas/safe_hri.gif " alt="Safe HRI">
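The "interpretable trajectory adaptation" notion above can be made concrete with a toy sketch. This is not OVITA's pipeline: there, a vision-language model produces the adaptation from a free-form instruction; here a hand-written rule stands in for the model's output, and all names and thresholds are hypothetical. The key point is that the adaptation is a small, inspectable edit to the waypoints rather than a retrained policy.

```python
import numpy as np

def adapt_trajectory(waypoints, obstacle, clearance, radius=0.15):
    """Lift waypoints that pass within `radius` of `obstacle` by `clearance`.

    Stand-in for a model-issued adaptation such as "go higher over the cup":
    the language model would emit the (obstacle, clearance) parameters, and
    applying them is a transparent, per-waypoint edit to the trajectory.
    """
    out = waypoints.copy()
    near = np.linalg.norm(waypoints[:, :2] - obstacle[:2], axis=1) < radius
    out[near, 2] += clearance  # raise z only where the path is too close
    return out

traj = np.array([[0.0, 0.0, 0.1],
                 [0.2, 0.0, 0.1],
                 [0.4, 0.0, 0.1]])
adapted = adapt_trajectory(traj, obstacle=np.array([0.2, 0.0, 0.0]),
                           clearance=0.1)
# only the middle waypoint lies within 0.15 m of the obstacle in x-y,
# so only its z coordinate changes: 0.1 -> 0.2
```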
+ <img src="/assets/img/publication_preview/safedmps_26.png" alt="Safe DMPs for HRI">
  </div>
  </div>
 
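A small sketch of the STL machinery mentioned above (symbols and thresholds are illustrative, not taken from the paper): for the "globally" operator G, the quantitative robustness of a clearance specification is the minimum margin over the trajectory, so a positive value certifies that every point keeps the required distance, and its magnitude says by how much.

```python
import numpy as np

def always_min_distance_robustness(traj, human_pos, d_min):
    """Robustness of the STL formula G (||x_t - p_human|| >= d_min).

    For "globally" (always), robustness is the minimum over time of the
    per-step margin; rho > 0 means the whole trajectory keeps clearance.
    Embedding such a margin into a DMP lets the motion adapt while
    remaining verifiably safe.
    """
    dists = np.linalg.norm(traj - human_pos, axis=1)
    return float(np.min(dists - d_min))

# A short 2-D trajectory evaluated against a 0.2 m clearance spec.
traj = np.array([[0.5, 0.0], [0.4, 0.1], [0.3, 0.3]])
rho = always_min_distance_robustness(traj,
                                     human_pos=np.array([0.0, 0.0]),
                                     d_min=0.2)
# rho > 0: the closest approach, sqrt(0.17) m at the second waypoint,
# stays above the 0.2 m threshold
```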
@@ -74,21 +103,31 @@ permalink: /research/
  </p>
  </div>
  <div class="research-area-image">
- <img src="/assets/img/research_areas/optimal_control.gif " alt="3D Reconstruction">
+ <img src="/assets/img/others/slam.png" alt="3D Reconstruction and SLAM">
  </div>
  </div>
 
  <div class="research-area">
  <div class="research-area-content">
  <h2>Optimization and Optimal Control</h2>
  <p>
- We develop optimization-based approaches for robot motion planning and control. Our research
- includes adaptive optimal control for uncertain systems, trajectory optimization under constraints,
- and hierarchical planning frameworks that combine learning with classical optimization methods.
+ We develop optimization-based and data-driven approaches for robot motion planning and control.
+ Our <strong>Adaptive Critic</strong> framework (IEEE T-CST 2025) learns optimal controllers for uncertain
+ robot manipulators using neural network-based value function approximation, achieving data-efficient online
+ adaptation without explicit system identification. Our work on <strong>generalizable motion policies through
+ keypoint parameterization and transportation maps</strong> (IEEE T-RO 2025) enables one-shot generalization
+ of manipulation skills to novel object configurations. We also develop <strong>ST²</strong> (Sequentially
+ Teaching Sequential Tasks, RA-M 2026), a framework for teaching robots long-horizon manipulation skills
+ through sequential human demonstrations.
  </p>
+ <div class="pub-links-inline">
+ <a href="https://ieeexplore.ieee.org/abstract/document/10718695" target="_blank" class="pub-link-text">Adaptive Critic (T-CST'25)</a>
+ <a href="https://ieeexplore.ieee.org/abstract/document/11049008" target="_blank" class="pub-link-text">Motion Policies (T-RO'25)</a>
+ <a href="https://ieeexplore.ieee.org/document/11369949" target="_blank" class="pub-link-text">ST² (RA-M'26)</a>
+ </div>
  </div>
  <div class="research-area-image">
- <img src="/assets/img/research_areas/optimal_control .gif" alt="Optimization ">
+ <img src="/assets/img/publication_preview/tcst_ac_24.gif" alt="Adaptive Critic Optimal Control">
  </div>
  </div>
 
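The adaptive-critic idea above, learning the value function of an optimal controller online, has a classical closed-form target in the linear-quadratic case: the fixed point of the discrete-time Riccati equation. A scalar sketch of that target (not the paper's neural-network critic, which handles unknown nonlinear dynamics):

```python
def riccati_fixed_point(a, b, q, r, iters=200):
    """Value iteration on the scalar discrete-time Riccati equation.

    Iterates P <- q + a*P*a - (a*P*b)^2 / (r + b*P*b). The converged P
    defines the quadratic value function V(x) = P x^2 that an adaptive
    critic approximates with a neural network when the dynamics (a, b)
    are uncertain and no explicit system identification is performed.
    """
    p = q  # initialize with the one-step cost
    for _ in range(iters):
        p = q + a * p * a - (a * p * b) ** 2 / (r + b * p * b)
    return p

p_star = riccati_fixed_point(a=1.0, b=1.0, q=1.0, r=1.0)
# for a = b = q = r = 1 the fixed point is the golden ratio (1 + sqrt 5)/2
```

The contraction toward this fixed point is what the critic's temporal-difference updates emulate, which is why value-function learning can stand in for solving the Riccati equation when the model is unknown.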