Latest News
Jan. 2024: Four papers accepted at ICRA'24, one of the premier conferences in robotics.
A big thank you to my students and collaborators!
1- Lifelong Robot Library Learning: Bootstrapping Composable and Generalizable Skills for Embodied Control with Language Models
2- Harnessing the Synergy between Pushing, Grasping, and Throwing to Enhance Object Manipulation in Cluttered Scenarios
3- Self-supervised Learning for Joint Pushing and Grasping Policies in Highly Cluttered Environments
4- TiV-ODE: A Neural ODE-based Approach for Controllable Video Generation From Text-Image Pairs
Dec. 2023: Our paper titled
Lifelong ensemble learning based on multiple representations for few-shot object recognition was accepted to Robotics and Autonomous Systems (RAS)! [open-access]
Sep. 2023: Our paper titled
Language-guided Robot Grasping: CLIP-based Referring Grasp Synthesis in Clutter was accepted to the
2023 Conference on Robot Learning (CoRL)!
[video]
June 2023: Three papers accepted at
IROS'23, one of the premier conferences in robotics.
A big thank you to my students and collaborators!
1- Early or Late Fusion Matters: Efficient RGB-D Fusion in Vision Transformers for 3D Object Recognition [video]
2- Enhancing Fine-Grained 3D Object Recognition using Hybrid Multi-Modal Vision Transformer-CNN Models [video]
3- L3MVN: Leveraging Large Language Models for Visual Target Navigation [video]
June 2023: I received a
Google Research Scholar Award in the field of Machine Learning for my work on Continual Robot Learning in Human-centered Environments! Thanks to Google for their generous support!
June 2023: I have been selected as an
Outstanding Associate Editor for the IEEE Robotics and Automation Letters!
April 2023: Our paper titled
MORE: Simultaneous Multi-View 3D Object Recognition and Pose Estimation was accepted to
Intelligent Service Robotics! [open-access]
March 2023:
We will organize a full-day workshop on the topic of
"Interdisciplinary Exploration of Generalizable Manipulation Policy Learning: Paradigms and Debates" at
RSS 2023.
Feb. 2023: Zhenxing Zhang successfully defended his Ph.D. thesis
Generative Adversarial Networks for Diverse and Explainable Text-to-Image Generation. Congratulations, Zhenxing!
Jan. 2023: Hamed Ayoobi successfully defended his Ph.D. thesis
Explain What You See: Argumentation-Based Learning and Robotic Vision. Congratulations, Hamed!
Jan. 2023: Hamidreza Kasaei is serving as
an associate editor for
IROS 2023!
Jan. 2023: Four of our papers have been accepted at
ICRA'23, one of the premier conferences in robotics.
A big thank you to my students and collaborators!
1- Throwing Objects into A Moving Basket While Avoiding Obstacles. [video]
2- Explain What You See: Open-Ended Segmentation and Recognition of Occluded 3D Objects. [video]
3- Instance-wise Grasp Synthesis for Robotic Grasping. [video]
4- Frontier Semantic Exploration for Visual Target Navigation. [video]
Nov. 2022: Hamidreza Kasaei gave an invited talk at the
AI & Robotics in Healthcare | Data Science Center in Health (DASH) on
Towards Lifelong Assistive Robotics: How to make life easier for people with disabilities?
[video]
Nov. 2022: Hamidreza Kasaei gave an invited talk at the
University of Aveiro, Portugal | Seminar in Robotics and Intelligent Systems on
Robotics for Society: How robots can help us with a wide variety of tasks in different domains incrementally?
Oct. 2022: Our paper titled
MVGrasp: Real-Time Multi-View 3D Object Grasping in Highly Cluttered Environments was accepted to
Robotics and Autonomous Systems (RAS)! [open-access]
Sep. 2022: Hamidreza Kasaei is serving as
an associate editor for the
IEEE Robotics and Automation Letters (RA-L)!
Sep. 2022: Hamidreza Kasaei is serving as
an associate editor for
IEEE ICRA 2023!
June 2022: We will organize a full-day workshop, the
5th Robot Learning Workshop: Trustworthy Robotics, at
NeurIPS 2022.
March 2022: Our paper titled
Sim-to-Real Transfer of Visual Grounding for Human-Aided Ambiguity Resolution was accepted to the
Conference on Lifelong Learning Agents (CoLLAs 2022)! Congrats, Georgios!
March 2022: Our paper titled
Lifelong 3D Object Recognition and Grasp Synthesis using Dual Memory Recurrent Self-Organization Networks was accepted to the
Neural Networks journal! Congrats, Krishna!
Jan. 2022: Hamidreza Kasaei is serving as
an associate editor for
IEEE/RSJ IROS 2022!
Research & Publications
My research interests lie at the intersection of robotics, machine learning,
and machine vision.
I am interested in developing algorithms for intelligent robotic systems based on lifelong/continual learning and active exploration, enabling robots to help humans with a variety of daily tasks.
I have been investigating active perception and manipulation, where robots use their mobility and manipulation capabilities to build better models of the world. I have evaluated my work on different platforms, including the PR2, robotic arms, and humanoid robots.
Please visit the publications page of my research group (IRL-Lab) if you are interested in learning more about our research.
My research group (IRL-Lab) mainly focuses on interactive robot learning to
make robots capable of learning in an open-ended fashion by interacting with non-expert human users.
More specifically, we pursue this goal along six main research directions:
1- Perception and Perceptual Learning
We are interested in attaining a 3D understanding of the world around the robot. In particular,
the perception system provides the information the robot needs to interact with users and its
environment.
2- Object Grasping and Object Manipulation
A service robot must be able to interact with the environment as well as human users.
We are interested in fundamental research in object-agnostic grasping, affordance detection, task-informed grasping, and object manipulation.
3- Lifelong Interactive Robot Learning
A service robot cannot be pre-programmed with everything it will ever need to know; it should keep acquiring new object categories and tasks after deployment.
We are interested in open-ended and continual learning approaches that allow robots to learn from non-expert human users without forgetting previously acquired knowledge.
4- Dual-Arm Manipulation
A dual-arm robot offers the manipulability and maneuverability needed
to accomplish a range of everyday tasks (e.g., dishwashing, hammering).
We are interested in efficient imitation learning, collaborative manipulation, and large-object manipulation.
5- Dynamic Robot Motion Planning
We are interested in attaining fully reactive manipulation capabilities in a closed-loop manner.
A reactive system must continuously check whether the robot is at risk of colliding, whereas a planner must validate every configuration the robot may attempt to use (see the brief sketch after this list).
6- Exploiting Multimodality
A service robot may sense the world through different modalities that provide visual, haptic, or auditory cues about the environment.
In this vein, we are interested in exploiting multimodality to learn better representations and improve the robot's performance.
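As a toy illustration of the closed-loop idea behind reactive motion planning (this is only a sketch, not our actual controller; the 2D point robot, the potential-field update rule, and all parameter values are illustrative assumptions), the following Python snippet recomputes a velocity command at every control cycle and reacts only to obstacles that currently fall within a safety margin:

import numpy as np

def reactive_velocity(position, goal, obstacles, safety_radius=0.3, gain=1.0):
    """Attractive velocity toward the goal plus a repulsive term for nearby obstacles."""
    velocity = gain * (goal - position)                  # attraction toward the goal
    for obstacle in obstacles:
        offset = position - obstacle
        distance = np.linalg.norm(offset)
        if distance < safety_radius:                     # react only to imminent collision risk
            velocity += (safety_radius - distance) / max(distance, 1e-6) * offset
    return velocity

# Toy rollout: a 2D point robot steers around one obstacle on its way to the goal.
position = np.array([0.0, 0.0])
goal = np.array([1.0, 1.0])
obstacles = [np.array([0.5, 0.45])]
for _ in range(200):                                     # one velocity command per control cycle
    position = position + 0.05 * reactive_velocity(position, goal, obstacles)
print(position)                                          # ends close to the goal

In contrast, a sampling-based planner would validate every configuration along a candidate path before execution.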
Contact
Dr. Hamidreza Kasaei
Artificial Intelligence Department,
University of Groningen,
Bernoulliborg building,
Nijenborgh 9, 9747 AG Groningen,
The Netherlands.
Office: 340
Tel: +31-50-363-33926
Email: hamidreza.kasaei@rug.nl