2024
PhysicsAssistant: An LLM-Powered Interactive Learning Robot for Physics Lab Investigations. Conference: The 33rd IEEE International Conference on Robot and Human Interactive Communication (IEEE RO-MAN 2024), 2024.

@conference{Latif2024bb,
  title = {PhysicsAssistant: An LLM-Powered Interactive Learning Robot for Physics Lab Investigations},
  author = {Ehsan Latif and Ramviyas Parasuraman and Xiaoming Zhai},
  doi = {10.1109/RO-MAN60168.2024.10731312},
  year = {2024},
  date = {2024-08-30},
  booktitle = {The 33rd IEEE International Conference on Robot and Human Interactive Communication, IEEE RO-MAN 2024},
  abstract = {Robot systems in education can leverage the natural language understanding capabilities of large language models (LLMs) to provide assistance and facilitate learning. This paper proposes a multimodal interactive robot (PhysicsAssistant) that combines YOLOv8 object detection, cameras, speech recognition, and an LLM-based chatbot to assist students in physics labs. We conducted a user study with ten 8th-grade students to empirically evaluate the performance of PhysicsAssistant with a human expert. The expert rated the assistants' responses to student queries on a 0-4 scale based on Bloom's taxonomy for providing educational support. Comparing PhysicsAssistant (YOLOv8 + GPT-3.5-turbo) with GPT-4, we found that the human expert ratings of both systems for factual understanding are the same, whereas the ratings of GPT-4 for conceptual and procedural knowledge (3 and 3.2 vs. 2.2 and 2.6, respectively) are significantly higher than those of PhysicsAssistant (p $<$ 0.05). However, the response time of GPT-4 is also significantly higher than that of PhysicsAssistant (3.54 vs. 1.64 sec, p $<$ 0.05). Hence, despite its relatively lower response quality compared to GPT-4, PhysicsAssistant has shown potential as a real-time lab assistant that provides timely responses and can offload teachers' labor by assisting with repetitive tasks. To the best of our knowledge, this is the first attempt to build such an interactive multimodal robotic assistant for K-12 science (physics) education.},
  keywords = {assistive devices, human-robot interaction, human-robot interface},
  pubstate = {published},
  tppubtype = {conference}
}
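The abstract above describes a perception-to-prompt pipeline: YOLOv8 detections and a transcribed student query are fused into a single grounded prompt for the LLM backend. A minimal sketch of that shape is below; every function name and the stubbed detector/transcriber outputs are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch of a multimodal lab-assistant pipeline: object detection
# and speech transcription feed one text prompt for an LLM backend.

def detect_objects(frame):
    """Stand-in for a YOLOv8 detector; returns labels seen in the camera frame."""
    # A real system would run YOLO inference on `frame`; here we stub the output.
    return ["inclined plane", "cart", "meter stick"]

def transcribe(audio):
    """Stand-in for a speech-recognition step returning the student's query."""
    return "Why does the cart speed up going down the ramp?"

def build_prompt(frame, audio):
    """Fuse perception and speech into one scene-grounded prompt for the LLM."""
    objects = ", ".join(detect_objects(frame))
    query = transcribe(audio)
    return (f"Lab scene contains: {objects}.\n"
            f"Student (8th grade) asks: {query}\n"
            "Answer at a middle-school level.")

if __name__ == "__main__":
    print(build_prompt(frame=None, audio=None))
```

In a deployed system the stubs would be replaced by real detector and ASR calls, and the returned prompt sent to the chat model; the fusion step itself stays this simple.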
2022
On Physical Compatibility of Robots in Human-Robot Collaboration Settings. Workshop: ICRA 2022 Workshop on Collaborative Robots and the Work of the Future, 2022.

@workshop{Pandey2022b,
  title = {On Physical Compatibility of Robots in Human-Robot Collaboration Settings},
  author = {Pranav Pandey and Ramviyas Parasuraman and Prashant Doshi},
  url = {https://sites.google.com/view/icra22ws-cor-wotf/accepted-papers},
  year = {2022},
  date = {2022-05-23},
  booktitle = {ICRA 2022 Workshop on Collaborative Robots and the Work of the Future},
  abstract = {Human-Robot Interaction (HRI) is a multidisciplinary field. It has become essential for robots to work with humans in collaboration and teamwork settings, such as collaborative assembly, where they share tasks in an overlapping workspace. While extensive research exists on ensuring successful HRI, primarily focusing on safety factors, our objective is to provide a comprehensive perspective on robots' compatibility with humans in such settings. Specifically, we highlight the key pillars and elements of physical Human-Robot Interaction (pHRI) and discuss valuable metrics for evaluating such systems. To achieve compatibility, we propose that the robot ensure humans' safety, flexibility in tasks, and robustness to changes in the environment. Ultimately, these elements will help assess robots' awareness of humans and their surroundings, and help increase the trustworthiness of robots among human collaborators.},
  keywords = {human-robot interaction},
  pubstate = {published},
  tppubtype = {workshop}
}
2020
Needs-driven Heterogeneous Multi-Robot Cooperation in Rescue Missions. Conference: 2020 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR 2020), 2020.

@conference{Yang2020b,
  title = {Needs-driven Heterogeneous Multi-Robot Cooperation in Rescue Missions},
  author = {Qin Yang and Ramviyas Parasuraman},
  url = {https://arxiv.org/abs/2009.00288},
  year = {2020},
  date = {2020-11-06},
  booktitle = {2020 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR 2020)},
  abstract = {This paper focuses on the teaming aspects and the role of heterogeneity in a multi-robot system applied to robot-aided urban search and rescue (USAR) missions. We specifically propose a needs-driven multi-robot cooperation mechanism represented through a Behavior Tree structure and evaluate the performance of the system in terms of the group utility and energy cost required to achieve the rescue mission in a limited time. Through theoretical analysis, we prove that needs-driven cooperation in a heterogeneous robot system enables higher group utility than in a homogeneous robot system. We also perform simulation experiments to verify the proposed needs-driven cooperation and show that heterogeneous multi-robot cooperation can achieve better performance and increase system robustness by reducing uncertainty in task execution. Finally, we discuss the application to human-robot teaming.},
  keywords = {human-robot interaction, multi-robot-systems, robotics},
  pubstate = {published},
  tppubtype = {conference}
}
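The abstract above represents needs-driven cooperation through a Behavior Tree. A toy sketch of that idea follows, assuming a simple sequence of "need" nodes that act only when their need is unsatisfied; the node types and the specific needs are illustrative, not the paper's actual tree.

```python
# Toy behavior-tree sketch in the spirit of needs-driven cooperation:
# each Need node acts only if its need is not yet satisfied in the shared state.
# Node names and the priority ordering below are hypothetical.

class Node:
    def tick(self, state):
        raise NotImplementedError

class Need(Node):
    """Needs-driven pattern: check the need, act only when it is unmet."""
    def __init__(self, key):
        self.key = key
    def tick(self, state):
        if not state.get(self.key):
            state[self.key] = True   # simulate the robot satisfying the need
        return "success"

class Sequence(Node):
    """Tick children in order; fail as soon as any child fails."""
    def __init__(self, children):
        self.children = children
    def tick(self, state):
        for child in self.children:
            if child.tick(state) != "success":
                return "failure"
        return "success"

# Needs ordered by mission priority: secure the area, locate, then rescue.
tree = Sequence([Need("area_safe"), Need("victims_located"), Need("victims_rescued")])

state = {}
result = tree.tick(state)
```

In a heterogeneous team, each robot would run a tree like this over a shared state, with robots of different capabilities satisfying different needs; that division of labor is where the paper argues the utility gain comes from.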
Robot Controlling Robots - A New Perspective to Bilateral Teleoperation in Mobile Robots. Workshop: RSS 2020 Workshop on Reacting to Contact: Enabling Transparent Interactions through Intelligent Sensing and Actuation, 2020.

@workshop{Tahir2020,
  title = {Robot Controlling Robots - A New Perspective to Bilateral Teleoperation in Mobile Robots},
  author = {Nazish Tahir and Ramviyas Parasuraman},
  url = {https://ankitbhatia.github.io/reacting_contact_workshop/},
  year = {2020},
  date = {2020-07-12},
  booktitle = {RSS 2020 Workshop on Reacting to Contact: Enabling Transparent Interactions through Intelligent Sensing and Actuation},
  abstract = {Adaptation to increasing levels of autonomy - from manual teleoperation to complete automation - is of particular interest to the Field Robotics and Human-Robot Interaction communities. Along that line of research, we introduce and investigate a novel bilateral teleoperation control strategy for a robot-to-robot system. A bilateral teleoperation scheme is typically applied to human control of robots. In this abstract, we take a different perspective: a bilateral teleoperation system between robots, where one robot (Labor) is teleoperated by an autonomous robot (Master). To realize such a strategy, our proposed robot system follows a master-labor networked scheme in which the master robot is located at a remote site, operable by a human user or an autonomous agent, and the labor (follower) robot is located at the operation site. The labor robot is capable of reflecting the odometry commands of the master robot while also navigating its environment through an obstacle detection and avoidance mechanism. An autonomous algorithm, such as a typical SLAM-based path planner, controls the master robot, which is provided with force feedback informative of the labor robot's response to its interaction with the environment. We perform preliminary experiments to verify the system's feasibility and analyze the motion transparency in different scenarios. The results show promise for investigating this research further and developing this work towards human multi-robot teleoperation.},
  keywords = {control, human-robot interaction, networking, robotics},
  pubstate = {published},
  tppubtype = {workshop}
}
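The master-labor loop described in the abstract above can be sketched as two small functions: the labor robot mirrors the master's velocity command unless its own obstacle check intervenes, and the master receives force feedback proportional to the gap between commanded and executed motion. The gains, the distance-based slowdown rule, and all names here are illustrative assumptions, not the authors' controller.

```python
# Hypothetical sketch of a bilateral master-labor teleoperation loop.
# The labor side scales down the mirrored command near obstacles; the master
# side feels a feedback force proportional to the command deviation.

def labor_step(master_cmd, obstacle_distance, safe_distance=0.5):
    """Follow the master's forward velocity, scaled down near obstacles."""
    if obstacle_distance < safe_distance:
        scale = max(0.0, obstacle_distance / safe_distance)
        return master_cmd * scale   # slow down (to a stop) as obstacles close in
    return master_cmd

def force_feedback(master_cmd, labor_cmd, gain=2.0):
    """Feedback force grows with the gap between commanded and executed motion."""
    return gain * (master_cmd - labor_cmd)

# One cycle: master commands 1.0 m/s, labor sees an obstacle 0.25 m away.
executed = labor_step(master_cmd=1.0, obstacle_distance=0.25)
force = force_feedback(1.0, executed)
```

The feedback term is what makes the scheme bilateral: the master's planner can sense, through the force channel, that the labor robot deviated from the commanded motion because of its environment.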