Publications

2024
PhysicsAssistant: An LLM-Powered Interactive Learning Robot for Physics Lab Investigations
Ehsan Latif, Ramviyas Parasuraman, Xiaoming Zhai
Conference: The 33rd IEEE International Conference on Robot and Human Interactive Communication (IEEE RO-MAN 2024), 2024.
DOI: 10.1109/RO-MAN60168.2024.10731312
Tags: assistive devices, human-robot interaction, human-robot interface

Abstract: Robot systems in education can leverage large language models' (LLMs) natural language understanding capabilities to provide assistance and facilitate learning. This paper proposes a multimodal interactive robot (PhysicsAssistant) built on YOLOv8 object detection, cameras, speech recognition, and an LLM-based chatbot to assist students in physics labs. We conducted a user study with ten 8th-grade students to empirically evaluate the performance of PhysicsAssistant alongside a human expert. The expert rated the assistants' responses to student queries on a 0-4 scale based on Bloom's taxonomy for educational support. Comparing PhysicsAssistant (YOLOv8 + GPT-3.5-turbo) with GPT-4, we found that the human expert rated both systems the same for factual understanding, while GPT-4's ratings for conceptual and procedural knowledge (3 and 3.2 vs. 2.2 and 2.6, respectively) were significantly higher than PhysicsAssistant's (p < 0.05). However, GPT-4's response time was significantly longer than PhysicsAssistant's (3.54 vs. 1.64 sec, p < 0.05). Hence, despite its lower response quality relative to GPT-4, PhysicsAssistant shows potential as a real-time lab assistant that provides timely responses and can offload teachers' labor on repetitive tasks. To the best of our knowledge, this is the first attempt to build such an interactive multimodal robotic assistant for K-12 science (physics) education.

BibTeX:
@conference{Latif2024bb,
  title     = {PhysicsAssistant: An LLM-Powered Interactive Learning Robot for Physics Lab Investigations},
  author    = {Ehsan Latif and Ramviyas Parasuraman and Xiaoming Zhai},
  doi       = {10.1109/RO-MAN60168.2024.10731312},
  year      = {2024},
  date      = {2024-08-30},
  booktitle = {The 33rd IEEE International Conference on Robot and Human Interactive Communication, IEEE RO-MAN 2024},
  keywords  = {assistive devices, human-robot interaction, human-robot interface},
  pubstate  = {published},
  tppubtype = {conference}
}
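Since the abstract names the concrete building blocks (YOLOv8 perception, speech recognition, a GPT-3.5-turbo chatbot), a minimal sketch of how such a perception-grounded question-answering loop could be wired together may be useful. It assumes the ultralytics, speech_recognition, opencv-python, and openai packages, which the paper does not confirm; every function and variable name below is illustrative, not the authors' implementation.

```python
# Minimal sketch of a perception-plus-LLM lab-assistant loop. The paper names
# YOLOv8 and GPT-3.5-turbo but not these exact libraries; all names here are
# illustrative assumptions.
import cv2
import speech_recognition as sr
from ultralytics import YOLO
from openai import OpenAI

detector = YOLO("yolov8n.pt")   # pretrained YOLOv8 object detector
recognizer = sr.Recognizer()
client = OpenAI()               # reads OPENAI_API_KEY from the environment

def listen_for_question() -> str:
    """Capture one spoken student query from the microphone."""
    with sr.Microphone() as source:
        audio = recognizer.listen(source)
    return recognizer.recognize_google(audio)

def describe_scene(camera_index: int = 0) -> str:
    """Grab one camera frame and summarize detected lab objects as text."""
    cap = cv2.VideoCapture(camera_index)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        return "no camera frame available"
    result = detector(frame)[0]
    labels = [result.names[int(c)] for c in result.boxes.cls]
    return ", ".join(labels) if labels else "no objects detected"

def answer(question: str, scene: str) -> str:
    """Ground the LLM response in the detected scene context."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "You are a physics lab assistant for 8th-grade students."},
            {"role": "user",
             "content": f"Objects visible on the bench: {scene}.\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    q = listen_for_question()
    print(answer(q, describe_scene()))
```

Grounding the prompt in the detected object labels is what makes the assistant multimodal: the LLM sees a textual summary of the bench rather than raw pixels, which keeps the round trip fast enough for the real-time use the paper emphasizes.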
2022
Sharing Autonomy of Exploration and Exploitation via Control Interface
Aiman Munir, Ramviyas Parasuraman
Workshop: ICRA 2022 Workshop on Shared Autonomy in Physical Human-Robot Interaction: Adaptability and Trust, 2022.
URL: https://sites.google.com/view/saphri-icra2022/contributions
Tags: autonomy, human-robot interface, trust

Abstract: Shared autonomy is a control paradigm in which a robot's autonomy level adapts to dynamic environments while simultaneously taking human intentions and status into account. The autonomy level can be changed based on internal/external information and human input. However, there are no clear guidelines or studies that help us understand "when" a robot should adapt its autonomy level across different functionalities. In this paper, we therefore create a framework that improves the human-robot control interface by allowing humans to adapt the robot's autonomy level, along with a study design to gather insights into humans' preferences for switching autonomy levels based on the current situation. We define two high-level strategies for a search and rescue task: Exploration, to gather more data, and Exploitation, to make use of current data. Each strategy can be carried out through human input or by autonomous algorithms. We intend to understand human preferences regarding the autonomy levels of these two strategies, and "when" operators want to switch. The analysis is expected to provide insights for designing shared autonomy schemes and algorithms that consider human preferences when adaptively using autonomy levels of certain high-level strategies.

BibTeX:
@workshop{Munir2022,
  title     = {Sharing Autonomy of Exploration and Exploitation via Control Interface},
  author    = {Aiman Munir and Ramviyas Parasuraman},
  url       = {https://sites.google.com/view/saphri-icra2022/contributions},
  year      = {2022},
  date      = {2022-05-23},
  booktitle = {ICRA 2022 Workshop on Shared Autonomy in Physical Human-Robot Interaction: Adaptability and Trust},
  keywords  = {autonomy, human-robot interface, trust},
  pubstate  = {published},
  tppubtype = {workshop}
}
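As a companion to the abstract, here is a minimal sketch of the kind of strategy switch it describes: an autonomous rule picks Exploration or Exploitation from map state, and a human choice made through the control interface overrides it. The Exploration/Exploitation split comes from the paper; the grid representation, the coverage heuristic, and all names are assumptions for illustration, not the paper's framework.

```python
# Minimal sketch of a shared-autonomy strategy switch for search and rescue,
# assuming a discretized map of visit counts and a detection-likelihood field.
# The two strategies come from the abstract; everything else is illustrative.
from enum import Enum
from typing import Optional
import numpy as np

class Strategy(Enum):
    EXPLORATION = "exploration"    # gather more data: visit least-seen cells
    EXPLOITATION = "exploitation"  # use current data: go to likely victims

def autonomous_choice(visits: np.ndarray, likelihood: np.ndarray,
                      coverage_threshold: float = 0.6) -> Strategy:
    """Pick a strategy from map state: explore until coverage is adequate."""
    coverage = np.count_nonzero(visits) / visits.size
    return (Strategy.EXPLOITATION if coverage >= coverage_threshold
            else Strategy.EXPLORATION)

def select_goal(strategy: Strategy, visits: np.ndarray, likelihood: np.ndarray):
    """Map the chosen strategy to a goal cell on the grid."""
    if strategy is Strategy.EXPLORATION:
        return np.unravel_index(np.argmin(visits), visits.shape)
    return np.unravel_index(np.argmax(likelihood), likelihood.shape)

def step(visits: np.ndarray, likelihood: np.ndarray,
         human_override: Optional[Strategy] = None):
    """One control-interface tick: the human's choice, when given, wins."""
    strategy = (human_override if human_override is not None
                else autonomous_choice(visits, likelihood))
    return strategy, select_goal(strategy, visits, likelihood)

# Example: the operator forces Exploitation although coverage is still low.
visits = np.zeros((20, 20))
visits[:5, :5] = 1
likelihood = np.random.default_rng(0).random((20, 20))
print(step(visits, likelihood, human_override=Strategy.EXPLOITATION))
```

The single `human_override` hook is the point of interest for the study the paper proposes: logging when operators exercise it, and under what map conditions, is one way to gather the "when to switch" preference data the abstract calls for.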