Publications

2024
Siva Krishna Ravipati, Ehsan Latif, Suchendra Bhandarkar, Ramviyas Parasuraman: Object-Oriented Material Classification and 3D Clustering for Improved Semantic Perception and Mapping in Mobile Robots (Conference)
2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2024), pp. 9729-9736, 2024.
Links: https://ieeexplore.ieee.org/document/10801936 | DOI: 10.1109/IROS58592.2024.10801936
Tags: learning, mapping, perception

Abstract: Classification of different object surface material types can play a significant role in the decision-making algorithms for mobile robots and autonomous vehicles. RGB-based scene-level semantic segmentation has been well-addressed in the literature. However, improving material recognition using the depth modality and its integration with SLAM algorithms for 3D semantic mapping could unlock new potential benefits in the robotics perception pipeline. To this end, we propose a complementarity-aware deep learning approach for RGB-D-based material classification built on top of an object-oriented pipeline. The approach further integrates the ORB-SLAM2 method for 3D scene mapping with multiscale clustering of the detected material semantics in the point cloud map generated by the visual SLAM algorithm. Extensive experimental results with existing public datasets and newly contributed real-world robot datasets demonstrate a significant improvement in material classification and 3D clustering accuracy compared to state-of-the-art approaches for 3D semantic scene mapping.
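The entry does not include an implementation, so the following Python snippet is only a rough, non-authoritative sketch of what multiscale clustering of material semantics in a SLAM point cloud could look like. The function name, the eps scales, and the choice of DBSCAN are illustrative assumptions, not the authors' method; the SLAM side (ORB-SLAM2) is omitted entirely, and per-point material labels are assumed to come from an upstream RGB-D classifier.

```python
# Illustrative sketch only -- NOT the authors' implementation.
# Assumes an (N, 3) point cloud with per-point material labels
# (e.g., from an RGB-D material classifier); clusters each material
# at several spatial scales with DBSCAN.
import numpy as np
from sklearn.cluster import DBSCAN

def multiscale_material_clusters(points, labels,
                                 eps_scales=(0.05, 0.10, 0.20),
                                 min_samples=10):
    """Return {material: [cluster ids per scale]} for an (N, 3) cloud."""
    clusters = {}
    for material in np.unique(labels):
        pts = points[labels == material]
        # One DBSCAN pass per spatial scale (eps in map units, e.g., meters).
        clusters[material] = [
            DBSCAN(eps=eps, min_samples=min_samples).fit_predict(pts)
            for eps in eps_scales
        ]
    return clusters

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts = rng.normal(size=(500, 3))      # stand-in for a SLAM point cloud
    mats = rng.integers(0, 3, size=500)  # stand-in material labels
    out = multiscale_material_clusters(pts, mats)
    print({m: [len(set(ids)) for ids in v] for m, v in out.items()})
```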
2020
Shyam Sundar Kannan, Wonse Jo, Ramviyas Parasuraman, Byung-Cheol Min: Material Mapping in Unknown Environments using Tapping Sound (Conference)
2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2020), 2020.
Tags: mapping, perception, robotics

Abstract: In this paper, we propose an autonomous exploration and tapping mechanism-based material mapping system for a mobile robot in unknown environments. The proposed system integrates SLAM modules and sound-based material classification to enable a mobile robot to explore an unknown environment autonomously and at the same time identify the various objects and materials in the environment in an efficient manner, creating a material map which localizes the various materials in the environment over the occupancy grid. A tapping mechanism and tapping audio signal processing based on machine learning techniques are exploited for a robot to identify the objects and materials. We demonstrate the proposed system through experiments using a mobile robot platform installed with Velodyne LiDAR, a linear solenoid, and microphones in an exploration-like scenario with various materials. Experiment results demonstrate that the proposed system can create useful material maps in unknown environments.
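The abstract does not name the exact audio features or classifier, so the sketch below shows only one plausible shape for a tap-sound material classifier: MFCC statistics (librosa) fed to an SVM (scikit-learn). The function names, the feature choice, and the I/O convention (one labeled WAV file per tap) are all hypothetical stand-ins rather than the authors' pipeline.

```python
# Illustrative sketch only -- features and classifier are assumptions;
# the paper's exact audio pipeline is not specified in the abstract.
import numpy as np
import librosa
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def tap_features(y, sr):
    """Summarize one tapping recording as MFCC mean/std statistics."""
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

def train_material_classifier(wav_paths, material_labels):
    """Fit an SVM on features from labeled tap recordings (hypothetical I/O)."""
    X = []
    for path in wav_paths:
        y, sr = librosa.load(path, sr=None)  # keep native sample rate
        X.append(tap_features(y, sr))
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
    clf.fit(np.vstack(X), material_labels)
    return clf
```

In a system like the one described, each predicted label would then be written into the occupancy-grid cell where the tap occurred to build up the material map.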
2019
Mohamed Haseeb, Ramviyas Parasuraman: Wisture: Touch-less Hand Gesture Classification in Unmodified Smartphones Using Wi-Fi Signals (Journal Article)
IEEE Sensors Journal, 19(1), pp. 257-267, 2019.
Links: https://ieeexplore.ieee.org/document/8493572 | DOI: 10.1109/JSEN.2018.2876448
Tags: networking, perception, robotics

Abstract: This paper introduces Wisture, a new online machine learning solution for recognizing touch-less dynamic hand gestures on a smartphone. Wisture relies on the standard Wi-Fi Received Signal Strength (RSS) using a Long Short-Term Memory (LSTM) Recurrent Neural Network (RNN), thresholding filters and traffic induction. Unlike other Wi-Fi based gesture recognition methods, the proposed method does not require a modification of the smartphone hardware or the operating system, and performs the gesture recognition without interfering with the normal operation of other smartphone applications. We discuss the characteristics of Wisture, and conduct extensive experiments to compare its performance against state-of-the-art machine learning solutions in terms of both accuracy and time efficiency. The experiments include a set of different scenarios in terms of both spatial setup and traffic between the smartphone and Wi-Fi access points (AP). The results show that Wisture achieves an online recognition accuracy of up to 94% (average 78%) in detecting and classifying three hand gestures.
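The paper specifies an LSTM RNN over Wi-Fi RSS time series, but the abstract gives no layer sizes or window lengths, so the Keras sketch below is a minimal stand-in under assumed values (a 200-sample window and a single 32-unit LSTM layer) for classifying the three gestures. It omits the paper's thresholding filters and traffic induction.

```python
# Illustrative sketch only -- layer sizes, window length, and training
# setup are assumptions; the paper specifies only an LSTM over Wi-Fi RSS.
import tensorflow as tf

WINDOW_LEN = 200   # assumed number of RSS samples per gesture window
NUM_GESTURES = 3   # the paper classifies three hand gestures

def build_rss_lstm():
    """LSTM classifier mapping one RSS window to one of three gestures."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(WINDOW_LEN, 1)),  # univariate RSS series
        tf.keras.layers.LSTM(32),
        tf.keras.layers.Dense(NUM_GESTURES, activation="softmax"),
    ])

model = build_rss_lstm()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(rss_windows, gesture_labels, ...)  # (N, WINDOW_LEN, 1) float32
```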