Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor | School of Design | en_US |
dc.contributor.advisor | Luximon, Yan (SD) | en_US |
dc.creator | Gui, Shun | - |
dc.identifier.uri | https://theses.lib.polyu.edu.hk/handle/200/13668 | - |
dc.language | English | en_US |
dc.publisher | Hong Kong Polytechnic University | en_US |
dc.rights | All rights reserved | en_US |
dc.title | Anticipatory human-robot handover interaction model in an assistive robot design | en_US |
dcterms.abstract | The current trend of population aging and the increasing number of individuals with disabilities have drawn significant attention from governments worldwide. This demographic shift has led to a shortage of caregivers, exacerbating the issue and raising concerns within society. In response to the needs of individuals with limited mobility in home settings, such as those who are bedridden or use wheelchairs, there has been a surge in the development of assistive robotic technologies, and various assistive robot products have been introduced to address the demand for retrieving everyday objects in these scenarios. Despite significant advances in robotics, effective assistive robots remain difficult to develop because they must operate in unstructured environments, where objects are scattered in varying locations and orientations. Manipulating objects in such dynamic environments requires robust perception, adaptability, and intelligent decision-making. A further challenge is creating safe, reliable, and user-friendly human-robot interaction by integrating technologies such as computer vision, manipulation, and human-robot interaction design. To meet these needs and overcome these challenges, I propose this research with the primary objective of developing a robotic system capable of grasping specific items based on user instructions and delivering them to the user's hand. To achieve this goal, I conduct research on robot recognition, grasping, and control technologies, as well as human-robot handover interaction design. | en_US |
dcterms.abstract | In Study 1, I perform robot-to-human handover simulation experiments that investigate a range of issues in the robot-to-human handover scenario. The primary objective of these experiments is to gain insight into users' genuine requirements and to identify the key research considerations from both the robot's and the user's perspectives in this task. Through the simulation experiments, I identify several challenging but significant robot techniques and key factors in this human-robot interaction; the subsequent studies focus on addressing these aspects. | en_US |
dcterms.abstract | In Study 2, I propose a 3D object detection algorithm called Recursive Cross-View (RCV) that can be rapidly applied to recognize various items in different robot scenarios. RCV leverages the three-view principle, transforming 3D detection into multiple 2D detection tasks that require only a subset of 2D labels. It introduces a recursive paradigm in which instance segmentation and cross-view 3D bounding box generation are performed repeatedly until convergence (a rough sketch of this cross-view idea appears after the metadata table). Evaluations on the SUN RGB-D and KITTI datasets demonstrate that the proposed method outperforms existing image-based methods. To showcase how rapidly RCV can be applied to new tasks, I implement it in two real-world scenarios: 3D human detection and 3D hand detection. As a result, two new 3D-annotated datasets are obtained, indicating that RCV can serve as a (semi-)automatic 3D annotator. Furthermore, I deploy RCV on a real robot, achieving real-time 3D object detection at 7 frames per second on live RGB-D streams. RCV can therefore be used to recognize various objects for robots in robot-to-human handover scenarios. | en_US |
dcterms.abstract | In Study 3, I propose a novel 6-DoF robot grasp pose detection approach called GoalGrasp that requires neither grasp pose annotations nor training. It enables user-specified object grasping even in partially occluded scenes in robot-to-human handover scenarios. By combining 3D bounding boxes with human grasp priors, GoalGrasp introduces a new paradigm for grasp pose detection (an illustrative sketch of this prior-based generation appears after the metadata table). Leveraging the RCV 3D object detector, which operates without 3D annotations, GoalGrasp achieves rapid 3D detection in new scenes, and by integrating 3D bounding box information with human grasp priors it achieves dense grasp pose detection. An experimental evaluation involving 18 common objects generates dense grasp poses for 1000 scenes without grasp training, establishing a comprehensive grasp pose dataset. GoalGrasp demonstrates notably superior grasp pose stability compared to existing methods, as measured by a novel stability metric. In user-specified robot grasping experiments, the method achieves a 94% grasp success rate; in user-specified grasping experiments conducted under partial occlusion, the success rate reaches 92%. | en_US |
dcterms.abstract | In Study 4, I propose an anticipatory handover control model named Deep-MPC that aims to enhance the robot's ability to anticipate the system state during the handover process. The framework integrates a 3D hand detector (RCV), an online-learning transition model, and a data-driven model predictive control (MPC) approach. The 3D hand detector locates hands, providing visual input to the robotic system. To anticipate future states, Deep-MPC learns online from data collected during robot-environment interactions, inferring forthcoming system states and optimizing the robot's actions in real time. The state transition module employs a neural network that takes states and actions as inputs and predicts the subsequent state. By performing multi-step predictions, comparing predicted states to the target state with a loss function, and optimizing actions through gradient backpropagation at each time step, Deep-MPC achieves effective action optimization (a minimal sketch of this optimization loop appears after the metadata table). Deep-MPC can be viewed as an approach that establishes a human-robot interaction model from the robot's perspective, granting the robot human-like capabilities. | en_US |
dcterms.abstract | In Study 5, I integrate all of the proposed methods into a physical robot to design a robot-to-human handover interaction model. First, I examine the key factors identified in Study 1 for the robot-to-human handover interaction process, such as the objects to be grasped, the robot's motion speed, and the robot's handover path. These factors form the foundational elements of the human-robot interaction. To determine the settings for these factors, I conduct simulated experiments in which participants act as individuals with mobility impairments and experience various interaction modes, with questionnaires used to collect their feedback. Using the gathered data, I develop a new robot-to-human handover interaction model. To validate its effectiveness, I conduct a validation experiment with new participants, whose feedback is collected, analyzed, and used to evaluate the model's performance. The results demonstrate that the proposed interaction model performs well. This study thus proposes a new robot-to-human handover interaction model that partially fills a gap in the field and provides insights for further developments in related robotic technologies. | en_US |
dcterms.abstract | This research explores robot-to-human handover from two perspectives: robot techniques and human-robot interaction design. It holds great significance for the field of robotics because it advances automatic object grasping methods and develops an interactive model for object handover between robots and humans. By addressing key research questions, it aims to enhance robots' ability to assist individuals with limited mobility in retrieving objects and to facilitate user-friendly interactions. The outcomes have significant implications for designing and implementing future human-robot handover interactions. By identifying crucial factors and leveraging the developed techniques, this research contributes to the advancement of robotic systems that can collaborate with humans in a user-friendly manner, fostering robot adoption and acceptance in domains such as healthcare, assistive robotics, and daily life assistance. | en_US |
dcterms.extent | 205 pages : color illustrations | en_US |
dcterms.isPartOf | PolyU Electronic Theses | en_US |
dcterms.issued | 2025 | en_US |
dcterms.educationalLevel | Ph.D. | en_US |
dcterms.educationalLevel | All Doctorate | en_US |
dcterms.LCSH | Robotics | en_US |
dcterms.LCSH | Human-robot interaction | en_US |
dcterms.LCSH | Self-help devices for people with disabilities | en_US |
dcterms.LCSH | Hong Kong Polytechnic University -- Dissertations | en_US |
dcterms.accessRights | open access | en_US |
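The abstracts above describe the proposed methods only at a high level. As a rough illustration of the cross-view principle behind RCV (Study 2), the sketch below recovers a 3D bounding box from instance masks computed in three orthographic views of a point cloud, iterating until the box converges. This is a minimal reading of the abstract, not the thesis's actual implementation: `segment_fn`, the choice of views, and the convergence tolerance are all assumptions.

```python
import numpy as np

def rcv_3d_box(points, segment_fn, tol=1e-3, max_iter=10):
    """Recursive cross-view 3D box estimation (illustrative sketch).

    points:     (N, 3) point cloud of the scene.
    segment_fn: stand-in for a 2D instance-segmentation model; given
                the points and a view (a pair of axis indices), it
                returns a boolean mask of points kept in that view.
    """
    mask = np.ones(len(points), dtype=bool)
    prev_box = None
    for _ in range(max_iter):
        # Refine the instance mask in each of the three views
        # (top: x-y, front: x-z, side: y-z) -- the three-view
        # principle turns one 3D problem into several 2D ones.
        for view in [(0, 1), (0, 2), (1, 2)]:
            mask &= segment_fn(points, view)
        kept = points[mask]
        lo, hi = kept.min(axis=0), kept.max(axis=0)  # cross-view 3D box
        box = np.concatenate([lo, hi])
        if prev_box is not None and np.abs(box - prev_box).max() < tol:
            break  # converged: the 3D box has stopped moving
        prev_box = box
    return lo, hi
```

In this reading, each pass of 2D segmentation discards background points, and the tightened 3D box in turn sharpens the next round of segmentation, which is one plausible sense of the "recursive until convergence" paradigm the abstract names.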
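Similarly, for GoalGrasp (Study 3), the following sketch shows one way a detected 3D bounding box can be combined with a simple human grasp prior to produce dense grasp candidates without any grasp training. The prior table, the horizontal approach directions, and the sampling density are illustrative assumptions, not the priors used in the thesis.

```python
import numpy as np

# Hypothetical human grasp priors: a preferred grasp height as a
# fraction of the object's height. Values are illustrative only.
GRASP_PRIORS = {
    "bottle": {"height_frac": 0.6},
    "cup": {"height_frac": 0.5},
}

def dense_grasp_poses(box_lo, box_hi, obj_class, n=16, standoff=0.02):
    """Sample dense grasp candidates around a detected 3D box.

    Each candidate is a (position, approach) pair: the gripper sits
    just outside the object at the prior's grasp height and
    approaches horizontally toward the object's center axis.
    No learned grasp model is involved.
    """
    prior = GRASP_PRIORS[obj_class]
    center = 0.5 * (box_lo + box_hi)
    size = box_hi - box_lo
    grasp_z = box_lo[2] + prior["height_frac"] * size[2]
    radius = 0.5 * max(size[0], size[1]) + standoff
    poses = []
    for theta in np.linspace(0.0, 2.0 * np.pi, n, endpoint=False):
        direction = np.array([np.cos(theta), np.sin(theta), 0.0])
        position = center + radius * direction
        position[2] = grasp_z
        poses.append((position, -direction))  # approach points at the object
    return poses
```

For example, `dense_grasp_poses(np.zeros(3), np.array([0.06, 0.06, 0.2]), "bottle")` would yield 16 candidate grasps ringing a bottle-sized box at 60% of its height, which a planner could then filter for reachability and occlusion.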
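Finally, for Deep-MPC (Study 4), the sketch below shows the generic data-driven MPC loop the abstract describes: a learned state-transition network is rolled out for multiple steps, the predicted states are compared to a target state, and the action sequence is optimized by gradient backpropagation. The network architecture, horizon, and optimizer settings are assumptions for illustration; the online training of the transition model from interaction data is not shown.

```python
import torch
import torch.nn as nn

class TransitionModel(nn.Module):
    """Learned state-transition model: (state, action) -> next state.
    In Deep-MPC this would be fit online from robot-environment
    interaction data."""
    def __init__(self, state_dim, action_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

def plan_actions(model, state, goal, action_dim, horizon=10, iters=50, lr=0.05):
    """Optimize a horizon of actions by backpropagating through
    multi-step predictions of the transition model."""
    actions = torch.zeros(horizon, action_dim, requires_grad=True)
    optimizer = torch.optim.Adam([actions], lr=lr)
    for _ in range(iters):
        s, loss = state, 0.0
        for t in range(horizon):
            s = model(s, actions[t])                   # predict next state
            loss = loss + torch.sum((s - goal) ** 2)   # compare to target state
        optimizer.zero_grad()
        loss.backward()    # gradients flow back through every prediction step
        optimizer.step()   # update only the action sequence
    return actions.detach()
```

In a receding-horizon controller, only the first optimized action would be executed before re-planning from the newly observed state, for instance the hand position reported by the RCV 3D hand detector.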