Author: Li, Chengxi
Title: Human-in-the-loop robot motion generation for smart manufacturing : a mixed reality-assisted deep reinforcement learning approach
Advisors: Zheng, Pai (ISE)
Lee, K. M. Carman (ISE)
Degree: Ph.D.
Year: 2024
Subject: Robots -- Motion
Manufacturing processes
Human-robot interaction
Manufacturing industries -- Technological innovations
Hong Kong Polytechnic University -- Dissertations
Department: Department of Industrial and Systems Engineering
Pages: xxii, 197 pages : color illustrations
Language: English
Abstract: As the manufacturing paradigm shifts towards mass personalization, manufacturing activities have shown a growing tendency to cater to demands for small batches and high variety. This imposes greater requirements on the flexibility and intelligence of the manufacturing system. However, relying fully on autonomous robots for implementation may result in excessive system investment costs, and the practical feasibility of such a highly intelligent, fully automated approach remains uncertain. Under such circumstances, the human-in-the-loop (HITL) paradigm has gradually come to be considered a promising solution to support robotic manufacturing cell execution. Unlike the traditional mode in which humans and robots are kept apart, the HITL cell breaks the physical isolation between humans and robots. In this paradigm, humans make flexible decisions to resolve uncertainties in manufacturing tasks, while robots take responsibility for the heavy but relatively well-defined goals and processes. By fully leveraging the respective strengths of robots and humans, this paradigm has great potential to simultaneously improve the efficiency and agility of modern manufacturing systems at lower cost.
However, a key challenge in this paradigm is that human involvement significantly increases the complexity of motion generation in robotic manufacturing cells. Robotic platforms must not only ensure physical safety through force and speed limitations but also proactively and adaptively generate their movements. This adaptive motion generation is a crucial prerequisite (as well as the core technical issue) for seamlessly integrating robots into dynamic, unstructured HITL manufacturing environments, yet traditional rigid rule-based or pre-programmed approaches are inadequate to meet these requirements.
Currently, owing to its powerful sequential decision-making capabilities, Deep Reinforcement Learning (DRL) has emerged as the predominant solution for addressing complex robotic motion generation problems within scalable and unstructured environments. Nevertheless, DRL-based policies still face several intractable challenges that hinder their practical application in HITL robotic cell settings. From an algorithmic perspective, existing DRL-based approaches suffer from inadequate state representation acquisition at the perception level, overly complex scene exploration configurations during the learning process, and limited transferability and generalization capabilities in the deployment phase. In terms of applications, there is a lack of well-designed algorithm triggering and execution mechanisms to seamlessly integrate DRL policies into real-world robotic systems. In HITL scenarios especially, it remains challenging to provide accurate and timely feedback during robot motion execution to ensure safety. Overall, addressing these algorithmic and integration issues will be crucial to unlocking the full potential of DRL for advanced robotic motion generation.
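The sequential decision-making loop that underlies DRL-based motion generation can be sketched minimally as follows. All names here (`ReachEnv`, `LinearPolicy`) and the toy 2-D reaching task are illustrative assumptions, not the thesis implementation; a real DRL agent would learn the policy weights from rollouts rather than use the hand-set ones below.

```python
import numpy as np

class ReachEnv:
    """Toy 2-D reaching task: state = [end-effector x, y, goal x, y]."""
    def __init__(self, seed=0):
        self.rng = np.random.default_rng(seed)

    def reset(self):
        self.pos = self.rng.uniform(-1.0, 1.0, size=2)
        self.goal = self.rng.uniform(-1.0, 1.0, size=2)
        return np.concatenate([self.pos, self.goal])

    def step(self, action):
        # Bounded motion command, dense negative-distance reward.
        self.pos = self.pos + 0.1 * np.clip(action, -1.0, 1.0)
        dist = np.linalg.norm(self.goal - self.pos)
        return np.concatenate([self.pos, self.goal]), -dist, dist < 0.05

class LinearPolicy:
    """Deterministic linear policy: action = W @ state (= goal - pos here)."""
    W = np.array([[-1.0, 0.0, 1.0, 0.0],
                  [0.0, -1.0, 0.0, 1.0]])

    def act(self, state):
        return self.W @ state

env, policy = ReachEnv(), LinearPolicy()
state, done, steps = env.reset(), False, 0
while not done and steps < 100:
    state, reward, done = env.step(policy.act(state))
    steps += 1
```

The same observe-act-reward interface carries over to real robotic cells; the challenges listed above concern what goes into `state`, how exploration is configured, and how the learned policy transfers out of the simulator.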
Fortunately, we found that incorporating Mixed Reality (MR) can effectively resolve certain key limitations of DRL and bring new advantages in HITL scenarios. Owing to their spatial-computing features, MR head-mounted displays (MR-HMDs) can integrate scene spatial perception, real-time interaction, and immersive visualization into the manufacturing process. These characteristics enable them to effectively collect, process, and respond to observations within HITL robotic manufacturing environments. Specifically, the MR-HMD unifies the perception, learning, and deployment processes of DRL by realistically mapping the real-environment representation to the simulator. It can enrich state extraction by leveraging image pixels together with robot/human state vectors, overcoming the limitations of traditional high-dimensional, computationally intensive methods. This further enables rapid deployment of DRL policies with effective sim2real transfer, improving the robustness of robot behaviours in real-world HITL scenarios. In conclusion, the MR-HMD is a versatile platform that can enhance DRL from both algorithmic and application perspectives while enabling easy deployment in HITL robotic systems. However, limited research has explored how to use MR to support DRL policy generation, especially for robot control with HITL.
Therefore, this study proposes an MR-assisted DRL approach to address the robot motion generation challenges in unstructured, scalable HITL robotic manufacturing scenarios. The approach is designed and enhanced progressively, beginning with implementation of the foundational support framework, followed by enhancement of human cognitive functionalities, generalization of the motion algorithm, and improvement of policy scalability. In addition to providing the technological infrastructure for prospective HITL robotic manufacturing systems, our study suggests that equipping robots with a robust decision-making brain through this MR-assisted DRL approach may facilitate more flexible human-robot interaction, ultimately contributing to symbiotic human-robot collaboration.
Chapter 3 discusses how to realize an appropriate platform and user-friendly interaction approaches that allow operators from various manufacturing sectors to minimize learning overheads and to intuitively control and collaborate with robots. In this chapter, an integrated interactive HITL control framework is crafted, employing advanced MR visualization to synchronize activities between sophisticated manufacturing systems, precision vision sensors, and collaborative human-robot interfaces. This integration facilitates intuitive supervision and guidance of complex work directives through MR, enhancing manufacturing agility and operational dexterity. Moreover, further augmented through DRL enhancements, the framework moves motion planning towards adaptability to various manufacturing settings and fosters dynamic, responsive interaction.
Chapter 4 proposes the MR-assisted, mutual-cognitive-enhanced motion generation approach for a HITL robotic manufacturing system. In traditional industrial settings, physical barriers are used to segregate humans from robotic work areas, a requirement that can impede cooperative task execution and reduce productivity, especially when the demand for HITL is raised. To surmount these limitations, this chapter introduces an MR-assisted, human-guided, mutual-cognitive hierarchical motion generation policy to advance the motion planning module of the proposed system. This approach integrates several enhancements, including controlled movement speeds, a predictive model-based collision detection system, and a sophisticated collision-free robot motion planning technique. Collectively, the policy not only supports human operators working in close proximity to moving robots but also improves the operational efficiency and ease of HITL manufacturing processes.
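A common way to realize controlled movement speeds near humans is speed-and-separation monitoring, where the robot's speed limit shrinks as the human-robot distance decreases. The sketch below illustrates the general idea only; the function name and the threshold values are assumptions, not the predictive model developed in the thesis.

```python
def scaled_speed(v_max, dist, d_stop=0.2, d_slow=1.0):
    """Scale the robot's speed limit by human-robot separation (metres):
    stop inside d_stop, ramp up linearly until d_slow, full speed beyond."""
    if dist <= d_stop:
        return 0.0
    if dist >= d_slow:
        return v_max
    return v_max * (dist - d_stop) / (d_slow - d_stop)
```

For example, with `v_max = 1.0` m/s the robot halts at 0.1 m separation, runs at half speed at 0.6 m, and at full speed beyond 1.0 m.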
Chapter 5 further explores the functionality and generalization of the robotic DRL-based motion generation policy augmented by the MR-HMD. This chapter aims to fully exploit the capabilities of the MR-HMD in the proposed motion planning and generation policy, extending the HITL scenario into manufacturing settings characterized by unstructured environments and human-movement uncertainties. A novel DRL approach for generating robot motion policy is therefore proposed that makes full use of mixed reality features. In it, the MR-HMD device serves as an effective tool for representing the states of humans, robots, and scenes, facilitating the development of an integrated end-to-end deep reinforcement learning policy that adeptly manages uncertainties in robot perception and decision-making. Furthermore, it effectively ensures the feasibility and safety of implementing the policy in practice.
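Representing the states of humans, robots, and scenes for an end-to-end policy amounts to fusing heterogeneous MR-HMD observations into one policy input. The following sketch shows one plausible composition; the field names, dimensions, and helper function are all hypothetical and stand in for whatever representation the thesis adopts.

```python
import numpy as np

def build_state(human_joints, robot_joints, scene_embedding):
    """Concatenate MR-HMD observations into one flat policy input:
    tracked human joint positions, robot joint angles, and a learned
    scene feature vector (e.g. extracted from image pixels)."""
    return np.concatenate([
        np.asarray(human_joints, dtype=np.float32).ravel(),
        np.asarray(robot_joints, dtype=np.float32).ravel(),
        np.asarray(scene_embedding, dtype=np.float32).ravel(),
    ])

human = np.zeros((8, 3))   # 8 tracked human joints, xyz each (assumed)
robot = np.zeros(6)        # 6-DoF arm joint angles (assumed)
scene = np.zeros(32)       # hypothetical scene-embedding size
policy_input = build_state(human, robot, scene)   # shape (62,)
```

Keeping human, robot, and scene channels in one flat vector is what lets a single end-to-end network condition its motion output on all three sources of uncertainty at once.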
Chapter 6 addresses the expansion of HITL robotic manufacturing systems beyond single-robot cells to include multiple robots collaborating to complete tasks. This chapter introduces an advanced DRL-based motion planning policy for scalable robot motion planning within HITL manufacturing environments, expanding the existing solution to single-human multi-robot cells. The strategy leverages the features of MR to decrease state extraction complexity and improves the DRL algorithmic modules so that the motion generation policy can be generalized across various manufacturing layouts and cell scales. This method guarantees seamless cooperation between robot teams and human operators, facilitating the execution of manufacturing tasks without requiring additional operator involvement in managing and programming joint robot motions.
Rights: All rights reserved
Access: open access

Files in This Item:
7793.pdf (For All Users), 58.12 MB, Adobe PDF



Please use this identifier to cite or link to this item: https://theses.lib.polyu.edu.hk/handle/200/13399