Special Sessions

"SLAM for Robotics Digital Twin in Challenging Environment"

• Organizer: Ayoung Kim, Seoul National University, Korea

We propose a session on SLAM (Simultaneous Localization and Mapping) for robotics digital twins in challenging environments. The session will focus on the application of SLAM algorithms in creating digital twins of robots operating in challenging environments. It will begin with an overview of the principles and techniques of SLAM. We will then delve into the specific challenges of applying SLAM, including the need for real-time performance, robustness to noise and uncertainty, and the ability to integrate sensory data from a variety of sources. We will also cover the use of SLAM in conjunction with other technologies, such as machine learning and computer vision, to enhance the accuracy and reliability of digital twins.

"Digital Twinning for Robots and Artificial Intelligence"

• Organizers: Alessandro Carfì, University of Genoa, Italy
                     Anany Dwivedi, Friedrich-Alexander-Universität, Germany
                     Philipp Beckerle, Friedrich-Alexander-Universität, Germany
                     Fulvio Mastrogiovanni, University of Genoa, Italy

Simulation has always played a crucial role in testing and validating robotics and artificial intelligence (AI) applications. However, with the advancement of learning research, simulation has also become a vital tool for AI systems to acquire and develop new skills through reinforcement learning approaches. Today, simulation tools and platforms have reached a level of maturity that allows them to take on a new role in the development of robotics and AI applications. These tools can now be used as a knowledge representation layer by AI agents, allowing them to interact with the environment and serve as a surrogate for human cognition. In addition to serving as a representation layer, simulation tools can also be used to predict the effects of an AI agent’s actions. One particular application of these tools is in the creation of digital twins (DTs), which are digital replicas of real physical systems used to represent and monitor the evolution of a system. With the advancement of simulation technologies and increased computational power, it is now possible to use DTs not just as a visual representation but also to drive the behaviour of a system. For example, an autonomous robot could use the representation provided by a DT to better estimate unavailable parameters, predict the effects of its actions, and create virtual environments for human interaction (e.g., semi-autonomous teleoperation). While research in this area is ongoing, the field is scattered and requires further effort in defining its scope and potential. This special session aims to bring together researchers working on the integration of simulation technologies with AI applications and to establish clearer definitions of the problems and opportunities presented by these tools. 
The organizers believe that this special session is relevant for the IAS community as it provides a novel perspective on the use of simulation tools for deploying AI solutions in real-world environments, focusing not only on the technological aspect but also on the implications for AI development.

"Metaverse and XR Applications in Intelligent Autonomous Systems"

• Organizer: Heejin Jeong, Arizona State University, United States

The Metaverse describes a virtual shared space where users can interact with each other and with virtual objects and environments, much as in the real world. XR, or Extended Reality, is a range of technologies that extend or enhance the way we experience the world, including virtual reality (VR), augmented reality (AR), and mixed reality (MR). In the context of intelligent autonomous systems, metaverse and XR technologies can be used to create virtual environments in which autonomous systems can be trained, tested, and deployed. For example, a virtual city can be created in the metaverse, where self-driving cars can be tested in various realistic scenarios. Similarly, XR technologies can be used to create virtual training environments for robots and drones, allowing them to practice complex tasks in a safe and controlled setting before being deployed in the real world. This special session allows researchers and practitioners to discuss recent theoretical advancements and practical developments in metaverse and XR research for intelligent autonomous systems. Speakers will introduce their metaverse and XR research in the applications of surface and air transportation, manufacturing, occupational training, and rehabilitation training.

"Rehabilitation Robots for Upper Extremity"

• Organizer: Won-Kyung Song, National Rehabilitation Center, Korea

A variety of rehabilitation robots are being developed for people with central nervous system disorders such as stroke. Robots for upper extremity rehabilitation must handle challenging tasks: a robot for upper extremity therapy assists with arm and hand motion, and clinically useful products are in demand. A patient or a person with disabilities can wear a robot for training purposes in rehabilitation therapy, and in some cases the robot should also be able to assist with daily motions. The robotic apparatus may additionally perform a variety of assessments and measurements. Focusing on upper extremity exercise, we will introduce 1) a platform capable of exercising both arms, 2) upper extremity impedance assessment technology, 3) posture monitoring, 4) clinical application, and 5) soft wearable gloves for children with cerebral palsy. In the fall of 2022, a workshop on upper limb rehabilitation robots was held by the National Rehabilitation Center’s Translational Research Program for Rehabilitation Robots. This special session was formed by selecting five presenters from among the more than ten presenters at that workshop. Participants can expect to learn about the development and clinical use of upper extremity rehabilitation robots during this session.

"Autonomous Vehicle in Adverse Weather: Challenges and Opportunities"

• Organizers: Gyeungho Choi, DGIST, Korea
                       Yongseob Lim, DGIST, Korea

Autonomous driving technology now targets Level 4 and beyond, but research still faces limitations in developing reliable driving algorithms under diverse challenging conditions. For autonomous vehicles to spread widely, the safety issues of this technology must be properly addressed. Among the various safety concerns, sensor blockage caused by severe weather is one of the most frequent threats to lane detection algorithms during autonomous driving. To handle this problem, the generation of proper datasets is becoming increasingly important. In this session, a synthetic lane dataset with sensor blockage is presented in a lane detection evaluation format. Rain streaks for each frame were generated using an experimentally established equation. Using this dataset, the degradation of diverse lane detection methods has been verified, and the performance degradation of deep neural network-based lane detection methods has been analyzed in depth. Finally, the limitations and future directions of network-based methods are presented.

"Field Robots and Intelligent Autonomous Systems"

• Organizer: Hyun-Joon Chung, Korea Institute of Robotics and Technology Convergence, Korea

To be announced.

"Smart Haptics & Teleoperation"

• Organizer: Gi-Hun Yang, KITECH, Korea

To be announced.

"Digital Twin and Haptic Interface"

• Organizers: Ki-Uk Kyung, KAIST, Korea
                      Sang-Youn Kim, Korea University of Technology and Education, Korea

To be announced.