Plenary Speaker

Deep 3D Vision for Intelligent Autonomous Systems

Professor Sukhan Lee

Sungkyunkwan University, Korea

Abstract: As for humans and most animals, 3D vision is indispensable for autonomous systems and agents to carry out real-world activities and tasks based on autonomous navigation, manipulation, and interaction. In particular, it is the capability of autonomous agents to understand, model, and measure surrounding 3D scenes and workspaces in a space-time context that plays a fundamental role in achieving human-like autonomy. The increased complexity of dealing with 3D, where the extra dimension leads to massive 6D geometric variations, means that conventional engineering approaches, which extend 2D vision by exploiting 3D geometric features, offer only limited success, hampered by the trade-off between efficiency and accuracy. Recent progress in end-to-end deep learning approaches to 3D scene and workspace modeling and object 6D pose estimation, combined with detection, panoptic segmentation, and tracking, opens the possibility for autonomous systems and agents to break through this trade-off toward human-like performance in real-time understanding, modeling, and measuring of 3D scenes and workspaces in a space-time context. In this talk, advances in deep learning approaches to 3D vision are first reviewed, with a focus on end-to-end deep learning approaches to 3D scene and workspace modeling and 6D object pose estimation. Then, the end-to-end deep learning approaches to 3D vision developed in my laboratory are presented, including approaches to partial-to-full point cloud reconstruction and 6D pose estimation at both the object and category levels, integrated with deep object detection and panoptic segmentation for real-time modeling of 3D scenes. This is followed by a presentation of how the developed deep 3D vision has been applied to smart manufacturing, autonomous navigation, and human-robot interaction. Finally, the talk concludes with a discussion of future directions in deep 3D vision for intelligent autonomous systems.
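To make the idea of partial-to-full point cloud reconstruction combined with 6D pose estimation concrete, the following is a minimal illustrative sketch in PyTorch: a PointNet-style encoder with two heads, one completing a partial (segmented) point cloud and one regressing a pose as a quaternion plus translation. All layer sizes, names, and losses are assumptions for illustration only, not the speaker's published architecture.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CompletionPoseNet(nn.Module):
    def __init__(self, n_out_points=1024):
        super().__init__()
        # shared per-point encoder (PointNet-style MLP followed by max pooling)
        self.encoder = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
            nn.Conv1d(128, 512, 1),
        )
        # head 1: decode a full (completed) point cloud from the global feature
        self.completion_head = nn.Sequential(
            nn.Linear(512, 1024), nn.ReLU(),
            nn.Linear(1024, n_out_points * 3),
        )
        # head 2: regress a 6D pose as quaternion (4) + translation (3)
        self.pose_head = nn.Sequential(
            nn.Linear(512, 256), nn.ReLU(),
            nn.Linear(256, 7),
        )
        self.n_out_points = n_out_points

    def forward(self, partial_xyz):           # partial_xyz: (B, N, 3)
        x = partial_xyz.transpose(1, 2)       # (B, 3, N)
        feat = self.encoder(x)                # (B, 512, N)
        global_feat = feat.max(dim=2).values  # (B, 512), permutation-invariant
        full_xyz = self.completion_head(global_feat).view(-1, self.n_out_points, 3)
        pose = self.pose_head(global_feat)
        quat = F.normalize(pose[:, :4], dim=1)  # unit quaternion for rotation
        trans = pose[:, 4:]                     # translation vector
        return full_xyz, quat, trans

# usage on a dummy partial cloud (e.g., points cropped by a panoptic mask on an RGB-D frame)
model = CompletionPoseNet()
completed, quat, trans = model(torch.rand(2, 256, 3))
print(completed.shape, quat.shape, trans.shape)  # (2, 1024, 3) (2, 4) (2, 3)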

Bio: Sukhan Lee is currently a Professor of Artificial Intelligence and Robotics at Sungkyunkwan University and the Founding Director of the Intelligent Systems Research Institute. Previously, he served as Dean of University Graduate Studies from 2011 to 2013, a Chaired University Professor from 2014 to 2018, a WCU Professor of Interaction Science from 2008 to 2013, and a Professor of Information and Communication Engineering from 2003 to 2014. He also served as a Vice President of the Korea National Academy of Science and Technology from 2016 to 2019. Prior to Sungkyunkwan University, he was with the Samsung Advanced Institute of Technology as an Executive Vice President and Chief Research Officer from 1998 to 2003. From 1990 to 1997, he worked for the Jet Propulsion Laboratory/NASA, California Institute of Technology, as a Senior Member of Technical Staff. From 1983 to 1997, he was with the Department of Electrical Engineering and Computer Science at the University of Southern California as a Professor. Prof. Sukhan Lee received his Ph.D. degree in Electrical Engineering from Purdue University, West Lafayette, and his M.S. and B.S. degrees in Electrical Engineering from Seoul National University. He is a Life Fellow of the IEEE and a Fellow of the Korean National Academy of Science and Technology. He served as a Vice President of the IEEE Robotics and Automation Society from 2004 to 2008. He has published over 400 technical papers in scientific journals and major conferences, holds over 120 domestic and international patents, and has authored 6 books and 35 book chapters. His research interests are in robotics, artificial intelligence, computer vision, and microsystems.

Keynote Speaker

The Irony of Autonomy: The Increasing Involvement of Humans in Assistive Monitoring and Active Interaction

Professor Richard M. Voyles

Purdue University, USA

Abstract: An ironic twist in Autonomous Systems research over the past thirty-five years has been the inclusion of more humans in the loop. In fact, since at least the time of Aristotle, humans have been intrigued by the idea of creating automatons in their image to escape the bondage of labor. Hence, the goal of the first factory robots, some sixty years ago, was to create ‘lights-out factories’ that operated rigidly timed assembly lines without human involvement. Yet almost as old as the desire to create subservient automatons has been the fear of rebellion by our own intelligent machines. Humans, it seems, want to conflate the near-mystic unpredictability of human emotions with the logical toil of their mechanical progeny. So it may not be such a surprise that the current pinnacle of autonomy research is moving toward machines that seamlessly work alongside untrained human beneficiaries in unstructured and chaotic environments.

This talk explores that evolution in Autonomous Systems research from standalone machines to machines that interact increasingly with humans of greater technical naivete. From robotic assembly to self-driving cars, emergency response, and robotic surgery, we are gradually pushing the boundaries of the state of the art toward more human involvement with increasingly intelligent machines in less-structured situations. Within this space, the emergency response research community made a determined effort to attack highly unstructured environments in scenarios so difficult that human cooperative control was assumed to be a necessity. The 9/11 attack on the World Trade Center in the US was the first use of robots in a real disaster scenario, and almost no autonomy was allowed on-site. In that case, the search for victims was so difficult that roughly 60% of the victims found were discovered only in post-analysis of video footage, rather than in real time. Search in deconstructed environments is difficult because a priori model-based information is mostly unusable and dust and debris make recognition unreliable. In early attempts, teleoperation was the only viable solution, with human lives potentially in the balance. The DARPA Robotics Challenge later brought researchers from around the world to incorporate higher degrees of autonomy, along with offline and online simulation, into the augmentation of both machines and humans. That evolution has continued into the unstructured world of battlefield robotic surgery, in which virtualized reality fuses teleoperation with full autonomy, allowing machines to learn from human experts so they can support full autonomy in times of crisis.

Bio: Dr. Voyles, the Daniel C. Lewis Professor of the Polytechnic, received a B.S. in Electrical Engineering from Purdue University in 1983, an M.S. in Manufacturing Systems Engineering from the Department of Mechanical Engineering at Stanford University in 1989, and a Ph.D. in Robotics from the School of Computer Science at Carnegie Mellon University in 1997. He is currently a Professor of Engineering Technology at Purdue University and an IEEE Fellow. He was a tenured faculty member at the University of Minnesota from 1997 to 2007 and at the University of Denver from 2006 to 2013. He served as lead Program Director for the National Robotics Initiative at NSF and was a co-founder of the NSF Innovation Corps program. He also served as Assistant Director of Robotics and Cyber-Physical Systems at the White House Office of Science and Technology Policy.
Dr. Voyles’ research interests are in the areas of robotics and artificial intelligence. Specifically, he is interested in the development of small, resource-constrained robots and robot teams for urban search and rescue and surveillance. Dr. Voyles has additional expertise in sensors and sensor calibration, particularly haptic and force sensors, real-time control, and Form + Function 4D Printing.
Dr. Voyles’ industrial experience includes Dart Controls, IBM Corp., Integrated Systems, Inc., and Avanti Optics. He has also served on the boards of various start-ups and non-profit groups.

Keynote Speaker

AI technology for mitigating the risk of AI

Professor Kazuya Takeda

Nagoya University, Japan

Abstract: As Autonomous Driving (AD) becomes a reality in society, technical, legal, and ethical systems that can mitigate the damage caused by the inevitable errors of humans or autonomous systems become important. Due to its highly complicated, or even black-box, nature, how an AI for AD ‘understands’ the current traffic context is difficult to share. For perception in particular, attention heatmaps are often used to share the ‘understanding’ of the AI for AD with that of a human. However, detecting risk is impossible from visual cues alone. The AD system must understand the situation so that it can properly avoid the risk.
As a first step, we built a signal transcription system that converts the multi-modal sensor signal sequences used by AD, consisting of a frontal camera, kinematic sensors, and the vehicle control channel, into natural language sentences. The generated sentences represent how the AD system understands the current traffic context, so that humans can share its understanding. We are currently applying this to AD risk management in the insurance business, as a digital aid for human risk analysts. In this talk, I will introduce the details of this project and future research goals, which include describing a set of standard traffic scenarios that spans 99% of urban traffic.
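As a rough illustration of such signal-to-sentence transcription, the sketch below (PyTorch) fuses frontal-camera features, kinematic signals, and control-channel signals into one embedding and decodes a sentence with a GRU. The dimensions, vocabulary, and fusion strategy are illustrative assumptions, not the actual system described in the talk.

import torch
import torch.nn as nn

class DriveToSentence(nn.Module):
    def __init__(self, vocab_size=5000, d_model=256):
        super().__init__()
        # separate sequence encoders for each modality (sizes are assumed)
        self.cam_enc = nn.GRU(input_size=512, hidden_size=d_model, batch_first=True)  # per-frame CNN features
        self.kin_enc = nn.GRU(input_size=6, hidden_size=d_model, batch_first=True)    # speed, yaw rate, accel, ...
        self.ctl_enc = nn.GRU(input_size=3, hidden_size=d_model, batch_first=True)    # steering, throttle, brake
        self.fuse = nn.Linear(3 * d_model, d_model)
        self.embed = nn.Embedding(vocab_size, d_model)
        self.decoder = nn.GRU(input_size=d_model, hidden_size=d_model, batch_first=True)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, cam_seq, kin_seq, ctl_seq, tokens):
        # each *_seq: (B, T, feat); tokens: (B, L) word ids of the target sentence
        _, h_cam = self.cam_enc(cam_seq)
        _, h_kin = self.kin_enc(kin_seq)
        _, h_ctl = self.ctl_enc(ctl_seq)
        # fuse the final hidden states of the three modalities into one context vector
        h = torch.tanh(self.fuse(torch.cat([h_cam[-1], h_kin[-1], h_ctl[-1]], dim=1)))
        dec_out, _ = self.decoder(self.embed(tokens), h.unsqueeze(0))
        return self.out(dec_out)  # (B, L, vocab) next-word logits

# usage with dummy sequences: 2 clips, 20 time steps each, 8-word captions
model = DriveToSentence()
logits = model(torch.rand(2, 20, 512), torch.rand(2, 20, 6), torch.rand(2, 20, 3),
               torch.randint(0, 5000, (2, 8)))
print(logits.shape)  # torch.Size([2, 8, 5000])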

Bio: Prof. Kazuya Takeda works in the field of signal processing research for acoustic, speech, and vehicular applications. In particular, his main interest has been understanding human behavior through data-centric approaches utilizing real-world signal corpora.
Prof. Takeda is a Professor and a Vice President at Nagoya University, Japan. He received his B.E.E., M.E.E., and Ph.D. degrees from Nagoya University in 1983, 1985, and 1995, respectively. After graduating, he worked for ATR and the KDD R&D Lab. He was a visiting scientist at MIT before joining Nagoya University in 1995. He is a fellow of the IEICE (the Institute of Electronics, Information and Communication Engineers) and a senior member of the IEEE.
Prof. Takeda has served as an academic leader in various signal processing fields. Currently, he is a Board of Governors (BoG) member of the IEEE ITS Society and of the Asia-Pacific Signal and Information Processing Association (APSIPA). He served as general chair of FAST-zero 2017 and Universal Village 2016, and as program chair of IEEE ICVES 2009, IEEE ITSC 2017, and other scientific meetings. He is serving as general chair of the IEEE Intelligent Vehicles Symposium (IV 2021), a flagship conference of the Society. He is a co-founder and director of the university startup Tier IV, a company aiming to democratize autonomous driving technologies by developing the open-source software platform Autoware.
He has published more than 150 journal papers and 8 co-authored or co-edited books, and holds 15 patents. He received the 2020 IEEE ITS Society Outstanding Research Award. He has also won six best paper awards at IEEE international conferences and workshops, in addition to domestic awards.

Keynote Speaker

Connected and automated vehicles: improving safety and efficiency across the scales

Professor Gábor Orosz

University of Michigan, Ann Arbor, USA

Abstract: Automated vehicles are entering our roadways and are expected to have a large impact on road transportation across the globe in the 21st century. They rely on a large array of optical sensors to perceive their environment and utilize complex algorithms to plan and control their motion while maneuvering through traffic. In addition, they may use vehicle-to-everything (V2X) communication to obtain information about road participants beyond their line of sight. In this talk, we describe the promises and challenges of automation and connectivity in mixed traffic scenarios, where automated vehicles share the roadways with human-driven vehicles. We present our recent results on how V2X connectivity may benefit automated vehicles responding to complex traffic scenarios and how such benefits scale to large transportation systems. In particular, we focus on improving safety and time efficiency and reducing energy consumption in mixed traffic environments. Tools from time delay systems, nonlinear dynamics and control, network control, and machine learning are utilized, and the theoretical results are validated using experiments on closed tracks and on public roads.
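As a simple illustration of the role of time delays in such traffic models, the sketch below simulates a car-following vehicle that reacts to its headway and speed with a delay tau, using an optimal-velocity-style law. The specific function and parameter values are illustrative assumptions, not the speaker's models or experimental results.

import numpy as np

def optimal_velocity(h, v_max=30.0, h_stop=5.0, h_go=35.0):
    # desired speed as a function of headway h (m): 0 below h_stop, v_max above h_go
    return v_max * np.clip((h - h_stop) / (h_go - h_stop), 0.0, 1.0)

def simulate(tau=0.6, alpha=0.6, dt=0.01, t_end=60.0):
    steps = int(t_end / dt)
    delay = int(tau / dt)
    # the leader's speed dips briefly around t = 20 s; the follower reacts with delay tau
    v_lead = 25.0 - 5.0 * np.exp(-((np.arange(steps) * dt - 20.0) ** 2) / 10.0)
    h = np.full(steps, 30.0)  # headway to the leader (m)
    v = np.full(steps, 25.0)  # follower speed (m/s)
    for k in range(steps - 1):
        kd = max(k - delay, 0)                          # delayed index (reaction time)
        a = alpha * (optimal_velocity(h[kd]) - v[kd])   # acceleration from delayed state
        v[k + 1] = v[k] + a * dt
        h[k + 1] = h[k] + (v_lead[k] - v[k]) * dt
    return h, v

h, v = simulate()
print(f"min headway {h.min():.1f} m, min speed {v.min():.1f} m/s")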

Bio: Dr. Gábor Orosz received his M.Sc. degree in Engineering Physics from the Budapest University of Technology, Hungary, in 2002 and his Ph.D. degree in Engineering Mathematics from the University of Bristol, UK, in 2006. He held postdoctoral positions at the University of Exeter, UK, and at the University of California, Santa Barbara. In 2010, he joined the University of Michigan, Ann Arbor, where he is currently an Associate Professor in Mechanical Engineering and in Civil and Environmental Engineering. His theoretical research includes dynamical systems, control, and machine learning, with particular interest in the roles of nonlinearities and time delays in such systems. In terms of applications, he focuses on connected and automated vehicles, traffic flow, and biological networks. He has published more than 50 papers in leading international journals. He has served as an associate editor for Transportation Research Part C since 2018, for the IEEE Transactions on Control Systems Technology since 2021, and for the IEEE Transactions on Intelligent Transportation Systems since 2022. He served as program chair for the 12th IFAC Workshop on Time Delay Systems, as general chair for the 17th IFAC Workshop on Time Delay Systems, and as general chair for the 3rd IAVSD Workshop on Dynamics of Road Vehicles, Connected and Automated Vehicles.