Speaker: Prof Jong-Hwan Kim
30/08/2017 4:30pm to 5:30pm
Seminar Venue: Engineering Auditorium
Robots are expected to provide smart services as well as to carry out various troublesome or arduous tasks for humans. Since these human-scale tasks consist of sequential procedures, robots need a knowledge structure to store temporal sequences through active learning and to retrieve them through reasoning, so that such tasks can be performed autonomously in similar situations. The capability for performing those tasks is defined as task intelligence, which can be realized based on the intelligence operating architecture (iOA). As a knowledge structure, a long-term memory can be employed, consisting of episodic memory, semantic memory, procedural memory, and emotional memory. The crux of realizing task intelligence for robots is to design, among others, the memory module for storing temporal event sequences of tasks, the mechanism of thought for reasoning, and the motion planning methodology for execution. This talk introduces how to realize task intelligence based on iOA, focusing on the design of a long-term memory, a neural model-based mechanism of thought, and an online motion planning algorithm, Q-RRT*. The long-term memory is developed as an integrated multi-memory neural model in which the episodic memory is designed using a Deep ART (Adaptive Resonance Theory) neural model and the semantic memory is built through the consolidation of memory contents in the episodic memory. The procedural memory is designed using a context-based recurrent neural network to store the trajectories of the manipulators along with context information and then retrieve them according to that context information without conscious thinking. Robots are taught either by human demonstration or by symbolic description. A behavior appropriate to the current situation is selected by the developmental neural memory-based mechanism of thought, while a proper task is retrieved from the Deep ART model. The behaviors are executed safely and quickly with the motion planning algorithm.
The effectiveness of realizing task intelligence based on iOA, in terms of machine intelligence for learning and executing tasks, is verified through experiments with a humanoid robot, Mybot, developed in the Robot Intelligence Technology Lab at KAIST. Invited by the World Economic Forum, Mybot gave a performance demonstration at the Annual Meeting of the New Champions in Tianjin, China on June 26-28, 2016.
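Q-RRT* itself is not reproduced here, but a minimal sketch of the RRT family of sampling-based planners it builds on may help. The workspace bounds, obstacle, step size, and goal bias below are illustrative assumptions, not details from the talk:

```python
import math
import random

def rrt(start, goal, is_free, step=0.5, goal_tol=0.5, max_iters=5000, seed=0):
    """Minimal 2-D RRT: grow a tree from `start` toward random samples in a
    10x10 workspace; return a path (list of points) once a node lands within
    `goal_tol` of `goal`, or None if the iteration budget runs out."""
    rng = random.Random(seed)
    nodes = [start]
    parent = {0: None}
    for _ in range(max_iters):
        # goal-biased sampling: head straight for the goal 10% of the time
        sample = goal if rng.random() < 0.1 else (rng.uniform(0, 10), rng.uniform(0, 10))
        near = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
        nx, ny = nodes[near]
        d = math.dist((nx, ny), sample)
        if d == 0:
            continue
        new = (nx + step * (sample[0] - nx) / d, ny + step * (sample[1] - ny) / d)
        if not is_free(new):
            continue  # reject extensions that collide
        nodes.append(new)
        parent[len(nodes) - 1] = near
        if math.dist(new, goal) < goal_tol:
            path, k = [], len(nodes) - 1
            while k is not None:  # walk parents back to the root
                path.append(nodes[k])
                k = parent[k]
            return path[::-1]
    return None

# a wall at 4 < x < 6 below y = 7 forces the path over the top
path = rrt((1.0, 1.0), (9.0, 9.0),
            is_free=lambda p: not (4 < p[0] < 6 and p[1] < 7))
```

RRT* and its descendants such as Q-RRT* add rewiring steps on top of this skeleton so that path cost improves as more samples are drawn.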
Speaker: Axel Thallemer
12/04/2017 6:30pm to 7:30pm
Seminar Venue: EA LT7A
In the course of evaluating biological grippers by analytical classification, the opening mechanism of birds’ beaks proved particularly promising. This mechanical principle was first described by Franz Reuleaux in the second half of the 19th century. More than 100 years later, in the 1990s, Schilling and Zimmermann re-interpreted the biological paragon by translating the three-dimensional kinematics into a two-dimensional Watt’s linkage. About 20 years later it was designed and additively fabricated in titanium alloy (Ti6Al4V). These grippers are actuated by artificial muscles co-developed by the speaker for Festo. Video: dynamic display of the three zoo-kinematic grippers at the Hannover Industrial Fair: https://www.youtube.com/watch?v=rkAlWBLwMVw For the full abstract, please visit https://goo.gl/forms/sr1oFiBz5kbd5qdZ2
Speaker: Prof. Dr. Marc Pollefeys
25/11/2016 1:00pm to 6:00pm
Seminar Venue: NUS School of Computing, Lecture Theatre 19 (LT19)
ARC Colloquium Talk: Semantic 3D Reconstruction
While purely geometric models of the world can be sufficient for some applications, there are also many applications that need additional semantic information. In this talk I will focus on 3D reconstruction approaches which combine geometric and appearance cues to obtain semantic 3D reconstructions. Specifically, the approaches I will discuss are formulated as multi-label volumetric segmentation, i.e. each voxel gets assigned a label corresponding to one of the semantic classes considered, including free-space. We propose a formulation representing raw geometric and appearance data as unary or high-order (pixel-ray) energy terms on voxels, with class-pair-specific learned anisotropic smoothness terms to regularize the results. We will see how, by solving both reconstruction and segmentation/recognition jointly, the quality of the results improves significantly and we can make progress towards 3D scene understanding.
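To make the multi-label volumetric formulation concrete, here is a toy sketch on a 1-D chain of voxels with two hypothetical labels. The unary costs and the Potts smoothness term are made-up numbers, and iterated conditional modes stands in for the more sophisticated optimizers used in practice:

```python
def icm(unary, pair_cost, labels, iters=20):
    """Iterated conditional modes on a 1-D chain of voxels: greedily relabel
    each voxel to minimize its data (unary) cost plus smoothness with its
    neighbors, until no voxel changes."""
    n = len(unary)
    # initialize each voxel from its data term alone
    x = [min(labels, key=lambda l: unary[v][l]) for v in range(n)]
    for _ in range(iters):
        changed = False
        for v in range(n):
            def cost(l):
                c = unary[v][l]
                if v > 0:
                    c += pair_cost(x[v - 1], l)
                if v < n - 1:
                    c += pair_cost(l, x[v + 1])
                return c
            best = min(labels, key=cost)
            if best != x[v]:
                x[v], changed = best, True
        if not changed:
            break
    return x

# data term alone favors [0, 1, 0, 1, 1] (voxel 1 is noisy);
# a Potts smoothness term cleans it up to [0, 0, 0, 1, 1]
unary = [[0.0, 2.0], [1.5, 0.5], [0.0, 2.0], [2.0, 0.0], [2.0, 0.0]]
labeling = icm(unary, lambda a, b: 0.0 if a == b else 1.0, labels=[0, 1])
```

The class-pair-specific anisotropic smoothness terms in the talk generalize the uniform Potts cost here: `pair_cost` would then depend on which two classes meet and on the orientation of the shared voxel face.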
Speaker: Professor Dipl.-Ing. (univ.) Axel Thallemer
06/09/2016 6:00pm to 7:00pm
Seminar Venue: NUS Faculty of Engineering Lecture Theatre 7A (LT7A)
Inspired by nature, AirArm is a function-driven robot capable of carrying out humanoid motions. Unlike other nature-inspired research that focuses on mimicking the mere structure and form of the human arm, AirArm is designed with the arm’s natural motion in mind. This industrial pneumatic 4-axis kinematic system with inherent flexibility has been researched, designed, developed, fabricated, and demonstrated by bachelor students of the scionic® I.D.E.A.L. curricula in Austria. The presentation will cover the main aspects of the methodology in designing AirArm, such as industrial design management and the coupling of materials with production technologies, to create this purpose-driven, nature-inspired robot.
Speaker: Oussama Khatib
14/06/2016 6:00pm to 7:00pm
Seminar Venue: NUS Faculty of Engineering Lecture Theatre 7A (LT7A)
The generations of robots now being developed will increasingly touch people and their lives. They will explore, work, and interact with humans in their homes and workplaces, in new production systems, and in challenging field domains. The emerging robots will provide increased operational support in mining, underwater, and hostile and dangerous environments. While full autonomy for the performance of advanced tasks in complex environments remains challenging, strategic intervention by a human will tremendously facilitate reliable real-time robot operations. Human-robot synergy benefits from combining the experience and cognitive abilities of the human with the strength, dependability, competence, reach, and endurance of robots. Moving beyond conventional teleoperation, the new paradigm - placing the human at the highest level of task abstraction - relies on robots with the requisite physical skills for advanced task behaviour capabilities. Such connecting of humans to increasingly competent robots will fuel a wide range of new robotic applications in places where they have never gone before. This discussion focuses on robot design concepts, robot control architectures, and advanced task primitives and control strategies that bring human modelling and skill understanding to the development of safe, easy-to-use, and competent robotic systems. The presentation will highlight these developments in the context of a novel underwater robot, Ocean One (O2), developed at Stanford in collaboration with Meka Robotics and KAUST.
Speaker: Steve Cousins, Founder of Savioke, Former President & CEO of Willow Garage
28/03/2016 1:00pm to 2:30pm
Seminar Venue: NUS School of Computing, Executive Classroom (COM2-04-02)
After 20 years of predictions that robots will work among us soon, the predictions are finally starting to come true. Investment in robotics is up, enabling start-ups to explore a range of use cases. Decreasing component costs will make it easier to build real business cases for the technology. Mobile robots are beginning to transform the way we serve people in hotels, elder care facilities, hospitals, restaurants, and throughout the service industry, and this trend is accelerating. Savioke deploys delivery robots in hotels, and has had one in continuous operation since August 2014. There are now several Savioke Relay robots making over 400 deliveries each week across California, and the number is quickly expanding.
Speaker: Ruxu Du, Ph.D., F-SME, F-ASME
02/03/2016 6:15pm to 7:15pm
Seminar Venue: NUS Faculty of Engineering Lecture Theatre 7 (LT7)
The fishlike underwater robot, or simply robot fish, is a topic that has attracted much interest in the recent decade. Fish swimming is a wonder of nature: elegant, fast, agile, and highly efficient. Compared to a conventional screw-propelled ship, a fish is several times more efficient, which motivates scientists and engineers to design and build robot fish for various applications, such as underwater exploration, pollution monitoring, and shipping. This presentation introduces our bio-inspired wire-driven fishlike underwater robot, covering its design, modeling and optimization, control, and experimentation. The presentation will help scientists and engineers understand the exciting research and development of robot fish, from theory to practice.
Speaker: Professor Leslie Kaelbling
21/01/2016 3:00pm to 4:30pm
Seminar Venue: Video Conference Room, COM1-02-13
The fields of AI and robotics have made great improvements in many individual subfields, including motion planning, symbolic planning, probabilistic reasoning, perception, and learning. Our goal is to develop an integrated approach to solving very large problems that are hopelessly intractable to solve optimally. We make a number of approximations during planning, including serializing subtasks, factoring distributions, and determinizing stochastic dynamics, but regain robustness and effectiveness through a continuous state-estimation and replanning process. This approach is demonstrated on a PR2 robotic system which integrates perception, estimation, planning, and manipulation.
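The determinize-and-replan idea can be sketched in a few lines. The 1-D world, the slip model, and the function names below are illustrative assumptions, not the planner described in the talk:

```python
import random

def plan(state, goal):
    """Determinized planner: assumes every action succeeds, so the plan is
    just repeated unit steps toward the goal on a 1-D line."""
    step = 1 if goal > state else -1
    return [step] * abs(goal - state)

def execute_with_replanning(start, goal, slip=0.3, seed=1, max_steps=200):
    """Plan with the deterministic model, act in a stochastic world (actions
    fail and leave the state unchanged with probability `slip`), and replan
    from the observed state whenever execution diverges from the prediction."""
    rng = random.Random(seed)
    state, steps = start, 0
    while state != goal and steps < max_steps:
        action = plan(state, goal)[0]      # first action of a fresh plan
        predicted = state + action
        # stochastic outcome: the action may slip and do nothing
        state = state if rng.random() < slip else predicted
        steps += 1
        # if state != predicted, the next iteration replans from the new state
    return state, steps

final_state, n_steps = execute_with_replanning(0, 5)
```

The point of the approximation is visible even in this toy: planning ignores the slip probability entirely, and robustness comes from closing the loop through observation and replanning rather than from solving the stochastic problem optimally.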
Speaker: Gamini Dissanayake
03/09/2015 12:00pm to 1:30pm
Seminar Venue: NUS Faculty of Engineering Blk EA, Level 6, EA-06-03
Robot navigation in unknown environments, particularly when an external location reference such as a global positioning system (GPS) is not available, requires the robot to be able to build a map of the environment in real-time and simultaneously estimate its own location within the map. Robust solutions to the “Simultaneous Localization and Mapping (SLAM)” problem therefore underpin successful robot deployment in many application domains such as urban search and rescue, underground mining, underwater surveillance, and planetary exploration. The SLAM problem has revealed many surprises since the first “solutions” emerged in the late 1990s. This talk will chronicle my journey in looking for solutions to SLAM through extended Kalman filters, information filters, non-linear optimisers, graph theory, and most recently linear least-squares. The talk will also present an overview of the activities of the UTS Centre for Autonomous Systems, which currently consists of 55 staff and students working in various aspects of robotics.
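As a toy illustration of the least-squares view of SLAM mentioned above, here is a minimal 1-D example: a fixed pose, one unknown pose, and one landmark, constrained by an odometry measurement and two range measurements. The function name and the measurement values are illustrative assumptions:

```python
def least_squares_slam(odom, z0, z1):
    """Tiny 1-D graph SLAM. Pose x0 is fixed at 0; the unknowns are pose x1
    and landmark l. Residuals: (x1 - odom), (l - z0), (l - x1 - z1).
    Solve the normal equations A^T A theta = A^T b for theta = (x1, l)
    by Cramer's rule."""
    # rows of A (Jacobian of the measurements w.r.t. (x1, l)) and b
    A = [(1.0, 0.0), (0.0, 1.0), (-1.0, 1.0)]
    b = [odom, z0, z1]
    a11 = sum(r[0] * r[0] for r in A)
    a12 = sum(r[0] * r[1] for r in A)
    a22 = sum(r[1] * r[1] for r in A)
    b1 = sum(r[0] * m for r, m in zip(A, b))
    b2 = sum(r[1] * m for r, m in zip(A, b))
    det = a11 * a22 - a12 * a12
    x1 = (b1 * a22 - a12 * b2) / det
    l = (a11 * b2 - a12 * b1) / det
    return x1, l

# odometry says the robot moved 1.1; ranges to the landmark were 2.9 and 2.0
x1, l = least_squares_slam(1.1, 2.9, 2.0)
```

The inconsistent measurements (2.9 vs. 1.1 + 2.0) are reconciled in the least-squares sense, which is exactly what the graph-based and linear least-squares SLAM formulations do at scale with thousands of poses and landmarks.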
Speaker: Zexiang Li
21/11/2014 10:00am to 11:00am
Seminar Venue: Lecture Theatre 1
Many problems in robotics, mechanism design and manufacturing research are geometric in nature: rigid body motion, modeling, analysis and synthesis of both open-chain and closed-chain manipulators, grasping and manipulation with multifingered robotic hands, tolerance formulation and verification, design and control of five-axis machines, etc. In this talk, I will present an effort, initiated by R. Brockett of Harvard about 30 years ago, and continued by the Berkeley group and then my own group at HKUST for the last 20 years, to develop a unified theory, using tools from differentiable manifolds and Lie groups, for robotics, mechanism design and manufacturing research. First, using intuitive examples, I will recollect some of the basic concepts of differentiable manifolds and their “engineering” classifications. Then, I will show how problems in robotics, mechanism design and manufacturing research can be modeled using various types of manifolds as their model spaces. Finally, I will highlight how geometric properties of these spaces are being exploited to provide more efficient solutions for optimization problems defined on these spaces. I will describe how results from this research program are being used as the basis for the founding of three companies, one in motion control, one in UAV-based consumer products and the other in assembly automation for 3C products. Borrowing from L. Page’s words, I credit this effort to “Geometry as inspiration.” There are grand opportunities for robotic research in China. However, the biggest challenge lies in the creation of an ecosystem that fosters the growth of many more startups from our research and teaching programs. A new robotics institute is being proposed at HKUST for such a purpose.
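As a small taste of the Lie-group machinery used throughout this line of work, here is the exponential map on SO(3) in its Rodrigues form, rotating a vector about a unit axis. This is a standard formula, sketched here purely for illustration:

```python
import math

def rodrigues(v, k, theta):
    """Rotate vector v about unit axis k by angle theta:
    v' = v cos(theta) + (k x v) sin(theta) + k (k . v)(1 - cos(theta)),
    i.e. the action of exp(theta [k]_x) in SO(3)."""
    cross = (k[1] * v[2] - k[2] * v[1],   # k x v
             k[2] * v[0] - k[0] * v[2],
             k[0] * v[1] - k[1] * v[0])
    dot = sum(ki * vi for ki, vi in zip(k, v))  # k . v
    c, s = math.cos(theta), math.sin(theta)
    return tuple(v[i] * c + cross[i] * s + k[i] * dot * (1 - c) for i in range(3))

# rotating the x-axis by 90 degrees about z sends it to the y-axis
r = rodrigues((1.0, 0.0, 0.0), (0.0, 0.0, 1.0), math.pi / 2)
```

The appeal of the geometric viewpoint is that the same exponential-map idea extends from SO(3) to SE(3) and to the other manifolds mentioned in the talk, giving coordinate-free descriptions of rigid body motion and kinematic chains.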
Speaker: Professor Dieter Fox
04/11/2014 4:00pm to 5:00pm
Seminar Venue: Seminar Room 1, COM1-02-06
RGB-D cameras provide per-pixel color and depth information at high frame rate and resolution. Gaming and entertainment applications such as the Microsoft Kinect system resulted in the mass production of RGB-D cameras at extremely low cost, also making them available for a wide range of robotics applications. In this talk, I will provide an overview of depth camera research done in the Robotics and State Estimation Lab over the last five years. This work includes 3D mapping, autonomous object modeling, unsupervised feature learning for object recognition, and articulated object tracking.
Speaker: Professor Siddhartha Srinivasa, The Robotics Institute, Carnegie Mellon University
06/06/2014 2:00pm to 3:00pm
Seminar Venue: Executive Classroom, COM2-04-02
For years, the focus of robot motion planning has been to produce functional motion: industrial robots move to weld parts, vacuuming robots move to suck dust, and personal robots move to clean up a dirty table. We have been exploring the thesis that although functional motion is ideal when robots perform tasks in isolation, it is insufficient for collaboration, where a human and a robot are manipulating in a tightly-coupled shared workspace. Our goal is to make this collaboration fluent and seamless. To this end, we have been developing algorithms where the notion of an observer watching the motion is woven into the fabric of the motion planner. This perspective has allowed us to formalize qualitative notions from psychology, such as predictability and legibility, in terms of Bayesian inference and inverse optimal control, and to develop generative models for such motion using functional gradient optimization. I will also describe some of our user studies on applying these algorithms to human-robot handovers, assistive teleoperation, and shared workspace collaboration, and ongoing work on deception, ambiguity, and emotive motion.
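A toy sketch of how an observer's goal inference can be grounded in Bayesian reasoning: a maximum-entropy likelihood with straight-line costs is a common simplification, and the goal positions and function name below are assumptions, not the authors' actual model:

```python
import math

def goal_posterior(start, q, goals):
    """Bayesian goal inference for a partial trajectory start -> q:
    P(G | path so far) is proportional to
    exp(-(c(start->q) + c*(q->G))) / exp(-c*(start->G)),
    using straight-line Euclidean distance as the optimal cost c*
    and a uniform prior over goals."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    scores = [math.exp(-(dist(start, q) + dist(q, g)) + dist(start, g))
              for g in goals]
    z = sum(scores)
    return [s / z for s in scores]

# moving right from the origin makes the goal on the x-axis far more probable
probs = goal_posterior((0, 0), (2, 0), [(4, 0), (0, 4)])
```

Legible motion, in this framing, is motion chosen to drive this posterior toward the robot's true goal as early as possible, even at some cost in path efficiency.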
Speaker: Professor Oussama Khatib, Stanford University
01/04/2014 4:00pm to 5:00pm
Seminar Venue: Engineering Auditorium, 9 Engineering Drive 1, Singapore 117575, Faculty of Engineering, National University of Singapore
Robotics is rapidly expanding into the human environment and vigorously engaged in its new emerging challenges. From a largely dominant industrial focus, robotics has undergone, by the turn of the new millennium, a major transformation in scope and dimensions. This expansion has been brought about by the maturity of the field and the advances in its related technologies to address the pressing needs for human-centered robotic applications. Interacting, exploring, and working with humans, the new generation of robots will increasingly touch people and their lives, in homes, workplaces, and communities, providing support in services, entertainment, education, health care, and assistance. The discussion focuses on new design concepts, novel sensing modalities, efficient planning and control strategies, and modeling and understanding of human motion and skills, which are among the key requirements for safe, dependable, and competent robots. The exploration of the human-robot connection is proving extremely valuable in providing new avenues for the study of human movement, with exciting prospects for novel clinical therapies, athletic training, character animation, and human performance improvement.