ARC Seminars

MATLAB/Simulink for Robotics and Autonomous Systems Development


Date/Time: 12 April 2019, 11am

Location: ARC (Engagement Room)


Applications of robotics and autonomous systems are expanding from structured industrial settings to more dynamic environments, which demand intelligent autonomous operation while meeting quality and safety requirements.  This presentation covers MATLAB/Simulink capabilities that support autonomous robotics development, highlighting examples of customer use cases.  It includes Model-Based Design for the development and testing of complex systems, integration with ROS, co-simulation with Gazebo, and deployment to hardware using automatic code generation.  Capabilities for developing autonomous algorithms, such as localization, mapping, SLAM, and path planning, will also be discussed.  Attendees will learn how MATLAB/Simulink and its supporting features apply to current trends in autonomous robotics.

Click here to download the speaker’s biography.


Deep Learning for Perception in Autonomous Vehicles


Date/Time: 8 May 2019 (Wednesday), 6.00pm

Venue: NUS Faculty of Engineering, Lecture Theatre 1 (LT1)


Advanced Robotics Centre Seminar: Learning how the brain learns

Title: “Learning how the brain learns”: The successor representation in reinforcement learning
Date/Time: 8 October 2018 (Monday), 6.00pm to 7.00pm
Venue: NUS Faculty of Engineering, Lecture Theatre 7A (LT7A)
Speaker: Fred Almeida
Artificial intelligence learns as we humans do: by trial and error. Our experiences teach us valuable lessons that can be applied to new situations. Over time, the surrounding world starts to make sense to us, and we can be entrusted with more demanding tasks, like driving a car. For an AI agent, this learning takes place in a simulation, but the principles are the same. To build an AI capable of human-level performance, Ascent is bringing the most recent research advancements in the field to market. Technologies we work with include deep reinforcement learning, generative models, and transfer learning. We are constantly testing new models and applying the best research to create cutting-edge artificial intelligence.

AI algorithms are not developed in the same way as traditional software: AIs are trained in simulations through trial and error. Behaviour that produces better outcomes for the AI agent is reinforced, while adverse choices are discouraged; this is called reinforcement learning. This approach allows the AI to work out for itself which behaviours are most useful to master. We control the learning framework of the simulated environment and let the AI agent determine optimal behaviour models on its own.
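The trial-and-error loop described above can be sketched in a few lines. The example below is purely illustrative (not code from the talk): a minimal tabular value-learning sketch on a hypothetical two-armed bandit, where arm 1 pays off more and its value estimate is therefore the one that gets reinforced.

```python
import random

def pull(arm):
    """Hypothetical environment: arm 1 yields a higher reward."""
    return 1.0 if arm == 1 else 0.1

def train(episodes=500, alpha=0.1, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    q = [0.0, 0.0]  # value estimate per arm
    for _ in range(episodes):
        # Explore occasionally; otherwise exploit the best-known arm.
        arm = rng.randrange(2) if rng.random() < epsilon else q.index(max(q))
        reward = pull(arm)
        # Behaviour with better outcomes is reinforced: nudge the
        # estimate toward the observed reward.
        q[arm] += alpha * (reward - q[arm])
    return q

q = train()
```

Over many episodes the estimate for the better arm climbs toward its true reward while the worse arm's estimate stays low, which is the "better outcomes are reinforced" dynamic described above.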


About the speaker:
Fred Almeida is the founder of Ascent Robotics, a company developing autonomous robotics in Tokyo. He has a background in Philosophy and Mathematics and is attending Harvard Business School. He spends his free time mostly on planes, reading papers.

Please register your attendance at the following link by 4 October 2018:


Click here for pdf with more details.

Click here for the webcast of the seminar.



Deep learning as CAD in Medical AI & Imaging

Date/Time: 31 August 2018 (Friday), 2pm to 3pm
Venue: NUS Faculty of Engineering, Advanced Robotics Centre, Blk E6, Level 7, ARC Engagement Space
Speaker: Dr. Zongyuan Ge

Recent advances in artificial intelligence and machine learning have demonstrated the superiority of deep learning technologies in solving various medical imaging analysis tasks involving CT, MRI, and X-ray. In this talk, Zongyuan will go through his research in dermatology and radiology analytics during his career at IBM Research – Australia and NVIDIA. More specifically, he will cover feature and multi-modal learning for multi-class skin disease classification, loss function and model architecture design for lung disease detection on chest X-rays, and open-set settings for general image classification.

About the Speaker:
Dr. Zongyuan Ge is a senior research fellow and NVIDIA deep learning specialist at Monash University, Australia. He received his Bachelor's degree (with first class honours) in electrical and mechatronics engineering from the Australian National University in 2012. He then joined the Australian Centre for Robotic Vision at Queensland University of Technology to pursue his Ph.D. under the supervision of Prof. Peter Corke. During his Ph.D., he wrote and published over 20 research papers and a book about using AI and big data in real-world applications. He also spent one year as a research consultant for DeepGlint, a unicorn start-up providing smart city services to improve people's lives. He joined the IBM Research – Australia lab in 2016 and became the chief technical investigator in developing AI solutions for dermatology and radiology, receiving a science accomplishment award and a manager's choice award within IBM for his contributions to those projects. In 2017, Zongyuan was selected as one of the 200 Most Qualified Young Researchers in Computer and Mathematics by the Scientific Committee of the Heidelberg Laureate Forum Foundation. More recently, he has become the external deputy director of AI research for Airdoc, a start-up company that has raised over 50 million USD since it was founded.


Please register your attendance at the following link by 30 August 2018:

Enhanced Performance and Autonomy for Field Robots Through Safe Learning with Degraded Sensing in Unstructured, Uncertain and Changing Environments

Date/Time: 20 December 2017 (Wednesday), 10.30am to 12.00pm

Venue: NUS Faculty of Engineering, Advanced Robotics Centre,
Blk E6, Level 7, Engagement Room

Speaker: Erkan Kayacan, Ph.D.
University of Illinois at Urbana-Champaign, IL, USA

Nowadays, the complexity of robotic system design is increasing enormously, driven by the desire for higher levels of intelligence and autonomy. The developed systems must be capable of autonomously adapting to variations in the operating environment while still accomplishing their tasks, even in highly uncertain and unstructured settings. Such robotic systems must display the ability to learn from experience, adapt themselves to the changing environment, and seamlessly exchange information with humans. Traditional controllers have important limitations: i) the coefficients of the controllers cannot be tuned optimally, due to the complex nature and vaguely known dynamics of the system; ii) the control parameters cannot be adapted to changing system parameters and varying environmental conditions; iii) constraints on the system cannot be handled; and iv) interactions between subsystems are not accounted for. These drawbacks result in suboptimal control performance. Advanced techniques are therefore required to deal with naturally constrained, nonlinear, multi-input multi-output systems. This talk addresses nonlinear model predictive control (NMPC) and nonlinear moving horizon estimation (NMHE), which are computationally very intensive and require real-time solutions, as ways to handle the aforementioned problems, and shows their applications in field robots.
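The receding-horizon idea behind model predictive control can be illustrated with a toy example. The sketch below is purely illustrative and not from the talk: it steers a one-dimensional double integrator (position and velocity) by brute-forcing a small discrete input set over a short horizon, standing in for the real-time nonlinear optimizers that actual NMPC requires.

```python
import itertools

# Toy double-integrator dynamics: state = (position, velocity),
# input u is an acceleration command, dt is the step size.
def step(state, u, dt=0.1):
    x, v = state
    return (x + v * dt, v + u * dt)

# Stage cost: penalize distance from the origin, speed, and effort.
def cost(state, u):
    x, v = state
    return x * x + 0.1 * v * v + 0.01 * u * u

def mpc_control(state, horizon=4, inputs=(-1.0, 0.0, 1.0)):
    """Search all input sequences over the horizon; return the first
    input of the cheapest one (the receding-horizon principle)."""
    best_u, best_cost = 0.0, float("inf")
    for seq in itertools.product(inputs, repeat=horizon):
        s, total = state, 0.0
        for u in seq:
            s = step(s, u)
            total += cost(s, u)
        if total < best_cost:
            best_u, best_cost = seq[0], total
    return best_u

# Closed loop: apply only the first planned input, then re-plan
# from the newly reached state.
state = (1.0, 0.0)
for _ in range(50):
    state = step(state, mpc_control(state))
```

Real NMPC replaces this brute-force search with a nonlinear optimizer solving a constrained problem at every step, which is why the real-time solution methods mentioned in the abstract matter.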

Erkan Kayacan received the B.Sc. and M.Sc. degrees in mechanical engineering from Istanbul Technical University, Turkey, in 2008 and 2010, respectively. In December 2014, he received his Ph.D. from the University of Leuven (KU Leuven), Belgium. During his Ph.D., he was a visiting Ph.D. scholar at Boston University under the supervision of Prof. Calin Belta. After his Ph.D., he became a Postdoctoral Researcher at the Delft Center for Systems and Control, Delft University of Technology, The Netherlands. He is currently a Postdoctoral Researcher with the Coordinated Science Lab and the Distributed Autonomous Systems Lab at the University of Illinois at Urbana-Champaign, under the supervision of Asst. Prof. Girish Chowdhary. His research interests center around real-time optimization-based control and estimation methods, nonlinear control, learning algorithms, and machine learning, with a heavy emphasis on applications to autonomous systems.

Please register your attendance at the following link by 18 December 2017:

Lunch will be provided.

The seminar flyer can be downloaded here.

Service Robots Are Here

Speaker: Steve Cousins, Founder of Savioke
Venue: COM2-04-02, Executive Classroom
Date/Time: 28 March 2016 (Monday), 1.00pm


After 20 years of predictions that robots will soon work among us, the predictions are finally starting to come true. Investment in robotics is up, enabling start-ups to explore a range of use cases, and decreasing component costs are making it easier to build real business cases for the technology. Mobile robots are beginning to transform the way we serve people in hotels, elder care facilities, hospitals, restaurants, and throughout the service industry, and this trend is accelerating. Savioke deploys delivery robots in hotels and has had one in continuous operation since August 2014. There are now several Savioke Relay robots making over 400 deliveries each week across California, and the number is quickly growing.


Steve Cousins is a world leader in service robotics. He is passionate about building and deploying robotic technology to help people. Before founding Savioke, he was the President and CEO of Willow Garage, where he oversaw the creation of the Robot Operating System (ROS), the PR2 robot, and the open-source TurtleBot. Steve serves on the boards of the Open Source Robotics Foundation and Silicon Valley Robotics, and is an active participant in the Robots for Humanity project. He has been a senior manager at IBM’s Almaden Research Center and a member of the senior staff at Xerox PARC. He holds a Ph.D. from Stanford University, and BS and MS degrees in computer science from Washington University.

A Historical Humanoid Development and Human Support Robot Technologies

Speaker: Yoshihiro Kuroki, Advanced Technology Engineering Dept., Partner Robot Div. (Tokyo), TOYOTA MOTOR CORPORATION
Venue: NUS Faculty of Engineering, Blk EA, Level 2, EA-02-11
Date/Time: 18 December 2014 (Thursday), 2.00pm

In 1997, I launched a development project for a small biped entertainment robot named SDR (Sony Dream Robot). After a very short development period, in November 2000, we presented the first prototype, SDR-3X, featuring dynamic and attractive motion performances such as Para Para dances and multi-unit formation dances. QRIO (SDR-4X II) is the latest and most advanced model, with important capabilities such as a safe design and functions for safe physical interaction with humans. However, the QRIO project was terminated in 2006.

In 2009, I started to develop the essential technologies for human support robots. Human support robots that live together with humans and provide livelihood support should be able to interact physically with humans, objects, and the environment. To achieve these capabilities in physical interaction tasks with safe operation, we propose a new torque sensing method and cooperative force adaptive control using distributed torque servo control systems based on joint torque detection. I will also give a brief introduction to a joint-torque-control-based bilateral master-slave system, exploring future applications.


Knowledge Representation and Exchange for Robot Tasks

ARC, SIMTech-NUS Joint Lab on Industrial Robotics and A*STAR Industrial Robotics Program Seminar

Speaker: Mr Alexander C. Perzylo, Fortiss, Germany
Date/Time: 30 September 2014 (Tuesday), 10.30am

Humans can use the Internet to share knowledge and to help each other accomplish complex tasks. Until now, robots have not taken advantage of this opportunity. Sharing knowledge between robots requires methods to effectively encode, exchange, and reuse data. This presentation explains the design and implementation of a system for sharing knowledge about robot tasks.