Service Robotics between AI and imitation learning
The use of robots in non-industrial environments is set to keep increasing over the coming years. Industrial environments are comparatively structured, and robots there usually perform specific tasks, whereas non-industrial settings can be complex and unpredictable. Robots operating in these environments therefore need a broader set of skills, making enhanced Artificial Intelligence (AI) essential to success in service robotics. Essentially, AI brings learning and decision-making capabilities to robots that were previously limited to pre-programmed tasks. Some use cases AI makes possible include:
- Manufacturing assembly lines – AI helps robots navigate successfully and learn to choose the best routes to their destinations.
- Customer service – the more AI interacts with humans, for example through chatbots, the more its capabilities improve.
- Healthcare – AI-enabled robots help carry out clinical tests, perform sanitization and disinfection, and even assist in remote surgery.
- Space exploration – AI robots have brought significant achievements in space exploration and will continue to do so.
- Agriculture – AI robots can pick fruit and vegetables, spray pesticides, monitor plant health, and more.
Within the broader category of AI are the subcategories of Machine Learning and Deep Learning. Machine Learning is an application of AI in which algorithms gather data and learn from it to make decisions. Deep Learning structures algorithms in layers to create an “artificial neural network” that can learn and make intelligent decisions on its own.
One application of Machine Learning, known as Imitation Learning, is learning by example, similar to the behaviour shown by young children. Imitation Learning is also related to reinforcement learning, where acting in a certain way is encouraged through rewards. Imitation Learning has become popular in service robotics because manually programming robots with multiple tasks in unstructured environments can be challenging. For example, it can be used for service robots to mimic human gestures and object-grasping techniques.
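To make the idea of learning by example concrete, here is a minimal, illustrative sketch (not PAL Robotics code) of behavioural cloning, the simplest form of Imitation Learning: the "policy" simply returns the action from the recorded demonstration whose state is closest to the current one. The demonstration data and action names are hypothetical.

```python
# Minimal behavioural-cloning sketch: imitate the nearest demonstrated state.
# Demonstrations are (state, action) pairs recorded from a human teacher.

def clone_policy(demonstrations):
    """Return a policy that copies the action of the closest demonstration."""
    def policy(state):
        # Pick the demonstration whose state is nearest to the query state.
        nearest = min(demonstrations,
                      key=lambda pair: sum((a - b) ** 2
                                           for a, b in zip(pair[0], state)))
        return nearest[1]
    return policy

# Hypothetical demonstrations: grasp type chosen for object width (metres).
demos = [((0.02,), "pinch_grasp"), ((0.10,), "power_grasp")]
policy = clone_policy(demos)
```

Real systems replace the nearest-neighbour lookup with a trained neural network, but the principle is the same: states observed during human demonstrations are mapped to the actions the human chose.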
Self-supervised Learning, a subcategory of Machine Learning, is a way of training machines without humans providing labelled data. It lets robots generate training examples by themselves in order to learn more and perform better, for example by interpreting sensor data. The aim is for machines to supervise their own learning, for instance by labelling a dataset automatically. A major benefit of self-supervised learning in service robotics is the ability to scale data gathering in a lifelong-learning manner, reducing the effect of dataset bias and improving robot performance.
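One common way a robot can label its own data is cross-modal supervision: one sensor provides the label for another. The sketch below (a hypothetical example, not any specific product's pipeline) labels range readings as "obstacle" whenever a bumper was triggered shortly afterwards, producing a labelled dataset with no human in the loop.

```python
# Self-supervision sketch: pair two sensor streams so one labels the other.
# Hypothetical setup: a range sensor plus a contact bumper.

def auto_label(range_readings, bumper_events):
    """Label a (time, distance) reading 'obstacle' if a bump followed within 1 s."""
    dataset = []
    for t, dist in range_readings:
        hit = any(t <= tb <= t + 1.0 for tb in bumper_events)
        dataset.append((dist, "obstacle" if hit else "free"))
    return dataset

readings = [(0.0, 0.3), (5.0, 2.0)]   # (timestamp s, measured distance m)
bumps = [0.4]                          # bumper event timestamps
labelled = auto_label(readings, bumps)
```

The labelled pairs can then train a classifier that predicts collisions from range data alone, without any manual annotation.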
Multi-agent Reinforcement Learning
Multi-agent Reinforcement Learning involves Machine Learning robots adapting to other robots, coordinating and negotiating, and collaborating to build a better learning model. The idea is that robots work together to build a shared knowledge base: each robot builds its own dataset, and robots can then compare the data they have gathered and correct errors. Multi-agent Reinforcement Learning robots improve their decisions through experience and learn to coordinate with one another, which is particularly useful for organisations where multiple robots work together, a setup predicted to become increasingly popular in the coming years.
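The idea of robots pooling experience can be sketched with two independent Q-learning agents on a toy two-armed bandit task, whose value estimates are then merged by averaging. This is a deliberately simplified illustration of knowledge sharing, not a full multi-agent RL algorithm.

```python
import random

# Two independent Q-learning agents on a toy two-armed bandit, followed by
# a simple knowledge-sharing step (averaging Q-tables). Illustrative only.

def train_agent(arm_rewards, episodes=200, alpha=0.1, eps=0.2, seed=0):
    rng = random.Random(seed)
    q = [0.0, 0.0]
    for _ in range(episodes):
        # epsilon-greedy action selection
        a = rng.randrange(2) if rng.random() < eps else q.index(max(q))
        r = arm_rewards[a] + rng.gauss(0, 0.1)   # noisy reward
        q[a] += alpha * (r - q[a])               # Q-learning update
    return q

def merge(q_tables):
    """Robots pool experience by averaging their value estimates."""
    return [sum(qs) / len(q_tables) for qs in zip(*q_tables)]

q1 = train_agent([1.0, 0.0], seed=1)
q2 = train_agent([1.0, 0.0], seed=2)
shared = merge([q1, q2])   # shared table still prefers the better arm
```

Averaging is the crudest possible fusion rule; real systems weight estimates by how much experience each robot has, but the sketch shows why comparing datasets lets robots correct each other's errors.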
Chatbots are software applications that use AI and Natural Language Processing (NLP) to better understand requests from humans. This involves using existing data to understand questions better, and using Machine Learning to provide improved answers in the future. AI chatbots are designed to keep learning automatically after an initial training period, drawing on the past information available to them, and most organisations have a chatbot that maintains logs of its conversations. In recent years, countless chatbots have been developed for applications and for customer support.
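At its simplest, the "understanding" step a chatbot performs is intent classification. The toy sketch below (keyword overlap, nothing like a production NLP pipeline, with made-up intents) shows the basic mapping from an utterance to an intent that real systems implement with trained language models.

```python
# Toy intent-matching sketch: score each intent by keyword overlap.
# Intents and keyword sets are hypothetical examples.

INTENTS = {
    "opening_hours": {"open", "hours", "close"},
    "order_status":  {"order", "delivery", "track"},
}

def classify(utterance):
    """Return the intent sharing the most keywords with the utterance."""
    words = set(utterance.lower().split())
    scores = {intent: len(words & kws) for intent, kws in INTENTS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "fallback"
```

The conversation logs mentioned above are what supply the training data that lets a real chatbot improve this classification step over time.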
AI Applications with PAL Robotics’ robots
At PAL Robotics, as well as equipping our robots with NVIDIA Jetson GPUs to enable AI, we develop various AI applications for our robots with our partners, many of them as part of EU research projects, in order to foster new use cases in robotics. Here are some examples:
Model Predictive Control with robot TALOS in project Memmo
In project Memmo, working with our humanoid robot TALOS, the team has worked on Model Predictive Control (MPC), running full-state-feedback experiments with a memory of motion used as a warm start. Thanks to the warm start, the MPC is able to find the optimal solution in time. For instance, TALOS has to track an object while avoiding collisions with obstacles in the environment.
With the MPC algorithm, the robot is expected to execute the given task with full state feedback and joint torque control, which enhances system robustness. For instance, the robot can be commanded to perform a trajectory based on the model, and it will then try to accomplish the task even when hindered by external disturbances. The computation of the future predicted state could be migrated from the main control PC to the NVIDIA Jetson, which would then communicate with the main controller running on the control PC. EU project Memmo, which works with TALOS, aims to develop memory of motion – a unified approach to motion generation for complex robots with arms and legs.
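The receding-horizon and warm-start ideas can be illustrated on a toy 1-D system, far simpler than TALOS's whole-body controller. The sketch below optimises a short control sequence by plain gradient descent; the "memory of motion" effect is emulated by shifting the previous solution forward and reusing it as the next initial guess, so the solver starts close to the optimum.

```python
# Receding-horizon MPC sketch for the toy 1-D system x[k+1] = x[k] + u[k].
# Solver: numerical-gradient descent on the control sequence (illustrative).

def rollout(x0, u):
    xs = [x0]
    for uk in u:
        xs.append(xs[-1] + uk)
    return xs

def cost(x0, u, target, r=0.1):
    """Quadratic tracking cost plus a small control-effort penalty."""
    xs = rollout(x0, u)
    return sum((x - target) ** 2 for x in xs[1:]) + r * sum(uk ** 2 for uk in u)

def solve(x0, target, u_init, iters=200, lr=0.05, h=1e-4):
    u = list(u_init)
    for _ in range(iters):
        for i in range(len(u)):          # finite-difference gradient step
            up = u[:]
            up[i] += h
            g = (cost(x0, up, target) - cost(x0, u, target)) / h
            u[i] -= lr * g
    return u

u = solve(0.0, 1.0, [0.0] * 5)           # cold start from zero controls
x_next = 0.0 + u[0]                      # apply only the first control
u_warm = u[1:] + [0.0]                   # shift: warm start for the next step
u2 = solve(x_next, 1.0, u_warm, iters=50)  # converges with fewer iterations
```

Only the first control of each solved sequence is applied before re-solving, which is exactly the receding-horizon pattern; the warm start is what lets a real-time MPC loop meet its deadline.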
Object segmentation in the NVIDIA Jetson Challenge with robot TIAGo
During the NVIDIA Jetson Challenge, the team at the University of Koblenz worked with our TIAGo mobile manipulator robot combined with the NVIDIA Jetson TX2. TIAGo took part in the “Bring me a beer from the fridge” challenge. The team integrated semantic segmentation into a practical use case, proposing a state-dependent segmentation method, and used the robot’s RGB-D camera to project the resulting segmentation into a local robot coordinate system for manipulating objects. The robot navigated to a fridge and searched for a handle to open it. Once the fridge was open, the robot took another look to segment the beer bottles and cans inside, and grasped them. Finally, the fridge was closed and the beer was delivered. The focus of the Jetson TX2 was the execution of the team’s custom semantic segmentation network, although there were also options to use other modules such as navigation, mapping and speech recognition.
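The projection step from a segmented pixel to a manipulable 3-D point uses the standard pinhole camera model. Here is a minimal sketch; the intrinsic parameters (fx, fy, cx, cy) are placeholder values typical of a VGA RGB-D sensor, not TIAGo's actual calibration.

```python
# Back-project a segmented RGB-D pixel into the camera frame (pinhole model).
# fx, fy: focal lengths in pixels; cx, cy: principal point (placeholders).

def pixel_to_point(u, v, depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5):
    """Return the 3-D point (x, y, z) in the camera frame, in metres."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)
```

Each pixel the segmentation network labels as "bottle" is back-projected this way, and a further fixed transform (from the robot's calibration) maps the camera-frame point into the robot coordinate system used by the grasp planner.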
Speech Recognition with our robot TIAGo in ALMI Project
Our robots are capable of Automatic Speech Recognition (ASR) in multiple languages. For example, our robots ARI and TIAGo use the Google Cloud Speech API, powered by Google’s AI technologies, which works by capturing audio from the robots’ microphones.
As an example, in project ALMI (Ambient Assisted Living for Long-term monitoring and Interaction Integration), ASR will be essential to help robot TIAGo understand user requests during food preparation and, in any emergency situation, to detect keywords that a user may say. In project ALMI, TIAGo will be developed into a broad range of social robotic solutions. TIAGo will use both speech interaction for voice instructions and object manipulation capabilities to help a user with mild motor and cognitive impairments in the daily activity of preparing a meal. TIAGo will also be adapted with environment-monitoring capabilities to build and maintain a knowledge base of objects.
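The keyword-detection step described above can be sketched as a simple spotter over the text that the ASR returns. The keyword list is a hypothetical example; a real deployment would also use the recogniser's per-word confidence scores.

```python
# Keyword-spotting sketch over an ASR transcript (illustrative keywords).

EMERGENCY_KEYWORDS = {"help", "fall", "fallen", "emergency", "pain"}

def detect_emergency(transcript):
    """Return the sorted list of emergency keywords found in the transcript."""
    words = {w.strip(".,!?").lower() for w in transcript.split()}
    return sorted(words & EMERGENCY_KEYWORDS)
```

If the returned list is non-empty, the robot can escalate, for example by confirming with the user or alerting a caregiver.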
At PAL Robotics, our robots can also be customized to meet customer requirements in AI, with different types of NVIDIA Jetson integrations. Many of our research customers have also added their own Machine Learning and Deep Learning integrations to our robots. AI, Machine Learning and Deep Learning are set to keep transforming how we use service robots, as the industry continues to push the boundaries of what new use cases are possible.