ROSCon & IROS: Thank you, Vancouver!

This year ROSCon consolidated its position more than ever as the landmark event for the Robot Operating System (ROS) developer community, an impression reinforced by a record-breaking number of partners and sponsors – PAL Robotics among them. As usual, the ROSCon organizers have made all the conference talks available online. These are very useful state-of-the-art resources for all developers, including those who couldn’t make it to Vancouver.

PAL Robotics’ software developments and TALOS Space Robotics Challenge

Our CTO Luca Marchionni outlined PAL Robotics’ software development and how we use ROS tools in his talk at ROSCon. One of the highlights was Whole Body Control, a software system customized by our team that abstracts away the complexity of a robot, especially one with a large number of DoF, so that high-level applications can be built on top of it. This is the case with robots like TALOS, REEM-C or TIAGo.

In fact, this software was used extensively by Team Olympus Mons, a team made up of current and former PAL members – including Marchionni – which won 3rd place at the NASA Space Robotics Challenge. Thanks to our software’s modularity, PAL Robotics spent only two days porting the developments achieved for the Space Robotics Challenge to the simulation of our latest humanoid, TALOS, as Marchionni showed in a video during the talk. And very soon we will see the actual TALOS humanoid robot performing those tasks!

TIAGo adventures at IROS&ROSCon

When we bring some of our TIAGo robots to an event, they always have a productive time! After visiting Vancouver and attending the ROSCon talks, TIAGo also spent five days enjoying the 30th anniversary of IROS.

IROS was a good occasion to share our latest developments: our CEO gave a talk at the RSJ-IAC Lunch in front of an attentive audience. Some of the latest advances we presented relate to the European projects we are part of, such as Co4Robots, EnrichMe, SocSMCs, RobMoSys and GrowMeUp.

In short, IROS&ROSCon remain an absolute must for learning about the latest cutting-edge developments in robotics. The PAL Robotics team was able to speak with many researchers and companies that are doing great work. Our feeling was that the whole robotics community is working to get these robotic platforms ready to enhance people’s quality of life. It was outstanding, and we are extremely glad to be part of it.

IROS has just ended and we are already looking forward to next year’s IROS 2018 in Madrid. Meanwhile, do not miss our next adventures over at Humanoids 2017 in Birmingham!


SocSMCs: making robots interact without words

When we shake hands with someone or help them move furniture, our body is calculating much more than we are consciously aware of. The direction in which our hand is moving, how much force is needed, how we keep our balance in the meantime… these are automatic decisions our body takes every day, without us noticing a thing.

Despite its complexity, this is easy for people. With robots, however, it is a whole different story. Robots need all these calculations programmed in order to behave in a way we find “normal” or “socially accepted”, even at what we would consider a very basic level of motion control. They have neither intuition nor social education as a basis for their actions. But such interactive behaviour is essential if we want robots to adapt to and help in our lives.

One of the EU research projects we are involved in, SocSMCs, is focused on studying socially relevant action-effect contingencies in human-human and human-robot interaction scenarios. With the goal of achieving a robotics technology that is socially competent, some of the SocSMCs tests are being done with PAL Robotics’ humanoid robot REEM-C.

First steps: a friendly and responsive handshake

One of the first capabilities developed with REEM-C for SocSMCs is enabling the humanoid to give a proper handshake, grasping the other person’s hand with a controlled force and following their movements.

The challenge is to create robots that work in human environments in a way that feels more natural to people. The robot should be able to understand the non-verbal cues that humans use all the time and react accordingly.
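Following someone’s hand while keeping a steady grip is, at its core, a compliance problem. As a rough illustration, here is a minimal admittance-control sketch in Python; it is a generic textbook scheme with illustrative gains and names, not the actual SocSMCs controller:

```python
import numpy as np

def admittance_step(f_measured, f_desired, v_prev, dt=0.01, mass=2.0, damping=20.0):
    """One step of the admittance law M*dv/dt + D*v = f_err.

    The force deviation from the grip setpoint is turned into a hand
    velocity, so the robot yields and follows the person's movement."""
    f_err = np.asarray(f_measured, dtype=float) - np.asarray(f_desired, dtype=float)
    dv = (f_err - damping * v_prev) / mass
    return v_prev + dv * dt

# Toy run: the person pulls the hand along x, and the hand drifts with them
v = np.zeros(3)
for f in ([1.0, 0.0, 0.0], [2.0, 0.5, 0.0]):   # measured wrist forces [N]
    v = admittance_step(f, [0.0, 0.0, 0.0], v)
print(v)
```

The damping term is what makes the behaviour feel “friendly”: the hand gives way smoothly instead of fighting the person or accelerating away.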

What is REEM-C learning now?

SocSMCs has more plans for our humanoid robot – such as taking a group selfie or helping someone move a piece of furniture without speaking. REEM-C is currently learning the steps to accomplish the latter, which are:

  1. See and recognize the object (the furniture)
  2. Approach the object together with the person, using autonomous navigation
  3. Grasp the object and lift it in coordination with the person
  4. Move the object without being explicitly told, “guessing” the human’s intentions through its sensors

Studying these cases will help when applying similar developments to the many other situations in which a robot needs to play an assistive, cooperative role in human-robot interaction.


ROSCon & IROS 2017: Towards a human-friendly robots horizon

“Friendly people, friendly robots”. This is the theme of IROS 2017, held in Vancouver next week, reflecting that many stakeholders – PAL included – foresee a future with robots all around us, enhancing our daily life and helping us grow as humans.

TIAGo is joining our team traveling to Canada! Our endearing robot is so excited to meet everyone at this major robotics event! Find us in the IROS exhibition area and test TIAGo’s collaborative capabilities. Its features are specially designed for working hand in hand with humans, assisting in both industrial and domestic environments, always with a cute smile on its face!

Learn more about PAL Robotics’ philosophy and activities at the RSJ-IAC Lunch, with a presentation by the company’s CEO, Francesco Ferro (Sept. 25th – 12:30h). There you will have the opportunity to discuss the future of robotics with our team.

ROSCon 2017: Joining efforts to boost the state of the art in robotics

We are fully convinced that integrating robotics into human environments can only be achieved through joint efforts, so that we do not have to reinvent the wheel every time. ROS is a robotics middleware that provides a common framework for people working in robotics. Our commitment to ROS as a common framework for the robotics community has led us to sponsor ROSCon 2017. We believe the conference will accelerate software development by letting everyone share their experience with the most common challenges in robotics and their approaches to finding creative and elegant solutions.

Our background in designing ROS-powered robots will be presented at ROSCon by PAL Robotics’ CTO, Luca Marchionni (Sept. 22nd – 14:30). All of our robots, from the small mobile bases to the human-sized bipeds, use the ROS framework, and our team constantly reviews and improves it. One of the secrets to building our modular, flexible, configurable and testable robots is following some of the ROS guidelines.

The control software architecture, based on OROCOS and ros_control, will be presented together with the ros_controllers we’re currently using. We will focus, in particular, on our approach to Whole Body Control as an efficient redundancy resolution controller that allows us to generate real-time motions on anthropomorphic robots.
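For readers who can’t attend the talk, the core idea behind redundancy resolution can be sketched in a few lines. This is the classic pseudoinverse-plus-nullspace formulation from the textbooks, not PAL’s actual Whole Body Control implementation, and all values below are placeholders:

```python
import numpy as np

def redundancy_resolution(J, xdot_task, qdot_posture):
    """Map a desired task-space velocity to joint velocities, spending the
    robot's extra DoF on a secondary posture objective in the task's null space."""
    J_pinv = np.linalg.pinv(J)               # damped pseudoinverses are common on real hardware
    N = np.eye(J.shape[1]) - J_pinv @ J      # null-space projector: motions invisible to the task
    return J_pinv @ xdot_task + N @ qdot_posture

# Toy example: a 7-DoF arm tracking a 3D end-effector velocity
rng = np.random.default_rng(0)
J = rng.standard_normal((3, 7))              # task Jacobian (placeholder values)
xdot = np.array([0.10, 0.00, -0.05])         # desired end-effector velocity [m/s]
qdot_posture = -0.1 * rng.standard_normal(7) # e.g. drift towards a comfortable posture
print(redundancy_resolution(J, xdot, qdot_posture))
```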

For those of you attending either ROSCon or IROS 2017, our team will be happy to welcome you at our stand and show you what our robots are able to do.


The European Robotics League is coming to Barcelona!

Robots from all over Europe will descend on Barcelona for the European Robotics League’s Local Tournament, 20 – 24 November. It’s the first time the Spanish city has hosted the European Robotics League (ERL), and it will be held at PAL Robotics’ office.

We’re incredibly excited to announce that the call for participation is now open! The deadline for applications is 30 September, and successful teams will be announced on our website by mid-October.

Important dates:

  • 30 September 2017 – Deadline for application and request for travel support
  • 11 October 2017 – Qualification announcement
  • 20 November 2017 – Competition begins
  • 24 November 2017 – Competition ends

What is the ERL?

Made up of a series of Major and Local competitions held at certified testbeds across Europe, the ERL is a key touchpoint in the robotics industry’s calendar. The Service Robots branch of the competition requires teams to program robots that can navigate a typical home environment and complete a set of task and functionality benchmarks.

Keep an eye on the Local Tournament’s website to see what our testbed will look like.

As a Platinum Sponsor of the ERL – Service Robots Competition, we have lent TIAGo Steel robots, at just €650 a month, to four lucky teams who applied to use the robot in the competition. Take a look at our blog to see who the lucky winners were!

The TIAGo Steel is a collaborative robot with a mobile base, 14 degrees of freedom, a 6kg-payload arm and a gripper. Sold for around €49,000, the robot is an ideal platform for research into sectors such as healthcare and assisted living.

We will also provide these teams with a free TIAGo Steel to use during the Tournament. As a result, it is set to be one of the best attended Tournaments of the 2017/2018 ERL Season.

How to apply

Teams that want to apply must send an email to erl.service@robotics-league.eu with the following information:

  • Team name
  • Team leader’s name
  • Total number of team members (including the team leader)
  • Affiliation
  • Contact information (e-mail and phone number of the team leader)
  • Website
  • Qualification material (see below)

Applicants are required to submit the following qualification material (download the form):

  • Team description paper (max. 4 pages) explaining the team’s approach to the technical challenges, including a brief description of software and hardware architectures, the relation to the team’s research, and a list of relevant publications
  • Video showing evidence of some performance in the tasks and/or functionalities of the Tournament
  • Completed travel support request form, if needed

Teams will be selected for participation (and travel support) based on:

  • Quality of application paper and research, as measured from the TDP and list of relevant publications
  • Quality of test performed in simulation, if required (due to a high number of applications)
  • Available funds for travel support (maximum of €2,000 per team per year)
  • Financial needs (as declared by the team’s request for travel support)

For more information, please visit the Local Tournament’s website, or contact:

Carlos Vivas – erl@pal-robotics.com

 


The intern guide to SLAM part II: loop closure & new framework

Next up in our SLAM series is intern Tessa Pannen, who is studying for an MA in Computational Engineering Science at the Technische Universität Berlin. As she comes to the end of her six-month internship, we quizzed Tessa about the successes and challenges of her new verification process for loop closure and her new framework for SLAM.

This may be an impossible task, but how would you describe SLAM in less than 50 words?

SLAM – simultaneous localisation and mapping – is a technique robots use to record their surroundings using sensors. They are then able to draw a map and estimate their location within that map. Loop closure is needed to validate the robot’s estimated position in the map and correct it if necessary.

Can you tell us a bit more about the loop closure process?

Loop closure is the ability of the system to recognise a place the robot has previously visited. In visual SLAM, we use RGB cameras to collect information and evaluate whether a robot has “seen” a place before.

A camera integrated in the robot continuously takes pictures of its surroundings and stores them in a database, similar to a memory. Because a robot can’t “see” the way humans do, in order to compare the images it has to break them down into prominent features such as corners or lines. As their properties can be stored as binary descriptors, these features can be easily compared by an algorithm.

We then extract a limited number of promising candidate images from the database for the loop closure. These candidates are very likely to show the same location as the current image, because they contain a high number of features with the same or similar properties. We can check which is the most likely to show the same location as the one the robot currently “sees” by comparing several geometric relations between the features of a candidate image and the current image.
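As a rough illustration of that pipeline, here is a minimal geometric check between the current image and one candidate, assuming OpenCV. It uses ORB binary features and a RANSAC fundamental-matrix fit; this is a generic formulation, not the exact method used in Tessa’s project:

```python
import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=1000)   # binary features: corners, edges and the like

def is_same_place(img_current, img_candidate, min_inliers=30):
    """Match binary descriptors, then verify the candidate geometrically."""
    kp1, des1 = orb.detectAndCompute(img_current, None)
    kp2, des2 = orb.detectAndCompute(img_candidate, None)
    if des1 is None or des2 is None:
        return False
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    if len(matches) < min_inliers:
        return False
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    # RANSAC keeps only matches that obey one consistent epipolar geometry
    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 3.0, 0.99)
    return mask is not None and int(mask.sum()) >= min_inliers
```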

Take a look at our video if you’re wondering what loop closure looks like.

How does your work differ from existing research?

In the first part of my internship, I tried a different approach to the candidate geometric check by comparing the features of three images instead of two. This basically means adding another dimension to the geometric relation between the images, so a 2D matrix becomes three-dimensional. Part of my research involved finding a stable way to compute this transformation using only the matched feature points in the images.


In the second part of the internship, I joined my supervisor in working on a SLAM framework that we hope will manage all the tasks of SLAM by the time it’s finished, namely mapping, localisation and loop closure. I’d never contributed to such a complex system before, so it was incredibly exciting.

For this task, I implemented several new ways of sourcing a list of loop closure candidates, based on the likelihood of the current and all previous positions instead of on feature comparison – a method known as the Nearest Neighbor Filter.
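A minimal sketch of that idea, assuming past positions are stored as 2D coordinates and using SciPy’s KD-tree; the names and the radius test are illustrative, not the framework’s real interface:

```python
import numpy as np
from scipy.spatial import cKDTree

def loop_candidates(past_positions, current_position, radius=2.0):
    """Return indices of previously visited poses within `radius` metres of
    the current pose estimate - these are the loop closure candidates."""
    tree = cKDTree(np.asarray(past_positions))   # index of all previous x,y positions
    return tree.query_ball_point(current_position, r=radius)

past = [(0.0, 0.0), (5.0, 0.2), (10.0, 4.0), (0.4, -0.3)]
print(loop_candidates(past, (0.2, 0.1)))         # indices 0 and 3: likely revisits
```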

What are the next steps in your research?

Tests, tests, and more tests! The only way we’ll know how the new approaches influence the whole loop closure process is by running rigorous tests.

I know you can’t give away too much, but are the results in line with what you expected?

The computation of the transformation works well for simple test images like the one above. We don’t yet know if it’s robust enough to support the loop closure, since we still need to run the tests. It might improve performance, it might not. But doing research is about trying new things, and there’s no guarantee of success. Keep an eye on the blog – hopefully we will be able to publish another exciting new video!

What have been the highlights so far?

I love the satisfaction you get each time you reach another milestone in your project and see your code slowly developing and actually working. I hope we can make the loop closure run on a real robot by the end of my internship – that’s the big highlight I’m looking forward to.

Visualisation of the loop closure process is important in order to find bugs in the code. But when it finally works, it’s hugely exciting to see a simulated robot driving in a loop, highlighting the areas it recognises, and I think: “wow, it’s actually working! I did this!”

What do you think has been the biggest challenge?

Because I wasn’t familiar with any concepts of computer vision, I had to brush up on my maths skills, especially geometry! There was a lot of dry theory I had to absorb in the first few weeks. I also needed to familiarise myself with the existing code of the SLAM project and understand what’s happening in the bigger picture. It can be challenging to dive into a big project without losing the overview.

How do you think what you’ve learned so far will help in the final year of your degree?

I’ve definitely learned to organise myself better. I now plan what I want to do carefully before I write a single line of code. I also take more notes, which makes it easier to reconstruct my own thoughts and helps me stay on track and not get lost in side tasks. Most importantly, I’ve learned to realise when I need help and ask for it. Sometimes when you’re stuck, you have to swallow your pride and ask for help if you don’t want to lose more time on a problem.

Here’s a video of the loop closure process:


TIAGo Steel loans for European Robotics League: winners announced!

Back in May, we announced our intention to lend TIAGo Steel robots to teams who wanted to enter the European Robotics League – Service Robots competition (ERL).

We had a fantastic number of participants, and are thrilled to see so many researchers, students and companies excited to test their software applications on our robot.

Drum roll, please…

Unfortunately, there can only be… four winners! Congratulations to the four lucky teams who have been successful in their bids to borrow TIAGo Steel robots.

Particular congratulations to Team Homer from the University of Koblenz – Landau, who submitted an excellent application demonstrating both their breadth of experience and the extent to which they would benefit from competing in the tournament. As a result of their incredibly thorough application paper, prior ERL experience and clear passion for their academic specialisms, they have been awarded the free robot.

All robots are available to rent for up to 12 months, at just €650 a month (plus tax). Conditions of the loan include mandatory participation in at least one ERL – Service Robots tournament and a minimum loan contract of three months.

The TIAGo Steel robots are now ready for collection.

Meet the teams

Institut de Robòtica i Informàtica Industrial, CSIC-UPC.

A strong interdisciplinary team from the Institut de Robòtica i Informàtica Industrial, led by Sergi Foix Salmerón, Guillem Alenyà Ribas and Juan Andrade Cetto, have more than 10 years’ robotics experience. Made up of researchers and students from the university’s Mobile Robotics & Perception and Manipulation groups, the team’s specialties include localisation, mapping, planning and learning algorithms, and computer vision applied to human-robot interaction.

They’re looking forward to the challenge of integrating and testing the different algorithms created by both groups on a single platform (TIAGo), and hope that participation in the tournament will allow them to push the boundaries of social robotics.

A consortium between the Disruptive Hub of everis-NTT and the Polytechnic University of Catalonia (UPC)

The application from a consortium of Everis employees and students from UPC’s Robotics Master Program combined the fresh outlook of students from academia with the experience of a company with more than 100K employees.

The students from UPC have a range of specialties, including automatic control systems, computer vision, robotics computer technology and communication in processing elements, that stand them in good stead to compete in the ERL. They will be led by Dr Cecilio Angulo from UPC and Dr Jordi Albo-Canals from everis-NTT, who both have previous experience participating in Robocup@Home.

Universidad Carlos Tercero de Madrid

The RoboticsLab research group, together with the Student Robotics Association (ASROB), from the Universidad Carlos Tercero de Madrid, are the third team to be loaned a TIAGo Steel robot.

They will be led by Dr. Juan González Victores and Dr. Santiago Martínez de la Casa, who have expertise in areas ranging from SLAM and 3D vision processing to grasping and manipulation planning.

The University of Koblenz – Landau

A passionate group of students who have competed in multiple Robocup and ERL competitions over the past decade, Team Homer from the University of Koblenz – Landau demonstrated the experience, innovation and passion required to receive the free loan of a TIAGo Steel robot.

The team was set up to offer students the opportunity to learn about visual computing and computer science through practical participation in competitions like the ERL.

The TIAGo Steel robot

The TIAGo Steel robot is one iteration of our mobile manipulator TIAGo. The robot has a mobile base with a combination of lasers, sonars and actuated motors, a maximum speed of 1m/s and batteries that provide up to 10 hours of autonomy.

The Steel version of TIAGo has a parallel gripper that can be exchanged for a five-fingered hand or a force-torque sensor. The arm has a reach of 86cm and a maximum payload of 3kg, without the end-effector.

With 12 degrees of freedom, a lifting torso (35cm) and 100% ROS-compatible open-source software, TIAGo is a versatile robot with modular capabilities that make it perfect for research into healthcare, assisted living or light industry.

ERL in Barcelona

As Platinum Sponsors of the European Robotics League – Service Robots competition, we are hosting one of the local tournaments here in sunny Barcelona! Keep an eye on our blog for further details, but what we can reveal at the moment is:

Date: 20-27 November

Location: PAL Robotics office (10 minutes from the beach!)

If you have any questions about TIAGo, the ERL Local Tournament in Barcelona – or anything else! – send us an email: business@pal-robotics.com


Behind the scenes of Robocup: hacking TIAGo

If you saw TIAGo competing at Robocup 2017 in Japan, you probably noticed that the robot looked a little different. That’s because it was modified by the Technical University of Munich, specifically for the Robocup@Home Open Platform League, and aptly renamed TIAGo@Home.

Keep reading to find out how the Alle@Home team hacked our robot.

[Image: the hacked TIAGo]

Directional microphone

Although TIAGo comes equipped with a stereo microphone, the Alle@Home team added a large directional microphone on top of the robot’s head in order to pick up sound and enhance the robot’s speech recognition capabilities. Robocup can be pretty noisy, and the team wanted to make sure their robot could hear and respond to commands.

New computational capabilities

Our TIAGo robots always come with a laptop tray, but it’s up to clients what they do with it! Whether they use it to programme the robot, expand their world on Minecraft or rest a much-needed coffee – we’re not here to judge…

The Alle@Home team used the laptop tray of TIAGo@Home to add a new computer that enhanced the robot’s computational capabilities. In a (technical) nutshell, they used deep learning algorithms running over CUDA on an NVIDIA GPU to detect people and objects moving around the competition.
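The post doesn’t say which network the team ran, so as a stand-in, here is a minimal sketch of GPU object detection with an off-the-shelf pretrained model, assuming PyTorch and torchvision (any CUDA-backed detector would follow the same pattern):

```python
import torch
import torchvision

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# Stock pretrained detector as a placeholder for whatever the team actually used
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval().to(device)

def detect(image_tensor, score_threshold=0.8):
    """image_tensor: float tensor (3, H, W) scaled to [0, 1].
    Returns the boxes and labels the detector is confident about."""
    with torch.no_grad():
        out = model([image_tensor.to(device)])[0]
    keep = out["scores"] > score_threshold
    return out["boxes"][keep], out["labels"][keep]
```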

Artificial Skin

The team also added Artificial Skin – technology developed by scientists at the Technical University of Munich – to TIAGo@Home’s gripper.

The Artificial Skin was created to increase robots’ multi-modal sensitivity, so that they can better understand and interpret their environment. Comprised of a large number of small hexagonal boards, the skin features energy-saving microprocessors and sensors to detect changes in speed, temperature and touch.

The skin boosted TIAGo@Home’s sensory capabilities, for example by helping the robot to sense proximity to other objects in three-dimensional space. These capabilities were crucial at Robocup, as they helped the robot detect the proximity of objects in grasping tasks and command the gripper precisely in order to grasp and place them effectively.

Customized 3D printed gripper

The specific challenges facing the Alle@Home team called for specific hardware hacks.

In order to grasp and move target objects, the team decided to 3D print new gripper fingers. Although TIAGo comes with an interchangeable five-finger hand or parallel gripper, the new gripper enabled TIAGo@Home to move the objects with more accuracy and precision in the specific Robocup@Home tasks.

USB hub

In order to connect the new microphone to the robot, the Alle@Home team added a USB hub. The hub is powered externally by TIAGo’s expansion panel, which also contains a 12V connector providing 5A, a CAN service port and Ethernet ports.

A world of no limits

TIAGo is an open platform with a huge array of possibilities for expansion. The robot’s modularity and flexibility, as well as its 100% ROS-compatible open-source software, mean that there are virtually no limits to the applications you can incorporate.

The Alle@Home team added software and hardware hacks to suit their own requirements, and successfully made it through the first rounds of Robocup. Congratulations, Alle@Home!

How would you hack your TIAGo? Let us know: tiago@pal-robotics.com

If you can’t afford your own robot, fear not. Have a play on our free simulation instead.

 


Humanoid robots, imitation learning and torque control

This week, we interview Ales Ude, Head of the Department of Automatics, Biocybernetics and Robotics at the Jozef Stefan Institute in Slovenia. As founder of the department’s Humanoid and Cognitive Research Lab, he has a particular interest in imitation learning – the way robots interact with and learn from humans.

In Ales’ words, “Imitation learning is not just about a robot repeating what it sees from humans, but extrapolating from this knowledge in order to generate new behaviours. We’re collecting libraries of human movements, transferring these to humanoid robots and generating new behaviours based on this data.”

Ales is currently working with our humanoid biped, TALOS. Launched in April 2017, the robot’s torque sensors, electrical power, 32 degrees of freedom and walking speed of 3km/hr make it one of the most advanced biped robots in the world.

Watch our interview to find out why Ales chose to work with TALOS, the highlights of his lab’s research over the last few years and how he thinks robotics will change our lives in the future.

 


2017: The year of the stock-taking robot

In the face of growing competition from e-commerce, brick-and-mortar stores are having to innovate. With lower costs, speed and ease-of-use on their side, online retailers are in prime position to take the lead in the retail world. So, what are stores doing about it?

Early adopters of automation and AI (think computerized check-outs and online product recommendations), retailers have been quick to incorporate new technological innovations into their existing systems and processes.

From customer service to inventory-taking, we take a look at some of the ways robots are helping to extend the shelf life of shops.

The retail robot

In warehouses around the world, robots are already hard at work, busy delivering heavy packages faster and more easily than their human counterparts.

To take just one example, last year Target added autonomous robots created by Symbotic to one of its biggest U.S. distribution centres. Using ledges to travel up and down aisles, the robots can move and track cases autonomously, helping Target retrieve, record and restock items more quickly.

Robots have also been employed in delivery. Amazon famously made their first drone delivery (of a TV streaming stick and bag of popcorn) in the UK in December 2016, and it’s widely recognised that drones will play a key part in retail logistics in the future.

But robots are also making their presence felt in stores. In an increasingly fast-paced, digital world, the quality of customer service can make or break a retail brand. As a result, retailers are turning to robots to assist customers in shops, harnessing big data to provide real-time product information and even offer personalised recommendations.

From welcoming customers and directing them to a requested item to offering a checkout-free experience, the potential for service robots at all stages of the physical retail cycle is endless.

Time to start taking stock

Another area of retail that’s embracing automation is stock control and inventory. Over the last year, new RFID technology and a growing need to streamline and automate inventorying has pushed the issue to the fore – and PAL Robotics’ StockBot may be the solution.

Over-stretched retail staff struggle to find the time to perform regular inventories, particularly in chains with multiple stores spread across countries or even continents. This leaves retailers in the precarious position of not knowing exactly what they have in each store – or whereabouts in the store their products are.

According to a report by Supply Chain Digest, the out-of-stock rate experienced by consumers is 17.8% – that means close to 18 customers out of 100 are leaving stores unsatisfied. Keeping track of stock can be a major headache for retailers, but luckily it’s an issue robotic platforms like StockBot are ready and waiting to assist with.

Benefits of automated inventory

The benefits of automation are far-reaching. Employing a robot to undertake inventory not only streamlines and speeds up the process, it also results in a more accurate picture of misplaced, lost and stolen items – and frees up staff to attend to customers.

In a pilot undertaken in a 1,500m² store, StockBot reduced the time it took to perform a full inventory by 80% (from five hours to one hour), with an accuracy of 99.10%. Thanks to RFID sensors, an autonomous mobile base and a pioneering combination of lasers and cameras, StockBot is cheaper, faster and more accurate than a human performing the same task.

Importantly, automated inventory also means automated data capture. Collecting data on a daily basis has the potential to transform a business. Big data can provide retailers with a better understanding of shoppers’ habits and trends, as well as enabling data-driven strategies and more accurate long-term forecasts.

With 12 hours of continuous battery life and fully autonomous capabilities, StockBot can be deployed overnight so that retailers wake up to a full report – including item positions – in their inbox before 9am. Imagine waking up to coffee, croissants and captured data – the perfect combination.

StockBot at the ready

Robots have the potential to revolutionise all areas of retail, and have been doing so for the last few years. From customer service to logistics, robotic platforms have been transforming the way retailers sell their products. Now, it’s the turn of the stock-taking bot.

If you’re interested in finding out how StockBot could help your business, we’d love to hear from you. Send an email to business@pal-robotics.com.


The definitive intern guide to SLAM: place recognition & loop closure

SLAM – simultaneous localisation and mapping – is a technique robots use to build a map of their environment, ascertain where they are within the map and assess whether they’ve been to a particular location before. It’s an incredibly important technique within navigation research, so important in fact that we currently have a team focusing on the topic.

We’ve interviewed two PAL Robotics interns, who are spending their six-month placements helping to develop and test new approaches to SLAM. First up is Elena Rampone, a Robotics Master’s student from the University of Genoa (Italy), who has kindly volunteered to answer our questions and explain how her research into loop closure and place recognition may help a robot recognise its environment.

This may be an impossible task, but how would you describe SLAM in less than 50 words?

A robot navigating in an environment needs to keep track of its movements by estimating its current position, building an internal map of the environment and recognising if it has already visited a particular place. This is SLAM. The estimations are performed by analysing data acquired by the robot’s sensors.

What exactly are you working on?

My project can be broken down into two steps. The first consists of adapting the company’s place-recognition framework, based on the Bag of Words algorithm, to 3D point clouds. Point clouds provide a 3D depiction of the world and are characterised by a compact descriptor, which allows us to efficiently compare them.

The whole system is ROS-based and tested using the KITTI dataset, a widely used outdoor dataset of point clouds acquired by a Velodyne sensor.


[Image: a point cloud]

When the robot moves, the descriptors characterising new observations of the environment are inserted into a database that stores information about all the places the robot has visited. The system continuously checks whether a newly observed cloud is similar to a previous one by comparing its descriptor with those present in the database and then selecting the corresponding clouds.
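As a rough sketch of that database idea (illustrative names, not the project’s real API), each visited place is stored as one global descriptor vector and queried by cosine similarity, skipping the most recent entries since the robot is trivially close to those:

```python
import numpy as np

class PlaceDatabase:
    def __init__(self):
        self.descriptors = []            # one global descriptor per visited place

    def add(self, descriptor):
        d = np.asarray(descriptor, dtype=float)
        self.descriptors.append(d / np.linalg.norm(d))

    def query(self, descriptor, top_k=3, exclude_last=50):
        """Indices of the top_k stored places most similar to `descriptor`."""
        if len(self.descriptors) <= exclude_last:
            return []
        d = np.asarray(descriptor, dtype=float)
        d /= np.linalg.norm(d)
        sims = np.array(self.descriptors[:-exclude_last]) @ d   # cosine similarities
        return list(np.argsort(sims)[::-1][:top_k])
```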


[Image: loop closure]

After the system selects the candidate clouds, it is necessary to determine whether the robot has returned to a previously visited location and, if so, estimate the relative transformation between the two clouds in order to update the position of the robot (loop closure).

The second step involves using the Iterative Closest Point (ICP) algorithm – a method that iteratively matches points between two clouds in order to find the relative transformation that minimises an error measure. Any candidate clouds that correctly converge with ICP are selected as correct loop closures. The estimated pose will then be used to update the robot’s position.
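For the curious, a bare-bones point-to-point ICP fits in a few dozen lines of Python with NumPy and SciPy. Production systems typically rely on a library such as PCL, and this sketch omits outlier rejection and convergence checks:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iterations=30):
    """Align the Nx3 `source` cloud to `target`; returns the rotation R and
    translation t that (locally) minimise point-to-point distances."""
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(target)
    src = source.copy()
    for _ in range(iterations):
        # 1. Correspondences: pair each source point with its nearest target point
        _, idx = tree.query(src)
        matched = target[idx]
        # 2. Best rigid transform for these pairs (Kabsch, via SVD)
        mu_s, mu_m = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_m)
        U, _, Vt = np.linalg.svd(H)
        R_step = Vt.T @ U.T
        if np.linalg.det(R_step) < 0:    # guard against a reflection solution
            Vt[-1] *= -1
            R_step = Vt.T @ U.T
        t_step = mu_m - R_step @ mu_s
        # 3. Apply the step and accumulate the overall transform
        src = src @ R_step.T + t_step
        R, t = R_step @ R, R_step @ t + t_step
    return R, t
```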

What are the next steps in your research?

I’ve already been interning at PAL Robotics for five months, so I’m coming to the end of my project. For the final few weeks I plan to keep testing system performance by using different methods to extract point cloud features, in order to identify the one that works best. I’m testing both the place-recognition and loop closure elements to ensure the end result is robust and – most importantly – useful for the PAL Robotics team.

I know you can’t give too much away, but are the results in line with what you expected?

With the help of open source libraries, I was able to reach a suitable trade-off between recognition accuracy and execution speed by comparing many different configurations in a semi-automated fashion. The results will allow me to deploy the solution on a TIAGo robot for real-world testing.

What have been the highlights so far?

My first major highlight was when I tested the ICP algorithm on a sequence of clouds and saw that it was correctly estimating the relative displacement between them and could also align them. But the most important moment was when the place-recognition system started working, as it’s a huge milestone in my internship.
