Alphabet is the umbrella under which all of Google's divisions are organized. The company restructured into a holding company to give its operations more independence while offering investors greater visibility.
Since 2019, the company has focused heavily on developing robots that can perform everyday tasks.
Now it has taken on a goal that seems simple a priori but is actually quite complicated: helping people communicate with robots through voice or text, and enabling the robots to execute complex tasks with a better understanding of language.
The project, a collaboration between Everyday Robots and Google Research, is still in its infancy, but the robots have now received an update: improved language understanding, powered by Google's PaLM large language model (LLM).
Everyday Robots contributes one of its helper robots and Google the language model, together creating PaLM-SayCan. "This is the first implementation that uses a large-scale language model to plan for a real robot. The new project should help people communicate better with robots," Google explains.
Google claims that, thanks to this combination, the robots generated correct plans for 101 test instructions 84% of the time and executed them successfully 74% of the time.
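The combination Google describes can be sketched at a very high level: the language model estimates how useful each candidate skill would be for the instruction, an affordance model estimates how likely the robot is to execute that skill successfully in its current state, and the planner picks the skill with the highest combined score. A minimal illustration in Python, with all skill names and score values hypothetical:

```python
# Sketch of a SayCan-style scoring step (all names and values hypothetical):
# combine the LLM's usefulness score with the robot's affordance score
# and greedily pick the best next skill.

def pick_next_skill(llm_scores, affordance_scores):
    """Return the skill maximizing LLM usefulness * execution likelihood."""
    combined = {
        skill: llm_scores[skill] * affordance_scores.get(skill, 0.0)
        for skill in llm_scores
    }
    return max(combined, key=combined.get)

# Hypothetical scores for the instruction "bring me a soda":
llm_scores = {"find a soda": 0.6, "wipe the table": 0.1, "go to the kitchen": 0.3}
affordance_scores = {"find a soda": 0.2, "wipe the table": 0.9, "go to the kitchen": 0.8}

print(pick_next_skill(llm_scores, affordance_scores))  # prints "go to the kitchen"
```

Here "wipe the table" is easy for the robot but irrelevant to the instruction, and "find a soda" is relevant but currently infeasible, so the combined score favors heading to the kitchen first.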
On the other hand, and according to Android Central, the company also emphasizes the safety aspect of its PaLM-SayCan robots: the algorithm is limited to commands that take the robot's safety into account and keep its decisions "highly interpretable," it states.
Although this is an undeniable success, caution is warranted: real life is full of varied commands and phrasings, a messy environment these robots must still contend with.
Google and Everyday Robots hope that the PaLM-SayCan algorithm will ultimately help robots interact with people more naturally.