Tag archive: Robotics

Animal training techniques teach robots new tricks (Science Daily)

Virtual dogs take the place of programming

May 16, 2016
Washington State University
Researchers are using ideas from animal training to help non-expert users teach robots how to do desired tasks.

Virtual environment in which trainers gave directions to the robot dog. Credit: Image courtesy of Washington State University

Researchers at Washington State University are using ideas from animal training to help non-expert users teach robots how to do desired tasks.

The researchers recently presented their work at the international Autonomous Agents and Multiagent Systems conference.

As robots become more pervasive in society, humans will want them to do chores like cleaning house or cooking. But to get a robot started on a task, people who aren’t computer programmers will have to give it instructions.

“We want everyone to be able to program, but that’s probably not going to happen,” said Matthew Taylor, Allred Distinguished Professor in the WSU School of Electrical Engineering and Computer Science. “So we needed to provide a way for everyone to train robots — without programming.”

User feedback improves robot performance

With Bei Peng, a doctoral student in computer science, and collaborators at Brown University and North Carolina State University, Taylor designed a computer program that lets humans teach a virtual robot that looks like a computerized pooch. Non-computer programmers worked with and trained the robot in WSU’s Intelligent Robot Learning Laboratory.

For the study, the researchers varied the speed at which their virtual dog reacted. As when somebody is teaching a new skill to a real animal, the slower movements let the user know that the virtual dog was unsure of how to behave. The user could then provide clearer guidance to help the robot learn better.

“At the beginning, the virtual dog moves slowly. But as it receives more feedback and becomes more confident in what to do, it speeds up,” Peng said.

The user taught tasks by either reinforcing good behavior or punishing incorrect behavior. The more feedback the virtual dog received from the human, the more adept the robot became at predicting the correct course of action.

Applications for animal training

The researchers’ algorithm allowed the virtual dog to understand the tricky meanings behind a lack of feedback — called implicit feedback.

“When you’re training a dog, you may withhold a treat when it does something wrong,” Taylor explained. “So no feedback means it did something wrong. On the other hand, when professors are grading tests, they may only mark wrong answers, so no feedback means you did something right.”
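The context-dependent reading of silence that Taylor describes can be sketched as a tiny value-learning loop. This is a hypothetical illustration, not the researchers' actual algorithm; the `implicit_value` parameter is an invented knob that encodes what "no feedback" means in a given setting (negative for dog training, positive for test grading):

```python
class FeedbackLearner:
    """Toy action-value learner where silence carries context-dependent meaning."""

    def __init__(self, actions, implicit_value, lr=0.1):
        self.values = {a: 0.0 for a in actions}  # estimated value per action
        self.implicit_value = implicit_value     # what "no feedback" is worth
        self.lr = lr

    def update(self, action, feedback=None):
        # feedback: +1 (reward), -1 (punishment), or None (trainer silent)
        reward = self.implicit_value if feedback is None else feedback
        self.values[action] += self.lr * (reward - self.values[action])

    def best_action(self):
        return max(self.values, key=self.values.get)

# Dog-training context: silence counts against the action just taken.
dog = FeedbackLearner(["sit", "bark"], implicit_value=-1.0)
dog.update("sit", feedback=+1)   # treat given
dog.update("bark")               # treat withheld -> implicit punishment
# A grading context would instead use implicit_value=+1.0.
```

With `implicit_value=-1.0`, unrewarded actions drift toward negative value just as a rewarded action drifts positive, so the learner can extract signal even from rounds where the trainer says nothing.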

The researchers have begun working with physical robots as well as virtual ones. They also hope to eventually use the program to help people learn to be more effective animal trainers.


Army ants’ ‘living’ bridges span collective intelligence, ‘swarm’ robotics (Science Daily)

Date: November 24, 2015

Source: Princeton University

Summary: Researchers report for the first time that the ‘living’ bridges army ants of the species Eciton hamatum build with their bodies are more sophisticated than scientists knew. The ants automatically assemble with a level of collective intelligence that could provide new insights into animal behavior and even help in the development of intuitive robots that can cooperate as a group.

Researchers from Princeton University and the New Jersey Institute of Technology report for the first time that the “living” bridges army ants of the species Eciton hamatum (pictured) build with their bodies are more sophisticated than scientists knew. The ants automatically assemble with a level of collective intelligence that could provide new insights into animal behavior and even help in the development of intuitive robots that can cooperate as a group. Credit: Courtesy of Matthew Lutz, Princeton University, and Chris Reid, University of Sydney

Columns of workers penetrate the forest, furiously gathering as much food and supplies as they can. They are a massive army that living things know to avoid, and that few natural obstacles can waylay. So determined are these legions that should a chasm or gap disrupt the most direct path to their spoils they simply build a new path — out of themselves.

Without any orders or direction, individuals from the rank and file instinctively stretch across the opening, clinging to one another as their comrades-in-arms swarm across their bodies. But this is no force of superhumans. They are army ants of the species Eciton hamatum, which form “living” bridges across breaks and gaps in the forest floor that allow their famously large raiding swarms to travel efficiently.

Researchers from Princeton University and the New Jersey Institute of Technology (NJIT) report for the first time that these structures are more sophisticated than scientists knew. The ants exhibit a level of collective intelligence that could provide new insights into animal behavior and even help in the development of intuitive robots that can cooperate as a group, the researchers said.

Ants of E. hamatum automatically form living bridges without any oversight from a “lead” ant, the researchers report in the journal Proceedings of the National Academy of Sciences. The action of each individual coalesces into a group unit that can adapt to the terrain and also operates by a clear cost-benefit ratio. The ants will create a path over an open space up to the point when too many workers are being diverted from collecting food and prey.

“These ants are performing a collective computation. At the level of the entire colony, they’re saying they can afford this many ants locked up in this bridge, but no more than that,” said co-first author Matthew Lutz, a graduate student in Princeton’s Department of Ecology and Evolutionary Biology.

“There’s no single ant overseeing the decision, they’re making that calculation as a colony,” Lutz said. “Thinking about this cost-benefit framework might be a new insight that can be applied to other animal structures that people haven’t thought of before.”

The research could help explain how large groups of animals balance cost and benefit, about which little is known, said co-author Iain Couzin, a Princeton visiting senior research scholar in ecology and evolutionary biology, and director of the Max Planck Institute for Ornithology and chair of biodiversity and collective behavior at the University of Konstanz in Germany.

Previous studies have shown that single creatures use “rules of thumb” to weigh cost and benefit, said Couzin, who also is Lutz’s graduate adviser. This new work shows that in large groups these same individual guidelines can eventually coordinate group-wide, he said — the ants acted as a unit although each ant knew only its immediate circumstances.

“They don’t know how many other ants are in the bridge, or what the overall traffic situation is. They only know about their local connections to others, and the sense of ants moving over their bodies,” Couzin said. “Yet, they have evolved simple rules that allow them to keep reconfiguring until, collectively, they have made a structure of an appropriate size for the prevailing conditions.

“Finding out how sightless ants can achieve such feats certainly could change the way we think of self-configuring structures in nature — and those made by man,” he said.

Ant-colony behavior has been the basis of algorithms related to telecommunications and vehicle routing, among other areas, explained co-first author Chris Reid, a postdoctoral research associate at the University of Sydney who conducted the work while at NJIT. Ants exemplify “swarm intelligence,” in which individual-level interactions produce coordinated group behavior. E. hamatum crossings assemble when the ants detect congestion along their raiding trail, and disassemble when normal traffic has resumed.

Previously, scientists thought that ant bridges were static structures — their appearance over large gaps that ants clearly could not cross in midair was somewhat of a mystery, Reid said. The researchers found, however, that the ants, when confronted with an open space, start from the narrowest point of the expanse and work toward the widest point, expanding the bridge as they go to shorten the distance their compatriots must travel to get around the expanse.

“The amazing thing is that a very elegant solution to a colony-level problem arises from the individual interactions of a swarm of simple worker ants, each with only local information,” Reid said. “By extracting the rules used by individual ants about whether to initiate, join or leave a living structure, we could program swarms of simple robots to build bridges and other structures by connecting to each other.

“These robot bridges would exhibit the beneficial properties we observe in the ant bridges, such as adaptability to local conditions, real-time optimization of shape and position, and rapid construction and deconstruction without the need for external building materials,” Reid continued. “Such a swarm of robots would be especially useful in dangerous and unpredictable conditions, such as natural disaster zones.”
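The initiate/join/leave rule Reid describes can be caricatured in a few lines. Everything below is hypothetical — the thresholds and traffic model are invented for illustration and do not come from the paper — but it shows how purely local sensing (traffic felt over one's own body) can settle a structure at a size appropriate to the prevailing conditions:

```python
def update_bridge(bridge_size, crossings_per_second):
    """One time step of a bridge governed by purely local decisions.

    Each member senses only the traffic moving over its own body. High
    local traffic recruits another ant (the bridge grows); low traffic
    makes a member leave (the bridge dissolves). The thresholds are
    invented for illustration.
    """
    traffic_per_member = crossings_per_second / max(bridge_size, 1)
    if traffic_per_member > 2.0:        # congestion: another ant joins
        return bridge_size + 1
    if traffic_per_member < 0.5:        # idle members return to foraging
        return max(bridge_size - 1, 0)
    return bridge_size                  # size matches prevailing conditions

# Under steady traffic the size settles where cost balances benefit:
size = 1
for _ in range(50):
    size = update_bridge(size, crossings_per_second=10.0)
# size stabilizes at 5 members for this traffic level
```

No member ever knows the bridge's total size, yet the structure converges, grows under heavier traffic, and dissolves to zero when traffic stops — the same qualitative behavior the article attributes to the ants.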

Radhika Nagpal, a professor of computer science at Harvard University who studies robotics and self-organizing biological systems, said that the findings reveal that there is “something much more fundamental about how complex structures are assembled and adapted in nature, and that it is not through a supervisor or planner making decisions.”

Individual ants adjusted to one another’s choices to create a successful structure, despite the fact that each ant didn’t necessarily know everything about the size of the gap or the traffic flow, said Nagpal, who is familiar with the research but was not involved in it.

“The goal wasn’t known ahead of time, but ‘emerged’ as the collective continually adapted its solution to the environmental factors,” she said. “The study really opens your eyes to new ways of thinking about collective power, and has tremendous potential as a way to think about engineering systems that are more adaptive and able to solve complex cost-benefit ratios at the network level just through peer-to-peer interactions.”

She compared the ant bridges to human-made bridges that automatically widened to accommodate heavy vehicle traffic or a growing population. While self-assembling road bridges may be a ways off, the example illustrates the potential that technologies built with the same self-assembling capabilities seen in E. hamatum could have.

“There’s a deep interest in creating robots that don’t just rely on themselves, but can exploit the group to do more — and self-assembly is the ultimate in doing more,” Nagpal said. “If you could have small simple robots that were able to navigate complex spaces, but could self-assemble into larger structures — bridges, towers, pulling chains, rafts — when they face something they individually did not have the ability to do, that’s a huge increase in power in what robots would be capable of.”

The spaces E. hamatum bridges span are not dramatic by human standards — small rifts in the leaf cover, or between the ends of two sticks. Bridges will be the length of 10 to 20 ants, which is only a few centimeters, Lutz said. That said, E. hamatum swarms form several bridges during the course of a day, which can see the back-and-forth of thousands of ants.

“The bridges are something that happen numerous times every day. They’re creating bridges to optimize their traffic flow and maximize their time,” Lutz said.

“When you’re moving hundreds of thousands of ants, creating a little shortcut can save a lot of energy,” he said. “This is such a unique behavior. You have other types of ants forming structures out of their bodies, but it’s not such a huge part of their lives and daily behavior.”

The research also included Scott Powell, an army-ant expert and assistant professor of biology at George Washington University; Albert Kao, a postdoctoral fellow at Harvard who received his doctorate in ecology and evolutionary biology from Princeton in 2015; and Simon Garnier, an assistant professor of biological sciences at NJIT who studies swarm intelligence and was once a postdoctoral researcher in Couzin’s lab at Princeton.

To conduct their field experiments, Lutz and Reid constructed a 1.5-foot-tall apparatus with ramps on both sides and adjustable arms in the center with which they could vary the size of the gap. They then inserted the apparatus into active E. hamatum raiding trails that they found in the forests of Barro Colorado Island, Panama. Because ants follow one another’s chemical scent, Lutz and Reid used sticks and leaves from the ants’ trail to get them to reform their column across the device.

Lutz and Reid observed how the ants formed bridges across gaps that were set at angles of 12, 20, 40 and 60 degrees. They gauged how much travel distance the ants saved with their bridge versus the surface area (in square centimeters) of the bridge itself. Twelve-degree angles shaved off the most distance (around 11 centimeters) while taking up the fewest workers. Sixty-degree angles had the highest cost-to-benefit ratio. Interestingly, the ants were willing to expend members for 20-degree angles, forming bridges up to 8 square centimeters to decrease their travel distance by almost 12 centimeters, indicating that the loss in manpower was worth the distance saved.

Lutz said that future research based on this work might compare these findings to the living bridges of another army ant species, E. burchellii, to determine if the same principles are in action.

The paper, “Army ants dynamically adjust living bridges in response to a cost-benefit trade-off,” was published Nov. 23 by Proceedings of the National Academy of Sciences. The work was supported by the National Science Foundation (grant nos. PHY-0848755, IOS0-1355061 and EAGER IOS-1251585); the Army Research Office (grant nos. W911NG-11-1-0385 and W911NF-14-1-0431); and the Human Frontier Science Program (grant no. RGP0065/2012).

Journal Reference:

  1. Chris R. Reid, Matthew J. Lutz, Scott Powell, Albert B. Kao, Iain D. Couzin, Simon Garnier. Army ants dynamically adjust living bridges in response to a cost–benefit trade-off. Proceedings of the National Academy of Sciences, 2015; 201512241. DOI: 10.1073/pnas.1512241112

Projecting a robot’s intentions: New spin on virtual reality helps engineers read robots’ minds (Science Daily)

Date: October 29, 2014

Source: Massachusetts Institute of Technology

Summary: In a darkened, hangar-like space inside MIT’s Building 41, a small, Roomba-like robot is trying to make up its mind. Standing in its path is an obstacle — a human pedestrian who’s pacing back and forth. To get to the other side of the room, the robot has to first determine where the pedestrian is, then choose the optimal route to avoid a close encounter. As the robot considers its options, its “thoughts” are projected on the ground: A large pink dot appears to follow the pedestrian — a symbol of the robot’s perception of the pedestrian’s position in space.

A new spin on virtual reality helps engineers read robots’ minds. Credit: Video screenshot courtesy of Melanie Gonick/MIT

In a darkened, hangar-like space inside MIT’s Building 41, a small, Roomba-like robot is trying to make up its mind.

Standing in its path is an obstacle — a human pedestrian who’s pacing back and forth. To get to the other side of the room, the robot has to first determine where the pedestrian is, then choose the optimal route to avoid a close encounter.

As the robot considers its options, its “thoughts” are projected on the ground: A large pink dot appears to follow the pedestrian — a symbol of the robot’s perception of the pedestrian’s position in space. Lines, each representing a possible route for the robot to take, radiate across the room in meandering patterns and colors, with a green line signifying the optimal route. The lines and dots shift and adjust as the pedestrian and the robot move.
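The overlay described above amounts to scoring candidate routes against the robot's estimate of the pedestrian's position and highlighting the cheapest one. As a rough, hypothetical sketch — the cost function, penalty weights, and routes below are invented for illustration and are not taken from the MIT system:

```python
import math

def route_cost(route, pedestrian, safety_radius=1.0):
    """Path length plus a penalty for passing close to the pedestrian."""
    cost = 0.0
    for (x0, y0), (x1, y1) in zip(route, route[1:]):
        cost += math.hypot(x1 - x0, y1 - y0)          # segment length
    for x, y in route:
        d = math.hypot(x - pedestrian[0], y - pedestrian[1])
        if d < safety_radius:
            cost += 10.0 * (safety_radius - d)        # heavy proximity penalty
    return cost

# Three candidate routes across the room (waypoint lists), all invented:
routes = {
    "left":     [(0, 0), (-1, 2), (0, 4)],
    "straight": [(0, 0), (0, 2), (0, 4)],
    "right":    [(0, 0), (1, 2), (0, 4)],
}
pedestrian = (0.0, 2.0)    # the robot's current estimate (the "pink dot")

# The "green line": the cheapest route given where the pedestrian is now.
best = min(routes, key=lambda name: route_cost(routes[name], pedestrian))
```

Re-running the scoring as the pedestrian estimate changes is what makes the projected lines and dots "shift and adjust" in real time.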

This new visualization system combines ceiling-mounted projectors with motion-capture technology and animation software to project a robot’s intentions in real time. The researchers have dubbed the system “measurable virtual reality” (MVR) — a spin on conventional virtual reality that’s designed to visualize a robot’s “perceptions and understanding of the world,” says Ali-akbar Agha-mohammadi, a postdoc in MIT’s Aerospace Controls Lab.

“Normally, a robot may make some decision, but you can’t quite tell what’s going on in its mind — why it’s choosing a particular path,” Agha-mohammadi says. “But if you can see the robot’s plan projected on the ground, you can connect what it perceives with what it does to make sense of its actions.”

Agha-mohammadi says the system may help speed up the development of self-driving cars, package-delivering drones, and other autonomous, route-planning vehicles.

“As designers, when we can compare the robot’s perceptions with how it acts, we can find bugs in our code much faster,” Agha-mohammadi says. “For example, if we fly a quadrotor, and see something go wrong in its mind, we can terminate the code before it hits the wall, or breaks.”

The system was developed by Shayegan Omidshafiei, a graduate student, and Agha-mohammadi. They and their colleagues, including Jonathan How, a professor of aeronautics and astronautics, will present details of the visualization system at the American Institute of Aeronautics and Astronautics’ SciTech conference in January.

Seeing into the mind of a robot

The researchers initially conceived of the visualization system in response to feedback from visitors to their lab. During demonstrations of robotic missions, it was often difficult for people to understand why robots chose certain actions.

“Some of the decisions almost seemed random,” Omidshafiei recalls.

The team developed the system as a way to visually represent the robots’ decision-making process. The engineers mounted 18 motion-capture cameras on the ceiling to track multiple robotic vehicles simultaneously. They then developed computer software that visually renders “hidden” information, such as a robot’s possible routes, and its perception of an obstacle’s position. They projected this information on the ground in real time, as physical robots operated.

The researchers soon found that by projecting the robots’ intentions, they were able to spot problems in the underlying algorithms, and make improvements much faster than before.

“There are a lot of problems that pop up because of uncertainty in the real world, or hardware issues, and that’s where our system can significantly reduce the amount of effort spent by researchers to pinpoint the causes,” Omidshafiei says. “Traditionally, physical and simulation systems were disjointed. You would have to go to the lowest level of your code, break it down, and try to figure out where the issues were coming from. Now we have the capability to show low-level information in a physical manner, so you don’t have to go deep into your code, or restructure your vision of how your algorithm works. You could see applications where you might cut down a whole month of work into a few days.”

Bringing the outdoors in

The group has explored a few such applications using the visualization system. In one scenario, the team is looking into the role of drones in fighting forest fires. Such drones may one day be used both to survey and to squelch fires — first observing a fire’s effect on various types of vegetation, then identifying and putting out those fires that are most likely to spread.

To make fire-fighting drones a reality, the team is first testing the possibility virtually. In addition to projecting a drone’s intentions, the researchers can also project landscapes to simulate an outdoor environment. In test scenarios, the group has flown physical quadrotors over projections of forests, shown from an aerial perspective to simulate a drone’s view, as if it were flying over treetops. The researchers projected fire on various parts of the landscape, and directed quadrotors to take images of the terrain — images that could eventually be used to “teach” the robots to recognize signs of a particularly dangerous fire.

Going forward, Agha-mohammadi says, the team plans to use the system to test drone performance in package-delivery scenarios. Toward this end, the researchers will simulate urban environments by creating street-view projections of cities, similar to zoomed-in perspectives on Google Maps.

“Imagine we can project a bunch of apartments in Cambridge,” Agha-mohammadi says. “Depending on where the vehicle is, you can look at the environment from different angles, and what it sees will be quite similar to what it would see if it were flying in reality.”

Because the Federal Aviation Administration has placed restrictions on outdoor testing of quadrotors and other autonomous flying vehicles, Omidshafiei points out that testing such robots in a virtual environment may be the next best thing. In fact, the sky’s the limit as far as the types of virtual environments the new system can project.

“With this system, you can design any environment you want, and can test and prototype your vehicles as if they’re fully outdoors, before you deploy them in the real world,” Omidshafiei says.

This work was supported by Boeing.


How to train a robot: Can we teach robots right from wrong? (Science Daily)

Date: October 14, 2014

Source: Taylor & Francis

Summary: From performing surgery and flying planes to babysitting kids and driving cars, today’s robots can do it all. With chatbots such as Eugene Goostman recently being hailed as “passing” the Turing test, it appears robots are becoming increasingly adept at posing as humans. While machines are becoming ever more integrated into human lives, the need to imbue them with a sense of morality becomes increasingly urgent. But can we really teach robots how to be good?

From performing surgery and flying planes to babysitting kids and driving cars, today’s robots can do it all. With chatbots such as Eugene Goostman recently being hailed as “passing” the Turing test, it appears robots are becoming increasingly adept at posing as humans. While machines are becoming ever more integrated into human lives, the need to imbue them with a sense of morality becomes increasingly urgent. But can we really teach robots how to be good?

An innovative piece of research recently published in the Journal of Experimental & Theoretical Artificial Intelligence looks into the matter of machine morality, and questions whether it is “evil” for robots to masquerade as humans.

Drawing on Luciano Floridi’s theories of Information Ethics and artificial evil, the team leading the research explore the ethical implications of developing machines in disguise. ‘Masquerading refers to a person in a given context being unable to tell whether the machine is human,’ the researchers explain — this is the very essence of the Turing test. This type of deception increases “metaphysical entropy,” meaning any corruption of entities and impoverishment of being; since this leads to a lack of good in the environment — or infosphere — Floridi regards it as the fundamental evil. Following this premise, the team set out to ascertain where ‘the locus of moral responsibility and moral accountability’ lies in relationships with masquerading machines, and to establish whether it is ethical to develop robots that can pass a Turing test.

Six significant actor-patient relationships yielding key insights on the matter are identified and analysed in the study. Looking at associations between developers, robots, users and owners, and integrating notable examples into the research, such as Nanis’ Twitter bot and Apple’s Siri, the team identify where ethical accountabilities lie — with machines, humans, or somewhere in between?

But what really lies behind the robot-mask, and is it really evil for machines to masquerade as humans? ‘When a machine masquerades, it influences the behaviour or actions of people [towards the robot as well as their peers]’, claim the academics. Even when the disguise doesn’t corrupt the environment, it increases the chances of evil as it becomes harder for individuals to make authentic ethical decisions. Advances in the field of artificial intelligence have outpaced ethical developments and humans are now facing a new set of problems brought about by the ever-developing world of machines. Until these issues are properly addressed, the question whether we can teach robots to be good remains open.

Journal Reference:

  1. Keith Miller, Marty J. Wolf, Frances Grodzinsky. Behind the mask: machine morality. Journal of Experimental & Theoretical Artificial Intelligence, 2014; 1. DOI: 10.1080/0952813X.2014.948315

UFRJ researchers work on robot for hydroelectric dam gates (Agência Brasil)

JC e-mail 4993, July 21, 2014

The equipment is expected to be ready in February 2015

Researchers at the Federal University of Rio de Janeiro (UFRJ), in partnership with the company Energia Sustentável do Brasil (ESBR), are working to develop, by February of next year, an underwater robot to improve the operation of the maintenance gate panels (stoplogs) of hydroelectric plants. Started last October, the project for a robot to operate flooded stoplogs (Rosa) should reduce losses from turbine shutdowns by cutting the time the turbines stay offline.

The research was presented today (the 18th) at the university, when the partnership between the company and the Alberto Luiz Coimbra Institute for Graduate Studies and Research in Engineering (Coppe-UFRJ) was also formalized, through the Research and Development Program of the National Electric Energy Agency (Aneel).

“What we are doing is instrumenting an entire system that today is purely mechanical, turning it into a computational one. We are adding information useful to the operator, with elements used in robots, such as an operating system, communications and sonar,” says the project coordinator, Coppe professor Ramon Costa.

The project was financed by ESBR, the company responsible for building and operating the Jirau hydroelectric plant on the Madeira River, where a large quantity of particles clouds the water and accumulates, making it difficult to move the stoplogs after maintenance work. The robot will thus provide information so that the operator can work with better data, replacing the divers who are currently called in to check the condition of the stoplog when the turbine is stopped, and to free it when necessary.

The new technology should cut one day from the time the turbine stays offline. “For each turbine, two dives are needed. It is a slow and very costly process,” says Ramon. According to the researcher, one hour with the machine stopped costs more than R$ 10,000, adding up to about R$ 250,000 in a day.

A team of seven researchers is officially enrolled in the project through Coppe-UFRJ, and three more scientists from the university work as collaborators. The first complete test of Rosa is due in September, and the project coordinator expects all the technology needed to complete it to be ready by the end of this year.

(Vinícius Lisboa / Agência Brasil)

Machine learning / teaching robots to understand instructions in natural language

Collaborative learning — for robots: New algorithm

Date: June 25, 2014

Source: Massachusetts Institute of Technology

Summary: Machine learning, in which computers learn new skills by looking for patterns in training data, is the basis of most recent advances in artificial intelligence, from voice-recognition systems to self-parking cars. It’s also the technique that autonomous robots typically use to build models of their environments. A new algorithm lets independent agents collectively produce a machine-learning model without aggregating data.

Scientists have presented an algorithm in which distributed agents — such as robots exploring a building — collect data and analyze it independently. Pairs of agents, such as robots passing each other in the hall, then exchange analyses. (stock image) Credit: © sommersby / Fotolia

Machine learning, in which computers learn new skills by looking for patterns in training data, is the basis of most recent advances in artificial intelligence, from voice-recognition systems to self-parking cars. It’s also the technique that autonomous robots typically use to build models of their environments.

That type of model-building gets complicated, however, in cases in which clusters of robots work as teams. The robots may have gathered information that, collectively, would produce a good model but which, individually, is almost useless. If constraints on power, communication, or computation mean that the robots can’t pool their data at one location, how can they collectively build a model?

At the Uncertainty in Artificial Intelligence conference in July, researchers from MIT’s Laboratory for Information and Decision Systems will answer that question. They present an algorithm in which distributed agents — such as robots exploring a building — collect data and analyze it independently. Pairs of agents, such as robots passing each other in the hall, then exchange analyses.

In experiments involving several different data sets, the researchers’ distributed algorithm actually outperformed a standard algorithm that works on data aggregated at a single location.

“A single computer has a very difficult optimization problem to solve in order to learn a model from a single giant batch of data, and it can get stuck at bad solutions,” says Trevor Campbell, a graduate student in aeronautics and astronautics at MIT, who wrote the new paper with his advisor, Jonathan How, the Richard Cockburn Maclaurin Professor of Aeronautics and Astronautics. “If smaller chunks of data are first processed by individual robots and then combined, the final model is less likely to get stuck at a bad solution.”

Campbell says that the work was motivated by questions about robot collaboration. But it could also have implications for big data, since it would allow distributed servers to combine the results of their data analyses without aggregating the data at a central location.

“This procedure is completely robust to pretty much any network you can think of,” Campbell says. “It’s very much a flexible learning algorithm for decentralized networks.”

Matching problem

To get a sense of the problem Campbell and How solved, imagine a team of robots exploring an unfamiliar office building. If their learning algorithm is general enough, they won’t have any prior notion of what a chair is, or a table, let alone a conference room or an office. But they could determine, for instance, that some rooms contain a small number of chair-shaped objects together with roughly the same number of table-shaped objects, while other rooms contain a large number of chair-shaped objects together with a single table-shaped object.

Over time, each robot will build up its own catalogue of types of rooms and their contents. But inaccuracies are likely to creep in: One robot, for instance, might happen to encounter a conference room in which some traveler has left a suitcase and conclude that suitcases are regular features of conference rooms. Another might enter a kitchen while the coffeemaker is obscured by the open refrigerator door and leave coffeemakers off its inventory of kitchen items.

Ideally, when two robots encountered each other, they would compare their catalogues, reinforcing mutual observations and correcting omissions or overgeneralizations. The problem is that they don’t know how to match categories. Neither knows the label “kitchen” or “conference room”; they just have labels like “room 1” and “room 3,” each associated with different lists of distinguishing features. But one robot’s room 1 could be another robot’s room 3.

With Campbell and How’s algorithm, the robots try to match categories on the basis of shared list items. This is bound to lead to errors: One robot, for instance, may have inferred that sinks and pedal-operated trashcans are distinguishing features of bathrooms, another that they’re distinguishing features of kitchens. But they do their best, combining the lists that they think correspond.

When either of those robots meets another robot, it performs the same procedure, matching lists as best it can. But here’s the crucial step: It then pulls out each of the source lists independently and rematches it to the others, repeating this process until no reordering results. It does this again with every new robot it encounters, gradually building more and more accurate models.
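The iterative matching the article describes can be sketched in miniature. The toy Python below is our own illustration, not Campbell and How's actual algorithm (which is a Bayesian variational method): it matches two robots' room categories by the overlap of their feature lists, using Jaccard similarity, and merges the lists it believes correspond.

```python
# Toy sketch of category matching by shared list items.
# Labels like "room 1" are arbitrary and differ between robots;
# only the feature lists carry information.

def jaccard(a, b):
    """Similarity between two feature sets: |A & B| / |A | B|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def match_categories(cat_a, cat_b, threshold=0.3):
    """Greedily pair each of robot A's categories with robot B's
    most similar unmatched category, then merge feature lists."""
    merged, used = {}, set()
    for name_a, feats_a in cat_a.items():
        best, best_score = None, threshold
        for name_b, feats_b in cat_b.items():
            if name_b in used:
                continue
            score = jaccard(feats_a, feats_b)
            if score > best_score:
                best, best_score = name_b, score
        if best is not None:
            used.add(best)
            merged[name_a] = sorted(set(feats_a) | set(cat_b[best]))
        else:
            merged[name_a] = sorted(set(feats_a))
    # Categories of B that found no partner survive unchanged.
    for name_b, feats_b in cat_b.items():
        if name_b not in used:
            merged[name_b] = sorted(set(feats_b))
    return merged

robot1 = {"room 1": ["chairs", "long table", "projector", "suitcase"],
          "room 2": ["sink", "coffeemaker", "fridge"]}
robot2 = {"room 3": ["chairs", "long table", "whiteboard"],
          "room 4": ["sink", "trashcan", "fridge"]}

combined = match_categories(robot1, robot2)
```

Here "room 1" and "room 3" pair up through their shared chairs and table, so the merged catalogue inherits the whiteboard one robot saw and the (spurious) suitcase the other did; repeated pairwise merges of this kind are what gradually wash such errors out.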

Imposing order

This relatively straightforward procedure results from some pretty sophisticated mathematical analysis, which the researchers present in their paper. “The way that computer systems learn these complex models these days is that you postulate a simpler model and then use it to approximate what you would get if you were able to deal with all the crazy nuances and complexities,” Campbell says. “What our algorithm does is sort of artificially reintroduce structure, after you’ve solved that easier problem, and then use that artificial structure to combine the models properly.”

In a real application, the robots probably wouldn’t just be classifying rooms according to the objects they contain: They’d also be classifying the objects themselves, and probably their uses. But Campbell and How’s procedure generalizes to other learning problems just as well.

The example of classifying rooms according to content, moreover, is similar in structure to a classic problem in natural language processing called topic modeling, in which a computer attempts to use the relative frequency of words to classify documents according to topic. It would be wildly impractical to store all the documents on the Web in a single location, so that a traditional machine-learning algorithm could provide a consistent classification scheme for all of them. But Campbell and How’s algorithm means that scattered servers could churn away on the documents in their own corners of the Web and still produce a collective topic model.

“Distributed computing will play a critical role in the deployment of multiple autonomous agents, such as multiple autonomous land and airborne vehicles,” says Lawrence Carin, a professor of electrical and computer engineering and vice provost for research at Duke University. “The distributed variational method proposed in this paper is computationally efficient and practical. One of the keys to it is a technique for handling the breaking of symmetries manifested in Bayesian inference. The solution to this problem is very novel and is likely to be leveraged in the future by other researchers.”

*   *   *

Robot can be programmed by casually talking to it

Date: June 23, 2014

Source: Cornell University

Summary: A professor of computer science is teaching robots to understand instructions in natural language from various speakers, account for missing information, and adapt to the environment at hand.

A computer science professor is teaching robots to understand instructions in natural language from various speakers, account for missing information, and adapt to the environment at hand. Credit: Image courtesy of Cornell University

Robots are getting smarter, but they still need step-by-step instructions for tasks they haven’t performed before. Before you can tell your household robot “Make me a bowl of ramen noodles,” you’ll have to teach it how to do that. Since we’re not all computer programmers, we’d prefer to give those instructions in English, just as we’d lay out a task for a child.

But human language can be ambiguous, and some instructors forget to mention important details. Suppose you told your household robot how to prepare ramen noodles, but forgot to mention heating the water or tell it where the stove is.

In his Robot Learning Lab, Ashutosh Saxena, assistant professor of computer science at Cornell University, is teaching robots to understand instructions in natural language from various speakers, account for missing information, and adapt to the environment at hand.

Saxena and graduate students Dipendra K. Misra and Jaeyong Sung will describe their methods at the Robotics: Science and Systems conference at the University of California, Berkeley, July 12-16.

The robot may have a built-in programming language with commands like find (pan); grasp (pan); carry (pan, water tap); fill (pan, water); carry (pan, stove) and so on. Saxena’s software translates human sentences, such as “Fill a pan with water, put it on the stove, heat the water. When it’s boiling, add the noodles,” into robot language. Notice that you didn’t say, “Turn on the stove.” The robot has to be smart enough to fill in that missing step.
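As a rough illustration of that translation step, the sketch below maps keywords in English sentences onto robot-style primitives and inserts a prerequisite step the speaker forgot to mention. The command names and keyword rules here are hypothetical and hand-coded; Saxena's system learns these mappings rather than enumerating them.

```python
# Hypothetical keyword-to-primitive rules (illustrative only).
KEYWORD_RULES = [
    ("fill", ["find(pan)", "grasp(pan)",
              "carry(pan, water_tap)", "fill(pan, water)"]),
    ("stove", ["carry(pan, stove)"]),
    ("heat", ["heat(pan)"]),
    ("noodles", ["add(noodles, pan)"]),
]

# A primitive and the step that must already have happened before it.
PREREQUISITES = {"heat(pan)": "turn_on(stove)"}

def translate(sentences):
    """Turn English sentences into an ordered plan of primitives,
    filling in prerequisite steps the instructions omitted."""
    plan = []
    for sentence in sentences:
        lowered = sentence.lower()
        for keyword, primitives in KEYWORD_RULES:
            if keyword in lowered:
                plan.extend(p for p in primitives if p not in plan)
    complete = []
    for step in plan:
        prereq = PREREQUISITES.get(step)
        if prereq and prereq not in complete:
            complete.append(prereq)
        complete.append(step)
    return complete

plan = translate(["Fill a pan with water, put it on the stove, heat the water.",
                  "When it's boiling, add the noodles."])
```

Even though no sentence mentions the stove being turned on, the plan includes turn_on(stove) before heat(pan), which is the gap-filling behavior the article describes.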

Saxena’s robot, equipped with a 3-D camera, scans its environment and identifies the objects in it, using computer vision software previously developed in Saxena’s lab. The robot has been trained to associate objects with their capabilities: A pan can be poured into or poured from; stoves can have other objects set on them, and can heat things. So the robot can identify the pan, locate the water faucet and stove and incorporate that information into its procedure. If you tell it to “heat water” it can use the stove or the microwave, depending on which is available. And it can carry out the same actions tomorrow if you’ve moved the pan, or even moved the robot to a different kitchen.

Other researchers have attacked these problems by giving a robot a set of templates for common actions and parsing sentences one word at a time. Saxena’s research group instead uses techniques computer scientists call “machine learning” to train the robot’s computer brain to associate entire commands with flexibly defined actions. The computer is fed animated video simulations of the action — created by humans in a process similar to playing a video game — accompanied by recorded voice commands from several different speakers.

The computer stores the combination of many similar commands as a flexible pattern that can match many variations, so when it hears “Take the pot to the stove,” “Carry the pot to the stove,” “Put the pot on the stove,” “Go to the stove and heat the pot” and so on, it calculates the probability of a match with what it has heard before, and if the probability is high enough, it declares a match. A similarly fuzzy version of the video simulation supplies a plan for the action: Wherever the sink and the stove are, the path can be matched to the recorded action of carrying the pot of water from one to the other.
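The probability-of-match idea can be sketched with a simple token-overlap score. This is our stand-in for the learned model, not the system's actual scoring: the real patterns are learned from many speakers, whereas these paraphrases and the threshold are hand-picked for illustration.

```python
# Illustrative fuzzy matching of a heard command against stored
# paraphrases of a known action, using token-set overlap.

def tokens(sentence):
    """Lowercased word set, punctuation stripped."""
    return set(sentence.lower().replace(",", "").split())

STORED = {
    "carry_pot_to_stove": [
        "take the pot to the stove",
        "carry the pot to the stove",
        "put the pot on the stove",
        "go to the stove and heat the pot",
    ],
}

def best_match(heard, threshold=0.4):
    """Return (action, score) if any stored paraphrase scores at or
    above the threshold, else (None, best_score)."""
    heard_toks = tokens(heard)
    best_action, best_score = None, 0.0
    for action, paraphrases in STORED.items():
        for p in paraphrases:
            p_toks = tokens(p)
            score = len(heard_toks & p_toks) / len(heard_toks | p_toks)
            if score > best_score:
                best_action, best_score = action, score
    if best_score >= threshold:
        return best_action, best_score
    return None, best_score

action, score = best_match("Move the pot onto the stove")
```

A phrasing never heard before ("Move the pot onto the stove") still overlaps enough with the stored variants to clear the threshold and be declared a match.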

Of course the robot still doesn’t get it right all the time. To test, the researchers gave instructions for preparing ramen noodles and for making affogato — an Italian dessert combining coffee and ice cream: “Take some coffee in a cup. Add ice cream of your choice. Finally, add raspberry syrup to the mixture.”

The robot performed correctly up to 64 percent of the time even when the commands were varied or the environment was changed, and it was able to fill in missing steps. That was three to four times better than previous methods, the researchers reported, but “there is still room for improvement.”

You can teach a simulated robot to perform a kitchen task at the “Tell me Dave” website, and your input there will become part of a crowdsourced library of instructions for the Cornell robots. Aditya Jami, visiting researcher at Cornell, is helping Tell Me Dave to scale the library to millions of examples. “With crowdsourcing at such a scale, robots will learn at a much faster rate,” Saxena said.


Further information:

RoboCup: the World Championship on robotics!

21 July 2014

RoboCup was founded in 1997 with the main goal of “developing by 2050 a robot soccer team capable of winning against the human champion team of the FIFA World Cup”. In the following years, RoboCup introduced several soccer platforms that became standard platforms for robotics research. The soccer domain proved capable of capturing key aspects of complex real-world problems, stimulating the development of a wide range of technologies, including integrated electrical, mechanical, and computational techniques for autonomous robots. After more than 15 years of RoboCup, robot soccer is now only one of the available platforms. RoboCup encompasses other leagues that, in addition to Soccer, cover Rescue (robots and simulation), @Home (assistive robots in home environments), Sponsored and @Work (industrial environments), as well as RoboCupJunior leagues for young students. These domains offer a wide range of platforms for researchers, with the potential to speed up developments in the mobile robotics field.

RoboCup has grown into a project that attracts worldwide attention. Every year, multiple tournaments are organized in countries such as Germany, Portugal, China, and Brazil, with teams from around the world competing in various disciplines. In 2014, RoboCup will be hosted in South America for the first time, in Brazil.

“Bionic” bees will help monitor climate change in the Amazon (O Globo)

JC e-mail 4966, June 4, 2014

Microsensors mounted on the insects will collect data on their behavior and on the environment

On their trips to and from the hive, bees interact with much of the environment around them, besides performing important pollination work that contributes greatly to maintaining biodiversity and to food production worldwide. Now, swarms of them will take on another role, that of “bionic” weather stations, to help monitor the effects of climate change on the Amazon and on their own behavior.

Since last week, researchers from the Instituto Tecnológico Vale (ITV) and CSIRO, Australia’s federal scientific research agency, have been fitting microsensors onto 400 bees from an apiary in the municipality of Santa Bárbara do Pará, an hour from Belém, in the first phase of the experiment, which also aims to uncover the causes of so-called Colony Collapse Disorder (CCD), which in the United States alone has already killed 35% of captive-bred bees.

“We do not know how bees will behave under the projected temperature increases and climate changes caused by global warming,” says physicist Paulo de Souza, visiting researcher at ITV and CSIRO and head of the experiment. “So understanding how they will adapt to these changes is important if we are to estimate what may happen in the future.”

Souza explains that the microsensors used in the experiment can generate their own power and capture and store data not only on the bees’ behavior but also on the temperature, humidity, and sunlight levels of the environment. All of this is squeezed into a small square 2.5 millimeters on a side, weighing 5.4 milligrams, so the Africanized Apis mellifera bees, which weigh 70 milligrams on average, feel as if they were “carrying a backpack.”
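A quick back-of-the-envelope check of the backpack comparison, using the article's figures (the human-scale analogy is our own, not from the article):

```python
# Sensor load as a fraction of the bee's body mass, and the
# equivalent load scaled up to a 70 kg human.

sensor_mg = 5.4   # sensor mass, per the article
bee_mg = 70.0     # average Africanized Apis mellifera worker, per the article

fraction = sensor_mg / bee_mg              # ~0.077, i.e. ~7.7% of body mass
human_kg = 70.0                            # illustrative human mass
equivalent_load_kg = fraction * human_kg   # ~5.4 kg "backpack"
```

So the sensor is roughly 8 percent of the bee's body weight, comparable to a person carrying a 5 kg pack, which makes the backpack image quite literal.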

“But this does not affect their behavior; they adapt very quickly to the microsensors,” he assures.

Starting next semester, the researchers plan to begin installing the microsensors, which cost US$ 0.30 (about R$ 0.70) each, on stingless species native to the Amazon. According to Souza, these bees are even more important for pollinating the region’s plants, and are also more sensitive to environmental change. The scale of the experiment should then grow, with 10,000 of the tiny devices deployed across several generations of bees, which live two months on average.

The size of the current sensors, however, does not allow the device to be installed on smaller insects, such as mosquitoes. Souza’s group is therefore already working on a new generation of microsensors a tenth of a millimeter across, the size of a grain of sand. According to the researcher, the new sensors, expected to be ready in four years, will have the same capabilities as the current ones, with the advantage of being “active,” that is, able to transmit the collected data in real time.

“When we have sensors of that size, we will be able to apply them to hives as a spray, and also use them to monitor other insect species, such as disease-carrying mosquitoes,” he says. “But the main advantage is that they will let us turn bees and other insects into true walking weather stations, enabling environmental monitoring on an unprecedented scale, since each bee or mosquito will act as a field agent.”

(Cesar Baima / O Globo)

Scientists unveil robot that can perform surgery on fetuses still in the womb (O Globo)

JC e-mail 4964, June 2, 2014

The machine could help prevent congenital disorders

British scientists this week unveiled a small robot capable of operating on fetuses still in their mothers’ wombs. The machine, which cost about R$ 30 million, could revolutionize the treatment of congenital malformations.

The tiny device can provide 3D images of babies inside the womb. With a view of its “patient,” the robot carries out medical interventions controlled by a team of specialists behind the scenes. The invention could, for example, perform surgery or even implant stem cells into a child’s malformed organs.

The project is coordinated by engineers at University College London (UCL) and the Catholic University of Leuven, in Belgium. According to research leader Sebastien Ourselin, the machine will reduce risks to both mothers and babies.

“The goal is to create less invasive surgical technologies to treat a wide range of diseases in the womb, with far less risk to both,” Ourselin told The Guardian.

The doctors’ first target is the treatment of the most severe cases of spina bifida, a malformation of the spinal column that can affect one in every thousand fetuses. It occurs when the spine does not fully close, allowing amniotic fluid to enter and carry with it germs that could reach the brain and impair the child’s development. The intention is for the new robot to close these gaps in the spine, preventing the condition.

Scientists warn, however, that operations of this kind carry a high surgical risk, with a strong chance of lasting harm to the mothers. Medical interventions on fetuses can only be performed after at least 26 weeks of gestation, and the procedure is practically impossible today.

The robot consists of a very thin, highly flexible probe. The head of the device carries a wire fitted with a small camera that would use laser pulses and ultrasound detection, a combination known as photoacoustic imaging, to generate a 3D picture of the inside of the womb. Surgeons would then use these images to guide the probe to its target: the gap in the fetus’s spine.

(O Globo, with news agencies)

Brazil will use robots to police the 2014 World Cup (Daily Caller)


Thomas Ryder, iRobot

12:16 PM 02/21/2014

Giuseppe Macri

Brazil is embracing the security of the future after striking a deal with a robot manufacturer to deploy robots programmed to help police the 2014 FIFA World Cup games.

The Brazilian government has agreed to pay $7.2 million to Massachusetts-based iRobot for 30 of its PackBot robots, according to a Robohub report. The robots will be programmed to analyze suspicious-looking objects in 12 cities hosting World Cup match-ups across Brazil beginning in June.

PackBots can travel at speeds up to 9 mph and have an extremely versatile mobility system, able to traverse rough terrain and even stairs. iRobot’s models include a host of sensors including GPS, video, thermal detection, electronic compass and system diagnostics. The robots weigh about 40 pounds and can be folded to fit into a backpack, making them ideal for quick deployment.

The model is exceptionally durable, able to survive a hard fall onto concrete from two meters, and has a full 360-degree range of rotation.

The same robots were recently used to assess the meltdown at Japan’s Fukushima nuclear power plant following the 2011 earthquake and tsunami. More than 800 have been used in the Iraq and Afghanistan war zones, among other places, since 2007.
