Robot Learning

DOP: Deep Optimistic Planning with Approximate Value Function Evaluation

Research on reinforcement learning has demonstrated promising results in a variety of applications and domains. Still, efficiently learning effective robot behaviors remains difficult due to unstructured scenarios, high uncertainty, and large state dimensionality (e.g., multi-agent systems or hyper-redundant robots). To alleviate this problem, we present DOP, a deep model-based reinforcement learning algorithm that exploits action values to both (1) guide the exploration of the state space and (2) plan effective policies.
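
As a rough sketch of the underlying idea (not the actual DOP implementation), the snippet below uses a learned action-value function to bias simulated rollouts over a learned model. The model.step interface, the Q callable, the discrete action set, and every hyperparameter here are illustrative assumptions.

```python
import random

def plan(state, model, Q, actions, depth=5, rollouts=20, gamma=0.95):
    """Pick the action whose Q-guided simulated rollouts score best.

    Assumed (hypothetical) interfaces: model.step(s, a) -> (s', r) is a
    learned transition model; Q(s, a) -> float is a learned action value.
    """
    best_action, best_return = None, float("-inf")
    for first in actions:
        total = 0.0
        for _ in range(rollouts):
            s, a = state, first
            ret, disc = 0.0, 1.0
            for _ in range(depth):
                s, r = model.step(s, a)      # simulate with the model
                ret += disc * r
                disc *= gamma
                # action values guide exploration: mostly greedy on Q,
                # with occasional random actions to keep exploring
                if random.random() < 0.1:
                    a = random.choice(actions)
                else:
                    a = max(actions, key=lambda b: Q(s, b))
            total += ret
        if total / rollouts > best_return:
            best_action, best_return = first, total / rollouts
    return best_action
```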

Q-CP: Learning Action Values for Cooperative Planning

Research on multi-robot systems has demonstrated promising results in a variety of applications and domains. Still, efficiently learning effective robot behaviors is very difficult, due to unstructured scenarios, high uncertainty, and large state dimensionality (e.g., hyper-redundant robots or groups of robots). To alleviate this problem, we present Q-CP, a cooperative model-based reinforcement learning algorithm that exploits action values to both (1) guide the exploration of the state space and (2) generate effective policies.
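
To make the cooperative angle concrete, here is a minimal sketch of greedy joint-action selection from per-robot action values; the per-robot Q-functions and discrete action sets are assumptions for illustration, not the Q-CP interface.

```python
from itertools import product

def select_joint_action(state, Qs, action_sets):
    """Greedy joint action maximizing the sum of per-robot values.

    Qs[i](state, a) is robot i's (assumed) action-value function and
    action_sets[i] its discrete action set. The exhaustive product is
    exponential in the number of robots, which is exactly the blow-up
    that value-guided exploration is meant to tame.
    """
    best, best_value = None, float("-inf")
    for joint in product(*action_sets):
        value = sum(Q(state, a) for Q, a in zip(Qs, joint))
        if value > best_value:
            best, best_value = joint, value
    return best
```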

Hi-Val: Iterative Learning of Hierarchical Value Functions for Policy Generation

Task decomposition is effective in many applications where the global complexity of a problem makes planning and decision-making too demanding. This is true, for example, in high-dimensional robotics domains, where (1) unpredictability and modeling limitations typically prevent the manual specification of robust behaviors, and (2) learning an action policy is challenging due to the curse of dimensionality.

GUESs: Generative modeling of Unknown Environments and Spatial Abstraction for Robots

Representing unknown and missing knowledge about the environment is fundamental to improving robot behavior and its performance in completing a task. However, reconstructing spatial knowledge beyond the robot's sensory horizon is extremely challenging. Existing approaches assume either that the environment is static and features repetitive patterns (e.g., rectangular rooms), or that it can be fully generalized with pre-trained models.

Learning Feedback Linearization Control Without Torque Measurements

Feedback Linearization (FL) achieves the best control performance in executing a desired motion task when an accurate dynamic model of a fully actuated robot is available. However, due to residual parametric uncertainties and unmodeled dynamic effects, complete cancellation of the nonlinear dynamics by feedback is hardly achieved in practice. In this paper, we summarize a novel learning framework that improves online the torque correction necessary to obtain perfect cancellation with an FL controller, using only joint position measurements.
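
For context, the standard computed-torque form of FL for a fully actuated manipulator with dynamics M(q)q̈ + n(q, q̇) = τ is sketched below, with hats denoting model estimates; the additive learned correction is a schematic reading of the abstract, not necessarily the paper's exact formulation.

```latex
% Computed-torque / feedback linearization law with a learned residual.
% \hat{M}, \hat{n} are model estimates; \hat{\tau} is the learned correction.
\tau = \hat{M}(q)\left( \ddot{q}_d + K_D(\dot{q}_d - \dot{q}) + K_P(q_d - q) \right)
       + \hat{n}(q,\dot{q}) + \hat{\tau}
```

With a perfect model and the correction set to zero, the closed loop reduces to the linear, decoupled error dynamics ë + K_D ė + K_P e = 0; the learned term compensates the residual dynamics that the nominal model misses.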

An online learning procedure for feedback linearization control without torque measurements

By exploiting an a priori estimate of the dynamic model of a manipulator, it is possible to command joint torques that ideally realize a Feedback Linearization (FL) controller. Exact cancellation may nevertheless not be achieved, due to model uncertainties and possible errors in the estimation of the dynamic coefficients. In this work, an online learning scheme for FL-based control is presented.
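
As a hedged illustration of what an online, torque-sensorless update could look like, the sketch below learns a residual torque correction from joint position error only. The linear-in-features regressor and the gradient-like rule are assumptions made here for concreteness, not the scheme actually proposed in the paper.

```python
import numpy as np

class ResidualTorque:
    """Hypothetical online regressor for a residual torque correction."""

    def __init__(self, n_joints, lr=1e-3):
        # linear-in-features model: correction = W @ features
        self.W = np.zeros((n_joints, 5 * n_joints))
        self.lr = lr

    def features(self, q, dq, ddq_d):
        # simple hand-crafted basis; any regressor could stand in here
        return np.concatenate([q, dq, ddq_d, np.sin(q), np.cos(q)])

    def correction(self, q, dq, ddq_d):
        return self.W @ self.features(q, dq, ddq_d)

    def update(self, q, dq, ddq_d, pos_err):
        # heuristic, gradient-like step that grows the correction in the
        # direction of the tracking error, using positions only
        self.W += self.lr * np.outer(pos_err, self.features(q, dq, ddq_d))
```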
