Humanoids and Intelligence Systems Lab - Institute for Anthropomatics and Robotics

Focus of research

Interactive Object Modeling

When fulfilling service tasks for humans, a service robot not only interacts with humans but also has to handle many everyday objects. To provide the required knowledge about these objects, we are working on approaches to automate the object modeling process. A custom-built sensor platform is used to acquire highly accurate 3D models of objects together with their texture. Subsequently, the models are enriched with semantic knowledge in an interactive way. Additionally, the placement of objects in complete scenes is analyzed to gather further object knowledge from the relations between individual objects.
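The scene-analysis step can be illustrated with a small sketch. The snippet below derives a simple "on top of" relation from object positions and bounding-box half-extents; the data layout, relation and threshold are illustrative assumptions, not the lab's actual scene representation.

```python
def extract_on_relations(scene, eps=0.02):
    """Find (a, 'on', b) relations in a scene of axis-aligned boxes.

    scene: dict mapping object name -> {'pos': (x, y, z) centre,
    'size': (hx, hy, hz) half-extents}. Object a is 'on' object b if
    their footprints overlap in x/y and a's bottom face touches b's
    top face (within tolerance eps, in metres).
    """
    relations = []
    for a, oa in scene.items():
        for b, ob in scene.items():
            if a == b:
                continue
            a_bottom = oa['pos'][2] - oa['size'][2]
            b_top = ob['pos'][2] + ob['size'][2]
            overlap_xy = all(
                abs(oa['pos'][i] - ob['pos'][i]) <= oa['size'][i] + ob['size'][i]
                for i in range(2)  # x and y axes only
            )
            if overlap_xy and abs(a_bottom - b_top) <= eps:
                relations.append((a, 'on', b))
    return relations
```

Such pairwise relations, accumulated over many scenes, are one way to turn raw object placements into reusable semantic knowledge.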

  Contact: Dipl.-Inform. Alexander Kasper

Recognition of Activities in Human-Robot Interaction

Autonomous service robots operate in the direct surroundings of humans. This requires intensive and natural human-robot interaction capabilities. Besides verbal communication, non-verbal communication is very important in interaction with humans, so the passive interpretation of human behavior by the robot plays a key role.
Therefore, a system has been developed which is able to capture human motions and activities in real time based solely on the on-board sensors of a robot. The sensors used comprise a 3D time-of-flight (TOF) sensor, an active sensor based on modulated infrared light, and stereo color cameras. A simplified model of the human body, made up of 10 cylinders, is tracked in real time. In a second step, different classification algorithms are used to obtain a semantic interpretation of the body model. The classifiers are trained on cylinder positions, velocities and other features which can be extracted from the tracked body model.
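The second step can be sketched as follows. The feature choice (last-frame cylinder positions plus finite-difference velocities) matches the features named in the text, but the nearest-centroid classifier is only a minimal stand-in; the text does not specify which classifiers are actually used.

```python
import numpy as np

N_CYLINDERS = 10  # simplified body model: 10 tracked cylinders

def extract_features(track):
    """Feature vector from a tracked body model.

    track: array of shape (T, N_CYLINDERS, 3), cylinder centre
    positions over T frames. Features: last frame's positions plus
    per-cylinder velocities (finite differences between frames).
    """
    positions = track[-1].ravel()                 # 30 position values
    velocities = (track[-1] - track[-2]).ravel()  # 30 velocity values
    return np.concatenate([positions, velocities])

class NearestCentroidClassifier:
    """Minimal stand-in for the trained activity classifiers."""

    def fit(self, X, y):
        self.labels_ = sorted(set(y))
        self.centroids_ = {
            label: np.mean([x for x, yy in zip(X, y) if yy == label], axis=0)
            for label in self.labels_
        }
        return self

    def predict(self, x):
        # Assign the label whose class centroid is closest in feature space.
        return min(self.labels_,
                   key=lambda label: np.linalg.norm(x - self.centroids_[label]))
```

In practice any discriminative classifier can be plugged in at this point; the essential design choice is that semantics are assigned to feature vectors computed from the tracked model, not to raw sensor data.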

  Contact: Dipl.-Inform. Martin Lösch
[Figure: Architecture of the decision system]

Probabilistic Decision Framework

Autonomous service robots have to act independently within the limits of their task, and in particular they need the ability to decide autonomously in a dynamic environment. But real environments have stochastic dynamics, and robotic sensors do not provide perfect measurements. In such a case, probabilistic decision theory can be used as a basis for the control system. Currently, partially observable Markov decision processes (POMDPs) are gaining more and more attention as an adequate approach. We have developed a decision and control system which allows a service robot to act autonomously using POMDPs. A filter system is used to transform multimodal perceptions into a representation suited for POMDPs. Stochastic scenario models are obtained through a synthesis process from ontologically represented background knowledge.
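At the core of any POMDP-based controller is the belief update, which fuses the transition and observation models with the previous belief after each action and observation. A minimal sketch (the two-state models in the usage below are illustrative toys, not the lab's synthesized scenario models):

```python
import numpy as np

def belief_update(belief, action, observation, T, O):
    """One exact POMDP belief update: b'(s') ∝ O(o|s',a) Σ_s T(s'|s,a) b(s).

    belief: (S,) probability vector over hidden states
    T: (A, S, S) transition model, T[a, s, s2] = P(s2 | s, a)
    O: (A, S, O) observation model, O[a, s2, o] = P(o | s2, a)
    """
    predicted = belief @ T[action]            # prediction: Σ_s T(s'|s,a) b(s)
    updated = O[action][:, observation] * predicted  # correction by observation
    return updated / updated.sum()            # renormalize to a distribution
```

The filter system mentioned above plays the role of producing the discrete observation index fed into such an update from raw multimodal perceptions.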

  Contact: Dipl.-Inform. Sven R. Schmidt-Rohr

Programming by Demonstration

Future humanoid and service robots will have to act in dynamic and unstructured environments and have to be able to adapt to them. Furthermore, such systems have to show the ability to acquire new capabilities, which usually can only be specified by the end user in an intuitive and interactive way. Existing approaches such as programming via teach-in panels or textual interfaces are too complicated for the intended end users, on the one hand because these users lack the necessary knowledge and abilities, and on the other hand because of the complexity of programming task sequences.
In the context of Programming by Demonstration, new intelligent approaches for the programming of robots are investigated. The idea is to let robots learn from humans, who demonstrate the task to be executed by the robot. These demonstrations are observed in a dedicated sensory environment and segmented into meaningful parts, and the goal of each segment is deduced. Subsequently, the segments are combined into an abstract task representation called a Macro Operator, which can be transferred to any robot. To this end, different kinematics and transformation mappings can be defined.
The figure shows the demonstration centre for the observation of task demonstrations. The integrated sensors comprise data gloves with mounted magnetic field position trackers and stereo cameras. The recorded action sequence is made executable on an arbitrary robot platform via a skill transfer. Before execution, the representation can be checked and, if necessary, modified in simulation. To provide an intuitive user interface, speech recognition, speech output, gesture recognition and iconic communication are integrated. In continuation of this approach, user demonstrations are used to extract further knowledge for the probabilistic decision framework.

  Contact: Dipl.-Inform. Rainer Jäkel

Execution on a service robot

For realistic experimental evaluation of the methods described above, mobile robotic platforms are used in addition to static platforms like the PbD demonstration centre. In experiments, the approaches to cognition and decision-making and the learned knowledge are applied in real scenarios, including human-robot interaction. Our team primarily uses the robot ALBERT II and the mobile sensor platform MILD for experimental purposes. These systems are characterized by multimodality in both their sensor systems and their actuators. They provide various capabilities to act and to sense, such as mobility and navigation, grasping and object manipulation, speech recognition and synthesis, and recognition of human actions and gestures. The central control and decision system coordinates these capabilities and allows the service robot to act autonomously. Full autonomy is a primary goal of our research, so all data for visualization, recording and later analysis are transmitted from the robots wirelessly only.

  Contact: Dipl.-Inform. Sven R. Schmidt-Rohr
  Dipl.-Inform. Rainer Jäkel