(A) Vision for 2050 - Context-Based Image Understanding for a Human-Robot Soccer Match
DOI:
https://doi.org/10.14279/tuj.eceasst.62.863

Abstract
We believe it is possible to create the visual subsystem needed for the RoboCup 2050 challenge - a soccer match between humans and robots - within the next decade. In this position paper, we argue that the basic techniques are available, but that the main challenge will be to achieve the necessary robustness. We propose to address this challenge through the use of probabilistically modeled context, so that, for instance, a visually indistinct circle is accepted as the ball if it fits well with the ball's motion model, and vice versa. Our vision is accompanied by a sequence of (partially already conducted) experiments for its verification. In these experiments, a human soccer player wears a helmet carrying a camera and an inertial sensor, and the vision system has to extract from that data all the information a humanoid robot would need to take the human's place.
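To make the context idea concrete, the following is a minimal sketch (not taken from the paper) of how a weak appearance cue and a ball motion model might be fused probabilistically; the function names, the noise parameter sigma_px, and the naive-Bayes-style fusion rule are illustrative assumptions, not the authors' implementation.

import numpy as np

def motion_likelihood(candidate_xy, predicted_xy, sigma_px=15.0):
    # Gaussian likelihood of a candidate position given the motion-model
    # prediction (illustrative constant-noise assumption).
    d2 = np.sum((np.asarray(candidate_xy) - np.asarray(predicted_xy)) ** 2)
    return np.exp(-0.5 * d2 / sigma_px ** 2)

def accept_as_ball(detector_score, candidate_xy, predicted_xy,
                   prior=0.5, threshold=0.5):
    # Fuse appearance evidence (detector_score in [0, 1]) with context:
    # a visually indistinct circle is still accepted if it fits the
    # prediction of the ball's motion model, and vice versa.
    context = motion_likelihood(candidate_xy, predicted_xy)
    # Naive-Bayes-style odds combination of the two evidence sources.
    odds = (prior / (1 - prior)) \
        * (detector_score / max(1e-6, 1 - detector_score)) \
        * (context / max(1e-6, 1 - context))
    posterior = odds / (1 + odds)
    return posterior > threshold, posterior

if __name__ == "__main__":
    # A weak detection (score 0.4) close to the predicted ball position is
    # accepted; the same score far from the prediction is rejected.
    print(accept_as_ball(0.4, (102, 98), (100, 100)))
    print(accept_as_ball(0.4, (300, 50), (100, 100)))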
Published
2013-09-15
How to Cite
[1] U. Frese, T. Laue, O. Birbach, and T. Röfer, "(A) Vision for 2050 - Context-Based Image Understanding for a Human-Robot Soccer Match", ECEASST, vol. 62, Sep. 2013.
Section
Articles