Artificial vision systems cannot process all the
information they receive from the world in real time,
since doing so is prohibitively expensive and inefficient in terms of
computational cost. However, inspired by biological perception
systems, it is possible to develop an artificial attention model
able to select only the relevant part of the scene, as human
vision does. From the Automated Planning point of view, a
relevant area can be seen as an area where the objects involved
in the execution of a plan are located. Thus, the planning system
should guide the attention model to track relevant objects. At
the same time, however, the perceived objects may constrain the plan or
provide new information that suggests modifying it.
Therefore, a plan that is being executed should be adapted
or recomputed taking into account actual information perceived
from the world. In this work, we introduce an architecture that
creates a symbiosis between the planning and the attention
modules of a robotic system, linking visual features with high-level
behaviours. The architecture is based on the interaction of
an oversubscription planner, which produces plans constrained
by the information perceived by the vision system, and an
object-based attention system, able to focus on the relevant
objects of the plan being executed.