This paper presents a novel approach that exploits semantic knowledge to enhance the object recognition capability of autonomous robots. Semantic knowledge is a rich source of information, naturally gathered from humans (elicitation), which can encode both objects’ geometric/appearance properties and their contextual relations. This kind of information can be exploited in a variety of robotic skills, especially for robots operating in human environments. In this paper we propose the use of semantic knowledge to eliminate the need for collecting large datasets for the training stages required by typical recognition approaches. Concretely, semantic knowledge encoded in an ontology is used to synthetically and effortlessly generate an arbitrary number of training samples for tuning Probabilistic Graphical Models (PGMs). We then employ these PGMs to classify patches extracted from 3D point clouds gathered from office environments in the UMA-offices dataset, achieving a recognition success of ∼90%, and from office and home scenes in the NYU2 dataset, yielding success rates of ∼81% and ∼69.5%, respectively. Additionally, a comparison with state-of-the-art recognition methods also based on graphical models has been carried out, revealing that our semantic-based training approach can compete with, and even outperform, methods trained with a considerable number of real samples.
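The pipeline summarized above (ontology-encoded object properties → synthetic training samples → a PGM classifier) can be illustrated with a minimal sketch. All class names, feature names, and value ranges below are hypothetical placeholders, not the paper's actual ontology; the PGM is reduced here to a hand-rolled Gaussian naive Bayes over two geometric features, purely for illustration of the idea of training on synthesized rather than collected data.

```python
import math
import random

# Hypothetical mini-ontology: each object class lists plausible ranges for
# simple geometric/appearance features (illustrative values only).
ONTOLOGY = {
    "monitor": {"height_m": (0.30, 0.50), "elongation": (1.2, 1.9)},
    "mug":     {"height_m": (0.07, 0.12), "elongation": (0.9, 1.3)},
}
FEATURES = ["height_m", "elongation"]

def synthesize_samples(ontology, n_per_class, rng):
    """Generate synthetic training samples by sampling uniformly from the
    feature ranges the ontology encodes for each class."""
    samples = []
    for label, ranges in ontology.items():
        for _ in range(n_per_class):
            x = [rng.uniform(*ranges[f]) for f in FEATURES]
            samples.append((x, label))
    return samples

def fit_gaussian_nb(samples):
    """Fit per-class Gaussian marginals (a minimal naive-Bayes PGM)."""
    stats = {}
    for label in {lbl for _, lbl in samples}:
        xs = [x for x, lbl in samples if lbl == label]
        mus = [sum(col) / len(col) for col in zip(*xs)]
        vrs = [max(sum((v - m) ** 2 for v in col) / len(col), 1e-9)
               for col, m in zip(zip(*xs), mus)]
        stats[label] = (mus, vrs)
    return stats

def classify(stats, x):
    """Return the class maximizing the log-likelihood of feature vector x."""
    def loglik(mus, vrs):
        return sum(-0.5 * math.log(2 * math.pi * s) - (v - m) ** 2 / (2 * s)
                   for v, m, s in zip(x, mus, vrs))
    return max(stats, key=lambda lbl: loglik(*stats[lbl]))

rng = random.Random(0)
model = fit_gaussian_nb(synthesize_samples(ONTOLOGY, 200, rng))
print(classify(model, [0.42, 1.5]))  # a tall, elongated patch
print(classify(model, [0.09, 1.0]))  # a small, squat patch
```

No real sensor data is needed at any point: the classifier is "tuned" entirely on samples drawn from the ontology, which is the core idea the abstract describes, independent of the particular PGM used.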