Appearance-based Localization (AL) focuses on estimating the pose of a camera from the information encoded in an image, treated holistically. However, the high dimensionality of images makes this estimation intractable, so techniques of Dimensionality Reduction (DR) must be applied. The resulting reduced image representation, though, must preserve underlying information about the structure of the scene in order to infer the camera pose. This work explores the problem of DR in the context of AL and evaluates four popular methods in two simple cases within a synthetic environment: two linear methods (PCA and MDS) and two non-linear ones, also known as Manifold Learning methods (LLE and Isomap). The evaluation is carried out in terms of their capability to generate lower-dimensional embeddings that preserve underlying information isometric to the camera poses.
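The comparison described above can be sketched in a minimal form with scikit-learn, which implements all four methods. The toy data generator below (sinusoidal "images" produced from 1-D camera poses) and the use of pairwise-distance correlation as a proxy for isometry are assumptions for illustration, not the paper's actual pipeline or environment.

```python
import numpy as np
from scipy.stats import pearsonr
from scipy.spatial.distance import pdist
from sklearn.decomposition import PCA
from sklearn.manifold import MDS, LocallyLinearEmbedding, Isomap

# Hypothetical setup: camera poses sampled along a 1-D path, and
# high-dimensional "images" generated as a smooth non-linear function
# of the pose (a stand-in for renders of a synthetic environment).
rng = np.random.default_rng(0)
poses = np.linspace(0.0, 2.0 * np.pi, 200).reshape(-1, 1)  # camera poses
freqs = rng.uniform(0.5, 2.0, size=(1, 256))               # random frequencies
images = np.sin(poses @ freqs) + 0.01 * rng.standard_normal((200, 256))

methods = {
    "PCA": PCA(n_components=2),
    "MDS": MDS(n_components=2, random_state=0),
    "LLE": LocallyLinearEmbedding(n_components=2, n_neighbors=10),
    "Isomap": Isomap(n_components=2, n_neighbors=10),
}

pose_dists = pdist(poses)  # ground-truth pairwise distances between poses
for name, method in methods.items():
    emb = method.fit_transform(images)
    # Correlation between pairwise distances in the embedding and in pose
    # space: a simple proxy for how close the embedding is to an isometry.
    r, _ = pearsonr(pdist(emb), pose_dists)
    print(f"{name}: distance correlation r = {r:.3f}")
```

A correlation near 1 indicates the embedding approximately preserves the metric structure of the pose space; comparing the scores across the four methods mirrors the kind of evaluation the abstract describes.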