In this paper, we present a system that allows visually impaired people to navigate autonomously in unknown indoor and outdoor environments. Although explicitly designed for people with low vision, the system can easily be generalized to other users. We assume that special landmarks are placed along pre-defined paths to help users localize themselves. Our novel approach exploits both the inertial sensors and the camera integrated into the smartphone. In addition to position tracking, the navigation system can provide direction estimates to the users. The effectiveness of our approach is demonstrated through experimental tests performed both in controlled indoor environments and in real outdoor installations. A comparison with deep learning methods is also presented.