In this paper we present a distributed solution to the problem of making a team of non-holonomic robots achieve a common heading (the attitude consensus problem) using vision sensors with a limited field of view. Cameras with a constrained field of view perceive less of the scene than omnidirectional sensors, which makes the consensus problem harder because the robots cannot always observe one another. By applying structure from motion to pairs of images, the robots estimate the difference between their headings from common observations of the environment, without needing to observe each other directly. In this way, the robots reach consensus in their headings while observing the environment instead of each other. The contribution of the paper is a new controller that uses the epipoles computed from pairs of images to estimate the misalignment between neighboring robots. In addition, the control is robust to changes in the topology of the network and does not require the camera calibration to be known in order to reach the desired configuration. To the best of our knowledge, this is the first time that epipoles are used in multi-robot consensus, highlighting the value of their properties.
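To make the epipole-based idea concrete, the sketch below shows, under simplifying assumptions, how a relative bearing could be recovered from an epipolar constraint. It assumes a rank-2 fundamental (here, essential) matrix relating two views in normalized image coordinates (x right, z forward); the right epipole is its null vector, which for a rank-2 matrix is proportional to the cross product of two independent rows. The matrix construction, axis convention, and function names are illustrative, not taken from the paper, which works with uncalibrated cameras.

```python
import math

def cross(a, b):
    """Cross product of two 3-vectors (tuples)."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def right_epipole(F):
    """Right epipole e of a rank-2 matrix F (F @ e = 0).

    For a rank-2 matrix the null vector is proportional to the cross
    product of any two independent rows; we take the candidate with the
    largest norm for numerical robustness. The epipole is only defined
    up to sign, which a real controller would have to disambiguate."""
    candidates = [cross(F[0], F[1]), cross(F[1], F[2]), cross(F[0], F[2])]
    e = max(candidates, key=lambda v: sum(c*c for c in v))
    n = math.sqrt(sum(c*c for c in e))
    return tuple(c / n for c in e)

def bearing_from_epipole(e):
    """Horizontal bearing (rad) toward the other camera, assuming
    normalized coordinates -- a simplification of the paper's
    uncalibrated setting."""
    return math.atan2(e[0], e[2])

def skew(t):
    """Skew-symmetric matrix [t]_x, so that skew(t) @ v == t x v."""
    return [(0.0, -t[2], t[1]),
            (t[2], 0.0, -t[0]),
            (-t[1], t[0], 0.0)]

# Toy example: two cameras with identity relative rotation and
# translation t, so the essential matrix is E = [t]_x and the right
# epipole is parallel to t.
t = (1.0, 0.0, 0.5)
E = skew(t)
e = right_epipole(E)
theta = bearing_from_epipole(e)   # relative bearing of the other camera
```

In a consensus controller, each robot would feed such bearing estimates from its neighbors into its angular-velocity command; the sketch only covers the geometric step of extracting a direction from the epipolar constraint.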