Visual perception has a problem: it doesn't come with a ruler.
Visual information is angular, and the main consequence of this is that the apparent size of an object varies with its distance: you can't tell how big something actually is without additional information. For example, the Sun and the Moon are radically different physical sizes, but because of the huge difference in their distances they are almost exactly the same angular size; this is why solar eclipses work. (Iain Banks suggested in 'Transition' that solar eclipses on Earth would be a great time to look for aliens among us, because the match is such a staggering coincidence that eclipses would make for great tourism.)
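The arithmetic behind that coincidence is easy to check. A minimal sketch, using rounded mean astronomical figures (the exact angular sizes wobble a little as the orbits are not circular):

```python
import math

def angular_size_deg(diameter_km, distance_km):
    # Visual angle subtended by an object of a given physical size:
    # theta = 2 * arctan(diameter / (2 * distance))
    return math.degrees(2 * math.atan(diameter_km / (2 * distance_km)))

# Rounded mean diameters and distances
sun = angular_size_deg(1_391_000, 149_600_000)
moon = angular_size_deg(3_474, 384_400)

print(f"Sun:  {sun:.2f} degrees")   # ~0.53 degrees
print(f"Moon: {moon:.2f} degrees")  # ~0.52 degrees
```

Two objects whose diameters differ by a factor of about 400 end up within a few hundredths of a degree of each other, because their distances differ by almost exactly the same factor.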
This lack of absolute size information is a problem, because we need to know how big things actually are in order to interact with them. When I reach to grasp my coffee cup, I need to open my hand wide enough that I don't hit the cup and knock it over. And I can, in fact, do this: as my reach unfolds over time, my hand opens to a maximum aperture wide enough to go around the object I'm reaching for (e.g. Mon-Williams & Bingham, 2011). The system therefore does have access to some additional information it can use to convert angular size into a metric size. This process is called calibration, and people who study calibration are interested in what that extra information is.
The ecological approach to calibration (see anything on the topic by Geoff Bingham) does not, of course, treat this as a process of 'detect angular size, detect distance, combine and scale'. Instead, calibration uses one source of information to tune up the perception of another, so that the latter is detected directly in the calibrated unit. The unit chosen will be task specific, because calibration needs information and tasks only offer information about themselves. A commonly discussed unit (used to scale the perception of long distances) is eye height: there is information for it in the optic array, and it provides a fairly functional ruler for measuring distances out beyond reach space.
Linkenauger et al. (2014) take a slightly different approach. They suggest that what the system needs is a ruler it carries with it and that remains constant (not just constantly specified, as eye height is). They present some evidence that the dominant hand is perceived as a fairly constant length even under magnification, and suggest that this length is stored and used by the system to calibrate size perception in reach space. There are, let's say, a few problems with this paper.