Tuesday, 14 October 2014

Your hand is not a perceptual ruler

Visual perception has a problem; it doesn't come with a ruler. 

Visual information is angular, and the main consequence of this is that the apparent size of something varies with how far away it is. This means you can't tell how big something actually is without more information. For example, the Sun and the Moon are radically different actual sizes, but because of the huge difference in how far away they are, they are almost exactly the same angular size; this is why solar eclipses work. (Iain Banks suggested in 'Transition' that solar eclipses on Earth would be a great time to look for aliens among us, because it's a staggering coincidence that they work out and they would make for great tourism :) 

This lack of absolute size information is a problem because we need to know how big things actually are in order to interact with them. When I reach to grasp my coffee cup, I need to open my hand up enough so that I don't hit it and knock it over. Now, I can actually do this; as my reach unfolds over time, my hand opens to a maximum aperture that's wide enough to go round the object I'm reaching for (e.g. Mon-Williams & Bingham, 2011). The system therefore does have access to some additional information it can use to convert the angular size to a metric size; this process is called calibration and people who study calibration are interested in what that extra information is.

The ecological approach to calibration (see anything on the topic by Geoff Bingham) doesn't treat this as a process of 'detect angular size, detect distance, combine and scale', of course. Calibration uses some information to tune up the perception of other information so that the latter is detected in the calibrated unit. The unit chosen will be task specific because calibration needs information and tasks only offer information about themselves. A commonly discussed unit (used for scaling the perception of long distances) is eye height, because there is information in the optic array for it and it provides a fairly functional ruler for measuring distances out beyond reach space. 
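The eye-height logic can be made concrete with a little ground-plane geometry. This is just a toy sketch of the standard geometry (function names are mine, not anything from the calibration literature): a target whose base sits at some declination below the horizon lies at 1/tan(declination) eye heights away, and the horizon-ratio relation gives an on-ground object's height in eye-height units.

```python
import math

def distance_in_eye_heights(declination_deg):
    """Distance to a point on the ground plane, in eye-height units.

    declination_deg: angle below the horizon to the target's base.
    Geometry: tan(declination) = eye_height / distance.
    """
    return 1.0 / math.tan(math.radians(declination_deg))

def height_in_eye_heights(decl_top_deg, decl_base_deg):
    """Height of an object standing on the ground, in eye-height units,
    via the horizon-ratio relation (for an object shorter than eye
    height, whose top and base are both seen below the horizon)."""
    return 1.0 - math.tan(math.radians(decl_top_deg)) / math.tan(math.radians(decl_base_deg))

# A target whose base sits 45 degrees below the horizon is one eye
# height away; an object whose top grazes the horizon (0 degrees)
# is one eye height tall.
d = distance_in_eye_heights(45.0)
h = height_in_eye_heights(0.0, 45.0)
```

The point of the sketch is just that nothing here requires knowing eye height in metres: both quantities come out already scaled in eye-height units, which is what makes eye height a usable ruler.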

Linkenauger et al (2014) take a slightly different approach. They suggest that what the system needs is something it carries with it and that remains constant (not just constantly specified, as with eye height). They present some evidence that the dominant hand is perceived to be a fairly constant length when magnified, and suggest that this length is stored and used by the system to calibrate size perception in reach space. There are, let's say, a few problems with this paper. 

Wednesday, 8 October 2014

Limits on action priming by pictures of objects

If I show you a picture of an object with a handle and ask you to make a judgment about that object (say, whether it's right side up or not) you will be faster to respond if you use the hand closest to the handle. This is called action priming (Tucker & Ellis, 1998) and there is now a wide literature using this basic setup to investigate how the perception of affordances prepares the action system to do one thing rather than another.
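The dependent measure in these studies is a simple reaction-time congruency effect. A minimal sketch (function name and the example numbers are mine, not Tucker & Ellis's):

```python
def action_priming_effect(rt_congruent_ms, rt_incongruent_ms):
    """Mean RT difference in ms: positive values mean people were
    faster when the responding hand matched the handle's side."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(rt_incongruent_ms) - mean(rt_congruent_ms)

# Hypothetical trials: congruent responses a few tens of ms faster.
effect = action_priming_effect([500, 510, 495], [540, 550, 545])
```

A reliable positive effect is what gets interpreted as the handle having prepared the matching hand to act.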

There is, of course, a problem here. These studies all use pictures of objects, and pictures are not the same as the real thing. These studies therefore don't tell us anything about how the perceived affordances of objects make us ready to act on those objects. This is only a problem because it's what these researchers think they are studying, which means they don't pay attention to the nature of their stimuli. The result is a mixed bag of findings.

For example, a recent article (Yu, Abrams & Zacks, 2014) set out to use this task to ask whether action priming was affected by where the hand had to go to make a response. Most tasks involve a simple button press on a keyboard, so they were interested to see whether asking people to respond using buttons on the monitor might enhance priming. The logic was that the spatial location of the response would be an even stronger match or mismatch to the location of the object's handle. However, they accidentally discovered that a) action priming is not reliably replicable and b) the factor that seems to determine whether it shows up is a confounding task demand. This again highlights just what a problem this experimental setup is.

Tuesday, 23 September 2014

Visual illusions and direct perception

A while back I reviewed a bunch of papers by Rob Withagen, who is currently arguing that while perception is not typically based in specifying variables, it can and should still be ecological in nature. While we are also developing an account of information that goes beyond specification, I still have some reservations about the details of Rob's work. That said, I do think there is a lot of overlap and I'm still interested in figuring out what that is.

Rob's latest paper (de Wit, van der Kamp & Withagen, 2015) is about visual illusions, and how to talk about them ecologically. There is a lot of very useful stuff in here, not least the review of the many things Gibson said about illusions. After this review, the paper tries to put all of this into Rob's and Tony Chemero's evolutionary framework and uses this to formalise and tidy up what Gibson was up to. I'm adding this paper to the set I've covered from Withagen as I continue to think through these issues.

Wednesday, 30 July 2014

Rhythmic constraints on stress timing in English

What kind of embodied constraints affect the production of speech? Can we say anything we like when we like, or are there constraints in play that make some things easier than others? This is the question asked in Cummins & Port (1998) which we recently read in lab meeting (with our PhD student Agnes).

Cummins and Port asked participants to produce sentences over and over and examined when during the cycle a certain stress beat occurred. They set it up so that a beep asked for the beat at points throughout the cycle, but showed that people could actually only place the beat reliably in 2 or 3 places in the cycle. The big picture result is that speech production is shaped, in part, by the underlying dynamics of production, described in terms of the rhythms the system is set up to produce.
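The key measure here is the relative phase of the stress beat within the repetition cycle. A minimal sketch of that measure (function name is mine; Cummins & Port's actual analysis is considerably richer than this):

```python
def beat_phase(beat_time_s, cycle_start_s, cycle_period_s):
    """Where in the repetition cycle (0 to 1) a stress beat falls."""
    return ((beat_time_s - cycle_start_s) % cycle_period_s) / cycle_period_s

# Pooling these phases across trials and histogramming them is what
# reveals the result: instead of a uniform spread tracking wherever
# the beep asked for the beat, phases pile up at a few values
# (e.g. 1/2 or 1/3 of the cycle).
phase = beat_phase(1.5, 1.0, 2.0)  # a beat 0.5 s into a 2 s cycle
```

Clustering at a handful of phases, rather than wherever the metronome demanded, is the signature of an underlying rhythmic constraint.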

The nice detail here comes from the theoretical setup and analysis that drives this study. Cummins and Port are directly inspired and guided by work in coordination dynamics. Agnes is interested in this work because she's looking at ways to investigate language and speech using the tools of dynamical systems and embodied cognition - remember, our big pitch is that language is special but not magical and we should be able to study it the way we study, say, rhythmic movement coordination.

Tuesday, 22 July 2014

Embodying Culture: My ongoing conversation with Soliman & Glenberg

Would a formal reply make me this guy?
I've been exchanging views with Art Glenberg and his colleagues about a paper he published recently in Frontiers. I reviewed it, had reservations but eventually let it through, then published my concerns as a commentary on the original paper. Soliman and Glenberg (2014; S&G) then replied to my reply, which I didn't know about until I noticed the commentary had a citation in my Google Scholar profile.

I could simply publish a reply to their reply, but to be honest I'm not sure it's worth it; it feels a little too much like arguing on the internet. I'll link to this in the comments section of the Frontiers page, however, and if people think it's worth the DOI then I'll write this reply up as a formal submission. I'd be interested to hear from you all on this.

The short version of my reply is that in the process of dodging my criticism they concede it applies to them, and they swerve into a literature that doesn't help. I do think they've applied some serious and valuable consideration to the details of their proposal, though, so I think this has been a useful process.

Tuesday, 24 June 2014

A Gibsonian analysis of linguistic information

This post is based on a talk I just gave at the Finding Common Ground Conference at the University of Connecticut. Please excuse the Power Pointy nature of some sections! You might need to Ctrl+ to see some of the images clearly.  I have made some changes from the original talk content on the basis of very useful feedback I received from other conference attendees.

What is the place of language in ecological psychology? Is language a type of direct perception? Is language comprehension direct perception? Does language have affordances?

In trying to answer these questions I discovered that some things we think of as being perceptual have a lot in common with the conventionality of language and that some language-related behaviours look a lot like perception (as typically construed). I end up suggesting that we move away from talking about 'perception' and 'language' as different types of entities and instead focus on information / behaviour relations in specific tasks.

Wednesday, 4 June 2014

Affordances are not probabilistic functions

The journal Ecological Psychology is hosting a special issue with papers from a Festschrift for Herb Pick. Karen Adolph and John Franchak have a paper that caught my eye about treating affordances as probabilistic functions, effectively applying standard psychophysical techniques to the study of affordance perception. 

The idea is this: affordance research typically treats affordances as all-or-none, categorical properties. You can either reach that object or you can't; you can either pass through that aperture without turning or you can't. You then measure a bunch of people doing the task as you alter some key parameter (e.g. the distance to the target, or the width of the aperture) and find the critical point, the value of some body-scaled measurement of the parameter where behaviour switches from success to failure. For instance, you might express the aperture width in terms of the shoulder width and look for the common value of this ratio where people switch their behaviour from turning to not turning.
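The critical-point logic can be sketched as follows (a toy estimator of my own, not Adolph and Franchak's method): collapse trials into the proportion of turning responses at each body-scaled ratio, then find where that proportion crosses 50%.

```python
def critical_point(ratios, p_turn):
    """Estimate the body-scaled ratio where behaviour switches.

    ratios: sorted aperture-width / shoulder-width ratios tested.
    p_turn: observed proportion of 'turned' responses at each ratio
            (falling towards 0 as the aperture gets relatively wider).
    Returns the linearly interpolated 50% crossing, or None if the
    proportions never cross 0.5.
    """
    pairs = list(zip(ratios, p_turn))
    for (r0, p0), (r1, p1) in zip(pairs, pairs[1:]):
        if (p0 - 0.5) * (p1 - 0.5) <= 0 and p0 != p1:
            return r0 + (0.5 - p0) * (r1 - r0) / (p1 - p0)
    return None

# Hypothetical data: in this made-up sample, people switch from
# turning to walking straight through somewhere around a ratio of 1.15.
cp = critical_point([1.0, 1.1, 1.2, 1.3], [1.0, 0.8, 0.2, 0.0])
```

The probabilistic-function move in the paper is essentially to stop treating this crossing as the single answer and instead model the whole sloped region, e.g. by fitting a psychometric curve to these proportions.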