Friday, 27 February 2015

Are we Information Processors? (A brief note)

I thought about it a lot and I kept thinking: OK, he’s right, I guess, the information is in the light, it has to be there, because where else are you going to get it. It has got to be there and if it’s there, there’s a sense in which you don’t have to process it at least not in the way that I used to say “process.” But if he’s right, what am I going to do about cognitive psychology? How can I reconcile cognitive psychology, as I knew it, with this theory of Jimmy Gibson’s?
Ulric Neisser, from Szokolszky, 2013

Are we information processors? No. At least, not if Gibson was right.
Information is clearly important, though, because we talk about it a LOT according to this Wordle.
For James Gibson, information is external to the observer. Information is structure in energy arrays (e.g. the optic array for light) that is specific to the object or event in the world that caused the structure. This structure becomes information when we use it to coordinate and control our behaviour.

This information is not transmitted. At any given possible point of observation, there is a uniquely structured optic array that an observer can interact with by going to (or more likely through) that point of observation. That structure is there as soon as the lights have come on and the light is done filling up the space.

This information is also not processed, because it does not 'get into the system'. The nervous system doesn't take information on board; it resonates to that information, and its dynamical behaviour is altered by detecting that information.

What about all the steps that have to happen once the information is detected? Doesn't the information have to be transformed into a behaviour? No. Behaviour simply is the activity of the kind of embodied system that we are, in the presence of that particular information. We can alter the kind of embodied system we are via learning, but all the way through learning, the behaviour you exhibit at any given moment simply is that activity.

Therefore, in the radical embodied, ecological approach, it makes no sense to say that cognition involves information processing. This is because 'information' in REC refers to Gibson-information, not Shannon-information. Shannon-information is not something that exists. It is an abstract description of how to reduce uncertainty between a sender and a receiver. It's an amazing idea; it's the heart of the digital revolution we live in and it's a powerful analysis tool (read James Gleick's great book, The Information, for the history). It's just not describing what biological organisms are interacting with.
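
To make the contrast concrete, here is a minimal sketch (in Python, with made-up message probabilities) of what Shannon's measure actually quantifies: the average uncertainty of a sender's message ensemble, in bits.

    import math

    def shannon_entropy(probabilities):
        # Average uncertainty of a message source, in bits.
        return -sum(p * math.log2(p) for p in probabilities if p > 0)

    # A hypothetical four-message source; the probabilities are invented for illustration.
    print(shannon_entropy([0.5, 0.25, 0.125, 0.125]))  # 1.75 bits per message

Nothing in that calculation refers to structure in an energy array; it's a statistic defined over a sender's messages, which is exactly why it's the wrong tool for describing what organisms detect.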

Is this merely a semantic issue? Are we cheating by just defining away the problem? No. It's about precision in terms. Information means something very specific in the REC framework; that meaning is not the same as in the information processing framework, and this difference has consequences. Following up on those consequences is what radical embodied cognitive (neuro)science is up to. We will either be right or wrong, and the data will tell us which, but only if we stop making this basic confusion.

Thanks to Greg Hickok (Twitter, blog) for arguing with me a lot on Twitter about this, which helped me clarify a few things, and to everyone else who got into it too. You guys are a big help with your obstinacy and your refusal to take what I say at face value :) I think this post might be the first of a series of 'Brief Notes' where I just try to lay out one thing as clearly as I can. In true Gibson style, I reserve the right to keep critiquing and modifying the details as new evidence comes in!

References
Szokolszky, A. (2013). Interview with Ulric Neisser. Ecological Psychology, 25, 182–199.

Tuesday, 20 January 2015

It's Time to Relabel the Brain

Another day, another study finds that 'visual' cortex is activated by something other than information from the eyes:
A research team from the Hebrew University of Jerusalem recently demonstrated that the same part of the visual cortex activated in sighted individuals when reading is also activated in blind patients who use sounds to “read”. The specific area of the brain in question is a patch of left ventral visual cortex located lateral to the mid-portion of the left fusiform gyrus, referred to as the “visual word form area” (VWFA). Significant prior research has shown the VWFA to be specialized for the visual representation of letters, in addition to demonstrating a selective preference for letters over other visual stimuli. The Israeli-based research team showed that eight subjects, blind from birth, specifically and selectively activated the VWFA during the processing of letter “soundscapes” using a visual-to-auditory sensory substitution device (SSD) (see www.seeingwithsound.com for description of device).
There's lots of research like this. People are excited by mirror neurons because they are cells in motor cortex that are activated by both motor activity and perception of that motor activity. It's incredible, people cry: cells in a part of the brain that we decided 30 years ago does one thing also seem to do another thing. How could this be??

I would like to propose a simple hypothesis to explain these incredible results: we have been labeling the brain incorrectly for a long time. The data telling us this have been around for a long time too and continue to roll in, but for some reason we still think the old labels are important enough to hold onto. It's time to let go.

I'm going to go out on a limb and say it's a bit more complicated than this.

Thursday, 15 January 2015

The Size-Weight Illusion Induced Through Human Echolocation

Echolocation is the ability to use sound to perceive the spatial layout of your surroundings (the size and shape and distance to objects, etc). Lots of animals use it, but humans can too, with training. Some blind people have taught themselves to echolocate using self-generated sounds (e.g. clicks of the tongue or fingers) and the result can be amazing (I show this video of Daniel Kish in class sometimes; see the website for the World Access for the Blind group too).
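
The distance part of this is simple time-of-flight geometry. Here's a toy calculation (my numbers, not taken from any echolocation study) of how far away a reflecting surface is, given the delay between a click and its echo:

    SPEED_OF_SOUND = 343.0  # metres per second in air at about 20 degrees C

    def echo_distance(delay_seconds):
        # The click travels out and back, so halve the round trip.
        return SPEED_OF_SOUND * delay_seconds / 2.0

    # A 10 ms click-to-echo delay puts the surface about 1.7 m away.
    print(echo_distance(0.010))  # 1.715

The interesting perceptual work is of course in the richer structure of the echoes (spectra, binaural differences, etc), but this is the basic physics that puts distant surfaces within reach of the ear.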

In humans, this is considered an example of sensory substitution; using one modality to do what you would normally do with another. This ability is interesting to people because the world is full of people with damaged sensory systems (blind people, deaf people, etc) and being able to replace, say, vision with sound is one way to deal with the loss. Kish in particular is a strong advocate of echolocation over white canes for blind people because canes have a limited range. Unlike vision and sound, they can only tell you about what they are in physical contact with, and not what's 'over there'. 'Over there' is a very important place for an organism because it's where almost all of the world is, and if you can perceive it you give yourself more opportunities for activity and more time to make that activity work out. This is why Kish can ride a bike.

A recent paper (Buckingham, Milne, Byrne & Goodale, 2014; Gavin is on Twitter too) looked at whether information about object size gained via echolocation can create a size-weight illusion (SWI). I thought this was kind of a fun thing to do and so we read and critiqued this paper for lab meeting.

Wednesday, 22 October 2014

Do people really not know what running looks like?

Faster, higher, stronger - wobbly!
When we run, our arms and legs swing in an alternating rhythm. Your left arm swings back as your left leg swings forward, same with the right. This contralateral rhythm is important for balance; the arms and legs counterbalance each other and help reduce rotation of the torso created by swinging the limbs. 
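
To see why the contralateral phasing helps, here's a toy model (entirely my simplification: equal-amplitude sinusoids standing in for the arms' and legs' angular momentum about the body's vertical axis):

    import math

    def peak_net_momentum(arm_leg_phase_offset, samples=1000):
        # Peak net angular momentum (arbitrary units) about the body's
        # vertical axis when arm and leg swing differ by the given phase.
        peak = 0.0
        for i in range(samples):
            t = 2 * math.pi * i / samples
            legs = math.sin(t)                         # contribution of the legs
            arms = math.sin(t + arm_leg_phase_offset)  # contribution of the arms
            peak = max(peak, abs(legs + arms))
        return peak

    print(peak_net_momentum(math.pi))  # contralateral: ~0, the swings cancel
    print(peak_net_momentum(0.0))      # homolateral: ~2, the swings add up

With a half-cycle offset the two contributions cancel at every instant; in phase, they add, and the torso has to absorb that rotation.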

It turns out, however, that people don't really know this and they draw running incorrectly surprisingly often. Specifically, they often depict people running in a homolateral gait (with arms and legs on the same side swinging in the same direction at the same time; see the Olympics poster). I commented on a piece by Rose Eveleth at the Atlantic about a paper (Meltzoff, 2014) that identifies this surprising confusion in art throughout history and all over the world, and that then reports some simple studies showing that people really don't know what running is supposed to look like.


Rose covered the topic well; here I want to critique the paper a little because it's a nice example of some flawed cognitive-psychology-style thinking. That said, I did like this paper. It's that rare thing: a paper by a single author who just happened to notice something, think about it a little, and then report what he found in case anyone else thought it was cool too. This is a bit old school and I approve entirely.

Tuesday, 14 October 2014

Your hand is not a perceptual ruler

Visual perception has a problem; it doesn't come with a ruler. 

Visual information is angular, and the main consequence of this is that the apparent size of something varies with how far away it is. This means you can't tell how big something actually is without more information. For example, the Sun and the Moon are radically different actual sizes, but because of the huge difference in how far away they are, they are almost exactly the same angular size; this is why solar eclipses work. (Iain Banks suggested in 'Transition' that solar eclipses on Earth would be a great time to look for aliens among us, because it's a staggering coincidence that they work out and they would make for great tourism :) 
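
You can check the Sun/Moon coincidence with a little trigonometry; this sketch uses rounded mean diameters and distances:

    import math

    def angular_size_deg(diameter_km, distance_km):
        # Visual angle subtended by an object, in degrees.
        return math.degrees(2 * math.atan(diameter_km / (2 * distance_km)))

    print(angular_size_deg(1_391_000, 149_600_000))  # Sun:  ~0.53 degrees
    print(angular_size_deg(3_474, 384_400))          # Moon: ~0.52 degrees

Two objects of wildly different physical size, nearly identical visual angles: angular size alone cannot tell you which is which.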

This lack of absolute size information is a problem because we need to know how big things actually are in order to interact with them. When I reach to grasp my coffee cup, I need to open my hand up enough so that I don't hit it and knock it over. Now, I can actually do this; as my reach unfolds over time, my hand opens to a maximum aperture that's wide enough to go round the object I'm reaching for (e.g. Mon-Williams & Bingham, 2011). The system therefore does have access to some additional information it can use to convert the angular size to a metric size; this process is called calibration and people who study calibration are interested in what that extra information is.


The ecological approach to calibration (see anything on the topic by Geoff Bingham) doesn't treat this as a process of 'detect angular size, detect distance, combine and scale', of course. Calibration uses some information to tune up the perception of other information so that the latter is detected in the calibrated unit. The unit chosen will be task specific because calibration needs information and tasks only offer information about themselves. A commonly discussed unit (used for scaling the perception of long distances) is eye height, because there is information in the optic array for it and it provides a fairly functional ruler for measuring distances out beyond reach space. 
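
For a point on a flat ground plane, the eye-height 'ruler' is simple trigonometry: distance in eye-height units is 1 over the tangent of the angle of declination of that point below the horizon. A minimal sketch (the flat ground plane and the 1.6 m eye height are my assumptions for illustration):

    import math

    def distance_in_eye_heights(declination_deg):
        # Distance along the ground to a point, in eye-height units,
        # given the angle of that point below the horizon.
        return 1.0 / math.tan(math.radians(declination_deg))

    d = distance_in_eye_heights(5.0)
    print(d)        # ~11.4 eye heights
    print(d * 1.6)  # ~18.3 m, assuming a 1.6 m eye height

The point to notice is that the unit here is carried in the optic array itself; no stored metric standard is required.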


Linkenauger et al (2014) take a slightly different approach. They suggest that what the system needs is something it carries with it and that remains constant (not just constantly specified, as with eye height). They present some evidence that the dominant hand is perceived to be a fairly constant length when magnified, and suggest that this length is stored and used by the system to calibrate size perception in reach space. There are, let's say, a few problems with this paper. 



Wednesday, 8 October 2014

Limits on action priming by pictures of objects

If I show you a picture of an object with a handle and ask you to make a judgment about that object (say, whether it's right side up or not), you will be faster to respond if you use the hand closest to the handle. This is called action priming (Tucker & Ellis, 1998) and there is now a wide literature using this basic setup to investigate how the perception of affordances prepares the action system to do one thing rather than another.
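
For concreteness, the basic analysis in this paradigm is just a congruency contrast on reaction times. A sketch with invented data (the trial structure and numbers are mine, not Tucker & Ellis's):

    # Each trial: (handle side, responding hand, reaction time in ms); numbers are invented.
    trials = [
        ("left", "left", 480), ("left", "right", 510),
        ("right", "right", 470), ("right", "left", 505),
        ("left", "left", 490), ("right", "right", 465),
    ]

    def mean_rt(congruent):
        rts = [rt for side, hand, rt in trials if (side == hand) == congruent]
        return sum(rts) / len(rts)

    # Positive effect = faster responses when the hand matches the handle side.
    print(mean_rt(False) - mean_rt(True))  # ~31 ms priming effect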

There is, of course, a problem here. These studies all use pictures of objects, and these are not the same as the real thing. These studies therefore don't tell us anything about how the perceived affordances of objects make us ready to act on those objects. This is only a problem because it's what these researchers think they are studying, which means they don't pay attention to the nature of their stimuli. The result is a mixed bag of findings.

For example, a recent article (Yu, Abrams & Zacks, 2014) set out to use this task to ask whether action priming was affected by where the hand had to go to make a response. Most tasks involve a simple button press on a keyboard, so they were interested to see whether asking people to respond using buttons on the monitor might enhance priming. The logic was that the spatial location of the response would be an even stronger match or mismatch to the location of the object's handle. However, they accidentally discovered that a) action priming is not reliably replicable and b) the factor that seems to determine whether it shows up is a confounding task demand. This again highlights just what a problem this experimental setup is.

Tuesday, 23 September 2014

Visual illusions and direct perception

A while back I reviewed a bunch of papers by Rob Withagen, who is currently arguing that while perception is not typically based in specifying variables, it can and should still be ecological in nature. While we are also developing an account of information that goes beyond specification, I still have some reservations about the details of Rob's work. That said, I do think there is a lot of overlap and I'm still interested in figuring out what that is.

Rob's latest paper (de Wit, van der Kamp & Withagen, 2015) is about visual illusions, and how to talk about them ecologically. There is a lot of very useful stuff in here, not least the review of the many things Gibson said about illusions. After this review, the paper tries to put all of this into Rob's and Tony Chemero's evolutionary framework and uses this to formalise and tidy up what Gibson was up to. I'm adding this paper to the set I've covered from Withagen as I continue to think through these issues.