Friday, 2 December 2016

Scarantino (2003) “Affordances Explained”

Turvey, Shaw, Reed and Mace (1981) laid out an ontology of affordances: a formal account of the kind of things they are. They described them as dispositions, properties of the world constituted by sets of anchoring properties, that offer an action to an organism whose own dispositions can complement the affordance. Making affordances dispositions makes them real, makes them pre-date the interaction with the organism, and accounts for their odd 'not doing anything until interacted with' kind of existence. I am firmly Team Affordances Are Dispositions and I have yet to meet an alternative account that supports a science of affordances or even allows them to be perceived.

The literature on dispositions was somewhat limited in 1981, but in 1998 Stephen Mumford published the definitive work on what they are and how they work. I always hoped someone with the necessary philosophy chops would use this work to strengthen the foundations of affordances (I even almost talked a philosopher into doing it!), but it turns out I'm covered. Andrea Scarantino (2003) published 'Affordances Explained' and did much of the necessary work, and there are some very useful things in the analysis. This post is me working through this material, translating the technical philosophy into words I can understand better.

Thursday, 17 November 2016

Free Energy: How the F*ck Does That Work, Ecologically?

Karl Friston has spent a lot of time recently developing the free energy principle (FEP) as a framework to explain life, behaviour and cognition; you know, biology. It's become the cool kid on the block in record time.

Crudely, the basic idea of the FEP is that living organisms need to operate within a range for a given process, or else they will be malfunctioning to some extent and might suffer injury or death. Being within the relevant range across all your processes means you are alive and doing well, and so for an organism that has made it this far in evolution those states must be highly probable. Being outside those ranges is therefore less probable, and so if you find yourself outside a range you will be surprised. Your job as a self-sustaining organism can therefore be described as 'work to minimise surprise'.

There is a problem with this formalisation, though. The information-theoretic quantity that formalises 'surprise' is not something any organism can access, so you can't work to control it directly. Luckily, there is another formal quantity, free energy, that is related to surprise and is always at least as high as surprise. Free energy is therefore an upper bound on surprise, and minimising that upper bound can reduce surprise as well.
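A toy calculation makes the bound concrete. The sketch below (all the numbers are invented for illustration) builds a two-state generative model, computes surprise as -log p(o), and checks that variational free energy computed under any approximate posterior q sits at or above it, touching it only when q is the true posterior:

```python
import numpy as np

# Hypothetical two-state generative model: a prior over hidden states
# and a likelihood for a binary observation. Numbers are arbitrary.
p_s = np.array([0.7, 0.3])              # p(s)
p_o_given_s = np.array([0.9, 0.2])      # p(o=1 | s) for each state

o = 1                                    # the observation received
lik = p_o_given_s if o == 1 else 1 - p_o_given_s

# Surprise (surprisal): -log p(o). Computing it directly requires
# marginalising over all hidden states, which the organism can't do.
p_o = np.sum(lik * p_s)
surprise = -np.log(p_o)

def free_energy(q):
    """Variational free energy for an approximate posterior q(s)."""
    joint = lik * p_s                    # p(o, s)
    return np.sum(q * (np.log(q) - np.log(joint)))

# Any q gives F >= surprise; equality holds only at the true posterior.
q_bad = np.array([0.5, 0.5])
q_true = lik * p_s / p_o

assert free_energy(q_bad) >= surprise
assert np.isclose(free_energy(q_true), surprise)
```

Minimising F with respect to q therefore squeezes the bound down toward the surprise the organism cannot measure directly.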

All this is currently implemented in an inferential, Bayesian framework that aligns, at least on the surface, with modern representational cognitive science. Andy Clark thinks this is the future, and Jakob Hohwy has worked hard to nail this connection down so it won't move. If this is all right, and if the FEP succeeds, perhaps non-representational, non-inferential accounts like ours are going to lose.

A recent paper (Bruineberg, Kiverstein & Rietveld, 2016) tries to wedge the FEP and Bayesian psychology apart to allow room for an ecological/enactivist take on the FEP. To be honest, I found the paper a little underwhelming, but it did get me thinking, and two questions have emerged.

Before we worry about an ecological account of the FEP, we need to know 1) whether such a thing makes any sense and 2) whether it adds anything new to the proceedings. All comments welcome - these are genuine questions and if there are answers we would love to know.

Tuesday, 8 November 2016

The Field is Full, Just Not of Affordances - A Reply to Rietveld & Kiverstein

I recently posted about relational accounts of affordances and how one way to summarise my objections to them is that they cannot support mechanistic models of cognition. I came to this after reading Rietveld & Kiverstein's 'Landscape of Affordances' paper and chatting to them both at EWEP14. Eric and Julian have been kind enough to send through some detailed comments (beginning here and split over three comments due to character limits). This post is me replying to these comments as a way to get them somewhere a little more visible. I haven't gone point by point, I've just pulled out the key stuff I wanted to address; read their comments for the whole thing. I appreciate their willingness to get into this with me; their account is becoming wildly influential and their papers and feedback are helping me immensely as I work to articulate my concerns. 

To preview: my fundamental objection remains the same and as yet unanswered - while it is indeed possible to identify relations between 'forms of life' and 'socio-cultural environments' there is, as yet, no evidence that these relations create perceptual information. If they do not create information, they are not ecologically perceived, and they cannot figure in the online coordination and control of behaviour. And if they can't do that, then they sure as hell aren't affordances.

So my challenge to Rietveld & Kiverstein (R&K) is this - work up an example of an affordance that fits their definition and not mine, and that creates information. Then we can test whether people act as if they perceive that affordance, and try perturbing the information to confirm how they are perceiving it. Then, and only then, do we have a ball game.

Friday, 28 October 2016

Nonlinear Covariation Analysis (Müller & Sternad, 2003)

I have been working my way through some analyses that fall under the motor abundance hypothesis (Latash, 2012) - the idea that motor control does not work to produce a single, optimal movement trajectory, but rather works to produce a particular task goal, or outcome. Motor control preserves function, not structure; it exhibits degeneracy. So far I have looked at uncontrolled manifold analysis here and here, and stochastic optimal control theory here.

This post will review nonlinear covariation analysis developed by Müller & Sternad (2003). This purports to address several issues with UCM.

Thursday, 13 October 2016

Optimal Feedback Control and Its Relation to Uncontrolled Manifold Analysis

Motor control theories must propose solutions to the degrees of freedom problem, which is the fact that the movement system has more ways to move than are ever required to perform a given task. This creates a problem for action selection (which of the many ways to do something do you choose?) and a problem for action control (how do you create stable, repeatable movements using such a high dimensional system?).

Different theories have different hypotheses about what the system explicitly controls or works to achieve, and what is left to emerge (i.e. happen reliably without being explicitly specified in the control architecture). These hypotheses are typically about controlling trajectory features such as jerk. Are you working to make movements smooth, or does smoothness pop out as a side effect of controlling something else? This approach solves the degrees of freedom control problem by simply requiring the system to implement a specific trajectory that satisfies some constraint on the feature being controlled (e.g. by minimising jerk; Flash & Hogan, 1985). Such models internally replace the many solutions afforded by the environment with a single desired trajectory.
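The minimum-jerk account has a well-known closed-form solution for point-to-point movements, which makes the 'one desired trajectory' idea easy to illustrate. A minimal sketch (the movement amplitude and duration here are arbitrary):

```python
import numpy as np

def minimum_jerk(x0, xf, T, n=101):
    """Closed-form minimum-jerk trajectory (Flash & Hogan, 1985).

    Returns positions at n evenly spaced times from 0 to T for a
    point-to-point movement from x0 to xf.
    """
    tau = np.linspace(0.0, 1.0, n)       # normalised time t / T
    return x0 + (xf - x0) * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)

x = minimum_jerk(0.0, 0.3, 0.5)          # e.g. a 30 cm reach in 500 ms
v = np.gradient(x, 0.5 / 100)            # numerical velocity (dt = T / (n-1))

# The trajectory starts and ends at rest, with a single bell-shaped
# velocity peak near the midpoint - the classic smooth reach profile.
assert np.isclose(x[0], 0.0) and np.isclose(x[-1], 0.3)
assert 45 <= np.argmax(v) <= 55
```

Whatever the start, end, and duration, the model always emits this one stereotyped trajectory shape; that is exactly the 'single desired trajectory' commitment the abundance framework rejects.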

Todorov and Jordan (2002a, 2002b; thanks to Andrew Pruszynski for the tip!) propose that the system is not optimising performance, but the control architecture. This is kind of a cool way to frame the problem and it leads them to an analysis that is very similar in spirit to uncontrolled manifold analysis (UCM) and to the framework of motor abundance. In these papers, they apply the mathematics of stochastic optimal feedback control theory and propose that working to produce optimal control strategies is a general principle of motor control from which many common phenomena naturally emerge. They contrast this account (both theoretically and in simulations) to the more typical 'trajectory planning' models.

The reason this ends up in UCM territory is that, wherever possible, the optimal control strategy for solving motor coordination problems turns out to be a feedback control system in which control is deployed only as required. Specifically, you only work to control task-relevant variability: noise that is dragging you away from performing the task successfully. The net result is the UCM pattern; task-relevant variability (V-ORT) is clamped down by a feedback control process and task-irrelevant variability (V-UCM) is left alone. The solution to the degrees of freedom control problem is simply to deploy control strategically with respect to the task; no degrees of freedom must be 'frozen out', and the variability can be recruited at any point if it suddenly becomes useful - you can be flexible.
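The minimal-intervention idea can be sketched in a few lines. This is a toy illustration, not Todorov and Jordan's actual model; the noise level and feedback gain are invented. Two effectors share the task of summing to a target, feedback corrects only the task error, and variability piles up along the task-irrelevant direction:

```python
import numpy as np

rng = np.random.default_rng(0)
target = 10.0
f = np.array([5.0, 5.0])                  # two effectors sharing the task f1 + f2 = target
trials = []

for _ in range(2000):
    f = f + rng.normal(0.0, 0.5, size=2)  # motor noise perturbs both effectors
    error = np.sum(f) - target            # only the task variable is monitored
    f = f - 0.8 * error / 2               # partial feedback correction of the sum, split equally
    trials.append(f.copy())

trials = np.array(trials)
dev = trials - trials.mean(axis=0)
# Project deviations onto the task-irrelevant (UCM) direction f1 - f2
# and the task-relevant (orthogonal) direction f1 + f2.
ucm_dir = np.array([1.0, -1.0]) / np.sqrt(2)
ort_dir = np.array([1.0, 1.0]) / np.sqrt(2)
v_ucm = np.var(dev @ ucm_dir)
v_ort = np.var(dev @ ort_dir)
assert v_ucm > v_ort                      # uncorrected noise accumulates along the manifold
```

Because nothing ever corrects the f1 - f2 component, it random-walks freely while the sum stays pinned near the target: the V-UCM > V-ORT signature emerges from the control policy rather than being built in.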

What follows is me working through this paper and trying to figure out how exactly this relates to the conceptually similar UCM. If anyone knows the maths of these methods and can help with this, I would appreciate it!

Tuesday, 11 October 2016

What Can You Do With Uncontrolled Manifold Analysis?

There is generally more than one way to perform a task (the ‘bliss of motor abundance’) and so it’s possible for a movement to incur a little noise that doesn’t actually affect performance that much.
Uncontrolled manifold analysis (UCM) is a technique for analysing a high-dimensional movement data set with respect to the outcome or outcomes that count as successful behaviour in a task. It measures the variability in the data with respect to the outcome and decomposes it into variability that, if unchecked, would lead to an error and variability that still allows a successful movement.

In the analysis, variability that doesn't stop successful behaviour lives on a manifold: the subspace of values of the performance variable(s) that lead to success. When variability in one movement variable (e.g. a joint angle, or a force output) is offset by a compensation in one or more other variables that keeps you in that subspace, these variables form a synergy, and this means the variability does not have to be actively controlled. This subspace is therefore the uncontrolled manifold. Variability that takes you off the manifold takes you into a region of the parameter space that leads to failure, so it needs to be fixed. This is noise that needs control.

With practice, both kinds of variability tend to decrease. You produce particular versions of the movement more reliably (decreasing manifold variance, or V-UCM) and you get better at staying on the manifold (decreasing variance living in the subspace orthogonal to the UCM, or V-ORT). V-UCM decreases less, however (motor abundance) so the ratio between the two changes. Practice therefore makes you better at the movement, and better at allocating your control of the movement to the problematic variability. This helps address the degrees of freedom control problem.
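As a concrete sketch of the decomposition (not any published implementation; the toy data and numbers are invented), the variance split can be computed by projecting trial-to-trial deviations onto the null space of the task Jacobian:

```python
import numpy as np
from numpy.linalg import svd

def ucm_decompose(data, J):
    """Decompose trial-to-trial variance into V-UCM and V-ORT.

    data : (trials, n) elemental variables (e.g. joint angles, forces)
    J    : (d, n) Jacobian of the task variable(s) w.r.t. the elements
    Variances are normalised per degree of freedom, as is standard in
    UCM analyses.
    """
    n = data.shape[1]
    d = J.shape[0]
    dev = data - data.mean(axis=0)
    # Orthonormal basis for the null space of J (the UCM) via SVD.
    _, _, vt = svd(J)
    null_basis = vt[d:]                   # (n - d, n) rows span the UCM
    proj_ucm = dev @ null_basis.T
    v_ucm = np.sum(proj_ucm**2) / ((n - d) * len(data))
    v_ort = (np.sum(dev**2) - np.sum(proj_ucm**2)) / (d * len(data))
    return v_ucm, v_ort

# Toy data: three finger forces where two fingers trade off against
# each other to keep the total force near a target.
rng = np.random.default_rng(1)
shared = rng.normal(0, 1.0, (500, 1)) * np.array([1.0, -1.0, 0.0])
forces = 5.0 + shared + rng.normal(0, 0.1, (500, 3))
v_ucm, v_ort = ucm_decompose(forces, np.array([[1.0, 1.0, 1.0]]))
assert v_ucm > v_ort                      # a synergy stabilising total force
```

Here the task variable is total force, so the Jacobian is just [1, 1, 1]; the compensating trade-off lives entirely in its null space, and the analysis recovers the high V-UCM/V-ORT ratio that signals a synergy.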

My current interest is figuring out the details of this and related analyses in order to apply it to throwing. For this post, I will therefore review a paper using UCM on throwing and pull out the things I want to be able to do. All and any advice welcome!

Thursday, 15 September 2016

Uncontrolled Manifold Analysis

Human movement is hard to study, because there are many ways to perform even simple tasks and given the opportunity, different people will take different routes. It becomes hard to talk sensibly about average performance, or typical performance, or even best performance. 

This fact - that the action system contains more elements than are needed to solve a given task - was first formalised by Bernstein as the degrees of freedom problem. Anything that can change state is a degree of freedom that can contribute to movement stability and if you have more than you need then there is immediately more than one way to perform a task. This means you have to select the best action, and even then there are always variations in the details of how you perform that action (Bernstein called this 'repetition without repetition'). From this perspective, selecting the right action means freezing out redundant degrees of freedom and working with just the ones you need.

A more recent way to think about the problem is as the bliss of motor abundance (Gelfand & Latash, 1998; Latash, 2012; see this recent post too). From this perspective, selecting the right action is about balancing the contributions of all the degrees of freedom so that the overall behaviour of the system produces the required outcome. Nothing is frozen out, but errors incurred by one degree of freedom are compensated for by changes in other degrees of freedom. If (and only if) this compensation happens, then you have a synergy in action. 

This perspective leads to a prediction and an analysis. It predicts that there are two kinds of movement variability: variability that pulls you away from your target state and variability that doesn't. The former is a problem that must be corrected by another element in the synergy compensating; successful movement requires clamping down on this variability. The latter requires no correction, no control, and successful movements can still happen even if it is high. An analysis of movement then follows. You can decompose the variability of movement in the total state space of that movement into that which pulls you away from the target and that which does not. Successful movement lives on a subspace of the total space of possible values of your degrees of freedom. If the ratio of the 'good' variability to the 'bad' variability is high, you are hanging out close to that subspace and working to keep yourself there, although not working to keep yourself doing anything in particular. You have a system that is working to compensate for 'bad' variability while ignoring the rest; a synergy defined with respect to the task demands.

This subspace is referred to as the uncontrolled manifold. It is uncontrolled because when the system is in this subspace of its total state space, it does not work to correct any variability, because that variability is not affecting the outcome. Control only kicks in when you come off the manifold.