I'm inspired by research on Embodied Cognition: the idea that the body and the environment play an important part in decision-making. I'm also fascinated by neurophenomenological research suggesting that the movement of the body is one way in which behavior can be 'read out'.
To test this, I set up an experiment in an immersive virtual environment that affords dynamic changes to the spatial environment. I use motion capture to simulate the movement of the arm in the virtual environment, so that the subject's body becomes present in the virtual space, making that space operable and actionable.
The creation of the virtual prosthetic arm was a collaboration with the Pesaran Lab at NYU's Center for Neural Science.
I wanted to look at how a changing environment affects decision-making. To do this, I took the Posner Cueing Task as my inspiration and designed an experiment based on what psychologists call a Two-Alternative Forced Choice (2AFC) task, a method for observing decisions between two response categories.
My experiment asks subjects to evaluate two simple math expressions and decide which has the greater value. The response is given by reaching for and grasping the corresponding virtual object: if A is greater, grasp shape A; if B is greater, grasp shape B.
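To make the task concrete, here's a minimal sketch in R of how a single trial could be generated. The operand ranges and the tie-breaking rule are my assumptions for illustration, not the actual stimulus parameters:

```r
# Hypothetical trial generator; operand ranges are assumptions for illustration.
make_trial <- function() {
  repeat {
    a <- sum(sample(1:9, 2))  # value of expression A, e.g. "4 + 7"
    b <- sum(sample(1:9, 2))  # value of expression B
    if (a != b) break         # resample ties so one shape is always correct
  }
  list(value_a = a, value_b = b,
       correct = if (a > b) "A" else "B")  # grasp the shape with the greater value
}

make_trial()
```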
The problems are displayed for a controlled time interval, after which the subject must move their arm to begin their response. While the subject is responding, moving particles are released into the virtual environment to influence their decision.
I designed six experimental conditions that expose the subject to variations in the virtual world along three parameters: the type of correct answer, the presence or absence of particles (creating clean or cluttered spaces), and the direction of the moving distractors.
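For concreteness, here is one plausible reconstruction of that design (the level names are hypothetical): crossing the two correct answer types with three environments (clean, plus cluttered with distractors moving in either of two directions) yields exactly six conditions:

```r
# Hypothetical reconstruction of a 2 x 3 design; level names are assumptions.
conditions <- expand.grid(
  correct_answer = c("A", "B"),
  environment    = c("clean", "cluttered_leftward", "cluttered_rightward")
)
nrow(conditions)  # 6
conditions
```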
Frame-by-frame data on the movement of the arm's bone segments are analyzed in R to determine whether reach trajectories deviate in the presence of moving distractors.
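As a sketch of what that analysis could look like, the snippet below runs on synthetic wrist positions (the pilot data aren't published here); the column schema and the deviation measure are my assumptions:

```r
library(dplyr)

set.seed(1)
# Synthetic reach trajectories standing in for motion-capture output;
# the schema (trial, frame, condition, x/y/z) is an assumption.
frames <- expand.grid(trial = 1:20, frame = 1:100) %>%
  mutate(
    t         = frame / 100,
    condition = ifelse(trial %% 2 == 0, "cluttered", "clean"),
    x = t + rnorm(n(), sd = 0.01),            # forward reach axis
    y = rnorm(n(), sd = 0.01) +               # lateral axis: distractors
        ifelse(condition == "cluttered",      # push the hand mid-reach
               0.05 * sin(pi * t), 0),        # on cluttered trials only
    z = rnorm(n(), sd = 0.01)
  )

# Deviation = distance from the straight start-to-target line (the x axis),
# summarised per trial and compared between clean and cluttered trials.
deviations <- frames %>%
  group_by(trial, condition) %>%
  summarise(max_dev = max(sqrt(y^2 + z^2)), .groups = "drop")

t.test(max_dev ~ condition, data = deviations)
```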
This project is currently in its pilot stage. I hope to conduct a full study once a few logistical hurdles are overcome.
The process of creating this tool and experiment has given rise to a set of emergent questions. If the embodied cognition thesis is true, then virtual reality is a new interface for control over the human body and its decision-making faculty. I'm fascinated by the idea that when the material of our spatial context is converted to software, we enter a new kind of embodied state. VR interfaces have the power to dynamically adjust spatial cues in order to mediate choice-making in real time. The questions I am grappling with:
What does it mean that the interface can act upon user intention, before the user can enact her intention? What is the structure of a space in which the very circumstances of action can be reconfigured before action unfolds?
I believe we are entering an unprecedented era of persuasion. Stay tuned for more details on the project and for a better articulation of this concept!
P.S. I'm extremely fascinated by what simulation at large will afford, especially real-time multi-agent simulation. I'm thinking about the vast experimental playground that Facebook/Oculus will have at its disposal when we are all plugged in, very soon.