[[File:EyeSeeCam_SCI2.png|thumb|EyeSeeCam SCI]]
 
 
==Description==
 
The EyeSeeCam SCI is a combined eye tracker and head movement tracker. The head tracking is done with an IMU and the eye tracking is done with cameras and software that tracks the pupils of both eyes. The tracking data can be read out directly in Matlab. When a dedicated computer is used for reading out the EyeSeeCam, the data is converted to an LSL stream.

The EyeSeeCam SCI can record at 250 Hz or 500 Hz. We typically record at 500 Hz.
  
 
==Construction of a gaze trace==
 
The EyeSeeCam data is recorded in coordinate systems different from the double polar system that we use to present our targets. The head and eye tracking data each have their own coordinate system. To analyze our data accurately, we have to transform these coordinate systems to double polar coordinates in the lab frame. In order to relate the EyeSeeCam coordinates to the lab coordinates, we first need a known starting gaze position for each trial. This means that during the recording, the subject has to look at the center speaker (0,0) when the trial starts.
 
From the EyeSeeCam data we can construct a head movement trace and an eye movement trace. These can be combined to create a gaze trace.
 
In Matlab we can use the class 'EyeSeeCamSci_Trace' for calculating head, eye and gaze traces in RAS, HVF and DP coordinates.
===Defining a coordinate system===
For interpreting the EyeSeeCam data we define the starting direction as the direction the subject is looking at the beginning of each trial. The recording should start when the subject is looking in this direction. <br>
The X, Y and Z of the EyeSeeCam data are in terms of RAS coordinates:
<pre>
X = anterior
Y = -right
Z = superior
</pre>
* When using 'EyeSeeCamSci_Trace', the head and gaze traces in RAS coordinates are relative to the starting position of the subject.
* The HVF and DP coordinates of the eye trace are only valid when the head stays in the starting position.
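A unit gaze vector in RAS coordinates can be converted to double polar angles directly. A minimal Matlab sketch, assuming the common double polar convention in which azimuth is the angle from the median plane and elevation the angle from the horizontal plane (this convention is an assumption, not stated on this page):

<pre>
% ras = [R; A; S], a unit gaze vector in RAS coordinates
azimuth_deg   = asind(ras(1));   % R component gives the angle from the median plane
elevation_deg = asind(ras(3));   % S component gives the angle from the horizontal plane
</pre>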
  
 
===Head movement===
 
The EyeSeeCam SCI provides angular velocities for rotations around the X, Y and Z axes. For our analyses we want position data, so we have to calculate it from these angular velocities.
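For a rotation about a single axis this integration is just a cumulative sum. A minimal 1-D sketch (assuming Vz holds the rotation speed in deg/s sampled at 500 Hz); the full 3-D treatment below needs rotation matrices because 3-D rotations do not commute:

<pre>
delta_t = 1/500;                  % 2 ms time step at 500 Hz
angleZ  = cumsum(Vz * delta_t);   % integrated rotation angle in degrees
</pre>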
The head rotation axes X, Y and Z are defined with respect to the EyeSeeCam itself.
 
*''The EyeSeeCam data is '''not''' in the lab frame!''
  
In order to create a head movement trace we have to calculate rotation matrices (delta_Rx, delta_Ry and delta_Rz) in EyeSeeCam coordinates for every time step.  
  
Since the time steps are about 2 ms (at 500 Hz; 4 ms at 250 Hz), the rotations for each time step are small. Therefore the order in which delta_Rx, delta_Ry and delta_Rz are multiplied is not important, and we calculate delta_R(t) in EyeSeeCam coordinates as
 
<pre>
delta_R(t) = delta_Rx * delta_Ry * delta_Rz
</pre>
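The rotation matrices Rx, Ry and Rz used here and in the code further below can be sketched as follows (assumption: right-handed rotations with angles in radians). The last line illustrates the small-angle claim: at 250 deg/s and a 2 ms time step one step is 0.5 degrees, and the two multiplication orders then differ only on the order of the squared step angle:

<pre>
% rotation matrix helpers (assumption: right-handed, angles in radians)
Rx = @(a) [1 0 0; 0 cos(a) -sin(a); 0 sin(a) cos(a)];
Ry = @(a) [cos(a) 0 sin(a); 0 1 0; -sin(a) 0 cos(a)];
Rz = @(a) [cos(a) -sin(a) 0; sin(a) cos(a) 0; 0 0 1];

% small-angle check: the order of multiplication hardly matters
a = deg2rad(0.5);                                 % 250 deg/s * 2 ms
max(abs(Rx(a)*Ry(a) - Ry(a)*Rx(a)), [], 'all')    % ~8e-5, negligible
</pre>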
  
To get the total rotation matrix R(t) in lab coordinates at time t, we have to iterate over all time steps:
 
<pre>
R(t=0) = identity matrix
for t = delta_t to t = tmax
  R(t) = R(t-delta_t) * delta_R(t)
end
</pre>
  
 
===Eye movement===
 
The eye movements are given by rotations in the same coordinate system as the head movements.
 
<pre>
X = anterior (torsion of the pupil in the gaze direction)
Y = -right
Z = superior
</pre>
The torsion is not important for the gaze direction and is therefore excluded from the calculation.
  
The torsion is not important for the gaze direction and is there excluded from the calculation.
+
Azimuth can be defined as a rotation about the Z axis, and elevation as a counter-rotation (a rotation by the negative angle) about the Y axis.
  
Since the eye gaze is given as rotation angles in OCS coordinates, and the eye and head tracker coordinate systems are fixed relative to each other, we can easily transform OCS coordinates to EyeSeeCam coordinates. We can then create rotation matrices in EyeSeeCam coordinates for the azimuth and elevation of the gaze. Since the torsion is excluded, the rotation matrices for azimuth and elevation can be multiplied in any order to get the eye rotation matrix in EyeSeeCam coordinates.
 
<pre>
eye_Rz(t) = Rz(eye_azimuth(t))
eye_Ry(t) = Ry(-eye_elevation(t))
eye_R(t)  = eye_Ry(t) * eye_Rz(t)
</pre>

===Gaze movement===
  
 
The gaze movement is the combined head and eye movement.
 
To create the total rotation matrix that represents the 'gaze' rotation in lab coordinates, we need to multiply the head and eye rotation matrices in the correct order. We have:
  
 
A rotation matrix for the head in lab coordinates.
 
A rotation matrix for the eyes in head coordinates. The correct order of multiplication is as follows:
  
 
First, multiply the rotation matrix of the head in lab coordinates by the rotation matrix of the eyes in head coordinates.
 
<pre>
R_gaze(t) = head_R(t) * eye_R(t)
</pre>

Lastly, multiply R_gaze(t) with the starting gaze vector to get the gaze (in lab coordinates) at time t. We assume that the starting gaze is in the forward direction, which is the X direction in lab coordinates.

<pre>
startingGaze = [1; 0; 0]
gaze(t) = R_gaze(t) * startingGaze
</pre>
  
 
==Matlab programming==
 
===Functions for converting EyeSeeCam data to double polar data===
There is one function for head movement, one for eye movement and one for gaze movement. The gaze movement combines head and eye movement.

The functions take head rotation speeds and eye rotations (in degrees) as input and give objects containing the traces (head, eye and gaze respectively) in double polar coordinates as output.

* headTrace_DP = EyeSeeCamSciHeadRotationData2Trace_DP(Vx_deg, Vy_deg, Vz_deg)
* eyeTrace_DP  = EyeSeeCamSciEyeRotationData2Trace_DP(Zocs_deg, Yocs_deg)
* gazeTrace_DP = EyeSeeCamSciHeadAndEyeRotationData2Trace_DP(Vx_deg, Vy_deg, Vz_deg, Zocs_deg, Yocs_deg)

Vx, Vy and Vz are the head rotation speeds in EyeSeeCam coordinates. Zocs and Yocs are the eye rotations in OCS coordinates.
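A hedged usage sketch for the head trace function, with the head rotation speeds taken from the channels listed under 'EyeSeeCam data' below (lsldata is assumed to be loaded already; which channels hold the OCS eye rotations Zocs_deg and Yocs_deg is not specified on this page):

<pre>
% calibrated head velocities, channels 27/29/31 (see 'EyeSeeCam data' below)
Vx_deg = lsldata.escdata.Data(27,:);
Vy_deg = lsldata.escdata.Data(29,:);
Vz_deg = lsldata.escdata.Data(31,:);

headTrace_DP = EyeSeeCamSciHeadRotationData2Trace_DP(Vx_deg, Vy_deg, Vz_deg);
</pre>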
 
===EyeSeeCam data===
 
The recorded data is read from an LSL stream.
 
 
<pre>
% todo: LSL streaming parameters
</pre>

The lsldata is a struct and has the following fields:
* escdata
* escmetadata
* escstr
* evdata0
* evdata1
* evdata2
* evdata3
* evdata4

The lsldata.escdata contains the eye and head data. The data is a matrix with a row for every parameter; there are 63 parameters in total. The field lsldata.escmetadata.channel lists the names of all parameters.

<pre>
%Eye data
RightEyePosX = 46;
RightEyePosY = 47;
RightEyePosZ = 48;

xeye = lsldata.escdata.Data(46,:);       % right 46, left 32
yeye = lsldata.escdata.Data(47,:);       % right 47, left 33
zeye = lsldata.escdata.Data(48,:);       % right 48, left 34

% Head data
HeadInertialVelXCal = 27;
HeadInertialVelYCal = 29;
HeadInertialVelZCal = 31;

xh = lsldata.escdata.Data(27,:);       % calibrated torsion velocity data HEAD
yh = lsldata.escdata.Data(29,:);       % calibrated vertical velocity data HEAD
zh = lsldata.escdata.Data(31,:);       % calibrated horizontal velocity data HEAD
</pre>
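Pulling the stream itself can be done with liblsl-Matlab. A minimal sketch (assumptions: liblsl-Matlab is on the Matlab path, and resolving by the stream type 'EyeSeeCam' is a placeholder, since the actual streaming parameters are still marked as todo above):

<pre>
lib     = lsl_loadlib();                                % load the LSL library
streams = lsl_resolve_byprop(lib, 'type', 'EyeSeeCam'); % placeholder property/value
inlet   = lsl_inlet(streams{1});
[chunk, timestamps] = inlet.pull_chunk();               % pull all available samples
</pre>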
 
 
===Converting rotation speed===
 
<pre>
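% Assumed to already exist in the workspace (names from this page, units/values are assumptions):
%   Vx, Vy, Vz   - head rotation speeds per sample
%   delta_t      - time step between samples, e.g. 1/500 s
%   timeRange    - sample indices to loop over, e.g. 1:numel(Vx)
%   startingGaze - starting gaze vector, e.g. [1; 0; 0]
%   Rx, Ry, Rz   - rotation matrix helpers (see the sketch under 'Head movement')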
 
% make a coordinate object
mycoordinates_XYZ = coordinates_XYZ(startingGaze);

% initialize R_total to the 3D identity matrix
R_total = [1,0,0; 0,1,0; 0,0,1];

% loop through all time steps
for i = (timeRange)

    % determine the angle changes between time t(i)-delta_t/2 and t(i)+delta_t/2
    delta_angleX = Vx(i)*delta_t;
    delta_angleY = Vy(i)*delta_t;
    delta_angleZ = Vz(i)*delta_t;

    % create rotation matrices
    delta_Rx = Rx(delta_angleX);
    delta_Ry = Ry(delta_angleY);
    delta_Rz = Rz(delta_angleZ);

    % multiply rotation matrices (order is not important if the angles are small enough)
    delta_R = delta_Rx * delta_Ry * delta_Rz;

    % determine new R_total
    % rotation in device coordinates, order: R_total * delta_R
    R_total = R_total * delta_R;

    % rotate the starting point with R_total
    newpoint = R_total * startingGaze;

    % add the new position to the list of coordinates
    mycoordinates_XYZ.add(newpoint);

end

% transform XYZ to RAS coordinates with the EyeSeeCam definition
mycoordinates_RAS = transform_XYZ2RAS(mycoordinates_XYZ, definition_XYZ2RAS_EyeSeeCam_Sci);
mycoordinates_DP  = transform_RAS2DP(mycoordinates_RAS);
</pre>
 