
DATA IMPORT (fixation import, stimuli import post data-recording)

Jul 11, 2013 at 6:30 PM
So I've been working with remote eye tracking for a few months now, and I am finally (finally) in the process of data analysis. To get to this step, I was forced to take a couple of odd turns. First, I couldn't figure out how to get the slide show module to work with my task design. This was fine; I'd simply realized that, for the level of complexity I wanted to achieve in my task, OGAMA wasn't the best fit. However, I still wanted to implement remote eye tracking, and ITU GazeTracker is the best open-source remote tracker, and within OGAMA, ITU performs at its best.

What I did was write a simple UDP listener in python and have it listen for the network stream that I could enable in the ITU settings within OGAMA. Basically, I'd set up, calibrate, and turn on the network assistant in OGAMA, then run the python script, and my eye data was sent directly to a .txt file of my choice.
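For reference, a stripped-down sketch of that listener might look like the following (the port number and payload format here are placeholders; they depend on whatever is configured in the ITU/OGAMA network settings):

# Minimal UDP listener sketch (python). The port and the payload format are
# placeholders -- use whatever is configured in the ITU/OGAMA network settings.
import socket
import time

HOST, PORT = "0.0.0.0", 6666      # assumed port
OUTFILE = "gaze_log.txt"          # plain-text log, one packet per line

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind((HOST, PORT))

with open(OUTFILE, "w") as log:
    while True:
        data, _addr = sock.recvfrom(4096)   # one gaze packet
        # stamp each packet with wall-clock time so the log can be aligned
        # with an OGAMA recording afterwards
        log.write("%.3f\t%s\n" % (time.time(), data.decode("ascii", "replace").strip()))
        log.flush()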

This worked swimmingly.

Now, however, I want to take the data and re-input it into OGAMA for analysis. I believed this process would work if I imported my X,Y data and time into OGAMA. However (as you might be able to surmise), this only brought the data in as raw data, and I was unable to use any of OGAMA's fantastic modules. I realized I'd have to calculate my own fixations (something I don't yet know how to do) and import them into the software.

Then I got to thinking, what if I DO record in OGAMA? It would calculate fixations for me, and if I wanted, I could also use the python script to easily export the relevant X,Y,Time data just as before.

I attempted this and it worked: simultaneous data recording to two separate locations (mind you, a timestamp issue comes into play, as I don't know how to start OGAMA and my python script simultaneously).

Again I come to data analysis. See, the intention is that my eye tracker runs on computer A and the task runs on computer B, both of which are connected to Monitor C. I calibrate with A connected to C, then run the task while B is connected to C, so both processes run without sacrificing CPU. However, with EITHER scenario (recording with the python script or with OGAMA), I NEED to be able to import the task's randomly generated stimuli into OGAMA so that the fixations are an accurate representation of the subjects' gaze.

Moreover, I am also working with non-remote eye tracking during fMRI scanning. Because of this, it is not as simple as recording in OGAMA with the slides already in place. Thus I am completely up for learning how to IMPORT fixations AND stimuli post-recording for analysis in OGAMA. If it isn't possible to accurately analyze imported data, then perhaps OGAMA isn't for me after all.

Is this possible? I'm willing to go over in more detail what I've already tried. But before I do that (and since I've already loaded you with a lot to consider), I want to know whether you fine people who are more familiar with OGAMA might provide insight (either positive or negative) as to whether what I'm trying to accomplish with OGAMA data analysis is even possible!

Thanks in advance,

Evan
University of Michigan
Psychiatry Department
Coordinator
Jul 13, 2013 at 5:54 PM
Edited Jul 13, 2013 at 5:56 PM
Hi Evan,

First of all, thanks for letting us participate in your process of getting OGAMA to work for your setup; this will help other users with the same questions, too.
What I did was write a simple UDP listener in python and have it listen for the network stream that I could enable in the ITU settings within OGAMA. Basically, I'd set up, calibrate, and turn on the network assistant in OGAMA, then run the python script, and my eye data was sent directly to a .txt file of my choice. This worked swimmingly.
That seems like an oversized solution ... OGAMA is not really needed in this setup, is it? You could use the GTApplication.exe in OGAMA's program folder directly and connect to that instance. If possible, someone else may be interested in your python UDP listener, so would you share your code with us?
Now, however, I want to take the data and re-input it into OGAMA for analysis. I believed this process would work if I imported my X,Y data and time into OGAMA. However (as you might be able to surmise), this only brought the data in as raw data, and I was unable to use any of OGAMA's fantastic modules. I realized I'd have to calculate my own fixations (something I don't yet know how to do) and import them into the software.
By default, OGAMA calculates the fixations automatically after a successful raw data import. If this is not the case, let me know. You also have the option to recalculate the fixations manually from the imported raw data in the fixations module using the "start" button in the toolbar.
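If you ever do want to compute fixations outside of OGAMA, a common approach is dispersion-threshold identification (I-DT). A minimal sketch follows, assuming samples as (time_ms, x, y) tuples and purely illustrative thresholds; note this is a generic sketch, not OGAMA's own fixation algorithm:

# Minimal dispersion-threshold (I-DT) fixation detection sketch.
# Thresholds are placeholders and need tuning to the tracker's sampling
# rate and noise level.
def detect_fixations(samples, max_dispersion=30.0, min_duration=100.0):
    """Return a list of (start_ms, duration_ms, mean_x, mean_y) fixations."""
    fixations = []
    i = 0
    n = len(samples)
    while i < n:
        j = i
        # grow the window while dispersion (max_x - min_x) + (max_y - min_y)
        # stays within the threshold
        while j + 1 < n:
            xs = [s[1] for s in samples[i:j + 2]]
            ys = [s[2] for s in samples[i:j + 2]]
            if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion:
                break
            j += 1
        duration = samples[j][0] - samples[i][0]
        if duration >= min_duration:
            xs = [s[1] for s in samples[i:j + 1]]
            ys = [s[2] for s in samples[i:j + 1]]
            fixations.append((samples[i][0], duration,
                              sum(xs) / len(xs), sum(ys) / len(ys)))
            i = j + 1
        else:
            i += 1
    return fixations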
Then I got to thinking, what if I DO record in OGAMA? It would calculate fixations for me, and if I wanted, I could also use the python script to easily export the relevant X,Y,Time data just as before. I attempted this and it worked: simultaneous data recording to two separate locations (mind you, a timestamp issue comes into play, as I don't know how to start OGAMA and my python script simultaneously).
You may synchronize OGAMA with external software/hardware using the trigger option in the slide design/recording module. Only 8-bit LPT signaling is implemented so far, but you may extend this using OGAMA's sources; it is not that difficult, as there is an interface for this.
Again I come to data analysis. See, the intention is that my eye tracker runs on computer A and the task runs on computer B, both of which are connected to Monitor C. I calibrate with A connected to C, then run the task while B is connected to C, so both processes run without sacrificing CPU. However, with EITHER scenario (recording with the python script or with OGAMA), I NEED to be able to import the task's randomly generated stimuli into OGAMA so that the fixations are an accurate representation of the subjects' gaze.
If you are able to generate an ID for each random stimulus (e.g. 4, GD9), which is represented by an image of the stimulus size (e.g. 4.jpg, GD9.jpg), then send a message with the ID/the image filename into the log file (e.g. using your python script), like this:
time x y
0 120 220
20 130 225
40 140 225
...
MSG 4.jpg
...
In the import assistant you are able to detect such message lines and split the raw data into trials using the given IDs. This also enables multiple views of the same image (different trial sequence, same trial ID, in OGAMA's words). Each random .jpg has to be imported into OGAMA with the correct ID, which should be possible through the batch import in the slide design module.
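For example, your python logger could be extended with a small helper that writes such MSG lines whenever the task switches to a new stimulus (a sketch; the function names are made up, only the line layout shown above matters):

# Sketch: write gaze samples and trial markers into the same log file so the
# import assistant can split the raw data into trials by the MSG lines.
def write_sample(log_file, t, x, y):
    log_file.write("%d\t%d\t%d\n" % (t, x, y))

def write_stimulus_marker(log_file, image_name):
    # e.g. write_stimulus_marker(log, "4.jpg")
    log_file.write("MSG\t%s\n" % image_name)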

If you use OGAMA directly for recording, you also have to create a slide for each of your randomized pictures (hopefully the number is not endless, otherwise it will surely fail) and use the randomize options in the slide design module, which are not as sophisticated as a python script, but you are able to group on multiple levels using folders for slide groups.
Moreover, I am also working with non-remote eye tracking during fMRI scanning. Because of this, it is not as simple as recording in OGAMA with the slides already in place. Thus I am completely up for learning how to IMPORT fixations AND stimuli post-recording for analysis in OGAMA. If it isn't possible to accurately analyze imported data, then perhaps OGAMA isn't for me after all.
I have seen experiments in OGAMA with fMRI data from SMI using the fixation import assistant in the fixations module. You do not need to have the raw data available for all analysis modules (for replay it is essential), so you might also work only with imported fixations. The assistant should be self-explanatory; if not, tell me at which step you fail :-) By default the fixation table should also be sequenced by lines with a new trial/stimulus ID.
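As an illustration, the fixation list from the I-DT sketch above could be dumped into a tab-separated table with a trial/stimulus ID column; the column names here are only placeholders, since the import assistant lets you map the columns to its fields:

# Sketch: dump detected fixations into a tab-separated table, one block per
# trial, so the columns can be mapped in the fixation import assistant.
def write_fixation_table(path, fixations_per_trial):
    # fixations_per_trial: dict mapping a trial ID (e.g. "4.jpg") to a list of
    # (start_ms, duration_ms, x, y) tuples as returned by detect_fixations()
    with open(path, "w") as out:
        out.write("trial_id\tstart_ms\tduration_ms\tx\ty\n")
        for trial_id, fixations in fixations_per_trial.items():
            for start, duration, x, y in fixations:
                out.write("%s\t%d\t%d\t%.1f\t%.1f\n"
                          % (trial_id, start, duration, x, y))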
Is this possible? I'm willing to go over in more detail what I've already tried. But before I do that (and since I've already loaded you with a lot to consider), I want to know whether you fine people who are more familiar with OGAMA might provide insight (either positive or negative) as to whether what I'm trying to accomplish with OGAMA data analysis is even possible!
From all I can see at the moment, the one and only problem may be the huge number of possible pictures that are randomly generated.
I have seen slideshows with 4000 slides that worked, but I recommend slideshows of about 100 stimuli.
For now I can see no reason why it should not (finally) work; finding the best way is just a bit tricky sometimes and depends on hardware and analysis needs.

Hope it helped, don't hesitate to ask further,
Adrian