1 00:00:00.069 --> 00:00:07.290 Okay, so this is the Low/High Index of Pupillary Activity. I'll try to do this in three minutes.
2 00:00:08.370 --> 00:00:14.640 It's work with myself, Krzysztof Krejtz, Nina Gehrer in Germany, and Tanya Bafna and Per Baekgaard in Denmark.
3 00:00:15.960 --> 00:00:17.370 We're supported by the NSF in the US, Bevica Fonden in Denmark,
4 00:00:18.450 --> 00:00:24.390 and the Polskie Ministerstwo Nauki i Szkolnictwa Wyższego (Polish Ministry of Science and Higher Education) in Poland.
5 00:00:25.500 --> 00:00:37.740 So in essence, what this is about is using the low-frequency to high-frequency ratio of pupil diameter oscillation to estimate cognitive load.
6 00:00:38.910 --> 00:00:45.180 This is based in part on Peysakhovich's result, where he used an n-back task in Toulouse and could differentiate between levels of task difficulty.
7 00:00:46.710 --> 00:00:49.050 And it's also based in part on our 2018 CHI paper on the IPA,
8 00:00:50.250 --> 00:00:56.730 the Index of Pupillary Activity, where we use wavelets:
9 00:00:57.780 --> 00:01:03.870 the wavelet coefficients give us the detail signal, and the scaling function gives us a multi-resolution analysis of the signal.
10 00:01:05.250 --> 00:01:07.770 Basically, it's just a simple convolution using
11 00:01:09.030 --> 00:01:11.340 wavelet filters or averaging filters:
12 00:01:12.660 --> 00:01:20.490 the Haar filter, for example, in Table 4 on the right, or Daubechies-4; we ended up using symlet-16.
13 00:01:21.330 --> 00:01:28.350 The other thing that we do is find edges in the signal, and that is done via modulus maxima detection:
14 00:01:28.350 --> 00:01:34.380 the value in the center has to be greater than or equal to both neighbors, and strictly greater than at least one of them.
15 00:01:35.820 --> 00:01:41.400 The code for the IPA was available in the 2018 CHI paper, and we also made the code available for 2020.
16 00:01:42.630 --> 00:01:53.850 Essentially, what we now do is take the low- to high-frequency ratio inside the wavelet transform, and we compute that
17 00:01:54.780 --> 00:01:58.780 to give us the L-H-I-P-A, or LHIPA, signal. So here we've got three experiments that we did.
18 00:01:58.780 --> 00:02:03.780 In the first, we replicated, or actually re-analyzed, the data from 2018,
19 00:02:04.560 --> 00:02:15.060 which was a simple fixed-gaze experiment following Siegenthaler et al.
20 00:02:16.020 --> 00:02:22.230 The results from the 2018 paper showed that it was difficult to count backwards by 17 and easy to count forwards by 2,
21 00:02:22.230 --> 00:02:28.230 and LHIPA, on the right, shows good differentiation between the difficult task and the easy control, just like the IPA.
22 00:02:29.760 --> 00:02:38.520 The second experiment was no longer fixed-gaze: we followed the Tuebingen group's design for an n-back task, using the 1-back and 2-back
23 00:02:39.330 --> 00:02:46.160 with a limited vocabulary, and this is the setup for the experiment.
24 00:02:46.160 --> 00:02:53.160 Results also showed that LHIPA can distinguish between the baseline and the n-back tasks; the IPA, however, could not.
25 00:02:54.810 --> 00:03:00.960 Finally, our third experiment is an eye-typing task, also easy and difficult, based on the LIX readability score.
26 00:03:02.190 --> 00:03:07.320 Here's the setup, where somebody is typing a phrase they had to remember previously.
27 00:03:08.520 --> 00:03:14.300 And again, LHIPA shows good results differentiating between easy and difficult tasks, but the IPA does not.
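The pipeline described above, wavelet detail coefficients, a low- to high-frequency ratio, and modulus maxima detection, can be sketched in Python. This is a minimal sketch rather than the authors' published code: it assumes the PyWavelets library, a uniformly sampled pupil-diameter signal, the level-1 detail band as the high-frequency band, half the maximum decomposition level as the low-frequency band, and a Donoho-style universal threshold for counting maxima; the helper names `modmax`, `lhipa`, and `duration_s` are illustrative.

```python
import math
import numpy as np
import pywt  # PyWavelets

def modmax(d):
    """Modulus maxima: keep samples whose magnitude is greater than or
    equal to both neighbors and strictly greater than at least one."""
    m = np.abs(d)
    t = np.zeros(len(d))
    for i in range(len(d)):
        left = m[i - 1] if i >= 1 else m[i]
        right = m[i + 1] if i < len(d) - 1 else m[i]
        if m[i] >= left and m[i] >= right and (m[i] > left or m[i] > right):
            t[i] = m[i]
    return t

def lhipa(d, duration_s, wavelet='sym16'):
    """Low/high index of pupillary activity for a uniformly sampled
    pupil-diameter signal d lasting duration_s seconds (sketch)."""
    w = pywt.Wavelet(wavelet)
    maxlevel = pywt.dwt_max_level(len(d), filter_len=w.dec_len)
    hif = 1                          # high-frequency band: level-1 detail (assumed)
    lof = max(maxlevel // 2, 1)      # low-frequency band: half max level (assumed)
    # detail coefficients at the two levels, normalized by sqrt(2^j)
    cD_H = pywt.downcoef('d', d, wavelet, mode='periodization', level=hif)
    cD_L = pywt.downcoef('d', d, wavelet, mode='periodization', level=lof)
    cD_H = cD_H / math.sqrt(2.0 ** hif)
    cD_L = cD_L / math.sqrt(2.0 ** lof)
    # low/high ratio: each LF coefficient over the HF coefficient
    # at the corresponding (downsampled) position
    step = 2 ** (lof - hif)
    cD_LH = np.array([cD_L[i] / cD_H[i * step] for i in range(len(cD_L))])
    # count modulus maxima of the ratio exceeding a universal
    # threshold (assumed here), normalized by signal duration
    mm = modmax(cD_LH)
    lam = np.std(mm) * math.sqrt(2.0 * math.log(len(mm)))
    return float(np.sum(np.abs(mm) > lam)) / duration_s
```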
28 00:03:14.520 --> 00:03:21.300 One thing to remember is that LHIPA is inversely proportional to task difficulty, and that, in essence, is it.
29 00:03:22.350 --> 00:03:28.000 So essentially, LHIPA looks like a pretty robust way to measure cognitive load using pupil diameter oscillation,
30 00:03:28.000 --> 00:03:34.000 whereas the 2018 IPA seems to have faltered when gaze started moving instead of being fixed.
31 00:03:34.000 --> 00:03:39.630 I welcome any questions, if there are any. Thank you very much.
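Given that inverse relationship, a hypothetical use of the `lhipa` sketch above might look like the following; the sampling rate, trial length, and synthetic pupil trace are invented for illustration, and the harder of two tasks would be expected to yield the smaller value.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, secs = 60, 30                          # assumed: 60 Hz pupil signal, 30 s trial
t = np.arange(fs * secs) / fs
# toy pupil-diameter trace (mm): baseline + slow oscillation + noise
d = 4.0 + 0.2 * np.sin(2 * np.pi * 0.05 * t) + 0.05 * rng.standard_normal(len(t))
score = lhipa(d, duration_s=secs)          # lhipa() from the sketch above
print(score)  # lower LHIPA would indicate the more difficult task
```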