In my dissertation, I investigated the neuronal mechanisms underlying object recognition and categorization in the human visual system, focusing on three main aspects: Sampling, Invariance, and Plasticity. I first investigated how the cortical systems for overt visual attention and object recognition interact. Combining eye-tracking with ambiguous visual displays, we demonstrated a bidirectional influence. I then investigated potential mechanisms and features underlying viewpoint-invariant object and face recognition. Using multivariate analyses of fMRI and MEG/EEG data, as well as TMS, we focused on a peculiar cortical shortcut: the joint selectivity for mirror-symmetric viewpoints. Finally, I asked how the visual system, despite its highly reliable and efficient performance, maintains considerable plasticity, allowing it to learn and integrate novel categories almost immediately. Here, we performed a longitudinal study combining extensive category training with multiple MEG measurements to unravel the temporal and spatial aspects of cortical category learning. The project was funded by a PhD position in the Neurobiopsychology lab of Professor König and by a Fulbright scholarship during my time in Professor Frank Tong’s lab.

We currently pursue three lines of research. (a) We investigate how long-term visual experience shapes visual representations and perception; in addition to the effects of extended category training, we ask how visual invariances might be extracted from the statistics of the natural input to the system. (b) We investigate at which processing stages and latencies invariant face recognition is achieved in the visual system, in order to better understand the underlying mechanisms. To make sense of the complex structure of the recorded high-dimensional data, we apply machine-learning techniques and computational models to the empirical data. (c) Finally, we investigate object-based reference frames, asking how different parts of a face are bound into a coherent perceptual experience.
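As an illustration of the kind of multivariate analysis applied to such high-dimensional recordings, here is a minimal decoding sketch on simulated sensor data. The channel counts, effect size, and classifier choice are illustrative assumptions, not the lab's actual pipeline:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Simulated sensor data: 200 trials x 64 channels, two stimulus categories.
n_trials, n_channels = 200, 64
labels = rng.integers(0, 2, n_trials)
data = rng.normal(size=(n_trials, n_channels))
data[labels == 1, :8] += 0.5  # weak category signal in a subset of channels

# Cross-validated decoding: above-chance accuracy indicates that the
# recorded patterns carry category information.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, data, labels, cv=5)
print(f"decoding accuracy: {scores.mean():.2f}")
```

In practice this analysis is run separately per time point of the MEG/EEG epoch, yielding a time course of when category information becomes decodable.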

Current and Past Collaborators include: Peter König, Frank Tong, Andreas Engel, Randolph Blake, Sam Ling, Niklas Wilming, Jose Ossandon, Torsten Betz.

Past Projects
Eye-Movements and Overt Visual Attention
Although our understanding of conscious visual perception has made considerable progress in recent years, the role of overt visual attention in initial perceptual formation remains unclear. Two hypotheses need to be considered. The first describes overt visual attention as following perceptual formation: only after we consciously perceive an object’s identity is visual attention guided towards the crucial features of this object. Its competitor assigns a more constructive role to visual attention, suggesting that the features fixated prior to conscious perception substantially contribute to the perceptual outcome (the action-precedes-perception hypothesis, APP). The central aim of this research project is to adjudicate between these two hypotheses and thereby clarify the role of visual attention in perceptual formation. The results of this project can be found in this paper.

(collaboration of NBP with Merav Ahissar at the Hebrew University of Jerusalem)


Task-dependent Changes in Overt Visual Attention
The goal of the GoodGaze project is to investigate how task-dependent differences in viewing behavior arise. Using eye-tracking on webpages, two hypotheses were tested. The “weak top-down” hypothesis suggests that task effects on fixation selection are due to relative reweighting of different feature channels in the bottom-up hierarchy of visual processing. Contrary to this, the “strong top-down” hypothesis sees differences in viewing behavior as independent of changes in the bottom-up hierarchy; the differences could instead be established, for instance, through a different spatial bias. The analyses showed that (I) task differences in overt visual attention exist while viewing webpages, (II) there is indeed a small interaction effect of task and feature variables at the fixated positions, but (III) these differences are far from sufficient to explain the observed differences in viewing behavior (as demonstrated through computational modeling). Together, these results strongly favor the “strong top-down” hypothesis. More can be found in this paper.
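The model-comparison logic behind such analyses can be sketched as follows: score how well bottom-up feature values separate fixated from control locations under each task, e.g. with an ROC AUC. All values and the two task conditions below are simulated purely for illustration, not the GoodGaze data:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

def auc_for_task(shift):
    """AUC of (simulated) feature values for fixated vs. control locations.

    `shift` stands in for how strongly fixations follow the feature map
    in a given task; chance level is 0.5.
    """
    fixated = rng.normal(loc=shift, size=500)  # feature values at fixations
    control = rng.normal(loc=0.0, size=500)    # feature values at random points
    values = np.concatenate([fixated, control])
    is_fix = np.concatenate([np.ones(500), np.zeros(500)])
    return roc_auc_score(is_fix, values)

auc_task_a = auc_for_task(0.6)  # task where features predict fixations well
auc_task_b = auc_for_task(0.1)  # task where the same features predict poorly
print(auc_task_a, auc_task_b)
```

A large gap between tasks that cannot be closed by reweighting the feature channels is the signature that favors the “strong top-down” account.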


Computational Object Recognition - A Biologically Motivated Approach
FILOU (Feature and Incremental Learning of Objects Utility) is a view-based object recognition system. The idea of this project is to extract key concepts of the human visual system based on neurobiological and psychophysical findings. By translating these concepts into a suitable technical representation, we created an efficient and robust object recognition system. It is not only capable of task-dependent feature selection, but also requires fewer view prototypes to be stored than comparable approaches. This is accomplished by automatically assigning each object representation an individual number of prototypes, depending on the complexity of the object and task. For more information see this and this paper. Matlab code of iGRLVQ can be downloaded here.
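To give a flavor of the relevance-learning core behind GRLVQ-style approaches, here is a rough Python sketch of a single-pass GRLVQ update rule on toy data. This is an illustrative reimplementation, not the linked Matlab iGRLVQ code, and all data and learning rates are made up:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy 2-class data in 3 dimensions; only the first dimension is informative.
X = rng.normal(size=(400, 3))
y = (X[:, 0] > 0).astype(int)

# One prototype per class, plus adaptive relevance weights (lambda)
# that learn how diagnostic each feature dimension is.
protos = np.array([[-1.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
proto_labels = np.array([0, 1])
lam = np.ones(3) / 3.0
lr_w, lr_lam = 0.05, 0.01

def wdist(x, w):
    """Relevance-weighted squared distance."""
    return np.sum(lam * (x - w) ** 2)

for x, t in zip(X, y):
    d = np.array([wdist(x, w) for w in protos])
    J = int(np.argmin(np.where(proto_labels == t, d, np.inf)))  # closest correct
    K = int(np.argmin(np.where(proto_labels != t, d, np.inf)))  # closest wrong
    denom = d[J] + d[K]
    # Gradients of the GRLVQ cost mu = (dJ - dK) / (dJ + dK)
    gJ = 2 * d[K] / denom ** 2
    gK = 2 * d[J] / denom ** 2
    # Relevance update (uses pre-update prototypes), then renormalize.
    lam -= lr_lam * (gJ * (x - protos[J]) ** 2 - gK * (x - protos[K]) ** 2)
    lam = np.clip(lam, 1e-6, None)
    lam /= lam.sum()
    # Attract the correct prototype, repel the wrong one.
    protos[J] += lr_w * gJ * lam * (x - protos[J])
    protos[K] -= lr_w * gK * lam * (x - protos[K])

pred = np.array([proto_labels[np.argmin([wdist(x, w) for w in protos])] for x in X])
acc = (pred == y).mean()
print("accuracy:", acc)
print("relevances:", lam.round(2))
```

After training, the relevance weight on the informative dimension dominates, which is the mechanism that supports task-dependent feature selection; the incremental variant additionally grows the prototype set per object as needed.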


‘The human eye is utterly passive,’ she said, obviously quoting some professor or textbook. ‘Only the brain can see.’
Mario Puzo | The Godfather Returns