Brain and Perception

AB066. Duration dependent visual plasticity via monocular deprivation

Background: Short-term monocular deprivation has recently been shown to temporarily increase the sensitivity of the patched eye. Many studies have patched subjects for 2.5 hours, an arbitrary period chosen for no principled reason. Our goal is to determine the relationship, if any, between patching duration and the strength of its effect.

Methods: We tested nine subjects with three patching durations: 1, 2, and 3 hours; four of the nine subjects were additionally patched for 5 hours. Monocular deprivation was achieved with a translucent eye patch. A session comprised two rounds of baseline testing of interocular eye balance, patching, and post-patching tests. Post-patching tests occurred at 0, 3, 6, 12, 24, 48, 60 and 96 minutes after patching to track the patching effect over time. Every subject performed two sessions per condition.

Results: One-hour patching produced a small shift in ocular dominance. A larger shift occurred after 2-hour patching, but 3-hour patching produced an effect comparable to that measured after 2-hour patching.

Conclusions: These results indicate that the patching effect saturates beyond 2 hours of patching. Hence, we believe that 2 hours is the optimal patching duration for inducing eye dominance changes via monocular deprivation.

Brain and Perception

AB065. Pedestrian modeling using the least-action principle

Background: In this work, we present a theoretical and experimental study of the natural movement of pedestrians passing through a limited and known area of a shopping center. The modeling problem for the motion of a single pedestrian is complex and extensive; therefore, we focus on designing models that take into account mechanistic aspects of human locomotion. The theoretical study used mean values of pedestrian characteristics, e.g., density and velocity, in the presence of many obstacles. We propose a model of human pedestrian trajectories based on the least-action principle and compare it against experimental results. The experimental study was conducted in a Living Lab inside a shopping center using infrared cameras. For this experiment, we collected highly accurate trajectories, allowing us to quantify pedestrian crowd dynamics. The tests included 20 runs distributed over five days with up to 25 test persons. Additionally, to gain a better understanding of subjects’ trajectories, we simulated different pathway scenarios and compared them with real trajectories. Our theoretical framework takes the minimum error between previously simulated and real pathway points to predict future points on the subject’s trajectory.

Methods: This paper explores the paths of 25 pedestrians through a known area. After obtaining the trajectories and their points of origin, we evaluated the speed with the objective of calculating the kinetic energy of the pedestrian. In the present model, we assume that the principle of least action holds, and using this concept we can obtain the potential force. Once all the forces governing pedestrian movement are known, we calculate the adjustment of the parameters employed in the equations of the social force model.
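The parameter adjustment described above targets the social force model, in which a driving force pulls the pedestrian toward a goal while repulsive forces push it away from obstacles. The following is a minimal single-step sketch of that model; the parameter values (preferred speed v0, relaxation time tau, repulsion constants A and B) and the exponential obstacle repulsion are illustrative assumptions, not the authors' calibrated equations.

```python
import numpy as np

def social_force_step(pos, vel, goal, obstacles, dt=0.1,
                      v0=1.3, tau=0.5, A=2.0, B=0.3):
    """Advance one pedestrian by one time step under a social force model.

    pos, vel: current position and velocity (2-vectors)
    goal: point the pedestrian walks toward
    obstacles: array of obstacle positions, shape (n, 2)
    v0: preferred walking speed (m/s); tau: relaxation time (s)
    A, B: repulsion strength and range (illustrative values)
    """
    # Driving force: relax toward the preferred velocity aimed at the goal
    e = (goal - pos) / np.linalg.norm(goal - pos)
    f_drive = (v0 * e - vel) / tau

    # Repulsive "potential" forces from obstacles, decaying exponentially
    f_rep = np.zeros(2)
    for obs in obstacles:
        d_vec = pos - obs
        d = np.linalg.norm(d_vec)
        if d > 1e-9:
            f_rep += A * np.exp(-d / B) * d_vec / d

    # Simple Euler integration of the resulting acceleration
    vel_new = vel + (f_drive + f_rep) * dt
    pos_new = pos + vel_new * dt
    return pos_new, vel_new
```

Calibrating the model then amounts to choosing v0, tau, A, and B so that simulated trajectories minimize the error against the recorded ones.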

Results: It is possible to reproduce observed real pedestrian movement by using the principle of least action. In the first scenario, we focused on a pedestrian walking without obstacles. Using the actual trajectories from the experiment, we obtained the necessary information and applied it to the social force model. Our simulations clearly reproduced the observed average trajectories for the obstacle-free walking condition.

Conclusions: When a scenario does not represent free walking (obstacles, constraints), the potential energy and the kinetic energy are modified. Note that when the trajectory is the real one, the action is assumed to equal zero. That is, the value of the potential energy changes with each interaction with a new obstacle, while the value of the action remains constant. We show here that we can clearly reproduce some scenarios and calibrate the model for different situations. Using different values of potential energy, we can recover the values of the actual pathway. Nevertheless, as a significant extension of this model, it would be desirable to simulate cellular automata that could learn the situation and improve the approximation model to predict real trajectories more accurately.

Brain and Perception

AB064. Product knowledge predicts greater willingness to buy and gaze-related attention, salience does not

Background: Visual salience computed using algorithmic procedures has been shown to predict eye movements in a number of contexts. However, despite calls to incorporate computationally defined visual salience metrics as a means of assessing the effectiveness of advertisements, few studies have applied these techniques in a marketing context. The present study sought to determine the impact of visual salience and brand knowledge on eye-movement patterns and buying preferences.

Methods: Participants (N=38) were presented with 54 pairs of products shown on the left and right sides of a blank white screen. For each pair, one product was a known North American product, such as Fresca®, and one was an unknown British product of the same category, such as Irn-Bru®. Participants were asked to select which product they would prefer to buy while their eye movements were recorded. Salience was computed using Itti & Koch’s [2001] computational model of bottom-up salience. Products were defined as highly salient if the majority of the first five predicted fixations fell within the region of the product.
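The "majority of the first five predicted fixations" rule can be expressed directly. A small sketch, assuming for illustration that a product region is a rectangular bounding box (the paper does not specify the region's shape):

```python
def is_highly_salient(predicted_fixations, product_region):
    """Classify a product as highly salient if the majority of the first
    five model-predicted fixations land inside its region.

    predicted_fixations: (x, y) fixation points from a saliency model,
        ordered by predicted fixation rank
    product_region: (x_min, y_min, x_max, y_max) bounding box in screen
        coordinates (rectangular region is an assumption)
    """
    x0, y0, x1, y1 = product_region
    first_five = predicted_fixations[:5]
    hits = sum(1 for (x, y) in first_five if x0 <= x <= x1 and y0 <= y <= y1)
    return hits >= 3  # majority of five
```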

Results: Results showed that participants were much more likely to prefer to buy known products, and tentative evidence suggests that participants had longer total dwell times when looking at unknown products. Salience appears to have had little or no effect on preference for a product, nor did it predict total dwell time or time to first fixation. There also appears to be no interaction between knowledge of a product and visual salience on any of the measures analyzed.

Conclusions: The results indicate that product salience may not be a useful predictor of attention under the constraints of the present experiment. Future studies could use a different operational definition of visual salience which might be more predictive of visual attention. Furthermore, a more fine-grained analysis of product familiarity based on survey data may reveal patterns obscured by the definitional constraints of the present study.

Brain and Perception

AB063. Contrasting effects of exogenous attention on saccades and reaches

Background: The goal of the present study was to determine whether exogenous attentional mechanisms involved in motor planning for saccades and reaches are the same for both effectors or are independent for each effector. We compared how eye and arm movement parameters, notably reaction time and amplitude, are affected by modulating exogenous attentional visual cues at different locations relative to a target.

Methods: Thirteen participants (mean age =22.8 years, SD =1.5) were asked to perform a task involving exogenous attentional allocation and movement planning. Participants were asked to fixate and maintain their hand at an initial position on a screen in front of them (left or right of screen centre) and then, at the disappearance of the fixation cross, perform an eye movement, an arm movement, or both, to a target square (at the mirror location of the fixation cross). A distractor appeared momentarily, just before the appearance of the target, at one of seven equidistant locations on the horizontal meridian. Saccade reaction times (SRTs), reach reaction times (RRTs), and movement amplitudes were calculated.

Results: Compared to the neutral condition (where no distractor was presented), distractors did not produce facilitation of SRTs (shorter SRTs) at any location, but rather only strong inhibition (longer SRTs) as a function of distractor-target distance. In contrast, RRTs showed strong facilitation at the target location and less inhibition at greater distances. However, both SRTs and RRTs followed a similar pattern in that RTs were shortest close to the target position and grew increasingly longer as a function of distractor-target distance. In terms of amplitude, there was no effect of the distractor on reach endpoints, whereas for saccades there was an averaging effect of distractor position on saccade endpoints, but only for saccades with short SRTs. These effects were similar whether each effector movement was performed alone or together.

Conclusions: These findings suggest that attentional selection mechanisms have both similar and differential effects on motor planning depending on the effectors used, providing evidence for both effector independent and effector dependent attentional selection mechanisms. This study furthers understanding of the operating mechanisms of exogenous attention on eye and arm movements and the interaction between sensory and motor systems.

Brain and Perception

AB062. Cortical state contribution to neuronal response variability

Background: Visual cortex neurons often respond to stimuli very differently on repeated trials. This trial-by-trial variability is known to be correlated among nearby neurons. Our long-term goal is to quantitatively estimate neuronal response variability, using multi-channel local field potential (LFP) data from single trials.

Methods: Acute experiments were performed with anesthetized (Remifentanil, Propofol, nitrous oxide) and paralyzed (Gallamine Triethiodide) cats. Computer-controlled visual stimuli were displayed on a gamma-corrected CRT monitor. For the principal experiment, two kinds of visual stimuli were used: drifting sine-wave gratings, and a uniform mean-luminance gray screen. These two stimuli were each delivered monocularly for 100 sec in a random order, for 10 trials. Multi-unit activity (MUA) and LFP signals were extracted from broadband raw data acquired from Areas 17 and 18 using A1x32 linear arrays (NeuroNexus) and the OpenEphys recording system. LFP signal processing was performed using Chronux, an open-source MATLAB toolbox. Current source density (CSD) analysis was performed on responses to briefly flashed full-field stimuli using the MATLAB toolbox CSDplotter. The common response variability (global noise) of MUA was estimated using the model proposed by Scholvinck et al. [2015].
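The idea behind a "global noise" estimate can be illustrated with a simplified sketch: subtract each channel's stimulus-locked (trial-averaged) response, treat the channel-averaged residual as the shared fluctuation, and fit each channel's coupling to it. This is a stand-in under those assumptions, not the exact fitting procedure of Scholvinck et al. [2015].

```python
import numpy as np

def estimate_global_noise(mua):
    """Estimate a shared ("global") noise trace from multi-unit activity.

    mua: array of shape (n_trials, n_channels, n_timebins)
    Returns the per-trial global noise trace and each channel's coupling
    coefficient to it (simplified illustration, not the published model fit).
    """
    # Residuals: remove each channel's trial-averaged, stimulus-locked response
    resid = mua - mua.mean(axis=0, keepdims=True)
    # Shared fluctuation: approximated by the channel-averaged residual
    global_noise = resid.mean(axis=1)            # (n_trials, n_timebins)
    # Per-channel coupling: least-squares slope of residual onto global noise
    g = np.array([
        (resid[:, c, :] * global_noise).sum() / (global_noise ** 2).sum()
        for c in range(mua.shape[1])
    ])
    return global_noise, g
```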

Results: On different trials, a given neuron responded with different firing rates to the same visual stimuli. Within a single trial, a neuron’s firing rate also fluctuated across successive cycles of a drifting grating. When the animal was given extra anesthesia, neurons fired in a desynchronized pattern; with lighter levels of anesthesia, neuronal firing became more synchronized. By examining the cross-correlations of LFP signals recorded from different cortical layers, we found that LFP signals could be divided into two groups: those recorded in layer IV and above, and those from layers V and VI. Within each group, LFP signals recorded by different channels were highly correlated. These two groups were observed in both lightly and deeply anesthetized animals, and in both the sine-wave grating and uniform gray stimulus conditions. We also investigated correlations between LFP signals and global noise. Power in the LFP beta band was highly correlated with global noise when animals were under deeper anesthesia.

Conclusions: Brain states contribute to variations in neuronal responses. The raw LFP correlation results suggest that we should analyze LFP data according to their laminar organization. The correlation of low-frequency LFP with global noise under deeper anesthesia gives us some ability to predict noise from single-trial data, and we hope to extend this analysis to lighter anesthesia in the future.

Brain and Perception

AB059. Expression patterns of CB1R, NAPE-PLD, and FAAH in the primary visual cortex of vervet monkeys

Background: The expression, localization, and function of the endocannabinoid system have been well characterized in recent years in the monkey retina and in the primary thalamic relay, the lateral geniculate nucleus (dLGN). Few data are available on the cortical recipient structures of the dLGN, namely the primary visual cortex (V1). The goal of this study is to characterize the expression and localization of the metabotropic cannabinoid receptor type 1 (CB1R), the synthesizing enzyme N-acyl phosphatidyl-ethanolamine phospholipase D (NAPE-PLD), and the degradation enzyme fatty acid amide hydrolase (FAAH) in area V1 of the vervet monkey.

Methods: Using Western blots and immunohistochemistry, we investigated the expression patterns of CB1R, NAPE-PLD, and FAAH in the vervet monkey primary visual cortex.

Results: CB1R, NAPE-PLD, and FAAH were expressed in the primary visual cortex throughout the rostro-caudal axis. CB1R showed very low levels of staining in cortical layer 4, with higher expression in all other cortical layers, especially layer 1. NAPE-PLD and FAAH expression was highest in layers 1, 2 and 3, and lowest in layer 4.

Conclusions: Interestingly, CB1R expression was very low in layer 4 of V1 in comparison with the other cortical layers. The visual information coming from the dLGN and entering layers 4Calpha (magno cells) and 4Cbeta (parvo cells) may therefore be modulated by the higher expression levels of CB1R in cortical layers 2 and 3 on the way to the dorsal and ventral visual streams. This is further supported by the higher expression of NAPE-PLD and FAAH in the outer cortical layers. These data indicate that the CB1R system can influence the network of activity patterns in the visual stream after the visual information has reached area V1. These novel results provide insights for understanding the role of endocannabinoids in the modulation of cortical visual inputs and, hence, visual perception.

Brain and Perception

AB058. A longitudinal study on the effects of the optic nerve crush on behavioural visual acuity measures in mice

Background: Visual deficits, caused by ocular disease or trauma to the visual system, can cause lasting damage with insufficient treatment options available. However, recent research has focused on neural plasticity as a means to regain visual abilities. In order to better understand the involvement of neural plasticity and reorganization in partial vision restoration, we aim to evaluate the partial recovery of a visual deficit over time using three behavioural tests. In our study, a partial optic nerve crush (ONC) serves as an induced visual deficit, allowing for residual vision from surviving cells.

Methods: Three behavioural tests (optokinetic reflex, object recognition, and visual cliff) were conducted in 9 mice prior to a bilateral, partial ONC, and then 1, 3, 7, 14, 21, and 28 days after the ONC. The optokinetic reflex test measured the tracking reflex in response to moving sinusoidal gratings; the gratings increase in spatial frequency until a reflex is no longer observed, i.e., until a visual acuity threshold is reached. The object recognition test examines the animal’s exploratory behaviour and its capacity to distinguish high- from low-contrast objects. The visual cliff test also evaluates exploratory behaviour, simulating a cliff to probe the animal’s depth perception. Together, the three tests provide an estimate of the rodent’s visual abilities at different levels of the visual pathway.
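The ascending-frequency procedure for the optokinetic reflex amounts to a simple sweep: step up the spatial frequency until the reflex disappears. A minimal sketch (the function name and the notion of a callable reflex check are illustrative, not the authors' software):

```python
def acuity_threshold(reflex_observed, freqs):
    """Return the visual acuity threshold from an ascending frequency sweep.

    reflex_observed: callable taking a spatial frequency (cycles/degree)
        and returning True while the tracking reflex is still elicited
    freqs: spatial frequencies to test, in ascending order
    The threshold is the highest frequency that still elicits the reflex,
    or None if even the lowest frequency fails (e.g., total acuity loss).
    """
    threshold = None
    for f in freqs:
        if reflex_observed(f):
            threshold = f
        else:
            break  # reflex lost; the previous frequency is the threshold
    return threshold
```

Under this scheme, the post-ONC finding of total acuity loss corresponds to the sweep failing at the very first frequency.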

Results: The partial optic nerve crush resulted in a total loss of visual acuity as measured by the optokinetic reflex. The deficit did not improve over the following 4 weeks. The visual cliff test showed a non-significant decrease in deep-end preference 1 day post-ONC, but this was not the case on subsequent test occasions. The object recognition test showed no significant trends.

Conclusions: The optokinetic reflex test showed a significant loss of function following the visual deficit, but no recovery. However, a complementary pilot study shows visual recovery with lighter crush intensities. Spatial visual function does not seem to be affected by the ONC, suggesting that the object recognition and visual cliff tests, in their current design, may rely on somatosensory means of exploration.

Brain and Perception

AB057. Diagnostic information for the recognition of 3D forms in humans

Background: The perception of visual form is crucial for effective interaction with our environment and for the recognition of visual objects. Determining the codes underlying this function is therefore a fundamental theoretical objective in the study of visual form perception. The vast majority of research in the field is based on a hypothetico-deductive approach: a theory is first formulated, predictions are derived from it, and these predictions are then tested experimentally. After decades of applying this approach, the field remains far from a consensus on the features underlying the representation of visual form. Our goal is to determine, without theoretical a priori or bias of any kind, the information underlying the discrimination and recognition of 3D visual forms in normal human adults.

Methods: To this end, the adaptive bubble technique developed by Wang et al. [2011] was applied to six synthetic 3D objects, presented under views that varied from one trial to the next. The technique presents stimuli that are partially revealed through Gaussian windows, whose locations are random and whose number is adjusted to maintain a fixed performance criterion. The experimental program progressively uses participants' performance to determine the stimulus regions they rely on to recognize the objects. The synthetic objects used in this study are unfamiliar and were generated with a program developed in C. Edward Connor's lab at the Johns Hopkins University School of Medicine.
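The bubbles procedure described above can be sketched in code. The following is a minimal illustration, not the authors' actual experimental program: Gaussian apertures are placed at random locations, summed into a mask, and used to reveal the stimulus against a uniform background; the number of bubbles is then nudged by a simple up/down rule toward a performance criterion (the real adaptive rule in Wang et al. [2011] may differ). All function names and parameters here are hypothetical.

```python
import numpy as np

def bubble_mask(shape, n_bubbles, sigma, rng):
    """Sum of randomly placed Gaussian apertures, clipped to [0, 1]."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    mask = np.zeros(shape)
    for _ in range(n_bubbles):
        cy, cx = rng.uniform(0, h), rng.uniform(0, w)
        mask += np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))
    return np.clip(mask, 0.0, 1.0)

def reveal(stimulus, mask, background=0.5):
    """Blend the stimulus with a uniform background through the mask."""
    return mask * stimulus + (1 - mask) * background

def update_n_bubbles(n, correct, step=1):
    """Simple up/down adjustment (a simplification of the adaptive rule):
    fewer bubbles after a correct response, more after an error."""
    return max(1, n - step) if correct else n + step
```

On each trial, the stimulus would be shown through `reveal(stimulus, bubble_mask(...))`, and `update_n_bubbles` would set the bubble count for the next trial, so that sparser masks follow good performance.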

Results: The results were integrated across participants to identify the regions of the presented stimuli that determine observers' ability to recognize them, i.e., their diagnostic attributes. The results will be reported graphically as a Z-score map superimposed on silhouettes of the objects presented during the experiment. This map quantifies the importance of the different regions of an object's visible surface for its recognition by the participants.
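A common way to derive such a diagnostic map in bubbles studies, sketched here under the assumption of a classification-image style analysis (the abstract does not specify the authors' exact pipeline), is to contrast the average mask on correct versus incorrect trials and standardize the result; the function name and normalization below are illustrative assumptions.

```python
import numpy as np

def diagnostic_zmap(masks, correct):
    """Classification-image style Z-score map from per-trial bubble masks.

    masks:   (n_trials, h, w) array of the masks shown on each trial
    correct: (n_trials,) boolean array of response accuracy

    Each pixel's value is the difference between the mean mask on correct
    and incorrect trials, standardized by the map's own mean and SD (a
    simple normalization; published analyses may use a permutation null).
    """
    masks = np.asarray(masks, dtype=float)
    correct = np.asarray(correct, dtype=bool)
    ci = masks[correct].mean(axis=0) - masks[~correct].mean(axis=0)
    return (ci - ci.mean()) / ci.std()
```

Pixels with high positive Z-scores are those revealed more often on correct trials, i.e., candidate diagnostic attributes to superimpose on the object silhouettes.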

Conclusions: The diagnostic attributes identified are best described as surface fragments. Some of these fragments lie on or near the outer edge of the stimulus, while others are relatively distant from it. Overlap between the effective attributes for different viewpoints of the same object is minimal. This suggests that the features underlying object recognition are viewpoint-specific; in other words, they do not generalize across viewpoints.

Publisher information
Zhongshan Ophthalmic Center, Sun Yat-sen University