Brain and Perception

AB065. Pedestrian modeling using the least-action principle

Background: In this work, we present a theoretical and experimental study of the natural movement of pedestrians passing through a limited and known area of a shopping center. The problem of modeling the motion of a single pedestrian is complex and extensive; we therefore focus on the need to design models that take mechanistic aspects of human locomotion into account. The theoretical study used mean values of pedestrian characteristics, e.g., density and velocity, in the presence of multiple obstacles. We propose a model of human pedestrian trajectories based on the least-action principle and compare it against experimental results. The experimental study was conducted in a Living Lab inside a shopping center using infrared cameras. For this experiment, we collected highly accurate trajectories that allowed us to quantify pedestrian crowd dynamics. The tests comprised 20 runs distributed over five days with up to 25 test persons. Additionally, to gain a better understanding of the subjects' trajectories, we simulated several different pathway scenarios and compared them with the real trajectories. Our theoretical framework minimizes the error between previously simulated and real pathway points in order to predict future points on the subject's trajectory.

Methods: This paper explores the paths of 25 pedestrians through a known area. After obtaining the trajectories and their points of origin, we evaluated the speed in order to calculate the kinetic energy of each pedestrian. In the present model, we assume that the principle of least action holds, and using this concept we can obtain the potential energy. Once all the forces governing pedestrian movement are known, we then calibrate the parameters employed in the equations of the social force model.
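
For reference, the quantities involved can be written compactly under the standard Lagrangian form of the least-action principle and the Helbing–Molnár social force model; the abstract itself does not spell out these equations, so the notation below is added here:

S[\mathbf{x}] = \int_{t_0}^{t_1} L\big(\mathbf{x}(t), \dot{\mathbf{x}}(t)\big)\, dt, \qquad L = T - V = \tfrac{1}{2}\, m\, \lVert \dot{\mathbf{x}} \rVert^{2} - V(\mathbf{x}),

m_i \frac{d\mathbf{v}_i}{dt} = m_i\, \frac{v_i^{0}\, \mathbf{e}_i - \mathbf{v}_i}{\tau_i} + \sum_{j \neq i} \mathbf{f}_{ij} + \sum_{W} \mathbf{f}_{iW},

where T and V are the kinetic and potential terms, v_i^{0} and \mathbf{e}_i are the desired speed and direction of pedestrian i, \tau_i is a relaxation time, and \mathbf{f}_{ij}, \mathbf{f}_{iW} are the interaction forces with other pedestrians and with walls or obstacles.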

Results: It is possible to reproduce the observed results for real pedestrian movement by using the principle of least action. In the first scenario, we focused on a pedestrian walking without obstacles. Using the actual trajectories from the experiment, we obtained the necessary information and applied it to the social force model. Our simulations were clearly able to reproduce the observed average trajectories for the obstacle-free walking condition.
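
Purely as an illustration (not the authors' code), the sketch below simulates the obstacle-free case using only the driving (relaxation) term of the social force model; the desired speed, relaxation time, mass, and time step are assumed values, not taken from the abstract.

import numpy as np

def simulate_free_walk(start, goal, v0=1.3, tau=0.5, m=70.0, dt=0.05, steps=400):
    """Integrate the driving term of the social force model (no obstacles).

    start, goal : 2D positions in metres; v0 : desired speed (m/s);
    tau : relaxation time (s); m : mass (kg). All parameter values here
    are illustrative assumptions.
    """
    x = np.asarray(start, dtype=float)
    goal = np.asarray(goal, dtype=float)
    v = np.zeros(2)
    path = [x.copy()]
    for _ in range(steps):
        to_goal = goal - x
        dist = np.linalg.norm(to_goal)
        if dist < 0.1:                       # close enough to the goal
            break
        e = to_goal / dist                   # unit vector towards the goal
        f = m * (v0 * e - v) / tau           # driving force (relaxation term)
        v += (f / m) * dt                    # explicit Euler integration
        x += v * dt
        path.append(x.copy())
    return np.array(path)

trajectory = simulate_free_walk(start=(0.0, 0.0), goal=(10.0, 2.0))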

Conclusions: When a scenario does not represent free walking (obstacles, constraints), the potential energy and the kinetic energy are modified. Note that for the real trajectory, the action is assumed to equal zero; that is, the value of the potential energy changes with each interaction with a new obstacle, while the value of the action remains unchanged. It is shown here that we can clearly reproduce some scenarios and calibrate the model according to different situations. Using different values of the potential energy, we can recover the values of the actual pathway. Nevertheless, as a significant extension of this model, it would be desirable to simulate cellular automata that could learn the situation and improve the approximation model so as to predict the real trajectories with more accuracy.

Brain and Perception

AB056. Multisensory stochastic facilitation: effects on thresholds and reaction times

Background: The concept of stochastic facilitation suggests that adding a precise amount of white noise can improve the perceptibility of a weak-amplitude stimulus. We know from previous research that tactile and auditory noise can each facilitate visual perception. Here we wanted to see whether the effects of stochastic facilitation generalise to a reaction time paradigm, and whether reaction times are correlated with tactile thresholds. We know that when multiple sensory systems are stimulated simultaneously, reaction times are faster than to either stimulus alone, and can even be faster than predicted by an independent race between the unisensory signals (the race model).
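
For context, the race model is commonly assessed with the race-model (Miller) inequality, which bounds the cumulative reaction-time distribution for bimodal stimulation by the sum of the two unisensory distributions; the notation below is added here and does not appear in the abstract:

F_{12}(t) \;\le\; F_{1}(t) + F_{2}(t) \quad \text{for all } t,

where F_{12}, F_{1} and F_{2} are the cumulative reaction-time distributions for the bimodal condition and the two unisensory conditions. Reaction times fast enough to violate this bound are usually taken as evidence of multisensory coactivation rather than a simple race.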

Methods: Five participants were tested repeatedly in five blocks, each containing a different background noise level, randomly ordered across sessions. At each noise level, they performed a tactile threshold detection task and a tactile reaction time task.

Results: Both tactile thresholds and tactile reaction times were significantly affected by the background white noise. While the preferred white-noise amplitude differed across participants, the lowest average threshold was obtained with white noise presented binaurally at 70 dB. The reaction times were analysed by fitting an ex-Gaussian distribution, the convolution of a Gaussian with an exponential decay function (equivalently, the sum of a Gaussian and an exponentially distributed random variable). The white noise significantly affected the exponential parameter (tau) in a way that is compatible with the facilitation of thresholds.
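
As an illustration only (the abstract does not state the fitting procedure), an ex-Gaussian can be fitted by maximum likelihood with scipy.stats.exponnorm, which parameterises the distribution by K = tau / sigma; the simulated reaction times below merely stand in for real data.

import numpy as np
from scipy import stats

# Simulated reaction times in seconds (placeholder for real data).
rng = np.random.default_rng(0)
rt = rng.normal(0.35, 0.05, 500) + rng.exponential(0.10, 500)

# Maximum-likelihood fit; exponnorm's shape parameter is K = tau / sigma.
K, loc, scale = stats.exponnorm.fit(rt)
mu, sigma = loc, scale
tau = K * scale
print(f"mu = {mu:.3f} s, sigma = {sigma:.3f} s, tau = {tau:.3f} s")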

Conclusions: We therefore conclude that multisensory reaction time facilitation can, at least in part, be explained by stochastic facilitation of the neural signals.

Brain and Perception

AB054. Audio-visual multiple object tracking

Background: The ability to track objects as they move is critical for successful interaction with objects in the world. The multiple object tracking (MOT) paradigm has demonstrated that, within limits, our visual attention capacity allows us to track multiple moving objects among distractors. Very little is known about dynamic auditory attention and the role of multisensory binding in attentional tracking. Here, we examined whether dynamic sounds congruent with visual targets could facilitate tracking in a 3D-MOT task.

Methods: Participants tracked one or multiple target-spheres among identical distractor-spheres during 8 seconds of movement in a virtual cube. In the visual condition, targets were identified with a brief colour change, but were then indistinguishable from the distractors during the movement. In the audio-visual condition, the target-spheres were accompanied by a sound, which moved congruently with the change in the target’s position. Sound amplitude varied with distance from the observer and inter-aural amplitude difference varied with azimuth.
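
Purely to illustrate the kind of rendering rule described (the exact gain laws used in the experiment are not given in the abstract), the sketch below assumes an inverse-distance amplitude law and a sine-law interaural level difference; both assumptions are hypothetical.

import numpy as np

def render_gains(azimuth_deg, distance, ref_distance=1.0, max_ild_db=10.0):
    """Illustrative left/right gains for a moving target sound.

    Assumptions (not from the abstract): overall amplitude falls off as
    ref_distance / distance, and the interaural level difference scales
    with sin(azimuth) up to max_ild_db decibels.
    """
    amp = ref_distance / max(distance, ref_distance)        # distance attenuation
    ild_db = max_ild_db * np.sin(np.radians(azimuth_deg))   # interaural level difference
    left = amp * 10.0 ** (-ild_db / 40.0)                   # split the ILD across ears
    right = amp * 10.0 ** (+ild_db / 40.0)
    return left, right

left_gain, right_gain = render_gains(azimuth_deg=30.0, distance=2.0)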

Results: Results with one target showed that performance was better in the audio-visual condition, which suggests that congruent sounds can facilitate attentional visual tracking. However, with multiple targets, the sounds did not facilitate tracking.

Conclusions: This suggests that audiovisual binding may not be possible when attention is divided between multiple targets.
