Citation: ZHU H P, KONG Y H, OHNO R. Research Progress on Environmental Visual Information Analysis and Perception Measurement Methods for Motion Vision[J]. Landscape Architecture, 2025, 32(5): 1-10.

Research Progress on Environmental Visual Information Analysis and Perception Measurement Methods for Motion Vision

  • Objective Rapid urbanization has prioritized functional and efficient architectural and urban space design, often at the expense of human-centered spatial experience. As China’s urbanization shifts toward optimizing existing spaces, the focus of public space design is evolving to emphasize ambiance and user experience. Evidence-based design, rooted in the “human-space-experience” relationship, has become essential for understanding how people perceive and engage with spaces, offering a foundation for creating more humanized environments. Cognition of built environments, including urban spaces and landscapes, relies on dynamic visual exploration rather than static observation. Visual information, which changes continuously during movement, plays a critical role in spatial cognition and environmental experience. Dynamic perception enables a more comprehensive understanding of spaces, making it vital for improving design quality and user satisfaction. Emerging technologies such as panoramic imaging, virtual reality (VR), and wearable sensors provide new opportunities to quantify visual information, simulate dynamic perception, and evaluate subjective experience. These advancements have made dynamic visual perception in urban public spaces a key research focus. This research reviews methods for analyzing environmental visual information and measuring dynamic perception. By integrating objective physical environment analysis with subjective perception evaluation, the research proposes a unified framework to explore the mechanisms linking built environments with spatial cognition, and outlines future research directions.
    Methods This research employs a comprehensive review methodology to examine the mechanisms of dynamic visual perception in urban public spaces. By integrating insights from environmental psychology, urban design, and visual perception studies, the research systematically explores both objective and subjective dimensions of spatial cognition. For the analysis of objective physical environments, the research reviews advancements in panoramic imaging, skyline and greenery visibility assessments, and dynamic visual metrics such as optical flow and motion parallax (an illustrative computation of optical flow is sketched after this abstract). These methods are evaluated based on their accuracy, computational efficiency, and applicability to real-world environments. In terms of subjective visual perception, the research reviews methods for simulating dynamic experiences through VR, including immersive navigation, motion tracking, and behavior re-creation. This review highlights approaches for designing realistic visual experiences and capturing human responses to dynamic environments. Additionally, techniques for quantifying subjective perceptions are explored, including real-time emotion evaluation using wearable sensors, physiological measurements, and machine learning models for multimodal data analysis. Challenges such as data annotation, contextual dependency, and ethical considerations are critically examined to address the complexity of perception assessment. By synthesizing these methods, the research establishes a structured framework that supports the evaluation and simulation of dynamic visual perception in built environments, providing a robust foundation for future research and practical applications.
    Results Environmental visual information analysis methods: Panoramic imaging has been shown to offer significant advantages in environmental visual information analysis, enabling comprehensive capture of 360° three-dimensional environmental data centered on the human viewpoint. This method provides a more accurate and reliable representation of the “viewpoint-environment” relationship, overcoming limitations such as restricted shooting angles and lens distortion. Current research primarily focuses on static visual information, such as greenery visibility and sky visibility, derived from street view data or panoramic images (an illustrative computation of such visibility indices is sketched after this abstract). The main research trends are improving the accuracy of visual element recognition and enhancing the ability to recognize specific scene elements. While pedestrian trajectory tracking and space syntax-based visual field analysis are well developed for dynamic visual information, a gap remains in the quantification and visualization of motion-induced visual cues such as optical flow and motion parallax. Motion perception simulation technology: Studies indicate a clear difference in motion perception between image sequences and films. Sequential images fail to effectively convey dynamic visual cues, making them inadequate for simulating motion perception. VR environments, combined with omnidirectional treadmills and handheld controllers, provide more accurate motion simulation by allowing users to perform physical movements, choose walking paths freely, and replicate real-world tour behaviors. Subjective perception quantification methods: Wearable sensors, capable of capturing millisecond-level physiological responses to environmental stimuli, have become an effective tool for evaluating subjective environmental experience (a preprocessing sketch for such signals also follows this abstract). However, it remains challenging to use physiological data to precisely identify emotions and to understand the dynamic process of perception. Adding time-sequenced descriptive measures to traditional measurement methods can enhance the accuracy of subjective perception evaluation. Despite the promising applications of machine learning in subjective perception research, challenges such as data annotation difficulties, context dependence, and privacy concerns persist.
    Conclusion The research demonstrates the advantages of panoramic images in capturing comprehensive visual information in both static and dynamic environments, offering a more accurate representation of spatial relationships and overcoming traditional limitations. However, further development is needed in the quantification and visualization of dynamic visual cues, such as motion parallax and optical flow, as well as in the real-time analysis of dynamic visual information using machine learning. Motion perception simulation studies have highlighted the limitations of traditional sequential images and emphasized the benefits of virtual reality environments for more accurate and immersive experiences. Additionally, wearable sensors provide an effective means of quantifying subjective perception, though challenges related to data annotation, context dependence, and privacy must be addressed. Future research directions include improving multimodal fusion techniques, developing personalized perception models, and enhancing the interpretability and transparency of machine learning models, all while ensuring privacy protection.
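The Methods and Results sections above note that motion-induced visual cues such as optical flow remain under-quantified and under-visualized. As a minimal sketch of how such a cue could be measured from a walking-view video, the following Python example computes Farnebäck dense optical flow between consecutive frames with OpenCV; the file name walk.mp4 and the use of mean per-frame flow magnitude as a summary metric are illustrative assumptions, not the procedure of any study reviewed in the article.

```python
# Illustrative sketch: quantify optical flow magnitude in a walking-view video.
# Assumptions: a video file "walk.mp4" exists; mean per-frame flow magnitude is
# used as a simple summary of motion-induced visual stimulation.
import cv2
import numpy as np

cap = cv2.VideoCapture("walk.mp4")
ok, prev = cap.read()
assert ok, "could not read walk.mp4"
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

mean_magnitudes = []  # one value per consecutive frame pair
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Farnebäck dense optical flow: per-pixel (dx, dy) displacement between frames
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude, _angle = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    mean_magnitudes.append(float(np.mean(magnitude)))
    prev_gray = gray
cap.release()

print(f"frames analysed: {len(mean_magnitudes)}, "
      f"mean flow magnitude: {np.mean(mean_magnitudes):.2f} px/frame")
```

Frame-wise magnitudes of this kind could, in principle, be mapped back onto the walking route to visualize where motion stimulation is strongest, which is one way the gap noted in the Results might be addressed.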
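The Results section also discusses greenery and sky visibility derived from panoramic or street view imagery. A common operationalization is the weighted pixel share of vegetation and sky classes in a semantically segmented image; for an equirectangular panorama, rows near the zenith and nadir cover a smaller solid angle and are down-weighted by the cosine of latitude. The sketch below assumes a per-pixel label map is already available from some segmentation model; the class IDs and index names are hypothetical placeholders rather than values from the reviewed literature.

```python
# Illustrative sketch: greenery/sky visibility from a segmented equirectangular panorama.
# Assumptions: `labels` is an (H, W) array of class IDs from a semantic segmentation
# model; VEGETATION and SKY are hypothetical class IDs.
import numpy as np

VEGETATION, SKY = 8, 10  # hypothetical label IDs

def view_indices(labels: np.ndarray) -> dict:
    h, w = labels.shape
    # Solid-angle weight per row: pixels near the top/bottom of an equirectangular
    # image cover less of the viewing sphere, proportional to cos(latitude).
    latitudes = (np.arange(h) + 0.5) / h * np.pi - np.pi / 2
    weights = np.repeat(np.cos(latitudes)[:, None], w, axis=1)
    total = weights.sum()
    green = weights[labels == VEGETATION].sum() / total
    sky = weights[labels == SKY].sum() / total
    return {"green_view_index": float(green), "sky_view_factor": float(sky)}

# Example with a random label map, just to show the expected input shape.
demo_labels = np.random.randint(0, 20, size=(512, 1024))
print(view_indices(demo_labels))
```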
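Finally, the abstract notes that wearable sensors capture millisecond-level physiological responses but that linking them to perceived emotion remains difficult. A typical first step is to segment a continuous signal, such as electrodermal activity (EDA), into fixed time windows aligned with the walking session and to extract simple features for later modelling. The sampling rate, window length, and feature set below are illustrative assumptions, not those of the reviewed studies.

```python
# Illustrative sketch: windowed feature extraction from a wearable EDA signal.
# Assumptions: `eda` is a 1-D electrodermal activity trace sampled at FS Hz,
# time-synchronized with the start of a walking session; 10 s windows are arbitrary.
import numpy as np

FS = 4        # sampling rate in Hz (e.g., some wrist-worn sensors sample EDA at 4 Hz)
WINDOW_S = 10 # window length in seconds

def windowed_features(eda: np.ndarray) -> list[dict]:
    win = FS * WINDOW_S
    features = []
    for start in range(0, len(eda) - win + 1, win):
        segment = eda[start:start + win]
        features.append({
            "t_start_s": start / FS,
            "mean": float(segment.mean()),   # tonic level within the window
            "std": float(segment.std()),     # variability within the window
            "slope": float(np.polyfit(np.arange(win), segment, 1)[0]),  # trend
        })
    return features

# Example with synthetic data standing in for a 5-minute recording.
demo_eda = 2.0 + 0.1 * np.random.randn(FS * 300)
print(windowed_features(demo_eda)[:2])
```

In practice, window-level features of this kind would be joined with location or stimulus annotations before any emotion classification is attempted, which is where the annotation and context-dependence challenges discussed above arise.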