CN 11-5366/S     ISSN 1673-1530


Research on Measurement Methodology of Green View Index Based on Orthophoto Data

  • Abstract:
    Objective To address the low efficiency and limited coverage of traditional green view index (GVI) measurement methods, this study proposes a novel GVI measurement method based on orthophoto data, aiming to build a semi-automated framework that improves the efficiency and applicability of urban green space visibility assessment.
    Methods Based on the principles of projective geometry, a mathematical model linking GVI to orthophoto parameters was constructed. Vegetation spatial distribution and projection information were extracted from orthophotos, 3D models were built to simulate GVI scenes, and a Python program automated the identification and calculation of GVI. Finally, the reliability of the method was verified through comparison with GVI values derived from field camera images.
    Results A GVI calculation formula based on orthophoto data was logically derived, demonstrating the theoretical feasibility of the method and revealing the quantitative mapping between GVI and factors such as the geometric parameters of plant shadows in orthophotos, the solar elevation angle, and the spatial distance between observation points and surrounding greenery. Analysis of 275 pairs of field-measured and simulated GVI data showed a significant linear relationship between the two, validating the reliability of the method.
    Conclusion The orthophoto-based GVI measurement method effectively characterizes the 3D visibility of urban green spaces, provides a scalable technical pathway for automated GVI measurement, and helps advance urban greening evaluation toward refinement and standardization.
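The automated "identification and calculation" step described in the Methods, counting green pixels in a rendered viewpoint image, can be sketched in Python. The RGB classification rule and the toy image below are illustrative assumptions, not the study's actual thresholds or data.

```python
# Minimal sketch of the green-pixel GVI step: given an image as a grid of
# (R, G, B) tuples, classify "green" pixels and report their proportion.
# The RGB rule below is an illustrative assumption, not the study's thresholds.

def is_green(r: int, g: int, b: int) -> bool:
    """Heuristic: the green channel dominates both red and blue."""
    return g > r and g > b and g > 60

def green_view_index(image: list[list[tuple[int, int, int]]]) -> float:
    """Proportion of green pixels in the simulated viewpoint image (0-1)."""
    total = sum(len(row) for row in image)
    green = sum(1 for row in image for (r, g, b) in row if is_green(r, g, b))
    return green / total if total else 0.0

# Toy 2x4 image: half of the pixels are vegetation-like greens.
img = [
    [(34, 139, 34), (120, 200, 80), (200, 200, 210), (90, 80, 70)],
    [(40, 160, 60), (60, 180, 90), (180, 170, 160), (130, 120, 200)],
]
print(f"GVI = {green_view_index(img):.1%}")
```

In practice this classification would run on the rendered 3D scene image rather than a hand-written grid, and a trained segmentation model could replace the threshold rule.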

     

    Abstract:
    Objective Against the backdrop of accelerating global urbanization, urban green space systems critically enhance ecological quality, mitigate urban heat island effects and air pollution, and deliver substantial physiological and psychological benefits to residents. Under the "Beautiful China" strategy, precise urban greening assessment has emerged as a core challenge in landscape architecture. The green view index (GVI), a 3D perceptual metric quantifying the proportion of visible green vegetation, better fulfills human-centered evaluation needs than traditional 2D indicators such as the green coverage rate. However, existing methods relying on manual field surveys, LiDAR point clouds, or street view imagery suffer from inefficiency, high costs, viewpoint simulation biases, and limited coverage (often restricted to road networks), hindering large-scale urban applications. To address these interdisciplinary challenges in urban green space evaluation, this study proposes a novel orthophoto-based GVI measurement method that establishes a semi-automated framework, overcoming traditional data limitations and significantly enhancing the efficiency, spatial coverage, and practical applicability of urban green visibility assessment for diverse green space types, including enclosed parks and campuses.
    Methods Two representative small-to-medium-scale, low-density green spaces, Ginkgo Square (YS) on a university campus and Jinxiu Park (JS) in Hangzhou's Lin'an District, were strategically selected for validation, as their simplified vegetation structures minimized 3D occlusion complexities in this foundational study. Using stratified sampling reflecting pedestrian movement patterns, 112 observation points covered roads, recreational zones, and transitional areas. Based on projective geometry principles, a quantitative "shadow-facade-GVI" mapping model was developed. First, DJI Mavic 2 drones captured high-resolution orthophotos (100 m altitude, 10 mm focal length, GSD = 24 mm) during summer daylight hours, while 308 on-site GVI images were synchronously taken at a 1.5 m eye height using standardized protocols (24 mm focal length, 16∶9 aspect ratio) to simulate the human perspective. Pix4Dmapper software performed geometric corrections (WGS 84 / UTM zone 50N), enabling manual extraction of vegetation shadow locations and pixel areas (m_1) from the orthophotos. Vegetation facade areas (S_2 = S_1 · tanθ_S) and equivalent volumes (modeled as rotational ellipsoids) were derived using the solar elevation angle (θ_S), dynamically calculated from GPS coordinates, date, and local time. SketchUp then constructed simplified 3D scene models, representing trees as ellipsoid canopies (derived from crown shadows) combined with cylindrical trunks, extruding shrubs and structures from their shadow footprints, and modeling dense woodlands as aggregated volumes. The theoretically derived formula I_GV = (m_1 · tanθ_S / M) × (f_1 · p · H / (f_2 · c · L))² × 100% quantified the relationships between GVI and orthophoto-measured parameters. Python scripts automated GVI calculation by identifying green pixels, with pedestrian-route-weighted spatial integration generating the overall site GVI. Model reliability was rigorously tested via Spearman's rank correlation and linear regression analyses on 275 paired field-simulation samples.
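The geometric core of the method (solar elevation angle from location, date, and time; facade area from shadow area via S_2 = S_1 · tanθ_S; and the derived GVI formula) can be sketched as follows. The solar position formula is the standard textbook declination/hour-angle approximation, not necessarily the paper's exact implementation, and all variable names mirror the symbols in the text.

```python
import math

def solar_elevation_deg(lat_deg: float, day_of_year: int, solar_hour: float) -> float:
    """Approximate solar elevation angle (degrees) from latitude, day of year,
    and local solar time, via the standard declination/hour-angle formula.
    A textbook approximation, not the paper's exact implementation."""
    decl = math.radians(23.44) * math.sin(math.radians(360.0 / 365.0 * (284 + day_of_year)))
    hour_angle = math.radians(15.0 * (solar_hour - 12.0))
    lat = math.radians(lat_deg)
    sin_elev = (math.sin(lat) * math.sin(decl)
                + math.cos(lat) * math.cos(decl) * math.cos(hour_angle))
    return math.degrees(math.asin(sin_elev))

def facade_area(shadow_area: float, elev_deg: float) -> float:
    """S_2 = S_1 * tan(theta_S): recover vertical facade area from shadow area."""
    return shadow_area * math.tan(math.radians(elev_deg))

def gvi(m1: float, M: float, elev_deg: float,
        f1: float, p: float, H: float, f2: float, c: float, L: float) -> float:
    """I_GV = (m_1 * tan(theta_S) / M) * (f1*p*H / (f2*c*L))^2 * 100%."""
    scale = (f1 * p * H) / (f2 * c * L)
    return (m1 * math.tan(math.radians(elev_deg)) / M) * scale ** 2 * 100.0

# Example: roughly Hangzhou's latitude (~30.2 deg N) at solar noon near the
# summer solstice (day 172); the sun stands high, so shadows are short.
theta = solar_elevation_deg(30.2, 172, 12.0)
print(f"solar elevation at noon: {theta:.1f} degrees")
```

Note that a higher solar elevation shrinks the shadow for the same facade, which is exactly what the tanθ_S factor compensates for.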
    Results Mathematical derivation confirmed that GVI is directly calculable from orthophoto-extracted parameters (shadow area m_1, solar elevation angle θ_S, camera focal length f_2, pixel size c, and observer-to-vegetation distance L), establishing a robust 2D-to-3D visual perception conversion mechanism. Statistical analysis of the 275 validation datasets revealed a highly significant linear relationship between simulated and field-measured GVI: simulated GVI = 0.82 × field GVI + 0.13 (R² = 0.593, p < 0.001). Site-level GVI errors remained low (YS: 44.60% simulated vs. 41.17% field, Δ = 3.43%; JS: 38.19% vs. 32.63%, Δ = 5.56%), demonstrating the method's consistency. Scenario-based analysis further revealed the strongest correlation in open recreational areas (r = 0.841), tightly clustered errors at road nodes, systematic overestimation in low-GVI scenarios (< 40%) likely due to minor shadow detection artifacts, and higher variability at open-space viewpoints (IQR span: 0.1676) attributable to broader sightlines. These patterns collectively identify spatial heterogeneity as a key accuracy-influencing factor, informing future model refinements.
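The regression step of the validation can be sketched with a plain least-squares fit. The data below are synthetic, generated noise-free from the reported relationship (simulated = 0.82 × field + 0.13) purely to illustrate the fitting mechanics; they are not the study's 275 samples.

```python
# Sketch of the validation step: fit simulated GVI against field GVI with
# ordinary least squares. Data are synthetic, built from the paper's reported
# relationship (simulated = 0.82 * field + 0.13) for illustration only.

def linfit(x: list[float], y: list[float]) -> tuple[float, float]:
    """Return (slope, intercept) of the least-squares line y ~ a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

field = [i / 100 for i in range(10, 90, 5)]       # field-measured GVI (0-1)
simulated = [0.82 * g + 0.13 for g in field]      # noise-free synthetic values
slope, intercept = linfit(field, simulated)
print(f"simulated GVI = {slope:.2f} * field GVI + {intercept:.2f}")
```

On real, noisy samples the same fit would also yield the R² and p-values reported above (e.g. via scipy.stats.linregress).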
    Conclusion This study innovatively leverages widely available orthophotos to create a semi-automated GVI measurement framework, achieving a standardized image analysis workflow, breaking street-view imagery's dependence on the road network, and extending robust assessment to parks, campuses, and enclosed green spaces previously excluded from automated evaluation. With significantly reduced errors compared with traditional methods, the approach establishes a scalable pathway for citywide green-view database construction, advancing urban greening evaluation toward operational precision and standardization. The current manual modeling steps serve only for validation, while parametric tools make full automation feasible. Integration with China's National Territorial Survey Cloud Platform enables seamless geospatial data fusion, and digital twin compatibility supports dynamic visualization of green view service efficacy, serving the human-centered ecological governance goals of the "Beautiful China" strategy. Current limitations include systematic errors of about 10% on sloped terrain caused by the absence of terrain data, and dense canopy occlusion that requires aggregated estimation. These will be addressed in the near future by fusing open-source DEM data for terrain-aware occlusion modeling and by embedding semantic-segmentation-based deep learning algorithms for automated shadow segmentation and classification.
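The pedestrian-route-weighted spatial integration mentioned in the Methods amounts to a weighted average of point-level GVIs. A minimal sketch follows; the weights (relative pedestrian traffic per observation point) and values are hypothetical, and the paper's actual weighting scheme may differ.

```python
# Minimal sketch of pedestrian-route-weighted aggregation of point GVIs into
# a site-level value. Weights (relative route usage per observation point)
# are hypothetical assumptions for illustration.

def site_gvi(point_gvi: list[float], weights: list[float]) -> float:
    """Weighted mean of observation-point GVIs."""
    total_w = sum(weights)
    return sum(g * w for g, w in zip(point_gvi, weights)) / total_w

points = [0.45, 0.38, 0.52, 0.30]   # per-point GVI at sampled viewpoints
usage  = [3.0, 1.0, 2.0, 2.0]       # relative pedestrian traffic (assumed)
print(f"site GVI = {site_gvi(points, usage):.1%}")
```

Weighting by route usage means heavily trafficked viewpoints dominate the site-level figure, matching the human-centered intent of GVI.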

     
