Abstract:
Objective Against the backdrop of accelerating global urbanization, urban green space systems critically enhance ecological quality, mitigate urban heat island effects and air pollution, and deliver substantial physiological and psychological benefits to residents. Under the "Beautiful China" strategy, precise urban greening assessment has emerged as a core challenge in landscape architecture. The green view index (GVI), a 3D perceptual metric quantifying the proportion of visible green vegetation, fulfills human-centered evaluation needs better than traditional 2D indicators such as the green coverage rate. However, existing methods relying on manual field surveys, LiDAR point clouds, or street view imagery suffer from inefficiency, high costs, viewpoint simulation biases, and limited coverage (often restricted to road networks), hindering large-scale urban applications. To address these interdisciplinary challenges in urban green space evaluation, this study proposes a novel orthophoto-based GVI measurement method within a semi-automated framework, overcoming traditional data limitations and significantly enhancing the efficiency, spatial coverage, and practical applicability of urban green visibility assessment for diverse green space types, including enclosed parks and campuses.
Methods Two representative small-to-medium-scale, low-density green spaces, Ginkgo Square (YS) on a university campus and Jinxiu Park (JS) in Hangzhou's Lin'an District, were strategically selected for validation, as their simplified vegetation structures minimized 3D occlusion complexities during this foundational study. Using stratified sampling reflecting pedestrian movement patterns, 112 observation points covered roads, recreational zones, and transitional areas. Based on projective geometry principles, a quantitative "shadow−facade−GVI" mapping model was developed. First, DJI Mavic 2 drones captured high-resolution orthophotos (100 m altitude, 10 mm focal length, GSD = 24 mm) during summer daylight hours, while 308 on-site GVI images were synchronously taken at a 1.5 m eye height using standardized protocols (24 mm focal length, 16∶9 aspect ratio) to simulate the human perspective. Pix4Dmapper software performed geometric corrections (WGS 84/UTM zone 50N), enabling manual extraction of vegetation shadow locations and pixel areas (m1) from the orthophotos. Vegetation facade areas (S2 = S1·tanθ_S) and equivalent volumes (modeled as rotational ellipsoids) were derived using the solar elevation angle (θ_S), dynamically calculated from GPS coordinates, date, and local time. SketchUp then constructed simplified 3D scene models, representing trees as ellipsoid canopies (derived from crown shadows) combined with cylindrical trunks, extruding shrubs and structures from their shadow footprints, and modeling dense woodlands as aggregated volumes. The theoretically derived formula I_GV = (m1 × tanθ_S / M) × [f1 × p × H / (f2 × c × L)]² × 100% quantified the relationship between GVI and orthophoto-measured parameters. Python scripts automated GVI calculation by identifying green pixels, with pedestrian-route-weighted spatial integration generating an overall site GVI. Model reliability was rigorously tested via Spearman's rank correlation and linear regression analyses using 275 paired field-simulation samples.
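The core geometric mapping above can be sketched in Python. This is an illustrative sketch under stated assumptions, not the authors' released code: the function names are ours, and the parameter meanings follow the abstract (m1: shadow pixel area; M: total pixels of the field image; f1, p, H: orthophoto focal length, pixel size, and flight altitude; f2, c: field-camera focal length and pixel size; L: observer-to-vegetation distance).

```python
import math

def facade_area(shadow_area_m2: float, solar_elevation_deg: float) -> float:
    """Vegetation facade area from its orthophoto shadow: S2 = S1 * tan(theta_S)."""
    return shadow_area_m2 * math.tan(math.radians(solar_elevation_deg))

def gvi_from_orthophoto(m1, M, theta_s_deg, f1, p, H, f2, c, L):
    """GVI (%) from orthophoto-measured parameters, following the derived formula
    I_GV = (m1 * tan(theta_S) / M) * [f1 * p * H / (f2 * c * L)]**2 * 100.
    """
    scale = (f1 * p * H) / (f2 * c * L)  # 2D-to-3D perspective scaling factor
    return (m1 * math.tan(math.radians(theta_s_deg)) / M) * scale ** 2 * 100.0
```

For example, at a 45° solar elevation (tanθ_S = 1) a shadow occupying one tenth of the reference pixel count, with a unit scaling factor, yields a GVI of 10%.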
Results Mathematical derivation confirmed that GVI is directly calculable from orthophoto-extracted parameters (shadow area m1, solar elevation θ_S, camera focal length f2, pixel size c, and observer-to-vegetation distance L), establishing a robust 2D-to-3D visual perception conversion mechanism. Statistical analysis of the 275 validation datasets revealed a highly significant linear relationship between simulated and field-measured GVI: simulated GVI = 0.82 × field GVI + 0.13 (R² = 0.593, p < 0.001). Site-level GVI errors remained low (YS: 44.60% simulated vs. 41.17% field, Δ = 3.43%; JS: 38.19% vs. 32.63%, Δ = 5.56%), demonstrating the method's consistency. Scenario-based analysis further revealed the strongest correlation in open recreational areas (r = 0.841), tightly clustered errors at road nodes, systematic overestimation in low-GVI scenarios (< 40%) likely due to minor shadow detection artifacts, and higher variability at open-space viewpoints (IQR span: 0.1676) attributable to broader sightlines. These patterns collectively validate spatial heterogeneity as a key accuracy-influencing factor, informing future model refinements.
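The validation statistics reported above, Spearman's rank correlation and a linear fit of simulated against field-measured GVI, can be reproduced with a minimal pure-Python sketch. The helper names are ours, and for brevity tied ranks are not handled (a full analysis would use averaged ranks for ties).

```python
def spearman_rho(x, y):
    """Spearman's rank correlation, no-ties case: rho = 1 - 6*sum(d^2) / (n*(n^2-1))."""
    n = len(x)
    def ranks(v):
        ordered = sorted(v)
        return [ordered.index(a) + 1 for a in v]
    d2 = sum((rx - ry) ** 2 for rx, ry in zip(ranks(x), ranks(y)))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

def linear_fit(x, y):
    """Ordinary least squares: returns (slope, intercept) of y = a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)
    return a, my - a * mx
```

Applied to the 275 paired field-simulation samples, `linear_fit` would yield the reported regression (slope 0.82, intercept 0.13), and `spearman_rho` the rank-correlation used for the reliability test.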
Conclusion This study innovatively leverages widely available orthophotos to create a semi-automated GVI measurement framework, standardizing the image analysis workflow, breaking street-view imagery's dependency on road networks, and extending robust assessment to parks, campuses, and enclosed green spaces previously excluded from automated evaluation. With significantly reduced errors compared with traditional methods, the approach establishes a scalable pathway for constructing citywide green visual databases, advancing urban greening evaluation towards operational precision and standardization. The current manual modeling steps serve validation only; parametric tools offer the potential for full automation. Integration with China's National Territorial Survey Cloud Platform enables seamless geospatial data fusion, while Digital Twin compatibility supports dynamic visualization of green view service efficacy, serving the human-centered ecological governance goals of the "Beautiful China" strategy. Current limitations include systematic errors of about 10% on sloped terrain caused by the absence of terrain data, and dense canopy occlusion that requires aggregated estimation; these will be addressed through near-term enhancements: fusing open-source DEM data for terrain-aware occlusion modeling and embedding deep-learning-based semantic segmentation for automated shadow extraction and classification.