Abstract:
The visible green index is a visual evaluation standard for green space perception. In previous research, the visible green index was usually calculated from 2D images, which cannot fully reflect people's subjective perception of green volume in 3D space. This paper proposes the concept of the panoramic visible green index based on panoramic photography. Spherical panoramic photos are captured with a panoramic camera, and the equidistant cylindrical (equirectangular) projections are then transformed into equal-area cylindrical projections. The transformed images are semantically segmented by convolutional neural network (CNN) models to automatically identify vegetation areas, from which the panoramic visible green index is calculated. Five CNN models are selected and compared, and the Dilated ResNet-105 model achieves the highest recognition accuracy. Finally, Ziyang Park in the Wuchang District of Wuhan is taken as a case study to calculate and analyze the panoramic visible green indices of its roads and squares. Compared with traditional manual identification, Dilated ResNet-105 achieves an average Intersection over Union (IoU) of 62.53%, with an average difference of 9.17% in identified vegetation area. Automatic recognition and calculation of the panoramic visible green index can provide new ideas for related research and offers an accurate, fast, and easy-to-use method for evaluating visible green areas.
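As a minimal illustrative sketch (not the authors' exact implementation), the index described above can be computed by resampling the vegetation mask of an equirectangular panorama onto an equal-area cylindrical grid and taking the vegetation pixel ratio; the function names and the row-resampling shortcut below are assumptions for illustration only.

```python
import numpy as np

def equirect_to_equal_area(mask):
    """Remap a binary vegetation mask from an equidistant cylindrical
    (equirectangular) panorama to a Lambert equal-area cylindrical grid.
    Rows are resampled so the vertical axis is proportional to
    sin(latitude) rather than latitude itself."""
    h = mask.shape[0]
    # output row centres expressed as sin(latitude): +1 at top, -1 at bottom
    sin_lat = 1.0 - 2.0 * (np.arange(h) + 0.5) / h
    lat = np.arcsin(sin_lat)
    # corresponding source rows in the equirectangular image
    src_rows = np.clip(((0.5 - lat / np.pi) * h).astype(int), 0, h - 1)
    return mask[src_rows]

def panoramic_green_index(mask):
    """Panoramic visible green index: the area-true fraction of the full
    viewing sphere covered by pixels labelled as vegetation."""
    equal_area = equirect_to_equal_area(np.asarray(mask, dtype=bool))
    return equal_area.mean()
```

In this sketch the CNN segmentation result is assumed to be a boolean H x W mask aligned with the equirectangular panorama; averaging over the equal-area image gives the vegetation ratio weighted by true solid angle rather than by raw pixel count.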