5 Limitations

The project’s large scale unfortunately introduced errors and difficulties in several geoprocessing applications. Starting with the smaller issue: the government’s cities shapefile could be improved by representing the incorporated area of a metropolitan city as a single point feature rather than 25-30 separate ones. This is the case for Los Angeles, where normalizing the population point file to its count produces a strongly positive value for Los Angeles itself but very weak values for all the points alongside it. Summing all of those point features, however, still yields the clustered, heavily populated segment of the city that Los Angeles is known for. This is one small issue that could be resolved.
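The idea of collapsing a city's many point features into one could be sketched as below. This is a minimal, hypothetical illustration in plain Python: the `(x, y, population)` tuple layout and the two-point Los Angeles example are assumptions for demonstration, not the actual shapefile schema or data from the project.

```python
# Hypothetical sketch: collapse the many point features that make up one
# metropolitan city into a single population-weighted point. The tuple
# layout (x, y, population) is an assumption, not the real schema.

def merge_city_points(points):
    """points: list of (x, y, population) tuples for one city's sub-points.
    Returns a single (x, y, population) tuple located at the
    population-weighted centroid, carrying the summed population."""
    total_pop = sum(p for _, _, p in points)
    if total_pop == 0:
        raise ValueError("city has zero total population")
    cx = sum(x * p for x, _, p in points) / total_pop
    cy = sum(y * p for _, y, p in points) / total_pop
    return (cx, cy, total_pop)

# Example: two sub-points stand in for the 25-30 Los Angeles features.
la_sub_points = [(0.0, 0.0, 100), (2.0, 0.0, 300)]
print(merge_city_points(la_sub_points))  # (1.5, 0.0, 400)
```

In a real workflow the same effect could be achieved with a dissolve operation on the city-name field, so each incorporated area counts once when population is normalized.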

Geoprocessing was excessively slow with a dataset of this size. Before starting the project, my review of past projects showed that other students had classified BC-wide variables before, so I assumed a similar project on California would be feasible. Each Moran’s I and Nearest Neighbor analysis took at least ten minutes, and the layers of the fire-occurrence location map and the basemap were slow to appear. While the Moran’s index still worked, two methods I could not run successfully at the state-wide level were the regression models (OLS and GWR) and Grouping Analysis. The regression models failed even with a single variable selected; they were not meant to handle a dataset covering all of California. Further attempts to replicate the geoprocessing methods at a smaller scale, such as Los Angeles, yielded some results, but as mentioned earlier, the way the city point files were classified meant the regression results were not representative of the metropolitan area.
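For reference, the global Moran's I statistic behind those ten-minute runs can be sketched in a few lines of plain Python. The toy values and binary adjacency weights below are assumptions for illustration only, not data from the California fire dataset; the geoprocessing tool additionally builds the spatial weights and significance tests, which is where most of the runtime goes.

```python
# Illustrative sketch of global Moran's I:
#   I = (n / W) * sum_ij w_ij (x_i - mean)(x_j - mean) / sum_i (x_i - mean)^2
# The observations and weights here are toy assumptions, not project data.

def morans_i(values, weights):
    """values: list of observations; weights: dict {(i, j): w_ij}
    with w_ij > 0 where observations i and j are neighbors."""
    n = len(values)
    mean = sum(values) / n
    dev = [v - mean for v in values]
    w_total = sum(weights.values())
    num = sum(w * dev[i] * dev[j] for (i, j), w in weights.items())
    den = sum(d * d for d in dev)
    return (n / w_total) * (num / den)

# Four observations on a line, each adjacent pair weighted symmetrically.
x = [1, 1, 5, 5]
w = {}
for i in range(3):
    w[(i, i + 1)] = 1.0
    w[(i + 1, i)] = 1.0
print(morans_i(x, w))  # ~0.333, i.e. positive spatial autocorrelation
```

Because the numerator sums over every neighbor pair, runtime grows with the number of features and weights, which is consistent with the slowdown observed on a state-wide dataset.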
