Lab 5

Question 1

/ 1 pts
What is the minimum number of pixels per land cover class included in this validation data set?

Correct!

1,502 (with margin: 0)

Question 2

/ 4 pts
What is the overall accuracy % of your classification?

Correct!

Between 0 and 100

Question 3

/ 3 pts
What is the difference between overall accuracy and kappa statistic?

Your Answer:

The difference between overall accuracy and the kappa statistic is that kappa tells you how much better, or worse, your classifier performs than would be expected by random chance. Since my kappa statistic was 85%, my classifier agrees with the reference data 85% better than a random assignment of pixels to the classes would. Overall accuracy simply tells us how well the pixels are sorted into the correct classes; it is computed by dividing the total number of correctly classified pixels (i.e., the sum of the elements along the major diagonal) by the total number of reference pixels.
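As a quick illustration of that formula (a minimal sketch with a made-up confusion matrix, not the lab data; it assumes map classes in rows and reference classes in columns):

```python
import numpy as np

# Hypothetical 3-class confusion matrix: rows = classified (map), columns = reference.
# Swap in the counts from your own confusion table.
cm = np.array([
    [1450,   30,   22],
    [  40, 1380,   60],
    [  12,   92, 1420],
], dtype=float)

total = cm.sum()

# Overall accuracy: sum of the major diagonal divided by the total reference pixels.
overall_accuracy = np.trace(cm) / total

# Kappa: agreement beyond what the row/column marginals would produce by chance.
chance_agreement = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / total**2
kappa = (overall_accuracy - chance_agreement) / (1 - chance_agreement)

print(f"Overall accuracy: {overall_accuracy:.1%}")
print(f"Kappa: {kappa:.3f}")
```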

Question 4

/ 5 pts
Upload your confusion table.

Question 5

/ 5 pts
In your own words, describe how to interpret overall accuracy, producer’s accuracy, and user’s accuracy. Which class do you most “trust” and why? Which class do you trust least, and why? Be specific in your answers, referring to your confusion matrix.

Question 6

/ 2 pts
Which two classes have the lowest errors of omission?

Your Answer:

Bare Soil and Developed High Intensity

Question 7

/ 2 pts
Which two classes have the lowest errors of commission?

Your Answer:

Developed High Intensity and Bare Soil
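Both omission and commission errors are read off the same confusion matrix: a class's error of omission is 1 minus its producer's accuracy (diagonal count over the reference-column total), and its error of commission is 1 minus its user's accuracy (diagonal count over the classified-row total). A minimal sketch reusing the made-up matrix from above, with purely illustrative class labels (not the lab data):

```python
import numpy as np

# Hypothetical matrix: rows = classified (map), columns = reference.
cm = np.array([
    [1450,   30,   22],
    [  40, 1380,   60],
    [  12,   92, 1420],
], dtype=float)
classes = ["Water", "Bare Soil", "Developed High Intensity"]  # illustrative labels

correct = np.diag(cm)

producers = correct / cm.sum(axis=0)   # producer's accuracy per class
omission = 1 - producers               # reference pixels of the class missed by the map

users = correct / cm.sum(axis=1)       # user's accuracy per class
commission = 1 - users                 # map pixels wrongly labeled as the class

for name, o, c in zip(classes, omission, commission):
    print(f"{name}: omission {o:.1%}, commission {c:.1%}")
```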

Question 8

/ 3 pts
Considering the results from your confusion matrix, how might you try to update or improve your training data set/classification, if you were to run your classification again?

Your Answer:

I would try to include more training data for both developed high-intensity and bare soil, since both have lower user's and producer's accuracy than the other classes. Also, noting where the confusion occurs, I would try to gather more diverse training examples or focus on the specific edge cases the classifier struggles with. I could also refine the input features, for example by removing irrelevant features or scaling existing ones, to improve classification accuracy.

Question 9

/ 8 pts
Compare and contrast the results from your minimum distance and maximum likelihood classifications in terms of overall accuracy, as well as the differences in producers accuracy, users accuracy, and which classes were most frequently confused. In your answer, try to explain these differences in terms of the characteristics of these classifiers, as well as the spectral characteristics of the classes.

Your Answer:

For the maximum likelihood classification, I had an overall accuracy of 88%; for the minimum distance classification, I had an overall accuracy of 91%. The main difference between the two was that the maximum likelihood classification left many pixels unclassified, which skews the producer's and user's accuracy results. The classes most frequently confused were developed high-intensity and bare soil, with roughly 700 pixels confused between them. In terms of class characteristics, each class has a different spectral pattern. For example, bare soil is spectrally very different from water, and in the confusion table we see that 0 pixels are confused between them. Because the spectral signatures of most classes are quite distinct, the confusion table shows relatively little confusion between the other classes.

The link between the differences between those two classifiers and the resulting differences in accuracy could be a little more precise.
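To make that link concrete, here is a small sketch of the two decision rules applied to a single pixel, using made-up class statistics (this is the underlying math, not ENVI's implementation): minimum distance only compares distances to class means, while maximum likelihood weights that distance by each class's covariance and, with a probability threshold, can leave a pixel unclassified.

```python
import numpy as np

# Hypothetical training statistics for two spectrally similar classes in a 2-band space.
means = {"bare_soil": np.array([0.30, 0.45]),
         "developed": np.array([0.35, 0.40])}
covs  = {"bare_soil": np.array([[0.004, 0.001], [0.001, 0.003]]),
         "developed": np.array([[0.010, 0.006], [0.006, 0.012]])}

pixel = np.array([0.33, 0.42])  # a pixel sitting between the two class means

# Minimum distance: nearest class mean, ignoring class variance and band correlation.
md_label = min(means, key=lambda c: np.linalg.norm(pixel - means[c]))

# Maximum likelihood: highest Gaussian log-likelihood (up to a constant), which
# accounts for each class's covariance structure.
def log_likelihood(x, mean, cov):
    diff = x - mean
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (logdet + diff @ np.linalg.solve(cov, diff))

ml_label = max(means, key=lambda c: log_likelihood(pixel, means[c], covs[c]))

print("Minimum distance assigns:", md_label)
print("Maximum likelihood assigns:", ml_label)
```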

Question 10

/ 3 pts
Briefly describe the changes that you observe from 2001 to 2016 through visual inspection.

Your Answer:

Based on visual inspection alone, the main change between 2001 and 2016 is that a lot of grassland/forest to the south of Houston was converted to developed housing. To the west of Houston, a similar pattern is occurring as well. There also appears to be a decrease in vegetation on the outer edges of Houston as developed areas expand, and a decrease in pasture land as more land is converted to developed land.

Question 11

1.5 / 1.5 pts
Which class increased the most since 2001?

High Density

Pasture

Emergent and Woody

Correct!

Medium Density

Woody

Low Density

Open Water

Question 12

1.5 / 1.5 pts
By how many hectares did this class increase?

Correct!

38,967.66 (with margin: 1)
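For reference, the hectare figure is just a unit conversion from the change in pixel count, assuming 30 m pixels (0.09 ha each); the pixel count below is back-calculated from the accepted answer, purely to show the arithmetic, and the same conversion applies to the decrease in Question 14.

```python
# Assuming 30 m pixels (NLCD/Landsat): each pixel covers 900 m^2 = 0.09 ha.
pixel_size_m = 30
pixels_gained = 432_974  # back-calculated from the 38,967.66 ha figure above
area_ha = pixels_gained * pixel_size_m**2 / 10_000
print(f"{area_ha:,.2f} ha")  # -> 38,967.66 ha
```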

Question 13

1.5 / 1.5 pts
Which class decreased the most since 2001?

Medium Density

Emergent and Woody

High Density

Correct!

Pasture

Woody

Low Density

Wetland

Open Water

Question 14

1.5 / 1.5 pts
By how many hectares did this class decrease?

Correct!

70,532.01 (with margin: 1)
-70,532.01 (with margin: 1)

Question 15

/ 1 pts
According to the table, what percent of pixels that were wetlands (including both emergent herbaceous wetlands and woody wetlands) in 2001 remained as wetlands in 2016?

Correct!

15.4 (with margin: 6)
83.6 (with margin: 6)

Question 16

/ 3 pts
Discuss the common trajectories of change in the wetland categories. What do you think are the dominant drivers of those changes?

Your Answer:

According to the data, wetlands are decreasing in extent: wetland area declined by 13.69% between 2001 and 2016. The dominant drivers of these changes are mostly urbanization, through habitat destruction, pollution, and altered hydrological regimes. Agricultural land use is also degrading wetlands and fragmenting habitat.
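Both the persistence figure in Question 15 and the net loss described here are read off the class transition matrix. A minimal sketch with entirely made-up 2001-to-2016 pixel counts (not the lab data), just to show the arithmetic:

```python
import numpy as np

# Hypothetical transition counts: rows = class in 2001, columns = class in 2016.
classes = ["Wetlands", "Developed", "Other"]
transitions = np.array([
    [ 8_300,  1_100,    600],   # wetlands (emergent + woody) in 2001 -> ...
    [    40, 25_000,    460],   # developed in 2001 -> ...
    [   400,  6_500, 57_600],   # everything else in 2001 -> ...
], dtype=float)

totals_2001 = transitions.sum(axis=1)
totals_2016 = transitions.sum(axis=0)

wet = classes.index("Wetlands")

# Persistence: share of 2001 wetland pixels that are still wetlands in 2016.
persistence = transitions[wet, wet] / totals_2001[wet]

# Net change in wetland extent between the two dates.
net_change = (totals_2016[wet] - totals_2001[wet]) / totals_2001[wet]

print(f"Wetland persistence: {persistence:.1%}")
print(f"Net wetland change:  {net_change:+.1%}")
```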

Question 17

/ 4 pts
Give two examples of changes that seem unreliable. Why don’t you trust these changes? What could you infer about the land cover classifications, based on the results from the transitions matrix?

Your Answer:

Two changes that seem unreliable are the increases in grassland and in shrubs. Looking at the change images visually, the area of shrubs and grassland appears to decrease; however, the image-difference section of the change detection statistics shows an increase for both. I can infer that urban/developed land is on the rise, which causes a decrease in the other land cover types.

Can you infer anything from that about the land cover classification?

Question 18

/ 6 pts
Discuss the spatial distribution of the changes in the region. How does the change map compare to the changes you described in Q8. What more can you learn from the change map, compared to what you were able to discern with simple visual comparison? What can you infer about the drivers of LULCC in the region based on the spatial distribution

Your Answer:

The change map points out the specific areas that have changed and gives a clear view of where each class has transitioned to another. Simple visual comparison doesn't provide that: we are just looking at two maps side by side with nothing indicating where the change is located. However, the change map is hard to read by color alone; because there are so many change types, the colors are difficult to decipher. The spatial distribution of the changes in the region correlates with population growth: as more people arrive, more housing is needed, which causes more land to be converted to developed land.

You could highlight the spatial distribution a little more.
Quiz Score: 49 / 56