Lab 6

1. Echo Community Centre: N/A
   AW Neil Elementary: 4
   5100 Tebo Ave: N/A

2.

3.

4. In the Module 5 model, we used a buffer zone and DEM data to determine where flooding would occur. In this model, we used DEM data and a Euclidean distance surface generated from the water bodies layer to measure how far each cell is from the water body (the coast). We then reclassified the distance and elevation rasters: cells farther from the water and at higher elevations received higher values than cells that were closer and lower. Finally, we combined the reclassified rasters in a weighted overlay to show where flooding is more or less likely, where 1 = higher water levels (low elevation, close to the water) and 10 = lower water levels (higher elevation, farther from the water). The result also maps the risk of flooding across Port Alberni; a minimal sketch of the reclassify-and-overlay logic appears after this paragraph. I think the second model provides a better result because it sorts areas into high risk and low risk, while the first model just tells us where it is going to flood. The main drawbacks of model 1 are that it does not display risk and the inundation zone can be quite hard to read, although its extent has a better shape than model 2's. Model 2, on the other hand, shows the risk levels associated with flooding and is easier to read than model 1.
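
To make the reclassification and weighted overlay concrete, here is a minimal NumPy sketch of the logic; the class breaks, the equal 50/50 weights, and the tiny input arrays are assumptions for illustration, not the exact parameters used in the lab. (In ArcGIS this step is done with the Reclassify and Weighted Overlay tools in the Spatial Analyst toolbox.)

    import numpy as np

    def reclassify(raster, breaks):
        # Map raster values onto 1-10 scores: values below breaks[0]
        # score 1, values above breaks[-1] score len(breaks) + 1.
        return np.digitize(raster, breaks) + 1

    # Hypothetical inputs: elevation (m) and Euclidean distance to water (m).
    dem = np.array([[1.0, 3.0], [10.0, 40.0]])
    dist = np.array([[5.0, 50.0], [200.0, 900.0]])

    # Nine ascending breaks -> scores 1..10. Low/close cells score 1
    # (higher water levels), high/far cells score 10. Breaks are illustrative.
    elev_breaks = np.linspace(2, 34, 9)
    dist_breaks = np.linspace(25, 825, 9)

    elev_score = reclassify(dem, elev_breaks)
    dist_score = reclassify(dist, dist_breaks)

    # Weighted overlay: equal influence for the two factors (an assumption),
    # rounded back to whole 1-10 scores.
    risk = np.rint(0.5 * elev_score + 0.5 * dist_score).astype(int)
    print(risk)  # 1 = higher water levels/risk, 10 = lower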

5. Computer and tech systems are coded by people, but they use self-learning systems to keep learning and become self-sufficient. However, racial bias is unknowingly corrupting these systems. People cling to the idea that technology is unbiased and is therefore our saving grace in this messy society; the danger is that we then never question whether the technology is biased at all. Machine learning algorithms all learn from humans: the training examples are selected by people, labelled by people, and derived from people, and all of that shapes what the algorithm learns. In the video, the technology shows racial bias when it uses saliency to decide whether an object is a face or not; it is shown to pick up on white faces more readily. This is a very big problem because it can change how AI categorizes people in important demographic applications. We need to build algorithms that are fair to all people and implement them in machine learning. However, since all humans are biased in some way or another, this might be hard to implement, so we also need more systems and checks in place to keep learning algorithms from reproducing racial bias.

6. The study used an internet search-based proxy of racism as a predictor of Black mortality rates, taking into account each individual's age, sex, year of death, and census region to estimate the association between area racism and Black mortality. It found that greater levels of area racism were associated with an 8.2% increase in Black mortality rates, and there were also significant associations with heart disease, cancer, and stroke in the Black community. This shows the strong impact of racism on health among Black people. This type of data can be a useful alternative to traditional metrics because sharing it shows a broader audience the wider impact: Black mortality rates are higher in places where area racism is present, and alternative metrics can help more people realize this. One potential limitation of this type of research is that some deaths are tied to more proximal causes; for example, the researchers found very little association between area racism and diabetes-related mortality. A minimal sketch of the kind of covariate-adjusted rate model behind an estimate like this follows below.
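
As a rough illustration of the kind of model behind an estimate like the 8.2% figure, here is a minimal sketch of a Poisson rate regression adjusted for the covariates mentioned above (age, sex, region); the data frame, column names, and the Poisson specification itself are assumptions for illustration, not the study's actual model or data.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical strata: death counts, population at risk, a z-scored
    # area-racism proxy from search data, and adjustment covariates.
    df = pd.DataFrame({
        "deaths":      [120, 95, 160, 80, 140, 110, 105, 130, 150, 90, 125, 100],
        "population":  [50000, 40000, 61000, 39000, 55000, 47000,
                        52000, 45000, 58000, 43000, 50000, 41000],
        "area_racism": [-1.1, -0.4, 0.2, 0.5, 1.0, 1.6,
                        -0.8, 0.9, 1.3, -0.2, 0.4, -1.3],
        "age_group":   ["25-44", "45-64", "25-44", "45-64", "65+", "65+",
                        "25-44", "45-64", "65+", "25-44", "65+", "45-64"],
        "sex":         ["F", "M", "M", "F", "F", "M",
                        "M", "F", "M", "F", "F", "M"],
        "region":      ["South", "West", "South", "Midwest", "West", "Midwest",
                        "Midwest", "South", "South", "West", "Midwest", "West"],
    })

    # Poisson rate model: log(population) enters as an offset so the
    # coefficients describe mortality *rates*, adjusted for the covariates.
    model = smf.poisson(
        "deaths ~ area_racism + age_group + sex + region",
        data=df,
        offset=np.log(df["population"]),
    ).fit(disp=0)

    # exp(beta) - 1 is the proportional change in the mortality rate per
    # one-unit increase in the racism proxy (0.082 would read as +8.2%).
    change = np.exp(model.params["area_racism"]) - 1
    print(f"{change:+.1%} mortality rate per unit of area racism")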