Task 7

 

Click here for the video

 

The original project was to “explor[e] the duality between the way people characterize themselves in public, and the private contents of their handbags” (Brown, 2018). Brown wanted viewers to “begin to construct a narrative about the person as they view them and then refine that narrative based on the contents of the bag.” This time, instead of showing the actual contents of the bag, I decided to reverse the activity and record the sounds of myself using the products and/or the image I portray in public. Viewers then have to use these hints to guess what I carry in the work bag I bring to school every day.

 

Another purpose of the original task was to consider the different texts on these items and what they say about me, the literacies I have, where I live, and the cultures I engage with. This led me to think about all of the different languages on my items, so I decided to provide further hints to my audience by reading out the text on some of the items. I couldn’t do this for every item because not everything had text on it.

 

I decided to read out the texts in the languages I know (English and Japanese) and then use AI to translate the languages I don’t know (Korean and French). I also had AI translate the Japanese words I read into English. The app I used is DeepL, which is widely used in Japan because it is considered more accurate than Google Translate. You can hold your camera up to an item and have the app read out the original text as well as a translation. One frustrating aspect of the app was that it was not always accurate. I had to photograph item #1 quite a few times because the Korean words didn’t register. Item #1 also had Japanese words on it, which were translated into random symbols; with three languages together on one item, the app couldn’t make sense of what it was seeing. In the end, I had to cover the English and Japanese words with my hands so that the app would translate only the Korean text.
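
For anyone curious about why the mixed-language label confused the app, here is a short sketch of the same idea in code. It is only a hypothetical illustration, not part of the activity: it assumes DeepL’s official Python client (the deepl package), a placeholder API key, and made-up label text, and it mirrors my workaround of covering the other scripts by sending the translator one language at a time.

    import deepl

    # Placeholder key; a real DeepL API key is needed to run this sketch.
    translator = deepl.Translator("YOUR_DEEPL_AUTH_KEY")

    # Made-up text standing in for the Korean and Japanese words on item #1.
    snippets = {
        "Korean": "포도 주스",        # "grape juice"
        "Japanese": "ぶどうジュース",  # "grape juice"
    }

    # Sending one language at a time mirrors covering the other words by hand:
    # the engine sees a single script instead of a jumble of scripts.
    for language, text in snippets.items():
        result = translator.translate_text(text, target_lang="EN-US")
        print(f"{language}: {text} -> {result.text} "
              f"(detected source: {result.detected_source_lang})")

The camera feature adds a text-recognition step on top of this, which is where the jumbled scripts and misread characters crept in.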

 

I really enjoyed this exercise because it urged me to reflect on the multiliteracies present today as a result of the “culturally and linguistically diverse and increasingly globalized societies” (The New London Group, 1996). The first part of this task allowed people who cannot read English, Japanese, Korean, or French to use visual and auditory clues, such as gestures, to deduce the items in my bag. The second part used AI technology to help the audience access several languages.

However, as cited in Dobson & Willinsky (2009), “To be information literate, a person must be able to recognize when information is needed, and have the ability to locate, evaluate, and use effectively the needed information” (ALA, 1989). This suggests that digital literacy includes being able to evaluate the accuracy and reliability of the information presented, and it is an aspect my students in Japan still struggle with, as they rely heavily on translation tools without recognizing their inaccuracies. While using DeepL, I had to know which texts were Korean and which were Japanese, and keep the unnecessary text out of view so that the app could translate one language accurately. I also had to notice when the app did not translate certain texts correctly: instead of “grape”, for example, the camera kept registering the word as “crap”, and I had to retake the picture until it read the text accurately. AI technology such as DeepL may therefore be useful in narrowing the gap between people who do not share a language, but users still need to be aware of its limitations.

 

SCROLL DOWN FOR ANSWERS BELOW

References

Brown, E. (2018). Ellie Brown photography and artworks. Retrieved July 12, 2019.

Dobson, T., & Willinsky, J. (2009). Digital literacy. In D. R. Olson & N. Torrance (Eds.), The Cambridge handbook of literacy (pp. 286-312). Cambridge University Press. 

The New London Group. (1996). A pedagogy of multiliteracies: Designing social futures. Harvard Educational Review, 66(1), 60-92.