Author Archives: Seth Armitage

M4 P5 Schoolu Indigenous Language Learning Program Teaching Indigenous Youth

This is a great example of what is already possible and what could be possible in the near future. There are many ways to use VR for language learning, but one of the most important requirements is that it be fun: learners need to be engaged and want to keep using the method for it to be successful. Immersing children, or students of any age, in an interesting environment with engaging activities in the target language can have many positive benefits. I envision incorporating actual place names and accurate landscapes of traditional territories to teach lessons in the target language, connecting people to the land virtually in between periodic visits to the actual place.

 

M4 P4 Language Learning Using XR and Incorporating Game-Based Learning

Extended reality and its immersive properties are showing benefits for teaching second languages, and illustrated here are the benefits of combining XR with game-based learning. In previous posts I have talked about how collaborative activities in an immersive environment can have advantages for language learning; incorporating game-based activities that incentivize learners to collaborate towards a common goal can further enhance the learning experience and motivate learners who were not already motivated.

 

M4 P3 Mixed Reality in Education

Mixed reality is a technology still in its infancy; however, I see great potential for teaching second languages, specifically Indigenous languages. I envision being able to teach plant names, place names, medicine gathering, and other cultural place-based activities through technologies such as these. Many Indigenous people do not live close to their territory but would still like to learn their language, and to learn it in a place-based setting. This technology is likely a long way from being accessible to the majority of institutions; however, the benefits could be great.

 

M4 P2 Language Learning Through Virtual Experiences in Second Life

Second Life is an online 3D virtual environment that has in some cases been used to teach language lessons. The goal is to create real-life scenarios in which students can interact with each other in the target language, as well as with characters within the online game. In this video Scott Grant of Monash University speaks about his work investigating the usefulness of virtual worlds such as Second Life for teaching languages. One focus of his work was having the 3D virtual world include gestures and cultural nuances of the target language, although at the time of the video it was too early to say whether the results were positive. Later, in 2012, Scott Grant, along with Michael Henderson and Hui Huang, published a study on the impact of Chinese language lessons in a virtual world on university students’ self-efficacy beliefs, concluding that a single collaborative language lesson using Second Life can result in a statistically significant increase in student self-efficacy beliefs across a range of specific and general language skills.

References

Henderson, M., Huang, H., Grant, S., & Henderson, L. (2012). The impact of Chinese language lessons in a virtual world on university students’ self-efficacy beliefs. Australasian Journal of Educational Technology, 28(3), 400-419. https://doi.org/10.14742/ajet.842

 

 

M4 P1 Application of the Extended Reality Technology for Teaching New Languages

The link below is to a systematic review of the application of extended reality (XR) technology for learning new languages. Some of the benefits found include increased engagement, motivation and collaboration, a more comfortable and safe learning environment, decreased language-learning anxiety, vocabulary acquisition, and enhanced story retelling. There were also a number of reported challenges and barriers, ranging from technology issues to difficulty incorporating VR in a way that maximizes the value of the technology. It appears that, with proper planning and study of how best to implement XR technology in language learning, there can be many benefits. I do not believe that using XR technology as the sole method of language teaching is the correct approach, but it can be a powerful tool when used in combination with other proven language learning methods.

 

file:///Users/pro2019/Downloads/applsci-11-11360%20(1).pdf

 

M3 P5 Indigenous People of Brazil Map Heritage with Google Earth

When I saw this it confirmed an idea I had about what could be possible with technology, language learning, and place-based knowledge. The work done with the Surui people of Brazil, mapping their territory and even creating a virtual representation of it that a virtual character could navigate, could have many uses for Indigenous people across the globe. Being able to teach Indigenous students lessons about their traditional territory could be incredibly beneficial. In the future I envision students being able to put on a VR headset, be immersed in that environment, and speak their Indigenous language while exploring their own territory from wherever they are in the world. Nothing will beat being there in person, but this can be a way to explore the territory and gain familiarity between visits.

 

M3 P4 Indigenous AI

This video goes further in depth into the work of Michael Running Wolf, from my Module 3 Post 3, as well as the work of his wife Caroline Running Wolf, who is pursuing a PhD in Anthropology at UBC, studying the potential application of XR technologies in the revitalization of Indigenous languages. The video is filled with valuable information about the challenges of, and potential solutions for, applying technology to Indigenous languages.

One of the highlights I found particularly interesting was that Michael Running Wolf and his colleagues had found a way to “fork” open-source AI code to incorporate Indigenous languages. They had found that most AI technologies were heavily biased toward the Western world, and specifically California, where Google’s headquarters is located, so they had to adapt the existing technology to the different Indigenous languages they were working with. A further challenge is the complexity of polysynthetic languages, which have no finite dictionary because root words, prefixes, suffixes and other morphemes can be combined in a virtually infinite number of ways, as the sketch below illustrates.
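A minimal sketch of that combinatorial problem, using invented placeholder morphemes rather than any real language: even a handful of roots and affix slots multiplies into far more surface word forms than a fixed vocabulary list could reasonably enumerate.

```python
from itertools import product

# Hypothetical morphemes (placeholders, not drawn from any real language).
roots = ["walk", "see", "speak"]
prefixes = ["", "re-", "not-"]         # e.g. repetition, negation
tense = ["", "-past", "-future"]       # e.g. tense marking
subject = ["", "-I", "-you", "-they"]  # e.g. subject agreement

# Every combination of slots produces a distinct surface "word".
forms = ["".join(parts) for parts in product(prefixes, roots, tense, subject)]

print(len(forms))   # 3 * 3 * 3 * 4 = 108 distinct forms from ~10 morphemes
print(forms[:3])    # e.g. ['walk', 'walk-I', 'walk-you']

# Each additional affix slot multiplies the count, so listing every possible
# word (as a conventional speech recognizer's vocabulary would) quickly becomes
# impractical; modelling at the morpheme or character level scales far better.
```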

M3 P3 The Race to Save Indigenous Languages, Using Automatic Speech Recognition

This article is about the work of Michael Running Wolf, a clinical instructor of computer science at Northeastern University’s Khoury College of Computer Sciences, on developing methods for documenting and maintaining Indigenous languages through automatic speech recognition software. This work is a precursor to his long-term goal of providing a way for Indigenous youth to learn their language through technological immersion, using technologies such as virtual reality or augmented reality.

Part of the difficulty in developing automatic speech recognition for Indigenous languages is that relatively little research in computational linguistics has been devoted to them. An additional challenge is that many Indigenous languages are “polysynthetic,” meaning their words contain many morphemes, the units of language that cannot be further divided. As Michael Running Wolf points out, “polysynthetic languages often have very long words – words that can mean an entire sentence, or denote a sentence’s worth of meaning.”

 

https://news.northeastern.edu/2021/10/08/protecting-indigenous-languages-using-automatic-speech-recognition/

M3 P2 A Computer-Animated Tutor for Spoken and Written Language Learning

I was unsure about the copyright, as the article includes the following notice: “Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.”

 

However, the article can be found via the UBC Library by searching for the title below.

A Computer-Animated Tutor for Spoken and Written Language Learning

Dominic W. Massaro, Department of Psychology, University of California, Santa Cruz

 

The computer-animated tutor for spoken and written language learning referred to in the article is called Baldi. Being able to hear the language and to see it being spoken by the computer-animated tutor has shown great benefits in second language learning, for people with hearing loss, and for autistic children. One reason is that the skin on the animated tutor’s face can be made transparent, allowing the learner to see how the vocal tract moves when saying the word they are learning.

References

Massaro, D. W. (2003). A computer-animated tutor for spoken and written language learning. In Proceedings of the 5th International Conference on Multimodal Interfaces (ICMI ’03) (pp. 172-175). https://doi.org/10.1145/958432.958466 https://go.exlibris.link/kGJBkvfW

 

Below is an example of Baldi being used to help an autistic boy with language learning.

 

M3 P1 Indigenous Language Speech Recognition

Te reo Maori Speech Recognition: A Story of Community, Trust and Sovereignty 

The work that the Māori people have done over the years to preserve their language is truly an inspiration. The Māori, along with Hawaiians, have been leading the way in Indigenous language revitalization for a very long time, and this is another example of how they continue to inspire many people working in the field.

Speech recognition software

Te Hiku Media, a charitable media organization collectively belonging to the Far North iwi of Ngāti Kuri, Te Aupouri, Ngai Takoto, Te Rārawa and Ngāti Kahu, has adapted existing open-source speech recognition software to understand the Māori language, te reo Māori. This type of work is essential to developing virtual worlds where people can learn Indigenous languages. For example, if a virtual person in a metaverse-type environment were programmed to understand an Indigenous language through the speech recognition software, and could in turn speak back in that language, a learner could practice speaking in the virtual world as much as they wanted; a rough sketch of that loop is shown below.
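A minimal sketch of that kind of conversational loop, assuming three components: speech recognition (ASR), a reply generator, and text-to-speech (TTS). The class name, function names, and toy greeting lookup are hypothetical placeholders for illustration, not Te Hiku Media’s actual software.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class VirtualSpeaker:
    """A virtual character that listens and replies in the target language."""
    asr: Callable[[bytes], str]     # learner audio -> transcribed text
    dialogue: Callable[[str], str]  # transcribed text -> reply text
    tts: Callable[[str], bytes]     # reply text -> audio to play back

    def respond(self, learner_audio: bytes) -> bytes:
        heard = self.asr(learner_audio)  # e.g. "kia ora"
        reply = self.dialogue(heard)     # e.g. "kia ora! kei te pēhea koe?"
        return self.tts(reply)           # the learner hears the reply and can answer again

# Toy stand-ins so the loop runs end to end; real models trained on
# community-governed language data would replace all three components.
speaker = VirtualSpeaker(
    asr=lambda audio: audio.decode("utf-8"),
    dialogue=lambda text: {"kia ora": "kia ora! kei te pēhea koe?"}.get(text, "aroha mai?"),
    tts=lambda text: text.encode("utf-8"),
)

print(speaker.respond("kia ora".encode("utf-8")).decode("utf-8"))
```

Because each component is just a function from input to output, the placeholders could be swapped for real ASR and TTS models without changing the surrounding loop.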

Data Sovereignty

Data sovereignty is another very important topic touched on in this video. The Kaitiakitanga License is a license that Te Hiku Media is developing in order to protect their data. Their goal is for only Māori-led organizations and initiatives to have access to the data, at least initially, and to return a portion of any profits made from the data to the communities from which it came.