AI in Instructional Design

Welcome to our OER on Artificial Intelligence. We have decided to focus this presentation on instructional design: how designers are using AI to enhance their work at all levels of instruction, from K-12 and higher education to the corporate training world. We’ll share some new technologies and tools, and even give you a chance to try out a few! We’ll also talk about the inevitable security and privacy issues that come with this technology.

Our presentation is a slideshow created in Genial.ly. There are activities within the presentation, but we also hope to encourage a lively conversation in the blog below! To do this, we are changing up the format of the discussion this week. Rather than answering the questions in individual posts, we have posted each of this week’s seven questions in the discussion below. PLEASE RESPOND TO THE QUESTION DIRECTLY BY REPLYING TO THE QUESTION POST, NOT AS A NEW POST. You can either reply to the original question or to someone else’s comments.

We encourage everyone to reply to at least two of the prompts in the discussion below.

Now, let’s get to what you’ve been waiting for! Our presentation can be found here: https://view.genial.ly/6233d8a451790f00188de64e/presentation-ai-in-instructional-design



93 responses to “AI in Instructional Design”

  1. Hayley Mooney

    QUESTION #1: K-12: Do you see AI being used in your classroom now? In the near future?


    ( 0 upvotes and 0 downvotes )
    1. Kyle

      At the moment, I am not aware of AI being used in the classroom; however, perhaps boards of education or ministries are using it behind the scenes for operational items? I absolutely think that AI will make its way into the K-12 classroom in the near future, perhaps as an administrative assistant for large boards to help with repetitive tasks, improve support for the community, and detect system changes/outages. In the classroom, I can see a future where AI is incorporated in some way, especially as schools gain more access to individualized technology for students, as a means to provide unbiased diagnostics of student understanding and next steps. Ultimately, I think it will have its greatest impact further down the line in development and acceptance, when AI is able to sequence consolidation exercises or extensions to activities based upon students’ completion of the first portion, or other more dynamic pieces that impact student learning.


      ( 1 upvotes and 0 downvotes )
      1. Hayley Mooney

        I think this is the story not only with AI, but with many of the technologies presented in the past weeks: all of these technologies have the potential to make huge impacts for instructors by saving time on repetitive work… but only once they can fit seamlessly into classrooms and require minimal setup from teachers. At this point most of the technologies still require a lot of time and training on the part of instructors, and so the tradeoff is not yet worth it. The big question is: how soon before it is?


        ( 0 upvotes and 0 downvotes )
        1. Nathan Bishop

          Hi Hayley and Kyle,

          I like this point about AI needing minimal setup. In my own context, an online high school, there are opportunities to take advantage of some low-level AI within the learning management system we use (Brightspace). The tool is called LeaP and it is described as an adaptive learning tool that provides personalized learning paths. The short and sweet of it is, you can set up courses to move students to different places based on how they perform on assessments. For example, if a student scores below 70% on a unit test, they will be routed to some review materials based on the questions they struggled to answer. A student who earned above 70% may move on to some supplemental materials or the next unit. There are a lot more complex variations of this, but this is the general idea. This works well in an asynchronous learning environment, but I can see it being problematic in a synchronous classroom. I refer to this as low-level AI because there is still a ton of back-end work for the teacher. The teacher essentially needs to design all the different pathways the system may take the student, which is extremely time consuming. So to tie this back to your point, I don’t think teachers would use this if they had to put all the setup time in. If D2L could build some machine learning into LeaP so that it builds these pathways for the teacher, then we are looking at some higher adoption rates. Currently, my school does not use the LeaP tool for the reasons stated above.
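
          For readers curious what this kind of routing rule looks like under the hood, here is a minimal sketch in Python. The 70% cut-off and the path names are hypothetical stand-ins; LeaP’s real pathways are configured inside Brightspace, not written as code like this.

            # Hedged sketch of threshold-based routing: a score below the
            # cut-off goes to review, at or above it goes to enrichment.
            def route_student(unit_scores, cutoff=70.0):
                paths = {}
                for topic, score in unit_scores.items():
                    if score < cutoff:
                        paths[topic] = "review"        # targeted review material
                    else:
                        paths[topic] = "supplemental"  # enrichment or next unit
                return paths

            # A student who struggled with one topic on the unit test:
            print(route_student({"fractions": 62.5, "decimals": 88.0}))
            # {'fractions': 'review', 'decimals': 'supplemental'}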


          ( 1 upvotes and 0 downvotes )
          1. brendan stanford

            Hi Nathan, Hayley and Kyle!

            I can leapfrog [see what I did there ;)] off of Nathan’s experience by sharing a low-level AI tool I currently use in my practice, CK-12’s “Adaptive Practice” (https://www.ck12.org/practice/). In essence, it offers students problems exploring a given concept, and based on student performance and the qualitative difficulty of the problem (easy, medium or hard), the system gradually increases or decreases problem difficulty. These question sets and difficulty levels have already been created by the CK-12 Foundation, but herein lies the problem: what one teacher (or programmer/content curator) might evaluate as easy or hard could receive a completely different evaluation from a different group of students, which results in some Adaptive Practice sets seeming far too easy and others quickly moving outside students’ capabilities to solve, especially given differences in curriculum. The benefit of CK-12’s platform is that its resources are all open, so teachers can modify existing resources to better suit their particular curricula and classes; however, this removes the “adaptive” component of these assessments and they instead become standard quizzes, to say nothing of the fact that the teacher is back at square one, putting in considerable time and effort to modify the resources for their class.
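
            A rough sketch, in Python, of the difficulty-stepping idea described above, assuming a simple “step up on a correct answer, step down on a miss” rule; CK-12’s actual algorithm is not public, so treat this as illustration only.

              # Assumed rule: one step up per correct answer, one step down per miss.
              LEVELS = ["easy", "medium", "hard"]

              def next_level(current, answered_correctly):
                  i = LEVELS.index(current)
                  i = min(i + 1, len(LEVELS) - 1) if answered_correctly else max(i - 1, 0)
                  return LEVELS[i]

              level = "medium"
              for correct in [True, True, False]:
                  level = next_level(level, correct)
                  print(level)   # hard, hard, medium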

            Nevertheless, I have found these resources informative (I’m able to assign them via Google Classroom and get both individual and whole-class assessment data back), and the students generally enjoy them; they often compare their mastery levels and “score-streaks” in Math, and at the end of the day, if the platform engages the students more than a traditional worksheet, I’m glad for it!


            ( 0 upvotes and 0 downvotes )
          2. Kyle

            I completely agree, Hayley and Nathan; without ease of access the adoption rate will be slow, but perhaps this will be like the computer or the internet: once we hit a certain percentage of teachers, it grows exponentially. Do we think this latency, or hesitancy, is confined to education? If so, how do we expedite or alleviate the issue, or does it extend into other industries? I for one like to try new things, especially technology, and give them a test. I wouldn’t mind providing feedback to the company or agency that built the program if it meant my feedback was incorporated into the build and the tool therefore became easier/better for me.


            ( 1 upvotes and 0 downvotes )
        2. John Wu

          Training is probably the biggest hurdle, as most educators seem reluctant or slow to jump aboard new tech. Every time my university’s IT department announces a training seminar for staff, it seems like no one wants to attend even though it’s beneficial. Management will probably need to put in extra work to convince faculty to adopt AI. Cost-wise, by the time AI becomes mainstream I suspect it’ll be affordable, with a range of vendors to choose from, so technically that shouldn’t be a problem.


          ( 1 upvotes and 0 downvotes )
    2. JacksonLiang

      I can see AI being used now, but as others have mentioned, many may be skeptical or unsure how to start. For example, some may not know what kinds of AI programs are available to support student learning. I am interested in Brendan’s “leapfrog” suggestion, which I’ll look at! In the near future I’m interested to see if chatbots (from the last opportunity forecast) could become helpful in the classroom for students to use.


      ( 0 upvotes and 0 downvotes )
      1. Hayley Mooney

        I think this is going to be a matter of either word of mouth for the free services, or just waiting until the school boards choose to adopt something and learning it by necessity! If you’d told everyone 20 years ago that they had to learn how to do all their social planning online through social media they likely would have been very confused, but it has naturally become the way we function. I suppose it’s only a matter of time before the most practical uses of AI just sneak into our lives.


        ( 0 upvotes and 0 downvotes )
    3. Wynn Zhang

      AI by itself sounds very complex and intimidating to use in a classroom; however, its integration into the apps we already use does allow AI to sneak its way into our classrooms. Something as simple as auto-generated captions on YouTube can be a great way to support students watching an educational video. Thus, the question for me is not whether AI is used in the classroom, but how. I imagine there will be more AI functions in my classroom, especially in the areas of assessment and various forms of personalized learning. I see Microsoft Teams as a platform that could house these functions and allow me to use them in my practice. Even now, I sometimes use simple AI tools such as Azure’s services to enhance engagement.


      ( 0 upvotes and 0 downvotes )
      1. Hayley Mooney

        Good point, Wynn. I think when the question was posed we were considering the more exciting, futuristic and outwardly noticeable uses, but really AI is already here, it’s just hard to notice!


        ( 1 upvotes and 0 downvotes )
        1. mstr

          AI is already here, in some ways… I’ve used plagiarism detection and exam integrity programs, as well as the learning management system Brightspace, which, as many of you have already said, uses “low level” AI. I’m confident that AI will soon become more mainstream, and when this happens educators will become more accepting of the challenges and time associated with learning and developing its uses for their classrooms. I feel that COVID has made education (teachers and students too) more nimble and adaptable; perhaps this change will allow AI advancement and incorporation to occur more rapidly?


          ( 0 upvotes and 0 downvotes )
          1. John Wu

            Duolingo is another good example of AI in learning for language students, where it mimics actual human speech and linguistic patterns. As a throwback to the chatbot OER, AI is also used for student enrolment and retention. I think actual implementation is already happening, and once more mainstream applications catch on (or educators realize they’re already using AI), the notion of it being complicated and intimidating will fade away.


            ( 0 upvotes and 0 downvotes )
          2. Wynn Zhang

            I completely agree with this sentiment! I have used many plagiarism checkers and had missed the fact that they use AI-powered engines to run their programs. I imagine that many of the spell/grammar checkers my students use also have AI functionality built in to increase the accuracy and power of the product. With these new tools, the way we teach and interact with language may change completely, as it becomes even more important to focus on using these tools to help us achieve our goals.


            ( 0 upvotes and 0 downvotes )
    4. John Wu

      I think higher education will adopt AI faster than K-12 classes due to various privacy and data regulatory concerns for younger students. Furthermore, there might be hesitation from school boards to fully implement AI, whereas individual university departments have the discretion and autonomy to decide. That being said, I’ve been told that some university courses are slowly making use of AI-powered assistive technologies to run simulations and streamline how students acquire knowledge. It probably depends on the subject/course, as some tech or STEAM courses will logically be more receptive towards AI than others. As for the future, it’s possible that AI will become more widely adopted, but ultimately it depends on (i) how it’s used and (ii) why it’s used, and the exact value it can contribute to the classroom. I’m definitely open to the idea of teachers using AI as a supportive tool to enhance and elevate the learning experience, but not being replaced by it. If it makes learning more accessible, equal and interactive, then by all means use it.


      ( 1 upvotes and 0 downvotes )
  2. Hayley Mooney

    QUESTION #2: Higher Ed: Do you think that the majority of the respondents of the survey are correct and that AI will complement rather than replace human scientific input? And if so what subjects and/or skills should universities focus on to best prepare students to live and work alongside AI?


    ( 0 upvotes and 0 downvotes )
    1. Ally Darling-Beaudoin

      AI will take over data input in all fields, not just science. Why bother having a person input data when they don’t need to? If you can validate that the data is correct, there is really no need for human input. Human focus is best spent contextualizing the results of those inputs, which a machine cannot do, and perhaps even once it can, we won’t WANT it to do… With that in mind, universities should begin teaching students how AI is built and what limitations it has. AKA, programming! I have had plenty of students, colleagues, etc. who still don’t know basic troubleshooting for their own laptop, let alone a sophisticated program. Computing know-how is only going to become more and more important as machines begin to work alongside us, rather than for us.


      ( 1 upvotes and 0 downvotes )
      1. brendan stanford

        Hi Ally! I 100% agree; I think computing education has been too focused on how to become an effective consumer of computer software so that you can use it to do other jobs, and we are only just beginning to train students to consider how they can use computing technology as producers who create software to serve their needs. I have had great success using block-based programming platforms such as Scratch, but I find the struggle lies in getting students to move deeper into more customizable text-based languages and truly create something of their own.
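
        As a tiny illustration of that jump, the same “repeat” stack a student drags together in Scratch becomes a few lines of text in Python; here the standard turtle module stands in for the sprite, and the numbers are arbitrary.

          # Scratch blocks: when green flag clicked / repeat 10 /
          #   move 10 steps / turn 36 degrees
          # ...expressed as text, with Python's turtle in place of the sprite.
          import turtle

          pen = turtle.Turtle()
          for _ in range(10):
              pen.forward(10)   # "move 10 steps"
              pen.right(36)     # "turn 36 degrees"
          turtle.done()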


        ( 1 upvotes and 0 downvotes )
        1. Ally Darling-Beaudoin

          Brendan, yes, it certainly makes me feel like I need to brush up on my programming…! It is interesting as well that when I looked into “simplified” programming, what I actually came across was platforms that continue to make users “consumers” of computer software: this one does ‘free prototyping for app development and user interface’ – no programming required! Not so bad when you don’t “need” programming or don’t know how to use it… but it keeps us a step away from actually knowing what’s going on!


          ( 1 upvotes and 0 downvotes )
          1. Ally Darling-Beaudoin

            Link is here for the platform I was talking about! https://www.justinmind.com/home-a


            ( 1 upvotes and 0 downvotes )
          2. Hayley Mooney

            As someone who doesn’t have a strong background in programming, I’m always pleased to come across ways I can produce material using click-and-drag-style systems, but I am also always aware that I would be so much more empowered if I had the skills to produce materials from scratch and not be dependent on the constraints (and price tags!) of ready-made software. I think this is a matter of needing to consider computer literacy an equally important form of literacy as reading and writing, and to start kids off early on this route. It’s much more difficult to learn as an adult (as a consumer or a teacher!) than it is to grow up assuming you can do something.


            ( 2 upvotes and 0 downvotes )
            1. mstr

              I very much agree that computer literacy (including AI) is and will be extremely valuable going forward. For me, I’m much more willing to take on a new technology learning experience if I know that training and ongoing support is available.


              ( 0 upvotes and 0 downvotes )
            2. John Wu

              I have the same sentiments; more and more applications have visual coding (basically blocks which contain the entire code) that allows the user to simply drag/drop/connect their program together. Technically programming is not difficult, just extremely tedious. I think the future of user-designed programs and AI will be more streamlined than ever before, as more people will be able to participate in this area (AI could even be used to assist individuals without a computer science or programming background).


              ( 0 upvotes and 0 downvotes )
    2. hasssae1

      Hi Hayley and team, excellent work, well done. With regard to this question, in my humble opinion, the prospective value of new forms of AI to learning, education, science, health, research, and humanity in general is inevitable. There are some implementation hurdles, costs, and other practical problems; however, AI is undoubtedly an important construct in learning, and it continues to grow. Educational institutions (i.e., universities) can focus on offering general courses on the human-machine interface (HMI), revamp curricula to improve online systems/connectivity, offer courses in robotics (maybe?), and redesign their curricula to promote individual development, generate curiosity about AI, fire up our ambitions to learn more about the technology, and show students how they can live and work alongside AI and benefit from it.


      ( 2 upvotes and 0 downvotes )
    3. Wynn Zhang

      AI is built as a tool, and as with any tool, it is only as useful as how well it is utilized. Thus, AI will be replacing human functions, but it will not replace scientific input, and it’s important for schools to teach how to utilize these tools. While AI is an advanced topic, I believe that using AI during school and learning how it works would help students understand how it can be used in their everyday lives. When implemented correctly, it should not be extremely complex to use. For instance, Google Home uses an AI engine, but it’s a tool that anyone can use easily. I imagine a personalized Google Assistant that could help with inquiry-based learning and organization would be extremely helpful in our learning environments, while remaining very easy to use.


      ( 0 upvotes and 0 downvotes )
    4. John Wu

      Science-based courses will naturally integrate AI more readily, and I do agree with the survey results that AI will complement rather than replace. That being said, students should start to train in soft skills and things AI can’t replicate as an effective way to co-exist and work with AI tech in the future. The most important aspect is to differentiate yourself and have a set of skills that are uniquely social and human in nature.


      ( 0 upvotes and 0 downvotes )
  3. Hayley Mooney

    QUESTION #3: Corporate ID: Do you think that AI will eventually be able to do as good a job as an instructional designer?


    ( 1 upvotes and 0 downvotes )
    1. cindy keung

      Hi team! My thoughts on this prompt are as follows: we still need the soft skills that the “middle man” possesses, skills that come with experience and that, at this point, we cannot program an AI to have and use.

      There’s also a holistic aspect of instructional design, related to pedagogy and aesthetics/design, which is very “human” since it draws on a collection of human senses and experiences. Unless we can program AI to replace these, a human being is still needed.


      ( 2 upvotes and 0 downvotes )
      1. Marie-Eve Masse

        I agree with Cindy here! I think AI can leverage the role of an instructional designer by providing meaningful data to suggest improvements to courses based on students’ interaction levels, and AI can save time in the design. In the end, a human is required to create a positive learning experience. Humans understand the learners, the company, and the message/tone they want to send – these are all intangibles that AI will not be able to develop fully. I would say that AI will enhance the role of an ID, not ever do as good a job as one.


        ( 2 upvotes and 0 downvotes )
        1. Kyle

          I think where we, the educators and human resources, come in is also in how we develop and use AI in different situations. Take websites, for example; creating a website used to be entirely up to those who could write code (efficiently), but now frameworks exist (take this space, for example, or the many website templates in use) that allow us to build them in seconds. I think there will be a similar transition across all sectors, with humans directing the use of AI to the various tasks/areas where resources might have otherwise been deployed.


          ( 1 upvotes and 0 downvotes )
          1. seth armitage

            Hi Cindy, Marie-Eve and Kyle,

            I definitely agree that humans will always have a role in instructional design, and I think that AI is merely a tool that instructional designers can use to evaluate and optimize where their efforts should be focused. With AI, more data will be available to them sooner, so they are better able to identify where students are struggling, refine their approach, and focus on the areas identified. Also, as in the Georgia Tech example, AI-powered chatbots can offload time-intensive tasks like answering common student inquiries, freeing up valuable time to focus their efforts elsewhere.
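
            For a sense of how such a chatbot can field routine questions, here is a minimal retrieval sketch in Python using scikit-learn’s TF-IDF vectors. The FAQ entries are invented, and the Georgia Tech system is of course far more sophisticated than this.

              # Toy FAQ bot: answer with the canned reply whose stored question
              # is most similar (by TF-IDF cosine similarity) to the new one.
              from sklearn.feature_extraction.text import TfidfVectorizer
              from sklearn.metrics.pairwise import cosine_similarity

              faq = {
                  "When is the assignment due?": "Assignment 2 is due Friday at 11:59 pm.",
                  "How is the course graded?": "Grades are 40% assignments, 60% exams.",
                  "Where do I find the readings?": "All readings are posted on the LMS.",
              }
              questions = list(faq)
              vec = TfidfVectorizer().fit(questions)

              def answer(student_question):
                  sims = cosine_similarity(vec.transform([student_question]),
                                           vec.transform(questions))[0]
                  return faq[questions[sims.argmax()]]

              print(answer("when's assignment 2 due?"))
              # Assignment 2 is due Friday at 11:59 pm.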


            ( 2 upvotes and 0 downvotes )
            1. John Wu

              It depends on the context, but from a holistic perspective, I don’t think so, as instructional designers take many factors into account when they design their content. Sometimes inspiration on how to improve or create something new happens simply because of human creativity and innovation (the subjective level), whereas if the design happens on an objective level (e.g., creating courses based on feedback or data analysis), then perhaps AI could be a worthy substitute. Then again, it depends on the course being designed, the target audience and the medium of instruction. There are too many external factors to consider.


              ( 0 upvotes and 0 downvotes )
    2. Ally Darling-Beaudoin

      AI can definitely replace the instructional designer, and the instructor, depending on the content. In my education I’ve learned plenty of things that had no need for a complex, “highly engaging” lesson plan, where instructors were essentially just reading printed instructions aloud. Think of anything people “teach themselves” on the internet – in my case, it was software. I took formal courses to learn Illustrator, Photoshop, AutoCAD, Revit, SketchUp, and other software programs. These courses were essentially scheduled “how to” sessions where we completed a pre-determined task and handed it in for marks at the end of class. I’m not disparaging those teachers, because I’ve actually BEEN that teacher in the case of AutoCAD and SketchUp, and enjoyed the experience. That said, an AI could be given an understanding of the current curriculum and then carry out administering the lessons, grading the results (which should all be the same, so quite easy), identifying issues, and tweaking lesson plans to avoid issues down the road. If there wasn’t an established curriculum to begin with, an AI could examine similar programs and make one from scratch, if necessary. If your learning journey is point A to point B without too much winding, AI is definitely an option – that is, if you know how to use it…!


      ( 2 upvotes and 0 downvotes )
      1. Hayley Mooney

        Hi Ally, thanks for boldly taking the other side of this argument; you make a good point that for straightforward material, AI now has the ability to create equally straightforward instruction. In fact, possibly better than instruction from a teacher who might forget material or ramble unnecessarily. I do sometimes wonder, as this technology gets better and the complexity of the material being taught grows deeper, whether learning this way will become somewhat isolating. As it stands, I can currently tell when an informative article is written by AI, and it is almost exhausting knowing that the information I am reading has never been reviewed by an actual human. My point being that for short informative courses, AI seems like a natural fit, but how long before we lose all human contact in our training, and how will this affect us?


        ( 0 upvotes and 0 downvotes )
        1. Ally Darling-Beaudoin

          Hi Hayley, I expect that as this type of learning becomes more prevalent, curricula will adjust to balance things out. Similar to a “flipped classroom” model where students independently review at home and then collaborate at school, AI-led courses can be used sparingly, and as required, for certain career paths. I am even curious about how AI-led courses could be mediated and scheduled. For example: is there any reason an AI-led course needs to follow the same semester structure? If you must take an AI-led course to learn a particular piece of software, is it enough to simply tell students they must complete the activities within their first year of education? This way, students could prioritize other elements of their education as needed, or take breaks to complete their AI-led work. These are all preliminary ideas, but I think that if (when?) we do begin to embed AI into a curriculum, it should change the way we teach altogether.


          ( 0 upvotes and 0 downvotes )
          1. Hayley Mooney

            I like the idea that AI-led courses could follow more of a lifelong learning model rather than being set into formal semesters. Really, this concept could start at a young age, with courses that structure lessons like video games: less of a pass/fail thing, and more progress-at-your-own-pace as you gain skills. There has been a lot of research on how learning doesn’t need to be as formal as it was designed in the past; perhaps AI is the key to promoting lessons for the sake of actually accomplishing something, rather than passing a test.


            ( 1 upvotes and 0 downvotes )
    3. hasssae1

      Hi Hayley, great feedback provided by colleagues here so far. My two cents is that when it comes to instructional design (e.g. in corporate Learning & Development), AI is more a tool and a collaborator, rather than a substitute for “the human”. In other words, the repetitive tasks in instructional design can be carried out by AI, but the “thinking” has to be done by a human, not the other way around. That is to say that the “master” is always the human, and the “servant” is always the technology.


      ( 0 upvotes and 0 downvotes )
      1. Hayley Mooney

        Hi Saeid, I am inclined to agree with you at this point, because AI has yet to truly live up to the “intelligent” part of its potential. But what about straightforward training courses, as mentioned above by Ally in her post? I believe you work/have worked in the oil and gas industry? How about safety training courses or materials training courses (HAZMAT?)? These courses don’t usually contain much in the way of creative material – do you think AI might be able to tackle them in the near future?


        ( 1 upvotes and 0 downvotes )
        1. hasssae1

          Hi Hayley, absolutely it can. However, going back to my point, the reason that AI can fully handle such straightforward courses is because “the thinking” has already been done by “the human” before. As you said, it appears that the “intelligent” part of AI is not there, or at least I have not had much exposure to it yet.


          ( 0 upvotes and 0 downvotes )
          1. Ally Darling-Beaudoin

            Saeid, Hayley, I have to disagree with you about AI not being “intelligent” in today’s world. I think we neglect to consider what intelligence is for a human being versus a machine. A researcher I have been following for a few years, Dr. Susskind, has some really insightful thoughts on this topic; I definitely recommend this short article: https://hbr.org/2016/10/robots-will-replace-doctors-lawyers-and-other-professionals. In this article, Dr. Susskind discusses how AI has already taken over many tasks that we would have considered “impossible” for a machine, and he expects this to only grow as we continue to standardize our professional work. He cites that (in 2016) over 60 million disagreements were resolved on eBay alone through entirely “online dispute resolution”, with no legal consulting required at all. Despite the fact that most of us would argue a legal authority is needed for a legal proceeding, eBay was instead able to build an AI using precedent, which then carried out decisions in place of an individual – and which mimicked the decisions an individual would make with startling accuracy! A similar success story exists within healthcare, another field that we think must involve humans directly: Dr. Thrun developed an algorithm which could identify skin cancer with accuracy equal to that of all 21 tested board-certified dermatologists (the study is here: https://www.nature.com/articles/nature21056). The “machine” did not need to “learn” about dermatology; it only required precedent to develop a learning pattern. And consider the benefits: how many dermatologists can look at over 1 million different skin lesions in their career? A computer can do it easily, and “remember” them all in the most intricate detail. There are of course limitations, but I think we get bogged down waiting for “I, Robot” to occur before we admit that AI is here; intelligent computing that can and does make human-level decisions already exists, and will only grow.
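
            For a sense of how “precedent” becomes a classifier, here is a bare-bones transfer-learning sketch in Python/Keras in the general spirit of that study: a network pretrained on everyday images is fine-tuned on labelled lesion photos. The “lesions/” folder, the two-class setup and the training settings are all assumptions for illustration, nothing like the published pipeline’s scale.

              # Hedged sketch: reuse an ImageNet-pretrained network and train a
              # small new head on a hypothetical folder of labelled lesion images.
              import tensorflow as tf

              train = tf.keras.utils.image_dataset_from_directory(
                  "lesions/", image_size=(299, 299), batch_size=32)  # hypothetical data

              base = tf.keras.applications.InceptionV3(weights="imagenet", include_top=False)
              base.trainable = False  # keep the pretrained features; train only the head

              model = tf.keras.Sequential([
                  tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # InceptionV3 expects [-1, 1]
                  base,
                  tf.keras.layers.GlobalAveragePooling2D(),
                  tf.keras.layers.Dense(2, activation="softmax"),  # e.g. benign vs malignant
              ])
              model.compile(optimizer="adam",
                            loss="sparse_categorical_crossentropy",
                            metrics=["accuracy"])
              model.fit(train, epochs=5)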


            ( 0 upvotes and 0 downvotes )
            1. Hayley Mooney

              Hi Ally, you’re absolutely right; in many cases AI is already doing a good job, especially in areas like skin cancer, where there is less risk of bias (as long as there are samples of cancer on all colours of skin?). I do worry about precedent, though. Machine-learning algorithms are trained on “data produced through histories of exclusion and discrimination” (Ruha Benjamin, as quoted in https://www.technologyreview.com/2019/10/17/75285/ai-fairer-than-judge-criminal-risk-assessment-algorithm/). I’m not convinced that AI’s intelligence has gotten to the point of recognizing its own bias, and although humans are also biased, I don’t think we are as likely to hold AI to account. The film at the end of the corporate section of our presentation talks a bit about how we can combat this bias and make AI systems align with human intentions and human values, rather than precedents which may not be ethical.


              ( 0 upvotes and 0 downvotes )
              1. Ally Darling-Beaudoin

                Hey Hayley, your point about bias is a valuable one. I’d argue bias is not more inherent in AI than in “traditional practice”, and that the same precedent limitations exist in both cases. Medical textbooks used to teach new doctors have the same skin-tone limitations: it is a limitation of the field of medicine, not of AI. A flaw in AI decision-making is a reflection of our own, after all… but I agree, an AI is not as well equipped to recognize bias as we are (that generally not being its purpose). A good thing too, since human beings have a plethora of biases to combat! That said, being cognizant of this limitation of AI is a dilemma, since there is generally minimal overlap between AI researchers and other (in this case medical) researchers. And while a medical practitioner would recognize a common medical bias, an AI researcher may not. This is another reason it is important to have holistic development of these types of systems, and for everyone to understand more about AI.


                ( 0 upvotes and 0 downvotes )
                1. mstr

                  Hi all, I’ll chime in on the AI bias topic. I watched a documentary called “Coded Bias,” which highlights some of the concerns regarding AI. The film states: “Data is destiny – if you have largely skewed data then you will have skewed results. Algorithms are learning the biases from humans. Machine learning is a scoring system. Who owns the code?” Just as with every emerging technology, it’s important to be aware of both the advantages and the criticisms, so that the awareness can hopefully lead to improvements and solutions – in this case, reduced bias in both AI and humans.


                  ( 0 upvotes and 0 downvotes )
              2. John Wu

                Regarding bias, as a question for thought for everyone: do you think algorithms are neutral or subjective?


                ( 0 upvotes and 0 downvotes )
                1. Hayley Mooney

                  Hi John, I’d personally say that many algorithms are designed with the AIM of being neutral, but end up subjective due to the natural values and opinions of the people creating them. I don’t think designers purposefully design an algorithm to be biased, but if that’s the way their society works, the results of the algorithm will likely reflect that.
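
                  A short illustration of that point in Python: a deliberately “neutral” rule that just learns approval rates from historical decisions faithfully reproduces whatever skew those decisions contained. The groups and numbers here are invented.

                    # "Neutral" learner: approve when the historical approval rate
                    # for similar applicants exceeds 50%. The history is skewed,
                    # so the learned rule is too.
                    history = (
                        [("group_a", True)] * 80 + [("group_a", False)] * 20
                        + [("group_b", True)] * 30 + [("group_b", False)] * 70
                    )

                    def learned_rule(group):
                        outcomes = [ok for g, ok in history if g == group]
                        return sum(outcomes) / len(outcomes) > 0.5

                    print(learned_rule("group_a"))  # True  -- approved
                    print(learned_rule("group_b"))  # False -- rejected, purely from past data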


                  ( 0 upvotes and 0 downvotes )
    4. Wynn Zhang

      Hi AI team! I believe that while there will be some strangeness in using only AI-generated instructional design, eventually AI generation coupled with human input will be able to create natural, high-quality instructional designs that fit their intended use. We often treat AI as something we leave to its own devices, but as with any input/output machine, there are many ways it can be tweaked to generate a result favourable to the task you set it. For this, the amount of data the AI has access to is key to the quality of the design. With a lot of data on how people interact with the learning materials, the AI would be able to adapt them, spot weaknesses, and work towards a more comfortable learning experience for its users. Without enough data, however, the AI would unfortunately have to guess at the key points of the lesson.


      ( 0 upvotes and 0 downvotes )
  4. Hayley Mooney

    QUESTION #4: The Market: Are AI products becoming more accessible, or are they still out of reach for instructors who need them?


    ( 1 upvotes and 0 downvotes )
    1. cindy keung

      AI devices are accessible for many of the things we may need them for, and these are varied: e.g., the Roomba, using AI to proctor remote exams, police/bomb investigation robots, and robot concierges servicing hotels.

      In the arena of instruction, I have found that training instructors to implement AI, even when it is readily available, is most of the battle. In addition, AI used in instruction can often be invasive. Take the Proctorio remote exam application: it involves a high level of surveillance of students and their spaces and has proven unpopular (among both instructors and students) in some educational settings in which it was used. It is a highly invasive AI that can even detect which way your eyeballs are moving during the exam, and it can be programmed to randomly stop your exam and scan your room. While the program is great for the prevention of academic dishonesty, people are not used to such a sophisticated level of AI… yet.


      ( 0 upvotes and 0 downvotes )
      1. Hayley Mooney

        The remote-exam proctoring is an interesting example, Cindy. I have also read articles about this technology being inherently biased against certain races and gender orientations. It is true that in order to do what they are programmed to do, a lot of these technologies do seem invasive. I remember being horrified when I first learned about Amazon Alexa listening in on your household, but now smart speakers seem to be a standard fixture in most homes. I wonder if we will come to accept eye-movement tracking and the like as standard features, just as we tick the “I agree with the terms and conditions…” box without reading it for everything else…


        ( 0 upvotes and 0 downvotes )
        1. cindy keung

          Thanks, Hayley. Yes, that incident with Alexa was alarming. There was also a recent incident where Alexa challenged a 10-year-old to touch a coin to the prongs of a half-inserted plug: https://www.bbc.com/news/technology-59810383. After this incident, Amazon made updates to Alexa. I think, like all technology, AI is only as smart as the humans who created it, and humans do forget things and make errors.


          ( 0 upvotes and 0 downvotes )
        2. Nathan Bishop

          Hi Hayley and Cindy,

          I wanted to chime in on the exam proctoring issue as my school has a fair bit of experience with this. For a period of time, we used a proctoring service that essentially connected students to a live proctor through a Zoom-like interface. Students were telling us they felt uncomfortable connecting to a stranger like this, which is understandable. We moved toward a new service that was AI-based and landed on something called Integrity Advocate. We chose them because they were super serious about data protection. For example, they don’t record videos of students violating the exam rules; they just take a screenshot. That screenshot is only saved for a certain number of days (I think 14). The exam records in general are kept for slightly longer (maybe 60 days) but students have control over the data and can request that it be deleted sooner directly through Integrity Advocate. While their system still tracks eye movement and that sort of thing, it does not feel anywhere near as invasive as other platforms I have used (I have read a lot about the Proctorio issues because we were considering it at one point). We have probably had 3000 exams run through Integrity Advocate to date and I think we’ve had 3 or 4 complaints, and those complaints weren’t even really about it being invasive, more about it picking up on eye movement with too much sensitivity (which you can just turn off if you want to). All this is to say that when the AI is done right, and a service shows particular interest in protecting one’s privacy, people seem to be a lot more accepting of it.


          ( 0 upvotes and 0 downvotes )
          1. cindy keung

            Hi Nathan,

            That is great input about your experience. Is Integrity Advocate costly? Our school chose Examity, and it isn’t nearly as invasive as Proctorio. One issue I noticed is the lag time to actually chat with live support; I think the lag is long because they outsource their workers, and there is at least a 15-hour time difference plus connectivity lags. I think Proctorio is actually brilliant – especially the data analytics available to instructors to view after exams have been taken. However, we aren’t quite ready to appreciate its features in their entirety.


            ( 0 upvotes and 0 downvotes )
            1. Nathan Bishop

              Hi Cindy, I don’t find it to be too expensive. I believe we are paying around $10 per session (we purchase thousands of seats, so we might be getting a discount as a result). I remember with Examity we were paying $30 per session, so this was a simple change for us.
              I had the issues you described with Examity, among many others. Mostly, the technical support for us and our students was quite poor. We received complaints constantly about this, as well as the fact that students had a very hard time understanding the proctors. The last straw for us was when they shut down their service during the early days of the pandemic, and my team had to proctor exams via Zoom for a few weeks. It was bad! When all is said and done, we get far fewer complaints about Integrity Advocate than we did with Examity. Students also love the fact that they don’t need to schedule their exam; it is ready to go on demand!


              ( 1 upvotes and 0 downvotes )
              1. cindy keung

                Thanks, Nathan. Yup – Examity is more expensive! Oh dear, I’m sorry to hear about that shut down. It’s great that you found a proctoring program that works! I’m going to check out Advocate! Thank you 🙂


                ( 0 upvotes and 0 downvotes )
    2. hasssae1

      Existing literature suggests that AI is increasingly accessible to teachers and students, and it continues to grow despite some hurdles (e.g., budgeting issues, lack of equipment availability, infrastructure support, limited topics covered, etc.). Looking at the education realm alone, I can easily name dozens of common applications that are accessible to both teachers and students when it comes to AI tutoring. Intelligent tutoring systems (ITSs) are now widely adopted and accessible; a few examples are SQL-Tutor, EER-Tutor, StoichTutor, eTeacher, REALP, Why2-Atlas, and SmartTutor.


      ( 1 upvotes and 0 downvotes )
      1. Aaron Chan

        Hey hasssae1 – it’s really interesting that you are familiar with ITSs. Could I ask what line of work you’re in? Personally (in public education), I have yet to incorporate AI into my teaching practice.


        ( 1 upvotes and 0 downvotes )
        1. hasssae1

          Hi Aaron, thank you for the question. Of course. I’m a training manager in the O&G industry. However, my exposure to and limited knowledge of ITSs has come through academic research in the MET program and my previous graduate degree research (rather than work). The idea of an ITS is fascinating, as it simulates a human tutor, and the interactive simulations enhance the learner’s engagement and depth of learning. If I recall correctly, a research paper that I read last year argued that students showed an average of approximately 0.8 sigma (nearly one letter grade) of improvement after using an ITS.
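
          To make that 0.8 sigma figure concrete with a quick Python check (the exam mean and standard deviation here are invented): on a test with mean 70 and SD 10, the effect is roughly an 8-point gain, and it would move a median student to about the 79th percentile.

            # What a 0.8 sigma effect size means on a hypothetical exam.
            from statistics import NormalDist

            mean, sd, effect = 70, 10, 0.8
            print(mean + effect * sd)        # 78.0 -- about one letter grade
            print(NormalDist().cdf(effect))  # ~0.79 -- a median student rises to
                                             # roughly the 79th percentile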


          ( 1 upvotes and 0 downvotes )
          1. mstr

            Wow, a letter grade increase is a noteworthy improvement. I think AI is becoming more and more accessible; my hope is that educational leaders and decision-makers see the value in funding these technologies, and that the educational technology industry continues to find workarounds and innovative products to make the advantages of AI accessible to all who would benefit.


            ( 0 upvotes and 0 downvotes )
    3. Wynn Zhang

      As mentioned above, I do feel that AI-powered technology is very accessible to anyone who wants to use it; however, if we are talking about accessing AI by itself, there is still a very large barrier to bypass before it becomes part of an educator’s toolkit. After looking through many of the tools I use for education, I found that a fair number of them use AI technology, but it’s just extremely difficult to see. It reminds me of how much goes on under the hood and how much AI is already affecting our practice. However, I have attempted to use machine learning to help with my assessments in the past, and that was a very difficult and painful process. Thus, depending on your point of view, AI products are already here, as many of the stronger tools we use have AI sprinkled within them for ease of use.


      ( 0 upvotes and 0 downvotes )
  5. Hayley Mooney

    QUESTION #5: Privacy and Security: Do AI privacy and security concerns worry you? Or do you think we’re going in the right direction?


    ( 1 upvotes and 0 downvotes )
    1. cindy keung

      I think that as AI becomes more prevalent, we will gradually become desensitized to the amount of privacy we must “give away” in order to use it. That being said, since AI has not yet peaked, nor have we reached the actual era of AI, I think it’s too soon to tell. Overall, I think it can both enhance and frustrate our lives, and it can be very dangerous when it is not used ethically. And “ethically” is often defined by those with powerful influence and particular financial interests.


      ( 0 upvotes and 0 downvotes )
      1. alexis reeves

        I agree with you about us becoming desensitized to the privacy element in some cases, especially when you look at how outraged everyone was when we found out that websites were selling our information to third-party companies to target us with marketing/ads. Now, when a box pops up asking about my ad preferences, I just “accept all” to get rid of it and get to the information I want. I can see the same thing happening to some degree with AI privacy. Not to say that it’s completely safe and we shouldn’t do anything about it; I am just hoping that more precautions can be taken before AI becomes a household name.


        ( 1 upvotes and 0 downvotes )
        1. seth armitage

          Hi Cindy and Alexis, I agree with both of your comments on people being gradually desensitized when it comes to AI. We have already been gradually desensitized to the sharing of our information, as Alexis pointed out. With reports of certain apps like Facebook listening in on our conversations in order to target us with ads, it can be a little alarming thinking how far things will be taken with AI over the years. I’m sure most people have experienced seeing an ad for something they never searched for, but had just recently had a conversation about. I also hope that sufficient checks and balances will be put in place on a continuous basis as the capabilities of the technology improve.


          ( 1 upvotes and 0 downvotes )
          1. Kyle

            I wonder what decentralized data storage will do to help alleviate people’s concerns about data and privacy. As corporations deploy blockchain technologies to host their AI systems, will the audit trails and common ledger equate to a stronger sense of safety among consumers?
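
            The core idea behind that kind of audit trail can be sketched in a few lines of Python: each record’s hash folds in the previous record’s hash, so silently rewriting history breaks the chain. Real blockchain deployments add distribution and consensus on top; the log entries below are invented.

              # Minimal hash-chained audit log: each entry commits to the one
              # before it, so tampering with history is detectable.
              import hashlib, json

              def append(log, record):
                  prev = log[-1]["hash"] if log else "0" * 64
                  body = json.dumps({"record": record, "prev": prev}, sort_keys=True)
                  log.append({"record": record, "prev": prev,
                              "hash": hashlib.sha256(body.encode()).hexdigest()})

              def verify(log):
                  prev = "0" * 64
                  for entry in log:
                      body = json.dumps({"record": entry["record"], "prev": prev},
                                        sort_keys=True)
                      if entry["prev"] != prev or \
                         entry["hash"] != hashlib.sha256(body.encode()).hexdigest():
                          return False
                      prev = entry["hash"]
                  return True

              log = []
              append(log, "model accessed student record 1042")
              append(log, "model retrained on March dataset")
              print(verify(log))             # True
              log[0]["record"] = "nothing happened here"
              print(verify(log))             # False -- tampering detected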


            ( 1 upvotes and 0 downvotes )
    2. Ally Darling-Beaudoin

      It would be silly not to worry about privacy and data concerns surrounding AI, and all the other tech/surveillance in our lives. Often, as Cindy mentions, algorithms are built to provide revenue opportunities to a company, with little regard for the data beyond this purpose. In a lot of cases, an organization will justify that it does not “connect” the data to a specific individual, and therefore that the data is acceptable and harms no one, when it still has real potential for damage. Take Walmart as an example, which certainly monitors the shopping habits of each data point (individual) in its stores. In monitoring this, it may be determined that hair elastics are purchased most frequently in the second week of the month. With this info they know when to place sales, where to place eye-level product, and what sorts of displays to add to the store to entice buyers who may be on the cusp of a purchase. Not so bad when considering hair ties, but imagine instead that it’s bottled water or air conditioning units, and the data advises that each time a heatwave approaches, both items are purchased en masse. This will tell the company, in advance of the weather (easily monitored by weather patterns…), not only to buy up all available stock, but also potentially to hoard it and sell it at a higher price. This isn’t necessarily “illegal” as much as it is amoral (although laws are being created to combat this very issue!). The point being, it’s not just the individual that has to worry about their own personal data collection; even unconnected data can pose a risk to the individual. As for whether or not we are on the right track, I think it’s impossible to tell…
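
      The mechanism described above needs nothing exotic; aggregating de-identified transactions by week is a few lines of Python with pandas. The sales records here are toy data, invented for illustration.

        # No shopper is identified, yet weekly aggregation still reveals
        # the demand spike a retailer could exploit.
        import pandas as pd

        sales = pd.DataFrame({
            "date": pd.to_datetime(["2021-07-01", "2021-07-02", "2021-07-20",
                                    "2021-07-21", "2021-07-22", "2021-08-03"]),
            "item": "water",
            "units": [12, 9, 310, 280, 295, 14],
        })
        weekly = sales.set_index("date").resample("W")["units"].sum()
        print(weekly[weekly > 100])   # the heatwave week jumps out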


      ( 1 upvotes and 0 downvotes )
      1. Hayley Mooney

        Good point, Ally. I think regulation tends to be slow to deal with cases where laws are not so much being broken as “bent”. It feels uncomfortable for companies to collect and use data from learners in the same way as consumers, but then again, edtech companies need to make money too! Do you think laws that prohibit the collection of data might also reduce the availability of the free learning technology which many teachers currently take advantage of?


        ( 0 upvotes and 0 downvotes )
        1. Ally Darling-Beaudoin

          Hi Hayley, I’m sure some learning tech will be compromised by increased security or privacy mandates, but this would mean that we have deemed their data collection unethical, so the cessation of their work would be a net gain for the industry rather than a loss. Though I suppose it would not feel that way for the teacher who is trying to use the platform.


          ( 1 upvotes and 0 downvotes )
    3. Marie-Eve Masse

      I would say that data privacy is the foundation of this concern for me. If ethics are followed when it comes to the data used for AI, then I am not worried. The largest issue is that there is no single standard of ethics to follow, and who enforces whether it is being followed? Of the models I have reviewed for a learning analytics ‘code of practice’, this is the one I’ve preferred: https://www.jisc.ac.uk/guides/code-of-practice-for-learning-analytics. The point most important to me is ‘transparency & consent’: knowing, as an individual, what is collected and how it is being used is imperative.


      ( 1 upvotes and 0 downvotes )
      1. Hayley Mooney

        Thanks for bringing up the ethics side of things. Although it is about Facebook rather than AI, there is a podcast by Radiolab (https://www.wnycstudios.org/podcasts/radiolab/articles/facebooks-supreme-court) which discusses this same conundrum: who ultimately makes up the standards that should be followed? This subject gets especially difficult due to the international nature of today’s technology; what one country might believe to be ethical could be completely unacceptable in another. As you said, transparency is important, although as has been mentioned in many of the posts above, how many people stop and actually take a look at the information they are consenting to?


        ( 0 upvotes and 0 downvotes )
    4. JacksonLiang

      I believe privacy and security will be a contentious topic in school districts when it comes to implementing AI. Already, in terms of software, Microsoft Teams was chosen partly based on these concerns; its servers on Canadian soil give it more of a green light compared to some other platforms. I do think laws and rules have to catch up with AI privacy. I think we are moving in the right direction; once more ground is made on examples of positive AI use in schools, I believe schools can welcome it with more open arms.


      ( 0 upvotes and 0 downvotes )
      1. Nathan Bishop

        You are totally right that laws and rules need to catch up with AI privacy. I think the main issue here is that technology is just moving way too quickly. Governments tend to move quite slowly when it comes to rules and regulations, so we need to see some quick action if we want to stay on top of privacy and safety related to AI!


        ( 0 upvotes and 0 downvotes )
        1. Hayley Mooney

          Nathan, you have sparked an idea in my head: what if the same companies that profit off of technological advances had to spend a mandatory share of their profits on funding non-partisan research into how to regulate those same technologies? In a perfect world…


          ( 0 upvotes and 0 downvotes )
      2. Terri-Lynn McLeod

        I agree that privacy and security are an issue for school districts. Because most students are minors, districts need to be particularly careful about what student information is potentially being shared and with whom. I also agree that laws need to catch up to AI privacy, but as Nathan has pointed out, governments can be slow to act. In the meantime, it is important for schools to do their best to educate students about privacy and security (or the lack thereof).


        ( 0 upvotes and 0 downvotes )
      3. John Wu

        From personal work experience, the issue with legislation on digital privacy and virtual matters is that it often takes considerable time to draft and introduce, simply because intangible elements are so complex. At this rate, users will probably adopt AI faster than the laws meant to regulate it, simply because scientific advancements are being made so quickly.


        ( 0 upvotes and 0 downvotes )
    5. Marie Finch

      I agree with Cindy that people are less and less concerned with privacy; there is a need to be visible in everything they do (yes, social media and blogs). People have become desensitized to the amount of information they give away. With website cookies, most people just accept and move along, not realizing what information they are giving away. It’s the same with carrying a device: GPS on our devices helps track shared information, yet we are unlikely to give them up. Your phone isn’t listening to you, but if the friend you hang out with is googling places to stay in Hawaii and chatting with you about what they found, you can be assured that you will start seeing ads about Hawaii. If AI streamlines and assists our lives, I honestly don’t see many everyday people raising concerns. I think it will be up to the privacy groups to put up the barriers and establish checks and balances for these companies to follow, to ensure not every bit of information is put out into the world.


      ( 0 upvotes and 0 downvotes )
  6. Hayley Mooney

    QUESTION #6: General Thoughts and Reflections: Anything goes here!


    ( 0 upvotes and 0 downvotes )
    1. alexis reeves

      I like the idea of AI objectively identifying students’ ability levels and which students need additional assistance in different subject areas. Normally this is left up to the previous year’s teacher and can occasionally carry subjective interpretations of a child’s particular learning abilities. This in turn sometimes leads to labelling students who may not fit or need that label. For example, students with behaviour issues rather than learning difficulties may be wrongly labelled as not understanding certain concepts in math, when in fact they just didn’t have a good rapport with that particular teacher, other students, etc. When learning about Mathia, I found that its personalized reporting could really spot which students to group together based on truer ability levels and allow for peer mentoring to take place within these objectively chosen groups.


      ( 0 upvotes and 0 downvotes )
      1. Hayley Mooney

        You bring up a great point, Alexis. We often read about the biases that can affect AI programs, but in many ways these technologies can provide an unbiased opinion of a student’s performance. Another example could be AI-powered essay marking in exams. I had always had my doubts about how well a computer could actually grade an essay, but used as a second reviewer, it could provide a teacher with an unbiased second opinion of a student without any of the personal baggage. Although perhaps it is still too early to trust AI’s opinion on written answers versus math?


        ( 0 upvotes and 0 downvotes )
        1. alexis reeves

          That’s a good idea, for proofreading as well. I remember reading about a study in China where they trialled some essay-marking software, but in the end the results varied according to the attention/time the teacher spent getting to know the program and their directions to the students on its use. I guess, like most of the technology we’ve learnt about this term, a big factor is the teacher knowing how to use the technology, and how to distribute and explain it, before handing it to the students who will be using it.


          ( 0 upvotes and 0 downvotes )
    2. Nathan Bishop

      Even if AI ends up replacing the duties of humans in various sectors, there is a bright side. One of the slides in the presentation stated, “There will always be roles that require creative, cognitive and emotional intelligence skills.” I think this is an opportunity to move people out of some of the mundane tasks (those that AI can easily fill) and have them focus on what they can bring to the table, not just as humans, but as individuals too. We all have things we are good at compared to other people, so we need to find those things and lean into them. This will probably be harder for some, depending on their job, but teachers, for example, can focus their time on delivering engaging lectures, facilitating discussions, working one-on-one with students, and so on. I think we all sometimes slack on those important, human things that we know we should be doing because we are stuck in the weeds of all the marking or data entry we need to do. If we were free from that (and couldn’t use it as an excuse), we would be forced to do what we do best. Add to that the fact that if we don’t, we might be out of a job, and I think we would have some really motivated people! There is, of course, a concern that with AI everywhere, there may not be enough jobs for people to all just lean into their natural human gifts, but I wouldn’t say that is a given. I think we can get creative enough to build out industries and markets so there is more space for this kind of thing.


      ( 0 upvotes and 0 downvotes )
      1. Aaron Chan

        To add to that, I agree that teachers have a lot of “non-teaching” responsibilities that could be streamlined with technology (or even without it). I’d argue that investing in technology outside the classroom can lead to positive educational benefits, perhaps even more so than using it in the classroom: schools need good enterprise technology as well as educational technology. On that note, there’s an interesting Malcolm Gladwell video on the topic of education. He shared a study finding that teachers generally do not invest more time in these teaching activities when their workloads are lightened (by a reduction in class size). So I’m not sure that more resources and time will directly lead to a higher-quality education.


        ( 0 upvotes and 0 downvotes )
        1. Nathan Bishop

          Hi Aaron, That would actually be a super interesting thing to look at! I wonder if there are studies out there on it. I could definitely see this being the case. It would be super disappointing to spend all this money and effort implementing systems to free up teacher time, only to find they don’t take advantage of it! Of course, I think this varies drastically by teacher. There are some very passionate teachers out there who will actually take advantage of their newfound time and really pour it back into their students. Others may not be as motivated and may just take longer coffee breaks (although there is definitely a time when this is needed). Some teachers might argue that they are so overstretched and putting in extra hours that an implementation like this would actually just get them back to neutral, and they could finally stop putting in extra hours, which is a fair point. Even in this case where teachers may not put in the extra hours, there will certainly be less burnout, which is positive.


          ( 0 upvotes and 0 downvotes )
          1. Terri-Lynn McLeod

            Some interesting thoughts here. I agree that there are many tasks that teachers carry out every day that could just as easily be accomplished using AI. I think that some teachers will be skeptical as to the real effectiveness of AI when it is first introduced…will it really work? Some of those teachers may continue to do the tasks themselves until they see proof that AI is really effective. As for what teachers will do with all of the “extra” time they will have once AI takes over some of their current tasks, I agree it will bring them to neutral, where they are not overworked and overstressed. It will take some time to trust in AI and learn how best to use their newfound time.


            ( 0 upvotes and 0 downvotes )
            1. mstr

              Time, the most valuable resource; there’s never enough of it! So much has been added to teachers’ plates over the last few years that I sometimes wonder if technology has created more work. If AI could free up even a little time, I believe most educators would use that time in whatever way they deemed most beneficial for their students.


              ( 0 upvotes and 0 downvotes )
    3. Marie Finch

      I am really interested in many of the programs your OER has shown, as well as the White Hatter website Cindy posted about below. I think, like many who have posted, there is huge value in integrating AI into educational practice. Streamlining the production of coursework design to free up face-to-face interaction time with students would be a benefit. I am amazed every year by the administrative side of teaching; emails and record keeping take up so much time. I do have a concern with AI use and how it would be perceived by parents. Would there be pushback from them about AI involvement? I know for myself I place a huge value on human interaction. Would a parent want to hear from an AI or from the teacher?


      ( 0 upvotes and 0 downvotes )
      1. Hayley Mooney

        Hi Marie, As a parent, I think I can distinguish between meaningful interaction with a teacher and updates that don’t need a human touch. For example, my kids have gone to schools where report cards consist of super lengthy writeups, but on closer inspection it is clear that the paragraphs are just cut-and-paste. In these instances, I’d be just as happy getting regular reports from an AI telling me how my kid was performing, rather than a sort-of-human written report that was nevertheless time consuming for a teacher to produce. To get a report like this, and then have the teacher be more available to actually talk face-to-face about my kid, would be a definite bonus!


        ( 0 upvotes and 0 downvotes )
  7. Hayley Mooney

    ****QUESTION #7: Curious Questions: Do you have your own AI-related question? Post it here and let’s start a discussion!


    ( 1 upvotes and 0 downvotes )
    1. cindy keung

      It is very intriguing to learn about Yuanfudao; I actually did not know about it. While I was reading through the information, I was a bit alarmed at its enormity, mainly because of what I understand about China’s “privacy” and how the country does “law”. All of my teacher friends and colleagues, and their children, who live/work/attend school there use quality VPNs (often more than one) that allow them full access to the World Wide Web, without restrictions and without having their information “watched”.


      ( 1 upvotes and 0 downvotes )
    2. cindy keung

      AI has been on my mind a lot while I complete the Venture Project for this course and research Data Science courses. One thing I think we need to remember is that AI is not always obvious; it isn’t always tangible or physically used in our daily lives. It is often “covert” and embedded in data science, where algorithms track our online search and reading habits, which is why certain ads and people pop up after we conduct a Google search. So, AI is part of our lives in passive and indirect ways.
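
      As a toy illustration of how that covert profiling can work (the keyword map and categories here are entirely invented; real ad systems are vastly more sophisticated):

```python
# Toy interest profiler: match search queries against ad categories
# and keep a running weight per category. Keywords/categories invented.

from collections import Counter

AD_CATEGORIES = {
    "travel":  {"hawaii", "flights", "hotel", "resort"},
    "fitness": {"gym", "protein", "running", "yoga"},
}

def update_profile(profile, query):
    """Increment ad-category weights for every keyword match in a query."""
    words = set(query.lower().split())
    for category, keywords in AD_CATEGORIES.items():
        profile[category] += len(words & keywords)

profile = Counter()
for q in ["cheap flights to Hawaii", "Hawaii resort reviews", "yoga mats"]:
    update_profile(profile, q)

# The heaviest category is what an ad network would likely serve next.
print(profile.most_common(1))  # [('travel', 4)]
```

      Nothing here requires a microphone or anything dramatic; a handful of searches is enough to start steering what we see.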


      ( 0 upvotes and 0 downvotes )
      1. Hayley Mooney

        Very true, Cindy! I’m often a bit creeped out by how well the digital world seems to know me, despite my trying to keep my information private. It seems impossible to avoid having your data collected and used in one way or another. Do you think we need to reduce this with more regulation, or give in to the convenience?


        ( 1 upvotes and 0 downvotes )
        1. cindy keung

          Hi Hayley,

          I’m not sure that I, as an individual, have complete power to regulate what shows up in my social media, due to algorithms that are set from the start to track my habits. As I type that out, I think the powers that be lead the way in tracking users’ online habits and are at least three steps ahead of us, so if we were serious about regulating things or preventing tracking, we would need to do what we can on our side of the controls and tweak our settings. The other, more drastic option is to delete certain accounts altogether, which a few of my friends have done: no Facebook, no Twitter, won’t buy Apple products, no Gmail account, and one friend no longer even uses certain operating systems on his computer. This is a bit of an extension of AI in Instructional Design, but related to how AI in social media aids dangerous tracking behaviour: have you heard of the White Hatter? He and his team bring awareness to school communities about the dangers of online behaviour. In many instances, he shows us how easy it is to track someone, down to where they park and work, using social media. And social media, as already mentioned, is full of AI/data science implementations that allow people to look for information about others and their habits. https://www.thewhitehatter.ca/ His emphasis is not on AI per se, but AI is certainly a culprit in how easily people and the details of their lives can be “researched” without anyone even knowing them personally.


          ( 0 upvotes and 0 downvotes )
          1. Hayley Mooney

            I hadn’t heard of the White Hatter, but I will definitely keep them on hand as a resource! What your friend is doing is likely the safest approach as far as tracking goes, but so massively inconvenient. A few years ago I decided to stop using Facebook, but then found I had to reactivate the account because school extracurricular announcements and other information could only be found there, and if I didn’t occasionally check I would end up out of the loop. Unless all consumers push back and demand products that don’t track us so thoroughly, I think we’ll continue to be reliant on them. And of course, as you mentioned in your original post, thanks to AI it’s becoming more difficult to even know when and how we’re being tracked in the first place. It all seems to be the perfect storm to make us just give in and accept the lack of privacy of our data.


            ( 1 upvotes and 0 downvotes )
            1. cindy keung

              I totally hear you with regard to the massive inconvenience of shutting down one’s social media accounts. This comes up also in instances where instructors incorporate social media into their coursework/instructional design, such as an Instagram account for the class, or a project where you need to create a profile of a historical figure/event (for social studies). There is a demographic of families/parents who elect to literally “sign out” of all digital learning activities for their children. This would make it very, very hard for such a student to participate in any blended/online learning activity.


              ( 0 upvotes and 0 downvotes )
