Final Project: AI Tutors Affect Original Authorship. Should we care?

Introduction

Artificial Intelligence (AI) tutors affect authors and their creations. They influence teaching and learning and are considered both boon and bane. At issue with machine-learning algorithms is their role in the classroom, their value for education and the workforce, and how they will indelibly (digitally) leave their mark on intellectual and creative writing. This is a meandering excursion more than it is a research paper that defines a problem and draws conclusions from testing and literature review. We will consider a few examples of writing with AI tools before leaving with something interesting on which to reflect.

Bane

Universities have become increasingly dependent on education technologies like Grammarly (https://www.grammarly.com/edu) and Turnitin (https://www.turnitin.com/) plagiarism-detection software to identify and address missing authorship attribution and low language proficiency in student writing. They are supplementing hands-on teaching support with AI-driven technology coaches to help address foundational issues of grammar and spelling. Herrington and Moran (2001) outline some of the problems with arguments claiming that technology is faster, better, and less expensive, citing numerous concerns for the profession of English. They claim that machine-reading software has a negative effect on student learning, and that large-scale use of technology, combined with the attractively rising stock prices of the companies leading the software industry, lends an artificial credibility to the tools, even suggesting that education should follow practices more like business functions (p. 495). At best, Herrington and Moran allow that technology companies offer persuasive marketing campaigns that promise to relieve large class sizes, high student-faculty ratios, and the burden on instructors who must read large quantities of student writing. They believe the adoption of technology will permanently embed those structural problems into the institutions that use it.

Boon

McKee and Porter (2018) take a far less Luddite approach to engagement with AI in writing. They address three areas of concern that we should consider regarding the current and future state of AI technology development:

  1. AI chatbots are now operating in the workplace.
  2. AI writing bots, aka “smart writers,” will soon do our writing for us.
  3. AI-based teachers (aka “smart teachers”), or at least teaching assistants, are now in use at some universities.

Addressing the concern that teachers are being driven out of institutions by technology, McKee and Porter suggest this was already happening in introductory writing programs with the introduction of advanced placement writing tests, before advanced technology entered the scene. The researchers take a healthier look at the technological and cultural shifts accompanying the increasing use of AI. They ask serious and important questions: when and how should AI-driven chatbots be used as professional writers, should there be restrictions on their use, are there contexts where their use is inappropriate, and should they be used transparently?

AI tutors provide support for writing. How does this affect originality, and does it matter? IBM Watson natural language understanding (NLU) is being used in experiential learning simulations in formative higher education and professional learning contexts. NLU is a step up from the type of natural language processing (NLP) that many of us encounter daily through Alexa and Siri, two voice assistants that combine NLP, aural capabilities, and oral responsiveness. If an AI tutor steers student written and verbal comments with informed feedback and fills in gaps to improve their understanding, what part of student reasoning may be eroded, and what part of the work is considered their authorship? What constitutes original work, and does it matter as much now if the goal is to support understanding? Does NLU challenge original authorship, enhance it, ‘remediate’ it? Let’s take a look at an example.

Experiential learning use case

A Canadian start-up company, Ametros (https://ametroslearning.com/), was founded by university educators who wanted to support experiential learning in a digital environment with smart tutors under watchful supervision. The Ametros application uses IBM Watson NLU to gain insights into the language habits and learning needs of students, and the product claims suggest that the AI helps learners develop key skills, communication among them. NLU is used to recognize problems with student decision-making and appropriate workplace communication, processing email-like written interactions from students who engage with AI chatbots playing co-workers, employers, or customers. If a student who is meant to be practising empathetic communication presents aggressive comments or uses an unsympathetic tone, the AI trainer corrects the behaviour with continuous adaptive feedback until the student demonstrates a capacity for empathy. [Ametros is the application that I have recently been working with professionally to see if it is a good fit. Unfortunately, I was unable to record a walkthrough of the application in use and so turned this reflection elsewhere.]
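To make this less abstract, here is a minimal sketch of the kind of analysis an NLU service can run on a student’s draft, written with the IBM Watson Natural Language Understanding Python SDK (the ibm-watson package). The API key, service URL, and version date are placeholders, and the small coaching rule at the end is my own illustration of adaptive tone feedback, not a description of how Ametros actually works.

```python
# Illustrative sketch: score the tone of a student's draft with IBM Watson
# Natural Language Understanding, then turn the scores into a coaching hint.
# Credentials and thresholds below are placeholders.
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator
from ibm_watson import NaturalLanguageUnderstandingV1
from ibm_watson.natural_language_understanding_v1 import (
    Features, EmotionOptions, SentimentOptions,
)

authenticator = IAMAuthenticator("YOUR_API_KEY")  # placeholder
nlu = NaturalLanguageUnderstandingV1(
    version="2021-08-01",  # assumed version date
    authenticator=authenticator,
)
nlu.set_service_url("YOUR_SERVICE_URL")  # placeholder

draft = "I need this report today. Your last one was useless, so fix it."

result = nlu.analyze(
    text=draft,
    features=Features(
        emotion=EmotionOptions(),      # anger, disgust, fear, joy, sadness
        sentiment=SentimentOptions(),  # overall positive/negative score
    ),
).get_result()

emotion = result["emotion"]["document"]["emotion"]
sentiment = result["sentiment"]["document"]["score"]

# Hypothetical coaching rule: if the draft reads as angry or strongly
# negative, nudge the learner toward a more empathetic revision.
if emotion.get("anger", 0.0) > 0.5 or sentiment < -0.4:
    print("Coach: this reads as confrontational. Try acknowledging the "
          "reader's effort before stating what you need.")
else:
    print("Coach: the tone looks workable. Consider adding a clear, "
          "polite request with a deadline.")
```

Even a toy rule like this raises the authorship question: the words stay the student’s, but the pressure to revise comes from the machine.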

Courses with writing components like memos and informal reports can be supported by chatbots that improve engagement and provide feedback with a timeliness and level of professionalism (because they are search-enabled) that is not possible for educators who manage large classes. Moving a human-led, in-person class to an online environment does not necessarily enhance the quality of learning unless innovative methods are used. Education technology with NLU components might meet the same teaching and learning quality standards as those used in the classroom. If a chatbot uses the Socratic method to support student learning while making minor corrections for spelling and grammar, what part of the written submission could be considered authentically the student’s? Original work may not always be a requirement in an experiential learning context. In a co-operative, work-integrated-learning context, an employer may have partnered with a university network to find students who need real-world, authentic learning experiences that benefit the educational institution, the employer, and the learner. How a person acquires business communication skills or the basics of Microsoft Excel is less important than applying them correctly to support business purposes. Is that such a threat to academic integrity? Weitekamp, Harpstead, and Koedinger’s (2020) effort to define a user-friendly machine-teaching engine that employs a “show-and-correct” process is an interesting way to put these powerful tools in the hands of teachers rather than programmers for the benefit of learners, and it would be a supportive approach for institutions that embrace education technology solutions.
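As a thought experiment, the Socratic scenario above might look something like the sketch below: the tutor quietly fixes a small spelling slip but answers the substance of the draft with a guiding question rather than an answer. The word list, question bank, and routing are hypothetical, meant only to mark where the ‘minor correction’ ends and the student’s own reasoning begins.

```python
# Hypothetical sketch of one turn from a Socratic writing tutor: correct
# small surface errors quietly, but respond to the substance with a question
# that pushes the reasoning back onto the student.
COMMON_MISSPELLINGS = {"recieve": "receive", "seperate": "separate",
                       "definately": "definitely"}

SOCRATIC_PROMPTS = {
    "audience": "Who is going to read this memo, and what do they already know?",
    "purpose": "What single action do you want the reader to take after reading?",
    "evidence": "Which fact in your draft best supports that recommendation?",
}

def correct_surface_errors(text: str) -> str:
    """Fix only spelling slips from a small known list; leave wording alone."""
    words = []
    for word in text.split():
        stripped = word.strip(".,;:!?")
        fixed = COMMON_MISSPELLINGS.get(stripped.lower(), stripped)
        words.append(word.replace(stripped, fixed))
    return " ".join(words)

def socratic_reply(student_draft: str, focus: str = "purpose") -> str:
    """Return a tutor turn: the cleaned draft plus a guiding question, not an answer."""
    cleaned = correct_surface_errors(student_draft)
    question = SOCRATIC_PROMPTS.get(focus, SOCRATIC_PROMPTS["purpose"])
    return (f"Here is your draft with spelling tidied:\n{cleaned}\n\n"
            f"Before revising further: {question}")

print(socratic_reply("Please recieve the attached report and act on it."))
```

The tutor never supplies content in this sketch; the open question remains how much of the resulting memo should still count as the student’s own work.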

McKee and Porter (2020) offer some support for the growing number of unanswered questions here. They construct arguments for two ethical principles that may be used to guide the design of AI writing systems:

  1. First, that there be transparency about machine presence and critical data awareness.
  2. Second, that there be methodological reflexivity about rhetorical context and about omissions in the data that need to be provided by a human agent or accounted for in machine learning.

The learning goal of AI tutors is to highlight strengths and areas for improvement within an interactive writing experience and to provide research-based feedback on the correctness of natural language using formal rules. Does working through an AI interactive improve a learner’s understanding? Does it make the writing feel less abstract, or more? Will exposure to writing tutors help later when solving problems that require critical thinking, opinions, and idiomatic expression? Questions like who bears responsibility when your super-charged grammar checker makes a mistake are no different with AI supports than with a common word processor. Popenici and Kerr (2017) and Rouhiainen (2019), like McKee and Porter (2020) above, recognize the importance of advances in the use of AI technologies in higher education writing services for learners, and focus more on the ethical matters related to data privacy and data ownership that accompany the benefits of personalized learning.

Further exploration of these accumulating questions may be accomplished by working through the design considerations for an AI-tutored writing session. Instructors, employers, and learners are looking for real-life examples. AI can help differentiate high quality from low, and originality from plagiarism. When prompted to rethink a writing sample, learners may not notice the changes recommended. What AI prompts would highlight essential words or structural changes in the instructional sentence to make the writing and idea generation (and thus the reading) easier? Increasing interactivity using Chi’s (2009) ICAP framework could be one solution. Considered alongside the learning design principle of cognitive load, Chi’s ICAP framework and its supporting hypothesis suggest that the more interactive the learning element, the greater the student engagement and learning. The four levels of interactivity in the framework are:

  1. Interactive activities that involve social interaction. The Ametros AI simulation provides a character bot to engage with.
  2. Constructive activities that involve writing or creating. Writing a business communication to a client with the support of the chat ‘boss.’
  3. Active activities that involve manipulating media. Perhaps clicking through a simulation and answering multiple choice questions.
  4. Passive activities that involve reading text or viewing images/videos.

Chi’s research suggests that adding a constructive piece to the experience would positively impact learning outcomes. Focusing on the outcomes rather than the originality of the work may be the right approach for some learning contexts.
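For a course designer, the framework can be reduced to a simple lookup when deciding what kind of tutor prompt to attach to a writing task. The sketch below is my own illustrative reading of the four ICAP levels; the prompt wording and the ordering are assumptions, not part of Chi’s work or any particular product.

```python
# Illustrative only: mapping ICAP engagement levels (after Chi, 2009) to the
# kind of tutor prompt a designer might attach to a writing task.
from enum import IntEnum

class ICAPLevel(IntEnum):
    # Higher value = hypothesized deeper engagement
    PASSIVE = 1       # reading text, watching a video
    ACTIVE = 2        # clicking through a simulation, answering multiple-choice questions
    CONSTRUCTIVE = 3  # writing a memo or business communication
    INTERACTIVE = 4   # exchanging messages with a character bot (e.g., the chat 'boss')

EXAMPLE_PROMPTS = {
    ICAPLevel.PASSIVE: "Read the client brief before continuing.",
    ICAPLevel.ACTIVE: "Click through the scenario and answer the check questions.",
    ICAPLevel.CONSTRUCTIVE: "Draft a short status memo for the client.",
    ICAPLevel.INTERACTIVE: "Reply to your manager's email and respond to their follow-up questions.",
}

def upgrade_activity(level: ICAPLevel) -> ICAPLevel:
    """Suggest the next, more engaging level, per the ICAP hypothesis."""
    return ICAPLevel(min(level + 1, ICAPLevel.INTERACTIVE))

current = ICAPLevel.ACTIVE
print(f"Current activity: {EXAMPLE_PROMPTS[current]}")
print(f"Consider instead: {EXAMPLE_PROMPTS[upgrade_activity(current)]}")
```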

A bright or bleak future?

Poetry chapbooks do not often sell well during a poet’s lifetime. An AI writer may be an intelligent and economical alternative to expert human cultural tutors who fail to earn an adequate living as creators. Could there be a social responsibility to promote AI in this case, to decrease the unhappiness of human artists? While this excursion meandered more than it should have for its brevity, I have wondered how much these arguments for and against the use of AI in teaching, learning, and writing composition will matter in the not-so-distant future. What is all the fuss? In his 2015 TED Talk, “Can a computer write poetry?”, Oscar Schwartz walks through algorithms that compose poetry by scraping his Facebook feed for muse and vocabulary, and he compares the result to a poem by William Blake, asking the audience to guess which composition is human-created and which machine-generated. His final words fit well with the question about the changing spaces of reading and writing.

“But what we’ve seen just now is that the human is not a scientific fact, that it’s an ever-shifting, concatenating idea and one that changes over time. So that when we begin to grapple with the ideas of artificial intelligence in the future, we shouldn’t only be asking ourselves, “Can we build it?” But we should also be asking ourselves, “What idea of the human do we want to have reflected back to us?” This is an essentially philosophical idea, and it’s one that can’t be answered with software alone, but I think requires a moment of species-wide, existential reflection.” (Schwartz, 2015)

 

References

Chi, M. T. H. (2009). Active-constructive-interactive: A conceptual framework for differentiating learning activities. Topics in Cognitive Science, 1(1), 73–105. https://doi.org/10.1111/j.1756-8765.2008.01005.x

Herrington, A., & Moran, C. (2001). What happens when machines read our students’ writing? College English, 63(4), 480–499. https://doi.org/10.2307/378891

McKee, H., & Porter, J. (2018, April 25). The impact of AI on writing and writing instruction. Digital Rhetoric Collaborative. https://www.digitalrhetoriccollaborative.org/2018/04/25/ai-on-writing/

McKee, H., & Porter, J. (2020). Ethics for AI writing: The importance of rhetorical context. AIES ’20: Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 110–116. https://doi.org/10.1145/3375627.3375811

Popenici, S., & Kerr, S. (2017). Exploring the impact of artificial intelligence on teaching and learning in higher education. Research and Practice in Technology Enhanced Learning. https://doi.org/10.1186/s41039-017-0062-8

Rouhiainen, L. (2019, October 14). How AI and data could personalize higher education. Harvard Business Review. https://hbr.org/2019/10/how-ai-and-data-could-personalize-higher-education

Schwartz, O. (2015, May). Can a computer write poetry? [Video]. TED Conferences. https://www.ted.com/talks/oscar_schwartz_can_a_computer_write_poetry

Weitekamp, D., Harpstead, E., & Koedinger, K. R. (2020). An interaction design for machine teaching to develop AI tutors. CHI ’20: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, 1–11. https://doi.org/10.1145/3313831.3376226
