This article presents a vision of a near future in which mobile devices augment employees’ learning in the workplace. With a simple tap on a smart device, learning content can be projected onto a wall, a desk, or even a sheet of paper. Instantly, a Virtual More Knowledgeable Other (vMKO) — powered by artificial intelligence — appears to guide the learner. This vMKO can simulate real-world workplace scenarios, respond to questions in real time, and connect learners with peers or domain experts. Through this interaction, learning becomes a collective process of human–AI co-intelligence, where knowledge is not only delivered, but co-constructed through collaboration between humans and intelligent systems.
Here is the link to my vision:
Here are two questions on my mind as I envision this vMKO idea:
- What human values, boundaries, and learning principles should shape the design of such AI-powered mobile intelligence so that it nurtures rather than replaces human growth?
- Who should define the standards for “good” AI-supported learning, and how might L&D professionals, learners, technologists, and communities co-create these guidelines to ensure equity, ethics, and impact?
Your vision of the vMKO is such a compelling take on how mobile intelligence could meaningfully support workplace learning. The idea of projecting guidance into the physical workspace and having an AI act as a responsive, situational “more knowledgeable other” feels both intuitive and powerful. I can easily imagine how this could boost confidence and reduce the friction of just-in-time learning.
Your questions about values and governance really stood out. As exciting as the vMKO is, grounding it in clear human-centred principles will be key. I’m especially curious about how we can make sure the AI supports growth without slipping into over-scaffolding or replacing peer learning.
Thank you for sharing!
Very interesting concept!
Reading about the idea of a virtual MKO made me think about what this would actually look like for an employee of a school. As a teacher, I can imagine it becoming a kind of partner that supports the day-to-day work I’m already doing. The examples in your post show AI analyzing learning patterns and offering feedback in real time, and honestly that would take a huge weight off my plate. Instead of trying to catch every small misconception on my own, I’d have something that helps flag them so I can focus more on the personal side of teaching.
I think the biggest shift would be how planning and interaction change. Instead of preparing every single scaffold myself, I’d be working alongside an AI system that adjusts as students move through their tasks. It would let me spend more time actually talking with students, checking in on their thinking, and guiding their decision making. I especially love your emphasis on the idea of co-constructing knowledge, and that’s how I see it. Not AI taking over, but creating space for deeper conversations and more meaningful moments with my learners.
I really liked the idea of AI and learners co-constructing knowledge. It feels more human because it values each person’s background, skills, and experiences instead of giving everyone the same “one-size-fits-all” solution. That’s what good learning should be — personal and connected to who you are. If AI just hands people the answer, then honestly it could replace the whole learning process, or let people skip learning altogether, which kind of defeats the purpose.
I also never really thought about learning in the workplace before. It makes me wonder how proactive people actually are about “learning something” at work, especially when they’re already busy or overwhelmed. Would a vMKO motivate people to grow, or would it feel like another task added to their day? I’m really curious about how that balance would work.
Hi Chanmi,
Thanks for your question. I believe the vMKO can help employees develop the skills they need to address workplace challenges and prepare for future growth. The motivation for engaging with it lies in reducing work pressure and enhancing their sense of worth. Ultimately, its value depends on how well the vMKO helps them solve their workplace issues.
Besides, with the rapid advancement of AI, businesses are increasingly exploring how AI can support business development. Employees should also understand how they can benefit from these advancements. As learning experience designers, we continuously seek innovative ways to integrate these tools; this is one possible innovation that I have envisioned.
Thanks.
Yanny
I’m sure there are many contexts where a ‘companion’ like this would be helpful. Having resources on hand for guidance is useful, smart, and likely efficient. It would be an important resource for quality assurance work, trades, IT, or any sort of troubleshooting work. Your project echoes the potential ubiquity I refer to in my own project.
How do you feel about the humanization of AI or AI-powered tools? Whenever I see personification, I get a little nervous. It is how AI is often referred to, and I can’t help but wonder if it is intended to shape our perception of the service and/or its capabilities. AI ethics researcher Timnit Gebru always refers to these technologies as Artificial Intelligence Machines — never failing to include the last word.
The presentation of your content is great — I need to learn a little more about Notion.
Hi Mcober,
It’s thought-provoking to hear your thoughts and concerns about the ‘humanization’ of AI. This is a complex and nuanced issue. I recognize that AI is fundamentally different from us. I believe that AI–human collaboration is a dynamic process, meaning that while AI shapes our perceptions, we also shape its capabilities. I am particularly concerned about how we can cultivate critical thinking in the age of AI; perhaps it’s time to focus more on AI literacy in education.
Thanks.
Yanny
This was an engaging and easy-to-read presentation! I appreciated how clearly you explained the idea of a virtual MKO and how learning could be projected onto real surfaces with just a tap. That visual made the concept feel realistic and not just futuristic. Your connections to the ADDIE and Kirkpatrick models were also really helpful because they showed how current approaches to training fall short, especially when it comes to feedback and supporting individual learning needs.
I found your discussion of human-AI co-construction especially fascinating. The idea that learning becomes a shared process rather than something delivered or automated lines up well with how many of us already teach. It’s an important reminder that AI should enhance human growth, not replace it.
Hi Nik,
It’s great to hear that you found the discussion enjoyable. I also hope more employers will envision ways to facilitate collaboration between AI and humans, rather than viewing AI as a replacement. We often hear news about AI being used to replace the human workforce instead of increasing employee productivity. As educators and learning designers, we should always communicate and share how we can collaborate with AI to create more impactful outcomes.
Thanks.
Yanny
Hi Makyan,
I really like how you introduced the idea of co-creating the database for AI learning to mitigate the issue of bias. As part of the process of being transparent, do you think a numerical value showing how many times the information was found would help increase the accuracy of the results? For example, if only one article stated something (e.g., the impact of vaccines), then the AI would report that frequency as part of its response. However, I recently watched a video, ‘We fell for the oldest lie on the internet’ by Kurzgesagt, detailing their attempt to determine the source of the claim that blood vessels could circle the earth 2.5–4 times. They had a very difficult time finding the original source because each iteration merely cited what came before it. Their calculation at the end of the video showed the claim was actually inaccurate and suggested modifications. The reason I bring this up is the impact of sources citing each other, thus burying the ‘truth’ and making it even more difficult for AI to be accurate. So, who or what should define the standards of ‘good’ AI-supported learning?
Hi Mandy,
Thank you for your insightful feedback. This aspect is worth exploring further; it’s the first time I’ve encountered it. Defining standards for ‘good’ AI is indeed a challenging task, but a necessary one as AI increasingly integrates into our daily lives and significantly impacts the education ecosystem. Your feedback has prompted me to recognize that including experts in the field of AI as key stakeholders is essential for this discussion.
Thanks.
Yanny
Thanks for the thought-provoking work Rei. I specifically liked your use of Notion.
Regarding shaping AI-powered intelligence based on shared values, here are a few I brainstormed:
Connection: Technology should bring humans together, not isolate them
Reciprocity: Give and take, not just consume
Agency: Students drive their learning, supported by the algorithms
Authenticity: Real struggle, real help, real relationships
Overall, my thoughts always go back to using AI as a tool to support learning and peer-to-peer communication. And with that in mind I have some boundaries I think will be necessary:
AI facilitates, humans teach
Privacy-first: minimal data collection
Consent: opt-in helping, can say no
Context-respect: AI won’t interrupt focus time
Finally, I agree with your closing thoughts on the reciprocal process that is needed to develop AI systems.
I’ll call these ‘co-creation of standards’ and you can take them as a ‘brainstorm’.
STUDENTS
Rate interactions
Report issues
Suggest features
INSTRUCTORS
Set learning objectives
Monitor quality
Intervene when needed
AI SYSTEM
Learn from successes & student reporting
Identify patterns
Adapt algorithms
RESEARCHERS/L&D
Study effectiveness
Ensure equity
Develop ethical guidelines
Hi Dave,
Thank you for providing a possible framework for human–AI collaboration. It is comprehensive, and you strike a good balance between human and AI involvement, especially in considering the role of students (one of the key stakeholders).
Thanks.
Yanny
Hi Maykan,
This is a great concept! I really appreciated your focus on personalized learning and the benefits it offers, as well as how AI can support that for learners. Instead of feeling judged or confused, people can interact with the AI “mentor” as you described as a personalized colleague to guide and support them. It’s a thoughtful and innovative idea with a lot of potential.
Hi Meshi,
Thanks for your feedback. I am also curious how far we can go with the AI mentor, and in what ways.
Thanks.
Yanny
Hey Makyan,
After reading through your authored Notion page, I can tell you’ve put a lot of thought into the future of learning and what it will mean for us to ‘peacefully and productively integrate’ with our AI companions. The questions you left us with support that observation, and I’d like to take a shot at one of them.
“Who should define the standards for “good” AI-supported learning, and how might L&D professionals, learners, technologists, and communities co-create these guidelines to ensure equity, ethics, and impact?”
This is a question I’m hoping to develop more literacy on to get closer to the bottom of, especially as someone who is currently teaching the next generation of our workforce. What it means to learn from (or with!) an AI versus a human is a quintessential question.
I would imagine that the answer to “who defines the standards” will arise over time as more of the population grapples with the question of AI collaboration. What we could hope for is that those conversations are primarily handled by humans, for humans, with little AI contamination. In my eyes, there’s an intangible issue with collaborating with the very being we hope to develop standards around. Kind of a “conflict of interest” type of thing.
As for how designers will work to codevelop these use guidelines, I think if it were my own company/task, I would want my workforce to be able to collaborate with one another, under the guidance of knowledge and “what works” from other prominent thinkers, to arrive at their own protocol that aligns with best practice. The specific protocols and guidelines for a given organization will be deeply personal and specific to that organization, so I don’t think a “one size fits all” approach would be useful or motivating here.
Hi Jakedepo,
Your approach to guideline development is thoughtful, especially on the issues of bias and equity. Allowing each organization the flexibility to explore what works best for them is also a valuable idea; it gives me new insight into this matter. Thanks for your reply.
I found your vision of the vMKO very engaging, and it connects closely with ideas I explored in my own A3 writing. You show why workplace learning needs more personalized and adaptive support, something traditional L&D models can’t always provide.
What stood out most was the focus on partnership. Instead of replacing human thinking, the vMKO creates a two-way relationship where humans guide the system and the AI helps learners explore scenarios, organize ideas, and build confidence. This feels like a realistic direction for future workplace learning.
Your questions about values and boundaries are especially important. If tools like the vMKO are going to support human growth, we need clear limits and community-driven guidelines to keep human judgment and culture at the centre.
After reading yours, one question came to mind: how can we make sure vMKO feedback reflects real workplace culture and human learning, not just algorithmic patterns?
Hi Sean,
Thanks for your question! Regarding how to ensure that vMKO feedback reliably reflects real workplace culture, I believe one possible safeguard is involving human feedback. It ultimately depends on our ability to differentiate and reflect on real-world practices. Additionally, the learning designer should always be involved to minimize potential issues. Overall, I find your question important; it encourages me to think more critically and conduct further research on it.
Thanks,
Yanny
Hi makyan,
I found your project on human-AI co-intelligence really compelling, especially in light of the question about what values and principles should guide AI-powered mobile tools. Your work suggests that systems like this should be built around support, curiosity, and shared reasoning rather than replacement or shortcuts.
What stood out to me is the need for boundaries that keep humans in charge of judgment and meaning-making, with AI acting as a prompt or guide. Your project reinforced what I have started to believe through this 523 course as well as the 565 AI course: as we move forward with AI, it will become increasingly important for educators to champion learning principles like reflection and active engagement, so students think through problems rather than simply finding the answer. That feels essential if the goal is for education to support genuine human growth in the wake of AI.
Hi Kgaudr,
Thanks for your comment. I am really glad that we shared the same thoughts. I’ve also been thinking a lot about the common narrative of AI “replacing” people, and I keep returning to the question: does it have to be that way? If humans are the ones investing time, creativity, and even environmental and labour resources into building these systems, then it seems essential that AI development should support human growth rather than undermine it. Across different courses, I’ve also learned about issues like digital labour, sustainability, and the hidden environmental costs of AI. These concerns make me believe even more strongly that we need a more positive and sustainable vision of AI, one where humans and AI learn alongside each other, rather than humans becoming passive or displaced.
I hope I’ll have the chance to take the 565 AI course too; it sounds incredibly interesting and aligns so well with the AI agenda nowadays.
Thanks again for engaging so deeply with my project.
This is an incredible idea that is very well presented. I like the way you laid the information out and gave rationale for all your thoughts. Your included research was relevant to the technology you envisioned, and further supported your theory. I also found the questions for reflection at the end of your assignment to be really thought provoking and interesting.
Thanks for your feedback! I have always wanted to delve into the topic of collaboration between humans and AI, and I got the chance to explore it in this assignment. It’s great to hear that you enjoyed it, and I hope the reflective questions can help us, as educators, think more about the wave of AI in the education field.
Hey makyan,
Not sure if I’m missing something, but I got a “No access to this page” error message when I tried to open this to review it. Check permissions maybe? Or do I need to be signed in to my UBC email?
Thanks,
Jake
Thanks for your message. Please try the link again; it should be working now.