I’ve already written about Task 7, but I wanted to focus on Joti’s response specifically. Joti integrated the theoretical into her response and developed an interesting Genially that blended the visual, audio, and linguistic modes.
I wonder how it could be expanded to include the spatial and gestural modes. Could something as simple as Genially allowing swiping on a mobile device be considered in the frame of the spatial and the gestural? Using a mobile device rather than a laptop or desktop computer does affect how you interact with and consume content. Recently I’ve been working on a complex self-assessment tool in Qualtrics and have spent many an afternoon swiping through the form on a mobile device, testing different layouts to see which offers the easiest and most straightforward path for users to reach the end of the form. As I work on it, I’ve also been reflecting on the attention economy task; funnily enough, it’s an ethics self-assessment tool, so issues of ethical design and effective use have been bubbling to the surface.
How else could you incorporate the spatial and gestural into the mode-bending task, I wonder? Spatial audio, like you’d hear in 360-degree videos, could be one way to bring the spatial into a web platform… part of me thinks we’ve been trying to instill depth into the screen since we started projecting images.
References
Singh, J. (2024, June 24). Mode-bending. Joti Singh’s Weebly. https://jotisingh.weebly.com/tasks/mode-bending