The purpose of our paper prototype and cognitive walkthrough was to develop a high-level understanding of how the key functions identified in our field study interact. After identifying these functions, we wanted to explore how users would interact with them and how they would work together when integrated into a single design. An illustration of the prototype can be seen in Update 3a.
We have chosen to support all of our task examples, which can be found in our previous blog post, Update 2b. Our design supports these task examples by allowing users to add annotations at specific points in a video, or to leave a general comment. For task example 1, a user might include a link to a video aid in their annotation, to help clarify a specific idea in a tutorial for other users. For task examples 2 and 3, a user can make a personal annotation at a certain timestamp to note errors or suggest alternatives, applicable to both tutorial and informational videos.
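To make the two annotation modes concrete, the data model behind them could look something like the following sketch. This is purely illustrative and not part of the prototype itself; the `Annotation` and `Video` names, and the idea of using `None` for a general (non-timestamped) comment, are our assumptions here.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Annotation:
    """A user note on a video: timestamped, or general if timestamp is None."""
    author: str
    text: str
    timestamp: Optional[float] = None  # seconds into the video; None = general comment
    link: Optional[str] = None         # optional link to a video aid (task example 1)

@dataclass
class Video:
    title: str
    annotations: list = field(default_factory=list)

    def add_annotation(self, ann: Annotation) -> None:
        self.annotations.append(ann)

    def annotations_at(self, t: float, window: float = 5.0) -> list:
        """Timestamped annotations within `window` seconds of time t."""
        return [a for a in self.annotations
                if a.timestamp is not None and abs(a.timestamp - t) <= window]
```

For example, an annotation at the 42-second mark would surface when the viewer is near that point, while a general comment would stay out of the timeline and appear only in the overall comment list.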
We made several key design decisions that shaped our paper prototype. Firstly, we wanted to use a familiar video website interface to create a positive transfer effect and make the interface intuitive for new users. For this, we used the YouTube interface as a basis due to its popularity. Another important decision was to keep the interface simple, without overwhelming the user with too much information at once. Relatedly, we wanted the interface to be consistent across different contexts, whether a more casual tutorial context as in task example 1, or a more formal educational context as in task example 3. Consequently, we did not include video transcripts in the design, because they were not strictly necessary for a casual educational video; instead, we focused this prototype on the commonality across these contexts.

Additionally, we incorporated the concept of automatically segmenting the video into sections based on user behaviour (e.g. where viewers stop or rewind) and video content (e.g. transition points, audio pauses), which would help users follow along with a video. Since this is a low-fidelity prototype, deciding how to do this was not in the scope of our design, but it will be an important consideration in the future.