
Update 3e: Proposed Goals of Experiment

The potential goals for our evaluation are as follows:

  1. How error-prone is the system, and how clear are the steps to perform a task? We could measure how often users hesitate or make errors with the design while accomplishing a set of tasks. These counts could be compared against predetermined thresholds for the number of hesitation points and the number of errors.
  2. How quickly can a user perform a task? We could measure the time taken for a user to perform a task, as an indication of the efficiency of the system. The time taken can be compared against predetermined threshold times.
  3. What do people think of the new system compared to existing platforms? We could measure this using a Likert scale to quantify users’ willingness to use the design, compared against a neutral threshold.

 

Ranking of Importance:
3, 2, 1

Item 3 is most important because people need to be willing to use the system in order for it to be a useful system to develop.

Item 2 is next most important because efficiency is closely tied to how usable the interface will be: if a task takes too long to perform, the interface’s usability will suffer.

Item 1 is least important because, although users may hesitate at certain steps, the interface is meant to be forgiving, and the consequences of hesitating or making an error are minimal.

 

Ranking of Testability:
2, 3, 1

Item 2 is the easiest to test because we can measure completion times directly and compare them against the threshold times for each task.
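To make this concrete, here is a minimal sketch of how that comparison could work; the completion times and threshold below are invented purely for illustration:

```python
# Hypothetical completion times (in seconds) for one task across participants.
times = [42.0, 55.3, 38.7, 61.2, 47.5]

THRESHOLD_SECONDS = 60.0  # assumed predetermined threshold for this task

mean_time = sum(times) / len(times)
within_threshold = sum(t <= THRESHOLD_SECONDS for t in times)

print(f"mean completion time: {mean_time:.1f}s")
print(f"{within_threshold}/{len(times)} participants finished within {THRESHOLD_SECONDS:.0f}s")
```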

Item 3 is more difficult because we are relying on users’ opinions, which might not always be completely truthful. However, if we collect opinions on a Likert scale, the results will be easier to analyze.
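As a sketch of what that analysis might look like, the snippet below aggregates 5-point Likert responses and compares the mean against the neutral midpoint of 3; all responses here are invented for illustration:

```python
# Hypothetical 5-point Likert responses (1 = strongly disagree, 5 = strongly agree)
# to a statement like "I would use this system instead of existing platforms."
responses = [4, 5, 3, 4, 2, 5, 4]

NEUTRAL = 3  # midpoint of the 5-point scale, used as the neutral threshold

mean_rating = sum(responses) / len(responses)
above_neutral = sum(r > NEUTRAL for r in responses)

print(f"mean rating: {mean_rating:.2f} (neutral = {NEUTRAL})")
print(f"{above_neutral}/{len(responses)} participants rated above neutral")
```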

Item 1 is the most difficult because it is hard to reliably detect when a user hesitates.

Based on these ideas, we have decided to focus on goals 2 and 3 for our evaluation, especially since idea 1 ranked lowest on importance and testability.

Update 3c: Walkthrough Report

Through performing our cognitive walkthrough, we learned that some of the terminology in our paper prototype confused the user. It was unclear what “annotations” signified and how they differed from comments. Once annotations were explained, however, the user easily figured out how to add a timestamped annotation. The user also noted that the relationship between annotations and comments should be made clearer, both conceptually and in the visual layout of the design. It was suggested that a timeline displaying all of a video’s annotations might be more useful and understandable while watching. A positive finding from the walkthrough was that adding comments was straightforward because of the interface’s similarity to YouTube.

Our specific findings for each task example are as follows:

TE1:
The annotations/comments section of the design addresses the user’s problem of not having an easy way to find information about a confusing part of a video. The walkthrough showed that users could easily deduce how to make both annotated and regular comments, or how to start a discussion about something unclear to them. One problem for this task example was the terminology used in the prototype: the word “annotations” did not clearly convey its meaning to the user.

TE2:
Task example 2 was well supported by our paper prototype. The prototype let users make annotated comments with specific timestamps for a certain part of the video. Additionally, the comments were displayed prominently, so users watching the video could easily see remarks made by others at that particular timestamp. However, it was unclear how a user could add a video link to a comment as a supplement.

TE3:
For task example 3, the prototype let a user annotate a video with whatever he/she found useful for clarifying the content. However, as previously mentioned, there was confusion about the difference between comments and annotations.

Update 3b: Description of Prototype

The purpose of our paper prototype and cognitive walkthrough was to develop a high-level understanding of how the important individual functions identified in our field study interact. After determining these functions, we wanted to explore how users would interact with them and how they would interact with each other when integrated into a single design. An illustration of the prototype can be seen in Update 3a.

We have chosen to support all of our task examples, which can be found in our previous blog post, Update 2b. The design supports them by allowing users to add annotations at specific parts of a video or to leave a general comment. For task example 1, a user might include a link to a supplementary video in their annotation to help other users clarify a specific idea in a tutorial. For task examples 2 and 3, a user can make an annotation at a certain timestamp on the video to note errors or suggest alternatives, applicable to both tutorial and informational videos.
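Although the prototype is paper-only, the data model it implies can be sketched briefly; the structure and field names below are our own invention for illustration, not something the prototype specifies:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Annotation:
    """A remark anchored to a specific point in a video (hypothetical model)."""
    video_id: str
    timestamp_seconds: float  # where in the video the annotation applies
    text: str
    author: str
    supplement_url: Optional[str] = None  # e.g. a link to a clarifying video (task example 1)

@dataclass
class Comment:
    """A general remark on the video as a whole, not tied to a timestamp."""
    video_id: str
    text: str
    author: str

# Example: an annotation noting an error at 2:35, with a supplementary link.
note = Annotation("abc123", 155.0, "This step is wrong; see the linked correction.",
                  "user42", supplement_url="https://example.com/correction")
```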

There were several key design decisions that influenced our paper prototype. First, we wanted to use a familiar video website interface to create a positive transfer effect and make the interface intuitive for new users; we used the YouTube interface as a basis because of its popularity. Another important decision was to keep the interface simple, without overwhelming the user with too much information at once. Relatedly, we wanted the interface to stay consistent across different contexts, whether a more casual tutorial context as in task example 1 or a more formal educational context as in task example 3. Consequently, we did not include video transcripts in the design, because they are not necessary for a casual educational video; instead, we focused on what these contexts have in common. Additionally, we incorporated the concept of automatically segmenting the video into sections based on user behaviour (e.g. where viewers stop or rewind) and video content (e.g. transition points, audio pauses), which would help users follow along with a video. Since this is a low-fidelity prototype, deciding how to do this segmentation was out of scope, but it will be an important consideration in the future.
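Deciding how to segment is beyond this prototype, but to illustrate the general idea, here is a rough sketch of one possible heuristic: bucket the timestamps where viewers paused or rewound, and propose a section boundary wherever enough viewers did so. The bin size and threshold are arbitrary illustrative values, not design decisions we have made:

```python
from collections import Counter

def propose_boundaries(event_timestamps, bin_seconds=10, min_users=3):
    """Suggest section boundaries from pause/rewind timestamps.

    Timestamps are bucketed into bins of `bin_seconds`; any bin collecting at
    least `min_users` events is proposed as a boundary. Purely illustrative.
    """
    bins = Counter(int(t // bin_seconds) for t in event_timestamps)
    return [b * bin_seconds for b, count in sorted(bins.items()) if count >= min_users]

# Hypothetical pause/rewind events (seconds into the video) from several viewers.
events = [62, 65, 68, 120, 123, 125, 127, 300]
print(propose_boundaries(events))  # -> [60, 120]
```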