Conclusions and Recommendations
We conclude that some of the features evaluated worked well, though concerns exist regarding inter-user trust and community navigation.
While there was no significant difference between the levels of detail in food and profile posts, participants were slightly more inclined to trust another user based on their profile rather than their food post. In other words, the evidence suggests that users place more weight on a profile than on a food post, which is something we did not anticipate.
In the apartment layout, participants took more time to scan for targets and to perform both vertical and horizontal scrolling gestures, as indicated by the significant effect of layout on completion time. Interestingly, we found an interaction effect between layout and task, indicating that tasks which include explicit spatial (apartment-floor) information might not be hindered as much.
Regarding our interface design, we acknowledge some factors that could have affected the experimental results. One major factor may have been that the contrast between the low and high levels of information was not large enough; participants may therefore have been unable to form a clear preference and chose arbitrarily which one they trusted more. In the second experiment, it is unclear whether the difficulty in performing the task was a scrolling problem or a visual one (e.g., pictures were too small).
Given these conclusions, the concept presented here is still worth exploring, though future designs would have to address several serious issues.
In terms of promoting trust, it might make sense to make interface adjustments that emphasize the seller and their history. One specific recommendation might be to make certain (or all) profile fields required. Another might be to set a standard for checking a user’s history, such as a background check or a food safety inspection. It might also be worthwhile to consider these changes in the context of features that promote user trust, such as their effect on our messaging system.
With regard to the main page layout, future design steps would have to address the reported issues with gestures and on-screen information overload. Specifically, this might involve pinch gestures for navigation and collapsible map “pins”. Alternatively, we might recommend a hybrid interface that combines the map with the simplicity of a list, which could be sorted (e.g., by a recommendation system) to complement the spatial information.
Reflection
Prior to creating the interactive system, the field study helped us identify flaws in some of our assumptions. Initially, we felt that our system would be used by anybody within a close-knit community. To our surprise, the young adults in our study were quite vocal about how little they enjoyed, or cared about, interacting with their neighbors. After realizing this, we shifted our user group to an older demographic who were more open to sharing and interacting with their neighbors. In addition, we realized that we had to place a higher importance on trust, as multiple participants expressed concerns about trusting food made by others. Use of an affinity diagram allowed us to easily identify rich patterns in these data.
Using prototypes at different fidelities greatly helped us design and shape our experiments. With the low-fidelity prototype that we created, we were able to evaluate our conceptual ideas quickly and with minimal effort, and, after getting feedback, to rapidly adjust the features under test. The low-fidelity prototype also helped us pinpoint areas of the overall design to test, such as adding timed trials of searching and browsing tasks to our medium-fidelity prototype to collect quantitative data.
Pilot testing was another valuable method that we learned through class, and after performing our study we appreciated the significance of conducting such tests. Not only did pilot testing reveal functional issues in our prototype, it also helped us find a better way to gather qualitative data. To elaborate, we found some frames in our prototype that did not transition appropriately given the user’s input, which would have affected our first participant’s results. Furthermore, we discovered that a semi-structured interview yielded richer qualitative data than asking the participant to fill out a survey.
In terms of analysis, running ANOVA tests in RStudio allowed us to see the differences in our findings clearly and helped us visualize the collected data.
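To illustrate the comparison such a test makes, the following is a minimal sketch of the one-way ANOVA F-statistic in plain Python. Note that our actual analysis was a multi-factor ANOVA run in RStudio; the function and the completion-time numbers below are hypothetical, for illustration only.

```python
from statistics import mean

def one_way_anova_f(groups):
    """One-way ANOVA F-statistic: between-group variance over within-group variance."""
    grand = mean(x for g in groups for x in g)   # grand mean of all observations
    k = len(groups)                              # number of groups
    n = sum(len(g) for g in groups)              # total number of observations
    # Between-group sum of squares: how far each group mean sits from the grand mean
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    # Within-group sum of squares: spread of observations around their own group mean
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    ms_between = ss_between / (k - 1)            # mean square, df = k - 1
    ms_within = ss_within / (n - k)              # mean square, df = n - k
    return ms_between / ms_within

# Hypothetical task completion times (seconds) under two layouts
grid_layout = [12.1, 10.4, 11.8, 9.9, 12.5]
apartment_layout = [15.2, 14.8, 16.1, 13.9, 15.5]
f = one_way_anova_f([grid_layout, apartment_layout])
```

A large F (relative to the F distribution for the given degrees of freedom) indicates that the layouts differ by more than within-group noise would explain; in practice one would read off the p-value from statistical software rather than compute it by hand.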
However, like most things in life, some aspects of our study did not go as planned. When we designed our medium-fidelity prototype, we had planned to conduct the experiments on a touchscreen device; however, with the COVID-19 pandemic, we had to hastily adjust our experiments to make everything virtual. Because we could not control the experimental environment, many factors could have affected our results, such as the different devices that the participants used and their internet speeds.
Additionally, if we had more time to run the experiment with a larger group, we might have collected more substantial data, which could have changed our results and supported more accurate conclusions.