Author Archives: Jacey Bell

Assignment 1C: Control, Sampling, and Measurement

This assignment consists of answers to several questions about the following paper:

Ardern, J., & Henry, B. (2019). Testing writing on computers: An experiment comparing student performance on tests conducted via computer and via paper-and-pencil. Journal of Research in Digital Education, 20(3), 1-20.

Control

Blinding was used in the scoring process. All paper-and-pencil writing responses were entered into the computer and intermixed with the computer responses, so raters did not know whether they were scoring responses from the control group or the experimental group. Whether they realized it or not, any or all of the raters may have expected either the computer or the paper-and-pencil responses to score higher, and this expectancy could have introduced bias into the ratings.

Constancy was used in the design of the computer-based assessments. Care was taken to make each page on the computer screen look as similar as possible to the paper version of the exam, keeping the number of items per page, the position of headers and footers, the order of the responses, and so on the same. The researchers noted that previous studies had reported that changes in the appearance of tests could alter performance, so without this control in place the performance of the experimental group could have been influenced by the appearance of the exam rather than by the mode of administration.

Sampling

Random selection and random assignment into groups are important for neutralizing threats that could bias the study. By randomly assigning students to either the control or the experimental group, the researchers could assume that roughly the same number of students in each group would be affected by any extraneous variables, so those variables could not have a greater effect on one group than on the other.

Sample Sizes

  • Experimental (computer) group: 46 – originally recruited 50
  • Control (paper-and-pencil) group: 68 – originally recruited 70

Rule of thumb: a minimum group size of 30, with 40 often recommended, to create comparable groups.

At least 63 participants per group are needed to detect a medium effect size (d = 0.50) as statistically significant, and at least 25 per group to detect a large effect size (d = 0.80).
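
These rules of thumb can be checked with a standard power analysis. Below is a minimal Python sketch (not from the paper) using statsmodels, assuming a two-tailed independent-samples t-test with α = .05 and power = .80:

```python
# Power-analysis sketch: required sample size per group for medium and large effects.
# Assumes a two-tailed independent-samples t-test, alpha = .05, power = .80.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for label, d in [("medium", 0.50), ("large", 0.80)]:
    n_per_group = analysis.solve_power(effect_size=d, alpha=0.05, power=0.80,
                                       ratio=1.0, alternative="two-sided")
    print(f"{label} effect (d = {d}): about {n_per_group:.0f} per group")
# Prints roughly 64 per group for d = 0.50 and 26 per group for d = 0.80,
# consistent with the 63 and 25 figures above.
```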

Standard Deviation

In this entry, SD refers to the standard deviation of scores on the open-ended (OE) writing exam. It indicates how student scores were dispersed around the mean. Across the 114 assessments, the mean score on the OE exam was 7.87 out of a possible 14 points and the standard deviation was 2.96.

This indicates that approximately 68% of the students scored within plus or minus 1 standard deviation of the mean, which when calculated equals between 4.91 (7.87 – 2.96) and 10.83 (7.87 + 2.96).

Approximately 95% of students scored within plus or minus 2 standard deviations of the mean: in other words, between 1.95 (7.87 – 5.92) and 13.79 (7.87 + 5.92).
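
These intervals are simply the empirical rule applied to the reported mean and SD; a couple of lines of Python reproduce them:

```python
# Empirical-rule intervals from the reported mean and standard deviation.
mean, sd = 7.87, 2.96
print(round(mean - sd, 2), round(mean + sd, 2))          # 4.91 to 10.83 -> ~68% of scores
print(round(mean - 2 * sd, 2), round(mean + 2 * sd, 2))  # 1.95 to 13.79 -> ~95% of scores
```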

Effect Size

I understand why the researchers interpreted the effect size as both statistically and practically significant. With an effect size of 0.94, the mean of the experimental group sits 94% of a standard deviation above the mean of the control group, placing it at approximately the 83rd percentile of the control group's distribution.
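
The percentile conversion assumes normally distributed scores; a quick check with SciPy:

```python
# Converting Cohen's d into a percentile of the control-group distribution,
# assuming normally distributed scores.
from scipy.stats import norm

d = 0.94
print(round(norm.cdf(d) * 100))  # ~83, i.e. the 83rd percentile of the control group
```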

Measurement

The modest level of inter-rater reliability reported (0.44 to 0.62) indicates that the scores assigned to a student's response often differed between the three raters. Modest or low inter-rater reliability can be considered measurement error and can undermine the usefulness of the data. However, the researchers in this study attempted to compensate for the modest inter-rater reliability by using the average of the three scores for each student response. Measures with modest or low reliability are undesirable in research because they may produce scores or data that are further from the “true” value.
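
To illustrate the averaging approach, here is a small sketch with hypothetical rater scores (none of these numbers come from the study):

```python
# Hypothetical example: three raters each score five responses; the final score
# for each response is the mean across raters, as in the study. Pairwise
# correlations give a rough sense of inter-rater agreement.
import numpy as np

scores = np.array([  # rows = responses, columns = raters 1-3 (hypothetical data)
    [8, 10,  9],
    [5,  7,  6],
    [11, 9, 10],
    [4,  6,  7],
    [9,  9, 12],
])
print(scores.mean(axis=1))                             # averaged scores used in analysis
print(np.corrcoef(scores.T)[np.triu_indices(3, k=1)])  # rater-pair correlations
```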

Content validity would have been most relevant to this study. In order to measure student writing performance, it would be important to ensure that the assessment actually measures writing performance.

Reference List Assignment

Research question: How can use of technology in university level science education increase student understanding of core concepts?

Keywords: education* technology “post secondary”; biology education* technology; education* technology university science; education* technology AND biology AND university OR post secondary OR tertiary AND understanding.

Reference manager application: RefWorks

 

References

Bennett, S., Agostinho, S., & Lockyer, L. (2015). Technology tools to support learning design: Implications derived from an investigation of university teachers’ design practices. Computers & Education, 81, 211-220. https://doi.org/10.1016/j.compedu.2014.10.016

Borokhovski, E., Bernard, R. M., Tamim, R. M., & Schmid, R. F. (2016). Technology-supported student interaction in post-secondary education: A meta-analysis of designed versus contextual treatments. Computers & Education, 96, 15-28. https://doi.org/10.1016/j.compedu.2015.11.004

Dantas, A. M., & Kemm, R. E. (2008). A blended approach to active learning in a physiology laboratory-based subject facilitated by an e-learning component. Advances in Physiology Education, 32(1), 65-75. https://doi.org/10.1152/advan.00006.2007

Förster, M., Weiser, C., & Maur, A. (2018). How feedback provided by voluntary electronic quizzes affects learning outcomes of university students in large classes. Computers & Education, 121, 100-114. https://doi.org/10.1016/j.compedu.2018.02.012

Goff, E. E., Reindl, K. M., Johnson, C., McClean, P., Offerdahl, E. G., Schroeder, N. L., & White, A. R. (2017a). Efficacy of a meiosis learning module developed for the virtual cell animation collection. CBE Life Sciences Education, 16(1), Article 9. https://doi.org/10.1187/cbe.16-03-0141

Goff, E. E., Reindl, K. M., Johnson, C., McClean, P., Offerdahl, E. G., Schroeder, N. L., & White, A. R. (2017b). Variation in external representations as part of the classroom lecture: An investigation of virtual cell animations in introductory photosynthesis instruction. Biochemistry and Molecular Biology Education, 45(3), 226-234. https://doi.org/10.1002/bmb.21032

Henderson, M., Selwyn, N., Finger, G., & Aston, R. (2015). Students’ everyday engagement with digital technology in university: Exploring patterns of use and ‘usefulness’. Journal of Higher Education Policy and Management, 37(3), 308-319. https://doi.org/10.1080/1360080X.2015.1034424

Kara, Y., & Yeşilyurt, S. (2008). Comparing the impacts of tutorial and edutainment software programs on students’ achievements, misconceptions, and attitudes towards biology. Journal of Science Education and Technology, 17(1), 32-41. https://doi.org/10.1007/s10956-007-9077-z

Lowerison, G., Sclater, J., Schmid, R. F., & Abrami, P. C. (2006). Student perceived effectiveness of computer technology use in post-secondary classrooms. Computers & Education, 47(4), 465-489. https://doi.org/10.1016/j.compedu.2004.10.014

Makransky, G., Thisgaard, M. W., & Gadegaard, H. (2016). Virtual simulations as preparation for lab exercises: Assessing learning of key laboratory skills in microbiology and improvement of essential non-cognitive skills. PLoS ONE, 11(6), Article e0155895. https://doi.org/10.1371/journal.pone.0155895

Riffell, S., & Sibley, D. (2005). Using web-based instruction to improve large undergraduate biology courses: An evaluation of a hybrid course format. Computers & Education, 44(3), 217-235. https://doi.org/10.1016/j.compedu.2004.01.005

Sadler, T. D., Romine, W. L., Stuart, P. E., & Merle-Johnson, D. (2013). Game-based curricula in biology classes: Differential effects among varying academic levels. Journal of Research in Science Teaching, 50(4), 479-499. https://doi.org/10.1002/tea.21085

Swan, A. E., & O’Donnell, A. M. (2009). The contribution of a virtual biology laboratory to college students’ learning. Innovations in Education and Teaching International, 46(4), 405-419. https://doi.org/10.1080/14703290903301735

Assignment 2: Analysis and Critique

Analysis & Critique of Who benefits from learning with 3D models? The case of spatial ability (Huk, 2006)

The purpose of this study was to determine if interactive three-dimensional (3D) models of plant and animal cells had an effect on students’ learning of cell biology in a hypermedia learning environment, and whether the effect was different between students with high vs low spatial ability. In the context of this paper, spatial ability was considered to be the students’ ability to visualize and rotate 3D images in their mind. The researcher identified a gap in research examining the educational value of 3D models, stating that most prior research had not found either advantages or detriments when using 3D vs 2D images. There was also little or no previous research connecting spatial ability with the educational value of 3D models.

The most significant prior studies linked to the research are Keehner et al. (2004), which revealed that the effect of 3D computer models on comprehension depends on spatial ability; Mayer (2001), which presented the ability-as-enhancer hypothesis (higher spatial ability increases comprehension of 3D models); and Hays (1996), which proposed the alternative ability-as-compensator hypothesis (people with lower spatial ability benefit more from 3D models because the models compensate for their weaker ability to visualise 3D structures). A compelling idea in this study was that the addition of interactive 3D models may increase the cognitive load of learners, especially in a hypermedia environment, and that a learner’s spatial ability may affect whether they are cognitively overloaded by the extra information.

Spatial ability of participants was measured by their score on a 21-question tube figures test. A median split was used to categorize students as having either high or low spatial ability in the graphical representation of results; it was not stated whether this was the method used in calculations or whether the raw score out of 21 was used. Knowledge acquisition was measured by student scores on a pencil-and-paper post-test with a total of 7 questions. The first 3 questions were designed to test auditory recall and the remaining 4 questions to test visual recall. Cognitive load was measured indirectly through students’ self-reported agreement or disagreement, on a 5-point scale, with the statement “The presentation of the animal and plant cell is easy to comprehend” (Huk, 2006, p. 398).

The research was quantitative as the researcher used test scores and statistical analyses to measure the results (both test results and survey results). It was experimental because there was an intervention (the addition of 3-D models to the learning material) and the participants were randomly assigned to control or experimental groups. There was an element of problem-based research because the researcher was directly exploring the problem of whether 3-D models were beneficial to learners of cell biology. The research was also partially theory-based because the researcher framed the experiment in such a way that it could support/refute two conflicting hypotheses that had been proposed in previous studies: either the ability-as-compensator hypothesis or the ability-as-enhancer hypothesis. The support of one of these hypotheses could possibly be used to generalize about a larger population of learners or help to refine the theories themselves.

One independent variable was the presence or absence of interactive 3-D cell models in the software that students used prior to their knowledge acquisition test. The experimental group was given identical software to that of the control group, except with the addition of 3-D cell models. The second independent variable was the spatial ability of the participants. A median split was used to categorize students as having either high or low spatial ability (at least for the purposes of graphical representation). It was unclear if the researcher used the raw score out of 21 in their statistical analysis.
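
A median split is easy to illustrate; the sketch below uses hypothetical tube-figures scores, not data from the study:

```python
# Hypothetical median split on spatial-ability scores (out of 21).
import numpy as np

spatial = np.array([6, 9, 11, 12, 13, 14, 15, 17, 18, 20])  # hypothetical raw scores
median = np.median(spatial)
group = np.where(spatial > median, "high", "low")  # dichotomized spatial ability
print(median, group)
# Dichotomizing a continuous score discards information, which is why it matters
# whether the split or the raw score entered the statistical analysis.
```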

One dependent variable was students’ knowledge acquisition, measured by the number of correct answers on a post-test. Knowledge acquisition was split into the sub-categories of auditory and visual recall. The other dependent variable was students’ impression of the module, a rating of whether the students found the information easy or difficult to understand. This was interpreted as the self-reporting of cognitive load. The attribute variable was students’ prior knowledge of the subject material. This was analysed using the scores obtained on a pre-test one week ahead of the actual test.

The research design was an experimental, randomly assigned, 2 x 2 design. Control procedures that were used included randomization of subjects into intervention or non-intervention groups and the statistical control using students’ prior domain knowledge (and in some cases, the amount of time spent on the content module) as covariates. The author noted that the research took place in the students’ everyday classroom surroundings to increase external validity of the experiment.

The sample of research participants consisted of 106 high school or college-level biology students from more than one school (with the total number of schools not specified) in Germany. The author reported that there were 54 students randomly assigned to the control group and 54 to the experimental group. About 67% of participants were female and the mean participant age was 18.49 years (SD = 2.16 years).

As an alternative hypothesis, the author posited that gender differences may influence spatial ability and an imbalance of male to female participants could have introduced bias to the results. However, the random assignment of participants ensured that the ratio of male to female participants was nearly equal between the two groups.

The data were analyzed using linear regression models with prior knowledge as a covariate; in the case of auditory recall, time spent on the module was used as a covariate as well. Including time spent as a covariate made no statistical difference for visual recall, so it was not included there.
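
The paper does not show the model specification, but an analysis of this kind might look like the following sketch in Python (statsmodels), where all variable names and data are hypothetical placeholders:

```python
# Sketch: post-test recall regressed on group, spatial ability, and their
# interaction, with prior knowledge and time on module as covariates.
# All data below are randomly generated placeholders, not the study's data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 106
df = pd.DataFrame({
    "group": rng.integers(0, 2, n),             # 0 = control, 1 = 3D models
    "spatial": rng.integers(0, 22, n),          # tube-figures score out of 21
    "prior_knowledge": rng.integers(0, 11, n),  # pre-test score
    "time_on_module": rng.normal(30, 5, n),     # minutes on the content module
})
df["auditory_recall"] = (0.3 * df["prior_knowledge"]
                         + 0.05 * df["group"] * df["spatial"]
                         + rng.normal(0, 1, n))

model = smf.ols("auditory_recall ~ group * spatial + prior_knowledge + time_on_module",
                data=df).fit()
print(model.summary())
```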

A major finding of the study was that students with high levels of spatial ability showed higher mean post-test scores (both auditory and visual) when the 3D model was present in their software, while the opposite was true for students with low spatial ability. The results suggest that only students with high levels of spatial ability benefitted from the inclusion of interactive 3D models.

The most important point made in the discussion section was that students with lower spatial ability may experience cognitive overload when integrating the information from a 2D drawing with that of a 3D computer model.

There were a few methodological issues in this study. The most obvious was that the number of participants did not add up: it was reported that there were 54 students in each group (intervention and non-intervention), but the total number of participants was reported as 106. It is possible that an error was made in writing the paper and each group only had 53 students, or that 2 participants were lost over the course of the study (although this was not reported). It is also possible that there were 2 non-binary participants whose data were excluded, but it is unclear why the researcher would omit their data, especially if there were one such participant per group.

Additionally, there were no details describing how student answers on the pre- and post-tests were graded. Whether the answers were rated by one or more people, and whether those raters had a high level of inter-rater reliability, could have an impact on the validity of the results. If multiple raters had differing opinions on the students’ answers and did not reconcile them through averaging or some other means, the data would not be reliable, and one would have to question the results of the statistical analyses.

In the presentation of the data, the researcher used a median split to show the difference in knowledge acquisition between students with high spatial ability and low spatial ability. There was no clarification in the methods section whether the actual spatial ability score of the students or median split was used in data analysis. If it were a median split, this would reduce the validity of the data.

On the other hand, the same computers were used at each of the study locations and the same instructor gave the directions to the students. This control for the confounding effects of technology and different instructors was a notable strength in the research design. As well, random assignment of participants controlled for differences in prior knowledge, which, when tested, was not statistically different between groups. Although there were more female than male participants in the study, the proportion of female to male participants did not differ between groups either.

Overall, I found this study useful both for my personal work as a biology laboratory instructor and as a deeper investigation into spatial ability and cognitive load of learners. In order for the study to have repeatability, and in order to effectively gauge the reliability of the study, more detail would be required in the methods. However, the researcher did appear to pay attention to detail and consider alternative hypotheses and control for confounding. Therefore, I would recommend that this study be considered when making decisions regarding the introduction of computer models to students learning cell theory. In cases where students have limited time to interact with software, the addition of more learning tools may impact their ability to recall information. This study indicates that more research is required to explore the connection between spatial ability, cognitive load, and 3D computer models in other areas of study.


References

Hays, T. A. (1996). Spatial abilities and the effects of computer animation on short-term and long-term comprehension. Journal of Educational Computing Research, 14, 139-155.

Huk, T. (2006). Who benefits from learning with 3D models? The case of spatial ability. Journal of Computer Assisted Learning, 22(6), 392-404. https://doi.org/10.1111/j.1365-2729.2006.00180.x

Keehner, M., Montello, D. R., Hegarty, M., & Cohen, C. (2004). Effects of interactivity and spatial ability on the comprehension of spatial relations in a 3D computer visualization. In K. Forbus, D. Gentner, & T. Regier (Eds.), Proceedings of the 26th Annual Conference of the Cognitive Science Society. Erlbaum.


IP 2 – Annotated Bibliography

Tsai, Y. L., & Tsai, C. C. (2020). A meta-analysis of research on digital game-based science learning. Journal of Computer Assisted Learning, 36(3), 280-294. https://doi.org/10.1111/jcal.12430

Tsai and Tsai (2020) performed a meta-analysis of 26 peer-reviewed empirical studies, published between 2000 and 2018, that examined the use of digital games for science learning (including physics, chemistry, biology, and natural science). The study design followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) and APA meta-analysis reporting standards. Comprehensive Meta-Analysis software was used to calculate the overall effect size for two groups of studies (random-effects model), gameplay design (GD; n = 14) and game-mechanism design (GMD; n = 12), and its subgroup analysis tool (mixed-effects model) was used to compare education level, single- versus multiplayer game design, roleplay versus no-roleplay game type, and learning versus gaming mechanisms.

In the GD group, the results showed that students’ science knowledge acquisition at all educational levels significantly improved when digital games were used for learning in place of other teaching methods. There was no significant difference observed between single and multiplayer games or games with or without roleplay. In the GMD group, it was found that both added learning mechanisms and gaming mechanisms significantly increased science knowledge acquisition at all educational levels, with no difference between the two. The authors suggest that the results support Piaget’s theories that connect play and cognitive development and note that many children lack motivation to learn science because it is perceived as complex, but digital game-based learning may engage them in the subject matter.

The authors of this meta-analysis provided sound reasoning for their research design and provided relevant connections between the results and learning theories that inform pedagogy. They acknowledged several limitations to their study and presented compelling arguments for further research on topics such as the connections between digital game-based learning and student problem solving and gameplay behavior. This paper is clearly written and provides empirical evidence that students can develop their scientific knowledge through digital gameplay.


Wang, L.-H., Chen, B., Hwang, G.-J., Guan, J.-Q., & Wang, Y.-Q. (2022). Effects of digital game-based STEM education on students’ learning achievement: A meta-analysis. International Journal of STEM Education, 9(1), 1–13. https://doi.org/10.1186/s40594-022-00344-0

Wang et al. (2022) performed a meta-analysis of 33 studies, with a total of 36 effect sizes, published between 2010 and 2020. The studies were selected in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, based on the following criteria: a focus on games and STEM (specifically science, math, and engineering/technology) learning of students (K-12 or higher education), the presence of a control group, the quantity of data, and publication in English. Comprehensive Meta-Analysis 3.0 software was used to calculate the effect size (expressed as the standardized mean difference using the random-effects model), and the variables considered included the control treatment, educational level, subject, intervention duration, game type, and gaming platform.

The results indicate a significant positive effect of digital game-based learning on STEM students’ achievement. Digital games outperformed non-digital games, but there was no significant difference between subject disciplines, between traditional and multimedia instruction for control groups, or between gaming platforms. Primary school students showed significantly better learning achievement when learning from digital games, while there was no significant difference among intervention durations. The authors note, however, that interventions of less than one week showed the largest gains in achievement compared with each of the longer intervention periods, perhaps due to novelty.

Although the authors provided details about how studies were chosen, I would like to read more about how learning achievement was measured, and whether pre-tests and post-tests were used for assessment. Wang et al. (2022) do acknowledge the limitations of their meta-analysis and indicate that a follow-up study from another perspective (e.g., cognitive skills, affective influences) is warranted. The research methods could have benefitted from more detail, and a connection to pedagogical practices would create more relevance for practicing educators.
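
Neither annotation shows the underlying computation, but the random-effects pooling that both meta-analyses rely on can be sketched in a few lines. The example below uses hypothetical effect sizes and the standard DerSimonian–Laird estimator rather than Comprehensive Meta-Analysis’s exact implementation:

```python
# Random-effects pooling sketch (DerSimonian-Laird). Effect sizes and variances
# below are hypothetical, not taken from either meta-analysis.
import numpy as np

yi = np.array([0.35, 0.62, 0.18, 0.50])  # per-study standardized mean differences
vi = np.array([0.04, 0.06, 0.03, 0.05])  # per-study sampling variances

w = 1 / vi                                # fixed-effect weights
y_fixed = np.sum(w * yi) / np.sum(w)
Q = np.sum(w * (yi - y_fixed) ** 2)       # heterogeneity statistic
C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - (len(yi) - 1)) / C)  # between-study variance estimate

w_re = 1 / (vi + tau2)                    # random-effects weights
pooled = np.sum(w_re * yi) / np.sum(w_re)
se = np.sqrt(1 / np.sum(w_re))
print(pooled, (pooled - 1.96 * se, pooled + 1.96 * se))  # pooled SMD and 95% CI
```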

IP 5 – Hegemonic Play: Gatekeeping Game Culture

References

Choi, Y., Slaker, J. S., & Ahmad, N. (2020). Deep strike: Playing gender in the world of Overwatch and the case of Geguri. Feminist Media Studies, 20(8), 1128-1143.

Gach, E. (2022, July 6). Ubisoft employees have ‘grave concerns’ over Toronto studio’s misconduct allegations. Kotaku.

Hathaway, J. (2014, October 10). What is Gamergate, and why? An explainer for non-geeks. Gawker. Retrieved February 12, 2023, from https://www.gawker.com/what-is-gamergate-and-why-an-explainer-for-non-geeks-1642909080

Mac, J. (Host). (2021, December 18). How to be an ally [Audio podcast episode]. In Cheat Codes. Women In Games International. https://open.spotify.com/show/7vQBfOKWsdnYVkLrermilu

Tyler, D. (2023, February 2). How women are changing the gaming industry. Video Game Design and Development. Retrieved February 12, 2023, from https://www.gamedesigning.org/gaming/women/

Witkowski, E. (2018). Doing/undoing gender with the girl gamer in high-performance play. In K. L. Gray, G. Voorhees, & E. Vossen (Eds.), Feminism in play (pp. 185-203). Springer International Publishing.

Intellectual Production 8: Game Design 101

The following post is a selection of exercises from the first three chapters of Fullerton’s Game Design Workshop (2014). I made an effort to choose at least one exercise from each chapter and focused on the ones that I found the most interesting.

1.3 Your Life as a Game

Here are some aspects of my life that could be made into games:

Travelling to work

The travelling to work game has a simple objective: make it to the office on time with the highest number of points. However, there are a number of challenges the player must navigate in order to complete the task.

First, the player must decide what time to leave the house, using a rough estimate of the time required to get from home to work. The more time spent in the home prior to leaving, the more ‘alertness’ points the player will have to spend on the drive, since they will have had more coffee. However, the game takes place in Saskatchewan, and the weather is unpredictable! Sometimes there will be 10 cm of snow on the ground (and the windshield), so it is often wise to sacrifice some coffee time to make sure the vehicle can be sufficiently scraped for maximum visibility.

There will be many obstacles to avoid on the drive, some more important than others. Hitting one of the many potholes will only deplete the vehicle’s condition points, but hitting another vehicle or cyclist is an instant game over.

Once the car is safely parked, the number of alertness points, vehicle condition points, and number of minutes left before the start of the workday are totalled to calculate the player’s daily ranking.

 

Aquarium care

 

The fun of the aquarium care game comes from managing a system with several interconnected variables. The player must select suitable substrate and vegetation and carefully choose inhabitants that are compatible. The bioload, water pH, and hardness must all be carefully monitored to ensure the inhabitants are happy. Some animals, such as snails, can be added to reduce the amount of algae present, but their extra nitrogen production must be taken into consideration. Each new element added will have a trade-off. The overall goal is to create a healthy ecosystem that requires as little input as possible. The player wins the game when the aquarium becomes stable enough that it only requires weekly water top-ups and daily food for the animals.

Grocery shopping

The Grocery shopping game is an exercise in resource management. The player gets a predetermined amount of time and money and must collect as many items on the list as possible without running out of either. The score is based on the number of items successfully purchased in the allotted time and the number of dollars remaining in the budget at the end of the trip. An experienced player will make a mental map of the grocery store and retrieve items in the order that requires them to traverse the fewest aisles.


Cooking supper

Cooking supper is the sequel to the grocery shopping game. The player must prepare a healthy and delicious meal from the items acquired in the previous game. The goal is to include something from every food group and make it as yummy as possible without adding too much sugar, salt, or fat. Bonus points are awarded based on timing (if all components are still hot when the meal is served), use of cookware (fewer dishes = better), and appropriate volume of food (having just enough left for lunch the next day is awarded a special achievement).

Raising chickens

This game is all about balancing animal care, egg production, and neighbourhood satisfaction. Each chicken will produce a certain number of eggs per year, and the number of eggs will increase as chicken happiness increases. Chicken happiness is based on the amount of space available, the cleanliness of the run, and the variety of feed provided. Neighbour satisfaction is based on the number of gifted eggs and the amount of noise produced by the chickens. The game is never-ending unless chicken happiness decreases to the point that they get sick or neighbour satisfaction decreases to the point where they report the player’s chickens to animal control (because the game takes place in a city with anti-chicken bylaws). Each spring season there is a mini-game where the player has to look at all different kinds of adorable heritage breed chicks and avoid buying too many.

 

2.6 Challenge

The following games are ones that I find particularly challenging:

Donkey Kong Country: Tropical Freeze

This game requires a lot of precisely timed button presses, which is not one of my strengths. As Donkey Kong flies through a side-scrolling level on a rocket-propelled barrel, the player must repeatedly tap a button to keep the barrel from falling. Increase the speed of the button taps and the barrel rises. If the barrel crashes into any of the many obstacles or touches the bottom of the screen, it explodes, and the level must be restarted at the last checkpoint. Because there is not much time to react between first seeing an obstacle and dodging it, I find that I need to play the same level over and over, which quickly becomes frustrating.

Kena: Bridge of Spirits

It is the combat that makes this game very challenging for me. There are a lot of different options for offensive and defensive moves, and each type of enemy requires a unique combination of them to be killed. It can take many attempts and a large amount of trial and error to figure out what works against each new enemy. Once the player gets over the hurdle of determining the correct type of combat, the next challenge is actually executing the moves without losing too much health. During battle, resources are scarce and it can take a long time to save up enough power to do a significant amount of damage to the enemy.

Hollow Knight

I find Hollow Knight challenging because the game map is large and initially very difficult to navigate. The player must work through the world piece by piece and can only access new parts of the map after solving various challenges, many of which cannot be solved without discovering a new power. The challenge would be lessened if the map were laid out in a linear fashion so that the player could work through it as they gained new powers, but as it is designed, the player must remember where the inaccessible places are and return to them later. It almost feels like a maze sometimes!

 

3.2 Three-Player Tic-Tac-Toe

For this exercise, I first tried making a 4×4 grid and proceeded to play against myself with three different symbols. I used X, O, and then arbitrarily added a triangle as my third symbol since it is something that can be drawn from either side of the grid and is recognizable upside-down as well. I decided that with the larger grid, a player should be required to get four in a row to win. However, it quickly became apparent that whichever symbol belonged to the player who went third would never be able to win. I found myself thinking from the perspective of the third player that I should just give up after a couple of turns.

I then changed the rules back to three in a row to win and tried again. This felt more like the classic game of tic-tac-toe where each turn forces the hand of the next player and generally results in no winner (unless someone makes a mistake). I learned at a young age that whoever gets to go first in tic-tac-toe has a greater chance of winning. After playing a few rounds of three player tic-tac-toe, I concluded that the first player advantage exists in this version as well.
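
For concreteness, the win condition I settled on (three in a row on a 4×4 board, with any of the three symbols) can be expressed as a short check; this is just an illustrative Python sketch, not part of Fullerton’s exercise:

```python
# Illustrative win check for an n x n board: does `symbol` have k in a row
# horizontally, vertically, or diagonally?
def has_win(board, symbol, k):
    n = len(board)
    for r in range(n):
        for c in range(n):
            for dr, dc in [(0, 1), (1, 0), (1, 1), (1, -1)]:
                cells = [(r + i * dr, c + i * dc) for i in range(k)]
                if all(0 <= x < n and 0 <= y < n and board[x][y] == symbol
                       for x, y in cells):
                    return True
    return False

board = [["X", "O", "T", "."],
         [".", "X", "O", "T"],
         [".", ".", "X", "."],
         [".", ".", ".", "."]]
print(has_win(board, "X", 3))  # True: a diagonal run of three X's
```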

 

3.3 Interaction Patterns

The following is a list of my favourite games for each of the seven different interaction patterns:

 

3.6 Rules Restricting Actions

The following is a list of games along with the rules of each game that restrict player actions:

    • Twister
      • Players must place a specific (left or right) hand or foot within a circle of a specific colour when the combination is called out by the person refereeing the game
      • Players cannot remove their hands or feet from the circles they currently occupy until a new colour is called out
    • Pictionary
      • The player who is drawing for their team must communicate a word solely by drawing a picture without giving any verbal hints or writing numbers or letters on the page
    • Scrabble
      • Players may only lay down tiles that spell acceptable words
      • Players may only collect up to 7 letter tiles in their hand
      • Each word that is played must connect to a word that is already on the board
    • Operation
      • Players must use the designated tool to remove a specific object from the patient
      • Players may only remove the object that is indicated on the card that is drawn
      • If the player lights up the patient’s nose, they may not continue their turn
    • Pong
      • The only action that players may perform is to move the “paddle” vertically across the screen to stop the ball from touching their side

Reference

Fullerton, T. (2014). Game design workshop: A playcentric approach to creating innovative games. CRC Press/Taylor & Francis.

ETEC 511 IP 2: Artificial Intelligence

Table 1. A comparison of my responses to a series of questions with the responses of Chat GPT (OpenAI, 2023). For each question, my response is given first, followed by Chat GPT’s response.


1. Who were these people, and how did/does each contribute to the development of artificial intelligence? How did/does each think “intelligence” could be identified? (~50 words each)

Alan Turing

My response: Alan Turing was a British mathematician who paved the way for computer sciences and the development of artificial intelligence (Biography.com, 2020). He proposed a machine that could compute anything and created the design that led to the first personal computers. He described an Imitation Game that would test machine intelligence by observing it in conversation with a human (Turing, 1950).

 

Chat GPT’s response: Alan Turing was an English mathematician and computer scientist who helped crack Nazi codes during WWII and is considered the father of computer science. He proposed the concept of a machine, now called a Turing machine, that could perform any mathematical calculation, laying the foundation for modern computers. Turing believed that if a machine could successfully mimic human conversation, it could be considered intelligent. This became known as the Turing test.

John McCarthy

My response: John McCarthy, a professor of computer science, was a researcher of artificial intelligence and the creator of the Lisp programming language. He defines intelligence as “the computational part of the ability to achieve goals in the world” but believes it is not yet possible to define machine intelligence without relating it to human intelligence (McCarthy, 2007).

 

Chat GPT’s response: John McCarthy was an American computer scientist and mathematician known as one of the founders of artificial intelligence. He helped develop the first AI programming language, LISP, and organized the Dartmouth Conference in 1956, considered the birth of AI as a field. McCarthy believed intelligence could be defined as the ability to solve problems and make decisions, and that machines could be made to demonstrate this through programming and algorithms.

Herb Simon

My response: Herb Simon was a researcher of management science who focused on decision making. His work on psychology, economics, computer science, and math made him one of the founding fathers of artificial intelligence (Donovan, n.d.). His programming was informed by a computational theory of human intelligence where large amounts of data are collected to understand the background of a problem (Lee, 2013).

 

Chat GPT’s response: Herbert Simon was an American economist and psychologist who was awarded the Nobel Prize in Economics in 1978. He made significant contributions to the study of decision-making and artificial intelligence. He believed intelligence could be identified by a system’s ability to process information efficiently and make decisions that achieve its goals. He also introduced the concept of “bounded rationality”, suggesting that decision-making is limited by the information available and the cognitive abilities of the decision-maker.

Marvin Minsky

My response: Marvin Minsky, cofounder of the Massachusetts Institute of Technology’s Artificial Intelligence Lab, built one of the first neural network machines and strove to design machines capable of human common sense (BBC, 2016). His theory of intelligence was based on the idea that many small non-intelligent agents can work together in a society to produce something intelligent (Minsky, 1986).

 

Chat GPT’s response: Marvin Minsky was an American cognitive scientist and computer scientist who is considered a pioneer in the field of artificial intelligence. He co-founded the MIT Artificial Intelligence Laboratory and contributed to the development of early AI systems. Minsky believed intelligence could be identified by a machine’s ability to use symbols and rules to represent and manipulate knowledge, leading to “common-sense reasoning”. He also developed the theory of “frames”, which is used to represent knowledge in AI systems.

Timnit Gebru

My response: Timnit Gebru, a computer scientist and artificial intelligence ethics researcher, has analyzed representation of women, people of colour, and less developed countries in AI-produced output (Hao, 2020). The work that caused her ousting from Google outlined the risks of large language models. She believes that intelligent models of AI would be built with community contexts in mind (Walsh, 2022).

 

Chat GPT’s response: Timnit Gebru is an Ethiopian-American AI researcher and computer scientist who has made significant contributions to the field of artificial intelligence ethics and bias. She co-led Google’s ethical AI team before leaving the company in 2021. She advocates for responsible AI practices and has published research on the unintended consequences of biased data and algorithmic decisions. Gebru believes intelligence can be identified by a machine’s ability to learn and understand complex patterns in data and make decisions that align with human values and ethics.
2. How do “machine (programming) languages” differ from human (natural) ones? (~100 words)

My response: While both machine and human languages are forms of communication with semantics and syntax, they are different. Harris (2018) outlines three major differences: 1) Human languages can change depending on context, while machine languages have defined rules and do not change; 2) Natural languages include nonverbal communication that can impart emotion and nuances that are not possible to convey in machine language; and 3) in human language, errors, accents, slang and other forms of improvisation do not necessarily affect the meaning, while in machine language, these things may cause errors and render the communication ineffective.

Chat GPT’s response: Machine languages are designed for computers to understand and execute, whereas human languages are for communication between people. Machine languages are based on strict syntax and grammar rules, whereas human languages can be more flexible and context-dependent. Machine languages consist of a limited set of commands and symbols, whereas human languages have a large vocabulary and the ability to form new words. Machine languages are designed for efficiency and speed of processing, whereas human languages have evolved for nuanced communication and expression of emotions. Machine languages can only understand specific tasks, whereas human languages can understand and convey abstract concepts.
3. How does “machine (artificial) intelligence” differ from the human version? (~100 words)

My response: Machine intelligence is often measured based solely on the efficiency and speed with which the AI can perform a skill, whereas human intelligence is informed by prior experience (Chollet, 2019). Traditionally, intelligence in AI was determined by success in a specific task, but human intelligence can be applied generally, over multiple different tasks. According to Chollet (2019), “information processing systems form a spectrum between two extremes: on one end, static systems that consist entirely of hard-coded priors… and on the opposite end, systems that incorporate very few priors and are almost entirely programmed via exposure to data,” while humans “combine ample amounts of both priors and experience” (p. 19).

Chat GPT’s response: Artificial intelligence (AI) refers to the ability of machines to perform tasks that typically require human-level intelligence, such as perception, reasoning, learning, and decision making. Human intelligence, on the other hand, is the ability of a person to think, understand, and learn from experiences and environment. AI operates based on pre-programmed algorithms and rules, whereas human intelligence is based on complex neural networks, emotions, biases, and past experiences. AI can process large amounts of data much faster than humans, but lacks creativity, empathy, and common sense that humans possess. AI can be programmed to make objective decisions, while human intelligence is subjective and influenced by emotions and personal experiences.
4. How does “machine learning” differ from human learning? (~100 words)

My response: Machine learning is different from human learning because AI makes decisions based on patterns seen in a limited data set without the ability to apply context or seek out new sources of information. Like humans, machines develop biases, but theirs are the result of data sets and variables from which they do not deviate (Heilweil, 2020). Janardhanan (2020) summarizes the major differences by stating that “an intelligent human being will be able to solve problems related to unforeseen domains, whereas a machine will not.”

Chat GPT’s response: Machine learning is a type of artificial intelligence that allows computer systems to learn from data without being explicitly programmed. Unlike human learning, machine learning algorithms don’t require a teacher or human intervention to change its behavior. Human learning is based on experience, perception, and feedback while machine learning relies on algorithms and large amounts of data to make predictions. Human learning can be influenced by emotions, biases, and past experiences, while machine learning is based solely on the data it was trained on. Both processes involve changing behavior over time, but machine learning is much faster and can process much larger amounts of data than human learning.
5. How do your answers differ from what a machine could generate? Make sure you reference the specific differences between your answers and Chat GPT’s. (~200 words)

My response: Chat GPT did not cite any sources, whereas I was careful to only make claims I could back up with sources. For example, I know that the Imitation Game designed by Turing is commonly called the Turing Test, but since the paper I consulted did not mention that, I made a conscious decision not to include the information. When I entered the questions into the Chat GPT textbox, I prefaced them with the number of words I expected in the answer (e.g., In 100 words, who was…). Chat GPT went over the word limit each time. I initially went over my word limit for each question but went back and deleted every word that I could without losing meaning or important information. In some instances, Chat GPT made claims that I was unable to confirm through internet searches, for example, the statement about what Gebru believes to be the definition of intelligence. Unless she was quoted somewhere as saying those words, I think that the sentence is speculation and should have been worded as such. In comparison, my answer about her beliefs was less specific but can be traced back to a specific piece of writing.

 

References

BBC. (2016, January 26). AI pioneer Marvin Minsky dies aged 88. BBC News. Retrieved January 31, 2023, from https://www.bbc.com/news/technology-35409119

Biography.com (Ed.). (2020, July 22). Alan Turing. Biography.com. Retrieved January 31, 2023, from https://www.biography.com/scientist/alan-turing

Chollet, F. (2019, November 5). On the measure of intelligence. Google, Inc. Retrieved January 28, 2023, from https://arxiv.org/pdf/1911.01547.pdf

Donovan, P. (n.d). Herbert A. Simon: Do we understand human behavior? The economics of altruism. Retrieved January 30, 2023, from https://www.ubs.com/microsites/nobel-perspectives/en/laureates/herbert-simon.html

Hao, K. (2020, December 4). We read the paper that forced Timnit Gebru out of Google. Here’s what it says. MIT Technology Review. Retrieved January 31, 2023, from https://www.technologyreview.com/2020/12/04/1013294/google-ai-ethics-research-paper-forced-out-timnit-gebru

Harris, A. (2018, November 1). Human languages vs. programming languages. Medium. Retrieved January 31, 2023, from https://medium.com/@anaharris/human-languages-vs-programming-languages-c89410f13252

Heilweil, R. (2020, February 18). Why algorithms can be racist and sexist. Vox. Retrieved January 31, 2023, from https://www.vox.com/recode/2020/2/18/21121286/algorithms-bias-discrimination-facial-recognition-transparency

Janardhanan, P. S. (2020, April 2). Human learning and machine learning – how they differ? Data Science Central. Retrieved January 31, 2023, from https://www.datasciencecentral.com/human-learning-and-machine-learning-how-they-differ/#:~:text=Let%20us%20examine%20the%20difference,the%20form%20of%20past%20data

Lee, J. A. N. (2013). Herbert A. Simon. Computer Pioneers – Herbert A. Simon. Retrieved January 30, 2023, from https://history.computer.org/pioneers/simon.html

McCarthy, J. (2007, November 12). What is artificial intelligence? Basic questions. Retrieved January 30, 2023, from http://www-formal.stanford.edu/jmc/whatisai/node1.html

Minsky, M. L. (1986). The society of mind. Simon and Schuster. Retrieved January 30, 2023, from https://archive.org/details/societyofmind00marv/page/17/mode/2up

OpenAI. (2023, January 25). ChatGPT: Optimizing language models for dialogue. OpenAI. Retrieved January 31, 2023, from https://openai.com/blog/chatgpt/

Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59(236), 433-460. Retrieved January 30, 2023, from https://www.cs.mcgill.ca/~dprecup/courses/AI/Materials/turing1950.pdf

Walsh, D. (2022, May 26). Timnit Gebru: Ethical AI requires institutional and structural change. Stanford University. Retrieved January 30, 2023, from https://hai.stanford.edu/news/timnit-gebru-ethical-ai-requires-institutional-and-structural-change

IP 4: What is a Game?

The following mind map attempts to condense and connect the ideas presented in Chapter 3 of Understanding video games: The essential introduction (Egenfeldt-Nielsen et al., 2019) and was created using MindMeister’s mind-mapping software. If you would like to zoom in or out to read specific sections, you can view it here.

Reference

Egenfeldt-Nielsen, S., Smith, J. H., & Tosca, S. P. (2019). Chapter 3: What is a game? In Understanding video games: The essential introduction (4th ed., pp. 31-59). Routledge. https://doi.org/10.4324/9780429431791

IP 1: Usability

Assignment instructions

  1. Formulate your own conception of usability based on the first reading.
  2. Think about what is missing from your conception from an educational perspective, then create your conception of educational usability.
  3. Identify and discuss 2 of Woolgar’s examples of how usability studies ended up configuring users.
  4. Discuss the differences between these two quotes:
    1. “…the usability evaluation stage is an effective method by which a software development team can establish the positive and negative aspects of its prototype releases, and make the required changes before the system is delivered to the target users” (Issa & Isaias, 2015, p. 29).
    2. “…the design and production of a new entity…amounts to a process of configuring its user, where ‘configuring’ includes defining the identity of putative users, and setting constraints upon their likely future actions” (Woolgar, 1990).
  5. Figure out an effective way to reduce your writing to a maximum of 750 words without losing meaning.

 

Usability

Usability can be defined as the degree to which a system can be used by people to achieve certain goals. Issa & Isaias write that the goal of usability is “making systems easy to learn, easy to use, and with limiting error frequency and severity” (2015, p. 24). Ease of communication between humans and machines by means of an intuitive user interface, comfortable physical interaction, and use of familiar language in programs all increase the usability of a digital tool. Usability increases productivity and the speed of task completion without creating additional frustration.

Educational usability

In an educational setting, there are additional considerations for usability of digital learning tools. They should be affordable to avoid cost as a barrier and easy to operate for students who experience neurodivergence, learning disabilities, or physical constraints that could interfere with operation of the tool. The design must take into consideration the ages, relative knowledge levels, and experience of the students who are expected to use them for learning, and provide relevant and engaging content that promotes interest. Care should be taken to represent diversity and inclusivity so that the tool can easily be used by all students regardless of race or gender.

Woolgar’s accounts of user configuration

In contrast with the aforementioned concept of educational usability, Woolgar (1990) recounts instances where usability testing focused more on how a company could determine who would use its tools and how they would be used. For example, Woolgar (1990, p. 79) draws a connection between the computer’s physical case and the boundary between users and the company. The myriad warnings about electrocution, voided warranties, and possible damage to computer components if the case was opened or tampered with, coupled with redirection of the user to a manual or hotline, ensured that users behaved in such a way that their interaction with the computer remained distinct from the company’s. In doing so, the company was able to retain control over the prescribed use of the machine, but forfeited potentially valuable input about usability from the users.

Another example Woolgar provided of usability trials gone wrong was the testing of the user manual. User manuals that accompany the hardware and software configure users in the sense that they outline the correct sequence of actions a user should take (Woolgar, 1990, p. 81). By testing how easy the manuals were to follow during usability trials, the company was testing the effectiveness of its manual in instructing users how to behave instead of assessing how intuitive the system was to use, thereby ensuring it could maintain control over users’ future actions.

Differing perspectives of usability testing

In the following quotations, the authors show a stark contrast between their views of usability testing:

“…the usability evaluation stage is an effective method by which a software development team can establish the positive and negative aspects of its prototype releases, and make the required changes before the system is delivered to the target users” (Issa & Isaias, 2015, p. 29).

“…the design and production of a new entity…amounts to a process of configuring its user, where ‘configuring’ includes defining the identity of putative users, and setting constraints upon their likely future actions” (Woolgar, 1990).

On one hand, Issa and Isaias (2015) describe usability testing as a process where the user and developers are in communication to improve the usability of prototypes prior to the release of the final iteration of the machine. By taking into account the opinions and desires of the people who will be using the final product, the designers admit that they cannot possibly perfectly predict what users will need.

On the other hand, Woolgar (1990) describes the process of usability testing as deciding who the users should be and essentially testing them to determine the amount of control the company will have over how the machine is used. This gives the impression that the company believes their initial design is flawless and the importance is placed on ways to ensure the user can be trained to use the product effectively.

In conclusion, it is clear that if designers share the perspective of Issa and Isaias (2015), they will be far more likely to produce a tool with a high level of usability, whereas designers who operate in the way that is described by Woolgar (1990) face the possibility of producing a frustrating experience for the user.

References

Issa, T., & Isaias, P. (2015). Usability and human computer interaction (HCI). In Sustainable Design (pp. 19-35). Springer.

Woolgar, S. (1990). Configuring the user: The case of usability trials. The Sociological Review, 38(1, Suppl.), S58-S99.

 

Word count: 737