
Simulation instead of reality: Worth the effort?

I know I am not fully in line with the instructions, but I would like to focus my post on the benefits of simulation compared to reality. Some of this week's readings discussed this topic, and I found the question quite fascinating, as the answer will guide the decision on whether to use simulation in the classroom or not.

Simulations are based on models. Models are normally understood as simplified representations of a reality (of what the modeler understands as reality), built in order to focus on those issues that the modeler finds most important (Winter & Haux, 2011). By this definition, simulations simplify reality with the intention of helping students learn.

Indeed, students may learn better in simplified, “constrained” environments (Finkelstein, 2005). Why is this? First, simulations may offer “visual clues”, making concepts visible that would otherwise be invisible, such as the flow of electrons in a wire (Finkelstein, 2005). Second, a simulation avoids distractions that may interfere with successful learning (e.g. no confusion over different colors of wire) and instead helps focus the student on the relevant details (Finkelstein, 2005). Third, in a simulation, variables can be changed much more easily than in a real lab situation. Fourth, a simulation is less expensive than real lab classes, especially for large numbers of students (Srinivasan, 2006).

Yet, as Finkelstein (2005) notes, this may stand in contrast to the “conventional wisdom” that students learn most from hands-on experience. He and his colleagues therefore conducted an experiment: Two groups of university students attending a physics course were compared regarding their mastery of physical concepts. One group used real lab equipment to learn about electron flow, the other group used a simulation (the PhET Circuit Construction Kit). Overall, 231 students participated in the experiment. Data were collected by the researchers via observation of the sessions, analysis of lab documentation, the time needed to solve the lab exercises, and performance on selected questions in the final exams. The results show that the simulation group outperformed the lab group both in understanding the physical concepts and in their ability to describe their circuit. The authors conclude that simulation can replace a traditional real lab. However, they also note that simulations are not “the magic bullet”, and that they do not propose skipping all lab classes. Still, they argue, there is a place for simulations in university education, and depending on the context, the outcome may be better than with traditional labs.

Interestingly, students themselves may prefer real labs over simulations. In another study, undergraduate and graduate students who worked with a simulation were studied (Srinivasan, 2006). All students were exposed both to a MATLAB-based simulation and to traditional lab classes. No differences in learning outcome could be detected (Srinivasan, 2004). A smaller number of students were also interviewed. The interviews showed that a majority of the students perceived the software simulation as a kind of “fake” (Srinivasan, 2006, p. 137). More than half of the students would have preferred “real” lab classes.

My summary: Simulations have their place and can lead to even better learning than traditional labs. Yet, students in certain contexts may consider simulations “not real” and not “authentic” (Srinivasan, 2006).

Question: Did you observe an impact of simulations on learning, compared to traditional lab-based teaching?

References:

Finkelstein, N. D., Perkins, K. K., Adams, W., Kohl, P., & Podolefsky, N. (2005). When learning about the real world is better done virtually: A study of substituting computer simulations for laboratory equipment. Physics Education Research, 1(1), 1-8.

Srinivasan, S., Perez, L. C., Palmer, R., Brooks, D., Wilson, K., & Fowler, D. (2006). Reality versus simulation. Journal of Science Education and Technology, 15(2), 137-141.

Srinivasan, S. (2004). Implementation of an integral signals and systems laboratory in electrical engineering courses: A study. MA, University of Nebraska, Lincoln.

Winter, A., & Haux, R. (2011). Health Information Systems (2nd ed.). New York: Springer.

GLOBE as a learning community

I analyzed GLOBE and chose the following question for discussion: “In what ways do the networked communities you examined represent this characterization of learning communities? What implications does this have for your practice and the design of learning activities?”

GLOBE (https://www.globe.gov) is an international environmental science and science education program. It represents a form of citizen science, a term coined in the 1990s. In the 19th century, nearly all scientific research was done by unpaid amateurs (Scheifinger, 2016). Then, science became an activity of professional scientists. Only in recent years have contributions by citizens come to be valued again. One example of this is GLOBE.

In my opinion, GLOBE indeed represents the four characteristics that a learning community should have, according to Bielaczyc and Collins (1999):

  1. Diversity of expertise among its members who are valued for their contributions and given support to develop: Within GLOBE, young people are involved in collecting data for real scientific investigations (Penuel, 2004). Researchers, students, and teachers from all over the world participate, so GLOBE shows a high diversity of participants. Contributions by students, under the guidance of teachers, are at the core of GLOBE and thus these contributions are valued; students are seen as “contributors to actual scientific studies” (Penuel, 2004, p. 296). Partner universities train the teachers in the use of the offered GLOBE protocols.
  2. A shared objective of continually advancing the collective knowledge and skills: As an international education program, all participants share the objective of collecting data to answer research questions and thereby advance science.
  3. An emphasis on learning how to learn: The core objective of GLOBE is to educate students in inquiry-based science (Penuel, 2004). Teachers are first trained and then instruct and monitor their own students in collecting data. In this way, students gain a better understanding of how scientific inquiry works. Students are encouraged to work on questions they are interested in.
  4. Mechanisms for sharing what is learned: All data collected by students are stored on the GLOBE website. Scientists, students, and teachers can access the collected data, use them, and analyse them for various scientific questions. This GLOBE Data Archive is “a key element of GLOBE” (Penuel, 2004, p. 296).

So, overall, GLOBE as a networked community displays all attributes of a learning community.

Has anyone ever participated in GLOBE? Would you agree with my analysis?

References:

Bielaczyc, K., & Collins, A. (1999). Learning communities in classrooms: A reconceptualization of educational practice. In C. M. Reigeluth (Ed.), Instructional design theories and models (Vol. II). Mahwah, NJ: Lawrence Erlbaum Associates.

Driver, R., Asoko, H., Leach, J., Scott, P., & Mortimer, E. (1994). Constructing scientific knowledge in the classroom. Educational Researcher, 23(7), 5-12.

Penuel, W. R., & Means, B. (2004). Implementation variation and fidelity in an inquiry science program: Analysis of GLOBE data reporting patterns. Journal of Research in Science Teaching, 41(3), 294-315.

Scheifinger, H., & Templ, B. (2016). Is citizen science the recipe for the survival of paper-based phenological networks in Europe? BioScience (Oxford University Press).

App: Project Noah

I looked at the app Project Noah (https://www.projectnoah.org/mobile).

Project Noah is an app to explore and document wildlife. It also harnesses “the power of citizen scientists everywhere”. Users can spot and document wildlife, using several pre-defined categories. Users submit a picture and a description of the spotting; latitude/longitude and time of spotting are automatically added. Project Noah has a counter on its website indicating 810,570 spottings (as of March 13, 8 p.m.).

Users can leave comments on other users' spottings, find related or nearby spottings, and help to identify unidentified spottings. They can also join missions, which can be defined by anyone.

Regarding the use of Project Noah in education, the website states: “By encouraging your students to share their observations and contribute to Project Noah missions, you not only help students to reconnect with nature, you provide them with real opportunities to make a difference.” The website offers missions and challenges to be used in the classroom and supporting materials for the teacher.

Overall, I really appreciate this simple yet effective app. Students are encouraged to go out into nature and to observe, identify, document, share, and catalogue spottings in a group of peers. This is cooperative learning embedded in a real physical environment. The missions add gamification that will increase motivation. I see this as related to LfU – it is about applying scientific research principles (observing, analysing, documenting) and about knowledge construction while doing research. And, most importantly: it is very easy to use; only a mobile phone is needed, no complex technology. So the barriers to trying it out seem rather low to me.

Embodied learning – just costly solutions looking for problems?

According to Winn (2003), brain, body, and the world cannot be separated, and consequently cognition involves the whole body, not only the brain. Cognitive activity is thus connected to the environment through physical action, which means, according to Winn (2003, p. 87): cognition is embodied in physical activity; activity is embedded in a learning environment; and learning is the result of adaptation of the learner to the environment and vice versa.

Lindgren (2013) defines embodiment somewhat differently as “the enactment of knowledge and concepts through the activity of our bodies” (Lindgren, 2013, p. 445). This definition does not include the learning environment. Yet the idea is the same: learning can be fostered when the body is used to enact new concepts. Recent studies have shown that embodied learning leads to a “greater chance of retrieval and retention” (Lindgren, 2013, p. 446). Lindgren (2013) then presents two examples of embodiment as “mixed reality”: a MEteor simulation in which a student learns how objects move in space, and a SMALLab chemistry simulation in which up to four students immerse themselves in simulations. Both environments include real-time audio and video feedback.

I then chose the topic “mobile apps” and read the review by Zydney and Warner (2016), which summarizes around 30 mobile apps for science learning. I will select one of those apps for the resource sharing forum.

Regarding applying these approaches in my own teaching, I was at first a bit reluctant: I teach university students, and they would probably hesitate if I told them to use their bodies to learn about computer science :-). But I had a quick look at EBSCO and found some papers on embodied learning in university settings, such as using pointing and tracing gestures for learning anatomy (Macken, 2014) or using embodied haptic feedback to understand electric force (Magana, 2017). So when thinking about it a bit more closely, I could use VR technology to immerse students in a simulated reality where they can see – and maybe feel – the different application systems and their connections in a hospital information system. Students could follow, for example, the flow of information between the systems. This would be quite interesting to develop – but quite expensive.

When thinking about my idea and after reading all these papers, I couldn’t stop wondering whether augmented reality/virtual reality/mixed reality environments are really worth the effort. Lindgren (2013, p. 449) also discusses this topic without giving a clear answer. Yes, the examples presented in Lindgren (2013) look enjoyable and motivating, and will probably lead to effective learning of the presented concepts. Yet, in MEteor, for example, only a limited set of physical concepts is dealt with. For other scientific concepts, you would therefore need new or at least adapted environments, which costs a lot of money and time to develop. Also, technology is advancing fast at the moment, so MEteor and other examples may be outdated quite soon.

So my question to you:

Q1. Are complex technical environments that support embodied learning only “solutions looking for problems”? Are they worth the effort, compared to other TELE that we have discussed before?

Q2. How can we reduce the costs of developing and maintaining (!) environments for embodied learning?

 

References

Winn, W. (2003). Learning in artificial environments: Embodiment, embeddedness, and dynamic adaptation. Technology, Instruction, Cognition and Learning, 1(1), 87-114.

Lindgren, R., & Johnson-Glenberg, M. (2013). Emboldened by embodiment: Six precepts for research on embodied learning and mixed reality. Educational Researcher, 42(8), 445-452

Magana, A.J. & Balachandran, S. (2017). Unpacking students’ conceptualizations through haptic feedback. Journal of Computer Assisted Learning, 33(5), 513-531.

Macken, L., & Ginns, P. (2014). Pointing and tracing gestures may enhance anatomy and physiology learning. Medical Teacher, 36(7), 596-601.

Zydney, J. M., & Warner, Z. (2016). Mobile apps for science learning: Review of research. Computers & Education, 94, 1-17.

Synthesis: And the Oscar goes to …

In this module, we learned about four technology-enhanced learning environments (TELE). The following overview summarizes the theoretical basis, the objective, and the chosen approach of each of these four TELE.

  • Anchored Instruction – Theoretical basis: case-based learning, problem-based learning, project-based learning, situated learning. Objective: help students develop the confidence, skills, and knowledge necessary to solve problems and become independent thinkers. Approach: interactive digital video adventures.
  • SKI & WISE – Theoretical basis: scaffolded knowledge integration framework (SKI), cognitive apprenticeship, intentional learning, and constructivist pedagogy. Objective: develop a more cohesive, coherent, and thoughtful account of scientific phenomena; resolve misconceptions; make thinking visible; make science accessible; help students learn from each other; give feedback. Approach: free online science and mathematics learning environment that allows teachers to create their own cases.
  • LfU & MyWorld GIS – Theoretical basis: constructivism, the goal-directed nature of learning, and the learning context. Objective: provide students with procedural knowledge on how to apply declarative knowledge. Approach: motivation, knowledge construction, and knowledge refinement.
  • T-GEM & Chemland – Theoretical basis: inquiry-based learning. Objective: foster learners’ conceptual understanding and development of inquiry skills. Approach: generate, evaluate, and modify relationships.

What are the similarities between them? In my opinion, there is one strong similarity: all TELE are based on the constructivist approach to learning, which postulates that knowledge has to be constructed actively by the students. Consequently, all TELE put a strong emphasis on student activation, for example through adventures, cases, problems, or projects that need to be worked on. This also implies quite some freedom for the students to decide on the next steps of their learning process or inquiry. And this again implies that the students have to get frequent feedback – e.g. through progress in the adventure, in a simulation, on a map, from peers, or from the teacher. In detail, the presented TELE show some differences in implementation, such as the degree of flexibility they offer, with WISE probably offering the least flexibility to the students.

Which TELE now gets the Oscar from me? Of all the approaches, I found LfU the most attractive for my teaching. Its three steps of motivation, knowledge construction, and knowledge refinement seem quite flexible and usable for many teaching scenarios. Also, LfU fosters inquiry-based learning as applied in many science classes. Finally, the integration of the three LfU steps in a learning cycle nicely reflects the typically iterative nature of learning.

To whom would you give the Oscar?

Elske

Semantic inconsistency of classification systems: Let’s try GEM

In our Bachelor course on classification systems, I introduce the idea behind classification systems and also present the topic of “semantic inconsistency”. I know from the exams in this module that this is an issue that students seldom fully understand, even when I try to explain it quite well in class. So here is my idea for using GEM:

Within the module, students first get an introduction to the idea of classification systems. Classification systems allow an item to be clearly assigned to a class (e.g. a very good presentation is assigned the mark “A+”). This assignment must be fully unambiguous, to avoid misclassifications. Classification systems that allow this are called “semantically consistent”, the others “semantically inconsistent”.

To work further on this concept, I will give a short introduction to ICD10 – the 10th revision of the International Classification of Diseases. I will introduce the ICD10 browser at http://apps.who.int/classifications/icd10/browse/2016/en.

Then let’s start with GEM:

Step 1: Generate

Compile information:

  • Students are asked to work a bit with the ICD10 tool, to get familiar with how it works.
  • Students then get a list of 10 – 20 simple diagnoses that they are asked to code. They are asked to compare their findings with their neighbor. Examples:
    • Angina pectoris -> Code: I20.0
    • Acute sinusitis -> Code: J01
    • Alcoholic liver disease -> Code: K70
    • Atopic dermatitis -> Code: L20

Generate relationships:

  • Based on these examples, students are asked to explain how ICD10 is organized: What is the organizing principle?
    • Students will find out that ICD is (mostly) organized according to organ system (nervous system, eyes, circulatory system, respiratory system, digestive system etc.)
    • Students will be asked to find some proof of this assumption by showing some codes related to organ systems.

Step 2: Evaluate

  • Students are asked to code a list of further diagnoses, such as:
    1. liver cancer
    2. viral hepatitis
    3. respiratory tuberculosis
  • Students are asked: what is happening here, what is wrong?
    • They will find out that these diagnoses are not coded according to organ system only, but along different axes
      • liver cancer -> Code: C22.9 (Axis: Neoplasms)
      • respiratory tuberculosis -> Code: A15 (Axis: Infectious diseases)
    • Students are asked to find more such examples of codes not organized according to organ system

Step 3: Modify

  • Students are asked to explain why this can happen
    • They will find out that the ICD10 axes reflect different perspectives of coding (“organ system” versus “type of disease”)
  • Students are asked to look again at the ICD: so what is the organizing principle?
    • It is a mix of organ system and type of disease
  • Students are asked what this means for the person doing the coding
    • They will find out that such a diagnosis could indeed be coded in two areas (viral hepatitis may be coded either as a liver disease or as an infectious disease)
  • Students are asked to discuss with their neighbor how this problem could be solved
    • They may come up with the idea that coding rules are needed
  • Students are asked to find out how ICD solves the problem
    • They will find out that ICD indeed includes rules, called “exclusion/inclusion” notes, which clearly point to one axis
    • For example: Viral hepatitis is not coded as a liver disease, but as an infectious disease
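
To make the idea of semantic inconsistency and of these exclusion rules a bit more tangible (perhaps even as a small demo for the students), here is a minimal Python sketch. It is my own toy illustration under simplifying assumptions, not real ICD10 tooling; the code “K7x” is only a placeholder for the liver-disease block:

```python
# Toy illustration of semantic inconsistency and exclusion rules
# (not real ICD10 software; "K7x" is a placeholder, the B15-B19 /
# viral hepatitis case follows the example discussed above).

# Two axes of the classification, both "claiming" viral hepatitis:
organ_system_axis = {"viral hepatitis": "K7x (liver disease)"}    # organ-system view
disease_type_axis = {"viral hepatitis": "B15-B19 (infectious)"}   # disease-type view

# Exclusion rule, analogous to ICD10: viral hepatitis is NOT coded as a liver disease.
excluded_from_organ_axis = {"viral hepatitis"}

def assign_code(diagnosis):
    """Pick exactly one class, resolving the two-axis ambiguity via the exclusion rule."""
    candidates = []
    if diagnosis in organ_system_axis and diagnosis not in excluded_from_organ_axis:
        candidates.append(organ_system_axis[diagnosis])
    if diagnosis in disease_type_axis:
        candidates.append(disease_type_axis[diagnosis])
    # Without the exclusion rule there would be two candidates here:
    # that is exactly the semantic inconsistency the students should discover.
    assert len(candidates) == 1, "still ambiguous -> semantically inconsistent"
    return candidates[0]

print(assign_code("viral hepatitis"))   # -> "B15-B19 (infectious)"
```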

Okay, this is quite a complex example, but we are at the bachelor level here, so I guess this could work out. In any case, it is worth a try – I have this module in March, so I will try this exercise.

Do you think the approach could work?

Elske

How to address misconceptions in earth sciences with LfU?

I decided to do some searching on frequent misconceptions and on how LfU may address them:

Misconceptions regarding earth sciences are common and can be found not only in students, but also in textbooks. Stein (2008) presents a 47-item “Science Beliefs Test” that assesses students’ science understanding. When they applied it to 305 students, they found that many students held misconceptions (correct response rates ranged from 33% to 94%), for example about moon gravity. Stein (2008) argues that students “develop ideas about a variety of science topics before beginning formal science education and that these ideas tend to remain persistent despite efforts to teach scientifically accepted theories and concepts” (p. 2). Which means: we can teach whatever we want, misconceptions remain quite fixed.

How can LfU help to address these misconceptions?

LfU motivates students to observe a situation, collect data, communicate the findings, and draw conclusions. This practical work may help to overcome pre-existing misconceptions through inquiry. Communication and group discussion are especially useful for overcoming misconceptions (Shapiro, 1988). The teacher can also prepare activities or situations in which students realize that their concepts are not appropriate – this dissatisfaction may support faster accommodation of new concepts (Confrey, 1990). There is not one approach to supporting accommodation that suits all students, and thus the teacher has to adopt different strategies for different students (Saeli, 2011). LfU may help here by allowing students to work on open-ended investigations and hands-on labs, thereby addressing the different needs and skills of the students.

Do you have other ideas on how LfU addresses misconceptions?

References

Confrey, J. (1990). A review of the research on student conceptions in mathematics, science, and programming. Review of research in education, 16, 3-56.

Saeli, M., Perrenet, J., Jochems W.M.G., & Zwaneveld, B. (2011): Teaching Programming in Secondary School: A Pedagogical Content Knowledge Perspective. Informatics in Education, 10(1), 73–88.

Shapiro, B. L. (1988). What children bring to light: Towards understanding what the primary school science learner is trying to do. Developments and dilemmas in science education, 96-120.

Stein, M., Larrabee, T. G., & Barmann, C. (2008). A study of common beliefs and misconceptions in physical science. Journal of Elementary Science Education, 20(2), 1-11.

SKI and lessons on seasons: A nearly perfect WISE lesson?

At the beginning of ETEC 533, we had discussed a video on misconceptions related to seasons. I therefore chose “planetary motion and seasons” (http://wise.berkeley.edu/previewproject.html?projectId=23117) for my analysis. It is designed for grades 6 – 12 and uses instructional scaffolding to help students use evidence to generate a well-supported explanation for seasons. The estimated workload is 8 – 9 hours.

Honestly, I find this WISE lesson really good, and I had difficulty finding aspects to improve. Let me start with a short assessment based on the quality criteria that we have discussed in the course and that are reflected in the Scaffolded Knowledge Integration Framework (SKI). I also analyzed how misconceptions are addressed, how PCK is visible in this WISE lesson, and how instructional feedback is given:

Making thinking visible (SKI): To make students’ thinking visible, prompts can be used that invite students to report on their ideas (Linn, 2003). The WISE lesson indeed provides both metacognitive prompts and knowledge integration prompts. Metacognitive prompts encourage students to critique their own thinking processes. Knowledge integration prompts ask students to link and connect ideas. Indeed, this WISE lesson presents a lot of these prompts. For example, as a metacognitive prompt, the WISE lesson offers an “idea basket” that appears several times. In this idea basket, students can collect their ideas on several questions. Later, they are asked to reflect on their ideas, to drop “not helpful” ideas, and to use the remaining ideas to explain seasons (e.g. “Review the ideas in your basket. Which ones are HELPFUL for explaining seasons? Which ideas are NOT HELPFUL or are you UNSURE about using to explain seasons?”). The idea basket also serves as a knowledge integration prompt, as students are often asked to bring together evidence from various parts of the lesson (e.g. “What is the relationship between tilt, latitude and hours of daylight?”).

Making science accessible (SKI): The WISE lesson makes science accessible in several ways. First, students are put in the role of a “detective” and asked to develop inquiry questions. Also, pivotal cases (examples from well-known cities) are used. All units are developed in an explorative way: students are asked to collect evidence and to reflect on it. Overall, the WISE lesson shows the students how a research process should be done: define a research question, develop hypotheses, collect evidence, analyse the evidence, develop answers, revise them, and finally find the most convincing answer to the research question.

Helping students learn from each other (SKI): There are some “discuss with your partner” exercises in this WISE lesson. Also, towards the end, students are asked to do a peer review of the explanation for seasons given by another student.

Dealing with misconceptions: This WISE lesson addresses typical misconceptions about seasons. For example, the lesson shows what “other students” have said, and the student is then invited to think about and critique the opinion of these other students (e.g. “When asked to explain seasons, here is what one student wrote and drew: … Do you agree or disagree with this student’s explanation for seasons? Why?”). These “other students” hold typical misconceptions about seasons. So by being invited to critically assess these misconceptions, students may overcome their own misconceptions. Also, students are asked several times to collect their own reasons for seasons and to prioritize them. As this is repeated over time, students will see changes in their reasoning based on the accumulated evidence. This shows them how misconceptions can be overcome.

I hope these examples show that this lesson is strongly based on the SKI framework and on scientific inquiry. In my opinion, by addressing misconceptions, it also really shows PCK! I was not able to identify any real weak spot.

I wonder whether all WISE lessons are designed in a comparable way? What did you find?

References:

Linn, M., Clark, D., & Slotta, J. (2003). WISE design for knowledge integration. Science Education, 87(4), 517-538.

Anchored instruction and Jaspers: Is there any evidence on its benefits?

As with all educational approaches, as a scientist I am always curious whether any evidence on their benefits is available. What benefits of the new approach can be identified? So I did some searching for related evaluation studies.

The Cognition and Technology Group at Vanderbilt (1992) presents the results of an evaluation of the effectiveness of using Jasper adventures for teaching mathematics. The authors recruited 739 fifth- and sixth-grade students; 17 classes formed the intervention group and 7 classes the control group. The intervention consisted of three Jasper adventures, each presented over one week. All test instruments were self-developed, as the authors argue that standardized math tests are not the best indicators of the type of goals that they want to achieve with Jasper. The evaluation study results show:

  1. Both groups improved at the same rates in basic math knowledge.
  2. The performance of the Jasper group in solving word problems was superior to the control group.
  3. The Jasper group students scored higher on both planning and sub-goal comprehension questions.
  4. Jasper students showed significantly improved attitudes towards mathematics as compared to the control group.
  5. Qualitative analysis of teachers’ comments found strongly positive feedback on Jasper. Only the standardized paper-based tests of program effectiveness were considered negative and frustrating for the students.

The authors summarize that the evaluation was “highly positive” (Cognition and Technology Group at Vanderbilt, 1992). Interestingly, no qualitative or quantitative evaluation of the students’ attitudes towards the Jasper program itself was reported, which I see as a big limitation. Also, no long-term evaluation was conducted, e.g. one year after the Jasper experience, to determine whether the positive effects remained stable. These two aspects could form a new line of inquiry.

I looked for newer studies, and especially for studies that were not written by the original Jasper team. I want to present two of them:

Park (2012) reports on “anchored instruction” in the context of a problem-based blended learning course. However, no evaluation results are reported there. I looked for the full paper in the EBSCO database, but did not find any subsequent publication on evaluation results.

Shyu (2000) presents the evaluation of computer-assisted, videodisc-based anchored instruction on attitudes towards mathematics and on problem-solving skills among Taiwanese elementary students. The author argues that the Taiwanese education system is based on memorization rather than independent thinking, and thus that it is unclear whether the positive results of Jasper-style anchored instruction can be transferred from the US to Taiwan. The math video that was developed was comparable to the Jasper series. For two experiments, 74 fifth-graders and 37 sixth-graders, respectively, were recruited. Results show significantly more positive student attitudes (p < .01) and improved problem-solving skills in the post-test (p < .001) compared to the pre-test. Students were also positive towards anchored instruction. The author concludes that anchored instruction provides “a more motivating environment” and that all students profit from it. As a limitation, no control group was available, and the teachers’ point of view was not assessed. As a strength, the author also included the attitudes of the students, stratified into high-, middle-, and low-ability groups, and evaluated the impact of Jasper-style instruction in a culturally different setting.

Summarizing, I found some positive evaluation results for anchored instruction. We should note, however, that negative evaluation results may not have been published – a phenomenon called publication bias.

Overall, in my database query, I found 26 papers referring to “Jaspers”, the newest one from 2016. So it seems that the ideas behind Jasper have survived the changes in educational technology. The term “anchored instruction” is also frequently used: I found 260 papers in a quick search in EBSCO, the newest published in 2018.

I found that anchored instruction is no longer limited to videos that present motivating and realistic problems, but nowadays also includes “computer-based interactive activities” (such as an interactive tape measure to teach fractions) in addition to video-based anchored problems and hands-on applied projects (Bottge, 2018). In this way, the interactive functions of technology that the earlier videos did not have are today exploited for anchored instruction.

References:

Bottge, B. A., Cohen, A. S., & Choi, H.-J. (2018). Comparisons of mathematics intervention effects in resource and inclusive classrooms. Exceptional Children, 84(2), 197-212.

Cognition and Technology Group at Vanderbilt (1992). The Jasper series as an example of anchored instruction: Theory, program, description, and assessment data. Educational Psychologist, 27(3), 291-315.

Park, K., & Park, S. (2012). Development of professional engineers’ authentic contexts in blended learning environments. British Journal of Educational Technology, 43(1), E14-E18

Shyu, H. Y. C. (2000). Using video‐based anchored instruction to enhance learning: Taiwan’s experience. British Journal of Educational Technology, 31(1), 57-69.

TPACK in a 6-week online course

I had already heard about TPACK in my very first ETEC course. I still find the concept quite clear, well explained, and helpful. Obviously, it is not enough to talk about technology. Technology use is “context-bound”, as Mishra (2008) notes. Thus, the theory of TPCK, or TPACK, helps to focus on the fact that any use of educational technology has to be planned in relation to the content and to the pedagogy that the teacher aims for. This also has implications for the organization of professional development workshops.

So while the concept seems quite clear, I will now try to verify it by applying it to my own teaching. I just taught a 6-week online course on project management. Thirteen adults with various health care professional backgrounds participated. Their workload was around 15 hours per week. The course was organized within our learning management system, Moodle.

Now let’s use TPCK as an analytic lens: My content knowledge (CK) on this subject is fair – I know the basics of project management in theory and practice. My pedagogical knowledge (PK) as a university teacher is somewhat limited; for this course, we chose a constructivist approach, using the concept of E-tivities by Gilly Salmon (Salmon, 2013) and elements from the Community of Inquiry (Garrison, 2007) and from collaborative and situated learning. My technological knowledge (TK) regarding Moodle is good.

My TPK told me that Moodle offers functionality that supports our pedagogical approach, including forums, badges, online tests, and peer feedback, so I used these. My PCK told me that the best way to teach project management is to have the students do a real project. As this was not possible within the available six online weeks, I decided to let them first work on a fictitious case and then develop a project plan for a real project of their own. I also used reflections to activate previous experiences and to derive lessons learned for future projects. My TPCK, finally, brought this all together in a well-structured, collaborative learning environment based on E-tivities. Overall, the course worked quite well; the students were satisfied and learned, from my point of view, some important basics of project management.

So, summarizing, I think TPACK is a helpful lens that covers important aspects of planning and conducting teaching. Yet I am not sure that I gained new insights while analyzing my course using TPCK. I am curious to know whether you think I missed important points, or how you feel TPCK can be applied while preparing a course.

Elske

References:

Garrison, D. R. (2007). Online community of inquiry review: Social, cognitive, and teaching presence issues. Journal of Asynchronous Learning Networks, 11(1), 61-72.

Mishra, P., & Koehler, M. (2006). Technological pedagogical content knowledge: A framework for teacher knowledge. The Teachers College Record, 108(6), 1017-1054.

Salmon, G. (2013). E-tivities – The key to active online learning. New York: Routledge.