Category Archives: Class Discussion

A tyranny of experts versus a tyranny of idiots

I chose the above title because it really jumped out at me from the end of the Bates (2014) reading. While I agree that it's important for us as social media users to be discerning in our selection of resources, the statement reads to me as just another attempt at pithy fear-mongering: oh no, the internet is here, any yahoo will get to have a public opinion now, the academic world is ending. Yes, the amount of chaff to sift through has grown exponentially, but what social media also does is lift up voices that would previously have been silenced for not meeting elite criteria. The Black Lives Matter protests that first unfolded in Ferguson a few years ago would have been censored had it not been for 'guerrilla' journalists at the scene, taking photos and videos and uploading them along with tweets or Instagram posts. Social media is the ultimate media democratizer; some would say for worse, but I'm of the belief it will prove to be for the better. That said, the TVO documentary 'The Thread' explores how social media rose as a news outlet, and it's an interesting watch: http://tvo.org/video/documentaries/the-thread It shows both the incredible power of social media to do good and its incredible potential to feed mob-like behaviour online, which can lead to tragedy.

I've so far struggled to use Twitter in an academic context, as November lays out in his article, partly because of my own lack of expertise with it, but also because almost none of my students were acquainted with it either. I'm not sure if that is because of the generational disparity in who uses it, or because many of them were from overseas, but as a result I didn't find it motivating. I have experimented with it as a PLN tool, participating in a number of #edchats, and have striven to keep it a solely professional space. November's article shed more light on how it could be integrated into classes in the future, however, especially those that want to track cultural events as they are taking place. I have used current events to spark student interest in the past, for example encouraging anybody interested to follow the Umbrella Movement protests in Hong Kong a couple of years ago in our Challenge and Change (HSB4M) course, but I hadn't fully taken advantage of Twitter as a shared class activity. Still, many students were exposed to content and ideas that had previously been censored in their home countries, and I could tell they were waking up to the possibilities of social media and connectivity.

Bates' statement that "social media can make the learning of how to learn much more effective but still only in most cases within an initially structured environment" is a sound one, but I think the key word is 'initially'. Especially with older students, but hopefully with younger secondary students as well, there should be scaffolding of how to use social media for academic purposes so that they can carry those skills outside of the class setting. If we look at social media as just another tool students receive training in, our roles may become as clear as they are when we teach them how to write. The difference here is that we can reach out for assistance, as can they, as long as we are all wary of the 'idiots' that some seem so worried about.

The second question, regarding whether or not a course should be re-designed around social media, is an interesting one, and at this point, if I'm honest, I don't have a clear answer. My first impulse is to say 'no', but I can understand the intent behind the question as well. Perhaps it's just the last threads of my 'traditional' upbringing in education, but I don't know that structuring a course around social media is wise. Then again, maybe that is just because I'm not sure it would be wise to design a course around ANY one source. I'll be interested to see what my classmates say about this question this week!


References 

Bates, T. (2014). Pedagogical differences between media: Social media. In Teaching in a digital age. Retrieved from http://opentextbc.ca/teachinginadigitalage/chapter/9-5-5-social-media/ (Chapter 7, point 6)
November, A. (2012). How Twitter can be used as a powerful educational tool. November Learning [Weblog]. Retrieved from http://novemberlearning.com/educational-resources-for-educators/teaching-and-learning-articles/how-twitter-can-be-used-as-a-powerful-educational-tool/

Online Assessments not always acceptable

Last year I worked at a grade 7 and 8 school that licensed an LMS called Schoology. The user interface is very much like Facebook, and the students and I found it easy to use. I taught math exclusively, to three grade 7 classes and three grade 8 classes. While I often gave creative assignments that I would mark by hand, I set up some tests on Schoology for automated marking. Yes, it saved me an enormous amount of time, but I tried to ensure that the tests asked rich questions and were valid. Some were multiple choice and some involved filling in the blanks with the right answers, but both of these options could be marked automatically by the system. Some were multi-step problems where the student would fill in a blank at each stage of the problem, and could thus still earn part marks while being assessed against the answers pre-programmed into Schoology. Because it was a French immersion program, I would sometimes have to override the system if someone put in the English answer 36.5 rather than the French 36,5, and for other issues of misunderstanding I could go back in and give part marks.
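To make the marking logic concrete, here is a minimal sketch in Python of the kind of multi-step auto-marking described above. It is emphatically not Schoology's actual implementation; the function names, the tolerance, and the idea of normalizing the decimal comma automatically (rather than overriding by hand, as I did) are all my own assumptions.

```python
# A minimal sketch (NOT Schoology's actual implementation) of marking a
# multi-step fill-in-the-blank question: each blank is compared against a
# pre-programmed answer, decimal commas are normalized so the French
# "36,5" matches the English "36.5", and part marks accumulate per step.

def normalize_number(response: str):
    """Accept '36.5' or '36,5'; return None if the response isn't numeric."""
    try:
        return float(response.strip().replace(",", "."))
    except ValueError:
        return None

def mark_steps(responses, answer_key, tolerance=0.001):
    """Award one part mark for each step whose answer matches the key."""
    marks = 0
    for response, expected in zip(responses, answer_key):
        value = normalize_number(response)
        if value is not None and abs(value - expected) <= tolerance:
            marks += 1
    return marks

# A hypothetical three-step problem; the blank at each stage earns a mark.
answer_key = [73.0, 36.5, 18.25]
student = ["73", "36,5", "18.3"]        # French decimal comma on step 2
print(mark_steps(student, answer_key))  # -> 2 (part marks despite step 3)
```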

Anyway, despite the tests being rich and flexible, people complained to the principal, and based on his directive, I could use them as "quizzes" for formative assessment but not as "tests" for summative assessment. Why not? I still don't know. Really, I think it was because it was perceived as me taking a shortcut and not doing all the work that my predecessors had to do! Let's face it: most math teachers since the advent of the schoolhouse have marked math assignments by looking at the answer and marking it right or wrong. If it is wrong, we look back at the process and identify where the thinking led to the incorrect answer. Is it really necessary that such a job be done by a human?

Twitter, PLN, Growth

In 2013, I dove headfirst into the world of Twitter. At first, I used it primarily for professional learning: lurking on Twitter chats, collecting resources, and doing research. Very soon I began to participate, sharing what I was doing and openly accepting suggestions from others on how to improve or which tools to use across several subject areas. The learning curve was steep at first, but eventually I became comfortable. In my case, Bates (2014) was absolutely dead on in his assessment that professional learning through social media promotes global collaboration, digital literacy, networking, and individually driven learning.

In contrast, November (2012) misses a crucial part of what kept me coming back to professional use of social media: relationships. It was the people I was following, and who took the time to support me, who were the most important part of my experience. I began filtering professional knowledge in my brain in a different way than before: it wasn't what I knew, but what the people I followed knew. For example, I know to turn to Alice Keeler for information about Google Tools, or to Gallit Zvi and Joy Kirr when I'm thinking about Genius Hour. There are countless other human resources in my PLN with specialties that are ready and waiting to be tapped. I even made a video in my first MET course for a project on Twitter; it outlines the stages of using Twitter for professional development.

Following this pattern of my own steep professional growth, I wanted to lead my students through a similar experience of discovery and learning with social media. I set up class Twitter, Instagram, Facebook, and Remind accounts under the name @EduMinions (our class theme). We set up Mystery Skypes with other classes around the globe, participated in global projects such as the Global Read Aloud, and shared daily student work to hashtags like #mathphotoaday. My students also worked for a time on a Twitter project called #grammar911, where they could compose posts and edit each other's grammar, including capitalization, organization, punctuation, and spelling. Social media became a digital gallery walk and an announcement spot for fun news as we shared to the common hashtag #eduminions. There were many ways that we used social media as an exploratory tool that helped us to connect further with each other, with families at home, and with other global classrooms.

Students began to learn the pros and cons of each tool, what each was best used for, and who should be using them (adults vs. students). We worked through a digital citizenship curriculum (a topic glossed over in November's (2012) article and only briefly nodded to in Bates' (2014)). Students began to see that global connections were possible and began asking questions about other cultures, regions, and languages. Sometimes this led to self-directed or guided inquiry opportunities, depending on class interest. Students also gained an understanding of audience. They were very aware that people would see their posts and worked hard to perfect their work; perhaps even more so than if it had been just me reviewing it.

Courses do not necessarily need to be re-designed to fit around social media, but instructors certainly need to know the affordances of social media before attempting to harness them for use in a classroom space. Because there are so many types of social media, it's a bit of a tall order to ask teachers to understand the affordances and constraints of them all. However, the primary audience needs to be considered: are you doing this for the students to see and experience? For a parent community? To connect with global classrooms? Each of these scenarios may call for a different tool with a different set of affordances, and if you pick the wrong tool for the job, re-design may then be necessary. Time also needs to be set aside in class to read, interact with, and reply to these posts.


References

Bates, T. (2014). Pedagogical differences between media: Social media. In Teaching in a digital age. Retrieved from http://opentextbc.ca/teachinginadigitalage/chapter/9-5-5-social-media/ (Chapter 7, point 6)
November, A. (2012). How Twitter can be used as a powerful educational tool. November Learning [Weblog]. Retrieved from http://novemberlearning.com/educational-resources-for-educators/teaching-and-learning-articles/how-twitter-can-be-used-as-a-powerful-educational-tool/

Optional completion-based programs and assessment

The Continuing Professional Development world is an interesting one when it comes to assessment. A lot of our programs are optional, and the ones that are mandatory do not have formal assessment or grades for students. I see that being the largest challenge in incorporating technology that supports student assessment: our learners are simply not required to complete it. With that said, I think technology offers a lot of new and interesting mediums that would be appealing to students. It also adds a level of convenience that our learners would appreciate. I'm sure they would be much more likely to complete assessments if the assessments were easily accessible, quick, and provided immediate feedback.

One opportunity that I do see for us to use more technology to support student assessment is through online modules. Our department has been growing our on-demand program offerings both internally and externally. I think we could use software like Articulate Storyline to build assessment right into our modules, whether that is a few multiple-choice questions throughout the module or a summative question at the end. I think we could use this technology to assess our learners and make modules more interactive.

I would love to hear all of your thoughts on the matter!

Online Assessment

Some of the opportunities I see in using technology to support student assessment are described by Bates (2014): project work or group work, along with e-portfolios. I remember back in 2010, when I graduated from the education program, our final assessment was to create an e-portfolio linking our journey in the program to the eight BC Standards for Teachers.

Most teachers do use a physical portfolio to track their students' progress, but once I completed my own e-portfolio, I knew that this format would be perfect for students. Bates (2014) describes e-portfolios as supporting self-assessment through reflection, recording, knowledge management, and evaluation of learning activities. Students could keep them online indefinitely and see how they improve over the years to come. I certainly enjoyed mine, and I look forward to completing another e-portfolio at the end of this program.

Another great opportunity that technology can bring to student assessment is online project work. There are so many apps and so much learning software available for students to use nowadays that the opportunities are endless. The new BC curriculum is headed this way through self-exploration, problem-solving, collaborative learning, and creativity. I teach my grade 7 class using the MYP International Baccalaureate curriculum, and I feel that the new BC curriculum is built on the same idea.

Another opportunity I see in online assessment is feedback. According to Hattie (1987), the most powerful influence on student achievement is feedback (as cited in Gibbs & Simpson, 2005). Students want to know how they are doing, not necessarily through a mark, but through words. Are they doing it correctly? Is there something they should change? Are they really understanding it? Feedback is not my strong point, as it requires individualized comments for each student, and I am used to just grading a quiz or a paper using a rubric. With online feedback, I can pre-program comments so that each student receives the comment in the database that matches their response. I can see that this would be a great tool to use.
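As a rough sketch of what that pre-programmed feedback might look like behind the scenes, a simple lookup from anticipated responses to comments is enough to get started. The question, responses, and comments below are all invented for illustration:

```python
# A hedged sketch of pre-programmed feedback: anticipated responses are
# mapped to comments in a small "database", so every student receives
# words, not just a mark. The question, answers, and comments are invented.

FEEDBACK_BANK = {
    "3/4": "Correct! You also reduced the fraction to lowest terms.",
    "6/8": "Right value, but remember to reduce to lowest terms.",
    "4/3": "Check the order: the numerator is the part, the denominator the whole.",
}
DEFAULT_COMMENT = "I haven't seen this answer before; let's look at it together."

def feedback_for(response: str) -> str:
    """Look up the comment matching a student's response."""
    return FEEDBACK_BANK.get(response.strip(), DEFAULT_COMMENT)

print(feedback_for("6/8"))  # targeted comment for a common near-miss
```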

References

Bates, T. (2014). Teaching in a digital age. Retrieved from http://opentextbc.ca/teachinginadigitalage/

Gibbs, G., & Simpson, C. (2005). Conditions under which assessment supports students' learning. Learning and Teaching in Higher Education, 1(1), 3-31.

Blank Canvas

Due to the looming final ministerial exit exam, the manner in which our department assessed its students changed. Teachers, pressed for time, hurried through the concepts that might be covered on the final. Amidst the chaos generated by a genuine desire for our students to succeed on this exam, we omitted formative assessment as well as opportunities to provide detailed and relevant feedback; we provided grades. Quite honestly, we have lost most, if not all, of the conditions set out by Gibbs, Simpson, Gravestock, and Hills (2005) for effective assessment. It seems counter-intuitive, yet when faced with growing fears of failure, we wanted to drill the knowledge into the students. Good intentions, poor results.

This rush not only pushed aside formative assessment but also modified the format of summative assessments. In order to best mimic the final exam, projects, case studies, portfolios, and, to a certain extent, complex labs were replaced by paper quizzes and tests. In some terms, this form of summative evaluation accounts for 90% of a student's grade. In such a context, you must understand that technology, even the use of science simulations, is a very rare practice.

On the positive side, I must say that the opportunities for using technology in this context are endless, as one is faced with a blank canvas. Technology could be used to provide feedback and formative assessments, and we do not have to deal with technology being misused or with revising existing behaviour, because technology is simply not being used at all. However, before we can use the technology at our disposal, we must contend with two enormous challenges: tradition and fear.

Interestingly enough, I see these challenges originating not from the students or their parents, but from the educators. Students are comfortable with the idea of using technology; we are teaching a generation that has never known a world without the internet. Many educators, however, have had little experience with technology. Some fear trying something new and prefer sticking with the tried and true methods they have developed over the years. Some fear letting go and doubt that meaningful learning can occur beyond their classroom walls. Others fear being embarrassed in front of their students, being exposed as less technologically savvy than they are. Faced with such fear, it seems safer to stick with tradition, with the old-fashioned quiz and test. Far from pleased with the results of such methods or the long hours of correction, these educators still shy away from the potential benefits of trying technology. Instead of admitting this fear, some dismiss technology by assuming that students will cheat or plagiarise (Jenkins, 2004); however, this behaviour is observable regardless of whether the assessment is performed with technology or not. Others assume that their students, those potential "digital natives" (Prensky, 2001), would simply not be able to understand how it works.

In my context, the affordances granted by the use of technology in assessments need to be unlocked slowly. We need to build confidence in the notion that assessment is not only the driving force for learners (Bates, 2014); it also drives our teaching practices. It can provide educators with immediate feedback on the difficulties experienced by the class and on concepts that are still unclear to many. Such feedback is necessary to teaching itself, and I believe that technology can aid in that pursuit.

I have laid out below a potential progression that might lead to a full integration of technology in assessments by slowly increasing teachers' confidence in both the technology and their own technological skills, as well as by slowly transferring assessment activities outside the classroom walls. Any additions or comments you might have on this progression would be greatly appreciated. For convenience, I have placed it in a Google doc for everyone to edit: https://docs.google.com/document/d/14oezqRrFMkdv4AZjB86Y8YThD7KMHB9D__RyyIfW0cw/edit?usp=sharing


Thank you,

Danielle

[Figure: proposed progression of technology use in assessment, from low tech to high tech]

References:

Bates, T. (2014). Teaching in a digital age. Retrieved from http://opentextbc.ca/teachinginadigitalage/

Gibbs, G., Simpson, C., Gravestock, P., & Hills, M. (2005). Conditions under which assessment supports students’ learning.

Jenkins, M. (2004). Unfulfilled promise: formative assessment using computer-aided assessment. Learning and Teaching in Higher Education, 1(1), 67-80.

Prensky, M. (2001). Digital natives, digital immigrants part 1. On the Horizon, 9(5), 1-6.

Assessment challenges in Med Ed

Medical education has changed significantly in the last several years. Emphasis used to be placed on content, and exams would test minutiae that were forgotten as quickly as they were memorized. Medical education today aims to teach communication, collaboration, self-directed learning, compassion, patient-centred care, leadership, health advocacy, and professionalism. Even in terms of content, the aim is to learn the "big picture": to take a situation, critically analyze it, develop a plan, and execute it. Given this change, assessment methods must also change.

Traditionally, multiple-choice examinations were the assessment method of choice. They can be administered to a large group of students at once, and scoring can be performed electronically. Unfortunately, this method may not accurately test a student's level of understanding, does not provide effective feedback, and makes it difficult to assess attributes such as communication, collaboration, and leadership. Herein lies the challenge: how do you assess all of these attributes in over 100 students and provide formative feedback that will inspire and motivate medical students to further their education? Can technology assist in this endeavor?

I think that some of these challenges can be addressed through technology. Testing higher-level cognitive processes can be done by making multiple-choice questions context-rich, and converting them to short-answer questions that can be marked electronically is also feasible. Feedback for these questions can likewise be provided through technology. I understand that there are some programs able to score essays, which would assess not only knowledge and higher-order thinking skills but also communication. Discussion forums are another way to assess communication and analytical skills. Online simulations would also be useful in these domains, but they would have to be created, which requires more skills than I possess. And I'm sure there are a lot of applications out there that I'm not aware of that could be useful in the assessment of medical students. I look forward to learning more about these as I progress in my career and education.

Flipping our approach?

I am acutely aware that the overall approach to assessment at my school is not ideal, for students or for teachers. While we also have significant barriers to using technology, I think online assessment can help shift our approach to assessment in a direction that is more useful for students and teachers.

It was Bates' (2014) list of assessment purposes that really got me thinking about assessment at my school. Bates ordered his list from the purpose most important to creating an effective learning environment to the least important. It was no surprise to me that our overall approach is the reverse. As much as it pains me to admit, as a private school that caters to a very specific demographic, I would say that the number one priority of assessment at my school is "for institutional accountability and/or financial purpose" (Bates, 2014). As this makes Bates' list, it does have a purpose in creating an effective learning environment, largely ensuring there IS a learning environment to begin with. Surprisingly, I don't feel pessimistic about this. The results students get on assessments like the TOEFL, IELTS, and SAT help attract more students. More students mean more teachers, which means there is opportunity for change, because there is bound to be a teacher brave enough to switch the focus, if not for the whole school, at least in their own classes; perhaps that will include online assessments.

The benefits of online assessment for my school are twofold: it would potentially maximize the time teachers spend giving feedback, and it would help us reverse our current approach to assessment by focusing on the first two items on Bates' list: "to improve and extend student learning" and "to assess student knowledge and competence in terms of desired learning goals or outcomes" (Bates, 2014). I see online assessment as a means to focus more on formative assessment with feedback. As Gibbs and Simpson's Condition 4 ("Sufficient feedback is provided, both often enough and in enough detail") states, "feedback may need to be quite regular, on relatively small chunks of course content, to be useful" (p. 17). The main obstacle to giving frequent feedback at my school is large class sizes and a heavy teaching load. Using an online assessment with automated feedback would ensure that every student is getting some feedback. Using Socrative, for example, would allow my students to complete quick assessments, perhaps even daily ones, using their cell phones. I know that precise written feedback is ideal, but even just showing correct answers is feedback that students weren't getting before, at least not immediately. Gibbs and Simpson's Condition 6 ("The feedback is timely in that it is received by students while it still matters to them and in time for them to pay attention to further learning or receive further assistance", p. 18) supports my assumption, although their example referred to "imperfect feedback from a fellow student" (p. 19) rather than learning the correct answers, as I suggested. Timely feedback also helps the teacher: if the teacher is getting feedback from online assessments by way of reports, they can quickly identify trouble areas for the whole class or for individual students and address them in the next class, rather than finding out after a summative assessment that no one understood a particular topic. So even a simple Socrative quiz could make a huge change at our school, as long as the feedback is useful.
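To illustrate what that teacher-side report-reading might look like, here is a small Python sketch. The CSV layout, column names, and the 60% threshold are my own assumptions for illustration, not Socrative's actual export format:

```python
# A sketch of scanning an exported quiz report for trouble areas: compute
# each question's percent correct and flag anything under a threshold.
# The CSV columns ("question", "is_correct" as 0/1) and the 60% threshold
# are assumptions for illustration, not Socrative's actual export format.

import csv
from collections import defaultdict

def trouble_areas(report_path, threshold=0.6):
    """Return questions answered correctly by fewer than `threshold` of students."""
    correct, total = defaultdict(int), defaultdict(int)
    with open(report_path, newline="") as f:
        for row in csv.DictReader(f):
            total[row["question"]] += 1
            correct[row["question"]] += int(row["is_correct"])
    return [q for q in total if correct[q] / total[q] < threshold]

# Questions most of the class missed get reviewed in the next lesson:
# print(trouble_areas("quiz_report.csv"))
```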

There is a flip side to online assessment, and perhaps it's a comment that doesn't need to be made, because, let's be honest, many teachers are guilty of making an assessment simply because they need to assess students, and they know it. I think it's easy to abuse online assessments because, in my experience, they're easy to use even when you're not using them well (does that make sense?). So if the online assessment (or any assessment!) doesn't "[engage] students in [a] productive learning activity of an appropriate kind" (Gibbs & Simpson, 2005, p. 14), it is a waste of everyone's time. You would think that's common sense, but I see it happen every day. I'm guilty of it myself. But a good teacher keeps learning and trying new things, so I'm going to make a conscious effort not to assess unless it is worthwhile to both my students and to me as their teacher.

References

Bates, T. (2014). Teaching in a digital age. Retrieved from http://opentextbc.ca/teachinginadigitalage/chapter/5-8-assessment-of-learning/

Gibbs, G., & Simpson, C. (2005). Conditions under which assessment supports students' learning. Learning and Teaching in Higher Education, 1(1), 3-31. Retrieved from http://www.open.ac.uk/fast/pdfs/GIbbs%20and%20Simpson%202004-05.pdf

Assessment and Poor Attendance

In my school we have fantastic access to technology. We have a supportive principal, and frequent budget surpluses allow us to order what we need. One of the challenges I see in using technology to support student assessment in my context is that it requires a large investment of time for relatively small gains. I have yet to teach the same class more than two years in a row, which is a real deterrent to investing significant time developing technology for a course. However, a lot of time is also invested in helping students with poor attendance catch up; perhaps the time spent developing this technology would save time in that area as well.

Students here are often absent from school. They take time off to travel to Whitehorse; there are biannual REMs, trips to neighbouring communities, and countless workshops at the school. We also have real problems with tardiness. Given this, I have found that creating course shells and then acting as a tutor has been a successful strategy.

A positive side of the small class sizes is that if I organize assignments online, I can spend the majority of the class working with students one on one to offer constructive (formative) comments on their work. If they are stuck on an introduction, I can immediately make suggestions. This aligns nicely with the best practice mentioned by Gibbs & Simpson, where students were "…gaining immediate and detailed oral feedback on their understanding as revealed in the essay." Technology supports assessment in this way by freeing up time for the teacher to provide these tutorials. Keeping records of this sort of assessment would otherwise be tedious, and without seeing a direct result, honestly, I would probably drop off. I know my students' strengths and weaknesses like my own because of the small class sizes, but right now in my district there is a real push for detailed records of formative assessment, and I want to tap into technology to make this easier.

In some courses I have received audio feedback on my work. I would like to bring this into my own courses, as it would help me keep a record of what is being done. My goal is also to have a visible and strong connection between the standards I am trying to cover and the assessments I am assigning. I also wonder whether unscored quizzes that give students immediate feedback on why an answer was wrong might be a good thing to try in my context.

Gibbs & Simpson say that the "trick when designing assessment regimes is to generate engagement with learning tasks without generating piles of marking." As I work on my Moodle course, I am keeping this in mind. I know that the time investment will be challenging at first, but in the end it will pay off. I am also wondering if there isn't a way for MET students to share their course shells from this program with others, or for teachers in general to form groups and create courses together. We could modify them to fit our contexts, but having them started would make techniques like this much less daunting in the future.

Gibbs, G., & Simpson, C. (2005). Conditions under which assessment supports students' learning. Learning and Teaching in Higher Education, 1(1), 3-31. Retrieved from http://www.open.ac.uk/fast/pdfs/Gibbs%20and%20Simpson%202004-05.pdf

Purposeful Assessment

Assessment seems to be largely linked to purpose. Why are we using assessment? What is the assessment trying to achieve for the student? For the teacher? How do we know we are assessing what we aim to assess?

My first MET course was a surprise to me in this regard. In the entire course, I received 11 sentences of feedback from my instructor. After pouring hours into the final paper, the feedback I received was that I should have included headers. It felt like quite a letdown: I had invested all this time and valuable learning into a paper, which was assigned a grade, but the comments almost dismissed my work as of little importance. Thankfully, I have since discovered that most courses provide ample assessment opportunities, primarily through peer feedback in discussions and group assignments, but also through regular and meaningful feedback from the instructors. I have found assignments with part A and part B submissions particularly helpful: when written feedback is summative on part A but formative towards learning for part B, there is a clear direction to tackle.

The Gibbs & Simpson article this week raised a concept I hadn't thought much about, mainly, I believe, because my experience is in elementary school. They referred to "different kinds of students: the 'cue seekers', who went out of their way to get out of the lecturer what was going to come up in the exam and what their personal preferences were; the 'cue conscious', who heard and paid attention to tips given out by their lecturers about what was important, and the 'cue deaf', for whom any such guidance passed straight over their heads" (Gibbs & Simpson, 2005, p. 4). In elementary school, grades are typically provided only twice a year at report card time and summarize an entire unit of study, e.g. "Writes to communicate and express ideas and information". Throughout the term, assessment takes the form of verbal or written teacher feedback, rubric scores, and peer or self-assessments, none of which assign a 'grade'. You never hear the question "Will this be on the test?", even if tests are one form of assessment teachers use. How, then, do these students manoeuvre or prioritize their learning of the 'hidden curriculum' (Gibbs & Simpson, 2005)? Is cue-seeking something that develops with age? Does the assignment of number or letter grades in later years change the way students approach tasks? How does digital assessment change the way students identify the essential elements needed to achieve a higher grade rather than a more solid understanding of the concepts? How can we ensure our assessments lead our students to deeper understanding rather than grade chasing?

If the purpose of our assessment is formative, to advance student learning, then digital assessment can be very powerful. For example, a multiple-choice test designed to assess what was learned at the end of a unit is not likely to achieve our outlined purpose. Several multiple-choice exams spread throughout the unit provide a greater opportunity for learning; however, changing the style of the multiple-choice exam is likely to have the greatest impact of all. Adding media (i.e., pictures or video), questions with a 'hints' option, feedback for questions that were answered incorrectly, or recording a student's level of confidence in their answer are a few ways to improve this traditional form of testing. How teachers use the information obtained from these tests is also relevant. Rather than simply recording a grade, if teachers compile answers and determine where most students answered incorrectly, an opportunity for class discussion arises. Was the question worded in a way that was difficult to understand? Does the concept require review? Would having students work in peer groups to debate their answers lead to increased understanding? Letting students re-try exams can also be beneficial to learning. This has traditionally been viewed as cheating; however, if the purpose is for students to identify areas needing improvement, then students can learn those concepts and confirm their new understandings, which is the primary purpose of the assessment. Traditional assessment practices can be used in new ways to increase student achievement.
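As a rough sketch of the kind of enhanced multiple-choice item described above, here is what a question with a hint option, per-option feedback, and a retry might look like. The question, options, and comments are all invented for illustration:

```python
# A rough sketch of the enhanced multiple-choice item described above: a
# hint the student can request, feedback attached to each wrong option,
# and a second try after feedback. All content here is invented.

QUESTION = {
    "prompt": "What is 25% of 80?",
    "options": {"a": "20", "b": "25", "c": "32", "d": "2"},
    "answer": "a",
    "hint": "25% is the same as one quarter.",
    "feedback": {
        "b": "You repeated the percentage itself; try taking a quarter of 80.",
        "c": "That is 40% of 80; check your decimal placement.",
        "d": "You divided by 40 instead of 4.",
    },
}

def ask(question, tries=2):
    """Pose the question, offering a hint and a retry with targeted feedback."""
    for _ in range(tries):
        choice = input(f"{question['prompt']} (a/b/c/d, or 'hint'): ").strip().lower()
        if choice == "hint":
            print(question["hint"])
            choice = input("Your answer: ").strip().lower()
        if choice == question["answer"]:
            print("Correct!")
            return True
        # A wrong answer gets a targeted comment, then a retry if one remains.
        print(question["feedback"].get(choice, "Not quite; try the hint."))
    return False
```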

Gibbs, G., & Simpson, C. (2005). Conditions under which assessment supports students' learning. Learning and Teaching in Higher Education, 1(1), 3-31. Retrieved from http://www.open.ac.uk/fast/pdfs/Gibbs%20and%20Simpson%202004-05.pdf

Jenkins, M. (2004). Unfulfilled promise: Formative assessment using computer-aided assessment. Learning and Teaching in Higher Education, 1(1), 67-80. Retrieved from http://www2.glos.ac.uk/offload/tli/lets/lathe/issue1/articles/jenkins.pdf