Final Assignment

Gary Reimer

University of British Columbia

Student ID: 91757997

ETEC 540

Professor Ernesto Pena Alonso

April 14, 2021

The mass movement of globalized capital and labour is a salient feature of contemporary life. In search of change or opportunity, people have left their homes to study or live abroad, and this decision has often made it necessary for them to acquire new language skills, usually, although not always, in English. Governments, educational institutions, private sector firms and other stakeholders in recipient countries have come to rely on purpose-built proficiency exams to help them determine the language eligibility of individuals as potential students, workers and citizens. The numbers tell a story of dramatic growth. In 1981, 41,000 people took the International English Language Testing System (IELTS); in 2019, the year before the pandemic, that number had grown to 3.5 million, having risen every year since the test’s inception (Wikipedia, 2021). Despite the evident popularity and presumed value of such tests, their efficacy in measuring the true functional literacy of candidates has been called into question (Jenkins & Leung, 2019). Apart from legitimate concerns about whether it is appropriate to reduce language ability to a graded evaluation, there is the further worry that such tests no longer fully capture what modern literacy entails.

Literacy, in a strict sense, has been conventionally defined as a subset of general language skill, that is, of the ability to communicate, especially to read and to write, at a level sufficient for an adult to cope with daily life. However, beginning with the pioneering work of the New London Group and its heirs, a transformative change in perspective has taken place. Literacy, or rather multiliteracy, is now seen as a social practice that operates within a complex ecology of networked digital technologies and that involves non-language skills and competencies (Bolter, 2017). This paradigm change does not exclude the older print culture or the idea of literacy in the older sense, but it does subsume and integrate them into a wider, more diverse information landscape. Controversially, it has also raised the possibility that assessment exams derived primarily from the older technologies might need to be entirely reimagined. In this essay I focus on the IELTS, although my observations are applicable to other similar tests.

Assuming that proficiency exams should be modified to allow them to measure a broader definition of literacy effectively, what skills should be assessed? What would we want such a test to measure? Our language technologies, from oratory and scroll reading to the current multimodal, media-rich environments, have experienced progressive remediation as modes of representation refashion, extend, and delimit their forerunners. If high-stakes proficiency exams are committed to the principle of authentic assessment and to a more accurate and fair measuring of the true communicative skills of candidates, it is probable they will need to undergo a parallel process of remediation. In other words, testing systems must concern themselves not only with assessing appropriate language content but also with enabling and empowering relevant modes of representation. This might entail i) greater emphasis on visual elements, including new media, ii) enhanced interactivity among textual and visual elements, iii) the privileging of the screen over the page as the dominant display format, and iv) the use of a range of computer-based tools. It might also include access to external sources of distributed knowledge, such as the internet.

Having considered the affordances a redesigned test could offer, it might be prudent to consider some of the factors that account for, and possibly justify, the current conservative bias. A central theme explored in this course has been how communication technologies are best understood when situated in their historical context. According to Kress (2005), educational bodies have an innate conservative bias because they focus on building expertise in and fluency with the governing modes of representation. Transitions from formerly dominant modes of representation to novel ones have historically been an uneven and unpredictable process. A concrete and striking illustration of this is the delayed introduction of keyboarding in the written section of the IELTS, which did not arrive until 2017, approximately two generations after writing by longhand had ceased to be the dominant mode of written communication for most test-takers. For some younger “digital natives,” handwriting was practically an unknown art form, and requiring them to test in such an uncongenial mode risked compromising their results. While acknowledging the tendency to be drawn to the past, I suggest that it should not be viewed as problematic. Unlike algorithms, which learn painlessly, human knowledge is never gained without effort. A student who earns a bachelor’s degree is assumed to have acquired a solid foundation in a certain field, a master’s student should have mastery of the same, and a PhD will have made an original contribution to the corpus. At each stage, a grasp of what has come before is taken as a given for moving forward, and it is assumed that progress will be a lengthy, possibly arduous task. Assessing the extent to which this body of knowledge has been assimilated will always be an exercise in looking backward, in evaluating a student’s past experience. The same bias is present in exam testing and for the same reason: literacy is a skill, and skills take time to learn.

The logistics of test design and creation also play a role. It takes several years to gather materials, generate ideas, prepare a draft test, scrutinize it for defects, and then finally distribute it. The content and language forms will reflect the reality of the world as it was during this process. While a natural predisposition toward the past is one possible explanation for the tendency of high-stakes tests to resist change, research suggests the chief cause of their strongly conservative bias lies elsewhere, in the urgent practical need for standardization (Pilcher & Richards, 2017). Standardization is crucial for at least three reasons, and each argues implicitly against fundamental change.

First, since they are widely used for competitive appraisals, tests need to produce scores that are fair and fully comparable. The score of an 80-year-old Welshman taken fifteen years ago has to be comparable to that of a 20-year-old Nigerian today. No matter the background, scores need to be as comparable as possible with those of other candidates who may have had entirely different life experiences. Changing the test in any fundamental way risks making test results incommensurate, which would invalidate them as a vetting tool.

Second, by strictly defining acceptable test conditions, standardization promotes and guarantees fairness. Stakeholders need assurance that results will be objective, and one method of achieving this objectivity is to exclude extraneous skills or knowledge that might improve a candidate’s score but would not necessarily reflect greater language ability. It is helpful to think of high-stakes testing as a qualitative experiment in which the reliability and validity of the findings are enhanced by the range and effectiveness of the control features. What this implies is that test-makers have to guarantee the soundness of the measuring instrument, the test itself, while invigilators have to ensure they have controlled for external variables. In order to maintain uniform test conditions, assessors have to be able to limit candidates’ access to outside language resources; otherwise, it would not be possible to determine what language ability a candidate possesses and what has been “borrowed”. It is true that permitting access to distributed knowledge, for example by allowing candidates to search the web when writing essays or answering questions, would approximate real-world conditions; however, this step would also introduce an unacceptable degree of subjectivity.

Third, standardization makes it possible to grade and distribute scores efficiently. Tests need to be assessed and scores provided rapidly. Candidates taking high-stakes tests are often navigating major life transitions under stressful conditions, and the logistics of such transitions often involve time-sensitive deadlines, such as those for university and college admission and registration or for visa, passport and job applications. Many, if not most, of these carry a language proficiency requirement, which means test-givers have to provide results in a timely manner. There can be no question that allowing access to online language resources would permit test scenarios that more closely approximate real-world conditions. However, this would greatly lengthen and complicate the assessment process and delay the provision of results.

These pragmatic considerations suggest that the current cautious approach to modifying high-stakes proficiency exams may be warranted. However, in addition to these pragmatic concerns, there is also a theoretical reason why language proficiency exams should not attempt to assess literacy in a broader sense. The point has been cogently made that multiliteracy requires the skilful use of diverse semiotic resources, including the ability to negotiate information-rich, technology-mediated environments (Bolter, 2001). When considering the idea of multiliteracy, and bearing in mind the extent to which our lives have become technology-embedded, it is worth making a distinction between life skills and language skills. The major conclusions drawn by the New London Group are now rightly seen as prescient and as part of our conventional pedagogic wisdom. At the same time, many of the competencies called for by Bolter (2001) and others are clearly not specifically linguistic, and it may not be appropriate for them to be part of a test whose sole purpose is to evaluate language ability. In order to do their job properly, proficiency tests need to stay within their remit and not try to assess factors that are beyond their competence. Keeping these constraints in mind, however, it is still possible to imagine modifications to the existing design.

The goal of test-givers is to provide all stakeholders with an accurate and fair assessment of a candidate’s language ability. Although it is important to acknowledge that conceptions of literacy have changed, it is critical that proficiency exams remain focussed on their core responsibility of measuring individual language ability. Navigating digital networked technologies while creatively harnessing diverse cultures, experiences, and ways of thinking and acting is an important life skill (Cazden et al., 1996). However, it is not specifically a language skill. As a result, while enriching the format of the exam to include a wider range of audio-visual and other elements can be recommended, permitting candidates full access to online language resources must be seen as a step too far.

References

Bolter, J. D. (2001). Writing space: Computers, hypertext, and the remediation of print (2nd ed.). Mahwah, NJ: Lawrence Erlbaum Associates. https://doi.org/10.4324/9781410600110

Cazden, C., Cope, B., Fairclough, N., Gee, J., et al. (1996). A pedagogy of multiliteracies: Designing social futures. Harvard Educational Review, 66(1).

Jenkins, J., & Leung, C. (2019). From mythical ‘standard’ to standard reality: The need for alternatives to standardized English language tests. Language Teaching, 52(1). https://doi.org/10.1017/S0261444818000307

Kress, G. (2005). Gains and losses: New forms of texts, knowledge, and learning. Computers and Composition, 22(1), 5-22. https://doi.org/10.1016/j.compcom.2004.12.004

Pilcher, N., & Richards, K. (2017). Challenging the power invested in the International English Language Testing System (IELTS): Why determining ‘English’ preparedness needs to be undertaken within the subject context. Power and Education, 9(1), 3-17.

Wikipedia. (2021). International English Language Testing System. Retrieved from https://en.wikipedia.org/wiki/International_English_Language_Testing_System