We are pleased to announce the following keynote speakers:
Glenn Fulcher is Professor of Applied Linguistics and Language Assessment at the University of Leicester, United Kingdom. He has published widely, and his books include Testing Second Language Speaking (2003), Language Testing and Assessment (Routledge, 2007), and Practical Language Testing (Hodder, 2010). In 2016, The Routledge Handbook of Language Testing (Routledge, 2012) and Language Testing Re-examined (Routledge, 2015) were jointly awarded the SAGE/ILTA Book Prize. In 2014 he was elected to a National Teaching Fellowship of the Higher Education Academy (UK) for his contribution to language assessment literacy through the development of language teaching curricula and for his website, http://languagetesting.info, which is used worldwide in language testing programmes. He has served as President of the International Language Testing Association and was editor of Language Testing from 2006 to 2015.
Alternative Validity Worlds
The late 20th-century consensus on the meaning of validity, as embodied in Messick and the 1999 Standards, has evaporated. We are left with a multiplicity of validity models that are largely theoretically incongruent. As a result, each model has its ardent supporters, matched in equal measure by strident critics. But in truth the battleground has not shifted very far. Disputes centre on what is to be included in or excluded from a model, the definition of construct, or (still rather shockingly) the extent to which a test provider is responsible for score use. This parochial timidity stands in stark contrast to the expansive world view of visionaries like Lado, whose work depicts the kind of society that assessment can serve. Language testing is both a social science and a social phenomenon. As such, it can never be value free. In this talk I consider a number of validity models in current vogue with regard to their implicit value systems, notions of society, and the human condition. I argue for an approach to validity and validation that prioritises a progressive value system, which in turn can motivate “effect-driven” language testing practices.
Dr. Elana Shohamy is a Professor at the Tel Aviv University School of Education, where she teaches and researches co-existence and rights in multilingual societies within four interconnected areas: language testing, language policy, migration, and linguistic landscape. She authored The Power of Tests (2001) and Language Policy (2006), and co-edited two books on linguistic landscape. Elana (with Iair Or) edited Vol. 7, Language Testing and Assessment, of the Encyclopedia of Language and Education (Springer, 2017). She served as an editor of the journal Language Policy (2007–2015) and is a founding editor of the new journal Linguistic Landscape (Benjamins). In 2010 she received the Lifetime Achievement Award of ILTA (International Language Testing Association) for her work on critical language testing. Her current work continues to focus on various issues within the above topics, and she currently heads a project introducing a new multilingual educational policy in Israel.
Incorporating expanded dimensions of ‘language’ to increase construct validity
Critical language testing refers to the continuous need to raise questions about language tests in various domains, especially in terms of fairness and justice. Indeed, most of the research of the past two decades has raised questions about tests’ uses, misuses, injustices, ethicality, and effects on learning. It represented, at the time, a shift from judging the quality of tests by their psychometric features to viewing tests in terms of their uses in education and society. It was shown how central agencies – ministries of education, testing boards, principals, and teachers – misuse tests to perpetuate their agendas, given the enormous power of tests that derives from their unique capacity to determine the future of test takers and educational systems. Calls for democratic assessment, fairness, ethicality, and effective learning became part of the need to increase the construct validity of tests, given Messick’s argument that tests’ uses and impact are part of construct validity. The critical testing approach continues to collect evidence and raise questions of justice and fairness across various domains of testing; it is now turning to an examination of ‘what’ is being tested on language tests – the nature of language itself. Drastic changes have occurred in the past decade in the definitions of language, mostly as a result of sociolinguistic work on the diversity of learners, immigrants, indigenous peoples, and others. These new definitions perceive language as multilingual, translingual, fluid, semiotic, multimodal, and contextualized in space (linguistic landscape). Still, most language tests remain monolingual, static, formulaic, and closed. This paper will focus on a variety of new approaches to the assessment of language in its expanded forms and definitions, and will share research on the advantages of multilingual, multimodal tests. I will show how such tests reflect the nature of learners’ language in line with our current understanding.
Such tests reflect a revised construct validity of language tests for this day and age.