While attending a talk on campus (at Stony Brook University) this afternoon, given by Elana Shohamy, an Israeli scholar, I had a moment of despair.
The title of her talk today was “multilingual testing” and the backdrop of her presentation was the monolingual regime of language testing and its effects on multilingual language users across the world.
As teachers of language and writing/communication, we keep saying in theory that language learners take 3-5 or even 9-11 years to become fluent and accurate in a new language, depending on where and how they learn. But in practice, we continue to resort, very quickly and thoughtlessly, to the logic of pragmatism, of institutional policy, of the need to make sure that our multilingual students can perform in English.
Here’s the problem with our dead habits and thoughtless embrace of the current monolingual testing regime: monolingual tests usually don’t predict the overall academic performance of multilingual students, because the tests treat language proficiency as a predictor of far more than language proficiency, when academic transition and success involve much more. It would be nice if, for instance, a TOEFL score of x predicted whether a graduate student could listen, read, speak, and write well enough to succeed in, say, a graduate program in ecology. But it doesn’t.
Two students with the same TOEFL score (even with similar academic records from the same education system) regularly perform significantly differently, and often students with much lower TOEFL scores perform better than those with much higher scores. Why? Because academic performance is a process involving many factors: grit, support, psychology, personality, content knowledge, and much more.
While TOEFL is not as bad as the GRE, I must add, the company selling this product backpedaled a few years ago, ignoring the fact that adding the spoken test (administered with an underdeveloped technology of speech and accent recognition) makes the test even less valid. TOEFL’s test of writing skills is as much of a joke as the SAT writing test: it rewards certain semantic, syntactic, and rhetorical tricks that do not constitute good writing in college/university. And while its listening and reading passages are based on quite authentic classroom situations, the validity of the overall score is significantly hampered by the outdated view of language proficiency on which the test rests. Those who make these tests are still unable to understand how multilingual English speakers around the world perform sociolinguistically in academic and other contexts.
I heard today that ETS is starting to respond to the basic idea that multilingual English speakers should be assessed in terms of how they draw on more than one language to achieve communicative goals. But the company’s primary goal of efficiency for itself trumps the principles of validity and effectiveness. And this is to say nothing of the blatant lie about the universality of the content on which the tests are based, the egregious sums the test fee amounts to when converted from US dollars into some local currencies, the intellectual insult to students with learning disabilities, and many more issues.
Shohamy showed that immigrant students scoring in the 60th percentile when tested in the L2 only were able to score in the upper 80s when the questions were provided in both L1 and L2. This means that when educators allow learners to start succeeding academically while still having some “issues” with their language, they learn English much more effectively in the process. This is a no-brainer: when I start a semester, I tell my nonnative-English-speaking students in class that they “shouldn’t worry about [their] language” and should instead “focus on doing the research, coming prepared to participate actively in class, drawing on [their] prior knowledge, being excited about learning and sharing ideas.” To teachers, it really is a no-brainer: it is possible not to put the cart of language learning before the horse of education. It is possible not to follow the backward logic of ETS, which treats language as something you learn “before” you join the learning party!
Gatekeeping is necessary but it doesn’t need to be so outdated and invalid. Assessment is necessary but it doesn’t need to be decoupled from learning and teaching.
Now, some readers may object: “I have to make sure that the students I admit have a certain level of language proficiency.” Well, there are two significant problems with that “pragmatic” stance. First, someone who scored well on the TOEFL may be linguistically privileged and proficient, but there is no guarantee that they are academically capable or committed. TOEFL doesn’t measure subject knowledge, and, again, it doesn’t measure grit. Second, one could say that admissions officers look at academic transcripts to gauge an applicant’s academic caliber. Guess what? Academic transcripts from different countries (and even from different academic systems within the same country) cannot be compared, which is one of the reasons people turn to TOEFL in the first place. Back to square fifteen.
We’re confusing pedagogy with policy, process with desirable proficiency, outcome with entry-level proficiency, and our own bias with the need to rely on a system that we admit is flawed but seemingly without good alternatives. What if we start by thinking outside these easy frameworks? What if we start by embracing what Elana Shohamy called “critical language testing”: testing based on skepticism toward established regimes that are not in the business of fairness and sophisticated thinking? What if we adopt parallel regimes, including ad hoc approaches, that we can use to challenge ourselves?
What if we asked each of the ten students we want to admit to our doctoral program to call us on Skype, Hangout, Viber, or Facebook phone for a twenty-minute conversation, and used our own conscientious judgment instead of a TOEFL score?
re: What if we can… have a twenty-minute conversation each… instead of a TOEFL score?
I agree with you, Shyam, and have addressed the issue of TOEFL in my book.
Having a sense of incoming students’ oral language proficiency would help, but is it enough? What about written language proficiency? As a population, international graduate students are not monolithic.
Donna Bain Butler
FOREWORD by Suresh Canagarajah
Thank you, Donna! I am sorry that I didn’t notice this comment until now, but it also reminds me to order the book (too busy a summer). I don’t think it is enough just to know how well a student can speak or write, especially when speaking and writing themselves serve so many different functions and students quite often struggle/succeed in spite of these skills, even when these needs are addressed (or not) as they are today. TOEFL’s validity is a class-A crapshoot. The diversity of international students is so complicated that I find it shocking that even the savviest writing/language scholars do not address that elephant in the room.
Great post! Too bad more data doesn’t exist showing the success rates of students who have gone through pre-credit English programs vs. those entering degree programs based solely on a TOEFL score. Unfortunately, speaking on their terms (neoliberal, capitalist) and putting this into monetary perspective is a necessary evil. Showing the difference in tuition payments between a successful graduate and an unsuccessful student who dropped out after one or two semesters is what gets their attention, if they haven’t already pressured professors to inflate grades, that is. This post also touches on the foundation of educational assessment: the test or the essay, both of which do less to prove that any learning has occurred than to show the regurgitation of facts or the parroting of someone else’s words. Projects that show what students can DO with what they have learned are few and far between, difficult to organize, administer, and validly assess, and don’t fit the simple-minded bean counters’ view of the world, in that these projects are holistic, subjective, abstract, and qualitative. While this question can be laid at the feet of the corporatized test makers, it is also a reflection of our own practices as we revert to, and give weight to, the discrete or paraphrased in an attempt to make up for our lack of any sense of what authentic assessment looks like in practice.
Wow, Matthew. This is such a thoughtful and inspiring comment. I am sorry that I didn’t notice it until I was deleting spam today (I need to set alerts better). I wholeheartedly agree with you about the “bean counters’ view of the world” that undergirds standardized testing, especially when it comes to the very diverse demographics of international students and the very complex abilities and challenges that they bring into higher ed, even if we focus simply on language proficiency. Thanks again.