Congratulations to the
2021 Graduate Student Award Winners
The Graduate Student Award is a merit-based award that supports the attendance of AAAL graduate student members at the annual conference. You can find the abstracts of their presentations below. Recordings of the talks are available on the AAAL 2021 conference platform.
This year's winners are:
Nathan Thomas, UCL IoE, University College London, Wilga Rivers Award
Paul Meighan, McGill University, Multilingual Matters Award
Ryo Maie, Michigan State University, ETS Award
Jeremy A. Rud, University of California, Davis
María Díez-Ortega, University of Hawai‘i at Mānoa
GoMee Park, The University of Iowa
Nathan Thomas is a Ph.D. candidate in Applied Linguistics and a postgraduate teaching assistant in TESOL at the UCL Institute of Education. He also works on several funded research projects within the institute and as a Course Tutor in Applied Linguistics for Language Teaching at the University of Oxford. His research interests are wide-ranging, but current projects mainly relate to language learning strategies, self/other-regulation, and English medium instruction. He has published on these and other topics in leading academic journals such as Applied Linguistics, Applied Linguistics Review, ELT Journal, Journal of Second Language Writing, Language Teaching, System, TESOL Journal, and TESOL Quarterly. He is also a regular reviewer for these and other top journals in the field, as well as a frequent conference presenter, having given more than 50 presentations in 14 countries around the world. Before moving to the UK for his doctoral studies, Nathan completed an M.Sc. in Teaching English Language in University Settings, an M.Ed. in International Teaching, an M.A. in Applied Linguistics, a B.A. in English, and various teaching certifications, all while working full time for 10+ years in China and Thailand. He considers this practical experience crucial for guiding his scholarly work.
Revisiting the Revised Theoretical Framework: An Integrated View of Strategies,
Strategic Processing, and Strategic Behavior
Early discussions of language learning strategies materialized as practical treatises, giving birth to influential publications that were, nevertheless, criticized for their lack of theoretical rigor. Some researchers situated strategies in cognitive theoretical frameworks that were soon complemented by those taking sociocultural, and later, complexity theory perspectives. Yet, only Macaro’s (2006) revised theoretical framework, which assumes an unabashedly cognitive position, has attempted to account for learner-internal processing components and attend to longstanding theoretical-conceptual issues. It has been nearly 15 years since his proposal, and although the framework is widely cited, few researchers have utilized it in their studies. It may be that Macaro’s lack of engagement with the social turn in applied linguistics more broadly has dissuaded researchers from operationalizing his framework, despite its sound theoretical base. This presentation takes the view that both social and cognitive perspectives must be accounted for in any theory attempting to describe strategic behavior.
Taking a necessarily integrated perspective, this presentation will revisit existing theoretical frameworks such as Macaro’s and attempt to align them with recent developments in theory and research. It will take the first steps in outlining the skeleton of a theoretical model that attempts to mitigate existing theoretical-conceptual issues by distinguishing between strategies, strategic processing, and strategic behavior as part of a framework that exploits the potential of distributed cognition, co-/shared-regulation, and mediated learning/decision-making. The presentation will conclude by progressing stepwise through each stage of the proposed model, exploring links between learner-internal and external factors, and harnessing shared/distributed cognition within the scope of what are traditionally viewed as individual cognitive processes. In viewing strategies as the result of distributed networks of knowledge, and strategic behavior as the overt or covert operationalization of strategic processing, the framework illuminates previously unaccounted-for processes and presents a host of new perspectives for future research.
Paul J. Meighan (Miadhachàin)-Chiblow is a Gàidheal (Scottish Gael) from Glasgow, Scotland. He is a Ph.D. candidate in Educational Studies and a SSHRC Bombardier scholar at the Department of Integrated Studies in Education (DISE), McGill University. He has a B.A. (Hons) in European Studies and Spanish (2002) and a Postgraduate Certificate in Education (PGCE) in Modern Foreign Languages (2009) from King’s College London, England. He also has an M.A. in TESOL (2019) from Trinity Western University, Canada. Paul has an extensive background in translating and teaching languages (ESL/EFL/EAP, Italian, Spanish, and French) internationally since 2001. His research focuses on decolonizing language education and Indigenous language revitalization. His community-led, SSHRC-funded doctoral research will explore the connections between Indigenous language revitalization, Traditional Ecological Knowledge (TEK), and decolonizing technology. He is honoured to be named the recipient of the AAAL GSA Multilingual Matters Award this year. For more about Paul and his research, visit his website at www.paulmeighan.com or his Twitter @PaulJMeighan.
(Re)viewing Our Relationships With the World: Foundations for Decolonial and Equitable English Language Learning
The mainstream English as a Second/Foreign Language (ESL/EFL) classroom can silence a rich tapestry of voices and identities through the imposition, whether forced or covert, of a monolingual and monocultural learning environment. The dominant use of the English language carries a colonial legacy and a Eurocentric, human-centered worldview characterized by: (1) linguistic imperialism (Phillipson, 1992) and cognitive imperialism (e.g., Battiste, 2013); (2) the view that humans are superior to nature (e.g., “human exceptionalism” [Haraway, 2008]); and (3) white (epistemological) supremacy (Gerald, 2020; Minde, 2003). The negative impacts of this imperialistic worldview can already be seen and felt in the human-caused climate and humanitarian crises. This paper contends that a more equitable and sustainable way to use language would be to question the dominant human-centered worldview that informs ecologically and culturally destructive assumptions (Pennycook & Makoni, 2020; Stibbe, 2018; wa Thiong’o, 1983). This paper will conceptualize ways in which the “past”, “present”, and “future” aspects of a learner can be more fully validated and embraced in order to foster a more decolonial, respectful, and relational worldview in the ESL/EFL classroom. Heritage languages and cultures (the “past” of the learner) possess a wealth of knowledges and ways of knowing and being, which should be acknowledged and incorporated in our “present”-day classroom for a more equitable, culturally and environmentally responsive “future” (Meighan, 2019, 2020).
By way of example, the “World (re)viewer” log is introduced to exemplify how heritage language pedagogy could be implemented in our classrooms in a way that (1) addresses the cognitive and linguistic imperialism of the colonial monolingual English classroom, (2) validates heritage, non-dominant knowledge systems and languages (such as those which are Indigenous), (3) fosters discussions and orientations for a more environmentally- and culturally-responsive sustainable future, and (4) promotes positive identity formation for all multilingual and multicultural learners.
Ryo Maie is a third-year doctoral student in the Second Language Studies program at Michigan State University. He received his M.A. in Second Language Acquisition from the University of Maryland, College Park. His research interests include the cognitive psychology of second language acquisition, usage-based and cognitive linguistics approaches to language development, task-based language teaching, and applied statistics in L2 research. Ryo is currently working on his dissertation study on testing and validating skill acquisition stages in L2 learning under the supervision of Dr. Aline Godfroid.
Arbitrary Choices, Arbitrary Results: A Multiverse Analysis of L2 Reaction Time Data
Quantitative analysis begins with researchers processing raw data into a form (i.e., a dataset) ready for statistical analysis. This data processing, however, typically involves choosing among equally reasonable options for how to remove, transform, and code data. For instance, should one remove data points lying more than 2SD, 2.5SD, or 3SD away from the mean? This openness creates room for researcher degrees of freedom (i.e., inherent flexibility in designing and conducting a study and in preparing and analyzing data) and, when exploited, leads to questionable research practices, including p-hacking. Inspired by Steegen, Tuerlinckx, Gelman, and Vanpaemel (2016), this presentation recommends a multiverse analysis to circumvent this problem and provides a demonstration using existing L2 reaction time data. In a multiverse analysis, one performs the same analysis across a whole set of datasets, that is, a multiverse of datasets. This idea is important because the arbitrariness in reaching a particular dataset is inevitably inherited by its statistical result; hence, the data multiverse implies a multiverse of statistical results. Here, I reviewed previous studies that used a word-monitoring task as an implicit knowledge measure and extracted all previous options for how to process raw data. This procedure created 108 datasets. I fitted generalized linear mixed models to each dataset to see how results change across them. Results showed that 64 datasets (59.26%) produced regression coefficients (of interest) that were statistically significant, with a mean p-value of .048 (SD = .027). An inspection of the p-values revealed that they could not only be well below .05 but also be as high as .129, with a 95% confidence interval of [.044, .052]. The presentation shows that different choices made by analysts can sometimes lead to conflicting conclusions and recommends that one conduct a multiverse analysis and take an average over the multiverse of statistical results.
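The workflow the abstract describes, building one dataset per combination of processing choices and fitting the same model to each, can be sketched in a few lines. This is an illustrative sketch with simulated data, not the author's code: a simple linear regression stands in for the generalized linear mixed models used in the study, and the specific cutoffs and transforms are assumptions chosen for the example.

```python
# Illustrative multiverse-analysis sketch with simulated data (not the
# author's code). The study fit generalized linear mixed models; here a
# simple linear regression stands in so the example stays self-contained.
import itertools

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulated reaction-time data: a binary condition and RTs in ms
condition = rng.integers(0, 2, size=400)
rt = 600 + 30 * condition + rng.normal(0, 120, size=400)

# Processing choices that span the multiverse: 3 outlier cutoffs x 3 transforms
outlier_cutoffs = [2.0, 2.5, 3.0]  # SD-based trimming thresholds
transforms = {
    "raw": lambda x: x,
    "log": np.log,
    "inverse": lambda x: -1000.0 / x,
}

# Fit the same model once per dataset in the multiverse
results = []
for cutoff, (name, transform) in itertools.product(outlier_cutoffs, transforms.items()):
    keep = np.abs(rt - rt.mean()) <= cutoff * rt.std()
    fit = stats.linregress(condition[keep], transform(rt[keep]))
    results.append((cutoff, name, fit.pvalue))

sig = sum(p < 0.05 for _, _, p in results)
print(f"{sig}/{len(results)} specifications significant at p < .05")
for cutoff, name, p in results:
    print(f"  cutoff = {cutoff} SD, transform = {name}: p = {p:.3f}")
```

In the study itself, 108 datasets arose from the processing options extracted from prior word-monitoring studies; the same loop structure scales to any number of specifications, after which one can inspect or average over the resulting distribution of p-values.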
Jeremy A. Rud is a Ph.D. student in Linguistics at UC Davis. His research broadly focuses on language in contexts of migration and asylum and addresses issues of asylum seeker credibility at intersections of public policy, narrative performance and entextualization, and speech perception. In his work he uncovers and challenges taken-for-granted notions of language held by institutions that act as gatekeepers to asylum seekers. Some of his studies have included a critique of a nonprofit’s narrative portrayals of former refugees, published in Narrative Inquiry, a caution against states’ uses of algorithms to evaluate asylum applications, a microanalysis of listeners’ politicized judgements of credible fear in asylum seekers’ narrative performances, and a review of language integration policies for immigrants and refugees, forthcoming in the Handbook of Educational Linguistics (2nd ed.). His university profile can be found at https://linguistics.ucdavis.edu/people/jarud
Can AI Determine Credible Fear? Challenging the State's Use of Text Analytics in Asylum Adjudications
In line with the long history of scholarship on language in contexts of migration and asylum (Canagarajah, 2017), in this study I bridge European and North American discussions of language analysis in asylum policy (Patrick, Schmid, & Zwaan, 2019) and artificial intelligence (AI) in migration management (Beduschi, 2020) by examining the use of “text analytics to look for boilerplate language” to detect “fraud” in asylum applications. This practice, a planned function of the United States Asylum Vetting Center currently in development by US Citizenship and Immigration Services, is just one example of the increasing use of AI at borders around the world. In order to preemptively unpack this black-box process, I apply both narrative critical discourse analysis and lexicon-based sentiment analysis, a method of text analytics that numerous scholars across Europe applied to media data in response to the 2015 “refugee crisis” (Nerghes & Lee, 2019, inter alia), to a corpus of 20 former refugee narratives in order to compare human and algorithmic readings of these high-stakes linguistic performances. Specifically, I examine how credible fear in an entextualized asylum seeker narrative could be (mis)determined by AI and conclude that the nature of the training data, the composition of the sentiment dictionary used, the accuracy of sentiment scores, and the ideologies of the practitioners all raise serious concerns about the use of sentiment analysis for automatic decision-making in asylum proceedings. Overall, I argue for greater international coordination of applied and theoretical linguistic scholarship that takes an active, rather than reactionary, stance against the unchecked entrenchment of AI in asylum policy. I also advance a line of inquiry on the politics of listenership, extending the borders of linguistic anthropological analyses of asylum to concerns of aurality, listening, and artificial intelligence.
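Lexicon-based sentiment analysis, the text-analytics method discussed in the abstract above, can be illustrated in a few lines. This is a minimal sketch: the lexicon entries and the example sentence below are invented for illustration, and real analyses use full resources such as the AFINN or VADER lexicons rather than a hand-written dictionary.

```python
# Minimal sketch of lexicon-based sentiment scoring (illustrative only).
# The tiny lexicon below is invented; real analyses use full lexicons
# such as AFINN or VADER, which score thousands of words.
TINY_LEXICON = {
    "fear": -3, "threat": -3, "violence": -4, "fled": -2,
    "killed": -4, "safe": 2, "hope": 2, "family": 1,
}

def sentiment_score(text: str) -> float:
    """Mean lexicon score over matched tokens; 0.0 if nothing matches."""
    tokens = [t.strip(".,;:!?").lower() for t in text.split()]
    hits = [TINY_LEXICON[t] for t in tokens if t in TINY_LEXICON]
    return sum(hits) / len(hits) if hits else 0.0

narrative = "We fled after the threat of violence; we hoped to be safe."
print(sentiment_score(narrative))  # -1.75
# Note: "hoped" fails to match "hope", showing how lexicon coverage and
# dictionary composition shape the score a narrative receives.
```

Even this toy example surfaces the paper's concern: the score assigned to a high-stakes narrative depends entirely on which words the dictionary happens to contain and how they were weighted.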
María Díez-Ortega is a Ph.D. candidate in Applied Linguistics at the University of Hawai‘i at Mānoa. Her research interests include task-based language teaching, computer-assisted language learning, learner corpus research, and L2 teacher education. María holds an M.A. in Spanish from the University of Hawai‘i at Mānoa, an Advanced Graduate Degree in Spanish Linguistics, also from UH, and degrees in TESOL and Early Childhood Education from Universidad Complutense in Madrid. She has over 10 years of teaching experience; she has taught courses in L2 pedagogy, second language acquisition, English for academic purposes, and Spanish and English as foreign languages. María is currently a graduate assistant at the Language Flagship Technology Innovation Center.
Peer-Interaction of Beginner L2 Learners During Collaborative Gaming
Peer interaction has been shown to promote L2 development from both interactionist (Long, 1996) and sociocultural perspectives (Lantolf, 2000). Interactionist approaches suggest that this type of dialogue provides opportunities for negotiation for meaning, noticing, corrective feedback, and other processes facilitative of L2 learning (Gass & Mackey, 2015). There is a growing body of research on how technology impacts peer interaction, but few studies have investigated beginner L2 learners. Little is known about whether face-to-face findings transfer to technology-mediated tasks, or how task features affect interaction (Plonsky & Ziegler, 2016). Digital games fit the definition of a technology-mediated task (González-Lloret & Ortega, 2014), but research on gaming from an interactionist TBLT framework is scarce, with few longitudinal studies. The study addresses these gaps by investigating the collaborative gaming of beginner L2 Spanish learners in a university program (9 classes, n=156). There were two experimental conditions: (1) learners played the game individually (n=53); (2) learners played in dyads sharing one computer (n=49); a control group (n=54) did not play. The experimental groups engaged in five gaming sessions of an educational task-based digital game, Practice Spanish: Study Abroad. All learners completed a pre/post vocabulary and grammar test and a pre/post survey on willingness to communicate. Dyads' interaction while gaming was audio- and screen-recorded, then analyzed quantitatively by measuring the type and resolution of language-related episodes (LREs; Swain & Lapkin, 2001) and their in-game triggers. Lastly, LREs were analyzed longitudinally by tracking subsequent use and learning of LRE-implicated forms. Qualitative data on students' perceptions in the experimental conditions were also collected. Preliminary results indicate that the dyads outperformed the other groups and displayed a more positive attitude towards the game.
LRE analyses of two dyads showed a large number of lexical triggers, inter-dyad variation, and the degree to which task features (e.g., game question, quest difficulty, corrective feedback) impacted interaction. Pedagogical and game design implications are briefly discussed.
I am a doctoral candidate in the Foreign Language and English as a Second Language (ESL) Education Program at the University of Iowa. Through firsthand experience as an ESL student in California, I became drawn to English language education, which became my lifelong career path. Returning to South Korea, I received a B.A. in English Language and Literature and an M.A. in Applied Linguistics while teaching English to various student groups. Drawing on my own experience as an ESL student and ten years of experience as an English teacher, I have helped develop and run professional development workshops for educators in local school districts to support English learners during my five years of graduate study. I will obtain my Ph.D., along with another B.A. in Teaching Korean as a Foreign Language, in July 2021. My research interests include ESL/EFL teacher education, educators' agentive role in school settings, language policy, assessment literacy, and language assessment.
Testing and Language Policy: The Social Impact of State-mandated Assessments on a Dual Language Program
This study investigates how educators at a dual language elementary school perceive, interpret, and use state-mandated tests to make instructional decisions, particularly in terms of language policy. Educators are powerful language policy agents whose interpretation of educational language policy makes a difference in the actual implementation of that policy at the school and district levels (Hornberger & Johnson, 2007; Palmer & Lynch, 2008; Paciotto & Delany-Barmann, 2011). Despite their importance, there have been few studies on language policy practices, especially in rural school districts that have experienced an influx of Latino students. By conducting an ethnographic study of language policy, I paid attention to the impact of state-mandated assessments on emergent bilingual (EB) students and their teachers. To investigate how educators' interpretation of assessments informs their instructional decisions, I conducted semi-structured interviews with educators, and audio and video recordings of the classroom instruction of three bilingual classroom teachers were collected along with fieldnotes and meeting notes over three months. Qualitative data analysis (Saldaña, 2014) and critical discourse analysis (CDA) (Fairclough, 2015) were utilized for analysis. Preliminary findings reveal that the state-mandated standardized test puts much pressure on all educators, as it was used as a report card for them, while the teachers (the end-users of the tests) had limited (and sometimes incorrect) information about the test and test results. Also, although it was a dual language program, it was often the Spanish strand teachers who had the responsibility to "bridge" the two languages for students, even though the time allotted for Spanish literacy was much shorter than that for English.
While the administrators at school and district levels open up an “ideological and implementational space” (Hornberger, 2002; Hornberger & Johnson, 2007) for EBs by operating a dual language program, space seems to be narrowed due to the impact of state-mandated testing.