The effectiveness of the general aptitude test in Saudi Arabia in predicting performance of English as a foreign language

Abdulhameed Aldurayheem (Imam Muhammad Bin Saud Islamic University College of Language and Translation, Riyadh, Saudi Arabia)

PSU Research Review

ISSN: 2399-1747

Article publication date: 7 March 2022

Abstract

Purpose

This study examines the predictive validity of the General Aptitude Test (GAT) for English language performance and compares the test's constructs to identify the most effective predictors of English language performance.

Design/methodology/approach

Test scores of students enrolled in the foundation year (N = 84) and level 2 (N = 127) in the Faculty of English at a Saudi university were collected and analysed using correlation and regression tests.

Findings

The findings revealed that the General Aptitude Test (GAT) is effective in predicting English performance for students in level 2 and that the error detection task is the most effective predictor of performance in English reading.

Practical implications

The study provides support for the validity of the GAT as a university admission requirement for English language courses in the Arabic-speaking world.

Originality/value

This study examines the GAT's predictive power using a fine-grained approach, deriving scores from its component constructs to predict performance in English language skills at the university level.

Citation

Aldurayheem, A. (2022), "The effectiveness of the general aptitude test in Saudi Arabia in predicting performance of English as a foreign language", PSU Research Review, Vol. ahead-of-print No. ahead-of-print. https://doi.org/10.1108/PRR-01-2021-0002

Publisher

Emerald Publishing Limited

Copyright © 2022, Abdulhameed Aldurayheem

License

Published in PSU Research Review. Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode.


Introduction

Higher education in Saudi Arabia and the general aptitude test

In response to the high number of applications for university admission in Saudi Arabia, and to uncertainty about the reliability of high school exams and the inflation of their grades (Al Saud, 2009; NCA, 2018), the government commissioned the National Centre for Assessment (NCA) to "provide comprehensive and integrated solutions that scientifically measure and evaluate knowledge, skills, and aptitude with the purpose of achieving fairness, maintaining quality and satisfying development needs" (NCA, 2018, line 1). The NCA therefore introduced a standardised high-stakes test in 2003, the General Aptitude Test (GAT), and continues to expand its versions and applications; several other countries, such as Oman and Bahrain, now use it for university admissions. The GAT is administered in Arabic and is intended to help higher education institutions select the applicants most likely to succeed in their chosen areas of study. The test has nevertheless continued to cause frustration among parents and students and is reported by the media as a burden for both whenever it takes place (Albilad, 2019; Akhbar, 2013; Alriyad, 2011; Hashem, 2009). Giving the test such weight in admission decisions is seen as unfair in light of the 12 years of study that precede university. The test also has enormous ramifications for the students who take it each year: 314,113 students sat its sessions in 2017 (NCA, 2018), indicating a high impact on a large population. The debate and research regarding the use of the GAT as a university admission requirement, and its power and validity to predict students' success, are therefore ongoing. This study evaluated the GAT and its constructs as predictors of university success in studying English as a foreign language for students majoring in English, with the aim of better understanding the appropriateness of the GAT as an admission test.

Literature review

Testing for university admission

Testing has long been a major method for assessing students' abilities, skills or knowledge in academia (Al Saud, 2009). The importance of a test varies with its purpose: it may determine suitability to enter a college, govern progression from one level to another or serve as an instrument in experimental research. High-stakes tests must therefore be examined to ensure they serve the purposes for which they were made. The highly competitive nature of university admission has required institutions to set requirements that help predict applicants' performance in advance and to select those believed most likely to succeed at university. High-stakes standardised tests and university admission tests, such as the Scholastic Aptitude Test (SAT) in the USA and the GAT in Saudi Arabia, have been found to be good predictors of college performance. This topic has been studied extensively from the perspective of academic success, as proposed and supported by Misanchuk (1977), Crouse (1985) and College Board (2020). Setiawati (2020) examined the predictive validity of the Differential Aptitude Test for psychology programmes and found that the verbal and numerical subtests are the most influential sections of the test for predicting students' future Grade Point Average (GPA), the final total grade in each term or the last year. University admission tests clearly have significant implications for policymakers in supporting their decisions, and for other stakeholders, such as parents and applicants, in raising awareness of the effectiveness of such tests. This study integrated the investigation of cross-linguistic influence with the examination of the GAT in order to provide both theoretical and practical perspectives.

The general aptitude test

A considerable amount of research and technical reporting has been conducted by the developer of the GAT, the NCA, and by individual researchers, as discussed below, to investigate the test's constructs, characteristics, reliability and validity and the characteristics of the examinees. First, it is worth noting that the test has been reported to be reliable: its alpha coefficient has never fallen below 0.91 for any version, and its test-retest coefficient was 0.88. Additionally, in a report by the NCA (2018), correlations were established between the GAT and other measures (such as high school GPA and university first-year GPA), and comparisons were made with similar standardised tests (such as the Standardised Achievement Admission Test, another NCA test). The report showed a moderate to moderately high relationship among the variables. Al Saud (2009) reported that the relationship between the GAT and first-year GPA is moderate (0.45), or higher in some scientific majors.

Several studies have investigated the test's predictive validity. For example, Alshumrani (2007) examined the validity of the GAT score and high school percentage in predicting first-year GPA at different colleges for Saudi undergraduate students. He found a significant correlation and predictive relationship between the predictors, when weighted together, and the criterion variable, GPA in different majors. Alghamdi and Al-Hattami (2014) examined the predictive validity of the GAT, the Scholastic Achievement Test and high school GPA as admission criteria in three university colleges in Saudi Arabia. Their findings indicated that high school GPA was a strong predictor of college performance, whereas the GAT score was not a strong predictor for students in humanities faculties; however, when the weighted scores of the predictors were calculated, the results revealed a significant prediction. The predictability of the GAT was strong for students in non-humanities colleges whether it was a sole or a weighted predictor. In support of these findings, Alnahdi (2015) confirmed that a weighted combination of the GAT score and high school GPA predicts performance at university; furthermore, the GAT score in that study exhibited stronger prediction when the criterion was graduation GPA. Similarly, Alanazi (2014) found that the GAT score can be a strong predictor, for each of its two sections (numerical and verbal) or as a total score, regardless of a student's major in high school, and that the GAT score explains 13.2% of the variance in first-semester GPA. Althewini and Alkushi (2020) examined the relationship between the GAT section scores and English performance for students majoring in Health Sciences at a Saudi university; they found that the GAT, along with two other academic criteria, cumulatively predicted 17.3% of English performance for this population. It is clear from the studies above that a relationship between GAT scores and university GPA has been established and that GAT scores account for part of the variation in GPA. The present study sought to investigate the GAT from a cross-linguistic influence perspective in order to explore further issues related to the test's power to predict performance. Most previous studies used coarse-grained measures, such as the total test score as a predictor and college GPA as the criterion variable. This study instead used fine-grained measures, deriving the breakdown scores of the test's verbal section and establishing their relationship with later foreign language (English) performance in final exams and coursework scores. A better understanding of a test's prediction stems from the fact that the core issue is how accurately a factor predicts compared to others (Wolfe and Johnson, 1995), rather than whether it predicts at all.

Cross-linguistic influence

Cross-linguistic interaction is seen as central to L2 processing (Mei et al., 2014). Learning a language is highly complex, which makes accurate prediction of learner performance difficult, perhaps because of the many factors that shape a language learner's unique system. One of these factors is the influence of the first language, which characterises the learner's initial state when beginning a new language learning experience.

Singleton (2003) suggests four alternative groups of factors: cross-linguistic factors, general cognitive factors, motivational factors and educational factors. It has been posited that the native language does play a role, but the picture is complicated in that many other factors also share this influence on second language acquisition (Sparks et al., 1998, 2012; Dabrowska and Street, 2006). Although languages appear different in their characteristics, they share some common features. These are not necessarily linguistic forms; they go beyond that to features rooted in their genetic structures, as explained by Universal Grammar.

A number of studies (Sparks et al., 1998, 2012; Dabrowska and Street, 2006) have found that students with more highly developed first language skills exhibit stronger second language skills than those with less developed first language skills. Hence, knowledge of the L1 may accelerate the learner's progress through the learning stages of the L2. These observations suggest cross-linguistic effects. However, this phenomenon of influence from one language to another still lacks sufficient evidence of what constitutes it and which skills are transferred from the L1, because views on how a second language is learned are continuously changing (Koda and Zehler, 2008) and because transfer is a complex interrelationship between the L1 and L2. More studies are therefore needed that investigate the different perspectives of cross-linguistic influence and break down its components.

The interdependence hypothesis developed by Cummins (1979, 2000) holds that first and second language acquisition are interlinked through certain shared aspects. Cummins claims that proficiency in the L1 affects proficiency in the L2 despite the apparent differences between the two languages, and he uses the term "common underlying proficiency" to refer to a base set of skills that develop from the first language. These cognitively demanding proficiencies, such as problem-solving, abstract thinking and literacy, do not differ across languages; therefore, any improvement benefits both languages. Cummins tested his hypothesis with bilingual speakers who started learning an L2 at an early age, before their L1 literacy was fully established. Several empirical studies have provided evidence supporting the interdependence hypothesis, including Legarreta's (1979) longitudinal study of Spanish-speaking children and Verhoeven's (1994) study predicting L2 ability from L1 ability among Turkish children learning Dutch, which found a positive relationship in literacy, pragmatic and phonological skills.

Developing his earlier hypothesis, Cummins offers the threshold hypothesis, which holds that a certain level of linguistic competence in the L1 is required and that learners should reach a minimum threshold of L2 proficiency in order for transfer to occur. Consequently, if a student's capacity in the L1 is limited, their capacity in the L2 will be limited in the same way; however, foreign language ability cannot be predicted until learners reach the threshold level of competence in the L2. Some researchers have defined the threshold in terms of the L2 vocabulary size a learner needs for reading comprehension. For example, Laufer (1992, p. 24) states that "the level at which good L1 readers can be expected to transfer their reading strategies to L2 is 3,000-word families". However, the interdependence and threshold hypotheses are not restricted to reading proficiency or vocabulary knowledge, since languages comprise features beyond vocabulary, such as grammar; a single definition of the proficiency threshold would therefore not be consistent. The threshold may also vary, given that different task demands determine the baseline required to exhibit a certain skill level. The current study seeks to define a threshold level within its own context at which the GAT can predict English performance, that is, the point at which skills can be transferred.

The evaluation of the GAT was set out to address the following research questions:

  1. To what extent is the Arabic GAT effective in predicting the performance of L1 Arabic speakers in learning English as a foreign language?

  2. How much of the variance in university performance in specific courses can the GAT explain?

  3. Which construct of the verbal section is the most effective in predicting English language performance?

Method

Setting, participants and data

The setting of the current study is the College of Languages and Translation at Imam Muhammed bin Saud Islamic University in Riyadh, Saudi Arabia. The data were obtained from 211 Saudi undergraduate males, 84 of whom were from the foundation level and 127 from level 2. The foundation level is the students' first semester at the university; each semester lasts for 4 months. Those recruited from level 2 had been studying English in the Faculty of English for at least 1 year after finishing their foundation level. Participants' ages ranged from 18 to 22 years. The GAT data were obtained from the National Centre for Assessment from the scores of past tests that the participants had taken before entering the university. The English language performance data for the foundation level were obtained from an integrated language skills exam at the end of the first semester. The English language performance data for level 2 were obtained from the final level 2 reading and grammar exams and the total course score for each skill. The total course score includes classroom participation, assignments, attendance and exams, and it is considered an indication of general performance at the university level. The data collection process was reviewed and handled in accordance with the policies of the NCA and the other stakeholders. No IDs, names or any other identifiable data were disclosed.

Tests and measures

The first test was the GAT in Arabic, which students take as a requirement for university entrance. The GAT is designed to measure students' analytical and deductive skills in two main sections: the first, the verbal section, measures ability and aptitude in written language, and the second measures mathematical literacy. For the purpose of this study, the verbal section is the subject of investigation. The test is not based on the school curriculum, and participants are not expected to study specifically for it; rather, it is designed to measure students' cognitive ability. The verbal section measures language- and reading-related skills, including inferencing, information retention and structure sensitivity. It is composed of four constructs in a multiple-choice format: analogy, context error detection, sentence completion and reading comprehension (NCA, 2018). The reliability of the test has been reported in previous studies at 0.90 (NCA, 2018). The test was used in this study as an index of participants' first language aptitude in reading-related skills.

The second test is the final English reading exam that students take at the end of level 2 in the College of Languages and Translation. It is designed to measure their ability to comprehend English text and the related subskills required for reading. These skills include, for example, scanning and skimming for different purposes to gain information; guessing and inferencing to grasp the meanings of words or phrases; understanding signal words and cause and effect; summarising; and identifying particular features of a text, including cultural interpretations, various text types and different points of view (IMSIU, 2018). The exam takes the form of multiple-choice questions. The test scores were used in this study as an index of the participants' performance in English reading and comprehension after studying at the Faculty for a year and a half.

The third test is the final English grammar exam that students take at the end of level 2 in the College of Languages and Translation. Knowledge of grammar can contribute to reading skills, since understanding a language's grammar (syntactic structure) is a crucial part of language comprehension (Koda, 2005). The test is designed to measure students' ability to comprehend the most fundamental aspects of English grammar, such as tenses, modals, conditionals, questions, quantifiers and conjunctions (IMSIU, 2018), and takes the form of multiple-choice questions. The test score was used in this study as an index of the participants' performance in English grammar after studying in the Faculty for 1 year.

The fourth test is the final English exam that foundation-level students take after one semester of study, before entering level 1 in the Faculty. This test measures students' performance in English as a foreign language and consists of one hundred multiple-choice items. The skills examined are organised into sections covering reading comprehension, vocabulary meaning, explicit grammatical knowledge and writing. Writing scores are not considered in the current study for two reasons: first, this study examines receptive skills and writing is a productive skill; second, the scoring method for the writing section may not be as objective as for the other sections.

All of the English exams mentioned above were developed, reviewed and verified by Faculty members who are professors of linguistics and the English language. The test development process involves careful construction and feedback, including analysis of the tasks and their connection to the objectives of the curriculum (IMSIU, 2018). These tests are therefore believed to be suitable instruments, in terms of construct validity, for testing foreign language performance for pre-intermediate and upper-intermediate English language learners. According to the specifications of the tests and the curriculum for which they are designed, they correspond roughly to the Common European Framework of Reference for Languages (CEFR) (Council of Europe, 2019) level A2 (basic user) for foundation students and level B2 (independent user) for students in level 2.

Analysis

In this study, the relationships among all of the variables were analysed using correlation tests and regression models in the Statistical Package for the Social Sciences (SPSS). After fitting each regression model, the model assumptions were checked for normality of the residuals, constant variance and the absence of multicollinearity.
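
As an illustration of the analysis just described, the sketch below shows how the same correlations, regressions and assumption checks could be run in Python with pandas, SciPy and statsmodels. This is not the author's SPSS workflow; the file name and column names (e.g. gat_verbal, english_reading and the four sub-construct columns) are hypothetical placeholders.

```python
# Illustrative sketch only (assumed file/column names), mirroring the analysis
# described above: Pearson correlations, simple and multiple OLS regression,
# and checks for normality, constant variance and multicollinearity.
import pandas as pd
import statsmodels.api as sm
from scipy import stats
from statsmodels.stats.diagnostic import het_breuschpagan
from statsmodels.stats.outliers_influence import variance_inflation_factor

df = pd.read_csv("level2_scores.csv")  # hypothetical data file

# Correlation between the GAT verbal score and the English reading test score
r, p = stats.pearsonr(df["gat_verbal"], df["english_reading"])

# Simple regression: English reading test score regressed on the GAT verbal score
X = sm.add_constant(df[["gat_verbal"]])
model = sm.OLS(df["english_reading"], X).fit()
print(model.summary())  # reports R-squared, unstandardised B, SE, t and p

# Assumption checks on the residuals
shapiro_stat, shapiro_p = stats.shapiro(model.resid)      # normality
bp_stat, bp_p, _, _ = het_breuschpagan(model.resid, X)    # constant variance

# Multiple regression on the four verbal sub-constructs, with VIF (< 10 rule)
predictors = ["analogy", "error_detection", "sentence_completion", "reading_comp"]
X4 = sm.add_constant(df[predictors])
model4 = sm.OLS(df["english_reading"], X4).fit()
vif = {name: variance_inflation_factor(X4.values, i)
       for i, name in enumerate(X4.columns) if name != "const"}
```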

Results

Results for level 2 students (English reading)

The results for students in level 2 showed moderate correlations between the GAT verbal section and the English reading test (corr = 0.420, p value < 0.001) and the English reading course score (corr = 0.436, p value < 0.001). The regression model for the effect of the GAT verbal section on the English reading test is shown in Table 1. Checking the model assumptions, the residuals were found to be normally distributed and to have constant variance. The verbal section explained 17.6% of the variation in the English reading test score. Based on the regression coefficients, the verbal section had a significant positive effect on the English reading test score (B = 0.363, p value < 0.001) and on the English reading course score (B = 0.475, p value < 0.001), for which 16.8% of the variance was explained by the verbal section.
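
Written out from the coefficients reported above and in Table 1, the fitted simple regression is (a restatement of the reported values, not an additional analysis):

$$
\widehat{\text{English reading test}} = 18.235 + 0.363 \times \text{GAT verbal}, \qquad R^2 = 0.176
$$

On this model, a 10-point difference in the GAT verbal score corresponds to a predicted difference of roughly 3.6 points on the English reading test.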

Breakdown of the GAT with English reading test score

Four sub-constructs of aptitude in Arabic were derived from the GAT verbal section: analogy, context error detection, sentence completion and reading comprehension. The English reading test score was regressed on these four sub-constructs (see Table 2). Checking the model assumptions, the residuals were found to be normally distributed and to have constant variance, and multicollinearity was not present (VIF < 10). The independent variables were able to explain 19.1% of the variation in the English reading test score. The fitted model was statistically significant (F = 5.751, p value < 0.001). Only the context error detection breakdown score showed a significant coefficient (B = 0.194, p value = 0.015).
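
For comparison, the fitted multiple regression implied by the coefficients in Table 2 is (again only a restatement of the reported values):

$$
\widehat{\text{English reading test}} = 14.394 + 0.143\,\text{Analogy} + 0.194\,\text{Error detection} + 0.101\,\text{Sentence completion} + 0.044\,\text{Reading comprehension}, \qquad R^2 = 0.191
$$

with only the error detection coefficient reaching statistical significance.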

Results for level 2 students (knowledge of grammar)

The results for students in level 2 showed moderate correlations between the GAT verbal section and the English grammar test (corr = 0.344, p value < 0.001) and the English grammar course score (corr = 0.403, p value < 0.001). The English grammar test score was regressed on the GAT verbal score (see Table 3). Checking the model assumptions, the residuals were found to be normally distributed and to have constant variance. The verbal section explained 11.8% of the variation in the English grammar test score. Based on the regression coefficients, the GAT verbal score had a significant positive effect on the English grammar test score (B = 0.245, p value < 0.001) and on the English grammar course score (B = 0.373, p value < 0.001), for which 12.5% of the variance was explained by the verbal section.

Results for foundation students

There was no significant correlation between the GAT scores and the English language performance test for students in the foundation level. The regression model for the effect of the GAT verbal section on the foundation English performance test is shown in Table 4. The model assumptions (normality, constant variance and VIF) were examined and found to hold. The total variation explained by the model was low (4.7%), and the verbal section did not show any significant effect on English performance; its standardised coefficient was 0.139. Hence, the model showed no prediction from reading-related skills in Arabic to English skills for students in the foundation level.

Discussion

The finding that the GAT could predict the university performance of level 2 students is in line with the studies reviewed in the literature, Alshumrani (2007), Al Saud (2009), Alnahdi (2015) and Alanazi (2014), which used general measures of the relationship between the GAT (i.e. only the total scores) and university GPA across different majors. Similarly, the study confirms Althewini and Alkushi's (2020) findings on the GAT's prediction of English performance for Health Sciences students. Prediction here means that the higher a student scores on the test, the higher their score is likely to be in the English language tests and course grades; this performance can, thus, represent success at university. However, the current study examined the GAT from a cross-linguistic perspective, and its findings differ from those studies in that they reveal the test's power (with breakdown scores) to predict specific foreign language skills for students majoring in English. Alanazi (2014) observed that the GAT score can be a strong predictor for each of its sections or as a total score and found that the GAT explains 13.2% of the variance in GPA (i.e. how much the GAT score can contribute to university GPA); the present study reports higher explained variance, at 17.6 and 19.1%. This is similar to other studies, in which the average variance explained by this factor (first language literacy accounting for second language reading) is 20% (Bernhardt and Kamil, 1995). The reason for the higher percentage is probably that GPA was not used as the criterion; rather, test scores and coursework were used. Thus, according to the present study, the GAT can explain up to 19% of the variance in performance in English reading-related skills, and its predictive power is greater when particular subject areas are used as criteria. Nevertheless, the findings stand in contrast to Alghamdi and Al-Hattami's (2014) finding that the GAT did not predict students' university GPAs in a humanities faculty (College of Education) while it did in science faculties, even though they found that the GAT score can predict when it is weighted (or combined) with other variables, such as high school grades.

Since the findings of the present study showed a relationship between aptitude in Arabic and reading and grammar performance in English for advanced learners, they provide further support for Cummins' (1979, 2000) interdependence hypothesis. However, while Cummins' hypothesis was first developed for children growing up bilingual, the present study is concerned with adult language learners. The finding of a relationship between aptitude in the L1 and performance in L2 (English) reading is consistent with Sparks et al.'s (2009) study (involving Spanish, French and German), although the present study considered two languages with different writing systems.

For foundation students, the findings do not reveal any significant predictive power of the GAT for English performance, for two possible reasons. The first is the relatively limited time the students had spent studying English. The second is that the test used to measure English performance differs from the one used for level 2: the foundation test integrated many language skills (vocabulary, reading and grammar), with the total test score combining all of them, which raises questions about whether the total score gives a reliable indication of performance. Nevertheless, these findings give further support to the threshold hypothesis, namely that a baseline of proficiency needs to be reached before skills can transfer, as in the studies of Legarreta (1979), Verhoeven (1994) and Van Gelderen et al. (2007), which involved speakers of languages other than Arabic, as well as Mushait's (2004) study of Arabic speakers. In light of the present findings, a further investigation involving level 1 students is recommended in order to determine the baseline point within the context of this study.

When the sub-constructs of aptitude were considered in the proposed relationship, the ability to detect context errors predicted performance in English reading. This result supports the role of problem-solving skills in learning a foreign language in general and in reading in particular, in line with Cummins' (2000) account of the underlying proficiencies that develop from the L1, one of which is problem-solving, a skill that can be developed in and accessed from both the L1 and the L2. Since second language learners actively utilise knowledge and experience gained from one language when learning another (Kuo and Anderson, 2008), and since aptitude is a complex of characteristics (Biedroń and Szczepaniak, 2009; Stansfield and Winke, 2008), this study has shown that the problem-solving abilities observed in Arabic (compared with relationship recognition, analysis and inference) constitute the most powerful construct of aptitude for predicting performance in English. This answers a question in the literature concerning which skills or subskills are transferred from the L1 to the L2 and which have the greater effect. To the best of the researcher's knowledge, no published studies have examined the relationship between the breakdown of Arabic language aptitude skills derived from the GAT verbal section and university success.

Conclusion

Overall, the findings of this study have shown a moderate, predictive relationship between the GAT and performance in English skills and university studies, that is, between aptitude in Arabic and reading and grammar in English. This supports the claim that previously developed L1 ability affects and fosters competence in the L2 despite the apparent differences between the two languages. Moreover, the present study has identified a baseline of L2 proficiency that a learner must pass in order for the influence of the L1 to occur; in the context of this study, this baseline lies between the foundation level and level 2.

This study gives further support to the GAT's ability, as an admission requirement, to predict students' success in an English major at university. The main strength of this study in this regard lies in examining the breakdown scores of the GAT's verbal section and calibrating them to specific university courses and language skills. This method has allowed the present study to provide further evidence that the GAT can explain up to 19% of the variance in performance. From a practical perspective, the implication of this research is that it supports the notion that high-stakes standardised tests and university admission tests are good predictors of academic performance. It therefore recommends continuing to use the GAT as an admission requirement for all majors, and for English majors particularly in the case of applicants with little or no prior knowledge of English; otherwise, an English proficiency test would be a valid option, depending on the academic institution's policy. The GAT is believed to have a washback effect [1] on state education in that it prompts schools, teachers and even parents to prepare students and teach them skills targeted by the test, such as problem-solving, analysing and inferencing strategies and critical thinking. Ethical issues should be carefully addressed to minimise side effects and biases against those with special needs, particularly since these tests account for only a proportion of university success, and alternatives should be considered for those less fortunate in terms of skills in order to avoid any direct negative impact on them. From a theoretical perspective, this study gives new insight into cross-linguistic influence from a domain outside the Indo-European languages: skills found in Arabic reading comprehension can influence the learning of English. It thereby expands the implications of the interdependence hypothesis (Cummins, 2000) to include adult learners.

Concerning the limitations of this study, students at the foundation level were not tracked over a longer period so that their progress could be monitored more closely. The study also did not include female students, so it cannot show whether gender differences would produce different results. For future research, broader coverage of demographic groups is suggested in order to determine whether the findings represent the wider Saudi and Arab student population.

Table 1. Regression for level 2 students (English reading)

Model | B (unstandardised) | SE | Beta (standardised) | t | p-value | VIF
(Constant) | 18.235 | 5.368 | | 3.397 | 0.001 |
GAT verbal | 0.363 | 0.070 | 0.420 | 5.150 | 0.000 | 1.00

Note(s): R2 = 0.176

F(ANOVA) = 26.52 p value < 0.001

Dependent variable: English reading test

Table 2. Multiple regression of the English reading test score on the GAT verbal constructs

Model | B (unstandardised) | SE | Beta (standardised) | t | p-value | VIF
(Constant) | 14.394 | 6.670 | | 2.158 | 0.033 |
Analogy | 0.143 | 0.092 | 0.167 | 1.548 | 0.124 | 1.706
Context error detection | 0.194 | 0.079 | 0.221 | 2.455 | 0.015 | 1.182
Sentence completion | 0.101 | 0.100 | 0.110 | 1.012 | 0.314 | 1.742
Reading comprehension | 0.044 | 0.093 | 0.045 | 0.474 | 0.636 | 1.344

Note(s): R2 = 0.191

F(ANOVA) = 5.751 p value < 0.001

Dependent variable: English reading test

Table 3. Regression for level 2 students (grammar knowledge)

Model | B (unstandardised) | SE | Beta (standardised) | t | p-value | VIF
(Constant) | 31.011 | 4.574 | | 6.779 | 0.000 |
GAT verbal | 0.245 | 0.060 | 0.344 | 4.079 | 0.000 | 1.00

Note(s): R2 = 0.118

F(ANOVA) = 16.63 p value < 0.001

Dependent variable: English grammar test

Table 4. Regression for the foundation English performance test

Model | B (unstandardised) | SE | Beta (standardised) | t | p-value | VIF
(Constant) | 16.731 | 8.878 | | 1.885 | 0.064 |
GAT verbal | 0.147 | 0.132 | 0.139 | 1.112 | 0.270 | 1.087

Note(s): R2 = 0.047

 F(ANOVA) = 1.629 p value = 0.204

Dependent variable: English performance

Note

1. The washback effect is the influence that a test exerts on many aspects of education, such as teaching practices and curriculum design.

References

Akhbar (2013), “Ekhtebarat alqiyas tahtaj taqueem [Qiyas tests need evaluation]”, Akhbar, [online], 15th August 2013, available at: https://akhbaar24.argaam.com/ (accessed 25th January 2020).

Al Saud, F.B.A.M. (2009), “Development of student admission criteria in Saudi universities: the experience of the National Center for Assessment in higher education”, Proceeding of towards an Arab Higher Education Space: International Challenges and Societal Responsibilities, Vol. 727.

Alanazi, N.K.J. (2014), “The validity of the general aptitude test (GAT) to predict male Saudi students' academic performance in first semester”, Ph.D. thesis, University of Northern Colorado.

Albilad (2019), “Ekhtebarat qiyas tahta mejhar almujtam’ [Qiyas tests under the community scope]”, Albilad, [online], 21st September 2019, available at: https://albiladdaily.com/ (accessed 25th January 2020).

Alriyad (2011), “Ekhtebarat alqias [Qiyas tests]”, Alriyad, [online], 12th June 2011, available at: http://www.alriyadh.com/ (accessed 25th January 2020).

Alghamdi, A.K.H. and Al-Hattami, A.A. (2014), “The accuracy of predicting university students’ academic success”, The Journal of Education and Psychology, Vol. 1, pp. 1-8.

Alnahdi, G.H. (2015), “Aptitude tests and successful college students: the predictive validity of the General Aptitude Test (GAT) in Saudi Arabia”, International Education Studies, Vol. 8 No. 4, pp. 1-6.

Alshumrani, S.A. (2007), “Predictive validity of the general aptitude test and high school percentage for Saudi undergraduate students”, Ph.D. thesis, University of Kansas.

Althewini, A. and Alkushi, A. (2020), “Predictive validity of Saudi admission criteria for freshmen students' English performance: experience of King Saud Bin Abdulaziz University for Health Sciences”, Journal of Language Teaching and Research, Vol. 11 No. 1, pp. 108-114.

Bernhardt, E.B. and Kamil, M.L. (1995), “Interpreting relationships between L1 and L2 reading: consolidating the linguistic threshold and the linguistic interdependence hypotheses”, Applied Linguistics, Vol. 16 No. 1, pp. 15-34.

Biedroń, A. and Szczepaniak, A. (2009), “The cognitive profile of a talented foreign language learner: a case study”, Psychology of Language and Communication, Vol. 13, pp. 53-71.

College Board (2020), available at: https://www.collegeboard.org.

Council of Europe (2019), available at: https://www.coe.int/en/web/common-european-framework-reference-languages/home.

Crouse, J. (1985), “Does the SAT help colleges make better selection decisions?”, Harvard Educational Review, Vol. 55 No. 2, pp. 195-220.

Cummins, J. (1979), “Linguistic interdependence and the educational development of bilingual children”, Review of Educational Research, Vol. 49 No. 2, pp. 222-251.

Cummins, J. (2000), Language, Power and Pedagogy: Bilingual Children in the Crossfire, Multilingual Matters.

Dabrowska, E. and Street, J. (2006), “Individual differences in language attainment: comprehension of passive sentences by native and non-native English speakers”, Language Sciences, Vol. 28 No. 6, pp. 604-615.

Hashem, H. (2009), “Jeel mahmom be ekhtebarat alqudurat [A generation worried about the Aptitude Test]”, Alriyad, [online], 11th March 2009, available at: http://www.alriyadh.com/ (accessed 25th January 2020).

Imam Muhammed bin Saud Islamic University (IMSIU) (2018), available at: www.imamu.edu.sa.

Koda, K. (2005), Insights into Second Language Reading: A Cross-Linguistic Approach, Cambridge University Press.

Koda, K. and Zehler, A.M. (2008), Learning to Read across Languages: Cross-Linguistic Relationships in First- and Second-Language Literacy Development, Routledge, New York.

Kuo, L.J. and Anderson, R.C. (2008), “Conceptual and methodological issues in comparing metalinguistic awareness across languages”, Learning to Read Across Languages, Cambridge University Press, pp. 39-67.

Laufer, B. (1992), “How much lexis is necessary for reading comprehension?”, Vocabulary and Applied Linguistics, Palgrave Macmillan, London, pp. 126-132.

Legarreta, D. (1979), “The effects of program models on language acquisition by Spanish speaking children”, TESOL Quarterly, Vol. 13 No. 4, pp. 521-534.

Mei, L., Xue, G., Lu, Z.L., Chen, C., Zhang, M., He, Q. and Dong, Q. (2014), “Learning to read words in a new language shapes the neural organization of the prior languages”, Neuropsychologia, Vol. 65, pp. 156-168.

Misanchuk, E.L. (1977), “A model-based prediction of scholastic achievement”, Journal of Educational Research, Vol. 71, pp. 30-35.

Mushait, S.A. (2004), “The relationship of L1 reading and L2 language proficiency with the L2 reading comprehension and strategies of Saudi EFL University students”, Doctoral dissertation, University of Essex.

National Centre for Assessment (2018), available at: http://www.qiyas.sa.

Setiawati, F.A. (2020), “Aptitude test's predictive ability for academic success in psychology student”, Psychological Research and Intervention, Vol. 3 No. 1, pp. 1-12.

Singleton, D. (2003), “Critical period or general age”, in María del Pilar, G.M. and María, L.G.L. (Eds), Age and the Acquisition of English as a Foreign Language, Multilingual Matters, Vol. 4, pp. 1-22.

Sparks, R.L., Artzer, M., Ganschow, L., Siebenhar, D., Plageman, M. and Patton, J. (1998), “Differences in native-language skills, foreign-language aptitude, and foreign-language grades among high-, average-, and low-proficiency foreign-language learners: two studies”, Language Testing, Vol. 15 No. 2, pp. 181-216.

Sparks, R., Patton, J., Ganschow, L. and Humbach, N. (2009), “Long-term crosslinguistic transfer of skills from L1 to L2”, Language Learning, Vol. 59 No. 1, pp. 203-243.

Sparks, R., Patton, J., Ganschow, L. and Humbach, N. (2012), “Do L1 reading achievement and L1 print exposure contribute to the prediction of L2 proficiency?”, Language Learning, Vol. 62 No. 2, pp. 473-505.

Stansfield, C. and Winke, P. (2008), “Testing aptitude for second language learning”, in Shohamy, E. and Hornberger, N. (Eds), Encyclopedia of Language and Education, Vol. 7, pp. 81-94.

Van Gelderen, A., Schoonen, R., Stoel, R.D., De Glopper, K. and Hulstijn, J. (2007), “Development of adolescent reading comprehension in language 1 and language 2: a longitudinal analysis of constituent components”, Journal of Educational Psychology, Vol. 99 No. 3, p. 477.

Verhoeven, L.T. (1994), “Transfer in bilingual development: the linguistic interdependence hypothesis revisited”, Language Learning, Vol. 44 No. 3, pp. 381-415.

Wolfe, R.N. and Johnson, S.D. (1995), “Personality as a predictor of college performance”, Educational and Psychological Measurement, Vol. 55 No. 2, pp. 177-185.

Acknowledgements

The data were provided and made accessible by two bodies in Saudi Arabia: the National Centre for Assessment and the Faculty of Languages and Translation at Imam Muhammed Ibn Saud Islamic University (IMSIU). This study is part of a PhD project supervised by Amanda Mason, Brigitte Hordern and Catherine Groves at Liverpool John Moores University. It was also presented at the First International Symposium on Applied Linguistics Research at Prince Sultan University in 2020.

Corresponding author

Abdulhameed Aldurayheem can be contacted at: aadurayheem@imamu.edu.sa
