A practical application of statistical process control to evaluate the performance rate of academic programmes: implications and suggestions

Ana Gessa (Department of Financial Economics, Accounting and Operations Management, Faculty of Business Sciences and Tourism, University of Huelva, Huelva, Spain)
Eyda Marin (Department of Financial Economics, Accounting and Operations Management, Faculty of Business Sciences and Tourism, University of Huelva, Huelva, Spain)
Pilar Sancha (Department of Financial Economics, Accounting and Operations Management, Faculty of Business Sciences and Tourism, University of Huelva, Huelva, Spain)

Quality Assurance in Education

ISSN: 0968-4883

Article publication date: 22 August 2022

Issue publication date: 28 September 2022


Abstract

Purpose

This study aims to properly and objectively assess the students’ study progress in bachelor programmes by applying statistical process control (SPC). Specifically, the authors focused their analysis on the variation in performance rates in business studies courses taught at a Spanish University.

Design/methodology/approach

A qualitative methodology was adopted, based on an action-based case study developed at a public university. Previous research and theoretical issues related to quality indicators of the training programmes were discussed, followed by the application of SPC to assess these outputs.

Findings

The evaluation of the performance rate of the courses that comprise the training programmes through SPC revealed significant differences with respect to the results obtained through traditional evaluation procedures. Similarly, the results show differences in the control parameters (central line and control interval), depending on the adopted approach (by programme, by academic year and by department).

Research limitations/implications

This study has inherent limitations linked to both the methodology and selection of data sources.

Practical implications

The SPC approach provides a framework to properly and objectively assess the quality indicators involved in quality assurance processes in higher education.

Originality/value

This paper contributes to the discourse on the importance of a robust and effective assessment of quality indicators of the academic curriculum in the higher education context through the application of quality control tools such as SPC.

Citation

Gessa, A., Marin, E. and Sancha, P. (2022), "A practical application of statistical process control to evaluate the performance rate of academic programmes: implications and suggestions", Quality Assurance in Education, Vol. 30 No. 4, pp. 571-588. https://doi.org/10.1108/QAE-03-2022-0065

Publisher: Emerald Publishing Limited

Copyright © 2022, Ana Gessa, Eyda Marin and Pilar Sancha.

License

Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial & non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode


Introduction

The need to advance the modernisation of universities is a key element in developing a knowledge-based society and a more competitive economy. In this context, it is not surprising that quality assurance has become one of the cornerstones for policy and decision makers in higher education institutions in the European Higher Education Area (EHEA) (Curaj et al., 2015).

Quality assurance in higher education has two primary functions. First, it establishes the legitimacy of an institution and the academic programmes it offers. Second, it informs the institutions’ stakeholders about program objectives and outcomes and the fulfilment of the expected quality standards (Kinser, 2014).

In the European Union, national accreditation and quality assurance agencies play a key role, as they are in charge of quality management through evaluation, certification and accreditation of programmes, professors and universities. They design different programmes to guarantee internal and external quality, following the Standards and Guidelines for Quality Assurance in the EHEA (ESG), adopted by the European Association for Quality Assurance in Higher Education (ENQA) (2015).

Under ENQA’s umbrella, universities apply procedures that facilitate both the improvement of the quality of their degrees and the external evaluation processes conducted by the corresponding competent institutions. In this sense, the indicators used for the evaluation of the academic curriculum performance constitute a key element for accountability and transparency.

Despite the experience accumulated over more than a decade in university degree programme evaluation processes, criticism of the approaches to their definition and operationalisation is recurrent in the literature (Hanna et al., 2012; Strang et al., 2016). The main concerns across agencies relate, among others, to the consolidation of good practices and appropriate statistical principles when evaluating and analysing performance indicators.

The current use of a general heuristic to assess academic programme outcomes, where the motto seems to be "the higher the ratings, the better" (Tomas and Kelo, 2020), has led to a misinterpretation of the outcomes of the educational processes measured by indicators. Four barriers have made this traditional approach both unreliable and inaccurate when it is used to analyse the variability of indicators such as the performance rate:

  1. the subjectivity that prevails in the analysis about what should be considered excellent, acceptable or insufficient;

  2. the lack of approaches that consider the context of universities, faculties, departments and even courses;

  3. the ignorance of uncontrollable factors that underlie the inherent variability in the educational processes; and

  4. most importantly, the extended practice of comparing performance with averages or of using arbitrary cut-off numbers without taking into consideration that educational processes have an inherent variability (Bi, 2018).

An alternative approach to overcome these shortcomings and remove the barriers is the application of statistical process control (SPC), which has been widely and successfully used for decades in the manufacturing industry and was later, more slowly, introduced in the service sector in general and in the education sector in particular (MacCarthy and Wasusri, 2002; Sulek, 2004; Suman and Prajapati, 2018; Utley and May, 2009; Yang et al., 2012). Contrary to classical statistical methods, which are developed for fixed populations and not for processes, SPC can be very effective in detecting process shifts and the dynamics of the process itself. Moreover, it enables hidden problems to be revealed, thereby indicating the actions necessary for continuous improvement.

In any process, a certain amount of inherent or natural variability will always exist due to the cumulative effect of unavoidable causes. These sources of variability are called “chance causes” (Montgomery, 2009). SPC helps to assess the variability of the educational processes, distinguishing between assignable causes (inappropriate educational resources and methodology, ineffective curriculum, etc.) and random causes (student profile, family context, etc.) (Daneshmandi et al., 2020; Nikolaidis and Dimitriadis, 2014).

This work aims to contribute to understanding the usefulness of applying SPC in the processes of accreditation and monitoring of university degrees, through the analysis of the variability of the performance rate associated with the procedures that constitute the quality assurance system (QAS). Specifically, the authors focussed their analysis on the variation in performance rates in business studies courses taught at a Spanish university. Based on this goal, the following research question is suggested: Are there any significant differences between the results obtained with a standard statistical analysis and the results obtained through the SPC application?

For this purpose, the next section outlines the theoretical framework of this study. Subsequently, the paper presents the methodology and results. The last section discusses the conclusions.

Theoretical framework

Quality assurance in Spanish higher education systems

Quality assurance has become one of the key issues for higher education systems (HES) around the world. The changes that have arisen in the European education area have forced university institutions to adopt new management models, with a priority being to guarantee the quality of the studies as a contributing factor to the development of the economy and society [Red Iberoamericana para la Acreditación de la Calidad de la Educación Superior (RIACES), 2007]. The comparability and recognition of degrees, strengthened by mechanisms for evaluating and assuring the quality of higher education qualifications and certifications, have undoubtedly made the EHEA possible. The key instrument to make this possible was the ESG [European Association for Quality Assurance in Higher Education (ENQA), 2015]. Specifically:

[…] they set a common framework for quality assurance systems for learning and teaching at European, national and institutional level; they enable the assurance and improvement of the quality of higher education in the EHEA; they support mutual trust, thus facilitating recognition and mobility within and across national borders; and they provide information on quality assurance in the EHEA [European Association for Quality Assurance in Higher Education (ENQA), 2015, p. 7].

Under these purposes, the ESG are grounded in four principles for quality assurance:

  1. “Higher education institutions have primary responsibility for the quality of their provision and its assurance;

  2. quality assurance responds to the diversity of HES, institutions, programmes and students;

  3. quality assurance supports the development of a quality culture; and

  4. quality assurance takes into account the needs and expectations of students, and other stakeholders and society” [European Association for Quality Assurance in Higher Education (ENQA), 2015, p. 8].

European agencies vary in their approaches to the implementation and adaptation of the ESG (Alzafari and Ursin, 2019; Kohoutek et al., 2018; Nascimbeni, 2015; Manatos and Huisman, 2020). In Spain, the National Agency for Quality Assessment and Accreditation (ANECA) has developed a set of programmes to conduct its activities of evaluation, certification and accreditation of teaching staff, universities and degrees (Table 1). These programmes constitute a framework that makes it easier for universities to implement their internal QAS as well as the external evaluation processes conducted by the responsible regional agencies and universities.

The MONITOR programme operationalises the procedures of the Protocol for the Monitoring and Renewal of the Accreditation of Official University Qualifications [University Commission for the Regulation of Monitoring and Accreditation (UCRMA), 2010], designed to supervise the implementation of previously established official qualifications until the moment when they are re-assessed for the renewal of their accreditation. The protocol includes the obligation to report information about various processes carried out within the universities. The set of indicators proposed to report the outcomes of the qualification includes the following:

  • performance rate in the qualification;

  • dropout rate;

  • efficiency rate; and

  • graduation rate.

The reports also include global data regarding the qualification and an analysis of the adequacy of the evolution of the indicators and their consistency with the target established in the qualification verification report.

The usual practice of universities and evaluation agencies when analysing these indicators is limited to the application of conventional descriptive statistics, which involves analysing variation through measures such as the mean, standard deviation, median, etc. As a result, the analysis seems subjective, decontextualized and unsound, as it focusses on the evaluation of the outcomes of the educational processes and their comparison with outdated fixed reference values (Andreani et al., 2020; Klasik and Hutt, 2019). Therefore, this practice does not enable us to identify whether the process is under control and, if it is not, whether this status is due to causes attributable to the processes themselves or to random causes, depending, in turn, on factors such as the branch of study, the type of centre, the university and the region (Bi, 2018; Hanna et al., 2012; Kember and Leung, 2011).

An alternative practice to overcome this limitation is the application of SPC charts, which have become a useful tool with which to achieve stability and improvement in the quality of educational service delivery by monitoring certain variables or attributes over time, such as the previously mentioned performance indicators. The application of SPC in an educational context is already a reality, and it is increasing, as shown in the next section. This will allow the progressive implementation of practices typical of quality management in the field of education, within the general framework of the continuous improvement of processes.

Statistical control of processes in education services

SPC allows the identification of different sources of variation and also enables detection of an “out of control” status (Besterfield, 1995; Shewhart, 1936). A control chart “shows the value of the quality characteristics of interest as a function of time or sample number” (Montgomery, 2009). Thus, the variability of a quality characteristic should be based on output, which involves estimating its statistical distribution and parameters (Juran and Gryna, 1988).

The standard control chart shows a central line (CL), which is the mean of the variable being monitored, and the upper control limit (UCL) and lower control limit (LCL), which are typically set at ±3 standard deviations from the CL; its graphic representation is shown in Figure 1.

When a plotted point violates a control limit, that is, when it falls outside the three-sigma limits (Zone B), it should be treated as the result of a special cause of variation. If special causes are present, the process is said to be out of control. The zone between the UCL and the LCL (Zone A) shows the expected normal (common cause) plot point variation. Unlike special causes of variability, common causes, also called “natural” or “random” variability, are due to a large number of small sources of variation that are not easily identifiable. If assignable or special causes of variation are removed, characteristic parameters such as the mean, standard deviation and probability distribution are constant and process behaviour is predictable; the system is said to be “in a state of statistical control” or simply “under control”. However, certain abnormal patterns (trends, sudden shifts, systematic variation, cycles and mixtures) may alert one to the existence of special causes.
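As an illustration of the three-sigma rule described above, the following Python sketch computes a centre line and ±3 standard deviation limits from a set of hypothetical course performance rates and then classifies new observations as Zone A or Zone B. It is a minimal sketch with made-up data, not the procedure used in this study (which relies on X-R charts and Minitab, as described later), and it uses the overall sample standard deviation as a simple stand-in for the process sigma estimate.

```python
import numpy as np

# Hypothetical performance rates (%) used to set up the chart (reference data)
reference = [60.2, 58.7, 61.5, 59.9, 62.3, 60.8, 57.6, 61.1]

cl = float(np.mean(reference))             # central line (CL)
sigma = float(np.std(reference, ddof=1))   # sample SD as a simple sigma estimate
ucl, lcl = cl + 3 * sigma, cl - 3 * sigma  # three-sigma control limits

# Hypothetical new observations to be monitored against the limits
new_rates = {"Course A": 61.0, "Course B": 48.3, "Course C": 71.9}
for course, rate in new_rates.items():
    status = "out of control (Zone B)" if rate > ucl or rate < lcl else "under control (Zone A)"
    print(f"{course}: {rate:.1f}% -> {status}")
```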

There is a great variety of quality control charts, depending on the type of quality characteristic to be controlled (variables or attributes) and the number of variables to be controlled (univariate or multivariate) (Montgomery, 2009).

The industrial concepts of quality control have been refined and adapted for controlling and monitoring the quality of educational services. The literature review of previous applications of SPC in the educational field presented below is organised around the main objective of each reported study. From the analysis of the studies, four principal objectives have been identified:

  1. controlling student achievement;

  2. monitoring the effectiveness of the teaching–learning process;

  3. evaluating student satisfaction; and

  4. identification of abnormal patterns in certain educational processes.

Statistical process control charts to control students’ achievements

Numerous studies, adopting different approaches and covering different educational levels, have used objective and quantitative measures such as examination scores or grade point averages (GPAs) to monitor learning performance. Thus, for example, Schafer et al. (2011) used traditional Shewhart X-R charts to follow the performance of primary and secondary students in large-scale assessment programmes. In a similar way, Bakir and McNeal (2010) and Bakir et al. (2015) designed a non-parametric control chart based on the sign statistic to detect statistically significant shifts in students’ GPAs from a desired level. Milosavljevic et al. (2018) used attribute charts from the perspective of the number of passed exams. Other studies following a similar framework include Peterson (2015), Zolkepley et al. (2018), Hrynkevych (2017), Aldowaisan and Allahverdi (2016), Mazumder (2014), Djauhari et al. (2017), Cervetti et al. (2012) and Hanna et al. (2012).

SPC charts have also been applied to monitor the consistency of scale scores or ratings over time. Lee and von Davier (2013) used cumulative sum (CUSUM) charts in a study conducted with data from 72 countries to detect changes in a measurement process of rating performance items in operational assessments. Omar (2010) proposed X-S charts to monitor the consistency of scores in a co-assessment process. Other applications of SPC charts in similar frameworks include Beshah (2012), Edwards et al. (2007) and Savic (2006).

Monitoring the effectiveness of the teaching–learning process

Another line of research aims to measure both teachers’ contributions to increasing student knowledge and students’ learning outcomes. The technique usually applied for this purpose consists of administering to students, prior to a lecture, a test known as the Background Knowledge Probe (BKP), which contains questions about concepts covered during the lecture; after the lecture, students answer the same questions. Rather than grading the students’ outcomes, the BKP should be understood as a resource to measure students’ gains. This enables an improvement in the teaching programmes by detecting students’ mistakes and gaps in knowledge transfer. Thus, Green et al. (2012) used traditional mean charts (X charts), whereas Agrawal and Khan (2008), Grygoryev and Karapetrovic (2005a, 2005b), Karapetrovic and Rajamani (1998) and Pierre and Mathios (1995) used attribute charts for non-conformities (p charts).

Evaluating students’ satisfaction

SPC chart techniques have also been used to monitor and evaluate the level of student satisfaction regarding the quality of faculty members’ teaching and university service operations. Thus, Jensen and Markland (1996) published one of the first studies in this field, reporting the use of a Hotelling’s T² multivariate control chart to detect shifts in satisfactory and unsatisfactory perceptions of computer services at a large university. Debnath and Shankar (2014) proposed the use of attribute control charts (c-charts and u-charts) to evaluate the level of student satisfaction with their academic process. In their study, students were asked to provide information about various parameters, such as the grievance-handling process with respect to admissions and results, facilities and the practical orientation process.

It is also worth mentioning works in the controversial field of Student Evaluation of Instruction that are addressed from the perspective of SPC, bringing a new approach that focusses not on educational outputs and outcomes but on the quality of the underlying educational processes. Bi (2018) and Nikolaidis and Dimitriadis (2014) used X-S charts; Cadden et al. (2008) and Marks and O'Connel (2003) used X charts; Manguad (2006) proposed the use of X-R charts; and Ding et al. (2006) and Carlucci et al. (2019) applied attribute control charts (p and u charts, respectively). Finally, Sivena and Nikolaidis (2019), through simulation, evaluated several popular types of control charts, identifying the most suitable among them.

Identification of abnormal patterns in certain educational processes

Another segment of this body of research has looked at the use of CUSUM charts to detect known items and outliers in computer adaptive testing (Meijer, 2002; Veerkamp and Glas, 2000). This instrument allows the level of acquisition of different skills (linguistic, professional accreditations, etc.) to be detected in the context of item response theory.

Given the aforementioned considerations, three facts have motivated this research. First, the existence of inadequate statistical approaches that university institutions have traditionally used to assess certain outcomes of educational processes. Second, the potential of SPC to assess different quality characteristics for higher education institutions, as demonstrated in the literature review. Third, the authors of this paper are active members of the Board for the Monitoring and Accreditation of Qualifications at a business faculty at a Spanish university. Moreover, some of the authors teach courses related to quality management. They share the aforementioned concerns and, for the sake of practising what they preach, they advocate that successful industrial quality management techniques such as SPC, which is occasionally taught in their courses, should be incorporated to assess and analyse the variation related to educational performance indicators. This application would improve the assessment of the quality of educational processes so that root causes can be detected and corrective actions can be taken by those who supervise instruction (Hanna et al., 2012).

Materials and method

Research context

The study was conducted at the Faculty of Business Sciences and Tourism (FBST) of the University of Huelva (Spain). The FBST offers three bachelor’s programmes: Bachelor in Business Management (BM), Bachelor in Finance and Accounting (F&A) and Bachelor in Tourism. Table 2 shows the main characteristics of the study context.

Design of the study

The method adopted for this study is an action-based case study approach conducted at the FBST. Previous research and theoretical issues related to SPC implementation and the quality indicators of the training programmes proposed in the Protocol for the Monitoring and Renewal of the Accreditation of Official University Qualifications were discussed. Then the authors built a conceptual framework that provides a step-by-step approach to SPC implementation, as illustrated in Figure 2.

In this first approach to applying SPC to assess the variability of progress indicators, we focus only on the performance rate.

Applied at course level and by academic year, the performance rate is the ratio between the total number of ordinary credits passed by the students in a particular academic year and the total number of ordinary credits in which they were enrolled [University Commission for the Regulation of Monitoring and Accreditation (UCRMA), 2010].

This rate provides a snapshot of the proportion of students passing a course in an academic year. Analysis of this rate with the help of SPC charts helps to identify root causes, such as unmotivated students, so that further actions, such as monitoring the enrolment process, can be taken.
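As a minimal worked example of how this indicator is computed (with hypothetical figures, not data from this study): if the students enrolled in a course accumulate 1,500 ordinary credits and pass 930 of them, the performance rate is 930/1,500 = 62%. The same calculation in Python:

```python
def performance_rate(credits_passed: float, credits_enrolled: float) -> float:
    """Performance rate (%): ordinary credits passed / ordinary credits enrolled."""
    return 100.0 * credits_passed / credits_enrolled

# Hypothetical course: 1,500 ordinary credits enrolled, 930 passed
print(performance_rate(930, 1500))  # 62.0
```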

For this purpose, the performance rates of the courses taught at the FBST of the University of Huelva between the years 2014 and 2020 were collected from reports on the QAS of the FBST.

To avoid skewing the results, the last academic year of each degree was excluded from the study because of the type of courses taught in it: elective courses and credits awarded for the completion of curricular internships and the final degree project.

Applying statistical process control

To apply SPC, we followed a standard set of guidelines for setting up an SPC charting scheme, illustrated in Figure 2.

As our main goal was to monitor the effectiveness of undergraduate academic programmes at faculty level, we chose an X-R chart, and the quality characteristic to control was the performance rate of the courses in a given academic year for each programme taught at the FBST. As already mentioned, to draw an X-R chart, it was necessary to calculate the CL, UCL and LCL (Table 3).

To achieve the proposed goal, we used Minitab 17 statistical software.
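Although the study itself was carried out with Minitab 17, the calculations behind Table 3 can be reproduced with a short script. The sketch below is an illustration only: the subgroup values, the helper name xbar_r_limits and the assumption of five courses per academic year are not the study's data; the tabulated A2, D3 and D4 values are the standard X-R chart constants (Montgomery, 2009).

```python
import numpy as np

# Standard X-bar/R chart constants for subgroup size n (Montgomery, 2009)
CONSTANTS = {
    4: dict(A2=0.729, D3=0.000, D4=2.282),
    5: dict(A2=0.577, D3=0.000, D4=2.114),
    6: dict(A2=0.483, D3=0.000, D4=2.004),
}

def xbar_r_limits(subgroups):
    """CL, UCL and LCL of the X-bar and R charts from equally sized subgroups."""
    n = len(subgroups[0])
    c = CONSTANTS[n]
    means = np.array([np.mean(g) for g in subgroups])    # subgroup means (x-bar_i)
    ranges = np.array([np.ptp(g) for g in subgroups])    # subgroup ranges (R_i)
    x_dbar, r_bar = means.mean(), ranges.mean()          # grand mean and mean range
    return {
        "xbar_chart": {"CL": x_dbar,
                       "UCL": x_dbar + c["A2"] * r_bar,
                       "LCL": x_dbar - c["A2"] * r_bar},
        "r_chart": {"CL": r_bar,
                    "UCL": c["D4"] * r_bar,
                    "LCL": c["D3"] * r_bar},
    }

# Hypothetical subgroups: performance rates (%) of five courses per academic year
years = [
    [48.2, 55.1, 52.3, 50.7, 46.9],   # first-year courses
    [58.4, 61.2, 57.5, 60.1, 59.3],   # second-year courses
    [78.0, 81.5, 76.2, 80.3, 79.9],   # third-year courses
]
print(xbar_r_limits(years))
```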

Results and discussion

Bearing in mind the inherent variability of the educational processes, we are able to identify three situations. First, courses that fall below the LCL underperform (Zone B, Figure 1), i.e. the percentage of students who passed the course was worryingly low. Second, courses that fall above the UCL (Zone B, Figure 1) indicate that the percentage of students who passed the course was extraordinarily high. Finally, courses that fall within the three-sigma limits had more stable performances, meaning that only common causes explain the variability presented between the control limits (Zone A, Figure 1).

In this work, several sets of X-R charts are presented in which the parameters were calculated by programme, by academic year in which the courses were taught and by department, complemented by a comparative analysis.

Control by academic programme

In Figure 3, we provide the results of X-bar charts that plot the variability of the performance rate of the courses for the years 2014–2020, for each of the three programmes analysed, considering the courses within the programme grouped by academic year.

The graphs show similar behaviours for the BM and F&A programmes, with a similar under-control performance average (60.54% and 59.49%, respectively). In addition, by academic year, the courses with a low performance rate, i.e. below the LCL, were concentrated in the first year, and those with a high performance rate, i.e. above the UCL, were concentrated in the third year. The performance rates of the second-year courses are considered normal, as they were located within the control interval zone.

The performance rate of the courses within the Tourism degree is 79.93%, which is substantially higher than the BM and F&A rates. It is worth noting that the distance to the LCL of the out-of-control courses is much greater for the degree in Tourism than for the BM and the F&A.

Rather than a particularly high or low value of the performance rate, the wide range of proportions observed between degrees, as shown in the graphs, evidences that while F&A shows more stability in the proportion of students passing the courses, a worrying lack of stability is a common factor for BM and Tourism.

This suggests that higher ratings are not necessarily better, and for this reason it is very important to contextualize the analysis. Because the correct application of X chart formulas requires that all performance ratings contribute to determining the three-sigma limits, the preceding analysis is free of subjectivity when it categorizes performance ratings into three groups: high, normal and low.

Surprisingly, for the BM and the F&A, the means of the performance ratios calculated using the traditional approach are higher than those estimated by applying SPC: 64.67 > 60.54 and 65.14 > 59.49, respectively. By contrast, the traditional mean of the degree in Tourism (71.39) is lower than the mean calculated under statistical control (75.93) (Table 2).

The differences pointed out above highlight the need to review the procedures for evaluating the results of degree programmes, promoting methods such as SPC that allow a contextualized, robust and unbiased analysis.

However, it is important to bear in mind that the analysis could change if control chart parameters were calculated using the programme mean, the department mean or the programme year mean. Thus, to complement the SPC study, in the following sections we present the results according to academic year and department.

Control by academic year

Figure 4 shows the results of the SPC application conducted independently for each of the academic years that make up each degree, rather than in the aggregated form used in the previous section.

The results show differences in the control parameters (CL and control interval), depending on the programme itself and the academic year in which the courses were taught. As an example, the central values for the first year and the second year of BM and F&A were very similar (50.55% and 51.84%; 58.81% and 57.98%, respectively). However, these values increased significantly for courses taught in the third year and differed considerably between programmes (BM 79.19%, F&A 67.72%) (Figure 4). Notably, the three programmes present an overall upward trend in the under-control performance rate associated with the sequence of the courses within the structured curriculum of the programme, together with a wide year-to-year range in the observed performance ratio (Table 4). This finding is consistent with the results of Hanna et al. (2012), who found a similar trend in a study of the proportion of students passing the class, analysed by course and using p-charts. Taken together, these findings suggest the need to identify the root causes of this trend, such as the need to revise admission standards.

Regarding possible reasons behind the high performance ratios of the courses located in the second and third academic years of the programme, we suggest three possibilities. First, it is possible that the faculty and the administration were aware of the presence of special causes behind the variation and responded by improving teaching strategies. Second, it is also possible that those students lacking motivation and ability were filtered out and did not pass to the subsequent academic year, thereby increasing student performance for higher courses. Third, a low performance rate in the first academic year coupled with a high dropout rate would have had the effect of lowering the standards for the higher-level courses.

Under this independent analysis, the results also show that the courses taught in the second year of BM and F&A were under statistical control, as they were in the aggregate analysis, but with a lower average performance rate (58.81% and 57.98% versus 60.54% and 59.49% for BM and F&A, respectively). Surprisingly, for the courses that were out of control in BM and F&A, those of the first year were above the UCL, whereas those of the third year were located below the LCL, in clear contrast to what occurred in the aggregate analysis of the programme described in the previous section. In the case of the Tourism degree, the courses outside the control interval were mainly concentrated in the last year, unlike the results obtained in the joint analysis, in which they were concentrated in the other years. In general, a progressive increase in the value of the CL (under control) of the performance rate was observed as the courses were taught in higher academic years. This year-by-year contextualized analysis applying SPC demonstrates the usefulness of this approach by revealing hidden problems that affect variability, such as the profile of students who enrol in a degree for the first time or how students adapt to their university studies over time.

Finally, the use of the three-sigma limits also shows important differences with respect to the results of the traditional evaluation (mean of the academic programme) (Table 4). This reinforces the argument previously made in the SPC analysis by academic programme.

Control by department

Bearing in mind that the courses taught in the different degrees were assigned to different departments, in this section, we present the CL for the courses, grouping them by departments with the highest teaching load for each degree. Table 5 shows the results.

These new parameters imply changes with respect to those obtained in the previous applications. For example, courses that were under control in the control chart built by programme became out of control with this new parameterisation, and vice versa. Figure 5 shows the differences in under-control means (CL) between the two departments with the highest teaching load allocation for each degree.

As a summary of the results, and for comparative purposes, Figure 6 shows the under-control performance of the courses, grouped by programme, academic year and department, comparing them in turn with the mean performance rate, which is the measure traditionally used to monitor performance.

Conclusions

The results of this study show significant differences in the analysis of the variability of the performance rate between the scenarios described (degree, year and department), depending on the reference parameters (CL, UCL, LCL), focussing the analyses on the process itself rather than the result.

Similarly, in the analysis of the variability of the processes, the SPC approach allows hidden problems to be revealed and their causes to be determined so that corrective actions can be taken to reduce oscillations and therefore improve the stability of the educational processes. Moreover, once SPC has been incorporated as an assessment tool to measure the variation relating to academic performance ratios, better information would be available to establish more suitable target goals.

The main conclusion of this study is that the traditional analysis which compares performance ratings with means of the degrees or arbitrary cut-off numbers overlooks the inherent variability of the educative processes. In addition to being useless for comparative purposes, this traditional approach lacks the objectivity and robustness necessary for its application in decision-making, in the accreditation and monitoring processes in which this work is framed.

By contrast, SPC allows a contextualized, robust and unbiased analysis of the variability of quality indicators involved in the accreditation and monitoring processes, providing valuable information for decision making to administrators, teachers and other stakeholders in HE.

This study is a first approach to the application of SPC for monitoring the control indicators proposed in the ESG [European Association for Quality Assurance in Higher Education (ENQA), 2015]. However, the results are focussed on the analysis of just three programmes in a Spanish university, and thus should be interpreted with caution. In future studies, we will explore the application of multivariate charts and capability analysis, as well as the inclusion of other programmes and time horizons, thereby widening the scope of this study.

Figures

Figure 1. Conventional control chart for monitoring the variability of a process

Figure 2. Application of the SPC in the scope of the study

Figure 3. Control results by academic programmes (2014–2020)

Figure 4. Control results by academic year

Figure 5. Results by departments (CL)

Figure 6. Comparative analysis related to performance rate

Table 1. Evaluation programmes of Spanish universities (ANECA)

Academic staff:

  • Teacher evaluation programme: evaluates the CVs of applicants to access non-civil-servant academic staff bodies

  • ACADEMIA: evaluates the CVs of applicants to access civil-servant academic staff bodies

  • CNEAI: the body within ANECA responsible for the evaluation of research activity for the purposes of assigning the corresponding remuneration complements, as per applicable regulations

Programme:

  • VERIFICA: evaluates degree proposals designed according to EHEA criteria

  • MONITOR: follows up an ex ante accredited programme to check its correct implementation and results

  • ACREDITA: checks that the degree has been carried out according to the initial project

  • SIC: assessment for international quality labels

Institution:

  • AUDIT: provides guidance for HEIs to establish their own internal quality assurance systems

  • AUDIT INTERNATIONAL: certifies quality assurance systems for higher education institutions (HEIs) located in third countries

  • INSTITUTIONAL ACCREDITATION: evaluates applications for institutional accreditation from university centres

  • DOCENTIA: supports universities in creating mechanisms to evaluate academic staff quality

Source: ANECA (www.aneca.es/Programas-de-evaluacion)

Table 2. Bachelor's programmes offered by the Faculty of Business Sciences and Tourism, University of Huelva

Bachelor's programme | Courses (compulsory/elective) | ECTS credits | Students enrolled 2014-2020 (mean/SD) | Performance rate 2014-2020 (mean, %)
Business Management | 35/14 | 240 | 464/88.88 | 64.67
Finance and Accounting | 35/17 | 240 | 201/75.18 | 65.14
Tourism | 37/28 | 240 | 195/23.30 | 71.39

Departments involved in teaching: Management and Marketing; Financial Economics and Accounting; Economy; Law; Geography and History*; Languages* (German, English and French)

Note: ECTS = European Credit Transfer System

Table 3. X-R chart formulas

"X-bar" chart:

  Central line (CL): $\bar{\bar{X}} = \frac{\sum_{i=1}^{k} \bar{x}_i}{k}$

  Upper control limit (UCL): $UCL_{\bar{X}} = \bar{\bar{X}} + A_2\,\bar{R}$

  Lower control limit (LCL): $LCL_{\bar{X}} = \bar{\bar{X}} - A_2\,\bar{R}$

"R" chart:

  Central line (CL): $\bar{R} = \frac{\sum_{i=1}^{k} R_i}{k}$

  Upper control limit (UCL): $UCL_R = D_4\,\bar{R}$

  Lower control limit (LCL): $LCL_R = D_3\,\bar{R}$

Notes: n = size of sample "i"; k = number of samples; $\bar{x}_i$ = mean of sample "i"; $R_i$ = range of sample "i"; $\bar{\bar{X}}$ = grand mean of the k samples; $\bar{R}$ = mean range of the k samples; $A_2$, $D_4$ and $D_3$ are constants determined by the sample size

Source: Adapted from Shewhart (1936)

Table 4. Control results by academic year versus traditional assessment (performance rate, %)

Academic year | Business Management | Finance and Accounting | Tourism
First | 50.55 | 51.84 | 60.10
Second | 58.81 | 57.98 | 77.42
Third | 79.19 | 67.72 | 80.99
Programme (mean) | 64.67 | 65.14 | 71.39

Table 5. Control parameters by department

Bachelor’s programmes
Business management Finance and accounting Tourism
Department CL UCL LCL CL UCL LCL CL UCL LCL
Management and marketing 76.09 87 65.06 60.42 72.67 48.17 51.84 74.72 28.95
Financial economics and accounting 57.37 74.3 40.45 63.31 78.43 48.18 76.7 86.21 67.2
Economy 59.95 77.78 42.12 66.45 77.11 55.79
Languages 76.5 87.39 66.5

References

Agrawal, D.K. and Khan, Q.M. (2008), “A quantitative assessment of classroom teaching and learning in engineering education”, European Journal of Engineering Education, Vol. 33 No. 1, pp. 85-103.

Aldowaisan, T. and Allahverdi, A. (2016), “Continuous improvement in the industrial and management systems engineering programme at Kuwait university”, European Journal of Engineering Education, Vol. 41 No. 4, pp. 369-379.

Alzafari, K. and Ursin, J. (2019), “Implementation of quality assurance standards in European higher education: does context matter?”, Quality in Higher Education, Vol. 25 No. 1, pp. 58-75.

Andreani, M., Russo, D., Salini, S. and Turri, M. (2020), “Shadows over accreditation in higher education: some quantitative evidence”, Higher Education, Vol. 79 No. 4, pp. 691-709.

Bakir, S.T. and McNeal, B. (2010), “Monitoring the level of students’ GPAs over time”, American Journal of Business Education, Vol. 3 No. 6, pp. 43-50.

Bakir, S.T., Prater, T. and Kiser, S. (2015), “A simple nonparametric quality control chart for monitoring student's GPAs”, SOP Transactions on Statistics and Analysis, Vol. 2015 No. 1, pp. 8-16.

Beshah, B. (2012), “Students’ performance evaluation using statistical quality control”, International Journal of Science and Advanced Technology, Vol. 2 No. 12, pp. 75-79.

Besterfield, D.H. (1995), Control de Calidad, 4th ed., Prentice Hall, Mexico.

Bi, H.H. (2018), “A robust interpretation of teaching evaluation ratings”, Assessment and Evaluation in Higher Education, Vol. 43 No. 1, pp. 79-93.

Cadden, D., Driscoll, V. and Thompson, M. (2008), “Improving teaching effectiveness through the application of SPC methodology”, College Teaching Methods and Styles Journal (CTMS), Vol. 4 No. 11, pp. 33-46.

Carlucci, D., Renna, P., Izzo, C. and Schiuma, G. (2019), “Assessing teaching performance in higher education: a framework for continuous improvement”, Management Decision, Vol. 57 No. 2, pp. 461-479.

Cervetti, M.J., Royne, M.B. and Shaffer, J.M. (2012), “The use of performance control charts in business schools: a tool for assessing learning outcomes”, Journal of Education for Business, Vol. 87 No. 4, pp. 247-252.

Curaj, A., Matei, L., Pricopie, R., Salmi, J. and Scott, P. (Eds). (2015), The European Higher Education Area: Between Critical Reflections and Future Policies, Springer, Cham, Heidelberg, New York, NY, Dordercht, London.

Daneshmandi, A.A., Noorossana, R. and Farahbakhsh, K. (2020), “Developing statistical process control to monitor the values education process”, Journal of Quality Engineering and Production Optimization, Vol. 5 No. 1, pp. 33-54.

Debnath, R.M. and Shankar, R. (2014), “Emerging trend of customer satisfaction in academic process”, The TQM Journal, Vol. 26 No. 1, pp. 14-29.

Ding, X., Wardell, D. and Verma, R. (2006), “An assessment of statistical process control‐based approaches for charting student evaluation scores”, Decision Sciences Journal of Innovative Education, Vol. 4 No. 2, pp. 259-272.

Djauhari, M.A., Sagadavan, R. and Lee, S.L. (2017), “Monitoring the disparity of teaching and learning process variability: a statistical approach”, International Journal of Productivity and Quality Management, Vol. 21 No. 4, pp. 532-547.

Edwards, H.P., Govindaraju, K. and Lai, C.D. (2007), “A control chart procedure for monitoring university student grading”, International Journal of Services Technology and Management, Vol. 8 Nos 4/5, pp. 344-354.

European Association for Quality Assurance in Higher Education (ENQA) (2015), “Standards and guidelines for quality assurance in the European higher education area (ESG)”, Brussels.

Green, K.W., Jr, Toms, L. and Stinson, T. (2012), “Statistical process control applied within an education services environment”, Academy of Educational Leadership Journal, Vol. 16 No. 2, pp. 33-46.

Grygoryev, K. and Karapetrovic, S. (2005a), “An integrated system for educational performance measurement, modeling and management at the classroom level”, The TQM Magazine, Vol. 17 No. 2, pp. 121-136.

Grygoryev, K. and Karapetrovic, S. (2005b), “Tracking classroom teaching and learning: an SPC application”, Quality Engineering, Vol. 17 No. 3, pp. 405-418.

Hanna, M.D., Raichura, N. and Bernardes, E. (2012), “Using statistical process control to enhance student progression”, Journal of Learning in Higher Education, Vol. 8 No. 2, pp. 71-82.

Hrynkevych, O.S. (2017), “Statistical analysis of higher education quality with use of control charts”, Advanced Science Letters, Vol. 23 No. 10, pp. 10070-10072.

Jensen, J.B. and Markland, R.E. (1996), “Improving the application of quality conformance tools in service firm”, Journal of Services Marketing, Vol. 10 No. 1, pp. 35-55.

Juran, J.M. and Gryna, F.M. (1988), Juran’s Quality Control Handbook, 4th ed., McGraw-Hill, New York, NY.

Karapetrovic, S. and Rajamani, D. (1998), “An approach to the application of statistical quality control techniques in engineering courses”, Journal of Engineering Education, Vol. 87 No. 3, pp. 269-276.

Kember, D. and Leung, D. (2011), “Disciplinary differences in student ratings of teaching quality”, Research in Higher Education, Vol. 52 No. 3, pp. 278-299.

Kinser, K. (2014), “Questioning quality assurance”, New Directions for Higher Education, Vol. 2014 No. 168, pp. 55-67.

Klasik, D. and Hutt, E.L. (2019), “Bobbing for bad apples: accreditation, quantitative performance measures, and the identification of low-performing colleges”, The Journal of Higher Education, Vol. 90 No. 3, pp. 427-461.

Kohoutek, J., Veiga, A., Rosa, M.J. and Sarrico, C.S. (2018), “The European standards and guidelines for quality assurance in the European higher education area in Portugal and the Czech republic: between the worlds of neglect and dead letters?”, Higher Education Policy, Vol. 31 No. 2, pp. 201-224.

Lee, Y. and Von Davier, A.A. (2013), “Monitoring scale scores over time via quality control charts, model-based approaches, and time series techniques”, Psychometrika, Vol. 78 No. 3, pp. 557-575.

MacCarthy, B.L. and Wasusri, T. (2002), “A review of non-standard applications of statistical process control (SPC) charts”, International Journal of Quality and Reliability Management, Vol. 19 No. 3, pp. 295-320.

Manatos, M.J. and Huisman, J. (2020), “The use of the European standards and guidelines by national accreditation agencies and local review panels”, Quality in Higher Education, Vol. 26 No. 1, pp. 48-65.

Manguad, B.A. (2006), “Using SPC to assess performance in a graduate course of business”, The American Society of Business and Behavioral Sciences, 13th Annual Meeting, Las Vegas.

Marks, N. and O'Connel, T. (2003), “Using statistical control charts to analyse data from students evaluations of teaching”, Decision Sciences Journal of Innovative Education, Vol. 1 No. 2, pp. 259-272.

Mazumder, Q.H. (2014), “Applying six sigma in higher education quality improvement”, 21st ASEE Annual Conference and Exposition, Indianapolis, IN, pp. 15-18.

Meijer, R.R. (2002), “Outlier detection in high‐stakes certification testing”, Journal of Educational Measurement, Vol. 39 No. 3, pp. 219-233.

Milosavljevic, P., Pavlovic, D., Rajic, M., Pavlovic, A. and Fragassa, C. (2018), “Implementation of quality tools in higher education process”, International Journal of Continuing Engineering Education and Life-Long Learning, Vol. 28 No. 1, pp. 24-36.

Montgomery, D.C. (2009), Introduction to Statistical Quality Control, 7th ed., Wiley, New York, NY.

Nascimbeni, F. (2015), “The increased complexity of higher education collaboration in times of open education”, Campus Virtuales, Vol. 3 No. 1, pp. 102-108.

Nikolaidis, Y. and Dimitriadis, S.G. (2014), “On the student evaluation of university courses and faculty members' teaching performance”, European Journal of Operational Research, Vol. 238 No. 1, pp. 199-207.

Omar, M.H. (2010), “Statistical process control charts for measuring and monitoring temporal consistency of ratings”, Journal of Educational Measurement, Vol. 47 No. 1, pp. 18-35.

Peterson, S.J. (2015), “Benchmarking student learning outcomes using Shewhart control charts”, 51st ASC Annual International Conference Proceedings.

Pierre, C.B. and Mathios, D. (1995), “Statistical process control and cooperative learning structures: a data assessment”, European Journal of Engineering Education, Vol. 20 No. 3, pp. 377-384.

Red Iberoamericana para la Acreditación de la Calidad de la Educación Superior (RIACES) (2007), “Proyecto ‘sistemas de garantía de calidad de las agencias de evaluación’”, Propuesta de ANECA.

Savic, M. (2006), “P-charts in the quality control of the grading process in the high education”, Panoeconomicus, Vol. 53 No. 3, pp. 335-347.

Schafer, W.D., Coverdale, B.J., Luxenberg, H. and Jin, Y. (2011), “Quality control charts in large-scale assessment programs”, Practical Assessment, Research and Evaluation, Vol. 16 No. 15, p. 2.

Shewhart, W.A. (1936), Statistical Method from the Viewpoint of Quality Control, Dover Publications, New York, NY.

Sivena, S. and Nikolaidis, Y. (2019), “Improving the quality of higher education teaching through the exploitation of student evaluations and the use of control charts”, Communications in Statistics-Simulation and Computation, pp. 1-24.

Strang, L., Bélanger, J., Manville, C. and Meads, C. (2016), Review of the Research Literature on Defining and Demonstrating Quality Teaching and Impact in Higher Education, Higher Education Academy, New York, NY.

Sulek, J.M. (2004), “Statistical quality control in services”, International Journal of Services Technology and Management, Vol. 5 Nos 5/6, pp. 522-531.

Suman, G. and Prajapati, D. (2018), “Control chart applications in healthcare: a literature review”, International Journal of Metrology and Quality Engineering, Vol. 9, p. 5.

Tomas, C. and Kelo, M. (2020), “ESG 2015-2018 ENQA agency reports: thematic analysis”, ENQA Occasional Paper, 28.

University Commission for the Regulation of Monitoring and Accreditation (UCRMA) (2010), “Protocol for the monitoring and renewal of the accreditation of official university degrees”, available at: http://deva.aac.es/include/files/universidades/seguimiento/Protocolo_CURSA%20_020710.pdf?v=2018927184158 (accessed 22 November 2021).

Utley, J.S. and May, G. (2009), “Monitoring service quality with residuals control charts”, Managing Service Quality: An International Journal, Vol. 19 No. 2, pp. 162-178.

Veerkamp, W.J. and Glas, C.A. (2000), “Detection of known items in adaptive testing with a statistical quality control method”, Journal of Educational and Behavioral Statistics, Vol. 25 No. 4, pp. 373-389.

Yang, S., Cheng, T., Hung, Y. and Cheng, S. (2012), “A new chart for monitoring service process mean”, Quality and Reliability Engineering International, Vol. 28 No. 4, pp. 377-386.

Zolkepley, Z., Djauhari, M.A. and Salleh, R.M. (2018), “SPC in service industry: case in teaching and learning process variability monitoring”, AIP Conference Proceedings 1974 (1), AIP Publishing LLC, p. 40026.

Further reading

Agencia Nacional de Evaluación de la Calidad y Acreditación (ANECA) (2016), “Support guide for the process of follow-up of official titles of bachelor and master”, available at: www.aneca.es/eng/content/view/full/13282 (accessed 2 December 2021).

Agencia Nacional de Evaluación de la Calidad y Acreditación (ANECA) (2021), “Evaluation programmes”, available at: www.aneca.es/Programas-de-evaluacion (accessed 2 December 2021).

Minitab Inc (2014), Minitab Statistical Software Version 17, Minitab Inc., State College, PA, available at: www.minitab.com

Acknowledgements

Funding: This study was supported by the University of Huelva (Spain) – Andalusia Consortium (CBUA).

Corresponding author

Pilar Sancha can be contacted at: mpsancha@uhu.es
