Is your AoL process overly complex?

Karen A. Tarnoff (College of Business and Technology, East Tennessee State University, Johnson City, Tennessee, USA)
Kathleen J. Barnes (Management Department, Salem State University, Salem, Massachusetts, USA)
Eric D. Bostwick (Department of Accounting and Finance, University of West Florida, Pensacola, Florida, USA)

Organization Management Journal

ISSN: 2753-8567

Article publication date: 29 June 2022

Issue publication date: 25 April 2023

Abstract

Purpose

The purpose of this study is to identify signs of unnecessary assurance of learning (AoL) complexity and to provide suggestions for simplifying the AoL processes.

Design/methodology/approach

While this paper is grounded in the existing AoL literature, it also presents several anecdotal observations from the authors’ practical experience in designing, leading, maintaining and consulting on AoL systems and processes.

Findings

Based on both a conceptual review of AoL literature and the authors’ own experiences, the authors outline 13 specific symptoms of unnecessary AoL complexity, identify potential underlying causes for each symptom and propose practical solutions that can increase the efficiency and effectiveness of dysfunctional AoL systems and processes.

Research limitations/implications

Although this work is grounded in the existing AoL literature, it also presents several anecdotal observations from the authors’ experiences. While the intent is to provide guidance that is actionable, it is understood that variability exists within and across schools and programs. Future research is needed to provide a more formal structure for reviewing AoL complexity, efficiency and effectiveness.

Practical implications

While future research is needed to provide a more formal structure for reviewing AoL complexity, efficiency and effectiveness, the intent of this paper is to provide guidance that is actionable with the understanding that variability exists within and across schools and programs.

Social implications

Society increasingly is demanding accountability from institutions of higher learning, and properly structured AoL programs can provide evidence of institutional effectiveness in preparing students to be productive members of society in their chosen fields of study. Stated succinctly, “although accountability matters, learning still matters most” (Angelo, 1999, n.p.).

Originality/value

Consideration of the 13 symptoms presented here along with other drivers that are unique to each school and program should result in the identification and development of practicable remedies to simplify AoL processes and systems, increase efficiency and effectiveness and improve the documentation of improvements to student learning.

Citation

Tarnoff, K.A., Barnes, K.J. and Bostwick, E.D. (2023), "Is your AoL process overly complex?", Organization Management Journal, Vol. 20 No. 2, pp. 46-55. https://doi.org/10.1108/OMJ-02-2022-1478

Publisher: Emerald Publishing Limited

Copyright © 2022, Karen A. Tarnoff, Kathleen J. Barnes and Eric D. Bostwick.

License

Published in Organization Management Journal. Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode


Introduction

While a critical component of assurance of learning (AoL) is evidence of a process that supports continuous improvement, many schools mistakenly approach this requirement with a compliance-oriented perspective. This approach is short-sighted: it focuses on generating data, leaving little room for the data’s review or use in the continuous improvement process, and thereby fails to comply with the accreditation standards. The resultant process and approach tend to lead to the development of overly complex AoL systems that ultimately prove to be ineffective at improving student learning. Schools adopting a compliance-oriented philosophy face a number of challenges including complex, inefficient systems, exhaustive processes and critical difficulty during accreditation reviews.

Complexity of the AoL process is generally antithetical to meaningful AoL and is often a hallmark of misplaced motivations and misunderstood requirements. Developing overly complex AoL processes is neither impressive nor sustainable and often has the unintended consequence of reinforcing faculty beliefs that the AoL process is a cumbersome, time-consuming waste of resources because it does not help students improve. Ultimately, these complex processes erode faculty participation and discourage engagement.

Given the import of AoL in the acquisition and maintenance of AACSB accreditation, this article seeks to address three issues. First, it provides a brief overview of AACSB accreditation. Second, it helps readers recognize and identify signs of unnecessary AoL complexity. Third, it provides suggestions for simplifying the AoL process to reduce waste and duplication of effort as well as to improve process effectiveness and utility.

Assurance of learning definitions and purpose

When designing any system, it is always good advice to “begin with the end in mind” (Covey, 1989). Understanding the definition of AoL, the philosophy behind AoL and the vision of AoL is imperative for appropriate and meaningful system design. Broadly speaking, Gainen and Locatelli (1995) define “educational assessment” as the “systematic collection, interpretation, and use of information about student characteristics, the educational environment, and learning outcomes to improve student learning and satisfaction” (p. 225, emphasis added). More to the point of this article, Standard 5 of the AACSB Business Standards (2020) defines AoL as:

The systematic processes and assessment plans that collectively demonstrate that learners achieve learning competencies…[and]…the processes of identifying competency gaps and designing and implementing changes to the curriculum and learning experience so the learning competencies are met (p. 41).

Thus, AoL processes, per AACSB, should result in a system that demonstrates learner achievement, identifies gaps in learning and initiates an ongoing process of systematic review and improvement.

The three themes noted in AACSB’s standards are also reflected in the literature. Specifically, in reviewing the assessment literature, the themes of student learning improvement, program and curricula improvement and internal and external accountability were consistently observed. Of these three themes, student improvement was most prominent (Eschenfelder, Bryan, & Lee, 2014; Hamilton & Schoen, 2009; Martell & Calderon, 2009; Rohrbacher, 2015; Rubin & Martell, 2009). In fact, Angelo (1999) summarizes the importance of improving student learning by stating that “though accountability matters, learning still matters most” (n.p.).

Although the relative ranking of program improvement (AACSB, 2013; Eschenfelder et al., 2014; Hamilton & Schoen, 2009; Martell, 2007; Rubin & Martell, 2009; Terenzini, 1989) and accountability (AACSB, 2013; Hamilton & Schoen, 2009; Martell & Calderon, 2009; Rohrbacher, 2015; Rubin & Martell, 2009; Terenzini, 1989) is debatable, program improvement was deemed by the authors to be the second most important purpose of AoL since it was discussed more often as a reason for AoL than was accountability. Summarizing all of these perspectives, the AACSB Business Standards (2013), Martell and Calderon (2009) and Rexeisen and Garrison (2013) conclude that “continuous improvement” is the hallmark of an effective AoL system.

Why do assurance of learning systems fail?

The four most notable reasons for AoL system failure identified in the literature are: misunderstanding the purpose of AoL (AACSB, 2003; Angelo, 1999; Fogarty, 2009; Martell & Calderon, 2009), requiring “scientifically significant” levels of assurance (Martell, 2009; Martell & Calderon, 2009), waiting to implement the perfect AoL system (Martell & Calderon, 2009; Rubin & Martell, 2009) and failing to gain adequate faculty participation (Betters-Reed, Nitkin, & Sampson, 2008; Ewell, 2003; Martell, 2007). Awareness of these issues can inform AoL system design/redesign and maintenance. Each of the four reasons is addressed in the following paragraphs.

Angelo (1999, n.p.) indicates that one reason assessment efforts fail is because they are “implemented without a clear vision of what ‘higher’ or ‘deeper’ learning is and without an understanding of how assessment can promote such learning.” This perspective can manifest itself in an obsession with measurement alone. While measurement is an important part of AoL, “measures have little value in and of themselves” (AACSB, 2003, p. 69). Martell and Calderon (2009) caution that “collecting data without acting upon it is a waste of resources and will not advance the school’s accreditation case” (p. 8). Fogarty (2009) concludes that “information that is collected has to be used…[t]hus, assessment exercises should not be fishing expeditions that produce information that cannot be used or is irrelevant to the purpose” (p. 166).

Another hindrance to AoL systems is an insistence on a higher level of “assurance” than necessary. Martell (2009) offers that “…the model [of]…scholarly inquiry is not an appropriate framework for pursuing assessment” (p. 211). Martell and Calderon (2009) conclude that what the AACSB requires is “an honest effort to investigate students’ learning through direct measures…not meeting standards for peer-reviewed research” (p. 24).

Waiting for the “perfect” AoL system before any implementation can also lead to failure. Martell and Calderon (2009) advise: “it is much more important to get started, knowing that there is room to improve as a result of experience, than [to wait] for the perfect assessment to come along” (p. 24). Rubin and Martell (2009) echo these sentiments: “assessment methods, regardless of how sophisticated or elaborate, contain imperfections” (p. 14).

Lack of faculty participation can also hinder assessment efforts. Sometimes an AoL system may unintentionally discourage faculty participation by failing to share AoL information and results with faculty (Betters-Reed et al., 2008), while at other times, the system may actively discourage participation by “allowing learning assessment to become punitive [which] defeats its very purpose—which is to help all educators improve their games” (Ewell, 2003, p. 31). Faculty often distance themselves from the AoL process because they see themselves as “individual contributors with sole purview over the courses they teach,” and they fail to recognize that “programmatic review requires faculty to share ownership of the program offerings of their institutions and to be responsible for the quality of the program as a whole, not just their courses” (Betters-Reed et al., 2008, p. 238). Martell (2007) summarizes this perspective when quoting an anonymous dean: “the problem is that we've lost sight of what it means to be part of a curriculum” (p. 194).

Hallmarks of unnecessary assurance of learning process complexity

Accreditation mentors, peer review team members and AoL consultants who review many schools’ AoL processes quickly recognize the hallmarks of unnecessary complexity and the telltale signs of dysfunction in the AoL process. These hallmarks result in reviewers probing more deeply to identify underlying problems that must be addressed to ensure that AoL processes are robust, systematic and sustainable.

Common hallmarks of unnecessary complexity driving dysfunctional AoL processes include:

  • significant effort being dedicated to the AoL process without meaningful data being collected, without meaningful student-oriented improvements being implemented and without documented improvement of student learning;

  • faculty being overwhelmed by the AoL workload and the amount of time required;

  • faculty failing to understand the purpose of AoL and/or being unable to articulate the basic steps in their school’s AoL process;

  • faculty engaging in AoL only to “check the accreditation boxes” without understanding why the data is collected, how the data is used or how critical faculty engagement in the AoL process is;

  • collecting ALL the data ALL the time;

  • focusing on data collection rather than on using the data to drive meaningful improvements;

  • failing to “close the loop” on assessment activities to evaluate improvements’ effectiveness and determine if additional improvements are necessary; and

  • students not improving, which is the ultimate hallmark of dysfunctional AoL processes.

Specific symptoms, underlying problems and solutions

The presence of one or more of these hallmarks indicates that the AoL system is overly complex. The following sections outline 13 specific symptoms of AoL dysfunction, identify the problems underlying each symptom and propose viable solutions that can increase the efficiency and effectiveness of dysfunctional AoL processes. Table 1 summarizes these common problems, identifies likely symptoms and provides possible solutions.

(1) Multiple accreditor-specific AoL processes

Frequently, schools design separate AoL systems or components to accommodate multiple disciplinary and regional accreditors’ requirements. This is often symptomatic of the mistaken assumption that accreditors (e.g. AACSB vs regional) have different requirements necessitating separate AoL approaches. However, designing and implementing separate AoL processes for each accreditor frequently results in redundancy and wasted effort. Addressing this problem requires finding the common ground across accreditors to build a single, robust AoL process that addresses accreditors’ common requirements efficiently and also allows flexibility to address each accreditor’s unique requirements. Since assessment has its genesis in core principles detailed in the scholarship of teaching and learning literature, all accreditors generally require the same elements: definition of a finite set of expected learning objectives/outcomes, measurement of those objectives, analysis of data against performance targets, implementation of improvements to benefit students and collection and analysis of data to evaluate the effectiveness of the improvements. In short, good assessment done well is good assessment irrespective of various accreditors’ standards. Thus, when assessment is viewed in terms of continuous improvement rather than accreditation compliance, finding areas of commonality that allow for unified AoL processes and system components is much easier. For example, data from a content knowledge measure could be labeled as discipline-specific knowledge for AACSB documentation but could also be used to satisfy a regional accreditor’s requirement to assess majors.

(2) Loops are not closed within the five-year accreditation cycle

Often, schools that find it difficult to measure each learning objective twice within a five-year accreditation cycle have too many goals and/or objectives per program. Alternatively, each program may have its own unique goals and/or objectives, so that the number of objectives to be measured increases as a function of the number of programs being assessed. This may result from a misguided effort to have an exhaustive AoL process, which rapidly produces an unmanageable and unsustainable data collection burden and leaves little time to identify and implement improvements. One way to prevent proliferation of objectives is to share common competency goals and objectives across programs at the same academic level (e.g. oral communication, written communication, ethics) or to share content knowledge goals and objectives across programs with a common core. Another solution is to limit or reduce each program’s learning goals and objectives to a more manageable number (e.g. four or five total objectives per program).

(3) Duplicated and/or disjointed data collection process

Disjointed data collection is another symptom of an overly complex AoL system. Duplication of processes across units (e.g. each department creates its own critical thinking goal, objective and measure) results in variance across units and, ultimately, difficulty aggregating data at the program level. This is indicative of a bottom-up, cobbled-together AoL process. When multiple units have different definitions and measures for common objectives (e.g. oral communication), aggregating the data and interpreting the results becomes difficult. The development of separate processes such as these often results from a focus on course rather than program assessment. One solution is to lead the assessment process “from the top down” by guiding faculty to recognize common AoL definitions, measures and processes. Another solution is to have faculty cooperatively create universal AoL definitions and measures to be applied across multiple programs at the same level (e.g. master’s degree programs).

(4) Holistic rubrics create vagueness

Another common symptom of unnecessary AoL complexity is assessors having difficulty using rubrics to score student artifacts (e.g. presentations), which impairs both the interpretation of the results and the ability to identify beneficial improvements. This difficulty likely results from the use of holistic rubrics that, when poorly written, use either vague language or multifaceted anchors (i.e. descriptions of performance levels) that encompass multiple behaviors. These design errors hamper the understanding and interpretation of results, making it difficult to identify the specific behaviors that students need to improve. However, even sound holistic rubrics are difficult to develop and require significant assessor effort to ensure consistent use. One effective solution is to capture the same content in an analytic rubric containing unitary rather than multifaceted behavioral anchors, which makes it easier to evaluate student performance, identify specific weaknesses and implement more viable learning improvements, as sketched below.
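
To make the holistic/analytic distinction concrete, the following minimal sketch shows one hypothetical way an analytic rubric with unitary anchors could be represented and scored; the criteria, anchor wording and three-point scale are illustrative assumptions, not taken from the article.

```python
from statistics import mean

# Hypothetical analytic rubric for oral communication: each criterion
# isolates a single behavior, and each anchor describes one performance
# level for that behavior alone. All labels and wording are illustrative.
ORAL_COMM_RUBRIC = {
    "organization": {
        3: "Ideas follow a clear, logical sequence throughout",
        2: "Sequence is mostly logical with occasional lapses",
        1: "No discernible organization of ideas",
    },
    "eye_contact": {
        3: "Maintains eye contact with the audience consistently",
        2: "Makes intermittent eye contact",
        1: "Reads from notes; rarely looks at the audience",
    },
    "vocal_delivery": {
        3: "Volume and pace are appropriate throughout",
        2: "Volume or pace occasionally hinders understanding",
        1: "Volume or pace frequently hinders understanding",
    },
}

def criterion_means(all_ratings: list[dict[str, int]]) -> dict[str, float]:
    """Average each criterion separately across student artifacts; a
    holistic rubric would collapse these into one number and hide the
    specific behavior that needs improvement."""
    return {c: mean(r[c] for r in all_ratings) for c in ORAL_COMM_RUBRIC}
```

Because each anchor captures a single behavior, a low mean on one criterion (say, eye contact) points directly at the behavior students need to improve, rather than burying it in an overall score.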

(5) Little return on investment

Most AoL processes include content knowledge goals and objectives for which gathering data can be quite time-consuming. This issue typically arises when content knowledge data is collected using course-embedded measures distributed across multiple courses. Schools frequently experience an inadequate return on effort invested in content knowledge assessment because there is a persistent student motivation issue: “Did students give their best effort, or did they merely race through the measure to get it done?” The solution to this challenge is to develop a faculty-created, stand-alone exam to collect all content knowledge data in a single instrument. Such an instrument may be administered in a campus testing center, completed on a school-wide “assessment day,” or embedded within a particular course. Beneficially, this approach relieves faculty of the data collection burden but engages them in both developing the measures and reviewing the results to identify student deficiencies. Student motivation can further be addressed by requiring a specific score on the exam as a progression requirement to gain entrance into a required course (e.g. a capstone course) or as a graduation requirement.

(6) Uncertainty regarding process and timeline

When faculty are confused about what is required of them in the AoL process and when tasks must be completed, the AoL process may be suffering from frequent incremental system improvements. Such an unstable assessment process is difficult for faculty to understand. A solution to this challenge is to create a consistent, cyclical data collection plan, to distribute this plan to faculty and then to execute the plan. For example, a cyclical plan may call for data collection of competency/skill learning objectives in odd-numbered years and review of the data to drive improvements in even-numbered years. Creating and distributing an assessment calendar detailing monthly assessment activities can also help faculty view assessment as a less overwhelming process. Providing visualizations of the plan can further increase transparency and understanding.
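
As a minimal illustration of how simple such a cyclical plan can be, the sketch below encodes a hypothetical two-year cycle; the timing and objective names are assumptions for illustration only.

```python
# Hypothetical two-year cycle: collect competency data in odd-numbered
# years; review results and implement improvements in even-numbered years.
CYCLE = {
    "odd":  ("collect data", ["oral communication", "written communication", "ethics"]),
    "even": ("review results and implement improvements", []),
}

def plan_for(year: int) -> tuple[str, list[str]]:
    """Return the scheduled activity and objectives for a given year."""
    return CYCLE["odd" if year % 2 else "even"]

print(plan_for(2023))  # ('collect data', [...])
print(plan_for(2024))  # ('review results and implement improvements', [])
```

A plan this regular is easy to publish as a calendar and easy for faculty to internalize, which is precisely the point.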

(7) Division of labor and delegation of responsibility

When “everyone” is responsible for AoL, then, practically speaking, no one is responsible for AoL. In keeping with this truism, faculty become unsure about their specific responsibilities for various AoL tasks. Such diffusion of responsibility is commonplace in assessment processes when a clear division of labor is lacking. However, the development of a clear, logical, efficient, specific and widely shared division of labor will improve faculty understanding of the AoL process. The AoL division of labor should detail specific tasks and responsibilities, identifying precisely to whom (e.g. system stewards, key committees, administrators, departments) each is assigned. A clear and efficient division of labor is a meaningful indicator to accreditation review teams of the system’s robustness and sustainability.

(8) Excess data

Overly complex assessment processes often require faculty to expend inordinate amounts of effort on continual data collection. This is often symptomatic of a misplaced focus on data collection and signals an inadequate understanding of the difference between course assessment and program assessment. With the bulk of the faculty’s effort dedicated merely to data collection, little time or energy remains to analyze, review and use the data to drive improvements. Streamlining data collection and incorporating a sound sampling strategy can alleviate this problem. Ensuring a representative sample of sufficient size to draw sound conclusions aids this streamlining. As mentioned previously, encouraging faculty to approach assessment as a student-focused, continuous improvement process rather than as a research project assists faculty in building and implementing an efficient and effective process devoid of unnecessary complications.
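
To illustrate how much smaller "a representative sample of sufficient size" can be than a census, the sketch below applies the textbook normal-approximation sample size for a proportion, n0 = z^2 p(1-p)/e^2, with a finite-population correction; the formula is a standard statistical convention and the figures are illustrative, not drawn from the article.

```python
import math

def sample_size(population: int, confidence_z: float = 1.96,
                margin_of_error: float = 0.05, p: float = 0.5) -> int:
    """Normal-approximation sample size for estimating a proportion
    (e.g. the share of students meeting a performance target), with a
    finite-population correction; p = 0.5 is the conservative choice."""
    n0 = (confidence_z ** 2) * p * (1 - p) / margin_of_error ** 2
    n = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n)

# A hypothetical program with 400 majors needs roughly half that many
# artifacts for a 95% confidence level and a 5% margin of error:
print(sample_size(400))  # 197
```

Assessing ~200 sampled artifacts instead of all 400 halves the rating workload while still supporting sound program-level conclusions.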

(9) Inefficiency reigns supreme

Faculty complaints that the data collection process is unproductive, cumbersome and time-consuming are an indication that the AoL process is inefficient. For example, there are likely to be multiple means of data collection and data submission, with some faculty submitting hard copy rubrics that necessitate data coding, others submitting spreadsheets and still others submitting assessment data via a learning management system. Standardizing and automating data collection (e.g. using a single, electronic system) increases the efficiency of faculty effort, improves the data’s validity and bolsters faculty confidence in the data and its use in the continuous improvement process.
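
A minimal sketch of what such a single, standardized format might look like, assuming a flat one-row-per-criterion-score record; every field name here is a hypothetical design choice, not a prescription from the article.

```python
import csv
from dataclasses import dataclass, asdict

@dataclass
class AssessmentRecord:
    """One standardized row per student artifact per rubric criterion,
    regardless of how the rater originally captured the score."""
    term: str
    course: str
    objective: str
    criterion: str
    score: int

def write_records(records: list[AssessmentRecord], path: str) -> None:
    """Append-friendly CSV export: one common schema replaces hard copy
    rubrics, ad hoc spreadsheets and LMS exports."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(
            f, fieldnames=["term", "course", "objective", "criterion", "score"])
        writer.writeheader()
        writer.writerows(asdict(r) for r in records)

# Example usage with hypothetical values:
write_records([AssessmentRecord("Fall 2023", "MGMT 3000",
                                "written communication", "organization", 2)],
              "aol_scores.csv")
```

Once every submission path produces the same record, coding, merging and validation effort largely disappears.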

(10) Disappearing data

In a dysfunctional AoL process, the last time faculty see the AoL data is when they submit it. This can be symptomatic of an AoL process that lacks the infrastructure to conduct analyses or to capture and disseminate results in a timely manner. Data disappearing into a “black hole” only reinforces the mistaken notion that AoL is merely a data collection process. Creating infrastructure to collect and analyze the data, routinizing repeated tasks and creating databases to generate standard reports can all speed the dissemination of results to faculty, which supports timely decision-making to improve student learning.
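
One possible shape for such infrastructure, sketched with a lightweight database so that the routine report becomes a single query rather than a manual compilation; the table, column names and the 2.0 target are assumptions for illustration.

```python
import sqlite3

# Minimal sketch: once standardized submissions land in one table,
# the "standard report" is a query that can be run on a schedule.
conn = sqlite3.connect("aol.db")
conn.execute("""CREATE TABLE IF NOT EXISTS scores
                (term TEXT, course TEXT, objective TEXT,
                 criterion TEXT, score INTEGER)""")

def standard_report(term: str, target: float = 2.0) -> list[tuple]:
    """Mean score per objective and criterion for a term, flagging any
    criterion whose average falls below the performance target."""
    return conn.execute(
        """SELECT objective, criterion, AVG(score),
                  CASE WHEN AVG(score) < ? THEN 'BELOW TARGET' ELSE 'met' END
           FROM scores
           WHERE term = ?
           GROUP BY objective, criterion""", (target, term)).fetchall()

for row in standard_report("Fall 2023"):
    print(row)
```

Routinizing the report this way returns results to faculty quickly, countering the "black hole" perception.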

(11) Meaningful assessment

When faculty find it hard to make meaningful recommendations to improve student learning, this can be a symptom of complex reporting requirements or a misguided focus on course assessment. While faculty are engaged in the AoL process by serving as assessors, individual faculty members are only able to conduct analyses and report results at the course level. In proper program assessment, analysis and review of program outcomes likely require aggregation of results across multiple courses. Thus, program-level results should be provided to faculty groups tasked with drawing data-driven (i.e. results-driven) conclusions about students’ weaknesses as demonstrated by students’ failure to achieve predetermined performance targets. The identification of program-level improvements is greatly facilitated when faculty groups, rather than individuals, review these aggregated results.
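
To illustrate the aggregation step, the short sketch below pools hypothetical course-level scores for one shared objective into the single program-level figure a faculty group would review; course names and scores are invented for illustration.

```python
from statistics import mean

# Hypothetical course-level results for one written-communication
# objective, scored with the same rubric in three different courses.
course_results = {
    "MGMT 3000": [3, 2, 3, 1, 2],
    "FIN 3100":  [2, 2, 3, 2, 1],
    "MKTG 3200": [1, 2, 2, 3, 2],
}

def program_level(results: dict[str, list[int]]) -> float:
    """Pool all scores across courses: this is the program-level figure
    the faculty group reviews, not three separate course averages."""
    pooled = [s for scores in results.values() for s in scores]
    return mean(pooled)

print(f"program mean: {program_level(course_results):.2f}")  # 2.07
```

No individual instructor can compute this number from their own course alone, which is why program-level conclusions belong to faculty groups.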

(12) Students are not improving

Despite large volumes of data being collected, faculty frequently lament that AoL does not work because students are not improving. This perspective likely indicates that the focus of the AoL process is misplaced. The AoL process may be incorrectly focused on data collection rather than data use, so improvements are rarely identified and implemented. To address this issue, the data collection process should be designed to gather only enough data to allow faculty review of aggregate results to identify program-level learning improvement. Prioritizing efficiency in the data collection process can simplify the process and yield more manageable data loads. Additional benefits can be achieved by conducting appropriate and meaningful analyses with results captured in high-impact graphics that make it easier for faculty to identify weaknesses and reason about which potential improvements might be most beneficial.

(13) Effectiveness of improvement

When it is difficult to determine whether or not curriculum improvements are working, it is likely that the AoL system fails to “close the loop” properly. This may result from improvements being inappropriately implemented based on course assessment rather than program assessment. Convening faculty groups to review results from a program-level perspective is an effective approach to redress this problem. Furthermore, supplying faculty groups with analyses and results in graphic form, including both initial results and the most recent results, is an effective means of supporting faculty in drawing conclusions at the appropriate level. Such comparisons also enable faculty to see and discuss the effectiveness of past improvements.
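
A sketch of the kind of before-and-after graphic described here, assuming matplotlib is available; the criteria, scores and target line are illustrative numbers, not results from the article.

```python
import matplotlib.pyplot as plt

# Illustrative closing-the-loop comparison: the same rubric criteria,
# measured before and after an improvement was implemented.
criteria = ["organization", "evidence", "mechanics"]
initial = [1.8, 2.1, 1.6]
recent  = [2.4, 2.2, 1.7]
target = 2.0

x = range(len(criteria))
width = 0.35
plt.bar([i - width / 2 for i in x], initial, width, label="initial")
plt.bar([i + width / 2 for i in x], recent, width, label="most recent")
plt.axhline(target, linestyle="--", label="performance target")
plt.xticks(list(x), criteria)
plt.ylabel("mean rubric score")
plt.title("Written communication: before vs. after improvement")
plt.legend()
plt.show()
```

A chart like this lets a faculty group see at a glance that the "organization" intervention worked while "mechanics" still needs attention.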

Conclusion

An efficient, sustainable AoL process is critical both to helping students improve and to successfully attaining or maintaining accreditation. Simplicity can be achieved by sharing goals, objectives and measures across programs at the same level and by sampling strategically so that only enough data is collected to draw meaningful program-level conclusions. Standardizing and routinizing tasks, opportunistically using course-assessment data to draw aggregate program-level conclusions and constructing facets of the AoL process to address the least common denominator across programs and accreditors also simplify the assessment process. Finally, advising faculty to set aside their research-oriented mentality, draw data-driven conclusions about inadequate student performance and implement improvements to redress deficiencies is critical.

Table 1. Problems, symptoms and potential solutions for an overly complex AoL process

| Problem | Symptom(s) | Possible solution(s) |
|---|---|---|
| (1) Multiple accreditor-specific AoL processes | Belief that different accreditors have different requirements | Build a single robust, efficient, effective AoL process based on common ground |
| (2) Loops are not closed within the five-year accreditation cycle | Too many goals and objectives per program and/or across multiple programs | Share common competency and content knowledge goals and objectives across same-level programs |
| (3) Duplicated and/or disjointed data collection process | Different measures are used to assess the same learning objective (often from a course focus) | Lead from the top down to have faculty cooperatively create universal measures applied across same-level programs |
| (4) Holistic rubrics create vagueness | Raters find the rubrics used to score artifacts hard to use | Capture the same content in an analytic rubric format |
| (5) Little return on investment | Content knowledge measures are distributed across courses | Collect all content knowledge measures in a single, faculty-created, stand-alone exam |
| (6) Uncertainty regarding process and timeline | Faculty do not understand the AoL process, or the process is too complicated or dynamic | Create and distribute a detailed assessment calendar outlining ALL AoL activities |
| (7) Division of labor and delegation of responsibility | Unclear division of labor and/or unclear AoL infrastructure | Develop a clear, logical, efficient AoL division of labor |
| (8) Excess data | Data collection process too extensive or overwhelming | Streamline the data collection process using a sound, representative sampling strategy |
| (9) Inefficiency reigns supreme | Multiple assessment data collection/submission processes | Standardize and automate the data collection process |
| (10) Disappearing data | No data handling infrastructure | Create a simple data handling infrastructure to routinize tasks |
| (11) Meaningful assessment | Reporting requirements too complex or course-focused | Use simplified reporting to focus on data-driven conclusions about student opportunities for improvement |
| (12) Students are not improving | AoL process is a data collection exercise | Collect ONLY enough data to draw sound conclusions about student opportunities for improvement |
| (13) Effectiveness of improvement | Loops are not being closed properly | Convene faculty groups to review initial and most recent program-level results and discuss past improvements’ effectiveness |

References

AACSB International (2003). 2003 business standards.

AACSB International (2013). 2013 business standards. Retrieved from www.aacsb.edu/educators/accreditation/business-accreditation/aacsb-business-accreditation-standards

AACSB International (2020). 2020 business standards. Retrieved from www.aacsb.edu/educators/accreditation/business-accreditation/aacsb-business-accreditation-standards

Angelo, T. A. (1999). Doing assessment as if learning matters most. AAHE Bulletin, 51(9), 3-6. Retrieved from www.aahea.org/articles/angelomay99.htm (presented without pagination).

Betters-Reed, B. L., Nitkin, M. R., & Sampson, S. D. (2008). An assurance of learning success model: Toward closing the feedback loop. Organization Management Journal, 5(4), 224-240. doi: 10.1057/omj.2008.26.

Covey, S. R. (1989). The 7 habits of highly effective people: Powerful lessons in personal change. Simon & Schuster.

Eschenfelder, M. J., Bryan, L. D., & Lee, T. M. (2014). Motivations, costs and results of AoL: Perceptions of accounting and economics faculty. Journal of Case Studies in Accreditation and Assessment, 3. Retrieved from www.aabri.com/manuscripts/121347.pdf

Ewell, P. (2003). The learning curve. BizEd, July/August 2003, 28-33. Retrieved from www.e-digitaleditions.com/i/62203-julyaugust2003/29?

Fogarty, T. J. (2009). Learning in business ethics courses: Initial ideas about content and assessment. In K. Martell & T. Calderon (Eds.), Assessment of student learning in business schools: Best practices each step of the way, 1(1). Retrieved from www.airweb.org/docs/default-source/documents-for-pages/reports-and-publications/assessmentstudentlearning1.pdf?sfvrsn=be668b1d_2

Gainen, J., & Locatelli, P. (1995). Assessment for the new curriculum: A guide for professional accounting programs. American Accounting Association.

Hamilton, D. M., & Schoen, E. J. (2009). Same song, second verse: Evaluation and improvement of an established assessment program. In K. Martell & T. Calderon (Eds.), Assessment of student learning in business schools: Best practices each step of the way, 1(2). Retrieved from www.airweb.org/docs/default-source/documents-for-pages/reports-and-publications/assessmentstudentlearning2

Martell, K. (2007). Assessing student learning: Are business schools making the grade? Journal of Education for Business, 82(4), 189-195. doi: 10.3200/JOEB.82.4.189-195.

Martell, K. (2009). Overcoming faculty resistance to assessment. In K. Martell & T. Calderon (Eds.), Assessment of student learning in business schools: Best practices each step of the way, 1(2). Retrieved from www.airweb.org/docs/default-source/documents-for-pages/reports-and-publications/assessmentstudentlearning2

Martell, K., & Calderon, T. (2009). Assessment in business schools: What it is, where we are, and where we need to go now. In K. Martell & T. Calderon (Eds.), Assessment of student learning in business schools: Best practices each step of the way, 1(1). Retrieved from www.airweb.org/docs/default-source/documents-for-pages/reports-and-publications/assessmentstudentlearning1.pdf?sfvrsn=be668b1d_2

Rexeisen, R., & Garrison, M. (2013). Closing the loop in assurance of learning programs: Current practices and future challenges. Journal of Education for Business, 88(5), 280-285. doi: 10.1080/08832323.2012.697929.

Rohrbacher, C. (2015). Humanities professors' conceptions of assessment in general education. Unpublished doctoral dissertation, Northeastern University.

Rubin, R. S., & Martell, K. (2009). Assessment and accreditation in business schools. In S. J. Armstrong & C. V. Fukami (Eds.), The SAGE handbook of management learning, education and development, pp. 364-383. SAGE Publications. Retrieved from https://sk.sagepub.com/reference/hdbk_mgmtlearning/n19.xml

Terenzini, P. T. (1989). Assessment with open eyes: Pitfalls in studying student outcomes. The Journal of Higher Education, 60(6), 644-664. doi: 10.1080/00221546.1989.11775076.

Corresponding author

Eric D. Bostwick can be contacted at: ebostwick@uwf.edu
