Assessment Dictionary

The following Assessment Dictionary is intended as a resource for faculty and staff. The most commonly used terms and definitions in assessment are provided here for convenience.

Assessment: The ongoing process of:

  • establishing clear, measurable outcomes of student learning,
  • ensuring that students have sufficient opportunities to achieve those outcomes,
  • systematically gathering, analyzing, and interpreting evidence to determine how well student learning matches those expectations, and
  • using the resulting information to understand and improve student learning (Suskie, L., 2009, p. 4).

Assurance of Learning: Processes for demonstrating that students achieve learning expectations for the programs in which they participate (Association to Advance Collegiate Schools of Business, AACSB, pp. 32-33).

Bloom’s Taxonomy of Learning Domains: Hierarchical ordering of learning objectives by complexity and specificity to promote higher levels of learning.

Case Study: An intensive, detailed description and analysis of a single project, program, or instructional material in the context of its environment (Westat, J. 2002).

Categorical scale: A scale that distinguishes among individuals by putting them into a limited number of groups or categories (Westat, J. 2002). If the categories have numbers assigned to them, the number does not refer to a quantity or amount but to a type or kind of category.

Classroom Grading: Course performance judged by the instructor of record for a course. It can provide an indirect measure of student learning; however, uncorroborated judgment within a class does not typically meet the more stringent requirements advocated by accrediting agencies (Pusateri, T., 2009).

Co-curricular: Learning activities, programs and experiences that reinforce the institution’s mission and values and complement the formal curriculum. (HLC Proposed Criterion Revision).

Competency: A general statement of student learning that lacks context and is not directly measurable; a general statement of the skill areas in which students should be competent (Hatfield, S., & Rogers, G., 2018).

Criterion-referenced Test: Test whose scores are interpreted by referral to well-defined domains of content or behaviors, rather than by referral to the performance of some comparable group of people (Westat, J. 2002).

Cross-sectional Study: A cross-section is a random sample of a population, and a cross-sectional study examines this sample at one point in time. Successive cross-sectional studies can be used as a substitute for a longitudinal study. For example, examining today’s first-year students and today’s graduating seniors may enable the evaluator to infer that the college experience has produced, or can be expected to accompany, the difference between them. The cross-sectional study substitutes today’s seniors for a population that cannot be studied until four years later (Westat, J. 2002).

Direct Assessment: Federal regulations define a direct assessment competency-based educational program as an instructional program that, in lieu of credit hours or clock hours as a measure of student learning, uses direct assessment of student learning relying solely on the attainment of defined competencies, or recognizes the direct assessment of student learning by others (Commission on the Accreditation of Health Management Education, CAHME).

Direct Evidence: Tangible, visible, self-explanatory, and compelling evidence of what students have learned and not learned. The product assessed has grading criteria with rigorous standards (Suskie, L., 2009, p. 20).

Direct Measures: Based on student performance of program activities within courses or program-sponsored experiential learning opportunities (CAHME).

Embedded Assessment: Departments demonstrate efficient planning when they embed assessment practices in existing coursework. The department agrees on the courses in which this data collection should occur, collectively designs the strategy, and uses the data to provide feedback about student progress within the program (Pusateri, T., 2009).

Expected Outcomes: Broad or high-level statements describing impacts the school expects to achieve in the business and academic communities it serves as it pursues its mission through educational activities, scholarship, and other endeavors. Expected outcomes translate the mission into overarching goals against which the school evaluates its success (AACSB, p. 16). 

Evaluation Processes: Interpret the data and evidence accumulated through the assessment process and determine the extent to which student outcomes and program educational objectives are being attained. Thoughtful evaluation of findings is essential to ensure that decisions and actions taken as a result of the assessment process will lead to program improvement (Accreditation Board for Engineering and Technology, ABET). 

Goal: States what your college or program aims to achieve (e.g., students learn cultural competence). Goals are broad concepts or categories of expected learning.

Good Practice: Practice that is based in the use of processes, methods and measures that have been determined to be successful by empirical research, professional organizations and/or institutional peers (HLC Proposed Criterion Revision).

Indirect Evidence: Proxy signs that students are learning. This type of evidence is less clear and convincing (e.g., course grades, student/alumni attitudes, student participation rates in research, career placement, etc.). Student reflection, in which students report, describe, or reflect on their learning, is also a form of indirect assessment (Hatfield, S., & Rogers, G., 2018).

Learning Goals: State the educational expectations for each degree program. They specify the intellectual and behavioral competencies a program is intended to instill. In defining these goals, the faculty members clarify how they intend for graduates to be competent and effective as a result of completing the program (AACSB, p. 32).

Learning Outcome: Statement identifying what students will be able to do as the result of study in the program. (See objective and goal; Hatfield, S., & Rogers, G., 2018).

Objectives: More detailed aspects of goals. For example, a goal might be for students to have the ability to explain concepts in writing; more detailed objectives might be for students to have the ability to write essays and critique the writing of their peers.

Performance Indicators: Represent the knowledge, skills, attitudes, or behavior students should be able to demonstrate by the time of graduation that indicate competence related to the outcome (ABET). Specific, measurable statements identifying what students will be able to do as the result of study in the program. Well-stated performance indicators (1) provide faculty with clear direction for classroom implementation and (2) make expectations explicit to students (Hatfield, S., & Rogers, G., 2018).

Program Educational Objectives: Based on the needs of the program’s constituencies and are expressed in broad statements that describe what graduates are expected to attain within a few years of graduation (ABET).

Program Review: Comprehensive evaluation of an academic program that is designed to both foster improvement and demonstrate accountability. Assessment results can be incorporated into program review, but program review is broader than assessment.

Rubric: A detailed guide for scoring an assessment product. An analytic rubric describes different levels of performance (Hatfield, S., & Rogers, G., 2018).

Student Learning Outcomes: Statements that clearly specify the knowledge, skills, attitudes, competencies, and habits of mind that students are expected to acquire at an institution of higher education (National Institute for Learning Outcomes Assessment, NILOA).

References

Accreditation Board for Engineering and Technology, Inc. (n.d.). Assessment Planning. Retrieved from http://www.abet.org/accreditation/get-accredited/assessment-planning/.

Commission on the Accreditation of Health Management Education (CAHME). (2018, May). Self-Study Handbook for Graduate Programs in Healthcare Management Education. Retrieved from https://cahme.org/files/resources/CAHME_Self_Study_Handbook_Fall2017_RevisedMay2018.pdf.

The Association to Advance Collegiate Schools of Business (AACSB) (July 1, 2018). 2013 Eligibility Procedures and Accreditation Standards for Business Accreditation. Retrieved from https://www.aacsb.edu/-/media/aacsb/docs/accreditation/business/standards-and-tables/2018-business-standards.ashx?la=en.

The Higher Learning Commission. (n.d.) Criteria for Accreditation Terminology. Retrieved from https://www.hlcommission.org/Policies/glossary-new-criteria-for-accreditation.html.

Hatfield, S., & Rogers, G. (2018). Higher Learning Commission, Assessing General Education Workshop Material.

Higher Learning Commission. Beta Revision November 2018. Draft Criteria for Accreditation. Retrieved from http://download.hlcommission.org/ProposedCriteriaRevision_2018-11_POL.pdf.

National Institute for Learning Outcomes Assessment (NILOA). (n.d.) Providing Evidence of Student Learning: A Transparency Framework. Retrieved from http://www.learningoutcomeassessment.org/TFComponentSLOS.html.

Pusateri, T. (2009). The Assessment CyberGuide for Learning Goals and Outcomes. American Psychological Association (APA), Board of Educational Affairs (BEA). Retrieved from https://www.apa.org/ed/governance/bea/assessment-cyberguide-v2.pdf.

Suskie, L. (2009). Assessing student learning: A common sense guide. San Francisco: Jossey-Bass.

Westat, J. F. (2002). The 2002 user-friendly handbook for project evaluation. Retrieved from https://www.nsf.gov/pubs/2002/nsf02057/nsf02057.pdf.