Major Program Assessment: Developing Assessment Plans

A program assessment plan is a living document initially composed during the program proposal process (see p. 53 and Appendix 9, which starts on p. 78, of the “ASC Operations Manual”). The initial plan should lay out a multi-year cycle for assessing how well a program is meeting its goals, which are articulated in a concrete list of measurable student learning outcomes.

Basic Expectations

The sections further below go into more depth about planning than is required for the program proposal. However, it doesn’t hurt to start thinking about the more specific details of who will participate in assessment activities, when they will take place, and how they will be conducted. In fact, such considerations, including developing rubrics or test questions that will ultimately be used to assess learning outcomes, can serve as heuristic devices, aiding in the articulation of the outcomes themselves. This section, in any case, identifies some basic considerations for developing assessment plans both for the program proposal and once the program is approved.

Meeting annual and multi-year reporting requirements:

  • At least one learning outcome should be assessed with a direct method each year. For a discussion of direct vs. indirect methods of outcomes assessment, see Appendix 9 of the ASC Operations Manual.
  • All learning outcomes should be assessed with direct methods over the course of three years. These assessments should, moreover, be based on a representative sample of the major program’s population, in particular, those students who have completed intermediate and advanced coursework in the program’s curriculum.

Gathering essential planning materials:

  • The outcomes listing included within the approved program proposal
  • The program’s curriculum map, also submitted with the program proposal
  • A list of program personnel supporting assessment

Submitting assessment plans:

  • With the program proposal: As noted and linked above, expectations and examples appear in the ASC Operations Manual.
  • In Nuventive.Improve: Once the program is approved, assessment plans must also be entered into the university’s program-assessment reporting system. To learn more about this part of program assessment, see the “Major-Program Assessment: Annual Routine” page.

Choosing Methods and Scheduling Outcomes Reporting

No small part of developing an assessment plan is allocating and coordinating program resources to ensure adequate coverage of outcomes during a three-year assessment cycle. Different methods require different amounts and allocations of resources. Methods also vary in terms of how valid they are for measuring student achievement of learning outcomes and how well they reflect a representative sample of the program. The goal is to assess all outcomes using methodologies that provide truly meaningful data for analysis and decision making, yet without overextending program resources. Here are some considerations for choosing methods in the process of developing a feasible assessment plan:

  • Pre-developed, field-approved testing instruments may be available, but at a cost. For some programs, in fact, students are already asked to take licensure and certification exams. When portions of these exams can be aligned with specific learning outcomes, the exams provide suitable direct assessments. Be aware, however, that such testing instruments tend to be more costly, and their standardized formats often preclude adjustments based on a program’s local variables, including special goals of the program and the diversity of the student body.
  • Coursework can be evaluated for direct assessment, albeit with special requirements. “Course embedded” data, including papers, presentation recordings, exam questions, or other substantive demonstrations of learning, may be evaluated by qualified raters. However, course grades, assignment grades, or any scoring wherein the evaluation is not directly correlated with a specific program learning outcome does not qualify as direct assessment (see more below).
  • Portfolio methods give students a greater role in data collection, but require guidance. For some programs, it may be entirely appropriate for students to select which of their past coursework best represents their own learning, producing artifacts that can then be evaluated by qualified raters. However, students need guidance in making these selections. Such guidance can be provided in a capstone course or by a student advisor. With the right protocols, student-developed portfolios can be used for direct assessment of all of a program’s outcomes.
  • Student-sentiment surveys provide indirect assessments when aligned with outcomes. By asking students to rate themselves on various outcomes-related competencies, program administrators can learn much about how self-assessment aligns with direct measures. These self-assessments are generally easy and inexpensive to implement and analyze, and so may be worth doing even though they don’t provide a direct assessment of learning outcomes.
  • Student success rates, grades, and course evaluations don’t typically assess learning outcomes directly. When a specific course is directly aligned with an outcome, these forms of data can serve as indirect measures. Otherwise, these data points are generally considered ancillary to a program’s assessment of learning outcomes, since they may reflect other factors affecting the final course evaluation (attendance, participation, etc.). Even so, reconciling such data with data more directly correlated with learning outcomes can make the latter more meaningful to interpret.

This list mentions some common methods in broad terms. Whichever methods are used, program administrators need to schedule reporting of direct measures for each outcome at least once every three years. Note that data collection for most assessment methodologies can be ongoing, even when reporting on outcomes is staggered from year to year. For instance, a program using a portfolio methodology can, in one year, assess one outcome using portfolios submitted over the past three years, and then, the following year, assess another outcome using the same repository of portfolios.

Working Out the Details

The selection of particular methods and the timing for assessing particular outcomes is a starting point, but the plan for assessing each outcome also needs to be articulated in actionable terms if it is to come to fruition. The plan should ideally identify specific data sources, necessary measurement instruments (whether already created or not), and the program personnel responsible for developing new instruments, collecting data, and processing results. The subsections below help you identify these details:

Identifying Data Sources

While some details will be unavailable at the time the initial plan is developed, expect to gather the following information in fairly specific terms, depending upon the methods being used. Note, too, that the bullets below don’t address all methods, so you may need to identify different kinds of data sources.

  • Courses from which student work will be gathered: Since program proposals require a curriculum map identifying where each outcome is treated, the likely data sources will already be clear. For each outcome, identify one or more courses wherein students are expected to demonstrate intermediate or advanced achievement. Questions to consider: What activity or assignment will generate the data for the assessment? Does it need to be created? Which courses will allow for the most representative sample of the major’s population?
  • Portfolio apps or other technological platforms outside the LMS: Data collection need not be associated with specific courses or assignments. Portfolio applications and similar platforms allow students to select their own data to demonstrate learning across multiple courses. Survey platforms offer easy-to-analyze tabular reports on students’ responses. Questions to consider: Will students need to set up an account? Do they need training for the tool?
  • Institutional data repositories, data sets, and reports: The university and college already gather information that may be relevant to program assessment, even if it is not useful for measuring learning outcomes directly. Such sources nonetheless provide data that can be used to form different kinds of indirect assessments, as well as to make the interpretation of direct assessment results more meaningful. Questions to consider: Can specific data elements be aligned with specific outcomes? Can specific data elements be responsibly reconciled with locally gathered data about individual student sentiment or achievement?
  • Field-expert reports on student performance outside coursework: For some programs, students may perform fieldwork under the supervision of a qualified mentor. For instance, graduate programs may include teaching observations from a teaching coordinator to assess students’ abilities to communicate knowledge related to the field of study. Questions to consider: Who arranges the observation? How will expert opinions be gathered? What instruments will be used to allow the evaluator to focus on specific program-related outcomes?
  • Professional organizations or services providing standardized assessments: While passing a standardized test or receiving a certification cannot serve as a learning outcome itself, even if it’s a goal of the program, some exams can contribute to valid assessment methods. Questions to consider: Are the results for individual students available for analysis? Are exam questions able to be aligned with specific learning outcomes? Is taking the exam a program requirement?

Identifying Specific Data-Collection Instruments

Although the instruments used to collect data may evolve over time, an initial assessment plan should identify the types of instruments required for the chosen method. For each of these, the plan ideally indicates who is responsible for developing the instrument, deploying it, and, finally, processing the data collected (see more in the next section). Here are some common data-collection instruments used for program assessment, as well as some brief commentary on their applications:

  • Examination questions: Outcomes can be assessed with either higher-order essay questions or a cluster of short-answer questions that measure achievement of a desired learning outcome.
  • Project and paper assignments: Substantial student-developed artifacts may even allow for the assessment of multiple outcomes, depending on the scope of the assignment and rubric.
  • Presentation prompts: Like papers and projects, presentations may allow evaluators to review performance across multiple outcomes.
  • Evaluation rubrics: Rubrics are essential for all methods requiring a performance rating. Rubrics used for learning assessment should be oriented around specific outcomes, identifying one or more criteria for measuring the quality of student performance.
  • Surveys: When used for indirect assessment of learning outcomes, individual questions or clusters of questions should be clearly associated with particular outcomes. However, surveys may also be used to learn other information about students’ experiences—data that can allow for more meaningful analysis of related assessment results.
  • Data repositories: While some instruments store information in a format conducive to data processing, such as a table, some methods may require that a more complex dataset be developed, or they may necessitate collection of data over a longer period of time. Important: When student data is stored outside classroom contexts, data security and privacy protocols also need to be developed. For more guidance on long-term or complex data collection, consider consulting with the ASC Assessment Coordinator.

Identifying Who Will Do What

This is probably the most important detail of the plan. No matter the data sources or instruments used, people must ultimately implement the plan. Even when methods rely on standardized data-collection instruments, for instance, someone needs to process, analyze, and report on the data. Below are a number of specific tasks that someone will need to perform. Depending on the program and methodologies used, these tasks may be divided among various program stakeholders, or they may be the focused activities of just a few individuals:

  • Securing or allowing access to data sources
  • Developing instruments for data collection and evaluation
  • Collecting and securing data
  • Deidentifying and pre-processing data before analysis
  • Analyzing data and its implications for program administration

For some specific suggestions on how such activities might be assigned to various program personnel, consult the section on “Who Is Responsible for Assessment” on the “Major-Program Assessment: The Basics” page.