
Major-Program Assessment: College Initiatives

The College of Arts and Sciences includes a remarkably diverse collection of programs, ranging from Art to Zoology, with everything from Physics to Poli-Sci in between. These programs offer a wide variety of degrees, from the Bachelor of Music (BM) to the Doctor of Audiology (AuD). This diversity creates the ideal environment for assessment innovation.

ASC Curriculum and Assessment Services is interested in working with programs, departments, and groups of programs to explore new ways to improve the quality of major-program assessment practices in the arts and sciences, especially by helping program leaders balance disciplinary demands with departmental priorities, university goals, and broader community interests.

While program leaders are always welcome to schedule a consultation with the ASC Assessment Coordinator to discuss new ways to assess their particular programs, they may also find it valuable to participate in one or more of the following initiatives alongside other program leaders.

  • Dissertation Data Capture: In the College of Arts and Sciences, we’re piloting a new approach for gathering assessment data for doctoral programs by integrating the process directly into the dissertation defense. This approach should provide greater efficiency in gathering and processing richer, more authentic data. Learn more about this initiative below.
  • Stakeholder Conversations: One misconception about academic program assessment is that it’s all about a few professors verifying that students are learning the outcomes defined by a few other professors. Assessment should be more like a field study than an accounting audit. While program faculty must lead academic assessment efforts to ensure disciplinary validity, these efforts provide more valuable insights when they reconcile perspectives from a range of program stakeholders, from grad-student instructors to potential employers, from former students to community leaders. Learn more about this initiative below.

If you are interested in participating in either initiative for major-program assessment, please reach out to Dan Seward, Assessment Coordinator for the College of Arts and Sciences.


Dissertation Data Capture

In the College of Arts and Sciences, we're piloting a new approach for gathering assessment data for doctoral programs by integrating the process directly into the dissertation defense. This approach should provide greater efficiency in gathering and processing richer, more authentic data about the learning of advanced graduate students. 

Here’s How It Works:

  1. Preparation: Developing an outcomes-aligned rubric. Interested graduate programs would work with the ASC Assessment Coordinator to refine or create a rubric for evaluating dissertations in accordance with the doctoral program’s learning outcomes. (Note: Other program capstone projects may also be suitable—contact the ASC Assessment Coordinator for guidance in implementing similar methods.)
  2. Integration: Including the rubric in the dissertation defense. After the rubric has been developed, ASC Curriculum and Assessment Services (ASC-CAS) will generate a Qualtrics form and related messaging to explain the processes described in the steps immediately following. Links to the form and supporting information will be sent directly to the doctoral candidate and all members of the dissertation committee each time a defense is scheduled.
  3. Collection: Routine gathering of multiple data points at each defense. After reading the dissertation, but before the defense, all committee members, as well as the candidate, will be asked to enter scores on the rubric (i.e., the Qualtrics form). For the student, this will serve as a self-assessment to help them prepare for their upcoming defense. Then, at the defense, immediately before the closing conversation, all committee members will be asked to complete the rubric again—perhaps entering the same scores, perhaps adjusting them after discussion.
  4. Analysis: Routine reporting of rich assessment results. Annually, ASC-CAS will provide program directors with a dataset reflecting the scores submitted for the rubric, including subsets disaggregated according to roles (i.e., candidate, doctoral advisor, second reader, outside reader, etc.) and pre-/post-defense scoring, as sketched below. Where needed to ensure anonymity in presenting program-level results, ASC-CAS will pool data over multiple years or present only aggregate scoring of all committee members (students’ self-assessments removed, of course).
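
To make Step 4 concrete, here is a minimal sketch of the kind of disaggregation and anonymity pooling described, assuming a hypothetical CSV export of the rubric scores. The column names (role, timing, year, score), the filename, and the anonymity threshold are illustrative assumptions, not the actual Qualtrics export format.

    # Minimal sketch of the Step 4 disaggregation (hypothetical export format).
    import pandas as pd

    MIN_N = 5  # assumed anonymity threshold before pooling across years

    scores = pd.read_csv("rubric_scores.csv")  # hypothetical filename

    # Disaggregate mean scores by committee role and pre-/post-defense timing.
    by_role = scores.groupby(["role", "timing"])["score"].agg(["mean", "count"])

    # For program-level reporting, drop student self-assessments and, if any
    # single year is too small to preserve anonymity, pool across years.
    faculty = scores[scores["role"] != "candidate"]
    if faculty["year"].value_counts().min() < MIN_N:
        report = faculty.groupby("timing")["score"].agg(["mean", "count"])
    else:
        report = faculty.groupby(["year", "timing"])["score"].agg(["mean", "count"])

    print(by_role)
    print(report)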

Chief Benefits of this Approach

  • Graduate programs can “outsource” the primary data collection and management processes to ASC-CAS without relinquishing the essential evaluative role of faculty in assessing learning.
  • More faculty members will contribute data for program assessment (assuming current practices rely only on the dissertation director’s or a program director’s scoring).
  • Disaggregated results reflecting multiple perspectives will provide richer datasets for analysis, allowing comparisons between student self-assessments and faculty perspectives, as well as comparison among faculty responses reflecting different roles for each dissertation.
  • The very process of pre-defense and post-defense scoring can frame the defense within the context of the program learning outcomes.
  • Centralized management of the assessment instruments (the rubric and form) ensures uninterrupted data collection even when program directors or administrative personnel change.
  • Centralized data management allows for greater security and more efficient processing. 

How to Participate

If you are interested in participating in this initiative for doctoral program assessment (or a similar capstone-based assessment activity), please reach out to Dan Seward, Assessment Coordinator for the College of Arts and Sciences.


Stakeholder Conversations

Program stakeholders include any beneficiary of the program, as well as anyone who has expended time, effort, or other resources to help the program succeed. For academic programs, students and faculty represent the core stakeholders, but others who already support the interests of programs can further contribute by participating in the assessment process.

Academic program assessment should, ideally, generate conversation among faculty about what they value as academic specialists in a particular field and about how they recognize those qualities in student work. However, such valuable conversation can often be sparked by learning what others are seeing or expect to see in students’ work. These perspectives are often gathered through indirect methods of assessment, such as surveys or focus groups. Yet, when given the right guidance, non-faculty stakeholders can also participate meaningfully in direct assessments of learning outcomes, which is the focus of this initiative.

ASC-CAS is interested in working with undergraduate major programs to coordinate assessments that engage one or more of the following stakeholder groups in the direct assessment of one or more program learning outcomes (PLOs).

  • Graduate Students: Assessment is a generative activity for professional development, especially when it leads to dialogue between more and less experienced professionals in the field, discussing what the field values in undergraduate student work and how those qualities are recognized.
  • NTT Faculty: Teaching professors, clinical faculty, and lecturers play crucial instructional roles in many programs. Their experience working directly with large numbers of students can yield special insights, and they may themselves welcome more direct engagement in the dialogue about the disciplinary and academic values that shape the program’s curriculum.
  • Faculty in related departments: Many programs have curricular interfaces that could benefit from an exchange of assessment input and concrete discussion of how students integrate different fields, both reaffirming and reforming the relationship between the programs.
  • Practitioners in the professional community, including alumni or advisory board members: These potential employers and coworkers of program graduates have practical professional interests in engaging in dialogue with faculty at a top-tier university about how the program prepares students for their professional lives, and faculty can benefit from learning more about what practitioners are doing and seeing in the field, especially as it pertains to the program’s graduates.
  • Members of local, national, or global service communities: For some programs, faculty may benefit from hearing directly from practitioners involved in community initiatives that depend on the disciplinary knowledge of civically engaged and service-motivated students coming from the program.

NOTE: As suggested in the bullets above, these varied stakeholders each might value the experience of participating in assessment for their own reasons. But, in addition to the intellectual and pragmatic benefits, other tokens of appreciation can be offered to participants. Aside from the logistical support described below, the Assessment Coordinator may also be able to provide financial support for a limited number of programs to pilot assessment methods that engage non-faculty program stakeholders. (The availability and application of these funds will depend on the details of the assessment plan.)

Here’s How It Works (a tested routine that can be adjusted to fit program needs):

  1. Preparation: Recruiting, Rubrics, and Reusable Resources. Interested undergraduate programs would work with the ASC Assessment Coordinator to recruit participants from one or more of the stakeholder groups listed above; ideally, some core faculty would participate too. Additionally, program leaders would develop an evaluative rubric reflecting one or more PLOs. Throughout the process, the ASC Assessment Coordinator can provide resources and guidance to ensure the assessment methods can be readily replicated in future assessment cycles.
  2. Data Collection: Once program leaders have determined which outcome(s) to assess, they’ll need to decide how to collect student work that can demonstrate the outcome(s), whether papers, portfolios, exam responses, problem sets, or some other kind of artifact. The ASC Assessment Coordinator can provide resources, guidance, and other support for data collection and storage methods, including those needed to ensure security and privacy.
  3. Stakeholder Conversation 1 (“Assessment Orientation”): This is the first of two meetings between a) program leaders, b) participating program faculty, and c) the non-faculty stakeholders. The focus of the session—aside from facilitating interaction between faculty and non-faculty stakeholders—is to guide all participants in applying the assessment instruments to specific examples of student work. The ASC Assessment Coordinator can provide resources to facilitate this discussion, which will necessarily vary depending on the methods used.
  4. Scoring Student Work: After the “orientation” session, participants are sent documents (e.g., papers, problem sets, etc.) to score using an online form. Prior to this step, as part of Steps 1 and 2, the Assessment Coordinator and program leaders identify the optimal scoring load based on a) a statistically meaningful sample size for the program population; b) the number of raters recruited; and c) a reasonably limited time commitment for each rater (e.g., six hours). A worked sketch of this arithmetic follows this list.
  5. Results Processing: Once the scores are collected (usually a few weeks after the orientation session), they are processed, which includes validating results, reconciling them with any other measures (e.g., surveys), and disaggregating according to available variables (e.g., scoring group, faculty vs. non-faculty, or student pathways). Here, too, the Assessment Coordinator can provide support in the form of resources (tools, templates, etc.) and guidance.
  6. Stakeholder Conversation 2 (“Reflection on Assessment Experience”): Finally, all the participants meet a second time to discuss the results of the program assessment. This provides an excellent opportunity for further discussion of the program’s values and what those look like in student work, this time from the perspectives of various audiences speaking about a concrete sample of student work. For program leaders, this discussion can provide the raw material for a fuller report on the interpretation and implications of the assessment results.
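
As an illustration of the scoring-load arithmetic in Step 4, here is a minimal sketch using assumed numbers; the actual sample-size target, ratings per document, and time estimates would be worked out with the Assessment Coordinator for each program.

    # Minimal sketch of the Step 4 scoring-load arithmetic (assumed numbers).
    import math

    sample_size = 60      # assumed statistically meaningful sample of documents
    ratings_per_doc = 2   # assumed: each document scored by two raters
    raters = 12           # faculty + non-faculty stakeholders recruited
    minutes_per_doc = 20  # assumed scoring time per paper or problem set

    total_ratings = sample_size * ratings_per_doc
    docs_per_rater = math.ceil(total_ratings / raters)
    hours_per_rater = docs_per_rater * minutes_per_doc / 60

    print(f"Each rater scores {docs_per_rater} documents, "
          f"about {hours_per_rater:.1f} hours of a six-hour budget.")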

Chief Benefits of this Approach

  • Program faculty can hear a variety of perspectives on concrete examples of student work.
  • When raters are NTT or graduate student instructors, participation in PLO assessment can serve as both professional development and inclusive program development.
  • When raters are from professional or local communities, the practice of PLO assessment can serve as a form of program outreach and engagement.

How to Participate

If you are interested in participating in this initiative for undergraduate program assessment, please reach out to Dan Seward <seward.65>, Assessment Coordinator for the College of Arts and Sciences.