
Feb. 15, 2024

A New Form of Accountability in JPME: The Shift to Outcomes-Based Military Education

By Kristin Mulready-Stone | Joint Force Quarterly 112


Then Chairman of the Joint Chiefs of Staff General Mark A. Milley congratulates a National War College graduate during National Defense University’s 2023 graduation ceremony
Kristin Mulready-Stone is Professor and Director of the Writing and Teaching Excellence Center at the U.S. Naval War College, and in 2023–2024, a Visiting Professor and the Harold K. Johnson Chair of Military History at the U.S. Army War College.

The programs responsible for teaching joint professional military education (JPME) Phases I and II are in the early stages of a significant overhaul of how they demonstrate to the Joint Staff that they are fulfilling their mission of educating and developing leaders from the U.S. joint force and interagency community, as well as officers from allied and partner countries around the world. These institutions of higher education operate under the Officer Professional Military Education Policy (OPMEP), the latest version of which is OPMEP Foxtrot (OPMEP-F).1 Under previous versions, JPME programs were required simply to demonstrate that they were covering the content that congressional statutes require them to deliver. That process was assumed sufficient to ensure that the programs were teaching what needed to be taught and that students were learning what they needed to learn. OPMEP-F, released on May 15, 2020, introduced a wholesale change in how JPME programs must prove to the Joint Staff that they are accomplishing their objectives, specifically by demonstrating that their graduates have reached the appropriate level of achievement on defined learning outcomes. Mandating that JPME programs adopt this process represents a shift to outcomes-based military education (OBME).2

This shift in methodology brings JPME in line with standard practice across postsecondary education in civilian academia—not only in the United States but also across much of the world—through a practice commonly known as assessment in outcomes-based education.3 This article serves as a primer on this kind of assessment for JPME faculty and administrators and anyone else interested in JPME. It introduces some of the terminology, explains some of the benefits, provides a brief historical overview, points out strengths in the transition so far, and identifies caveats for JPME as the shift to OBME continues.

In civilian higher education, the terms assessment, outcomes-based education, and variations such as outcomes assessment are used interchangeably. The fundamental intention underlying assessment is to ensure that students are in fact learning what their professors, departments, and programs intend for them to learn. This represents a change in emphasis in JPME programs’ accountability from merely proving they are covering the required content to additionally providing evidence that graduates have reached a sufficiently high level of achievement of Program Learning Outcomes (PLOs) through a process referred to as measuring outcomes. In other words, JPME programs now are not only required by statute to cover certain content and assign, grade, and provide feedback on papers, exams, and other projects, but they must also institute a new evidence-based process specifically designed to determine whether students are reaching a sufficiently high level of achievement on what their programs intend for them to learn, as articulated in a program’s learning outcomes.

The Joint Staff coined the term outcomes-based military education to describe this expansion in focus from content alone (covering statutory requirements) to content and outcomes (covering the content and demonstrating students have learned what they are supposed to learn) under OPMEP-F. Central to this process is that all programs develop their own PLOs that accurately reflect their unique emphases on areas such as maritime power, airpower, land-based warfare, intelligence operations, or cyber warfare, among others. Variations among the programs are expected and valued, with the caveat that all PLOs must sufficiently align with six Joint Learning Areas identified by the Joint Staff.4 This is necessary since all programs are delivering JPME, and there must be enough commonality among them to ensure that graduates from every school have the requisite knowledge and abilities in areas that include jointness, warfighting, strategy, and the profession of arms.5

There are many challenges associated with a shift to outcomes-based education, not least of which is ensuring that administrators, faculty, students, and external stakeholders not only appreciate the value of assessing outcomes but also understand the difference between grading assignments and assessing outcomes.6 A common response from faculty hearing about assessment for the first time is, “I assess my students all the time. I grade their papers and exams, I evaluate their understanding of the readings through their class participation, and I assign grades. I’m assessing them.” But grading is not outcomes assessment.

One vivid example of how assessment provides different information from grading presented itself at the Naval War College early in our assessment efforts. All the JPME core departments developed their Course Learning Outcomes (CLOs), which articulate what students should know and be able to do at the end of a particular course, as opposed to PLOs, which define what they should know and be able to do after taking all the required courses in a JPME program. One of the departments had carefully developed CLOs that were an accurate reflection of what the department intended students to learn. But when it came time to map existing course assignments to those CLOs—ensuring that each assignment is clearly linked to one or more of the CLOs and allows students to demonstrate sufficient achievement of those CLOs through their coursework—this department found that the research paper that students spent most of the term working on did not actually align with any of the department’s declared CLOs. This meant that a student could write a very good research paper, get a high grade on it, and learn a great deal about the topic, yet make no progress toward achieving the CLOs, despite having devoted dozens or hundreds of hours over many weeks to research and writing.

That is, the department discovered it had assigned a task that was insufficiently connected to what that department thought its students should learn. If any assignment—let alone the most time-consuming assignment in a course—does not contribute to a student achieving the outcomes of a course or a program, this is a problem that must be remedied. Simply grading the research papers had not revealed the problem. Developing outcomes and assessing students’ mastery of those outcomes through the research paper, on the other hand, threw the problem into stark relief.7

Realizing that an assignment does not directly contribute to students’ achieving the specific course or program outcomes does not necessarily mean the faculty should eliminate the assignment—instead, they should adjust it so that it clearly aligns with CLOs and PLOs. In the case of this course’s research paper, one possible fix would be to change the guidance to students on appropriate research topics so that conducting research and writing the paper contribute to a specific course learning outcome. In this example, the assessment process made clear that although the research paper was not in line with intended learning outcomes, relatively minor adjustments would solve the problem, strengthen student learning, and improve mastery of outcomes.

This kind of revelation about the utility of an assignment can easily be missed in the absence of a carefully designed assessment process. Nevertheless, until the process is developed, operationalized, and generating useful data and insights, resistance to a new outcomes-based assessment requirement is common, expected, and often pronounced among faculty at both civilian and military schools. This is unsurprising, since a new mandate to conduct assessment affects all teaching faculty, occupying time they could otherwise devote to teaching, research, writing, and publishing, and it can feel like just the latest arbitrary requirement that noneducators are inflicting on educators. Many faculty believe outcomes-based assessment will simply go away if they ignore it long enough. JPME programs should be prepared for similar responses, given the decades-long pattern of such resistance in civilian higher education. Tammie Cumming, L. Jay Deiner, and Bonne August emphasize the importance of respecting people’s time when shifting to outcomes-based education, noting:

Colleges and universities are busy places where everyone is balancing multiple competing priorities; time is the greatest commodity. Faculty, staff, and administrators will quickly come to resent anything that requires a large investment of time for little payoff. Therefore, it is critical to examine the assessment process to make sure that busywork and time burdens are minimized.8

The requirement to assess learning outcomes is new in JPME, and many faculty are unaware of how standard such requirements are in higher education. Because outcomes assessment is so well established in civilian academia, there are many lessons that JPME programs could and should learn from their civilian counterparts, including how to avoid making outcomes assessment more time-consuming than necessary and how to sidestep other common pitfalls.

Even many faculty involved in outcomes assessment at civilian institutions, however, are unaware of its long history, which is worth knowing. What is recognizable today as outcomes assessment in higher education developed over the course of several decades and has endured and grown for some 30 years. In “History and Conceptual Basis of Assessment in Higher Education,” Peter Ewell and Tammie Cumming provide a detailed overview of strategies designed to remedy an array of problems in postsecondary education from the 1960s onward that became the unintentional starting point of outcomes assessment. The issues included “academic and social integration” on campuses to prevent student attrition, mandatory program evaluation that came with large-scale Federal programs in the 1960s and 1970s, and “the wider movement toward ‘scientific’ management that quickly found applications in higher education in the form of strategic planning, program review, and budgeting,” among others.9 Ewell and Cumming emphasize that the methods developed in an effort to solve such problems coalesced over the course of a few decades into a methodology for outcomes assessment.

Three different approaches to assessment appeared in the 1970s and 1980s, all of which endure in different contexts, but they are not all part of assessment in postsecondary education today. The first focuses on an individual student’s learning and is rooted in “development over time and continuous feedback on individual performance.” The second is now inextricably linked to accountability in K-12 education and was designed not “to examine individual learning, but rather to benchmark school and district performance.” The third “defined assessment as a special kind of program evaluation, whose purpose was to gather evidence to improve curricula and pedagogy. . . . This tradition focused on determining aggregate not individual performance.”10 By the mid-1980s there was enough (though by no means universal) discussion in higher education circles of improving student learning through outcomes assessment that the First National Conference on Assessment in Higher Education, cosponsored by the National Institute of Education (NIE) and the American Association for Higher Education, was held in Columbia, South Carolina, in fall 1985. Ewell and Cumming make clear that “the proximate stimulus for the conference was a report called Involvement in Learning,” published by NIE in 1984:

Three main recommendations formed its centerpiece, strongly informed by research in the student learning tradition. In brief, they were that higher levels of student achievement could be promoted by establishing high expectations for students, by involving students in active learning environments, and by providing them with prompt and useful feedback. But the report also observed that colleges and universities as institutions could “learn” from feedback on their own performances and that appropriate research tools were now available for them to do so.11

The feedback that colleges and universities could glean from assessment would allow them to adjust not only content but also teaching methodologies when the assessment data they gathered showed that in an aggregate sense, students were not learning everything their degree programs intended them to learn. This is the piece that evolved into the approach that is now nearly universal in civilian higher education and that informs the Joint Staff’s guidance for JPME institutions to follow as the schools shift to OBME.

Determining the gaps in student learning can allow departments and programs to home in on a content area that needs greater emphasis or a pedagogical method that might need adjustment.12 The shift to focusing on assessment processes and measuring outcomes took higher education away from an earlier input-based standard—a different set of metrics that did almost nothing to demonstrate that students had learned what they were supposed to learn. Before the 1980s, as Keston Fulcher and Caroline Prendergast make clear in their book on improving student learning, “institutional quality was evaluated almost entirely on inputs (e.g., number of faculty holding doctoral degrees, test scores of incoming students) and outputs (e.g., graduation rates, employment rates of graduates).”13 Faculty credentials are important. Students graduating and finding employment are also important. But these inputs and outputs provide no evidence that students can do “what is essential for all students to be able to do successfully at the end of their learning experiences,” which is the central requirement of outcomes-based education, the approach the Joint Staff has now embraced.14

That said, there needs to be more to the outcomes-assessment process than simply developing assessment mechanisms and compiling data in line with the practice of outcomes-based education. Compiling the data on mastery of outcomes does not in any way guarantee better results in teaching and learning than the inputs-outputs approach. The essential—and frequently overlooked—final step in the process is to evaluate the data and adjust curricula and teaching methodologies to improve student learning, which would lead to higher levels of student mastery of the outcomes. There are plenty of examples of colleges and universities devoting countless hours of faculty time to assessing outcomes and compiling data, then failing to close the loop. That is, they fail to come up with effective processes to evaluate the data and to apply the lessons the data yield back into the curriculum in ways that result in better student achievement of outcomes.15 As Fulcher and Prendergast succinctly state, “Assessment should not be treated as an end unto itself. Instead, the rightful emphasis should be placed on improving student learning.”16 Their research followed an important 2018 National Institute for Learning Outcomes Assessment (NILOA) report that concluded:

While use of assessment results is increasing, documenting improvements in student learning and the quality of teaching falls short of what the enterprise needs. [In a 2017 NILOA survey], provosts provided numerous examples of expansive changes at their institutions drawing on assessment data, but too few had examples of whether the changes had the intended effects.17

Closing the loop by improving student learning is the most crucial step; if this step is overlooked or carried out half-heartedly or ineffectively, all the faculty time devoted to coming up with learning outcomes, measuring those outcomes through well-developed assessment mechanisms, and compiling the data would ultimately amount to nothing more than wasted time. Adjustments need to be made, and then programs must reassess the outcomes to determine whether student achievement on outcomes improved.

Wanda Baker of Council Oak Assessment pointed out at the fall 2021 annual Assessment Institute at Indiana University–Purdue University Indianapolis that colleges and universities have been measuring outcomes, compiling data, and filling countless binders that then sit on shelves in someone’s office, ultimately accomplishing nothing. When the data sit in a binder on a shelf and do nothing to help students learn what they should be learning, assessment boils down to a box-checking exercise to keep accreditors off an institution’s back rather than an admittedly time-consuming but worthwhile enterprise to improve teaching and learning.18 Wasting faculty time by failing to close the loop is an endstate JPME institutions must avoid.

Naval War College holds commencement ceremony for College of Naval Command and Staff and College of Naval Warfare 2023 graduating classes

Encouraging Signs

Guidance so far from the Joint Staff J7 on how programs should make the transition to OBME has been clear and, on the whole, positive.19 Those who drafted OPMEP-F did a thorough job of educating themselves on outcomes assessment, and the document captures the true intent and purpose of the practice, in line with the assessment scholarship. OPMEP-F also comes with a procedures manual, published on April 1, 2022, which gives detailed instructions on how to develop learning outcomes, provides guidance to ensure the outcomes align with institutions’ and programs’ mission statements, and defines seven milestones each program must pass to achieve full certification from the Joint Staff J7.20 Programs have 6 years from the publication of the OPMEP-F manual to complete this process.21 This is ample time, particularly given that a near-final draft of the manual was sent to all JPME programs in summer 2021. Even though the manual had not yet been signed, some JPME institutions were able to start the milestones process in summer and fall 2021 based on its guidance. Even institutions that were not yet ready to begin the process were able to make progress toward the early milestones with the draft manual in hand, meaning all schools and programs will have more than 6 years to gain OBME certification.

A central component of the milestones process is the requirement to report PLO assessment data for 4 full years before a program can achieve full certification under OBME. This requirement is appropriate for two reasons. First, that amount of time will allow programs to test their assessment mechanisms and make any necessary adjustments to ensure they are effective in assessing PLOs and generating the necessary data on student learning and achievement. Second, and just as important, the literature on closing the loop makes clear that improved learning cannot happen in 1 year and rarely happens in 2; it typically takes 3 or more years for efforts to improve curricula or methodologies to show up in the data.22 With 4 years of data, JPME programs that develop sound processes for closing the loop will be able to report on the early signs of how effective their OBME practices are and what they intend to do to make them even more robust. This will be true for 10-week and 10-month resident programs and for distance programs that take longer to complete.

General Darren W. McDew, then commander of U.S. Transportation Command, Scott Air Force Base, Illinois, presents lecture to Marine Corps War College students at Dunlap Hall, Marine Corps University, Quantico, Virginia

Another important part of the OBME certification process is that the OPMEP-F manual specifies that JPME programs will report their 4 years of assessment data in biennial rather than annual reports, covering 2 years of data at a time.23 This provides time to assess PLOs and reflect on the significance of the data, so programs can develop a clear plan for how to close the loop to improve student learning. Indeed, the definition of assessment in OPMEP-F is, “The systematic collection, review, and use of information to improve student learning.”24

The review and use of the information collected through assessment require time for deliberation and reflection. By year 4, programs should have had the opportunity to make adjustments to close the loop, and those efforts should begin to show up in the data. This process will by no means be complete at the time of the second biennial report, but for programs that take this challenge seriously, the 4 years of data will provide sufficient evidence for the OBME review teams, the Military Education Coordination Council Working Group (MECC WG), and the J7 to determine whether each program’s assessment process is in line with guidance in OPMEP-F and the manual and sufficiently well developed to warrant full certification under OBME.

But the need to close the loop on student learning, although present in OPMEP-F, does not currently receive enough emphasis. As civilian institutions have learned—often painfully—collecting and reporting outcomes data does not, in and of itself, bring improved student performance on outcomes. Improving student learning takes time and effort, and sometimes initial efforts to improve wind up failing.

Caveats

It will be crucial, then, for the members of the OBME review teams, the MECC WG, and the J7 to recognize that a rigid expectation of rapid improvement will undermine the whole process. This could be challenging in an educational system whose faculty and administration report to flag and general officers, many of whom will be in place for only 2 or 3 years, and some of whom might demand faster results. Likewise, those with final authority for JPME in the J7 and the Joint Chiefs of Staff are also flag and general officers who might have similar inclinations.

In the context of the return of strategic competition with China and Russia, there is a sense of urgency for JPME to ensure that it is preparing future leaders for the new environment right now, and that expectation is understandable. Curricular changes in JPME programs to incorporate more China-focused content are well underway, and there are also discussions about increasing Russia content. Although curricular changes cannot happen overnight and a mandate from above to inject certain content into the curriculum cannot be implemented when the curriculum is already finalized for an academic term, reasonable changes can happen from one year to the next.

But the data on which learning outcomes show insufficient student achievement must be permitted to speak for themselves, as faculty and programs implement adjustments to program delivery over the course of 3 to 4 years: initial assessment to determine the baseline, followed by intervention intended to bring improvement, followed by reassessment to determine whether improvement occurred. This involves a continuing cycle of gathering and analyzing data, attempting to close the loop, then repeating the process to guide the next effort to close the loop. This process must be intentional, deliberate, and data-driven. Demands that the loop be closed without enough time to develop the right solution for a particular pedagogical shortcoming or curricular omission, or that a reassessment happen before the remedy has had time to affect outcome achievement, will sabotage the entire assessment process.

National Defense University’s College of International Security Affairs hosts its annual Thesis Symposium, where students from Class of 2019 present their theses to faculty and fellow students

The importance of the “feedback-improvement loop” is spelled out clearly in the OPMEP-F manual.25 It is important, however, to emphasize that the focus needs to be on longer term rather than shorter term improvement. The OPMEP-F manual states that formative assessments that reveal shortcomings during a student’s time in a JPME program allow “a corrective feedback loop to ensure learners achieve mastery of the materials before graduating” and that faculty “use formative assessments to identify when their students are straying from the path of PLO mastery and intervene appropriately.”26 Even though formative assessments will point out some individual problems and allow some course correction, it is not reasonable to assume that all students will achieve mastery on all PLOs every year. (This is true at all levels of education, civilian and military.) But assessing outcomes at the aggregate rather than the individual level will provide insight into shortcomings in the courses or program. And by necessity, the greater focus in improving student learning will have to be on making improvements year by year, not day by day, because, as stated, it takes time to interpret assessment data and determine what adjustments to teaching methodology and curricula will yield a higher percentage of students mastering learning outcomes.

In addition to resisting the temptation to force a faster feedback-improvement loop, there are other caveats for JPME programs and senior leaders to keep in mind if OBME is to succeed. First and foremost, as stated in OPMEP-F, the process must remain faculty-driven, from developing and adjusting the PLOs to implementing and adjusting assessment mechanisms to creating assessment rubrics. JPME faculty have the clearest understanding of their curricula. For institutions to develop appropriate PLOs, assessment mechanisms, and rubrics, the faculty must not simply be involved; they must have the lead, working across departments to develop and refine the PLOs, the assessment mechanisms, and the feedback-improvement loop. Typically, in both civilian academia and JPME, PLOs for programs that include courses from more than one department are developed through coordination, often in the form of an assessment committee with representatives from all departments. Although the products the faculty develop must be subject to the review and approval of the administration, faculty experts must be the primary developers.

There are potential pitfalls, however, to placing faculty at the center of developing assessment processes. Some may have prior assessment experience from civilian or military institutions of higher education where the approach too often has been all about compliance—the ineffective practice of compiling assessment data on outcomes because an accreditor requires it but failing to apply that data to learning improvement. Promoting that mindset in the OBME context would be a mistake. Others may believe they do not need any further professional development to improve student learning. As Fulcher and Prendergast point out:

[Many faculty] have a good sense of students’ needs. It is unsurprising, then, for them to expect they could invent effective interventions without reviewing additional literature. Certainly, we would anticipate that some interventions developed this way would lead to successful learning improvement projects. However, researchers around the world have spent untold hours cumulatively studying interventions related to a massive array of educational topics and skills. Why not take the time to learn from this work from the beginning of the intervention development stage? Why not combine lessons from the literature with lessons from instructors’ experiences and wisdom?27

Why not, indeed? There are two great starting points for faculty development in assessing and improving student learning. One is the annual Assessment Institute in Indianapolis, which has multiple tracks that focus on different aspects and different stages of the assessment process. Each year the Assessment Institute has sessions appropriate for assessment newcomers, seasoned experts, and everyone in between.28 The second consists of professional organizations that specialize in teaching, learning, and assessment. These organizations have websites with a wide array of assessment and learning improvement materials, and they frequently collaborate to produce such resources. The Association for the Assessment of Learning in Higher Education has worked with the American Association of Colleges and Universities, NILOA, the American Institutes for Research (now part of Cambium Learning Group), and the POD Network (North America’s largest educational development community) to assist institutions of higher education in developing and refining their faculty development and assessment processes.29

JPME institutions should not try to reinvent the wheel; they can draw on the extensive assessment expertise that civilian higher education has built over the past few decades to develop and refine OBME assessment mechanisms. Reading the literature is an important first step, but there are also experts who can be brought to JPME campuses to conduct small- and large-group faculty development sessions tailored to whatever stage a particular program has reached in assessment. Some of these experts will also be experts on Officer Professional Development (OPD), but given how long outcomes-based education and assessment have been practiced in civilian academia, JPME institutions can also benefit from assessment experts who do not have OPD experience. JPME programs that ignore the deep well of experience and expertise that genuine assessment experts at civilian institutions possess would sacrifice important opportunities to learn from them.

Army Major General James E. Taylor, Inter-American Defense College director, speaks to Army War College students at National Defense University, Fort Lesley J. McNair, Washington, DC

Moreover, JPME institutions must be willing to invest in the necessary technology and human capital. In a 2021 Assessment Institute session, Glenn Phillips, then of Howard University, made clear that when an institution needs additional resources to implement effective assessment mechanisms, the administration sometimes offers to hire a person or two when what is actually required is a technological tool that allows existing personnel to manage, process, and interpret vast quantities of data.30 Conversely, leadership might offer a tool when additional hires are what is needed. These are not always easy waters to navigate, but faculty, staff, and administrators involved in assessment must be prepared to make a convincing case on value versus cost for the resources they need.

Finally, informal collaboration among JPME programs is already happening and should become more common. Although the Naval War College’s institutional accrediting agency did not require outcomes assessment until recently, most other JPME programs’ accreditors did. This means that most JPME institutions have been doing some form of outcomes assessment for several years and already had PLOs and data collection processes in place. So while the Naval War College had to start from the beginning, other JPME colleges and programs have still had to make substantial changes to bring their practices in line with OBME as spelled out in OPMEP-F. Several of us involved in bringing our programs in line with OBME regularly talk with colleagues at other institutions about what their assessment mechanisms are, how many people they have working on assessment, what kinds of technological tools they use to facilitate the process, and other matters. One peer institution generously allowed us to observe part of its end-of-year PLO assessment process, which was easy to do because the COVID-19 pandemic had forced that process online. This kind of cooperation across colleges and programs, combined with a concerted effort to familiarize ourselves with the literature and best practices, will bring better results for us all.

As JPME I and II programs continue to develop and refine their assessment processes, they must do their best to incorporate the lessons learned at other institutions that are further along in the process and be open to bringing in outside experts from civilian academia to make this possible. The improvement and innovation track of the Assessment Institute—which focuses on applying assessment data to improve student learning—is still new, dating only to 2018. As a result, the scholarship on implementing the feedback-improvement loop remains limited. It would behoove JPME programs to embrace this part of OBME sooner rather than later, both for the benefit of their students and programs and to avoid wasting time and drawing negative reviews from OBME review teams. This means reviewing the existing literature and being prepared to innovate with methods rooted in what has worked so far. Fulcher and Prendergast bluntly state, “Given the paucity of learning improvement examples, it is safe to say that the traditional assessment model has not successfully guided [assessment] practitioners to the promised land of learning improvement.”31 Progress in this area stalled because of the pandemic but is back on track now. To do right by our students, JPME faculty, staff, and administrators will have to embrace established best practices, keep up with the developing literature on learning improvement, and develop new methods and practices to do our part to ensure the joint force is fully prepared for strategic competition and the next war. JFQ

Notes

1 The 23 programs are listed in the official document, Chairman of the Joint Chiefs of Staff Instruction (CJCSI) 1800.01F, Officer Professional Military Education Policy [OPMEP-F] (Washington, DC: The Joint Staff, May 15, 2020), appendix B to enclosure A, A-B-1–A-B-11, https://www.jcs.mil/Portals/36/Documents/Doctrine/education/cjcsi_1800_01f.pdf.

2 The initialism OBME appears 20 times in OPMEP-F and 229 times in its accompanying procedures manual. See Chairman of the Joint Chiefs of Staff Manual (CJCSM) 1810.01, Outcomes-Based Military Education Procedures for Officer Professional Military Education (Washington, DC: The Joint Staff, April 1, 2022), https://www.jcs.mil/Portals/36/Documents/Library/Manuals/CJCSM%201810.01.pdf.

3 From January 2010 through October 2013, the Organisation for Economic Co-operation and Development (OECD) conducted a feasibility study and published three volumes of results from its Assessment of Higher Education Learning Outcomes (AHELO) project “across diverse national, cultural, linguistic, and institutional contexts,” which demonstrates how widespread assessment in higher education has become. See AHELO Feasibility Study Report, vol. 1, Design and Implementation: Executive Summary (Paris: OECD, 2013), 2, https://www.oecd.org/education/skills-beyond-school/AHELO%20FS%20Report%20Volume%201%20Executive%20Summary.pdf. For links to all three volumes, see “Testing Student and University Performance Globally: OECD’s AHELO,” OECD, June 2014, https://tinyurl.com/mvcavk7v. In addition, the United Nations Educational, Scientific, and Cultural Organization (UNESCO) advocates for outcomes assessment in education globally and includes links to publications on assessment in the Asia-Pacific, Latin America, and sub-Saharan Africa regions on its website. See “Resources on Learning Assessment,” UNESCO, May 11, 2023, https://www.unesco.org/en/learning-assessments/resources. U.S. publications on assessment often refer to the existence of assessment across continents and some of the differences that exist from one country to another. See, for example, Clifford Adelman, To Imagine a Verb: The Language and Syntax of Learning Outcomes Statements, Occasional Paper No. 24 (Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment [NILOA], February 2015), 4, 6; Daniel J. McInerney, “Historical Study in the U.S.: Assessing the Impact of Tuning Within a Professional Disciplinary Society,” Tuning Journal for Higher Education 6, no. 1 (November 2018), 23–24.

4 The emphasis on joint professional military education (JPME) programs maintaining their uniqueness is explicit in CJCSI 1800.01F, 3.

5 Six Joint Learning Areas are identified in OPMEP-F: strategic thinking and communication; the profession of arms; the continuum of competition, conflict, and war; the security environment; strategy and joint planning; and globally integrated operations. For full details on what these areas should entail, see CJCSI 1800.01F, A-A-1–A-A-2.

6 On the importance of “communicating effectively about student learning,” see Natasha A. Jankowski et al., Assessment That Matters: Trending Toward Practices That Document Authentic Student Learning (Urbana: University of Illinois and Indiana University, NILOA, January 2018), 25–26.

7 The instructional design community of practice frequently refers to this process as the ADDIE Model, which stands for analyze, design, develop, implement, and evaluate. The model assists instructors in putting together courses that meet the needs of students and programs and includes methods to determine not only whether a course has accomplished its goals but also how to make improvements the next time the course is taught. For more on the ADDIE Model, see “ADDIE Model,” University of Washington–Bothell, 2023, https://www.uwb.edu/it/addie; Amanda Kathryn Nichols Hess and Katie Greer, “Designing for Engagement: Using the ADDIE Model to Integrate High-Impact Practices Into an Online Information Literacy Course,” Communications in Information Literacy 10, no. 2 (2016), 264–282.

8 Tammie Cumming, L. Jay Deiner, and Bonne August, “Case Study: The New York City College of Technology Approach to General Education Assessment,” in Enhancing Assessment in Higher Education: Putting Psychometrics to Work, ed. Tammie Cumming and M. David Miller (Sterling, VA: Stylus Publishing, 2017), 171.

9 Peter T. Ewell and Tammie Cumming, “History and Conceptual Basis of Assessment in Higher Education,” in Enhancing Assessment in Higher Education, 4–5. The NILOA website provides free access to many assessment articles and materials, including an excerpted version of Ewell and Cumming’s “A Historical Overview of Assessment: 1980s–2000s,” in Enhancing Assessment in Higher Education, https://www.learningoutcomesassessment.org/wp-content/uploads/2019/08/Assessment-Briefs-History.pdf.

10 Ewell and Cumming, “History and Conceptual Basis of Assessment in Higher Education,” 8. Keston H. Fulcher and Caroline O. Prendergast provide good clarifying examples of the first (continuous improvement of individual students’ performance) and third (focusing on the aggregate data of all students’ learning to achieve “learning improvement at scale over the course of multiple student cohorts”) approaches in their book Improving Student Learning at Scale: A How-To Guide for Higher Education (Sterling, VA: Stylus Publishing, 2021), 8–9.

11 Ewell and Cumming, “History and Conceptual Basis of Assessment in Higher Education,” 7. Emphasis added.

12 Because all students in JPME programs are adults, technically the term here should be andragogy instead of pedagogy. But it is common practice simply to use the word pedagogy when referring to teaching students of all ages, not just children.

13 Fulcher and Prendergast, Improving Student Learning at Scale, 10. For a helpful explanation of the difference between outputs and outcomes, see Ewell and Cumming, “History and Conceptual Basis of Assessment in Higher Education,” 15–16.

14 For the full definition of outcomes-based education included in OPMEP-F, see CJCSI 1800.01F, GL-6.

15 Fulcher and Prendergast, Improving Student Learning at Scale, 11–13.

16 Ibid., 140. Fulcher and Prendergast draw extensively on a paper by Thomas Angelo that is more than two decades old and was on the leading edge of emphasizing the importance of closing the loop on assessment to ensure that the improvement of student learning would be the fundamental purpose of assessment. See Thomas A. Angelo, “Doing Assessment as if Learning Matters Most,” AAHE Bulletin 51, no. 9 (May 1999), 3–6, https://www.aahea.org/articles/angelomay99.htm.

17 Jankowski et al., Assessment That Matters, 26.

18 Wanda K. Baker, “Assessment 101—Part 1 of 2 (Learning Outcomes),” Session 01A, 2021 Assessment Institute in Indianapolis (virtual), hosted by Indiana University–Purdue University Indianapolis (IUPUI), October 24, 2021. On the slow pace of change from institutions’ motivations to conduct assessment being entirely to comply with accreditors’ demands to a balance between compliance and improving learning, see Jankowski et al., Assessment That Matters, 3, 6, 8–9, 14, 16, 20, 29.

19 J7 is the directorate for Joint Force Development and “is responsible for the six functions of joint force development: Doctrine, Education, Concept Development and Experimentation, Training, Exercises, and Lessons Learned.” See “Lt. Gen. Dagvin R.M. Anderson,” The Joint Staff, August 2022, https://www.jcs.mil/Leadership/Article-View/Article/2308664/lt-gen-dagvin-rm-anderson/. For additional information, see “J7 Joint Force Development,” The Joint Staff, n.d., https://www.jcs.mil/Directorates/J7-Joint-Force-Development/.

20 CJCSM 1810.01.

21 Ibid., 1.

22 See Keston H. Fulcher and Caroline O. Prendergast, “Six Questions to Guide Your Learning Improvement Process,” Panel 14L, 2021 Assessment Institute in Indianapolis (virtual), IUPUI, October 26, 2021.

23 CJCSM 1810.01, enclosure B, B-A-2. Programs will still report on “compliance with legislative and OPMEP requirements for high-quality delivery of Joint education.” See also CJCSM 1810.01, F-A-1. The annual reports are separate from the biennial reports on assessment data.

24 CJCSI 1800.01F, glossary, Part II—Definitions, GL-3.

25 See CJCSM 1810.01, A-1–A-3, D-1, E-A-4.

26 Ibid., A-1.

27 Fulcher and Prendergast, Improving Student Learning at Scale, 104.

28 To see the full programs for recent and upcoming Assessment Institutes, see the “Assessment Institute Website,” https://assessmentinstitute.iupui.edu.

29 Stephen P. Hundley, Susan Kahn, and Jeffery Barbee, “Meta-Trends in Assessment: Perspectives, Analyses, and Future Directions,” in Trends in Assessment: Ideas, Opportunities, and Issues for Higher Education, ed. Stephen P. Hundley and Susan Kahn (Sterling, VA: Stylus Publishing, 2019), 195.

30 See Glenn Phillips, “Picking the Provost’s Pocket: Navigating Politics and Finance to Secure New Assessment Technologies,” Panel 14S, 2021 Assessment Institute in Indianapolis (virtual), IUPUI, October 26, 2021.

31 Keston H. Fulcher and Caroline O. Prendergast, “Lots of Assessment, Little Improvement? How to Fix the Broken System,” in Hundley and Kahn, Trends in Assessment, 160.