The Collegiate Assessment of Academic Proficiency (CAAP)
by Jill Rogers and Pat Helland
07/14/05

Abstract

The Collegiate Assessment of Academic Proficiency (CAAP) is a standardized test designed to measure general educational outcomes. The test is divided into independent subject modules: reading, writing skills, writing essay, mathematics, science, and critical thinking. The CAAP is widely used by educational researchers, state organizations, and colleges and universities to measure learning outcomes, demonstrate performance and accountability, establish benchmarks, and inform organizational improvement. While the CAAP is designed to assess growth in specific subject areas, certain components of the test, such as the critical thinking and writing modules, aim to evaluate more general learning qualities like a student’s ability to integrate ideas and solve problems. Therefore, when combined with other appropriate instruments, parts of the CAAP could be useful in measuring liberal arts outcomes.

Introduction

The Collegiate Assessment of Academic Proficiency (CAAP), developed by ACT in 1990, is a paper-and-pencil survey that assesses general education outcomes and measures achievement levels in the areas of reading, writing skills, essay writing, mathematics, science, and critical thinking. The specific tests are divided into independent "modules." Each module has up to 72 questions and takes approximately 40 minutes to complete.
 
The following report provides a summary of how the CAAP assesses outcomes of a liberal arts education, how the CAAP is administered, what the CAAP measures, how the CAAP is used, and limitations of the CAAP.

Liberal Arts Outcomes

Wolniak, Seifert, and Blaich enumerate curricular and co-curricular practices that, when implemented effectively and coherently, lead to positive learning gains. [9] These practices include instructional clarity, challenging coursework, academic effort, essay writing in coursework, and integration of ideas, among others. Resulting learning gains are seen in areas of reading comprehension, science reasoning, and writing skills. Moreover, students who consistently experience high levels of these "best practices" not only show gains in the areas listed above, they also demonstrate growth in broader liberal arts outcomes like critical thinking, openness to diversity, learning for self-understanding, preference for deep and difficult intellectual work, positive attitude toward literacy, and a sense of responsibility for one’s own academic success. [9]

Therefore, for institutions intentional about creating and maintaining these "best practices," the CAAP is one appropriate tool for measuring learning growth, particularly since many of the gains resulting from best practices overlap with CAAP’s six modules (reading, writing skills, writing essay, mathematics, science, and critical thinking). CAAP results can point to the effectiveness of an institution’s best practices, which ultimately can describe broader liberal arts outcomes.
 
Looking at the modules themselves, individual CAAP sections aim to test the ability to connect and communicate major ideas, along with rhetorical, reasoning, and organizational skills, as opposed to formula recall, memorization, or mastery of a particular skill set. Additionally, the modules attempt to draw from many disciplines. For example, the reading test includes passages from fiction, the humanities, the social sciences, and the natural sciences. The science test draws content from biology, chemistry, physics, and the physical sciences. The critical thinking test uses formats such as case studies, debates, dialogues, overlapping positions, statistical arguments, experimental results, and editorials. [2] Theoretically, then, the CAAP addresses (1) curricular content, (2) reasoning and thinking skills (not memorization), and (3) integration of course material and concepts. These features align with several outcomes associated with liberal arts education, namely integration of learning and problem-solving skills.

The teaching and institutional practices that support CAAP outcomes also support growth in students’ valuing of learning for its own sake, literacy, plans to pursue an advanced degree, and willingness to take on cognitively demanding tasks. Therefore, to the extent that an objective, multiple-choice test (five of the six modules) can look at broad and general qualities, the CAAP certainly attempts to assess liberal arts outcomes. That said, it is difficult to imagine that such a test could comprehensively delve into the complex interplay of ideas associated with liberal arts outcomes, and the CAAP cannot adequately address the more holistic attitudinal characteristics and outcomes of a liberal arts education, such as moral character, leadership, and well-being. Institutions might view an "ideal" liberal arts environment as one that seamlessly incorporates general education outcomes with character development, providing a coherent context for student growth and learning. While the CAAP allows assessment of the educational outcomes, it does not capture the intellectual essence and institutional ethos that characterize the liberal arts experience.

In short, the CAAP is one useful tool for looking at certain components of a liberal arts education. However, additional quantitative instruments that can address the broader learning context might also be valuable (the SRLS or the NCS, for example), along with appropriate qualitative assessment.

Participation

For specific information on how to obtain and administer the CAAP, please see ACT’s website.

Those interested in using the CAAP must complete a Participation Agreement form and pay an annual participation fee of $330. This fee includes a Standard Reporting Package, which is intended to provide documentation of students’ achievement levels on an individual and group basis. ACT provides a CAAP Planning and Forms Manual to help institutions determine whether the test is appropriate for them and to guide them through the administration of the instrument.

Costs for scoring the reading, writing skills, mathematics, science, and critical thinking tests vary based on the number of students participating and on the number of objective tests taken per student. As an example, if an institution gave 250 students one objective test, the cost to the institution would be $2,887.50; if it opted for two to five tests, the cost would be $4,462.50. The only subjectively scored module, the Writing Essay, has its own fee schedule ranging from $3.45 to $11.55 per respondent, depending on whether ACT or the institution does the scoring and on whether other objective tests are purchased.
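For rough budgeting, the per-student rates implied by the quoted 2005 figures can be worked out directly. The sketch below is illustrative only: the function name is invented, and the rates are back-calculated from the two examples above rather than taken from ACT’s actual fee schedule.

```python
# Illustrative sketch of the CAAP scoring-fee arithmetic quoted above
# (2005 figures; actual rate schedules come from ACT and may differ).

def scoring_cost(students: int, per_student_rate: float) -> float:
    """Total objective-test scoring cost for a cohort."""
    return round(students * per_student_rate, 2)

# The quoted examples imply per-student rates of $11.55 for one
# objective test and $17.85 for two to five tests:
one_test = scoring_cost(250, 11.55)      # $2,887.50
two_to_five = scoring_cost(250, 17.85)   # $4,462.50
```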

Reporting

ACT offers several reporting packages. As mentioned above, the standard package is always provided. It includes (1) the Institutional Summary Report, (2) student roster reports, (3) student score reports, (4) certificates of achievement, and (5) up to three previously specified subgroup reports (supplemental reports are available for an additional fee). The Institutional Summary Report gives average scores for each demographic area and a summary of students’ self-reported motivation.

CAAP also offers Research Reports, which enable institutions to document student improvement and provide analysis of the relative strengths and weaknesses of student groups. Faculty can use this information to determine specific areas of their general education programs that are working and those that need enhancement.

One component of the Research Reports is a "Linkage Report." A linkage report refers to matched comparisons of students’ scores from the ACT assessment, ASSET, or COMPASS/ESL tests (all tests taken before or on entry to college) with their CAAP scores after some or all of their college education is completed. For instance, an institution that has ACT scores on at least 25 students (100 or more is preferred) can administer the CAAP test to those students upon completion of their sophomore year in college. A comparison of the results between the ACT assessment and the CAAP is made to see how much learning has occurred in the first two years of college. (The same type of analysis could be done with ASSET or COMPASS scores). ACT has generated national norms of expected growth and can provide institutions with an indication of how their students are developing (under, at, or above expected growth) compared to similar institutions. Finally, if an institution has specific reporting requirements, it may contact ACT’s Customized Research Services to discuss its particular needs.
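The matched-records comparison at the heart of a Linkage Report can be sketched in a few lines. Everything here is illustrative: the student IDs, scores, and variable names are invented, and ACT’s actual reports layer national norms and expected-growth comparisons on top of this kind of pairing.

```python
# Hypothetical sketch of the matched-records logic behind a Linkage
# Report: pair each student's entry (ACT) score with a later CAAP
# score, keeping only students who took both tests.

entry_scores = {"s01": 24, "s02": 19, "s03": 28}   # ACT composite at entry
caap_scores  = {"s01": 63, "s02": 58, "s04": 60}   # CAAP after sophomore year

# Matched records: students present in both score sets.
matched = {sid: (entry_scores[sid], caap_scores[sid])
           for sid in entry_scores.keys() & caap_scores.keys()}

# Cohort summaries on the matched group only, so the comparison
# is apples-to-apples.
mean_entry = sum(e for e, _ in matched.values()) / len(matched)
mean_caap  = sum(c for _, c in matched.values()) / len(matched)
```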

About the CAAP

The CAAP is a national standardized test developed by ACT. It has six independent modules that assess outcomes in core general education: reading, writing skills, writing essay, mathematics, science, and critical thinking. Each module has been analyzed and found to demonstrate acceptable internal consistency and reliability. [3] Below is a brief description of each module.

Reading

The reading module consists of 36 questions designed to measure referring and reasoning skills. The test uses four prose passages, drawn from the content areas of prose fiction, the humanities, the social sciences, and the natural sciences, to represent the levels of writing encountered in typical college curricula. Each passage is followed by nine multiple-choice questions. An example question under the prose fiction passage asks, "Which of the following statements represents a justifiable interpretation of the meaning of the story?"

Writing Skills

The writing skills module does not measure rote knowledge of spelling and vocabulary. Rather, it contains 72 items that measure students’ understanding of the "conventions of standard written English in punctuation, grammar, sentence structure, strategy, organization, and style." This test has six prose passages, each followed by a set of 12 multiple-choice items. Sentences or phrases are broken out from the passages, and students are asked to correct each sentence if necessary. An example item, with responses, is: "In the end, everyone gives up jogging. Some find that their strenuous efforts to earn a living drains away their energy. A. NO CHANGE, B. drain, C. has drained, D. is draining"

Writing Essay

For the writing essay, two 20-minute writing tasks are defined by a short prompt that identifies a specific hypothetical situation and audience. The student is then instructed to take a position on the issue and to explain to the audience why the position taken is the better alternative. This approach is designed to test the student’s skills in formulating an assertion, testing that assertion, organizing and connecting ideas, and clearly expressing those ideas.

Mathematics

The mathematics module emphasizes quantitative reasoning rather than formula memorization. It is a 35-item test with content areas of pre-algebra, elementary algebra, intermediate algebra, coordinate geometry, college algebra, and trigonometry. An example of a pre-algebra question is, "How much greater is the product of -3, -7, and 5 than their sum? A. -110, B. -100, C. 90, D. 100, E. 110"
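The arithmetic behind this sample item can be checked directly (the answer key is inferred here, not quoted from ACT): the product of -3, -7, and 5 is 105, their sum is -5, and the difference is 110, choice E.

```python
# Working through the sample pre-algebra item: how much greater is
# the product of -3, -7, and 5 than their sum?
from math import prod

nums = [-3, -7, 5]
difference = prod(nums) - sum(nums)  # 105 - (-5) = 110, i.e., choice E
```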

Science

The science module emphasizes reasoning skills instead of rote scientific knowledge. The 45 questions are drawn from the biological sciences (e.g., biology, botany, and zoology), chemistry, physics, and the physical sciences (e.g., geology, astronomy, and meteorology). There are eight passage sets, each followed by a set of multiple-choice questions. An example taken from a passage summarizing two scientific tables is, "The data suggest that subjecting plants to which of the following conditions would result in the greatest seed masses? A. 8 hours of light, adequate water supply, and 23°C, B. 8 hours of light, decreased water supply, and 23°C, C. 14 hours of light, adequate water supply, and 23°C, D. 14 hours of light, decreased water supply, and 29°C"

Critical Thinking

The critical thinking test asks students to clarify, analyze, evaluate, and extend arguments. This module has four passages with a total of 32 items. Each passage uses one of several formats to present the arguments and is followed by a set of multiple-choice test items. An example question under a passage in which two persons express differing opinions is, "A’s argument in favor of social welfare programs relies on which of the following assumptions?"

The CAAP tests have several features designed to enhance reliability and validity. First, there are multiple forms of each test, which allows for greater security as well as retest options. Second, there are questions that gauge the motivation level of respondents, in terms of whether or not they gave the test their best effort (see "Limitations" below for a discussion of student motivation and performance).

Administration

Institutions can customize their testing regimes to suit their purposes. The CAAP can be administered at different times of the year and may be given to students only once or several times over a student’s college career. Institutions should carefully choose the testing scenario and the independent modules that will provide them with information addressing the particular outcomes in question. Examples of possible administration scenarios are given below.

  • Outcomes Only: This design focuses on simple outcomes. The CAAP is given one time to students who have completed their general education. The institution can then compare its CAAP results with those of similar institutions and gauge how its students are performing relative to those peers.

  • Cross-sectional: This design enables an institution to obtain an initial reading on program performance. Incoming freshmen are tested at the beginning of the fall term and a similar group of sophomores, for example, is tested at the end of the spring term in the same academic year. The effectiveness of a program may then be inferred from the differences between the two mean group scores. The challenge of this design is matching student characteristics across groups.

  • Longitudinal (same test): This design involves the administration of the CAAP to incoming students and then again to the same students, usually at the end of the sophomore, junior, or senior year. This pre-test/post-test scenario allows institutions to measure differences in students over time and infer change.

  • Longitudinal or Linkage Report (different, but similar tests): The CAAP is administered at the end of the sophomore, junior, or senior year to students who have taken the ACT Assessment, ASSET, or COMPASS test for entry purposes (see discussion above in "Reporting" section). Growth can be inferred by comparing the rankings (based on matched records, i.e., records for the same student taking entry and outcomes tests).

The ACT Assessment, ASSET, and COMPASS are all pre-baccalaureate measures of student performance.

Using the CAAP

The CAAP test is used extensively by institutions for many purposes. On a statewide level, for states emphasizing public accountability, the CAAP has been used to track entire systems over a period of time, using test results as a benchmark for progress. Often, these results are published in a "public report card." [4]

Universities, state schools, two- and four-year colleges, and business schools use the CAAP to establish student learning gains, evaluate the development of student skills over time, and study how this growth compares to that of students at other colleges of similar type. This information may be used for many purposes, such as to document student performance, to evaluate institutional effectiveness, to analyze program success, or to identify when intervention is warranted. Institutions can choose which module(s) to use based on what question is being asked.

Within institutions, researchers in higher education often study certain student characteristics (such as GPA, minority status, or age when entering college) or environmental factors (such as commuting or courses taken), and how these qualities affect learning. CAAP results can be used to assess the learning component in these types of studies. The test itself asks for data about student characteristics and demographics (such as gender, ethnicity, and major), allowing for "built-in" subgroup analysis. For example, comparing the CAAP scores of biology majors and English majors might indicate a relationship between a particular curricular track and certain learning outcomes. CAAP scores might also be combined with data from other surveys (the CIRP or the NSSE, for instance), retention and graduation rates, course grades, etc. Investigating why subgroups differ is suggested in order to work toward intentional improvement.
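At its simplest, a subgroup comparison of the kind described above reduces to grouping scores by a demographic field and comparing means. The sketch below uses made-up majors and scores purely for illustration; real analyses would start from the demographic fields collected by the CAAP itself.

```python
# Illustrative subgroup comparison: mean CAAP module scores by
# (hypothetical) major. The data here are invented.
from collections import defaultdict
from statistics import mean

# (major, critical-thinking score) pairs -- made-up records.
records = [("Biology", 62), ("Biology", 66), ("English", 64), ("English", 70)]

by_major = defaultdict(list)
for major, score in records:
    by_major[major].append(score)

subgroup_means = {major: mean(scores) for major, scores in by_major.items()}
```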

The CAAP survey can also be a useful tool for assessing the effectiveness of an educational practice or curriculum. One might administer one or more CAAP module(s) both before and after a specific educational experience (e.g., a colloquium or general education curriculum) to evaluate how that experience affects the learning outcomes in question. Again, exploring why certain practices or curricula are effective (or are not) is an important component to this type of assessment.

Limitations

In general, the CAAP is a dependable instrument offering institutions flexibility and convenience. However, the test has several shortcomings. For one, using paper-and-pencil, multiple-choice surveys to measure student skill level can be as much about student motivation as about learning. The CAAP is no exception; in fact, Hoyt demonstrates (not surprisingly) that students’ average test scores increase as their effort on the exam increases. [5] CAAP addresses this issue by including questions about a student’s motivation. (ACT also publishes a handbook, "Motivating Students for Successful Outcomes Assessment," 2001.) Even so, including student motivation as another variable (and a capricious one, at that) creates concern about inconsistent comparisons between cohorts.

Similarly, the ability to customize the CAAP from school to school introduces a mix of sampling scenarios. Whereas one institution may require students to take the test or offer incentives for them to participate, other colleges may sample a much smaller, more select group of their student bodies. This is a concern when comparing data between schools. [5]

This point suggests another drawback of the CAAP. Because the test is ready-made and easy to administer, it may be used without thoughtful regard to goals or study design. Take the University of Wisconsin-Whitewater’s self-proclaimed five-year "Era of Pseudo Assessment" as an example. In a hurried effort to produce results about a new general education program, the university neglected to tie its data to any outcomes of the still-being-revised general education curriculum. [8] The university did enter into "genuine assessment" several years later, after reflecting on its goals and study questions.

Another limitation of the CAAP (and standardized testing in general) is the inability of the test to wholly reproduce the areas of emphasis within a particular curriculum. Additionally, standardized tests may not provide students the opportunity to demonstrate the practical application of knowledge or skills in problem-solving tasks. [6] The University of Tennessee, Knoxville, provides an illustration of this notion. The university examined how well CAAP content represented its general education goals. Faculty indicated that less than one-third of the CAAP content mirrored the goals established by the university, and the test was deemed insensitive to the general education coursework there. [7] Another college dropped the Writing Essay test after finding it was not sufficiently sensitive to changes in students’ essay skills. [5] These examples underscore the importance of choosing an assessment tool that can answer the study questions, as well as the need to combine more qualitative tools with standardized tests.

Also, ACT acknowledges that the cross-sectional design (that is, giving the test to entering first-year students and then later in the year to upperclass students) poses a challenge in matching student characteristics across groups. ACT also cautions that the CAAP should not be used independently of other testing for high-stakes student evaluation and that it should not be used by individual faculty for course evaluation. [2]

Finally, while the CAAP can provide straightforward results about student performance, the survey cannot indicate why certain student groups outperform others, or why particular educational practices are more effective than others. Like most quantitative assessment methods, CAAP results can only indicate performance patterns and trends; they cannot indicate cause. Therefore, institutions should consider qualitative methodologies and reflection in order to inform change and improve institutional programs.

Conclusion

The CAAP is a standardized test measuring general education outcomes. The test is widely used for a variety of purposes from evaluating statewide higher education performance to looking at individual student development. The CAAP offers institutions flexibility in timing and study design. The six independent modules of the CAAP include reading, writing skills, writing essay, mathematics, science, and critical thinking. Any or all of these modules can be administered in a number of study designs, including a one-time-only test or a linkage of the CAAP with previous pre-college tests. Furthermore, ACT provides a wealth of information and support regarding testing administration, score interpretation, study design, etc. The modules of the CAAP aim to address a student’s ability to reason effectively and integrate concepts, and therefore could be useful in assessing some aspects of a liberal arts education. Certainly, when assessing liberal arts outcomes, other instruments including qualitative tools should be employed to better evaluate the overall institutional culture.


References   

  1. Blaich, C.F., Bost, A., Chan, E., & Lynch, R. (2004). Executive Summary: Defining Liberal Arts Education. The Center of Inquiry in the Liberal Arts. Available at http://liberalarts.wabash.edu/home.cfm?news_id=1400.

  2. CAAP Planning and Forms Manual (2004-2005). ACT.

  3. CAAP Technical Handbook (2004). ACT.

  4. Ewell, P. (2001). Statewide Testing in Higher Education. Change, March/April 2001.

  5. Hoyt, J.E. (2001). Performance Funding in Higher Education: The Effects of Student Motivation on the Use of Outcomes Tests to Measure Institutional Effectiveness. Research in Higher Education, 42(1).

  6. Lopez, C.L. (2002). Assessment of Student Learning: Challenges and Strategies. Journal of Academic Librarianship, 28(6).

  7. Pike, G.R. (1989). A Comparison of the College Outcome Measures Program (COMP) and the Collegiate Assessment of Academic Proficiency (CAAP) Exams. Assessment and Evaluation.

  8. Stone, J. & Friedman, S. (2002). A Case Study in the Integration of Assessment and General Education: Lessons Learned from a Complex Process. Assessment and Evaluation in Higher Education, 27(2).

  9. Wolniak, G.C., Seifert, T.A., & Blaich, C.F. (2004). A Liberal Arts Education Changes Lives: Why Everyone Can and Should Have This Experience. LiberalArtsOnline, 4(3). Available at http://liberalarts.wabash.edu/home.cfm?news_id=1382.