CAE: Center for Astronomy Education. Dedicated to research on teaching and learning, and professional development, for the Astro 101 community.
Teaching Strategies

Classroom Assessment Techniques: A Brief Overview

May 2005
An excerpt from The Role of Assessment in the Development of the College Introductory Astronomy Course, by Brissenden, University of Arizona; Slater, University of Wyoming; Mathieu, University of Wisconsin - Madison; & the NISE College Level-One Team

In our CAE Teaching Excellence Workshops, we discuss quite a few classroom assessment techniques that could be used to improve learning in an introductory astronomy course. Following is a brief description of these, and other, techniques. To learn more about assessment, and its role in the development of an introductory astronomy course, we invite you to read the entire article available online in Astronomy Education Review.

Attitude Surveys

Attitude surveys provide valuable information on student perceptions of their classroom experience. This includes general attitudes toward the course, the discipline, and their own learning. The results from this survey can also help you identify elements in your course which best support student learning. While attitudinal surveys may take many forms and address a range of issues, they typically consist of a series of statements with which students are asked to express their degree of agreement or disagreement, using a numerical scale.
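As an illustration of the kind of data such a survey yields, here is a hypothetical sketch that averages student agreement per statement. The statements and the five-point agreement scale are our illustrative assumptions, not a validated instrument:

```python
# Hypothetical sketch: summarizing Likert-scale attitude-survey responses.
# Each row is one student's rating: 1 = strongly disagree, 5 = strongly agree.
responses = {
    "I enjoy learning astronomy": [4, 5, 3, 4, 2],
    "Lectures help me understand the concepts": [3, 4, 4, 5, 3],
}

def mean_agreement(scores):
    """Average rating for one survey statement."""
    return sum(scores) / len(scores)

summary = {stmt: mean_agreement(scores) for stmt, scores in responses.items()}
for stmt, avg in summary.items():
    print(f"{avg:.1f}  {stmt}")
```

Comparing these per-statement averages before and after a course change is one simple way to see which course elements students perceive as supporting their learning.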

ConcepTests [also: Think-Pair-Share]

With ConcepTests, the instructor obtains immediate feedback during class on the level of student understanding of a particular concept, while students get immediate practice using [astronomy] terminology and concepts and an opportunity to develop teamwork and communication skills. Many instructors have reported substantial improvements in class attendance and in attitudes toward the course. The instructor presents one or more questions during class involving key concepts, along with several possible answers. Students indicate, for example by a show of hands, which answer they think is correct. If most of the class has not identified the correct answer, students are given a short time in lecture to try to persuade their neighbor(s) that their answer is correct. The instructor then asks the question a second time to gauge class mastery. [See our April 2005 Teaching Strategy for suggestions on implementing Think-Pair-Share.]
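The decision point after the first vote can be sketched as follows. The 70% cutoff for moving on and the 30% floor for productive peer discussion are common conventions in the peer-instruction literature, assumed here for illustration; they are not prescriptions from this article:

```python
# Sketch of the ConcepTest decision point after the first show-of-hands vote.
# The 0.70 and 0.30 thresholds are assumed peer-instruction conventions.

def next_step(votes, correct_answer, threshold=0.70):
    """Given answer -> vote count, suggest what the instructor does next."""
    total = sum(votes.values())
    fraction_correct = votes.get(correct_answer, 0) / total
    if fraction_correct >= threshold:
        return "move on"          # most of the class has it
    elif fraction_correct >= 0.30:
        return "peer discussion"  # students persuade neighbors, then revote
    else:
        return "reteach"          # too few correct for peer discussion to help

print(next_step({"A": 5, "B": 12, "C": 8}, correct_answer="B"))
```

With 12 of 25 votes correct, the sketch recommends the persuade-your-neighbor step described above, followed by a second vote.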

Concept Maps

Concept Maps assess how well students see the big picture surrounding a concept. They provide a useful and visually appealing way of illustrating students' conceptual knowledge. A Concept Map is a diagram of nodes, each containing concept labels, which are linked together with directional lines, also labeled. The concept nodes are sometimes arranged in hierarchical levels that move from general to specific concepts.
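Because a concept map is exactly a set of labeled nodes joined by labeled directional links, it can be represented as a small directed graph. The astronomy concepts and link labels below are illustrative assumptions:

```python
# Sketch of a concept map as a labeled directed graph.
# Each edge is (from_concept, link_label, to_concept); all names are illustrative.
concept_map = [
    ("star", "is powered by", "nuclear fusion"),
    ("star", "can end as", "white dwarf"),
    ("nuclear fusion", "converts", "hydrogen to helium"),
]

def links_from(node, edges):
    """All labeled links leaving one concept node."""
    return [(label, target) for source, label, target in edges if source == node]

print(links_from("star", concept_map))
```

A representation like this makes it easy to compare a student's map against an expert's: counting valid links, missing links, and mislabeled links is one common way such maps are scored.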

Conceptual Diagnostics Tests

Conceptual Diagnostics Tests are used to assess how well students understand key concepts in [an astronomy course] prior to, during, and after instruction. These tests use items in a multiple-choice or short-answer format that are designed specifically to elicit common misconceptions.

In-Depth "Structured" Interviews

Using a handful of carefully selected students, In-Depth "Structured" Interviews enable assessment of the level of understanding your students have developed with respect to a series of well-focused, conceptually-related scientific ideas. This form of assessment provides feedback that is especially useful to instructors who want to improve their teaching and the organization of their courses. A "structured" interview consists of a series of well-chosen questions (and often a set of tasks or problems), which are designed to elicit a portrait of a student's understanding about a scientific concept or set of related concepts. The interview may be videotaped or audiotaped for later analysis.

Mathematical Thinking Tasks

Few faculty have difficulty finding or developing tools that assess the algorithmic mathematical techniques they teach in [astronomy] courses. The challenge faculty do face is finding ways to promote and assess the development of mathematical thinking: helping students know what to do when faced with problems that are not identical to the technical exercises they have already encountered in the course. Mathematical Thinking Tasks are designed to aid in the development of this problem-solving skill.

Multiple-Choice Tests

In any field of science, there exists a vocabulary, history, and basic knowledge base that constitute the foundation of the discipline. One efficient way to measure students' abilities to recall and identify these basic constituents is the oft-used multiple-choice test. The most common multiple-choice test items are constructed with an incomplete sentence as the prompt, or stem, which is followed by several choices. One of these choices is the most appropriate completion of the stem, whereas the remaining choices, called distracters, represent common mistakes that students make. Multiple-choice items are quick and easy to grade but often difficult to write well.

Performance Assessment

Although facts and concepts are fundamental in any [introductory astronomy] course, knowledge of methods, procedures, and analysis skills that provide context are equally important. Student growth in these latter facets proves somewhat difficult to evaluate, particularly with conventional multiple-choice examinations. Performance assessments, used in concert with more traditional forms of assessment, are designed to provide a more complete picture of student achievement. Performance assessments are designed to judge student abilities to use specific knowledge and research skills. Most performance assessments require students to manipulate equipment, to solve a problem, or to make an analysis. Rich performance assessments reveal a variety of problem-solving approaches, thus providing insight into a student's level of conceptual and procedural knowledge.

Portfolio Assessment

Portfolio Assessment strategies provide a structure for long-duration, in-depth assignments. The use of portfolios transfers much of the responsibility of demonstrating mastery of concepts from the instructor to the student. Student portfolios are a collection of evidence, prepared by the student and evaluated by the instructor or teaching assistants, that demonstrate mastery, comprehension, application, and synthesis of a given set of concepts. To create a high-quality portfolio, students must organize, synthesize, and clearly describe their achievements, and effectively communicate what they have learned.

Scoring Rubrics

Has a student ever said to you regarding an assignment, "But, I didn't know what you wanted!" or "Why did her paper get an 'A' and mine a 'C'?" Students must clearly understand the level of performance we expect them to achieve in course assignments, and importantly, the criteria we use to determine how well they have achieved those goals. A Scoring Rubric, though not technically itself a [classroom assessment technique] (it is used in conjunction with one), provides a readily accessible way of communicating our goals to students as well as communicating the criteria we use to discern how well students have reached them. Rubrics (or "scoring tools") are a way of describing evaluation criteria (or "grading standards") based on the expected outcomes and performance of students. Typically, rubrics are used in scoring or grading written assignments or oral presentations; however, they may be used to score any form of student performance. Each rubric consists of a set of scoring criteria and point values associated with these criteria. In most rubrics the criteria are grouped into categories so the instructor and the student can discriminate among the categories by level of performance. In classroom use, the rubric provides an "objective" external standard against which student performance may be compared.
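Since a rubric is a set of scoring criteria with associated point values, totaling a student's score is mechanical once the criteria are fixed. The criteria names and point scales below are illustrative assumptions, not a recommended rubric:

```python
# Sketch of a scoring rubric as criteria with maximum point values.
# Criteria and scales are illustrative assumptions.
rubric = {
    "scientific accuracy": 4,   # maximum points for this criterion
    "clarity of writing": 3,
    "use of evidence": 3,
}

def score(earned, rubric):
    """Total a student's earned points, capped at each criterion's maximum."""
    return sum(min(earned.get(c, 0), maximum) for c, maximum in rubric.items())

student = {"scientific accuracy": 3, "clarity of writing": 3, "use of evidence": 2}
print(score(student, rubric), "/", sum(rubric.values()))
```

Sharing the criteria and their point values with students before the assignment is due is what makes the rubric work as a communication tool, not just a grading one.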

Student Self-Assessment of Learning Gains

Strategies that allow for Student Self-Assessment of Learning Gains can spotlight the elements of a course that best support student learning and those that need improvement. This instrument is a powerful tool: it can be easily individualized, provides instant statistical analysis of the results, and facilitates formative evaluation throughout a course.

Weekly Reports

Weekly Reports provide rapid feedback about what students think they are learning and what conceptual difficulties they are experiencing. Weekly Reports are short papers written by students each week, in which they typically address three questions: "What did I learn this week?", "What questions remain unclear?", and "What questions would you ask your students, if you were the professor, to find out if students understood the material?"

To learn more about Classroom Assessment Techniques, you can also go to the Field-Tested Learning Assessment Guide developed by the National Institute for Science Education, or better yet, attend one of our workshops.

References

Brissenden, G., Slater, T. F., & Mathieu, R. (2002). The role of assessment in the development of the college introductory astronomy course: A "how-to" guide for instructors. Astronomy Education Review 1(1). Retrievable from http://aer.noao.edu

Teaching Strategies Archive

CAE is housed in the Astronomy Dept. at the Univ. of Arizona's Steward Observatory. CAE is funded through the generous contributions of the NASA JPL Exoplanet Exploration Public Engagement Program. This material is based upon work supported by the National Science Foundation under Grant No. 0715517, a CCLI Phase III Grant for the Collaboration of Astronomy Teaching Scholars (CATS). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
