The Multiple-Choice Test:
Creating Better Questions

June, 2006
Revisiting our CAE Teaching Excellence Workshops with an excerpt from Learner-Centered Astronomy Teaching: Strategies for Astro 101 (Slater, Univ. of Wyoming, & Adams, Montana State Univ.)
Brissenden, Univ. of Arizona; Slater, Univ. of Wyoming; & Prather, Univ. of Arizona

Whether we like it or not, for many of us there is no escaping the multiple-choice test. When faced with a hundred students and no grader, it is simply a matter of efficiency. But many of us feel guilty about using this classroom assessment technique, believing that we can do little better than test our students at a very shallow level of understanding (Bloom's knowledge and comprehension levels; Brissenden et al., 2002). Though this is certainly true of many a multiple-choice test, it says more about us, as question writers, than about the technique itself. Anyone who has attended one of our workshops knows how easy it is to write knowledge and comprehension questions, and how difficult it can be to write multiple-choice questions that probe Bloom's deeper levels of understanding (synthesis and evaluation). But that is just it: writing these questions is more difficult, not impossible. In addition, level of difficulty is only one area in which we can improve our questions.

To assist you in creating better multiple-choice questions, we offer the following suggestions:

The Basic Structure

In discussing the structure of a multiple-choice question (or item), we distinguish between an item's stem, the opening text that sets up the question, and an item's responses, the collection of statements or values from which the student must choose the correct one. The responses comprise the correct answer and a collection of distracters, which are reasonable-sounding incorrect responses (we'll come back to these in a bit).
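
For readers who keep their question banks electronically, this vocabulary can also be pictured as a small data structure. The sketch below is purely illustrative; the class and field names are our own labels for the terms just defined, not part of any standard tool:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class MultipleChoiceItem:
        stem: str         # opening text that sets up the question
        correct: str      # the single correct answer
        # reasonable-sounding incorrect responses
        distracters: List[str] = field(default_factory=list)

        def responses(self) -> List[str]:
            # Everything the student chooses from; ordering is left to the exam writer.
            return [self.correct] + self.distracters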

Number of Responses

There is no rule about the number of responses that must be supplied. However, just because there are five "bubbles" on your bubble sheet does not mean that they must all, or even should all, be used. Good distracters are clearly incorrect to someone who understands the concept being tested, yet sound reasonable to a student who does not. Consider the following example:

1. Stars Antares and Mimosa each have absolute magnitude -4.6. Antares is spectral type M and Mimosa is spectral type B. Which star is larger?

    1. Antares
    2. Mimosa
    3. They are the same size.
    4. There is insufficient information to determine this.

There really are no other reasonable responses. So, even if we have five bubbles on our sheet, we stop at four responses.
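
(For reference, the reasoning a student needs for Example 1: equal absolute magnitudes mean equal luminosities, and for a star of radius R and effective temperature T_eff the Stefan-Boltzmann relation gives

    L = 4\pi R^{2} \sigma T_{\mathrm{eff}}^{4}
    \quad\Longrightarrow\quad
    R = \frac{1}{T_{\mathrm{eff}}^{2}} \sqrt{\frac{L}{4\pi\sigma}} ,

so at fixed luminosity the cooler star must be larger. Spectral type M is far cooler than type B, which is why Antares is the correct response.)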

The Question Itself

Ideally, the stem of a multiple-choice question should give students everything they need to answer it. That is, a student should be able to cover the responses, read the stem, determine their answer, and then find that answer in the list of responses. (Try this with our first example.) For students who have a hard time taking multiple-choice tests, this essentially removes the multiple-choice aspect from the question. Instead of getting bogged down in trying to figure out which of five responses is correct, all they have to do is find the one response they want.

Reasonable Sounding Distracters

Here is where many of us fail the most at question writing. So what are some of the common pitfalls?

  • All distracters are either longer or shorter than the correct response: This telegraphs to students which answer is most likely correct, namely the one that is different (see the sketch after this list).
  • All distracters have either more or less technical jargon than the correct response: Again, this telegraphs to students which answer is most likely correct.
  • Including a "nonsensical" distracter, often one we believe to be funny: Test taking is high stakes for our students. How well they do at this time will directly affect their grade. During this high-stakes period they are fully concentrating on the content of our course, managing their time to finish before class ends, and trying to keep their stress level under control. When we try to provide our students with comic relief at this point, all we have done is break their concentration, derail their time management, and make them groan (we do not know how to be funny).
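
A quick, purely illustrative way to catch the first of these pitfalls in an electronic question bank is to compare word counts. The function name and the 1.5x word-count threshold below are our own choices, not an established rule of thumb; a minimal sketch in Python:

    def telegraphs_by_length(correct, distracters, min_ratio=1.5):
        # Flag an item whose correct response is conspicuously longer or
        # shorter (by word count) than every one of its distracters.
        key_len = len(correct.split())
        lengths = [len(d.split()) for d in distracters]
        key_much_longer = all(key_len >= min_ratio * n for n in lengths)
        key_much_shorter = all(n >= min_ratio * key_len for n in lengths)
        return key_much_longer or key_much_shorter

An analogous check could count jargon terms against a course glossary to catch the second pitfall; the third is left to your own restraint.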

Level of Difficulty

Maybe the most important thing to realize about the level of difficulty of our multiple-choice questions is that they don't all have to be at the deeper levels of Bloom's Taxonomy. In fact, in a high-stakes situation that directly affects student grades, such as a test, it would be wrong of us to ask only deeper-level questions.

As a basic rule, there should be several questions on each of the topics covered in the exam, and these questions should range from Bloom's "entry-level" knowledge and comprehension levels to the more in-depth synthesis and evaluation levels.
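
If you track your exam questions electronically, a coverage check along these lines is easy to sketch. The topic names and the (topic, level) bookkeeping below are invented purely for illustration:

    from collections import defaultdict

    # Hypothetical blueprint: one (topic, Bloom level) pair per planned question.
    planned_questions = [
        ("moon phases", "comprehension"),
        ("moon phases", "synthesis"),
        ("stellar properties", "knowledge"),
    ]

    levels_by_topic = defaultdict(set)
    for topic, level in planned_questions:
        levels_by_topic[topic].add(level)

    for topic, levels in sorted(levels_by_topic.items()):
        if len(levels) < 2:
            print(f"Only one level of question on '{topic}': {sorted(levels)}")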

Consider the following two examples:

2. During the full moon phase, how much of the Moon's total surface is illuminated by sunlight?

    1. none
    2. less than half
    3. half
    4. more than half
    5. all

3. Which of the following is possible?

    1. A waxing crescent Moon near the eastern horizon just after sunset.
    2. A waning gibbous Moon near the western horizon just after sunset.
    3. A waning crescent Moon near the eastern horizon just before sunrise.
    4. A full Moon near the western horizon at sunset.
    5. A first quarter Moon rising at dawn.

The Wrap Up

What we have discussed above are guidelines for writing better multiple-choice questions, not hard-and-fast rules. Sometimes we will break these rules and still end up with a better question (see Example 3). We ask only that you keep these guidelines in mind as you create your next exam.

To learn more about creating better multiple-choice questions, or other forms of assessment, we suggest further reading at the National Institute for Science Education's Field-tested Learning Assessment Guide (FLAG) or, better yet, attending one of our workshops.

References

Brissenden, G., Slater, T. F., & Mathieu, R. (2002). The role of assessment in the development of the college introductory astronomy course: A "how-to" guide for instructors. Astronomy Education Review, 1(1). Retrievable from http://aer.noao.edu

Slater, T. F. & Adams, J. P. (2003). Learner-Centered Astronomy Teaching: Strategies for Astro 101. New Jersey: Prentice Hall.
