The best test of Ph.D. student success: Response
Science
  • David F. Feldon, Utah State University
  • Briana Crotwell Timmerman
  • Michelle Maher
Document Type
Article
Publisher
American Association for the Advancement of Science
Publication Date
10-29-2010
Abstract
Newquist suggests that students' publications are important predictors of post-degree research effectiveness, due in part to the importance of collaboration in innovative research. We agree that publication record is important and helpful, but the collaborative aspects of writing render publications a noisy metric by which to assess individual growth on specific skills (1). The variable time lags between the execution of an experiment, analysis of its data, and publication of findings [e.g., (2)] further limit the ability to identify direct relationships between experiences in a doctoral program and scholarly growth.

Doctoral education's overarching goal is to develop competent researchers capable of performing independent research (3–6). To determine how effectively doctoral programs—and specific features of those programs—prepare individual students for independent scholarship, we suggest the implementation of measures reflecting individual growth in requisite skill sets identified by a discipline [e.g., (7)].

Newquist also infers that we advocate some form of standardized testing. This is not the case. The mechanism we do suggest, the rubric, represents a performance-based assessment that faculty at the program or department level can tailor to evaluate localized, authentic student research products (8, 9). Rubrics may also be useful at the local level in conceptualizing and operationally defining necessary competencies that represent the consensus of a larger field or discipline. Far from constraining research creativity or inhibiting problem-solving in graduate students, an effective rubric makes transparent a faculty's expectations of excellence in research. This can help students to align the products of their innovative work with the quality indicators valued by faculty and the larger field to which they wish to contribute.
Newquist then cites findings from a recent study (10) that identifies a correlation between doctoral students' goal orientations ("learning-oriented" or "performance-oriented") and their subsequent professional productivity as measured by grants and publications. In that study, a "performance-orientation" refers to students' indication on a survey that their sole motivation for attending graduate school was either having "received good grades in science" previously or being "awarded [a] scholarship or fellowship." In contrast, those classified as having a "learning-orientation" indicated a sole motivation of "enjoyed thinking about science." These results do not conflict with our position. Certainly, someone who is driven by an inherent interest in scientific inquiry will be more motivated to acquire necessary skills at the Ph.D. level and to find productive research opportunities. We merely suggest that assessing those skills in a manner able to meaningfully inform the improvement of doctoral education requires measures that are well defined through faculty consensus, suitable for identifying longitudinal growth, and precisely targeted to measure students as individual learners.
Citation Information
Feldon, D. F., Timmerman, B., & Maher, M. (2010). The best test of Ph.D. student success: Response. Science, 330, 587.