by Maia Silber
When comparing two sources, it’s easy to fall into what I like to call the “friends, enemies, and frenemies” trap. If the two sources present similar perspectives, our first instinct might be to label them “friends”—Source X and Source Y both argue that standardized testing should be used to evaluate high school teachers. Alternatively, if the sources clearly contain opposing viewpoints, we cast them as “enemies”—Source X argues that standardized testing should be used to evaluate high school teachers, but Source Y argues that it should not.
It might seem like the way to add complexity to such theses would be to define the sources as “frenemies”: Source X and Source Y both argue that standardized tests should be used to evaluate high school teachers, but only Source X argues that student reports should also be used to evaluate high school teachers. The problem with the “frenemies” approach is not that it’s inaccurate—any two writers, like any two people, will agree on some points and disagree on others—but that it does not account for why or how the authors agree and disagree.
A good comparison, someone once told me, finds the like in the unlike and the unlike in the like. To present a more complex account of how two sources relate to one another, it’s helpful to remember that writers can be more than frenemies—they might, for instance, relate to each other in the following ways:
THE SPRINTER AND THE JOGGER: The sprinter and the jogger each have the same goal—the finish line—but they’re going to get there in different ways. Source X and Source Y might be making the same argument—each claims that standardized testing should not be used to evaluate high school teachers—but for different reasons. Source X might argue that standardized testing should not be used to evaluate high school teachers because standardized tests don’t reliably predict students’ academic success. Source Y might claim that standardized tests do a great job of predicting students’ academic success, but still argue that standardized tests should not be used to evaluate teachers because students with high IQs will score well regardless of time spent in class.
TWO COOKS IN THE KITCHEN: Have you ever watched one of those TV cooking challenges, where both chefs get the same ingredients to create their dishes? They each start out with the same milk, eggs, and flour, but one bakes a pound cake and the other a puff pastry. Source X and Source Y might both be using the same tool—the value of meritocracy, say—and come to entirely different conclusions. Source X argues that standardized test scores provide the most objective way to measure teachers’ performance, but Source Y argues that in-class evaluations provide a fuller picture of teachers’ merit.
THE THEORETICIAN AND THE PRACTITIONER: When comparing a secondary source to a primary source, imagine discovering a cure in the lab and then testing it on real patients. Does the cure work? What real-life variables not present in the lab might affect it? Did the lab report anticipate its success rate, and if not, why? If Secondary Source X argues that standardized testing should be used to evaluate high school teachers, and Primary Source Y charts students’ standardized test scores against teachers’ in-class evaluations at public and private high schools, what might looking at Source X and Source Y together tell us about the real-life situations where standardized test scores do or don’t accurately measure teachers’ performance?
THE DOCTOR AND THE PATIENT: A medical analogy might also describe another way that primary and secondary sources interact. Say that a patient comes to a doctor’s office complaining of a problem—he’s been exercising every day and can’t lose weight. The doctor asks him about his eating habits and finds that he’s been consuming a high-calorie diet: a cause that explains the puzzling symptom. A secondary source can play the same diagnostic role for a primary source. Primary Source X (the patient) finds that standardized test scores don’t reflect teachers’ performance ratings at low-income schools. If Secondary Source Y (the doctor) suggests that standardized test scores are affected by school resources and funding, how might this account for the data in Primary Source X?