While Herrera and Kapur emphasize improving the quality of data by acknowledging biases, Adcock and Collier focus on measurement validity. Both articles are very useful for improving the quality of measures, and both give advice on how to deal with the imperfections of different methods.
After debating the definition of democracy, I found the distinction Adcock and Collier draw between validity and validation very useful. They encourage scholars to distinguish between conceptual concerns and concerns about measurement. First, is the definition of democracy we chose to build the indicator (the systematized concept) coherent with the broader concept of democracy (the background concept)? Second, does the indicator accurately measure what we defined as democracy for the purposes of that research?
In the jungle of definitions, this distinction allows scholars to locate more precisely where a problem lies, which makes it easier to solve. Furthermore, they explain, though not always clearly, the different ways of evaluating the quality of measures. These types are intertwined and constitute a unified conception of measurement validity.
(1) Content validation, which is what we usually learn as “validity”: is the indicator actually measuring what we want to measure?
(2) Convergent/discriminant validation: do other, similar indicators give the same results? Does our indicator give different results when we measure something different?
(3) Nomological/construct validation: once we have a measure, do already established, valid hypotheses confirm our own results?
As throughout the rest of the article, they consistently assess the problems, limits, and concerns surrounding each of these methods.
Finally, one issue I was skeptical about when reading their previous article (the pragmatic approach) was their advice to “compare regimes according to whether they have achieved full democratization in relation to the norms of the relevant time period” (552). However, after reading this article and seeing more concretely how this can be done through the tools they call “context-specific domains of observation,” “context-specific indicators,” and “adjusted common indicators,” I am more convinced of this possibility. I am also more convinced because they acknowledge the dangers of doing so and advise justifying it carefully. In a world where we want to compare democracies, I think it is very important to strike a balance between being conscious of the limits of comparison and avoiding comparisons altogether. Because comparisons are the roots of more general laws, we need them, and their article is a good start toward surpassing the “recurring tension between particularizing and universalizing” (534).