Let's look at the two types of translation validity. In face validity, you look at the operationalization and see whether "on its face" it seems like a good translation of the construct.
This is probably the weakest way to try to demonstrate construct validity. For instance, you might look at a measure of math ability, read through the questions, and decide that, yep, it seems like this is a good measure of math ability. Or, you might observe a teenage pregnancy prevention program and conclude, "Yep, this is indeed a teenage pregnancy prevention program."
Note that just because it is weak evidence doesn't mean that it is wrong. We need to rely on our subjective judgment throughout the research process.
It's just that this form of judgment won't be very convincing to others. We can improve the quality of face validity assessment considerably by making it more systematic. For instance, if you are trying to assess the face validity of a math ability measure, it would be more convincing if you sent the test to a carefully selected sample of experts on math ability testing and they all reported back with the judgment that your measure appears to be a good measure of math ability.
In content validity, you essentially check the operationalization against the relevant content domain for the construct. This approach assumes that you have a good, detailed description of the content domain, something that's not always true. For instance, we might lay out all of the criteria that should be met in a program that claims to be a "teenage pregnancy prevention program."
Then, armed with these criteria, we could use them as a type of checklist when examining our program. Only programs that meet the criteria can legitimately be defined as "teenage pregnancy prevention programs." But for other constructs (e.g., self-esteem or intelligence), it is much harder to describe the content domain in such detail. In criterion-related validity, you check the performance of your operationalization against some criterion.
How is this different from content validity? In content validity, the criterion is the construct definition itself; it is a direct comparison.
In criterion-related validity, we usually make a prediction about how the operationalization will perform based on our theory of the construct.
The difference among the criterion-related validity types lies in the criterion each uses as the standard for judgment. In predictive validity, we assess the operationalization's ability to predict something it should theoretically be able to predict. For instance, we might theorize that a measure of math ability should be able to predict how well a person will do in an engineering-based profession.
We could give our measure to experienced engineers and see if there is a high correlation between scores on the measure and their salaries as engineers. A high correlation would provide evidence for predictive validity -- it would show that our measure can correctly predict something that we theoretically think it should be able to predict.
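The predictive-validity check described above boils down to computing a correlation coefficient between test scores and the criterion. Here is a minimal Python sketch; the scores and salaries below are invented purely for illustration:

```python
# Minimal sketch of a predictive-validity check: correlate math-ability
# test scores with a later criterion (engineers' salaries).
# All numbers below are invented for illustration.

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical test scores and salaries (in thousands) for eight engineers.
scores = [52, 61, 58, 70, 66, 75, 80, 73]
salaries = [64, 70, 69, 82, 78, 88, 95, 85]

print(f"predictive validity coefficient r = {pearson_r(scores, salaries):.2f}")
```

A coefficient near 1 would support predictive validity; a coefficient near 0 would undermine it.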
In concurrent validity, we assess the operationalization's ability to distinguish between groups that it should theoretically be able to distinguish between. For example, if we come up with a way of assessing manic depression, our measure should be able to distinguish between people who are diagnosed with manic depression and those diagnosed with paranoid schizophrenia.
If we want to assess the concurrent validity of a new measure of empowerment, we might give the measure to both migrant farm workers and to the farm owners, theorizing that our measure should show that the farm owners are higher in empowerment. As in any discriminating test, the results are more powerful if you are able to show that you can discriminate between two groups that are very similar.
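A known-groups comparison like this can be sketched as a standardized difference between group means. The empowerment scores below are invented for illustration:

```python
# Sketch of a known-groups check for concurrent validity: groups that
# theory says should differ on the construct should also differ on the
# measure. All scores below are invented for illustration.

def mean(xs):
    return sum(xs) / len(xs)

def pooled_sd(a, b):
    """Pooled standard deviation of two independent samples."""
    na, nb = len(a), len(b)
    va = sum((x - mean(a)) ** 2 for x in a) / (na - 1)
    vb = sum((x - mean(b)) ** 2 for x in b) / (nb - 1)
    return (((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)) ** 0.5

farm_owners = [78, 85, 90, 82, 88, 91]      # expected higher in empowerment
migrant_workers = [55, 60, 48, 62, 58, 52]  # expected lower

# Standardized mean difference (Cohen's d) between the two groups.
d = (mean(farm_owners) - mean(migrant_workers)) / pooled_sd(farm_owners, migrant_workers)
print(f"standardized group difference d = {d:.2f}")
```

A large positive difference in the predicted direction supports concurrent validity.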
In convergent validity, we examine the degree to which the operationalization is similar to (converges on) other operationalizations that it theoretically should be similar to. For instance, to show the convergent validity of a Head Start program, we might gather evidence that shows that the program is similar to other Head Start programs.
Or, to show the convergent validity of a test of arithmetic skills, we might correlate the scores on our test with scores on other tests that purport to measure basic math ability, where high correlations would be evidence of convergent validity.
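As a sketch of such a convergent check (all scores below are invented), we would expect the new arithmetic test to correlate highly with an established math test, but much less with an unrelated measure:

```python
# Hypothetical sketch of convergent validity: correlate a new arithmetic
# test with two other measures. We expect a high correlation with an
# established math test (convergence) and a lower one with an unrelated
# measure such as reading speed. All scores are invented for illustration.

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

new_test = [12, 15, 11, 18, 16, 20, 14, 17]
existing_math = [13, 16, 10, 19, 15, 21, 13, 18]   # should converge
reading_speed = [30, 22, 28, 25, 31, 24, 27, 29]   # should not

print(f"r(new test, existing math test) = {pearson_r(new_test, existing_math):.2f}")
print(f"r(new test, reading speed)      = {pearson_r(new_test, reading_speed):.2f}")
```

The pattern matters as much as the numbers: high correlation where theory predicts similarity, low correlation where it does not.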
In all of these cases, the criterion may well be an externally-defined "gold standard"; achieving criterion-related validity thus makes results more credible. Face validity occurs where something appears to be valid.
This of course depends very much on the judgment of the observer. In any case, it is never sufficient on its own, and more solid evidence of validity is needed before acceptable conclusions can be drawn. Measures often start out with face validity, as the researcher selects those which seem likely to prove the point. Validity as concluded by the researcher is not always accepted by others, and perhaps rightly so.
The types of validity can be summarized as follows:
Construct validity: constructs accurately represent reality.
Convergent validity: simultaneous measures of the same construct correlate.
Discriminant validity: the measure doesn't measure what it shouldn't.
Internal validity: causal relationships can be determined.
Conclusion validity: a relationship of some kind can be found.
External validity: conclusions can be generalized.
Predictive validity: the measure predicts future values of the criterion.
Concurrent validity: the measure correlates with other tests.
Face validity: it looks like it'll work.
Construct validity
Construct validity occurs when the theoretical constructs of cause and effect accurately represent the real-world situations they are intended to model.
Convergent validity
Convergent validity occurs where measures of constructs that are expected to correlate do so.
Discriminant validity
Discriminant validity occurs where constructs that are expected not to relate do not, such that it is possible to discriminate between these constructs. Convergent validity and discriminant validity together demonstrate construct validity.
Nomological network
Defined by Cronbach and Meehl, this is the set of relationships between constructs and between consequent measures.
Content validity
Content validity occurs when the experiment provides adequate coverage of the subject being studied.
Internal validity
Internal validity occurs when it can be concluded that there is a causal relationship between the variables being studied.
Conclusion validity
Conclusion validity occurs when you can conclude that there is a relationship of some kind between the two variables being examined.
This may be a positive or negative correlation.
External validity
External validity occurs when the causal relationship discovered can be generalized to other people, times and contexts. Correct sampling will allow generalization and hence give external validity.
Criterion-related validity
This examines the ability of the measure to predict a variable that is designated as a criterion. Criterion-related validity is related to external validity.
Predictive validity
This measures the extent to which a future level of a variable can be predicted from a current measurement.
For example, a political poll intends to measure future voting intent. College entry tests should have high predictive validity with regard to final exam results.
Concurrent validity
This measures the relationship between measures made with existing tests; the existing tests serve as the criterion. For example, a measure of creativity should correlate with existing measures of creativity.
External validity is about generalization: to what extent can an effect found in research be generalized to populations, settings, treatment variables, and measurement variables? External validity is usually split into two distinct types, population validity and ecological validity; both are essential elements in judging the strength of an experimental design.
Construct validity is the approximate truth of the conclusion that your operationalization accurately reflects its construct. All of the other terms address this general issue in different ways. Within construct validity, a distinction can be made between two broad types: translation validity and criterion-related validity.
Face validity is the most basic type of validity, and it is associated with the highest level of subjectivity because it is not based on any scientific approach. In other words, a test may be declared valid by a researcher simply because it seems valid, without an in-depth scientific justification. Notice that a tool can have high content validity and low construct validity. A survey of empathy might ask questions that are all relevant to empathy and therefore have high content validity. But if it is measuring something other than empathy (such as guilt-motivated behavior), its construct validity is low.
Internal validity is affected by flaws within the study itself, such as not controlling some of the major variables (a design problem), or problems with the research instrument (a data collection problem).