Katie OKennedy
November 9, 2020
Professor Amy Logsdon
PSYCH/655
Week 2
University of Phoenix Material
Reliability and Validity Worksheet
Instrument Reliability
A reliable instrument is one that is consistent in what it measures. If, for example, an individual scores highly on the first administration of a test and if the test is reliable, he or she should score highly on a second administration.
Imagine that you are conducting a study for which you must develop a test in mathematics for 7th-grade students. You develop a 30-point test and distribute it to a class of twelve 7th-grade students. You then administer the test again one month later to the day. The scores of the students on the two administrations of the test are listed below. Use Microsoft® Excel® or IBM® SPSS® to create a scatterplot with the provided scores, formatted as shown in the example graph. What observations can you make about the reliability of this test? Explain.
Student    30-Point Test (First Administration)    30-Point Test (Second Administration)
A          17                                      15
B          22                                      18
C          25                                      21
D          12                                      15
E           7                                      14
F          28                                      27
G          27                                      24
H           8                                       5
I          21                                      25
J          24                                      21
K          27                                      27
L          21                                      19
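The worksheet asks for the scatterplot to be built in Microsoft Excel or IBM SPSS. As an alternative, here is a minimal sketch of how the same plot could be produced in Python with matplotlib; the axis labels mirror the column headers in the table above, and the output file name (scatterplot.png) is only an illustrative choice.

import matplotlib.pyplot as plt

# Scores for students A through L, taken from the table above.
first_admin = [17, 22, 25, 12, 7, 28, 27, 8, 21, 24, 27, 21]
second_admin = [15, 18, 21, 15, 14, 27, 24, 5, 25, 21, 27, 19]

plt.scatter(first_admin, second_admin)
plt.xlabel("30-point test (first administration)")
plt.ylabel("30-point test (second administration)")
plt.title("Test-retest scores for 12 seventh-grade students")
plt.xlim(0, 30)
plt.ylim(0, 30)
plt.savefig("scatterplot.png")  # illustrative file name

Each point is one student; points that fall near a straight, rising line indicate that students kept roughly the same relative standing on both administrations.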
[Scatterplot: first administration scores plotted against second administration scores for students A through L]
Reviewing the scatterplot above, one can observe that most students attained comparable scores on the two administrations of the test. The main differences appear for a few individual students whose scores shifted between administrations (for example, Student E rose from 7 to 14). Most of the scores cluster in the upper half of the scale, roughly between 15 and 28 points, and the points fall close to a straight, rising line. Because the scores are consistent from the first administration to the second, the test appears to be reliable.
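That visual impression can be backed with a number. The test-retest reliability coefficient is simply the Pearson correlation between the two administrations; the minimal sketch below computes it with numpy (the choice of Python and numpy is my assumption, not part of the worksheet). For the scores in the table above it comes out to roughly r = .88, which is conventionally read as good test-retest reliability.

import numpy as np

first_admin = [17, 22, 25, 12, 7, 28, 27, 8, 21, 24, 27, 21]
second_admin = [15, 18, 21, 15, 14, 27, 24, 5, 25, 21, 27, 19]

# Pearson correlation between the two administrations;
# np.corrcoef returns a 2 x 2 correlation matrix, and the
# off-diagonal entry is the test-retest coefficient (about 0.88 here).
r = np.corrcoef(first_admin, second_admin)[0, 1]
print(f"Test-retest reliability: r = {r:.2f}")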
What Kind of Validity Evidence: Content-Related, Criterion-Related or Construct-Related?
A valid instrument is one that measures what it says it measures. Validity depends on the amount and type of evidence there is to support one's interpretations of the data that have been collected. This week, you discussed three kinds of evidence that can be collected regarding validity: content-related, criterion-related, and construct-related evidence.
Each question below represents one of these three evidence types. In the space provided, write content if the question refers to content-related evidence, criterion if the question relates to criterion-related evidence, and construct if the question refers to construct-related evidence of validity.
1. How strong is the relationship between the students' scores obtained using this instrument and their teachers' ratings of their ability?
The answer is Criterion
2. How adequately do the questions in the instrument represent that which is being measured?
The answer is Content
3. Do the items that the instrument contains logically reflect that which is being measured?
The answer is Content
4. Are there a variety of different types of evidence (test scores, teacher ratings, correlations, etc.) that all measure this variable?
The answer is Construct; multiple types of evidence converging on the same variable is the hallmark of construct-related evidence.
5. How well do the scores obtained using this instrument predict future performance?
The answer is Criterion
6. Is the format of the instrument appropriate?
The answer is Content; the format of an instrument (printing, layout, directions) falls under content-related evidence.

