PSA6669 - SECTION 3: QUALITATIVE AND QUANTITATIVE MEASURES IN ASSESSMENT
The goal of any psychosocial assessment is to identify strategies and resources to return the individual to their optimal level of adaptive functioning while strengthening the individual's feelings of self-efficacy.
The psychosocial assessment is a collaborative process between the individual and the practitioner. The process should result in a plan that focuses on time-limited interventions producing a beneficial outcome for the individual.
A comprehensive and well-designed psychosocial assessment process will be concerned with both qualitative and quantitative measures as information is gathered. The two kinds of measures are different but not incompatible.
Qualitative measures focus on the individual’s own words, descriptors and metaphors to clarify the content and context of the issue. Quantitative measures require operational definitions of behaviors or attributes to obtain an objective assessment.
Standardized measurement tests have been researched on samples of the population with known characteristics. This research establishes the properties of the standardized test - reliability, validity and norms.
Many or most of these tests are typically administered, scored and reported upon by clinical, school or counseling psychologists who have specialized training in this area. However, mental health clinicians without specialized training in testing may sometimes have clients complete self-report measures that come equipped with computerized scoring mechanisms, such as the Substance Abuse Subtle Screening Inventory (SASSI) or the Conover Anger Management Assessment.
Clients who are being seen in counseling or other services provided by mental health clinicians may bring with them previously administered psychological testing. Likewise, clinicians may find it helpful to refer out for testing in order to ascertain certain aspects of a client's functioning.
In any of these cases, clinicians must possess some capacity to understand testing results – and evaluate the efficacy of those tests in accurately measuring what they are supposed to measure. For this reason, it is extremely important for mental health clinicians to understand the concepts related to quantitative measures.
Reliability relates to the dependability, stability and consistency of the measure - that is, whether it produces consistent results. To determine reliability, standardized measurement tools are evaluated using established methods such as test-retest, administering two parallel forms of the same measure to the same population, or dividing the measure into two equivalent halves (split-half).
(There are statistical methods for estimating split-half and internal-consistency reliability, such as Cronbach's alpha [Cronbach LJ (1951). Coefficient alpha and the internal structure of tests. Psychometrika, 16, 297-334] or the Kuder-Richardson formula [Kuder GF & Richardson MW (1937). The theory of the estimation of test reliability. Psychometrika, 2, 151-160].)
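To make the internal-consistency idea concrete, here is a minimal sketch of Cronbach's alpha computed by hand, using entirely made-up item scores (the data and the scale are hypothetical, chosen only to illustrate the formula: alpha = k/(k-1) × (1 − sum of item variances / variance of total scores)).

```python
# Minimal sketch of Cronbach's alpha on hypothetical self-report data.
from statistics import pvariance

def cronbach_alpha(item_scores):
    """item_scores: one inner list per item, one score per respondent."""
    k = len(item_scores)                                   # number of items on the scale
    item_var_sum = sum(pvariance(item) for item in item_scores)
    totals = [sum(scores) for scores in zip(*item_scores)]  # each respondent's total score
    return (k / (k - 1)) * (1 - item_var_sum / pvariance(totals))

# Three items answered by five respondents on a 1-5 scale (made-up numbers)
items = [
    [4, 5, 3, 5, 2],
    [4, 4, 3, 5, 1],
    [5, 5, 2, 4, 2],
]
print(round(cronbach_alpha(items), 2))
```

With these invented scores the items move together across respondents, so alpha comes out high; inconsistent items would drive it down.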
Not all tests currently available for use are equally reliable. In examining how much credence to give to test results received, clinicians should have some understanding about how reliable the test is, and how that reliability has been determined. Tests such as the Minnesota Multiphasic Personality Inventory (MMPI) have been scrutinized carefully for decades and have a high degree of reliability.
Newer tools that lack the pedigree and research-backed support of tests such as the MMPI enter the mental health marketplace on a regular basis, sometimes before reliability has been firmly established. It is important to be able to separate the reliable tests from those whose most important purpose is to make the reputation and fortune of the test developer.
Measurement tools used to inform intervention decisions about individuals should have an acceptable reliability of greater than .80. [Springer DW, Abell N & Hudson WW (2002). Creating and validating rapid assessment instruments for practice and research: Parts One and Two. Research on Social Work Practice, 6, 752-768.]
Validity refers to how well the standardized instrument measures what it claims to measure. The characteristics assessed by a particular measure (e.g., depression, stress, substance use) must be defined and verified by objective sources and empirical operations [Anastasi A (1988). Psychological testing (6th ed.). New York: Macmillan]. Clinicians need to know for what, and for whom, the instrument is valid. As was the case for reliability, clinicians should have some understanding of how valid the test is and how that validity has been determined.
This is a little more complicated, and requires a slightly higher degree of knowledge on the part of the clinician. There are four methods of establishing the validity of any measurement: 1) content validity; 2) criterion validity; 3) construct validity; and 4) factor analysis.
Content validity means that the test being used creates an accurate multidimensional representation of the range of traits or behaviors that are being assessed. This is to say that the test must accurately reflect the reality of what is being measured.
Criterion validity represents the relationship between the score and the trait or characteristic. This is to say that the score should accurately represent what you would predict for the trait or characteristic in terms of future performance or behavior.
Construct validity measures a variable that is believed to explain or organize observed responses. This is to say that the variable(s) should align with what is generally understood about complex concepts or constructs working together, as would be the case in measurements of complex constructs like intelligence or shyness.
Factor analysis is concerned with identifying factors that determine similar results among different tests. For example, someone with a high aptitude in math might score well in the math sections on both an intelligence test and the SAT. Factor analysis would look to determine what factor(s) (e.g., aptitude in using symbolic representations) would explain a high degree of correlation in scoring on similar sections of different tests.
Norms are determined by administering the test to a large representative population. Individual scores are then compared to the established norms for the test instrument. To be of any use, the norm group must share the characteristics of the individual being assessed, should comprise at least 100 individuals, and must be relevant to the individual [Sattler JM (1988). Assessment of children (3rd ed.). San Diego, CA: Author].
Clinicians should have adequate training in how any standardized testing instrument intended for clinical evaluation or assessment is administered, scored and interpreted. As stated before, even if clinicians do not themselves administer the test, they need knowledge of the theory of measurement and of the quality of the evaluation of the measurement tools.
Having this specialized training will guard against the misuse of the measurement tools or the misinterpretation of information collected from the use of the standardized measurement tool.
There are also some risks in the use of standardized measurement tools. These include using a tool that is not appropriate for the client population, which can lead to mislabeling the attributes of culturally or ethnically diverse populations.
Standardized measurements should not be the only resource to identify client characteristics. Standardized measurement tools are not intended to be a replacement for the clinical interview.
It is important not to overdo measurements. Even though it is important to collect repeated measures, they need to be tailored to the individual and the situation. Clinical measurement, like other clinical skills, is an art as well as a science. [Jordan C & Franklin C (2003). Clinical assessment for social workers (2nd ed.). Chicago: Lyceum Books.]
Qualitative measures of assessment such as the Narrative Assessment or personal histories have some advantages over the quantitative measures of assessment. These advantages include having a low cost, requiring no technology, allowing for simple administration, portability, providing a natural method of communicating personal information and offering a direct method of presenting the individual’s story.
However, as with quantitative assessment tools, qualitative measures also have significant disadvantages. These include practitioner bias, client motives, limited quantifiable data, and the inclusion of information not relevant to the intervention.
The narrative assessment will usually include relevant demographics, a description of the problem, the quality and nature of the individual’s resources for resolving the problem, and a summary.
The summary should contain specific goals and interventions, negotiated priorities and action steps and a clinical impression. It should not include interesting but irrelevant information.
From the combination of the qualitative and quantitative measures, a clinical picture begins to come into focus. The first piece to receive closer attention is the clinical impression, the topic at the center of the next section.