Evaluation and Program Planning, Vol. 6, pp. 299-314, 1983. Printed in the USA. All rights reserved. 0149-7189/83 $3.00 + .00. Copyright © 1984 Pergamon Press Ltd.

ASSESSMENT OF PATIENT SATISFACTION: Development and Refinement of a Service Evaluation Questionnaire

TUAN D. NGUYEN, C. CLIFFORD ATTKISSON, and BRUCE L. STEGNER

University of California, San Francisco

ABSTRACT

A series of seven studies was conducted by the authors and their colleagues to produce an efficient measure of service satisfaction that can easily be related to symptom level, demographic characteristics, and type and extent of service utilization. The resulting measure, the Service Evaluation Questionnaire (SEQ), is a brief, global index that has excellent internal consistency and solid psychometric properties. Data from an extensive SEQ field study can be used as a comparison base for future applications of the two SEQ component scales, the CSQ-8 and the SCL-10. A new hypothesis has emerged from this series of studies that will guide future research: Service recipients may find it difficult to formally express dissatisfaction in the face of significant caring, however ineffectual, when the technical capacity to offer definitive treatment is not yet fully developed and when criteria for evaluating the efficacy of treatment are not yet crystal clear.

In recent years there has been a significant shift toward broadening the scope of client participation in the evaluation of human service programs. A notable example of this trend is the proliferation of research on client and patient satisfaction. The distinguishing feature of satisfaction research is that service recipients are explicitly asked to evaluate the services provided to them. This paper presents our progress in developing and refining a simple, general scale for use in human service programs.

Support for the research presented in this paper was received by contract with the San Francisco Community Mental Health Service (NIMH Grant No. 09-H-000320-01-1) and by a National Institutes of Health Biomedical Research Support Award (United States Public Health Service Grant SO7 RR-05755). Requests for reprints should be sent to C. Clifford Attkisson, Department of Psychiatry, University of California, San Francisco, San Francisco, CA 94143.

OVERVIEW

Despite the increase in active involvement of health and human service recipients in program evaluation, there are some serious problems with using satisfaction data in this process. These problems include (a) high rates of reported "satisfaction," (b) lack of meaningful comparison bases, (c) lack of a standardized scale, and (d) difficulties in obtaining unbiased samples.

High Levels of Reported Satisfaction

A major problem encountered in using satisfaction measures is the ubiquitous finding that service recipients report high levels of satisfaction. The mental health literature is replete with such findings (Denner & Halperin, 1974; Frank, 1974; Gilligan & Wilderman, 1977; Goyne & Ladoux, 1973; Henchy & McDonald, 1973; Larsen, Attkisson, Hargreaves, & Nguyen, 1979). Linn (1975), in his review of patient evaluation of health care, concluded that, from study to study, levels of satisfaction are very high, regardless of the method used, the population sampled, or the object of rating. This finding of a high level of satisfaction can be interpreted in several ways. At one extreme, such findings can be viewed as arising solely from the clients' desire to give "grateful testimonials" and from other demand characteristics of the rating situation or the instrument used (Campbell, 1969). At the other extreme, the observed data are taken at face value as unquestioned "proof" of the effectiveness of the program. Both of these positions are shortsighted and misguided, because there are appropriate ways to derive meaning and usefulness from client satisfaction data while acknowledging their limitations.

Lack of Meaningful Comparison Bases

The majority of patient satisfaction studies in the literature report high levels of obtained satisfaction ratings. Consequently, the level of satisfaction in absolute terms, and in isolation from other data, is meaningless. For example, suppose that the mean satisfaction score for one service program is 70 on a scale of 1 to 100 with a standard deviation of 10. What can one conclude? Without some basis for comparison, little can really be said. Such statements as, "In general, our clients are satisfied," may actually be misleading. Now suppose further that the mean satisfaction score on the same scale in a sample of comparable service programs is 85. Clearly, a score of 70 is much lower than the mean for the population of similar programs. With this additional information, one could now conclude that the current program is in fact viewed as less satisfactory by its clients than are comparable programs by theirs.

Lack of a Standard Scale

A unique feature of client satisfaction research to date is the tendency of investigators to invent their own questionnaires or to modify existing scales in such a way that one is not sure whether the original and modified versions measure the same thing. Most often, these inventions or modifications are carried out solely on the basis of face validity, without careful systematic analysis of their effects (e.g., Love, Caid, & Davis, 1979). Yet it is obvious that one cannot make valid and meaningful comparisons of different programs, or of components of the same program, when the measurement conditions differ from one another in terms of instruments, data collection methods, or data analyses.
If one program uses a 10-item scale while another uses a 20-item scale, and the two scales have little in common in content and format, data from the two programs cannot be pooled for meaningful comparisons. Even within programs, the lack of standardized measurement will preclude comparing different time periods, different groups of clients, or different program components. Thus, at a minimum, when an existing scale is modified for use in specific situations, one must establish clearly the psychometric relationships between the original and modified versions of the scale.

Difficulty in Avoiding Sampling Biases

It is a common finding that a large proportion of clients drop out of programs, particularly when services are extended over time. This is certainly true in mental health outpatient programs (e.g., Brandt, 1965; Garfield, 1971; Sue, McKinney, & Allen, 1976). Since dissatisfied clients are more likely to drop out, the timing of data collection may determine the extent and direction of bias in the data. Positive bias (in favor of the program) is increased if data are gathered long after the client entry point and those who have terminated are not reached. On the other hand, if data are collected close to the point of client entry in order to counter the dropout bias, then clients have not yet experienced the complete service package. Collecting the data through mailed questionnaires after clients have left the program also has its problems: The return rate is often very low, usually below 35%, and satisfied clients often are more likely to return the questionnaires than are dissatisfied clients. Another possibility is to sample clients cross-sectionally, for example by soliciting ratings from all clients who receive services during a 2-week period. This approach allows the sampling of clients who have had different amounts of experience with the program. It is often the least expensive approach, but it is biased against the inclusion of clients who drop out early or who miss their appointments during the data collection period. In general, it seems impossible to rule out completely the biases inherent in the different sampling strategies. Thus, it is crucial to ascertain the relative biasing consequences of various sampling approaches and then to choose the least biasing approach. For comparisons over time within the same program, any consistent sampling scheme is probably satisfactory, since the extent of bias can be assumed to be constant across data collections.

THE UCSF PATIENT SATISFACTION RESEARCH PROGRAM

This paper describes a series of studies of patient satisfaction that have been carried out at the University of California, San Francisco over the last 6 years. To focus on the problems presented in the overview, our research was designed (a) to construct and assess a simple client satisfaction scale that possesses sound construct validity, a coherent structure, and stable psychometric properties; (b) to identify the relationships of client satisfaction with measures of mental health outcomes; and (c) to identify important variables that need to be controlled in any attempt to standardize the scale.

Study I: Ensuring Construct Validity

Our first step in developing a satisfaction scale was to consult published and unpublished sources in order to identify aspects of the service delivery process that could be determinants of satisfaction with services. From this literature search, we were able to identify nine dimensions of service delivery that could conceivably be primary targets of satisfaction ratings by patients (Larsen et al., 1979). For each category, we created nine items. Our intent in the selection of these items was to ensure breadth of content and to highlight nuances pertaining to each category. Each item was phrased as a question having a 4-point anchored answer without a neutral position. Table 1 lists the nine categories with an example of an item in each. A group of 32 mental health professionals ranked the nine items in each category according to how well each item reflected the dimension in question. Items were ranked from best (9) to worst (1) with no ties. Items receiving a mean rank of 5 or higher were kept in the pool. This left 45 items, with a minimum of 4 and a maximum of 6 per category. The pool of 45 items was next rated by 31 members of various California County Mental Health Advisory Boards. These raters were asked, given their position as advisory personnel, to rank items (again, within category) by selecting those about which they would most like to receive feedback. The three top-ranked items in each category were selected. Four additional items were also retained because their content was sufficiently different to justify inclusion. Thus, the preliminary version of the scale (the "Client Satisfaction Questionnaire" [CSQ-31]) was composed of 31 items, with a minimum of three items in each category (Larsen, Attkisson, Hargreaves, & Nguyen, Note 1).
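The two-stage winnowing used in Study I (retain items whose mean expert rank is 5 or higher, then keep the top-ranked survivors per category) can be sketched as follows; the item names and rank values here are invented for illustration, not taken from the study:

```python
from statistics import mean

# Hypothetical expert ranks (9 = best, 1 = worst) for five of one
# category's items; each inner list holds one rater's rank for that item.
ranks = {
    "item_a": [9, 8, 9, 7],
    "item_b": [4, 5, 3, 4],
    "item_c": [7, 7, 8, 8],
    "item_d": [2, 1, 2, 3],
    "item_e": [6, 6, 5, 6],
}

# Stage 1: retain items whose mean rank is 5 or higher.
kept = {name: mean(r) for name, r in ranks.items() if mean(r) >= 5}

# Stage 2: keep the three top-ranked survivors for the category.
top_three = sorted(kept, key=kept.get, reverse=True)[:3]
print(top_three)  # → ['item_a', 'item_c', 'item_e']
```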

Study II: Testing Psychometric Properties and Further Refinement of the CSQ-31

The preliminary, 31-item scale (CSQ-31) was administered to 248 mental health clients in five service settings. The sample, described in Larsen et al. (1979), was exclusively outpatient but included persons receiving a variety of treatments. Some individuals were still in treatment at the time they completed the preliminary scale, while others had left treatment for periods up to 6 months. The data from this study were submitted to a principal-components factor analysis, using squared multiple correlations as initial communality estimates. The first factor from this solution accounted for 43% of the total variance and about 75% of the common variance. The second factor accounted for less than 7% of the common variance. Even when items with high first-factor loadings were removed and the analysis repeated, no other factor accounted for as much as 10% of the total variance. This finding provided strong evidence that only one salient dimension underlies the responses to the items in the preliminary CSQ-31 scale (Larsen et al., Note 1). To construct the final, briefer scale, we selected the eight items that loaded highly on the unrotated first factor and that exhibited good inter-item and item-total correlations. Coefficient α for this 8-item questionnaire (the CSQ-8) was .93, indicating that it possesses a high degree of internal consistency. These eight items provide a homogeneous estimate of general satisfaction with services. A copy of the CSQ-8 is reproduced in Figure 1. Items 3, 7, and 8 of this scale also appear to function well as a smaller global measure of satisfaction.

Study III: Testing the Reliability of the CSQ Scale

In this reliability study, we developed parallel forms (Figures 2 and 3) of the preliminary CSQ-31 scale. One of the eight items in the CSQ-8 that had the highest factor loading and four items chosen randomly from

TABLE 1
CONTENT CATEGORIES AND SAMPLE ITEMS

Physical surroundings: In general, how satisfied are you with the comfort and attractiveness of our facility?
Support staff: When you first came to our program, did the receptionists and secretaries seem friendly and make you feel comfortable?
Kind/type of service: Considering your particular needs, how appropriate was the kind of service you received?
Treatment staff: How competent and knowledgeable was the person with whom you worked most closely?
Quality of service: How would you rate the quality of service you received?
Amount, length, or quantity of service: How satisfied are you with the amount of help you received?
Outcome of service: Have the services you received helped you to deal more effectively with your problems?
General satisfaction: In an overall, general sense, how satisfied are you with the service you received?
Procedures: When you first came to our program, were you seen as promptly as you felt necessary?
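The internal-consistency values reported throughout this paper (coefficient α in the high .80s to low .90s) follow the standard Cronbach formula, α = (k / (k - 1)) × (1 - Σ item variances / total-score variance). A minimal sketch with synthetic ratings on the 4-point CSQ format (not study data):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the scale total
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Synthetic example: 6 respondents answering 4 items on the 1-4 format
scores = np.array([
    [4, 4, 3, 4],
    [3, 3, 3, 3],
    [4, 3, 4, 4],
    [2, 2, 1, 2],
    [3, 4, 3, 3],
    [1, 2, 2, 1],
])
print(round(cronbach_alpha(scores), 2))  # → 0.94
```

Highly consistent items, as here, drive α toward 1; uncorrelated items drive it toward 0.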

CLIENT EVALUATION OF SERVICES

Please help us improve our program by answering some questions about the services you have received. We are interested in your honest opinions, whether they are positive or negative. Please answer all of the questions. We also welcome your comments and suggestions. Thank you very much; we really appreciate your help.

CIRCLE YOUR ANSWER

1. How would you rate the quality of service you have received? (4 Excellent / 3 Good / 2 Fair / 1 Poor)
2. Did you get the kind of service you wanted? (1 No, definitely not / 2 No, not really / 3 Yes, generally / 4 Yes, definitely)
3. To what extent has our program met your needs? (4 Almost all of my needs have been met / 3 Most of my needs have been met / 2 Only a few of my needs have been met / 1 None of my needs have been met)
4. If a friend were in need of similar help, would you recommend our program to him or her? (1 No, definitely not / 2 No, I don't think so / 3 Yes, I think so / 4 Yes, definitely)
5. How satisfied are you with the amount of help you have received? (1 Quite dissatisfied / 2 Indifferent or mildly dissatisfied / 3 Mostly satisfied / 4 Very satisfied)
6. Have the services you received helped you to deal more effectively with your problems? (4 Yes, they helped a great deal / 3 Yes, they helped somewhat / 2 No, they really didn't help / 1 No, they seemed to make things worse)
7. In an overall, general sense, how satisfied are you with the service you have received? (4 Very satisfied / 3 Mostly satisfied / 2 Indifferent or mildly dissatisfied / 1 Quite dissatisfied)
8. If you were to seek help again, would you come back to our program? (1 No, definitely not / 2 No, I don't think so / 3 Yes, I think so / 4 Yes, definitely)

Figure 1. The CSQ-8 as Presented in the Service Evaluation Questionnaire (cf. Larsen, Attkisson, Hargreaves, & Nguyen, 1979).


Please help us improve our program by answering some questions about the services you have received. We are interested in your honest opinions, whether they are positive or negative. Please answer all of the questions. We also welcome your comments and suggestions. Thank you very much; we really appreciate your help.

CIRCLE YOUR ANSWERS

1. Have you ever felt that our program was more concerned with procedures than with helping you? (4 Concerned mostly with helping me / 3 Concerned with helping me more than with procedures / 2 Concerned with procedures more than with helping me / 1 Concerned mostly with procedures)
2. How satisfied are you with the quality of the service you have received? (1 Quite dissatisfied / 2 Indifferent or mildly dissatisfied / 3 Mostly satisfied / 4 Very satisfied)
3. How satisfied are you with the kind of service you have received? (1 Quite dissatisfied / 2 Indifferent or mildly dissatisfied / 3 Mostly satisfied / 4 Very satisfied)
4. How satisfied are you with the amount of help you have received? (1 Quite dissatisfied / 2 Indifferent or mildly dissatisfied / 3 Mostly satisfied / 4 Very satisfied)
5. You came to our program with certain problems. How are those problems now? (1 Worse or much worse / 2 No change / 3 Somewhat better / 4 A great deal better)
6. Have the services you received helped you to deal more effectively with your problem? (4 Yes, they helped a great deal / 3 Yes, they helped somewhat / 2 No, they really didn't help / 1 No, they seemed to make things worse)
7. How convenient is the location of our building? (4 Very convenient / 3 Somewhat convenient / 2 Somewhat inconvenient / 1 Very inconvenient)
8. In general, have the receptionists and secretaries seemed friendly and made you feel comfortable? (4 Yes, definitely / 3 Yes, most of the time / 2 No, sometimes not / 1 No, often not)
9. Have the services you received led to any changes in either your problems or yourself? (1 Yes, but the changes were for the worse / 2 No, there was really no noticeable change / 3 Yes, some noticeable change for the better / 4 Yes, a great deal of positive change)
10. Are you satisfied with the fee that was charged for the services you received? (1 Quite dissatisfied / 2 Indifferent or mildly dissatisfied / 3 Mostly satisfied / 4 Very satisfied)
11. Do you feel that our program has kept your problems confidential? (4 Yes, I feel they definitely have / 3 Yes, I feel they have / 2 No, I feel they have not / 1 No, I feel they definitely have not)

Figure 2. Client Satisfaction Questionnaire (Parallel Form A) (CSQ-18A)

12. How would you rate the quality of service you have received? (4 Excellent / 3 Good / 2 Fair / 1 Poor)
13. In an overall, general sense, how satisfied are you with the service you have received? (4 Very satisfied / 3 Mostly satisfied / 2 Indifferent or mildly dissatisfied / 1 Quite dissatisfied)
14. When you first came to our program, did the receptionists and secretaries seem friendly and make you feel comfortable? (4 Yes, they definitely did / 3 Yes, they generally did / 2 No, they generally didn't / 1 No, they definitely didn't)
15. Have you received as much help as you wanted? (1 No, definitely not / 2 No, not really / 3 Yes, generally / 4 Yes, definitely)
16. To what extent has our program met your needs? (4 Almost all of my needs have been met / 3 Most of my needs have been met / 2 Only a few of my needs have been met / 1 None of my needs have been met)
17. How interested have the receptionists and secretaries been in helping you? (4 Very interested / 3 Interested / 2 Somewhat uninterested / 1 Very uninterested)
18. How interested in helping you was the person with whom you have worked most closely? (4 Very interested / 3 Interested / 2 Somewhat uninterested / 1 Very uninterested)

Figure 2. Client Satisfaction Questionnaire (Parallel Form A) (CSQ-18A) (continued).

the remaining seven were included in both forms. These five items were then removed from the CSQ-31, and the remaining 26 items of this set were alternately placed in one of the two parallel forms. Thus, each parallel form contained 18 items, 5 of which are common to both forms (the CSQ-18, Forms A and B). Both Forms A and B of the CSQ-18 were presented in counterbalanced order to 34 clients of a day treatment program in an urban community mental health center (LeVois, Nguyen, & Attkisson, 1981). The obtained means were 2.94 (SD = .491) for Form A and 2.96 (SD = .447) for Form B. These two means did not differ significantly from each other (t(33) = -.50, p = .62). The two forms also correlated significantly with each other (r = .822, p < .01). Thus our results indicate a high degree of split-half reliability for the scale, at least for this patient population.

Study IV: Examining the Effects of Modes of CSQ Administration

Campbell (1969) has given this tongue-in-cheek advice to "trapped administrators": "Human courtesy and

gratitude being what it is, the most dependable means of assuring a favorable evaluation is to use voluntary testimonials from those who have had the treatment" (p. 426). Implied in this statement is the hypothesis that client satisfaction ratings are easily influenced by a large number of social-psychological artifacts, including social desirability bias (Edwards, 1957), an ingratiating response set (Scheirer, 1978), the Hawthorne effect (Roethlisberger & Dickson, 1939), and researcher bias (Rosenthal, 1966; Friedman, 1967; Gordon & Morse, 1975). Test administration situations that amplify the effects of these artifacts can be expected to decrease the comparability of client satisfaction data even when a standardized scale is used. This possibility prompted us to compare the two most often used modes of administration: written and oral. For the purposes of this study, the two parallel forms developed in Study III were used. Subjects were 60 volunteers drawn from the active client rosters of two day treatment programs of an urban Community Mental Health Center (LeVois et al., 1981). This sample was largely white (58%), male (62%), and single


Please help us improve our program by answering some questions about the services you have received. We are interested in your honest opinions, whether they are positive or negative. Please answer all of the questions. We also welcome your comments and suggestions. Thank you very much; we really appreciate your help.

CIRCLE YOUR ANSWERS

1. When you first came to our program, were you seen as promptly as you felt necessary? (4 Yes, very promptly / 3 Yes, promptly / 2 No, there was some delay / 1 No, it seemed to take forever)
2. In general, how satisfied are you with the comfort and attractiveness of our facility? (1 Quite dissatisfied / 2 Indifferent or mildly dissatisfied / 3 Mostly satisfied / 4 Very satisfied)
3. Did the characteristics of our building detract from the services you have received? (1 Yes, they detracted very much / 2 Yes, they detracted somewhat / 3 No, they did not detract much / 4 No, they did not detract at all)
4. How satisfied are you with the amount of help you have received? (1 Quite dissatisfied / 2 Indifferent or mildly dissatisfied / 3 Mostly satisfied / 4 Very satisfied)
5. Considering your particular needs, how appropriate are the services you have received? (4 Highly appropriate / 3 Generally appropriate / 2 Generally inappropriate / 1 Highly inappropriate)
6. Have the services you received helped you to deal more effectively with your problems? (4 Yes, they helped a great deal / 3 Yes, they helped somewhat / 2 No, they really didn't help / 1 No, they seemed to make things worse)
7. When you talked to the person with whom you have worked most closely, how closely did he or she listen to you? (1 Not at all closely / 2 Not too closely / 3 Fairly closely / 4 Very closely)
8. Did you get the kind of service you wanted? (1 No, definitely not / 2 No, not really / 3 Yes, generally / 4 Yes, definitely)
9. Are there other services you need but have not received? (1 Yes, there definitely were / 2 Yes, I think there were / 3 No, I don't think there were / 4 No, there definitely were not)
10. How clearly did the person with whom you worked most closely understand your problem and how you felt about it? (4 Very clearly / 3 Fairly clearly / 2 Somewhat unclearly / 1 Very unclearly)
11. How competent and knowledgeable was the person with whom you have worked most closely? (1 Poor abilities at best / 2 Only of average abilities / 3 Competent and knowledgeable / 4 Highly competent and knowledgeable)

Figure 3. Client Satisfaction Questionnaire (Parallel Form B) (CSQ-18B).

12. How would you rate the quality of the service you have received? (4 Excellent / 3 Good / 2 Fair / 1 Poor)
13. In an overall, general sense, how satisfied are you with the service you have received? (4 Very satisfied / 3 Mostly satisfied / 2 Indifferent or mildly dissatisfied / 1 Quite dissatisfied)
14. If a friend were in need of similar help, would you recommend our program to him or her? (1 No, definitely not / 2 No, I don't think so / 3 Yes, I think so / 4 Yes, definitely)
15. Have the people in our program generally understood the kind of help you wanted? (1 No, they misunderstood almost completely / 2 No, they seemed to misunderstand / 3 Yes, they seemed to generally understand / 4 Yes, they understood almost perfectly)
16. To what extent has our program met your needs? (4 Almost all of my needs have been met / 3 Most of my needs have been met / 2 Only a few of my needs have been met / 1 None of my needs have been met)
17. Have your rights as an individual been respected? (1 No, almost never respected / 2 No, sometimes not respected / 3 Yes, generally respected / 4 Yes, almost always respected)
18. If you were to seek help again, would you come back to our program? (1 No, definitely not / 2 No, I don't think so / 3 Yes, I think so / 4 Yes, definitely)

Figure 3. Client Satisfaction Questionnaire (Parallel Form B) (CSQ-18B) (continued).

(60%). Most had finished high school (72%) and were about 37 years of age. Subjects were assigned randomly to one of four experimental conditions designed to counterbalance the order of the modes of administration (oral versus written) and of the CSQ-18 forms (A versus B). This sample of chronically mentally ill patients produced a considerable amount of missing data. Sixteen clients failed to answer any of the items on one or the other of the two forms, thus precluding the calculation of an individual average for one of the two modes of administration. Consequently, the sample size was reduced to 44 clients. The overall mean across the 44 clients was 2.739 for the written condition and 3.075 for the oral condition; the oral mean was thus about 10% higher than the written mean. A correlated t test indicated that these means were significantly different (t(43) = 2.99, p < .01). Effects associated with the forms and the form-by-mode interaction were not statistically significant.
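The mode comparison above rests on a correlated (paired) t test over each client's mean rating under the two administration conditions. A minimal sketch with fabricated per-client means on the 1-4 format (not the study's data):

```python
import math
import statistics as stats

def paired_t(a, b):
    """Correlated (paired) t statistic for two equal-length score lists."""
    diffs = [x - y for x, y in zip(a, b)]
    return stats.mean(diffs) / (stats.stdev(diffs) / math.sqrt(len(diffs)))

# Fabricated per-client mean satisfaction ratings under each mode
oral    = [3.2, 3.5, 2.9, 3.8, 3.1, 2.7, 3.4, 3.0]
written = [2.8, 3.1, 2.6, 3.4, 2.9, 2.5, 3.0, 2.8]
print(round(paired_t(oral, written), 2))  # → 8.92
```

Because every fabricated client rates higher orally, the difference scores are uniformly positive and the t statistic is large; in the actual study the oral advantage was more modest (t(43) = 2.99).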

A comparison between the two modes of administration was also made using the number of unanswered items as the dependent variable and including all 60 cases in the analysis. We found that the oral administration mode produced significantly fewer unanswered items (1.22 items per subject, or 7% per questionnaire) than did the written mode (5.87 items per subject, or 33% per questionnaire; t(59) = 4.71, p < .01). The effects of the mode of administration were also evident with a nonverbal scale that was used concurrently with the verbal questionnaire. The nonverbal scale consisted of a graphic ladder of the type developed by Cantril and Kilpatrick (Cantril, 1963) for use in cross-cultural research. The scale used in our study was drawn to resemble an upright seven-rung ladder, with the bottom rung corresponding to least satisfaction and each rung moving up the ladder corresponding to increasing satisfaction. The ends of the scale were labeled as follows: "The services offered

Service Evaluation Questionnaire here couldn’t be worse” (at the bottom), and “The services offered here couldn’t be better” (at the top). A simple written explanation accompanied the ladder: “Place an x on the rung of the ladder to indicate your satisfaction with the services you have received at this program.” This explanation was read aIoud to subjects in the oral condition. The results indicated that responses to this nonverbal scale are just as sensitive to the presence of an interviewer as were responses to the verbal scale: When an interviewer collected the data, the ratings on the seven-rung ladder were about 15% higher than when clients marked their ratings alone (means = 5.57 and 4.76, respectively; t(53) = 2.11, p < .05). Study V: Relationship of Satisfaction to Psychotherapy Outcome The CSQ-8 was included in an experimental study of therapy outcome in an urban community mental health center (Larsen, 1979). It was administered to 49 outpatient clients approximately 4 weeks following admission for individual therapy. Results indicated that the CSQ-8 retained the same basic psychometric properties as in the previous study: Item means were similar and the coefficient (Ywas .92. Two process variables correlated significantly with satisfaction ratings. Clients who dropped out of the program within the first month tended to have lower mean CSQ-8 scores than those still in the program (r = .37, p < ‘01). Among clients still in treatment, those missing a greater percentage of their scheduled appointments also tended to score lower on the CSQ-8 (r = .27, p < ,061. Clients made a global self-rating of improvement at the 4-week follow-up and completed three subscales of the Symptom Checklist (SCL-90, [Derogatis, Lipman, & Covi, 19731) at admission and follow-up. Selfratings of global improvement correlated significantly with the CSQ-8 ratings (r = .53, p ==c.OOl). 
At follow-up, two of the XL-90 subscales, “Depression” and “Anger,” showed a low but significant and negative correlation with the CSQ-8 when the relationship to admission scores was partialled out (r = - .32 and - .36, p < .05). Thus, client-rated therapy gain is correlated with client satisfaction, but apparently less so as the measure becomes more specific. Therapists also made several ratings at the two measurement points. Their ratings of client global improvement were not significantly related to client ratings on the CSQ-8. Follow-up scores on the Global Assessment Scale (Endicott, Spitzer, Fleiss, & Cohen, 1976) were also unrelated to the CSQ-8 scores. Partial correlations were computed between follow-up scores on a modified and abbreviated version of the Brief Psychiatric Rating Scale (BPRS, [Overall & Gorham, 19621) and the CSQ-8 scores, controlling for admission symptom levels. Three of the nine symptom ratings


(“Anxiety, ” “Thought Disturbance,” and “Interpersonal-Social-Marital Problems”) correlated significantly (p < .05) with scores on the CSQ-8 (r = .37, .33, and .33, respectively) while two other ratings, “Depression” and “Job-Related Difficulties,” showed a marginal (p < .lO) relationship (r = .28 and .29, respectively). The total score on the modified BPRS correlated .44 (p < .Ol) with the CSQ-8 scores. Thus, again, there is the suggestion that a modest relationship exists between therapy gain and satisfaction. Finally, therapists’ estimates of how satisfied they believed their clients to be were correlated .56 (p < .Ol) with the actual client rating on the CSQ-8. This finding provides some evidence of the concurrent validity of the CSQ-8. Study VI: Relationship of Satisfaction to Psychotherapy Outcome-A Subsequent Study A recent experimental study (Zwick, 1982) provided further information about the psychometric properties of two versions of the Client Satisfaction Questionnaire (the CSQ-8 and the CSQ-18) and the relationship of these two measures to service utilization and therapy outcome. In Zwick’s study, 62 clients at an urban community mental health center were randomly assigned to an oriented group, which viewed a pretherapy orientation videotape at admission, or to a control group that had a routine admission without the videotape. Forty-six of these clients agreed to participate in a l-month follow-up. All but one of these clients completed the CSQ-8, the CSQ-18 (Form B, see Figure 3), and additional outcome measures at the follow-up (Attkisson & Zwick, 1982). Both the CSQ-IS and the CSQ-8 were found to be substantially correlated with remainer-terminator status after 1 month and with the number of therapy sessions attended in 1 month. The results concerning treatment dropout are consistent with Larsen (1979), who also found a relationship between level of satisfaction and remainer-terminator status. 
However, Larsen (1979) found a relationship between satisfaction and the proportion of appointments missed, whereas Zwick did not. Results indicated that the CSQ-18 had high internal consistency (coefficient α = .91) and was substantially correlated with remainer-terminator status (r = .61) and with the number of therapy sessions attended in one month (r = .54). The CSQ-18 was also correlated with change in client-reported symptoms (r = -.35), indicating that greater satisfaction was associated with greater symptom reduction. Results also demonstrated that the CSQ-8 performed as well as the CSQ-18 and often better. The excellent performance of the CSQ-8, coupled with its brevity, suggests that it may be especially useful as a brief global measure of client satisfaction. The CSQ-8 was clearly superior to the CSQ-18 in

terms of internal consistency, as reflected in its higher coefficient α value (.93), item-total correlations, and inter-item correlations. Previous studies of the CSQ-8 had found internal consistency values of .93 (Larsen, 1979), .92 (Larsen et al., 1979), and .87 (see Study VII, this paper). Internal consistency values other than those described in this paper are not available for the CSQ-18. The scale means for the CSQ-18 and the CSQ-8 were 55.38 and 24.16, respectively; the standard deviations were 8.51 and 4.94. On the CSQ-18, the score distribution was nearly symmetrical, whereas on the CSQ-8 the distribution was slightly skewed to the left. The correlation between the CSQ-18 and the CSQ-8 was .93. Item analyses for the two versions revealed that, for this sample, the average item means and variances were almost identical (Zwick, 1982). The average item mean was 3.08 for the CSQ-18 and 3.02 for the CSQ-8; the average item variances were .58 and .57, respectively. Although the two satisfaction measures were not found to be related to change on therapist ratings of symptom level or global functioning, both measures were correlated with change in self-reported symptoms, as measured by the Client Checklist. The fact that clients’ satisfaction ratings were not correlated with their concurrent ratings of symptom levels suggests that the relationship between symptom reduction and satisfaction was not merely the result of a global satisfaction factor or halo effect. In addition, it was possible, through the use of partial correlations, to determine that the association between symptom reduction and satisfaction was not due solely to the correlation of each of these variables with initial symptom level.
Examination of the relationships between satisfaction and the remaining therapy outcome measures revealed that the CSQ-8 was correlated with client and therapist global ratings of improvement, as assessed by the Client Self-Rating Scale and the Therapist Global Rating Scale. Thus, despite the brevity of the follow-up period, satisfaction with services was related to three of the five measures of improvement in therapy. The findings relating satisfaction to service utilization and outcome provide further evidence for the construct validity of the CSQ-8 and the CSQ-18.
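The internal-consistency statistics reported throughout these studies (coefficient α and corrected item-total correlations) follow standard formulas. A minimal sketch, not the authors' software:

```python
import numpy as np

def cronbach_alpha(items):
    """Coefficient alpha for an (n_respondents x k_items) score matrix."""
    X = np.asarray(items, dtype=float)
    k = X.shape[1]
    # alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    return (k / (k - 1)) * (1 - X.var(axis=0, ddof=1).sum()
                            / X.sum(axis=1).var(ddof=1))

def corrected_item_total(items):
    """Each item correlated with the sum of the *other* items,
    i.e., corrected for the presence of the item being evaluated."""
    X = np.asarray(items, dtype=float)
    total = X.sum(axis=1)
    return np.array([np.corrcoef(X[:, j], total - X[:, j])[0, 1]
                     for j in range(X.shape[1])])
```

Applied to an 8-column matrix of CSQ-8 responses, these two functions reproduce the kind of α and item-total figures tabulated in Study VII.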

Study VII: Construction and Evaluation of a Service Evaluation Questionnaire (SEQ)

In the summer of 1979, a “Service Evaluation Questionnaire” (SEQ) was constructed for initial field testing in a variety of service settings and with a variety of client types across the United States. The SEQ is an instrument designed to function as a brief, reliable measure of (a) client satisfaction (the CSQ-8) and (b) client self-evaluation of psychological symptoms (the SCL-10). In addition, the SEQ records client demographic information as well as selected indices of service utilization. The SEQ is presented in Figures 1 and 4. A summary sheet for collecting demographic and services utilization data is also a formal part of the SEQ. This summary sheet is not included in this paper, but is available on request (Attkisson & Nguyen, Note 3).

Construction of the Symptom Checklist (SCL-10). The SCL-10 (or “Client Self-Evaluation,” as this scale was named for purposes of the Service Evaluation Questionnaire [SEQ]) was derived from the Symptom Checklist-90 (SCL-90) developed by Derogatis, Lipman, and Covi (1973). Hoffmann and Overall (1978) reported that a single global score derived from selected SCL-90 items could be used “. . . as an index of psychopathology or psychological discomfort” (p. 1189). In their research, Hoffmann and Overall found that a 5-factor solution to a factor analysis of patient responses to the SCL-90 produced the most interpretable results. The first factor, labeled “Depression,” was the most important: it accounted for 6.45 times the variance of the next largest factor (“Somatization”) and twice the variance of the remaining factors combined. In summary, three of the five factors, “Depression,” “Somatization,” and “Phobic Anxiety,” were most clearly defined and accounted for a large proportion of variance in the SCL-90 scores produced by a general psychiatric clinic outpatient population. Based on Hoffmann and Overall (1978), ten items were selected for trial use as a brief, global symptom checklist. The items selected were those that loaded highest on the three most important factors identified by Hoffmann and Overall (Note 2). Since “Depression” was, by far, the most salient factor, six items (Items 1, 2, 5, 8, 9, and 10 in Figure 4) were selected to represent this factor. Two items each were selected to represent “Somatization” (Items 4 and 6 in Figure 4) and “Phobic Anxiety” (Items 3 and 7 in Figure 4). The resulting 10-item scale was prepared for trial use in an extensive field study of the SEQ. Collaborating Sites in the National SEQ Field Study. To date, the SEQ has been administered to 3,628 clients of 76 collaborating clinical facilities representing 21 different community mental health centers, public health centers, and freestanding mental health clinics.
Of these facilities, 4 were psychiatric inpatient settings, 11 were residential facilities (i.e., halfway and three-quarter way houses), 20 were partial care programs, and 41 were outpatient clinics. As detailed in Table 2, respondents were drawn from outpatient settings (81.5%), and preponderantly from the western region of the United States (50%) and the State of California (46.9%). Also, 59.9% of the data were obtained from facilities located in an urban or metropolitan area.
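Scoring the SCL-10 as described above (a total score plus “Depression,” “Somatization,” and “Phobic Anxiety” subsets) might be sketched as follows. The function and dictionary names are hypothetical; the item-to-factor assignment follows the text.

```python
# Item-to-factor assignment as described in the text (item numbers
# refer to the SCL-10 as presented in Figure 4).
SUBSCALES = {
    "depression":     [1, 2, 5, 8, 9, 10],
    "somatization":   [4, 6],
    "phobic_anxiety": [3, 7],
}

def score_scl10(responses):
    """Score one respondent's SCL-10.

    responses: dict mapping item number (1-10) to a 0-4 rating
    (0 = Not at all ... 4 = Extremely).
    Returns subscale sums and the total score.
    """
    if sorted(responses) != list(range(1, 11)):
        raise ValueError("expected ratings for items 1-10")
    if any(not 0 <= v <= 4 for v in responses.values()):
        raise ValueError("ratings must be in the range 0-4")
    scores = {name: sum(responses[i] for i in items)
              for name, items in SUBSCALES.items()}
    scores["total"] = sum(responses.values())
    return scores
```

A respondent who circled “A little bit” (1) on every item would thus receive a total of 10, with a Depression subset score of 6.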

CLIENT SELF-EVALUATION

Below is a list of problems and complaints that people sometimes have. Read each item carefully, and circle one of the answers that best describes HOW MUCH DISCOMFORT THAT PROBLEM HAS CAUSED YOU DURING THE PAST WEEK, INCLUDING TODAY. Do not skip any items. If you change your mind, erase your first answer completely. If you have any questions, please ask the questionnaire administrator.

Each item is answered on the same five-point scale:
0 = Not at all; 1 = A little bit; 2 = Moderately; 3 = Quite a bit; 4 = Extremely.

How much were you distressed by:
 1. Feeling lonely?
 2. Feeling no interest in things?
 3. Feeling afraid in open spaces or on the streets?
 4. Feeling weak in part of your body?
 5. Feeling blue?
 6. Heavy feelings in your arms or legs?
 7. Feeling afraid to go out of your house alone?
 8. Feeling tense or keyed up?
 9. Feelings of worthlessness?
10. Feeling lonely even when you are with people?

Figure 4. The SCL-10 as Presented in the Service Evaluation Questionnaire (see Derogatis, Lipman, & Covi, 1973; and Hoffmann & Overall, 1978).


TABLE 2
CHARACTERISTICS OF SETTINGS COLLABORATING IN THE SERVICE EVALUATION QUESTIONNAIRE FIELD STUDY

Variable                       Number of      Percentage in
                               Respondents    Each Category
Primary Service Type
  Inpatient                          74              2.0
  Residential                       103              2.8
  Partial day                       493             13.6
  Outpatient                      2,958             81.5
  Total                       N = 3,628             100%
State Location
  Arkansas                           36              1.0
  California                      1,703             46.9
  Florida                            83              2.3
  Georgia                            47              1.3
  Illinois                           51              1.4
  Indiana                           202              5.6
  Nevada                            110              3.0
  Pennsylvania                      712             19.6
  Tennessee                         303              8.4
  Vermont                           117              3.2
  (Not identified)                  264              7.3
  Total                       N = 3,628             100%
Urban/Rural
  Urban                           2,172             59.9
  Rural                           1,456             40.1
  Total                       N = 3,628             100%

The Study Procedures. All data were collected according to the following standardized procedure. Questionnaires were printed and distributed by the authors to the collaborating sites. A 2-week period was set aside at each site for data collection. During this period, all clients who arrived for service were asked to complete the questionnaire on-site. Continuing clients completed the SEQ before seeing their service providers, whereas clients who came for their first visit completed the questionnaire after their intake interview. Completed SEQs were then sent to the authors for coding and data analysis. Each collaborating facility received a standard package of statistical summaries that provided a sociodemographic profile of its clients and mean scores on each item of the SEQ, including separate reports for the CSQ-8, the SCL-10, and the SCL-10 subscales.

Table 3 presents the sociodemographic profile for the total sample of 3,628 respondents. A little more than half of the respondents were female (58.4%); the vast majority were adults (86.1%); 44.8% had more than a high school education; 75.7% were Caucasian and 10.5% were black; 41.2% had never been married; the vast majority were at lower income levels; and more than half were unemployed at the time of the study. Evaluation of the data presented in Table 3 indicated that this sample is quite typical of the population of persons receiving care in publicly funded health and mental health facilities, when this population is viewed from the national perspective.

TABLE 3
CHARACTERISTICS OF SEQ RESPONDENTS

Variable                       Number of      Adjusted
                               Patients       Percentage
Sex
  Female                          2,072             58.4
  Male                            1,477             41.6
  (Not identified)                   79
  Total                       N = 3,628             100%
Age groups
  Children (1-17)                   234              6.6
  Adults (18-64)                  3,064             86.1
  Aged (65+)                        259              7.3
  (Not identified)                   71
  Total                       N = 3,628             100%
Education
  Grade 8 or less                   383             11.3
  9 to 11                           586             17.3
  High school                       901             26.6
  Some college                      918             27.1
  College graduate                  305              9.0
  More than college                 296              8.7
  (Not identified)                  239
  Total                       N = 3,628             100%
Ethnocultural group
  Native American                    45              1.3
  Black                             361             10.5
  Chinese                           107              3.1
  Filipino                           35              1.0
  Japanese                           21              0.6
  Mexican-American                   42              1.2
  Other Hispanic                     96              2.8
  Caucasian                       2,605             75.7
  Other                             128              3.7
  (Not identified)                  188
  Total                       N = 3,628             100%
Marital status
  Never married                   1,380             41.2
  Married                           943             28.1
  Separated                         235              7.0
  Divorced                          608             18.1
  Widowed                           184              5.5
  (Not identified)                  278
  Total                       N = 3,628             100%
Yearly family income
  Under $2,500                      659             21.3
  $2,501-5,000                      670             21.6
  $5,001-7,500                      472             15.2
  $7,501-10,000                     382             12.3
  $10,001-12,500                    291              9.4
  Greater than $12,500              627             20.2
  (Not identified)                  527
  Total                       N = 3,628             100%
Current Employment Status
  Full time                         798             25.3
  Part time                         369             11.7
  Housewife                         374             11.9
  Unemployed                        995             31.5
  Student, full time                256              8.1
  Student, part time                 81              2.6
  Retired                           282              8.9
  (Not identified)                  473
  Total                       N = 3,628             100%

Internal Consistency: The CSQ-8 Scale of the SEQ. As presented in Table 4, the internal consistency of the SEQ remains equivalent to what has been found in the past; the α coefficient in this extensive field study, based on 3,120 fully completed SEQs, was .874. This coefficient varies slightly as the total sample is broken down into subsamples: .877 among white clients, .871 among nonwhite clients, .877 among females, and .863 among males. These variations are, however, negligible. Item-total correlations ranged from .54 to .68, with a median of .65. The overall mean of the item means for the CSQ-8 is 3.396 on a scale from 1 to 4. This mean is equivalent to values found in the previous studies. A factor analysis showed only one factor for the scale, thus confirming the results of Study II. Inter-item correlations ranged from .35 to .65, with a mean of .47, also reflecting the strength of relationship among the items.

TABLE 4
PSYCHOMETRIC PROPERTIES OF THE SEQ: CHARACTERISTICS OF MAJOR COMPONENTS (N = 3,120)

Internal Consistency: SCL-10 Scale of the SEQ. The data presented in Table 4 indicate that the SCL-10 also has a high level of internal consistency. The coefficient α for the SCL-10 was .88. Item-total correlations ranged from .48 to .70, with a median of .62. The inter-item correlations of the SCL-10 were somewhat lower than those of the CSQ-8, reflecting the potential multidimensionality of this scale. This potential is further supported by the psychometric properties of the 6-item “Depression” subset of the SCL-10. The “Depression” subscale has an α coefficient of .88, high item-total correlations (median = .68), and higher inter-item correlations than the CSQ-8 or the SCL-10 (mean = .55). In sum, the SCL-10 data indicate that this scale can function as a reliable and internally consistent measure of patient-rated psychological distress. The scale is a likely measure of dysphoria, demoralization, and neurotic anxiety. Future research will focus on scale validation and extension of the scale to encompass psychotic symptomatology. Table 5 presents descriptive statistics for each item of the CSQ-8, the SCL-10, and the 6-item “Depression” subscale.

TABLE 5
DESCRIPTIVE STATISTICS OF ITEMS WITHIN MAJOR COMPONENTS OF THE SEQ (N = 3,120)

Service Evaluation Questionnaire Scales

Scale Attribute                          CSQ-8    SCL-10    SCL-10 (6-Item
                                                            Depression Subscale)
Total score (Average)                    27.09     14.54       10.61
Total score (Standard deviation)          4.01      9.19        6.48
Mean of item means                        3.39      1.45        1.77
Mean of item variances                     .48      1.77        1.87
Coefficient α                              .87       .88         .88
Median item-total correlation*             .65       .62         .68
Minimum item-total correlation*            .54       .48         .63
Maximum item-total correlation*            .68       .70         .75
Mean inter-item correlation                .47       .42         .55
Minimum inter-item correlation             .35       .26         .46
Maximum inter-item correlation             .65       .63         .62

Note. The 3,120 respondents are the total number of individuals having complete data on both the CSQ-8 and the SCL-10.
*Item-total correlations are corrected for the presence of the item being evaluated.

Item                            Mean      Standard Deviation

CSQ-8 item numbers
  1                             3.41            .67
  2                             3.36            .64
  3                             3.10            .75
  4                             3.58            .62
  5                             3.28            .80
  6                             3.45            .65
  7                             3.39            .71
  8                             3.51            .65

SCL-10 item numbers
  1                             1.82           1.37
  2                             1.64           1.35
  3                              .93           1.28
  4                             1.35           1.35
  5                             1.94           1.35
  6                              .87           1.22
  7                              .79           1.24
  8                             2.14           1.37
  9                             1.66           1.45
  10                            1.41           1.31

SCL-10 item numbers (6-item “Depression” subscale)
  1                             1.82           1.37
  2                             1.64           1.35
  5                             1.94           1.35
  8                             2.14           1.37
  9                             1.66           1.45
  10                            1.41           1.31


Means and standard deviations are presented for each item. “Total score” means and standard deviations for each of the three scales are presented in Table 4.

Conclusions from the SEQ Study. Results from the extensive field study of the SEQ indicate that the SEQ is an efficiently administered instrument that (a) has excellent internal consistency, (b) is well received by patients, service providers, and administrators, (c) is applicable to a wide range of service settings, and (d) has psychometric properties that are stable across many independent studies. These characteristics make the SEQ a strong candidate for further development and refinement for routine use in health and human service settings. The SEQ is especially applicable to mental health or social service agencies. Expansion of the symptom checklist items to include items reflecting psychotic symptoms and anxiety will make the whole


SEQ far more useful with the full range of patients without adding very much to its length. The CSQ-8 scale of the SEQ continues to perform very well as a global measure of patient satisfaction. Data in Tables 4 and 5 confirm previous findings that satisfaction, as measured by the CSQ-8 and similarly constructed scales, is negatively skewed, reflecting high levels of satisfaction with health and mental health services. Though response sets and limited service options may contribute to the high level of satisfaction reported, we take these findings seriously. While further improvements in our instrument will certainly make it more sensitive to dissatisfaction among service recipients, it is doubtful that one can reverse the almost universal finding that the typical service recipient is quite satisfied with the services that are provided. It is significant, nevertheless, that the lowest mean score (see Table 5) of any item on the CSQ-8 was found for Item 3: “To what extent has our program met your needs?” This finding suggests what we generally know: We must work to improve the effectiveness of the services we now provide.

CONCLUSIONS The seven empirical studies reported in this paper answer many of the important questions raised in the paper’s introduction. A number of important issues remain for future research. There are many benefits to a systematic, ongoing research effort such as the UCSF patient satisfaction research program. The seven studies have allowed important replications while simultaneously fostering a progressive enhancement of methods, concepts, and scope of the basic investigation. The UCSF program of research has produced an efficient measure of service satisfaction that can be easily related to symptom level, demographic characteristics, and type and extent of service utilization. This measure is a brief, global index that has excellent internal consistency and solid psychometric properties. The data from the extensive SEQ field study can be used as a comparison base for future use of the CSQ-8 and the SCL-10 scales. While not “norms” in any rigorous sense, these data will be valuable to other evaluators and researchers who use the scales in their own settings. A well-tested scale and a broad data base for comparative assessment are the most important results, to date, from our work.

Future studies must focus on the validity question as it relates to the high levels of reported satisfaction found with the CSQ-8 (and similarly constructed scales). In addition, the validity question must be studied with reference to sampling bias. The key issue in future research will be the enhancement of our capacity to detect dissatisfied consumers. Current measures are relatively insensitive to dissatisfaction while being very sensitive to satisfaction. The true extent of this insensitivity is unknown but may not be as great as many anticipate. Many theoretical and empirical explanations have been offered to account for this “positive response” bias (see LeVois et al., 1981). One further explanation, albeit hypothetical, may be added to this repertoire: Service recipients may find it difficult to formally express dissatisfaction in the face of significant caring (however ineffectual) when the technical capacity to offer definitive treatment is not yet fully developed and when criteria for evaluating the efficacy of treatment are not yet crystal clear.

REFERENCE NOTES

1. LARSEN, D. L., ATTKISSON, C. C., HARGREAVES, W. A., & NGUYEN, T. D. Client Satisfaction Questionnaire (April 1979 Revision). Available from Dr. C. C. Attkisson, Graduate Division, 140-S, University of California, San Francisco, CA 94143.

2. Personal communication from N. G. Hoffmann (see Hoffmann & Overall, 1978).

3. ATTKISSON, C. C., & NGUYEN, T. D. The Service Evaluation Questionnaire. Available from Dr. C. C. Attkisson, Graduate Division, 140-S, University of California, San Francisco, CA 94143.

REFERENCES

ATTKISSON, C. C., & ZWICK, R. The Client Satisfaction Questionnaire: Psychometric properties and correlations with service utilization and psychotherapy outcome. Evaluation and Program Planning, 1982, 5, 233-237.

BRANDT, L. W. Studies of “dropout” patients in psychotherapy: A review of findings. Psychotherapy: Theory, Research and Practice, 1965, 2, 6-12.

CAMPBELL, D. T. Reforms as experiments. American Psychologist, 1969, 24, 409-429.

CANTRIL, H. The self-anchoring scale and human concerns. Scientific American, 1963, 208(2), 41-45.

DENNER, B., & HALPERIN, F. Measuring consumer satisfaction in a community outpost. American Journal of Community Psychology, 1974, 2, 13-22.

DEROGATIS, L., LIPMAN, R., & COVI, L. The SCL-90: An outpatient rating scale (Preliminary report). Psychopharmacology Bulletin, 1973, 9, 13-28.

EDWARDS, A. L. The social desirability variable in personality assessment and research. New York: Dryden, 1957.

ENDICOTT, J., SPITZER, R. L., FLEISS, J. L., & COHEN, J. The Global Assessment Scale: A procedure for measuring overall severity of psychiatric disturbance. Archives of General Psychiatry, 1976, 33, 766-771.

FRANK, R. A. A postcard survey assesses consumer satisfaction. Evaluation, 1974, 2, 20-21.

FRIEDMAN, N. The social nature of psychological research. New York: Basic Books, 1967.

GARFIELD, S. L. Research on client variables in psychotherapy. In A. E. Bergin & S. L. Garfield (Eds.), Handbook of psychotherapy and behavior change: An empirical analysis. New York: Wiley, 1971.

GILLIGAN, J. F., & WILDERMAN, M. A. An economical rural mental health consumer satisfaction evaluation. Community Mental Health Journal, 1977, 13, 31-36.

GORDON, G., & MORSE, E. V. Evaluation research. In A. Inkeles (Ed.), Annual review of sociology. Palo Alto, CA: Annual Reviews, 1975.

GOYNE, J., & LADOUX, P. Patients’ opinions of outpatient clinic services. Hospital and Community Psychiatry, 1973, 24, 627-628.

HENCHY, T., & MCDONALD, L. A global assessment of the effectiveness of a community mental health center through the use of a consumer questionnaire. Exchange, 1973, 1, 19-21.

HOFFMANN, N. G., & OVERALL, P. B. Factor structure of the SCL-90 in a psychiatric population. Journal of Consulting and Clinical Psychology, 1978, 46, 1187-1191.

LARSEN, D. L. Enhancing client utilization of community mental health services. (Doctoral dissertation, University of Kansas, 1978). Dissertation Abstracts International, 1979, 39, 4041B. (University Microfilms No. 7904220)

LARSEN, D. L., ATTKISSON, C. C., HARGREAVES, W. A., & NGUYEN, T. D. Assessment of client/patient satisfaction: Development of a general scale. Evaluation and Program Planning, 1979, 2, 197-207.

LEVOIS, M., NGUYEN, T. D., & ATTKISSON, C. C. Artifact in client satisfaction assessment: Experience in community mental health settings. Evaluation and Program Planning, 1981, 4, 139-150.

LINN, L. S. Factors associated with patient evaluation of health care. Milbank Memorial Fund Quarterly, 1975, 53, 531-548.

LOVE, R. E., CAID, C. D., & DAVIS, A., JR. The user satisfaction survey: Consumer evaluation of an inner city community mental health center. Evaluation & the Health Professions, 1979, 2, 42-54.

OVERALL, J. E., & GORHAM, D. R. The Brief Psychiatric Rating Scale. Psychological Reports, 1962, 10, 799-812.

ROETHLISBERGER, F. J., & DICKSON, W. J. Management and the worker. Cambridge, MA: Harvard University Press, 1939.

ROSENTHAL, R. Experimenter effects in behavioral research. New York: Appleton-Century-Crofts, 1966.

SCHEIRER, M. A. Program participants’ positive perceptions: Psychological conflict of interest in social program evaluation. Evaluation Quarterly, 1978, 2(1), 53-70.

SUE, S., McKINNEY, H. L., & ALLEN, D. B. Predictors of the duration of therapy for clients in the community mental health system. Community Mental Health Journal, 1976, 12, 365-375.

ZWICK, R. J. The effect of pretherapy orientation on client knowledge about therapy, improvement in therapy, attendance patterns, and satisfaction with services. (Masters thesis, University of California, Berkeley, 1981). Masters Abstracts, 1982, 20, 307. (University Microfilms No. 13-18082)

ROSENTHAL, R. E.~per~menter effects in behavioral research. New York: AppIeton-Century-Crofts, 1966. SCHEIRER, M. A. Program participants’ positive perceptions: Psychological conflict of interest in social program evaluation. Evaluation Quarterly, 1978, 2(l), 53-70. SUE, S., McKfNNEY, H. L., & ALLEN, D. B. Predictors of the duration of therapy for clients in the community mental health system. Community Mental Health Journal, 1976, 12, 365-375. ZWICK, R. J. The effect of pretherapy orientation on client knowledge about therapy, improvement in therapy, attendance patterns, and satisfaction with services. (Masters thesis, University of California, Berkeley, 1981). Masters Abstracts, 1982, 20, 307. (University Microfilms No. 13-18082)