Assessing GDSS empirical research


European Journal of Operational Research 46 (1990) 162-176 North-Holland

Paul Gray
Claremont Graduate School, Claremont, CA 91711, U.S.A.

Doug Vogel
University of Arizona, Tucson, AZ 85721, U.S.A.

Renee Beauclair
University of Pittsburgh, Pittsburgh, PA 15260, U.S.A.

Abstract: GDSS experiments can differ from one another in many important dimensions. To interpret the results of a particular experiment in the context of the literature, it is necessary to find similar experiments for comparison. In this paper we present a method for finding the appropriate peer experiments. This method involves defining a set of variables to classify experimental conditions, defining scales for each variable, capturing the difference between experiments by measuring the average distance between them, and using multi-dimensional scaling as a way of representing the positioning of experiments graphically. The method is applied to experiments analyzed by Pinsonneault and Kraemer (1990) based on the classification scheme they propose. Detailed data and graphs are presented. The results show that the method is a robust way of identifying experiments that are similar and distinguishing among experiments that are fundamentally different from one another.

Keywords: Group Decision Support Systems, experimentation

Received May 1989

Introduction

In the last several years, a variety of empirical research has been performed on the use of information technology for supporting group processes. Much of this research has involved group decision support systems (GDSS) in which the group is provided with one or more terminals or personal computers and typically a (public) screen which can be seen by everyone. Much of this empirical research is reviewed by Pinsonneault and Kraemer (1989) and by Kraemer and King (1988). Analysis of these papers shows that the term GDSS is applied to systems with a variety of capabilities and that the experiments have been performed under a wide variety of conditions. Yet, almost every experimenter has drawn conclusions

about GDSS in general based only on his results. For these reasons, the conclusions presented are often contradictory and it is not possible to tell which results are robust and which are artifacts of the particular experimental situation. The purpose of this paper is to propose a method for comparing experiments based on the classification scheme used by Pinsonneault and Kraemer. The idea is to create a way of clustering similar experiments. Experiments 'close' to one another should yield similar results, whereas experiments 'distant' from one another may give quite different results. Furthermore, as new empirical results are presented, researchers will, by using the method, be able to compare their findings to those of experiments close to them. The paper is organized as follows: Section 1 defines the problem and gives the objective of the paper; Section 2 explains the approach; in Section

0377-2217/90/$3.50 © 1990 - Elsevier Science Publishers B.V. (North-Holland)

P. Gray et al. / Assessing GDSS empirical research

3, the Pinsonneault-Kraemer classification for evaluating research is used; in Section 4, the variables are defined; in Section 5, the experiments are compared; Section 6 analyzes the method used; and, finally, Section 7 gives directions for future research.

1. The problem and the objective of the paper

With the proliferation of empirical results on group decision support systems, it should be possible to draw general results about the nature of such systems. Unfortunately, this is not the case. For example, Watson et al. (1988) concluded that: "The main statistical test indicates no difference in mean values of post-meeting consensus among GDSS, manual, and unsupported groups," whereas Nunamaker et al. (1988) concluded that: "The quality of the deliberation and decision making was at least as good as that resulting from the manual process." Detailed reading of these papers confirms these contradictory findings. Watson et al. find little value added by GDSS, whereas Nunamaker et al. find significant differences between groups with GDSS and groups without. Of course, the two papers deal with very different situations. Watson et al. are concerned with groups of three people, whereas Nunamaker et al. deal with groups of eight or more. They have different software, hardware, and physical environments. They differ in the rules for the meeting. They are similar principally in using individual terminals and a public screen. The difference in findings is not unique to this pair of experiments. Because the various empirical studies involve different situations, no standards exist for comparing results. Without such a standard, it is not possible to tell whether a particular result is idiosyncratic to the situation or applies across a range of situations. The objective of this paper is to create standards for characterizing experiments that can be agreed upon by the GDSS research community. Once such standards are in place, researchers should be able easily to find the set of previous work to which their work is related and then be


able to show which previous findings are confirmed by the current results, which are contradicted, and which are new.

2. The approach

In any experiment, a large number of variables are under the control of the experimenter and the remainder are not. The experimenter in GDSS selects the group, the hardware, the software, the task, and the rules to be followed. The experimental subjects determine the task-related and group-related outcomes. In this paper, we are concerned only with the variables under the control of the experimenter, since it is these variables that define the experiment.

Variables

In their paper, Pinsonneault and Kraemer defined a set of variables, based on the social science and GDSS literature, that describe both the experiments and their outcomes. We use the Pinsonneault-Kraemer classification as a basis for defining a set of 20 variables that classify GDSS experiments. We do not claim that this list of variables is exhaustive. However, we believe that these variables capture most of the conditions investigated in GDSS empirical research.

Scales

For each variable, we establish a 0-100 scale. In some cases, this scale is a function of a measurable variable such as the number of subjects. In others, we suggest points on the scale in such a way that different conditions will have significantly different values along the scales. Furthermore, the scales follow a progression from simple to complex as the values move from small to large. For example, for extent of communications supported, we suggest 10 for verbal only, 50 for a LAN with either file sharing or E-mail, and 90 for a LAN with file sharing, E-mail, and communications with the outside through computer or video conferencing. The idea is that by providing reference points on the scale, different experiments can be placed relative to one another in that dimension. It is important to recognize that the scales are indicative rather than absolute, since we are trying to determine whether experiments have large or small differences between them. The scales



given in this paper are proposed and are not considered final. In Section 7, we describe a process we intend to follow that should result in scales acceptable to the GDSS research community.



Indicators

For some variables, such as the quality of the interface, the variable has a number of components. We call these components indicators and provide scales for measuring the indicators. In the case of the interface, the indicators include the type of interface at the individual stations, the response time, and the public screen. The indicators can be weighted equally or by importance and then aggregated to give the value of the variable. Indicators are provided for 5 of the 20 variables.

Average distances

Having defined the values of the variables, we compute the average distance between each pair of experiments. That is, we compute the average difference in the values of the 20 variables. This average distance is a measure of the 'closeness' of the two experiments.

Two-dimensional map

To make the table of distances visual, we use multi-dimensional scaling (see, e.g., Roskam and Lingoes, 1971) to create two-dimensional maps such as are shown in Figures 4-7 (see Section 6). Experiments that are close to one another cluster on these maps, whereas those that are quite different are separated. The two-dimensional map helps researchers understand how their experiment is related to other experiments in the literature.

3. The Pinsonneault-Kraemer classification

In the introduction to their paper on assessing the empirical research, Pinsonneault and Kraemer (1990) state that: "Despite recent research efforts, there are few clear indications of how electronic meetings affect groups. Empirical findings often appear contradictory and inconsistent."

They then present a classification for analyzing the impact of GDSS on group processes and outcomes. This classification is based on a systematic review of research in organizational behavior and

Figure 1. The Pinsonneault-Kraemer classification

in group psychology. They examine both the variables under the control of the experimenter and those resulting from group activity. In all, they developed 41 variables and classified 31 experiments according to these variables. Since they were working with published reports of these experiments rather than with interviews with the experimenters, they were able to define some but not all the variables in the experiments. Figure 1 shows the structure of the Pinsonneault-Kraemer classification. The contextual variables affect the group process which, in turn affects the task- and group-related outcomes. In addition, the task-related outcome affects the group-related outcome. To operationalize the general relations shown in Figure 1, Pinsormeault and Kraemer broke each box in Figure 1 down into metavariables and then broke the metavariables down into variables that could be defined. Figure 2, reproduced from Pinsonneault and Kraemer (1989), shows the metavariables (e.g., personal factors, situational factors) and the operational variables (e.g., attitude, abilities, individual motives, backgrounds of the participants). In examining Figure 2, it is clear that the contextual variables are under the control of the experimenter, as are some but not all the group process variables. To define an experiment, then, we can do so in terms of the variables under the control of the experimenter. In what follows, we use the Pinsonneault-Kraemer classification of variables as a basis for our own development of the list of variables that define an experirnent. We do not, however, follow their classification slavishly. In making our choice of variables, we were primarily interested in defining the characteristics of the experiments, not how well they were executed. For example, Pinsonneault and Kramer include the number of experimental and control groups, which reflect the quality of experimzntation, but we do not. On the other hand, we have



Contextual Variables:
• Personal Factors: attitude; abilities; individual motives; background
• Situational Factors: reasons for group membership; stage in group development; existing social networks
• Group Structure: work group norms; power relationships; status relationships; group cohesiveness; density (group size, room size, interpersonal distance); anonymity; facilitator
• Technological Support: degree; type (GDSS vs. GCSS)
• Task Characteristics: complexity; nature; degree of uncertainty

Group Process:
• I. Decisional Characteristics: depth of analysis; participation; consensus reaching; time to reach the decision
• II. Communication Characteristics: clarification efforts; efficiency of the communication; exchange of information; non-verbal communication; task-oriented communication
• III. Interpersonal Characteristics: cooperation; domination of a few members
• IV. Structure Imposed by GDSS/GCSS

Task-Related Outcomes:
• I. Characteristics of the Decision: quality; variability of the quality over time; breadth
• II. Implementation of the Decision: cost; commitment of the group members
• III. Attitude of Group Members Toward the Decision: acceptance; comprehension; satisfaction; confidence

Group-Related Outcomes:
• I. Attitude Toward the Group Process: satisfaction; willingness to work with the group in the future

Figure 2. Details of the Pinsonneault-Kraemer classification (reproduced from Pinsonneault and Kraemer, 1989)

expanded some of Kraemer and Pinsonneault's variables, such as personal factors, because we believe that more fine-grained detail is needed in some areas. These expansions were based on their discussions and our experience.

4. Definition of the variables

In this section we develop a set of 20 variables that define GDSS empirical research. For simplicity in thinking about these variables, we have clustered them into 6 metavariables (personal variables, situational variables, group structure, technological support, task characteristics, and group process). Following the guidance of Pinsonneault and Kraemer, these 6 metavariables are further divided into 20 variables. Some of these variables, such as size of group, are self-explanatory. However, others require further definition to become operational. For five variables, we have divided the variable by

creating indicators that define the variable. For example, the variable Background of Group Members is further divided into: (1) previous experience in working with groups, (2) education, (3) average age, and (4) computer ability. Information about the quantities selected as indicators is usually well documented in the experimental literature on GDSS. Most of the indicators are also referred to by Pinsonneault and Kraemer. As we shall show, combining these indicators leads to a composite value for the variable. Each GDSS experiment can be defined in terms of the 20 variables in Table 1. Having specified the variables, the next step is to assign values to them. For uniformity (and convenience), each variable is defined on a 0-100 scale. Because of the way we shall average across variables, the scales should all have the same range. In Table 2 we suggest reference values along these scales based on our experience with GDSS. We believe that these 'anchor points' are appropriate for each variable. In each case, we have tried to define the



Table 1. Metavariables, variables, and indicators

Personal factors (group member attitudes, backgrounds):
1. Attitude toward group
2. Ability to work in group
3. Background of group members (indicators: 1. previous group experience; 2. education; 3. average age; 4. computer ability)

Situation (how group came together):
4. Reason for group membership
5. Existing social network of group
6. Stage of group development

Group structure (how group is organized):
7. Size of group
8. Density (indicators: 1. number of people/terminal; 2. terminal separation)
9. Table shape

Technological support (characteristics of GDSS):
10. Degree of support (DeSanctis-Gallupe (1987))
11. Degree of anonymity
12. Chauffeur/facilitator
13. Interface (indicators: 1. response time; 2. type of interface; 3. public screen)

Task characteristics (what the group does):
14. Complexity of task (indicators: 1. complexity of problem; 2. complexity of response)
15. Nature of task (indicators: 1. urgency; 2. importance; 3. routine/creative; 4. abstractness)
16. Negotiation associated with task

Group process (how the group works as set up by experimenter):
17. Degree of consensus required
18. Communication supported
19. Group structure imposed
20. Number of meetings to accomplish task

extremes of the range and one or more intermediate points. Pinsonneault and Kraemer's discussions of their variables have served as a point of departure. For example, in the case of variable 6 (Stage of Group Development) we follow their use of the Tuckman 4-stage model (Tuckman, 1965). Figure 3 illustrates the use of indicators to develop a value for a variable. In this figure, the four indicators of Background of Group Members are weighted equally to obtain the average. Weighting allows differentiating according to the relative importance attached to each indicator. Weights can be added easily if desired. For example, if education and computer experience were assigned a weight of 35 and the other two indicators a weight of 15, then the weighted average would be 53. In this paper, we have weighted all indicators equally since we are not yet ready to recommend relative weights.
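The indicator aggregation just described can be sketched as follows. This is a minimal illustration of ours, not the authors' code; the indicator values are those shown in Figure 3:

```python
def aggregate(values, weights=None):
    """Combine indicator scores (each on the 0-100 scale) into a single
    variable value.  With no weights given, all indicators count equally,
    as in this paper; weights need not sum to 1 -- they are normalized."""
    if weights is None:
        weights = [1] * len(values)
    if len(weights) != len(values):
        raise ValueError("one weight per indicator")
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

# Background of Group Members (Figure 3): experience 20, education 70,
# average age 30, computer experience 60, equally weighted.
background = aggregate([20, 70, 30, 60])  # -> 45.0
```

Passing an explicit weight list (for example, 35 for education and computer experience, 15 for the other two indicators) reproduces the weighted variant described above.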

5. Comparing experiments using the variable values

The set of values of the variables for a particular experiment form a design profile for that ex-

Indicator | Weight | Value
1. Previous group experience | 25 | 20
2. Education | 25 | 70
3. Average age | 25 | 30
4. Computer experience | 25 | 60
Weighted average: 45

Figure 3. Example of indicators: Background of Group Members



Table 2. Reference values for variables and indicators

Personal factors

1. Attitude toward group (attitude members have toward working in groups and working with the members of the group):
   10 Strong dislike; 30 Dislike; 50 Indifferent; 70 Like; 90 Strong liking

2. Ability to work in group:
   10 Strong dislike; 30 Dislike; 50 Indifferent; 70 Like; 90 Strong liking

3. Background of group members
   Previous experience in working with groups:
      10 None; 30 Experimental groups only; 50 Some business experience; 70 Middle management; 90 Top executives
   Education:
      20 No college; 40 Undergraduate student; 60 Undergraduate degree; 80 Graduate degree
   Average age:
      20 + 2 * age for average ages < 60; 100 for average age >= 60
   Computer ability:
      0 No experience; 10 Novice; 50 Experienced user; 90 Expert programmer

Situation

4. Reason for group membership:
   10 Required for class grade; 30 Paid volunteer or cash prize; 50 Paid volunteer and cash prize; 90 Regular work activity

5. Existing social network of group (which has a direct impact on communication and interpersonal dimensions of the group process):
   10 Randomly chosen subjects who do not know one another; 30 Attended courses together; 50 Coworkers (less than a year); 70 Coworkers (less than 5 years); 90 Coworkers (over 5 years)

6. Stage of group development (based on Tuckman (1965) as quoted in Pinsonneault and Kraemer (1989)):
   20 Testing and dependence: group members attempt to understand acceptable and unacceptable behavior and the norms of the group; 40 Intragroup conflict: members establish and solidify their position and acquire influence over decisions; 60 Development of group cohesion: members accept fellow members and develop norms; 80 Functional performing: efforts of group members mostly oriented toward task and goal accomplishment

Group structure

7. Group size:
   5 * number of group members; 100 for groups >= 20

8. Group density
   Number of people per terminal:
      10 1 person; 50 2 persons; 90 more than 2 persons
   Terminal separation (if more than one terminal is used, the distance between adjacent terminals):
      10 People in different locations (e.g., E-mail); 50 5 feet; 90 1 foot

9. Table shape:
   10 One-piece oval or rectangle; 50 U-shaped; 90 Gallery seating

Technological support

10. Degree of support (using DeSanctis and Gallupe, 1987):
    10 Level 1; 50 Level 2; 90 Level 3

11. Degree of anonymity (permitted by GDSS):
    10 None; 50 Some; 90 Complete

12. Chauffeur/facilitator:
    10 None; 30 Technical chauffeur or meeting facilitator but not both; 50 Facilitator as chauffeur or chauffeur as facilitator; 90 Both a facilitator and a chauffeur

13. Interface
    Response time:
       10 Slow; 50 Average; 90 Fast
    Type of interface:
       10 Unix and keyboard only; 50 EGA single screen; 60 EGA single screen with mouse or touchscreen; 90 VGA fully windowed
    Public screen:
       0 None; 10 27" TV, monochrome; 30 CGA projection, 1 screen; 50 EGA projection, 1 screen; 70 EGA projection plus multiple screens; 90 EGA projection on 2 screens + auxiliary screens

Task characteristics

14. Complexity of task
    Complexity of problem:
       10 Pick family vacation site; 30 "Lost in the arctic" or similar canned problem; 90 Solve major real crisis situation

periment. Experiments with nearly equal values for all variables are similar to one another. Experiments with large differences in variable values

should be expected to be different from one another and to yield differing results. To operationalize the notion of similar profiles,



Table 2 (continued)

Task characteristics

14. Complexity of task
    Complexity of response:
       20 Vote; 40 Define alternatives, vote; 60 Define criteria, alternatives, and vote; 80 Define problem, criteria, alternatives, and vote

15. Nature of task
    Urgency:
       10 Not urgent; 50 Urgent; 90 Extremely urgent
    Importance:
       10 Not important; 50 Important; 90 Extremely important
    Routine/creative:
       10 Routinely done by people in organization; 50 Brainstorming
    Abstractness:
       10 Textbook problem; 50 Affects participants somewhat; 90 Critical to participants

16. Negotiation associated with task:
    10 Cooperative, same goal; 50 Internal negotiation within group; 70 Competitive

Group process

17. Degree of consensus required:
    10 Not required; 50 Preferred; 90 Mandatory

18. Communication supported:
    10 Verbal only; 30 LAN, centrally oriented, able to send 1 to N; 50 LAN with file sharing or E-mail; 70 LAN with both file sharing and E-mail; 90 LAN with file sharing, E-mail, and communications with outside through computer or video conference

19. Group structure imposed:
    10 Fully hierarchic; 90 Fully democratic

20. Number of meetings to accomplish task:
    10 1; 30 2; 50 3; 70 4-5; 90 more than 5
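Where a Table 2 scale is a pure function of a measurable quantity, as with group size (variable 7: 5 times the number of members, capped at 100 for groups of 20 or more), the mapping is direct. A minimal sketch of ours, not code from the paper:

```python
def group_size_scale(members):
    """Table 2, variable 7: 5 * number of group members,
    with 100 for groups of 20 or more."""
    return min(5 * members, 100)

# e.g., a 3-person group scores 15; a group of 20 or more scores 100.
```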

we compute the average absolute difference between each pair of experiments. For experiments i and j, the average absolute distance between them, d(i, j), is defined as

   d(i, j) = (1/m) Σ_{k=1}^{m} |v(k, i) − v(k, j)|,        (1)

where v(k, i) and v(k, j) represent the values of variable k in experiments i and j, respectively, and m is the number of variables used in the comparison. Note that the average distance is symmetric in i and j, that is,

   d(i, j) = d(j, i),




and that equation (1) shows that the distance of an experiment from itself (d(i, i)) is zero, as it should be. To specify the distances between all pairs of experiments, it is necessary only to calculate an upper triangular matrix of distances. As illustrated in the next section, it is useful to project this upper triangular distance matrix onto a two-dimensional graph. This graph shows the experiments relative to one another so that distances are approximately preserved. The graph has no axes since only relative distance is important. The technique for doing so is called multi-dimensional scaling. A variety of computer codes exist, including ALSCAL in SPSS (SPSS, 1988), the Guttman multi-dimensional scaling option in SYSTAT, and the Lingoes codes (Roskam and Lingoes, 1971). The idea is to create a graph in which the distances between experiments are preserved. Since the distance matrix for n experiments is inherently (n - 1)-dimensional and the projection is onto two dimensions, the technique makes a series of approximations. A statistical measure, called the coefficient of alienation, is used to indicate the quality of the approximation. In general, a value of the coefficient under 0.2 is considered acceptable.
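The distance and projection steps can be sketched in Python as follows. This is our illustration, not the authors' programs: classical metric MDS stands in here for the non-metric ALSCAL and Lingoes routines named above, and it reports no coefficient of alienation.

```python
import numpy as np

def avg_distance(profile_i, profile_j):
    """Equation (1): average absolute difference between two experiment
    profiles, each a sequence of m variable values on the 0-100 scales."""
    if len(profile_i) != len(profile_j):
        raise ValueError("profiles must cover the same m variables")
    return sum(abs(a - b) for a, b in zip(profile_i, profile_j)) / len(profile_i)

def distance_matrix(profiles):
    """Symmetric matrix of pairwise average distances; the diagonal is zero."""
    n = len(profiles)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):          # upper triangle suffices (symmetry)
            D[i, j] = D[j, i] = avg_distance(profiles[i], profiles[j])
    return D

def classical_mds(D, dims=2):
    """Classical (Torgerson) metric multidimensional scaling: embed the
    n x n distance matrix D in `dims` dimensions so that pairwise
    distances are approximately preserved."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n    # centering matrix
    B = -0.5 * J @ (D ** 2) @ J            # double-centered Gram matrix
    w, V = np.linalg.eigh(B)               # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:dims]       # keep the largest eigenvalues
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))
```

The coordinates returned by `classical_mds` have no meaningful axes; only the relative positions of the experiments matter, in keeping with the maps of Figures 4-7.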

6. Application of the method

As described above, assigning values to the variables defines a profile for an experimental situation. To illustrate the method, Table 3 shows the values of the variables for 20 GDSS empirical studies reported in the literature. Twelve of these are discussed in Pinsonneault and Kraemer (1990). These twenty studies were selected because sufficient data were available to the authors to make judgments on each of the variables. In examining Table 3, we see that the values associated with some experiments are very similar to one another. For example, a number of studies done at the University of Minnesota (Zigurs, 1987; Watson, 1987; Gallupe, 1985; Dickson et al., 1989) have almost identical experimental situations. All involved small groups, students, and relatively low levels of computer support. The dissertation studies by Lewis (1982) and by Beauclair (1987), and the experiments by Steeb and Johnston (1981) and by Turoff and Hiltz (1982), are also quite similar to the Minnesota situation. Some

of the experiments done at the University of Arizona (e.g., Heminger, 1988; Martz et al., 1988) are similar to one another but differ considerably from the Minnesota experiments. The Bui et al. (1987) and the Gray (1983) situations differ from all the others. Summary information about the data is given at the bottom of Table 3. The maximum and minimum reference values defined in Table 2 are shown there, as are the largest, smallest, and median values for each variable. Examining this summary information shows that experimentation thus far covers much of the range of most, but not all, of the variables. The effect of the differences in individual variables on the closeness of the experiments to one another can be seen from Table 4. Here, the average distance between each pair of experiments is computed using equation (1). Average distances of 5 or less reflect experiments that typically differed in only a few variables, and then not by a large amount. The distances shown in Table 4 encapsulate into a single number the multi-dimensional information obtained by comparing experiments pairwise. The foregoing observations are illustrated graphically in Figures 4 and 5. These figures show the data in Table 4 projected onto the plane by using multi-dimensional scaling. There is no new information in these figures. They simply approximate the information in Table 4 by plotting it in two dimensions. In examining these figures, recognize that the graphs have no axes; they only show positions of studies relative to one another in such a way that distance relations are preserved. If an experiment is deleted or added, the location of individual points changes but their relative positions are preserved. Figure 4 shows experiments 1-10 and 12 while Figure 5 shows experiments 10-20.
Note that in Figure 4, the Minnesota experiments (6, 8, 9) are effectively on top of one another and that experiments 3, 4, 5, 7, and 10 (which were not done at Minnesota) are close to them. However, experiments 1, 2, and 12, all performed at the University of Arizona, are some distance away. These visual observations of clustering are in keeping with the data for individual studies that were shown in Table 3. In Figure 5, experiments 10 and 12 are repeated from Figure 4. Note that they are still apart (after

Table 3. Analysis of experiments. Rows are the 20 studies: 1. Easton, A. (1988); 2. Easton, G. (1988); 3. Connolly et al. (1989); 4. Valacich et al. (1989); 5. Jessup et al. (1988); 6. Zigurs (1987); 7. Lewis (1982); 8. Watson (1987); 9. Gallupe (1985); 10. Beauclair (1987); 11. Ellis et al. (1989); 12. Heminger (1988); 13. Martz (1988); 14. Nunamaker et al. (1987); 15. Eveland and Bikson (1988); 16. Dickson et al. (1989); 17. Bui et al. (1987); 18. Steeb and Johnston (1981); 19. Turoff and Hiltz (1982); 20. Gray (1983). Year refers to publication date (see section on References). Column headings are the numbers of the variables in Table 1; indicators were used in determining values of variables. Summary rows give the minimum and maximum values defined (see Table 2) and the low, median, and high values observed for each variable.

Table 4. Distances among experiments. Distances are based on the data in Table 3 and are computed from equation (1); values are rounded to the nearest integer. Row and column headings refer to the numbers of the experiments in Table 3.






Figure 4. Map of experiments 1-10 and 12

all, the average distance between them is 25) but are in new positions. The Dickson et al. (1989) experiment, also done at the University of Minnesota, is close to the Beauclair (1987), the Steeb and Johnston (1981), and the Turoff and Hiltz (1982) experimental conditions.

In Figure 6 we have taken 11 of the 12 experiments that are considered here and also in Pinsonneault and Kraemer (1990). Examining Figure 6 shows no clustering of experimental conditions within what Pinsonneault and Kraemer call GDSS studies (numbers 5, 14, 17, 18) or those they call




Figure 5. Map of experiments 10-20




Figure 6. Map of 11 experiments discussed by Pinsonneault and Kraemer (1. Easton A; 5. Jessup et al.; 6. Zigurs; 8. Watson; 9. Gallupe; 11. Ellis et al.; 14. Nunamaker et al.; 16. Dickson et al.; 17. Bui et al.; 18. Steeb and Johnston; 19. Turoff and Hiltz)

Table 5
Distances among Minnesota experiments and 'Minnesota-near' experiments

GCSS (group communication support systems) studies (numbers 1, 6, 8, 9, 11, 16, 19). As a final exploration, in Table 5 and in Figure 7 we show by themselves the Minnesota experiments and those with results close to them. In effect, we are using the method to 'zoom in' on that portion of the plot and examine the fine structure further. The table shows that all distances are less than 10 except for the distance between the Steeb and Johnston and the Beauclair experiments. Here,

Experiment                       6    7    8    9   10   16   18
 6. Zigurs (1987)
 7. Lewis (1982)                 4
 8. Watson (1987)                1    4
 9. Gallupe (1985)               1    3    1
10. Beauclair (1987)             6    8    6    6
16. Dickson et al. (1989)        2    5    2    3    9
18. Steeb and Johnston (1981)    7    2    7    5   13    6
19. Turoff and Hiltz (1982)      7    4    7    6   10    8    4

Figure 7. Zoom on Minnesota and Minnesota-like experiments


the points separate considerably. However, remember that the maximum distances between points in the previous figures were of the order of 30. Plots such as that of Figure 7 help sort the differences among experiments that are quite similar to one another.
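The multi-dimensional scaling step itself was performed with standard packages (the references list MINISSA and SPSS-X). As a rough illustration of how a distance matrix such as Table 5's is turned into a two-dimensional map, the following sketch applies classical (Torgerson) metric MDS — a simpler relative of the nonmetric smallest-space analysis used in the paper, shown here only as a stand-in — to the distances among experiments 6-9 from Table 5:

```python
import numpy as np

def classical_mds(d, k=2):
    """Embed points in k dimensions so that pairwise Euclidean
    distances approximate the symmetric distance matrix d
    (classical/Torgerson metric MDS)."""
    d = np.asarray(d, dtype=float)
    n = d.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    b = -0.5 * j @ (d ** 2) @ j              # double-centered Gram matrix
    w, v = np.linalg.eigh(b)                 # eigenvalues in ascending order
    top = np.argsort(w)[::-1][:k]            # indices of the k largest
    return v[:, top] * np.sqrt(np.clip(w[top], 0.0, None))

# Distances among experiments 6, 7, 8, 9, taken from Table 5:
d = [[0, 4, 1, 1],
     [4, 0, 4, 3],
     [1, 4, 0, 1],
     [1, 3, 1, 0]]
coords = classical_mds(d)   # one (x, y) point per experiment
```

The resulting coordinates are determined only up to rotation and reflection, which is why maps such as Figures 4-7 should be read for relative, not absolute, positions.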

7. Findings and directions for further work

Group Decision Support Systems experiments are quite varied in design and can differ from one another in many important dimensions. To interpret the results of a particular experiment in the context of the literature, it is first necessary to find similar experiments for comparison. In this paper we have provided a method for finding the appropriate set of 'peer' experiments. In brief, the method involves:
(1) Defining the set of variables that classify experimental conditions. A suggested set of variables is presented based on the Pinsonneault-Kraemer evaluation of empirical GDSS research.
(2) Defining a scale for each variable. A proposed set of scales is given. The scales run from 0 to 100 and cover the range of conditions encountered in GDSS situations.
(3) Introducing the absolute average distance between experiments as a measure of their similarity.
(4) Using multi-dimensional scaling to represent the relative positions of studies graphically.

The analyses performed thus far indicate that experiments do cluster together when our method is applied, and that the variables and scales proposed are a viable basis for finding similar experiments. The variables and scales, however, should be viewed as tentative. As a next step, we intend to circulate the scales for the variables (shown in Table 2) and our assessment of each experiment (as shown in Table 3) to the researchers who performed the experiments. We shall ask each of them to review our assessment and to comment on the validity of the scales for their situation. This feedback from the research community should result in consensus on the appropriate variables and scales for defining GDSS experiments.

Once consensus has been reached, we hope researchers will report their experimental conditions in terms of the framework provided here. We anticipate that experiments close to one another will exhibit robust and similar findings. Experimenters trying to push the frontier of the field forward can use the framework to find situations that differ from those reported previously. They will have a guide for finding the gaps in understanding and for determining which questions are important to resolve.
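Equation (1) itself is not reproduced in this excerpt. The sketch below assumes the 'absolute average distance' of step (3) is the mean absolute difference between two experiments' scale values, one 0-100 score per classification variable; the profiles shown are hypothetical, not taken from Table 3:

```python
def average_absolute_distance(a, b):
    """Mean absolute difference between two experiments' scale
    profiles (one 0-100 value per classification variable)."""
    if len(a) != len(b):
        raise ValueError("profiles must score the same variables")
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

# Hypothetical scale profiles for two experiments over five variables:
exp_a = [10, 40, 80, 100, 25]
exp_b = [20, 40, 60, 90, 35]
print(round(average_absolute_distance(exp_a, exp_b)))  # rounded to the nearest integer, as in Table 4; prints 10
```

Because every variable shares the same 0-100 range, the variables contribute to the distance on equal footing without further normalization.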

References

Applegate, L. (1986), "Idea management in organization planning", Unpublished Doctoral Dissertation, University of Arizona.
Beauclair, R. (1987), "An experimental study of the effects of group decision support system process support applications on small group decision making", Unpublished Doctoral Dissertation, Indiana University.
Bui, T., Sivansankaran, T.R., Fijol, Y., and Woodburg, M.A. (1987), "Identifying organizational opportunities for GDSS use: Some experimental evidence", Proceedings of the 7th International Conference on Decision Support Systems, June 8-11.
Connolly, T., Jessup, L., and Valacich, J., "Idea generation in a GDSS: Effects of anonymity and evaluative tone", Working Paper, University of Arizona.
DeSanctis, G., and Gallupe, R.B. (1987), "Foundations for the study of group decision support systems", Management Science 33, 589-609.
Dickson, G., Lee, J., and Robinson, L. (1989), "Observations on GDSS interaction: Chauffeured, facilitated, and user-driven systems", Proceedings of the 22nd Annual Hawaii International Conference on System Sciences, January.
Easton, A. (1988), "An experimental investigation of automated versus manual support for stakeholder identification and assumption surfacing in small groups", Unpublished Doctoral Dissertation, University of Arizona.
Easton, G. (1988), "Group decision support system versus face-to-face communication for collaborative group work: An experimental investigation", Unpublished Doctoral Dissertation, University of Arizona.
Ellis, C., Rein, G., and Jarvenpaa, S. (1989), "Nick experimentation: Some selected results", Proceedings of the 22nd Annual Hawaii International Conference on System Sciences, January.
Eveland, J., and Bikson, T. (1988), "Work group structures and computer support: A field experiment", Proceedings of the Conference on Computer-Supported Cooperative Work, September 26-28.
Gallupe, B. (1985), "The impact of task difficulty on the use of a group decision support system", Unpublished Doctoral Dissertation, University of Minnesota.
Gray, P. (1983), "Initial observations from the decision room project", in: G.P. Huber (ed.), DSS 83 Transactions, Third International Conference on Decision Support Systems, June 27-29, Boston, MA, 135-138.



Heminger, A. (1988), "Group decision support system assessment in a field setting", Unpublished Doctoral Dissertation, University of Arizona.
Jessup, L.M., Tansik, D., and Laase, T.D. (1988), "Group problem solving in an automated environment: The effects of anonymity and proximity on group process and outcome with a group decision support system", Proceedings of the Academy of Management, Anaheim, CA, August.
Kraemer, K.L., and King, J. (1988), "Computer-based systems for cooperative work and group decision making", Computing Surveys 20, 115-146.
Lewis, F. (1982), "Facilitator: A microcomputer decision support system for small groups", Unpublished Doctoral Dissertation, University of Louisville.
Martz, B., Nunamaker, J., Applegate, J., and Konsynski, B. (1988), "Information systems infrastructure for manufacturing planning systems", University of Arizona Working Paper.
Nunamaker, J.F., Applegate, L., and Konsynski, B. (1987), "Facilitating group creativity: Experience with a group decision support system", Journal of Management Information Systems 3, 5-19.
Nunamaker, J.F., Applegate, L., and Konsynski, B. (1988), "Computer-aided deliberation: Model management and group deliberation support", Operations Research 36, 826-848.
Pinsonneault, A., and Kraemer, K.L. (1989), "The impact of technological support on groups: An assessment of the empirical research", Decision Support Systems 5, 197-216.
Pinsonneault, A., and Kraemer, K.L. (1990), "The effects of electronic meetings on group processes and outcomes: An assessment of the empirical research", European Journal of Operational Research 46, 143-161 (this issue).
Roskam, E., and Lingoes, J.C. (1971), "Minissa-I: A Fortran IV(G) program for the smallest space analysis of square symmetric matrices", Behavioral Science 15, 204-205.
SPSS, Inc. (1988), SPSS-X User's Guide, 3rd ed., SPSS Inc., Chicago.
Steeb, R., and Johnston, S.C. (1981), "A computer-based interactive system for group decision making", IEEE Transactions on Systems, Man, and Cybernetics 11, 544-552.
Tuckman, B.W. (1965), "Developmental sequence in small groups", Psychological Bulletin 64, 384-399.
Turoff, M., and Hiltz, S. (1982), "Computer support for group versus individual decisions", IEEE Transactions on Communications 30, 82-90.
Valacich, J., Dennis, A., and Nunamaker, J. (1989), "The effects of anonymity and group size in an electronic meeting system environment", Working Paper, University of Arizona.
Watson, R.T. (1987), "A study of group decision support system use in three- and four-person groups for a preference allocation decision", Unpublished Doctoral Dissertation, University of Minnesota.
Watson, R.T., DeSanctis, G., and Poole, M.S. (1988), "Using a GDSS to facilitate group consensus: Some intended and unintended consequences", MIS Quarterly 12, 463-478.
Zigurs, I. (1987), "The impact of computer-based support on influence attempts and patterns in small group decision making", Unpublished Doctoral Dissertation, University of Minnesota.