No. WP-97-04

Human Centered Systems in the Perspective of Organizational and Social Informatics

Chapter 5 of Human Centered Systems

For the National Science Foundation.
Thomas Huang and James Flanagan (Editors)

Rob Kling

SLIS

Indiana University

Bloomington, IN 47405
Phone: 812-855-9763
Fax: 812-855-6166
E-mail: kling@indiana.edu

Leigh Star

Graduate School of Library and Information Science

The University of Illinois
501 E. Daniel Street
Champaign, IL 61820

May 1997
Center for Social Informatics
SLIS
Indiana University
Bloomington, IN 47405
http://www.slis.indiana.edu/csi

 

Group participants and co-authors:

Sara Kiesler, CMU, kiesler+@andrew.cmu.edu
Phil Agre, UCSD, pagre@weber.ucsd.edu
Geoffrey Bowker, UIUC, bowker@alexia.lis.uiuc.edu
Paul Attewell, CUNY, pattewel@broadway.gc.cuny.edu
Celestine Ntuen, North Carolina A&T State University, ntuen@ncat.edu



Acknowledgments: We also received helpful comments from Dan Atkins, Ann Bishop, Blaise Cronin, Patricia Jones, Simon Kasif, and Geoff McKim.

1.0 Introducing the Issues

1.1 Organizational and Social Informatics in System Design

Computer systems have constituted a significant presence in American business, government, and cultural life for about a third of a century, and with each passing year they evolve rapidly in technical sophistication, in scope of use, and in processing power. Despite the extraordinary advances achieved to date, a tone of concern has developed among the many scholarly disciplines which work with these technologies. There is widespread agreement that we need new ways of thinking about computers and information technologies: new conceptions of how computing fits into larger organizational processes; a better understanding of how the soft human systems and skills surrounding the machinery contribute to the success or failure of the enterprise; improved theories about how decision-making activities are best distributed between humans and machines, and how the interior processes of machines can be represented symbolically so that human operators can really remain in control; new ways to grasp the role of information technologies as arteries in vast communication networks of people, groups, and organizations. These are all major intellectual challenges for researchers in the years ahead.

Computer scientists, systems designers, and social scientists have come to realize that our individual intellectual disciplines have become dwarfed by the complexity, dynamism and scale of today's information technology. Not only are the largest systems so complex that no individual can fully understand them or anticipate all their actions, but even those of us who work on particular pieces of systems have come to understand that to do our work well we must appreciate and anticipate the interactions of the hardware, the software that will run on it, the skills and purposes of the people who will use it, and the organizational and political environment in which the system is put to work. In other words, the complexity, interdependence and social embeddedness of modern computer systems are mirrored in the intellectual challenges which individual researchers face.

Many of us have come to view the intellectual challenge facing us as envisioning, designing, and researching Human-Centered Information Systems for the next century. This implies a shift in the ways we have been doing research towards a more inter-disciplinary approach in which computer scientists, systems engineers, and social scientists collaborate (Bowker, et al., 1997). It differs from past practice, in which computer scientists tended to focus on the development of hardware and software and left studies of the actual uses and impacts of their systems (if any) to social scientists whose research was independent of the originators. In practice, relatively little social research was carried out on the use and impacts of computer systems. As a result, many systems which looked wonderful in a development lab failed to live up to their promise when placed in real-world settings because their designers did not take account of important social relationships around system users (and others in their workworlds). Conversely, many systems "work well" because of ways that users tailor them and work around some of their limitations. Unfortunately, this kind of knowledge about the nitty-gritty practices of systems use has not filtered back into the education of systems designers or into a well-organized body of knowledge that practicing designers can readily learn and follow. The promise of human-centered systems is that knowledge of human users and of the social context in which systems are expected to operate becomes integrated into the computer science agenda, even at the earliest stages of research and development.

Fortunately, we are not beginning in total ignorance of relevant principles. For the last 20 years there has been a growing body of research that examines social aspects of computerization -- including the roles of information technology in organizational and social change and the ways that the social organization of information technologies influences social practices (and is influenced by them). This body of research is called Social Informatics (and Organizational Informatics when the focus is upon systems used within organizations). The names social informatics and organizational informatics are relatively new. But they are new labels that bring together studies that have been labelled as social impacts of computing, social analysis of computing, studies of computer-mediated communication (CMC), CSCW, and so on. (See the Social Informatics Home Page for a listing of research and teaching materials: http://www.slis.indiana.edu/SI)

1.2 Opportunities and Crises in Systems Design

For some, the conclusion that our research agenda must change stems from a sense of crisis in current system design practice: there is a lag in the development of analytical approaches and institutions which can safely manage the greater complexity of today's information systems, and in a way that will be more effectively human-centered. For others, the need for change is demonstrated by spectacular failures of some very large systems which can be attributed to the combination of human and technological factors: the Challenger disaster; the failures of air-traffic control systems (Stix, 1994); long, costly delays in the development of systems for agencies such as the Social Security Administration and the IRS; and problems with the implementation of enterprise-wide integration systems such as SAP R/3.

For others, there is less a sense of crisis than one of opportunity: to advance our understanding of computers and communications in human society, and to identify the principles of interactive complexity, human-machine interaction, and social embeddedness evidenced by state-of-the-art systems. Here are some examples where technical and social issues are so intertwined in modern large-scale systems that they need to be examined in an integrated way:

Medical information systems constitute some of the largest and most complex computer applications in widespread use today. Such systems include insurance record-keeping functions, as well as diagnostic and patient history information. In many places, these systems are a kind of collage built out of separate modules, designed at different times for distinct purposes. Increasingly, however, the value of such systems, whether for health care economists, office workers, epidemiologists, or physicians, depends upon the successful integration and combination of data from all these sources (the interoperability of the component systems). This goal presents designers and researchers with tremendous intellectual challenges which are simultaneously technical and social. Integrating these databases not only depends upon very fast filing and retrieval algorithms and powerful database software; it also depends upon overcoming differences in medical classification and nomenclature systems -- understanding the knowledge domains to be represented and then developing standards. Beyond knowledge engineering, such systems touch on ethical and privacy issues, from who should gain access to what kinds of personal and group membership information are appropriately stored and accessed alongside the medical data. The systems also raise usability issues: what kinds of people are expected to access them, and in what kinds of settings? Should they be designed so that they can be accessed only by technical experts with substantial training in the particular systems, or are they to be usable by novices with little training, using simple search tools?
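
To make the nomenclature problem concrete, the following minimal sketch (in Python, with hypothetical scheme names and diagnosis codes invented for illustration) shows the kind of crosswalk table that integrating records across classification systems requires. The hard part in practice is the institutional work of agreeing on the mappings, not the code itself.

# A sketch of the nomenclature problem: two record systems encode the
# same diagnosis under different classification schemes, so integration
# requires an explicit crosswalk table. All names and codes are invented.

CROSSWALK = {
    # (source scheme, local code) -> shared canonical code
    ("clinic_a", "DX-401"): "HTN",   # hypertension, scheme A
    ("clinic_b", "I-10-X"): "HTN",   # hypertension, scheme B
    ("clinic_a", "DX-250"): "DM2",   # type 2 diabetes, scheme A
}

def normalize(record):
    """Map a record's local diagnosis code onto the shared vocabulary."""
    key = (record["scheme"], record["code"])
    if key not in CROSSWALK:
        # Unmapped codes are where the social work of standards begins:
        # someone must negotiate an extension of the shared vocabulary.
        raise KeyError("no mapping for %r" % (key,))
    return dict(record, canonical=CROSSWALK[key])

a = {"patient": "p1", "scheme": "clinic_a", "code": "DX-401"}
b = {"patient": "p1", "scheme": "clinic_b", "code": "I-10-X"}
# Once normalized, records from both systems refer to the same condition.
print(normalize(a)["canonical"] == normalize(b)["canonical"])  # True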

In many businesses, computerized accounting systems appear like the layers of an archaeological dig, with newer systems built upon older systems, with the added dimension that large-scale information technology now makes possible the rapid transmission of this information across traditional organizational boundaries, as well as building into these systems various workplace surveillance capacities. There is often no central rationale or architecture for these diverse organizational accounting systems, and systems developers, analysts, managers and employees feel themselves caught in a web of overly complicated and redundant accounting schemes. Such "legacy systems" can take on a fragile, inflexible quality: changing any part of them is fraught with problems, not only because one is changing archaic code, but also because what seem to be harmless modifications of one part may prove to have unexpected, and occasionally disastrous, effects on other parts of the system.

The proliferation of such legacy systems is one small part of the "productivity paradox," which refers to the discrepancy between the expected economic benefits of computerization and their measured effects (see Harris et al., 1994; Landauer, 1995). But the importance of the productivity paradox is not simply anchored in macroeconomic statistics; the paradox might be resolved by better average performance of computer systems in leveraging organizational performance. Even if average performance is improved, many managers and professionals may continue to see information systems whose use fails to improve economic productivity, and even occasionally becomes a barrier to workplace innovation and improvement. Solving these problems is not merely a matter of paperwork reduction. It requires quite extensive mapping of work-task domains, understanding the interdependency of various computation tasks, mobilizing cooperation from all parties, and developing shared information standards and an acceptable system architecture.

One of the problems and scientific challenges we face is the lack of a coherent theory of human/system complementarity for complex work: what is best left to the human operators and what to the machine, what routines should be built in so that each participant can check up on or remedy the actions of the other, how the forms of representation or knowledge-mapping of the machine should fit with those of the human, and so on. There are theories for special cases, such as aircraft operations and statistical data analysis. But older models of human-computer interaction did not address the scope and scale found in today's high-complexity systems. As a result, the opportunities for error in today's systems, and the difficulties in identifying and correcting errors, proliferate. Further, many of today's computer systems are used to support human communication. We have little systematic understanding of the roles of face-to-face, telephone, and email channels in supporting effective communication.

Studies of the routine use and social impacts of computing are just now moving to understand uses and impacts where such dense and ubiquitous computerization already exists in an installed base. Most of our conceptual tools were developed for understanding the automation of individual tasks, and are being extended to team-wide applications in CSCW studies. Some information systems theorists have helped us to understand limited aspects of organizational-scale systems, while modern systems move towards inter-organizational networks and the computerization of entire industrial sectors.

The problem of standards is central to both legacy systems and the development of new systems and protocols. Within the Internet, for example, there are more than 100 accepted standards, and more under review. Attention to the problems of standardization has been relatively sparse, given the magnitude of the problems spawned by mismatches and the proliferation of standards. There is both a need and an opportunity for a joint social, organizational, and technical analysis to understand the prospects for effectively developing various kinds of standards.

The ease of use of computers - usability - remains a critical issue, in part because ordinary users embrace ever more complex and larger-scale applications. Email, once the province of a small world of computer scientists and engineers, is rapidly becoming commonplace. Active professionals are experiencing email overload, and good social conventions for filtering, pacing, and even discarding email, are lagging behind the growth of mail (Yates and Orlikowski, 1992; Kling and Covi, 1993). At the same time, the Internet itself, including commercial carriers such as AOL, groans under the traffic and backs up. The two phenomena together threaten a kind of email gridlock, with deleterious consequences for individual and social productivity.

This state of affairs offers us the possibility of understanding emerging communications conventions, of action research into the dynamics of the network, and of understanding the role of email in the work process. It raises a host of intriguing research questions, from the role of trust in electronic communication, to the conditions for sharing versus hoarding knowledge, to the spill-over effects of shared information (Kraut and Attewell, 1997), to the role of electronic communication in drawing peripheral members of organizations closer into the mainstream (Sproull and Kiesler, 1991).

A similar situation exists with the proliferation of information on the Internet and World-Wide Web. Many social, information, and computer scientists are interested in better indexing, cataloguing, and filtering mechanisms for the information found on the Web.
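
As one small illustration of what such indexing mechanisms involve at their core, here is a toy inverted index in Python (the documents and queries are invented); Web-scale indexing adds crawling, ranking, and storage problems far beyond this sketch.

# A toy inverted index: the core data structure behind term-based
# indexing and filtering. Documents and queries are invented.
from collections import defaultdict

def build_index(docs):
    """Map each term to the set of document ids that contain it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

def search(index, *terms):
    """Return ids of documents containing every query term."""
    postings = [index.get(t.lower(), set()) for t in terms]
    return set.intersection(*postings) if postings else set()

docs = {
    "d1": "social informatics and human centered systems",
    "d2": "indexing cataloguing and filtering web documents",
}
index = build_index(docs)
print(search(index, "human", "systems"))  # {'d1'}
print(search(index, "web"))               # {'d2'}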

2.0 When Should Computer Systems be Called Human Centered?

We began our group discussions by examining the term "human centered" and trying to characterize it clearly. We were especially concerned that the term "human centered" could easily become a trivialized buzzword that could casually be slapped as a label onto any computer application that seemed to help people. We did not believe that certain kinds of applications, such as medical diagnostic aids, should automatically be called human-centered simply because improved medical diagnosis can help people. For example, a medical diagnostic system whose logic is difficult for a doctor to comprehend or interrogate would not be very human-centered.

Thus, we spent considerable time answering these questions:

What are the meanings of human-centered that justify a new label? What research questions would there be? What do we know about the organizational and social aspects of computer systems that sheds light on human centered systems developments? The following paragraphs summarize our deliberations.

There is no simple recipe for the design or use of human-centered computing. Our group agreed, however, that the analysis of any aspect of systems should take into account at least four dimensions of human-centeredness:

1. There must be analysis which encompasses the complexity of social organization and the technical state of the art. The analysis cannot be based upon a vague idea of what a generic individual would like, sitting at a keyboard in social isolation or in a stereotypic situation that effectively ignores the varieties of concrete social locations.

The computing world has developed a number of such generic scenarios, such as 4A -- in which anyone can get any document anytime and anywhere. There are instantiations of 4A -- such as providing any researcher all of the documentary materials that they want for their research, even if they are traveling for a month; or providing any doctor with a complete medical record for any patient, anytime, anywhere. We can appreciate the practical value and symbolic power of these crisply stated goals. But they too easily trivialize the concept of a human-centered system by homogenizing people and places into "everyman" and "everywhere." The various roles that people play in work groups are ignored and stereotyped. The ways that organizations structure information are also treated only as barriers to 4A access. The different kinds of resources (and skill sets) of organizations and groups are also homogenized in 4A scenarios.

In contrast, a human centered analysis must take account of varied social units that structure work and information -- organizations and teams, communities and their distinctive social processes and practices.

2. Human-centeredness is not a "one-off" or timeless attribute that a system possesses at a given point in time. Rather, it is a process, one which takes into account how criteria of evaluation are generated and applied, and for whose benefit. It includes the participation of stakeholder groups -- such as involving patient groups in the development of specialist medical technologies, or teachers in the development of instructional technology.

3. There are important architectural relationships, such as the question of whether the basic architecture of the system reflects a realistic relationship between people and machines. As with the architecture of buildings, the architecture of machines embodies questions of livability, usability and sustainability.

4. The question of whose purposes are served in the development of a system would be an explicit part of design, evaluation and use. Thus the question of whose ideas get put into the design process is an important one for human centered systems. As well, the question of whose problems are being solved is important -- systems which seek only to answer a very narrow technical or economic agenda or a set of theoretical technical points do not belong under the "human centered" rubric.

2.1 What is and isn't HCS

There is no single recipe for human centered design. Given that humans are so diverse, human centered designing by nature tends to be tailored, rather than mass produced. "One size fits all" seems distinctly non-human-centered. On the other hand, we don't believe that complete tailorability results in human centered systems, because few people have the time or interest to effectively learn how to tailor thousands of features in complex computer systems.

The question of what is and isn't HCS may be divided into four parts:

1. What do we mean by human?

2. What is a system?

3. What are the goals of a human-centered system or process?

4. What are the processes associated with HCS?

2.2 What do we mean by human?

We use the word human to mean a person with activities who participates in some workworlds, communities outside of workplaces, and a lifeworld. We don't use the term human to refer to a disembodied task, or to a set of cognitive processes. Humans are not divisible into component parts such as tasks. Thus, a design which optimizes for performance of a data-entry task but which does not take into account ergonomics, organizational reward structures, and the other tasks, activities and feelings a person brings to the job is not effectively taking the human into account.

People are not stand-alone organisms -- we are quintessentially social and collective, not just individuals, or individuals in a diffuse social world. We do not use the term "human" to refer to individuals working alone or to a set of cognitive activities. For us, the term human includes and goes beyond individuals and their cognitions to include the activity and interactions of people with various groups, organizations, and segments of larger communities. Thus, for example, we would view the appropriate communication systems to support distance education to be those which enable students to communicate with instructors and with each other, and not simply to download files from and upload files to an instructional site. Further, these systems should be organized in ways that fit students' lifeworlds (i.e., not require forms of connectivity that students could not sustain at home) and also enable communicants to develop some knowledge of and trust in each other.

People adapt and learn, and from the point of view of systems design, development and use, it is important to take account of the adaptational capabilities of humans (Dervin, 1992). Something that freezes at one development stage, or one stereotyped user behavior, will not fit a human centered definition.

Finally, it is worth noting that human systems are just as complex as technical systems (if not more so!). That is, although there is often an "it's common sense" approach to defining what is human and what human problems and challenges should be, the answers are no less complex than building a highly complex technical system.

2.3 What is a (more) Human Centered System?

Having characterized the meaning of "human," we can now better characterize human-centered systems.

First, design predicated on merely replacing or automating human activity is not human centered. Systems which do this may be interesting, but they are not per se human centered -- in fact they may act to the detriment of humans in particular situations.

Human-centered systems are designed to complement human skills. The impetus to build such systems is based on human needs for information, assistance, or knowledge. We recognize that the conditions under which people use such systems vary considerably. An aircraft navigational system might remove significant control from a pilot and use a logic that is difficult to explore when a plane is flying at 200 mph near the ground and other planes. In contrast, a medical diagnostic system might have to be designed so that a doctor can examine how it weighed evidence and applied its rule base to make a specific diagnosis.

HCS designers recognize that computer systems structure social relationships, not just information. (For example, email systems that order messages for a person to read based on criteria such as recency or length also influence the recipients' social relationships by encouraging attention to some messages and their senders rather than others.) So the analysis which informs design is not just about optimizing the technical capacities of the machines; it also recognizes and respects the organizations or other forms of human social organization (such as the family or the classroom) into which they are being inserted.
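
A small sketch (in Python, with invented messages) of the point in the parenthetical above: the same inbox sorted under two plausible ordering rules puts different senders first, so the choice of ranking criterion is itself a social design decision.

# Two ordering rules applied to the same invented inbox. The ranking
# criterion, not the content, decides whose mail is read first.
from datetime import datetime

inbox = [
    {"sender": "colleague", "received": datetime(1997, 5, 1, 9, 0),
     "body": "A long design memo ... " * 50},
    {"sender": "manager", "received": datetime(1997, 5, 1, 8, 0),
     "body": "Short note."},
]

by_recency = sorted(inbox, key=lambda m: m["received"], reverse=True)
by_length = sorted(inbox, key=lambda m: len(m["body"]))  # shortest first

print([m["sender"] for m in by_recency])  # ['colleague', 'manager']
print([m["sender"] for m in by_length])   # ['manager', 'colleague']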

HCS design should take into account the various ways that actors and organizations are "connected together" with social relationships, as well as information flows and decisional authority. For example, changes in a classroom may produce changes in the students' families if children encounter new opportunities to explore ideas freely. While we can't predict all such outcomes, human-centered systems designers should be cognizant of the possibility via analysis of systems' use in some very realistic contexts.

2.4 What goals best describe a human-centered system or process?

The holistic attitude of Human-Centered systems designers toward a person and their lifeworld is important. Since people are not reducible to a set of component tasks taken out of context, the strategies of Human Centered Systems design -- and technologies to support them -- should reflect this complexity.

There are two senses of the term "ecology" that illustrate this (Star, 1995b). The goals of a human centered system (or process) would be ecological in the sense of accounting for the larger picture of systems development and use. For example, displacing work does not make it go away. A system which is used to replace all the secretaries in a firm, while requiring extra hours of other employees to make up for the loss of services, has not accounted for the real organization of work. Fuller (1995) coined the term "cybermaterialism" to refer to the analytical approach in which the analyst is especially sensitive to the ways in which computerization reorganizes work and costs rather than simply reducing or eliminating them. As well, there are larger scale issues of infrastructure development, ethics and humaneness which are important; for example, the Computer Professionals for Social Responsibility guidelines for NII development suggest ethical as well as ecological approaches to infrastructure development that clearly have a place in discussions about human-centered computing (http://www.cpsr.org/cpsr/nii_policy).

Human Centered System designers would also ideally be ecological in terms of global concerns, and take into account issues of environmental sustainability. In this, by implication, we do not necessarily accept that only humans are important. A system which monitors acid rain or tree disease has wider natural implications as well.

The goals of a human centered system are not fixed once and for all, and then good for all contexts. People who use systems must (usually) be able to help define what they need systems to do; this certainly means not just testing a design when one is well down the design path, after it is too late for good user feedback. In this, we see a desirable shift from people as passive users of systems to active participants in systems at all developmental phases.

Human Centered System designs must also scale up to become non-trivially human centered, and the values and implications for impacts often change significantly with scale. What works for a small group in a laboratory may entail larger scale issues which look different -- for example, privacy changes a great deal with larger groups, with lack of face to face accountability, and as systems move from the lab to the real world (Clement, 1994b). In this, the goals of human centered systems design should be congruent with social sustainability as well as environmental sustainability; analysis of policy and political implications, especially with scale, is important to defining a system's goals.

Finally, the system designers should use the best available social science knowledge in addressing all of these above points. Interdisciplinary teamwork is crucial to making this practice workable.

2.5 What are the processes associated with design, use and analysis of HCS?

How does one design, use and analyze human centered systems, according to the above precepts? Our group recommended several foci, including but not limited to the following:

a. One should take cognizance of multiple media (paper, computing, video, conversation, etc.) in the process of design. That is, information systems are always part of a large ecology of communicative devices and conventions, ranging from conversations to faxes and post-it notes. The interaction of these media is important for understanding the big picture of design in a human centered sense.

b. Human centered analysis would also extend to infrastructure and standards. That is, the usability of a system depends on infrastructural configurations of all sorts. Computers sent to a developing country without knowledge of the problems with its power grid and the dust-filled atmosphere may fail for reasons other than pure design; systems which work well for one group but violate existing standards in use for another will also not work.

c. Technology does not and will not solve social justice problems. For example, putting more computers into inner city classrooms will not per se increase literacy. This is important to a human-centered approach, as is a certain modesty about systems capabilities. Sometimes "less is more," and the system which is helpful as a tool in solving a particular problem may not always be the most elegant technically. From a human centered perspective, 'pretty good systems' are sometimes the best systems.

d. Another part of human centered designing is articulating the values that are at stake in design processes themselves. This means examining the values of both designers and of the intended systems audiences and also being able to identify value-conflicts. This is only partly managed by user participation; it also requires ethics and values analysis for which it may be valuable to involve professionals who are very skilled in analyzing social values and social change.

e. Finally, in the design of human-centered systems, machinery should not be anthropomorphized. Machines should extend human capability as gracefully as possible. In line with the value of not simply replacing humans, human-centered system designers must know the limits of machines in a specific social order, and not impute certain human properties to them, such as fairness or objectivity.

3.0 State-of-the-art

We identified a body of research that is fundamental for anyone who wishes to understand how human centered systems can help or hinder organizations and social groups. In this brief review, we separate the research into five categories: evaluation and usability (including user centeredness); problems, paradoxes and overlooked social realities; organizational, group and community processes; co-design and design issues; and infrastructure, community, personpower and training.

3.1 Evaluation and Usability (including user centeredness)

There is a large body of research on the evaluation of systems, interfaces, and usage at the individual level (see e.g. Bishop and Star, 1996; Hewins, 1990). Task analysis -- an individual system user and her tasks -- is also well understood. However, human centered systems have to be workable for groups. Some recent research has begun to examine these issues at the group, organizational and community levels.

3.2 Problems, Paradoxes and Overlooked Social Realities

Much of the research about the social and organizational aspects of systems has pointed out actual and potential problems with design and use. In broad brush strokes, these include the following topics:

1. Computerization is ongoing, along with other organizational processes, rather than one-shot.

The computerization of common organizational activities, such as accounting, inventory control, or sales tracking, is not a one-shot venture. Computerized systems that are introduced at one time are often refined over a period of years (Kling & Iacono, 1984), and periodically replaced by newer systems. Some computerized accounting systems have histories of 30 or 40 years (McKenney and Mason, 1995), and 10-20 years is quite common in manufacturing.

The decade-long time frame for the life of many computerized systems makes their adaptability to changing working and operational conditions an important aspect of human-centeredness (Zmuidzinas, Kling, and George, 1990). However, adaptability alone is not a sufficient condition for an information system to be human centered. SAP AG's R/3 Enterprise Integration system is an interesting case in point. SAP requires that standards be set across an organization, but also allows many parameters to be tailored. Many large firms, including Corning, Compaq, Chevron, Borden, Owens-Corning, Mentor Graphics, Fujitsu, Dell, Apple, IBM and Microsoft, are using SAP to help integrate far-flung operations. It is common to have 8,000 data tables in an SAP database (Xenakis, 1996), and it is easiest for firms that have high levels of administrative centralization to decide upon parameters for geographically decentralized operations.

Because the customization is very complicated, some firms restructure the way that their people work and even their business policies rather than completely tailor SAP's R/3 (White, Clark, and Ascarelli, 1997). SAP is not a "human-centered system;" it is a strong example of an "organization centered system" that makes exceptional demands upon people to use it effectively. SAP is an interesting contrast to the kinds of Human Centered Systems (and design principles) that this research program should promote.

This discussion breaks new ground because we know relatively little about the conditions under which computer systems that are very human-centered also provide strong organizational support, and vice versa. Some readers have been surprised by our treating organization-centered and human-centered systems as potentially very different. In our view, we will make more research headway by not automatically identifying human-centered with organization-centered (any more than we would say that all organizational structures and practices are always good for an organization's employees, clients, etc.).

2. Neither technical excellence nor market share alone defines system survival. "Network externalities," on the other hand, can play a substantial role in the sustainability of a system.

Economists have demonstrated the "path dependencies" associated with technical standards (Antonelli, 1992). The analysis of these effects was inspired partly by the economics of telecommunications systems, in which subscribers often have an economic incentive to connect with the largest network (Cristiano, 1992). Computer users, likewise, often have economic reasons to adopt the dominant standards in information technology, even in cases where another standard might be preferable on narrow technical grounds. This phenomenon has profound consequences for the dynamics of competition in IT markets (Farrell and Saloner, 1987), and consequently for policy as well (Kahin and Abbate, 1995). Standardization also has broader economic consequences; research on business information (Bud-Frierman, 1994; Bowker, Timmermans and Star, 1995), for example, has pointed to the mutual reinforcement between communication technology (which allows information to be transferred from dispersed locations to centralized offices), information technology (which increases the incentive to centralize information by making it easier to process), and the standardization of products and practices (which makes the various elements of accumulated information commensurable). The resulting economies of information ought to have pervasive consequences, although the nature and magnitude of these consequences remain controversial (Babe, 1994).
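
A minimal simulation (a Polya-urn-style adoption model, invented here purely for illustration) captures the path-dependence mechanism: when each new adopter favors the standard with the larger installed base, early random leads tend to lock in regardless of technical merit.

# Path dependence in standards adoption: each new adopter picks a
# standard with probability proportional to its current share.
import random

def adoption_run(steps=1000, seed=None):
    rng = random.Random(seed)
    shares = {"A": 1, "B": 1}  # both standards start with one adopter
    for _ in range(steps):
        total = shares["A"] + shares["B"]
        pick = "A" if rng.random() < shares["A"] / total else "B"
        shares[pick] += 1
    return shares

# Different random histories lock in different winners, even though the
# two standards are technically identical in this model.
for seed in (1, 2, 3):
    print(seed, adoption_run(seed=seed))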

Operating systems, such as UNIX or Microsoft Windows, were not necessarily the technically best alternatives when they were widely adopted. However, each of them was part of a larger matrix of social/technical systems and resources. UNIX was distributed as an "open system" to academic computer science departments whose technically inclined students were able to enhance it, and who sought it in the engineering labs and product development firms that employed them after graduation.

Microsoft Windows was, in some ways, technically inferior to IBM's OS/2. But the set of software companies that were willing to support Windows vastly outnumbered the firms that were willing to support OS/2. Neither of these observations about UNIX or Windows means that they were "poor technologies." Rather, we are noting that technologies become popular for reasons that are sometimes quite different from their technical strengths and weaknesses. Conversely, technologies can fall in popularity because of declining network externalities. For example, Windows 95 is not quite as refined as the Apple Mac operating system; but Microsoft has out-marketed Apple in ways that lead software developers (and then the market) to shift away from Apple.

In a similar way to UNIX and Windows, SAP R/3 (and its enhancements) may become a commonplace Enterprise Integration system because of externalities, such as the extent to which consulting firms recommend it (White, Clark and Ascarelli, 1997) and offer training to help firms adopt it and tailor it.

3. There is a significant gap between the productivity that should result from the nation's investment in computer systems and the actual productivity gains in the economy.

The discrepancy between the expected economic benefits of computerization and measured effects has been termed "The Productivity Paradox," based on a comment attributed to Nobel laureate Robert Solow who remarked that "computers are showing up everywhere except in the [productivity] statistics."

Many analysts have argued that organizations could effectively increase the productivity of white collar workers through careful "office automation." There is a routine litany about the benefits of computerization: decreasing costs or increasing productivity are often taken for granted. In the last few years economists have found it hard to identify systematic improvements in United States national productivity which they can attribute to computerization. Although banks, airlines and other United States service companies spent over $750 billion during the 1980s on computer and communications hardware -- and unknown billions more on software -- standard measures have shown only a tiny 0.7 percent average yearly growth in productivity for the country's service sector during that time. (Productivity growth in many sectors of the United States economy was much lower after 1973 than between the end of World War II and 1973.)

In the mid-1990s, US national productivity growth has been closer to 2-3% per year. Macroeconomists see this as a workable growth rate, but it has been accompanied by income stagnation for many middle class families. It is also tiny relative to the 25% per year improvements in the cost/performance of computer hardware.
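
Compounding the rates quoted above over an illustrative ten-year span (a back-of-envelope sketch using only the figures named in the text) shows how lopsided the comparison is:

# Back-of-envelope compounding of the growth rates quoted in the text.
def compound(rate, years):
    """Total growth factor from compounding an annual rate."""
    return (1 + rate) ** years

print("service productivity, 0.7%%/yr over 10 yrs: x%.2f" % compound(0.007, 10))
print("national productivity, 2.5%%/yr over 10 yrs: x%.2f" % compound(0.025, 10))
print("hardware cost/performance, 25%%/yr over 10 yrs: x%.2f" % compound(0.25, 10))
# Roughly x1.07 and x1.28, versus x9.31 for hardware.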

Research identifies many common social processes which reduce the productivity gains from computerization. Many changes in products and ways of work that come from computerization, such as improving the appearance of reports and visual presentations or managers being able to rapidly produce fine-grained reports about their domains of action, often do not result in direct improvements in overall organizational productivity. Numerous accounting reports may give managers an enhanced sense of control. But managers may seek more reports than they truly need, as a way to help reduce their anxieties about managing. (SAP R/3, for example, can provide rapid access to transaction-level detail about operational activities in diverse divisions of a multinational firm; a manager in San Jose, California can readily track daily inventories in Munich and Melbourne.)

Similarly, some professionals may be specially pleased by working with advanced technologies. But much of the investment may result in improving job satisfaction rather than being the most effective means for improving organizational productivity.

There are good diagnoses of the productivity process (and paradox) with respect to linkages between individual and organizational scale behavior, but not yet a clear solution (see Harris et al., 1994; Landauer, 1995; Attewell, 1996).

4. Workable computer systems are usually supported by a strong socio-technical infrastructure.

The "surface features" of computerization are the most visible and the primary subject of debates and systems analysis. But they are only one part of computerization projects. Many key parts of information systems are neither immediately visible or interesting in their novelty. They include technical infrastructure, such as reliable electricity (which may be a given in urban America, but problematic in many Third World countries, in wilderness areas, or in urban areas after a major devastation.) They also involve a range of skilled-support -- from people to document systems features and train people to use them to rapid-response consultants who can diagnose and repair system failures. System infrastructure is a socio-technical system insofar as technical capabilities depend upon skilled people, administrative procedures, etc.; and social capabilities are enabled by supporting technologies (i.e., word processors for creating technical documents, telephones and pagers for contacting rapid-response consultants).

Much of the research about appropriate infrastructure comes from studies of systems that underperformed or failed (Star and Ruhleder, 1994; Kling and Scacchi, 1982). The social infrastructure for a given computer system is not homogeneous across social sites. For example, the Worm Community System was a collaboratory for molecular biologists who worked in hundreds of university laboratories; key social infrastructure for network connectivity and (UNIX) skills depended upon each laboratory's work organization (and local university resources) (Star and Ruhleder, 1996). Star and Ruhleder found that the Worm Community System was technically well conceived, but it was rather weak as an effective collaboratory because of the uneven and often limited support for its technical requirements in various university labs. In short, lack of attention to local infrastructure can undermine the workability of larger scale projects.

There is a small body of research that amplifies these ideas. Web models of computing (which are not related to the WWW) treat the infrastructure required to support a computerized system as an integral part of it (Kling & Scacchi, 1982; Kling, 1992). Star and Ruhleder (1996) have also shown that there are subtle individual and organizational learning processes underlying the development of local computing infrastructure (including the ability of professionals with different specialties to communicate about computerization issues) (see also Star, 1995b; Ruhleder, 1995).

3.3 Organizational, Group and Community Processes

There is a solid body of empirical and theoretical work which identifies a variety of processes at scales above the individual. Among the points made in this research are the following:

1. Information sharing in groups can be supported by computerized systems, but organizational incentive systems play a major role in influencing the extent of information sharing.

One of the capabilities enabled by shared databases is the possibility of groups sharing data and information that was previously inaccessible in a timely manner, if at all. It is easy to identify examples, such as airline reservation systems, where shared databases of seats on flights enhance the quality of service to passengers and the operational efficiencies of the airlines. Information sharing is technologically enabled by most computerized information systems; and some systems attract managers and professionals because of new kinds of information sharing that they enable. (For example, SAP R/3, as discussed above, can provide rapid access to transaction-level detail about operational activities in diverse divisions of a multinational firm. Intranets seem to be becoming popular for enhancing the flow of certain information across the boundaries of organizational subunits.)

Much of the value of groupware applications, such as Lotus Notes, hinges on the promise of professionals sharing narrative materials -- such as client studies in multi-office consulting firms, country-specific market intelligence in multi-national firms, and software bug fixes in a vendor's technical support office. Careful research finds mixed support for the value of these applications (Orlikowski, 1993; Orlikowski, 1996; Ciborra and Suetens, 1996). Each of the studies just cited found some examples of Lotus Notes' use, but only staff in the technical support office made extensive use of Notes for routinely sharing information. In many consulting firms there is a negative incentive for consultants to share reports; they are rewarded for the time that they can bill to their clients and -- to some extent -- for demonstrating unique expertise (Orlikowski, 1993). Managers at a French (national) public utility company had hoped that their staff would use Lotus Notes to share information about market conditions, but they did not alter their organization's reward system to compensate for the time involved in creating online reports. While a pilot group was highly enthusiastic about sharing information via Notes, the project "lost momentum" as other groups were asked to participate (Ciborra and Suetens, 1996). In contrast, a small technical support workgroup, in which technicians helped each other with problem calls before they used Notes, found Notes to be a helpful extension of their preexisting cooperative practices (Orlikowski, 1996).

2. People who use computerized systems are often using multiple media.

Much of the writing about computerized systems tends to focus on the digital media that are part of the official systems design. But we know that people also use other media, such as paper and the telephone, as part of their work. In the case of digital libraries, some analysts take notes on paper about materials that they find on-line (Levy and Marshall, 1995). Scholars who read electronic journals often print out long articles onto paper for sustained reading and markup (Kling and Covi, 1995).

In an intriguing example, air traffic controllers use paper "strips" to record key information about flights in their sectors, and to share it when they pass control of an aircraft to a colleague (Stix, 1994). Stix's (1994) article reports that recent efforts to develop a completely electronic flight control system led to attempts to replace the paper strips with unwieldy databases with dozens of fields.

3. The routine use of computer systems often requires articulation work.

The concept of "articulation work" characterizes the efforts required to bring together diverse materials or to resolve breakdowns in work (such as clearing a paper jam when printing a long electronic document to read). In a provocative study, Gasser (1986) found that anomalies were common in many uses of computer systems, and that professionals often developed informal (and sometimes strange) workarounds to compensate for recurrent difficulties. Suchman (1996) observes how articulation work is often invisible to people who are not close to the place and moment of working. She also notes that articulation work can require notable ingenuity, but that higher status professionals (and managers) who are buffered from the details of computer work tend to trivialize the nature of the work to be done. To the extent that the high status professionals and managers who can delegate most of their work to others are male, and that many of the clerical and technical staff who do the work are female, there is also a gender politics to articulation work. Schmidt and Bannon (1992) argued that articulation work is so pervasive that (humanly) effective system designers have to routinely examine how new systems reduce, increase, or reorganize articulation work.

4. It is critical to comprehend the use of many computerized systems in terms of specific social units, such as workgroups, teams, local communities and communities of practice.

It is common for systems designers to conceptualize computerized systems in terms of organizations and individuals ("users"). But there are important intermediate levels of social organization between individuals and the larger collectivity. In practice, workgroups and teams (Galegher, Kraut and Egido, 1990; Ciborra, 1996; Tyre and Orlikowski, 1994) have proven to be critical social groupings which shape the use of computerized systems. (See below for some examples).

Brown and Duguid (1991) coined the term "communities of practice" to refer to people who are concerned with a common set of work practices. They are not a team or a task force, and not even necessarily an authorized or identified group. People in CoPs may perform the same job (but work in different places much of the time, such as field service engineers), collaborate on a shared task, or work together on a product. They are peers in the execution of "real work." What holds them together is a common sense of purpose and a real need to know what each other knows. There are many communities of practice within a single organization, and most people belong to more than one of them. Some research shows that communities of practice are the appropriate groups for learning how to best integrate new computer systems into real working practice (George, Iacono & Kling, 1995; Jones, 1995).

Local communities, as well, can be important units of analysis and frames of reference for human centered computing. "Community information systems" may mean organized information provision to special constituencies (e.g. cancer patients, small business owners, hobbyists), or it may be geographically local provision of services, including freenets and other public computing facilities. For more information on this, Prof. Ann Bishop has offered to share her syllabus from the University of Illinois for a graduate class, Community Information Systems (http://alexia.lis.uiuc.edu/gslis/courses/syllabi/450CI.html).

5. Communication is a key value for many users of computer systems (even where that has not been an explicit or high priority goal).

For example, email was the "killer application" that drove up the use and demand for the Internet (in contrast with file transfer). Bullen and Bennett (1996) found that email was the most frequently used application within workgroups that used office suites that included group support functions (such as calendars).

6. There is a growing understanding of emergent social-psychological processes when individuals work together in groups over computer networks.

Social processes in groups that use electronic mail have been the subject of substantial research. We understand that email can reduce the contextual cues in messages (Sproull and Kiesler, 1991), and that flaming can result as a byproduct of people misunderstanding others' intentions. We also understand that people who have ongoing work relations can be very cognizant of social norms beyond those of the electronic workspace, and that these norms can reduce the frequency of phenomena such as flaming (Lea, O'Shea, Fung, and Spears, 1992). In some workplaces, people use email quite strategically, such as to convey bad news (Markus, 1994). There have been some systematic studies of the dynamics of groups online (see Sproull and Kiesler, 1991 for an introduction). One important finding is that email can give greater visibility to "peripheral workers" -- those who are lower in social status, who work in distant locations or on different time schedules than the more mainstream workforce (Sproull and Kiesler, 1991; Hesse, Sproull, Kiesler, and Walsh, 1993). There is as well a related and important body of work on scholarly communication which examines similar processes (Doty, Bishop, and McClure, 1991).

7. Information technologies may become a means of constructing and exploring individual, group, organizational and community identity.

Communication is not simply a matter of exchanging information. Studies of on-line communication show that people use it to construct certain identities (e.g., local technical expert), and in some cases, to explore new social identities (Mantovani, 1996).

3.4 Co-design and Design Issues

A more recent development in this research area is the partnership of social and computer scientists, particularly the participatory design or co-design thrust. Some findings from this area:

1. Designers both design the system and shape the setting.

The separation between system and setting can seem simple -- the system is the computer system (and telecommunications) and the setting is the arrangement of furniture, lighting, walls, and other facilities. In some cases, such as the design of cockpits and control rooms, teams explicitly design both system and setting. In other cases, people reorganize their offices to more comfortably use computer systems -- pulling down venetian blinds to reduce glare on computer screens, shuffling desktop materials to make room for monitors and printers, and so on. In both cases, computerization reshapes the use of space and the ways that people inhabit it.

2. Three-way partnerships (social scientists, designers, users) have been powerful ways to organize systems development.

Some of these partnerships have been pioneered in Scandinavia (Kyng and Greenbaum, 1991; Clement and Van den Besselaar, 1993; Bødker and Grønbaek, 1996), but they have also been developed within major North American firms, such as Xerox and NYNEX (Euchner and Sachs, 1993; Clement, 1994a). Dutton and Kraemer's (1984) early work on negotiations about computer modeling also points to complex de facto processes of implementation, modification and the politics of design.

3.5 Infrastructure, Community, Personpower and Training

In the past few years, those who study the social impacts of computing, design, and social theory in information technology have created a scientific community in Social Informatics. This has included the development of:

1. Scientific journals

Information Systems Research

Journal of Computer Supported Cooperative Work

Office: Technology and People

Accounting, Management and Information Technologies

The Information Society

2. Conferences

Organizational informatics research is routinely discussed at a few annual conferences (the International Conference on Information Systems, the Association for Information Systems), the biennial conference on "Computer Supported Cooperative Work," and periodic conferences of certain IFIP Working Groups, such as WG8.2 (Information and Organizations). Social informatics research has no comparable home conference, although it is discussed occasionally at numerous conferences in various fields.

3. Curriculum and training programs

Organizational informatics courses are often taught in:

* the information systems departments of business schools

* the graduate programs of a few Information Science/Information Studies schools (especially Syracuse U, Indiana U, U of Illinois, U of Toronto, UCLA)

* the graduate programs of a few North American computer science departments (e.g., UC Irvine) and many European CS departments (especially in Scandinavia).

Social informatics courses are most often taught in undergraduate Computer Science programs and in the graduate programs of Information Science/Information Studies schools. (See the Social Informatics Home Page for a listing of courses and degree programs.)

We believe that both organizational informatics courses and social informatics courses should be much more widely available to computer science students (at all levels). In addition, the PhD education of prospective faculty would be strongly enhanced through NSF traineeships in organizational and social informatics.

4. Research Funding

The most sustained -- but very limited -- research funding for this nascent area has come from the NSF (especially IRIS). One-shot research projects have been funded by other foundations including the Annenberg Foundation, the Getty Foundation, and the Markle Foundation. Unfortunately, funding is spotty, so that even good senior investigators do not routinely have a continuing stream of extramural research grants.

5. How the human sciences create useful knowledge with respect to human centered systems

In addition to the topics under state of the art, we also identified instances of projects and practices where social scientists have contributed to human centered systems developments. There is a new (small) group of scientists who specialize at the intersection of social/organizational analysis and technical systems development. The following list identifies a few of the many different ways that social scientists and computer scientists have collaborated effectively on systems design/development projects.

* Fieldwork in support of requirements analysis, conducted in the settings where systems will be developed and where people will work with them (see Forsythe, 1992; Forsythe, 1994; Wagner, 1993).

* Joint project teams of social scientists and computer scientists. The HomeNet research project at CMU illustrates a project that was investigator-initiated but whose instrumentation requirements made the involvement of computer scientists central.

* Troubleshooting: anticipating the political and conflict situations that can sabotage system use.

* Identifying factors that influence the success and failure of systems through post hoc evaluation of complex systems in actual use by the people and groups who work with them.

* Identifying how the seeming intractability of recurrent technical problems is a symptom of ignoring the social elements in practices for designing, organizing and using systems.

* Foundational analysis that conceptualizes how people work with, through, and around computer systems (e.g., Orr, 1996; Kling and Scacchi, 1982).

* Translating between systems users and computer scientists, as in the participatory design tradition (Kyng and Greenbaum, 1991; Clement and Van den Besselaar, 1993). A current example is the Digital Libraries project at UIUC.


4.0 Future Research Directions

We identified several areas for further research: distributed human-centered information systems; representations; attentional economics; the provenance and quality of electronic documents (Bates, 1986); contextual knowledge; and the relationship between naturalistic and formal information systems. These areas are in flux, as is the entire area of human-centered computing; therefore, each of them ideally calls for a combination of action-oriented research, basic research, and foundational exploration.

4.1 Characterizations and Theories of Human-Centered Systems

In section 2.0, we discussed meaningful conceptions of the term "Human-Centered Systems." If Human-Centered Systems is to be the central concept of a major research program, then it is essential that there be meaningful characterizations of the concept, grounded in the experiences of people and organizations working with computerized systems. HCS is not a completely new phenomenon -- the label better characterizes some systems design practices and systems developments than others. We need studies of systems in use that help the research community understand HCS in practice.

A Theory of HCS would link such systems to important human experiences and social/organizational practices -- such as improved communication, easier work, better quality jobs, and so on. These kinds of outcomes are not simply deterministic byproducts of using computer systems, however good (or human-centered) their design. Research shows that the outcomes of computerization emerge from the interplay of ways of organizing, social practices, and the use of specific systems. We need comparable research about HCS. A first priority is to develop strong, empirically grounded Theories of HCS to help guide developments in this area.

4.2 Distributed human-centered information systems.

Perhaps no term has been more used (and abused) in the context of widespread Internet use than "community" (we return to it in section 4.3). Recent years have seen the dismantling of much of the centralized mainframe data processing model of computer usage in favor of distributed, desktop, and networked usage. A key insight is that distributed systems are not simply technical artifacts; they are distributed social systems as well.

This distribution has had a number of consequences, including extreme permeability of organizational boundaries and the shuffling of memberships across traditional institutional borders. For example, systems administrators in different large organizations may have more to say to each other than they do to their colleagues in other departments. It has always been true that technical specialists often have more in common with each other than with managers in their own organizations (see, e.g., Strauss, 1978). But large-scale distributed computing accelerates the process and provides an opportunity to support communication across communities of practice. (See Section 3.0 for a discussion of communities of practice.)

One of the touchstone concepts associated with phenomena like these is the notion of "collective cognition." Under these new conditions it is easier to conceive of problem-solving across group and organizational boundaries, and even to see thinking itself as a distributed phenomenon. That is, the ability of any individual to work professionally is a function less of their "individual cognitive capacities" than of their participation in communities of practice that help them at key moments. Supporting these understandings requires sensitivity to semantic differences, processes of cooperation, and the identification of divisions of labor and differentiated roles within distributed groups. The key research issues for HCS include effective strategies for designing distributed systems that are workable for different groups, and ways to have communities of practice effectively support distributed systems.

4.3 The organization of effective groups and communities with electronic support

The word community is often abused in discussions of social life, but it still retains important meanings and resonances. A group can be called a community to the extent that its participants feel some sense of mutual obligation and reciprocity in helping one another, and value their social ties. In the last decade, thousands of work, public interest, leisure and service groups and numerous professional and academic communities have tried to use computer networks to support some of their activities. These efforts have had varying levels of success, and have been most valued when group or community participants could not otherwise make contact or meet.

The most visible successful cases are the public Usenet groups (such as comp.human-factors) and professional listservs (such as ASIS-L). These cases are successful insofar as some people use them routinely and they visibly enhance communication among many of their participants. There are also significant experiments that use similar collections of electronic forums to enhance community life in certain towns and cities. The most famous in North America may be the Blacksburg Electronic Village (BEV), which is sponsored by Virginia Polytechnic Institute and State University (Virginia Tech).

Unfortunately, there is little systematic research and effective theorizing about the strengths and limits of electronic forums, or about ways to improve their ability to enhance the social worlds that support them (through funding, volunteer work, etc.). For example, it is well known that most readers of large public electronic forums such as comp.human-factors or ASIS-L (and probably BEV) are lurkers who never speak up publicly (by posting) in the electronic forums.
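To illustrate what more systematic measurement might look like, the following minimal sketch (in Python) estimates the lurker share of a forum from a subscriber roster and a log of post authors. The roster and log here are hypothetical stand-ins of our own devising, not data from any of the forums named above.

    from collections import Counter

    # Hypothetical roster and posting log -- illustrative assumptions only.
    subscribers = {"alice", "bob", "carol", "dave", "erin"}
    posting_log = ["alice", "alice", "carol", "alice"]  # author of each post

    posters = set(posting_log)
    lurkers = subscribers - posters  # subscribers who never post publicly

    print(f"lurker share: {len(lurkers) / len(subscribers):.0%}")  # -> 60%

    # How concentrated is the public conversation among a few voices?
    posts_per_author = Counter(posting_log)
    print(posts_per_author.most_common())  # -> [('alice', 3), ('carol', 1)]

Even such simple measures, applied over time and across forums, would begin to replace anecdote with comparable evidence about participation.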

Supporting geographically distributed groups with electronic means requires more than simply "putting them on a computer network" or a computer conferencing system. Participants have to be able to trust each other's fairness, and the relative privacy of each electronic forum, in order to discuss controversial issues openly. The fluidity of work and professional practice across organizational boundaries makes it important to understand the permeability of groups -- how people and tasks flow across traditional organizational and community boundaries. It is very easy for comments that people make in one electronic forum (and in the context of a specific discussion) to be reported elsewhere in a different (and problematic) context.

Concretely, this may appear as confusion about the boundaries of responsibility; as problems with "freeloading" across electronic boundaries; as opportunities in the matrixed and networked organization for more efficient tapping of expertise and gossip; and as a recognition of the complexity of human skills that cross multiple group boundaries. It also requires strategies (or social protocols) for developing trust of various kinds (including ways of resolving conflicts and respecting informational privacy) among participants.

Even within organizations, electronic groups provide a challenge for management and for working people's sense of their tasks and scope of responsibility. Culturally, does participation in extra-organizational working groups "count"? How much does service to an electronic community count in the large organizational reward structure?

Since many professionals are members of multiple groups and sub-groups, a simple one-to-one mapping between person and group breaks down quickly. Are there strains involved in managing group memberships? For example, if someone has technical expertise in the design area, and also works part time as a design consultant for the organization's marketing group, are the different goals and norms of the two groups going to produce an irresolvable strain for the person? How will they juggle conflicting demands? This becomes important from the systems design and use perspective if support of electronic communities is a goal -- there must be a means of acknowledging multiple memberships.
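One concrete design implication (our illustration, not a system described in this report) is that groupware should model membership as a many-to-many relation between people and groups, rather than assigning each person a single home group. A minimal sketch in Python, with hypothetical names:

    from dataclasses import dataclass, field

    # Many-to-many membership: one person can belong to several groups.
    @dataclass
    class Person:
        name: str
        memberships: set = field(default_factory=set)  # names of groups

    lee = Person("lee", {"design", "marketing"})  # designer and part-time consultant
    pat = Person("pat", {"design"})

    def shared_groups(a: Person, b: Person) -> set:
        """Groups two people have in common -- one basis for routing
        messages or acknowledging joint responsibilities."""
        return a.memberships & b.memberships

    print(shared_groups(lee, pat))  # -> {'design'}

A representation like this at least makes multiple and overlapping memberships visible to the system; it does not, of course, resolve the conflicting demands themselves.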

How such groups organize and stay organized is an open research question. There has been some interest in "mapping cyberspace," and a few studies of the operation of Usenet discussion groups and emergent web communities. Nevertheless, from the basic scientific point of view, we know very little about the dynamics of membership, stability, and overall impact on organizations (of various sorts). There may be both centripetal and centrifugal forces at work as groups form and re-form, and these are worthy of investigation. Total fluidity is not always the best thing from the point of view of social organization; indeed, boundaries and barriers may help build group solidarity, and at the least, respect for these basic social processes is important to inform HCS design.

4.4 Productivity paradox

As we noted above, there is likely no single resolution to the productivity paradox. A plethora of studies shows that organizations face many difficulties in integrating computerized systems into their work practices and work processes.

Human Centered Systems may help reduce these usage problems; but people still have to learn how to use them effectively, and organizations sometimes have to change their training, design, and reward practices. Understanding what kinds of "organizational learning" about HCS help leverage important value from systems is an especially promising avenue. One such avenue is to examine how the creation of "communities of practice" among system developers and system users can help people work with systems more effectively.

Another promising avenue is interdisciplinary teams -- examining the economic aspects of impacts on productivity, the sociological aspects of changes in work practices, and the workflow and HCI dimensions of adjustment to new technologies (among other approaches).

4.5 Technologically Facilitated Organizational Change

How do human-centered systems influence the ways that organizations can change their ways of working, their products/services, and their relations with their clients? To what extent do organizations have the "absorptive capacity" to use new (human-centered) computerized systems effectively? What kinds of openness to organizational and technological change are required to use human-centered systems effectively?

These questions flow partly out of facing the productivity paradox at a number of levels of organizational scale. We lack good empirical studies of electronic spaces -- both workplaces and marketplaces -- and solid generalizable principles of the social dynamics of usage that could be useful to computer scientists and designers. Ideally, we would develop measurement tools and theoretical models that speak to questions of usability and impact in parallel with questions of design choices, market feasibility, and high-level requirements analysis. If an organization is overly rigid, or is unable to make both the capital expenditure and the investment in maintenance and training required for successful system absorption, then early analysis of this state of affairs is both prudent and important for the long-term survival of the organization.

If effective systems use requires significant organizational learning, will managers have the ability to admit having made mistakes? To what extent can organizations create "open spaces" for their participants to discuss social and technological options "freely"?

4.6 Modeling and Representing Human Centered Systems Use

Many of the claims about the likely roles of computers in organizations (and communities, families, schools, etc.) involve making representations of:

* the computer system and how it is configured

* its relationship to other work practices and workplace technologies

* the work (or play or learning) involved

* the impact on organizational structure and social order.

These representations form a complex research program in their own right. How can designers represent the contextual nature of knowledge informing both design and use of systems? How can designers and implementers take account of this information in their professional practice?
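As one hedged illustration of what such a representation might record, the sketch below (in Python) keeps a system's technical configuration together with its social context, following the four dimensions listed above. The schema and field names are our own assumptions, not an established standard.

    from dataclasses import dataclass

    # A sketch of a "system in context" record; fields are illustrative.
    @dataclass
    class SystemInContext:
        configuration: dict      # the computer system and how it is configured
        related_practices: list  # other work practices and workplace technologies
        activity: str            # the work (or play or learning) involved
        social_effects: list     # observed impacts on structure and social order

    record = SystemInContext(
        configuration={"application": "shared calendar", "workstations": 40},
        related_practices=["paper sign-up sheets", "weekly staff meetings"],
        activity="scheduling meetings across two departments",
        social_effects=["assistants now gatekeep managers' calendars"],
    )
    print(record.activity)

The hard research problems lie, of course, in deciding what belongs in such fields and who gets to fill them in, not in the data structure itself.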

How can we develop research that is generalizable across various kinds of HCS and the specific locales of their design/use? This is an old challenge in social science. But with the advent of large-scale networked computing, and the pressing need for human-centered approaches, cooperation between organizational analysis and systems design becomes more feasible.

Understanding the knowledge and intent of others in the workplace is an important aspect of human-centered systems development. People who use systems also make representations of their own work and that of others. For example, professionals are much more likely to share their knowledge in a forum, like a LISTSERV, if they expect praise rather than ridicule. They are more likely to share information via documentary databases if they expect that their co-workers will use their reports.

Most profoundly, we need ways to frame credible narratives or models of the use and impact of information systems in specific organizational/social settings. Most of the influential narratives in information and computer science are centered on systems, information and their providers. We need a much better understanding of the consumption-side of systems and information.

At present we have some studies of the consumption of particular information systems in particular settings. We need more such studies, and also better ways to model the use/consumption of systems/information. In particular, such models would have to help us take account of the multiple work and home social worlds in which people participate.

4.7 Digital Documents, Digital libraries and Professional Communication

The use of digital libraries to effectively enhance the quality of professional communication is an area rich in possibilities for human-centered approaches. There are questions about what it takes to incorporate the new digital library technologies into the extant organizational infrastructure (the recent firestorm about the new San Francisco Public Library can be read as indicative of the strong public feelings over the issue). What does it take to develop multiple-media libraries -- where people locate, access and use documents in paper or electronic forms? That is, given that people like books, and that libraries are more than repositories of bits (they are complex social and community organizations), how can we conceive of human-centered systems which combine digital and other media? (Bishop and Star, 1996). After all, most professionals print long reports onto paper for careful reading and annotation, even if they receive them in electronic form (Kling and Covi, 1995; Levy and Marshall, 1995).

At the level of the digital document itself, the provenance and quality of electronic documents are bound up with important social processes. "Junk on the web" partly reflects a lag between the amount of information available and the indexing tools for finding it; but it also partly reflects the lack of norms and conventions for assessing the quality and usefulness of electronic documents. There are social processes of curatorship and adjudication -- viz., the reluctance of academics to treat electronic publication as "counting toward" tenure. We need, in this sense, to understand documents in use, in a variety of organizational and social contexts.

One aspect of this, which is common to many other research issues, is the notion of material culture and embodiment. Because digitization represents a shift in the relationship of people and things (piles of paper, location of offices, proximity of people to each other and to other physical resources), it is important to develop good conceptual models of that shift. How does the stuff around us fit in with information systems (or not)? What is the rich mixture of electronic and non-electronic sources in working, learning, and leisure environments? (Leisure here does not refer only to the entertainment industry.)

4.8 Standards Development Dynamics

Although past research has outlined the economic dynamics of technical standards, these results remain largely theoretical. Qualitative research is needed to understand these dynamics more concretely. In particular, interdisciplinary research will be needed to comprehend the central role of public relations and other forms of symbolic communication in the establishment of standards. In an environment of network externalities, firms seeking to establish new standards have a powerful incentive to gather allies and create the impression that their standards are inevitable. Little is known, however, about how this works in practice. Research is also needed to understand the magnitude of these effects; it is still controversial, for example, under what conditions, if any, an inferior technology can become entrenched through network effects rather than being displaced by superior alternatives.
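To make the theoretical claim concrete, the following minimal simulation sketch (in Python) follows the spirit of increasing-returns adoption models: each new adopter chooses the standard with the higher payoff, combining intrinsic quality with a benefit proportional to the installed base. The quality scores, network weight, and initial installed bases are illustrative assumptions, not empirical estimates.

    QUALITY = {"A": 1.0, "B": 0.8}  # B is the technically "inferior" standard
    NETWORK_WEIGHT = 0.05           # payoff per prior adopter of the same standard
    adopters = {"A": 0, "B": 10}    # B starts with a small lead (e.g., via publicity)

    for _ in range(1000):
        payoff = {s: QUALITY[s] + NETWORK_WEIGHT * n for s, n in adopters.items()}
        choice = max(payoff, key=payoff.get)  # new adopter picks the higher payoff
        adopters[choice] += 1

    print(adopters)  # -> {'A': 0, 'B': 1010}: the early lead compounds

Under these assumed parameters the inferior standard B attracts every subsequent adopter; with a smaller initial lead or a larger quality gap, A wins instead. Pinning down where real markets sit between these regimes is precisely the empirical question raised above.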



5.0 References:

Attewell, Paul. 1996. "Information Technology and the Productivity Challenge." in Kling (1996)

Babe, Robert E. 1994. The place of information in economics, in Robert E. Babe, ed, Information and Communication in Economics. Boston: Kluwer.

Bates, Marcia J. 1986. "Subject Access in Online Catalogs: A Design Model." Journal of the American Society for Information Science 37(6): 357-376.

Bishop, Ann and Susan Leigh Star. 1996. "Social Informatics for Digital Libraries," Annual Review of Information Science and Technology (ARIST), 31, pp. 301-403

Susanne Bødker and Kaj Grønbaek, Users and designers in mutual activity: An analysis of cooperative activities in systems design, in Yrjö Engeström and David Middleton, eds, Cognition and Communication at Work, Cambridge: Cambridge University Press, 1996.

Bowker, Geoffrey, Stefan Timmermans and Susan Leigh Star. 1995. "Infrastructure and Organizational Transformation: Classifying Nurses' Work," Pp. 344-370 in W. Orlikowski, G. Walsham, M. Jones and J. DeGross, eds. Information Technology and Changes in Organizational Work. (Proceedings IFIP WG8.2 Conference, Cambridge, England.) London: Chapman and Hall.

Bowker, Geoffrey; Susan Leigh Star; William Turner and Les Gasser, eds. 1997. Social Science, Technical Systems and Cooperative Work: Beyond the Great Divide. Hillsdale, NJ: Erlbaum.

Brown, J.S. & Duguid, P. (1991). Organizational learning and communities-of-practice: Toward a unified view of working, learning, and innovation. Organization Science, 2(1), 40-57.

Bud-Frierman, Lisa (Ed). 1994. Information Acumen: The Understanding and Use of Knowledge in Modern Business. London: Routledge.

Bullen, Christine and John Bennett. 1996. "Groupware in Practice: An Interpretation of Work Experience" in Kling (1996).

Antonelli, Cristiano. 1992. "The Economic Theory of Information Networks." in Cristiano Antonelli, ed, The Economics of Information Networks. Amsterdam: North-Holland.

Ciborra, Claudio (ed). 1996. Groupware and Teamwork: Invisible Aid or Technical Hindrance? New York: John Wiley.

Ciborra, Claudio and Nicole Turbe Suetens. 1996. "Groupware for an Emerging Virtual Organization." in Ciborra (1996).

Clement, Andrew. 1994a. "Computing at Work: Empowering Action by 'Low-level Users'." Communications of the ACM. 37(1)(January):52-65.

Clement, Andrew. 1994b. "Considering Privacy in the Development of Multi-media Communications." Computer Supported Cooperative Work. 2:67-88.

Clement, Andrew and Peter Van den Besselaar. 1993. "A Retrospective Look at Participatory Design Projects." Communications of the ACM 36(4) (June):29-37.

Danziger, James, William Dutton, Rob Kling, & Kenneth Kraemer. 1982. Computers and Politics: High Technology in American Local Governments. New York: Columbia University Press.

Dervin, Brenda. (1992). From the mind's eye of the user: The sense-making qualitative-quantitative methodology. In J. D.Glazier and R. R. Powell (Eds.), Qualitative Research in Information Management, 61-84. Englewood, CO: Libraries Unlimited.

Doty, P., Bishop, A. P., and McClure, C. R.(1991). Scientific norms and the use of electronic research networks. In Griffiths, J-M. (Ed.) ASIS '91: Proceedings Of The 54th ASIS Annual Meeting, 24-38. Medford, NJ: Information Today.

Dutton, W.H. and Kraemer, K.L. (1984). Modeling as Negotiating: The Political Dynamics of Computer Models in the Policy Process. Norwood, NJ: Ablex Publishing Company.

Euchner, Jim & Patricia Sachs. 1993. "The Benefits of Intentional Tension." Communications of the ACM. 36(4)(June):53.

Farrell, Joseph and Garth Saloner. 1987. "Competition, Compatibility and Standards: The Economics of Horses, Penguins and Lemmings." in H. Landis Gabel, ed, Product Standardization and Competitive Strategy. Amsterdam: North-Holland.

Finholt, Tom and Lee Sproull. 1990. "Electronic Groups at Work." Organization Science. 1(1):41-64.

Forsythe, Diana. 1992. "Blaming the User in Medical Informatics." Knowledge and Society: The Anthropology of Science and Technology 9: 95-111.

Forsythe, Diana. 1994. "Engineering Knowledge: The Construction of Knowledge in Artificial Intelligence." Social Studies of Science, Vol. 24, pp. 105-113.

Fuller, Steve. (1995). Cyberplatonism: An Inadequate Constitution For the Republic of Science. The Information Society 11(4):293-303.

Galegher, Jolene, Robert Kraut and Carmen Egido (eds.) 1990. Intellectual Teamwork: Social and Technological Foundations of Cooperative Work. Hillsdale: Lawrence Erlbaum.

Gasser, Les. 1986. "The Integration of Computing and Routine Work." ACM Transactions on Office Information Systems. 4(3)(July):205-225.

Grant, Rebecca and Chris Higgins. 1991. "The Impact of Computerized Performance Monitoring on Service Work: Testing a Causal Model." Information Systems Research. 2(2):116-141.

George, Joey, Suzanne Iacono and Rob Kling. 1995. "Learning in Context: Extensively Computerized Work Groups as Communities-of-Practice." Accounting, Management and Information Technology. 5(3/4): 185-202.

Grudin, Jonathan. 1989. "Why groupware applications fail: problems in design and evaluation." Office: Technology and People. 4(3):245-264.

Harris, Douglas H. (Editor). 1994. Organizational Linkages: Understanding the Productivity Paradox. Washington, DC : National Academy Press.

Hesse, B. W., Sproull, L. S., Kiesler, S. B. and Walsh, J. P. (1993). Returns to science: Computer networks in oceanography. Communications of the ACM, vol. 36, no. 8, 90-101.

Hewins, Elizabeth T. 1990. "Information Need and Use Studies." Annual Review of Information Science and Technology 25: 145-172.

Jacky, Jonathan. 1996. "Safety-Critical Computing: Hazards, Practices, Standards and Regulation." In Kling (1996).

Jewett, Tom and Rob Kling. 1991. "The Dynamics of Computerization in a Social Science Research Team: A Case Study of Infrastructure, Strategies, and Skills." Social Science Computer Review. 9(2)(Summer):246-275.

Jones, Steven G. 1995. "Understanding Community in the Information Age." in Steven G. Jones, ed, CyberSociety: Computer-Mediated Communication and Community. Thousand Oaks, CA: Sage.

Kahin, Brian and Janet Abbate. (eds) 1995. Standards Policy for Information Infrastructure. Cambridge: MIT Press.

King, John L. and Kraemer, Kenneth L. 1981. "Cost as a Social Impact of Telecommunications and Other Information Technologies." In Mitchell Moss, ed, Telecommunications and Productivity. New York: Addison-Wesley.

Kling, Rob. 1996. Computerization and Controversy: Value Conflicts and Social Choices. (2nd edition.) San Diego: Academic Press.

Kling, Rob and Lisa Covi. 1993. Review of Connections by Lee Sproull and Sara Kiesler. The Information Society. 9(2) (Mar-May).

Kling, Rob and Lisa Covi. 1995. "Electronic Journals and Legitimate Media in the Systems of Scholarly Communication." The Information Society. 11(4):261-271.

Kling, Rob and Suzanne Iacono. 1984. "The Control of Information Systems Development After Implementation" Communications of the ACM, 27(12) (December).

Kling, Rob and Suzanne Iacono. 1989. "The Institutional Character of Computerized Information Systems." Office: Technology & People 5(1) (Aug):7-28.

Kling, Rob and Tom Jewett. 1994. "The Social Design of Worklife With Computers and Networks: An Open Natural Systems Perspective." in Advances in Computers, vol. 39.

Kling, Rob and Walt Scacchi. 1982. "The Web of Computing: Computing Technology as Social Organization", Advances in Computers. Vol. 21, Academic Press: New York.

Kyng, Morten and Joan Greenbaum. 1991. Design at Work: Cooperative Design of Computer Systems. Hillsdale: Lawrence Erlbaum.

Landauer, Tom. 1995. The Trouble with Computers: Usefulness, Usability and Productivity. Cambridge, MA: MIT Press.

Lea, Martin (Ed.) 1992. Contexts of Computer-Mediated Communication. New York: Harvester Wheatsheaf.

Lea, Martin, Tim O'Shea, Pat Fung, and Russell Spears. 1992. "'Flaming' in Computer-Mediated Communication." in Lea (1992).

Leveson, Nancy G. and Clark S. Turner. 1993. "An Investigation of the Therac-25 Accidents." Computer. 26(7)(July):18-39.

Levy, David M. and Marshall, Catherine C. 1995. "Going Digital: A Look at Assumptions Underlying Digital Libraries." Communications of the ACM 38(4) (April):77-84.

Mantovani, Giuseppe. 1996. New Communication Environments: From Everyday to Virtual. Bristol, PA: Taylor & Francis.

Markus, M. Lynne. 1994. "Finding a Happy Medium: the Effects of Electronic Communication on Social Life at Work." ACM Transactions on Information Systems.

McKenney, James L. with Duncan C. Copeland and Richard O. Mason. 1995. Waves of Change: Business Evolution Through Information Technology. Boston, MA: Harvard Business School Press.

Orlikowski, Wanda J. 1993. "Learning from Notes: Organizational Issues in Groupware Implementation." The Information Society 9(3) (Jul-Sep):237-250.

Orlikowski, Wanda J. 1996. "Evolving with Notes: Organizational Change around Groupware Technology" in Ciborra (1996).

Orr, Julian. 1996. Talking about Machines: An Ethnography of a Modern Job. Ithaca, NY: Cornell University Press.

Perrow, Charles. 1984. Normal Accidents: Living with High-Risk Technologies. New York: Basic Books.

Ruhleder, Karen. 1995. "'Pulling down' books vs. 'pulling up' files: textual databanks and the changing culture of classical scholarship," Pp. 181-195 in Star (1995).

Schmidt, K. and Bannon, L. (1992). Taking CSCW seriously: Supporting articulation work. Computer-Supported Cooperative Work, 1(1-2), 7-40.

Sproull, Lee and Sara Kiesler. 1993. Connections: New Ways of Working in the Networked Organization. Cambridge, MA: MIT Press.

Star, Susan Leigh and Ruhleder, Karen. (1996). Steps towards an ecology of infrastructure: Design and access for large-scale collaborative systems. Information Systems Research 7: 111-138.

Star, Susan Leigh. 1995c. "The Politics of Formal Representations: Wizards, Gurus, and Organizational Complexity," Pp. 88-118 in Susan Leigh Star, ed. Ecologies of Knowledge: Work and Politics in Science and Technology. Albany: SUNY Press.

Star, Susan Leigh (Ed.). 1995a. The Cultures of Computing. Oxford, UK: Blackwell Publishers.

Star, Susan Leigh (ed.) 1995b. Ecologies of Knowledge: Work and Politics in Science and Technology. Albany, NY: SUNY.

Stix, Gary. 1994. "Aging Airways." Scientific American. 270(5)(May):96-104.

Suchman, Lucy. 1996. "Supporting Articulation Work: Aspects of a Feminist Practice of Office Technology Production." in Kling (1996).

Tyre, M. J. and Orlikowski, W. J. (1994). Windows of opportunity: Temporal patterns of technological adaptation in organizations. Organization Science, vol. 5, no. 1, 98-118.

Wagner, Ina. 1993. "A Web of Fuzzy Problems: Confronting the Ethical Issues." Communications of the ACM 36(4) (June):94-101.

White, Joseph B, Don Clark, and Silvia Ascarelli. 1997. "This German Software is Complex, Expensive, and Widely Popular" Wall Street Journal. Friday, March 14: A1, A8

Xenakis, John J. 1996. Taming SAP. CFO: The Magazine for Senior Financial Executives v12, n3 (Mar):23-30.

Yates, JoAnn and Wanda Orlikowski. 1992. "Genres of Organizational Communication: A Structurational Approach to Studying Communication and Media." Academy of Management Review 17: 299-326.

Zmuidzinas, M., Kling, R., & George, J. (1990, December). Desktop Computerization as a Continuing Process. In Proceedings of the 11th International Conference on Information Systems. Copenhagen, Denmark.
