No. WP-02-13

Academic Rewards for
Scholarly Research Communication via Electronic Publishing

Rob Kling  and Lisa Spector

Center for Social Informatics
Indiana University
Bloomington, IN

(December 17, 2002)

Colleges and universities face questions common to all organizations, although their particular forms may be unusual. Who should be hired? Who should be retained? Who should be fired? Who should be rewarded, if pay is based on merit rather than seniority? In North American colleges and universities, many of these decisions are made on the basis of scholarly merit, although the areas of evaluation often include teaching and professional service, as well as scholarship.

There are over three thousand colleges and universities in the U.S. that offer bachelor’s and advanced degrees. They differ substantially in their missions, and consequently, in the criteria that they employ for reviewing faculty for appointments, promotions, tenure, and other awards. In 2000, the Carnegie Foundation identified 151 universities as having strong doctoral research programs in at least 15 fields (Carnegie Foundation, 2002: Table #1). In contrast, the Carnegie Foundation identified about 600 master's colleges and universities and another 600 baccalaureate colleges based on the relative size of their degree programs. The primary missions of the colleges and universities in these broad categories differ in kind and in their relative emphasis upon research, various kinds of creative activities, teaching at various levels, professional service, and service to their local communities.

The majority of promotion and tenure reviews in the 600 Baccalaureate Colleges would necessarily differ in their criteria from the majority of promotion and tenure reviews in Carnegie's 151 "Doctoral/Research-Extensive Universities". Even within a specific university, different academic units may place different weight on research, service, and teaching for promotion.

Scholarly Publishing: A Wide Spectrum
It is common for academics to treat publishing as a binary concept: an article or book has either been published or not. This binary view of the publication status of a manuscript simplifies the work of reviewers who are making comparative judgments to decide who might merit promotions, grants, appointments, honors, and so on. Reviewers compare the publication records of a group of candidates (and perhaps of their peers as well), often regarding documents simply as "published" or "unpublished".

We conceptualize publication as a multidimensional continuum, rather than as a discrete, binary category. Conceptions of publishing are anchored in particular fields. It is often hard to evaluate precisely the relative quality and impact of several sets of paper publications, and adding e-scripts to the mix complicates evaluation further. But that is a feature of this new era of Internet-enabled scholarly publishing.

Scholars’ publication records are often heterogeneous; different kinds of places dominate in different disciplines. Natural scientists publish primarily journal articles; humanities scholars are much more likely to publish books and book chapters.  However, within each field, many scholars publish in various places.

Electronic Publishing
"Purely electronic" publishing places have been the subject of considerable controversy in colleges and universities when claims about their legitimacy and status are made by authors, journal editors, members of search committees, promotion committees, and so on. Authors usually have an incentive to claim that their works have been published in high-quality places. The members of search committees and promotion committees are usually comparing a number of scholars, and may wonder how to readily sort the value of purely electronic publications into the mix of an author's traditional paper publications.

Scholarly publishing practices--especially those related to electronic publishing--have been the subject of many new kinds of projects, and a rather cacophonous and sometimes confusing discourse about them emerged in the 1990s. Enthusiasts for publication places that rely upon the Internet as a distribution medium emphasize potential advantages, such as enhanced speed of distribution, lower publication costs, new publication formats (such as hypertext), the accessibility of scholarship to larger groups of readers, and richer discourse about published works. These enthusiasts have created many new electronic publishing places, including authors posting their articles on their own Web sites, disciplinary repositories where authors can post their research articles (for example, arXiv.org), electronic versions of working paper series published by research institutes, peer-reviewed journals that are available exclusively via the Internet, and publishers distributing books via their Internet sites. This list is suggestive and hardly exhaustive. In particular, the list excludes publications that are available in both print and electronic form--an important blend that has been the subject of some even more confusing discussions.

A wide set of electronic publishing practices exists today (Kling and McKim 2000), and more extensive scholarly electronic publishing is virtually certain over the next decades.  Nonetheless, these practices make many scholars and academic administrators uneasy (Sweeney 2000). Some commonplace examples can illustrate today's scholarly publishing practices, and pose questions about how they should be viewed:

Later, we will explain the Kling/McKim framework, and then use it in the Publishing as a Continuum section to analyze the publishing effectiveness of these examples.

Some Nuances of Academic Reviews for Tenure, Promotion, and Rewards
One vexing issue in academic reviews is the way in which the nature of a publication place can serve as an "efficient surrogate" for the quality and impact of a publication. In this approach, the prestige of a conference, journal, or university press signals the scholarly quality of manuscripts published in that place.

In the effort to evaluate heterogeneous manuscripts, some reviewers may be tempted simply to set aside all e-scripts and focus exclusively on paper publications--in short, to assume that "e-anything" must be ephemeral and lower in quality than paper publications. We believe that this attractive simplification is a mistake; the two 'universes' of paper and electronic publications should not be delineated in such a cursory manner. A commonly held view is that any paper publication is more reliable than any electronic publication, and much misleading discourse fails to differentiate between places of publication within either the electronic or the paper universe.

Carlson's (2002) article in The Chronicle of Higher Education reviews a study of scholars' use of online materials. The article is illustrative of this confusing discourse: "Almost 90% of researchers said they went online first..." and, "Most respondents tended not to trust online sources of information." "Online" can refer to a personal Web site, a technical report series, an article in a peer-reviewed electronic journal that is also a paper journal such as Science, The New York Times, or e-mail. The Chronicle article does not differentiate between location (online) and specific place of publication (peer-reviewed journal, e-mail...), grouping all online places into one misleading category.
As we noted above, some kinds of e-script places, such as peer-reviewed pure-e journals, attract such high-quality e-scripts that they serve as flagship peer-reviewed research journals for their scholarly associations. In addition, the Science Citation Index and other bibliographic databases also index the higher-quality electronic-only journals, such as the Journal of Artificial Intelligence Research (JAIR) and the Journal of Turbulence.

Hybrid publications: The example of e-journals
Most discussions of electronic journals (e-journals) conflate a number of different formats into one overarching, and sometimes misleading, category of "electronic journals". Much of the enthusiasm for e-journals in the early 1990s was based on specific assumptions: they would be electronic only, they would be peer-reviewed, and there would be no charges to their authors and readers. Concerns about the long-term archiving of e-journals and their academic legitimacy hinged on similar assumptions (Kling & Covi, 1995). Today, the major scientific, technical, and medical (STM) publishers who offer electronic versions of their paper journals rely upon a subscription model in which they allow electronic access to individual subscribers or to members of organizations who purchase more expensive institutional (library) subscriptions.

For example, Okerson (2000) reviewed the history of journals and discussed a few e-journals of the early 1990s. She also provided a timeline from 1991 to 1999 and indicated the number of electronic journal titles that were listed in two directories. The number of titles grew from 27 in 1991, to 3634 in 1997, and then to 8000 titles in 1999. She briefly discussed the move by major STM publishers to provide WWW-based access to their journals in the period of 1996-2000. Unfortunately, Okerson does not carefully distinguish the relatively few journals that were published only in electronic editions in 1999 from the majority that were published in parallel paper and electronic editions. As we shall show in this chapter, these distinctions have substantial consequences.

The questions about the early "pure" e-journals take on a different character for journals with an established reputation and readership as paper-based journals that also provide parallel electronic editions. The distinction between an e-journal without any paper version and a paper journal with an electronic version matters when trying to answer questions about such issues as the legitimacy or the costs of e-journals. For example, we know of no evidence that prestigious paper journals, such as Science, have lost legitimacy after they established online versions in addition to their printed copies. The question of legitimacy seems to affect only the journals that are completely or primarily distributed in electronic form. Similarly, questions of costs will hinge on the number of printed copies a journal produces as well as the character of its electronic form.  Last, questions about a journal's accessibility and readership can depend on the extent that it allows readers free access to electronic versions.

Following Kling and McKim (1997) we find it useful to distinguish at least four kinds of e-journals: pure e-journals, which are distributed only in electronic form; e-p journals, which are distributed primarily in electronic form but may have very limited distribution in paper form; p-e journals, which are distributed primarily in paper form but are also distributed electronically; and p+e journals, which are initiated with parallel paper and electronic editions.

There are many published discussions of the possible benefits of pure e-journals and their advantages over traditional "pure paper" journals (p-journals). However, those discussions often ignore three ideas. First, although beneficial changes may be possible from a technical perspective, the social structure of online publishing does not change as rapidly as the technical structure. Second, possible changes are often discussed without distinguishing which type of e-journal they apply to. Third, possible advantages are often analyzed separately, without taking into account how one advantage may trade off against another (for example, an e-journal's cost versus the variety of features offered). When looking at the impact factors of an electronic journal for purposes of promotion, tenure, and review, it is important to distinguish whether or not the journal is well established, such as Science. We discuss impact further in the Framework and the Continuum of Publishing sections.

It is useful to clarify electronic publishing terminology. We define an electronic publication as a document distributed primarily through electronic media. The distribution medium is the defining factor, since an electronic publication may well be printed to be read, and may be circulated post-publication in printed form. Conversely, most scholarly publications distributed in paper form have been electronic at some point in their creation, being produced on personal computers and even typeset using software. According to this definition, a manuscript posted on a Web page (under a variety of restrictions or conditions), an article distributed via e-mail, or an article distributed via an e-mail-based distribution list are all electronic publications.
In this chapter, we will use terminology to describe research documents that works across many disciplines:

Article - The common term "article" can implicitly refer to a publication place. The Oxford English Dictionary (OED) defines an article as "a literary composition forming materially part of a journal, magazine, encyclopedia, or other collection, but treating a specific topic distinctly and independently." We will use the term article in a broader way to refer to any document that fits the OED's definition, or that is in a form that could fit the OED's definition if it were published.

Manuscript, E-script - Manuscript is the primary candidate for labeling articles that authors circulate prior to their acceptance for publication. The term manuscript is still widely used by journal editors to refer to articles that will be submitted or are under review. We will use the term manuscript to refer to articles that have not yet been accepted for publication in a specific place, as well as to articles that have been published in an institutionally sponsored place, such as a working paper series or an online server for research articles, such as arXiv.org. Electronic versions may be called e-scripts.

Preprint - We believe that the term preprint should be used in a strict sense to refer to articles that have been accepted for publication in a specific place. Preprint refers to a relationship between two documents, rather than a feature of a document in isolation. We will use the terms preprint and e-print conservatively--to refer to manuscripts in the form in which they are likely to appear in a conference proceedings, journal, or book (whether in printed form, electronic form, or both). E-print, which some scientists use to refer to e-scripts, plays off of its resonance with preprints, and we believe that e-prints should refer to electronic versions of preprints. We will examine the relative worth of preprints again in the Framework and the Continuum sections. Also, please read Appendix A for further discussion of the cloudy discourse surrounding the terms "preprint," "manuscript," "e-print," and "e-script."

The Kling/McKim Framework for the Strength of Publishing
In 1999, Kling and McKim proposed a framework for assessing the strength of publishing within scholarly communication. Prior to their work, there did not seem to be any research evaluating when a publication is strongly or weakly published. Scholars are knowledgeable about the status distinctions within their fields and thus can distinguish a stronger journal within the field from a weaker one, and a peer-reviewed journal article from a talk that was accepted at a conference on the basis of its abstract alone. However, there was no framework in place to analyze differences across fields and across all publications. The Kling/McKim framework explicitly defines three criteria--trustworthiness, publicity, and accessibility--to assess how effectively an article or book has been published within the scholarly community (Kling and McKim, 1999).

Trustworthiness of a document is based on its quality indicators: "The document has been vetted through some social processes that assure readers that they can place a high level of trust in the content of the document based on community-specific norms. Trustworthiness is typically marked by peer-review, publishing house/journal quality, and sponsorship." (Kling and McKim, 1999).

"Peer review is a particular form of vetting that is distinctive of the academic communities. However scholars use other signs to assess the value of a document as well, often in combination - such as the reputation of a journal or publishing house as indicators of reliability.  Peer-review practices vary across the disciplines: some social science journals rely upon double-blind reviewing; many journals seek two to three reviews, while others (the Astrophysical Journal, for example) assign one reviewer to each article.  Book publishers vary in the level of detail in a proposal that they require for review (from a short proposal through sample chapters to a full manuscript), and in the number of reviews.  At the lower end of a scale of trustworthiness lie practices such as self-publishing, publishing in non-reviewed (or weakly reviewed) outlets (such as the working paper series of an academic department), or publishing in edited (but not refereed) journals.  Even in non-reviewed or weakly reviewed places, the reputation of the author (as perceived by the reader) may be a major factor in determining trustworthiness. This analysis of trustworthiness refers to institutionalized practices that are 'beyond the person.' Each scholar knows others whose works s/he trusts and would be eager to read in a prepublication form. But these judgments rest on a mix of highly personal knowledge, tastes, and interests" (Kling and McKim, 1999).

Even some collections of unrefereed e-scripts publish credible research. For example, about 90% of the e-scripts that are posted in the high-energy physics sections of arXiv.org are destined for future publication in conference proceedings and journals (O'Connell, 2002). To write off e-scripts as entirely worthless until they appear in a paper place is a major judgmental error. On the other hand, we disagree with Arms (2002), who claims that the e-scripts posted on arXiv.org are equivalent in quality to peer-reviewed journal articles. The e-scripts published on arXiv.org are unrefereed research reports until they have been accepted for publication in a specific journal or conference. Many will be; however, we do not see why those that are not accepted for publication in a peer-reviewed place should warrant the stature of those that are.

As discussed in the Introduction, all electronic publishing is often grouped together without distinguishing peer-reviewed from non-peer-reviewed publications. This form of over-generalization makes the evaluation of a publication's trustworthiness confusing. While the number of high-status scholars who currently publish in e-journals is smaller than the number who publish in p-journals, drawing a conclusion from this would be specious, since there are far fewer e-journals than print journals, and few e-journals have existed for more than a few years (Kling and McKim, 1999). The guidelines that apply to evaluating paper publications for trustworthiness apply to electronic publications as well, and vary in the same manner.

Publicity involves making the relevant audiences aware of a publication: "The document is announced to scholars so that primary audiences and secondary audiences may learn of its existence.  Publicity represents a continuum of activities from subscription, report lists, abstract databases, advertising and special issues, and citation. A book or article is more effectively published to the extent that members of its primary and secondary audiences are made aware of its availability.  In principle, e-publication (such as posting on a Web site or in a forum on the Web) would seem overwhelmingly more likely to effectively advertise a book or article when compared with publishing in a paper journal, or surpass the relatively limited efforts of many (paper) book publishers to advertise their wares.  In practice, the differences are more subtle, since relatively few established scholars regularly read (pure) e-journals or seek them out, and many book publishers are attempting to exploit the Internet as a publicity medium.  Further, many Web sites are ‘weak attractors’ of reader interest.  A major paper journal, with a well-established readership and reputation (e.g. Science, Nature) may be able to publicize the results of a study within a particular readership community far more effectively than a typical Web site." (Kling and McKim, 1999)

Central to the notion of being effectively published is a perception that an author’s work can be readily located and obtained by interested scholars.

"Readers must be able to access the document independent of the author, and, in a stable manner, over time.  Accessibility is typically assured by institutional stewardship as practiced by libraries, publishing houses, clearinghouses, and is supported by stable identifiers, such as ISBN and ISSN" (Kling and McKim 1999).
Improvements in interlibrary loan services over the last decade have increased the effective accessibility of books and articles. Even so, "the obscure journal" that few scholars can locate still exists. The short-term accessibility of most documents posted on public-access Web sites is relatively high in universities. People who have an adequate Web browser and a good Internet connection can access the document independently of the author. Kling and McKim (1999) examine a variety of exceptions, such as journals or documents that are accessible only to members of an institution or organization.

The long-term access of electronic documents is a broad and emerging topic, beyond the scope of this chapter. Briefly, we will mention some key points. Paper documents in libraries usually have a lifetime of 100 years or more. The paper version of a journal, Nature for example, might be found in over 2000 different libraries, and one hundred years from now, many libraries will still hold back-issues. Long-term access--10 years or more--is speculative for all electronic documents. Both e-journals and p-journals require maintenance, and it is often not clear who carries this stewardship for e-journals. E-journals can lose their funding and become inaccessible (Crawford, 2002). There are many efforts underway to address this issue; for example, students are raising funds to pay for ongoing maintenance of the National Digital Library of Theses and Dissertations.

If a scholarly society sponsors an e-journal, it is likely to be well maintained and archived. There is no guarantee that sites such as arXiv.org will continue to receive funding, and the question of archiving those documents remains unanswered. Citation half-lives vary greatly from field to field. In fields with long half-lives, digital preservation is critical, making institutional or society sponsorship essential. Some think that long-term preservation of digital collections is the most critical issue for library science today (Flecker, 2001).

A major strength of some of the better e-journals over Tier B and Tier C paper journals is that they offer much better publicity to their authors. Many of the Tier B and Tier C journals may circulate only a few hundred paper copies per issue.
However, in many cases, the Tier B and Tier C journals may offer longer-term access. In 2001, Crawford (2002) examined the current status of 104 scholarly pure e-journals that were indexed in the 1995 edition of the ARL's Directory of Electronic Journals, Newsletters, and Academic Discussion Lists. Fifty-seven of these 104 pure e-journals had a URL for their gopher or WWW sites, but only 17 of those 57 URLs worked in early 2001. After considerable search effort, he found the URLs of 49 of the 104 e-journals that were still publishing and were free to readers, as well as the URLs of 22 others that had ceased publication. Specialists in a field are likely to keep up with URL shifts. Even so, only about 50% of these new pure e-journals survived for six years. Over time, the archive sites of deceased journals are taken down from the WWW. In contrast, libraries that subscribe to journals (of any tier) usually retain their copies if the journal ceases publication.

The duration of access may differ in importance from one field to another. Generally, humanists value the ability to read publications in their fields that are decades or centuries old. Natural scientists rely more heavily on work published within the past ten years.

Publishing as a Continuum, Paper and E-scripts
Scholarly publishing is a complex continuum of communication places. A simple scale would range from the working draft of a manuscript that an author circulates at a seminar, to an article in a peer-reviewed place (such as a journal or book by a reputed publisher). However, many variations are possible, such as the reprinting of journal articles as chapters in books.  This continuum of publishing occurs both in paper publications and in e-scripts.

In order to help gauge the strength of publication for an e-script, the first activity is to identify its character: is the manuscript a dissertation, a working draft, a working paper or technical report in a series, a conference article, a book chapter, a magazine article, a peer-reviewed journal article, or a book?

It is then possible to apply the Kling/McKim framework to the e-script, and to assess its strength of publication. This gives a framework to use to examine the e-script relative to paper-based manuscripts in the same class (such as preprints, talks, or books).
We will now analyze the relative strengths and weaknesses of the examples from the introduction, using the Kling/McKim framework to examine publicity, trustworthiness, and accessibility.

Ph.D. Dissertations
U.S. Ph.D. dissertations are often shelved in university libraries, with free, local-only access. They are also widely available in microform or paper through University Microfilms (UMI), for a fee ranging from $32 to $73 (U.S. dollars). In many fields, dissertations may be posted on personal Web sites. They may also be made available in electronic form through the National Digital Library of Theses and Dissertations (NDLTD) (Fox, et al., 1996). Should any of the electronic versions count as additional publications in scholars' records, or should these kinds of electronic publications be ignored?

UMI does not alter the trustworthiness of a dissertation, since it acts as a clearinghouse for all dissertations. A subject search on UMI will yield dissertations that a researcher may not be aware of, but a reader must order and pay for a copy in order to read it. Access to the dissertation is slightly increased. This is clearly not a simple matter: the dissertation is slightly more visible, short-term access is slightly enhanced, and trustworthiness is unchanged.
Dissertations may also be published on a researcher's personal Web page. They are no more, and no less, trustworthy for being published on the Web site. "Career review" of the author and the quality of the institute that hosts the site may be used to estimate the quality of other publications on the scholar's personal Web page that are only lightly reviewed, or unrefereed (Kling, et al., 2002). Publicity of personal Web sites is weak: with a major search engine, a researcher may (or may not) find the site. Students and colleagues may be aware of the site; however, there is not much advertising of personal Web sites. Once the URL is known, access is usually easy, with the minimal requirement of Internet access. However, long-term access depends on the individual's ability and interest in maintaining the site; no institution or society is invested in the maintenance or long-term accessibility of publications on personal Web sites.

The National Digital Library of Theses and Dissertations (NDLTD) is an electronic repository for Ph.D. dissertations (http://www.theses.org). By late 2002, approximately 25 universities were participating, including MIT and the University of Virginia; nine of the participating universities were in Western Europe or East Asia. A search on NDLTD slightly enhances a dissertation's publicity: a scholar may search by topic and find a dissertation otherwise unknown to her. She may be more likely to read it than to seek a copy from UMI, because there is no fee and the document is immediately accessible. Access is enhanced (more so than with UMI: it is free and readily available). Trustworthiness is not enhanced by placement on NDLTD.

Repositories, Working Paper/Technical Report Series and Preprints

ArXiv.org contains over 208,000 e-scripts--talks, conference articles that will appear in conference proceedings, manuscripts submitted to peer-reviewed journals, and manuscripts accepted by peer-reviewed journals--in the fields of physics, mathematics, and computer science. How should these e-scripts be integrated into a physicist's publication list and evaluated? Are they any more substantial as publications than unrefereed e-scripts that a physicist may post on her own WWW page?

Trustworthiness must be evaluated on a document-by-document basis: only minimal review is required to post on arXiv.org (an e-mail address signifying affiliation with a university, .edu, or a government agency, .gov). An e-script on arXiv.org that is later published in a peer-reviewed journal has a high level of trustworthiness, while a conference talk does not have the same quality indicators.

As we discussed in the definitions section, "preprint" is the term commonly used to describe all e-scripts posted to arXiv.org. This misnomer grants inflated trustworthiness to documents that lack the "accepted for publication" or "published in..." markers; considering a talk to be a preprint would add a false level of trustworthiness to those e-scripts. In fact, only some of the posts on arXiv.org are preprints, having been accepted for publication in a journal, and as such they have the same level of trustworthiness as other articles published in the same journal.
ArXiv.org is a highly visible e-script repository, and any e-scripts posted on arXiv.org gain some added publicity. A search on Google will not yield results from arXiv.org because the site is "robot blocked." However, researchers who are aware of arXiv.org may easily search the site, and arXiv.org is free to readers. The only prerequisite for short-term access to arXiv.org is Internet access.

Working Paper Series
Many working paper and technical report series are now available online.  Should these e-scripts be evaluated as being as substantial as their paper precursors, or be viewed as a new form of ephemera?
The trustworthiness of working papers is the same regardless of the publishing medium--electronic working papers retain the same quality indicators as the paper versions. Within research universities, the working papers of a scholar early in her career may carry more weight than those of a more experienced scholar, of whom more strongly published work is expected. Frequently, the marker "in preparation" is added to a scholar's publication list. "In preparation" is not weighted as highly as a manuscript in a working paper series; placement in the series demonstrates that the work is at least in full draft form.

In this section we compare and analyze one pure-e magazine, two pure-e journals, and one e-p journal.

D-Lib Magazine (pure-e magazine)

D-Lib Magazine is a pure-e magazine that is not peer-reviewed.  How should its articles be evaluated?
D-Lib Magazine's trustworthiness depends on several factors, and is more complicated to assess than that of peer-reviewed journals. D-Lib Magazine is widely read by scholars who are interested in digital libraries, and it is currently funded by the U.S. National Science Foundation as an adjunct to its research program on digital libraries. A pure e-magazine (or journal) in a field without another specialized magazine (or journal) may have enhanced trustworthiness: scholars will publish their best work there, knowing that their peers are reading it. Such an e-magazine (or journal)--free, with easy short-term access and good publicity--may have an edge over a new p-magazine (or journal) that an enterprising publisher may want to launch.

D-Lib Magazine is widely read by scholars who are interested in digital libraries, and its table of contents is circulated on a LISTSERV that is widely read in the field of information science (ASIS-L). Broad search engines can point to D-Lib Magazine articles, enhancing the publicity of its contents. Overall, D-Lib Magazine has a high level of publicity. Short-term access is easy with Internet availability, and articles are available online, in full text, for free.

The following two publications are pure-e journals. Both are peer-reviewed; their quality indicators are therefore equivalent to those of any peer-reviewed electronic, paper, or hybrid p-e journal. Though similar in trustworthiness, they differ in their levels of publicity and of short- and long-term access.

First Monday (pure-e journal)
How should articles published in First Monday be evaluated relative to other peer-reviewed journal articles?
The wide topical breadth of First Monday means that there are many paper alternatives (such as Information Society, New Media and Society, and Information Communication Society). First Monday has a record of attracting many high-quality articles, and some lower-quality ones, making it comparable to its paper alternatives.

Though First Monday is a pure-e journal, authorship is high: between 1996 and 2002, almost 500 authors published about 400 articles in 75 issues. First Monday is indexed in INSPEC, LISA, and PAIS. Readership is also high:

In the year 2001, users from 536,046 distinct hosts around the world downloaded 3,117,547 contributions published in First Monday. (First Monday Basics)
Publicity, apparently, is high. Short-term access to First Monday is easy, requiring only Internet access; articles are available online, in full text, for free, and major search engines readily find them.

The Journal of the Association of Information Systems (JAIS) (pure-e journal)
Should articles that are published in JAIS be viewed as ephemera, or as substantial scholarly contributions? How should its articles be evaluated relative to those that are published in high quality established paper journals in the information systems field?
Access to JAIS is limited to Association for Information Systems members. Thus JAIS has high visibility within the primary research circle of information systems but very limited publicity among secondary researchers, which could reduce its impact over time. Limiting access to members is a different sort of access than p-journals have, neither better nor worse. P-journals can also pose short-term access problems: they may or may not be available at a given institution, depending on its research focus, the perceived needs of its scholars, and the funds available to the library or to individual researchers.

Journal of Artificial Intelligence Research (JAIR) (e-p journal)
JAIR is an electronic journal that also publishes an annual paper volume of its articles (Kling and Covi 1995).  How should articles published in the Journal of Artificial Intelligence Research be evaluated relative to other peer-reviewed journal articles?
JAIR is peer-reviewed. It published its 17th volume in 2002, and it is indexed in INSPEC, the Science Citation Index, and MathSciNet (JAIR, An International Electronic and Print Journal). A similar journal, The Journal of Artificial Intelligence, was excluding some topics from its contents, spurring the creation of JAIR. Thus the new JAIR had a ready-made audience, and its existence was already known to many in the field.
This e-p journal rates higher on long-term access than the pure-e journals discussed above: its bound paper volumes will be archived in libraries, virtually guaranteeing safe long-term access. JAIR has high quality indicators and a large circulation, and it is available both electronically (free) and on paper (in bound volumes, for a fee). JAIR rates high in all three of our categories: trustworthiness, publicity, and accessibility.

Tables 1A and 1B
Table 1A*

Dimensions compared: publicity, trustworthiness, short-term access, long-term access, and strength of publishing (overall rating)

PhD dissertations posted on a personal Web site
  Short-term access: High; easy with Internet access, free to readers

Dissertations posted on the National Digital Library of Theses and Dissertations (electronic)
  Short-term access: High; easy with Internet access, free to readers
  Long-term access: Speculative; efforts are underway to maintain long-term access to electronic publications

PhD dissertations (paper) shelved in the student's university library
  Very low
  Short-term access: Low; one must be on site to read them, or order them through U.M.I. for a fee

ArXiv.org unrefereed e-scripts, peer-reviewed preprints, conference talks, and peer-reviewed journal papers
  Publicity: High within the fields of physics, mathematics, and computer science
  Trustworthiness: High to low; varies according to the type of post (e.g., peer-reviewed pre- or post-print vs. working paper)
  Short-term access: High; easy with Internet access, free to readers
Electronic working paper and technical report series

Working papers and technical reports (paper) mailed to other researchers
  Low; at the discretion of the author

Table 1B

Dimensions compared: publicity, trustworthiness, and short-term/long-term access

D-Lib Magazine (pure-e magazine)
  Publicity: High within field (widely read in the field of information science)
  Trustworthiness: High; edited by the D-Lib editors, not peer-reviewed
  Access: High; easy with Internet access, free to readers

First Monday (pure-e journal)
  Publicity: High
  Trustworthiness: High; peer-reviewed
  Access: High; easy with Internet access, free to readers

The Journal of the Association of Information Systems (http://jais.aisnet.org/home.asp)
  Publicity: High for AIS members, low for non-members
  Trustworthiness: High; peer-reviewed
  Access: High for AIS members, low for non-members

Information Systems Research
  Publicity: ??
  Trustworthiness: High; peer-reviewed
  Access: ??

The Journal of Artificial Intelligence Research (e-p journal) (http://www.cs.washington.edu/research/jair/home.html)
  Publicity: High
  Trustworthiness: High; peer-reviewed
  Access: High; easy with Internet access, free to readers

Annual paper volume of the Journal of Artificial Intelligence Research
  Publicity: High
  Trustworthiness: High; peer-reviewed
  Access: ??

  *These tables offer suggestive, though not definitive, comparisons of print and electronic resources and journals. Tables 1A and 1B give an overview of the application of the Kling/McKim framework to the examples we examined. The tables are stereotypic in nature, and we recognize that the academic world necessitates many contextual judgments; we present the framework as a useful guide for evaluating academic publishing. The "strength of publishing, overall rating" is based on a five-point scale, with one being the lowest strength of publishing and five being very strongly published.
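The tables do not specify how the three dimensions combine into the five-point overall rating, so the sketch below is purely illustrative: the function name, the equal weighting, and the example scores are our own assumptions, not the framework's formula.

```python
# Illustrative sketch of the five-point "strength of publishing"
# rating: trustworthiness, publicity, and accessibility each scored
# 1 (low) to 5 (high), combined here as an unweighted average.
# The equal weighting is an assumption, not a formula from the text.

def strength_of_publishing(trustworthiness, publicity, accessibility):
    """Return an overall 1-5 rating from three 1-5 dimension scores."""
    scores = (trustworthiness, publicity, accessibility)
    if any(not 1 <= s <= 5 for s in scores):
        raise ValueError("each dimension must be scored from 1 to 5")
    return round(sum(scores) / len(scores))

# An e-p journal rated high on all three dimensions:
print(strength_of_publishing(5, 5, 5))  # 5
# An e-script on a personal Web site: easy to reach but little publicity:
print(strength_of_publishing(2, 1, 4))  # 2
```

In practice, of course, the weights would themselves be contextual judgments, varying by field, as the surrounding text stresses.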

The Tensions of E-Publishing in Perspective

The Rhetorics and Criteria of Research Reviews
We have examined the appointment, tenure, and promotion guidelines for about 20 of the 151 "Doctoral/Research-Extensive Universities." The reviews typically evaluate teaching, research, and professional and university service. The criteria for expected research accomplishments differ by the rank of the appointment or promotion. They also differ in the range of disciplines the guidelines cover, from a single department or discipline to the entire university. We have selected five short excerpts from much longer documents to illustrate the range of rhetorics and formal criteria used to judge faculty research during these career evaluations.

For example, the University of California's guidelines use the following wording to describe requirements for appointment or promotion as a tenured associate professor:

Superior intellectual attainment, as evidenced both in teaching and in research or other creative achievement, is an indispensable qualification for appointment or promotion to tenure positions. (University of California, 1992)
Professor-level appointments add this to the "superior intellectual attainment":
A candidate for the rank of Professor is expected to have an accomplished record of research that is judged to be excellent by his or her peers within the larger discipline or field.
The criteria for one of its high level professorial steps are:
…highly distinguished scholarship, highly meritorious service. . . excellent University teaching . . . [and] great distinction, recognized nationally or internationally, in scholarly or creative achievement or in teaching.
Advancement (or appointment) for a notably higher (and unusual) level of professorship:
is reserved for scholars and teachers of the highest distinction, whose work has been internationally recognized and acclaimed and whose teaching performance is excellent. (University of California, 2002).
The criteria for tenure in the College of Literature, Science and the Arts at the University of Michigan are also broadly worded:
Tenure in LS&A should be granted only to candidates who have demonstrated excellence in research and teaching and, in more modest ways, excellence in service. Excellent research should have a demonstrable impact on the area of study to which it is meant to contribute and should provide evidence for a strong presumption of future distinction. Excellent teaching should be demonstrated by evidence of a strong motivation to engage students in the learning process, by the rigor and scope of the courses taught and by course and instructor student and peer evaluations. The only overriding criteria for granting or not granting tenure is the quality, quantity, and impact of the candidate's research, teaching, and service. (University of Michigan, 2001).
In contrast, the "Guidelines for Tenure and Promotion" for the University of Florida College of Health Professions note in the discussion of evaluating research for promotion to associate professor:
The primary indicator of progress toward establishment of a national reputation shall be the publication of research findings in peer-reviewed journals of high quality (as indicated by, but not limited to, the judgments of experts in the field, the journals, rates of rejection, and empirically-based journal impact ratings).... The quality of research shall be judged as more important than quantity in evaluating the candidate's research contributions. (University of Florida, 2002).
The Guidelines list a broader set of research indicators, such as research funding, published book chapters, and editorial positions. However, the emphasis upon high quality peer-reviewed journals is notable, and workable in the health sciences.
The Mathematics Department at the University of Arizona evaluates each of its tenured faculty annually "in each of the three primary areas of responsibility (teaching, research/scholarly activity, and service/outreach) according to a five-level scale." These annual evaluations are used to adjust workloads and to set salaries.
The criteria for a rating of "Meets expectations" in research/scholarly activity are that the faculty member produce a yearly average of participation in at least one sponsored research grant or contract (as PI or Co-PI) or publication as author or co-author of one peer-reviewed document (books, book chapters, journal articles, conference papers, etc.) or activity as thesis or dissertation director for one graduate degree, or significant course or modular materials development dependent on deep understanding of a particular area, or any coherent combination of these four activities. (These criteria assume a 40% research load and should be adjusted to actual workload assignments). As the frequency and nature of scholarly output varies with areas of concentration, even within mathematics, it is expected that the rating will be adjusted to reflect such variation, using departmental averages and comparison with peer institutions and general trends in mathematics departments. (University of Arizona, 1998)
As in the health sciences, the mathematics department emphasizes peer-reviewed documents, but is more open to a variety of publishing places (i.e., conferences as well as journals). In contrast, the College of Humanities at the University of Arizona specifies different publication requirements for faculty in literature and area studies (books), scholars of language (journal articles), and for creative writers (books). The requirements for literature and area studies say, in part:
Promotion to Associate Professor with tenure will normally mean the acceptance for publication by a reputable press of at least one single-authored interpretive monograph or a major work of scholarship (such as a scholarly edition, a biography, annotated bibliography, or calendar of plays with complete critical apparatus) that makes a significant contribution to the candidate’s field. …Additional but not alternative evidence for promotion in this category of research will normally include the regular publication of scholarly or interpretive articles in refereed journals; it may also include the regular presenting of professional papers, winning grants and awards for scholarship, having one’s work translated or reprinted, being cited by peers, and being selected for tours of duty at special institutes for advanced study (University of Arizona, 2000).
One may smile at some of these vague criteria. Is the excellence required for tenure at the University of Michigan a higher or lower standard than the "superior intellectual attainment" required at the University of California? The meanings of these vague criteria are sorted out in practice by comparisons of faculty under review, with faculty in the same field at similar ranks and career stages at other major research universities. In practice, the level of accomplishment at the University of Michigan (Ann Arbor) and the major University of California campuses is comparable.

Most germane to our concerns is how these general criteria influence the evaluation of electronic documents. For example, the Mathematics Department at the University of Arizona and the University of Florida College of Health Professions emphasize peer-reviewed documents as central indicators of quality. The University of Arizona's guidelines for promotion to Associate Professor with tenure in literary and area studies refer to the acceptance of a scholarly book "for publication by a reputable press." Some promotion guidelines are even more specific. For example, the Accounting Department at North Carolina State University specifies that the "number of refereed works normally expected for promotion are three or more to Associate Professor and eight or more to Full Professor (NCSU, 1999)."

The quote from the University of Michigan's guidelines identifies what we believe are the three major underlying criteria for scholarly evaluations: the "quality, quantity, and impact" of the works. Some universities’ promotion and tenure guidelines explicitly stress that research and creative works should be evaluated and not merely enumerated.

The Grisly Work of Academic Reviewing
Substantive evaluation is time consuming, even for experts. Academic reviewers therefore often seek "efficient indicators" of the quality and impact of publications. The publishing place, such as a peer-reviewed journal or a "reputable press," often serves as a quality indicator. Citation counts are sometimes used as indicators of impact, especially in the natural and social sciences (Garfield, 1972). The impact of books can also be assessed through book reviews and through the ways that other scholars discuss them (or ignore them) in related writing. All of these evaluation strategies are commonplace in academia and are well known to be imperfect.

Scholars in various fields attribute higher quality to some journals and book publishers than to others. An extreme example is that of business schools, where departments are often asked to stratify the journals in their fields into three tiers. In these settings, it is common to hear of faculty evaluated in terms of how many "tier A" or "tier B" journal articles they have published. In the natural and social sciences, the Institute for Scientific Information calculates an "impact factor" for about 5,700 natural science journals and 1,700 social science journals, based on how often a journal's recent articles are cited by other journal articles in subsequent years.
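The arithmetic behind ISI's two-year impact factor is simple: citations received in year Y to a journal's articles from years Y-1 and Y-2, divided by the number of citable articles the journal published in those two years. The figures in the sketch below are invented purely for illustration.

```python
# A toy version of ISI's two-year "impact factor." The journal
# figures used here are invented for illustration only.

def impact_factor(citations_in_year, articles_prior_two_years):
    """Average citations per recent article (ISI-style, two-year window)."""
    if articles_prior_two_years == 0:
        raise ValueError("journal published no citable articles")
    return citations_in_year / articles_prior_two_years

# e.g., 210 citations in 2002 to articles published in 2000-2001,
# out of 120 citable articles in those two volumes:
print(impact_factor(210, 120))  # 1.75
```

The ratio is an average, so a handful of heavily cited articles can raise a journal's factor even if most of its articles are rarely cited, which is one of the well-known imperfections of such indicators.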

To what extent should the publishing medium--paper or electronic--be used as a quality indicator?  During the 1990s there were a number of surveys of faculty in various universities and various disciplines about their perceptions of the legitimacy of publications that appear in e-journals. Kling and Callahan (in press) note that these surveys rarely distinguish between different types of e-journals (i.e., pure e-journals vs. p-e journals), and thus can be unreliable. Today, the vast majority of e-journals are p-e journals, and their status is anchored in their p-journal status. For example, Science has not lost status because it developed from a p-journal to a p-e journal with Science Online as a parallel electronic edition.

The earliest surveys were conducted when "e-journal" meant pure-e journal, but few faculty were familiar with them (e.g., Schauder 1994). Academics are likely to become more familiar with pure e-journals and their variety over time, and scholars' comfort may improve with familiarity. Sweeney (2000) conducted a small survey of high-level academic administrators in Florida's State University System and of faculty at one of the 109 Carnegie "research intensive" universities. While approximately two-thirds of all respondents (administrators and faculty) ostensibly agreed that articles published in e-journals should be counted in tenure and promotion reviews, administrators in particular expressed repeated concern that the "legitimacy" of such e-journals (probably pure e-journals) be clearly established by the faculty member undergoing review.

However, there are pure e-journals in many fields, and the question of their legitimacy is central here. For example, in mathematics, the peer-reviewed pure e-journals include Electronic Communications in Probability, Electronic Journal of Combinatorics, Electronic Journal of Differential Equations, Electronic Journal of Linear Algebra, Electronic Journal of Probability, and the Electronic Journal of Qualitative Theory of Differential Equations. Should publications in these journals be automatically relegated to "Tier E"? The University of Arizona Mathematics Department requires each tenured faculty member to publish one peer-reviewed article annually to receive a rating of "Meets expectations" in research/scholarly activity. If a faculty member chooses to publish in any of these journals, should those articles be counted as peer-reviewed or discounted? We will offer some guidance later in this section.

Other chapters in this book discuss the range of e-publishing projects and products that may be "on the table" for academic review. Most reviews are based on more than one document: there is usually a set of articles, book chapters, and/or books to be evaluated. Some scientists' publication corpuses are composed almost completely of research articles in the primary journals of their fields. Similarly, some humanists' and social scientists' publication corpuses are composed primarily of monographs published by high-quality university presses. But we suspect that resumes with heterogeneous kinds of publications and publishing places are more common across academia: some mix of conference papers, journal articles, book chapters, monographs, textbooks, and so on.

The relative weight of these products varies by field (articles are usually more valued in the sciences, while books are more valued in the humanities). The documents may vary in character (an original research article versus a literature review; a textbook versus a monograph; and so on). Documents are in different stages of their publication trajectories (under review, in press, published in a specific place). Adding the characteristic "electronic" to some of the documents in this mix can further complicate the review of a scholarly corpus, whether the review is of an individual, of a research institute (for national funding), or of a department (as in the case of the periodic Research Assessment Exercises in the UK).
In our experience, reviewers often try to simplify the cognitive complexity of their task by invoking simplified category schemes (e.g., unrefereed conference paper, peer-reviewed journal article, monograph from a major university press) to focus attention on some "high quality" portion of the corpus and to remove the rest from detailed consideration. We sympathize with reviewers who wish to simplify their reviews of complex academic corpuses in this way (and we have done so ourselves in reviewing scholars for academic appointments and promotions). However, we caution against one attractive simplification rule: remove all e-scripts from detailed review.

As we indicated earlier, two primary perspectives underlie academics' views of e-script publications (Kling and McKim 2002). The more common perspective emphasizes the information-processing features of electronic publishing, and posits that electronic publishing can be relatively easy and inexpensive and can lead authors to reach much larger audiences rapidly. An alternative "socio-technical" perspective examines electronic publishing within a matrix of social practices, skill mixes, and support resources, which are often institutionalized in ways that turn electronic publishing into a complex venture whose virtues take considerable effort to realize. Many academics, both enthusiasts of electronic publishing and skeptics, accept the information-processing perspective.

The information-processing perspective underlies many enthusiasts' claims that scholarly electronic publishing can be much faster and much less expensive than traditional print media, and can enable authors to reach wide audiences more readily. Skeptics often rely upon the same perspective to characterize scholarly electronic publishing as "too easy" and ephemeral, leading to e-scripts that circulate "in a kind of ghostly netherworld of academic publishing" (Kling and Covi, 1995). Reviewers who take this point of view would remove all e-scripts from detailed review.
In our research, we have found that the socio-technical perspective provides deeper insight into the virtues and limitations of scholarly electronic publishing. We will discuss key tensions of scholarly electronic publishing that are critical to evaluating e-scripts for tenure, promotion, and other rewards in the remainder of this section.

Pragmatics of e-publishing
It is not "a snap" for academics to publish their works on Web sites. Doing so can involve quite complex pragmatics, including access to specialized computer programs and technical abilities (or technical support). The specific pragmatics differ across kinds of publication places. Publishing on one's own Web site requires some basic skill with HTML. Publishing in an online working paper series may require little more than sending a manuscript as an e-mail attachment or on a diskette to the person who manages the series' Web site. Publishing in a repository, such as arXiv.org, requires some basic ability to fill in forms online and upload files. Publishing in an e-journal may seem as simple as publishing in a working paper series, but it can actually be more complex because of a more involved editorial process (sometimes requiring rapid communication back and forth between authors and editors).
Responsibility for the formatting and layout of published documents also differs between e-publishing and the print medium. In paper publishing, the publisher handles the layout; in pure e-publishing, authors are frequently asked to handle it themselves, which requires particular computing skills and programs, as well as technical support. Some common word-processing programs can automatically translate a file to HTML, so electronic publishing may seem easy. But communication between authors and e-journal editors can be complicated when the editors use a markup language (such as TeX or HTML) to communicate copy-editing changes to authors; authors must then be familiar with these technologies for managing page layout. Of course, these markup languages are known by many academics and can be learned by people who are not IT specialists. But the pragmatics of communication between authors and e-journal editors can require rapid turnaround in the few days before a journal issue is published.
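To make the layout chore concrete, the sketch below wraps a plain-text manuscript in bare-bones HTML, roughly the "basic skill with HTML" a personal Web site demands. The function name and page structure are invented for illustration; real e-journal layouts are far more involved.

```python
import html

# A minimal sketch of the layout work e-publishing can push onto
# authors: escaping special characters and wrapping a manuscript
# in a skeletal HTML page. Illustrative only.

def manuscript_to_html(title, paragraphs):
    """Wrap a title and a list of paragraphs in a bare-bones HTML page."""
    body = "\n".join(f"<p>{html.escape(p)}</p>" for p in paragraphs)
    return (f"<html><head><title>{html.escape(title)}</title></head>\n"
            f"<body><h1>{html.escape(title)}</h1>\n{body}\n</body></html>")

page = manuscript_to_html("Rewards for E-Publishing",
                          ["First paragraph.", "Second & final paragraph."])
print(page)
```

Even this trivial step involves details (character escaping, consistent tagging) that are invisible in print publishing, where a publisher's production staff absorbs them.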

In brief, one cannot assume that e-scripts take "almost no effort" to publish. When they require considerable effort to publish (i.e., "high friction"), they also require considerable effort to alter. Thus the extent to which e-scripts are easy to publish and alter at will depends largely upon the pragmatics of the publication place and the author's skills (or ability to enlist skilled assistants to do the work).

Post it and they will read -- the field of dreams myth
Perhaps no topic is more misunderstood than the extent to which articles posted in online places will be widely read. As we indicated above in our discussion of publishing, places differ considerably in how effectively they publicize their documents (and in how accessible those documents remain long after their initial publication).

Academic reviews in research universities emphasize the quality of a document (or of a scholar's corpus) and the impacts of the scholar's documents and corpus. An extreme example is the scholar who posts an e-script on her personal Web site. This e-script may be accessed "worldwide," if potential readers know about it and suspect that it is worth their attention. However, scholars' time (and thus attention) is limited. Consequently, we have not found many scholars searching for research in their fields via search engines. When they seek e-scripts, they are much more likely to visit specific places, such as disciplinary repositories (for example, arXiv.org), online working paper and technical report series, and the sites of specific e-journals. Each of these is a much stronger publication place than self-publishing on one's own Web site.

Scholarly Credit
Our main argument has been that e-scripts should not be automatically discounted in academic reviews, or treated as "Tier E" publications. Rather, they should be assessed like the other publications in the corpus under review. One key task is to sort publications into broad categories (books, journal articles, conference papers, working papers, textbooks, and so on) and to note those that have been peer-reviewed. It is also important to note publication statuses, such as "in press." Some e-publishing enthusiasts have complicated this sorting through their elastic use of the terms "preprint" and "e-print." As we noted earlier, in the Definitions section, a document can be called a preprint only once it has another publication place, such as a specific journal. Until then, it is simply another unreviewed manuscript.

The next step is to evaluate the quality of the materials selected for careful review. Publications may be evaluated using the Kling/McKim model that we explained in the Framework section. In some reviews, written evaluations from peers at other universities are available. Some are merely testimonials and provide little detailed insight; others clarify the stature of a publication place (for example, a journal or publisher) without evaluating the publications themselves; still others evaluate specific works in substantial detail.


The medium of a publication, paper or electronic, does not influence its core scholarly content. The quality indicators, combined with the publicity and access of a document, determine the strength of publishing.

While the temptation toward such surrogacy (due to time limitations) is ever-present, simplified evaluation criteria, such as ruling out online publishing, do not give a fair view of a scholar's work. E-publications should not automatically be dismissed!

In fact, a broad continuum of publishing exists, in both paper and electronic form, and the Kling/McKim framework, though not definitive, is useful for tenure, promotion, and review purposes. Within this framework, we see that the strength of e-publishing differs from that of p-publishing in two major areas: publicity and accessibility. Through e-lists (and XXX?), the publicity of e-publishing can be high (in addition to the standard indexing and advertising done through societies). Short-term access (five to ten years) may be higher for e-publishing than for p-publishing, though long-term access is, at least at this point in history, not as strong in the e-publishing world. The importance of these tradeoffs differs from field to field: historians may deem e-journals to lack the permanence that is critical for their work, while high-energy physicists may care more about high levels of short-term access and publicity.

This research has been funded by NSF Award #9872961 for the SCIT (Scholarly Communication and Information Technology) project (http://www.slis.indiana.edu/SCIT/).  Disclaimer: “Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.” This work was also funded by SLIS at Indiana University, Bloomington, Indiana, USA.  We appreciate helpful comments from David Spector and Deborah Shaw, and encouragement from Deborah Andersen. A shorter form of this article will appear as Kling and  Spector (in press).

Addis, Louise. 2002. Brief and Biased History of Preprint and Database Activities at the SLAC Library, 1962-1994 Available: http://www.slac.stanford.edu/~addis/history.html

Arms, William Y. 2002. What Are the Alternatives to Peer Review? Quality Control in Scholarly Publishing on the Web. Journal of Electronic Publishing 8 (1). Available: http://www.press.umich.edu/jep/08-01/arms.html

Carnegie Foundation. 2002. Carnegie Classification of Institutions of Higher Education. Table #1. Available:

Carlson, Scott. 2002. Student and Faculty Members Turn to Online Library Materials Before Printed Ones, Study Finds. Chronicle of Higher Education, 2 October. Available: http://chronicle.com/free/2002/10/2002100301t.htm

Crawford, Walter. 2002. Free electronic refereed journals: Getting past the arc of enthusiasm. Learned Publishing 15:117-123.

Flecker, Dale. 2001. Preserving Scholarly E-Journals. D-Lib Magazine 7(9). Available: http://www.dlib.org/dlib/september01/flecker/09flecker.html

Fox, Edward A., John L. Eaton, Gail McMillan, Neill A. Kipp, Laura Weiss, Emilio Arce, and Scott Guyer. 1996. National Digital Library of Theses and Dissertations: A Scalable and Sustainable Approach to Unlock University Resources. D-Lib Magazine, September. Available: http://www.dlib.org/dlib/september96/theses/09fox.html

Garfield, Eugene. 1972. Citation analysis as a tool in journal evaluation. Science 178:471-479.

Ginsparg, Paul. 2000. Creating a global knowledge network. Freedom of Information Conference. The impact of open access on biomedical research. New York Academy of Medicine, 6-7, July. Retrieved October 4, 2001 from http://www.biomedcentral.com/info/ginsparg-ed.asp

Harnad, Stevan. 1999. The Future of Scholarly Skywriting. In: Scammell, A. (Ed.) "i in the Sky: Visions of the information future" Aslib, 1999.
Available: http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad99.aslib.html

Harter, Stephen P. 1996. "The Impact of Electronic Journals on Scholarly Communication: A Citation Analysis." The Public-Access Computer Systems Review 7(5): 5-34.

Harter, Stephen P. 1998.  "Scholarly Communication and Electronic Journals: An Impact Study." Journal of the American Society for Information Science 49(6): 507-516.

Hawkins, Donald T. 2001. Bibliometrics of Electronic Journals in Information Science. Information Research 7(1). Available: http://informationr.net/ir/7-1/paper120.html

Journal of Artificial Intelligence Research: An International Electronic and Print Journal homepage. Retrieved 17 October, 2002. Available at: http://www.cs.washington.edu/research/jair/home.html

Kling, Rob. (forthcoming). Scholarly Publishing Without Peer Review via the Internet. Annual Review of Information Science and Technology, 38.

Kling, Rob  and Ewa Callahan. 2002. Electronic Journals, the Internet, and Scholarly Communication. Blaise Cronin (Ed.). Annual Review of Information Science and Technology 37.

Kling, Rob and Covi, Lisa. 1995. Electronic journals and legitimate media in the systems of scholarly communication. The Information Society 11(4):261-271.

Kling, Rob and McKim, Geoff. 1999. Scholarly communication and the continuum of electronic publishing. Journal of the American Society for Information Science 50:890-906.

Kling, Rob and Lisa Spector. (in press). Rewards for Scholarly Communication via Publications. In Deborah Andersen (Ed.), Digital Scholarship in the Tenure, Promotion, and Review Process: A Primer. M.E. Sharpe, Armonk, N.Y.

Kling, Rob, Lisa Spector, and Geoff McKim. 2002. Locally Controlled Scholarly Publishing via the Internet: The Guild Model. Journal of Electronic Publishing 8(1). Available: http://www.press.umich.edu/jep/08-01/kling.html

Kreitz, P. A., Addis, L., Galic, H., and Johnson, T. 1997. The virtual library in action: Collaborative international control of high-energy physics pre-prints. Publishing Research Quarterly 13:24-32.

Morrison, James L. and Suber, Peter. 2002. "The Free Online Scholarship Movement: An Interview with Peter Suber." The Technology Source, September/October. Available: http://ts.mivu.org/default.asp?show=article&id=1025

North Carolina State University, Department of Accounting. 1999. Retention, Promotion, and Tenure Guidelines. Available: http://www.ncsu.edu/provost/academic_affairs/rpt/guidelines/ACC.html

O'Connell, Heath. 2002. Physicists Thriving with Paperless Publishing. High Energy Physics Libraries Webzine (March)6. Available: http://library.cern.ch/HEPLW/6/papers/3/

Odlyzko, Andrew. 1997. The economics of electronic journals. First Monday, 16 July. Retrieved 9 October, 2002.

Okerson, Anne. 2000. Are we there yet? Online e-resources ten years after. Library Trends, 48, 671-694.

Oxford English Dictionary, 2nd edition, 1989. Oxford, New York: Oxford University Press (electronic version).

Patterson, David, Lawrence Snyder, and Jeffrey Ullman. 1999. Evaluating computer scientists and engineers for promotion and tenure. Computing Research Association Best Practices Memo, Computing Research News (September). Available: http://www.cra.org/reports/tenure_review.pdf

Physical Review Letters Policies and Procedures. 1996 (July). Retrieved October 2002. Available: http://forms.aps.org/historic/6.1.96ppl.html

Schauder, Don. 1994. Electronic Publishing of Professional Articles: Attitudes of Academics and Implications for the Scholarly Communication Industry. Journal of the American Society for Information Science 45 (March): 73-100.

Sweeney, Aldrin E. 2000. Tenure and Promotion: Should You Publish in Electronic Journals?   Journal of Electronic Publishing  6(2).  Available: http://www.press.umich.edu/jep/06-02/sweeney.html

Till, James. E. 2001. Predecessors of preprint servers. Learned Publishing, 14, 7-13.

University of Arizona. 1998. Mathematics Division, Annual Performance Review Processes, Criteria, and Measures (January). Available: http://www.math.arizona.edu/overview/perf.html

University of Arizona. 2000. College of Humanities-- Promotion and Tenure: Criteria. Available: http://www.coh.arizona.edu/COH/facinfo/pandtcriteria2000/pandtcriteria2000.htm

University of California. 1992. Academic Personnel Manual 210, Point 210-1 D, page 5. Office of the President. Available: http://www.ucop.edu/acadadv/acadpers/apm/apm-210.pdf

Appendix A

Research Manuscripts and Preprints

In 1969 the American Physical Society Division of Particles and Fields and the U.S. Atomic Energy Commission sponsored a community-wide distribution of a weekly list of new research manuscripts received by the Stanford Linear Accelerator Center (SLAC). This listing was named Preprints in Particles and Fields (PPF). PPF listed authors, titles, abstracts, and author contact information so that subscribers could request the full text of articles that interested them. Hundreds of physicists paid an annual subscription fee to receive PPF weekly by airmail (Till, 2001; Addis, 2002). Not all of the manuscripts listed in PPF were eventually published, which leaves open the question: of what, exactly, are these subsequently unpublished research manuscripts preprints?

These differences in nomenclature for research articles (high-energy physicists' "preprints" versus the manuscripts, technical reports, or working papers of other fields) continue today. Unfortunately, some of this terminological diversity clouds discussions of alternative ways to organize Internet forums to support scholarly communication. It is amplified by the terms used by some advocates of more open exchange of research articles via Internet forums, such as Stevan Harnad (1999), who often refers to "unrefereed preprints."

Consider the unusual case in which a scholar writes an article, submits it to a journal, and has it both accepted for publication and finally published with no changes (including no copy editing or updating of references). The copy of the article in the scholar's file starts out as a research memorandum (or working paper or technical report) on the day that she submits it to the journal. When it is accepted for publication, with no changes, its status changes to that of a preprint (i.e., a preprint of a forthcoming definitive publication). When the journal has published the article, it gets another status boost, becoming a reprint. In this extreme example, the content of the document never changes; its status rises solely as a result of what happened at the place of publication.

More commonly, authors submit manuscripts to journals and are then asked to make changes requested by peer reviewers and editors, or initiate changes on their own. In the social sciences, where many of the most prestigious journals accept fewer than 20% of the articles submitted for review, many authors submit their rejected articles to other journals; this practice is not uncommon in the natural sciences as well. Of course, some articles are never accepted for publication. These articles do not merit the label preprint at any stage before there is a clear relationship to an article that will be accepted for definitive publication in a conference proceedings, journal, or book. As an article travels through a peer-review process, value is added to it by the editorial work that can lead to major or minor changes, as well as by the "peer-reviewed" status bestowed upon it by the conference or journal.

The Oxford English Dictionary (2nd edition [electronic version], 1989) defines a preprint as "something printed in advance; a portion of a work printed and issued before the publication of the whole." Unfortunately, physicists have casually used the term preprint to refer to research manuscripts whose publication status is similar to that of articles called research manuscripts, working papers, or technical reports in other fields, before they have been submitted for and accepted for publication. For example, according to Physical Review Letters' official description, "Recently, fewer than 40% of submitted papers have been finally accepted for publication in Physical Review Letters" (Physical Review Letters Policies and Procedures, 1996).

The “PREPRINT Network” at Oak Ridge National Laboratories defines the documents that it helps readers to obtain, in these terms:

preprints, or 'e-prints,' are manuscripts that have not yet been published, but may have been reviewed and accepted; submitted for publication; or intended for publication and being circulated for comment. (***cite)
The PREPRINT Network is a valuable service in the physical sciences, but its definition of preprint is so elastic that it can refer to any manuscript, even one that is posted only on an author's personal Web site and never subsequently published anywhere else.
This misuse of "preprint" is also confusing with respect to review: "preprint" implies that a manuscript has been sufficiently revised and edited to be acceptable for publication "as is" in a journal.