Go on reading the following pages, provided you feel you have enough time for it.
"I would love to ... but I have no time to read the documentation, to train myself or to attend a training class ... and sorry, not even to read all what you have written on the subject here... couldn't you be more synthetic?".
Got it: you seem to belong to the «There is never enough time to do it well, but there's always time to do it again» category of people.
Simple, all-intuitive software often accomplishes just trivial tasks. That, at least, is my limited experience.
If you do not have time, just leave it, and do not read any further: it would be another waste of time. I have no pills, sorry.
The use of such a personal bibliography management product is likely to be an investment for at least a few years, if not for your professional lifetime, and the money spent on purchasing such a tool should be regarded as an investment too. But money can always be a problem, and it may steer your choice towards free or extra-cheap products. Do check prices and purchasing conditions, then.
Web-based products and site licenses can make the product 'free' for members of the licensed community (academic/school staff, students ...) but not for the institution itself, which will have to pay a fee that is probably heavy, globally speaking, thus offering a tool and a service to all its registered or affiliated members: check this possibility with your institution's dedicated staff.
Increasingly, free and/or open-source software packages, desktop or web based, are on offer; social cataloging and sharing bibliographies over the Internet have also become a viable opportunity (Connotea, Bibsonomy, CiteULike ...).
Are you ready to read everything in English?
Though you may find French, German and Czech products, they are rare examples in an English-speaking universe: web pages, menu options, help, manuals, e-mail support: all in English.
What kind of OS (operating system) and machine do you use? Mac OS® (Classic?), Windows® (which version?), Linux, Unix, cross-platform ...
The large majority of BMS are MS-Windows® applications, but there are also a few Mac application programs.
A growing number of open-source, often cross-platform, applications have started populating the scene of bibliographic software; they have often been conceived to work under Unix/Linux (see Open standards and software for bibliographies and cataloging).
Increasingly, entirely web-based BMS are coming to the fore (such as RefWorks, EndNote Web).
Do you need to work in a network with a real network functionality?
If so, simultaneous write access and record locking, plus perhaps different authorization levels, are certainly a must. As not many products offer this full functionality, real network operability is likely to be a key factor.
What kind of user are you?
"Well, I want a stable and efficient product..."
Sure, and we agree that being stable is even more important than being "brilliant" or over-rich in features.
But if you had to choose between "fast and canned" and "sophisticated and elaborate", what would you prefer?
This choice involves critical consequences.
The second type of user will certainly accept a steeper learning curve.
He will be willing to read the full documentation.
He will enjoy flexibility much more than ready-made solutions.
He will love modifying the product (add and change record types, fields, term lists..., not to mention citation styles and conversion filters).
The first type of user will appreciate the opposite.
Needless to say we do not have just two types of users: try to locate yourself in the range.
The more you modify or customize the delivered product, as far as document types and their fields are concerned, the more you tend to isolate yourself from the context. These software products are made and shipped with default, proprietary structures: record type definitions, fields, output styles and import filters, which all tend to work together harmoniously.
For example, if I add, modify or suppress a field in a workform (e.g. in "book chapter" I do not expect to find "Place of publication" but I want "ISBN"), I cannot expect the system to know all that and adjust the filter while importing from an external bibliographic database; rather, I will have to intervene and adapt the filter myself. Ditto for output styles: e.g. Chicago or Vancouver will first behave according to the default database structure as designed by the software developer, and must be adapted to comply with my modifications.
So a product that is fairly rigid, or less customizable than others, is normally easier to use.
If you do not need to modify the shipped database structure you have a much quieter life, as styles and filters will work relying on the document types and fields configuration designed by the developer.
But the 'active' user is also likely to exhibit the skill and the stamina required to reach the shore (other users, standard output formats ...) from which he is temporarily distant.
The trend of the market and of the leading products is to offer simpler, more limited, ready-made products. Publishers believe that the large majority of users prefer a large supply of canned solutions to the tools to craft their own (this is especially the case with import filters and output styles).
Fields should all have variable length, or in any case be 'very' hospitable: we are mainly dealing with text strings whose length cannot be predicted once and for all. Allowing up to 40 characters for the Publisher field is nonsense: we simply do not know in advance; we need free space.
At least some of the fields should be able to host multiple values with no practical limitation: multiple authors, multiple keywords (i.e. subject headings). These multiple entities are not simply words: "water pollution" is one keyword. Multiple values have to be recognized and handled as such when building lists, indexes and sort sequences.
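To make this concrete, here is a minimal sketch (in Python, with hypothetical field names, not any package's actual data model) of what 'hospitable' fields and genuinely multi-value fields imply for the underlying record structure: strings of unpredictable length, and lists whose occurrences stay distinct when indexes and sorted lists are built.

```python
# A minimal sketch: variable-length strings and list-valued fields, so that
# "water pollution" stays one keyword and every author is a distinct,
# indexable occurrence.
record = {
    "ref_type": "journal_article",
    "title": "Nitrate levels in alpine lakes",
    "publisher": "A publisher whose name is far longer than forty characters",  # no arbitrary limit
    "authors": ["Rossi, Maria", "Nguyen, T. H.", "Okafor, Chinedu"],
    "keywords": ["water pollution", "nitrates", "alpine lakes"],
}

# When building an index or a sorted list, each occurrence is handled on its own:
author_index = {}
for occurrence in record["authors"]:
    author_index.setdefault(occurrence, []).append(record["title"])

print(author_index["Nguyen, T. H."])   # ['Nitrate levels in alpine lakes']
```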
Are you looking for flexibility in order to add reference types and fields, and term lists?
Would you like to be able to modify fields attributes?
There are products which work with a given, unmodifiable number of reference types and fields (often including a few empty, neutral user-defined fields); others let you add new ones. The value of this difference is deeply appreciated when you manage your own project with specific requirements (e.g. dealing with a special collection, or a very detailed bibliographic project with lots of fussy fields ...).
As far as import filters and output styles are concerned there is no question: any product -I dare say- will allow you to modify the existing ones and create others.
Do you need to establish vertical and/or horizontal links between records, entries, records and notes etc.? Most of the packages we are dealing with do not offer this capability (the late Papyrus was a remarkable exception), apart from the connection between an entry (e.g. a name or a keyword) and the multiple records which contain it.
A partial replacement for record linking is what are called 'groups' or 'folders': the user can put (pointers to) records in different containers (like directories: folders), naming them as he wishes. Sometimes they can have a hierarchical structure. They never duplicate records physically but offer a virtual copy of them.
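A minimal sketch of the 'groups as pointers' idea, assuming a hypothetical in-memory layout: folders hold record identifiers, never physical copies, so an edit to a record is visible from every folder that points to it.

```python
# Folders hold pointers (record IDs), not copies; hierarchy, where offered,
# is only a matter of how the containers are organized or named.
records = {1: {"title": "Paper A"}, 2: {"title": "Paper B"}}
folders = {
    "Thesis/Chapter 2": [1, 2],
    "To read": [2],
}

records[2]["title"] = "Paper B (2nd ed.)"                   # edit the record once ...
print([records[i]["title"] for i in folders["To read"]])    # ... every folder sees it
```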
By far, most BMS are flat-file managers with no relational database structure exposed to the user. But there are also applications based on relational databases and SQL tables, like Biblioscape, Bookends and RefWorks, which are reviewed here.
Subfields are another prominent way to structure data, but normally they are lacking in BMS database structures. The only exceptions are the name fields (author, editor, secondary author etc., where a single comma marks the split into subfields) and the date field (publication year, where the software can often extract the year portion from a full date).
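The two exceptions can be illustrated with a small Python sketch; the splitting rules are the common conventions just described, not any specific package's code: a single comma separates surname from forenames, and a four-digit year is pulled out of a full date.

```python
import re

# Name field: one comma marks the subfield boundary (surname / forenames).
surname, _, forenames = "Eco, Umberto".partition(",")
print(surname.strip(), "|", forenames.strip())        # Eco | Umberto

# Date field: extract the year portion from a full date string.
match = re.search(r"\b(1[5-9]\d{2}|20\d{2})\b", "15 March 1992")
print(match.group(1) if match else None)              # 1992
```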
"When you enter look for the exit": it may sound like a spy story motto.
Prior to selecting your product, and prior to entering massive amounts of data, check the export facility and the available formats; test the export of all reference types and fields (if possible with any style attribute), default and custom. Open the export file and take a careful look at it; check how repeatable fields (namely authors and keywords) are handled. It is important that you can also export a code for the reference (document) type, and that the occurrences of any multi-value field are properly separated.
All this will turn out to be crucial whenever you have to give your data to somebody else using a different system, and whenever you decide to switch to another software package. The latter is very likely to happen sooner or later: products come and go, whereas data are, and have to be, more stable; in general (not only in this domain), data is much more important than the package you use to manipulate it.
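To make the checks listed above concrete, here is a hedged sketch of the kind of tagged export worth looking for: a code for the reference (document) type, and one line per occurrence of each multi-value field. The RIS-like tag labels (TY, AU, TI, KW, ER) and the field names are illustrative assumptions, not the format of any particular package.

```python
# Write one record in a simple tagged format: repeatable fields become repeated
# tags, and the document type travels with the data as an explicit code.
def export_tagged(record):
    lines = ["TY  - " + record["ref_type"]]
    for author in record["authors"]:          # one tag per occurrence
        lines.append("AU  - " + author)
    lines.append("TI  - " + record["title"])
    for kw in record["keywords"]:
        lines.append("KW  - " + kw)
    lines.append("ER  - ")                    # end-of-record marker
    return "\n".join(lines)

print(export_tagged({
    "ref_type": "CHAP",
    "authors": ["Adam, D. M.", "Knoth, W. H."],
    "title": "A sample chapter",
    "keywords": ["boron neutron capture therapy"],
}))
```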
Export should be available in delimited, tabbed and tagged formats. Sometimes the most efficient way to obtain a reliable export is to design it as an output style, having the target system's format in mind, and print the data to a file. It takes more time and attention than using a direct export.
That's also why it is important to be able to control the output properly.
Conversely, import is very important. Moreover, import is the real way to double-check the export itself: export and import work together and each can help verify the other. But when you start using a package you do not yet know which the receiving package will be, whereas you do know what you have in your hands; therefore you should start by checking export.
These are the basic required functions.
If you need to enter data in languages rich in diacritics, extended character sets become essential, even more so if you use different scripts: in that case, consider Unicode compliance.
Other features are much less essential.
Importing data is a very, very important function, and by far the most difficult and delicate procedure within this kind of software.
Here flexibility is an asset: if you are not able to shape the shipped filters, you must rely on them completely.
First of all, flexibility means "IF ... THEN", i.e. being able to set conditions. Conditions can also be set when "IF ... THEN" is not explicitly stated, because the selection of predesigned subordinate options often implies conditional commands. Predesigned subordinate options are almost always present in BMS: something like "if there are more than 3 authors, display only the first and add [et al.]" is more than one simple "IF ... THEN", but it has already been handled by the programmers; the code is hidden and you only choose one or more options.
Parsing is another example of imposing conditions. Parsing means fragmenting one field and sending its chopped contents to different fields. It is important, and often used with the 'source' field for journal title, volume, issue, date and pages.
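A hedged sketch of both kinds of condition, assuming hypothetical field layouts: a predesigned subordinate option ("more than 3 authors, keep the first and add et al.") and the parsing of a 'source' string into journal, volume, issue, year and pages.

```python
import re

# A hidden "IF ... THEN": the conditional rule behind the usual checkbox option.
def short_author_list(authors, limit=3):
    return authors[0] + " et al." if len(authors) > limit else ", ".join(authors)

# Parsing: chop one 'source' field into several target fields.
# The expected shape of the string is an assumption made for this sketch.
def parse_source(source):
    m = re.match(r"(?P<journal>.+?)\s+(?P<volume>\d+)\((?P<issue>\d+)\),\s*"
                 r"(?P<year>\d{4}),\s*(?P<pages>[\d-]+)", source)
    return m.groupdict() if m else {"journal": source}

print(short_author_list(["Alam, F.", "Soloway, A. H.", "Barth, R. F.", "Mafune, N."]))
print(parse_source("J. Med. Chem. 32(9), 1989, 2326-30"))
```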
The ability to replace and add text is also a useful feature.
It is crucial to be able to handle several formats (tabbed, delimited, tagged in various ways).
Relevant issues: the varying structure and position of field tags, the occurrence separator for multi-value fields, and wrapping lines.
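The same issues can be shown in a small import sketch (the two-letter tags and the "; " occurrence separator are assumptions for the example): tagged lines are recognized by position, a line without a tag is treated as the continuation of a wrapped field, and a packed multi-value field is split on its separator.

```python
raw = """AU  - Alam, F.
AU  - Soloway, A. H.
TI  - Boron Neutron Capture Therapy: Linkage of a Boronated
      Macromolecule to Monoclonal Antibodies
KW  - boron neutron capture therapy; monoclonal antibodies"""

record, last_tag = {}, None
for line in raw.splitlines():
    if len(line) > 6 and line[2:6] == "  - ":          # a tagged line
        last_tag, value = line[:2], line[6:].strip()
        record.setdefault(last_tag, []).append(value)
    elif last_tag:                                      # wrapped continuation line
        record[last_tag][-1] += " " + line.strip()

record["KW"] = record["KW"][0].split("; ")              # occurrence separator
print(record["AU"])   # two occurrences, kept distinct
print(record["TI"])   # the wrapped title, rejoined
```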
None of the reviewed packages is able to import records in the ISO 2709 format, including MARC bibliographic records, but nowadays MARC records are most often displayed, captured and converted in a tagged (labelled) format. One way to get them is via the Z39.50 search-and-retrieve protocol. The latest trend consists in letting the package catch bibliographic data displayed on the web and swallow it directly into its own database, by direct "grabbing", "copy to" or drag-and-drop, without having to save or export a file and then import it in separate steps.
The general tendency is to offer the user hundreds of ready-made filters. I would never trust any of them without double-checking what they do with the data I am interested in, which often means working with very few records printed out on paper, a pencil and a certain amount of attention: genius is not required. For this reason it is essential that import filters are customizable.
See also Testing.
Searching and retrieving is the main way of selecting part of the database. As one seldom manipulates the whole database at the same time, search is the preferred approach to the data.
Presently, library OPACs and BMS seem to consider a window-structured, guided search interface the most appropriate tool (or rather, the single Google-like box is gaining audience). No symbols, no explicit logic is required from the user. The user is offered several stacked boxes in which to enter terms, and combo boxes to select the boolean operators connecting them: that's it. This seems simple and efficient, but it is also deceptive. It makes searching easier and finding faltering.
We still think and speak with clauses and pauses. (A OR B) AND (C OR D) is not such a complicated query: I look for «(children OR adolescents) AND (death OR suicide)». With the mainstream dumb window-structured search interface, such a basic query statement becomes impossible to formulate, because parentheses are not foreseen. The algorithm that governs the syntax and the priority among boolean operators is hidden, and the expression is commonly transformed into «children OR (adolescents AND death) OR suicide» or into «((children OR adolescents) AND death) OR suicide». In the first case priority is given by the type of operator; in the latter, priority runs top down. Neither gives the appropriate response to the abovementioned query.
The alternative is not necessarily full SQL (Structured Query Language); it is enough that one can make use of parentheses, as in the first statement above. Curiously enough, it seems that we are going from (too) simple guided query interfaces directly to SQL queries, losing the ability to write search queries like sentences.
Any-field searching (full-text indexing), truncation and phrase queries are essential.
Accented letters (e é è ê) should not make any difference, and neither should case (upper = lower): as a matter of fact only the latter is a de facto standard, whereas the former is very variable and deceiving.
Searching within the results ('refine') and saving query expressions are only a bit less essential: to be deeply appreciated.
Browsable term lists pointing directly to the records are really useful.
When indexes are rudimentary, based on a 1:1 correspondence with the fields that originate them, you end up with the authors field generating one index, translators another, directors still another, and so on with editors etc., and the same for titles (article, host document, journal, series, translated title ...): the outcome is deceiving from the point of view of field-based searching.
You know that "Umberto Eco" is an author, somebody "who writes", and when you search for him in a database you mainly want to retrieve the records where he has an intellectual responsibility, no matter whether he acted as translator, editor or author: that is something you will investigate later. You certainly accept and expect the difference between Eco as an author and Eco as the subject of an essay. But it is at least irritating to be forced to make three or four different searches, or a dumb full-text (any field) search, to retrieve all the records where Umberto Eco was involved as a writer.
Despite the traditional indexing approach of library catalogs, several BMS offer this kind of poor 1:1 field-based indexing. The alternative is field clustering, i.e. one index for all the 'names' fields (authors, contributors, editors, translators, directors etc.), better if flexible, whereby you can decide which fields to link to a given index. This is already a reality in several packages: judgement is involved more than engineering.
Other aspects, like soundex, fuzzy matching and relevance-ranking operators, are a bit finical in this context.
Z39.50 searching of remote databases is important to retrieve and import data.
If the BMS implements the OpenURL protocol, you will be able to send data from your database records to the relevant, often 'local', OpenURL link resolver in order to ask for specific services, such as the full-text article, document delivery, searching the local OPAC for a physical copy ... etc.
Here too, the trend is to offer the user hundreds of ready-made styles rather than a powerful and rich formatting language, and once again a powerful language is one that can handle conditions (at least: "IF something is absent THEN do that"). A basic language is always incorporated into these packages, and citation styles can be modified or created from scratch, but they tend to assure only a minimal performance.
It is fashionable to state that a package can produce HTML and XML output: presently the quality and complexity of such HTML or XML tagging can be truly deceiving.
"Subject bibliography" traditionally refers to a list which is not only sorted by one or more nested criteria, but a list where the sorting key (i.e. the criteria) is also a heading out of context. It is routine in large bibliographies and catalogs where the key used to sort the items is clearly displayed on top:
Reference List:

Adam, D. M. (2)
   Alam, F., A. H. Soloway, R. F. Barth, N. Mafune, D. M. Adam and W. H. Knoth. "Boron Neutron Capture Therapy: Linkage of a Boronated Macromolecule to Monoclonal Antibodies Directed Against Tumor Associated Antigens." J. Med. Chem. 32 (1989): 2326-30.
   Tjarks, W., A. K. M. Anisuzzaman, L. Liu, S. H. Soloway, R. F. Barth, D. J. Perkins and D. M. Adam. "Synthesis and in Vitro Evaluation of Boronated Uridine and Glucose Derivatives for Boron Neutron Capture Therapy." J. Med. Chem. 35, no. 9 (1992): 16228-786.
Here the heading (the "subject" of the bibliographic list) is the author name, Adam, D. M., followed by a counter for the references in which it is recorded: "(2)".
Two nested levels of sorting headings would be a plus, but are normally lacking. If you can replace the full bibliographic references displayed under the headings with a reference to, for example, the record number, you get an index: very useful.
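A hedged sketch of the mechanism, with made-up data: the sort key is pulled out as a heading with a counter, and printing record numbers instead of full citations under each heading turns the same layout into an index.

```python
from collections import defaultdict

# (heading, record number, formatted citation)
refs = [
    ("Adam, D. M.",  101, "Alam, F. et al. (1989) ..."),
    ("Adam, D. M.",  102, "Tjarks, W. et al. (1992) ..."),
    ("Barth, R. F.", 101, "Alam, F. et al. (1989) ..."),
]

by_heading = defaultdict(list)
for heading, rec_no, citation in refs:
    by_heading[heading].append((rec_no, citation))

for heading in sorted(by_heading):
    print(f"{heading} ({len(by_heading[heading])})")
    for rec_no, citation in by_heading[heading]:
        print("   ", citation)        # print rec_no instead to obtain an index
```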
Sorting records is essential to handling data; it comes just after searching. Sorting records by more than one nested criterion (first by date; if the date is the same, then by authors; under the same author, by main title, etc.) is very important (and it is another example of a 'hidden' "IF ... THEN" clause).
Sorting implies, and can often hide, other important factors regarding the way characters are handled: a main heading missing in some records, the length of the sort key, case, digits, leading articles and non-filing characters, letter sequence according to the selected language (Spanish sorts differently from Italian), letters with accents and diacritics ...
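A minimal sketch of nested sort criteria, including one of those hidden "IF ... THEN" clauses: sort by year, then author, then title, skipping a leading article as a non-filing zone (the article list is an assumption for English-language titles only).

```python
ARTICLES = ("the ", "a ", "an ")

def filing_title(title):
    t = title.lower()
    for article in ARTICLES:
        if t.startswith(article):     # IF the title starts with an article THEN skip it
            return t[len(article):]
    return t

records = [
    {"year": 1992, "author": "Tjarks, W.", "title": "The Synthesis of ..."},
    {"year": 1989, "author": "Alam, F.",   "title": "Boron Neutron Capture Therapy"},
    {"year": 1992, "author": "Tjarks, W.", "title": "A Second Paper"},
]

records.sort(key=lambda r: (r["year"], r["author"], filing_title(r["title"])))
print([r["title"] for r in records])
```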
Manuscript formatting means placing markers ("placeholders") in the document you are typing in a word processor and formatting the paper by exploiting those markers.
Markers are something like "(Alam 1992)", which refers to the relevant record, by Alam and published in 1992, contained in your database. When you eventually format the paper, that marker can be transformed, within the text or a footnote, into something like "(Alam, F. et al., 1992)", "(Alam et al., Boron Neutron Capture Therapy)" or "(1)", while the full reference is printed in the bibliography (reference list) at the end of the paper.
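A hedged sketch of the whole cycle, with an invented marker syntax and a two-record toy database: "(Author Year)" markers are matched against the database, replaced by numbered citations, and the full references are appended as the final list. Real packages use their own marker syntax and far richer style logic.

```python
import re

database = {
    ("Alam", "1992"): "Alam, F. et al. (1992). Boron Neutron Capture Therapy ...",
    ("Eco",  "1979"): "Eco, U. (1979). Lector in fabula ...",
}

draft = "Earlier work (Alam 1992) contrasts with semiotic readings (Eco 1979)."

cited = []                                   # order of first appearance
def to_citation(match):
    key = (match.group(1), match.group(2))
    if key not in cited:
        cited.append(key)
    return f"[{cited.index(key) + 1}]"       # numeric style, as one possible output

formatted = re.sub(r"\((\w+) (\d{4})\)", to_citation, draft)
bibliography = "\n".join(f"{i + 1}. {database[k]}" for i, k in enumerate(cited))

print(formatted)
print("\nReferences:\n" + bibliography)
```

Switching to an author-date or footnote style would mean changing only the citation and bibliography templates, never the draft itself: that is the whole point of the markers.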
Two great advantages follow: neither the citations within the text nor the final reference list ever needs to be reformatted by hand.
We may even say that this was the very reason for developing this kind of software in the early eighties, software that, consequently, we can still define as bibliography formatting software. Developers (like Prof. Victor Rosenberg, author of ProCite) realized that a scholar might submit the same journal article to more than one journal's editorial board at the same time. It is almost the rule that different journals have their own citation policies and styles. Scholars would appreciate not having to change the output format of the references manually, either within the text or in the final reference list. Packages like these streamline the process: once the triangle works (1. references are properly stored in the database, 2. the output style is tuned, and 3. the markers are correct), the writer does not need to worry about changing styles or formatting citations and the bibliography.
Manuscript formatting is one main function that clearly distinguishes this family of software packages from others and serves to identify it: pure personal information managers or generic databases, not to mention word processors or spreadsheets, lack it completely.
This is typically a procedure where the details are countless, and incessantly increasing too, so we are not going to mention any of them here (for a detailed analysis see Manuscript formatting). It is notable that the market trend is to let the user stay within the document and the word processor, somehow 'calling' from there the references stored in the database, inserting the markers and formatting the same document. This requires the developers to write add-in code in order to interact with a certain number of word processors. This is not always feasible; in that case markers are placed via a command given within the BMS, or manually, and the document is saved in one format (e.g. RTF) as a static file: this file can then be read by the BMS application and formatted according to the selected citation style (often a formatted copy of it is eventually produced).
The whole procedure is so important that producers invest a lot of effort in improving its features at every release, while users often choose this kind of software product precisely because this function is available. It is therefore likely to be an important factor in selecting a package.
By far the tightest integration is with MS Word; regrettably, in the Windows world this is becoming the standard (the Mac environment is more open): our hope is that compatibility with and integration into OpenOffice Writer becomes a common feature. Besides that, it is important that a BMS keeps the capability to scan a static file containing the paper (.txt, .rtf, .odt, .doc, .docx, .wpd ...) and to produce the final bibliography within it.
Web publishing
This has increasingly become quite an important issue due to the evolution and diffusion of the web. Users often want to be able to publish their own database, and make it dynamically searchable, if not modifiable, in a seamless way, without having to convert data from their BMS into another web database management system. You will notice that few packages offer this function; it remains to be seen how the picture will change in the future.
Web based: no more client software
What about going completely online? Webmail, bookmarking and citing (Connotea) and mass storage of private data are just some of the opportunities already offered on the Internet to individuals and groups, freeing them from the need to install, use and update client software. Likewise, in bibliography management, new products offer centralized, remote server-side storage and management of data, with the software used purely through a web browser, under individual or institutional licensing, upgrades included. The user does not install anything on his machine; he uses the software on a remote server via a web browser and stores the core of his data there as well. Nonetheless, he has the possibility to download his own data and to format manuscripts created with a text processor on his own local machine. Licensing is based on an annual subscription.
A web-based system is different from merely publishing a database on the web, inasmuch as your database may well be visible only to you.
I myself still consider documentation important. Very important. I hate learning things in this field of knowledge by trial and error, or just by using the mouse ... by serendipity, or worse, by "intuition".
I just dream of the possibility of interviewing skilled -and sincere- personnel in front of a running machine: that would probably let me analyze a package in just 7 hours, whereas it takes me at least 12 full days to read about and test the basic functions of one product. Lacking this chance, I always read the manual of the package I analyze from the first to the last printed page. You don't need to tell me that it's boring. But it's instructive, even when the manual does not properly describe what the software is able to do.
Manuals and online help vary greatly in this respect. Additionally, the information can be scattered: FAQs, web pages on the publisher's site, tutorials, help, reference manual ... One solid authoritative source: what a dream. How often it is disorganized, overlapping, repetitive. Documentation is often the last thing publishers worry about. It should always be kept up to date, and it should be revised by two kinds of skilled personnel: computer people and more user-oriented people. This is not always the case.
Apart from documentation, it is useful to be able to rely on skilled human expertise.
Usually producers offer assistance by e-mail, telephone or bulletin board, either paid or free, often on an annual-fee basis. You'd better check carefully.
It is unlikely that you will ever contact the 'techies' directly, the computer people who know the 'belly' of the software engine: they do not have the same knowledge as the teaching, training and marketing personnel, and vice versa, unless the firm is small, a kind of 'one-man firm' (by the way: an excellent landscape, you'll get by far the best responses, provided your correspondent is willing to talk and share information on the subject).
Discussion lists and e-mail exchanges with remote, never-met individuals, if not colleagues, can also help a lot.
"What are you going to use this software for"?
This could be the first question to ask yourself.
For example, let's take the two most prominent and somewhat opposite goals: will you be using the software mainly to publish papers or to manage a database? In the first case the manuscript formatting function is likely to be the, or one of the, most important features for you; in the second, searching and sorting will matter more.
The programs analyzed here were of course also tested, but what is presented is a description rather than a technical test. I try to detect and evaluate the functions and procedures that are available, and not to the same extent the way those very same functions and procedures behave with different and large amounts of data, with specific citation styles and data sources.
While, for example, testing the input, sorting and searching capabilities of a given package can be fast and thorough, I dare say that any test of two crucial issues like output styles and import filters cannot but be partial and even ephemeral.
This statement is grounded on four issues:
Citation styles: a single package easily offers 100, 200 ... styles (EndNote more than 2,300; RefWorks and Reference Manager are close to 1,000).
Each of them covers several document types, often 20 to 30 of them: article in a journal, chapter in a book, book (each: long/short form), conference proceedings, thesis, technical report, audiovisual, computer software, artwork, Internet resource, legal materials ... etc.
Furthermore, scientific journals, publishers, scholarly institutions and content providers change their citation styles without warning the BMS producers, for example because they want to deal with a "new" kind of document type (blog, electronic thesis, RSS feed) or with already existing ones that they did not consider before, like ancient manuscripts, letters, wills.
Thus, the producer of a BMS package that includes just 150 styles, where each of them covers -let's say- only 7 document types, has to write down specifications for some 1,050 combinations, and monitor them over the years.
And what about the figures for packages that offer citation styles by the thousand, theoretically across 20 or 30 different reference types? A package with 38 defined reference types and 2,300 styles should, theoretically, provide the user with 87,400 specifications (of course there will be fewer, since many document types are not taken into account by certain citation standards; yet even halving that figure to 43,700, and halving it again to 21,850, still leaves appalling numbers).
Besides, these packages usually consider not just one citation form for each document type within each style but up to three: the complete one for the final bibliography, the short in-text one (either "Adam 2006" or "1"), and the discursive one in foot/endnotes, where the first occurrence of a citation sometimes differs from the following ones.
Can you imagine how many full time persons should be devoted to such a task to assure accurate monitoring and format specifications writing?
The same, and even worse, takes place as far as import filters are concerned.
Content providers tend to merge databases and to incorporate different existing sources, sometimes changing or creating data formats on their own, without rules, without standards, without documentation: again, BMS producers have to discover them, interpret them and monitor them endlessly.
Again, can you imagine how many full time persons should be devoted to such a task to assure accurate monitoring and conversion specifications writing?
Testing: Despite that, I usually do a few of these tests for each package, and I can assure you that the outcomes are fairly disappointing, even as far as 'famous' packages, 'famous' styles and 'famous' data sources are concerned: errors, lack of precision, skidding ... are abundant. I never trust shipped citation styles and import filters "as is" whenever I need to rely seriously on the bibliographic data I am dealing with. I always double-check them using printouts, a pencil and a very few selected bibliographic records.
But any statement about whether a given package works or not in this respect would be extremely partial.
If we wanted to test carefully just one style, a famous one like Chicago A-B, we would have to check, against the manual, every document type in each of its citation forms (final bibliography, in-text, notes).
Once done with one (1) software package we would be expected to repeat the same test across all the other packages that are considered here. At the end we would have tested one (1) output style out of the hundreds that are taken into
account by these software packages.
A very similar test should be done with import filters, also considering that here we would not even have an equivalent standard like the Chicago Manual of Style: record formats are proprietary by definition.
Thorough testing? Sorry, I simply consider it not worth the effort and, in any case, insufficient to give a grounded evaluation of the output and import capabilities of any package.
Lack of standardization: The bibliographic world is much less standardized than the library world: there are no agreed standards for input, data coding, output or exchange formats.
Citation is ruled by dozens, hundreds of citing styles.
Import depends on data formats, and here there simply are no rules at all.
Any content provider can create or modify its own format, and often includes a large variety of them in its huge databank.
Here there are no common, sometimes worldwide, standards like AACR2, ISBD, MARC and ISO 2709. The "ANSI/NISO Z39.80 Standard Format for Downloading Records", to my knowledge, never reached final approval.
But I do not need to be reminded that the library world, notwithstanding all its long-established standards, has its own communication, exchange and retrospective conversion problems as well.
From the beginning I have received messages from readers suggesting that I add to the template a clear and concise assessment of the analyzed products. But I have chosen not to depart from the "analytic evaluation" approach with which I started, and have stuck to my guns.
Interested readers can read through the sections and the cells of the template and will then have many elements with which to form their own judgment. This is much better than supplying a "concise assessment". Moreover, users often do not ask for generic statements, quite the contrary: they inquire about specific points, finical perhaps but very relevant to their activity: "Will it let me disambiguate 'same author, same year' citations, or will it do so automatically in its own way?" "Does it use small caps?"
One goal of the template is to show that if you stay on the surface, asking only general questions, you end up finding that all the products look almost identical.
Another goal of the template is to serve as an empty grid that any user can take and fill in with the answers for products that have not been analyzed here.
Others, many, have asked me: "I wonder why you did not evaluate X ... or Y ...". The answer is simple and may sound dim: simply because it takes so much work to do it and to keep it updated.
I think that the "80/20 rule" could easily be applied here: I guess I could analyze 80% of a given software package in 20% of the time it takes me to analyze, double-check and write up the whole 100%.
Nevertheless, I admit I have had to cope with a serious reduction of the resources available to analyze packages to this depth, and have consequently reduced the 'evaluation items': since the March 2009 edition their number has been decreased.
A software program that is no longer developed and supported is said to be simply dead, even if it still works. The major problem is not the engine, which keeps running by itself, but its interaction with the other necessary layers of software: operating system, word processors, character encoding ... That is why such a software application is said to be dead.
Somebody calls it "history".
Is history synonymous with death?
From history we can learn, I dare say: especially if recent and part of our own experience.
We can learn, for example, that the present is not necessarily better than the past. Market supremacy does not always imply technological superiority. We may observe that valid, outstanding technology has simply been burnt out by more powerful financial and commercial interests. We may observe that noble products no longer developed, sometimes still working, exhibit features far superior to those of living, on-the-edge, continuously updated programs.
In our small context I can mention the Papyrus bibliography management software®, still working in its own operating environment and freely downloadable from the Internet. Papyrus, for example, had unrivalled features -to my limited knowledge- in terms of thesaurus management and of linking entries in lists and records in a database: features still lacking in living and leading products.
Another example is ProCite, whose development was abruptly interrupted when another company took it over from the owner/creator's own company in 1999. It now lives in a family with at least two similar brothers, EndNote and Reference Manager, its source code laid open to the firm that owns it. Over these past 10 years ProCite has literally given parts of its body to one or the other of the two brothers: subject bibliography, Z39.50 searching, citation style design, the finical edit-window configuration, virtual record grouping ... yet ProCite still has features lacking in one or both of the other brothers: for example, a rich and flexible search interface, complex query expressions, and integrated, powerful, clear list browsing.
Papyrus is no longer part of this review because it has been declared discontinued, but ProCite deserves to stay where it is and helps our understanding.
What do I especially dislike and yet find in some BMS?
These kinds of software applications are called by many different names; 'personal' and, obviously, 'software' are qualifiers that can always be added to those denominations, giving a framework like:
personal + [ bibliography | citation | literature | reference | research information ] + management + software
I have tested these programs with:
Although the relevant tables are no longer updated, over the years I have also reviewed other BFS packages:
Biblioscape ® Windows v. 6 Professional edition - (at: Personal Bibliography Management Software)
Bookends ® Mac OS X: v. 9.2.1 - (at: Personal Bibliography Management Software)
Citation® Windows: v. 9 - (at: Personal Bibliography Management Software)
Library Master ® Windows: v. 4.15 - (at: Personal Bibliography Management Software)
Papyrus ® 5 Mac OS > 7.0: v. 8 - (at: Bibliography Formatting Software: An Evaluation Template)