Neo-Luddites will hate this machine

The steam-powered printing press was invented around 200 years ago. The machine made it possible to reduce the cost of producing books, sped up printing, and spread knowledge through society. It also allowed new kinds of books to be printed, like comics and novels. The result was information overload: people became overwhelmed by too much information and started thinking about the wrong topics.

The following video gives a detailed look at timecode 1:00.

A new issue of an academic journal is printed at high speed. And in the eyes of the Neo-Luddites, everything the people will read there is blasphemy.

The first steam-powered printing press was introduced in London in 1811 by Koenig and Bauer. It was essentially an improved version of the well-known Gutenberg printing press, with steam power added. The result was much lower costs and higher productivity.

As with Gutenberg's machine 400 years earlier, the Koenig & Bauer press was at first ignored by the public. People recognized how powerful the device would be, and for exactly that reason it was not used.


Primary goal of libraries

In the debate around a modern utopia of Open Access, library employees sometimes forget what their primary goal is and always was. The task of a library was never to bring knowledge to the people or to drive technological change forward; that is done by internet companies and cable providers like AT&T. What the primary goal of a library is becomes clear with a time travel back to the 1980s. Let us consult some library tutorials of that time.

The main challenge in the 1980s was to create a so-called microfiche catalog. The situation was that around a university many different kinds of libraries and sub-libraries existed. They were created chaotically: somebody bought a bookshelf, put some books in it, and that was the library. The task in the 1980s was to create a systematic catalog of these books. Library workers went around, noted the available books on paper, and then created the overall catalog in microfiche format. The aim was that the microfiche catalog contained all the books that existed in reality, and vice versa. If, for example, a book got lost but was still listed in the microfiche catalog, that counted as a bug.
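The consistency requirement above can be sketched in a few lines of Python. This is a hypothetical model; the function name and the sample titles are illustrative, not part of any real library system:

```python
# Minimal sketch: checking a catalog against the physical shelf inventory.

def catalog_bugs(catalog, shelf):
    """Return (ghost, uncataloged): entries listed only in the catalog
    (lost books) and books only on the shelf (never cataloged)."""
    ghost = sorted(set(catalog) - set(shelf))        # listed, but physically missing
    uncataloged = sorted(set(shelf) - set(catalog))  # on the shelf, but invisible to users
    return ghost, uncataloged

catalog = {"Gutenberg Bible", "Unix FAQ", "Nature 1984"}
shelf = {"Unix FAQ", "Nature 1984", "Comics Vol. 1"}
print(catalog_bugs(catalog, shelf))
# → (['Gutenberg Bible'], ['Comics Vol. 1'])
```

Both result lists had to be empty for the 1980s librarian to consider the catalog correct; anything else was, in the text's words, a bug.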

Core library work is focused on creating and updating the microfiche catalog. The library does not have the task of printing new books, teaching students math, or installing fast internet connections. Its task is to keep the catalog up to date and make the books available.

The microfiche catalog was not the only medium; other options were the older card catalog and, in more recent times, the electronic OPAC catalog. Apart from the catalog, the library has no other obligations. If the microfiche catalog is up to date, the library works fine.

It is surprising that some people read too many obligations into the library. They see the building as some kind of treasure chamber of knowledge. The reason is that they do not understand what a library is and what it is not. Without book publishers, printing houses, authors, and students, a library is nothing. It is not the central hub of knowledge; it is only the catalog. Consider what a badly run library looks like: it is a library without a microfiche catalog. There are some books on the shelf, but nobody knows exactly which books are available, so it is not a structured archive but a mess.

From today's perspective it looks funny how complicated it was in the past to create a centralized book catalog. Today, the library website has such a catalog available. But in former times there was no website, because there was no internet. Without a centralized electronic database, each university library had to make its own catalog, which was a very demanding task. In the 1980s it was the exception for a catalog to be up to date. In most cases only part of the archive was visible in the catalog, and there was a time lag as well: a sub-library might have bought a new book two years earlier, and because it was never cataloged in the microfiche catalog, nobody was aware of it.

Most libraries from the late 1970s until the early 1990s were fully occupied with updating the catalog. It was their main business, and the goal of every library CEO was to have the most impressive microfiche catalog.

With the early Internet and WWW in the mid-1990s, libraries lost their monopoly on creating the central catalog. New resources on the World Wide Web never made it into the old microfiche catalogs. Instead, the libraries focused only on the books that were already there. Today the library website is very accurate, but it contains only physical books. Normal webpages, online forums, PDF papers, and Unix FAQs are not part of the index. I doubt the catalog even knows that Wikipedia is a website … perhaps it thinks it is a book.

History of interlibrary loan

Even before the internet it was possible to obtain books and academic papers from around the world. We are talking about the 1980s and 1990s. In that era the journey started at the microfiche catalog. Not for the full text; the microfiche stored only the catalog. Suppose a library user wants to read a paper that is not available in his own library. The first thing he needs is the exact bibliographic record. The normal paper-based library catalog will not help him, because it references only the books available in the library itself. What he needs is a much broader library catalog containing bibliographic records from international libraries. Only microfiche was able to handle that amount of information in that time period. The user had to browse manually through the microfiche catalog until he found the record. It had to be written down, including author, title, year, signature, and library, and then the interlibrary loan could be initiated. That means a formal request was sent to the holding library to deliver the full text of the paper.
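The workflow above can be modeled in a short Python sketch. The record fields follow the text (author, title, year, signature, holding library); the sample signature and library names are invented for illustration:

```python
# Hypothetical model of a noted-down bibliographic record and the formal
# interlibrary loan request built from it.
from dataclasses import dataclass

@dataclass
class BibRecord:
    author: str
    title: str
    year: int
    signature: str   # shelf mark at the holding library
    library: str     # the library that actually owns the item

def ill_request(record: BibRecord, requesting_library: str) -> str:
    """Format a formal interlibrary loan request from a written-down record."""
    return (f"To: {record.library}\n"
            f"From: {requesting_library}\n"
            f"Please deliver: {record.author}, '{record.title}' "
            f"({record.year}), signature {record.signature}")

rec = BibRecord("Gibler", "Factors academic real estate authors consider",
                2000, "Z-4471", "New York Public Library")
print(ill_request(rec, "Washington University Library"))
```

The point of the model: the microfiche yields only the five metadata fields, never the text itself; everything after that is a paper form and a three-week wait.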

The process itself took a very long time, so it was used only for important literature. The search in the microfiche catalog alone took hours, and it could take around three weeks until the interlibrary loan request was answered. What happens if the user does not know exactly what he is searching for? Today we would use a full-text search; in the 1980s such technology had not been invented. What the user could do was take the search seriously and browse the microfiche catalog for many hours. Browsing meant bringing one record to the screen, reading the year, author, and title, and deciding whether or not to request an interlibrary loan for the full text. A speedup was possible if the user asked for help at the library desk. In most cases the people at the desk were familiar with the microfiche device and knew in general which records contained which information.

Today it is a bit difficult to find detailed information about what a microfiche was. Most tutorials show only microfiches containing the full text of a newspaper. But that was not the preferred usage in the 1980s. Storing full text on microfiche was advanced technology and too expensive for daily use. What was widely available in reality was a library catalog on microfiche: not the full-text information, but a microfiche version of a card catalog. In most cases the entries were sorted alphabetically by the author's last name. The user had to know the author's last name to identify the right microfiche, and could then browse to the exact bibliographic record.

In modern terminology, the library website is similar. What an internet website provides today was provided in the 1980s by the microfiche catalog, used together with interlibrary loans.

From microfiche catalogs to OPAC

In the 1980s two technological innovations arrived in the university library. The first was the microfiche catalog, the second an online OPAC system. Neither system stores full-text information; both provide access in the form of a catalog. That means the bibliographic records are stored and can be read by the users. The microfiche catalog was the pre-computer idea: store information about books and papers not in a paper catalog but in a smaller format. The main problem with microfiche is that no search is possible. If the user wants to know which books a certain author has written, he must browse manually through the items. After the microfiche catalog, the first OPAC systems were invented. They can be dated to the mid-1980s in the United States and to the early 1990s in Europe. A microfiche catalog and an OPAC are interchangeable in the sense that they work on the same principle. The user needs a first overview of a topic and searches the catalog for references. He notes down the signatures and can then get access to the books, either in the library itself or via an interlibrary loan.
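The difference between the two access modes can be sketched as follows. This is a hypothetical illustration; the record tuples are invented sample data, and real OPACs of course did far more than an author index:

```python
# Same records, two access patterns: microfiche forces a linear scan,
# an OPAC builds an index once and answers queries directly.

records = [
    ("Knuth", "The Art of Computer Programming", 1968),
    ("Gibler", "Factors academic real estate authors consider", 2000),
    ("Knuth", "Literate Programming", 1992),
]

def microfiche_browse(records, author):
    """Microfiche-style access: the user inspects every record, one by one."""
    hits = []
    for rec in records:          # turning the reel, record by record
        if rec[0] == author:
            hits.append(rec)
    return hits

def opac_index(records):
    """OPAC-style access: build an author index once, then look up directly."""
    index = {}
    for rec in records:
        index.setdefault(rec[0], []).append(rec)
    return index

print(microfiche_browse(records, "Knuth"))
print(opac_index(records).get("Knuth", []))
# both print the same two Knuth records; only the access pattern differs
```

The interchangeability claimed in the text holds at the level of results; what changed with the OPAC is that the hours of browsing collapsed into a single lookup.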

For example, a student in a library in Washington searches through the microfiche catalog and finds a book that is located in New York. That means the Washington library has a microfiche catalog storing information about books located somewhere else. Once the user has found the record, the information search stops, because he has no access to the book itself. The full text was not stored on the microfiche; he must first initiate an official interlibrary request.

Why microfiche catalogs were used in the past is simple: computers were not available, so microfiche was the only option researchers had. From today's perspective it makes no sense, because today both the bibliographic information and the full text are stored on the internet. So what is the current role of libraries? Right, there is no role. The library, like the microfiche catalog, is an invention of the past, from before the internet age. It did not survive the media revolution, which was led by Google and the telephone companies.

A huge innovation in the 1980s was the so-called “microform subject catalog”. The idea was to store the records not only by author name but also in a second catalog by topic, for example math, physics, and so on. Creating such catalogs was expensive, because the libraries had to invent a decimal classification system that maps a book title to a subject hierarchy. For the users there was the advantage that they could browse more freely through the database.
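A minimal sketch of such a second catalog, assuming a toy classification table: the two class codes below happen to match Dewey-style numbers, but the mapping and the book list are invented for illustration only.

```python
# Hypothetical subject catalog layered on top of the author records.

CLASSIFICATION = {
    "510": "Mathematics",
    "530": "Physics",
}

books = [
    ("Euclid", "Elements", "510"),
    ("Newton", "Principia", "530"),
    ("Hilbert", "Foundations of Geometry", "510"),
]

def subject_catalog(books):
    """Group (author, title) pairs under their subject heading."""
    catalog = {}
    for author, title, code in books:
        subject = CLASSIFICATION.get(code, "Unclassified")
        catalog.setdefault(subject, []).append((author, title))
    return catalog

for subject, entries in sorted(subject_catalog(books).items()):
    print(subject, entries)
```

The expensive part in the 1980s was not this grouping step but assigning the class code to each title by hand; once every book carried a code, producing the second microfiche was mechanical.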

What most librarians are not aware of is how fast the situation has changed over the years. Microform catalogs were in use from around 1975 to 1985. PC-based OPAC systems were widely used from 1985 to 1995. Internet-based bibliographic databases were used from 1995 to 2005, and since 2005 internet-based full-text access has been the standard. That means everything is done electronically: the bibliographic record, the paper itself, and the upload of new papers to the repository. The only constant since the mid-1980s has been the use of computer technology plus telephone wires. That means companies like Microsoft, IBM, Google, and AT&T define what knowledge is, how it reaches the end user, and what price tag it carries. What the university library has done since the 1980s is give up. Instead of recognizing the situation, they talk about a future they will not have. Academic knowledge will happen outside the library, because the services a library can offer (microform catalogs, printed books, archives at a geographical place) are too expensive for the needs of the future.

Modern library

The above YouTube video shows the “Library of Alexandria”, perhaps the most modern library to date. It is mainly a huge empty building that resembles an art gallery. There are some PC workstations, but nobody uses them, because people have their own smartphones and their own internet connections. There are some chairs, but they are not real chairs, because they are art too. The self-description of the library is “a place for self-learning”, but in reality a motto is missing. Self-learning in the 21st century is done over the internet, and the library is not an internet service provider, not a search engine like Google, and not a software company. So the current Library of Alexandria is the opposite of a library. It is an empty room that remains visible after technology has taken over knowledge production and distribution.

From an objective point of view, the location cannot be called a library, because it has no books (they are only available online) and no physical server hardware (cloud hardware is located at computer companies). What the location is instead is an art gallery, a museum, or a similar place without any real purpose. The term monument is also correct, because the people there are aware that 30 years ago a library stood on that spot. That means the idea of storing books in one place is history.

The interesting fact is that even after the library is no longer relevant, the tasks of searching for knowledge, building infrastructure for distributing knowledge, and guiding people to the information they need remain a challenge. Much work has to be done to move them forward. But the work is no longer done inside a library; it is done outside of it. For example, the programmers at Microsoft develop new operating systems, the engineers at Adobe work on the next PDF standard, Google employees program the Google Scholar search engine, and AT&T has plenty to do rolling out fiber cable to the houses, and so forth. The development is ongoing, and it is done with huge amounts of money. But none of these tasks has anything to do with classical library work. All the services a library has to offer, such as storing books in archives, providing access to microfiche devices, making interlibrary loans, and teaching people to use an OPAC catalog, are no longer needed. Like the library itself, they are a historical monument.

Does a monument have a future? No, monuments are built for eternity; the Library of Alexandria will look the same in 40 years as it does today. That means the idea of a library has reached a stage in which it is no longer needed by the information society.

Other areas of knowledge processing, such as AT&T, Microsoft, Google, and so on, will have a future. That means the future will be different from today. Microsoft, for example, cannot be called a monument, because the company has to reinvent itself every day. The difference is that these new IT companies are not recognized as digital libraries. They are simply called software centers or internet service providers, but they are doing the job of a library. And they are very good at it.

End of life for libraries

Since what date in the past have libraries been obsolete? In the 1980s they were not, because libraries were the only place where full-text books were available. In the 1990s they were also alive; the internet at that time provided access to online catalogs like WorldCat, but not to full-text information. From 1995 to 2005, only the OPAC in the library was no longer needed, because an internet connection was enough for searching the database. But if a user wanted to look into a book, he still needed physical access to a library. So the date of death is perhaps the year 2005. Since then, the bibliographic catalogs and the full-text documents have been available online, and Google Scholar provided the first search engine for retrieval inside the content. That was 13 years ago. Since 2005 the library has no longer evolved into something new; it became static and transformed itself into a gallery.

The main reason why new libraries are built today is not that they are needed for information distribution, but that people are strongly attached to them and want to remember what the time before looked like. Libraries are built as museums by default. That means everybody knows that they are useless, but people are too shy to say it out loud. Since the year 2005 libraries have been dead, but in the official debate they live forever.


Somebody may argue that the end of the library is not certain, because many people prefer books and microfiches over internet technology. No, they do not prefer old-school books; they are simply not informed about the costs. The technology of a microfiche device is great: the user has everything he needs, and the only problem is the cost. It takes around 2 hours to find a record, while the same task can be done on the internet in 2 seconds. And transporting a physical book across the US costs around 100 US$, while the same procedure can be done on the internet for 1 cent. This gap cannot be negotiated away; it can only be hidden from the user, for example by making library use free while a DSL line costs 40 US$ per month. So the user believes that the library is cheap, while in reality the costs are paid by somebody else and the comparison is not fair.
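A back-of-the-envelope check of the gap, using only the round numbers quoted above:

```python
# Cost gap between microfiche/physical delivery and the internet,
# taken directly from the figures in the text.

record_search_fiche_s = 2 * 60 * 60   # ~2 hours to find a record on microfiche
record_search_web_s = 2               # ~2 seconds for the same search online
book_transport_cents = 100 * 100      # ~100 US$ to ship a book across the US
file_transfer_cents = 1               # ~1 cent to send the same content online

print(record_search_fiche_s // record_search_web_s)  # → 3600
print(book_transport_cents // file_transfer_cents)   # → 10000
```

A factor of several thousand in both time and money is the gap the text says can only be hidden, never negotiated away.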

What today's libraries are really good at is playing these kinds of cost-hiding games. They argue that only a human librarian can guide a user through the catalog. But they forget that the cost of the human guide is very high and that the same money could be better invested in a fiber-optic connection. That means building new libraries is a kind of misallocation of money, and the library-industrial complex is interested in exactly this miscalculation. So they argue for the classical library because they can. It is very easy to explain to uninformed people how high the quality of the library is and how cheap a classical book is, while in reality the opposite is true.

Daily Open Access answers

An author is looking for an academic journal

I may have found a source that looks interesting to you: Ziobrowski, Alan, and Karen Gibler. “Factors academic real estate authors consider when choosing where to submit a manuscript for publication.” Journal of Real Estate Practice and Education 3.1 (2000): 43-54. [1]

It is a somewhat older paper and discusses the situation that authors are in a competitive position when publishing and have to choose between different scholarly journals. The matching process is usually controlled by the journal and by the marketing image it can build around a certain publication. The idea is to make a journal attractive to an author so that he will choose it naturally.

Quote: “Authors who are the target of these messages” (page 1).

The paper also explains what the main purpose of publishing is. It is not to bring a subject forward; it is “to fulfill the promotion and tenure requirements of an institution” (page 2). According to the cited paper, the process of choosing a journal depends on:

– the author’s perception of the quality of the journal
– the ranking of the journal
– fair criticism from the reviewers

That means the average author acts according to these parameters. What the OP asks in his Academia.StackExchange posting is the wrong way to ask, because his primary motivation is topic-centered. He has written a paper about topic A and now tries to find a journal. That is not how scholarly publishing works.

Incentive for publishing a paper

Why would somebody create and publish a paper in academia? Because he gets money for it, or because he wants to increase his reputation? No, that has nothing to do with science. The main reason why somebody should write, proofread, and publish a paper was given by a website called Stack Overflow, with its motto: learn, share, and build. The main reason somebody posts a question or an answer to that website is that he wants to learn something new. The Stack Overflow website itself is in many cases not the right choice for improving one's reputation. The opposite is true: if the world knows what I do not know about C++ programming, everybody thinks I am a noob. On the other side, it is possible to learn from the network; the posting of a question is itself the answer. The phenomenon is a bit difficult to explain, but Stack Overflow works, which means the motto is true.

Open Access in Russia

I'm not really a fan of disasters and bugs, and the following video can be called tragic, because the new postal drone in Russia crashes into a wall. But in every drama some good news is hidden. The good news is that the camera worked great and captured the scene; the second piece of good news is that some robotics experts in Russia are giving new technology a trial, and perhaps some day the drone will work better and can deliver the new issue of the printed Nature journal to a library …