Community building with Academia.edu

From a technical point of view it is possible to ignore Academia.edu and post a self-written paper on a blog. But one thing is missing there: the integration into a broader community. What does that mean? Academia.edu has a nice but often unrecognized feature called “related papers”. It is simply a sidebar, visible to the right of a paper, which shows other papers that have – according to Academia.edu – a similar topic. For example, I have written a paper about the RHEL distribution, and in the “related papers” section, papers by other authors about the Linux ecosystem are presented. Sure, it is simply advertisement: if I click on one of these recommendations, the pageview counter of the other author increases by one. But it is also a nice way to read information which goes in the same direction. It can be used as a kind of search engine, where a complete paper is the query for finding similar papers.

Such a connection is not as strong as somebody commenting on a paper or citing it in a footnote. It is more an automatically generated search result, but it is nice to know that one's own paper is not alone in the world. Instead there are other authors who have written something similar.

In old-school academic writing such a search engine is called plagiarism detection. And if the engine returns a result, it is always a bad sign. But plagiarism is something different from a community. A community is when many authors write about the same topic, perhaps even in the same style, but produce different content.


Can we remove the peer-review process from science?

I’m citing Richard Price: “The web has no peer-review process”. That means anybody can upload his podcasts, blogs and Facebook updates. There is no higher instance which has to approve any of these changes. The only exception from that rule is academia. Here we have traditional peer-review in place. Which means that a normal student isn’t allowed to simply upload his content, and that even the paper of a professor will be rejected by an Elsevier journal if the editor says it is not important or not scientific.

Let us investigate what the internet uses instead of a peer-review system for measuring the quality of information. There are three main elements:

– full-text search engines (Google)
– manually created directories like DMOZ
– references from other webpages, like blogs and Wikipedia

This collection of orientation aids works surprisingly efficiently. That means, even without a centralized place in which the information is aggregated, everybody finds the content he needs. Why does academia need a dedicated peer-review system which is centralized at Elsevier, PubMed and Wiley? The answer is that not academia needs such a system, but Elsevier and Wiley.

Decentralized peer-review

Open Science is usually thought of with the idea in mind that future academic publishing will be based on peer-review. But peer-review is not the answer to the quality problem, it is the bug. Let us investigate in detail how centralized peer-review works. The idea is that a beginner student writes a paper and gives it to his professor. The professor reads the paper and creates a bibliography of all the good papers. Some of the students are not included in the bibliography, and their papers are rejected by the professor. The problem is that we have a single point of failure. Only the professor has the knowledge to evaluate the content, and only the professor is allowed to create an annotated bibliography with a quality judgement.

The better idea is to cancel the peer-review. That means seeing all players in the game as equal. If somebody wants to judge the papers of others, he is allowed to do so. For example, I feel the need to judge “Rooter: A Methodology for the Typical Unification of Access Points and Redundancy” and say that the paper was generated by a random generator and contains nonsense. But can I call myself a peer-review instance which stands above Jeremy Stribling? No, I’m only one of 3 billion internet users who is allowed to post his quality judgement to the internet. I can create a bibliography of very well written papers and of badly written papers. But I do not have the right to reject a certain paper or to prevent others from creating their own evaluations.

What Open Science needs is a decentralized peer-review system. That means a huge number of different evaluations. What can such a system look like? It consists of:

– wiki-based annotated bibliographies
– blog articles with paper reviews
– citations in one’s own articles
– search engines which count the number of citations
– social networks which count the +1 votes for a paper
– citations of a paper inside Wikipedia
– and many more

That means there is no single peer-review instance to which a student submits his paper; instead his paper can be referenced in many places with different perspectives. It is only a technical question how to aggregate the different evaluations of one paper. One formula could be that a paper is scientific if it was cited by Wikipedia and has at least one citation in another paper. But other criteria are also possible. The classical centralized peer-review system, in which an anonymous peer-reviewer at Elsevier takes a yes/no decision about the quality of a paper, is ridiculous. It is interesting to know what his judgement is, but only in comparison with the judgement of the rest of the internet.
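The aggregation idea can be illustrated in a few lines of code. This is only a minimal sketch under assumed names: the signal keys and the weights are invented for illustration, not part of any real system.

```python
# Sketch of a decentralized peer-review aggregator (hypothetical).
# Each signal source is just a count. The rule below mirrors the
# example formula from the text: a paper counts as "scientific" if it
# is cited by Wikipedia and has at least one citation in another paper.

def is_scientific(signals: dict) -> bool:
    """Aggregate independent evaluation signals for one paper."""
    return (signals.get("wikipedia_citations", 0) >= 1
            and signals.get("paper_citations", 0) >= 1)

# Other criteria are possible, e.g. a weighted score over all sources.
def score(signals: dict) -> float:
    """Weighted sum over all known signal sources (weights are assumptions)."""
    weights = {"wikipedia_citations": 3.0, "paper_citations": 2.0,
               "blog_reviews": 1.0, "social_votes": 0.5}
    return sum(weights.get(k, 0.0) * v for k, v in signals.items())

paper = {"wikipedia_citations": 1, "paper_citations": 2, "blog_reviews": 4}
print(is_scientific(paper))  # True
print(score(paper))          # 11.0
```

The point of the sketch is that no single function call is authoritative: anyone can define their own `score` over the same public signals.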

A centralized peer-review system is something which has to be overcome. It is an invention of the past which was needed in the pre-internet age.

Peer-review is obsolete

In the above blogpost, the peer-review system is described as inefficient. I would agree only halfway. It is right that experts can judge a paper wrongly, but the problem is not the creation of annotated bibliographies itself; the problem is a centralized peer-review system. I think that Open Access will need more peer-review than ever. And it needs a more diverse form: not only by a journal, but also from bloggers, Wikipedia, independent bibliographies and automatic search engines.

From the old peer-review it is possible to learn many things, for example how to censor information, how to establish a bias and how to upvote underdogs. The aim is not to overcome these habits; it is more an evolutionary process toward a peer-review system which is even more biased. That means, not only Elsevier can create a list of recommended papers; amateurs can do it too. Let’s take a look at the Amazon customer ratings. Are these votes objective, unbiased and well informed? They are the opposite of it, and exactly this is what peer-review should be.

Switching back from LyX is not possible

Just for fun, I’ve tried not to use LyX but the normal LibreOffice Writer for formatting a PDF paper. The idea was that the document was short (3 pages) and LibreOffice should also be well suited for formatting a simple document without references and images. No, it isn’t. It took a huge amount of time to put the text into the window via copy and paste, and then all the headlines had to be adjusted with manual formatting. Even though there is a feature for defining what a headline is, it seems that after selecting all the text and changing the font size, this feature stops working. I’ve tried my best to give the paper a serious look, but the result was not very good. The page numbering at the bottom is missing, and between the paragraphs there is ugly spacing without any sense.

And the example was only a short paper of 3 pages, without any floating figures. So my conclusion is that even for simple documents, the LyX text processor can’t be replaced by anything else. In general I’m a fan of open-source software like LibreOffice, but for writing a scientific paper the software is a nightmare. The main problem is that, like MS Word, it uses the WYSIWYG paradigm, which means that the user sees the DIN A4 page and positions the text there. Or to make it clear: LibreOffice does not use LaTeX as a backend.

When newbies have their first contact with LaTeX and LyX, they may think that with LibreOffice they can do the same without learning a new language. But a deeper look into the subject shows that there are reasons why LaTeX + LyX is so often used by publishers.
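For comparison, the LaTeX source that LyX generates behind the scenes handles headlines, paragraph spacing and page numbers automatically. A minimal sketch (title and section names are placeholders, not from the experiment above):

```latex
\documentclass[a4paper,11pt]{article}
\usepackage[utf8]{inputenc}

\title{A Short Three-Page Paper}
\author{Anonymous}

\begin{document}
\maketitle

\section{Introduction}   % headline styling is automatic
Text goes here; paragraph spacing and the page number
at the bottom are handled by the document class.

\section{Conclusion}
No manual font-size adjustments are needed.
\end{document}
```

Compiling this with pdflatex produces consistent headlines and automatic page numbering, exactly the two things that went wrong in the LibreOffice experiment.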

Super-predatory publishing without any peer-review

Predatory publishers redefine the relationship between journal and author. The publisher dominates the relationship while the authors are sheep without any rights. The answer to the problem is even more of the same poison.

Table of Contents
1 Foreword
2 Predatory publishers
2.1 Jeffrey Beall’s list
2.2 Predatory publishing is the answer
3 Peer-review
3.1 Understanding Peer-review
3.2 Peer-review request
3.3 Aggressive peer-review request
4 Technical questions
4.1 Open Journal Systems vs. MediaWiki
4.2 How an academic journal works
4.3 Tutorial of how to use Academia.edu
4.4 Bug: Academia.edu has no iPad app
5 Publishing
5.1 How to prevent too much citation?
5.2 Be attractive to authors
5.3 Inefficiency in the publishing industry
5.4 What is Academia.edu
5.5 Time as a currency

1 Foreword

The following paper would normally have been published at Academia.edu. But lately I’m not happy with their search interface, so I want to set a sign against the “Academic social network”. So I reformatted the PDF output into a plain-text file and dropped the knowledge into WordPress.

2 Predatory publishers

2.1 Jeffrey Beall’s list

In the context of academic publishing, Jeffrey Beall’s list of predatory journals has reached cult status. The main problem with the list is that not only the journals on the list but also the journals outside it are under fire. How exactly does Beall define a predatory journal? He says that the journal takes money, invites academics for peer-review, publishes papers without peer-review and in general is not recommended. So this definition can in theory be used as a description of every journal. What Beall has really addressed are journals with a lower APC charge than normal. A serious journal like Nature takes 4000 US$ APC for publishing a paper, while a predatory low-quality journal takes only 400 US$. That is the difference.

And if a journal which takes a 400 US$ charge per paper is predatory, what about PeerJ Preprints or ResearchGate? Right, they are even more predatory because their fee is lower. So Jeffrey Beall, and perhaps most of the community, knows that the Beall list makes no sense, except that the list is provocative.

The hypothesis is that science in general is moving in the direction of diamond open access. So science in the future will cost less than today. That means, in the future all journals will be predatory. And the journals which are not will no longer exist. So a non-predatory journal is an old-school journal which has no online edition, which costs at least 4000 US$ or so much money that not even a price tag is given, and which has a peer-review process so perfect that nobody understands it. In contrast, the name “predatory” can be used for all journals which are thin, cheap and business-oriented. But the dominant difference is the role of the author. In old-school publishing the author who writes a manuscript is treated like a god in person, while in predatory publishing the journal pursues its own success at the expense of the writers.


For analysing the scandal behind Beall’s list, it is important to make clear what the normal self-perception in academic publishing is. The mantra used in discussions about the topic is that on the one hand we have the god-like authors who are either geniuses or Nobel laureates, and on the other hand we have voluntary journals which are devoted to science. That is the image of how the community wants to see itself. Beall’s list contradicts this narrative because it tells a story about greedy, deceitful journals on the one side and naive, dumb authors on the other. So it is not really a list of publishers but more a description of the relationship between authors and journals. Which publishers exactly are on the list is not important; all of them are greedy and ruthless, and all authors are loyal, simple-minded sheep. And this description of the world is not what the community likes.

The reason why is simple. The old model of a genius, god-like author was never the reality; it was only a marketing idea for selling free slots in a journal to the authors. It is easier to find customers if the customers think that they dominate the relationship.

The mantra which is widespread in academia is about the voluntary nature of journals and universities. In theory the journals publish for free, and the students pay no money for attending school. Everything that happens inside academia is done on a non-profit basis with the aim of bringing science forward. This has nothing to do with reality; it is the elephant in the room. But talking about this topic is not easy. Jeffrey Beall was the first who invented a story which explains the reality differently. Even if his story is wrong and the journals on the list are innocent, the story itself is interesting. The main problem in academia is not that there is money in the game; the main problem is that this is not visible.

A major step forward is that APC charges have recently become normal. In reality they were also normal in the past, but it was not clear to everybody that the author must pay the money. Also a step in the right direction is when newly founded open-access journals try to increase their margin. Greed brings academia forward.


If a list of predatory publishers exists, what helps against it? The answer is to be more predatory. If the normal evil publisher takes a 400 US$ APC charge per paper and fakes the peer-review process, then the answer is to found a journal without any APC charge and without a peer-review process. Today only two candidates out there can be called “top predatory”: ResearchGate and Academia.edu. Both have reduced the APC charge down to zero and finance themselves with advertisement and premium services. They have nothing to do with real science; instead they have more in common with Netflix streaming and Facebook. To beat even Academia.edu is hard, perhaps it is not possible. So the answer to all problems is to try it out and see how science loses everything which is holy.

A platform like Academia.edu is exactly what authors need: a quick and cheap possibility to publish their papers, nothing more and nothing less. The task of a publisher is to put the PDF file on the web, and the rest is not his job. Peer-review is normally something the science community has to do, not the publisher. And formatting a paper is also something which has nothing to do with the core task of a publisher. So the best-practice model for future publishers is a minimalistic predatory business model.

2.2 Predatory publishing is the answer

Some people see in predatory publishing the downfall of academia. They argue that in the worst case, greedy publishers send spam e-mails and never do a peer-review. But the newly founded alternative publishers have many advantages. The most dominant is that they change the relationship between author and publisher. If the publisher is predatory, greedy and commercial, then the author is the victim. That is a healthy idea, because the author is the slave of the reader and of the publisher at the same time. Normally the publisher argues as a reader. He wants something, mostly something which has to do with academic papers. And the author has to deliver the goods; that is his task.

To explain what predatory publishing is and what it is not is simple. The most aggressive publisher today is Academia.edu, because its business model is based on diamond open access, which means that the portal is financed with advertisement. At Academia.edu there are lots of spam e-mails, no peer-review, no APC charges and no editorial board. Other, so-called predatory publishers, which have an APC charge of around 500 US$ per paper and have a low-quality peer-review, are not really predatory; they are not in competition with Academia.edu. The most advanced form of publishing is when the costs are reduced to zero for reading and writing, the authors format their manuscripts alone, and the delay between uploading a paper and publishing is smaller than a minute. It is hard to imagine that somebody will invent a more aggressive form of academic publishing. The only possibility is to use artificial intelligence to replace the human authors and generate all documents with SCIgen. But today this is not possible. It is perhaps a future vision in which IBM Watson is involved.

In this AI-based publishing future, the speed is increased further because the portal no longer depends on human users who upload papers. For example, today the Academia.edu server is technically ready for publishing 100k papers per day. The problem is that this amount of information must be written first. This is the bottleneck. If a resource becomes available which produces this amount of papers, a new kind of science will occur.

3 Peer-review

3.1 Understanding Peer-review

For understanding the peer-review process it is important to make a thought experiment. We imagine that we are the editor of a big worldwide academic journal and other people send us manuscripts. Our task is to read the incoming papers and reject the low-quality submissions. There is only one problem: in reality we are not an academic journal and nobody sends us a manuscript. But this can be solved easily with a look into a public preprint server, for example PeerJ Preprints:

There are some incoming manuscripts listed, so we have material for role-playing. OK, let’s start. It is November 2017 and the first manuscript on the list is:

“Manipulating the alpha level cannot cure significance testing – comments on Redefine statistical significance”,

The problem is that, according to the words “statistical significance” in the title, the paper seems to have something to do with math. But we don’t like math very much. No problem, the author gets a rejection from our fictional journal, saying that he should revise his work and resubmit …

Ok, let us go to the next paper on the list:

“Angiosperm phylogeny poster (APP) – Flowering plant systematics, 2017”,

The problem here is that we have never heard the word “phylogeny”. And the title doesn’t look very interesting either. But no problem, we are the journal – the paper gets a rejection too. This time we write that the number of references is not high enough. That we never read the paper or counted the references need not be mentioned in the letter to the author.

Now, the next paper:

“Xenoposeidon is the earliest known rebbachisaurid sauropod dinosaur”,

According to the title this has something to do with dinosaurs. It is a topic from the history of biology, and biology is not our favourite subject. Perhaps the paper is well written, but we are not motivated to analyse the text for errors. So this paper also gets a standard rejection.

The problem is that the PeerJ repository does not contain only these three manuscripts, but many more. And perhaps in most cases the title doesn’t look very promising, and in general it is hard work to read a paper.

But there is an alternative out there. I have explained this in an earlier blogpost, but repetition is always good. Instead of peer-reviewing the incoming manuscripts chronologically, we retrieve something in which we are currently interested. Today we would like to read a paper about “principal component analysis”.

A Google search for “site: principal component analysis” shows us that indeed a manuscript is available. The paper:

“Hand posture comparison in Synergy Space”,

was uploaded in July this year (so it is 6 months old), has 55 pages, and the author comes from India. He measures the movement of a human hand with a data glove and creates a PCA model of it. He has some colourful pictures in his manuscript and introduces a “posture similarity index”. From the perspective of our fictional journal the paper is very good and the author gets an acceptance letter. We also encourage him to send us more manuscripts from his institute because we are interested in his work.


What have we done so far? We have staged a role-play for switching perspective. Instead of arguing from the perspective of an author who is frustrated by a journal, we ourselves played an evil journal which rejects incoming papers. This role switch gave us good knowledge about the inner workings of the peer-review process.

3.2 Peer-review request

A peer-review request is something that, according to the above URL, happens many times. It is part of the peer-review process. At first the author submits his manuscript to a journal, and the journal editor sends the paper to a reviewer, who gets a formal peer-review request. That means the editor asks his subordinate if he can read and comment on the paper. In the process flow a decision is needed. How to react properly?

According to the Stackexchange website, the normal case is that scientists react positively, especially if the request comes from a major journal and there is a long relationship. The answer which is currently rated with +17 points gets much support from other users. But there is one answer in the thread which goes more in the direction I personally prefer: the last one, which currently has -5 downvotes. According to this answer, the best strategy is to never review an incoming paper.

But what happens if everybody does this? The result is the following. The editor of a journal gets a new manuscript but doesn’t find any reviewer who wants to read and comment on it. So the journal editor must take a decision by himself. He can peer-review the paper himself, send it back to the author, or do something else.

At this point of the story it is important to give a more abstract description of the overall process. There are two possibilities for how research can be organized: top-down or bottom-up. Top-down means that the authors are on top of the pyramid. Here is an example.

The normal use case is that a random author writes a random paper. The author can select a journal, for example Nature, and the author defines the date when he sends the manuscript to the journal. The author decides the following:

– title: “new pathplanning algorithm for a walking robot”
– date of submission: March 2017
– journal: Nature

Let us make a role switch to the journal. Now we are not the author but the Nature journal. It is March 2017 and an author decides to send us his manuscript. We as the journal must react to this submission. The author wants a decision within 6 months on whether his manuscript is good or not, so we must find a peer-reviewer for it. Top-down does not mean that we as the Nature journal are on top; it means that the authors who submit play the active role.

The consequence is that journal editors are subordinates of the authors. Their normal behaviour is to obey them. And that means accepting every peer-review request. Back to the example. The author has written the above-cited manuscript. What the author wants is that the peer-reviewer reads and comments on the manuscript. So the author wants the peer-reviewer to accept his request. The author dictates the date and the title of what is to be reviewed.

Surprisingly, in another thread a different view is presented: there a peer-review request is printed which looks like spam. The journal sends an e-mail to an academic and invites him to read a certain paper.

3.3 Aggressive peer-review request

A user on the Academia.stackexchange forum has posted a typical invitation e-mail which is sent by the Elsevier online portal. In the first e-mail the system says that the user is registered, and the second is an invitation to review a certain paper within 21 days. The OP asks if this e-mail is spam, because it looks similar to what predatory publishers send, but the Academia.stackexchange community has confirmed that the mail is not spam.

The question remains whether it makes sense to invite somebody to write a peer-review. It is comparable to a random generator choosing one of the 5 million articles in Wikipedia which you should then review, even if you’re not interested. I think a better method is to let the peer-reviewer decide which paper he wants to review: either in a normal citation, or in a dedicated peer-review in which the reviewer selects a paper and writes a longer notice about it. Why is this bottom-up driven workflow not common at Elsevier? It has something to do with access to full-text archives. If it is not possible to search the database for older papers, then it is not possible to select one which is interesting.

I want to explain the workflow in detail. Peer-reviewing old papers which are already published makes no sense, because all of them have been peer-reviewed already. It is only necessary to peer-review new papers which are in the pipeline this year. Who has access to the pipeline? Not the public; only the journals know which papers were submitted to them in the last 6 months. So the circulating papers are not retrievable as full text. And here is the reason why the Elsevier system invites somebody to review a certain paper: because he is the only one worldwide who can read that paper. This is the bait which is used by the journal to get loyal peer-reviewers. If Elsevier took an old paper which is already published at Arxiv and invited somebody to review it, it would look like a scam, because the public already has access to the paper and it is nothing special to read it.

The concept of circulating papers means that only the journal has access to the new papers. If an author submits a paper, then not even his colleagues can read it (artificial shortage). The same principle is used in the cinema business, where selected journalists can see a blockbuster 4 weeks before the public. The studios send invitations out, and most journalists are happy to be in such an exclusive group.

4 Technical questions

4.1 Open Journal Systems vs. MediaWiki

Today there are two major software frameworks out there which support science-like publishing: MediaWiki on the one hand, which drives the Wikipedia system, and Open Journal Systems (OJS) on the other, which is used by more than 7000 open-access journals. Both frameworks are very modern software platforms which are oriented toward digital publishing over the internet. The remarkable thing is that MediaWiki and OJS work differently, and perhaps it is not possible to join the projects. Let’s first take a look at MediaWiki, which is older and widely known.

MediaWiki and Wikipedia can be called a success. The system is today the quasi-standard for producing an encyclopaedia, and no one has a better alternative. The idea of MediaWiki is that authors can edit an HTML page on the fly with a markup language. The main feature is that this can be done in parallel: every user can edit every article. Because the software is so easy to use, Wikipedia can be called a success.

In addition to the original Wikipedia project there was also a sister project, called Wikiversity, where the wiki concept was used for producing e-books and e-journals. There are some examples online, but mainstream success never happened.

On the other hand, in the area of academic publishing another system is widely used, which is the above-cited OJS. OJS works differently from a wiki; it has more in common with a content-management system like TYPO3, where a workflow can be defined. Also the output format is different from MediaWiki: in OJS the normal output is PDF.

The similarity between OJS and MediaWiki is that both software frameworks are oriented toward the internet. There are no printed sheets of paper which are sent by snail mail around the world; instead there is a LAMP server with the software installed on it. So the default GUI is a web browser, and a user must log in to use the system. The reason why so many journals use OJS is simple: the software is good enough for modelling the whole workflow of a journal, it is well tested, and it is free. In addition to OJS there are proprietary systems out there which are used by paywalled journals. Perhaps these systems look fancier and can be customized more, but in general the software works on the same principle.

A possible successor to OJS is a pre-installed “Academic social network” like Academia.edu. The difference there is that a working software installation already exists, and the user can set up his own journal inside the “Academic social network”. So he does not have to configure OJS or think about templates, because it is all there. The disadvantage is that Academia.edu is run by a company, so the user does not have full control over his journal. It cannot be customized like OJS.

4.2 How an academic journal works

In the following video, three professors from the physics department have founded a new academic journal to which authors can submit their papers. The senior editor is standing next to the lovely newspaper-shredding machine. He takes the manuscript and puts it into the machine. The system works very well. His referees are helping him stuff the papers into the slot.

4.3 Tutorial of how to use Academia.edu

At first you need a comic strip which was created on your own tablet. After converting it into the PDF format, it is ready for uploading. Academia.edu gives you two options: publish as paper or publish as draft. The normal use case is to publish it as a draft. The reason is that until now the newly created manga has no DOI number, and only a draft paper can get a DOI number in the future. After uploading the paper, another dialog box appears in which the author can invite colleagues for a peer-review. This decision is a bit tricky; the best option is to open the paper for peer-review but deselect all followers from the menu, so that nobody gets an invitation. The reason is simple: from the perspective of a peer-reviewer it is not much fun to get an invitation. This behaviour was described by Jeffrey Beall in the context of predatory journals. Instead it is better to let the peer-reviewer decide for himself what and when he likes to review.


Theoretically it is possible to write the comic strip in any language. But for reasons of a wide readership, English is the preferred one. It makes no sense to upload cartoons written in German or Chinese; the number of people who want to read them is too small. That may look a bit funny, because German is spoken by around 200M people and Chinese by more than 1000M. But in comparison to the world population this is a small audience.

4.4 Bug: Academia.edu has no iPad app

All the major academic journals have one today: an iPad app. For example, the PLOS One reader, Elsevier ebooks and the Mendeley iOS app are available in the iTunes Store. It is possible to browse e-books and journals on the iPad. The only provider which has no such app is the “Academic social network” Academia.edu, which is located in San Francisco. Perhaps the guys behind Academia.edu don’t know that Apple exists?

The benefits of an Apple or Android app are obvious. It is possible to browse inside e-books without booting up a Windows PC. For reading the abstracts of the newly published papers of a friend, it is no longer necessary to use a bloated desktop PC.

Somebody may argue that it is possible to open the PDF file inside the web browser, which also works on the iPad. But that is not the same as a dedicated e-book app. For example, in the PLOS One reader it is possible to swipe the pages with an animation and to get additional information. But the main reason why an app is better is that it makes in-app purchases possible, something most print magazines today don’t want to renounce.

5 Publishing

5.1 How to prevent too much citation?

In the classical discussion about academic publishing, journal editors often reveal the magic tricks of the journal system. For example, they give hints about how long a good peer-review takes, which people one should know, and what a good journal article looks like which is not rejected in the first round. The illusion is that improving one’s own citation count is a need.

The reality is that most papers today are overcited. The average robotics paper is cited by at least 5 other papers, gets 2 peer reviews from the journal, and is often referenced in weblogs as well. It is very difficult to find a paper inside the academic system which is not cited by anybody and which gets no positive peer review from a journal. The problem is that the reputation system, which consists of positive feedback and citation by others, is broken. On the one hand there are 50M papers out there which are cited many times; on the other hand the objective quality of these papers is low. I can say so, because I’ve read them all, and in none of them is an interesting innovation published.

The question of how to get good feedback and how to get cited often is easy to answer. No effort is necessary to reach that goal, because it is the normal case. The average PhD student gets more feedback than he can ever read. The more interesting question is how to write a paper which never gets cited. One possibility is to prevent publication. Another option is to use a language which nobody else speaks. But the best possibility is to artificially lower the quality.

Somebody may argue that feedback to papers is a fundamental element in academic. No, it’s not. There are enough other ways to distinguish between truth and nonsense, for example an experiment. Not only peer-reviewers know that water is freezing at 0 degree, also the water itself knows it.

Astonishingly, in most cases the author of a paper knows himself whether his work is good or not. Even if he does no experiment and gets no external feedback, he is often the best critic and feels whether his work is right. The reason is that evaluating work is easier than the work itself. Surprisingly, even non-experts are often able to evaluate complex papers. If the author has written the text in an easy-to-read way and his arguments are convincing, a normal non-academic person will understand it.

I want to say that the current peer-review system in academia is not only too slow and ineffective, but also overrated. A more efficient system, and less of it, is what should be strived for. This minimalistic peer-review system works without feedback in normal mode, and only as an exception will it provide feedback for a work. The normal case should be that authors upload their papers to a preprint server and never get cited or commented on. With strong self-discipline it is possible to avoid references and prevent formal citation, so authors are encouraged to guess whether they were cited indirectly or not. I’m not so arrogant as to recommend the zero-citation paper, in which at the end nobody else is cited. But a good science paper needs no more than 5 citations. The aim is to reduce the h-index for everybody.
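The self-discipline described above could even be automated. As a minimal sketch, one could count the citation commands in a LaTeX source and warn when a draft exceeds the five-citation budget; the regex and the budget check are my own assumptions, not an existing tool:

```python
import re

def count_citations(tex_source: str) -> int:
    """Count unique cited keys in \\cite{...} commands (incl. \\citep/\\citet variants)."""
    keys = []
    for match in re.finditer(r'\\cite[tp]?\{([^}]*)\}', tex_source):
        # one \cite{a,b} command may reference several keys
        keys.extend(k.strip() for k in match.group(1).split(',') if k.strip())
    return len(set(keys))

draft = r"""
As shown in \cite{smith2015}, robots can walk.
Others disagree \citep{jones2014,lee2016}.
See also \cite{smith2015}.
"""

BUDGET = 5  # the five-citation budget proposed in the text
n = count_citations(draft)
print(f"{n} unique citations, budget {BUDGET}: {'ok' if n <= BUDGET else 'over budget'}")
```

Running this on the example draft reports 3 unique citations, which is inside the budget.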

The same concept is useful for peer review. The only feedback which an author needs is the decision between yes and no. Yes means the paper is good and will be published; no means the editor doesn’t understand the manuscript.

5.2 Be attractive to authors

In the above video a journal editor explains how the relationship between author and journal works. The reason why an author sends a manuscript to a journal is that he hopes to get detailed feedback. The service a journal provides is to read the work and give feedback. That is some kind of currency which flows back to the author.

As a consequence, the author will send his manuscript to a journal which gives detailed feedback. The journal acts like a chatroom. The question is: how can an open-access journal provide feedback to its authors? Normally this task is done with external peer review. Good journals have access to good peer reviewers who read and comment on manuscripts.

It is important to know that most scientists are unsure about their own papers. They write something, but they do not know whether it’s true or not. So the authors are learners. Feedback from a science journal guides the learning process. This is “adult education” on a very high level: even a person with a PhD has the need to improve their knowledge, and getting feedback on one’s own work is a good way to do so.

In the YouTube clip, at 2:35, the journal editor says:

“As a new journal, you have to provide incentives / good reasons to authors for giving us their manuscripts. When authors submit their manuscripts, we really try to give them very detailed feedback.”


In another essay I have explained how a queue-less peer-review process works. Mostly it works because there is no guarantee that the author gets feedback. Let’s compare this with a traditional journal. The author sends his manuscript to a journal in the expectation that he gets feedback. The feedback has to come within a defined time period of not more than 2 months, and it should be high-quality feedback. Those are the expectations of the author. The journal now has a problem: how to provide feedback for a certain paper if nobody is available who understands the topic? The best-practice method for solving this dilemma is to send the manuscript to a network of peer reviewers. The work of writing feedback is crowdsourced to the science community (blind peer review). And here the magic comes into the game. The process of selecting peer reviewers is not visible, so in theory the following situation is possible:

Two physicists are sitting in the same room. Physicist #1 sends his manuscript to the journal. The journal anonymizes the paper and sends it to Physicist #2, who writes a peer review and sends the result back to the journal. The journal then presents the feedback to the author. Neither physicist has any idea that they have reviewed each other. So in reality the journal itself doesn’t write the peer review; instead it acts as an intermediary. This process is done in secrecy not for quality reasons, but to protect the business model of the journal. The alternative is simple: open peer review, in which the names of the reviewers are known, so that the science community needs no intermediary as an exchange.
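The routing in this example can be sketched as a tiny simulation. All names, the review functions and the random reviewer selection are invented for illustration; this is not how any real journal system is implemented:

```python
import random

class Journal:
    """A hypothetical intermediary that hides author and reviewer from each other."""
    def __init__(self, reviewers):
        self.reviewers = reviewers  # maps reviewer name -> review function

    def handle_submission(self, author, manuscript):
        # anonymize: the author's name is stripped before forwarding
        anonymous_text = manuscript
        # pick any reviewer except the author himself
        candidates = [name for name in self.reviewers if name != author]
        reviewer = random.choice(candidates)
        feedback = self.reviewers[reviewer](anonymous_text)
        # the author receives the feedback but never learns who wrote it
        return feedback

reviewers = {
    "physicist_1": lambda text: "needs more data",
    "physicist_2": lambda text: "equations check out",
}
journal = Journal(reviewers)
# physicist_1 submits; the only other reviewer in the pool is physicist_2
print(journal.handle_submission("physicist_1", "On the freezing point of water"))
```

With only two reviewers in the pool, Physicist #2 necessarily ends up reviewing Physicist #1, exactly as in the scenario above, and neither side sees the other's name.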


The dominant reason for an author to send his manuscript anywhere is that he wants to get feedback. Feedback guides his learning progress. The question which every academic journal or “Academic social network” has to answer is how to provide feedback to the authors. That is the incentive which the hub provides to its customers. A simple but working form of peer review is the citation in another paper. I want to give an example:

I write an 8-page PDF paper with LaTeX. At the end, I have references to 20 different papers written by authors around the world. I upload the paper to Academia.edu. The parser dissects the references and sends a notice to every author who was cited by me. So 20 different authors in the network get a notification that they were cited. In effect they get some kind of peer review: they can see my paper in full text and see the reason why they were cited.
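A minimal sketch of such a notification parser might look as follows, assuming references come as plain "Author (Year). Title." lines; the reference format and the notify step are my own simplifications, not the actual mechanism of any site:

```python
import re

def extract_cited_authors(references: list) -> list:
    """Pull the author name out of each 'Author (Year). Title.' reference line."""
    authors = []
    for line in references:
        match = re.match(r'\s*(.+?)\s*\(\d{4}\)', line)
        if match:
            authors.append(match.group(1))
    return authors

def notify(authors: list) -> list:
    # in a real network this would trigger an in-site notification;
    # here we just format the messages
    return [f"{author}: your paper was cited in a new upload" for author in authors]

refs = [
    "Price, R. (2012). The future of peer review.",
    "Beall, J. (2013). Predatory publishers.",
]
for message in notify(extract_cited_authors(refs)):
    print(message)
```

Each author whose name is recognized in the reference list receives one message, which is the lightweight "peer review by citation" the text describes.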

There are two possible forms of citation: positive or negative. It is possible that I agree with an author, so I cite him because I like the work. Or it is possible that there is mistrust; then I cite him because I don’t like the work. Either way, the counterpart gets positive or negative feedback.

The question remains why traditional journals have both normal citations and dedicated peer reviews. In my opinion, dedicated peer reviews are created by market makers, as on a stock exchange, to guarantee a minimum transaction volume. If enough external citations are available, there is no need to write a dedicated peer review.

5.3 Inefficiency in the publishing industry

Uploading a paper to ResearchGate or Academia.edu costs nothing. Even the premium account, at 100 US$ per year, is a small investment. So in theory a library could apply for such an account, and a hundred or more researchers from the university could upload their manuscripts for free. ResearchGate goes one step further: it is possible to get a DOI number for newly added working drafts. The surprising fact is that in reality the researchers, libraries and universities do something different. They publish their papers for 1000 US$ per unit at PLOS One, or they pay 4000 US$ to publish a paper in a traditional journal. And sometimes they pay 10000 US$ per paper to publish it in a journal which has no open-access feature, which is why the price is so much higher.

Somebody may argue that something is missing in this comparison, that it is not possible to compare with an IEEE Transactions journal, and that the scientists are well informed about the system and their decisions are rational. But is this true? It would not be the first time that there is a lot of inefficiency in the system. Another example is the preferred computer operating system. Most libraries today use Windows. The better libraries, in Harvard, have installed Windows 10, because this is the newest and greatest software; some libraries with less money have installed Windows 8, and even Windows XP is in use. On top of these systems, additional proprietary software which costs a huge amount of money is installed. The same people may argue that this is the result of a well-documented decision and that it makes sense to use Windows XP and not Red Hat on a desktop PC. But is that true?

The main problem is that, from an objective point of view, most decisions taken in the higher-education system are wrong. They are made because of an information mismatch and the wrong peer group. It is not the fault of a certain library that it has installed Windows 8 and does not even know what ResearchGate is; it is rather group thinking: the people who take the decisions are getting positive or negative feedback from their community.

Let us make a small but rational economic calculation. A library buys an education license from Red Hat for 50 US$ per year to install RHEL on its desktop machines, and it buys the premium account of Academia.edu for around 100 US$ per year. According to my calculator, the overall costs are 150 US$ per year. From now on the library can surf the internet with the PC, write papers with LaTeX and upload the papers to the internet. It is some kind of library-ready workstation which supports the reading and writing of academic manuscripts. To my knowledge, no library worldwide has such a system in use. I’ve found no YouTube video and no PDF paper in which such a system and its economic implications are discussed. Why?
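The calculation above, plus a comparison against the APC charges quoted earlier, can be written down explicitly. The prices are the ones claimed in this text, not current list prices:

```python
# annual workstation costs as quoted in the text
rhel_license = 50       # US$ per year, Red Hat education license
premium_account = 100   # US$ per year, academic-network premium account
workstation_total = rhel_license + premium_account
print(f"workstation: {workstation_total} US$ per year")

# per-paper APC charges quoted in the text
apc = {"PLOS One": 1000, "traditional journal": 4000, "non-open-access journal": 10000}
for venue, charge in apc.items():
    # how many years of the full workstation a single APC would pay for
    years = charge / workstation_total
    print(f"{venue}: {charge} US$ = {years:.1f} years of the workstation")
```

By these numbers, one single PLOS One APC would fund the whole workstation for more than six years.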

The reason was given above. It has something to do with a certain culture. On a formal level the described workstation is easy to install and easy to buy. The price is low, technical problems do not occur, and negative effects are missing. The only problem is that such a combination is uncommon. The people who could take such a decision are not familiar with it. They know nobody who would say to them: good work.

It is easy to predict what the future will look like: nothing will change. Even in 10 years, the same Windows 8 PCs will be used in Harvard, and the same amount of APC charges will be paid to upload simple manuscripts to the internet. The people are involved in a system which has criteria other than costs and efficiency. They simply do not understand what Academia.edu is or what Red Hat is. They have never even heard the words, and they do not discuss them with their friends.


The academic publishing business can be called outdated. It is not using current technology. What the participants are doing is playing a role-play. They act with a mindset of the 1970s. A person gets positive feedback if he imitates such behaviour. In most cases this is equal to avoiding all technologies or ideas which were invented later. In the role-play guideline there is no such thing as Red Hat Linux, Academia.edu, LaTeX or open peer review. These things were simply not yet invented in the role-play. Instead, handwritten manuscripts, paper-based libraries and the absence of the internet are known as good practice. So a recommendation like “why are you not using an Academic social network?” can be answered easily: “it was not invented yet”. Yet means today, in the 1970s in which the role-play takes place. The setting and the time in which the universities and libraries play their social game are determined by different stakeholders: the taxpayer, big companies with interests, and the career expectations of people. The system is stable in the sense that it is immune to input from outside. If somebody recommends a different way of publishing, perhaps on the basis of low-cost publishing, he will get enough negative feedback that he has no chance to convince the group.

5.4 What is Academia.edu?

At first glance it is very easy to explain what Academia.edu is. It is a website which is not very sophisticated, and it is also a company which wants to earn money. But that is not what Academia.edu really is. The real explanation is not the website itself, but the help on how to use it. In some interviews Richard Price has explained how his website works, but what he really gave was an explanation of how academia itself works, the mindset of journals, and what peer review is.

From a technical point of view, the website Academia.edu will never improve the science process. The website is too slow, especially the search box, and the site is not attractive to its users. But Academia.edu is really good at explaining how the competitors work. It is not important to sign up for an account there. The only thing people have to do is read and understand the help pages of the website and use that knowledge in a journal they like.

Improving the academia process does not mean that everybody must switch from PLOS One to Academia.edu. It means a changed mindset: the knowledge about how academic publishing works is put on the internet. The Academia.edu website is not really a website; it is more a computer-based training for publishing in general.

From a technical point of view, Academia.edu is a clone of much bigger websites. For example, the analytics section is a 1:1 clone of the “Elsevier stats” section. Elsevier has a dedicated section where the author can see in which country his paper was downloaded. And the upload feature of Academia.edu is a copy of what ResearchGate has on its website. The question is not whether Elsevier or Academia.edu is better; the question is which website better explains how publishing works.

In theory, Academia.edu itself can also be cloned. I would guess that it is possible to program the website from scratch and make a new “Academic social network”. But providing such a website is not the real goal. The goal is to educate people on how to use such websites.

Today, most academics probably know how to write an article. But only the small group of people who are involved in an editorial board are informed about how the publishing process works. That is the bottleneck in the system. Not the old-school journals from Elsevier or the costly APC charges are the problem; the real problem is that most people don’t know how a journal works.

5.5 Time as a currency

The current publishing system is defined by money. Publishing a paper at PLOS One costs 1000 US$, attending a PhD program costs money, and publishing another paper at Nature costs around 4000 US$. But is money really the currency for scientists? No, it’s not. Most scientists are funded by libraries and the government, and they are very rich. What scientists do not have is time. What is time? Time is a limited resource. In some cases it is equal to money, but not always. For example, typing a manuscript into a notebook costs nothing, but it takes hours. The main mistake most scientists make today is to believe that they must invest a huge amount of money and that time is not important. An example:

They publish at Nature because they can. They are funded by their library, and the APC charge is paid. So the scientists believe they are on the safe side of life, because they are so well equipped with US dollars. But the real bottleneck is the procedure which follows the submission. Paying the money is only one piece of what the researchers do; often they must resubmit the paper and argue about the topic. What is the alternative? The alternative is to focus more on the invested time.

5.6 Writing an international paper

It is still unclear how to write a paper which is cited by other authors or generates a high impact. But one question can be answered: how to write a paper which is read by a broad community. In the analytics menu, the author can see from which country the readers come. The first PDF files which I uploaded to the platform had a limited reading circle. Over 95% of all readers were from Germany, Switzerland and Austria. On the map, this results in a readership located within the borders of Germany. Readers from other countries like the United States or China simply did not exist.

The interesting fact is that after uploading two additional papers written in English, the readership spread widely. Not only from Germany, but from all other countries of the world, people have read the papers. Even from countries which are far away and not widely known, like Ghana, one reader was interested in the paper. The topic of the two newly added papers is not very exciting. It is more or less the same as what I’ve written in earlier papers, but this time in English. Apparently this was enough to increase the diversity of the readership to a worldwide level.

In retrospect it seems rational. Why should somebody outside of Germany read a paper written in German? So it is normal that a paper written in German generates traffic only within Germany and not in the US, Nigeria or China. On the other hand, it surprised me that the other direction works: simply by translating a text into English, readers from all over the world become interested. In total numbers, the attention which my papers get is small. The counter shows only 30 views. In total numbers that means nobody wants to read them. But the difference is that this time the term nobody is equal to worldwide.
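The shift in readership can be illustrated with a small computation. The per-country view counts below are invented, only shaped like the figures reported above (30 total views, formerly over 95% from the German-speaking countries):

```python
def country_shares(views: dict) -> dict:
    """Percentage of total views per country."""
    total = sum(views.values())
    return {country: 100 * n / total for country, n in views.items()}

# before: German-language papers, readership inside the German-speaking area (invented counts)
before = {"Germany": 17, "Austria": 1, "Switzerland": 1, "other": 1}
# after: English-language papers, small but worldwide readership (invented counts)
after = {"Germany": 8, "United States": 7, "China": 5, "Nigeria": 4, "Ghana": 1, "other": 5}

for label, views in (("before", before), ("after", after)):
    shares = country_shares(views)
    german_speaking = sum(shares.get(c, 0) for c in ("Germany", "Austria", "Switzerland"))
    print(f"{label}: {sum(views.values())} views, "
          f"{german_speaking:.0f}% from German-speaking countries")
```

The absolute numbers stay tiny in both cases; what changes is only the geographic spread, which is exactly the point of the section.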