How to make an advertisement for Academia.edu

Usually a product is promoted to people with ads. An advertisement explains the product and gives hints about the concept behind it. So, how would a campaign look for a very new website called Academia.edu? It is very hard to say; even after some thinking about it, I don’t know. But what is interesting is that it is possible to make an anti-campaign for Academia.edu. That means, not to sell the website of Richard Price, but to make a bit of applause for the opposite. Perhaps this will help to understand the general idea.

At first, I found a printing museum in Germany. They have some kinds of old printing presses available which can be used by the audience. For example, a hand-driven press which was invented in 1750, but also some more modern devices which are driven by steam, like the Koenig & Bauer model from 1811. Here is a short video which introduces some of the revolutionary machines.

In reality, the situation with the machines was not as easy as it is shown in the clip. But a museum is a place in which the past is explained a bit better. This can be seen as some kind of advertisement for a printing process which is 200 years old. In the last section of the video, a Linotype machine is also shown, which is relatively new in printing history. The idea was that the user only types characters on a keyboard, and the machine sets them automatically, similar to a typewriter.

Overall, that was the physical anti-campaign which explains former printing devices. Now we come to the social aspect of education.

Here is a promotional clip for Miami University. The idea is to recruit new students with a TV ad in under 60 seconds. The clip is well made and sells the university to the public. Here is another promotional clip which focuses entirely on the SAT test:

All of these advertisements have something in common. They are campaigns for some kind of past educational resource: an outdated printing machine, a university nobody is aware of, and a SAT test which can be seen as outdated. Nevertheless, it seems to be very easy to promote something which belongs to the past. That means, a paper-based printing machine will not only get a huge audience in a museum, but also millions of people who want to try it out for real, for example for printing their PhD thesis. The same is true for the SAT test, which is taken every year by thousands of people, not because they want an experience of the past, but because they believe in the SAT test. In contrast, the website Academia.edu is hated by the mainstream; everybody knows why it doesn’t work. It seems that the campaigns for the outdated technologies are very strong, and it is impossible to guide people to a new idea.

The reason why it is so easy to make promotion for outdated educational systems and less productive technology has to do with how modern society is organized. It consists of neo-luddite groups. For example, the group of printing press fans is very large. They like everything which has to do with mechanical printing invented 200 years ago. So it is easy to promote this social behavior, because there is a group somebody can be part of. The same is true for a group called SAT fans. These types of neo-luddites prefer the SAT test, which is known as a very bad choice because it is unfair to 90% of the people, but nevertheless the number of fans is huge. In contrast, Academia.edu has no fangroup of neo-luddites, because Academia.edu is brand new; it is modern educational technology, and that means it is hated by the neo-luddites. It is hard or even impossible to make promotion for it, because there is no neo-luddite-like group the user can be a part of.

What I want to express is that it is very easy and common to make commercial advertisements for technology used by neo-luddites, which is equal to mainstream behavior, and very hard to promote something which has to do with modernity. Most people prefer to stay in the shadow of a group; they get respect because of a certain behavior, and this has to do with rejecting technology. For example, steampunk fans of the classical printing press get rewarded for rejecting modern DTP alternatives like laser printers or digital publishing. And the steampunk community rewards such rejection, so that people who can express well that a steam-driven printing machine is superior will get a bonus in the social game.


Richard Price and the invention of consumerism in Academia

What is the reason why the talks of Richard Price (CEO of Academia.edu) are seen as provocative? He is a smart guy with a sympathetic smile, but what he says sounds a bit uncommon. It is not a technical problem; his website is not using crazy technology. Compared to Facebook and Twitter, Academia.edu can be seen as old-school technology. But what the website is about is a different attitude for future students. To explain the situation, we must first go a step back and explain what meritocracy is.

The term is used to describe the Chinese education system. The main idea is to run pre-university tests in which 50% of the students fail, and let the other 50% go to university. After a year, a second test has to be taken, in which again 50% fail, and so on. At the end, the academic elite is recruited from around 1% of all the people of one year. They are the best of the best.
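The attrition described above can be sketched numerically. This is only an illustration: the exact number of selection rounds and the cohort size are my assumptions, not figures from the text.

```python
# Repeated 50% selection rounds, as described for the meritocratic test system.
def remaining_fraction(pass_rate: float, rounds: int) -> float:
    """Fraction of the starting cohort left after `rounds` tests."""
    return pass_rate ** rounds

cohort = 100_000  # hypothetical starting cohort
for rounds in range(1, 8):
    left = cohort * remaining_fraction(0.5, rounds)
    print(f"after test {rounds}: {left:,.0f} students remain")
# About seven 50% cuts reduce the cohort to roughly 1% (0.5**7 ≈ 0.78%).
```

So the "1% elite" mentioned above corresponds to roughly seven successive 50% selections.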

The interesting aspect of the Chinese meritocracy system is that it is very similar to the system used in Europe and the US. That means, every university works with selection and lets the others fail. The first thing US college students learn is how to fail a test. It can be an English class or math, an undergraduate course or a graduate course. So we can say that the US academic system works as a meritocracy: some people fail and others do not. This is decided by a test, for example a pre-university test, an IQ test, a TOEFL language test or whatever. In most cases it is a multiple-choice test, but it can also be a written exam including a peer review. The main idea behind any test is to reduce the number of students. Not all students will be successful, and the test divides them into two groups.

The alternative to a meritocratic academic system is called consumerism. That means, the students do not see themselves as some kind of elite; instead they see their career as a good, like a motorcycle. And the idea is to go shopping. That means, the students buy courses, they buy books and they pay money for ideas. What Richard Price is explaining to the public is that everybody is a consumer. The idea is no longer to take tests; the idea is to see education as a shopping experience.

In the world of Academia.edu there is no university or academic elite; instead there are goods, services, money and consumers. Every behavior is welcome, as long as it has to do with spending money on education. The idea is to see papers as some kind of trophy which is collected by consumers. The danger is high that this will result in the same situation which is known from the shopping America of the past: instead of buying goods for a certain need, shopping is seen as an end in itself. This kind of egocentric shopping mentality has to compensate for weaknesses in character. Institutions boycotting the website under slogans like “dear scholars, delete your account at Academia.edu” are not doing so because of the website itself, or because of the idea of electronic publication of manuscripts; they are doing so because they don’t like the transition away from a meritocratic academic system.

The real benefit of Academia.edu

Sometimes Academia.edu is marketed as a platform for accelerating science and connecting researchers. But the real benefit of the website is something different. I want to give a concrete example.

In the screenshot above, a folder is shown in which a newly created paper is available. The content is nothing special. It is a quick & dirty paper with 13 pages and some citations. The problem is that the file needs too much space on my hard drive. The text file itself occupies 103 KB, and the images take 300 KB on top. The correct command for deleting the file under Linux is simply “rm paper33-open.lyx”, but that is not the way it works. The better alternative is to upload the content somewhere on the internet. But who is interested in the file? The website of Richard Price, for example. It can be used as storage for written papers which should be provided as Open Access to the world. This improves the situation, because I no longer need to store the content on my computer. This makes room for something new, which is interesting.

There is a second reason why Academia.edu is useful. It has to do with the low quality of my paper. Yes, the 13 pages are not very good. The paper has a lot of errors, and I did not really understand the topic. It has to do with motion graphs, which are used in robotics. The best-practice method to write a better paper is to start from scratch. The workflow has to be seen as a continuous process: paper 1 is written with the aim that the next paper 2 will be much better, and so forth. The old (badly written) paper is thrown away, and in the next iteration the writer hopefully understands the topic much better. The problem is that someone can only write an improved version if he first writes the bad one.

What will happen if everybody throws away his first draft and uploads the file to the internet? Yes, it will weaken the quality of science, and it will lower the average quality level of the authors. But badly written papers are a sign that someone out there is trying to write with better quality. Low quality today results in high quality in the future. What Academia.edu provides today is low-quality content, and this will help to improve future science. It is a place for trying out new ideas without the need to be competitive with real scientists. I wouldn’t call it an online journal; it is more a writing club for beginners. The perfect user has absolutely no idea about science and has never written a paper before.

Aaron Swartz about OpenLibrary

In a talk from the year 2008, Aaron Swartz first introduces Wikipedia, and in the second part of his speech he comes to a possible extension (“TOC2008 – Aaron Swartz – Wikipedia and the Future of Libraries”). The upstream for Wikipedia is Google. That means, people surf through the internet, read some websites and add this information to Wikipedia. But what is upstream from that? Where does the web get its knowledge from? Aaron Swartz’s idea was to build a collection of books from which Wikipedia can cite. Because Wikipedia itself is not a source; Wikipedia can only cite information that is already there. It is some kind of news aggregation. The question is how to use the Wikipedia principle for creating new knowledge. At around 30:00 in his speech, Swartz focuses entirely on libraries, books and book publishers.

Let us go a step back and define what Wikipedia is. Wikipedia has successfully replaced commercial encyclopedias like the Encyclopædia Britannica. But the Britannica is only one book of many millions. How can we replace the library system as a whole? Answering the question is not easy, and Aaron Swartz hadn’t found the answer in his speech, because the creation of knowledge is a complicated process. It is more than only putting a website online. Knowledge creation is historically done by universities and researchers. Today, 9 years after Aaron Swartz’s talk, the answer seems obvious: it is called Open Science.

In another video, “Aaron Swartz on The Open Library”, Aaron Swartz explains his vision of an Open Library in detail. As far as I can see, he programmed on his MacBook a prototype website which allows searching for ISBN numbers. The problem is that the task of building such a library is surprisingly hard. The problem is less to program a website in PHP than to explain the general idea. Today, 9 years later, many talks and conferences are held in which the same topic is discussed, but this time more than one person is involved: people from different subjects. The problem with an electronic library is that academic publishing itself is far more complicated than a simple Wikipedia. Perhaps some raw numbers: the English version of Wikipedia has around 5,000 people who are heavily involved and regularly write new articles. In contrast, the worldwide academic publishing industry comprises around 2-3 million people who write papers and attend conferences. Measured by the number of people, Wikipedia has about 0.2% of the size of academic publishing. That means an open library is not only a second Wikipedia; it is something different.

I think the main mistake of Aaron Swartz was that his vision was only technically driven. In reality, the electronic library is not a question of programming a PHP website; it is more a question of copyright, existing academic publishers like Elsevier, and today’s university system. From a technical point of view it is indeed very easy to put books and papers online, but that is not the way the system works. Instead, the situation is comparable with the software industry, in which copying a floppy disk is the smallest problem. What counts in the software industry are legal rights, profits and monopolies. Ignoring the existing system is not a good idea.

The problem is not how to upload existing papers to the internet; the problem is how to create new papers under a Creative Commons license. It is the same situation Richard Stallman was confronted with in the 1980s, and writing the first free operating system was hard work. That means, an Open Library project must reverse-engineer the existing library and create something from scratch under a free license.

Community building with Academia.edu

From a technical point of view, it is possible to ignore Academia.edu and post a self-created paper on a blog. But one thing is missing there: the integration into a broader community. What does that mean? Academia.edu has a nice but often overlooked feature called “related papers”. It is simply a menu bar, visible to the right of a paper, which shows other papers that have – according to Academia.edu – a similar topic. For example, I have written a paper about the RHEL distribution, and in the “related papers” section, papers from other authors about the Linux ecosystem are presented. Sure, it is simply an advertisement; if I click on one of these recommendations, the pageview counter of the other author will increase by one. But it is also a nice feature for reading information which goes in the same direction. It can be used as some kind of search engine, where a complete paper is used to find similar papers.

Such a connection is not as strong as somebody commenting on a paper or citing it in a footnote. It is more an automatically generated search result, but it is nice to know that one’s own paper is not alone in the world; there are other authors who have written something similar.

In old-school academic writing, such a search engine is called plagiarism detection. And if the engine finds a match, it is always a bad sign. But plagiarism is something different from a community. A community is when many authors write about the same topic, perhaps even in the same style, but produce different content.

Can we remove the peer-review process from science?

I’m citing Richard Price: “The web has no peer-review process”. That means, anybody can upload his podcasts, blogs and Facebook updates. There is no higher instance which has to approve any of these changes. The only exception to that rule is academia. Here we have a traditional peer review in place. Which means that a normal student can’t simply get his content published, and even the paper of a professor will be rejected by an Elsevier journal if the editor says it is not important or not scientific.

Let us investigate what the internet uses instead of a peer-review system for measuring the quality of information. There are three main elements:

– full-text search engines (Google)
– manually created directories like dmoz
– and references from other webpages, like blogs and Wikipedia

This collection of orientation aids works surprisingly efficiently. That means, even without a centralized place in which the information is aggregated, everybody finds the content he needs. Why does academia need a dedicated peer-review system centralized on Elsevier, PubMed and Wiley? The answer is that it is not academia that needs such a system, but Elsevier and Wiley.

Decentralized peer-review

Open Science is usually thought of with the idea in mind that future academic publishing will be based on peer review. But peer review is not the answer to the quality problem; it is the bug. Let us investigate in detail how a centralized peer review works. The idea is that a beginner student writes a paper and gives it to his professor. The professor reads the paper and creates a bibliography of all the good papers. Some of the students are not included in the bibliography, and their papers are rejected by the professor. The problem is that we have a single point of failure: only the professor has the knowledge to evaluate the content, and only the professor is allowed to create an annotated bibliography with a quality judgement.

The better idea is to cancel peer review. That means to see all players in the game as equal. If somebody wants to judge the papers of others, he is allowed to do so. For example, I feel the need to judge “Rooter: A Methodology for the Typical Unification of Access Points and Redundancy” and say that the paper was generated by a random generator and contains nonsense. But can I call myself a peer-review instance which stands above Jeremy Stribling? No, I’m only one of 3 billion internet users who is allowed to post his quality judgement to the internet. I can create bibliographies of very well written papers and of badly written papers. But I do not have the right to reject a certain paper or to prevent others from creating their own evaluations.

What Open Science needs is a decentralized peer-review system. That means a huge number of different evaluations. What could such a system look like? It consists of:

– wiki-based annotated bibliographies
– blog articles with paper reviews
– citations in one’s own articles
– search engines which count the number of citations
– social networks which count the +1 votes for a paper
– citations of a paper inside Wikipedia
– and many more

That means, there is no single peer-review instance to which a student submits his paper; instead, his paper can be referenced in many places with different perspectives. It is only a technical question how to aggregate the different evaluations of one paper. One formula could be that a paper counts as scientific if it was cited by Wikipedia and has at least one citation in another paper. But other criteria are also possible. The classical centralized peer-review system, in which an anonymous peer reviewer at Elsevier makes a yes/no decision about the quality of a paper, is ridiculous. It is interesting to know what his judgement is, but only in comparison with the judgement of the rest of the internet.
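The aggregation rule above can be sketched in a few lines. This is a toy model: the signal names and weights are made up for illustration, not taken from any existing system.

```python
def is_scientific(signals: dict) -> bool:
    """The formula from the text: cited by Wikipedia AND cited in at least one paper."""
    return (signals.get("wikipedia_citations", 0) >= 1
            and signals.get("paper_citations", 0) >= 1)

def evaluation_score(signals: dict) -> float:
    """An alternative criterion: a weighted sum over decentralized signals.
    The weights are arbitrary assumptions."""
    weights = {
        "wikipedia_citations": 3.0,
        "paper_citations": 2.0,
        "blog_reviews": 1.0,
        "social_votes": 0.5,
    }
    return sum(weights.get(k, 0.0) * v for k, v in signals.items())

paper = {"wikipedia_citations": 1, "paper_citations": 2,
         "blog_reviews": 3, "social_votes": 4}
print(is_scientific(paper), evaluation_score(paper))  # → True 12.0
```

Which formula is “correct” is exactly the open question: every aggregator on the internet could pick its own.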

A centralized peer-review system is something which has to be overcome. It is an invention of the past, which was needed in the pre-internet age.

Peer-review is obsolete

In the above blog post, the peer-review system is described as inefficient. I would agree only halfway. It is right that experts can judge a paper wrongly, but the problem is not the creation of annotated bibliographies itself; the problem is a centralized peer-review system. I think that Open Access will need more peer review than ever, and it needs a more diverse form: not only by a journal, but also from bloggers, Wikipedia, independent bibliographies and automatic search engines.

From the old peer review it is possible to learn many things, for example how to censor information, how to establish a bias and how to upvote underdogs. The aim is not to overcome these flaws; it is more an evolutionary process towards a peer-review system which is even more biased. That means, not only Elsevier can create a list of recommended papers; amateurs can do so as well. Let’s take a look at Amazon customer ratings. Are these votes objective, unbiased and well informed? They are the opposite, and exactly this is what peer review should be.

Switching back from LyX is not possible

Just for fun, I tried not to use LyX but the normal LibreOffice Writer for formatting a PDF paper. The idea was that the document was short (3 pages) and LibreOffice should be well suited for formatting a simple document without references and images. No, it isn’t. It took a huge amount of time to put the text into the window via copy and paste, and then all the headlines had to be adjusted with manual formatting. Even though there is a feature for defining what a headline is, it seems that after selecting all the text and changing the font size, this feature stops working. I tried my best to give the paper a serious look, but the result was not very good. Page numbering at the bottom is missing, and between the paragraphs there is bad-looking spacing without any sense.

And the example was only a short paper of 3 pages, without any floating figures. So my conclusion is that even for simple documents, the LyX text processor can’t be replaced by anything else. In general, I’m a fan of open-source software like LibreOffice, but for writing a scientific paper the software is a nightmare. The main problem is that, like MS Word, it uses the WYSIWYG paradigm, which means the user sees the DIN A4 page and positions the text there. Or to make it clear: LibreOffice is not using LaTeX as a backend.
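For comparison, this is roughly what the same document looks like as LaTeX source, of the kind LyX generates behind the scenes. It is a minimal sketch, not an actual LyX export; the point is that the features missing in the Writer attempt (headline styles, page numbers) come for free.

```latex
\documentclass[a4paper]{article} % page layout handled by the class
\usepackage[utf8]{inputenc}
\begin{document}
\title{A Short Paper}
\author{An Author}
\maketitle
\section{Introduction} % numbered and styled automatically
Text goes here.
\section{Conclusion}
More text; page numbers appear at the bottom by default.
\end{document}
```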

When newbies have their first contact with LaTeX and LyX, they may think that with LibreOffice they can do the same without learning a new language. But a deeper look into the subject shows that there are reasons why LaTeX + LyX is so often used by publishers.