Agile Prototyping

The term agile programming is well known among developers. It is associated with continuous delivery, user-driven bug tracking and version control systems. The problem with agile programming is that, even though it is agile, it is not flexible enough. In many so-called agile projects the wrong programming language is used, notably C, C++ and Java. Somebody may argue that these are industry-standard programming languages and that no alternative is available. The better idea is to distinguish between agile programming and agile prototyping. Programming means using one of the mentioned industry-standard languages to write software, for example writing a business application in Java. Before this can be done, an earlier step called GUI prototyping is needed. Using Java for creating prototypes is not the best choice. Better tools are painting programs like GIMP, Excel spreadsheets and, last but not least, Python. All these files can be managed under the agile paradigm too; it makes sense to file a bug report for a malfunctioning Excel spreadsheet or a non-working Python script. The difference from normal programming is that in most cases no binary files are generated. That means no standalone application exists; instead the development process stays within the prototype.

Let us define how agile prototyping looks in reality. It is a git folder which contains Python scripts, plain text files, some Word documents, PNG images and perhaps some bash scripts. It is not programming in the classical sense, which means creating C++ and C files that get compiled into binary files, but the pre-step before the code can be written. Agile prototyping isn't done by programmers but by design experts within the marketing department. What they do is convert customer requirements into a mockup. This mockup is very slow (Python is at least 30x slower than a comparable C++ program), or in the case of an Excel spreadsheet it is called "no code", meaning it is simply a table with numbers and some formulas in it. The new thing in agile prototyping is that this design step isn't ignored but is seen as an important part of the overall project. I would guess that prototyping needs 90% of the total project time. That means software development is not primarily about creating C++ code, but about creating lots of prototypes.

Let us define what Python is. Sometimes it is called a programming language like Java, but it isn't. Python scripts are a mess: they run very slowly, they are not ready to install in a production environment, and a Python 3 script will not run well under Python 2. Also, many libraries don't make sense, and it is not possible to create a fast library from Python source code. That means that in professional software development Python is a no-go. It is similar to a MATLAB script or a Visual Basic macro. But Python has a major feature: it is great for creating throw-away prototypes. That means creating a GUI on the fly, asking for comments and then reprogramming everything from scratch.

Suppose a prototype was created; what is the next step? Surprisingly, it is very easy to convert a prototype into runnable software. If a Python GUI and some spreadsheets are already there, it's no problem for the programmers to convert this into a C++ application which runs very fast and compiles into executable code. The task can't be done autonomously, but compared to creating a C++ app from scratch it is easy going. The reason is that in the prototype all the important decisions have already been made. The GUI is fixed, the algorithms are known, the interfaces are clear. All the programmer has to do is write efficient C++ code for that specification. Once he has done so, the application can be delivered to the end user. It can be installed with a package manager and will run very fast on a PC. In contrast, it's not recommended to deliver a prototype to the end user, because a prototype is often buggy and its performance is poor.

I think it's important to see Python and C++ as different kinds of tools. Python is great for GUI design and documenting algorithms, while C++ is great for creating fast-running code which runs on any device.

Low fidelity prototyping
Game designers create wireframe models for brainstorming. The idea is to reduce the invested time to a minimum and focus on the general concept. The result is some kind of game, but in an abstract form. From a critical standpoint, everything is wrong with an animated wireframe realized in Python. First, the wireframe doesn't look realistic; second, the Python implementation achieves only a low frame rate; and third, there is no sound. But for a prototype it is OK.
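To make this concrete, a low-fidelity wireframe needs little more than a projection function. The following Python sketch (all names are my own invention, not from any real project) computes one animation frame of a rotating cube, which is the kind of throw-away math behind such a prototype:

```python
import math

# Vertices of a unit cube centered at the origin (hypothetical demo model)
CUBE = [(x, y, z) for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)]

def rotate_y(point, angle):
    """Rotate a 3D point around the y-axis by angle radians."""
    x, y, z = point
    c, s = math.cos(angle), math.sin(angle)
    return (c * x + s * z, y, -s * x + c * z)

def project(point, scale=100.0, distance=4.0):
    """Perspective-project a 3D point onto a 2D screen plane."""
    x, y, z = point
    factor = scale / (z + distance)
    return (x * factor, y * factor)

# One animation frame: rotate every vertex, then project it to 2D
frame = [project(rotate_y(p, 0.3)) for p in CUBE]
print(frame)
```

Drawing the projected points as lines on a tkinter canvas, once per timer tick, would yield the animated wireframe; sound and realistic rendering are deliberately out of scope for such a prototype.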


Throw-away prototyping

The following source code shows, in roughly 30 lines of Python, how to create a quick-and-dirty prototype. In the literature such applications are called throw-away GUI prototypes because the source code is written once and then deleted. The reason the code is not important is that everybody can recreate the whole prototype from scratch without knowing the original code. Instead, the idea is to use the pictures and the code as an intermediate artifact for communicating about the application. That means the source code is not committed to a git repository with the aim of storing it forever; instead the code is copy-and-pasted into a chatroom, and after two weeks all the postings are deleted on a regular basis.

From a technical perspective the shown prototype is not very advanced. It is not a binary file which runs out of the box; instead the Python interpreter is needed as a helper application. Also, the number of features the app provides is nearly zero. Apart from a simple "hello world", nothing will happen if somebody presses the button. That means it's a classical mockup. The main advantage is that it was created in a small amount of time. Typing in the source code, fixing some typos and starting the app from within the IDE can be done in under one hour. Even somebody who has never programmed in Python before will understand the idea.

#!/usr/bin/env python
# -*- coding: utf-8 -*-

import tkinter
from tkinter import scrolledtext

class GUI:
  def __init__(self):
    self.tkwindow = tkinter.Tk()
    self.tkwindow.geometry("400x320")
    # input
    self.widgetinput = scrolledtext.ScrolledText(self.tkwindow, width=30, height=5)
    self.widgetinput.insert(tkinter.END, "input")
    self.widgetinput.place(x=20, y=20)
    # processing
    self.widgetproc = scrolledtext.ScrolledText(self.tkwindow, width=30, height=5)
    self.widgetproc.insert(tkinter.END, "processing")
    self.widgetproc.place(x=20, y=120)
    # output
    self.widgetoutput = scrolledtext.ScrolledText(self.tkwindow, width=30, height=5)
    self.widgetoutput.insert(tkinter.END, "output")
    self.widgetoutput.place(x=20, y=220)
    # button
    self.button = tkinter.Button(self.tkwindow, text="run", command=self.pressed)
    self.button.place(x=300, y=20)
  def pressed(self):
    # mockup action: print a greeting into the output box
    self.widgetoutput.insert(tkinter.END, "\nhello world")
  def run(self):
    # main event loop
    self.tkwindow.mainloop()

myGUI = GUI()
myGUI.run()

GUI prototyping with Python

The term prototyping is classically used in the domain of GUI design. The idea is that before the real C++ programmer can start writing hard code, a design department has to deliver soft pre-steps: a concept, some drawings and a GUI prototype. According to my investigation, the best tool for creating quick-and-dirty GUI prototypes is Python together with libraries like tkinter and PyGTK. Both libraries are documented very well, and a first hello-world application can easily be created by a beginner in 10 lines of code.

Writing such a GUI prototype is done with commands for textboxes and buttons, plus an additional position parameter to place each widget in the app. Often a bit of logic is integrated to trigger an action when the button is pressed. The advantage over classical IDE tools, for example Glade and C++ Builder, is that with Python it is much easier to program a GUI prototype. Instead of coding the application in C++, only the Python scripting language is used. The disadvantage is that the created GUI prototype is restricted to a local computer and its performance is poor. That means a GUI prototype created on a Linux PC will perhaps not run on Windows, especially if the underlying Python interpreter has a different version. From a negative point of view, the created Python source code has a short time to live. It is created as a concept and is quickly replaced by something which works better.

In the screenshot a simple GUI is shown which was created with the tkinter library. It consists of some textfields and a drawing window. The idea behind the GUI prototype is that the code doesn't matter. The same application can be created from scratch without knowing the original code. This concept is called sourceless programming, or no-code, and means focusing on the GUI itself while ignoring what the underlying code looks like. What does that mean? The code for realizing the app was written in 50 lines of Python. But these lines of code are not important. The tkinter library is not important either. If someone wants to create his own version of the prototype, he can take the PyGTK library and write a different piece of code. The only important aspect is the look and feel: according to the concept, the app consists of two textfields, a drawing arena and an input field. That means even if I do not publish the source code as open source on GitHub, everybody else can reproduce the app by creating the prototype from scratch again. The precondition for this software development principle is that powerful prototyping tools are available, that is, software which allows someone with minimal programming skills to create a full-blown GUI from scratch in under 10 minutes.

What is programming?

At first glance this question is simple to answer. But it is a bit tricky, because the term programming is often used in the historical context of writing software. In the 1980s software programming was equal to creating things like operating systems, spreadsheet applications and games which could be started on an Intel 386 computer. Nowadays it makes sense to define programming with a slightly different, more detailed meaning. Programming can be divided into programming itself and the creation of a prototype. Programming itself was done in the 1980s with assembly language and in the 1990s with the C and C++ languages. It meant opening a text editor, putting the source code in it and compiling the source code into executable code. Often the next step was to bundle the compiled binary into a software package which could be distributed for one of the major operating systems like Linux and Microsoft Windows. To shorten the explanation a bit, we can say that the best programming language is C++, because it allows writing the fastest binary code. All comparisons show that source code written in C++ is translated into super-fast binaries which are often faster than hand-coded assembly language. Other languages like Turbo Pascal, Perl or Java are slower than C++, so they can't be recommended for programming.

C++ is only the best language for the programming task itself, not for the preceding prototyping step. This step isn't mentioned in classical programming books; the reason is that small, easy-to-write programs don't need a prototype. If somebody creates a prime number generator, he can write the source code in C++ syntax, compile the code into a binary file, and after some minor bugfixes he gets his application. Also, many games in the 1990s were created without any prototype. Instead, the programmer typed the source code directly into the editor in C++ syntax, and that was the whole project. In modern software engineering projects, which are more complex, programming alone is not enough. In most software projects, the C++ syntax is not the problem. The reason is that the programmer already knows what a for loop is, how to control the graphics card, or how to use multiple C++ classes. The problems are located somewhere else, notably in the prototyping step of the workflow.

Let us construct a standard case to define exactly what core programming is. As input, a working prototype is available. This mockup gets translated into C++ source code. Then the binary file is executed on the computer. Programming means converting the prototype into executable code which runs fast under a mainstream operating system. This raises an interesting question: who decides what the prototype will look like? And which programming language is used to create a prototype?

The interesting answer is that in the year 2018 both questions remain unanswered. The subject of software prototyping is not researched very well. Most authors assume that only core programming itself needs to be understood. If we focus on prototyping, much advice from classical programming becomes obsolete. The dominant C++ language is no longer needed. C++ is a bad choice for creating prototypes. The better choice is Python in combination with a painting program like GIMP, a documentation tool which can produce PDF documents, a mind-mapping tool and scripting tools like MATLAB. None of these prototyping tools is able to generate fast binary code comparable to C++ speed. But that was never the intention. A prototype is not created as a deliverable but as an internal communication tool for the software engineering team.

Perhaps the most widespread prototyping tool is Microsoft Excel. This spreadsheet application is used at office workplaces worldwide to create tables which contain formulas. An MS-Excel file can't be executed on its own; it needs an environment to get started. Promoting an MS-Excel file as a high-quality application doesn't make much sense. In comparison to software created with C++, MS-Excel is very slow, doesn't provide a real GUI, and only a reduced amount of information can be presented. But the comparison wasn't fair, because MS-Excel is used for prototyping purposes while C++ compilers are the best-practice method for creating a standalone application.

What I want to express is that modern software development can save a lot of time if the prototyping step gets more attention. The idea is to separate software engineering into a difficult subtask, which is prototyping, and an easy subtask, which is core programming. Easy means that the translation step from a prototype to an executable application is well understood. That means an average programmer is able to take the prototype and create from it the C++ software which can be started on all available systems. He will not run into any major problems; all he will find are smaller issues which can be treated as simple Stack Overflow questions. One of these subproblems is, for example, how to create a GUI in C++, or how to write a for loop in C++.

The more demanding task in software engineering is the prototyping step. That means defining what the GUI will look like and which algorithms are needed for the software. This step can't be handled by programmers; it is located somewhere else. Prototyping is the translation of customer demands into a mockup. For example, the customer needs a prime number generator and the prototype provides the blueprint. It contains the important algorithm, a simple GUI and a bit of documentation. In a software engineering project, the prototyping step takes 90% of the overall time while the programming step needs 10%.
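As a sketch of what the algorithmic part of such a blueprint might look like, here is a minimal Python version of the prime number generator (the function name and limits are my own choice, not part of any real specification). It pins down the algorithm that a programmer would later port to fast C++:

```python
def primes_up_to(limit):
    """Sieve of Eratosthenes: return all primes <= limit."""
    if limit < 2:
        return []
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for n in range(2, int(limit ** 0.5) + 1):
        if sieve[n]:
            # Cross out every multiple of n, starting at n*n
            for multiple in range(n * n, limit + 1, n):
                sieve[multiple] = False
    return [n for n, is_prime in enumerate(sieve) if is_prime]

print(primes_up_to(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

The prototype fixes the interface (a limit in, a list of primes out) and the algorithm; the C++ version only has to reproduce this behavior faster.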

Shortcomings of prototypes

To define the concept of software prototypes better, it's important to say something about situations in which the prototype shows disadvantages. Suppose we have created a prototype with Python, LibreOffice Calc, PHP and some PowerPoint slides. It is not a standalone app but a folder of files which have to be started one after another and are not ready for production. The Python scripts are slow and will run only on the developer's machine, while the LibreOffice spreadsheets contain only some calculations and are not intended as the final program. Everything is wrong with such a prototype folder: it has weak performance, and the Python and PHP scripts use too much CPU power. Everybody who argues that switching the programming language to C++ is needed is right. But is the design department, which created the prototype with the mentioned technology, wrong at the same time? No, they have done everything right, because a prototype is allowed to run slowly and has the freedom to be buggy. These issues can be overcome in the next step, which is called the transition from the prototype to a final application. If Python and PHP were great choices in a production environment, they would have replaced C++. But for production servers they aren't.

Some projects which have revolutionized academic publishing

If we search for how academic publishing works, most information is from the year 2000 and before and contains outdated best-practice methods. In some recent talks about library modernization the debate is grouped around the magic word "digitization", but this doesn't describe very well what state-of-the-art technology is. That is the reason why a short overview is necessary of which available technology has already changed the workflow in academia.

The first one is Google Scholar. This search engine has been mostly ignored. No books have been published about the engine yet, and in the public debate the engine is invisible. That means it is the big elephant in the room: it is used by 100% of researchers, but nobody talks about it, nor is anyone courageous enough to describe its advantages. What Google Scholar is, is very simple. Instead of searching only in the metadata, it is possible to search the full text of all existing papers. Such technology was not available before the year 2008, and especially not for free.

A second major breakthrough is the founding of the upload platform. The advantage is that the platform is free, open to everybody, and allows uploading PDF documents. All three features combined result in a very powerful distribution platform which can bypass existing publishers and existing libraries. As in the Google Scholar case, nobody talks about it, but most are aware of it. What we see in reality is some kind of professional ignorance. That means, if we ask 100 scholars, nobody will say that he has heard of it, but what he really wants to express is that the website doesn't fit the stories told about academic publishing.

The third important milestone in academic publishing is the invention of the LaTeX document system. It was invented a long time ago, in the same era as the UNIX operating system. LaTeX is more powerful than MS Word and Adobe InDesign combined. It allows creating academic documents in PDF format which contain a bibliography.
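As an illustration, a minimal LaTeX document with a bibliography could look as follows (the file name and citation key are hypothetical; any BibTeX database works the same way):

```latex
\documentclass{article}

\begin{document}
The \TeX{} typesetting tradition goes back decades~\cite{lamport1994}.

\bibliographystyle{plain}
% expects a file references.bib containing an entry with the key lamport1994
\bibliography{references}
\end{document}
```

Running pdflatex and bibtex over this produces a PDF with a formatted reference list, which is exactly the self-service workflow described here.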

What will happen if we combine all three inventions? Nobody knows exactly, but it will change everything. The combination of Google Scholar, the upload platform and LaTeX is able to replace the existing publishing infrastructure, will reduce costs and will make science open to the world. Let us describe an example workflow. The easiest interaction mode with the scientific community is to passively read existing information. This is possible with the Google Scholar website. It allows finding high-quality documents without further costs. Google Scholar works outside the university library; a standard internet connection is enough. In some countries access is blocked by the government, but this is also true for Wikipedia. Suppose the user has read many documents; then he can write his own paper. With LaTeX this is very easy. He doesn't need an external company to do the layout or proofreading; everything can be done alone within the LaTeX software, and as output the user gets a PDF file. And now comes the magic step. Thanks to the upload platform, it's possible to publish the self-created document on the internet so that everybody can read it. That means the entire workflow of academic publishing can be done outside the traditional university system and without any costs. The software is available for free on the internet, and the mentioned websites are free to use.

Right now, the number of people worldwide who are doing so is small. Most home users are not aware of what Google Scholar is, or what the advantage of the BibTeX format is. But the technology is there and it works great; it is only a question of how long it takes until millions of people recognize the advantage for themselves.


Why should somebody care about Google Scholar, the upload platform and LaTeX? Because the combination of all three will make classical academic institutions obsolete. If the author has his own profile on the internet, he no longer needs a publisher. And if all documents are digital only, no library is needed to archive the existing information. And if the library is no longer needed, there is no need to go to the university to get the latest scientific research. That means everything known about how academia worked in the past will become outdated. Universities, publishers, libraries and authors are coming under pressure. They have to reinvent themselves, and they will fail. That is the main reason why nobody is talking about the revolution: everybody is aware of the danger and is trying to play the existing game as long as possible. But the major pressure is not against the institutions; it has to do with money. The new all-digital publication system will become cheaper than ever before. A single paper will still not be created without any costs, but it will cost a fraction of what it did in the past.

Has Microsoft lost the war?

In an old YouTube video, "Microsoft Windows 3 and NT, 1991 Part 1", Bill Gates gave an overview of the Windows 3 and Windows NT product lines. He explained how Microsoft brings modern computer technology to the user, and he presented modern software and even networking capabilities. For 1991 the presented technology can be called advanced, and it was explained simply, so that any customer could understand it. But something is wrong with the presentation, and it is the reason why Microsoft was modern in 1991 but not in 2018. It is not about the Windows 3 operating system; the technology is great. The problem is that the user has to pay an enormous amount of money to profit from all these features. What is not shown in the promotional video is how much exactly. The shown 386 PC cost around 3000 US$, the MS-DOS 5 operating system was sold for 200 US$, and the presented Windows NT line cost even more. If the user also needs the Microsoft encyclopedia, the MS Office product line and the C++ development kit, he will pay 1000 US$ additionally. And that is only the price for two years. If he needs upgrades for software bought some months ago, he will pay even more. That means, if money is not important, Microsoft is a great company. But for most users this is a problem. The PC industry competes on technology and on price, and a company which is not able to reduce costs will not be able to survive.

Let us take a look at today's market position of Microsoft. For beginner users with a small need for software, Microsoft is great. As in the early 1990s, the customer buys a PC, gets a preinstalled Windows operating system, and additionally buys one game and the MS Office package. The total amount of money he has to spend is low, perhaps 300 US$. That is the behavior of many millions of customers worldwide. But if the customer has greater needs, he will recognize that Windows is a dead end. For example, suppose he visits the computer store with a list of software he needs: a database application, a server system, a C++ compiler, an encyclopedia, an office application, image manipulation, sound tools and a video editor. If he puts all the software in his cart and moves to the checkout, he will notice that the computer store asks him to pay 4000 US$ for the software alone. That is too much, and there is a need to search for an alternative. Microsoft isn't able to provide such an alternative. Their prices are fixed; some kind of software flatrate isn't available. What the customer can do, and what most professional users are doing, is switch to open source. So they get all the software they need and their costs are limited.

The business model of Microsoft worked in a time in which only a small amount of software was needed. That means the typical PC of the 1990s consisted of an operating system plus three additional programs, not more. Paying the license costs was affordable. Since the 1990s, times have changed. Today's power users have higher demands. They need all the software available on the market. That means they have a PC on which hundreds of programs are installed. The only business model able to provide such a service is open source plus a flatrate. That means the individual software package costs nothing or very little.

Open Access Gold is preventing progress

Open Access advocates promote the model as the future publishing system. Papers under that model are publicly accessible, and the financing is secured by libraries. At first glance, this is a promising model of how education works. But the truth is that Open Access slows down progress and is not the right way to go.

Let us focus on some preconditions under which Open Access Gold takes place. The first constraint is that today's publishing companies remain untouched. That means the old major players like Wiley, Springer and Elsevier will become the dominant stakeholders in the Open Access Gold world too. And the second untouched assumption is that the old-school libraries, which are government-funded, will remain the same. That means the taxpayer has to pay the invoice.

Instead of promoting Open Access Gold, the better idea would be realizing a competitive market in which Elsevier and Springer are under pressure and are replaced by newly founded publishing companies, for example PLOS, while at the same time the government-sponsored libraries are replaced by privately financed houses which have customers and are more open to new technology.

Open Access Gold is nothing but the admission that nothing has to change in the publishing system. That means Open Access is the combined effort of outdated publishers and outdated libraries to defend their weak position against technological progress. Open Access Gold will fail; it will be replaced by academic publishing which has lower costs than today's.

The third assumption of the Open Access Gold model is that only in this mode does it become possible to make papers accessible to the world for free. That means only if outdated publishers and obsolete libraries are in charge is it possible to provide free information for all. This assumption is wrong, because it describes not a free market but a monopolized situation which is financed by the government. The taxpayer finances the libraries, and the libraries finance Elsevier. That means Elsevier is financed by the taxpayer. The better approach is to cut down government-sponsored publication and let the market decide. The market is able to provide higher quality at lower costs.

Anti-market advocates usually argue that a market-driven academic publishing system is equal to paywall-protected content, which means that the public loses access to important information, especially in subjects like medicine. The opposite is true: there are many examples out there in which a for-profit attitude goes together with free access for everybody. The best example is the Apple iTunes store, which hosts thousands of podcasts and so-called iTunes U lectures from privately owned universities that are streamed to the world for free. The financing of such free content can be done with hardware sales (which is what Apple does) or with advertisement. Another example of a "free for everybody" but commercially oriented website is GitHub, which provides a large amount of free content but is not paid for with taxpayers' money.