Stack-based programming

The article linked above gives a good introduction to why Forth is different from other programming languages. The article itself is about the 256-byte stack of the 6502 CPU. The first thing the modern CC65 compiler does is extend this stack with a software stack. It seems that high-level compilers for C, and even more so for C++, need large stacks. The reason is that a function can have local variables and objects can call other objects; mapping this onto hardware requires a large stack.

Let us take a look at Forth CPUs and the Forth programming language. The difference is that the stack size there is very small. The stack of the J1 CPU is 33 cells deep, each cell 16 bits wide; the newest GA144 Forth CPU has only a 10-element data stack.
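To make the constraint concrete, here is a minimal Python sketch of such a fixed-depth data stack (the class and its names are my own invention for illustration; a real Forth CPU implements this in hardware). The point is simply that the eleventh push onto a 10-cell stack has nowhere to go:

```python
class FixedStack:
    """A data stack with a hard depth limit, like the one in a Forth CPU."""

    def __init__(self, depth):
        self.depth = depth
        self.cells = []

    def push(self, value):
        if len(self.cells) >= self.depth:
            raise OverflowError("stack overflow: depth limit reached")
        self.cells.append(value)

    def pop(self):
        if not self.cells:
            raise IndexError("stack underflow")
        return self.cells.pop()


# The GA144 data stack holds only 10 elements:
stack = FixedStack(10)
for i in range(10):
    stack.push(i)          # fills the stack completely

overflowed = False
try:
    stack.push(10)         # the 11th push fails
except OverflowError:
    overflowed = True
```

A deep chain of function calls, each keeping a few values on the stack, hits this limit almost immediately, which is why Forth code is written to keep the stack nearly empty.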

In both cases the stack of the Forth CPU is smaller than the stack of the Commodore 64, and this is the reason why compiler construction for a Forth CPU is not practical. If even a normal C compiler like CC65 has to build its own software stack to compile source code correctly, what should a potential C compiler for the GA144 do? I have no idea; the only option Forth programmers have is to avoid local variables and optimize their programs to use hardly any stack.

The general assumption about Forth CPUs is that these devices are stack-based machines. In reality they could be called stackless CPUs, because they have no stack or only a very small one.

Apart from Forth CPUs, I found an interesting piece of documentation about the stack usage of mainstream languages like C/C++. The ARM Infocenter writes that C/C++ makes intensive use of the stack for storing:

• backup of registers

• local variables, local arrays, structures, unions

• variables for optimization

It also gives some hints on how to reduce stack memory consumption, for example avoiding local variables and not using large local arrays or structures.

Is the ARM Infocenter right? Absolutely. They know a lot about C/C++ compilers and CPUs. And it explains the difference between C and Forth: Forth and dedicated Forth chips are computers with a small amount of stack space, and programming in Forth means avoiding the stack. The C64 community has at least a 256-byte stack for storing local variables; the Forth community has only about 10 cells. Reducing the usage further is possible, because instead of writing a computer program it is always possible to convert an algorithm into AND, NOT and OR logic gates, either with a compiler or by hand. Forth is not the most minimalistic system, because beneath Forth are logic gates which use no stack or program counter at all. Such computers use a breadboard for wiring the logic gates on the fly. There is no program, only Boolean algebra. From the perspective of the ENIAC, a Forth system is bloatware.


In some books about the advantages of Forth-based stack machines, the development of mainstream computing around the x86 architecture is sometimes presented as a conspiracy against Forth, as something which went the wrong way. But the reason why the GA144 CPU is not used in the mainstream while ARM, Intel and other vendors are is driven by the needs of higher programming languages. Suppose we as programmers are not interested in following the advice to reduce local variables. Suppose we want to use a lot of local variables and object-oriented classes which are stored heavily on the stack. What would the C/C++ compiler look like, and on which CPU would it run? The answer is simple. If the C/C++ source code uses lots of local variables, the compiler demands a lot of stack space. To make the machine code fast, the best case is a CPU with a large stack on board, so the compiler doesn't need to build a software stack in main memory like CC65 does. And here is the real explanation why current hardware and compiler designs prefer register-based CPUs: because programmers write software in a certain way.

In theory, it is possible to implement an algorithm in a stack-saving form. But this decreases the productivity of the programmer. Instead of thinking about the problem itself, he is trying to minimize the number of variables, with the only purpose of running his code on a certain type of hardware. This is what Forth programmers are doing: first they build a limited CPU architecture with a small number of registers and a small stack, and then they try to program such a system.
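What "stack-saving form" means can be sketched in a few lines of Python (the example and its names are my own; the same transformation applies in C or Forth). The recursive version pushes a new frame with its locals for every element, while the rewritten version gets by with a single frame and one running variable:

```python
# Natural, recursive form: every call pushes a new frame
# holding its own local variable `head`.
def total_recursive(numbers):
    if not numbers:
        return 0
    head = numbers[0]      # local variable, lives in the stack frame
    return head + total_recursive(numbers[1:])


# Stack-saving form: one frame, one accumulator, no recursion.
def total_iterative(numbers):
    acc = 0
    for n in numbers:
        acc += n
    return acc
```

Both compute the same sum, but only the second would fit on a 10-cell data stack; the price is that the programmer spends his attention on the rewriting instead of on the problem.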


How to bring Aimbots forward

In classical Aimbot forums, the AutoIt language is routinely used for creating so-called Aimbots. The reason is that AutoIt is an interpreted language with an integrated pixel-search feature, which makes it easy to access an external program. But there are also tutorials available which use C# as the programming language, and perhaps C# is better suited if the program gets more complex. First we should define what exactly an Aimbot is.

I would call it a C++ program which uses pixel search as input and sends keystrokes as output to interact with an existing game. The important thing about Aimbots is that they are beginner-friendly. In contrast to theoretical AI, and also in contrast to classical robotics competitions like Micromouse or RoboCup, an Aimbot can really be created by amateurs, that is, people who apart from a week of programming experience have no further technical skills. And the setting in which Aimbots are created is usually very relaxed, which results in a creative atmosphere.

But let us get to the problems. First, it is difficult to realize a pixel-search-like function on the Linux operating system. It has something to do with the Wayland interface, and above all with the small number of people who use Linux for gaming, so the problem of a missing pixel-search feature does not escalate in support forums. Under the MS-Windows operating system, in contrast, there are many more users who need a pixel-search-like function, whether in AutoIt, C# or C++. But without a pixel-search feature it is not possible to access the raw data of an existing game. So the user is forced to create his own game from scratch, which without doubt increases the difficulty. For example, the Mario AI game was first programmed by the community only in order to program an Aimbot for it afterwards. Using a pixel-search-like interface makes the workflow more flexible, because users can decide on their own which game they want to play with an AI: user 1 likes Tetris, user 2 prefers Sokoban, user 3 a shoot-'em-up, and so forth.
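The core of a pixel search is simple. Here is a minimal Python sketch (function name and frame layout are my own; a real Aimbot would read the frame from the screen, for example with AutoIt's PixelSearch or a screen-capture library, rather than from an in-memory list):

```python
def pixel_search(frame, target, tolerance=0):
    """Return the (x, y) of the first pixel matching `target` RGB, or None.

    `frame` is a list of rows, each row a list of (r, g, b) tuples --
    a stand-in for a real screen capture.
    """
    for y, row in enumerate(frame):
        for x, (r, g, b) in enumerate(row):
            if (abs(r - target[0]) <= tolerance and
                    abs(g - target[1]) <= tolerance and
                    abs(b - target[2]) <= tolerance):
                return (x, y)
    return None


# A 3x3 "screen" of black pixels with one red pixel at (2, 1):
frame = [[(0, 0, 0)] * 3 for _ in range(3)]
frame[1][2] = (255, 0, 0)
hit = pixel_search(frame, (255, 0, 0))
```

Once the coordinates of an interesting pixel are known, the Aimbot's second half (sending keystrokes or mouse movements) takes over; the `tolerance` parameter mirrors the shade-variation idea from AutoIt's built-in function.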

In my opinion, a pixel-search feature plus keystroke sending is the minimal requirement for being part of the Aimbot community. After that, we can discuss in detail whether C#, C++ or AutoIt is the better programming language. So the first thing to make Aimbot programming more popular is to realize a pixel-search interface for Linux.

Helvetica font for a thesis?

The community of academics has a clear assumption about which font is used in a thesis. The answer is Times New Roman 11pt. Around 99% of all papers on Google Scholar are written in that font, and nearly all publisher templates and Elsevier documents prefer it. Only minor deviations from the standard are allowed; for example, the KOMA-Script template in LaTeX uses a sans-serif font for the headings, which is a nice contrast to the standard font used in the columns. But why is a serif font so popular?

To answer the question we must look at the stepbrother of Times, the Helvetica font family. In offline printing and typographic design, Helvetica is used mainly for headings and large signage in cities. But it is not used for printed books or magazines. In contrast, Helvetica is the standard online font. That means most web browsers and Android phones use a sans-serif font for displaying text.

The screenshot was taken from the Google Chrome browser and shows a Wikipedia entry. For better visibility a box was zoomed to the maximum. What we see on the screen is not Times New Roman but a Helvetica-like typeface. It is used not only for the headline but for the body text. It seems that Times is the standard font in printed documents, while Arial-like fonts are used as screen fonts. The reason is simple: a sans-serif font can be read more easily on screen than a serif font. In contrast to a monospaced font, the individual letters differ in width, but in contrast to a Times-like font something is missing. A Helvetica font is often called "cold", because it is different from a Times-like font.

Again the question: which font is right for a thesis? I would suggest that this depends on the medium. If the aim is to print out the document, then the classical serif font is great. But if the aim is to read the PDF file on screen only, then perhaps a Helvetica-like font is the better choice. The critics are right when they say that documents set in Helvetica look like drafts. The font looks cheap, not like art.
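For anyone who wants to try the on-screen variant, a minimal LaTeX sketch; the package names are the standard TeX Live ones, and TeX Gyre Heros is a free Helvetica-like family (the choice of class and font here is only an example, not a recommendation from any publisher):

```latex
% Switch the body font to a Helvetica clone for on-screen reading.
\documentclass[11pt]{scrartcl}   % a KOMA-Script class, as mentioned above
\usepackage{tgheros}             % TeX Gyre Heros, a free Helvetica-like font
\renewcommand{\familydefault}{\sfdefault}  % use sans-serif as the body font
\begin{document}
This paragraph renders in a Helvetica-like typeface.
\end{document}
```

Compiling the same text once with and once without the `\renewcommand` line makes the print-versus-screen comparison easy.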

Research on the Internet

Science and academia taking place online is slightly different from normal science and academia. The reason is not only the technical problem of digitizing existing papers but a new kind of community. On the Internet, the overall idea is not to research nature or do philosophy; the overall idea is to be online. That means the Internet works first of all with digital information: if a web server is not online, it is not part of the Internet. Science is only the second step; it is a special kind of content which is uploaded.

The most important fact is that scientists from the offline world are not automatically good Internet scientists. The reason has to do with different cultures. The Internet was in its beginning a hacker project. It was driven by UNIX and open source. Later, commercial companies also took over the Internet. But the hacker community plus commercial companies is a different culture from offline science. Offline science is oriented around groups of people. That means it is important whether a certain person is a scientist or not; it has something to do with personal biography. If somebody has worked ten years at a university, then he is a scientist. But this personal biography has nothing to do with the Internet. In the extreme case, the scientist is not visible on the Internet at all, or only a part of him is visible, for example if he has a homepage.

Internet science works differently. The main idea is that the people are anonymous and instead the information which is available to everyone is what counts. For example, the typical online forum consists of thousands of pages, written by bots and unknown persons. The person behind the information is less important; the people are only a tool, and what counts is the online forum itself. That means the Internet is by default open to everyone. This is not the case in offline science.

It is hard to tell which kind of science is better. From the perspective of Open Access, the Internet community is the better science. But from the perspective of technological advancement, offline science is more developed, because it is traditional science, which has taken place for centuries. The good news is that it is easy to map the science which takes place online. The number of articles posted to Wikipedia can be counted exactly, and the number of hits Google finds for certain keywords can also be measured. And what Google does not know does not exist. Much harder is the idea of mapping offline science. In my opinion it is not possible: if content sits behind a paywall or in a printed library, then we cannot say which type of content is in there.

What we can say about offline science is that it looks different from online science. The second fact is that it is built around certain people. That means not everybody is welcome, only a minority. For example, to become a member of a classical learned society, some preconditions must be fulfilled, and to become a member of a library the same is true. The main idea behind offline science is secrecy, while the main idea behind Internet science is openness.

Let us take a look at so-called conspiracy forums on the Internet. The most important fact is that they exist. From the beginning, the Internet was the number one place for talking about conspiracies. The reason is simple: the Internet has a lack of information. Only the content which was uploaded to the Internet is available, and this is only a small part of the worldwide knowledge; the rest is a matter of speculation. Most conspiracy forums discuss which information already exists but is not visible to the public, and the idea is to reduce the gap, in most cases without success. Getting access to a certain commercial book is very hard, but getting access to really important knowledge is impossible. The result is that the Internet project can be called a failure, because the amount of information there is limited. For most questions Google has no answer, although the information is out there. The problem is that we must distinguish between online-available and offline-available information.

But let us go back to offline science. The main idea behind offline science is power. Knowledge is a form of currency, and having access to it is better than having no access to it. This idea is not new; it can be traced back at least 500 years. A surprising fact is that even though all people live on the same planet at the same time, not all people have the same information. The simplest form is the separation between people who have access to the Internet and people who don't. Having access is better, no doubt. But not all people with Internet access have the same rights. There is a further gap between normal Internet users, who search with Google, and first-class Internet users, who apart from Google have additional search engines, for example Web of Science. The Web of Science website looks boring to the normal Internet user: he sees a login form asking for username and password, and around 99% of the Internet population do not have an account there, so they can't use the service. From the perspective of somebody with a Web of Science account, these normal users have no Internet, because they see less than he sees.

We can describe the distribution of knowledge as a game which works around people. They try to get access to important information, and they can win the game or lose it. Wikipedia is also part of the game. The idea there is that by default everybody is welcome. The disadvantage of Wikipedia is that the knowledge there lags behind the state of the art. The reason is that the users who edit the website are not experts but amateurs; they take their knowledge from public sources. What Wikipedia tries to do is summarize the knowledge in the public domain while ignoring offline science. The funny thing is that Wikipedia thereby stands in contrast to real science. Offline science also deals with physics, math and history, but the shared identity is different: Wikipedia, like the Internet, is by default open to everybody, while offline science is by default organized in societies.

There is an intersection between online science and offline science: both are driven by people. Taking part in offline science is motivated by the search for wisdom, and the same motivates the people who use the Internet. The difference is the question of what to do with the new knowledge. The answer of the Internet is easy: share it with others. In offline science the answer is to hide the knowledge, because it is an asset. Both communities have developed incentives to regulate the actions of their members. For example, Wikipedia is based on the idea that people contribute knowledge, while in offline science somebody who is not publishing a paper will get more money. How differently online and offline science work can be seen in the forum. The topics discussed there are "offline science" and how to behave correctly. The topic is not how to contribute to Wikipedia or how to upload a paper. No, the subject is reduced to a certain kind of science, which takes place outside of the Internet. For example, the latest topic is about "handling an editor". Only people who are involved in offline science will understand the question itself; it has to do with a special aspect of the peer-review system and the possible actions there. It is not the editor of Wikipedia who is addressed, and not the bibliography posted in an online forum: it is a question of offline science. About the OP itself I can't say much, because offline science is not very well researched; most of it is hidden behind paywalls and secrecy. But what we can say is that the OP is irrelevant for Internet-based science, because there are no editors there who have to be "handled".

I do not see a direct competition between offline and online science. I see only two systems which work in parallel with different rules of engagement. Internet-based online science has its background in the hacker movement, which took place on the early UNIX-based Internet created in the 1970s, while offline science is much older and can be dated back through the history of mankind in general. To understand online science better, I want to cite the declaration of John Perry Barlow. It is very young; it was published about 20 years ago. The core idea of the declaration is that the Internet and the offline world are different.

The main idea behind cyberspace is that it has no physical representation. It can be described as a game with self-given rules. Inventing games is not new, but the Internet is a game which is open to 7 billion people. The best example of the conflict between the Internet's declaration of independence and the real world is copyright-protected information. For example, a new movie can be watched in a cinema but cannot be seen on YouTube. That means the Internet is less powerful than the offline world. The same is true for copyright-protected science documents: they can be read in a library, but not online in the web browser.

On the other hand, in some cases the Internet is superior. For example, Wikipedia can be read online, but until now nobody has printed out all its content as books for distribution to normal libraries. What the libraries have done instead is buy a computer to show the online Wikipedia in the library.

Someone has called Barlow's text stupid, but there is much truth in it. The core feature of the Internet is that somebody can enter a URL in a web browser and this gets him access to the information. That is something different from what is normal in the offline world, where access to something is more complex. The interesting aspect of the Internet is that it was not invented as a revolution, like for example communism, which tried to modify the existing society; the Internet was invented as a parallel game which runs in addition to the normal world.


For measuring the difference between offline and online science, the Worldcat website is a good starting point. According to Worldcat, they have records for 500 million scientific articles, most of them not as full text but only as a BibTeX entry. That means Worldcat knows that the information exists, but if the user wants to read a paper he must go to a library. In contrast, the Google Scholar service provides access to around 50 million scientific articles which are searchable online. The difference between the two numbers can be explained by copyright law. It is so powerful that online science has fewer papers than real science.

Apart from the 500 million records in Worldcat, there are additional articles which were never published through libraries but contain knowledge produced in the past. This hidden knowledge increases the gap further. The description is simple: offline science (real science) consists of 500+ million papers written by scientists, while online science has access to only around 50 million papers, plus a small amount of unique content like Wikipedia which was created for the Internet. I'm not sure if it is yet correct to speak of online science at all, because it is mostly a subdivision of traditional science. I would see it more as an extension of the UNIX culture, which means that apart from books about programming and the TCP/IP protocol, some science-related material was created with the aim of uploading it to the Internet.

How to program object-oriented?

The topic was explained in a former blog post, but repetition is always good. What is OOP? First of all, OOP is an advanced form of procedural programming. It is not something totally new, but rather an improvement of existing techniques. For learning OOP it is best to start with procedural programming, for example in C or in Python. The idea is that the programmer writes down commands which are executed by the computer. For doing so, he has some basic constructs, for example if, for, variables and a way of defining functions. A well-written C program consists of some functions which contain commands. The user can then execute one of the functions, for example init() or show().

What most programmers talk about is the algorithm, which means how to reach a certain goal with the basic commands. For example, it is possible to arrange the commands so that they print out the prime numbers up to 100, or so that they search a text file for a pattern.
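The prime-number example can be sketched in a few lines of procedural code. Here is a minimal Python version (the function names are my own; trial division is only one of several possible algorithms):

```python
def is_prime(n):
    """Check primality by trial division up to the square root."""
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True


def primes_up_to(limit):
    """Collect all primes up to and including `limit`."""
    return [n for n in range(2, limit + 1) if is_prime(n)]
```

This is procedural programming in its pure form: two free-standing functions, each containing a sequence of commands, and nothing else.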

So far we know what procedural programming is. But what is OOP? First of all, it is nothing completely new, and it has nothing to do with the idea that in the real world everything is an object. No, OOP is simply a way of writing not only 5 functions in a C program but 50. The idea is to group the functions into classes. A class is like a program within the program: it has some functions and some shared variables, called class attributes. Programming object-oriented means only using one additional keyword (class) to group existing C-style functions into classes. Everything else from procedural programming, for example the if statement or the idea that the programmer writes down an algorithm, stays the same.
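The grouping described above can be shown directly. Here is a minimal Python sketch (the class name is hypothetical) that takes the same kind of prime functions and bundles them, together with a shared attribute, into one class:

```python
class PrimeFinder:
    """Groups related functions and a shared attribute into one class."""

    def __init__(self, limit):
        self.limit = limit          # class attribute shared by the methods

    def is_prime(self, n):
        # The algorithm inside is unchanged procedural code.
        if n < 2:
            return False
        for d in range(2, int(n ** 0.5) + 1):
            if n % d == 0:
                return False
        return True

    def find(self):
        return [n for n in range(2, self.limit + 1) if self.is_prime(n)]


finder = PrimeFinder(30)
result = finder.find()
```

Note that nothing inside the methods changed: the if statements and loops are plain procedural code, and only the `class` keyword provides the grouping.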

Some C programmers don't like object-oriented programming very much, and they have a point, because the same feature can be realized in plain C. An easy idea is to store the C functions in separate files, so class1 would be file1.c and so forth. But in general, using OOP in daily programming helps a lot, because it is more comfortable.

At the end, I want to talk about the weakness of object-oriented programming. The problem is that OOP alone does not answer how to program. For example, if someone wants to program a prime-number algorithm, OOP alone will not help him. What the user needs is an algorithm; OOP is only an option for implementing an existing algorithm as runnable source code. That means if a programmer sees a prime-number implementation in Turbo Pascal, C or Lua on Stack Overflow, he can easily create the corresponding C++ implementation of it. But if he does not know what the algorithm looks like, he cannot program the source code.

The reason why object-oriented programming is used so often in practice is that it stores an algorithm in fixed source code. That means if the source code is available, the program can be executed on any computer worldwide; once the source code is written, the problem is solved. Object-oriented programming in general, and C++ in particular, is a universal language and the quasi-standard for programming applications. The reason is that C++ can be executed directly on a machine, even by people who have not written the source code.

OpenAccess meets Donald E. Knuth

The debate around Open Access surprisingly ignores one important milestone: the TeX software for generating high-quality papers. Let us first take a look at the old publications by Donald E. Knuth from the 1970s in which he described the general idea. The early books about TeX are fresh even today, because they start not with computer programming itself but with a history lesson about Linotype printers. From that base, Donald Knuth developed a text language not for describing the final PostScript page, but for generating such a page out of a markup language. Everybody who is familiar with the UNIX operating system knows that TeX is comparable to troff, but with more features. Surprisingly, outside of the hacker community most scientists are not aware of TeX. And if they know the system, because the standard templates of IEEE and Elsevier are based on TeX, they think the software is not very important or is outdated today.

Is TeX still relevant for publishing a paper? Yes: despite many attempts to make TeX obsolete, the software is more important than before. Not in its original iteration, as the TeX system plus a DVI previewer, but in an improved version called XeLaTeX, together with the PDF format and modern TrueType fonts.
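As a minimal illustration of that pipeline, here is a sketch of a XeLaTeX document; the chosen font is only an example, and any TrueType/OpenType font installed on the system could be substituted:

```latex
% Minimal XeLaTeX document: compile with `xelatex paper.tex` to get a PDF.
\documentclass[11pt]{article}
\usepackage{fontspec}            % XeLaTeX's interface to system fonts
\setmainfont{TeX Gyre Termes}    % a free Times-like font, as an example
\begin{document}
\section{Introduction}
XeLaTeX typesets this text with a modern TrueType/OpenType font
and writes a PDF directly, with no DVI step.
\end{document}
```

The author types the markup, runs one command, and receives a finished PDF, which is exactly the efficiency argument made below.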

The question is, why is the TeX community obsessed with typesetting itself? Is it really important which font a paper is set in, or whether the line breaking is correct? The reason such details are discussed is that the people have time to think about them. Usually the paper itself is written in a very short time: the author enters the text, presses the run button, and the perfectly formatted PDF paper is there. The pipeline is more efficient compared to MS Word or FrameMaker from Adobe. That means the author who has typed in the text has produced it in half the time and has time left to discuss details, for example which font is really good. Users of MS Word have no such free time, because they are absorbed in creating the manuscript itself; they are happy if all the figures are visible and the PDF is generated without crashing the computer.

Why not more people are familiar with TeX is simple: the people involved in higher education and the university system simply have no background in computer science. They see themselves as historians, physicians and everything in between, but not as computer scientists or hackers. That results in ignorance of TeX. For example, let us look at the last paper of Richard Price. The PDF file was created with Acrobat Distiller on the Windows operating system, and the software for entering the text was "Arbortext Advanced". That means Price, or rather his publisher at PLOS One, is not familiar with TeX; instead a proprietary MS-Word-like program was used, which compared to TeX is very time-consuming.

But why is TeX not used heavily by academic publishers? The answer has to do with copyright law. TeX and UNIX are both products of the hacker community; their philosophy is open source and the free flow of information. That is not compatible with academia itself. Neither physics-based science nor academia shares these goals; instead, the hacker community is isolated. Using a proprietary MS-Windows operating system with a 1000 US$ program for entering the text fits better with traditional academia. That means the aim is not to be open, but to restrict the circulation of papers, create paywalls and increase the costs of reproducing the work.

The real reason why TeX is not used heavily by publishers is that the software is too advanced. Its main features are zero cost and fast formatting of a paper. From the point of view of mainstream academia this is a problem: for somebody trying to protect information, block knowledge and keep normal people out of the process, TeX is something which has to be overcome. Not using TeX is the shared identity of today's publishers, because it helps them distribute copyright-protected journals.

daily OpenAccess answers

Talking about the passing percentage

An interesting question was asked recently at the above URL. It was interesting because the normal way of talking about the situation is on a personal level: for example, a single student asks why he has failed the test and what his error in a certain question was. The OP instead asks about the percentage of all students who failed the test. And yes, there is an answer; it has to do with costs in academia. Attending a university costs around 30,000 US$ per year. The price is paid either by the student himself or by taxpayer money. Reducing the number of students studying at the university is one answer for reducing the costs. If a course lets all students pass, it is not the quality of the course that decreases, but the amount of money the university gets from a higher instance.

Reducing the passing rate is possible only with new inventions in the higher-education sector, for example blended learning, academic social networks and Udacity-like companies. All these new inventions work with a different funding model than the traditional university sector.

Is the platform useful?

The same URL has another interesting question which I want to answer. In my opinion, the platform is useful. The OP is right to question whether it is helpful for networking with academics; indeed, I see his criticism similarly. But it is also possible to use the platform alone, as some kind of backup in the cloud. After my initial signup for an account, I have learned a lot from it, simply because I wrote some papers, uploaded them and tried to improve my style. At least in my opinion, my papers today are better than my papers of two years ago. And it is very easy to upload many "just-for-fun" papers to the website, because it costs nothing.

It's true that the website could be much better; for example, I miss getting feedback on my papers. But this kind of feedback the serious user can get elsewhere, for example from Wikipedia. So I see the platform as one piece together with Google Scholar, Open Science and free YouTube lectures.

Another interesting question

This time the OP asks for a high-quality alternative to Arxiv. He is not alone with his mistrust of the platform. The question is, why are researchers fans of Arxiv but not of the alternative? From a formal point of view, both websites work the same. But there is something different: Arxiv is located inside the university system, while the alternative is a for-profit company. The for-profit platform is a kind of cyberspace which breathes the same air as John Perry Barlow and Richard Stallman, while Arxiv is real science which is located outside the Internet. Arxiv is not open to everyone and is not devoted to UNIX, but it is some kind of mirror for the serious scientists.