A careful dive into the “Forth vs Linux” debate

Linux can be called the most advanced operating system right now. It is hacker friendly, it is open source, and it is used on many servers and workstations. Sometimes Linux is compared with NetBSD, FreeBSD or SunOS, because these systems were used early on workstations, but in general Linux is accepted as superior, mainly because of the size of its community and how fast changes are integrated into the kernel.

But there is a competitor in the shadows, called Forth. In theory, Forth could replace Linux, but it is unclear how exactly. Let us make things easier and describe what Forth can do for a mainstream user. A computer can be booted with qemu, but qemu needs an operating system stored in a .img file. Some GitHub projects are available which provide such a qemu image with Forth. The idea is similar to the RetroForth project that was programmed for the Commodore 64, only for the x86 PC instead.

The assumption taught by Linux is that after booting, the operating system must provide all the hardware drivers, for example for the network, the graphics card, the filesystem and so on, and that without the Linux kernel and the userland no additional program can be started. But is this assumption correct? Every computer has a BIOS by default. The BIOS provides the VESA standard, which is able to paint lines on the screen. So in theory it is possible to run a PC without an operating system, or with a very small one, for example QNX, which is much more compact than Linux. What if we program in Forth a small operating system which utilizes existing BIOS routines for getting network access, video graphics access and sound output to the speaker?

It is unclear right now whether this is possible, but it would bypass the existing Linux ecosystem. It would feel more like early MS-DOS, which contained only the msdos.sys and command.com files, but programmed in Forth. And thanks to Forth, no complicated C compiler is needed. The whole system would be a stripped-down version of Minix 2.

Suppose a Forth microkernel is able to start high-level programs and schedule them. Suppose the system fits in under 500 KB and is more efficient than current Linux distributions. What can the user do with such a system, and doesn't he have to write Forth code? Yes, he does. It is not possible to execute Linux binaries in Forth, so the user must write all the software from scratch. But as far as I have seen in the Forth documentation, it is possible to extend the ANS Forth standard with object-oriented features, and writing all the needed software is possible. At first glance the Forth syntax looks a bit complicated, but it is not more complicated than C++. The main advantage is that it will run efficiently, especially on low-power CPUs.

The disadvantage is that the user will lose most of today's ecosystem: the Linux operating system, the C/C++ compiler, existing libraries for OpenGL access and most other things created by the Open Source movement, like the Ubuntu ideology which has the goal of teaching programming to beginners. In contrast, a software ecosystem made of Forth is much more mature. That means Forth is the language of computer scientists.

What is the idea? The idea is to create a compact operating system which is faster than Linux and takes fewer resources than Linux. Such an operating system is not programmed in assembly language like the MenuetOS example, but in high-level Forth. That means assembly language is integrated into Forth for setting the BIOS VESA mode, but then the user can use Forth high-level commands to put lines on the screen. It is unclear how to program such a Forth OS, but in theory it would make sense.

The most dominant problem is perhaps that existing software written in C would no longer run in Forth. It is not possible to convert C source code into Forth. Only the opposite direction is possible, and there is no GitHub repository with such Forth source code available. That means the project would start from scratch and write everything new. As far as I can see on Stack Overflow and GitHub, there is no large community interested in such a project. Most programmers are fully occupied with Windows, Mac and Linux; they don't see a need for Forth. And the programmers who are familiar with Forth are also not interested, because it is too easy for them to write a QNX clone in Forth. It is something that doesn't bring the Forth movement forward.


Linux for Non-programmers?

The famous Ubuntu Linux distribution was founded with a certain vision of the future. The idea was to use Linux to motivate non-programmers to become programmers. The website ubuntuusers targets the ordinary PC user who is currently running MS-Windows but is interested in doing advanced things with the command line, the Python interpreter and Linux. Ubuntu is part of a broader movement with the aim of making programming knowledge more attractive to the masses. And with open source software this looks like an easy task, because advanced programming environments and Unix-like operating systems are distributed for free.

There is only a small problem: Ubuntu wasn't a success. People who were already programmers tried the system and loved it, but non-programmers are not interested in Ubuntu Linux. They preferred their MS-Windows box and stayed away from any kind of IDE, text editor, compiler and make tool.

Ubuntu was attacked from two sides at the same time. Professional programmers from within the Open Source community, like Linus Torvalds, didn't like Ubuntu because it wasn't possible to compile software with the system; instead it was an operating system for beginners with a reduced feature set. Newbies and non-programmers also didn't like Ubuntu, because it was too complicated for them and they understood nothing. As a consequence, Ubuntu was not accepted. The aim of attracting millions of non-programmers to do advanced things with their computer was not reached. The Ubuntu project failed, and the Linux community with it.

What can we learn from this experiment? We can learn that 99% of PC users are not interested in programming. What they want is to use a computer for surfing the internet, but they don't want to write a single line of code. Not in Windows, and also not in Linux. They have nothing against Python, Java or C++ in particular; they have something against programming in general. Any effort that tries to educate people in programming will fail. That means Ubuntu fails, the Windows Subsystem for Linux fails, Microsoft C# fails, and so on. The most advanced thing the mainstream user does with a computer is using it like a calculator in MS-Excel. That means he writes numbers into cells and then calculates the sum. MS-Excel is used by millions of people. But the next logical step after MS-Excel, which would be bash scripts, Visual Basic, Python or C++, is ignored. Real programming is something the mainstream user ignores.

What is the reason why normal users resist learning to program? It has to do with the fact that knowledge about programming is useless: it changes all the time, and what is right in programming language A is different in programming language B. Let us make an example. Suppose somebody learned the Perl language 20 years ago. What can he do with this knowledge now? Right, nothing; Perl is considered an obsolete language. It is no longer used for writing code. And the same will happen with all the other languages. From a technical perspective it makes absolute sense that C++11 looks different from C++98, but suppose somebody is familiar with C++98, what can he do next? Learning to program is some kind of dead end, and the mainstream user is aware of it. He knows that if he wants to write his first hello-world example in Python, he will invest many months and as a result he gets an error message that some library isn't installed on his computer. So the right decision is to stay away from the voodoo and let other people become programming experts. If he can give those Microsoft programmers a small amount of 50 US$ for the OEM license, the customer is happy, because he isn't forced to program for himself.

On that basis, Microsoft has created a business model, and the customer isn't interested in destroying this model with Linux. Because in the future envisioned by Linux, everybody needs to be a programmer.

Let us observe how the communication between programmers and non-programmers flows. It does not work in such a way that programmers teach mainstream users how to become programmers themselves. That was the idea behind Ubuntu, and it failed. Instead, the interaction works in such a way that the non-programmer pays 50 US$ into the pockets of a software company, and the company writes the software for him. The aim is that the number of programmers stays small, below 20 million. There is no need for a huge number of people to be able to debug a C++ application or install a webserver in under 5 minutes. Instead the division of labor is organized along capitalist lines, which means with money in the loop. That is the explanation for why Linux failed. Linux tries to convince mainstream users to become computer experts and learn to program on their own, so that the user no longer needs a software company like Microsoft but can create his own operating system. This vision isn't possible, because the value of learning to program is very small.

Programming isn't a universal skill which has to be taught to everybody in the internet age, and it isn't the road to understanding robotics. It is a skill that is outdated today. Programming was a major subject in the early 1980s, when the commercial software market didn't exist. At that time programming was the only way to get new software. Nowadays the situation has changed. To get the latest software, nobody needs to learn programming; instead he visits a software store and clicks the buy button.

The most pessimistic outlook is that learning to program right now isn't the best advice. Because even if somebody is very good at writing code, his knowledge will be outdated in 5 years. The software industry invents new programming languages and new backend libraries all the time. There is no general pattern that remains the same. In the worst case, somebody invests many hours in learning a language, and by the time he is an expert in it, the language is outdated and he has to learn the next one.

What the user wants

Let us first explain what the mainstream user is interested in. He wants to buy a Windows PC, he wants access to the internet, and he wants to try out different software. In that sense, the PC and the ecosystem around it can be called a great success. The trend is positive. Right now, more than two billion PCs are in use worldwide, and the number will grow in the future. On the other hand, the mainstream user isn't interested in becoming a power user, which would include learning to program. That means students are not motivated to attend a programming course, the elderly do not want to program in an old-school language, and the mainstream user isn't interested in giving up his Windows operating system and switching to a Linux distribution.

Sure, there is a group out there which is fascinated by Open Source software and by writing its own software. But compared to all PC users it is a minority. It is less than 0.3% of all users, which means the group is too small to matter. The problem in the media is that in the past this small group received a lot of attention, and the expectation was that this small group would become a large group. This didn't happen.

The mainstream user sees the advantage of having a PC compared to not having one. With a PC and a working internet connection he can send e-mail, watch YouTube and write letters in MS-Word. Without a PC he can't do so. The decision to become interested in PC technology is very easy, and it was made by nearly all people worldwide. But what is the advantage of learning to program and mastering the Linux operating system? Right, there is none; the customer loses his time and gets nothing. If he doesn't want to learn programming, he has made the rational decision. What he can do with his time instead is learn a foreign language, read a book or do sports.

Programming is boring

Sometimes programming is called exciting, because students can do what they want. They can create games, animate characters and write web applications. Others say that programming is hard, because C++ is so complicated and every year new programming languages are invented for different operating systems. But in reality, programming is boring. That means it is too easy and makes no sense. Let us go into the details. The best place to learn programming is Stack Overflow. It is the right portal for any language, no matter whether Python, C++ or Java is the subject. Stack Overflow is built around questions and answers. Somebody needs to know how to write a GUI, and an expert or amateur answers the question. Somebody may argue that this game never repeats itself and that every language and every topic is different. But the reality is that after a while all the languages look the same. They all come down to opening the text editor, writing a for loop, and calling a function inside the loop. The function gets a parameter and uses an if statement to decide what to do. Sometimes an external library is used to activate higher functions, for example painting on the GUI or writing to a text file, but in general that is everything programming is about.
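To make the point concrete, here is a deliberately generic sketch of exactly that structure in Python; all the names are arbitrary and only stand in for "any program in any language":

def process(item):
    # the function gets a parameter and uses an if statement to decide what to do
    if item % 2 == 0:
        return "even"
    return "odd"

for value in range(5):        # the for loop from the text editor
    label = process(value)    # inside the loop, a function is called
    print(value, label)       # an external facility writes the result out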

Some people are fascinated by this workflow, and others have been writing code like this for the last 30 years. But what can we learn from doing so? Right, nothing. That means programming itself is boring. The only way to raise the level is to program a certain kind of software which is interesting because of its domain. For example, if somebody is interested in sports he can write an app on that topic, or if somebody likes social networks, he can program a database which connects millions of people. Such a domain is indeed interesting, and the programmer will learn a lot.

But let us go back to coding itself. In the past, programming itself was an adventure. If somebody had only an Amiga 500 and wanted to program something in Pascal, he had to figure out how to do so. The same was true in the 1970s era if somebody wanted to program a mainframe. In both cases, no tutorials were available and the programmer had to start from scratch. But in the modern world, most of this has changed. Today, all operating systems have preinstalled compilers for every language, and the book market explains in detail how to use the tools. That means the programming process itself is a solved problem. The workflow is documented, and for every problem there is a solution. This makes programming a bit boring, because it is like a sandbox without any trap. It is no longer possible for a beginner, an amateur or an expert programmer to end up in a situation which was unexplored before. Instead he will have the same experience as the hundreds of people before him. He will forget the semicolon at the same position like all beginners, and he gets the same good feeling when the hello-world text is printed to the screen as everybody before him.

The problem with today's programmers is that most of them haven't recognized that everything repeats over and over again. They believe that their problem with C# is completely new and equal to a scientific problem which has to be handled by a university. In reality, programming isn't a science anymore; it is more like building a house. It has been done millions of times before, and it is impossible that somebody will reinvent the wheel.

Somebody may argue that building a house is exciting and a demanding task. But he ignores how often and how standardized the procedure is. It can't be called scientific work, because it is known how to do it. Programming doesn't have much to do with computer science; it is something which should be learned and taught off campus: in one's free time, in the public library or from friends. Or to make the point clear: a programming course at the university lowers the quality without any need. It slows down scientific progress and is a waste of time.

Programming a robot

There is one exception to the rule that programming is boring: if the task is to program a robot. This task doesn't have much in common with classical programming, and Stack Overflow won't help with it. Programming a robot is a scientific task; that means it has little to do with C++ or Python, but a lot to do with papers on Google Scholar. Programming a robot is equivalent to writing a PhD thesis. It isn't a repetitive task; it is exciting. At the beginning nobody knows what the result will look like, because the subject is unexplored territory. That means you will probably be the first human ever to try out a certain algorithm on a robot, and it fails … If you ask on Stack Overflow, nobody knows why. Even if the source code is error-free and the code works, it is hard to guess what the problem is. It depends on the domain, that means on which type of robot and which kind of algorithm was used. And sometimes it even depends on the student and on what he has tried out before. In contrast to normal programming, robot programming is a challenge. It is the core discipline at a good university and is never boring.

What was wrong with the 1970s?

From a technical perspective the 70s were a great decade. Color television was there, video cameras too, the IBM 705 mainframe worked great with 40 KB of RAM, and all the latest scientific papers were published on microfilm. But there was a small problem: the price. The IBM computer wasn't sold to a mass audience like the C-64; it cost a million US$. Microfilm had a great resolution, but the medium was expensive, and the same was true for color television. From a technical perspective it was similar to today's HDTV, but the price was huge.

What the decades after the 70s delivered wasn't new technology, and it wasn't the internet; it was a massive cost reduction. That means transferring a single bit over a telephone line became cheaper, recording a scene with a camera became affordable for everyone, and being the proud owner of a home computer became normal. The revolution had nothing to do with a new culture or a different society, but with a reduced price tag. That means the same products available in the 70s were also sold in the 80s, but they cost 100 times less. This working hypothesis helps a lot in imagining the 70s. We only have to assume that the price reduction is rolled back: each decade back, the price is 100 times higher. Let us make an example.

The iPhone costs around 500 US$ today. Under this assumption the same product would have cost 100 × 500 = 50,000 US$ in the 2000s, 5 million US$ in the 1990s, 500 million US$ in the 80s, and around 50 billion US$ in the 70s. I don't know whether an iPhone was sold in the 70s, but if it had existed it would have carried this high price.
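The arithmetic behind this working hypothesis can be written down in a few lines of Python. The 500 US$ starting price and the factor of 100 per decade are assumptions taken from the text above, not real market data:

price = 500      # assumed price of an iPhone today, in US$
factor = 100     # assumed price increase per decade going back in time
for decade in ["2000s", "1990s", "1980s", "1970s"]:
    price *= factor
    print(decade, format(price, ","), "US$")
# prints 50,000 then 5,000,000 then 500,000,000 then 50,000,000,000 US$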

In the 70s there was no problem with the internet, video compression or high-speed computers. The only problem in the 70s was the economy. The costs were high and nobody was able to buy consumer technology. I am referring to this problem because the same problem is visible in the 2010s. Robots, nanotechnology, holography and fast internet connections are technologies being invented right now. For example, the Honda Asimo robot is able to walk, run, open a bottle, and it can even fall down stairs … The only problem with the device is that it costs 100 million US$ apiece. If somebody is rich, he can buy an Asimo service robot today, but most people are not able to spend such an amount of money.

From a technological perspective a computer consists of RAM, software and a CPU. But this description leaves out the economic dimension. A computer is first of all a product. On one side there is the consumer and on the other side the supplier. To analyze new technology we must focus on the economic situation of the computer manufacturing company. The inventor of the ASIMO robot is Honda. Its profile as a technology company is comparable to IBM's in the 60s. That means the company is very powerful, has invented lots of things, and many of them are very advanced. Now let us look at how cheaply Honda shares its knowledge. How many papers describing how to build a robot has the company uploaded to Google Scholar? Answer: zero. That means the company has a huge amount of robots, but it isn't ready to share its knowledge for free. That means the robot is available and the knowledge for programming the software is there, but for the end user the price of getting one of these items is extremely high.

The consumer has the option to wait. If he waits 10 years from now, robots and papers about robots will become much cheaper. If he waits another 10 years, he can buy a service robot in his local store. Not because this technology will only be invented 20 years from now, but because it will take that long until the price is low enough for ordinary customers.

The main problem in robotics is not to invent the technology, but to reduce the price of technology that is already there. A low price is equivalent to reproducing the technology of the past.

The latest invention from Honda is the E2-DR disaster response robot. From a technical point of view the device is great. It is the most advanced system ever invented and contains lots of great patents. The only problem is the price tag. Today it is unknown how much the system costs, but it is probably more than the Honda Asimo. So I would guess that the out-of-the-box version of the E2-DR is sold to customers for 1 billion US$. But how would the world look if the device were sold for 100 US$? I mean the same product, only cheaper. Yes, it would amount to a revolution. It is not technology that holds back the future, but the price tag.

Links2 works great

I have tested some alternatives to Google Chrome, and the best one I have found is "links2 -g google.com". Links2 is a text browser which can be started in a graphical mode. Its CPU consumption is nearly zero, but at the same time it can show images. Sure, compared to Google Chrome some features are missing: a PDF viewer isn't included, YouTube videos don't play, and playback of webm files is only possible with external programs. But of all the lightweight browsers (Lynx, Dillo, Opera), links2 is the best one. It can be used with the keyboard only and has a text-only fallback mode, which makes it usable for command-line-only usage. The question now is: why does links2 take less than 0.2% CPU while Chrome eats all the resources? It has to do with the video playback feature. It seems that video streams are some kind of voodoo magic, and contrary to the announcements, YouTube and other web portals have not switched to HTML5 but invented their own technology which is integrated into WebKit. All browsers using this library produce the same amount of fan noise, so it has to be called bloatware.

If links2 had a few more functions (for example an integrated PDF viewer), I would call the software the ideal web browser. But even in its current configuration it is a useful tool. The interesting thing is that even video playback doesn't need a huge amount of CPU time. Desktop playback of a webm stream on its own needs almost no CPU resources. The only problem is the combination of video and websites. This seems to be very CPU hungry. What exactly is WebKit doing? Right, it is a secret. Nobody knows. What WebKit isn't doing is simply playing back an H.264 video, that much is clear. Instead some kind of proprietary, Flash-like streaming playback is started which has nothing to do with open standards, HTML5 or a free internet. I would guess that this is the main reason why links2 will never become a success: the software can't play back videos, so the mainstream user doesn't use it.

Update about GUI programming

The topic of GUI programming seems to be very complicated. There are many frameworks out there: C#, Java, X11, GNOME, Qt, Wayland, GTK+, SFML, OpenGL and many more. But which of them is right for creating a GUI that runs on any OS? To make it short, the easiest approach for creating a GUI is Python together with GTK+. This works on Linux and Windows, and it is well documented. Here is the link https://python-gtk-3-tutorial.readthedocs.io/en/latest/textview.html to an example text editor. I copied and pasted the code on Fedora, started the application with "python texteditor.py", and it runs quite well. The source code is very easy to understand, much easier than Java, C with GTK+, or Qt.
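To give an impression of how little code is needed, here is a minimal sketch of such a window, assuming the PyGObject bindings for GTK 3 are installed (on Fedora they are usually already present). It is a simplified stand-in for the tutorial's text editor, not a copy of it:

import gi
gi.require_version("Gtk", "3.0")
from gi.repository import Gtk

class TextEditorWindow(Gtk.Window):
    def __init__(self):
        super().__init__(title="Tiny Editor")
        self.set_default_size(500, 400)
        # a TextView inside a scrollable container holds the text
        scrolled = Gtk.ScrolledWindow()
        self.textview = Gtk.TextView()
        scrolled.add(self.textview)
        self.add(scrolled)

win = TextEditorWindow()
win.connect("destroy", Gtk.main_quit)
win.show_all()
Gtk.main()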

Let us describe the workflow for a real project. The good news is that the GUI is not the most important part of an application. It is a layer that sits on top of the software for interacting with the user. That means normal programs, and especially large programs, are written for the command line, and the GUI is only the interface. It can be programmed last, once everything else is working.

The example cited above with the Python GUI shows that GTK+ is a powerful and easy-to-use framework which is well documented for Python programmers. But using GTK+ with other languages like C and C++ is very complicated, I would guess because of missing documentation. At least the Python language works great together with GTK+. For a real project we can learn from this that a GUI prototype can be created quickly and painlessly with Python. Then the programmer has to decide: either he stays with Python and connects the GUI to the core program, which is hopefully written in C++, or he converts the Python source code manually into C++, which is complicated and not documented.
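As a rough sketch of the first option, the Python GUI can call into a C++ core that is compiled as a shared library, for example via ctypes. The library name libcore.so and the function process_text below are hypothetical placeholders; they only illustrate the wiring, not an existing project:

import ctypes

# load the (hypothetical) C++ core, compiled as a shared library
core = ctypes.CDLL("./libcore.so")
core.process_text.argtypes = [ctypes.c_char_p]
core.process_text.restype = ctypes.c_int

def on_button_clicked(button, textbuffer):
    # the GUI layer only collects the input and hands it to the C++ core;
    # it would be connected with: button.connect("clicked", on_button_clicked, textbuffer)
    start, end = textbuffer.get_bounds()
    text = textbuffer.get_text(start, end, True)
    result = core.process_text(text.encode("utf-8"))
    print("core returned", result)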

But let us stay for a while with the combination of Python and GTK+. The surprising part is that it has nothing to do with C# or Qt. Instead the old-school and not very widely used GTK+ framework is used, combined with a modern scripting language. The next important observation is that the source code, even though it is written in Python, doesn't look like a real object-oriented program. It is more similar to an extended XML file. The reason is that GUI programming is not real programming but more of a layout task. That means the designer defines rectangles and events. The GUI layer shouldn't be mistaken for a real program, and perhaps this is the reason why Python is great for such a task?

The code density of the example text editor GUI is very low. That means no algorithms are implemented; it looks more like a macro or a script. Not only because Python was used as the programming language, but also because of the statements themselves. Every widget has its class (searchdialog, Textviewwindow), and the classes have methods which define the interaction with the user. As mentioned above, the result looks similar to an XML file and is different from a normal program which implements algorithms. Maybe this is the explanation for why GUI programming is at the same time too easy and too hard? Classical languages like C/C++ are too powerful for merely creating some buttons on the screen, yet at the same time it is very complicated to program the GUI with C/C++.

I am not sure what the answer is. Perhaps the ideal software consists of a main program written in C++ and a frontend written in "Python + GTK+". The other option is to code the GUI first in Python and then convert the code into C++. I don't know. Right now I would guess that a GUI can be created more easily with XML plus Python than with C++ and without XML. Or let us explain the situation from the other perspective. Suppose only the GUI should be created and not the whole program. The fastest way of doing so is to use:

– Glade for creating the XML file

– Python for creating the GUI out of the XML file

– Python for specifying the event handlers and redrawing the buttons

The open problem is how to connect this prototype with the real application which is normally not created in Python but in a more sophisticated language like C/C++.
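To make the three steps concrete, here is a minimal sketch of loading a Glade file with Gtk.Builder. The file name editor.glade and the widget ids main_window and save_button are made up for this illustration; they have to match whatever is defined in the actual Glade file:

import gi
gi.require_version("Gtk", "3.0")
from gi.repository import Gtk

builder = Gtk.Builder()
builder.add_from_file("editor.glade")        # step 1: the layout comes from the Glade XML
window = builder.get_object("main_window")   # step 2: the widgets are built from the XML

def on_save_clicked(button):
    # step 3: event handlers are ordinary Python functions
    print("save requested")

builder.get_object("save_button").connect("clicked", on_save_clicked)
window.connect("destroy", Gtk.main_quit)
window.show_all()
Gtk.main()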

The bottleneck is called gtkmm

GUI programming looks complicated at first, especially under Linux. The good news is that it is possible to narrow down the reason for failure. First we ignore all frameworks and all programming languages and only describe the fastest way to build a new GUI from scratch. It is the combination of Python, GTK+ and Glade. This approach is very well documented on the internet, it works under Linux and Windows, and the user needs only a few lines of code to create a nice-looking GUI prototype.

With that we have identified a technology that works: GTK+. Now we want to describe something that doesn't work: programming in C++ (which is the best programming language) against GTK+. Programming in C++ itself is easy, it is also well documented and the speed is amazing. The connection between C++ and GTK+ is usually done through the interface called gtkmm. And this is the weak point in the formula. gtkmm is poorly documented, the library doesn't work, and it is too complicated to use.

Here is the summary so far. A working GUI prototype can be built with Python, GTK+ and Glade. Working software can be created with C++. But bringing it all together, that is, writing the code in C++ and connecting it with GTK+, is a pain.

It is unclear right now why the Python GTK+ binding works so well and is documented quite well, while the gtkmm binding doesn't work. Or, to shift the point of view a bit: GUI programming is easy if we ignore the C++ language and give Python a chance. Together with Glade it is possible to create a quick-and-dirty, nice-looking GUI which works everywhere. The problem is that most programmers are not interested in writing an app in Python, because the Python interpreter is slow, the Python standard changes all the time, and it is not possible to write a library in Python. They prefer C++. But if we want to use gtkmm for making a simple GUI, it is nearly impossible because it is too complicated, and nobody knows why.

The correct answer to the problem is to rewrite the gtkmm interface and document the result. That means creating a gtkmm interface which looks the same as the Python binding for GTK+. GTK+ is not the problem (it works great), and C++ is not the problem (it also works great); the problem is the connection between the two.

Poll: What is your favorite source of information about robotics?

Introduction: The latest poll, "favorite Linux distribution", wasn't a success. No one filled out the survey, even though it was visible for over a week in the sidebar on the right. Perhaps the question was not interesting enough? So, dear reader, I want to try something different. This time I want to know what your preferred knowledge source is for staying informed about robotics. Here are the details:

Description: Standing on the shoulders of giants is only possible if somebody has access to previous knowledge. Getting the right information in a short amount of time is difficult, and sometimes books and websites don't fit the user's needs and current skills. The internet has made the search process more complicated, because the possibilities are endless and not all of them are useful. Which strategy do you prefer for getting in-depth information about robotics and Artificial Intelligence?