Libraries are doomed – and they know it

At a recent conference about the future of public and university libraries, one speaker explained what the role of libraries will be. First he acknowledged that the Internet has changed libraries, and then he argued that in the future libraries will become maker spaces in which beginners can try out robotics hardware and 3D printing and get access to the internet. That vision got a lot of applause, and the talk was celebrated as an important contribution.

Before explaining the future role of libraries, it is important to look back at their golden years. That was a surprisingly long period, from the invention of the first printed books until around the 1990s, when the CD-ROM appeared. Over all those years the library fulfilled many tasks: it gave access to education to a broader audience who could not afford to buy books, it gave scientists a huge amount of academic literature, and it was a public space for meeting each other. If the internet had never been invented, the library would still have all these roles today. But in the early 1990s the well-known Tim Berners-Lee invented something that was better than the printed book: the World Wide Web. At first the Internet wasn't a threat to the library, because a dial-up telephone line was very expensive and the connection speed was too slow to transfer full-text books. But around the year 2000 the situation changed dramatically. Almost overnight, improvements to the original internet became available, for example search engines, flat-rate internet access and affordable microcomputers and smartphones.

Now it is time to go back to the statement from the introduction about the future role of libraries. Their role is very clear: they will disappear. The idea of converting old-school libraries into modern hackerspaces won't work, and the hope that printed books will have a future is also an illusion. Both visions are present in today's debate, but only as long as that debate is held by the libraries themselves. As far as I know, no real stakeholders were invited to the conference about the future of libraries, for example AT&T, Apple, Google or Microsoft. These companies will decide (in the name of their customers) what future education will look like and what valuable content is. The customers of AT&T, for example, will decide whether a 100 Mbit fiber connection is sufficient or whether 1 Gbit is a must-have. The customers of Google will decide which information has to be on top, and the customers of Microsoft will decide whether they need a Linux subsystem or not. None of these companies is involved in the library business or earns money by lending books. Perhaps they have even forgotten what a library was. And they are right: anyone who has tried Google's full-text search for searching inside a book will lose all faith in a traditional librarian who has to be asked in person.

Education and information of the future will still need manual work, just as the library did before. Bits and bytes cannot travel to the end user without cost. Somebody in the chain selects information, provides a space and builds the infrastructure. The prediction is that future media will need more manual labor than the systems of the past, but the work will no longer be organized around the classical library. For example, if AT&T wants to provide a super-fast internet connection in every home, the company will need construction workers and an excavator. If Microsoft wants to program a better operating system, it will also need well-skilled staff. The new thing is that no librarians are needed in the future, that is, people who have learned to sort books onto a shelf and are able to advise the younger generation on which books to read.

Or to make the point clear: it is not possible that, apart from the internet economy, there will be a need for an institution that collects, stores and sorts media. Everything that isn't managed by the internet itself simply isn't there. Today's libraries no longer provide a service; they are only customers of services. They pay a lot of money for internet access, computer hardware, software and so on. The problem is that, by definition, a company or an institution is first of all a system which provides a service. What can a library provide? Right, they have lost everything they had in the past. The ingredients for future education, media and information are owned by Apple, Google and Microsoft. Or, to be more specific, the customers of these companies own the information.

The only way to compete with these companies is to make them obsolete. For example, a better search engine than Google would be an answer, or a better operating system than Windows 10, or a better internet connection than AT&T has to offer. The problem is that the libraries are not interested in becoming one of the Google replacements. They believe that the internet will disappear by itself or that the public will change its mindset. How realistic is that outcome? Right, not very realistic, and that is the reason why the story told by the libraries is broken. They explain something to the audience, but it makes no sense.


Small C compiler for the Commodore 64

Today the Commodore 64 is no longer sold; it is outdated hardware, and only retro-nerds are still using the machine. But suppose we want to do some advanced things with the system: then a good starting point is a C compiler. The most remarkable aspect is that even in the year 2018 everything called "compiler construction" is still the future, no matter whether the target machine is an x86 PC or a MOS 6502 CPU. In contrast, ordinary programming on the C-64 in BASIC or assembly language can be called outdated. It makes no sense to program for the BASIC interpreter anymore, because it is much easier under a modern operating system and on modern hardware. Only the question of how to convert a high-level language like C into low-level machine code remains relevant.

Around the year 1980 the so-called "Small C" compiler was introduced to the home computer market. The project comprised not only the compiler itself but also books and tutorials about the inner workings of compilers. Almost 40 years later, everything around the "Small C" movement is still interesting. I found on the internet an old handbook from the year 1986 about the "Power C" compiler, http://project64.c64.org/misc/index.html, which is still very relevant for today's readership. It describes the C programming language and its translation into C64 assembly language.
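
To make the idea concrete, here is a minimal sketch of the kind of program such a compiler was built for. Small C only handled a restricted subset of the language (essentially int, char, pointers, arrays and functions; the exact subset differs between versions), so the example below deliberately stays inside that subset. The comments only indicate roughly what a 6502 compiler might emit; they are not the actual output of Small C or Power C.

/* add.c -- a program in the restricted "Small C" style:
   only int, char, pointers and functions, no structs, no floating point */
int add(int a, int b)
{
    return a + b;      /* a 6502 compiler would typically emit CLC/ADC-based 16-bit addition here */
}

int main()
{
    int sum;
    sum = add(2, 3);   /* arguments are pushed onto a stack, the function is called with JSR */
    return sum;
}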

The precondition for Small C is that the user isn't interested in programming assembly directly. Instead he needs an interpreter or a compiler which converts a different language into machine code. This concept smelled like a revolution in the mid-1980s, and it still does today. A quote from the Power C handbook:

“The use of the C programming language is probably one of the more important developments In the micro computer field because C is not tied to any one particular manufacturer’s hardware or Disk Operating System.”

Artificial Intelligence in Films

Hollywood has produced movies about almost every subject, with one exception: Artificial Intelligence is a non-topic in movies. To understand the problem in detail, let us first investigate one of the few examples and discuss whether it really shows Artificial Intelligence.

The most prominent example is the film WarGames (1983), in which an AI becomes self-aware while playing tic-tac-toe. The technology is introduced to the audience in a library scene in which the protagonist does some research to find out on which topics a former scientist had worked. Knowing the real history, it is easy to guess which real person is portrayed here: Claude Shannon. But the total amount of real AI research in the film is very limited.

The next candidates for AI in films are blockbusters like Ex Machina (2015), A.I. Artificial Intelligence (2001) and Star Trek: The Next Generation (1987-1994), which all build their plot around a robot that represents Artificial Intelligence. But in contrast to WarGames, this isn't grounded in real AI history; it is more a Hollywood version of machine intelligence, and the audience gets no details about how the problems were solved on a technical level. The same problem occurs in I, Robot (2004), in which the plot is about robots with a "positronic brain". This has nothing to do with AI as a technology, but is about the ethical consequences of AI. That means AI is introduced as already invented, without telling the details, and then the potential dangers and advantages for society are analyzed.

Other movies or TV shows with a more scientific background simply don't exist. That means Hollywood thinks the audience is not interested in the topic. That is surprising, because other subjects like crime investigation or the work in a hospital are portrayed very well by the entertainment industry. That means, if somebody has watched some episodes of Emergency Room (1994-2009), he feels almost able to work in a real hospital.

So why is Artificial Intelligence a non-topic in cinema? We don't know. I would guess it is a combination of an audience that is not interested in the details and screenwriters who are not familiar with expert systems, LISP, Forth and neural networks. We can say that Artificial Intelligence in films is some kind of taboo. The only aspect which is spread out in boring detail is the previously mentioned "ethical consequences of AI", for example in the TV show Westworld (2016-), which describes over more than 20 episodes how robots live together with humans. But this has nothing to do with Artificial Intelligence. As in Star Trek or Ex Machina, no realistic details are given; instead AI is explained with a positronic wonder brain, and only the consequences of the technology are explored.

Is the topic of programming a computer to play games perhaps too hard for a naive audience? Are they able to handle the truth? It seems that the film production industry thinks describing AI realistically is a mistake, and it has figured out how to prevent it. From a technical perspective it would be very easy to make movies a bit more realistic: a few keywords from real research papers would be enough, even if they were used in the wrong context. But it seems there is no need for a detailed description of important technology, especially not in the trivial science-fiction genre.

Smart pointers in C++: the easy way

Sometimes C++ is called complicated in comparison to Delphi and other managed languages, because it allows the programmer raw access to pointers and memory addresses. But with the new smart pointer feature this isn't a real problem and can be mastered by everyone. The main problem with pointers is that there is no problem: the source code works, there are no compiler errors, and everything is compiled into ultra-fast machine code. Here is a small example:

// g++ -std=c++14 hellopointer.cpp
#include <iostream>
#include <memory>

class Physics {
public:
  int pos;
  void run() {
    std::cout << pos << "\n";   // print the current position
  }
};

int main()
{
  // the shared_ptr owns the Physics object and deletes it automatically
  std::shared_ptr<Physics> p(new Physics);
  p->pos = 23;   // members are reached through ->, just like with a raw pointer
  p->run();
}

First we define a class with the usual syntax. The only difference is the creation of the object. I've decided to use the "shared_ptr" template, which according to the official documentation is smart and ready for practical use. And indeed, the member variables of the class can be changed through the dereferencing operator, and executing methods is also possible. What does the programmer need beyond this? Nothing; with smart pointers he has a powerful tool for creating his applications, no matter whether the domain is a web service, a desktop application or a game. Why does anybody promote the use of Perl, C#, Java, PHP or Python? I have no idea. All the other programming languages apart from C++ are outdated: they are slower, not ready for practical use, and do not work very well together with Forth.
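
For completeness, a short sketch of two standard-library alternatives: std::make_shared, which creates the object and its reference counter in one allocation, and std::unique_ptr for the common case where no shared ownership is needed. This is only an illustration of the C++14 facilities, not part of the example above.

// g++ -std=c++14 smartpointer2.cpp
#include <iostream>
#include <memory>

class Physics {
public:
  int pos;
  void run() { std::cout << pos << "\n"; }
};

int main()
{
  // make_shared allocates the object and the reference counter in one step
  auto p = std::make_shared<Physics>();
  p->pos = 23;
  p->run();

  // unique_ptr expresses exclusive ownership; the object is freed when q goes out of scope
  auto q = std::make_unique<Physics>();
  q->pos = 42;
  q->run();
}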

Painting images with GIMP

GIMP is a powerful tool for creating images – that is right, but why? The most interesting aspect of GIMP is that the software is able to replace the traditional painting workflow. Let us focus on the results of fine art. Usually a painter tries to create a realistic painting, for example of a person or a landscape. Sure, it is also possible to paint non-realistically, which is called abstract painting, but most pictures are created with a realistic goal. What the workflow usually looks like until an artist has finished such a painting is not well researched. According to the artists themselves, they need around three weeks until an oil painting is ready, and what they are doing in the meantime is unclear. They do not talk about painting; they only say that painting is an art which has to be learned in an art school.

And now GIMP comes into play. GIMP doesn't reinvent painting itself, but it is able to make the workflow more transparent. Explaining how a realistic oil painting is created in GIMP is easy. First, a photo of the face is opened. Then a second layer is created, with the aim of drawing the contour lines of the picture with the selection tool. Such a selection is then colored, and after some additional steps it is transformed into the oil painting. The overall process is not magic; it is a repetitive task which can be mastered by everyone who is familiar with the GIMP software.

The most important feature of the software is that it is possible to combine photography with painting. On layer 1 a 1024×768 pixel photograph can be loaded, while on layer 2 the oil painting is created from scratch. It is not a simple filter like in the ImageMagick software; the process is individual. The surprising fact is that the result will look very similar to a classical oil painting.

The optimal resolution in digital painting

Many users of GIMP and Photoshop are unsure about the correct image resolution. They have heard something about 300 dpi, but it is unclear what that means in an all-digital workflow. The question comes up every time the artist presses the New button in his preferred image manipulation software. There he has to enter a pixel resolution in the dialog, for example 600×400. What is the correct value? A rule of thumb is that the resolution can never be high enough. If digital painting wants to be competitive with the paintings of Van Gogh and others, a huge resolution is needed. The canvases of the early impressionists were around 2 meters by 1.4 meters, and reaching that quality digitally takes a lot of pixels.

On the other hand, there are some constraints which cannot be bypassed by the user. I want to give a small example: I tried out a pixel resolution of 20000×14000 in the GIMP dialog, which makes sense for an artwork. But it seems that this was a bit too much. Not because of the quality, but because the software can't handle it. It takes a lot of time until the empty image is created, and a look into the properties says that the image takes 2.7 GB in memory. Sure, if we export the image as a JPEG to the hard drive it takes only 10 MB, which is not very much for a file transferred over the internet, but for editing in an image manipulation program it has to be handled as raw data, that means as uncompressed 2.7 GB. The problem is that in mainstream computing the RAM is a limiting factor, which means that from an artist's perspective the resolution of 20k×14k is great, but it is not a practical choice for daily usage.
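
The order of magnitude is easy to estimate yourself. The small sketch below assumes plain 8-bit RGBA, i.e. 4 bytes per pixel; GIMP's real footprint is usually higher because of layers, undo history and internal buffers, which fits the 2.7 GB reported above.

// g++ -std=c++14 imagesize.cpp
#include <iostream>

int main()
{
  const long long width  = 20000;
  const long long height = 14000;
  const long long bytesPerPixel = 4;   // assumption: 8-bit RGBA, no extra buffers

  const double gigabytes = width * height * bytesPerPixel / (1024.0 * 1024.0 * 1024.0);
  std::cout << width << "x" << height << " px -> "
            << gigabytes << " GB uncompressed\n";   // roughly 1 GB of raw pixel data
}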

Let us try a smaller resolution, perhaps 4000×2850. This image size needs only 180 MB in main memory, and GIMP can handle the data without major concerns. The exported JPEG file is moderate in size too and takes only 500 kB, perhaps a bit more. So from my perspective the image should have no more than 4000 pixels in width. Sure, it would be nice to use more, but not with today's computing hardware. It is the upper limit of what current operating systems and hardware can handle, and it is equal to about 11 million pixels.

Sure, many artists will say that they need more resolution. If they want to fill an image which is 2 meters wide, the 4000 pixels will result in a poor resolution of about 50 dpi (4000 pixels / 79 inches ≈ 50 pixels per inch). That means the quality is way below the standard of 300 dpi and is perhaps inferior to a classical painting technique without a computer. But the problem is that today's computing power is limited. It is not possible to edit a picture in raw format which has 1 billion pixels, because this would blow up the 4 GB of RAM.
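
The same back-of-the-envelope calculation can be turned around: given a target print width and a target dpi, how many pixels are needed? The sketch below only encodes the arithmetic from this paragraph (1 inch = 2.54 cm); the concrete numbers are examples, not measurements.

// g++ -std=c++14 printsize.cpp
#include <iostream>

int main()
{
  const double printWidthMeters = 2.0;                     // example target canvas width
  const double inches = printWidthMeters * 100.0 / 2.54;   // about 78.7 inches

  // dpi delivered by a 4000 pixel wide image on that canvas
  std::cout << "4000 px on 2 m -> " << 4000.0 / inches << " dpi\n";

  // pixel width needed to reach the 300 dpi rule of thumb
  std::cout << "300 dpi on 2 m -> " << 300.0 * inches << " px\n";
}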

GIMP and the revolution in painting

Since around the year 2010 a huge revolution has been going on in the domain of art. It started with two well-known programs: Photoshop and GIMP. Both are famous for their ability to manipulate photos, and before the year 2010 they were used, even by experts, only for that reason. That means the standard workflow was that somebody took a digital picture and used an image manipulation program to edit the pixels a bit. But after a while the users recognized that it is also possible to paint with the software. Even without a trackpad, with just a normal mouse, remarkable effects can be achieved.

An objective look at the capabilities of GIMP shows us that the software is superior to classical painting. That means, if an artist usually paints with oil or acrylic colors in reality, he can switch to an all-digital workflow. He will not miss anything. The funny thing is that every style is possible: realistic painting, abstract painting, or simply copying an existing image just for fun. Some have tried to compare digital art with traditional art. Such a comparison will fail; digital art is here to stay and will in the long term replace every other technique. The main advantage is the cost. If an artist needs three weeks until his oil painting is ready, he gets the same result with GIMP in three days. And if he usually spends a lot of US dollars on painting equipment, he gets the same for free if he uses GIMP.

Around the year 2010 the typical use case in digital art was dominated by computer games. That means GIMP and other software were used to draw the characters of real-time role-playing games. That is the reason why 90% of all digital images have something to do with fantasy. But it is possible to use the same technology for different topics, for example for abstract paintings, for oil paintings and even for portrait painting.

What we can observe right now is that digital art is heavily criticized by the established artists. They simply ignore it, or they find lots of reasons not to use it. It is the same pattern by which Open Access is criticized. In most cases the artists simply say that they don't like it, that it is evil and will destroy everything. And they are right. The majority of images created with software have a low quality, were created by amateurs and have no value from an artist's perspective. They are some kind of mass product, and exactly this will destroy classical painting. If one technology (digital art) replaces the technology of the past (classical painting with a physical brush), this is called a revolution. And we are witnessing it right now.

Let us describe what classical art is. Usually the picture is large-scale (2 meters by 1 meter), was painted with oil colors and shows a realistic scene with people. Such a setting is what can be called the core of an art school. Other techniques like abstract painting, smaller images and so on are only an add-on. Right now most of these pictures are created by hand, without computer support. But from a technology point of view it is also possible to create a 2 m × 1 m picture in GIMP, in oil colors, showing a realistic scene. It does not happen very often, but it is possible. The main difference is the lower cost; it takes less time and less money to create such a picture with GIMP, I would guess by at least a factor of 10. The bottleneck in digital painting is printing on real paper. Today large inkjet printers can be used, but they deliver a lower quality than a handmade oil painting. To overcome this problem, a robotic oil-printing device would make sense. The idea is that a JPEG image is used as input and a robot arm brushes the colors onto the sheet.

Digital oil painting

Let us look at a typical YouTube tutorial with the title "Digital oil painting timelapse". What is interesting is that the overall workflow is transparent. The artist gives a hint that he took exactly 59 minutes to complete the work from scratch to the last brushstroke. And the audience can see exactly what the artist has done, so it is possible to do the same: to use the same colors, the same brushes and the same software. And that is perhaps the most important difference to classical, non-digital painting. There, everything is a secret. The audience is not allowed to observe the artist, the artist works in secrecy and gives no hint how long it takes, and in the end the audience gets only the picture, but not a tutorial on how to become a painter themselves. Some people argue that this is what real art is, but it is art as it was done in the past. The new form can be called Open Art: it provides more information for the audience, is entirely digital and produces the same or better quality.

Another remarkable feature of digital art is that it can be used as an addition to a traditional workflow. In the first 60 minutes the artist creates a digital image on the screen with GIMP. On layer 1 he draws the outline with a pencil and on layer 2 he fills the gaps with colors. Then he prints out both layers and paints the image on a real sheet of paper with oil colors. That means he uses neither an inkjet printer nor a robotic printer, but paints himself with real colors. The advantage is the separation between creating the art on the screen (that means thinking about the pencil drawing and the colors) and the mechanical application of the colors to the canvas. Such a semi-automatic workflow will take longer than 60 minutes, but is not as time-consuming as a traditional artwork.

Digital art is able to automate only parts of the workflow; for example it is possible to scan in a pencil drawing, print out a concept drawing, or print out only the pencil drawing and fill in the colors by hand. The fastest and most efficient workflow is an all-digital one. That means the artist opens GIMP, clicks on the buttons and saves the file in the XCF format. With such an optimized workflow it is possible for a single artist to create hundreds of oil paintings in one year, from small ones up to huge paintings of 10 meters by 7 meters.

The reason why this revolution took place only after the year 2010 has to do with technical limitations. To save a huge painting on a hard drive, many megabytes are necessary; otherwise the resolution is not good enough. A small example: on the Amiga 500 computer the first paintings were created in an all-digital workflow. Special painting software was available, and some artists played with the tools. But the quality was not very high, because 500 kB of RAM are not enough to store a realistic artwork.