Saturday, April 8, 2017

The Hitchhiker's Guide to the Galaxy (Comment)

This is the last entry of the Compiler Design course, and it has been unbelievable to follow all the adventures Arthur Dent goes through in "The Hitchhiker's Guide to the Galaxy", written by Douglas Adams in 1979. He starts out living a normal life and then, some moments later, is traveling through space because the Earth is about to disappear. He obviously receives someone's help to save his life, but will it be worth it?

You will have to read it to find out!

One thing I will say is this: type into Google, or ask Siri, Cortana, Alexa, Google Voice, etc., the following question: "What is the meaning of life?". Maybe you will receive the answer "42" after one or more tries. Why? Well, you should read the book to find out (I highly recommend it).


Although this book doesn't have many references to computer science, it has been really cool to read it and have a good time, laughing and getting some suspense in each chapter. It is a must-read for anyone who likes science fiction. Its unpredictability makes it even more fun, and it is so easy to read that you will finish it in less time than you think. The plot keeps you interested in what will happen next.


To finish this comment, I want to thank the professor who asked me to read this book, because I really enjoyed it and I probably never would have read it otherwise, since I had no clue it existed. My interpretation of why 42 is that, no matter what, there will come a time when computers perform operations for which the human is only the seed; whatever happens next will not be known to the human, only the answer of that computation.


Source:

Adams, D. (1979). The Hitchhiker’s Guide to the Galaxy. UK.

Sunday, March 19, 2017

Comment about "Technical Overview of the Common Language Runtime (or why the JVM is not my favorite execution environment)"

Although the JVM is one of the primary tools chosen by language researchers, it might not work as well for languages other than Java. The main reasons for choosing an alternative to native compilers are:

  • Portability
  • Compactness
  • Efficiency
  • Security
  • Interoperability
  • Flexibility

The JVM is an attractive option because of its high-level runtime support and its rich set of libraries. On the other hand, it is important to say that it does not give a way to encode type-unsafe features of typical programming languages, such as pointers, immediate descriptors (tagged pointers), and unsafe type conversions.

Other features that it lacks are:

  • Unboxed structures and unions (records and variant records)
  • Reference parameters
  • Varargs
  • Multiple return values
  • Function pointers
  • Overflow sensitive arithmetic
  • Lexical closures
  • Tail calls
  • Fully dynamic dispatch
  • Generics
  • Structural type equivalence


Another tool that has been developed is the CLI, by Microsoft, which has good support for imperative and statically typed object-oriented languages. One of the great features of the CLI is that it maps natural-size (generic) types depending on the processor; for example, a native int maps to int32 on a Pentium processor but to int64 on an IA64 processor. All of this is done at JIT time or run time. This can make the program work better on different processors, and it saves the programmer the tedious work of tailoring the program to each of the huge number of different processors on the market.
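
As a rough analogy (this is plain Python, not CLI code, just an illustration of why "natural size" matters), the size of a native machine word already differs between the platforms a program may run on, and the standard ctypes module lets you see it:

import ctypes

# Size of a native pointer/word in the process running this script.
# A 32-bit process prints 32, a 64-bit process prints 64; this is roughly
# the difference the CLI's native int hides until JIT time.
bits = ctypes.sizeof(ctypes.c_void_p) * 8
print(f"This process uses a {bits}-bit native word")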

In summary, the main points mentioned in the paper are that the CLI is better at assembling files; it supports a number of primitive types; its instructions are more polymorphic; it provides a number of instructions for transferring values to and from the evaluation stack; it supports types such as classes, interfaces, arrays, and delegates; it has two call instructions for directly invoking methods and interfaces; and, importantly, it supports tail calls, which are essential in languages such as Haskell, Scheme, Mercury, etc., where recursion is the only way to express loops.
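
To see why tail calls matter, here is a small sketch in Python (which, like the JVM, does not eliminate tail calls; this is my own illustration, not code from the paper). The recursive call below is in tail position, so a runtime with proper tail-call support could run it for any n in constant stack space, while CPython eventually raises RecursionError:

def count_down(n):
    # The recursive call is the very last thing the function does: a tail call.
    if n == 0:
        return "done"
    return count_down(n - 1)

print(count_down(500))       # works fine
# count_down(100000) would raise RecursionError in CPython, because tail
# calls are not eliminated; Scheme or Haskell would loop in constant space.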

Sources:

Meijer, E., & Miller, J. (2001). Technical Overview of the Common Language Runtime. Retrieved from http://webcem01.cem.itesm.mx:8005/s201613/tc3048/clr.pdf

Sunday, March 12, 2017

Building Server-Side Web Language Processors (Comment)

One of the most incredible things about programming languages is that, even though coding in one is not the same as coding in another, you can express pretty much the same solution or idea in all of them. If you want to get the sum of an array, you can do it in Python like this:

total = 0
array = [1, 2, 3, 4, 5]
for x in array:
    total += x

or in JavaScript (ECMAScript 2015):

var sum = [1, 2, 3, 4, 5].reduce((a, b) => a + b, 0);

And that is the main point of the paper "Building Server-Side Web Language Processors", which proposes building a language processor that is coded and maintained using web technologies. This approach can be more suitable for someone who knows more about building web solutions than about working in a terminal. Since writing a compiler is not a task a computer science graduate does often at most companies, it can be more valuable to build it with tools that are already common, such as HTML, CSS, JavaScript, and others, so that the student gains experience in two areas at once.
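
As a rough sketch of the idea (my own toy example, not code from the paper), the first phase of a language processor, the scanner, could be exposed as a small web service. Here I assume Flask and a made-up token pattern just to show the shape of such a solution:

import re
from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical token specification: integers, identifiers, and a few operators.
TOKEN_RE = re.compile(r"\d+|[A-Za-z_]\w*|[+\-*/()=]")

@app.route("/tokens", methods=["POST"])
def tokens():
    # The client POSTs raw source code; we answer with the token list as JSON.
    source = request.get_data(as_text=True)
    return jsonify(TOKEN_RE.findall(source))

if __name__ == "__main__":
    app.run(port=5000)

A front end written in HTML and JavaScript could then POST the student's source code to /tokens and render the result, which is in the spirit of what the paper proposes.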

There are many server-side programming languages to choose from: PHP, Node.js (JavaScript), Python, Ruby, and more. If you use a framework like Ruby on Rails or Django, you can stop worrying about cross-site scripting and spend more time on implementation and design; the only disadvantage is the learning curve if you don't already know them. And if you decide to use web technologies to teach how to build language processors, you will have the drawback of having to give lectures on web development, and that can take a bit of time.

If you are interested in checking out some already-made content, you can go to:


There you will find examples of the implementation of some of the strategies used in the PDF "Building Server-Side Web Language Processors", written by Ariel Ortiz.


Sources:

Ortiz, A. (2010). Building Server-Side Web Language Processors. Retrieved March 12, 2017, from http://webcem01.cem.itesm.mx:8005/publicaciones/weblang.pdf

Sunday, March 5, 2017

Language Design and Implementation w/ Ruby and the Interpreter Pattern (Comment)

The article "Language Design and Implementation using Ruby and the Interpreter Pattern", written by Ariel Ortiz in 2008, describes the S-expression Interpreter Framework (SIF), a tool that makes it possible to demonstrate language design and implementation concepts. It supports integers, symbols, lists, and procedures, and it can be used to learn about different programming styles (functional and imperative).

The SIF consists of 3 files:
  • sif_parser.rb: Scanner and parser. Required by sif.rb.
  • sif.rb: Fundamental framework classes, including Interpreter, Primitives, and Node.
  • sif_file_reader.rb: Reads and interprets an S-expression source file, one expression at a time. The result of every expression is printed to standard output, unless it is equal to Ruby's false or nil. It also catches any runtime errors. Required by sif01.rb, sif02.rb, and sif03.rb.

The main part of SIF is its Interpreter.eval class method, which receives a string (containing one or more valid S-expressions), converts the data into Ruby values, and then evaluates them using the interpreter pattern.
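
SIF itself is written in Ruby, but to get a feel for what "convert the string into data and then evaluate it" means, here is a minimal sketch in Python (my own toy version, not the SIF code) that parses and evaluates S-expressions containing only integers and the + and * operators:

def parse(text):
    # Tokenize: add spaces around parentheses, then split on whitespace.
    tokens = text.replace("(", " ( ").replace(")", " ) ").split()
    return read(tokens)

def read(tokens):
    token = tokens.pop(0)
    if token == "(":
        expr = []
        while tokens[0] != ")":
            expr.append(read(tokens))
        tokens.pop(0)  # discard the closing ")"
        return expr
    return int(token) if token.lstrip("-").isdigit() else token

def evaluate(expr):
    # Evaluate a parsed expression: ints evaluate to themselves,
    # lists are treated as (operator argument...).
    if isinstance(expr, int):
        return expr
    op, *args = expr
    values = [evaluate(a) for a in args]
    if op == "+":
        return sum(values)
    if op == "*":
        result = 1
        for v in values:
            result *= v
        return result
    raise ValueError(f"unknown operator: {op}")

print(evaluate(parse("(+ 1 2 (* 3 4))")))  # => 15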

We can extend SIF into a functional language interpreter just by adding the special forms quote, define, if, fn, etc., and we can do the same to create an imperative one by adding set!, begin, etc. instead. If you are not familiar with these terms, you can check this (Functional Programming vs. Imperative Programming) to understand a little more about the two styles: in short, the functional style is more interested in data and its transformations, while the imperative style is more interested in performing tasks and tracking changes in state. Either way, you can use whichever one you want to teach or learn with the S-expression Interpreter Framework.
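
Continuing the toy Python sketch from above (again my own illustration, not SIF's actual Ruby code), adding a special form is mostly a matter of handling another case before ordinary evaluation; a hypothetical if form could look like this:

def evaluate_form(expr):
    # Special forms are checked before normal evaluation: (if test then else)
    if isinstance(expr, list) and expr and expr[0] == "if":
        test, then_branch, else_branch = expr[1:]
        chosen = then_branch if evaluate_form(test) else else_branch
        return evaluate_form(chosen)
    return evaluate(expr)  # fall back to the arithmetic evaluator above

print(evaluate_form(parse("(if 1 (+ 2 3) 99)")))  # => 5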

One of the core properties of SIF is that it differentiates between syntax and semantics of any language construct.

If you want to check the source code, you can click on this link:


Source:
Ortiz, A. (2008). Language Design and Implementation using Ruby and the Interpreter Pattern. ACM. Retrieved March 5, 2017, from http://webcem01.cem.itesm.mx:8005/publicaciones/sif.pdf

Sunday, February 26, 2017

Mother of Compilers (Comment)

Only a few women have been recognized in the history of computing, and the fact that those machines were developed at a time when women were expected to stay at home while men worked in factories and offices makes their achievements deserve even more recognition. That is the case of Grace Hopper, or, as I should call her by her nickname, the Mother of Compilers.

When I first started to program in Python, my mind was completely amazed by all the things you can code, from simple stuff to complex algorithms that make this world communicate faster. But none of this would be possible if Grace hadn't created the first compiler, the A-0 system, and later pushed for the language that became COBOL. Even though it is an old programming language, it was the first time an idea of such magnitude was carried out.

[Image: Grace Hopper]

She was one of the first people to refer to a code failure as a 'bug', a term that is still widely used nowadays. With the help of her team at Remington Rand, she wrote the A-0 compiler, making it easier to program. ACM even created an award in her name for young computer professionals. After reading about all the things she did, you start to see how important she was for humanity; without her, we could still be coding in low-level programming languages today.

A piece of her biography that gives you an idea of what kind of person she was is this one:

"The most important thing I've accomplished, other than building the compiler, is training young people. They come to me, you know, and say, "Do you think we can do this?" I say, "Try it." And I back 'em up. They need that. I keep track of them as they get older and I stir 'em up at intervals so they don't forget to take chances."

She will always be remembered as a pioneering woman in computer science for all her contributions. In recent years there has been an increase in programs that bring women into technology, a result of the steps they have taken to become more and more common in jobs that we once thought only men could do. As we move forward, we will start to see more women entrepreneurs leading the next innovations in programming solutions.

References:

Historian. (2013). The Mother of Cobol. i-Programmer. Retrieved February 26, 2017, from http://www.i-programmer.info/history/people/294-the-mother-of-cobol.html

McCann, A. (2015). The Queen of Code. FiveThirtyEight. Retrieved February 26, 2017, from http://fivethirtyeight.com/features/the-queen-of-code/

Sunday, February 5, 2017

Internals of GCC (Comment)

Compiler design is a subject that might not interest many programmers, but it is the backbone of our code; without compilers we couldn't write software as we know it today. Understanding how a compiler works lets us understand how our code is translated, and there is nothing better than knowing more about the thing that makes the magic come to life. As the podcast says, it might not be strictly necessary, but it is good to know, just like being aware of how your car engine works: you never know when it can come in handy.

GNU GCC is one of the most used compilers in the world. One of its key points is its portability to many operating systems (Windows, macOS, Linux, etc.), and it is flexible enough to take source code in C/C++, Java, Ada, or Fortran and produce object code. The compiler is very modular: it has a front end, a middle end, and a back end, each with its own responsibility. Whatever source language is used goes into the front end, which checks that everything is correct and builds a tree (a data structure) that gives structure to the operations in the code. Then the middle end creates a similar but slightly lower-level tree representation, possibly applying some optimizations in space or time.

RTL (Register Transfer Language) is then generated as a low-level representation of the source code; at this point you can no longer recognize the high-level constructs you originally wrote. Then register allocation comes in and assigns registers to the instructions, keeping frequently used values in registers so the program runs more efficiently. Finally, the RTL is matched (using patterns) against the specific assembly instructions of the target, which makes it possible to perform the operations on the real machine. Knowing which architecture you are going to target can help you write code that runs more efficiently on it.
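
To make the front end / middle end / back end split more concrete, here is a toy sketch in Python (my own illustration, nothing like GCC's actual code): the "front end" builds a tree for a tiny expression language, the "middle end" lowers it to a linear three-address form, and the "back end" does pretend instruction selection.

# Toy pipeline for expressions such as "a + b * c" (identifiers, + and * only).

def front_end(source):
    # Parse into a tree; '*' binds tighter than '+'.
    def expr(tokens):
        left = term(tokens)
        if tokens and tokens[0] == "+":
            tokens.pop(0)
            return ("+", left, expr(tokens))
        return left
    def term(tokens):
        left = tokens.pop(0)
        if tokens and tokens[0] == "*":
            tokens.pop(0)
            return ("*", left, term(tokens))
        return left
    return expr(source.split())

def middle_end(tree, code=None):
    # Lower the tree to three-address instructions; returns the result's name.
    if code is None:
        code = []
    if isinstance(tree, str):
        return tree, code
    op, left, right = tree
    left_name, _ = middle_end(left, code)
    right_name, _ = middle_end(right, code)
    temp = f"t{len(code)}"
    code.append((op, left_name, right_name, temp))
    return temp, code

def back_end(code):
    # Pretend instruction selection: one fake assembly line per operation.
    mnemonics = {"+": "ADD", "*": "MUL"}
    return [f"{mnemonics[op]} {dst}, {a}, {b}" for op, a, b, dst in code]

tree = front_end("a + b * c")
_, three_address = middle_end(tree)
print(tree)           # ('+', 'a', ('*', 'b', 'c'))
print(three_address)  # [('*', 'b', 'c', 't0'), ('+', 'a', 't0', 't1')]
print("\n".join(back_end(three_address)))

GCC's real pipeline is of course far more elaborate, but the division of responsibilities is the same idea.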

References:

Arno. (2007). Episode 61: Internals of GCC. Software Engineering Radio. Retrieved February 5, 2017, from http://www.se-radio.net/2007/07/episode-61-internals-of-gcc/

Wednesday, January 25, 2017

The Hundred-Year Language (Comment)

"What technologies will survive for 100 more years?" is a question that me as a computer science student and many other ask, maybe not in this way but by trying to guess which one will give me most opportunities to find a job or the best paid language outside. No one wants to learn a programming language that cannot land you a job or which its learning curve is to big to create just simple tasks. But how can you really know which one will be still alive if not a single one has ever survived for a hundred years? We are in 2017 and the first programming language that was created was FORTRAN (1957), that gives us a result of 60 years, but wait! FORTRAN is not popular anymore. It was so bad at handling input and output and because of that COBOL was created (1959)1. However, now a days it is in the same box of "not popular anymore".

Some languages such as LISP (1958) or C (1972) are still used today. Python (1991), Java (1995), Ruby (1995), JavaScript (1995), C# (2000), and others are also used, but they were created more recently (you can check a diagram of programming language history at this link). Go (2009) and Swift (2014) are not even in that image [2]. As the essay "The Hundred-Year Language" says, "we should be consciously seeking out situations where we can trade efficiency for even the smallest increase in convenience" [3], and that is exactly what is happening with all the programming languages that descend from C. We have faster machines that can run languages which are more scalable or easier to program.

An analogy that comes to my mind is animal evolution and adaptability. Some animals can fly, others can swim, and others can run really fast, but these aren't all the abilities animals can have, and they haven't had them since the universe started. I think the same will happen with programming languages: the ones that adapt best to future needs will be the ones that survive, and the rest will end up in the same place as FORTRAN, COBOL, Algol, and so on.

To end this post, I recommend checking the Stack Overflow statistics and also the most popular programming languages on GitHub. Maybe they won't be popular in a hundred years, but I'm sure they are, or will be, during the next 5 to 10 years.

References:

[1] Ferguson, A. (2000). A History of Computer Programming Languages. Retrieved January 26, 2017, from https://cs.brown.edu/~adf/programming_languages.html

[2] Genealogical Tree of Programming Languages [Image]. Retrieved January 25, 2017, from https://upload.wikimedia.org/wikipedia/commons/2/25/Genealogical_tree_of_programming_languages.svg

[3] Graham, P. (2003). The Hundred-Year Language. Retrieved January 24, 2017, from http://www.paulgraham.com/hundred.html