Some notes from reading the paper: One VM to Rule Them All.
There are really a lot of programming languages: Java, the .NET languages, PHP, maybe you even have your own.
Here is the problem: most languages are built from scratch, so each one needs its own parser and compiler, or a virtual machine if needed. Most of them end up solving almost the same problems.
So can we build a virtual machine that is suitable for every programming language? I haven't seen one. There are two mainstream common language virtual machines: the JVM and the CLR. You could use them to implement almost any language you want, and many languages are already built on them: JRuby, Scala, and Clojure run on top of the JVM, while C#, F#, and VC++ run on top of the CLR. But even so, from a performance perspective not every one of them is fast enough; different languages need different approaches to achieve better performance.
In this paper, the authors present a method to achieve good performance using an AST-based interpreter and node rewriting.
We really want a universal virtual machine to do all of the dirty work for us, so that we only need to design our language's semantics and syntax.
OK, it is a myth :(
I think LLVM should be a great backend for your programming language; the parser is just a small portion of your compiler.
This paper's title is ambitious. It explains the approaches the authors tried, but there aren't any numbers to show us how well the methods worked.
Below are the key methods they present to us:
Most languages use bytecode to speed up interpretation; however, the authors believe that an "AST interpreter allow[s] for much more extensive modifications" [1].
I thought that with an AST we could use more context to do optimization, but in fact there is a lot of research on bytecode-based optimizations. I don't know why the AST approach would perform better; I will read the paper more carefully later, and I hope there will be some numbers to compare, not just a claim that it is faster :)
Choosing an AST-based interpreter is the foundation of the whole paper, since most of the optimizations are based on AST operations.
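To make this concrete, here is a minimal sketch of what an AST interpreter looks like (my own illustration in Java, not code from the paper): each syntax-tree node simply executes itself recursively, with no bytecode step in between.

```java
// Minimal AST interpreter sketch (my own illustration, not the paper's code):
// evaluation is just a recursive walk over the syntax tree.
abstract class AstNode {
    abstract int execute();
}

class Literal extends AstNode {
    final int value;
    Literal(int value) { this.value = value; }
    @Override int execute() { return value; }
}

class Add extends AstNode {
    final AstNode left, right;
    Add(AstNode left, AstNode right) { this.left = left; this.right = right; }
    @Override int execute() { return left.execute() + right.execute(); }
}

// Interpreting "1 + 2" is just executing the root node:
// int result = new Add(new Literal(1), new Literal(2)).execute();   // 3
```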
Here is the most important part. While interpreting the AST, the VM rewrites nodes to collect profiling information, such as node execution frequency and dynamic type information.
"When compiler is invoked to compile a part of the application, it uses the extant ASTs (with profiling and frequency information) to perform a automatic partial evaluation of the AST interpreter"
As far as I understand, the VM tries to collect enough profiling information to perform optimization, and the evaluation may be partial because not all nodes have enough information to be compiled to native code.
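How could node rewriting look in practice? Here is a rough sketch of how I picture it (my own illustration with hypothetical class names such as `TreeNode`, `GenericAdd`, and `IntAdd`; this is not the paper's actual API): a generic node counts how often it runs, observes the operand types it actually sees, and replaces itself in the tree with a specialized node.

```java
// Node-rewriting sketch (my own illustration, hypothetical names): a generic node
// profiles execution frequency and observed types, then rewrites itself into a
// type-specialized node for future executions.
abstract class TreeNode {
    TreeNode parent;
    TreeNode[] children = new TreeNode[0];
    int executionCount;                       // frequency profile the compiler can use

    abstract Object execute();

    void adopt(TreeNode... kids) {
        children = kids;
        for (TreeNode k : kids) k.parent = this;
    }

    // Swap this node for a replacement in its parent's child slots.
    void replace(TreeNode replacement) {
        replacement.parent = parent;
        for (int i = 0; i < parent.children.length; i++) {
            if (parent.children[i] == this) parent.children[i] = replacement;
        }
    }
}

class Root extends TreeNode {
    Root(TreeNode body) { adopt(body); }
    @Override Object execute() { executionCount++; return children[0].execute(); }
}

class IntLiteral extends TreeNode {
    final int value;
    IntLiteral(int value) { this.value = value; }
    @Override Object execute() { executionCount++; return value; }
}

class GenericAdd extends TreeNode {
    GenericAdd(TreeNode left, TreeNode right) { adopt(left, right); }
    @Override Object execute() {
        executionCount++;
        Object l = children[0].execute();
        Object r = children[1].execute();
        if (l instanceof Integer && r instanceof Integer) {
            // Dynamic type feedback: both operands were ints, so rewrite this node
            // into an int-specialized version and keep the profile.
            IntAdd specialized = new IntAdd(children[0], children[1]);
            specialized.executionCount = executionCount;
            if (parent != null) replace(specialized);
            return (Integer) l + (Integer) r;
        }
        return String.valueOf(l) + r;         // generic fallback, e.g. string concatenation
    }
}

class IntAdd extends TreeNode {
    IntAdd(TreeNode left, TreeNode right) { adopt(left, right); }
    @Override Object execute() {
        executionCount++;
        // Speculates that both children still produce ints.
        return (Integer) children[0].execute() + (Integer) children[1].execute();
    }
}

// Usage:
// Root root = new Root(new GenericAdd(new IntLiteral(1), new IntLiteral(2)));
// root.execute();   // first run: GenericAdd sees ints and rewrites itself into IntAdd
// root.execute();   // later runs take the int-specialized path
```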
After partial evaluation, the program can run as native code and gain a speed improvement. Since the VM aims to support dynamic languages, types may change, and then the compiled version would need to be re-evaluated. The paper chooses a better approach: if the assumptions are not met (they use guards to protect them), execution falls back to AST interpretation.
They call it deoptimization. After deoptimization the VM collects new type information and compiles a new version of partially evaluated native code.
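Here is how I picture the guard-and-fallback mechanism (my own sketch with hypothetical names such as `addInts` and `GuardFailedException`; not code from the paper): the specialized fast path checks its speculative assumption, and when the check fails, execution drops back to the generic interpreter path, which stays correct for any types.

```java
// Guard + deoptimization sketch (my own illustration, hypothetical names): the
// optimized path speculates on types; when the guard fails, we abandon it and fall
// back to the always-correct generic interpreter semantics.
class GuardFailedException extends RuntimeException {}

class SpecializedIntAdd {
    // Fast path generated under the assumption "both operands are ints".
    static int addInts(Object l, Object r) {
        if (!(l instanceof Integer) || !(r instanceof Integer)) {
            throw new GuardFailedException();   // guard: the speculation no longer holds
        }
        return (Integer) l + (Integer) r;
    }
}

class Interpreter {
    // Slow but generic semantics, correct for any operand types.
    static Object addGeneric(Object l, Object r) {
        if (l instanceof Integer && r instanceof Integer) return (Integer) l + (Integer) r;
        return String.valueOf(l) + r;
    }

    static Object add(Object l, Object r) {
        try {
            return SpecializedIntAdd.addInts(l, r);   // optimized, speculative path
        } catch (GuardFailedException e) {
            // Deoptimization: drop back to the AST interpreter, record the new types,
            // and later compile a fresh specialization that covers them.
            return addGeneric(l, r);
        }
    }
}
```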
I have a question: if a node's type keeps changing, will it loop between optimization and deoptimization? Performance would suffer from this, and the paper doesn't explain it.
Before reading this paper I already knew a little bit about HHVM's internals. They seem to share most of the ideas; in HHVM the compilation is called JIT compilation, but it is essentially the same thing: both need type information to generate specialized native code for speedup.
The difference is that HHVM uses bytecode interpretation and has a lot of optimization passes to improve the interpretation.
What did I learn from this paper? A new idea: using the AST itself to do interpretation.
Anyway, I will study these ideas further through HHVM.
PS: I tried the method from the paper How to Read a Paper, and it works!