Here's a little project I started a few months ago. I managed to get the JIT working last weekend. This is the first thing I've done with JIT compilation.
This looks pretty well-written for a change; most BF interpreters that I've seen are quite messy. Input and the use of 30k cells seem to be broken with JIT compiling enabled, though. Also, if you're looking for something more to do, you might want to consider some peephole optimization, e.g., simply setting the current cell to 0 on [-].
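For the curious, a rough sketch of what that peephole might look like, assuming the program is first parsed into a flat op list (the Op/OpKind IR here is made up for illustration, not taken from the project):

    #include <stddef.h>

    typedef enum { OP_ADD, OP_MOVE, OP_LOOP_OPEN, OP_LOOP_CLOSE,
                   OP_SET_ZERO, OP_IN, OP_OUT } OpKind;

    typedef struct { OpKind kind; int arg; } Op;

    /* Rewrites ops in place; returns the new length. */
    size_t fold_clear_loops(Op *ops, size_t n) {
        size_t out = 0;
        for (size_t i = 0; i < n; i++) {
            /* Match the three-op pattern for "[-]" (or "[+]", which
               also clears when cells wrap at 8 bits). */
            if (i + 2 < n &&
                ops[i].kind     == OP_LOOP_OPEN &&
                ops[i + 1].kind == OP_ADD &&
                (ops[i + 1].arg == -1 || ops[i + 1].arg == 1) &&
                ops[i + 2].kind == OP_LOOP_CLOSE) {
                ops[out++] = (Op){ OP_SET_ZERO, 0 };
                i += 2;  /* skip past the matched loop */
            } else {
                ops[out++] = ops[i];
            }
        }
        return out;
    }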
> BrainFuck is an esoteric programming language designed as a joke. The environment provides the programmer with an "infinite" array of unsigned bytes and a data pointer.

Actually, it was designed with the goal of implementing it with the smallest possible compiler. Also, the original compiler used an array that's 30k cells long, so that's generally what's assumed in programs, and it's not specified that the cells should be unsigned; the original interpreter simply uses char.
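For reference, the whole original environment is tiny; stripped down, it's more or less just:

    char mem[30000];   /* plain char, so signedness is implementation-defined */
    char *ptr = mem;   /* the data pointer */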
> simply setting the current cell to 0 on [-]

There are many more that can be done; other easy ones are (pseudo-C code):

Multiplication: for example, [>+++<-] can be translated to mem[1] += mem[0] * 3 and mem[0] = 0
Search: [>] becomes while(mem[ptr]) ptr++;
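The multiplication case generalizes to any balanced loop over + - < > that nets the counter cell down by one per pass. A sketch of the detection under that assumption (buffer layout and names are illustrative only):

    #include <string.h>

    /* Decide whether a loop body (the characters between '[' and its
       matching ']') is a simple multiplication loop like ">+++<-".
       It qualifies if it contains only + - < >, returns to the
       starting cell, and nets that cell down by exactly 1 per pass.
       deltas[] is indexed by 128 + cell offset. */
    int match_mult_loop(const char *body, int len, int deltas[256]) {
        int off = 0;
        memset(deltas, 0, 256 * sizeof deltas[0]);
        for (int i = 0; i < len; i++) {
            switch (body[i]) {
            case '>': if (++off >  127) return 0; break;
            case '<': if (--off < -128) return 0; break;
            case '+': deltas[128 + off]++; break;
            case '-': deltas[128 + off]--; break;
            default:  return 0;   /* I/O or a nested loop: give up */
            }
        }
        return off == 0 && deltas[128] == -1;
    }

On a match, each nonzero deltas[128 + k] with k != 0 becomes mem[p + k] += mem[p] * deltas[128 + k], followed by mem[p] = 0. The search case is simpler still and can be emitted directly as the while loop above.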
@teaearlgraycold can take more advanced ideas from esotope-bfc. Their old wiki page describes some techniques. It also has a comparison page listing supported optimizations written in more "readable" English than the optimization page I linked previously.
Thanks! Those look easy enough.
Thanks for the feedback! I'll go ahead and up the memory size (you can see where that's defined here) for the JIT. In interpreted mode, the memory space expands dynamically.
Reading input works... sometimes. So it's been hard to figure out exactly what's broken there.
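The expansion is the usual grow-on-demand pattern; roughly (hypothetical names, not my actual code):

    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical grow-on-demand tape. */
    typedef struct { unsigned char *cells; size_t len; } Tape;

    /* Make sure cell i exists, doubling the tape and zero-filling
       the new region as needed. Returns 0 on success, -1 on OOM. */
    int tape_reserve(Tape *t, size_t i) {
        if (i < t->len) return 0;
        size_t new_len = t->len ? t->len : 30000;
        while (new_len <= i) new_len *= 2;
        unsigned char *p = realloc(t->cells, new_len);
        if (!p) return -1;
        memset(p + t->len, 0, new_len - t->len);  /* new cells start zeroed */
        t->cells = p;
        t->len = new_len;
        return 0;
    }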
Ironically, JITing isn't really that useful, since I believe you can do BF -> C conversion using only a naive find/replace. That should be better than my janky JIT since you'll get all of the benefits of gcc/llvm.
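Something like this is the whole transpiler; a minimal sketch, assuming BF source on stdin and C emitted on stdout:

    #include <stdio.h>

    /* Naive BF -> C transpiler: every BF token maps to one C fragment.
       Compile the output with gcc -O2 and the optimizer does the rest. */
    int main(void) {
        int c;
        puts("#include <stdio.h>");
        puts("static unsigned char mem[30000];");
        puts("int main(void){unsigned char*p=mem;");
        while ((c = getchar()) != EOF) {
            switch (c) {
            case '>': puts("p++;");          break;
            case '<': puts("p--;");          break;
            case '+': puts("(*p)++;");       break;
            case '-': puts("(*p)--;");       break;
            case '.': puts("putchar(*p);");  break;
            case ',': puts("*p=getchar();"); break;  /* EOF handling glossed over */
            case '[': puts("while(*p){");    break;
            case ']': puts("}");             break;
            }
        }
        puts("return 0;}");
        return 0;
    }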
> Also, if you're looking for something more to do, you might want to consider some peephole optimization, e.g., simply setting the current cell to 0 on [-]

I investigated that and found there wasn't any noticeable performance boost - at least not on the programs I tested on.
It's a huge performance boost in brainfuck interpreters (I'm not sure about brainfuck JITs); you may not be using good enough programs for benchmarking. Try mandelbrot.b, mandelbrot-huge.b, mandelbrot-titannic.b, and hanoi.bf. Also try running those under two layers of interpretation, using the dbfi.b brainfuck interpreter running on your JIT compiler. There's also the game of life in brainfuck, but I have never used that one for benchmarking. See also bfoptimization on GitHub for more benchmarking tips (and programs).
Yeah I've been benchmarking on the Mandelbrot renderers. I'll give it another look if you say it should be much faster.
I'll definitely have to check out the GoL program.
It may not make a big difference in JIT compilers (I only benchmarked my optimized interpreter and some of the most optimized interpreters I found on the web against a non-optimized JIT compiler, and the JIT compiler was always faster). Also keep in mind that my benchmarks were done on an old CPU; if you have a fast CPU you may not notice any speed differences at all, and you may need to use a profiler.
I don't have anything x86-64 that's older than 2013. I could try porting it to my Pentium II ThinkPad.