Orion UCI chess engine

Download v1.0 (64-bit version)

March 10th, 2024

Orion 1.0 is available !

I'm very happy to release a new version of my little engine Orion ! Almost two years since the last release, with a lot of trial and error, and significant progress made over the past months. Be aware that the new version is weaker, and brings no new user-facing features, so don't be too disappointed... But I'm really proud of it, and it will be a good base for future growth !

It includes:


The "zero approach"

That was the objective, and that task was really, really difficult to achieve. Training a neural network with such noisy labels was a real challenge. To give a hint of the difficulty, imagine that you have to evaluate a position near the start of the game (with, let's say, 30 or more pieces), using only the fact that, at the end, one player - for example Black - won the game. The "signal" to catch is even weaker when you consider that the game can be a draw...

I started by exploring other approaches, like Seer's (see the previous post), but without success. The time needed to train so many networks and to label the data was considerable. And the results were not there.

I then decided to switch to a simpler method, where I have to train only one network, with game results as input, but trying to predict two values : the win ratio (or probability of winning), between 0 and 1.0 (renormalised to between -1.0 and 1.0), and the material, between -1.98 and 1.98* (with pawn=0.1, ..., queen=0.9). I took the average of the two predictions, and multiplied it by 1000 to get a final evaluation in pseudo-centipawns.
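As a rough sketch of this blend (the function name and exact details are my assumptions, not Orion's actual code; the material term is kept on its native scale as described above, and the balance parameter mirrors the adjustable 0.5-0.5 weighting of version 1.0):

```python
def blend_evaluation(win_ratio, material, balance=0.5):
    """Combine the two network heads into one score in pseudo-centipawns."""
    wdl = 2.0 * win_ratio - 1.0                   # renormalise [0, 1] -> [-1, 1]
    blended = balance * wdl + (1.0 - balance) * material
    return round(blended * 1000)                  # pseudo-centipawns

print(blend_evaluation(0.75, 0.5))                # -> 500
print(blend_evaluation(0.5, 0.0))                 # -> 0 (dead-equal position)
```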

This original approach led me to an engine around 3050 elo (~100 elo weaker than version 0.9). But the ranking doesn't matter here : what matters for me is that I managed to get a pretty strong performance in an original way, without requiring any evaluations from other engines !

The architecture of the network is almost exactly the same as for version 0.9, except that it now predicts two values instead of one, and I kept the possibility to adjust (at inference time) the balance between these two values to produce the final evaluation (0.5-0.5 by default in version 1.0).


Quantization

Another thing that I absolutely wanted to explore was quantization of the weights and biases. As I wanted to release a new version before version 0.9 turned two years old, I used a simple approach with post-training quantization.

This should imply a loss in evaluation accuracy, but I'm not even really sure of that, because 1) quantization can help to reduce overfitting, if any (it can be considered a kind of regularisation method), and 2) it resulted in a (very) nice speed improvement : +40% in terms of nodes per second !

I was really impressed by the difference, even if, in the end, the standard (i.e. not quantized) and quantized versions of Orion 1.0 were very close in strength.
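To illustrate the idea (a generic sketch, not Orion's actual quantization code), a zero-centred post-training quantization with the footnoted scale of 64 looks like this :

```python
def quantize_int8(weights, scale=64):
    """Round to steps of 1/scale and clamp to the int8 sub-range -127..127."""
    return [max(-127, min(127, round(w * scale))) for w in weights]

def dequantize(q, scale=64):
    """Map the stored integers back to the float values actually used."""
    return [v / scale for v in q]

q = quantize_int8([0.5, -1.0, 2.5])   # 2.5 exceeds the representable range
print(q)                              # -> [32, -64, 127]
print(dequantize(q))                  # -> [0.5, -1.0, 1.984375]
```

Note how any value beyond ±127/64 ≈ ±1.98 saturates, which is exactly the limit mentioned in the footnote below.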


The Cerebrum library... and engine !

A last important thing for me was to get reproducible results and to allow other people to reproduce my work. I rewrote almost all of the Cerebrum library with that in mind, and even wrote the smallest (and stupidest !) UCI chess engine possible (named "Cerebrum 1.0", available here) in order to demonstrate how to load and use a NNUE-like chess neural network in an engine. Do not expect strong performance here : the engine is limited to depth 1. I let testers decide whether or not to include it in their respective rating lists, at least to see if it can reach... the last position ;-)

I really hope that you, reader (!), or at least someone, will try to use the library to reproduce my results, and obtain the exact same network as the one now embedded in Orion 1.0. If you are interested, please follow these instructions !


Data used for the training

For those who are interested, a few words about the games used to train the network. Here again, I tested several alternatives : using only games played by engines (CCRL, CEGT, etc.), only games played by humans (the Lichess elite database), or a mix.

As expected, I finally got the best results with datasets composed of games played exclusively by engines (CCRL). But... it appears that this leads the current network to have some weaknesses, in endgames for example (where games are usually adjudicated using tablebases). It also has some difficulty converting winning positions into actual wins (because strongly unbalanced games are not so common in engine tournaments). But that's great ! I have now understood that the quality and the representativeness of the data are crucial ! Let's see if we can go further...

As already said, a big thank you to the CCRL team for providing in such a simple way all the games they played !


Evaluation method and estimated strength

Evaluation has been performed using Arasan 20.4.1 (~3075 elo), Igel 2.4.0 (~3100 elo) and Counter 4.0 (~3125 elo) in a 40/2 tournament (total: 600 games). The estimated strength of the version 1.0 is ~3050 elo (more or less 100 elo weaker than version 0.9 on the same test).

All the given elo values are to be considered as "CCRL 40/15" elo values.


Next steps

I have a ton of ideas to experiment with, both for evaluation and search. I also have in mind that some users have already asked for more features (e.g. multipv). I will try to release a new version with such improvements in less than two years this time !

* 1.98 = 127/64, i.e. the maximum (absolute) value that can be represented using an int8 (signed byte, here restricted to the range -127..127) with a quantization scale of 64 (a step of 1/64), centered on zero.

April 12th, 2022

Orion 0.9 is available !

More than one year since the last release already : time flies ! I have been very busy these last months, without a lot of time to dedicate to Orion, but the next version is here (it has actually been ready since... last November !), with:

Support for SMP

I'm really happy to announce the support for SMP : Orion will now be able to think using several CPUs/threads in parallel, hopefully resulting in stronger play ;-)

This required a lot, what am I saying, a ** LOT ** of work : I had to redesign the main parts of the engine to ensure thread-safe execution, and to split, refactor, simplify and rearrange the code to avoid problems when computing in parallel. In contrast, I was surprised by the simplicity of the Lazy SMP approach : it's brilliant !

Smaller network architecture

The other big change is the architecture of the neural network : it is now much simpler than the previous one, for a more or less equivalent strength (~20-30 elo weaker in my own tests). I replaced the 40960x2 inputs of the first layer with a simple 768x2 scheme (6 piece types x 2 colors x 64 squares = 768).

This probably hampers accuracy in some complex positions, but globally speeds up evaluation, as you no longer have to recompute the whole first layer when the king moves (this is really helpful in endgame positions, where kings have more mobility). This choice resulted in a 24x smaller network (421 kB vs 10 MB)...!
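A plausible sketch of such a 768-feature encoding (the exact index ordering used by Orion may differ) : one input per (colour, piece type, square) triple, so 2 x 6 x 64 = 768 features in total :

```python
PIECE_TYPES = 6   # pawn, knight, bishop, rook, queen, king
SQUARES = 64

def feature_index(colour, piece_type, square):
    """colour in {0, 1}, piece_type in 0..5, square in 0..63."""
    return (colour * PIECE_TYPES + piece_type) * SQUARES + square

print(feature_index(0, 0, 0))     # -> 0, the first input
print(feature_index(1, 5, 63))    # -> 767, the last of the 768 inputs
```

Because none of these indices depend on the king's square (unlike a 40960-input king-relative scheme), a king move only toggles two features instead of invalidating the whole accumulator.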

I'm really happy with the result: It seems possible to compress chess knowledge a lot !

Important note about originality

I know that some people are looking for originality: do not forget that engine creation can remain for some of us (including me) a hobby and/or a way to learn programming and A.I. !

This has always been my goal: develop a 100% original engine, not only in terms of playing style (that's not the case for the moment) but also in terms of code: Orion is not derived from any other engine, I wrote 100% of the lines of code, in my own way, always after having taken the time to understand what I was doing (the most recent example being the NNUE experiments I led in 2020).

For example, Orion is based on a legal-only move generator, using flags embedded in each move representation to help sort and prune moves during search. Its transposition table also uses the number of pieces on the board as a criterion to replace old entries.

But then comes the issue of the data used to train the neural network with the NNUE approach.

As for version 0.8, the provided network has been trained on positions that were statically evaluated with the nn-82215d0fd0df.nnue network, which is the one embedded in Stockfish 12. The Stockfish engine itself was not used at all in that process : I took the network, reused the code I developed for my NNUE experiments to read the weights and evaluate a bunch of positions that I collected from CCRL games, and then trained my own network with my own (shared) Cerebrum library (note : this time, I was able to use only 128 million positions, compared to the 360 million used for 0.8).

Finally, from this perspective, I think one cannot consider Orion - at this stage - a 100% original work, as it uses knowledge coming from another engine. Note that this has actually been the case since v0.4 : I was previously using Stockfish 8 (static) evaluations to tune the parameters of my handcrafted evaluation function.

But, for sure, this remains the goal, and I already started to work on reaching that objective...

The "zero" approach

I think the most exciting challenge, now that I know how to design and train neural networks, is to find a way to train a network from zero, i.e. only using the results of games (win / draw / loss). Inspired by an idea proposed by Connor McMonigle (Seer's author), I tried to train such a network, without success so far.

The idea is to consider endgame positions (3-4-5-6 pieces), use the results provided by the Syzygy tablebases to train a network on these positions, use the engine with the trained network to evaluate 7-piece positions (after a depth 'd' search), re-train a new network on these labelled 3-to-7-piece positions, and then repeat the whole process for 8 up to 32-piece positions. The beauty of this approach is that the network is trained only on the endgame outcome, and shall learn how to "retropropagate" the expected result back to middlegame positions.
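The loop described above can be sketched as follows. All the helpers are trivial stand-ins of my own invention so the control flow runs; a real implementation would sample actual positions, probe the Syzygy tablebases, and run real searches :

```python
def sample_positions(n):      return [f"pos{n}_{i}" for i in range(2)]  # stand-in
def label_with_syzygy(ps):    return [(p, 0.0) for p in ps]             # stand-in
def search_eval(p, net, d):   return 0.0                                # stand-in
def train_network(data):      return {"trained_on": len(data)}          # stand-in

def bootstrap_training(max_pieces=10, depth=4):
    positions = []
    for n in range(3, 7):                      # tablebase-covered endgames
        positions += label_with_syzygy(sample_positions(n))
    net = train_network(positions)
    for n in range(7, max_pieces + 1):         # extend one piece count at a time
        # label n-piece positions with a depth-d search using the current net
        positions += [(p, search_eval(p, net, depth)) for p in sample_positions(n)]
        net = train_network(positions)         # retrain on all 3..n-piece positions
    return net

print(bootstrap_training()["trained_on"])
```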

Next steps

This is my current effort : trying to improve the way to train such a "from zero" neural network, relying only on game results. That's a very difficult challenge ! Be patient ;-)

December 1st, 2020

Orion 0.8 is released !

I finally managed to build my own "neural network trainer", after a lot of experiments (see here) ! I'm now pleased to release a new version of my little engine Orion, where all the evaluation part relies (only) on a neural network !

Architecture

The architecture of the network used is "NNUE-like", but smaller and simpler than the one used by Stockfish 12 : I was very curious to see to what extent it was possible to "compress" chess knowledge without sacrificing too much strength.

After having tested several combinations, I finally found that halving (*) the first NNUE layer was a good compromise between the loss in strength and the gain in speed (which compensates for it).

Another change is that all dot products are performed on float values, which is a handicap in terms of speed but simpler from a training perspective. Values of the first layer are rounded and stored as 16-bit integers, resulting in a final 10 MB file.

Training data

Training was performed using 360 million unique positions extracted from CCRL games, labelled with the nn-82215d0fd0df.nnue network. This network was released into the public domain back in August by Sergio Vieri, and is now embedded in Stockfish 12.

After 150 iterations ("epochs"), my own tests showed an increase of ~200 elo against v0.7, but this has yet to be confirmed (it is probably highly biased by the fact that I kept the same set of opponents).

The Cerebrum

To help other programmers understand how to train and use neural networks, I decided to share my work through the "Cerebrum" library, composed of a trainer (a Python script) and the corresponding inference code to be embedded in an engine (C language). The trainer is a cleaned version of the one used for Orion, while the inference code is actually the one used in the engine. I hope all of this will be useful.

What's next ?

This version represents a lot of work. Understanding how neural networks work and how to train them was very challenging ! Now, the next challenge will be to "cut the link" with Stockfish's evaluation. The road is still long but, as we say in French, "Paris ne s'est pas fait en un jour" !

Credits

Credits and a big thank-you to Sergio Vieri for his incredible work, but also to Yu Nasu for introducing the NNUE concept, and to the authors/creators who have worked on its implementation in Shogi and Stockfish (see the list here). Last but not least, thanks to the CCRL team for providing the games of their tournaments in such a simple way !

Final note

Syzygy support has been removed from this version.


(*) The network architecture is : 2x[40960x128 + 128] x [256x32 + 32] x [32x32 + 32] x [32x1 + 1], where "[W + B]" are the weights (W) and the biases (B).
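A quick back-of-the-envelope check of the file size, assuming (this is my assumption, not stated above) that the 40960x128 first layer is shared between the two perspectives and stored as 16-bit integers, with the remaining layers kept as 32-bit floats :

```python
first = 40960 * 128 + 128                               # shared first-layer weights + biases
rest = (256 * 32 + 32) + (32 * 32 + 32) + (32 * 1 + 1)  # remaining float32 layers
size_bytes = first * 2 + rest * 4                       # int16 + float32 storage
print(round(size_bytes / 1e6, 2))                       # -> 10.52 (MB)
```

This lands right on the "10 MB" file mentioned above, which is what makes the shared-first-layer reading plausible.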

November 27th, 2020

Orion 0.8 is almost ready !

The next version should be released in a few days, if all goes as expected. In the meantime, here is how v0.7 performed in the CCRL and CEGT lists : this corresponds more or less to a 110-130 elo increase over v0.6. I'm very happy !

Site | TC    | Rank | Elo            | Games
CCRL | 40/15 | 114  | 2761 (+25/-25) | 516
CCRL | 40/2  | 128  | 2736 (+17/-17) | 1231
CEGT | 40/4  | 121  | 2595 (+15/-15) | 1350

(*) Time control (40/15 means 40 moves in 15 minutes)

August 26th, 2020

Experiments with Neural Networks

I really don't have a lot of time these days, but due to the ongoing NNUE 'revolution', and because I'm deeply convinced that this kind of approach is the future, I decided to play a bit with neural networks.

My experiments are described here. To date, I have managed to get my own NNUE implementation working, giving a serious boost in terms of elo performance (note that this version is purely experimental and shouldn't be considered the official "Orion" : it is provided only for entertainment/experiments).

I'm currently trying to build a 'neural network trainer' to train my own networks, with the aim of building, in a first attempt, simpler networks than Stockfish's, and testing whether they can improve the current v0.7 evaluation function.

Stay tuned !

July 3rd, 2020

Orion 0.7 is available !

The next weeks will be busy, and I won't have a lot of time to work on the SMP version. I prefer to release the new version now, as it already includes a lot of rework. The main changes are described in the previous post. I forgot to mention that Orion now also embeds a handcrafted KPK bitbase... and a refreshed logo ;-) As regards Transposition Table (TT) ageing, I opted for a simple implementation : at the beginning of a search, the TT is informed of how many pieces remain on the board. Every TT entry already stored with a greater 'popcount' can be safely and unconditionally replaced. This seems sound, and gave good results during my own tests.
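The replacement rule can be sketched as follows (illustrative Python with names of my own choosing, not Orion's C code; a real table would also handle same-key updates and depth-based replacement). Since captures only ever remove pieces, an entry recorded with more pieces than are currently on the board refers to a position that can no longer occur :

```python
class TranspositionTable:
    def __init__(self, size):
        self.entries = [None] * size
        self.root_popcount = 32

    def new_search(self, pieces_on_board):
        # called at the beginning of each search
        self.root_popcount = pieces_on_board

    def store(self, key, data):
        slot = key % len(self.entries)
        entry = self.entries[slot]
        # stale entry (more pieces than now on the board): replace unconditionally
        if entry is None or entry["popcount"] > self.root_popcount:
            self.entries[slot] = {"key": key,
                                  "popcount": self.root_popcount,
                                  "data": data}

tt = TranspositionTable(1024)
tt.new_search(24)
tt.store(12345, "old entry")
tt.new_search(20)                      # pieces were captured in the meantime
tt.store(12345 + 1024, "new entry")    # same slot; the stale entry is replaced
print(tt.entries[12345 % 1024]["data"])   # -> new entry
```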

I hope the new version will reach a +100 elo increase (when using Syzygy tablebases), but that remains to be confirmed !

June 1st, 2020

Happy birthday Orion ! The last version is just one year old, and performed relatively well with an increase of around 100-110 elo over the previous version : I'm very happy :-)

The next version is in good shape : I managed to achieve some good results, mainly thanks to the addition of Syzygy tablebase support. Some parts of the code have been totally rewritten, like evaluation, magic number generation, magic/BMI attack computation, and static exchange evaluation (again !).

Among various changes : the aspiration window is finally working ; the Transposition Table is now "aged" (it was not the case until now, and I chose a different approach than other engines - more details to come) ; the hash move is always tried in Quiescence (even if it's a quiet move that would not have been generated) and before move generation (speed gain) ; and, finally, PVS is also implemented in Quiescence (surprisingly, this does not seem to be common : maybe I'm doing something wrong).

For evaluation tuning, I switched from genetic algorithms to pure linear regression (using Python scripts and Scikit-learn). Orion's evaluation has always been and still is... basic :-) At the moment, the gain is between 50 (without Syzygy) and 100 (with) elo. I'm wondering whether or not to release the current development version, but at this stage, I would like to try to implement an important feature which is still missing : multi-CPU support (SMP) !

Current version strength (v0.6) :

Site | TC    | Rank | Elo            | Games
CCRL | 40/15 | 144  | 2635 (+20/-20) | 819
CCRL | 40/2  | 149  | 2624 (+17/-17) | 1204
CEGT | 40/4  | 141  | 2464 (+9/-9)   | 3200

(*) Time control (40/15 means 40 moves in 15 minutes)

June 1st, 2019

Orion v0.6 is here !

Almost one year of work... :-) Main changes are :

In my testing conditions (5000 games, 4000 played at 40/1 + 1000 played at 40/2), this version should be ~100 elo stronger than v0.5.

En route to v0.7, and a new long-term objective : reach 2800 elo one day ?!

May 16th, 2019

Orion v0.6 is almost ready to be released !

I'm currently running the last tournaments to ensure non-regression with the very latest build. It has been almost one year since the last post on this "blog" : I worked hard on the new version, continuously trying to improve the engine, test after test... Sometimes, I wonder how other programmers manage to improve their engines so quickly, especially for 2500+ elo engines !

As a rule, and from the very beginning, I have always refused to look at other engines' evaluation code. I only took inspiration from code related to search, and only for the parts I could understand and implement on my own. For example, the aspiration window at the root node is a concept that still doesn't work in Orion. I think I understood the idea, but something is still wrong. As far as evaluation is concerned, Orion's code is 100% original : I only took inspiration from well-known sites like CPW, blogs from other authors (a thought for Mediocre, which seems to be reborn !) and, of course, forums (TalkChess being the one I read the most).

Most of the Orion v0.6 progress will come from the evaluation function : I added some concepts and the magic of genetics did the rest :-) In the meantime, I'm really proud of the v0.5 strength. This version has been solid ground on which to build another release ! Almost one year after its release, here are its current ratings :

Site | TC    | Rank | Elo            | Games
CCRL | 40/40 | 175  | 2529 (+15/-15) | 1548
CCRL | 40/4  | 176  | 2513 (+13/-13) | 2348
CEGT | 40/4  | 166  | 2352 (+13/-13) | 1922

(*) Time control (40/4 means 40 moves in 4 minutes)

June 21st, 2018

Orion v0.5 is available !

After several weeks of hard work, and a huge number of games played to test, test, and test again, I'm pleased to release a new version of my little engine Orion !

So, what's new ? Not a long list of new features, but a lot of code changes and rewriting :

Yes ! It worked ! I finally managed to improve strength using the PBIL method (a big thank-you to Thomas Petzke) ! v0.5 is the first genetically modified version of Orion :-)

The gain can appear low, but I use a simple and straightforward fitness evaluation method : I only compare the score difference between Orion and Stockfish (v8). My previous attempts failed because of a bad initialisation of the weights. Tuning was only applied to evaluation terms, using 25 million unique positions extracted from CCRL 40/40 games, and took ~8-10 hours. For the next release, I'll try to include search parameters, but this will require changing the fitness evaluation and running games : it should really take a lot of time !

Lastly, I tried to improve my testing framework. In previous versions, I only ran gauntlets against my 3 preferred partners : iCE, Lozza, and Madchess. I now run 4000 games against 20 engines, at 40 moves / 60 seconds, using the Hert500.pgn opening book. To preserve my computer, the CPU is underclocked to 2.24 GHz. A complete run takes ~36 hours (7 games are run in parallel).

I hope all this work will be reflected in an elo gain in real conditions !

May 14th, 2018

Orion v0.5 is approaching !

Since the release of v0.4, I have been working a lot to try to improve Orion, testing dozens of code changes and playing thousands of games. I finally started to get promising results a few weeks ago. I'm currently trying to grab a few more elos before releasing a new version.

In the meantime, I'm really satisfied to see that the current version performs relatively well in tournaments (it is 60-100 elo stronger than v0.3 !). Compared to previous versions, v0.4 is clearly a strong and sound basis for trying new ideas. You'll find in the table below an idea of its current strength. I'm really excited about the current development version : stay tuned !

Site | TC    | Rank | Elo            | Games
CCRL | 40/40 | 188  | 2447 (+24/-24) | 581
CCRL | 40/4  | 205  | 2420 (+18/-19) | 1045
CEGT | 40/4  | 185  | 2262 (+14/-14) | 1550

(*) Time control (40/4 means 40 moves in 4 minutes)

October 15th, 2017

Orion v0.4 is out !

I'm really happy to release this new version : I worked a lot on it, testing dozens of versions, to finally get one doing what it was intended to do :-)

From the source code perspective, this version does not differ a lot from the previous one : I only made small adjustments to search and fixed some pieces of code that didn't do what I expected.

Evaluation was only modified to adjust rook scoring. I gave the PBIL algorithm a new chance to improve it, without results. This time, I tried to minimize the difference between Orion's and Stockfish v8's scores, but in real games, it didn't give better play.

So, what's new in this version ? Even if the final code differences are small, there are some big changes :

In addition, a BMI2 version of the engine is now provided, giving a small speed bonus (+ 5%) on compatible systems.

Why release a new version now ? Because, even if evaluation has not been improved, my own tests show a clear progression against v0.3 : +/- 100 elo at 40/4 ! I hope this will be confirmed in real tournaments and at longer time controls...

Have fun and do not hesitate to give me feedback !

March 4th, 2017

It has been nearly a year since Orion was put online, and you will find in the table below a good idea of its level. I'm quite happy with these results (many thanks to all testers) ! In fact, the engine performed better than I expected. However, during the last months, I tried to improve on the last version but faced difficulties... Developing a chess engine can really cause headaches !

I first tried to improve my evaluation function (using genetic algorithms) : it only allowed me to validate my PBIL framework, as real strength was ultimately not increased...! After multiple attempts, I suspected that pruning and reduction techniques had a (bad) influence while trying to optimize the evaluation.

I then started to inspect the search tree implementation to decide what to deactivate, and found some bugs and pieces of code not doing what they were intended to... Several hundred games later, I also suspected problems in the Transposition Table, notably in the replacement strategies. I then tried multiple approaches... before being satisfied.

That's where I am. The latest results seem to go in the right direction, but it's too early to release a new version : a lot of work is still planned ! I first want to stabilize the search tree implementation, and then give genetic algorithms a new chance to improve Orion's evaluation function. For the latter, I think I will disable pruning and reduction techniques to better converge to a good solution...

During all my efforts, I also found time to implement a BMI2 version of the engine, giving (on compatible systems) an incredible... +0% speed boost ! Another disappointment... and a new source of forthcoming debug sessions :-)

Site | TC    | Rank | Elo            | Games
CCRL | 40/40 | 203  | 2383 (+22/-22) | 686
CCRL | 40/4  | 209  | 2345 (+18/-18) | 1187
CEGT | 40/4  | 204  | 2162 (+13/-13) | 1600

(*) Time control (40/4 means 40 moves in 4 minutes)

April 3rd, 2016

Orion v0.3 is now available !

I'm very happy to release this new version after several weeks of hard work. It (almost) consists of a complete rewrite of the previous version, in order to have more readable and robust code, which should be a better basis for further enhancements. And the code no longer throws tons of warnings when compiling ;-)

Aside from rewriting, some features have been added, changed or removed :

Evaluation is unchanged. The new pruning techniques allow smaller search trees while adding some search instability. This results in less reliable moves at shallow depths, but should increase strength at longer time controls. I'm very impatient to see how it will behave in tournaments !

Next version will focus on evaluation enhancement with a PBIL framework already implemented and ready to be played with !

July 19th, 2014

New Orion v0.2 ratings :

Site | TC    | Rank | Elo            | Games
CCRL | 40/40 | 203  | 2266 (+38/-38) | 230
CEGT | 40/4  | 1346 | 2105 (+25/-25) | 600

(*) Time control (40/4 means 40 moves in 4 minutes)

June 25th, 2014

Orion v0.2 participated in its first tournament ("Special Stars", organized by CCRL team) and finished in 4th place !

As it was my goal to compete with other engines, I'm very proud of it ;-)

June 17th, 2014

First feedback from testers with computers that don't support the 'popcnt' instruction shows that the engine may crash : this problem has been fixed and a patched version of Orion v0.2 has been repackaged in the zip file (see the download section).

This shows that we never test enough ! Thanks to all testers for their patience...

Please report any new problem here.

June 15th, 2014

I'm pleased to announce the release of Orion v0.2 !

This new version includes :

All these features should improve the engine speed :-)

Please enjoy !

June 7th, 2014

The CEGT team intensively tested Orion v0.1... playing 1100 games ! Here is the rating obtained :

Site | TC   | Rank | Elo            | Games
CEGT | 40/4 | 1360 | 2048 (+18/-18) | 1100

(*) Time control (40/4 means 40 moves in 4 minutes)

May 31st, 2014

After the last CCRL update (many thanks to all testers !), these are the ratings of Orion v0.1 :

Site | TC    | Rank | Elo              | Games
CCRL | 40/4  | 229  | 2167 (+39/-39)   | 248
CCRL | 40/40 | 220  | 2194 (+116/-108) | 30

(*) Time control (40/4 means 40 moves in 4 minutes)

May 24th, 2014

Orion v0.1 is now listed in CCRL (in the "complete list" only, because it has played fewer than 200 games) !

After 30 games played, Orion has been evaluated at 2194 elo. The error margin is quite big (+/- 116), but totally normal since only a few games have been played. I think its real level is closer to 2078 :-)

May 21st, 2014

I'm very happy and proud to release the first version of my UCI chess engine : Orion v0.1 !

I started to work on it several years ago, as a hobby, but decided to rewrite it entirely (and more seriously) at the beginning of the year, switching from Java (easy for prototyping) to C (easier to distribute).

It includes :

My long-term goal is to reach 2500 elo (one day ?!), but for the moment, this version seems to have, let's say, some room for improvement :-)

It's a 100% original work (no fork/derivative), largely inspired by chessprogramming.wikispaces.com, with ideas taken from the excellent blogs of Jonatan Pettersson (Mediocre) and Thomas Petzke (iCE).

In order to use Orion, you will need a GUI like Arena.

Last but not least, many thanks to Graham for accepting Orion to enter the CCRL competition !

Please enjoy !

License

Orion is free : you can download and use/test it without limitation/restriction. The zip contains a Windows executable, a personal logo (astronomy is another passion), and a network file. You are allowed to redistribute it or its elements, on the absolute condition that you don't modify them. The sources of the engine are not included, since development is at too early a stage. Since v0.8, part of Orion has been released under the MIT license (the "Cerebrum" library).

Download v1.0 (64-bit version)

Previous versions : Download v0.1 to v0.9 (64-bit versions)