Releases: princesslana/princhess
Release 0.19.0
📦 NEW: Use SEE threshold of +8
📦 NEW: Remove virtual loss (was probably bugged)
📦 NEW: Various threading and performance fixes/improvements
📦 NEW: Info output improvements
📦 NEW: SCReLU in value net
📦 NEW: Time management
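SCReLU (squared clipped ReLU) clamps the pre-activation to [0, 1] and then squares it, giving a smooth, bounded activation that quantizes well. A minimal sketch of the activation function itself, not princhess's actual inference code:

```rust
// SCReLU: squared clipped ReLU, a common NNUE-style activation.
// screlu(x) = clamp(x, 0, 1)^2
fn screlu(x: f32) -> f32 {
    let c = x.clamp(0.0, 1.0);
    c * c
}

fn main() {
    assert_eq!(screlu(-0.5), 0.0); // negative inputs clip to 0
    assert_eq!(screlu(0.5), 0.25); // inside the clip range: squared
    assert_eq!(screlu(2.0), 1.0);  // above 1 clips to 1
}
```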
--------------------------------------------------
Results of princhess-0.19.0 vs princhess-0.18.0 (8+0.08, 1t, 128MB, 8moves_v3.epd):
Elo: 79.17 +/- 14.63, nElo: 120.71 +/- 21.53
LOS: 100.00 %, DrawRatio: 35.20 %, PairsRatio: 3.44
Games: 1000, Wins: 352, Losses: 128, Draws: 520, Points: 612.0 (61.20 %)
Ptnml(0-2): [9, 64, 176, 196, 55], WL/DD Ratio: 0.35
--------------------------------------------------
Release 0.18.0
📦 NEW: Add MultiPV support
📦 NEW: Use generalized cpuct
📦 NEW: 50-move rule damping
📦 NEW: Perspective network
📦 NEW: Policy net from self-play data
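"Generalized cpuct" usually means the PUCT exploration constant grows with the parent's visit count instead of staying fixed, so deep searches keep exploring. A hedged sketch of one common log-growth form; the constants below are illustrative, not princhess's tuned values:

```rust
// PUCT selection score with a visit-dependent exploration constant:
//   cpuct(n) = C_BASE + C_FACTOR * ln((n + C_VISITS) / C_VISITS)
// Constants are placeholders for illustration only.
const C_BASE: f32 = 1.25;
const C_FACTOR: f32 = 1.0;
const C_VISITS: f32 = 8192.0;

fn cpuct(parent_visits: u32) -> f32 {
    C_BASE + C_FACTOR * ((parent_visits as f32 + C_VISITS) / C_VISITS).ln()
}

// q: child value estimate, policy: prior probability from the policy net.
fn puct(q: f32, policy: f32, parent_visits: u32, child_visits: u32) -> f32 {
    let u = cpuct(parent_visits) * policy * (parent_visits as f32).sqrt()
        / (1.0 + child_visits as f32);
    q + u
}

fn main() {
    // Exploration grows slowly with parent visits.
    assert!(cpuct(0) > 1.24 && cpuct(0) < 1.26);
    assert!(cpuct(1_000_000) > cpuct(100));
    // At equal Q, a high-prior unvisited child outscores a low-prior visited one.
    assert!(puct(0.0, 0.5, 100, 0) > puct(0.0, 0.1, 100, 5));
}
```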
elo_check-1 | Score of princhess-0.18.0 vs princhess-0.17.0: 327 - 240 - 433 [0.543] 1000
elo_check-1 | ... princhess-0.18.0 playing White: 186 - 103 - 211 [0.583] 500
elo_check-1 | ... princhess-0.18.0 playing Black: 141 - 137 - 222 [0.504] 500
elo_check-1 | ... White vs Black: 323 - 244 - 433 [0.539] 1000
elo_check-1 | Elo difference: 30.3 +/- 16.2, LOS: 100.0 %, DrawRatio: 43.3 %
Release 0.15.2
📦 NEW: Add MultiPV support
Release 0.17.0
📦 NEW: Value net: new data, 512-node hidden layer, king and threat buckets
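Bucketed networks select different weight sets depending on a board feature such as king location. These notes don't specify princhess's actual bucket layout; a hypothetical quadrant-based king bucket, purely for illustration:

```rust
// Map a king square (0..64, a1 = 0, h8 = 63) to one of 4 hypothetical
// buckets by board quadrant. The real bucket scheme in princhess is
// not documented in the release notes and may differ.
fn king_bucket(sq: u8) -> usize {
    let file = (sq % 8) as usize;
    let rank = (sq / 8) as usize;
    (if file >= 4 { 1 } else { 0 }) + (if rank >= 4 { 2 } else { 0 })
}

fn main() {
    assert_eq!(king_bucket(0), 0);  // a1: queenside, white's half
    assert_eq!(king_bucket(7), 1);  // h1: kingside, white's half
    assert_eq!(king_bucket(63), 3); // h8: kingside, black's half
}
```

Each bucket would index its own slice of value-head weights, letting the net specialize per king region.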
Release 0.16.0
📦 NEW: New value network from self-play data, using goober for training
Score of princhess-0.16.0 vs princhess-0.15.0: 472 - 246 - 282 [0.613] 1000
Elo difference: 79.9 +/- 18.5, LOS: 100.0 %, DrawRatio: 28.2 %
Release 0.15.1
📦 NEW: Increased mate score
Release 0.15.0
📦 NEW: New policy network architecture
📦 NEW: 384-node hidden layer in eval
📦 NEW: More history by keeping the full ttable between searches; 16-bit move struct
📦 NEW: Add CPuctRoot and tune
👌 IMP: Various other refactorings
🐛 FIX: Correct depth calculation
👌 IMP: Include score in tb move info
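A 16-bit move struct packs the from-square, to-square, and move flags into a single u16, halving per-move storage in the tree. The exact field layout princhess uses is not shown in these notes; a typical packing looks like:

```rust
// Illustrative 16-bit move encoding: 6 bits from-square, 6 bits
// to-square, 4 bits flags (promotion piece, castling, en passant, ...).
// princhess's actual bit layout may differ.
#[derive(Copy, Clone)]
struct Move(u16);

impl Move {
    fn new(from: u8, to: u8, flags: u8) -> Self {
        Move((from as u16) | ((to as u16) << 6) | ((flags as u16) << 12))
    }
    fn from_sq(self) -> u8 { (self.0 & 0x3F) as u8 }
    fn to_sq(self) -> u8 { ((self.0 >> 6) & 0x3F) as u8 }
    fn flags(self) -> u8 { (self.0 >> 12) as u8 }
}

fn main() {
    let m = Move::new(12, 28, 0); // e2e4 in 0..64 square indexing
    assert_eq!(m.from_sq(), 12);
    assert_eq!(m.to_sq(), 28);
    assert_eq!(m.flags(), 0);
}
```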
Release 0.14.1
🐛 FIX: Use an explicit architecture/target rather than native
Release 0.14.0
📦 NEW: Eval net with hidden layer increased to 256 nodes, plus quantization
📦 NEW: Policy net
📦 NEW: Change net inputs to position + threats + defenses, with previous move and virtual king mobility
📦 NEW: Correct training data based on tablebases
📦 NEW: Switch to fathom for tablebases
📦 NEW: Add node limit support (use a node limit of 1 for policy-only play)
📦 NEW: Use FPU in selection
📦 NEW: Add policy temperature
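Policy temperature rescales the policy logits before the softmax: temperatures above 1 flatten the move distribution, below 1 sharpen it toward the top move. A sketch of the standard technique (this is an assumption about how it's wired in, not the engine's exact code):

```rust
// Softmax over raw policy logits with a temperature parameter.
// Subtracting the max logit first keeps exp() numerically stable.
fn policy_softmax(logits: &[f32], temperature: f32) -> Vec<f32> {
    let max = logits.iter().cloned().fold(f32::NEG_INFINITY, f32::max);
    let exps: Vec<f32> = logits
        .iter()
        .map(|&l| ((l - max) / temperature).exp())
        .collect();
    let sum: f32 = exps.iter().sum();
    exps.iter().map(|e| e / sum).collect()
}

fn main() {
    let sharp = policy_softmax(&[1.0, 0.0], 0.5);
    let flat = policy_softmax(&[1.0, 0.0], 2.0);
    // Probabilities sum to 1, and lower temperature concentrates mass.
    assert!((sharp.iter().sum::<f32>() - 1.0).abs() < 1e-5);
    assert!(sharp[0] > flat[0]);
    assert!(flat[0] > flat[1]);
}
```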
Release 0.13.0
📦 NEW: Only expand nodes on second visit
📦 NEW: Consider visits in move selection
📦 NEW: Eval network
🐛 FIX: Use <empty> as default for SyzygyPath
🐛 FIX: Error in cp eval; sane minimums for info output
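Expanding a node only on its second visit avoids generating and allocating children for leaves that are never revisited, which is most of the tree. A schematic sketch of the idea, not princhess's actual tree code:

```rust
// Lazy MCTS expansion: children are only generated once a node has
// proven worth revisiting, saving allocation for one-visit leaves.
struct Node {
    visits: u32,
    expanded: bool,
    children: Vec<Node>,
}

impl Node {
    fn new() -> Self {
        Node { visits: 0, expanded: false, children: Vec::new() }
    }

    // Records a visit; returns true if this visit triggered expansion.
    // `num_moves` stands in for the legal-move count at this node.
    fn visit(&mut self, num_moves: usize) -> bool {
        self.visits += 1;
        if self.visits >= 2 && !self.expanded {
            self.expanded = true;
            self.children = (0..num_moves).map(|_| Node::new()).collect();
            return true;
        }
        false
    }
}

fn main() {
    let mut n = Node::new();
    assert!(!n.visit(3)); // first visit: evaluate only, no children
    assert!(n.visit(3));  // second visit: expand
    assert_eq!(n.children.len(), 3);
    assert!(!n.visit(3)); // already expanded
}
```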
STC:
Score of princhess-0.13.0 vs princhess-0.12.0: 505 - 209 - 286 [0.648] 1000
Elo difference: 106.0 +/- 18.7, LOS: 100.0 %, DrawRatio: 28.6 %