Sunday, February 23, 2014

Chess Coding Challenge - Part 1 - How I Leveraged Scala and FP to Find a Solution

NOTE: I have no idea how many parts this series will end up being. It will be as many parts as are needed to tell the story of how this problem came to me, how it tortured me for weeks, and how it called on so many of the skills I have learned over the 32+ years I have been programming, developing and software engineering. As I write new parts, I will be sure to come back here and add forward references to them.

Part 2 - Scala infrastructure source code (and two encoder/decoder implementations)

Part 3 - Scala solution source code (and interim encoder/decoder implementations)

Summary:
The initial problem statement of the Chess Coding Challenge, the years since I first heard it as a job interview question and the numerous Aha...pause...Ah-Shit moments which have followed me right through to writing this blog post.



Details:
A couple of years ago, Bill Strahan and I were meeting (as we have every Wednesday night for the last 10 years) and he mentioned one of the most memorable moments he had ever had on a job interview. He said that after they had done all the preliminaries and other such stuff, they asked him a simple question regarding Chess. The problem statement was intentionally simple, but that simplicity belied a very subtle set of below-the-surface complications. As best as I can remember Bill's telling, here's the problem statement:


  1. Given a picture of an 8x8 chessboard from any valid chess game, what is provably the absolute minimum number of bits required to losslessly encode all the pieces?


Bill paused and let me mull on it. And then he said that he had gotten a solution down to just 176 bits. I mulled for a minute and then reached for my trustiest of tools, RLE (Run Length Encoding). I was certain it would give me the smallest value. And for the starting board, it looked awesome. Bill and I then started discussing specific board positions where RLE might not be as efficient. He then described his solution (which I will cover in a later post) and I could see how it worked. He pointed out that he thought there was something more he could do about the inefficiencies around pawns, but he just had not spent any more time on it. We played with the problem space for another couple of minutes and then dropped it.
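For anyone who hasn't bumped into RLE before, here is a minimal sketch (my own illustration for this post, not Bill's encoding nor the one I eventually built) of run-length encoding applied to the 64 squares read in row order. It compresses the starting position nicely because the middle four ranks collapse into one long run of empty squares; a board with pieces scattered all over produces dozens of length-1 runs and can actually grow.

```scala
// A sketch of generic run-length encoding, applied here to the 64 squares read
// in row order (the String symbols are just illustrative labels).
def runLengthEncode[A](squares: List[A]): List[(A, Int)] =
  squares match {
    case Nil => Nil
    case head :: _ =>
      val (run, rest) = squares.span(_ == head)
      (head, run.length) :: runLengthEncode(rest)
  }

// The starting position compresses well because ranks 3 through 6 form one
// 32-square run of empties; a scattered board, one piece per run, does not.
// runLengthEncode(List("wR", "wP", ".", ".", ".", ".", ".", "bP"))
//   == List(("wR", 1), ("wP", 1), (".", 5), ("bP", 1))
```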


I picked it up a week or so later. However, my interest in Chess had waned over the years. Once I had learned Go (from David Stafford in 1988), I drifted away from Chess. This problem was slightly more interesting from a computational perspective, though. Since RLE didn't seem to work, I dropped the problem.


Jump ahead a year or so. I've discovered Scala and FP. I am enjoying my experience of lazily learning both. Then, the free Coursera MOOC (Massive Open Online Course) “Functional Programming Principles in Scala” became available with Scala's creator (Martin Odersky) teaching it. I jumped in wholeheartedly (and aced it, which included turning in every assignment on time). During that course, one of the techniques explored was Huffman coding. This was the first time I had ever really explicitly seen this kind of encoding. I enjoyed doing the problem they provided (related to storing words, I think). A day or two after completing the lesson, Bill's interview chess problem just jumped into my mind again. I immediately saw how I could use Huffman coding to quickly encode the chess board. So, in less than 10 minutes, I reworked the code to encode any chess board. And sure enough, it came in at 162 bits, way below Bill's 176-bit solution.
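To make the idea concrete, here is a minimal sketch of the technique, assuming one plausible symbol model (13 square values: empty plus six piece types per color) and the starting position's frequencies. It is an illustration of the approach, not the course's code nor my actual solution, so the exact bit total it produces may differ slightly from the 162 figure above.

```scala
// A minimal sketch of Huffman coding over chess-square symbols, using the piece
// frequencies of the standard starting position. Assumption: one symbol per
// square, 13 symbols in total. Not the course's code nor my eventual solution.
object HuffmanSketch {

  sealed trait Tree { def weight: Int }
  final case class Leaf(symbol: String, weight: Int) extends Tree
  final case class Node(left: Tree, right: Tree) extends Tree {
    val weight: Int = left.weight + right.weight
  }

  // Starting position: 32 empty squares, 16 pawns, 4 each of R/N/B, 2 queens, 2 kings.
  val startingFrequencies: List[(String, Int)] = List(
    "empty" -> 32,
    "wP" -> 8, "bP" -> 8,
    "wR" -> 2, "bR" -> 2, "wN" -> 2, "bN" -> 2, "wB" -> 2, "bB" -> 2,
    "wQ" -> 1, "bQ" -> 1, "wK" -> 1, "bK" -> 1
  )

  // Repeatedly merge the two lowest-weight trees until a single tree remains.
  def buildTree(freqs: List[(String, Int)]): Tree = {
    def merge(trees: List[Tree]): Tree = trees.sortBy(_.weight) match {
      case single :: Nil  => single
      case a :: b :: rest => merge(Node(a, b) :: rest)
      case Nil            => sys.error("no symbols to encode")
    }
    merge(freqs.map { case (s, w) => Leaf(s, w) })
  }

  // Walk the tree: left = 0, right = 1, yielding a bit pattern per symbol.
  def codeTable(tree: Tree, prefix: Vector[Int] = Vector.empty): Map[String, Vector[Int]] =
    tree match {
      case Leaf(s, _) => Map(s -> prefix)
      case Node(l, r) => codeTable(l, prefix :+ 0) ++ codeTable(r, prefix :+ 1)
    }

  def main(args: Array[String]): Unit = {
    val table = codeTable(buildTree(startingFrequencies))
    // Cost of encoding the starting position square by square with this table.
    // Frequent symbols (empty, pawns) get short codes; queens and kings get long ones.
    val bits = startingFrequencies.map { case (s, n) => n * table(s).length }.sum
    println(s"starting position: $bits bits")
  }
}
```

With these frequencies, the empty-square code comes out to 1 bit and the pawn codes to 3 bits, which is exactly what makes the starting position so cheap to encode.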


I was so damn elated. I texted him. We got together and I showed him the solution. At first, he thought it was great. Then, he asked me what would happen with pawn promotions. I said I didn't think it would matter much in typical games. That's when he clarified and said that the problem domain wasn't just typical/normal/average games. It was any game board that was reachable from a valid starting chess board via any legal sequence of moves. A quick check of the math where all 16 pawns were promoted to queens ruined my simple Huffman encoding approach. ARGH!
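To give a flavor of that quick check (these are rough figures using the code lengths from the sketch above, not the exact table I was using at the time): a Huffman table tuned to the starting position gives its shortest codes to empty squares and pawns and its longest codes to queens and kings. Promote all 16 pawns to queens (ignoring, for the sake of the estimate, the captures that would actually be needed to let them all through) and the table is suddenly tuned to exactly the wrong distribution of pieces.

```scala
// Rough code lengths from the starting-position Huffman table sketched above:
// empty ~1 bit, pawn ~3 bits, R/N/B ~5 bits, Q/K ~6 bits.
val queens       = 18 * 6   // 16 promoted pawns + the 2 original queens
val kings        =  2 * 6
val rooksMinors  = 12 * 5   // rooks, knights and bishops
val emptySquares = 32 * 1
val total        = queens + kings + rooksMinors + emptySquares
// total = 108 + 12 + 60 + 32 = 212 bits -- well past Bill's 176.
```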


And so the problem went away for another while...a long while. I happily wrote much Scala/FP and Java code in the intervening time.


Then, on Super Bowl Sunday this year (Feb/2nd), I had just finished a first draft of a Google Document on creating a “Better Scala” (an idea which, the night before, woke me when I got up to hit the restroom and then kept me up telling me all about it...for hours...UGH). Due to the a$$whoopin the Seahawks were giving the Broncos, I barely had any attention on the TV.


And that's when this damn Chess problem showed up again. I have NO IDEA why it showed up. But it did. It just popped into my mind along with a new variation on the previous Huffman solution. And this time, I was convinced I had found a solution beating Bill’s.


So, I opened Notepad and just began typing whatever came. I wasn't going to be left alone until I got it out of my head. I had just learned that lesson from my “Better Scala” experience. Once I finished, it looked like I consistently got it under 168 bits (21 bytes, one byte less than Bill's solution). I texted Bill and let him know. The plan was to discuss it on Wednesday night. The high I felt from resolving the problem was so awesome! I wasn’t able to pay it much attention on Monday. However, it was on my mind as I drove into work on Tuesday morning.


So, Tuesday morning, February 4th, I decided to share my problem with all my software engineering friends on Facebook. It looked like this:
Jim O'Flaherty is facing an interesting "programming" problem:
Given a picture of an 8x8 chessboard from any valid chess game, what is provably the absolute minimum number of bits required to losslessly encode all the pieces?


{fps}Any guesses on this? And if you guess, please explain how you got to said number of bits. Thank you! I'll wait awhile and then post the answer at which I arrived, and how I got there. The only clue I'll give is I got it under 192 bits (24 bytes).


I then wrote the first comment attempting to offer clarification for all the questions I knew might be coming:
Just as clarification, this is only the pieces on a chessboard that can be arrived at from a valid sequence of moves starting from the standard piece layout for a normal chess game. The encoding does not have to worry about temporal aspects of the game like color to move, castling, en passant, etc. It does have to account for pathological states that arise from maximum pawn promotion (which can be to any valid piece: rook, knight, bishop or queen).


I was (falsely) convinced I had a solution well under 168 bits and wanted to see if anyone else could find one smaller than that, so I made the post at the beginning of the work day. I then dropped off of Facebook until that evening. Once I got home and logged in, I found that the comment thread on the post had taken on a life of its own. It was awesome seeing all of the different kinds of thoughts about how to encode it.


David Stafford (a friend with whom I competed when we first got our home computers back in 1981) had asserted in one of his comments that he had gotten his solution down to 174 bits, that he thought a consistently efficient encoding couldn't be variable length, and that a variable-size encoding couldn't be efficiently placed into a fixed bit window. That made me begin to doubt my solution.


As I went to bed that night, right before I went to sleep, a pathological board layout appeared in my mind. I then couldn't sleep and had to prove that it broke my newly created algorithm. It did. That was one hell of a downer.


I got up for work on Wednesday morning and my brain kept playing with different ways to approach the encoding. This began a cycle of finding an encoding that would work great on the previous algorithm's pathological case but then have its own unique pathological case. I went through this cycle at least 6 times.


Since it was Wednesday, that evening Bill and I met like we have every week for 10+ years. I asked him about his solution again. We stepped through it. And it really did encode the initial board very small. However, at this point I was rapidly becoming an expert in thinking up pathological cases which broke my algorithms. I took a moment to evaluate his algorithm and then asked him to encode what I thought might be a pathological case for it. Sure enough, it pushed well past the 176 bits he had thought was the max. We then discussed it. I decided to show him my (now defunct) last solution. We talked through it. Then, he realized that we could use part of mine with part of his and come up with a better solution. We worked through it and got it to 172 bits, 2 bits better than David Stafford's solution. We then went about trying to find the pathological case for this new algorithm. The ones we found were consistently encoding below 172.


When I got home that night, I wrote on the Facebook thread about getting to 172 bits. David Stafford had announced having gotten his solution down to 171 bits. I was tired and just wanted to get this damn problem out of my head. So, I committed to writing up the new 172-bit solution Bill and I had worked through on my blog (this series of posts) and detailing how it worked.


Again, right before going to sleep, I thought of a pathological case for the new solution. Sure enough, it went up to 176. That was so damn annoying. I was able to let the problem go a bit for the rest of Thursday and Friday. But, Friday night, my brain wouldn't stop and came up with another variation of the last solution Bill and I had. It worked great to resolve the previous pathological condition, but I immediately set about trying to find the pathological condition for this new algorithm. Sure enough, it had one, too, which topped out at over 180 bits. It was pretty damn depressing. It was not my favorite weekend.


And then, while getting ready for work and just before walking out the door on Monday morning, my brain surprised me with a new solution. However, this one was going to require more than just imagining my way to a conclusion. I was going to have to write code and work through test cases to validate each step in the process. And that was going to take some time.


That Wednesday night, Bill wasn't available due to a conflict. So, I came home and started working up the solution in Scala (which is what I will be covering in the next several parts). The coding in Scala and FP didn't take long. What took time were all the pauses to figure out ways to write out test cases, get them sucked in and testable, etc.


I was able to get all the infrastructure completed. And I was able to do the simplest 256-bit and 192-bit cases. However, I had to stop as I had run out of time. Naturally, Murphy showed up and ensured I didn't get to work on it again: between work, Valentine's Day and various life events, it was a whole freakin' week before I could touch it.
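As a point of reference, here is a minimal sketch (my guess at what those two “simplest” cases look like; the actual coder implementations are covered in Part 2) of two fixed-size baselines that land on exactly those numbers: 4 bits for every square gives 64 × 4 = 256 bits, and a 64-bit occupancy bitmap followed by 4 bits per occupied square gives at most 64 + 32 × 4 = 192 bits.

```scala
// A minimal sketch of two fixed-size baseline encodings. The Board type and
// piece codes here are assumptions for illustration, not the Part 2 types.
object FixedSizeBaselines {

  // 64 squares; Some(pieceCode) where pieceCode is 0..11 (6 piece types x 2 colors),
  // or None for an empty square. 13 possible values fit in 4 bits.
  type Board = Vector[Option[Int]]

  private def toBits(code: Int): Seq[Boolean] =
    (3 to 0 by -1).map(bit => ((code >> bit) & 1) == 1)

  // Baseline 1: 4 bits for every square => 64 * 4 = 256 bits, always.
  def encode256(board: Board): Vector[Boolean] =
    board.flatMap(sq => toBits(sq.getOrElse(12))) // 12 = "empty"

  // Baseline 2: a 64-bit occupancy bitmap, then 4 bits per occupied square.
  // With at most 32 pieces on the board: 64 + 32 * 4 = 192 bits worst case.
  def encode192(board: Board): Vector[Boolean] =
    board.map(_.isDefined) ++ board.flatten.flatMap(toBits)
}
```

Note that the 192-bit baseline only actually reaches 192 bits when all 32 pieces are still on the board; with fewer pieces it shrinks.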


During that week, I then made the following lateral-thinking discovery (well, it was more making explicit what I had already been doing implicitly):
To solve this problem, I had to stop looking for chess boards arising from two players competing to win a chess game. Instead, I had to begin to look at it as someone intentionally playing valid moves on a chessboard to produce an end state which attempted to break whatever specific encoding algorithm was being used to capture said state.


When Bill and I met the following Wednesday, I explained this to him and was able to show him the Scala and FP code I had written. But, we didn't have time for me to finish that code. Again, too many real-life things happening.


On Thursday (Feb/20th), I finally got the time to finish up all the remaining implementations and ran the resulting code to produce a huge number of results. I took the results and dropped them into a Google Spreadsheet. After playing with the numbers, I found a bug in the coders (after having to hand-walk through the encoding/decoding to make sure I wasn't losing my mind). I then worked through all the coders to ensure I hadn't missed any implementation details. And then I stepped through each of the test cases (most of which were pathological cases for various coder algorithms).
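A round-trip check is the mechanical version of that hand-walk. Here is a minimal sketch of the idea (the Coder trait and Board alias below are illustrative placeholders, not the real types from Parts 2 and 3): every coder, for every test board, must decode its own encoding back to the original board, and any failure points straight at an encoder/decoder mismatch.

```scala
object RoundTripCheck {
  // Illustrative placeholders only; the real coder implementations and board
  // representation appear in Parts 2 and 3.
  type Board = Vector[Option[Int]] // 64 squares, as in the earlier baseline sketch

  trait Coder {
    def name: String
    def encode(board: Board): Vector[Boolean]
    def decode(bits: Vector[Boolean]): Board
  }

  // Every coder, for every test board: decoding its own encoding must give the
  // original board back. Returns a description of each failing pair.
  def roundTripFailures(coders: List[Coder], testBoards: List[Board]): List[String] =
    for {
      coder      <- coders
      (board, i) <- testBoards.zipWithIndex
      if coder.decode(coder.encode(board)) != board
    } yield s"${coder.name} failed to round-trip test board #$i"
}
```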


I then reran everything, dropped the new results into the Google Spreadsheet and finished my analysis. While it is possible I will think of another pathological case, I now sincerely doubt it. But more on why I think that in a later post. {smirk}


The final result: I now have something under David Stafford's 171 bits (a figure for which he has not yet shown his solution).
