Episode 99: Ben Sulsky

For a guy who’s played $500/$1000 no-limit, Ben “Sauce123” Sulsky is a surprisingly nitcast-appropriate guest. He talks about his background in philosophy, his low-rolling lifestyle, how he arrived at the heavily game-theoretic style he plays today, and how he thinks artificial intelligence will shape the future.

The Thinking Poker Diaries

As announced on the show, The Thinking Poker Diaries Volume 1 is now available at www.nitcast.com and will be available on Amazon on October 23 (the latter is a slight clarification of what was announced on the show).

Timestamps

0:30 – Hello & Welcome
10:23 – Mailbag: Locking up a win
23:04 – Interview: Ben Sulsky

23 thoughts on “Episode 99: Ben Sulsky”

  1. I think I might need professional help. When Andrew led with “I have some good news and some bad news,” I immediately panicked and started thinking “this is it, they’re ending the Podcast!” or “no – it’s worse; he has ass cancer! Oh my God!”. Thankfully the news wasn’t nearly that bad and I’m glad that this wonderful podcast will be continuing and that everyone’s colons are (one assumes) doing just fine.

  2. I was worried the bad news would be worse too. Sean apparently did a nice job in editing, as the speech was audible.

    Great episode. Ben seems like a cool guy, and now I really want to get a RunItOnce subscription, if only to see him on his treadmill. 🙂

    Good luck with the new book. You know I snap-called on buying that (and I don’t even play/like tourneys!) and I will surely be recommending it to my buddies once I’ve found time to read it.

    I can’t wait for episode 100 and the return of Carlos and Gareth. It should be fantastic.

    • For reasons including people here are smart about betting and #nitcast (FIVE DOLLARS?!?!?!), I’m not sure you’re going to get a lot of action on that bet! Pretty cool though that you guys used to play together…

  3. I enjoyed this episode, but I felt the need to correct something about the discussion on Snowie and GTO solutions. Nate mentioned that someone had said that using a neural network is a lot like looking for the deepest part of a valley, but that the deepest part might actually be a long way away.

    This is a good analogy for finding the minimum of a very complicated function, but not such a good analogy for finding GTO solutions. Here’s why.

    1) For multiplayer games, there can be many equilibrium solutions, and finding them is only part of the problem. It’s not clear which one you want to use, and it’s not just a question of the difficulty of finding them, which is of course high. It’s not even clear that you want to play an equilibrium solution at all. If the other two players are using strategies from the same equilibrium, then you certainly want to play the corresponding equilibrium strategy yourself. But if they’re playing strategies from different equilibria, or not playing an equilibrium at all, plonking yourself down at some equilibrium strategy or other does not give you the guarantees of unexploitability that you get in two-player zero-sum games. You can play ‘GTO’, whatever that means, and be royally screwed. I’m sure that a truly ‘optimal’ strategy involves adjusting to the strategies of all the other players. This leads me to believe that a neural network like PokerSnowie, which has a fixed strategy and reacts only to the action in the current hand, cannot claim to play ‘GTO’. It’s worth noting that HU Limit Hold’em, an extraordinarily complex game, is pretty much solved, as noted on the show, but that the three-player AKQJ game (yes, just four cards) is not, and is still the subject of research and competitions.

    2) In two-player, zero-sum games, like HU games or HU pots postflop in NLHE, there can still be multiple equilibria, but they are guaranteed to have the same payoff. Following the valley analogy, if you find the deepest part of a valley, there may be another valley on the other side of the world with its own deepest part, but you’re guaranteed that the depth is precisely the same. As you can see, the structure of a game makes the problem of determining how to play much harder than simply finding a minimum. Because of this structure, if Snowie is playing optimally postflop, its neural network should have converged to one of these equilibria. If it isn’t, it means that it hasn’t had enough training iterations yet. I have my own CFRM software which I’m hoping to use to check Snowie’s equilibrium calculations at some point. Unfortunately, somebody has just pointed out to me a technical flaw in my bucketing algorithm that has made me question whether I can do this, but I will keep working on it like the gentleman amateur that I am.
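The core engine behind CFRM-style solvers is regret matching in self-play. A minimal sketch of the idea (my own toy code, not John’s software, and rock-paper-scissors rather than poker): in a two-player zero-sum game, the *average* strategies of regret-matching self-play converge to an equilibrium, which for RPS is the uniform mix.

```python
import numpy as np

# Rock-Paper-Scissors payoff matrix for the row player (zero-sum game).
PAYOFF = np.array([[ 0, -1,  1],
                   [ 1,  0, -1],
                   [-1,  1,  0]], dtype=float)

def regret_matching(iterations=20000, seed=0):
    """Self-play regret matching. In two-player zero-sum games the
    time-averaged strategies converge to a Nash equilibrium."""
    rng = np.random.default_rng(seed)
    n = PAYOFF.shape[0]
    regret = [np.zeros(n), np.zeros(n)]      # cumulative regrets per player
    strat_sum = [np.zeros(n), np.zeros(n)]   # running sum of strategies

    for _ in range(iterations):
        strats = []
        for p in range(2):
            pos = np.maximum(regret[p], 0)
            # Play in proportion to positive regret; uniform if none.
            s = pos / pos.sum() if pos.sum() > 0 else np.full(n, 1 / n)
            strat_sum[p] += s
            strats.append(s)
        a = [rng.choice(n, p=s) for s in strats]
        for p, opp in ((0, 1), (1, 0)):
            # Payoff of each of my actions against the opponent's sampled action.
            payoffs = PAYOFF[:, a[opp]] if p == 0 else -PAYOFF[a[opp], :]
            regret[p] += payoffs - payoffs[a[p]]

    return [s / s.sum() for s in strat_sum]
```

Run it and both players’ average strategies end up near (1/3, 1/3, 1/3); the current-iteration strategies, by contrast, can cycle forever, which is exactly why it’s the average that matters.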

    • Thanks, John. I need to preface the following with the caveat that I have no idea what I’m talking about:

      As Nate said, I was the source of the valley analogy (paraphrasing, badly I’m sure, a friend). The point I meant to make is that one can find the deepest part of a given valley more easily than one can prove definitively that it is the deepest of all valleys. I believe that’s consistent with your second point (in two-player zero-sum games, all the equilibria are co-optimal), and what I understand you to be saying in your first point is what I meant the analogy to convey: finding the bottom of a given valley does not guarantee that you’ve found an optimal strategy.

      • Yes, kind of, but my point is that it’s unclear what ‘optimal’ means for multiplayer games, and whatever it does end up meaning, it’s going to be something dynamic, not static, so a static analogy can’t be right. If you think of it as more like finding the lowest point on the surface of the sea, you might be nearer the mark.

      • Thanks for the entertaining and philosophical podcast.
        I would like to hear your thoughts about Snowie, which I bought a year ago. I have not read a serious evaluation of its strength, and I am not convinced by the total win figure provided by the Snowie team. Do you think it would beat NL100 or NL400? Have you noticed a change in playing level across the updates of the neural network?

        As for Go, there has just been a competition between a German 6-dan, Franz-Joseph Dickhut, and a French bot named Crazy Stone. The bot won the first game without handicap, but then lost the next three. You can find commentary and review the games at https://go.codecentric.de/. There is an order of magnitude between 6-dan players and the best professionals, so humans should be safe for a few years. It is not so much increased computing power, which is doomed by the exponential growth of the game tree (as in Arimaa), as the new Monte Carlo Tree Search algorithms that have made this performance of computer programs possible.

        • @ Carroll Jeff,

          It’s really hard to evaluate Snowie’s strength, or ability to beat a real money game, because it hasn’t played against the “mixed ability” fields one finds online. Snowie obviously doesn’t know how to exploit a specific calling station or a tilted maniac, because it hasn’t practiced against that specific player, be it at 100NL or nosebleed stakes. As you’re no doubt aware, switching to exploitative play against an exploitable player in a particular situation has a higher EV than aiming for “balance” against him. (I learned this to my cost by bluff-catching at almost “optimal” frequencies against players that are incapable of bluffing!)
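The bluff-catching point can be made concrete with a toy river spot (the numbers and the `call_ev` helper are mine, purely illustrative). Facing a pot-sized bet with a pure bluff-catcher, a call is break-even when the opponent bluffs a third of the time; against someone incapable of bluffing, every call simply torches a bet.

```python
def call_ev(pot, bet, bluff_freq):
    """EV of calling with a pure bluff-catcher: we win pot + bet when
    the opponent is bluffing and lose our call otherwise."""
    return bluff_freq * (pot + bet) - (1 - bluff_freq) * bet

pot, bet = 100, 100
print(call_ev(pot, bet, 1/3))  # vs a 'balanced' bettor: ~0, break-even
print(call_ev(pot, bet, 0.0))  # vs a player who never bluffs: -100
```

So calling at “optimal” frequencies against the never-bluffer locks in exactly the worst case of the balanced calculation, which is the cost of balance the comment describes.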
          All that said, I believe that the guys in Montenegro who learned poker pretty much solely from training with early versions of Snowie were breaking even at 6-max 200NL Zoom with error rates of around 7–8, and might be playing higher now.

          I’ve only just tried the new update of the AI, and it seems to be quite a radical “upgrade”. The bot is reportedly no longer folding at such exploitable frequencies in certain spots (in the past it sometimes folded the second nuts on the flop to a single bet!), but the pre-flop ranges and bet sizes have also changed significantly. (It even open-limps the SB in 6-max games now.)

          I think these changes serve as a tacit admission that the engine was pretty far from GTO when the software was launched. I wouldn’t be surprised if the changes actually lead to new exploits that require patching, but I’m not qualified to say if the engine is now a tougher player on the whole, or whether it can beat 100NL. I might have a better idea in a couple of weeks once I’ve played it more and taken notes of the changes I spot. FWIW, I beat the “old” version over an insignificant sample size, while spotting some clear mistakes in its judgment, but expect it to crush me in the long run if it has truly improved, especially if I stop sucking out after making blunders. 😉

          • Quick update on my Snowie investigations.
            At 6-max, it used to play a style that led to stats of around 22/18. With its new min-raising and SB-limping strategy, it’s now playing closer to 25/17.

            And it’s completely crushing me so far. Grrrr!

  4. Re computers and go, for a long time computers were terrible at go, because of some mix of
    1) game space
    2) non-locality – things in one corner can affect things a long way from that corner significantly
    3) (probably most important) the lack of a natural metric akin to material in chess.

    I’m a little out of the loop, but as I understand it, they’ve got much better in the last decade or so, through the introduction of methods that rely on playing games out at random many times (Monte Carlo methods), essentially using processing power to come up with a solution to 3). They’re not professional-good yet, but they are better-than-me good. So the expectation of when they become better than mankind has gone from ‘who knows’ to ‘sometime in the not too distant future’.
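The “play games out at random many times” idea can be sketched in a few lines. This is my own toy example, flat Monte Carlo on tic-tac-toe rather than go (and without the tree-search part of MCTS): score each legal move by the win rate of random playouts, which sidesteps the need for a hand-crafted evaluation function, i.e. point 3 above.

```python
import random

# All eight winning lines on a 3x3 board indexed 0..8.
LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] and board[a] == board[b] == board[c]:
            return board[a]
    return None

def random_playout(board, player):
    """Play uniformly random moves to the end; return the winner or None."""
    board = board[:]
    while True:
        w = winner(board)
        if w:
            return w
        moves = [i for i, v in enumerate(board) if v is None]
        if not moves:
            return None  # draw
        board[random.choice(moves)] = player
        player = 'O' if player == 'X' else 'X'

def monte_carlo_move(board, player, playouts=200):
    """Pick the legal move with the best random-playout win count."""
    moves = [i for i, v in enumerate(board) if v is None]
    def score(m):
        b = board[:]
        b[m] = player
        nxt = 'O' if player == 'X' else 'X'
        return sum(random_playout(b, nxt) == player for _ in range(playouts))
    return max(moves, key=score)
```

Give it a board where X has two in a row (`['X','X',None,'O','O',None,None,None,None]`) and the playout statistics alone steer it to the winning square, with no positional knowledge anywhere in the code. Real go programs add a search tree on top of these rollouts, but the evaluation trick is the same.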

  5. the good news / bad news thing freaked me out as well.

    thankfully my mind didn’t immediately jump to ass cancer, but I did fear for nate.

    since nate was not present for the intro, i naturally assumed he had departed the show. perhaps he binked the startup lottery and was now immersed in cocaine and loose women. perhaps he was off to teach philosophy to the privileged class at some ivy league school. either way, that five minutes of dread was real, and when andrew revealed it was merely a sound glitch we had to endure, my heart leapt.
    From that point on, no technical difficulty or exorbitant e-book price would ever deter me.

  6. I think you could have tried what we in the biz call ADR, or Automated Dialogue Replacement, especially for AB’s longer remarks. That is to say, you could have re-recorded them and edited them in. It’s done all the time when the scene is perfect except for a glitch in the soundtrack or an inadvertent exterior sound that bleeds onto the track. It’s also known as dubbing. And since you did not have to match lip movement, it could have been done more easily.

    • Thanks, Keone!

      I never knew the term ADR before. This was actually on the table as one of our options, and for the reasons you mentioned it would have been the ideal solution from a listening-quality perspective. I’m not sure exactly which reasons led Andrew and Nate to their ultimate decision not to re-dub Andrew’s parts, but my guess is that transcribing and re-recording his portions would have taken more time than he wanted to spend. I’m pleased that the salvaged recording was okay to listen to (thank you also to Arty and Dana in the comments above).

  7. Guys – thanks for a great episode, loved the discussion here – and can’t wait to listen to EP 100 tonight. Can anyone point me to a URL with the Ben Sulsky videos which Andrew and Nate mentioned were among the best – if not the best ever – training videos for NLHE? Pretty sure the context was around how well Ben explained GTO considerations for application in game. I’d be really interested in checking these out.

    • Sulsky’s videos are on the Run It Once site. Unfortunately, all his vids are available through Elite membership only, which is about $100 a month. There are some things on YouTube, but no full instructionals.
      Good luck

Comments are closed.