
Game Theory and Poker


Comments

  • Users Awaiting Email Confirmation Posts: 176 ✭✭pkr_ennis


    Wow, that's genius!
    I think this shows a basic optimal betting strategy for poker. Care to share the frequency player 1 should bluff with a Queen, RoundTower? Please.
    It would be even more interesting to add player 2 raising options in there, as this would create a basic betting/bluffing/calling strategy in a real poker situation. Wouldn't it?
    I'm starting to understand your location, RoundTower, lol.


  • Registered Users Posts: 1,922 ✭✭✭fergalr


    Let's see if I've understood this... apologies if I'm a bit verbose here, RoundTower, I just want to check I've understood it properly.
    I also have a question or two about applying this to games in general:
    RoundTower wrote: »
    here's how you solve this problem:
    there is nothing to be gained from checking with an Ace, so you should always bet.
    With an Ace, the profit for a check is 1. No matter what player 2 has or does in response to a bet, the minimum profit is still 1, and it could be as high as 2 if the other player calls the bet. So rationally, you'd always bet.
    RoundTower wrote: »
    there is nothing to be gained from betting with a King, so you should always check.
    If you check with a King, the EV of the profit is 0.
    But if you bet, the EV of the profit falls to -0.5 (half the time player2 will have an Ace and you'll lose 2; half the time player2 will have a Queen and you'll only gain 1, because they will rationally fold, knowing they are beaten; hence -0.5 EV).
    -0.5 EV being less than 0 EV, it's rational to always check with a King.

    RoundTower wrote: »
    with a Queen you should adopt a mixed strategy, either check or bet. let's say you bet with probability x, 0 <= x <= 1

    So, this is where it gets a little trickier.
    When you have a Queen and player2 has an Ace, you are going to lose money. Nothing can be done about that. But half the time that player1 has a Queen, player2 has a King, not an Ace.
    When player1 has a Queen and player2 has that King, player2 doesn't know whether player1 has a Queen or an Ace. Therefore there's a chance of bluffing player2 out of the pot by betting.
    Hence, you can't say that player1 should either always check or always bet here.
    The game-theoretic thing to do will be to adopt a mixed strategy, which means that you don't always take a single action; instead you choose which action to take according to a probability distribution. As you can only either check or bet, the distribution here can be represented by a single number - essentially the ratio of checks to bets.

    So, the only thing to do to figure out optimal play for player1 is to figure out what that number is.
    RoundTower wrote: »
    for him, he should always call with an Ace and fold with a Queen.
    Simple enough - he's ahead with an Ace and behind with a Queen. The only time he'll have an Ace and have a decision to make is when you are trying to bluff him with your Queen, so he obviously calls. The only time he has a Queen and has a decision to make is when you have bet with your Ace, so he obviously folds.
    RoundTower wrote: »
    He should call with a King some of the time - enough that you don't get to win the pot every time with a Queen, but he also doesn't want to pay you off every time you have an ace. So say he calls with probability y.
    So, that's what player2 does when he has an Ace or a Queen.
    There are two situations in which player2 can have a King, and have a decision to make about whether to call or fold.
    One is when player1 has an Ace and has made a bet (because it made no sense to just check with an Ace), and the other is when player1 has a Queen and has made a bet as a bluff (with frequency X).
    So, when player2 has the King and sees a bet, he has to call a certain amount of the time - i.e., as you say, with probability Y.

    The Y that maximises player2's EV obviously depends on what X player1 is using, given that player1 moves first each time.
    And likewise, if player1 wants to maximise his winnings, then his X should be chosen mindful of player2's Y. For example, if Y were 1, player1 would set X to 0 - if player2 always calls with the King, player1 should always check the Queen (and maximise profit by getting paid off on his Ace).

    But if Y were 0, player1 would set X to 1 - if player2 never calls with the King, player1 should always bet with the Queen, and while he won't get paid off on his Ace, he'll take down the pot whenever he only has a Queen instead.

    But we're trying to find the equilibrium strategy, which is the best X given that player2 will choose the best Y...
    RoundTower wrote: »
    Then there are 6 possibilities for how the cards get dealt, all equiprobable. When you have an Ace and he has a Queen, for example, you make €1. When you have a Queen and he has an Ace, you lose €2 x of the time and lose €1 (1-x) of the time, so on average you make €(-2x -1(1-x)) which is €(-x-1). You can find the outcomes for the other four possibilities too, and average them together to get the expectation of the game. You should get (x + y - 3xy)/6.
    So, the expectations are:
    [player1card,player2card]:[expectation]
    AQ:1
    AK:y+1
    KA:-1
    KQ:1
    QA:-x-1
    QK:2x-3xy-1

    And since these are all equally likely, the expectation of the game is the sum of them all divided by six, which gives, as you said, (x+y-3xy)/6 - where x is the probability that player1 bluffs with a Queen, and y is the probability that player2 calls with a King.
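    Quick numerical sanity check of that table, if it's useful - just a throwaway Python sketch of my own, not part of RoundTower's solution (the function name is made up):
[code]
# x = prob. player1 bluff-bets a Queen, y = prob. player2 calls a bet with a King.
from itertools import permutations

def ev_player1(x, y):
    """Average profit to player1 over the 6 equally likely deals."""
    total = 0.0
    for p1, p2 in permutations("AKQ", 2):
        if p1 == "A":                       # always bet; Queen folds, King calls with prob y
            total += 1 if p2 == "Q" else (y * 2 + (1 - y) * 1)
        elif p1 == "K":                     # always check; straight to showdown
            total += 1 if p2 == "Q" else -1
        else:                               # Queen: bluff with prob x
            if p2 == "A":                   # the Ace always calls a bet
                total += x * (-2) + (1 - x) * (-1)
            else:                           # the King calls with prob y
                total += x * (y * (-2) + (1 - y) * 1) + (1 - x) * (-1)
    return total / 6

print(ev_player1(1/3, 1/3))                 # 1/18, about 0.0556
print((1/3 + 1/3 - 3 * (1/3) * (1/3)) / 6)  # same number via the closed form
[/code]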

    RoundTower wrote: »
    Now to find the optimal x, you need to know that you should choose x such that you are indifferent to his choice of y, in other words, the terms containing y should sum to 0.

    So we now have the expectation of the game (from the perspective of player1, and remembering the game is zero sum), expressed as a function of X and Y.
    What player1 wants to do is choose X such that the value of the expectation, given the worst value of Y player2 can choose for that X, is at a maximum.
    This happens when player1 chooses 1/3 as the value. (see pic)
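    A crude numerical check of that maximin claim, for anyone who wants it - just a grid search I threw together, so a sketch rather than anything rigorous:
[code]
# For each x, find the worst EV player2 can force by choosing y,
# then pick the x that makes that worst case as good as possible.
def ev(x, y):
    return (x + y - 3 * x * y) / 6

grid = [i / 100 for i in range(101)]
worst = {x: min(ev(x, y) for y in grid) for x in grid}
best_x = max(worst, key=worst.get)
print(best_x, worst[best_x])   # ~0.33 and ~1/18, matching the 1/3 claim
[/code]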

    I'm not sure I understand what you mean when you say the player 'should choose x such that you are indifferent to his choice of y, in other words, the terms containing y should sum to 0' - can you explain this part in more detail?
    I understand that 1/3, 1/3 is the equilibrium from the function, and there are plenty of ways to calculate that, but I don't fully see how you calculated it that way.
    RoundTower wrote: »
    So y - 3xy = 0 for all y, therefore x = 1/3. Similarly you should get y = 1/3, and the expectation of the game should be that you, on average, make one eighteenth of a euro. He expects to lose one eighteenth of a euro on average.
    Now you could see how the game changes if you make a different choice, for example if the bet amount was €2 instead of €1, or, more interestingly, if you had a choice to check, bet €1 or bet €2.

    I'm pretty sure I understand the analysis of this game that you gave.

    I'm wondering though, how well does the method used generalise? Working this stuff out by hand, like you did there, could get pretty complex as the game grows. How would you go about analysing more complex games? Is it possible to work these things out algorithmically, or to represent the game rules in some standardised format and apply something to calculate the equilibrium positions in a general way? (Or do some sort of stochastic approximation?)

    [Attached images: gameTree.png, imagePoker.png]


  • Registered Users Posts: 1,922 ✭✭✭fergalr


    pkr_ennis wrote: »
    Wow, that's genius!
    I think this shows a basic optimal betting strategy for poker. Care to share the frequency player 1 should bluff with a Queen, RoundTower? Please.
    AFAIK he's saying that X is the probability that a player bets with a Queen (a bet with a Queen is always a bluff here), so that's the number you want. For the equilibrium strategy, it's 0.333..., so bluff one third of the time with the Queen.
    pkr_ennis wrote: »
    It would be even more interesting to add player 2 raising options in there, as this would create a basic betting/bluffing/calling strategy in a real poker situation. Wouldn't it?
    I'm starting to understand your location, RoundTower, lol.
    I'd be very wary about extending the results from this analysis - or even a slightly more complex one - to a more complex poker game, like real poker. Small changes in the game setup could make the correct strategies very different.


  • Registered Users Posts: 39,188 ✭✭✭✭Mellor


    pkr_ennis wrote: »
    Chris Ferguson is one of the best poker players in the world.
    I'm pretty sure his father is (or was) a leading GT professor, at UCLA IIRC.

    When I look at game theory problems, I tend to approach them as if I were playing a reasonably rational opponent, but not one that plays with perfect rationality.

    For example, the Nash equilibrium says the answer to the Traveller's Dilemma is 2, yet we all know that picking, say, 94 would have a much higher EV against most people.

    This is the huge flaw in Nash equilibrium: it assumes everyone has perfect rationality, which they don't.

    A perfect example is the "guess 2/3 of the average" puzzle.
    In this, players attempt to guess a number that is 2/3 of the average of all players' guesses. The solution is simple, yet hugely flawed.

    Applying this to the AKQ problem, I'd assume that player B calls with a K 50% of the time. Now what is our EV, and how often should we bluff?


  • Users Awaiting Email Confirmation Posts: 176 ✭✭pkr_ennis


    Mellor wrote: »
    A perfect example is the "guess 2/3 of the average" puzzle.
    In this, players attempt to guess a number that is 2/3 of the average of all players' guesses. The solution is simple, yet hugely flawed.

    Applying this to the AKQ problem, I'd assume that player B calls with a K 50% of the time. Now what is our EV, and how often should we bluff?

    I understood the solution like this: that B calls 100% of the time with a K because A bets with Qs 33% of the time. This is where A's edge lies in the puzzle, and it's because B never gets a chance to raise, etc.

    GT is flawed so, jeepers. Just wondering if you could quantify that and add it to a puzzle, like this:

    A is on the river with the 2nd nuts and bets,
    B has the third nuts and is thinking something along these lines,
    1. A bets a hand that is beating me x% of the time
    2. A bets a hand that is losing to me x% of the time
    Hmm, stuck here, because A is considering the times in which he's beat, which could take into account A's tilting tendencies as well as when he's playing solidly.
    Maybe you have to come up with two different numbers...

    I found this article very interesting, and it's totally relevant to this thread:
    http://www.twoplustwo.com/magazine/issue60/Christenson-commentary-approximating-game-theoretic-optimal-strategies-full-scale-poker.php

    C :)


  • Registered Users Posts: 383 ✭✭REFLINE1


    Mellor wrote: »
    I'm pretty sure his father is (or was) a leading GT professor, at UCLA IIRC.

    When I look at game theory problems, I tend to approach them as if I were playing a reasonably rational opponent, but not one that plays with perfect rationality.

    For example, the Nash equilibrium says the answer to the Traveller's Dilemma is 2, yet we all know that picking, say, 94 would have a much higher EV against most people.

    This is the huge flaw in Nash equilibrium: it assumes everyone has perfect rationality, which they don't.

    A perfect example is the "guess 2/3 of the average" puzzle.
    In this, players attempt to guess a number that is 2/3 of the average of all players' guesses. The solution is simple, yet hugely flawed.

    Applying this to the AKQ problem, I'd assume that player B calls with a K 50% of the time. Now what is our EV, and how often should we bluff?



    You're right, Mellor - his father is Thomas S. Ferguson.
    Some really good examples at the link below.
    http://www.math.ucla.edu/~tom/Game_Theory/mat.pdf


  • Registered Users Posts: 5,083 ✭✭✭RoundTower


    fergalr wrote: »

    I'm not sure I understand what you mean when you say the player 'should choose x such that you are indifferent to his choice of y, in other words, the terms containing y should sum to 0' - can you explain this part in more detail?
    I understand that 1/3, 1/3 is the equilibrium from the function, and there are plenty of ways to calculate that, but I don't fully see how you calculated it that way.

    I'm not sure what the proof is that this works as a method to find the equilibrium of the game, I just know it does.

    As for the argument that "game theory is flawed because it assumes rational players" - this isn't really true. Examples like the Traveller's Dilemma are interesting because they are atypical and provide unexpected results. If you could find optimal play for poker against rational players, firstly, you would probably be able to crush almost any game in the world, and secondly, it would then be a relatively small step to move towards a strategy that exploits weaker players (for example, in the example game, if the guy always called with a King you would never bluff with a Queen).

    You can't solve any "real" poker game by breaking down all the trillions of possibilities, like we did for the one-card game, but you can still apply some of the conclusions from it. For example, you can generalise the result to say that your optimum bluffing frequency should be such that your bluffs:value bets are in the ratio bet size:pot size+bet size (which was 1:3 in the example).


  • Registered Users Posts: 1,922 ✭✭✭fergalr


    Mellor wrote: »
    I'm pretty sure his father is (or was) a leading GT professor, at UCLA IIRC.

    When I look at game theory problems, I tend to approach them as if I were playing a reasonably rational opponent, but not one that plays with perfect rationality.
    This makes sense to me in many real-world scenarios, especially where the game isn't a fully correct model of the real world, where it isn't a zero-sum game, and where the other players are acting intuitively.
    For example, the Prisoner's Dilemma, as mentioned earlier.
    Mellor wrote: »
    For example, the Nash equilibrium says the answer to the Traveller's Dilemma is 2, yet we all know that picking, say, 94 would have a much higher EV against most people.

    This is the huge flaw in Nash equilibrium: it assumes everyone has perfect rationality, which they don't.

    A perfect example is the "guess 2/3 of the average" puzzle.
    In this, players attempt to guess a number that is 2/3 of the average of all players' guesses. The solution is simple, yet hugely flawed.

    Applying this to the AKQ problem, I'd assume that player B calls with a K 50% of the time. Now what is our EV, and how often should we bluff?

    I think the equilibrium analysis here is pretty useful though.
    Like, as player1 I now know the correct way to play this game, and if we play and you are player2, I'll always beat you - in the end, I'll always come away profitable (over the long run, in expected value, etc.).
    Without the game theory analysis, it's possible that as I played I might either 1) make a silly mistake (like not betting with my Ace) or 2) without making silly mistakes, still wander into that blue corner of the surface where I have -EV.


    If B calls with a K 50% of the time, we should never bluff, and our EV is about 0.083 - which means the game is worth more to player1 than it was before. In other words, B is now losing more money by deviating from the best strategy.

    So yes, if you have an opponent that's deviating from the best strategy like this, then it does make sense to move away from the game theory best strategy in order to maximise how much money you take off your opponent.

    However, we must remember that the opponent's deviation didn't improve the game from their perspective - by deviating from 0.333, player2 did not improve his chances of taking money from player1; he only gave player1 an opportunity to take more money from him.

    If player1 still just played 0.333 then he'd still beat player2 regardless.
    Whereas if player1 instead deviates to never bluffing, in an attempt to maximise the amount taken from a player2 who we think is calling half the time, and player2 then starts never calling, suddenly the EV for player1 goes to zero - he's no longer making any money at all.
    You get into this cycle of trying to figure out what your opponent is actually playing, having to second-guess them, and repeat that again and again - whereas with the GT optimum, player1 can just sit there and take his earnings.
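    The numbers there, for anyone who wants to check them - just plugging into the (x + y - 3xy)/6 formula from earlier, quick throwaway sketch:
[code]
# EV to player1 as a function of the bluff frequency x and call frequency y
def ev(x, y):
    return (x + y - 3 * x * y) / 6

# best response to a player2 who calls with the King half the time:
best = max((ev(x / 100, 0.5), x / 100) for x in range(101))
print(best)            # (~0.083, 0.0): never bluff, EV about 1/12

print(ev(1/3, 0.5))    # sticking with 1/3 still earns 1/18, about 0.056
print(ev(0.0, 0.0))    # never bluff vs. never call: EV drops to 0
[/code]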
    pkr_ennis wrote: »
    I understood the solution like this: that B calls 100% of the time with a K because A bets with Qs 33% of the time. This is where A's edge lies in the puzzle, and it's because B never gets a chance to raise, etc.

    GT is flawed so, jeepers.
    It's not a perfect model of the real world and how it works.
    But the analysis of this poker game, as done here, isn't flawed as such - you need to be careful about what claims are made, but the claims that are made are accurate - 0.333 is the way to go if you are player1, and you'll take home the best average profit over time, even against an infinitely rational and smart opponent.
    pkr_ennis wrote: »
    Just wondering if you could quantify that and add it to a puzzle, like this:

    A is on the river with the 2nd nuts and bets,
    B has the third nuts and is thinking something along these lines,
    1. A bets a hand that is beating me x% of the time
    2. A bets a hand that is losing to me x% of the time
    Hmm, stuck here, because A is considering the times in which he's beat, which could take into account A's tilting tendencies as well as when he's playing solidly.
    Maybe you have to come up with two different numbers...

    I found this article very interesting, and it's totally relevant to this thread:
    http://www.twoplustwo.com/magazine/issue60/Christenson-commentary-approximating-game-theoretic-optimal-strategies-full-scale-poker.php

    C :)
    RoundTower wrote: »
    I'm not sure what the proof is that this works as a method to find the equilibrium of the game, I just know it does.
    Will have to look at that in more detail later; I'd be interested to read more about this.
    RoundTower wrote: »
    As for the argument that "game theory is flawed because it assumes rational players" - this isn't really true. Examples like the Traveller's Dilemma are interesting because they are atypical and provide unexpected results. If you could find optimal play for poker against rational players, firstly, you would probably be able to crush almost any game in the world, and secondly, it would then be a relatively small step to move towards a strategy that exploits weaker players (for example, in the example game, if the guy always called with a King you would never bluff with a Queen).

    You can't solve any "real" poker game by breaking down all the trillions of possibilities, like we did for the one-card game, but you can still apply some of the conclusions from it. For example, you can generalise the result to say that your optimum bluffing frequency should be such that your bluffs:value bets are in the ratio bet size:pot size+bet size (which was 1:3 in the example).
    I should also read more about mechanically trying to solve bigger games that better approximate the real world, or about trying to find partial solutions to them. Interesting stuff...

    Thanks for the example, and the solutions/discussion about it, RoundTower.


  • Registered Users Posts: 39,188 ✭✭✭✭Mellor


    fergalr wrote: »
    I think the equilibrium analysis here is pretty useful though.
    Like, as player1 I now know the correct way to play this game, and if we play and you are player2, I'll always beat you - in the end, I'll always come away profitable (over the long run, in expected value, etc.).

    In the original problem (Kuhn poker), it's player two that has the edge.
    The one as posted left out the re-raise option for P2, but copied the edge straight to P1.
    Maybe it was a typo, or maybe this version has the same EV just reversed - it'd be a bit of a coincidence though.


  • Registered Users Posts: 5,083 ✭✭✭RoundTower


    Mellor wrote: »
    In the original problem (Kuhn poker), it's player two that has the edge.
    The one as posted left out the re-raise option for P2, but copied the edge straight to P1.
    Maybe it was a typo, or maybe this version has the same EV just reversed - it'd be a bit of a coincidence though.

    I just picked it as the simplest possible poker game I could imagine, it isn't copied from anywhere although I have seen similar writings before.

    I find it hard to believe Player 2 has exactly a .08 edge if he always has the option to bet or raise (not sure what you mean by "reraise" here), but you could be right.


  • Registered Users Posts: 2,164 ✭✭✭cavedave


    I saw that "total poker" book cheap I mentioned earlier in Chapters Dublin today. They also have "The Education of a Poker Player" which is not a great book on how to play holdem but is a great autobiography with a large poker elements.


  • Users Awaiting Email Confirmation Posts: 176 ✭✭pkr_ennis


    I was thinking about player 2's calling frequency and was wondering why his optimal number wasn't 33% if player 1's betting frequency with a Q was known to be 33%. To my mind, this would save player 2 money - or am I getting into second-level thinking here, where player 2 is adjusting already?

    A more complete poker puzzle would be a JQKA game with player 2 being able to bet and raise.

    Thanks y'all for the thread,
    C :)


  • Registered Users Posts: 39,188 ✭✭✭✭Mellor


    RoundTower wrote: »
    I just picked it as the simplest possible poker game I could imagine, it isn't copied from anywhere although I have seen similar writings before.
    Yeah, I figured you just made up a sample problem.
    I find it hard to believe Player 2 has exactly a .08 edge if he always has the option to bet or raise (not sure what you mean by "reraise" here), but you could be right.
    I was a little off in my last post. See the problem below for accurate details.
    pkr_ennis wrote: »
    A more complete poker puzzle would be a JQKA game with player 2 being able to bet and raise.
    Try solving it with the re-raise first, without introducing the 4th card.


    3 cards, AKQ (or KQJ)
    Each player antes $1 and is dealt a card at random
    Player 1 can either check or bet $1
    If P1 checks, P2 can check or bet $1 (and P1 must then call or fold)
    If P1 bets, P2 can call or fold

    Basically, each player has the option to raise the pot to $3, and the other then can call for a total of $4. The pot can't be bigger than $4.



    What are the optimal strategies for player 1 and player 2?
    Who has the advantage, or is there one?
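    If anyone wants to check their by-hand answer afterwards, here's a rough brute-force sketch I put together in Python - it just enumerates pure strategies for the game above and solves the resulting zero-sum matrix game with an LP. It assumes scipy is installed and all the names are my own, so treat it as a sanity-checker rather than a worked solution:
[code]
from itertools import product
from scipy.optimize import linprog

CARDS = ["Q", "K", "A"]                       # Q < K < A
RANK = {c: i for i, c in enumerate(CARDS)}

# Per-card pure-strategy options:
# P1: (opening action, response if P1 checked and P2 then bet)
P1_OPTIONS = list(product(["check", "bet"], ["fold", "call"]))
# P2: (response to a P1 bet, action after a P1 check)
P2_OPTIONS = list(product(["fold", "call"], ["check", "bet"]))

# A full pure strategy assigns one option to each of the three cards.
P1_STRATS = list(product(P1_OPTIONS, repeat=3))
P2_STRATS = list(product(P2_OPTIONS, repeat=3))

def payoff_to_p1(c1, c2, s1, s2):
    """Profit to player 1 on one deal (antes of $1 each, bets of $1)."""
    win = 1 if RANK[c1] > RANK[c2] else -1
    open_act, call_vs_bet = s1[RANK[c1]]
    vs_bet, after_check = s2[RANK[c2]]
    if open_act == "bet":
        return 1 if vs_bet == "fold" else 2 * win
    if after_check == "check":                # check-check: showdown for the antes
        return win
    return -1 if call_vs_bet == "fold" else 2 * win   # P2 bet after the check

def avg_payoff(s1, s2):
    """Average over the 6 equally likely deals."""
    deals = [(a, b) for a in CARDS for b in CARDS if a != b]
    return sum(payoff_to_p1(a, b, s1, s2) for a, b in deals) / len(deals)

# Payoff matrix: rows = P1 pure strategies, columns = P2 pure strategies.
A = [[avg_payoff(s1, s2) for s2 in P2_STRATS] for s1 in P1_STRATS]
n, m = len(P1_STRATS), len(P2_STRATS)

# LP for the zero-sum game: maximise v such that P1's mixture earns at least v
# against every P2 pure strategy. Variables are P1's mixture plus v.
c = [0.0] * n + [-1.0]
A_ub = [[-A[i][j] for i in range(n)] + [1.0] for j in range(m)]
b_ub = [0.0] * m
A_eq = [[1.0] * n + [0.0]]
b_eq = [1.0]
bounds = [(0, 1)] * n + [(None, None)]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print("game value to player 1:", res.x[-1])   # roughly -1/18: the edge is with player 2
[/code]
    Incidentally, this enumerate-and-solve-an-LP trick is the brute-force answer to the earlier question about doing these calculations algorithmically - it just blows up very quickly as the game grows, which is why work on full-scale poker (like the article linked above) has to use approximations and abstracted versions of the game.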


  • Registered Users Posts: 5,083 ✭✭✭RoundTower


    This seems like a much more difficult problem, despite having changed only slightly.


  • Registered Users Posts: 1,922 ✭✭✭fergalr


    pkr_ennis wrote: »
    I was thinking about player 2's calling frequency and was wondering why his optimal number wasn't 33% if player 1's betting frequency with a Q was known to be 33%. To my mind, this would save player 2 money - or am I getting into second-level thinking here, where player 2 is adjusting already?
    Not sure what you are saying here.

    The poker problem, as stated by RoundTower here, has its equilibrium at 1/3, 1/3, as he said and as we worked out - so player2's calling frequency is 33.3%.
    Maybe you misread something there?
    pkr_ennis wrote: »
    A more complete poker puzzle would be a JQKA game with player 2 being able to bet and raise.

    Thanks y'all for the thread,
    C :)
    That'd be a more interesting problem all right, although it could be a lot harder.
    I'll take a look at the version with the reraise at some stage, and then maybe an extra card, but each extra decision grows the tree a lot.


  • Users Awaiting Email Confirmation Posts: 176 ✭✭pkr_ennis


    fergalr wrote: »

    The poker problem, as stated by RoundTower here, has its equilibrium at 1/3, 1/3, as he said and as we worked out - so player2's calling frequency is 33.3%.
    Maybe you misread something there?

    I misread it, and I have trouble understanding the maths speak. Thanks for clarifying that for me. I seem to be enjoying poker more (and winning more) recently, and disguising my hands better, since getting into this... Life's good lol, C


  • Registered Users Posts: 39,188 ✭✭✭✭Mellor


    RoundTower wrote: »
    This seems like a much more difficult problem, despite having changed only slightly.

    Yeah, I had a quick attempt at it and soon noticed I'd need a lot more time.
    I'll try to work it out though.


  • Registered Users Posts: 2,186 ✭✭✭NewApproach


    This is an interesting concept, one which I thought about myself a few months back when studying GT in college, but my initial conclusion after thinking about it is that it would only be of use in limit poker rather than no limit, and to a lesser extent in pot limit.

    It's easy to say an ace has no reason to check, so it should bet, etc., but AFAIS the model you propose suggests that each player can only put money into the pot once, and this is not realistic.

    Even with all the tracking software in the world, we will never know a player's exact tendencies at any given moment, and each 'move' in poker is very dependent on both the other player(s) and their previous actions in the hand.

    While obviously a very interesting concept, I somehow have my doubts as to how useful it would be in a real life situation.


  • Registered Users Posts: 1,922 ✭✭✭fergalr


    NewApproach wrote: »
    This is an interesting concept, one which I thought about myself a few months back when studying GT in college, but my initial conclusion after thinking about it is that it would only be of use in limit poker rather than no limit, and to a lesser extent in pot limit.
    Why do you say that?
    We've discussed earlier in the thread the idea that you can discretise no limit games into a much smaller set of actions. Obviously, NL increases the already large search space, but aside from that, once you have the idea of discretising the actions, how does it change things?

    NewApproach wrote: »
    It's easy to say an ace has no reason to check, so it should bet, etc., but AFAIS the model you propose suggests that each player can only put money into the pot once, and this is not realistic.
    It's not supposed to be a 'model' of real poker. It's a cut-down toy version of the game, to allow a proof of concept of the GT reasoning. The results from it aren't expected to apply to real poker in any way.
    NewApproach wrote: »
    Even with all the tracking software in the world, we will never know a player's exact tendencies at any given moment,
    The idea with a GT approach is that you don't need to know the other player's exact tendencies; instead, you play optimally on the assumption that they are playing optimally too.
    NewApproach wrote: »
    and each 'move' in poker is very dependent on both the other player(s) and their previous actions in the hand.

    While obviously a very interesting concept, I somehow have my doubts as to how useful it would be in a real life situation.

    I certainly agree with you that what we've looked at here isn't at all useful in a real game of NL Texas hold'em.
    It'd be very useful if you were playing this simple game though.
    And it does show that a GT approach can apply to some simple variants.

    I would be hesitant to rule out someone 'solving' Texas hold'em at some stage, given the amount of work that's gone into some of the research papers mentioned - and more compute is always becoming available.


  • Registered Users Posts: 39,188 ✭✭✭✭Mellor


    NewApproach wrote: »
    It's easy to say an ace has no reason to check, so it should bet, etc., but AFAIS the model you propose suggests that each player can only put money into the pot once, and this is not realistic.

    You completely missed the point. And you actually studied GT?????
    It's a GT problem, not a poker hand. And there are applications of this in poker. But it's not a case of copy and paste. It's basic GT strategy.


  • Closed Accounts Posts: 2,771 ✭✭✭TommyGunne


    The Mathematics of Poker, as mentioned earlier, is compulsory reading for anyone interested in the mathematical and game theory side of the game. Lots of problems similar to RT's, and a few that take it a step or two further.

    And obv game theory is incredibly important. As RT mentioned earlier, if we could somehow find a GTO solution for any spot in poker, then we crush that spot. Of course there might be more profitable solutions via exploitation, but knowing the GTO solution is more important than knowing the exploitative solution. If we could somehow solve the entirety of poker to its GTO solution, and completely forget any sort of exploitative strategy, we would absolutely crush, and all the money would be ours. We might make a small bit less than the absolute best players at small stakes (I doubt it though), but above 2/4 we would have a win rate immensely higher than anyone else's. GTO is the holy grail of poker.


  • Registered Users Posts: 253 ✭✭Moro Man


    Finally a bit of weight to the theory that being mad improves your poker ability:D


  • Registered Users Posts: 1,922 ✭✭✭fergalr


    Moro Man wrote: »
    Finally a bit of weight to the theory that being mad improves your poker ability:D

    I'm not sure I understand that.

    I'd almost say that it's closer to being a bit of weight to the theory that being rational improves your poker ability.

    If someday someone starts figuring out the optimal play for real poker, being unpredictable won't help.

