LeBot
Your friend, Frank, asks, “Can you suggest some basic rules for leading against suit contracts? We learned that against NT we should generally lead 4th from longest/strongest or top of a sequence; should we do the same against suit contracts?” Since Frank is just learning to play bridge, you keep the advice simple: lead partner’s suit or top of a sequence, don’t underlead Aces, etc.
Frank, a retired software engineer, recognizes that your advice can be organized into a set of rules suitable for a computer program. He creates “LeBot”, which chooses its opening lead by assigning each suit in opening leader’s hand to one of four tiers.
To choose its opening lead, LeBot uses only “authorized information” available at the bridge table: opening leader’s hand + the auction. Frank adds a few rules to help LeBot understand some nearly universal elements of mainstream bidding systems, such as the fact that 1NT-2♣ is Stayman, not natural.
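A minimal sketch of what such a tier assignment might look like in code is shown below. The tier rules, auction format, and helper functions are hypothetical stand-ins for illustration, not LeBot’s actual definitions:

```python
# Hypothetical sketch of LeBot-style tier assignment (tier rules and auction
# format are illustrative stand-ins, not the actual LeBot definitions).
# A holding is a string of ranks in one suit, highest first, e.g. "KQ73".

HONORS = "AKQJT"

def bid_by_partner(suit, auction, partner_seat):
    # Hypothetical helper: did partner bid this suit?
    return any(seat == partner_seat and bid.endswith(suit)
               for seat, bid in auction)

def top_of_sequence(holding):
    # True if the suit is headed by two touching honors, e.g. KQx or QJx.
    if len(holding) < 2:
        return False
    i, j = HONORS.find(holding[0]), HONORS.find(holding[1])
    return i != -1 and j == i + 1

def tier(suit, holding, auction, partner_seat):
    # Assign the suit to tier 1 (most attractive) through 4 (least attractive).
    if bid_by_partner(suit, auction, partner_seat):
        return 1              # lead partner's suit
    if top_of_sequence(holding):
        return 2              # lead top of a sequence
    if holding.startswith("A"):
        return 4              # avoid a suit headed by a lone Ace (don't underlead it)
    return 3                  # everything else

# Example: partner (West) bid spades, so a spade lead lands in tier 1.
auction = [("N", "1H"), ("E", "Pass"), ("S", "2H"), ("W", "2S"),
           ("N", "4H"), ("E", "Pass"), ("S", "Pass"), ("W", "Pass")]
print(tier("S", "J84", auction, "W"))   # 1
print(tier("D", "QJ93", auction, "W"))  # 2
print(tier("C", "A752", auction, "W"))  # 4
```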
Frank is curious whether LeBot’s simple rules are effective compared to human players. It occurs to him that he could test this by loading data for bridge hands played online into the LeBot program. But he needs a method of scoring opening leads that LeBot can apply automatically. You suggest that he use a variation of Kit Woolsey’s method for detecting collusion between partners: judge the lead without looking at the opponents’ cards, just partner’s cards.
“Frank, let’s keep this simple. What are you hoping partner holds in the suit you lead? For example, if you lead a low card from Kxxx, hopefully partner has the Ace or QJ, and not just small cards.” You and Frank make a list of various suit combinations and rate each one.
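A sketch of this “judge by partner’s hand” rating idea, assuming a hypothetical three-way scale (the actual list of combinations and ratings isn’t reproduced here), could look like this:

```python
# Hypothetical sketch of rating a low-card lead (say, from Kxxx) using only
# partner's holding in the suit led; the three-way scale is illustrative.

def rate_low_lead(partner_holding):
    honors = set(partner_holding) & set("AKQJ")
    if "A" in honors or {"Q", "J"} <= honors:
        return "good"       # partner has the Ace, or QJ to back up our King
    if honors:
        return "neutral"    # partner has some honor help
    return "bad"            # partner holds only small cards

print(rate_low_lead("QJ52"))   # good
print(rate_low_lead("Q94"))    # neutral
print(rate_low_lead("8743"))   # bad
```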
LeBot vs. LeHuman
Frank was ready for the big test. He downloaded the BBO history of his local club’s novice game (up to 500 Master Points) for the past 12 months and created a game called “LeBot vs. LeHuman”: on each deal, LeBot’s choice of lead is compared with the lead the human actually made, and whichever lead rates better against partner’s hand “wins” the deal.
Ignoring ties, LeBot “won” 48% of the time. Not bad! LeBot was good enough to compete in the novice game. But how would it do against the experts? Frank expected LeBot to be at a disadvantage, partly because his rules for ranking opening leads are very simple, and also because LeBot uses less information from the auction than available to a skilled human player. Sure enough, a group of a dozen well-known experts outscored LeBot 58% to 42% when tested by Frank, a sizable, though not huge, advantage.
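The tally itself is simple. Here is a sketch, assuming each deal record already carries a rating for LeBot’s lead and for the human’s actual lead (higher is better), with equal ratings counted as ties:

```python
# Hypothetical sketch of the "LeBot vs. LeHuman" tally; each deal is a pair
# (rating of LeBot's lead, rating of the human's lead), higher = better.

def win_percentage(deals):
    wins = losses = 0
    for bot, human in deals:
        if bot > human:
            wins += 1
        elif bot < human:
            losses += 1
        # equal ratings (including the same card led) are ties and ignored
    decided = wins + losses
    return 100.0 * wins / decided if decided else None

sample = [(2, 1), (1, 1), (0, 2), (2, 2), (1, 0)]   # made-up ratings
print(win_percentage(sample))   # ~66.7 (2 wins out of 3 decided deals)
```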
Sample Size Matters
Frank noticed something odd when looking more closely at data from the novice game: LeBot’s win percentage against individual players varied from 0% to 80%, an astonishing range. Digging deeper, Frank noticed that the players who did either very well or very poorly didn’t play very often. In fact, Mr. 0% played only twice and led just 6 times against a suit contract (5 times LeBot chose the exact same opening lead, and the only time they chose different leads, LeBot “won”).
When Frank looked only at novices with 500 or more total deals, the numbers made more sense: LeBot’s win percentage was between 29% and 67%. Restricting to players with over 5000 total deals, LeBot’s overall average dropped to 47% and the spread narrowed to 40% to 57%. In comparison, LeBot’s 42% average against top experts with over 5000 total deals had a spread of 37% to 46%.
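The shrinking spread is just what the binomial mathematics predicts. A rough sketch, assuming decided (non-tied) leads behave like independent trials with a win probability near 48%:

```python
# Rough sketch of why small samples produce extreme win percentages: the
# standard error of an observed win rate over n decided leads is
# sqrt(p * (1 - p) / n), assuming independent leads and a true rate p ~ 0.48.
import math

def std_error(p, n):
    return math.sqrt(p * (1 - p) / n)

for n in (6, 50, 500, 5000):
    half_width = 2 * std_error(0.48, n)   # roughly a 95% range
    print(f"n={n:>5}: observed win% typically within +/- {100 * half_width:.0f} points")
```

With only a handful of decided leads, an observed 0% or 80% says almost nothing; with thousands, the percentage can barely stray from a player’s true rate, which is the pattern Frank saw.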
LeBot gets LeCrushed
Frank then downloaded data for those who played 2000+ total deals in his club’s open game, expecting LeBot’s win percentage to fall somewhere between the novices (48%) and the experts (42%). Surprisingly, LeBot won only 41% vs. open club players, worse than it did against the experts. Puzzled, Frank looked over results for individual players and discovered that a player named ‘LeCrusher’ lost to LeBot only 11% of the time.
LeCrusher, who used to play in Frank’s online club, had played over 7000 deals, so this phenomenal winning percentage wasn’t an artifact of small sample size. Frank decided to update LeBot’s simple rules to something more complex, with the goal of better competing with gifted players like LeCrusher. However, the reasoning behind LeCrusher’s leads was not apparent to Frank: LeCrusher mostly ignored LeBot’s simple rules, often leading a random side suit instead of partner’s suit. Unable to decipher LeCrusher’s strategy, he once again asked for your help.
You’ve played enough bridge to know that LeCrusher’s seemingly random opening lead style can’t win over the long run, so you advise Frank to consult the “Under Discipline” section of the ACBL website. Sure enough, LeCrusher is a known cheater, and excluding LeCrusher from the data increases LeBot’s win percentage from 41% to 44%, between the novice and expert rates as expected.
EDGAR’s Kit-O-Matic (KOM)
KOM is EDGAR’s primary tool for detecting cheating on opening lead. KOM ranks leads using a somewhat more complicated ruleset than LeBot, and scores leads on a variable rather than win-loss scale, but it follows the same procedure used by LeBot to rank and compare leads with and without illicit knowledge of partner’s hand.
KOM is very conservative: to view a player as “cheaty”, that player must outperform expert players by a wide margin over a large number of opening leads. Note that the weaker the player, the more conservative KOM is: it is not enough to outperform one’s peers by a wide margin over a large number of opening leads; one must outperform experts.
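To make that criterion concrete, here is a deliberately simplified sketch. This is not the actual KOM algorithm, and the thresholds are hypothetical; it only illustrates the "large sample plus wide margin over the expert baseline" idea described above:

```python
# NOT the real KOM -- just an illustration of the conservative criterion
# described above, with made-up thresholds.

MIN_LEADS = 1000      # hypothetical minimum number of scored opening leads
WIDE_MARGIN = 0.10    # hypothetical margin over the expert baseline

def looks_cheaty(player_avg_score, num_leads, expert_avg_score):
    # A player is flagged only with a large sample AND a score well above the
    # expert baseline -- never merely above their own peer group's baseline.
    return (num_leads >= MIN_LEADS
            and player_avg_score >= expert_avg_score + WIDE_MARGIN)
```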
More details are found in this Bridge Winners article: https://bridgewinners.com/article/view/edgar-the-kit-and-the-algorithm/