Brian and Franco, on behalf of The EDGAR Association, have developed a suite of software tools that analyze bridge results (ideally complete auction and play records for each deal) for evidence of collusive cheating, in which partners illicitly exchange information about their hands.
These tools do not use “artificial intelligence” or “black box models”. Instead, they apply common-sense rules based on bridge logic. For example, when evaluating a lead from Kxxx, EDGAR considers whether the lead is unusual (perhaps we hold AKxx in another suit) or mainstream (perhaps partner bid the suit, or the other leads are unattractive). When evaluating the outcome, EDGAR understands that hitting partner with the Ace, or with QJ, will usually look attractive, and that hitting partner with xxxx could be very bad (at least against a suit contract). Importantly, all of this logic can be understood and evaluated by a bridge player, without requiring in-depth knowledge of computers, statistics, black-box models, and so on.
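EDGAR’s actual rules are not published; the following is a rough sketch, in Python, of the kind of common-sense check described above. All names, rules, and holdings here are illustrative assumptions, not EDGAR’s real logic:

    def classify_lead(lead_holding: str, partner_bid_suit: bool,
                      other_holdings: list[str]) -> str:
        """Classify a lead from a holding such as 'Kxxx' as mainstream,
        unusual, or neutral, using common-sense bridge logic."""
        # Leading a suit partner has bid is mainstream.
        if partner_bid_suit:
            return "mainstream"
        # Holding a stronger alternative elsewhere (e.g. AKxx) makes this
        # lead unusual: the AK suit would normally be preferred.
        if any(h.startswith("AK") for h in other_holdings):
            return "unusual"
        # If every alternative is unattractive (e.g. leading away from a
        # tenace such as AQxx), this lead is mainstream by elimination.
        if all(h.startswith(("AQ", "KJ")) for h in other_holdings):
            return "mainstream"
        return "neutral"

    # Example: a lead from Kxxx when partner bid the suit.
    print(classify_lead("Kxxx", partner_bid_suit=True,
                        other_holdings=["xxx", "Qxx", "xx"]))  # mainstream

The point of the sketch is that each rule is a statement a bridge player can read and debate directly, rather than an opaque model weight.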
Across the bidding, the opening lead, and the subsequent defense, EDGAR examines a collection of scenarios it can “understand” and identifies many “incriminating” and “absolving” actions. Using parameters informed by empirical results and expert insight, EDGAR assigns a weight to each piece of evidence based on the scenario. Normal plays achieving normal outcomes have little or no effect; abnormal plays achieving spectacularly good outcomes have a larger effect. The details necessarily vary by scenario, and are carefully evaluated and tested.
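The real weights are calibrated from empirical results and expert insight; the numbers below are invented purely to illustrate the shape of the idea:

    def evidence_weight(action: str, outcome: str) -> float:
        """Return a signed weight for one (action, outcome) pair.
        Positive values are "incriminating", negative values are
        "absolving", and values near zero are neutral."""
        weights = {
            ("normal", "normal"): 0.0,       # normal play, normal result: no effect
            ("normal", "very_good"): 0.1,    # lucky, but unremarkable
            ("abnormal", "normal"): -0.2,    # abnormal play that gained nothing
            ("abnormal", "very_good"): 1.5,  # abnormal play, spectacular result
            ("abnormal", "very_bad"): -1.0,  # abnormal play that backfired
        }
        return weights.get((action, outcome), 0.0)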
Compared to a human investigator, EDGAR “understands” a little less about each action it evaluates, but it can evaluate far more actions, far more consistently, and with far less risk of bias. It will not flag “five smoking guns” that would convince anyone; instead, the aggregate evidence available over many deals, properly understood, is far more reliable.
EDGAR does not emit a probability or likelihood of cheating. Instead, after inspecting hundreds or thousands of deals, it evaluates, in aggregate, how “consistent” the results are with normal play and compares that to how “consistent” they are with collusive cheating. Any report that indicates much greater consistency with collusion is also reviewed by an EDGAR analyst. The final interpretation of this evaluation in a disciplinary context happens “after EDGAR”, and depends on how the evaluation is intended to be used, the other facts of the case, the relevant standard of proof, and so forth. Such decisions are ultimately the province of the organizations using EDGAR.
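EDGAR’s aggregation method is not public. One standard way to realize this kind of consistency comparison, shown here purely as an assumption with hypothetical field names, is to sum per-deal log-ratios of how well each observed action fits the two hypotheses:

    import math

    def aggregate_consistency(deals: list[dict]) -> float:
        """Sum, over all evaluated deals, of
        log( fit(observed actions | collusion) / fit(observed actions | normal) ).
        Large positive totals mean the results are far more consistent
        with collusion; totals near zero look like normal play."""
        total = 0.0
        for deal in deals:
            # "fit_collusion" and "fit_normal" are hypothetical per-deal
            # scores produced by the scenario evaluations described above.
            total += math.log(deal["fit_collusion"] / deal["fit_normal"])
        return total

    # Only decisively positive aggregates would be passed to an analyst
    # for review; the threshold itself is a policy choice, not shown here.

Note that a total like this compares two explanations of the data; it is not, by itself, a probability that anyone cheated, which is why the interpretation happens “after EDGAR”.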
Interested organizations requiring more information can contact us at TheEDGARAssociation@gmail.com.