The original version of the game had counters that moved around as challenges were made, providing a measure of success. In the latest iteration, the counters have gone and scoring is done with the cards themselves: if you successfully challenge someone, you add the cards in their "live" stack to your score pile, and if your challenge is unsuccessful, they gain your stack.
|Some games look compelling when they are set up on the table. This is not one of those games.|
This now opens up a question: should the winner be determined by who has been "correct" in the most challenges, or by who has accumulated the most cards in their score pile?
Luckily, it's possible to try both methods out simultaneously. Settling on one approach or the other might change how some people play the game, but I figure those players would be in a minority, and possibly not the core target audience anyway.
So, what I have done for recent playtests is ask players to keep the cards they gain in challenges in separate piles, so we can count how many "tricks" they win (technically an inappropriate term, but it's the one in my head) as well as how many cards they have in total. I can record both sets of scores, along with what I call the "people's choice": I ask everyone who they feel did best in the game and thus deserved to win.
This last point is one I need to remember for the future. It won't be appropriate for all games, but for many I think it would be a really useful piece of feedback to ask for. Where scores aren't obviously tallied during a game, players still form a perception of who is doing best. That perception is often mistaken, and surprise turnarounds can really add excitement to an endgame, but seeing how well players' perceptions line up with the actual result could be very interesting.
I have so far only run a couple of playtests this way, but in both cases all three measures lined up pretty well: the player everyone believed did best amassed the most cards and either took the most "tricks" or tied for the most. I plan to do a few more playtests using this approach, but it is looking like whichever scoring scheme I settle on will probably be fine. My guess (pending further data) is that the "just count the cards" system is best: while there is a bit more counting, it is simpler (people do get mixed up trying to track those tricks) and far less likely to result in a tie.
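To make the comparison concrete, here is a minimal sketch of how the two scoring schemes can be tallied side by side from the same playtest record. The player names and counts are entirely made up for illustration; the point is just that small trick counts bunch up and tie easily, while card totals spread out more.

```python
# Hypothetical playtest record: for each player, how many challenges
# ("tricks") they won and how many cards they amassed in total.
results = {
    "Alice": {"tricks": 4, "cards": 13},
    "Bob":   {"tricks": 4, "cards": 11},
    "Carol": {"tricks": 3, "cards": 12},
}

def winners(results, key):
    """Return every player tied for the top score under the given measure."""
    best = max(player[key] for player in results.values())
    return sorted(name for name, player in results.items() if player[key] == best)

# Counting tricks: the narrow range of values makes ties common.
print(winners(results, "tricks"))  # ['Alice', 'Bob']

# Counting cards: the wider range makes a unique winner far more likely.
print(winners(results, "cards"))   # ['Alice']
```

With only a handful of possible trick counts per game, two players landing on the same number is unsurprising; card totals have many more possible values, which is why that scheme resolves to a single winner more often.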