It has been well over a month since my collaborator, Chris O'Regan, and I set up an experimental play-by-blog playtest of our flip-and-write game, The Village on the River, and it's about time I got off my backside and wrote something about how it went.
If you remember, the game involves a series of rounds, in each of which three cards are flipped from a deck, and you, as a player, get to make use of two of them to build your own little village. What you build (and how you place it) gives you a score that you can compare with others when the deck has run out. We set up a blog post on Board Game Geek with a pre-set series of images of these card flips, each hidden behind "spoiler" tags, so you could go through, revealing one set of cards at a time, and play the game that way.
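If it helps to picture the structure, here is a minimal sketch in Python of the flip-three-use-two loop. To be clear, the deck contents, deck size, and the random choice of which cards to use are all placeholders I've invented for illustration, not the real game's cards or decisions:

```python
import random

def play_by_blog_flips(deck, cards_per_flip=3, keep=2):
    """Deal a shuffled deck into rounds of three face-up cards;
    the player makes use of two of them each round."""
    random.shuffle(deck)
    rounds = []
    while len(deck) >= cards_per_flip:
        flip = [deck.pop() for _ in range(cards_per_flip)]
        # In the real game the player chooses which two cards to use,
        # and placement matters; here we just pick two at random.
        chosen = random.sample(flip, keep)
        rounds.append((flip, chosen))
    return rounds

# A toy 24-card deck of building types (invented for illustration).
deck = ["house", "bridge", "mill", "church", "market", "farm"] * 4
for n, (flip, chosen) in enumerate(play_by_blog_flips(deck), start=1):
    print(f"Round {n}: flipped {flip}, used {chosen}")
```

Because every player sees the same sequence of flips, the same pre-set series of images works for any number of players, which is what makes the play-by-blog format possible.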
Over the next few days I received messages through various channels, including email, Facebook and Discord, though nobody actually commented on the blog post directly. Everyone sent an image of their play sheet, and quite a few folk sent comments about their experience. One person even went so far as to send a video of themselves playing the game, complete with the chatter between them and their partner about the decisions being made.
Eight playsheets, with a variety of approaches.
Just by way of a quick detour: I have heard a lot of people (most notably Matt Leacock, designer of Pandemic and much more) talk about video recordings of remote playtests being a really useful tool. This was my first experience of it, and I can totally see why. If the players can relax enough not to worry about the camera, you get so much information about what people are doing, what is causing them problems, and so on.
The Village on the River is, like a lot of random-and-write games, essentially a multiplayer solitaire; what you do does not affect other players at all, and the challenge is trying to make the best use of the random sequence of events that is available to all players. As such, while it appears ideally suited to solo play, the problem is that what constitutes a good score can vary greatly from game to game. Our observation from earlier playtests is that a score above 40 is generally very good and puts you in with a good chance of winning, but sometimes the winning score (even with competent players) is around 30.
So, with that in mind, while you could set thresholds for winning or losing (and we may end up doing that), what you really need is other players' scores against which to compare yourself. The more other players there are (and the game can, in principle, scale infinitely), the better idea you have of how good your score is, and the better idea we, as designers, have of how the game performs.
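To make that concrete, a score only becomes meaningful relative to the pool of everyone else's scores. A rough sketch (all the numbers below are invented, not real playtest data) of what "how good is my 40?" looks like once you have that pool:

```python
def percentile(score, other_scores):
    """Fraction of other players' scores this score beats (ties count half)."""
    beaten = sum(s < score for s in other_scores)
    tied = sum(s == score for s in other_scores)
    return (beaten + tied / 2) / len(other_scores)

# Hypothetical pools: the same 40 points can be a clear winner in one
# game and mid-pack in another, depending on how the flips fell.
low_scoring_game = [22, 28, 30, 31, 35]
high_scoring_game = [38, 41, 44, 45, 49]
print(percentile(40, low_scoring_game))   # 1.0 -> beat everyone
print(percentile(40, high_scoring_game))  # 0.2 -> below most players
```

The larger the pool, the more stable that comparison gets, which is exactly why more players helps both the people playing and us as designers.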
The eight sheets we received had scores ranging from 17 to 49, with four players in the 40s. There were a few small mistakes in scoring, and at least a couple of players had misunderstood parts of the game until it was too late. There also wasn't a single approach that all the higher-scoring players had found: several different strategies resulted in competitive scores.
Overall, we were very happy with how things turned out, particularly with the comments that helped us home in on the elements that were causing problems.
Our big issue here was in communicating some of the rules: it is currently too easy to miss parts of them or to interpret them in a way that wasn't intended. This was clear from some of the playsheets, and made even more so by the comments and reports that folk were kind enough to send. Now, with things like this, it might be that the graphical presentation makes some things less intuitive than you would like, the wording or organisation of the rules might not be clear enough, or the rules themselves might be the problem, in which case you need to change to something more intuitive to the average player. It is too early for us to be sure which of these applies, but we are working on the first two initially.
Aside from this, though, there was a general sense of, "yeah, I'd play again", which is very helpful for morale. Of course, we'll have to see how that works out in practice. And I think that, based on this experiment, we'll be having another go at this format of testing pretty soon.