Designing competitive strategy games is a constant fight against solvability. It's a struggle to make a system simple enough to understand, yet complex enough that players can't figure out the best way to play and then always play that same way.
Pure Solution vs. Mixed Solution
It's a much different situation if a game has a pure solution rather than a mixed solution. To understand why, we'll first have to define those terms.
A pure strategy is a complete definition of how to play a game. It's a set of instructions describing the move the player should make for every situation they could face. If a certain pure strategy is the best way to play the game, we'll call that a pure solution. If you know a pure solution for a game, it's hardly a game anymore because there aren't any actual decisions left for you; you simply follow the instructions of the pure solution.
A mixed strategy is a set of pure strategies where you assign a probability to each one. So instead of your instructions being something like "If the opponent does X, I'll do Y," it's more like "If the opponent does X, I'll do Y 30% of the time and Z 70% of the time." If a certain mixed strategy is the optimal way to play, we'll call that a mixed solution.
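To pin the vocabulary down, here's a minimal sketch in Python using rock, paper, scissors as the running example (the 30/70 split is just a placeholder, not a recommendation):

```python
import random

# In one-shot rock, paper, scissors there's only one situation to face,
# so a pure strategy is simply one move, named in advance.
pure_strategy = "rock"

# A mixed strategy assigns a probability to each pure strategy.
mixed_strategy = {"rock": 0.3, "paper": 0.7}   # placeholder probabilities

def play(mixed):
    """Sample one move according to the mixed strategy's probabilities."""
    moves, probs = zip(*mixed.items())
    return random.choices(moves, weights=probs)[0]

print(play(mixed_strategy))   # "rock" about 30% of the time, "paper" about 70%
```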
Knowing the mixed solution to a game might sound just as bad as knowing a pure solution. You still wouldn't be making any decisions, just randomizing across a set of choices. But this is NOT correct; there's still a lot for you to do in the case of a mixed solution. To understand why, we'll have to look more closely at what playing "optimally" really means.
Playing Optimally
We said that among all the mixed strategies you could use, the one that lets you play optimally is what we'll call a mixed solution (this is also a Nash equilibrium). There's a lot of potential confusion there because the word optimal has two meanings: an ordinary English meaning and a specific mathematical definition. This article is always referring to the mathematical meaning, NOT the everyday usage that means "the best way to play." In the mathematical sense, playing optimally means playing in the least exploitable way.
Let's see what playing exploitably looks like. If you were playing rock, paper, scissors and you decided to play rock 100% of the time, that is extremely exploitable. Your opponent could pick up on that and shift to playing paper 100% of the time, exploiting your strategy so fully that your win rate drops to 0%. If instead you play rock only 80% of the time (and paper 10%, scissors 10%), that's still a bad idea, but it's a bit less exploitable. Your opponent could still play paper 100% of the time, but at least you'll win 10% of the time (whenever you happen to throw scissors) rather than 0%.
If you want to be as unexploitable as possible, you'll have to play each option 33% of the time. If you do that, there's no strategy your opponent can use to do better than you. That's the optimal mixed strategy for simple RPS.
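To make "exploitability" concrete, here's a minimal sketch in Python that measures how well the best possible counter does against a given mixed strategy, using the usual +1/0/-1 scoring for a win, tie, and loss (the helper names are just for illustration):

```python
MOVES = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def payoff(my_move, their_move):
    """+1 if my_move wins, -1 if it loses, 0 on a tie."""
    if my_move == their_move:
        return 0
    return 1 if BEATS[my_move] == their_move else -1

def exploitability(strategy):
    """Expected score of the opponent's best pure counter to `strategy`.
    The higher this number, the more exploitable the strategy is."""
    return max(
        sum(prob * payoff(counter, move) for move, prob in strategy.items())
        for counter in MOVES
    )

print(exploitability({"rock": 1.0, "paper": 0.0, "scissors": 0.0}))   # 1.0
print(exploitability({"rock": 0.8, "paper": 0.1, "scissors": 0.1}))   # 0.7
print(exploitability({"rock": 1/3, "paper": 1/3, "scissors": 1/3}))   # 0.0
```

The uniform 1/3 mix is the unique strategy that drives this number to zero, which is what "least exploitable" means here.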
Optimal Is Not "Best"
Playing optimally sounds like the best you can do, but if your goal is to win a tournament, then playing optimally is very likely not the best idea. Imagine you enter a rock, paper, scissors tournament and face a player who is known to play rock 100% of the time, and they do exactly that against you. If you play optimally, you'll play each option 33% of the time, so in each hand of RPS there's a 33% chance you'll lose. Meanwhile, another player in the tournament could choose to play 100% paper when facing the 100% rock player. Your so-called optimal strategy has a much higher chance of losing and getting you eliminated from the tournament than if you had played 100% paper, too.
By choosing to play optimally, you gave up a massive advantage that was right there for you to take. Your opponent was ridiculously exploitable, but you chose not to capitalize on it. That's poor play if your goal is to win the tournament. This is an extreme example, but the concept still holds even if the opponent were playing 40% rock, or even 35%.
What if you do play 100% paper against the 100% rock player, but after several rounds of play they change their strategy? It's possible that they could exploit you, because now you've strayed from optimal play. Yes, that's correct, but it's still worth it to try. If you're worried about your opponent changing their strategy to exploit you, then you don't have to go all the way from 33% paper to 100% paper. If you went up to, say, 40%, then you're more likely to win this match than someone who stuck to 33%, but you're still not all that exploitable. Also, how good is your opponent at a) recognizing that you strayed from optimal and b) correctly implementing a strategy against that? It's entirely possible that you are better at those things, in which case you should definitely exploit their strategy. As they slowly adjust to that, you adjust faster.
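Here's a rough sketch of that tradeoff, reusing the scoring idea from the sketch above. It compares a few paper frequencies against an all-rock opponent, assuming (purely for illustration) that whatever probability doesn't go to paper is split evenly between rock and scissors:

```python
MOVES = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def payoff(mine, theirs):
    return 0 if mine == theirs else (1 if BEATS[mine] == theirs else -1)

def expected_score(me, them):
    """My expected score per round when both sides play their mixes."""
    return sum(p * q * payoff(m, t)
               for m, p in me.items() for t, q in them.items())

def exploitability(me):
    """Score of the opponent's best pure counter to my mix."""
    return max(sum(p * payoff(c, m) for m, p in me.items()) for c in MOVES)

def mostly_paper(paper_prob):
    """Raise paper to paper_prob; split the rest evenly between rock and
    scissors (an arbitrary but simple way to stray from 1/3 each)."""
    rest = (1.0 - paper_prob) / 2
    return {"rock": rest, "paper": paper_prob, "scissors": rest}

all_rock = {"rock": 1.0, "paper": 0.0, "scissors": 0.0}
for paper_prob in (1/3, 0.40, 1.00):
    me = mostly_paper(paper_prob)
    print(f"paper {paper_prob:.0%}: "
          f"edge vs all-rock {expected_score(me, all_rock):+.2f}, "
          f"exploitability {exploitability(me):.2f}")
```

With these assumptions, 40% paper already earns a small edge over the rock-heavy player (+0.10 per round) while only becoming slightly exploitable (0.10), whereas 100% paper takes the full edge (+1.00) but is maximally exploitable in return.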
Donkeyspace
The term donkeyspace, coined by Frank Lantz, describes the space of suboptimal plays. As described in the previous section, a good player should intentionally enter donkeyspace (in other words: play in an exploitable way) in order to exploit opponents who are also playing in donkeyspace. If both players are good, they each might dance through different regions of donkeyspace, jockeying for advantages.
It's important to have some perspective here. You might be thinking that everyone will play optimally at a high level, so there's no dance through donkeyspace in top play. That's laughable if you think about actual competitive games, though. First, even at a high level, it's very common for players to play far from optimally. Second, it's highly unlikely that any given opponent, much less ALL of them, will be playing optimally or even close to it. In a good competitive game, it's incredibly difficult to know what optimal play even is. There can be rules of thumb, but to know the exact probabilities with which to play a mixed strategy of exactly the right moves in a specific game state that could have thousands of variables? Even in a popular, well-understood game like Poker, optimal play is not known perfectly, and in practice players stray from it considerably. Knowing optimal play in Pandante or Yomi is far more hopeless than in Poker.
Remember that when other players are playing non-optimally, even if you did know how to play optimally and even if you could perfectly execute the mixed solution, you still need to closely monitor your opponents and react to their styles in order to maximize your win rate.
There are several other reasons why actually executing an optimal mixed strategy is extremely difficult, even if you did somehow know what it was:
- People are very bad at actually playing randomly, so it would be very difficult to choose some certain option 42.3% of the time, for example.
- When people fail to play randomly, they are probably falling back on tendencies they do not know they have, but that you can detect and exploit (see the sketch after this list).
- People cannot help but let their personalities spill over into decisions about how conservative or risky they are.
- If it's a real-time game, then skills at timing and physically executing the right moves (such as a difficult combo in a fighting game or a precise shot in a shooter) mean no one is ever anywhere near playing mathematically optimally.
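To give a feel for the second item, here's a hypothetical sketch of how little it takes to punish a player whose "random" play carries a bias. It isn't from any real game's AI; it just counts the opponent's past moves and leans toward the counter of whatever they favor:

```python
import random
from collections import Counter

MOVES = ["rock", "paper", "scissors"]
COUNTER = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

def exploit_tendencies(history, lean=0.6):
    """Play the counter to the opponent's most frequent move `lean` of the
    time, and uniformly at random otherwise."""
    if history and random.random() < lean:
        favorite = Counter(history).most_common(1)[0][0]
        return COUNTER[favorite]
    return random.choice(MOVES)

def biased_human():
    """A toy opponent who thinks they're random but favors rock."""
    return random.choices(MOVES, weights=[0.45, 0.275, 0.275])[0]

history, score = [], 0
for _ in range(10_000):
    them = biased_human()
    us = exploit_tendencies(history)
    history.append(them)
    if us != them:
        score += 1 if COUNTER[them] == us else -1

print("average score per round:", score / 10_000)   # reliably positive
```

A human opponent can be doing something like this without realizing it, which is exactly the point of the studies described next.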
Item 2 on that list is especially interesting. In two studies (1997 and 1998), Lewicki et al. demonstrated that people learn patterns without knowing that they learned them and without being able to explain or express what they learned. Subjects were shown four quadrants of numbers and had to press one of four buttons corresponding to the quadrant containing a certain number. They did several trials of this, but weren't told that the location of the numbers across trials was not random; the locations followed a complex set of 10 rules. As subjects did more and more trials, they were able to perform faster and faster, yet they weren't aware of any pattern, and no subjects could explain one even after they were informed it existed and even when they were offered $100. Furthermore, when the underlying pattern was secretly replaced with pure randomness, the subjects immediately did far worse. Hilariously, even subjects who were fellow psychology professors in Lewicki's department, and who were aware of Lewicki's research, were adamant in their belief that the trials containing a secret pattern were actually random. They learned to exploit the pattern, yet were convinced it didn't exist.
The point is that your unconscious mind will make you perform mixed strategies imperfectly, and you'll fall into patterns you won't know you're doing. And then your opponent will pick up on those patterns and be able to exploit them, even if your opponent isn't aware that's happening. Mixed strategy games and dances through donkeyspace involve interesting battles of unconscious minds vs. other unconscious minds in addition to the part where conscious minds might disagree on what optimal play even is.
Pure Solution Games Degenerate Faster Than Mixed Solution Games
So in a game with a mixed solution, you still must be highly sensitive to what your opponent is doing. You have to be able to detect how far they are straying from optimal play, and then you have to be able to correctly counter that strategy. These are very difficult things to do, and they involve, among other things, your unconscious mind picking up subtle patterns.
In a game with a pure solution, you do not have to care what your opponent will do, ever. If you know that pure solution, it doesn't matter what the opponent tends to do or what you think is in their mind, etc. You should follow the optimal script and there's no gameplay left.
It's also very important to think about how a game with a mixed solution looks versus one with a pure solution while the playerbase is on its way to knowing that solution but isn't fully there yet. As players learn more and more about a game over time, they approximate optimal play more and more closely. For the game with the pure solution, that means pockets of the game here and there become entirely about memorization and not about what the opponent is doing. For example, solved endgames in Chess are this way (but not in Chess 2, because the midline invasion rule prevents all those solved endgames from happening). Openings in Chess (but not Chess 2) are another good example. As more is learned about Chess over the years, the opening books (the sets of known-good opening moves) become more structured, and memorizing them becomes more important so that you don't enter the midgame at too much of a disadvantage.
Meanwhile, as we get closer to an approximation of a mixed solution (in Poker, Pandante, or Yomi, for example), these games do not start to collapse into memorization. They are still about being very responsive to what your opponent is doing. And while these approximations get closer to a complete mixed solution over time (which will not happen for Yomi in our lifetimes), remember that EVERYONE is in donkeyspace. Even when there are lots of good players, they aren't literally playing optimally at every single step. Everyone is in some sort of donkeyspace, and they disagree on who is where. Playing the game is in some way a method of resolving that debate. One player thinks, "I should block 42% of the time in this particular situation," while another player believes blocking 60% is correct. They each try to exploit the other's incorrect approximation, if they can even detect it in the first place.
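To show why that debate has no armchair answer, here's a hypothetical 2x2 mixup (the payoffs and move names are made up, not actual Yomi numbers): the defender either blocks or attacks, the opponent either strikes or throws, and which block frequency scores better depends entirely on how often the opponent actually throws.

```python
# Defender's (made-up) payoff for each pairing of choices.
PAYOFF = {
    ("block",  "strike"): +2,   # block the strike, get a punish
    ("block",  "throw"):  -3,   # get thrown out of block
    ("attack", "strike"): -2,   # get counter-hit
    ("attack", "throw"):  +3,   # stuff the throw attempt
}

def defender_ev(block_rate, throw_rate):
    """Defender's expected payoff given both players' mixes."""
    strike_rate = 1 - throw_rate
    ev_block = (strike_rate * PAYOFF[("block", "strike")]
                + throw_rate * PAYOFF[("block", "throw")])
    ev_attack = (strike_rate * PAYOFF[("attack", "strike")]
                 + throw_rate * PAYOFF[("attack", "throw")])
    return block_rate * ev_block + (1 - block_rate) * ev_attack

for throw_rate in (0.30, 0.50):
    print(f"opponent throws {throw_rate:.0%}: "
          f"block 42% -> {defender_ev(0.42, throw_rate):+.2f}, "
          f"block 60% -> {defender_ev(0.60, throw_rate):+.2f}")
```

With these numbers, blocking 60% is better if the opponent throws 30% of the time, and blocking 42% is better if they throw half the time. Neither player can know which approximation is right without reading what the opponent is actually doing.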
From a design standpoint, games with mixed solutions have an inherent advantage in fighting against solvability. It's much safer to design a game that is still very interesting even if solved than it is to design a game that necessarily degenerates closer and closer to pure memorization and no decisions as it gets closer to being solved.
Making Pure Solution Games vs. Mixed Solution Games
As I've explained, pure solution games are dangerous to make. On the one hand, if your game is deep enough, you could delay people from finding the solution for a very long time; Chess and Go have been around for many centuries without full solutions being known. On the other hand, you'd be hard-pressed to make a pure solution game that stands up anywhere near as long as those games. Checkers, for example, is already solved. Furthermore, even a game as deep as Chess shows that memorization becomes more and more of what a pure solution game is about as the playerbase gets better. That's an unfortunate fate for a competitive multiplayer game.
There's also some irony there. If I told you that a certain game had perfect information (you know the full state of the game at each moment you have to make a decision) and that it had no randomness, then you'd probably say it sounds very skill-based. If you like skill-based games, you'd say we're off to a good start. But actually, we just guaranteed that this game has a pure solution (any finite, turn-based game with perfect information and no randomness has one) and that it will necessarily become LESS about skill (decisions in the moment) and more about memorization as the game develops. A similar game that had some unknown elements, hidden information, and/or randomness could actually be more skill-testing, not less.
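That guarantee is just backward induction (essentially Zermelo's theorem): with nothing hidden and nothing random, you can work out the best move at every node by starting from the endings. Here's a toy sketch on a made-up game tree; nothing in it corresponds to any real game:

```python
# A made-up perfect-information game tree: internal nodes map moves to
# child nodes, leaves hold the score for player 1 (player 2 minimizes).
TREE = {
    "start": {"aggressive": "midA", "safe": "midB"},
    "midA":  {"push": +1, "trade": -1},   # player 2 to move here
    "midB":  {"push":  0, "trade": +1},   # player 2 to move here
}

def solve(node, maximizing):
    """Return (value, best_move) under perfect play. With no hidden
    information and no randomness, the best move at every node is a
    single fixed choice -- a pure solution."""
    if not isinstance(node, str):
        return node, None                  # leaf: just a score
    pick = max if maximizing else min
    values = {move: solve(child, not maximizing)[0]
              for move, child in TREE[node].items()}
    best_move = pick(values, key=values.get)
    return values[best_move], best_move

print(solve("start", maximizing=True))     # (0, 'safe') for this toy tree
```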
So in creating a design, I recommend looking for what those unknown elements will be. What that hidden information will be. What that random element will be. Randomness has a real stigma, but it's important to understand that it's a valid tool to keep your game out of the dangerous pure solution category.
Kongai
My game Kongai is an example of that. You make two decisions per turn and each of those decisions is double blind. That means you make the decision at the same time the opponent makes theirs, then you simultaneously reveal those decisions. This very much helps against solvability (it's no longer a perfect information game), but even then, the game would be dangerously solvable without some other unknown elements. I used randomness in hit rates (just like in the Pokemon game Kongai is based on) as well as randomness in proc rates (the chance that a move does a special thing on hit). This worked extremely well in fighting against solvability. Those hit and proc percentages make it very, very difficult to compute the possibility tree several moves ahead.
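As a rough illustration of why (with made-up numbers, not actual Kongai values): every attack with a hit rate and a proc rate splits into several random outcome branches, so the possibility tree fans out fast even before you account for both players' choices.

```python
# Made-up numbers, not actual Kongai values: an attack that hits 85% of
# the time for 20 damage, with a 30% chance to proc 10 extra damage.
HIT_RATE, DAMAGE, PROC_RATE, PROC_DAMAGE = 0.85, 20, 0.30, 10

def outcomes():
    """Every random branch one attack creates: (probability, damage)."""
    yield (1 - HIT_RATE, 0)                              # miss
    yield (HIT_RATE * (1 - PROC_RATE), DAMAGE)           # hit, no proc
    yield (HIT_RATE * PROC_RATE, DAMAGE + PROC_DAMAGE)   # hit and proc

expected = sum(p * dmg for p, dmg in outcomes())
print("expected damage:", expected)   # 0.85*20 + 0.85*0.30*10 = 19.55

# Each attack alone is 3 random branches, so looking n attacks ahead means
# 3**n outcome branches, before multiplying by the players' move choices.
for n in range(1, 6):
    print(f"{n} attacks ahead: {3 ** n} random branches")
```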
Some Kongai players intent on finding mixed solutions had to zero in on the most pared-down, toy examples you can imagine. They tried to work out optimal play for a certain lineup of characters against another certain lineup, with a certain set of items equipped, in the endgame only, when each team was down to its last character (so no switching or intercepting was possible) and hit points were nearly exhausted. Even in this microscopic portion of the game, it took them a dozen pages of analysis to determine the right play. Doing the same for the real, full game is basically unthinkable.
Conclusion
Competitive multiplayer games have to strive to be as unsolvable as possible while at the same time being understandable to players. Games with pure solutions may seem skill-based, but over time will necessarily degenerate to becoming pure memorization. Meanwhile, mixed solution games remain strategically interesting far, far longer.
In order to make a game with a mixed solution, incorporate some sort of unknown elements, hidden information, or randomness. If your game is real-time rather than turn-based, you're even better off.