
E - Avoid Boring Matches
Editorial by evima


First, let’s determine if the goal can be achieved when \(S\) is fixed.

Let’s focus on participant \(1\).

If participant \(1\) is wearing a red hat, they must be paired with a participant wearing a blue hat. Since participant \(1\) always wins the match, it is never worse to choose the largest-numbered participant among those wearing blue hats: that blue hat is the one we can best afford to use up. So in this case the pair is determined.

If participant \(1\) is wearing a blue hat, there is no restriction on the color of the opponent’s hat. If some remaining participants are wearing red hats, the smallest-numbered one among them can be paired with participant \(1\) without loss; if everyone is wearing a blue hat, it is never worse to pair participant \(1\) with the largest-numbered participant. So the pair is determined in this case as well.

From the above, the pair for participant \(1\) is determined. After removing this pair, the pair for the smallest-numbered remaining participant is determined by the same argument. Repeating this fixes the pairing for the entire round; simulating the rounds one after another in this way reveals whether the goal can be achieved.
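
For concreteness, here is a sketch of this greedy check in Python. It fills in details not restated in this editorial, so treat them as assumptions: hats are given as a string of R/B of length \(2^N\), the lower-numbered participant always wins a match and keeps their hat, and a match is boring exactly when both participants wear red hats.

    def achievable(s: str) -> bool:
        """Greedy check for a fixed hat string s of length 2^N.

        Assumed rules (not restated in this editorial): the lower-numbered
        participant always wins and keeps their hat, and a match is "boring"
        exactly when both participants wear red hats.
        """
        players = list(range(len(s)))                  # this round, by number
        while len(players) > 1:
            winners, rest = [], list(players)
            while rest:
                p = rest.pop(0)                        # smallest remaining number
                if s[p] == 'R':
                    blues = [q for q in rest if s[q] == 'B']
                    if not blues:                      # a red-vs-red match is forced
                        return False
                    q = blues[-1]                      # largest-numbered blue hat
                else:
                    reds = [q for q in rest if s[q] == 'R']
                    q = reds[0] if reds else rest[-1]  # smallest red, else largest blue
                rest.remove(q)
                winners.append(p)                      # p < q, so p wins and advances
            players = winners
        return True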

Now that we can decide whether the goal is achievable for a fixed \(S\), we need to refine this approach to solve the original problem.

Intuitively, it is easier to achieve the goal when:

  • There are more Bs in \(S\).
  • The Bs in \(S\) are positioned more to the left.

Therefore, consider an \(S\) that can achieve the goal, containing as few Bs as possible, with those Bs positioned as far to the right as possible. We will call this the minimal \(S\). The phrase “as far to the right as possible” is ambiguous, but the same conclusion is reached regardless of how it is made precise. It is also not self-evident that the minimal \(S\) is unique, but the discussion below shows that it is.

By experimenting, we find that the following \(S\) is minimal for small \(N\):

  • \(N=1\): \(S=\)RB
  • \(N=2\): \(S=\)RBRB
  • \(N=3\): \(S=\)RBRRBRBB

By considering the “reverse” of the greedy method explained at the beginning, we can also find the minimal \(S\) for larger \(N\).
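
For small \(N\), the table above can also be cross-checked by brute force rather than by the reverse greedy. The sketch below reuses achievable() from the earlier sketch and relies on the characterization stated next (the minimal \(S\) is the achievable \(S\) whose cumulative sums are pointwise smallest), so it is only a verification aid under the same assumed rules, not the editorial’s method.

    from itertools import product

    def minimal_S(n: int) -> str:
        """Brute-force T(N) for small n, reusing achievable() above.

        Relies on the characterization below: among all achievable S, the
        minimal S is the one whose cumulative sums (R = -1, B = +1) are
        pointwise smallest.
        """
        def prefix(s):
            out, c = [], 0
            for ch in s:
                c += 1 if ch == 'B' else -1
                out.append(c)
            return out

        good = [''.join(p) for p in product('RB', repeat=1 << n)
                if achievable(''.join(p))]
        for s in good:
            if all(x <= y for t in good for x, y in zip(prefix(s), prefix(t))):
                return s
        raise AssertionError("no pointwise-minimal achievable S found")

    # Expected under the assumed rules: minimal_S(1) == 'RB',
    # minimal_S(2) == 'RBRB', minimal_S(3) == 'RBRRBRBB'.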

Let \(T(N)\) denote the minimal \(S\) for a given \(N\). Using \(T(N)\), we can characterize the \(S\) that can achieve the goal, arriving at the following condition:

  • Replace R and B in \(S\) with \(-1\) and \(+1\), respectively, and let \(A\) be the sequence obtained by taking the cumulative sums. Similarly, replace R and B in \(T(N)\) with \(-1\) and \(+1\), respectively, and let \(B\) be the sequence obtained by taking the cumulative sums. If \(A_i \geq B_i\) for all \(i\), then the goal is achievable; otherwise, the goal is unachievable.

The proof is by induction on \(N\). Suppose there is an \(i\) with \(A_i<B_i\), and take the smallest such \(i\). Observe what happens when the greedy pairing explained at the beginning is run up to position \(i\): it turns out that the string obtained after the first round again fails the condition, so by the induction hypothesis the goal cannot be achieved.

Conversely, if \(A_i \geq B_i\) for all \(i\), then the goal can be achieved with \(S\). This is easy to verify.
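
In code, checking the condition is a single pass over the two strings; for small \(N\) it can be compared against achievable() from the first sketch:

    def meets_condition(s: str, t: str) -> bool:
        """Check the condition above: the cumulative sums of s (R = -1, B = +1)
        must be >= those of t = T(N) at every position."""
        a = b = 0
        for cs, ct in zip(s, t):
            a += 1 if cs == 'B' else -1
            b += 1 if ct == 'B' else -1
            if a < b:
                return False
        return True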

From here, the rest is easy. The problem ultimately becomes:

  • A sequence of \(+1, -1\) is given. Perform the minimum number of adjacent swaps so that every cumulative sum reaches or exceeds a given target value.

The answer is the total deficit, that is, the sum over all positions \(i\) of \(\max(0, B_i - A_i)\), divided by \(2\): each adjacent swap changes exactly one cumulative sum, by \(\pm 2\).
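
A sketch of this final step, with R and B mapped to \(-1\) and \(+1\) as above and \(T(N)\) given as a string; how inputs that cannot be fixed at all (fewer Bs in \(S\) than in \(T(N)\)) should be reported depends on the problem statement, so that case is only noted in the comment.

    def min_swaps(s: str, t: str) -> int:
        """Minimum number of adjacent swaps in s so that its cumulative sums
        (R = -1, B = +1) are >= those of t at every position.  Swapping an
        adjacent R,B pair into B,R raises exactly one cumulative sum by 2,
        so the answer is the total deficit divided by 2.  (If s has fewer
        B's than t in total, no number of swaps can help; how that case is
        reported depends on the problem statement.)
        """
        a = b = deficit = 0
        for cs, ct in zip(s, t):
            a += 1 if cs == 'B' else -1
            b += 1 if ct == 'B' else -1
            deficit += max(0, b - a)
        return deficit // 2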

All the above operations can be performed in \(O(2^N)\) time, so the original problem can be solved in the same time complexity.
