Tidek wrote:
Yeah, and a player with 10 misses on a 1000-note map can still get 990k points, no thanks
A pure accuracy system can be based on something other than the sum of the judgment values (which is not a very good way to measure accuracy anyway, since the value assigned to each judgment is pretty arbitrary).
The scale of the score system is not really important. For example, you could take the accuracy ratio "r" and change the scale by using:
Scaled_r = r^3
And the meaning of the scale doesn't change (if ra and rb are accuracy ratios from different plays and ra > rb, then the scaled value of ra is also bigger than the scaled value of rb).
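To make that concrete, here is a minimal Python sketch (the cube and the 1,000,000 max are just arbitrary illustrative choices, not anything the game actually uses):

```python
# A monotonic rescaling such as r^3 changes the spread of the numbers,
# but never changes which of two plays ranks higher.
def scaled_score(accuracy_ratio, max_score=1_000_000):
    return round(max_score * accuracy_ratio ** 3)

ra, rb = 0.99, 0.9668                        # two hypothetical accuracy ratios, ra > rb
assert scaled_score(ra) > scaled_score(rb)   # the ordering is preserved
print(scaled_score(ra), scaled_score(rb))    # 970299 903670
```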
The only situation where the scale matters is team multiplayer matches, since the scores of different players are added together; the solution there is, instead of adding the individual scores, to compute the team's overall score from the combined judgment counts of all its players.
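A rough sketch of what that could look like (the judgment weights and the player counts here are made up for illustration; the point is only that the pooling happens on the judgment counts, not on already-scaled scores):

```python
from collections import Counter

# Hypothetical judgment weights (the usual 300/200/100/50 values).
WEIGHTS = {"rainbow": 300, "300": 300, "200": 200, "100": 100, "50": 50, "miss": 0}

def accuracy(counts):
    total = sum(counts.values())
    return sum(WEIGHTS[j] * n for j, n in counts.items()) / (300 * total)

def team_accuracy(players):
    """Pool every player's judgment counts first, then rate the pool,
    instead of summing already-scaled per-player scores."""
    pooled = Counter()
    for counts in players:
        pooled.update(counts)
    return accuracy(pooled)

team = [
    {"rainbow": 900, "100": 50, "miss": 50},  # player 1, made-up counts
    {"rainbow": 980, "50": 20},               # player 2, made-up counts
]
print(team_accuracy(team))                    # 0.95 for these counts
```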
Another way to calculate accuracy is to find the normal distribution (with mean 0) that best fits the distribution of the hit errors.
In this case, for example:
- Play A: OD10 map, no mod, 10,000 judgments, 100 misses (the rest are Rainbows).
- Play B: OD10 map, no mod, 10,000 judgments, 153 50s (the rest are Rainbows).
- Play C: OD10 map, no mod, 10,000 judgments, 308 100s (the rest are Rainbows).
- Play D: OD10 map, no mod, 10,000 judgments, 996 200s (the rest are Rainbows).
- Play E: OD10 map, no mod, 10,000 judgments, 3263 300s (the rest are Rainbows).
All those scores would be rated as very similar under the normal distribution fit (the order is C < A < E < D < B, but the differences between the plays are very small).
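For the curious, here is a rough sketch of one way such a fit could be done: treat each judgment as saying "the hit error fell somewhere inside that judgment's window" and pick the sigma that maximizes the likelihood of the observed counts. The OD10 window edges below are an assumption, and a different choice (or a different handling of misses) would shift the exact results, so don't expect it to reproduce the ordering above exactly:

```python
import math

# Illustrative OD10 hit windows in ms; these exact edges are an assumption,
# plug in whatever windows the game actually uses.  Each judgment covers the
# band of |hit error| between the previous edge and its own edge.
WINDOWS = [("rainbow", 16.5), ("300", 34.0), ("200", 67.0), ("100", 97.0), ("50", 121.0)]

def p_within(x, sigma):
    """P(|error| <= x) for a zero-mean normal with standard deviation sigma."""
    return math.erf(x / (sigma * math.sqrt(2)))

def log_likelihood(sigma, counts):
    ll, prev = 0.0, 0.0
    for judgment, edge in WINDOWS:
        p = p_within(edge, sigma) - p_within(prev, sigma)
        ll += counts.get(judgment, 0) * math.log(max(p, 1e-300))
        prev = edge
    # anything outside the widest window is treated as a miss
    ll += counts.get("miss", 0) * math.log(max(1.0 - p_within(prev, sigma), 1e-300))
    return ll

def fit_sigma(counts):
    """Crude grid search for the sigma that best explains the judgment counts."""
    return max((s / 10 for s in range(1, 2000)), key=lambda s: log_likelihood(s, counts))

play_a = {"rainbow": 9900, "miss": 100}   # Play A from the list above
play_b = {"rainbow": 9847, "50": 153}     # Play B
print(fit_sigma(play_a), fit_sigma(play_b))
```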
Under the current accuracy percentage formula (scaled linearly so the max is 1,000,000), the same plays score:
- Play E: 1,000,000 (no different than if the play were only rainbows)
- Play D: 966,800
- Play C: 979,467
- Play B: 987,250
- Play A: 990,000
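Those numbers can be reproduced with the usual weighted-judgment accuracy, assuming rainbows and 300s carry the same 300 weight (which is consistent with Play E scoring the maximum):

```python
# Weighted-judgment accuracy, scaled linearly to a 1,000,000 max.
def acc_score(rainbow=0, n300=0, n200=0, n100=0, n50=0, miss=0):
    total = rainbow + n300 + n200 + n100 + n50 + miss
    acc = (300 * (rainbow + n300) + 200 * n200 + 100 * n100 + 50 * n50) / (300 * total)
    return round(1_000_000 * acc)

print(acc_score(rainbow=9900, miss=100))   # Play A -> 990000
print(acc_score(rainbow=9847, n50=153))    # Play B -> 987250
print(acc_score(rainbow=9692, n100=308))   # Play C -> 979467
print(acc_score(rainbow=9004, n200=996))   # Play D -> 966800
print(acc_score(rainbow=6737, n300=3263))  # Play E -> 1000000
```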