Accuracy when dealing with HR/OD10/DT

koromo

Genshin wrote:

so basically, adding HR would be the same between OD8~10? It doesn't seem like it at all. E.g. Gensou no Satellite (OD9) is way harder to keep a good acc rate on than Airman (OD8).
HR increases OD and AR by 40% (a 1.4x multiplier), capped at 10. So both OD8 and OD9 become OD10 after adding HR, while OD7 becomes OD9.8.
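That scaling is easy to sanity-check in code (a minimal sketch; `apply_hr` is a made-up helper name, using the 1.4x multiplier and the cap of 10 stated above):

```python
def apply_hr(value):
    """Hard Rock scaling for OD/AR: multiply by 1.4, capped at 10."""
    return min(value * 1.4, 10.0)

print(apply_hr(8))  # 10.0 (capped)
print(apply_hr(9))  # 10.0 (capped)
print(apply_hr(7))  # ~9.8
```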
thelewa


let's see how many people freak about the necrobump instead of realizing what I'm trying to say
Full Tablet

thelewa wrote:

let's see how many people freak about the necrobump instead of realizing what I'm trying to say
Is that OD10 or OD9?
If that is OD9 it seems that the offset is wrong.
silmarilen
or he's having trouble keeping up with the map
winber1
thelewa needs to teach me the art of accuracy

so i can be #1
thelewa

Full Tablet wrote:

thelewa wrote:

let's see how many people freak about the necrobump instead of realizing what I'm trying to say
Is that OD10 or OD9?
If that is OD9 it seems that the offset is wrong.
OD10

I also had runs where I had 1x100 with the same unstable rate

unstable rate is a poor measurement of accuracy if used alone, since it only measures how consistent your clicks are. You could get all 100s and still have, like, 59 unstable rate.
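A quick simulation illustrates the point (a sketch, not anything from the game's code; it assumes the thread's conventions: an 18 ms 300-window as at OD10, and Unstable Rate being 10x the standard deviation of the hit errors in milliseconds):

```python
import random
import statistics

random.seed(1)
WINDOW_300_MS = 18.0  # 300 window at OD10

# Hits tightly clustered (std dev ~5 ms, i.e. ~50 unstable rate) but
# centred 30 ms late, as with a badly set offset.
errors = [random.gauss(30.0, 5.0) for _ in range(500)]

unstable_rate = 10 * statistics.pstdev(errors)
count_300 = sum(1 for e in errors if abs(e) <= WINDOW_300_MS)

print(f"unstable rate: {unstable_rate:.1f}")
print(f"300s: {count_300} out of 500")  # almost none, despite the low UR
```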
Full Tablet

thelewa wrote:

OD10

I also had runs where I had 1x100 with the same unstable rate

unstable rate is a poor measurement of accuracy if used alone, since it only measures how consistent your clicks are. You could get all 100s and still have, like, 59 unstable rate.
Well, the expected unstable rate for that accuracy on that map (with OD10) is 88.0688, not very far from the unstable rate you actually got. The expected unstable rate for getting 1x100 is 62.3514 (that means, if the errors follow a normal distribution with that unstable rate, there is a 50% chance you get at least 99.85% acc, give or take a slight deviation).

The thing is, the calculation is an ESTIMATION that helps link unstable rate to accuracy (it assumes the average timing error is 0, and that the hit errors follow a normal distribution). It's possible to get all 100s with 59 unstable rate, but it is very improbable (it would only be likely in situations where you intentionally set your offset to a very wrong value).

Even if the calculation's assumptions are right, there is an expected deviation from the mean in the results. For example: you could play that song 1000 times, every time with an unstable rate of 90, and get different results each time (50% of the time the accuracy would be below 97.067%, 50% of the time above it; the median accuracy of those plays would be 97.067%). (This is not considering deviation in the unstable rate itself, of course.)
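The accuracy-for-a-given-unstable-rate figure can be approximated with a short function (a sketch under the post's stated assumptions: zero mean error, normally distributed hit errors, the 18 ms 300-window of OD10, and every non-300 counted as a 100; the exact numbers in the post also depend on the specific map, which this ignores):

```python
import math

def accuracy_from_unstable_rate(unstable_rate, window_300_ms=18.0):
    """Mean accuracy under the normal-distribution model, where the
    standard deviation of the hit errors is unstable_rate / 10 ms."""
    sigma_ms = unstable_rate / 10.0
    p300 = math.erf(window_300_ms / (sigma_ms * math.sqrt(2)))
    # accuracy = (300 * p300 + 100 * (1 - p300)) / 300
    return p300 + (1 - p300) / 3.0

print(f"{accuracy_from_unstable_rate(90):.3%}")  # close to the ~97% above
```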
JAKACHAN
Where do you people even get these numbers? "Expected unstable rate?" you just made that up lol...

People are taking this game way too seriously nowadays.
thelewa
Where are you pulling this from

edit: LOL JAKA NICE NINJA
Ekaru

JAKACHAN wrote:

Where do you people even get these numbers? "Expected unstable rate?" you just made that up lol...

People are taking this game way too seriously nowadays.
He's likely applying statistical analysis to the mathematical formulas (not sure if he has the exact formula for the unstable rate or if he's estimating it, though). For example, the timings of a player's hits will usually resemble a normal distribution (excluding misses). There's an astounding amount of shit you can do with such normal distributions. And yes, there are ways to account for that one hit that was waaaaaaay off.

EDIT: And and and and and yes, you could use the formulas to create an "Expected unstable rate" if you were some sort of mathematical genius and had plenty of time on your hands. I'm not, though.
JAKACHAN

Ekaru wrote:

EDIT: And and and and and yes, you could use the formulas to create an "Expected unstable rate" if you were some sort of mathematical genius and had plenty of time on your hands. I'm not, though.
And neither is he. If you are going to provide evidence about any of this, you shouldn't use your own calculations or act like you know what you are doing.
Luna
Unstable rate is most likely just standard deviation, which has a known formula and is easily analysed by maths software.
Of course this is just an assumption, but it would make sense.
ScarletFrost
Why do math when u can play OSU! -_-
GoldenWolf

ScarletFrost wrote:

Why do math when u can play OSU! -_-
What is OSU! ?
I only know osu!
And osu! is serious business.
Full Tablet
When I asked woc2006 how Unstable Rate was calculated, he said it was standard deviation. Looking at the numbers it usually gives, I think he meant the standard deviation of the hit errors measured in tenths of a millisecond (so 100 Unstable Rate means 10ms of standard deviation).

Now, assuming that the hit errors follow a Normal Distribution and the average error is zero, for a given standard deviation there is a probability for each hit to have an error with magnitude less than A (where A is the time leniency for getting a 300; it depends on OD, and is 18ms for OD10).

Now, estimating the amounts of 300s and 100s in circles given a certain accuracy and number of circles:
Number of 100s: C = Circles*(3/2)*(1-Acc)
Number of 300s: T = Circles - C
(Note that this estimation only considers 300s and 100s; plays with 50s and Misses would be calculated as having a non-integer amount of 300s and 100s, which should be fine as an approximation in plays where most hits are 300s and 100s).

Now, calculate the "probability of getting a 300" that each hit must have so that the median amount of 300s equals the amount of 300s in the play. The amount of 300s in this case follows a binomial distribution; use the inverse regularized beta function to get the probability: (Mathematica Syntax)
Probability: p = InverseBetaRegularized[1/2, T, C + 1]

Now that we have the probability of getting a 300 on each hit, the inverse error function allows calculating the expected standard deviation of the hits in the normal distribution:
σ = A/(Sqrt[2]*InverseErf[p])

Where A is measured in tenths of a millisecond if the calculated standard deviation is to correspond to Unstable Rate.

Also, for plays that have sliders and spinners, assuming that all sliders and spinners are 300s, calculate the "Circle Accuracy" of the play:
CircleAcc = (Acc*Objects - (Objects - Circles))/Circles, where Objects is the total number of hit objects.

Circle Accuracy gives a better estimation than the shown accuracy for the previous formulas (still not as good as the estimations obtained in a map with only circles, though).
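The steps above can be put together in code (a sketch; the function name is mine, it handles only the median case, and it rounds the estimated 300 count to an integer so the binomial tail can be summed directly, whereas the post keeps a non-integer count via the inverse regularized beta function):

```python
import math
from statistics import NormalDist

def expected_unstable_rate(acc, circles, window_300_ms=18.0):
    """Median expected Unstable Rate for a given circle accuracy,
    assuming zero mean error and normally distributed hit errors."""
    # Estimated 100s and 300s (only 300s and 100s are considered).
    hundreds = circles * 1.5 * (1 - acc)
    threes = round(circles - hundreds)

    # Per-hit probability of a 300 so that the median number of 300s
    # (binomial) equals `threes`: solve P(X >= threes) = 0.5 by bisection.
    def prob_at_least(p):
        return sum(math.comb(circles, i) * p**i * (1 - p)**(circles - i)
                   for i in range(threes, circles + 1))

    lo, hi = 1e-9, 1 - 1e-9
    for _ in range(60):
        mid = (lo + hi) / 2
        if prob_at_least(mid) < 0.5:
            lo = mid
        else:
            hi = mid
    p300 = (lo + hi) / 2

    # Standard deviation with P(|error| <= window) == p300, then x10 to
    # convert milliseconds of standard deviation to Unstable Rate.
    sigma_ms = window_300_ms / NormalDist().inv_cdf((1 + p300) / 2)
    return 10 * sigma_ms

print(round(expected_unstable_rate(0.9985, 500), 1))
```

On a 500-circle map this puts 99.85% accuracy at an expected unstable rate in the low 60s, in the same range as the figures quoted earlier in the thread.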
Aqo
I demand a gorgeous excel table for this.
Full Tablet

Aqo wrote:

I demand a gorgeous excel table for this.
http://www.mediafire.com/download/k1zw8 ... _Rate.xlsx
winber1
that was not gorgeous at all
Aqo
I kinda like it, it's nice. gj
Yarissa
osu! standard science/ math never ceases to amaze me.
Full Tablet
To reflect the variability that is to be expected according to the previous model, I updated the tables so they show the expected unstable rate at different confidence levels.

For example, in the table with 70% confidence, take an entry with (90% acc - 150 Unstable Rate): according to the model, if you play a certain map several times, all of them with 150 Unstable Rate (or close to it), you would be expected to get AT LEAST 90% acc on 70% of those plays. This can be useful if you want an estimation of how many times you would have to retry a map to get a certain accuracy, if your unstable rate is roughly constant each try.
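The retry estimation follows from treating tries as independent (my own illustration, with made-up numbers): if each try reaches the target accuracy with probability p, the chance of reaching it at least once in n tries is 1 - (1 - p)^n.

```python
def chance_within_tries(p_per_try, tries):
    """Chance of reaching the target accuracy at least once in
    `tries` independent attempts, each with probability `p_per_try`."""
    return 1 - (1 - p_per_try) ** tries

# With the 70%-confidence table, each try reaches the listed accuracy
# about 70% of the time, so a few retries make it near-certain.
print(chance_within_tries(0.7, 1))            # 0.7
print(round(chance_within_tries(0.7, 3), 3))  # 0.973
```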

(yes, I have nothing else to do)
http://www.mediafire.com/download/pgb55 ... te_v2.xlsx
Full Tablet
To test whether the Normal Distribution model is appropriate, you can analyze the distribution of this variable over several plays.

A: Leniency for getting a 300, measured in tenths of a millisecond. It is 180 (18ms) for OD10.
Acc: Accuracy on the circles of the map.
Circles: Number of circles in the map.

For that calculated variable,
Values near 1 mean that the play got bad accuracy for that Unstable Rate, according to the normal distribution model.
Values near 0 mean that the play got good accuracy for that Unstable Rate, according to the normal distribution model.

The variable itself should follow a Continuous Uniform Distribution on the range [0,1] (so the mean should be 0.5, and the standard deviation 0.288675). This is the same distribution as a "perfect die with an infinite number of sides, with side values going from 0 to 1".

I tested this by playing this map https://osu.ppy.sh/b/207567&m=0 (edited to have OD10) 26 times. I got a mean of 0.519718 and a standard deviation of 0.27152, which is close to the expected values.
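The expected mean and standard deviation are just the moments of the continuous uniform distribution on [0, 1]: mean 1/2 and standard deviation 1/Sqrt[12] ≈ 0.288675. A quick check (with simulated uniform draws standing in for the 26 real plays, since reproducing the per-play variable needs actual hit data):

```python
import math
import random

print(round(1 / math.sqrt(12), 6))  # 0.288675, the quoted std deviation

random.seed(7)
samples = [random.random() for _ in range(26)]  # 26 simulated "plays"
mean = sum(samples) / len(samples)
sd = math.sqrt(sum((x - mean) ** 2 for x in samples) / (len(samples) - 1))
print(round(mean, 3), round(sd, 3))  # should land near 0.5 and 0.289
```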
Full Tablet
I made a new version of the table, with a new formula that is more accurate at lower accuracy percentages, since it also considers the probability of getting 50s and misses.
https://www.dropbox.com/s/6ravpyq7zssfy ... %20v3.xlsx

I also made a table for osu!mania, based on the timing windows indicated in this table: http://i.imgur.com/V7EbLVZ.png
From what I tested, it seems to be accurate for plays where I don't mess up too badly on hold notes.
It is based on the base score (without mods) of plays: https://osu.ppy.sh/wiki/Osu!mania#Score. That value is a better indicator than the accuracy percentage (a rainbow 300 is worth the same as a regular 300 for the accuracy percentage).

https://www.dropbox.com/s/c07paqv9hull2 ... mania.xlsx

Note that the model treats a miss as pressing way too early/late (in that case the game might not register the key press at all, so such high Unstable Rates would be impossible to achieve in-game). It also doesn't consider the case where the player doesn't press at all for an object.
Full Tablet
I made a table based on the Taiko OD table in this thread: t/146678
osu! Standard Table: https://db.tt/WDQRi0MM
o!mania Table: https://db.tt/a10M2n1T
Taiko Table: https://db.tt/hSeemliT

The formula used in more detail (using Taiko as an example, since it is the simplest case):
  1. For a given OD, take the timing windows of the judgment. In taiko, for OD10, the timing windows are:
    GREAT (300 base value): 18ms
    GOOD (150 base value): 48ms
  2. For a given accuracy percentage, calculate the percentage of GREATs a play would have on average if the hit errors of each hit followed a Normal Distribution (with the mean of the hit errors equal to 0, and a standard deviation set so that the mean accuracy of the plays equals the given accuracy).
    The probability each hit has to be a GREAT is:
    P(GREAT) = Erf[18/(Sqrt[2]*σ)], with σ the standard deviation of the hit errors in milliseconds.
    (Erf is the error function http://en.wikipedia.org/wiki/Error_Function).
    The probability of getting GOODs is:
    P(GOOD) = Erf[48/(Sqrt[2]*σ)] - Erf[18/(Sqrt[2]*σ)]
    So the mean accuracy given a standard deviation is:
    MeanAcc = P(GREAT) + P(GOOD)/2 (since a GOOD is worth half the base value of a GREAT).
    So, to calculate the percentage of GREATs (equal to its per-hit probability) given an accuracy percentage, one has to find the standard deviation that makes the mean accuracy equal to the given value (the properties of the error function make it easy to calculate the value numerically).

    Note: the standard deviation calculated here doesn't correspond to the expected unstable rate, since the value calculated here uses the mean instead of the median (or a percentile). Using the median or percentiles has better practical applications in this case: for 100% accuracy, the mean-based standard deviation is 0, since with any higher value there is a small probability the accuracy achieved is not 100%; with the median, one calculates a value at which 100% accuracy is achieved half of the time.

    In the old formula, there was the assumption that every hit that wasn't a GREAT (or whatever the best judgment is in the game mode) was a GOOD. The new formula also accounts for the probability of getting MISSes (and, in the other game modes, 50s and MISSes in standard, and 200s, 100s, 50s and MISSes in o!m). This is the main difference in the new formula.

    Note that using the (mean amount of GREATs that a Normal Distribution with the achieved mean accuracy would produce) instead of the (real amount of GREATs) makes the results depend on the base values of the judgments (which are fairly arbitrary). The dependency is bigger when the real amounts of GREATs and GOODs are unlikely to have been produced by a process that follows a Normal Distribution (a Gaussian process). This calculation weights all the possible judgment distributions that give a certain accuracy value by the probability that they occur under a Gaussian process, and gives a result based on the mean case.

    Example: OD10, 100 hits.
    62 GREATs, 36 GOODs, 2 MISSES (this distribution is likely to be caused by a Gaussian process: the mean distribution with 80% accuracy, with the current judgment values, that represents a Normal Distribution is 61.9385 GREATS, 36.1229 GOODs, 1.93853 MISSES).
    If a GREAT is worth 300, and a GOOD is worth 150: The expected unstable rate (median) is: 207.636
    If a GREAT is worth 300, and a GOOD is worth 50: The expected unstable rate (median) is: 207.433

    62 GREATs, 6 GOODs, 32 MISSES (this distribution is very unlikely to be caused by a Gaussian process).
    If a GREAT is worth 300, and a GOOD is worth 150: The expected unstable rate (median) is: 320.89
    If a GREAT is worth 300, and a GOOD is worth 50: The expected unstable rate (median) is: 233.923

    With this way of calculating the amount of GREATs (based on the accuracy percentage achieved, instead of directly on the amounts of GREATs, GOODs and MISSes in the plays), it is possible to compare different judgment distributions according to the game rules (for example, determining whether 300 GREATs and 200 GOODs is better than 310 GREATs, 180 GOODs and 10 MISSes).
  3. Now that we have calculated the percentage of GREATs represented by a Normal Distribution, we calculate the per-hit probability of getting a GREAT such that the probability of getting at least the calculated amount of GREATs is equal to a certain value (50% for the median case).
    In the simplest case (100% GREATs), the probability needed per hit to get a GREAT so there is a 50% chance of getting 100% GREATs, if there is a total of 1337 hits in the map, is: 0.5^(1/1337) = 99.9482%.
    In general, one can use the Inverse Beta Regularized Function (which inverts the binomial distribution's tail probability) to calculate this probability. The probability needed per hit to get a GREAT so there is an A chance of getting B GREATs or more, if there is a total of M hits in the map, is (Wolfram Syntax):
    p = InverseBetaRegularized[A, B, M - B + 1]
    (The value of A is the Confidence value in the tables.)

    One can change the value of A to reflect the expected accuracy obtained over several retries when the Unstable Rate obtained is constant, given the deviations expected under the normal model. For 1 try, A is 50%; with R tries, A is 1/(1+R). This is equivalent to the expected minimum value obtained when drawing a random real number between 0 and 1 several times (where each value is equally likely).
  4. With the per-hit probability of getting a GREAT calculated, calculate the standard deviation of the Normal Distribution for which the probability of obtaining a value between -18ms and 18ms (the timing window of a GREAT at OD10) is equal to the calculated probability. This is:
    σ = 18/(Sqrt[2]*InverseErf[p])
    Where InverseErf is the Inverse Error Function.
  5. With the standard deviation calculated, multiply it by a factor so it corresponds to the Unstable Rate shown in-game.
    For 10ms standard deviation: 100 Unstable Rate no mod, 150 Unstable Rate with DT, 75 Unstable Rate with HT.
    For any mode that is not osu!mania, for a given OD, 150 Unstable Rate in-game on a map has the same expected accuracy as 150 Unstable Rate on the same map with DT (but 150 Unstable Rate is harder to achieve with DT, since it corresponds to a smaller standard deviation in milliseconds). For osu!mania, DT and HT don't change the timing windows, so 150 Unstable Rate with DT would give better accuracy than 150 Unstable Rate no mod (and would be harder to achieve).
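Steps 1 through 5 can be sketched end to end (an illustration, not the exact table code: the function names are mine, it handles only the median no-mod case, it rounds the GREAT count to an integer, and it uses bisection where the post uses InverseBetaRegularized and InverseErf):

```python
import math
from statistics import NormalDist

GREAT_MS, GOOD_MS = 18.0, 48.0  # taiko OD10 windows (step 1)

def mean_accuracy(sigma_ms):
    """Step 2: mean taiko accuracy for hit errors ~ Normal(0, sigma);
    a GREAT is worth 1, a GOOD half, anything outside the GOOD window 0."""
    p_great = math.erf(GREAT_MS / (sigma_ms * math.sqrt(2)))
    p_good = math.erf(GOOD_MS / (sigma_ms * math.sqrt(2))) - p_great
    return p_great + p_good / 2

def great_fraction(target_acc):
    """Find sigma whose mean accuracy equals target_acc (bisection),
    then return the corresponding per-hit GREAT probability."""
    lo, hi = 0.1, 1000.0
    for _ in range(80):
        mid = (lo + hi) / 2
        if mean_accuracy(mid) > target_acc:
            lo = mid  # accuracy too high -> need a larger sigma
        else:
            hi = mid
    sigma = (lo + hi) / 2
    return math.erf(GREAT_MS / (sigma * math.sqrt(2)))

def per_hit_probability(greats, hits, confidence=0.5):
    """Step 3: per-hit GREAT probability so that P(X >= greats) equals
    `confidence` for X ~ Binomial(hits, p); solved by bisection."""
    def at_least(p):
        return sum(math.comb(hits, i) * p**i * (1 - p)**(hits - i)
                   for i in range(greats, hits + 1))
    lo, hi = 1e-9, 1 - 1e-9
    for _ in range(60):
        mid = (lo + hi) / 2
        if at_least(mid) < confidence:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def expected_unstable_rate(target_acc, hits):
    """Steps 1-5 combined (median case, GREAT count rounded)."""
    greats = round(great_fraction(target_acc) * hits)
    p = per_hit_probability(greats, hits)
    # Step 4: sigma with P(|error| <= GREAT window) == p.
    sigma_ms = GREAT_MS / NormalDist().inv_cdf((1 + p) / 2)
    return 10 * sigma_ms  # Step 5: no-mod Unstable Rate

print(round(expected_unstable_rate(0.80, 100), 1))
```

For the worked example above (OD10, 100 hits, 80% accuracy), this sketch lands in the same ballpark as the quoted 207.6, with the small difference coming mostly from rounding the GREAT count to an integer.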
With Mathematica on my laptop, calculating the values of each page in the taiko table takes 4.79 seconds (so calculating all the values in the taiko table takes about 33.53 seconds). Each expected Unstable Rate value takes about 3.7ms to calculate.