
[added] Mapping Ecosystem Changes - BN Application Process

Shad0w1and
I would like to start a related discussion on reintroducing the tiered BN system. Here’s the post for reference—I wonder if it will pique anyone’s interest: https://osu.ppy.sh/community/forums/topics/1899201?n=1
ikin5050
Gamifying the process might make it more fun, and I understand why my thoughts are inconsistent with that.
I ask you to consider this: If we gamify the process of becoming a BN and remove tests, does that not open the door to people's attitudes being less serious and their rigorousness/thoroughness in checking maps being lax?
Loctav

ikin5050 wrote:

Gamifying the process might make it more fun, and I understand why my thoughts are inconsistent with that.
I ask you to consider this: If we gamify the process of becoming a BN and remove tests, does that not open the door to people's attitudes being less serious and their rigorousness/thoroughness in checking maps being lax?
I don't know how a test that only shows your English comprehension (the original intent of the RC tests back in the day) or your competence at googling regulations assures attitude.

I firmly believe that true attitude can only be assessed *during BNship*, through continuous evaluation of the choices they make in the role. Every attempt to preemptively weed out the "less serious" will eventually fail, because you are basically attempting to scry into the future.

Gamification helps get people interested in the system in the first place, and the process can be considered a "spam protection", a rather basic filter. The proposed changes try to preemptively determine what decision-making and actions they will perform with this role, but that won't be conclusive, and a test won't fix that flaw. The entire overhead of evaluations, applications and having the candidate provide insight into their decision-making and beatmap assessment process already sets a rather stringent tone for what expectations come along with the role. The continuation of that will make clear that even while holding the role, they cannot just pass that hurdle and then do whatever. They will have eyes on them - all the way through their BNship - and this won't change (and is actually the most important aspect of keeping things serious).
Drum-Hitnormal
there's ways to cheat any test unless u want to introduce photo ID, video recording + keyboard/mouse recording + time limits + frequently updated questions

removing the test is good, the main judgment of whether someone is a good bn or not depends on the nat eval
Serizawa Haruki
Increasing transparency on a BN application's status
Reworking BN application feedbacks
Platform for communication between NAT and applicant post-application
These seem like good changes and are definitely long overdue so it's nice to see them added. My only concern is that individual feedback from each evaluator could potentially be inconsistent, contradictory or confusing due to different views, so when making these comments available to the applicant, it would be important to make sure that they're easily understandable and not conflicting with other evaluators, and to adjust them if necessary.

Removing the RC test
I'm not sure how I feel about this one honestly. On one hand, it is indeed a tedious test for applicants, and maintaining it is probably tedious for the NAT as well. But on the other hand, it might still be a valuable tool to determine knowledge/comprehension about the ranking criteria, BN rules, modding code of conduct etc. While it's true that looking up the answers is possible, it still teaches people about these things which they might not even read or know about otherwise. Therefore it could be seen more as a lesson than an actual test, which I don't think is bad. Whether it helps filter out incompetent candidates or not is another story, and it's hard to judge without knowing how many people are passing/failing the test. Assuming that the vast majority of applicants pass the test, perhaps increasing the threshold for passing could be an option to make it a more efficient selection process. Adding more questions that really test the person's knowledge, understanding and judgement could also make it more useful.

Reworking BN applications
First I want to address the proposed changes and some of the comments posted in this thread so far (I will talk about other issues and my own suggestions in a separate post later).

I was anticipating bigger changes, but I suppose all the talk about it just blew them out of proportion. I genuinely don't see much of a difference between this and the previous format; there are still 3 maps of different quality to submit and comment on. The questions are slightly different but still similar, so I guess the main difference would be supposedly shifting the focus from individual mods to decision-making skills and judgment, as stated above. However, it's unclear what this actually entails and how it affects the evaluation. Is it an actually different approach in how the modder is evaluated, or does it just mean that the selected maps and the answers will be more important while the mods themselves will be a bit less important than before?
I have a lot of doubts about the questions too because it's not clear how they should be answered. For example, when answering "How did your mod improve this beatmap?", are applicants meant to describe the issues and corresponding suggestions they made and how they would make the map better, are they meant to explain how the map in its current state differs from how it was before the mod, or are they meant to link certain timestamps as examples of things that were fixed (or a combination of these)? The question "Why do you believe this beatmap is ready to be nominated?" is also strange to ask because whether a map is suitable for ranked is usually not determined based on the presence of positive aspects, but rather based on the absence of negative ones. Neither BNs who nominate a map nor evaluators who assess the nominating BNs have to explain why a map is fine to be ranked. Therefore, an answer along the lines of "The map doesn't have problems" would be perfectly reasonable, but I doubt this is the type of answer that is being looked for. What is the expected response to this question though? That the map has good structure, represents the song well, and plays well? These things don't really provide meaningful information for the application. Also, how detailed should these answers be? All of this can be very off-putting for those who apply, and it could impact their application negatively if questions are misunderstood or they don't know how to reply, even if they are good at modding and judging maps.

achyoo wrote:

Goes without saying but please add a disclaimer telling people not to answer "no dont like song" even though that's what all BNs do, since it gives nothing for you to judge.

Or maybe don't, let the people who troll wait 60 days more.
It's not trolling to give a valid reason not to nominate a map, just like "I don't like the map" is a valid reason. This is why the questions' objective should be very clear. If certain answers are not wanted, the questions should be formulated in a way that doesn't allow said answer to be given.

Nao Tomori wrote:

We are requesting one map judged as good (but not an overly safe map that has already been bubbled or by an experienced mapper)
This is another example of the "hidden expectations" within BN applications that have been a huge problem. There is no mention anywhere that a "safe map by an experienced mapper" is not desired, so expecting people to know this is unfair. Aside from the fact that there is no objective definition of such a map, I also don't get why it would not be appropriate to choose one. Sure, if the map is very polished there might not be much to point out, but a good modder still has the possibility to find areas for improvement and mistakes that should be fixed. Also, aren't BNs supposed to nominate high quality maps (which are often created by experienced mappers)? Choosing such a map even shows they are capable of recognizing maps that are better than average.

RandomeLoL wrote:

The goal of the changes wasn't so much to outright reduce the bar of entry into the BNG, but to make it less obtuse, easier to understand, more transparent to the end user, and finally give a sense of direction of what exactly should be prioritized when evaluating someone's work, which would affect the way applications are approached from both ends.
Unfortunately I fail to see how these changes concretely achieve these goals or even move closer to them, at least from the perspective of applicants. The transparency improvements are good, but other than that I really don't think this would make things "less obtuse, easier to understand" or give "a sense of direction", it's just slightly different than before but essentially the same process.

RandomeLoL wrote:

Users should have a better, more fun time. Submitting mods that they may've done at their own leisure and pace feels less restrictive and mentally taxing.
I don't think there's even a single person who is having fun or treating BN apps like a game. Even without the BN test, applicants are preparing for it like an exam and usually getting nervous. Sometimes people might "yolo" apply without caring too much but usually those attempts are not successful, unless they're very experienced.
RandomeLoL
Mainly a response to the above, but to somewhat give my (after) shower thoughts:

Regarding treating it like a game
I'm not saying that's how it's currently being treated, or that the proposed changes fully achieve that. In my opinion, they do not. But regardless, that's the direction we should probably strive towards. Removing the test by itself doesn't seem like much, but it's just one less burden applicants have to deal with. And anecdotally in Mania, most of the time the test never really decided the outcome of an application. We've had people with extremely low scores show that they know the stuff when put into practice.

Regarding current changes not achieving the goals
Same as before, I do think we're being far too conservative in some places. Despite that, I disagree that the proposed change is the same as the old one. The way you present a question or a problem is as important as the contents of the question itself. Not only can the wording of a question make it easier/harder for the applicant, but it can also set a rough guideline of what exactly evaluators are looking for while steering applicants towards a clear direction.

The current system boils down to "Just do good mods, do a lot of them, and be varied" which is beyond vague.

Why "I don't like the map" would not be a valid reason
Beyond whether an applicant likes a map or not, they have to be able to explain why exactly they do not like it. Subjectivity aside, the new model would expect applicants to be able to justify this answer in a reasonable manner. Evaluators do not get any meaningful information out of an answer like this, other than a guess at the style of maps the applicant would not be willing to nominate.

-------------------------

With all this said, I ultimately agree with the sentiment that the needle isn't being moved too much. We should start being less conservative, see how things work, and re-assess whether the compromises made to make the application process overall less obtuse and easier to interact with are fair & square.

The problem with all of these changes is that it's a game of balance. Evaluators may prefer a stricter system to allow a better comprehension of applicants, while applicants just want to have an easier time. I do not think either side is wrong in that regard.
arialle
Serizawa brings up very valid points and I also echo these concerns.

Removing the BN test is on paper a progressive move, but reflecting upon my experience as a Trial BN, I can say the main reason I failed was that I didn't (and kind of still don't) understand the RC. Wording errors can be forgiven and improved on, but technical errors are not taken as lightly. At least when I applied, the test actually led me to fully read the RC. I think removing the test will lead potential Probation BNs to make more technical errors like I did (say, for example, audio or BPM adjustments that aren't 100% explicit in the RC). It becomes much easier to get in without these aspects being fully evaluated, especially since you're supposed to mod a map that the NAT expects you to nominate. Of course you are not going to nominate a map that has many unrankables, and if the map is by an experienced mapper it gets harder to prove you fully understand the RC.

I’m assuming the question ‘What map would you NOT nominate?’ would be the proof of technical understanding but it’s pretty easy to catch egregious errors with the help of tools like Mapset Verifier.
Perhaps a solution to this could be a reverse BN test - a shorter test designed for Probation BNs to demonstrate technical knowledge, but having it after acceptance removes the reliance on a good score. The questions would be redesigned, mode-specific (no more std questions for a mode like Taiko) and relevant. They could use it to check their knowledge rather than test it, and if they have questions they can ask the NAT and brush up on that specific area. It also removes the stress and pressure since you're already an accepted BN at this point.

Another point that does need to be examined is the questions in the answer section. I don't hold as much issue with the 3 types of maps - I do think they are much clearer than before and I kind of made up my own guidelines - but the written part allows for really nonspecific answers that could make an already qualified applicant look pretty bad.
I also believe that the questions are redundant - you're nominating a map because it lacks bad things, just as Serizawa said above. Adding guidelines would be useful, or a complete readjustment of the questions, such as ‘What makes this map unique?’ or ‘What does this map offer to the Ranked section?’, could allow for more creative answers whilst still being objective.

‘How did your mod improve this beatmap?’ seems a bit silly, no? First, it implies that the mapper has to accept all the mods. Mods can still have value without being implemented, as previously mentioned in this thread. But also, you would be able to see that yourself by checking the map in the before .osz, right? Why make the applicant craft an answer to an already self-explanatory question? I personally don't understand the value in this.

Maybe I don’t fully get some things but those are just my two cents.
Mirash
I pretty much always wanted this system to be more welcoming to beginners, so I agree with the changes; people shouldn't have to go through what I went through.
From personal experience, I only learned how to be a somewhat decent nominator through actually being one for some time and under the casual mentorship of other BNs - for example, checking the second BN's mods on the same map you've modded was the most powerful thing. All of that + having the ability to push maps that you like was very satisfying to me, hence why I stayed so long.
Nifty
Like this change. Previously, the only way to disagree with NAT feedback (assuming you privately asked for it in the first place, or asked somebody else due to the NAT ignoring you) was to rally against them on Twitter. I only hope that the NAT are willing to put in the effort to satisfy (or try to satisfy) applicants. If providing feedback was one of the most exhausting and delay-causing tasks, then, knowing what feedback has been given previously, I can't see how making each NAT member write more cohesive individual feedback will be any less exhausting.

I do however think it is a bit silly to ask applicants to submit a map they would not nominate. I don't know any BNs who routinely mod maps they don't nominate, so I don't see why that would be something we would look for in a prospective BN. Not much information about the applicant would be given from a map they wouldn't nominate, especially when a BN's reason for not nominating a map is usually simply not liking the song, the map being too easy/hard, or some other really simple and boring reason.
aceticke

Nifty wrote:

Like this change. Previously, the only way to disagree with NAT feedback (assuming you privately asked for it in the first place, or asked somebody else due to the NAT ignoring you) was to rally against them on Twitter. I only hope that the NAT are willing to put in the effort to satisfy (or try to satisfy) applicants. If providing feedback was one of the most exhausting and delay-causing tasks, then, knowing what feedback has been given previously, I can't see how making each NAT member write more cohesive individual feedback will be any less exhausting.

I do however think it is a bit silly to ask applicants to submit a map they would not nominate. I don't know any BNs who routinely mod maps they don't nominate, so I don't see why that would be something we would look for in a prospective BN. Not much information about the applicant would be given from a map they wouldn't nominate, especially when a BN's reason for not nominating a map is usually simply not liking the song, the map being too easy/hard, or some other really simple and boring reason.
Regarding the individual feedback point, we already do this currently; it was quite labour-intensive to go from writing thoughts individually to then collating them into one sole feedback message. It also serves as nice proof of our work for our own evaluations and an archive to look back on, so I think it's nice to keep.

On a personal note regarding the other point (other NATs may not agree), I think that whilst it would not tie into their potential future BN work very heavily, it's great for evaluation purposes. We want to ensure that applicants can identify unrankables and, equally importantly, identify issues not directly covered by the RC, which they might not encounter in maps they do want to nominate. I fear removing this may end up with applicants passing without a proper grasp of issues, and forcing them to find maps with every issue under the sun is a main reason why we wanted a change to the application process in the first place.
Drum-Hitnormal
BN evals look at DQs...

DQs are mostly related to the RC

the RC test is removed

applicants can become BNs with poor knowledge of the RC and just nominate safe and simple maps alongside experienced BNs, so their issues with the RC aren't exposed, but this is fine since they don't cause any problems.

but new BNs usually have higher availability because they're not burned out like experienced BNs, so they are the saviors of new mappers trying to get their first ranked map; now less skilled BNs have to worry about their eval, which pushes things in a less new-mapper-friendly direction
Shad0w1and

Drum-Hitnormal wrote:

but this is fine since they don't cause any problems.
Oh this sounds like we can make everyone a nominator lol



Hmm, has anyone taken into account the mapping skills of the BN? It doesn't need to be perfect, but I believe a skilled modder or nominator will likely demonstrate their expertise in mapping. There must be something we can look at, don't you think? If you guys are getting rid of the test, let them provide something to demonstrate their expertise in basic mapping theory :/
lenpai
wanna drop my 2c with regard to tests since they've become a point of discussion

plopping straight into actual BN work is really the best way to get actual practice with the technicalities of the RC, as the user will be able to come into direct contact with the BNs and discuss stuff and whatnot -- even learn from others' DQs. speaking as someone who failed probation, had an extended probation, and is now a full BN, the theoreticals don't quite compare

the current testing method really doesn't accomplish much, as it is a ctrl+f test with reading comprehension of the question
at best, it allows people to be reminded of some niche scenarios and acts as a general recap of the ins and outs of the RC, but without much real internalizing
at worst, it gives negative feedback based on outdated information, as the RC changes every now and then

as for the prospect of a more dynamic testing method, like an exam with a reference chart, I think this will hamper the inflow of BNs for TCM (taiko, catch, mania), since it would necessitate batches of applicants and more frequent changes to the test itself. Would the extra effort in facilitating this (when said effort could be used for higher-priority tasks) be worth it? This would be on top of giving feedback to the applicants. I don't really have visibility into the average performance of BNs across all modes, so feel free to build upon this

so I lean towards not having RC tests

sidenote:
I would like to reiterate, probably for the benefit of the NATs and the applicants: a clear example of what counts as "good modding" or a general reference for what should be checked could help with the process. There are initiatives such as modding mentorship that help with this, but for the others, clear expectations can bridge the gap
Doug
• The transparency about each stage of the application and the possibility of contacting the NAT members involved in the evaluation is interesting.

• I don't have a clear opinion about removing the RC tests. They may seem useless and an extra burden, but it was because of this test that I felt like reading the whole RC, and I learned things that I had no idea existed and was able to apply them in practice on the maps that I sent in the application.

Just my opinion, but the RC test was the most fun part of the BN app, since looking for suitable maps for the BN app is very stressful and can take weeks in some cases (not complaining, just pointing it out).

• Regarding the changes in the maps to be sent, I don't see a real change. It's just become a bit more specific in terms of what the NAT wants, but the process is still the same: 1 map that you would nom, 1 map that you would NOT nom and 1 random map to complement the other two.

The part about answering the questions is just written differently (which may have made it a bit more specific); the old question about whether or not you would nom the beatmap in question already covered all of them, just with a slightly higher chance of getting answers like "the map isn't ready", so the changes only seem to affect those answers a bit.

----------------------------------

Honestly, I see some positive points and feel that we are heading in the right direction to improve the whole process. It seems to me that some points will only show whether they are effective or not after some test period, after which the NAT could make another post like this one reporting the results of the changes and allowing some more discussion about it; that would help the whole system improve more and more :)
clayton
Reworking BN applications
it seems good but I also want the clarification mint asked for ("im curious about why an applicant needs to demonstrate the capacity to mod something that they are absolutely unwilling to nominate"). if I were a prospective bn, the only time I would even encounter this is if I think a map is great at first glance but uncover problems as I dig into it deeper, and for some reason those problems can't be fixed or the mapper is unwilling or something. it seems difficult to provide a good answer here if you're the type of modder/nominator to have high standards about the maps that you want to help along to ranked

Removing the RC test
I don't think this is too important regardless, but I find it odd that this is what was written about it

by being more indicative of reading comprehension than BN abilities.
isn't that the entire point of the test? understanding the sometimes meticulous wording of RC is important when you run into new examples of the edge cases it was designed around.

Increasing transparency on a BN application's status
ok

Reworking BN application feedbacks
ok (and i think it doesn't have to be anonymous either but w/e)

Platform for communication between NAT and applicant post-application
I can't appreciate why this is meaningfully different than the mentioned "group DMs", but I will just take your word for it that it's helpful since it must have been run by enough people to include in this forum post
Nao Tomori
The main logic for why we want applicants to explain their thought process is that, essentially, every BN is always making an evaluation of the quality of the maps they are requested to check, based on issues they see when they look at them. We want to make sure that the applicant can similarly identify those issues and, if needed, actually provide solutions and work with the mapper, even if in practice that sort of thing is a bit rarer due to the large number of requests BNs get.

As an update, we've rewritten the application guidelines to be much more clear on what we are asking for and why, taking into account the feedback from y'all. That's currently under discussion internally but we're generally happy with the spot the app changes are on and intend to reopen applications soon.
roufou
While I currently can't say I have any ideas for how, I think this system could be improved even more. However, it seems like a great step in the right direction (or a better direction?).

I can't say for sure, but I think it's easy for people who could make for good BNs to think "why bother?" due to the application being a bother, and I think it'd be better if it was at least slightly less tedious than it is currently.

My only real concern is that some people who may be good BNs could have trouble formulating their thoughts on some of these questions (particularly if we want to make BN more accessible to people who don't speak English super well). I wouldn't say it's a super huge concern, but I think it's something evaluators should keep in mind, although maybe this is so obvious it's something that won't cause problems?

edit: thought about that part above some more and it's probably not something that needs too much of a concern, it's probably fine.

I should say I'm mostly speculating based on my impression, so maybe the current system isn't as bad as I think.
Yuii-
this current BN test is extraordinarily dumb because it's only based on... 3 mods? and you get stuff like "subjective, mapper's intention" which is the biggest bull i have ever read in my entire life

actual good modders are being left out of the BNG (again, 2017 all over again) by an application process that, at its worst, doesn't even hide the applicant's name

reapplying every 2 months doesn't make sense, especially if every piece of feedback is subjective and you cannot even respond other than "contact the NAT".

<removed comment>

ya, remove the written part because it's useless, very easy and it's the same 20 questions on every single test. like who the hell cares about "skinning"? :rofl: :rofl: :rofl: :rofl: :rofl:

<removed comment about transparency in the system not being good enough, poster thinks applications depend on how much the NAT likes the applicants and suggests bringing back old BAT system from 2014>

anyway, fun system!!!
achyoo
Who hurt you? Incredibly cynical take and useless reply, especially considering you've been away for so long that I fully doubt you actually have a full grasp on how things currently are. "Bring back BAT and the vouching thing" is a hilarious thing to say considering the leaks that happened 2 weeks ago about a canned proposal, which tells me you made zero effort to actually understand what is even going on in the mapping ecosystem right now. This kind of unwarranted ego is kind of crazy considering oBWC'21 proved you fail to understand even things like decimal OD. But go off oomfie
Shii
Please remember that this is a place for actual discourse, not the malding ramblings of someone who clearly has a hateboner for the current people in charge. If you can't try and argue sensibly, just leave cuz you're not adding anything.

That said:

"subjective, mapper's intention" is a skill issue - you really should be able to understand mapper's intention whether you're a good modder or a bn (these are two distinct but overlapping things fwiw). I understand you might have difficulties with mapper intention, based on obwc, but don't feel too discouraged!

As above, good modders aren't necessarily good bns and vice versa. Hiding applicant name is irrelevant when writing styles and whatnot are pretty clear (and evalers could find out easily by other means anyways).

2 months is plenty of time to absorb and address the concerns raised in an application - personally I think 3 was fine but I don't see the harm either. And if that's all you can glean from BN app feedback I am seriously concerned for your reading skills.

NAT aren't required to be exceptional mappers (not that the current ones are bad) to be able to evaluate BNs or uphold precedent or whatever. I also don't know why you're shittalking mapper skill when you're not exactly hot stuff yourself.

Quiz was a formality for all but the worst applicants - you could easily ignore niche questions for shit like skinning and still pass just fine. The rest of the questions are indirectly answered in your mods anyways, so there's no loss.

The proposed changes do increase transparency as well. You're trying to equate some amount of inherent bias to a system that encourages circlejerking, which really goes to show that you're complaining in bad faith. Yes there'll be some bias. But have you considered that the applicants who're friends with NAT/BNs do better not because of the social connections, but because they have access to feedback and resources?

Ayu's own response hammers in my other thoughts so just read that. Now, can we get some actual sensible discourse on the proposal again :D
Nao Tomori
Please see below for the updated application guidelines. Let us know your thoughts. As a note, the evaluation generally does not rely on the mapper responding to the mods at any point.


Map 1: Submit a "BN check" mod on a map that you believe is close to a rankable / nominatable state. If the mapper were to address your mods, you would immediately be ready to press the nominate button. Briefly describe why you believe the map is ready for ranked. The map should not have any nominations at the time of submitting your application.

This is intended to provide information on your ability to conduct the final steps of the modding process as well as independently evaluate a map's overall rankability.



Map 2: Submit a mod on a map that you would not nominate unless significant improvements are made. Additionally, briefly explain why the map was not in a rankable state when you modded it; your modding should generally address those concerns. The map should include a full spread of difficulties. The map should be hosted by a different mapper than the first map (including collab participants).

This is intended to provide information on your issue identification skills, communication and wording, and ability to evaluate a map's overall rankability.



Map 3: Submit a mod on a map that, in your opinion, would be helpful to us in evaluating your ability to judge map quality and readiness. Indicate whether you would or would not nominate the map after your modding has been addressed. The map should be hosted by a different mapper than the first or second maps (including collab participants). The map should not have any nominations at the time of submitting your application.

This will provide you with an opportunity to further improve your application, keeping in mind the intentions stated in the descriptions of the previous submissions.
clayton
I think these improve on the aspects that you can use to judge applications

to clarify:

- "mock BN check" still means posting a normal mod to a real discussion, right? just the wording throwing me off here cuz "mock" makes it sound like it's a manufactured scenario in some way. if I'm understanding it right I think you can still just call it a "mod", and the rest of the description explains what you're looking for

- if map 3 is intended to be a wildcard mod of the applicant's preference, I think that could be worded a bit clearer (specifically "judge map quality and readiness" is vague to me, I don't know if you meant for this to describe all good mods or not). if that's not what you were going for then idk what it means

and I guess my earlier concern isn't as big of a deal if the applicant can show the before-mapper-updated version of the maps
Nao Tomori
Appreciate the feedback. Updated the wording on the first one. For the third map, it is supposed to be a wildcard for the applicant to "shore up" their app. It's supposed to be pretty vague - I wanted to avoid giving extremely specific instructions (well to be honest what I actually wanted was 2 of map 1 and 2 of map 2) so that it didn't become way too formulaic as it kind of has been in the last few years.
Shii

Nao Tomori wrote:

Appreciate the feedback. Updated the wording on the first one. For the third map, it is supposed to be a wildcard for the applicant to "shore up" their app. It's supposed to be pretty vague - I wanted to avoid giving extremely specific instructions (well to be honest what I actually wanted was 2 of map 1 and 2 of map 2) so that it didn't become way too formulaic as it kind of has been in the last few years.
Imo the proposed changes are already formulaic because they clearly lay out a setup similar to what people have already been using in their apps for years (1 nommable map, 1 bad map, 1 filler).

I reckon you could get away with some more specifics (perhaps offering suggestions for how to fill the wildcard?) without railroading the app process too much though :3
arialle
I think these are much better changes, and much more specific. Being formulaic shouldn't be a concern - it's still an application, and the more focused it is, the clearer the signals applicants are able to provide that they would be able to perform as a BN, saving headaches down the road. They still have the choice of spread, song, style etc., which still provides flexibility and choice for the applicant.

1 - The term 'BN check' may still throw some people off. Does this mean they don't have to be as detailed? Does it mean they have to rely more on their explanation of choice (since if the map is really up to standard there probably won't be many mods)?
I don't think it's a huge concern, but maybe something to think about.

3 - As mentioned above, this 'wild card' could be more specific. Adding details such as 'Must be in a different style to Maps 1 and 2' makes it clearer for the applicant, rather than just assuming they know all the maps should be varied.

So far this is definitely on the right track of becoming more accessible and easier to understand.
Nao Tomori
I think leaving exactly what a BN check entails up to the applicant makes more sense as that's what they would actually do when BN checking a map. Then we can evaluate that.

The maps don't have to be varied (beyond the mapper name). The whole point of the 3rd one is to not be specific - not sure what kind of clarity is needed there as it's supposed to be a catch all for anything the applicant thinks is missing from their application. Also, we aren't trying to teach people how to mod from square one in the application form. There is no checklist of issues the applicant needs to mod or we won't accept them or something. As long as the submitted mods show an ability to add value to the maps and reflect a good understanding of modding, it's what we're looking for.
Shad0w1and

Nao Tomori wrote:

I think leaving exactly what a BN check entails up to the applicant makes more sense as that's what they would actually do when BN checking a map. Then we can evaluate that.

The maps don't have to be varied (beyond the mapper name). The whole point of the 3rd one is to not be specific - not sure what kind of clarity is needed there as it's supposed to be a catch all for anything the applicant thinks is missing from their application. Also, we aren't trying to teach people how to mod from square one in the application form. There is no checklist of issues the applicant needs to mod or we won't accept them or something. As long as the submitted mods show an ability to add value to the maps and reflect a good understanding of modding, it's what we're looking for.
emm, now this sounds better, and I understand that you guys are looking for someone who can improve mapsets instead of a ranked section gatekeeper. but this way we might be putting more pressure on checking maps during the qualified phase, which people lack interest in. I think we will need to incentivize people to check RC-related stuff during the qualified phase, otherwise some errors might sneak through.
BlackBN
I'm wondering about the catch side. I can understand the purpose of submitting different types of maps, but mmh tbh as a catch BN (and an ex-catch BN applicant) I feel like it's quite hard to find different types of maps. Our mapping community is already pretty small, and map quality is pretty extreme: maps are either so good there's nothing to point out, or literally unmoddable (like the whole map should be remapped because of wrong concepts). I don't really have any ideas in mind currently, but I wonder if we can have some exceptions or simplifications for catch, especially looking at how small our mapping community (and BNG) are 😢.
too

Nao Tomori wrote:

Please see below for the updated application guidelines. Let us know your thoughts. As a note, the evaluation generally does not rely on the mapper responding to the mods at any point.


Map 1: Submit a "BN check" mod on a map that you believe is close to a rankable / nominatable state. If the mapper were to address your mods, you would immediately be ready to press the nominate button. The map should be by a mapper with 5 or less ranked maps. The map should not have any nominations at the time of submitting your application.

This is intended to provide information on your ability to conduct the final steps of the modding process as well as independently evaluate a map's overall rankability.



Map 2: Submit a mod on a map that you would not nominate unless significant improvements are made. Additionally, briefly explain why the map was not in a rankable state when you modded it; your modding should generally address those concerns. The map should include a full spread of difficulties. The map should be hosted by a different mapper than the first map (including collab participants).

This is intended to provide information on your issue identification skills, communication and wording, and ability to evaluate a map's overall rankability.



Map 3: Submit a mod on a map that, in your opinion, would be helpful to us in evaluating your ability to judge map quality and readiness. Indicate whether you would or would not nominate the map after your modding has been addressed. The map should be hosted by a different mapper than the first or second maps (including collab participants). The map should not have any nominations at the time of submitting your application.

This will provide you with an opportunity to further improve your application, keeping in mind the intentions stated in the descriptions of the previous submissions.
It is much clearer and better than the first post.
This change will reduce the gap between those who know this and those who don't, so you will be able to decide for yourself whether a mapset or mod is okay when applying for BN in the future.

I like that change.
wafer
I think the changes are mostly better but I'm really not a fan of restricting the 'will nominate' map to mappers with 0-5 ranked maps

seems more tedious and complex for applicants to find vs now
-Hitomi

Nao Tomori wrote:

Map 1: Submit a "BN check" mod on a map that you believe is close to a rankable / nominatable state. If the mapper were to address your mods, you would immediately be ready to press the nominate button. The map should be by a mapper with 5 or less ranked maps.
Does it have to be a map from a "new-ish" mapper? Might be pretty hard to find one at some point; I think ~10 ranked maps would be better tbh. idk if the figure is this low just for the sake of the applicants posting more mods cuz such mappers int more (I assume), but I do think map quality really varies regardless of how many ranked maps the user has.

Nao Tomori wrote:

The map should be hosted by a different mapper than the first or second maps (including collab participants)
Refers to top diffs only or including low diffs collabs?

Also, if the 2 oszs (before mod and current) are still a thing, is it necessary to have them (specifically the after mods one)? Feels like that's just overcomplicating the process cuz even if the host messed up after applying mods ofc u won't nominate it and instead it will require more modding.
FuJu

-Hitomi wrote:

Refers to top diffs only or including low diffs collabs?

Also, if the 2 oszs (before mod and current) are still a thing, is it necessary to have them (specifically the after mods one)? Feels like that's just overcomplicating the process cuz even if the host messed up after applying mods ofc u won't nominate it and instead it will require more modding.
1. Refers to Host

2. .osz files have never been a necessity. They just make life easier for us and you because we can't misinterpret things with them.
Nao Tomori
The limit on ranked maps is to push applicants towards forming their own opinion on the map rather than just picking a safe map by a very experienced mapper. Obviously we can't quantify "safe map" and don't want to either (since BNs perennially nominating safe maps is fine). We want to see the applicant judge the map to be safe by themselves, and not solely by virtue of it being by an experienced mapper or being bubbled, which indicates widespread approval by other BNs. For what it's worth, the vast majority of apps I've seen have been exclusively mods on maps by mappers with 0 ranked maps, so if anything this is explicitly expanding things beyond the de facto "meta" restrictions.
ikin5050

Nao Tomori wrote:

Map 1: Submit a "BN check" mod on a map that you believe is close to a rankable / nominatable state. If the mapper were to address your mods, you would immediately be ready to press the nominate button. The map should be by a mapper with 5 or less ranked maps. The map should not have any nominations at the time of submitting your application.


Map 2: Submit a mod on a map that you would not nominate unless significant improvements are made. Additionally, briefly explain why the map was not in a rankable state when you modded it; your modding should generally address those concerns. The map should include a full spread of difficulties. The map should be hosted by a different mapper than the first map (including collab participants).

This is intended to provide information on your issue identification skills, communication and wording, and ability to evaluate a map's overall rankability.

I struggle to see the difference here. Both map 1 and map 2 should be immediately ready to press the nominate button on if your mods are addressed and applied properly? The only real difference between these is that the first map has a silly <5 ranked maps requirement on it. I also don't really see how it helps, especially for taiko, where everyone and their dog can get 5 ranked maps without making anything that would be considered a 'safe map'.
Resona
not a fan of the 5 ranked map limit being on the 'accept' map, as by your own admission, BNs perennially nominating safe maps is fine
to be fully transparent, i'm one of these BNs; there just aren't many new mappers who make things i like enough to nominate and i wouldn't want to see people trying to lower their own standards to just get into bn
achyoo

Nao Tomori wrote:

Map 1: Submit a "BN check" mod on a map that you believe is close to a rankable / nominatable state. If the mapper were to address your mods, you would immediately be ready to press the nominate button. The map should be by a mapper with 5 or less ranked maps. The map should not have any nominations at the time of submitting your application.

This is intended to provide information on your ability to conduct the final steps of the modding process as well as independently evaluate a map's overall rankability.
Does this newly revised application include the previous questions, i.e. "Why do you think the map is ready to be nominated?"

If it does (and I think it should), you can remove the 5 or less ranked maps limit. If applicants are being evaluated on their ability to evaluate, and they pick the "safe map" like you alluded to, the "why" question forces them to showcase their understanding and evaluation of the map's quality, and gives you a glimpse into their thought process as well as quality standards. Truly capable applicants would have no issue with this part, and those who rely on others' widespread approval would struggle and thus be filtered out during the evaluation process.

It also reduces the likelihood that the applicant picks a mapper who can easily rank maps; if they are unable to satisfactorily justify themselves beyond "This map is clean and has no issues, while representing the song well.", then they can be rejected for "lack of content in application, cannot accurately judge applicant". This has happened many times under the old system, where applicants applied with mods that don't fully cover the grounds expected of them, so I don't see why it cannot be carried over to the new system. With rejection for "lack of content" being a possibility, applicants will be less likely to self-sabotage by picking "safe, easily rankable" maps to use in their application.

Would like to hear what you think about this.
Nao Tomori
Okay, removed the limit. To be honest I don't think an answer of "map is clean and I like the song" is necessarily useful, but it is also a perfectly fine answer. Regardless, I think map 1 would not be a target for lack-of-content issues, because map 1 is not supposed to showcase issue resolution; it's supposed to show they can a) identify and b) polish something that's already fine. I would expect map 2 especially, and probably map 3 most of the time, to showcase more in-depth "actual" modding, while map 1 would be more like metadata checks, mp3, hitsounds and so on.
achyoo
Right, so the filter would mostly be in map 2 and map 3, while map 1 showcases how they would do in an actual BN check.
Yea that sounds good to me
AnimeStyle
I never applied to BNG because the prospect of wasting someone's time with my application was too daunting.
With those proposed changes (specifically the last changes made by Nao) this seems more like something I would consider doing.
Mostly due to having way more direction on what to actually submit - and I feel like others - especially those who didn't get mentored by a former BN/NAT - will appreciate that.
Topic Starter
Hivie
and with that, we've finally re-opened applications!

new format is based on Nao's suggestions, which you can fully check out in https://bn.mappersguild.com/bnapps

included changes from here are the removal of the RC test and displaying evaluator comments in future evals. the rest of the changes will be implemented over the next few weeks.
clayton
reviewing the whole page now, I can't tell if an application would be denied for reasons not accounted for on this page. for example is general modding activity important (beyond just having 150 lifetime kd)?
Mirash
Will the fast rejoin thing be up
Okoayu

Mirash wrote:

Will the fast rejoin thing be up
that option should have remained there. if it's not there we will need to add that back

apparently if you go to the bn app page and select the gamemode you're applying for, it should give you a prompt saying you can skip the app if you're on good terms :)

Edit2 : just confirmed that it is indeed still there
Serizawa Haruki
I'd like to talk about various issues that go beyond these changes and make some suggestions for potential improvement. I believe that what has been discussed (and partially implemented) so far is mostly just a change of semantics, but ultimately this isn't enough and what is needed is a complete overhaul of the BN application process. The points presented in this post are based on long-term observations from myself as well as several other people and I'll try to summarize them below.

1) Current issues


1.1) The "3 mod showcase system" is generally not ideal because even just finding 3 maps that meet the desired criteria can be very difficult and time consuming. A lot of maps are incomplete or very low quality which already makes them unfitting for the application. Some others are very high quality and therefore don't offer a lot to work with. Those that fall somewhere in between often have lots of mods already which again reduces the amount of content that can be used to demonstrate one's modding skills. Usually the maps are also supposed to contain specific issues in order to cover all possible aspects of mapping as expected.

1.2) The expectations/requirements are also problematic due to the fact that they lead to "artificial" mods that often don't reflect actual mods done by BNs. Applicants don't just mod any map or make any kind of mod, over time a specific formula has been developed which is supposed to meet these requirements. However, this doesn't properly measure someone's modding abilities but rather the ability to figure out what exactly evaluators are looking for and adapt accordingly. If you took random mods made by BNs and used them in a BN application, the result would most likely be negative since there are no such expectations from them while being a BN, so expecting them from applicants doesn't make much sense and is not realistic.

1.3) This way of modding for BN applications can have other drawbacks as well, such as copying certain mods/suggestions from other people without understanding them and using them in a different context where they might not even apply. It has led to modding becoming quite homogenized because people are treating it like there is one right way to mod and wanting to learn that in order to become BN, so they are often blindly following other modders by pointing out the same type of issues, using the same reasoning, wording, etc. In reality there are many different "modding styles" and none is objectively better than the other. Moreover, by trying to check all the boxes, modders might focus too much on finding potential issues and subconsciously mentioning things that are fine or exaggerating minor issues.

1.4) The evaluation criteria are unclear/vague and there are hidden expectations so modders essentially have to take a guess as to what they should and shouldn't do. While the attempt to improve this aspect as discussed in this thread is a good start, I still have doubts whether it really works in practice. This might also have to do with the way evaluations are done though, which brings me to my next point.

1.5) Due to the subjective nature of mapping and modding, evaluations can differ widely from one NAT member to another. As such, there is a certain RNG component at play, which can make evaluations feel unfair. This also means that if an applicant has different views on map quality or on what is and isn't an issue in a map compared to an evaluator, it could impact their result negatively. While it might be impossible to avoid making judgements based on personal preferences, it should be reduced to a minimum.

1.6) Unfortunately evaluations are also prone to bias in multiple ways. Firstly, there might be a subconscious bias towards negative aspects since the task consists of checking for mistakes the applicant might have made, similar to what I mentioned above regarding modders focusing too much on finding potential issues in a map. Mistakes and shortcomings seem to hold significantly more weight than things that were done well, so even if most aspects are positive, it can still result in a failure. Another thing to note is that according to the evaluation process, if the majority of the evaluators votes "fail", it results in the applicant automatically being denied. However, the same is not true for a majority of "pass" votes, again indicating a tendency towards negativity.

1.7) The other form of bias consists of favoring people the evaluator likes or is friends with, and on the other hand opposing people they dislike. This is exacerbated by the fact that evaluators are not always exclusively randomized; they can also assign themselves to an application in order to substitute someone or act as an additional evaluator, giving them the possibility to skew the result.

1.8) Generally the aforementioned inconsistencies in evaluations are demonstrated by occurrences like former BNs or even former NAT members failing applications (or for example, Elite Nominators/former NAT members being kicked/probationed) which understandably raise some questions. It seems unlikely that competent modders forget or unlearn their skills in a few months or 1-2 years, so either problems have been overlooked previously and they were seen as better than they actually are, or the assessments don't do a good job at determining someone's capabilities. Previous BN experience should play a bigger role when assessing a candidate.

1.9) I find it questionable that the behavior of future and existing BNs is assessed by the NAT because they are not specifically educated/trained on how to do this and might not always be able to make fair calls about what's right or wrong. As some members have had incidents of misconduct themselves, they might not be the best candidates to judge how others act, and there have been examples of debatable decisions taken in this regard.

1.10) All of this ties into the fact that there are little to no checks or consequences for subpar or unfair evaluations. This is of course a result of the NAT's self-regulation, but I think in part it also has to do with the fact that applications and their results are not visible to the public, so they are not subject to community opinions like qualified maps are for example, which can be a form of quality control. Apparently it is now possible to allow applications to be viewed publicly, but I'm not sure where they can be viewed by other users (was this explained anywhere?). The inability for decisions to be appealed can elicit feelings of powerlessness in applicants as well.

1.11) Whether the upcoming changes to how feedback is delivered are beneficial remains to be seen. Either way, the problematic aspect is not necessarily the feedback's format, but more importantly its content. Issues are often explained poorly or insufficiently, making it hard to understand for the person reading it. The provided reasoning is sometimes overly subjective and not supported by facts or evidence, as well as generally lacking helpful information on how to improve. The different and potentially contradicting answers from evaluators when asking further questions only add to the confusion, but this should hopefully be mitigated by the new unified communication method.

2) Stats


Next, I want to present and talk about some stats on the pass rate of BN applications. The data was taken on February 15th 2024 and is based on all-time evaluations from all current NAT members. I can share the complete spreadsheet if someone is interested.

2.1) The first thing that stands out is the large discrepancy between the different game modes:
osu! (standard): out of 526 total evaluations, 169 passed = 32.13% pass rate
osu!taiko: out of 164 total evaluations, 68 passed = 41.46% pass rate
osu!catch: out of 190 total evaluations, 135 passed = 71.05% pass rate
osu!mania: out of 272 total evaluations, 155 passed = 56.99% pass rate
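If you want to double-check these percentages yourself, here is a minimal Python sketch that recomputes them from the counts quoted above; it assumes nothing beyond the four total/passed pairs listed here (the full spreadsheet is not reproduced):

```python
# Recompute the per-mode pass rates from the counts quoted in this post.
# Format: mode -> (total evaluations, passed), data as of 2024-02-15.
evals = {
    "osu! (standard)": (526, 169),
    "osu!taiko": (164, 68),
    "osu!catch": (190, 135),
    "osu!mania": (272, 155),
}

for mode, (total, passed) in evals.items():
    rate = passed / total * 100
    print(f"{mode}: {passed}/{total} passed = {rate:.2f}% pass rate")
```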
A possible reason could be the size difference between, for example, standard and catch, but the success rate in the former being less than half that of the latter is still a huge gap. And considering the significant growth of mania in recent times, it has nearly reached the same number of BNs as standard (even surpassing it briefly), yet the percentages seen above still differ notably, so size is likely not the only factor (if it is one at all). Taiko is also on the lower side here; I'm not sure if it's related to the fact that there are several newer NAT members in this mode, but it just stuck out to me, and it also explains why there are not that many evals in total. So the question is: is the skill level of modders across game modes really that different, is the learning curve higher or lower depending on the mode, or does each mode simply approach evaluations differently (stricter or more lenient)?

2.2) The other interesting aspect I noticed is how much the pass rates vary between individual members of each mode. The most notable one is osu! standard, where the highest rate is 45.83% and the lowest only 18.00%, and these are not outliers either, as there are similar values for other people. The only other mode where the numbers differ significantly across evaluators is taiko (24.32%-57.14%); however, both the highest and the lowest values there are outliers. Both mania (48.28%-61.76%) and especially catch (70.27%-75.00%) are closer together, and (coincidentally or not) these are exactly the modes with the highest pass rates overall.

3) Suggestions/ideas


3.1) The core idea is similar to what has been posted by Shii here, but with some modifications and additional details. Instead of submitting 3 specific mods, the applicant's recent modding history is looked at in general. However, the modded maps and their respective suggestions wouldn't be analyzed in a very detailed manner. It would just be a check of whether the mods make sense, are explained in a comprehensible way, help to improve the map somehow, and whether nothing major was missed. Smaller mistakes and more sophisticated modding abilities would not be relevant though, and no special criteria would have to be met. This would filter out applicants who are clearly not experienced and skilled enough to mod maps at a BN level.
The advantage is that the process would become less stressful, time-consuming and difficult for both parties involved (at least to some degree). This would also put less focus on the application itself and more on the applicant, resulting in more accurate and consistent outcomes.

3.2) If no major problems are found, the applicant becomes a trial/pseudo BN (a new usergroup wouldn't be necessary) where they don't actually have any BN abilities like nominating or disqualifying maps, but they can place hypothetical nominations after completing their mod. This could work either by pressing an actual button on the map's discussion page if the necessary dev support exists, or otherwise by saving the map as an .osz file and submitting it as a nomination on the BN website. After a certain period of time or a certain number of nominations, they are evaluated and, if found competent enough, added as a probationary BN. Otherwise they are removed and given the standard cooldown.
The idea behind this is the fact that practical experience is often the best way to actually learn how to do something. A good analogy is applying for a job: Usually you are not expected to know everything beforehand, there is a lot of stuff you just master while doing the job. Obviously someone without experience is not ready to take major responsibilities yet, but they can be guided to that point slowly.

3.3) The option to appeal BN app results should be added. The way it would work is that the applicant could file an appeal where they present reasons as to why they think the assessment was faulty. As long as the provided arguments are not completely unreasonable, it would be evaluated by previously uninvolved NAT members. If they come to the same conclusion as the other evaluators, an explanation would be given to the applicant and they could not appeal it again. Otherwise, if the appeal is considered valid, it would be discussed by the entire NAT of the relevant game mode until a consensus is reached, or if no agreement can be found, a vote is held to determine the final outcome by simple majority.
This feature would address some of the issues mentioned above such as inconsistency, subjectivity and bias. If I remember correctly, it used to be a thing for existing BN evaluations (but I don't know how exactly it worked) and I think there were some cases where appeals were granted, but don't quote me on that.

4) Potential issues


Of course there are also some potential issues and disadvantages to the ideas proposed above, so this is an attempt to identify, address and solve them.

4.1) As mentioned by RandomeLoL here, there would probably be quite a lot of "pseudo BNs" being added, which would increase the workload of the NAT considerably. A few possibilities have also been named before, such as increasing the minimum kudosu requirement to apply (to about 500 perhaps), which would also ensure the modder has at least some experience. Having a hard cap on applicants and/or BN additions at a time has also been brought up already. Another way to alleviate the problem could be using help from BNs to do evaluations, which is already in place now but could possibly be expanded further. Another aspect to consider is that this might only be a temporary issue, because the number of total BNs would likely increase over time, which in turn means a larger group of evaluators. This of course only works if corresponding training programs for BNs are run consistently and successfully.

4.2) Many people would probably expect quality standards to drop a lot, but this is not necessarily the case. First and foremost, the fact that these new BNs would not be able to actually nominate maps is already a major safeguard. If deemed necessary, this trial period could even be extended beyond just 1-2 months. Another important preventive measure that has been neglected lately is quality assurance. If more people were actually checking, playing and reporting qualified maps, mistakes and quality issues could be reduced significantly, and given the right tools and incentives, there are definitely enough people willing to do these things. The short-lived Qualified Inspector project was honestly a great idea, and although a new usergroup won't be added, that should not be a reason to abandon the topic entirely. I think adding users who have this role to the BNs would be an acceptable alternative, which would also grant them rewards like tenure badges. Additionally, I remember there being a conversation about more incentives for players (as in non-mappers/modders) to play qualified maps in order to detect problems more easily. I'm not sure what happened to that idea, but even just making plays during qualified carry over to ranked (as long as the map is not disqualified) would surely increase the play count.

4.3) The trial/pseudo BN system could be demotivating because the people going through it don't actually get to nominate maps, which is usually the interesting and exciting part about becoming a BN. On the other hand though, I think this is still less demotivating than doing mods for BN applications, failing and starting over again, since that can also feel like a waste of time. A trial phase would at least give people the feeling of having accomplished something and making progress towards their goal.

4.4) Another valid concern is that for modders who are already capable of performing actual BN duties, this would just hold them back unnecessarily. A solution to this could be letting people skip to regular probation immediately if they are considered good enough.

4.5) Not being able to pick which mods are submitted could certainly be an issue if someone has recently made a mod that is incomplete or that they didn't put much effort into, as it would reflect badly on them. As a compromise, there could be an option to exclude certain mods from the application. Similarly, if someone thinks they did particularly well on a certain map, they could optionally mention that as well.

--------------------------------------------------------------------------------------------------------------------

On a sidenote, I also recommend checking out this thread, it's quite interesting because there are some questions and concerns related to this topic: https://www.reddit.com/r/osugame/comments/14nv46a/we_are_the_nomination_assessment_team_ask_us/
achyoo
I'll try to give my thoughts on the points I actually have thoughts on

Serizawa Haruki wrote:

1.1) The "3 mod showcase system" is generally not ideal because even just finding 3 maps that meet the desired criteria can be very difficult and time consuming. A lot of maps are incomplete or very low quality which already makes them unfitting for the application. Some others are very high quality and therefore don't offer a lot to work with. Those that fall somewhere in between often have lots of mods already which again reduces the amount of content that can be used to demonstrate one's modding skills. Usually the maps are also supposed to contain specific issues in order to cover all possible aspects of mapping as expected.
I believe that the difficulty of finding maps to use for an application is overblown. The advice I always give modding mentees is to simply go to a BN that has their request log public and mod maps from the people that requested that BN. There are more than enough suitable maps available out there. If applicants still struggle to find maps, it's because they are going into maps looking for specific issues rather than modding a map and finding the issues in it.

Serizawa Haruki wrote:

1.2) The expectations/requirements are also problematic due to the fact that they lead to "artificial" mods that often don't reflect actual mods done by BNs. Applicants don't just mod any map or make any kind of mod, over time a specific formula has been developed which is supposed to meet these requirements. However, this doesn't properly measure someone's modding abilities but rather the ability to figure out what exactly evaluators are looking for and adapt accordingly. If you took random mods made by BNs and used them in a BN application, the result would most likely be negative since there are no such expectations from them while being a BN, so expecting them from applicants doesn't make much sense and is not realistic.
Most applicants do the formulaic modding and most of them fail so I don't see why this is an issue. Anyway the new system already solves this by shifting focus from the mods themselves to the overall decision making and ability to judge maps so.

Serizawa Haruki wrote:

1.3) This way of modding for BN applications can have other drawbacks as well, such as copying certain mods/suggestions from other people without understanding them and using them in a different context where they might not even apply. It has led to modding becoming quite homogenized because people are treating it like there is one right way to mod and wanting to learn that in order to become BN, so they are often blindly following other modders by pointing out the same type of issues, using the same reasoning, wording, etc. In reality there are many different "modding styles" and none is objectively better than the other. Moreover, by trying to check all the boxes, modders might focus too much on finding potential issues and subconsciously mentioning things that are fine or exaggerating minor issues.
How is that a BN app problem, that's a mentorship problem. Modding mentors are teaching people to mod that way for BN app so everyone does it. It's funny because most people that do this don't pass so I don't know why they keep doing it.


Serizawa Haruki wrote:

1.4) The evaluation criteria are unclear/vague and there are hidden expectations so modders essentially have to take a guess as to what they should and shouldn't do. While the attempt to improve this aspect as discussed in this thread is a good start, I still have doubts whether it really works in practice. This might also have to do with the way evaluations are done though, which brings me to my next point.
Hidden expectations is a fair argument, but I believe the new system solves it pretty adequately. What you need to do is clearly outlined, but I expect that people will take time to adapt and change their BN app methods so we can look back on this in a few months to see how applicants are doing.

Serizawa Haruki wrote:

1.5) Due to the subjective nature of mapping and modding, evaluations can differ widely from one NAT member to another. As such, there is a certain RNG component at play, which can make evaluations feel unfair. This also means that if an applicant has different views on map quality or on what is and isn't an issue in a map compared to an evaluator, it could impact their result negatively. While it might be impossible to avoid making judgements based on personal preferences, it should be reduced to a minimum.
From my personal experience, evaluators' personal opinions and preferences don't have as much of an impact as you think. The evaluators will bring them up in group discussion, but very rarely are they what makes the difference in the final evaluation outcome. Most of the biggest "mistakes" are judged based on what is intersubjective, using past DQ discussions and veto mediation outcomes as precedents for what needs to be enforced and what doesn't. At least, that has been my experience from being an evaluator for 8 months.

Serizawa Haruki wrote:

1.6) Unfortunately evaluations are also prone to bias in multiple ways. Firstly, there might be a subconscious bias towards negative aspects since the task consists of checking for mistakes the applicant might have made, similar to what I mentioned above regarding modders focusing too much on finding potential issues in a map. Mistakes and shortcomings seem to hold significantly more weight than things that were done well, so even if most aspects are positive, it can still result in a failure. Another thing to note is that according to the evaluation process, if the majority of the evaluators votes "fail", it results in the applicant automatically being denied. However, the same is not true for a majority of "pass" votes, again indicating a tendency towards negativity.
Yea I agree, but I would assume that evaluators are aware of this and do try to look at the big picture rather than focusing on mistakes. The new system should help, because the applicant should be judged holistically and not based on their modding mistakes anymore (due to the new decision making judgement portion). BTW, 3 fails can still result in a pass; the vote is not final, and whatever consensus comes out of group discussion is what counts, rather than the vote itself. I'm not saying it happens regularly, but I'm saying it is a possibility.

Serizawa Haruki wrote:

1.7) The other form of bias consists in favoring people someone likes or is friends with, and on the other hand opposing people they dislike. This is exacerbated by the fact that evaluators are not always exclusively randomized, but they can also assign themselves to an application in order to substitute someone or as an additional evaluator, giving them the possibility to skew the result.
Substitutions are generally never done on a whim; they only happen when a) an evaluation goes overdue OR b) one of the evaluators specifically says they can't do a certain one, in which case it's usually rerolled, not handpicked. Disclaimer that this is based on my tenure and I can't say with 100% certainty that it works this way now, but I think it's fair to assume they still do it this way.

Serizawa Haruki wrote:

1.8) Generally the aforementioned inconsistencies in evaluations are demonstrated by occurrences like former BNs or even former NAT members failing applications (or for example, Elite Nominators/former NAT members being kicked/probationed) which understandably raise some questions. It seems unlikely that competent modders forget or unlearn their skills in a few months or 1-2 years, so either problems have been overlooked previously and they were seen as better than they actually are, or the assessments don't do a good job at determining someone's capabilities. Previous BN experience should play a bigger role when assessing a candidate.
I don't see why someone couldn't unlearn their skills in a few years though. A few months isn't even an argument, because a reapp after only a few months is only needed if they left on standard terms (which usually means they fucked up as a BN, in which case it's not that unlikely they fail??). Previous BN experience does play a role; most returning members prior to the instant rejoin button had massive leniency given to them. It also works the other way: former members that had a shaky tenure would have that held against them as well.

Serizawa Haruki wrote:

1.9) I find it questionable that the behavior of future and existing BNs is assessed by the NAT because they are not specifically educated/trained on how to do this and might not always be able to make fair calls about what's right or wrong. As some members have had incidents of misconduct themselves, they might not be the best candidates to judge how others act, and there have been examples of debatable decisions taken in this regard.
Let's be real, who on osu is specifically trained in assessing behaviour? Both NAT and GMT, as far as I know, have access to the same rulebook and guidelines, and most big behavioural issues go through GMT anyway. Unless you mean you want a higher-up position to verify everything, which is kind of ridiculous.


Serizawa Haruki wrote:

1.10) All of this ties into the fact that there are little to no checks or consequences for subpar or unfair evaluations. This is of course a result of the NAT's self-regulation, but I think it also has to do in part with the fact that applications and their results are not visible to the public, so they are not subject to community scrutiny the way qualified maps are, for example, which can act as a form of quality control. Apparently it is now possible to allow applications to be viewed publicly, but I'm not sure where other users can actually view them (was this explained anywhere?). The inability to appeal decisions can also leave applicants feeling powerless.
a) Evaluations can be public, just not from NAT side (always been the case even before the new update). The applicant is free to show their evaluation to anyone they want. Many don't though. NAT have no issues making evaluations public, but there are still BNs and applicants that would rather have the anonymity.

b) You can appeal, people just don't.

Serizawa Haruki wrote:

1.11) Whether the upcoming changes to how feedback is delivered are beneficial remains to be seen. Either way, the problematic aspect is not necessarily the feedback's format, but more importantly its content. Issues are often explained poorly or insufficiently, making them hard for the reader to understand. The provided reasoning is sometimes overly subjective and not supported by facts or evidence, as well as generally lacking helpful information on how to improve. The differing and potentially contradictory answers from evaluators when further questions are asked only add to the confusion, but this should hopefully be mitigated by the new unified communication method.
I am cautiously optimistic about the new system, but speaking from past experience, most people fail because they just lack the prerequisite knowledge to even begin modding anyway. Like, you can't identify issues and give good solutions if you barely know mapping stuff. How is that going to fit into feedback? If I were to be harsh, the correct play is to "quit modding, learn how to map, then come back", but who wants to hear that?
Also you said "The provided reasoning is sometimes overly subjective and not supported by facts or evidence, as well as generally lacking helpful information on how to improve."
I find it hilariously ironic, because you touched on this point, yet the solutions you provided do nothing to improve it, nor did you support your own point with evidence.

Serizawa Haruki wrote:

Next, I want to present and talk about some stats on the pass rate of BN applications. The data was taken on February 15th 2024 and is based on all-time evaluations from all current NAT members. I can share the complete spreadsheet if someone is interested.

2.1) The first thing that stands out is the large discrepancy between the different game modes:
osu! (standard): out of 526 total evaluations 169 passed = 32,13% pass rate
osu!taiko: out of 164 total evaluations 68 passed = 41,46% pass rate
osu!catch: out of 190 total evaluations 135 passed = 71,05% pass rate
osu!mania: out of 272 total evaluations 155 passed = 56,99% pass rate
A possible reason could be the size difference between, for example, standard and catch, but the former's pass rate being less than half of the latter's is still a huge gap. Mania has also grown significantly in recent times and nearly reached the same number of BNs as standard (even surpassing it briefly), yet the percentages above still differ notably, so community size is likely not the only factor (if it is one at all). Taiko is also on the lower side here; I'm not sure if that's related to the fact that there are several newer NAT members in this mode, but it stuck out to me and would also explain why there are not that many evals in total. So the question is: does the skill level of modders differ that much across game modes, is the learning curve higher or lower depending on the mode, or does each mode simply approach evaluations differently (stricter or more lenient)?

2.2) The other interesting aspect I noticed is how much the pass rates vary between individual members within each mode. The most notable case is osu! standard, where the highest rate is 45,83% and the lowest only 18,00%, and these are not outliers either, as several other members sit at similar values. The only other mode where the numbers differ significantly across evaluators is taiko (24,32%-57,14%); however, both the highest and the lowest values there are outliers. Mania (48,28%-61,76%) and especially catch (70,27%-75,00%) are closer together, and (coincidentally or not) those are exactly the modes with the highest overall pass rates.
Consider ^ what I mentioned above; I just think most people are trying for BN before they're ready. There are just more people in standard who are overly eager to apply. BTW, when mock evaluations were a thing back then, the randomly rolled BNs' opinions generally aligned with the NAT's. During BN evaluator cycles, the BNs were generally even stricter than the NAT. So if you're wondering, it's a gamemode thing, not an individual thing. Discrepancy in pass rates across individuals can probably mostly be explained by just RNG. Also, like I mentioned before, the votes are NOT FINAL. Someone can pass a BN app with 3 fails if group discussion goes positively. NATs can also sometimes overcompensate; when they see an applicant they think the other 2 will pass, they play devil's advocate and try to point out the negatives to make the group discussion phase more valuable. Similarly, if an NAT sees an applicant they think will be failed by the other evaluators, they can sometimes take a more positive outlook to make the group phase more productive and less prone to narrow perspectives. It happens sometimes and skews the vote a little. But again, the vote isn't final anyway.


RE: 3.1, 3.2:

"recent modding history is looked at in general"
yea no. Either this means they are being evaluated purely on wording without looking at the maps (which fucking sucks), or the evaluators have to download the maps and look at them, which, even when not analyzed in a super detailed way (btw no one does that for evaluations, it's simply not feasible and takes too much time), is more workload than the current 3 map system.

new system already tries to mitigate the "look at modder, not mods" issue by trying to understand the modder's thought process so we can see how it works.

I agree with the practical application thing, kinda sad trial BN is gone tbh.

RE: 3.3:

Appeals are available. Have been for a while.

RE: 4.2:

Qualified QA and the like have been discussed for almost half a decade at this point; nothing you brought up is new. If a good, feasible, and effective solution had been brought up in the past 5 years, it would have already been implemented. Everyone is desperate for a working QA system, not just you lol
clayton
1.1 I thought this was an issue too before the map selections were changed a bit; now I don't really see the problem. it's pretty easy to just find one map like this even if it's not what you normally mod or something, because the criteria are broad

1.2 - 1.3 problems similar to this are found in every type of testing or interviewing culture, it sucks in some ways but I think it's difficult to legitimately evaluate someone's abilities without either a contrived setup or observation of real-world experience

1.4 - 1.11 (okay I admit I skimmed it) I've never really been sympathetic toward complaints that these things are biased because I think a lot of personal weight from evaluators is part of a working system. how I envision the best form of this system anyway is that the NAT are experienced people who you can trust to moderate the mapping ecosystem & promote new people into its management, and it would only be a detriment if they are prevented from applying their judgement. obviously in practice not everyone is perfect and people want different things. but this style of top-down management actually works fine for me in terms of QC for a small team (despite my very vocal opposition to multiple parts of it, over the years...). I have ideas for other types of management that I feel could be worthwhile but they're less like "NAT should have a different process for xyz" and more like "start from nothing and refactor everything" lol so I will not say that here

2 these are individual people applying their own standards across multiple not-very-related gamemodes so tbh I don't see the significance of basically any of this.

3.2 I wouldn't mind if this system was just what it meant to be a probationary BN. but having both this and probation seems somewhat redundant, they are obviously different but serve a similar purpose (hold back their nominating ability while they begin to perform BN duties)

3.3 I don't know how to assess the amount of extra work this creates, but if it's not too much then this sounds good. to be honest I thought this was already a thing in some unofficial capacity but I guess not? person above says it is a thing
Serizawa Haruki

achyoo wrote:

I'll try to give my thoughts on the points I actually have thoughts on


1.1) I believe that the difficulty of finding maps to use for an application is overblown. The advice I always give modding mentees is to simply go to a BN that has their request log public and mod maps from the people that requested that BN. There are more than enough suitable maps available out there. If applicants still struggle to find maps, it's because they are going into maps looking for specific issues rather than modding a map and finding the issues in it.
That's exactly the problem though: with the current system you have to look for maps with specific issues, because not all maps are suitable, so just picking any map doesn't work. A lot of the time you start modding a map and at some point realize it doesn't fulfill the role of a "BN app mod", so you have to find another one, which increases the effort of getting 3 appropriate maps together.


achyoo wrote:

1.2) Most applicants do the formulaic modding and most of them fail so I don't see why this is an issue. Anyway the new system already solves this by shifting focus from the mods themselves to the overall decision making and ability to judge maps so.
1.3) How is that a BN app problem, that's a mentorship problem. Modding mentors are teaching people to mod that way for BN app so everyone does it. It's funny because most people that do this don't pass so I don't know why they keep doing it.
I really don't believe this is the case. For example, most BN app feedback continuously emphasized the importance of overarching mods, especially about contrast, emphasis and song representation, no matter if those issues were actually present in the submitted maps or not. The notion that smaller/less impactful suggestions are bad has also been pushed, as another example. Whether people pass or not is irrelevant here, as the reason they fail is not specifically this formulaic modding; the point is that certain types of modding are arbitrarily encouraged while others are discouraged, instead of being open to different ones.


achyoo wrote:

1.4) Hidden expectations is a fair argument, but I believe the new system solves it pretty adequately. What you need to do is clearly outlined, but I expect that people will take time to adapt and change their BN app methods so we can look back on this in a few months to see how applicants are doing.
It's a little better perhaps, but I really don't see this as a solution to the problem. There are still quite a few things that are unclear and were not really explained; I mentioned some of them here, for example.


achyoo wrote:

1.5) From my personal experience, evaluators' personal opinions and preferences don't have as much of an impact as you think. The evaluators will bring them up in group discussion, but very rarely are they what makes the difference in the final evaluation outcome. Most of the biggest "mistakes" are judged based on what is intersubjective, using past DQ discussions and veto mediation outcomes as precedents for what needs to be enforced and what doesn't. At least, that has been my experience from being an evaluator for 8 months.
The term "intersubjective" is often thrown around to justify precisely these kinds of things, but in reality there are no intersubjective standards that are shared by the whole mapping community or even a majority of it since it's so divided on certain opinions. NAT members might have similar views on map quality and use that as a standard, but it doesn't necessarily align with the views of BNs, mappers etc. And even then, they are not enforced consistently, in part because of evaluators' opinions differing from each other, as well as bias towards/against certain BNs, mappers or specific maps. The newly added public evaluation archives contain plenty of evidence that personal preferences are very much part of evaluations. I'm not saying it's the same as the 2015 QAT era, but it's definitely going in a similar direction, in the sense that instead of controlling which maps get ranked based on individual beliefs, what is being controlled is who gets BN based on those beliefs, ultimately affecting which maps get ranked. Also, using past DQ discussions and veto mediation outcomes as precedents also seems strange considering how much the NAT has insisted that previous ranked maps shouldn't be used as precedents and how each map should be judged in a vacuum.


achyoo wrote:

1.6) Yea I agree, but I would assume that evaluators are aware of this and do try to look at the big picture rather than focusing on mistakes. The new system should help, because the applicant should be judged holistically and not based on their modding mistakes anymore (due to the new decision making judgement portion). BTW, 3 fails can still result in a pass; the vote is not final, and whatever consensus comes out of group discussion is what counts, rather than the vote itself. I'm not saying it happens regularly, but I'm saying it is a possibility.
Again, the new application system is only different in theory so far. Whether the actual evaluation process is different in practice can't be said for sure yet, as we don't know what it actually looks like and how it's executed.


achyoo wrote:

1.7) Substitutions are generally never done on a whim; they only happen when a) an evaluation goes overdue OR b) one of the evaluators specifically says they can't do a certain one, in which case it's usually rerolled, not handpicked. Disclaimer that this is based on my tenure and I can't say with 100% certainty that it works this way now, but I think it's fair to assume they still do it this way.
From my experience a lot of evaluations go overdue so that doesn't seem like a rare occurrence. I don't know whether it's always rerolled or sometimes handpicked, but it would be good to have more transparency about this as well.


achyoo wrote:

1.8) I don't see why someone couldn't unlearn their skills in a few years though. A few months isn't even an argument, because a reapp after only a few months is only needed if they left on standard terms (which usually means they fucked up as a BN, in which case it's not that unlikely they fail??). Previous BN experience does play a role; most returning members prior to the instant rejoin button had massive leniency given to them. It also works the other way: former members that had a shaky tenure would have that held against them as well.
Until recently, BNs could only instantly rejoin up to 6 months after resigning, which is why I said "months", but even with the current 1-year time period, people don't just unlearn their skills like that. You might be a little rusty after not modding for a while, but after doing a few mods (which you have to do anyway in order to apply) you get back into it. Besides, not everyone who resigns from BN stops modding completely; many continue to mod every now and then. Yet there are still cases of failed applications from these former BNs. I'm not sure the "massive leniency" is actually a thing as you claim, or at least it's not applied to everyone equally.


achyoo wrote:

1.9) Let's be real, who on osu is specifically trained in assessing behaviour? Both NAT and GMT, as far as I know, have access to the same rulebook and guidelines, and most big behavioural issues go through GMT anyway. Unless you mean you want a higher-up position to verify everything, which is kind of ridiculous.
Yes, nobody is trained on assessing behaviour and it shows (including GMT). Having access to rulebooks and guidelines doesn't mean they are actually used and enforced properly. Of course an even higher position wouldn't solve the issue, but there should be more rigorous tests and checks in place to make sure NAT members are competent in this regard and held accountable for their actions. Given that BNs are not actually part of staff, I don't think it makes sense to police their attitude so strictly. Another contradiction is that BNs who take part in evaluations are also supposed to evaluate behavior while not being in a position to do so. So either BNs should be part of staff or not be expected to act how the NAT wants them to.


achyoo wrote:

1.10)
a) Evaluations can be public, just not from NAT side (always been the case even before the new update). The applicant is free to show their evaluation to anyone they want. Many don't though. NAT have no issues making evaluations public, but there are still BNs and applicants that would rather have the anonymity.

b) You can appeal, people just don't.
Yes, people could always post their evaluations publicly, but most of the time there was just no reason to do so. The problem is that even if they did, nothing would happen. There have been several cases of public outrage following certain BNs' removals, for example, but ultimately, even if most people agreed it wasn't a legitimate decision, there's nothing you could do about it.
You must be misinformed about appeals because for the longest time this was written at the bottom of every application: "The consensus of your evaluation is final and appeals will not be taken." It was recently removed from applications (even old ones) but without announcing or communicating this anywhere, so even if it's possible to do so now, people are obviously not aware of it. How does the process even work exactly if it's currently in place?


achyoo wrote:

1.11) I am cautiously optimistic about the new system, but speaking from past experience, most people fail because they just lack the prerequisite knowledge to even begin modding anyway. Like, you can't identify issues and give good solutions if you barely know mapping stuff. How is that going to fit into feedback? If I were to be harsh, the correct play is to "quit modding, learn how to map, then come back", but who wants to hear that?
Also you said "The provided reasoning is sometimes overly subjective and not supported by facts or evidence, as well as generally lacking helpful information on how to improve."
I find it hilariously ironic, because you touched on this point, yet the solutions you provided do nothing to improve it, nor did you support your own point with evidence.
Not everyone is at the level where they lack basic mapping and modding knowledge; I'd say most people have some skills but need to improve quite a bit, so the feedback you're describing only applies to some cases. Others would definitely benefit from feedback that is explained better.
I could provide countless examples as evidence, but I deliberately chose not to because I don't want the discussion to devolve into specific examples, since that's not the point of it. Now that a public evaluation archive has been made, though, it's easy to find some that show this.


achyoo wrote:

2.1 & 2.2) Consider ^ what I mentioned above; I just think most people are trying for BN before they're ready. There's just more people in standard that are overly eager to apply. BTW, when mock evaluations were a thing back then, the randomly rolled BN's opinions generally aligned with the NAT. During BN evaluators cycles, the BNs were generally even more strict than the NAT. So if you're wondering, it's a gamemode thing, not an individual thing. Discrepancy in pass rates across individuals can probably mostly be explained by just RNG. Also, like I mentioned before, the votes are NOT FINAL. Someone can pass BN app with 3 fails if group discussion goes positively. NATs can also sometimes overcompensate; when they see an applicant they think they other 2 will pass, they play devils advocate and try to point out the negatives to make group discussion phase more valuable. Similarly, if an NAT sees an applicant they think will be failed by the other evaluators, they can sometimes take a more positive outlook, similarly to make group phase more productive and less prone to narrow perspective. It happens sometimes, skews the vote a little. But again, the vote isn't final anyway.
Sure, a lot of people apply before they're ready. But if it's a gamemode thing as you said, why is this not the case as much in other modes? Do modders for those modes have access to more useful resources and guides? Is the community closer together and helping each other improve? Or are BN apps just evaluated differently or less strictly?
If by "Discrepancy in pass rates across individuals can probably mostly be explained by just RNG" you mean that some people might just happen to get better applicants than others, this is statistically not relevant because the dataset is quite large and contains up to several years worth of applications in some cases, meaning that RNG would be minimized by then. The only reason to explain this discrepancy is different people having vastly different standards which is obviously not good if evals are supposed to be fair.
Similarly, whether the votes are final or not doesn't really matter I think because they will even out across the hundreds of evaluations (as some can be "pass" for people who failed and vice versa like you said), so I believe the data is still valid, but I'd be happy to analyze the stats based on the actual final results if they're made available.
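To make the "RNG evens out" point a bit more concrete, here is a rough Python sketch of how much per-evaluator spread pure chance could produce, assuming every evaluation passes independently at the mode-wide standard rate of about 32%. The per-evaluator counts used below are hypothetical placeholders, not the actual numbers from the spreadsheet:

```python
# Rough illustration (hypothetical n values, not the actual spreadsheet data):
# the spread in individual pass rates that pure chance could produce if every
# evaluation passed independently with the mode-wide rate p.
import math

p = 0.3213  # osu! standard mode-wide pass rate from the stats above

for n in (25, 50, 100, 200):  # hypothetical number of evaluations per member
    se = math.sqrt(p * (1 - p) / n)        # standard error of a proportion
    lo, hi = p - 1.96 * se, p + 1.96 * se  # ~95% interval under pure chance
    print(f"n={n:>3}: chance alone keeps rates roughly between "
          f"{lo * 100:.1f}% and {hi * 100:.1f}%")
```

With around 100 or more evaluations per member, the observed 18%-46% spread sits well outside what chance alone would produce, whereas with only around 25 it would not; the per-evaluator counts in the spreadsheet are what settle this.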


achyoo wrote:

3.1 & 3.2) "recent modding history is looked at in general"
yea no. Either this means they are being evaluated purely on wording without looking at the maps (which fucking sucks), or the evaluators have to download the maps and look at them, which, even when not analyzed in a super detailed way (btw no one does that for evaluations, it's simply not feasible and takes too much time), is more workload than the current 3 map system.

new system already tries to mitigate the "look at modder, not mods" issue by trying to understand the modder's thought process so we can see how it works.

I agree with the practical application thing, kinda sad trial BN is gone tbh.
The maps would obviously still be looked at, otherwise important context would be missing. I do think the mods have been analyzed in quite some detail until now, in the sense that every single suggestion was looked at and sometimes minor things were pointed out in the BN app feedback. By "recent modding history is looked at in general" I don't mean that the evaluators should look at all the mods from the past 6 months, but simply at some of them. So if someone is looking at 3 maps, it's not more workload than before. If anything, it would be less work due to the less thorough check.


achyoo wrote:

4.2) Qualified QA and the like have been discussed for almost half a decade at this point; nothing you brought up is new. If a good, feasible, and effective solution had been brought up in the past 5 years, it would have already been implemented. Everyone is desperate for a working QA system, not just you lol
If everyone is desperate for it, why is nobody doing anything then, especially the people who are in a position to make systemic changes? You say there is no good, feasible and effective solution, so what are the problems? I'm aware that my suggestions are probably nothing new or groundbreaking, but I still don't think it's impossible to implement anything like that, it's just that nobody has been putting in the effort to make it work, and that's why I want to re-spark a discussion regarding this.


clayton wrote:

1.2 - 1.3 problems similar to this are found in every type of testing or interviewing culture, it sucks in some ways but I think it's difficult to legitimately evaluate someone's abilities without either a contrived setup or observation of real-world experience
Well, that's precisely why a system focused more on observation of real-world experience would be better, as I've laid out in my suggestions.


clayton wrote:

1.4 - 1.11 (okay I admit I skimmed it) I've never really been sympathetic toward complaints that these things are biased because I think a lot of personal weight from evaluators is part of a working system. how I envision the best form of this system anyway is that the NAT are experienced people who you can trust to moderate the mapping ecosystem & promote new people into its management, and it would only be a detriment if they are prevented from applying their judgement. obviously in practice not everyone is perfect and people want different things. but this style of top-down management actually works fine for me in terms of QC for a small team (despite my very vocal opposition to multiple parts of it, over the years...). I have ideas for other types of management that I feel could be worthwhile but they're less like "NAT should have a different process for xyz" and more like "start from nothing and refactor everything" lol so I will not say that here
What you said about top-down management doesn't really address any of the issues I pointed out. Sure, this type of management can work, but only under certain circumstances, which are just not present right now.


clayton wrote:

2 these are individual people applying their own standards across multiple not-very-related gamemodes so tbh I don't see the significance of basically any of this.
The significance is that it indicates how much evaluations differ from one another, which matters because the standards should be fair and affected by randomness as little as possible.


clayton wrote:

3.2 I wouldn't mind if this system was just what it meant to be a probationary BN. but having both this and probation seems somewhat redundant, they are obviously different but serve a similar purpose (hold back their nominating ability while they begin to perform BN duties)
Both could also just be combined into a single probation phase somehow; I only framed it as an additional trial period in order to prevent the idea from being shut down immediately due to an increased risk of unprepared modders becoming BNs.
achyoo
> You must be misinformed about appeals because for the longest time this was written at the bottom of every application: "The consensus of your evaluation is final and appeals will not be taken." It was recently removed from applications (even old ones) but without announcing or communicating this anywhere, so even if it's possible to do so now, people are obviously not aware of it. How does the process even work exactly if it's currently in place?

Yea that's why it was removed, to allow appeals (I was literally involved in the removal of that line lol). It must have been almost 2 years already and people have already tried to appeal.
Also, you aren't keeping up. There's a literal box on the eval page to send in any queries now. Idk what else needs to be done tbh, this is as low a barrier of entry as it gets.

> Until recently, BNs could only instantly rejoin up to 6 months after resigning which is why I said "months",

Similar to above, it must have been more than a year already (edit: been told the duration increase was implemented 4 months ago; the previous implementation was 6 months like u mentioned). Evidently the issue was recognized and a fix was implemented. I don't know why you keep bringing up the past when the issues you mentioned were already rectified.

> it's just that nobody has been putting in the effort to make it work,

you've probably just pissed off the people who have been trying so hard to make it work over the years behind the scenes lmao.

> for example most BN app feedback continuously emphasized the importance of overarching mods, especially about contrast, emphasis and song representation, no matter if those issues were actually present in the submitted maps or not.

exaggerating issues is also a very common reject reason so idk what you are talking about

everything else you wrote is pretty heavily based on your assumptions and experiences, and it does not feel in line with my experiences, so either you're extrapolating information wrongly from what you have, or you have access to information that I don't. (do you?)

all your suggestions essentially boil down to "lower the barrier of entry for BN", which is fair in the cases that were previously being judged too harshly (especially concerning things like wording, which I believe is an unfair barrier to non-native speakers of English). But beyond that, you are basically saying you do not agree with, and do not have faith in, the ability and judgement of the NAT to correctly pick out who is and isn't suitable to be a BN. In that case, reworking the application system does nothing, since the same people are in charge. You may as well petition to delete the usergroup.
Serizawa Haruki

achyoo wrote:

Serizawa Haruki wrote:

> You must be misinformed about appeals because for the longest time this was written at the bottom of every application: "The consensus of your evaluation is final and appeals will not be taken." It was recently removed from applications (even old ones) but without announcing or communicating this anywhere, so even if it's possible to do so now, people are obviously not aware of it. How does the process even work exactly if it's currently in place?
Yea that's why it was removed, to allow appeals (I was literally involved in the removal of that line lol). It must have been almost 2 years already and people have already tried to appeal.
Also, you aren't keeping up. There's a literal box on the eval page to send in any queries now. Idk what else needs to be done tbh, this is as low a barrier of entry as it gets.
I'm aware of the new communication method for questions about the feedback, but that has nothing to do with it, as it doesn't contain any information about appealing. And even if that text was removed from BN apps that long ago, why was nobody informed about the change? Again, can you explain how exactly the appeal process works? I've never heard of or seen anyone do this in the past few years.


achyoo wrote:

Serizawa Haruki wrote:

> Until recently, BNs could only instantly rejoin up to 6 months after resigning which is why I said "months",
Similar to above, it must have been more than a year already (edit: been told the duration increase was implemented 4 months ago; the previous implementation was 6 months like u mentioned). Evidently the issue was recognized and a fix was implemented. I don't know why you keep bringing up the past when the issues you mentioned were already rectified.
That wasn't the point; it was only an explanation of why I was talking about examples where people reapplied after less than a year. Either way, I don't get why you chose to respond to half a sentence that is not relevant to the point I'm making and ignored everything else. As I've said before, even with the 1-year grace period, the issue at hand remains: former BNs failing applications, sometimes despite continuing to mod during their break.


achyoo wrote:

Serizawa Haruki wrote:

> it's just that nobody has been putting in the effort to make it work,
you've probably just pissed off the people who have been trying so hard to make it work over the years behind the scenes lmao.
That was obviously not my intention; it's just not possible for me to know about something that is being done behind the scenes. If there are people working on these things, that's great to hear, but it might be better to involve the community at large, or at least keep them informed about what is going on. For example, the discussion in this thread was never followed up on by staff.


achyoo wrote:

Serizawa Haruki wrote:

> for example most BN app feedback continuously emphasized the importance of overarching mods, especially about contrast, emphasis and song representation, no matter if those issues were actually present in the submitted maps or not.
exaggerating issues is also a very common reject reason so idk what you are talking about
One has nothing to do with the other though. What I meant is not that modders were expected to point out overarching problems when there were none, but that they were expected to submit maps that had these specific issues. If none of the modded maps had those kinds of general problems, it would impact the outcome negatively.


achyoo wrote:

everything else you wrote is pretty heavily based on your assumptions and experiences, and it does not feel in line with my experiences, so either you're extrapolating information wrongly from what you have, or you have access to information that I don't. (do you?)
As stated at the very beginning of my post, these are absolutely not only my own experiences, but those of many users who participated in BN apps. The reason why it might not align with your experiences could be that you are viewing them from the position of an evaluator and not an applicant. I know that every evaluator used to be an applicant at some point, but for many this experience lies in the past and is easily overshadowed by what they saw and did in their role on the opposite end. Someone who has learned the required skills to succeed in becoming BN and who becomes familiar with the intricacies of the system might not always be able to perceive things with the same perspective as someone who hasn't.


achyoo wrote:

all your suggestions essentially boil down to "lower the barrier of entry for BN", which is fair in the cases that were previously being judged too harshly (especially concerning things like wording, which I believe is an unfair barrier to non-native speakers of English). But beyond that, you are basically saying you do not agree with, and do not have faith in, the ability and judgement of the NAT to correctly pick out who is and isn't suitable to be a BN. In that case, reworking the application system does nothing, since the same people are in charge. You may as well petition to delete the usergroup.
Yes, lowering the barrier of entry for BN (to a reasonable degree, without impacting the quality of the ranked section) is one of the goals, but certainly not the only one. Your statement "You may as well petition to delete the usergroup" is incredibly reductive and does not contribute to a meaningful discussion at all. There is nothing wrong with disagreeing with or questioning certain things about the current system. None of my critiques are meant as personal attacks or anything; it's simply an attempt to improve the situation. So I'd appreciate it if you could actually address my points with proper counterarguments instead of trying to dismiss valid concerns merely due to disagreement.
4lw fan

Nao Tomori wrote:

This is a fair concern. However, the point of this is to attempt to determine the applicant's quality standards and ability to evaluate maps in a vacuum. That objective is significantly impaired if the map has already been nominated. The requirement would only extend to maps not bubbled at the time of submission, not throughout the life of the application. As such, I view this as a necessary tradeoff to better accomplish the goals of the system.
Although I understand the sentiment, it might be quite frustrating/unfair for people to spend time modding a map, only for it to become ineligible for BN app usage because a BN nominated it before they wrote the application.

The rule could be made more lenient by changing it to "you have to have modded the map before the map's BNs modded it, and judge whether the map is ready to be nominated after your mods."

This would make it so that modded maps which were nominated by a BN afterwards are still usable for a BN application, while still making it possible to determine the applicant's quality standards and ability to evaluate maps in a vacuum.

Edit:
talked to nao in the taiko mapping server; they bring up valid points about how you could still rely on other BNs' judgement afterwards anyway, but I think my suggestion forces the applicant to judge whether their own mods made the map rankable, rather than the BNs' mods
(to clarify, I'm not saying the maps should never have been modded by a BN, just that you should have modded them before the map's nominators did.)
Topic Starter
Hivie
Everything in this proposal has been implemented now, so I'll be archiving this.

Don't hesitate to start another discussion about anything you'd want to see improved!