Why another change to the BN app system?
The new system that lets people see each evaluator's individual perspective was a great achievement. However, it would be prudent to improve it by looking at how it has worked over time. When reading their feedback, applicants quite commonly run into contradictory comments between evaluators, or comments that come across more negatively than intended. Evaluators tend to write all over the place and lose the thread of the feedback, jumping straight to the consensus and their overall impression of the applicant.
Problems with the current system
(1) Inconsistent Feedback
Two evaluators reacting to the same applicant post:
- user 1: (post) "blanket statement that bursts starting on red ticks play better is highly subjective and doesn't make sense, as the most intuitive rhythm depends on the structure of the song"
- user 2: (post) "yes damn finally someone bringing up playability of overmaps"
(2) Ambiguous Feedback
I will not give examples here, since each point would need context. However, it is common for applicants to receive subjective comments and read them as a bigger deal than they are. The problem is not receiving subjective comments; the problem is evaluators not considering how the applicant will read them. If this already happens in regular evaluations, it is unrealistic to expect it not to happen with completely inexperienced candidates.
(3) Scattered Feedback
Some evaluators simply have no structure at all and lump everything together, making the feedback unnecessarily difficult to read. On top of that, some of them only cover one part of the feedback and skip another. It is painful to read, and depending on the case, learning anything from it becomes complicated.
A possible solution
(3) Usually evaluators follow a structure where they review maps 1, 2, and 3 and that's it. Polishing that structure would lead to something easier to follow, for example:
-- map 1 --
  - difficulty -
    judging posts
    missed issues
I don't mean that all evaluators must write in one particular way, or that this layout is the only correct one, but keep in mind that scattered feedback is difficult for applicants to read. With a consistent structure, it is much easier for the applicant to see how their posts were judged and what they missed.
(2) Fixing the second point could be fairly easy if evaluators marked their stance before they start writing. Seeing the position of an opinion up front gives direction to everything written after it. Otherwise, when everything is mixed together, the applicant can only take it all seriously or dismiss it entirely depending on whether the other evaluators mentioned the same thing, on the assumption that if they all mention it, it must be obvious and severe (although it doesn't work that way).
In more detail, here is how the points within the structure would work:
- judging posts: It would be a good idea to use (+) ∙ (+/-) ∙ (-) marks to improve the display and prevent misunderstandings. Evaluators can continue to write whatever they want, but their position on the matter is clear from the outset.
(+) positive (evaluator agrees)
(+/-) neutral (e.g. the evaluator agrees with the post, but part of it falters)
(-) negative (evaluator disagrees)
- missed issues: Including marks like (!) ∙ (!!!) in this section helps a lot to visualize the severity of the issues the applicant did not mention. Without them, the applicant may read something minor as a big deal, or vice versa.
(!) = missed issue, but not severe (e.g. the evaluator thinks something could be improved)
(!!!) = severe missed issue (e.g. a complete lack of contrast)
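Put together, a purely hypothetical evaluation following this layout might look like the sketch below (the difficulty name, comments, and issues are invented for illustration):

-- map 1 --
  - Insane -
    judging posts
      (+) good catch on the inconsistent 1/4 spacing
      (+/-) the point about the break placement is fair, but the suggested fix falters in the kiai
      (-) calling every extended slider unfitting here is highly subjective
    missed issues
      (!) the hitsounding feedback could have gone deeper
      (!!!) complete lack of contrast between the kiai and the verses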
Conclusion
Visual Example

The readability of feedback is important so that the applicant can truly use it to improve. It would also make it easier to visualize how fair the consensus is to the applicant, so that the applicant is not completely demotivated by thinking their consensus was pure RNG. If evaluators could include the positive/negative marks for the applicant's posts and the severity marks for unmentioned issues, it would be much more beneficial to applicants.