As for the class distribution, 10\% of the examples are misogynistic, with the most prevalent subclasses being pejoratives and derogation (both at 4\%).
In their experiments, overall model performance was quite low. The best-performing model examined is BERT with class weighting (to account for the low proportion of positive examples). It achieves a precision of 0.38, a recall of 0.5, an F1 score of 0.43, and an accuracy of 89\%. In their error analysis, the authors find many false positives: mentions of women are classified as misogynistic even when they are not.
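Class weighting counteracts the skewed label distribution by scaling the loss contribution of the minority (misogynistic) class upward during fine-tuning. The following is a minimal sketch of this idea with a BERT sequence classifier; the inverse-frequency weights, model checkpoint, and toy inputs are illustrative assumptions, not the exact setup used by the authors.

\begin{verbatim}
# Sketch of class-weighted BERT fine-tuning for binary misogyny
# detection. Weights, checkpoint, and data are assumed for
# illustration, not taken from Guest et al. (2021).
import torch
from torch.nn import CrossEntropyLoss
from transformers import (AutoModelForSequenceClassification,
                          AutoTokenizer)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

# With roughly 10% positive examples, up-weight the positive class
# so errors on it contribute more to the loss (inverse-frequency
# weighting is one common choice, assumed here).
class_weights = torch.tensor([1.0 / 0.90, 1.0 / 0.10])
loss_fn = CrossEntropyLoss(weight=class_weights)

texts = ["an innocuous example post", "a hateful example post"]
labels = torch.tensor([0, 1])  # 0 = non-misogynistic, 1 = misogynistic

batch = tokenizer(texts, padding=True, truncation=True,
                  return_tensors="pt")
logits = model(**batch).logits
loss = loss_fn(logits, labels)  # weighted loss for backpropagation
loss.backward()
\end{verbatim}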
Ella Guest, Bertie Vidgen, Alexandros Mittos, Nishanth Sastry, Gareth Tyson, and Helen Margetts. 2021. An expert annotated dataset for the detection of online misogyny. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 1336–1350, Online. Association for Computational Linguistics.