Also known as the “positive specific agreement” measure.
Details #
In our field you are most likely to see it as a measure of inter-rater agreement where the total number of possible objects to be rated is indeterminate, for example when a text can be broken into ratable segments in many different ways so two raters won't break the text into the same segments. My Rblog post, "F-measure: 'positive specific agreement' index", shows the details (don't worry, it's a very short post!) and also shows that Cohen's kappa converges on the F-measure as the number of possible objects to rate (segments, say) gets very large.
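Here is a minimal sketch of that convergence, not taken from the post itself: calling a the count of segments both raters mark, b and c the counts each rater marks alone, and d the count neither marks, the F-measure (positive specific agreement) is 2a / (2a + b + c), and Cohen's kappa computed from the same 2x2 table climbs towards that value as d grows. The function name and cell labels are just mine for illustration.

```r
fMeasureVsKappa <- function(a, b, c, d) {
  ## a = both raters positive, b and c = one rater only, d = both negative
  n <- a + b + c + d
  ## positive specific agreement, i.e. the F-measure
  fMeasure <- 2 * a / (2 * a + b + c)
  ## Cohen's kappa from observed and chance-expected agreement
  pObs <- (a + d) / n
  pExp <- ((a + b) * (a + c) + (b + d) * (c + d)) / n^2
  kappa <- (pObs - pExp) / (1 - pExp)
  c(fMeasure = fMeasure, kappa = kappa)
}

## with 40 jointly marked segments and 10 + 10 disagreements the F-measure is
## fixed at 0.8; kappa creeps up towards it as d (the count of segments that
## neither rater marked) grows
sapply(c(10, 100, 1000, 100000), function(d) fMeasureVsKappa(40, 10, 10, d))
```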
Try also #
- Agreement measures
- Inter-rater agreement/reliability
- Cohen’s kappa
Chapters #
Not covered in the OMbook.
Online resources #
- My Rblog post: F-measure: ‘positive specific agreement’ index
- I will try to create a Shiny app to compute the F-measure
Dates #
First created 18.ii.25.