
Processed metadata corrections (closes #4522)
mjpost committed Feb 3, 2025
1 parent 2f4c9f7 commit d4da537
Showing 1 changed file with 3 additions and 3 deletions.
6 changes: 3 additions & 3 deletions data/xml/2025.comedi.xml
@@ -156,10 +156,10 @@
 </paper>
 <paper id="15">
 <title>Disagreement in Metaphor Annotation of <fixed-case>M</fixed-case>exican <fixed-case>S</fixed-case>panish Science Tweets</title>
-<author><first>Alec M.</first><last>Sanchez-Montero</last></author>
+<author><first>Alec</first><last>Sánchez-Montero</last></author>
 <author><first>Gemma</first><last>Bel-Enguix</last></author>
-<author><first>Sergio Luis</first><last>Ojeda Trueba</last></author>
-<author><first>Gerardo</first><last>Sierra Martínez</last></author>
+<author><first>Sergio-Luis</first><last>Ojeda-Trueba</last></author>
+<author><first>Gerardo</first><last>Sierra</last></author>
 <pages>155–164</pages>
 <abstract>Traditional linguistic annotation methods often strive for a gold standard with hard labels as input for natural language processing models, assuming an underlying objective truth for all tasks. However, disagreement among annotators is a common scenario, even for seemingly objective linguistic tasks, and is particularly prominent in figurative language annotation, since multiple valid interpretations can sometimes coexist. This study presents the annotation process for identifying metaphorical tweets within a corpus of 3733 Public Communication of Science texts written in Mexican Spanish, emphasizing inter-annotator disagreement. Using Fleiss’ and Cohen’s Kappa alongside agreement percentages, we evaluated metaphorical language detection through binary classification in three situations: two subsets of the corpus labeled by three different non-expert annotators each, and a subset of disagreement tweets, identified in the non-expert annotation phase, re-labeled by three expert annotators. Our results suggest that expert annotation may improve agreement levels, but does not exclude disagreement, likely due to factors such as the relative novelty of the genre, the presence of multiple scientific topics, and the blending of specialized and non-specialized discourse. Going further, we propose adopting a learning-from-disagreement approach for capturing diverse annotation perspectives to enhance computational metaphor detection in Mexican Spanish.</abstract>
 <url hash="772ead3b">2025.comedi-1.15</url>
