We operationalize the property generation task as follows.
  
We focus on the same set of 44 concepts used in the [[:data:esslli2008:concrete_nouns_categorization|concrete noun categorization task]].
  
For each target concept, we pick the top 10 properties from the McRae norms (ranked
match, and we ignore the lower ones (i.e., lower matches are not treated as
hits, but they do not contribute to the n-best count either).
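
The archive below contains the actual evaluation script; purely as an illustration of the matching rule just described, here is a minimal Python sketch. The data structures it assumes (a ranked list of generated properties per concept and a dict mapping each gold property to its set of accepted expansions) are hypothetical, not the script's real interface.

<code python>
def precision_at_n(generated, gold_expansions, n=10):
    """Precision of the top-n generated properties for one concept.

    generated:       list of generated properties, best first (assumed format).
    gold_expansions: dict mapping each gold property to the set of surface
                     forms (the property plus its expansions) that count as a match.
    Only the highest-ranked match for a gold property is a hit; lower-ranked
    matches are ignored and do not use up n-best slots.
    """
    matched_gold = set()   # gold properties already credited with a hit
    slots_used = 0         # generated properties counted toward the n-best list
    hits = 0
    for prop in generated:
        if slots_used == n:
            break
        # find the gold property (if any) whose expansion set contains prop
        gold = next((g for g, forms in gold_expansions.items() if prop in forms), None)
        if gold is not None and gold in matched_gold:
            # lower-ranked match for an already-matched gold property:
            # not a hit, and it does not occupy an n-best slot either
            continue
        slots_used += 1
        if gold is not None:
            matched_gold.add(gold)
            hits += 1
    return hits / n
</code>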
  
  
==== Gold standard and evaluation script ====
  
**NB: on March 7, we made a small correction to the property expansion file; if you downloaded the archive before this date, please download it again.**
  
This {{propgen.tar.gz|archive}} contains the gold standard (with property expansions as described above) and an evaluation script that computes average precision at various n-best thresholds.
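
As a rough picture of what "average precision at various n-best thresholds" means, the sketch below averages per-concept scores using the hypothetical ''precision_at_n'' helper from the earlier sketch; the threshold values are placeholders, and the script in the archive defines its own thresholds and output format.

<code python>
# Builds on the hypothetical precision_at_n() sketch above.
def average_precision(per_concept, thresholds=(10, 20, 30)):
    """per_concept: dict concept -> (generated_list, gold_expansions_dict).

    Returns the mean precision over all concepts at each n-best threshold.
    """
    averages = {}
    for n in thresholds:
        scores = [precision_at_n(gen, gold, n) for gen, gold in per_concept.values()]
        averages[n] = sum(scores) / len(scores)
    return averages
</code>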
  
Detailed information about the script can be accessed by running it with the ''-h'' option:
We provide this script to have a common benchmark when comparing models, but we also encourage you to explore the McRae et al. database for other possible ways to evaluate the models.
  