====== Distributional Semantics – A Practical Introduction (ESSLLI 2016 & 2018) ======

  * [[course:esslli2018:schedule|Course schedule & handouts]]
  * [[course:material|Software & data sets]]
  * [[course:bibliography|Bibliography & links]]
  
===== Course description =====
Distributional semantic models (DSMs) – also known as “word space” or “distributional similarity” models – are based on the assumption that the meaning of a word can (at least to a certain extent) be inferred from its usage, i.e. its distribution in text. Therefore, these models dynamically build semantic representations – in the form of high-dimensional vector spaces – through a statistical analysis of the contexts in which words occur. DSMs are a promising technique for solving the lexical acquisition bottleneck by unsupervised learning, and their distributed representation provides a cognitively plausible, robust and flexible architecture for the organisation and processing of semantic information.
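
To make the idea concrete, here is a minimal sketch in base R with invented co-occurrence counts (not real corpus data): target words that occur in similar contexts end up with similar row vectors, which a cosine similarity measure makes visible.

<code r>
# Toy co-occurrence counts (invented for illustration):
# rows = target words, columns = context words observed nearby in a corpus
M <- matrix(c(10, 0,  8,  1,
               8, 1, 10,  0,
               0, 9,  1, 10),
            nrow = 3, byrow = TRUE,
            dimnames = list(c("cat", "dog", "car"),
                            c("pet", "drive", "fur", "road")))

# Cosine similarity between row vectors: similar usage -> similar meaning
cosine <- function(x, y) sum(x * y) / sqrt(sum(x^2) * sum(y^2))
cosine(M["cat", ], M["dog", ])  # ~0.97: shared contexts
cosine(M["cat", ], M["car", ])  # ~0.10: different contexts
</code>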
  
This course aims to equip participants with the background knowledge and skills needed to build different kinds of DSM representations – from traditional “count” models to neural word embeddings – and apply them to a wide range of tasks. It is accompanied by practical exercises with the user-friendly [[http://www.r-project.org/|R]] software package [[http://wordspace.r-forge.r-project.org/|wordspace]] and various pre-built models.
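
The sketch below illustrates the kind of workflow covered in the exercises, using the small example model ''DSM_TermTerm'' bundled with wordspace; the scoring parameters shown are just one sensible configuration, not prescribed course settings.

<code r>
library(wordspace)  # install.packages("wordspace") if needed

# Weight the raw co-occurrence counts (simple log-likelihood scores,
# log transform) and normalise the row vectors
model <- dsm.score(DSM_TermTerm, score = "simple-ll",
                   transform = "log", normalize = TRUE)

# Cosine distances between all target words in the model
dist.matrix(model, method = "cosine")

# Distributionally most similar words to "cat" in this toy model
nearest.neighbours(model, "cat", n = 3)
</code>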