**Distributional Semantics – A Practical Introduction**
[[http://esslli2016.unibz.it/|{{ :course:esslli2018:esslli2016_logo_outline.png?150|ESSLLI ...}}]]
[[http://...]]
\\
//...//
  * [[course:...]]
  * [[course:...]]
  * [[course:...]]
===== Course description =====
Distributional semantic models (DSMs) – also known as “word space” or “distributional similarity” models – are based on the assumption that the meaning of a word can (at least to a certain extent) be inferred from its usage, i.e. its distribution in text. Therefore, these models dynamically build semantic representations – in the form of high-dimensional vector spaces – through a statistical analysis of the contexts in which words occur. DSMs are a promising technique for overcoming the lexical acquisition bottleneck by unsupervised learning, and their distributed representation provides a cognitively plausible, robust and flexible architecture for the organisation and processing of semantic information.
This course aims to equip participants with the background knowledge and skills needed to build different kinds of DSM representations – from traditional “count” models to neural word embeddings – and to apply them to a wide range of tasks. It is accompanied by practical exercises with the user-friendly [[http://...]].
**Lecturer:** ...