sudo apt-get install subversion
svn checkout http://airhead-research.googlecode.com/svn/trunk/sspace sspace-read-only
ant
Ant is part of the Apache project and is used to build Java libraries. It automatically detects the file build.xml and builds from it. I explained here how to install Ant.
— Eduardo Aponte 2010/11/16 10:38
java -jar /net/data/CL/projects/wordspace/software_tests/sPackage/sspace-read-only/bin/lsa.jar -dwp500_articles_hw.latin1.txt.gz -X200 -t10 -v -n100 results/firstTry.sspace
This command should read 200 documents from the corpus (the first 200 lines), using 10 threads (I am not sure how the work is distributed), and perform SVD (the default algorithm) with 100 dimensions. I did not measure memory usage, but I enabled verbose terminal output.
FINE: Processed all 200 documents in 0.271 total seconds
Nov 21, 2010 12:44:10 PM edu.ucla.sspace.lsa.LatentSemanticAnalysis processSpace
INFO: reducing to 100 dimensions
Nov 21, 2010 12:44:10 PM edu.ucla.sspace.matrix.MatrixIO matlabToSvdlibcSparseBinary
INFO: Converting from Matlab double values to SVDLIBC float values; possible loss of precision
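Since the corpus stores one document per line, the -X200 limit above corresponds to its first 200 lines. A quick sanity check of that assumption, demonstrated here on a toy gzipped corpus (substitute wp500_articles_hw.latin1.txt.gz for a real run):

```shell
# Build a toy corpus of 500 one-line "documents", then count how many
# documents a 200-line limit would actually read.
seq 1 500 | gzip > toy_corpus.txt.gz
zcat toy_corpus.txt.gz | head -n 200 | wc -l   # -> 200
```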
I performed a number of trials with LSA. These trials were intended to measure the time and memory consumed by the different algorithms compatible with the LSA implementation. Available from the command line are:
I did not review the implementations in depth and instead started testing every algorithm. As previous results showed, the default algorithm (SVDLIBC) generated strange results: the angular distance between vectors was extremely low among close neighbors. Two possible reasons were identified. Either the number of dimensions was too low relative to the number of documents, in which case performing SVD would collapse the distances and create an extremely dense vector space; or there was a bug in the implementation. My supposition is that, since the implementation requires a pipeline between the internal format of the argument and SVDLIBC, a loss of precision caused the problem. If that were the case, selecting Matlab for the SVD should solve the problem (because the Matlab format and the internal format of LSA are identical).
I performed a test with 30,000 documents and 200 dimensions with SVDLIBC, MATLAB, OCTAVE and COLT. The results were partly disappointing because, with the exception of MATLAB, all other algorithms ran out of memory (in particular, the pipeline between LSA and the SVD algorithm ran out of heap memory).
Visual inspection suggests that the problems regarding the density of the vector space are solved by using MATLAB as the default algorithm.
Finally, I compared the scalability of Random Indexing and LSA (using SVDLIBC with 100 dimensions):
It is clear that LSA can hardly handle large corpora. Although the results for Random Indexing are different, they suggest a similar conclusion.
I wrote a simple script that automatically documents the results of every experiment. It can be found under the name "myScript.sh" in the corpora directory. The results are written to the directory statistics. A Python script automatically generates a Graphviz representation in the directory vizImages. Since the output is intended to be used with twopi, the files carry that extension.
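A minimal sketch of such a logger (an assumed shape; the real myScript.sh may differ): it runs an experiment command, times it, and appends a record to the statistics directory.

```shell
# Hypothetical experiment logger: time a command and log it.
mkdir -p statistics
cmd="sleep 1"                      # placeholder for the real lsa.jar call
start=$(date +%s)
sh -c "$cmd"
end=$(date +%s)
echo "$(date -u +%F) | $cmd | $((end - start))s" >> statistics/experiments.log
```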