{"id":17191183,"url":"https://github.com/dfm/arxiv-analysis","last_synced_at":"2025-04-13T19:50:34.761Z","repository":{"id":2132979,"uuid":"3076301","full_name":"dfm/arxiv-analysis","owner":"dfm","description":null,"archived":false,"fork":false,"pushed_at":"2020-06-12T18:15:43.000Z","size":63,"stargazers_count":20,"open_issues_count":0,"forks_count":5,"subscribers_count":3,"default_branch":"main","last_synced_at":"2025-03-27T10:21:38.944Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":"biopython/biopython","license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/dfm.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null}},"created_at":"2011-12-30T19:51:24.000Z","updated_at":"2022-04-20T15:56:51.000Z","dependencies_parsed_at":"2022-08-20T08:41:13.390Z","dependency_job_id":null,"html_url":"https://github.com/dfm/arxiv-analysis","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/dfm%2Farxiv-analysis","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/dfm%2Farxiv-analysis/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/dfm%2Farxiv-analysis/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/dfm%2Farxiv-analysis/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/dfm","download_url":"https://codeload.github.com/dfm/arxiv-analysis/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":248773682,"owners_count":21159516,"icon_url":"https://github.co
m/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-10-15T01:25:07.975Z","updated_at":"2025-04-13T19:50:34.741Z","avatar_url":"https://github.com/dfm.png","language":"Python","funding_links":[],"categories":[],"sub_categories":[],"readme":"# ArXiv analysis\n\nRun [online variational LDA](http://arxiv.org/abs/1206.7051v1) on all the\nabstracts from the arXiv. The implementation is based on [Matt Hoffman's\nGPL licensed code](http://www.cs.princeton.edu/~mdhoffma/).\n\n## Usage\n\nYou'll need a [`mongod`](http://www.mongodb.org/) instance running on\nthe port given by the environment variable `MONGO_PORT` and a\n[`redis-server`](http://redis.io/) instance running on the port given by\nthe `REDIS_PORT` environment variable.\n\nThe code depends on the Python packages: `numpy`, `scipy`, `requests`,\n`pymongo` and `redis`.\n\n* `mkdir abstracts`\n* `./analysis.py scrape abstracts` — scrapes all the metadata from the arXiv\n  [OAI interface](http://arxiv.org/help/oa/index) and saves the raw XML\n  responses as `abstracts/raw-*.xml`. This takes a _long time_ because of\n  the arXiv's flow control policies. 
It took me approximately 6 hours.\n* `./analysis.py parse abstracts/raw-*.xml` — parses the raw responses and\n  saves the abstracts to a MongoDB database called `arxiv` in the collection\n  called `abstracts`.\n* `./analysis.py build-vocab` — counts all the words in the corpus,\n  removing anything with fewer than 3 characters and any stop words.\n* `./analysis.py get-vocab 100 5000 \u003e vocab.txt` — lists the vocabulary,\n  skipping the 100 most popular words and keeping 5000 words total.\n* `./analysis.py run vocab.txt` — runs online variational LDA by randomly\n  selecting articles from the database. The topic distributions are stored\n  in the `lambda-*.txt` files. This will run forever, so just kill it whenever\n  you feel like it.\n* `./analysis.py vocab.txt lambda-100.txt` — lists the topics and their most\n  common words at step 100.\n","project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fdfm%2Farxiv-analysis","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fdfm%2Farxiv-analysis","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fdfm%2Farxiv-analysis/lists"}