Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/systats/textlearnR
A simple collection of well working NLP models (Keras, H2O, StarSpace) tuned and benchmarked on a variety of datasets.
classification hyperparameter-optimization keras nlp r text-mining
- Host: GitHub
- URL: https://github.com/systats/textlearnR
- Owner: systats
- Created: 2019-02-26T17:33:15.000Z (over 5 years ago)
- Default Branch: master
- Last Pushed: 2019-03-08T00:09:08.000Z (over 5 years ago)
- Last Synced: 2024-02-09T02:08:34.977Z (5 months ago)
- Topics: classification, hyperparameter-optimization, keras, nlp, r, text-mining
- Language: R
- Size: 73.5 MB
- Stars: 17
- Watchers: 3
- Forks: 1
- Open Issues: 0
Metadata Files:
- Readme: README.md
Lists
- awesome-stars - systats/textlearnR - A simple collection of well working NLP models (Keras, H2O, StarSpace) tuned and benchmarked on a variety of datasets. (R)
README

textlearnR
================

A simple collection of well-performing NLP models (Keras) in R, tuned and benchmarked on a variety of datasets. This is a work in progress; the first version supports classification tasks only.
What can this package do for you? (in the future)
-------------------------------------------------

Training neural networks can be tedious and time-consuming because of the sheer number of hyperparameters. Hyperparameters are values that are defined in advance and provided as additional model input. Tuning them requires either deeper knowledge of the model's behavior or the computational resources for random search or optimization over the parameter space. `textlearnR` provides a lightweight framework to train and compare ML models from Keras, H2O, StarSpace, and text2vec (coming soon). Furthermore, it lets you define parameters for text preprocessing (e.g. the maximum number of words and the maximum text length), which are also treated as priors.
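To make the idea of preprocessing priors concrete, here is a minimal base-R sketch (not textlearnR's actual API; the names `max_words` and `seq_len` are assumptions borrowed from common Keras conventions) that treats vocabulary size and sequence length like any other hyperparameter:

``` r
# Hypothetical example: text-preprocessing settings as hyperparameters.
text_priors <- list(
  max_words = 10000,  # vocabulary size: keep only the most frequent tokens
  seq_len   = 50      # pad or truncate every document to this many tokens
)

# A tiny base-R tokenizer that respects those priors.
tokenize <- function(texts, priors) {
  tokens <- strsplit(tolower(texts), "[^a-z']+")
  vocab  <- names(sort(table(unlist(tokens)), decreasing = TRUE))
  vocab  <- head(vocab, priors$max_words)
  lapply(tokens, function(tk) {
    ids <- match(tk, vocab, nomatch = 0L)  # 0 = out-of-vocabulary
    length(ids) <- priors$seq_len          # truncate, or pad with NA
    ifelse(is.na(ids), 0L, ids)            # NA padding -> 0, as in Keras
  })
}

seqs <- tokenize(c("deep models need data", "data beats models"), text_priors)
```

Changing either prior changes the shape of the data the downstream model sees, which is why they belong in the search space alongside the model's own hyperparameters.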
Besides language models, textlearnR also integrates third-party packages for automatically tuning hyperparameters. The following strategies will be available:
#### Searching
- Grid search
- Random search
- Sobol sequence (quasi-random numbers designed to cover the space more evenly than uniform random numbers); computationally expensive but parallelizable.

#### Optimization
- [`GA`](https://github.com/luca-scr/GA) Genetic algorithms for stochastic optimization (real-valued parameters only).
- [`mlrMBO`](https://github.com/mlr-org/mlrMBO) Bayesian and model-based optimization.
- Others:
  - Nelder–Mead simplex (gradient-free)
  - Particle swarm (gradient-free)

For constructing new parameter objects the tidy way, the package `dials` is used. Each optimized model is saved to a SQLite database in `data/model_dump.db`. Of course, everything is committed to [tidy principles](https://cran.r-project.org/package=tidyverse/vignettes/manifesto.html). Contributions are highly welcome!
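To illustrate the two simplest searching strategies above, here is a base-R sketch of grid search versus random search over a two-parameter space. The parameter names (`embed_dim`, `dropout`) and the toy objective are invented for illustration; textlearnR's real interface may differ:

``` r
set.seed(42)

# Toy objective: pretend lower "loss" is better. Purely illustrative.
loss <- function(embed_dim, dropout) {
  (embed_dim - 96)^2 / 1e4 + (dropout - 0.3)^2
}

# Grid search: evaluate every combination of a fixed grid.
grid <- expand.grid(
  embed_dim = c(32, 64, 128, 256),
  dropout   = c(0.1, 0.3, 0.5)
)
grid$loss <- mapply(loss, grid$embed_dim, grid$dropout)
best_grid <- grid[which.min(grid$loss), ]

# Random search: sample the same number of points from the space.
n <- nrow(grid)
rand <- data.frame(
  embed_dim = sample(16:512, n, replace = TRUE),
  dropout   = runif(n, 0, 0.6)
)
rand$loss <- mapply(loss, rand$embed_dim, rand$dropout)
best_rand <- rand[which.min(rand$loss), ]
```

Grid search is exhaustive on a coarse lattice; random search spends the same budget on points that cover continuous ranges, which often finds better values when only a few parameters matter.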
Supervised Models
-----------------

[model overview](https://becominghuman.ai/cheat-sheets-for-ai-neural-networks-machine-learning-deep-learning-big-data-678c51b4b463)
``` r
keras_model <- list(
simple_mlp = textlearnR::keras_simple_mlp,
deep_mlp = textlearnR::keras_deep_mlp,
simple_lstm = textlearnR::keras_simple_lstm,
#deep_lstm = textlearnR::keras_deep_lstm,
pooled_gru = textlearnR::keras_pooled_gru,
cnn_lstm = textlearnR::keras_cnn_lstm,
cnn_gru = textlearnR::keras_cnn_gru,
gru_cnn = textlearnR::keras_gru_cnn,
multi_cnn = textlearnR::keras_multi_cnn
)
```

Datasets
--------

- [celebrity-faceoff](https://github.com/jlacko/celebrity-faceoff)
- [Google Jigsaw Toxic Comment Classification](https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge/data)
- [Hate speech detection](https://github.com/t-davidson/hate-speech-and-offensive-language)
- [nlp-datasets](https://github.com/niderhoff/nlp-datasets)
- Scopus Classification
- party affiliations

Understand one model
--------------------

``` r
textlearnR::keras_simple_mlp(
input_dim = 10000,
embed_dim = 128,
seq_len = 50,
output_dim = 1
) %>%
summary
```

```
## ___________________________________________________________________________
## Layer (type)                     Output Shape                  Param #
## ===========================================================================
## embedding_1 (Embedding)          (None, 50, 128)               1280000
## ___________________________________________________________________________
## flatten_1 (Flatten)              (None, 6400)                  0
## ___________________________________________________________________________
## dense_1 (Dense)                  (None, 128)                   819328
## ___________________________________________________________________________
## dropout_1 (Dropout)              (None, 128)                   0
## ___________________________________________________________________________
## dense_2 (Dense)                  (None, 1)                     129
## ===========================================================================
## Total params: 2,099,457
## Trainable params: 2,099,457
## Non-trainable params: 0
## ___________________________________________________________________________
```

- rather flowchart or ggalluvial
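The parameter counts in the summary above can be verified by hand. A quick base-R check, using the layer sizes from the `keras_simple_mlp` call (flatten and dropout layers contribute no parameters):

``` r
input_dim  <- 10000  # vocabulary size
embed_dim  <- 128    # embedding vector length
seq_len    <- 50     # tokens per document
output_dim <- 1      # single sigmoid output

embedding <- input_dim * embed_dim         # one vector per vocabulary entry
flattened <- seq_len * embed_dim           # 50 * 128 = 6400 units
dense_1   <- flattened * 128 + 128         # weights + biases
dense_2   <- 128 * output_dim + output_dim
total     <- embedding + dense_1 + dense_2
```

This reproduces the 2,099,457 total parameters reported by `summary`, with the embedding layer alone accounting for well over half of them.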