{"id":21502458,"url":"https://github.com/eagerai/kerastuner","last_synced_at":"2025-07-07T14:10:33.648Z","repository":{"id":42977702,"uuid":"231190401","full_name":"EagerAI/kerastuneR","owner":"EagerAI","description":"R interface to Keras Tuner","archived":false,"fork":false,"pushed_at":"2024-04-15T06:48:58.000Z","size":69465,"stargazers_count":34,"open_issues_count":1,"forks_count":6,"subscribers_count":3,"default_branch":"master","last_synced_at":"2025-07-05T19:21:42.679Z","etag":null,"topics":["hyperparameter-tuning","hypertuning","keras","keras-tuner","r","tensorflow","trial"],"latest_commit_sha":null,"homepage":"https://eagerai.github.io/kerastuneR/","language":"R","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/EagerAI.png","metadata":{"files":{"readme":"README.md","changelog":"NEWS.md","contributing":null,"funding":null,"license":null,"code_of_conduct":"CODE_OF_CONDUCT.md","threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2020-01-01T08:05:14.000Z","updated_at":"2025-02-02T19:22:16.000Z","dependencies_parsed_at":"2024-11-15T03:45:17.658Z","dependency_job_id":"86fe667e-dd1f-43dc-8269-90157814f73e","html_url":"https://github.com/EagerAI/kerastuneR","commit_stats":{"total_commits":296,"total_committers":3,"mean_commits":98.66666666666667,"dds":0.1216216216216216,"last_synced_commit":"c98ade562d30b7d774d83d0e1f894712e8d4a9ec"},"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"purl":"pkg:github/EagerAI/kerastuneR","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/EagerAI%2FkerastuneR","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/EagerAI%2FkerastuneR/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/EagerAI%2FkerastuneR/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/EagerAI%2FkerastuneR/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/EagerAI","download_url":"https://codeload.github.com/EagerAI/kerastuneR/tar.gz/refs/heads/master","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/EagerAI%2FkerastuneR/sbom","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":264089672,"owners_count":23555785,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["hyperparameter-tuning","hypertuning","keras","keras-tuner","r","tensorflow","trial"],"created_at":"2024-11-23T18:15:03.304Z","updated_at":"2025-07-07T14:10:33.619Z","avatar_url":"https://github.com/EagerAI.png","language":"R","readme":"## R interface to Keras Tuner\n\nThe kerastuneR package provides R wrappers to [Keras Tuner](https://keras-team.github.io/keras-tuner/).\n\nKeras Tuner is a hypertuning framework made for humans. 
It aims to make the life of AI practitioners, hyperparameter-tuning algorithm creators, and model designers as simple as possible by providing them with a clean and easy-to-use API for hypertuning. Keras Tuner makes moving from a base model to a hypertuned one quick and easy, requiring you to change only a few lines of code.

<img src="images/kerastuneR.png" width=200 align=right style="margin-left: 15px;" alt="Keras Tuner"/>

[![Actions Status](https://github.com/eagerai/kerastuneR/workflows/KT_stable/badge.svg)](https://github.com/eagerai/kerastuneR)
[![CRAN](https://www.r-pkg.org/badges/version/kerastuneR?color=green)](https://cran.r-project.org/package=kerastuneR)
<br>
[![Last month downloads](http://cranlogs.r-pkg.org/badges/last-month/kerastuneR?color=green)](https://cran.r-project.org/package=kerastuneR)
<br>
[![Last commit](https://img.shields.io/github/last-commit/eagerai/kerastuneR.svg)](https://github.com/eagerai/kerastuneR/commits/master)


A hyperparameter tuner for [Keras](https://keras.io/), specifically for ```tf$keras``` with *TensorFlow 2.0*.

Full documentation and tutorials are available on the [Keras Tuner website](https://eagerai.github.io/kerastuneR/).

## Installation

Requirements:

- Python 3.9
- TensorFlow 2.0.x

```kerastuneR``` can be installed from CRAN:

```
install.packages('kerastuneR')
```

The development version:

```
devtools::install_github('eagerai/kerastuneR')
```

Then, install the underlying Python module Keras Tuner:

```
kerastuneR::install_kerastuner()
```

## Usage: the basics

Here's how to perform hyperparameter tuning for a single-layer dense neural network using random search.

First, we define a model-building function. It takes an argument ```hp``` from which you can sample hyperparameters, such as ```hp$Int('units', min_value = 32, max_value = 512, step = 32)``` (an integer from a certain range).

Sample data:

```
library(magrittr)

x_data <- matrix(data = runif(500, 0, 1), nrow = 50, ncol = 5)
y_data <- ifelse(runif(50, 0, 1) > 0.6, 1L, 0L) %>% as.matrix()

x_data2 <- matrix(data = runif(500, 0, 1), nrow = 50, ncol = 5)
y_data2 <- ifelse(runif(50, 0, 1) > 0.6, 1L, 0L) %>% as.matrix()
```

This function returns a compiled model.

```
library(keras3)
library(tensorflow)
library(kerastuneR)

build_model = function(hp) {
  model = keras_model_sequential()
  model %>%
    layer_dense(units = hp$Int('units',
                               min_value = 32,
                               max_value = 512,
                               step = 32),
                input_shape = ncol(x_data),
                activation = 'relu') %>%
    # single-unit binary output: use sigmoid (not softmax) with binary_crossentropy
    layer_dense(units = 1, activation = 'sigmoid') %>%
    compile(
      optimizer = tf$keras$optimizers$Adam(
        hp$Choice('learning_rate',
                  values = c(1e-2, 1e-3, 1e-4))),
      loss = 'binary_crossentropy',
      metrics = 'accuracy')
  return(model)
}
```
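Besides ```hp$Int``` and ```hp$Choice```, Keras Tuner also provides ```hp$Float``` for sampling from a continuous range. A minimal sketch under the same setup (```build_model2``` is a hypothetical variant of the function above; ```hp$Float``` is Keras Tuner's Python API, assumed to be reachable through the same ```hp``` object via reticulate):

```
# hypothetical variant of build_model: additionally tunes a continuous dropout rate
build_model2 = function(hp) {
  model = keras_model_sequential()
  model %>%
    layer_dense(units = hp$Int('units', min_value = 32, max_value = 512, step = 32),
                input_shape = ncol(x_data), activation = 'relu') %>%
    # sample a continuous dropout rate between 0 and 0.5
    layer_dropout(rate = hp$Float('dropout', min_value = 0.0, max_value = 0.5)) %>%
    layer_dense(units = 1, activation = 'sigmoid') %>%
    compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = 'accuracy')
  return(model)
}
```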
Next, instantiate a tuner. You should specify the model-building function, the name of the objective to optimize (whether to minimize or maximize is automatically inferred for built-in metrics), the total number of trials to test (```max_trials```), and the number of models that should be built and fit for each trial (```executions_per_trial```).

Available tuners are ```RandomSearch``` and ```Hyperband```.

> Note: the purpose of having multiple executions per trial is to reduce variance in the results and therefore assess the performance of a model more accurately. If you want results faster, you can set ```executions_per_trial = 1``` (a single round of training for each model configuration).

```
tuner = RandomSearch(
    build_model,
    objective = 'val_accuracy',
    max_trials = 5,
    executions_per_trial = 3,
    directory = 'my_dir',
    project_name = 'helloworld')
```

You can print a summary of the search space:

```
tuner %>% search_summary()
```

Then, start the search for the best hyperparameter configuration. The search call has the same signature as ```model %>% fit()```, but instead of ```fit()``` we call ```fit_tuner()```.

```
tuner %>% fit_tuner(x_data, y_data,
                    epochs = 5,
                    validation_data = list(x_data2, y_data2))
```
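Once the search finishes, you can pull the best models back out with ```get_best_models()``` (the same helper used for plotting below) and, for instance, evaluate the top model on the hold-out data. A minimal sketch, reusing the sample data defined above:

```
# retrieve the single best model found by the search (returns a list)
best_model = (tuner %>% get_best_models(1))[[1]]

# evaluate it on the hold-out data
best_model %>% evaluate(x_data2, y_data2)
```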
### Plot results

The function ```plot_tuner``` lets the user plot the search results; for this purpose we used the parallel coordinates plot from ```plotly```. The function also returns a data.frame of the results.

```
result = kerastuneR::plot_tuner(tuner)
# the list contains the plot and the data.frame of tuning results
result
```

<img src="images/tuner.gif" width=900 align=center style="margin-left: 15px;" alt="Keras Tuner plot"/>

### Plot Keras model

First, extract the list of tuned models, then use the function ```plot_keras_model``` to plot the model architecture.

```
best_5_models = tuner %>% get_best_models(5)
best_5_models[[1]] %>% plot_keras_model()
```

<p align="center">
  <img src="images/model.png" height=380 alt="Keras model">
</p>

## You can easily restrict the search space to just a few parameters

If you have an existing hypermodel and you want to search over only a few parameters (such as the learning rate), you can do so by passing a ```hyperparameters``` argument to the tuner constructor, as well as ```tune_new_entries = FALSE``` to specify that parameters you didn't list in ```hyperparameters``` should not be tuned. For these parameters, the default value is used.

```
library(keras)
library(tensorflow)
library(kerastuneR)
library(magrittr)

mnist_data = dataset_fashion_mnist()
c(mnist_train, mnist_test) %<-% mnist_data
rm(mnist_data)

mnist_train$x = tf$dtypes$cast(mnist_train$x, 'float32') / 255.
mnist_test$x = tf$dtypes$cast(mnist_test$x, 'float32') / 255.

mnist_train$x = keras::k_reshape(mnist_train$x, shape = c(6e4, 28, 28))
mnist_test$x = keras::k_reshape(mnist_test$x, shape = c(1e4, 28, 28))


hp = HyperParameters()
hp$Choice('learning_rate', c(1e-1, 1e-3))
hp$Int('num_layers', 2L, 20L)


mnist_model = function(hp) {
  model = keras_model_sequential() %>%
    layer_flatten(input_shape = c(28, 28))
  # add the tuned number of hidden layers ...
  for (i in 1:(hp$get('num_layers'))) {
    model %>% layer_dense(32, activation = 'relu')
  }
  # ... then a single output layer, and compile
  model %>%
    layer_dense(units = 10, activation = 'softmax') %>%
    compile(
      optimizer = tf$keras$optimizers$Adam(hp$get('learning_rate')),
      loss = 'sparse_categorical_crossentropy',
      metrics = 'accuracy')
  return(model)
}


tuner = RandomSearch(
  hypermodel = mnist_model,
  max_trials = 5,
  hyperparameters = hp,
  # parameters not listed in `hp` keep their default values
  tune_new_entries = FALSE,
  objective = 'val_accuracy',
  directory = 'dir_1',
  project_name = 'mnist_space')

tuner %>% fit_tuner(x = mnist_train$x,
                    y = mnist_train$y,
                    epochs = 5,
                    validation_data = list(mnist_test$x, mnist_test$y))
```

## You can use a HyperModel subclass instead of a model-building function

This makes it easy to share and reuse hypermodels.

A ```HyperModel``` subclass only needs to implement a ```build(self, hp)``` method.

```
library(keras)
library(tensorflow)
library(magrittr)
library(kerastuneR)

x_data <- matrix(data = runif(500, 0, 1), nrow = 50, ncol = 5)
y_data <- ifelse(runif(50, 0, 1) > 0.6, 1L, 0L) %>% as.matrix()

x_data2 <- matrix(data = runif(500, 0, 1), nrow = 50, ncol = 5)
y_data2 <- ifelse(runif(50, 0, 1) > 0.6, 1L, 0L) %>% as.matrix()


HyperModel <- reticulate::PyClass(
  'HyperModel',
  inherit = kerastuneR::HyperModel_class(),
  list(

    `__init__` = function(self, num_classes) {
      self$num_classes = num_classes
      NULL
    },
    build = function(self, hp) {
      model = keras_model_sequential()
      model %>%
        layer_dense(units = hp$Int('units',
                                   min_value = 32,
                                   max_value = 512,
                                   step = 32),
                    input_shape = ncol(x_data),
                    activation = 'relu') %>%
        layer_dense(as.integer(self$num_classes), activation = 'softmax') %>%
        compile(
          optimizer = tf$keras$optimizers$Adam(
            hp$Choice('learning_rate',
                      values = c(1e-2, 1e-3, 1e-4))),
          loss = 'sparse_categorical_crossentropy',
          metrics = 'accuracy')
      # return the compiled model
      model
    }
  )
)

hypermodel = HyperModel(num_classes = 10)

tuner = RandomSearch(hypermodel = hypermodel,
                     objective = 'val_accuracy',
                     max_trials = 2,
                     executions_per_trial = 1,
                     directory = 'my_dir5',
                     project_name = 'helloworld')
```
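As in the earlier examples, the search for the subclassed hypermodel is then launched with ```fit_tuner()```. A short sketch reusing the sample data defined just above:

```
# run the search for the subclassed hypermodel
tuner %>% fit_tuner(x_data, y_data,
                    epochs = 5,
                    validation_data = list(x_data2, y_data2))
```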
More tutorials can be found at https://eagerai.github.io/kerastuneR/.