{"id":13932977,"url":"https://github.com/dclambert/Python-ELM","last_synced_at":"2025-07-19T16:32:21.902Z","repository":{"id":7248848,"uuid":"8559658","full_name":"dclambert/Python-ELM","owner":"dclambert","description":"Extreme Learning Machine implementation in Python","archived":true,"fork":false,"pushed_at":"2021-03-05T16:41:38.000Z","size":36,"stargazers_count":544,"open_issues_count":17,"forks_count":257,"subscribers_count":42,"default_branch":"master","last_synced_at":"2024-08-08T21:19:44.297Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":null,"language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"other","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/dclambert.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null}},"created_at":"2013-03-04T16:30:09.000Z","updated_at":"2024-07-27T05:12:11.000Z","dependencies_parsed_at":"2022-08-06T20:01:09.218Z","dependency_job_id":null,"html_url":"https://github.com/dclambert/Python-ELM","commit_stats":null,"previous_names":[],"tags_count":2,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/dclambert%2FPython-ELM","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/dclambert%2FPython-ELM/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/dclambert%2FPython-ELM/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/dclambert%2FPython-ELM/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/dclambert","download_url":"https://codeload.github.com/dclambert/Python-ELM/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":226643873,"owners_count":17662968,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-08-07T21:01:25.820Z","updated_at":"2024-11-26T23:30:50.913Z","avatar_url":"https://github.com/dclambert.png","language":"Python","funding_links":[],"categories":["Python","Machine Learning"],"sub_categories":["Extreme Learning Machine"],"readme":"Python-ELM v0.3\n===============\n\n__---\u003e ARCHIVED March 2021 \u003c---__\n\n###### This is an implementation of the [Extreme Learning Machine](http://www.extreme-learning-machines.org) [1][2] in Python, based on [scikit-learn](http://scikit-learn.org).\n\n###### From the abstract:\n\n\u003e It is clear that the learning speed of feedforward neural networks is in general far slower than required and it has been a major bottleneck in their applications for past decades. Two key reasons behind may be: 1) the slow gradient- based learning algorithms are extensively used to train neural networks, and 2) all the parameters of the networks are tuned iteratively by using such learning algorithms. 
Unlike these traditional implementations, this paper proposes a new learning algorithm called extreme learning machine (ELM) for single-hidden layer feedforward neural networks (SLFNs) which randomly chooses the input weights and analytically determines the output weights of SLFNs. In theory, this algorithm tends to provide the best generalization performance at extremely fast learning speed. The experimental results based on real-world benchmarking function approximation and classification problems including large complex applications show that the new algorithm can produce best generalization performance in some cases and can learn much faster than traditional popular learning algorithms for feedforward neural networks.\n\nIt's a work in progress, so things can/might/will change.\n\n__David C. Lambert__  \n__dcl [at] panix [dot] com__  \n\n__Copyright © 2013__  \n__License: Simple BSD__\n\nFiles\n-----\n#### __random_layer.py__\n\nContains the __RandomLayer__, __MLPRandomLayer__, __RBFRandomLayer__, and __GRBFRandomLayer__ classes.\n\nRandomLayer is a transformer that creates a feature mapping of the\ninputs that corresponds to a layer of hidden units with randomly\ngenerated components.\n\nThe transformed values are a specified function of input activations\nthat are a weighted combination of dot product (multilayer perceptron)\nand distance (RBF) activations:\n\n\t  input_activation = alpha * mlp_activation + (1-alpha) * rbf_activation\n\n\t  mlp_activation(x) = dot(x, weights) + bias\n\t  rbf_activation(x) = rbf_width * ||x - center||/radius\n\n_mlp_activation_ is the multi-layer perceptron input activation  \n\n_rbf_activation_ is the radial basis function input activation\n\n_alpha_ and _rbf_width_ are specified by the user\n\n_weights_ and _biases_ are taken from a normal distribution with\nmean 0 and standard deviation 1\n\n_centers_ are taken uniformly from the bounding hyperrectangle\nof the inputs, and\n\n\tradius = max(||x-c||)/sqrt(n_centers*2)\n\n(All random components can be supplied by the user by providing entries in the dictionary given as the _user_components_ parameter.)\n\nThe input activation is transformed by a transfer function that defaults\nto numpy.tanh if not specified, but can be any callable that returns an\narray of the same shape as its argument (the input activation array, of\nshape [n_samples, n_hidden]).\n\nTransfer functions provided are:\n\n*\tsine\n*\ttanh\n*\ttribas\n*\tinv_tribas\n*\tsigmoid\n*\thardlim\n*\tsoftlim\n*\tgaussian\n*\tmultiquadric\n*\tinv_multiquadric\n\nThe MLPRandomLayer and RBFRandomLayer classes are just wrappers around the RandomLayer class, with the _alpha_ mixing parameter set to 1.0 and 0.0 respectively (for 100% MLP input activation, or 100% RBF input activation).\n\nThe RandomLayer, MLPRandomLayer, and RBFRandomLayer classes can take a callable user-provided\ntransfer function, as sketched below.  
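A minimal sketch of passing a custom callable (the toy data and the _relu_ function below are illustrative only, and the _activation_func_ and _hidden_layer_ keyword names are assumptions; check the docstrings):\n\n```python\nimport numpy as np\nfrom sklearn.datasets import make_classification\n\nfrom random_layer import MLPRandomLayer\nfrom elm import GenELMClassifier\n\n# toy data for illustration\nX, y = make_classification(n_samples=200, random_state=0)\n\n# any callable that returns an array of the same shape as its\n# argument can serve as the transfer function\ndef relu(activations):\n    return np.maximum(activations, 0.0)\n\nhidden = MLPRandomLayer(n_hidden=50, activation_func=relu, random_state=0)\nclf = GenELMClassifier(hidden_layer=hidden)\nclf.fit(X, y)\nprint((clf.predict(X) == y).mean())\n```\n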
See the docstrings and the example IPython\nnotebook for details.\n\nThe GRBFRandomLayer implements the Generalized Radial Basis Function from [[3]](http://sci2s.ugr.es/keel/pdf/keel/articulo/2011-Neurocomputing1.pdf).\n\n#### __elm.py__\n\nContains the __ELMRegressor__, __ELMClassifier__, __GenELMRegressor__, and __GenELMClassifier__ classes.\n\nGenELMRegressor and GenELMClassifier both take *RandomLayer instances as part of their constructors, and an optional regressor (conforming to the sklearn API) for performing the fit (instead of the default linear fit using the pseudo-inverse from scipy.linalg.pinv2).\nGenELMClassifier is little more than a wrapper around GenELMRegressor that binarizes the target array before performing a regression, then unbinarizes the regressor's prediction to make its own predictions.\n\nThe ELMRegressor class is a wrapper around GenELMRegressor that uses a RandomLayer instance by default and exposes the RandomLayer parameters in the constructor.  ELMClassifier is similar for classification.\n\n#### __plot_elm_comparison.py__\n\nA small demo (based on scikit-learn's plot_classifier_comparison) that shows the decision functions of a couple of different instantiations of the GenELMClassifier on three different datasets.\n\n#### __elm_notebook.py__\n\nAn IPython notebook, illustrating several ways to use the __\*ELM\*__ and __\*RandomLayer__ classes.\n\nRequirements\n------------\n\nWritten using Python 2.7.3, numpy 1.6.1, scipy 0.10.1, scikit-learn 0.13.1, and ipython 0.12.1.\n\nReferences\n----------\n```\n[1] http://www.extreme-learning-machines.org\n\n[2] G.-B. Huang, Q.-Y. Zhu and C.-K. Siew, \"Extreme Learning Machine:\n    Theory and Applications\", Neurocomputing, vol. 70, pp. 489-501, 2006.\n\n[3] Fernandez-Navarro, et al., \"MELM-GRBF: a modified version of the\n    extreme learning machine for generalized radial basis function neural\n    networks\", Neurocomputing, vol. 74, pp. 2502-2510, 2011.\n```\n\n","project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fdclambert%2FPython-ELM","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fdclambert%2FPython-ELM","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fdclambert%2FPython-ELM/lists"}