Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/brade31919/sequential-minimal-optimization
This is a Matlab implementation of SMO based on the original work by John C. Platt
- Host: GitHub
- URL: https://github.com/brade31919/sequential-minimal-optimization
- Owner: brade31919
- Created: 2017-04-04T18:59:46.000Z (over 7 years ago)
- Default Branch: master
- Last Pushed: 2015-09-19T18:27:59.000Z (about 9 years ago)
- Last Synced: 2023-03-10T14:42:39.276Z (over 1 year ago)
- Language: Matlab
- Size: 141 KB
- Stars: 0
- Watchers: 2
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
# Sequential-minimal-optimization
=========================
Author: Ehsan Zahedinejad [email protected]
=========================

Title:
=========================
Sequential-minimal-optimization: a Matlab implementation of sequential minimal optimization (SMO) based on the original work of
John C. Platt, 2000. See the publication for more detail: http://research.microsoft.com/pubs/68391/smo-book.pdf

Benchmarks:
=========================
I have tested the code on the UCI Ionosphere Data Set, with a training error of less than 0.3% and a confidence of more than 96% on the validation data. The training size was 200 and the validation size was 151.

Function description:
============================================
The main function is SMO(X,Y,eps,tol,type,ul,Pdata,Sigma).

Input arguments for SMO function
===========================================
1. X = the feature matrix (2d array)
2. Y = the target vector (1d array)
3. eps = convergence criterion: the run exits when the change in the Lagrange multipliers between two consecutive iterations falls below this value. Example: eps = 0.001 (scalar)
4. tol = tolerance within which a Lagrange multiplier is snapped to zero or to the upper limit C. A good default is 0.001 (scalar)
5. type = the kernel function to use. Currently the linear 'L' and Gaussian 'G' kernels are supported (char)
6. ul = lower and upper bounds for the Lagrange multipliers. Example: [0, 1]. If the upper bound is infinity, write [0 Inf] (1d array)
7. Pdata = percentage of the data used for training; the rest is used for validation (scalar, in percent)
8. Sigma = Gaussian kernel variance. For the linear kernel, write zero.

Output arguments for SMO function
===========================================
1. [alpha,b,TS] = SMO(X,Y,eps,tol,type,ul,Pdata,Sigma)
2. alpha = array of Lagrange multipliers (1d array)
3. b = threshold (bias) value (scalar)
4. TS = training size
5. Training error = printed by SMO when a run completes
6. Confidence of hypothesis = printed by SMO when a run completes
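
As a sketch of how the pieces fit together, the call below trains on Ionosphere-style data with a Gaussian kernel and then evaluates the SVM decision function by hand. Only SMO and its signature come from the description above; the variable `data`, the chosen parameter values, the assumption that alpha is a column vector of length TS, and the sign convention in the decision function are all assumptions for illustration.

```matlab
% Hypothetical usage sketch -- assumes a matrix `data` whose last
% column holds +/-1 labels, as in the UCI Ionosphere set.
X = data(:, 1:end-1);        % feature matrix (2d array)
Y = data(:, end);            % target vector of +/-1 labels

eps   = 0.001;               % convergence criterion
tol   = 0.001;               % snapping tolerance for multipliers
type  = 'G';                 % Gaussian kernel ('L' for linear)
ul    = [0 1];               % [lower, upper] bounds on multipliers
Pdata = 57;                  % ~200 of 351 samples used for training
Sigma = 1;                   % Gaussian kernel variance (assumed value)

[alpha, b, TS] = SMO(X, Y, eps, tol, type, ul, Pdata, Sigma);

% Standard SVM decision function for a new point x:
%   f(x) = sum_i alpha_i * y_i * K(x_i, x) + b
% (sign convention and alpha orientation assumed).
x = X(1, :);
K = exp(-sum((X(1:TS,:) - x).^2, 2) / (2 * Sigma));  % Gaussian kernel
f = sum(alpha .* Y(1:TS) .* K) + b;
label = sign(f);
```

Note that `X(1:TS,:) - x` relies on implicit expansion (MATLAB R2016b or later); on older versions, use bsxfun(@minus, X(1:TS,:), x) instead.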