1 \chapter{Machine Learning}
3 \section{Introduction. Common classes and functions}
5 \subsection{Statistical Models}
7 The Machine Learning Library (MLL) is a set of classes and functions for statistical classification, regression and clustering of data.
Most of the classification and regression algorithms are implemented as C++ classes. As the algorithms have different sets of features (like the ability to handle missing measurements, or categorical input variables, etc.), there is little common ground between the classes. This common ground is defined by the class \texttt{CvStatModel} that all the other ML classes are derived from.
Base class for the statistical models in ML.

class CvStatModel
{
public:
    /* CvStatModel(); */
    /* CvStatModel( const CvMat* train_data ... ); */

    virtual ~CvStatModel();

    virtual void clear()=0;

    /* virtual bool train( const CvMat* train_data, [int tflag,] ..., const
        CvMat* responses, ...,
        [const CvMat* var_idx,] ..., [const CvMat* sample_idx,] ...
        [const CvMat* var_type,] ..., [const CvMat* missing_mask,]
        <misc_training_alg_params> ... )=0; */

    /* virtual float predict( const CvMat* sample ... ) const=0; */

    virtual void save( const char* filename, const char* name=0 )=0;
    virtual void load( const char* filename, const char* name=0 )=0;

    virtual void write( CvFileStorage* storage, const char* name )=0;
    virtual void read( CvFileStorage* storage, CvFileNode* node )=0;
};
In this declaration some methods are commented out. These are methods for which there is no unified API (with the exception of the default constructor); however, there are many similarities in their syntax and semantics, which are briefly described below in this section as if they were part of the base class.
47 \cvfunc{CvStatModel::CvStatModel}
53 CvStatModel::CvStatModel();
57 Each statistical model class in ML has a default constructor without parameters. This constructor is useful for 2-stage model construction, when the default constructor is followed by \texttt{train()} or \texttt{load()}.
60 \cvfunc{CvStatModel::CvStatModel(...)}
66 CvStatModel::CvStatModel( const CvMat* train\_data ... );
70 Most ML classes provide single-step construct and train constructors. This constructor is equivalent to the default constructor, followed by the \texttt{train()} method with the parameters that are passed to the constructor.
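For illustration, here is a minimal sketch of both construction patterns, using \texttt{CvSVM} as the derived class; \texttt{train\_data} and \texttt{responses} are assumed to be matrices prepared by the caller:

// two-stage construction: default constructor followed by train()
CvSVM svm1;
svm1.train( train_data, responses );

// single-step construction and training (equivalent to the above)
CvSVM svm2( train_data, responses );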
73 \cvfunc{CvStatModel::~CvStatModel}
79 CvStatModel::~CvStatModel();
83 The destructor of the base class is declared as virtual, so it is safe to write the following code:
CvStatModel* model;

if( use_svm )
    model = new CvSVM(... /* SVM params */);
else
    model = new CvDTree(... /* Decision tree params */);

...

delete model;
Normally, the destructor of each derived class does nothing; it simply calls the overridden method \texttt{clear()} that deallocates all the memory.
100 \cvfunc{CvStatModel::clear}
102 Deallocates memory and resets the model state.
106 void CvStatModel::clear();
The method \texttt{clear} does the same job as the destructor: it deallocates all the memory occupied by the class members, but the object itself is not destructed and can be reused further. This method is called from the destructor, from the \texttt{train} methods of the derived classes, and from the methods \texttt{load()} and \texttt{read()}; it may also be called explicitly by the user.
113 \cvfunc{CvStatModel::save}
115 Saves the model to a file.
119 void CvStatModel::save( const char* filename, const char* name=0 );
123 The method \texttt{save} stores the complete model state to the specified XML or YAML file with the specified name or default name (that depends on the particular class). \texttt{Data persistence} functionality from CxCore is used.
126 \cvfunc{CvStatModel::load}
128 Loads the model from a file.
132 void CvStatModel::load( const char* filename, const char* name=0 );
136 The method \texttt{load} loads the complete model state with the specified name (or default model-dependent name) from the specified XML or YAML file. The previous model state is cleared by \texttt{clear()}.
Note that the method is virtual, so any model can be loaded using this virtual method. However, unlike the C types of OpenCV that can be loaded using the generic \cross{cvLoad}, here the model type must be known, because an empty model must be constructed beforehand. This limitation will be removed in later ML versions.
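A minimal sketch of the save/load round trip, assuming a trained \texttt{CvDTree} instance \texttt{dtree} and the hypothetical file name \texttt{"tree.xml"}:

// store the complete model state to an XML file (the default model name is used)
dtree.save( "tree.xml" );

// an empty model of the same type must be constructed before loading
CvDTree dtree2;
dtree2.load( "tree.xml" ); // the previous state of dtree2 is cleared first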
141 \cvfunc{CvStatModel::write}
143 Writes the model to file storage.
147 void CvStatModel::write( CvFileStorage* storage, const char* name );
151 The method \texttt{write} stores the complete model state to the file storage with the specified name or default name (that depends on the particular class). The method is called by \texttt{save()}.
154 \cvfunc{CvStatModel::read}
156 Reads the model from file storage.
void CvStatModel::read( CvFileStorage* storage, CvFileNode* node );
164 The method \texttt{read} restores the complete model state from the specified node of the file storage. The node must be located by the user using the function \cross{GetFileNodeByName}.
166 The previous model state is cleared by \texttt{clear()}.
169 \cvfunc{CvStatModel::train}
bool CvStatModel::train( const CvMat* train\_data, [int tflag,] ..., const CvMat* responses, ...,
177 [const CvMat* var\_idx,] ..., [const CvMat* sample\_idx,] ...
179 [const CvMat* var\_type,] ..., [const CvMat* missing\_mask,] <misc\_training\_alg\_params> ... );
The method trains the statistical model using a set of input feature vectors and the corresponding output values (responses). Both input and output vectors/values are passed as matrices. By default the input feature vectors are stored as \texttt{train\_data} rows, i.e. all the components (features) of a training vector are stored continuously. However, some algorithms can handle the transposed representation, when all values of each particular feature (component/input variable) over the whole input set are stored continuously. If both layouts are supported, the method includes a \texttt{tflag} parameter that specifies the orientation:
185 \item \texttt{tflag=CV\_ROW\_SAMPLE} means that the feature vectors are stored as rows,
186 \item \texttt{tflag=CV\_COL\_SAMPLE} means that the feature vectors are stored as columns.
The \texttt{train\_data} matrix must have the \texttt{32fC1} (32-bit floating-point, single-channel) format. Responses are usually stored in a 1D vector (a row or a column) of \texttt{32sC1} (only for classification problems) or \texttt{32fC1} format, one value per input vector (although some algorithms, like various flavors of neural nets, take vector responses).
For classification problems the responses are discrete class labels; for regression problems the responses are values of the function to be approximated. Some algorithms can deal only with classification problems, some only with regression problems, and some can deal with both. In the latter case the type of the output variable is either passed as a separate parameter or as the last element of the \texttt{var\_type} vector:
192 \item \texttt{CV\_VAR\_CATEGORICAL} means that the output values are discrete class labels,
\item \texttt{CV\_VAR\_ORDERED(=CV\_VAR\_NUMERICAL)} means that the output values are ordered, i.e. 2 different values can be compared as numbers, so this is a regression problem.
195 The types of input variables can be also specified using \texttt{var\_type}. Most algorithms can handle only ordered input variables.
Many ML models may be trained on a selected feature subset, and/or on a selected sample subset of the training set. To make it easier for the user, the method \texttt{train} usually includes the \texttt{var\_idx} and \texttt{sample\_idx} parameters. The former identifies variables (features) of interest, and the latter identifies samples of interest. Both vectors are either integer (\texttt{32sC1}) vectors, i.e. lists of 0-based indices, or 8-bit (\texttt{8uC1}) masks of active variables/samples. The user may pass \texttt{NULL} pointers instead of either of the arguments, meaning that all of the variables/samples are used for training.
Additionally some algorithms can handle missing measurements, that is, when certain features of certain training samples have unknown values (for example, someone forgot to measure the temperature of patient A on Monday). The parameter \texttt{missing\_mask}, an 8-bit matrix of the same size as \texttt{train\_data}, is used to mark the missing values (non-zero elements of the mask).
201 Usually, the previous model state is cleared by \texttt{clear()} before running the training procedure. However, some algorithms may optionally update the model state with the new training data, instead of resetting it.
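As an illustration of these conventions, here is a hedged sketch of preparing the matrices and training a \cross{CvDTree} on a subset of variables; the sizes and index values are assumptions of this example:

int nsamples = 100, nvars = 5;
CvMat* train_data = cvCreateMat( nsamples, nvars, CV_32FC1 ); // one sample per row
CvMat* responses  = cvCreateMat( nsamples, 1, CV_32FC1 );
// ... fill train_data and responses ...

// use only the variables 0, 2 and 4; pass NULL pointers to use all the samples
int active_vars[] = { 0, 2, 4 };
CvMat var_idx = cvMat( 1, 3, CV_32SC1, active_vars );

CvDTree dtree;
dtree.train( train_data, CV_ROW_SAMPLE, responses, &var_idx,
             0 /* sample_idx */, 0 /* var_type */, 0 /* missing_mask */,
             CvDTreeParams() );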
204 \cvfunc{CvStatModel::predict}
206 Predicts the response for the sample.
float CvStatModel::predict( const CvMat* sample[, <prediction\_params>] ) const;
The method is used to predict the response for a new sample. In the case of classification the method returns the class label; in the case of regression it returns the output function value. The input sample must have as many components as the \texttt{train\_data} passed to \texttt{train} contains. If the \texttt{var\_idx} parameter is passed to \texttt{train}, it is remembered and then used to extract only the necessary components from the input sample in the method \texttt{predict}.
216 The suffix "const" means that prediction does not affect the internal model state, so the method can be safely called from within different threads.
218 \section{Normal Bayes Classifier}
220 This is a simple classification model assuming that feature vectors from each class are normally distributed (though, not necessarily independently distributed), so the whole data distribution function is assumed to be a Gaussian mixture, one component per class. Using the training data the algorithm estimates mean vectors and covariance matrices for every class, and then it uses them for prediction.
222 \textbf{[Fukunaga90] K. Fukunaga. Introduction to Statistical Pattern Recognition. second ed., New York: Academic Press, 1990.}
225 \cvfunc{CvNormalBayesClassifier}
227 Bayes classifier for normally distributed data.
class CvNormalBayesClassifier : public CvStatModel
{
public:
233 CvNormalBayesClassifier();
234 virtual ~CvNormalBayesClassifier();
236 CvNormalBayesClassifier( const CvMat* _train_data, const CvMat* _responses,
237 const CvMat* _var_idx=0, const CvMat* _sample_idx=0 );
239 virtual bool train( const CvMat* _train_data, const CvMat* _responses,
240 const CvMat* _var_idx = 0, const CvMat* _sample_idx=0, bool update=false );
242 virtual float predict( const CvMat* _samples, CvMat* results=0 ) const;
243 virtual void clear();
245 virtual void save( const char* filename, const char* name=0 );
246 virtual void load( const char* filename, const char* name=0 );
248 virtual void write( CvFileStorage* storage, const char* name );
virtual void read( CvFileStorage* storage, CvFileNode* node );
};
257 \cvfunc{CvNormalBayesClassifier::train}
263 bool CvNormalBayesClassifier::train( \par const CvMat* \_train\_data, \par const CvMat* \_responses,
264 \par const CvMat* \_var\_idx =0, \par const CvMat* \_sample\_idx=0, \par bool update=false );
The method trains the Normal Bayes classifier. It follows the conventions of the generic \texttt{train} "method" with the following limitations: only the CV\_ROW\_SAMPLE data layout is supported; the input variables are all ordered; the output variable is categorical (i.e. elements of \texttt{\_responses} must be integers, though the vector may have the \texttt{32fC1} type); and missing measurements are not supported.
270 In addition, there is an \texttt{update} flag that identifies whether the model should be trained from scratch (\texttt{update=false}) or should be updated using the new training data (\texttt{update=true}).
272 \cvfunc{CvNormalBayesClassifier::predict}
Predicts the response for sample(s).
278 float CvNormalBayesClassifier::predict( \par const CvMat* samples, \par CvMat* results=0 ) const;
The method \texttt{predict} estimates the most probable classes for the input vectors. The input vectors (one or more) are stored as rows of the matrix \texttt{samples}. In the case of multiple input vectors, the output vector \texttt{results} should be passed; it will contain one predicted class per input vector. The predicted class for a single input vector is returned by the method.
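A minimal sketch of training the classifier and classifying a batch of samples; \texttt{train\_data}, \texttt{train\_classes} and \texttt{samples} are assumed to be \texttt{32fC1} matrices with one vector per row:

CvNormalBayesClassifier bayes;
bayes.train( train_data, train_classes );

// one predicted class label per input row is stored in results
CvMat* results = cvCreateMat( samples->rows, 1, CV_32FC1 );
bayes.predict( samples, results );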
284 \section{K Nearest Neighbors}
The algorithm caches all of the training samples, and predicts the response for a new sample by analyzing a certain number (\textbf{K}) of the nearest neighbors of the sample (using voting, calculating a weighted sum, etc.). The method is sometimes referred to as "learning by example", because for prediction it looks for the feature vector with a known response that is closest to the given vector.
\cvfunc{CvKNearest}

K Nearest Neighbors model.
class CvKNearest : public CvStatModel
{
public:
    CvKNearest();
299 virtual ~CvKNearest();
301 CvKNearest( const CvMat* _train_data, const CvMat* _responses,
302 const CvMat* _sample_idx=0, bool _is_regression=false, int max_k=32 );
304 virtual bool train( const CvMat* _train_data, const CvMat* _responses,
305 const CvMat* _sample_idx=0, bool is_regression=false,
306 int _max_k=32, bool _update_base=false );
308 virtual float find_nearest( const CvMat* _samples, int k, CvMat* results,
309 const float** neighbors=0, CvMat* neighbor_responses=0, CvMat* dist=0 ) const;
311 virtual void clear();
312 int get_max_k() const;
313 int get_var_count() const;
314 int get_sample_count() const;
bool is_regression() const;
};
324 \cvfunc{CvKNearest::train}
330 bool CvKNearest::train( \par const CvMat* \_train\_data, \par const CvMat* \_responses,
331 \par const CvMat* \_sample\_idx=0, \par bool is\_regression=false,
332 \par int \_max\_k=32, \par bool \_update\_base=false );
The method trains the K-Nearest model. It follows the conventions of the generic \texttt{train} "method" with the following limitations: only the CV\_ROW\_SAMPLE data layout is supported, the input variables are all ordered, the output variables can be either categorical (\texttt{is\_regression=false}) or ordered (\texttt{is\_regression=true}), and variable subsets (\texttt{var\_idx}) and missing measurements are not supported.
The parameter \texttt{\_max\_k} specifies the maximum number of neighbors that may be passed to the method \texttt{find\_nearest}.
340 The parameter \texttt{\_update\_base} specifies whether the model is trained from scratch \newline (\texttt{\_update\_base=false}), or it is updated using the new training data (\texttt{\_update\_base=true}). In the latter case the parameter \texttt{\_max\_k} must not be larger than the original value.
343 \cvfunc{CvKNearest::find\_nearest}
345 Finds the neighbors for the input vectors.
349 float CvKNearest::find\_nearest( \par const CvMat* \_samples, \par int k, CvMat* results=0,
350 \par const float** neighbors=0, \par CvMat* neighbor\_responses=0, \par CvMat* dist=0 ) const;
For each input vector (which are the rows of the matrix \texttt{\_samples}) the method finds the $ \texttt{k} \le \texttt{get\_max\_k()} $ nearest neighbors. In the case of regression, the predicted result is a mean value of the particular vector's neighbor responses. In the case of classification the class is determined by voting.
361 For custom classification/regression prediction, the method can optionally return pointers to the neighbor vectors themselves (\texttt{neighbors}, an array of \texttt{k*\_samples->rows} pointers), their corresponding output values (\texttt{neighbor\_responses}, a vector of \texttt{k*\_samples->rows} elements) and the distances from the input vectors to the neighbors (\texttt{dist}, also a vector of \texttt{k*\_samples->rows} elements).
363 For each input vector the neighbors are sorted by their distances to the vector.
365 If only a single input vector is passed, all output matrices are optional and the predicted value is returned by the method.
367 \cvfunc{Example. Classification of 2D samples from a Gaussian mixture with the k-nearest classifier}
#include "ml.h"
#include "highgui.h"

int main( int argc, char** argv )
{
    const int K = 10;
    int i, j, k, accuracy;
    float response;
    int train_sample_count = 100;
    CvRNG rng_state = cvRNG(-1);
    CvMat* trainData = cvCreateMat( train_sample_count, 2, CV_32FC1 );
    CvMat* trainClasses = cvCreateMat( train_sample_count, 1, CV_32FC1 );
    IplImage* img = cvCreateImage( cvSize( 500, 500 ), 8, 3 );
    float _sample[2];
    CvMat sample = cvMat( 1, 2, CV_32FC1, _sample );
    cvZero( img );

    CvMat trainData1, trainData2, trainClasses1, trainClasses2;

    // form the training samples
    cvGetRows( trainData, &trainData1, 0, train_sample_count/2 );
    cvRandArr( &rng_state, &trainData1, CV_RAND_NORMAL, cvScalar(200,200), cvScalar(50,50) );

    cvGetRows( trainData, &trainData2, train_sample_count/2, train_sample_count );
    cvRandArr( &rng_state, &trainData2, CV_RAND_NORMAL, cvScalar(300,300), cvScalar(50,50) );

    cvGetRows( trainClasses, &trainClasses1, 0, train_sample_count/2 );
    cvSet( &trainClasses1, cvScalar(1) );

    cvGetRows( trainClasses, &trainClasses2, train_sample_count/2, train_sample_count );
    cvSet( &trainClasses2, cvScalar(2) );

    // learn the classifier
    CvKNearest knn( trainData, trainClasses, 0, false, K );
    CvMat* nearests = cvCreateMat( 1, K, CV_32FC1 );

    for( i = 0; i < img->height; i++ )
    {
        for( j = 0; j < img->width; j++ )
        {
            sample.data.fl[0] = (float)j;
            sample.data.fl[1] = (float)i;

            // estimate the response and get the neighbors' labels
            response = knn.find_nearest( &sample, K, 0, 0, nearests, 0 );

            // compute the number of neighbors representing the majority
            for( k = 0, accuracy = 0; k < K; k++ )
            {
                if( nearests->data.fl[k] == response )
                    accuracy++;
            }
            // highlight the pixel depending on the accuracy (or confidence)
            cvSet2D( img, i, j, response == 1 ?
                (accuracy > 5 ? CV_RGB(180,0,0) : CV_RGB(180,120,0)) :
                (accuracy > 5 ? CV_RGB(0,180,0) : CV_RGB(120,120,0)) );
        }
    }

    // display the original training samples
    for( i = 0; i < train_sample_count/2; i++ )
    {
        CvPoint pt;
        pt.x = cvRound(trainData1.data.fl[i*2]);
        pt.y = cvRound(trainData1.data.fl[i*2+1]);
        cvCircle( img, pt, 2, CV_RGB(255,0,0), CV_FILLED );
        pt.x = cvRound(trainData2.data.fl[i*2]);
        pt.y = cvRound(trainData2.data.fl[i*2+1]);
        cvCircle( img, pt, 2, CV_RGB(0,255,0), CV_FILLED );
    }

    cvNamedWindow( "classifier result", 1 );
    cvShowImage( "classifier result", img );
    cvWaitKey(0);

    cvReleaseMat( &nearests );
    cvReleaseMat( &trainClasses );
    cvReleaseMat( &trainData );
    cvReleaseImage( &img );

    return 0;
}
452 \section{Support Vector Machines}
Originally, support vector machines (SVM) were a technique for building an optimal (in some sense) binary (2-class) classifier. The technique was later extended to regression and clustering problems. SVM is a particular case of kernel-based methods: it maps feature vectors into a higher-dimensional space using some kernel function, and then it builds an optimal linear discriminating function in this space (or an optimal hyper-plane that fits the training data). In the case of SVM the kernel is not defined explicitly; instead, a distance between any 2 points in the hyper-space needs to be defined.
The solution is optimal in the sense that the margin between the separating hyper-plane and the nearest feature vectors from both classes (in the case of a 2-class classifier) is maximal. The feature vectors that are the closest to the hyper-plane are called "support vectors", meaning that the position of the other vectors does not affect the hyper-plane (the decision function).
There are a lot of good references on SVM. Here are only a few to start with.
460 \item \textbf{[Burges98] C. Burges. "A tutorial on support vector machines for pattern recognition", Knowledge Discovery and Data Mining 2(2), 1998.} (available online at \url{http://citeseer.ist.psu.edu/burges98tutorial.html}).
461 \item \textbf{LIBSVM - A Library for Support Vector Machines. By Chih-Chung Chang and Chih-Jen Lin} (\url{http://www.csie.ntu.edu.tw/~cjlin/libsvm/})
\cvfunc{CvSVM}

Support Vector Machines.
class CvSVM : public CvStatModel
{
public:
    // SVM type
    enum { C_SVC=100, NU_SVC=101, ONE_CLASS=102, EPS_SVR=103, NU_SVR=104 };

    // SVM kernel type
    enum { LINEAR=0, POLY=1, RBF=2, SIGMOID=3 };

    // SVM params type
    enum { C=0, GAMMA=1, P=2, NU=3, COEF=4, DEGREE=5 };

    CvSVM();
    virtual ~CvSVM();

    CvSVM( const CvMat* _train_data, const CvMat* _responses,
485 const CvMat* _var_idx=0, const CvMat* _sample_idx=0,
486 CvSVMParams _params=CvSVMParams() );
488 virtual bool train( const CvMat* _train_data, const CvMat* _responses,
489 const CvMat* _var_idx=0, const CvMat* _sample_idx=0,
490 CvSVMParams _params=CvSVMParams() );
492 virtual bool train_auto( const CvMat* _train_data, const CvMat* _responses,
const CvMat* _var_idx, const CvMat* _sample_idx, CvSVMParams _params,
int k_fold = 10,
495 CvParamGrid C_grid = get_default_grid(CvSVM::C),
496 CvParamGrid gamma_grid = get_default_grid(CvSVM::GAMMA),
497 CvParamGrid p_grid = get_default_grid(CvSVM::P),
498 CvParamGrid nu_grid = get_default_grid(CvSVM::NU),
499 CvParamGrid coef_grid = get_default_grid(CvSVM::COEF),
500 CvParamGrid degree_grid = get_default_grid(CvSVM::DEGREE) );
502 virtual float predict( const CvMat* _sample ) const;
503 virtual int get_support_vector_count() const;
504 virtual const float* get_support_vector(int i) const;
505 virtual CvSVMParams get_params() const { return params; };
506 virtual void clear();
508 static CvParamGrid get_default_grid( int param_id );
510 virtual void save( const char* filename, const char* name=0 );
511 virtual void load( const char* filename, const char* name=0 );
513 virtual void write( CvFileStorage* storage, const char* name );
514 virtual void read( CvFileStorage* storage, CvFileNode* node );
int get_var_count() const { return var_idx ? var_idx->cols : var_all; }
};
\cvfunc{CvSVMParams}

SVM training parameters.
struct CvSVMParams
{
    CvSVMParams();
    CvSVMParams( int _svm_type, int _kernel_type,
                 double _degree, double _gamma, double _coef0,
                 double _C, double _nu, double _p,
                 CvMat* _class_weights, CvTermCriteria _term_crit );

    int svm_type;
    int kernel_type;
    double degree; // for poly
    double gamma;  // for poly/rbf/sigmoid
    double coef0;  // for poly/sigmoid

    double C;  // for CV_SVM_C_SVC, CV_SVM_EPS_SVR and CV_SVM_NU_SVR
    double nu; // for CV_SVM_NU_SVC, CV_SVM_ONE_CLASS, and CV_SVM_NU_SVR
    double p;  // for CV_SVM_EPS_SVR
    CvMat* class_weights;     // for CV_SVM_C_SVC
    CvTermCriteria term_crit; // termination criteria
};
553 %\cvarg{svm\_type}{Type of SVM, one of the following types:
555 %\cvarg{CvSVM::C\_SVC}{n-class classification ($n>=2$), allows imperfect separation of classes with penalty multiplier \texttt{C} for outliers.}
556 %\cvarg{CvSVM::NU\_SVC}{n-class classification with possible imperfect separation. Parameter \texttt{nu} (in the range 0..1, the larger the value, the smoother the decision boundary) is used instead of \texttt{C}.}
557 %\cvarg{CvSVM::ONE\_CLASS}{one-class SVM. All of the training data is from the same class, SVM builds a boundary that separates the class from the rest of the feature space.}
558 %\cvarg{CvSVM::EPS\_SVR}{regression. The distance between feature vectors from the training set and the fitting hyper-plane must be less than \texttt{p}. For outliers the penalty multiplier \texttt{C} is used.}
559 %\cvarg{CvSVM::NU\_SVR}{regression; \texttt{nu} is used instead of \texttt{p}.}
561 %\cvarg{kernel\_type}{The kernel type, one of the following types:
563 %\cvarg{CvSVM::LINEAR}{no mapping is done, linear discrimination (or regression) is done in the original feature space. It is the fastest option $d(x,y) = x•y == (x,y)$.}
564 %\cvarg{CvSVM::POLY}{polynomial kernel: $d(x,y) = (gamma*(x•y)+coef0)^{degree}$.}
565 %\cvarg{CvSVM::RBF}{radial-basis-function kernel; a good choice in most cases: $d(x,y) = exp(-gamma*|x-y|^2)$}
566 %\cvarg{CvSVM::SIGMOID}{sigmoid function is used as a kernel: $d(x,y) = tanh(gamma*(x•y)+coef0)'$}
568 %\cvarg{degree, gamma, coef0}{Parameters of the kernel, see the formulas above.}
569 %\cvarg{C, nu, p}{Parameters in the generalized SVM optimization problem.}
570 %\cvarg{class\_weights}{Optional weights, assigned to particular classes. They are multiplied by \texttt{C} and thus affect the misclassification penalty for different classes. The larger weight, the larger penalty on misclassification of data from the corresponding class.}
571 %\cvarg{term\_crit}{Termination procedure for the iterative SVM training procedure (which solves a partial case of constrained quadratic optimization problem)}
574 The structure must be initialized and passed to the training method of \cross{CvSVM}.
577 \cvfunc{CvSVM::train}
583 bool CvSVM::train( \par const CvMat* \_train\_data, \par const CvMat* \_responses,
584 \par const CvMat* \_var\_idx=0, \par const CvMat* \_sample\_idx=0,
585 \par CvSVMParams \_params=CvSVMParams() );
The method trains the SVM model. It follows the conventions of the generic \texttt{train} "method" with the following limitations: only the CV\_ROW\_SAMPLE data layout is supported; the input variables are all ordered; the output variables can be either categorical (\texttt{\_params.svm\_type=CvSVM::C\_SVC} or \texttt{\_params.svm\_type=CvSVM::NU\_SVC}), ordered (\texttt{\_params.svm\_type=CvSVM::EPS\_SVR} or \texttt{\_params.svm\_type=CvSVM::NU\_SVR}), or not required at all (\texttt{\_params.svm\_type=CvSVM::ONE\_CLASS}); and missing measurements are not supported.

All the other parameters are gathered in the \cross{CvSVMParams} structure.
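For illustration, here is a hedged sketch of training a 2-class C-SVM with an RBF kernel; \texttt{train\_data} and \texttt{responses} are assumed to be prepared by the caller, and the parameter values are arbitrary:

CvSVMParams params;
params.svm_type    = CvSVM::C_SVC;
params.kernel_type = CvSVM::RBF;
params.gamma       = 0.5;
params.C           = 10;
params.term_crit   = cvTermCriteria( CV_TERMCRIT_ITER+CV_TERMCRIT_EPS, 1000, 1e-6 );

CvSVM svm;
svm.train( train_data, responses, 0, 0, params );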
594 \cvfunc{CvSVM::train\_auto} % XXX not in manual
596 Trains SVM with optimal parameters.
bool CvSVM::train\_auto( \par const CvMat* \_train\_data, \par const CvMat* \_responses,
601 \par const CvMat* \_var\_idx, \par const CvMat* \_sample\_idx,
602 \par CvSVMParams params, \par int k\_fold = 10,
603 \par CvParamGrid C\_grid = get\_default\_grid(CvSVM::C),
604 \par CvParamGrid gamma\_grid = get\_default\_grid(CvSVM::GAMMA),
605 \par CvParamGrid p\_grid = get\_default\_grid(CvSVM::P),
606 \par CvParamGrid nu\_grid = get\_default\_grid(CvSVM::NU),
607 \par CvParamGrid coef\_grid = get\_default\_grid(CvSVM::COEF),
608 \par CvParamGrid degree\_grid = get\_default\_grid(CvSVM::DEGREE) );
\cvarg{k\_fold}{Cross-validation parameter. The training set is divided into \texttt{k\_fold} subsets; one subset is used to test the model, and the others form the train set. So, the SVM algorithm is executed \texttt{k\_fold} times.}
The method trains the SVM model automatically by choosing the optimal
parameters \texttt{C}, \texttt{gamma}, \texttt{p}, \texttt{nu},
\texttt{coef0}, and \texttt{degree} from \cross{CvSVMParams}. "Optimal"
means that the cross-validation estimate of the test set error
is minimal. The parameters are iterated over a logarithmic grid; for
example, the parameter \texttt{gamma} takes values in the set
$( min, \; min \cdot step, \; min \cdot step^2, \dots, min \cdot step^n )$
where $min$ is \texttt{gamma\_grid.min\_val}, $step$ is
\texttt{gamma\_grid.step}, and $n$ is the maximal index such that
\[ \texttt{gamma\_grid.min\_val} \cdot \texttt{gamma\_grid.step}^n < \texttt{gamma\_grid.max\_val} \]
So \texttt{step} must always be greater than 1.
If there is no need to optimize a certain parameter, the corresponding grid step should be set to any value less than or equal to 1. For example, to avoid optimization over \texttt{gamma} one should set \texttt{gamma\_grid.step = 0}, with \texttt{gamma\_grid.min\_val} and \texttt{gamma\_grid.max\_val} being arbitrary numbers. In this case the value \texttt{params.gamma} will be used for \texttt{gamma}.
Finally, if optimization over some parameter is required but there is no idea of a suitable grid, one may call the function \texttt{CvSVM::get\_default\_grid}. To generate a grid for, say, \texttt{gamma}, call \texttt{CvSVM::get\_default\_grid(CvSVM::GAMMA)}.
637 This function works for the case of classification
638 (\texttt{params.svm\_type=CvSVM::C\_SVC} or \texttt{params.svm\_type=CvSVM::NU\_SVC})
639 as well as for the regression
640 (\texttt{params.svm\_type=CvSVM::EPS\_SVR} or \texttt{params.svm\_type=CvSVM::NU\_SVR}). If
\texttt{params.svm\_type=CvSVM::ONE\_CLASS}, no optimization is made and the usual SVM with the parameters specified in \texttt{params} is trained.
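A hedged sketch of automatic parameter selection with 10-fold cross-validation; the \texttt{degree} grid is disabled by setting its step to 0, so \texttt{params.degree} is used as-is (\texttt{train\_data} and \texttt{responses} are assumed to exist):

CvSVMParams params;
params.svm_type    = CvSVM::C_SVC;
params.kernel_type = CvSVM::RBF;

CvParamGrid degree_grid = CvSVM::get_default_grid( CvSVM::DEGREE );
degree_grid.step = 0; // do not optimize over degree

CvSVM svm;
svm.train_auto( train_data, responses, 0, 0, params, 10,
                CvSVM::get_default_grid(CvSVM::C),
                CvSVM::get_default_grid(CvSVM::GAMMA),
                CvSVM::get_default_grid(CvSVM::P),
                CvSVM::get_default_grid(CvSVM::NU),
                CvSVM::get_default_grid(CvSVM::COEF),
                degree_grid );
CvSVMParams best_params = svm.get_params(); // the parameters chosen by cross-validation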
643 \cvfunc{CvSVM::get\_default\_grid} % XXX not in manual
645 Generates a grid for the SVM parameters.
649 CvParamGrid CvSVM::get\_default\_grid( int param\_id );
\cvarg{param\_id}{Must be one of the following:
\begin{description}
\cvarg{CvSVM::C}{}
\cvarg{CvSVM::GAMMA}{}
\cvarg{CvSVM::P}{}
\cvarg{CvSVM::NU}{}
\cvarg{CvSVM::COEF}{}
\cvarg{CvSVM::DEGREE}{}
\end{description}
The grid will be generated for the parameter with this ID.}
666 The function generates a grid for the specified parameter of the SVM algorithm. The grid may be passed to the function \texttt{CvSVM::train\_auto}.
669 \cvfunc{CvSVM::get\_params} % XXX not in manual
671 Returns the current SVM parameters.
675 CvSVMParams CvSVM::get\_params() const;
This function may be used to get the optimal parameters obtained when the model is trained automatically with \texttt{CvSVM::train\_auto}.
682 \cvfunc{CvSVM::get\_support\_vector*}
684 Retrieves the number of support vectors and the particular vector.
688 int CvSVM::get\_support\_vector\_count() const;
690 const float* CvSVM::get\_support\_vector(int i) const;
694 The methods can be used to retrieve the set of support vectors.
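A minimal sketch of enumerating the support vectors of a trained model \texttt{svm} (assumed to exist); each vector has \texttt{get\_var\_count()} components:

int sv_count = svm.get_support_vector_count();
for( int i = 0; i < sv_count; i++ )
{
    const float* sv = svm.get_support_vector(i); // get_var_count() components
    printf( "support vector %d, first component = %f\n", i, sv[0] );
}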
696 \section{Decision Trees}
699 The ML classes discussed in this section implement Classification And Regression Tree algorithms, which are described in \href{#paper_Breiman84}{[Breiman84]}.
701 The class \cross{CvDTree} represents a single decision tree that may be used alone, or as a base class in tree ensembles (see \cross{Boosting} and \cross{Random Trees}).
A decision tree is a binary tree (i.e. a tree where each non-leaf node has exactly 2 child nodes). It can be used either for classification, when each tree leaf is marked with some class label (multiple leaves may have the same label), or for regression, when each tree leaf is also assigned a constant (so the approximation function is piecewise constant).
705 \subsection{Predicting with Decision Trees}
707 To reach a leaf node, and to obtain a response for the input feature
708 vector, the prediction procedure starts with the root node. From each
709 non-leaf node the procedure goes to the left (i.e. selects the left
710 child node as the next observed node), or to the right based on the
711 value of a certain variable, whose index is stored in the observed
712 node. The variable can be either ordered or categorical. In the first
case, the variable value is compared with a certain threshold (which
714 is also stored in the node); if the value is less than the threshold,
715 the procedure goes to the left, otherwise, to the right (for example,
716 if the weight is less than 1 kilogram, the procedure goes to the left,
717 else to the right). And in the second case the discrete variable value is
718 tested to see if it belongs to a certain subset of values (also stored
719 in the node) from a limited set of values the variable could take; if
720 yes, the procedure goes to the left, else - to the right (for example,
721 if the color is green or red, go to the left, else to the right). That
722 is, in each node, a pair of entities (variable\_index, decision\_rule
723 (threshold/subset)) is used. This pair is called a split (split on
724 the variable variable\_index). Once a leaf node is reached, the value
assigned to this node is used as the output of the prediction procedure.
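The following sketch (not the actual library code) illustrates the rule applied at each non-leaf node; \texttt{sample\_value}, \texttt{is\_ordered\_split}, \texttt{split\_threshold} and \texttt{value\_in\_subset} are hypothetical helpers standing in for the internal logic:

const CvDTreeNode* node = root;                 // start at the root node
while( node->left )                             // non-leaf nodes have child nodes
{
    const CvDTreeSplit* split = node->split;    // the primary split
    float val = sample_value( split->var_idx ); // value of the split variable
    bool go_left;
    if( is_ordered_split( split ) )
        go_left = val < split_threshold( split );   // ordered: threshold test
    else
        go_left = value_in_subset( val, split );    // categorical: subset test
    node = go_left ? node->left : node->right;
}
// node->value now holds the predicted class label or function value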
Sometimes, certain features of the input vector are missing (for example, in the darkness it is difficult to determine the object color), and the prediction procedure may get stuck at a certain node (in the mentioned example, if the node is split by color). To avoid such situations, decision trees use so-called surrogate splits. That is, in addition to the best "primary" split, every tree node may also be split on one or more other variables with nearly the same results.
729 \subsection{Training Decision Trees}
The tree is built recursively, starting from the root node. All of the training data (feature vectors and responses) is used to split the root node. In each node the optimum decision rule (i.e. the best "primary" split) is found based on some criterion (in ML the Gini "purity" criterion is used for classification, and the sum of squared errors for regression). Then, if necessary, the surrogate splits are found that resemble the results of the primary split on the training data; all of the data is divided between the left and the right child nodes using the primary and the surrogate splits (just as is done in the prediction procedure). Then the procedure recursively splits both the left and the right nodes. At each node the recursive procedure may stop (i.e. stop splitting the node further) in one of the following cases:
733 \item{depth of the tree branch being constructed has reached the specified maximum value.}
734 \item{number of training samples in the node is less than the specified threshold, when it is not statistically representative to split the node further.}
735 \item{all the samples in the node belong to the same class (or, in the case of regression, the variation is too small).}
736 \item{the best split found does not give any noticeable improvement compared to a random choice.}
738 When the tree is built, it may be pruned using a cross-validation procedure, if necessary. That is, some branches of the tree that may lead to the model overfitting are cut off. Normally this procedure is only applied to standalone decision trees, while tree ensembles usually build small enough trees and use their own protection schemes against overfitting.
740 \subsection{Variable importance}
Besides the obvious use of decision trees for prediction, a tree can also be used for various kinds of data analysis. One of the key properties of the constructed decision tree algorithms is the possibility to compute the importance (relative decisive power) of each variable. For example, in a spam filter that uses a set of words occurring in a message as a feature vector, the variable importance rating can be used to determine the most "spam-indicating" words and thus help keep the dictionary size reasonable.
744 Importance of each variable is computed over all the splits on this variable in the tree, primary and surrogate ones. Thus, to compute variable importance correctly, the surrogate splits must be enabled in the training parameters, even if there is no missing data.
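A minimal sketch of reading the variable importance from a trained tree \texttt{dtree} (assumed to have been trained with \texttt{use\_surrogates=true}):

const CvMat* importance = dtree.get_var_importance(); // 1 x var_count matrix
for( int i = 0; i < importance->cols; i++ )
    printf( "variable %d importance: %g\n", i, cvGetReal1D( importance, i ) );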
746 \textbf{[Breiman84] Breiman, L., Friedman, J. Olshen, R. and Stone, C. (1984), "Classification and Regression Trees", Wadsworth.}
749 \cvfunc{CvDTreeSplit}
Decision tree node split.

struct CvDTreeSplit
{
    int var_idx;
    int inversed;
    float quality;
    CvDTreeSplit* next;
    union
    {
        int subset[2];
        struct
        {
            float c;
            int split_point;
        }
        ord;
    };
};
774 %\cvarg{var\_idx}{Index of the variable used in the split.}
775 %\cvarg{inversed}{When it equals 1, the inverse split rule is used (i.e. left and right branches are exchanged in the expressions below).}
776 %\cvarg{quality}{The split quality, a positive number. It is used to choose the best primary split, then to choose and sort the surrogate splits. After the tree is constructed, it is also used to compute variable importance.}
777 %\cvarg{next}{Pointer to the next split in the node split list.}
778 %\cvarg{subset}{Bit array indicating the value subset in the case of split on a categorical variable.
780 %The rule is:\texttt{if var\_value in subset then next\_node<-left else next\_node<-right}.}
781 %\cvarg{c}{The threshold value in the case of a split on an ordered variable.
783 %The rule is:\texttt{if var\_value in subset then next\_node<-left else next\_node<-right}.}
784 %\cvarg{split\_point}{Used internally by the training algorithm.}
\cvfunc{CvDTreeNode}

Decision tree node.

struct CvDTreeNode
{
    int class_idx;
    int Tn;
    double value;

    CvDTreeNode* parent;
    CvDTreeNode* left;
    CvDTreeNode* right;

    CvDTreeSplit* split;

    int sample_count;
    int depth;
    ...
};

%\cvarg{value}{The value assigned to the tree node. It is either a class label, or the estimated function value.}
813 %\cvarg{class\_idx}{The assigned to the node normalized class index (to 0 to class\_count-1 range), it is used internally in classification trees and tree ensembles.}
814 %\cvarg{Tn}{The tree index in an ordered sequence of trees. The indices are used during and after the pruning procedure. The root node has the maximum value \texttt{Tn} of the whole tree, child nodes have \texttt{Tn} less than or equal to the parent's \texttt{Tn}, and the nodes with
815 %$ \texttt{Tn} \le \texttt{CvDTree::pruned\_tree\_idx} $ are not taken into consideration at the prediction stage (the corresponding branches are considered as cut-off), even if they have not been physically deleted from the tree at the pruning stage.}
816 %\cvarg{parent, left, right}{Pointers to the parent node, left and right child nodes.}\cvarg{split}{Pointer to the first (primary) split.}
817 %\cvarg{sample\_count}{The number of samples that fall into the node at the training stage. It is used to resolve the difficult cases - when the variable for the primary split is missing, and all the variables for the other surrogate splits are missing too,the sample is directed to the left if \texttt{left->sample\_count$>$right->sample\_count} and to the right otherwise.}
818 %\cvarg{depth}{The node depth, the root node depth is 0, the child nodes depth is the parent's depth + 1.}
821 Other numerous fields of \texttt{CvDTreeNode} are used internally at the training stage.
824 \cvfunc{CvDTreeParams}
826 Decision tree training parameters.
struct CvDTreeParams
{
    int max_categories;
    int max_depth;
    int min_sample_count;
    int cv_folds;
    bool use_surrogates;
    bool use_1se_rule;
    bool truncate_pruned_tree;
    float regression_accuracy;
    const float* priors;

    CvDTreeParams() : max_categories(10), max_depth(INT_MAX), min_sample_count(10),
        cv_folds(10), use_surrogates(true), use_1se_rule(true),
        truncate_pruned_tree(true), regression_accuracy(0.01f), priors(0)
    {}

    CvDTreeParams( int _max_depth, int _min_sample_count,
        float _regression_accuracy, bool _use_surrogates,
        int _max_categories, int _cv_folds,
        bool _use_1se_rule, bool _truncate_pruned_tree,
        const float* _priors );
};
856 %\cvarg{max\_depth}{This parameter specifies the maximum possible depth of the tree. That is the training algorithms attempts to split a node while its depth is less than \texttt{max\_depth}. The actual depth may be smaller if the other termination criteria are met (see the outline of the training procedure in the beginning of the section), and/or if the tree is pruned.}
857 %\cvarg{min\_sample\_count}{A node is not split if the number of samples directed to the node is less than the parameter value.}
858 %\cvarg{regression\_accuracy}{Another stop criteria - only for regression trees. As soon as the estimated node value differs from the node training samples responses by less than the parameter value, the node is not split further.}
859 %\cvarg{use\_surrogates}{If \texttt{true}, surrogate splits are built. Surrogate splits are needed to handle missing measurements and for variable importance estimation.}
860 %\cvarg{max\_categories}{If a discrete variable, on which the training procedure tries to make a split, takes more than \texttt{max\_categories} values, the precise best subset estimation may take a very long time (as the algorithm is exponential). Instead, many decision trees engines (including ML) try to find sub-optimal split in this case by clustering all the samples into \texttt{max\_categories} clusters (i.e. some categories are merged together).
862 %Note that this technique is used only in \texttt{N($>$2)}-class classification problems. in the case of regression and 2-class classification the optimal split can be found efficiently without employing clustering, thus the parameter is not used in these cases.}
863 %\cvarg{cv\_folds}{If this parameter is $>$1, the tree is pruned using \texttt{cv\_folds}-fold cross validation.}
864 %\cvarg{use\_1se\_rule}{If \texttt{true}, the tree is truncated a bit more by the pruning procedure. That leads to compact, and more resistant to the training data noise, but a bit less accurate decision tree.}
865 %\cvarg{truncate\_pruned\_tree}{If \texttt{true}, the cut off nodes (with
866 % $ \texttt{Tn} \le \texttt{CvDTree::pruned\_tree\_idx} $ ) are physically
867 % removed from the tree. Otherwise they are kept, and by decreasing
869 % \texttt{CvDTree::pruned\_tree\_idx} (e.g. setting it to -1) it is still possible to get the results from the original un-pruned (or pruned less aggressively) tree.}
870 %\cvarg{priors}{The array of a priori class probabilities, sorted by the class label value. The parameter can be used to tune the decision tree preferences toward a certain class. For example, if users want to detect some rare anomaly occurrence, the training base will likely contain many more normal cases than anomalies, so a very good classification performance will be achieved just by considering every case as normal. To avoid this, the priors can be specified, where the anomaly probability is artificially increased (up to 0.5 or even greater), so the weight of the misclassified anomalies becomes much bigger, and the tree is adjusted properly.
873 %A note about memory management: the field \texttt{priors} is a pointer to the array of floats. The array should be allocated by the user, and released just after the \texttt{CvDTreeParams} structure is passed to \cross{CvDTreeTrainData} or \cross{CvDTree} constructors/methods (as the methods make a copy of the array).}
The structure contains all the decision tree training parameters. There is a default constructor that initializes all the parameters with the default values tuned for a standalone classification tree. Any of the parameters can then be overridden, or the structure may be fully initialized using the advanced variant of the constructor.
879 \cvfunc{CvDTreeTrainData}
881 Decision tree training data and shared data for tree ensembles.
884 struct CvDTreeTrainData
887 CvDTreeTrainData( const CvMat* _train_data, int _tflag,
888 const CvMat* _responses, const CvMat* _var_idx=0,
889 const CvMat* _sample_idx=0, const CvMat* _var_type=0,
890 const CvMat* _missing_mask=0,
891 const CvDTreeParams& _params=CvDTreeParams(),
892 bool _shared=false, bool _add_labels=false );
893 virtual ~CvDTreeTrainData();
895 virtual void set_data( const CvMat* _train_data, int _tflag,
896 const CvMat* _responses, const CvMat* _var_idx=0,
897 const CvMat* _sample_idx=0, const CvMat* _var_type=0,
898 const CvMat* _missing_mask=0,
899 const CvDTreeParams& _params=CvDTreeParams(),
900 bool _shared=false, bool _add_labels=false,
901 bool _update_data=false );
903 virtual void get_vectors( const CvMat* _subsample_idx,
904 float* values, uchar* missing, float* responses,
905 bool get_class_idx=false );
907 virtual CvDTreeNode* subsample_data( const CvMat* _subsample_idx );
909 virtual void write_params( CvFileStorage* fs );
910 virtual void read_params( CvFileStorage* fs, CvFileNode* node );
912 // release all the data
913 virtual void clear();
915 int get_num_classes() const;
916 int get_var_type(int vi) const;
917 int get_work_var_count() const;
919 virtual int* get_class_labels( CvDTreeNode* n );
920 virtual float* get_ord_responses( CvDTreeNode* n );
921 virtual int* get_labels( CvDTreeNode* n );
922 virtual int* get_cat_var_data( CvDTreeNode* n, int vi );
923 virtual CvPair32s32f* get_ord_var_data( CvDTreeNode* n, int vi );
924 virtual int get_child_buf_idx( CvDTreeNode* n );
926 ////////////////////////////////////
928 virtual bool set_params( const CvDTreeParams& params );
929 virtual CvDTreeNode* new_node( CvDTreeNode* parent, int count,
930 int storage_idx, int offset );
932 virtual CvDTreeSplit* new_split_ord( int vi, float cmp_val,
933 int split_point, int inversed, float quality );
934 virtual CvDTreeSplit* new_split_cat( int vi, float quality );
935 virtual void free_node_data( CvDTreeNode* node );
936 virtual void free_train_data();
937 virtual void free_node( CvDTreeNode* node );
939 int sample_count, var_all, var_count, max_c_count;
940 int ord_var_count, cat_var_count;
941 bool have_labels, have_priors;
944 int buf_count, buf_size;
957 CvMat* var_type; // i-th element =
959 // k>=0 - categorical, see k-th element of cat_* arrays
962 CvDTreeParams params;
964 CvMemStorage* tree_storage;
965 CvMemStorage* temp_storage;
967 CvDTreeNode* data_root;
979 This structure is mostly used internally for storing both standalone trees and tree ensembles efficiently. Basically, it contains 3 types of information:
981 \item{The training parameters, an instance of \cross{CvDTreeParams}.}
982 \item{The training data, preprocessed in order to find the best splits more efficiently. For tree ensembles this preprocessed data is reused by all the trees. Additionally, the training data characteristics that are shared by all trees in the ensemble are stored here: variable types, the number of classes, class label compression map etc.}
983 \item{Buffers, memory storages for tree nodes, splits and other elements of the trees constructed.}
There are 2 ways of using this structure. In simple cases (e.g. a standalone tree, or a ready-to-use "black box" tree ensemble from ML, like \cross{Random Trees} or \cross{Boosting}) there is no need to care about, or even know of, the structure: just construct the needed statistical model, train it and use it. The \texttt{CvDTreeTrainData} structure is constructed and used internally. However, for custom tree algorithms or other sophisticated cases, the structure may be constructed and used explicitly. The scheme is the following:
\item The structure is initialized using the default constructor, followed by \texttt{set\_data}, or it is built using the full form of the constructor. The parameter \texttt{\_shared} must be set to \texttt{true}.
988 \item One or more trees are trained using this data, see the special form of the method \texttt{CvDTree::train}.
989 \item Finally, the structure can be released only after all the trees using it are released.
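The scheme above can be sketched as follows (a hedged illustration; the matrices and the two subsample index vectors are assumptions of this example):

CvDTreeTrainData* data = new CvDTreeTrainData( train_data, CV_ROW_SAMPLE, responses,
    0, 0, 0, 0, CvDTreeParams(), true /* _shared */ );

CvDTree tree1, tree2;
tree1.train( data, subsample_idx1 ); // each tree is trained on its own sample subset
tree2.train( data, subsample_idx2 );
// ... use the trees ...
// the shared data may be released only after all the trees using it are released
delete data;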
\cvfunc{CvDTree}

Decision tree.

class CvDTree : public CvStatModel
{
public:
    CvDTree();
    virtual ~CvDTree();
1004 virtual bool train( const CvMat* _train_data, int _tflag,
1005 const CvMat* _responses, const CvMat* _var_idx=0,
1006 const CvMat* _sample_idx=0, const CvMat* _var_type=0,
1007 const CvMat* _missing_mask=0,
1008 CvDTreeParams params=CvDTreeParams() );
1010 virtual bool train( CvDTreeTrainData* _train_data,
1011 const CvMat* _subsample_idx );
1013 virtual CvDTreeNode* predict( const CvMat* _sample,
1014 const CvMat* _missing_data_mask=0,
1015 bool raw_mode=false ) const;
1016 virtual const CvMat* get_var_importance();
1017 virtual void clear();
1019 virtual void read( CvFileStorage* fs, CvFileNode* node );
1020 virtual void write( CvFileStorage* fs, const char* name );
1022 // special read & write methods for trees in the tree ensembles
1023 virtual void read( CvFileStorage* fs, CvFileNode* node,
1024 CvDTreeTrainData* data );
1025 virtual void write( CvFileStorage* fs );
1027 const CvDTreeNode* get_root() const;
1028 int get_pruned_tree_idx() const;
1029 CvDTreeTrainData* get_data();
protected:

virtual bool do_train( const CvMat* _subsample_idx );
1035 virtual void try_split_node( CvDTreeNode* n );
1036 virtual void split_node_data( CvDTreeNode* n );
1037 virtual CvDTreeSplit* find_best_split( CvDTreeNode* n );
1038 virtual CvDTreeSplit* find_split_ord_class( CvDTreeNode* n, int vi );
1039 virtual CvDTreeSplit* find_split_cat_class( CvDTreeNode* n, int vi );
1040 virtual CvDTreeSplit* find_split_ord_reg( CvDTreeNode* n, int vi );
1041 virtual CvDTreeSplit* find_split_cat_reg( CvDTreeNode* n, int vi );
1042 virtual CvDTreeSplit* find_surrogate_split_ord( CvDTreeNode* n, int vi );
1043 virtual CvDTreeSplit* find_surrogate_split_cat( CvDTreeNode* n, int vi );
1044 virtual double calc_node_dir( CvDTreeNode* node );
1045 virtual void complete_node_dir( CvDTreeNode* node );
1046 virtual void cluster_categories( const int* vectors, int vector_count,
1047 int var_count, int* sums, int k, int* cluster_labels );
1049 virtual void calc_node_value( CvDTreeNode* node );
1051 virtual void prune_cv();
1052 virtual double update_tree_rnc( int T, int fold );
1053 virtual int cut_tree( int T, int fold, double min_alpha );
1054 virtual void free_prune_data(bool cut_tree);
1055 virtual void free_tree();
1057 virtual void write_node( CvFileStorage* fs, CvDTreeNode* node );
1058 virtual void write_split( CvFileStorage* fs, CvDTreeSplit* split );
1059 virtual CvDTreeNode* read_node( CvFileStorage* fs,
1061 CvDTreeNode* parent );
1062 virtual CvDTreeSplit* read_split( CvFileStorage* fs, CvFileNode* node );
1063 virtual void write_tree_nodes( CvFileStorage* fs );
1064 virtual void read_tree_nodes( CvFileStorage* fs, CvFileNode* node );
1068 int pruned_tree_idx;
1069 CvMat* var_importance;
CvDTreeTrainData* data;
};
1076 \cvfunc{CvDTree::train}
1078 Trains a decision tree.
1082 bool CvDTree::train( \par const CvMat* \_train\_data, \par int \_tflag,
1083 \par const CvMat* \_responses, \par const CvMat* \_var\_idx=0,
1084 \par const CvMat* \_sample\_idx=0, \par const CvMat* \_var\_type=0,
1085 \par const CvMat* \_missing\_mask=0,
1086 \par CvDTreeParams params=CvDTreeParams() );
1089 bool CvDTree::train( CvDTreeTrainData* \_train\_data, const CvMat* \_subsample\_idx );
1093 There are 2 \texttt{train} methods in \texttt{CvDTree}.
The first method follows the generic \texttt{CvStatModel::train} conventions; it is the most complete form. Both data layouts (\texttt{\_tflag=CV\_ROW\_SAMPLE} and \texttt{\_tflag=CV\_COL\_SAMPLE}) are supported, as well as sample and variable subsets, missing measurements, arbitrary combinations of input and output variable types, etc. The last parameter contains all of the necessary training parameters; see the \cross{CvDTreeParams} description.

The second method \texttt{train} is mostly used for building tree ensembles. It takes the pre-constructed \cross{CvDTreeTrainData} instance and an optional subset of the training set. The indices in \texttt{\_subsample\_idx} are counted relative to the \texttt{\_sample\_idx} passed to the \texttt{CvDTreeTrainData} constructor. For example, if \texttt{\_sample\_idx=[1, 5, 7, 100]}, then \texttt{\_subsample\_idx=[0,3]} means that the samples \texttt{[1, 100]} of the original training set are used.
1100 \cvfunc{CvDTree::predict}
1102 Returns the leaf node of the decision tree corresponding to the input vector.
1106 CvDTreeNode* CvDTree::predict( \par const CvMat* \_sample, \par const CvMat* \_missing\_data\_mask=0,
1107 \par bool raw\_mode=false ) const;
The method takes the feature vector and the optional missing measurement mask on input, traverses the decision tree and returns the reached leaf node on output. The prediction result, either the class label or the estimated function value, may be retrieved as the \texttt{value} field of the \cross{CvDTreeNode} structure, for example: \texttt{dtree->predict(sample,mask)->value}.
The last parameter is normally set to \texttt{false}, implying a regular
input. If it is \texttt{true}, the method assumes that all the values of
the discrete input variables have already been normalized to the ranges $0$
to $num\_of\_categories_i-1$ (the decision tree uses this normalized
representation internally), which is useful for faster prediction with tree
ensembles. For ordered input variables the flag is not used.
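A minimal sketch, assuming a trained tree \texttt{dtree}, a \texttt{32fC1} input vector \texttt{sample}, and an 8-bit mask \texttt{missing\_mask} of the same size:

double prediction = dtree.predict( sample, missing_mask )->value; // class label or function value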
1120 Example: Building A Tree for Classifying Mushrooms. See the
\texttt{mushroom.cpp} sample that demonstrates how to build and use the decision tree.
1124 \section{Boosting} % XXX make sure the math is right
1126 A common machine learning task is supervised learning. In supervised learning, the goal is to learn the functional relationship $F: y = F(x)$ between the input $x$ and the output $y$. Predicting the qualitative output is called classification, while predicting the quantitative output is called regression.
Boosting is a powerful learning concept which provides a solution to the supervised classification learning task. It combines the performance of many "weak" classifiers to produce a powerful 'committee' \cross{HTF01}. A weak classifier is only required to be better than chance, and thus can be very simple and computationally inexpensive. Many of them, smartly combined, however, result in a strong classifier which often outperforms most 'monolithic' strong classifiers such as SVMs and Neural Networks.
1130 Decision trees are the most popular weak classifiers used in boosting schemes. Often the simplest decision trees with only a single split node per tree (called stumps) are sufficient.
The boosted model is based on $N$ training examples $\{(x_i,y_i)\}_{i=1}^N$ with $x_i \in R^K$ and $y_i \in \{-1, +1\}$. $x_i$ is a $K$-component vector. Each component encodes a feature relevant for the learning task at hand. The desired two-class output is encoded as $-1$ and $+1$.

Different variants of boosting are known, such as Discrete AdaBoost, Real AdaBoost, LogitBoost, and Gentle AdaBoost \cross{FHT98}. All of them are very similar in their overall structure. Therefore, we will look only at the standard two-class Discrete AdaBoost algorithm as shown in the box below. Each sample is initially assigned the same weight (step 2). Next a weak classifier $f_m(x)$ is trained on the weighted training data (step 3a). Its weighted training error and scaling factor $c_m$ are computed (step 3b). The weights are increased for training samples which have been misclassified (step 3c). All weights are then normalized, and the process of finding the next weak classifier continues for another $M-1$ times. The final classifier $F(x)$ is the sign of the weighted sum over the individual weak classifiers (step 4).
\item Given $N$ examples $\{(x_i,y_i)\}_{i=1}^N$ with $x_i \in R^K$, $y_i \in \{-1, +1\}$.
\item Start with weights $w_i = 1/N$, $i = 1,...,N$.
\item Repeat for $m = 1,2,...,M$:
\item Fit the classifier $f_m(x) \in \{-1,1\}$, using weights $w_i$ on the training data.
\item Compute $err_m = E_w [1_{(y \neq f_m(x))}]$, $c_m = \log((1 - err_m)/err_m)$.
\item Set $w_i \Leftarrow w_i \exp[c_m 1_{(y_i \neq f_m(x_i))}]$, $i = 1,2,...,N$, and renormalize so that $\sum_i w_i = 1$.
\item Output the classifier $\textrm{sign}[\sum_{m=1}^M c_m f_m(x)]$.
1148 Two-class Discrete AdaBoost Algorithm: Training (steps 1 to 3) and Evaluation (step 4)
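For a concrete illustration (the numbers are chosen for this example, not taken from the text): if a weak classifier has a weighted error of $err_m = 0.3$, then $c_m = \log(0.7/0.3) \approx 0.85$, the weight of every misclassified sample is multiplied by $\exp(0.85) \approx 2.3$ before renormalization, and the next weak classifier therefore concentrates on the samples the current one got wrong.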
\textbf{NOTE:} Like the classical boosting methods, the current implementation supports 2-class classifiers only. For $M > 2$ classes there is the \textbf{AdaBoost.MH} algorithm, described in \cross{FHT98}, that reduces the problem to the 2-class problem, yet with a much larger training set.
In order to reduce computation time for boosted models without substantially losing accuracy, the influence trimming technique may be employed. As the training algorithm proceeds and the number of trees in the ensemble is increased, a larger number of the training samples are classified correctly and with increasing confidence, and those samples therefore receive smaller weights on the subsequent iterations. Examples with a very low relative weight have little impact on the training of the weak classifier, so such examples may be excluded during the weak classifier training without much effect on the induced classifier. This process is controlled by the \texttt{weight\_trim\_rate} parameter: only examples with a summary fraction \texttt{weight\_trim\_rate} of the total weight mass are used in the weak classifier training. Note that the weights for \textbf{all} training examples are recomputed at each training iteration; examples deleted at a particular iteration may be used again when learning further weak classifiers \cross{FHT98}.
1155 \textbf{[HTF01] Hastie, T., Tibshirani, R., Friedman, J. H. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer Series in Statistics. 2001.}
1157 \textbf{[FHT98] Friedman, J. H., Hastie, T. and Tibshirani, R. Additive Logistic Regression: a Statistical View of Boosting. Technical Report, Dept. of Statistics, Stanford University, 1998.}
1160 \cvfunc{CvBoostParams}
1162 Boosting training parameters.
struct CvBoostParams : public CvDTreeParams
{
    int boost_type;
    int weak_count;
    int split_criteria;
    double weight_trim_rate;

    CvBoostParams();
    CvBoostParams( int boost_type, int weak_count, double weight_trim_rate,
                   int max_depth, bool use_surrogates, const float* priors );
};
1178 %\begin{description}
1179 %\cvarg{boost\_type}{Boosting type, one of the following:
1180 %\begin{description}
1181 %\cvarg{CvBoost::DISCRETE}{Discrete AdaBoost}
1182 %\cvarg{CvBoost::REAL}{Real AdaBoost}
1183 %\cvarg{CvBoost::LOGIT}{LogitBoost}
1184 %\cvarg{CvBoost::GENTLE}{Gentle AdaBoost}
1186 %Gentle AdaBoost and Real AdaBoost are often the preferable choices.}
1187 %\cvarg{weak\_count}{The number of weak classifiers to build.}
1188 %\cvarg{split\_criteria}{Splitting criteria, used to choose optimal splits during a weak tree construction:
1189 %\begin{description}
1190 %\cvarg{CvBoost::DEFAULT}{Use the default criteria for the particular boosting method, see below.}
1191 %\cvarg{CvBoost::GINI}{Use the Gini index. This is the default option for Real AdaBoost; may be also used for Discrete AdaBoost.}
1192 %\cvarg{CvBoost::MISCLASS}{Use the misclassification rate. This is the default option for Discrete AdaBoost; may be also used for Real AdaBoost.}
1193 %\cvarg{CvBoost::SQERR}{Use the least squares criteria. This is the default and the only option for LogitBoost and Gentle AdaBoost.}
1196 %\cvarg{weight\_trim\_rate}{The weight trimming ratio, between 0 and 1. See the discussion of it above. If the parameter is $ \le 0 $ or $ >1 $, the trimming is not used and all of the samples are used at each iteration. The default value is 0.95.}
1199 The structure is derived from \cross{CvDTreeParams}, but not all of the decision tree parameters are supported. In particular, cross-validation is not supported.
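For illustration only, here is a minimal sketch of filling the structure via the constructor shown above; the particular values are arbitrary placeholders, not recommendations:

\begin{lstlisting}
// boost_type, weak_count, weight_trim_rate, max_depth, use_surrogates, priors
CvBoostParams boost_params( CvBoost::REAL, // Real AdaBoost
                            100,           // build 100 weak trees
                            0.95,          // default weight trimming
                            5,             // depth of each weak tree
                            false,         // no surrogate splits
                            0 );           // no class priors
\end{lstlisting}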
1202 \cvfunc{CvBoostTree}
1204 Weak tree classifier.
1207 class CvBoostTree: public CvDTree
1211 virtual ~CvBoostTree();
1213 virtual bool train( CvDTreeTrainData* _train_data,
1214 const CvMat* subsample_idx, CvBoost* ensemble );
1215 virtual void scale( double s );
1216 virtual void read( CvFileStorage* fs, CvFileNode* node,
1217 CvBoost* ensemble, CvDTreeTrainData* _data );
1218 virtual void clear();
1227 The weak classifier, a component of the boosted tree classifier \cross{CvBoost}, is derived from \cross{CvDTree}. Normally, there is no need to use the weak classifiers directly; however, they can be accessed as elements of the sequence \texttt{CvBoost::weak}, retrieved by \texttt{CvBoost::get\_weak\_predictors}.
1229 Note that in the case of LogitBoost and Gentle AdaBoost each weak predictor is a regression tree rather than a classification tree. Even in the case of Discrete AdaBoost and Real AdaBoost the \texttt{CvBoostTree::predict} return value (\texttt{CvDTreeNode::value}) is not the output class label; a negative value "votes" for class \#0, a positive value for class \#1, and the votes are weighted. The weight of each individual tree may be increased or decreased using the method \texttt{CvBoostTree::scale}.
1234 Boosted tree classifier.
1237 class CvBoost : public CvStatModel
1241 enum { DISCRETE=0, REAL=1, LOGIT=2, GENTLE=3 };
1243 // Splitting criteria
1244 enum { DEFAULT=0, GINI=1, MISCLASS=3, SQERR=4 };
1249 CvBoost( const CvMat* _train_data, int _tflag,
1250 const CvMat* _responses, const CvMat* _var_idx=0,
1251 const CvMat* _sample_idx=0, const CvMat* _var_type=0,
1252 const CvMat* _missing_mask=0,
1253 CvBoostParams params=CvBoostParams() );
1255 virtual bool train( const CvMat* _train_data, int _tflag,
1256 const CvMat* _responses, const CvMat* _var_idx=0,
1257 const CvMat* _sample_idx=0, const CvMat* _var_type=0,
1258 const CvMat* _missing_mask=0,
1259 CvBoostParams params=CvBoostParams(),
1260 bool update=false );
1262 virtual float predict( const CvMat* _sample, const CvMat* _missing=0,
1263 CvMat* weak_responses=0, CvSlice slice=CV_WHOLE_SEQ,
1264 bool raw_mode=false ) const;
1266 virtual void prune( CvSlice slice );
1268 virtual void clear();
1270 virtual void write( CvFileStorage* storage, const char* name );
1271 virtual void read( CvFileStorage* storage, CvFileNode* node );
1273 CvSeq* get_weak_predictors();
1274 const CvBoostParams& get_params() const;
1278 virtual bool set_params( const CvBoostParams& _params );
1279 virtual void update_weights( CvBoostTree* tree );
1280 virtual void trim_weights();
1281 virtual void write_params( CvFileStorage* fs );
1282 virtual void read_params( CvFileStorage* fs, CvFileNode* node );
1284 CvDTreeTrainData* data;
1285 CvBoostParams params;
1291 \cvfunc{CvBoost::train}
1293 Trains a boosted tree classifier.
1297 bool CvBoost::train( \par const CvMat* \_train\_data, \par int \_tflag,
1298 \par const CvMat* \_responses, \par const CvMat* \_var\_idx=0,
1299 \par const CvMat* \_sample\_idx=0, \par const CvMat* \_var\_type=0,
1300 \par const CvMat* \_missing\_mask=0,
1301 \par CvBoostParams params=CvBoostParams(),
1302 \par bool update=false );
1306 The train method follows the common template; the last parameter \texttt{update} specifies whether the classifier needs to be updated (i.e. new weak tree classifiers are added to the existing ensemble), or the classifier needs to be rebuilt from scratch. The responses must be categorical, i.e. boosted trees cannot be built for regression, and there should be 2 classes.
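The following hedged sketch shows one possible way to call the method; the matrices \texttt{train\_data} and \texttt{responses} and their sizes are hypothetical placeholders, and \texttt{var\_type} simply marks the response as categorical:

\begin{lstlisting}
// nsamples x nvars feature matrix (one sample per row) and integer class labels
CvMat* train_data = cvCreateMat( nsamples, nvars, CV_32FC1 );
CvMat* responses  = cvCreateMat( nsamples, 1, CV_32SC1 );
// ... fill train_data and responses with 2-class data ...

// mark all features as ordered and the response as categorical
CvMat* var_type = cvCreateMat( nvars + 1, 1, CV_8UC1 );
cvSet( var_type, cvScalarAll(CV_VAR_ORDERED) );
CV_MAT_ELEM( *var_type, uchar, nvars, 0 ) = CV_VAR_CATEGORICAL;

CvBoost boost;
boost.train( train_data, CV_ROW_SAMPLE, responses, 0, 0, var_type, 0,
             CvBoostParams( CvBoost::DISCRETE, 100, 0.95, 5, false, 0 ) );
\end{lstlisting}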
1309 \cvfunc{CvBoost::predict}
1311 Predicts a response for the input sample.
1315 float CvBoost::predict( \par const CvMat* sample, \par const CvMat* missing=0,
1316 \par CvMat* weak\_responses=0, \par CvSlice slice=CV\_WHOLE\_SEQ,
1317 \par bool raw\_mode=false ) const;
1321 %\begin{description}
1322 %\cvarg{sample}{The input sample.}
1323 %\cvarg{missing}{The optional mask of missing measurements. To handle missing measurements, the weak classifiers must include surrogate splits (see \texttt{CvDTreeParams::use\_surrogates}).}
1324 %\cvarg{weak\_responses}{The optional output parameter, a floating-point vector of responses from each individual weak classifier. The number of elements in the vector must be equal to the \texttt{slice} length.}
1325 %\cvarg{slice}{The continuous subset of the sequence of weak classifiers to be used for prediction. By default, all the weak classifiers are used.}
1326 %\cvarg{raw\_mode}{It has the same meaning as in \texttt{CvDTree::predict}. Normally, it should be set to false.}
1329 The method \texttt{CvBoost::predict} runs the sample through the trees in the ensemble and returns the output class label based on the weighted voting.
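As a hedged usage sketch, continuing the hypothetical \texttt{boost} model from the training example above and assuming \texttt{sample} points to a $1 \times \texttt{nvars}$ \texttt{CV\_32FC1} row vector, the responses of the individual weak classifiers can be retrieved along with the final label:

\begin{lstlisting}
CvSeq* weak = boost.get_weak_predictors();
CvMat* weak_responses = cvCreateMat( 1, weak->total, CV_32FC1 );
// weak_responses receives one response per weak tree (default slice = all trees)
float label = boost.predict( sample, 0, weak_responses );
cvReleaseMat( &weak_responses );
\end{lstlisting}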
1332 \cvfunc{CvBoost::prune}
1334 Removes the specified weak classifiers.
1338 void CvBoost::prune( CvSlice slice );
1342 The method removes the specified weak classifiers from the sequence. Note that this method should not be confused with the pruning of individual decision trees, which is currently not supported.
1345 \cvfunc{CvBoost::get\_weak\_predictors}
1347 Returns the sequence of weak tree classifiers.
1351 CvSeq* CvBoost::get\_weak\_predictors();
1355 The method returns the sequence of weak classifiers. Each element of the sequence is a pointer to a \texttt{CvBoostTree} class (or, probably, to some of its derivatives).
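For example, a hedged sketch of walking through this sequence with the generic \texttt{CvSeqReader} API from CXCORE and rescaling every weak tree might look like:

\begin{lstlisting}
CvSeq* weak = boost.get_weak_predictors();
CvSeqReader reader;
cvStartReadSeq( weak, &reader );
for( int i = 0; i < weak->total; i++ )
{
    CvBoostTree* wtree;
    CV_READ_SEQ_ELEM( wtree, reader );  // the sequence stores CvBoostTree* pointers
    wtree->scale( 0.5 );                // halve the vote weight of this tree
}
\end{lstlisting}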
1357 \section{Random Trees}
1360 Random trees were introduced by Leo Breiman and Adele Cutler: \url{http://www.stat.berkeley.edu/users/breiman/RandomForests/}. The algorithm can deal with both classification and regression problems. Random trees is a collection (ensemble) of tree predictors that is called a \textbf{forest} further in this section (the term was also introduced by L. Breiman). The classification works as follows: the random trees classifier takes the input feature vector, classifies it with every tree in the forest, and outputs the class label that received the majority of "votes". In the case of regression the classifier response is the average of the responses over all the trees in the forest.
1362 All the trees are trained with the same parameters, but on different training sets, which are generated from the original training set using the bootstrap procedure: for each training set we randomly select the same number of vectors as in the original set (\texttt{=N}). The vectors are chosen with replacement, that is, some vectors will occur more than once and some will be absent. At each node of each trained tree, not all the variables are used to find the best split, but rather a random subset of them. For each node a new subset is generated; however, its size is fixed for all the nodes and all the trees. It is a training parameter, set to $\sqrt{number\_of\_variables}$ by default. None of the built trees are pruned.
1364 In random trees there is no need for any accuracy estimation procedures, such as cross-validation or bootstrap, or a separate test set to get an estimate of the training error. The error is estimated internally during the training. When the training set for the current tree is drawn by sampling with replacement, some vectors are left out (the so-called \emph{oob (out-of-bag) data}). The size of the oob data is about \texttt{N/3}. The classification error is estimated using this oob data as follows:
1366 \item Get a prediction for each vector that is oob relative to the i-th tree, using that very i-th tree.
1367 \item After all the trees have been trained, for each vector that has ever been oob, find the class-"winner" for it (i.e. the class that has got the majority of votes in the trees where the vector was oob) and compare it to the ground-truth response.
1368 \item Then the classification error estimate is computed as the ratio of the number of misclassified oob vectors to all the vectors in the original data; see the formula below. In the case of regression, the oob error is computed as the mean of the squared differences between the oob predictions and the ground-truth responses.
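Written out, the classification oob error described in the last step is simply

\[
err_{oob} = \frac{1}{N}\sum_{i=1}^{N} 1_{\left(\hat{y}_i \neq y_i\right)},
\]

where $\hat{y}_i$ denotes the majority-vote prediction for the $i$-th vector over the trees for which that vector was out-of-bag, and $y_i$ is its ground-truth response.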
1371 \textbf{References:}
1373 \item \emph{Machine Learning}, Wald I, July 2002: \url{http://stat-www.berkeley.edu/users/breiman/wald2002-1.pdf}
1374 \item \emph{Looking Inside the Black Box}, Wald II, July 2002: \url{http://stat-www.berkeley.edu/users/breiman/wald2002-2.pdf}
1375 \item \emph{Software for the Masses}, Wald III, July 2002: \url{http://stat-www.berkeley.edu/users/breiman/wald2002-3.pdf}
1376 \item And other articles from the web site \url{http://www.stat.berkeley.edu/users/breiman/RandomForests/cc_home.htm}.
1381 Training Parameters of Random Trees.
1384 struct CvRTParams : public CvDTreeParams
1386 bool calc_var_importance;
1388 CvTermCriteria term_crit;
1390 CvRTParams() : CvDTreeParams( 5, 10, 0, false, 10, 0, false, false, 0 ),
1391 calc_var_importance(false), nactive_vars(0)
1393 term_crit = cvTermCriteria( CV_TERMCRIT_ITER+CV_TERMCRIT_EPS, 50, 0.1 );
1396 CvRTParams( int _max_depth, int _min_sample_count,
1397 float _regression_accuracy, bool _use_surrogates,
1398 int _max_categories, const float* _priors,
1399 bool _calc_var_importance,
1400 int _nactive_vars, int max_tree_count,
1401 float forest_accuracy, int termcrit_type );
1405 %\begin{description}
1406 %\cvarg{calc\_var\_importance}{If it is set, then variable importance is computed by the training procedure. To retrieve the computed variable importance array, call the method \newline \texttt{CvRTrees::get\_var\_importance().}}
1407 %\cvarg{nactive\_vars}{The number of variables that are randomly selected at each tree node and that are used to find the best split(s).}
1408 %\cvarg{term\_crit}{Termination criteria for growing the forest: \texttt{term\_crit.max\_iter} is the maximum number of trees in the forest (see also \texttt{max\_tree\_count} parameter of the constructor, by default it is set to 50).
1410 %\texttt{term\_crit.epsilon} is the sufficient accuracy (\cross{OOB error}).}
1413 The set of training parameters for the forest is a superset of the training parameters for a single tree. However, Random trees do not need all the functionality/features of decision trees; most notably, the trees are not pruned, so the cross-validation parameters are not used.
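For illustration, a hedged sketch of filling the structure via the full constructor listed above (the values are arbitrary placeholders, not recommendations):

\begin{lstlisting}
CvRTParams rt_params( 10,      // max_depth of each tree
                      5,       // min_sample_count
                      0,       // regression_accuracy (classification here)
                      false,   // use_surrogates
                      15,      // max_categories
                      0,       // priors
                      true,    // calc_var_importance
                      4,       // nactive_vars (0 would mean the default sqrt rule)
                      100,     // max_tree_count
                      0.01f,   // forest_accuracy (sufficient OOB error)
                      CV_TERMCRIT_ITER | CV_TERMCRIT_EPS );
\end{lstlisting}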
1421 class CvRTrees : public CvStatModel
1425 virtual ~CvRTrees();
1426 virtual bool train( const CvMat* _train_data, int _tflag,
1427 const CvMat* _responses, const CvMat* _var_idx=0,
1428 const CvMat* _sample_idx=0, const CvMat* _var_type=0,
1429 const CvMat* _missing_mask=0,
1430 CvRTParams params=CvRTParams() );
1431 virtual float predict( const CvMat* sample, const CvMat* missing = 0 )
1433 virtual void clear();
1435 virtual const CvMat* get_var_importance();
1436 virtual float get_proximity( const CvMat* sample_1, const CvMat* sample_2 )
1439 virtual void read( CvFileStorage* fs, CvFileNode* node );
1440 virtual void write( CvFileStorage* fs, const char* name );
1442 CvMat* get_active_var_mask();
1445 int get_tree_count() const;
1446 CvForestTree* get_tree(int i) const;
1450 bool grow_forest( const CvTermCriteria term_crit );
1452 // array of the trees of the forest
1453 CvForestTree** trees;
1454 CvDTreeTrainData* data;
1463 \cvfunc{CvRTrees::train}
1465 Trains the Random Trees model.
1469 bool CvRTrees::train( \par const CvMat* train\_data, \par int tflag,
1470 \par const CvMat* responses, \par const CvMat* comp\_idx=0,
1471 \par const CvMat* sample\_idx=0, \par const CvMat* var\_type=0,
1472 \par const CvMat* missing\_mask=0,
1473 \par CvRTParams params=CvRTParams() );
1477 The method \texttt{CvRTrees::train} is very similar to the first form of \texttt{CvDTree::train}() and follows the generic \texttt{CvStatModel::train} conventions. All of the algorithm-specific training parameters are passed as a \cross{CvRTParams} instance. The estimate of the training error (\texttt{oob-error}) is stored in the protected class member \texttt{oob\_error}.
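A hedged training sketch, reusing the hypothetical \texttt{train\_data}, \texttt{responses} and \texttt{var\_type} matrices from the boosting example and the \texttt{rt\_params} structure defined above:

\begin{lstlisting}
CvRTrees forest;
forest.train( train_data, CV_ROW_SAMPLE, responses,
              0, 0, var_type, 0, rt_params );
printf( "trees built: %d\n", forest.get_tree_count() );
\end{lstlisting}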
1480 \cvfunc{CvRTrees::predict}
1482 Predicts the output for the input sample.
1486 float CvRTrees::predict( \par const CvMat* sample, \par const CvMat* missing=0 ) const;
1490 The input parameters of the prediction method are the same as in \texttt{CvDTree::predict}, but the return value type is different. This method returns the cumulative result from all the trees in the forest (the class that receives the majority of votes, or the mean of the regression function estimates).
1493 \cvfunc{CvRTrees::get\_var\_importance}
1495 Retrieves the variable importance array.
1499 const CvMat* CvRTrees::get\_var\_importance() const;
1503 The method returns the variable importance vector, computed at the training stage when \texttt{\cross{CvRTParams}::calc\_var\_importance} is set. If the training flag is not set, then the \texttt{NULL} pointer is returned. This is unlike decision trees, where variable importance can be computed anytime after the training.
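A hedged usage sketch, assuming the \texttt{forest} above was trained with \texttt{calc\_var\_importance=true}:

\begin{lstlisting}
const CvMat* importance = forest.get_var_importance();
if( importance )
{
    for( int i = 0; i < importance->cols; i++ )
        printf( "variable %d importance: %.2f%%\n",
                i, 100.f*importance->data.fl[i] );
}
\end{lstlisting}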
1506 \cvfunc{CvRTrees::get\_proximity}
1508 Retrieves the proximity measure between two training samples.
1512 float CvRTrees::get\_proximity( \par const CvMat* sample\_1, \par const CvMat* sample\_2 ) const;
1516 The method returns the proximity measure between any two samples (the ratio of those trees in the ensemble in which the samples fall into the same leaf node to the total number of trees).
1519 Example: Prediction of mushroom goodness using random trees classifier
1529 CvStatModel* cls = NULL;
1530 CvFileStorage* storage = cvOpenFileStorage( "Mushroom.xml",
1531 NULL,CV_STORAGE_READ );
1532 CvMat* data = (CvMat*)cvReadByName(storage, NULL, "sample", 0 );
1533 CvMat train_data, test_data;
1535 CvMat* missed = NULL;
1536 CvMat* comp_idx = NULL;
1537 CvMat* sample_idx = NULL;
1538 CvMat* type_mask = NULL;
1541 CvRTreesParams params;
1542 CvTreeClassifierTrainParams cart_params;
1543 const int ntrain_samples = 1000;
1544 const int ntest_samples = 1000;
1545 const int nvars = 23;
1547 if(data == NULL || data->cols != nvars)
1549 puts("Error in source data");
1553 cvGetSubRect( data, &train_data, cvRect(0, 0, nvars, ntrain_samples) );
1554 cvGetSubRect( data, &test_data, cvRect(0, ntrain_samples, nvars,
1555 ntrain_samples + ntest_samples) );
1558 cvGetCol( &train_data, &response, resp_col);
1560 /* create missed variable matrix */
1561 missed = cvCreateMat(train_data.rows, train_data.cols, CV_8UC1);
1562 for( i = 0; i < train_data.rows; i++ )
1563 for( j = 0; j < train_data.cols; j++ )
1564 CV_MAT_ELEM(*missed,uchar,i,j)
1565 = (uchar)(CV_MAT_ELEM(train_data,float,i,j) < 0);
1567 /* create comp_idx vector */
1568 comp_idx = cvCreateMat(1, train_data.cols-1, CV_32SC1);
1569 for( i = 0; i < train_data.cols; i++ )
1571 if(i<resp_col)CV_MAT_ELEM(*comp_idx,int,0,i) = i;
1572 if(i>resp_col)CV_MAT_ELEM(*comp_idx,int,0,i-1) = i;
1575 /* create sample_idx vector */
1576 sample_idx = cvCreateMat(1, train_data.rows, CV_32SC1);
1577 for( j = i = 0; i < train_data.rows; i++ )
1579 if(CV_MAT_ELEM(response,float,i,0) < 0) continue;
1580 CV_MAT_ELEM(*sample_idx,int,0,j) = i;
1583 sample_idx->cols = j;
1585 /* create type mask */
1586 type_mask = cvCreateMat(1, train_data.cols+1, CV_8UC1);
1587 cvSet( type_mask, cvRealScalar(CV_VAR_CATEGORICAL), 0);
1589 // initialize training parameters
1590 cvSetDefaultParamTreeClassifier((CvStatModelParams*)&cart_params);
1591 cart_params.wrong_feature_as_unknown = 1;
1592 params.tree_params = &cart_params;
1593 params.term_crit.max_iter = 50;
1594 params.term_crit.epsilon = 0.1;
1595 params.term_crit.type = CV_TERMCRIT_ITER|CV_TERMCRIT_EPS;
1597 puts("Random forest results");
1598 cls = cvCreateRTreesClassifier( &train_data,
1601 (CvStatModelParams*)&
1609 CvMat sample = cvMat( 1, nvars, CV_32FC1, test_data.data.fl );
1611 int wrong = 0, total = 0;
1612 cvGetCol( &test_data, &test_resp, resp_col);
1613 for( i = 0; i < ntest_samples; i++, sample.data.fl += nvars )
1615 if( CV_MAT_ELEM(test_resp,float,i,0) >= 0 )
1617 float resp = cls->predict( cls, &sample, NULL );
1618 wrong += (fabs(resp-response.data.fl[i]) > 1e-3 ) ? 1 : 0;
1622 printf( "Test set error = %.2f\n", wrong*100.f/(float)total );
1625 puts("Error forest creation");
1627 cvReleaseMat(&missed);
1628 cvReleaseMat(&sample_idx);
1629 cvReleaseMat(&comp_idx);
1630 cvReleaseMat(&type_mask);
1631 cvReleaseMat(&data);
1632 cvReleaseStatModel(&cls);
1633 cvReleaseFileStorage(&storage);
1638 \section{Expectation-Maximization}
1640 The EM (Expectation-Maximization) algorithm estimates the parameters of the multivariate probability density function in the form of a Gaussian mixture distribution with a specified number of mixtures.
1642 Consider the set of the feature vectors $x_1, x_2,...,x_{N}$: $N$ vectors from a $d$-dimensional Euclidean space drawn from a Gaussian mixture:
1645 p(x;a_k,S_k,\pi_k) = \sum_{k=1}^{m}\pi_kp_k(x), \quad \pi_k \geq 0, \quad \sum_{k=1}^{m}\pi_k=1,
1649 p_k(x)=\varphi(x;a_k,S_k)=\frac{1}{(2\pi)^{d/2}\mid{S_k}\mid^{1/2}}exp\left\{-\frac{1}{2}(x-a_k)^TS_k^{-1}(x-a_k)\right\},
1652 where $m$ is the number of mixtures, $p_k$ is the normal distribution
1653 density with the mean $a_k$ and covariance matrix $S_k$, and $\pi_k$
1654 is the weight of the $k$-th mixture. Given the number of mixtures
1655 $m$ and the samples $x_i$, $i=1..N$, the algorithm finds the
1656 maximum-likelihood estimates (MLE) of all the mixture parameters,
1657 i.e. $a_k$, $S_k$ and $\pi_k$:
1660 L(x,\theta)=\log p(x,\theta)=\sum_{i=1}^{N}\log\left(\sum_{k=1}^{m}\pi_kp_k(x_i)\right)\to\max_{\theta\in\Theta},
1664 \Theta=\left\{(a_k,S_k,\pi_k): a_k \in \mathbbm{R} ^d,S_k=S_k^T>0,S_k \in \mathbbm{R} ^{d \times d},\pi_k\geq 0,\sum_{k=1}^{m}\pi_k=1\right\}.
1667 The EM algorithm is an iterative procedure. Each iteration includes
1668 two steps. At the first step (Expectation-step, or E-step), we find the
1669 probability $p_{i,k}$ (denoted $\alpha_{ki}$ in the formula below) that
1670 sample \texttt{i} belongs to mixture \texttt{k}, using the currently
1671 available mixture parameter estimates:
1674 \alpha_{ki} = \frac{\pi_k\varphi(x_i;a_k,S_k)}{\sum\limits_{j=1}^{m}\pi_j\varphi(x_i;a_j,S_j)}.
1677 At the second step (Maximization-step, or M-step) the mixture parameter estimates are refined using the computed probabilities:
1680 \pi_k=\frac{1}{N}\sum_{i=1}^{N}\alpha_{ki}, \quad a_k=\frac{\sum\limits_{i=1}^{N}\alpha_{ki}x_i}{\sum\limits_{i=1}^{N}\alpha_{ki}}, \quad S_k=\frac{\sum\limits_{i=1}^{N}\alpha_{ki}(x_i-a_k)(x_i-a_k)^T}{\sum\limits_{i=1}^{N}\alpha_{ki}},
1683 Alternatively, the algorithm may start with the M-step when the initial values for $p_{i,k}$ can be provided. Another alternative, when $p_{i,k}$ are unknown, is to use a simpler clustering algorithm to pre-cluster the input samples and thus obtain the initial $p_{i,k}$. Often (and in ML) the \cross{KMeans2} algorithm is used for that purpose.
1685 One of the main problems the EM algorithm has to deal with is the large number
1686 of parameters to estimate. The majority of the parameters reside in the
1687 covariance matrices, which are $d \times d$ elements each
1688 (where $d$ is the feature space dimensionality). However, in
1689 many practical problems the covariance matrices are close to diagonal,
1690 or even to $\mu_k*I$, where $I$ is the identity matrix and
1691 $\mu_k$ is a mixture-dependent "scale" parameter. So a robust computation
1692 scheme could be to start with harder constraints on the covariance
1693 matrices and then use the estimated parameters as an input for a less
1694 constrained optimization problem (often a diagonal covariance matrix is
1695 already a good enough approximation).
1697 \textbf{References:}
1699 \item \textbf{[Bilmes98]} J. A. Bilmes. A Gentle Tutorial of the EM Algorithm and its Application to Parameter Estimation for Gaussian Mixture and Hidden Markov Models. Technical Report TR-97-021, International Computer Science Institute and Computer Science Division, University of California at Berkeley, April 1998.
1705 Parameters of the EM algorithm.
1710 CvEMParams() : nclusters(10), cov_mat_type(CvEM::COV_MAT_DIAGONAL),
1711 start_step(CvEM::START_AUTO_STEP), probs(0), weights(0), means(0),
1714 term_crit=cvTermCriteria( CV_TERMCRIT_ITER+CV_TERMCRIT_EPS,
1718 CvEMParams( int _nclusters, int _cov_mat_type=1/*CvEM::COV_MAT_DIAGONAL*/,
1719 int _start_step=0/*CvEM::START_AUTO_STEP*/,
1720 CvTermCriteria _term_crit=cvTermCriteria(
1721 CV_TERMCRIT_ITER+CV_TERMCRIT_EPS,
1723 CvMat* _probs=0, CvMat* _weights=0,
1724 CvMat* _means=0, CvMat** _covs=0 ) :
1725 nclusters(_nclusters), cov_mat_type(_cov_mat_type),
1726 start_step(_start_step),
1727 probs(_probs), weights(_weights), means(_means), covs(_covs),
1728 term_crit(_term_crit)
1735 const CvMat* weights;
1738 CvTermCriteria term_crit;
1742 %\begin{description}
1743 %\cvarg{nclusters}{The number of mixtures. Some EM implementation could determine the optimal number of mixtures within a specified value range, but that is not the case in ML yet.}
1744 %\cvarg{cov\_mat\_type}{The type of the mixture covariance matrices; should be one of the following:
1745 %\begin{description}
1746 %\cvarg{CvEM::COV\_MAT\_GENERIC}{a covariance matrix of each mixture may be an arbitrary, symmetrical, positively defined matrix, so the number of free parameters in each matrix is about $\texttt{d}^2/2$. It is not recommended to use this option, unless there is pretty accurate initial estimation of the parameters and/or a huge number of training samples.}
1747 %\cvarg{CvEM::COV\_MAT\_DIAGONAL}{a covariance matrix of each mixture may be an arbitrary diagonal matrix with positive diagonal elements, that is, non-diagonal elements are forced to be 0's, so the number of free parameters is \texttt{d} for each matrix. This is the most commonly used option yielding good estimation results.}
1748 %\cvarg{CvEM::COV\_MAT\_SPHERICAL}{a covariance matrix of each mixture is a scaled identity matrix, $\mu_k*\texttt{I}$, so the only parameter to be estimated is $\mu_k$. The option may be used in special cases, when the constraint is relevant, or as a first step in the optimization (e.g. in case when the data is preprocessed with \cross{CalcPCA}). The results of such preliminary estimation may be passed again to the optimization procedure, this time with \texttt{cov\_mat\_type=CvEM::COV\_MAT\_DIAGONAL}.}
1750 %\cvarg{start\_step}{The initial step the algorithm starts from; should be one of the following:
1751 %\begin{description}
1752 %\cvarg{CvEM::START\_E\_STEP}{the algorithm starts with E-step. At least, the initial values of mean vectors, \texttt{CvEMParams::means} must be passed. Optionally, the user may also provide initial values for weights (\texttt{CvEMParams::weights}) and/or covariance matrices (\texttt{CvEMParams::covs}).}
1753 %\cvarg{CvEM::START\_M\_STEP}{the algorithm starts with M-step. The initial probabilities $p_{i,k}$ must be provided.}
1754 %\cvarg{CvEM::START\_AUTO\_STEP}{No values are required from the user, k-means algorithm is used to estimate initial mixtures parameters.}
1756 %\cvarg{term\_crit}{Termination criteria of the procedure. EM algorithm stops either after a certain number of iterations (\texttt{term\_crit.num\_iter}), or when the parameters change too little (no more than \texttt{term\_crit.epsilon}) from iteration to iteration.}
1757 %\cvarg{probs}{Initial probabilities $p_{i,k}$; are used (and must be not \texttt{NULL}) only when \newline \texttt{start\_step=CvEM::START\_M\_STEP}.}
1758 %\cvarg{weights}{Initial mixture weights $\pi_k$; are used (if not \texttt{NULL}) only when \newline \texttt{start\_step=CvEM::START\_E\_STEP}.}
1759 %\cvarg{covs}{Initial mixture covariance matrices $S_k$; are used (if not \texttt{NULL}) only when \newline \texttt{start\_step=CvEM::START\_E\_STEP}.}
1760 %\cvarg{means}{Initial mixture means $a_k$; are used (and must be not \texttt{NULL}) only when \newline \texttt{start\_step=CvEM::START\_E\_STEP}.}
1763 The structure has two constructors; the default one represents a rough rule of thumb, while the other allows overriding a variety of parameters, from the number of mixtures (the only essential problem-dependent parameter) to the initial values of the mixture parameters.
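A hedged sketch of the second (advanced) constructor, using the parameter order from the declaration above; only the number of mixtures is problem-specific, the remaining values are illustrative:

\begin{lstlisting}
CvEMParams em_params( 4,                        // nclusters
                      CvEM::COV_MAT_DIAGONAL,   // cov_mat_type
                      CvEM::START_AUTO_STEP,    // start_step
                      cvTermCriteria( CV_TERMCRIT_ITER+CV_TERMCRIT_EPS,
                                      100, 1e-6 ) );
\end{lstlisting}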
1771 class CV_EXPORTS CvEM : public CvStatModel
1774 // Type of covariance matrices
1775 enum { COV_MAT_SPHERICAL=0, COV_MAT_DIAGONAL=1, COV_MAT_GENERIC=2 };
1778 enum { START_E_STEP=1, START_M_STEP=2, START_AUTO_STEP=0 };
1781 CvEM( const CvMat* samples, const CvMat* sample_idx=0,
1782 CvEMParams params=CvEMParams(), CvMat* labels=0 );
1785 virtual bool train( const CvMat* samples, const CvMat* sample_idx=0,
1786 CvEMParams params=CvEMParams(), CvMat* labels=0 );
1788 virtual float predict( const CvMat* sample, CvMat* probs ) const;
1789 virtual void clear();
1791 int get_nclusters() const { return params.nclusters; }
1792 const CvMat* get_means() const { return means; }
1793 const CvMat** get_covs() const { return covs; }
1794 const CvMat* get_weights() const { return weights; }
1795 const CvMat* get_probs() const { return probs; }
1799 virtual void set_params( const CvEMParams& params,
1800 const CvVectors& train_data );
1801 virtual void init_em( const CvVectors& train_data );
1802 virtual double run_em( const CvVectors& train_data );
1803 virtual void init_auto( const CvVectors& samples );
1804 virtual void kmeans( const CvVectors& train_data, int nclusters,
1805 CvMat* labels, CvTermCriteria criteria,
1806 const CvMat* means );
1808 double log_likelihood;
1815 CvMat* log_weight_div_det;
1816 CvMat* inv_eigen_values;
1817 CvMat** cov_rotate_mats;
1822 \cvfunc{CvEM::train}
1824 Estimates the Gaussian mixture parameters from the sample set.
1828 bool CvEM::train( \par const CvMat* samples, \par const CvMat* sample\_idx=0,
1829 \par CvEMParams params=CvEMParams(), \par CvMat* labels=0 );
1833 Unlike many of the ML models, EM is an unsupervised learning algorithm and it does not take responses (class labels or function values) on input. Instead, it computes the \cross{MLE} of the Gaussian mixture parameters from the input sample set and stores all the parameters inside the structure: $p_{i,k}$ in \texttt{probs}, $a_k$ in \texttt{means}, $S_k$ in \texttt{covs[k]}, $\pi_k$ in \texttt{weights}; optionally it computes the output "class label" for each sample: $\texttt{labels}_i=\texttt{arg max}_k(p_{i,k}), i=1..N$ (i.e. indices of the most-probable mixture for each sample).
1835 The trained model can be used further for prediction, just like any other classifier. The trained model is similar to the \cross{Bayes classifier}.
1838 Example: Clustering random samples of multi-Gaussian distribution using EM
1842 #include "highgui.h"
1844 int main( int argc, char** argv )
1847 const int N1 = (int)sqrt((double)N);
1848 const CvScalar colors[] = {{{0,0,255}},{{0,255,0}},
1849 {{0,255,255}},{{255,255,0}}};
1853 CvRNG rng_state = cvRNG(-1);
1854 CvMat* samples = cvCreateMat( nsamples, 2, CV_32FC1 );
1855 CvMat* labels = cvCreateMat( nsamples, 1, CV_32SC1 );
1856 IplImage* img = cvCreateImage( cvSize( 500, 500 ), 8, 3 );
1858 CvMat sample = cvMat( 1, 2, CV_32FC1, _sample );
1863 cvReshape( samples, samples, 2, 0 );
1864 for( i = 0; i < N; i++ )
1866 CvScalar mean, sigma;
1868 // form the training samples
1869 cvGetRows( samples, &samples_part, i*nsamples/N,
1871 mean = cvScalar(((i%N1)+1.)*img->width/(N1+1),
1872 ((i/N1)+1.)*img->height/(N1+1));
1873 sigma = cvScalar(30,30);
1874 cvRandArr( &rng_state, &samples_part, CV_RAND_NORMAL,
1877 cvReshape( samples, samples, 1, 0 );
1879 // initialize model's parameters
1881 params.means = NULL;
1882 params.weights = NULL;
1883 params.probs = NULL;
1884 params.nclusters = N;
1885 params.cov_mat_type = CvEM::COV_MAT_SPHERICAL;
1886 params.start_step = CvEM::START_AUTO_STEP;
1887 params.term_crit.max_iter = 10;
1888 params.term_crit.epsilon = 0.1;
1889 params.term_crit.type = CV_TERMCRIT_ITER|CV_TERMCRIT_EPS;
1892 em_model.train( samples, 0, params, labels );
1895 // the piece of code shows how to repeatedly optimize the model
1896 // with less-constrained parameters
1897 //(COV_MAT_DIAGONAL instead of COV_MAT_SPHERICAL)
1898 // when the output of the first stage is used as input for the second.
1900 params.cov_mat_type = CvEM::COV_MAT_DIAGONAL;
1901 params.start_step = CvEM::START_E_STEP;
1902 params.means = em_model.get_means();
1903 params.covs = (const CvMat**)em_model.get_covs();
1904 params.weights = em_model.get_weights();
1906 em_model2.train( samples, 0, params, labels );
1907 // to use em_model2, replace em_model.predict()
1908 // with em_model2.predict() below
1910 // classify every image pixel
1912 for( i = 0; i < img->height; i++ )
1914 for( j = 0; j < img->width; j++ )
1916 CvPoint pt = cvPoint(j, i);
1917 sample.data.fl[0] = (float)j;
1918 sample.data.fl[1] = (float)i;
1919 int response = cvRound(em_model.predict( &sample, NULL ));
1920 CvScalar c = colors[response];
1922 cvCircle( img, pt, 1, cvScalar(c.val[0]*0.75,
1923 c.val[1]*0.75,c.val[2]*0.75), CV_FILLED );
1927 //draw the clustered samples
1928 for( i = 0; i < nsamples; i++ )
1931 pt.x = cvRound(samples->data.fl[i*2]);
1932 pt.y = cvRound(samples->data.fl[i*2+1]);
1933 cvCircle( img, pt, 1, colors[labels->data.i[i]], CV_FILLED );
1936 cvNamedWindow( "EM-clustering result", 1 );
1937 cvShowImage( "EM-clustering result", img );
1940 cvReleaseMat( &samples );
1941 cvReleaseMat( &labels );
1947 \section{Neural Networks}
1949 ML implements feed-forward artificial neural networks, more particularly, multi-layer perceptrons (MLP), the most commonly used type of neural network. An MLP consists of an input layer, an output layer and one or more hidden layers. Each layer of the MLP includes one or more neurons that are directionally linked with the neurons from the previous and the next layers. Here is an example of a 3-layer perceptron with 3 inputs, 2 outputs and a hidden layer including 5 neurons:
1951 \includegraphics{pics/mlp_.png}
1953 All the neurons in the MLP are similar. Each of them has several input links (i.e. it takes the output values from several neurons in the previous layer as input) and several output links (i.e. it passes the response to several neurons in the next layer). The values retrieved from the previous layer are summed with certain weights, individual for each neuron, plus a bias term, and the sum is transformed using the activation function $f$, which may also be different for different neurons. Here is the picture:
1955 \includegraphics{pics/neuron_model.png}
1957 In other words, given the outputs $x_j$ of the layer $n$, the outputs $y_i$ of the layer $n+1$ are computed as:
1960 u_i = \sum_j (w^{n+1}_{i,j}*x_j) + w^{n+1}_{i,bias}
1967 Different activation functions may be used; ML implements 3 standard ones:
1969 \item Identity function (\texttt{CvANN\_MLP::IDENTITY}): $f(x)=x$
1970 \item Symmetrical sigmoid (\texttt{CvANN\_MLP::SIGMOID\_SYM}): $f(x)=\beta*(1-e^{-\alpha x})/(1+e^{-\alpha x})$, the default choice for MLP; the standard sigmoid with $\beta =1, \alpha =1$ is shown below:
1972 \includegraphics{pics/sigmoid_bipolar.png}
1974 \item Gaussian function (\texttt{CvANN\_MLP::GAUSSIAN}): $f(x)=\beta e^{-\alpha x^2}$, not completely supported at the moment.
1976 In ML all the neurons have the same activation functions, with the same free parameters ($\alpha, \beta$) that are specified by the user and are not altered by the training algorithms.
1978 So the whole trained network works as follows: it takes the feature vector as input, the vector size being equal to the size of the input layer; the values are passed as input to the first hidden layer; the outputs of the hidden layer are computed using the weights and the activation functions, and the results are passed further downstream until the output layer is computed.
1980 So, in order to compute the network one needs to know all the
1981 weights $w^{n+1}_{i,j}$. The weights are computed by the training
1982 algorithm. The algorithm takes a training set: multiple input vectors
1983 with the corresponding output vectors, and iteratively adjusts the
1984 weights to try to make the network give the desired response to the
1985 provided input vectors.
1987 The larger the network size (the number of hidden layers and their sizes),
1988 the more potential flexibility the network has, and the error on the
1989 training set could be made arbitrarily small. But at the same time the
1990 learned network will also "learn" the noise present in the training set,
1991 so the error on the test set usually starts increasing after the network
1992 size reaches some limit. Besides, larger networks take much longer to
1993 train than smaller ones, so it is reasonable to preprocess the data
1994 (using \cross{CalcPCA} or a similar technique) and train a smaller network
1995 on only the essential features.
1997 Another feature of MLPs is their inability to handle categorical
1998 data as is; however, there is a workaround. If a certain feature in the
1999 input or output (i.e. in the case of an \texttt{n}-class classifier for
2000 $n>2$) layer is categorical and can take $M>2$
2001 different values, it makes sense to represent it as a binary tuple of
2002 \texttt{M} elements, where the \texttt{i}-th element is 1 if and only if the
2003 feature is equal to the \texttt{i}-th value out of the \texttt{M} possible ones,
2004 as shown in the sketch below. This will increase the size of the input/output layer, but will speed up the
2005 training algorithm convergence and at the same time enable "fuzzy" values
2006 of such variables, i.e. a tuple of probabilities instead of a fixed value.
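For example, a categorical output taking one of \texttt{M} values could be expanded into a 1-of-\texttt{M} binary tuple as in this hedged sketch; \texttt{outputs} is a hypothetical \texttt{nsamples} $\times$ \texttt{M} \texttt{CV\_32FC1} matrix later passed to the training method, \texttt{s} is the sample index and \texttt{cls} its class label:

\begin{lstlisting}
// one-of-M encoding: element cls is set to 1, all the others to 0
for( int k = 0; k < M; k++ )
    CV_MAT_ELEM( *outputs, float, s, k ) = (k == cls) ? 1.f : 0.f;
\end{lstlisting}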
2008 ML implements 2 algorithms for training MLPs. The first is the classical
2009 random sequential back-propagation algorithm,
2010 and the second (the default one) is the batch RPROP algorithm.
2014 \item \url{http://en.wikipedia.org/wiki/Backpropagation}. Wikipedia article about the back-propagation algorithm.
2015 \item Y. LeCun, L. Bottou, G.B. Orr and K.-R. Muller, "Efficient backprop", in Neural Networks---Tricks of the Trade, Springer Lecture Notes in Computer Sciences 1524, pp.5-50, 1998.
2016 \item M. Riedmiller and H. Braun, "A Direct Adaptive Method for Faster Backpropagation Learning: The RPROP Algorithm", Proc. ICNN, San Francisco (1993).
2019 \cvfunc{CvANN\_MLP\_TrainParams}
2021 Parameters of the MLP training algorithm.
2024 struct CvANN_MLP_TrainParams
2026 CvANN_MLP_TrainParams();
2027 CvANN_MLP_TrainParams( CvTermCriteria term_crit, int train_method,
2028 double param1, double param2=0 );
2029 ~CvANN_MLP_TrainParams();
2031 enum { BACKPROP=0, RPROP=1 };
2033 CvTermCriteria term_crit;
2036 // backpropagation parameters
2037 double bp_dw_scale, bp_moment_scale;
2040 double rp_dw0, rp_dw_plus, rp_dw_minus, rp_dw_min, rp_dw_max;
2044 %\begin{description}
2045 %\cvarg{term\_crit}{The termination criteria for the training algorithm. It identifies how many iterations are done by the algorithm (for sequential backpropagation algorithm the number is multiplied by the size of the training set) and how much the weights could change between the iterations to make the algorithm continue.}
2046 %\cvarg{train\_method}{The training algorithm to use; can be one of \texttt{CvANN\_MLP\_TrainParams::BACKPROP} (sequential backpropagation algorithm) or \texttt{CvANN\_MLP\_TrainParams::RPROP} (RPROP algorithm, default value).}
2047 %\cvarg{bp\_dw\_scale}{(Backpropagation only): The coefficient to multiply the computed weight gradient by. The recommended value is about 0.1. The parameter can be set via \texttt{param1} of the constructor.}
2048 %\cvarg{bp\_moment\_scale}{(Backpropagation only): The coefficient to multiply the difference between weights on the 2 previous iterations. This parameter provides some inertia to smooth the random fluctuations of the weights. It can vary from 0 (the feature is disabled) to 1 and beyond. The value 0.1 or so is good enough. The parameter can be set via \texttt{param2} of the constructor.}
2049 %\cvarg{rp\_dw0}{(RPROP only): Initial magnitude of the weight delta. The default value is 0.1. This parameter can be set via \texttt{param1} of the constructor.}
2050 %\cvarg{rp\_dw\_plus}{(RPROP only): The increase factor for the weight delta. It must be $>1$, the default value is 1.2, which should work well in most cases, according to the algorithm's author. The parameter can only be changed explicitly by modifying the structure member.}
2051 %\cvarg{rp\_dw\_minus}{(RPROP only): The decrease factor for the weight delta. It must be $<1$, the default value is 0.5, which should work well in most cases, according to the algorithm's author. The parameter can only be changed explicitly by modifying the structure member.}
2052 %\cvarg{rp\_dw\_min}{(RPROP only): The minimum value of the weight delta. It must be $>0$, the default value is \texttt{FLT\_EPSILON}. The parameter can be set via \texttt{param2} of the constructor.}
2053 %\cvarg{rp\_dw\_max}{(RPROP only): The maximum value of the weight delta. It must be $>1$, the default value is 50. The parameter can only be changed explicitly by modifying the structure member.}
2056 The structure has a default constructor that initializes the parameters for the \texttt{RPROP} algorithm. There is also a more advanced constructor that customizes the parameters and/or chooses the backpropagation algorithm. Finally, the individual parameters can be adjusted after the structure is created.
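A hedged sketch of the advanced constructor, switching to the sequential backpropagation algorithm (the particular rates are placeholders, not recommendations):

\begin{lstlisting}
CvANN_MLP_TrainParams train_params(
    cvTermCriteria( CV_TERMCRIT_ITER+CV_TERMCRIT_EPS, 1000, 0.01 ),
    CvANN_MLP_TrainParams::BACKPROP,
    0.1,     // param1 -> bp_dw_scale
    0.1 );   // param2 -> bp_moment_scale
\end{lstlisting}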
2064 class CvANN_MLP : public CvStatModel
2068 CvANN_MLP( const CvMat* _layer_sizes,
2069 int _activ_func=SIGMOID_SYM,
2070 double _f_param1=0, double _f_param2=0 );
2072 virtual ~CvANN_MLP();
2074 virtual void create( const CvMat* _layer_sizes,
2075 int _activ_func=SIGMOID_SYM,
2076 double _f_param1=0, double _f_param2=0 );
2078 virtual int train( const CvMat* _inputs, const CvMat* _outputs,
2079 const CvMat* _sample_weights,
2080 const CvMat* _sample_idx=0,
2081 CvANN_MLP_TrainParams _params = CvANN_MLP_TrainParams(),
2083 virtual float predict( const CvMat* _inputs,
2084 CvMat* _outputs ) const;
2086 virtual void clear();
2088 // possible activation functions
2089 enum { IDENTITY = 0, SIGMOID_SYM = 1, GAUSSIAN = 2 };
2091 // available training flags
2092 enum { UPDATE_WEIGHTS = 1, NO_INPUT_SCALE = 2, NO_OUTPUT_SCALE = 4 };
2094 virtual void read( CvFileStorage* fs, CvFileNode* node );
2095 virtual void write( CvFileStorage* storage, const char* name );
2097 int get_layer_count() { return layer_sizes ? layer_sizes->cols : 0; }
2098 const CvMat* get_layer_sizes() { return layer_sizes; }
2102 virtual bool prepare_to_train( const CvMat* _inputs, const CvMat* _outputs,
2103 const CvMat* _sample_weights, const CvMat* _sample_idx,
2104 CvANN_MLP_TrainParams _params,
2105 CvVectors* _ivecs, CvVectors* _ovecs, double** _sw, int _flags );
2107 // sequential random backpropagation
2108 virtual int train_backprop( CvVectors _ivecs, CvVectors _ovecs,
2109 const double* _sw );
2112 virtual int train_rprop( CvVectors _ivecs, CvVectors _ovecs,
2113 const double* _sw );
2115 virtual void calc_activ_func( CvMat* xf, const double* bias ) const;
2116 virtual void calc_activ_func_deriv( CvMat* xf, CvMat* deriv,
2117 const double* bias ) const;
2118 virtual void set_activ_func( int _activ_func=SIGMOID_SYM,
2119 double _f_param1=0, double _f_param2=0 );
2120 virtual void init_weights();
2121 virtual void scale_input( const CvMat* _src, CvMat* _dst ) const;
2122 virtual void scale_output( const CvMat* _src, CvMat* _dst ) const;
2123 virtual void calc_input_scale( const CvVectors* vecs, int flags );
2124 virtual void calc_output_scale( const CvVectors* vecs, int flags );
2126 virtual void write_params( CvFileStorage* fs );
2127 virtual void read_params( CvFileStorage* fs, CvFileNode* node );
2131 CvMat* sample_weights;
2133 double f_param1, f_param2;
2134 double min_val, max_val, min_val1, max_val1;
2136 int max_count, max_buf_sz;
2137 CvANN_MLP_TrainParams params;
2142 Unlike many other models in ML that are constructed and trained at once, in the MLP model these steps are separated. First, a network with the specified topology is created using the non-default constructor or the method \texttt{create}. All the weights are set to zeros. Then the network is trained using the set of input and output vectors. The training procedure can be repeated more than once, i.e. the weights can be adjusted based on the new training data.
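A hedged sketch of this two-stage workflow; the layer sizes and the data matrices are hypothetical placeholders, with \texttt{inputs} and \texttt{outputs} being floating-point matrices that store one vector per row:

\begin{lstlisting}
int layer_sz[] = { nvars, 10, 10, M };   // input, two hidden layers, output
CvMat layer_sizes = cvMat( 1, 4, CV_32SC1, layer_sz );

CvANN_MLP mlp;
mlp.create( &layer_sizes, CvANN_MLP::SIGMOID_SYM, 1, 1 );

// train now; may be called again later, e.g. with the UPDATE_WEIGHTS flag
int iters = mlp.train( inputs, outputs, 0 /*sample_weights*/, 0 /*sample_idx*/,
                       CvANN_MLP_TrainParams() );
\end{lstlisting}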
2145 \cvfunc{CvANN\_MLP::create}
2147 Constructs the MLP with the specified topology.
2151 void CvANN\_MLP::create( \par const CvMat* \_layer\_sizes,
2152 \par int \_activ\_func=SIGMOID\_SYM,
2153 \par double \_f\_param1=0, \par double \_f\_param2=0 );
2158 \cvarg{\_layer\_sizes}{The integer vector specifies the number of neurons in each layer including the input and output layers.}
2159 \cvarg{\_activ\_func}{Specifies the activation function for each neuron; one of \texttt{CvANN\_MLP::IDENTITY}, \texttt{CvANN\_MLP::SIGMOID\_SYM} and \texttt{CvANN\_MLP::GAUSSIAN}.}
2160 \cvarg{\_f\_param1,\_f\_param2}{Free parameters of the activation function, $\alpha$ and $\beta$, respectively. See the formulas in the introduction section.}
2163 The method creates an MLP network with the specified topology and assigns the same activation function to all the neurons.
2165 \cvfunc{CvANN\_MLP::train}
2171 int CvANN\_MLP::train( \par const CvMat* \_inputs, \par const CvMat* \_outputs,
2172 \par const CvMat* \_sample\_weights, \par const CvMat* \_sample\_idx=0,
2173 \par CvANN\_MLP\_TrainParams \_params = CvANN\_MLP\_TrainParams(),
2179 \cvarg{\_inputs}{A floating-point matrix of input vectors, one vector per row.}
2180 \cvarg{\_outputs}{A floating-point matrix of the corresponding output vectors, one vector per row.}
2181 \cvarg{\_sample\_weights}{(RPROP only) The optional floating-point vector of weights for each sample. Some samples may be more important than others for training, and the user may want to raise the weight of certain classes to find the right balance between hit-rate and false-alarm rate etc.}
2182 \cvarg{\_sample\_idx}{The optional integer vector indicating the samples (i.e. rows of \texttt{\_inputs} and \texttt{\_outputs}) that are taken into account.}
2183 \cvarg{\_params}{The training params. See \texttt{CvANN\_MLP\_TrainParams} description.}
2184 \cvarg{\_flags}{The various parameters to control the training algorithm. May be a combination of the following:
2186 \cvarg{UPDATE\_WEIGHTS = 1}{algorithm updates the network weights, rather than computing them from scratch (in the latter case the weights are initialized using the \emph{Nguyen-Widrow} algorithm).}
2187 \cvarg{NO\_INPUT\_SCALE}{algorithm does not normalize the input vectors. If this flag is not set, the training algorithm normalizes each input feature independently, shifting its mean value to 0 and making the standard deviation 1. If the network is assumed to be updated frequently, the new training data could be much different from the original one. In this case the user should take care of proper normalization.}
2188 \cvarg{NO\_OUTPUT\_SCALE}{algorithm does not normalize the output vectors. If the flag is not set, the training algorithm normalizes each output feature independently, by transforming it to a certain range depending on the activation function used.}
2192 This method applies the specified training algorithm to compute/adjust the network weights. It returns the number of iterations performed.