The time required to learn a model from training examples is often unacceptable. For instance, training language-understanding models with AdaBoost or SVMs can take weeks or longer when the number of training examples is large. Parallelization through the use of multiple processors can improve learning speed. The invention describes effective methods for distributing multiclass classification learning across several processors. These methods are applicable to multiclass models whose training process can be split into the training of independent binary classifiers.
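The abstract gives no code, but the core idea of splitting a multiclass problem into independent binary classifiers and training them in parallel can be sketched briefly. The Python below is a hypothetical illustration, not the patented method: the one-vs-rest decomposition, the simple perceptron learner, and the names `train_binary`, `train_multiclass`, and `predict` are all assumptions standing in for any binary learner (such as AdaBoost or an SVM) dispatched across processors.

```python
# Hypothetical sketch: one-vs-rest decomposition trained in parallel.
# The perceptron here is only a placeholder binary learner; the point
# is that each class's classifier trains independently on its own process.
from multiprocessing import Pool

def train_binary(args):
    """Train one binary classifier answering 'is the label == target_class?'"""
    examples, target_class = args
    dim = len(examples[0][0])
    w = [0.0] * dim
    for _ in range(10):                        # fixed number of epochs
        for x, label in examples:
            y = 1 if label == target_class else -1
            score = sum(wi * xi for wi, xi in zip(w, x))
            if y * score <= 0:                 # misclassified: perceptron update
                w = [wi + y * xi for wi, xi in zip(w, x)]
    return target_class, w

def train_multiclass(examples, classes, processes=4):
    """Train all one-vs-rest classifiers independently, one task per class."""
    with Pool(processes) as pool:
        results = pool.map(train_binary, [(examples, c) for c in classes])
    return dict(results)

def predict(models, x):
    """Choose the class whose binary classifier scores highest."""
    return max(models, key=lambda c: sum(wi * xi for wi, xi in zip(models[c], x)))

if __name__ == "__main__":
    # Tiny synthetic 2-D dataset with three classes.
    data = [([1.0, 0.0], "a"), ([0.9, 0.1], "a"),
            ([0.0, 1.0], "b"), ([0.1, 0.9], "b"),
            ([-1.0, -1.0], "c"), ([-0.9, -1.1], "c")]
    models = train_multiclass(data, ["a", "b", "c"], processes=3)
    print(predict(models, [0.95, 0.05]))       # expected: "a"
```

Because each binary subproblem depends only on the shared training set and its own target class, the tasks need no communication until the final models are collected, which is what makes this decomposition a natural fit for multiple processors.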

 