A modified large margin perceptron learning algorithm (LMPLA) uses asymmetric margin variables for relevant training documents (referred to as "positive examples") and non-relevant training documents (referred to as "negative examples") to accommodate biased training sets. In addition, the positive examples are initialized so as to force at least one update to the initial weighting vector. A noise parameter is also introduced to force convergence of the algorithm.
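
The following Python sketch illustrates the general idea of a margin perceptron with class-dependent margins and a noise tolerance; it is only an interpretation of the abstract, not the patented method. The function name, the specific initialization (a zero weight vector, which guarantees the positive examples trigger at least one update), and the way the noise parameter relaxes the margin test are all assumptions made for illustration.

    import numpy as np

    def train_asymmetric_margin_perceptron(X, y, pos_margin=1.0, neg_margin=0.1,
                                            learning_rate=1.0, noise=0.05,
                                            max_epochs=100):
        """Margin perceptron with different margin targets per class (illustrative).

        X          : (n_samples, n_features) array of document feature vectors
        y          : array of labels in {+1, -1} (+1 = relevant document)
        pos_margin : margin required of positive (relevant) examples
        neg_margin : margin required of negative (non-relevant) examples
        noise      : tolerance subtracted from the margin test so the loop can
                     terminate on noisy, non-separable data (assumed role)
        """
        n_samples, n_features = X.shape
        # Starting from the zero vector means every positive example violates
        # its margin on the first epoch, so at least one update is guaranteed.
        w = np.zeros(n_features)

        for _ in range(max_epochs):
            updated = False
            for xi, yi in zip(X, y):
                margin = pos_margin if yi == 1 else neg_margin
                # Update whenever the signed score falls short of the required
                # margin by more than the noise tolerance.
                if yi * np.dot(w, xi) < margin - noise:
                    w += learning_rate * yi * xi
                    updated = True
            if not updated:      # all examples satisfy their relaxed margins
                break
        return w

Using a larger margin for positive examples and a smaller one for negative examples is one way to compensate for a training set dominated by non-relevant documents; scoring a new document then reduces to computing np.dot(w, x_new) and thresholding at zero.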
