Kernel-based machine learning methods that adapt to GPUs.

A method for optimally adapting a class of kernel-based machine learning methods to parallel architectures, including GPU hardware.

The Need

Most modern machine learning methods rely on GPU hardware to speed up processing of large datasets. Modern GPUs implement parallel architectures that allow certain mathematical operations to be computed very quickly. GPU-based architectures are widely used in industry for a range of Artificial Intelligence and Machine Learning applications, including computer vision and self-driving cars. Current kernel-based machine learning methods, however, do not fully utilize the available computing power of a parallel computing resource such as a GPU.

The Technology

Ohio State researchers Siyuan Ma and Mikhail Belkin in the Department of Computer Science & Engineering have developed a method for optimally adapting a class of kernel-based machine learning methods to parallel architectures, including GPU hardware. The main innovation of their technology is fast, scalable, and accurate training for kernel machines that produces solutions mathematically equivalent to those obtained with standard kernel methods, while fully utilizing the available computing power of a parallel computational resource such as a GPU. This full utilization is key to performance: with standard iterative methods, much of the hardware's computational capacity sits idle. The practical result of these innovations is an accurate, principled, and very fast algorithm, EigenPro 2.0.
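
For illustration, the sketch below shows the core idea in NumPy: a preconditioned iteration for kernel regression in which the top eigendirections of the kernel matrix are damped, allowing a much larger step size than plain gradient descent. This is a minimal sketch under stated assumptions: the function names are illustrative, and the toy version eigendecomposes the full kernel matrix, whereas the actual EigenPro approach estimates only the top eigendirections from a subsample and runs mini-batch updates on the GPU.

    import numpy as np

    def gaussian_kernel(X, Z, bandwidth=5.0):
        # Gaussian (RBF) kernel matrix between the rows of X and Z.
        sq = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-sq / (2.0 * bandwidth ** 2))

    def eigenpro_style_solve(K, y, k=10, epochs=100):
        # Preconditioned Richardson iteration for K @ alpha = y (y is 1-D).
        # Damping the top-k eigendirections of K lets the step size be set
        # by lambda_{k+1} instead of lambda_1 -- the EigenPro-style idea.
        lam, Q = np.linalg.eigh(K)              # eigenvalues, ascending
        lam, Q = lam[::-1], Q[:, ::-1]          # reorder to descending
        Qk = Q[:, :k]                           # top-k eigenvectors
        damp = 1.0 - lam[k] / lam[:k]           # shrink factors for top-k
        eta = 1.0 / lam[k]                      # step size ~ 1/lambda_{k+1}
        alpha = np.zeros_like(y, dtype=float)
        for _ in range(epochs):
            g = K @ alpha - y                   # residual of the system
            g = g - Qk @ (damp * (Qk.T @ g))    # apply preconditioner to g
            alpha -= eta * g                    # preconditioned update
        return alpha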

Commercial Applications

  • Broad range of applications related to data analysis and machine learning

Benefits/Advantages

  • Using a single GPU, training on ImageNet with 1.3 × 10⁶ data points and 1000 labels takes under an hour, while smaller datasets, such as MNIST, take seconds
  • Little tuning is needed beyond selecting the kernel and its parameter (see the sketch after this list)
  • Practical, ready-to-use methods
  • Potential to scale to very large datasets
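
As a hypothetical usage example building on the sketch above (same illustrative function names, not the released EigenPro 2.0 interface), fitting reduces to choosing the kernel and its bandwidth:

    rng = np.random.default_rng(0)
    X = rng.standard_normal((500, 5))
    y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(500)

    K = gaussian_kernel(X, X, bandwidth=2.0)    # the one kernel parameter
    alpha = eigenpro_style_solve(K, y, k=20, epochs=50)
    print(np.abs(K @ alpha - y).mean())         # training residual shrinks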
