
Numba

From Wikipedia, the free encyclopedia


Original author(s): Continuum Analytics
Developer(s): Community project
Initial release: 15 August 2012
Stable release: 0.36.2 / 19 December 2017
Preview release: 0.37.0dev1 / 3 January 2018
Repository: github.com/numba/numba
Written in: Python, C
Operating system: Cross-platform
Type: Technical computing
Website: numba.pydata.org

Numba is an open-source NumPy-aware optimizing compiler for Python sponsored by Continuum Analytics, Inc. It uses the LLVM compiler infrastructure to compile Python to machine code.[1]

Traits

Numba compiles Python code with LLVM to machine code that executes natively at runtime. Compilation is triggered by decorating Python functions, which lets users create native functions for specific input types, or have them created on the fly:

from numba import jit

@jit('f8(f8[:])')  # eager compilation: float64 result from a 1-D float64 array
def sum1d(my_double_array):
    total = 0.0
    for i in range(my_double_array.shape[0]):
        total += my_double_array[i]
    return total

This optimized function runs 200 times faster than the interpreted original function on a long NumPy array, and is 30% faster than NumPy's built-in sum() function (version 0.27.0).[2][3]
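Such comparisons can be reproduced with a simple timing harness. The sketch below (array size and repetition count are arbitrary choices, and results depend on hardware and library versions) times the compiled function against NumPy's sum():

import numpy as np
from timeit import timeit

my_double_array = np.random.rand(1_000_000)  # a long float64 array

# With an explicit signature, sum1d is already compiled at decoration time,
# so these timings measure execution only.
numba_time = timeit(lambda: sum1d(my_double_array), number=100)
numpy_time = timeit(lambda: my_double_array.sum(), number=100)
print(f"numba: {numba_time:.4f}s  numpy: {numpy_time:.4f}s")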

To make the above example work for any compatible input type, the signature can be omitted, in which case Numba specializes the function automatically:

from numba import jit

@jit  # no signature: argument types are inferred and compiled on first call
def sum1d(my_array):
    ...
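Assuming the elided body matches the first example, each call with a new argument type triggers a separate compilation, and later calls reuse the cached machine code:

import numpy as np

sum1d(np.arange(10, dtype=np.float64))  # compiles a float64 specialization
sum1d(np.arange(10, dtype=np.int32))    # compiles a second, int32 specialization
sum1d(np.arange(10, dtype=np.float64))  # reuses the cached float64 version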

GPU Kernels

GPU kernels can be written in Python and run on the GPU. Two approaches are currently available:

NVIDIA CUDA

Example CUDA kernel, written in Python source code:

from numba import cuda

@cuda.jit
def increment_a_2D_array(an_array):
    x, y = cuda.grid(2)  # absolute position of this thread in the 2-D grid
    if x < an_array.shape[0] and y < an_array.shape[1]:
        an_array[x, y] += 1

numba.pydata.org/numba-doc/dev/cuda/overview.html
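A @cuda.jit kernel is launched with an explicit grid and block configuration. In the sketch below (array size and block shape are arbitrary choices), Numba transfers the NumPy array to the device and back automatically:

import numpy as np

an_array = np.zeros((256, 256))
threadsperblock = (16, 16)
# Round up so the grid covers the whole array.
blockspergrid = ((an_array.shape[0] + threadsperblock[0] - 1) // threadsperblock[0],
                 (an_array.shape[1] + threadsperblock[1] - 1) // threadsperblock[1])

# Launch: kernel[grid dimensions, block dimensions](arguments)
increment_a_2D_array[blockspergrid, threadsperblock](an_array)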

AMD HSA

Simply apply the @hsa.jit decorator; passing device=True marks the function as a device function, callable only from other HSA code:

from numba import hsa

@hsa.jit(device=True)  # device function: callable from HSA kernels, not from the host
def a_device_function(a, b):
    return a + b

numba.pydata.org/numba-doc/dev/hsa/overview.html
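A sketch of a kernel that calls the device function above, assuming the launch convention mirrors the CUDA one as in Numba's HSA documentation of that era (the kernel name, hsa.get_global_id usage, and sizes here are illustrative):

import numpy as np

@hsa.jit
def add_arrays(out, a, b):
    i = hsa.get_global_id(0)  # absolute index of this work-item
    if i < out.shape[0]:
        out[i] = a_device_function(a[i], b[i])

n = 1024
a = np.arange(n, dtype=np.float32)
b = np.arange(n, dtype=np.float32)
out = np.zeros_like(a)
add_arrays[n // 64, 64](out, a, b)  # 16 blocks of 64 work-items each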

Alternative approaches

Projects such as Cython, which compiles type-annotated Python-like code ahead of time, take alternative approaches to accelerating Python.[2][3]

References

  1. ^ "numba/numba: NumPy aware dynamic Python compiler using LLVM". GitHub.
  2. ^ "A Speed Comparison Of C, Julia, Python, Numba, and Cython on LU Factorization".
  3. ^ "Numba vs. Cython: Take 2".