Numba
| Original author(s) | Continuum Analytics |
| --- | --- |
| Developer(s) | Community project |
| Initial release | 15 August 2012 |
| Stable release | 0.50.0 / 10 June 2020 |
| Preview release | 0.50.0dev0 / 30 March 2020 |
| Repository | github.com/numba/numba |
| Written in | Python, C |
| Operating system | Cross-platform |
| Type | Technical computing |
| Website | numba.pydata.org |
Numba is an open-source just-in-time (JIT) compiler that translates a subset of Python and NumPy code into fast machine code using LLVM, via the llvmlite Python package. It offers a range of options for parallelising Python code for CPUs and GPUs, often with only minor code changes.
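For example, CPU parallelism typically requires only replacing range with numba.prange and passing parallel=True to the decorator. The following is a minimal sketch; the sum_of_squares function and its input are illustrative, not taken from the Numba documentation:

import numpy as np
from numba import njit, prange

# Illustrative example: parallel sum of squares. parallel=True asks Numba to
# auto-parallelise the function, and prange marks the loop whose iterations
# may be distributed across CPU cores; the scalar accumulation on "total"
# is handled as a reduction.
@njit(parallel=True)
def sum_of_squares(arr):
    total = 0.0
    for i in prange(arr.shape[0]):
        total += arr[i] * arr[i]
    return total

x = np.random.rand(1_000_000)
print(sum_of_squares(x))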
Numba was started by Travis Oliphant in 2012 and has since been under active development at https://github.com/numba/numba with frequent releases. The project is driven by developers at Anaconda, Inc., with support from DARPA, the Gordon and Betty Moore Foundation, Intel, Nvidia and AMD, and a community of contributors on GitHub.
Example
Numba can be used by simply applying the numba.jit decorator to a Python function that performs numerical computations:
import numba
import random

@numba.jit
def monte_carlo_pi(nsamples: int):
    # Estimate pi by sampling random points in the unit square and
    # counting how many fall inside the quarter circle of radius 1.
    acc = 0
    for i in range(nsamples):
        x = random.random()
        y = random.random()
        if (x**2 + y**2) < 1.0:
            acc += 1
    return 4.0 * acc / nsamples
The just-in-time compilation happens transparently the first time the function is called:
>>> monte_carlo_pi(1000000)
3.14
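The first call therefore includes compilation time, while later calls with the same argument types reuse the compiled machine code. A rough way to observe this, assuming the monte_carlo_pi function above is defined, is to time consecutive calls:

import time

# First call: Numba compiles monte_carlo_pi for integer input, then runs it.
start = time.perf_counter()
monte_carlo_pi(1_000_000)
print("first call (includes compilation):", time.perf_counter() - start)

# Second call: the already-compiled machine code is reused.
start = time.perf_counter()
monte_carlo_pi(1_000_000)
print("second call (compiled code only):", time.perf_counter() - start)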
The Numba website at https://numba.pydata.org contains many more examples, as well as information on how to get good performance from Numba.
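One commonly recommended option is nopython mode (numba.njit), in which the whole function must compile without falling back to the Python interpreter; this is usually where the large speed-ups come from. A minimal sketch follows; the dot function and the cache/fastmath options are illustrative choices, not required settings:

import numpy as np
from numba import njit

# Illustrative nopython-mode example: an explicit dot product over NumPy
# arrays. cache=True stores the compiled code on disk between runs, and
# fastmath=True lets LLVM relax strict floating-point semantics.
@njit(cache=True, fastmath=True)
def dot(a, b):
    acc = 0.0
    for i in range(a.shape[0]):
        acc += a[i] * b[i]
    return acc

a = np.random.rand(1000)
b = np.random.rand(1000)
print(dot(a, b))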
GPU support
Numba can compile Python functions to GPU code. Currently two backends are available:
- NVIDIA CUDA, see numba.pydata.org/numba-doc/dev/cuda
- AMD ROCm HSA, see numba.pydata.org/numba-doc/dev/roc
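For the CUDA backend, kernels are written as Python functions decorated with numba.cuda.jit and launched with an explicit grid configuration. A minimal sketch of element-wise addition follows; it assumes an NVIDIA GPU and CUDA driver are available, and the array size and launch parameters are illustrative:

import numpy as np
from numba import cuda

# Illustrative CUDA kernel: each GPU thread adds one pair of elements.
@cuda.jit
def add_kernel(x, y, out):
    i = cuda.grid(1)      # absolute index of this thread in the 1-D grid
    if i < x.size:        # guard threads that fall outside the array
        out[i] = x[i] + y[i]

n = 1_000_000
x = np.arange(n, dtype=np.float32)
y = 2.0 * x
out = np.zeros_like(x)

threads_per_block = 256
blocks_per_grid = (n + threads_per_block - 1) // threads_per_block
add_kernel[blocks_per_grid, threads_per_block](x, y, out)  # NumPy arrays are copied to and from the GPU
print(out[:5])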
Alternative approaches
Numba is one approach to making Python fast: it compiles specific functions that contain Python and NumPy code. Many alternative approaches to fast numeric computing with Python exist, such as Cython, TensorFlow, PyTorch, Chainer, Pythran, and PyPy.