SciPy Notes and Guide
- Address:
dkuhlman (at) reifywork (dot) com http://www.reifywork.com
- Revision:
- 2.1a
- Date:
- October 30, 2024
- copyright:
Copyright (c) 2005 Dave Kuhlman. All Rights Reserved. This software is subject to the provisions of the MIT License http://www.opensource.org/licenses/mit-license.php.
- abstract:
This document provides notes and a guide for the beginning NumPy/SciPy user. h5py and Matplotlib are also discussed.
Note: This document is quite old, although hopefully still of some use. I've written a newer document on some of the same topics plus additional ones. You can find it here: py-datasci-survey.html.
1 Introduction
One goal of this document is to expose as much of Numpy and SciPy as possible in a single document and a single Web page, so that you have some ability to search for the function and capability you need. Notice, too, that this document was generated from a plain text reStructuredText file. Look at the bottom of this HTML document for a link to that plain text document, so that you can download it and search it with the tool of your choice.
This document is a re-working of some of the information in the standard Numpy and SciPy documents at http://docs.scipy.org/doc/. I want to thank those who've worked on those projects for the wonderful work they've done on Numpy and SciPy and for very useful documentation.
1.1 What is SciPy?
SciPy is both (1) a way to handle large arrays of numerical data in Python (a capability it gets from Numpy) and (2) a way to apply scientific, statistical, and mathematical operations to those arrays of data. When combined with a package such as h5py or PyTables, it is also capable of storing and retrieving large arrays of data in an efficient way. Since many of its calculations are done in C extension modules, SciPy can be quite fast.
1.2 What is Numpy?
Numpy is an array and vector library. SciPy, as well as other Python numerical packages, is built on top of Numpy.
1.3 Resources and Help
You can find help and documentation here -- SciPy.org -- http://www.scipy.org/
This document mostly discusses the use of Numpy and SciPy. However, if you are processing numerical data, you may also want to look at these:
Pandas -- http://pandas.pydata.org/
Numexpr -- https://github.com/pydata/numexpr
Numba -- http://numba.pydata.org/
Sympy -- http://sympy.org/en/index.html
Matplotlib -- http://matplotlib.org/
IPython -- An advanced Python interactive shell -- http://ipython.org/
h5py -- HDF5 for Python -- http://www.h5py.org/
PyTables -- http://pytable.sourceforge.net/
SciPy -- http://www.scipy.org/
Numpy -- http://www.numpy.org/
Overviews and collections -- The following Web sites contain summaries of and links to many of the Python (numerical) data analysis packages:
PyData -- http://pydata.org/downloads/
Python for data -- Packages at Github -- https://github.com/pydata
Python for Data Analysis: The Landscape of Tutorials -- http://datacommunitydc.org/blog/2013/07/python-for-data-analysis-the-landscape-of-tutorials/
And there is a great deal of information that may be of help here: http://www.scipy.org/topical-software.html
1.4 Installation, configuration, etc
For instructions on installing Numpy and SciPy, look at the SciPy Web site.
2 Help on SciPy
Suggestion: Use IPython for your interactive Python shell.
You can create a custom environment for SciPy. To do so, do the following:
Create a scipy profile:
$ ipython profile create scipy
This will create a directory ~/.ipython/profile_scipy (if your IPython directory is in the default location).
Add a file in directory ~/.ipython/profile_scipy/startup that does some or all of the following (depending on which packages you have installed):
import sys
import os
import numpy as np
import scipy as sp
from scipy import stats
import matplotlib as mpl
import matplotlib.pyplot as plt
import h5py
from pprint import pprint as pp
xx = quit
IPython runs the *.py files in that startup directory in lexicographical order by name. See the README file in that directory.
Start ipython with the following:
$ ipython --profile=scipy
Doing so automatically loads Numpy, SciPy, matplotlib, etc.
At your leisure, look in the file ~/.ipython/profile_scipy/ipython_config.py (which was automatically generated when you created the scipy profile) for more configuration options.
And, perhaps you also want to create other IPython profiles for various categories of interactive Python use.
For information on configuring and customizing IPython, see the IPython documentation.
Get help on SciPy modules, classes, functions, and methods with the help built-in function. Or, with IPython, use the ? operator and the pdoc magic command. Examples:
In [6]: help(scipy.stats.tmean)
...
In [7]: scipy.stats.tmean?
...
In [8]: %pdoc scipy.stats.norm.pdf
Probability density function at x of the given RV.
...
To list the contents of modules, use dir(). Example:
In [24]:dir(scipy.stats)
Or, if you have started IPython using the above scipy profile, you can do:
$ ipython --profile=scipy
Python 2.7.6 (default, May 9 2014, 17:13:57)
Type "copyright", "credits" or "license" for more information.

IPython 3.0.0-dev -- An enhanced Interactive Python.
?         -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help      -> Python's own help system.
object?   -> Details about 'object', use 'object??' for extra details.

IPython profile: scipy

In [1]: dir(stats)
which will give you a list of the contents of the stats module.
3 Preface -- Assumptions for this document
For many of our examples, we assume the following imports:
import numpy as np
import scipy as sp
4 Arrays and Array Operations
A good introduction to arrays is in the "Numpy user guide": http://docs.scipy.org/doc/numpy/user/basics.html
Arrays are simple. An example:
In [2]: a1 = np.array([1, 2, 3, 4,])
In [3]: a2 = np.array([4, 3, 2, 1,])
In [4]: print a1
[1 2 3 4]
In [5]: a3 = a1 * a2
In [6]: print a3
[4 6 6 4]
  o
  o
  o
In [41]: a1 = np.zeros((4,5))
In [42]: print a1
[[0 0 0 0 0]
 [0 0 0 0 0]
 [0 0 0 0 0]
 [0 0 0 0 0]]
In [43]: a2 = np.empty((4,5))
In [44]: print a2
[[-1209828888 -1209828888 14 3 24]
 [ 24 6 6 6 6]
 [ 6 6 6 139519736 64]
 [ 9 139519712 11 12 139519680]]
In [45]: a3 = np.zeros((4,5), dtype='f')
In [46]: print a3
[[ 0.  0.  0.  0.  0.]
 [ 0.  0.  0.  0.  0.]
 [ 0.  0.  0.  0.  0.]
 [ 0.  0.  0.  0.  0.]]
To index into multi-dimension arrays, use either of the following:
In [37]: a2 = zeros((4,3), dtype='f')
In [38]: a2
Out[38]: NumPy array, format: long
[[ 0.  0.  0.]
 [ 0.  0.  0.]
 [ 0.  0.  0.]
 [ 0.  0.  0.]]
In [39]: a2[3,0] = 5.
In [40]: a2[2][1] = 6.
In [41]: a2
Out[41]: NumPy array, format: long
[[ 0.  0.  0.]
 [ 0.  0.  0.]
 [ 0.  6.  0.]
 [ 5.  0.  0.]]
But, indexing into a complex array seems a little counter intuitive:
In [31]: aa = zeros((5,4), dtype=complex64)
In [32]: aa
Out[32]:
array([[ 0.+0.j,  0.+0.j,  0.+0.j,  0.+0.j],
       [ 0.+0.j,  0.+0.j,  0.+0.j,  0.+0.j],
       [ 0.+0.j,  0.+0.j,  0.+0.j,  0.+0.j],
       [ 0.+0.j,  0.+0.j,  0.+0.j,  0.+0.j],
       [ 0.+0.j,  0.+0.j,  0.+0.j,  0.+0.j]], dtype=complex64)
In [33]: aa.real[0,0] = 1.0
In [34]: aa.imag[0,0] = 2.0
In [35]: aa
Out[35]:
array([[ 1.+2.j,  0.+0.j,  0.+0.j,  0.+0.j],
       [ 0.+0.j,  0.+0.j,  0.+0.j,  0.+0.j],
       [ 0.+0.j,  0.+0.j,  0.+0.j,  0.+0.j],
       [ 0.+0.j,  0.+0.j,  0.+0.j,  0.+0.j],
       [ 0.+0.j,  0.+0.j,  0.+0.j,  0.+0.j]], dtype=complex64)
Note that we use this:
aa.real[0,0] = 1.0
aa.imag[0,0] = 2.0
and not this:
aa[0,0].real = 1.0    # wrong
aa[0,0].imag = 2.0    # wrong
Package base has array helper functions. Examples:
import scipy

def test():
    a1 = scipy.arange(5, 10)
    print a1
    a2 = scipy.zeros((4,5), dtype='f')
    print a2

test()
Prints the following:
[5 6 7 8 9]
[[ 0.  0.  0.  0.  0.]
 [ 0.  0.  0.  0.  0.]
 [ 0.  0.  0.  0.  0.]
 [ 0.  0.  0.  0.  0.]]
For help, use something like the following:
help(scipy)
help(scipy.zeros)
Or in IPython:
scipy?
scipy.zeros?
You can also "reshape" and transpose arrays:
In [47]: a1 = arange(12)
In [48]: a1
Out[48]: array([ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11])
In [49]: a2 = a1.reshape(3,4)
In [50]: a2
Out[50]:
array([[ 0,  1,  2,  3],
       [ 4,  5,  6,  7],
       [ 8,  9, 10, 11]])
In [51]: a3 = a2.transpose()
In [52]: a3
Out[52]:
array([[ 0,  4,  8],
       [ 1,  5,  9],
       [ 2,  6, 10],
       [ 3,  7, 11]])
And, you can get the "shape" of an array:
In [53]: a1.shape
Out[53]: (12,)
In [54]: a2.shape
Out[54]: (3, 4)
In [55]: a3.shape
Out[55]: (4, 3)
And, you can change the shape of an array. For example:
In [79]: a = np.arange(12)
In [80]: a
Out[80]: array([ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11])
In [81]: a.shape = (2, 6)
In [82]: a
Out[82]:
array([[ 0,  1,  2,  3,  4,  5],
       [ 6,  7,  8,  9, 10, 11]])
In [83]: a.shape = 3, 4
In [84]: a
Out[84]:
array([[ 0,  1,  2,  3],
       [ 4,  5,  6,  7],
       [ 8,  9, 10, 11]])
You can "vectorize" a function. Doing so turns a function that takes a scalar as an argument into one when can process a vector. For example:
In [12]: def t(x):
   ....:     return x + 3
   ....:
In [13]: a1 = np.zeros((3, 4))
In [14]: a1
Out[14]:
array([[ 0.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  0.]])
In [15]: s = np.vectorize(t)
In [16]: a2 = s(a1)
In [17]: a2
Out[17]:
array([[ 3.,  3.,  3.,  3.],
       [ 3.,  3.,  3.,  3.],
       [ 3.,  3.,  3.,  3.]])
Note, however, that when you type np.vectorize? (in IPython), you will learn:
"The vectorize function is provided primarily for convenience, not for performance. The implementation is essentially a for loop."
4.1 The array interface and array protocol
The array interface is a specification for a developer who wishes to implement a replacement for Numpy-style arrays, e.g. those used in scipy.
The array protocol is the way in which, for example, a scipy user uses arrays. It includes such things as:
The ability to select elements in an array, for example, with a1[3] or a1[3,4].
The ability to select slices of an array, for example, with a1[1:3].
The ability to convert arrays without copying (see Converting arrays, below).
The iterator protocol -- The ability to iterate over items in an array.
The ability to respond to a request for its length, for example len(my_array).
You should be aware of the difference between (1) a1[3,4] and (2) a1[3][4]. Both work. However, the second results in two calls to the __getitem__ method.
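For example (a small sketch; both forms fetch the same element, but the second does it in two steps):

import numpy as np

a1 = np.arange(20).reshape(4, 5)
print(a1[3, 4])     # one __getitem__ call with the tuple (3, 4)
print(a1[3][4])     # two __getitem__ calls: row 3 first, then element 4 of that row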
4.2 Converting arrays
At times you may need to convert an array from one type to another, for example from a numpy array to a scipy array or the reverse. The array protocol will help. In particular, the asarray() function can convert an array without copying. Examples:
In [8]: import numpy
In [9]: import scipy
In [10]: a1 = zeros((4,6))
In [11]: type(a1)
Out[11]: <type 'scipy.ndarray'>
In [12]: a2 = numpy.asarray(a1)
In [13]: type(a2)
Out[13]: <type 'numpy.ndarray'>
In [14]: a3 = numpy.zeros((3,5))
In [15]: type(a3)
Out[15]: <type 'numpy.ndarray'>
In [16]: a4 = scipy.asarray(a3)
In [17]: type(a4)
Out[17]: <type 'scipy.ndarray'>
5 Input and Output
This section describes several approaches to doing I/O with Numpy/SciPy data:
Plain text files
CSV -- comma separated values
HDF5 -- hierarchical data files
There are several reasons for discussing these techniques: (1) CSV is a common file representation for numerical data, so you may need to be able to deal with it, and Python makes handling CSV files easy. (2) HDF5 is very well suited both for storing very large datasets and for organizing multiple datasets within a single file, and Python also makes handling HDF5 easy.
When thinking about problems in this area, it is good to keep in mind that there are easy ways to convert Python data structures into Numpy arrays and back. For example, the following converts a Python list of lists to a Numpy array, and then converts that array back into a list of lists:
In [11]: a = [[1,2,3], [4,5,6], [7,8,9]]
In [12]: print a
[[1, 2, 3], [4, 5, 6], [7, 8, 9]]
In [13]: b = np.array(a)
In [14]: print b
[[1 2 3]
 [4 5 6]
 [7 8 9]]
In [15]: c = [list(x) for x in list(b)]
In [16]: print c
[[1, 2, 3], [4, 5, 6], [7, 8, 9]]
See this for more on creating Numpy arrays: http://docs.scipy.org/doc/numpy/user/basics.creation.html
Comparing PyTables and h5py -- Here are several comparisons:
From the PyTables FAQ -- http://pytables.org/moin/FAQ#HowdoesPyTablescomparewiththeh5pyproject.3F
From the h5py FAQ -- http://docs.h5py.org/en/latest/faq.html#what-s-the-difference-between-h5py-and-pytables
5.1 Plain text
The Numpy function genfromtxt is helpful for reading a text file containing tabular data and creating an array from it. See: http://docs.scipy.org/doc/numpy/user/basics.io.genfromtxt.html
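Here is a minimal sketch of genfromtxt; the file name data.txt and its contents are made up for the example:

import numpy as np

# Suppose data.txt contains comma separated rows such as:
#     1.0, 2.0, 3.0
#     4.0, 5.0, 6.0
data = np.genfromtxt('data.txt', delimiter=',')
print(data.shape)
print(data)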
5.2 CSV
Here is an example that stores a 2-dimension array of floats into a CSV file, then later reads that CSV file back into a Numpy 2-D array:
import numpy as np
import csv

def test(filename):
    data1 = np.array([[1, 2, 3], [4, 5, 6]], dtype='f')
    print 'data1:', data1
    with open(filename, 'wb') as csvfile:
        spamwriter = csv.writer(csvfile)
        for row in data1:
            spamwriter.writerow(list(row))
    with open(filename, 'rb') as csvfile:
        csvreader = csv.reader(csvfile)
        data2 = []
        for row in csvreader:
            row = [float(x) for x in row]
            data2.append(row)
    data3 = np.array(data2, dtype=np.float_)
    print 'data3:', data3

test('tmp01.csv')
When we run the above code, we see:
$ python test.py
data1: [[ 1.  2.  3.]
 [ 4.  5.  6.]]
data3: [[ 1.  2.  3.]
 [ 4.  5.  6.]]
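Numpy itself can also read and write simple numerical CSV files, which avoids the explicit loop over rows. A small sketch using np.savetxt and np.loadtxt (the file name tmp02.csv is made up; the csv module remains the better choice for mixed or quoted data):

import numpy as np

data1 = np.array([[1, 2, 3], [4, 5, 6]], dtype='f')
np.savetxt('tmp02.csv', data1, delimiter=',')    # write rows of comma separated numbers
data2 = np.loadtxt('tmp02.csv', delimiter=',')   # read them back into a 2-D array
print(data2)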
5.3 h5py
h5py is similar to PyTables (see section PyTables and HDF5) in the sense that it gives Python access to HDF5 files. However, in contrast, the interface to h5py feels a little bit lower level while still being very easy to use. Learn more about h5py here: http://www.h5py.org/
The following is an example that creates a 2-dimensional array, writes that array to an HDF5 file, then reads that array from the file and prints it:
import numpy as np
import h5py

def test(filename, datasetname):
    data1 = np.array([[1, 2, 3], [4, 5, 6]], dtype=np.float_)
    print 'data1:', data1
    hdf5file = h5py.File(filename)
    dataset01 = hdf5file.create_dataset(
        datasetname, data1.shape, dtype=np.float_)
    dataset01[...] = data1
    print 'Dataset names:', list(hdf5file)
    data2 = np.array(hdf5file[datasetname])
    print 'data2:', data2

test('tmp01.hdf5', 'dataset01')
When we run the above code we see:
$ python test.py
data1: [[ 1.  2.  3.]
 [ 4.  5.  6.]]
Dataset names: [u'dataset01']
data2: [[ 1.  2.  3.]
 [ 4.  5.  6.]]
The following is an example that (1) creates several datasets, (2) stores those datasets in an HDF5 file in a structured way (a tree of datasets), and later (3) retrieves those datasets both (a) by walking the entire hdf5 file and (b) by path name:
# Create several datasets/arrays.
# Store the datasets; put some in a sub-group.
# Recursively walk the hdf5 file and retrieve each dataset.
# Retrieve a dataset using a path name to it.

import numpy as np
import h5py

def create_datasets(filename):
    data1 = np.arange(10, 19, 2, dtype=np.float_)
    data2 = np.arange(20, 29, 2, dtype=np.float_)
    data3 = np.arange(30, 39, 2, dtype=np.float_)
    data4 = np.arange(40, 49, 2, dtype=np.float_)
    print 'data1:', data1
    print 'data2:', data2
    print 'data3:', data3
    print 'data4:', data4
    hdf5file = h5py.File(filename)
    group01 = hdf5file.create_group('group01')
    dataset01 = hdf5file.create_dataset(
        'dataset01', data1.shape, dtype=np.float_)
    dataset01[...] = data1
    dataset02 = group01.create_dataset(
        'dataset02', data2.shape, dtype=np.float_)
    dataset02[...] = data2
    dataset03 = group01.create_dataset(
        'dataset03', data2.shape, dtype=np.float_)
    dataset03[...] = data3
    dataset04 = group01.create_dataset(
        'dataset04', data2.shape, dtype=np.float_)
    dataset04[...] = data4
    return hdf5file

def show_group(group):
    print 'group name:', group.name
    print '-' * 40
    for obj in group.values():
        if isinstance(obj, h5py.Group):
            show_group(obj)
        elif isinstance(obj, h5py.Dataset):
            data = np.array(obj)
            print 'dataset name:', obj.name
            print 'dataset data:', data
            print '-' * 20
        else:
            pass

def show_datasets(hdf5file):
    show_group(hdf5file)

def show_dataset_by_name(hdf5file, name):
    print 'dataset by name --'
    obj = hdf5file.get(name)
    data = np.array(obj)
    print 'dataset name:', obj.name
    print 'dataset data:', data

def test(filename):
    hdf5file = create_datasets(filename)
    print '=' * 60
    show_datasets(hdf5file)
    print '=' * 60
    show_dataset_by_name(hdf5file, '/group01/dataset03')

test('tmp01.hdf5')
When we run the above code, we'll see:
$ python test.py
data1: [ 10.  12.  14.  16.  18.]
data2: [ 20.  22.  24.  26.  28.]
data3: [ 30.  32.  34.  36.  38.]
data4: [ 40.  42.  44.  46.  48.]
============================================================
group name: /
----------------------------------------
dataset name: /dataset01
dataset data: [ 10.  12.  14.  16.  18.]
--------------------
group name: /group01
----------------------------------------
dataset name: /group01/dataset02
dataset data: [ 20.  22.  24.  26.  28.]
--------------------
dataset name: /group01/dataset03
dataset data: [ 30.  32.  34.  36.  38.]
--------------------
dataset name: /group01/dataset04
dataset data: [ 40.  42.  44.  46.  48.]
--------------------
============================================================
dataset by name --
dataset name: /group01/dataset03
dataset data: [ 30.  32.  34.  36.  38.]
Notes:
The hdf5 file itself is a group. It's the root group and its name is "/".
5.4 PyTables and HDF5
PyTables writes and reads HDF5 files. It supports saving and retrieving SciPy arrays to and from HDF5 files, and multiple arrays and separate datasets can be organized in nested groups (analogous to folders or directories).
You can learn more about PyTables at PyTables -- Hierarchical Datasets in Python.
5.4.1 Installing PyTables
Obtain PyTables from PyTables -- Hierarchical Datasets in Python.
For MS Windows, there are binary executable installers.
For Linux, install PyTables with something like the following (depending on the version):
$ tar xvzf orig/pytables-1.3.2.tar.gz
$ cd pytables-1.3.2/
$ python setup.py build_ext --inplace
$ sudo python setup.py install
When installing from source, there are possible problems with Pyrex (possibly in combination with Python 2.4). If you try installing PyTables before these problems are fixed and get errors while building and installing, take a look at the fixes suggested in the following messages:
5.4.2 Using PyTables
There is extensive documentation in the PyTables source distribution. See: pytables-?.?.?/doc/html/usersguide.html.
The source distribution also contains a number of examples. See: pytables-?.?.?/examples.
You can also find user documentation at the PyTables Web site. See PyTables User's Guide: http://www.pytables.org/docs/manual/. Of particular interest are:
Chapter 3: Tutorials
Chapter 4: Library Reference
From PyTables 1.3 on, PyTables supports NumPy (and hence SciPy) arrays right out of the box in Array objects. So, if you write a NumPy array, you will get a NumPy array back, and the same goes for Numeric and numarray arrays. In other objects (EArray, VLArray or Table) you can make use of the 'flavor' parameter in constructors to tell PyTables: "Hey, every time that I read from this object, please, return me an (rec)array with the appropriate flavor". Of course, PyTables will try hard to avoid doing data copies in conversions (i.e. the array protocol is used whenever possible).
For versions of PyTables prior to 1.3, PyTables can save and read only numarray arrays. You can still use PyTables with SciPy, but for versions of PyTables prior to 1.3, an array conversion is needed.
If you are using a recent version of SciPy and numarray, then you will be able to do this conversion without copying, using the array protocol. Converting a Scipy array to a numarray array:
numarray_array = numarray.asarray(scipy_array)
And, converting a numarray array to a SciPy array:
scipy_array = scipy.asarray(numarray_array)
If you insist on using older versions, a simple method is to convert a SciPy array to a Python list. For example:
In [17]: data1 = s.array([[1.0,2.0],[3.0,4.0],[5.0,6.0]])
In [18]: list1 = data1.tolist()
In [19]: print list1
[[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
However, conversion from numarray arrays to SciPy arrays is simple. This example:
import scipy
import numarray

def test():
    scipyArray = scipy.array([[1.0,2.0],[3.0,4.0],[5.0,6.0]])
    list1 = scipyArray.tolist()
    print 'list1:', list1
    numarrayArray = numarray.array([[1.0,2.0],[3.0,4.0],[5.0,6.0]])
    print 'numarrayArray:\n', numarrayArray
    scipyArray2 = scipy.array(numarrayArray)
    print 'type(scipyArray2):', type(scipyArray2)
    print 'scipyArray2:\n', scipyArray2

test()
prints the following:
list1: [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
numarrayArray:
[[ 1.  2.]
 [ 3.  4.]
 [ 5.  6.]]
type(scipyArray2): <type 'scipy.ndarray'>
scipyArray2:
[[ 1.  2.]
 [ 3.  4.]
 [ 5.  6.]]
Here is an example that uses sufficiently recent versions of PyTables and SciPy to write and read arrays:
#!/usr/bin/env python

import sys
import getopt
import scipy
import tables

Filename = 'testpytables2.h5'
Dataset1 = [[1, 2, 3, 4],[5, 6, 7, 8],[9, 10, 11, 12]]
Dataset2 = [[1.,2., 2.1],[3.,4.,4.1],[5.,6.,6.1]]

def test1():
    """Write out several sample datasets.
    """
    filename = Filename
    print "Creating file:", filename
    #filter = tables.Filters()
    h5file = tables.openFile(filename,
        mode = "w",
        title = "PyTables test file",
        # filters=filter
        )
    print '=' * 30
    print h5file
    print '=' * 30
    root = h5file.createGroup(h5file.root, "Datasets", "Test datasets")
    datasets = h5file.createGroup(root, "Phase1", "Test datasets")
    scipy_array = scipy.array(Dataset1)
    h5file.createArray(datasets, 'dataset1', scipy_array, "Test dataset #1")
    scipy_array = scipy.array(Dataset2)
    h5file.createArray(datasets, 'dataset2', scipy_array, "Test dataset #2")
    scipy_array = scipy.zeros((100,100))
    h5file.createArray(datasets, 'dataset3', scipy_array, "Test dataset #3")
    h5file.close()

#
# Read in and display the datasets.
#
def test2():
    filename = Filename
    h5file = tables.openFile(filename, 'r')
    dataset1Obj = h5file.getNode('/Datasets/Phase1', 'dataset1')
    dataset2Obj = h5file.getNode('/Datasets/Phase1', 'dataset2')
    print repr(dataset1Obj)
    print repr(dataset2Obj)
    dataset1Array = dataset1Obj.read()
    dataset2Array = dataset2Obj.read()
    print 'type(dataset1Array):', type(dataset1Array)
    print 'type(dataset2Array):', type(dataset2Array)
    print 'array1:\n', dataset1Array
    print 'array2:\n', dataset2Array
    # print several slices of our array.
    print 'slice [0]:', dataset1Array[0]
    print 'slice [0:2]:', dataset1Array[0:2]
    print 'slice [1, 0:4:2]:', dataset1Array[1, 0:4:2]
    h5file.close()

USAGE_TEXT = """
Usage:
    python test_pytables1.py [options]
Options:
    -h, --help      Display this help message.
    -t n, --test=n  Test number:
                        1: Write file
                        2: Read file
Example:
    python test_pytables1.py -t 1
    python test_pytables1.py -t 2
"""

def usage():
    print USAGE_TEXT
    sys.exit(-1)

def main():
    args = sys.argv[1:]
    try:
        opts, args = getopt.getopt(args, 'ht:', ['help', 'test=', ])
    except:
        usage()
    testno = 0
    for opt, val in opts:
        if opt in ('-h', '--help'):
            usage()
        elif opt in ('-t', '--test'):
            testno = int(val)
    if len(args) != 0:
        usage()
    if testno == 1:
        test1()
    elif testno == 2:
        test2()
    else:
        usage()

if __name__ == '__main__':
    main()
Run the above by typing the following at the command line:
$ python test_pytables2.py -t 1
$ python test_pytables2.py -t 2
Notes:
We use h5file.createGroup() to create a group in the HDF5 file and then to create another group nested inside that one. A group is the equivalent of a folder or directory. PyTables supports nested groups in HDF5 files.
To write an array, we use h5file.createArray().
To retrieve an array, we use getNode() followed by node.read().
Notice, also, that we can read slices of an array directly from disk using the array subscription and slicing notation. See function test2.
You may find both h5dump and h5ls (from hdf5-tools) helpful for displaying the nested data structures:
$ h5dump -n testpytables2.h5
HDF5 "testpytables2.h5" {
FILE_CONTENTS {
 group      /Datasets
 group      /Datasets/Phase1
 dataset    /Datasets/Phase1/dataset1
 dataset    /Datasets/Phase1/dataset2
 dataset    /Datasets/Phase1/dataset3
 }
}
See NCSA HDF5 Tools.
Other examples are provided in the PyTables distribution and in the PyTables tutorial.
6 Plotting and Graphics
We'll be learning the use of matplotlib.
You can find matplotlib and information about it here: http://matplotlib.org/
There are examples at the matplotlib Web site.
See the following for an explanation of the relationship between matplotlib, pyplot, and pylab: http://matplotlib.org/faq/usage_faq.html?highlight=pylab#matplotlib-pylab-and-pyplot-how-are-they-related
For interactive plotting in IPython, use:
$ ipython --pylab
6.1 Configuration
For information on using matplotlib in ipython, do:
$ ipython --help
For more information on configuring and running ipython with matplotlib/pylab, see:
To find out where Python is loading matplotlibrc from, do the following:
In [1]: import matplotlib
In [2]: matplotlib.matplotlib_fname()
6.2 Interactive use
Consider using:
$ ipython --pylab
Here are some convenient and useful commands while using matplotlib interactively:
clf() -- Clear the current figure.
cla() -- Clear the current axes.
draw() -- Redraw the current figure.
close() -- Close the current figure.
close(num) -- Close figure number num.
close(h) -- Close figure whose handle is h.
close('all') -- Close all figure windows.
Learn the following in order to create and control your plots (a short sketch follows this list):
figure(num) -- Create or activate figure number num.
subplot() -- Create a sub-plot within the current figure.
plot()
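Here is the short sketch promised above, combining figure, subplot, and plot (it uses matplotlib.pyplot explicitly rather than the interactive pylab namespace):

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0.0, 2.0 * np.pi, 100)
plt.figure(1)               # create (or activate) figure number 1
plt.subplot(2, 1, 1)        # 2 rows, 1 column, first sub-plot
plt.plot(x, np.sin(x))
plt.subplot(2, 1, 2)        # second sub-plot in the same figure
plt.plot(x, np.cos(x))
plt.show()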
Learn the following in order to annotate your plots:
xlabel(s) -- Add a label s to the x axis.
ylabel(s) -- Add a label s to the y axis.
title(s) -- Add a title s to the axes.
text(x, y, s) -- Add text s to the axes at x, y in data coords.
figtext(x, y, s) -- Add text to the figure at x, y in relative 0-1 figure coords.
Note that these annotation functions return a matplotlib Text object. You can use this object to get and set properties of the text. Example:
91: cla()
92: plot([1,2,3])
93: t = xlabel('increasing temp')
94: t.set_weight('bold')
95: t.set_color('b')
96: draw()
Notes:
The call to draw() may be needed in interactive mode (e.g. when in IPython) in order to update or refresh the drawing window.
Use dir() and help() (or "?" in IPython) to learn what methods are supported by the matplotlib Text object (or any other objects), what they do, what parameters they take, etc. Also, see the matplotlib user guide.
6.3 A simple plot
#!/usr/bin/env python """ Synopsis: Display a simple plot using pylab. Usage: python simple_plot.py <func_name> Examples: python simple_plot.py sin python simple_plot.py cos python simple_plot.py tan """ import sys import pylab as pl def simple(funcName): """Create a simple plot and save the plot to a .png file @param funcName: The name of a function, e.g. sin, cos, tan, ... """ t = pl.arange(0.0, 1.0+0.01, 0.01) funcStr = 'pl.%s(2*2*pl.pi*t)' % (funcName,) s = eval(funcStr) pl.plot(t, s) pl.xlabel('time (s)') pl.ylabel('voltage (mV)') pl.title('About as simple as it gets, folks') pl.grid(True) pl.savefig('simple_plot') pl.show() def usage(): sys.exit(__doc__) def main(): args = sys.argv[1:] if len(args) != 1: usage() simple(args[0]) if __name__ == '__main__': main()
Notes:
pl.plot() creates a plot to be displayed.
pl.xlabel(), pl.ylabel(), and pl.title() add annotations to our plot.
pl.grid(True) adds a grid.
pl.savefig() saves the figure. The file name extension determines the type of file generated. On my machine, .png, .svg, and .ps produce Portable Network Graphics (PNG), Scalable Vector Graphics (SVG), and PostScript files, respectively.
pl.show() shows (makes visible) all the plots we have created. This is typically the last line of your script. Note that in the shell (for example, ipython) when interactive mode is set in your matplotlibrc, show() is not needed.
6.4 Embedding matplotlib in a GUI application
You can display your matplotlib plots inside a GUI application written in Tk, WxPython, ...
There are examples in the examples directory of the matplotlib source distribution. These examples are also available at the matplotlib Web site. Go to matplotlib, then click on "Examples (zip)".
The example files for embedding are named:
embedding_in_gtk.py
embedding_in_gtk2.py
embedding_in_gtk3.py
embedding_in_qt.py
embedding_in_tk.py
embedding_in_tk2.py
embedding_in_wx.py
embedding_in_wx2.py
embedding_in_wx3.py
embedding_in_wx4.py
In general, you can create your plot, possibly testing it interactively in IPython, then use one of the examples for embedding the plot in the GUI tool of your choice.
7 SciPy basic functions
For a tutorial on this see: http://docs.scipy.org/doc/scipy/reference/tutorial/basic.html
That section covers interaction with Numpy, and in particular:
Index tricks -- Several quick ways to construct arrays. You may want to do:
In [1]: import numpy as np
In [2]: np.r_?
In [3]: np.c_?
Or, if not in IPython:
>>> help(np.r_) >>> help(np.c_)
A simple example:
In [50]: np.r_[-3:+4]
Out[50]: array([-3, -2, -1, 0, 1, 2, 3])
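np.c_ is similar but stacks its pieces as columns. A small sketch:

import numpy as np

# np.c_ stacks 1-D pieces as the columns of a 2-D array.
print(np.c_[np.array([1, 2, 3]), np.array([4, 5, 6])])
# [[1 4]
#  [2 5]
#  [3 6]]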
Shape manipulation -- Changing the shape of an array can sometimes be more efficient than creating a new array from the old data with a different shape. Example:
In [2]: a = np.array([[1, 2, 3, 4, 5, 6], [7, 8, 9, 10, 11, 12]])
In [3]: a
Out[3]:
array([[ 1,  2,  3,  4,  5,  6],
       [ 7,  8,  9, 10, 11, 12]])
In [4]: a.shape
Out[4]: (2, 6)
In [5]: a.shape = (3, 4)
In [6]: a
Out[6]:
array([[ 1,  2,  3,  4],
       [ 5,  6,  7,  8],
       [ 9, 10, 11, 12]])
Polynomials -- Here is a simple example that creates two Polynomial objects and then performs an operation on them:
import scipy as sp

def test():
    a = sp.poly1d([2, 4, 3, 5])
    b = sp.poly1d([5, 3, 2, 4])
    print a
    print b
    print '-' * 40
    print a * b

test()
When we run the above, we'll see:
$ python polynomial_example.py
   3     2
2 x + 4 x + 3 x + 5
   3     2
5 x + 3 x + 2 x + 4
----------------------------------------
    6      5      4      3      2
10 x + 26 x + 31 x + 50 x + 37 x + 22 x + 20
We can use these polynomials in algebraic expressions (as in the multiplication above); we can integrate them and differentiate them, and we can print them out in a "pretty" way (see above).
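For example, here is a small sketch of differentiating and integrating a polynomial (written with numpy's poly1d, which provides the same interface as the sp.poly1d used above):

import numpy as np

a = np.poly1d([2, 4, 3, 5])    # 2x**3 + 4x**2 + 3x + 5
print(a.deriv())               # derivative: 6x**2 + 8x + 3
print(a.integ())               # an antiderivative (integration constant 0)
print(a(2.0))                  # evaluate the polynomial at x = 2.0 --> 43.0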
Vectorizing functions (vectorize) -- The capability to convert a function that takes scalars as arguments into a function that takes arrays as arguments and is applied element-wise to those arrays to produce a new array. Here is a simple example using a function that takes 3 scalar arguments, from which we produce a "vectorized" function that takes 3 array arguments:
import numpy as np

def f1(x, y, z):
    return x + y + z

def test():
    a = np.arange(12)
    b = np.arange(10, 22)
    c = np.arange(20, 32)
    print a
    print b
    print c
    print '-' * 40
    g1 = np.vectorize(f1)
    print g1(a, b, c)

test()
When run, this produces:
$ python vectorize_example.py
[ 0  1  2  3  4  5  6  7  8  9 10 11]
[10 11 12 13 14 15 16 17 18 19 20 21]
[20 21 22 23 24 25 26 27 28 29 30 31]
----------------------------------------
[30 33 36 39 42 45 48 51 54 57 60 63]
Type handling -- Techniques for testing and changing the type of (the elements in) an array. These include (1) testing to determine whether an object is a (Python) scalar and (2) creating a new array from an existing array with each element "cast" to a new type. And, don't forget that you can get the type of an array by using my_array.dtype.
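A small sketch of two of these operations (np.isscalar and the astype method, along with the dtype attribute):

import numpy as np

print(np.isscalar(3.5))                 # True
print(np.isscalar(np.array([3.5])))     # False

a = np.array([1.7, 2.2, 3.9])
print(a.dtype)                          # float64
b = a.astype(np.int32)                  # new array, each element cast to int32
print(b)                                # [1 2 3]
print(b.dtype)                          # int32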
Other useful functions -- The functions described in this section include the following (a short sketch follows this list):
linspace and logspace -- Return equally spaced samples.
select -- Selecting elements from an array based on conditions.
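Here is the short sketch mentioned above, covering linspace, logspace, and select:

import numpy as np

print(np.linspace(0.0, 1.0, 5))     # [0.   0.25 0.5  0.75 1.  ]
print(np.logspace(0, 3, 4))         # [   1.   10.  100. 1000.]

a = np.arange(10)
# take a where a < 3, a**2 where a >= 5, and the default 0 otherwise
print(np.select([a < 3, a >= 5], [a, a ** 2], default=0))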
8 Mathematical, Statistical, and Scientific Capabilities
This section lists and gives brief descriptions of the contents of scipy.
Much of the following documentation was generated from within IPython. I used either (1) the help(obj) built-in or (2) IPython's ? operator to view documentation on a module, class, etc, and then, where necessary, used the "s" command from within less (my pager) to save to a file. I've also done some light editing to reformat this for reST (reStructuredText), which is the format for the source of this document. At the end of this document you will find a link that will enable you to view the plain text reST that is the source from which this document was produced. For more on Docutils and reST see: http://docutils.sourceforge.net/
8.1 scipy (top-level)
SciPy: A scientific computing package for Python
Available subpackages:
__core_config__
__scipy_config__
__svn_version__
_import_tools
core_version
fft, fft2, fftn -- discrete Fourier transform. Also see fftshift, fftfreq.
fftpack (package)
integrate (package)
interpolate (package)
linalg (package)
old__init__
optimize (package)
scipy_version
sparse (package)
special (package)
stats (package)
8.2 base
scipy provides functions for defining a multi-dimensional array and useful procedures for numerical computation. Use the following to get a list of the members of scipy:
>>> import scipy >>> dir(scipy)
Functions:
array - NumPy Array construction
zeros - Return an array of all zeros
empty - Return an uninitialized array
shape - Return shape of sequence or array
rank - Return number of dimensions
size - Return number of elements in entire array or a certain dimension
fromstring - Construct array from (byte) string
take - Select sub-arrays using sequence of indices
put - Set sub-arrays using sequence of 1-D indices
putmask - Set portion of arrays using a mask
reshape - Return array with new shape
repeat - Repeat elements of array
choose - Construct new array from indexed array tuple
cross_correlate - Correlate two 1-d arrays
searchsorted - Search for element in 1-d array
sum - Total sum over a specified dimension
average - Average, possibly weighted, over axis or array.
cumsum - Cumulative sum over a specified dimension
product - Total product over a specified dimension
cumproduct - Cumulative product over a specified dimension
alltrue - Logical and over an entire axis
sometrue - Logical or over an entire axis
allclose - Tests if sequences are essentially equal
More Functions:
arrayrange (arange) - Return regularly spaced array
asarray - Guarantee NumPy array
sarray - Guarantee a NumPy array that keeps precision
convolve - Convolve two 1-d arrays
swapaxes - Exchange axes
concatenate - Join arrays together
transpose - Permute axes
sort - Sort elements of array
argsort - Indices of sorted array
argmax - Index of largest value
argmin - Index of smallest value
innerproduct - Innerproduct of two arrays
dot - Dot product (matrix multiplication)
outerproduct - Outerproduct of two arrays
resize - Return array with arbitrary new shape
indices - Tuple of indices
fromfunction - Construct array from universal function
diagonal - Return diagonal array
trace - Trace of array
dump - Dump array to file object (pickle)
dumps - Return pickled string representing data
load - Return array stored in file object
loads - Return array from pickled string
ravel - Return array as 1-D
nonzero - Indices of nonzero elements for 1-D array
shape - Shape of array
where - Construct array from binary result
compress - Elements of array where condition is true
clip - Clip array between two values
ones - Array of all ones
identity - 2-D identity array (matrix)
(Universal) Math Functions -- A universal function (or ufunc for short) is a function that operates on ndarrays in an element-by-element fashion. Note that you can create custom universal functions with np.vectorize. For more information see: http://docs.scipy.org/doc/numpy/reference/ufuncs.html. For example:
In [75]: sp.absolute(np.array([-1, 2, -3, 4, -5, 6, -7]))
Out[75]: array([1, 2, 3, 4, 5, 6, 7])
absolute
add
arccos
arccosh
arcsin
arcsinh
arctan
arctan2
arctanh
around
bitwise_and
bitwise_or
bitwise_xor
ceil
conjugate
cos
cosh
divide
divide_safe
equal
exp
fabs
floor
fmod
greater
greater_equal
hypot
invert
left_shift
less
less_equal
log
log10
logical_and
logical_not
logical_or
logical_xor
maximum
minimum
multiply
negative
not_equal
power
right_shift
sign
sin
sinh
sqrt
subtract
tan
tanh
Basic functions used by several sub-packages and useful to have in the main name-space
Type handling:
iscomplexobj -- Test for complex object, scalar result
isrealobj -- Test for real object, scalar result
iscomplex -- Test for complex elements, array result
isreal -- Test for real elements, array result
imag -- Imaginary part
real -- Real part
real_if_close -- Turns complex number with tiny imaginary part to real
isneginf -- Tests for negative infinity
isposinf -- Tests for positive infinity
isnan -- Tests for nans
isinf -- Tests for infinity
isfinite -- Tests for finite numbers
isscalar -- True if argument is a scalar
nan_to_num -- Replaces NaN's with 0 and infinities with large numbers
cast -- Dictionary of functions to force cast to each type
common_type -- Determine the 'minimum common type code' for a group of arrays
mintypecode -- Return minimal allowed common typecode.
Index tricks:
mgrid -- Method which allows easy construction of N-d 'mesh-grids'
r_ -- Append and construct arrays: turns slice objects into ranges and concatenates them, for 2d arrays appends rows.
index_exp -- Konrad Hinsen's index_expression class instance which can be useful for building complicated slicing syntax.
Useful functions:
select -- Extension of where to multiple conditions and choices
extract -- Extract 1d array from flattened array according to mask
insert -- Insert 1d array of values into Nd array according to mask
linspace -- Evenly spaced samples in linear space
logspace -- Evenly spaced samples in logarithmic space
fix -- Round x to nearest integer towards zero
mod -- Modulo mod(x,y) = x % y except keeps sign of y
amax -- Array maximum along axis
amin -- Array minimum along axis
ptp -- Array max-min along axis
cumsum -- Cumulative sum along axis
prod -- Product of elements along axis
cumprod -- Cumulative product along axis
diff -- Discrete differences along axis
angle -- Returns angle of complex argument
unwrap -- Unwrap phase along given axis (1-d algorithm)
sort_complex -- Sort a complex-array (based on real, then imaginary)
trim_zeros -- Trim the leading and trailing zeros from 1D array.
vectorize -- a class that wraps a Python function taking scalar arguments into a generalized function which can handle arrays of arguments using the broadcast rules of numerix Python.
alter_numeric -- enhance numeric array behavior
restore_numeric -- restore alterations done by alter_numeric
Shape manipulation (a short sketch follows this list):
squeeze -- Return a with length-one dimensions removed.
atleast_1d -- Force arrays to be > 1D
atleast_2d -- Force arrays to be > 2D
atleast_3d -- Force arrays to be > 3D
vstack -- Stack arrays vertically (row on row)
hstack -- Stack arrays horizontally (column on column)
column_stack -- Stack 1D arrays as columns into 2D array
dstack -- Stack arrays depthwise (along third dimension)
split -- Divide array into a list of sub-arrays
hsplit -- Split into columns
vsplit -- Split into rows
dsplit -- Split along third dimension
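Here is the short sketch mentioned above, covering vstack, hstack, and squeeze:

import numpy as np

a = np.array([1, 2, 3])
b = np.array([4, 5, 6])
print(np.vstack((a, b)))        # [[1 2 3]
                                #  [4 5 6]]
print(np.hstack((a, b)))        # [1 2 3 4 5 6]

c = np.zeros((1, 3, 1))
print(np.squeeze(c).shape)      # (3,) -- length-one dimensions removed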
Matrix (2d array) manipulations:
fliplr -- 2D array with columns flipped
flipud -- 2D array with rows flipped
rot90 -- Rotate a 2D array a multiple of 90 degrees
eye -- Return a 2D array with ones down a given diagonal
diag -- Construct a 2D array from a vector, or return a given diagonal from a 2D array.
mat -- Construct a Matrix
bmat -- Build a Matrix from blocks
For information on the differences between arrays and matrices, see the section titled "'array' or 'matrix'? Which should I use?" at: http://wiki.scipy.org/NumPy_for_Matlab_Users
Polynomials:
poly1d -- A one-dimensional polynomial class
poly -- Return polynomial coefficients from roots
roots -- Find roots of polynomial given coefficients
polyint -- Integrate polynomial
polyder -- Differentiate polynomial
polyadd -- Add polynomials
polysub -- Subtract polynomials
polymul -- Multiply polynomials
polydiv -- Divide polynomials
polyval -- Evaluate polynomial at given argument
Import tricks:
ppimport -- Postpone module import until trying to use it
ppimport_attr -- Postpone module import until trying to use its attribute
ppresolve -- Import postponed module and return it.
Machine arithmetics:
machar_single -- MachAr instance storing the parameters of system single precision floating point arithmetic
machar_double -- MachAr instance storing the parameters of system double precision floating point arithmetic
8.3 Constants
Physical and mathematical constants and units.
Access via:
In [1]: from scipy import constants
Mathematical constants:
pi -- Pi
golden -- Golden ratio
Physical constants:
c -- speed of light in vacuum
mu_0 -- the magnetic constant mu_0
epsilon_0 -- the electric constant (vacuum permittivity), epsilon_0
h -- the Planck constant h
hbar -- hbar = h/(2pi)
G -- Newtonian constant of gravitation
g -- standard acceleration of gravity
e -- elementary charge
R -- molar gas constant
alpha -- fine-structure constant
N_A -- Avogadro constant
k -- Boltzmann constant
sigma -- Stefan-Boltzmann constant sigma
Wien -- Wien displacement law constant
Rydberg -- Rydberg constant
m_e -- electron mass
m_p -- proton mass
m_n -- neutron mass
Constants database -- In addition to the above variables, scipy.constants also contains the 2010 CODATA recommended values database containing more physical constants: http://physics.nist.gov/cuu/Constants/index.html
value(key) -- Value in physical_constants indexed by key
unit(key) -- Unit in physical_constants indexed by key
precision(key) -- Relative precision in physical_constants indexed by key
find([sub, disp]) -- Return list of codata.physical_constant keys containing a given string.
ConstantWarning -- Accessing a constant no longer in current CODATA data set
scipy.constants.physical_constants -- Dictionary of physical constants, of the format physical_constants[name] = (value, unit, uncertainty)
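A small sketch of looking things up in the constants database (value, unit, precision, find, and the physical_constants dictionary):

from scipy import constants

print(constants.c)                                    # speed of light in vacuum, m s^-1
print(constants.value('electron mass'))               # value from the CODATA database
print(constants.unit('electron mass'))                # 'kg'
print(constants.precision('electron mass'))           # relative precision
print(constants.find('Boltzmann'))                    # keys containing the string 'Boltzmann'
print(constants.physical_constants['electron mass'])  # (value, unit, uncertainty)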
Available constants:
alpha particle mass -- 6.64465675e-27 kg
alpha particle mass energy equivalent -- 5.97191967e-10 J
alpha particle mass energy equivalent in MeV -- 3727.37924 MeV
alpha particle mass in u -- 4.00150617913 u
alpha particle molar mass -- 0.00400150617912 kg mol^-1
alpha particle-electron mass ratio -- 7294.2995361
alpha particle-proton mass ratio -- 3.97259968933
Angstrom star -- 1.00001495e-10 m
atomic mass constant -- 1.660538921e-27 kg
atomic mass constant energy equivalent -- 1.492417954e-10 J
atomic mass constant energy equivalent in MeV -- 931.494061 MeV
atomic mass unit-electron volt relationship -- 931494061.0 eV
atomic mass unit-hartree relationship -- 34231776.845 E_h
atomic mass unit-hertz relationship -- 2.2523427168e+23 Hz
atomic mass unit-inverse meter relationship -- 7.5130066042e+14 m^-1
atomic mass unit-joule relationship -- 1.492417954e-10 J
atomic mass unit-kelvin relationship -- 1.08095408e+13 K
atomic mass unit-kilogram relationship -- 1.660538921e-27 kg
atomic unit of 1st hyperpolarizability -- 3.206361449e-53 C^3 m^3 J^-2
atomic unit of 2nd hyperpolarizability -- 6.23538054e-65 C^4 m^4 J^-3
atomic unit of action -- 1.054571726e-34 J s
atomic unit of charge -- 1.602176565e-19 C
atomic unit of charge density -- 1.081202338e+12 C m^-3
atomic unit of current -- 0.00662361795 A
atomic unit of electric dipole mom. -- 8.47835326e-30 C m
atomic unit of electric field -- 5.14220652e+11 V m^-1
atomic unit of electric field gradient -- 9.717362e+21 V m^-2
atomic unit of electric polarizability -- 1.6487772754e-41 C^2 m^2 J^-1
atomic unit of electric potential -- 27.21138505 V
atomic unit of electric quadrupole mom. -- 4.486551331e-40 C m^2
atomic unit of energy -- 4.35974434e-18 J
atomic unit of force -- 8.23872278e-08 N
atomic unit of length -- 5.2917721092e-11 m
atomic unit of mag. dipole mom. -- 1.854801936e-23 J T^-1
atomic unit of mag. flux density -- 235051.7464 T
atomic unit of magnetizability -- 7.891036607e-29 J T^-2
atomic unit of mass -- 9.10938291e-31 kg
atomic unit of mom.um -- 1.99285174e-24 kg m s^-1
atomic unit of permittivity -- 1.11265005605e-10 F m^-1
atomic unit of time -- 2.4188843265e-17 s
atomic unit of velocity -- 2187691.26379 m s^-1
Avogadro constant -- 6.02214129e+23 mol^-1
Bohr magneton -- 9.27400968e-24 J T^-1
Bohr magneton in eV/T -- 5.7883818066e-05 eV T^-1
Bohr magneton in Hz/T -- 13996245550.0 Hz T^-1
Bohr magneton in inverse meters per tesla -- 46.6864498 m^-1 T^-1
Bohr magneton in K/T -- 0.67171388 K T^-1
Bohr radius -- 5.2917721092e-11 m
Boltzmann constant -- 1.3806488e-23 J K^-1
Boltzmann constant in eV/K -- 8.6173324e-05 eV K^-1
Boltzmann constant in Hz/K -- 20836618000.0 Hz K^-1
Boltzmann constant in inverse meters per kelvin -- 69.503476 m^-1 K^-1
characteristic impedance of vacuum -- 376.730313462 ohm
classical electron radius -- 2.8179403267e-15 m
Compton wavelength -- 2.4263102389e-12 m
Compton wavelength over 2 pi -- 3.86159268e-13 m
conductance quantum -- 7.7480917346e-05 S
conventional value of Josephson constant -- 4.835979e+14 Hz V^-1
conventional value of von Klitzing constant -- 25812.807 ohm
Cu x unit -- 1.00207697e-13 m
deuteron g factor -- 0.8574382308
deuteron mag. mom. -- 4.33073489e-27 J T^-1
deuteron mag. mom. to Bohr magneton ratio -- 0.0004669754556
deuteron mag. mom. to nuclear magneton ratio -- 0.8574382308
deuteron mass -- 3.34358348e-27 kg
deuteron mass energy equivalent -- 3.00506297e-10 J
deuteron mass energy equivalent in MeV -- 1875.612859 MeV
deuteron mass in u -- 2.01355321271 u
deuteron molar mass -- 0.00201355321271 kg mol^-1
deuteron rms charge radius -- 2.1424e-15 m
deuteron-electron mag. mom. ratio -- -0.0004664345537
deuteron-electron mass ratio -- 3670.4829652
deuteron-neutron mag. mom. ratio -- -0.44820652
deuteron-proton mag. mom. ratio -- 0.307012207
deuteron-proton mass ratio -- 1.99900750097
electric constant -- 8.85418781762e-12 F m^-1
electron charge to mass quotient -- -1.758820088e+11 C kg^-1
electron g factor -- -2.00231930436
electron gyromag. ratio -- 1.760859708e+11 s^-1 T^-1
electron gyromag. ratio over 2 pi -- 28024.95266 MHz T^-1
electron mag. mom. -- -9.2847643e-24 J T^-1
electron mag. mom. anomaly -- 0.00115965218076
electron mag. mom. to Bohr magneton ratio -- -1.00115965218
electron mag. mom. to nuclear magneton ratio -- -1838.2819709
electron mass -- 9.10938291e-31 kg
electron mass energy equivalent -- 8.18710506e-14 J
electron mass energy equivalent in MeV -- 0.510998928 MeV
electron mass in u -- 0.00054857990946 u
electron molar mass -- 5.4857990946e-07 kg mol^-1
electron to alpha particle mass ratio -- 0.000137093355578
electron to shielded helion mag. mom. ratio -- 864.058257
electron to shielded proton mag. mom. ratio -- -658.2275971
electron volt -- 1.602176565e-19 J
electron volt-atomic mass unit relationship -- 1.07354415e-09 u
electron volt-hartree relationship -- 0.03674932379 E_h
electron volt-hertz relationship -- 2.417989348e+14 Hz
electron volt-inverse meter relationship -- 806554.429 m^-1
electron volt-joule relationship -- 1.602176565e-19 J
electron volt-kelvin relationship -- 11604.519 K
electron volt-kilogram relationship -- 1.782661845e-36 kg
electron-deuteron mag. mom. ratio -- -2143.923498
electron-deuteron mass ratio -- 0.00027244371095
electron-helion mass ratio -- 0.00018195430761
electron-muon mag. mom. ratio -- 206.7669896
electron-muon mass ratio -- 0.00483633166
electron-neutron mag. mom. ratio -- 960.9205
electron-neutron mass ratio -- 0.00054386734461
electron-proton mag. mom. ratio -- -658.2106848
electron-proton mass ratio -- 0.00054461702178
electron-tau mass ratio -- 0.000287592
electron-triton mass ratio -- 0.00018192000653
elementary charge -- 1.602176565e-19 C
elementary charge over h -- 2.417989348e+14 A J^-1
Faraday constant -- 96485.3365 C mol^-1
Faraday constant for conventional electric current -- 96485.3321 C_90 mol^-1
Fermi coupling constant -- 1.166364e-05 GeV^-2
fine-structure constant -- 0.0072973525698
first radiation constant -- 3.74177153e-16 W m^2
first radiation constant for spectral radiance -- 1.191042869e-16 W m^2 sr^-1
Hartree energy -- 4.35974434e-18 J
Hartree energy in eV -- 27.21138505 eV
hartree-atomic mass unit relationship -- 2.9212623246e-08 u
hartree-electron volt relationship -- 27.21138505 eV
hartree-hertz relationship -- 6.57968392073e+15 Hz
hartree-inverse meter relationship -- 21947463.1371 m^-1
hartree-joule relationship -- 4.35974434e-18 J
hartree-kelvin relationship -- 315775.04 K
hartree-kilogram relationship -- 4.85086979e-35 kg
helion g factor -- -4.255250613
helion mag. mom. -- -1.074617486e-26 J T^-1
helion mag. mom. to Bohr magneton ratio -- -0.001158740958
helion mag. mom. to nuclear magneton ratio -- -2.127625306
helion mass -- 5.00641234e-27 kg
helion mass energy equivalent -- 4.49953902e-10 J
helion mass energy equivalent in MeV -- 2808.391482 MeV
helion mass in u -- 3.0149322468 u
helion molar mass -- 0.0030149322468 kg mol^-1
helion-electron mass ratio -- 5495.8852754
helion-proton mass ratio -- 2.9931526707
hertz-atomic mass unit relationship -- 4.4398216689e-24 u
hertz-electron volt relationship -- 4.135667516e-15 eV
hertz-hartree relationship -- 1.519829846e-16 E_h
hertz-inverse meter relationship -- 3.33564095198e-09 m^-1
hertz-joule relationship -- 6.62606957e-34 J
hertz-kelvin relationship -- 4.7992434e-11 K
hertz-kilogram relationship -- 7.37249668e-51 kg
inverse fine-structure constant -- 137.035999074
inverse meter-atomic mass unit relationship -- 1.3310250512e-15 u
inverse meter-electron volt relationship -- 1.23984193e-06 eV
inverse meter-hartree relationship -- 4.55633525276e-08 E_h
inverse meter-hertz relationship -- 299792458.0 Hz
inverse meter-joule relationship -- 1.986445684e-25 J
inverse meter-kelvin relationship -- 0.01438777 K
inverse meter-kilogram relationship -- 2.210218902e-42 kg
inverse of conductance quantum -- 12906.4037217 ohm
Josephson constant -- 4.8359787e+14 Hz V^-1
joule-atomic mass unit relationship -- 6700535850.0 u
joule-electron volt relationship -- 6.24150934e+18 eV
joule-hartree relationship -- 2.29371248e+17 E_h
joule-hertz relationship -- 1.509190311e+33 Hz
joule-inverse meter relationship -- 5.03411701e+24 m^-1
joule-kelvin relationship -- 7.2429716e+22 K
joule-kilogram relationship -- 1.11265005605e-17 kg
kelvin-atomic mass unit relationship -- 9.2510868e-14 u
kelvin-electron volt relationship -- 8.6173324e-05 eV
kelvin-hartree relationship -- 3.1668114e-06 E_h
kelvin-hertz relationship -- 20836618000.0 Hz
kelvin-inverse meter relationship -- 69.503476 m^-1
kelvin-joule relationship -- 1.3806488e-23 J
kelvin-kilogram relationship -- 1.536179e-40 kg
kilogram-atomic mass unit relationship -- 6.02214129e+26 u
kilogram-electron volt relationship -- 5.60958885e+35 eV
kilogram-hartree relationship -- 2.061485968e+34 E_h
kilogram-hertz relationship -- 1.356392608e+50 Hz
kilogram-inverse meter relationship -- 4.52443873e+41 m^-1
kilogram-joule relationship -- 8.98755178737e+16 J
kilogram-kelvin relationship -- 6.5096582e+39 K
lattice parameter of silicon -- 5.431020504e-10 m
Loschmidt constant (273.15 K, 100 kPa) -- 2.6516462e+25 m^-3
Loschmidt constant (273.15 K, 101.325 kPa) -- 2.6867805e+25 m^-3
mag. constant -- 1.25663706144e-06 N A^-2
mag. flux quantum -- 2.067833758e-15 Wb
Mo x unit -- 1.00209952e-13 m
molar gas constant -- 8.3144621 J mol^-1 K^-1
molar mass constant -- 0.001 kg mol^-1
molar mass of carbon-12 -- 0.012 kg mol^-1
molar Planck constant -- 3.9903127176e-10 J s mol^-1
molar Planck constant times c -- 0.119626565779 J m mol^-1
molar volume of ideal gas (273.15 K, 100 kPa) -- 0.022710953 m^3 mol^-1
molar volume of ideal gas (273.15 K, 101.325 kPa) -- 0.022413968 m^3 mol^-1
molar volume of silicon -- 1.205883301e-05 m^3 mol^-1
muon Compton wavelength -- 1.173444103e-14 m
muon Compton wavelength over 2 pi -- 1.867594294e-15 m
muon g factor -- -2.0023318418
muon mag. mom. -- -4.49044807e-26 J T^-1
muon mag. mom. anomaly -- 0.00116592091
muon mag. mom. to Bohr magneton ratio -- -0.00484197044
muon mag. mom. to nuclear magneton ratio -- -8.89059697
muon mass -- 1.883531475e-28 kg
muon mass energy equivalent -- 1.692833667e-11 J
muon mass energy equivalent in MeV -- 105.6583715 MeV
muon mass in u -- 0.1134289267 u
muon molar mass -- 0.0001134289267 kg mol^-1
muon-electron mass ratio -- 206.7682843
muon-neutron mass ratio -- 0.1124545177
muon-proton mag. mom. ratio -- -3.183345107
muon-proton mass ratio -- 0.1126095272
muon-tau mass ratio -- 0.0594649
natural unit of action -- 1.054571726e-34 J s
natural unit of action in eV s -- 6.58211928e-16 eV s
natural unit of energy -- 8.18710506e-14 J
natural unit of energy in MeV -- 0.510998928 MeV
natural unit of length -- 3.86159268e-13 m
natural unit of mass -- 9.10938291e-31 kg
natural unit of mom.um -- 2.73092429e-22 kg m s^-1
natural unit of mom.um in MeV/c -- 0.510998928 MeV/c
natural unit of time -- 1.28808866833e-21 s
natural unit of velocity -- 299792458.0 m s^-1
neutron Compton wavelength -- 1.3195909068e-15 m
neutron Compton wavelength over 2 pi -- 2.1001941568e-16 m
neutron g factor -- -3.82608545
neutron gyromag. ratio -- 183247179.0 s^-1 T^-1
neutron gyromag. ratio over 2 pi -- 29.1646943 MHz T^-1
neutron mag. mom. -- -9.6623647e-27 J T^-1
neutron mag. mom. to Bohr magneton ratio -- -0.00104187563
neutron mag. mom. to nuclear magneton ratio -- -1.91304272
neutron mass -- 1.674927351e-27 kg
neutron mass energy equivalent -- 1.505349631e-10 J
neutron mass energy equivalent in MeV -- 939.565379 MeV
neutron mass in u -- 1.008664916 u
neutron molar mass -- 0.001008664916 kg mol^-1
neutron to shielded proton mag. mom. ratio -- -0.68499694
neutron-electron mag. mom. ratio -- 0.00104066882
neutron-electron mass ratio -- 1838.6836605
neutron-muon mass ratio -- 8.892484
neutron-proton mag. mom. ratio -- -0.68497934
neutron-proton mass difference -- 2.30557392e-30
neutron-proton mass difference energy equivalent -- 2.0721465e-13
neutron-proton mass difference energy equivalent in MeV -- 1.29333217
neutron-proton mass difference in u -- 0.00138844919
neutron-proton mass ratio -- 1.00137841917
neutron-tau mass ratio -- 0.52879
Newtonian constant of gravitation -- 6.67384e-11 m^3 kg^-1 s^-2
Newtonian constant of gravitation over h-bar c -- 6.70837e-39 (GeV/c^2)^-2
nuclear magneton -- 5.05078353e-27 J T^-1
nuclear magneton in eV/T -- 3.1524512605e-08 eV T^-1
nuclear magneton in inverse meters per tesla -- 0.02542623527 m^-1 T^-1
nuclear magneton in K/T -- 0.00036582682 K T^-1
nuclear magneton in MHz/T -- 7.62259357 MHz T^-1
Planck constant -- 6.62606957e-34 J s
Planck constant in eV s -- 4.135667516e-15 eV s
Planck constant over 2 pi -- 1.054571726e-34 J s
Planck constant over 2 pi in eV s -- 6.58211928e-16 eV s
Planck constant over 2 pi times c in MeV fm -- 197.3269718 MeV fm
Planck length -- 1.616199e-35 m
Planck mass -- 2.17651e-08 kg
Planck mass energy equivalent in GeV -- 1.220932e+19 GeV
Planck temperature -- 1.416833e+32 K
Planck time -- 5.39106e-44 s
proton charge to mass quotient -- 95788335.8 C kg^-1
proton Compton wavelength -- 1.32140985623e-15 m
proton Compton wavelength over 2 pi -- 2.1030891047e-16 m
proton g factor -- 5.585694713
proton gyromag. ratio -- 267522200.5 s^-1 T^-1
proton gyromag. ratio over 2 pi -- 42.5774806 MHz T^-1
proton mag. mom. -- 1.410606743e-26 J T^-1
proton mag. mom. to Bohr magneton ratio -- 0.00152103221
proton mag. mom. to nuclear magneton ratio -- 2.792847356
proton mag. shielding correction -- 2.5694e-05
proton mass -- 1.672621777e-27 kg
proton mass energy equivalent -- 1.503277484e-10 J
proton mass energy equivalent in MeV -- 938.272046 MeV
proton mass in u -- 1.00727646681 u
proton molar mass -- 0.00100727646681 kg mol^-1
proton rms charge radius -- 8.775e-16 m
proton-electron mass ratio -- 1836.15267245
proton-muon mass ratio -- 8.88024331
proton-neutron mag. mom. ratio -- -1.45989806
proton-neutron mass ratio -- 0.99862347826
proton-tau mass ratio -- 0.528063
quantum of circulation -- 0.0003636947552 m^2 s^-1
quantum of circulation times 2 -- 0.0007273895104 m^2 s^-1
Rydberg constant -- 10973731.5685 m^-1
Rydberg constant times c in Hz -- 3.28984196036e+15 Hz
Rydberg constant times hc in eV -- 13.60569253 eV
Rydberg constant times hc in J -- 2.179872171e-18 J
Sackur-Tetrode constant (1 K, 100 kPa) -- -1.1517078
Sackur-Tetrode constant (1 K, 101.325 kPa) -- -1.1648708
second radiation constant -- 0.01438777 m K
shielded helion gyromag. ratio -- 203789465.9 s^-1 T^-1
shielded helion gyromag. ratio over 2 pi -- 32.43410084 MHz T^-1
shielded helion mag. mom. -- -1.074553044e-26 J T^-1
shielded helion mag. mom. to Bohr magneton ratio -- -0.001158671471
shielded helion mag. mom. to nuclear magneton ratio -- -2.127497718
shielded helion to proton mag. mom. ratio -- -0.761766558
shielded helion to shielded proton mag. mom. ratio -- -0.7617861313
shielded proton gyromag. ratio -- 267515326.8 s^-1 T^-1
shielded proton gyromag. ratio over 2 pi -- 42.5763866 MHz T^-1
shielded proton mag. mom. -- 1.410570499e-26 J T^-1
shielded proton mag. mom. to Bohr magneton ratio -- 0.001520993128
shielded proton mag. mom. to nuclear magneton ratio -- 2.792775598
speed of light in vacuum -- 299792458.0 m s^-1
standard acceleration of gravity -- 9.80665 m s^-2
standard atmosphere -- 101325.0 Pa
standard-state pressure -- 100000.0 Pa
Stefan-Boltzmann constant -- 5.670373e-08 W m^-2 K^-4
tau Compton wavelength -- 6.97787e-16 m
tau Compton wavelength over 2 pi -- 1.11056e-16 m
tau mass -- 3.16747e-27 kg
tau mass energy equivalent -- 2.84678e-10 J
tau mass energy equivalent in MeV -- 1776.82 MeV
tau mass in u -- 1.90749 u
tau molar mass -- 0.00190749 kg mol^-1
tau-electron mass ratio -- 3477.15
tau-muon mass ratio -- 16.8167
tau-neutron mass ratio -- 1.89111
tau-proton mass ratio -- 1.89372
Thomson cross section -- 6.652458734e-29 m^2
triton g factor -- 5.957924896
triton mag. mom. -- 1.504609447e-26 J T^-1
triton mag. mom. to Bohr magneton ratio -- 0.001622393657
triton mag. mom. to nuclear magneton ratio -- 2.978962448
triton mass -- 5.0073563e-27 kg
triton mass energy equivalent -- 4.50038741e-10 J
triton mass energy equivalent in MeV -- 2808.921005 MeV
triton mass in u -- 3.0155007134 u
triton molar mass -- 0.0030155007134 kg mol^-1
triton-electron mass ratio -- 5496.9215267
triton-proton mass ratio -- 2.9937170308
unified atomic mass unit -- 1.660538921e-27 kg
von Klitzing constant -- 25812.8074434 ohm
weak mixing angle -- 0.2223
Wien frequency displacement law constant -- 58789254000.0 Hz K^-1
Wien wavelength displacement law constant -- 0.0028977721 m K
{220} lattice spacing of silicon -- 1.920155714e-10 m
Units:
SI prefixes:
yotta -- 10^{24}
zetta -- 10^{21}
exa -- 10^{18}
peta -- 10^{15}
tera -- 10^{12}
giga -- 10^{9}
mega -- 10^{6}
kilo -- 10^{3}
hecto -- 10^{2}
deka -- 10^{1}
deci -- 10^{-1}
centi -- 10^{-2}
milli -- 10^{-3}
micro -- 10^{-6}
nano -- 10^{-9}
pico -- 10^{-12}
femto -- 10^{-15}
atto -- 10^{-18}
zepto -- 10^{-21}
Binary prefixes:
kibi -- 2^{10}
mebi -- 2^{20}
gibi -- 2^{30}
tebi -- 2^{40}
pebi -- 2^{50}
exbi -- 2^{60}
zebi -- 2^{70}
yobi -- 2^{80}
Weight:
gram -- 10^{-3} kg
metric_ton -- 10^{3} kg
grain -- one grain in kg
lb -- one pound (avoirdupois) in kg
oz -- one ounce in kg
stone -- one stone in kg
long_ton -- one long ton in kg
short_ton -- one short ton in kg
troy_ounce -- one Troy ounce in kg
troy_pound -- one Troy pound in kg
carat -- one carat in kg
m_u -- atomic mass constant (in kg)
Angle:
degree -- degree in radians
arcmin -- arc minute in radians
arcsec -- arc second in radians
Time:
minute -- one minute in seconds
hour -- one hour in seconds
day -- one day in seconds
week -- one week in seconds
year -- one year (365 days) in seconds
Julian_year -- one Julian year (365.25 days) in seconds
Length:
inch -- one inch in meters
foot -- one foot in meters
yard -- one yard in meters
mile -- one mile in meters
mil -- one mil in meters
pt -- one point in meters
survey_foot -- one survey foot in meters
survey_mile -- one survey mile in meters
nautical_mile -- one nautical mile in meters
fermi -- one Fermi in meters
angstrom -- one Angstrom in meters
micron -- one micron in meters
au -- one astronomical unit in meters
light_year -- one light year in meters
parsec -- one parsec in meters
Pressure:
atm -- standard atmosphere in pascals
bar -- one bar in pascals
torr -- one torr (mmHg) in pascals
psi -- one psi in pascals
Area:
hectare -- one hectare in square meters
acre -- one acre in square meters
Volume:
liter -- one liter in cubic meters
gallon -- one gallon (US) in cubic meters
gallon_imp -- one gallon (UK) in cubic meters
fluid_ounce -- one fluid ounce (US) in cubic meters
fluid_ounce_imp -- one fluid ounce (UK) in cubic meters
bbl -- one barrel in cubic meters
Speed:
kmh -- kilometers per hour in meters per second
mph -- miles per hour in meters per second
mach -- one Mach (approx., at 15 C, 1 atm) in meters per second
knot -- one knot in meters per second
Temperature:
zero_Celsius -- zero of Celsius scale in Kelvin
degree_Fahrenheit -- size of one degree Fahrenheit (temperature differences only) in Kelvins
C2K(C) -- Convert Celsius to Kelvin
K2C(K) -- Convert Kelvin to Celsius
F2C(F) -- Convert Fahrenheit to Celsius
C2F(C) -- Convert Celsius to Fahrenheit
F2K(F) -- Convert Fahrenheit to Kelvin
K2F(K) -- Convert Kelvin to Fahrenheit
Energy:
eV -- one electron volt in Joules
calorie -- one calorie (thermochemical) in Joules
calorie_IT -- one calorie (International Steam Table calorie, 1956) in Joules
erg -- one erg in Joules
Btu -- one British thermal unit (International Steam Table) in Joules
Btu_th -- one British thermal unit (thermochemical) in Joules
ton_TNT -- one ton of TNT in Joules
Power:
hp -- one horsepower in watts
Force:
dyn -- one dyne in newtons
lbf -- one pound force in newtons
kgf -- one kilogram force in newtons
Optics:
lambda2nu(lambda_) -- Convert wavelength to optical frequency.
nu2lambda(nu) -- Convert optical frequency to wavelength.
References:
CODATA Recommended Values of the Fundamental Physical Constants 2010. http://physics.nist.gov/cuu/Constants/index.html
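For example, here is a minimal sketch of pulling a few of the constants and unit factors above out of scipy.constants (the particular names queried are only illustrations; C2K and the other temperature helpers may be absent from newer SciPy releases):

from scipy import constants

print(constants.c)                       # speed of light in vacuum, m s^-1
print(constants.Planck)                  # Planck constant, J s
value, unit, uncertainty = constants.physical_constants['electron mass']
print(value, unit, uncertainty)          # CODATA value with its unit and uncertainty
print(constants.kilo, constants.mebi)    # 1000.0 and 2**20
print(constants.mile)                    # one mile in meters
print(constants.C2K(25.0))               # 25 degrees Celsius in Kelvin (older SciPy only)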
8.4 fftpack
Discrete Fourier transforms.
Access via:
In [1]: from scipy import fftpack
Also see: http://docs.scipy.org/doc/scipy/reference/tutorial/fftpack.html
Fast Fourier Transforms (FFTs):
fft(x[, n, axis, overwrite_x]) -- Return discrete Fourier transform of real or complex sequence.
ifft(x[, n, axis, overwrite_x]) -- Return discrete inverse Fourier transform of real or complex sequence.
fft2(x[, shape, axes, overwrite_x]) -- 2-D discrete Fourier transform.
ifft2(x[, shape, axes, overwrite_x]) -- 2-D discrete inverse Fourier transform of real or complex sequence.
fftn(x[, shape, axes, overwrite_x]) -- Return multidimensional discrete Fourier transform.
ifftn(x[, shape, axes, overwrite_x]) -- Return inverse multi-dimensional discrete Fourier transform of arbitrary type sequence x.
rfft(x[, n, axis, overwrite_x]) -- Discrete Fourier transform of a real sequence.
irfft(x[, n, axis, overwrite_x]) -- Return inverse discrete Fourier transform of real sequence x.
dct(x[, type, n, axis, norm, overwrite_x]) -- Return the Discrete Cosine Transform of arbitrary type sequence x.
idct(x[, type, n, axis, norm, overwrite_x]) -- Return the Inverse Discrete Cosine Transform of an arbitrary type sequence.
Differential and pseudo-differential operators:
diff(x[, order, period, _cache]) -- Return k-th derivative (or integral) of a periodic sequence x.
tilbert(x, h[, period, _cache]) -- Return h-Tilbert transform of a periodic sequence x.
itilbert(x, h[, period, _cache]) -- Return inverse h-Tilbert transform of a periodic sequence x.
hilbert(x[, _cache]) -- Return Hilbert transform of a periodic sequence x.
ihilbert(x) -- Return inverse Hilbert transform of a periodic sequence x.
cs_diff(x, a, b[, period, _cache]) -- Return (a,b)-cosh/sinh pseudo-derivative of a periodic sequence.
sc_diff(x, a, b[, period, _cache]) -- Return (a,b)-sinh/cosh pseudo-derivative of a periodic sequence x.
ss_diff(x, a, b[, period, _cache]) -- Return (a,b)-sinh/sinh pseudo-derivative of a periodic sequence x.
cc_diff(x, a, b[, period, _cache]) -- Return (a,b)-cosh/cosh pseudo-derivative of a periodic sequence.
shift(x, a[, period, _cache]) -- Shift periodic sequence x by a: y(u) = x(u+a).
Helper functions:
fftshift(x[, axes]) -- Shift the zero-frequency component to the center of the spectrum.
ifftshift(x[, axes]) -- The inverse of fftshift.
fftfreq(n[, d]) -- Return the Discrete Fourier Transform sample frequencies.
rfftfreq(n[, d]) -- DFT sample frequencies (for usage with rfft, irfft).
Convolutions (scipy.fftpack.convolve):
convolve(x,omega,[swap_real_imag,overwrite_x]) -- Wrapper for convolve.
convolve_z(x,omega_real,omega_imag,[overwrite_x]) -- Wrapper for convolve_z.
init_convolution_kernel(...) -- Wrapper for init_convolution_kernel.
destroy_convolve_cache() -- Wrapper for destroy_convolve_cache.
Other (scipy.fftpack._fftpack):
drfft(x,[n,direction,normalize,overwrite_x]) -- Wrapper for drfft.
zfft(x,[n,direction,normalize,overwrite_x]) -- Wrapper for zfft.
zrfft(x,[n,direction,normalize,overwrite_x]) -- Wrapper for zrfft.
zfftnd(x,[s,direction,normalize,overwrite_x]) -- Wrapper for zfftnd.
destroy_drfft_cache() -- Wrapper for destroy_drfft_cache.
destroy_zfft_cache() -- Wrapper for destroy_zfft_cache.
destroy_zfftnd_cache() -- Wrapper for destroy_zfftnd_cache.
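A minimal sketch of typical fftpack use (the test signal, sample count, and spacing below are made up for illustration):

import numpy as np
from scipy import fftpack

n, dt = 256, 0.01                        # number of samples, sample spacing in seconds
t = np.arange(n) * dt
x = np.sin(2 * np.pi * 5.0 * t)          # a 5 Hz test signal
X = fftpack.fft(x)                       # complex spectrum
freqs = fftpack.fftfreq(n, d=dt)         # frequency bin centers in Hz
peak = freqs[np.argmax(np.abs(X[:n // 2]))]
print(peak)                              # approximately 5 Hz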
8.5 integrate
Integration and ODEs -- Integrating functions, given function object
Access via:
In [1]: from scipy import integrate
Also see: http://docs.scipy.org/doc/scipy/reference/tutorial/integrate.html
quad(func, a, b[, args, full_output, ...]) -- Compute a definite integral.
dblquad(func, a, b, gfun, hfun[, args, ...]) -- Compute a double integral.
tplquad(func, a, b, gfun, hfun, qfun, rfun) -- Compute a triple (definite) integral.
nquad(func, ranges[, args, opts]) -- Integration over multiple variables.
fixed_quad(func, a, b[, args, n]) -- Compute a definite integral using fixed-order Gaussian quadrature.
quadrature(func, a, b[, args, tol, rtol, ...]) -- Compute a definite integral using fixed-tolerance Gaussian quadrature.
romberg(function, a, b[, args, tol, rtol, ...]) -- Romberg integration of a callable function or method.
Integrating functions, given fixed samples:
cumtrapz(y[, x, dx, axis, initial]) -- Cumulatively integrate y(x) using the composite trapezoidal rule.
simps(y[, x, dx, axis, even]) -- Integrate y(x) using samples along the given axis and the composite Simpson’s rule.
romb(y[, dx, axis, show]) -- Romberg integration using samples of a function.
See also the orthogonal polynomials in scipy.special for Gaussian quadrature roots and weights for other weighting factors and regions.
Integrators of ODE systems:
odeint(func, y0, t[, args, Dfun, col_deriv, ...]) -- Integrate a system of ordinary differential equations.
ode(f[, jac]) -- A generic interface class to numeric integrators.
complex_ode(f[, jac]) -- A wrapper of ode for complex systems.
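A minimal sketch of quad and odeint (the integrand, ODE, and time grid below are made up for illustration):

import numpy as np
from scipy import integrate

# Definite integral of sin(x) over [0, pi]; the exact answer is 2.
value, abserr = integrate.quad(np.sin, 0.0, np.pi)
print(value, abserr)

# Integrate dy/dt = -2 y with y(0) = 1; the exact solution is exp(-2 t).
def dydt(y, t):
    return -2.0 * y

t = np.linspace(0.0, 2.0, 21)
y = integrate.odeint(dydt, 1.0, t)
print(y[-1])                             # close to exp(-4)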
8.6 interpolate
Sub-package for objects used in interpolation.
This sub-package contains spline functions and classes, one-dimensional and multi-dimensional (univariate and multivariate) interpolation classes, Lagrange and Taylor polynomial interpolators, and wrappers for FITPACK and DFITPACK functions.
Access via:
In [2]: from scipy import interpolate
Also see: http://docs.scipy.org/doc/scipy/reference/tutorial/interpolate.html
Univariate interpolation:
interp1d(x, y[, kind, axis, copy, ...]) -- Interpolate a 1-D function.
BarycentricInterpolator(xi[, yi, axis]) -- The interpolating polynomial for a set of points
KroghInterpolator(xi, yi[, axis]) -- Interpolating polynomial for a set of points.
PiecewisePolynomial(xi, yi[, orders, ...]) -- Piecewise polynomial curve specified by points and derivatives
PchipInterpolator(x, y[, axis, extrapolate]) -- PCHIP 1-d monotonic cubic interpolation
barycentric_interpolate(xi, yi, x[, axis]) -- Convenience function for polynomial interpolation.
krogh_interpolate(xi, yi, x[, der, axis]) -- Convenience function for polynomial interpolation.
piecewise_polynomial_interpolate(xi, yi, x) -- Convenience function for piecewise polynomial interpolation.
pchip_interpolate(xi, yi, x[, der, axis]) -- Convenience function for pchip interpolation.
Akima1DInterpolator(x, y) -- Akima interpolator
PPoly(c, x[, extrapolate]) -- Piecewise polynomial in terms of coefficients and breakpoints
BPoly(c, x[, extrapolate]) -- Piecewise polynomial in terms of coefficients and breakpoints
Multivariate interpolation -- Unstructured data:
griddata(points, values, xi[, method, ...]) -- Interpolate unstructured D-dimensional data.
LinearNDInterpolator(points, values[, ...]) -- Piecewise linear interpolant in N dimensions.
NearestNDInterpolator(points, values) -- Nearest-neighbour interpolation in N dimensions.
CloughTocher2DInterpolator(points, values[, tol]) -- Piecewise cubic, C1 smooth, curvature-minimizing interpolant in 2D.
Rbf(*args) -- A class for radial basis function approximation/interpolation of n-dimensional scattered data.
interp2d(x, y, z[, kind, copy, ...]) -- Interpolate over a 2-D grid.
For data on a grid:
interpn(points, values, xi[, method, ...]) -- Multidimensional interpolation on regular grids.
RegularGridInterpolator(points, values[, ...]) -- Interpolation on a regular grid in arbitrary dimensions
RectBivariateSpline(x, y, z[, bbox, kx, ky, s]) -- Bivariate spline approximation over a rectangular mesh.
See also scipy.ndimage.interpolation.map_coordinates
1-D Splines:
UnivariateSpline(x, y[, w, bbox, k, s]) -- One-dimensional smoothing spline fit to a given set of data points.
InterpolatedUnivariateSpline(x, y[, w, bbox, k]) -- One-dimensional interpolating spline for a given set of data points.
LSQUnivariateSpline(x, y, t[, w, bbox, k]) -- One-dimensional spline with explicit internal knots.
The above univariate spline classes have the following methods:
UnivariateSpline.__call__(x[, nu]) -- Evaluate spline (or its nu-th derivative) at positions x.
UnivariateSpline.derivatives(x) -- Return all derivatives of the spline at the point x.
UnivariateSpline.integral(a, b) -- Return definite integral of the spline between two given points.
UnivariateSpline.roots() -- Return the zeros of the spline.
UnivariateSpline.derivative([n]) -- Construct a new spline representing the derivative of this spline.
UnivariateSpline.antiderivative([n]) -- Construct a new spline representing the antiderivative of this spline.
UnivariateSpline.get_coeffs() -- Return spline coefficients.
UnivariateSpline.get_knots() -- Return positions of (boundary and interior) knots of the spline.
UnivariateSpline.get_residual() -- Return weighted sum of squared residuals of the spline approximation: sum((w[i] * (y[i]-s(x[i])))**2, axis=0).
UnivariateSpline.set_smoothing_factor(s) -- Continue spline computation with the given smoothing factor s and with the knots found at the last call.
Functional interface to FITPACK functions:
splrep(x, y[, w, xb, xe, k, task, s, t, ...]) -- Find the B-spline representation of 1-D curve.
splprep(x[, w, u, ub, ue, k, task, s, t, ...]) -- Find the B-spline representation of an N-dimensional curve.
splev(x, tck[, der, ext]) -- Evaluate a B-spline or its derivatives.
splint(a, b, tck[, full_output]) -- Evaluate the definite integral of a B-spline.
sproot(tck[, mest]) -- Find the roots of a cubic B-spline.
spalde(x, tck) -- Evaluate all derivatives of a B-spline.
splder(tck[, n]) -- Compute the spline representation of the derivative of a given spline
splantider(tck[, n]) -- Compute the spline for the antiderivative (integral) of a given spline.
2-D Splines -- For data on a grid:
RectBivariateSpline(x, y, z[, bbox, kx, ky, s]) -- Bivariate spline approximation over a rectangular mesh.
RectSphereBivariateSpline(u, v, r[, s, ...]) -- Bivariate spline approximation over a rectangular mesh on a sphere.
For unstructured data -- BivariateSpline Base class for bivariate splines:
SmoothBivariateSpline(x, y, z[, w, bbox, ...]) -- Smooth bivariate spline approximation.
SmoothSphereBivariateSpline(theta, phi, r[, ...]) -- Smooth bivariate spline approximation in spherical coordinates.
LSQBivariateSpline(x, y, z, tx, ty[, w, ...]) -- Weighted least-squares bivariate spline approximation.
LSQSphereBivariateSpline(theta, phi, r, tt, tp) -- Weighted least-squares bivariate spline approximation in spherical coordinates.
Low-level interface to FITPACK functions:
bisplrep(x, y, z[, w, xb, xe, yb, ye, kx, ...]) -- Find a bivariate B-spline representation of a surface.
bisplev(x, y, tck[, dx, dy]) -- Evaluate a bivariate B-spline and its derivatives.
Additional tools:
lagrange(x, w) -- Return a Lagrange interpolating polynomial.
approximate_taylor_polynomial(f, x, degree, ...) -- Estimate the Taylor polynomial of f at x by polynomial fitting.
See also -- scipy.ndimage.interpolation.map_coordinates, scipy.ndimage.interpolation.spline_filter, scipy.signal.resample, scipy.signal.bspline, scipy.signal.gauss_spline, scipy.signal.qspline1d, scipy.signal.cspline1d, scipy.signal.qspline1d_eval, scipy.signal.cspline1d_eval, scipy.signal.qspline2d, scipy.signal.cspline2d.
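A minimal sketch of interp1d and the splrep/splev FITPACK interface (the sample points below are made up for illustration):

import numpy as np
from scipy import interpolate

x = np.linspace(0.0, 10.0, 11)           # coarse sample points
y = np.cos(x)
f_linear = interpolate.interp1d(x, y)                  # piecewise-linear interpolant
f_cubic = interpolate.interp1d(x, y, kind='cubic')     # cubic-spline interpolant
x_new = np.linspace(0.0, 10.0, 101)
print(f_linear(x_new)[:3], f_cubic(x_new)[:3])

tck = interpolate.splrep(x, y, s=0)      # B-spline representation of the data
print(interpolate.splev(5.5, tck))       # evaluate the spline at x = 5.5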
8.7 linalg
Linear algebra functions.
See also numpy.linalg for more linear algebra functions. Note that although scipy.linalg imports most of them, identically named functions from scipy.linalg may offer more or slightly differing functionality.
Access via:
In [3]: from scipy import linalg
Also see: http://docs.scipy.org/doc/scipy/reference/tutorial/linalg.html
Basics:
inv(a[, overwrite_a, check_finite]) -- Compute the inverse of a matrix.
solve(a, b[, sym_pos, lower, overwrite_a, ...]) -- Solve the equation a x = b for x.
solve_banded(l_and_u, ab, b[, overwrite_ab, ...]) -- Solve the equation a x = b for x, assuming a is banded matrix.
solveh_banded(ab, b[, overwrite_ab, ...]) -- Solve the equation a x = b, assuming a is a Hermitian positive-definite banded matrix.
solve_triangular(a, b[, trans, lower, ...]) -- Solve the equation a x = b for x, assuming a is a triangular matrix.
det(a[, overwrite_a, check_finite]) -- Compute the determinant of a matrix
norm(a[, ord]) -- Matrix or vector norm.
lstsq(a, b[, cond, overwrite_a, ...]) -- Compute least-squares solution to equation Ax = b.
pinv(a[, cond, rcond, return_rank, check_finite]) -- Compute the (Moore-Penrose) pseudo-inverse of a matrix.
pinv2(a[, cond, rcond, return_rank, ...]) -- Compute the (Moore-Penrose) pseudo-inverse of a matrix.
pinvh(a[, cond, rcond, lower, return_rank, ...]) -- Compute the (Moore-Penrose) pseudo-inverse of a Hermitian matrix.
kron(a, b) -- Kronecker product.
tril(m[, k]) -- Make a copy of a matrix with elements above the k-th diagonal zeroed.
triu(m[, k]) -- Make a copy of a matrix with elements below the k-th diagonal zeroed.
Eigenvalue problems:
eig(a[, b, left, right, overwrite_a, ...]) -- Solve an ordinary or generalized eigenvalue problem of a square matrix.
eigvals(a[, b, overwrite_a, check_finite]) -- Compute eigenvalues from an ordinary or generalized eigenvalue problem.
eigh(a[, b, lower, eigvals_only, ...]) -- Solve an ordinary or generalized eigenvalue problem for a complex Hermitian or real symmetric matrix.
eigvalsh(a[, b, lower, overwrite_a, ...]) -- Solve an ordinary or generalized eigenvalue problem for a complex Hermitian or real symmetric matrix.
eig_banded(a_band[, lower, eigvals_only, ...]) -- Solve real symmetric or complex hermitian band matrix eigenvalue problem.
eigvals_banded(a_band[, lower, ...]) -- Solve real symmetric or complex hermitian band matrix eigenvalue problem.
Decompositions:
lu(a[, permute_l, overwrite_a, check_finite]) -- Compute pivoted LU decomposition of a matrix.
lu_factor(a[, overwrite_a, check_finite]) -- Compute pivoted LU decomposition of a matrix.
lu_solve(lu_and_piv, b[, trans, ...]) -- Solve an equation system, a x = b, given the LU factorization of a
svd(a[, full_matrices, compute_uv, ...]) -- Singular Value Decomposition.
svdvals(a[, overwrite_a, check_finite]) -- Compute singular values of a matrix.
diagsvd(s, M, N) -- Construct the sigma matrix in SVD from singular values and size M, N.
orth(A) -- Construct an orthonormal basis for the range of A using SVD
cholesky(a[, lower, overwrite_a, check_finite]) -- Compute the Cholesky decomposition of a matrix.
cholesky_banded(ab[, overwrite_ab, lower, ...]) -- Cholesky decompose a banded Hermitian positive-definite matrix
cho_factor(a[, lower, overwrite_a, check_finite]) -- Compute the Cholesky decomposition of a matrix, to use in cho_solve
cho_solve(c_and_lower, b[, overwrite_b, ...]) -- Solve the linear equations A x = b, given the Cholesky factorization of A.
cho_solve_banded(cb_and_lower, b[, ...]) -- Solve the linear equations A x = b, given the Cholesky factorization of A.
polar(a[, side]) -- Compute the polar decomposition.
qr(a[, overwrite_a, lwork, mode, pivoting, ...]) -- Compute QR decomposition of a matrix.
qr_multiply(a, c[, mode, pivoting, ...]) -- Calculate the QR decomposition and multiply Q with a matrix.
qz(A, B[, output, lwork, sort, overwrite_a, ...]) -- QZ decomposition for generalized eigenvalues of a pair of matrices.
schur(a[, output, lwork, overwrite_a, sort, ...]) -- Compute Schur decomposition of a matrix.
rsf2csf(T, Z[, check_finite]) -- Convert real Schur form to complex Schur form.
hessenberg(a[, calc_q, overwrite_a, ...]) -- Compute Hessenberg form of a matrix.
See also scipy.linalg.interpolative -- Interpolative matrix decompositions
Matrix Functions:
expm(A[, q]) -- Compute the matrix exponential using Pade approximation.
logm(A[, disp]) -- Compute matrix logarithm.
cosm(A) -- Compute the matrix cosine.
sinm(A) -- Compute the matrix sine.
tanm(A) -- Compute the matrix tangent.
coshm(A) -- Compute the hyperbolic matrix cosine.
sinhm(A) -- Compute the hyperbolic matrix sine.
tanhm(A) -- Compute the hyperbolic matrix tangent.
signm(A[, disp]) -- Matrix sign function.
sqrtm(A[, disp, blocksize]) -- Matrix square root.
funm(A, func[, disp]) -- Evaluate a matrix function specified by a callable.
expm_frechet(A, E[, method, compute_expm, ...]) -- Frechet derivative of the matrix exponential of A in the direction E.
expm_cond(A[, check_finite]) -- Relative condition number of the matrix exponential in the Frobenius norm.
fractional_matrix_power(A, t) -- Compute the fractional power of a matrix.
Matrix Equation Solvers:
solve_sylvester(a, b, q) -- Computes a solution (X) to the Sylvester equation (AX + XB = Q).
solve_continuous_are(a, b, q, r) -- Solves the continuous algebraic Riccati equation, or CARE, defined as (A’X + XA - XBR^-1B’X+Q=0) directly using a Schur decomposition method.
solve_discrete_are(a, b, q, r) -- Solves the discrete algebraic Riccati equation, or DARE, defined as (X = A’XA-(A’XB)(R+B’XB)^-1(B’XA)+Q), directly using a Schur decomposition method.
solve_discrete_lyapunov(a, q) -- Solves the Discrete Lyapunov Equation (A’XA-X=-Q) directly.
solve_lyapunov(a, q) -- Solves the continuous Lyapunov equation (AX + XA^H = Q) given the values of A and Q using the Bartels-Stewart algorithm.
Special Matrices:
block_diag(*arrs) -- Create a block diagonal matrix from provided arrays.
circulant(c) -- Construct a circulant matrix.
companion(a) -- Create a companion matrix.
dft(n[, scale]) -- Discrete Fourier transform matrix.
hadamard(n[, dtype]) -- Construct a Hadamard matrix.
hankel(c[, r]) -- Construct a Hankel matrix.
hilbert(n) -- Create a Hilbert matrix of order n.
invhilbert(n[, exact]) -- Compute the inverse of the Hilbert matrix of order n.
leslie(f, s) -- Create a Leslie matrix.
pascal(n[, kind, exact]) -- Returns the n x n Pascal matrix.
toeplitz(c[, r]) -- Construct a Toeplitz matrix.
tri(N[, M, k, dtype]) -- Construct (N, M) matrix filled with ones at and below the k-th diagonal.
Low-level routines:
get_blas_funcs(names[, arrays, dtype]) -- Return available BLAS function objects from names.
get_lapack_funcs(names[, arrays, dtype]) -- Return available LAPACK function objects from names.
find_best_blas_type([arrays, dtype]) -- Find best-matching BLAS/LAPACK type.
See also:
scipy.linalg.blas -- Low-level BLAS functions
scipy.linalg.lapack -- Low-level LAPACK functions
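A minimal sketch of a few of the basics (the small matrix below is made up for illustration):

import numpy as np
from scipy import linalg

a = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])

x = linalg.solve(a, b)                   # solve a x = b
print(x, np.allclose(a.dot(x), b))
print(linalg.det(a))                     # determinant
print(linalg.inv(a))                     # matrix inverse
w, v = linalg.eig(a)                     # eigenvalues and eigenvectors
print(w)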
8.8 Multi-dimensional image processing
This package contains various functions for multi-dimensional image processing.
Access via:
In [1]: from scipy import ndimage
Also see: http://docs.scipy.org/doc/scipy/reference/tutorial/ndimage.html
Filters (scipy.ndimage.filters):
convolve(input, weights[, output, mode, ...]) -- Multidimensional convolution.
convolve1d(input, weights[, axis, output, ...]) -- Calculate a one-dimensional convolution along the given axis.
correlate(input, weights[, output, mode, ...]) -- Multi-dimensional correlation.
correlate1d(input, weights[, axis, output, ...]) -- Calculate a one-dimensional correlation along the given axis.
gaussian_filter(input, sigma[, order, ...]) -- Multidimensional Gaussian filter.
gaussian_filter1d(input, sigma[, axis, ...]) -- One-dimensional Gaussian filter.
gaussian_gradient_magnitude(input, sigma[, ...]) -- Multidimensional gradient magnitude using Gaussian derivatives.
gaussian_laplace(input, sigma[, output, ...]) -- Multidimensional Laplace filter using gaussian second derivatives.
generic_filter(input, function[, size, ...]) -- Calculates a multi-dimensional filter using the given function.
generic_filter1d(input, function, filter_size) -- Calculate a one-dimensional filter along the given axis.
generic_gradient_magnitude(input, derivative) -- Gradient magnitude using a provided gradient function.
generic_laplace(input, derivative2[, ...]) -- N-dimensional Laplace filter using a provided second derivative function
laplace(input[, output, mode, cval]) -- N-dimensional Laplace filter based on approximate second derivatives.
maximum_filter(input[, size, footprint, ...]) -- Calculates a multi-dimensional maximum filter.
maximum_filter1d(input, size[, axis, ...]) -- Calculate a one-dimensional maximum filter along the given axis.
median_filter(input[, size, footprint, ...]) -- Calculates a multidimensional median filter.
minimum_filter(input[, size, footprint, ...]) -- Calculates a multi-dimensional minimum filter.
minimum_filter1d(input, size[, axis, ...]) -- Calculate a one-dimensional minimum filter along the given axis.
percentile_filter(input, percentile[, size, ...]) -- Calculates a multi-dimensional percentile filter.
prewitt(input[, axis, output, mode, cval]) -- Calculate a Prewitt filter.
rank_filter(input, rank[, size, footprint, ...]) -- Calculates a multi-dimensional rank filter.
sobel(input[, axis, output, mode, cval]) -- Calculate a Sobel filter.
uniform_filter(input[, size, output, mode, ...]) -- Multi-dimensional uniform filter.
uniform_filter1d(input, size[, axis, ...]) -- Calculate a one-dimensional uniform filter along the given axis.
Fourier filters (scipy.ndimage.fourier):
fourier_ellipsoid(input, size[, n, axis, output]) -- Multi-dimensional ellipsoid fourier filter.
fourier_gaussian(input, sigma[, n, axis, output]) -- Multi-dimensional Gaussian fourier filter.
fourier_shift(input, shift[, n, axis, output]) -- Multi-dimensional fourier shift filter.
fourier_uniform(input, size[, n, axis, output]) -- Multi-dimensional uniform fourier filter.
Interpolation (scipy.ndimage.interpolation):
affine_transform(input, matrix[, offset, ...]) -- Apply an affine transformation.
geometric_transform(input, mapping[, ...]) -- Apply an arbitrary geometric transform.
map_coordinates(input, coordinates[, ...]) -- Map the input array to new coordinates by interpolation.
rotate(input, angle[, axes, reshape, ...]) -- Rotate an array.
shift(input, shift[, output, order, mode, ...]) -- Shift an array.
spline_filter(input[, order, output]) -- Multi-dimensional spline filter.
spline_filter1d(input[, order, axis, output]) -- Calculates a one-dimensional spline filter along the given axis.
zoom(input, zoom[, output, order, mode, ...]) -- Zoom an array.
Measurements (scipy.ndimage.measurements):
center_of_mass(input[, labels, index]) -- Calculate the center of mass of the values of an array at labels.
extrema(input[, labels, index]) -- Calculate the minimums and maximums of the values of an array at labels, along with their positions.
find_objects(input[, max_label]) -- Find objects in a labeled array.
histogram(input, min, max, bins[, labels, index]) -- Calculate the histogram of the values of an array, optionally at labels.
label(input[, structure, output]) -- Label features in an array.
labeled_comprehension(input, labels, index, ...) -- Roughly equivalent to [func(input[labels == i]) for i in index].
maximum(input[, labels, index]) -- Calculate the maximum of the values of an array over labeled regions.
maximum_position(input[, labels, index]) -- Find the positions of the maximums of the values of an array at labels.
mean(input[, labels, index]) -- Calculate the mean of the values of an array at labels.
minimum(input[, labels, index]) -- Calculate the minimum of the values of an array over labeled regions.
minimum_position(input[, labels, index]) -- Find the positions of the minimums of the values of an array at labels.
standard_deviation(input[, labels, index]) -- Calculate the standard deviation of the values of an n-D image array, optionally at specified sub-regions.
sum(input[, labels, index]) -- Calculate the sum of the values of the array.
variance(input[, labels, index]) -- Calculate the variance of the values of an n-D image array, optionally at specified sub-regions.
watershed_ift(input, markers[, structure, ...]) -- Apply watershed from markers using an iterative forest transform algorithm.
Morphology (scipy.ndimage.morphology):
binary_closing(input[, structure, ...]) -- Multi-dimensional binary closing with the given structuring element.
binary_dilation(input[, structure, ...]) -- Multi-dimensional binary dilation with the given structuring element.
binary_erosion(input[, structure, ...]) -- Multi-dimensional binary erosion with a given structuring element.
binary_fill_holes(input[, structure, ...]) -- Fill the holes in binary objects.
binary_hit_or_miss(input[, structure1, ...]) -- Multi-dimensional binary hit-or-miss transform.
binary_opening(input[, structure, ...]) -- Multi-dimensional binary opening with the given structuring element.
binary_propagation(input[, structure, mask, ...]) -- Multi-dimensional binary propagation with the given structuring element.
black_tophat(input[, size, footprint, ...]) -- Multi-dimensional black tophat filter.
distance_transform_bf(input[, metric, ...]) -- Distance transform function by a brute force algorithm.
distance_transform_cdt(input[, metric, ...]) -- Distance transform for chamfer type of transforms.
distance_transform_edt(input[, sampling, ...]) -- Exact euclidean distance transform.
generate_binary_structure(rank, connectivity) -- Generate a binary structure for binary morphological operations.
grey_closing(input[, size, footprint, ...]) -- Multi-dimensional greyscale closing.
grey_dilation(input[, size, footprint, ...]) -- Calculate a greyscale dilation, using either a structuring element, or a footprint corresponding to a flat structuring element.
grey_erosion(input[, size, footprint, ...]) -- Calculate a greyscale erosion, using either a structuring element, or a footprint corresponding to a flat structuring element.
grey_opening(input[, size, footprint, ...]) -- Multi-dimensional greyscale opening.
iterate_structure(structure, iterations[, ...]) -- Iterate a structure by dilating it with itself.
morphological_gradient(input[, size, ...]) -- Multi-dimensional morphological gradient.
morphological_laplace(input[, size, ...]) -- Multi-dimensional morphological laplace.
white_tophat(input[, size, footprint, ...]) -- Multi-dimensional white tophat filter.
Utility:
imread(fname[, flatten, mode]) -- Read an image from a file as an array.
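A minimal sketch of filtering and measurement on a synthetic image (the array below is made up for illustration):

import numpy as np
from scipy import ndimage

image = np.zeros((64, 64))
image[20:40, 20:40] = 1.0                # a bright square on a dark background

smoothed = ndimage.gaussian_filter(image, sigma=2.0)   # Gaussian blur
edges = ndimage.sobel(smoothed, axis=0)                # horizontal edge response
labels, n_objects = ndimage.label(image > 0.5)         # connected components
print(n_objects)                                       # 1
print(ndimage.center_of_mass(image))                   # roughly (29.5, 29.5)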
8.9 optimize
Optimization and root finding
Access with:
In [4]: from scipy import optimize
Also see: http://docs.scipy.org/doc/scipy/reference/tutorial/optimize.html
Local Optimization:
minimize(fun, x0[, args, method, jac, hess, ...]) -- Minimization of scalar function of one or more variables.
minimize_scalar(fun[, bracket, bounds, ...]) -- Minimization of scalar function of one variable.
OptimizeResult -- Represents the optimization result.
The specific optimization method interfaces below in this subsection are not recommended for use in new scripts; all of these methods are accessible via a newer, more consistent interface provided by the functions above.
General-purpose multivariate methods:
fmin(func, x0[, args, xtol, ftol, maxiter, ...]) -- Minimize a function using the downhill simplex algorithm.
fmin_powell(func, x0[, args, xtol, ftol, ...]) -- Minimize a function using modified Powell’s method.
fmin_cg(f, x0[, fprime, args, gtol, norm, ...]) -- Minimize a function using a nonlinear conjugate gradient algorithm.
fmin_bfgs(f, x0[, fprime, args, gtol, norm, ...]) -- Minimize a function using the BFGS algorithm.
fmin_ncg(f, x0, fprime[, fhess_p, fhess, ...]) -- Unconstrained minimization of a function using the Newton-CG method.
Constrained multivariate methods:
fmin_l_bfgs_b(func, x0[, fprime, args, ...]) -- Minimize a function func using the L-BFGS-B algorithm.
fmin_tnc(func, x0[, fprime, args, ...]) -- Minimize a function with variables subject to bounds, using gradient information in a truncated Newton algorithm.
fmin_cobyla(func, x0, cons[, args, ...]) -- Minimize a function using the Constrained Optimization BY Linear Approximation (COBYLA) method.
fmin_slsqp(func, x0[, eqcons, f_eqcons, ...]) -- Minimize a function using Sequential Least SQuares Programming
Univariate (scalar) minimization methods:
fminbound(func, x1, x2[, args, xtol, ...]) -- Bounded minimization for scalar functions.
brent(func[, args, brack, tol, full_output, ...]) -- Given a function of one-variable and a possible bracketing interval, return the minimum of the function isolated to a fractional precision of tol.
golden(func[, args, brack, tol, full_output]) -- Return the minimum of a function of one variable.
Equation (Local) Minimizers:
leastsq(func, x0[, args, Dfun, full_output, ...]) -- Minimize the sum of squares of a set of equations.
nnls(A, b) -- Solve argmin_x || Ax - b ||_2 for x>=0.
Global Optimization:
anneal(*args, **kwds) -- anneal is deprecated!
basinhopping(func, x0[, niter, T, stepsize, ...]) -- Find the global minimum of a function using the basin-hopping algorithm
brute(func, ranges[, args, Ns, full_output, ...]) -- Minimize a function over a given range by brute force.
Rosenbrock functions:
rosen(x) -- The Rosenbrock function.
rosen_der(x) -- The derivative (i.e. gradient) of the Rosenbrock function.
rosen_hess(x) -- The Hessian matrix of the Rosenbrock function.
rosen_hess_prod(x, p) -- Product of the Hessian matrix of the Rosenbrock function with a vector.
Fitting:
curve_fit(f, xdata, ydata[, p0, sigma, ...]) -- Use non-linear least squares to fit a function, f, to data.
Root finding
Scalar functions:
brentq(f, a, b[, args, xtol, rtol, maxiter, ...]) -- Find a root of a function in given interval.
brenth(f, a, b[, args, xtol, rtol, maxiter, ...]) -- Find root of f in [a,b].
ridder(f, a, b[, args, xtol, rtol, maxiter, ...]) -- Find a root of a function in an interval.
bisect(f, a, b[, args, xtol, rtol, maxiter, ...]) -- Find root of a function within an interval.
newton(func, x0[, fprime, args, tol, ...]) -- Find a zero using the Newton-Raphson or secant method.
Fixed point finding:
fixed_point(func, x0[, args, xtol, maxiter]) -- Find a fixed point of the function.
Multidimensional
General nonlinear solvers:
root(fun, x0[, args, method, jac, tol, ...]) -- Find a root of a vector function.
fsolve(func, x0[, args, fprime, ...]) -- Find the roots of a function.
broyden1(F, xin[, iter, alpha, ...]) -- Find a root of a function, using Broyden’s first Jacobian approximation.
broyden2(F, xin[, iter, alpha, ...]) -- Find a root of a function, using Broyden’s second Jacobian approximation.
Large-scale nonlinear solvers:
newton_krylov(F, xin[, iter, rdiff, method, ...]) -- Find a root of a function, using Krylov approximation for inverse Jacobian.
anderson(F, xin[, iter, alpha, w0, M, ...]) -- Find a root of a function, using (extended) Anderson mixing.
Simple iterations:
excitingmixing(F, xin[, iter, alpha, ...]) -- Find a root of a function, using a tuned diagonal Jacobian approximation.
linearmixing(F, xin[, iter, alpha, verbose, ...]) -- Find a root of a function, using a scalar Jacobian approximation.
diagbroyden(F, xin[, iter, alpha, verbose, ...]) -- Find a root of a function, using diagonal Broyden Jacobian approximation.
Additional information on the nonlinear solvers
Utility Functions:
approx_fprime(xk, f, epsilon, *args) -- Finite-difference approximation of the gradient of a scalar function.
bracket(func[, xa, xb, args, grow_limit, ...]) -- Bracket the minimum of the function.
check_grad(func, grad, x0, *args) -- Check the correctness of a gradient function by comparing it against a (forward) finite-difference approximation of the gradient.
line_search(f, myfprime, xk, pk[, gfk, ...]) -- Find alpha that satisfies strong Wolfe conditions.
show_options([solver, method]) -- Show documentation for additional options of optimization solvers.
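A minimal sketch of minimization, root finding, and curve fitting (the starting points and synthetic data below are made up for illustration):

import numpy as np
from scipy import optimize

# Minimize the Rosenbrock function starting from an arbitrary point.
result = optimize.minimize(optimize.rosen, x0=[1.3, 0.7], method='BFGS',
                           jac=optimize.rosen_der)
print(result.x)                          # close to [1.0, 1.0]

# Find the root of cos(x) - x in the interval [0, 2].
root = optimize.brentq(lambda x: np.cos(x) - x, 0.0, 2.0)
print(root)                              # about 0.739

# Fit y = a * exp(b * x) to noise-free synthetic data.
xdata = np.linspace(0.0, 1.0, 20)
ydata = 2.0 * np.exp(0.5 * xdata)
popt, pcov = optimize.curve_fit(lambda x, a, b: a * np.exp(b * x), xdata, ydata)
print(popt)                              # about [2.0, 0.5]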
8.10 Orthogonal distance regression
Access via:
In [1]: from scipy import odr
See: http://docs.scipy.org/doc/scipy/reference/odr.html
Data(x[, y, we, wd, fix, meta]) -- The data to fit.
RealData(x[, y, sx, sy, covx, covy, fix, meta]) -- The data, with weightings as actual standard deviations and/or covariances.
Model(fcn[, fjacb, fjacd, extra_args, ...]) -- The Model class stores information about the function you wish to fit.
ODR(data, model[, beta0, delta0, ifixb, ...]) -- The ODR class gathers all information and coordinates the running of the main fitting routine.
Output(output) -- The Output class stores the output of an ODR run.
odr(fcn, beta0, y, x[, we, wd, fjacb, ...]) -- Low-level function for ODR.
odr_error -- Exception indicating an error in fitting.
odr_stop -- Exception stopping fitting.
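A minimal sketch of an orthogonal distance regression fit to a straight line (the data, uncertainties, and starting estimate below are made up for illustration):

import numpy as np
from scipy import odr

def linear(beta, x):
    # Model function: y = beta[0] * x + beta[1]
    return beta[0] * x + beta[1]

x = np.linspace(0.0, 10.0, 20)
y = 2.0 * x + 1.0 + np.random.normal(scale=0.2, size=x.shape)

data = odr.RealData(x, y, sx=0.1, sy=0.2)   # uncertainties in both x and y
model = odr.Model(linear)
fit = odr.ODR(data, model, beta0=[1.0, 0.0])
output = fit.run()
print(output.beta)                          # roughly [2.0, 1.0]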
8.11 Signal processing
Access via:
In [1]: from scipy import signal
Also see: http://docs.scipy.org/doc/scipy/reference/tutorial/signal.html
Convolution:
convolve(in1, in2[, mode]) -- Convolve two N-dimensional arrays.
correlate(in1, in2[, mode]) -- Cross-correlate two N-dimensional arrays.
fftconvolve(in1, in2[, mode]) -- Convolve two N-dimensional arrays using FFT.
convolve2d(in1, in2[, mode, boundary, fillvalue]) -- Convolve two 2-dimensional arrays.
correlate2d(in1, in2[, mode, boundary, ...]) -- Cross-correlate two 2-dimensional arrays.
sepfir2d((input, hrow, hcol) -> output) -- Convolve the rank-2 input array with the separable filter defined by the rank-1 arrays hrow, and hcol.
B-splines:
bspline(x, n) -- B-spline basis function of order n.
cubic(x) -- A cubic B-spline.
quadratic(x) -- A quadratic B-spline.
gauss_spline(x, n) -- Gaussian approximation to B-spline basis function of order n.
cspline1d(signal[, lamb]) -- Compute cubic spline coefficients for rank-1 array.
qspline1d(signal[, lamb]) -- Compute quadratic spline coefficients for rank-1 array.
cspline2d((input {, lambda, precision}) -> ck) -- Return the third-order B-spline coefficients over a regularly spaced input grid for the two-dimensional input image.
qspline2d((input {, lambda, precision}) -> qk) -- Return the second-order B-spline coefficients over a regularly spaced input grid for the two-dimensional input image.
cspline1d_eval(cj, newx[, dx, x0]) -- Evaluate a spline at the new set of points.
qspline1d_eval(cj, newx[, dx, x0]) -- Evaluate a quadratic spline at the new set of points.
spline_filter(Iin[, lmbda]) -- Smoothing spline (cubic) filtering of a rank-2 array.
Filtering:
order_filter(a, domain, rank) -- Perform an order filter on an N-dimensional array.
medfilt(volume[, kernel_size]) -- Perform a median filter on an N-dimensional array.
medfilt2d(input[, kernel_size]) -- Median filter a 2-dimensional array.
wiener(im[, mysize, noise]) -- Perform a Wiener filter on an N-dimensional array.
symiirorder1((input, c0, z1 {, ...) -- Implement a smoothing IIR filter with mirror-symmetric boundary conditions using a cascade of first-order sections.
symiirorder2((input, r, omega {, ...) -- Implement a smoothing IIR filter with mirror-symmetric boundary conditions using a cascade of second-order sections.
lfilter(b, a, x[, axis, zi]) -- Filter data along one-dimension with an IIR or FIR filter.
lfiltic(b, a, y[, x]) -- Construct initial conditions for lfilter.
lfilter_zi(b, a) -- Compute an initial state zi for the lfilter function that corresponds to the steady state of the step response.
filtfilt(b, a, x[, axis, padtype, padlen]) -- A forward-backward filter.
savgol_filter(x, window_length, polyorder[, ...]) -- Apply a Savitzky-Golay filter to an array.
deconvolve(signal, divisor) -- Deconvolves divisor out of signal.
hilbert(x[, N, axis]) -- Compute the analytic signal, using the Hilbert transform.
get_window(window, Nx[, fftbins]) -- Return a window.
decimate(x, q[, n, ftype, axis]) -- Downsample the signal by using a filter.
detrend(data[, axis, type, bp]) -- Remove linear trend along axis from data.
resample(x, num[, t, axis, window]) -- Resample x to num samples using Fourier method along the given axis.
Filter design:
bilinear(b, a[, fs]) -- Return a digital filter from an analog one using a bilinear transform.
firwin(numtaps, cutoff[, width, window, ...]) -- FIR filter design using the window method.
firwin2(numtaps, freq, gain[, nfreqs, ...]) -- FIR filter design using the window method.
freqs(b, a[, worN, plot]) -- Compute frequency response of analog filter.
freqz(b[, a, worN, whole, plot]) -- Compute the frequency response of a digital filter.
iirdesign(wp, ws, gpass, gstop[, analog, ...]) -- Complete IIR digital and analog filter design.
iirfilter(N, Wn[, rp, rs, btype, analog, ...]) -- IIR digital and analog filter design given order and critical points.
kaiser_atten(numtaps, width) -- Compute the attenuation of a Kaiser FIR filter.
kaiser_beta(a) -- Compute the Kaiser parameter beta, given the attenuation a.
kaiserord(ripple, width) -- Design a Kaiser window to limit ripple and width of transition region.
savgol_coeffs(window_length, polyorder[, ...]) -- Compute the coefficients for a 1-d Savitzky-Golay FIR filter.
remez(numtaps, bands, desired[, weight, Hz, ...]) -- Calculate the minimax optimal filter using the Remez exchange algorithm.
unique_roots(p[, tol, rtype]) -- Determine unique roots and their multiplicities from a list of roots.
residue(b, a[, tol, rtype]) -- Compute partial-fraction expansion of b(s) / a(s).
residuez(b, a[, tol, rtype]) -- Compute partial-fraction expansion of b(z) / a(z).
invres(r, p, k[, tol, rtype]) -- Compute b(s) and a(s) from partial fraction expansion: r,p,k
Matlab-style IIR filter design:
butter(N, Wn[, btype, analog, output]) -- Butterworth digital and analog filter design.
buttord(wp, ws, gpass, gstop[, analog]) -- Butterworth filter order selection.
cheby1(N, rp, Wn[, btype, analog, output]) -- Chebyshev type I digital and analog filter design.
cheb1ord(wp, ws, gpass, gstop[, analog]) -- Chebyshev type I filter order selection.
cheby2(N, rs, Wn[, btype, analog, output]) -- Chebyshev type II digital and analog filter design.
cheb2ord(wp, ws, gpass, gstop[, analog]) -- Chebyshev type II filter order selection.
ellip(N, rp, rs, Wn[, btype, analog, output]) -- Elliptic (Cauer) digital and analog filter design.
ellipord(wp, ws, gpass, gstop[, analog]) -- Elliptic (Cauer) filter order selection.
bessel(N, Wn[, btype, analog, output]) -- Bessel/Thomson digital and analog filter design.
Continuous-Time Linear Systems:
freqresp(system[, w, n]) -- Calculate the frequency response of a continuous-time system.
lti(*args, **kwords) -- Linear Time Invariant class which simplifies representation.
lsim(system, U, T[, X0, interp]) -- Simulate output of a continuous-time linear system.
lsim2(system[, U, T, X0]) -- Simulate output of a continuous-time linear system, by using the ODE solver scipy.integrate.odeint.
impulse(system[, X0, T, N]) -- Impulse response of continuous-time system.
impulse2(system[, X0, T, N]) -- Impulse response of a single-input, continuous-time linear system.
step(system[, X0, T, N]) -- Step response of continuous-time system.
step2(system[, X0, T, N]) -- Step response of continuous-time system.
bode(system[, w, n]) -- Calculate Bode magnitude and phase data of a continuous-time system.
Discrete-Time Linear Systems:
dlsim(system, u[, t, x0]) -- Simulate output of a discrete-time linear system.
dimpulse(system[, x0, t, n]) -- Impulse response of discrete-time system.
dstep(system[, x0, t, n]) -- Step response of discrete-time system.
LTI Representations:
tf2zpk(b, a) -- Return zero, pole, gain (z,p,k) representation from a numerator, denominator representation of a linear filter.
zpk2tf(z, p, k) -- Return polynomial transfer function representation from zeros and poles.
tf2ss(num, den) -- Transfer function to state-space representation.
ss2tf(A, B, C, D[, input]) -- State-space to transfer function.
zpk2ss(z, p, k) -- Zero-pole-gain representation to state-space representation
ss2zpk(A, B, C, D[, input]) -- State-space representation to zero-pole-gain representation.
cont2discrete(sys, dt[, method, alpha]) -- Transform a continuous to a discrete state-space system.
Waveforms:
chirp(t, f0, t1, f1[, method, phi, vertex_zero]) -- Frequency-swept cosine generator.
gausspulse(t[, fc, bw, bwr, tpr, retquad, ...]) -- Return a Gaussian modulated sinusoid: exp(-a t^2) exp(1j*2*pi*fc*t)
sawtooth(t[, width]) -- Return a periodic sawtooth or triangle waveform.
square(t[, duty]) -- Return a periodic square-wave waveform.
sweep_poly(t, poly[, phi]) -- Frequency-swept cosine generator, with a time-dependent frequency.
Window functions:
get_window(window, Nx[, fftbins]) -- Return a window.
barthann(M[, sym]) -- Return a modified Bartlett-Hann window.
bartlett(M[, sym]) -- Return a Bartlett window.
blackman(M[, sym]) -- Return a Blackman window.
blackmanharris(M[, sym]) -- Return a minimum 4-term Blackman-Harris window.
bohman(M[, sym]) -- Return a Bohman window.
boxcar(M[, sym]) -- Return a boxcar or rectangular window.
chebwin(M, at[, sym]) -- Return a Dolph-Chebyshev window.
cosine(M[, sym]) -- Return a window with a simple cosine shape.
flattop(M[, sym]) -- Return a flat top window.
gaussian(M, std[, sym]) -- Return a Gaussian window.
general_gaussian(M, p, sig[, sym]) -- Return a window with a generalized Gaussian shape.
hamming(M[, sym]) -- Return a Hamming window.
hann(M[, sym]) -- Return a Hann window.
kaiser(M, beta[, sym]) -- Return a Kaiser window.
nuttall(M[, sym]) -- Return a minimum 4-term Blackman-Harris window according to Nuttall.
parzen(M[, sym]) -- Return a Parzen window.
slepian(M, width[, sym]) -- Return a digital Slepian (DPSS) window.
triang(M[, sym]) -- Return a triangular window.
Wavelets:
cascade(hk[, J]) -- Return (x, phi, psi) at dyadic points K/2**J from filter coefficients.
daub(p) -- The coefficients for the FIR low-pass filter producing Daubechies wavelets.
morlet(M[, w, s, complete]) -- Complex Morlet wavelet.
qmf(hk) -- Return the high-pass qmf filter from the low-pass filter.
ricker(points, a) -- Return a Ricker wavelet, also known as the “Mexican hat wavelet”.
cwt(data, wavelet, widths) -- Continuous wavelet transform.
Peak finding:
find_peaks_cwt(vector, widths[, wavelet, ...]) -- Attempt to find the peaks in a 1-D array.
argrelmin(data[, axis, order, mode]) -- Calculate the relative minima of data.
argrelmax(data[, axis, order, mode]) -- Calculate the relative maxima of data.
argrelextrema(data, comparator[, axis, ...]) -- Calculate the relative extrema of data.
Spectral Analysis:
periodogram(x[, fs, window, nfft, detrend, ...]) -- Estimate power spectral density using a periodogram.
welch(x[, fs, window, nperseg, noverlap, ...]) -- Estimate power spectral density using Welch’s method.
lombscargle(x, y, freqs) -- Computes the Lomb-Scargle periodogram.
vectorstrength(events, period) -- Determine the vector strength of the events corresponding to the given period.
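A minimal sketch of filter design, zero-phase filtering, and spectral estimation (the noisy test signal below is made up for illustration):

import numpy as np
from scipy import signal

fs = 100.0                               # sampling rate in Hz
t = np.arange(0.0, 2.0, 1.0 / fs)
x = np.sin(2 * np.pi * 5.0 * t) + 0.5 * np.random.randn(t.size)

b, a = signal.butter(4, 10.0 / (fs / 2.0), btype='low')   # 4th-order Butterworth low-pass
y = signal.filtfilt(b, a, x)                               # zero-phase (forward-backward) filtering

f, pxx = signal.welch(x, fs=fs, nperseg=128)               # power spectral density estimate
print(f[np.argmax(pxx)])                                   # near 5 Hz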
8.12 sparse
SciPy 2-D sparse matrix package for numeric data.
Access via:
In [1]: from scipy import sparse
See: http://docs.scipy.org/doc/scipy/reference/sparse.html
Sparse matrix classes:
bsr_matrix(arg1[, shape, dtype, copy, blocksize]) -- Block Sparse Row matrix
coo_matrix(arg1[, shape, dtype, copy]) -- A sparse matrix in COOrdinate format.
csc_matrix(arg1[, shape, dtype, copy]) -- Compressed Sparse Column matrix
csr_matrix(arg1[, shape, dtype, copy]) -- Compressed Sparse Row matrix
dia_matrix(arg1[, shape, dtype, copy]) -- Sparse matrix with DIAgonal storage
dok_matrix(arg1[, shape, dtype, copy]) -- Dictionary Of Keys based sparse matrix.
lil_matrix(arg1[, shape, dtype, copy]) -- Row-based linked list sparse matrix
Functions:
Building sparse matrices:
eye(m[, n, k, dtype, format]) -- Sparse matrix with ones on diagonal
identity(n[, dtype, format]) -- Identity matrix in sparse format
kron(A, B[, format]) -- kronecker product of sparse matrices A and B
kronsum(A, B[, format]) -- kronecker sum of sparse matrices A and B
diags(diagonals, offsets[, shape, format, dtype]) -- Construct a sparse matrix from diagonals.
spdiags(data, diags, m, n[, format]) -- Return a sparse matrix from diagonals.
block_diag(mats[, format, dtype]) -- Build a block diagonal sparse matrix from provided matrices.
tril(A[, k, format]) -- Return the lower triangular portion of a matrix in sparse format
triu(A[, k, format]) -- Return the upper triangular portion of a matrix in sparse format
bmat(blocks[, format, dtype]) -- Build a sparse matrix from sparse sub-blocks
hstack(blocks[, format, dtype]) -- Stack sparse matrices horizontally (column wise)
vstack(blocks[, format, dtype]) -- Stack sparse matrices vertically (row wise)
rand(m, n[, density, format, dtype, ...]) -- Generate a sparse matrix of the given shape and density with uniformly distributed values.
Identifying sparse matrices:
issparse(x)
isspmatrix(x)
isspmatrix_csc(x)
isspmatrix_csr(x)
isspmatrix_bsr(x)
isspmatrix_lil(x)
isspmatrix_dok(x)
isspmatrix_coo(x)
isspmatrix_dia(x)
Submodules:
csgraph
linalg
Exceptions:
SparseEfficiencyWarning
SparseWarning
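A minimal sketch of building a sparse matrix and solving with the linalg submodule (the tridiagonal system below is made up for illustration):

import numpy as np
from scipy import sparse
from scipy.sparse import linalg as splinalg

n = 5
diagonals = [np.full(n, 2.0), np.full(n - 1, -1.0), np.full(n - 1, -1.0)]
A = sparse.diags(diagonals, offsets=[0, -1, 1], format='csc')  # sparse tridiagonal matrix
print(sparse.isspmatrix_csc(A))          # True
print(A.toarray())

b = np.ones(n)
x = splinalg.spsolve(A, b)               # solve A x = b
print(np.allclose(A.dot(x), b))          # True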
8.13 special
Special Functions.
Access via:
In [1]: from scipy import special
Also see: http://docs.scipy.org/doc/scipy/reference/tutorial/special.html
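A minimal sketch exercising a few of the functions listed below (the arguments are made up for illustration):

from scipy import special

print(special.gamma(5.0))                # 24.0, since gamma(n) = (n - 1)!
print(special.erf(1.0))                  # about 0.8427
print(special.jv(0, 2.4048))             # J0 evaluated near its first zero, so about 0
print(special.jn_zeros(0, 3))            # first three zeros of J0
print(special.eval_legendre(3, 0.5))     # Legendre polynomial P3 at x = 0.5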
Airy functions:
airy(z) -- Airy functions and their derivatives.
airye(z) -- Exponentially scaled Airy functions and their derivatives.
ai_zeros(nt) -- Compute the zeros of Airy Functions Ai(x) and Ai’(x), a and a’ respectively, and the associated values of Ai(a’) and Ai’(a).
bi_zeros(nt) -- Compute the zeros of Airy Functions Bi(x) and Bi’(x), b and b’ respectively, and the associated values of Bi(b’) and Bi’(b).
Elliptic Functions and Integrals:
ellipj(u, m) -- Jacobian elliptic functions
ellipk(m) -- Computes the complete elliptic integral of the first kind.
ellipkm1(p) -- The complete elliptic integral of the first kind around m=1.
ellipkinc(phi, m) -- Incomplete elliptic integral of the first kind
ellipe(m) -- Complete elliptic integral of the second kind
ellipeinc(phi,m) -- Incomplete elliptic integral of the second kind
Bessel Functions:
jn(v, z) -- Bessel function of the first kind of real order v
jv(v, z) -- Bessel function of the first kind of real order v
jve(v, z) -- Exponentially scaled Bessel function of order v
yn(n,x) -- Bessel function of the second kind of integer order
yv(v,z) -- Bessel function of the second kind of real order
yve(v,z) -- Exponentially scaled Bessel function of the second kind of real order
kn(n, x) -- Modified Bessel function of the second kind of integer order n
kv(v,z) -- Modified Bessel function of the second kind of real order v
kve(v,z) -- Exponentially scaled modified Bessel function of the second kind.
iv(v,z) -- Modified Bessel function of the first kind of real order
ive(v,z) -- Exponentially scaled modified Bessel function of the first kind
hankel1(v, z) -- Hankel function of the first kind
hankel1e(v, z) -- Exponentially scaled Hankel function of the first kind
hankel2(v, z) -- Hankel function of the second kind
hankel2e(v, z) -- Exponentially scaled Hankel function of the second kind
The following is not a universal function:
lmbda(v, x) -- Compute sequence of lambda functions with arbitrary order v and their derivatives.
Zeros of Bessel Functions -- These are not universal functions:
jnjnp_zeros(nt) -- Compute nt (<=1200) zeros of the Bessel functions Jn and Jn’ and arrange them in order of their magnitudes.
jnyn_zeros(n, nt) -- Compute nt zeros of the Bessel functions Jn(x), Jn’(x), Yn(x), and Yn’(x), respectively.
jn_zeros(n, nt) -- Compute nt zeros of the Bessel function Jn(x).
jnp_zeros(n, nt) -- Compute nt zeros of the Bessel function Jn’(x).
yn_zeros(n, nt) -- Compute nt zeros of the Bessel function Yn(x).
ynp_zeros(n, nt) -- Compute nt zeros of the Bessel function Yn’(x).
y0_zeros(nt[, complex]) -- Returns nt (complex or real) zeros of Y0(z), z0, and the value of Y0’(z0) = -Y1(z0) at each zero.
y1_zeros(nt[, complex]) -- Returns nt (complex or real) zeros of Y1(z), z1, and the value of Y1’(z1) = Y0(z1) at each zero.
y1p_zeros(nt[, complex]) -- Returns nt (complex or real) zeros of Y1’(z), z1’, and the value of Y1(z1’) at each zero.
Faster versions of common Bessel Functions:
j0(x) -- Bessel function of the first kind of order 0
j1(x) -- Bessel function of the first kind of order 1
y1(x) -- Bessel function of the second kind of order 1
y0(x) -- Bessel function of the second kind of order 0
i0(x) -- Modified Bessel function of order 0
i0e(x) -- Exponentially scaled modified Bessel function of order 0.
i1(x) -- Modified Bessel function of order 1
i1e(x) -- Exponentially scaled modified Bessel function of order 1.
k0(x) -- Modified Bessel function K of order 0
k0e(x) -- Exponentially scaled modified Bessel function K of order 0
k1(x) -- Modified Bessel function of the second kind of order 1
k1e(x) -- Exponentially scaled modified Bessel function K of order 1
Integrals of Bessel Functions:
itj0y0(x) -- Integrals of Bessel functions of order 0
it2j0y0(x) -- Integrals related to Bessel functions of order 0
iti0k0(x) -- Integrals of modified Bessel functions of order 0
it2i0k0(x) -- Integrals related to modified Bessel functions of order 0
besselpoly(a, lmb, nu) -- Weighted integral of a Bessel function.
Derivatives of Bessel Functions:
jvp(v, z[, n]) -- Return the nth derivative of Jv(z) with respect to z.
yvp(v, z[, n]) -- Return the nth derivative of Yv(z) with respect to z.
kvp(v, z[, n]) -- Return the nth derivative of Kv(z) with respect to z.
ivp(v, z[, n]) -- Return the nth derivative of Iv(z) with respect to z.
h1vp(v, z[, n]) -- Return the nth derivative of H1v(z) with respect to z.
h2vp(v, z[, n]) -- Return the nth derivative of H2v(z) with respect to z.
Spherical Bessel Functions -- These are not universal functions:
sph_jn(n, z) -- Compute the spherical Bessel function jn(z) and its derivative for all orders up to and including n.
sph_yn(n, z) -- Compute the spherical Bessel function yn(z) and its derivative for all orders up to and including n.
sph_jnyn(n, z) -- Compute the spherical Bessel functions, jn(z) and yn(z) and their derivatives for all orders up to and including n.
sph_in(n, z) -- Compute the spherical Bessel function in(z) and its derivative for all orders up to and including n.
sph_kn(n, z) -- Compute the spherical Bessel function kn(z) and its derivative for all orders up to and including n.
sph_inkn(n, z) -- Compute the spherical Bessel functions, in(z) and kn(z) and their derivatives for all orders up to and including n.
Riccati-Bessel Functions -- These are not universal functions:
riccati_jn(n, x) -- Compute the Riccati-Bessel function of the first kind and its derivative for all orders up to and including n.
riccati_yn(n, x) -- Compute the Riccati-Bessel function of the second kind and its derivative for all orders up to and including n.
Struve Functions:
struve(v,x) -- Struve function
modstruve(v, x) -- Modified Struve function
itstruve0(x) -- Integral of the Struve function of order 0
it2struve0(x) -- Integral related to Struve function of order 0
itmodstruve0(x) -- Integral of the modified Struve function of order 0
Raw Statistical Functions -- See also scipy.stats: Friendly versions of these functions.
bdtr(k, n, p) -- Binomial distribution cumulative distribution function.
bdtrc(k, n, p) -- Binomial distribution survival function.
bdtri(k, n, y) -- Inverse function to bdtr vs. p.
btdtr(a,b,x) -- Cumulative beta distribution.
btdtri(a,b,p) -- p-th quantile of the beta distribution.
fdtr(dfn, dfd, x) -- F cumulative distribution function
fdtrc(dfn, dfd, x) -- F survival function
fdtri(dfn, dfd, p) -- Inverse to fdtr vs x
gdtr(a,b,x) -- Gamma distribution cumulative density function.
gdtrc(a,b,x) -- Gamma distribution survival function.
gdtria(p, b, x[, out]) -- Inverse of gdtr vs a.
gdtrib(a, p, x[, out]) -- Inverse of gdtr vs b.
gdtrix(a, b, p[, out]) -- Inverse of gdtr vs x.
nbdtr(k, n, p) -- Negative binomial cumulative distribution function
nbdtrc(k,n,p) -- Negative binomial survival function
nbdtri(k, n, y) -- Inverse of nbdtr vs p
pdtr(k, m) -- Poisson cumulative distribution function
pdtrc(k, m) -- Poisson survival function
pdtri(k,y) -- Inverse to pdtr vs m
stdtr(df,t) -- Student t distribution cumulative density function
stdtridf(p,t) -- Inverse of stdtr vs df
stdtrit(df,p) -- Inverse of stdtr vs t
chdtr(v, x) -- Chi square cumulative distribution function
chdtrc(v,x) -- Chi square survival function
chdtri(v,p) -- Inverse to chdtrc
ndtr(x) -- Gaussian cumulative distribution function
ndtri(y) -- Inverse of ndtr vs x
smirnov(n,e) -- Kolmogorov-Smirnov complementary cumulative distribution function
smirnovi(n,y) -- Inverse to smirnov
kolmogorov(y) -- Complementary cumulative distribution function of Kolmogorov distribution
kolmogi(p) -- Inverse function to kolmogorov
tklmbda(x, lmbda) -- Tukey-Lambda cumulative distribution function
logit(x) -- Logit ufunc for ndarrays.
expit(x) -- Expit ufunc for ndarrays.
boxcox(x, lmbda) -- Compute the Box-Cox transformation.
boxcox1p(x, lmbda) -- Compute the Box-Cox transformation of 1 + x.
Gamma and Related Functions:
gamma(z) -- Gamma function
gammaln(z) -- Logarithm of absolute value of gamma function
gammasgn(x) -- Sign of the gamma function.
gammainc(a, x) -- Incomplete gamma function
gammaincinv(a, y) -- Inverse to gammainc
gammaincc(a,x) -- Complemented incomplete gamma integral
gammainccinv(a,y) -- Inverse to gammaincc
beta(a, b) -- Beta function.
betaln(a, b) -- Natural logarithm of absolute value of beta function.
betainc(a, b, x) -- Incomplete beta integral.
betaincinv(a, b, y) -- Inverse function to beta integral.
psi(z) -- Digamma function
rgamma(z) -- Reciprocal of the gamma function, 1/gamma(z)
polygamma(n, x) -- Polygamma function which is the nth derivative of the digamma (psi) function.
multigammaln(a, d) -- Returns the log of multivariate gamma, also sometimes called the generalized gamma.
Error Function and Fresnel Integrals:
erf(z) -- Returns the error function of complex argument.
erfc(x) -- Complementary error function, 1 - erf(x).
erfcx(x) -- Scaled complementary error function, exp(x^2) erfc(x).
erfi(z) -- Imaginary error function, -i erf(i z).
erfinv(y) -- Inverse function for erf
erfcinv(y) -- Inverse function for erfc
wofz(z) -- Faddeeva function
dawsn(x) -- Dawson’s integral.
fresnel(z) -- Fresnel sin and cos integrals
fresnel_zeros(nt) -- Compute nt complex zeros of the sine and cosine Fresnel integrals S(z) and C(z).
modfresnelp(x) -- Modified Fresnel positive integrals
modfresnelm(x) -- Modified Fresnel negative integrals
These are not universal functions:
erf_zeros(nt) -- Compute nt complex zeros of the error function erf(z).
fresnelc_zeros(nt) -- Compute nt complex zeros of the cosine Fresnel integral C(z).
fresnels_zeros(nt) -- Compute nt complex zeros of the sine Fresnel integral S(z).
Legendre Functions:
lpmv(m, v, x) -- Associated Legendre function of integer order.
sph_harm -- Compute spherical harmonics.
These are not universal functions:
clpmn(m, n, z[, type]) -- Associated Legendre function of the first kind, Pmn(z)
lpn(n, z) -- Compute sequence of Legendre functions of the first kind (polynomials), Pn(z) and derivatives for all degrees from 0 to n (inclusive).
lqn(n, z) -- Compute sequence of Legendre functions of the second kind, Qn(z) and derivatives for all degrees from 0 to n (inclusive).
lpmn(m, n, z) -- Associated Legendre function of the first kind, Pmn(z)
lqmn(m, n, z) -- Associated Legendre functions of the second kind, Qmn(z) and its derivative, Qmn'(z) of order m and degree n.
Orthogonal polynomials -- The following functions evaluate values of orthogonal polynomials:
eval_legendre(n, x[, out]) -- Evaluate Legendre polynomial at a point.
eval_chebyt(n, x[, out]) -- Evaluate Chebyshev T polynomial at a point.
eval_chebyu(n, x[, out]) -- Evaluate Chebyshev U polynomial at a point.
eval_chebyc(n, x[, out]) -- Evaluate Chebyshev C polynomial at a point.
eval_chebys(n, x[, out]) -- Evaluate Chebyshev S polynomial at a point.
eval_jacobi(n, alpha, beta, x[, out]) -- Evaluate Jacobi polynomial at a point.
eval_laguerre(n, x[, out]) -- Evaluate Laguerre polynomial at a point.
eval_genlaguerre(n, alpha, x[, out]) -- Evaluate generalized Laguerre polynomial at a point.
eval_hermite(n, x[, out]) -- Evaluate Hermite polynomial at a point.
eval_hermitenorm(n, x[, out]) -- Evaluate normalized Hermite polynomial at a point.
eval_gegenbauer(n, alpha, x[, out]) -- Evaluate Gegenbauer polynomial at a point.
eval_sh_legendre(n, x[, out]) -- Evaluate shifted Legendre polynomial at a point.
eval_sh_chebyt(n, x[, out]) -- Evaluate shifted Chebyshev T polynomial at a point.
eval_sh_chebyu(n, x[, out]) -- Evaluate shifted Chebyshev U polynomial at a point.
eval_sh_jacobi(n, p, q, x[, out]) -- Evaluate shifted Jacobi polynomial at a point.
The functions below, in turn, return orthopoly1d objects, which behave much like numpy.poly1d objects. The orthopoly1d class also has a weights attribute, which returns the roots, weights, and total weights for the appropriate form of Gaussian quadrature. These are returned as an n x 3 array with the roots in the first column, the weights in the second column, and the total weights in the final column:
legendre(n[, monic]) -- Returns the nth order Legendre polynomial, P_n(x), orthogonal over [-1,1] with weight function 1.
chebyt(n[, monic]) -- Return nth order Chebyshev polynomial of first kind, Tn(x).
chebyu(n[, monic]) -- Return nth order Chebyshev polynomial of second kind, Un(x).
chebyc(n[, monic]) -- Return nth order Chebyshev polynomial of first kind, Cn(x).
chebys(n[, monic]) -- Return nth order Chebyshev polynomial of second kind, Sn(x).
jacobi(n, alpha, beta[, monic]) -- Returns the nth order Jacobi polynomial, P^(alpha,beta)_n(x) orthogonal over [-1,1] with weighting function (1-x)**alpha (1+x)**beta with alpha,beta > -1.
laguerre(n[, monic]) -- Return the nth order Laguerre polynomial, L_n(x), orthogonal over [0, inf) with weighting function exp(-x).
genlaguerre(n, alpha[, monic]) -- Returns the nth order generalized (associated) Laguerre polynomial, L^(alpha)_n(x), orthogonal over [0, inf) with weighting function exp(-x) x**alpha with alpha > -1.
hermite(n[, monic]) -- Return the nth order Hermite polynomial, H_n(x), orthogonal over (-inf, inf) with weighting function exp(-x**2).
hermitenorm(n[, monic]) -- Return the nth order normalized Hermite polynomial, He_n(x), orthogonal over (-inf, inf) with weighting function exp(-x**2/2).
gegenbauer(n, alpha[, monic]) -- Return the nth order Gegenbauer (ultraspherical) polynomial, C^(alpha)_n(x), orthogonal over [-1, 1] with weighting function (1-x**2)**(alpha-1/2) with alpha > -1/2.
sh_legendre(n[, monic]) -- Returns the nth order shifted Legendre polynomial, P^*_n(x), orthogonal over [0,1] with weighting function 1.
sh_chebyt(n[, monic]) -- Return nth order shifted Chebyshev polynomial of first kind, Tn(x).
sh_chebyu(n[, monic]) -- Return nth order shifted Chebyshev polynomial of second kind, Un(x).
sh_jacobi(n, p, q[, monic]) -- Returns the nth order Jacobi polynomial, G_n(p,q,x) orthogonal over [0,1] with weighting function (1-x)**(p-q) (x)**(q-1) with p>q-1 and q > 0.
Warning: Large-order polynomials obtained from these functions are numerically unstable. orthopoly1d objects are converted to poly1d when doing arithmetic, and numpy.poly1d works in the power basis, which cannot represent high-order polynomials accurately; this can cause significant inaccuracy. Prefer the eval_* functions above when you only need values (see the short sketch below).
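As a brief illustrative sketch: eval_legendre evaluates a Legendre polynomial directly, while legendre(n) builds an orthopoly1d object whose weights attribute carries the Gauss-Legendre roots and weights:

    import numpy as np
    from scipy import special

    xs = np.linspace(-1.0, 1.0, 5)
    print(special.eval_legendre(3, xs))   # P_3(x), stable pointwise evaluation

    P3 = special.legendre(3)              # orthopoly1d object
    print(P3(0.5))                        # same value as eval_legendre(3, 0.5)
    print(P3.weights)                     # n x 3 array: roots, weights, total weights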
Hypergeometric Functions:
hyp2f1(a, b, c, z) -- Gauss hypergeometric function 2F1(a, b; c; z).
hyp1f1(a, b, x) -- Confluent hypergeometric function 1F1(a, b; x)
hyperu(a, b, x) -- Confluent hypergeometric function U(a, b, x) of the second kind
hyp0f1(v, z) -- Confluent hypergeometric limit function 0F1.
hyp2f0(a, b, x, type) -- Hypergeometric function 2F0 in y and an error estimate
hyp1f2(a, b, c, x) -- Hypergeometric function 1F2 and error estimate
hyp3f0(a, b, c, x) -- Hypergeometric function 3F0 in y and an error estimate
Parabolic Cylinder Functions:
pbdv(v, x) -- Parabolic cylinder function D
pbvv(v,x) -- Parabolic cylinder function V
pbwa(a,x) -- Parabolic cylinder function W
These are not universal functions:
pbdv_seq(v, x) -- Compute sequence of parabolic cylinder functions Dv(x) and their derivatives for Dv0(x)..Dv(x) with v0=v-int(v).
pbvv_seq(v, x) -- Compute sequence of parabolic cylinder functions Vv(x) and their derivatives for Vv0(x)..Vv(x) with v0=v-int(v).
pbdn_seq(n, z) -- Compute sequence of parabolic cylinder functions Dn(z) and their derivatives for D0(z)..Dn(z).
Mathieu and Related Functions:
mathieu_a(m,q) -- Characteristic value of even Mathieu functions
mathieu_b(m,q) -- Characteristic value of odd Mathieu functions
These are not universal functions:
mathieu_even_coef(m, q) -- Compute expansion coefficients for even Mathieu functions and modified Mathieu functions.
mathieu_odd_coef(m, q) -- Compute expansion coefficients for odd Mathieu functions and modified Mathieu functions.
The following return both function and first derivative:
mathieu_cem(m,q,x) -- Even Mathieu function and its derivative
mathieu_sem(m, q, x) -- Odd Mathieu function and its derivative
mathieu_modcem1(m, q, x) -- Even modified Mathieu function of the first kind and its derivative
mathieu_modcem2(m, q, x) -- Even modified Mathieu function of the second kind and its derivative
mathieu_modsem1(m,q,x) -- Odd modified Mathieu function of the first kind and its derivative
mathieu_modsem2(m, q, x) -- Odd modified Mathieu function of the second kind and its derivative
Spheroidal Wave Functions:
pro_ang1(m,n,c,x) -- Prolate spheroidal angular function of the first kind and its derivative
pro_rad1(m,n,c,x) -- Prolate spheroidal radial function of the first kind and its derivative
pro_rad2(m,n,c,x) -- Prolate spheroidal radial function of the second kind and its derivative
obl_ang1(m, n, c, x) -- Oblate spheroidal angular function of the first kind and its derivative
obl_rad1(m,n,c,x) -- Oblate spheroidal radial function of the first kind and its derivative
obl_rad2(m,n,c,x) -- Oblate spheroidal radial function of the second kind and its derivative.
pro_cv(m,n,c) -- Characteristic value of prolate spheroidal function
obl_cv(m, n, c) -- Characteristic value of oblate spheroidal function
pro_cv_seq(m, n, c) -- Compute a sequence of characteristic values for the prolate spheroidal wave functions for mode m and n’=m..n and spheroidal parameter c.
obl_cv_seq(m, n, c) -- Compute a sequence of characteristic values for the oblate spheroidal wave functions for mode m and n’=m..n and spheroidal parameter c.
The following functions require a pre-computed characteristic value:
pro_ang1_cv(m,n,c,cv,x) -- Prolate spheroidal angular function pro_ang1 for precomputed characteristic value
pro_rad1_cv(m,n,c,cv,x) -- Prolate spheroidal radial function pro_rad1 for precomputed characteristic value
pro_rad2_cv(m,n,c,cv,x) -- Prolate spheroidal radial function pro_rad2 for precomputed characteristic value
obl_ang1_cv(m, n, c, cv, x) -- Oblate spheroidal angular function obl_ang1 for precomputed characteristic value
obl_rad1_cv(m,n,c,cv,x) -- Oblate spheroidal radial function obl_rad1 for precomputed characteristic value
obl_rad2_cv(m,n,c,cv,x) -- Oblate spheroidal radial function obl_rad2 for precomputed characteristic value
Kelvin Functions:
kelvin(x) -- Kelvin functions as complex numbers
kelvin_zeros(nt) -- Compute nt zeros of all the Kelvin functions returned in a length 8 tuple of arrays of length nt.
ber(x) -- Kelvin function ber.
bei(x) -- Kelvin function bei
berp(x) -- Derivative of the Kelvin function ber
beip(x) -- Derivative of the Kelvin function bei
ker(x) -- Kelvin function ker
kei(x) -- Kelvin function kei
kerp(x) -- Derivative of the Kelvin function ker
keip(x) -- Derivative of the Kelvin function kei
These are not universal functions:
ber_zeros(nt) -- Compute nt zeros of the Kelvin function ber x
bei_zeros(nt) -- Compute nt zeros of the Kelvin function bei x
berp_zeros(nt) -- Compute nt zeros of the Kelvin function ber’ x
beip_zeros(nt) -- Compute nt zeros of the Kelvin function bei’ x
ker_zeros(nt) -- Compute nt zeros of the Kelvin function ker x
kei_zeros(nt) -- Compute nt zeros of the Kelvin function kei x
kerp_zeros(nt) -- Compute nt zeros of the Kelvin function ker’ x
keip_zeros(nt) -- Compute nt zeros of the Kelvin function kei’ x
Combinatorics:
comb(N, k[, exact, repetition]) -- The number of combinations of N things taken k at a time.
perm(N, k[, exact]) -- Permutations of N things taken k at a time, i.e., k-permutations of N.
Other Special Functions:
binom(n, k) -- Binomial coefficient
expn(n, x) -- Exponential integral E_n
exp1(z) -- Exponential integral E_1 of complex argument z
expi(x) -- Exponential integral Ei
factorial(n[, exact]) -- The factorial function, n! = special.gamma(n+1).
factorial2(n[, exact]) -- Double factorial.
factorialk(n, k[, exact]) -- n(!!...!) = multifactorial of order k
shichi(x) -- Hyperbolic sine and cosine integrals
sici(x) -- Sine and cosine integrals
spence(x) -- Dilogarithm integral
lambertw(z[, k, tol]) -- Lambert W function.
zeta(x, q) -- Hurwitz zeta function
zetac(x) -- Riemann zeta function minus 1.
Convenience Functions:
cbrt(x) -- Cube root of x
exp10(x) -- 10**x
exp2(x) -- 2**x
radian(d, m, s) -- Convert from degrees to radians
cosdg(x) -- Cosine of the angle x given in degrees.
sindg(x) -- Sine of angle given in degrees
tandg(x) -- Tangent of angle x given in degrees.
cotdg(x) -- Cotangent of the angle x given in degrees.
log1p(x) -- Calculates log(1+x) for use when x is near zero
expm1(x) -- exp(x) - 1 for use when x is near zero.
cosm1(x) -- cos(x) - 1 for use when x is near zero.
round(x) -- Round to nearest integer
xlogy(x, y) -- Compute x*log(y) so that the result is 0 if x = 0.
xlog1py(x, y) -- Compute x*log1p(y) so that the result is 0 if x = 0.
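To close this survey of scipy.special, here is a small illustrative sketch calling a few of the functions listed above:

    import numpy as np
    from scipy import special

    print(special.gamma(5))                  # 24.0, since gamma(n+1) == n!
    print(special.erf(1.0))                  # error function at x = 1.0
    print(special.comb(10, 3, exact=True))   # 120 combinations
    x = np.array([1e-10, 1e-4, 0.1])
    print(special.log1p(x))                  # accurate log(1 + x) for small x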
8.14 stats
Statistical functions -- This module contains a large number of probability distributions as well as a growing library of statistical functions.
Access via:
In [1]: from scipy import stats
Also see: http://docs.scipy.org/doc/scipy/reference/tutorial/stats.html
For an overview, in IPython, do:
In [1]: from scipy import stats
In [2]: stats?
Each included distribution is an instance of the class rv_continuous. For each distribution, the following methods are available:
rv_continuous([momtype, a, b, xtol, ...]) -- A generic continuous random variable class meant for subclassing.
rv_continuous.pdf(x, *args, **kwds) -- Probability density function at x of the given RV.
rv_continuous.logpdf(x, *args, **kwds) -- Log of the probability density function at x of the given RV.
rv_continuous.cdf(x, *args, **kwds) -- Cumulative distribution function of the given RV.
rv_continuous.logcdf(x, *args, **kwds) -- Log of the cumulative distribution function at x of the given RV.
rv_continuous.sf(x, *args, **kwds) -- Survival function (1-cdf) at x of the given RV.
rv_continuous.logsf(x, *args, **kwds) -- Log of the survival function of the given RV.
rv_continuous.ppf(q, *args, **kwds) -- Percent point function (inverse of cdf) at q of the given RV.
rv_continuous.isf(q, *args, **kwds) -- Inverse survival function at q of the given RV.
rv_continuous.moment(n, *args, **kwds) -- n’th order non-central moment of distribution.
rv_continuous.stats(*args, **kwds) -- Some statistics of the given RV
rv_continuous.entropy(*args, **kwds) -- Differential entropy of the RV.
rv_continuous.fit(data, *args, **kwds) -- Return MLEs for shape, location, and scale parameters from data.
rv_continuous.expect([func, args, loc, ...]) -- Calculate expected value of a function with respect to the distribution.
Calling the instance as a function returns a frozen random variable whose shape, location, and scale parameters are fixed; see the sketch below.
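For example, a minimal sketch with the normal distribution (norm, listed under the continuous distributions below):

    from scipy import stats

    rv = stats.norm(loc=0.0, scale=1.0)   # freeze location and scale
    print(rv.pdf(0.0))                    # density at 0
    print(rv.cdf(1.96))                   # cumulative probability
    print(rv.ppf(0.975))                  # quantile (inverse of the cdf)
    sample = rv.rvs(size=5)               # five random variates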
Similarly, each discrete distribution is an instance of the class rv_discrete:
rv_discrete([a, b, name, badvalue, ...]) -- A generic discrete random variable class meant for subclassing.
rv_discrete.rvs(*args, **kwargs) -- Random variates of given type.
rv_discrete.pmf(k, *args, **kwds) -- Probability mass function at k of the given RV.
rv_discrete.logpmf(k, *args, **kwds) -- Log of the probability mass function at k of the given RV.
rv_discrete.cdf(k, *args, **kwds) -- Cumulative distribution function of the given RV.
rv_discrete.logcdf(k, *args, **kwds) -- Log of the cumulative distribution function at k of the given RV
rv_discrete.sf(k, *args, **kwds) -- Survival function (1-cdf) at k of the given RV.
rv_discrete.logsf(k, *args, **kwds) -- Log of the survival function of the given RV.
rv_discrete.ppf(q, *args, **kwds) -- Percent point function (inverse of cdf) at q of the given RV
rv_discrete.isf(q, *args, **kwds) -- Inverse survival function (1-sf) at q of the given RV.
rv_discrete.stats(*args, **kwds) -- Some statistics of the given RV
rv_discrete.moment(n, *args, **kwds) -- n’th order non-central moment of distribution.
rv_discrete.entropy(*args, **kwds) -- Differential entropy of the RV.
rv_discrete.expect([func, args, loc, lb, ...]) -- Calculate expected value of a function with respect to the distribution
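A similar minimal sketch for a discrete distribution (binom, listed under the discrete distributions below):

    from scipy import stats

    print(stats.binom.pmf(3, n=10, p=0.5))         # P(X == 3)
    print(stats.binom.cdf(3, n=10, p=0.5))         # P(X <= 3)
    draws = stats.binom.rvs(n=10, p=0.5, size=5)   # five random draws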
Continuous distributions:
alpha -- An alpha continuous random variable.
anglit -- An anglit continuous random variable.
arcsine -- An arcsine continuous random variable.
beta -- A beta continuous random variable.
betaprime -- A beta prime continuous random variable.
bradford -- A Bradford continuous random variable.
burr -- A Burr continuous random variable.
cauchy -- A Cauchy continuous random variable.
chi -- A chi continuous random variable.
chi2 -- A chi-squared continuous random variable.
cosine -- A cosine continuous random variable.
dgamma -- A double gamma continuous random variable.
dweibull -- A double Weibull continuous random variable.
erlang -- An Erlang continuous random variable.
expon -- An exponential continuous random variable.
exponweib -- An exponentiated Weibull continuous random variable.
exponpow -- An exponential power continuous random variable.
f -- An F continuous random variable.
fatiguelife -- A fatigue-life (Birnbaum-Saunders) continuous random variable.
fisk -- A Fisk continuous random variable.
foldcauchy -- A folded Cauchy continuous random variable.
foldnorm -- A folded normal continuous random variable.
frechet_r -- A Frechet right (or Weibull minimum) continuous random variable.
frechet_l -- A Frechet left (or Weibull maximum) continuous random variable.
genlogistic -- A generalized logistic continuous random variable.
genpareto -- A generalized Pareto continuous random variable.
genexpon -- A generalized exponential continuous random variable.
genextreme -- A generalized extreme value continuous random variable.
gausshyper -- A Gauss hypergeometric continuous random variable.
gamma -- A gamma continuous random variable.
gengamma -- A generalized gamma continuous random variable.
genhalflogistic -- A generalized half-logistic continuous random variable.
gilbrat -- A Gilbrat continuous random variable.
gompertz -- A Gompertz (or truncated Gumbel) continuous random variable.
gumbel_r -- A right-skewed Gumbel continuous random variable.
gumbel_l -- A left-skewed Gumbel continuous random variable.
halfcauchy -- A Half-Cauchy continuous random variable.
halflogistic -- A half-logistic continuous random variable.
halfnorm -- A half-normal continuous random variable.
hypsecant -- A hyperbolic secant continuous random variable.
invgamma -- An inverted gamma continuous random variable.
invgauss -- An inverse Gaussian continuous random variable.
invweibull -- An inverted Weibull continuous random variable.
johnsonsb -- A Johnson SB continuous random variable.
johnsonsu -- A Johnson SU continuous random variable.
ksone -- General Kolmogorov-Smirnov one-sided test.
kstwobign -- Kolmogorov-Smirnov two-sided test for large N.
laplace -- A Laplace continuous random variable.
logistic -- A logistic (or Sech-squared) continuous random variable.
loggamma -- A log gamma continuous random variable.
loglaplace -- A log-Laplace continuous random variable.
lognorm -- A lognormal continuous random variable.
lomax -- A Lomax (Pareto of the second kind) continuous random variable.
maxwell -- A Maxwell continuous random variable.
mielke -- A Mielke Beta-Kappa continuous random variable.
nakagami -- A Nakagami continuous random variable.
ncx2 -- A non-central chi-squared continuous random variable.
ncf -- A non-central F distribution continuous random variable.
nct -- A non-central Student’s T continuous random variable.
norm -- A normal continuous random variable.
pareto -- A Pareto continuous random variable.
pearson3 -- A Pearson type III continuous random variable.
powerlaw -- A power-function continuous random variable.
powerlognorm -- A power log-normal continuous random variable.
powernorm -- A power normal continuous random variable.
rdist -- An R-distributed continuous random variable.
reciprocal -- A reciprocal continuous random variable.
rayleigh -- A Rayleigh continuous random variable.
rice -- A Rice continuous random variable.
recipinvgauss -- A reciprocal inverse Gaussian continuous random variable.
semicircular -- A semicircular continuous random variable.
t -- A Student’s T continuous random variable.
triang -- A triangular continuous random variable.
truncexpon -- A truncated exponential continuous random variable.
truncnorm -- A truncated normal continuous random variable.
tukeylambda -- A Tukey-Lambda continuous random variable.
uniform -- A uniform continuous random variable.
vonmises -- A Von Mises continuous random variable.
wald -- A Wald continuous random variable.
weibull_min -- A Frechet right (or Weibull minimum) continuous random variable.
weibull_max -- A Frechet left (or Weibull maximum) continuous random variable.
wrapcauchy -- A wrapped Cauchy continuous random variable.
Multivariate distributions:
multivariate_normal -- A multivariate normal random variable.
Discrete distributions:
bernoulli -- A Bernoulli discrete random variable.
binom -- A binomial discrete random variable.
boltzmann -- A Boltzmann (Truncated Discrete Exponential) random variable.
dlaplace -- A Laplacian discrete random variable.
geom -- A geometric discrete random variable.
hypergeom -- A hypergeometric discrete random variable.
logser -- A Logarithmic (Log-Series, Series) discrete random variable.
nbinom -- A negative binomial discrete random variable.
planck -- A Planck discrete exponential random variable.
poisson -- A Poisson discrete random variable.
randint -- A uniform discrete random variable.
skellam -- A Skellam discrete random variable.
zipf -- A Zipf discrete random variable.
Statistical functions -- Several of these functions have a similar version in scipy.stats.mstats which work for masked arrays.
describe(a[, axis]) -- Computes several descriptive statistics of the passed array.
gmean(a[, axis, dtype]) -- Compute the geometric mean along the specified axis.
hmean(a[, axis, dtype]) -- Calculates the harmonic mean along the specified axis.
kurtosis(a[, axis, fisher, bias]) -- Computes the kurtosis (Fisher or Pearson) of a dataset.
kurtosistest(a[, axis]) -- Tests whether a dataset has normal kurtosis
mode(a[, axis]) -- Returns an array of the modal (most common) value in the passed array.
moment(a[, moment, axis]) -- Calculates the nth moment about the mean for a sample.
normaltest(a[, axis]) -- Tests whether a sample differs from a normal distribution.
skew(a[, axis, bias]) -- Computes the skewness of a data set.
skewtest(a[, axis]) -- Tests whether the skew is different from the normal distribution.
tmean(a[, limits, inclusive]) -- Compute the trimmed mean.
tvar(a[, limits, inclusive]) -- Compute the trimmed variance
tmin(a[, lowerlimit, axis, inclusive]) -- Compute the trimmed minimum
tmax(a[, upperlimit, axis, inclusive]) -- Compute the trimmed maximum
tstd(a[, limits, inclusive]) -- Compute the trimmed sample standard deviation
tsem(a[, limits, inclusive]) -- Compute the trimmed standard error of the mean.
nanmean(x[, axis]) -- Compute the mean over the given axis ignoring nans.
nanstd(x[, axis, bias]) -- Compute the standard deviation over the given axis, ignoring nans.
nanmedian(x[, axis]) -- Compute the median along the given axis ignoring nan values.
variation(a[, axis]) -- Computes the coefficient of variation, the ratio of the biased standard deviation to the mean.
cumfreq(a[, numbins, defaultreallimits, weights]) -- Returns a cumulative frequency histogram, using the histogram function.
histogram2(a, bins) -- Compute histogram using divisions in bins.
histogram(a[, numbins, defaultlimits, ...]) -- Separates the range into several bins and returns the number of instances in each bin.
itemfreq(a) -- Returns a 2-D array of item frequencies.
percentileofscore(a, score[, kind]) -- The percentile rank of a score relative to a list of scores.
scoreatpercentile(a, per[, limit, ...]) -- Calculate the score at a given percentile of the input sequence.
relfreq(a[, numbins, defaultreallimits, weights]) -- Returns a relative frequency histogram, using the histogram function.
binned_statistic(x, values[, statistic, ...]) -- Compute a binned statistic for a set of data.
binned_statistic_2d(x, y, values[, ...]) -- Compute a bidimensional binned statistic for a set of data.
binned_statistic_dd(sample, values[, ...]) -- Compute a multidimensional binned statistic for a set of data.
obrientransform(*args) -- Computes the O’Brien transform on input data (any number of arrays).
signaltonoise(a[, axis, ddof]) -- The signal-to-noise ratio of the input data.
bayes_mvs(data[, alpha]) -- Bayesian confidence intervals for the mean, var, and std.
sem(a[, axis, ddof]) -- Calculates the standard error of the mean (or standard error of measurement) of the values in the input array.
zmap(scores, compare[, axis, ddof]) -- Calculates the relative z-scores.
zscore(a[, axis, ddof]) -- Calculates the z score of each value in the sample, relative to the sample mean and standard deviation.
sigmaclip(a[, low, high]) -- Iterative sigma-clipping of array elements.
threshold(a[, threshmin, threshmax, newval]) -- Clip array to a given value.
trimboth(a, proportiontocut[, axis]) -- Slices off a proportion of items from both ends of an array.
trim1(a, proportiontocut[, tail]) -- Slices off a proportion of items from ONE end of the passed array distribution.
f_oneway(*args) -- Performs a 1-way ANOVA.
pearsonr(x, y) -- Calculates a Pearson correlation coefficient and the p-value for testing non-correlation.
spearmanr(a[, b, axis]) -- Calculates a Spearman rank-order correlation coefficient and the p-value to test for non-correlation.
pointbiserialr(x, y) -- Calculates a point biserial correlation coefficient and the associated p-value.
kendalltau(x, y[, initial_lexsort]) -- Calculates Kendall’s tau, a correlation measure for ordinal data.
linregress(x[, y]) -- Calculate a regression line
ttest_1samp(a, popmean[, axis]) -- Calculates the T-test for the mean of ONE group of scores.
ttest_ind(a, b[, axis, equal_var]) -- Calculates the T-test for the means of TWO INDEPENDENT samples of scores.
ttest_rel(a, b[, axis]) -- Calculates the T-test on TWO RELATED samples of scores, a and b.
kstest(rvs, cdf[, args, N, alternative, mode]) -- Perform the Kolmogorov-Smirnov test for goodness of fit.
chisquare(f_obs[, f_exp, ddof, axis]) -- Calculates a one-way chi square test.
power_divergence(f_obs[, f_exp, ddof, axis, ...]) -- Cressie-Read power divergence statistic and goodness of fit test.
ks_2samp(data1, data2) -- Computes the Kolmogorov-Smirnov statistic on 2 samples.
mannwhitneyu(x, y[, use_continuity]) -- Computes the Mann-Whitney rank test on samples x and y.
tiecorrect(rankvals) -- Tie correction factor for ties in the Mann-Whitney U and Kruskal-Wallis H tests.
rankdata(a[, method]) -- Assign ranks to data, dealing with ties appropriately.
ranksums(x, y) -- Compute the Wilcoxon rank-sum statistic for two samples.
wilcoxon(x[, y, zero_method, correction]) -- Calculate the Wilcoxon signed-rank test.
kruskal(*args) -- Compute the Kruskal-Wallis H-test for independent samples
friedmanchisquare(*args) -- Computes the Friedman test for repeated measurements
ansari(x, y) -- Perform the Ansari-Bradley test for equal scale parameters
bartlett(*args) -- Perform Bartlett’s test for equal variances
levene(*args, **kwds) -- Perform Levene test for equal variances.
shapiro(x[, a, reta]) -- Perform the Shapiro-Wilk test for normality.
anderson(x[, dist]) -- Anderson-Darling test for data coming from a particular distribution
anderson_ksamp(samples[, midrank]) -- The Anderson-Darling test for k-samples.
binom_test(x[, n, p]) -- Perform a test that the probability of success is p.
fligner(*args, **kwds) -- Perform Fligner’s test for equal variances.
mood(x, y[, axis]) -- Perform Mood’s test for equal scale parameters.
boxcox(x[, lmbda, alpha]) -- Return a positive dataset transformed by a Box-Cox power transformation.
boxcox_normmax(x[, brack, method]) -- Compute optimal Box-Cox transform parameter for input data.
boxcox_llf(lmb, data) -- The boxcox log-likelihood function.
entropy(pk[, qk, base]) -- Calculate the entropy of a distribution for given probability values.
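As a short illustrative sketch, a few of the functions above applied to synthetic data:

    import numpy as np
    from scipy import stats

    rng = np.random.RandomState(0)
    a = rng.normal(loc=0.0, scale=1.0, size=100)
    b = rng.normal(loc=0.5, scale=1.0, size=100)

    print(stats.describe(a))                 # count, min/max, mean, variance, ...
    t_stat, p_value = stats.ttest_ind(a, b)  # two-sample t-test
    print(t_stat, p_value)
    print(stats.skew(a), stats.kurtosis(a))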
Contingency table functions:
chi2_contingency(observed[, correction, lambda_]) -- Chi-square test of independence of variables in a contingency table.
contingency.expected_freq(observed) -- Compute the expected frequencies from a contingency table.
contingency.margins(a) -- Return a list of the marginal sums of the array a.
fisher_exact(table[, alternative]) -- Performs a Fisher exact test on a 2x2 contingency table.
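For example, a minimal sketch of a chi-square test of independence on a small 2x2 table (the numbers are made up for illustration):

    import numpy as np
    from scipy import stats

    table = np.array([[10, 20],
                      [30, 40]])
    chi2, p, dof, expected = stats.chi2_contingency(table)
    odds_ratio, p_exact = stats.fisher_exact(table)
    print(p, p_exact)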
Plot-tests:
ppcc_max(x[, brack, dist]) -- Returns the shape parameter that maximizes the probability plot correlation coefficient for the given data to a one-parameter family of distributions.
ppcc_plot(x, a, b[, dist, plot, N]) -- Returns (shape, ppcc), and optionally plots shape vs. ppcc.
probplot(x[, sparams, dist, fit, plot]) -- Calculate quantiles for a probability plot, and optionally show the plot.
boxcox_normplot(x, la, lb[, plot, N]) -- Compute parameters for a Box-Cox normality plot, optionally show it.
Masked statistics functions:
Statistical functions for masked arrays (scipy.stats.mstats):
scipy.stats.mstats.argstoarray
scipy.stats.mstats.betai
scipy.stats.mstats.chisquare
scipy.stats.mstats.count_tied_groups
scipy.stats.mstats.describe
scipy.stats.mstats.f_oneway
scipy.stats.mstats.f_value_wilks_lambda
scipy.stats.mstats.find_repeats
scipy.stats.mstats.friedmanchisquare
scipy.stats.mstats.kendalltau
scipy.stats.mstats.kendalltau_seasonal
scipy.stats.mstats.kruskalwallis
scipy.stats.mstats.ks_twosamp
scipy.stats.mstats.kurtosis
scipy.stats.mstats.kurtosistest
scipy.stats.mstats.linregress
scipy.stats.mstats.mannwhitneyu
scipy.stats.mstats.mode
scipy.stats.mstats.moment
scipy.stats.mstats.mquantiles
scipy.stats.mstats.msign
scipy.stats.mstats.normaltest
scipy.stats.mstats.obrientransform
scipy.stats.mstats.pearsonr
scipy.stats.mstats.plotting_positions
scipy.stats.mstats.pointbiserialr
scipy.stats.mstats.rankdata
scipy.stats.mstats.scoreatpercentile
scipy.stats.mstats.sem
scipy.stats.mstats.signaltonoise
scipy.stats.mstats.skew
scipy.stats.mstats.skewtest
scipy.stats.mstats.spearmanr
scipy.stats.mstats.theilslopes
scipy.stats.mstats.threshold
scipy.stats.mstats.tmax
scipy.stats.mstats.tmean
scipy.stats.mstats.tmin
scipy.stats.mstats.trim
scipy.stats.mstats.trima
scipy.stats.mstats.trimboth
scipy.stats.mstats.trimmed_stde
scipy.stats.mstats.trimr
scipy.stats.mstats.trimtail
scipy.stats.mstats.tsem
scipy.stats.mstats.ttest_onesamp
scipy.stats.mstats.ttest_ind
scipy.stats.mstats.ttest_rel
scipy.stats.mstats.tvar
scipy.stats.mstats.variation
scipy.stats.mstats.winsorize
scipy.stats.mstats.zmap
scipy.stats.mstats.zscore
Univariate and multivariate kernel density estimation (scipy.stats.kde):
gaussian_kde(dataset[, bw_method]) -- Representation of a kernel-density estimate using Gaussian kernels.
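A minimal sketch of one-dimensional kernel density estimation with gaussian_kde:

    import numpy as np
    from scipy import stats

    data = np.random.randn(200)        # one-dimensional sample
    kde = stats.gaussian_kde(data)     # bandwidth chosen automatically
    grid = np.linspace(-3.0, 3.0, 50)
    density = kde(grid)                # estimated density evaluated on the grid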
For many more statistics-related functions, consider installing the R software together with the rpy interface package.
8.15 Miscellaneous routines
Miscellaneous routines -- various utilities that do not have another home in SciPy.
Access via:
In [1]: from scipy import misc
Note that the Python Imaging Library (PIL) is not a dependency of SciPy and therefore the pilutil module is not available on systems that don’t have PIL installed.
bytescale(data[, cmin, cmax, high, low]) -- Byte scales an array (image).
central_diff_weights(Np[, ndiv]) -- Return weights for an Np-point central derivative.
comb(N, k[, exact, repetition]) -- The number of combinations of N things taken k at a time.
derivative(func, x0[, dx, n, args, order]) -- Find the n-th derivative of a function at a point.
factorial(n[, exact]) -- The factorial function, n! = special.gamma(n+1).
factorial2(n[, exact]) -- Double factorial.
factorialk(n, k[, exact]) -- n(!!...!) = multifactorial of order k
fromimage(im[, flatten]) -- Return a copy of a PIL image as a numpy array.
imfilter(arr, ftype) -- Simple filtering of an image.
imread(name[, flatten]) -- Read an image from a file as an array.
imresize(arr, size[, interp, mode]) -- Resize an image.
imrotate(arr, angle[, interp]) -- Rotate an image counter-clockwise by angle degrees.
imsave(name, arr[, format]) -- Save an array as an image.
imshow(arr) -- Simple showing of an image through an external viewer.
info([object, maxwidth, output, toplevel]) -- Get help information for a function, class, or module.
lena() -- Get classic image processing example image, Lena, at 8-bit grayscale bit-depth, 512 x 512 size.
logsumexp(a[, axis, b]) -- Compute the log of the sum of exponentials of input elements.
pade(an, m) -- Return Pade approximation to a polynomial as the ratio of two polynomials.
toimage(arr[, high, low, cmin, cmax, pal, ...]) -- Takes a numpy array and returns a PIL image.
who([vardict]) -- Print the Numpy arrays in the given dictionary.
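A brief sketch using a couple of these helpers as they existed in the older SciPy releases this document describes (several have since moved to other modules or been removed):

    from scipy import misc

    def f(x):
        return x ** 3

    print(misc.derivative(f, 2.0, dx=1e-6))   # numerical derivative of x**3 at x=2, about 12
    print(misc.comb(10, 3, exact=True))       # 120
    print(misc.factorial(5, exact=True))      # 120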