API#
Entry Points#
Canvas

| Canvas | An abstract canvas representing the space in which to bin. |
| Canvas.line | Compute a reduction by pixel, mapping data to pixels as one or more lines. |
| Canvas.points | Compute a reduction by pixel, mapping data to pixels as points. |
| Canvas.raster | Sample a raster dataset by canvas size and bounds. |
| Canvas.trimesh | Compute a reduction by pixel, mapping data to pixels as a triangle. |
| Canvas.validate | Check that parameter settings are valid for this object. |
| Canvas.area | Compute a reduction by pixel, mapping data to pixels as a filled area region. |
| Canvas.polygons | Compute a reduction by pixel, mapping data to pixels as one or more filled polygons. |
| Canvas.quadmesh | Samples a recti- or curvi-linear quadmesh by canvas size and bounds. |

Pipeline

| Pipeline | A datashading pipeline callback. |
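A typical round trip through these entry points creates a Canvas, aggregates a DataFrame onto it, and shades the result into an image. A minimal sketch (the DataFrame and its 'x'/'y' columns are illustrative):

>>> import pandas as pd
>>> import datashader as ds
>>> import datashader.transfer_functions as tf
>>> df = pd.DataFrame({'x': [0.1, 0.4, 0.8], 'y': [0.2, 0.9, 0.5]})
>>> cvs = ds.Canvas(plot_width=300, plot_height=300)  # space in which to bin
>>> agg = cvs.points(df, 'x', 'y', agg=ds.count())    # per-pixel reduction
>>> img = tf.shade(agg)                               # aggregate -> RGBA Image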
Edge Bundling#
| directly_connect_edges | alias of connect_edges |
| hammer_bundle | Iteratively group edges and return as paths suitable for datashading. |
Glyphs#
| Point | A point, with center at x and y. |
| Triangles | An unstructured mesh of triangles, with vertices defined by xs and ys. |
| PolygonGeom | |
| QuadMeshRaster | |
| QuadMeshRectilinear | |
| QuadMeshCurvilinear | |
| LineAxis0 | A line, with vertices defined by x and y. |
| LineAxis0Multi | |
| LinesAxis1 | A collection of lines (one line per row) with vertices defined by the lists of columns in x and y. |
| LinesAxis1XConstant | |
| LinesAxis1YConstant | |
| LinesAxis1Ragged | |
| LineAxis1Geometry | |
| AreaToZeroAxis0 | A filled area glyph. |
| AreaToZeroAxis0Multi | |
| AreaToZeroAxis1 | |
| AreaToZeroAxis1XConstant | |
| AreaToZeroAxis1YConstant | |
| AreaToZeroAxis1Ragged | |
| AreaToLineAxis0 | A filled area glyph. The area to be filled is the region between the line defined by x and y[0] and the line defined by x and y[1]. |
| AreaToLineAxis0Multi | |
| AreaToLineAxis1 | |
| AreaToLineAxis1XConstant | |
| AreaToLineAxis1YConstant | |
| AreaToLineAxis1Ragged | |
Reductions#
| any | Whether any elements in column map to each bin. |
| count | Count elements in each bin, returning the result as a uint32, or a float32 if using antialiasing. |
| by | Apply the provided reduction separately per category. |
| first | First value encountered in column. |
| last | Last value encountered in column. |
| m2 | Sum of square differences from the mean of all elements in column. |
| max | Maximum value of all elements in column. |
| mean | Mean of all elements in column. |
| min | Minimum value of all elements in column. |
| mode | Mode (most common value) of all the values encountered in column. |
| std | Standard Deviation of all elements in column. |
| sum | Sum of all elements in column. |
| summary | A collection of named reductions. |
| var | Variance of all elements in column. |
| where | Returns values from a lookup_column corresponding to a selector reduction that is applied to some other column. |
The table below indicates which Reduction classes are supported on the CPU (e.g. using pandas), on the CPU with Dask (e.g. using dask.dataframe), on the GPU (e.g. using cudf), and on the GPU with Dask (e.g. using dask-cudf). The final two columns indicate which reductions support antialiased lines and which can be used as the selector in a where reduction.
| Reduction | CPU | CPU + Dask | GPU | GPU + Dask | Antialiasing | Within where |
|---|---|---|---|---|---|---|
| any | yes | yes | yes | yes | yes | |
| count | yes | yes | yes | yes | yes | |
| first | yes | yes | yes | yes | yes | yes |
| first_n | yes | yes | yes | yes | yes | yes |
| last | yes | yes | yes | yes | yes | yes |
| last_n | yes | yes | yes | yes | yes | yes |
| max | yes | yes | yes | yes | yes | yes |
| max_n | yes | yes | yes | yes | yes | yes |
| mean | yes | yes | yes | yes | yes | |
| min | yes | yes | yes | yes | yes | yes |
| min_n | yes | yes | yes | yes | yes | yes |
| std | yes | yes | yes | yes | | |
| sum | yes | yes | yes | yes | yes | |
| var | yes | yes | yes | yes | | |
| where | yes | yes | yes | yes | yes | |
The mode reduction is not listed in the table and can only be used with Canvas.raster. A by reduction supports anything that its contained reduction (that is applied separately to each category) supports.
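For example, wrapping count in a by reduction computes one count aggregate per category; a sketch assuming a Canvas cvs and a DataFrame df with a hypothetical categorical column 'cat':

>>> import datashader as ds
>>> agg = cvs.points(df, 'x', 'y', agg=ds.by('cat', ds.count()))
>>> # agg gains an outer dimension indexed by the categories in 'cat'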
Categorizers
| category_binning | A variation on category_codes that assigns categories by binning a continuous-valued column. |
| category_modulo | A variation on category_codes that assigns categories using an integer column, modulo a base. |
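As a sketch of how a categorizer plugs into a by reduction, assuming category_binning takes the column name followed by the bin range and bin count (this signature is an assumption, not confirmed by this page; cvs, df and the column names are illustrative):

>>> import datashader as ds
>>> from datashader.reductions import category_binning
>>> # bin the continuous column 'value' into 10 categories over [0, 100),
>>> # then count per category per pixel
>>> agg = cvs.points(df, 'x', 'y',
...                  agg=ds.by(category_binning('value', 0, 100, 10), ds.count()))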
Transfer Functions#
Image

| Image | |

Images

| Images | A list of HTML-representable objects to display in a table. |
| Images.cols | Set the number of columns to use in the HTML table. |

Other

| dynspread | Spread pixels in an image dynamically based on the image density. |
| set_background | Return a new image, with the background set to color. |
| shade | Convert a DataArray to an image by choosing an RGBA pixel color for each value. |
| spread | Spread pixels in an image. |
| stack | Combine images together, overlaying later images onto earlier ones. |
Definitions#
- class datashader.Canvas(plot_width=600, plot_height=600, x_range=None, y_range=None, x_axis_type='linear', y_axis_type='linear')[source]#
An abstract canvas representing the space in which to bin.
- Parameters:
  - plot_width, plot_height : int, optional
    Width and height of the output aggregate in pixels.
  - x_range, y_range : tuple, optional
    A tuple representing the inclusive bounds [min, max] along the axis.
  - x_axis_type, y_axis_type : str, optional
    The type of the axis. Valid options are 'linear' [default] and 'log'.
Methods

area(source, x, y[, agg, axis, y_stack]) : Compute a reduction by pixel, mapping data to pixels as a filled area region.
line(source[, x, y, agg, axis, geometry, ...]) : Compute a reduction by pixel, mapping data to pixels as one or more lines.
points(source[, x, y, agg, geometry]) : Compute a reduction by pixel, mapping data to pixels as points.
polygons(source, geometry[, agg]) : Compute a reduction by pixel, mapping data to pixels as one or more filled polygons.
quadmesh(source[, x, y, agg]) : Samples a recti- or curvi-linear quadmesh by canvas size and bounds.
raster(source[, layer, upsample_method, ...]) : Sample a raster dataset by canvas size and bounds.
trimesh(vertices, simplices[, mesh, agg, ...]) : Compute a reduction by pixel, mapping data to pixels as a triangle.
validate() : Check that parameter settings are valid for this object.

validate_ranges
validate_size
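A minimal usage sketch (data and ranges are illustrative):

>>> import pandas as pd
>>> import datashader as ds
>>> df = pd.DataFrame({'x': [0.0, 0.5, 1.0], 'y': [0.0, 1.0, 0.5]})
>>> cvs = ds.Canvas(plot_width=400, plot_height=400,
...                 x_range=(0, 1), y_range=(0, 1))
>>> agg = cvs.points(df, 'x', 'y', agg=ds.count())  # xarray.DataArray of counts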
- class datashader.Pipeline(df, glyph, agg=<datashader.reductions.count object>, transform_fn=<function identity>, color_fn=<function shade>, spread_fn=<function dynspread>, width_scale=1.0, height_scale=1.0)[source]#
A datashading pipeline callback.
Given a declarative specification, creates a callable with the following signature:

callback(x_range, y_range, width, height)

where x_range and y_range form the bounding box on the viewport, and width and height specify the output image dimensions.

- Parameters:
  - df : pandas.DataFrame, dask.DataFrame
  - glyph : Glyph
    The glyph to bin by.
  - agg : Reduction, optional
    The reduction to compute per-pixel. Default is count().
  - transform_fn : callable, optional
    A callable that takes the computed aggregate as an argument, and returns another aggregate. This can be used to do preprocessing before passing to the color_fn function.
  - color_fn : callable, optional
    A callable that takes the output of transform_fn, and returns an Image object. Default is shade.
  - spread_fn : callable, optional
    A callable that takes the output of color_fn, and returns another Image object. Default is dynspread.
  - height_scale : float, optional
    Factor by which to scale the provided height.
  - width_scale : float, optional
    Factor by which to scale the provided width.
Methods

__call__([x_range, y_range, width, height]) : Compute an image from the specified pipeline.
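A minimal sketch of building and invoking a pipeline, relying on the documented defaults (count(), shade, dynspread); the data is illustrative:

>>> import pandas as pd
>>> import datashader as ds
>>> from datashader.glyphs import Point
>>> df = pd.DataFrame({'x': [0.1, 0.5, 0.9], 'y': [0.3, 0.7, 0.2]})
>>> pipeline = ds.Pipeline(df, Point('x', 'y'))
>>> img = pipeline(x_range=(0, 1), y_range=(0, 1), width=300, height=300)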
- class datashader.bundling.hammer_bundle(*, accuracy, advect_iterations, batch_size, decay, initial_bandwidth, iterations, max_segment_length, min_segment_length, tension, include_edge_id, source, target, weight, x, y, name)[source]#
Iteratively group edges and return as paths suitable for datashading.
Breaks each edge into a path with multiple line segments, and iteratively curves this path to bundle edges into groups.
Methods

__call__(nodes, edges, **params) : Convert a graph data structure into a path structure for plotting.
instance(**params) : Return an instance of this class, copying parameters from any existing instance provided.

Parameters inherited from: datashader.bundling.connect_edges : x, y, source, target, include_edge_id

weight = param.String(allow_None=True, default='weight')
    Column name for each edge weight. If None, weights are ignored.
initial_bandwidth = param.Number(bounds=(0.0, None), default=0.05)
    Initial value of the bandwidth...
decay = param.Number(bounds=(0.0, 1.0), default=0.7)
    Rate of decay in the bandwidth value, with 1.0 indicating no decay.
iterations = param.Integer(bounds=(1, None), default=4)
    Number of passes for the smoothing algorithm.
batch_size = param.Integer(bounds=(1, None), default=20000)
    Number of edges to process together.
tension = param.Number(bounds=(0, None), default=0.3)
    Exponential smoothing factor to use when smoothing.
accuracy = param.Integer(bounds=(1, None), default=500)
    Number of entries in table for...
advect_iterations = param.Integer(bounds=(0, None), default=50)
    Number of iterations to move edges along gradients.
min_segment_length = param.Number(bounds=(0, None), default=0.008)
    Minimum length (in data space?) for an edge segment.
max_segment_length = param.Number(bounds=(0, None), default=0.016)
    Maximum length (in data space?) for an edge segment.
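A minimal sketch using the default node and edge column names (x, y, source, target); weight=None because this toy graph has no weight column:

>>> import pandas as pd
>>> from datashader.bundling import hammer_bundle
>>> nodes = pd.DataFrame({'x': [0.0, 1.0, 0.0, 1.0],
...                       'y': [0.0, 0.0, 1.0, 1.0]})
>>> edges = pd.DataFrame({'source': [0, 0, 1], 'target': [3, 2, 2]})
>>> paths = hammer_bundle(nodes, edges, weight=None)
>>> # paths is a DataFrame of x, y vertices, with NaN rows separating
>>> # edges, suitable for Canvas.line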
- class datashader.glyphs.Point(x, y)[source]#
A point, with center at x and y.

Points map each record to a single bin. Points falling exactly on the upper bounds are treated as a special case, mapping into the previous bin rather than being cropped off.

- Parameters:
  - x, y : str
    Column names for the x and y coordinates of each point.
- Attributes:
  - inputs
  - ndims
    The number of dimensions required in the data structure this Glyph is constructed from.
  - x_label
  - y_label

Methods

expand_aggs_and_cols(append) : Create a decorator that can be used on functions that accept *aggs_and_cols as a variable length argument. The decorator will replace *aggs_and_cols with a fixed number of arguments.

compute_bounds_dask
compute_x_bounds
compute_y_bounds
maybe_expand_bounds
required_columns
to_cupy_array
validate
- class datashader.glyphs.Triangles(x, y, z=None, weight_type=True, interp=True)[source]#
An unstructured mesh of triangles, with vertices defined by xs and ys.

- Parameters:
  - xs, ys, zs : list of str
    Column names of x, y, and (optional) z coordinates of each vertex.
- Attributes:
  - inputs
  - ndims
    The number of dimensions required in the data structure this Glyph is constructed from.
  - x_label
  - y_label

Methods

expand_aggs_and_cols(append) : Create a decorator that can be used on functions that accept *aggs_and_cols as a variable length argument. The decorator will replace *aggs_and_cols with a fixed number of arguments.

compute_bounds_dask
compute_x_bounds
compute_y_bounds
maybe_expand_bounds
required_columns
to_cupy_array
validate
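Triangles is normally constructed indirectly through Canvas.trimesh; a minimal sketch with illustrative vertex and simplex tables (the first three simplex columns are taken as vertex indices):

>>> import pandas as pd
>>> import datashader as ds
>>> verts = pd.DataFrame({'x': [0.0, 1.0, 0.5], 'y': [0.0, 0.0, 1.0]})
>>> tris = pd.DataFrame({'v0': [0], 'v1': [1], 'v2': [2]})
>>> cvs = ds.Canvas(plot_width=200, plot_height=200)
>>> agg = cvs.trimesh(verts, tris)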
- class datashader.glyphs.PolygonGeom(geometry)[source]#
- Attributes:
- geom_dtypes
- inputs
ndims
The number of dimensions required in the data structure this Glyph is constructed from.
- x_label
- y_label
Methods
expand_aggs_and_cols
(append) : Create a decorator that can be used on functions that accept *aggs_and_cols as a variable length argument. The decorator will replace *aggs_and_cols with a fixed number of arguments.
compute_bounds_dask
compute_x_bounds
compute_y_bounds
maybe_expand_bounds
required_columns
to_cupy_array
validate
- class datashader.glyphs.QuadMeshRaster(x, y, name)[source]#
- Attributes:
- inputs
ndims
The number of dimensions required in the data structure this Glyph is constructed from.
- x_label
- y_label
Methods
expand_aggs_and_cols
(append) : Create a decorator that can be used on functions that accept *aggs_and_cols as a variable length argument. The decorator will replace *aggs_and_cols with a fixed number of arguments.
compute_bounds_dask
compute_x_bounds
compute_y_bounds
infer_interval_breaks
is_upsample
maybe_expand_bounds
to_cupy_array
validate
- class datashader.glyphs.QuadMeshRectilinear(x, y, name)[source]#
- Attributes:
- inputs
ndims
The number of dimensions required in the data structure this Glyph is constructed from.
- x_label
- y_label
Methods
expand_aggs_and_cols
(append) : Create a decorator that can be used on functions that accept *aggs_and_cols as a variable length argument. The decorator will replace *aggs_and_cols with a fixed number of arguments.
compute_bounds_dask
compute_x_bounds
compute_y_bounds
infer_interval_breaks
maybe_expand_bounds
to_cupy_array
validate
- class datashader.glyphs.QuadMeshCurvilinear(x, y, name)[source]#
- Attributes:
- inputs
ndims
The number of dimensions required in the data structure this Glyph is constructed from.
- x_label
- y_label
Methods
expand_aggs_and_cols
(append) : Create a decorator that can be used on functions that accept *aggs_and_cols as a variable length argument. The decorator will replace *aggs_and_cols with a fixed number of arguments.
compute_bounds_dask
compute_x_bounds
compute_y_bounds
infer_interval_breaks
maybe_expand_bounds
to_cupy_array
validate
- class datashader.glyphs.LineAxis0(x, y)[source]#
A line, with vertices defined by x and y.

- Parameters:
  - x, y : str
    Column names for the x and y coordinates of each vertex.
- Attributes:
- inputs
ndims
The number of dimensions required in the data structure this Glyph is constructed from.
- x_label
- y_label
Methods
expand_aggs_and_cols
(append) : Create a decorator that can be used on functions that accept *aggs_and_cols as a variable length argument. The decorator will replace *aggs_and_cols with a fixed number of arguments.
compute_bounds_dask
compute_x_bounds
compute_y_bounds
maybe_expand_bounds
required_columns
set_line_width
to_cupy_array
validate
- class datashader.glyphs.LineAxis0Multi(x, y)[source]#
- Attributes:
- inputs
ndims
The number of dimensions required in the data structure this Glyph is constructed from.
- x_label
- y_label
Methods
expand_aggs_and_cols
(append) : Create a decorator that can be used on functions that accept *aggs_and_cols as a variable length argument. The decorator will replace *aggs_and_cols with a fixed number of arguments.
compute_bounds_dask
compute_x_bounds
compute_y_bounds
maybe_expand_bounds
required_columns
set_line_width
to_cupy_array
validate
- class datashader.glyphs.LinesAxis1(x, y)[source]#
A collection of lines (one line per row) with vertices defined by the lists of columns in x and y.

- Parameters:
  - x, y : list
    Lists of column names for the x and y coordinates.
- Attributes:
- inputs
ndims
The number of dimensions required in the data structure this Glyph is constructed from.
- x_label
- y_label
Methods
expand_aggs_and_cols
(append) : Create a decorator that can be used on functions that accept *aggs_and_cols as a variable length argument. The decorator will replace *aggs_and_cols with a fixed number of arguments.
compute_bounds_dask
compute_x_bounds
compute_y_bounds
maybe_expand_bounds
required_columns
set_line_width
to_cupy_array
validate
- class datashader.glyphs.LinesAxis1XConstant(x, y)[source]#
- Attributes:
- inputs
ndims
The number of dimensions required in the data structure this Glyph is constructed from.
- x_label
- y_label
Methods
expand_aggs_and_cols
(append) : Create a decorator that can be used on functions that accept *aggs_and_cols as a variable length argument. The decorator will replace *aggs_and_cols with a fixed number of arguments.
compute_bounds_dask
compute_x_bounds
compute_y_bounds
maybe_expand_bounds
required_columns
set_line_width
to_cupy_array
validate
- class datashader.glyphs.LinesAxis1YConstant(x, y)[source]#
- Attributes:
- inputs
ndims
The number of dimensions required in the data structure this Glyph is constructed from.
- x_label
- y_label
Methods
expand_aggs_and_cols
(append) : Create a decorator that can be used on functions that accept *aggs_and_cols as a variable length argument. The decorator will replace *aggs_and_cols with a fixed number of arguments.
compute_bounds_dask
compute_x_bounds
compute_y_bounds
maybe_expand_bounds
required_columns
set_line_width
to_cupy_array
validate
- class datashader.glyphs.LinesAxis1Ragged(x, y)[source]#
- Attributes:
- inputs
ndims
The number of dimensions required in the data structure this Glyph is constructed from.
- x_label
- y_label
Methods
expand_aggs_and_cols
(append) : Create a decorator that can be used on functions that accept *aggs_and_cols as a variable length argument. The decorator will replace *aggs_and_cols with a fixed number of arguments.
compute_bounds_dask
compute_x_bounds
compute_y_bounds
maybe_expand_bounds
required_columns
set_line_width
to_cupy_array
validate
- class datashader.glyphs.LineAxis1Geometry(geometry)[source]#
- Attributes:
- geom_dtypes
- inputs
ndims
The number of dimensions required in the data structure this Glyph is constructed from.
- x_label
- y_label
Methods
expand_aggs_and_cols
(append) : Create a decorator that can be used on functions that accept *aggs_and_cols as a variable length argument. The decorator will replace *aggs_and_cols with a fixed number of arguments.
compute_bounds_dask
compute_x_bounds
compute_y_bounds
maybe_expand_bounds
required_columns
set_line_width
to_cupy_array
validate
- class datashader.glyphs.AreaToZeroAxis0(x, y)[source]#
A filled area glyph. The area to be filled is the region between the line defined by x and y and the line y=0.

- Parameters:
  - x, y
    Column names for the x and y coordinates of each vertex.
- Attributes:
- inputs
ndims
The number of dimensions required in the data structure this Glyph is constructed from.
- x_label
- y_label
Methods
expand_aggs_and_cols
(append) : Create a decorator that can be used on functions that accept *aggs_and_cols as a variable length argument. The decorator will replace *aggs_and_cols with a fixed number of arguments.
compute_bounds_dask
compute_x_bounds
compute_y_bounds
maybe_expand_bounds
required_columns
to_cupy_array
validate
- class datashader.glyphs.AreaToZeroAxis0Multi(x, y)[source]#
- Attributes:
- inputs
ndims
The number of dimensions required in the data structure this Glyph is constructed from.
- x_label
- y_label
Methods
expand_aggs_and_cols
(append) : Create a decorator that can be used on functions that accept *aggs_and_cols as a variable length argument. The decorator will replace *aggs_and_cols with a fixed number of arguments.
compute_bounds_dask
compute_x_bounds
compute_y_bounds
maybe_expand_bounds
required_columns
to_cupy_array
validate
- class datashader.glyphs.AreaToZeroAxis1(x, y)[source]#
- Attributes:
- inputs
ndims
The number of dimensions required in the data structure this Glyph is constructed from.
- x_label
- y_label
Methods
expand_aggs_and_cols
(append) : Create a decorator that can be used on functions that accept *aggs_and_cols as a variable length argument. The decorator will replace *aggs_and_cols with a fixed number of arguments.
compute_bounds_dask
compute_x_bounds
compute_y_bounds
maybe_expand_bounds
required_columns
to_cupy_array
validate
- class datashader.glyphs.AreaToZeroAxis1XConstant(x, y)[source]#
- Attributes:
- inputs
ndims
The number of dimensions required in the data structure this Glyph is constructed from.
- x_label
- y_label
Methods
expand_aggs_and_cols
(append) : Create a decorator that can be used on functions that accept *aggs_and_cols as a variable length argument. The decorator will replace *aggs_and_cols with a fixed number of arguments.
compute_bounds_dask
compute_x_bounds
compute_y_bounds
maybe_expand_bounds
required_columns
to_cupy_array
validate
- class datashader.glyphs.AreaToZeroAxis1YConstant(x, y)[source]#
- Attributes:
- inputs
ndims
The number of dimensions required in the data structure this Glyph is constructed from.
- x_label
- y_label
Methods
expand_aggs_and_cols
(append) : Create a decorator that can be used on functions that accept *aggs_and_cols as a variable length argument. The decorator will replace *aggs_and_cols with a fixed number of arguments.
compute_bounds_dask
compute_x_bounds
compute_y_bounds
maybe_expand_bounds
required_columns
to_cupy_array
validate
- class datashader.glyphs.AreaToZeroAxis1Ragged(x, y)[source]#
- Attributes:
- inputs
ndims
The number of dimensions required in the data structure this Glyph is constructed from.
- x_label
- y_label
Methods
expand_aggs_and_cols
(append) : Create a decorator that can be used on functions that accept *aggs_and_cols as a variable length argument. The decorator will replace *aggs_and_cols with a fixed number of arguments.
compute_bounds_dask
compute_x_bounds
compute_y_bounds
maybe_expand_bounds
required_columns
to_cupy_array
validate
- class datashader.glyphs.AreaToLineAxis0(x, y, y_stack)[source]#
A filled area glyph. The area to be filled is the region between the line defined by x and y[0] and the line defined by x and y[1].

- Parameters:
  - x
    Column name for the x coordinates of each vertex.
  - y
    List or tuple of length two containing the column names of the y-coordinates of the two curves that define the area region.
- Attributes:
- inputs
ndims
The number of dimensions required in the data structure this Glyph is constructed from.
- x_label
- y_label
Methods
expand_aggs_and_cols
(append) : Create a decorator that can be used on functions that accept *aggs_and_cols as a variable length argument. The decorator will replace *aggs_and_cols with a fixed number of arguments.
compute_bounds_dask
compute_x_bounds
compute_y_bounds
maybe_expand_bounds
required_columns
to_cupy_array
validate
- class datashader.glyphs.AreaToLineAxis0Multi(x, y, y_stack)[source]#
- Attributes:
- inputs
ndims
The number of dimensions required in the data structure this Glyph is constructed from.
- x_label
- y_label
Methods
expand_aggs_and_cols
(append) : Create a decorator that can be used on functions that accept *aggs_and_cols as a variable length argument. The decorator will replace *aggs_and_cols with a fixed number of arguments.
compute_bounds_dask
compute_x_bounds
compute_y_bounds
maybe_expand_bounds
required_columns
to_cupy_array
validate
- class datashader.glyphs.AreaToLineAxis1(x, y, y_stack)[source]#
- Attributes:
- inputs
ndims
The number of dimensions required in the data structure this Glyph is constructed from.
- x_label
- y_label
Methods
expand_aggs_and_cols
(append) : Create a decorator that can be used on functions that accept *aggs_and_cols as a variable length argument. The decorator will replace *aggs_and_cols with a fixed number of arguments.
compute_bounds_dask
compute_x_bounds
compute_y_bounds
maybe_expand_bounds
required_columns
to_cupy_array
validate
- class datashader.glyphs.AreaToLineAxis1XConstant(x, y, y_stack)[source]#
- Attributes:
- inputs
ndims
The number of dimensions required in the data structure this Glyph is constructed from.
- x_label
- y_label
Methods
expand_aggs_and_cols
(append) : Create a decorator that can be used on functions that accept *aggs_and_cols as a variable length argument. The decorator will replace *aggs_and_cols with a fixed number of arguments.
compute_bounds_dask
compute_x_bounds
compute_y_bounds
maybe_expand_bounds
required_columns
to_cupy_array
validate
- class datashader.glyphs.AreaToLineAxis1YConstant(x, y, y_stack)[source]#
- Attributes:
- inputs
ndims
The number of dimensions required in the data structure this Glyph is constructed from.
- x_label
- y_label
Methods
expand_aggs_and_cols
(append) : Create a decorator that can be used on functions that accept *aggs_and_cols as a variable length argument. The decorator will replace *aggs_and_cols with a fixed number of arguments.
compute_bounds_dask
compute_x_bounds
compute_y_bounds
maybe_expand_bounds
required_columns
to_cupy_array
validate
- class datashader.glyphs.AreaToLineAxis1Ragged(x, y, y_stack)[source]#
- Attributes:
- inputs
ndims
The number of dimensions required in the data structure this Glyph is constructed from.
- x_label
- y_label
Methods
expand_aggs_and_cols
(append) : Create a decorator that can be used on functions that accept *aggs_and_cols as a variable length argument. The decorator will replace *aggs_and_cols with a fixed number of arguments.
compute_bounds_dask
compute_x_bounds
compute_y_bounds
maybe_expand_bounds
required_columns
to_cupy_array
validate
- class datashader.reductions.any(column=None)[source]#
Whether any elements in column map to each bin.

- Parameters:
  - column : str, optional
    If provided, any elements in column that are NaN are skipped.
- Attributes:
  - inputs
  - nan_check_column

Methods

is_categorical() : Return True if this is or contains a categorical reduction.
is_where() : Return True if this is a where reduction or directly wraps a where reduction.
uses_cuda_mutex() : Return True if this Reduction needs to use a CUDA mutex to ensure that it is threadsafe across CUDA threads.
uses_row_index(cuda, partitioned) : Return True if this Reduction uses a row index virtual column.

out_dshape
validate
- class datashader.reductions.count(column=None, self_intersect=True)[source]#
Count elements in each bin, returning the result as a uint32, or a float32 if using antialiasing.

- Parameters:
  - column : str, optional
    If provided, only counts elements in column that are not NaN. Otherwise, counts every element.
- Attributes:
  - inputs
  - nan_check_column

Methods

is_categorical() : Return True if this is or contains a categorical reduction.
is_where() : Return True if this is a where reduction or directly wraps a where reduction.
uses_cuda_mutex() : Return True if this Reduction needs to use a CUDA mutex to ensure that it is threadsafe across CUDA threads.
uses_row_index(cuda, partitioned) : Return True if this Reduction uses a row index virtual column.

out_dshape
validate
- class datashader.reductions.count_cat(column)[source]#
Count of all elements in column, grouped by category. Alias for by(..., count()), for backwards compatibility.

- Parameters:
  - column : str
    Name of the column to aggregate over. Column data type must be categorical. The resulting aggregate has an outer dimension axis along the categories present.
- Attributes:
  - cat_column
  - inputs
  - nan_check_column
  - val_column

Methods

is_categorical() : Return True if this is or contains a categorical reduction.
is_where() : Return True if this is a where reduction or directly wraps a where reduction.
uses_cuda_mutex() : Return True if this Reduction needs to use a CUDA mutex to ensure that it is threadsafe across CUDA threads.
uses_row_index(cuda, partitioned) : Return True if this Reduction uses a row index virtual column.

out_dshape
validate
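Since count_cat is an alias for by(..., count()), these two aggregations should be equivalent (cvs, df and the 'cat' column are illustrative):

>>> import datashader as ds
>>> agg1 = cvs.points(df, 'x', 'y', agg=ds.count_cat('cat'))
>>> agg2 = cvs.points(df, 'x', 'y', agg=ds.by('cat', ds.count()))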
- class datashader.reductions.first(column: str | SpecialColumn | None = None)[source]#
First value encountered in column.

Useful for categorical data where an actual value must always be returned, not an average or other numerical calculation.

Currently only supported for rasters, externally to this class.

- Parameters:
  - column : str
    Name of the column to aggregate over. If the data type is floating point, NaN values in the column are skipped.
- Attributes:
  - inputs
  - nan_check_column

Methods

is_categorical() : Return True if this is or contains a categorical reduction.
is_where() : Return True if this is a where reduction or directly wraps a where reduction.
uses_cuda_mutex() : Return True if this Reduction needs to use a CUDA mutex to ensure that it is threadsafe across CUDA threads.
uses_row_index(cuda, partitioned) : Return True if this Reduction uses a row index virtual column.

out_dshape
validate
- class datashader.reductions.last(column: str | SpecialColumn | None = None)[source]#
Last value encountered in column.

Useful for categorical data where an actual value must always be returned, not an average or other numerical calculation.

Currently only supported for rasters, externally to this class.

- Parameters:
  - column : str
    Name of the column to aggregate over. If the data type is floating point, NaN values in the column are skipped.
- Attributes:
  - inputs
  - nan_check_column

Methods

is_categorical() : Return True if this is or contains a categorical reduction.
is_where() : Return True if this is a where reduction or directly wraps a where reduction.
uses_cuda_mutex() : Return True if this Reduction needs to use a CUDA mutex to ensure that it is threadsafe across CUDA threads.
uses_row_index(cuda, partitioned) : Return True if this Reduction uses a row index virtual column.

out_dshape
validate
- class datashader.reductions.m2(column: str | SpecialColumn | None = None)[source]#
Sum of square differences from the mean of all elements in column.

Intermediate value for computing var and std, not intended to be used on its own.

- Parameters:
  - column : str
    Name of the column to aggregate over. Column data type must be numeric. NaN values in the column are skipped.
- Attributes:
  - inputs
  - nan_check_column

Methods

is_categorical() : Return True if this is or contains a categorical reduction.
is_where() : Return True if this is a where reduction or directly wraps a where reduction.
uses_cuda_mutex() : Return True if this Reduction needs to use a CUDA mutex to ensure that it is threadsafe across CUDA threads.
uses_row_index(cuda, partitioned) : Return True if this Reduction uses a row index virtual column.

out_dshape
validate
- class datashader.reductions.max(column: str | SpecialColumn | None = None)[source]#
Maximum value of all elements in column.

- Parameters:
  - column : str
    Name of the column to aggregate over. Column data type must be numeric. NaN values in the column are skipped.
- Attributes:
  - inputs
  - nan_check_column

Methods

is_categorical() : Return True if this is or contains a categorical reduction.
is_where() : Return True if this is a where reduction or directly wraps a where reduction.
uses_cuda_mutex() : Return True if this Reduction needs to use a CUDA mutex to ensure that it is threadsafe across CUDA threads.
uses_row_index(cuda, partitioned) : Return True if this Reduction uses a row index virtual column.

out_dshape
validate
- class datashader.reductions.mean(column: str | SpecialColumn | None = None)[source]#
Mean of all elements in column.

- Parameters:
  - column : str
    Name of the column to aggregate over. Column data type must be numeric. NaN values in the column are skipped.
- Attributes:
  - inputs
  - nan_check_column

Methods

is_categorical() : Return True if this is or contains a categorical reduction.
is_where() : Return True if this is a where reduction or directly wraps a where reduction.
uses_cuda_mutex() : Return True if this Reduction needs to use a CUDA mutex to ensure that it is threadsafe across CUDA threads.
uses_row_index(cuda, partitioned) : Return True if this Reduction uses a row index virtual column.

validate
- class datashader.reductions.min(column: str | SpecialColumn | None = None)[source]#
Minimum value of all elements in column.

- Parameters:
  - column : str
    Name of the column to aggregate over. Column data type must be numeric. NaN values in the column are skipped.
- Attributes:
  - inputs
  - nan_check_column

Methods

is_categorical() : Return True if this is or contains a categorical reduction.
is_where() : Return True if this is a where reduction or directly wraps a where reduction.
uses_cuda_mutex() : Return True if this Reduction needs to use a CUDA mutex to ensure that it is threadsafe across CUDA threads.
uses_row_index(cuda, partitioned) : Return True if this Reduction uses a row index virtual column.

out_dshape
validate
- class datashader.reductions.mode(column: str | SpecialColumn | None = None)[source]#
Mode (most common value) of all the values encountered in column.

Useful for categorical data where an actual value must always be returned, not an average or other numerical calculation.

Currently only supported for rasters, externally to this class. Implementing it for other glyph types would be difficult due to potentially unbounded data storage requirements to store indefinite point or line data per pixel.

- Parameters:
  - column : str
    Name of the column to aggregate over. If the data type is floating point, NaN values in the column are skipped.
- Attributes:
  - inputs
  - nan_check_column

Methods

is_categorical() : Return True if this is or contains a categorical reduction.
is_where() : Return True if this is a where reduction or directly wraps a where reduction.
uses_cuda_mutex() : Return True if this Reduction needs to use a CUDA mutex to ensure that it is threadsafe across CUDA threads.
uses_row_index(cuda, partitioned) : Return True if this Reduction uses a row index virtual column.

out_dshape
validate
- class datashader.reductions.std(column: str | SpecialColumn | None = None)[source]#
Standard Deviation of all elements in column.

- Parameters:
  - column : str
    Name of the column to aggregate over. Column data type must be numeric. NaN values in the column are skipped.
- Attributes:
  - inputs
  - nan_check_column

Methods

is_categorical() : Return True if this is or contains a categorical reduction.
is_where() : Return True if this is a where reduction or directly wraps a where reduction.
uses_cuda_mutex() : Return True if this Reduction needs to use a CUDA mutex to ensure that it is threadsafe across CUDA threads.
uses_row_index(cuda, partitioned) : Return True if this Reduction uses a row index virtual column.

validate
- class datashader.reductions.sum(column=None, self_intersect=True)[source]#
Sum of all elements in column.

Elements of the resulting aggregate are NaN if they are not updated.

- Parameters:
  - column : str
    Name of the column to aggregate over. Column data type must be numeric. NaN values in the column are skipped.
- Attributes:
  - inputs
  - nan_check_column

Methods

is_categorical() : Return True if this is or contains a categorical reduction.
is_where() : Return True if this is a where reduction or directly wraps a where reduction.
uses_cuda_mutex() : Return True if this Reduction needs to use a CUDA mutex to ensure that it is threadsafe across CUDA threads.
uses_row_index(cuda, partitioned) : Return True if this Reduction uses a row index virtual column.

out_dshape
validate
- class datashader.reductions.summary(**kwargs)[source]#
A collection of named reductions.
Computes all aggregates simultaneously, output is stored as an xarray.Dataset.

Notes

A single pass of the source dataset using antialiased lines can either be performed using a single-stage aggregation (e.g. self_intersect=True) or two stages (self_intersect=False). If a summary contains a count or sum reduction with self_intersect=False, or any of first, last or min, then the antialiased line pass will be performed in two stages.

Examples

A reduction for computing the mean of column "a", and the sum of column "b" for each bin, all in a single pass.

>>> import datashader as ds
>>> red = ds.summary(mean_a=ds.mean('a'), sum_b=ds.sum('b'))

- Attributes:
  - inputs

Methods

is_categorical
uses_row_index
validate
- class datashader.reductions.var(column: str | SpecialColumn | None = None)[source]#
Variance of all elements in column.

- Parameters:
  - column : str
    Name of the column to aggregate over. Column data type must be numeric. NaN values in the column are skipped.
- Attributes:
  - inputs
  - nan_check_column

Methods

is_categorical() : Return True if this is or contains a categorical reduction.
is_where() : Return True if this is a where reduction or directly wraps a where reduction.
uses_cuda_mutex() : Return True if this Reduction needs to use a CUDA mutex to ensure that it is threadsafe across CUDA threads.
uses_row_index(cuda, partitioned) : Return True if this Reduction uses a row index virtual column.

validate
- class datashader.reductions.where(selector: Reduction, lookup_column: str | None = None)[source]#
Returns values from a lookup_column corresponding to a selector reduction that is applied to some other column.

If lookup_column is None then it uses the index of the row in the DataFrame instead of a named column. This is returned as an int64 aggregation with -1 used to denote no value.

- Parameters:
  - selector : Reduction
    Reduction used to select the values of the lookup_column which are returned by this where reduction.
  - lookup_column : str | None
    Column containing values that are returned from this where reduction, or None to return row indexes instead.

Examples

>>> canvas.line(df, 'x', 'y', agg=ds.where(ds.max("value"), "other"))

This returns the values of the "other" column that correspond to the maximum of the "value" column in each bin.

- Attributes:
  - inputs
  - nan_check_column

Methods

is_categorical() : Return True if this is or contains a categorical reduction.
is_where() : Return True if this is a where reduction or directly wraps a where reduction.
uses_cuda_mutex() : Return True if this Reduction needs to use a CUDA mutex to ensure that it is threadsafe across CUDA threads.
uses_row_index(cuda, partitioned) : Return True if this Reduction uses a row index virtual column.

out_dshape
validate
- class datashader.transfer_functions.Image(data: ~typing.Any = <NA>, coords: ~collections.abc.Sequence[~collections.abc.Sequence | ~pandas.core.indexes.base.Index | ~xarray.core.dataarray.DataArray] | ~collections.abc.Mapping | None = None, dims: str | ~collections.abc.Iterable[~collections.abc.Hashable] | None = None, name: ~collections.abc.Hashable | None = None, attrs: ~collections.abc.Mapping | None = None, indexes: ~collections.abc.Mapping[~typing.Any, ~xarray.core.indexes.Index] | None = None, fastpath: bool = False)[source]#
- Attributes:
- T
attrs
Dictionary storing arbitrary metadata with this array.
chunks
Tuple of block lengths for this dataarray’s data, in order of dimensions, or None if the underlying data is not a dask array.
chunksizes
Mapping from dimension names to block lengths for this dataarray’s data, or None if the underlying data is not a dask array.
coords
Mapping of
DataArray
objects corresponding to coordinate variables.data
The DataArray’s data as an array.
dims
Tuple of dimension names associated with this array.
dtype
Data-type of the array’s elements.
encoding
Dictionary of format-specific settings for how this array should be serialized.
imag
The imaginary part of the array.
indexes
Mapping of pandas.Index objects used for label based indexing.
loc
Attribute for location based indexing like pandas.
name
The name of this array.
nbytes
Total bytes consumed by the elements of this DataArray’s data.
ndim
Number of array dimensions.
real
The real part of the array.
shape
Tuple of array dimensions.
size
Number of elements in the array.
sizes
Ordered mapping from dimension names to lengths.
values
The array’s data converted to numpy.ndarray.
variable
Low level interface to the Variable object for this DataArray.
xindexes
Mapping of
Index
objects used for label based indexing.
Methods
all
([dim, keep_attrs])Reduce this DataArray's data by applying
all
along some dimension(s).any
([dim, keep_attrs])Reduce this DataArray's data by applying
any
along some dimension(s).argmax
([dim, axis, keep_attrs, skipna])Index or indices of the maximum of the DataArray over one or more dimensions.
argmin
([dim, axis, keep_attrs, skipna])Index or indices of the minimum of the DataArray over one or more dimensions.
argsort
([axis, kind, order])Returns the indices that would sort this array.
as_numpy
()Coerces wrapped data and coordinates into numpy arrays, returning a DataArray.
assign_attrs
(*args, **kwargs)Assign new attrs to this object.
assign_coords
([coords])Assign new coordinates to this object.
astype
(dtype, *[, order, casting, subok, ...])Copy of the xarray object, with data cast to a specified type.
bfill
(dim[, limit])Fill NaN values by propagating values backward
broadcast_equals
(other)Two DataArrays are broadcast equal if they are equal after broadcasting them against each other such that they have the same dimensions.
broadcast_like
(other, *[, exclude])Broadcast this DataArray against another Dataset or DataArray.
chunk
([chunks, name_prefix, token, lock, ...])Coerce this array's data into a dask arrays with the given chunks.
clip
([min, max, keep_attrs])Return an array whose values are limited to
[min, max]
.close
()Release any resources linked to this object.
coarsen
([dim, boundary, side, coord_func])Coarsen object for DataArrays.
combine_first
(other)Combine two DataArray objects, with union of coordinates.
compute
(**kwargs)Manually trigger loading of this array's data from disk or a remote source into memory and return a new array.
conj
()Complex-conjugate all elements.
conjugate
()Return the complex conjugate, element-wise.
convert_calendar
(calendar[, dim, align_on, ...])Convert the DataArray to another calendar.
copy
([deep, data])Returns a copy of this array.
count
([dim, keep_attrs])Reduce this DataArray's data by applying
count
along some dimension(s).cumprod
([dim, skipna, keep_attrs])Reduce this DataArray's data by applying
cumprod
along some dimension(s).cumsum
([dim, skipna, keep_attrs])Reduce this DataArray's data by applying
cumsum
along some dimension(s).cumulative
(dim[, min_periods])Accumulating object for DataArrays.
cumulative_integrate
([coord, datetime_unit])Integrate cumulatively along the given coordinate using the trapezoidal rule.
curvefit
(coords, func[, reduce_dims, ...])Curve fitting optimization for arbitrary functions.
diff
(dim[, n, label])Calculate the n-th order discrete difference along given axis.
differentiate
(coord[, edge_order, datetime_unit])Differentiate the array with the second order accurate central differences.
dot
(other[, dim])Perform dot product of two DataArrays along their shared dims.
drop
([labels, dim, errors])Backward compatible method based on drop_vars and drop_sel
drop_duplicates
(dim, *[, keep])Returns a new DataArray with duplicate dimension values removed.
drop_encoding
()Return a new DataArray without encoding on the array or any attached coords.
drop_indexes
(coord_names, *[, errors])Drop the indexes assigned to the given coordinates.
drop_isel
([indexers])Drop index positions from this DataArray.
drop_sel
([labels, errors])Drop index labels from this DataArray.
drop_vars
(names, *[, errors])Returns an array with dropped variables.
dropna
(dim, *[, how, thresh])Returns a new array with dropped labels for missing values along the provided dimension.
equals
(other)True if two DataArrays have the same dimensions, coordinates and values; otherwise False.
expand_dims
([dim, axis, ...])Return a new object with an additional axis (or axes) inserted at the corresponding position in the array shape.
ffill
(dim[, limit])Fill NaN values by propagating values forward
fillna
(value)Fill missing values in this object.
from_dict
(d)Convert a dictionary into an xarray.DataArray
from_iris
(cube)Convert a iris.cube.Cube into an xarray.DataArray
from_series
(series[, sparse])Convert a pandas.Series into an xarray.DataArray.
get_axis_num
(dim)Return axis number(s) corresponding to dimension(s) in this array.
get_index
(key)Get an index for a dimension, with fall-back to a default RangeIndex
groupby
(group[, squeeze, restore_coord_dims])Returns a DataArrayGroupBy object for performing grouped operations.
groupby_bins
(group, bins[, right, labels, ...])Returns a DataArrayGroupBy object for performing grouped operations.
head
([indexers])Return a new DataArray whose data is given by the first n values along the specified dimension(s).
identical
(other)Like equals, but also checks the array name and attributes, and attributes on all coordinates.
idxmax
([dim, skipna, fill_value, keep_attrs])Return the coordinate label of the maximum value along a dimension.
idxmin
([dim, skipna, fill_value, keep_attrs])Return the coordinate label of the minimum value along a dimension.
integrate
([coord, datetime_unit])Integrate along the given coordinate using the trapezoidal rule.
interp
([coords, method, assume_sorted, kwargs])Interpolate a DataArray onto new coordinates
interp_calendar
(target[, dim])Interpolates the DataArray to another calendar based on decimal year measure.
interp_like
(other[, method, assume_sorted, ...])Interpolate this object onto the coordinates of another object, filling out of range values with NaN.
interpolate_na
([dim, method, limit, ...])Fill in NaNs by interpolating according to different methods.
isel
([indexers, drop, missing_dims])Return a new DataArray whose data is given by selecting indexes along the specified dimension(s).
isin
(test_elements)Tests each value in the array for whether it is in test elements.
isnull
([keep_attrs])Test each value in the array for whether it is a missing value.
item
(*args)Copy an element of an array to a standard Python scalar and return it.
load
(**kwargs)Manually trigger loading of this array's data from disk or a remote source into memory and return this array.
map_blocks
(func[, args, kwargs, template])Apply a function to each block of this DataArray.
max
([dim, skipna, keep_attrs])Reduce this DataArray's data by applying
max
along some dimension(s).mean
([dim, skipna, keep_attrs])Reduce this DataArray's data by applying
mean
along some dimension(s).median
([dim, skipna, keep_attrs])Reduce this DataArray's data by applying
median
along some dimension(s).min
([dim, skipna, keep_attrs])Reduce this DataArray's data by applying
min
along some dimension(s).notnull
([keep_attrs])Test each value in the array for whether it is not a missing value.
pad
([pad_width, mode, stat_length, ...])Pad this array along one or more dimensions.
persist
(**kwargs)Trigger computation in constituent dask arrays
pipe
(func, *args, **kwargs)Apply
func(self, *args, **kwargs)
plot
alias of
DataArrayPlotAccessor
polyfit
(dim, deg[, skipna, rcond, w, full, cov])Least squares polynomial fit.
prod
([dim, skipna, min_count, keep_attrs])Reduce this DataArray's data by applying
prod
along some dimension(s).quantile
(q[, dim, method, keep_attrs, ...])Compute the qth quantile of the data along the specified dimension.
query
([queries, parser, engine, missing_dims])Return a new data array indexed along the specified dimension(s), where the indexers are given as strings containing Python expressions to be evaluated against the values in the array.
rank
(dim, *[, pct, keep_attrs])Ranks the data.
reduce
(func[, dim, axis, keep_attrs, keepdims])Reduce this array by applying func along some dimension(s).
reindex
([indexers, method, tolerance, copy, ...])Conform this object onto the indexes of another object, filling in missing values with
fill_value
.reindex_like
(other, *[, method, tolerance, ...])Conform this object onto the indexes of another object, for indexes which the objects share.
rename
([new_name_or_name_dict])Returns a new DataArray with renamed coordinates, dimensions or a new name.
reorder_levels
([dim_order])Rearrange index levels using input order.
resample
([indexer, skipna, closed, label, ...])Returns a Resample object for performing resampling operations.
reset_coords
([names, drop])Given names of coordinates, reset them to become variables.
reset_index
(dims_or_levels[, drop])Reset the specified index(es) or multi-index level(s).
roll
([shifts, roll_coords])Roll this array by an offset along one or more dimensions.
rolling
([dim, min_periods, center])Rolling window object for DataArrays.
rolling_exp
([window, window_type])Exponentially-weighted moving window.
round
(*args, **kwargs)Round an array to the given number of decimals.
searchsorted
(v[, side, sorter])Find indices where elements of v should be inserted in a to maintain order.
sel
([indexers, method, tolerance, drop])Return a new DataArray whose data is given by selecting index labels along the specified dimension(s).
set_close
(close)Register the function that releases any resources linked to this object.
set_index
([indexes, append])Set DataArray (multi-)indexes using one or more existing coordinates.
set_xindex
(coord_names[, index_cls])Set a new, Xarray-compatible index from one or more existing coordinate(s).
shift
([shifts, fill_value])Shift this DataArray by an offset along one or more dimensions.
sortby
(variables[, ascending])Sort object by labels or values (along an axis).
squeeze
([dim, drop, axis])Return a new object with squeezed data.
stack
([dim, create_index, index_cls])Stack any number of existing dimensions into a single new dimension.
std
([dim, skipna, ddof, keep_attrs])Reduce this DataArray's data by applying
std
along some dimension(s).sum
([dim, skipna, min_count, keep_attrs])Reduce this DataArray's data by applying
sum
along some dimension(s).swap_dims
([dims_dict])Returns a new DataArray with swapped dimensions.
tail
([indexers])Return a new DataArray whose data is given by the last n values along the specified dimension(s).
thin
([indexers])Return a new DataArray whose data is given by each n value along the specified dimension(s).
to_dask_dataframe
([dim_order, set_index])Convert this array into a dask.dataframe.DataFrame.
to_dataframe
([name, dim_order])Convert this array and its coordinates into a tidy pandas.DataFrame.
to_dataset
([dim, name, promote_attrs])Convert a DataArray to a Dataset.
to_dict
([data, encoding])Convert this xarray.DataArray into a dictionary following xarray naming conventions.
to_index
()Convert this variable to a pandas.Index.
to_iris
()Convert this array into a iris.cube.Cube
to_masked_array
([copy])Convert this array into a numpy.ma.MaskedArray
to_netcdf
([path, mode, format, group, ...])Write DataArray contents to a netCDF file.
to_numpy
()Coerces wrapped data to numpy and returns a numpy.ndarray.
to_pandas
()Convert this array into a pandas object with the same shape.
to_series
()Convert this array into a pandas.Series.
to_unstacked_dataset
(dim[, level])Unstack DataArray expanding to Dataset along a given level of a stacked coordinate.
to_zarr
([store, chunk_store, mode, ...])Write DataArray contents to a Zarr store
transpose
(*dim[, transpose_coords, missing_dims])Return a new DataArray object with transposed dimensions.
unify_chunks
()Unify chunk size along all chunked dimensions of this DataArray.
unstack
([dim, fill_value, sparse])Unstack existing dimensions corresponding to MultiIndexes into multiple new dimensions.
var
([dim, skipna, ddof, keep_attrs])Reduce this DataArray's data by applying
var
along some dimension(s).weighted
(weights)Weighted DataArray operations.
where
(cond[, other, drop])Filter elements from this object according to a condition.
dt
reset_encoding
str
to_bytesio
to_pil
- item(*args)[source]#
Copy an element of an array to a standard Python scalar and return it.
- Parameters:
- *args : Arguments (variable number and type)
none: in this case, the method only works for arrays with one element (a.size == 1), which element is copied into a standard Python scalar object and returned.
int_type: this argument is interpreted as a flat index into the array, specifying which element to copy and return.
tuple of int_types: functions as does a single int_type argument, except that the argument is interpreted as an nd-index into the array.
- Returns:
- z : Standard Python scalar object
A copy of the specified element of the array as a suitable Python scalar
Notes
When the data type of a is longdouble or clongdouble, item() returns a scalar array object because there is no available Python scalar that would not lose information. Void arrays return a buffer object for item(), unless fields are defined, in which case a tuple is returned.
item is very similar to a[args], except, instead of an array scalar, a standard Python scalar is returned. This can be useful for speeding up access to elements of the array and doing arithmetic on elements of the array using Python’s optimized math.
Examples
>>> np.random.seed(123)
>>> x = np.random.randint(9, size=(3, 3))
>>> x
array([[2, 2, 6],
       [1, 3, 6],
       [1, 0, 1]])
>>> x.item(3)
1
>>> x.item(7)
0
>>> x.item((0, 1))
2
>>> x.item((2, 2))
1
For an array with object dtype, elements are returned as-is.
>>> a = np.array([np.int64(1)], dtype=object)
>>> a.item()  # return np.int64
np.int64(1)
- datashader.transfer_functions.dynspread(img, threshold=0.5, max_px=3, shape='circle', how=None, name=None)[source]#
Spread pixels in an image dynamically based on the image density.
Spreading expands each pixel a certain number of pixels on all sides according to a given shape, merging pixels using a specified compositing operator. This can be useful to make sparse plots more visible. Dynamic spreading determines how many pixels to spread based on a density heuristic. Spreading starts at 1 pixel, and stops when the fraction of adjacent non-empty pixels reaches the specified threshold, or the max_px is reached, whichever comes first.
- Parameters:
  - img : Image
  - threshold : float, optional
    A tuning parameter in [0, 1], with higher values giving more spreading.
  - max_px : int, optional
    Maximum number of pixels to spread on all sides.
  - shape : str, optional
    The shape to spread by. Options are 'circle' [default] or 'square'.
  - how : str, optional
    The name of the compositing operator to use when combining pixels. Default of None uses 'over' operator for Image objects and 'add' operator otherwise.
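A minimal sketch, starting from an already-shaded Image img:

>>> import datashader.transfer_functions as tf
>>> # spread up to 4 px, stopping early once enough neighboring pixels fill in
>>> spread_img = tf.dynspread(img, threshold=0.6, max_px=4)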
- datashader.transfer_functions.set_background(img, color=None, name=None)[source]#
Return a new image, with the background set to color.
- Parameters:
  - img : Image
  - color : color name or tuple, optional
    The background color. Can be specified either by name, hexcode, or as a tuple of (red, green, blue) values.
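For example, to composite a shaded Image img over black:

>>> import datashader.transfer_functions as tf
>>> img_on_black = tf.set_background(img, 'black')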
- datashader.transfer_functions.shade(agg, cmap=['lightblue', 'darkblue'], color_key=['#e41a1c', '#377eb8', '#4daf4a', '#984ea3', '#ff7f00', '#ffff33', '#a65628', '#f781bf', '#999999', '#66c2a5', '#fc8d62', '#8da0cb', '#a6d854', '#ffd92f', '#e5c494', '#ffffb3', '#fb8072', '#fdb462', '#fccde5', '#d9d9d9', '#ccebc5', '#ffed6f'], how='eq_hist', alpha=255, min_alpha=40, span=None, name=None, color_baseline=None, rescale_discrete_levels=False)[source]#
Convert a DataArray to an image by choosing an RGBA pixel color for each value.
Requires a DataArray with a single data dimension, here called the “value”, indexed using either 2D or 3D coordinates.
For a DataArray with 2D coordinates, the RGB channels are computed from the values by interpolated lookup into the given colormap cmap. The A channel is then set to the given fixed alpha value for all non-zero values, and to zero for all zero values. A dictionary color_key that specifies categories (values in agg) and corresponding colors can be provided to support discrete coloring of 2D aggregates, i.e. aggregates with a single category per pixel, with no mixing. The A channel is set to the given alpha value for all pixels in the categories specified in color_key, and to zero otherwise.

DataArrays with 3D coordinates are expected to contain values distributed over different categories that are indexed by the additional coordinate. Such an array would reduce to the 2D-coordinate case if collapsed across the categories (e.g. if one did aggc.sum(dim='cat') for a categorical dimension cat). The RGB channels for the uncollapsed, 3D case are mixed from separate values over all categories. They are computed by averaging the colors in the provided color_key (with one color per category), weighted by the array's value for that category. The A channel is then computed from the array's total value collapsed across all categories at that location, ranging from the specified min_alpha to the maximum alpha value (255).

- Parameters:
  - agg : DataArray
  - cmap : list of colors or matplotlib.colors.Colormap, optional
    The colormap to use for 2D agg arrays. Can be either a list of colors (specified either by name, RGBA hexcode, or as a tuple of (red, green, blue) values), or a matplotlib colormap object. Default is ["lightblue", "darkblue"].
  - color_key : dict or iterable
    The colors to use for a categorical agg array. In the 3D case, it can be either a dict mapping from field name to colors, or an iterable of colors in the same order as the record fields, and including at least that many distinct colors. In the 2D case, color_key must be a dict where all keys are categories, and values are corresponding colors. The number of categories does not necessarily equal the number of unique values in the agg DataArray.
  - how : str or callable, optional
    The interpolation method to use, for the cmap of a 2D DataArray or the alpha channel of a 3D DataArray. Valid strings are 'eq_hist' [default], 'cbrt' (cube root), 'log' (logarithmic), and 'linear'. Callables take 2 arguments - a 2-dimensional array of magnitudes at each pixel, and a boolean mask array indicating missingness. They should return a numeric array of the same shape, with NaN values where the mask was True.
  - alpha : int, optional
    Value between 0 - 255 representing the alpha value to use for colormapped pixels that contain data (i.e. non-NaN values). Also used as the maximum alpha value when alpha is indicating data value, such as for single colors or categorical plots. Regardless of this value, NaN values are set to be fully transparent when doing colormapping.
  - min_alpha : float, optional
    The minimum alpha value to use for non-empty pixels when alpha is indicating data value, in [0, 255]. Use a higher value to avoid undersaturation, i.e. poorly visible low-value datapoints, at the expense of the overall dynamic range. Note that min_alpha will not take any effect when doing discrete categorical coloring for the 2D case, as the aggregate can have only a single value to denote the category.
  - span : list of min-max range, optional
    Min and max data values to use for 2D colormapping, and 3D alpha interpolation, when wishing to override autoranging.
  - name : string name, optional
    Optional string name to give to the Image object to return, to label results for display.
  - color_baseline : float or None
    Baseline for calculating how categorical data mixes to determine the color of a pixel. The color for each category is weighted by how far that category's value is above this baseline value, out of the total sum across all categories' values. A value of zero is appropriate for counts and for other physical quantities for which zero is a meaningful reference; each category then contributes to the final color in proportion to how much each category contributes to the final sum. However, if values can be negative or if they are on an interval scale where values e.g. twice as far from zero are not twice as high (such as temperature in Fahrenheit), then you will need to provide a suitable baseline value for use in calculating color mixing. A value of None (the default) means to take the minimum across the entire aggregate array, which is safe but may not weight the colors as you expect; any categories with values near this baseline will contribute almost nothing to the final color. As a special case, if the only data present in a pixel is at the baseline level, the color will be an evenly weighted average of all such categories with data (to avoid the color being undefined in this case).
  - rescale_discrete_levels : boolean, optional
    If how='eq_hist' and there are only a few discrete values, then rescale_discrete_levels=True decreases the lower limit of the autoranged span so that the values are rendered towards the (more visible) top of the cmap range, thus avoiding washout of the lower values. Has no effect if how != 'eq_hist'. Default is False.
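Two common invocations, one numerical and one categorical; agg and agg_by_cat are assumed to come from earlier Canvas calls, and all names are illustrative:

>>> import datashader.transfer_functions as tf
>>> # 2D aggregate: interpolated colormap lookup with a log transfer function
>>> img = tf.shade(agg, cmap=['lightblue', 'darkblue'], how='log')
>>> # 3D (categorical) aggregate, e.g. from ds.by('cat', ds.count()):
>>> # one color per category, mixed per pixel
>>> img_cat = tf.shade(agg_by_cat, color_key={'a': 'red', 'b': 'blue'})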
- datashader.transfer_functions.spread(img, px=1, shape='circle', how=None, mask=None, name=None)[source]#
Spread pixels in an image.
Spreading expands each pixel a certain number of pixels on all sides according to a given shape, merging pixels using a specified compositing operator. This can be useful to make sparse plots more visible.
- Parameters:
  - img : Image or other DataArray
  - px : int, optional
    Number of pixels to spread on all sides.
  - shape : str, optional
    The shape to spread by. Options are 'circle' [default] or 'square'.
  - how : str, optional
    The name of the compositing operator to use when combining pixels. Default of None uses 'over' operator for Image objects and 'add' operator otherwise.
  - mask : ndarray, shape (M, M), optional
    The mask to spread over. If provided, this mask is used instead of generating one based on px and shape. Must be a square array with odd dimensions. Pixels are spread from the center of the mask to locations where the mask is True.
  - name : string name, optional
    Optional string name to give to the Image object to return, to label results for display.
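A minimal sketch, using fixed rather than density-based spreading on a shaded Image img:

>>> import datashader.transfer_functions as tf
>>> spread_img = tf.spread(img, px=2, shape='square')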