Plotting Pitfalls#
Common plotting pitfalls that get worse with large data#
When working with large datasets, visualizations are often the only practical way to understand their properties: there are too many data points to examine individually, and it is all too easy to be fooled by summary statistics computed blindly. It is therefore important to be aware of some common plotting problems that are minor inconveniences with small datasets but become serious problems with larger ones.
We’ll cover overplotting, oversaturation, undersampling, undersaturation, and related pitfalls.
You can skip to the end if you just want to see an illustration of these problems.
This notebook requires HoloViews, datashader, colorcet, and matplotlib, and optionally scikit-image, which can be installed with:
conda install holoviews datashader colorcet matplotlib scikit-image
We’ll first load the plotting libraries and set up some defaults:
import numpy as np
np.random.seed(42)
import holoviews as hv
from holoviews.operation.datashader import datashade
from holoviews import opts, dim
hv.extension('matplotlib')
from colorcet import fire
datashade.cmap=fire[50:]
opts.defaults(
    opts.Image(cmap="gray_r", axiswise=True),
    opts.Points(cmap="bwr", edgecolors='k', s=50, alpha=1.0),
    opts.RGB(bgcolor="white", show_grid=False),
    opts.Scatter3D(color=dim('c'), fig_size=250, cmap='bwr', edgecolor='k', s=50, alpha=1.0))
1. Overplotting#
Let’s consider plotting some 2D data points that come from two separate categories, here plotted as blue and red in A and B below. When the two categories are overlaid, the appearance of the result can be very different depending on which one is plotted first:
def points(offset=0.5, pts=300, s=1):
    """Return `pts` normally distributed points centered at (offset, s*offset),
    with a constant category value -s stored in the 'c' value dimension."""
    coords = (np.random.normal(offset, size=pts), np.random.normal(s*offset, size=pts), -s*np.ones(pts))
    return hv.Points(coords, vdims=['c']).opts(color=dim('c'))
blues, reds = points(s=1), points(s=-1)
blues + reds + (reds * blues) + (blues * reds)
Plots C and D show the same distribution of points, yet they give a very different impression of which category is more common, which can lead to incorrect decisions based on this data. Of course, both categories are equally common in this case, so neither C nor D accurately reflects the data. The cause of this problem is simply occlusion:
hmap = hv.HoloMap({0:blues,0.000001:reds,1:blues,2:reds}, kdims=['level'])
hv.Scatter3D(hmap.collapse(), kdims=['x','y','level'], vdims=['c'])
Occlusion of data by other data is called overplotting or overdrawing, and it occurs whenever a datapoint or curve is plotted on top of another datapoint or curve, obscuring it. It’s thus a problem not just for scatterplots, as here, but for curve plots, 3D surface plots, 3D bar graphs, and any other plot type where data can be obscured.
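As a quick sketch of the same effect with curves rather than points, the following overlays 100 random-walk curves; each curve drawn later obscures the ones drawn before it, just as overlapping points do above:
# Minimal sketch: 100 random-walk curves overlaid, so later curves occlude earlier ones.
hv.Overlay([hv.Curve(np.random.normal(size=50).cumsum()) for i in range(100)])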
2. Oversaturation#
You can reduce problems with overplotting by using transparency/opacity, via the alpha parameter provided to control opacity in most plotting programs. E.g. if alpha is 0.1, full color saturation will be achieved only when 10 points overlap, reducing the effects of plot ordering but making it harder to see individual points:
layout = blues + reds + (reds * blues) + (blues * reds)
layout.opts(opts.Points(s=50, alpha=0.1))
Here C and D are improved in that they look very similar, but if they were truly accurate plots they would be identical, since the distributions are identical. If you compare the two plots closely, you can still see a few locations with oversaturation, a problem that will occur when more than 10 points overlap. In this example the oversaturated points are located near the middle of the plot, but the only way to know whether they are there would be to plot both versions and compare, or to examine the pixel values to see if any have reached full saturation (a necessary but not sufficient condition for oversaturation). Locations where saturation has been reached have problems similar to overplotting, because only the last 10 points plotted will affect the final color (for alpha of 0.1).
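To get a feel for how quickly a pixel stops responding to additional points, here is a minimal sketch of the combined opacity of n overlapping points, assuming a standard “over” alpha-compositing model as an approximation of how plotting libraries blend semi-transparent markers:
# Rough check of how opacity accumulates as semi-transparent points stack up,
# assuming standard "over" compositing (an approximation, not any library's exact rule).
alpha = 0.1
for n in (1, 5, 10, 20, 50):
    combined = 1 - (1 - alpha)**n   # opacity after n overlapping points
    print(f"{n:3d} overlapping points -> combined opacity {combined:.2f}")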
Worse, even if one has chosen an alpha value that approximately or usually avoids oversaturation, as in the plot above, the correct value depends on the dataset. If more points overlap in a given region, a manually adjusted alpha setting that worked well for a previous dataset will systematically misrepresent the new dataset:
blues, reds = points(pts=600, s=1), points(pts=600, s=-1)
layout = blues + reds + (reds * blues) + (blues * reds)
layout.opts(opts.Points(alpha=0.1))
Here C and D again look qualitatively different, yet still represent the same distributions. Since we’re assuming that the point of the visualization is to reveal the underlying dataset, having to tune visualization parameters manually based on the properties of the dataset itself is a serious problem.
To make it even more complicated, the correct alpha also depends on the dot size, because smaller dots have less overlap for the same dataset. With smaller dots, C and D look more similar, but the color of the dots is now difficult to see in all cases because the dots are too transparent for this size:
layout = blues + reds + (reds * blues) + (blues * reds)
layout.opts(opts.Points(s=10, alpha=0.1, edgecolor=None))
As you can see, it is very difficult to find settings for the dot size and alpha parameters that correctly reveal the data, even for relatively small and obvious datasets like these. With larger datasets of unknown content, it is difficult even to detect that such problems are occurring, leading to false conclusions based on inappropriately visualized data.
3. Undersampling#
With a single category instead of the multiple categories shown above, oversaturation simply obscures spatial differences in density. For instance, 10, 20, and 2000 single-category points overlapping will all look the same visually, for alpha=0.1. Let’s again consider an example that has a sum of two normal distributions slightly offset from one another, but no longer using color to separate them into categories:
def gaussians(specs=[(1.5,0,1.0),(-1.5,0,1.0)], num=100):
    """
    A concatenated list of points taken from 2D Gaussian distributions.
    Each distribution is specified as a tuple (x,y,s), where x,y is the mean
    and s is the standard deviation. Defaults to two horizontally
    offset unit-standard-deviation Gaussians.
    """
    np.random.seed(1)
    dists = [(np.random.normal(x,s,num), np.random.normal(y,s,num)) for x,y,s in specs]
    return np.hstack([d[0] for d in dists]), np.hstack([d[1] for d in dists])
points = (hv.Points(gaussians(num=600),   label="600 points",   group="Small dots") +
          hv.Points(gaussians(num=60000), label="60000 points", group="Small dots") +
          hv.Points(gaussians(num=600),   label="600 points",   group="Tiny dots") +
          hv.Points(gaussians(num=60000), label="60000 points", group="Tiny dots"))

points.opts(
    opts.Points('Small_dots', s=1,   alpha=1),
    opts.Points('Tiny_dots',  s=0.1, alpha=0.1))
Just as shown for the multiple-category case above, finding settings to avoid overplotting and oversaturation is difficult. The “Small dots” setting (size 1, full alpha) works fairly well for a sample of 600 points A, but it has serious overplotting issues for larger datasets, obscuring the shape and density of the distribution B. Using the “Tiny dots” setting (10 times smaller dots, alpha 0.1) works well for the larger dataset D, but not at all for the 600-point dataset C. Clearly, not all of these settings are accurately conveying the underlying distribution, as they all appear quite different from one another. Similar problems occur for datasets of the same size but with greater or lesser overlap between points, which of course varies with every new dataset.
In any case, as dataset size increases, at some point plotting a full scatterplot like any of these will become impractical with current plotting software. At this point, people often simply subsample their dataset, plotting 10,000 or perhaps 100,000 randomly selected datapoints. But as panel A shows, the shape of an undersampled distribution can be very difficult or impossible to make out, leading to incorrect conclusions about the distribution. Such problems can occur even when taking very large numbers of samples, if examining sparsely populated regions of the space, which will approximate panel A for some plot settings and panel C for others. The actual shape of the distribution is only visible if sufficient datapoints are available in that region and appropriate plot settings are used, as in D, but ensuring that both conditions are true is a quite difficult process of trial and error, making it very likely that important features of the dataset will be missed.
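Such subsampling typically amounts to nothing more than keeping a random subset of the rows before plotting; here is a minimal sketch using the gaussians helper defined above (the 10,000-point budget is arbitrary):
# Minimal sketch of random subsampling: keep 10,000 of the 120,000 points before plotting.
xs, ys = gaussians(num=60000)
keep = np.random.choice(len(xs), size=10000, replace=False)
hv.Points((xs[keep], ys[keep]), label="10000-point subsample")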
To avoid undersampling large datasets, researchers often use 2D histograms visualized as heatmaps, rather than scatterplots showing individual points. A heatmap has a fixed-size grid regardless of the dataset size, so it can make use of all the data. Heatmaps effectively approximate a probability density function over the specified space, with coarser heatmaps averaging out noise or irrelevant variations to reveal an underlying distribution, and finer heatmaps able to represent more details in the distribution.
Let’s look at some heatmaps with different numbers of bins for the same two-Gaussians distribution:
def heatmap(coords, bins=10, offset=0.0, transform=lambda d,m: d, label=None):
    """
    Given a set of coordinates, bins them into a 2d histogram grid
    of the specified size, and optionally transforms the counts
    and/or compresses them into a visible range starting at a
    specified offset between 0 and 1.0.
    """
    hist,xs,ys  = np.histogram2d(coords[0], coords[1], bins=bins)
    counts      = hist[:,::-1].T
    transformed = transform(counts, counts!=0)
    span        = transformed.max() - transformed.min()
    compressed  = np.where(counts!=0, offset+(1.0-offset)*transformed/span, 0)
    args        = dict(label=label) if label else {}
    # bounds are (left, bottom, right, top) in data coordinates
    return hv.Image(compressed, bounds=(xs[0],ys[0],xs[-1],ys[-1]), **args)
hv.Layout([heatmap(gaussians(num=60000),bins) for bins in [8,20,200]])
As you can see, a too-coarse binning grid A cannot represent this distribution faithfully, but with enough bins C, the heatmap will approximate a tiny-dot scatterplot like plot D in the previous figure. For intermediate grid sizes B, the heatmap can average out the effects of undersampling; B is actually a more faithful representation of the underlying distribution (which we know is two offset 2D Gaussians) than C is, while C more faithfully represents the sampling (i.e., the individual points drawn from this distribution). Thus choosing a good binning grid size for a heatmap does take some expertise and knowledge of the goals of the visualization, and it’s always useful to look at multiple binning-grid spacings for comparison. Still, at least the binning parameter is something meaningful at the data level (how coarse a view of the data is desired?) rather than just a plotting detail (what size and transparency should I use for the points?) that must be determined arbitrarily.
In any case, at least in principle, the heatmap approach can entirely avoid the first three problems above: overplotting (since multiple data points sum arithmetically into the grid cell, without obscuring one another), oversaturation (because the minimum and maximum counts observed can automatically be mapped to the two ends of a visible color range), and undersampling (since the resulting plot size is independent of the number of data points, allowing it to use an unbounded amount of incoming data).
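The automatic mapping that avoids oversaturation is just a min-max normalization of the bin counts onto the visible color range; a minimal sketch of that step (of the idea, not of any particular library’s internals):
# Sketch: map raw bin counts onto [0, 1] so the smallest and largest observed
# counts land at the two ends of the color range, regardless of dataset size.
hist, xedges, yedges = np.histogram2d(*gaussians(num=60000), bins=20)
normalized = (hist - hist.min()) / (hist.max() - hist.min())
print(normalized.min(), normalized.max())   # always 0.0 and 1.0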
4. Undersaturation#
Of course, heatmaps come with their own plotting pitfalls. One rarely appreciated issue common to both heatmaps and alpha-based scatterplots is undersaturation, where large numbers of data points can be missed entirely because they are spread over many different heatmap bins or many nearly transparent scatter points. To look at this problem, let’s again consider a set of multiple 2D Gaussians, but this time with different amounts of spread (standard deviation):
dist = gaussians(specs=[(2,2,0.03), (2,-2,0.1), (-2,-2,0.5), (-2,2,1.0), (0,0,3)],num=10000)
hv.Points(dist) + hv.Points(dist).opts(s=0.03) + hv.Points(dist).opts(s=0.01, alpha=0.05)