# Timeseries¶

In many domains it is common to plot scalar values as a function of time (or other single dimensions). As long as the total number of datapoints is relatively low (in the tens of thousands, perhaps) and there are only a few separate curves involved, most plotting packages will do well. However, for longer or more frequent sampling, you'll be required to subsample your data before plotting, potentially missing important peaks or troughs in the data. And even just a few timeseries visualized together quickly run into highly misleading overplotting issues, making the most recently plotted curves unduly prominent.

For applications with many datapoints or when visualizing multiple curves, datashader provides a principled way to view *all* of your data. In this example, we will synthesize several time-series curves so that we know their properties, and then show how datashader can reveal them.

```
import datetime
import pandas as pd
import numpy as np
import xarray as xr
import datashader as ds
import datashader.transfer_functions as tf
from collections import OrderedDict
```

### Create some fake timeseries data¶

Here we create a fake time series signal, then generate many noisy versions of it. We will also add a couple of "rogue" lines, with different statistical properties, and see how well those stand out from the rest.

```
# Constants
np.random.seed(2)
n = 100000 # Number of points
cols = list('abcdefg') # Column names of samples
start = datetime.datetime(2010, 10, 1, 0) # Start time
# Generate a fake signal
signal = np.random.normal(0, 0.3, size=n).cumsum() + 50
# Generate many noisy samples from the signal
noise = lambda var, bias, n: np.random.normal(bias, var, n)
data = {c: signal + noise(1, 10*(np.random.random() - 0.5), n) for c in cols}
# Add some "rogue lines" that differ from the rest
cols += ['x'] ; data['x'] = signal + np.random.normal(0, 0.02, size=n).cumsum() # Gradually diverges
cols += ['y'] ; data['y'] = signal + noise(1, 20*(np.random.random() - 0.5), n) # Much noisier
cols += ['z'] ; data['z'] = signal # No noise at all
# Pick a few samples from the first line and really blow them out
locs = np.random.choice(n, 10)
data['a'][locs] *= 2
# Create a dataframe
data['Time'] = [start + datetime.timedelta(minutes=1)*i for i in range(n)]
df = pd.DataFrame(data)
df.tail()
```

The native datashader API illustrated here does not support datetimes directly, so we create a new column casting the datetimes to integers:

```
df['ITime'] = pd.to_datetime(df['Time']).astype('int64')
```
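
The resulting integers are nanoseconds since the Unix epoch, so they can be mapped back to datetimes whenever readable labels are needed. A quick check (not part of the original notebook):

```
# int64 timestamps are nanoseconds since the Unix epoch, so the cast is
# reversible; this should print the same value as df['Time'].iloc[0]
print(pd.to_datetime(df['ITime'].iloc[0]))
```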

Now we can compute the x- and y-ranges:

```
# Default plot ranges:
x_range = (df.iloc[0].ITime, df.iloc[-1].ITime)
y_range = (1.2*signal.min(), 1.2*signal.max())
print("x_range: {0} y_range: {1}".format(x_range, y_range))
```

### Plotting *all* the datapoints¶

With datashader, we can plot *all* the datapoints for a given timeseries. Let's select the first curve 'a' and draw it into an aggregate grid, connecting each datapoint in the series:

```
%%time
cvs = ds.Canvas(x_range=x_range, y_range=y_range, plot_height=300, plot_width=900)
aggs = OrderedDict((c, cvs.line(df, 'ITime', c)) for c in cols)  # aggregate every curve; only 'a' is shaded below
img = tf.shade(aggs['a'])
```

```
img
```

The result looks similar to what you might find in any plotting program, but it uses all 100,000 datapoints, and would work similarly for 1, 10, or 100 million points (determined by the `n` variable above).

Why is using all the datapoints important? To see, let's downsample the data by a factor of 10, plotting 10,000 datapoints for the same curve:

```
mask = (df.index % 10) == 0
tf.shade(cvs.line(df[mask][['a','ITime']], 'ITime', 'a'))
```

The resulting plot is similar, but now none of the "blown up" datapoints (sharp spikes) that were clearly visible in the first version show up at all! They did not happen to be amongst the sampled points, and thus are simply missing from this plot; that can never happen when datashader is given all of the points.
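
We can check this directly. The mask keeps only every tenth row, so a blown-up point survives only if its index happens to be a multiple of 10; the following quick check (using the `locs` array defined when the data was generated) counts how many do:

```
# Count how many of the blown-up indices fall on the every-tenth-row grid
# kept by the mask above; typically few or none of them survive
surviving = int(np.sum(locs % 10 == 0))
print(f"{surviving} of {len(locs)} blown-up points remain after downsampling")
```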

### Overplotting problems¶

What happens if we then overlay multiple such curves? In a traditional plotting program, there would be serious issues with overplotting, because these curves are highly overlapping. To show what would typically happen, let's merge the images corresponding to each of the curves:

```
renamed = [aggs[key].rename({key: 'value'}) for key in aggs]
merged = xr.concat(renamed, 'cols')
tf.shade(merged.any(dim='cols'))
```

The `any` operator merges all the data such that any pixel that is lit up for any curve is lit up in the final result. Clearly, it is difficult to see any structure in this fully overplotted data; all we can see is the envelope of these curves, i.e. the minimum and maximum value of any curve for any given time point. It remains completely unclear how the various curves in the set relate to each other. Here we know that we put in one particularly noisy curve, which presumably determines the envelope, but there's no way to tell that from the plot.
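
To make the "envelope" claim concrete, we can compute the pointwise minimum and maximum across all the curves directly from the dataframe; the `.any()` image above essentially shows only the band between these two series. (A quick check, not part of the original notebook; `env_min` and `env_max` are just illustrative names.)

```
# Pointwise envelope across all curves: the .any() image essentially shows
# only the band between these two series, hiding everything inside it
env_min = df[cols].min(axis=1)
env_max = df[cols].max(axis=1)
print(env_min.describe(), env_max.describe(), sep="\n\n")
```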

We can of course try giving each curve a different color:

```
colors = ["red", "grey", "black", "purple", "pink",
"yellow", "brown", "green", "orange", "blue"]
imgs = [tf.shade(aggs[i], cmap=[c]) for i, c in zip(cols, colors)]
tf.stack(*imgs)
```

But that doesn't help much; there are 10 curves, but only three or four colors are visible, due to overplotting. Problems like that will just get much worse if there are 100, 1000, or 1 million curves. Moreover, the results will look entirely different if we plot them in the opposite order:

```
tf.stack(*reversed(imgs))
```

Having the visualization look completely different depending on an arbitrary choice like plotting order is a serious problem if we want to understand the properties of this data from the data itself.

### Trends and outliers¶

So, what might we be interested in understanding when plotting many curves? One possibility is the combination of (a) the overall trends, and (b) any curves (and individual datapoints) that differ from those trends.

To look at the trends, we should combine the plots not by overplotting as in the examples above, but using operators that accurately reveal overlap between the curves. When doing so, we won't try to distinguish individual curves by assigning each a unique color, but will instead try to show where the set of curves establishes a trend and where it departs from one. (Assigning colors per curve could be done as for the racial categories in census.ipynb, but that won't be further investigated here.)
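
For reference, a minimal sketch of how such per-curve coloring might be set up (hypothetical reshaping code, not part of this example): put the data in long format with one row per sample, tag each row with its curve name, and aggregate with datashader's `ds.count_cat` so that colors mix according to actual per-pixel overlap rather than plotting order.

```
# Hypothetical sketch: per-curve categorical coloring via ds.count_cat
frames = []
for c in cols:
    d = df[['ITime', c]].rename(columns={c: 'value'})
    d['curve'] = c
    frames.append(d)
    # NaN separator row so the line glyph does not connect successive curves
    frames.append(pd.DataFrame({'ITime': [np.nan], 'value': [np.nan], 'curve': [c]}))
df_long = pd.concat(frames, ignore_index=True)
df_long['curve'] = df_long['curve'].astype('category')

# Count hits per curve in each pixel, then shade with one color per curve
agg_cat = cvs.line(df_long, 'ITime', 'value', agg=ds.count_cat('curve'))
tf.shade(agg_cat, color_key=dict(zip(cols, colors)))
```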

Instead of the `.any()` operator above that resulted in complete overplotting, or the `tf.stack` operator that depended strongly on the plotting order, let's use the `.sum()` operator to reveal the full patterns of overlap arithmetically:

```
total = tf.shade(merged.sum(dim='cols'), how='linear')
total
```

With some study, the overall structure of this dataset should be clear, matching what we know we put in when we created the data:

- Individual rogue datapoints from curve 'a' are clearly visible (the seven sharp spikes)
- The overall trend is clearly visible (with the default colormap, the darkest blue areas mark the regions of highest overlap)
- Line 'x', which gradually diverges from the trend, is clearly visible (as the light blue (low-count) areas that pull away below the rest in the right half of the plot).

(Note that if you change the random seed or the number of datapoints, the specific values and locations will differ from those mentioned in the text.)

None of these observations would have been possible with downsampled, overplotted curves as would be typical of other plotting approaches.

### Highlighting specific curves¶

The data set also includes a couple of traces that are difficult to detect in the `.sum()` plot above, one with no noise and one with much higher noise. One way to detect such issues is to highlight each of the curves in turn, and display it in relation to the datashaded average values. For instance, those two curves (each highlighted in red below) stand out against the pack as having less or more noise than is typical:

```
tf.stack(total, tf.shade(aggs['z'], cmap=["lightblue", "red"]))
```

```
tf.stack(total, tf.shade(aggs['y'], cmap=["lightblue", "red"]))
```
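
Cycling through curves by hand does not scale, so one might want to rank them automatically. Below is a very simple, hypothetical criterion (not part of the original example; `median_curve`, `departure`, and `roughness` are illustrative names): measure each curve's departure from the pointwise median of all curves, and score each curve by how rough that departure is, so that unusually smooth and unusually noisy curves end up at opposite extremes of the ranking.

```
# Hypothetical ranking: how "rough" is each curve's departure from the pack?
median_curve = df[cols].median(axis=1)          # typical value at each time point
departure = df[cols].sub(median_curve, axis=0)  # per-curve deviation from that typical value
roughness = departure.diff().std()              # point-to-point noisiness of the deviation
print(roughness.sort_values())                  # smooth curves score low, noisy ones high
```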

### Area plots¶

As an alternative to plotting time-series data as a line, the same data can be plotted as a filled area plot, which fills the region between the curve and zero and so emphasises whether the series is positive or negative. Here is an example of a filled area plot for the 'a' curve:

```
cvs = ds.Canvas(x_range=x_range, y_range=y_range, plot_height=300, plot_width=900)
agg = cvs.area(df, x='ITime', y='a')
img = tf.shade(agg)
img
```

By specifying the `y_stack` argument, an area plot can also be used to display the difference between two curves, filling the region between them. Here is an example of an area plot of the difference between the 'a' and 'b' curves:

```
cvs = ds.Canvas(x_range=x_range, y_range=y_range, plot_height=300, plot_width=900)
agg = cvs.area(df, x='ITime', y='a', y_stack='b')
img = tf.shade(agg)
img
```

### Dynamic Plots¶

In practice, it might be difficult to cycle through each of the curves to find one that's different, as done above. Perhaps a criterion based on similarity could be devised, choosing the curve most dissimilar from the rest to plot in this way (the crude ranking sketched above is one possible starting point); that would be an interesting topic for future research. In any case, one thing that can be achieved with HoloViews is to make the plot fully interactive, with direct support for datetimes, so that the viewer can zoom in and discover such patterns dynamically, with correctly formatted axes.

```
import holoviews as hv
from holoviews.operation.datashader import datashade
hv.extension('bokeh')
```