spiketools.rate

Rate estimation utilities for spike trains.

All functions accept spike trains in the shared spiketimes representation described in spiketools.conversion.

Examples

Shared example setup used throughout the documentation:

```python
import numpy as np

from spiketools import (
    gamma_spikes,
    gaussian_kernel,
    kernel_rate,
    rate_integral,
    sliding_counts,
    triangular_kernel,
)

np.random.seed(0)
rates = np.array([6.0] * 10 + [5.6, 6.3, 5.9, 6.5, 5.8, 6.1, 5.7, 6.4, 6.0, 5.5], dtype=float)
orders = np.array([0.2] * 10 + [1.0, 2.0, 2.0, 3.0, 1.0, 2.0, 3.0, 2.0, 1.0, 3.0], dtype=float)
spiketimes = gamma_spikes(rates=rates, order=orders, tlim=[0.0, 5000.0], dt=1.0)
```
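`gamma_spikes` draws gamma-process spike trains. A standalone sketch of the idea (hypothetical, not the spiketools implementation) draws gamma-distributed inter-spike intervals with shape `order` and mean `1000 / rate` ms, then accumulates them into spike times:

```python
import numpy as np

def gamma_train_sketch(rate, order, tmax, rng):
    # Gamma ISIs with shape `order` and mean 1000 / rate ms (scale = mean / order),
    # accumulated into spike times on [0, tmax) ms.
    mean_isi = 1000.0 / rate
    n_draws = int(3 * tmax / mean_isi) + 10      # overshoot, then truncate
    isis = rng.gamma(shape=order, scale=mean_isi / order, size=n_draws)
    times = np.cumsum(isis)
    return times[times < tmax]

rng = np.random.default_rng(0)
train = gamma_train_sketch(rate=6.0, order=2.0, tmax=5000.0, rng=rng)
print(train.size > 0, bool(np.all(np.diff(train) > 0)))
```

Larger `order` makes the intervals more regular; `order = 1` recovers a Poisson process.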

Inspect both smoothing kernels directly:

```python
gauss = gaussian_kernel(25.0, dt=5.0, nstd=2.0)
# Match the Gaussian cutoff at +/- 50 ms on the dt = 5 ms grid.
tri = triangular_kernel((25.0 * 2.0) / (np.sqrt(6.0) * 5.0), dt=5.0)

print(np.round(gauss[:5], 6))
print(np.round(tri[:5], 6))
```

Example output:

```text
[0.000415, 0.000886, 0.00175, 0.003188, 0.005362]
[0.0, 0.002, 0.004, 0.006, 0.008]
```
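Both kernels come back normalized so that `kernel.sum() * dt == 1`. The Gaussian with `nstd=2.0` is truncated at +/- 2 * 25 = 50 ms; the triangular call passes `(25.0 * 2.0) / (np.sqrt(6.0) * 5.0)` because the triangular half-base spans `round(sigma * sqrt(6))` grid steps, which here comes out to 10 steps of 5 ms, matching the same +/- 50 ms support. A minimal numpy sketch mirroring the source of `gaussian_kernel` (no spiketools import needed):

```python
import numpy as np

def gaussian_kernel_sketch(sigma, dt=1.0, nstd=3.0):
    # Sample exp(-(t / sigma)^2) on a grid of step dt, truncated at +/- nstd * sigma.
    t = np.arange(-nstd * sigma, nstd * sigma + dt, dt)
    kernel = np.exp(-t ** 2 / sigma ** 2)
    # Rescale to unit integral on the grid: kernel.sum() * dt == 1.
    kernel /= kernel.sum() * dt
    return kernel

gauss = gaussian_kernel_sketch(25.0, dt=5.0, nstd=2.0)
print(len(gauss), round(float(gauss.sum() * 5.0), 6))
# -> 21 1.0
```

The 21 samples cover -50 ms to +50 ms inclusive in 5 ms steps, with the peak at the center sample.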

Run the rate analysis on the same spike trains:

```python
pooled_rate, rate_time = kernel_rate(spiketimes, gauss, tlim=[0.0, 5000.0], dt=5.0, pool=True)
counts, count_time = sliding_counts(spiketimes, window=250.0, dt=5.0, tlim=[0.0, 5000.0])
integrated = rate_integral(pooled_rate[0], dt=5.0)

print(np.round(pooled_rate[0, :5], 3))
print(np.round(counts.mean(axis=0)[:5], 3))
print(np.round(integrated[:5], 3))
```

Example output:

```text
[13.121, 13.105, 12.739, 12.063, 11.223]
[2.6, 2.5, 2.6, 2.4, 2.5]
[0.066, 0.131, 0.195, 0.255, 0.311]
```
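The three calls above reduce to simple array operations. The following standalone numpy sketches mirror the source (with `kernel_rate`'s edge trimming omitted); the `rate_integral` check reuses the pooled-rate values shown above:

```python
import numpy as np

def kernel_rate_sketch(binary, kernel):
    # Convolve each binned row with a unit-integral kernel; the factor 1000
    # converts spikes per 1 ms bin into spikes/s. Edge trimming omitted here.
    return np.array([np.convolve(row, kernel, mode="same") for row in binary]) * 1000.0

def sliding_counts_sketch(binary, window, dt=1.0):
    # Windowed spike count: boxcar of window // dt bins, 'valid' windows only.
    box = np.ones(int(window // dt))
    return np.array([np.convolve(row, box, mode="valid") for row in binary])

def rate_integral_sketch(rate, dt):
    # Cumulative expected spike count: rate (spikes/s) * dt (ms) / 1000.
    return np.cumsum(rate / 1000.0) * dt

binary = np.array([[0.0, 1.0, 0.0, 0.0, 0.0]])
print(kernel_rate_sketch(binary, np.array([0.25, 0.5, 0.25])).tolist())
# -> [[250.0, 500.0, 250.0, 0.0, 0.0]]

print(sliding_counts_sketch(np.array([[0, 1, 0, 1, 1, 0]]), window=3.0).astype(int).tolist())
# -> [[1, 2, 2, 2]]

rate = np.array([13.121, 13.105, 12.739, 12.063, 11.223])  # pooled_rate[0, :5]
print(np.round(rate_integral_sketch(rate, dt=5.0), 3).tolist())
# -> [0.066, 0.131, 0.195, 0.255, 0.311]
```

The last print reproduces the `integrated[:5]` values above, confirming that the integral is just a cumulative sum of `rate * dt / 1000`.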
"""Rate estimation utilities for spike trains.

All functions accept spike trains in the shared `spiketimes` representation
described in :mod:`spiketools.conversion`.

Examples
--------
Shared example setup used throughout the documentation:

```python
import numpy as np

from spiketools import (
    gamma_spikes,
    gaussian_kernel,
    kernel_rate,
    rate_integral,
    sliding_counts,
    triangular_kernel,
)

np.random.seed(0)
rates = np.array([6.0] * 10 + [5.6, 6.3, 5.9, 6.5, 5.8, 6.1, 5.7, 6.4, 6.0, 5.5], dtype=float)
orders = np.array([0.2] * 10 + [1.0, 2.0, 2.0, 3.0, 1.0, 2.0, 3.0, 2.0, 1.0, 3.0], dtype=float)
spiketimes = gamma_spikes(rates=rates, order=orders, tlim=[0.0, 5000.0], dt=1.0)
```

Inspect both smoothing kernels directly:

```python
gauss = gaussian_kernel(25.0, dt=5.0, nstd=2.0)
# Match the Gaussian cutoff at +/- 50 ms on the dt = 5 ms grid.
tri = triangular_kernel((25.0 * 2.0) / (np.sqrt(6.0) * 5.0), dt=5.0)

print(np.round(gauss[:5], 6))
print(np.round(tri[:5], 6))
```

Example output:

```text
[0.000415, 0.000886, 0.00175, 0.003188, 0.005362]
[0.0, 0.002, 0.004, 0.006, 0.008]
```

Run the rate analysis on the same spike trains:

```python
pooled_rate, rate_time = kernel_rate(spiketimes, gauss, tlim=[0.0, 5000.0], dt=5.0, pool=True)
counts, count_time = sliding_counts(spiketimes, window=250.0, dt=5.0, tlim=[0.0, 5000.0])
integrated = rate_integral(pooled_rate[0], dt=5.0)

print(np.round(pooled_rate[0, :5], 3))
print(np.round(counts.mean(axis=0)[:5], 3))
print(np.round(integrated[:5], 3))
```

Example output:

```text
[13.121, 13.105, 12.739, 12.063, 11.223]
[2.6, 2.5, 2.6, 2.4, 2.5]
[0.066, 0.131, 0.195, 0.255, 0.311]
```
"""

from __future__ import annotations

import numpy as np
from scipy.signal import convolve2d

from .conversion import (
    get_time_limits,
    spiketimes_to_binary,
)

__all__ = [
    "gaussian_kernel",
    "kernel_rate",
    "rate_integral",
    "sliding_counts",
    "triangular_kernel",
]


def gaussian_kernel(sigma, dt=1.0, nstd=3.0):
    r"""Return a normalized Gaussian kernel for rate smoothing.

    Parameters
    ----------
    sigma:
        Kernel width in ms.
    dt:
        Temporal resolution of the target grid in ms.
    nstd:
        Half-width of the kernel in units of `sigma`.

    Definition
    ----------
    On a grid $t_k = k\,dt$ centered at zero, this function samples the
    truncated Gaussian

    $$
    G_{\sigma, n_\mathrm{std}}(t) =
    \begin{cases}
    \frac{\exp\left(-(t / \sigma)^2\right)}
    {\int_{-n_\mathrm{std}\sigma}^{n_\mathrm{std}\sigma} \exp\left(-(u / \sigma)^2\right)\,du}
    & \text{for } |t| \le n_\mathrm{std}\,\sigma, \\
    0 & \text{otherwise,}
    \end{cases}
    $$

    and rescales the sampled support so that `kernel.sum() * dt = 1`.

    <img src="spiketools_assets/gaussian_kernel_example.png"
         alt="Gaussian kernel example"
         onerror="this.onerror=null;this.src='../spiketools_assets/gaussian_kernel_example.png';" />

    Examples
    --------
    >>> kernel = gaussian_kernel(2.0, dt=1.0, nstd=1.0)
    >>> round(kernel.sum(), 3)
    1.0
    """
    t = np.arange(-nstd * sigma, nstd * sigma + dt, dt)
    gauss = np.exp(-t ** 2 / sigma ** 2)
    gauss /= gauss.sum() * dt
    return gauss


def triangular_kernel(sigma, dt=1.0):
    r"""Return a normalized triangular smoothing kernel.

    Parameters
    ----------
    sigma:
        Width parameter; the kernel's half-base spans
        `round(sigma * sqrt(6))` grid steps of size `dt`, so `sigma` is
        interpreted on the `dt` grid rather than directly in ms.
    dt:
        Temporal resolution of the target grid in ms.

    Definition
    ----------
    Let $a = \sigma \sqrt{6}$. The underlying continuous kernel is

    $$
    T_\sigma(t) = \frac{\max\left(1 - |t| / a, 0\right)}
    {\int_{-\infty}^{\infty} \max\left(1 - |u| / a, 0\right)\,du}
    $$

    and the discrete samples are normalized so that `kernel.sum() * dt = 1`.

    <img src="spiketools_assets/triangular_kernel_example.png"
         alt="Triangular kernel example"
         onerror="this.onerror=null;this.src='../spiketools_assets/triangular_kernel_example.png';" />

    Examples
    --------
    >>> kernel = triangular_kernel(1.0, dt=1.0)
    >>> round(kernel.sum(), 3)
    1.0
    """
    half_base = int(np.around(sigma * np.sqrt(6)))
    half_kernel = np.linspace(0.0, 1.0, half_base + 1)
    kernel = np.append(half_kernel, half_kernel[:-1][::-1])
    kernel /= dt * kernel.sum()
    return kernel


def kernel_rate(spiketimes, kernel, tlim=None, dt=1.0, pool=True):
    r"""Estimate smoothed firing rates from `spiketimes`.

    Parameters
    ----------
    spiketimes:
        Canonical spike representation with times in ms.
    kernel:
        One-dimensional kernel normalized to unit integral in seconds.
    tlim:
        Optional `[tmin, tmax]` interval in ms.
    dt:
        Bin width in ms used for discretization before convolution.
    pool:
        If `True`, average across trials or units before convolution and return
        a single population rate trace. If `False`, return one trace per row.

    Returns
    -------
    tuple[np.ndarray, np.ndarray]
        `(rates, time)` where `rates` is in spikes/s.

    Definition
    ----------
    If `b_i[k]` is the discretized spike train on row `i`, then the returned
    estimate is the discrete kernel convolution

    $$
    r_i[n] = 1000 \sum_k b_i[k]\,K[n-k]
    $$

    where $K$ is normalized so that $\sum_k K[k]\,dt = 1$. If `pool=True`, the
    mean spike train across rows is convolved before returning the result.

    <img src="spiketools_assets/kernel_rate_example.png"
         alt="Kernel rate example"
         onerror="this.onerror=null;this.src='../spiketools_assets/kernel_rate_example.png';" />

    Examples
    --------
    >>> spikes = np.array([[0.0, 2.0], [0.0, 0.0]])
    >>> kernel = gaussian_kernel(1.0, dt=1.0, nstd=1.0)
    >>> rates, time = kernel_rate(spikes, kernel, tlim=[0.0, 4.0], dt=1.0, pool=False)
    >>> rates.shape, time.shape
    ((1, 2), (2,))
    """
    if tlim is None:
        tlim = get_time_limits(spiketimes)

    binary, time = spiketimes_to_binary(spiketimes, tlim, dt)

    if pool:
        binary = binary.mean(axis=0)[np.newaxis, :]

    rates = convolve2d(binary, kernel[np.newaxis, :], "same")
    # Trim the edge bins where the kernel overlaps the boundary; slicing by
    # explicit end index keeps the full trace when the kernel is one bin wide.
    half = len(kernel) // 2
    rates = rates[:, half : rates.shape[1] - half]
    time = time[half : time.shape[0] - half]
    return rates * 1000.0, time


def sliding_counts(spiketimes, window, dt=1.0, tlim=None):
    r"""Count spikes inside a sliding window.

    Parameters
    ----------
    spiketimes:
        Canonical spike representation.
    window:
        Window width in ms.
    dt:
        Step size of the discretized binary representation in ms.
    tlim:
        Optional `[tmin, tmax]` interval in ms.

    Definition
    ----------
    With $W = \text{window} / dt$ bins and binned spike train `b_i[k]`, this function
    returns

    $$
    C_i[n] = \sum_{k = 0}^{W - 1} b_i[n + k]
    $$

    for each row `i`.

    <img src="spiketools_assets/sliding_counts_example.png"
         alt="Sliding counts example"
         onerror="this.onerror=null;this.src='../spiketools_assets/sliding_counts_example.png';" />

    Examples
    --------
    >>> spikes = np.array([[0.0, 2.0], [0.0, 0.0]])
    >>> counts, time = sliding_counts(spikes, window=2.0, dt=1.0, tlim=[0.0, 4.0])
    >>> counts.astype(int).tolist()
    [[1, 1, 1]]
    """
    if tlim is None:
        tlim = get_time_limits(spiketimes)
    binary, time = spiketimes_to_binary(spiketimes, dt=dt, tlim=tlim)

    kernel = np.ones((1, int(window // dt)))
    counts = convolve2d(binary, kernel, "valid")

    # Center the time axis on the 'valid' windows; slicing by explicit end
    # index keeps the full axis when the window is a single bin wide.
    dif = time.shape[0] - counts.shape[1]
    time = time[int(np.ceil(dif / 2)) : time.shape[0] - dif // 2]

    return counts, time


def rate_integral(rate, dt):
    r"""Integrate a rate trace in spikes/s to expected spike counts.

    Parameters
    ----------
    rate:
        One-dimensional rate trace in spikes/s.
    dt:
        Sampling interval in ms.

    Definition
    ----------
    The cumulative expected spike count is

    $$
    I[n] = \sum_{k \le n} r[k]\,dt / 1000
    $$

    which is the discrete-time version of integrating the rate over time.

    <img src="spiketools_assets/rate_integral_example.png"
         alt="Rate integral example"
         onerror="this.onerror=null;this.src='../spiketools_assets/rate_integral_example.png';" />

    Examples
    --------
    >>> rate_integral(np.array([500.0, 500.0]), dt=1.0).tolist()
    [0.5, 1.0]
    """
    return np.cumsum(rate / 1000.0) * dt