
# Digital filters: the basic logic behind…

December 3, 2014

When building your schematic you often run into a situation where an input of a module receives data periodically. This is obvious when the input is a stream, but a similar situation may occur even with green/ruby triggers, mouse events, MIDI, etc. Such a periodic stream of data is called a digital signal (and it will be called that from here on). In these cases you often need the input to be “smoother” and not change so rapidly, or the exact opposite: to extract only the changes in the signal. This is where digital filters come onto the scene.

To understand the basic logic behind filters, let’s create a model situation:

You have a knob which outputs a green signal and a module that operates on a stream and receives this knob as an input (a simple “gain” module, for this example). When you connect the knob to the module and turn it, you hear a zipper noise (green values change very slowly compared to the sample rate of the stream, so when the value is converted to a stream it changes in “steps”). You need to “smooth” the input from the knob.

**Finite Impulse Response Filter (FIR)**

One obvious way to do it is to create a module (or code block) with one input and one output, where the output is 50% of the current input value plus 50% of the previous input value. This makes the change between the current and previous value half as strong at the output: you are basically averaging the values over time.

Now, how do we analyze what the filter actually does? The most common way to analyze a filter is to analyze its impulse response. We feed the filter an input that is 1 on the first sample and 0 on all others (there is a very handy primitive in FlowStone that outputs an impulse) and plot the output of the filter on a graph (the impulse response). This particular filter will output 0.5, 0.5, 0, 0, 0, … (in this case it is pretty obvious why; you can do the math yourself very easily). As you can see, the impulse response becomes zero after the second sample: the impulse response is 2 samples long = **finite**. In fact, you can see it is identical to the coefficients of the filter.
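The averaging filter and the impulse-response measurement can be sketched in a few lines of Python (a generic sketch, not FlowStone code; the function name is my own):

```python
def fir_average(x):
    """2-tap FIR: y[n] = 0.5*x[n] + 0.5*x[n-1] (previous input assumed 0 at start)."""
    prev = 0.0
    y = []
    for sample in x:
        y.append(0.5 * sample + 0.5 * prev)
        prev = sample
    return y

# Measure the impulse response: feed 1, 0, 0, 0, ...
impulse = [1.0, 0.0, 0.0, 0.0]
print(fir_average(impulse))  # [0.5, 0.5, 0.0, 0.0]
```

The output matches the coefficients (0.5, 0.5) and then stays at zero, which is exactly the “finite” property described above.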

This is a very handy property of FIR filters: you can measure the impulse response of a device you want to mimic and feed that impulse response into an FIR filter. The FIR filter will then simulate the device, as long as the device performs only static filtering and non-modulating delays. This is commonly used to implement guitar cabinet simulation and reverb simulation. An FIR filter is also called a convolver, because it performs convolution = it directly applies an impulse response to a signal.
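Applying a measured impulse response is just a convolution; NumPy’s `convolve` does the same work as the FIR structure (a sketch; the impulse response here is invented for illustration, not measured from any real device):

```python
import numpy as np

measured_ir = np.array([0.5, 0.3, 0.1])   # hypothetical measured impulse response
signal = np.array([1.0, 0.0, 0.0, 2.0, 0.0, 0.0])

# Convolution = run the signal through an FIR filter whose
# coefficients are the impulse response.
output = np.convolve(signal, measured_ir)
print(output)  # a scaled copy of the IR appears at each input impulse
```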

Another analysis we can do is to convert the time-domain impulse response into the frequency domain using the Discrete Fourier Transform (DFT, mainly implemented via the FFT algorithm). This shows how the filter affects the frequencies in the spectrum (how it changes amplitudes and phases). For this particular filter we can see that it does not affect the low frequencies (especially DC: holding a constant value = a frequency of zero Hz) and reduces the highest frequencies to zero.
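You can verify this numerically: the DFT of the impulse response [0.5, 0.5] (zero-padded to a convenient length) has magnitude 1 at DC and 0 at the Nyquist frequency:

```python
import numpy as np

ir = np.zeros(64)
ir[0] = 0.5
ir[1] = 0.5                 # impulse response of the 2-tap averager

magnitude = np.abs(np.fft.rfft(ir))
print(magnitude[0])         # gain at DC: 1.0 (constant values pass unchanged)
print(magnitude[-1])        # gain at Nyquist: 0.0 (fastest changes removed)
```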

In fact, you can use the inverse Fourier transform to create an impulse response with any desired amplitude/phase response (including linear phase) and feed it to an FIR filter. How to do that is beyond this article, but there are several important properties to mention. First of all, frequencies in the DFT are spaced linearly, whereas humans intuitively work on a logarithmic scale. FIR filters therefore have razor-sharp precision in the high frequencies, but very poor resolution in the low frequencies. To alter low-frequency content more precisely you need to increase the length of the impulse response, and the CPU cost of an FIR filter rises rapidly: for a 1000-sample IR the filter has to do 1000 multiplications and 999 additions per output sample, plus store the previous inputs. Even though there are FIR implementations that reduce the CPU load significantly (frequency-domain convolvers), FIR filters are still the most CPU-heavy filters out there.

**Summary:**

pros:

- simple to design
- very flexible amplitude/phase response (suitable for device modeling and linear phase filtering)
- sharp cutoffs possible in high frequencies

cons:

- CPU heavy
- low precision of amplitude response in low frequencies
- very hard to implement modulation/morphing in time

**Infinite Impulse Response Filter (IIR)**

Another obvious and similarly simple way to “smooth” the knob output is to make the output 50% of the current input plus 50% of the previous *output*. This results in a similar reduction of fast changes and similarly preserves held values, but this type of topology has some interesting properties. FIR filters used only inputs to calculate the output: a feed-forward topology. This filter also uses the output, so it introduces feedback.
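The feedback version can be sketched the same way (again generic Python, not FlowStone code):

```python
def iir_smooth(x):
    """One-pole IIR: y[n] = 0.5*x[n] + 0.5*y[n-1] (previous output assumed 0 at start)."""
    prev_out = 0.0
    y = []
    for sample in x:
        prev_out = 0.5 * sample + 0.5 * prev_out
        y.append(prev_out)
    return y

# Smoothing a "stepped" knob value: the output glides toward the new value
print(iir_smooth([1.0, 1.0, 1.0, 1.0]))  # [0.5, 0.75, 0.875, 0.9375]
```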

Now let’s do the same analysis as we did with the FIR filter: measure the impulse response and the DFT of that impulse response. Clearly, the impulse response goes like this: 0.5, 0.25, 0.125, 0.0625, … The values get smaller and smaller, but never really reach zero: the impulse response is **infinite**. That’s in theory, though; in reality the values get smaller and smaller until they are rounded to zero and the impulse response ends. Similarly, in real-life IIR filters the impulse response decays and blends into the noise to the degree that it disappears in it. Also notice that unlike the FIR filter, where the IR was identical to the coefficients, here the relationship is much more complex.
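Feeding an impulse into the same one-pole filter shows the decaying response (the halving per sample follows from the 0.5 feedback coefficient):

```python
def iir_smooth(x):
    """One-pole IIR: y[n] = 0.5*x[n] + 0.5*y[n-1]."""
    prev_out = 0.0
    y = []
    for sample in x:
        prev_out = 0.5 * sample + 0.5 * prev_out
        y.append(prev_out)
    return y

impulse = [1.0, 0.0, 0.0, 0.0]
print(iir_smooth(impulse))  # [0.5, 0.25, 0.125, 0.0625] - halving, never exactly zero
```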

IIR filters are superior to FIR filters in some situations. First of all, because their impulse response is infinite, it is not hard to guess that even simple IIR filters can affect low-frequency content with very high precision (frequency-wise). They are, however, much more sensitive to rounding errors, and not all combinations of coefficients work (filters with 10 or more coefficients rarely work). Imagine, for example, that the output were 200% of the previous output: the value would double each sample until it reached clipping level (potentially damaging the speakers and the ears of a listener) or reached “infinity”, at which point all the math breaks down. You definitely need to be more careful when designing them. Also, because their impulse response decays toward zero, they are very likely to introduce denormals: always add some denormal protection to the filter design.
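A quick sketch of both hazards: a feedback coefficient above 1 makes the filter blow up, and adding a tiny constant offset into the feedback path is one common form of denormal protection (an assumption on my part; it is not the only technique):

```python
def unstable(x):
    """y[n] = x[n] + 2.0*y[n-1]: feedback gain > 1, so the output explodes."""
    prev = 0.0
    y = []
    for s in x:
        prev = s + 2.0 * prev
        y.append(prev)
    return y

print(unstable([1.0, 0.0, 0.0, 0.0]))  # [1.0, 2.0, 4.0, 8.0] - doubling forever

DENORMAL_GUARD = 1e-18  # tiny offset, far below audibility

def iir_smooth_protected(x):
    """Stable one-pole smoother with a simple anti-denormal offset."""
    prev = 0.0
    y = []
    for s in x:
        prev = 0.5 * s + 0.5 * prev + DENORMAL_GUARD
        y.append(prev)
    return y
```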

The big advantage of an IIR filter is that you need only a handful of coefficients to create complex amplitude/phase responses. It is common practice to chain simple filters to implement complicated responses. Calculating the right coefficients for an IIR filter is unfortunately high-end math (Z-transform, least-squares methods, evolutionary algorithms, … just to mention a few possible ways). As a designer you probably don’t want to become a mathematician just to be able to finish your project. Fortunately, mathematicians have done the work for you and published many of the most common filter-response types. Many of these have been implemented in FlowStone too (the RBJ biquad filters, for example).
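As an example of such a published recipe, here is the low-pass case from RBJ’s Audio-EQ-Cookbook, sketched in Python (the sample rate, cutoff, and Q values are arbitrary illustrations):

```python
import math

def rbj_lowpass(fs, f0, q):
    """Low-pass biquad coefficients per the RBJ Audio-EQ-Cookbook."""
    w0 = 2.0 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2.0 * q)
    cos_w0 = math.cos(w0)
    b0 = (1.0 - cos_w0) / 2.0
    b1 = 1.0 - cos_w0
    b2 = (1.0 - cos_w0) / 2.0
    a0 = 1.0 + alpha
    a1 = -2.0 * cos_w0
    a2 = 1.0 - alpha
    # Normalize by a0 so the filter runs as:
    # y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2] - a1*y[n-1] - a2*y[n-2]
    return [c / a0 for c in (b0, b1, b2, a1, a2)]

b0, b1, b2, a1, a2 = rbj_lowpass(44100.0, 1000.0, 0.7071)
print(b0 + b1 + b2, 1.0 + a1 + a2)  # equal sums = unity gain at DC
```

Five coefficients are enough to define the whole response, which is exactly the “handful of coefficients” advantage described above.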

**Summary:**

pros:

- lighter on CPU
- big variety of responses, even with simple filters
- easy to “parametrize” and modulate
- superior resolution in low frequencies
- many “ready to go” examples

cons:

- risk of instability
- high probability of denormals
- require complex mathematics to develop “from scratch”
- more sensitive to rounding errors


## 11 Comments

Thanks KG, this is a nice, easy-to-understand foundation. I like how you have included a primitive version that will make it easier to understand for the more visual people.

Thanks Exo. To beginners filters might seem like black magic. I wanted to show that there is actually a basic every day logic behind it (averaging for example). I’d also like to encourage people to contribute to the blog by making articles about DFT and Z-transform, to expand the basis of knowledge around here. I’m already working on a frequency-domain convolver and used least squares method and evolutionary algorithm in filter design several times already, so examples/extensions might come soon.

Yes the way I first understood how filters worked was to consider that it is just averaging/smoothing values over time, so starting with a knob example makes sense.

I have never gone beyond the basics with filters. As you say, writing one from scratch involves higher math, which is currently beyond me. Examples of how to calculate the coefficients would be most welcome.

Well, there is RBJ’s cookbook for all basic filter types (2nd order IIR) at http://www.musicdsp.org/files/Audio-EQ-Cookbook.txt.

The cookbook is an example of what I meant on the dsp forums. I read the first 6 lines and get an overload of terms: bilinear transform, blt frequency warping, frequency relocation, bandwidth readjustment, compressed bandwidth

They are just mentioned, explaining nothing, as if it would be the most normal thing in the world. That’s where I normally have to give up.

In cases like the RBJ cookbook, where I don’t understand 99% of the text, I opt to copy-paste the code, test that it works and does what I expect, and call it a day. The thing is based on the Z-transform, which is pretty much high-end math taught at math-oriented faculties at universities. It takes time and money to learn it from scratch (you mostly pay people to explain and teach you this kind of stuff, or even pay them to do the stuff for you).

Here and there you can find presentations and publications that explain the stuff from ground I’m familiar with, but learning to walk before you run is a frustrating, time-and-effort-consuming process, especially without the guidance of a (paid) expert. That’s simply how the world works (and always has): money = power = knowledge = time.

Yeah, it is a cookbook; it provides recipes for how to calculate filter coefficients. You need not bother to understand it, just use it.

But if I just use it, I will never learn the relationships. I mean, it doesn’t help me in designing my own filter.

A question about the example used in this article. I learned that linearity basically means that changes in the output signal reflect changes in the input signal. Does that mean that by using an impulse and looking at results like “0.5, 0.25, 0.125” etc., never reaching 0 (while the input signal changes from 1 to 0), that filter is a non-linear one?

No… a linear system means that it has the 3 following properties:

1. The linear system is not volume dependent: it doesn’t matter if you boost/cut the signal before or after the system, you get the same results. Distortion is nonlinear, because the amount of harmonics is dependent on the volume of the input.

2. It is stable in time: it doesn’t matter if you add delay before or after the system, the result will be the same. A flanger is non-linear because it has parameters that modulate in time = you get different results depending on how you time the input.

3. It is additive: if you process two signals separately and then add them, you get the same result as when you first add them and then process them. Auto-tune is not linear, because although it is time stable and volume independent, it works only on individual monophonic voices: you get a different result if you first sum the tracks and then process them.

Linear systems have an impulse response and a certain phase/frequency response. Both IIR and FIR filters are linear, because they have all the above-mentioned properties. When a system is linear, that doesn’t really say anything about how the output should look depending on the input, as long as it follows those 3 properties.
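Properties 1 and 3 can be checked numerically on the averaging filter from the article (a sketch; any linear filter would pass these checks):

```python
def fir_average(x):
    """2-tap FIR averager: y[n] = 0.5*x[n] + 0.5*x[n-1]."""
    prev = 0.0
    y = []
    for s in x:
        y.append(0.5 * s + 0.5 * prev)
        prev = s
    return y

a = [1.0, -2.0, 3.0, 0.5]
b = [0.25, 1.0, -1.0, 2.0]

# Property 1 (volume independence): scaling before = scaling after
scaled_before = fir_average([3.0 * s for s in a])
scaled_after = [3.0 * s for s in fir_average(a)]
print(scaled_before == scaled_after)  # True

# Property 3 (additivity): sum then filter = filter then sum
sum_then_filter = fir_average([x + y for x, y in zip(a, b)])
filter_then_sum = [x + y for x, y in zip(fir_average(a), fir_average(b))]
print(sum_then_filter == filter_then_sum)  # True
```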

Once again, thank you so much! I think I get it now. It was just that I mixed up the linearity of the system with the linearity of the function. In this case the function was not linear (0.5, 0.25, 0.125, etc.), but that doesn’t matter as long as the system satisfies the points for system linearity, right?

Exactly – it is a different type of linearity. The system is linear in the sense that for a frequency “f” it multiplies the frequency’s magnitude by a factor A (so for any magnitude “m” the output will be “A*m” = a linear function; the factor A may be different for each frequency, and you can plot them into the magnitude response). It also introduces some delay ~ phase shift, also per frequency.

The shape of the impulse response has nothing to do with linearity in this context. In fact, the impulse response will almost always be some sort of exponential decay (in the case of IIR filters).

A linear-phase filter is another case (another type of linearity): the phase plot is a line (the higher the frequency, the greater the phase shift) = it delays all frequencies by the same amount.

A zero-phase filter is a (theoretical) filter that doesn’t delay any frequency at all. But that is impossible, because its impulse response would have to start “ringing” before the input arrives (= look into the future). For offline processing it is possible, by using a linear-phase filter (which delays all frequencies by the same amount) and latency compensation (shifting the audio left after/before the filter is applied).