Understanding Fourier Series


Comparing functions and vectors:

vector \vec{v}: finite dimensional
function f(x): infinite dimensional

A vector can be written in the following different ways,
\vec{V} = V_x \hat{x} + V_y \hat{y} + V_z \hat{z}
\hskip .5cm = (\vec{V} \cdot \hat{x}) \hat{x} + (\vec{V} \cdot \hat{y}) \hat{y} + (\vec{V} \cdot \hat{z}) \hat{z}
If the decomposition is instead along an orthogonal frame formed by the vectors \vec{a},\, \vec{b} and \vec{c}, then the expression would be,
\vec{V} = (\vec{V} \cdot \hat{a}) \hat{a} + (\vec{V} \cdot \hat{b}) \hat{b} + (\vec{V} \cdot \hat{c}) \hat{c}
\hskip .5cm = \frac{\vec{V} \cdot \vec{a}}{\vec{a}\cdot\vec{a}} \vec{a} + \frac{\vec{V} \cdot \vec{b}}{\vec{b}\cdot\vec{b}} \vec{b} + \frac{\vec{V} \cdot \vec{c}}{\vec{c}\cdot\vec{c}} \vec{c}

In general, the dot product of two n-dimensional vectors \vec{V} = (V_1, V_2,...,V_n) and \vec{W} = (W_1,W_2,...,W_n) can be written as,
\vec{V} \cdot \vec{W} = \sum_{i=1}^n V_i W_i.

It is useful to think of a real function f(x) over an interval
[a,b] as a vector with infinitely many components. Here the argument serves
as an index and the function value as the vector component. Analogous to the vector dot product, the dot product between two functions f and g defined over the same interval can be written as,
(f,g) = \int_a^b f(x) g(x) dx.

Using this definition of the dot product, one can show that the following functions
are mutually orthogonal (all pairwise dot products are zero) on the interval
[0, 2\pi]:
1, \sin{x}, \sin{2x}, \sin{3x}, ...,\cos{x}, \cos{2x}, \cos{3x},...
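This orthogonality is easy to check numerically. Below is a small sketch (the grid size and the particular function pairs are my choices, not from the text) that approximates the integral (f,g) = \int_0^{2\pi} f(x) g(x) dx on a uniform grid:

```python
import numpy as np

N = 100000
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
dx = 2.0 * np.pi / N

def inner(f_vals, g_vals):
    # Riemann-sum approximation of the integral over [0, 2*pi];
    # very accurate here because the integrands are periodic.
    return float(np.sum(f_vals * g_vals) * dx)

# Dot products between *different* basis functions vanish...
print(inner(np.sin(2 * x), np.sin(3 * x)))
print(inner(np.sin(x), np.cos(x)))
print(inner(np.ones(N), np.cos(2 * x)))
# ...while a function dotted with itself does not:
print(inner(np.sin(x), np.sin(x)))  # approximately pi
```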

Thus, in parallel with writing a vector in terms of its components, one can write any function that is finite, smooth and continuous on [0, 2\pi] (I am not trying to be mathematically precise; the aim is to give an intuitive feel) in terms of
the above basis functions in the same manner,
f(x) = \frac{(f(x),1)}{(1,1)} 1 + \frac{(f(x),\sin(x))}{(\sin(x), \sin(x))} \sin(x) + \frac{(f(x),\sin(2x))}{(\sin(2x), \sin(2x))} \sin(2x) + ...
\hskip.2cm + \frac{(f(x),\cos(x))}{(\cos(x), \cos(x))} \cos(x) + \frac{(f(x),\cos(2x))}{(\cos(2x), \cos(2x))} \cos(2x) +....
Notice the similarity between the expression of a function in terms of its components and that of a vector in terms of its components. Hence the decomposition of a function into its Fourier components is quite akin to the decomposition of a vector into its Cartesian components.
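The decomposition above can be sketched in a few lines of code: project f onto each basis function with the inner product, and add the pieces back up. This is my own illustrative sketch (the test function sin^2 x and all names are mine); sin^2 x = 1/2 - cos(2x)/2, so a few terms reconstruct it essentially exactly.

```python
import numpy as np

N = 4000
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
dx = 2.0 * np.pi / N

def inner(u, v):
    # Riemann-sum approximation of (u, v) = integral of u(x) v(x) dx
    return float(np.sum(u * v) * dx)

def fourier_approx(f_vals, n_terms):
    """Partial sum: (f,1)/(1,1) * 1 + sum over k of the sin and cos projections."""
    ones = np.ones_like(x)
    approx = inner(f_vals, ones) / inner(ones, ones) * ones
    for k in range(1, n_terms + 1):
        s, c = np.sin(k * x), np.cos(k * x)
        approx += inner(f_vals, s) / inner(s, s) * s
        approx += inner(f_vals, c) / inner(c, c) * c
    return approx

f_vals = np.sin(x) ** 2          # sin^2 x = 1/2 - cos(2x)/2
approx = fourier_approx(f_vals, 3)
print(np.max(np.abs(f_vals - approx)))  # essentially zero
```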

My clan

One amazing thing I find about fellow academics is the ease with which we connect.

Even when I meet them after decades, what we share goes beyond the technical. There is an openness which allows us to trust each other, joke about ourselves, gripe about students (yes, most of the people I know do that, while admitting that they themselves were the same) and discuss health, family and society.

Regression: how to model

This year my linear algebra class is using regression to model real-world data. The data ranges from climate change to bank interest rates to chemical reactions.

A standard question is: given the data, how do we choose the model? Most of the data is in two variables, say (x, y). Here is the usual process.

First, plot the data using a scatter plot. This will give you an idea of whether to expect a linear or a nonlinear relationship between x and y.

Consider whether smoothing the data will help. If your graph looks like a noisy line or a noisy quadratic, a rolling average will make it smoother.
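A rolling average can be written in one line with a convolution. Here is a minimal sketch (the window size, noise level and all names are my choices for illustration):

```python
import numpy as np

def rolling_average(y, window=5):
    """Replace each value by the mean of a window of neighbours.

    With mode="valid" the output is shorter than y by window - 1 points.
    """
    kernel = np.ones(window) / window
    return np.convolve(y, kernel, mode="valid")

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 101)
y = x**2 + rng.normal(scale=5.0, size=x.size)  # a noisy quadratic
y_smooth = rolling_average(y, window=7)
x_smooth = x[3:-3]                             # trim x to match the valid window
print(y_smooth.size, x_smooth.size)            # both 95
```

Remember to trim (or pad) x the same way, so the smoothed points still line up with their x values.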

Decide on the model. Looking at the scatter plot, you should be able to tell whether the relationship between x and y is linear, quadratic, cubic and so on.

Write out your model, for example y = a + b x + c x^2. Each data point, when plugged into the model, gives you a linear equation in the parameters a, b, c. This collection of linear equations can be written as the matrix equation A \vec{x} = \vec{b} (note that here \vec{x} contains the parameters as its elements).

Use the normal equation A^TA \vec{x} = A^T \vec{b}, or equivalently \vec{x} = (A^T A)^{-1} A^T \vec{b}, to find the best-fit parameter values \vec{x}.
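The last two steps can be sketched as follows for the quadratic model y = a + b x + c x^2. This is my own illustrative sketch (the synthetic data and names are mine): each row of A is [1, x_i, x_i^2], and the normal equation is solved directly.

```python
import numpy as np

# Synthetic data from y = 2 - x + 0.5 x^2 plus a little noise.
rng = np.random.default_rng(1)
x = np.linspace(-3, 3, 50)
y = 2.0 - 1.0 * x + 0.5 * x**2 + rng.normal(scale=0.1, size=x.size)

# Each data point gives one linear equation in (a, b, c):
# one row of A is [1, x_i, x_i**2].
A = np.column_stack([np.ones_like(x), x, x**2])

# Solve the normal equation A^T A params = A^T y.
params = np.linalg.solve(A.T @ A, A.T @ y)
a, b, c = params
print(np.round(params, 2))  # close to the true values 2, -1, 0.5
```

(In practice `np.linalg.lstsq(A, y)` does the same job more stably than forming A^T A explicitly.)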

The easiest way to solve the above equations is to use MATLAB or Mathematica, which have built-in functions for matrix manipulation (most programming languages, like C or Python, also have corresponding libraries). However, writing the code yourself is better for gaining skills and strengthening your foundations.

Please note that even if you find splines easy to use for interpolation, regression is a better choice for modeling, as the resulting equation is simpler.

Remember that if the parameters are instead found by minimizing the squared magnitude of the error vector using calculus, one gets the same best-fit parameters. That method is known as “the least squares method”.

You may post your doubts below, and if you are at Ahmedabad University, then catch me after a class.