
Problem 15: DFT and multiplication of two polynomials of degree ‘n’

The Fourier transform is a well-known mathematical transform that changes a given function from its space/time domain representation to a frequency (\(\omega\)) domain representation. In the continuous case, the transformation is an integral of a function \(f(x)\) times \(e^{-i\omega x}\) over the entire real axis. The discrete counterpart of the Fourier transform therefore naturally replaces the integral with a summation.
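For reference, the continuous transform is

\[\hat{f}(\omega)=\int_{-\infty}^{\infty} f(x)\, e^{-i \omega x}\, dx\]

and sampling \(f\) at finitely many points turns this integral into a finite sum over the samples.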

In this article, our objective is to draw a natural connection between the solution to the computational problem of multiplying two polynomials of degree ‘n-1’ (say \(A(x)\) and \(B(x)\)) in time complexity \(O(n \log n)\) and the discrete Fourier transform.

Consider a polynomial of degree ‘n-1’ given below:

\[A(x)=a_{0} x^{0}+a_{1} x^{1}+\cdots+a_{n-1} x^{n-1}\]

We need \(n\) points to determine all the \(n\) coefficients of a polynomial function that passes through those points. The product \(A(x) \cdot B(x)\) has degree \(2n - 2\), so it is determined by its values at \(2n - 1\) points. Therefore, we could find the product by evaluating each of the two polynomials at the same \(2n - 1\) points and multiplying the values pointwise; it is easy to see that this pointwise multiplication step takes only \(O(n)\) time.
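As a concrete example, take \(A(x) = 1 + x\) and \(B(x) = 1 + 2x\). The product has degree 2, so three points suffice. Evaluating at \(x = 0, 1, 2\) gives \(A: (1, 2, 3)\) and \(B: (1, 3, 5)\); the pointwise products \((1, 6, 15)\) are exactly the values of \(C(x) = (1+x)(1+2x) = 1 + 3x + 2x^{2}\) at those points, from which the three coefficients can be interpolated.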

The computationally expensive part comes here - we need to evaluate the polynomials at the \(2n - 1\) points and then recover the \(2n - 1\) coefficients from those values in such a way that the overall time complexity remains \(O(n \log n)\). This calls for a fairly common concept involved in developing algorithms - “Divide and Conquer”.

Let us define the DFT of the polynomial \(A(x)\) (of degree ‘n-1’) as the vector of values of \(A(x)\) at the \(n\)-th roots of unity. Writing \(\omega_{n} = e^{\frac{2\pi i}{n}}\), we denote the \(k\)-th root as \(\omega_{n,k} = \omega_{n}^{k}\). The inverse DFT simply recovers the coefficients from the values of the polynomial.
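In symbols, the \(k\)-th component of the DFT is

\[\operatorname{DFT}(A)_{k}=A\left(\omega_{n, k}\right)=\sum_{j=0}^{n-1} a_{j}\, \omega_{n}^{jk}, \qquad k=0,1, \ldots, n-1\]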

We can now set up the “Divide” part of the divide and conquer as follows -

Define two derived polynomials of degree \(\frac{n}{2} - 1\) (assuming, for convenience, that \(n\) is a power of 2 so that it can be halved repeatedly) -

\[\begin{array}{l} A_{0}(x)=a_{0} x^{0}+a_{2} x^{1}+\cdots+a_{n-2} x^{\frac{n}{2}-1} \\ A_{1}(x)=a_{1} x^{0}+a_{3} x^{1}+\cdots+a_{n-1} x^{\frac{n}{2}-1} \end{array}\]

Then, one may note that: \(A(x)=A_{0}\left(x^{2}\right)+x A_{1}\left(x^{2}\right)\)
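As a quick sanity check of this identity, here is a minimal Python sketch (the polynomial and the evaluation point are arbitrary choices for illustration):

```python
# Check the even/odd split A(x) = A0(x^2) + x * A1(x^2).
# Coefficients are listed in increasing order of power.
a = [5, -1, 3, 2]            # A(x) = 5 - x + 3x^2 + 2x^3
a0, a1 = a[0::2], a[1::2]    # even-index and odd-index coefficients

def evaluate(coeffs, x):
    # Horner's rule: evaluate the polynomial at the point x.
    result = 0
    for c in reversed(coeffs):
        result = result * x + c
    return result

x = 1.7
assert abs(evaluate(a, x) - (evaluate(a0, x**2) + x * evaluate(a1, x**2))) < 1e-9
```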

Let the time complexity of obtaining the DFT of a polynomial of degree ‘n-1’ (with \(n\) coefficients) be \(T(n)\). Since each of the polynomials \(A_{0}(x)\) and \(A_{1}(x)\) has \(\frac{n}{2}\) coefficients, the time complexity of obtaining the DFT of each of them can be denoted as \(T(\frac{n}{2})\). The key observation is that the squares of the \(n\)-th roots of unity are exactly the \(\frac{n}{2}\)-th roots of unity (each appearing twice), so \(A_{0}\) and \(A_{1}\) only need to be evaluated at \(\frac{n}{2}\) points. In addition to this, all the \(n\) values of \(A(x)\) at the \(n\) points can be combined from the recursively calculated values of \(A_{0}(x)\) and \(A_{1}(x)\) in \(O(n)\) time.

Therefore, a recurrence relation can be established for obtaining the DFT of the polynomial \(A(x)\) having degree ‘n-1’ : \(T(n) = 2T\left(\frac{n}{2}\right) + O(n)\), which solves to \(T(n) = O(n \log n)\).
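To make the recursion concrete, here is a minimal sketch of the recursive DFT (better known as the Fast Fourier Transform) in Python; the function name and the use of the `cmath` module are incidental choices, and the number of coefficients is assumed to be a power of 2:

```python
import cmath

def fft(a):
    """Return the values of the polynomial with coefficient list `a`
    at the n-th roots of unity (n = len(a) must be a power of 2)."""
    n = len(a)
    if n == 1:
        return a[:]
    # Divide: recurse on even- and odd-indexed coefficients.
    y0 = fft(a[0::2])   # DFT of A0 at the (n/2)-th roots of unity
    y1 = fft(a[1::2])   # DFT of A1 at the (n/2)-th roots of unity
    # Conquer: combine via A(w^k) = A0(w^{2k}) + w^k * A1(w^{2k}),
    # using w^{k + n/2} = -w^k to fill the second half in the same pass.
    y = [0] * n
    for k in range(n // 2):
        w = cmath.exp(2j * cmath.pi * k / n)   # omega_n^k
        y[k] = y0[k] + w * y1[k]
        y[k + n // 2] = y0[k] - w * y1[k]
    return y
```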

Let \(C(x) = A(x) \cdot B(x)\). We have shown that the DFT of \(C(x)\) can be calculated from the coefficients of \(A(x)\) and \(B(x)\) in \(O(n \log n)\) time. (How? Evaluate both \(A(x)\) and \(B(x)\) at the \(2n\)-th roots of unity, each in \(O(n \log n)\), and multiply the values pointwise in \(O(n)\).)

The inverse DFT turns out to be strikingly similar to the DFT. To make this clear, let us say that we have the DFT of \(A(x)\) of degree ‘n-1’; that is, we know the values of the polynomial at the \(n\)-th roots of unity.

In matrix form we may write this as -

\[Y = V A\]

where \(Y\) is the vector of the values, \(V\) is the Vandermonde matrix whose entries are powers of the \(n\)-th roots of unity, and \(A\) is the vector of coefficients of \(A(x)\). Note that we know \(Y\) and \(V\). We can invert \(V\) to find \(A\) :

\[A = V^{-1} Y\]
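Explicitly, with \(\omega_{n} = e^{\frac{2\pi i}{n}}\), the \((j,k)\)-th entry of \(V\) is \(\omega_{n}^{jk}\):

\[V=\begin{pmatrix} 1 & 1 & 1 & \cdots & 1 \\ 1 & \omega_{n} & \omega_{n}^{2} & \cdots & \omega_{n}^{n-1} \\ 1 & \omega_{n}^{2} & \omega_{n}^{4} & \cdots & \omega_{n}^{2(n-1)} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & \omega_{n}^{n-1} & \omega_{n}^{2(n-1)} & \cdots & \omega_{n}^{(n-1)^{2}} \end{pmatrix}\]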

We could have chosen any set of \(n\) points to represent \(A(x)\), but using the \(n\)-th roots of unity saves us a lot of computational load. This is because \(V^{-1}\) in this case is simply \(\frac{\bar{V}}{n}\). The proof is presented below -

The \((i,j)\)-th element of \(V\) is \(\omega_{n}^{ij}\). Its conjugate is \(\overline{\omega_{n}^{ij}} = \omega_{n}^{-ij}\) (recall that \(\omega_{n,k} = \omega_{n}^{k}\) and \(|\omega_{n}| = 1\)). To show that \(\frac{\bar{V}}{n}\) is the inverse, we can simply carry out the matrix multiplication.

Let the product of the matrices be \(P\); then the \((i,j)\)-th element of \(P\) is given by -

\[P_{i, j}=\frac{1}{n}\sum_{k=0}^{n-1} \omega_{n}^{ik}\, \omega_{n}^{-kj}=\frac{1}{n}\sum_{k=0}^{n-1} \omega_{n}^{k(i-j)}\]

Clearly, \(P_{i,i} = 1\) (each of the \(n\) summands is \(\omega_{n}^{0} = 1\)), and \(P_{i,j}\) with \(i \neq j\) can be shown to be equal to zero by summing up the resulting geometric progression.
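Explicitly, for \(i \neq j\), summing the geometric progression and using \(\omega_{n}^{n} = 1\) gives

\[\sum_{k=0}^{n-1} \omega_{n}^{k(i-j)}=\frac{\omega_{n}^{n(i-j)}-1}{\omega_{n}^{(i-j)}-1}=\frac{1-1}{\omega_{n}^{(i-j)}-1}=0\]

where the denominator is non-zero since \(0 < |i - j| < n\).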

Therefore, \(A = \frac{\bar{V}}{n} Y\), and this effectively means that the coefficients can be obtained by performing a DFT (with \(\omega_{n}\) replaced by its conjugate \(\omega_{n}^{-1}\)) on the vector \(Y\), followed by a division by \(n\). We already saw how a DFT can be performed in \(O(n \log n)\). Therefore, the whole task of multiplying two polynomials can be performed in \(O(n \log n)\), which is significantly faster than the naive algorithm that takes \(O(n^{2})\) time.
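Putting the pieces together, here is a minimal Python sketch of the full multiplication pipeline, building on the `fft` function above. The conjugation trick in `inverse_fft` is exactly the \(\frac{\bar{V}}{n}\) identity; the function names are incidental choices, and the final rounding assumes integer coefficients:

```python
def inverse_fft(y):
    # A = (conj(V) / n) Y: conjugate, run a forward FFT, conjugate back, divide by n.
    n = len(y)
    conjugated = fft([v.conjugate() for v in y])
    return [v.conjugate() / n for v in conjugated]

def multiply(a, b):
    """Multiply two coefficient lists (lowest power first) in O(n log n) time."""
    n = 1
    while n < len(a) + len(b) - 1:
        n *= 2                                # pad to a power of 2 >= 2n - 1 points
    fa = fft(a + [0] * (n - len(a)))          # DFT of A
    fb = fft(b + [0] * (n - len(b)))          # DFT of B
    fc = [u * v for u, v in zip(fa, fb)]      # pointwise product = DFT of C
    c = inverse_fft(fc)
    # Discard the padding and the tiny imaginary round-off error.
    return [round(v.real) for v in c[:len(a) + len(b) - 1]]

print(multiply([1, 1], [1, 2]))               # (1 + x)(1 + 2x) -> [1, 3, 2]
```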
