Sampling
What is Sampling?
Sampling is a way of representing a signal by something smaller than the signal itself.
We can do better than describing a signal by specifying the value of the dependent variable at every possible value of the independent variable. The idea is illustrated by the following examples, where x(t) is the dependent variable and t is the independent variable.
Suppose x(t) is a pure sinusoid, x(t) = A₀ sin(ω₀t + φ), with amplitude A₀, angular frequency ω₀ and phase constant φ. Knowledge of these three parameters suffices to describe x(t) completely: we can compute x(t) at any instant t from them alone, without having to tabulate its value at every t.
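As a quick illustrative sketch in Python (the parameter values below are invented for the demo), three numbers are enough to reproduce the whole sinusoid at any instant:

```python
import math

# A priori knowledge: x(t) is a pure sinusoid.
# These three parameters describe it completely (values are illustrative).
A0 = 2.0      # amplitude
w0 = 5.0      # angular frequency (rad/s)
phi = 0.3     # phase constant (rad)

def x(t):
    """Reconstruct x(t) at any instant t from just the three parameters."""
    return A0 * math.sin(w0 * t + phi)

# The signal's value at an arbitrary instant, computed from the parameters alone.
print(x(0.25))
```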
Consider another example given below:
Here x(t) is a polynomial in t of degree N, x(t) = a₀ + a₁t + a₂t² + ... + a_N t^N, and it is completely determined once we know the coefficients a₀, a₁, a₂, ..., a_N.
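Likewise for the polynomial, a short Python sketch (degree and coefficients chosen arbitrarily): once the coefficient list is known, x(t) can be evaluated anywhere, for instance with Horner's rule:

```python
# A priori knowledge: x(t) is a polynomial of degree N.
# The coefficient list [a0, a1, ..., aN] describes it completely (values illustrative).
coeffs = [1.0, -2.0, 0.5, 3.0]   # x(t) = 1 - 2t + 0.5 t^2 + 3 t^3

def x(t, a=coeffs):
    """Evaluate the polynomial at t using Horner's rule."""
    result = 0.0
    for c in reversed(a):
        result = result * t + c
    return result

print(x(2.0))   # 1 - 4 + 2 + 24 = 23.0
```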
Thus we observe that it was the a priori information we had that allowed us to represent these signals so compactly: in the first case we knew that x(t) is a pure sinusoid, and in the second that it is a polynomial of degree N.
Using the available a priori information to represent a signal economically is, then, one way of defining sampling.
A Common Approach for Signal Representation: The approach most often used to represent a signal economically is to record the values of the dependent variable at a set of properly chosen values of the independent variable, such that these 'tuples', together with the a priori information, can be used to reconstruct the signal completely.
Let us say we know that some signal x(t) is a pure sinusoid, described by the three quantities amplitude (A₀), angular frequency (ω₀) and phase constant (φ). Sampling at three values t₁, t₂ and t₃ of t gives the following three independent equations:

x(t₁) = A₀ sin(ω₀t₁ + φ)
x(t₂) = A₀ sin(ω₀t₂ + φ)
x(t₃) = A₀ sin(ω₀t₃ + φ)

From the observed values x(t₁), x(t₂) and x(t₃) of the signal at t₁, t₂ and t₃, the parameters A₀, ω₀ and φ can, in general, be determined.
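A minimal numeric sketch of this idea in Python (all values invented for the demo). To keep the equations linear, it assumes ω₀ is already known a priori and recovers only A₀ and φ from two samples, using the identity A₀ sin(ω₀t + φ) = (A₀ cos φ) sin(ω₀t) + (A₀ sin φ) cos(ω₀t); recovering ω₀ as well from a third sample requires solving a nonlinear system.

```python
import math

# True parameters, unknown to the "receiver" (invented for the demo).
A0_true, w0, phi_true = 2.0, 3.0, 0.5

def x(t):
    return A0_true * math.sin(w0 * t + phi_true)

# Two samples at chosen instants; w0 is assumed known a priori.
t1, t2 = 0.1, 0.4
x1, x2 = x(t1), x(t2)

# Write x(t) = p*sin(w0 t) + q*cos(w0 t), with p = A0 cos(phi), q = A0 sin(phi),
# and solve the resulting 2x2 linear system by Cramer's rule.
s1, c1 = math.sin(w0 * t1), math.cos(w0 * t1)
s2, c2 = math.sin(w0 * t2), math.cos(w0 * t2)
det = s1 * c2 - s2 * c1      # non-zero when w0*(t1 - t2) is not a multiple of pi
p = (x1 * c2 - x2 * c1) / det
q = (s1 * x2 - s2 * x1) / det

A0 = math.hypot(p, q)        # recovered amplitude
phi = math.atan2(q, p)       # recovered phase constant

print(A0, phi)
```

The same two samples would not suffice without the a priori knowledge of ω₀ and of the sinusoidal form itself.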
Consider another example. Let x(t) be a polynomial of degree N, x(t) = a₀ + a₁t + ... + a_N t^N, sampled at N + 1 distinct points t₀, t₁, ..., t_N. This gives the matrix equation

[ x(t₀)  ]   [ 1   t₀   t₀²  ...  t₀^N  ] [ a₀  ]
[ x(t₁)  ] = [ 1   t₁   t₁²  ...  t₁^N  ] [ a₁  ]
[  ...   ]   [ ...                 ...  ] [ ... ]
[ x(t_N) ]   [ 1   t_N  t_N² ...  t_N^N ] [ a_N ]

where the square (Vandermonde) matrix embodies the a priori information. This system can be solved for the coefficients, since the determinant of the Vandermonde matrix is non-zero so long as the sample points tᵢ are all distinct.
Thus, given the a priori information, the entire information about the signal is contained in its values at N + 1 distinct points. You have now seen two examples where a priori information and "samples" of a signal at certain values of the independent variable help us reconstruct the signal completely.
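The polynomial case can be checked numerically. The sketch below (pure Python; the degree and coefficients are invented for the demo) builds the Vandermonde system from N + 1 samples and solves it by Gaussian elimination to recover the coefficients:

```python
def solve(M, b):
    """Solve the linear system M a = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] for row in M]
    b = b[:]
    for k in range(n):
        pivot = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[pivot] = M[pivot], M[k]
        b[k], b[pivot] = b[pivot], b[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n):
                M[r][c] -= f * M[k][c]
            b[r] -= f * b[k]
    a = [0.0] * n
    for k in range(n - 1, -1, -1):
        a[k] = (b[k] - sum(M[k][c] * a[c] for c in range(k + 1, n))) / M[k][k]
    return a

# A priori information: x(t) is a cubic (N = 3); the true coefficients are "hidden".
true_coeffs = [1.0, 2.0, 3.0, 4.0]          # x(t) = 1 + 2t + 3t^2 + 4t^3
ts = [0.0, 1.0, 2.0, 3.0]                   # N + 1 = 4 distinct sample points
samples = [sum(a * t**i for i, a in enumerate(true_coeffs)) for t in ts]

# Vandermonde matrix: row i is [1, t_i, t_i^2, t_i^3].
V = [[t**i for i in range(len(true_coeffs))] for t in ts]
recovered = solve(V, samples)
print(recovered)   # close to [1.0, 2.0, 3.0, 4.0]
```

Because the sample points are distinct, the Vandermonde determinant is non-zero and the recovery is exact up to round-off.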
But if you have no a priori information, you can do no better than to represent the signal as it is.
Even knowing that a signal is continuous is a priori information. We can further speak of a relative measure of a priori information, by looking at the size of the set of signals in which the given signal is known to lie: the larger the set, the less a priori information we have. For example, knowing that the signal is sinusoidal is much more a priori information than knowing that it is continuous, since the set of sinusoids is much smaller than the set of continuous functions.
The main challenge in sampling and reconstruction is to make the best use of a priori information in order to represent a signal by its samples most economically.
In the next lecture, we focus on a special class of signals, those that are band-limited (this is the a priori information we shall have), and see how such signals can be reconstructed from their samples.
Conclusion:
From this lecture you have learnt:
Sampling is the economical representation of a signal using a priori information together with the signal's values at suitably chosen points.
A pure sinusoid is determined by three parameters (A₀, ω₀, φ), and a polynomial of degree N by its values at N + 1 distinct points.
The less a priori information available, the more data is needed; with none, the signal can only be represented as it is.