As we can see, $f_{xy}$ and $f_{yx}$ are the same. This is often the case.
+
Clairaut’s theorem
+
Clairaut’s theorem states that:
+
+
If $f_{xy}$ and $f_{yx}$ are continuous on a disk around a point, then $f_{xy} = f_{yx}$ at that point
+
+
We’ll save the actual proof for another day, but we can “explain” why this theorem holds:
+
Explanation
+
Suppose we have:
+$$
+f(x, y) = x^n y^m
+$$
+
The first- and second-order partial derivatives of this monomial are:
+
$$
+f_x(x, y) = n \cdot x^{n - 1} y^m
+$$
+
$$
+f_y(x, y) = m \cdot x^n y^{m - 1}
+$$
+
$$
+f_{xy}(x, y) = m \cdot n \cdot x^{n - 1} y^{m - 1}
+$$
+
$$
+f_{yx}(x, y) = n \cdot m \cdot x^{n - 1} y^{m - 1}
+$$
+
We see that the mixed partials agree for every monomial, and hence, by linearity, for every polynomial. Since well-behaved functions can be approximated arbitrarily well by polynomials (think Taylor series), it’s not surprising that Clairaut’s theorem holds.
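
As a quick sanity check, here is a minimal sketch using `sympy` (symbolic $n$, $m$, and the monomial from above) confirming that the two mixed partials agree:

```python
import sympy as sp

x, y = sp.symbols('x y')
n, m = sp.symbols('n m', positive=True)

f = x**n * y**m

f_xy = sp.diff(f, x, y)  # differentiate w.r.t. x first, then y
f_yx = sp.diff(f, y, x)  # differentiate w.r.t. y first, then x

print(sp.simplify(f_xy - f_yx))  # 0, i.e. f_xy == f_yx
```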
+
Remark
+
Suppose $f$ is a composition of $\sin$, $\cos$, polynomials, and the exponential function. For example:
+$$
+f(x, y) = e^{\cos(xy - 3\sin(xy))}
+$$
+
Since all of these building blocks are infinitely differentiable, so is any composition of them. This means that the partial derivatives of $f$, of any order, exist and are continuous.
+
This in turn means we can always apply Clairaut’s theorem to such functions: we never need to check ahead of time whether the mixed partials are continuous.
Now suppose we want $f_{xy}$ for a function of this type whose messy part depends only on $x$, say $f(x, y) = xy + e^{\cos(x - 3\sin(x))}$. Trying to first compute $f_x$ will prove to be quite tricky, but using Clairaut’s theorem this problem is trivial:
+$$
+f_y(x, y) = x
+$$
+
$$
+\boxed{f_{xy}(x, y) = f_{yx}(x, y) = 1}
+$$
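
We can also let a computer algebra system grind through the tricky direction for us; a minimal `sympy` sketch, using the example function above:

```python
import sympy as sp

x, y = sp.symbols('x y')

# x*y plus a messy term that depends on x only
f = x*y + sp.exp(sp.cos(x - 3*sp.sin(x)))

print(sp.diff(f, y))                  # x
print(sp.diff(f, y, x))               # 1
print(sp.simplify(sp.diff(f, x, y)))  # also 1, as Clairaut's theorem predicts
```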
+
Notation
+
There are a lot of different notations for partial derivatives.
+We’ll mainly use $f_x$ and $f_y$.
+
However, the most common notation is:
+$$
+f_x = \dfrac{\partial f}{\partial x}
+$$
+
$$
+f_y = \dfrac{\partial f}{\partial y}
+$$
+
In the case of higher-order partial derivatives:
+$$
+f_{xx} = \dfrac{\partial}{\partial x}\left(\dfrac{\partial f}{\partial x}\right) = \dfrac{\partial^2 f}{\partial x^2}
+$$
We can differentiate as many times as we want, with respect to whichever variables we want:
+$$
+f_{xxx}, f_{xxy}, f_{xyx}, \ldots
+$$
+
Clairaut’s theorem extends to these higher-order mixed partials as well: as long as they are continuous, the order of differentiation doesn’t matter, e.g. $f_{xxy} = f_{xyx} = f_{yxx}$.
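
A minimal `sympy` sketch (the example function here is made up) showing three orderings of the same third-order mixed partial agreeing:

```python
import sympy as sp

x, y = sp.symbols('x y')
f = sp.sin(x*y) + x**3 * y**2

# Three orderings of the same third-order mixed partial
f_xxy = sp.diff(f, x, x, y)
f_xyx = sp.diff(f, x, y, x)
f_yxx = sp.diff(f, y, x, x)

print(sp.simplify(f_xxy - f_xyx))  # 0
print(sp.simplify(f_xyx - f_yxx))  # 0
```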
+
Geometric interpretation
+
Let’s try to understand what derivatives of functions of two variables look like geometrically.
+
+
For one variable:
$f'(a)$ is the slope of the tangent line to the graph at $(a, f(a))$. We could also say that this is the rate of change of $f$ at the point $a$.
+
+
For two variables:
$f_x(a, b)$: fix $y = b$ and cut the graph with the vertical plane $y = b$. The intersection is a curve $C$, and $f_x(a, b)$ is the slope of the tangent line $T$ to $C$ at the point $(a, b, f(a, b))$.
+
We could also say:
+
+
The rate of change of $f$ in the direction of the variable we are differentiating with respect to.
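
To make this concrete, here is a minimal numerical sketch (the example function $f(x, y) = x^2 y$ and the point $(1, 2)$ are made up): holding $y = b$ fixed turns $f_x$ into an ordinary one-variable slope:

```python
import numpy as np

f = lambda x, y: x**2 * y  # example function; analytically f_x = 2xy

a, b = 1.0, 2.0
h = 1e-6

# Slope of the tangent to the trace curve z = f(x, b) at x = a
slope = (f(a + h, b) - f(a, b)) / h
print(slope)      # ~4.0
print(2 * a * b)  # analytic f_x(a, b) = 4.0
```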
+
+
Differentiability
+
For functions of one variable, the following statement holds:
+
+
$f$ is differentiable at $a$ $\Leftrightarrow$ $f$ has a derivative at $a$, meaning $f'(a)$ exists.
+
+
When dealing with several variables, this becomes trickier. Let’s look at the one-variable definition of the derivative and see what we can do.
The derivative is a limit over a small change in $x$, so let’s say:
+$$
+\Delta x = h
+$$
+
Let’s define the small change in the function value as:
+$$
+\Delta y := f(a + \Delta x) - f(a)
+$$
+
Then, the derivative is:
+$$
+f'(a) = \lim_{\Delta x \to 0} \dfrac{\Delta y}{\Delta x}
+$$
+
Let’s rearrange the terms:
+$$
+\lim_{\Delta x \to 0} \dfrac{\Delta y}{\Delta x} - f'(a) = 0
+$$
+
Let’s call this difference $\varepsilon$:
+$$
+\varepsilon = \dfrac{\Delta y}{\Delta x} - f'(a)
+$$
+
We can define this as a function of $\Delta x$:
+$$
+\varepsilon = \varepsilon(\Delta x)
+$$
+
Multiplying both sides by $\Delta x$:
+$$
+\varepsilon \Delta x = \Delta y - f'(a) \Delta x
+$$
+
Which finally means:
+$$
+\boxed{\Delta y = \varepsilon \Delta x + f'(a) \Delta x}
+$$
+
And as $\Delta x \to 0$, $\varepsilon \to 0$ as well.
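
A minimal numerical sketch of this (the example $f(x) = x^2$ is made up; for it, $\varepsilon(\Delta x) = \Delta x$ exactly):

```python
f = lambda x: x**2  # f'(x) = 2x
a, fprime_a = 1.0, 2.0

for dx in [1e-1, 1e-2, 1e-3, 1e-4]:
    dy = f(a + dx) - f(a)
    eps = dy / dx - fprime_a  # epsilon(dx) = dy/dx - f'(a)
    print(dx, eps)            # eps shrinks together with dx
```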
+
For functions of two variables, we instead define $\Delta z := f(a + \Delta x, b + \Delta y) - f(a, b)$, and a similar (longer) computation gives:
+$$
+\Delta z = \ldots = \boxed{f_x(a, b)\Delta x + f_y(a, b)\Delta y + \varepsilon_1 \Delta x + \varepsilon_2 \Delta y}
+$$
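
where $\varepsilon_1, \varepsilon_2 \to 0$ as $(\Delta x, \Delta y) \to (0, 0)$; this is precisely what it means for $f$ to be differentiable at $(a, b)$. A minimal numerical sketch (same made-up $f(x, y) = x^2 y$ as before) showing how small the non-linear remainder is:

```python
f = lambda x, y: x**2 * y  # f_x = 2xy, f_y = x**2
a, b = 1.0, 2.0
fx, fy = 2*a*b, a**2       # partials at (a, b)

dx, dy = 1e-3, 1e-3
dz = f(a + dx, b + dy) - f(a, b)  # actual change in z
linear = fx*dx + fy*dy            # linear part of the change

print(dz - linear)  # tiny remainder: eps_1*dx + eps_2*dy
```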
+
+
diff --git a/school/SSY081/index.html b/school/SSY081/index.html
index 6d76fde8..d10ad8b2 100644
--- a/school/SSY081/index.html
+++ b/school/SSY081/index.html
@@ -1,8 +1,8 @@
- Transforms, signals and systems: Part 1 - Introduction & signal operations | rezvan
-
+ Transforms, signals and systems: Part 1 - Signals | rezvan
+
@@ -261,22 +261,80 @@
-
Transforms, signals and systems: Part 1 - Introduction & signal operations
Aug 29, 2023
+
Transforms, signals and systems: Part 1 - Signals
Aug 29, 2023
Introduction
In this series we’ll cover what we mean by transforms, signals, and systems, how they relate, and how they are used in the real world.
-
Signals & systems
-
Let’s first define what signals and systems are.
+
Signals
+
In this part we’ll try to understand and classify signals, perform different signal operations, and lastly understand and use signal models.
+Let’s first define what a signal is:
Definition
A signal is a set of information or data: any physical quantity that varies over time, space, or any other variable or variables.
+
We will usually define signals with mathematical functions.
+
Signal classifications
+
There are different types of signals and representations of signals. Let’s list these:
+
+
+
Continuous VS Discrete (Time)
+
+
+
Continuous VS Discrete (Amplitude)
+
+
+
Periodic VS Aperiodic
+
+
+
Deterministic VS Stochastic
+
+
+
We’ll properly define each of these. Let’s start with the time representation:
+
Continuous VS Discrete (Time)
+
As we can see, the discrete representation consists of samples spaced by a time interval $T$.
+
+
+
Continuous VS Discrete (Amplitude)
+
+
As we can see, this is quantization: analog $\to$ digital.
+
Even, Odd & Periodic
+
Let’s also define what even, odd, and periodic functions are.
+
An even function is symmetrical about the vertical axis: $f(-t) = f(t)$.
+
+
An odd function is anti-symmetrical about the vertical axis: $f(-t) = -f(t)$.
+
+
A periodic function repeats itself, $f(t + T_0) = f(t)$, and has a (minimal) fundamental period $T_0$. This also means it has a fundamental frequency, $f_0 = \dfrac{1}{T_0}$.
+
We sometimes express the fundamental frequency as an angular frequency instead of in Hz, which means $\omega_0 = 2\pi f_0 = \dfrac{2\pi}{T_0}$.
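
For example, for $f(t) = \cos(3t)$:
$$
\omega_0 = 3, \quad T_0 = \dfrac{2\pi}{\omega_0} = \dfrac{2\pi}{3}, \quad f_0 = \dfrac{1}{T_0} = \dfrac{3}{2\pi} \approx 0.48 \ \text{Hz}
$$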
+
+
Deterministic VS Stochastic
+
These are quite easy to define.
-
A system is an entity that processes a signal or a set of signals and outputs a signal or set of signals.
+
Deterministic signal: Its physical description is known completely (mathematical or graphical).
+
+
+
Stochastic signal: Values are only known in probabilistic terms.
-
We usually define signals with mathematical functions.
Signal operations
-
We’ll cover signal operations for time-continuous signals.
+
Now that we have defined what signals are, what operations can we perform? Since signals are mathematical functions, we can perform a whole range of operations.
+
Let’s start in the time-continuous world. We’ll list all the operations we can perform.
+
+
+
Amplitude scaling (Gain)
+
+
+
DC (Offset)
+
+
+
Time scaling
+
+
+
Reflection (Time inversion)
+
+
+
Time shift
+
+
+
Let’s go through them all and define them.
Amplitude scaling
$$
f(t) \newline
@@ -318,8 +376,145 @@
Time shift
\Phi(t) = f(t \pm T)
$$
-
Summary
-
Will update/add more :]
+
Summary of operations
+
+
| Operation         | Continuous              |
| ----------------- | ----------------------- |
| DC                | $f(t) \to A + f(t)$     |
| Amplitude scaling | $f(t) \to Af(t)$        |
| Time scaling      | $f(t) \to f(at)$        |
| Reflection        | $f(t) \to f(-t)$        |
| Time shift        | $f(t) \to f(t \pm t_0)$ |
+
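As a minimal sketch (assuming `numpy`; the example signal and the constants are made up), the whole table translates directly into code:

```python
import numpy as np

f = lambda t: np.sin(t)  # any example signal

A, a, t0 = 2.0, 3.0, 1.0  # made-up constants

dc      = lambda t: A + f(t)   # DC offset
gain    = lambda t: A * f(t)   # amplitude scaling
scaled  = lambda t: f(a * t)   # time scaling (compression when a > 1)
mirror  = lambda t: f(-t)      # reflection (time inversion)
shifted = lambda t: f(t - t0)  # time shift (delay by t0)

t = np.linspace(-2, 2, 5)
print(shifted(t))  # sin(t - 1) evaluated on the grid
```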
Signal models
+
We’ll now cover how we can (usually) model these signals. We’ll look at three functions which are commonly used to model signals.
+
These are:
+
+
+
Unit step function
+
+
+
Unit impulse function (also called the Dirac delta function)
+
+
+
Exponential function
+
+
+
Unit step function
+
The unit step function is defined as:
+
$$
+u(t) =
+\begin{cases}
+1 & t \geq 0 \newline
+0 & t < 0
+\end{cases}
+$$
+
+
This means we can represent rectangular signals as linear combinations of shifted unit step functions. For example:
+
$$
+f(t) = u(t - 2) - u(t - 4)
+$$
+
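A minimal `numpy` sketch of this pulse (the grid and variable names are made up):

```python
import numpy as np

def u(t):
    """Unit step: 1 where t >= 0, else 0."""
    return np.where(t >= 0, 1.0, 0.0)

t = np.linspace(0, 6, 601)
f = u(t - 2) - u(t - 4)  # rectangular pulse: 1 on [2, 4), 0 elsewhere

print(f[300])  # 1.0, since t[300] = 3.0 lies inside the pulse
```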
+
Unit impulse function (Dirac delta function)
+
We define the Dirac delta function by the following two properties:
+$$
+\delta(t) = 0 \ | \ t \neq 0
+$$
+
$$
+\int_{-\infty}^{\infty} \delta(t)\ dt = 1
+$$
+
+
We’ll see that we can define discrete-time signals with this function! But the main power of the Dirac delta function is its sampling/sifting property:
+
Suppose we have a function, $\phi(t)$, which is continuous at $t = 0$. Then:
+$$
+\phi(t)\delta(t) = \phi(0)\delta(t)
+$$
The area under the product of a function with an impulse, $\delta(t)$, is equal to the value of that function at the instant where the unit impulse is located.
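
A minimal `sympy` sketch of the sifting property (the function $\phi(t) = \cos t$ and the impulse location $t = 3$ are made up):

```python
import sympy as sp

t = sp.symbols('t')
phi = sp.cos(t)  # any function continuous at the impulse location

# Area under phi(t) * delta(t - 3) equals phi evaluated at t = 3
result = sp.integrate(phi * sp.DiracDelta(t - 3), (t, -sp.oo, sp.oo))
print(result)  # cos(3)
```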
+
Exponential function
+
We define the exponential function using a complex frequency $s$:
+
$$
+e^{st} \ | \ s = \sigma + j\omega
+$$
+
This means:
+$$
+e^{st} = e^{t(\sigma + j\omega)} = e^{\sigma t + j\omega t} = e^{\sigma t} \cdot e^{j\omega t} = e^{\sigma t}(\cos \omega t + j \sin \omega t)
+$$
+
We have some special cases where we get:
+
+
A constant $k = ke^{0t} \ | \ (s = 0)$
+
A monotonic exponential $e^{\sigma t} \ | \ (\omega = 0, s = \sigma)$
+
A sinusoid $\cos \omega t \ | \ (\sigma = 0, s = \pm j\omega)$
+
An exponentially varying sinusoid $e^{\sigma t} \cos \omega t \ | \ (s = \sigma \pm j\omega)$
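
A minimal numerical sketch checking the decomposition above (the values of $\sigma$ and $\omega$ are made up):

```python
import numpy as np

sigma, omega = -0.5, 4.0
s = sigma + 1j * omega

t = np.linspace(0.0, 5.0, 11)
lhs = np.exp(s * t)
rhs = np.exp(sigma * t) * (np.cos(omega * t) + 1j * np.sin(omega * t))

print(np.allclose(lhs, rhs))  # True, by Euler's formula
```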