Dynamics Reference

    Vector Calculus

    Dot product

    The dot product (also called the inner product or scalar product) is defined by

    Dot product from components. #rvv-es
    $$ \vec{a} \cdot \vec{b}= a_1 b_1 + a_2 b_2 + a_3 b_3 $$

    An alternative expression for the dot product can be given in terms of the lengths of the vectors and the angle between them:

    Dot product from length/angle. #rvv-ed
    $$ \vec{a} \cdot \vec{b}= a b \cos\theta $$

    We will present a simple 2D proof here. A more complete proof in 3D uses the law of cosines.

    Start with two vectors \( \vec{a} \) and \( \vec{b} \) with an angle \( \theta \) between them, as shown below.

    Observe that the angle \( \theta \) between vectors \( \vec{a} \) and \( \vec{b} \) is the difference between the angles \( \theta_a \) and \( \theta_b \) that each vector makes with the horizontal.

    If we use the angle difference formula for cosine, we have

    $$ \begin{aligned} a b \cos\theta &= a b \cos(\theta_b - \theta_a) \\ &= a b (\cos\theta_b \cos\theta_a + \sin\theta_b \sin\theta_a) \end{aligned} $$

    We now want to express the sine and cosine of \( \theta_a \) and \( \theta_b \) in terms of the components of \( \vec{a} \) and \( \vec{b} \).

    We re-arrange the expression so that we can use the fact that \( a_1 = a \cos\theta_a \) and \( a_2 = a \sin\theta_a \), and similarly for \( \vec{b} \). This gives:

    $$ \begin{aligned} a b \cos\theta &= (a \cos\theta_a) (b \cos\theta_b) + (a \sin\theta_a) (b \sin\theta_b) \\ &= a_1 b_1 + a_2 b_2 \\ &= \vec{a} \cdot \vec{b} \end{aligned} $$

    Being able to write the dot product both in terms of components and in terms of lengths and angle is very helpful for calculating the lengths of vectors and the angles between them from their component representations.

    Length and angle from dot product. #rvv-el
    $$ \begin{aligned} a &= \sqrt{\vec{a} \cdot\vec{a}} \\ \cos\theta &= \frac{\vec{b}\cdot \vec{a}}{b a}\end{aligned} $$

    The angle between \( \vec{a} \) and itself is $\theta = 0$, so \( \vec{a} \cdot \vec{a} = a^2 \cos 0 = a^2 \), which gives the first equation for the length in terms of the dot product.

    The second equation is a rearrangement of #rvv-ed.

    If two vectors have zero dot product \( \vec{a} \cdot \vec{b} = 0 \) then they have an angle of \( \theta = 90^\circ = \frac{\pi}{2}\rm\ rad \) between them and we say that the vectors are perpendicular, orthogonal, or normal to each other.
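
    As a quick numerical illustration of these formulas, here is a minimal Python sketch (using NumPy, with arbitrarily chosen example vectors) that computes the dot product from components and then recovers the lengths and the angle:

    ```python
    import numpy as np

    a = np.array([3.0, 4.0, 0.0])   # arbitrary example vectors
    b = np.array([1.0, 2.0, 2.0])

    dot = a @ b                      # component form: a1*b1 + a2*b2 + a3*b3
    len_a = np.sqrt(a @ a)           # length from dot product: a = sqrt(a . a)
    len_b = np.sqrt(b @ b)
    theta = np.arccos(dot / (len_a * len_b))   # angle from dot product

    # the length/angle form #rvv-ed reproduces the component form
    assert np.isclose(dot, len_a * len_b * np.cos(theta))
    print(len_a, np.degrees(theta))  # 5.0 42.83...
    ```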

    In 2D we can easily find a perpendicular vector by rotating \( \vec{a} \) counterclockwise with the following equation.

    Counterclockwise perpendicular vector in 2D. #rvv-en
    $$ \vec{a}^\perp = -a_2\,\hat\imath + a_1\hat\jmath $$

    It is easy to check that \( \vec{a}^\perp \) is always perpendicular to \( \vec{a} \):

    $$ \vec{a} \cdot \vec{a}^\perp = (a_1\,\hat\imath + a_2\,\hat\jmath) \cdot (-a_2\,\hat\imath + a_1\hat\jmath) = -a_1 a_2 + a_2 a_1 = 0. $$
    The fact that \( \vec{a}^\perp \) is a \( +90^\circ \) rotation of \( \vec{a} \) is apparent from Figure #rvv-fn.

    In 2D there are two perpendicular directions to a given vector \( \vec{a} \), given by \( \vec{a}^\perp \) and \( -\vec{a}^\perp \). In 3D there are infinitely many perpendicular directions, and there is no simple formula like #rvv-en for 3D.

    The perpendicular vector \( \vec{a}^\perp \) is always a \( +90^\circ \) rotation of \( \vec{a} \).
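
    A short sketch of #rvv-en, using a helper function name (perp) of our own choosing, confirms both the orthogonality and the length-preserving rotation:

    ```python
    import numpy as np

    def perp(a):
        """Counterclockwise perpendicular of a 2D vector: (-a2, a1)."""
        return np.array([-a[1], a[0]])

    a = np.array([2.0, -1.0])        # arbitrary example vector
    ap = perp(a)

    assert np.isclose(a @ ap, 0.0)   # a . a_perp = 0, so always perpendicular
    assert np.isclose(np.linalg.norm(ap), np.linalg.norm(a))  # rotation preserves length
    ```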

    Dot product identities

    Dot product symmetry. #rvi-ed
    $$ \vec{a} \cdot \vec{b} = \vec{b} \cdot \vec{a} $$

    Using the coordinate expression #rvv-es gives:

    $$ \vec{a} \cdot \vec{b} = a_1 b_1 + a_2 b_2 + a_3 b_3 = b_1 a_1 + b_2 a_2 + b_3 a_3 = \vec{b} \cdot \vec{a}. $$

    Dot product vector length. #rvi-eg
    $$ \vec{a} \cdot \vec{a} = \|\vec{a}\|^2 $$

    Using the coordinate expression #rvv-es gives:

    $$ \vec{a} \cdot \vec{a} = a_1 a_1 + a_2 a_2 + a_3 a_3 = \|\vec{a}\|^2. $$

    Dot product bi-linearity. #rvi-ei
    $$ \begin{aligned} \vec{a} \cdot(\vec{b} + \vec{c}) &=\vec{a} \cdot \vec{b} + \vec{a}\cdot \vec{c} \\ (\vec{a} +\vec{b}) \cdot \vec{c} &=\vec{a} \cdot \vec{c} + \vec{b}\cdot \vec{c} \\ \vec{a} \cdot (\beta\vec{b}) &= \beta (\vec{a} \cdot\vec{b}) = (\beta \vec{a}) \cdot\vec{b}\end{aligned} $$

    Using the coordinate expression #rvv-es gives:

    $$ \begin{aligned} \vec{a} \cdot (\vec{b} + \vec{c}) &= a_1 (b_1 + c_1) + a_2 (b_2 + c_2) + a_3 (b_3 + c_3) \\ &= (a_1 b_1 + a_2 b_2 + a_3 b_3) + (a_1 c_1 + a_2 c_2 + a_3 c_3) \\ &= \vec{a} \cdot \vec{b} + \vec{a} \cdot \vec{c} \\ (\vec{a} + \vec{b}) \cdot \vec{c} &= (a_1 + b_1) c_1 + (a_2 + b_2) c_2 + (a_3 + b_3) c_3 \\ &= (a_1 c_1 + a_2 c_2 + a_3 c_3) + (b_1 c_1 + b_2 c_2 + b_3 c_3) \\ &= \vec{a} \cdot \vec{c} + \vec{b} \cdot \vec{c} \\ \vec{a} \cdot (\beta \vec{b}) &= a_1 (\beta b_1) + a_2 (\beta b_2) + a_3 (\beta b_3) \\ &= \beta (a_1 b_1 + a_2 b_2 + a_3 b_3) \\ &= \beta (\vec{a} \cdot \vec{b}) \\ &= (\beta a_1) b_1 + (\beta a_2) b_2 + (\beta a_3) b_3 \\ &= (\beta \vec{a}) \cdot \vec{b}. \end{aligned} $$

    Cross product

    The cross product can be defined in terms of components by:

    Cross product in components. #rvv-ex
    $$ \vec{a} \times \vec{b} = (a_2 b_3 - a_3 b_2) \,\hat{\imath} + (a_3 b_1 - a_1 b_3) \,\hat{\jmath} + (a_1 b_2 - a_2 b_1) \,\hat{k} $$

    It is sometimes more convenient to work with cross products of individual basis vectors, which are related as follows.

    Cross products of basis vectors. #rvv-eo
    $$ \begin{aligned}\hat\imath \times \hat\jmath &= \hat{k}& \hat\jmath \times \hat{k} &= \hat\imath& \hat{k} \times \hat\imath &= \hat\jmath \\\hat\jmath \times \hat\imath &= -\hat{k}& \hat{k} \times \hat\jmath &= -\hat\imath& \hat\imath \times \hat{k} &= -\hat\jmath \\\end{aligned} $$

    Writing the basis vectors in terms of themselves gives the components:

    $$ \begin{aligned} i_1 &= 1 & i_2 &= 0 & i_3 &= 0 \\ j_1 &= 0 & j_2 &= 1 & j_3 &= 0 \\ k_1 &= 0 & k_2 &= 0 & k_3 &= 1. \end{aligned} $$
    These values can now be substituted into the definition #rvv-ex. For example,
    $$ \begin{aligned} \hat\imath \times \hat\jmath &= (i_2 j_3 - i_3 j_2) \,\hat{\imath} + (i_3 j_1 - i_1 j_3) \,\hat{\jmath} + (i_1 j_2 - i_2 j_1) \,\hat{k} \\ &= (0 \times 0 - 0 \times 1) \,\hat{\imath} + (0 \times 0 - 1 \times 0) \,\hat{\jmath} + (1 \times 1 - 0 \times 0) \,\hat{k} \\ &= \hat{k} \end{aligned} $$
    The other combinations can be computed similarly.
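
    The component definition #rvv-ex translates directly into code. A minimal sketch (the helper name cross is our own choice; NumPy's built-in np.cross does the same job):

    ```python
    import numpy as np

    def cross(a, b):
        """Cross product from components, as in #rvv-ex."""
        return np.array([a[1]*b[2] - a[2]*b[1],
                         a[2]*b[0] - a[0]*b[2],
                         a[0]*b[1] - a[1]*b[0]])

    i, j, k = np.eye(3)                   # the basis vectors i-hat, j-hat, k-hat

    assert np.allclose(cross(i, j), k)    # i x j = k
    assert np.allclose(cross(j, i), -k)   # j x i = -k
    assert np.allclose(cross(k, i), j)    # k x i = j
    ```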

    Warning: The cross product is not associative. #rvv-wc

    The cross product is not associative, meaning that in general

    $$ \vec{a} \times (\vec{b} \times \vec{c}) \ne (\vec{a} \times \vec{b}) \times \vec{c}. $$
    For example,
    $$ \begin{aligned} \hat{\imath} \times (\hat{\imath} \times \hat{\jmath}) &= \hat{\imath} \times \hat{k} = - \hat{\jmath} \\ (\hat{\imath} \times \hat{\imath}) \times \hat{\jmath} &= \vec{0} \times \hat{\jmath} = \vec{0}. \end{aligned} $$
    This means that we should never write an expression like
    $$ \vec{a} \times \vec{b} \times \vec{c} $$
    because it is not clear in which order we should perform the cross products. Instead, if we have more than one cross product, we should always use parentheses to indicate the order.
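
    The basis-vector example above can be checked in a couple of lines:

    ```python
    import numpy as np

    i = np.array([1.0, 0.0, 0.0])
    j = np.array([0.0, 1.0, 0.0])

    lhs = np.cross(i, np.cross(i, j))   # i x (i x j) = i x k = -j
    rhs = np.cross(np.cross(i, i), j)   # (i x i) x j = 0 x j = 0

    print(lhs, rhs)                     # [ 0. -1.  0.] versus [0. 0. 0.]
    assert not np.allclose(lhs, rhs)    # the cross product is not associative
    ```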

    Rather than using components, the cross product can be defined by specifying the length and direction of the resulting vector. The direction of \( \vec{a} \times \vec{b} \) is orthogonal to both \( \vec{a} \) and \( \vec{b} \), with the direction given by the right-hand rule. The magnitude of the cross product is given by:

    Cross product length. #rvv-el2
    $$ \| \vec{a} \times \vec{b} \| = a b \sin\theta $$

    Using Lagrange's identity we can calculate:

    $$ \begin{aligned} \| \vec{a} \times \vec{b} \|^2 &= \|\vec{a}\|^2 \|\vec{b}\|^2 - (\vec{a} \cdot \vec{b})^2 \\ &= a^2 b^2 - (a b \cos\theta)^2 \\ &= a^2 b^2 (1 - \cos^2\theta) \\ &= a^2 b^2 \sin^2\theta. \end{aligned} $$
    Taking the square root of this expression gives the desired cross-product length formula.

    This second form of the cross product definition can also be related to the area of a parallelogram.

    The area of a parallelogram is the length of the base multiplied by the perpendicular height, which is also the magnitude of the cross product of the side vectors.
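
    A sketch with arbitrarily chosen side vectors, verifying that \( \| \vec{a} \times \vec{b} \| \) matches \( a b \sin\theta \) and the base-times-height area:

    ```python
    import numpy as np

    a = np.array([3.0, 0.0, 0.0])            # arbitrary example side vectors
    b = np.array([2.0, 2.0, 0.0])

    theta = np.arccos((a @ b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    area = np.linalg.norm(np.cross(a, b))    # parallelogram area |a x b|

    assert np.isclose(area, np.linalg.norm(a) * np.linalg.norm(b) * np.sin(theta))
    print(area)   # 6.0: base 3 times perpendicular height 2
    ```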

    A useful special case of the cross product occurs when vector \( \vec{a} \) is in the 2D \( \hat\imath,\hat\jmath \) plane and the other vector is in the orthogonal \( \hat{k} \) direction. In this case the cross product rotates \( \vec{a} \) by \( 90^\circ \) counterclockwise to give the perpendicular vector \( \vec{a}^\perp \), as follows.

    Cross product of out-of-plane vector \( \hat{k} \) with 2D vector \( \vec{a} = a_1\,\hat\imath + a_2\,\hat\jmath \). #rvv-e9
    $$ \hat{k} \times \vec{a} = \vec{a}^\perp $$

    Using #rvv-eo we can compute:

    $$ \begin{aligned} \hat{k} \times \vec{a} &= \hat{k} \times (a_1\,\hat\imath + a_2\,\hat\jmath) \\ &= a_1 (\hat{k} \times \hat\imath) + a_2 (\hat{k} \times \hat\jmath) \\ &= a_1\,\hat\jmath - a_2\,\hat\imath \\ &= \vec{a}^\perp. \end{aligned} $$

    Cross product identities

    Cross product anti-symmetry. #rvi-ea
    $$ \begin{aligned} \vec{a} \times\vec{b} = - \vec{b} \times\vec{a}\end{aligned} $$

    Writing the component expression #rvv-ex gives:

    $$ \begin{aligned} \vec{a} \times \vec{b} &= (a_2 b_3 - a_3 b_2) \,\hat{\imath} + (a_3 b_1 - a_1 b_3) \,\hat{\jmath} + (a_1 b_2 - a_2 b_1) \,\hat{k} \\ &= -(a_3 b_2 - a_2 b_3) \,\hat{\imath} - (a_1 b_3 - a_3 b_1) \,\hat{\jmath} - (a_2 b_1 - a_1 b_2) \,\hat{k} \\ &= -\vec{b} \times \vec{a}. \end{aligned} $$

    Cross product self-annihilation. #rvi-ez
    $$ \begin{aligned}\vec{a} \times \vec{a} = \vec{0}\end{aligned} $$

    From anti-symmetry #rvi-ea we have:

    $$ \begin{aligned} \vec{a} \times \vec{a} &= - \vec{a} \times \vec{a} \\ 2\, \vec{a} \times \vec{a} &= \vec{0} \\ \vec{a} \times \vec{a} &= \vec{0}. \end{aligned} $$

    Cross product bi-linearity. #rvi-eb2
    $$ \begin{aligned}\vec{a} \times (\vec{b} + \vec{c})&= \vec{a} \times \vec{b} + \vec{a} \times \vec{c} \\(\vec{a} + \vec{b}) \times \vec{c}&= \vec{a} \times \vec{c} + \vec{b} \times \vec{c} \\\vec{a} \times (\beta \vec{b})&= \beta (\vec{a} \times \vec{b})= (\beta \vec{a}) \times \vec{b}\end{aligned} $$

    Writing the component expression #rvv-ex for the first equation gives:

    $$ \begin{aligned} \vec{a} \times (\vec{b} + \vec{c}) &= (a_2 (b_3 + c_3) - a_3 (b_2 + c_2)) \,\hat{\imath} \\ &\quad + (a_3 (b_1 + c_1) - a_1 (b_3 + c_3)) \,\hat{\jmath} \\ &\quad + (a_1 (b_2 + c_2) - a_2 (b_1 + c_1)) \,\hat{k} \\ &= \Big((a_2 b_3 - a_3 b_2) \,\hat{\imath} + (a_3 b_1 - a_1 b_3) \,\hat{\jmath} + (a_1 b_2 - a_2 b_1) \,\hat{k} \Big) \\ &\quad + \Big((a_2 c_3 - a_3 c_2) \,\hat{\imath} + (a_3 c_1 - a_1 c_3) \,\hat{\jmath} + (a_1 c_2 - a_2 c_1) \,\hat{k} \Big) \\ &= \vec{a} \times \vec{b} + \vec{a} \times \vec{c}. \\ \end{aligned} $$
    The second equation follows similarly, and for the third equation we have:
    $$ \begin{aligned} \vec{a} \times (\beta \vec{b}) &= (a_2 (\beta b_3) - a_3 (\beta b_2)) \,\hat{\imath} + (a_3 (\beta b_1) - a_1 (\beta b_3)) \,\hat{\jmath} + (a_1 (\beta b_2) - a_2 (\beta b_1)) \,\hat{k} \\ &= \beta \Big( (a_2 b_3 - a_3 b_2) \,\hat{\imath} + (a_3 b_1 - a_1 b_3) \,\hat{\jmath} + (a_1 b_2 - a_2 b_1) \,\hat{k} \Big) \\ &= \beta (\vec{a} \times \vec{b}). \end{aligned} $$
    The last part of the third equation can be seen with a similar derivation.

    Derivatives

    Time-dependent vectors can be differentiated in exactly the same way that we differentiate scalar functions. For a time-dependent vector \( \vec{a}(t) \), the derivative \( \dot{\vec{a}}(t) \) is:

    Vector derivative definition. #rvc-ed
    $$ \begin{aligned} \dot{\vec{a}}(t)&= \frac{d}{dt} \vec{a}(t) = \lim_{\Delta t\to 0} \frac{\vec{a}(t + \Delta t) -\vec{a}(t)}{\Delta t}\end{aligned} $$

    Note that vector derivatives are a purely geometric concept. They don't rely on any basis or coordinates, but are just defined in terms of the physical actions of adding and scaling vectors.

    Vector derivatives shown as functions of \(t\) and \(\Delta t\). We can hold \(t\) fixed and vary \(\Delta t\) to see how the approximate derivative \( \Delta\vec{a}/\Delta t \) approaches \( \dot{\vec{a}} \). Alternatively, we can hold \(\Delta t\) fixed and vary \(t\) to see how the approximation changes depending on how \( \vec{a} \) is changing.
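
    The limit in #rvc-ed can be explored numerically. A minimal sketch, with an example vector function of our own choosing, shows the difference quotient approaching the exact derivative as \( \Delta t \) shrinks:

    ```python
    import numpy as np

    def a(t):
        """Example time-dependent vector (arbitrary choice)."""
        return np.array([np.cos(t), np.sin(t)])

    t = 1.0
    for dt in [0.1, 0.01, 0.001]:
        approx = (a(t + dt) - a(t)) / dt       # the difference quotient
        print(dt, approx)                      # approaches the exact derivative

    print(np.array([-np.sin(t), np.cos(t)]))   # exact derivative for comparison
    ```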

    We will use either the dot notation \( \dot{\vec{a}}(t) \) (also known as Newton notation) or the full derivative notation \( \frac{d\vec{a}(t)}{dt} \) (also known as Leibniz notation), depending on which is clearer and more convenient. We will often not write the time dependency explicitly, so we might write just \( \dot{\vec{a}} \) or \( \frac{d\vec{a}}{dt} \). See below for more details.

    Newton versus Leibniz Notation

    Most people know who Isaac Newton is, but perhaps fewer have heard of Gottfried Leibniz. Leibniz was a prolific mathematician and a contemporary of Newton. Both of them claimed to have invented calculus independently of each other, and this became the source of a bitter rivalry between the two of them. Each of them had different notation for derivatives, and both notations are commonly used today.

    Leibniz notation is meant to be reminiscent of the definition of a derivative:

    $$ \frac{dy}{dt}=\lim_{\Delta t\rightarrow0}\frac{\Delta y}{\Delta t}. $$

    Newton notation is meant to be compact:

    $$ \dot{y} = \frac{dy}{dt}. $$

    Note that a superscribed dot always denotes differentiation with respect to time \(t\). A superscribed dot is never used to denote differentiation with respect to any other variable, such as \(x\).

    But what about primes? A prime is used to denote differentiation with respect to a function's argument. For example, suppose we have a function \(f=f(x)\). Then

    $$ f'(x) = \frac{df}{dx}. $$

    Suppose we have another function \(g=g(s)\). Then

    $$ g'(s) = \frac{dg}{ds}. $$

    As you can see, while a superscribed dot always denotes differentiation with respect to time \(t\), a prime can denote differentiation with respect to any variable; but that variable is always the function's argument.

    Sometimes, for convenience, we drop the argument altogether. So, if we know that \(y=y(x)\), then \(y'\) is understood to be the same as \(y'(x)\). This is sloppy, but it is very common in practice.

    Each notation has advantages and disadvantages. The main advantage of Newton notation is that it is compact: it does not take a lot of effort to write a dot or a prime over a variable. However, the price you pay for convenience is clarity. The main advantage of Leibniz notation is that it is absolutely clear exactly which variable you are differentiating with respect to.

    Derivatives and vector "positions"

    When thinking about vector derivatives, it is important to remember that vectors don't have positions. Even if a vector is drawn moving about, this is irrelevant for the derivative. Only changes to length and direction are important.


    Vector derivatives for moving vectors. Vector movement is irrelevant when computing vector derivatives.

    Cartesian

    In a fixed basis we differentiate a vector by differentiating each component:

    Vector derivative in components. #rvc-ec
    $$ \dot{\vec{a}}(t) = \dot{a}_1(t)\,\hat{\imath} + \dot{a}_2(t)\,\hat{\jmath} + \dot{a}_3(t) \,\hat{k} $$

    Writing a time-dependent vector expression in a fixed basis gives:

    $$ \vec{a}(t) = a_1(t)\,\hat{\imath} + a_2(t) \,\hat{\jmath}. $$
    Using the definition #rvc-ed of the vector derivative gives:
    $$ \begin{aligned}\dot{\vec{a}}(t) &= \lim_{\Delta t \to 0}\frac{\vec{a}(t + \Delta t) -\vec{a}(t)}{\Delta t} \\ &= \lim_{\Delta t\to 0} \frac{(a_1(t + \Delta t) \,\hat{\imath} +a_2(t + \Delta t) \,\hat{\jmath}) - (a_1(t)\,\hat{\imath} + a_2(t) \,\hat{\jmath})}{\Delta t} \\&= \lim_{\Delta t \to 0} \frac{(a_1(t + \Delta t)- a_1(t)) \,\hat{\imath} + (a_2(t + \Delta t) -a_2(t)) \,\hat{\jmath}}{\Delta t} \\ &=\left(\lim_{\Delta t \to 0} \frac{a_1(t + \Delta t) -a_1(t)}{\Delta t} \right) \,\hat{\imath} +\left(\lim_{\Delta t \to 0} \frac{a_2(t + \Delta t) -a_2(t) }{\Delta t}\right) \,\hat{\jmath} \\ &=\dot{a}_1(t) \,\hat{\imath} + \dot{a}_2(t)\,\hat{\jmath}\end{aligned} $$
    The second-to-last line above is simply the definition of the scalar derivative, giving the scalar derivatives of the component functions \(a_1(t)\) and \(a_2(t)\).

    Warning: Differentiating each component is only valid if the basis is fixed. #rvc-wc

    When we differentiate a vector by differentiating each component and leaving the basis vectors unchanged, we are assuming that the basis vectors themselves are not changing with time. If they are, then we need to take this into account as well.


    The vector derivative decomposed into components. This demonstrates graphically that each component of a vector in a particular basis is simply a scalar function, and the corresponding derivative component is the regular scalar derivative.

    Example Problem: Differentiating a vector function. #vcc-dvf

    The vector \( \vec{r}(t) \) is given by

    $$ \vec{r}(t) = 2t^2 \,\hat\imath + 3t \,\hat\jmath $$
    What is \( \dot{\vec{r}}(t) = \frac{d}{dt} \, \vec{r}(t) \)?

    $$ \dot{\vec{r}}(t) = \frac{d}{dt} [ 2t^2\hat\imath + 3t\hat\jmath ] = 4t\hat\imath + 2t^2\frac{d}{dt} \hat\imath + 3\hat\jmath + 3t\frac{d}{dt} \hat\jmath = 4t\hat\imath + 3\hat\jmath $$
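
    Since the basis is fixed, the \( \frac{d}{dt}\hat\imath \) and \( \frac{d}{dt}\hat\jmath \) terms vanish, leaving only the component derivatives. A sketch of the same computation with SymPy, a symbolic algebra library, differentiating each component as #rvc-ec allows:

    ```python
    import sympy as sp

    t = sp.symbols('t')
    r = sp.Matrix([2*t**2, 3*t])   # components of r(t) in the fixed i, j basis

    print(r.diff(t))               # Matrix([[4*t], [3]]), i.e. 4t i-hat + 3 j-hat
    ```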

    Non-Cartesian: Polar basis

    Example Problem: Differentiating a vector function. #vcc-dvf2

    The vector \( \vec{r}(t) \) is given by

    $$ \vec{r}(t) = 2t^2 \,\hat{e}_r + 3t \,\hat{e}_\theta $$
    What is \( \dot{\vec{r}}(t) = \frac{d}{dt} \, \vec{r}(t) \)?

    $$ \dot{\vec{r}}(t) = \frac{d}{dt} [ 2t^2\hat{e}_r + 3t\hat{e}_\theta ] = 4t\hat{e}_r + 2t^2\frac{d}{dt} \hat{e}_r + 3\hat{e}_\theta + 3t\frac{d}{dt} \hat{e}_\theta $$

    • If only \( r \) changes: \( \dot{\hat{e}}_r = 0 \) and \( \dot{\hat{e}}_\theta = 0 \), so the basis-derivative terms vanish.
    • If only \( \theta \) changes: \( \dot{\hat{e}}_r = \dot{\theta}\,\hat{e}_\theta \neq 0 \) and \( \dot{\hat{e}}_\theta = -\dot{\theta}\,\hat{e}_r \neq 0 \), so these terms must be kept (a numerical check follows below).
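
    These identities can be checked numerically by writing the polar basis vectors in a fixed Cartesian basis, \( \hat{e}_r = \cos\theta\,\hat\imath + \sin\theta\,\hat\jmath \) and \( \hat{e}_\theta = -\sin\theta\,\hat\imath + \cos\theta\,\hat\jmath \). A minimal sketch with arbitrarily chosen values:

    ```python
    import numpy as np

    def e_r(theta):       # polar basis vectors written in the fixed Cartesian basis
        return np.array([np.cos(theta), np.sin(theta)])

    def e_theta(theta):
        return np.array([-np.sin(theta), np.cos(theta)])

    theta, theta_dot, dt = 0.7, 2.0, 1e-6    # arbitrary instant and angular rate

    # finite-difference time derivative of e_r as theta(t) changes
    de_r = (e_r(theta + theta_dot*dt) - e_r(theta)) / dt

    assert np.allclose(de_r, theta_dot * e_theta(theta), atol=1e-5)  # e_r-dot = theta-dot e_theta
    ```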

    Chain rule

    Leibniz notation is very convenient for remembering the chain rule. Consider the following examples of the chain rule in the two notations:

    $$ \begin{aligned}&\text{Newton:}&\dot{y}=y'(x)\dot{x} \\ &\text{Leibniz:}&\frac{dy}{dt}=\frac{dy}{dx}\frac{dx}{dt}.\end{aligned} $$

    Notice how, with Leibniz notation, you can imagine the \(dx\)'s "cancelling out" on the right-hand side, leaving you with \(dy/dt\).

    The chain rule also applies to vector functions. This is helpful for parameterizing vectors in terms of arc-length \(s\) or other quantities different than time \(t\).

    Chain rule for vectors.
    $$ \frac{d}{dt} \vec{a} (s(t)) = \frac{d\vec{a}}{ds}(s(t)) \frac{ds}{dt}(t) = \frac{d\vec{a}}{ds} \dot{s} $$
    Example Problem: Chain rule. #vcc-chr

    A vector is defined in terms of an angle \(\theta\) by \( \vec{r}(\theta) = \cos\theta\,\hat\imath + \sin\theta\,\hat\jmath \). If the angle is given by \(\theta(t) = t^3\), what is \( \dot{\vec{r}} \)?

    We can use the chain rule to compute:

    $$ \begin{aligned} \frac{d}{dt} \vec{r} &= \frac{d}{d\theta} \vec{r} \cdot \frac{d}{dt} \theta \\ &= \left(\frac{d}{d\theta}\Big( \cos\theta\,\hat\imath + \sin\theta\,\hat\jmath \Big) \cdot \frac{d}{dt} (t^3) \right) \\ &= \left(\Big( -\sin\theta\,\hat\imath + \cos\theta\,\hat\jmath \Big) \cdot (3 t^2) \right) \\ &= -3 t^2 \sin(t^3)\,\hat\imath + 3 t^2 \cos(t^3)\,\hat\jmath \end{aligned} $$

    Alternatively, we can evaluate \( \vec{r} \) as a function of \(t\) first and then differentiate it with respect to time, using the scalar chain rule for each component:

    $$ \frac{d}{dt} \vec{r} = \frac{d}{dt} \Big( \cos(t^3)\,\hat\imath + \sin(t^3)\,\hat\jmath \Big) = -3 t^2 \sin(t^3)\,\hat\imath + 3 t^2 \cos(t^3)\,\hat\jmath $$
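
    A finite-difference check of this chain-rule result, at an arbitrarily chosen time:

    ```python
    import numpy as np

    def r(t):
        t3 = t**3                                   # theta(t) = t^3
        return np.array([np.cos(t3), np.sin(t3)])

    def r_dot(t):                                   # chain-rule result from above
        return 3*t**2 * np.array([-np.sin(t**3), np.cos(t**3)])

    t, dt = 0.9, 1e-7
    approx = (r(t + dt) - r(t)) / dt                # difference quotient
    assert np.allclose(approx, r_dot(t), atol=1e-4)
    ```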

    Integration

    Cartesian:

    $$ \vec{v} = \dot{x} \hat\imath + \dot{y} \hat\jmath $$
    • Integrate \( \dot{x} \) and \( \dot{y} \) to find \(x\) and \(y\).

    Polar:

    $$ \vec{v} = \dot{r} \hat{e}_r + r \dot{\theta} \hat{e}_\theta $$
    • Reduce \( \dot{r} , r\dot{\theta} \) to just \( \dot{r} \) and \( \dot{\theta} \) before integrating.
    • Integrate scalars \( (\dot{r} , r\dot{\theta}) \) rather than vector \( \vec{v} \).

    Basic idea:

    1. Reduce the vector equations to scalar equations.
    2. Integrate the scalar equations to find the coordinates.
    3. Reconstruct the position vector from the coordinates.

    Cartesian basis

    The Riemann-sum definition of the vector integral is:

    Vector integral. #rvc-ei
    $$ \int_0^t \vec{a}(\tau) \, d\tau= \lim_{N \to \infty} \underbrace{\sum_{i=1}^N \vec{a}(\tau_i) \Delta\tau}_{\vec{S}_N}\qquad \tau_i = \frac{(i - 1)\,t}{N}\qquad \Delta \tau = \frac{t}{N} $$

    In the above definition \( \vec{S}_N \) is the sum with \(N\) intervals, written here using the left-hand edge \( \tau_i \) in each interval.


    Integral of a vector function \( \vec{a}(t) \), together with the approximation using a Riemann sum.
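
    A minimal sketch of the Riemann sum \( \vec{S}_N \) from #rvc-ei, with an example integrand of our own choosing, showing convergence as \(N\) grows:

    ```python
    import numpy as np

    def a(tau):
        """Example integrand (arbitrary choice)."""
        return np.array([np.cos(tau), 2*tau])

    def riemann(t, N):
        """Left-endpoint Riemann sum S_N from #rvc-ei."""
        dtau = t / N
        taus = np.arange(N) * dtau                # tau_i = (i - 1) t / N
        return sum(a(tau) for tau in taus) * dtau

    exact = np.array([np.sin(2.0), 4.0])          # integral of a from 0 to t = 2
    print(riemann(2.0, 10), riemann(2.0, 10000), exact)
    ```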

    Just like vector derivatives, vector integrals only use the geometric concepts of scaling and addition, and do not rely on using a basis. If we do write a vector function in terms of a fixed basis, then we can integrate each component:

    Vector integral in components. #rvc-et
    $$ \int_0^t \vec{a}(\tau) \, d\tau= \left( \int_0^t a_1(\tau) \, d\tau \right) \,\hat\imath+ \left( \int_0^t a_2(\tau) \, d\tau \right) \,\hat\jmath+ \left( \int_0^t a_3(\tau) \, d\tau \right) \,\hat{k} $$

    Consider a time-dependent vector \( \vec{a}(t) \) written in components with a fixed basis:

    $$ \vec{a}(t) = a_1(t) \,\hat\imath + a_2(t) \,\hat\jmath. $$
    Using the definition #rvc-ei of the vector integral gives:
    $$ \begin{aligned}\int_0^t \vec{a}(\tau) \, d\tau&= \lim_{N \to \infty} \sum_{i=1}^N\vec{a}(\tau_i) \Delta\tau \\&= \lim_{N \to \infty} \sum_{i=1}^N\left( a_1(\tau_i) \,\hat\imath+ a_2(\tau_i) \,\hat\jmath \right) \Delta\tau \\&= \lim_{N \to \infty} \left( \sum_{i=1}^Na_1(\tau_i) \Delta\tau \,\hat\imath+ \sum_{i=1}^N a_2(\tau_i) \Delta\tau \,\hat\jmath \right) \\&= \left( \lim_{N \to \infty} \sum_{i=1}^Na_1(\tau_i) \Delta\tau \right) \,\hat\imath+ \left( \lim_{N \to \infty}\sum_{i=1}^N a_2(\tau_i) \Delta\tau \right) \,\hat\jmath \\&= \left( \int_0^t a_1(\tau) \, d\tau \right) \,\hat\imath+ \left( \int_0^t a_2(\tau) \, d\tau \right) \,\hat\jmath.\end{aligned} $$
    The second-to-last line used the Riemann-sum definition of regular scalar integrals of \( a_1(t) \) and \( a_2(t) \).

    Warning: Integrating each component is only valid if the basis is fixed. #rvc-wi

    Integrating a vector function by integrating each component separately is only valid if the basis vectors are not changing with time. If the basis vectors are changing then we must either transform to a fixed basis or otherwise take this change into account.

    Example Problem: Integrating a vector function. #rvc-xi

    The vector \( \vec{a}(t) \) is given by

    $$ \vec{a}(t) = \Big(2 \sin(t + 1) + t^2 \Big) \,\hat\imath+ \Big(3 - 3 \cos(2t)\Big) \,\hat\jmath. $$
    What is \( \int_0^t \vec{a}(\tau) \, d\tau \)?

    $$ \begin{aligned}\int_0^t \vec{a}(\tau) \,d\tau&= \left(\int_0^t \Big(2 \sin(\tau + 1) + \tau^2 \Big)\,d\tau\right) \,\hat\imath+ \left(\int_0^t \Big(3 - 3 \cos(2\tau)\Big)\,d\tau\right) \,\hat\jmath \\&=\left[-2 \cos(\tau + 1) + \frac{\tau^3}{3}\right]_{\tau=0}^{\tau=t} \,\hat\imath+ \left[3 \tau - \frac{3}{2} \sin(2\tau)\right]_{\tau=0}^{\tau=t} \,\hat\jmath \\&= \left( -2\cos(t + 1) + 2 \cos(1)+ \frac{t^3}{3}\right)\,\hat\imath+ \left(3t - \frac{3}{2} \sin(2t)\right)\,\hat\jmath.\end{aligned} $$

    Warning: The dummy variable of integration must be different from the limit variable. #rvc-wd

    In the component integral expression #rvc-et, it is important that the component expressions \(a_1(t)\), \(a_2(t)\) are re-written with a different dummy variable such as \(\tau\) when integrating. If we used \(\tau\) for integration but kept \(a_1(t)\) then we would obtain

    $$ \int_0^t a_1(t) \,d\tau= \left[a_1(t) \, \tau\right]_{\tau = 0}^{\tau = t}= a_1(t) \, t, $$
    which is not what we mean by the integral. Alternatively, if we leave everything as \(t\) then we would obtain
    $$ \int_0^t a_1(t) \,dt $$
    which is a meaningless expression, as dummy variables must only appear inside an integral.

    Polar basis

    Consider a velocity given in a polar basis together with the general polar velocity expression:

    $$ \vec{v} = 2\hat{e}_r + \hat{e}_\theta \qquad \text{and} \qquad \vec{v} = \dot{r}\,\hat{e}_r + r \dot{\theta}\,\hat{e}_\theta $$

    Equating components gives the scalar equations \( \dot{r} = 2 \) and \( r \dot{\theta} = 1 \), with initial conditions \( r(0) = 1 \) and \( \theta(0) = 0 \).
    Radial:
    $$ \begin{aligned} \dot{r} &= 2 \\ r &= \int \dot{r} \, dt = 2t + c \\ r(0) &= 1 \implies c = 1 \\ r(t) &= 2t + 1 \end{aligned} $$
    Angular:
    $$ \begin{aligned} r\dot{\theta} &= 1 \\ \dot{\theta} &= \frac{1}{2t+1} \\ \theta &= \int \dot{\theta} \, dt = \frac{1}{2} \ln(2t+1) + c \\ \theta(0) &= 0 \implies c = 0 \\ \theta(t) &= \frac{1}{2} \ln(2t+1) \end{aligned} $$
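
    A sketch that integrates the scalar equations \( \dot{r} = 2 \) and \( r\dot{\theta} = 1 \) with a simple forward-Euler loop and compares against the closed-form solutions above:

    ```python
    import numpy as np

    dt, T = 1e-4, 3.0
    r, theta = 1.0, 0.0                # initial conditions r(0) = 1, theta(0) = 0

    for _ in range(round(T / dt)):
        r_dot = 2.0                    # radial equation
        theta_dot = 1.0 / r            # angular equation r * theta_dot = 1
        r += r_dot * dt                # forward-Euler update
        theta += theta_dot * dt

    print(r, 2*T + 1)                        # approx 7.0, exact r = 2t + 1
    print(theta, 0.5 * np.log(2*T + 1))      # approx 0.973, exact ln(2t+1)/2
    ```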

    Solving equations

    Steps:

    1. Write the vector unknowns in terms of scalar variables.
    2. Expand the vector equations and equate components to get scalar equations.
    3. Solve the scalar equations for the scalar unknowns.
    4. Reconstruct the vector unknowns.
    Example Problem: Rotating particle. #vcc-rpt

    A particle is rotating in the \(XY\) plane with a variable angular velocity \( \vec{\omega} \). At one instant in time, the particle has:

    $$ \vec{r} = (2\hat\imath - \hat\jmath) \rm\ m $$
    and
    $$ \vec{a} = (-4\hat\imath - 3\hat\jmath) \rm\ m/s^2 $$

    1. How many scalar variables?
    2. How many vector equations?
    3. How many scalar equations?

    Answers:

    1. 2 (\( \omega_z , \alpha_z \))
    2. 1 (\( \vec{a} = \alpha_z \vec{r}^\perp - \omega_z^2 \vec{r} \))
    3. 2 (one for each of \( \hat\imath , \hat\jmath \))
    Example Problem: Extra practice. #vcc-ept

    Given:

    $$ \vec{r} = (2, -1), \qquad \vec{a} = (-4, -3) $$
    Find \( \vec{\omega} \) and \( \vec{\alpha} \).

    Step by step:

    1. Scalar variables: \( \omega_z , \alpha_z \)
    2. Expand vector equations:
      $$ \begin{aligned} \vec{a} &= \alpha_z \vec{r}^\perp - \omega_z^2 \vec{r} \\ (-4, -3) &= \alpha_z (1, 2) - \omega_z^2 (2, -1) \\ \hat\imath :\ -4 &= \alpha_z - 2 \omega_z^2 \\ \hat\jmath :\ -3 &= 2 \alpha_z + \omega_z^2 \\ \end{aligned} $$
    3. Solve the scalar equations by substitution (a numerical sketch follows this list):
      $$ \begin{aligned} \hat\jmath : \omega_z^2 &=-3 -2 \alpha_z \\ \hat\imath : -4 &=\alpha_z - 2(-3-2\alpha_z) \\ \hat\imath : -4 &=5 \alpha_z +6 \\ \alpha_z &=-2 \\ \omega_z^2 &=-3 -2 (-2) = 1 \\ \omega_z &=\pm 1 \\ \end{aligned} $$
    4. Reconstruct vector variables:
      $$ \begin{aligned} \vec\omega &= \pm \hat{k} \rm\ rad/s \\ \vec\alpha &=-2 \hat{k} \rm\ rad/s^2 \\ \end{aligned} $$
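
    The two scalar equations are linear in the unknowns \( \alpha_z \) and \( \omega_z^2 \), so they can also be solved as a 2x2 linear system. A sketch with NumPy:

    ```python
    import numpy as np

    # unknowns x = [alpha_z, omega_z^2]; rows are the i-hat and j-hat equations
    A = np.array([[1.0, -2.0],
                  [2.0,  1.0]])
    b = np.array([-4.0, -3.0])

    alpha_z, omega_z_sq = np.linalg.solve(A, b)
    omega_z = np.sqrt(omega_z_sq)      # the equations do not determine the sign

    print(alpha_z, omega_z)            # -2.0 and 1.0 (omega_z = +/- 1)
    ```
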
    Example Problem: Follow up question. #vcc-fwq

    Do we need to have the same number of vector equations and vector unknowns?

    No. What matters is the scalar count: one vector equation can supply several scalar equations, and a vector unknown may contain only one free scalar component.

    Example Problem: Follow up question. #vcc-fwq2

    Do we need to have the same number of scalar equations and scalar unknowns?

    Yes. To determine \(n\) scalar unknowns we generally need \(n\) independent scalar equations, as in the example above.