My *weave and spiral* code is an example of applying math in a colorful way! Namely, I apply radians, arc length, sine and cosine, and function transformations, and, like every program in VPython, I use *vectors*. A spiral in the xz-plane can be obtained by (**bold** denotes a vector):

θ = 2 π spiralRate t

r = r_{start} (1 + θ/π)

**x** = ( -r cos(θ), 0, r sin(θ) )

where the signs can be changed (and the cosine and sine can be swapped) to change the starting location and spiral direction, *t* is time, and the position vector **x** is given as (x,y,z). Note that the default for VPython is to have the x and y directions be the "usual" ones, and the +z direction is towards you out of the screen. Can you predict the starting location and direction of the above spiral? If *spiralRate* = 2 and *t* = 1/2, how many spirals have been done? At this time, what are θ and r?
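In plain Python (outside VPython, where you would instead set an object's `pos` to a `vector` each frame), the spiral formulas can be sketched like this; the `spiral_rate` and `r_start` values are just example choices:

```python
import math

def spiral_position(t, spiral_rate=2.0, r_start=1.0):
    """Position on a spiral in the xz-plane at time t, following the
    formulas above (parameter values here are only examples)."""
    theta = 2 * math.pi * spiral_rate * t    # angle in radians
    r = r_start * (1 + theta / math.pi)      # radius grows with the angle
    return (-r * math.cos(theta), 0.0, r * math.sin(theta))

# At t = 0 the object starts on the -x axis:
print(spiral_position(0))   # → (-1.0, 0.0, 0.0)
```

Flipping the signs (or swapping the sine and cosine) in the returned tuple changes the starting location and the spiral direction, as described above.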

A weave traveling in the x direction is obtained by:

distance = speed t

phase = 2 π distance / cycleLength

**x**_{1} = ( x_{start} + distance, A sin(2 phase), 1.5 A sin(phase) )

**x**_{2} = ( x_{start} + distance, A sin(2 (phase + 2 π / 3)), 1.5 A sin(phase + 2 π / 3) )

**x**_{3} = ( x_{start} + distance, A sin(2 (phase + 4 π / 3)), 1.5 A sin(phase + 4 π / 3) )

where *A* is an amplitude, and *cycleLength* = 6.7 *A* seems to be a good choice. Change the 1.5 if you'd like. Can you figure out how to change the direction of the weave? Thinking about the path a single object takes, why is there a 2 multiplying the phase for motion in the *y* component? Why are there phase shifts added to the 2nd and 3rd objects?
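A plain-Python sketch of the weave formulas (the amplitude, speed, and starting x are example values, not taken from the actual code):

```python
import math

A = 1.0                   # amplitude (example value)
CYCLE_LENGTH = 6.7 * A    # the suggested cycle length
SPEED = 2.0               # example speed
X_START = 0.0

def weave_position(t, phase_shift=0.0):
    """Position of one weaving object; the other two objects just use
    phase shifts of 2*pi/3 and 4*pi/3 in the formulas above."""
    distance = SPEED * t
    phase = 2 * math.pi * distance / CYCLE_LENGTH + phase_shift
    return (X_START + distance,
            A * math.sin(2 * phase),      # y oscillates twice per cycle
            1.5 * A * math.sin(phase))    # z oscillates once per cycle

# All three objects share the same x at any time t:
positions = [weave_position(1.0, k * 2 * math.pi / 3) for k in range(3)]
```

Note that the three objects always share the same x coordinate and differ only in their y and z oscillations, which is what makes them braid around each other.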

Eventually, you may wish to see my code for how both a spiral and weave can be combined!

My *bounce* code applies the following fundamental physics equations (**bold** denotes a vector):

**a** = Σ**F** / m

average **a** = Δ**v** / Δt

average **v** = Δ**x** / Δt

However, since this is a computer taking tiny time steps, Δt, I can write these tiny time intervals as *dt* to denote that I no longer have to think about average values (the average over a tiny amount of time gives the actual instantaneous value at that time!). So the equations can be rearranged to look like:

**a** = Σ**F** / m

**v** = **v**_{0} + **a** dt

**x** = **x**_{0} + **v** dt

where **x**_{0} and **v**_{0} are from the previous time step (Δ**x** = **x** - **x**_{0} and Δ**v** = **v** - **v**_{0}). For simulating *any* 3D motion, we can use these equations, in this order, to calculate the position **x** at each new moment in time! You just need to be able to calculate the forces first! Why do you get more accurate results by making *dt* as small as possible? What would happen if you set *dt* = 0.00000000000001?
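As a sketch (not the actual *bounce* code), one such time step in plain Python might look like this, using scalars for a 1D fall; in VPython the same lines would act on vectors:

```python
def euler_step(x, v, force, m, dt):
    """One time step: acceleration from the net force, then velocity,
    then position, in the same order as the equations above."""
    a = force / m
    v = v + a * dt
    x = x + v * dt
    return x, v

# Example: a 1 kg ball dropped from rest at x = 10 m, with F = -9.8 N
x, v = 10.0, 0.0
dt = 0.01
for _ in range(100):            # simulate 1 second in 100 tiny steps
    x, v = euler_step(x, v, -9.8, 1.0, dt)
# After 1 s: v = -9.8 m/s, and x is a bit below the exact 10 - 4.9 = 5.1 m
```

Shrinking *dt* moves the final position closer and closer to the exact answer, which is the point of the questions above.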

My *gravity* code uses the above equations for **a**, **v**, and **x** in 2D with multiple objects! This simulation is more difficult due to the nature of inverse-square laws such as Newton's law of universal gravitation. The force gets strong very quickly during close approaches, so adaptive step sizes (taking more, smaller time steps when objects are closer) are needed. However, even adaptive steps have problems, because the very simple numerical method described above (called the Euler method) is like trying to push a shopping cart in a straight line by pushing with just one finger of one hand (you are not allowed to move your finger!).

To do this type of thing more realistically (unlike how I did it), we could calculate the results of high-speed close encounters without directly simulating them, much like how the *bounce* code did the elastic bounce by just changing the sign of the vertical velocity. In more complicated simulations such as this three-body gravity simulation, the by-hand calculations cannot be done when all three objects are very close (which is usually why you're simulating it in the first place), so numerical methods such as Runge-Kutta methods must be used. To do this, we must first convert the 6 second-order differential equations (two for each object, since we are in 2D) into 12 first-order differential equations. The Runge-Kutta method makes my *Runge-Kutta gravity* code look very nice if you select adaptive stepping! Unlike my first gravity code, the Runge-Kutta code involves calculus!
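As a sketch of the idea (with a simple oscillator standing in for gravity, not the actual *Runge-Kutta gravity* code), a classical fourth-order Runge-Kutta (RK4) step in plain Python looks like this:

```python
import math

def rk4_step(f, state, t, dt):
    """One classical RK4 step for dstate/dt = f(t, state).
    'state' is a list of first-order variables, e.g. the 12 values
    (x, y, vx, vy) for each of three bodies in 2D."""
    k1 = f(t, state)
    k2 = f(t + dt/2, [s + dt/2 * k for s, k in zip(state, k1)])
    k3 = f(t + dt/2, [s + dt/2 * k for s, k in zip(state, k2)])
    k4 = f(t + dt, [s + dt * k for s, k in zip(state, k3)])
    return [s + dt/6 * (a + 2*b + 2*c + d)
            for s, a, b, c, d in zip(state, k1, k2, k3, k4)]

# Converting one 2nd-order equation (x'' = -x) into two 1st-order ones,
# just like the gravity code does for each body:
def oscillator(t, state):
    x, v = state
    return [v, -x]          # dx/dt = v, dv/dt = -x

# Integrate over one full period; RK4 lands back very near the start:
state, dt = [1.0, 0.0], 2 * math.pi / 1000
for i in range(1000):
    state = rk4_step(oscillator, state, i * dt, dt)
# state ≈ [1.0, 0.0]
```

The four `k` evaluations are what distinguish RK4 from the one-finger Euler push: the step samples the slope at the beginning, middle, and end of the interval before committing.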

If you would like, you may try reducing the step size of the non-Runge-Kutta code (the gravity code that uses the Euler method) until good results are obtained; you will find that the necessary step sizes are so incredibly small that the code runs incredibly slowly.

We just saw that the Runge-Kutta method is needed for simulations where accuracy is important. So can we now simulate everything? Theoretically, yes. However, simulating continuous material (fluids or solids) would require an unrealistically powerful computer if we tried to simulate every atom. If you really enjoy making simulations, the field of CFD (computational fluid dynamics) may interest you. Fluids (or elastic/plastic solids) are instead simulated by calculating pressure, temperature, density, velocity, etc. at various points located throughout the fluid. Just like our simulations here, these values are calculated at a large number of snapshots in time. This is quite advanced stuff, advanced and **powerful**, and the same numerical techniques will let you simulate so many things: electromagnetic fields according to Maxwell's equations, quantum wave functions, and the curved spacetime metrics of Einstein's general relativity.

Compared to fluids, we can much more easily simulate *rigid objects* such as a double pendulum! To do this, simple numerical methods should work just fine (though I will use the Runge-Kutta method anyway). However, the *physics* needs to be more sophisticated because we are not simulating every atom: Lagrangian mechanics will be used as shown in the previous link. Here is my *double pendulum* code. This is a fascinating system because you cannot calculate the solution by hand (analytically)! The motion is chaotic, so it must be simulated.

As for the two lines of code with *lots* of calculations, I used a computer algebra system called Mathematica to do the algebra for me to generate those two lines of code so that I wouldn't have to constantly wonder if I did the algebra correctly.

As a final demonstration of the sort of things that can be simulated, I simulate a freely bouncing rigid object in my *rigid-object bounce* code. I technically use the Runge-Kutta method for the free-fall motion, though this only involves a slight change to one line of code:

y = y_{0} + v_{y} dt

of Euler's method becomes

y = y_{0} + v_{y} dt - ½ g dt^{2}

where g is the magnitude of free-fall acceleration. Without using the Runge-Kutta method, energy is not conserved, so the object may keep bouncing higher and higher!
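A tiny plain-Python comparison of the two updates (the numbers are examples, not from the actual code) shows why the extra term matters on the very first step after release:

```python
g = 9.8     # magnitude of free-fall acceleration
dt = 0.1    # a deliberately large step to make the difference visible

def euler_y(y0, vy, dt):
    return y0 + vy * dt                      # Euler's method

def corrected_y(y0, vy, dt):
    return y0 + vy * dt - 0.5 * g * dt**2    # with the 1/2 g dt^2 term

# Drop from rest at y = 10 m: plain Euler predicts no motion at all in
# the first step, while the corrected update already accounts for the
# acceleration during the step.
print(euler_y(10.0, 0.0, dt))              # → 10.0
print(round(corrected_y(10.0, 0.0, dt), 3))  # → 9.951
```

Here `vy` is the vertical velocity at the start of the step; with the correction, each step matches the exact constant-acceleration kinematics, which is why the bounces stop gaining energy.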

To do the elastic bounce, no numerical method is used. Instead, we need physics! Use conservation of kinetic energy to get

v_{x}^{2} + v_{y}^{2} + I/m ω^{2} = (v_{x} + Δv_{x})^{2} + (v_{y} + Δv_{y})^{2} + I/m (ω + Δω)^{2}

where v_{x} and v_{y} are the components of the velocity of the center of mass (CM), I is the moment of inertia about the CM, Δv_{x} and Δv_{y} are the total changes in velocity throughout the bounce, and

Δω = m/I (r_{x} Δv_{y} - r_{y} Δv_{x})

is the total change in angular speed throughout the bounce (in radians per time unit), where **r** is the vector from the CM to the corner that is bouncing off the ground. If the floor has no friction, then Δv_{x} = 0, and solving for Δv_{y} is not difficult. For friction, I chose to implement the "rolling without slipping" condition

v_{x} + Δv_{x} = r_{y} (ω + Δω)
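Going back to the frictionless case: setting Δv_{x} = 0 and combining the kinetic-energy equation with the Δω equation above gives Δv_{y} = -2 (v_{y} + ω r_{x}) / (1 + m r_{x}^{2} / I) for the nonzero solution. Here is a plain-Python sketch (the helper name and the numbers are just illustrative) that checks kinetic energy is unchanged:

```python
def frictionless_bounce(vy, omega, rx, m, I):
    """Changes in vertical CM velocity and angular speed for an elastic,
    frictionless corner bounce (hypothetical helper, not the actual code).
    Derived by setting dvx = 0 in the equations above and keeping the
    nonzero solution."""
    dvy = -2 * (vy + omega * rx) / (1 + m * rx**2 / I)
    domega = (m / I) * rx * dvy      # the vertical impulse also spins the object
    return dvy, domega

# Example numbers: a spinning object hitting a corner while moving down
vy, omega, rx, m, I = -3.0, 1.0, 0.4, 2.0, 0.5
dvy, domega = frictionless_bounce(vy, omega, rx, m, I)

# Kinetic energy (times 2/m, with the unchanged vx term dropped) before
# and after the bounce should match:
before = vy**2 + (I / m) * omega**2
after = (vy + dvy)**2 + (I / m) * (omega + domega)**2
```

Notice that v_{y} + ω r_{x} is exactly the vertical velocity of the bouncing corner itself, so the frictionless bounce amounts to elastically reversing the corner's vertical motion.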

As for the two lines of code with *lots* of calculations, I used a computer algebra system called Mathematica to do the with-friction algebra for me so that I wouldn't have to constantly wonder if I did the algebra correctly. It was still tricky because there were two solutions of the equations, and one was for when r_{x} < 0, and the other was for when r_{x} > 0. Because the solutions differed only by a sign, I could use |r_{x}| / r_{x} to use the same code for both solutions. I also had to be careful to realize that

√(r_{y}^{2}) / r_{y} = |r_{y}| / r_{y} = -1

because the bouncing corner is below the CM, so r_{y} < 0.