There are two kinds of error to consider: random and systematic. Random error is unpredictable fluctuation, caused by the limited precision of the apparatus or by uncontrolled factors in the environment that affect the data. Systematic error is a consistent bias introduced by the method or apparatus itself: a poorly chosen instrument, or a built-in inaccuracy in how we measure. It is systematic error that we focus on, because although there may be minor fluctuations in the gravitational field, or increased temperature may cause the tape to expand as the spark generator runs, or the power supply might have experienced unstable voltage, it is much more likely that measuring the distance between marks on a tape with an old wooden meter stick, one that might even be slightly bent, introduces far more error.
But before we continue the discussion, I introduce the almighty spark generator. This 1.86-meter-long stick features an electromagnet at the top to hold an object in place. When the object is dropped, the two cables on each side of it generate sparks at a regular time interval (in this case, 60 sparks per second, i.e. one every 1/60 s), burning marks into the spark-sensitive paper. We then use a meter stick, a flat wooden rectangular cuboid with lines on it, to measure the distance between the spark marks. The increasing distance between each subsequent mark indicates positive acceleration. The result, the total distance from the first significant mark, is then recorded into the exotic mythological software known as Microsoft Excel.
The above spreadsheet was painstakingly produced--but only because Mac computers are a pain in the ass to use. If we had used PCs in class, I would have been able to reproduce it while mastering quilting with the other hand. I digress... The first and second columns show the time in 1/60-second increments and the measured distance from the first point on the spark paper, respectively. The third is the change in x, or the distance between each recorded distance and the one prior. The mid-interval time is, of course, the half-way point between each pair of recorded times, and the mid-interval speed is the change in distance between each pair of measurements divided by the change in time.
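The spreadsheet columns can be sketched in a few lines of code. The distances below are illustrative placeholders, not the actual tape measurements:

```python
# Sketch of the spreadsheet columns. Sparks fire 60 times per second,
# so the time step between marks is 1/60 s.
DT = 1 / 60  # seconds between sparks

# Hypothetical cumulative distances (meters) from the first significant mark
x = [0.000, 0.012, 0.030, 0.054, 0.084, 0.120]

# Change in x: distance between each recorded mark and the one prior
dx = [b - a for a, b in zip(x, x[1:])]

# Mid-interval times: half-way between consecutive spark times
t_mid = [(i + 0.5) * DT for i in range(len(dx))]

# Mid-interval speeds: change in distance over change in time
v_mid = [d / DT for d in dx]
```

Since each interval's speed is assigned to the interval's midpoint rather than either endpoint, the (t_mid, v_mid) pairs line up exactly with the instantaneous velocities when acceleration is constant.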
So why do all this? The progress of computer technology has changed the way we approach experiments, allowing us to use an iterative computational process and model approximate formulas, instead of deducing them through mathematical analysis. With Excel, we can handle thousands of data points with a drag of the mouse, and then graph them to see the relationship.
The first graph shows distance plotted against time. The ideal formula is:
x(t) = v₀t + ½at²
The second graph shows the derivative: speed plotted against time. This is given by the formula:
x′(t) = v(t) = v₀ + at
In this case, we can see that the initial velocity is 0.54 m/s and the acceleration is 9.53 m/s². The values differ a little from before because our data points are not perfect, so the best-fit line equation that models the data comes out slightly differently. However, as they agree to within 0.02 m/s and 0.05 m/s² respectively, the R² figure (which Microsoft describes as the "square of the Pearson product moment correlation coefficient") is quite high.
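Excel's trendline is an ordinary least-squares fit. A minimal sketch of the same computation, using (t, v) points generated from the fitted values above rather than our actual data:

```python
# Least-squares fit of v = v0 + a*t, as Excel's linear trendline does.
# The data points are illustrative: generated exactly from v0 = 0.54 m/s
# and a = 9.53 m/s^2, so the fit recovers those values exactly.
t = [(i + 0.5) / 60 for i in range(6)]      # mid-interval times (s)
v = [0.54 + 9.53 * ti for ti in t]          # mid-interval speeds (m/s)

n = len(t)
t_mean = sum(t) / n
v_mean = sum(v) / n

# Slope of the best-fit line = acceleration
a = sum((ti - t_mean) * (vi - v_mean) for ti, vi in zip(t, v)) / \
    sum((ti - t_mean) ** 2 for ti in t)

# Intercept of the best-fit line = initial velocity
v0 = v_mean - a * t_mean
```

With real, imperfect data the same formulas would return slightly different values for each graph, which is exactly the small discrepancy noted above.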
From these graphs, we note that speed and time have a linear relationship, which shows that acceleration (the derivative of speed, and the slope of this graph) is constant. The linearity also shows that the instantaneous velocity at the midpoint of any time interval (with constant acceleration) is the same as the average velocity over that interval. This is because the average of a straight line is its center. Or, more rigorously, we say that, where a is constant:
(x₂ − x₁) / (t₂ − t₁) = (v₂ + v₁) / 2
(v₀t₂ + ½at₂² − v₀t₁ − ½at₁²) / (t₂ − t₁) = (v₀ + at₂ + v₀ + at₁) / 2
v₀(t₂ − t₁)/(t₂ − t₁) + ½a(t₂ + t₁)(t₂ − t₁)/(t₂ − t₁) = v₀ + a(t₂ + t₁)/2
v₀ + a(t₂ + t₁)/2 = v₀ + a(t₂ + t₁)/2
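The identity is easy to check numerically: with constant acceleration, the average velocity over an interval equals the instantaneous velocity at the mid-interval time. The values of v₀, a, t₁, and t₂ below are arbitrary illustrations:

```python
# Numeric check: average velocity over [t1, t2] equals the instantaneous
# velocity at the interval's midpoint, when acceleration is constant.
v0, a = 0.54, 9.53  # arbitrary illustrative values

def x(t):
    """Position under constant acceleration: x(t) = v0*t + 0.5*a*t^2."""
    return v0 * t + 0.5 * a * t ** 2

def v(t):
    """Instantaneous velocity: v(t) = v0 + a*t."""
    return v0 + a * t

t1, t2 = 0.1, 0.4
avg_v = (x(t2) - x(t1)) / (t2 - t1)   # average velocity over the interval
mid_v = v((t1 + t2) / 2)              # instantaneous velocity at midpoint
```

This is why assigning each measured speed to the mid-interval time, as the spreadsheet does, gives points that lie on the true v(t) line.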
Now that we've made sense of our data, we compare our experimental value with the accepted value. From our graphs we derived the gravitational acceleration to be 9.53 m/s² or 9.58 m/s². However, the accepted value is 9.8 m/s². The discrepancy here can be expressed as uncertainty -- ah-ha! We've come full circle to the start of our journey. If we take the midpoint of our two values, 9.555 m/s², and compare it to the accepted value, we find that our value deviates by 2.5%.
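The percent-deviation arithmetic, spelled out:

```python
# Percent deviation of our experimental g from the accepted value.
g_exp = (9.53 + 9.58) / 2    # midpoint of our two graph-derived values: 9.555
g_accepted = 9.8             # accepted gravitational acceleration (m/s^2)

deviation_pct = abs(g_exp - g_accepted) / g_accepted * 100   # -> 2.5
```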
Uncertainty is a scalar, and any error introduced can only push the experimental value further from the truth. Since it doesn't make sense for errors to cancel each other out, the methods devised to capture uncertainty involve taking the absolute value (or the square) of each trial's deviation. The first method, the average deviation from the mean, adds up the deviation of each trial and divides the total by the number of trials. A second, better method, the standard deviation (the root mean square, or RMS, of the deviations), squares each deviation, averages the squares, then takes the square root -- the operations in the opposite order of the name. The standard deviation is the accepted method in science because it is better at quantifying spread: with the average deviation, data sets with very different variances can produce the same figure. The standard deviation weights outliers more heavily, and it also makes it possible, provided the data set is large enough, to plot a distribution curve.
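To see why the standard deviation distinguishes spreads that the average deviation cannot, here are two made-up data sets with the same mean and the same average deviation but different variances:

```python
import math

def avg_deviation(data):
    """Average absolute deviation from the mean."""
    m = sum(data) / len(data)
    return sum(abs(x - m) for x in data) / len(data)

def std_deviation(data):
    """Square, mean, root -- the reverse of 'root mean square'."""
    m = sum(data) / len(data)
    return math.sqrt(sum((x - m) ** 2 for x in data) / len(data))

# Illustrative data sets: both have mean 10 and average deviation 1.0,
# but a's values are more spread out than b's.
a = [8, 10, 10, 12]
b = [9, 9, 11, 11]

# avg_deviation(a) == avg_deviation(b) == 1.0
# std_deviation(a) == sqrt(2) > std_deviation(b) == 1.0
```

The squaring step is what lets the lone far-out points in `a` count for more, which is exactly the outlier sensitivity described above.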
To demonstrate, we take the experimental gravitational acceleration from each trial (i.e. from each group) in class and calculate a standard deviation.
The first two columns show that there are 8 trials, each with a different gravitational acceleration. That they are all under the accepted value suggests a systematic error--most likely air resistance. The average is 948.41 cm/s². Each trial's deviation from this average is squared; dividing the sum of the squares by the 8 trials, then taking the square root, gives a standard deviation of 8.71 cm/s². 75% of the trials fall within one standard deviation, which is better than the theoretical 68%. The standard deviation is less than 1% (0.92%) of the mean. We express our final conclusion as (9.48 ± 0.0871) m/s².
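The class calculation follows this recipe (dividing by the number of trials, n = 8, as in the write-up). The trial values below are illustrative placeholders, not the class's actual measurements, so the mean and standard deviation here will not match the figures above:

```python
import math

# Hypothetical trial values in cm/s^2 -- placeholders, not the class data.
trials = [940, 945, 947, 948, 950, 951, 953, 955]

n = len(trials)
mean = sum(trials) / n

# Square each deviation, divide the total by n trials, take the root
std = math.sqrt(sum((g - mean) ** 2 for g in trials) / n)

# How many trials fall within one standard deviation of the mean?
within = sum(1 for g in trials if abs(g - mean) <= std)
```

The final count `within / n` is the fraction compared against the theoretical 68% for a normal distribution.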
If we plug our numbers into the normal distribution function (using 953 cm/s² as x), we seem to fall within 39.8% of the normal distribution.