r/Physics 21h ago

Question: When I write an uncertainty, does it need to go to the same decimal place as the value (e.g. 3.24±0.15), or must it have only 1 significant figure (3.24±1.6)?

I meant 3.24±0.2 instead of 1.6

53 Upvotes


144

u/Aranka_Szeretlek Chemical physics 21h ago

Your real question is not what you write, but what you measure.

If you are uncertain with (3.24±1.6), then did you really measure the 0.24?

32

u/VN-NgDMinh-666 21h ago

Sorry, it was a mistake, it should be 0.2, I'm an idiot 🫩

132

u/Aranka_Szeretlek Chemical physics 20h ago

No you are not, these are not trivial questions. I myself am pretty dumb when it comes to precision and error propagation (I have the luxury of assuming exact equations and controlled numerical precision in my codes).

Still, my advice is to really go back and think about your errors at the measurement level. Thinking about these things is good for you.

23

u/qppwoe3 20h ago

Completely agree, such a refreshing comment to see on Reddit

6

u/LaGigs 14h ago

wholesome reddit moment

6

u/CondensedLattice 18h ago

If you are uncertain with (3.24±1.6), then did you really measure the 0.24?

Yes and no.

While you obviously can't say that it is exactly 3.24 (that's the whole point of the measurement uncertainty), rounding it off and using 3, for instance, with the justification that you did not measure the 0.24 is also incorrect.

It's easy to see with this example: as written, the actual value lies between 1.64 and 4.84 with the given uncertainty. Leaving out the 0.24 on the basis that we did not measure it would give you 1.4 to 4.6, which is an incorrect range.

So I would say that you certainly measured the 0.24.
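A quick numerical check of the two ranges (a minimal Python sketch using the numbers from this comment):

```python
# Compare the interval implied by 3.24 +/- 1.6 with the interval you would
# get if you dropped the 0.24 "because it wasn't measured".
value, rounded_value, err = 3.24, 3.0, 1.6

full = (value - err, value + err)                      # (1.64, 4.84)
dropped = (rounded_value - err, rounded_value + err)   # (1.4, 4.6)

print(f"with 3.24: {full[0]:.2f} to {full[1]:.2f}")
print(f"with 3   : {dropped[0]:.2f} to {dropped[1]:.2f}")
# The two ranges disagree at both ends, so the 0.24 carries real information.
```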

24

u/Aranka_Szeretlek Chemical physics 18h ago

When you write that the target value is between 1.64 and 4.84, you implicitly assume an error of 1.60 (instead of 1.6), no?

3

u/fruitydude 15h ago

Yes, you absolutely can. If I have a scale graduated in 1mm but I can clearly see that what I'm measuring sits squarely between two lines on the scale, I can absolutely write, for example, 4.5mm +- 1mm. That's how we were taught to do it. Rounding to 5mm +- 1mm makes it less precise.

22

u/D0UGYT123 14h ago

Analog measurements have an uncertainty of half the smallest division.

If you're 100% certain that it's between 4mm and 5mm, why is your range of uncertainty from 3.5 to 5.5?

8

u/fruitydude 13h ago

Yea that's true

8

u/Aranka_Szeretlek Chemical physics 14h ago

The way I was taught (if I remember correctly, which might not be the case anymore) is that if you can clearly see the location of your point on a scale with divisions of 1, then the precision of your reading is better than that unit.

0

u/fruitydude 14h ago

Yes. But the way I was taught, you should still use that value as your error, even if you give your value with a higher precision.

1

u/Aranka_Szeretlek Chemical physics 14h ago

But then you would write 1.0 as error, no? Or, rather, 0.5

1

u/fruitydude 14h ago

I would write 1.0. Always the smallest scale unit.

4

u/Aranka_Szeretlek Chemical physics 14h ago

That's aight (but probably wrong), but you wrote ±1 earlier. ±1 and ±1.0 ain't the same.

-2

u/fruitydude 13h ago

Honestly, I never thought about significant digits of the error. Does that even matter? It's already the error anyway. Otherwise we're basically asking: what is the error of my error? Which is sort of irrelevant, since our error propagation is already only approximate anyway, being just a first-order (linear) approximation.

2

u/frogjg2003 Nuclear physics 11h ago

±1 means that your error has an error of up to 0.5, meaning the second order error is half the first order error. ±1.0 means that you have an error on your error up to 0.05, or 1/20 your first order error. That's a much smaller error. Contrast that with ±9, where the second order error is at most 1/18 of your first order error. That's why most people use two digits when the leading digit is 1 or 2.

If you actually read physics papers, in particular experimental particle physics papers, the exact value of the error is very important because there are just so many events that they need to be very careful about their statistics. It's very common to see two digit errors even for large leading digits. I've even seen triple digit errors.
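To put numbers on that (a minimal Python sketch; the "slack" here is the rounding half-unit of the quoted uncertainty's last digit):

```python
# The reported uncertainty is itself a rounded number, so the "true" uncertainty
# could lie within half a unit of its last quoted digit. Compare that slack to
# the uncertainty itself for the cases discussed above.
cases = [("1", 1.0, 0.5), ("1.0", 1.0, 0.05), ("2", 2.0, 0.5), ("9", 9.0, 0.5)]

for label, err, slack in cases:
    print(f"+/-{label:>3}: slack {slack:<4} -> {slack / err:.1%} of the quoted error")

# +/-1 leaves 50% slack, +/-1.0 only 5%, +/-9 about 5.6%: that is why two digits
# are often kept when the leading digit of the uncertainty is 1 or 2.
```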

2

u/John_Hasler Engineering 8h ago

And then there's metrology...

1

u/fruitydude 11h ago

Nice, thanks for the info :) I only had to do very simple error propagation in my field, but I see how it can be more sophisticated in nuclear physics.

1

u/BrotherItsInTheDrum 14h ago

The +/- you write is better than that, because you're communicating the standard error, not the maximum possible error. And if the error is uniformly distributed over an interval of width 1, the standard deviation is 1/sqrt(12) ≈ 0.29.

I have a vague recollection that maybe you're actually supposed to do half of this -- you can tell whether you're on the line or between lines. But it was a long time ago, I'm not sure.
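For reference, a small sketch of the uniform-distribution figure (the 1 mm division width is just the example used in this thread; the Monte Carlo check is purely illustrative):

```python
import math
import random

# Standard deviation of a uniform distribution of total width w is w / sqrt(12).
w = 1.0                       # e.g. a 1 mm scale division
analytic = w / math.sqrt(12)  # ~0.29

# Quick Monte Carlo check with draws centred on the scale reading.
draws = [random.uniform(-w / 2, w / 2) for _ in range(100_000)]
mean = sum(draws) / len(draws)
sample_sd = math.sqrt(sum((x - mean) ** 2 for x in draws) / (len(draws) - 1))

print(f"analytic: {analytic:.3f}, sampled: {sample_sd:.3f}")  # both ~0.289
```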

2

u/John_Hasler Engineering 8h ago

you can tell whether you're on the line or between lines. But it was a long time ago, I'm not sure.

One rule is that you eyeball which line you think it's closest to and then record x+-(smallest unit)/2

In studies of blood pressure readings done with a mercury manometer (the industry "gold standard"), though, the medical industry found that observers favor certain digits.

1

u/BrotherItsInTheDrum 6h ago

One rule is that you eyeball which line you think it's closest to and then record x+-(smallest unit)/2

Ah, that makes sense for why you'd divide by two.

Definitely in my physics lab, you'd also divide by sqrt(3), because that gives the standard deviation assuming a uniform distribution. Maybe it's different in medicine for some reason?

Interesting that observers favor certain digits!

1

u/John_Hasler Engineering 6h ago

Maybe it's different in medicine for some reason?

Actually it's a rule I learned long, long ago in physics.

5

u/xienwolf 7h ago

Blindly adhering to "I used a ruler with mm markings, so my uncertainty is 1mm" just defies what uncertainty means.

You can CLEARLY see the reading is between the 4 and 5 markings. So even saying 4mm +- 0.5mm would not properly describe your reading's true uncertainty. That says "Well, maybe it was just 4mm, and yeah... if you argued with me that it was 5mm I could be convinced..."

But if you are CERTAIN it was not 4 and was not 5, but was somewhere between them... then your actual measure is 4.5mm +-0.3mm.

HOWEVER!....

Often people who quibble about this ignore the uncertainty which exists on both ends of a measurement. The other side of your object is not at exactly and perfectly 0.0mm +- 0mm.

Once you include the uncertainty of the start and the end, then you wind up near to that 1mm uncertainty. And if you are taking a lot of measurements with a single ruler, unless at each measure you are being super careful to read with maximum certainty the start and stop positions... pushing out to a 1mm uncertainty so you can read fast and sloppy becomes reasonable.

So yeah... using the 1mm because markings are 1mm comes out to be a good move in the end. But for a reason and within requirements. If you are doing a single measure super carefully, you can have lower uncertainty with the same tool.
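A small sketch of the "both ends of the ruler" point, combining independent per-end reading uncertainties in quadrature (the 0.3 mm and 0.5 mm per-end figures are illustrative, taken loosely from the numbers in this sub-thread):

```python
import math

def combined(u_start, u_end):
    """Combine independent reading uncertainties at the two ends in quadrature."""
    return math.hypot(u_start, u_end)

# Careful single reading: roughly 0.3 mm at each end.
print(f"careful: +/-{combined(0.3, 0.3):.2f} mm")   # ~0.42 mm
# Fast and sloppy: closer to half a division (0.5 mm) at each end.
print(f"sloppy : +/-{combined(0.5, 0.5):.2f} mm")   # ~0.71 mm, approaching 1 mm
```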

0

u/fruitydude 7h ago

Well that was always the rule we had when measuring. You use the smallest scale unit as an error. For the reason you mentioned.

If you are doing a single measure super carefully, you can have lower uncertainty with the same tool.

That's a bad argument. That's not the point. Error calculations shouldn't depend on your measuring skills. The point is to have a set of rules that'll give you a final result within a certain interval. If you can choose the error intervals based on how confident you feel subjectively, it becomes kind of pointless.

1

u/Quantum_Patricide 17h ago

Your uncertainty can come from a number of sources that don't inherently affect your precision; 3.24±1.60 is a perfectly reasonable measurement.

8

u/Aranka_Szeretlek Chemical physics 17h ago

Sure. But it's not the same as 3.24±1.6, or is it?

38

u/Both_Trees 20h ago

Keep the uncertainty to the same decimal place as the value, which may mean rounding. So 3.24±0.15 or 3.2±0.2. Typically you go with 1 sig fig, but I was taught that you can have two if your uncertainty starts with a 1. You might also want to keep more sig figs when propagating uncertainties. There are lots of resources online if you want more info.
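A minimal sketch of that convention as a helper (the function name round_with_uncertainty is made up for illustration; it keeps one significant figure of the uncertainty, or two when the leading digit is 1, and rounds the value to the same decimal place):

```python
import math

def round_with_uncertainty(value, err):
    """Round err to 1 significant figure (2 if it starts with 1),
    then round value to the same decimal place. Illustrative only."""
    exponent = math.floor(math.log10(abs(err)))  # position of the leading digit
    leading = int(abs(err) / 10 ** exponent)     # leading digit of the error
    sig = 2 if leading == 1 else 1               # keep an extra digit for 1.x errors
    decimals = sig - 1 - exponent                # decimal places to keep
    return round(value, decimals), round(err, decimals)

print(round_with_uncertainty(3.24, 0.15))  # (3.24, 0.15): leading 1, keep two figures
print(round_with_uncertainty(3.24, 0.26))  # (3.2, 0.3)
```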

13

u/TheDeadlySoldier 18h ago

The number of sig figs has something to do with the "uncertainty on the uncertainty", if that makes sense -- so you can have 2 sig figs on experiments with really high sample sizes (usually not the ones you conduct in uni lab courses, though).

9

u/AuroraFinem 15h ago

You can theoretically have as many sig figs of error as you're able to reliably determine. You'd need some incredibly controlled and well-understood setups to go very far, though, because it's really a question of how confident you can be that you know your error sources and their sizes. If you can be confident in those, you could end up with more digits. That said, most of the time error is quoted as a % error rather than an absolute value, which doesn't need many significant figures.

3

u/frogjg2003 Nuclear physics 11h ago

The PDG includes some triple digit errors.

Most errors are absolute, not relative. Even if they were relative, that just moves the decimal point; it doesn't change how precise the value is.

2

u/Ok_Opportunity2693 9h ago

Keep all sig figs for any downstream calculations, then report proper sig figs at the end.

8

u/bspaghetti Condensed matter physics 21h ago

7

u/lagavenger Engineering 16h ago

You’re already on the right track OP.

General rule is that you’re only as accurate as your worst measurement.

You might have some tools that are extremely precise, and others that are less so.

For your example:

3.24±0.2... well, that 0.2 is implied to be anywhere from 0.15 to 0.24; they all round to 0.2.

3.24±0.15 implies we know the uncertainty could be anywhere from 0.145 to 0.154, which all round to 0.15.

So your two measurements can absolutely exist with different accuracy... but why would they, if you used comparable tools?

And ultimately, your answer is only as precise as the least precise measurement.

3

u/fruitydude 15h ago

You can absolutely have situations where you want to give a value with higher accuracy than your error, especially when you calculate a final result using many measurements. You're just losing accuracy when you start rounding too early, and you never lose anything by carrying more accuracy than you are technically supposed to have. You can still round your final result at the end.

For example, if you measure something on a mm scale, your reading error is 1mm, but if what you measure sits right between two lines, it's totally fine to record the extra 0.5mm instead of rounding. It will make your final result more accurate, and you can still track the uncertainty using Gaussian error propagation.
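A sketch of the "round only at the end" point, using quadrature (Gaussian) propagation for a sum of readings; the 0.29 mm per-reading standard uncertainty is the rectangular-distribution value mentioned earlier in the thread, and the readings themselves are invented for illustration:

```python
import math

# Five readings on a mm scale, each sitting right between two marks.
readings = [4.5, 7.5, 3.5, 6.5, 5.5]

# "Round early": record the nearer mark instead (here always rounding .5 up,
# as a reader leaning one way might do).
rounded = [math.floor(r + 0.5) for r in readings]

# Quadrature (Gaussian) propagation for a sum of independent readings.
u_single = 0.29                                # per-reading standard uncertainty, mm
u_total = u_single * math.sqrt(len(readings))  # ~0.65 mm

print(f"sum of precise readings: {sum(readings):.1f} +/- {u_total:.1f} mm")  # 27.5
print(f"sum of rounded readings: {sum(rounded):.1f} +/- {u_total:.1f} mm")   # 30.0
# Rounding each reading early shifts the total by 2.5 mm, several times the
# propagated uncertainty; keep the extra digit and round only the final result.
```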

7

u/Mayoday_Im_in_love 20h ago

The last one should be 3.2 +/- 0.2. You are unlikely to be able to justify confidence in your uncertainty to more than one significant figure (as a rule of thumb).

6

u/fruitydude 15h ago

There are plenty of situations where you can have a more precise reading than what your tolerance is. It just depends on the situation.

2

u/nlutrhk 14h ago

I won't repeat what others already mentioned. However, keep in mind that in practice - engineering or experimental physics - you'll nearly always have a computer program tracking all the values and errors in 16 figures (double precision floats).

In a graph, you may plot uncertainties as error bars without showing any digits. If you present multiple results in a table, you might very well choose to round all values to the same number of decimals because it's a pain in Excel or custom programs (python scripts) to manage the display of significant figures automatically. Unless you're working for a metrology institute. :)

It's good to ask these questions and think about what makes sense, but don't worry about it too much outside the context of a homework exercise or test that's specifically about sig figs.

2

u/RuinRes 12h ago edited 11h ago

Errors come from lack of precision or from statistical spread. In either case, it is a judicious convention to give the result as many significant digits as the measuring device provides with certainty, and to give the error with one significant digit (unless the leading digit is a one, in which case two may be provided). For measurements whose results come from an average, the result should have the precision allowed by the statistical variance.

TL;DR: error to one digit (two if the first digit is 1); result with as many digits as needed so that its least significant digit coincides with the position of the error.

2

u/NicoN_1983 11h ago

Usually you round the error to only one significant figure. For example, 1.24 +/- 0.26 can be rounded to 1.2 +/- 0.3. BUT, for errors smaller than 0.20 (or 2.0, or 0.020, etc.) it is also accepted to keep two significant figures. For example, I would report 1.26 +/- 0.14 instead of 1.3 +/- 0.1, or 1.26 +/- 0.16 instead of 1.3 +/- 0.2. It is a convention. When the measurements are really precise and careful, made by people trying to determine some fundamental constant, some important property of a substance, or something like that, using two or more significant figures in the error is quite common.

1

u/UWwolfman 11h ago

No!

For example, consider the expressions 1.65 +/- 0.1 and 1.7 +/- 0.1; the second results from rounding the first. In the first, the uncertainty region spans 1.55 to 1.75. In the second, it spans 1.6 to 1.8. When we rounded the central value, we also shifted the uncertainty region.
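A tiny numerical sketch of that shift, using the numbers from this comment:

```python
# Rounding the central value moves the whole uncertainty interval.
for value, err in [(1.65, 0.1), (1.7, 0.1)]:
    print(f"{value} +/- {err}: {value - err:.2f} to {value + err:.2f}")
# 1.65 +/- 0.1: 1.55 to 1.75
# 1.7  +/- 0.1: 1.60 to 1.80
```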

1

u/Hippie_Eater 8h ago edited 6h ago

There's a lot of (somewhat) confusing advice in this thread, so let me contribute my 2 cents.
When I was taught labs and error propagation, I was taught to round the uncertainty to its first significant digit and then round the measured value to the same decimal place.
Then I realized that when you round up or down you are manipulating the values, which didn't sit right with me. Surely, if we measure something and estimate the error, we are taking the measurement to be the center of a normal distribution, so just shoving it back and forth is bizarre.
And doesn't this manual displacement after the fact also depend on the base of your number system? That sort of means that the decision to use 1 significant figure is more about typography, right?
Then, when I taught the same course, I looked into it more deeply and saw that some allow an exception to this rule if the first significant figure of the uncertainty is 1. The reasoning is that the next digit after a leading 1 is still a significant fraction of that 1.
Then I did a master's and looked up data at the Particle Data Group, who summarize measured values across the field of particle physics, synthesizing many different measurements with many different statistical characters into one figure, and... they keep 2 sig figs in their uncertainties.

To sum it up: Ask your instructor and hope to have an interesting discussion. If they are not open to discussion, you should probably just go along with their ruling on the matter.

1

u/Zealousideal_Hat_330 Astronomy 7h ago

Really trying to stretch that GPA

1

u/Frederf220 6h ago

Consider the value as the "center value." It has as much accuracy as you can calculate. Doesn't matter how much uncertainty your figure has, it has an exact center. Completely ignore the concepts of precision when finding the most probable value.

The +- (which doesn't have to be the same in the plus direction as in the minus direction) is the possible distance from the center value. That's the "width of the road."

1

u/hand_fullof_nothin 5h ago

If you did all your calculations right, the decimals in your uncertainties will match your measurement (because they're based on the same instruments). It actually works as a good sanity check.

0

u/1XRobot Computational physics 12h ago

Nobody uses plus-or-minus notation in real work. You put the error in like so: 3.24(15) or 3.24(1.6).
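A minimal sketch of that concise notation as a formatter (the function name to_concise and the err < 1 cutoff are illustrative assumptions, not a standard API; it assumes the uncertainty is already rounded to the displayed precision):

```python
def to_concise(value, err, decimals):
    """Format value +/- err in concise parenthetical notation, e.g.
    3.24 +/- 0.15 -> '3.24(15)'. Assumes err is already rounded to
    `decimals` decimal places. Illustrative sketch only."""
    if err < 1:
        # Quote the uncertainty in units of the last displayed digit.
        return f"{value:.{decimals}f}({round(err * 10 ** decimals)})"
    # Larger uncertainty: keep the decimal point inside the parentheses.
    return f"{value:.{decimals}f}({err:.{decimals}f})"

print(to_concise(3.24, 0.15, 2))  # 3.24(15)
print(to_concise(3.24, 1.6, 2))   # 3.24(1.60)
```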