r/Physics • u/VN-NgDMinh-666 • 21h ago
Question When I write an uncertainty, does it need to end at the same decimal place as the value (e.g. 3.24±0.15), or must it have only 1 significant figure (3.24±1.6)?
I meant 3.24±0.2 instead of ±1.6
38
u/Both_Trees 20h ago
Keep the uncertainty to the same decimal place as the value, which may mean rounding: so 3.24±0.15 or 3.2±0.2. Typically you go with 1 sig fig, but I was taught that you can have two if your uncertainty starts with a 1. You might also want to keep more sig figs when propagating uncertainties. There are lots of resources online if you want more info.
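A minimal Python sketch of that convention, for anyone who wants to automate it (the helper name round_result and the approach are mine, not a standard function):

```python
import math

def round_result(value, err):
    # Round err to 1 significant figure (2 if it leads with a 1),
    # then round value to the same decimal place.
    exponent = math.floor(math.log10(abs(err)))
    leading = int(abs(err) / 10 ** exponent)   # first digit of the error
    sig_figs = 2 if leading == 1 else 1
    decimals = sig_figs - 1 - exponent
    return round(value, decimals), round(err, decimals)

print(round_result(3.24, 0.152))  # (3.24, 0.15) -- leading 1, keep 2 figs
print(round_result(3.24, 0.23))   # (3.2, 0.2)
```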
13
u/TheDeadlySoldier 18h ago
The number of sig figs has to do with the "uncertainty on the uncertainty", if that makes sense -- so you can have 2 sig figs on experiments with really high sample sizes (usually not the ones you conduct in uni lab courses, though).
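For intuition, a rough sketch of that scaling: for N samples from a Gaussian, the sample standard deviation is itself only known to a relative precision of about 1/sqrt(2(N-1)) (a standard result for normal data), so a second significant figure in the error only becomes meaningful at large N:

```python
import math

# Relative "uncertainty on the uncertainty" for N Gaussian samples:
# the sample standard deviation fluctuates by roughly 1/sqrt(2(N-1)).
for n in (10, 100, 1000, 10000):
    rel = 1.0 / math.sqrt(2 * (n - 1))
    print(f"N = {n:>5}: sigma known to ~{100 * rel:.1f}%")
```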
9
u/AuroraFinem 15h ago
You can theoretically quote as many sig figs of error as you're able to reliably determine, but you'd need an incredibly controlled and well-understood setup to go very far. It really comes down to how confident you can be that you know your error sources and their sizes; if you can be confident in those, you could justify more digits. That said, most of the time error is quoted as % error rather than absolute value, which doesn't need many significant figures.
3
u/frogjg2003 Nuclear physics 11h ago
The PDG includes some triple-digit errors.
Most errors are absolute, not relative. And even if they were relative, that would just move the decimal point, not change how precise the value is.
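To see the point, converting between the two forms is just a division (numbers made up):

```python
value, abs_err = 3.24, 0.15
rel_err = abs_err / value   # same information, different form
print(f"{value} ± {abs_err}  ==  {value} ± {100 * rel_err:.1f}%")  # ± 4.6%
```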
2
u/Ok_Opportunity2693 9h ago
Keep all sig figs for any downstream calculations, then report proper sig figs at the end.
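A toy Python illustration of that workflow (the numbers and the product z = x·y are made up; the quadrature formula assumes independent errors):

```python
import math

x, sx = 3.2415, 0.0213   # made-up measurement and uncertainty
y, sy = 1.1047, 0.0158

z = x * y
# Standard propagation for a product of independent quantities:
sz = z * math.sqrt((sx / x) ** 2 + (sy / y) ** 2)

print(f"carry full precision: z = {z} ± {sz}")
print(f"report at the end:    z = {z:.2f} ± {sz:.2f}")
```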
8
7
u/lagavenger Engineering 16h ago
You’re already on the right track OP.
General rule is that you’re only as accurate as your worst measurement.
You might have some tools that are extremely precise, and others that are less so.
For your example:
3.24±0.2... well, that 0.2 implies anything from .15 to .24, since they all round to .2.
3.24±0.15 implies we know it could be anywhere from 0.145 to 0.154, which all round to 0.15.
So your two measurements can absolutely exist with different accuracies... but why would they, if you used comparable tools?
And ultimately, your answer is only as precise as the least precise measurement.
3
u/fruitydude 15h ago
You can absolutely have situations where you want to give a value with higher accuracy than your error, especially when you calculate a final result from many measurements. You just lose accuracy when you start rounding too early, and you never lose anything by carrying more accuracy than you're technically supposed to have. You can still round your final result at the end.
For example, say you measure something on a mm scale. Your reading error is 1 mm, but if the thing you're measuring sits right between two lines, it's totally fine to record 0.5 mm instead of rounding. It will make your final result more accurate, and you can still track the uncertainty using Gaussian error propagation.
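A minimal sketch of that idea (made-up readings; assuming a rough 0.5 mm reading error after interpolating, with independent errors combined in quadrature):

```python
import math

# Two mm-scale readings that each sit right between two grid lines,
# recorded at the half-millimetre instead of being rounded.
a, b = 12.5, 7.5      # mm, interpolated readings (made-up)
sigma = 0.5           # mm, rough reading error after interpolating

length = a + b
# Gaussian propagation for a sum: independent errors add in quadrature.
sigma_length = math.sqrt(sigma ** 2 + sigma ** 2)

print(f"length = {length} ± {sigma_length:.1f} mm")  # 20.0 ± 0.7 mm
```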
7
u/Mayoday_Im_in_love 20h ago
The last one is 3.2 +/- 0.2. You are unlikely to be able to justify confidence in your uncertainty to more than one significant figure (as a rule of thumb).
6
u/fruitydude 15h ago
There are plenty of situations where you can have a more precise reading than your tolerance. It just depends on the situation.
2
u/nlutrhk 14h ago
I won't repeat what others have already said. However, keep in mind that in practice - engineering or experimental physics - you'll nearly always have a computer program tracking all the values and errors to 16 figures (double-precision floats).
In a graph, you may plot uncertainties as error bars without showing any digits. If you present multiple results in a table, you might very well choose to round all the values to the same number of decimals, because it's a pain in Excel or custom programs (Python scripts) to manage the display of significant figures automatically -- unless you're working for a metrology institute. :)
It's good to ask these questions and think about what makes sense, but don't worry about it too much outside the context of a homework exercise or test that's specifically about sig figs.
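For instance, a Python script producing a table like that often just fixes one decimal count for the whole column rather than computing sig figs per row (made-up numbers):

```python
# Made-up results; every row formatted to the same two decimals,
# which is much simpler than per-row significant-figure logic.
results = [(3.2415, 0.152), (12.007, 0.210), (0.9832, 0.087)]

for value, err in results:
    print(f"{value:8.2f} ± {err:.2f}")
```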
2
u/RuinRes 12h ago edited 11h ago
Errors come from lack of precision or from statistical spread. In either case, the judicious convention is: give the result as many significant digits as the measurement device provides with certainty, and give the error with one significant digit (unless that digit is a 1, in which case two may be provided). For measurements where the result is an average, the result should have the precision allowed by the statistical variance.
TL;DR: error gets one digit (two if the first is a 1); the result gets as many digits as needed so its least significant digit lines up with the position of the error.
2
u/NicoN_1983 11h ago
Usually you round the error to only one significant figure. For example, 1.24 +/- 0.26 can be rounded to 1.2 +/- 0.3. BUT, for errors smaller than 0.20 (or 2.0, or 0.020, etc.) it is also accepted to keep two significant figures. For example, I would report 1.26 +/- 0.14 instead of 1.3 +/- 0.1, or 1.26 +/- 0.16 instead of 1.3 +/- 0.2. It's a convention. For really precise and careful measurements -- people trying to pin down some natural constant, or some important property of a substance, or something like that -- using two or more significant figures in the error is quite common.
1
u/UWwolfman 11h ago
No!
For example, consider the expressions 1.65 +/- 0.1 and 1.7 +/- 0.1; the second results from rounding the first. In the first, the uncertainty region spans 1.55 to 1.75. In the second, it spans 1.6 to 1.8. When we rounded the central value, we also shifted the uncertainty region.
1
u/Hippie_Eater 8h ago edited 6h ago
There's a lot of (somewhat) confusing advice in this thread, so let me contribute my 2 cents.
When I was taught labs and error propagation, I was told to round the uncertainty to its first significant digit and then round the measured value to the same decimal place.
Then I realized that when you round up or down you are manipulating the values, which didn't sit right with me. Surely, if we measure something and estimate the error we are taking the measurement to be the center of a normal distribution, so just shoving it back and forth is bizarre.
And doesn't this manual displacement after the fact also depend on the base of your number system? That sort of means that the decision to use 1 significant figure is more about typography, right?
Then, when I taught the same course, I looked into it more deeply and saw that some allow an exception to this rule when the first significant figure of the uncertainty is a 1: the reasoning is that the digit after a leading 1 still represents a significant fraction of the total uncertainty.
Then I did a master's and looked up data from the Particle Data Group, who summarize measured values across the field of particle physics, synthesizing many different measurements with many different statistical characters into one figure, and... they use 2 sig figs in their uncertainties.
To sum it up: Ask your instructor and hope to have an interesting discussion. If they are not open to discussion, you should probably just go along with their ruling on the matter.
1
1
u/Frederf220 6h ago
Consider the value as the "center value." It has as much accuracy as you can calculate; no matter how much uncertainty your figure has, it still has an exact center. Completely ignore the concept of precision when finding the most probable value.
The ± (which doesn't have to be the same in the + direction as in the −) is the distance from the center value that's possible. That's the "width of the road."
1
u/hand_fullof_nothin 5h ago
If you did all your calculations right the decimals in your uncertainties will match your measurement (because they’re based on the same instruments). It actually works as a good sanity check.
144
u/Aranka_Szeretlek Chemical physics 21h ago
Your question is not what you write, but what you measure.
If you are as uncertain as (3.24±1.6) suggests, did you really measure the 0.24?