
*From*: Paul Nord <Paul.Nord@valpo.edu>
*Date*: Tue, 12 Oct 2021 16:03:00 -0500

Thanks, John!

This is the second lab we do in this course. We give them relatively simple tools to acquire the data. It's early in the pedagogy and it is enough to tell them, "one click on the counter means that one decay happened in the sample." They move on to decay spectrum analysis, coincidence timing measurements, dE/dx, and other topics with better tools. This lab is a bit of an intro to the fundamentals of atomic and nuclear physics.

In the analysis, we ask them to try a variety of techniques to extract the half-lives of the two isotopes. While I'm a little disappointed that my analysis suggests that the best we can do is to measure these values within 10% of the accepted value, I'm much happier with this MCMC method.

Traditionally we tell them that one can fit part of the curve, subtract that from the data, fit the other part of the curve, and go back and forth. One can measure the background independently and then subtract that from the data before fitting. Or one could mix up the order of those fits and repeat the operation until they get an answer they like. Aye, there's the rub.
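For anyone who wants to see the traditional procedure spelled out, here is a rough sketch of the fit-the-tail, subtract, fit-the-head sequence. The data, half-lives, amplitudes, and background here are all invented for illustration; this is not the actual lab analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

# Synthetic two-component decay plus background; all parameters invented.
t = np.arange(0.0, 300.0, 0.5)                       # minutes
lam_short = np.log(2) / 2.5                          # half-life 2.5 min
lam_long = np.log(2) / 60.0                          # half-life 60 min
true_rate = 400*np.exp(-lam_short*t) + 80*np.exp(-lam_long*t) + 5.0
counts = rng.poisson(true_rate).astype(float)

def one_exp(t, A, lam, bkg):
    return A*np.exp(-lam*t) + bkg

# Step 1: fit the tail, where the short-lived component has died away.
tail = t > 25.0
p_long, _ = curve_fit(one_exp, t[tail], counts[tail], p0=(100, 0.01, 5))

# Step 2: subtract the long-lived component and background, then fit
# the short-lived component in the early part of the curve.
residual = counts - one_exp(t, *p_long)
head = t < 10.0
p_short, _ = curve_fit(lambda t, A, lam: A*np.exp(-lam*t),
                       t[head], residual[head], p0=(300, 0.2))

half_life_short = np.log(2) / p_short[1]
half_life_long = np.log(2) / p_long[1]
```

One could then iterate: re-subtract the refined short-lived fit from the tail and repeat, which is exactly the back-and-forth described above.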

The data can be found here:

https://github.com/paulnord/bayesian_analysis/blob/main/my_data.csv

The columns are end_time, total_counts. Counts were recorded every 30 seconds for the first 15 minutes. Then every 12 hours two readings were taken, spaced two minutes apart. The students told me that t=0 is when they put the sample in, but you may choose to question that assumption.
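Reading the file and converting to per-bin rates might look like the sketch below. The sample rows are made up, not taken from the real file, and it is an assumption here that total_counts is a cumulative counter (so differencing adjacent rows gives counts per bin); check that against the actual data before trusting it.

```python
import io
import csv

# Hypothetical rows in the my_data.csv format (end_time, total_counts);
# values invented for illustration, assuming total_counts is cumulative.
sample = io.StringIO("end_time,total_counts\n30,412\n60,795\n90,1151\n")

rows = list(csv.DictReader(sample))
times = [float(r["end_time"]) for r in rows]
totals = [float(r["total_counts"]) for r in rows]

# Counts in each bin from differences of the cumulative counter,
# and the corresponding count rate (counts per unit time).
bin_counts = [b - a for a, b in zip([0.0] + totals[:-1], totals)]
bin_widths = [b - a for a, b in zip([0.0] + times[:-1], times)]
rates = [c / w for c, w in zip(bin_counts, bin_widths)]
```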

I did this lab as an undergraduate. What I remember most was that even though the data collection was very simple, the analysis was very complex. It seems that over the years we have determined an order of operations and a data collection sequence that generally give answers in good agreement with accepted values. If I'm interpreting what the MCMC is doing correctly, it has considered a thousand possible parameterizations of the model and optimized those to match the data. The mean values from all of those models do reasonably agree with the data. But the data does not constrain the model sufficiently to report values as precise as we've been getting from our analysis.
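For readers unfamiliar with what the sampler is doing under the hood, here is a toy version: a hand-rolled random-walk Metropolis chain on a one-isotope Poisson model. This is not the PyStan model itself; the data, priors, step sizes, and parameters are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic one-isotope data; the true parameters here are invented.
t = np.arange(0.0, 100.0, 2.0)
true_A, true_lam = 200.0, 0.05            # half-life = ln(2)/0.05 ~ 13.9
counts = rng.poisson(true_A * np.exp(-true_lam * t))

def log_post(A, lam):
    """Poisson log-likelihood with flat priors on A, lam > 0."""
    if A <= 0 or lam <= 0:
        return -np.inf
    mu = A * np.exp(-lam * t)
    return np.sum(counts * np.log(mu) - mu)

# Random-walk Metropolis: propose a nearby parameterization, keep it
# with probability min(1, posterior ratio). The kept points, taken
# together, sample the posterior distribution of (A, lam).
A, lam = 150.0, 0.1
lp = log_post(A, lam)
chain = []
for _ in range(5000):
    A2, lam2 = A + rng.normal(0, 5.0), lam + rng.normal(0, 0.002)
    lp2 = log_post(A2, lam2)
    if np.log(rng.uniform()) < lp2 - lp:
        A, lam, lp = A2, lam2, lp2
    chain.append((A, lam))

post = np.array(chain[1000:])             # drop burn-in
half_life = np.log(2) / post[:, 1]        # posterior samples of half-life
```

The spread of `half_life` across the chain, rather than a single fitted number, is the honest statement of what the data constrain.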

Paul

On Tue, Oct 12, 2021 at 2:41 PM John Denker via Phys-l <phys-l@mail.phys-l.org> wrote:

On 10/12/21 11:34 AM, Paul Nord wrote:

I was hoping for some feedback on this analysis. Specifically, what did you think of my conclusion:

"All of these models generate curves which are very close to the data. While the errors seem very large, they are actually a better representation of the true uncertainty in applying this model to this data. Many least-squares fitting functions will give uncertainties which give too much confidence in the model predictions."

I have been following this with interest.

Here's why this is important: AFAICT there are very few examples of assignments where students are expected to measure the uncertainty of the actual data set.

In contrast, there are eleventy squillion assignments where they are required to calculate a predicted uncertainty, but then don't check it against experiment, which is IMHO insane.

So my zeroth-order feedback is: You're on the right track. AFAICT you are making solid progress in an important direction. I'll help if I can.

=============

At the next level of detail, I don't know enough to say anything definite about the conclusion quoted above. However I will say:

-- As a rule of thumb, it's true that:
   a) most least-squares routines are trash, even when applied to Gaussians.
   b) applying least squares to Poisson data is begging for trouble.
   c) when there are 5 fitting parameters, it's likely that there are correlations, whereupon the whole notion of "error bars" becomes problematic (to put it politely).
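Point (b) can be made concrete with a small worked example. For a constant Poisson rate, the maximum-likelihood estimate is the sample mean, while the common "Neyman" chi-square choice (weighting each bin by its own count) minimizes to the harmonic mean and so is biased low whenever the counts fluctuate. The numbers below are invented for illustration:

```python
import numpy as np

# Toy counts (invented), with some low bins as in the tail of a decay curve.
y = np.array([9, 4, 2, 1, 6, 3, 2, 5, 1, 7], dtype=float)

# Poisson maximum likelihood for a constant rate mu: the sample mean.
mu_mle = y.mean()

# Neyman chi-square, sum((y - mu)^2 / y), treats sigma_i^2 = y_i.
# Setting its derivative to zero gives mu = n / sum(1/y), the harmonic
# mean, which sits below the arithmetic mean for fluctuating counts.
mu_chi2 = len(y) / np.sum(1.0 / y)
```

Here `mu_mle` is 4.0 while `mu_chi2` comes out near 2.4, an underestimate of roughly 40% from the weighting choice alone.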

-- If you post the raw data somewhere (google docs or whatever) I might find time to take a whack at it with my tools. No promises.

I assume each row in the data set is of the form

   bin start time, bin end time, counts in the bin

or something like that. Time-stamped events would be better, but I can cope with binned data if necessary.

_______________________________________________
Forum for Physics Educators
Phys-l@mail.phys-l.org
https://www.phys-l.org/mailman/listinfo/phys-l

**Follow-Ups**:
- **Re: [Phys-L] Analysis of Half-Life measurement using PyStan** *From:* John Denker <jsd@av8n.com>
- **Re: [Phys-L] Analysis of Half-Life measurement using PyStan** *From:* John Denker <jsd@av8n.com>
- **Re: [Phys-L] Half-Life measurement** *From:* John Denker <jsd@av8n.com>

**References**:
- **[Phys-L] re Bayesian Inference in Half-Life measurement** *From:* Brian Whatcott <betwys1@sbcglobal.net>
- **[Phys-L] Analysis of Half-Life measurement using PyStan** *From:* Paul Nord <Paul.Nord@valpo.edu>
- **Re: [Phys-L] Analysis of Half-Life measurement using PyStan** *From:* Paul Nord <Paul.Nord@valpo.edu>
- **Re: [Phys-L] Analysis of Half-Life measurement using PyStan** *From:* John Denker <jsd@av8n.com>
