Effect of the number of data sets in the collision file on the two-step method in FLUKA

Dear FLUKA experts,

I have a query and an observation regarding the two-step method in FLUKA. Usually, we look at the relative statistical error in the output, and if it is below about 10 % the result can be accepted; we set the number of histories accordingly, so that unnecessary computational time is avoided. If we are unable to get the output, or if the statistical error remains high despite using a large number of histories (this of course depends on the computational resources available to us), then appropriate biasing can be introduced if needed.

When a problem requires a two-step method, in the first step the position coordinates, energy, direction, weight, etc. are stored in a collision file and subsequently used in the second step. In this first step, we combine our collision files; let us say there are in total X data points for N1 histories. There is no way to predict beforehand whether these X data points are enough for the second step, or whether we should increase the number of histories (say to N2) so that we get more data points (say Y). Now, whether we use the X or the Y data points in the second step, the relative statistical error is less than 10 % in both cases. But the mean values differ slightly, and when comparing with a known model, the result with the Y data set is closer to the literature value.

Let me give one example to clarify this matter.

I have implemented some modifications of Source sampling with variable weight and probability - #7 by horvathd, and I am able to generate submersion kerma values, which I compared with ICRP 144.

Here is what I observed [estimated quantity = kerma rate, in nGy/h per Bq/m³]:

Situation A:
For 0.1 MeV, the cloud radius is 265 m. I have divided it into 7 shells and obtained 256836 data sets for 5E+9 histories. In the second step: FLUKA result: 1.74E-02; ICRP result: 1.75E-02. They are very close!

Situation B1:
For 1.5 MeV, the cloud radius is 550 m.

If I divide it into 7 shells, in the first step I get 61481 data sets for 5E+9 histories. In the second step: FLUKA result: 3.30E-01; ICRP result: 3.49E-01. Not very close!

Situation B2:
Now, if I increase the number of shells, i.e. divide it into 11 shells, I get 95569 data sets for 5E+9 histories. In the second step: FLUKA result: 3.48E-01; ICRP result: 3.49E-01. Very close!

In both B1 and B2, the relative statistical error in all bins of the USRBDX output (I have converted the USRBDX output into kerma externally) is well below 10 %, which is expected, I guess, because it is governed by the number of histories used in the second step.
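(For clarity, the external conversion I mention is just a folding of the scored boundary-crossing fluence with energy-dependent air-kerma coefficients. A minimal Python sketch with invented bin values; the normalisation to nGy/h per Bq/m³ depends on the source definition of the first step and is left as a placeholder:)

```python
import numpy as np

# Hypothetical USRBDX-style output: mean boundary-crossing fluence per primary
# in a few energy bins (all numbers are invented, for illustration only).
e_mid       = np.array([0.5, 1.0, 1.5])              # MeV, bin midpoints
fluence     = np.array([2.1e-7, 1.4e-7, 8.0e-8])     # cm^-2 per primary
kerma_coeff = np.array([2.4e-12, 4.4e-12, 6.0e-12])  # Gy cm^2 (assumed values)

# Normalisation from "per primary" to nGy/h per Bq/m^3 would enter here.
norm = 1.0

kerma_rate = norm * np.sum(fluence * kerma_coeff)
print(f"kerma rate (arbitrary normalisation): {kerma_rate:.3e}")
```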

For a validation study, we can proceed by trial and error to conclude how many data sets are required to get the result right. But apart from a validation study, is there any way to judge whether the number of data sets from the collision file is enough for the second step? There is no concept of statistical error when we are using the USERDUMP card and simply collecting particle information (position, energy, weight, etc.).

The mail has become rather long, but I hope I have been able to explain my query.

Thanks and regards,
Riya

Dear @riya

As you can read in this post, the statistics of the second step will never be better than what you obtained in the first step of your simulation. So even if the number of primaries in your second step is larger than in the first step, the resulting reduction of the statistical error is not real, since you are just resampling the same distribution multiple times.
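As a toy illustration (plain NumPy, nothing FLUKA-specific; the distribution and numbers are made up): resampling a fixed first-step sample makes the naive error estimate shrink without the result actually becoming more accurate.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy "first step": N1 primaries yield N1 scored values (stand-ins for the
# records that would end up in a collision file).
N1 = 10_000
first_step = rng.exponential(scale=1.0, size=N1)
error_first = first_step.std(ddof=1) / np.sqrt(N1)

# Toy "second step": many more primaries, but each one only resamples the
# same N1 records, so no new information is added.
N2 = 1_000_000
second_step = rng.choice(first_step, size=N2, replace=True)
error_naive = second_step.std(ddof=1) / np.sqrt(N2)

print(f"error limited by the first step : {error_first:.4f}")
print(f"naive second-step error estimate: {error_naive:.4f} (misleadingly small)")
```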

As a crude estimate of the number of primaries needed in your first-step simulation, you could proceed as follows: run a full simulation (without the splitting into two steps) with only a few primaries and then scale the number of primaries with the $\sim 1/\sqrt{N}$ rule to reach the desired precision of your results. But maybe someone with more expertise in this sort of simulation can give you a more helpful answer.
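In code, the scaling step would look something like this (the test-run numbers below are of course only illustrative):

```python
import math

def primaries_needed(n_test, rel_err_test, rel_err_target):
    """Scale the number of primaries with the ~1/sqrt(N) rule:
    rel_err ~ k / sqrt(N)  =>  N_target = N_test * (rel_err_test / rel_err_target)**2
    """
    return math.ceil(n_test * (rel_err_test / rel_err_target) ** 2)

# Example: if a short full (one-step) test run with 1E6 primaries gave a 40 %
# relative error, reaching ~10 % would need roughly 16 times more primaries.
print(primaries_needed(1_000_000, 0.40, 0.10))  # ~ 1.6E7
```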

Best,
Lorenzo

Dear @lorenzo.mercolli,

Yes, I agree with the first paragraph of your post, and that is exactly my concern.

In my present case, since the cloud dimensions are so large compared to the receptor, a single-step simulation does not give any result at all.

Regards,
Riya

Hi @riya,

If running just one step gives you no reasonable result at all on which to base your estimate of the required number of primary particles, then you might resort to looking at a different but linked quantity. In your case you are looking at energy deposition, and this quantity is of course linked to the fluence. The statistical significance of your energy deposition will certainly not be better than that of the fluence of the impinging particles. So I would suggest you look at the uncertainty of this quantity and try to lower it as much as (reasonably) possible. The uncertainty of the energy deposition will usually be higher than that, but this can be resolved by running the second step multiple times.
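For the "run the second step multiple times" part, a minimal sketch of how one could combine independent runs into a mean and a spread (the kerma-rate values below are invented):

```python
import numpy as np

# Hypothetical results of the same second step repeated with independent
# random seeds (kerma rate in nGy/h per Bq/m^3; values are made up).
runs = np.array([3.46e-1, 3.51e-1, 3.44e-1, 3.50e-1, 3.48e-1])

mean    = runs.mean()
std_err = runs.std(ddof=1) / np.sqrt(len(runs))  # standard error over the batches
# Note: this spread only reflects the second-step statistics (see below).

print(f"kerma rate = {mean:.3e} +/- {std_err:.1e} ({100 * std_err / mean:.1f} %)")
```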

Just keep in mind that the final statistical error estimates that you will see are meaningless because they do not include the systematic error coming from the uncertainties in the first step.

Hope that helps
Chris

Thank you @ctheis,

I will score another suitable quantity.

Regards,
Riya