Dear FLUKA experts,
I have a query and an observation regarding two-step methods in FLUKA. Usually we look at the relative statistical error in the output; if it is below 10%, the result can be accepted, and we set the number of histories accordingly (so that unnecessary computational time is avoided). If we cannot get the output at all, or if the statistical error stays high despite using a large number of histories (which of course depends on the computational facilities available to us), then appropriate biasing can be introduced if needed.
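(As an aside, the scaling I rely on for setting the number of histories is just the usual 1/sqrt(N) behaviour of Monte Carlo statistics. A minimal Python sketch of that back-of-the-envelope estimate, with purely hypothetical numbers:)

```python
import math

def histories_needed(n_current, rel_err_current, rel_err_target):
    """Rough estimate of the histories needed to reach a target relative
    error, assuming the usual 1/sqrt(N) scaling of Monte Carlo statistics."""
    return math.ceil(n_current * (rel_err_current / rel_err_target) ** 2)

# hypothetical example: 1E+8 histories gave a 25 % relative error;
# to reach the 10 % acceptance level we would need roughly:
print(histories_needed(1e8, 0.25, 0.10))  # -> 625000000
```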
When a problem requires a two-step method, in the first step the position coordinates, energy, direction, weight, etc. are stored in a collision file, which is subsequently used as the source in the second step. Suppose we combine our collision files and end up with X data points in total for N1 histories. There is no way to predict beforehand whether these X data points are enough for the second step, or whether we should increase the number of histories (say to N2) so as to get more data points (say Y). Now, whether we use the X or the Y data points in the second step, the relative statistical error is below 10% in both cases; but the mean values differ slightly, and when comparing with a known model, the result with the Y data set is closer to the literature value.
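(For concreteness, this is roughly how I merge and count the first-step dumps. It is only a sketch, assuming a plain-text dump with one particle per line in the order x y z u v w E weight; the actual column layout depends on one's own mgdraw/USERDUMP setup:)

```python
import glob

# Minimal sketch of merging per-run collision files and counting records.
# Assumed format: plain text, one particle per line, columns
#   x y z  u v w  E  weight
# (adapt to your own mgdraw / USERDUMP output).
records = []
for path in sorted(glob.glob("run*/collision*.dat")):
    with open(path) as f:
        for line in f:
            cols = line.split()
            if len(cols) >= 8:
                records.append([float(c) for c in cols[:8]])

print(f"X = {len(records)} phase-space records available for step 2")
```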
Let me give one example to clarify this matter.
I have implemented some modifications based on Source sampling with variable weight and probability - #7 by horvathd, and I am able to generate submersion kerma values, which I have compared with ICRP 144.
Here is what I observed [estimated quantity: kerma rate, in nGy/h per Bq/m3]:
Situation A:
For 0.1 MeV, the cloud radius is 265 m. I divided it into 7 shells and got 256836 data sets for 5E+9 histories. In the second step: FLUKA result: 1.74E-02; ICRP result: 1.75E-02. They are very close!
Situation B1:
For 1.5 MeV, the cloud radius is 550 m.
If I divide it into 7 shells, in the first step I get 61481 data sets for 5E+9 histories. In the second step: FLUKA result: 3.30E-01; ICRP result: 3.49E-01. Not very close!
Situation B2:
Now, if I increase the number of shells, i.e. divide the cloud into 11 shells, I get 95569 data sets for 5E+9 histories. In the second step: FLUKA result: 3.48E-01; ICRP result: 3.49E-01. Very close!
In both B1 and B2, the relative statistical error in all bins of the USRBDX output (which I convert to kerma externally) is well below 10%. This is expected, I guess, because that error is governed by the number of histories used in the second step.
For a validation study we can proceed by trial and error to decide how many data sets are needed to reproduce the result correctly. But apart from a validation study, is there any way to judge whether the number of data sets in the collision file is enough for the second step? There is no concept of statistical error when we use the USERDUMP card to collect the particle information (position, energy, weight, etc.).
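(One check I can imagine, sketched below, is to split the merged collision file into k batches and look at the batch-to-batch spread of some weighted proxy quantity, e.g. the mean particle energy. If that spread is large, the first-step sample itself is probably too small, even when the second-step tally error looks fine. This is only an illustrative sketch, not an official FLUKA recipe:)

```python
import random
import statistics

def batch_spread(records, k=10, seed=42):
    """Relative standard error of the weighted mean energy, estimated from
    the spread among k batches of the phase-space file.
    records: list of [x, y, z, u, v, w, E, weight] entries."""
    rng = random.Random(seed)
    shuffled = records[:]
    rng.shuffle(shuffled)
    means = []
    for i in range(k):
        batch = shuffled[i::k]                      # every k-th record
        wsum = sum(r[7] for r in batch)
        means.append(sum(r[6] * r[7] for r in batch) / wsum)
    m = statistics.mean(means)
    s = statistics.stdev(means)
    return s / m / (k ** 0.5)

# e.g. rel_err = batch_spread(records); if this is not comfortably below
# the 10 % criterion, rerun step 1 with more histories before step 2.
```

Would such a check be meaningful, or is there a recommended way to assess the statistical adequacy of the collision file itself?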
This has become a long post, but I hope I have managed to explain my query.
Thanks and regards,
Riya