Normalization in two-step simulations

Dear David @horvathd ,

Sorry, one quick query. Using the sampling method you explained above, let's say we are storing particle position, direction, weight, and energy using USRDUMP and mgdrawBDX. Normally, in a two-step method, we have to normalize the result obtained in the second step by multiplying it by (total number of data points recorded in the first step / total number of histories in the first step). In this scenario, will that factor be the same, or will it be different since we introduced variable weights while sampling?

Regards,
Riya

Dear Riya,

There is no difference in the normalization, except that you have to normalize with the total primary weight instead of the total particle number. This also applies if biasing was used during step 1.

So the normalization formula is:

$$\mathrm{Result}_{\text{normalized}} = \mathrm{Result}_{\text{step 2}} \cdot \frac{\text{Tot. weight}_{\text{recorded}}}{\text{Tot. weight}_{\text{step 1}}}$$

This assumes that the recorded weights are correctly applied in the source routine for step 2.
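
For illustration, here is a minimal Python sketch of that normalization with purely hypothetical numbers (the variable names and values are not from FLUKA output; they only mirror the terms in the formula above):

```python
# Hypothetical illustration of the normalization formula above.
# result_step2:        score obtained in step 2, per unit primary weight of step 2
# tot_weight_recorded: sum of the weights written to the collision file(s) in step 1
# tot_weight_step1:    total primary weight of step 1

result_step2 = 3.2e-4        # hypothetical step-2 result
tot_weight_recorded = 1.8e5  # hypothetical sum of recorded weights
tot_weight_step1 = 1.0e6     # hypothetical total primary weight of step 1

result_normalized = result_step2 * tot_weight_recorded / tot_weight_step1
print(f"Normalized result: {result_normalized:.3e}")  # per unit step-1 primary weight
```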

Cheers,
David


Dear @horvathd ,

Please correct me if I am wrong.

Total recorded weight = sum of the weights printed in all collision files. Am I right?

What is the total weight in step 1 in this context (considering the variable weights assigned to different areas of the source volume during sampling)? Is there any way that FLUKA prints that value? For example, the USRTRACK output prints the total number of histories.

Regards,

Riya

Dear Riya,

You are correct; I have updated my previous post to make it clearer.

The total primary weight is simply the sum of the weights of the primaries. It is printed at the end of the *xxx.out file.
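
Putting the two pieces together, a rough Python sketch of the bookkeeping is shown below. It assumes the collision files are plain-text dumps written by a user mgdraw routine, with the weight in a fixed column; the file-name pattern and the column index are hypothetical and must be adapted to your own format, and the total primary weight is taken here as a hand-copied value from the step-1 output file:

```python
import glob

# Column index of the particle weight in each dumped record
# (hypothetical -- adjust to your own mgdraw text format).
WEIGHT_COLUMN = 4

# Sum the recorded weights over all step-1 collision files (hypothetical file names).
tot_weight_recorded = 0.0
for fname in glob.glob("collision_*.txt"):
    with open(fname) as f:
        for line in f:
            fields = line.split()
            if len(fields) > WEIGHT_COLUMN:
                tot_weight_recorded += float(fields[WEIGHT_COLUMN])

# Total primary weight of step 1, read off the end of the step-1 *xxx.out file
# (hypothetical value, entered by hand here).
tot_weight_step1 = 1.0e6

# Apply the normalization from the formula above to a hypothetical step-2 result.
result_step2 = 3.2e-4
result_normalized = result_step2 * tot_weight_recorded / tot_weight_step1
print(f"Total recorded weight: {tot_weight_recorded:.3e}")
print(f"Normalized result:     {result_normalized:.3e}")
```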

Cheers,
David

Thank you so much, David @horvathd, for the detailed explanation.

Regards,
Riya