Dear Giuseppe,

Thank you for your response. I apologize for giving you a wrong formula earlier; to clarify:

I considered all the bins in the *_tab.lis data, took the weighted sum of the percentage errors (Counts_i * error_i), and then divided by the sum of the counts, as shown. I believe this is the count-weighted average of the percentage errors, which I call the average uncertainty.
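In case it helps, here is a minimal sketch of that calculation (the function name and the example bin values are mine, not from the actual *_tab.lis file; the column indices would need to match the real file layout):

```python
import numpy as np

def average_uncertainty(counts, pct_errors):
    """Count-weighted average of per-bin percentage errors:
    U = sum(C_i * e_i) / sum(C_i)."""
    counts = np.asarray(counts, dtype=float)
    pct_errors = np.asarray(pct_errors, dtype=float)
    return (counts * pct_errors).sum() / counts.sum()

# Example with made-up bin values:
print(average_uncertainty([100.0, 50.0, 10.0], [5.0, 10.0, 40.0]))
# -> 8.75
```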

The presentation (https://indico.cern.ch/event/1352709/contributions/5822000/attachments/2837035/4958530/02_Monte_Carlo_Basics_2024_INTA.pdf) states that these error bars decrease as the number of primaries simulated (NPS) increases, and that is indeed true. The average value computed above also decreases with increasing NPS, BUT my problem is that I did not see it follow the expected 1/sqrt(N) relationship.

For instance, in a simulation where I set NPS = 600k in the input file and ran 5 cycles, I got this from my calculations:

Average Uncertainty (U_1) = 42.7305

Total Primaries Run (N_1) = 3000000

But with NPS = 55000000 in the input file, still running 5 cycles, I get this:

Average Uncertainty (U_2) = 12.1090

Total Primaries Run (N_2) = 275000000

and therefore:

N_2/N_1 = 91.67

(U_1/U_2)^2 = 12.45

Even the values from the *_sum.lis for the Total response do not follow this relationship, as shown:

N_1 = 3000000

sigma_1 = 0.2291394 %

N_2 = 275000000

sigma_2 = 5.9575517E-02 %

N_2 /N_1 = 91.67

(sigma_1/sigma_2)^2 = 14.79

The reason for doing this is that I wish to use the uncertainty and NPS from the first simulation (a few primaries) to determine the NPS that will result in an uncertainty below a certain value. That is, if my target average uncertainty (U_2) is to be less than 10%, then I need to have

N_2 > N_1 * (U_1/U_2)^2. However, even after rounding NPS up to the next 100k, I still got an average uncertainty of 12.1%.
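For reference, here is a sketch of the extrapolation I attempted (the function name and the rounding-to-100k step are my own; it simply assumes the uncertainty scales as 1/sqrt(N)):

```python
import math

def required_nps(n1, u1, u_target):
    """Extrapolate primaries needed, assuming U ~ 1/sqrt(N):
    N2 = N1 * (U1 / U_target)^2, rounded up to the next 100k."""
    n2 = n1 * (u1 / u_target) ** 2
    return math.ceil(n2 / 100_000) * 100_000

# Using the numbers from the first run above (U_1 = 42.7305 %, N_1 = 3e6):
print(required_nps(3_000_000, 42.7305, 10.0))
```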

Here are the files:

example.inp (3.4 KB)

example_46_sum.lis (98.8 KB)

example_46_tab.lis (77.9 KB)

output_fort_46_sum.lis (174.3 KB)