Efficiency calibration of an HPGe detector

Dear FLUKA experts,
I am trying to model my HPGe detector in order to calculate the efficiency calibration for different geometries.
In order to validate the model, I am comparing experimental measurements of my check source with my simulations.

Starting with a simple front-side configuration, the results show some discrepancy at lower energies, as you can see in the following table:

  1. I suspect that the discrepancy in the low-energy range is related to my EMF cut settings. Could you check my input file and tell me if I’m setting something wrong?

  2. I’m using a source routine to simulate my check source with multiple radionuclides (Na-22 + Eu-155), with the relative activity fractions corrected for radioactive decay. The original activity fraction was 50-50 %, while the actual fractions I implemented in the source routine are 53 % (Eu-155) and 47 % (Na-22). The source has a cylindrical shape and matches the region called “active”. Given that the FLUKA results are normalized per total activity, I multiply them by 2 (the inverse of the initial relative activity fraction) and divide by the respective branching ratios.
    Did I make any mistakes?
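As an arithmetic sketch of the normalization step described above (all numerical values below are placeholders, not results from the actual simulation):

```python
# Sketch of the normalization described in point 2; all numbers are hypothetical.

# Full-energy-peak counts per primary from FLUKA (normalized per total activity)
peak_per_primary = {"Na-22": 1.2e-3, "Eu-155": 0.8e-3}  # placeholder values

# Gamma emission probabilities (branching ratios); illustrative only,
# take real values from nuclear data tables
branching = {"Na-22": 0.999, "Eu-155": 0.32}

# Factor of 2 = inverse of the initial 50-50 % relative activity fraction
inv_fraction = 2.0

# Efficiency per decay of each nuclide, following the procedure above
efficiency = {n: peak_per_primary[n] * inv_fraction / branching[n]
              for n in peak_per_primary}
```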

AEGIS.flair (5.0 KB)
AEGIS.inp (3.5 KB)
source.f (9.8 KB)

Thank you.

Dear @corrado.tine,

Could you please let me know which version of FLUKA you are using? Your user routine seems to have the INCLUDEs in an old format.

Just to double-check: by efficiency, do you mean the particle detection efficiency (number of detected particles vs. total passing)?

  1. You have already set the EMF cards at the lowest possible values.

  2. You can use a source routine with the two sources or, alternatively, run the simulations separately and then rescale the results with the corresponding relative activities.
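The second alternative (separate runs, then rescaling) could be sketched like this, with hypothetical per-nuclide spectra and the relative activity fractions from the original post:

```python
# Hypothetical: combine two independently simulated spectra, each normalized
# per decay of its own nuclide, weighted by the relative activity fractions.
spectrum_na22 = [0.0, 1.0e-4, 5.0e-4]    # counts per decay, per energy bin (placeholder)
spectrum_eu155 = [2.0e-4, 3.0e-4, 0.0]   # placeholder

f_na22, f_eu155 = 0.47, 0.53  # relative activities at measurement time

# Weighted sum, bin by bin
combined = [f_na22 * a + f_eu155 * b
            for a, b in zip(spectrum_na22, spectrum_eu155)]
```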

Regarding the normalization, according to the Note 1 of the DCYSCORE card in the user manual:

 If WHAT(1) = -1, all quantities are normalized per unit primary weight, or per decay if the source has been defined as a radioactive isotope.

As is your case.
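As a hedged sketch of what that per-decay normalization implies for a single gamma line (values below are placeholders): the score only needs to be divided by the emission probability of the line, with no extra activity factor.

```python
# If the score is already normalized per decay (DCYSCORE with WHAT(1) = -1 and
# a radioactive-isotope source), only the branching ratio enters.
peak_counts_per_decay = 4.0e-4   # hypothetical FLUKA score
emission_probability = 0.999     # hypothetical branching ratio of the line

efficiency = peak_counts_per_decay / emission_probability
```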