Clarification about normalization of energy deposition (GeV/primary) to number of particles

Dear Expert,

I would like to ask for clarification regarding the correct normalization and scaling of energy deposition in a two-step FLUKA simulation.

Step 1

  • I simulated 1×10⁷ primary particles.

  • I scored a USRBDX fluence on a boundary.

  • The USRBDX scoring was:

    • Integrated over solid angle (let’s say I took 0 to 8 sr with a 1 sr interval, which I use to extrapolate to a small solid angle)

    • No area normalization was applied.

  • Therefore, the obtained spectrum represents the particle fluence per primary of Step 1, integrated over the selected solid angle.

  • Simulation was performed in 5 cycles

Step 2

  • I used the small solid-angle extrapolated USRBDX spectrum from Step 1 as a source for a second simulation.

  • I ran 1×10⁶ source particles, again using 5 cycles.

  • I understand that FLUKA internally normalizes and redistributes particles when a USRBDX spectrum is used as a source.

  • I scored region-wise energy deposition using USRBIN, which is given in GeV/primary. Let’s assume I obtained 0.1 GeV/primary.
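For illustration, the sampling a user source routine performs from a binned Step-1 spectrum can be sketched as follows. The bin edges and contents here are entirely made-up placeholders, not the actual USRBDX output:

```python
import numpy as np

# Hypothetical binned spectrum from Step 1: bin edges in GeV and
# fluence per bin (particles/primary). Values are illustrative only.
edges = np.array([0.0, 0.1, 0.2, 0.3, 0.4, 0.5])   # GeV
content = np.array([0.8, 0.4, 0.2, 0.07, 0.03])    # particles/primary per bin

def sample_energies(edges, content, n, rng=None):
    """Sample n source energies from a histogram spectrum (inverse CDF).
    Energies are drawn uniformly within the chosen bin."""
    rng = rng or np.random.default_rng()
    p = content / content.sum()                    # bin probabilities
    bins = rng.choice(len(content), size=n, p=p)   # pick a bin per particle
    return rng.uniform(edges[bins], edges[bins + 1])

energies = sample_energies(edges, content, 100_000)
```

Note that the sampling only uses the *shape* of the spectrum (the bin contents are normalized to probabilities), which is why the absolute normalization has to be restored afterwards by the spectrum integral.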

My question

I would like to ask the correct scaling procedure to obtain the physical total energy deposition corresponding to the number of particles used.

Should I multiply the USRBIN result by:

  1. The number of Step-1 primaries (1×10⁷)? (Or 5×10⁷ for the 5 cycles, although I understand that FLUKA averages over the 5 cycles.)
  2. Or the number of Step-2 primaries (1×10⁶)? (Or 5×10⁶ for the 5 cycles.)

Thank you very much for your help.

How do you do so? If you sample from the Step-1 spectrum in a source routine, you will get Step-2 results normalized to one particle crossing the USRBDX boundary within the indicated angular range. However, please note that 8 sr is not at all a small solid angle, and you should properly set the particle direction in the source routine. Also, it should preferably be a current spectrum rather than a fluence spectrum. As for the area normalization, it is irrelevant for the sampling.
Thereby, the Step-2 results should simply be multiplied by the spectrum integral (reported in the sum.lis file of Step 1). This will eventually give you values normalized to one Step-1 primary particle, to be further scaled by your physical beam intensity.
On the other hand, the two multiplication options you indicate at the end make no sense, since the number of simulated particles has no physical meaning.
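In other words, the scaling chain can be sketched with hypothetical numbers (the spectrum integral, the USRBIN result, and the beam intensity below are all placeholders):

```python
# Sketch of the normalization chain (all numbers are placeholders).
spectrum_integral = 1.5     # particles/primary, Step-1 sum.lis integral
edep_step2 = 0.1            # GeV per Step-2 source particle (USRBIN result)
beam_intensity = 1.0e10     # physical Step-1 primaries per second

# Normalize the Step-2 energy deposition to one Step-1 primary
edep_per_primary = edep_step2 * spectrum_integral   # GeV per Step-1 primary

# Scale by the physical beam intensity
edep_rate = edep_per_primary * beam_intensity       # GeV per second
```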

Dear @ceruttif ,

Thank you for your explanation. To clarify how I do it:

  • Step 1: I simulate 1×10⁷ electrons on tungsten (5 cycles) and score the photon fluence at a boundary using USRBDX, over 0–8 sr in 80 bins of 0.1 sr each (to observe the angular distribution of the photons).

For 0–8 sr, my sum.lis file gives me:

Total primaries run:         5000000
   Total weight of the primaries run:   5000000.00    


  Detector n:            1 (           1 )  divv1     
     (Area:            1.00000000      cmq,
      distr. scored:           7    ,
      from reg.           7  to            8 ,
      one way scoring,
      fluence scoring scoring)

     Tot. resp. (Part/cmq/pr)   1.497247      +/-  3.5496633E-02 %
     ( -->      (Part/pr)       1.497247      +/-  3.5496633E-02 % )

  • But my detector subtends a very small solid angle (~0.0001 sr), so scoring the spectrum directly in it gives poor statistics. Therefore I use the 60-bin fluence spectrum to extrapolate the photon distribution down to this very small solid angle, and integrate the resulting spectrum over 0–0.0001 sr to get a dN/dE spectrum. The integral of this spectrum gives the total number of particles, which is approximately 2×10⁻⁵.

(Although I understand that I don’t need the full 0–8 sr range; I could perform the extrapolation using only the 0–2 or 0–3 sr range.)
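As a sketch (with made-up numbers, not my actual spectrum), the extrapolation and the small-solid-angle integration look like this; the angular density, the linear extrapolation, and the detector solid angle are all illustrative assumptions:

```python
import numpy as np

# Illustrative angular density in 0.1-sr bins over 0-8 sr (80 bins).
# The exponential shape is a placeholder, not a real photon distribution.
omega_centers = np.arange(0.05, 8.0, 0.1)   # sr, bin centers
dn_domega = np.exp(-omega_centers)          # energy-integrated density, 1/sr

# Linearly extrapolate the density toward Omega = 0 from the first two bins
slope = (dn_domega[1] - dn_domega[0]) / 0.1
density_at_zero = dn_domega[0] - slope * 0.05

# Integrate over the detector's tiny solid angle
detector_omega = 1e-4                                 # sr
n_in_detector = density_at_zero * detector_omega      # particles/primary
```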

  • I use this dN/dE spectrum, corresponding to my detector’s solid angle, in the second simulation via a custom source routine, directing the particles toward the detector. USRBIN then gives the energy deposition per region, for example 5×10⁻⁵ GeV/primary.

Now, my understanding is that I am essentially only interested in the spectrum within my detector’s solid angle, 0 to 0.0001 sr, which I obtained by extrapolating from the 0–8 sr scoring. Therefore, although the sum.lis file for the full 60 bins gives 1.497 particles/primary, I think I should not use this value; instead, I should use the total integral of my extrapolated spectrum, which gives 2×10⁻⁵ particles/primary.

In Step 2 I sample 1×10⁶ particles over 5 cycles, but this affects only the statistics, not the normalization.

So, to scale my energy deposition, it should be

E_dep per electron = 5×10⁻⁵ GeV/photon × 2×10⁻⁵ photons/electron = 1×10⁻⁹ GeV/electron

This can then be scaled to any number of electrons. For 1×10⁷ electrons it gives E = 1×10⁷ × 1×10⁻⁹ GeV = 0.01 GeV.
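As a quick check of the arithmetic, using the numbers from this post:

```python
# Check of the scaling arithmetic above (numbers from the example).
edep_per_photon = 5e-5        # GeV per Step-2 source photon (USRBIN result)
photons_per_electron = 2e-5   # integral of the extrapolated spectrum

# Energy deposition normalized to one Step-1 electron
edep_per_electron = edep_per_photon * photons_per_electron   # ~1e-9 GeV

# Scale to a chosen number of electrons, e.g. 1e7
total_edep = edep_per_electron * 1e7                         # ~0.01 GeV
```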

Is it correct?

  1. So, my main question is about the normalization factor for the E_dep value in the Step-2 simulation, i.e. the spectrum integral.

For this, should I use the integral value over 0–8 sr, which gives 1.497 particles/primary, from the sum.lis file?

OR

Should I use the integral value of the extrapolated spectrum over 0–0.0001 sr (corresponding to the detector’s solid angle), which is 2×10⁻⁵ particles/primary?

  2. Plus,

My other question is: if I don’t multiply my result by the spectrum integral, what would it mean? Would it be irrelevant?

Thank you very much for your time.

Regards

Shubham Agarwal

Correct.

Correct.

It will give the energy deposited by one photon reaching the detector.

Dear @ceruttif ,

Thank you very much for the clarification.

Regards

Shubham