Spectrum sampling in source_newgen.f

Dear @horvathd ,

Continuing the discussion from Questions about source_newgen.f:

  1. I have some doubts regarding your spectra. For example, in the spectrum shared in the above discussion, at the initial point, x = 1E-11 GeV (0.01 eV), the y value is nearly 1E+10 neutrons per cm2 per GeV; whereas in the neutron_spectrum.txt file the value at 0.01 eV is 2.37979E+14 neutrons per cm2 per eV, which does not match the FLUKA-obtained spectrum.

  2. Apart from this, using the text file and the source.f shared in the discussion, I tried to reproduce the spectrum you obtained, but I got a slightly lower value. Is my input correctly set up to reproduce the source spectrum, or am I missing some neutrons because of my geometry? (I only made a small change in the source file by adding point-source coordinates.)

neutron_spectrum.txt (2.9 KB)
en.flair (1.9 KB)
en.inp (1.2 KB)
source_newgen.f (9.3 KB)

Regards,
Riya

Dear Riya,

  1. The difference in the previous topic comes from the normalization of the results. As you know, FLUKA results are normalized per primary particle (weight), while the real measurement takes into account the real intensity of the beam.

    Furthermore, during the reading of the spectrum file, the intensities are normalized to create a CDF with a maximum value of one.

  2. In your new simulation you are using a USRTRACK scoring instead of a USRBDX one. While the USRTRACK scoring is called a fluence scoring, in reality it is a track length density scoring, which works differently from the USRBDX scoring. Thus it gives a similar, but not identical, result.

    If you take a look at the unnormalized, integrated results in the sum.lis files for both scorings, you will see that:

    • USRBDX scoring will give you approximately 1.0, meaning one neutron crosses the surface per primary neutron.

    • USRTRACK result will be around 4.0, meaning the neutrons will travel 4 cm in the TARGET region per primary neutron.

    Further increasing the difference, you also included the volume of the region in the normalization, thus numerically decreasing the result from the USRTRACK scoring.
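The distinction between the two estimators can be illustrated numerically. The following is a minimal Python sketch (not FLUKA code, with made-up track lengths and volume) of how a boundary-crossing count and a track-length density estimate are formed:

```python
# Illustration only: how the two estimators differ numerically.
# USRBDX-like: count particles crossing a surface, per primary.
# USRTRACK-like: sum track lengths in a region, per primary, per unit volume.

tracks_cm = [4.1, 3.8, 4.2, 3.9]  # hypothetical track length of each primary in TARGET (cm)
crossings = [1, 1, 1, 1]          # each primary crosses the scoring surface once
volume_cm3 = 1.0                  # hypothetical TARGET volume (cm^3)

n_primaries = len(tracks_cm)
usrbdx_like = sum(crossings) / n_primaries                  # ~1 crossing per primary
usrtrack_like = sum(tracks_cm) / n_primaries / volume_cm3   # ~4 cm of track per cm^3 per primary

print(usrbdx_like, usrtrack_like)
```

With a larger volume in the denominator, the USRTRACK-like value shrinks further, which is the numerical effect described above.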

I hope this clarifies the differences.

Cheers,
David


4 posts were split to a new topic: Converting a histogram to a spectrum

8 posts were split to a new topic: Difference between USRBDX and USRTRACK

Thank you @horvathd for the useful discussion.

Regarding this comment, should I increase the target volume or decrease it while using USRBDX?

Regards,
Riya

Dear Riya,

You should always use the appropriate normalization for your scorings; I only mentioned this to explain the difference in the numerical values of your results.

Cheers,
David


Dear @horvathd,

Sorry to open this thread again. When you have time, could you please explain the code you wrote for spectrum sampling?

What does this LUNOUT refer to ?

     if ( first_run ) then
        write ( LUNOUT, * ) '# ------------------------------------------------'
        write ( LUNOUT, * ) '#'
        write ( LUNOUT, * ) '# Sampling from spectrum requested'
        write ( LUNOUT, * ) '#'

What is the scale factor here ? What does this energy_unit refer to?

        scale_factor = energy_unit_to_scale_factor(energy_unit)

        bins = -1

        call OAUXFI( filename, LUNRDB, 'OLD', ierr )

        if ( ierr .eq. 1 ) then
           call FLABRT('sample_spectrum_momentum_energy', 'Unable to open spectrum file')
        end if

What are t1, t2 here? Are these the data present in the spectrum txt file?

        do
           read ( LUNRDB, *, iostat = io ) t1, t2
           if ( io .eq. 0 ) then
              bins = bins + 1
           ! Failed to read line correctly, possible comment - skipping
           else if ( io .gt. 0 ) then
              cycle
           ! End of file
           else if ( io .lt. 0 ) then
              exit
           end if
        end do

        allocate( energy( 0:bins ) )
        allocate( intensity( 0:bins ) )
        allocate( probability( 0:bins ) )

        write ( LUNOUT, "(A, I6)" ) ' # Number of points: ', bins
        write ( LUNOUT, * ) '#'
        write ( LUNOUT, * ) '# Processed spectrum:'
        write ( LUNOUT, * ) '#'


Which line ensures the conversion of eV to GeV? The spectrum file had units in GeV. Don’t we have to incorporate a factor of 1E-9 with the input energy data?

        write ( LUNOUT, * ) '# E [GeV],       Intensity'

        line = 0

        rewind( LUNRDB )

        do
           read ( LUNRDB, *, iostat = io ) t1, t2
           if ( io .eq. 0 ) then

What is the purpose of scale_factor multiplication here ? Is it for the conversion of eV to GeV?

              energy(line) = t1 * scale_factor
              intensity(line) = t2 * scale_factor

              write ( LUNOUT, "(2ES15.7)" ) energy(line), intensity(line)

              line = line + 1
           ! Failed to read line correctly, possible comment - skipping
           else if ( io .gt. 0 ) then
              cycle
           ! End of file
           else if ( io .lt. 0 ) then
              exit
           end if
        end do

        probability(0) = ZERZER
    

Does this block represent the formulation of the CDF?

        do line = 1, bins
           probability(line) = (energy(line) - energy(line-1)) * (intensity(line) + intensity(line - 1)) + probability(line - 1)
        end do

        write ( LUNOUT, * ) '#'
        write ( LUNOUT, * ) '# End of spectrum'
        write ( LUNOUT, * ) '#'
        write ( LUNOUT, * ) '# ------------------------------------------------'

        probability = probability / probability(bins)

        first_run = .false.

     end if

     random_probability = FLRNDM( xdummy )

     do line = 1, bins
        if (probability(line) .ge. random_probability) then
           random_bin = line
           exit
        end if
     end do

     sample = (random_probability - probability(line - 1)) / (probability(line) - probability(line - 1))

     if (intensity(line) == intensity(line - 1)) then
        continue
     else

What does this first line of sample mean? What is the role of the sqrt here?

        sample = sample * (intensity(line) - intensity(line - 1)) * (intensity(line) + intensity(line - 1))
        sample = (sqrt(sample + intensity(line - 1)**2) - intensity(line - 1)) / (intensity(line) - intensity(line - 1))
     end if

     sample = sample * (energy(line) - energy(line - 1)) + energy(line - 1)

What is this ANGLGB?

     if ( sample .lt. ANGLGB ) then
        sample = sample_spectrum ( filename, energy_unit )
     end if

     return
  end

Regards,
Riya


Dear Riya,

What does this LUNOUT refer to ?

LUNOUT is the variable storing the logical unit number associated with the .log file.

What is the scale factor here ? What does this energy_unit refer to?

The scale_factor variable stores the conversion factor between energy units. energy_unit is the string containing the energy unit of the spectrum file, provided during the call to this function.
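To make the conversion concrete, here is a hypothetical Python transcription of what a function like energy_unit_to_scale_factor plausibly does (the exact unit strings supported by source_newgen.f are an assumption here): it maps the unit string to the factor that converts the file's energies to GeV, the unit FLUKA works with internally.

```python
# Hypothetical sketch of energy_unit_to_scale_factor: map a unit string
# to the multiplicative factor converting that unit to GeV.
def energy_unit_to_scale_factor(energy_unit: str) -> float:
    factors = {
        "eV":  1.0e-9,
        "keV": 1.0e-6,
        "MeV": 1.0e-3,
        "GeV": 1.0,
    }
    return factors[energy_unit]

print(energy_unit_to_scale_factor("eV"))   # 1e-09
```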

What are t1, t2 here ? Are these the the data present in spectrum txt file?

t1 and t2 are just temporary variables into which the values from a line of the spectrum file are read.

Which line ensures the conversion of eV to GeV? The spectrum file had units in GeV. Don’t we have to incorporate a factor of 1E-9 with the input energy data?

What is the purpose of scale_factor multiplication here ? Is it for the conversion of eV to GeV?

This is the energy conversion part, as you assumed. (Note: the conversion of the intensity is theoretically incorrect, but since it is normalized to one primary by FLUKA, it doesn't lead to an issue. It will be fixed in the released version.)

Does this block represent the formulation of the CDF?

Yes, the probability array stores the CDF values.
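For illustration, the CDF loop quoted above can be transcribed into Python (with made-up energy/intensity points): each step accumulates the trapezoid area of a bin, with the division by two omitted since the final normalization to one absorbs the constant factor.

```python
# Python transcription of the CDF construction (illustrative data).
energy    = [1.0e-11, 1.0e-10, 1.0e-9]   # GeV, made-up grid points
intensity = [2.0,     5.0,     1.0]      # arbitrary units

probability = [0.0] * len(energy)
for i in range(1, len(energy)):
    # trapezoid area of the bin (the /2 is omitted: it cancels on normalization)
    probability[i] = (energy[i] - energy[i-1]) * (intensity[i] + intensity[i-1]) \
                     + probability[i-1]

# normalize so the last CDF value is one
probability = [p / probability[-1] for p in probability]
print(probability)
```

The resulting array is monotonically increasing from 0 to 1, which is what the sampling step below relies on.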

What does this first line of sample mean? What is the role of sqrt value?

It is the calculation using inverse transform sampling (see Inverse transform sampling - Wikipedia) for sampling from a line between two points.
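A sketch of the idea in Python (not the actual FLUKA code): within a bin the intensity varies linearly from i0 to i1, so the within-bin CDF is quadratic, and inverting it for a uniform random number s yields the square root seen in the Fortran.

```python
import math

def sample_in_bin(i0, i1, s):
    """Inverse transform sampling on a linear segment.

    i0, i1 : intensities at the two bin edges
    s      : uniform random number in [0, 1]
    returns: fractional position x in [0, 1] within the bin
    """
    if i0 == i1:
        return s  # flat segment: plain linear interpolation
    # solve i0*x + (i1 - i0)*x**2/2 = s*(i0 + i1)/2 for x
    t = s * (i1 - i0) * (i1 + i0)
    return (math.sqrt(t + i0**2) - i0) / (i1 - i0)

# sanity checks: the endpoints of [0, 1] map to themselves
print(sample_in_bin(2.0, 5.0, 0.0))   # 0.0
print(sample_in_bin(2.0, 5.0, 1.0))   # 1.0
```

The returned fraction is then rescaled to the bin's energy interval, exactly as the last `sample = sample * (energy(line) - energy(line-1)) + energy(line-1)` line does.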

What is this ANGLGB?

ANGLGB equals 5.0D-16, and it is used to prevent zero or negative energies.
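In other words, if the sampled energy underflows to a numerically negligible value, the function simply calls itself to draw again. A minimal Python sketch of that guard (with a deterministic stand-in sampler; the name sample_positive is hypothetical):

```python
ANGLGB = 5.0e-16  # smallness threshold, as in FLUKA

def sample_positive(sampler):
    """Redraw until the sampled energy is safely above zero."""
    while True:
        e = sampler()
        if e >= ANGLGB:
            return e

# a 0.01 eV neutron (1e-11 GeV) is well above the threshold, so it passes
print(sample_positive(lambda: 1.0e-11))
```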

Cheers,
David


Thank you @horvathd for the detailed explanation.

  1. Regarding the following comment, my concern is: if the spectrum.txt file uses MeV units (instead of the eV units present in this case), where do I give the commands to convert it to GeV, since FLUKA uses GeV units? When it reads the string in energy_unit, will it automatically convert to GeV, or do I have to give that multiplication factor somewhere in this code? Similarly, if the unit is already GeV, then no conversion is required; in that case, which lines of this code have to be modified?
  2. If in the spectrum txt file the spectrum has units of cm^-2 s^-1 only (instead of the cm^-2 s^-1 eV^-1 present in this thread), then I feel the following lines (see the bottom-most quoted lines) need to be modified. I mean, the multiplicative term (energy(line) - energy(line-1)) appears because the spectrum had eV^-1 units, right? So if there is no eV^-1 unit in the spectrum, the previous line would become the following. (Can you please verify this statement, and whether any other modification is needed elsewhere?)
        do line = 1, bins
           probability(line) = (intensity(line) + intensity(line - 1)) + probability(line - 1)
        end do

Regards,
Riya

Dear Riya,

  1. When you call sample_spectrum() you specify the unit "eV", and the script translates it into the appropriate scaling factor.

  2. No change is necessary, because this is where the script calculates the area below the spectrum between two energies (omitting the division by two, since the probabilities are normalized).

Cheers,
David


Thank you @horvathd for the detailed explanation.

Regards,
Riya