Variance Reduction for Pulse-Height Scoring

Dear FLUKA experts,

My goal is to simulate the gamma-ray detector response of a NaI(Tl) scintillation detector using pulse-height spectra, i.e. histograms of detector counts versus deposited energy. For that purpose, I have used the DETECT card together with the user routine usreou.f to obtain the deposited energy event by event (as described in this post: Energy deposition: DETECT or MGDRAW). This works very well for small geometries and weakly attenuating materials.
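To make the scoring idea concrete, the post-processing of such event-by-event output can be sketched in Python. This is a hypothetical illustration, not part of FLUKA: the function name, the synthetic 0.662 MeV data, and the assumption that usreou.f has written one deposited-energy value per primary are all stand-ins.

```python
import numpy as np

def pulse_height_spectrum(energies_mev, n_bins=256, e_max=1.0):
    """Histogram event-by-event deposited energies into a pulse-height spectrum.

    energies_mev: one deposited-energy value per primary (units assumed MeV).
    Returns (counts, bin_edges), i.e. detector counts versus deposited energy.
    """
    counts, edges = np.histogram(energies_mev, bins=n_bins, range=(0.0, e_max))
    return counts, edges

# Synthetic stand-in data: a smeared 0.662 MeV full-energy peak plus a crude,
# flat Compton continuum up to the Compton edge (~0.478 MeV).
rng = np.random.default_rng(0)
deps = np.concatenate([
    rng.normal(0.662, 0.02, 5000),   # full-energy peak, assumed resolution
    rng.uniform(0.0, 0.478, 3000),   # simplified Compton continuum
])
counts, edges = pulse_height_spectrum(deps)
print(counts.sum())  # total number of binned events
```

Each primary contributes one integer count to exactly one bin, which is what makes this an analog pulse-height score.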

In contrast, for complex geometries and strongly attenuating materials, the number of primaries required for equivalent statistics increases dramatically. I would therefore like to incorporate variance reduction techniques to decrease the computation time. I have already tested the simple biasing options that are “allowed” with the DETECT card according to the manual. Specifically, I have adapted the source so that the particle directions are forced towards the detector (as proposed in this post: Biasing for growing radioactive spheres), and I have raised the transport thresholds of photons and electrons in “unimportant” regions. Unfortunately, these measures are not sufficient for my needs. In this context, I have the following questions:

  1. Are there any pulse-height scoring cards in FLUKA that allow variance reduction (importance biasing, weight windows, …), similar to the F8 tally in MCNP?

  2. Is there a way to combine variance reduction (other than the options already tested) with the DETECT or EVENTBIN card?

  3. The manual states that the EVENTBIN card “normally” cannot be combined with non-analog simulations, and that for the DETECT card this is not possible at all. In which cases are non-analog simulations allowed for the individual cards? Can I use variance reduction in my case, i.e. a monoenergetic photon beam together with pulse-height scoring in a specific region (the NaI crystal)? By the way, I do not use the coincidence/anti-coincidence capabilities of the DETECT card.

  4. Does the weight assigned to each primary in the EVENTBIN output file have any meaning in non-analog simulations? In other words, can these weights be used for “fractional binning” when creating histograms, i.e. counting not integer energy-deposition events but fractional ones according to the weight in the output file?

  5. How are the weights in the EVENTBIN output file computed? If non-analog simulations are not allowed with the EVENTBIN card, why are the weights provided in the output at all? Is there perhaps an implementation of a “deconvolution approach” as described in:

Unfortunately, due to publication restrictions, I cannot share my input files. However, I can go into more detail if you need to know more about specific modelling features.
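For completeness, the source-direction biasing mentioned above can be sketched in a tool-agnostic way (FLUKA's source.f is Fortran and is not reproduced here; the function name and cone parametrization below are assumptions). The standard trick is to sample directions only within the cone subtended by the detector and to compensate with a statistical weight equal to the covered solid-angle fraction, so that weighted tallies remain unbiased:

```python
import numpy as np

def sample_cone_direction(rng, cos_theta_max):
    """Sample an isotropic direction restricted to a cone around the +z axis.

    cos_theta_max: cosine of the cone half-angle subtended by the detector.
    Returns (unit direction vector, statistical weight). The weight is the
    sampled solid-angle fraction, (1 - cos_theta_max) / 2, which restores
    the unbiased isotropic source expectation in weighted tallies.
    """
    # Uniform in cos(theta) over [cos_theta_max, 1) gives an isotropic
    # distribution restricted to the cone.
    cos_t = 1.0 - rng.random() * (1.0 - cos_theta_max)
    sin_t = np.sqrt(1.0 - cos_t * cos_t)
    phi = 2.0 * np.pi * rng.random()
    direction = np.array([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])
    weight = 0.5 * (1.0 - cos_theta_max)  # fraction of the full 4*pi
    return direction, weight
```

This is exactly why the per-primary weight matters downstream: any score fed by these primaries must multiply each contribution by the returned weight.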

Thank you all in advance!

1-2-3. No. As you correctly noticed, the event-by-event output of EVENTBIN and DETECT is not compatible with the available biasing techniques. By construction, the present scoring structure deals only with the product of the unbiased (i.e. physical) energy deposition and the respective weight by which it should be counted, and does not allow the two to be disentangled (we may consider improving this in the future).
4-5. The event weight appearing in the EVENTBIN output only accounts for biasing performed at the level of the source.f user routine, where primary particles can be given a customized weight; that is the weight one finds recalled there.
Other kinds of optimization, such as the ones you mention, will therefore have to be devised (sorry for the quite limited help).
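To make the restriction concrete: when the only biasing is a per-primary source weight, the “fractional binning” asked about in question 4 reduces to a weighted histogram. A minimal Python sketch, assuming the per-event deposited energies and their source weights have already been extracted from the EVENTBIN output (the function name and array layout are assumptions, not a FLUKA interface):

```python
import numpy as np

def weighted_pulse_height(energies, weights, n_bins=256, e_max=1.0):
    """'Fractional binning': each event contributes its source weight to its
    bin, not an integer count. Valid only when the weight originates purely
    from source.f-level biasing, not from in-flight variance reduction.
    """
    spectrum, edges = np.histogram(energies, bins=n_bins, range=(0.0, e_max),
                                   weights=weights)
    return spectrum, edges
```

For example, three events with weight 0.5 each yield a spectrum whose total content is 1.5, i.e. the weighted (physical) number of counts rather than the sampled one.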

Dear Mr. Cerutti,

Thank you very much for your detailed response.