2-Step Phase Space Normalization

I have a simulation that involves some inefficient processes (e.g., a partial transmission target) as well as some geometry I would like to optimize (e.g., a collimator). Therefore I am using phase spaces (PS) to split the simulation into two steps.

Step 1: I collect a phase space after the partial transmission target and before the collimator.

Step 2: I use the Step 1 phase space to track through the collimator and collect another phase space at the exit of the collimator.

I then use the phase space from Step 2 to track particles to a water phantom, where I examine dose distributions and the like.

These phase spaces are produced in separate simulations and I am careful to kill particles passing through my phase space planes at each step so as not to double count.

My question is about normalization. If I measure dose in my phantom, I want it to be related to the input electrons as Dose/primary (electron). As I have it now, it is Dose/primary (photon from second phase space).

I believe I need to multiply my final measured value (in this case dose) by the ratio of the input particles to output particles for each step, like:

Dose/photon from 2nd PS * (input photons to Step 2 / output photons in 2nd PS) * (input electrons to Step 1 / output photons in 1st PS).

First of all, is my approach correct? The way I understand it, each step of my simulation produces a ratio of input particles to output particles and this becomes a conversion factor for that step.

One main question I have is: when I use a phase space as a source, I randomly sample from it more particles than exist in the phase space. Say my input PS has 1e6 photons, I sample 5e6, and the output PS has 1e5 photons (no biasing, so all with a weight of 1). When I am determining the ratio of input to output, do I use the number of primaries sampled, 5e6, or the number of unique input particles, 1e6?

Dear Dirk,

First of all, apologies for the late answer.
I will try to reply to your question about the normalization and the output you get at the end of your third step simulation.

Built-in scorings, such as USRBIN with DOSE, provide values per unit primary weight. Since there is no biasing in your simulations, the result of your scoring will be GeV/g per primary of the last step, which you then convert to Gy.

Now, since this simulation (the third step) uses a source file, you have to multiply the dose by the number of events loaded divided by the primaries of the step(s) done before. In your case you have a three-step simulation, therefore you have to multiply the dose by:

Dose (GeV/g per unit primary weight) * (loaded step2 / N2) * (loaded step1 / N1)

where N1 is the number of primaries run in your first step and N2 in the second step, and loaded step1/2 are the events loaded from the source file in the second and third steps, respectively. To get these values, search the output file for a line similar to the following:

Loaded 3190935 tracks for 2350382 events

The number of events is what you should take (the second value in the example). To clarify, in your case the loaded step1 value is in the output of your second-step simulation and loaded step2 is in the output file of the third step.
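As a minimal sketch of this bookkeeping (plain Python rather than anything FLUKA-specific; all numbers are placeholders to be replaced with the values from your own output files and START cards):

```python
# Placeholder values: take the dose from USRBIN, N1/N2 from the
# primaries actually run, and the "loaded" counts from the
# "Loaded ... tracks for ... events" line of each step's output.
dose_usrbin = 2.5e-15             # GeV/g per unit primary weight (placeholder)
gev_per_g_to_gy = 1.602176634e-7  # 1 GeV/g = 1.602e-7 Gy

n1 = 1_000_000        # primaries run in step 1
loaded_step1 = 3185   # events loaded by step 2 from the step-1 file
n2 = 3189             # primaries run in step 2
loaded_step2 = 1382   # events loaded by step 3 from the step-2 file

dose_per_electron = (dose_usrbin * gev_per_g_to_gy
                     * (loaded_step2 / n2) * (loaded_step1 / n1))
print(f"{dose_per_electron:.3e} Gy per step-1 electron")
```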

Hope these guidelines will help.

Hi,
building on Marta’s answer, I should point out that the useful output file message she refers to is provided by a custom source routine used to load the particle sample of the previous step, so you will not find it by default. Anyway, summarizing the fundamental guidelines:

  • In principle, the particle file produced in the previous step should contain, in addition to all relevant particle information, also the event number, so that all the particles coming from the same previous event can be loaded together into one primary history of the subsequent step. This explains the difference between the number of tracks (i.e., particles) and the number of events in the printout above.
  • Based on the former point, the normalization factor from one step to another is the ratio between the number of events (rather than particles) recorded in the previous step’s file for further sampling and the total number of primary histories run in the previous step to produce that file (as set by the respective START card parameter, multiplied by the number of cycles and jobs; with multiple cycles and jobs, one should also distinguish between possibly identical event numbers in the produced file). This answers your last question: the value to take is 1e6, since the dependence on the number of histories sampled in the subsequent step (5e6) is already removed by the usual built-in scoring normalization applied to any FLUKA run.

With more steps, this factor chain (note: not what you wrote above, where you inverted both ratios) will give you the scored quantity per primary particle of the first step (i.e., dose per primary electron).
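To put the second point in numbers with the figures from your question (a plain-Python sketch; the previous step’s primary count is an invented placeholder):

```python
n_prev_primaries = 10_000_000  # primaries run to produce the file (assumed for illustration)
n_events_in_file = 1_000_000   # events recorded in the phase-space file (here, one photon each)
n_sampled        = 5_000_000   # histories sampled from the file in the subsequent step

# Built-in scoring in the subsequent step already divides by n_sampled,
# so the 5e6 never enters the step-to-step factor:
step_factor = n_events_in_file / n_prev_primaries
print(step_factor)  # 0.1 with these assumed numbers
```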

Thank you for your reply, as well as @msabateg’s. Since the two answers are related, I will address both in this response.

Unfortunately, I find that I now have more questions. Let me attempt to explain the way that I understand the situation and the terminology.

As I understand it, “event” refers to all the particles generated by a given input primary. In my case, if an input primary electron, say the 54th primary being run, produces 2 photons, both of those photons belong to the same event and, even though they are unique particles, carry the same event number.

Based on the suggestions, I added a record of each particle’s event number to my phase space using NCASE. I also added a phase-space summary via usrout.f that gives me the final phase-space particle count, total weight, and total number of events. I have attached files for a simple 3-step simulation using 2 generated phase spaces.

Running it gives me:
Step     Input Type   Primaries   PS Particles   PS Weight   PS Events
Step 1   Electrons    1e6         3189           3189.0      3185
Step 2   Step 1 PS    3189        1382           1382.0      1382
Step 3   Step 2 PS    1e6         –              –           –

If I measure dose in Step 3, I should normalize it as:

Dose (Gy/primary) * (Step 2 PS Events / Step 2 Primaries) * (Step 1 PS Events / Step 1 Primaries)
= Dose (Gy/primary) * (1382 / 3189) * (3185 / 1e6) = Dose (Gy/electron)
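As a quick numerical check of the combined factor in Python (a sketch; the dose value itself drops out):

```python
factor = (1382 / 3189) * (3185 / 1e6)  # (Step 2 PS events / primaries) * (Step 1 PS events / primaries)
print(f"{factor:.4e}")  # ~1.3803e-03, so Dose (Gy/electron) = Dose (Gy/primary) * factor
```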

I see that I have a few remaining issues.

  1. For my simple case it is easy to determine the phase-space events. In the case where I run a much larger simulation and combine many runs and cycles into a large phase-space file, it would be useful to have output like that shown by @msabateg, where the total number of events is reported for the phase-space file being read in. It was mentioned that this is done via a custom source routine. Any details on implementing such functionality would be greatly appreciated.

  2. I have been reading in my phase space using the “read_phase_space_file” routine in source_newgen.f. It does not appear to recognize event numbers, nor does it seem to load more than one particle onto the stack at once. To implement a phase space file as @ceruttif has suggested, it seems I would need to load all the particles from the same primary event onto the stack at once. What is the best way to approach this? If I cannot load more than one particle onto the stack at once, how will this affect my outcome?

  3. The distinction between normalizing by phase-space events vs. particle number is not always made explicit in the other forum threads I have seen. Just for my own curiosity, what is the fundamental reason to use events as opposed to particle number?

I am incredibly grateful for the help.

PhaseSpace_Normalization.flair (23.8 KB)
usrout.f (2.0 KB)
mgdraw.phasespace.f (15.1 KB)
source_Bias.phasespace.f (20.1 KB)

Correct.

Imagine you have one primary event generating 9 (correlated) particles in the relevant phase space and another primary event generating 1 particle only. In the subsequent step, the contribution originating from the latter should count as one half (one primary event out of two) and not one tenth (1 particle out of 10).
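In numbers, a trivial Python sketch of the two bookkeeping choices:

```python
particles_per_event = {1: 9, 2: 1}  # event number -> particles recorded

per_event = 1 / len(particles_per_event)                                   # 0.5: one event out of two
per_particle = particles_per_event[2] / sum(particles_per_event.values())  # 0.1: one particle out of ten
print(per_event, per_particle)
```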

You have that ability, thanks to the NPFLKA counter, which has to be increased by 1 in order to load each additional ‘primary’, as in the aforementioned custom routine
source_twosteps.f (10.6 KB).

The latter is implemented in the standard source.f format and has not yet been adapted to the new, convenient source_newgen.f format, where the NPFLKA counter is not directly accessible (I will let @horvathd comment further, if applicable).

Dear Dirk and Francesco,

It is possible to push multiple particles to the stack with source_newgen.f by calling the set_primary() subroutine after each particle configuration.

However, the read_phase_space_file() subroutine was not designed to read event-by-event data, so for this use case you need to supply your own data reading and sampling code.
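For illustration only, the reading-and-sampling logic could look like the following Python prototype (the real routine would of course be Fortran inside source_newgen.f; the one-record-per-particle file layout, with the event number in the first column, and the file name are assumptions):

```python
import random

def load_events(path):
    """Read the phase-space file once, grouping particle records by the
    event number assumed to sit in the first column of each record."""
    events = {}
    with open(path) as f:
        for line in f:
            fields = line.split()
            if fields:
                events.setdefault(fields[0], []).append(fields)
    return list(events.values())

events = load_events("step2_phasespace.dat")  # hypothetical file name

# One primary history of the new step: draw one recorded event and push
# every particle of that event (in source_newgen.f: configure the
# particle variables, then call set_primary(), once per particle).
event = random.choice(events)
for particle in event:
    pass  # configure and push this particle
```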

Cheers,
David

Thank you for the response as well as the source example.

Thank you for the information. I will indeed try it.