# Reading from a Phase Space Representing Input Power Not Particles

I have an interesting phase space file. It is the output from an x-ray device where each point, or “ray”, in the phase space carries the same amount of power. Each ray represents a bundle of photons with the same energy and phase space parameters. Therefore, the number of photons in each ray depends on that ray’s energy: Photons/second in ray i = (Total Input Power / Number of Rays) / (Energy of ray i). Clearly I cannot just sample the phase space uniformly, as that would not represent the input photon distribution properly.
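The bookkeeping described above can be sketched as follows. This is a minimal illustration with made-up numbers (the power, ray count, and energies are all assumptions, not values from the file):

```python
# Each ray carries equal power, so its photon rate scales inversely with energy.
total_power_eV_per_s = 1.0e15   # total input power in eV/s (hypothetical)
n_rays = 1000                   # number of rays in the phase space file (hypothetical)
power_per_ray = total_power_eV_per_s / n_rays

def photons_per_second(ray_energy_eV):
    """Photon rate of one ray: (P_total / N_rays) / E_i."""
    return power_per_ray / ray_energy_eV

# A 10 keV ray carries ten times as many photons per second as a 100 keV ray:
rate_10keV = photons_per_second(10e3)    # 1e12 / 1e4 = 1e8 photons/s
rate_100keV = photons_per_second(100e3)  # 1e12 / 1e5 = 1e7 photons/s
```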

My question is: how should I handle this phase space?

I was thinking of two approaches.

1. Use the particle weight to normalize each “ray” to a physical photon. From what I understand, the final scoring would then be normalized per photon. This is where I get a bit confused about how the weighting works: if a particular ray contains 5024 photons, should that ray’s weight be 5024, or 1/5024 ≈ 0.000199?

2. For each ray sampled from the phase space file, load multiple photons (the number of photons in that particular ray) onto the stack simultaneously. From what I understand from the source.f documentation, this is possible. I would then directly track the appropriate number of photons for each ray, with each photon carrying a weight of 1. This leads to another question: how would this be accomplished with the new source_newgen.f file?

Any suggestions would be greatly appreciated.

Dear Dirk,

I will start with option 2: since the stack has a limited size (70000 entries, to be exact), you may not be able to load enough primaries to reproduce the exact ratios between the “rays”.

In source_newgen.f you can call the set_primary() function multiple times to load additional primaries onto the stack, but this has not been tested. You will also have to include a loop that sets the different parameters for each primary.

Option 1 is – in my opinion – the way to go.

The weight in this scenario can be interpreted as the ratio between the numbers of photons in two “rays”:

For example, fix “Ray 1” as the base and set its weight to 1.0. The weight of “Ray 2” is then the number of photons in “Ray 2” divided by the number of photons in “Ray 1”.
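Since each ray carries equal power, the photon count is proportional to 1/E, so the weight ratio reduces to a ratio of energies. A minimal sketch with assumed energies (the values are hypothetical, not from the actual file):

```python
# Relative weights for equal-power rays: N_i is proportional to 1/E_i, so with
# "Ray 1" as the base,  w_i = N_i / N_1 = E_1 / E_i.

ray_energies_keV = [50.0, 25.0, 100.0]   # E_1, E_2, E_3 (assumed values)
base_energy = ray_energies_keV[0]

weights = [base_energy / e for e in ray_energies_keV]
# Ray 2 (25 keV) carries twice as many photons as Ray 1  -> weight 2.0
# Ray 3 (100 keV) carries half as many photons as Ray 1  -> weight 0.5
```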

Then you run the simulation sampling each “ray” uniformly.

The results in FLUKA are normalized to the total primary weight, so to obtain your final result you only need to multiply by the total number of photons (all “rays” combined).
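The final scaling step can be sketched as below. The scored quantity and per-ray photon rates are placeholder numbers (assumptions for illustration only):

```python
# FLUKA scores per unit primary weight, so scale the result by the total
# photon rate summed over all rays to recover the physical quantity.

scored_per_photon = 2.5e-6           # simulation result per photon (hypothetical)
photons_per_ray = [1e8, 2e8, 5e7]    # N_i for each ray (hypothetical)

total_photons = sum(photons_per_ray)             # all rays combined
physical_result = scored_per_photon * total_photons
```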

Cheers,
David