Normalisation of Results When Using a User-defined Source

Dear experts,

How is the ‘per unit primary’ normalisation of scoring results to be interpreted when using a user-defined source which reads phase-space variables, including particle weights, from a file?
For instance, I am running a simulation which reads initial particle parameters from an external collision file for 10,000 primaries; this includes a statistical weight for each particle that depends on the particle type and its kinematics. When reading the results from, say, a USRBIN detector, they are normalised ‘per unit primary’: what does that mean in this case? The total particle weight is stated as 6e11, which I assume is the average particle weight multiplied by the number of primaries. Is this 6e11 then the effective number of incident particles?
Additionally, if I want to scale my results to be valid for 1e9 incident particles, how would I account for this?

Many thanks and best regards,
Kyle

Since you refer to a user-defined source, everything depends on how the routine has actually been coded and, in particular, on how the WEIPRI variable has been calculated. From what you wrote, I assume that the latter (representing in FLUKA the total particle weight) got the 6e11 value.
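To illustrate the bookkeeping, here is a minimal Python sketch (not the actual Fortran of a FLUKA source routine; the weights below are invented stand-ins for your collision-file entries): WEIPRI is simply the running sum of the statistical weights of all particles loaded onto the stack over all histories.

```python
# Python sketch, NOT FLUKA Fortran: WEIPRI is the running sum of the
# statistical weights of every particle loaded onto the stack.
events = [
    [5.8e7],            # history 1: a single weighted particle...
    [3.1e7, 2.9e7],     # history 2: ...or several particles stacked together
    [6.2e7],            # history 3
]

weipri = 0.0
for event in events:        # one primary history per event
    for wt in event:        # every particle stacked in this history
        weipri += wt        # FLUKA accumulates the total weight the same way

print(weipri)  # over your 10,000 histories this reportedly reached ~6e11
```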
A fake particle of statistical weight X represents X real particles, so a result per unit primary weight (which is what FLUKA is meant to provide) is normalized to one real particle and can be directly multiplied by the number of particles one has in reality.
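In numbers, a hedged sketch of that scaling (the raw score value is invented purely for illustration; only the 6e11 total weight and the 1e9 target come from your post):

```python
total_weight = 6e11      # WEIPRI reported by FLUKA for your run
raw_score = 1.5e12       # hypothetical unnormalized detector content

per_unit_weight = raw_score / total_weight  # what FLUKA prints "per unit primary"
n_real = 1e9                                # real incident particles in your scenario
scaled = per_unit_weight * n_real           # result valid for 1e9 incident particles
print(scaled)
```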
Now, you seem to be handling a special case where one primary history includes from the start several (fake) particles [or maybe not, I could not fully tell]. This may be the case of an inelastic collision, treated as the primary event (instead of a single incident particle) and generating many products that are all loaded within the same primary history. In this case, however, one wants to normalize the results to one real collision (and not to one real collision product!), in order to finally multiply by the collision rate or number.
I cannot say what your collision file represents, since I know neither how it was generated nor how it is actually used in your source routine. Moreover, it is not clear whether the file was generated by 10,000 previous events or whether you are running 10,000 histories after reading it. In any case, it looks like FLUKA has normalized your results (dividing them by the total particle weight of 6e11) to one of the real incident particles that the file represents. Therefore, if in reality you have 1e9 incident particles, you should just multiply by 1e9.
Nevertheless, as mentioned above for the collision-product case, if you simulate several particles within the same primary event (6e11 / 10,000, i.e. on average 6e7 real particles per event?!), I am not convinced that normalizing to one real particle (rather than to one event) is correct.
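To make the distinction explicit, here is a hedged numerical sketch of the two normalizations (the per-unit-weight value is invented; only the 6e11 total weight and the 10,000 histories come from your post):

```python
total_weight = 6e11                 # WEIPRI, as reported
n_events = 10_000                   # primary histories run

per_unit_weight = 2.0e-3            # hypothetical FLUKA result, per unit primary weight

# Default FLUKA normalization: one real incident particle.
per_real_particle = per_unit_weight

# If each history actually represents one real collision event, renormalize:
avg_weight_per_event = total_weight / n_events   # ~6e7 real particles per event
per_event = per_unit_weight * avg_weight_per_event

# The physical answer would then be per_event times the real number (or rate)
# of collisions, not per_real_particle times the number of incident particles.
print(per_real_particle, per_event)
```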