Parallel issue when increasing the number of primaries

Dear Experts,
I have a problem when running in parallel after increasing the number of primary particles.

Description of the setup:
I use source_newgen.f to define my primary particles, sampling from histogram.txt. I use three variants of the histogram (only the energy cutoff differs).
I run on 6 cores, each with 1000 primary particles for 5 cycles.
For the 300 MeV histogram, 5 of the 6 cores run and finish the simulation without any error, and just one core stops during the simulation without any special error message.
For 1 GeV, all of the cores stop during the simulation.
I also checked with 4 cores (for 1 GeV) and the simulation still stops, and I am sure I have enough RAM (only about 20% usage).

However, when I test with 10 primary particles, everything is OK.

I also ran on a single core (500 primary particles and 2 cycles) and everything is OK for all histograms.

Do you have any idea how to solve this problem?

Best regards,


These are my files:
E3T30P.flair (5.8 KB)
histogram.txt (228.1 KB)
source_newgen.f (19.0 KB)

Dear Mohammad,

The actual error message can be found in the .err (and/or .out, .log) file.

In your case it is:

**** Photomuon interaction not performed because of missing room in FLUKA stack ***

You should use a more reasonable interaction length biasing factor (Bias inter-L) for muon pair production by photons.
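As a rough illustration, interaction length biasing is set with the LAM-BIAS card, whose second field multiplies the inelastic interaction length of the selected particle. The sketch below is an assumption-laden example, not taken from the attached input: the factor 0.02 and the choice to apply it to photons in all materials are placeholders to be tuned for the actual problem.

```
* Hedged example of a LAM-BIAS card (values are illustrative only):
* the biasing factor 0.02 shortens the photon interaction length by
* a factor 50, making rare photonuclear/photomuon events more likely
* while FLUKA compensates the particle weights accordingly.
LAM-BIAS                       0.02                PHOTON
```

A factor that is too extreme can overload the particle stack (as the error above shows), so it is usually tuned gradually while monitoring the statistics of the biased secondaries.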


Dear David,
Thanks so much. Unfortunately, I cannot find a good reference for setting a reasonable interaction length biasing factor (Bias inter-L); as I understand from this topic:

I just play with the number until the number of generated muons becomes stable.

Do you know of any recommended reference for setting the Bias inter-L value?

By the way, I tested my simulation with an increased Bias inter-L and it works.
Best regards,

Dear @rezaei.m.p,

Have a look at Note 10 of the LAM-BIAS card in the manual.