What are the best scoring and material cards to simulate absorbed energy fraction in a B4C mirror?

Dear Fluka experts,

I am trying to simulate the absorbed energy fraction versus depth.
The target is a silicon mirror coated with 50 nm of B4C, irradiated by an X-ray source with photon energies in the 1-12 keV range. To check whether I did it right, I am comparing my results with a paper that performed the same simulations with Geant4. (https://www.researching.cn/ArticlePdf/m00005/2023/21/2/023401.pdf)

Result from my simulations with Fluka:

  1. When materials are irradiated by ultrashort femtosecond X-ray pulses, electron excitation and relaxation processes are involved (e.g. photoelectron emission, Auger decay, ionization). How can I make sure that all of these are taken into account in the simulation? In other words, are there important secondary interactions that are not accounted for when scoring with the BEAMPART quantity?

  2. I use a 1D USRBIN (part: BEAMPART). Is the unit 1/(primary·cm)?

  3. What do the BIN numbers in the scoring cards mean? Is it better to use large numbers, like 51?

  4. If we consider this definition for refractive index: n=1-delta-i*beta.
    My understanding is that in the OPT-PROP card (type: blank), “refraction” is 1-delta and “absorption” is the absorption coefficient. What about beta?
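For context on question 4: in the convention n = 1 - delta - i*beta, the imaginary part beta is related to the linear (intensity) attenuation coefficient by mu = 4*pi*beta/lambda. A minimal sketch of that relation (the beta value below is illustrative, not a tabulated B4C optical constant):

```python
import math

def mu_from_beta(beta: float, wavelength_m: float) -> float:
    """Linear intensity attenuation coefficient (1/m) from the
    imaginary part beta of n = 1 - delta - i*beta."""
    return 4.0 * math.pi * beta / wavelength_m

def wavelength_from_energy_keV(E_keV: float) -> float:
    """Photon wavelength (m) via lambda = hc/E, hc ~= 1.23984e-9 keV*m."""
    return 1.23984e-9 / E_keV

beta = 1e-5                              # illustrative placeholder value
lam = wavelength_from_energy_keV(1.0)    # 1 keV -> ~1.24 nm
mu = mu_from_beta(beta, lam)             # attenuation coefficient in 1/m
```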

Absorbed Energy.flair (4.0 KB)
Absorbed Energy.inp (2.9 KB)

I really appreciate your kind help.

Best regards,

Dear @marziyeh.tavakkoly,

50 nm is a bit beyond the capabilities of the code. Please have a look at this very interesting post for a detailed explanation.

Hello, note in addition that you are comparing two completely different quantities: absorbed energy (ENERGY) and particle fluence (BEAMPART). The unit of the latter, as retrieved from your Cartesian USRBIN, is 1/cm^2 per primary. If you want to reproduce the published plot, you need to score ENERGY (GeV/cm^3 per primary) and plot its 1D projection (which is averaged over the scoring transverse area), multiplied by the scoring transverse area (cm^2) and divided by the total deposited energy (GeV).
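The normalization described above can be sketched as follows, assuming the 1D depth projection of the ENERGY USRBIN has already been exported (e.g. via Flair); the array values, bin width, and transverse area below are illustrative placeholders, not real results:

```python
import numpy as np

# Illustrative numbers (replace with your exported USRBIN projection):
edep_per_cm3 = np.array([5.0, 3.0, 1.5, 0.5])  # GeV/cm^3 per primary, per depth bin
bin_width_cm = 1.0e-6                           # depth bin width (cm)
transverse_area_cm2 = 0.01                      # scoring transverse area (cm^2)

# Energy deposited in each depth bin (GeV per primary):
edep_per_bin = edep_per_cm3 * transverse_area_cm2 * bin_width_cm

# Total deposited energy (GeV per primary):
total_edep = edep_per_bin.sum()

# Absorbed energy fraction per depth bin, comparable to the published plot:
absorbed_fraction = edep_per_bin / total_edep
```

By construction the fractions sum to one, which is a quick sanity check after the conversion.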
Any BIN number between 21 and 99 that is not already used by a scoring card of a different type is fine. It has no special meaning; it just determines the logical unit of the corresponding output file.
Relevant physics processes such as the ones you mention are already taken into account by default (with PRECISIOn DEFAULTS), but the validity of the physics description is challenged by the tiny dimensions you are interested in, as mentioned above and discussed in the linked post.
The OPT-PROP cards are totally irrelevant since you are not asking for optical photon production in parallel.
MAT-PROP and STERNHEI should be there only if you have good reasons to overwrite default values.
The EMFFLUO card is not needed (fluorescence is already on with PRECISIOn DEFAULTS).

Dear both,

I really appreciate your kind help.

So for nanometer thicknesses the results are not accurate, and this is not specific to FLUKA but applies to all Monte Carlo simulations, right?
I ask because I read about a case similar to mine that was done with PENELOPE.
So it would be better to do this with a single thick layer.
Thanks again!

Dear Marziyeh,

I come a bit late to the party with just a few general physical comments for your consideration.

So for nanometer thicknesses the results are not accurate, and this is not specific to FLUKA but applies to all Monte Carlo simulations, right?

If I follow correctly, you have 1-12 keV photons impinging on a 50 nm slab of B4C or Ru on a Si substrate and want to examine energy deposition in this geometry.

Your incoming photons will be absorbed and emit photoelectrons (along with fluorescence gammas and Auger electrons as the inner-shell vacancy relaxes), which will deposit energy as they propagate in the geometry, being eventually stopped in the “depths” of the several-micron-thick Si substrate. All of these interaction mechanisms are accounted for if you pass a DEFAULTS card with PRECISIOn as suggested above.
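As an order-of-magnitude check on the primary photoabsorption step described above, the fraction of photons absorbed in a uniform layer of thickness t follows the Beer-Lambert law, 1 - exp(-mu*t). The attenuation coefficient below is an illustrative placeholder, not a tabulated B4C value:

```python
import math

def absorbed_fraction(mu_per_cm: float, thickness_cm: float) -> float:
    """Fraction of incident photons absorbed in a uniform layer
    (Beer-Lambert), given the linear attenuation coefficient mu."""
    return 1.0 - math.exp(-mu_per_cm * thickness_cm)

t_cm = 50e-7                 # 50 nm coating expressed in cm
mu_illustrative = 1.0e4      # 1/cm, placeholder -- use tabulated values
frac = absorbed_fraction(mu_illustrative, t_cm)
# For mu*t << 1 the absorbed fraction is approximately mu*t.
```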

The produced 1-12 keV photoelectrons propagating in the slab+substrate will deposit energy in two complementary (and carefully consistent) ways:

  • Energy losses larger than 1 keV will lead to the production and explicit transport of a secondary electron (delta ray), thanks to your EMFCUT card with electron production/transport thresholds down to 1 keV.
  • Smaller losses are deposited along the electron step as per FLUKA’s ionization model, relying on a stopping power calculation with energy-loss fluctuations applied on top.
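The production-threshold split in the two bullets above can be caricatured in a toy sampler: losses at or above the threshold spawn explicitly tracked delta rays, while smaller ones are deposited locally along the step. This is only a conceptual sketch with an invented loss spectrum, not FLUKA's actual sampling:

```python
import random

def split_losses(losses_keV, threshold_keV=1.0):
    """Partition energy-loss events: losses at or above the production
    threshold become explicit delta rays; the rest are deposited
    continuously along the step."""
    delta_rays = [e for e in losses_keV if e >= threshold_keV]
    continuous = sum(e for e in losses_keV if e < threshold_keV)
    return delta_rays, continuous

random.seed(0)
# Toy spectrum of individual energy losses (keV), illustrative only:
losses = [random.expovariate(2.0) for _ in range(1000)]
deltas, local_dep = split_losses(losses)
# Energy is conserved between the two channels:
# sum(deltas) + local_dep == sum(losses)
```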

Underpinning this scheme (adopted in various flavors by basically all general-purpose MC codes) is the adoption of an ionization model for charged particles in a material, derived under the assumption that the material is “infinite”, homogeneous, and isotropic. The problem is that when you try to resolve things at the few nm scale, there are aspects in the energy loss of low-energy electrons (which ultimately govern energy deposition problems) near solid interfaces which deviate significantly from the conventional “bulk material” picture.

The energy loss of electrons below ~5 keV in the first few nm near the boundary between two solids is subject to a series of interesting physical effects. For instance, there are energy-loss features confined to the first couple of nm from the surface (excitation of surface plasmons) that considerably change the energy-loss picture one has for homogeneous and isotropic media as ordinarily adopted in general-purpose codes. To address these local details near solid interfaces, one needs detailed optical response functions of the solid on either side, plus a non-negligible amount of computation, to eventually obtain an “energy loss function” that depends on the distance to the interface, the direction of motion (surprising differences exist depending on whether the electron moves towards or away from the interface), the energy, etc., as per Figs. 6 and 8 of https://analyticalsciencejournals.onlinelibrary.wiley.com/doi/epdf/10.1002/sia.5175.

On the other hand, both electrons and photons sufficiently below 1 keV start to feel the details of the electronic structure / local binding environment of the target atom/molecule/crystal, with a direct impact on both the differential cross section for elastic scattering of electrons and the absorption of low-energy photons…

As you can see, the huge amount of highly case-specific information one needs to rigorously address these details, and the possibly diverging CPU time involved, make them necessarily fall outside the scope of general-purpose codes (including FLUKA, PENELOPE, and others).

As a side remark, various codes will track electrons down to different energies: 1 keV for FLUKA, 100 eV or so for PENELOPE (the developers of the latter are exquisitely careful to stress that for e- below 1 keV simulation results should be taken as a semi-quantitative guess). In any case, they won’t account for the aspects addressed above, which become increasingly relevant the lower you go in energy and the smaller you go in scale towards the nm.

While being well aware of these formal considerations, one then turns to practical life. Overall, one may still employ general-purpose codes for a semi-quantitative assessment of energy deposition in geometries a mere few nm or tens of nm thick, but being well aware that if one pushes too much, one eventually risks missing relevant physical details.

“Too long, didn’t read” version: one may (ab)use general-purpose MC codes for low-energy electron transport problems in geometries some nm thick, but it would be wise to exercise caution and take the result as no more than a semi-quantitative assessment.




Dear Francesc,

I really appreciate your taking the time to explain things.
The information you and your colleagues have given me has been very helpful.