Unable to access executable in Slurm

Dear Experts,

When I submit my job to the cluster (Slurm), Slurm returns this message:

[prun] Error: Unable to access executable -> /usr/local/fluka/bin/rfluka

FLUKA is installed and works correctly when I run this command line directly:

prun /usr/local/fluka/bin/rfluka -M 2 /home/rezaei/fluka_simulation/test_1/first.inp >>Batchfile.out

but it fails when I submit this script with the sbatch command:


#!/bin/bash
#SBATCH -J rezaei             # Job name
#SBATCH -o job.%j.out         # Name of stdout output file (%j expands to jobId)
#SBATCH -p p1                 # Partition to be used: p1, p2, or pa (=p1+p2)
#SBATCH -n 1                  # Total number of MPI tasks requested
#SBATCH -t 165:00:00          # Run time (hh:mm:ss) - must be less than 168 hours!
#SBATCH --export=ALL
# Available partitions
# p1: supports up to 96 MPI tasks, default partition
# p2: supports up to 64 MPI tasks
# pa: supports up to 160 MPI tasks, actually p1+p2
export PATH=$PATH:/usr/local/fluka/bin

export flukapath=/usr/local/fluka/bin/rfluka

export CURRENT_RUN=/home/rezaei/fluka_simulation/test_1
export NAME_RUN=first.inp
export cycle=2                # number of cycles; was undefined, matches -M 2 from the working command


prun ${flukapath} -M ${cycle} ${CURRENT_RUN}/${NAME_RUN} >>Batchfile.out

Do you have any idea about this issue?

thanks in advance,

I think this is a question for the administrator of your cluster.

I found the answer:
it is related to the filesystem.

Because /usr/local is not exported to the compute nodes (and therefore not mounted by them), the executable is not reachable from the compute nodes.
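One way to confirm this (a sketch; `p1` is the partition from the script above, and the flags shown are standard `srun` options) is to compare what the login node and a compute node can see:

```shell
# Check whether the executable is visible from a compute node.
# If /usr/local is not mounted there, this ls fails with
# "No such file or directory" even though it works on the login node.
srun -p p1 -n 1 ls -l /usr/local/fluka/bin/rfluka

# Compare the filesystem backing /usr/local on the login node
# versus on a compute node.
df -h /usr/local
srun -p p1 -n 1 df -h /usr/local
```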

There are two solutions, I think:

  1. Run the executable from the manager node, which is not possible under the current setup.

  2. Install FLUKA in your home directory, which looks promising.

So I copied the FLUKA files to my home directory, and that solved the issue.
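The copy boils down to something like this (a sketch using the paths from this thread; `FLUPRO` is the environment variable FLUKA uses to locate its installation):

```shell
# Copy the FLUKA installation from the shared prefix into $HOME,
# which (unlike /usr/local) is mounted on the compute nodes.
cp -r /usr/local/fluka "$HOME/fluka"

# Point FLUKA and the shell at the new location.
export FLUPRO="$HOME/fluka"
export PATH="$HOME/fluka/bin:$PATH"
```

The sbatch script's paths (the `flukapath` variable and the `export PATH` line) then have to reference `$HOME/fluka/bin` instead of `/usr/local/fluka/bin`.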

Thanks so much