Create scoring output in separate directory

Hello,

I run FLUKA on AFS, but my scoring output files are very large and exceed my AFS workspace. So I wonder whether the output can be written directly to EOS, where I have much more space (while the simulation itself should still run on AFS!).

If this can be done, I suppose it would have to be implemented in the shell script I use to submit my jobs to Condor (below). I tried a few things myself but didn't succeed.

Can you help?

Thanks,
Michael

#!/bin/bash
# Script to launch flair-FLUKA jobs on a cluster using CONDOR
# Adapted for running FLUKA with pythia events

# the last command-line argument is the run name
NAME="${!#}"
RUNFILE="${NAME}.sh"
SUBFILE="${NAME}.sub"
SUBFOLD="${NAME}_fluka"
INPFILE="${NAME}.inp"

mkdir $SUBFOLD
mkdir $PWD/$SUBFOLD/output/

mv $INPFILE $SUBFOLD
# pp-events and magfld are the default files
# needed for pp collisions
ln -s $PWD/pp-events.txt $PWD/$SUBFOLD
ln -s $PWD/magfld.txt $PWD/$SUBFOLD/magfld.txt
ln -s $PWD/rd48_pion.dat $PWD/$SUBFOLD/rd48_pion.dat
ln -s $PWD/sidae.dat $PWD/$SUBFOLD/sidae.dat
ln -s $PWD/sidan.dat $PWD/$SUBFOLD/sidan.dat
ln -s $PWD/ngroupe.dat $PWD/$SUBFOLD/ngroupe.dat
ln -s $PWD/sidap.dat $PWD/$SUBFOLD/sidap.dat
ln -s $PWD/lngwei.txt $PWD/$SUBFOLD/lngwei.txt
# for every other required file,
# generate the equivalent symlink

# keep only the first 15 characters of NAME for the Condor output file names
NAME=$(echo "${NAME}" | cut -c 1-15)
# RUNFILE
cat > ${RUNFILE} << EOF
#!/bin/bash
cd ${PWD}/${SUBFOLD}
set -x
source /cvmfs/sft.cern.ch/lcg/contrib/gcc/9.2.0/x86_64-centos7/setup.sh
export FLUPRO=$FLUPRO
export FLUFOR=$FLUFOR
$*
EOF
# SUBFILE
# Adjust +JobFlavour below to the queue you need;
# for all job flavours, see: https://batchdocs.web.cern.ch/local/submit.html
cat > ${SUBFILE} << EOF
executable  = ${PWD}/${RUNFILE}
output      = ${PWD}/${SUBFOLD}/output/${NAME}.out
error       = ${PWD}/${SUBFOLD}/output/${NAME}.err
log         = ${PWD}/${SUBFOLD}/output/${NAME}.log
+JobFlavour = "testmatch"
queue
EOF

chmod +x $RUNFILE
condor_submit $SUBFILE

One way would be to modify the rfluka script: create a custom copy that, at the end of each cycle, moves the output files to a directory of your choice on EOS instead of moving them to the parent directory.
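For illustration only, a minimal sketch of that idea (the EOS path and file patterns below are placeholders for the example, not the actual rfluka internals, which differ between releases):

# Hypothetical snippet for a private copy of rfluka: at the end of a cycle,
# move the large binary scoring files to EOS instead of keeping them on AFS.
# EOSDIR is a placeholder -- point it to your own EOS area.
EOSDIR=/eos/user/m/michael/fluka_scoring
mkdir -p "${EOSDIR}"
# move every unformatted scoring file produced by this cycle
# (names follow the usual <input><cycle>_fort.<unit> pattern)
for f in *_fort.*; do
    [ -e "$f" ] && mv "$f" "${EOSDIR}/"
done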

Another way (which is what we do for our nTOF simulations) would be to copy all input files to the Condor temporary running directory, run the simulation there, and at the end copy the output files to EOS.
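As a rough sketch of that second approach, built on the submission script from the question (the EOS path and the file globs are placeholders to adapt), the RUNFILE heredoc could be changed along these lines:

cat > ${RUNFILE} << EOF
#!/bin/bash
set -x
# run in the Condor scratch directory instead of AFS
cd \$_CONDOR_SCRATCH_DIR
# copy the input and auxiliary files over from the AFS submission folder
cp ${PWD}/${SUBFOLD}/* .
source /cvmfs/sft.cern.ch/lcg/contrib/gcc/9.2.0/x86_64-centos7/setup.sh
export FLUPRO=$FLUPRO
export FLUFOR=$FLUFOR
# run the FLUKA command passed to the submission script
$*
# copy the scoring output to EOS (placeholder path)
EOSDIR=/eos/user/m/michael/fluka_scoring
mkdir -p "\$EOSDIR"
for f in *_fort.* *.out *.log; do
    [ -e "\$f" ] && cp "\$f" "\$EOSDIR/"
done
EOF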

Thanks, I modified the rfluka script and it works!

Michael