Is anyone else having problems with the new /dev/null output restrictions on PBS? After some digging, we have found that there are some Python packages that write to /dev/null, including IPython and numba. How are you avoiding getting your jobs killed?
You can still write to /dev/null, just don't make it the output destination in the PBS directives of your job scripts, from what I gather. That is, #PBS -o /dev/null and #PBS -e /dev/null should not be used (long story about how PBS is capable of doing data staging), but you can certainly send output to the local /dev/null as part of normal bash/tcsh inside the script, unless you are seeing something different. Do you have a specific example?
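In other words, the distinction is roughly this (a minimal sketch; the echo is just a stand-in for whatever your packages run):

# Avoid: directives that make /dev/null the place PBS itself delivers output to
#PBS -o /dev/null
#PBS -e /dev/null

# Fine: ordinary shell redirection inside the script body
echo hi there > /dev/null 2>&1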
Thanks for the reply, Jared. That gives us some focus on where we should look for the problem.
Something like the script below is completely valid. Like I said, as long as /dev/null isn't the final destination PBS itself delivers output to, you're fine. If you really don't want the output files around, an alternative way to keep them from persisting is the -R option to qsub, i.e. #PBS -R [o|e|oe|eo] (see the sketch after the script).
#!/bin/bash
#PBS -l walltime=5:00
#PBS -l select=1:ncpus=2
#PBS -q casper
#PBS -j oe

echo hello world

# Redirecting a single command to /dev/null inside the script is fine
echo hi there > /dev/null
date

# Redirecting a whole subshell's output to /dev/null is fine too
(
    curl -sL "https://en.wikipedia.org/api/rest_v1/page/random/summary"
    date
) > /dev/null

echo finished!!
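For reference, the -R form mentioned above would look something like the following. This is only a sketch based on the option as listed, not something I have tested here, and the walltime/select/queue values are simply copied from the example above.

#!/bin/bash
#PBS -l walltime=5:00
#PBS -l select=1:ncpus=2
#PBS -q casper
#PBS -j oe
#PBS -R oe

# With -R oe, PBS itself should remove the joined output/error file when the
# job finishes, so nothing is left behind and nothing points PBS at /dev/null.
echo hello world
date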