Hi, I just updated my xarray package, started a new session on JupyterHub, and am now getting a new error when trying to import PBSCluster.
Here is what I get:
from dask_jobqueue import PBSCluster
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
Input In [1], in <module>
13 from distributed import wait
14 #from ncar_jobqueue import NCARCluster
---> 16 from dask_jobqueue import PBSCluster
17 import dask
File /glade/work/mberdahl/miniconda/envs/pangeo/lib/python3.9/site-packages/dask_jobqueue/__init__.py:3, in <module>
1 # flake8: noqa
2 from . import config
----> 3 from .core import JobQueueCluster
4 from .moab import MoabCluster
5 from .pbs import PBSCluster
File /glade/work/mberdahl/miniconda/envs/pangeo/lib/python3.9/site-packages/dask_jobqueue/core.py:20, in <module>
18 from distributed.deploy.local import nprocesses_nthreads
19 from distributed.scheduler import Scheduler
---> 20 from distributed.utils import tmpfile
22 logger = logging.getLogger(__name__)
24 job_parameters = """
25 cores : int
26 Total number of cores per job
(...)
63 Name of Dask worker. This is typically set by the Cluster
64 """.strip()
ImportError: cannot import name 'tmpfile' from 'distributed.utils' (/glade/work/mberdahl/miniconda/envs/pangeo/lib/python3.9/site-packages/distributed/utils.py)
Any thoughts? Thanks!
My initial thought would be to try updating dask and/or dask-jobqueue. Sometimes xarray updates come with breaking changes in their dependencies. Looking at the versions of things in your environment can be helpful too; I usually do this with conda list.
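For example, here is a minimal sketch of what I mean, assuming a conda-managed environment; the environment name pangeo is taken from the paths in your traceback, so adjust it to your setup:

# activate the environment that raised the error
conda activate pangeo

# check currently installed versions (the argument matches any package name containing "dask")
conda list dask
conda list distributed

# update the packages together so dask-jobqueue and distributed agree again
conda update dask distributed dask-jobqueue

For context, an ImportError like this one usually means distributed is newer than dask-jobqueue: if I remember right, recent distributed releases removed tmpfile from distributed.utils, while older dask-jobqueue versions still try to import it, so updating both together should resolve the mismatch.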