@@ -430,14 +430,12 @@ def open_dataset(
         "netcdf4" over "h5netcdf" over "scipy" (customizable via
         ``netcdf_engine_order`` in ``xarray.set_options()``). A custom backend
         class (a subclass of ``BackendEntrypoint``) can also be used.
-    chunks : int, dict, 'auto', 'dask-auto' or None, default: None
+    chunks : int, dict, 'auto', or None, default: None
         If provided, used to load the data into dask arrays.

         - ``chunks="auto"`` will use a chunking scheme that never splits encoded
           chunks. If encoded chunks are small then "auto" takes multiples of them
           over the largest dimension.
-        - ``chunks="dask-auto"`` will use dask ``auto`` chunking taking into account the
-          engine preferred chunks.
         - ``chunks=None`` skips using dask. This uses xarray's internally private
           :ref:`lazy indexing classes <internal design.lazy indexing>`,
           but data is eagerly loaded into memory as numpy arrays when accessed.
@@ -677,14 +675,12 @@ def open_dataarray(
         "netcdf4" over "h5netcdf" over "scipy" (customizable via
         ``netcdf_engine_order`` in ``xarray.set_options()``). A custom backend
         class (a subclass of ``BackendEntrypoint``) can also be used.
-    chunks : int, dict, 'auto', 'dask-auto', or None, default: None
+    chunks : int, dict, 'auto', or None, default: None
         If provided, used to load the data into dask arrays.

         - ``chunks="auto"`` will use a chunking scheme that never splits encoded
           chunks. If encoded chunks are small then "auto" takes multiples of them
           over the largest dimension.
-        - ``chunks='dask-auto'`` will use dask ``auto`` chunking taking into account the
-          engine preferred chunks.
         - ``chunks=None`` skips using dask. This uses xarray's internally private
           :ref:`lazy indexing classes <internal design.lazy indexing>`,
           but data is eagerly loaded into memory as numpy arrays when accessed.
@@ -906,11 +902,9 @@ def open_datatree(
         "h5netcdf" over "netcdf4" (customizable via ``netcdf_engine_order`` in
         ``xarray.set_options()``). A custom backend class (a subclass of
         ``BackendEntrypoint``) can also be used.
-    chunks : int, dict, 'auto', 'dask-auto', or None, default: None
+    chunks : int, dict, 'auto', or None, default: None
         If provided, used to load the data into dask arrays.

-        - ``chunks="dask-auto"`` will use dask ``auto`` chunking taking into account the
-          engine preferred chunks.
         - ``chunks="auto"`` will use a chunking scheme that never splits encoded
           chunks. If encoded chunks are small then "auto" takes multiples of them
           over the largest dimension.
@@ -1155,14 +1149,12 @@ def open_groups(
         ``xarray.set_options()``). A custom backend class (a subclass of
         ``BackendEntrypoint``) can also be used.
         can also be used.
-    chunks : int, dict, 'auto', 'dask-auto', or None, default: None
+    chunks : int, dict, 'auto', or None, default: None
         If provided, used to load the data into dask arrays.

         - ``chunks="auto"`` will use a chunking scheme that never splits encoded
           chunks. If encoded chunks are small then "auto" takes multiples of them
           over the largest dimension.
-        - ``chunks="dask-auto"`` will use dask ``auto`` chunking taking into account the
-          engine preferred chunks.
         - ``chunks=None`` skips using dask. This uses xarray's internally private
           :ref:`lazy indexing classes <internal design.lazy indexing>`,
           but data is eagerly loaded into memory as numpy arrays when accessed.
@@ -1430,7 +1422,7 @@ def open_mfdataset(
         concatenation along more than one dimension is desired, then ``paths`` must be a
         nested list-of-lists (see ``combine_nested`` for details). (A string glob will
         be expanded to a 1-dimensional list.)
-    chunks : int, dict, 'auto', 'dask-auto', or None, optional
+    chunks : int, dict, 'auto', or None, optional
         Dictionary with keys given by dimension names and values given by chunk sizes.
         In general, these should divide the dimensions of each dataset. If int, chunk
         each dimension by ``chunks``. By default, chunks will be chosen to match the
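The ``chunks`` semantics kept by this diff can be illustrated with an in-memory dataset. This is a minimal sketch, assuming xarray and dask are installed; ``Dataset.chunk`` accepts the same ``chunks`` argument as ``open_dataset``, so it stands in for opening a file here.

```python
import numpy as np
import xarray as xr

# Small in-memory dataset standing in for a file opened with open_dataset.
ds = xr.Dataset({"t": (("x", "y"), np.arange(200.0).reshape(10, 20))})

# chunks={"x": 5} loads the data as dask arrays with 5-row blocks along x,
# mirroring open_dataset(path, chunks={"x": 5}).
chunked = ds.chunk({"x": 5})
print(chunked["t"].chunks)  # ((5, 5), (20,))

# Without chunking (chunks=None, the open_dataset default), the data stays
# as plain numpy arrays and .chunks reports None.
print(ds["t"].chunks)  # None
```

Passing ``chunks="auto"`` instead lets dask pick block sizes, which for ``open_dataset`` additionally respects the file's encoded chunks as described in the docstring.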