This repository was archived by the owner on Mar 31, 2026. It is now read-only.

Commit a916c7e

Parent: df36eaf

299 files changed: +376 -134997 lines



google/cloud/spanner_v1/services/spanner/async_client.py

Lines changed: 46 additions & 43 deletions
@@ -314,14 +314,14 @@ async def create_session(
 transaction internally, and count toward the one transaction
 limit.

-Active sessions use additional server resources, so it is a good
+Active sessions use additional server resources, so it's a good
 idea to delete idle and unneeded sessions. Aside from explicit
-deletes, Cloud Spanner may delete sessions for which no
-operations are sent for more than an hour. If a session is
-deleted, requests to it return ``NOT_FOUND``.
+deletes, Cloud Spanner can delete sessions when no operations
+are sent for more than an hour. If a session is deleted,
+requests to it return ``NOT_FOUND``.

 Idle sessions can be kept alive by sending a trivial SQL query
-periodically, e.g., ``"SELECT 1"``.
+periodically, for example, ``"SELECT 1"``.

 .. code-block:: python

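The keep-alive pattern this docstring describes can be sketched without the real client. In this sketch, `execute_sql` is a hypothetical stand-in for the ExecuteSql call, and the 55-minute threshold is an assumed safety margin under the roughly one-hour idle limit mentioned above:

```python
import time

def ping_idle_sessions(last_used, execute_sql, idle_threshold_s=55 * 60,
                       now=time.time):
    """Send a trivial query to every session idle longer than the
    threshold, so the backend does not reclaim it.

    last_used: dict mapping session name -> last-activity timestamp.
    execute_sql: hypothetical stand-in for the client's ExecuteSql call.
    """
    current = now()
    pinged = []
    for session, stamp in last_used.items():
        if current - stamp >= idle_threshold_s:
            execute_sql(session, "SELECT 1")  # trivial keep-alive query
            last_used[session] = current      # reset the idle clock
            pinged.append(session)
    return pinged
```

Injecting `now` makes the idle check deterministic to test; in production the default `time.time` is used.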
@@ -477,10 +477,10 @@ async def sample_batch_create_sessions():
 should not be set.
 session_count (:class:`int`):
 Required. The number of sessions to be created in this
-batch call. The API may return fewer than the requested
+batch call. The API can return fewer than the requested
 number of sessions. If a specific number of sessions are
 desired, the client can make additional calls to
-BatchCreateSessions (adjusting
+``BatchCreateSessions`` (adjusting
 [session_count][google.spanner.v1.BatchCreateSessionsRequest.session_count]
 as necessary).

@@ -561,7 +561,7 @@ async def get_session(
 timeout: Union[float, object] = gapic_v1.method.DEFAULT,
 metadata: Sequence[Tuple[str, Union[str, bytes]]] = (),
 ) -> spanner.Session:
-r"""Gets a session. Returns ``NOT_FOUND`` if the session does not
+r"""Gets a session. Returns ``NOT_FOUND`` if the session doesn't
 exist. This is mainly useful for determining whether a session
 is still alive.

@@ -799,7 +799,7 @@ async def delete_session(
 metadata: Sequence[Tuple[str, Union[str, bytes]]] = (),
 ) -> None:
 r"""Ends a session, releasing server resources associated
-with it. This will asynchronously trigger cancellation
+with it. This asynchronously triggers the cancellation
 of any operations that are running with this session.

 .. code-block:: python
@@ -899,7 +899,7 @@ async def execute_sql(
 metadata: Sequence[Tuple[str, Union[str, bytes]]] = (),
 ) -> result_set.ResultSet:
 r"""Executes an SQL statement, returning all results in a single
-reply. This method cannot be used to return a result set larger
+reply. This method can't be used to return a result set larger
 than 10 MiB; if the query yields more data than that, the query
 fails with a ``FAILED_PRECONDITION`` error.

@@ -913,6 +913,9 @@ async def execute_sql(
 [ExecuteStreamingSql][google.spanner.v1.Spanner.ExecuteStreamingSql]
 instead.

+The query string can be SQL or `Graph Query Language
+(GQL) <https://cloud.google.com/spanner/docs/reference/standard-sql/graph-intro>`__.
+
 .. code-block:: python

 # This snippet has been automatically generated and should be regarded as a
@@ -1006,6 +1009,9 @@ def execute_streaming_sql(
 individual row in the result set can exceed 100 MiB, and no
 column value can exceed 10 MiB.

+The query string can be SQL or `Graph Query Language
+(GQL) <https://cloud.google.com/spanner/docs/reference/standard-sql/graph-intro>`__.
+
 .. code-block:: python

 # This snippet has been automatically generated and should be regarded as a
@@ -1243,7 +1249,7 @@ async def read(
 r"""Reads rows from the database using key lookups and scans, as a
 simple key/value style alternative to
 [ExecuteSql][google.spanner.v1.Spanner.ExecuteSql]. This method
-cannot be used to return a result set larger than 10 MiB; if the
+can't be used to return a result set larger than 10 MiB; if the
 read matches more data than that, the read fails with a
 ``FAILED_PRECONDITION`` error.

@@ -1573,7 +1579,7 @@ async def commit(
 any time; commonly, the cause is conflicts with concurrent
 transactions. However, it can also happen for a variety of other
 reasons. If ``Commit`` returns ``ABORTED``, the caller should
-re-attempt the transaction from the beginning, re-using the same
+retry the transaction from the beginning, reusing the same
 session.

 On very rare occasions, ``Commit`` might return ``UNKNOWN``.
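The retry rule described here (on ``ABORTED``, rerun the whole transaction from the beginning in the same session) can be sketched as a plain loop. ``run_transaction``, the ``Aborted`` exception, and the attempt cap are hypothetical stand-ins for illustration, not the real client API; the published google-cloud-spanner Python client wraps similar retry handling in its `run_in_transaction` helper.

```python
class Aborted(Exception):
    """Stand-in for a Commit call that returns the ABORTED status."""

def commit_with_retries(run_transaction, max_attempts=5):
    """Rerun the entire transaction body (reads, writes, Commit) from
    the beginning whenever Commit reports ABORTED, reusing the same
    session as the docstring recommends.

    run_transaction: hypothetical callable that performs one full
    attempt of the transaction and commits it.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return run_transaction()
        except Aborted:
            # Work from the aborted attempt is discarded; the next
            # iteration starts the transaction over from scratch.
            if attempt == max_attempts:
                raise
```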
@@ -1643,7 +1649,7 @@ async def sample_commit():
 commit with a temporary transaction is non-idempotent.
 That is, if the ``CommitRequest`` is sent to Cloud
 Spanner more than once (for instance, due to retries in
-the application, or in the transport library), it is
+the application, or in the transport library), it's
 possible that the mutations are executed more than once.
 If this is undesirable, use
 [BeginTransaction][google.spanner.v1.Spanner.BeginTransaction]
@@ -1729,16 +1735,15 @@ async def rollback(
 timeout: Union[float, object] = gapic_v1.method.DEFAULT,
 metadata: Sequence[Tuple[str, Union[str, bytes]]] = (),
 ) -> None:
-r"""Rolls back a transaction, releasing any locks it holds. It is a
+r"""Rolls back a transaction, releasing any locks it holds. It's a
 good idea to call this for any transaction that includes one or
 more [Read][google.spanner.v1.Spanner.Read] or
 [ExecuteSql][google.spanner.v1.Spanner.ExecuteSql] requests and
 ultimately decides not to commit.

 ``Rollback`` returns ``OK`` if it successfully aborts the
 transaction, the transaction was already aborted, or the
-transaction is not found. ``Rollback`` never returns
-``ABORTED``.
+transaction isn't found. ``Rollback`` never returns ``ABORTED``.

 .. code-block:: python

@@ -1850,12 +1855,12 @@ async def partition_query(
 [ExecuteStreamingSql][google.spanner.v1.Spanner.ExecuteStreamingSql]
 to specify a subset of the query result to read. The same
 session and read-only transaction must be used by the
-PartitionQueryRequest used to create the partition tokens and
-the ExecuteSqlRequests that use the partition tokens.
+``PartitionQueryRequest`` used to create the partition tokens
+and the ``ExecuteSqlRequests`` that use the partition tokens.

 Partition tokens become invalid when the session used to create
 them is deleted, is idle for too long, begins a new transaction,
-or becomes too old. When any of these happen, it is not possible
+or becomes too old. When any of these happen, it isn't possible
 to resume the query, and the whole operation must be restarted
 from the beginning.

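The contract spelled out in this docstring (tokens are created and consumed on the same session and read-only transaction, with no ordering guarantees) can be illustrated with stand-in callables; `partition_query` and `execute_sql` below are hypothetical placeholders for the PartitionQuery and ExecuteSql RPCs, not real client methods.

```python
def run_partitioned_query(partition_query, execute_sql, sql):
    """Fan one query out across its partition tokens.

    partition_query(sql) -> iterable of partition tokens, and
    execute_sql(sql, partition_token=...) -> rows for one partition,
    are hypothetical stand-ins; both must be issued on the same
    session and read-only transaction. No row ordering is implied,
    so callers must not rely on the order rows arrive in.
    """
    rows = []
    for token in partition_query(sql):  # one token per partition
        rows.extend(execute_sql(sql, partition_token=token))
    return rows
```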
@@ -1951,15 +1956,15 @@ async def partition_read(
 [StreamingRead][google.spanner.v1.Spanner.StreamingRead] to
 specify a subset of the read result to read. The same session
 and read-only transaction must be used by the
-PartitionReadRequest used to create the partition tokens and the
-ReadRequests that use the partition tokens. There are no
+``PartitionReadRequest`` used to create the partition tokens and
+the ``ReadRequests`` that use the partition tokens. There are no
 ordering guarantees on rows returned among the returned
-partition tokens, or even within each individual StreamingRead
-call issued with a partition_token.
+partition tokens, or even within each individual
+``StreamingRead`` call issued with a ``partition_token``.

 Partition tokens become invalid when the session used to create
 them is deleted, is idle for too long, begins a new transaction,
-or becomes too old. When any of these happen, it is not possible
+or becomes too old. When any of these happen, it isn't possible
 to resume the read, and the whole operation must be restarted
 from the beginning.

@@ -2053,25 +2058,23 @@ def batch_write(
 timeout: Union[float, object] = gapic_v1.method.DEFAULT,
 metadata: Sequence[Tuple[str, Union[str, bytes]]] = (),
 ) -> Awaitable[AsyncIterable[spanner.BatchWriteResponse]]:
-r"""Batches the supplied mutation groups in a collection
-of efficient transactions. All mutations in a group are
-committed atomically. However, mutations across groups
-can be committed non-atomically in an unspecified order
-and thus, they must be independent of each other.
-Partial failure is possible, i.e., some groups may have
-been committed successfully, while some may have failed.
-The results of individual batches are streamed into the
-response as the batches are applied.
-
-BatchWrite requests are not replay protected, meaning
-that each mutation group may be applied more than once.
-Replays of non-idempotent mutations may have undesirable
-effects. For example, replays of an insert mutation may
-produce an already exists error or if you use generated
-or commit timestamp-based keys, it may result in
-additional rows being added to the mutation's table. We
-recommend structuring your mutation groups to be
-idempotent to avoid this issue.
+r"""Batches the supplied mutation groups in a collection of
+efficient transactions. All mutations in a group are committed
+atomically. However, mutations across groups can be committed
+non-atomically in an unspecified order and thus, they must be
+independent of each other. Partial failure is possible, that is,
+some groups might have been committed successfully, while some
+might have failed. The results of individual batches are
+streamed into the response as the batches are applied.
+
+``BatchWrite`` requests are not replay protected, meaning that
+each mutation group can be applied more than once. Replays of
+non-idempotent mutations can have undesirable effects. For
+example, replays of an insert mutation can produce an already
+exists error or if you use generated or commit timestamp-based
+keys, it can result in additional rows being added to the
+mutation's table. We recommend structuring your mutation groups
+to be idempotent to avoid this issue.

 .. code-block:: python

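The idempotency advice in this docstring can be made concrete with a toy table: a plain insert fails when a ``BatchWrite`` group is replayed, while an insert-or-update mutation leaves the table unchanged on replay. The dict-backed table and both helpers below are illustrative stand-ins, not the client's mutation API.

```python
class AlreadyExists(Exception):
    """Stand-in for the 'already exists' error an insert replay hits."""

def insert(table, key, row):
    """Non-idempotent mutation: replaying it raises AlreadyExists."""
    if key in table:
        raise AlreadyExists(key)
    table[key] = row

def insert_or_update(table, key, row):
    """Idempotent mutation: applying it twice leaves the table in the
    same state, so a replayed BatchWrite group is harmless."""
    table[key] = row
```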
