Once you have compiled a client library, whether from a local installation or with Bazel, it is time for the fun part: actually running it!
Create a virtual environment for the library:
$ virtualenv ~/.local/client-lib --python=`which python3.7`
$ source ~/.local/client-lib/bin/activate

Next, install the library:
$ cd dest/
$ pip install --editable .

Now it is time to play with it! Here is a test script:
# This is the client library generated by this plugin.
from google.cloud import vision
# Instantiate the client.
#
# If you need to manually specify credentials, do so here.
# More info: https://cloud.google.com/docs/authentication/getting-started
#
# If you wish, you can send `transport='grpc'` or `transport='http'`
# to change which underlying transport layer is being used.
ia = vision.ImageAnnotatorClient()
# Send the request to the server and get the response.
response = ia.batch_annotate_images({
    'requests': [{
        'features': [{
            'type': vision.Feature.Type.LABEL_DETECTION,
        }],
        'image': {'source': {
            'image_uri': 'https://images.pexels.com/photos/67636'
                         '/rose-blue-flower-rose-blooms-67636.jpeg',
        }},
    }],
})
print(response)
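If you want to sanity-check the shape of the request before sending real traffic (and before credentials are set up), the same nested structure can be assembled and inspected with plain Python. This is only an illustrative sketch: the string 'LABEL_DETECTION' below stands in for the `vision.Feature.Type.LABEL_DETECTION` enum value, and no network call is made.

```python
# Build the same nested request structure as in the test script above,
# using a plain string as a placeholder for the Feature.Type enum.
request = {
    'requests': [{
        'features': [{
            'type': 'LABEL_DETECTION',
        }],
        'image': {'source': {
            'image_uri': 'https://images.pexels.com/photos/67636'
                         '/rose-blue-flower-rose-blooms-67636.jpeg',
        }},
    }],
}

# Check that the structure matches what batch_annotate_images expects:
# a top-level 'requests' list, each entry carrying 'features' and 'image'.
first = request['requests'][0]
assert first['features'][0]['type'] == 'LABEL_DETECTION'
assert first['image']['source']['image_uri'].startswith('https://')
print('request structure looks good')
```

Once this shape is correct, swapping the placeholder string for the real enum and passing the dict to `batch_annotate_images` is all that remains.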