Releases: googleapis/google-cloud-java
0.3.0
gcloud-java renamed to google-cloud
gcloud-java has been deprecated and renamed to google-cloud.
If you are using Maven, add this to your pom.xml file
<dependency>
<groupId>com.google.cloud</groupId>
<artifactId>google-cloud</artifactId>
<version>0.3.0</version>
</dependency>
If you are using Gradle, add this to your dependencies
compile 'com.google.cloud:google-cloud:0.3.0'
If you are using SBT, add this to your dependencies
libraryDependencies += "com.google.cloud" % "google-cloud" % "0.3.0"
gcloud-java-<service> renamed to google-cloud-<service>
Service-specific artifacts have also been renamed from gcloud-java-<service> to google-cloud-<service>. See the following for examples of adding google-cloud-datastore as a dependency:
<dependency>
<groupId>com.google.cloud</groupId>
<artifactId>google-cloud-datastore</artifactId>
<version>0.3.0</version>
</dependency>
If you are using Gradle, add this to your dependencies
compile 'com.google.cloud:google-cloud-datastore:0.3.0'
If you are using SBT, add this to your dependencies
libraryDependencies += "com.google.cloud" % "google-cloud-datastore" % "0.3.0"
Other changes
- The GCLOUD_PROJECT environment variable is now deprecated; use GOOGLE_CLOUD_PROJECT to set your default project ID.
- The project URL is now: https://googlecloudplatform.github.io/google-cloud-java/
- The GitHub repo is now: https://github.com/GoogleCloudPlatform/google-cloud-java/
0.2.8
Features
Datastore
gcloud-java-datastore now uses Datastore v1 (#1169)
Translate
gcloud-java-translate, a new client library to interact with Google Translate, is released and is in alpha. See the docs for more information.
See TranslateExample for a complete example or the API Documentation for gcloud-java-translate javadoc.
The following snippet shows how to detect the language of some text and how to translate some text.
Complete source code can be found on
DetectLanguageAndTranslate.java.
import com.google.cloud.translate.Detection;
import com.google.cloud.translate.Translate;
import com.google.cloud.translate.Translate.TranslateOption;
import com.google.cloud.translate.TranslateOptions;
import com.google.cloud.translate.Translation;
Translate translate = TranslateOptions.defaultInstance().service();
Detection detection = translate.detect("Hola");
String detectedLanguage = detection.language();
Translation translation = translate.translate(
"World",
TranslateOption.sourceLanguage("en"),
TranslateOption.targetLanguage(detectedLanguage));
System.out.printf("Hola %s%n", translation.translatedText());
Fixes
Core
SocketException and "insufficient data written" IOException are now retried (#1187)
Storage NIO
0.2.7
Fixes
BigQuery
- String setters for DeprecationStatus timestamps are removed from DeprecationStatus.Builder. Getters are still available in DeprecationStatus for legacy support (#1127).
- Fix table's StreamingBuffer to allow oldestEntryTime to be null (#1141).
- Add support for useLegacySql in QueryRequest and QueryJobConfiguration (#1142).
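As a sketch of the new option on QueryJobConfiguration (the query string is a placeholder and the builder/method names follow the 0.2.x-era API; treat them as illustrative rather than authoritative):

```java
import com.google.cloud.bigquery.QueryJobConfiguration;

// Opt a query job into legacy SQL; omit useLegacySql to keep the default
QueryJobConfiguration config = QueryJobConfiguration.builder(
        "SELECT word FROM [bigquery-public-data:samples.shakespeare] LIMIT 10")
    .useLegacySql(true)
    .build();
```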
Datastore
- Fix Datastore exceptions conversion: use getNumber() instead of ordinal() to get DatastoreException's error code (#1140).
- Use the HTTP transport factory, as set via DatastoreOptions, to perform service requests (#1144).
Logging
- Set the gcloud-java user agent in gcloud-java-logging, as done for other modules (#1147).
PubSub
- Change the Pub/Sub endpoint from pubsub-experimental.googleapis.com to pubsub.googleapis.com (#1149).
0.2.6
Features
BigQuery
- Add support for time-partitioned tables. For example, you can now create a time partitioned table using the following code:
TableId tableId = TableId.of(datasetName, tableName);
TimePartitioning partitioning = TimePartitioning.of(Type.DAY);
// You can also set the expiration (2592000000L ms = 30 days; note the long literal)
// TimePartitioning partitioning = TimePartitioning.of(Type.DAY, 2592000000L);
StandardTableDefinition tableDefinition = StandardTableDefinition.builder()
.schema(tableSchema)
.timePartitioning(partitioning)
.build();
Table createdTable = bigquery.create(TableInfo.of(tableId, tableDefinition));
Logging
gcloud-java-logging, a new client library to interact with Stackdriver Logging, is released and is in alpha. See the docs for more information.
gcloud-java-logging uses gRPC as its transport layer, which is not (yet) supported by App Engine Standard. gcloud-java-logging will work on App Engine Flexible.
See LoggingExample for a complete example or the API Documentation for gcloud-java-logging javadoc.
The following snippet shows how to write and list log entries. Complete source code can be found on
WriteAndListLogEntries.java.
import com.google.cloud.MonitoredResource;
import com.google.cloud.Page;
import com.google.cloud.logging.LogEntry;
import com.google.cloud.logging.Logging;
import com.google.cloud.logging.Logging.EntryListOption;
import com.google.cloud.logging.LoggingOptions;
import com.google.cloud.logging.Payload.StringPayload;
import java.util.Collections;
import java.util.Iterator;
LoggingOptions options = LoggingOptions.defaultInstance();
try (Logging logging = options.service()) {
LogEntry firstEntry = LogEntry.builder(StringPayload.of("message"))
.logName("test-log")
.resource(MonitoredResource.builder("global")
.addLabel("project_id", options.projectId())
.build())
.build();
logging.write(Collections.singleton(firstEntry));
Page<LogEntry> entries = logging.listLogEntries(
EntryListOption.filter("logName=projects/" + options.projectId() + "/logs/test-log"));
Iterator<LogEntry> entryIterator = entries.iterateAll();
while (entryIterator.hasNext()) {
System.out.println(entryIterator.next());
}
}
The following snippet, instead, shows how to use a java.util.logging.Logger to write log entries to Stackdriver Logging. The snippet installs a Stackdriver Logging handler using LoggingHandler.addHandler(Logger, LoggingHandler). Notice that this could also be done through the logging.properties file, adding the following line:
com.google.cloud.examples.logging.snippets.AddLoggingHandler.handlers=com.google.cloud.logging.LoggingHandler
The complete code can be found on AddLoggingHandler.java.
import com.google.cloud.logging.LoggingHandler;
import java.util.logging.Logger;
Logger logger = Logger.getLogger(AddLoggingHandler.class.getName());
LoggingHandler.addHandler(logger, new LoggingHandler());
logger.warning("test warning");
0.2.5
Features
Storage NIO
gcloud-java-nio, a new client library that lets you interact with Google Cloud Storage through Java's NIO API, is released and is in alpha. Not all NIO features have been implemented yet; see the docs for more information.
The simplest way to get started with gcloud-java-nio is with Paths and Files:
Path path = Paths.get(URI.create("gs://bucket/lolcat.csv"));
List<String> lines = Files.readAllLines(path, StandardCharsets.UTF_8);
InputStream and OutputStream can also be used for streaming:
Path path = Paths.get(URI.create("gs://bucket/lolcat.csv"));
try (InputStream input = Files.newInputStream(path)) {
// use input stream
}
To configure a bucket per environment, you can use the FileSystem API:
FileSystem fs = FileSystems.getFileSystem(URI.create("gs://bucket"));
byte[] data = "hello world".getBytes(StandardCharsets.UTF_8);
Path path = fs.getPath("/object");
Files.write(path, data);
List<String> lines = Files.readAllLines(path, StandardCharsets.UTF_8);
If you don't want to rely on Java SPI, which requires a META-INF file in your jar generated by Google Auto, you can instantiate this file system directly as follows:
CloudStorageFileSystem fs = CloudStorageFileSystem.forBucket("bucket");
byte[] data = "hello world".getBytes(StandardCharsets.UTF_8);
Path path = fs.getPath("/object");
Files.write(path, data);
data = Files.readAllBytes(path);
For instructions on how to add Google Cloud Storage NIO support to a legacy jar, see this example. For more examples, see here.
Fixes
Storage
- Fix BlobReadChannel to support reading and seeking files larger than Integer.MAX_VALUE bytes
0.2.4
Features
Pub/Sub
gcloud-java-pubsub, a new client library to interact with Google Cloud Pub/Sub, is released and is in alpha. See the docs for more information.
gcloud-java-pubsub uses gRPC as its transport layer, which is not (yet) supported by App Engine Standard. gcloud-java-pubsub will work on App Engine Flexible.
See PubSubExample for a complete example or the API Documentation for gcloud-java-pubsub javadoc.
The following snippet shows how to create a Pub/Sub topic and asynchronously publish messages to it. See CreateTopicAndPublishMessages.java for the full source code.
try (PubSub pubsub = PubSubOptions.defaultInstance().service()) {
Topic topic = pubsub.create(TopicInfo.of("test-topic"));
Message message1 = Message.of("First message");
Message message2 = Message.of("Second message");
topic.publishAsync(message1, message2);
}
The following snippet, instead, shows how to create a Pub/Sub pull subscription and asynchronously pull messages from it. See CreateSubscriptionAndPullMessages.java for the full source code.
try (PubSub pubsub = PubSubOptions.defaultInstance().service()) {
Subscription subscription =
pubsub.create(SubscriptionInfo.of("test-topic", "test-subscription"));
MessageProcessor callback = new MessageProcessor() {
@Override
public void process(Message message) throws Exception {
System.out.printf("Received message \"%s\"%n", message.payloadAsString());
}
};
// Create a message consumer and pull messages (for 60 seconds)
try (MessageConsumer consumer = subscription.pullAsync(callback)) {
Thread.sleep(60_000);
}
}
0.2.3
Features
BigQuery
- Add support for the BYTES datatype. A field of type BYTES can be created by using Field.Value.bytes(). The byte[] bytesValue() method is added to FieldValue to return the value of a field as a byte array.
- A Job waitFor(WaitForOption... waitOptions) method is added to the Job class. This method waits for the job to complete and returns the job's updated information:
Job completedJob = job.waitFor();
if (completedJob == null) {
// job no longer exists
} else if (completedJob.status().error() != null) {
// job failed, handle error
} else {
// job completed successfully
}
By default, the job status is checked every 500 milliseconds; to configure this value, use WaitForOption.checkEvery(long, TimeUnit). WaitForOption.timeout(long, TimeUnit), instead, sets the maximum time to wait.
Core
AuthCredentials.createFor(String) and AuthCredentials.createFor(String, Date) methods have been added to create AuthCredentials objects given an OAuth2 access token (and possibly its expiration date).
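A minimal sketch of the new factory methods; the token string and expiration below are placeholders, not real credentials:

```java
import com.google.cloud.AuthCredentials;
import java.util.Date;

// Build credentials from a bare OAuth2 access token (placeholder value)
AuthCredentials credentials = AuthCredentials.createFor("ya29.sample-access-token");
// Or additionally supply the token's expiration date (here: one hour from now)
Date expiration = new Date(System.currentTimeMillis() + 3_600_000L);
AuthCredentials expiring = AuthCredentials.createFor("ya29.sample-access-token", expiration);
```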
Compute
- An Operation waitFor(WaitForOption... waitOptions) method is added to the Operation class. This method waits for the operation to complete and returns the operation's updated information:
Operation completedOperation = operation.waitFor();
if (completedOperation == null) {
// operation no longer exists
} else if (completedOperation.errors() != null) {
// operation failed, handle error
} else {
// operation completed successfully
}
By default, the operation status is checked every 500 milliseconds; to configure this value, use WaitForOption.checkEvery(long, TimeUnit). WaitForOption.timeout(long, TimeUnit), instead, sets the maximum time to wait.
Datastore
Datastore.put and DatastoreBatchWriter.put now support entities with incomplete keys. Both put methods return the just created/updated entities. A putWithDeferredIdAllocation method has also been added to DatastoreBatchWriter.
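For illustration, a sketch of putting an entity built on an incomplete key (the kind and property names are made up; the server allocates the numeric id):

```java
import com.google.cloud.datastore.Datastore;
import com.google.cloud.datastore.DatastoreOptions;
import com.google.cloud.datastore.Entity;
import com.google.cloud.datastore.FullEntity;
import com.google.cloud.datastore.IncompleteKey;
import com.google.cloud.datastore.KeyFactory;

Datastore datastore = DatastoreOptions.defaultInstance().service();
KeyFactory keyFactory = datastore.newKeyFactory().kind("Task");
// newKey() with no id yields an incomplete key; put now accepts it directly
FullEntity<IncompleteKey> task = FullEntity.builder(keyFactory.newKey())
    .set("description", "buy milk")
    .build();
Entity saved = datastore.put(task);
System.out.println(saved.key()); // the complete key, with a server-allocated id
```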
Fixes
Storage
0.2.2
Features
Core
The Clock abstract class is moved out of ServiceOptions. ServiceOptions.clock() is now used by RetryHelper in all service calls. This enables mocking the Clock source used for retries when testing your code.
Storage
- Refactor storage batches to use the common BatchResult class. Sending batch requests in Storage is now as simple as in DNS. See the following example of sending a batch request:
StorageBatch batch = storage.batch();
BlobId firstBlob = BlobId.of("bucket", "blob1");
BlobId secondBlob = BlobId.of("bucket", "blob2");
BlobId thirdBlob = BlobId.of("bucket", "blob3");
// Users can either register a callback on an operation
batch.delete(firstBlob).notify(new BatchResult.Callback<Boolean, StorageException>() {
@Override
public void success(Boolean result) {
// handle delete result
}
@Override
public void error(StorageException exception) {
// handle exception
}
});
// Ignore its result
batch.update(BlobInfo.builder(secondBlob).contentType("text/plain").build());
StorageBatchResult<Blob> result = batch.get(thirdBlob);
batch.submit();
// Or get the result
Blob blob = result.get(); // returns the operation's result or throws StorageException
Fixes
Datastore
- Update the Datastore client to accept IP addresses for localhost (#1002).
- LocalDatastoreHelper now uses https to download the emulator - thanks to @pehrs (#942).
- Add an example on embedded entities to DatastoreExample (#980).
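As a sketch of how LocalDatastoreHelper is typically driven in tests (the create/start/stop flow; method names follow the 0.2.x-era API and are illustrative):

```java
import com.google.cloud.datastore.Datastore;
import com.google.cloud.datastore.testing.LocalDatastoreHelper;

LocalDatastoreHelper helper = LocalDatastoreHelper.create();
helper.start(); // downloads (over https) and launches the local emulator
// options() returns DatastoreOptions pointing at the emulator, not production
Datastore localDatastore = helper.options().service();
// ... exercise code under test against localDatastore ...
helper.stop(); // shut the emulator down
```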
Storage
- Fix StorageImpl.signUrl for blob names that start with "/" - thanks to @clementdenis (#1013).
- Fix a readAllBytes permission error on Google App Engine (#1010).
0.2.1
Features
Compute
gcloud-java-compute, a new client library to interact with Google Compute Engine, is released and is in alpha. See the docs for more information. See ComputeExample for a complete example or the API Documentation for gcloud-java-compute javadoc.
The following snippet shows how to create a region external IP address, a persistent boot disk and a virtual machine instance that uses both the IP address and the persistent disk. See CreateAddressDiskAndInstance.java for the full source code.
// Create a service object
// Credentials are inferred from the environment.
Compute compute = ComputeOptions.defaultInstance().service();
// Create an external region address
RegionAddressId addressId = RegionAddressId.of("us-central1", "test-address");
Operation operation = compute.create(AddressInfo.of(addressId));
// Wait for operation to complete
while (!operation.isDone()) {
Thread.sleep(1000L);
}
// Check operation errors
operation = operation.reload();
if (operation.errors() == null) {
System.out.println("Address " + addressId + " was successfully created");
} else {
// inspect operation.errors()
throw new RuntimeException("Address creation failed");
}
// Create a persistent disk
ImageId imageId = ImageId.of("debian-cloud", "debian-8-jessie-v20160329");
DiskId diskId = DiskId.of("us-central1-a", "test-disk");
ImageDiskConfiguration diskConfiguration = ImageDiskConfiguration.of(imageId);
DiskInfo disk = DiskInfo.of(diskId, diskConfiguration);
operation = compute.create(disk);
// Wait for operation to complete
while (!operation.isDone()) {
Thread.sleep(1000L);
}
// Check operation errors
operation = operation.reload();
if (operation.errors() == null) {
System.out.println("Disk " + diskId + " was successfully created");
} else {
// inspect operation.errors()
throw new RuntimeException("Disk creation failed");
}
// Create a virtual machine instance
Address externalIp = compute.getAddress(addressId);
InstanceId instanceId = InstanceId.of("us-central1-a", "test-instance");
NetworkId networkId = NetworkId.of("default");
PersistentDiskConfiguration attachConfiguration =
PersistentDiskConfiguration.builder(diskId).boot(true).build();
AttachedDisk attachedDisk = AttachedDisk.of("dev0", attachConfiguration);
NetworkInterface networkInterface = NetworkInterface.builder(networkId)
.accessConfigurations(AccessConfig.of(externalIp.address()))
.build();
MachineTypeId machineTypeId = MachineTypeId.of("us-central1-a", "n1-standard-1");
InstanceInfo instance =
InstanceInfo.of(instanceId, machineTypeId, attachedDisk, networkInterface);
operation = compute.create(instance);
// Wait for operation to complete
while (!operation.isDone()) {
Thread.sleep(1000L);
}
// Check operation errors
operation = operation.reload();
if (operation.errors() == null) {
System.out.println("Instance " + instanceId + " was successfully created");
} else {
// inspect operation.errors()
throw new RuntimeException("Instance creation failed");
}
Datastore
- An options(String namespace) method has been added to LocalDatastoreHelper, allowing creation of testing options for a specific namespace (#936).
- of methods have been added to ListValue to support specific types (String, long, double, boolean, DateTime, LatLng, Key, FullEntity and Blob). addValue methods have been added to ListValue.Builder to support the same set of specific types (#934).
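A sketch of the new typed factories on ListValue; the exact overload shapes are assumptions inferred from the types listed above:

```java
import com.google.cloud.datastore.ListValue;

// Typed factory methods build a list value directly from Java values
ListValue names = ListValue.of("alice", "bob");
ListValue flags = ListValue.of(true);
// The builder accepts the same set of types via addValue
ListValue mixed = ListValue.builder()
    .addValue("answer")
    .addValue(42L)
    .build();
```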
DNS
- Support for batches has been added to gcloud-java-dns (#940). Batches allow you to perform a number of operations in a single RPC request.
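A rough sketch of how a DNS batch might be used; the DnsBatch and DnsBatchResult names here are assumptions about the 0.2.1-era API, and "my-zone" is a placeholder:

```java
import com.google.cloud.dns.Dns;
import com.google.cloud.dns.DnsBatch;
import com.google.cloud.dns.DnsBatchResult;
import com.google.cloud.dns.DnsOptions;
import com.google.cloud.dns.Zone;

Dns dns = DnsOptions.defaultInstance().service();
DnsBatch batch = dns.batch();
// Queue operations; nothing is sent until submit()
DnsBatchResult<Zone> zoneResult = batch.getZone("my-zone");
batch.submit(); // one RPC carries every queued operation
Zone zone = zoneResult.get(); // results become available after submit
```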
Fixes
Core
- The causing exception is now chained in BaseServiceException.getCause() (#774).
0.2.0
Features
General
gcloud-java has been repackaged. com.google.gcloud has now changed to com.google.cloud, and we're releasing our artifacts on Maven under the group ID com.google.cloud rather than com.google.gcloud. The new way to add our library as a dependency in your project is as follows:
If you're using Maven, add this to your pom.xml file
<dependency>
<groupId>com.google.cloud</groupId>
<artifactId>gcloud-java</artifactId>
<version>0.2.0</version>
</dependency>
If you are using Gradle, add this to your dependencies
compile 'com.google.cloud:gcloud-java:0.2.0'
If you are using SBT, add this to your dependencies
libraryDependencies += "com.google.cloud" % "gcloud-java" % "0.2.0"
Storage
- The interface ServiceAccountSigner was added. Both AppEngineAuthCredentials and ServiceAccountAuthCredentials extend this interface and can be used to sign Google Cloud Storage blob URLs (#701, #854).
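As a sketch, signing a blob URL with the credentials attached to the service object; the bucket and blob names are placeholders, and this assumes the active credentials implement ServiceAccountSigner:

```java
import com.google.cloud.storage.BlobInfo;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;
import java.net.URL;
import java.util.concurrent.TimeUnit;

Storage storage = StorageOptions.defaultInstance().service();
BlobInfo blobInfo = BlobInfo.builder("my-bucket", "my-blob").build();
// The URL grants time-limited access to the blob without further authentication
URL signedUrl = storage.signUrl(blobInfo, 14, TimeUnit.DAYS);
```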
Fixes
General
- The default RPC retry parameters were changed to align with the backoff policy requirements listed in the Service Level Agreements (SLAs) for Cloud BigQuery, Cloud Datastore, and Cloud Storage (#857, #860).
- The expiration date is now properly populated for App Engine credentials (#873, #894).
gcloud-java now uses the project ID given in the credentials file specified by the environment variable GOOGLE_APPLICATION_CREDENTIALS (if set) (#845).
BigQuery
Job's isDone method is fixed to return true if the job is complete or the job doesn't exist (#853).
Datastore
- LocalGcdHelper has been renamed to LocalDatastoreHelper, and the command-line startup/shutdown of the helper has been removed. The helper is now more consistent with other modules' test helpers and can be used via the create, start, and stop methods (#821).
- ListValue no longer rejects empty lists, since Cloud Datastore v1beta3 supports empty array values (#862).
DNS
- There were some minor changes to ChangeRequest, namely adding reload/isDone methods and changing the method signature of applyTo (#849).
Storage
RemoteGcsHelper was renamed to RemoteStorageHelper to be more consistent with other modules' test helpers (#821).