diff --git a/examples/cloudrun/README.md b/examples/cloudrun/README.md new file mode 100644 index 00000000..491a043a --- /dev/null +++ b/examples/cloudrun/README.md @@ -0,0 +1,26 @@ +# Node.js Samples for Cloud Run + +This directory contains samples demonstrating how to connect to Cloud SQL from Cloud Run using various Node.js ORMs and the [Cloud SQL Node.js Connector](https://github.com/GoogleCloudPlatform/cloud-sql-nodejs-connector). + +## Available Samples + +* [Knex.js](./knex) +* [Prisma](./prisma) +* [Sequelize](./sequelize) +* [TypeORM](./typeorm) + +Each ORM directory contains subdirectories for supported databases (MySQL, PostgreSQL, SQL Server), and each example includes implementations in: +* CommonJS (`.cjs`) +* ES Modules (`.mjs`) +* TypeScript (`.ts`) + +## Prerequisites + +1. A Google Cloud Project with billing enabled. +2. A Cloud SQL instance. +3. A Cloud Run service account with the `Cloud SQL Client` IAM role. +4. For IAM Authentication, the service account must be added as a database user. + +## Deployment + +Refer to the `README.md` in each ORM directory for specific deployment instructions. diff --git a/examples/cloudrun/knex/README.md b/examples/cloudrun/knex/README.md new file mode 100644 index 00000000..046eb0db --- /dev/null +++ b/examples/cloudrun/knex/README.md @@ -0,0 +1,160 @@ +# Connecting Cloud Run to Cloud SQL with the Node.js Connector + +This guide provides a comprehensive walkthrough of how to connect a Cloud Run service to a Cloud SQL instance using the Cloud SQL Node.js Connector. It covers connecting to instances with both public and private IP addresses and demonstrates how to handle database credentials securely. + +## Develop a Node.js Application + +The following Node.js applications demonstrate how to connect to a Cloud SQL instance using the Cloud SQL Node.js Connector. 
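Each sample keeps its `Connector` and connection pool in module-level variables that start as `null` and are created by a get-or-create helper on the first request. Reduced to a runnable sketch — with a stand-in `createPool` in place of the real `Connector`-plus-knex setup, which needs a live Cloud SQL instance — the caching pattern looks like this:

```javascript
// Module-level cache, as in the samples: null until first use.
let pool = null;

// Stand-in for the real setup (new Connector() + knex(...)) in the samples.
async function createPool() {
  return {query: async sql => `ran: ${sql}`};
}

// Lazily create the pool on the first request and reuse it afterwards.
async function getPool() {
  if (!pool) {
    pool = await createPool();
  }
  return pool;
}

async function main() {
  const first = await getPool();
  const second = await getPool();
  // Both calls return the same cached instance.
  console.log(first === second); // true
  console.log(await first.query('SELECT 1')); // ran: SELECT 1
}

main();
```

One caveat this sketch shares with the samples: two requests that race on a cold start can each build a pool, because the cached value is the resolved pool rather than the promise. Caching the promise returned by `createPool()` closes that gap.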
+
+### `mysql2/index.cjs` and `pg/index.mjs`
+
+These files contain the core application logic for connecting to a Cloud SQL for MySQL or PostgreSQL instance. They provide two separate authentication methods, each exposed at a different route:
+- `/`: Password-based authentication
+- `/iam`: IAM-based authentication
+
+### `tedious/index.ts`
+
+This file contains the core application logic for connecting to a Cloud SQL for SQL Server instance. It uses the `@google-cloud/cloud-sql-connector` package to create a database connection pool with password-based authentication at the `/` route.
+
+> [!NOTE]
+>
+> Cloud SQL for SQL Server does not support IAM database authentication.
+
+## Lazy Instantiation
+
+In a Cloud Run service, global variables are initialized when the container instance starts up. The application instance then handles subsequent requests until the container is spun down.
+
+The `Connector` and `knex` objects are defined as global variables (initially set to `null`) and are lazily instantiated (created only when needed) inside the request handlers.
+
+This approach offers several benefits:
+
+1. **Faster Startup:** By deferring initialization until the first request, the Cloud Run service can start listening for requests almost immediately, reducing cold start latency.
+2. **Resource Efficiency:** Expensive operations, like establishing background connections or fetching secrets, are only performed when actually required.
+3. **Connection Reuse:** Once initialized, the global `Connector` and `knex` instances are reused for all subsequent requests to that container instance. This prevents the overhead of creating new connections for every request and avoids hitting connection limits.
+
+## IAM Authentication Prerequisites
+
+For IAM authentication to work, you must ensure two things:
+
+1. 
**The Cloud Run service's service account has the `Cloud SQL Client` role.** You can grant this role with the following command: + ```bash + gcloud projects add-iam-policy-binding PROJECT_ID \ + --member="serviceAccount:SERVICE_ACCOUNT_EMAIL" \ + --role="roles/cloudsql.client" + ``` + Replace `PROJECT_ID` with your Google Cloud project ID and `SERVICE_ACCOUNT_EMAIL` with the email of the service account your Cloud Run service is using. + +2. **The service account is added as a database user to your Cloud SQL instance.** You can do this with the following command: + ```bash + gcloud sql users create SERVICE_ACCOUNT_EMAIL \ + --instance=INSTANCE_NAME \ + --type=cloud_iam_user + ``` + Replace `SERVICE_ACCOUNT_EMAIL` with the same service account email and `INSTANCE_NAME` with your Cloud SQL instance name. + +## Deploy the Application to Cloud Run + +Follow these steps to deploy the application to Cloud Run. + +### Build and Push the Docker Image + +1. **Enable the Artifact Registry API:** + + ```bash + gcloud services enable artifactregistry.googleapis.com + ``` + +2. **Create an Artifact Registry repository:** + + ```bash + gcloud artifacts repositories create REPO_NAME \ + --repository-format=docker \ + --location=REGION + ``` + +3. **Configure Docker to authenticate with Artifact Registry:** + + ```bash + gcloud auth configure-docker REGION-docker.pkg.dev + ``` + +4. **Build the Docker image (replace `mysql2` with `pg` or `tedious` as needed):** + + ```bash + docker build -t REGION-docker.pkg.dev/PROJECT_ID/REPO_NAME/IMAGE_NAME mysql2 + ``` + +5. **Push the Docker image to Artifact Registry:** + + ```bash + docker push REGION-docker.pkg.dev/PROJECT_ID/REPO_NAME/IMAGE_NAME + ``` + +### Deploy to Cloud Run + +Deploy the container image to Cloud Run using the `gcloud run deploy` command. 
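The deploy commands below select the connector's IP type through the `IP_TYPE` environment variable, which the sample code maps onto the connector's `IpAddressTypes` enum, defaulting to a public IP connection. A runnable sketch of that mapping, with plain strings standing in for the enum values:

```javascript
// Plain strings stand in for the IpAddressTypes enum that the samples
// import from @google-cloud/cloud-sql-connector.
const IpAddressTypes = {PUBLIC: 'PUBLIC', PRIVATE: 'PRIVATE', PSC: 'PSC'};

// Mirrors the getIpType helper in the samples: unset or unrecognized
// values fall back to a public IP connection.
function getIpType(ipTypeStr) {
  const ipType = ipTypeStr || 'PUBLIC';
  if (ipType === 'PRIVATE') {
    return IpAddressTypes.PRIVATE;
  } else if (ipType === 'PSC') {
    return IpAddressTypes.PSC;
  }
  return IpAddressTypes.PUBLIC;
}

console.log(getIpType(undefined)); // PUBLIC
console.log(getIpType('PRIVATE')); // PRIVATE
```

This is why the public IP deploy commands omit `IP_TYPE` entirely, while the private IP commands set `IP_TYPE=PRIVATE`.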
+ +**Sample Values:** +* `SERVICE_NAME`: `my-cloud-run-service` +* `REGION`: `us-central1` +* `PROJECT_ID`: `my-gcp-project-id` +* `REPO_NAME`: `my-artifact-repo` +* `IMAGE_NAME`: `my-app-image` +* `INSTANCE_CONNECTION_NAME`: `my-gcp-project-id:us-central1:my-instance-name` +* `DB_USER`: `my-db-user` (for password-based authentication) +* `DB_IAM_USER`: `my-service-account@my-gcp-project-id.iam.gserviceaccount.com` (for IAM-based authentication) +* `DB_NAME`: `my-db-name` +* `DB_PASSWORD`: `my-user-pass-name` +* `VPC_NETWORK`: `my-vpc-network` +* `SUBNET_NAME`: `my-vpc-subnet` + +**For MySQL and PostgreSQL (Public IP):** + +```bash +gcloud run deploy SERVICE_NAME \ + --image=REGION-docker.pkg.dev/PROJECT_ID/REPO_NAME/IMAGE_NAME \ + --set-env-vars=DB_USER=DB_USER,DB_IAM_USER=DB_IAM_USER,DB_NAME=DB_NAME,INSTANCE_CONNECTION_NAME=INSTANCE_CONNECTION_NAME \ + --region=REGION \ + --update-secrets=DB_PASSWORD=DB_PASSWORD:latest +``` + +**For MySQL and PostgreSQL (Private IP):** + +```bash +gcloud run deploy SERVICE_NAME \ + --image=REGION-docker.pkg.dev/PROJECT_ID/REPO_NAME/IMAGE_NAME \ + --set-env-vars=DB_USER=DB_USER,DB_IAM_USER=DB_IAM_USER,DB_NAME=DB_NAME,INSTANCE_CONNECTION_NAME=INSTANCE_CONNECTION_NAME,IP_TYPE=PRIVATE \ + --network=VPC_NETWORK \ + --subnet=SUBNET_NAME \ + --vpc-egress=private-ranges-only \ + --region=REGION \ + --update-secrets=DB_PASSWORD=DB_PASSWORD:latest +``` + +**For SQL Server (Public IP):** + +```bash +gcloud run deploy SERVICE_NAME \ + --image=REGION-docker.pkg.dev/PROJECT_ID/REPO_NAME/IMAGE_NAME \ + --set-env-vars=DB_USER=DB_USER,DB_NAME=DB_NAME,INSTANCE_CONNECTION_NAME=INSTANCE_CONNECTION_NAME \ + --region=REGION \ + --update-secrets=DB_PASSWORD=DB_PASSWORD:latest +``` + +**For SQL Server (Private IP):** + +```bash +gcloud run deploy SERVICE_NAME \ + --image=REGION-docker.pkg.dev/PROJECT_ID/REPO_NAME/IMAGE_NAME \ + --set-env-vars=DB_USER=DB_USER,DB_NAME=DB_NAME,INSTANCE_CONNECTION_NAME=INSTANCE_CONNECTION_NAME,IP_TYPE=PRIVATE \ + 
--network=VPC_NETWORK \
+  --subnet=SUBNET_NAME \
+  --vpc-egress=private-ranges-only \
+  --region=REGION \
+  --update-secrets=DB_PASSWORD=DB_PASSWORD:latest
+```
+
+> [!NOTE]
+> **For PSC connections**
+>
+> To connect to a Cloud SQL instance that uses the Private Service Connect (PSC) connection type, create a PSC endpoint plus a DNS zone and DNS record for the instance in the same VPC network as the Cloud Run service, and replace `IP_TYPE` in the deploy command with `PSC`. To configure the DNS records, refer to the [Connect to an instance using Private Service Connect](https://docs.cloud.google.com/sql/docs/mysql/configure-private-service-connect) guide.
diff --git a/examples/cloudrun/knex/mysql2/Dockerfile b/examples/cloudrun/knex/mysql2/Dockerfile
new file mode 100644
index 00000000..7461eb1b
--- /dev/null
+++ b/examples/cloudrun/knex/mysql2/Dockerfile
@@ -0,0 +1,22 @@
+# Use the official Node.js image.
+# https://hub.docker.com/_/node
+FROM node:25-slim
+
+# Create and change to the app directory.
+WORKDIR /usr/src/app
+
+# Copy application dependency manifests to the container image.
+# A wildcard is used to ensure both package.json AND package-lock.json are copied.
+# Copying this separately prevents re-running npm install on every code change.
+COPY package*.json ./
+
+# Install production dependencies.
+# If you add a package-lock.json, speed up your build by switching to 'npm ci'.
+# RUN npm ci --only=production
+RUN npm install --omit=dev
+
+# Copy local code to the container image.
+COPY . .
+
+# Run the web service on container startup.
+CMD ["node", "index.cjs"]
diff --git a/examples/cloudrun/knex/mysql2/index.cjs b/examples/cloudrun/knex/mysql2/index.cjs
new file mode 100644
index 00000000..3da55238
--- /dev/null
+++ b/examples/cloudrun/knex/mysql2/index.cjs
@@ -0,0 +1,160 @@
+// Copyright 2025 Google LLC
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// https://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+const express = require('express');
+const {Connector, IpAddressTypes} = require('@google-cloud/cloud-sql-connector');
+const knex = require('knex');
+
+const app = express();
+
+// set up rate limiter: maximum of 100 requests per 15 minutes
+const RateLimit = require('express-rate-limit');
+const limiter = RateLimit({
+  // 15 minutes
+  windowMs: 15 * 60 * 1000,
+  // max 100 requests per windowMs
+  max: 100,
+});
+
+// apply rate limiter to all requests
+app.use(limiter);
+
+// Connector and connection pools are initialized as null to allow for lazy instantiation.
+// Lazy instantiation is a best practice for Cloud Run applications because it allows
+// the application to start faster and only initialize connections when they are needed.
+// This is especially important in serverless environments where applications may be
+// started and stopped frequently.
+let connector = null; +let passwordPool = null; +let iamPool = null; + +// Helper to get the IP type enum from string +function getIpType(ipTypeStr) { + const ipType = ipTypeStr || 'PUBLIC'; + if (ipType === 'PRIVATE') { + return IpAddressTypes.PRIVATE; + } else if (ipType === 'PSC') { + return IpAddressTypes.PSC; + } else { + return IpAddressTypes.PUBLIC; + } +} + +// Function to create a database connection pool using IAM authentication +async function createIamConnectionPool() { + const instanceConnectionName = process.env.INSTANCE_CONNECTION_NAME; + // IAM service account email + const dbUser = process.env.DB_IAM_USER; + const dbName = process.env.DB_NAME; + const ipType = getIpType(process.env.IP_TYPE); + + // Creates a new connector object. + if (!connector) { + connector = new Connector(); + } + + // Get the connection options for the Cloud SQL instance. + const clientOpts = await connector.getOptions({ + instanceConnectionName, + ipType: ipType, + authType: 'IAM', + }); + + // Create a new knex connection pool. + return knex({ + client: 'mysql2', + connection: { + ...clientOpts, + user: dbUser, + database: dbName, + }, + }); +} + +// Function to create a database connection pool using password authentication +async function createPasswordConnectionPool() { + const instanceConnectionName = process.env.INSTANCE_CONNECTION_NAME; + // Database username + const dbUser = process.env.DB_USER; + const dbName = process.env.DB_NAME; + const dbPassword = process.env.DB_PASSWORD; + const ipType = getIpType(process.env.IP_TYPE); + + // Creates a new connector object. + if (!connector) { + connector = new Connector(); + } + + // Get the connection options for the Cloud SQL instance. + const clientOpts = await connector.getOptions({ + instanceConnectionName, + ipType: ipType, + }); + + // Create a new knex connection pool. 
+ return knex({ + client: 'mysql2', + connection: { + ...clientOpts, + user: dbUser, + password: dbPassword, + database: dbName, + }, + }); +} + +// Helper to get or create the password pool +async function getPasswordConnectionPool() { + if (!passwordPool) { + passwordPool = await createPasswordConnectionPool(); + } + return passwordPool; +} + +// Helper to get or create the IAM pool +async function getIamConnectionPool() { + if (!iamPool) { + iamPool = await createIamConnectionPool(); + } + return iamPool; +} + +app.get('/', async (req, res) => { + try { + const db = await getPasswordConnectionPool(); + // Use knex to run a simple query + const result = await db.raw('SELECT 1'); + // Knex raw result for mysql2 is [rows, fields] + res.send(`Database connection successful (password authentication), result: ${JSON.stringify(result[0])}`); + } catch (err) { + console.error(err); + res.status(500).send(`Error connecting to the database (password authentication): ${err.message}`); + } +}); + +app.get('/iam', async (req, res) => { + try { + const db = await getIamConnectionPool(); + const result = await db.raw('SELECT 1'); + res.send(`Database connection successful (IAM authentication), result: ${JSON.stringify(result[0])}`); + } catch (err) { + console.error(err); + res.status(500).send(`Error connecting to the database (IAM authentication): ${err.message}`); + } +}); + +const port = parseInt(process.env.PORT) || 8080; +app.listen(port, () => { + console.log(`Server running on port ${port}`); +}); diff --git a/examples/cloudrun/knex/mysql2/package.json b/examples/cloudrun/knex/mysql2/package.json new file mode 100644 index 00000000..8c44d181 --- /dev/null +++ b/examples/cloudrun/knex/mysql2/package.json @@ -0,0 +1,19 @@ +{ + "name": "knex-mysql-cloudrun", + "version": "1.0.0", + "description": "Knex MySQL example for Cloud Run (CommonJS)", + "main": "index.cjs", + "type": "commonjs", + "scripts": { + "start": "node index.cjs" + }, + "dependencies": { + 
"@google-cloud/cloud-sql-connector": "^1.8.4", + "express": "^5.1.0", + "knex": "^3.1.0", + "mysql2": "^3.15.2", + "express-rate-limit": "8.2.1" + }, + "devDependencies": { + } +} diff --git a/examples/cloudrun/knex/pg/Dockerfile b/examples/cloudrun/knex/pg/Dockerfile new file mode 100644 index 00000000..c7f350a8 --- /dev/null +++ b/examples/cloudrun/knex/pg/Dockerfile @@ -0,0 +1,22 @@ +# Use the official Node.js image. +# https://hub.docker.com/_/node +FROM node:25-slim + +# Create and change to the app directory. +WORKDIR /usr/src/app + +# Copy application dependency manifests to the container image. +# A wildcard is used to ensure both package.json AND package-lock.json are copied. +# Copying this separately prevents re-running npm install on every code change. +COPY package*.json ./ + +# Install production dependencies. +# If you add a package-lock.json speed your build by switching to 'npm ci'. +# RUN npm ci --only=production +RUN npm install --omit=dev + +# Copy local code to the container image. +COPY . . + +# Run the web service on container startup. +CMD ["node", "index.mjs"] diff --git a/examples/cloudrun/knex/pg/index.mjs b/examples/cloudrun/knex/pg/index.mjs new file mode 100644 index 00000000..ba087688 --- /dev/null +++ b/examples/cloudrun/knex/pg/index.mjs @@ -0,0 +1,158 @@ +// Copyright 2025 Google LLC +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// https://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. 
+
+import express from 'express';
+import {Connector, IpAddressTypes} from '@google-cloud/cloud-sql-connector';
+import knex from 'knex';
+import RateLimit from 'express-rate-limit';
+
+const app = express();
+
+// set up rate limiter: maximum of 100 requests per 15 minutes
+const limiter = RateLimit({
+  // 15 minutes
+  windowMs: 15 * 60 * 1000,
+  // max 100 requests per windowMs
+  max: 100,
+});
+
+// apply rate limiter to all requests
+app.use(limiter);
+
+// Connector and connection pools are initialized as null to allow for lazy instantiation.
+// Lazy instantiation is a best practice for Cloud Run applications because it allows
+// the application to start faster and only initialize connections when they are needed.
+// This is especially important in serverless environments where applications may be
+// started and stopped frequently.
+let connector = null;
+let passwordPool = null;
+let iamPool = null;
+
+// Helper to get the IP type enum from string
+function getIpType(ipTypeStr) {
+  const ipType = ipTypeStr || 'PUBLIC';
+  if (ipType === 'PRIVATE') {
+    return IpAddressTypes.PRIVATE;
+  } else if (ipType === 'PSC') {
+    return IpAddressTypes.PSC;
+  } else {
+    return IpAddressTypes.PUBLIC;
+  }
+}
+
+// Function to create a database connection pool using IAM authentication
+async function createIamConnectionPool() {
+  const instanceConnectionName = process.env.INSTANCE_CONNECTION_NAME;
+  // IAM service account email
+  const dbUser = process.env.DB_IAM_USER;
+  const dbName = process.env.DB_NAME;
+  const ipType = getIpType(process.env.IP_TYPE);
+
+  // Creates a new connector object.
+  if (!connector) {
+    connector = new Connector();
+  }
+
+  // Get the connection options for the Cloud SQL instance.
+  const clientOpts = await connector.getOptions({
+    instanceConnectionName,
+    ipType: ipType,
+    authType: 'IAM',
+  });
+
+  // Create a new knex connection pool.
+ return knex({ + client: 'pg', + connection: { + ...clientOpts, + user: dbUser, + database: dbName, + }, + }); +} + +// Function to create a database connection pool using password authentication +async function createPasswordConnectionPool() { + const instanceConnectionName = process.env.INSTANCE_CONNECTION_NAME; + // Database username + const dbUser = process.env.DB_USER; + const dbName = process.env.DB_NAME; + const dbPassword = process.env.DB_PASSWORD; + const ipType = getIpType(process.env.IP_TYPE); + + // Creates a new connector object. + if (!connector) { + connector = new Connector(); + } + + // Get the connection options for the Cloud SQL instance. + const clientOpts = await connector.getOptions({ + instanceConnectionName, + ipType: ipType, + }); + + // Create a new knex connection pool. + return knex({ + client: 'pg', + connection: { + ...clientOpts, + user: dbUser, + password: dbPassword, + database: dbName, + }, + }); +} + +// Helper to get or create the password pool +async function getPasswordConnectionPool() { + if (!passwordPool) { + passwordPool = await createPasswordConnectionPool(); + } + return passwordPool; +} + +// Helper to get or create the IAM pool +async function getIamConnectionPool() { + if (!iamPool) { + iamPool = await createIamConnectionPool(); + } + return iamPool; +} + +app.get('/', async (req, res) => { + try { + const db = await getPasswordConnectionPool(); + const result = await db.raw('SELECT 1'); + res.send(`Database connection successful (password authentication), result: ${JSON.stringify(result.rows)}`); + } catch (err) { + console.error(err); + res.status(500).send(`Error connecting to the database (password authentication): ${err.message}`); + } +}); + +app.get('/iam', async (req, res) => { + try { + const db = await getIamConnectionPool(); + const result = await db.raw('SELECT 1'); + res.send(`Database connection successful (IAM authentication), result: ${JSON.stringify(result.rows)}`); + } catch (err) { + 
console.error(err); + res.status(500).send(`Error connecting to the database (IAM authentication): ${err.message}`); + } +}); + +const port = parseInt(process.env.PORT) || 8080; +app.listen(port, () => { + console.log(`Server running on port ${port}`); +}); diff --git a/examples/cloudrun/knex/pg/package.json b/examples/cloudrun/knex/pg/package.json new file mode 100644 index 00000000..02928573 --- /dev/null +++ b/examples/cloudrun/knex/pg/package.json @@ -0,0 +1,19 @@ +{ + "name": "knex-pg-cloudrun", + "version": "1.0.0", + "description": "Knex PostgreSQL example for Cloud Run (ESM)", + "main": "index.mjs", + "type": "module", + "scripts": { + "start": "node index.mjs" + }, + "dependencies": { + "@google-cloud/cloud-sql-connector": "^1.8.4", + "express": "^5.1.0", + "knex": "^3.1.0", + "pg": "^8.16.3", + "express-rate-limit": "8.2.1" + }, + "devDependencies": { + } +} diff --git a/examples/cloudrun/knex/tedious/Dockerfile b/examples/cloudrun/knex/tedious/Dockerfile new file mode 100644 index 00000000..9885d948 --- /dev/null +++ b/examples/cloudrun/knex/tedious/Dockerfile @@ -0,0 +1,32 @@ +# Use a Node.js image to build the application +FROM node:25 as builder + +# Create and change to the app directory. +WORKDIR /usr/src/app + +# Copy application dependency manifests to the container image. +COPY package*.json ./ +COPY tsconfig.json ./ + +# Install dependencies and build the application +RUN npm install +COPY . . +RUN npm run build + +# Use a slim Node.js image for the production environment +FROM node:20-slim + +# Create and change to the app directory. +WORKDIR /usr/src/app + +# Copy package.json to the container image. +COPY package*.json ./ + +# Install production dependencies. +RUN npm install --omit=dev + +# Copy the built application from the builder stage. +COPY --from=builder /usr/src/app/dist ./dist + +# Run the web service on container startup. 
+CMD ["node", "dist/index.js"]
diff --git a/examples/cloudrun/knex/tedious/index.ts b/examples/cloudrun/knex/tedious/index.ts
new file mode 100644
index 00000000..a01e58ef
--- /dev/null
+++ b/examples/cloudrun/knex/tedious/index.ts
@@ -0,0 +1,127 @@
+// Copyright 2025 Google LLC
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// https://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+import express from 'express';
+import RateLimit from 'express-rate-limit';
+import {
+  AuthTypes,
+  Connector,
+  IpAddressTypes,
+} from '@google-cloud/cloud-sql-connector';
+import knex, {Knex} from 'knex';
+
+const app = express();
+
+// set up rate limiter: maximum of 100 requests per 15 minutes
+const limiter = RateLimit({
+  // 15 minutes
+  windowMs: 15 * 60 * 1000,
+  // max 100 requests per windowMs
+  max: 100,
+});
+
+// apply rate limiter to all requests
+app.use(limiter);
+
+// Connector and connection pools are initialized as null to allow for lazy instantiation.
+// Lazy instantiation is a best practice for Cloud Run applications because it allows
+// the application to start faster and only initialize connections when they are needed.
+// This is especially important in serverless environments where applications may be
+// started and stopped frequently.
+let connector: Connector | null = null; +let passwordPool: Knex | null = null; + +// Helper to get the IP type enum from string +function getIpType(ipTypeStr: string | undefined) { + const ipType = ipTypeStr || 'PUBLIC'; + if (ipType === 'PRIVATE') { + return IpAddressTypes.PRIVATE; + } else if (ipType === 'PSC') { + return IpAddressTypes.PSC; + } else { + return IpAddressTypes.PUBLIC; + } +} + +// Function to create a database connection pool using password authentication +async function createPasswordConnectionPool() { + const instanceConnectionName = process.env.INSTANCE_CONNECTION_NAME || ''; + // Database username + const dbUser = process.env.DB_USER || ''; + const dbName = process.env.DB_NAME || ''; + const dbPassword = process.env.DB_PASSWORD || ''; + const ipType = getIpType(process.env.IP_TYPE); + + if (!connector) { + connector = new Connector(); + } + + // Get the connection options for the Cloud SQL instance. + const clientOpts = await connector.getTediousOptions({ + instanceConnectionName, + ipType: ipType, + authType: AuthTypes.PASSWORD, + }); + + // Create a new knex connection pool. 
+ return knex({ + client: 'mssql', + connection: { + server: '127.0.0.1', // address doesn't matter, connector hijacks it + user: dbUser, + password: dbPassword, + database: dbName, + options: { + ...clientOpts, + // tedious driver-specific connector options + encrypt: true, + trustServerCertificate: true, + }, + }, + // Set min/max pool size, and other pool options + pool: {min: 1, max: 5}, + }); +} + +// Helper to get or create the password pool +async function getPasswordConnectionPool() { + if (!passwordPool) { + passwordPool = await createPasswordConnectionPool(); + } + return passwordPool; +} + +app.get('/', async (req, res) => { + try { + const db = await getPasswordConnectionPool(); + const result = await db.raw('SELECT 1'); + res.send( + 'Database connection successful (password authentication), result: ' + + `${JSON.stringify(result)}` + ); + } catch (err: unknown) { + console.error(err); + res + .status(500) + .send( + 'Error connecting to the database (password authentication): ' + + `${(err as Error).message}` + ); + } +}); + +const port = process.env.PORT || 8080; +app.listen(port, () => { + console.log(`Server running on port ${port}`); +}); diff --git a/examples/cloudrun/knex/tedious/package.json b/examples/cloudrun/knex/tedious/package.json new file mode 100644 index 00000000..01ec53c7 --- /dev/null +++ b/examples/cloudrun/knex/tedious/package.json @@ -0,0 +1,23 @@ +{ + "name": "knex-sqlserver-cloudrun", + "version": "1.0.0", + "description": "Knex SQL Server example for Cloud Run (TypeScript)", + "main": "index.ts", + "scripts": { + "start": "ts-node index.ts", + "build": "tsc" + }, + "dependencies": { + "express": "^5.1.0", + "@google-cloud/cloud-sql-connector": "^1.8.4", + "tedious": "^18.6.1", + "knex": "^3.1.0", + "express-rate-limit": "8.2.1" + }, + "devDependencies": { + "@types/express": "^5.0.5", + "@types/node": "^20.11.0", + "ts-node": "^10.9.2", + "typescript": "^5.3.3" + } +} diff --git a/examples/cloudrun/knex/tedious/tsconfig.json 
b/examples/cloudrun/knex/tedious/tsconfig.json new file mode 100644 index 00000000..c1efcf4a --- /dev/null +++ b/examples/cloudrun/knex/tedious/tsconfig.json @@ -0,0 +1,11 @@ +{ + "compilerOptions": { + "target": "es2016", + "module": "commonjs", + "outDir": "./dist", + "strict": true, + "esModuleInterop": true, + "skipLibCheck": true, + "forceConsistentCasingInFileNames": true + } +} diff --git a/examples/cloudrun/prisma/README.md b/examples/cloudrun/prisma/README.md new file mode 100644 index 00000000..79e210aa --- /dev/null +++ b/examples/cloudrun/prisma/README.md @@ -0,0 +1,130 @@ +# Connecting Cloud Run to Cloud SQL with the Node.js Connector + +This guide provides a comprehensive walkthrough of how to connect a Cloud Run service to a Cloud SQL instance using the Cloud SQL Node.js Connector. It covers connecting to instances with both public and private IP addresses and demonstrates how to handle database credentials securely. + +## Develop a Node.js Application + +The following Node.js applications demonstrate how to connect to a Cloud SQL instance using the Cloud SQL Node.js Connector. + +### `mysql/index.mjs` and `postgresql/index.ts` + +These files contain the core application logic for connecting to a Cloud SQL for MySQL or PostgreSQL instance. They provide two separate authentication methods, each exposed at a different route: +- `/`: Password-based authentication +- `/iam`: IAM-based authentication + + +## Lazy Instantiation + +In a Cloud Run service, global variables are initialized when the container instance starts up. The application instance then handles subsequent requests until the container is spun down. + +The `Connector` and `PrismaClient` objects are defined as global variables (initially set to `null`) and are lazily instantiated (created only when needed) inside the request handlers. + +This approach offers several benefits: + +1. 
**Faster Startup:** By deferring initialization until the first request, the Cloud Run service can start listening for requests almost immediately, reducing cold start latency. +2. **Resource Efficiency:** Expensive operations, like establishing background connections or fetching secrets, are only performed when actually required. +3. **Connection Reuse:** Once initialized, the global `Connector` and `PrismaClient` instances are reused for all subsequent requests to that container instance. This prevents the overhead of creating new connections for every request and avoids hitting connection limits. + +## IAM Authentication Prerequisites + +For IAM authentication to work, you must ensure two things: + +1. **The Cloud Run service's service account has the `Cloud SQL Client` role.** You can grant this role with the following command: + ```bash + gcloud projects add-iam-policy-binding PROJECT_ID \ + --member="serviceAccount:SERVICE_ACCOUNT_EMAIL" \ + --role="roles/cloudsql.client" + ``` + Replace `PROJECT_ID` with your Google Cloud project ID and `SERVICE_ACCOUNT_EMAIL` with the email of the service account your Cloud Run service is using. + +2. **The service account is added as a database user to your Cloud SQL instance.** You can do this with the following command: + ```bash + gcloud sql users create SERVICE_ACCOUNT_EMAIL \ + --instance=INSTANCE_NAME \ + --type=cloud_iam_user + ``` + Replace `SERVICE_ACCOUNT_EMAIL` with the same service account email and `INSTANCE_NAME` with your Cloud SQL instance name. + +## Deploy the Application to Cloud Run + +Follow these steps to deploy the application to Cloud Run. + +### Build and Push the Docker Image + +1. **Enable the Artifact Registry API:** + + ```bash + gcloud services enable artifactregistry.googleapis.com + ``` + +2. **Create an Artifact Registry repository:** + + ```bash + gcloud artifacts repositories create REPO_NAME \ + --repository-format=docker \ + --location=REGION + ``` + +3. 
**Configure Docker to authenticate with Artifact Registry:**
+
+   ```bash
+   gcloud auth configure-docker REGION-docker.pkg.dev
+   ```
+
+4. **Build the Docker image (replace `mysql` with `postgresql` as needed):**
+
+   ```bash
+   docker build -t REGION-docker.pkg.dev/PROJECT_ID/REPO_NAME/IMAGE_NAME mysql
+   ```
+
+5. **Push the Docker image to Artifact Registry:**
+
+   ```bash
+   docker push REGION-docker.pkg.dev/PROJECT_ID/REPO_NAME/IMAGE_NAME
+   ```
+
+### Deploy to Cloud Run
+
+Deploy the container image to Cloud Run using the `gcloud run deploy` command.
+
+**Sample Values:**
+* `SERVICE_NAME`: `my-cloud-run-service`
+* `REGION`: `us-central1`
+* `PROJECT_ID`: `my-gcp-project-id`
+* `REPO_NAME`: `my-artifact-repo`
+* `IMAGE_NAME`: `my-app-image`
+* `INSTANCE_CONNECTION_NAME`: `my-gcp-project-id:us-central1:my-instance-name`
+* `DB_USER`: `my-db-user` (for password-based authentication)
+* `DB_IAM_USER`: `my-service-account@my-gcp-project-id.iam.gserviceaccount.com` (for IAM-based authentication)
+* `DB_NAME`: `my-db-name`
+* `DB_PASSWORD`: `my-user-pass-name`
+* `VPC_NETWORK`: `my-vpc-network`
+* `SUBNET_NAME`: `my-vpc-subnet`
+
+**For MySQL and PostgreSQL (Public IP):**
+
+```bash
+gcloud run deploy SERVICE_NAME \
+  --image=REGION-docker.pkg.dev/PROJECT_ID/REPO_NAME/IMAGE_NAME \
+  --set-env-vars=DB_USER=DB_USER,DB_IAM_USER=DB_IAM_USER,DB_NAME=DB_NAME,INSTANCE_CONNECTION_NAME=INSTANCE_CONNECTION_NAME \
+  --region=REGION \
+  --update-secrets=DB_PASSWORD=DB_PASSWORD:latest
+```
+
+**For MySQL and PostgreSQL (Private IP):**
+
+```bash
+gcloud run deploy SERVICE_NAME \
+  --image=REGION-docker.pkg.dev/PROJECT_ID/REPO_NAME/IMAGE_NAME \
+  --set-env-vars=DB_USER=DB_USER,DB_IAM_USER=DB_IAM_USER,DB_NAME=DB_NAME,INSTANCE_CONNECTION_NAME=INSTANCE_CONNECTION_NAME,IP_TYPE=PRIVATE \
+  --network=VPC_NETWORK \
+  --subnet=SUBNET_NAME \
+  --vpc-egress=private-ranges-only \
+  --region=REGION \
+  --update-secrets=DB_PASSWORD=DB_PASSWORD:latest
+```
+
+> [!NOTE]
+> **For PSC connections**
+>
+> To connect to a Cloud SQL instance using the PSC connection type, create a PSC endpoint plus a DNS zone and DNS record for the instance in the same VPC network as the Cloud Run service, and replace `IP_TYPE` in the deploy command with `PSC`. To configure DNS records, refer to the [Connect to an instance using Private Service Connect](https://docs.cloud.google.com/sql/docs/mysql/configure-private-service-connect) guide.
diff --git a/examples/cloudrun/prisma/mysql/Dockerfile b/examples/cloudrun/prisma/mysql/Dockerfile
new file mode 100644
index 00000000..a63e52b2
--- /dev/null
+++ b/examples/cloudrun/prisma/mysql/Dockerfile
@@ -0,0 +1,33 @@
+# Use the official Node.js image.
+# https://hub.docker.com/_/node
+FROM node:25-slim
+
+# Create and change to the app directory.
+WORKDIR /usr/src/app
+
+# Copy application dependency manifests to the container image.
+# A wildcard is used to ensure both package.json AND package-lock.json are copied.
+# Copying this separately prevents re-running npm install on every code change.
+COPY package*.json ./
+COPY schema.prisma ./
+
+# Recent Debian releases ship OpenSSL 3. Prisma currently requires version 1.1.
+# See https://github.com/prisma/prisma/issues/17335
+RUN apt-get update && apt-get install -y wget && \
+    wget http://deb.debian.org/debian/pool/main/o/openssl/libssl1.1_1.1.1w-0+deb11u1_amd64.deb && \
+    dpkg -i libssl1.1_1.1.1w-0+deb11u1_amd64.deb && \
+    rm libssl1.1_1.1.1w-0+deb11u1_amd64.deb
+
+# Install production dependencies.
+# If you add a package-lock.json, speed up your build by switching to 'npm ci'.
+# RUN npm ci --omit=dev
+RUN npm install --omit=dev
+
+# Generate Prisma Client
+RUN npx prisma generate
+
+# Copy local code to the container image.
+COPY . .
+
+# Run the web service on container startup.
+CMD ["node", "index.mjs"]
diff --git a/examples/cloudrun/prisma/mysql/index.mjs b/examples/cloudrun/prisma/mysql/index.mjs
new file mode 100644
index 00000000..99c71020
--- /dev/null
+++ b/examples/cloudrun/prisma/mysql/index.mjs
@@ -0,0 +1,196 @@
+// Copyright 2025 Google LLC
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//     https://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+import express from 'express';
+import {resolve} from 'node:path';
+import {Connector, IpAddressTypes} from '@google-cloud/cloud-sql-connector';
+import {PrismaClient} from '@prisma/client';
+// `require` is not available in ES modules, so use an import instead.
+import rateLimit from 'express-rate-limit';
+
+const app = express();
+
+// set up rate limiter: maximum of 100 requests per 15 minutes
+const limiter = rateLimit({
+  // 15 minutes
+  windowMs: 15 * 60 * 1000,
+  // max 100 requests per windowMs
+  max: 100,
+});
+
+// apply rate limiter to all requests
+app.use(limiter);
+
+// Connector and connection pools are initialized as null to allow for lazy instantiation.
+// Lazy instantiation is a best practice for Cloud Run applications because it allows
+// the application to start faster and only initialize connections when they are needed.
+// This is especially important in serverless environments where applications may be
+// started and stopped frequently.
+let connector = null; +let passwordClient = null; +let iamClient = null; +let passwordCleanup = null; +let iamCleanup = null; + +// Helper to get the IP type enum from string +function getIpType(ipTypeStr) { + const ipType = ipTypeStr || 'PUBLIC'; + if (ipType === 'PRIVATE') { + return IpAddressTypes.PRIVATE; + } else if (ipType === 'PSC') { + return IpAddressTypes.PSC; + } else { + return IpAddressTypes.PUBLIC; + } +} + +// Function to create a database connection pool using IAM authentication +async function createIamConnectionPool() { + const instanceConnectionName = process.env.INSTANCE_CONNECTION_NAME; + // IAM service account email + const dbUser = process.env.DB_IAM_USER; + const dbName = process.env.DB_NAME; + const ipType = getIpType(process.env.IP_TYPE); + + // Creates a new connector object. + if (!connector) { + connector = new Connector(); + } + + // Creates a randomly named unix socket path for the local proxy. + const path = resolve('/tmp', `mysql-iam-${Date.now()}.socket`); + + // The startLocalProxy method starts a local proxy that listens on the + // specified unix socket path. This allows the application to connect to + // the database using a standard database driver. + await connector.startLocalProxy({ + instanceConnectionName, + ipType: ipType, + authType: 'IAM', + listenOptions: {path}, + }); + + // URL encode the user for IAM service accounts + const datasourceUrl = `mysql://${encodeURIComponent(dbUser)}@localhost/${dbName}?socket=${path}`; + const prisma = new PrismaClient({datasourceUrl}); + + // Returns the prisma client and a cleanup function to close the connection. 
+ return { + prisma, + async close() { + await prisma.$disconnect(); + }, + }; +} + +// Function to create a database connection pool using password authentication +async function createPasswordConnectionPool() { + const instanceConnectionName = process.env.INSTANCE_CONNECTION_NAME; + // Database username + const dbUser = process.env.DB_USER; + const dbName = process.env.DB_NAME; + const dbPassword = process.env.DB_PASSWORD; + const ipType = getIpType(process.env.IP_TYPE); + + if (!connector) { + connector = new Connector(); + } + + // Creates a randomly named unix socket path for the local proxy. + const path = resolve('/tmp', `mysql-pw-${Date.now()}.socket`); + + // The startLocalProxy method starts a local proxy that listens on the + // specified unix socket path. This allows the application to connect to + // the database using a standard database driver. + await connector.startLocalProxy({ + instanceConnectionName, + ipType: ipType, + listenOptions: {path}, + }); + + // The datasourceUrl is the connection string for the database. It is + // constructed using the database user, password, and the unix socket path. + const datasourceUrl = `mysql://${dbUser}:${dbPassword}@localhost/${dbName}?socket=${path}`; + const prisma = new PrismaClient({datasourceUrl}); + + // Returns the prisma client and a cleanup function to close the connection. 
+ return { + prisma, + async close() { + await prisma.$disconnect(); + }, + }; +} + +// Helper to get or create the password pool +async function getPasswordConnectionPool() { + if (!passwordClient) { + const { prisma, close } = await createPasswordConnectionPool(); + passwordClient = prisma; + passwordCleanup = close; + } + return passwordClient; +} + +// Helper to get or create the IAM pool +async function getIamConnectionPool() { + if (!iamClient) { + const { prisma, close } = await createIamConnectionPool(); + iamClient = prisma; + iamCleanup = close; + } + return iamClient; +} + +app.get('/', async (req, res) => { + try { + const prisma = await getPasswordConnectionPool(); + const result = await prisma.$queryRaw`SELECT 1`; + const serialized = JSON.stringify(result, (key, value) => + typeof value === 'bigint' ? value.toString() : value + ); + res.send(`Database connection successful (password authentication), result: ${serialized}`); + } catch (err) { + console.error(err); + res.status(500).send(`Error connecting to the database (password authentication): ${err.message}`); + } +}); + +app.get('/iam', async (req, res) => { + try { + const prisma = await getIamConnectionPool(); + const result = await prisma.$queryRaw`SELECT 1`; + const serialized = JSON.stringify(result, (key, value) => + typeof value === 'bigint' ? 
value.toString() : value + ); + res.send(`Database connection successful (IAM authentication), result: ${serialized}`); + } catch (err) { + console.error(err); + res.status(500).send(`Error connecting to the database (IAM authentication): ${err.message}`); + } +}); + +const port = parseInt(process.env.PORT) || 8080; +const server = app.listen(port, () => { + console.log(`Server running on port ${port}`); +}); + +process.on('SIGTERM', async () => { + console.log('SIGTERM signal received: closing HTTP server'); + server.close(async () => { + console.log('HTTP server closed'); + if (passwordCleanup) await passwordCleanup(); + if (iamCleanup) await iamCleanup(); + if (connector) connector.close(); + }); +}); diff --git a/examples/cloudrun/prisma/mysql/package.json b/examples/cloudrun/prisma/mysql/package.json new file mode 100644 index 00000000..12bb6cec --- /dev/null +++ b/examples/cloudrun/prisma/mysql/package.json @@ -0,0 +1,19 @@ +{ + "name": "prisma-mysql-cloudrun", + "version": "1.0.0", + "description": "Prisma MySQL example for Cloud Run (ESM)", + "main": "index.mjs", + "type": "module", + "scripts": { + "start": "node index.mjs" + }, + "dependencies": { + "@google-cloud/cloud-sql-connector": "^1.8.4", + "express": "^5.1.0", + "prisma": "^5.22.0", + "@prisma/client": "^5.22.0", + "express-rate-limit": "8.2.1" + }, + "devDependencies": { + } +} diff --git a/examples/cloudrun/prisma/mysql/schema.prisma b/examples/cloudrun/prisma/mysql/schema.prisma new file mode 100644 index 00000000..824567cd --- /dev/null +++ b/examples/cloudrun/prisma/mysql/schema.prisma @@ -0,0 +1,17 @@ +// This is your Prisma schema file, +// learn more about it in the docs: https://pris.ly/d/prisma-schema + +generator client { + provider = "prisma-client-js" +} + +datasource db { + provider = "mysql" + url = env("DATABASE_URL") +} + +model User { + id Int @id @default(autoincrement()) + email String @unique + name String? 
+}
diff --git a/examples/cloudrun/prisma/postgresql/Dockerfile b/examples/cloudrun/prisma/postgresql/Dockerfile
new file mode 100644
index 00000000..1fc9fe2d
--- /dev/null
+++ b/examples/cloudrun/prisma/postgresql/Dockerfile
@@ -0,0 +1,50 @@
+# Use a Node.js image to build the application
+FROM node:25 AS builder
+
+# Create and change to the app directory.
+WORKDIR /usr/src/app
+
+# Copy application dependency manifests to the container image.
+COPY package*.json ./
+COPY tsconfig.json ./
+COPY schema.prisma ./
+
+# Recent Debian releases ship OpenSSL 3. Prisma currently requires version 1.1.
+# See https://github.com/prisma/prisma/issues/17335
+RUN apt-get update && apt-get install -y wget && \
+    wget http://deb.debian.org/debian/pool/main/o/openssl/libssl1.1_1.1.1w-0+deb11u1_amd64.deb && \
+    dpkg -i libssl1.1_1.1.1w-0+deb11u1_amd64.deb && \
+    rm libssl1.1_1.1.1w-0+deb11u1_amd64.deb
+
+# Install dependencies and build the application
+RUN npm install
+COPY . .
+RUN npx prisma generate
+RUN npm run build
+
+# Use a slim Node.js image for the production environment
FROM node:20-slim
+
+# Create and change to the app directory.
+WORKDIR /usr/src/app
+
+# Copy package.json to the container image.
+COPY package*.json ./
+COPY schema.prisma ./
+
+# Recent Debian releases ship OpenSSL 3. Prisma currently requires version 1.1.
+# See https://github.com/prisma/prisma/issues/17335
+RUN apt-get update && apt-get install -y wget && \
+    wget http://deb.debian.org/debian/pool/main/o/openssl/libssl1.1_1.1.1w-0+deb11u1_amd64.deb && \
+    dpkg -i libssl1.1_1.1.1w-0+deb11u1_amd64.deb && \
+    rm libssl1.1_1.1.1w-0+deb11u1_amd64.deb
+
+# Install production dependencies.
+RUN npm install --omit=dev
+
+# Copy the built application from the builder stage, placing the generated
+# Prisma Client inside node_modules so @prisma/client can resolve it at runtime.
+COPY --from=builder /usr/src/app/dist ./dist
+COPY --from=builder /usr/src/app/node_modules/.prisma ./node_modules/.prisma
+
+# Run the web service on container startup.
+CMD ["node", "dist/index.js"]
diff --git a/examples/cloudrun/prisma/postgresql/index.ts b/examples/cloudrun/prisma/postgresql/index.ts
new file mode 100644
index 00000000..5a9b7608
--- /dev/null
+++ b/examples/cloudrun/prisma/postgresql/index.ts
@@ -0,0 +1,229 @@
+// Copyright 2025 Google LLC
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//     https://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+import express from 'express';
+import {resolve} from 'node:path';
+import fs from 'node:fs';
+import {
+  Connector,
+  IpAddressTypes,
+  AuthTypes,
+} from '@google-cloud/cloud-sql-connector';
+import {PrismaClient} from '@prisma/client';
+import rateLimit from 'express-rate-limit';
+
+const app = express();
+
+// set up rate limiter: maximum of 100 requests per 15 minutes
+const limiter = rateLimit({
+  // 15 minutes
+  windowMs: 15 * 60 * 1000,
+  // max 100 requests per windowMs
+  max: 100,
+});
+
+// apply rate limiter to all requests
+app.use(limiter);
+
+// Connector and connection pools are initialized as null to allow for lazy instantiation.
+// Lazy instantiation is a best practice for Cloud Run applications because it allows
+// the application to start faster and only initialize connections when they are needed.
+// This is especially important in serverless environments where applications may be
+// started and stopped frequently.
+let connector: Connector | null = null;
+let passwordClient: PrismaClient | null = null;
+let iamClient: PrismaClient | null = null;
+let passwordCleanup: (() => Promise<void>) | null = null;
+let iamCleanup: (() => Promise<void>) | null = null;
+
+// Helper to get the IP type enum from string
+function getIpType(ipTypeStr: string | undefined) {
+  const ipType = ipTypeStr || 'PUBLIC';
+  if (ipType === 'PRIVATE') {
+    return IpAddressTypes.PRIVATE;
+  } else if (ipType === 'PSC') {
+    return IpAddressTypes.PSC;
+  } else {
+    return IpAddressTypes.PUBLIC;
+  }
+}
+
+// Function to create a database connection pool using IAM authentication
+async function createIamConnectionPool() {
+  const instanceConnectionName = process.env.INSTANCE_CONNECTION_NAME || '';
+  // IAM service account email
+  const dbUser = process.env.DB_IAM_USER || '';
+  const dbName = process.env.DB_NAME || '';
+  const ipType = getIpType(process.env.IP_TYPE);
+
+  // Creates a new connector object.
+  if (!connector) {
+    connector = new Connector();
+  }
+
+  // Creates a randomly named unix socket path for the local proxy.
+  const dir = resolve('/tmp', `pg-iam-${Date.now()}`);
+  fs.mkdirSync(dir, {recursive: true});
+  const path = resolve(dir, '.s.PGSQL.5432');
+
+  // The startLocalProxy method starts a local proxy that listens on the
+  // specified unix socket path. This allows the application to connect to
+  // the database using a standard database driver.
+  const cleanup = await connector.startLocalProxy({
+    instanceConnectionName,
+    ipType: ipType,
+    authType: AuthTypes.IAM,
+    listenOptions: {path},
+  });
+
+  // URL encode the user for IAM service accounts
+  const datasourceUrl = `postgresql://${encodeURIComponent(
+    dbUser
+  )}@localhost/${dbName}?host=${dir}`;
+  const prisma = new PrismaClient({datasourceUrl});
+
+  // Returns the prisma client and a cleanup function to close the connection.
+ return { + prisma, + async close() { + await prisma.$disconnect(); + cleanup(); + }, + }; +} + +// Function to create a database connection pool using password authentication +async function createPasswordConnectionPool() { + const instanceConnectionName = process.env.INSTANCE_CONNECTION_NAME || ''; + // Database username + const dbUser = process.env.DB_USER || ''; + const dbName = process.env.DB_NAME || ''; + const dbPassword = process.env.DB_PASSWORD || ''; + const ipType = getIpType(process.env.IP_TYPE); + + // Creates a new connector object. + if (!connector) { + connector = new Connector(); + } + + // Creates a randomly named unix socket path for the local proxy. + const dir = resolve('/tmp', `pg-pw-${Date.now()}`); + fs.mkdirSync(dir, {recursive: true}); + const path = resolve(dir, '.s.PGSQL.5432'); + + // The startLocalProxy method starts a local proxy that listens on the + // specified unix socket path. This allows the application to connect to + // the database using a standard database driver. + const cleanup = await connector.startLocalProxy({ + instanceConnectionName, + ipType: ipType, + listenOptions: {path}, + }); + + // The datasourceUrl is the connection string for the database. It is + // constructed using the database user, password, and the unix socket path. + const datasourceUrl = `postgresql://${dbUser}:${dbPassword}@localhost/${dbName}?host=${dir}`; + const prisma = new PrismaClient({datasourceUrl}); + + // Returns the prisma client and a cleanup function to close the connection. 
+ return { + prisma, + async close() { + await prisma.$disconnect(); + cleanup(); + }, + }; +} + +// Helper to get or create the password pool +async function getPasswordConnectionPool() { + if (!passwordClient) { + const {prisma, close} = await createPasswordConnectionPool(); + passwordClient = prisma; + passwordCleanup = close; + } + return passwordClient; +} + +// Helper to get or create the IAM pool +async function getIamConnectionPool() { + if (!iamClient) { + const {prisma, close} = await createIamConnectionPool(); + iamClient = prisma; + iamCleanup = close; + } + return iamClient; +} + +app.get('/', async (req, res) => { + try { + const prisma = await getPasswordConnectionPool(); + const result = await prisma.$queryRaw`SELECT 1`; + const serialized = JSON.stringify(result, (key, value) => + typeof value === 'bigint' ? value.toString() : value + ); + res.send( + 'Database connection successful (password authentication), result: ' + + `${serialized}` + ); + } catch (err: unknown) { + console.error(err); + res + .status(500) + .send( + 'Error connecting to the database (password authentication): ' + + `${(err as Error).message}` + ); + } +}); + +app.get('/iam', async (req, res) => { + try { + const prisma = await getIamConnectionPool(); + const result = await prisma.$queryRaw`SELECT 1`; + const serialized = JSON.stringify(result, (key, value) => + typeof value === 'bigint' ? value.toString() : value + ); + res.send( + 'Database connection successful (IAM authentication), result: ' + + `${serialized}` + ); + } catch (err: unknown) { + console.error(err); + res + .status(500) + .send( + 'Error connecting to the database (IAM authentication): ' + + `${(err as Error).message}` + ); + } +}); + +const port = process.env.PORT ? 
parseInt(process.env.PORT) : 8080; + +app.listen(port, () => { + console.log(`Listening on port ${port}`); +}); + +process.on('SIGTERM', async () => { + if (passwordCleanup) { + await passwordCleanup(); + } + if (iamCleanup) { + await iamCleanup(); + } + if (connector) { + connector.close(); + } +}); diff --git a/examples/cloudrun/prisma/postgresql/package.json b/examples/cloudrun/prisma/postgresql/package.json new file mode 100644 index 00000000..658794f8 --- /dev/null +++ b/examples/cloudrun/prisma/postgresql/package.json @@ -0,0 +1,23 @@ +{ + "name": "prisma-pg-cloudrun", + "version": "1.0.0", + "description": "Prisma PostgreSQL example for Cloud Run (TypeScript)", + "main": "index.ts", + "scripts": { + "start": "ts-node index.ts", + "build": "tsc" + }, + "dependencies": { + "@google-cloud/cloud-sql-connector": "^1.3.0", + "express": "^5.1.0", + "prisma": "^5.22.0", + "@prisma/client": "^5.22.0", + "express-rate-limit": "8.2.1" + }, + "devDependencies": { + "@types/express": "^5.0.5", + "@types/node": "^20.11.0", + "ts-node": "^10.9.2", + "typescript": "^5.3.3" + } +} diff --git a/examples/cloudrun/prisma/postgresql/schema.prisma b/examples/cloudrun/prisma/postgresql/schema.prisma new file mode 100644 index 00000000..e11e5421 --- /dev/null +++ b/examples/cloudrun/prisma/postgresql/schema.prisma @@ -0,0 +1,17 @@ +// This is your Prisma schema file, +// learn more about it in the docs: https://pris.ly/d/prisma-schema + +generator client { + provider = "prisma-client-js" +} + +datasource db { + provider = "postgresql" + url = env("DATABASE_URL") +} + +model User { + id Int @id @default(autoincrement()) + email String @unique + name String? 
+} diff --git a/examples/cloudrun/prisma/postgresql/tsconfig.json b/examples/cloudrun/prisma/postgresql/tsconfig.json new file mode 100644 index 00000000..c1efcf4a --- /dev/null +++ b/examples/cloudrun/prisma/postgresql/tsconfig.json @@ -0,0 +1,11 @@ +{ + "compilerOptions": { + "target": "es2016", + "module": "commonjs", + "outDir": "./dist", + "strict": true, + "esModuleInterop": true, + "skipLibCheck": true, + "forceConsistentCasingInFileNames": true + } +} diff --git a/examples/cloudrun/sequelize/README.md b/examples/cloudrun/sequelize/README.md new file mode 100644 index 00000000..5ee250e4 --- /dev/null +++ b/examples/cloudrun/sequelize/README.md @@ -0,0 +1,160 @@ +# Connecting Cloud Run to Cloud SQL with the Node.js Connector + +This guide provides a comprehensive walkthrough of how to connect a Cloud Run service to a Cloud SQL instance using the Cloud SQL Node.js Connector. It covers connecting to instances with both public and private IP addresses and demonstrates how to handle database credentials securely. + +## Develop a Node.js Application + +The following Node.js applications demonstrate how to connect to a Cloud SQL instance using the Cloud SQL Node.js Connector. + +### `mysql2/index.ts` and `pg/index.cjs` + +These files contain the core application logic for connecting to a Cloud SQL for MySQL or PostgreSQL instance. They provide two separate authentication methods, each exposed at a different route: +- `/`: Password-based authentication +- `/iam`: IAM-based authentication + +### `tedious/index.mjs` + +This file contains the core application logic for connecting to a Cloud SQL for SQL Server instance. It uses the `cloud-sql-nodejs-connector` to create a database connection pool with password-based authentication at the `/` route. + +> [!NOTE] +> +> Cloud SQL for SQL Server does not support IAM database authentication. + +## Lazy Instantiation + +In a Cloud Run service, global variables are initialized when the container instance starts up. 
The application instance then handles subsequent requests until the container is spun down. + +The `Connector` and `Sequelize` objects are defined as global variables (initially set to `null`) and are lazily instantiated (created only when needed) inside the request handlers. + +This approach offers several benefits: + +1. **Faster Startup:** By deferring initialization until the first request, the Cloud Run service can start listening for requests almost immediately, reducing cold start latency. +2. **Resource Efficiency:** Expensive operations, like establishing background connections or fetching secrets, are only performed when actually required. +3. **Connection Reuse:** Once initialized, the global `Connector` and `Sequelize` instances are reused for all subsequent requests to that container instance. This prevents the overhead of creating new connections for every request and avoids hitting connection limits. + +## IAM Authentication Prerequisites + +For IAM authentication to work, you must ensure two things: + +1. **The Cloud Run service's service account has the `Cloud SQL Client` role.** You can grant this role with the following command: + ```bash + gcloud projects add-iam-policy-binding PROJECT_ID \ + --member="serviceAccount:SERVICE_ACCOUNT_EMAIL" \ + --role="roles/cloudsql.client" + ``` + Replace `PROJECT_ID` with your Google Cloud project ID and `SERVICE_ACCOUNT_EMAIL` with the email of the service account your Cloud Run service is using. + +2. **The service account is added as a database user to your Cloud SQL instance.** You can do this with the following command: + ```bash + gcloud sql users create SERVICE_ACCOUNT_EMAIL \ + --instance=INSTANCE_NAME \ + --type=cloud_iam_user + ``` + Replace `SERVICE_ACCOUNT_EMAIL` with the same service account email and `INSTANCE_NAME` with your Cloud SQL instance name. + +## Deploy the Application to Cloud Run + +Follow these steps to deploy the application to Cloud Run. + +### Build and Push the Docker Image + +1. 
**Enable the Artifact Registry API:**
+
+   ```bash
+   gcloud services enable artifactregistry.googleapis.com
+   ```
+
+2. **Create an Artifact Registry repository:**
+
+   ```bash
+   gcloud artifacts repositories create REPO_NAME \
+     --repository-format=docker \
+     --location=REGION
+   ```
+
+3. **Configure Docker to authenticate with Artifact Registry:**
+
+   ```bash
+   gcloud auth configure-docker REGION-docker.pkg.dev
+   ```
+
+4. **Build the Docker image (replace `mysql2` with `pg` or `tedious` as needed):**
+
+   ```bash
+   docker build -t REGION-docker.pkg.dev/PROJECT_ID/REPO_NAME/IMAGE_NAME mysql2
+   ```
+
+5. **Push the Docker image to Artifact Registry:**
+
+   ```bash
+   docker push REGION-docker.pkg.dev/PROJECT_ID/REPO_NAME/IMAGE_NAME
+   ```
+
+### Deploy to Cloud Run
+
+Deploy the container image to Cloud Run using the `gcloud run deploy` command.
+
+**Sample Values:**
+* `SERVICE_NAME`: `my-cloud-run-service`
+* `REGION`: `us-central1`
+* `PROJECT_ID`: `my-gcp-project-id`
+* `REPO_NAME`: `my-artifact-repo`
+* `IMAGE_NAME`: `my-app-image`
+* `INSTANCE_CONNECTION_NAME`: `my-gcp-project-id:us-central1:my-instance-name`
+* `DB_USER`: `my-db-user` (for password-based authentication)
+* `DB_IAM_USER`: `my-service-account@my-gcp-project-id.iam.gserviceaccount.com` (for IAM-based authentication)
+* `DB_NAME`: `my-db-name`
+* `DB_PASSWORD`: `my-user-pass-name`
+* `VPC_NETWORK`: `my-vpc-network`
+* `SUBNET_NAME`: `my-vpc-subnet`
+
+**For MySQL and PostgreSQL (Public IP):**
+
+```bash
+gcloud run deploy SERVICE_NAME \
+  --image=REGION-docker.pkg.dev/PROJECT_ID/REPO_NAME/IMAGE_NAME \
+  --set-env-vars=DB_USER=DB_USER,DB_IAM_USER=DB_IAM_USER,DB_NAME=DB_NAME,INSTANCE_CONNECTION_NAME=INSTANCE_CONNECTION_NAME \
+  --region=REGION \
+  --update-secrets=DB_PASSWORD=DB_PASSWORD:latest
+```
+
+**For MySQL and PostgreSQL (Private IP):**
+
+```bash
+gcloud run deploy SERVICE_NAME \
+  --image=REGION-docker.pkg.dev/PROJECT_ID/REPO_NAME/IMAGE_NAME \
+ 
--set-env-vars=DB_USER=DB_USER,DB_IAM_USER=DB_IAM_USER,DB_NAME=DB_NAME,INSTANCE_CONNECTION_NAME=INSTANCE_CONNECTION_NAME,IP_TYPE=PRIVATE \
+  --network=VPC_NETWORK \
+  --subnet=SUBNET_NAME \
+  --vpc-egress=private-ranges-only \
+  --region=REGION \
+  --update-secrets=DB_PASSWORD=DB_PASSWORD:latest
+```
+
+**For SQL Server (Public IP):**
+
+```bash
+gcloud run deploy SERVICE_NAME \
+  --image=REGION-docker.pkg.dev/PROJECT_ID/REPO_NAME/IMAGE_NAME \
+  --set-env-vars=DB_USER=DB_USER,DB_NAME=DB_NAME,INSTANCE_CONNECTION_NAME=INSTANCE_CONNECTION_NAME \
+  --region=REGION \
+  --update-secrets=DB_PASSWORD=DB_PASSWORD:latest
+```
+
+**For SQL Server (Private IP):**
+
+```bash
+gcloud run deploy SERVICE_NAME \
+  --image=REGION-docker.pkg.dev/PROJECT_ID/REPO_NAME/IMAGE_NAME \
+  --set-env-vars=DB_USER=DB_USER,DB_NAME=DB_NAME,INSTANCE_CONNECTION_NAME=INSTANCE_CONNECTION_NAME,IP_TYPE=PRIVATE \
+  --network=VPC_NETWORK \
+  --subnet=SUBNET_NAME \
+  --vpc-egress=private-ranges-only \
+  --region=REGION \
+  --update-secrets=DB_PASSWORD=DB_PASSWORD:latest
+```
+
+> [!NOTE]
+> **`For PSC connections`**
+>
+> To connect to a Cloud SQL instance using the PSC connection type, create a PSC endpoint plus a DNS zone and DNS record for the instance in the same VPC network as the Cloud Run service, and replace `IP_TYPE` in the deploy command with `PSC`. To configure DNS records, refer to the [Connect to an instance using Private Service Connect](https://docs.cloud.google.com/sql/docs/mysql/configure-private-service-connect) guide.
diff --git a/examples/cloudrun/sequelize/mysql2/Dockerfile b/examples/cloudrun/sequelize/mysql2/Dockerfile
new file mode 100644
index 00000000..2b51c0b0
--- /dev/null
+++ b/examples/cloudrun/sequelize/mysql2/Dockerfile
@@ -0,0 +1,32 @@
+# Use a Node.js image to build the application
+FROM node:25 AS builder
+
+# Create and change to the app directory.
+WORKDIR /usr/src/app
+
+# Copy application dependency manifests to the container image.
+COPY package*.json ./ +COPY tsconfig.json ./ + +# Install dependencies and build the application +RUN npm install +COPY . . +RUN npm run build + +# Use a slim Node.js image for the production environment +FROM node:20-slim + +# Create and change to the app directory. +WORKDIR /usr/src/app + +# Copy package.json to the container image. +COPY package*.json ./ + +# Install production dependencies. +RUN npm install --omit=dev + +# Copy the built application from the builder stage. +COPY --from=builder /usr/src/app/dist ./dist + +# Run the web service on container startup. +CMD ["node", "dist/index.js"] \ No newline at end of file diff --git a/examples/cloudrun/sequelize/mysql2/index.ts b/examples/cloudrun/sequelize/mysql2/index.ts new file mode 100644 index 00000000..aaefd2ea --- /dev/null +++ b/examples/cloudrun/sequelize/mysql2/index.ts @@ -0,0 +1,195 @@ +// Copyright 2025 Google LLC +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// https://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. 
+
+import express from 'express';
+import {
+  AuthTypes,
+  Connector,
+  IpAddressTypes,
+} from '@google-cloud/cloud-sql-connector';
+import {Sequelize} from '@sequelize/core';
+import rateLimit from 'express-rate-limit';
+
+const app = express();
+
+// set up rate limiter: maximum of 100 requests per 15 minutes
+const limiter = rateLimit({
+  // 15 minutes
+  windowMs: 15 * 60 * 1000,
+  // max 100 requests per windowMs
+  max: 100,
+});
+
+// apply rate limiter to all requests
+app.use(limiter);
+
+// Connector and connection pools are initialized as null to allow for lazy instantiation.
+// Lazy instantiation is a best practice for Cloud Run applications because it allows
+// the application to start faster and only initialize connections when they are needed.
+// This is especially important in serverless environments where applications may be
+// started and stopped frequently.
+let connector: Connector | null = null;
+let passwordPool: Sequelize | null = null;
+let iamPool: Sequelize | null = null;
+
+// Helper to get the IP type enum from string
+function getIpType(ipTypeStr: string | undefined) {
+  const ipType = ipTypeStr || 'PUBLIC';
+  if (ipType === 'PRIVATE') {
+    return IpAddressTypes.PRIVATE;
+  } else if (ipType === 'PSC') {
+    return IpAddressTypes.PSC;
+  } else {
+    return IpAddressTypes.PUBLIC;
+  }
+}
+
+// Function to create a database connection pool using IAM authentication
+async function createIamConnectionPool() {
+  const instanceConnectionName = process.env.INSTANCE_CONNECTION_NAME || '';
+  // IAM service account email
+  const dbUser = process.env.DB_IAM_USER || '';
+  const dbName = process.env.DB_NAME || '';
+  const ipType = getIpType(process.env.IP_TYPE);
+
+  // Creates a new connector object.
+  if (!connector) {
+    connector = new Connector();
+  }
+
+  // Get the connection options for the Cloud SQL instance.
+ 
+ const clientOpts = await connector.getOptions({ + instanceConnectionName, + ipType: ipType, + authType: AuthTypes.IAM, + }); + + // Create a new Sequelize connection pool. + return new Sequelize({ + dialect: 'mysql', + username: dbUser, + port: 3306, + database: dbName, + dialectOptions: { + ...clientOpts, + }, + }); +} + +// Function to create a database connection pool using password authentication +async function createPasswordConnectionPool() { + const instanceConnectionName = process.env.INSTANCE_CONNECTION_NAME || ''; + // Database username + const dbUser = process.env.DB_USER || ''; + const dbName = process.env.DB_NAME || ''; + const dbPassword = process.env.DB_PASSWORD || ''; + const ipType = getIpType(process.env.IP_TYPE); + + // Creates a new connector object. + if (!connector) { + connector = new Connector(); + } + + // Get the connection options for the Cloud SQL instance. + const clientOpts = await connector.getOptions({ + instanceConnectionName, + ipType: ipType, + authType: AuthTypes.PASSWORD, + }); + + // Create a new Sequelize connection pool. 
+ return new Sequelize({ + dialect: 'mysql', + username: dbUser, + password: dbPassword, + port: 3306, + database: dbName, + dialectOptions: { + ...clientOpts, + }, + }); +} + +// Helper to get or create the password pool +async function getPasswordConnectionPool() { + if (!passwordPool) { + passwordPool = await createPasswordConnectionPool(); + } + return passwordPool; +} + +// Helper to get or create the IAM pool +async function getIamConnectionPool() { + if (!iamPool) { + iamPool = await createIamConnectionPool(); + } + return iamPool; +} + +app.get('/', async (req, res) => { + try { + const db = await getPasswordConnectionPool(); + await db.authenticate(); + const [results] = await db.query('SELECT 1'); + res.send( + 'Database connection successful (password authentication), result: ' + + `${JSON.stringify(results)}` + ); + } catch (err: unknown) { + console.error(err); + res + .status(500) + .send( + 'Error connecting to the database (password authentication): ' + + `${(err as Error).message}` + ); + } +}); + +app.get('/iam', async (req, res) => { + try { + const db = await getIamConnectionPool(); + await db.authenticate(); + const [results] = await db.query('SELECT 1'); + res.send( + 'Database connection successful (IAM authentication), result: ' + + `${JSON.stringify(results)}` + ); + } catch (err: unknown) { + console.error(err); + res + .status(500) + .send( + 'Error connecting to the database (IAM authentication): ' + + `${(err as Error).message}` + ); + } +}); + +const port = parseInt(process.env.PORT || '8080'); +app.listen(port, () => { + console.log(`Server running on port ${port}`); +}); + +process.on('SIGTERM', async () => { + if (passwordPool) { + await passwordPool.close(); + } + if (iamPool) { + await iamPool.close(); + } + if (connector) { + connector.close(); + } +}); diff --git a/examples/cloudrun/sequelize/mysql2/package.json b/examples/cloudrun/sequelize/mysql2/package.json new file mode 100644 index 00000000..58a0b948 --- /dev/null +++ 
b/examples/cloudrun/sequelize/mysql2/package.json @@ -0,0 +1,23 @@ +{ + "name": "sequelize-mysql-cloudrun", + "version": "1.0.0", + "description": "Sequelize MySQL example for Cloud Run (TypeScript)", + "main": "index.ts", + "scripts": { + "start": "node dist/index.js", + "build": "tsc" + }, + "dependencies": { + "@google-cloud/cloud-sql-connector": "^1.8.4", + "express": "^5.1.0", + "@sequelize/core": "7.0.0-alpha.37", + "mysql2": "^3.15.2", + "express-rate-limit": "8.2.1" + }, + "devDependencies": { + "@types/express": "^5.0.5", + "@types/node": "^20.11.0", + "ts-node": "^10.9.2", + "typescript": "^5.3.3" + } +} diff --git a/examples/cloudrun/sequelize/mysql2/tsconfig.json b/examples/cloudrun/sequelize/mysql2/tsconfig.json new file mode 100644 index 00000000..c1efcf4a --- /dev/null +++ b/examples/cloudrun/sequelize/mysql2/tsconfig.json @@ -0,0 +1,11 @@ +{ + "compilerOptions": { + "target": "es2016", + "module": "commonjs", + "outDir": "./dist", + "strict": true, + "esModuleInterop": true, + "skipLibCheck": true, + "forceConsistentCasingInFileNames": true + } +} diff --git a/examples/cloudrun/sequelize/pg/Dockerfile b/examples/cloudrun/sequelize/pg/Dockerfile new file mode 100644 index 00000000..7461eb1b --- /dev/null +++ b/examples/cloudrun/sequelize/pg/Dockerfile @@ -0,0 +1,22 @@ +# Use the official Node.js image. +# https://hub.docker.com/_/node +FROM node:25-slim + +# Create and change to the app directory. +WORKDIR /usr/src/app + +# Copy application dependency manifests to the container image. +# A wildcard is used to ensure both package.json AND package-lock.json are copied. +# Copying this separately prevents re-running npm install on every code change. +COPY package*.json ./ + +# Install production dependencies. +# If you add a package-lock.json speed your build by switching to 'npm ci'. +# RUN npm ci --only=production +RUN npm install --omit=dev + +# Copy local code to the container image. +COPY . . + +# Run the web service on container startup. 
+CMD ["node", "index.cjs"]
diff --git a/examples/cloudrun/sequelize/pg/index.cjs b/examples/cloudrun/sequelize/pg/index.cjs
new file mode 100644
index 00000000..f16cd16b
--- /dev/null
+++ b/examples/cloudrun/sequelize/pg/index.cjs
@@ -0,0 +1,156 @@
+// Copyright 2025 Google LLC
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// https://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+const express = require('express');
+const {Connector, IpAddressTypes} = require('@google-cloud/cloud-sql-connector');
+const {Sequelize} = require('@sequelize/core');
+const {rateLimit} = require('express-rate-limit');
+
+const app = express();
+
+// set up rate limiter: maximum of 100 requests per 15 minutes
+const limiter = rateLimit({
+  // 15 minutes
+  windowMs: 15 * 60 * 1000,
+  // max 100 requests per windowMs
+  max: 100,
+});
+
+// apply rate limiter to all requests
+app.use(limiter);
+
+// Connector and connection pools are initialized as null to allow for lazy instantiation.
+// Lazy instantiation is a best practice for Cloud Run applications because it allows
+// the application to start faster and only initialize connections when they are needed.
+// This is especially important in serverless environments where applications may be
+// started and stopped frequently.
+let connector = null; +let passwordPool = null; +let iamPool = null; + +// Helper to get the IP type enum from string +function getIpType(ipTypeStr) { + const ipType = ipTypeStr || 'PUBLIC'; + if (ipType === 'PRIVATE') { + return IpAddressTypes.PRIVATE; + } else if (ipType === 'PSC') { + return IpAddressTypes.PSC; + } else { + return IpAddressTypes.PUBLIC; + } +} + +// Function to create a database connection pool using IAM authentication +async function createIamConnectionPool() { + const instanceConnectionName = process.env.INSTANCE_CONNECTION_NAME; + // IAM service account email + const dbUser = process.env.DB_IAM_USER; + const dbName = process.env.DB_NAME; + const ipType = getIpType(process.env.IP_TYPE); + + // Creates a new connector object. + if (!connector) { + connector = new Connector(); + } + + // Get the connection options for the Cloud SQL instance. + const clientOpts = await connector.getOptions({ + instanceConnectionName, + ipType: ipType, + authType: 'IAM', + }); + + // Create a new Sequelize connection pool. + return new Sequelize({ + dialect: 'postgres', + user: dbUser, + database: dbName, + ...clientOpts, + }); +} + +// Function to create a database connection pool using password authentication +async function createPasswordConnectionPool() { + const instanceConnectionName = process.env.INSTANCE_CONNECTION_NAME; + // Database username + const dbUser = process.env.DB_USER; + const dbName = process.env.DB_NAME; + const dbPassword = process.env.DB_PASSWORD; + const ipType = getIpType(process.env.IP_TYPE); + + // Creates a new connector object. + if (!connector) { + connector = new Connector(); + } + + // Get the connection options for the Cloud SQL instance. + const clientOpts = await connector.getOptions({ + instanceConnectionName, + ipType: ipType, + }); + + // Create a new Sequelize connection pool. 
+ return new Sequelize({ + dialect: 'postgres', + user: dbUser, + password: dbPassword, + database: dbName, + ...clientOpts, + }); +} + +// Helper to get or create the password pool +async function getPasswordConnectionPool() { + if (!passwordPool) { + passwordPool = await createPasswordConnectionPool(); + } + return passwordPool; +} + +// Helper to get or create the IAM pool +async function getIamConnectionPool() { + if (!iamPool) { + iamPool = await createIamConnectionPool(); + } + return iamPool; +} + +app.get('/', async (req, res) => { + try { + const db = await getPasswordConnectionPool(); + await db.authenticate(); + const [results, metadata] = await db.query('SELECT 1'); + res.send(`Database connection successful (password authentication), result: ${JSON.stringify(results)}`); + } catch (err) { + console.error(err); + res.status(500).send(`Error connecting to the database (password authentication): ${err.message}`); + } +}); + +app.get('/iam', async (req, res) => { + try { + const db = await getIamConnectionPool(); + await db.authenticate(); + const [results, metadata] = await db.query('SELECT 1'); + res.send(`Database connection successful (IAM authentication), result: ${JSON.stringify(results)}`); + } catch (err) { + console.error(err); + res.status(500).send(`Error connecting to the database (IAM authentication): ${err.message}`); + } +}); + +const port = parseInt(process.env.PORT) || 8080; +app.listen(port, () => { + console.log(`Server running on port ${port}`); +}); diff --git a/examples/cloudrun/sequelize/pg/package.json b/examples/cloudrun/sequelize/pg/package.json new file mode 100644 index 00000000..372f148e --- /dev/null +++ b/examples/cloudrun/sequelize/pg/package.json @@ -0,0 +1,18 @@ +{ + "name": "sequelize-pg-cloudrun", + "version": "1.0.0", + "description": "Sequelize PostgreSQL example for Cloud Run (CommonJS)", + "main": "index.cjs", + "type": "commonjs", + "scripts": { + "start": "node index.cjs" + }, + "dependencies": { + 
"@google-cloud/cloud-sql-connector": "^1.8.4", + "express": "^5.1.0", + "@sequelize/core": "^7.0.0-alpha.41", + "@sequelize/postgres": "^7.0.0-alpha.41", + "pg": "^8.16.3", + "express-rate-limit": "8.2.1" + } +} diff --git a/examples/cloudrun/sequelize/tedious/Dockerfile b/examples/cloudrun/sequelize/tedious/Dockerfile new file mode 100644 index 00000000..c7f350a8 --- /dev/null +++ b/examples/cloudrun/sequelize/tedious/Dockerfile @@ -0,0 +1,22 @@ +# Use the official Node.js image. +# https://hub.docker.com/_/node +FROM node:25-slim + +# Create and change to the app directory. +WORKDIR /usr/src/app + +# Copy application dependency manifests to the container image. +# A wildcard is used to ensure both package.json AND package-lock.json are copied. +# Copying this separately prevents re-running npm install on every code change. +COPY package*.json ./ + +# Install production dependencies. +# If you add a package-lock.json speed your build by switching to 'npm ci'. +# RUN npm ci --only=production +RUN npm install --omit=dev + +# Copy local code to the container image. +COPY . . + +# Run the web service on container startup. +CMD ["node", "index.mjs"] diff --git a/examples/cloudrun/sequelize/tedious/index.mjs b/examples/cloudrun/sequelize/tedious/index.mjs new file mode 100644 index 00000000..723e0a6a --- /dev/null +++ b/examples/cloudrun/sequelize/tedious/index.mjs @@ -0,0 +1,112 @@ +// Copyright 2025 Google LLC +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// https://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. 
+
+import express from 'express';
+import {Connector, IpAddressTypes} from '@google-cloud/cloud-sql-connector';
+import {Sequelize} from '@sequelize/core';
+import {rateLimit} from 'express-rate-limit';
+
+const app = express();
+
+// set up rate limiter: maximum of 100 requests per 15 minutes
+const limiter = rateLimit({
+  // 15 minutes
+  windowMs: 15 * 60 * 1000,
+  // max 100 requests per windowMs
+  max: 100,
+});
+
+// apply rate limiter to all requests
+app.use(limiter);
+
+// Connector and connection pools are initialized as null to allow for lazy instantiation.
+// Lazy instantiation is a best practice for Cloud Run applications because it allows
+// the application to start faster and only initialize connections when they are needed.
+// This is especially important in serverless environments where applications may be
+// started and stopped frequently.
+let connector = null;
+let passwordPool = null;
+
+// Helper to get the IP type enum from string
+function getIpType(ipTypeStr) {
+  const ipType = ipTypeStr || 'PUBLIC';
+  if (ipType === 'PRIVATE') {
+    return IpAddressTypes.PRIVATE;
+  } else if (ipType === 'PSC') {
+    return IpAddressTypes.PSC;
+  } else {
+    return IpAddressTypes.PUBLIC;
+  }
+}
+
+// Function to create a database connection pool using password authentication
+async function createPasswordConnectionPool() {
+  const instanceConnectionName = process.env.INSTANCE_CONNECTION_NAME;
+  // Database username
+  const dbUser = process.env.DB_USER;
+  const dbName = process.env.DB_NAME;
+  const dbPassword = process.env.DB_PASSWORD;
+  const ipType = getIpType(process.env.IP_TYPE);
+
+  // Creates a new connector object.
+  if (!connector) {
+    connector = new Connector();
+  }
+
+  // Get the connection options for the Cloud SQL instance.
+  const clientOpts = await connector.getTediousOptions({
+    instanceConnectionName,
+    ipType: ipType,
+  });
+
+  // Create a new Sequelize connection pool.
+  return new Sequelize({
+    dialect: 'mssql',
+    server: 'localhost',
+    database: dbName,
+    authentication: {
+      type: "default",
+      options: {
+        userName: dbUser,
+        password: dbPassword,
+      }
+    },
+    // getTediousOptions resolves to the tedious `options` object itself,
+    // so it is spread directly (it has no nested `options` property).
+    ...clientOpts,
+  });
+}
+
+// Helper to get or create the password pool
+async function getPasswordConnectionPool() {
+  if (!passwordPool) {
+    passwordPool = await createPasswordConnectionPool();
+  }
+  return passwordPool;
+}
+
+app.get('/', async (req, res) => {
+  try {
+    const db = await getPasswordConnectionPool();
+    await db.authenticate();
+    const [results, metadata] = await db.query('SELECT 1');
+    res.send(`Database connection successful (password authentication), result: ${JSON.stringify(results)}`);
+  } catch (err) {
+    console.error(err);
+    res.status(500).send(`Error connecting to the database (password authentication): ${err.message}`);
+  }
+});
+
+const port = parseInt(process.env.PORT) || 8080;
+app.listen(port, () => {
+  console.log(`Server running on port ${port}`);
+});
diff --git a/examples/cloudrun/sequelize/tedious/package.json b/examples/cloudrun/sequelize/tedious/package.json
new file mode 100644
index 00000000..21f07847
--- /dev/null
+++ b/examples/cloudrun/sequelize/tedious/package.json
@@ -0,0 +1,20 @@
+{
+  "name": "sequelize-sqlserver-cloudrun",
+  "version": "1.0.0",
+  "description": "Sequelize SQL Server example for Cloud Run (ESM)",
+  "main": "index.mjs",
+  "type": "module",
+  "scripts": {
+    "start": "node index.mjs"
+  },
+  "dependencies": {
+    "@google-cloud/cloud-sql-connector": "^1.8.4",
+    "express": "^5.1.0",
+    "@sequelize/core": "^7.0.0-alpha.37",
+    "@sequelize/mssql": "^7.0.0-alpha.37",
+    "tedious": "^18.6.1",
+    "express-rate-limit": "8.2.1"
+  },
+  "devDependencies": {
+  }
+}
diff --git a/examples/cloudrun/typeorm/README.md b/examples/cloudrun/typeorm/README.md
new file mode 100644
index 00000000..e7e41e38
--- /dev/null
+++ b/examples/cloudrun/typeorm/README.md
@@ -0,0 +1,160 @@
+# Connecting Cloud Run to Cloud SQL with the Node.js
Connector + +This guide provides a comprehensive walkthrough of how to connect a Cloud Run service to a Cloud SQL instance using the Cloud SQL Node.js Connector. It covers connecting to instances with both public and private IP addresses and demonstrates how to handle database credentials securely. + +## Develop a Node.js Application + +The following Node.js applications demonstrate how to connect to a Cloud SQL instance using the Cloud SQL Node.js Connector. + +### `mysql2/index.cjs` and `pg/index.mjs` + +These files contain the core application logic for connecting to a Cloud SQL for MySQL or PostgreSQL instance. They provide two separate authentication methods, each exposed at a different route: +- `/`: Password-based authentication +- `/iam`: IAM-based authentication + +### `tedious/index.ts` + +This file contains the core application logic for connecting to a Cloud SQL for SQL Server instance. It uses the `cloud-sql-nodejs-connector` to create a database connection pool with password-based authentication at the `/` route. + +> [!NOTE] +> +> Cloud SQL for SQL Server does not support IAM database authentication. + +## Lazy Instantiation + +In a Cloud Run service, global variables are initialized when the container instance starts up. The application instance then handles subsequent requests until the container is spun down. + +The `Connector` and `DataSource` objects are defined as global variables (initially set to `null`) and are lazily instantiated (created only when needed) inside the request handlers. + +This approach offers several benefits: + +1. **Faster Startup:** By deferring initialization until the first request, the Cloud Run service can start listening for requests almost immediately, reducing cold start latency. +2. **Resource Efficiency:** Expensive operations, like establishing background connections or fetching secrets, are only performed when actually required. +3. 
**Connection Reuse:** Once initialized, the global `Connector` and `DataSource` instances are reused for all subsequent requests to that container instance. This prevents the overhead of creating new connections for every request and avoids hitting connection limits. + +## IAM Authentication Prerequisites + +For IAM authentication to work, you must ensure two things: + +1. **The Cloud Run service's service account has the `Cloud SQL Client` role.** You can grant this role with the following command: + ```bash + gcloud projects add-iam-policy-binding PROJECT_ID \ + --member="serviceAccount:SERVICE_ACCOUNT_EMAIL" \ + --role="roles/cloudsql.client" + ``` + Replace `PROJECT_ID` with your Google Cloud project ID and `SERVICE_ACCOUNT_EMAIL` with the email of the service account your Cloud Run service is using. + +2. **The service account is added as a database user to your Cloud SQL instance.** You can do this with the following command: + ```bash + gcloud sql users create SERVICE_ACCOUNT_EMAIL \ + --instance=INSTANCE_NAME \ + --type=cloud_iam_user + ``` + Replace `SERVICE_ACCOUNT_EMAIL` with the same service account email and `INSTANCE_NAME` with your Cloud SQL instance name. + +## Deploy the Application to Cloud Run + +Follow these steps to deploy the application to Cloud Run. + +### Build and Push the Docker Image + +1. **Enable the Artifact Registry API:** + + ```bash + gcloud services enable artifactregistry.googleapis.com + ``` + +2. **Create an Artifact Registry repository:** + + ```bash + gcloud artifacts repositories create REPO_NAME \ + --repository-format=docker \ + --location=REGION + ``` + +3. **Configure Docker to authenticate with Artifact Registry:** + + ```bash + gcloud auth configure-docker REGION-docker.pkg.dev + ``` + +4. **Build the Docker image (replace `mysql2` with `pg` or `tedious` as needed):** + + ```bash + docker build -t REGION-docker.pkg.dev/PROJECT_ID/REPO_NAME/IMAGE_NAME mysql2 + ``` + +5. 
**Push the Docker image to Artifact Registry:** + + ```bash + docker push REGION-docker.pkg.dev/PROJECT_ID/REPO_NAME/IMAGE_NAME + ``` + +### Deploy to Cloud Run + +Deploy the container image to Cloud Run using the `gcloud run deploy` command. + +**Sample Values:** +* `SERVICE_NAME`: `my-cloud-run-service` +* `REGION`: `us-central1` +* `PROJECT_ID`: `my-gcp-project-id` +* `REPO_NAME`: `my-artifact-repo` +* `IMAGE_NAME`: `my-app-image` +* `INSTANCE_CONNECTION_NAME`: `my-gcp-project-id:us-central1:my-instance-name` +* `DB_USER`: `my-db-user` (for password-based authentication) +* `DB_IAM_USER`: `my-service-account@my-gcp-project-id.iam.gserviceaccount.com` (for IAM-based authentication) +* `DB_NAME`: `my-db-name` +* `DB_PASSWORD`: `my-user-pass-name` +* `VPC_NETWORK`: `my-vpc-network` +* `SUBNET_NAME`: `my-vpc-subnet` + +**For MySQL and PostgreSQL (Public IP):** + +```bash +gcloud run deploy SERVICE_NAME \ + --image=REGION-docker.pkg.dev/PROJECT_ID/REPO_NAME/IMAGE_NAME \ + --set-env-vars=DB_USER=DB_USER,DB_IAM_USER=DB_IAM_USER,DB_NAME=DB_NAME,INSTANCE_CONNECTION_NAME=INSTANCE_CONNECTION_NAME \ + --region=REGION \ + --update-secrets=DB_PASSWORD=DB_PASSWORD:latest +``` + +**For MySQL and PostgreSQL (Private IP):** + +```bash +gcloud run deploy SERVICE_NAME \ + --image=REGION-docker.pkg.dev/PROJECT_ID/REPO_NAME/IMAGE_NAME \ + --set-env-vars=DB_USER=DB_USER,DB_IAM_USER=DB_IAM_USER,DB_NAME=DB_NAME,INSTANCE_CONNECTION_NAME=INSTANCE_CONNECTION_NAME,IP_TYPE=PRIVATE \ + --network=VPC_NETWORK \ + --subnet=SUBNET_NAME \ + --vpc-egress=private-ranges-only \ + --region=REGION \ + --update-secrets=DB_PASSWORD=DB_PASSWORD:latest +``` + +**For SQL Server (Public IP):** + +```bash +gcloud run deploy SERVICE_NAME \ + --image=REGION-docker.pkg.dev/PROJECT_ID/REPO_NAME/IMAGE_NAME \ + --set-env-vars=DB_USER=DB_USER,DB_NAME=DB_NAME,INSTANCE_CONNECTION_NAME=INSTANCE_CONNECTION_NAME \ + --region=REGION \ + --update-secrets=DB_PASSWORD=DB_PASSWORD:latest +``` + +**For SQL Server (Private 
IP):**

```bash
gcloud run deploy SERVICE_NAME \
  --image=REGION-docker.pkg.dev/PROJECT_ID/REPO_NAME/IMAGE_NAME \
  --set-env-vars=DB_USER=DB_USER,DB_NAME=DB_NAME,INSTANCE_CONNECTION_NAME=INSTANCE_CONNECTION_NAME,IP_TYPE=PRIVATE \
  --network=VPC_NETWORK \
  --subnet=SUBNET_NAME \
  --vpc-egress=private-ranges-only \
  --region=REGION \
  --update-secrets=DB_PASSWORD=DB_PASSWORD:latest
```

> [!NOTE]
> **For PSC connections**
>
> To connect to a Cloud SQL instance using the PSC connection type, create a PSC endpoint plus a DNS zone and DNS record for the instance in the same VPC network as the Cloud Run service, then replace `IP_TYPE=PRIVATE` in the deploy command with `IP_TYPE=PSC`. To configure the DNS records, refer to the [Connect to an instance using Private Service Connect](https://docs.cloud.google.com/sql/docs/mysql/configure-private-service-connect) guide.
diff --git a/examples/cloudrun/typeorm/mysql2/Dockerfile b/examples/cloudrun/typeorm/mysql2/Dockerfile
new file mode 100644
index 00000000..7461eb1b
--- /dev/null
+++ b/examples/cloudrun/typeorm/mysql2/Dockerfile
@@ -0,0 +1,22 @@
+# Use the official Node.js image.
+# https://hub.docker.com/_/node
+FROM node:25-slim
+
+# Create and change to the app directory.
+WORKDIR /usr/src/app
+
+# Copy application dependency manifests to the container image.
+# A wildcard is used to ensure both package.json AND package-lock.json are copied.
+# Copying this separately prevents re-running npm install on every code change.
+COPY package*.json ./
+
+# Install production dependencies.
+# If you add a package-lock.json speed your build by switching to 'npm ci'.
+# RUN npm ci --only=production
+RUN npm install --omit=dev
+
+# Copy local code to the container image.
+COPY . .
+
+# Run the web service on container startup.
+CMD ["node", "index.cjs"]
diff --git a/examples/cloudrun/typeorm/mysql2/index.cjs b/examples/cloudrun/typeorm/mysql2/index.cjs
new file mode 100644
index 00000000..942ad89c
--- /dev/null
+++ b/examples/cloudrun/typeorm/mysql2/index.cjs
@@ -0,0 +1,156 @@
+// Copyright 2025 Google LLC
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// https://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+const express = require('express');
+const {Connector, IpAddressTypes} = require('@google-cloud/cloud-sql-connector');
+const typeorm = require('typeorm');
+const {rateLimit} = require('express-rate-limit');
+
+const app = express();
+
+// set up rate limiter: maximum of 100 requests per 15 minutes
+const limiter = rateLimit({
+  // 15 minutes
+  windowMs: 15 * 60 * 1000,
+  // max 100 requests per windowMs
+  max: 100,
+});
+
+// apply rate limiter to all requests
+app.use(limiter);
+
+// Connector and connection pools are initialized as null to allow for lazy instantiation.
+// Lazy instantiation is a best practice for Cloud Run applications because it allows
+// the application to start faster and only initialize connections when they are needed.
+// This is especially important in serverless environments where applications may be
+// started and stopped frequently.
+let connector = null; +let passwordPool = null; +let iamPool = null; + +// Helper to get the IP type enum from string +function getIpType(ipTypeStr) { + const ipType = ipTypeStr || 'PUBLIC'; + if (ipType === 'PRIVATE') { + return IpAddressTypes.PRIVATE; + } else if (ipType === 'PSC') { + return IpAddressTypes.PSC; + } else { + return IpAddressTypes.PUBLIC; + } +} + +// Function to create a database connection pool using IAM authentication +async function createIamConnectionPool() { + const instanceConnectionName = process.env.INSTANCE_CONNECTION_NAME; + // IAM service account email + const dbUser = process.env.DB_IAM_USER; + const dbName = process.env.DB_NAME; + const ipType = getIpType(process.env.IP_TYPE); + + // Creates a new connector object. + if (!connector) { + connector = new Connector(); + } + + // Get the connection options for the Cloud SQL instance. + const clientOpts = await connector.getOptions({ + instanceConnectionName, + ipType: ipType, + authType: 'IAM', + }); + + // Create a new TypeORM data source. + return new typeorm.DataSource({ + type: 'mysql', + username: dbUser, + database: dbName, + extra: clientOpts, + }); +} + +// Function to create a database connection pool using password authentication +async function createPasswordConnectionPool() { + const instanceConnectionName = process.env.INSTANCE_CONNECTION_NAME; + // Database username + const dbUser = process.env.DB_USER; + const dbName = process.env.DB_NAME; + const dbPassword = process.env.DB_PASSWORD; + const ipType = getIpType(process.env.IP_TYPE); + + // Creates a new connector object. + if (!connector) { + connector = new Connector(); + } + + // Get the connection options for the Cloud SQL instance. + const clientOpts = await connector.getOptions({ + instanceConnectionName, + ipType: ipType, + }); + + // Create a new TypeORM data source. 
+ return new typeorm.DataSource({ + type: 'mysql', + username: dbUser, + password: dbPassword, + database: dbName, + extra: clientOpts, + }); +} + +// Helper to get or create the password pool +async function getPasswordConnectionPool() { + if (!passwordPool) { + passwordPool = await createPasswordConnectionPool(); + await passwordPool.initialize(); + } + return passwordPool; +} + +// Helper to get or create the IAM pool +async function getIamConnectionPool() { + if (!iamPool) { + iamPool = await createIamConnectionPool(); + await iamPool.initialize(); + } + return iamPool; +} + +app.get('/', async (req, res) => { + try { + const db = await getPasswordConnectionPool(); + const result = await db.query('SELECT 1'); + res.send(`Database connection successful (password authentication), result: ${JSON.stringify(result)}`); + } catch (err) { + console.error(err); + res.status(500).send(`Error connecting to the database (password authentication): ${err.message}`); + } +}); + +app.get('/iam', async (req, res) => { + try { + const db = await getIamConnectionPool(); + const result = await db.query('SELECT 1'); + res.send(`Database connection successful (IAM authentication), result: ${JSON.stringify(result)}`); + } catch (err) { + console.error(err); + res.status(500).send(`Error connecting to the database (IAM authentication): ${err.message}`); + } +}); + +const port = parseInt(process.env.PORT) || 8080; +app.listen(port, () => { + console.log(`Server running on port ${port}`); +}); diff --git a/examples/cloudrun/typeorm/mysql2/package.json b/examples/cloudrun/typeorm/mysql2/package.json new file mode 100644 index 00000000..9dbebbf8 --- /dev/null +++ b/examples/cloudrun/typeorm/mysql2/package.json @@ -0,0 +1,19 @@ +{ + "name": "typeorm-mysql-cloudrun", + "version": "1.0.0", + "description": "TypeORM MySQL example for Cloud Run (CommonJS)", + "main": "index.cjs", + "type": "commonjs", + "scripts": { + "start": "node index.cjs" + }, + "dependencies": { + 
"@google-cloud/cloud-sql-connector": "^1.8.4", + "express": "^5.1.0", + "typeorm": "^0.3.27", + "mysql2": "^3.15.2", + "express-rate-limit": "8.2.1" + }, + "devDependencies": { + } +} diff --git a/examples/cloudrun/typeorm/pg/Dockerfile b/examples/cloudrun/typeorm/pg/Dockerfile new file mode 100644 index 00000000..c7f350a8 --- /dev/null +++ b/examples/cloudrun/typeorm/pg/Dockerfile @@ -0,0 +1,22 @@ +# Use the official Node.js image. +# https://hub.docker.com/_/node +FROM node:25-slim + +# Create and change to the app directory. +WORKDIR /usr/src/app + +# Copy application dependency manifests to the container image. +# A wildcard is used to ensure both package.json AND package-lock.json are copied. +# Copying this separately prevents re-running npm install on every code change. +COPY package*.json ./ + +# Install production dependencies. +# If you add a package-lock.json speed your build by switching to 'npm ci'. +# RUN npm ci --only=production +RUN npm install --omit=dev + +# Copy local code to the container image. +COPY . . + +# Run the web service on container startup. +CMD ["node", "index.mjs"] diff --git a/examples/cloudrun/typeorm/pg/index.mjs b/examples/cloudrun/typeorm/pg/index.mjs new file mode 100644 index 00000000..88d9b9de --- /dev/null +++ b/examples/cloudrun/typeorm/pg/index.mjs @@ -0,0 +1,156 @@ +// Copyright 2025 Google LLC +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// https://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. 
+
+import express from 'express';
+import {Connector, IpAddressTypes} from '@google-cloud/cloud-sql-connector';
+import typeorm from 'typeorm';
+import {rateLimit} from 'express-rate-limit';
+
+const app = express();
+
+// set up rate limiter: maximum of 100 requests per 15 minutes
+const limiter = rateLimit({
+  // 15 minutes
+  windowMs: 15 * 60 * 1000,
+  // max 100 requests per windowMs
+  max: 100,
+});
+
+// apply rate limiter to all requests
+app.use(limiter);
+
+// Connector and connection pools are initialized as null to allow for lazy instantiation.
+// Lazy instantiation is a best practice for Cloud Run applications because it allows
+// the application to start faster and only initialize connections when they are needed.
+// This is especially important in serverless environments where applications may be
+// started and stopped frequently.
+let connector = null;
+let passwordPool = null;
+let iamPool = null;
+
+// Helper to get the IP type enum from string
+function getIpType(ipTypeStr) {
+  const ipType = ipTypeStr || 'PUBLIC';
+  if (ipType === 'PRIVATE') {
+    return IpAddressTypes.PRIVATE;
+  } else if (ipType === 'PSC') {
+    return IpAddressTypes.PSC;
+  } else {
+    return IpAddressTypes.PUBLIC;
+  }
+}
+
+// Function to create a database connection pool using IAM authentication
+async function createIamConnectionPool() {
+  const instanceConnectionName = process.env.INSTANCE_CONNECTION_NAME;
+  // IAM service account email
+  const dbUser = process.env.DB_IAM_USER;
+  const dbName = process.env.DB_NAME;
+  const ipType = getIpType(process.env.IP_TYPE);
+
+  // Creates a new connector object.
+  if (!connector) {
+    connector = new Connector();
+  }
+
+  // Get the connection options for the Cloud SQL instance.
+  const clientOpts = await connector.getOptions({
+    instanceConnectionName,
+    ipType: ipType,
+    authType: 'IAM',
+  });
+
+  // Create a new TypeORM data source.
+ return new typeorm.DataSource({ + type: 'postgres', + username: dbUser, + database: dbName, + extra: clientOpts, + }); +} + +// Function to create a database connection pool using password authentication +async function createPasswordConnectionPool() { + const instanceConnectionName = process.env.INSTANCE_CONNECTION_NAME; + // Database username + const dbUser = process.env.DB_USER; + const dbName = process.env.DB_NAME; + const dbPassword = process.env.DB_PASSWORD; + const ipType = getIpType(process.env.IP_TYPE); + + // Creates a new connector object. + if (!connector) { + connector = new Connector(); + } + + // Get the connection options for the Cloud SQL instance. + const clientOpts = await connector.getOptions({ + instanceConnectionName, + ipType: ipType, + }); + + // Create a new TypeORM data source. + return new typeorm.DataSource({ + type: 'postgres', + username: dbUser, + password: dbPassword, + database: dbName, + extra: clientOpts, + }); +} + +// Helper to get or create the password pool +async function getPasswordConnectionPool() { + if (!passwordPool) { + passwordPool = await createPasswordConnectionPool(); + await passwordPool.initialize(); + } + return passwordPool; +} + +// Helper to get or create the IAM pool +async function getIamConnectionPool() { + if (!iamPool) { + iamPool = await createIamConnectionPool(); + await iamPool.initialize(); + } + return iamPool; +} + +app.get('/', async (req, res) => { + try { + const db = await getPasswordConnectionPool(); + const result = await db.query('SELECT 1'); + res.send(`Database connection successful (password authentication), result: ${JSON.stringify(result)}`); + } catch (err) { + console.error(err); + res.status(500).send(`Error connecting to the database (password authentication): ${err.message}`); + } +}); + +app.get('/iam', async (req, res) => { + try { + const db = await getIamConnectionPool(); + const result = await db.query('SELECT 1'); + res.send(`Database connection successful (IAM 
authentication), result: ${JSON.stringify(result)}`);
+  } catch (err) {
+    console.error(err);
+    res.status(500).send(`Error connecting to the database (IAM authentication): ${err.message}`);
+  }
+});
+
+const port = parseInt(process.env.PORT) || 8080;
+app.listen(port, () => {
+  console.log(`Server running on port ${port}`);
+});
diff --git a/examples/cloudrun/typeorm/pg/package.json b/examples/cloudrun/typeorm/pg/package.json
new file mode 100644
index 00000000..04694541
--- /dev/null
+++ b/examples/cloudrun/typeorm/pg/package.json
@@ -0,0 +1,19 @@
+{
+  "name": "typeorm-pg-cloudrun",
+  "version": "1.0.0",
+  "description": "TypeORM PostgreSQL example for Cloud Run (ESM)",
+  "main": "index.mjs",
+  "type": "module",
+  "scripts": {
+    "start": "node index.mjs"
+  },
+  "dependencies": {
+    "@google-cloud/cloud-sql-connector": "^1.8.4",
+    "express": "^5.1.0",
+    "typeorm": "^0.3.27",
+    "pg": "^8.16.3",
+    "express-rate-limit": "8.2.1"
+  },
+  "devDependencies": {
+  }
+}
diff --git a/examples/cloudrun/typeorm/tedious/Dockerfile b/examples/cloudrun/typeorm/tedious/Dockerfile
new file mode 100644
index 00000000..bfc78ad2
--- /dev/null
+++ b/examples/cloudrun/typeorm/tedious/Dockerfile
@@ -0,0 +1,33 @@
+# Use a Node.js image to build the application
+FROM node:25 AS builder
+
+# Create and change to the app directory.
+WORKDIR /usr/src/app
+
+# Copy application dependency manifests to the container image.
+COPY package*.json ./
+COPY tsconfig.json ./
+
+# Install dependencies and build the application
+RUN npm install
+COPY . .
+RUN npm run build
+
+# Use a slim Node.js image (same major version as the builder) for the production environment
+FROM node:25-slim
+
+# Create and change to the app directory.
+WORKDIR /usr/src/app
+
+# Copy package.json to the container image.
+COPY package*.json ./
+
+# Install production dependencies.
+RUN npm install --omit=dev
+
+# Copy the built application from the builder stage.
+COPY --from=builder /usr/src/app/dist ./dist
+
+# Run the web service on container startup.
+CMD ["node", "dist/index.js"] + diff --git a/examples/cloudrun/typeorm/tedious/index.ts b/examples/cloudrun/typeorm/tedious/index.ts new file mode 100644 index 00000000..4293a9c6 --- /dev/null +++ b/examples/cloudrun/typeorm/tedious/index.ts @@ -0,0 +1,129 @@ +// Copyright 2025 Google LLC +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// https://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +import express from 'express'; +import {Connector, IpAddressTypes} from '@google-cloud/cloud-sql-connector'; +import {DataSource} from 'typeorm'; + +const app = express(); + +// set up rate limiter: maximum of five requests per minute +const RateLimit = require('express-rate-limit'); +const limiter = RateLimit({ + // 15 minutes + windowMs: 15 * 60 * 1000, + // max 100 requests per windowMs + max: 100, +}); + +// apply rate limiter to all requests +app.use(limiter); + +// Connector and connection pools are initialized as null to allow for lazy instantiation. +// Lazy instantiation is a best practice for Cloud Run applications because it allows +// the application to start faster and only initialize connections when they are needed. +// This is especially important in serverless environments where applications may be +// started and stopped frequently. 
+let connector: Connector | null = null; +let passwordPool: DataSource | null = null; + +// Helper to get the IP type enum from string +function getIpType(ipTypeStr: string | undefined) { + const ipType = ipTypeStr || 'PUBLIC'; + if (ipType === 'PRIVATE') { + return IpAddressTypes.PRIVATE; + } else if (ipType === 'PSC') { + return IpAddressTypes.PSC; + } else { + return IpAddressTypes.PUBLIC; + } +} + +// Function to create a database connection pool using password authentication +async function createPasswordConnectionPool() { + const instanceConnectionName = process.env.INSTANCE_CONNECTION_NAME || ''; + // Database username + const dbUser = process.env.DB_USER || ''; + const dbName = process.env.DB_NAME || ''; + const dbPassword = process.env.DB_PASSWORD || ''; + const ipType = getIpType(process.env.IP_TYPE); + + // Creates a new connector object. + if (!connector) { + connector = new Connector(); + } + + // Get the connection options for the Cloud SQL instance. + const clientOpts = await connector.getTediousOptions({ + instanceConnectionName, + ipType: ipType, + }); + + // Create a new TypeORM data source. 
+ return new DataSource({ + type: 'mssql', + username: dbUser, + password: dbPassword, + database: dbName, + extra: { + server: '127.0.0.1', // address doesn't matter, connector hijacks it + options: { + ...clientOpts, + port: 1433, + }, + }, + }); +} + +// Helper to get or create the password pool +async function getPasswordConnectionPool() { + if (!passwordPool) { + passwordPool = await createPasswordConnectionPool(); + await passwordPool.initialize(); + } + return passwordPool; +} + +app.get('/', async (req, res) => { + try { + const db = await getPasswordConnectionPool(); + const result = await db.query('SELECT 1'); + res.send( + 'Database connection successful (password authentication), result: ' + + `${JSON.stringify(result)}` + ); + } catch (err: unknown) { + console.error(err); + res + .status(500) + .send( + 'Error connecting to the database (password authentication): ' + + `${(err as Error).message}` + ); + } +}); + +const port = parseInt(process.env.PORT || '8080'); +app.listen(port, () => { + console.log(`Server running on port ${port}`); +}); + +process.on('SIGTERM', async () => { + if (passwordPool) { + await passwordPool.destroy(); + } + if (connector) { + connector.close(); + } +}); diff --git a/examples/cloudrun/typeorm/tedious/package.json b/examples/cloudrun/typeorm/tedious/package.json new file mode 100644 index 00000000..1c117d34 --- /dev/null +++ b/examples/cloudrun/typeorm/tedious/package.json @@ -0,0 +1,23 @@ +{ + "name": "typeorm-sqlserver-cloudrun", + "version": "1.0.0", + "description": "TypeORM SQL Server example for Cloud Run (TypeScript)", + "main": "index.ts", + "scripts": { + "start": "ts-node index.ts", + "build": "tsc" + }, + "dependencies": { + "@google-cloud/cloud-sql-connector": "^1.8.4", + "express": "^5.1.0", + "mssql": "^10.0.2", + "typeorm": "^0.3.17", + "express-rate-limit": "8.2.1" + }, + "devDependencies": { + "@types/express": "^5.0.5", + "@types/node": "^20.11.0", + "ts-node": "^10.9.2", + "typescript": "^5.3.3" + } +} 
\ No newline at end of file diff --git a/examples/cloudrun/typeorm/tedious/tsconfig.json b/examples/cloudrun/typeorm/tedious/tsconfig.json new file mode 100644 index 00000000..322fe38a --- /dev/null +++ b/examples/cloudrun/typeorm/tedious/tsconfig.json @@ -0,0 +1,13 @@ +{ + "compilerOptions": { + "target": "es2016", + "module": "commonjs", + "outDir": "./dist", + "strict": true, + "esModuleInterop": true, + "skipLibCheck": true, + "forceConsistentCasingInFileNames": true, + "emitDecoratorMetadata": true, + "experimentalDecorators": true + } +}