Commit 49492f0

Merge pull request #43325 from github/repo-sync
Repo sync
2 parents 72de5c7 + 4893354 commit 49492f0

File tree

95 files changed: +1845086 −1892 lines changed


.github/aw/actions-lock.json

Lines changed: 0 additions & 14 deletions
This file was deleted.

.github/workflows/dependabot-triage-agent.lock.yml

Lines changed: 0 additions & 1280 deletions
This file was deleted.

.github/workflows/dependabot-triage-agent.md

Lines changed: 0 additions & 129 deletions
This file was deleted.
Lines changed: 141 additions & 0 deletions
@@ -0,0 +1,141 @@
---
title: Adding nodes to a high availability configuration
shortTitle: Adding nodes to HA
intro: 'Add nodes to the primary high availability (HA) datacenter. This is intended to offload CPU-intensive tasks from the primary node, allowing for horizontal scaling of the {% data variables.product.prodname_ghe_server %} instance.'
versions:
  ghes: '>= 3.18'
type: how_to
topics:
  - High availability
  - Enterprise
  - Infrastructure
allowTitleToDifferFromFilename: true
---

> [!NOTE]
> The ability to add additional compute nodes to HA is in {% data variables.release-phases.public_preview %} and subject to change. During the preview, please share any feedback with your customer success team.

For {% data variables.product.prodname_ghe_server %} customers looking to scale horizontally, migrating to and operating a cluster is an option, but it is resource-intensive and time-consuming. As an alternative, we recommend adding nodes to an HA configuration.

The terms "additional node" and "stateless node" are used interchangeably throughout this article. Stateless nodes can only be added to HA deployments that contain at least one replica.

## Additional nodes

Of all the services running on a {% data variables.product.prodname_ghe_server %} appliance, Unicorn is often the most CPU- and memory-intensive, closely followed by Aqueduct, Git, and MySQL. Because Unicorn and Aqueduct are stateless services, they are well suited to horizontal scaling and can run on a separate set of nodes. The remaining services can continue operating with a single instance per datacenter.

Additional nodes allow you to scale web and job workloads horizontally. They can also offload Unicorn and Aqueduct from the primary node, freeing up substantial compute and memory resources for the remaining stateful services. If you are experiencing performance-related outages due to high CPU usage by Unicorn instances, we recommend adding additional nodes. There are no significant restrictions on the number of these nodes you can add within a datacenter.

## Criteria

If you are experiencing degraded performance due to an overloaded primary node in an HA configuration, consider adding additional nodes to your HA environment. By scaling web and job roles horizontally beyond the primary node, these extra nodes can help reduce the load on the primary host.

For example, if you notice backlogs in Unicorn or Aqueduct queues, or are experiencing other types of resource contention, you should consider this approach. Even if there isn't visible queuing, running out of CPU on the primary node is another clear signal. In these cases, you can add additional nodes and reduce the number of workers per node, so the primary node handles less of the overall workload.
## Adding a node

Each node you add to an HA deployment is a virtual machine (VM) running the {% data variables.product.prodname_ghe_server %} software, and it should run the same software version as the primary. Generally, a stateless node does not need to match the primary's memory, CPU, or storage specifications. However, the stateless node and the primary instance require sub-millisecond connectivity. Replica connectivity requirements remain unchanged.

To add nodes to the primary datacenter in an HA configuration, use the `ghe-add-node` command. This command sets up the current appliance as a node within the HA deployment, and is intended to offload CPU-intensive tasks from the primary data node, enabling horizontal scaling. These nodes are designed to handle web and job workloads, allowing for more efficient workload distribution and management.

This command takes the form:

``` shell copy
/usr/local/share/enterprise/ghe-add-node PRIMARY_IP [--hostname HOSTNAME]
```

- `PRIMARY_IP`: The IP address of the primary node.
- `HOSTNAME` (optional): The desired hostname for the added host.

For example, to add a node with hostname `ghes-node-1` to the HA primary instance with IP address `192.168.1.1` in the HA primary datacenter, run the following command:

``` shell copy
/usr/local/share/enterprise/ghe-add-node 192.168.1.1 --hostname ghes-node-1
```

Then, on the primary node, run the following commands:

``` shell copy
ghe-config-apply
ghe-cluster-balance rebalance --yes
```

Running `ghe-config-apply` is required to add stateless nodes.

For the public preview, we have not specifically tested for downtime, and it's not clear whether a maintenance window is required.
## Removing an additional node

To remove a node, run `ghe-remove-node` from the node you want to remove. Then, on the primary node, run:

``` shell copy
ghe-config-apply
```

Running `ghe-config-apply` is required to remove stateless nodes.

For the public preview, we have not specifically tested for downtime, and it's not clear whether a maintenance window is required.
## Reprovisioning a node that previously hosted {% data variables.product.prodname_ghe_server %}

You can use a node that previously hosted and ran {% data variables.product.prodname_ghe_server %} as a stateless node. To do so, update the node to version 3.18 or later, and ensure all nodes in the deployment are running the same version. On that node, check whether `/data/user/common/cluster.conf` already exists. If it does, you will need to perform cleanup before running the `ghe-add-node` command on the stateless node.

For example:

``` shell copy
sudo rm -f /etc/github/cluster /data/user/common/cluster.conf
sudo timeout -k4 10 systemctl stop wireguard 2>/dev/null || sudo ip link delete tun0 || true
```
## Limits and behavior

There is no theoretical limit to the number of nodes you can add. However, in practice, adding too many nodes can cause issues and impact stability or performance. At this time, newly added nodes process a predefined set of tasks; you cannot choose which types of tasks are offloaded. All APIs can be processed by the additional nodes.

If a Git operation is in the request path, logic is in place to process Git operations only on the primary node; they are not handled by the additional nodes. For example, branch deletion is a Git operation, and won't be handled by a stateless node.

Stateless nodes do not run Elasticsearch workloads, but they do run kafka-lite.
## System and networking requirements

Generally, stateless nodes don't need to match the memory, CPU, and storage specifications of the primary node. System requirements should take into account the existing resource consumption of web and job services on the primary node, and whether the primary node will completely offload those workloads to the new node.

The stateless node and the primary instance require sub-millisecond connectivity. Generally, all nodes within the primary datacenter require sub-millisecond connectivity. Replica connectivity requirements remain unchanged.
## Traffic routing and request handling

The primary node routes traffic to the additional nodes. When there are multiple stateless nodes, the primary sends new connections to the node with the fewest active connections at that moment.
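The selection logic described above is a standard least-connections policy. As a rough illustration only, not the actual {% data variables.product.prodname_ghe_server %} implementation, picking a backend from a list of `name:active-connections` pairs might look like this (the node names and counts are made up):

```shell
#!/bin/sh
# Illustration of least-connections selection over "node:connections" pairs.
# GHES performs this internally; this sketch only models the policy.
pick_least_loaded() {
  # Split the space-separated pairs onto lines, sort numerically by the
  # connection count (field 2), and print the name of the least-loaded node.
  printf '%s\n' $1 | sort -t: -k2,2n | head -n1 | cut -d: -f1
}

pick_least_loaded "ghes-node-1:12 ghes-node-2:4 ghes-node-3:9"
# prints "ghes-node-2"
```

New connections therefore drain toward whichever node is currently least busy, which is what evens out web and job load across the stateless nodes.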
## Upgrading an HA deployment with additional nodes

The following is an example upgrade sequence:

* Start the maintenance window.
* Stop replicas.
* Upgrade the stateless nodes in parallel.
* Upgrade the primary node.
* Upgrade the replicas, in parallel or sequentially depending on your disaster recovery preferences.
* Start replicas.
* End the maintenance window.

The additional nodes should not cause additional downtime during upgrades.
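As a sketch, the sequence above can be mapped onto the usual {% data variables.product.prodname_ghe_server %} maintenance and replication commands (`ghe-maintenance`, `ghe-repl-stop`/`ghe-repl-start`, `ghe-upgrade`). The package path is a placeholder, and this script only prints a plan for review rather than executing anything:

```shell
#!/bin/sh
# Dry-run sketch: print the upgrade plan instead of running it.
# PKG is a placeholder upgrade package path.
PKG="/tmp/github-enterprise-3.18.0.pkg"

plan_upgrade() {
  echo "on primary: ghe-maintenance -s"               # start maintenance window
  echo "on each replica: ghe-repl-stop"               # stop replication
  echo "on each stateless node: ghe-upgrade ${PKG}"   # upgrade in parallel
  echo "on primary: ghe-upgrade ${PKG}"
  echo "on each replica: ghe-upgrade ${PKG}"          # parallel or sequential
  echo "on each replica: ghe-repl-start"              # resume replication
  echo "on primary: ghe-maintenance -u"               # end maintenance window
}

plan_upgrade
```

Printing the plan first makes it easy to review which host each step targets before running the commands by hand.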
## Failover and disaster recovery behavior

There is no need to "tear down" additional nodes, as they do not contain any data.

During failover, the replica node is removed from the original deployment and converted to a standalone node. Stateless nodes should be re-attached to the promoted replica, similar to how additional replicas are re-attached after a failover.

If the primary node is functional and you want to promote a replica to be the primary, remove the stateless nodes from the primary with the `ghe-remove-node` command before re-adding them to the promoted node.

If the primary node is unreachable and unrecoverable, stateless nodes can be re-added without removing them from the original primary.
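For the planned-promotion case, the re-attach flow can be sketched as another printed plan built from the `ghe-remove-node`, `ghe-add-node`, and `ghe-config-apply` commands described earlier. The IP address is a placeholder for the promoted replica, and the remove step applies only when the old primary is still functional:

```shell
#!/bin/sh
# Dry-run sketch: re-attach stateless nodes after promoting a replica.
# 192.168.1.2 is a placeholder for the promoted replica's IP address.
NEW_PRIMARY_IP="192.168.1.2"

plan_reattach() {
  echo "on each stateless node: ghe-remove-node"    # skip if old primary is unrecoverable
  echo "on old primary: ghe-config-apply"
  echo "on each stateless node: /usr/local/share/enterprise/ghe-add-node ${NEW_PRIMARY_IP}"
  echo "on new primary: ghe-config-apply"
  echo "on new primary: ghe-cluster-balance rebalance --yes"
}

plan_reattach
```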
## Monitoring, logs, and support bundles

On the primary node, the Management Console monitoring dashboards display metrics for all nodes, including the stateless nodes. Commands such as `ghe-cluster-nodes` and `ghe-cluster-status` include details on stateless nodes. All Management Console requests are served by the primary node.

Logs are stored locally on the stateless nodes. They can be exported from these nodes to third-party log management services.

You can use the `ghe-cluster-support-bundle` and `ghe-support-bundle` commands to generate and upload cluster or single-node bundles.
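For example, to pull a single-node bundle from a stateless node over the administrative SSH port, you can run `ghe-support-bundle` remotely; `HOSTNAME` is a placeholder for the node's hostname, and `-o` writes the bundle to standard output:

``` shell copy
ssh -p 122 admin@HOSTNAME -- 'ghe-support-bundle -o' > support-bundle.tgz
```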
## Known limitations

This feature is not designed for monorepos, but the addition of new stateless nodes may indirectly improve monorepo operations by reducing web and job workloads on the primary node. There are no autoscaling or scaledown features.
Lines changed: 10 additions & 0 deletions
@@ -0,0 +1,10 @@
---
title: Additional nodes
intro: 'You can configure additional nodes to offload stateless workloads from the primary node in your {% data variables.product.prodname_ghe_server %} instance.'
versions:
  ghes: '>= 3.18'
topics:
  - Enterprise
children:
  - /configuring-additional-nodes
---

content/admin/monitoring-and-managing-your-instance/index.md

Lines changed: 1 addition & 0 deletions
@@ -14,5 +14,6 @@ children:
   - /configuring-high-availability
   - /caching-repositories
   - /multiple-data-disks
+  - /additional-nodes
 shortTitle: 'Monitor and manage your instance'
 ---

content/copilot/how-tos/use-copilot-agents/coding-agent/extend-coding-agent-with-mcp.md

Lines changed: 43 additions & 1 deletion
@@ -14,7 +14,7 @@ redirect_from:
   - /copilot/how-tos/agents/copilot-coding-agent/extend-coding-agent-with-mcp
   - /copilot/how-tos/agents/coding-agent/extend-coding-agent-with-mcp
 contentType: how-tos
-category:
+category:
   - Integrate Copilot with your tools
 ---
@@ -96,6 +96,15 @@ Note that all `string` and `string[]` fields besides `tools` & `type` support su
 
 ## Example configurations
 
+The examples below show MCP server configurations for different providers.
+
+* [Sentry](#example-sentry)
+* [Notion](#example-notion)
+* [Azure](#example-azure)
+* [Cloudflare](#example-cloudflare)
+* [Azure DevOps](#example-azure-devops)
+* [Atlassian](#example-atlassian)
+
 ### Example: Sentry
 
 The [Sentry MCP server](https://github.com/getsentry/sentry-mcp) gives {% data variables.product.prodname_copilot_short %} authenticated access to exceptions recorded in [Sentry](https://sentry.io).
@@ -250,6 +259,39 @@ To use the Azure DevOps MCP server with {% data variables.copilot.copilot_coding
 }
 ```
 
+### Example: Atlassian
+
+The [Atlassian MCP server](https://github.com/atlassian/atlassian-mcp-server) gives {% data variables.product.prodname_copilot_short %} authenticated access to your Atlassian apps, including Jira, Compass, and Confluence.
+
+For more information about authenticating to the Atlassian MCP server using an API key, see [Configuring authentication via API token](https://support.atlassian.com/atlassian-rovo-mcp-server/docs/configuring-authentication-via-api-token/) in the Atlassian documentation.
+
+```javascript copy
+// If you copy and paste this example, you will need to remove the comments prefixed with `//`, which are not valid JSON.
+{
+  "mcpServers": {
+    "atlassian-rovo-mcp": {
+      "command": "npx",
+      "type": "local",
+      "tools": ["*"],
+      "args": [
+        "mcp-remote@latest",
+        "https://mcp.atlassian.com/v1/mcp",
+        // We can use the $ATLASSIAN_API_KEY environment variable, which is passed
+        // to the server because of the `env` value below.
+        "--header",
+        "Authorization: Basic $ATLASSIAN_API_KEY"
+      ],
+      "env": {
+        // The value of the `COPILOT_MCP_ATLASSIAN_API_KEY` secret will be passed
+        // to the server command as an environment variable
+        // called `ATLASSIAN_API_KEY`.
+        "ATLASSIAN_API_KEY": "$COPILOT_MCP_ATLASSIAN_API_KEY"
+      }
+    }
+  }
+}
+```
+
 ## Reusing your MCP configuration from {% data variables.product.prodname_vscode %}
 
 If you have already configured MCP servers in {% data variables.product.prodname_vscode_shortname %}, you can leverage a similar configuration for {% data variables.copilot.copilot_coding_agent %}.

content/rest/issues/index.md

Lines changed: 2 additions & 1 deletion
@@ -16,8 +16,9 @@ children:
   - /assignees
   - /comments
   - /events
-  - /issues
   - /issue-dependencies
+  - /issue-field-values
+  - /issues
   - /labels
   - /milestones
   - /sub-issues
Lines changed: 15 additions & 0 deletions
@@ -0,0 +1,15 @@
---
title: REST API endpoints for issue field values
shortTitle: Issue field values
intro: Use the REST API to view and manage issue field values for issues.
versions: # DO NOT MANUALLY EDIT. CHANGES WILL BE OVERWRITTEN BY A 🤖
  fpt: '*'
  ghec: '*'
  ghes: '*'
topics:
  - API
autogenerated: rest
allowTitleToDifferFromFilename: true
---

<!-- Content after this section is automatically generated -->
