If everything is set up properly, this alert will reach Robusta. It will show up in the Robusta UI, Slack, and other configured sinks.
.. note::

    It might take a few minutes for the alert to arrive due to AlertManager's ``group_wait`` and ``group_interval`` settings. More info `here <https://prometheus.io/docs/alerting/latest/configuration/#:~:text=How%20long%20to%20wait%20before%20sending%20a%20notification%20about%20new%20alerts%20that%0A%23%20are%20added%20to%20a%20group%20of%20alerts%20for%20which%20an%20initial%20notification%20has%0A%23%20already%20been%20sent>`_.
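While testing, you can lower these grouping intervals in your AlertManager configuration so alerts arrive sooner. A minimal sketch, assuming a receiver named ``robusta`` (the receiver name and values are illustrative, not defaults you must use):

```yaml
route:
  receiver: robusta   # assumed receiver name; use the one from your own config
  group_wait: 15s     # how long to wait before the first notification for a new group
  group_interval: 1m  # how long to wait before notifying about new alerts added to an existing group
```

Remember to restore production-appropriate values afterwards, as very short intervals increase notification noise.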
.. details:: I configured AlertManager, but I'm not receiving alerts?
.. code-block:: yaml

    # grafana_api_key: <YOUR GRAFANA EDITOR API KEY> # (1)
    # alertmanager_flavor: grafana

    # If using a multi-tenant prometheus or alertmanager, pass the org id to all queries
    # prometheus_additional_headers:
    #   X-Scope-OrgID: <org id>
    # alertmanager_additional_headers:
    #   X-Scope-OrgID: <org id>

.. code-annotations::

    1. This is necessary for Robusta to create silences when using Grafana Alerts, because of minor API differences in the AlertManager embedded in Grafana.
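For example, with a multi-tenant Cortex/Mimir-style setup where the tenant id is ``1`` (a hypothetical value), the uncommented settings would look like this sketch (the ``globalConfig`` nesting is an assumption based on where these options normally live in ``generated_values.yaml``):

```yaml
globalConfig:
  # hypothetical tenant id; replace with your org id
  prometheus_additional_headers:
    X-Scope-OrgID: "1"
  alertmanager_additional_headers:
    X-Scope-OrgID: "1"
```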
docs/configuration/alertmanager-integration/grafana-alert-manager.rst
Modify and add the following config to ``generated_values.yaml`` and :ref:`update Robusta <Simple Upgrade>`.
.. code-block:: yaml

    # prometheus_additional_labels:
    #   cluster: 'CLUSTER_NAME_HERE'

    # If using a multi-tenant prometheus or alertmanager, pass the org id to all queries
    # prometheus_additional_headers:
    #   X-Scope-OrgID: <org id>
    # alertmanager_additional_headers:
    #   X-Scope-OrgID: <org id>

.. code-annotations::

    1. This is necessary for Robusta to create silences when using Grafana Alerts, because of minor API differences in the AlertManager embedded in Grafana.
1. Put Robusta's route as the first route, to guarantee it receives alerts. If you can't do so, you must guarantee all previous routes have ``continue: true`` set.
2. Keep sending alerts to receivers defined after Robusta.
3. Important, so Robusta knows when alerts are resolved.
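A sketch of an AlertManager config section following these rules (the receiver name and in-cluster URL are assumptions; adapt them to your own setup):

```yaml
receivers:
  - name: robusta
    webhook_configs:
      # assumed in-cluster service URL for the Robusta runner
      - url: http://robusta-runner.default.svc.cluster.local/api/alerts
        send_resolved: true  # (3) so Robusta knows when alerts are resolved
route:
  routes:
    - receiver: robusta  # (1) first route, so Robusta is guaranteed to receive alerts
      continue: true     # (2) keep sending alerts to receivers defined after Robusta
```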
.. include:: ./_alertmanager-config.rst

.. include:: ./_testing_integration.rst
Configure Metrics Querying
==========================
Robusta can query metrics and create silences using Victoria Metrics. If both are in the same Kubernetes cluster, Robusta can auto-detect the Victoria Metrics service. To verify, go to the "Apps" tab in Robusta, select an application, and check for usage graphs.
If auto-detection fails, you must add the ``prometheus_url`` parameter and :ref:`update Robusta <Simple Upgrade>`.
.. code-block:: yaml

    # Add any labels that are relevant to the specific cluster (optional)
    # prometheus_additional_labels:
    #   cluster: 'CLUSTER_NAME_HERE'
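A sketch of the relevant ``generated_values.yaml`` snippet, assuming Victoria Metrics is reachable at an in-cluster service (the service name, namespace, and port below are hypothetical; substitute your own):

```yaml
globalConfig:
  # hypothetical in-cluster Victoria Metrics URL; replace service, namespace, and port
  prometheus_url: "http://vmsingle-monitoring.default.svc.cluster.local:8429"
```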
Add the following to ``generated_values.yaml`` and :ref:`update Robusta <Simple Upgrade>`.

.. code-block:: yaml

    # grafana_api_key: <YOUR GRAFANA EDITOR API KEY> # (1)
    # alertmanager_flavor: grafana

    # If using a multi-tenant prometheus or alertmanager, pass the org id to all queries
    # prometheus_additional_headers:
    #   X-Scope-OrgID: <org id>
    # alertmanager_additional_headers:
    #   X-Scope-OrgID: <org id>

.. code-annotations::

    1. This is necessary for Robusta to create silences when using Grafana Alerts, because of minor API differences in the AlertManager embedded in Grafana.
docs/configuration/holmesgpt/toolsets/coralogix_logs.rst
.. md-tab-set::

    .. md-tab-item:: Robusta Helm Chart

        .. code-block:: yaml
Advanced Configuration
^^^^^^^^^^^^^^^^^^^^^^

Frequent logs and archive
*************************

By default, Holmes fetches logs from the `Frequent search <https://coralogix.com/docs/user-guides/account-management/tco-optimizer/logs/#frequent-search-data-high-priority>`_
tier and only fetches logs from the ``Archive`` tier if the frequent search returned no results.
Here is a description of each possible log retrieval methodology:

- **FREQUENT_SEARCH_FALLBACK** Search logs in the archive first. If there are no results, fall back to searching the frequent logs.
- **BOTH_FREQUENT_SEARCH_AND_ARCHIVE** Always use both the frequent search and the archive to fetch logs. The result contains merged data which is deduplicated and sorted by timestamp.
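As a sketch, the methodology would be set in the toolset's config in the Helm values. The nesting under ``holmes.toolsets`` follows the usual HolmesGPT toolset pattern, but the exact field name here is an assumption derived from the option names above:

```yaml
holmes:
  toolsets:
    coralogix/logs:
      config:
        # assumed field name, based on the methodologies described above
        logs_retrieval_methodology: BOTH_FREQUENT_SEARCH_AND_ARCHIVE
```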
Search labels
*************

You can tweak the labels used by the toolset to identify Kubernetes resources. This is **optional** and only needed if your
log settings for ``pod``, ``namespace``, ``application`` and ``subsystem`` differ from the defaults in the example below.
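A sketch of such a labels mapping (the Coralogix field names shown are assumptions for illustration, not necessarily the toolset's real defaults):

```yaml
holmes:
  toolsets:
    coralogix/logs:
      config:
        labels:
          # assumed mappings from Coralogix log fields to Kubernetes concepts
          pod: "kubernetes.pod_name"
          namespace: "kubernetes.namespace_name"
          application: "kubernetes.labels.app"
          subsystem: "kubernetes.container_name"
```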
You can verify what labels to use by attempting to run a query in Coralogix.
Disabling the default toolset
*****************************

If Coralogix is your primary datasource for logs, it is **advised** to disable the default HolmesGPT logging
tool by disabling the ``kubernetes/logs`` toolset. Without this, HolmesGPT may still use kubectl to fetch logs.
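A sketch of disabling that toolset in the Helm values (this follows the standard ``holmes.toolsets.<name>.enabled`` pattern used elsewhere in the Robusta docs):

```yaml
holmes:
  toolsets:
    kubernetes/logs:
      enabled: false  # prevent HolmesGPT from fetching logs via kubectl
```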