diff --git a/docs/reports/reports/advanced-reporting.mdx b/docs/reports/reports/advanced-reporting.mdx index 39c5564a6..a7edf8f44 100644 --- a/docs/reports/reports/advanced-reporting.mdx +++ b/docs/reports/reports/advanced-reporting.mdx @@ -6,15 +6,15 @@ sidebar_position: 2 import ThemedImage from '@theme/ThemedImage' import useBaseUrl from '@docusaurus/useBaseUrl' +Device42's Advanced Reporting engine is a schedulable reporting platform that lets you create reports with visualizations and export the output in different formats. A selection of predefined reports is included, and you can add an unlimited number of user-defined reports. + :::caution -We are primarily invested in **[Standard Reports](standard-reports.mdx)** and **[Insights+](insights-plus.mdx)** and encourage customers to use those first. +We are primarily invested in **[Standard Reports](/reports/reports/standard-reports)** and **[Insights+](/reports/reports/insights-plus)** and encourage customers to use those first. ::: ## Advanced Reporting Engine Overview -Device42's Advanced Reporting engine is a quantum leap forward in functionality. It is a fully incorporated, schedulable BI or reporting platform that lets you create both ad-hoc reports that may include visualizations, and export the output in different formats as desired. A changing selection of predefined reports is included with Advanced Reporting, while an unlimited number may be added under **User-Defined** reports, exported, imported, and shared! - -Head to **Analytics > Reports > Advanced Reporting** to get started: +Navigate to **Analytics > Reports > Advanced Reporting** to get started: }} /> -Device42's Advanced Reporting engine is extremely flexible, and you can use it to take your reporting to the next level, extracting insight and business intelligence from your infrastructure data. - -In this section, you'll learn how to select data and build reports in advanced reporting.
If you prefer a video, see [this advanced reporting blog post](https://www.device42.com/blog/2018/04/25/advanced-reporting-video-walk-through/) for a quick advanced reporting walkthrough! +This page covers how to run, customize, schedule, and build advanced reports. For a video walkthrough, see [this advanced reporting blog post](https://www.device42.com/blog/2018/04/25/advanced-reporting-video-walk-through/). -### Running an existing report +### Run an Existing Report -Running an existing report in advanced reporting is fast and easy. - -Head to **Analytics > Reports > Advanced Reporting** and expand **Pre-defined reports** on the left by clicking the arrow to show the list of existing reports. +Navigate to **Analytics > Reports > Advanced Reporting** and expand **Pre-defined reports** on the left by clicking the arrow to show the list of existing reports. 1. Choose a report from the list, highlight it, and click the **play icon** to run the report. @@ -64,9 +60,9 @@ Head to **Analytics > Reports > Advanced Reporting** and expand **Pre-defined re }} /> -### Customizing an existing report +### Customize an Existing Report -Customizing an existing report is easy! Simply make a copy and edit the copy, or double-click any user-created report to edit it directly. +Make a copy of the report and edit the copy, or double-click any user-created report to edit it directly. 1. To customize an existing report, either right-click or click on the hamburger menu and choose **Duplicate** to make a copy: @@ -80,7 +76,7 @@ Customizing an existing report is easy! Simply make a copy and edit the copy, or 2. Now, double-click the report you just copied, make your desired edits, and run as before. -### Scheduling Advanced Reports +### Schedule Advanced Reports Click the hamburger menu, select **Schedule**, and define the report to send out.
@@ -112,8 +108,6 @@ To edit the schedule of a report, click the **pencil icon** on the far right of }} /> -* * * - ## Building a Custom Advanced Report The following is the basic flow for building an advanced report. @@ -129,7 +123,7 @@ The following is the basic flow for building an advanced report. }} /> -3. Next, head to the **Filter and sort** tab to configure your sorting and to filter your data. +3. Go to the **Filter and sort** tab to configure sorting and filtering for your data. -### Sorting and Ordering Report Data +### Sort and Order Report Data -Device42 uses joins to handle sorting in Advanced Reporting by matching primary and foreign keys. Note that Advanced Reporting does a pretty good job attempting to set these for you, but for complicated reports, you will have to set or customize the joins yourself. You'll want to create joins that result in the data you care about, consulting the Device42 data dictionary when necessary. Note that each primary key should almost always have a matching foreign key, from being a primary key matching to a foreign key. If you are seeing data that you didn't expect, there may still be joins that need to be modified. Access joins by clicking the **gear icon** and selecting **Joins**. +Device42 uses joins to handle sorting in Advanced Reporting by matching primary and foreign keys. Advanced Reporting attempts to set these automatically, but for complex reports, you may need to set or customize the joins yourself. Consult the Device42 data dictionary when necessary. Each primary key should almost always have a matching foreign key. If you are seeing unexpected data, check whether any joins need to be modified. Access joins by clicking the **gear icon** and selecting **Joins**. 
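The joins described above ultimately express primary-to-foreign key matches between the reporting views. As a rough illustration of that relationship, here is the shape of a DOQL join between devices and their service instances. The view and column names below follow the common Device42 DOQL naming pattern but are illustrative — verify them against the data dictionary and the ERD before relying on them:

```sql
-- Illustrative only: confirm view and column names against the
-- Device42 data dictionary / ERD.
-- Each service instance row carries a foreign key (device_fk) that
-- matches a device's primary key (device_pk).
SELECT d.name AS device_name,
       s.service_name
FROM view_device_v1 d
JOIN view_serviceinstance_v1 s
  ON s.device_fk = d.device_pk
ORDER BY d.name;
```

If a report returns unexpected rows (for example, duplicated devices), the join conditions are the first place to check.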
## Use a DOQL Query To Create a Report Dataset @@ -179,9 +173,7 @@ With Device42's Advanced Reporting, you can use DOQL (a SQL derivative) to creat To create a report based on a DOQL query, name the report, and then choose **Add SQL** during the **Category** selection step of report creation instead of choosing any categories. -For a full set of steps in more detail, with pictures, see the dedicated [Use Custom SQL in Advanced Report](reports/reports/use-custom-sql-advanced-report.mdx) page. - -* * * +For detailed steps, see [Use Custom SQL in Advanced Report](/reports/reports/use-custom-sql-advanced-report). ## Reporting Data Categories @@ -209,7 +201,7 @@ You can browse all available categories: ### The Viewer Entity Relationship Diagram (ERD) -The Viewer Entity Relationship Diagram, or ERD for short, is a handy and powerful way to navigate the Device42 database schema. It's useful for visualizing relationships and writing DOQL queries. You can quickly navigate to the ERD from the Advanced Reporting page: +The Viewer Entity Relationship Diagram (ERD) is a visual way to navigate the Device42 database schema. It is useful for understanding relationships and writing DOQL queries. Navigate to the ERD from the Advanced Reporting page. -Clicking the **Entity Relationship Diagram** button brings up the ERD. Here is an example screenshot of the ERD. For more information, check out the dedicated [Device42 ERD/Viewer Schema page](reports/device42-doql/db-viewer-schema.mdx). +Clicking the **Entity Relationship Diagram** button brings up the ERD. For more information, see [Device42 ERD/Viewer Schema](/reports/device42-doql/db-viewer-schema). 
In this example, the **Find** field contains the search keyword `affinity`, and therefore much of the ERD has been greyed out, while the items that match the `affinity` search query are visible: diff --git a/docs/reports/reports/aws-migration-evaluator.mdx b/docs/reports/reports/aws-migration-evaluator.mdx index 2572cab60..bdc87d644 100644 --- a/docs/reports/reports/aws-migration-evaluator.mdx +++ b/docs/reports/reports/aws-migration-evaluator.mdx @@ -6,9 +6,9 @@ sidebar_position: 3 import ThemedImage from '@theme/ThemedImage' import useBaseUrl from '@docusaurus/useBaseUrl' -You can generate a Device42 AWS Migration Evaluator report from Device42's predefined reports and upload the report to the AWS Migration Evaluator portal. To see a sample report, see the [AWS Migration Business Case](https://d1.awsstatic.com/asset-repository/tso-logic/MigrationEvaluator_TSOLogic_AWS_BusinessCaseSample.pdf). +You can generate a Device42 AWS Migration Evaluator report from Device42's predefined reports and upload it to the AWS Migration Evaluator portal. To see a sample report, see the [AWS Migration Business Case](https://d1.awsstatic.com/asset-repository/tso-logic/MigrationEvaluator_TSOLogic_AWS_BusinessCaseSample.pdf). -To generate the report, follow the steps below. +## Generate the Report 1. Navigate to **Insights+** via the header menu. @@ -34,7 +34,9 @@ To generate the report, follow the steps below. 4. Combine the CSV files into a single `.xlsx` file for consumption by AWS. Each tab should be named to match its data. -Migration Evaluator will provide console access to the customer so you can securely upload the data to us for processing. +Migration Evaluator provides console access so you can securely upload the data for processing. + +## Upload the Report Before you upload the Device42 report, you may want to open and edit the `.xlsx` file to: @@ -46,10 +48,10 @@ In this example, the Device42 report is named `TSO Logic.xlsx`. 1. 
In the Migration Evaluator console, select **Discover > Self Reported Files**, and then click **Upload**. 2. Select your Device42 report file, and click **Upload**. -![](/assets/images/WEB-504_AWS-D42-file-import-1.png) +![Migration Evaluator file upload](/assets/images/WEB-504_AWS-D42-file-import-1.png) -![](/assets/images/WEB-504_AWS-D42-file-import-3.png) +![Upload confirmation](/assets/images/WEB-504_AWS-D42-file-import-3.png) :::tip -To learn more about Migration Evaluator, please visit [https://aws.amazon.com/migration-evaluator/resources/](https://aws.amazon.com/migration-evaluator/resources/). +To learn more about Migration Evaluator, see [https://aws.amazon.com/migration-evaluator/resources/](https://aws.amazon.com/migration-evaluator/resources/). ::: diff --git a/docs/reports/reports/aws-migration-hub.mdx b/docs/reports/reports/aws-migration-hub.mdx index 19b5d92d5..97498510a 100644 --- a/docs/reports/reports/aws-migration-hub.mdx +++ b/docs/reports/reports/aws-migration-hub.mdx @@ -2,12 +2,15 @@ title: "AWS Migration Hub" sidebar_position: 4 --- + import ThemedImage from '@theme/ThemedImage' import useBaseUrl from '@docusaurus/useBaseUrl' -You can generate a Device42 AWS Migration Hub report from Device42's predefined reports and upload the report to the AWS Migration Hub portal. Follow the steps below. +You can generate a Device42 AWS Migration Hub report from Device42's predefined reports and upload the report to the AWS Migration Hub portal. + +## Generate the Report -- Select **Analytics > Advanced Reporting**, and from the left panel, go to **Pre-Defined Reports > Integrations > Workload Portability**. +1. Select **Analytics > Advanced Reporting**, and from the left panel, go to **Pre-Defined Reports > Integrations > Workload Portability**. -- Select the **AWS Migration Hub** hamburger menu, choose **Export As**, and then click **CSV**. 
This generates and downloads a report — the amount of time required to generate the report depends on how much Device42 data you have. +2. Select the **AWS Migration Hub** hamburger menu, choose **Export As**, and then click **CSV**. This generates and downloads a report — the time required depends on how much Device42 data you have. -After the Device42 AWS Migration Hub report downloads, go to your AWS Console. +## Upload the Report to AWS + +After the report downloads, go to your AWS Console. -- Click **Services**, and then search for **Migration Hub**. +1. Click **Services**, then search for **Migration Hub**. -![](/assets/images/WEB-515_AWS-MH-Console-2.png) + ![AWS Console services search](/assets/images/WEB-515_AWS-MH-Console-2.png) -- Once you are on the Migration Hub home page, click **Tools**, and then select **Import** on the Discovery Tools page. +2. On the Migration Hub home page, click **Tools**, then select **Import** on the Discovery Tools page. -![](/assets/images/WEB-515_AWS-MH-Console-3.png) + ![Migration Hub tools](/assets/images/WEB-515_AWS-MH-Console-3.png) -![](/assets/images/WEB-515_AWS-MH-Console-4-1.png) + ![Discovery tools import](/assets/images/WEB-515_AWS-MH-Console-4-1.png) -**Note:** AWS requires that the import file be on an AWS S3 bucket. You must navigate to AWS S3 and upload the Device42 AWS Migration Hub file to an S3 bucket. +:::note +AWS requires that the import file be on an AWS S3 bucket. You must upload the Device42 AWS Migration Hub file to an S3 bucket before importing. +::: -![](/assets/images/WEB-515_AWS-MH-Console-5-Import-3.png) +![S3 upload requirement](/assets/images/WEB-515_AWS-MH-Console-5-Import-3.png) -- Navigate to your AWS S3 console and upload your Device42 AWS Migration Hub file to the appropriate S3 bucket. -- After you have uploaded your Device42 file to the AWS S3 bucket, copy the file's **Object URL** link. +3. 
Navigate to your AWS S3 console and upload the Device42 AWS Migration Hub file to the appropriate S3 bucket. -![](/assets/images/WEB-515_AWS-MH-Console-6-Import-Object-URL-fromS3-5.png) +4. After uploading the file, copy its **Object URL** link. -- Go back to Migration Hub and paste the object URL in the **Amazon S3 Object URL** field, enter an **Import name**, and click **Import**. + ![S3 object URL](/assets/images/WEB-515_AWS-MH-Console-6-Import-Object-URL-fromS3-5.png) -![](/assets/images/WEB-515_AWS-MH-Console-7-Import-URL-OLD-2.png) +5. Go back to Migration Hub and paste the object URL in the **Amazon S3 Object URL** field, enter an **Import name**, and click **Import**. + + ![Import URL field](/assets/images/WEB-515_AWS-MH-Console-7-Import-URL-OLD-2.png) The AWS import process starts and displays a list of imports. -![](/assets/images/WEB-515_AWS-MH-Console-8-Import-list.png) +![Import list](/assets/images/WEB-515_AWS-MH-Console-8-Import-list.png) + +## View Imported Data -- You can click on the name of the import file to see the imported information in detail. +Click the name of the import file to see the imported information in detail. -![](/assets/images/WEB-515_AWS-MH-Console-9-Import-details.png) +![Import details](/assets/images/WEB-515_AWS-MH-Console-9-Import-details.png) -- You can also click on a name in the **Server Info** column to see details about that particular server, with information about that imported item. +Click a name in the **Server Info** column to see details about that server. -![](/assets/images/WEB-515_AWS-MH-Console-10-Import-details.png) +![Server details](/assets/images/WEB-515_AWS-MH-Console-10-Import-details.png) -- You can also click **Applications** to view information about the applications created in Device42 that were imported. +Click **Applications** to view information about the applications created in Device42 that were imported. 
-![](/assets/images/WEB-515_AWS-MH-Console-11-Apps.png) +![Applications view](/assets/images/WEB-515_AWS-MH-Console-11-Apps.png) diff --git a/docs/reports/reports/cloud-endure-device42.mdx b/docs/reports/reports/cloud-endure-device42.mdx index 6d53cadda..4fc9ffb76 100644 --- a/docs/reports/reports/cloud-endure-device42.mdx +++ b/docs/reports/reports/cloud-endure-device42.mdx @@ -6,13 +6,13 @@ sidebar_position: 5 import ThemedImage from '@theme/ThemedImage' import useBaseUrl from '@docusaurus/useBaseUrl' -After you have performed your HyperVisors / \*nix / Windows scans and associated your business applications with your devices using the Device42 Business Application functionality, you are ready to prepare for your migration to AWS using CloudEndure. +After performing your hypervisor, \*nix, and Windows scans and associating business applications with devices using the Device42 Business Application functionality, you can prepare for your migration to AWS using CloudEndure. -Device42 has streamlined the process of conducting cloud migrations to AWS by integrating with the CloudEndure Blueprint. With a few short clicks, Device42 users can assess which workloads have the CloudEndure agent loaded as well as export blueprints for CloudEndure migrations. +Device42 integrates with the CloudEndure Blueprint to streamline cloud migrations to AWS. You can assess which workloads have the CloudEndure agent installed and export blueprints for CloudEndure migrations. -## Here's How It Works +## Assess CloudEndure Agent Status -When performing a migration with CloudEndure, the first step is to ensure the CloudEndure agent is installed on the workloads in scope, which you can easily view from the built-in **CloudEndure Prep** report. +When performing a migration with CloudEndure, first ensure the CloudEndure agent is installed on the workloads in scope. You can check this from the built-in **CloudEndure Prep** report. - Select **Analytics > Advanced Reporting**.
@@ -40,19 +40,19 @@ After ensuring the workloads in scope have the CloudEndure agent installed, down }} /> -### What Happens Next? +### Run the Mass Blueprint Setter -Once you have generated the CloudEndure **CSV** file from **Advanced Reporting**, you must make sure to download the [Mass Blueprint Setter](https://docs.cloudendure.com/Content/Scripts/CloudEndure%20mass%20blueprints%20setter.zip) script. +After generating the CloudEndure **CSV** file from **Advanced Reporting**, download the [Mass Blueprint Setter](https://docs.cloudendure.com/Content/Scripts/CloudEndure%20mass%20blueprints%20setter.zip) script. -This script requires **Python2.7** to run, and in order to have the blueprint set in CloudEndure, the devices in the **CSV** file must have the CloudEndure agent installed and be connected to your chosen project. +This script requires **Python 2.7**. The devices in the **CSV** file must have the CloudEndure agent installed and be connected to your chosen project. Before running the script, you must open the CloudEndure CSV file and add the project name in the first column (**projectName**) for each device. Include the relevant name of the matching project for each listed machine. -![](/assets/images/WEB-607_Cloud-Endure-4.png) +![CloudEndure CSV with project names](/assets/images/WEB-607_Cloud-Endure-4.png) There will also be other blank columns for **iamRole**, **placementGroup**, and others that you can supply in the CSV if you have already made decisions for these values for each machine. -Once the CSV file is prepped, you can then run the **Mass Blueprint Setter** script. +After prepping the CSV file, run the **Mass Blueprint Setter** script. Use `python CE_Update_Blueprints.py -h` to list all available options: @@ -75,10 +75,9 @@ optional arguments: When ready to run, supply all of the above parameters. If there are any issues when running the script, a `.log` file will be created in the same directory. 
-Once completed, all the machines in the CSV file that match machines in CloudEndure will have their respective blueprints updated. +When the script completes, all machines in the CSV file that match machines in CloudEndure will have their blueprints updated. ### Reference Links -CE API Docs available with Sample Scripts - [https://docs.cloudendure.com/Content/Getting\_Started\_with\_CloudEndure/API/API.htm](https://docs.cloudendure.com/Content/Getting_Started_with_CloudEndure/API/API.htm) - -Download the Mass Blueprint Setter - [https://docs.cloudendure.com/Content/Scripts/CloudEndure%20mass%20blueprints%20setter.zip](https://docs.cloudendure.com/Content/Scripts/CloudEndure%20mass%20blueprints%20setter.zip) +- [CE API Docs with Sample Scripts](https://docs.cloudendure.com/Content/Getting_Started_with_CloudEndure/API/API.htm) +- [Download the Mass Blueprint Setter](https://docs.cloudendure.com/Content/Scripts/CloudEndure%20mass%20blueprints%20setter.zip) diff --git a/docs/reports/reports/cloud-recommendation-engine.mdx b/docs/reports/reports/cloud-recommendation-engine.mdx index 0d55991ff..f413218e3 100644 --- a/docs/reports/reports/cloud-recommendation-engine.mdx +++ b/docs/reports/reports/cloud-recommendation-engine.mdx @@ -6,16 +6,16 @@ sidebar_position: 6 import ThemedImage from '@theme/ThemedImage' import useBaseUrl from '@docusaurus/useBaseUrl' -The Cloud Recommendation Engine is a powerful feature that can provide you with exactly the details you need to plan your next cloud migration, compare costs between Amazon AWS, Microsoft Azure, Google Cloud Platform, Oracle, and VMware Cloud on AWS cloud platforms, and right-size your next cloud deployment. +The Cloud Recommendation Engine provides the details you need to plan a cloud migration, compare costs between Amazon AWS, Microsoft Azure, Google Cloud Platform, Oracle, and VMware Cloud on AWS, and right-size your cloud deployment. 
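CRE instance sizing, as described in the feature overview below, applies a user-selected Safety Factor on top of the observed peak utilization. In code form, the calculation is simply (an illustrative sketch, not CRE's actual implementation):

```python
def sized_with_safety_factor(observed_peak: float, safety_factor: float) -> float:
    """Scale an observed peak resource value by a safety factor.

    Illustrative only -- not CRE's actual implementation. safety_factor is
    a fraction, e.g. 0.5 for a 50% slider setting.
    """
    return observed_peak * (1.0 + safety_factor)

# The example from the feature overview: a 16 GB RAM peak with a 50%
# safety factor is sized at 24 GB.
print(sized_with_safety_factor(16, 0.5))  # prints 24.0
```

CRE then picks the smallest instance type on each cloud platform that satisfies the scaled requirement.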
Select **Analytics > Reports > Cloud Recommendation Engine** and get clear recommendations for sizing cloud instances to suit your physical or virtual workloads. **CRE Feature Overview** -- The Cloud Recommendation Engine leverages Device42's industry-leading autodiscovery and resource utilization data to understand your workloads. You can now also select Business Applications you have created as an option when you run a CRE report. This simplifies moving devices in groups, waves, or by logical groupings defined by applications. +- The Cloud Recommendation Engine leverages Device42's industry-leading discovery and resource utilization data to understand your workloads. You can also select Business Applications as an option when running a CRE report, which simplifies moving devices in groups, waves, or by logical groupings defined by applications. - Cloud instance recommendations are provided for Amazon AWS, Microsoft Azure, Google Cloud Platform, Oracle, and VMware Cloud on AWS cloud platforms (including instance sizing recommendations and pricing information). - Instance recommendations are calculated based on a combination of your observed workloads (CPU, RAM, HDD, NIC, and so on) and Resource Utilization data (if available), plus a user-selected **Safety Factor** that you can set via the slider. For example, if your current instance peaked at 16GB RAM usage and you chose a 50% safety factor, CRE would size the cloud instance at 24GB. -- CRE reports now include the following monthly and annual cost information for all vendors: +- CRE reports include the following monthly and annual cost information for all vendors: - On-Demand Instance Cost - 1-Year Reserved Instance Cost – Prorated 3-Year Reserved Instance Cost @@ -38,16 +38,16 @@ Select **Analytics > Reports > Cloud Recommendation Engine** and get clear recom }} /> -## How Does the Cloud Recommendation Engine Work? +## How the Cloud Recommendation Engine Works 1. 
It creates a summary of CPU, memory, and OS info based on autodiscovered inventory data. 2. If you have the resource utilization feature enabled, peak CPU and memory usage over the last month is factored in. -3. When you click the **Send to Cloud and Analyze Data** button, Device42 anonymizes your data and sends it to our cloud servers, finding matching AWS, Azure, GCP, Oracle, or VMware Cloud on AWS workloads. +3. When you click the **Send to Cloud and Analyze Data** button, Device42 anonymizes your data and sends it to Device42's cloud servers, finding matching AWS, Azure, GCP, Oracle, or VMware Cloud on AWS workloads. 4. Device42’s bots do the hard work, returning workload recommendations that have been best-matched to each particular device in your Main Appliance. The anonymized data is then re-matched with your actual device names, and an output sheet is generated that contains both your device names and matched workloads for the following scenarios: - - AWS based on inventory data - - AWS based on resource utilization - - Azure based on inventory data + - AWS based on inventory data + - AWS based on resource utilization + - Azure based on inventory data - Azure based on resource utilization - GCP based on inventory data - GCP based on resource utilization @@ -93,10 +93,10 @@ connect-eu.device42.io https/443 ## Global Cloud Recommendation Engine Settings and Scheduling -You can use Global Cloud Recommendation Engine Settings to select settings and options for your CRE report and schedule the report to run on a daily basis. Changes you make to the Global CRE Settings are reflected on the **Analytics > Reports > Cloud Recommendation Engine** page. +You can use Global Cloud Recommendation Engine Settings to select settings and options for your CRE report and schedule the report to run on a daily basis. Changes you make to the Global CRE Settings are reflected on the **Analytics > Reports > Cloud Recommendation Engine** page. 
-- Select **Tools > Settings > Global Settings** to display the settings page and then click **Edit** at the top right of the page. -- Scroll down to the bottom of the page, click **Show** to expand the CRE settings section. +- Select **Tools > Settings > Global Settings** to display the settings page and then click **Edit** at the top right of the page. +- Scroll down to the bottom of the page, click **Show** to expand the CRE settings section. -- To schedule the CRE report, check the **Run CRE on schedule** box and select the **Scheduled time window** for the report. +- To schedule the CRE report, check the **Run CRE on schedule** box and select the **Scheduled time window** for the report. -- Click **Save** to save the CRE settings. +- Click **Save** to save the CRE settings. diff --git a/docs/reports/reports/discovery-quality-scores.mdx b/docs/reports/reports/discovery-quality-scores.mdx index 862cc7497..3951c1cee 100644 --- a/docs/reports/reports/discovery-quality-scores.mdx +++ b/docs/reports/reports/discovery-quality-scores.mdx @@ -6,11 +6,15 @@ sidebar_position: 8 import ThemedImage from '@theme/ThemedImage' import useBaseUrl from '@docusaurus/useBaseUrl' -The Discovery Scores page provides users with the ability to view the success of their discovery jobs on a granular level. Via the quality scores page, users can see each device that was discovered, the target IP it was discovered from _(which is also a link to that job's page)_, the job name, time stamps, and status of port check and auth. +The Discovery Scores page lets you view the success of your discovery jobs at a granular level. For each discovered device, you can see the target IP it was discovered from, the job name, timestamps, and the status of port check and auth. -The discovery scores have been refactored to reflect what Device42 does during discovery more accurately. 
When Device42 communicates with an API rather than an individual endpoint (for example, a virtual machine), Device42 now creates an **API Manager** discovery score record to report successes or failures against the API endpoint directly. Devices returned by the API will then generate individual discovery score records. +When Device42 communicates with an API rather than an individual endpoint (for example, a virtual machine), it creates an **API Manager** discovery score record to report successes or failures against the API endpoint directly. Devices returned by the API then generate individual discovery score records. -To view the Discovery Scores page, select  **Analytics > Discovery Status > Discovery Scores** from the main menu. +To view the Discovery Scores page, select **Analytics > Discovery Status > Discovery Scores** from the main menu. + +## Score Summary Charts + +The top of the page displays pie charts — **Success**, **Scores**, and **Queue** — for a quick visual summary of your discovery score status. Hover over the charts, click on a chart section or legend entry, and filter the scores list for additional context. Discovery Status > Di }} /> -The Discovery Scores page now includes Score Summary pie charts at the top of the page – **Success**, **Scores**, and **Queue** – for a quick visual summary of your discovery score status. You can hover over the charts, click on a **Discovery Success** chart section or the chart legend, and quickly filter the scores list for additional context. -The Discovery Scores page now also includes a column for **Other** to rate the success of discovery functions that don’t fall into the four existing categories (basic, software, services, applications). This may account for API calls, for example. This number typically represents how many actions we took and how many of them were successful or returned data. 
This number has no correlation to how many devices or records were created, but rather a generic measure of all the steps Device42 took to gather information. The page also contains two additional columns – **Queue** and **Cumulative Score** – that reflect ingestion status and the cumulative success score for the actions attempted by the job. +### Score List Columns + +The score list includes the following columns. + +| Column | Description | +|--------|-------------| +| **Discovery Target** | The IP or FQDN targeted by the discovery job that found this device. | +| **Job Name** | The name of the discovery job that discovered the device. | +| **Job Start Time** | The time the discovery job started. | +| **Sub Type** | The chosen subtype for the job. | +| **Port Check** | Status of the first step of discovery: connecting to the target port (for example, port 22 for Linux SSH). Green check (success) or red X (failure). | +| **Auth** | Status of authentication using the credentials in the job settings. Green check (success) or red X (failure). | +| **Discovery Successful** | Green check when both Port Check and Auth succeed. Red X if either fails. Marked as PARTIAL or OK if the device is added but some scores failed. | +| **Sudo Access** | Whether sudo access is allowed. Green check if allowed, red cross if not, dash if not applicable. | +| **Ignored** | A [Device Ignore Rule](/infrastructure-management/devices/device-ignore-rules) was applied and the device was excluded. | +| **Ignore Rule** | The text entered in the **Ignored text contains** field when the Device Ignore Rule was created. | +| **Success** | Whether the device was successfully discovered (and ignored if applicable). | +| **Object** | The discovered device. | +| **Unprocessed Device** | Whether there is an unprocessed device requiring further attention. | +| **Inventory** | Count of basic inventory items discovered. | +| **Software** | Count of discovered software. 
| +| **Services** | Count of discovered services. | +| **Applications** | Count of discovered applications. | +| **Other** | Success rating for discovery functions outside the four main categories. A generic measure of all steps Device42 took to gather information. | +| **Queue** | Queue-processing success for the job. | +| **Cumulative Score** | Cumulative success score for actions attempted by the job. Click into the job for a list of attempted actions. | + -By default, discovered devices are sorted from newest to oldest, most recently discovered listed first. Each device is shown on its own line, which offers information about how it was discovered, including the target discovery IP _(which also links back to each job’s View Discovery Score detail page)_ of the job that discovered it, the job name _(which links to the job’s the View Discovery Job page)_, the job’s timestamps, and a red/green ‘at-a-glance’ view of the discovery status. Green indicates discovery success, while red indicates there might have been an issue with that particular stat. +## View the Discovery Score Details Page -Clicking on any of the items in the **Discovery Targets** column will bring you to a View Discovery Score details page for that particular item. +Click any item in the **Discovery Target** column to open the View Discovery Score details page for that item. + +**Partial Failure** indicates that some of the discovery was successful, but not all components were. The detailed view helps narrow down what could not succeed. -**Partial Failure**: If any of the discovery aspects fail, this will result in a "partial failure" on the main Discovery Score page. A partial failure indicates that a portion of the discovery was successful, but not all components were. This is where the detailed view (above) is helpful, as it allows you to narrow in on what could not succeed. In future iterations, we will provide more detail on how to remediate these failures. 
- -## Discovery Score Column Details - -Each discovered device in the list on the Discovery Score view page includes helpful statistics that offer insight into what was discovered from each device. - -Device pages now display the five latest discovery scores, with the ability to scroll down to see more. - -The following is a short explanation of the fields present on the Discovery Score page: - - +## View Discovery Job Status -**Discovery Quality Scores Page Column List**: - -- **Discovery Target**: The IP or FQDN that was targeted by the discovery job that found this device. -- **Job Name**: The name of the discovery job that discovered the device. -- **Job Start Time**: The time the discovery job started. -- **Sub Type**: The chosen subtype for the job. -- **Port Check**: The port check references the first step of each discovery job, in which an attempt is made to connect to the target discovery port, for example, a connection is attempted port 22 for a Linux SSH-based discovery - Green Check (success) / Red-X (failure). -- **Auth**: This field reports the status of authentication to a given endpoint using the credentials supplied in the discovery job's settings - Green Check (success) / Red-X (failure). -- **Discovery Successful**: Success (green check mark) will be shown when both Port check and Auth succeed. If either of those two fails, the success column shows Red-X. _Note that it will only be marked as a failure if the device is not added - otherwise, based on scores, it will be marked as PARTIAL or OK._ -- **Sudo Access**: This field indicates whether sudo access is allowed for the discovery job. The icon will be a green check mark if sudo access is allowed, a red cross if it is not, and a dash if not applicable (for non-nix scores). 
-- **Ignored**: This field indicates a successful connection to a device to which a [Device Ignore Rule](../../infrastructure-management/devices/device-ignore-rules.mdx) was applied, and that the device was ruled out and ignored. -- **Ignore Rule**: This is the text entered in the **Ignored text contains** field when the [Device Ignore Rule](../../infrastructure-management/devices/device-ignore-rules.mdx) was created. -- **Success**: This field indicates whether the device was successfully discovered (and ignored if applicable). -- **Object**: The discovered device. -- **Unprocessed Device**: This field indicates whether there is an unprocessed device requiring further attention. -- **Inventory**: A count of basic inventory items discovered. -- **Software**: A count of discovered software. -- **Services**: A count of discovered services. -- **Applications**: A count of discovered applications. -- **Other**: A rating of the success of discovery functions that don’t fall into the four existing categories (basic, software, services, applications). -- **Queue**: The queue-processing success for the job. -- **Cumulative Score**: The cumulative success score for the actions attempted by the job. Click into the job for a list of attempted actions. - -The progress section of each supported discovery has been centralized and redone to show more consistent progress, and the objects-added count has been converted to a hyperlink to show the discovery scores for newly added objects more quickly. - -Click an item in the **Job Name** column of the Select Discovery Score page to see the View Discovery Job page for that job. +Click an item in the **Job Name** column to open the View Discovery Job page. This page shows the job's progress, including progress bars for Detailed Discovery and Advanced Discovery phases, and a summary panel with counts for auth failures, discovery exceptions, successes, objects added or updated, and total actions attempted. 
Exports (CSV)** menu gives you options for exporting data from Device42 into CSV files. - -You can also export data via the **Reports** menu under **Analytics** and via the APIs. +The **Tools > Exports (CSV)** menu gives you options for exporting data from Device42 into CSV files. You can also export data via the **Reports** menu under **Analytics** and via the APIs. ## Export Records and Generate Reports @@ -28,7 +26,7 @@ The **Rack Device** report shows the physical device and rack relationships. ## Export Records List -The table below shows all the report options available for exporting: +The following report options are available for exporting: | | | | |-------------------|-------------------|-------------------| diff --git a/docs/reports/reports/index.mdx b/docs/reports/reports/index.mdx index c89c56a54..2c72d9b28 100644 --- a/docs/reports/reports/index.mdx +++ b/docs/reports/reports/index.mdx @@ -2,4 +2,12 @@ title: "Reports" --- -These sections cover entries found in the reporting menu of the Device42 appliance. Note that the much more capable Advanced Reporting Engine has superseded what is now referred to as "Legacy Reporting", and should be utilized for creation of any new reports going forward. +Device42 provides several tools for reporting on and analyzing your infrastructure data, from interactive dashboards and visual analytics to scheduled reports and audit logs. + +Device42 offers three reporting options: + +- **[Insights+](/reports/reports/insights-plus):** The recommended option for new dashboards and analytics. A BI platform for interactive dashboards and visuals with deep data exploration capabilities. +- **[Advanced Reporting](/reports/reports/advanced-reporting):** Supports tabular reports (ExpressView and ExpressReport) as well as charts, maps, gauges, and dashboards. +- **[Standard Reports](/reports/reports/standard-reports):** Simple tabular reports for quick queries (formerly known as "Classic Reports"). 
+ +This section also covers [alerts and notifications](/reports/reports/setup-alerts-and-notifications), [discovery scores](/reports/reports/discovery-quality-scores), [audit logs](/reports/reports/object-history-aka-audit-log), and tools for exporting data. diff --git a/docs/reports/reports/insights-plus.mdx b/docs/reports/reports/insights-plus.mdx index 7388c5030..881e2b4f9 100644 --- a/docs/reports/reports/insights-plus.mdx +++ b/docs/reports/reports/insights-plus.mdx @@ -6,21 +6,19 @@ sidebar_position: 1 import ThemedImage from '@theme/ThemedImage' import useBaseUrl from '@docusaurus/useBaseUrl' -Device42 **Insights+** provides integrated analytics that leverage the breadth and depth of Device42 discovery to help you make sense of your data through visuals and dashboards so that you can make better, more informed business decisions _quickly_. +Device42 **Insights+** is a BI platform that provides integrated analytics through interactive dashboards and visualizations. It leverages Device42's discovered data to help you identify patterns, trends, and outliers across your entire estate. -Insights+ identifies patterns, trends, and outliers in data sets across your entire estate, elevating your performance with data understanding. - -The combination of automatic discovered data and visualization empowers you and your IT teams with a more accurate understanding of your environment that would take the most senior IT staff years to understand –  which helps reduce the time it takes to restore service, increase the speed of root cause discovery, and better plan for capacity growth. +Insights+ combines discovered data with visualization to give you and your IT teams a more accurate understanding of your environment — helping reduce the time to restore service, speed up root cause analysis, and plan for capacity growth. 
:::note -Additional curated dashboards are available on our Insights+ GitHub page: [https://github.com/device42/insights](https://github.com/device42/insights). Follow the instructions on the page to download and import the dashboards. +Additional curated dashboards are available on the Insights+ GitHub page: [https://github.com/device42/insights](https://github.com/device42/insights). Follow the instructions on the page to download and import the dashboards. ::: ## Use Insights+ -Click on **Insights+** in the Device42 main menu to display the Insights+ home page, and then select the visualizations you want to see. +Click on **Insights+** in the Device42 main menu to display the Insights+ home page, and then select the visualizations you want to see. -**Note**: You can also select and display Insights+ dashboards on the Device42 [home page](getstarted/using-device42/home-page-widgets-and-global-search.mdx). +You can also select and display Insights+ dashboards on the Device42 [home page](/getstarted/using-device42/home-page-widgets-and-global-search). -- Click **DBB Cookbook** to go to the cookbook documentation pages. -- Click **Data Dictionary** to see the available Data Building Blocks. -- Click **Import** to get new or updated dashboards as they become available. (Note you must be a super admin user or have the **Feature | Update Insights+ Dashboards** permission to import.) -- Click **Reports** to create or edit email [Reports and Alerts](#email-reports-and-alerts). -- Click **How it Works?** to view the Insights+ documentation page. -- Click **Repository** to go to the Insights+ GitHub page. +- Click **DBB Cookbook** to go to the cookbook documentation pages. +- Click **Data Dictionary** to see the available Data Building Blocks. +- Click **Import** to get new or updated dashboards as they become available. (Note you must be a super admin user or have the **Feature | Update Insights+ Dashboards** permission to import.) 
+- Click **Reports** to create or edit email [Reports and Alerts](#email-reports-and-alerts). +- Click **How it Works?** to view the Insights+ documentation page. +- Click **Repository** to go to the Insights+ GitHub page. - Click on a dashboard to see its charts and graphs. The list of dashboards and charts appears in the left panel. -- Many Insights+ dashboards include filters you can use to refine the visualizations to see the information you want. You can collapse the filters to increase the dashboard viewing area. +- Many Insights+ dashboards include filters you can use to refine the visualizations to see the information you want. You can collapse the filters to increase the dashboard viewing area. @@ -161,7 +159,7 @@ You can create and edit email reports and alerts for dashboards or charts that a - Alerts and reports can have multiple owners (including the creator) who can all modify the alert or report. - Alerts and reports use the Device42 mail server settings (**Tools > Settings > Mail Server Settings**) to send alert and report emails – these need to be set for emails to function correctly. -Click **Reports** at the top right of the Insights+ home page, a dashboard, or a chart to display the Alerts and Reports page. +Click **Reports** at the top right of the Insights+ home page, a dashboard, or a chart to display the Alerts and Reports page. -The list page displays existing reports or alerts and includes **Actions** options for viewing logs and editing or deleting the reports or alerts. Click **Alerts** or **Reports** at the top left to display the items you want. +The list page displays existing reports or alerts and includes **Actions** options for viewing logs and editing or deleting the reports or alerts. Click **Alerts** or **Reports** at the top left to display the items you want. -- Enter a **Name** for the report, select the report **Owners** and add a **Description** if you want to. 
Note that **Owners** should include the report creator and any other users you want to be able to modify the report. -- The report is **Active** by default. -- Use the **Report schedule** dropdowns to set the schedule or enter a **CRON Schedule**. -- You can select or enter **Schedule settings** for **Log Retention** and **Working Timeout**. -- Select either **Dashboard** or **Chart** as the **Message content** and then use the dropdown to select the specific dashboard or chart you want. Select **Ignore cache when generating screen shot** to have Insights+ regenerate the dashboard or chart graphic rather than using a cached version. -- Select **Email** as the **Notification method**, and then enter the recipient email addresses (separated by commas or semicolons). -- Click **Save** to save the report. +- Enter a **Name** for the report, select the report **Owners** and add a **Description** if you want to. Note that **Owners** should include the report creator and any other users you want to be able to modify the report. +- The report is **Active** by default. +- Use the **Report schedule** dropdowns to set the schedule or enter a **CRON Schedule**. +- You can select or enter **Schedule settings** for **Log Retention** and **Working Timeout**. +- Select either **Dashboard** or **Chart** as the **Message content** and then use the dropdown to select the specific dashboard or chart you want. Select **Ignore cache when generating screen shot** to have Insights+ regenerate the dashboard or chart graphic rather than using a cached version. +- Select **Email** as the **Notification method**, and then enter the recipient email addresses (separated by commas or semicolons). +- Click **Save** to save the report. ### Alerts -Click **+ Alert** to add a new alert, and click the **Edit** icon to edit an existing alert. Insights+ displays the add/edit page. An alert lets you define an alert condition for the notification, but requires a SQL query to create the condition. 
The condition test is triggered according to the schedule you set for the alert.
+Click **+ Alert** to add a new alert, and click the **Edit** icon to edit an existing alert. Insights+ displays the add/edit page. An alert lets you define an alert condition for the notification, but requires a SQL query to create the condition. The condition test is triggered according to the schedule you set for the alert.
-- Enter a **Name** for the alert, select the alert **Owner** and add a **Description** if you want to. Note that **Owners** should include the alert creator and any other users you want to be able to modify the alert.
-- The alert is **Active** by default.
-- Select the **Database** to use (this should always be `d42_viewer_mt`) for the **Alert condition** and enter the **SQL Query** for the condition. Select a **Trigger Alert If…** operator from the dropdown, and then select the value for the statement to be used with the SQL query.
+- Enter a **Name** for the alert, select the alert **Owners**, and add a **Description** if you want to. Note that **Owners** should include the alert creator and any other users you want to be able to modify the alert.
+- The alert is **Active** by default.
+- Select the **Database** to use (this should always be `d42_viewer_mt`) for the **Alert condition** and enter the **SQL Query** for the condition. Select a **Trigger Alert If…** operator from the dropdown, and then select the value for the statement to be used with the SQL query.
-- **SQL Query** expects a SQL statement to poll Device42 for an aggregate value based on the input SQL. This means you'll want to use a SQL statement that has a `WHERE` clause and provides the output as a single aggregate value like `COUNT`, `SUM`, `MAX`, or `MIN`.
+- **SQL Query** expects a SQL statement to poll Device42 for an aggregate value based on the input SQL.
This means you'll want to use a SQL statement that has a `WHERE` clause and provides the output as a single aggregate value like `COUNT`, `SUM`, `MAX`, or `MIN`. Here are a few examples: @@ -244,13 +242,13 @@ WHERE DATE(first_added) = CURRENT_DATE AND tags IS NULL WITH device_capacity AS ( Select -     a.device_pk Device_ID, -     ROUND(sum(c.capacity - c.free_capacity) / sum(c.capacity) * 100,2) used_percentage + a.device_pk Device_ID, + ROUND(sum(c.capacity - c.free_capacity) / sum(c.capacity) * 100,2) used_percentage From -     view_device_v2 a -     Left Join view_mountpoint_v1 c on c.device_fk = a.device_pk -    Where c.capacity>0 -                and a.network_device = 'f' + view_device_v2 a + Left Join view_mountpoint_v1 c on c.device_fk = a.device_pk + Where c.capacity>0 + and a.network_device = 'f' GROUP BY 1 ) SELECT COUNT(*) FROM device_capacity @@ -258,15 +256,15 @@ WHERE DATE(first_added) = CURRENT_DATE AND tags IS NULL ``` -- Use the **Report schedule** dropdowns to set the schedule or enter a **CRON Schedule**. -- You can select or enter **Schedule settings** for **Log Retention**, **Working Timeout**, and **Grace Period**. -- Select either **Dashboard** or **Chart** as the **Message content**, and then use the dropdown to select the specific dashboard or chart you want. Select **Ignore cache when generating screen shot** to have Insights+ regenerate the dashboard or chart graphic rather than using a cached version. -- Select **Email** as the **Notification method**, and then enter the recipient email addresses (separated by commas or semicolons). -- Click **Save** to save the alert. +- Use the **Report schedule** dropdowns to set the schedule or enter a **CRON Schedule**. +- You can select or enter **Schedule settings** for **Log Retention**, **Working Timeout**, and **Grace Period**. +- Select either **Dashboard** or **Chart** as the **Message content**, and then use the dropdown to select the specific dashboard or chart you want. 
Select **Ignore cache when generating screen shot** to have Insights+ regenerate the dashboard or chart graphic rather than using a cached version. +- Select **Email** as the **Notification method**, and then enter the recipient email addresses (separated by commas or semicolons). +- Click **Save** to save the alert. ## DCIM Dataset, Chart, and Alert Example -This section provides examples of how to take existing Device42 datasets and transform them into analytics of your own. For these datasets, you will first need to clone the dataset into one that you own and manage, so that our system dashboards are not affected by any analysis you want to do. This is most easily done by exploring the dataset and viewing it in the SQL Editor. Here, you can save it as a new dataset and label it something meaningful. +This section provides examples of how to take existing Device42 datasets and transform them into your own analytics. For these datasets, you first need to clone the dataset into one that you own and manage, so that system dashboards are not affected by your analysis. The easiest way to do this is to explore the dataset, view it in the SQL Editor, and save it as a new dataset with a meaningful name. ### Power Flow @@ -274,19 +272,19 @@ Power Flow is one of the datasets driving the Infrastructure Analysis dashboard, Since Device42 currently provides several point-in-time metrics on Power usage, you may want to develop a time-series analysis of power usage to predict future power needs. For that, a time series line chart will do well. -Start by selecting **View All Charts** and finding the **#Time** option in the list. +Start by selecting **View All Charts** and finding the **#Time** option in the list. ![](/assets/images/Insights_view-all-charts-NM.png) -Choose the **Time-series Smooth Line** to return a clean looking graph. In the **Time** column, use **Start Time** from the dataset. 
**End Time** is also a suitable option depending on how you want to perform the analysis. Since it only returns day-level values, keep the **Time Grain** as **Day**. +Choose the **Time-series Smooth Line** to return a clean looking graph. In the **Time** column, use **Start Time** from the dataset. **End Time** is also a suitable option depending on how you want to perform the analysis. Since it only returns day-level values, keep the **Time Grain** as **Day**. ![](/assets/images/Insights_time-series-NM.png) -We want to measure the max power apparent from all PDUs, but let’s target the display by grouping the data by **Room** and **Rack**, so we can break down the totals into meaningful categories. +To measure the max power apparent from all PDUs, group the data by **Room** and **Rack** to break down the totals into meaningful categories. -However, since the dataset returns all metrics and time periods, we need to filter down to the singular measurement we want to analyze. Let’s use the 'MAX' metric and the '1 Day' time period to get the peak value for power apparent measured each day. +Since the dataset returns all metrics and time periods, filter down to the specific measurement you want to analyze. Use the `MAX` metric and the `1 Day` time period to get the peak value for power apparent measured each day. -We end up with something like this, but you can choose to customize the formatting in the **Customize** tab, where you will find color schemes, labels, values, and other options. +The result looks like the following example. You can customize the formatting in the **Customize** tab, where you will find color schemes, labels, values, and other options. 
![](/assets/images/Insights_customize-NM.png) @@ -298,15 +296,15 @@ For example, you can take the cloned dataset and set up a bar chart, as in the i ![](/assets/images/Insights_power-impact-NM.png) -By just setting up the **Dimensions** as **pdu** and the **Metrics** as a **count**, you quickly get an idea of how ubiquitous your PDUs are across your network. You can change this to a sunburst chart to include the percentage of devices on each PDU. It is easy to add a filter to this chart to look at a particular building, room, or rack using the appropriate fields. You can swap to measure the distinct business apps affected. +Set the **Dimensions** to **pdu** and the **Metrics** to **count** to see how widely your PDUs are distributed across your network. You can change this to a sunburst chart to include the percentage of devices on each PDU, add a filter to look at a particular building, room, or rack, or swap to measure the distinct business apps affected. This dataset is ideal for impact analysis or incident management use cases – it helps narrow down the scope of information to just the affected devices in the case of an outage. ### DC Capacity – Alert Example -Data Center Capacity is a dataset that supports the Risk Center dashboard. It provides capacity information at building, room, and rack levels. Similar to power flow, we must filter down to a specific type of filter when analyzing this dataset, or the aggregation values won’t be accurate.  It also makes a great example of how to use Insights+ alerts to your advantage. +Data Center Capacity is a dataset that supports the Risk Center dashboard. It provides capacity information at building, room, and rack levels. Similar to Power Flow, you must filter down to a specific type when analyzing this dataset, or the aggregation values will not be accurate. This dataset also makes a good example of how to use Insights+ alerts. -First, we’ll need to create a chart that provides some immediate value. 
Let’s look for racks with a limited number of network ports available. We will keep it simple and just have it return some tabular data. After cloning the dataset, create a chart from it and first set up the handful of columns we think are important: +To create a chart that provides immediate value, look for racks with a limited number of network ports available. After cloning the dataset, create a chart from it and set up the following columns: - Building - Room @@ -315,15 +313,15 @@ First, we’ll need to create a chart that provides some immediate value. Let’ - `network_device_count` - `network_port_count ` -In the filters, we want to display only the racks (case-sensitive) that have less than some threshold of their network ports available for use. This is done through one **Simple** filter and one **Custom SQL** filter. +In the filters, display only the racks (case-sensitive) that have less than a given threshold of their network ports available. This is done through one **Simple** filter and one **Custom SQL** filter. ![](/assets/images/Insights_simple-and-custom-sql-NM.png) -This will return a table of data showcasing racks that have less than 10% of their network ports available and it is easy to manipulate as needed. Now, let’s create both a report and an alert. A report is a scheduled execution of a chart or dashboard that can be emailed as an image or a CSV file – reports work well for tabular data, so let’s choose that option. This schedule will send the file at noon EDT once a week on Mondays, regardless of what data is returned. +This returns a table of racks that have less than 10% of their network ports available. From here, you can create both a report and an alert. A report is a scheduled execution of a chart or dashboard that can be emailed as an image or a CSV file — reports work well for tabular data. The following example schedule sends the file at noon EDT once a week on Mondays, regardless of what data is returned. 
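+The weekly cadence described above can also be entered directly in the **CRON Schedule** field. As a sketch, assuming the field accepts standard five-field cron syntax (minute, hour, day of month, month, day of week) and that times are interpreted in the appliance's configured time zone, noon every Monday would be:
+
+```
+0 12 * * 1
+```
+
+Adjust the fields to change the cadence; for example, `0 12 * * *` would run daily at noon.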
![](/assets/images/Insights_report-setup-NM.png) -An alert, while still a scheduled execution, will only send the email if a certain condition is met. In this case, let’s set up an email that will be sent every day if there are any racks that have no remaining netports available. This query was built by using the `rack_port_count` part of the base query we used in the chart and wrapping a `COUNT` around it. Alerts will sometimes require some SQL knowledge to enable intelligent conditions. +An alert, while still a scheduled execution, only sends the email if a certain condition is met. The following example sends an email every day if there are any racks with no remaining network ports available. This query uses the `rack_port_count` part of the base query from the chart and wraps a `COUNT` around it. Alerts may require some SQL knowledge to define effective conditions. ![](/assets/images/Insights_alert-setup-NM.png) @@ -332,21 +330,21 @@ An alert, while still a scheduled execution, will only send the email if a certa ```sql WITH rack_ports AS ( SELECT d.rack_fk -            ,count(*) total_network_port_count -            ,(SELECT COUNT(*) FROM view_netport_v1 np WHERE d.device_pk = np.device_fk AND np.remote_netport_fk IS NOT NULL OR d.device_pk = np.second_device_fk AND np.remote_netport_fk IS NOT NULL) used_network_port_count + ,count(*) total_network_port_count + ,(SELECT COUNT(*) FROM view_netport_v1 np WHERE d.device_pk = np.device_fk AND np.remote_netport_fk IS NOT NULL OR d.device_pk = np.second_device_fk AND np.remote_netport_fk IS NOT NULL) used_network_port_count FROM view_device_v2 d JOIN view_netport_v1 np ON d.device_pk = np.device_fk OR d.device_pk = np.second_device_fk JOIN view_rack_v1 r ON r.rack_pk = d.rack_fk WHERE -             d.network_device IS TRUE -             AND d.rack_fk IS NOT NULL + d.network_device IS TRUE + AND d.rack_fk IS NOT NULL GROUP BY -             d.rack_fk,d.device_pk) + d.rack_fk,d.device_pk) SELECT count(DISTINCT rack_fk) 
FROM rack_ports WHERE -            total_network_port_count = used_network_port_count + total_network_port_count = used_network_port_count ``` @@ -363,9 +361,10 @@ You can quickly delete a saved dataset by following these steps. Note that delet dark: useBaseUrl('/assets/images/insights-plus/insights-create-chart-dark.png'), }} /> -

-2. You'll be directed to a table listing your existing datasets. Identify the dataset you want to delete and move your cursor to the far right, under the **Actions** column. A **trashcan icon** and **pencil icon** will appear as you hover over it. Click on the **trashcan icon**. + + +2. A table listing your existing datasets is displayed. Identify the dataset you want to delete and hover over the far right of its row, under the **Actions** column. A **trashcan icon** and **pencil icon** appear. Click the **trashcan icon**. -

-3. A modal window will appear informing you of the number of charts and dashboards that will be affected by deleting the selected dataset. Type `DELETE` into the text box and click on the **Delete** button to delete the dataset.
+
+
+3. A confirmation dialog shows the number of charts and dashboards that will be affected. Type `DELETE` into the text box and click **Delete** to remove the dataset.
Jobs Dashboard**. The dashboard has three other sub-dashboards: **Completed Jobs**, **Queue Processing Stats**, and **Other Jobs Summary**.
+Navigate to **Analytics > Jobs Dashboard** from the main menu.
Check the box next to the job name(s) you want to kill, then select one of the **Kill selected job(s)** options from the dropdown menu:
-* The first **Allow Pending Jobs to Complete** option will terminate the ongoing discovery but let any payloads in the queue finish processing.
-* The second **Remove Pending Jobs Immediately** option will terminate the ongoing discovery job and remove any payloads in the queue. If a discovery is half-finished when cancelled, it will be ended, and no further devices will be added or updated. However, if timed correctly, there may be an edge case where payloads are dispatched to the MA after emptying the queue.
+- The first **Allow Pending Jobs to Complete** option will terminate the ongoing discovery but let any payloads in the queue finish processing.
+- The second **Remove Pending Jobs Immediately** option will terminate the ongoing discovery job and remove any payloads in the queue. If a discovery is half-finished when cancelled, it will be ended, and no further devices will be added or updated. However, depending on timing, there is an edge case where payloads may still be dispatched to the MA after the queue is emptied.
-**Delete or Export From Completed Jobs** +### Delete or Export Completed Jobs -You can also delete a completed job from the Completed Jobs page or export one or more completed jobs to a CSV. Check the box next to the line(s) you are interested in and choose your desired action from the **Actions** dropdown menu. +You can delete a completed job or export one or more completed jobs to a CSV. Check the box next to the job(s) you want to act on and choose your desired action from the **Actions** dropdown menu. Discovery Status > Periodic Jobs**. +The Periodic Jobs page lets you view and manage the periodic sampling that discovery jobs perform on objects. Navigate to **Analytics > Discovery Status > Periodic Jobs**. Using the **Actions** menu options, you can quickly disable: diff --git a/docs/reports/reports/relutech-for-aws-migration.mdx b/docs/reports/reports/relutech-for-aws-migration.mdx index 9d785a016..1e58f6adc 100644 --- a/docs/reports/reports/relutech-for-aws-migration.mdx +++ b/docs/reports/reports/relutech-for-aws-migration.mdx @@ -6,17 +6,11 @@ sidebar_position: 13 import ThemedImage from '@theme/ThemedImage'; import useBaseUrl from '@docusaurus/useBaseUrl'; -Relutech and Device42 have teamed up to ease migrations to AWS. Device42's deep discovery provides Relutech with the information required to price your on-premise physical assets for purchase and leaseback, as well as third-party maintenance. Relutech then purchases those assets and provides them back to the customer on a leasing schedule. As you migrate those workloads to AWS, the workload is then rolled off of your lease, thereby reducing your on-premises costs as you increase consumption in AWS. +Relutech and Device42 have teamed up to ease migrations to AWS. Device42's deep discovery provides Relutech with the information required to price your on-premises physical assets for purchase and leaseback, as well as third-party maintenance. 
Relutech then purchases those assets and provides them back to the customer on a leasing schedule. As you migrate those workloads to AWS, the workload is then rolled off of your lease, thereby reducing your on-premises costs as you increase consumption in AWS. -## Getting Started +## Set Up Discovery Jobs -The instructions below provide a recommended approach to capture your Physical (bare metal) server inventory.  Using Device42's Operating System-level discovery, you will be able to collect inventory data relating to each server targeted on the network. Subsequent SNMP scans against a BMC (for example, iDRAC, HP iLO, and so on) will augment the data collected from OS-level scanning by identifying management MACs and IPs, as well as by discovering the parts installed on each targeted server. Please follow the steps below to create and execute these inventory jobs. - -The initial recommended approach starts by performing OS-level scans, followed by SNMP scans against a BMC, then by the Warranty sync (if applicable). - -1. **Hypervisor/\*nix/Windows:** Ensure that these sets of OS-level jobs are configured and run first. This discovery process will create the device record and capture parts that include CPU and HBA Card information. -2. **SNMP:** Once the OS-level scans have been completed, create and execute SNMP scans against the management IPs of the targeted servers.  These jobs will update the existing devices by adding additional parts, as well as the management MAC Address and IP Address. The SNMP scan captures part information such as RAM, Disk(s), and PSU. -3. **Warranty:** Depending on the vendor, these particular jobs will retrieve service contracts associated with each server from the vendor system. If the vendor system is unavailable, the warranty data can be added manually via the UI or via spreadsheet imports. Please see the referenced link below for documentation on this discovery in Device42. 
+The recommended approach to capturing your physical (bare metal) server inventory involves three types of discovery jobs, run in the following order: OS-level scans first, then SNMP scans against the BMC, then the Warranty sync. Each job type builds on the data collected by the previous one.
 
-- Then keep the **Port** and **ICMP/TCP Port Check** settings as standard access settings.
+- Keep the **Port** and **ICMP/TCP Port Check** settings as the standard defaults.
**Integrations** > **IT Asset Disposition** > **Relutech**.
+After the discovery jobs complete, navigate to **Insights+** to extract the data for Relutech. The Relutech report is located under **Advanced Reporting** > **Integrations** > **IT Asset Disposition** > **Relutech**.
 
-For more information about Relutech, click [here](https://relutech.com/request-a-quote/).
+For more information, visit the [Relutech website](https://relutech.com/request-a-quote/).

diff --git a/docs/reports/reports/run-book.mdx b/docs/reports/reports/run-book.mdx
index 3237e287f..aeaf24ef9 100644
--- a/docs/reports/reports/run-book.mdx
+++ b/docs/reports/reports/run-book.mdx
@@ -3,9 +3,10 @@ title: "Run Book"
 sidebar_position: 14
 ---
+A run book is a comprehensive export of your Device42 CMDB data, delivered as an Excel spreadsheet with data organized into sheets by object category.
 
-A run book contains most of your critical data. Create a run book under **Analytics > Reports > Generate Run Book**. The run book will take a few minutes to generate, depending on the size of your CMDB, and will be delivered as an Excel spreadsheet with data organized into sheets according to object categories.
+## Generate a Run Book
 
-## Example Excel Run Book
+Navigate to **Analytics > Reports > Generate Run Book** to create a run book. The report takes a few minutes to generate, depending on the size of your CMDB.
 ![Run book spreadsheet example](/assets/images/run-book/run-book-example.png)

diff --git a/docs/reports/reports/save-and-schedule-reports.mdx b/docs/reports/reports/save-and-schedule-reports.mdx
index e58d43a98..cd3082af7 100644
--- a/docs/reports/reports/save-and-schedule-reports.mdx
+++ b/docs/reports/reports/save-and-schedule-reports.mdx
@@ -6,11 +6,11 @@ sidebar_position: 15
 import ThemedImage from '@theme/ThemedImage';
 import useBaseUrl from '@docusaurus/useBaseUrl'
 
-You can save, schedule, and export reports to Excel or tab-delimited (`.tsv`) files.
+Standard Reports can be saved for reuse, scheduled to run automatically and emailed to recipients, or exported as Excel or tab-delimited (`.tsv`) files. This page covers how to configure mail server settings, set up recurring report schedules, and export reports.
 
 ## Mail Server Settings for Scheduling Reports
 
-Go to **Tools > Settings > Mail Server Settings** to add the mail server settings. Please note that passwords are not saved on the page. If you change any field and a password is required, you will need to re-enter the password.
+Navigate to **Tools > Settings > Mail Server Settings** to add the mail server settings. Passwords are not saved on the page, so if you change any field and a password is required, you will need to re-enter it.
Settings > Mail Server Settings** to add the mail server setting
 }} />
 
-## Understand Scheduling
+## Schedule a Report
 
 On the Standard Report add page, toggle the **Report Schedule** option on to reveal the email address field and schedule options.
@@ -38,7 +38,7 @@ Add a unique name for the report to save it. When you click **Save**, Device42 s
 }} />
 
-Also, please go to **Tools > Settings > Time Settings** to verify that your time zone settings are correct. If you need to change the time settings (other than the time zone), you will need to do this from your VM console menu options.
+Verify that your time zone settings are correct under **Tools > Settings > Time Settings**.
+To change time settings other than the time zone, use the VM console menu options.
 
-You can close the Report Progress modal window at any time, and the report will continue to run. To view the report's progress or download the completed report, head to **Analytics > Excel Reports Status**.
+You can close the Report Progress modal window at any time, and the report will continue to run. To view the report's progress or download the completed report, navigate to **Analytics > Excel Reports Status**.
Admins & Permissions > Escalation Profiles**.
- - If you select **Alert Integration**, select or add the integration used for the alert. Manage integrations from **Tools > Integrations > External Integrations**. Visit [https://www.device42.com/integrations/](https://www.device42.com/integrations/) to look up supported integrations.
+ - If you select **Alert Integration**, select or add the integration used for the alert. Manage integrations from **Tools > Integrations > External Integrations**. See [https://www.device42.com/integrations/](https://www.device42.com/integrations/) for supported integrations.
 
 As you construct your alert, Device42 displays the alert definition based on the options you select. The example shows an alert with a **Rule Type** of `Operating System` and **Trigger** of `Count > 1` that sends a notification to the `Alert Group 1` escalation profile.
@@ -202,7 +202,7 @@ sources={{
 ## Customizing Alerts with Notification Variables
 
-The new alerts engine offers powerful alerting variables you can use to configure custom alert emails that include useful data for each alert **Rule Type**.
+The alerts engine offers variables you can use to configure custom alert emails that include useful data for each alert **Rule Type**.
 You can use all the tags in both the alert message and subject line:

diff --git a/docs/reports/reports/standard-reports.mdx b/docs/reports/reports/standard-reports.mdx
index 4e19c5617..4ac90ffbf 100644
--- a/docs/reports/reports/standard-reports.mdx
+++ b/docs/reports/reports/standard-reports.mdx
@@ -6,9 +6,7 @@ sidebar_position: 15.5
 import ThemedImage from '@theme/ThemedImage'
 import useBaseUrl from '@docusaurus/useBaseUrl'
 
-The traditional "Classic Reports" menu option has been renamed **Standard Reports** under the **Analytics** menu. Standard reports are generally the quickest and easiest way to query and report on information within Device42.
-
-The standard reporting feature is the first place to go to answer difficult business questions with system data. These reports integrate this powerful tabular reporting functionality with DOQL queries, MA object tables, and the catalog of existing reports.
+Standard Reports (under **Analytics** in the main menu) are the recommended option for tabular reporting in Device42. They provide a quick way to query and report on information by integrating with DOQL queries, MA object tables, and the catalog of existing reports. For interactive dashboards and visual analytics, see [Insights+](/reports/reports/insights-plus).
 
-Use the standard reporting options to create tabular reports directly from an existing report or from a list within the UI, or use a custom DOQL to retrieve and populate the data. These reports can then be exported or scheduled to email, providing recipients with data-rich tabular exports from the system that are easy to configure and digest.
-
-## Creating a Standard Report
+## Create a Standard Report
 
 You can create a standard report in several ways:
 
-- Using DOQL
-- From a pre-defined report
-- From a list page
-- As a guided report
+- **DOQL query:** Write a custom query against the Device42 database.
+- **Pre-defined report:** Use an existing report as a tab in a multi-tab report.
+- **List page:** Generate a report from any object list page with filters applied.
+- **Guided report:** Select an object type and configure filters through the UI.
 
-### Creating a Report Using DOQL
+### Create a Report Using DOQL
 
 A DOQL statement, or Device42 Object Query Language statement, is a query that can be run against the Device42 database in the **Insights+** interface, through the application, or using the RESTful API. DOQLs can be used in Standard Reporting to generate the data to display in the results table.
@@ -47,7 +43,7 @@ To create reports using DOQL, you should be familiar with and consult the **Data
 To use DOQL to create or edit a standard report, name the report and then select **DOQL Query** from the report-type dropdown. Enter the DOQL commands in the text box. The start of a sample select statement is displayed in the DOQL text box. If an error is encountered during parsing (for example, due to invalid syntax or referencing a table or field that does not exist), an error message will show at the bottom of the text box.
 
-### Creating a Report From a Pre-Defined Report
+### Create a Report From a Pre-Defined Report
 
 Create a multi-tab standard report with one tab containing the results of a previous existing report. Select the **Pre-Defined Report** option and choose from a set of existing reports to be included as a separate tab.
@@ -60,7 +56,7 @@ Create a multi-tab standard report with one tab containing the results of a prev
 }} />
 
-### Creating a Report From a Guided Report
+### Create a Report From a Guided Report
 
 Select the **Guided Report** option and the object to create the report on. Filters can be added and set to alter the report.
@@ -80,7 +76,7 @@ Select the **Guided Report** option and the object to create the report on. Filt
 }} />
 
-### Creating a Report From a List Page
+### Create a Report From a List Page
 
 You can create a new standard report from any of the object list pages. A standard report can be emailed and scheduled.
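The DOQL option described above can be sketched with a short query. This sketch is illustrative only: `view_device_v1` and the columns `name`, `serial_no`, and `type` are assumed names, so confirm them in the **Data Dictionary** for your Device42 version before using them in a report.

```sql
-- Minimal example of a DOQL query for a standard report: list devices
-- with their serial numbers and types. The view and column names below
-- are assumptions; verify them against your instance's Data Dictionary.
SELECT name, serial_no, type
FROM view_device_v1
ORDER BY name;
```

If the query references a view or field that does not exist, the parsing error appears at the bottom of the DOQL text box, as noted above.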
 For example, you can create a standard report from the following list pages under the **Resources** menu:
@@ -97,7 +93,7 @@ You can create a new standard report from any of the object list pages. A standa
 - Software In Use
 - Secrets
 
-From the object list page, apply filters to a list, then create a standard report using the **Report** button at the top of the table.
+From the object list page, apply filters to narrow the data, then click the **Report** button at the top of the table. The filters you applied carry over to the report.
 
-## Exporting Standard Reports
+## Export Standard Reports
 
-As with the previous classic reports, you can view the report output on the screen or export the results in tab-separated values (TSV) or Excel formats.
+You can view the report output on the screen or export the results in tab-separated values (TSV) or Excel formats.
 
-## Scheduling Standard Reports
+## Schedule Standard Reports
 
 Toggle **Report Schedule** on and include a comma-separated list of recipients to send the report to.

diff --git a/docs/reports/reports/use-custom-sql-advanced-report.mdx b/docs/reports/reports/use-custom-sql-advanced-report.mdx
index 7ace38622..2000cd079 100644
--- a/docs/reports/reports/use-custom-sql-advanced-report.mdx
+++ b/docs/reports/reports/use-custom-sql-advanced-report.mdx
@@ -3,19 +3,18 @@ title: "Use Custom SQL in Advanced Report"
 sidebar_position: 17
 ---
 
-Advanced reporting in Device42, introduced mid-2018, empowers you to create and schedule extremely complex reports that the old reporting engine simply couldn't handle. You can also combine all of the built-in reports and features that advanced reporting includes out-of-the-box with your own custom SQL objects to achieve an entirely new level of reporting flexibility, including using SQL's ability to pre-process and transform objects as part of your query.
+Advanced Reporting lets you create and schedule complex reports using custom SQL objects.
+You can combine the built-in reports and features with your own custom SQL to pre-process and transform data as part of your query.
 
 ## Create an Advanced Report Using Custom SQL
 
-1. Head to **Analytics > Reports > Advanced Reporting**.
+1. Navigate to **Analytics > Reports > Advanced Reporting**.
 2. Add a new Advanced Report and give it a name.
-3. Don’t select any categories; instead, choose the **Add SQL** button at the bottom right.
-4. Give your new SQL object a name, and then proceed to enter your query into the **Custom SQL Object** form. Select a unique key field from the dropdown at the bottom of the window (a unique key is required). Click **Okay** twice:
+3. Skip the category selection and click the **Add SQL** button at the bottom right.
+4. Name your SQL object, then enter your query in the **Custom SQL Object** form. Select a unique key field from the dropdown at the bottom of the window (required). Click **Okay** twice.
 
 ![Create advanced report with custom SQL](/assets/images/create_advanced_report_custom_SQL-1.png)
 
- _*You’ll notice all of the categories are now greyed out. The SQL query you provided has taken care of this, so there’s no need to choose any here._
+ All categories are now greyed out, as the SQL query replaces category selection.
 
-5. Head to the **Layout** tab. Use the report designer to format your report data by choosing the fields you’d like to include in the report.
-6. Click **Finish** to save.
-7. You’re done! You can go back and edit as you see fit. Run your report at any time, or sort and filter it further.
+5. Go to the **Layout** tab and use the report designer to choose which fields to include in the report.
+6. Click **Finish** to save. You can edit, run, sort, or filter the report at any time.
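A custom SQL object for step 4 above might look like the following sketch. The view and column names (`view_customer_v1`, `view_device_v1`, `customer_fk`) are assumptions for illustration, so consult the **Data Dictionary** for the views available on your instance. The first selected column is unique per row, so it can serve as the required unique key field.

```sql
-- Hypothetical custom SQL object: device count per customer.
-- View and column names are assumptions; check the Data Dictionary.
SELECT c.customer_pk       AS customer_key,   -- unique per row: use as the unique key field
       c.name              AS customer_name,
       COUNT(d.device_pk)  AS device_count
FROM view_customer_v1 c
LEFT JOIN view_device_v1 d
       ON d.customer_fk = c.customer_pk
GROUP BY c.customer_pk, c.name
ORDER BY device_count DESC;
```

Aggregating in SQL like this pre-processes the data before it reaches the report designer, so the **Layout** tab only needs to arrange the resulting columns.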