Monitor Microsoft Azure

In this tutorial, you’ll learn how to monitor your Microsoft Azure deployments using Elastic Observability: Logs and Metrics.

What you’ll learn

You’ll learn how to:

  • Set up an Azure service principal.
  • Ingest metrics using the Metricbeat Azure module and view those metrics in Kibana.
  • Export Azure activity logs through Event Hubs.
  • Ingest logs using the Filebeat Azure module and view those logs in Kibana.

Before you begin

Create a deployment using our hosted Elasticsearch Service on Elastic Cloud. The deployment includes an Elasticsearch cluster for storing and searching your data, and Kibana for visualizing and managing your data. For more information, see Spin up the Elastic Stack.

Step 1: Create an Azure service principal and set permissions

The Azure Monitor REST API allows you to get insights into your Azure resources using different operations. To access the Azure Monitor REST API you need to use the Azure Resource Manager authentication model. Therefore, all requests must be authenticated with Azure Active Directory (Azure AD). You can create the service principal using the Azure portal or Azure PowerShell. Then, you need to grant access permission, which is detailed here. This tutorial uses the Azure portal.
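If you prefer the command line, the same setup can be sketched with the Azure CLI. This is a hedged alternative, not the path this tutorial follows; it assumes `az` is installed and you are logged in with `az login`, and it combines the app registration and the Reader role assignment covered in the next sections:

```shell
# Sketch only (assumes `az login` has been run and a default subscription is set).
# Creates an app registration plus service principal named monitor-azure and
# assigns it the Reader role on the given subscription in one step.
az ad sp create-for-rbac \
  --name monitor-azure \
  --role Reader \
  --scopes "/subscriptions/<subscription_id>"
# The output includes appId (client_id), password (client_secret), and
# tenant (tenant_id) -- the values Metricbeat needs later.
```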

Create an Azure service principal

  1. Go to the Azure Management Portal. Search and click on Azure Active Directory.

  2. Click on App registrations in the navigation pane of the selected Active Directory and then click on New registration.

  3. Type the name of your application (this tutorial uses monitor-azure) and click on Register (leave all the other options with the default value).

    Copy the Application (client) ID and save it for future reference. This ID is required to configure Metricbeat to connect to your Azure account.

  4. Click on Certificates & secrets. Then, click on New client secret to create a new security key.

  5. Type a key description and select a key duration in the Expires list. Click on Add to create a client secret. The next page displays the key value in the Value field. Copy the secret and save it (along with your Client ID) for future reference.

    This is your only chance to copy this value. You can’t retrieve the key value after you leave the page.

Grant access permission for your service principal

After creating the Azure service principal you need to grant it the correct permission. You need Reader permission to configure Metricbeat to monitor your services.

  1. In the Azure portal, search and click on Subscriptions.

  2. In the Subscriptions page, click on your subscription.
  3. Click on Access control (IAM) in the subscription navigation pane.
  4. Click on Add and select Add role assignment.
  5. Select the Reader role.
  6. In the Select field, type the name of the configured service principal (monitor-azure).

  7. Select the application and click on Save to grant the service principal access to your subscription.

Step 2: Install and configure Metricbeat

This tutorial assumes the Elastic cluster is already running. Make sure you have your cloud ID and your credentials on hand.

To monitor Microsoft Azure using the Elastic Stack, you need two main components: an Elastic deployment to store and analyze the data and an agent to collect and ship the data.

Two agents are used to monitor Azure: Metricbeat collects metrics, and Filebeat collects logs. You can run the agents on any machine. This tutorial uses a small Azure instance, B2s (2 vCPUs, 4 GB memory), with an Ubuntu distribution.

Install Metricbeat

Download and install Metricbeat.

curl -L -O https://artifacts.elastic.co/downloads/beats/metricbeat/metricbeat-7.12.1-amd64.deb
sudo dpkg -i metricbeat-7.12.1-amd64.deb

Set up assets

Metricbeat comes with predefined assets for parsing, indexing, and visualizing your data. Run the following command to load these assets. It may take a few minutes.

./metricbeat setup -e -E 'cloud.id=YOUR_DEPLOYMENT_CLOUD_ID' -E 'cloud.auth=elastic:YOUR_SUPER_SECRET_PASS' 

Substitute your Cloud ID and an administrator’s username:password in this command. To find your Cloud ID, click on your deployment.

Setting up Metricbeat is an admin-level task that requires extra privileges. As a best practice, use an administrator role to set up, and a more restrictive role for event publishing (which you will do next).

Configure Metricbeat output

Next, you are going to configure Metricbeat output to Elasticsearch Service.

  1. Use the Metricbeat keystore to store secure settings. Store the Cloud ID in the keystore.

    ./metricbeat keystore create
    echo -n "<Your Deployment Cloud ID>" | ./metricbeat keystore add CLOUD_ID --stdin
  2. To store metrics in Elasticsearch with minimal permissions, create an API key to send data from Metricbeat to Elasticsearch Service. Log into Kibana (you can do so from the Cloud Console without typing in any credentials) and select Management → Dev Tools. Send the following request:

    POST /_security/api_key
    {
      "name": "metricbeat-monitor",
      "role_descriptors": {
        "metricbeat_writer": {
          "cluster": ["monitor", "read_ilm"],
          "index": [
            {
              "names": ["metricbeat-*"],
              "privileges": ["view_index_metadata", "create_doc"]
            }
          ]
        }
      }
    }
  3. The response contains an api_key and an id field, which can be stored in the Metricbeat keystore in the following format: id:api_key.

    echo -n "IhrJJHMB4JmIUAPLuM35:1GbfxhkMT8COBB4JWY3pvQ" | ./metricbeat keystore add ES_API_KEY --stdin

    Make sure you specify the -n parameter; otherwise, echo appends a newline to your API key, which leads to painful debugging sessions.

  4. To see if both settings have been stored, run the following command:

    ./metricbeat keystore list
  5. To configure Metricbeat to output to Elasticsearch Service, edit the metricbeat.yml configuration file. Add the following lines to the end of the file.

    cloud.id: ${CLOUD_ID}
    output.elasticsearch:
      api_key: ${ES_API_KEY}
  6. Finally, test if the configuration is working. If it is not working, verify if you used the right credentials and add them again.

    ./metricbeat test output
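The `-n` flag in the keystore step above is easy to overlook. A quick way to see exactly what it prevents (using the placeholder key from this tutorial, not a real credential):

```shell
# id:api_key exactly as Beats expects it (placeholder values from this tutorial).
value="IhrJJHMB4JmIUAPLuM35:1GbfxhkMT8COBB4JWY3pvQ"
echo "$value" | wc -c      # 44 bytes: a trailing newline sneaks into the key
echo -n "$value" | wc -c   # 43 bytes: the exact key and nothing more
```

The keystore stores the value verbatim, so that one extra byte becomes part of the API key and silently breaks authentication.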

Now that the output is working, you are going to set up the input (Azure).

Step 3: Configure Metricbeat Azure module

To collect metrics from Microsoft Azure, use the Metricbeat Azure module. This module periodically fetches monitoring metrics from Microsoft Azure using the Azure Monitor REST API.

This module may generate extra Azure charges for metric queries. See the additional notes about metrics and costs for more details.
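If you have the Azure CLI available, a quick way to look up two of the identifiers needed below is a sketch like this (it assumes `az login` has been run):

```shell
# Prints the tenant and subscription IDs of the currently active account.
az account show --query "{tenantId: tenantId, subscriptionId: id}" -o json
```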

  1. The azure module configuration needs three IDs and one secret. Use the commands below to store each of them in the keystore.

    echo -n "<client_id>" | ./metricbeat keystore add AZURE_CLIENT_ID --stdin
    echo -n "<client_secret>" | ./metricbeat keystore add AZURE_CLIENT_SECRET --stdin
    echo -n "<tenant_id>" | ./metricbeat keystore add AZURE_TENANT_ID --stdin
    echo -n "<subscription_id>" | ./metricbeat keystore add AZURE_SUBSCRIPTION_ID --stdin

    You can find the tenant_id in the main Azure Active Directory page. You can find the subscription_id in the main Subscriptions page.

  2. Enable the Azure module.

    ./metricbeat modules enable azure
  3. Edit the modules.d/azure.yml file to collect compute_vm metrics.

    - module: azure
      metricsets:
      - compute_vm                                   # predefined metricset that collects metrics from the virtual machines
      enabled: true
      period: 300s                                   # collects metrics every 5 minutes; use 300s or a multiple of 300s
      client_id: '${AZURE_CLIENT_ID:""}'             # the Application (client) ID copied earlier
      client_secret: '${AZURE_CLIENT_SECRET:""}'     # the client/application secret copied earlier
      tenant_id: '${AZURE_TENANT_ID:""}'             # the unique identifier of the Azure Active Directory instance
      subscription_id: '${AZURE_SUBSCRIPTION_ID:""}' # the unique identifier of the Azure subscription
      refresh_list_interval: 600s

  4. To check if Metricbeat can collect data, test the input by running the following command:

    ./metricbeat test modules azure

    If the setup is correct, Metricbeat prints compute_vm metrics to the terminal.

    If it returns a timeout error, try again. The test modules timeout is short.

  5. Edit the modules.d/azure.yml file to also collect billing metrics.

    - module: azure
      metricsets:
      - billing        # predefined metricset that collects usage data and forecast information for the configured subscription
      enabled: true
      period: 24h      # collects metrics every 24 hours; use 24h or a multiple of 24h
      client_id: '${AZURE_CLIENT_ID:""}'
      client_secret: '${AZURE_CLIENT_SECRET:""}'
      tenant_id: '${AZURE_TENANT_ID:""}'
      subscription_id: '${AZURE_SUBSCRIPTION_ID:""}'
      refresh_list_interval: 600s

  6. When the input and output are ready, start Metricbeat to collect the data.

    ./metricbeat -e
  7. Finally, log into Kibana and open the [Metricbeat Azure] Compute VMs Overview dashboard.

    The VM Available Memory visualization might be empty if you only have Linux VMs as discussed here.

    You can also check the [Metricbeat Azure] Billing overview dashboard, even though it might take longer to collect data.

Step 4: Install and configure Filebeat

Now that Metricbeat is up and running, configure Filebeat to collect Azure logs.

Install Filebeat

Download and install Filebeat.

curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.12.1-amd64.deb
sudo dpkg -i filebeat-7.12.1-amd64.deb

Set up assets

Filebeat comes with predefined assets for parsing, indexing, and visualizing your data. Run the following command to load these assets. It may take a few minutes.

./filebeat setup -e -E 'cloud.id=YOUR_DEPLOYMENT_CLOUD_ID' -E 'cloud.auth=elastic:YOUR_SUPER_SECRET_PASS' 

Substitute your Cloud ID and an administrator’s username:password in this command. To find your Cloud ID, click on your deployment.

Setting up Filebeat is an admin-level task that requires extra privileges. As a best practice, use an administrator role to set up and a more restrictive role for event publishing (which you will do next).

Configure Filebeat output

Next, you are going to configure Filebeat output to Elasticsearch Service.

  1. Use the Filebeat keystore to store secure settings. Store the Cloud ID in the keystore.

    ./filebeat keystore create
    echo -n "<Your Deployment Cloud ID>" | ./filebeat keystore add CLOUD_ID --stdin
  2. To store logs in Elasticsearch with minimal permissions, create an API key to send data from Filebeat to Elasticsearch Service. Log into Kibana (you can do so from the Cloud Console without typing in any credentials) and select Management → Dev Tools. Send the following request:

    POST /_security/api_key
    {
      "name": "filebeat-monitor-azure",
      "role_descriptors": {
        "filebeat_writer": {
          "cluster": [
            "monitor",
            "read_ilm",
            "cluster:admin/ingest/pipeline/get", 
            "cluster:admin/ingest/pipeline/put" 
          ],
          "index": [
            {
              "names": ["filebeat-*"],
              "privileges": ["view_index_metadata", "create_doc"]
            }
          ]
        }
      }
    }

    Filebeat needs extra cluster permissions to publish logs, which differs from the Metricbeat configuration. You can find more details here.

  3. The response contains an api_key and an id field, which can be stored in the Filebeat keystore in the following format: id:api_key.

    echo -n "IhrJJHMB4JmIUAPLuM35:1GbfxhkMT8COBB4JWY3pvQ" | ./filebeat keystore add ES_API_KEY --stdin

    Make sure you specify the -n parameter; otherwise, echo appends a newline to your API key, which leads to painful debugging sessions.

  4. To see if both settings have been stored, run the following command:

    ./filebeat keystore list
  5. To configure Filebeat to output to Elasticsearch Service, edit the filebeat.yml configuration file. Add the following lines to the end of the file.

    cloud.id: ${CLOUD_ID}
    output.elasticsearch:
      api_key: ${ES_API_KEY}
  6. Finally, test if the configuration is working. If it is not working, verify that you used the right credentials and, if necessary, add them again.

    ./filebeat test output

Now that the output is working, you are going to set up the input (Azure).

Step 5: Create an event hub and configure diagnostics settings

To collect logs from Microsoft Azure, use the Filebeat Azure module. This module periodically fetches logs that have been forwarded to an Azure event hub. There are four available filesets: activitylogs, platformlogs, signinlogs, and auditlogs. This tutorial covers the activitylogs fileset.

Create Event Hubs namespace

There are several ways to create an event hub, such as the Azure portal or Azure PowerShell. This tutorial uses PowerShell.

  1. Open Azure PowerShell.
  2. Run the following command to create an Event Hubs namespace:

    New-AzEventHubNamespace -ResourceGroupName monitor-azure-resource-group `
      -NamespaceName monitor-azure-namespace `
      -Location northeurope

    You can only stream logs to event hubs in the same region. Make sure to choose an appropriate region.

  3. Create an event hub:

    New-AzEventHub -ResourceGroupName monitor-azure-resource-group `
      -NamespaceName monitor-azure-namespace `
      -EventHubName monitor-azure-event-hub `
      -MessageRetentionInDays 3

    Adjust the -MessageRetentionInDays value to your needs. This tutorial uses the default value suggested in the Azure quick start.
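The same two steps can be sketched with the Azure CLI (treat this as a starting point rather than a definitive recipe; flag names can vary between CLI versions, and the retention setting is left at its default here):

```shell
# Create the namespace, then the event hub inside it (Azure CLI sketch).
az eventhubs namespace create \
  --resource-group monitor-azure-resource-group \
  --name monitor-azure-namespace \
  --location northeurope
az eventhubs eventhub create \
  --resource-group monitor-azure-resource-group \
  --namespace-name monitor-azure-namespace \
  --name monitor-azure-event-hub
```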

Configure activity logs to stream to the event hub

  1. Navigate to Activity log.

  2. Click on Diagnostics settings.

  3. Click on Add diagnostic setting.
  4. Configure it and click on Save.

    Select the log categories that you want, select Stream to an event hub, and make sure to configure the correct namespace and event hub.

Step 6: Configure Filebeat Azure module

There are two configuration values that you need to get from the Azure portal: connection_string and storage_account_key.

Add the connection_string to the keystore

  1. Navigate to Event Hubs.

  2. Click on the created namespace (monitor-azure-namespace).
  3. Click on Shared access policies and then on the RootManageSharedAccessKey policy.
  4. Copy the Connection string–primary key value and add it to the keystore.

    echo -n "<Your Connection string-primary key>" | ./filebeat keystore add AZURE_CONNECTION_STRING --stdin

Add storage_account_key to the keystore

A Blob Storage account is required to store, retrieve, and update the offset (state) of the event hub messages. This allows the Filebeat azure module to resume processing messages from where it stopped after a restart.

  1. Create a storage account. (You can also use an existing one.)

    New-AzStorageAccount -ResourceGroupName monitor-azure-resource-group `
      -Name monitorazurestorage `
      -Location northeurope `
      -SkuName Standard_ZRS `
      -Kind StorageV2
  2. Navigate to Storage accounts.
  3. Click on the created storage account and then on Access keys.
  4. Click on Show keys and copy one of the keys.
  5. Add the storage_account_key to the keystore.

    echo -n "<Your Storage account key>" | ./filebeat keystore add AZURE_STORAGE_KEY --stdin
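If you prefer the CLI for this step, a hedged equivalent that fetches the first account key and feeds it straight into the keystore (avoiding copy/paste and trailing newlines; assumes `az login` has been run):

```shell
# Reads the first access key of the storage account and pipes it directly
# into the Filebeat keystore.
az storage account keys list \
  --resource-group monitor-azure-resource-group \
  --account-name monitorazurestorage \
  --query "[0].value" -o tsv | tr -d '\n' | ./filebeat keystore add AZURE_STORAGE_KEY --stdin
```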

Configure Filebeat Azure module

  1. Enable the Filebeat Azure module.

    ./filebeat modules enable azure
  2. Edit the modules.d/azure.yml file with the following configuration.

    - module: azure
      # All logs
      activitylogs:
        enabled: true
        var:
          eventhub: "monitor-azure-event-hub"             # collect logs from the monitor-azure-event-hub event hub
          consumer_group: "$Default"                      # use the default consumer group
          connection_string: "${AZURE_CONNECTION_STRING}" # connect using the configured connection string
          storage_account: "monitorazurestorage"          # persist the state of the event hub messages here
          storage_account_key: "${AZURE_STORAGE_KEY}"     # authenticate to the storage account with this key

    Do not remove the other filesets from the configuration file; keep them in the file with enabled: false.

  3. Start Filebeat to collect the logs.

    ./filebeat -e
  4. Finally, log into Kibana and open the [Filebeat Azure] Cloud Overview dashboard.