The Elastic® APM K8s Attacher allows auto-installation of Elastic APM application agents (e.g., the Elastic APM Java agent) into applications running in your Kubernetes clusters. The mechanism uses a mutating webhook, which is a standard Kubernetes component, but you don’t need to know all the details to use the Attacher. Essentially, you can install the Attacher, add one annotation to any Kubernetes deployment that has an application you want monitored, and that’s it!
In this blog, we’ll walk through a full example from scratch using a Java application. Apart from the Java code and using a JVM for the application, everything else works the same for the other languages supported by the Attacher.
Prerequisites
This walkthrough assumes that the following are already installed on the system: JDK 17, Docker, Kubernetes, and Helm.
The example application
While the application (shown below) is written in Java, it could easily be implemented in any language: it is just a simple loop that every 2 seconds calls the method chain methodA->methodB->methodC->methodD, with methodC sleeping for 10 milliseconds and methodD sleeping for 200 milliseconds. The application was chosen so that the Elastic APM UI can clearly show that it is being monitored.
The Java application in full is shown here:
package test;

public class Testing implements Runnable {

    public static void main(String[] args) {
        new Thread(new Testing()).start();
    }

    public void run() {
        while (true) {
            try { Thread.sleep(2000); } catch (InterruptedException e) {}
            methodA();
        }
    }

    public void methodA() { methodB(); }

    public void methodB() { methodC(); }

    public void methodC() {
        System.out.println("methodC executed");
        try { Thread.sleep(10); } catch (InterruptedException e) {}
        methodD();
    }

    public void methodD() {
        System.out.println("methodD executed");
        try { Thread.sleep(200); } catch (InterruptedException e) {}
    }
}
We've created a Docker image containing that simple Java application, which can be pulled from the following Docker repository:
docker.elastic.co/demos/apm/k8s-webhook-test
Deploy the pod
First we need a pod config. We'll call the config file webhook-test.yaml, and the contents are pretty minimal: just pull the image and run it as a pod and container called webhook-test in the default namespace:
apiVersion: v1
kind: Pod
metadata:
  name: webhook-test
  labels:
    app: webhook-test
spec:
  containers:
  - image: docker.elastic.co/demos/apm/k8s-webhook-test
    imagePullPolicy: Always
    name: webhook-test
This can be deployed normally using kubectl:
kubectl apply -f webhook-test.yaml
The result is exactly as expected:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
webhook-test 1/1 Running 0 10s
$ kubectl logs webhook-test
methodC executed
methodD executed
methodC executed
methodD executed
So far, this is just setting up a standard Kubernetes application with no APM monitoring. Now we get to the interesting bit: adding in auto-instrumentation.
Install Elastic APM K8s Attacher
The first step is to install the Elastic APM K8s Attacher. This only needs to be done once for the cluster; once installed, it is always available. Before installation, we will define where the monitoring data will be sent. As you will see later, this can be changed at any time. For now, we'll specify our own Elastic APM server at https://myserver.somecloud:443, along with the secret token used to authorize against that server, which has the value MY_SECRET_TOKEN. (If you want to set up a quick test Elastic APM server, you can do so at https://cloud.elastic.co/.)
There are two additional environment variables set for the application that are not generally needed but will help when we see the resulting UI content toward the end of the walkthrough (when the agent is auto-installed, these two variables tell the agent what name to give this application in the UI and what method to trace). Now we just need to define the custom yaml file to hold these. On installation, the custom yaml will be merged into the yaml for the Attacher:
apm:
  secret_token: MY_SECRET_TOKEN
  namespaces:
    - default
webhookConfig:
  agents:
    java:
      environment:
        ELASTIC_APM_SERVER_URL: "https://myserver.somecloud:443"
        ELASTIC_APM_TRACE_METHODS: "test.Testing#methodB"
        ELASTIC_APM_SERVICE_NAME: "webhook-test"
That custom.yaml file is all we need to install the attacher (note we've only specified the default namespace for agent auto-installation for now; this can easily be changed, as you'll see later). Next we'll add the Elastic chart repository to Helm. This also only needs to be done once, after which all Elastic charts are available to Helm. This is the usual helm repo add command, specifically:
helm repo add elastic https://helm.elastic.co
Now the Elastic charts are available for installation (helm search repo would show you all the available charts). We’re going to use “elastic-webhook” as the name to install into, resulting in the following installation command:
helm install elastic-webhook elastic/apm-attacher --namespace=elastic-apm --create-namespace --values custom.yaml
And that's it: we now have the Elastic APM K8s Attacher installed and set to send data to the APM server defined in the custom.yaml file! (You can confirm the installation with helm list -A if needed.)
Auto-install the Java agent
The Elastic APM K8s Attacher is installed, but it doesn't auto-install the APM application agents into every pod; that could lead to problems! Instead, the Attacher deliberately auto-installs agents only into deployments that (a) are in the namespaces listed in the custom.yaml and (b) carry the specific annotation co.elastic.apm/attach.
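As an aside on (a): widening the Attacher's scope later is just a matter of extending the namespaces list in custom.yaml and upgrading the attacher. A sketch, using a hypothetical staging namespace:

```yaml
apm:
  namespaces:
    - default
    - staging   # hypothetical additional namespace to auto-instrument
```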
So for now, restarting the webhook-test pod we created above won't have any different effect, as it isn't yet set to be monitored. What we need to do is add the annotation. Specifically, we'll use the default agent configuration for the Java agent, called "java", that was installed with the Attacher (we'll see later how an agent configuration can be altered; the default configuration installs the latest agent version and leaves everything else at its defaults for that version). Adding that annotation to the webhook-test yaml gives us the new file contents (the additional config is labelled (1)):
apiVersion: v1
kind: Pod
metadata:
  name: webhook-test
  annotations:                    #(1)
    co.elastic.apm/attach: java   #(1)
  labels:
    app: webhook-test
spec:
  containers:
  - image: docker.elastic.co/demos/apm/k8s-webhook-test
    imagePullPolicy: Always
    name: webhook-test
Applying this change gives us a monitored application:
$ kubectl delete -f webhook-test.yaml
pod "webhook-test" deleted
$ kubectl apply -f webhook-test.yaml
pod/webhook-test created
$ kubectl logs webhook-test
… StartupInfo - Starting Elastic APM 1.45.0 …
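If you're curious what the webhook actually changed, you can inspect the live pod spec. A spot-check sketch (assuming your kubectl context points at this cluster) might be:

```shell
# The webhook mutates the pod on admission: the agent jar is made available
# inside the container and JAVA_TOOL_OPTIONS points the JVM at it, so both
# should now be visible in the running pod's spec, even though
# webhook-test.yaml mentions neither.
kubectl get pod webhook-test -o yaml | grep -E 'javaagent|elastic-apm-agent'
```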
And since the agent is now feeding data to our APM server, we can now see it in the UI:
Note that the agent identifies the Testing.methodB method as a trace root because of the ELASTIC_APM_TRACE_METHODS environment variable set to test.Testing#methodB in the custom.yaml; this tells the agent to specifically trace that method. The time taken by that method will be available in the UI for each invocation, but we don't yet see the sub-methods. In the next section, we'll see how easy it is to customize the Attacher, and in doing so we'll see more detail about the method chain being executed in the application.
Customizing the agents
In your systems, you'll likely have development, testing, and production environments. You'll want to pin the version of the agent to use rather than always pulling the latest, you'll want debug on for some applications or instances, and you'll want specific options set to specific values. This sounds like a lot of effort, but the attacher lets you make these kinds of changes in a very simple way. In this section, we'll add a configuration that specifies all of these changes, and we'll see just how easy it is to configure and enable.
We start at the custom.yaml file we defined above. This is the file that gets merged into the Attacher. Adding a new configuration with all the items listed in the last paragraph is easy — though first we need to decide a name for our new configuration. We’ll call it “java-interesting” here. The new custom.yaml in full is (the first part is just the same as before, the new config is simply appended):
apm:
  secret_token: MY_SECRET_TOKEN
  namespaces:
    - default
webhookConfig:
  agents:
    java:
      environment:
        ELASTIC_APM_SERVER_URL: "https://myserver.somecloud:443"
        ELASTIC_APM_TRACE_METHODS: "test.Testing#methodB"
        ELASTIC_APM_SERVICE_NAME: "webhook-test"
    java-interesting:
      image: docker.elastic.co/observability/apm-agent-java:1.52.0
      artifact: "/usr/agent/elastic-apm-agent.jar"
      environment:
        ELASTIC_APM_SERVER_URL: "https://myserver.somecloud:443"
        ELASTIC_APM_TRACE_METHODS: "test.Testing#methodB"
        ELASTIC_APM_SERVICE_NAME: "webhook-test"
        ELASTIC_APM_ENVIRONMENT: "testing"
        ELASTIC_APM_LOG_LEVEL: "debug"
        ELASTIC_APM_PROFILING_INFERRED_SPANS_ENABLED: "true"
        JAVA_TOOL_OPTIONS: "-javaagent:/elastic/apm/agent/elastic-apm-agent.jar"
Breaking the additional config down, we have:
- The name of the new config: java-interesting
- The APM Java agent image docker.elastic.co/observability/apm-agent-java, with a specific version (1.52.0) instead of latest
- The location of the agent jar within that image (artifact: "/usr/agent/elastic-apm-agent.jar")
- And then the environment variables:
  - ELASTIC_APM_SERVER_URL as before
  - ELASTIC_APM_ENVIRONMENT set to testing, useful when looking in the UI
  - ELASTIC_APM_LOG_LEVEL set to debug for more detailed agent output
  - ELASTIC_APM_PROFILING_INFERRED_SPANS_ENABLED set to true, which will give us additional interesting information about the method chain being executed in the application
  - And lastly JAVA_TOOL_OPTIONS set to "-javaagent:/elastic/apm/agent/elastic-apm-agent.jar" to start the agent; this is fundamentally how the attacher auto-attaches the Java agent
More configurations and details about configuration options are available in the Elastic APM Java agent documentation; the other language agents have similar options.
The application traced with the new configuration
And finally we just need to upgrade the attacher with the changed custom.yaml:
helm upgrade elastic-webhook elastic/apm-attacher --namespace=elastic-apm --create-namespace --values custom.yaml
This is the same command as the original install, but now using upgrade. That’s it — add config to the custom.yaml and upgrade the attacher, and it’s done! Simple.
Of course we still need to use the new config on an app. In this case, we’ll edit the existing webhook-test.yaml file, replacing java with java-interesting, so the annotation line is now:
co.elastic.apm/attach: java-interesting
Applying the new pod config and restarting the pod, you can see the logs now hold debug output:
$ kubectl delete -f webhook-test.yaml
pod "webhook-test" deleted
$ kubectl apply -f webhook-test.yaml
pod/webhook-test created
$ kubectl logs webhook-test
… StartupInfo - Starting Elastic APM 1.52.0 …
… DEBUG co.elastic.apm.agent. …
… DEBUG co.elastic.apm.agent. …
More interesting is the UI. Now that inferred spans are enabled, the full method chain is visible.
This gives the details for methodB: it takes about 211 milliseconds because it calls methodC (10ms), which in turn calls methodD (200ms). The times for methodC and methodD are inferred from profiling samples rather than traced, so they are approximate; if you needed accurate timings, you would instead add those methods to trace_methods and have them traced too.
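For instance, getting traced (rather than inferred) timings for the sub-methods would just mean widening the trace_methods list in the java-interesting environment in custom.yaml and running the helm upgrade again. A sketch of the changed line:

```yaml
ELASTIC_APM_TRACE_METHODS: "test.Testing#methodB,test.Testing#methodC,test.Testing#methodD"
```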
Note on the ECK operator
The Elastic Cloud on Kubernetes operator allows you to install and manage a number of other Elastic components on Kubernetes. At the time of publication of this blog, the Elastic APM K8s Attacher is a separate component, and there is no conflict between these management mechanisms — they apply to different components and are independent of each other.
Try it yourself!
This walkthrough is easily repeated on your system, and you can make it more useful by replacing the example application with your own and the Docker registry with the one you use.
Learn more about real-time monitoring with Kubernetes and Elastic Observability.
The release and timing of any features or functionality described in this post remain at Elastic's sole discretion. Any features or functionality not currently available may not be delivered on time or at all.