<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/">
    <channel>
        <title>Elastic Observability Labs - Articles by Francesco Gualazzi</title>
        <link>https://www.elastic.co/observability-labs</link>
        <description>Trusted observability news &amp; research from the team at Elastic.</description>
        <lastBuildDate>Thu, 05 Mar 2026 23:55:48 GMT</lastBuildDate>
        <docs>https://validator.w3.org/feed/docs/rss2.html</docs>
        <generator>https://github.com/jpmonette/feed</generator>
        <image>
            <title>Elastic Observability Labs - Articles by Francesco Gualazzi</title>
            <url>https://www.elastic.co/observability-labs/assets/observability-labs-thumbnail.png</url>
            <link>https://www.elastic.co/observability-labs</link>
        </image>
        <copyright>© 2026. Elasticsearch B.V. All Rights Reserved</copyright>
        <item>
            <title><![CDATA[Universal Profiling: Detecting CO2 and energy efficiency]]></title>
            <link>https://www.elastic.co/observability-labs/blog/universal-profiling-detecting-co2-energy-efficiency</link>
            <guid isPermaLink="false">universal-profiling-detecting-co2-energy-efficiency</guid>
            <pubDate>Mon, 05 Feb 2024 00:00:00 GMT</pubDate>
            <description><![CDATA[Universal Profiling introduces the possibility to capture environmental impact. In this post, we compare Python and Go implementations and showcase the substantial CO2 savings achieved through code optimization.]]></description>
            <content:encoded><![CDATA[<p>A while ago, we posted a <a href="https://www.elastic.co/blog/importing-chess-games-elasticsearch-universal-profiling">blog</a> detailing how we quickly imported over 4 billion chess games using Python and optimized the code by leveraging our Universal Profiling<sup>TM</sup>. That work was based on the Elastic Stack running version 8.9. We are now on <a href="https://www.elastic.co/blog/whats-new-elastic-8-12-0">8.12</a>, and it is time for a second part that shows how easy it is to observe compiled languages and how Elastic®’s Universal Profiling can help you determine the benefit of a rewrite, from both a cost and an environmental-friendliness angle.</p>
<h2>Why efficiency matters — for you and the environment</h2>
<p>Data centers are estimated to account for ~3% of global electricity consumption, and their usage is expected to double by 2030.* The cost of a digital service closely tracks its computing efficiency, so being more efficient is a win-win: less energy consumed and a smaller bill.</p>
<p>In the same scenario, companies want the ability to scale to more users while spending less for each user and are effectively looking into methods of reducing their energy consumption.</p>
<p>In this spirit, <a href="https://www.elastic.co/observability/universal-profiling">Universal Profiling</a> comes equipped with data and visualizations to help determine where efficiency improvement efforts are worth the most.</p>
<p><a href="https://www.elastic.co/blog/continuous-profiling-efficient-cost-effective-applications">Energy efficiency</a> measures how much energy a digital service consumes to produce an output for a given input. It can be measured in multiple ways, and we at Elastic Observability chose CO<sub>2</sub> emissions and annualized CO<sub>2</sub> emissions (more details on them later).</p>
<p>Let’s take the example of an e-commerce website: the energy efficiency of the “search inventory” process could be calculated as the average CPU time needed to serve a user request. Once the baseline for this value is determined, changes to the software delivering the search process may result in more or less CPU time consumed for the same feature, resulting in less or more efficient code.</p>
<h2>How to set up and configure wattage and CO2</h2>
<p>You can find a “Settings” button in the top-right corner of the Universal Profiling views. From there, you can customize the coefficient used to calculate CO<sub>2</sub> emissions tied to profiling data.</p>
<p>The values set here will be used only when the profiles gathered from host agents are not already associated with publicly known data certified by cloud providers. For example, suppose you have a hybrid cloud deployment with a portion of your workload running on-premise and a portion running in GCP. In that case, the values set here will only be used to calculate the CO<sub>2</sub> emissions for the on-premise machines; we already use all the coefficients as declared by GCP to calculate the emissions of those machines.</p>
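<p>To make the coefficient arithmetic concrete, here is a back-of-the-envelope sketch in Go. The per-core wattage, PUE, and carbon-intensity values below are illustrative assumptions for this example, not the defaults Universal Profiling ships with:</p>

```go
package main

import "fmt"

// Illustrative coefficients only; these are assumptions for the example,
// not the values used by Universal Profiling.
const (
	wattsPerCore = 7.0   // assumed average power draw of one busy core, in watts
	pue          = 1.7   // assumed datacenter power usage effectiveness
	kgCO2PerKWh  = 0.379 // assumed grid carbon intensity, kg CO2 per kWh
)

// co2Kg estimates the emissions caused by consuming the given amount of
// CPU time, expressed in core-seconds.
func co2Kg(cpuCoreSeconds float64) float64 {
	kwh := cpuCoreSeconds * wattsPerCore * pue / 3600.0 / 1000.0
	return kwh * kgCO2PerKWh
}

func main() {
	// One core kept fully busy for a year:
	secondsPerYear := 365.0 * 24 * 3600
	fmt.Printf("annualized: %.2f kg CO2\n", co2Kg(secondsPerYear))
}
```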
<h2>Python vs. Go</h2>
<p>Our first <a href="https://www.elastic.co/blog/importing-chess-games-elasticsearch-universal-profiling">blog post</a> implemented a solution in Python to read PGN chess games, a text-based representation of chess games. It showed how Universal Profiling can be leveraged to identify slow functions and help you rewrite your code to be faster and more efficient. At the end of it, we were happy with the Python version; it is still used today to grab the monthly updates from the <a href="https://database.lichess.org/">Lichess database</a> and ingest them into Elasticsearch®. I always wanted a reason to work more with Go, so we rewrote the Python solution in Go, leveraging goroutines and channels to send data through message passing. You can see more about it in our <a href="https://github.com/philippkahr/blogs/tree/main/universal-profiling">GitHub repository</a>.</p>
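<p>As a rough illustration of that producer/consumer shape (a simplified sketch, not the actual code from the repository), one goroutine can stream raw games over a channel while another consumes and parses them:</p>

```go
package main

import (
	"fmt"
	"strings"
)

// parseGame stands in for the real PGN parsing; the actual code lives in
// the linked GitHub repository.
func parseGame(line string) string { return strings.ToUpper(line) }

// pipeline feeds raw lines through a channel to a consumer, mirroring the
// producer/consumer shape of the Go rewrite.
func pipeline(lines []string) []string {
	games := make(chan string)
	go func() {
		// Producer: send each raw game, then signal completion.
		defer close(games)
		for _, l := range lines {
			games <- l
		}
	}()
	var parsed []string
	for g := range games { // consumer: drain until the channel closes
		parsed = append(parsed, parseGame(g))
	}
	return parsed
}

func main() {
	fmt.Println(pipeline([]string{"e4 e5", "d4 d5"}))
}
```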
<p>Rewriting in Go also means switching from an interpreted language to a compiled one. As with everything in IT, this has benefits as well as disadvantages. One disadvantage is that we must ship debug symbols for the compiled binary; when we build it, we can use the symbtool program to push them. Without debug symbols, the flame graph is uninterpretable: frames are labeled with hexadecimal addresses rather than source code annotations.</p>
<p>First, make sure that your executable includes debug symbols. Go builds with debug symbols by default. You can check this by running file yourbinary; the important part is that the binary is not stripped.</p>
<pre><code class="language-bash">file lichess
lichess: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, Go BuildID=gufIkqA61WnCh8haeW-2/lfn3ne3U_y8MGoFD4AvT/QJEykzbacbYEmEQpXH6U/MqVbk-402n1k3B8yPB6I, with debug_info, not stripped
</code></pre>
<p>Now we need to push the symbols using symbtool. You must create an Elasticsearch API key as the authentication method; in the Universal Profiler UI in Kibana®, an <strong>Add Data</strong> button in the top-right corner will tell you exactly what to do. The command looks like this, where the -e flag takes the path of your executable file. In our case, this is lichess as above.</p>
<pre><code class="language-bash">symbtool push-symbols executable -t &quot;ApiKey&quot; -u &quot;elasticsearch-url&quot; -e &quot;lichess&quot;
</code></pre>
<p>Now that debug symbols are available inside the cluster, we can run both implementations with the same file simultaneously and see what Universal Profiler can tell us about it.</p>
<h2>Identifying CO2 and energy efficiency savings</h2>
<p>Python is scheduled on the CPU more frequently; it runs more often on the hardware and thus contributes more to the machines’ resource usage.</p>
<p>We use the differential flamegraph to compare the two implementations and automatically calculate the difference. Filter on process.thread.name: “python3.11” in the baseline, and filter for lichess in the comparison.</p>
<p><img src="https://www.elastic.co/observability-labs/assets/images/universal-profiling-detecting-co2-energy-efficiency/1-elastic-blog-uni-profiling.png" alt="1 - universal profiling" /></p>
<p>Looking at the impact of annualized CO<sub>2</sub> emissions, we see a decrease from 65.32kg of CO<sub>2</sub> for the Python solution to 16.78kg for Go. That is a saving of 48.54kg of CO<sub>2</sub> over a year.</p>
<p>If we take a step back, we want to figure out why Python produces so many more emissions. In the flamegraph view, we filter down to just the Python process and click on the first frame, called python3.11. A little popup tells us that it caused 32.95kg of emissions. That is nearly 50% of all emissions, caused by the runtime alone; our program itself caused the other ~32kg of CO<sub>2</sub>. By cutting out the Python interpreter with Go, we immediately eliminated roughly 33kg of annual emissions.</p>
<p>We can pin that popup with a right click and then click <strong>Show more information</strong>.</p>
<p><img src="https://www.elastic.co/observability-labs/assets/images/universal-profiling-detecting-co2-energy-efficiency/2-elastic-blog-uni-profiling.png" alt="2 - universal profiling graphs blue-orange" /></p>
<p>The <strong>Show more information</strong> link displays detailed information about the frame, like sample count, total CPU, core seconds, and dollar costs. We won’t go into more detail in this blog.</p>
<p><img src="https://www.elastic.co/observability-labs/assets/images/universal-profiling-detecting-co2-energy-efficiency/3-elastic-blog-uni-profiling.png" alt="3 impact estimates" /></p>
<h2>Reduce your carbon footprint today with Universal Profiling</h2>
<p>This blog post demonstrates that rewriting your code base can reduce your carbon footprint immensely. Using Universal Profiling, you can run a quick PoC to showcase how much carbon could be saved.</p>
<p>Learn how you can <a href="https://www.elastic.co/guide/en/observability/current/profiling-get-started.html">get started</a> with Elastic Universal Profiling today.</p>
<blockquote>
<ul>
<li>The cluster storing the data consists of three nodes, each with 64GB of RAM and 32 CPU cores, running on GCP via Elastic Cloud.</li>
<li>The machine sending the data is a GCP e2-standard-32 (128GB of RAM, 32 CPU cores) with a 500GB balanced disk to read the games from.</li>
<li>The file used for the games is this <a href="https://database.lichess.org/standard/lichess_db_standard_rated_2023-12.pgn.zst">Lichess database</a> containing 96,909,211 games. The extracted file size is 211GB.</li>
</ul>
</blockquote>
<p><strong>Source:</strong></p>
<p>*<a href="https://media.ccc.de/v/camp2023-57070-energy_consumption_of_data_centers">https://media.ccc.de/v/camp2023-57070-energy_consumption_of_data_centers</a></p>
<p><em>The release and timing of any features or functionality described in this post remain at Elastic's sole discretion. Any features or functionality not currently available may not be delivered on time or at all.</em></p>
]]></content:encoded>
            <category>observability-labs</category>
            <enclosure url="https://www.elastic.co/observability-labs/assets/images/universal-profiling-detecting-co2-energy-efficiency/141935_-_Blog_header_image-_Op1_V1.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Unlocking whole-system visibility with Elastic Universal Profiling™]]></title>
            <link>https://www.elastic.co/observability-labs/blog/whole-system-visibility-elastic-universal-profiling</link>
            <guid isPermaLink="false">whole-system-visibility-elastic-universal-profiling</guid>
            <pubDate>Mon, 25 Sep 2023 00:00:00 GMT</pubDate>
            <description><![CDATA[Visual profiling data can be overwhelming. This blog post aims to demystify continuous profiling and guide you through its unique visualizations. We will equip you with the knowledge to derive quick, actionable insights from Universal Profiling™.]]></description>
            <content:encoded><![CDATA[<h2>Identify, optimize, measure, repeat!</h2>
<p>SREs and developers who want to maintain robust, efficient systems and achieve optimal code performance need effective tools to measure and improve code performance. Profilers are invaluable for these tasks, as they can help you boost your app's throughput, ensure consistent system reliability, and gain a deeper understanding of your code's behavior at runtime. However, traditional profilers can be cumbersome to use, as they often require code recompilation and are limited to specific languages. Additionally, they can also have a high overhead that negatively affects performance and makes them less suitable for quick, real-time debugging in production environments.</p>
<p>To address the limitations of traditional profilers, Elastic<sup>®</sup> recently <a href="https://www.elastic.co/blog/continuous-profiling-is-generally-available">announced the general availability of Elastic Universal Profiling</a>, a <a href="https://www.elastic.co/observability/universal-profiling">continuous profiling</a> product that is refreshingly straightforward to use, eliminating the need for instrumentation, recompilations, or restarts. Moreover, Elastic Universal Profiling does not require on-host debug symbols and is language-agnostic, allowing you to profile any process running on your machines — from your application's code to third-party libraries and even kernel functions.</p>
<p>However, even the most advanced tools require a certain level of expertise to interpret the data effectively. The wealth of visual profiling data — flamegraphs, stacktraces, or functions — can initially seem overwhelming. This blog post aims to demystify <a href="https://www.elastic.co/observability/universal-profiling">continuous profiling</a> and guide you through its unique visualizations. We will equip you with the knowledge to derive quick, actionable insights from Universal Profiling.</p>
<p>Let’s begin.</p>
<h2>Stacktraces: The cornerstone for profiling</h2>
<h3>It all begins with a stacktrace — a snapshot capturing the cascade of function calls.</h3>
<p>A stacktrace is a snapshot of the call stack of an application at a specific point in time. It captures the sequence of function calls that the program has made up to that point. In this way, a stacktrace serves as a historical record of the call stack, allowing you to trace back the steps that led to a particular state in your application.</p>
<p>Further, stacktraces are the foundational data structure that profilers rely on to determine what an application is executing at any given moment. This is particularly useful when, for instance, your infrastructure monitoring indicates that your application servers are consuming 95% of CPU resources. While utilities such as 'top -H' can show the top processes that are consuming CPU, they lack the granularity needed to identify the specific lines of code (in the top process) responsible for the high usage.</p>
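<p>To see what such a snapshot looks like, here is a small Go example using the standard library's runtime.Stack. The function names are hypothetical, but the captured trace records exactly this kind of call-chain history:</p>

```go
package main

import (
	"fmt"
	"runtime"
	"strings"
)

// processTransaction captures the call stack at this point in time,
// analogous to the snapshots a sampling profiler records.
func processTransaction() string {
	buf := make([]byte, 4096)
	n := runtime.Stack(buf, false)
	return string(buf[:n])
}

func authenticateUser() string { return processTransaction() }

// containsFrame reports whether a function name appears in the trace.
func containsFrame(trace, fn string) bool { return strings.Contains(trace, fn) }

func main() {
	trace := authenticateUser()
	// The trace lists the frames that led here:
	// main -> authenticateUser -> processTransaction.
	fmt.Print(trace)
}
```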
<p>In the case of Elastic Universal Profiling, <a href="https://www.elastic.co/blog/ebpf-observability-security-workload-profiling">eBPF is used</a> to perform sampling of every process that is keeping a CPU core busy. Unlike most instrumentation profilers that focus solely on your application code, Elastic Universal Profiling provides whole-system visibility — it profiles not just your code, but also code you don't own, including third-party libraries and even kernel operations.</p>
<p>The diagram below shows how the Universal Profiling agent works at a very high level. Step 5 indicates the ingestion of the stacktraces into the profiling collector, a new part of the Elastic Stack.</p>
<p><em><strong>Just <a href="https://www.elastic.co/guide/en/observability/current/profiling-get-started.html">deploy the profiling host agent</a> and receive profiling data (in Kibana<sup>®</sup>) a few minutes later. <a href="https://www.elastic.co/guide/en/observability/current/profiling-get-started.html">Get started now</a>.</strong></em></p>
<p><img src="https://www.elastic.co/observability-labs/assets/images/whole-system-visibility-elastic-universal-profiling/elastic-blog-1-flowchart-linux.png" alt="High-level depiction of how the profiling agent works" /></p>
<ol>
<li>
<p>Unwinder eBPF programs (bytecode) are sent to the kernel.</p>
</li>
<li>
<p>The kernel verifies that the BPF program is safe. If accepted, the program is attached to the probes and executed when the event occurs.</p>
</li>
<li>
<p>The eBPF programs pass the collected data to userspace via maps.</p>
</li>
<li>
<p>The agent reads the collected data from maps. The data transferred from the agent to the maps are process-specific and interpreter-specific meta-information that help the eBPF unwinder programs perform unwinding.</p>
</li>
<li>
<p>Stacktraces, metrics, and metadata are pushed to the Elastic Stack.</p>
</li>
<li>
<p>Visualize data as flamegraphs, stacktraces, and functions via Kibana.</p>
</li>
</ol>
<p>While stacktraces are the key ingredient for most profiling tools, interpreting them can be tricky. Let's take a look at a simple example to make things a bit easier. The table below shows a group of stacktraces from a Java application and assigns each a percentage to indicate its share of CPU time consumption.</p>
<p><strong>Table 1: Grouped Stacktraces with CPU Time Percentage</strong></p>
<table>
<thead>
<tr>
<th>Percentage</th>
<th>Function Calls</th>
</tr>
</thead>
<tbody>
<tr>
<td>60%</td>
<td>startApp -&gt; authenticateUser -&gt; processTransaction</td>
</tr>
<tr>
<td>20%</td>
<td>startApp -&gt; loadAccountDetails -&gt; fetchRecentTransactions</td>
</tr>
<tr>
<td>10%</td>
<td>startApp -&gt; authenticateUser -&gt; processTransaction -&gt; verifyFunds</td>
</tr>
<tr>
<td>2%</td>
<td>startApp -&gt; authenticateUser -&gt; processTransaction -&gt;libjvm.so</td>
</tr>
<tr>
<td>1%</td>
<td>startApp -&gt; authenticateUser -&gt; processTransaction -&gt;libjvm.so -&gt;vmlinux: asm_common_interrupt -&gt;vmlinux: asm_sysvec_apic_timer_interrupt</td>
</tr>
</tbody>
</table>
<p>The percentages above represent the relative frequency of each specific stacktrace compared to the total number of stacktraces collected over the observation period, not actual CPU usage percentages. Also, the libjvm.so and kernel frames (vmlinux:*) in the example are commonly observed with whole-system profilers like Elastic Universal Profiling.</p>
<p>From the table, we can see that <strong>60%</strong> of the time is spent in the sequence startApp -&gt; authenticateUser -&gt; processTransaction. An additional <strong>10%</strong> of the processing time is allocated to verifyFunds, a function invoked by processTransaction. Given these observations, it becomes evident that optimization initiatives would yield the most impact if centered on the processTransaction function, as it is one of the most expensive functions. However, real-world stacktraces can be far more intricate than this example. So how do we make sense of them quickly? The answer to this problem resulted in the creation of flamegraphs.</p>
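<p>To make the grouping concrete, here is a minimal Go sketch (illustrative only, not the profiler's implementation) that aggregates raw stacktrace samples into relative percentages like those in Table 1:</p>

```go
package main

import "fmt"

// groupSamples groups identical call chains and converts their counts into
// relative percentages. Each sample is a captured call chain joined into a
// single string key.
func groupSamples(samples []string) map[string]float64 {
	counts := make(map[string]int)
	for _, s := range samples {
		counts[s]++
	}
	pct := make(map[string]float64)
	for trace, n := range counts {
		pct[trace] = 100 * float64(n) / float64(len(samples))
	}
	return pct
}

func main() {
	samples := []string{
		"startApp > authenticateUser > processTransaction",
		"startApp > authenticateUser > processTransaction",
		"startApp > loadAccountDetails > fetchRecentTransactions",
		"startApp > authenticateUser > processTransaction",
	}
	for trace, p := range groupSamples(samples) {
		fmt.Printf("%5.1f%%  %s\n", p, trace)
	}
}
```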
<h2>Flamegraphs: A visualization of stacktraces</h2>
<p>While the above example may appear straightforward, it scarcely reflects the complexities encountered when aggregating multiple stacktraces across a fleet of machines on a continuous basis. The depth of the stack traces and the numerous branching paths can make it increasingly difficult to pinpoint where code is consuming resources. This is where flamegraphs, a concept popularized by <a href="https://www.brendangregg.com/flamegraphs.html">Brendan Gregg</a>, come into play.</p>
<p>A flamegraph is a visual interpretation of stacktraces, designed to quickly and accurately identify the functions that are consuming the most resources. Each function is represented by a rectangle, where the width of the rectangle represents the amount of time spent in the function, and the number of stacked rectangles represents the stack depth. The stack depth is the number of functions that were called to reach the current function.</p>
<p>Elastic Universal Profiling uses icicle graphs, an inverted variant of the standard flamegraph. In an icicle graph, the root function is at the top, and child functions are shown below their parents, making it easier to see the hierarchy of functions and how they relate to each other.</p>
<p>In most flamegraphs, the y-axis represents stack depth, but there is no standardization for the x-axis. Some profiling tools use the x-axis to indicate the passage of time; in these instances, the graph is more accurately termed a flame chart. Others sort the x-axis alphabetically. Universal Profiling sorts functions on the x-axis based on relative CPU percentage utilization, starting with the function that consumes the most CPU time on the left, as shown in the example icicle graph below.</p>
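<p>As a small illustration of that ordering rule (a sketch, not Universal Profiling's actual layout code), sibling frames can be sorted by relative CPU share so the heaviest consumer lands leftmost:</p>

```go
package main

import (
	"fmt"
	"sort"
)

type frame struct {
	name string
	cpu  float64 // relative CPU share, in percent
}

// orderForXAxis sorts sibling frames the way the icicle graph lays them out:
// the heaviest CPU consumer comes first (leftmost).
func orderForXAxis(frames []frame) {
	sort.Slice(frames, func(i, j int) bool { return frames[i].cpu > frames[j].cpu })
}

func main() {
	fs := []frame{{"loadAccountDetails", 20}, {"authenticateUser", 70}, {"idle", 10}}
	orderForXAxis(fs)
	for _, f := range fs {
		fmt.Printf("%s %.0f%%\n", f.name, f.cpu)
	}
}
```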
<p><img src="https://www.elastic.co/observability-labs/assets/images/whole-system-visibility-elastic-universal-profiling/elastic-blog-2-cpu-time.png" alt="Example icicle graph: The percentage represents relative CPU time, not the real CPU usage time. " /></p>
<h2>Debugging and optimizing performance issues: Stacktraces, TopN functions, flamegraphs</h2>
<p>SREs and SWEs can use Universal Profiling for troubleshooting, debugging, and performance optimization. It builds stacktraces that go from the kernel, through userspace native code, all the way into code running in higher level runtimes, enabling you to <strong>identify performance regressions</strong> , <strong>reduce wasteful computations</strong> , and <strong>debug complex issues faster</strong>.</p>
<p>To this end, Universal Profiling offers three main visualizations: Stacktraces, TopN Functions, and flamegraphs.</p>
<h3>Stacktrace view</h3>
<p>The stacktraces view shows grouped stacktrace graphs by threads, hosts, Kubernetes deployments, and containers. It can be used to detect unexpected CPU spikes across threads and drill down into a smaller time range to investigate further with a flamegraph. Refer to the <a href="https://www.elastic.co/guide/en/observability/current/universal-profiling.html#profiling-stacktraces-intro">documentation</a> for details.</p>
<p><img src="https://www.elastic.co/observability-labs/assets/images/whole-system-visibility-elastic-universal-profiling/elastic-blog-3-wave-patterns.png" alt="Notice the wave pattern in the stacktrace view, enabling you to drill down into a CPU spike " /></p>
<h3>TopN functions view</h3>
<p>Universal Profiling's topN functions view shows the most frequently sampled functions, broken down by CPU time, annualized CO<sub>2</sub>, and annualized cost estimates. You can use this view to identify the most expensive functions across your entire fleet, and then apply filters to focus on specific components for a more detailed analysis. Clicking on a function name will redirect you to the flamegraph, enabling you to examine the call hierarchy.</p>
<p><img src="https://www.elastic.co/observability-labs/assets/images/whole-system-visibility-elastic-universal-profiling/elastic-blog-4-topN-functions-page.png" alt="TopN functions page" /></p>
<h3>Flamegraphs view</h3>
<p>The flamegraph page is where you will most likely spend the most time, especially when debugging and optimizing. We recommend that you use the guide below to identify performance bottlenecks and optimization opportunities with flamegraphs. The three key elements to look for are <strong>width</strong>, <strong>hierarchy</strong>, and <strong>height</strong>.</p>
<p><img src="https://www.elastic.co/observability-labs/assets/images/whole-system-visibility-elastic-universal-profiling/elastic-blog-5-icivle-flamegraph.png" alt="Icicle flamegraph: We use the colors to determine different types of code (e.g., native, interpreted, kernel)." /></p>
<p><strong>Width matters:</strong> In icicle graphs, wider rectangles signify functions taking up more CPU time. Always read the graph from left to right and note the widest rectangles, as these are the prime hot spots.</p>
<p><strong>Hierarchy matters:</strong> Navigate the graph's stack to understand function relationships. This vertical examination will help you identify whether one or multiple functions are responsible for performance bottlenecks. This could also uncover opportunities for code improvements, such as swapping an inefficient library or avoiding unnecessary I/O operations.</p>
<p><strong>Height matters:</strong> Elevated or tall stacks in the graph usually point to deep call hierarchies. These can be an indicator of complex and less efficient code structures that may require attention.</p>
<p>Also, when navigating a flamegraph, you may want to search for specific function names to validate your assumptions about their presence. In the Universal Profiling flamegraphs view, there is a “Search” bar at the bottom-left corner. You can input a regex, and matches will be highlighted in the flamegraph; by clicking the left and right arrows next to the Search bar, you can move across the occurrences and spot the callers and callees of the matched function.</p>
<p>In summary,</p>
<ul>
<li><strong>Scan</strong> horizontally from left to right, focusing on width for CPU-intensive functions.</li>
<li><strong>Examine</strong> vertically to examine the stack and spot bottlenecks.</li>
<li><strong>Look</strong> for <strong>towering stacks</strong> to identify potential complexities in the code.</li>
</ul>
<p>To recap, use topN functions to generate optimization hypotheses and validate them with stacktraces and/or flamegraphs. Use stacktraces to monitor CPU utilization trends and to delve into the finer details. Use flamegraphs to quickly debug and optimize your code, using width, hierarchy, and height as guides.</p>
<p><em><strong>Identify. Optimize. Measure. Repeat!</strong></em></p>
<h2>Measure the impact of your change</h2>
<h3>For the very first time in history, developers can now measure the performance (gained or lost), cloud cost, and carbon footprint impact of every deployed change.</h3>
<p>Once you have identified a performance issue and applied fixes or optimizations to your code, it is essential to measure the impact of your changes. The differential topN functions and differential flamegraph pages are invaluable for this, as they can help you identify regressions and measure your change impact not only in terms of performance but also in terms of carbon emissions and cost savings.</p>
<p><img src="https://www.elastic.co/observability-labs/assets/images/whole-system-visibility-elastic-universal-profiling/elastic-blog-6-uni-profiling.png" alt="A differential function view, showing the performance, CO2, and cost impact of a change" /></p>
<p>The Diff column indicates a change in the function’s rank.</p>
<p>You may need to use tags or other metadata, such as container and deployment name, in combination with time ranges to differentiate between the optimized and non-optimized changes.</p>
<p><img src="https://www.elastic.co/observability-labs/assets/images/whole-system-visibility-elastic-universal-profiling/elastic-blog-7-differential-flamegraph.png" alt="A differential flamegraph showing regression in A/B testing" /></p>
<h2>Universal Profiling: The key to optimizing application resources</h2>
<p>Computational efficiency is no longer just a nice-to-have, but a must-have from both a financial and environmental sustainability perspective. Elastic Universal Profiling provides unprecedented visibility into the runtime behavior of all your applications, so you can identify and optimize the most resource-intensive areas of your code. The result is not merely better-performing software but also reduced resource consumption, lower cloud costs, and a reduction in carbon footprint. Optimizing your code with Universal Profiling is not only the right thing to do for your business, it’s the right thing to do for our world.</p>
<p><a href="https://www.elastic.co/guide/en/observability/current/profiling-get-started.html">Get started</a> with Elastic Universal Profiling today.</p>
<p><em>The release and timing of any features or functionality described in this post remain at Elastic's sole discretion. Any features or functionality not currently available may not be delivered on time or at all.</em></p>
]]></content:encoded>
            <category>observability-labs</category>
            <enclosure url="https://www.elastic.co/observability-labs/assets/images/whole-system-visibility-elastic-universal-profiling/universal-profiling-blog-720x420.jpg" length="0" type="image/jpeg"/>
        </item>
    </channel>
</rss>