Understanding Your Results
Learn how to interpret your container analysis results and make data-driven optimization decisions.
Dashboard Overview
The BenchTest dashboard provides a comprehensive view of your container's performance metrics. Here's what you'll find:
Performance Metrics
- Response times
- Throughput
- Error rates
- Resource utilization
Resource Usage
- CPU utilization
- Memory consumption
- Network I/O
- Disk I/O
Key Metrics Explained
CPU Usage
CPU usage indicates how much processing power your container is consuming.
- High CPU usage (>80%) may indicate performance bottlenecks
- Spikes in CPU usage can reveal inefficient operations
- Consistently low CPU usage might suggest over-provisioning
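These rules of thumb can be expressed as a simple classification. The sketch below is illustrative only: the 80% cutoff comes from the guidance above, while the 20% "low usage" cutoff is an assumed example value, not a BenchTest default.

```go
package main

import "fmt"

// classifyCPU maps an average CPU utilization percentage to one of the
// rough categories described above. The >80% cutoff mirrors this
// section's guidance; the <20% cutoff is an assumed example value --
// tune both for your own workloads.
func classifyCPU(utilizationPct float64) string {
	switch {
	case utilizationPct > 80:
		return "possible bottleneck"
	case utilizationPct < 20:
		return "possibly over-provisioned"
	default:
		return "normal"
	}
}

func main() {
	for _, u := range []float64{95, 50, 10} {
		fmt.Printf("%.0f%% -> %s\n", u, classifyCPU(u))
	}
}
```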
Memory Utilization
Memory utilization shows how much RAM your container is using.
- Memory leaks show up as steadily increasing usage that never plateaus
- High memory usage can cause performance degradation
- Swap usage indicates memory pressure
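The "steadily increasing usage" signature of a leak is easy to reproduce. The sketch below shows a classic pattern in Go: per-request state appended to a long-lived global slice and never released, so heap usage climbs with every request, exactly the trend to look for in the memory graph.

```go
package main

import (
	"fmt"
	"runtime"
)

// A classic leak pattern: a long-lived global slice that only grows.
// Each "request" retains a buffer that is never removed.
var retained [][]byte

func handleRequest() {
	buf := make([]byte, 1<<20)       // 1 MiB of per-request state
	retained = append(retained, buf) // bug: never released
}

// heapMiB reports the current live heap size in MiB.
func heapMiB() uint64 {
	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	return m.HeapAlloc / (1 << 20)
}

func main() {
	before := heapMiB()
	for i := 0; i < 50; i++ {
		handleRequest()
	}
	fmt.Printf("heap grew from ~%d MiB to ~%d MiB\n", before, heapMiB())
}
```

Because the buffers stay referenced, the garbage collector cannot reclaim them; in a real service this shows up as a memory graph that climbs until the container hits its limit.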
System Resources
System resources include network and disk I/O metrics.
- Network I/O shows data transfer rates
- Disk I/O indicates storage performance
- Resource contention can impact overall performance
Performance Indicators
Understanding these key performance indicators will help you identify and resolve issues:
Response Time
The time taken to respond to requests. High response times may indicate:
- Resource constraints
- Inefficient code paths
- Network latency
Throughput
The number of requests processed per second. Low throughput might be caused by:
- CPU bottlenecks
- Memory limitations
- I/O constraints
Error Rates
The percentage of failed requests. High error rates could indicate:
- Resource exhaustion
- Application bugs
- Misconfiguration
Identifying Resource Bottlenecks
Learn to identify and address common resource bottlenecks. For a comprehensive guide on Kubernetes resource optimization, see our detailed blog post.
CPU Bottlenecks
- High CPU utilization (>80%)
- Slow response times
- Increased error rates under load
Memory Bottlenecks
- High memory usage
- Frequent garbage collection
- Out-of-memory (OOM) errors
I/O Bottlenecks
- High disk I/O wait times
- Network latency issues
- Slow data transfer rates
Customizing Graph Views
Both the Resource Utilization and Connection Diagnostics tabs include time interval controls in the top left corner that allow you to adjust the granularity of your graphs.
Time Interval Selector
You can choose from the following intervals to either smooth out the graphs for high-level trends or add more detail for granular analysis:
- 1s interval - Maximum detail, shows every second
- 5s interval - High detail, good for short tests
- 15s interval - Balanced view, default for most analyses
- 1m interval - Smoothed view, good for longer tests
- 5m interval - High-level trends
- 15m interval - Very smoothed, for extended tests
- 1h interval - Maximum smoothing for long-running tests
Tip: Use shorter intervals (1s-15s) when you need to pinpoint specific spikes or events. Use longer intervals (1m-1h) when analyzing overall trends and patterns over longer periods.
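The effect of a longer interval is essentially bucket averaging: each graph point becomes the mean of the samples inside its bucket, so short spikes that dominate a 1-second bucket nearly vanish in a 1-minute average. The sketch below illustrates the idea; it is not BenchTest's actual downsampling implementation.

```go
package main

import "fmt"

// downsample averages consecutive groups of `bucket` samples,
// approximating what selecting a longer graph interval does.
func downsample(samples []float64, bucket int) []float64 {
	var out []float64
	for i := 0; i < len(samples); i += bucket {
		end := i + bucket
		if end > len(samples) {
			end = len(samples)
		}
		sum := 0.0
		for _, v := range samples[i:end] {
			sum += v
		}
		out = append(out, sum/float64(end-i))
	}
	return out
}

func main() {
	// A mostly idle CPU trace (in %) with one short spike.
	trace := []float64{5, 5, 5, 95, 5, 5, 5, 5}
	fmt.Println(downsample(trace, 1)) // spike clearly visible: [5 5 5 95 5 5 5 5]
	fmt.Println(downsample(trace, 4)) // spike averaged away: [27.5 5]
}
```

This is why a spike that looks alarming at the 1s interval can disappear entirely at 5m: the same data, just averaged over a wider window.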
Flame Graph Requirements
Flame graphs provide powerful CPU profiling insights, but they require specific configuration in your containerized application.
Container Build Requirements
Important: Your container must be built with debug symbols enabled. Without them, flame graphs cannot resolve meaningful function names, which makes the visualization far less useful for identifying performance bottlenecks.
Do not strip symbols during the build process. For Go applications, avoid flags like -ldflags="-s -w" in builds intended for profiling, as these flags strip debug information and symbol tables.
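In practice this usually means keeping the default build flags in your profiling image. A minimal Dockerfile sketch for a Go application (the image tags, paths, and binary name are illustrative, not required by BenchTest):

```dockerfile
FROM golang:1.22 AS build
WORKDIR /src
COPY . .

# Profiling build: default flags keep debug symbols, so flame graphs
# can show real function names.
RUN go build -o /app ./...

# Do NOT do this in a profiling build -- it strips symbol tables
# and DWARF debug info:
# RUN go build -ldflags="-s -w" -o /app ./...

FROM debian:bookworm-slim
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```

If image size matters, keep a separate stripped build for production and a symbol-enabled variant for profiling runs.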
Organizing and Comparing Tests
BenchTest provides several features to help you organize your test runs and compare performance across different configurations.
Adding Names to Tests
In the test list, you can add names to your tests to keep them organized and easily identifiable. This is especially useful when running multiple test variations.
- Click the "add name" button in the test list to add a name, or click the pencil icon to edit an existing one
- Names must be single words (no spaces)
- Maximum 25 characters
- Can contain letters, numbers, hyphens, and underscores
- Each name must be unique within a container
Example names: production, optimized-v2, before-fix
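The naming rules above (a single word of up to 25 characters, limited to letters, numbers, hyphens, and underscores) fit in one regular expression. This validator is a sketch of the documented rules, not BenchTest's actual implementation; uniqueness within a container can only be enforced server-side.

```go
package main

import (
	"fmt"
	"regexp"
)

// validName captures the documented constraints: 1-25 characters,
// letters, numbers, hyphens, and underscores only (so no spaces,
// which also enforces the single-word rule).
var validName = regexp.MustCompile(`^[A-Za-z0-9_-]{1,25}$`)

func main() {
	names := []string{
		"optimized-v2",                      // valid
		"before fix",                        // invalid: contains a space
		"this-name-is-way-too-long-to-pass", // invalid: over 25 characters
	}
	for _, name := range names {
		fmt.Printf("%q valid: %v\n", name, validName.MatchString(name))
	}
}
```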
Comparing Multiple Tests
You can compare up to 3 tests side-by-side to analyze performance differences between different configurations, code versions, or optimization attempts.
- Navigate to the comparison page from your container dashboard
- Select up to 3 tests to compare
- View side-by-side metrics including CPU, memory, latency, and throughput
- Use the same time interval controls to adjust graph granularity
- Each test is color-coded for easy identification
Use cases: Compare performance before and after optimization, test different resource limits, or analyze the impact of code changes.
