[AMCC-11] dogstatsd: additional metric control mechanism between the sampler and the serializer #37692
Conversation
…he flush to the serializer. This is to filter out metric names generated at the aggregation stage, in this case the histogram aggregates.
Go Package Import Differences
Baseline: cbd7198

Regression Detector Results
Metrics dashboard
Baseline: cbd7198
Optimization Goals: ✅ No significant changes detected
perf | experiment | goal | Δ mean % | Δ mean % CI | trials | links |
---|---|---|---|---|---|---|
➖ | tcp_syslog_to_blackhole | ingress throughput | +0.51 | [+0.44, +0.57] | 1 | Logs |
➖ | quality_gate_logs | % cpu utilization | +0.40 | [-2.34, +3.14] | 1 | Logs bounds checks dashboard |
➖ | otlp_ingest_metrics | memory utilization | +0.32 | [+0.16, +0.49] | 1 | Logs |
➖ | quality_gate_idle_all_features | memory utilization | +0.32 | [+0.24, +0.39] | 1 | Logs bounds checks dashboard |
➖ | uds_dogstatsd_20mb_12k_contexts_20_senders | memory utilization | +0.27 | [+0.23, +0.31] | 1 | Logs |
➖ | file_to_blackhole_0ms_latency_http2 | egress throughput | +0.12 | [-0.52, +0.75] | 1 | Logs |
➖ | ddot_logs | memory utilization | +0.11 | [-0.03, +0.26] | 1 | Logs |
➖ | ddot_metrics | memory utilization | +0.09 | [-0.02, +0.21] | 1 | Logs |
➖ | file_to_blackhole_0ms_latency | egress throughput | +0.08 | [-0.50, +0.66] | 1 | Logs |
➖ | file_to_blackhole_1000ms_latency_linear_load | egress throughput | +0.05 | [-0.18, +0.28] | 1 | Logs |
➖ | file_to_blackhole_0ms_latency_http1 | egress throughput | +0.03 | [-0.57, +0.64] | 1 | Logs |
➖ | file_to_blackhole_500ms_latency | egress throughput | +0.02 | [-0.58, +0.62] | 1 | Logs |
➖ | file_to_blackhole_300ms_latency | egress throughput | +0.00 | [-0.62, +0.62] | 1 | Logs |
➖ | uds_dogstatsd_to_api | ingress throughput | +0.00 | [-0.29, +0.30] | 1 | Logs |
➖ | tcp_dd_logs_filter_exclude | ingress throughput | +0.00 | [-0.01, +0.02] | 1 | Logs |
➖ | file_to_blackhole_1000ms_latency | egress throughput | -0.02 | [-0.64, +0.61] | 1 | Logs |
➖ | file_to_blackhole_100ms_latency | egress throughput | -0.10 | [-0.68, +0.48] | 1 | Logs |
➖ | quality_gate_idle | memory utilization | -0.28 | [-0.34, -0.22] | 1 | Logs bounds checks dashboard |
➖ | docker_containers_memory | memory utilization | -0.74 | [-0.81, -0.66] | 1 | Logs |
➖ | uds_dogstatsd_to_api_cpu | % cpu utilization | -0.84 | [-1.74, +0.05] | 1 | Logs |
➖ | otlp_ingest_logs | memory utilization | -0.87 | [-1.00, -0.74] | 1 | Logs |
➖ | docker_containers_cpu | % cpu utilization | -2.79 | [-5.84, +0.25] | 1 | Logs |
➖ | file_tree | memory utilization | -4.58 | [-4.78, -4.39] | 1 | Logs |
Bounds Checks: ✅ Passed
perf | experiment | bounds_check_name | replicates_passed | links |
---|---|---|---|---|
✅ | docker_containers_cpu | simple_check_run | 10/10 | |
✅ | docker_containers_memory | memory_usage | 10/10 | |
✅ | docker_containers_memory | simple_check_run | 10/10 | |
✅ | file_to_blackhole_0ms_latency | lost_bytes | 10/10 | |
✅ | file_to_blackhole_0ms_latency | memory_usage | 10/10 | |
✅ | file_to_blackhole_0ms_latency_http1 | lost_bytes | 10/10 | |
✅ | file_to_blackhole_0ms_latency_http1 | memory_usage | 10/10 | |
✅ | file_to_blackhole_0ms_latency_http2 | lost_bytes | 10/10 | |
✅ | file_to_blackhole_0ms_latency_http2 | memory_usage | 10/10 | |
✅ | file_to_blackhole_1000ms_latency | memory_usage | 10/10 | |
✅ | file_to_blackhole_1000ms_latency_linear_load | memory_usage | 10/10 | |
✅ | file_to_blackhole_100ms_latency | lost_bytes | 10/10 | |
✅ | file_to_blackhole_100ms_latency | memory_usage | 10/10 | |
✅ | file_to_blackhole_300ms_latency | lost_bytes | 10/10 | |
✅ | file_to_blackhole_300ms_latency | memory_usage | 10/10 | |
✅ | file_to_blackhole_500ms_latency | lost_bytes | 10/10 | |
✅ | file_to_blackhole_500ms_latency | memory_usage | 10/10 | |
✅ | quality_gate_idle | intake_connections | 10/10 | bounds checks dashboard |
✅ | quality_gate_idle | memory_usage | 10/10 | bounds checks dashboard |
✅ | quality_gate_idle_all_features | intake_connections | 10/10 | bounds checks dashboard |
✅ | quality_gate_idle_all_features | memory_usage | 10/10 | bounds checks dashboard |
✅ | quality_gate_logs | intake_connections | 10/10 | bounds checks dashboard |
✅ | quality_gate_logs | lost_bytes | 10/10 | bounds checks dashboard |
✅ | quality_gate_logs | memory_usage | 10/10 | bounds checks dashboard |
Explanation
Confidence level: 90.00%
Effect size tolerance: |Δ mean %| ≥ 5.00%
Performance changes are noted in the perf column of each table:
- ✅ = significantly better comparison variant performance
- ❌ = significantly worse comparison variant performance
- ➖ = no significant change in performance
A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".
For each experiment, we decide whether a change in performance is a "regression" -- a change worth investigating further -- if all of the following criteria are true:
- Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.
- Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that if our statistical model is accurate, there is at least a 90.00% chance there is a difference in performance between baseline and comparison variants.
- Its configuration does not mark it "erratic".
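For example, the file_tree experiment above has Δ mean % = -4.58 with CI [-4.78, -4.39]: the interval excludes zero, but |Δ mean %| is below the 5.00% effect size tolerance, so it is not flagged as a regression.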
CI Pass/Fail Decision
✅ Passed. All Quality Gates passed.
- quality_gate_idle_all_features, bounds check intake_connections: 10/10 replicas passed. Gate passed.
- quality_gate_idle_all_features, bounds check memory_usage: 10/10 replicas passed. Gate passed.
- quality_gate_idle, bounds check intake_connections: 10/10 replicas passed. Gate passed.
- quality_gate_idle, bounds check memory_usage: 10/10 replicas passed. Gate passed.
- quality_gate_logs, bounds check lost_bytes: 10/10 replicas passed. Gate passed.
- quality_gate_logs, bounds check intake_connections: 10/10 replicas passed. Gate passed.
- quality_gate_logs, bounds check memory_usage: 10/10 replicas passed. Gate passed.
Static quality checks
✅ Please find below the results from static quality gates.
Successful checks
LGTM for Agent Runtimes
Left some cosmetic comments, but otherwise LGTM.
func (s *server) createHistogramsBlocklist(metricNames []string) []string {
	aggrs := s.config.GetStringSlice("histogram_aggregates")

	percentiles := metrics.ParsePercentiles(s.config.GetStringSlice("histogram_percentiles"))
Would it make sense to move this function to the metrics package, so that all handling of percentiles is concentrated in one package rather than spread across multiple packages?
Its use case is IMO closer to statsd than to the percentiles/histograms themselves, don't you think?
@@ -124,7 +125,7 @@ func (s *TimeSampler) newSketchSeries(ck ckey.ContextKey, points []metrics.Sketc
 	return ss
 }
 
-func (s *TimeSampler) flushSeries(cutoffTime int64, series metrics.SerieSink) {
+func (s *TimeSampler) flushSeries(cutoffTime int64, series metrics.SerieSink, blocklist *utilstrings.Blocklist) {
Rather than plumbing it through the entire flush call chain, would it make sense to store the blocklist directly in the time sampler, rather than the worker? That way we will need to refer to it only in two places.
The TimeSampler isn't waiting on a select or anything; each one is actually managed by the TimeSamplerWorker. I'm not sure it would decrease the size of the chain, since it would be on the worker to pass the list to the sampler: AFAIU (only checked quickly) the worker would still need the plumbing to pass the list to the sampler, it would just be stored somewhere else. Please let me know if you think I should look more into this; I can address it in a separate PR.
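To make the plumbing being discussed concrete, here is a minimal, hypothetical sketch with simplified stand-in types (not the agent's actual code): the worker owns the blocklist and hands it to its sampler's flushSeries on every flush, mirroring the flushSeries(cutoffTime, series, blocklist) signature added in this PR.

// Hypothetical stand-in types; names and fields are simplified for illustration.
package main

import "fmt"

// blocklist stands in for utilstrings.Blocklist.
type blocklist struct{ blocked map[string]struct{} }

func (b *blocklist) test(name string) bool {
	_, ok := b.blocked[name]
	return ok
}

// timeSampler stands in for the aggregator's TimeSampler.
type timeSampler struct {
	pending []string // metric names sampled since the last flush
}

// flushSeries drops blocklisted names before they reach the serializer sink.
func (s *timeSampler) flushSeries(cutoffTime int64, sink func(string), bl *blocklist) {
	for _, name := range s.pending {
		if bl != nil && bl.test(name) {
			continue // filtered between the sampler and the serializer
		}
		sink(name)
	}
	s.pending = nil
}

// timeSamplerWorker manages one sampler and, in this sketch, owns the blocklist,
// passing it down on every flush (the alternative discussed is storing it on the sampler).
type timeSamplerWorker struct {
	sampler   *timeSampler
	blocklist *blocklist
}

func (w *timeSamplerWorker) flush(now int64, sink func(string)) {
	w.sampler.flushSeries(now, sink, w.blocklist)
}

func main() {
	w := &timeSamplerWorker{
		sampler:   &timeSampler{pending: []string{"my.histogram.max", "my.gauge"}},
		blocklist: &blocklist{blocked: map[string]struct{}{"my.histogram.max": {}}},
	}
	w.flush(0, func(name string) { fmt.Println("serialize:", name) })
	// prints: serialize: my.gauge
}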
Thanks! LGTM, I left a few comments inline but none of them are blockers (feel free to either address them or discard them with an explanation)
func (s *server) createHistogramsBlocklist(metricNames []string) []string {
	aggrs := s.config.GetStringSlice("histogram_aggregates")

	percentiles := metrics.ParsePercentiles(s.config.GetStringSlice("histogram_percentiles"))
Instead of GetStringSlice("histogram_percentiles"), pkg/metrics/histogram uses structure.UnmarshalKey(config, "histogram_percentiles", &c). I'm not sure why, but I think it's worth digging into and making both config accesses consistent.
/merge
What does this PR do?
Adds an additional processing step when the time samplers flush data to the serializer for serialization.
This extra step is necessary because the aggregation in the time sampler may generate new metrics that would not have been visible to the metric control implementation running in the listening part of DogStatsD.
This implementation creates a sublist to avoid doing a complete second pass with the configured list. The sublist contains only the metric names generated from histograms; a heuristic builds it from a static list of histogram aggregates and percentile suffixes.
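As a rough illustration of that heuristic (a minimal, hypothetical sketch, not the PR's actual createHistogramsBlocklist), the following expands each blocked histogram name into the aggregate and percentile series names the time sampler would emit, assuming the <name>.<aggregate> and <name>.<NN>percentile naming convention and illustrative values for histogram_aggregates and histogram_percentiles:

// Hypothetical sketch: expand blocked histogram names into the series names
// generated at the aggregation stage, so only those need a second pass.
package main

import "fmt"

func expandHistogramNames(metricNames, aggregates []string, percentiles []int) []string {
	out := make([]string, 0, len(metricNames)*(len(aggregates)+len(percentiles)))
	for _, name := range metricNames {
		for _, aggr := range aggregates {
			out = append(out, name+"."+aggr) // e.g. my.histogram.max
		}
		for _, p := range percentiles {
			out = append(out, fmt.Sprintf("%s.%dpercentile", name, p)) // e.g. my.histogram.95percentile
		}
	}
	return out
}

func main() {
	blocked := expandHistogramNames(
		[]string{"http.request.duration"},         // names blocked at the listener (illustrative)
		[]string{"max", "median", "avg", "count"}, // illustrative histogram_aggregates
		[]int{95},                                 // illustrative histogram_percentiles
	)
	fmt.Println(blocked)
}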
Describe how you validated your changes
Manually tested E2E on an org with the feature enabled; it will also be extensively dogfooded.