June 2, 2025
May Product Update
Performance Tests
We ran performance tests in our staging environment with a battery of diverse ingestion scenarios. With a single-node deployment, we saw 250k ingestion requests per second with no effect on read performance: graphs continued to load with sub-15s data currency, and no sluggishness was visible in the UI or in query performance. We believe we can double this with the same setup, and since our platform scales horizontally, we are confident that a multi-node cluster can safely serve up to 1M ingestions per second.
Onboarding Improvements
While most of the changes below are operational, the backing tech was interesting to build. We are seeing better engagement from potential customers with -
- Same day playground access
- Assisted onboarding - for teams that need help, our solutions team sets up a meeting with the goal of delivering meaningful value through Scout Graphs and Alerts in 45 minutes.
- More scenarios covered in our docs and demos.
AWS Monitoring Coverage and Improvements
Our focus on helping potential customers on AWS get high value from Scout continues. Along with custom CloudWatch-based implementations, we onboarded a prospective customer through the ADOT (AWS Distro for OpenTelemetry) route, which we believe is the best way to instrument AWS components today.
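For reference, the ADOT route typically means running the ADOT Collector and pointing an OTLP exporter at the backend. A minimal sketch, assuming an ECS/Fargate deployment - the receiver choice and endpoint here are illustrative, not Scout's actual configuration -

```yaml
receivers:
  awsecscontainermetrics:        # ECS/Fargate task-level metrics
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317   # app-level OTLP traffic

processors:
  batch:

exporters:
  otlphttp:
    endpoint: https://ingest.example-scout.io   # hypothetical ingest endpoint

service:
  pipelines:
    metrics:
      receivers: [awsecscontainermetrics, otlp]
      processors: [batch]
      exporters: [otlphttp]
```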
Postgres Monitoring Improvements
Postgres is one of the best and most popular databases around, and we want to double down on advanced observability for Postgres clusters. In May we started on this by building the ability to ingest over 200 important time series from the engine, already available to customers today. We will continue this work in June with meaningful visualisations.
Operations Improvements
We’re building a control plane for customers to manage self-onboarding, starting with user management and access control. The beta version is available for testing, and most of our configs will soon be available here. So while GitOps continues to be our preferred and advised approach to observability configuration, we see the need for a friendly UX.
Anomaly Detection and Knowledge Graph
We continued to iterate on our ML models for anomaly detection and on the summary algorithms that build the knowledge graph. As we ingest more production data, our models keep improving and continue to bolster our original thesis - we will soon be able to build agents that help our customers drastically reduce downtime!
May 1, 2025
Accelerated Playground Access
We want to ensure our customers have quick access to a playground to evaluate Scout. Scout now has -
- Fully automated playground creation, so customers can interact with Scout and integrate their staging environment
- Self serve user account creation for customers
- Google and GitHub SSO for user login
Operations Improvements
Our customers consistently highlight how manual dashboard and alert setup increases operational complexity for them. We've addressed this head-on -
- GitOps for Scout configuration - all dashboards, alerts and notification policies can now be managed through streamlined GitOps workflows
- Scout integrates with all leading incident management tools, and now sends emails on alerts as well
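To show what this looks like in practice, an alert can live as a version-controlled file that is applied on merge. The schema below is a hypothetical sketch, not Scout's actual configuration format -

```yaml
# alerts/checkout-latency.yaml -- illustrative schema
alert:
  name: checkout-p99-latency
  query: p99(http_server_duration{service="checkout"})
  condition: "> 750ms for 5m"
  severity: critical
  notify:
    - channel: pagerduty
      policy: on-call-backend
    - channel: email
      to: sre@example.com
```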
AWS Monitoring
We've engineered custom workflows to capture and process metrics from AWS Fargate, VPC, and other components that traditionally resist direct OpenTelemetry metric publishing, expanding visibility across your entire cloud infrastructure.
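The general shape of such a workflow is to poll CloudWatch's GetMetricData API for components that only publish there, then re-emit the results as OpenTelemetry metrics. A minimal sketch of building the query payload - the namespaces and dimension values are illustrative, not our production configuration.

```python
# Build one entry of a GetMetricData MetricDataQueries payload.
def metric_query(query_id, namespace, metric, dimensions, stat="Average", period=60):
    return {
        "Id": query_id,
        "MetricStat": {
            "Metric": {
                "Namespace": namespace,
                "MetricName": metric,
                "Dimensions": [{"Name": k, "Value": v} for k, v in dimensions.items()],
            },
            "Period": period,
            "Stat": stat,
        },
    }

# Example queries for components that resist direct OTLP publishing.
queries = [
    metric_query("fargate_cpu", "ECS/ContainerInsights", "CpuUtilized",
                 {"ClusterName": "prod"}),
    metric_query("nat_bytes", "AWS/NATGateway", "BytesOutToDestination",
                 {"NatGatewayId": "nat-example"}, stat="Sum"),
]
```

The payload can then be passed to boto3, e.g. `boto3.client("cloudwatch").get_metric_data(MetricDataQueries=queries, StartTime=start, EndTime=end)`, and the returned values re-emitted on an OTLP pipeline.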
Dashboard delivery
Dashboards are integral to observability. We have built a library of functional dashboards for a variety of components, from database- and platform-specific dashboards to ones for monitoring services and entire production environments. As we add new dashboards for newly encountered systems or enhance existing ones, updates are delivered to customers instantaneously, requiring zero effort on their end.
Help & Guides
We have a docs site now! We have added guides for setting up OpenTelemetry and the Scout exporter. This will remain a work in progress as our implementations grow.
ScoutMaster
Eat your own dog food! Who watches the watchmen? ScoutMaster. We now have a central Scout that observes all Scout instances in a region and helps us respond to incidents faster. The integrations are automated, so as soon as we onboard a customer on our playground or production, ScoutMaster starts watching over them.
Anomaly Detection (WIP)
As we continue to build Scout and Monk into tools that help customers drastically reduce downtime, we spent significant time and focus on anomaly detection, a fundamental building block for autonomous downtime prevention. A beta version will be available soon, letting Scout customers set alerts that fire when metrics deviate from established patterns.
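As a rough illustration of the building block (not our actual models, which are ML-based), a rolling z-score baseline shows how deviations from an established pattern can be flagged -

```python
from collections import deque
from math import sqrt

def detect_anomalies(series, window=20, threshold=3.0):
    """Flag indices whose value deviates more than `threshold` standard
    deviations from a rolling window of recent values."""
    recent = deque(maxlen=window)
    anomalies = []
    for i, x in enumerate(series):
        if len(recent) == recent.maxlen:
            mean = sum(recent) / len(recent)
            std = sqrt(sum((v - mean) ** 2 for v in recent) / len(recent))
            if std > 0 and abs(x - mean) > threshold * std:
                anomalies.append(i)
        recent.append(x)
    return anomalies
```

An alert rule would then fire whenever a fresh point lands outside the band, instead of waiting for a fixed threshold breach.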
Knowledge Graph (WIP)
Another important part of autonomous downtime prevention is building a knowledge graph of a customer's components and infrastructure. A beta version will be available soon so customers can visualise the relationships between services, between services and components, and between services and infrastructure. We believe this graph will help us automate root cause detection and help our customers minimize MTTR.
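To make the idea concrete, a knowledge graph can be as simple as directed dependency edges, which already supports a root-cause-style traversal from a degraded service down to the infrastructure beneath it. The names and structure below are hypothetical -

```python
from collections import defaultdict

class KnowledgeGraph:
    def __init__(self):
        self.deps = defaultdict(set)  # service -> things it depends on

    def add_dependency(self, src, dst):
        self.deps[src].add(dst)

    def root_cause_candidates(self, node):
        """Everything reachable downstream of `node` via depth-first walk."""
        seen, stack = set(), [node]
        while stack:
            for dep in self.deps[stack.pop()]:
                if dep not in seen:
                    seen.add(dep)
                    stack.append(dep)
        return seen

g = KnowledgeGraph()
g.add_dependency("checkout", "payments")
g.add_dependency("payments", "postgres-primary")
g.add_dependency("payments", "redis-cache")
```

When "checkout" degrades, the traversal narrows attention to "payments" and the datastore nodes beneath it rather than the whole estate.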
Mar 18, 2025 - Data Tiering
We're excited to announce enhancements to our data storage capabilities, designed to significantly reduce your storage costs while maintaining complete data access and integrity. These improvements allow you to retain your valuable metrics data for extended periods without the traditional cost barriers.
High-Performance Compression
Implementation of LZ4 Compression Algorithm
- Advanced compression technology that reduces storage footprint by up to 95%
- Minimal CPU overhead with compression speeds exceeding 500 MB/s
- Optimized specifically for time-series metrics data patterns
- Zero impact on query performance while dramatically reducing storage costs
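To illustrate why time-series blocks compress so well (regular timestamps, slowly changing values), here is a stand-in sketch using zlib from Python's standard library, since LZ4 has no stdlib binding; the data layout is hypothetical, not our actual block format.

```python
import struct
import zlib

# Synthetic metrics block: 10k points at a 15s scrape interval with a
# repetitive value pattern -- both halves are highly redundant.
timestamps = [1700000000 + 15 * i for i in range(10_000)]
values = [42.0 + (i % 7) * 0.25 for i in range(10_000)]

raw = (struct.pack(f"{len(timestamps)}q", *timestamps)
       + struct.pack(f"{len(values)}d", *values))
packed = zlib.compress(raw, level=6)

ratio = 1 - len(packed) / len(raw)
print(f"{len(raw)} -> {len(packed)} bytes ({ratio:.0%} smaller)")
```

LZ4 trades a somewhat lower ratio than zlib for much higher throughput, which is why it suits hot-path ingestion.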
S3-Backed Tiered Storage
Intelligent Multi-Tier Data Management
- Automatic migration of older data to cost-effective storage tiers
- Configurable retention policies (default: data older than 1 month moves to S3)
- Seamless data access regardless of storage location
- Additional long-term archival to Glacier for rarely accessed historical data
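The tiering above maps naturally onto an S3 lifecycle configuration. A sketch - the prefix and the day counts are illustrative, with the 30-day transition matching the default retention policy mentioned above -

```json
{
  "Rules": [
    {
      "ID": "tier-metrics-blocks",
      "Filter": { "Prefix": "metrics/blocks/" },
      "Status": "Enabled",
      "Transitions": [
        { "Days": 30, "StorageClass": "STANDARD_IA" },
        { "Days": 365, "StorageClass": "GLACIER" }
      ]
    }
  ]
}
```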
Customer Benefits
- Eliminate Data Sampling - Maintain 100% of your metrics data without compromises
- Extended Data Retention - Keep years of historical data at an economically feasible cost
- Reduced Cloud Bills - Significant reduction in storage-related expenses
- Simplified Data Management - Automatic tiering requires no manual intervention
Availability
These features are available to all customers today.