Tecton Releases Low-latency Streaming Pipelines for Machine Learning, Allowing Data Teams to Build and Deploy Real-Time Models in Hours Instead of Months


Tecton is the Only Feature Store That Orchestrates Streaming Pipelines for Machine Learning (ML) at Sub-Second Freshness While Providing Native Support for Time Aggregations and Backfills, Expanding the Use of ML to Real-Time Use Cases Such as Fraud Detection, Product Recommendations and Pricing

SAN FRANCISCO, Aug. 10, 2021 (GLOBE NEWSWIRE) -- Tecton, the enterprise feature store company, today announced that it has added low-latency streaming pipelines to its feature store so that organizations can quickly and reliably build real-time ML models.

“Enterprises are increasingly deploying real-time ML to support new customer-facing applications and to automate business processes,” said Kevin Stumpf, co-founder and CTO of Tecton. “The addition of low-latency streaming pipelines to the Tecton feature store enables our customers to build real-time ML applications faster, and with more accurate predictions.”

Real-time ML means that predictions are generated online, at low latency, using an organization’s real-time data; any updates in the data sources are reflected in real time in the model’s predictions. Real-time ML is valuable for any use case that is sensitive to the freshness of predictions, such as fraud detection, product recommendations and pricing.

For example, fraud detection models need to generate predictions based not just on what a user was doing yesterday but on what they have been doing for the past few seconds. Similarly, real-time pricing models need to incorporate the supply and demand of a product at the current time, not just from a few hours ago.

The data is the hardest part of building real-time ML models. It requires operational data pipelines that can process features at sub-second freshness and serve them at millisecond latency, all while delivering production-grade SLAs. Building these pipelines without proper tooling is very hard and can add weeks or months to the deployment time of ML projects.

With Tecton, data teams can build and deploy features from streaming data sources like Kafka or Kinesis in hours. Users only need to provide the data transformation logic using powerful Tecton primitives, and Tecton executes this logic in fully managed operational data pipelines that process and serve features in real time. Tecton also processes historical data to create training datasets and backfills that are consistent with the online data, eliminating training/serving skew. Time-window aggregations, by far the most common feature type in real-time ML applications, are supported out of the box with an optimized implementation.
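
To make this concrete, here is a minimal sketch of such a feature definition, modeled on Tecton's Python SDK. The stream source, entity and module names are illustrative assumptions, and the exact decorator parameters vary across SDK versions:

    from datetime import datetime, timedelta

    from tecton import Aggregation, stream_feature_view

    # Assumed to be defined elsewhere in the feature repository:
    # a Kafka- or Kinesis-backed stream source and a user entity.
    from fraud.data_sources import transactions_stream
    from fraud.entities import user

    @stream_feature_view(
        source=transactions_stream,
        entities=[user],
        mode="spark_sql",
        online=True,    # materialize to the online store for serving
        offline=True,   # materialize to the offline store for training
        feature_start_time=datetime(2021, 1, 1),  # backfill history from here
        aggregation_interval=timedelta(minutes=10),
        aggregations=[
            # Sliding time-window aggregates, maintained by Tecton's
            # optimized streaming implementation.
            Aggregation(column="amount", function="sum",
                        time_window=timedelta(minutes=30)),
            Aggregation(column="amount", function="mean",
                        time_window=timedelta(hours=24)),
        ],
    )
    def user_transaction_aggregates(transactions):
        # The only user-supplied logic: a row-level transformation
        # over the raw stream.
        return f"""
            SELECT user_id, amount, timestamp
            FROM {transactions}
        """

Once such a definition is applied, Tecton runs the pipeline end to end: it materializes the aggregates to the online store for low-latency serving and backfills the offline store from feature_start_time, so training and serving read consistent values.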

Data teams that are already using real-time ML can now build and deploy models faster, increase prediction accuracy and reduce the load on engineering teams. Data teams that are new to streaming can build a new class of real-time ML applications that require ultra-fresh feature values. Tecton simplifies the most difficult step in the transition to real-time ML: building and operating the streaming ML pipelines.
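
On the serving side, an application reads these ultra-fresh values at prediction time. The sketch below, again based on Tecton's Python SDK, assumes a hypothetical "prod" workspace and "fraud_detection_service" feature service; retrieval calls also vary by SDK version:

    import tecton

    # Hypothetical workspace and feature service names; both would be
    # defined in the team's feature repository.
    workspace = tecton.get_workspace("prod")
    feature_service = workspace.get_feature_service("fraud_detection_service")

    # Fetch the freshest feature values for one user at prediction time.
    features = feature_service.get_online_features(
        join_keys={"user_id": "user_268308"},
    ).to_dict()

    # `features` now holds the latest time-window aggregates, e.g. the
    # user's transaction sum over the last 30 minutes, ready to feed
    # into the fraud model.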

About Tecton
Tecton’s mission is to make world-class ML accessible to every company. Tecton enables data scientists to turn raw data into production-ready features, the predictive signals that feed ML models. The founders created the Uber Michelangelo ML platform, and the team has extensive experience building data systems for industry leaders like Google, Facebook, Airbnb and Uber. Tecton is the main contributor and committer of Feast, the leading open source feature store. Tecton is backed by Andreessen Horowitz and Sequoia and is headquartered in San Francisco with an office in New York. For more information, visit https://www.tecton.ai or follow @tectonAI.

Media and Analyst Contact:
Amber Rowland
amber@therowlandagency.com
+1-650-814-4560