Engineering

e6data’s Architectural Bets: Our Head of Engineering’s Conversation with Pete on the Zero Prime Podcast

e6data's founding engineer and Head of Engineering, Sudarshan Lakshminarasimhan, unpacks the internals of the compute engine on the Zero Prime podcast.

April 23, 2025 / e6data Team

Discussing e6data’s Architectural Bets on the Zero Prime Podcast


Our founding engineer and Head of Engineering, Sudarshan, recently went on the Zero Prime podcast and unpacked the internals of our compute engine.

“We don’t treat object stores like cold storage. And we don’t think your planner should be the bottleneck in a high-QPS workload.”

Sudarshan, Founding Engineer, e6data

It’s a story of breaking away from the driver-executor model, rethinking scheduling for the object-store era, and why atomic, per-component scaling actually matters.

The Real Problem (2025 edition)

Everyone says “compute and storage are decoupled.” Not really.

  • You scale the cluster because some ad-hoc queries spike.
  • That 10% of your workload defines your baseline cluster size.
  • Your scheduler doesn’t react in real time, so you over-provision just in case.
  • You get 10% more queries, and you’re forced to double your warehouse size. (The toy math below shows how this plays out.)

Today’s data infra ≠ today’s compute requirements.
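To make that concrete, here is the toy math. Every number below is invented for illustration; none of it is an e6data benchmark.

```python
# Toy illustration of the over-provisioning trap described above.
# All numbers are invented for the example; they are not e6data benchmarks.

steady_state_nodes = 20    # what ~90% of the workload actually needs
spike_peak_nodes = 100     # what the ad-hoc ~10% needs at its worst
node_cost_per_hour = 2.0   # USD, illustrative

# A monolithic warehouse must be sized for the spike around the clock:
monolithic_cost = spike_peak_nodes * node_cost_per_hour * 24

# With just-in-time, per-component scaling you pay for the spike
# only while it lasts (say, one hour a day):
elastic_cost = (steady_state_nodes * node_cost_per_hour * 23
                + spike_peak_nodes * node_cost_per_hour * 1)

print(f"monolithic: ${monolithic_cost:.0f}/day vs elastic: ${elastic_cost:.0f}/day")
# monolithic: $4800/day vs elastic: $1120/day
```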

Our Architectural Decisions

We’re building e6data around a new playbook. No central coordinator. No mega-driver. No lock-in to a single table format. Here’s the breakdown:

Core Shifts We Made So Far:

1. Disaggregation of internals
- Separate the planner, metadata ops, and workers.
- Each scales independently, not as a monolith.

2. Dynamic, mid-query scaling
- Queries can scale up/down during execution.
- No pre-provisioning for the worst case. Just-in-time compute.

3. Push-based vectorized execution
- We’re similar to DuckDB/Photon but go deeper on compute orchestration.
- Useful when dealing with 1k+ concurrent user-facing queries. (A simplified sketch of the push model follows this list.)

4. No opinionated stack
- Bring your own catalog, governance layer, and format.
- Plug in; don’t port over.
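To give a flavor of what “push-based vectorized execution” means, here is a deliberately minimal Python sketch of the general technique: operators receive whole column batches and push results downstream, rather than consumers pulling one row at a time. This illustrates the pattern only; it is not e6data’s actual implementation.

```python
# Minimal sketch of a push-based, vectorized operator pipeline.
# Illustrates the general technique, not e6data's engine.
from dataclasses import dataclass


@dataclass
class Batch:
    """A column-oriented batch: column name -> list of values."""
    columns: dict


class Operator:
    def __init__(self):
        self.downstream = None

    def push(self, batch: Batch):
        raise NotImplementedError


class Filter(Operator):
    """Applies a predicate to a whole batch at a time, then pushes survivors on."""
    def __init__(self, column, predicate):
        super().__init__()
        self.column, self.predicate = column, predicate

    def push(self, batch: Batch):
        keep = [i for i, v in enumerate(batch.columns[self.column])
                if self.predicate(v)]
        filtered = {name: [col[i] for i in keep]
                    for name, col in batch.columns.items()}
        self.downstream.push(Batch(filtered))


class Sink(Operator):
    """Terminal operator: just counts the rows it receives."""
    def __init__(self):
        super().__init__()
        self.rows = 0

    def push(self, batch: Batch):
        self.rows += len(next(iter(batch.columns.values()), []))


# Wire up: scan -> filter -> sink. The scan *pushes* batches down the
# pipeline; no operator ever pulls a single row from its parent.
sink = Sink()
flt = Filter("amount", lambda v: v > 100)
flt.downstream = sink

for batch in [Batch({"amount": [50, 150, 300]}), Batch({"amount": [10, 500]})]:
    flt.push(batch)

print(sink.rows)  # 3
```

A pull-based (Volcano-style) pipeline would instead have the sink call next() on its parent for every row or batch; pushing inverts that control flow, keeps batches hot in cache, and gives a scheduler more freedom over where each batch runs.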

Why It Matters

  • Cost: We run 1000 QPS workloads at ~60% lower TCO than other engines.
  • Latency: p95 under 2s, even with mixed workloads.
  • No Lock-In: Use Iceberg today. Switch to Delta tomorrow. Doesn’t matter to us.
  • Infra Reuse: Already on Kubernetes? Cool. We sit inside that.

Where We’re Headed

  • Real-time ingest → queryable in <15s from object storage
  • Vector + SQL → cosine similarity inside SQL filters (sketched after this list)
  • AI-native enhancements → smart partitioning, query rewriting, and auto-guardrails
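On the vector + SQL point, the idea is that a similarity computation behaves like any other scalar expression in a WHERE clause. Here is a rough Python sketch of those semantics; the table, column names, query vector, and 0.8 threshold are all hypothetical.

```python
# Sketch of what "cosine similarity inside SQL filters" means semantically,
# e.g. SELECT id FROM docs WHERE cosine_sim(embedding, :query_vec) > 0.8;
# The table, columns, and threshold here are hypothetical.
import math


def cosine_sim(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0


docs = [
    {"id": 1, "embedding": [1.0, 0.0, 0.0]},
    {"id": 2, "embedding": [0.9, 0.1, 0.0]},
    {"id": 3, "embedding": [0.0, 1.0, 0.0]},
]
query_vec = [1.0, 0.0, 0.0]

# The similarity call is just a scalar predicate evaluated per row:
hits = [d["id"] for d in docs if cosine_sim(d["embedding"], query_vec) > 0.8]
print(hits)  # [1, 2]
```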

Frequently asked questions (FAQs)
How do I integrate e6data with my existing data infrastructure?

We are universally interoperable and open-source friendly, integrating with any object store, table format, data catalog, governance tool, BI tool, or other data application.

How does billing work?

We use a usage-based pricing model based on vCPU consumption. Your billing is determined by the number of vCPUs used, ensuring you only pay for the compute power you actually consume.
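As a back-of-the-envelope illustration of how vCPU-based, usage-based billing works in general (the rate below is a made-up placeholder, not an e6data price):

```python
# Illustrative vCPU-based billing math. The rate is a placeholder,
# not an actual e6data price.
vcpu_seconds_used = 12 * 3600 * 8   # e.g. 8 vCPUs busy for 12 hours
rate_per_vcpu_hour = 0.05           # hypothetical USD rate

bill = (vcpu_seconds_used / 3600) * rate_per_vcpu_hour
print(f"${bill:.2f}")  # $4.80
```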

What kind of file formats does e6data support?

We support all common file formats, including Parquet, ORC, JSON, CSV, Avro, and others.

What kind of performance improvements can I expect with e6data?

e6data delivers 5 to 10x faster query speeds at any concurrency, with over 50% lower total cost of ownership across workloads compared to other compute engines on the market.

What kinds of deployment models are available with e6data?

We support serverless and in-VPC deployment models. 

How does e6data handle data governance rules?

We integrate with your existing governance tools and also offer an in-house solution for data governance, access control, and security.

Listen to the full podcast on Apple Podcasts or Spotify.