Does this help with DuckDB concurrency? My main gripe with DuckDB is that you can't write to it from multiple processes at the same time. If you open the database in write mode with one process, you cannot modify it at all from another process without the first process completely releasing it. In fact, you cannot even read from it from another process in this scenario.
So if you typically use a file-backed DuckDB database in one process and want to quickly modify something in that database using the DuckDB CLI (the way you might connect SequelPro or DBeaver to change a DB while your main application is 'using' it), the CLI complains that the database is locked by another process and doesn't let you connect at all.
This is unlike SQLite, which handles concurrent multi-process access safely out of the box. I know it's DuckDB's explicit design decision[0], but it would be amazing if DuckDB could behave more like SQLite here. DuckDB has incredible quality-of-life improvements, with many extra types and functions supported, not to mention all the SQL dialect enhancements that let you write much more concise SQL (they call it "Friendly SQL"), which executes super efficiently too.
[0] https://duckdb.org/docs/current/connect/concurrency
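For concreteness, a minimal sketch of the lockout using the duckdb Python API (the file name is a placeholder, and the exact error text varies by version):

```python
import duckdb
from multiprocessing import Process

def try_connect(path: str) -> None:
    # Runs in a second OS process while the parent still holds a write lock.
    try:
        duckdb.connect(path, read_only=True)  # even read-only is refused
        print("connected")
    except Exception as exc:
        print(f"locked out: {exc}")

if __name__ == "__main__":
    writer = duckdb.connect("app.db")  # read-write: takes an exclusive file lock
    writer.execute("CREATE TABLE IF NOT EXISTS t (i INTEGER)")

    p = Process(target=try_connect, args=("app.db",))
    p.start()
    p.join()  # prints something like: locked out: IO Error: Could not set lock ...

    writer.close()
```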
Hi, DuckDB DevRel here. To have concurrent read-write access to a database, you can use our DuckLake lakehouse format and coordinate concurrent access through a shared Postgres catalog. We released v1.0 yesterday: https://ducklake.select/2026/04/13/ducklake-10/
I updated your reference [0] with this information.
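For readers who want the shape of that setup, a minimal sketch in Python (the `ducklake` and `postgres` extensions are real; the Postgres DSN, catalog alias, and S3 path below are placeholders, so check the DuckLake docs for the exact ATTACH syntax):

```python
import duckdb

con = duckdb.connect()
con.execute("INSTALL ducklake")
con.execute("LOAD ducklake")
con.execute("INSTALL postgres")  # the shared catalog lives in Postgres
con.execute("LOAD postgres")

# Attach a DuckLake catalog backed by Postgres; table data is written as
# files under DATA_PATH. DSN and bucket are placeholders.
con.execute("""
    ATTACH 'ducklake:postgres:dbname=ducklake_catalog host=localhost'
    AS lake (DATA_PATH 's3://my-bucket/lake/')
""")
con.execute("CREATE TABLE lake.events (ts TIMESTAMP, msg VARCHAR)")
con.execute("INSERT INTO lake.events VALUES (now(), 'hello')")
```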
This is exactly what we found. Ingest rates were tough. We partitioned and ran across multiple DuckDB instances too (and wrangled the resulting complexity).
We ended up building a SQLite + Vortex file alternative for our use case: https://spice.ai/blog/introducing-spice-cayenne-data-acceler...
You might find https://github.com/apache/datafusion and https://github.com/datafusion-contrib/datafusion-federation of interest
I built a distributed DuckDB setup using OpenRaft for state replication. Every node holds a full copy of the database. Writes go through Raft consensus, reads are local. It's more like etcd-with-DuckDB than MotherDuck-lite.
OpenDuck takes a different approach: query federation, with a gateway that splits execution across local and remote workers. My use case requires every node to serve reads independently with zero network latency, and to keep running if other nodes go down.
The PostgreSQL dependency for metadata feels heavy. Now you're operating two database systems instead of one. In my setup DuckDB stores both the Raft log and the application data, so there's a single storage engine to reason about.
Not saying my approach is universally better. If you need to query across datasets that don't fit on a single machine, OpenDuck's architecture makes more sense. But if you want replicated state with strong consistency, Raft + DuckDB works very well.
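A toy sketch of that pattern in Python (the `raft` object stands in for a real consensus library such as OpenRaft; it is not an actual API):

```python
import duckdb

class DuckDBStateMachine:
    """Toy model of the pattern described above: every node applies the
    Raft-committed write log to its own local DuckDB file, so reads never
    leave the machine."""

    def __init__(self, path: str):
        self.con = duckdb.connect(path)  # full local copy of the database

    def apply(self, sql: str, params: list) -> None:
        # Called once per committed log entry, in log order, on every node.
        # Determinism matters: no now()/random() inside replicated SQL.
        self.con.execute(sql, params)

    def read(self, sql: str):
        # Local read: no network hop, and it works even if peers are down.
        return self.con.execute(sql).fetchall()

# Write path (leader): raft.propose(("INSERT INTO t VALUES (?)", [42]))
# -> once committed, every node's apply() runs the same statement.
```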
This is an attempt to replicate MotherDuck's differential storage and implement hybrid query execution on DuckDB.
As someone working in the field, I have to admit that I'm not familiar with the term "differential storage", nor do I really understand what "hybrid execution" means. Maybe you could describe both from a simple technical point of view, and explain what benefits they have for me as a user?
from the post:
Differential storage
Append-only layers with PostgreSQL metadata. DuckDB sees a normal file; OpenDuck persists data as immutable sealed layers addressable from object storage. Snapshots give you consistent reads. One serialized write path, many concurrent readers.
Hybrid (dual) execution
A single query can run partly on your machine and partly on a remote worker. The gateway splits the plan, labels each operator LOCAL or REMOTE, and inserts bridge operators at the boundaries. Only intermediate results cross the wire.
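As a toy model of the bridge-operator idea (the operator names and placement rules here are illustrative assumptions, not OpenDuck's actual internals):

```python
from dataclasses import dataclass, field

@dataclass
class Op:
    name: str                    # e.g. "SCAN", "FILTER", "AGGREGATE"
    site: str                    # "LOCAL" or "REMOTE"
    children: list["Op"] = field(default_factory=list)

def insert_bridges(op: Op) -> Op:
    """Walk the plan; wherever a child runs at a different site than its
    parent, insert a BRIDGE operator so only that child's intermediate
    result crosses the wire."""
    op.children = [insert_bridges(c) for c in op.children]
    op.children = [
        Op("BRIDGE", op.site, [c]) if c.site != op.site else c
        for c in op.children
    ]
    return op

# A REMOTE scan+filter feeding a LOCAL aggregate: the bridge sits between
# them, so only filtered rows are shipped, not the base table.
plan = Op("AGGREGATE", "LOCAL", [Op("FILTER", "REMOTE", [Op("SCAN", "REMOTE")])])
plan = insert_bridges(plan)
```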
I have a deep appreciation for DuckDB, but I am afraid the confluence of brilliant ideas makes it ever more complicated to adopt, and DuckLake is another example of this trend.
When I look at SQLite I see a clear message: a database in a file. I think DuckDB is that, too. But it's also an analytics engine like Polars, works with other DB engines, supports Parquet, comes with a UI, and has two separate warehouse ideas that both deviate from DuckDB's core ideas.
Yes, DuckLake and MotherDuck are separate entities, but they are still part of the ecosystem.
that's a valid concern!
However, I'd like to point out that this is exactly why DuckDB relies so heavily on its extension mechanism, even for features that some may consider "essential" for an analytical system: take the parquet, json, and httpfs extensions, for example. Features like the UI you mention are likewise isolated from core DuckDB by living in an extension.
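Concretely, from the Python API (httpfs as the example; install_extension/load_extension mirror SQL's INSTALL/LOAD):

```python
import duckdb

con = duckdb.connect()

# Extensions live outside the core binary and are fetched/loaded on demand.
con.install_extension("httpfs")   # adds HTTP(S)/S3 filesystem support
con.load_extension("httpfs")

# List what is installed vs. loaded (parquet and json ship with most builds):
print(con.execute(
    "SELECT extension_name, loaded, installed FROM duckdb_extensions()"
).fetchall())
```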
I'd argue that core DuckDB is still very much the same lightweight, portable, no-dependency system it started out as (a design very much inspired by how effective SQLite is for the same reason).
Maybe some interesting behind-the-scenes context: to further solidify core DuckDB and guard it from the complexity of its ever-growing extension ecosystem, one of the big items currently on our roadmap (see https://duckdb.org/roadmap) is to make significant improvements to DuckDB's stable C extension API.
disclaimer: I work at DuckDB Labs ;)
I read the code. It's a good case study of one-shot output from AI when you ask it to replicate a SaaS product. This is probably better than most because MotherDuck has been open about their techniques to build the product.
Obviously not a production implementation.
There’s a million of these per day and I would never even think about using a single one of them near production data.
Show HN style posts have become completely worthless to me; everything now is just vibe-coded, cloud-chasing slop.
Last week I sent my first PR to DuckDB to support Iceberg views in catalogs like Polaris! Let's hope for the best :)
Seems cool! But it would be nice to have some "real-world" use cases to see actual usage patterns...
In my case, my systems can produce "warnings" when there are small system errors, which I want to aggregate and review (drill down into) from time to time.
I was hesitating between using something like OpenTelemetry to send logs/metrics for those, or just adding a "warnings" table to my TimescaleDB and using some aggregates to drill down and possibly display some chunks to review...
But another possibility, to avoid TimescaleDB/ClickHouse and rely on S3 alone, would be to upload those as Parquet files to a bucket through DuckDB, and then query them from time to time for stats, along the lines of the sketch below.
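A minimal sketch of that S3 + Parquet option (the bucket, the credentials, and the local_warnings table are placeholders):

```python
import duckdb

con = duckdb.connect()
con.install_extension("httpfs")  # S3 filesystem support
con.load_extension("httpfs")

# Placeholder credentials; in practice use a named secret or env-based config.
con.execute("""
    CREATE SECRET (TYPE s3, KEY_ID 'AKIA...', SECRET '...', REGION 'us-east-1')
""")

# Upload one Parquet file per batch of warnings (local_warnings is a stand-in
# for wherever the warnings currently live).
con.execute("""
    COPY (SELECT * FROM local_warnings)
    TO 's3://my-bucket/warnings/batch_0001.parquet' (FORMAT parquet)
""")

# Later: drill down across every uploaded batch with a glob.
rows = con.execute("""
    SELECT date_trunc('day', ts) AS day, code, count(*) AS n
    FROM 's3://my-bucket/warnings/*.parquet'
    GROUP BY ALL
    ORDER BY day, n DESC
""").fetchall()
```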
Would you have a recommendation?
There is no “real world” use case because it’s vibe coded slop.