TL;DR
If your PostgreSQL reports are slow and you run everything on one big machine, CedarDB can make queries much faster — without learning a totally new database.
It’s not perfect yet, but it’s impressive.
What is CedarDB?
CedarDB is a modern database built by researchers who have been working on fast databases for almost 20 years.
It is the commercial successor of academic systems like HyPer and Umbra from the Technical University of Munich (TUM).
From the outside, it looks a lot like PostgreSQL (see the connection sketch after this list):
- Same network protocol
- Same drivers
- Same SQL
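In practice, that means an ordinary PostgreSQL driver can talk to CedarDB unchanged. Here is a minimal sketch using psycopg in Python; the host, port, user, password, and database name are placeholders for whatever your own instance is configured with:

```python
# Connecting to CedarDB with an ordinary PostgreSQL driver (psycopg 3).
# Host, port, user, password, and dbname below are placeholders, not
# CedarDB specifics: use whatever your own instance is configured with.
import psycopg

with psycopg.connect(
    host="localhost",
    port=5432,
    user="postgres",
    password="postgres",
    dbname="postgres",
) as conn:
    with conn.cursor() as cur:
        cur.execute("SELECT 42 AS answer")
        print(cur.fetchone())  # (42,)
```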
But inside, it’s built very differently — and that’s where the speed comes from.
Why is it fast?
You don’t need to know the details, but these ideas matter:
It turns SQL into real machine code
Most databases interpret a query step by step, paying a little overhead on every row. CedarDB compiles each query into machine code, the way a compiler builds a program.
This idea comes from the HyPer research on data-centric code generation (a toy sketch follows the list below).
That means:
- Less overhead
- Better CPU usage
- Faster analytics
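To get a feel for the difference, here is a toy Python illustration, not CedarDB's actual machinery: an interpreter walks a generic expression tree for every row, while a query "compiled" once into a specialized function skips that per-row dispatch entirely.

```python
# Toy contrast between interpreted and compiled query execution.
# This is NOT CedarDB's implementation, just the general idea:
# an interpreter pays dispatch overhead on every row, while a query
# "compiled" once into a specialized function does not.
rows = [{"price": p, "qty": q} for p, q in zip(range(1000), range(1000))]

# Interpreted: walk a generic expression tree for every single row.
expr = ("mul", ("col", "price"), ("col", "qty"))

def eval_expr(node, row):
    if node[0] == "col":
        return row[node[1]]
    if node[0] == "mul":
        return eval_expr(node[1], row) * eval_expr(node[2], row)
    raise ValueError(node[0])

interpreted = sum(eval_expr(expr, r) for r in rows)

# "Compiled": generate one specialized function for this query, once.
compiled = eval(compile("lambda row: row['price'] * row['qty']",
                        "<query>", "eval"))
generated = sum(compiled(r) for r in rows)

assert interpreted == generated
```

CedarDB takes this much further by generating real machine code rather than Python, but the per-row overhead it removes is the same kind.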
It uses all your CPU cores
Instead of splitting work badly and leaving cores idle, CedarDB:
- Breaks work into small chunks
- Feeds them to all CPU cores evenly
This approach is often called morsel-driven parallelism.
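A rough sketch of the scheduling pattern, for intuition only (CedarDB's real scheduler is far more sophisticated, and CPython threads cannot truly parallelize CPU-bound work):

```python
# Sketch of morsel-driven parallelism: the input is cut into small
# "morsels", and each worker pulls the next morsel the moment it
# finishes the last one, so no core sits idle on a skewed partition.
# Illustration only: CedarDB's scheduler is far more sophisticated,
# and CPython threads cannot truly parallelize CPU-bound work anyway.
from concurrent.futures import ThreadPoolExecutor
import os

data = list(range(1_000_000))
MORSEL_SIZE = 10_000
morsels = [data[i:i + MORSEL_SIZE]
           for i in range(0, len(data), MORSEL_SIZE)]

def process(morsel):
    # Stand-in for real query work on one chunk of rows.
    return sum(morsel)

with ThreadPoolExecutor(max_workers=os.cpu_count()) as pool:
    # map() hands each finished worker the next unprocessed morsel.
    total = sum(pool.map(process, morsels))

assert total == sum(data)
```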
It is built for SSDs, not old disks
Many databases still carry assumptions from spinning hard drives, such as favoring one sequential read stream. CedarDB is designed for modern SSDs, which deliver their full bandwidth only when many reads are in flight at once, making large scans much faster.
This work builds on ideas explored in the Umbra research system.
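To make that concrete, here is a hedged sketch of the access pattern SSDs reward: many concurrent reads against the same file. The file path and block size are made-up examples, and os.pread is Unix-only.

```python
# SSDs reach full bandwidth only with many reads in flight at once;
# spinning disks rewarded a single sequential stream instead.
# Sketch only: "table.dat" and the block size are made-up examples,
# and os.pread is available on Unix-like systems only.
from concurrent.futures import ThreadPoolExecutor
import os

PATH = "table.dat"       # hypothetical data file
BLOCK = 64 * 1024        # read in 64 KiB pages

fd = os.open(PATH, os.O_RDONLY)
size = os.fstat(fd).st_size

def read_block(offset):
    # pread lets many threads read one fd at independent offsets.
    return os.pread(fd, BLOCK, offset)

# Keeping dozens of requests in flight exercises the SSD's internal
# parallelism; a single loop of sequential reads would not.
with ThreadPoolExecutor(max_workers=32) as pool:
    blocks = list(pool.map(read_block, range(0, size, BLOCK)))

os.close(fd)
print(f"read {sum(len(b) for b in blocks)} bytes in {len(blocks)} blocks")
```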
When does CedarDB shine?
CedarDB is a great fit if:
- You run analytics or reports on PostgreSQL
- Queries take seconds or minutes
- Everything runs on one large machine
- You want speed without switching to a totally new SQL dialect
Typical use cases:
- Reporting
- Dashboards
- Data exploration
- Mixed read/write workloads (HTAP)
Where is it not ready yet?
CedarDB is still young, so keep a few things in mind:
- Many PostgreSQL extensions don’t work (for example PostGIS)
- High availability and backups are still evolving
- Some admin tools expect PostgreSQL system tables that aren’t fully there yet
- It can use a lot of memory on complex queries
That makes it better suited to experiments and analytics than to critical production systems (yet).
How does it compare (very roughly)?
- PostgreSQL (https://www.postgresql.org/): stable and flexible, but slower for heavy analytics
- CedarDB (https://cedardb.com/): much faster analytics, smaller ecosystem
- ClickHouse (https://clickhouse.com/): great for distributed analytics, but a very different SQL dialect
- DuckDB (https://duckdb.org/): excellent as an embedded, in-process engine, not a client-server database
Should you try it?
Yes, try it if:
- PostgreSQL analytics are too slow
- You have one powerful server
- You want faster queries with minimal changes (see the timing sketch below)
Wait if:
- You depend on PostgreSQL extensions
- You need rock-solid HA and backups today
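Because the wire protocol is shared, a first test can be as simple as pointing your existing connection string at CedarDB and timing the same query against both systems. A sketch, where the DSNs, the CedarDB port, and the query are placeholder assumptions:

```python
# Time the same query against PostgreSQL and CedarDB through the
# same driver: only the connection string changes. The DSNs, the
# port for CedarDB, and the query are placeholder assumptions.
import time
import psycopg

QUERY = "SELECT count(*) FROM orders"   # hypothetical reporting query

for name, dsn in [
    ("postgresql", "host=localhost port=5432 dbname=shop"),
    ("cedardb",    "host=localhost port=5433 dbname=shop"),
]:
    with psycopg.connect(dsn) as conn, conn.cursor() as cur:
        start = time.perf_counter()
        cur.execute(QUERY)
        cur.fetchall()
        print(f"{name}: {time.perf_counter() - start:.3f}s")
```

If the CedarDB number is dramatically lower on your real reporting queries, that is your signal to dig deeper.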
Learn more
If you want to go deeper:
- CedarDB: https://cedardb.com/
- HyPer research paper & project: https://wwwdb.in.tum.de/research/projects/Hyper/
- Umbra research project: https://wwwdb.in.tum.de/research/projects/umbra/
- PostgreSQL documentation: https://www.postgresql.org/docs/
- ClickBench (analytics benchmarks): https://github.com/ClickHouse/ClickBench
Final thoughts
CedarDB shows what happens when a database is designed for modern CPUs and SSDs instead of old hardware.
It won’t replace PostgreSQL everywhere, but for analytics on a single machine, it can be a big upgrade — and it’s absolutely worth testing.