Your fast, native, cross‑platform SQL & NoSQL database client (Desktop, with early groundwork for iPadOS) — built in Rust.
Tabular is a lightweight, native database client built with the eframe/egui stack. It focuses on instant startup, responsive UI, safe concurrency, and a distraction‑free workflow for developers, data engineers, and DBAs. Unlike web/Electron apps, Tabular ships as a single native binary with a small memory footprint while still offering rich features like autocomplete, multiple drivers, query history, export tools, and self‑update.
- Unified UI for multiple relational & non‑relational databases
- Drivers: PostgreSQL, MySQL/MariaDB, SQLite, SQL Server (TDS), Redis, MongoDB
- Async runtime (Tokio) for non‑blocking execution
- Multiple query tabs & saved query library (queries/ folder)
- Query history panel with search & filtering
- Result grid with copy cell / row / full result set
- Export to CSV & XLSX
- Rich value formatting (dates, decimals, JSON, BSON, HEX)
- Connection caching & quick reconnect
- Self‑update (GitHub Releases) with semantic version check
- Configurable data directory (env TABULAR_DATA_DIR)
- Native file dialogs (rfd)
- Cross‑platform theming via egui
- Sandboxing & notarization ready for macOS
The legacy egui::TextEdit editor is being replaced with a custom widget backed by a rope buffer. Current capabilities:
- Multi‑caret editing
- Per‑line syntax highlighting cache (SQL focus first)
- Basic scroll‑to‑caret
- Undo/Redo
- Multi‑line/column selection
Planned next: full autocomplete integration, diff‑based edits, revision tracking, and removal of the legacy path after feature parity.
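The README does not pin down the buffer crate, but a rope structure is what makes these capabilities cheap: edits stay O(log n) in buffer size and line-oriented access maps onto the per-line highlight cache. A minimal sketch using the ropey crate (an assumption for illustration; the actual buffer implementation may differ):

```rust
use ropey::Rope;

fn main() {
    let mut rope = Rope::from_str("SELECT *\nFROM users\n");

    // Inserts and deletes are O(log n) in buffer size, so multi-caret
    // edits in large files stay responsive.
    let line_start = rope.line_to_char(1); // char index where line 1 begins
    rope.insert(line_start, "-- injected comment\n");

    // Line-oriented access maps naturally onto a per-line highlight cache.
    for (i, line) in rope.lines().enumerate() {
        print!("{i}: {line}"); // line slices keep their trailing newline
    }
}
```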
| Category | Engines / Protocols |
|---|---|
| Relational | PostgreSQL, MySQL/MariaDB, SQLite, Microsoft SQL Server |
| Document | MongoDB (BSON & compression) |
| Key/Value | Redis (async connection manager) |
Notes:
- Microsoft SQL Server uses tiberius (TDS over TLS)
- Redis uses pooled async managers
- SQLite runs in‑process (file mode) — ensure write permissions
- Visit: https://github.com/tabular-id/tabular/releases
- Download the bundle for your platform:
  - macOS: .dmg (notarized) or .pkg (if available)
  - Linux: .tar.gz (extract, then place the binary in $HOME/.local/bin or /usr/local/bin)
  - Windows: portable package is planned
- (macOS) Drag Tabular.app into /Applications.
- Run Tabular.
General requirements:
- Rust (stable; https://rustup.rs)
- Cargo (bundled with rustup)
- Clang/LLVM (for bindgen / some native crates)
- libclang headers (Linux)
- (Linux) pkg-config, OpenSSL dev packages may be required depending on environment
Arch Linux:

sudo pacman -Syu --needed base-devel clang llvm pkgconf
export LIBCLANG_PATH=/usr/lib
git clone https://github.com/tabular-id/tabular.git
cd tabular
cargo build --release

Debian/Ubuntu:

sudo apt update
sudo apt install -y build-essential clang libclang-dev pkg-config
git clone https://github.com/tabular-id/tabular.git
cd tabular
cargo build --release

macOS:

xcode-select --install   # Command Line Tools
brew install llvm        # (optional) newer clang
git clone https://github.com/tabular-id/tabular.git
cd tabular
cargo build --release

If using LLVM from Homebrew:

export LIBCLANG_PATH="$(brew --prefix llvm)/lib"

Windows: install the MSVC toolchain, then build with cargo build --release.
cargo install cross
cross build --target aarch64-apple-darwin --release
# Experimental iPadOS (requires an Xcode/iOS wrapper)
rustup target add aarch64-apple-ios
cargo build --target aarch64-apple-ios --release

Run the built binary:

./target/release/tabular

| Variable | Purpose | Example |
|---|---|---|
| TABULAR_DATA_DIR | Override data directory location | /data/tabular |
| RUST_LOG | Enable logging | RUST_LOG=info ./tabular |
For distributing outside the Mac App Store:
export APPLE_ID="your-apple-id@example.com"
export APPLE_PASSWORD="app-specific-password"
export APPLE_TEAM_ID="TEAMID"
export APPLE_BUNDLE_ID="id.tabular.database"
export APPLE_IDENTITY="Developer ID Application: Your Name (TEAMID)"
export NOTARIZE=1
./build.sh macos --deps

Staple & verify:
xcrun stapler staple dist/macos/Tabular-<version>.dmg
spctl -a -vv dist/macos/Tabular.app
codesign --verify --deep --strict --verbose=2 dist/macos/Tabular.app

See macos/Tabular.entitlements for sandbox/network/file access settings. For App Store distribution use a distribution identity and provisioning profile:
export APPLE_IDENTITY="Apple Distribution: Your Name (TEAMID)"
export PROVISIONING_PROFILE="/path/Tabular_AppStore.provisionprofile"
make pkg-macos-store

Default location:
- macOS / Linux: ~/.tabular
Change it via Preferences (native folder picker) or force it with:
export TABULAR_DATA_DIR="/absolute/custom/path"
./tabular

Manual migration: copy the old folder to the new location before switching & restarting.
Contents of the data directory:
- preferences.* — UI & app settings
- cache.* — metadata & driver caches
- queries/ — saved queries
- history/ — executed query history
src/
main.rs # Entry point
window_egui.rs # UI / egui integration
editor.rs               # Query editor logic
editor_autocomplete.rs
sidebar_*.rs            # Side panels (database, history, queries)
driver_*.rs             # Database driver abstractions
export.rs               # CSV / XLSX export
self_update.rs          # Update check & apply
config.rs               # Preferences & data directory
models/                 # Data structures & enums
query_ast/              # Query AST layer (experimental, enabled by default)
git clone https://github.com/tabular-id/tabular.git
cd tabular
cargo run

| Action | Command |
|---|---|
| Build debug | cargo build |
| Run | cargo run |
| Test | cargo test |
| Lint (clippy) | cargo clippy -- -D warnings |
| Format | cargo fmt |
| Release build | cargo build --release |
RUST_LOG=info cargo run

To add a new driver:
1) Create driver_<engine>.rs
2) Implement connection & execution
3) Add a feature flag if optional
4) Register in modules.rs / the driver factory
5) Update the README
| Purpose | Crate |
|---|---|
| UI & App Shell | eframe, egui_extras |
| Async Runtime | tokio, futures, futures-util, tokio-util |
| Relational DB | sqlx (postgres, mysql, sqlite) |
| SQL Server | tiberius |
| Redis | redis |
| MongoDB | mongodb, bson |
| Data Formats | serde, serde_json, chrono, rust_decimal, hex, csv, xlsxwriter |
| File Dialog | rfd |
| Update | reqwest, self_update, semver |
| Logging | log, env_logger, dotenv |
| Utilities | dirs, regex, colorful |
See Cargo.toml for exact versions (current package version: 0.5.18).
Contributions are welcome (bug fixes, new drivers, UI, performance). Suggested workflow:
- Fork & create a feature branch.
- Run cargo fmt && cargo clippy before committing.
- Ensure the release build compiles: cargo build --release.
- Open a PR with a concise description & screenshots (for UI changes).
| Issue | Hint |
|---|---|
| Build fails: clang not found | Install clang / set LIBCLANG_PATH |
| TLS errors on connect | Verify certificates & network reachability |
| SQLite file locked | Close other processes; check file permissions |
| UI freeze on long queries | Use server pagination; streaming improvements WIP |
- Windows build & signing
- iPadOS adaptive layout
- Query formatter / beautifier
- Result pagination for large datasets
- Connection grouping & tags
- Scripting/extension layer
- Secure secrets storage (Keychain/KWallet/Credential Manager)
This project is dual‑licensed:
- GNU Affero General Public License v3 (AGPL‑3.0) — see LICENSE-AGPL
- Commercial License — contact PT. Vneu Teknologi Indonesia (see LICENSE)
In short: use AGPL for OSS/non‑commercial; obtain a commercial license for closed‑source/commercial integration.
Built with the Rust ecosystem. egui & sqlx projects are especially instrumental.
Made with Rust 🦀 for people who love fast, native tools.
Tabular uses a database‑agnostic AST (Abstract Syntax Tree) to separate query logic from database‑specific implementations. Benefits:
- ✅ Optimal Performance: Plan caching, rewrite optimization
- ✅ Extensibility: Add new databases without changing core logic
- ✅ Type Safety: Compile-time checking with Rust
- ✅ Maintainability: Clear separation of concerns
Raw SQL → sqlparser → AST (generic) → Logical Plan
- Uses the sqlparser crate for universal SQL parsing
- Not aware of any specific database
- Output: LogicalQueryPlan (database‑agnostic IR)
File: src/query_ast/parser.rs
pub enum LogicalQueryPlan {
Projection { exprs: Vec<Expr>, input: Box<LogicalQueryPlan> },
Filter { predicate: Expr, input: Box<LogicalQueryPlan> },
Sort { items: Vec<SortItem>, input: Box<LogicalQueryPlan> },
Limit { limit: u64, offset: u64, input: Box<LogicalQueryPlan> },
Join { left, right, on, kind },
TableScan { table, alias },
// ... etc
}

File: src/query_ast/logical.rs
Benefits:
- All databases share the same structure
- Optimizations apply to all databases
- Easy to visualize and debug
Logical Plan → Apply Rules → Optimized Logical Plan
Rules (apply to all databases):
- Filter pushdown
- Projection pruning
- CTE inlining
- Predicate merging
- Auto‑limit injection
- Pagination rewrite
File: src/query_ast/rewrite.rs
Example:
// Before rewrite:
Projection -> Filter -> Filter -> TableScan
// After rewrite:
Projection -> Filter(merged) -> TableScan

Optimized Plan → Dialect → SQL for Target DB
Trait-based:
pub trait SqlDialect {
fn quote_ident(&self, ident: &str) -> String;
fn emit_limit(&self, limit: u64, offset: u64) -> String;
fn supports_window_functions(&self) -> bool;
// ... etc
}

Implementations:
- MySqlDialect: backticks, LIMIT n OFFSET m
- PostgresDialect: double quotes, window functions
- MssqlDialect: square brackets, TOP n, OFFSET FETCH
- SqliteDialect: backticks, limited window support
- MongoDialect: minimal SQL, mostly native operations
- RedisDialect: very limited SQL
Files:
- src/query_ast/emitter/mod.rs (core emitter)
- src/query_ast/emitter/dialect.rs (dialect trait + implementations)
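For illustration, here is a hedged sketch of what a MySQL-flavoured implementation of the trait above could look like; the shipped MySqlDialect in src/query_ast/emitter/dialect.rs may differ in detail:

```rust
// Sketch only; the shipped MySqlDialect may differ.
pub struct MySqlDialectSketch;

impl SqlDialect for MySqlDialectSketch {
    fn quote_ident(&self, ident: &str) -> String {
        // MySQL quotes identifiers with backticks; escape embedded backticks.
        format!("`{}`", ident.replace('`', "``"))
    }

    fn emit_limit(&self, limit: u64, offset: u64) -> String {
        // MySQL pagination uses LIMIT n OFFSET m.
        if offset > 0 {
            format!(" LIMIT {limit} OFFSET {offset}")
        } else {
            format!(" LIMIT {limit}")
        }
    }

    fn supports_window_functions(&self) -> bool {
        true // MySQL 8.0+ supports window functions
    }
}
```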
Emitted SQL → Connection Pool → Execute → Results
Trait-based:
#[async_trait]
pub trait DatabaseExecutor {
fn database_type(&self) -> DatabaseType;
async fn execute_query(&self, sql: &str, ...) -> Result<QueryResult, ...>;
fn supports_feature(&self, feature: SqlFeature) -> bool;
}

File: src/query_ast/executor.rs
// Level 1: Structural fingerprint (quick pre-check)
let fp = structural_fingerprint(sql); // hash without parsing
// Level 2: Logical plan hash (after parsing)
let plan = parse(sql);
let plan_hash = hash_plan(&plan);
// Cache key includes: plan_hash + db_type + pagination + options
let cache_key = format!("{}::{:?}::{:?}", plan_hash, db_type, pagination);

Cache Stats Available:
let (hits, misses) = query_ast::cache_stats();
println!("Cache hit rate: {:.1}%", hits as f64 / (hits + misses) as f64 * 100.0);Same logical plan works for different databases:
Parse once → Rewrite once → Emit N times (one per DB type)
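A self-contained sketch of that flow; parse, rewrite, and emit below are toy stand-ins for the pipeline stages described above, not Tabular's actual API:

```rust
// Toy pipeline: parse/rewrite once, emit per target database.
#[derive(Debug, Clone, Copy)]
enum DatabaseType { MySQL, PostgreSQL, MsSQL }

struct LogicalPlan(String); // stand-in for LogicalQueryPlan

fn parse(sql: &str) -> LogicalPlan { LogicalPlan(sql.to_string()) }
fn rewrite(plan: LogicalPlan) -> LogicalPlan { plan } // rules elided

fn emit(plan: &LogicalPlan, db: DatabaseType) -> String {
    // Only the pagination clause differs in this toy emitter.
    let limit = match db {
        DatabaseType::MsSQL => " OFFSET 0 ROWS FETCH NEXT 100 ROWS ONLY",
        _ => " LIMIT 100",
    };
    format!("{}{}", plan.0, limit)
}

fn main() {
    let plan = rewrite(parse("SELECT id FROM users")); // parse + rewrite once
    for db in [DatabaseType::MySQL, DatabaseType::PostgreSQL, DatabaseType::MsSQL] {
        println!("{db:?}: {}", emit(&plan, db)); // emit N times
    }
}
```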
- Use Arc<LogicalQueryPlan> to share plans
- String interning for column/table names
- Copy-on-write (COW) for rewrites
| Feature | Status | Notes |
|---|---|---|
| Basic SELECT parsing | ✅ | Includes JOIN, GROUP BY, HAVING, LIMIT |
| Filter/WHERE | ✅ | AND/OR/NOT, comparison ops |
| Projection/SELECT list | ✅ | Columns, aliases, * |
| Sorting/ORDER BY | ✅ | Multiple columns, ASC/DESC |
| Pagination/LIMIT | ✅ | Rewritten per database |
| JOINs | ✅ | INNER, LEFT, RIGHT, FULL |
| GROUP BY | ✅ | Multiple expressions |
| HAVING | ✅ | Post-aggregation filters |
| DISTINCT | ✅ | Deduplication |
| CTEs/WITH | ✅ | Single-use CTE inlining |
| UNION/UNION ALL | ✅ | Set operations |
| Window Functions | ✅ | ROW_NUMBER, RANK, etc |
| Subqueries | ✅ | Scalar, correlated detection |
| Plan Caching | ✅ | Multi-level with fingerprinting |
| Rewrite Rules | ✅ | 9 rules implemented |
| MySQL Dialect | ✅ | Full support |
| PostgreSQL Dialect | ✅ | Full support |
| SQLite Dialect | ✅ | Full support |
| MS SQL Dialect | ✅ | TOP/OFFSET FETCH syntax |
| MongoDB Dialect | 🟡 | Limited SQL, prefer native |
| Redis Dialect | 🟡 | Very limited |
| Feature | Priority | Effort |
|---|---|---|
| Executor trait implementation | HIGH | Medium |
| Database-specific executors | HIGH | Medium |
| Multi-statement support | MEDIUM | High |
| DDL parsing (CREATE/ALTER/DROP) | MEDIUM | High |
| Advanced subquery optimizations | LOW | High |
| Correlated subquery rewrite | LOW | High |
| Cost-based optimization | LOW | Very High |
Simple SELECT: < 1ms (cache hit: < 0.1ms)
With JOIN: < 5ms (cache hit: < 0.1ms)
Complex (3+ JOINs): < 20ms (cache hit: < 0.1ms)
Repeated queries: 95%+ hit rate
Pagination queries: 90%+ hit rate
Ad-hoc queries: 40%+ hit rate (fingerprint matching)
Per cached plan: ~5KB (typical)
Cache size limit: 1000 plans (configurable)
Total overhead: ~5MB for 1000 plans
- Use the AST pipeline instead of raw SQL where possible
- Implement SqlDialect for your database
- Implement DatabaseExecutor for execution
- Test with real queries from your database
- Measure performance before/after AST integration
- Use standard SQL for best cross-database compatibility
- Avoid database-specific features in shared code
- Let the AST handle optimization (don't manually optimize)
- Check cache stats to verify query reuse
- Keep layers separate (don't mix concerns)
- Add tests for new rewrites (prevent regressions)
- Document dialect differences in code comments
- Profile regularly to catch performance regressions
let debug_str = query_ast::debug_plan(&sql, &db_type)?;
println!("{}", debug_str);
// Output:
// -- debug plan for PostgreSQL
// Projection 3
// Filter "id > 10"
// TableScan(users alias=None)

let rules = query_ast::last_rewrite_rules();
println!("Applied rules: {:?}", rules);
// Output: ["auto_limit", "filter_pushdown", "projection_prune"]

let (nodes, depth, subqueries, correlated, windows) = query_ast::plan_metrics(&sql)?;
println!("Plan complexity: {} nodes, depth {}", nodes, depth);

let (hash, cache_key) = query_ast::plan_structural_hash(&sql, &db_type, pagination, auto_limit)?;
println!("Plan hash: {:x}", hash);

src/query_ast/
├── mod.rs # Public API & main compilation pipeline
├── ast.rs # Raw AST wrapper (thin layer over sqlparser)
├── logical.rs # Logical plan IR (database-agnostic)
├── parser.rs # SQL → Logical plan conversion
├── emitter/
│ ├── mod.rs # Plan → SQL emission
│ └── dialect.rs # Database-specific SQL generation
├── rewrite.rs # Optimization rules
├── executor.rs # Execution trait & registry
├── plan_cache.rs # Multi-level caching
└── errors.rs # Error types
src/driver_*.rs # Database-specific drivers (legacy + AST integration)
src/connection.rs # Connection pool management
src/cache_data.rs # Metadata caching (tables, columns, etc)
- sqlparser: SQL parsing (universal)
- tokio: Async runtime
- sqlx: Database drivers (MySQL, PostgreSQL, SQLite)
- tiberius: MS SQL driver
- mongodb: MongoDB native driver
- redis: Redis client
- Cost-Based Optimizer: Choose optimal join order
- Materialized Views: Cache intermediate results
- Parallel Execution: Split queries across cores
- Query Federation: JOIN across different databases
- Compile to native code: LLVM backend for hot queries
- SIMD optimizations: Vectorize filters and aggregations
- GPU acceleration: Offload heavy computations
- Query suggestions: Based on schema and data
- Auto-indexing: Recommend indexes based on query patterns
- Query rewrite hints: AI-powered optimization suggestions
See Adding a New Database to Tabular for a step-by-step guide to adding new database support.
- Logical plan changes don't break existing databases
- New rewrites have tests for correctness
- Dialect changes respect database feature sets
- Performance benchmarks show no regressions
- Documentation updated for new features
- Issues: GitHub issues for bugs
- Discussions: GitHub discussions for questions
- Docs: This file + inline code documentation
- Examples: See tests/query_ast_tests.rs
Last Updated: 2025-11-11 Version: 0.5.18 Maintainer: Tabular Team
This guide explains how to add support for a new database type using the Agnostic AST architecture.
Tabular uses a layered architecture with clear separation of concerns:
Raw SQL → Parser → Logical Plan → Optimizer/Rewriter → Emitter → Executor
↓ ↓ ↓ ↓
DB-Agnostic DB-Agnostic DB-Specific DB-Specific
Add your database to src/models/enums.rs:
#[derive(Clone, PartialEq, Serialize, Deserialize, Debug)]
pub enum DatabaseType {
MySQL,
PostgreSQL,
SQLite,
Redis,
MsSQL,
MongoDB,
YourNewDB, // <- Add here
}
pub enum DatabasePool {
MySQL(Arc<MySqlPool>),
PostgreSQL(Arc<PgPool>),
SQLite(Arc<SqlitePool>),
Redis(Arc<ConnectionManager>),
MsSQL(Arc<MssqlConfigWrapper>),
MongoDB(Arc<MongoClient>),
YourNewDB(Arc<YourDbConnection>), // <- Add here
}

Create src/query_ast/emitter/your_db_dialect.rs:
use super::dialect::SqlDialect;
use crate::models::enums::DatabaseType;
pub struct YourDbDialect;
impl SqlDialect for YourDbDialect {
fn db_type(&self) -> DatabaseType {
DatabaseType::YourNewDB
}
fn quote_ident(&self, ident: &str) -> String {
// Your database's identifier quoting
format!("\"{}\"", ident.replace('"', "\"\""))
}
fn emit_limit(&self, limit: u64, offset: u64) -> String {
// Your database's LIMIT syntax
if offset > 0 {
format!(" LIMIT {} OFFSET {}", limit, offset)
} else {
format!(" LIMIT {}", limit)
}
}
fn supports_window_functions(&self) -> bool {
true // or false, depending on your DB
}
fn supports_cte(&self) -> bool {
true // or false
}
fn supports_full_join(&self) -> bool {
true // or false
}
// Override other methods as needed for your database
}

Update src/query_ast/emitter/dialect.rs:
pub fn get_dialect(db_type: &DatabaseType) -> Box<dyn SqlDialect> {
match db_type {
DatabaseType::MySQL => Box::new(MySqlDialect),
DatabaseType::PostgreSQL => Box::new(PostgresDialect),
// ... other databases ...
DatabaseType::YourNewDB => Box::new(YourDbDialect), // <- Add here
}
}

Create src/query_ast/executors/your_db_executor.rs:
use async_trait::async_trait;
use crate::query_ast::executor::{DatabaseExecutor, QueryResult};
use crate::query_ast::errors::QueryAstError;
use crate::models::enums::DatabaseType;
pub struct YourDbExecutor {
// Connection pool or client reference
}
impl YourDbExecutor {
pub fn new() -> Self {
Self {}
}
}
#[async_trait]
impl DatabaseExecutor for YourDbExecutor {
fn database_type(&self) -> DatabaseType {
DatabaseType::YourNewDB
}
async fn execute_query(
&self,
sql: &str,
database_name: Option<&str>,
connection_id: i64,
) -> Result<QueryResult, QueryAstError> {
// 1. Get connection from pool using connection_id
// 2. Switch database if database_name is provided
// 3. Execute SQL query
// 4. Convert results to (Vec<String>, Vec<Vec<String>>)
// 5. Return results
todo!("Implement query execution for your database")
}
fn validate_query(&self, sql: &str) -> Result<(), QueryAstError> {
// Optional: Add database-specific validation
Ok(())
}
}

Register in src/query_ast/executor.rs:
impl ExecutorRegistry {
pub fn with_defaults() -> Self {
let mut registry = Self::new();
// ... existing executors ...
#[cfg(feature = "your_db")]
registry.register(Box::new(super::executors::YourDbExecutor::new()));
registry
}
}

Create src/driver_your_db.rs:
use crate::models::enums::DatabaseType;
use crate::window_egui::Tabular;
/// Create connection pool for your database
pub async fn create_connection_pool(
host: &str,
port: u16,
username: &str,
password: &str,
database: Option<&str>,
) -> Result<YourDbConnection, Box<dyn std::error::Error>> {
// Implement connection creation
todo!()
}
/// Fetch tables from your database
pub async fn fetch_tables_from_connection(
tabular: &Tabular,
connection_id: i64,
database_name: Option<String>,
) -> Result<Vec<String>, Box<dyn std::error::Error>> {
// Implement table listing
todo!()
}
/// Fetch columns from a table
pub async fn fetch_columns_from_table(
tabular: &Tabular,
connection_id: i64,
database_name: Option<String>,
table_name: &str,
) -> Result<Vec<(String, String)>, Box<dyn std::error::Error>> {
// Implement column listing (name, type)
todo!()
}

In src/connection.rs, add your database to the connection creation logic:
pub async fn get_or_create_connection_pool(
tabular: &mut Tabular,
connection_id: i64,
) -> Result<models::enums::DatabasePool, Box<dyn std::error::Error>> {
// ... existing code ...
match connection.connection_type {
models::enums::DatabaseType::MySQL => { /* ... */ }
models::enums::DatabaseType::PostgreSQL => { /* ... */ }
// ... other databases ...
models::enums::DatabaseType::YourNewDB => {
let client = driver_your_db::create_connection_pool(
&connection.host,
connection.port,
&connection.username,
&connection.password,
connection.database.as_deref(),
).await?;
let pool = models::enums::DatabasePool::YourNewDB(Arc::new(client));
tabular.connection_pools.insert(connection_id, pool.clone());
Ok(pool)
}
}
}

In src/sidebar_database.rs, add folder icon and logic:
pub fn create_database_folders_from_connections(connections: &[models::structs::Connection]) -> Vec<models::structs::TreeNode> {
// ... existing code ...
models::enums::DatabaseType::YourNewDB => {
let mut node = models::structs::TreeNode {
name: format!("Your DB Connections ({})", count),
node_type: models::enums::NodeType::YourNewDBFolder,
// ... rest of initialization ...
};
node
}
}

Add to src/models/enums.rs:
pub enum NodeType {
// ... existing types ...
YourNewDBFolder, // <- Add here
}

#[cfg(test)]
mod tests {
use super::*;
#[tokio::test]
async fn test_your_db_query() {
let executor = YourDbExecutor::new();
let result = executor.execute_query(
"SELECT * FROM users LIMIT 10",
None,
1,
).await;
assert!(result.is_ok());
}
#[test]
fn test_your_db_dialect() {
let dialect = YourDbDialect;
assert_eq!(dialect.quote_ident("table"), "\"table\"");
}
}

- Create a test connection in the UI
- Test basic operations: connect, list tables, query data
- Test AST features: filtering, sorting, pagination
- Test error handling: invalid queries, connection failures
- Use connection pools for better performance
- Configure appropriate pool size (5-20 connections typically)
- Implement connection health checks
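As a rough guide, a pool along these lines covers the three points above. This hedged example uses sqlx's PgPoolOptions; your database's driver will expose different options, and Tabular's actual pool setup may differ:

```rust
use sqlx::postgres::{PgPool, PgPoolOptions};
use std::time::Duration;

// Hedged sketch; not Tabular's actual pool configuration.
async fn make_pool(url: &str) -> Result<PgPool, sqlx::Error> {
    PgPoolOptions::new()
        .min_connections(5)        // keep a few connections warm
        .max_connections(20)       // upper bound from the guideline above
        .acquire_timeout(Duration::from_secs(10))
        .test_before_acquire(true) // health-check connections on checkout
        .connect(url)
        .await
}
```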
- The rewrite layer applies database-agnostic optimizations
- Add database-specific optimizations in your executor
- Consider query result caching for expensive queries
The AST layer automatically caches compiled plans. Ensure your SQL emission is deterministic for cache hits.
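One common way determinism breaks is iterating an unordered map while emitting; a hypothetical illustration:

```rust
use std::collections::BTreeMap;

// Hypothetical: emitting from a HashMap would yield a different column order
// (and thus different SQL text) across runs, defeating the plan cache.
// An ordered container keeps the emitted SQL byte-identical.
fn emit_select(table: &str, columns: &BTreeMap<String, String>) -> String {
    let cols: Vec<&str> = columns.keys().map(String::as_str).collect();
    format!("SELECT {} FROM {}", cols.join(", "), table) // stable order
}
```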
Add a feature flag for optional compilation:
In Cargo.toml:
[features]
default = ["mysql", "postgres", "sqlite"]
mysql = ["sqlx/mysql"]
postgres = ["sqlx/postgres"]
sqlite = ["sqlx/sqlite"]
your_db = ["your_db_driver"] # <- Add here

- Quote Identifiers: Always use dialect.quote_ident(), never hardcode quotes
- NULL Handling: Different databases handle NULL differently
- Type Mapping: Map SQL types to your database's native types
- Transaction Support: Implement proper transaction handling
- Error Messages: Provide clear, actionable error messages
Here's a minimal example of adding CockroachDB (which is PostgreSQL-compatible):
// 1. Add to DatabaseType enum
pub enum DatabaseType {
// ... existing ...
CockroachDB,
}
// 2. Reuse PostgreSQL dialect (it's compatible!)
pub fn get_dialect(db_type: &DatabaseType) -> Box<dyn SqlDialect> {
match db_type {
// ... existing ...
DatabaseType::CockroachDB => Box::new(PostgresDialect), // Reuse!
}
}
// 3. Create executor (can extend PostgreSQL executor)
pub struct CockroachExecutor {
pg_executor: PostgresExecutor, // Composition over inheritance!
}
#[async_trait]
impl DatabaseExecutor for CockroachExecutor {
fn database_type(&self) -> DatabaseType {
DatabaseType::CockroachDB
}
async fn execute_query(&self, sql: &str, db: Option<&str>, conn_id: i64)
-> Result<QueryResult, QueryAstError>
{
// Delegate to PostgreSQL executor since CockroachDB is wire-compatible
self.pg_executor.execute_query(sql, db, conn_id).await
}
}After adding your database, measure:
- Query compilation time (should be < 1ms for simple queries)
- Cache hit rate (should be > 80% for repeated queries)
- Execution overhead (AST layer adds < 5% overhead)
- Check existing drivers (driver_mysql.rs, driver_postgres.rs)
- Review the executor trait in query_ast/executor.rs
- Look at dialect implementations in query_ast/emitter/dialect.rs
- See rewrite rules in query_ast/rewrite.rs for optimization ideas
- ✅ Separation of Concerns: Logic layer independent of database specifics
- ✅ Reusability: Share optimizations across all databases
- ✅ Testability: Mock executors for unit testing
- ✅ Extensibility: Add new databases without touching core logic
- ✅ Performance: Aggressive caching at every layer
- ✅ Type Safety: Rust's type system catches errors at compile time
This document explains how to build Tabular for different platforms using the provided build scripts.
- Rust (latest stable version)
- Make (for Unix-like systems)
- Git (for version control)
- Xcode Command Line Tools: xcode-select --install
- lipo: Usually included with Xcode (for universal binaries)
- hdiutil: For creating DMG files (included with macOS)
- GCC/Clang: For native compilation
- Cross: For cross-compilation (cargo install cross)
- Docker: Required by cross for cross-compilation
- MSVC: Visual Studio Build Tools or Visual Studio
- PowerShell: For packaging scripts
The easiest way to build Tabular is using the provided build script:
# Build for current platform only
./build.sh
# Build for specific platform
./build.sh macos
./build.sh linux
./build.sh windows
# Build for all platforms
./build.sh all
# Install dependencies and build
./build.sh all --deps
# Clean and build
./build.sh macos --clean

If you prefer using Make directly:
# Show available targets
make help
# Install build dependencies
make install-deps
# Build for specific platforms
make bundle-macos
make bundle-linux
make bundle-windows
# Build everything
make release
# Clean build artifacts
make clean

Creates a universal binary that runs on both Intel and Apple Silicon Macs:

make bundle-macos

Outputs:
- dist/macos/Tabular.app - macOS application bundle
- dist/macos/Tabular.dmg - Disk image for distribution
Creates binaries for x86_64 and aarch64 Linux systems:
make bundle-linux

Outputs:
- dist/linux/tabular-x86_64-unknown-linux-gnu.tar.gz
- dist/linux/tabular-aarch64-unknown-linux-gnu.tar.gz
- AppDir structure for potential AppImage creation
Creates executables for x86_64 and aarch64 Windows systems:
make bundle-windows

Outputs:
- dist/windows/tabular-x86_64-pc-windows-msvc.zip
- dist/windows/tabular-aarch64-pc-windows-msvc.zip
# Development build (debug mode)
make dev
# Run the application
make run
# Run tests
make test
# Check code formatting and linting
make check
# Format code
make fmt
# Show project information
make info

| Platform | Architecture | Target Triple |
|---|---|---|
| macOS | Intel (x86_64) | x86_64-apple-darwin |
| macOS | Apple Silicon (ARM64) | aarch64-apple-darwin |
| Linux | x86_64 | x86_64-unknown-linux-gnu |
| Linux | ARM64 | aarch64-unknown-linux-gnu |
| Windows | x86_64 | x86_64-pc-windows-msvc |
| Windows | ARM64 | aarch64-pc-windows-msvc |
- Dependency Installation: Install Rust targets and required tools
- Cross-Compilation: Build for each target platform
- Universal Binary Creation: Combine macOS binaries using lipo
- Packaging: Create platform-specific packages (DMG, tar.gz, zip)
- Distribution: Output ready-to-distribute packages
After building, the following structure is created:
tabular/
├── target/ # Rust build artifacts
│ ├── x86_64-apple-darwin/
│ ├── aarch64-apple-darwin/
│ ├── universal-apple-darwin/
│ └── ...
├── dist/ # Distribution packages
│ ├── macos/
│ │ ├── Tabular.app
│ │ └── Tabular.dmg
│ ├── linux/
│ │ ├── tabular-x86_64.tar.gz
│ │ └── tabular-aarch64.tar.gz
│ └── windows/
│ ├── tabular-x86_64.zip
│ └── tabular-aarch64.zip
└── ...
The project includes GitHub Actions workflows for automated building:
- Push to main/develop: Builds all platforms
- Pull requests: Builds all platforms for testing
- Tag push (v*): Builds and creates a GitHub release
- All builds are saved as GitHub Actions artifacts
- Tagged releases automatically create GitHub releases with binaries
# Solution: Install missing targets
make install-deps

# Solution: Install cross and Docker
cargo install cross
# Make sure Docker is running

# For development builds, you can skip code signing
# For distribution, you'll need an Apple Developer Certificate

# Make sure you have MSVC build tools installed
# Alternative: Use the GNU toolchain (x86_64-pc-windows-gnu)

If you encounter issues:
- Check the build logs for specific error messages
- Ensure all prerequisites are installed
- Try cleaning and rebuilding: make clean && make release
- Check the GitHub Issues for known problems
- Universal macOS Binary: The macOS build creates a universal binary that works on both Intel and Apple Silicon Macs
- Cross-Compilation: Linux and Windows builds use cross-compilation for ARM64 targets
- Size Optimization: Release builds include optimizations for smaller binary size
- Dependencies: The build system automatically handles Rust target installation
Here's a complete example of building for all platforms:
# Clone the repository
git clone https://github.com/your-repo/tabular.git
cd tabular
# Install dependencies and build everything
./build.sh all --deps
# Check the results
ls -la dist/

This will create distribution-ready packages for macOS, Linux, and Windows.
Join us at Telegram