Project: Oneiric Engine – Dream Analyzer #71

@kumarbharath851

Description

Track

Creative Apps (GitHub Copilot)

Project Name

Oneiric Engine – Dream Analyzer

GitHub Username

kumarbharath851

Repository URL

https://github.com/kumarbharath851/oneiric-engine.git

Project Description

A full‑stack web app where users paste their dreams and receive Jungian‑inspired psychological, archetypal and personal‑growth insights. The backend calls a mock LLM (switchable to OpenAI/Azure), stores history in SQLite/Prisma, and generates an “8K cinematic prompt” for image tools. Built with Next.js 15, TypeScript and Zod validation; deployable as a demo or foundation for more advanced agent workflows.

Demo Video or Screenshots

Demo Video: https://github.com/kumarbharath851/oneiric-engine/tree/main/demo-videos
Screenshots: https://github.com/kumarbharath851/oneiric-engine/tree/main/public/screenshots

Primary Programming Language

TypeScript/JavaScript

Key Technologies Used

  • Next.js 15 (App Router)
  • TypeScript
  • Prisma + SQLite
  • Zod validation
  • LLM abstraction (mock/OpenAI/Azure)
  • GitHub Copilot for development

Submission Type

Individual

Team Members

No response

Submission Requirements

  • My project meets the track-specific challenge requirements
  • My repository includes a comprehensive README.md with setup instructions
  • My code does not contain hardcoded API keys or secrets
  • I have included demo materials (video or screenshots)
  • My project is my own work with proper attribution for any third-party code
  • I agree to the Code of Conduct
  • I have read and agree to the Disclaimer
  • My submission does NOT contain any confidential, proprietary, or sensitive information
  • I confirm I have the rights to submit this content and grant the necessary licenses

Quick Setup Summary

1. clone & enter repo

git clone https://github.com/kumarbharath851/oneiric-engine.git
cd oneiric-engine

2. install deps

npm install

3. create env file

cat <<'EOF' > .env.local
DATABASE_URL="file:./dev.db"
LLM_PROVIDER="mock"     # switch to openai/azure when ready
OPENAI_API_KEY="sk-…"   # optional, for a real OpenAI LLM
AZURE_OPENAI_KEY=""     # optional, for Azure
EOF

4. prepare database

npm run db:migrate
npm run db:seed # optional sample dream

5. start dev server

npm run dev

Technical Highlights

  • Full stack: Next.js 15 (App Router) frontend + Next.js route handlers for server logic.
  • Type safety: End‑to‑end TypeScript with shared interfaces and Zod validation for inputs/responses.
  • LLM abstraction: Pluggable LLM client with deterministic mock (fast, repeatable tests) and drop‑in support for OpenAI/Azure via environment variables.
  • Persistence: Prisma + SQLite with migrations and seed script for reproducible sample data.
  • Developer ergonomics: Scripts for migrate/seed/dev, clear README and testing guides, and a screenshot workflow to produce reproducible demo assets.
  • Safety & secrets: All sensitive keys kept out of source; documented .env.local usage and runtime checks to prevent accidental exposure.
  • Documentation & demo: Built screenshot capture guide and demo/video folder so reviewers can reproduce UI states easily.
  • Deployment readiness: Minimal production steps (set env vars, build, and deploy to Vercel or another host); code organized so CI and secret scanning can be added later.
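The pluggable LLM client described above can be sketched roughly as follows. This is an illustrative assumption, not the repository's actual code: the interface, class, and field names (`LLMClient`, `MockLLMClient`, `DreamInsight`, `analyzeDream`) are hypothetical stand-ins for whatever identifiers the repo uses.

```typescript
// Hedged sketch of a pluggable LLM client with a deterministic mock.
// All names here are illustrative, not the repo's actual identifiers.

interface DreamInsight {
  archetypes: string[];
  summary: string;
}

interface LLMClient {
  analyzeDream(text: string): Promise<DreamInsight>;
}

// Deterministic mock: the same input always yields the same output,
// which keeps UI tests fast and repeatable.
class MockLLMClient implements LLMClient {
  async analyzeDream(text: string): Promise<DreamInsight> {
    const archetype = text.toLowerCase().includes("water")
      ? "The Unconscious"
      : "The Shadow";
    return {
      archetypes: [archetype],
      summary: `Mock Jungian analysis of a ${text.length}-character dream.`,
    };
  }
}

// Provider is selected once at startup from the LLM_PROVIDER env var,
// so the rest of the app only ever sees the LLMClient interface.
function createLLMClient(
  provider: string = process.env.LLM_PROVIDER ?? "mock"
): LLMClient {
  switch (provider) {
    case "mock":
      return new MockLLMClient();
    // case "openai": return new OpenAIClient(process.env.OPENAI_API_KEY!);
    // case "azure":  return new AzureClient(process.env.AZURE_OPENAI_KEY!);
    default:
      throw new Error(`Unknown LLM_PROVIDER: ${provider}`);
  }
}
```

Because callers depend only on the interface, swapping the mock for a real OpenAI or Azure client is a one-line environment change rather than a code change.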

Challenges & Learnings

  • Secrets handling: Never store API keys in repo; enforce .env usage and scan history for accidental commits. This required pruning intermediate docs and validating README examples.
  • Image generation approach: Keeping “Dream on Laser” as a prompt-only artifact avoids shipping API keys and lets users choose their own image service; the trade-off is that actual image generation happens in external tooling.
  • Cross‑platform issues: Windows CRLF warnings appeared in migration files; resolved by normalizing line endings and ensuring repository settings/gitattributes are consistent.
  • Testing vs. production parity: The mock LLM makes UI testing fast and deterministic, but real LLMs behave differently (latency, token limits, hallucinations). Add integration tests and guardrails (retries, timeouts, schema validation) before swapping to a real model.
  • Documentation hygiene: Multiple overlapping MD files confused contributors; consolidating into focused READMEs and a single SCREENSHOT-CAPTURE-ORDER.md improved clarity.
  • Next steps learned: Add secret scanning in CI, E2E tests (Playwright), basic performance monitoring, and an opt‑in demo deployment so judges can run the app without local setup.
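The guardrails mentioned under testing-vs-production parity could look roughly like this. A sketch under assumptions: `withTimeout`, `withRetries`, and the hand-rolled `validateInsight` are hypothetical helpers standing in for the project's actual code and Zod schemas (a plain shape check is used here so the snippet has no dependencies).

```typescript
// Hedged sketch of guardrails to add before swapping the mock for a
// real LLM: a timeout, retries with exponential backoff, and response
// validation. Names are illustrative, not the repo's actual helpers.

// Reject if the underlying promise takes longer than `ms`.
function withTimeout<T>(p: Promise<T>, ms: number): Promise<T> {
  return Promise.race([
    p,
    new Promise<T>((_, reject) =>
      setTimeout(() => reject(new Error(`LLM call timed out after ${ms}ms`)), ms)
    ),
  ]);
}

// Retry a flaky async call with exponential backoff (100ms, 200ms, ...).
async function withRetries<T>(fn: () => Promise<T>, attempts = 3): Promise<T> {
  let lastErr: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastErr = err;
      if (i < attempts - 1) {
        await new Promise((r) => setTimeout(r, 2 ** i * 100));
      }
    }
  }
  throw lastErr;
}

// Minimal shape check standing in for the project's Zod schema:
// reject any LLM response that is missing a string `summary`.
function validateInsight(raw: unknown): { summary: string } {
  if (
    typeof raw !== "object" ||
    raw === null ||
    typeof (raw as { summary?: unknown }).summary !== "string"
  ) {
    throw new Error("LLM response failed schema validation");
  }
  return raw as { summary: string };
}
```

A real call would then be wrapped as `validateInsight(await withTimeout(withRetries(() => client.analyzeDream(text)), 10_000))`, so latency spikes, transient failures, and malformed model output all surface as typed errors instead of corrupting stored history.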

Contact Information

kumarbharath851@gmail.com or bharathkumarnoora@gmail.com

Country/Region

United States
