- Description
- Architecture Overview
- Tech Stack
- Project Structure
- Prerequisites
- Installation
- Configuration
- Running the Application
- How It Works
- Available Scripts
- Building for Production
- Troubleshooting
- Contributing
- License
This repository contains a full-stack ChatGPT clone that allows you to run an OpenAI-powered conversational AI in your local browser. It is composed of two independently running processes: a lightweight Express.js backend that acts as a secure proxy to the OpenAI API, and a React + TypeScript frontend served by Vite. The backend exists specifically so that your OpenAI API key is never exposed to the client.
The application supports multiple simultaneous chat sessions, tracks conversation history in client-side state, and provides a clean, responsive UI inspired by the original ChatGPT interface.
```
┌─────────────────────────────────────────────────────────┐
│                         Browser                         │
│                                                         │
│     React (Vite Dev Server — http://localhost:5173)     │
│                                                         │
│  ┌─────────────┐   POST /completions   ┌────────────┐   │
│  │  Frontend   │ ─────────────────────▶│  Backend   │   │
│  │  (React)    │ ◀─────────────────────│ (Express)  │   │
│  └─────────────┘     JSON response     └─────┬──────┘   │
│                                              │          │
└──────────────────────────────────────────────┼──────────┘
                                               │ HTTPS
                                   ┌───────────▼──────────┐
                                   │      OpenAI API      │
                                   │ /v1/chat/completions │
                                   │   (gpt-3.5-turbo)    │
                                   └──────────────────────┘
```
The frontend never communicates directly with OpenAI. All API requests flow through the Express server running on port 8000, which injects the secret API key server-side before forwarding the request. This is the correct pattern for any production-adjacent application that integrates a third-party API key.
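The proxy step can be sketched as follows. This is a hypothetical illustration, not the literal contents of server.js: `buildOpenAIRequest` and `completionsHandler` are invented names, though the route, model, token cap, and header shape match the description above.

```javascript
// Hypothetical sketch of the proxy logic (invented helper names; the real
// implementation lives in server.js and may be organized differently).

// Build the body forwarded to OpenAI: model, messages array, 100-token cap.
function buildOpenAIRequest(message) {
  return {
    model: "gpt-3.5-turbo",
    messages: [{ role: "user", content: message }],
    max_tokens: 100,
  };
}

// Express-style handler for POST /completions. fetchImpl is injectable so
// the logic can be exercised without network access.
async function completionsHandler(req, res, fetchImpl = fetch) {
  const upstream = await fetchImpl("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      // The secret key is attached server-side; the browser never sees it.
      Authorization: `Bearer ${process.env.CHAT_GPT_API_KEY}`,
    },
    body: JSON.stringify(buildOpenAIRequest(req.body.message)),
  });
  // Forward OpenAI's entire response body to the frontend.
  res.json(await upstream.json());
}
```

Keeping the request-building step a pure function makes the token cap and model choice easy to change and test in isolation.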
| Layer | Technology | Version |
|---|---|---|
| Frontend framework | React | 19 |
| Language | TypeScript | 5.9 |
| Build tool / Dev server | Vite | 7 |
| Backend framework | Express | 5 |
| Runtime | Node.js | ≥ 18 (ESM required) |
| API | OpenAI Chat Completions | gpt-3.5-turbo |
| Environment variables | dotenv | 17 |
| CORS middleware | cors | 2.8 |
| Icon library | react-icons | 5 |
| Dev process manager | nodemon | 3 |
| Concurrent process runner | concurrently | (via npx) |
```
chatgpt-clone/
├── .env                   # Secret environment variables (never commit this)
├── .gitignore
├── index.html             # Vite HTML entry point
├── package.json
├── server.js              # Express backend (API proxy to OpenAI)
├── tsconfig.json          # TypeScript config for src/
├── tsconfig.node.json     # TypeScript config for vite.config.ts
├── vite.config.ts         # Vite configuration
├── public/
│   └── robots.txt
└── src/
    ├── App.tsx            # Root component — owns all chat state
    ├── Chat.tsx           # TypeScript interface: Chat { title, role, content }
    ├── index.css          # Global styles + responsive media queries
    ├── main.tsx           # React DOM entry point
    ├── vite-env.d.ts      # Vite environment type declarations
    └── components/
        ├── BottomSection.tsx  # Input field + send button + disclaimer
        ├── Info.tsx           # Disclaimer text
        ├── InputButton.tsx    # Send button with loading spinner
        ├── PageFeed.tsx       # Renders current session message list
        └── SideBar.tsx        # Chat history list + "New Chat" button
```
Before you begin, ensure the following tools are installed on your machine.
This project uses ES Modules (`"type": "module"` in package.json), which requires Node.js 18+. To verify your installed version:

```bash
node --version
```

If you need to install or upgrade Node.js, use the official installer at https://nodejs.org or a version manager such as nvm:

```bash
# Install nvm (if not already installed)
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash

# Install and use Node.js 20 LTS
nvm install 20
nvm use 20
```

npm ships with Node.js. Confirm it is available:

```bash
npm --version
```

You must have an active OpenAI account and a valid API key with access to gpt-3.5-turbo. You can obtain one at https://platform.openai.com/api-keys.
> **Important:** OpenAI API usage is billed. Review the pricing page before sending requests. Each response in this application is capped at 100 tokens to limit consumption.
Follow these steps exactly to set up the project from scratch.
Clone the repository and move into it:

```bash
git clone https://github.com/milliorn/chatgpt-clone.git
cd chatgpt-clone
```

Install all runtime and development dependencies declared in package.json, including React, Express, Vite, TypeScript, and supporting libraries:

```bash
npm install
```

A node_modules/ directory will be created in the project root. This directory is excluded from version control via .gitignore and should never be committed.
The backend reads your OpenAI API key from an environment variable at startup. You must create a .env file in the project root (the same directory as server.js and package.json).
```bash
# From the project root
touch .env
```

Open .env in your editor and add the following line, replacing the placeholder with your actual key:

```
CHAT_GPT_API_KEY=your_openai_api_key_here
```

> **Critical:** The environment variable name is `CHAT_GPT_API_KEY`. The backend server reads exactly this name via `process.env.CHAT_GPT_API_KEY`. Using any other variable name (e.g. `OPENAI_API_KEY`) will result in `undefined` being sent as the Authorization token, and every request to OpenAI will return a `401 Unauthorized` error.

> **Security:** The `.env` file contains a secret credential and must never be committed to version control. Verify that `.env` appears in your `.gitignore` file. If you accidentally commit an API key, invalidate it immediately at https://platform.openai.com/api-keys and generate a new one.
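Because a missing or misnamed variable only surfaces later as a 401 from OpenAI, a fail-fast check at startup makes the failure obvious immediately. The guard below is a suggested pattern, not code that necessarily exists in server.js; `requireApiKey` is an invented name:

```javascript
// Suggested startup guard (not necessarily present in server.js): fail fast
// if the key is missing instead of sending "Bearer undefined" to OpenAI.
function requireApiKey(env) {
  const apiKey = env.CHAT_GPT_API_KEY;
  if (!apiKey) {
    throw new Error(
      "CHAT_GPT_API_KEY is not set. Check the .env file in the project root."
    );
  }
  return apiKey;
}

// In server.js this would run once at startup, after dotenv has populated
// process.env, e.g.: const apiKey = requireApiKey(process.env);
```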
The application has two independent processes that must both be running simultaneously: the backend (Express, port 8000) and the frontend (Vite, port 5173).
```bash
npm run dev
```

This uses `concurrently` to start both the backend and the frontend in the same terminal session. You will see interleaved log output from both processes. The application is ready when you see both:

```
[backend]  App is listening on http://localhost:8000
[frontend] ➜  Local:   http://localhost:5173/
```
Open your browser and navigate to http://localhost:5173.
If you need to observe each process's log output independently (useful for debugging), open two terminal windows.
Terminal 1 — Backend:
```bash
npm run dev:backend
```

The backend starts with nodemon, which automatically restarts the Express server whenever server.js is modified. Expected output:

```
[nodemon] starting `node server.js`
App is listening on http://localhost:8000
```
Terminal 2 — Frontend:
```bash
npm run dev:frontend
```

Vite starts its development server with Hot Module Replacement (HMR) enabled. Expected output:

```
VITE v7.x.x  ready in Xms
➜  Local:   http://localhost:5173/
➜  Network: http://0.0.0.0:5173/
```
Open your browser and navigate to http://localhost:5173.
> **Note:** The frontend is configured with `host: "0.0.0.0"`, meaning it is accessible from other devices on your local network using your machine's local IP address (e.g., `http://192.168.1.x:5173`). This is useful for testing on mobile devices.
- The user types a message into the input field in the browser and clicks the send button.
- The `InputButton` component calls `getMessage()` in `App.tsx`, setting a local loading state that displays a spinner.
- `App.tsx` sends a `POST` request to `http://localhost:8000/completions` with a JSON body of `{ message: "user input" }`.
- The Express server receives the request, reads `CHAT_GPT_API_KEY` from the environment, and constructs an authenticated request to `https://api.openai.com/v1/chat/completions`.
- The request body sent to OpenAI specifies the model (`gpt-3.5-turbo`), wraps the user message in the required `messages` array format, and caps the response at `max_tokens: 100`.
- OpenAI returns a JSON response containing a `choices` array. The server forwards the entire response body to the frontend.
- The frontend reads `data.choices[0].message` (an object with `role` and `content`) and stores it in the `message` state variable.
- A `useEffect` hook in `App.tsx` watches for changes to `message`. When a new message arrives, it appends both the user's original message and the AI's response to the `previousChats` state array as two new `Chat` objects, both tagged with the current session title.
- The `PageFeed` component filters `previousChats` by the current title and renders the conversation list.
- The `SideBar` component derives a list of unique titles from `previousChats` and renders them for navigation. Clicking a title restores the view of that session.
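The frontend's side of the request/response steps above can be sketched as a small helper. `fetchCompletion` is a hypothetical name; in the actual code this logic lives inline in `getMessage()` in App.tsx:

```javascript
// Hypothetical helper mirroring the request flow described above
// (App.tsx performs this inline in getMessage()). fetchImpl is injectable
// so the flow can be exercised without a running backend.
async function fetchCompletion(userInput, fetchImpl = fetch) {
  const response = await fetchImpl("http://localhost:8000/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ message: userInput }),
  });
  const data = await response.json();
  // OpenAI's response shape: { choices: [{ message: { role, content } }] }
  return data.choices[0].message;
}
```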
All state is managed locally in App.tsx using React hooks. There is no external state management library and no persistent storage — chat history exists only for the duration of the browser session. Refreshing the page clears all history.
| State variable | Type | Purpose |
|---|---|---|
| `currentTitle` | `string` | Title of the active chat session (set to the first user message) |
| `message` | `{ role: string; content: string }` | The latest response message from OpenAI |
| `previousChats` | `Chat[]` | Full history of all messages across all sessions |
| `value` | `string` | Controlled value of the text input field |
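The history update described above (two `Chat` objects appended per completed exchange) can be sketched as pure functions. `appendChats` and `uniqueTitles` are invented names; App.tsx performs the first update inside its `useEffect`, and SideBar derives titles similarly:

```javascript
// Invented helper illustrating the useEffect update: each completed exchange
// appends the user's message and the AI's reply, both tagged with the
// current session title.
function appendChats(previousChats, currentTitle, userText, aiMessage) {
  return [
    ...previousChats,
    { title: currentTitle, role: "user", content: userText },
    { title: currentTitle, role: aiMessage.role, content: aiMessage.content },
  ];
}

// Invented helper mirroring SideBar's derivation of unique session titles.
function uniqueTitles(chats) {
  return [...new Set(chats.map((chat) => chat.title))];
}
```

Returning a new array rather than mutating `previousChats` matches how React state updates must be written so re-renders are triggered.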
All scripts are defined in package.json and run via `npm run <script>`.
| Script | Command | Description |
|---|---|---|
| `dev` | `concurrently "npm run dev:backend" "npm run dev:frontend"` | Starts both servers concurrently |
| `dev:backend` | `npx nodemon server.js` | Starts the Express server with auto-restart on file changes |
| `dev:frontend` | `vite --host` | Starts the Vite dev server, accessible on the network |
| `build` | `tsc && vite build` | Type-checks the project, then compiles and bundles for production |
| `preview` | `vite preview` | Serves the production build locally for inspection |
| `prod` | `npm run build && npm run preview` | Builds and immediately previews the production output |
| `git-update` | `git fetch && git pull` | Fetches and pulls the latest changes from the remote |
The build script first runs the TypeScript compiler (`tsc`) in check-only mode (`"noEmit": true` in tsconfig.json) to catch any type errors, then runs `vite build` to produce an optimized production bundle.
```bash
npm run build
```

Output is written to the dist/ directory. To verify the production build locally before deploying:

```bash
npm run preview
```

This serves the dist/ assets via Vite's preview server (default port 4173).
> **Deployment note:** This project's Express backend is a long-running Node.js process and must be hosted on a platform that supports one (e.g., a VPS, Railway, Render, Fly.io). The static frontend in `dist/` can be served from any static host (Vercel, Netlify, Cloudflare Pages), but you must update the fetch URL in `App.tsx` from `http://localhost:8000/completions` to your deployed backend's URL.
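One low-friction way to handle that URL change is to derive the backend base URL from a Vite environment variable instead of hardcoding it. `VITE_API_URL` and `completionsUrl` are hypothetical names (Vite exposes variables prefixed with `VITE_` to client code via `import.meta.env`):

```javascript
// Hypothetical helper: derive the completions endpoint from configuration,
// falling back to the local backend. VITE_API_URL is an invented variable
// name; in the real app you would pass import.meta.env here.
function completionsUrl(env) {
  const base = env.VITE_API_URL || "http://localhost:8000";
  return `${base}/completions`;
}
```

With this pattern, deployment only requires setting `VITE_API_URL` at build time rather than editing source.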
- Verify both the backend and frontend are running.
- Open browser DevTools (F12) → Network tab → look for a failed `POST` request to `localhost:8000/completions`.
- Check the backend terminal for error output from the OpenAI API.
- Your `CHAT_GPT_API_KEY` value in `.env` is incorrect or invalid, or the variable name is misspelled.
- Stop the backend, correct the `.env` file, and restart with `npm run dev:backend`.
- The Express backend is not running. Start it with `npm run dev:backend` and confirm you see `App is listening on http://localhost:8000`.
- Delete `node_modules/` and reinstall: `rm -rf node_modules && npm install`.
- Verify your Node.js version is 18 or higher: `node --version`.
This is expected. The backend hardcodes `max_tokens: 100` in `server.js`. To increase the response length, edit line 32 in `server.js` and change the value:

```js
max_tokens: 500, // increase as needed
```
The backend will automatically reload via nodemon.
Contributions are welcome. To contribute:
- Fork the repository on GitHub.
- Create a feature branch: `git checkout -b feature/your-feature-name`
- Make your changes and ensure there are no TypeScript errors: `npm run build`
- Commit your changes with a descriptive message.
- Push to your fork and open a Pull Request against the `main` branch.
Please open an issue first for substantial changes so the approach can be discussed before implementation work begins.
This project is licensed under the MIT License. See the repository for full license terms.