
Conversation

@javadtgh commented Jul 24, 2025

This commit introduces a multi-server architecture to the Sanai panel, allowing you to manage clients across multiple servers from a central panel.

Key changes include:

  • Database Schema: Added a servers table to store information about slave servers.
  • Server Management: Implemented a new service and controller (MultiServerService and MultiServerController) for CRUD operations on servers.
  • Web UI: Created a new web page for managing servers, accessible from the sidebar.
  • Client Synchronization: Modified the InboundService to synchronize client additions, updates, and deletions across all active slave servers via a REST API.
  • API Security: Added an API key authentication middleware to secure the communication between the master and slave panels.
  • Multi-Server Subscriptions: Updated the subscription service to generate links that include configurations for all active servers.
  • Installation Script: Modified the install.sh script to generate a random API key during installation.

Known Issues:

  • The integration test for client synchronization (TestInboundServiceSync) is currently failing. It seems that the API request to the mock slave server is not being sent correctly or the API key is not being included in the request header. Further investigation is needed to resolve this issue.
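
For reference, the API-key middleware described above boils down to roughly the following sketch (assuming Gin, which the panel already uses, and an `X-API-Key` header; the exact header name and settings wiring in this PR may differ):

```go
package middleware

import (
	"crypto/subtle"
	"net/http"

	"github.com/gin-gonic/gin"
)

// APIKeyAuth rejects any request whose X-API-Key header does not match the
// key configured for this panel. An empty configured key disables all access.
func APIKeyAuth(expectedKey string) gin.HandlerFunc {
	return func(c *gin.Context) {
		got := c.GetHeader("X-API-Key")
		if expectedKey == "" ||
			subtle.ConstantTimeCompare([]byte(got), []byte(expectedKey)) != 1 {
			c.AbortWithStatusJSON(http.StatusUnauthorized, gin.H{
				"success": false,
				"msg":     "invalid API key",
			})
			return
		}
		c.Next()
	}
}
```

The failing `TestInboundServiceSync` is likely about the other half of this contract: the master-side sync code has to set the same header on every outgoing request to a slave.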

What is the pull request?

Which part of the application is affected by the change?

  • Frontend
  • Backend

Type of Changes

  • Bug fix
  • New feature
  • Refactoring
  • Other

Screenshots

@alireza0 (Collaborator)

Thank you for contributing.
Everything looks nice so far except for our main problem: syncing client/inbound traffic.

For this important problem we should use a "sync-sofar" solution to make sure that all usage on every slave server gets counted.
My personally preferred solution is to clear all usage from the slaves after each sync: during traffic sync, add all slave traffic to the master, then evaluate traffic exhaustion. If a client/inbound is exhausted, disable it on the master and on all slaves.
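
Roughly, the flow I have in mind looks like this; an illustrative sketch only, the type and method names are placeholders rather than the panel's real services:

```go
package trafficsync

// Usage is one client's traffic counted on a slave since the last sync.
type Usage struct {
	ClientEmail string
	Up, Down    int64
}

// Panel is an illustrative abstraction over the master/slave HTTP APIs.
type Panel interface {
	FetchUsage() ([]Usage, error)          // traffic counted since the last sync
	AddUsage(email string, up, down int64) // accumulate onto the stored totals
	ResetUsage() error                     // clear counters so nothing is counted twice
	ExhaustedClients() []string            // clients over their traffic limit
	DisableClient(email string)
}

// SyncTraffic adds all slave traffic to the master, clears the slaves,
// then evaluates exhaustion and disables exhausted clients everywhere.
func SyncTraffic(master Panel, slaves []Panel) error {
	for _, slave := range slaves {
		usages, err := slave.FetchUsage()
		if err != nil {
			return err
		}
		for _, u := range usages {
			master.AddUsage(u.ClientEmail, u.Up, u.Down)
		}
		if err := slave.ResetUsage(); err != nil {
			return err
		}
	}
	for _, email := range master.ExhaustedClients() {
		master.DisableClient(email)
		for _, slave := range slaves {
			slave.DisableClient(email)
		}
	}
	return nil
}
```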

Other concerns:
Slaves:

  1. Disable the web interface.
  2. Limit API communication to the master's source address only (see the sketch after this list).
  3. Sync the IP limit.
  4. Sync certificate files.
  5. Consider the Listen IP of each inbound (to be ignored or updatable via the master).
  6. Consider MultiDomain (to be empty, or reserve one config per server).
  7. Consider outbounds, especially for tunnels.
  8. Consider reverse proxy.
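
For concern 2, a minimal sketch of what I mean, assuming Gin and a configured master address (how that address is stored is up to the implementation):

```go
package middleware

import (
	"net/http"

	"github.com/gin-gonic/gin"
)

// MasterOnly rejects slave-API calls that do not come from the master panel's
// address, so the API key is not the only line of defense.
func MasterOnly(masterIP string) gin.HandlerFunc {
	return func(c *gin.Context) {
		if c.ClientIP() != masterIP {
			c.AbortWithStatus(http.StatusForbidden)
			return
		}
		c.Next()
	}
}
```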

@MHSanaei changed the title from "feat: Add multi-server support for Sanai panel" to "feat: Add multi-server support" on Jul 24, 2025
@APT-ZERO

3X-UI is growing faster than all other panels!
I'm waiting for Sing-box and PostgreSQL support :D

@MK0ltra commented Jul 25, 2025

Currently it doesn't support already-created inbounds/clients. When a user adds a slave, I think there should be an option to select which inbounds (along with all their clients) the user wants to duplicate on the slave server, with up/down usage stats reset to 0.
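
Something along these lines is what I mean; an illustrative sketch only, since the real inbound model has more fields and keeps its clients in the settings JSON:

```go
package service

// Inbound is a stripped-down illustration of the panel's inbound record.
type Inbound struct {
	Id       int
	Up, Down int64  // accumulated traffic stats
	Settings string // clients are stored as JSON inside the inbound settings
}

// duplicateForSlave clones the selected inbounds for a newly added slave,
// letting the slave assign its own IDs and starting all usage stats at zero.
func duplicateForSlave(selected []*Inbound) []*Inbound {
	copies := make([]*Inbound, 0, len(selected))
	for _, in := range selected {
		dup := *in // copies remark, port, protocol, settings (incl. clients), etc.
		dup.Id = 0
		dup.Up = 0
		dup.Down = 0
		copies = append(copies, &dup)
	}
	return copies
}
```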

@MHSanaei force-pushed the feature/multi-server-support branch from 2a6b2ca to 11dc068 on July 27, 2025 15:26
@AliAkhgar (Contributor)

Such a nice feature, I would love to contribute to it!

@AliAkhgar (Contributor)

@alireza0 @javadtgh
Please also take the External Traffic Inform feature into account, as we are already using it to track traffic and maintain a separate central server that controls, monitors, and manages users.
Thanks.

@liamnees

To make it multi-server, just add traffic equalization and that's it. The problem with the previous scripts was that, in addition to equalizing traffic, they also managed time and clients. Please add a script that has no capabilities other than equalizing traffic, and have it run that equalization every 2 minutes. I don't think there will be any problems this way.
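
To be concrete, the whole script could be as small as the sketch below (names are illustrative; here "equalization" pushes the highest counter seen for each client to every server, which is idempotent, while a delta-based sum would be more accurate but needs per-server bookkeeping):

```go
package equalizer

import "time"

// Panel is an illustrative view of one server's API.
type Panel interface {
	Usage() map[string]int64          // client email -> traffic counter
	SetUsage(totals map[string]int64) // overwrite counters
}

// runEqualizer equalizes traffic counters across all servers every 2 minutes
// and does nothing else: no time or client management.
func runEqualizer(servers []Panel, stop <-chan struct{}) {
	ticker := time.NewTicker(2 * time.Minute)
	defer ticker.Stop()
	for {
		select {
		case <-stop:
			return
		case <-ticker.C:
			highest := map[string]int64{}
			for _, s := range servers {
				for email, used := range s.Usage() {
					if used > highest[email] {
						highest[email] = used
					}
				}
			}
			for _, s := range servers {
				s.SetUsage(highest)
			}
		}
	}
}
```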

@MHSanaei marked this pull request as draft on September 4, 2025 10:12
@MHSanaei force-pushed the main branch 4 times, most recently from 1208faf to 5420e64 on September 8, 2025 12:32
@MHSanaei requested a review from Copilot on September 12, 2025 10:15
Copilot AI left a comment

Pull Request Overview

This pull request adds multi-server support to the Sanai panel, enabling centralized management of clients across multiple slave servers from a master panel.

  • Implements complete multi-server architecture with database schema, service layer, and web UI
  • Adds automatic client synchronization between master and slave servers via REST API
  • Introduces API key-based authentication for secure inter-panel communication

Reviewed Changes

Copilot reviewed 18 out of 18 changed files in this pull request and generated 3 comments.

| File | Description |
| --- | --- |
| `database/model/model.go` | Adds `Server` model for storing slave server information |
| `database/db.go` | Registers `Server` model for auto-migration |
| `web/service/multi_server_service.go` | Implements CRUD operations for server management |
| `web/service/setting.go` | Adds API key management functionality |
| `web/service/inbound.go` | Adds client synchronization to slave servers |
| `web/controller/multi_server_controller.go` | HTTP endpoints for server management |
| `web/middleware/auth.go` | API key authentication middleware |
| `web/controller/xui.go` | Adds servers page route |
| `web/html/servers.html` | Web interface for managing slave servers |
| `web/html/component/aSidebar.html` | Adds servers menu item |
| `sub/subService.go` | Modifies subscription links to include all active servers |
| `main.go` | Adds API key command-line parameter |
| `install.sh` | Generates random API key during installation |


Comment on lines +613 to +614:

```go
body, _ := json.Marshal(data)
s.syncWithSlaves("POST", "/panel/inbound/api/addClient", bytes.NewReader(body))
```
Copilot AI commented Sep 12, 2025

The JSON marshaling error is being silently ignored with _. This could lead to silent failures when syncing with slave servers. The error should be handled properly, and if marshaling fails, the sync operation should be skipped or logged.

Suggested change:

```diff
-body, _ := json.Marshal(data)
-s.syncWithSlaves("POST", "/panel/inbound/api/addClient", bytes.NewReader(body))
+body, marshalErr := json.Marshal(data)
+if marshalErr != nil {
+	logger.Error("Failed to marshal data for syncWithSlaves:", marshalErr)
+} else {
+	s.syncWithSlaves("POST", "/panel/inbound/api/addClient", bytes.NewReader(body))
+}
```

Comment on lines +886 to +887:

```go
body, _ := json.Marshal(data)
s.syncWithSlaves("POST", fmt.Sprintf("/panel/inbound/api/updateClient/%s", clientId), bytes.NewReader(body))
```
Copilot AI commented Sep 12, 2025

The JSON marshaling error is being silently ignored with _. This could lead to silent failures when syncing with slave servers. The error should be handled properly, and if marshaling fails, the sync operation should be skipped or logged.

Suggested change:

```diff
-body, _ := json.Marshal(data)
-s.syncWithSlaves("POST", fmt.Sprintf("/panel/inbound/api/updateClient/%s", clientId), bytes.NewReader(body))
+body, marshalErr := json.Marshal(data)
+if marshalErr != nil {
+	logger.Warning("Failed to marshal data for syncWithSlaves:", marshalErr)
+} else {
+	s.syncWithSlaves("POST", fmt.Sprintf("/panel/inbound/api/updateClient/%s", clientId), bytes.NewReader(body))
+}
```

Comment on lines +2284 to +2288:

```go
req, err := http.NewRequest(method, url, body)
if err != nil {
	logger.Warningf("Failed to create request for server %s: %v", server.Name, err)
	continue
}
```
Copilot AI commented Sep 12, 2025

The body parameter is being reused for multiple requests when syncing with multiple servers. After the first request reads from the io.Reader, subsequent requests will receive an empty body. Each server sync should use a fresh copy of the request body.
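
One way to address this, sketched with illustrative types (the real code would keep its existing logging and server model): pass the payload as a byte slice and build a fresh reader for every server.

```go
package service

import (
	"bytes"
	"net/http"
)

// slaveServer is an illustrative stand-in for the stored server row.
type slaveServer struct {
	Name    string
	BaseURL string
	APIKey  string
}

// postToSlaves sends the same payload to every slave, creating a new
// bytes.Reader per request so no server receives an already-drained body.
func postToSlaves(client *http.Client, servers []slaveServer, method, path string, payload []byte) {
	for _, server := range servers {
		req, err := http.NewRequest(method, server.BaseURL+path, bytes.NewReader(payload))
		if err != nil {
			continue // the real code logs the failure for server.Name here
		}
		req.Header.Set("X-API-Key", server.APIKey) // assumed header name
		resp, err := client.Do(req)
		if err != nil {
			continue
		}
		resp.Body.Close()
	}
}
```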

@MHSanaei force-pushed the main branch 5 times, most recently from 212238b to 5408a2f on September 14, 2025 20:09
@dimasmir03 (Contributor) commented Sep 16, 2025

How is the multi-server management feature progressing? I'm about to build my own simple API backend for managing multiple servers, so maybe it would be better to join you and finish this cool, important feature?
Also, I don't see where, by whom, or how this feature is currently being developed. Or has it perhaps been abandoned? (Sorry if I'm just not seeing it.)

@xzyone commented Sep 18, 2025

Is it possible for other servers to be managed by low-overhead subprograms and controlled via APIs?

@dimasmir03 (Contributor) commented Sep 22, 2025

@alireza0 @javadtgh can you tell me whether I can join the development of this feature, and how, to help speed up its release?
I see a fork and a separate branch in that fork, but how do I create a PR against it?

@MHSanaei force-pushed the main branch 3 times, most recently from 2c2a8c3 to 49430b3 on September 24, 2025 13:42
@MHSanaei force-pushed the main branch 3 times, most recently from ffd4c06 to ee0e309 on September 25, 2025 13:08
@konstpic (Contributor)

@javadtgh @alireza0 @dimasmir03 @MHSanaei

Hi everyone,

I wanted to share my thoughts on the multi-server approach. From my experience, full server management or separate slave panels aren’t strictly necessary for scaling. In my setup, I have two panels running on different servers but using a centralized MySQL database. A simple TCP proxy handles requests in a round-robin manner, and it works reliably while keeping the setup simple.

I would suggest migrating the panel to PostgreSQL first. If your servers are configured identically, there’s no need for an extra “server management” layer. A centralized database plus a proxy load balancer is sufficient. This approach is simpler and already resembles a node-based architecture like n8n.
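
For illustration, the TCP proxy in front of the panels can be as small as the sketch below; my real setup uses an off-the-shelf proxy, and the addresses here are placeholders:

```go
package main

import (
	"io"
	"log"
	"net"
)

func main() {
	// Illustrative backend panel addresses; both panels share one database.
	panels := []string{"10.0.0.2:2053", "10.0.0.3:2053"}

	ln, err := net.Listen("tcp", ":2053")
	if err != nil {
		log.Fatal(err)
	}
	next := 0
	for {
		conn, err := ln.Accept()
		if err != nil {
			continue
		}
		backend := panels[next%len(panels)] // round-robin selection
		next++
		go proxy(conn, backend)
	}
}

// proxy pipes bytes in both directions between the client and one panel.
func proxy(client net.Conn, backend string) {
	defer client.Close()
	upstream, err := net.Dial("tcp", backend)
	if err != nil {
		return
	}
	defer upstream.Close()
	go io.Copy(upstream, client)
	io.Copy(client, upstream)
}
```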

Thanks for your work! 😊

(image attached)

@Wasdalt commented Sep 29, 2025

Can you tell me where I can write about the API bug?

@MHSanaei (Owner)

@Wasdalt directly in the Telegram channel:
https://t.me/XrayUI

@dimasmir03 (Contributor)

@MHSanaei and where did the Issues section go? You could report bugs and request features there.

@Dreyk-Zer0

@javadtgh I ask you to consider the scenario where the multi-server panel can work with proxy-chain setups, where the inbound of one server serves as the outbound of another. I use such setups in my installations, but right now everything has to be managed by separate panels.

@dimasmir03 (Contributor)

> where the inbound of one server serves as the outbound of another

You mean a 1 panel -> 2 panel -> 3 panel chain?

@Dreyk-Zer0

> where the inbound of one server serves as the outbound of another
>
> You mean a 1 panel -> 2 panel -> 3 panel chain?

Yes, but I use 2-step chains:

1 panel -> 2 panel
3 panel -> 4 panel

I need to use WireGuard inbounds in some infrastructures, but WireGuard traffic does not make it across the country border. It is also convenient for me to have the entry points inside the country while the exit nodes change as needed, depending on the situation.

@Jabi83 commented Nov 1, 2025

Have you seen the Nodex script?
It does exactly that: it syncs all data between the main server and the nodes.
Nodex - Unofficial version of the 3X-ui Node

@mengyoubenr

Why is SOCKS5 creation not supported?

@dimasmir03 (Contributor)

> Why is SOCKS5 creation not supported?

In this feature?

@zhaoxueqingxiuyuansoftware

> (quoting @konstpic's comment above recommending a centralized database plus a TCP proxy load balancer instead of a separate server-management layer)

I agree with your idea and fully support you.

