# Architecture
## Overview
SynergyCP uses a distributed architecture built around a simple principle: centralized management, remote execution. A single Master Server acts as the central command hub, coordinating operations across multiple IP Groups (physical locations or providers), while local service nodes handle the actual workload at each IP Group.

An IP Group typically represents your physical presence in a given city or data center location. However, you can also configure multiple IP Groups at the same physical location if you need to segregate infrastructure—for example, to maintain separate DHCP networks or file servers for different customer segments or network architectures.
This design allows operators to manage their entire fleet of bare-metal servers from one interface, regardless of how many IP Groups (data centers) they operate.
## Core Components
### Master Server (Central Command)
The Master Server is the brain of the entire SynergyCP deployment. It runs at one primary location and is responsible for:
- Storing all configuration, client, and server data in a central database
- Providing the web-based management UI and client portal
- Orchestrating provisioning workflows across all IP Groups
- Managing billing integration and API access
There is only one Master Server per SynergyCP deployment. All remote IP Groups communicate back to it for instructions and status reporting.
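As a quick illustration of the API access mentioned above, the sketch below lists servers through the Master Server’s HTTP API. The route, response shape, and authentication header are assumptions for the example; consult the SynergyCP API documentation for the real interface.

```python
import requests  # pip install requests

# Hypothetical values for illustration only.
MASTER = "https://scp.example.com/api"
API_KEY = "your-api-key"

# Every management action -- UI, client portal, billing integration --
# ultimately goes through the Master Server, so a script talks to the
# same central endpoint regardless of which IP Group a server lives in.
resp = requests.get(
    f"{MASTER}/server",  # hypothetical route
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=10,
)
resp.raise_for_status()
for server in resp.json().get("data", []):
    print(server.get("id"), server.get("hostname"))
```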
### DHCP Service
Each IP Group runs its own DHCP service node. This component is used exclusively during OS installations to support PXE boot, which requires DHCP for autoconfiguration. The DHCP server does not provide ongoing IP assignment to servers once they are installed. After provisioning is complete, servers use their statically assigned production IPs and the DHCP service plays no further role until the next reinstall.
Running DHCP locally is essential because DHCP discovery relies on Layer 2 broadcasts, which do not traverse routed networks. Each IP Group’s DHCP node receives its configuration from the Master Server and handles:
- PXE boot responses for bare-metal provisioning
- Temporary IP assignment during the installation process
- Boot menu and kickstart/preseed delivery
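As a concrete (if simplified) illustration, the sketch below renders the kind of per-server configuration a DHCP node might apply during an install, assuming a dnsmasq-style backend purely for the example; SynergyCP’s actual DHCP implementation may differ.

```python
# Hypothetical sketch: a dnsmasq-style snippet for one server being
# provisioned. The MAC, temporary IP, and File node address are examples.

def provisioning_entry(mac: str, temp_ip: str, file_node_ip: str) -> str:
    """Render config that PXE-boots a single server during an install.

    The assignment is temporary: once the OS is installed, the server
    switches to its static production IP and this entry is removed.
    """
    return "\n".join([
        # Pin the installing server's MAC to an install-time IP.
        f"dhcp-host={mac},{temp_ip},set:provision",
        # Send tagged (provisioning) hosts to the local File node for
        # their network boot program.
        f"dhcp-boot=tag:provision,pxelinux.0,fileserver,{file_node_ip}",
    ])

print(provisioning_entry("aa:bb:cc:dd:ee:ff", "192.0.2.50", "192.0.2.10"))
```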
### IPMI Service (BMC Forwarding Gateway)
The IPMI service node at each IP Group is a BMC Forwarding Gateway. Because BMC interfaces sit on a private management VLAN that is not reachable from the public internet, all BMC communication flows through this gateway, whether it originates from the Master Server issuing provisioning commands or from a customer launching a KVM session.
SynergyCP supports placing all BMC interfaces on a private LAN so they do not consume public IP space and are not directly exposed to the internet.
When a customer or staff member needs to reach a server’s BMC, SynergyCP creates a temporary forwarding rule on the gateway. The user connects to a public IP on the gateway, and the gateway forwards the traffic through to the server’s private BMC address on the internal network. From the user’s perspective, it behaves as if they have direct access to the BMC.
Access is secured with an automatic ACL. When a user requests access, a forwarding rule is created matching their specific source IP to the target BMC. The rule can be manually removed when the session is over, or it expires automatically after 24 hours.
The recommended forwarding method is IP to IP, which transparently forwards all ports from a given source IP to the private BMC IP. This provides full compatibility with web consoles, KVM sessions, and virtual media across any BMC manufacturer. Because routing is based on the unique combination of source and destination IPs, multiple users can access the same server simultaneously (for example, a customer and a vendor support technician collaborating on an issue), and a single user can access multiple servers at the same IP Group by consuming additional gateway IPs from the pool.
Gateway IP pool sizing is straightforward: you only need as many public IPs as you expect concurrent connections from the same source IP. For most deployments, 3 to 5 IPs is sufficient, and more can be added later as needed.
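The sketch below shows the essence of an IP-to-IP forwarding rule using plain iptables. This is a conceptual illustration of the behavior described above, not SynergyCP’s implementation; the panel creates and removes its gateway rules automatically.

```python
import subprocess

def add_forwarding_rule(source_ip: str, gateway_ip: str, bmc_ip: str) -> None:
    """Forward all ports from one approved source IP, arriving at one
    public gateway IP, through to a private BMC address.

    Matching on the source IP is the per-user ACL; in SynergyCP the
    rule is removed manually or expires after 24 hours.
    """
    # DNAT: rewrite the destination from the public gateway IP to the
    # private BMC IP, but only for the approved source.
    subprocess.run([
        "iptables", "-t", "nat", "-A", "PREROUTING",
        "-s", source_ip, "-d", gateway_ip,
        "-j", "DNAT", "--to-destination", bmc_ip,
    ], check=True)
    # Masquerade on the way out so the BMC's replies return through the
    # gateway rather than trying to route to the user's public IP.
    subprocess.run([
        "iptables", "-t", "nat", "-A", "POSTROUTING",
        "-s", source_ip, "-d", bmc_ip,
        "-j", "MASQUERADE",
    ], check=True)

# Example: user at 198.51.100.7 connects to gateway IP 203.0.113.10,
# which forwards to the BMC at 10.10.0.25 on the management VLAN.
add_forwarding_rule("198.51.100.7", "203.0.113.10", "10.10.0.25")
```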
For full setup instructions, see BMC Forwarding Gateways.
### File Service
The File service node stores and serves OS images used during PXE-based provisioning. Keeping image files local to each IP Group ensures fast, reliable installs without transferring large OS images across WAN links. This node is responsible for:
- Hosting OS templates and installation media
- Serving files over TFTP/HTTP during PXE boot sequences
- Caching images synced from the Master Server
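For intuition, the HTTP side of a File node can be sketched with the Python standard library alone. The real service also speaks TFTP for the early PXE stages and keeps its cache in sync with the Master Server; the directory path below is a placeholder.

```python
from functools import partial
from http.server import HTTPServer, SimpleHTTPRequestHandler

IMAGE_ROOT = "/srv/scp/images"  # hypothetical local image cache

# Serve the OS image directory over HTTP. Running this inside the
# IP Group keeps large image transfers off the WAN entirely.
handler = partial(SimpleHTTPRequestHandler, directory=IMAGE_ROOT)
HTTPServer(("0.0.0.0", 80), handler).serve_forever()
```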
## How It Works Together
When a provisioning job is triggered from the Master Server (by an admin, a client through the portal, or an API call), the job flows outward to the appropriate IP Group:
1. The Master Server determines which IP Group the target server belongs to and dispatches the job to the DHCP and File service nodes assigned to that IP Group.
2. The Master Server sends IPMI commands through the BMC Forwarding Gateway at that IP Group to set the server’s boot device to PXE and issue a power cycle.
3. The server PXE boots, contacting the local DHCP node for an IP address and boot configuration.
4. The DHCP node directs the server to the local File node, which serves the OS image.
5. The OS installs, the server reboots, and status is reported back to the Master Server.
All of this happens without the Master Server needing direct Layer 2 access to the remote IP Group’s provisioning or IPMI networks.
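The IPMI portion of this workflow (step 2) can be pictured as two plain ipmitool invocations arriving at the server’s BMC. This is a conceptual sketch: the address below is the forwarded address exposed by the IP Group’s BMC gateway, not the private BMC IP, and the credentials are placeholders.

```python
import subprocess

def pxe_reinstall(bmc_addr: str, user: str, password: str) -> None:
    """Set the boot device to PXE, then power-cycle the server."""
    base = ["ipmitool", "-I", "lanplus",
            "-H", bmc_addr, "-U", user, "-P", password]
    # Next boot should come from the network...
    subprocess.run(base + ["chassis", "bootdev", "pxe"], check=True)
    # ...then cycle power so the server PXE boots against the local
    # DHCP and File nodes. From here the install stays inside the
    # IP Group; only status reports cross the WAN.
    subprocess.run(base + ["chassis", "power", "cycle"], check=True)

pxe_reinstall("203.0.113.10", "admin", "example-password")
```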
## Scaling to Multiple Locations
Adding a new IP Group to SynergyCP involves deploying three service nodes (DHCP, IPMI, File) at the new site and registering them with the Master Server. Once connected, the new IP Group is fully managed from the same central interface as all other IP Groups.
This architecture scales horizontally. IP Group 1 might be your primary site co-located with the Master Server, while IP Groups 2, 3, 4, and beyond can be added across different regions or providers. Each IP Group operates independently for local tasks but remains centrally coordinated.
This is discussed further in Adding a New Data Center Location.
## Network Requirements
| Path | Protocol | Purpose |
|---|---|---|
| Master to/from service nodes | HTTPS (API) | Job dispatch, status reporting, configuration sync |
| Master to BMC gateway | IPMI (via gateway) | Provisioning commands (boot device, power control) |
| DHCP node to local servers | DHCP/TFTP/HTTP (local segment) | PXE boot and temporary IP assignment |
| BMC gateway to local BMCs | IPMI (Layer 2/3) | Forwarded BMC traffic (provisioning and user access) |
| Users to BMC gateway | HTTPS/IPMI (public) | On-demand BMC access via ACL |
| File node to local servers | HTTP/TFTP | OS image delivery |
Specific ports are listed in Ports.
The only traffic that crosses the WAN is the API communication between the Master Server and the remote service nodes. All provisioning-related traffic stays local to each IP Group.
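A quick way to sanity-check the WAN-facing part of this table is a TCP reachability probe from the Master Server to each remote service node. The hostnames and ports below are placeholders; the authoritative list is in Ports.

```python
import socket

# Placeholder hosts/ports for illustration -- substitute your own
# service node addresses and the ports documented in Ports.
CHECKS = [
    ("dhcp.dc2.example.com", 443),   # Master <-> DHCP node API
    ("ipmi.dc2.example.com", 443),   # Master <-> BMC gateway API
    ("files.dc2.example.com", 443),  # Master <-> File node API
]

for host, port in CHECKS:
    try:
        socket.create_connection((host, port), timeout=3).close()
        print(f"OK    {host}:{port}")
    except OSError as exc:
        print(f"FAIL  {host}:{port} ({exc})")
```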