
Set Up Docker Swarm on a Local Network with Managers and Workers

Last updated: October 06, 2025

Overview

This guide shows how to set up Docker Swarm across machines on the same local network. You will create a manager, join worker nodes, open required ports, and deploy a replicated service via an overlay network.

Quickstart

  • One machine will be the manager; others join as workers.
  • Open ports 2377/tcp, 7946/tcp+udp, and 4789/udp on all nodes.
  • Initialize the manager with an advertise address on your LAN.
  • Join workers using the token from the manager.
  • Create an overlay network and deploy a service or stack.

Minimal commands:

# On manager
MANAGER_IP=192.168.1.10
sudo docker swarm init --advertise-addr $MANAGER_IP
sudo docker network create --driver overlay --attachable app-net
sudo docker service create \
  --name web --replicas 3 \
  --publish 8080:80 \
  --network app-net \
  nginx:alpine

# On each worker
# Print the token on the manager first: sudo docker swarm join-token -q worker
MANAGER_IP=192.168.1.10
WORKER_TOKEN=<paste the worker token printed on the manager>
sudo docker swarm join --token $WORKER_TOKEN ${MANAGER_IP}:2377

Prerequisites

  • Docker Engine 20.10 or newer on each node, ideally the same engine version and CPU architecture on all of them.
  • Nodes reachable over the same LAN (no blocked east-west traffic).
  • Root or sudo access.
  • Basic firewall configuration capability.
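
A quick way to confirm the engine requirement before you start (standard docker CLI; run on every node):

# Prints the server (engine) version, e.g. 24.0.7
docker version --format '{{.Server.Version}}'

# CPU architecture, useful when mixing machines
uname -m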

Required Ports

Purpose                      Ports         Protocol
Swarm management (manager)   2377          TCP
Node discovery & gossip      7946          TCP/UDP
Overlay network (VXLAN)      4789          UDP
Published service ports      your choice   TCP/UDP

Ensure these ports are open between all nodes. If a host has multiple NICs, pass its LAN-facing IP to --advertise-addr.
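
If your nodes use firewalld instead of ufw, the equivalent rules are below (this assumes the default zone; adjust if your LAN interface lives in a different zone):

sudo firewall-cmd --permanent --add-port=2377/tcp
sudo firewall-cmd --permanent --add-port=7946/tcp
sudo firewall-cmd --permanent --add-port=7946/udp
sudo firewall-cmd --permanent --add-port=4789/udp
sudo firewall-cmd --reload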

Step-by-step Setup

  1. Pick the manager and advertise its LAN IP
  • Find the LAN IP (for example 192.168.1.10).
  • Initialize the swarm:
sudo docker swarm init --advertise-addr 192.168.1.10
  • Note the printed join commands. You can reprint them any time:
sudo docker swarm join-token worker
sudo docker swarm join-token manager
  2. Open the ports on all nodes
  • Allow 2377/tcp, 7946/tcp, 7946/udp, 4789/udp.
  • Example with ufw:
sudo ufw allow 2377/tcp
sudo ufw allow 7946/tcp
sudo ufw allow 7946/udp
sudo ufw allow 4789/udp
  3. Join workers to the swarm
  • On each worker machine:
sudo docker swarm join --token <WORKER_TOKEN> 192.168.1.10:2377
  • Verify on the manager:
sudo docker node ls
  4. Create an overlay network
  • On the manager:
sudo docker network create --driver overlay --attachable app-net
  5. Deploy a test service
  • On the manager, deploy a replicated service published on port 8080:
sudo docker service create \
  --name web --replicas 3 \
  --publish 8080:80 \
  --network app-net \
  nginx:alpine
  • Verify placement and health:
sudo docker service ls
sudo docker service ps web
curl http://192.168.1.10:8080/
  6. Scale, update, drain
# Scale replicas
sudo docker service scale web=5

# Rolling update the image
sudo docker service update --image nginx:1.25-alpine web

# Drain a node for maintenance
sudo docker node update --availability drain <node-name>
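
Two companion commands are handy here: rolling back a bad update and returning a drained node to scheduling.

# Roll back the last service update
sudo docker service update --rollback web

# Bring a drained node back into scheduling
sudo docker node update --availability active <node-name>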

Minimal Working Example: Stack Deploy

Compose file for stack deployment using an overlay network and rolling updates.

stack.yml:

version: "3.8"
services:
  web:
    image: nginx:alpine
    deploy:
      replicas: 3
      update_config:
        parallelism: 1
        order: start-first
      restart_policy:
        condition: on-failure
    ports:
      - "8080:80"
    networks:
      - app-net
networks:
  app-net:
    driver: overlay
    attachable: true

Deploy and verify:

sudo docker stack deploy -c stack.yml webapp
sudo docker stack services webapp
sudo docker stack ps webapp
curl http://192.168.1.10:8080/

Remove the stack when done:

sudo docker stack rm webapp
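
To dismantle the swarm itself afterwards, each node leaves explicitly; the manager needs --force because leaving dissolves a single-manager swarm:

# On each worker
sudo docker swarm leave

# On the manager, last
sudo docker swarm leave --force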

Node Labels and Placement (Optional)

Pin stateful workloads, or services that need particular hardware, to specific nodes using labels.

# Add a label to a node
sudo docker node update --label-add role=db worker-1

# Excerpt from a stack file to constrain placement
# services:
#   db:
#     image: postgres:16
#     deploy:
#       placement:
#         constraints:
#           - node.labels.role == db

For persistent data, use networked volumes (NFS, SMB, or a volume plugin) or ensure the service is always scheduled to the same node using constraints.
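
As a sketch of the networked-volume option, a stack file can declare an NFS-backed named volume through the built-in local driver. The server address 192.168.1.20 and export path /exports/db-data below are placeholders for your environment:

# Excerpt from a stack file: an NFS-backed named volume
# volumes:
#   db-data:
#     driver: local
#     driver_opts:
#       type: nfs
#       o: "addr=192.168.1.20,rw,nfsvers=4"
#       device: ":/exports/db-data"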

Troubleshooting and Pitfalls

  • Wrong advertise address: On multi-NIC hosts, nodes may try to communicate over the wrong interface. Always set --advertise-addr to your reachable LAN IP.
  • Firewalls blocking gossip/VXLAN: If 7946 or 4789 is blocked, services may deploy but tasks cannot discover peers. Open both TCP and UDP as noted.
  • Mixed Docker versions: Large version gaps can cause scheduling or overlay issues. Keep engines reasonably aligned.
  • Overlapping subnets: Overlay networks can clash with host or VPN subnets. Choose non-overlapping CIDRs if you customize overlay subnets (see the commands after this list).
  • Publishing the same port twice: With the default ingress mode, a published port is reserved swarm-wide, so only one service can use it; with host-mode publishing, the limit is one service per port per node.
  • Manager availability: If the manager stops, existing services keep running, but changes require a manager. For HA, run 3 managers on different hosts (odd count for quorum). On small local setups, a single manager is fine.
  • DNS resolution: Swarm provides internal DNS for service names on overlay networks. If name resolution fails, confirm the containers are attached to the same overlay (a quick check follows this list).
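
Two quick checks for the subnet and DNS items above, using the app-net network and web service from earlier (10.20.0.0/24 is only an example range; pick one that does not overlap your LAN or VPN, and use a fresh network name if app-net already exists):

# Create the overlay with an explicit, non-overlapping subnet
sudo docker network create --driver overlay --attachable --subnet 10.20.0.0/24 app-net

# Resolve the service name from a throwaway container attached to the same overlay
sudo docker run --rm --network app-net alpine nslookup web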

Performance Notes

  • Overlay overhead: VXLAN adds encapsulation; expect slightly higher latency and lower throughput than host networking.
  • Node locality: Minimize cross-node chatter for chatty services by using placement constraints and spreading stateful components carefully.
  • Healthchecks and updates: Tune update_config (parallelism, delay, order) to avoid thundering herds during rollouts.
  • Logging drivers: Avoid slow remote logging drivers in development; they can throttle containers. Use json-file locally.
  • Resource limits: Set CPU and memory limits in deploy.resources to prevent noisy neighbor problems (a sketch follows the host publish example below).
  • Publish mode: By default, swarm uses ingress load balancing for published ports. For best performance to a specific node, use mode=host in the ports section of a stack, at the cost of per-node port binding.

Example of host publish mode in a stack:

services:
  api:
    image: nginx:alpine
    deploy:
      replicas: 2
    ports:
      - target: 80
        published: 8081
        protocol: tcp
        mode: host
    networks: [app-net]
networks:
  app-net:
    driver: overlay
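
For the resource-limits note above, a minimal sketch of deploy.resources in a stack file; the values are illustrative, not recommendations:

services:
  web:
    image: nginx:alpine
    deploy:
      replicas: 3
      resources:
        limits:
          cpus: "0.50"
          memory: 256M
        reservations:
          cpus: "0.25"
          memory: 128M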

FAQ

  • Can I run this on Windows or macOS? Yes. Docker Desktop supports Swarm mode, but each Desktop install acts as a single node; for multiple nodes, use separate machines or VMs on the same LAN.
  • Do I need a registry? Only if you build custom images. Swarm does not copy locally built images between nodes, so push them to a registry reachable by all nodes, or use an image from a public registry.
  • How do services discover each other? Via Swarm’s internal DNS on the overlay network. Use the service name as the hostname.
  • How do I persist data? Use networked volumes (NFS/SMB or a plugin) or constrain the service to a node that holds the data.
