GPFS + Firewalld
Suggested firewalld scheme for GPFS (and ESS) config.
We have the following networks:
- 192.168.40.0/24 Internal admin network
- 10.11.14.0/24 GPFS daemon network
- 10.11.16.0/22 HPC clients doing remote mount
- 10.11.15.0/24 CES network (NFS only)
To keep things simple, and to avoid creating too many explicit port rules, we put the GPFS daemon and internal admin networks in the “trusted” zone. Anything coming from these subnets is allowed:
firewall-cmd --zone=trusted --add-source=10.11.14.0/24 --permanent
firewall-cmd --zone=trusted --add-source=192.168.40.0/24 --permanent
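To verify that both subnets ended up in the trusted zone (checking the permanent configuration, since we haven’t reloaded yet):
firewall-cmd --permanent --zone=trusted --list-sources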
The ESS3500/SSS6000 canisters need to communicate over a dedicated interlink interface, so make this trusted as well:
firewall-cmd --zone=trusted --add-interface=interlink --permanent
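After the final reload, the interface assignment can be checked with:
firewall-cmd --get-zone-of-interface=interlink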
To allow only the required 1191/tcp and the tscCmdPortRange ports from the HPC cluster, and from any other remote clusters, we create a new zone for remote-mounting clusters:
firewall-cmd --new-zone=remote-clusters --permanent
firewall-cmd --permanent --zone=remote-clusters --set-description="GPFS Remote Clusters, only strictly necessary ports for GPFS remote clusters are accepted."
firewall-cmd --permanent --zone=remote-clusters --set-short="GPFS Remote Clusters"
firewall-cmd --permanent --new-service=gpfs-daemon
firewall-cmd --permanent --service=gpfs-daemon --set-short=gpfs-daemon
firewall-cmd --permanent --service=gpfs-daemon --set-description="Main GPFS daemon port (1191/tcp), used for most communication between nodes in GPFS clusters."
firewall-cmd --permanent --service=gpfs-daemon --add-port=1191/tcp
firewall-cmd --permanent --new-service=gpfs-cmdRange
firewall-cmd --permanent --service=gpfs-cmdRange --add-port=60000-61000/tcp
firewall-cmd --permanent --service=gpfs-cmdRange --set-short=gpfs-cmdRange
firewall-cmd --permanent --service=gpfs-cmdRange --set-description="gpfs-cmdRange is a range of ports used for some GPFS administrative commands."
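The 60000-61000 range above is an assumption that has to match whatever tscCmdPortRange is set to in the GPFS cluster configuration; check it, and adjust if needed, with:
mmlsconfig tscCmdPortRange
mmchconfig tscCmdPortRange=60000-61000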
firewall-cmd --zone=remote-clusters --add-service=gpfs-daemon --permanent
firewall-cmd --zone=remote-clusters --add-service=gpfs-cmdRange --permanent
firewall-cmd --zone=remote-clusters --add-source=10.11.16.0/22 --permanent
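The permanent configuration of the new zone can be inspected with:
firewall-cmd --permanent --info-zone=remote-clusters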
If we ever need to add a new remote cluster, we just add its subnet to this zone.
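For example, for a hypothetical new cluster on 10.11.20.0/24 (made-up subnet, just for illustration):
firewall-cmd --zone=remote-clusters --add-source=10.11.20.0/24 --permanent
firewall-cmd --reload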
Then for the NFS services, we create another zone with only the NFS-related services allowed:
firewall-cmd --new-zone=ces --permanent
firewall-cmd --permanent --zone=ces --set-description="CES, Cluster Export Services. Allow access to file protocols offered by CES."
firewall-cmd --permanent --zone=ces --set-short="CES, Cluster Export Services"
firewall-cmd --zone=ces --add-service=nfs --permanent
firewall-cmd --zone=ces --add-service=nfs3 --permanent
firewall-cmd --zone=ces --add-service=rpc-bind --permanent
firewall-cmd --zone=ces --add-port=32765/tcp --permanent
firewall-cmd --zone=ces --add-port=32765/udp --permanent
firewall-cmd --zone=ces --add-port=32767/tcp --permanent
firewall-cmd --zone=ces --add-port=32767/udp --permanent
firewall-cmd --zone=ces --add-port=32768/tcp --permanent
firewall-cmd --zone=ces --add-port=32768/udp --permanent
firewall-cmd --zone=ces --add-port=32769/tcp --permanent
firewall-cmd --zone=ces --add-port=32769/udp --permanent
firewall-cmd --zone=ces --add-source=10.11.15.0/24 --permanent
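As with the remote-clusters zone, the result can be checked with:
firewall-cmd --permanent --info-zone=ces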
Here we’ve configured static ports for STATD, MNT, NLM and RQUOTA, so we have to tell Ganesha about this as well by configuring:
mmnfs config change MNT_PORT=32767:NLM_PORT=32769:RQUOTA_PORT=32768:STATD_PORT=32765
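The resulting CES NFS configuration can be sanity-checked afterwards (it should show the port settings above):
mmnfs config list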
The zone can easily be extended later, e.g. by opening port 445/tcp for SMB and 443/tcp for S3.
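For example, something along these lines (not tested here, since we only run NFS):
firewall-cmd --zone=ces --add-port=445/tcp --permanent
firewall-cmd --zone=ces --add-port=443/tcp --permanent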
Once this has all been configured, we activate the firewall rules with:
firewall-cmd --reload
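And then confirm that the zones are active on the expected sources and interfaces:
firewall-cmd --get-active-zones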
To keep things simple, all nodes will be running the same set of rules. It shouldn’t matter much that the EMS or ESS I/O nodes have the CES zone active.
FIXME:
- Dropped access to the Scale GUI in the “public” zone (firewall-cmd --add-service=https); or do we just restart the GUI service and let it add the port forwarding there? See the snippet after this list.
- Dropped access to ESA GUI (not sure we care)
- Maybe we need some opening for the RAS interface on the utility nodes as well.
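For the Scale GUI item above, the untested candidate fix would be something like:
firewall-cmd --zone=public --add-service=https --permanent
firewall-cmd --reload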