What's Actually Happening
A GlusterFS distributed storage volume fails to mount on the client: the mount command errors out, or the volume becomes inaccessible after mounting.
The Error You'll See
```bash
$ mount -t glusterfs server:/volume /mnt/gluster
Mount failed. Check the log file for more details.
```
Common variants in the client log:

- Transport error: `Transport endpoint is not connected`
- Permission denied: `Permission denied. Please check the permissions on the mount point.`
- Volume not found: `Volume volume-name does not exist`
- Connection refused: `glusterfs: failed to connect to server`

Why This Happens
1. Peer disconnected - storage nodes are not joined in the cluster
2. Volume stopped - the volume has not been started
3. Network issues - the client cannot reach the Gluster nodes
4. DNS resolution - node hostnames do not resolve
5. Permission issues - mount point permissions or volume ACLs block access
6. FUSE not loaded - the FUSE kernel module is not loaded
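Before working through the steps below, a quick triage sketch can narrow down which of these causes you are hitting. `SERVER` and `VOLUME` are placeholders; adjust them for your cluster:

```bash
#!/bin/bash
# Quick triage: test each failure cause above, in order.
# SERVER and VOLUME are placeholders - adjust for your cluster.
SERVER=node1
VOLUME=volume-name

systemctl is-active --quiet glusterd || echo "glusterd not running (cause 1/2)"
gluster peer status 2>/dev/null | grep -q Disconnected && echo "peer disconnected (cause 1)"
gluster volume info "$VOLUME" 2>/dev/null | grep -q "Status: Started" || echo "volume not started (cause 2)"
nc -z -w 2 "$SERVER" 24007 || echo "cannot reach glusterd port 24007 (cause 3)"
getent hosts "$SERVER" >/dev/null || echo "DNS resolution failed (cause 4)"
lsmod | grep -q fuse || echo "FUSE module not loaded (cause 6)"
```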
Step 1: Check GlusterFS Service
```bash
# Check the GlusterFS daemon
systemctl status glusterd

# Start the service
systemctl start glusterd

# Check the process
ps aux | grep gluster

# Check listening sockets (default management port: 24007)
netstat -tlnp | grep glusterd

# Check logs
journalctl -u glusterd -f
tail -f /var/log/glusterfs/glusterd.log

# Check the version
gluster --version

# Check the FUSE module
lsmod | grep fuse

# Load FUSE
modprobe fuse

# Check the FUSE device
ls -la /dev/fuse

# List active FUSE connections
ls /sys/fs/fuse/connections

# Check the mount point
ls -la /mnt/gluster

# Create the mount point
mkdir -p /mnt/gluster
```
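If the FUSE module was the problem, a small sketch to make it load persistently (assumes a systemd distro with the standard `modules-load.d` convention):

```bash
# Load FUSE now and on every boot (systemd modules-load.d convention).
echo fuse > /etc/modules-load.d/fuse.conf
modprobe fuse
lsmod | grep fuse   # should now list the module
```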
Step 2: Check Peer Status
```bash
# List peers in the cluster
gluster peer status

# Expected output:
# Number of Peers: 2
#
# Hostname: node2
# Uuid: uuid-here
# State: Peer in Cluster (Connected)

# Check for disconnected peers
gluster peer status | grep -iE "disconnected|unknown"

# Probe a peer
gluster peer probe node2

# Detach a problematic peer, then re-add it
gluster peer detach node2
gluster peer probe node2

# Check peers from the other nodes - run this on each node;
# all nodes should see each other
gluster peer status

# Check the pool list
gluster pool list

# Check for UUID mismatches
gluster peer status | grep Uuid

# Check hostname resolution
ping node2
nslookup node2

# Use an IP instead of a hostname
gluster peer probe 192.168.1.2

# Check the firewall
iptables -L -n | grep 24007

# Allow GlusterFS ports
iptables -I INPUT -p tcp --dport 24007 -j ACCEPT
iptables -I INPUT -p tcp --dport 24008 -j ACCEPT
iptables -I INPUT -p tcp --dport 49152:49251 -j ACCEPT
```
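On firewalld-based systems, a rough equivalent of the iptables rules above (the named `glusterfs` service definition ships with most recent distros; the port fallback covers the rest):

```bash
# Prefer the packaged service definition; fall back to explicit ports.
firewall-cmd --permanent --add-service=glusterfs || {
    firewall-cmd --permanent --add-port=24007-24008/tcp
    firewall-cmd --permanent --add-port=49152-49251/tcp
}
firewall-cmd --reload
```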
Step 3: Check Volume Status
```bash
# List volumes
gluster volume list

# Check volume info
gluster volume info volume-name

# Check volume status
gluster volume status volume-name

# Expected output:
# Status of volume: volume-name
# Gluster process    TCP Port  RDMA Port  Online  PID
# brick1             49152     0          Y       1234
# brick2             49152     0          Y       5678

# Check whether the volume is started
gluster volume status volume-name | grep Online

# Start the volume
gluster volume start volume-name

# Stop and restart
gluster volume stop volume-name
gluster volume start volume-name

# Check volume options
gluster volume get volume-name all

# Check the volume type (Distribute, Replicate, Stripe, Disperse)
gluster volume info volume-name | grep Type

# Check brick status
gluster volume status volume-name detail

# Check brick paths
gluster volume info volume-name | grep "Brick"

# Confirm the brick directory exists on each node
ssh node1 "ls -la /data/brick1"
```
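To avoid checking each brick by hand, here is a sketch that walks every brick the volume reports and inspects its directory and free space over SSH (assumes passwordless SSH to the brick hosts; `volume-name` is a placeholder):

```bash
# List each brick as host:path, then inspect it remotely.
VOLUME=volume-name
gluster volume info "$VOLUME" | awk -F': ' '/^Brick[0-9]+/ {print $2}' |
while IFS=: read -r host path; do
    echo "--- $host:$path"
    ssh "$host" "ls -ld '$path' && df -h '$path'"
done
```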
Step 4: Test Network Connectivity
```bash
# Test node connectivity
ping node1
ping node2

# Test the Gluster management port
nc -zv node1 24007

# Test brick ports
nc -zv node1 49152

# Check the firewall
ufw status | grep 24007

# Allow ports
ufw allow 24007/tcp
ufw allow 24008/tcp
ufw allow 49152:49251/tcp

# Check DNS resolution
nslookup node1
dig node1

# Use an IP in the mount
mount -t glusterfs 192.168.1.1:/volume /mnt/gluster

# Watch for NAT issues - GlusterFS needs direct peer connectivity

# Check the MTU
ip link show eth0 | grep mtu

# Check network interfaces
ip addr show

# Test the TCP connection
telnet node1 24007

# Port reference: bricks use 49152-49251; glusterd uses 24007 and 24008

# Capture GlusterFS traffic with tcpdump
tcpdump -i eth0 'port 24007 or portrange 49152-49251'
```
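A small sweep over the ports listed above, for each node (node names are placeholders; only ports with a running brick will report open, so closed high ports are not necessarily an error):

```bash
# Probe the management ports and the start of the brick range.
for node in node1 node2; do
    for port in 24007 24008 $(seq 49152 49161); do
        nc -z -w 2 "$node" "$port" 2>/dev/null && echo "$node:$port open"
    done
done
```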
Step 5: Check Mount Point Permissions
```bash
# Check the mount point
ls -la /mnt/gluster

# Create the mount point
mkdir -p /mnt/gluster

# Set permissions
chmod 755 /mnt/gluster

# Check whether it is already mounted
mount | grep gluster

# Unmount if stuck
umount /mnt/gluster

# Force or lazy unmount
umount -f /mnt/gluster
umount -l /mnt/gluster

# Clear a stale mount ("Transport endpoint is not connected")
umount -l /mnt/gluster

# Check the mount point owner
ls -ld /mnt/gluster

# Check the client-side GlusterFS log
tail -f /var/log/glusterfs/mnt-gluster.log

# Mount with verbose output
mount -t glusterfs server:/volume /mnt/gluster -v

# Check the fstab entry
grep gluster /etc/fstab

# Manual mount
mount.glusterfs server:/volume /mnt/gluster

# Check mount options
mount | grep gluster
```
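Once the manual mount works, a persistent fstab entry is usually the next step. A sketch with placeholder node and volume names; `_netdev` delays the mount until the network is up, and `backup-volfile-servers` adds failover:

```bash
# Append a persistent mount entry, then test it without rebooting.
cat >> /etc/fstab << 'EOF'
node1:/volume /mnt/gluster glusterfs defaults,_netdev,backup-volfile-servers=node2:node3 0 0
EOF
mount -a
mount | grep gluster
```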
Step 6: Fix Volume Options
```bash
# Check volume options
gluster volume get volume-name all

# Common issues:

# 1. Auth reject - a client listed here cannot mount; check and clear it
gluster volume get volume-name auth.reject
gluster volume reset volume-name auth.reject

# Check all auth settings
gluster volume get volume-name auth.*

# 2. Client SSL
gluster volume get volume-name client.ssl
# Disable SSL if not needed
gluster volume set volume-name client.ssl off

# 3. Server SSL
gluster volume set volume-name server.ssl off

# 4. Access control - allow all clients (quote the * to stop shell globbing)
gluster volume set volume-name auth.allow "*"
# Or allow specific clients
gluster volume set volume-name auth.allow "192.168.1.*"

# 5. NFS export
gluster volume set volume-name nfs.export-volumes on

# 6. Volume read-only
gluster volume get volume-name features.read-only
# Disable read-only
gluster volume set volume-name features.read-only off

# 7. Quota settings
gluster volume quota volume-name list
# Raise a quota limit if it is blocking writes
gluster volume quota volume-name limit-usage / 1TB

# Reset all options to defaults
gluster volume reset volume-name
```
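Rather than scanning the full `all` listing, a short sketch that dumps just the options that most often block a mount (volume name is a placeholder; the `awk` parsing assumes the two header lines recent releases print before the value row):

```bash
# Print the mount-blocking options in one pass.
VOLUME=volume-name
for opt in auth.allow auth.reject client.ssl server.ssl features.read-only; do
    printf '%-22s ' "$opt"
    gluster volume get "$VOLUME" "$opt" 2>/dev/null | awk 'NR==3 {print $2}'
done
```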
Step 7: Check Brick Health
```bash
# Check brick status
gluster volume status volume-name detail

# Check for offline bricks (Online column shows N)
gluster volume status volume-name | grep -w N

# Check bricks on a specific node
ssh node1 "gluster volume status"

# Check brick processes
ps aux | grep glusterfsd

# Restart bricks
gluster volume start volume-name force

# Check the brick directory
ssh node1 "ls -la /data/brick1"

# Check disk space
ssh node1 "df -h /data/brick1"

# Check disk permissions
ssh node1 "ls -la /data"

# Check for split-brain
gluster volume heal volume-name info

# Trigger a full heal
gluster volume heal volume-name full

# Check heal statistics
gluster volume heal volume-name statistics

# Check the brick log
ssh node1 "tail /var/log/glusterfs/bricks/data-brick1.log"

# Check for I/O errors
dmesg | grep -i error
```
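After triggering a heal, a small loop to watch progress until no entries remain (volume name is a placeholder):

```bash
# Poll heal info every 10 seconds until all entry counts reach zero.
VOLUME=volume-name
while gluster volume heal "$VOLUME" info | grep -q 'Number of entries: [1-9]'; do
    gluster volume heal "$VOLUME" info | grep 'Number of entries'
    sleep 10
done
echo "heal complete"
```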
Step 8: Check Client Configuration
```bash
# Check for the glusterfs mount helper
which mount.glusterfs

# Check the FUSE module
lsmod | grep fuse

# Load FUSE
modprobe fuse

# Check the FUSE userspace version
fusermount -V

# Check the glusterfs client package
rpm -qa | grep glusterfs-fuse      # RHEL/CentOS
dpkg -l | grep glusterfs-client    # Debian/Ubuntu

# Install the client
apt install glusterfs-client       # Debian/Ubuntu
yum install glusterfs-fuse         # RHEL/CentOS

# Check the client log directory
ls -la /var/log/glusterfs/

# Mount with logging options
mount -t glusterfs server:/volume /mnt/gluster \
  -o log-level=DEBUG \
  -o log-file=/var/log/glusterfs/client.log

# Mount with transport and I/O options
mount -t glusterfs server:/volume /mnt/gluster \
  -o transport=tcp \
  -o direct-io-mode=enable

# Mount from a specific server
mount -t glusterfs node1:/volume /mnt/gluster

# Provide backup volfile servers for HA
mount -t glusterfs node1:/volume /mnt/gluster \
  -o backup-volfile-servers=node2:node3

# Note: the mount source is server:/volume-name, not a .vol file path;
# to mount from a local volfile, use the glusterfs binary (see Step 9)
```
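As an alternative to fstab, a systemd mount unit gives cleaner network ordering. A sketch with placeholder names; the unit file name must mirror the mount path (`/mnt/gluster` becomes `mnt-gluster.mount`):

```bash
# Write the unit, then enable it for boot and mount immediately.
cat > /etc/systemd/system/mnt-gluster.mount << 'EOF'
[Unit]
Description=GlusterFS volume
Wants=network-online.target
After=network-online.target

[Mount]
What=node1:/volume
Where=/mnt/gluster
Type=glusterfs
Options=defaults,_netdev,backup-volfile-servers=node2:node3

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable --now mnt-gluster.mount
```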
Step 9: Debug Mount Issues
```bash
# Enable debug logging
mount -t glusterfs server:/volume /mnt/gluster -o log-level=DEBUG

# Check the client log
tail -f /var/log/glusterfs/mnt-gluster.log

# Run the client directly in debug mode
glusterfs --log-level=DEBUG --volfile-server=server --volfile-id=volume /mnt/gluster

# Trace the mount process
strace mount -t glusterfs server:/volume /mnt/gluster

# Fetch the client volfile from glusterd
gluster system:: getspec volume-name

# Mount from a local volfile
glusterfs -f /tmp/volume.vol /mnt/gluster

# Check the FUSE device
ls -la /dev/fuse

# Check mounted filesystems
cat /proc/mounts | grep gluster

# Check dmesg
dmesg | grep -i fuse

# Note: mount -f is a "fake" mount that only simulates mounting;
# it does not force a mount

# Check a stale NFS export (if using the NFS export)
showmount -e server
mount -t nfs server:/volume /mnt/gluster
```
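Tying the debug options together, a sketch that mounts with maximum client-side logging and prints the log tail on failure (the log path follows the usual client-log naming; adjust the server and volume):

```bash
# Mount with TRACE logging; on failure, surface the log immediately.
LOG=/var/log/glusterfs/mnt-gluster.log
if ! mount -t glusterfs server:/volume /mnt/gluster \
        -o log-level=TRACE -o log-file="$LOG"; then
    echo "mount failed; last log lines:"
    tail -20 "$LOG"
fi
```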
Step 10: GlusterFS Verification Script
```bash
# Create the verification script
cat << 'EOF' > /usr/local/bin/check-glusterfs.sh
#!/bin/bash
VOLUME=${1:-"volume-name"}

echo "=== GlusterFS Service ==="
systemctl status glusterd 2>/dev/null || echo "Service not running"

echo ""
echo "=== GlusterFS Process ==="
ps aux | grep -E "glusterd|glusterfs" | grep -v grep || echo "No GlusterFS process"

echo ""
echo "=== FUSE Module ==="
lsmod | grep fuse || echo "FUSE not loaded"

echo ""
echo "=== Peer Status ==="
gluster peer status 2>/dev/null || echo "Cannot get peer status"

echo ""
echo "=== Volume List ==="
gluster volume list 2>/dev/null || echo "Cannot list volumes"

echo ""
echo "=== Volume Status ==="
gluster volume status $VOLUME 2>/dev/null || echo "Volume $VOLUME not found"

echo ""
echo "=== Volume Info ==="
gluster volume info $VOLUME 2>/dev/null || echo "Cannot get volume info"

echo ""
echo "=== Volume Options ==="
gluster volume get $VOLUME auth.allow 2>/dev/null || echo "Cannot get options"

echo ""
echo "=== Brick Status ==="
gluster volume status $VOLUME detail 2>/dev/null | head -30 || echo "Cannot get brick status"

echo ""
echo "=== Network Connectivity ==="
for node in $(gluster peer status 2>/dev/null | grep Hostname | awk '{print $2}'); do
    echo "Node: $node"
    ping -c 2 -W 2 $node 2>&1 | tail -2
    nc -zv $node 24007 2>&1 || true
done

echo ""
echo "=== Current Mounts ==="
mount | grep gluster || echo "No GlusterFS mounts"

echo ""
echo "=== Firewall ==="
iptables -L -n 2>/dev/null | grep -E "24007|49152" || echo "No GlusterFS firewall rules"

echo ""
echo "=== Recent Logs ==="
tail -10 /var/log/glusterfs/glusterd.log 2>/dev/null || echo "No glusterd log"

echo ""
echo "=== Recommendations ==="
echo "1. Ensure glusterd service running on all nodes"
echo "2. Check all peers connected"
echo "3. Verify volume started"
echo "4. Confirm bricks online"
echo "5. Allow ports 24007, 24008, 49152-49251"
echo "6. Check mount point permissions"
echo "7. Use backup-volfile-servers for HA"
EOF

chmod +x /usr/local/bin/check-glusterfs.sh

# Usage
/usr/local/bin/check-glusterfs.sh volume-name
```
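Optionally, the script can run on a schedule so drift is caught early. A sketch using a cron drop-in (the path and cadence are assumptions; adjust the volume name):

```bash
# Run the health check hourly and append to a rolling log.
cat > /etc/cron.d/check-glusterfs << 'EOF'
0 * * * * root /usr/local/bin/check-glusterfs.sh volume-name >> /var/log/check-glusterfs.log 2>&1
EOF
```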
GlusterFS Mount Checklist
| Check | Expected |
|---|---|
| Glusterd running | Service active on nodes |
| Peers connected | All nodes in cluster |
| Volume started | Volume status Online |
| Bricks online | All bricks Y status |
| Network reachable | Ports 24007-24008 and 49152-49251 accessible |
| FUSE loaded | Kernel module present |
| Mount point | Directory exists with permissions |
Verify the Fix
```bash
# After fixing the GlusterFS mount:

# 1. Check the service is running
systemctl status glusterd          # Active (running)

# 2. Check peer status
gluster peer status                # All peers Connected

# 3. Check the volume
gluster volume status volume-name  # All bricks Online

# 4. Mount the volume
mount -t glusterfs server:/volume /mnt/gluster   # Mount succeeds

# 5. Check the mount
mount | grep gluster               # Shows the mounted volume

# 6. Test access
ls /mnt/gluster                    # Files accessible
```
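Finally, a quick read/write smoke test on the mounted volume confirms the fix end to end (uses the mount point from the steps above):

```bash
# Write, read back, and clean up a throwaway file on the volume.
TESTFILE=/mnt/gluster/.mount-test-$$
echo "hello from $(hostname)" > "$TESTFILE"
cat "$TESTFILE"
rm -f "$TESTFILE" && echo "read/write OK"
```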
Related Issues
- [Fix NFS Mount Failed](/articles/fix-nfs-mount-still-pointing-to-old-file-server-after-migration)
- [Fix Ceph RBD Mount Failed](/articles/fix-ceph-rbd-mount-failed)
- [Fix Samba Share Not Accessible](/articles/fix-samba-share-not-accessible)