This documentation provides a comprehensive analysis of the Proxmox server configuration. The server is running Proxmox VE 8.3.0 on Linux kernel 6.8.12-8-pve, hosting a single Ubuntu virtual machine with GPU passthrough.
```mermaid
graph TD
    subgraph "Physical Hardware"
        CPUS[CPU: Multi-core x86_64]
        RAM[System RAM]
        GPU[GPU: PCI 0000:01:00.0]
        GPUAUDIO[GPU Audio: PCI 0000:01:00.1]
        subgraph "Storage Devices"
            NVME[NVMe: HFM512GD3JX013N 476.9GB]
            SSD1[SSD: INRAM SSD 512GB 476.9GB]
            SSD2[SSD: SanDisk SDSSDA240G 223.6GB]
            SSD3[SSD: INTEL SSDSA2CW160G3 149.1GB]
            SSD4[SSD: SanDisk SDSSDHII480G 447.1GB]
        end
    end
    subgraph "Storage Configuration"
        ZPOOL[ZFS Mirror Pool: NVME 476GB]
        LVMVG[LVM Volume Group: pve 371.6GB]
        BACKUP[Backup Storage: proxback 447.1GB]
        NVME --> ZPOOL
        SSD1 --> ZPOOL
        SSD2 --> LVMVG
        SSD3 --> LVMVG
        SSD4 --> BACKUP
    end
    subgraph "Virtual Environment"
        VM[VM 100: ubuntu]
        VMCPU[6 CPU cores]
        VMRAM[38.1 GB RAM]
        VMDISK1[450GB ZFS Disk]
        VMDISK2[244GB LVM Disk]
        VMGPU[GPU Passthrough]
        VM --> VMCPU
        VM --> VMRAM
        VM --> VMDISK1
        VM --> VMDISK2
        VM --> VMGPU
        ZPOOL --> VMDISK1
        LVMVG --> VMDISK2
        GPU --> VMGPU
    end
```
This documentation is divided into five sections, each focusing on a specific aspect of the Proxmox server:
- Storage Configuration
- Crontab Jobs Analysis
- Backup Strategy Documentation
- Virtual Machine Configuration
- Network Configuration and Security Analysis
| Parameter | Value |
| --- | --- |
| Hostname | proxmox |
| OS | Linux proxmox 6.8.12-8-pve |
| Architecture | x86_64 GNU/Linux |
| Proxmox Version | 8.3.0 |
| Kernel Version | 6.8.12-8-pve |
| PVE Manager Version | 8.3.3 |
- Hostname: proxmox
- IP Address: 192.168.1.250
- Kernel: 6.8.12-8-pve
- Proxmox Version: 8.3.0
- Primary Storage: ZFS mirror pool (NVME)
- Secondary Storage: LVM thin provisioning (local-lvm)
- Backup Storage: Dedicated SSD (proxback)
The server hosts a single VM with the following specifications:
- ID: 100
- Name: ubuntu
- OS: Linux
- CPU: 6 cores, host passthrough
- Memory: 38.1 GB
- Primary Disk: 450GB on ZFS
- Secondary Disk: 244GB on LVM thin provisioning
- GPU: Direct PCI passthrough
## Maintenance Procedures
The server has automated maintenance procedures including:
- Daily ZFS scrub and TRIM operations
- Daily backups with 3-day retention
- Automated VM snapshots with 3-day retention
- Weekly and monthly filesystem maintenance
The server has PCI passthrough configured for the VM to access a physical GPU:
| PCIe Device | Function | VM Assignment |
| --- | --- | --- |
| 0000:01:00.0 | GPU | VM 100 with x-vga=1 |
| 0000:01:00.1 | GPU Audio | VM 100 |
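The assignments above correspond to `hostpci` entries in the VM's configuration file. A plausible reconstruction of the relevant lines of /etc/pve/qemu-server/100.conf, based on this table and the VM specifications documented earlier, is shown below; only the PCI addresses, core count, CPU type, and x-vga flag are confirmed by this document, and the remaining option order and values are illustrative assumptions:

```
# Hypothetical excerpt of /etc/pve/qemu-server/100.conf (reconstructed)
cores: 6
cpu: host
hostpci0: 0000:01:00.0,x-vga=1
hostpci1: 0000:01:00.1
```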
This configuration indicates that hardware virtualization extensions (VT-d/IOMMU) are enabled in the BIOS and properly configured in the kernel.
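A quick way to confirm this on a host is to check whether the kernel has populated IOMMU groups in sysfs. The sketch below only inspects the standard sysfs path and changes nothing; the exact kernel command-line options vary by platform:

```shell
# Report whether the kernel has active IOMMU groups (standard sysfs path).
# On a host without VT-d/IOMMU enabled this prints a diagnostic instead.
iommu_status() {
    groups_dir=/sys/kernel/iommu_groups
    if [ -d "$groups_dir" ] && [ -n "$(ls -A "$groups_dir" 2>/dev/null)" ]; then
        echo "IOMMU enabled: $(ls "$groups_dir" | wc -l) groups"
    else
        echo "IOMMU not active: check BIOS VT-d/AMD-Vi and the intel_iommu/amd_iommu kernel options"
    fi
}
iommu_status
```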
The server provides remote access through:
- SSH (implied by management capabilities)
- Proxmox Web Interface (on port 8006)
The system shows the following resource allocation:
- VM Resource Usage:
  - 6 CPU cores dedicated to VM 100
  - 38.1 GB RAM dedicated to VM 100
  - 694 GB of storage dedicated to VM 100 (450 GB on ZFS, 244 GB on LVM)
- Remaining Host Resources:
  - The host retains the remaining CPU cores and RAM for system operations
  - Approximately 59.57 GB of free space in the LVM volume group
  - Approximately 348 GB of free space in the ZFS pool
## Automation and Maintenance
The server contains several custom scripts in the /scripts directory:
| Script | Purpose |
| --- | --- |
| /scripts/trim_and_scrub.sh | Performs ZFS maintenance (trim and scrub operations) |
| /scripts/snapshot.sh | Creates and manages VM snapshots with retention policies |
| /scripts/oldbacks.sh | Manages backup retention by removing old backups |
| /scripts/oldbacks_backup.sh | Alternative backup cleanup script (not actively used) |
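The contents of these scripts are not reproduced here. As an illustration only, the retention behavior attributed to /scripts/oldbacks.sh could be implemented along the following lines; the directory layout, archive naming pattern, and parameters are assumptions based on the 3-day policy described in this document:

```shell
# Hypothetical sketch of backup-retention logic like /scripts/oldbacks.sh:
# delete vzdump archives older than the retention window.
prune_old_backups() {
    backup_dir=$1      # e.g. the dump directory on the proxback storage
    retention_days=$2  # this document describes a 3-day retention policy
    # -mtime +N matches files last modified more than N days ago
    find "$backup_dir" -name 'vzdump-*' -mtime "+$retention_days" -print -delete
}
```

For example, `prune_old_backups /path/to/dump 3` would remove archives older than three days; the actual script may differ in pattern and error handling.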
## Maintenance Activities
Regular maintenance activities include:
- Daily Activities:
  - VM backups at 4:00 AM
  - ZFS trim and scrub operations at 3:47 AM
  - Filesystem checks at 3:10 AM
- Weekly Activities:
  - More comprehensive filesystem checks on Sundays
- Monthly Activities:
  - ZFS TRIM operations on the first Sunday of the month
  - ZFS scrubbing on the second Sunday of the month
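The daily portion of this schedule corresponds to crontab entries along the following lines. This is a reconstruction: the times, script path, and log path are taken from this document, while the vzdump options are illustrative assumptions:

```
# Reconstructed root crontab (illustrative; only times and the listed
# script/log paths are confirmed by this document)
# m  h  dom mon dow  command
10   3  *   *   *    <filesystem check command>
47   3  *   *   *    /scripts/trim_and_scrub.sh >> /var/log/trim_and_scrub_cron.log 2>&1
0    4  *   *   *    vzdump 100 --storage proxback --mode snapshot --mailto root
```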
Monitoring and notifications are in place through:
- NetData monitoring
- Email notifications for backup operations
- Log files for maintenance operations:
  - /var/log/trim_and_scrub_cron.log for ZFS maintenance
  - /scripts/snapshot.log for VM snapshot operations
  - /scripts/trim_all.log for additional trim operation logs
The server has multiple recovery options available:
- VM Snapshots:
  - Automated daily snapshots with 3-day retention
  - Currently no active snapshots, though configuration history shows previous snapshots
- VM Backups:
  - Daily backups to dedicated backup storage
  - 3-day retention policy
- Storage Redundancy:
  - ZFS mirror configuration for the NVMe pool protects against single-disk failure
  - Separate physical device for backups
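When recovery is actually needed, these options map onto standard Proxmox CLI commands. A hedged sketch follows; the snapshot name and archive path are placeholders, not values taken from this document:

```
# Roll VM 100 back to a snapshot (snapshot name is a placeholder):
qm rollback 100 <snapshot-name>

# Restore VM 100 from a vzdump archive on the backup storage
# (archive path is a placeholder):
qmrestore /path/to/vzdump-qemu-100-<timestamp>.vma.zst 100 --force
```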
The server employs several performance optimizations:
- Storage Optimizations:
  - Regular TRIM operations for SSD health and performance
  - ZFS used for critical VM storage
  - Discard enabled on VM disks to maintain SSD performance
- VM Optimizations:
  - Virtio drivers for network and storage
  - Direct GPU passthrough for graphics performance
  - Host CPU type passthrough for maximum CPU performance
- Network Optimizations:
  - Offload features disabled for potentially better network stability
  - Fixed network speed and duplex settings
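Offload and speed/duplex settings of this kind are typically applied via ethtool hooks in /etc/network/interfaces. An illustrative fragment follows; the interface name and the exact offload flags and speed values are assumptions, since the document confirms only that offloads are disabled and speed/duplex are fixed:

```
# Hypothetical /etc/network/interfaces fragment (interface name assumed)
auto eno1
iface eno1 inet manual
    post-up /usr/sbin/ethtool -K eno1 tso off gso off gro off
    post-up /usr/sbin/ethtool -s eno1 speed 1000 duplex full autoneg off
```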
```mermaid
graph TD
    subgraph SystemComponents
        PVEAPI[Proxmox API Services]
        QEMU[QEMU KVM]
        ZFS[ZFS Storage]
        LVM[LVM Storage]
        BACKUP[Backup System]
        SCRIPTS[Custom Scripts]
        CRON[Scheduled Tasks]
        NET[Network Config]
    end
    subgraph IntegrationFlow
        PVEAPI --> QEMU
        QEMU --> VM[VM 100 Ubuntu]
        PVEAPI --> ZFS
        PVEAPI --> LVM
        ZFS --> VM
        LVM --> VM
        CRON --> SCRIPTS
        SCRIPTS --> ZFS
        SCRIPTS --> BACKUP
        SCRIPTS --> VM
        PVEAPI --> BACKUP
        BACKUP --> VM
        NET --> VM
        NET --> PVEAPI
    end
    subgraph ExternalSystems
        DNS[Public DNS]
        EMAIL[Email Notifications]
    end
    NET --> DNS
    BACKUP --> EMAIL
```
This documentation was generated by analyzing the Proxmox server configuration on March 26, 2025. All diagrams were created using Mermaid syntax for clarity and visual representation of the system architecture and workflows.