Add proxmox.md

This commit is contained in:
Javier Vilarroig Christensen 2025-02-12 09:27:31 +01:00
parent e84ff2e0db
commit 04196e5b57
1 changed file with 69 additions and 0 deletions

# Proxmox
## PVE
### Cheatsheets
#### Cluster maintenance
* Mark a node as in maintenance mode
> ha-manager crm-command node-maintenance enable _node_
* Bind mount for NAS
> mp0: /mnt/pve/NAS,mp=/mount/nas-sp
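As a hedged sketch, the maintenance-mode command above can be wrapped in a tiny helper that only builds and prints the command string for review (the function name and the `pve01` node name are illustrative, not from these notes):

```shell
# maintenance_cmd builds the ha-manager call for a node; it only prints the
# command so it can be reviewed before running it on a PVE node.
maintenance_cmd() {
  # $1 = node name, $2 = enable | disable
  printf 'ha-manager crm-command node-maintenance %s %s\n' "$2" "$1"
}

maintenance_cmd pve01 enable    # pve01 is a placeholder node name
maintenance_cmd pve01 disable
```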
### Improvements in containers
* UniData
* Move Error pages to Jenkins
* Jenkins Agent
* delete the mvn cache daily (to be confirmed; via a systemd.timer)
* Upgrade containers to Debian 12
* Ignore (git reset) changes that are not supposed to be committed
* Monitoring
* Dependencies
* sql
* git
* jenkins
* icinga
* EBX Artifacts
* UDE files
* Add permanent VMs (jenkins, git, etc.)
* Archiva
* Replace by something else
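The daily mvn-cache cleanup noted for the Jenkins agent above could be sketched as a systemd service/timer pair; the unit names and the `/home/jenkins/.m2/repository` path are assumptions, not taken from these notes:

```ini
# mvn-cache-clean.service (hypothetical unit name)
[Unit]
Description=Delete the Maven cache on the Jenkins agent

[Service]
Type=oneshot
# ~/.m2/repository is Maven's default local cache; adjust for the agent user
ExecStart=/bin/rm -rf /home/jenkins/.m2/repository
```

```ini
# mvn-cache-clean.timer (hypothetical unit name)
[Unit]
Description=Run the mvn cache cleanup daily

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
```

Enable with `systemctl enable --now mvn-cache-clean.timer`.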
# (DRAFT) Proxmox node removal procedure
* Remove node from pve_nodes table as SysDB
* Move the node to maintenance mode
> ha-manager crm-command node-maintenance enable node
* Wait for the automated migrations to finish
* Manually migrate any remaining containers
* Destroy all replications pointing to the node
__CAN THIS BE AUTOMATED BY THE CheckConsistency PIPELINE?__
Maybe as an option to destroy all replications before creating them again.
That way, all unneeded replications would be destroyed.
This can be expensive, as all replicas will have to be created again.
If we move to Ceph, this whole point becomes moot.
* Remove the node from HA groups
* Double check there is nothing still attached to the node
* Disable node in icinga
* Reinstall the node from the OVH control panel
This makes sure the server never boots again with its current identity, which would cause problems.
* _pvecm nodes_
* Confirm that the node is no longer in the list
* _pvecm delnode XXX_
* It is normal to get an error because the node cannot be reached
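The removal steps above can be sketched as a dry-run script that only prints each command for review before anything is executed (the node name and the `run` wrapper are illustrative; the SysDB, icinga and OVH steps are manual and omitted):

```shell
# Dry-run sketch: echo each command instead of running it.
NODE="${1:-pve-old}"                  # placeholder node name

run() { printf '%s\n' "$*"; }         # print only; replace the body to run for real

run ha-manager crm-command node-maintenance enable "$NODE"
# ...wait for the automated HA migrations, then migrate leftovers by hand...
run pvesr list                        # review replications still pointing at the node
run pvecm nodes                       # confirm the node is gone from the member list
run pvecm delnode "$NODE"             # an unreachable-node error here is expected
```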
# (DRAFT) Add a node to Proxmox cluster procedure
* Prerequisites
* DNS A record in place
Request to Dr. Watson
* Reverse DNS records in place
Managed in the OVH control panel
* Install Proxmox V8 using our template
* Customize using the template script
* If this is a new node, it must be added to the spinco_hook script
* Add line to interfaces
> source /etc/network/interfaces.d/*
* Assign right root password
* Request X509 certificate
* Add to cluster using the interface
* Add in icinga
* Add the node to any relevant HA group
* Add node to SLan
* Add node to sys_db pve_nodes table
* Balance load if needed
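A hedged sketch of the prerequisite checks and the CLI alternative to the GUI join, again as dry-run echoes (hostnames and the IP are placeholders; `pvecm add` is run on the new node against an existing cluster member):

```shell
NEW_NODE="${1:-pve-new}"             # placeholder hostname of the node to add
CLUSTER_HOST="${2:-pve01}"           # placeholder: any existing cluster member

run() { printf '%s\n' "$*"; }        # dry-run wrapper; prints instead of executing

run getent hosts "$NEW_NODE"         # forward DNS record in place?
run dig -x 203.0.113.10 +short       # reverse record (documentation-range IP)
run pvecm add "$CLUSTER_HOST"        # CLI equivalent of joining via the interface
```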