Initial Docs Push
This commit is contained in:
.gitea/workflows/docs-ci.yaml (new file, 39 lines)
```yaml
name: Build and Push container image
run-name: ${{ gitea.actor }} is building and pushing

on:
  create:
    tags: "*"

env:
  GITEA_DOMAIN: git.fjla.uk
  GITEA_REGISTRY_USER: fred.boniface
  RESULT_IMAGE_NAME: fred.boniface/fjla-docs

jobs:
  build-and-push-image:
    runs-on: ubuntu-latest
    container:
      image: catthehacker/ubuntu:act-latest
      options: --privileged
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: Login to Gitea Container Registry
        uses: docker/login-action@v3
        with:
          registry: ${{ env.GITEA_DOMAIN }}
          username: ${{ env.GITEA_REGISTRY_USER }}
          password: ${{ secrets.REGISTRY_TOKEN }}
      - name: Build and Push image
        uses: docker/build-push-action@v6
        with:
          context: .
          file: ./Dockerfile
          push: true
          tags: |
            ${{ env.GITEA_DOMAIN }}/${{ env.RESULT_IMAGE_NAME }}:${{ gitea.ref_name }}
            ${{ env.GITEA_DOMAIN }}/${{ env.RESULT_IMAGE_NAME }}:latest
```
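Since the workflow fires on the `create` event for tags, pushing a new tag is what kicks off a build (the tag name below is illustrative):

```bash
# Create and push a tag; the 'create' event then triggers the workflow,
# producing images tagged with the ref name and with 'latest'.
git tag v0.1.0
git push origin v0.1.0
```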
Dockerfile (new file, 10 lines)
```dockerfile
# Stage 1: build the MkDocs site in an isolated builder image
FROM python:3.13-slim AS builder
WORKDIR /docs
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
RUN mkdocs build --clean

# Stage 2: serve the generated static site with nginx
FROM nginx:alpine
COPY --from=builder /docs/site /usr/share/nginx/html
EXPOSE 80
```
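Assuming this Dockerfile, the image can be built and previewed locally (the local tag name is an arbitrary choice, not from the repo):

```bash
# Build the docs image and serve the rendered site on http://localhost:8080
docker build -t fjla-docs:local .
docker run --rm -p 8080:80 fjla-docs:local
```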
docs/Virtualisation/pve-k8s-ceph-config.md (new file, 127 lines)
# 📚 Complete Ceph CSI Deployment & Troubleshooting Guide

This guide details the preparation and configuration necessary for successful dynamic provisioning of Ceph RBD (RWO) and CephFS (RWX) volumes in a Kubernetes cluster running on **MicroK8s**, backed by a Proxmox VE (PVE) Ceph cluster.

---

## 1. ⚙️ Ceph Cluster Preparation (Proxmox VE)

These steps ensure the Ceph backend has the necessary pools and structure.

* **Create Dedicated Pools:** Create OSD pools for data, e.g. **`k8s_rbd`** (for RWO), plus **`k8s_data`** and **`k8s_metadata`** (for CephFS).
* **Create CephFS Metadata Servers (MDS):** Deploy **at least two** Metadata Server (MDS) instances so the filesystem remains available if one fails.
* **Create CephFS Filesystem:** Create the Ceph filesystem (e.g. named **`k8s`**), linking the metadata and data pools.
* **Create Subvolume Group (Mandatory Fix):** Create the dedicated subvolume group **`csi`** inside your CephFS. The CSI driver's default configuration expects it, and creating it fixes the "No such file or directory" error during provisioning.
    * **CLI Command:** `ceph fs subvolumegroup create k8s csi`
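The preparation steps above can be sketched from the CLI on a PVE/Ceph node. The pool names match the guide, but the PG counts and the `pveceph` MDS step are assumptions to adapt to your cluster:

```bash
# Pools for RBD (RWO) and CephFS (RWX); PG counts are illustrative.
ceph osd pool create k8s_rbd 64
rbd pool init k8s_rbd
ceph osd pool create k8s_metadata 16
ceph osd pool create k8s_data 64

# MDS daemons are created per node, via the PVE UI or (assumed) CLI:
pveceph mds create

# Filesystem plus the mandatory 'csi' subvolume group.
ceph fs new k8s k8s_metadata k8s_data
ceph fs subvolumegroup create k8s csi
```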

---

## 2. 🔑 Ceph User and Authorization (The Permission Fix)

This addresses the persistent "Permission denied" errors during provisioning.

* **Create and Configure Ceph User:** Create the user (`client.kubernetes`) and set permissions for all services. The **wildcard MGR cap** (`mgr "allow *"`) is critical for volume creation.
    * **Final Correct Caps Command:**

        ```bash
        sudo ceph auth caps client.kubernetes \
          mon 'allow r' \
          mgr "allow *" \
          mds 'allow rw' \
          osd 'allow class-read object_prefix rbd_children, allow pool k8s_rbd rwx, allow pool k8s_metadata rwx, allow pool k8s_data rwx'
        ```
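After running the caps command, the resulting capabilities can be confirmed before moving on:

```bash
# Prints the user's key and its current caps for verification.
sudo ceph auth get client.kubernetes
```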
* **Export Key to Kubernetes Secrets:** Create two Secrets containing the user's key, one in each CSI provisioner namespace:
    * **RBD Secret:** `csi-rbd-secret` (in the RBD provisioner namespace).
    * **CephFS Secret:** `csi-cephfs-secret` (in the CephFS provisioner namespace).

**The Secrets must contain the keys `userID` and `userKey`. `userID` omits the `client.` prefix shown in the `ceph auth` output (i.e. use `kubernetes`, not `client.kubernetes`).**
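A sketch of the RBD Secret manifest (the namespace is an assumed example, and the key value comes from `ceph auth get-key client.kubernetes`; mirror the same shape for `csi-cephfs-secret`):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: csi-rbd-secret
  namespace: ceph-csi-rbd   # assumed provisioner namespace
stringData:
  userID: kubernetes        # no 'client.' prefix
  userKey: <output of: ceph auth get-key client.kubernetes>
```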

---

## 3. 🌐 Network Configuration and Bi-Directional Routing

These steps ensure stable, bidirectional communication for volume staging and mounting.

### A. PVE Host Firewall Configuration

The PVE firewall must explicitly **allow inbound traffic** from the entire Kubernetes Pod Network to the Ceph service ports.

| Protocol | Port(s) | Source | Purpose |
| :--- | :--- | :--- | :--- |
| **TCP** | **6789** | K8s Pod Network CIDR (e.g., `10.1.0.0/16`) | Monitor connection. |
| **TCP** | **6800-7300** | K8s Pod Network CIDR | OSD/MDS/MGR data transfer. |

If your Monitors also listen on the v2 protocol (msgr2), allow **TCP 3300** as well.
Alternatively, the PVE firewall provides a built-in **`Ceph`** macro; if it is available, use the macro instead of adding the individual rules above.

### B. PVE Host Static Routing (Ceph → K8s)

Add **persistent static routes** on **all PVE Ceph hosts** so that Ceph can send responses back to the Pod Network.

* **Action:** Edit `/etc/network/interfaces` on each PVE host:

    ```ini
    # Example:
    post-up ip route add <POD_NETWORK_CIDR> via <K8S_NODE_IP> dev <PVE_INTERFACE>
    # e.g., post-up ip route add 10.1.0.0/16 via 172.30.100.41 dev vmbr0
    ```

### C. K8s Node IP Forwarding (Gateway Function)

Enable IP forwarding on **all Kubernetes nodes** so they can route incoming Ceph traffic to the correct Pods.

* **Action:** Run on all K8s nodes:

    ```bash
    sudo sysctl net.ipv4.ip_forward=1
    sudo sh -c 'echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.d/99-sysctl.conf'
    ```

### D. K8s Static Routing (K8s → Ceph) - Conditional/Advanced ⚠️

This routing is **only required** if the **Ceph Public Network** (the network the Ceph Monitors/OSDs listen on) is **not reachable** via your Kubernetes nodes' **default gateway**.

* **Action:** Implement this via a **Netplan configuration** on the Kubernetes nodes, using multiple routes with different metrics to provide automatic failover.
* **Example Netplan Configuration (`/etc/netplan/99-ceph-routes.yaml`):**
```yaml
|
||||||
|
network:
|
||||||
|
version: 2
|
||||||
|
renderer: networkd
|
||||||
|
ethernets:
|
||||||
|
eth0: # Replace with your primary K8s network interface
|
||||||
|
routes:
|
||||||
|
# Route 1: Directs traffic destined for the first Ceph Monitor IP (10.15.16.1)
|
||||||
|
# through three different PVE hosts (172.30.25.x) as gateways.
|
||||||
|
# The lowest metric (10) is preferred.
|
||||||
|
- to: 10.15.16.1/32
|
||||||
|
via: 172.30.25.10
|
||||||
|
metric: 10
|
||||||
|
- to: 10.15.16.1/32
|
||||||
|
via: 172.30.25.20
|
||||||
|
metric: 100
|
||||||
|
- to: 10.15.16.1/32
|
||||||
|
via: 172.30.25.30
|
||||||
|
metric: 100
|
||||||
|
|
||||||
|
# Route 2: Directs traffic destined for the second Ceph Monitor IP (10.15.16.2)
|
||||||
|
# with a similar failover strategy.
|
||||||
|
- to: 10.15.16.2/32
|
||||||
|
via: 172.30.25.20
|
||||||
|
metric: 10
|
||||||
|
- to: 10.15.16.2/32
|
||||||
|
via: 172.30.25.10
|
||||||
|
metric: 100
|
||||||
|
- to: 10.15.16.2/32
|
||||||
|
via: 172.30.25.30
|
||||||
|
metric: 100
|
||||||
|
```
|
||||||

Use route metrics (a lower metric means a higher priority) to prefer the most direct path while still offering alternative gateways into the Ceph network where needed.
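To check which gateway a node will actually use for a monitor IP, `ip route get` shows the selected route:

```bash
# Displays the chosen next hop (the lowest-metric route that is up).
ip route get 10.15.16.1
```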

---

## 4. 🧩 MicroK8s CSI Driver Configuration (The Path Fix)

This resolves the **`staging path does not exist on node`** error from the Node Plugin.

* **Update `kubeletDir`:** When deploying the CSI driver (via Helm or YAML), set the `kubeletDir` parameter to the MicroK8s-specific path:

    ```yaml
    # Correct path for the MicroK8s kubelet root directory
    kubeletDir: /var/snap/microk8s/common/var/lib/kubelet
    ```
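As a hedged example, when deploying with the ceph-csi Helm charts this path can be supplied as a chart value (the release and namespace names here are assumptions):

```bash
# Hypothetical Helm invocation; verify the value name against the
# chart version you actually deploy.
helm upgrade --install ceph-csi-cephfs ceph-csi/ceph-csi-cephfs \
  --namespace ceph-csi-cephfs \
  --set kubeletDir=/var/snap/microk8s/common/var/lib/kubelet
```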
docs/index.md (new file, empty)

mkdocs.yaml (new file, 8 lines)
```yaml
site_name: FJLA Documentation
nav:
  - Home: index.md
  - Virtualisation:
      - Combining K8s, PVE & Ceph: Virtualisation/pve-k8s-ceph-config.md

theme:
  name: material
```
requirements.txt (new file, 2 lines)
```
mkdocs
mkdocs-material
```