server edition, first version complete

mrkmntal 2025-12-18 21:07:59 -05:00
commit 7c8ba859fb
6 changed files with 848 additions and 551 deletions

README.md (403 changed lines)
@@ -1,142 +1,265 @@

# 🚀 MNMIVM-SE (Server Edition)
### *A LAN-Native VM Cloud with a Minimal Control Plane*

![MNMIVM Hero](./assets/tuxrockets.jpg)

**MNMIVM-SE** is the **server-focused edition** of MNMIVM — a minimal, single-binary VM launcher built on **QEMU + KVM + cloud-init** that turns your **LAN into a local VM cloud**.

Unlike traditional platforms, MNMIVM-SE exposes the raw infrastructure primitives directly:
bridges, TAP devices, MAC addresses, static IPs, and Linux processes.

> Your LAN is the fabric.
> The kernel is the scheduler.
> The CLI is the control plane.

---

## ☁️ What MNMIVM-SE Is

* A **local VM cloud** built directly on your LAN
* A **process-native control plane**
* A **CLI-first infrastructure tool**
* A Proxmox-style networking model **without Proxmox**

Each VM:

- Has a persistent MAC address
- Has a static IP on your LAN
- Appears as a first-class network device
- Can host real infrastructure services (DNS, CI, storage, routing, etc.)

Routers, firewalls, and switches see MNMIVM-SE VMs as **real machines**, not NAT artifacts.

---

## 🧠 Control Plane Model

MNMIVM-SE **does have a control plane** — it's just intentionally **minimal, local, and explicit**.

The control plane is implemented as:

- A single CLI binary
- A file-backed state store
- Linux process lifecycle tracking

There is:

- No always-on daemon
- No API server
- No database
- No reconciliation loop
- No scheduler service

Instead:

- VM lifecycle = Linux process lifecycle
- State = files under `/var/lib/microvm`
- Configuration changes = cloud-init regeneration
- Access = SSH + VNC

> The filesystem is the state store.
> `/proc` is the source of truth.
> Each CLI command is a deliberate control action.

This makes MNMIVM-SE closer to **early private IaaS** and **bare-metal virtualization** than modern hyperscaler platforms.
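
For example, inspecting a VM takes nothing but standard tools. The sketch below assumes only the documented file layout under `/var/lib/microvm`; the commands are illustrative, not part of the CLI:

```bash
VM=vm1
STATE=/var/lib/microvm/vms/$VM

cat "$STATE/vm.ip"    # static IP assigned via cloud-init
cat "$STATE/vm.mac"   # persistent MAC address

# Liveness is just "does the PID still exist in /proc?"
if [ -d "/proc/$(cat "$STATE/vm.pid")" ]; then
  echo "$VM is running"
else
  echo "$VM is stopped"
fi
```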

---

## 🧱 Supported Host Operating Systems

MNMIVM-SE is conservative about host support and only documents what is tested.

### ✅ Supported

| Host OS | Version |
|------|---------|
| **Debian** | 12+ |
| **Alpine Linux** | 3.22+ |
### 🕒 Coming Soon
| Host OS | Notes |
|------|------|
| Ubuntu | Netplan-based host networking support planned |
> Ubuntu is not currently documented due to netplan-specific bridge handling.
> Support will be added, but is not a top priority.
### ❌ Not Supported
- Wi-Fi-only hosts
- WSL / nested hypervisors
- Desktop/laptop setups expecting NAT

---

## 🧱 Architecture Overview

* **QEMU + KVM**
* **Linux bridge (`br0`)**
* **TAP devices**
* **Cloud-init seed ISO**
* **Static IP networking**
* **VNC console for recovery**

```
/var/lib/microvm/
├── images/
└── vms/
    └── vm1/
        ├── disk.qcow2
        ├── seed.iso
        ├── pubkey.pub
        ├── os.name
        ├── vm.ip
        ├── vm.mac
        ├── vnc.port
        └── vm.pid
```

No daemon.
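
Under the hood this is plain QEMU. A hand-rolled equivalent of what such a launcher might execute is sketched below; the flags are standard QEMU options, not necessarily MNMIVM-SE's actual invocation, and the MAC is a made-up example from the QEMU `52:54:00` OUI:

```bash
qemu-system-x86_64 \
  -enable-kvm -m 1024 -smp 1 \
  -drive file=/var/lib/microvm/vms/vm1/disk.qcow2,if=virtio \
  -cdrom /var/lib/microvm/vms/vm1/seed.iso \
  -netdev bridge,id=net0,br=br0 \
  -device virtio-net-pci,netdev=net0,mac=52:54:00:12:34:56 \
  -vnc :1 \
  -daemonize
```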

---

## 🌐 Host Networking Requirements (CRITICAL)

MNMIVM-SE requires a **proper Linux bridge**.

### Example: `/etc/network/interfaces` (Debian)
```ini
auto lo
iface lo inet loopback

auto ens18
iface ens18 inet manual

auto br0
iface br0 inet static
    address 192.168.86.10
    netmask 255.255.255.0
    gateway 192.168.86.1
    dns-nameservers 1.1.1.1 8.8.8.8
    bridge_ports ens18
    bridge_stp off
    bridge_fd 0
```
**Rules that must be followed:**
* The host IP must live on `br0`
* The physical NIC must have no IP
* Wi-Fi cannot be bridged
* VMs attach via TAP devices
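
A quick way to verify this layout with standard iproute2 tooling (nothing here is MNMIVM-SE-specific):

```bash
# The host IP should appear on br0, not on the physical NIC.
ip -4 addr show br0
ip -4 addr show ens18    # expect no inet line here

# ens18 should be enslaved to br0.
ip link show master br0
```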

---

## 🔥 Kernel Bridge Filtering (THIS WILL BREAK VMs)

Linux defaults can silently block bridged traffic.

This **must** be disabled:
```bash
cat /proc/sys/net/bridge/bridge-nf-call-iptables
# must be 0
```
If set to `1`, VMs will:
* Boot successfully
* Have valid IPs
* Be completely unreachable
### Fix (runtime)
```bash
sudo sysctl -w net.bridge.bridge-nf-call-iptables=0
sudo sysctl -w net.bridge.bridge-nf-call-ip6tables=0
sudo sysctl -w net.bridge.bridge-nf-call-arptables=0
```
### Persistent fix
`/etc/sysctl.d/99-bridge.conf`:
```ini
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-arptables = 0
```
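
Note that these `net.bridge.*` keys only exist while the `br_netfilter` module is loaded; if the files under `/proc/sys/net/bridge/` are missing, the kernel is not filtering bridged traffic in the first place. To apply the persistent file without rebooting:

```bash
sudo sysctl --system    # reloads every /etc/sysctl.d/*.conf
```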

---

## 🔐 QEMU Bridge Permissions

QEMU must be allowed to attach TAP devices.

### `/etc/qemu/bridge.conf`
```ini
allow br0
```
Verify helper:
```bash
ls -l /usr/lib/qemu/qemu-bridge-helper
```
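
The helper's path varies by distro (Debian ships it under `/usr/lib/qemu/`, Fedora under `/usr/libexec/`), so if the path above does not exist, locate it first:

```bash
find / -name qemu-bridge-helper 2>/dev/null
```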
---
## 🐧 Alpine Linux Host Notes (3.22+)
Alpine does not ship a hypervisor stack by default.
Install required packages:
```bash
apk add \
qemu-system-x86_64 \
qemu-img \
qemu-hw-display-virtio-vga \
bridge-utils \
cdrkit \
go
```
Notes:
* `cdrkit` provides `genisoimage`
* `bridge-utils` provides `brctl`
* `qemu-hw-display-virtio-vga` is required for VNC
* No libvirt or services are used
* OpenRC is sufficient

Alpine works well as a **minimal KVM host** once assembled.
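
One extra check worth doing on any fresh host (generic KVM setup, not specific to MNMIVM-SE): make sure the KVM module is loaded and `/dev/kvm` exists.

```bash
# Load the module matching your CPU vendor.
modprobe kvm-intel || modprobe kvm-amd

# QEMU needs this device node for hardware acceleration.
ls -l /dev/kvm
```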
---
## ⚙️ Server Edition Configuration (Code-Level)
Networking and sizing are configured **in code**, not via runtime flags.

Edit these constants in `main.go` (around lines 25–30):
```go
// Networking
bridgeName = "br0"
lanCIDR = "192.168.86.0/24"
lanGW = "192.168.86.1"
lanDNS1 = "192.168.86.1"
lanDNS2 = "8.8.8.8"
// VM sizing
baseDiskSize = "12G"
memMB = "1024"
cpus = "1"
```
This keeps runtime behavior explicit and predictable.
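
After editing the constants, rebuild the binary; this mirrors the repo's one-line build script:

```bash
go build -o ./mnmivm-se
```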

---

@@ -145,95 +268,91 @@

### Create a VM

```bash
sudo mnmivm-se create vm1 \
  --os debian \
  --pubkey-path ~/.ssh/id_ed25519.pub \
  --ip 192.168.86.53
```

### Start a VM

```bash
sudo mnmivm-se start vm1
```

### SSH in

```bash
ssh debian@192.168.86.53
```

### Stop a VM

```bash
sudo mnmivm-se stop vm1
```

### Update cloud-init (SSH key / IP)

```bash
sudo mnmivm-se update-cloud vm1 \
  --pubkey-path newkey.pub \
  --ip 192.168.86.54
```
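
Cloud-init only reads the regenerated seed ISO at boot, so the new key and IP take effect on the next start; a plain stop/start cycle applies them:

```bash
sudo mnmivm-se stop vm1
sudo mnmivm-se start vm1
```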

---

## 🔑 Security Model

* SSH key-only access
* No password authentication
* No root login
* Static IPs (no DHCP ambiguity)
* MAC addresses pinned via cloud-init
* VNC console for recovery only

This follows **server-grade discipline**, not container ergonomics.
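
These settings are injected via cloud-init, but they are easy to audit from inside a guest; `sshd -T` prints the daemon's effective configuration:

```bash
sudo sshd -T | grep -Ei 'passwordauthentication|permitrootlogin'
# expect: passwordauthentication no / permitrootlogin no
```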

---

## ⚠️ What MNMIVM-SE Is Not

* ❌ A managed cloud service
* ❌ A multi-tenant platform
* ❌ A scheduler or orchestrator
* ❌ A UI-driven system
* ❌ A laptop-friendly NAT tool

If you want **policy, HA, quotas, tenants**, use Proxmox or OpenStack.
If you want **direct control over real infrastructure**, MNMIVM-SE is the tool.

---

## 🐧 Why MNMIVM-SE Exists

Because sometimes you don't want:

* libvirt
* XML
* dashboards
* APIs
* orchestration layers

You want:

> “Put a VM on my LAN, give it an IP, and let me build infrastructure.”

MNMIVM-SE does exactly that — and nothing more.

---

### ⚠️ Final Note

If you break networking with MNMIVM-SE, it isn't a bug.
It's Linux doing exactly what you told it to do.

And that's the point.

Binary image file changed (288 KiB → 427 KiB); content not shown.

(build script)

@@ -1,2 +1,2 @@
#!/bin/sh
go build -o ./mnmivm-se

go.mod (4 changed lines)

@@ -1,3 +1,3 @@
module mnmivm

go 1.19

main.go (990 changed lines)

File diff suppressed because it is too large

mnmivm-se (new executable file; binary not shown)