🚀 MNMIVM-SE (Server Edition)
A LAN-Native VM Cloud with a Minimal Control Plane
MNMIVM-SE is the server-focused edition of MNMIVM — a minimal, single-binary VM launcher built on QEMU + KVM + cloud-init that turns your LAN into a local VM cloud.
Unlike traditional platforms, MNMIVM-SE exposes the raw infrastructure primitives directly: bridges, TAP devices, MAC addresses, static IPs, and Linux processes.
Your LAN is the fabric.
The kernel is the scheduler.
The CLI is the control plane.
☁️ What MNMIVM-SE Is
- A local VM cloud built directly on your LAN
- A process-native control plane
- A CLI-first infrastructure tool
- A Proxmox-style networking model without Proxmox
Each VM:
- Has a persistent MAC address
- Has a static IP on your LAN
- Appears as a first-class network device
- Can host real infrastructure services (DNS, CI, storage, routing, etc.)
Routers, firewalls, and switches see MNMIVM-SE VMs as real machines, not NAT artifacts.
🧠 Control Plane Model
MNMIVM-SE does have a control plane — it’s just intentionally minimal, local, and explicit.
The control plane is implemented as:
- A single CLI binary
- A file-backed state store
- Linux process lifecycle tracking
There is:
- No always-on daemon
- No API server
- No database
- No reconciliation loop
- No scheduler service
Instead:
- VM lifecycle = Linux process lifecycle
- State = files under `/var/lib/microvm`
- Configuration changes = cloud-init regeneration
- Access = SSH + VNC
The filesystem is the state store.
`/proc` is the source of truth.
Each CLI command is a deliberate control action.
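For illustration, a liveness check in this model needs nothing but the pid file and `/proc`. A minimal Go sketch (the real CLI's internals may differ):

```go
// Sketch: decide whether a VM is running using only file-backed state
// and the kernel's process table. No daemon, no API call.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// vmRunning reads vms/<name>/vm.pid and checks /proc/<pid>.
func vmRunning(name string) bool {
	pidBytes, err := os.ReadFile(filepath.Join("/var/lib/microvm/vms", name, "vm.pid"))
	if err != nil {
		return false // no pid file: never started, or already cleaned up
	}
	pid := strings.TrimSpace(string(pidBytes))
	_, err = os.Stat(filepath.Join("/proc", pid))
	return err == nil // a live process always has a /proc/<pid> directory
}

func main() {
	fmt.Println("vm1 running:", vmRunning("vm1"))
}
```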
This makes MNMIVM-SE closer to early private IaaS and bare-metal virtualization than modern hyperscaler platforms.
🧱 Supported Host Operating Systems
MNMIVM-SE is conservative about host support and only documents what is tested.
✅ Supported
| Host OS | Version |
|---|---|
| Debian | 12+ |
| Alpine Linux | 3.22+ |
🕒 Coming Soon
| Host OS | Notes |
|---|---|
| Ubuntu | Netplan-based host networking support planned |
Ubuntu is not currently documented due to netplan-specific bridge handling.
Support will be added, but is not a top priority.
❌ Not Supported
- Wi-Fi–only hosts
- WSL / nested hypervisors
- Desktop/laptop setups expecting NAT
🧱 Architecture Overview
- QEMU + KVM
- Linux bridge (`br0`)
- TAP devices
- Cloud-init seed ISO
- Static IP networking
- VNC console for recovery
/var/lib/microvm/
├── images/
└── vms/
└── vm1/
├── disk.qcow2
├── seed.iso
├── pubkey.pub
├── os.name
├── vm.ip
├── vm.mac
├── vnc.port
└── vm.pid
No libvirt.
No XML.
No daemon.
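For a concrete feel, here is a hedged Go sketch of the kind of QEMU invocation this architecture implies. Flag values are illustrative, not MNMIVM-SE's exact command line:

```go
// Sketch: launch one VM as a plain QEMU process with KVM, a virtio disk,
// the cloud-init seed ISO, a bridged TAP NIC with a pinned MAC, and VNC.
package main

import (
	"log"
	"os/exec"
)

func main() {
	vmDir := "/var/lib/microvm/vms/vm1"
	cmd := exec.Command("qemu-system-x86_64",
		"-enable-kvm",
		"-m", "1024",
		"-smp", "1",
		"-drive", "file="+vmDir+"/disk.qcow2,if=virtio",
		"-cdrom", vmDir+"/seed.iso",
		"-netdev", "bridge,id=net0,br=br0", // attaches via qemu-bridge-helper
		"-device", "virtio-net-pci,netdev=net0,mac=52:54:00:12:34:56",
		"-vnc", ":1",
		"-daemonize", // detach; lifecycle is now a kernel concern
		"-pidfile", vmDir+"/vm.pid", // file-backed state, as described above
	)
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("qemu failed: %v\n%s", err, out)
	}
}
```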
🌐 Host Networking Requirements (CRITICAL)
MNMIVM-SE requires a proper Linux bridge.
Example: /etc/network/interfaces (Debian)
auto lo
iface lo inet loopback
auto ens18
iface ens18 inet manual
auto br0
iface br0 inet static
address 192.168.86.10
netmask 255.255.255.0
gateway 192.168.86.1
dns-nameservers 1.1.1.1 8.8.8.8
bridge_ports ens18
bridge_stp off
bridge_fd 0
Rules that must be followed:
- The host IP must live on `br0`
- The physical NIC must have no IP
- Wi-Fi cannot be bridged
- VMs attach via TAP devices
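A quick way to sanity-check the first two rules from Go's standard library (interface names as in the example above):

```go
// Sketch: br0 should carry the host IP; the physical NIC should have none.
package main

import (
	"fmt"
	"net"
)

func main() {
	for _, name := range []string{"br0", "ens18"} {
		ifi, err := net.InterfaceByName(name)
		if err != nil {
			fmt.Println(name, "missing:", err)
			continue
		}
		addrs, _ := ifi.Addrs()
		fmt.Println(name, "addresses:", addrs) // ens18 should print []
	}
}
```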
🔥 Kernel Bridge Filtering (THIS WILL BREAK VMs)
Linux defaults can silently block bridged traffic.
This must be disabled:
cat /proc/sys/net/bridge/bridge-nf-call-iptables
# must be 0
If set to 1, VMs will:
- Boot successfully
- Have valid IPs
- Be completely unreachable
Fix (runtime)
sudo sysctl -w net.bridge.bridge-nf-call-iptables=0
sudo sysctl -w net.bridge.bridge-nf-call-ip6tables=0
sudo sysctl -w net.bridge.bridge-nf-call-arptables=0
Persistent fix
/etc/sysctl.d/99-bridge.conf:
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-arptables = 0
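A launcher can also refuse to start VMs while this footgun is armed. A hedged sketch of such a preflight check (not necessarily what MNMIVM-SE does):

```go
// Sketch: fail fast if bridge netfilter would drop bridged VM traffic.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	v, err := os.ReadFile("/proc/sys/net/bridge/bridge-nf-call-iptables")
	if err != nil {
		// Absent file usually means br_netfilter is not loaded, so bridged
		// frames never hit iptables in the first place.
		fmt.Println("bridge-nf-call-iptables not present; nothing to disable")
		return
	}
	if strings.TrimSpace(string(v)) != "0" {
		fmt.Fprintln(os.Stderr, "bridge-nf-call-iptables=1: VMs will boot but be unreachable")
		os.Exit(1)
	}
	fmt.Println("bridge netfilter disabled; bridged traffic will pass")
}
```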
🔐 QEMU Bridge Permissions
QEMU must be allowed to attach TAP devices.
/etc/qemu/bridge.conf
allow br0
Verify the helper exists; it needs elevated privileges (setuid root or a file capability like CAP_NET_ADMIN) to create TAP devices:
ls -l /usr/lib/qemu/qemu-bridge-helper
🐧 Alpine Linux Host Notes (3.22+)
Alpine does not ship a hypervisor stack by default.
Install required packages:
apk add \
qemu-system-x86_64 \
qemu-img \
qemu-hw-display-virtio-vga \
bridge-utils \
cdrkit \
go
Notes:
- `cdrkit` provides `genisoimage`
- `bridge-utils` provides `brctl`
- `qemu-hw-display-virtio-vga` is required for VNC
- No libvirt or services are used
- OpenRC is sufficient
Alpine works well as a minimal KVM host once assembled.
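For the bridge itself, Alpine reads /etc/network/interfaces as well (via ifupdown-ng on current releases). A sketch mirroring the Debian example above; the option names are an assumption, so check them against the Alpine wiki for your release:

```
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet manual

auto br0
iface br0 inet static
    bridge-ports eth0
    address 192.168.86.10/24
    gateway 192.168.86.1
```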
⚙️ Server Edition Configuration (Code-Level)
Networking and sizing are configured in code, not via runtime flags.
Edit these constants in main.go (around lines 25–30):
// Networking
const (
	bridgeName = "br0"
	lanCIDR    = "192.168.86.0/24"
	lanGW      = "192.168.86.1"
	lanDNS1    = "192.168.86.1"
	lanDNS2    = "8.8.8.8"
)

// VM sizing
const (
	baseDiskSize = "12G"
	memMB        = "1024"
	cpus         = "1"
)
This keeps runtime behavior explicit and predictable.
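One payoff of compile-time config: create-time validation against the LAN is trivial. A sketch with a hypothetical helper (not necessarily present in main.go):

```go
// Sketch: reject a requested VM IP that falls outside lanCIDR before
// any state is written to disk.
package main

import (
	"fmt"
	"net"
)

const lanCIDR = "192.168.86.0/24" // mirrors the constant above

func ipInLAN(ip string) bool {
	_, lan, err := net.ParseCIDR(lanCIDR)
	if err != nil {
		return false
	}
	return lan.Contains(net.ParseIP(ip))
}

func main() {
	fmt.Println(ipInLAN("192.168.86.53")) // true
	fmt.Println(ipInLAN("10.0.0.5"))      // false
}
```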
🧰 CLI Usage
Create a VM
sudo mnmivm-se create vm1 \
--os debian \
--pubkey-path ~/.ssh/id_ed25519.pub \
--ip 192.168.86.53
Start a VM
sudo mnmivm-se start vm1
SSH in
ssh debian@192.168.86.53
Stop a VM
sudo mnmivm-se stop vm1
Update cloud-init (SSH key / IP)
sudo mnmivm-se update-cloud vm1 \
--pubkey-path newkey.pub \
--ip 192.168.86.54
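Under the hood, an update like this amounts to rewriting the cloud-init files and regenerating the NoCloud seed ISO. A hedged Go sketch using `genisoimage` (provided by `cdrkit`, as noted in the Alpine section); the actual steps may differ:

```go
// Sketch: rebuild seed.iso after user-data / network-config changes.
// The volume label must be "cidata" for cloud-init's NoCloud datasource.
package main

import (
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("genisoimage",
		"-output", "seed.iso",
		"-volid", "cidata",
		"-joliet", "-rock",
		"user-data", "meta-data", "network-config")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("seed rebuild failed: %v\n%s", err, out)
	}
}
```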
🔑 Security Model
- SSH key–only access
- No password authentication
- No root login
- Static IPs (no DHCP ambiguity)
- MAC addresses pinned via cloud-init
- VNC console for recovery only
This follows server-grade discipline, not container ergonomics.
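For reference, MAC pinning plus a static IP in cloud-init's v2 network config looks roughly like this (illustrative values, not MNMIVM-SE's generated output):

```yaml
# Sketch: match the NIC by its pinned MAC, assign the static LAN IP.
version: 2
ethernets:
  lan0:
    match:
      macaddress: "52:54:00:12:34:56"
    addresses: ["192.168.86.53/24"]
    gateway4: 192.168.86.1
    nameservers:
      addresses: [192.168.86.1, 8.8.8.8]
```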
⚠️ What MNMIVM-SE Is Not
- ❌ A managed cloud service
- ❌ A multi-tenant platform
- ❌ A scheduler or orchestrator
- ❌ A UI-driven system
- ❌ A laptop-friendly NAT tool
If you want policy, HA, quotas, or tenants, use Proxmox or OpenStack.
If you want direct control over real infrastructure, MNMIVM-SE is the tool.
🐧 Why MNMIVM-SE Exists
Because sometimes you don’t want:
- libvirt
- XML
- dashboards
- APIs
- orchestration layers
You want:
“Put a VM on my LAN, give it an IP, and let me build infrastructure.”
MNMIVM-SE does exactly that — and nothing more.
⚠️ Final Note
If you break networking with MNMIVM-SE, it isn’t a bug.
It’s Linux doing exactly what you told it to do.
And that’s the point.
