
🚀 MNMIVM-SE (Server Edition)

A LAN-Native VM Cloud with a Minimal Control Plane

[MNMIVM hero image]

[Demo video]

MNMIVM-SE is the server-focused edition of MNMIVM (https://mentalnet.xyz/forgejo/markmental/mnmivm) — a minimal, single-binary VM launcher built on QEMU + KVM + cloud-init that turns your LAN into a local VM cloud.

Unlike traditional platforms, MNMIVM-SE exposes the raw infrastructure primitives directly: bridges, TAP devices, MAC addresses, static IPs, and Linux processes.

Your LAN is the fabric.
The kernel is the scheduler.
The CLI is the control plane.


☁️ What MNMIVM-SE Is

  • A local VM cloud built directly on your LAN
  • A process-native control plane
  • A CLI-first infrastructure tool
  • A Proxmox-style networking model without Proxmox

Each VM:

  • Has a persistent MAC address
  • Has a static IP on your LAN
  • Appears as a first-class network device
  • Can host real infrastructure services (DNS, CI, storage, routing, etc.)

Routers, firewalls, and switches see MNMIVM-SE VMs as real machines, not NAT artifacts.
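
For example, from any other machine on the LAN, a VM looks like an ordinary neighbor (a quick check, using the example IP from the CLI section below):

ping -c 1 192.168.86.53
ip neigh | grep 192.168.86.53   # shows the VM's pinned MAC via ARP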


🧠 Control Plane Model

MNMIVM-SE does have a control plane; it's just intentionally minimal, local, and explicit.

The control plane is implemented as:

  • A single CLI binary
  • A file-backed state store
  • Linux process lifecycle tracking

There is:

  • No always-on daemon
  • No API server
  • No database
  • No reconciliation loop
  • No scheduler service

Instead:

  • VM lifecycle = Linux process lifecycle
  • State = files under /var/lib/microvm
  • Configuration changes = cloud-init regeneration
  • Access = SSH + VNC

The filesystem is the state store.
/proc is the source of truth.
Each CLI command is a deliberate control action.
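
As a sketch of what that means in practice, a liveness check is just a PID file plus a /proc lookup (assuming the vm.pid layout shown under Architecture Overview):

pid=$(cat /var/lib/microvm/vms/vm1/vm.pid)
if [ -d "/proc/$pid" ]; then
    echo "vm1 running (pid $pid)"
else
    echo "vm1 stopped"
fi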

This makes MNMIVM-SE closer to early private IaaS and bare-metal virtualization than modern hyperscaler platforms.


🧱 Supported Host Operating Systems

MNMIVM-SE is conservative about host support and only documents what is tested.

Supported

Host OS         Version
Debian          12+
Alpine Linux    3.22+

🕒 Coming Soon

Host OS   Notes
Ubuntu    Netplan-based host networking support planned

Ubuntu is not currently documented due to netplan-specific bridge handling.
Support will be added, but is not a top priority.

Not Supported

  • Wi-Fi-only hosts
  • WSL / nested hypervisors
  • Desktop/laptop setups expecting NAT

🧱 Architecture Overview

  • QEMU + KVM
  • Linux bridge (br0)
  • TAP devices
  • Cloud-init seed ISO
  • Static IP networking
  • VNC console for recovery

/var/lib/microvm/
├── images/
└── vms/
    └── vm1/
        ├── disk.qcow2
        ├── seed.iso
        ├── pubkey.pub
        ├── os.name
        ├── vm.ip
        ├── vm.mac
        ├── vnc.port
        └── vm.pid

No libvirt.
No XML.
No daemon.


🌐 Host Networking Requirements (CRITICAL)

MNMIVM-SE requires a proper Linux bridge.

Example: /etc/network/interfaces (Debian)

auto lo
iface lo inet loopback

auto ens18
iface ens18 inet manual

auto br0
iface br0 inet static
    address 192.168.86.10
    netmask 255.255.255.0
    gateway 192.168.86.1
    dns-nameservers 1.1.1.1 8.8.8.8
    bridge_ports ens18
    bridge_stp off
    bridge_fd 0

Rules that must be followed:

  • The host IP must live on br0
  • The physical NIC must have no IP
  • Wi-Fi cannot be bridged
  • VMs attach via TAP devices
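
These rules can be sanity-checked with standard tools (a minimal sketch; substitute your own interface names):

ip -br addr show br0     # should carry 192.168.86.10/24
ip -br addr show ens18   # should carry no IP address
brctl show br0           # ens18 should appear as a bridge port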

🔥 Kernel Bridge Filtering (THIS WILL BREAK VMs)

Linux defaults can silently block bridged traffic.

This must be disabled:

cat /proc/sys/net/bridge/bridge-nf-call-iptables
# must be 0

If set to 1, VMs will:

  • Boot successfully
  • Have valid IPs
  • Be completely unreachable

Fix (runtime)

sudo sysctl -w net.bridge.bridge-nf-call-iptables=0
sudo sysctl -w net.bridge.bridge-nf-call-ip6tables=0
sudo sysctl -w net.bridge.bridge-nf-call-arptables=0

Persistent fix

/etc/sysctl.d/99-bridge.conf:

net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-arptables = 0
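
To apply the persistent settings without a reboot (note: these sysctls only exist while the br_netfilter module is loaded):

sudo sysctl --system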

🔐 QEMU Bridge Permissions

QEMU must be allowed to attach TAP devices.

/etc/qemu/bridge.conf

allow br0

Verify helper:

ls -l /usr/lib/qemu/qemu-bridge-helper
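
For reference, a bridged invocation that goes through the helper looks roughly like this (illustrative only, not necessarily the exact command mnmivm-se generates):

qemu-system-x86_64 \
  -enable-kvm -m 1024 -smp 1 \
  -drive file=disk.qcow2,if=virtio \
  -cdrom seed.iso \
  -netdev bridge,id=net0,br=br0 \
  -device virtio-net-pci,netdev=net0,mac=52:54:00:aa:bb:cc \
  -vnc :1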

🐧 Alpine Linux Host Notes (3.22+)

Alpine does not ship a hypervisor stack by default.

Install required packages:

apk add \
  qemu-system-x86_64 \
  qemu-img \
  qemu-hw-display-virtio-vga \
  bridge-utils \
  cdrkit \
  go

Notes:

  • cdrkit provides genisoimage
  • bridge-utils provides brctl
  • qemu-hw-display-virtio-vga is required for VNC
  • No libvirt or services are used
  • OpenRC is sufficient

Alpine works well as a minimal KVM host once assembled.
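
A quick readiness check before creating VMs (a sketch):

grep -cE 'vmx|svm' /proc/cpuinfo   # non-zero means VT-x/AMD-V is exposed
ls -l /dev/kvm                     # must exist and be accessible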


⚙️ Server Edition Configuration (Code-Level)

Networking and sizing are configured in code, not via runtime flags.

Edit these constants in main.go (around lines 25-30):

// Networking
bridgeName = "br0"
lanCIDR    = "192.168.86.0/24"
lanGW      = "192.168.86.1"
lanDNS1    = "192.168.86.1"
lanDNS2    = "8.8.8.8"

// VM sizing
baseDiskSize = "12G"
memMB        = "1024"
cpus         = "1"

This keeps runtime behavior explicit and predictable.
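
Because these values are compiled in, a rebuild is needed after editing them (assuming the repo's build.sh wraps a standard go build):

./build.sh
# or equivalently:
go build -o mnmivm-se .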


🧰 CLI Usage

Create a VM

sudo mnmivm-se create vm1 \
  --os debian \
  --pubkey-path ~/.ssh/id_ed25519.pub \
  --ip 192.168.86.53

Start a VM

sudo mnmivm-se start vm1

SSH in

ssh debian@192.168.86.53

Stop a VM

sudo mnmivm-se stop vm1

Update cloud-init (SSH key / IP)

sudo mnmivm-se update-cloud vm1 \
  --pubkey-path newkey.pub \
  --ip 192.168.86.54
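
Cloud-init changes generally take effect on the next boot against the regenerated seed, so restart the VM afterwards (standard cloud-init behavior, assumed here):

sudo mnmivm-se stop vm1
sudo mnmivm-se start vm1
ssh debian@192.168.86.54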

🔑 Security Model

  • SSH key-only access
  • No password authentication
  • No root login
  • Static IPs (no DHCP ambiguity)
  • MAC addresses pinned via cloud-init
  • VNC console for recovery only

This follows server-grade discipline, not container ergonomics.
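
The user-data enforcing this posture looks roughly like the following cloud-config (illustrative sketch; the actual seed mnmivm-se writes may differ):

#cloud-config
users:
  - name: debian
    ssh_authorized_keys:
      - ssh-ed25519 AAAA... user@host   # your pubkey, injected at create time
    lock_passwd: true
ssh_pwauth: false
disable_root: true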


⚠️ What MNMIVM-SE Is Not

  • A managed cloud service
  • A multi-tenant platform
  • A scheduler or orchestrator
  • A UI-driven system
  • A laptop-friendly NAT tool

If you want policy, HA, quotas, or tenants, use Proxmox or OpenStack.

If you want direct control over real infrastructure, MNMIVM-SE is the tool.


🐧 Why MNMIVM-SE Exists

Because sometimes you don't want:

  • libvirt
  • XML
  • dashboards
  • APIs
  • orchestration layers

You want:

“Put a VM on my LAN, give it an IP, and let me build infrastructure.”

MNMIVM-SE does exactly that — and nothing more.


⚠️ Final Note

If you break networking with MNMIVM-SE, it isn't a bug.

It's Linux doing exactly what you told it to do.

And thats the point.