
Overview

Cells for NetBSD is an early-stage but steadily maturing system for lightweight, kernel-enforced isolation on NetBSD.

It closes the operational gap between simple chroot environments and full virtualization platforms such as Xen.

The project runs multiple workloads on a single host with:

  • Strong process isolation
  • System hardening profiles
  • Supervised service execution
  • Unified lifecycle management
  • Centralized logging
  • Snapshot-based metrics export

The system stays fully NetBSD-native: isolation and policy enforcement are built into the kernel security framework, not delegated to a separate runtime layer.

The goal is not to replicate Linux-style container ecosystems, but to provide a focused operating model with minimal dependencies, no external control services, and explicit operational boundaries.

As with any kernel-based isolation, security depends on kernel correctness; stronger trust separation may still require virtualization such as Xen.

Overall, the project is evolving into a practical, end-to-end isolation stack that fits naturally into existing NetBSD administration workflows.

Architecture

The implementation is built around the following components:

Layer Model

  • secmodel_cell: Kernel security model responsible for cell identity, policy enforcement, and snapshot telemetry.

  • cellctl: Low-level runtime adapter for create/destroy/exec operations and kernel-facing snapshots.

  • cellmgr: Host-side control plane for desired manifests, runtime reconciliation, apply plans, and backup workflows.

  • cellui: Optional interactive TUI that consumes cellmgr data and actions via a persistent IPC bridge.
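
In simplified form, the layering runs from operator-facing tooling down to kernel enforcement: cellui drives cellmgr over the IPC bridge noted above, and cellmgr in turn relies on the low-level cellctl adapter to reach the kernel.

  cellui          optional TUI (persistent IPC bridge to cellmgr)
  cellmgr         host-side control plane (manifests, reconciliation, backups)
  cellctl         low-level runtime adapter (create/destroy/exec, snapshots)
  secmodel_cell   kernel security model (identity, policy, telemetry)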

Features

Cells for NetBSD focuses on practical, operator-friendly isolation with a balanced emphasis on policy, observability, and daily operations.


Kernel Isolation Foundation (secmodel_cell)

secmodel_cell is the enforcement core of the system.

  • Cell identity and process boundaries are enforced in the kernel
  • Cross-cell process inspection and signaling are blocked
  • Snapshot telemetry is produced at the same enforcement layer

Hardened Access Profiles

Security profiles (low, medium, high) constrain host-impacting operations per cell.

  • Restricts privileged actions such as mount and host-state manipulation
  • Keeps hardening explicit and auditable in per-cell configuration
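
Profiles are selected per cell, for example via --profile medium at creation time (see Get Started below). Assuming the documented set verb accepts the same flag, tightening an existing cell might look like:

vhost# cellmgr cell set mysite-edge-httpd --profile high
vhost# cellmgr apply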

Host-Centric Networking and Port Ownership

Cells share the host network stack by design.

  • Host routing, firewall, and interface workflows stay simple
  • Reserved ports are assigned per cell to prevent accidental conflicts
  • Kernel-level ownership checks protect service boundaries
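
Because cells share the host stack, standard host tooling applies unchanged; for example, a cell's reserved listener (port 8080 in the Get Started example below) can be checked with sockstat(1):

vhost# sockstat -l | grep 8080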

Built-in Supervisor and Logging

Foreground workloads run under host-visible supervision.

  • Automatic restart with backoff and deterministic lifecycle control
  • No hidden runtime daemons between operator and process tree
  • stdout/stderr forwarding into syslog for centralized host-level logs
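
The Get Started example below routes a cell's output to the local1 facility with a per-cell log tag. A minimal syslog.conf(5) fragment on the host can then collect those messages in a dedicated file (the path is illustrative):

# /etc/syslog.conf -- collect cell service output sent to local1
local1.*                                /var/log/cells.log

After creating the target file and restarting syslogd(8), supervised stdout/stderr lines appear there under the configured tag.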

Runtime Telemetry

Cells expose lightweight runtime telemetry for monitoring and diagnostics.

  • CPU ticks (1s delta and rolling 10s average)
  • Process count and reference count
  • Sampled virtual-memory size and age
  • Available via cellctl list and cellctl stats, including Prometheus-friendly output
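
Both commands appear in context in the Get Started walkthrough; in short:

vhost# cellctl list
vhost# cellctl stats -P -h

The first gives a one-shot per-cell snapshot, the second emits Prometheus text format with a minimal HTTP header.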

Integrated Volume Management

Persistent data is managed as first-class volumes, separate from ephemeral runtime overlay state.

  • Clean separation between service runtime and durable data
  • Explicit per-cell volume mounts with predictable targets and mode control
  • Safer lifecycle operations when replacing, rebuilding, or reconciling cells
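
The exact volume command surface is not shown on this page, so the following is only a hypothetical sketch of how a volume might be declared and attached under the manifest-driven model (both the volume verb and the --volume flag are assumptions):

vhost# cellmgr volume create mysite-data
vhost# cellmgr cell set mysite-edge-httpd --volume mysite-data:/var/www/mysite-edge-httpd:rw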

Built-in Backup and Restore

Backup workflows are part of the core toolchain, not an afterthought.

  • Volume backup/restore for persistent data
  • Cell overlay backup/restore for runtime filesystem state
  • Safety checks and confirmation gates on destructive restore and delete paths
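
As with volumes, the commands below are hypothetical placeholders rather than documented syntax; they only illustrate the intended shape of the workflow:

vhost# cellmgr backup create mysite-edge-httpd
vhost# cellmgr backup restore mysite-edge-httpd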

Operate Fleets with cellmgr

cellmgr is the unified host-side control plane across desired and runtime state.

  • Bootstrap workflow for module/config/base layout preparation
  • Persistent manifest-driven configuration for cells and volumes
  • Consistent read and lifecycle workflows (list|show|fields, create|set|start|stop|restart)
  • Declarative converge flow via apply, including healthchecks and drift handling
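
Before converging a fleet it is useful to preview planned actions; the flag below is an assumption inferred from the dry-run=NO field in the apply output shown in Get Started:

vhost# cellmgr cell list
vhost# cellmgr apply --dry-run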

Manage Cells with Style: cellui

cellui adds a fast interactive TUI layer on top of cellmgr operations.

  • Runs on wscons just as well as in xterm
  • Gives an immediate operational overview of cells, storage, health, and backups
  • Reduces routine typing while keeping control explicit and transparent
  • Includes a hand-picked theme collection that can teleport you straight into the 80s ;-)
(Screenshots: cellui overview, storage, backup, and full-width views.)

Taken together, this yields a coherent stack from kernel enforcement up to daily operations UX: one model, one toolchain, and clear boundaries end to end.

Get Started

This section demonstrates a minimal, reproducible workflow with the current cellmgr command surface.

The example bootstraps the host, creates a desired cell manifest for a simple HTTP service, adds a declarative apply plan, converges runtime state, and checks that the cell is running.


1. Bootstrap Host Integration

Initialize host integration, prepare base layers, and verify that required kernel/runtime prerequisites are present.

vhost# cellmgr system bootstrap 

2. Create Desired Cell Manifest

Create the desired-state manifest for one HTTP workload. This writes configuration into /etc/cellmgr only (--scope desired) and does not yet start the service.

vhost# cellmgr cell create mysite-edge-httpd \
  --autostart YES \
  --profile medium \
  --reserved-ports 8080 \
  --log-facility local1 \
  --stdout-level info \
  --stderr-level err \
  --log-tag cell-mysite-edge-httpd \
  --cmd '/usr/libexec/httpd -I 8080 -X -f -s /var/www/mysite-edge-httpd' \
  --healthcheck 'test -f /var/www/mysite-edge-httpd/index.html' \
  --scope desired
Created manifest /etc/cellmgr/mysite-edge-httpd.cell

3. Add Declarative Apply Plan

Define a small apply plan that creates the initial web content inside the cell. Plans are declarative, versionable, and executed by cellmgr apply during reconciliation.

vhost# vi /etc/cellmgr/mysite-edge-httpd.apply

Plan content:

FILE_BEGIN /var/www/mysite-edge-httpd/index.html
<html>
        Hello NetBSD
</html>
FILE_END

4. Converge Desired to Runtime

Run reconciliation to render runtime state from manifests, execute the apply plan, start supervised service processes, and run the configured healthcheck.

vhost# cellmgr apply
apply: dry-run=NO reapply=NO restart-changed=NO verbose=NO
cell mysite-edge-httpd
  CREATE       render runtime cell state
  APPLY        run /etc/cellmgr/mysite-edge-httpd.apply
  START        supervised service after apply
  HEALTHCHECK  test -f /var/www/mysite-edge-httpd/index.html
  RESULT       changed

summary: cells=1 changed=1 failed=0 dry-run=NO

5. Verify Runtime State

Inspect the live cell view and confirm that the instance is running with an assigned CID and increasing age.

vhost# cellmgr cell list -o name,running,cid,age
NAME               RUNNING  CID  AGE
mysite-edge-httpd  YES      1    31s

6. Open the Service

Confirm the HTTP endpoint from your client or browser:

http://vhost.local:8080/
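
From any client, curl(1) works; on a stock NetBSD system, the base ftp(1) can fetch HTTP URLs as well:

$ ftp -o - http://vhost.local:8080/
<html>
        Hello NetBSD
</html>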


7. Export Prometheus-Compatible Metrics

cellctl stats -P -h emits Prometheus text format with a minimal HTTP header. This can be wired into inetd for a very lightweight metrics endpoint without additional exporter software.

vhost# cellctl stats -P -h
HTTP/1.1 200 OK
Content-Type: text/plain

# TYPE cell_cpu_ticks_1s gauge
# TYPE cell_cpu_ticks_10s_avg gauge
# TYPE cell_processes_current gauge
# TYPE cell_references_current gauge
# TYPE cell_memory_vmsize_bytes gauge
# TYPE cell_age_seconds gauge
cell_cpu_ticks_1s{cid="2",name="mysite-edge-httpd",root="/var/cellmgr/cells/mysite-edge-httpd/root"} 0
cell_cpu_ticks_10s_avg{cid="2",name="mysite-edge-httpd",root="/var/cellmgr/cells/mysite-edge-httpd/root"} 0
cell_processes_current{cid="2",name="mysite-edge-httpd",root="/var/cellmgr/cells/mysite-edge-httpd/root"} 1
cell_references_current{cid="2",name="mysite-edge-httpd",root="/var/cellmgr/cells/mysite-edge-httpd/root"} 1
cell_memory_vmsize_bytes{cid="2",name="mysite-edge-httpd",root="/var/cellmgr/cells/mysite-edge-httpd/root"} 137601024
cell_age_seconds{cid="2",name="mysite-edge-httpd",root="/var/cellmgr/cells/mysite-edge-httpd/root"} 596
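
A minimal inetd(8) wiring could look like the following sketch; the service name, port number, and cellctl path are assumptions, and the port must first be registered in /etc/services(5):

# /etc/services -- register an illustrative metrics port
cellmetrics     9273/tcp

# /etc/inetd.conf -- serve one metrics snapshot per connection
cellmetrics stream tcp nowait nobody /usr/sbin/cellctl cellctl stats -P -h

Reload inetd with /etc/rc.d/inetd reload and point the Prometheus scrape target at the chosen port.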

Next Steps

For deeper operational guides and reference material, continue in the documentation.

The docs are still being built out, but they already include polished end-to-end recipes, including a MantisBT 3-tier setup (three cells, multiple volumes) and a Luanti gameserver example.

Availability

Cells for NetBSD is under active development and entering a more stable pre-release phase.

The project provides source access and a pre-release ISO build for hands-on testing.


Source Code

The project is maintained in a dedicated source tree and now primarily developed on a branch based on netbsd-11.

Repository (NetBSD 11, active):
https://github.com/MatthiasPetermann/netbsd-src/tree/netbsd-11-cells-dev

Bug Reports (GitHub Issues): https://github.com/MatthiasPetermann/netbsd-src/issues


Evaluation Images

Pre-release installation images based on NetBSD 11.0 RC3 (amd64) with integrated Cells support are available:

Download:

  • NetBSD-11.0_RC3-Cells_ALPHA8-amd64-dvd.iso
  • NetBSD-11.0_RC3-Cells_ALPHA8-amd64-install.img.gz

Verify Checksums:

$ sha256sum NetBSD-11.0_RC3-Cells_ALPHA8-amd64-dvd.iso
d3ae807094e5aa986d5360ad435430bc6b671a89bae85c3f8620d06eaab4ea38  NetBSD-11.0_RC3-Cells_ALPHA8-amd64-dvd.iso
$ sha256sum NetBSD-11.0_RC3-Cells_ALPHA8-amd64-install.img.gz
42a56a3ef1c12b9f1da23796331eb367bb031e73b9aea25c8cdd11c41da49cc4  NetBSD-11.0_RC3-Cells_ALPHA8-amd64-install.img.gz
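
sha256sum comes from GNU coreutils (e.g. via pkgsrc); on a stock NetBSD system, the base cksum(1) produces the same digest in BSD-style output:

$ cksum -a SHA256 NetBSD-11.0_RC3-Cells_ALPHA8-amd64-dvd.iso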

Unlike earlier images, these include the X11 sets.


Important Notice

The provided images are not official NetBSD releases.

They are pre-release builds derived from a NetBSD source tree with additional modifications for Cells support.

They are intended for practical validation of workflows and behavior in non-production environments.

Cells for NetBSD is an independent effort and is not affiliated with or endorsed by the NetBSD core team.

For official NetBSD releases, please refer to:
https://www.netbsd.org/


Status

This project is currently at an early-access level of maturity.

It is suitable for development, evaluation, and controlled pilot-style environments. Interfaces, behavior, and internal structures may still change as the design is hardened.

The goal is to deliver a reliable NetBSD-native isolation stack with clear operational contracts. The project should be seen as a focused implementation path that is actively being stabilized.

Lineage

Cells for NetBSD relates to earlier research and isolation efforts in the NetBSD ecosystem.

Project                           Focus                                                          Status
GAOLS (P3A, 2008)                 Jail-like process isolation via kauth(9) hooks                 Research prototype; never integrated
MULT (P5A, 2008)                  Resource isolation by instantiating full kernel subsystems     Highly invasive research prototype
netbsd-sandbox                    Userland sandboxing via chroot, secmodel_sandbox, rlimits,     Hardening tool; not a jail model
                                  capabilities, Lua policies
Systrace (NetBSD 2.0) / sysjail   Syscall interposition-based process isolation                  Deprecated / removed

Cells for NetBSD focuses on cell-scoped enforcement integrated into the existing kauth security framework, without syscall interposition or full subsystem replication.

FAQ

This FAQ addresses common questions and critical points about scope, security model, and project direction.


“Does ‘not a container ecosystem’ mean it is not for security?”

Not at all. Security is a core goal.

The statement means this project is not a general-purpose container ecosystem and not full virtualization. It focuses on one clear scope: kernel-enforced process isolation with explicit operational boundaries.

As with any kernel isolation model, high-risk trust separation can still require virtualization (for example Xen), and we state that openly.


“Cells are containers, right? Why say it is not a container platform?”

The term “container” is overloaded.

Here, “not a container platform” means: no OCI runtime stack, no image distribution workflow, and no orchestration control plane. Cells for NetBSD is a NetBSD-native isolation model with a smaller, predictable toolchain.


“Why the name ‘cells’ instead of ‘jails’?”

Earlier iterations of the project used the term “jails”. During community discussion it became clear that the name strongly suggests full-stack FreeBSD jail compatibility, which is not the goal of this work.

Following feedback from the BSD community and a public naming discussion, the term “cells” was adopted instead. The name reflects the idea of small, isolated execution domains while avoiding confusion with existing FreeBSD jail semantics.


“How is this different from FreeBSD jails?”

FreeBSD jails are established and feature-rich, including resource limits and multiple forms of virtual networking.

Cells for NetBSD is intentionally narrower in scope: it targets NetBSD specifically, with a different API and a strong emphasis on operational simplicity, explicit boundaries, and host-visible supervision. Advanced resource limiting and alternative virtual networking models are out of scope.

The primary goal is a simple operational layer that improves security and delivers a strong out-of-the-box experience without requiring additional tools.


“Linux container internals can feel like a patchwork”

Many operators share that experience.

This project intentionally keeps the model compact: one host-centric network model, kernel-backed policy decisions, and fewer moving parts in day-to-day operation.


“Looks easy to use”

Thank you. Ease of use is intentional.

The goal is to keep advanced isolation approachable for regular NetBSD administration workflows.

Notes

This website is operated in accordance with applicable German law.

Content is provided without warranty as to accuracy, completeness, or timeliness. External links are the responsibility of their respective operators. All content is subject to German copyright law.

NetBSD® is a registered trademark of The NetBSD Foundation, Inc.
All other product and service names are the trademarks of their respective owners and are used for identification purposes only.

For full provider information and detailed legal disclosures,
please refer to the complete Legal Notice.


AI notice: AI tools were used for code analysis and prototyping.