I tend to use them for a bit, and then destroy/rebuild. Think days, not weeks.
Code + issues are active under https://github.com/TritonDataCenter (smartos-live, illumos-joyent, triton, etc.), and docs are at https://docs.smartos.org/.
SmartOS is released every two weeks, and Triton is released every 8 weeks -- see https://www.tritondatacenter.com/downloads
And Triton object storage will have S3 support in the next release!
[edit: removed semicolon from link!]
Does anyone know if something like this is possible with Proxmox? I've got three servers I'm thinking of setting up as a small cluster and would like to boot them from a single image instead of manually setting up PVE on each. Ansible or Salt is an option, but that tends to degrade over time.
but there's also VM config info under `/etc/pve` or something similar. I'm pretty sure that's some kind of FUSE filesystem, and it's supposed to be synchronized between cluster members... you might be able to host that externally somehow, but that'll probably take some effort.
You'll also need to figure out how to configure `/etc/network/interfaces` on boot for your network config. But that's doable.
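For illustration, a minimal sketch of what a generated config might look like -- `vmbr0` is the usual Proxmox bridge name, but the addresses and NIC name here are made-up placeholders:

```
# /etc/network/interfaces -- rendered at boot, per node (values are placeholders)
auto lo
iface lo inet loopback

auto vmbr0
iface vmbr0 inet static
    address 192.0.2.10/24
    gateway 192.0.2.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
```

You'd presumably template the address per node, from DHCP or a kernel command-line parameter.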
Would be pretty neat.
Personally, I feel that "SmartOS does not support booting from a local block device like a normal, sane operating system" might be a drawback, and it's a peculiar thing to brag about.
In the case of SmartOS (which I've never used), it would seem that this is achieved by design because the USB image isn't changing. Reboot and you are back to a clean slate.
Isn't this how game arcades boot machines? They all netboot from a single image for the game you have selected? That is what it seems smartOS is doing but maybe I'm missing the point.
I think if you really, really want declarative host machines, you'd need to ditch Proxmox in favor of Incus on top of NixOS.
There is also https://github.com/SaumonNet/proxmox-nixos, but it's pretty new and therefore full of rough edges.
lots of other stuff will do the "boot from single image" part... say, https://fogproject.org/
https://blog.kail.io/pxe-booting-on-proxmox.html
But why bother? A read-only disk image would be simpler.
Joyent, the company behind SmartOS, has since been acquired, and I don’t usually see anyone talking about SmartOS nowadays.
Is anyone on HN using SmartOS these days?
The global zone works great as a hypervisor if you prefer working over SSH in a real shell, and being able to run a lot of services natively makes things like allocating memory to VMs and keeping a bird's-eye view of performance easier. Being able to CoW cp/mv files between zones, because it's actually the same filesystem, makes certain operations much easier than with actual VMs. Bhyve works well for the things that need an actual Linux kernel or another OS, at the cost of losing some of the zone benefits mentioned earlier.
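For anyone who hasn't seen it, a rough sketch of what that looks like from the global zone -- `imgadm` and `vmadm` are the real SmartOS tools, but the alias, sizes, and UUID placeholder below are just assumptions:

```
imgadm avail                        # browse images, then: imgadm import <uuid>
cat > web.json <<'EOF'
{
  "brand": "joyent",
  "image_uuid": "<uuid from imgadm>",
  "alias": "web01",
  "max_physical_memory": 512,
  "nics": [{"nic_tag": "admin", "ip": "dhcp"}]
}
EOF
vmadm create < web.json             # prints the new zone's UUID
zlogin <uuid>                       # drop straight into a root shell, no SSH hop
```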
Highlighting a few things we run on SmartOS today, grouped by technology stack: C (haproxy, nginx, PostgreSQL, MariaDB), PHP (various web apps), Java (Keycloak), Elixir/Phoenix (Plausible, a fork of Firezone), Rust (rathole, some internal glue services), Go (Grafana, Consul, Prometheus). Most of those are readily available in the package manager, and a few offer native Solaris binaries which run fine on illumos. For the others, we do local builds in a utility zone before copying the binary package to the zone where it actually runs.
In LX zones we also run a number of services without problems, usually because they have Debian packaging available but are not in pkgsrc (for example Consul/Nomad, Fabio, and some internal things that were already Linux-specific and we haven't bothered to port yet).
And at home an LX zone also runs Jellyfin just fine. (:
Yes, Ansible exists, but it's actually quite hard to run Ansible on a few hundred machines -- you need lots of RAM just to run the playbook, and after your first hundred or so separate deployments you do need to reach for something like Kubernetes.
As for LX, why emulate Linux when it's... right there? The Linux kernel is not a lot of overhead compared to having to justify emulating the Linux ABI on an OS the industry has largely abandoned.
I’ve been able to do almost everything in native zones. I had a bhyve zone set up to run a photo-related GitHub codebase that really needed Linux.
SMF is a joy to use for services and package management with pkgsrc is great. The whole thing just feels very thoughtfully put together.
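The day-to-day loop is tiny; a sketch (the manifest path is just a convention -- `/opt/custom/smf` is the SmartOS location for manifests that survive reboots):

```
svccfg import /opt/custom/smf/myapp.xml   # register the service
svcadm enable -s myapp                    # start now and on every boot
svcs -xv myapp                            # diagnose: why is it offline?
tail "$(svcs -L myapp)"                   # every service gets its own log file
```

And svc.startd restarts contract services if they die -- no extra supervisor scripts needed.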
You can probably achieve all this on Linux with docker and the right iptables (or whatever succeeded it) config I imagine? But on smartos I am using facilities that are integrated deeply into the os going back like 20 years now. I also just prefer the old sun stuff.
I couldn't point to any one single major reason that prompted the switch - just lots of small annoyances stemming from the world expecting you to be running Linux instead of Solaris. And once you move away from zones, you lose one of the most compelling reasons for being on SmartOS.
https://rfd.shared.oxide.computer/rfd/0026
https://github.com/oxidecomputer/helios

Are there any workloads (other than as a VM host) that run on SunOS-derived OSes?
DTrace, Zones, and an "untainted branch" of ZFS are the main reasons given when I asked why illumos and not Linux. I did later see the light (heh) on the DTrace part, for sure.
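If you've never tried it, the classic first one-liner gives the flavor -- it aggregates in-kernel and prints on Ctrl-C, and is generally safe to run on a production box:

```
# count system calls by process name, system-wide
dtrace -n 'syscall:::entry { @[execname] = count(); }'
```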
> Are there any workloads (other than as a VM host) that run on SunOS-derived OSes?
Pretty much any workload that runs on Linux or BSD. The notable exceptions are Ceph and "big network" applications -- XDP/VPP/DPDK-centric stuff like edge routers or DDoS protection.
Zones provide full security isolation. A downstream user can have root in an illumos zone and there isn't anything to worry about other than CPU side-channel flaws (which may or may not be a problem depending on the use case). As a 39C3 talk given this winter showed, the FreeBSD kernel is highly vulnerable to processes running as root within a Jail. Security isolation that can be relied on for untrusted workloads on Linux, in the form of containers at least, never really materialized.
But that is the same for most server images nowadays.
What's important is that Oxide upstreams all their work, so 'traditional' users should get the benefit of it too.
I never used Solaris myself, but I can understand the appeal for people who did.
[1] https://www.tritondatacenter.com/blog/a-new-chapter-begins-f...
https://www.catb.org/esr/faqs/smart-questions.html

Doesn't Linux have that as well? https://www.kernel.org/doc/html/next/filesystems/smb/ksmbd.h...
They’ve written up their reasoning in this RFD: https://rfd.shared.oxide.computer/rfd/0026#_comparison_illum...
Because Linux is just a kernel, and users have to provide all of their own user space and system services, there is a lot of opportunity for churn. Illumos is a traditional operating system that spans everything from the kernel up through the equivalent of the systemd layer. Illumos is also very stable at this point, so most of the churn is managed up front.
The choice was between porting a handful of apps to illumos, or jumping onto the Debian treadmill while pioneering a hypervisor that is new to Linux. Would Linux have enabled a faster development cycle, or just an easier MVP?
The justifications for bhyve over KVM are similarly inscrutable; you can simply not build the code you don't want. Nobody's forcing you to use shadow paging. Comments like "reportedly iffy on AMD" are bizarre. What does "iffy" mean? This wasn't worth testing? Why should I, a potential customer, believe that these people are better at this than the established companies who have been producing nearly identical products for twenty years? At the level of development they're discussing, why bother using an x86_64 processor from a manufacturer who doesn't bother to push code into the kernel you've chosen?
Again, it's their company, and if they (as I suspect) chose these tools because they're familiar, that's a totally supportable position. I just can't understand why we get handwaving and assurances instead of any meat.
Now, in your defense, an update to RFD 26 is likely merited: the document itself is five years old, and in the interim we built the whole thing, shipped it to customers, are supporting it, etc. In short, we have learned a lot, and it merits elucidation. Of course, given the non-attention you gave to the document, it's unlikely you would read any update either, so let me give you the tl;dr: in addition to the motivation outlined in RFD 26, there are quite a few reasons -- meaty ones! -- that we didn't anticipate that give us even greater resolve in the decision that we made.
It is simultaneously an assertion of the culturally determined preferences of a group of people steeped in Sun Microsystems engineering culture (and Joyent trauma?), and a clinical assessment of the technology. The key is that technology options are evaluated against values of that culture (hence the outcome seems predictable).
For example, if you value safety over performance, you'll prioritise the safety of the DTrace interpreter over "performance at all costs" JIT of eBPF. This and many other value judgements form the "meat" of the document.
The ultimate judge is the market. Does open firmware written in Rust result in higher CSAT? This is one of the many bets Oxide is making.
Frankly, I don't think Oxide would capture so much interest among technical folks if it was just the combination of bcantrill fandom + radically open engineering. The constant stream of non-conformist/NIH technology bets is why everyone is gripping their popcorn. I get to shout "Ooooooh, nooo! Tofino is a mistake!" into my podcast app, while I'm feeding the dog, and that makes my life just a little bit richer.
It was several assertions, plus your admission of confusion. I mean, there are no stupid questions, but there wasn't even a question there, so I don't blame anyone for thinking you're communicating poorly.
Furthermore, advanced readers are generally able to infer from "I am not sure why x" that much the same flow of discussion is feasible as if it had been phrased "why x?".
In any case, in the six years on SmartOS we never had any data loss from failed disks. Sure, FiFo and SmartOS had their warts, but LX zones worked amazingly well, and I think we got Garrett D'Amore to go back to BSD land for some time. In the end we had to jump to VMware when Heinz gave up on FiFo.
snarl/howl/chunter
https://project-fifo.net/

https://www.arista.com/en/support/pluribus-resources
Who exactly has an environment where you can add, let alone promptly repair or replace, USB sticks on your servers? Or run PXE when you have just a single server? How exactly do you do that at Hetzner or OVH, let alone any other service where you get just a single dedicated server or two?
So, say we're big enough to have our own quarter-rack in a colocation facility; let's do PXE. Now you have to run a whole separate infrastructure server, just for your other servers to be able to boot properly? (And how exactly does that server itself boot?) Plus keep an extra infra server for redundancy?
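For concreteness, the boot-infra piece people mean is roughly this much dnsmasq config, running in proxyDHCP mode next to an existing DHCP server (interface name, subnet, and paths here are assumptions):

```
# /etc/dnsmasq.conf -- PXE helper alongside the existing DHCP server
interface=eth0
dhcp-range=192.0.2.0,proxy     # answer PXE clients only; don't hand out leases
dhcp-boot=undionly.kpxe
enable-tftp
tftp-root=/srv/tftp            # holds the boot loader and the OS image
```

Small, yes -- but it still has to live somewhere and boot first, which is exactly the complaint.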
Sorry, but this is the reason no one would use SmartOS. You can't build a fortress on such a shaky foundation.
It's simply out of touch with the target market. At least with FreeBSD or OpenBSD, you know it'll just work™ on any single server, as long as serial console access is available, which is standard enough. Going against the Linux mainstream is already hard enough; there's no reason to make it any harder.
SmartOS sounds like a lot of work, for negligible or even negative benefit.
There are zero good reasons why any machine with 450GB+ of ZFS-backed redundant storage needs to rely on USB keys or networking in order to function properly. There's a reason Samsung-owned Joyent entirely abandoned and divested SmartOS: this sort of over-engineered mentality simply doesn't compute. It prevents all sorts of use cases, and even with a growth mindset, it still prevents the organic growth from a couple of servers to a rack and more.
It's too bad, too. The concepts behind Manta were such a great idea. I still want tools that combine traditional Unix pipes with services that can map-reduce over a big farm of hyperconverged compute/storage. I'm somewhat surprised that the kubernetes/CNCF-adjacent world didn't reinvent it.
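From memory, a Manta job looked roughly like this (node-manta CLI; exact flags from recollection, so treat it as a sketch) -- count ERROR lines across every log object, mapping in parallel next to the data and reducing once:

```
mfind -t o ~~/stor/logs |
  mjob create -o \
    -m 'grep -c ERROR' \
    -r "awk '{s += \$1} END {print s}'"
```

The map phase ran on the storage nodes actually holding each object -- that's the part nothing in CNCF-land quite replicates.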
I believe it was removed shortly after I left the project.
SmartOS was developed by Joyent for their cloud computing product; its primary use case isn't desktop computing. I think the advantages mentioned above were probably a bigger factor than the disk space. I would also guess that PXE, not USB, would be the standard way to boot in a datacenter.
How exactly does it make any sense to use ECC memory and ZFS RAID for error correction and redundancy, but then rely on the modern floppy disk for the OS itself?
Illumos started as "remove all closed-source bits and replace them with OSS"; after Oracle shut down OpenSolaris, Illumos became a full-on fork -- Solaris-like rather than another version of Solaris.
From there, multiple distros were born (because Illumos didn't want to be a distro), notably OpenIndiana and SmartOS. OpenIndiana is a general-purpose distro of Illumos, while SmartOS went for something like "an OS for HCI datacenters".
So it's Solaris > OpenSolaris > Illumos.
I'll have to give it a spin.
Their website is indeed out of date. Reminds me of Haxe in that respect: the language itself is receiving significant development, but the website looks abandoned, and no new blog posts have been posted in a while.
judging by https://doc.qubes-os.org/en/latest/_images/qubes-trust-level... it looks very Linux-centered.
It's just a usage detail that Qubes may have a slightly higher percentage of Linux containers vs SmartOS -- at this point both OSes are probably running mostly Linux guests in terms of usage. (Qubes can also do Windows VMs, and they amped up support for this in the latest release, while SmartOS has native zones, and I believe you can do FreeBSD and maybe others on bhyve.)
Differences are many, including that Qubes has no concept of a "native" VM (dom0 is just a thin Fedora wrapper around Xen) and that the global zone in SmartOS is significantly beefier than dom0 in Qubes, since Qubes offloads networking, USB I/O, Bluetooth, and sound to independent service qubes (VMs). And their development has been entirely separate. But they are spiritual siblings; I think it's an inspired comparison.
In reality, I ended up running almost everything in VMs. The only thing that worked well natively was nginx. MongoDB, MySQL, and even our PHP backend (some libraries) had issues, unfortunately.
A year ago, I considered SmartOS again as a home-lab driver, and again no success; Linux just has better support: drivers, PCI passthrough, etc., and now containers+VMs through Proxmox or anything else. You can even run k8s+KubeVirt with ZFS practically out of the box -- complete overkill, though.