
Will the Supermachine squash the Virtual Machine?

We’re in training this week for a system provisioning tool called rPath, which is really cool stuff.


It’s similar to a lot of configuration management tools available today, except it works on entire systems. There’s a GUI, and a command-line tool, that lets you compose a system from all the pieces you need — and just the pieces you need — and a ready-to-run image of the system pops out the other end. It can produce VM images, tarballs, or even ISOs (I believe). Cool stuff.


As slick as that is, as we continue down this path of virtualization, every time I log onto a virtual box I’ve just created, I really have to wonder why I’m doing it. In a world where applications are deployed as virtual appliances, I’ve been getting the feeling more and more that setting up a full system infrastructure to run a single servlet is backwards and heavy-handed.

For example, these appliances tend to be isolated from broader networks by private IP addresses, and allow admin interaction only through ssh. User interaction happens only through standard, narrowly-confined ports. So why do we need a password file or user permissions? For an appliance with dedicated functionality baked into it, there’s little reason to log in to the thing at all… why do I need all the complexity of a multiuser system?

I do a “ps -eaf” on one of these boxes and see all sorts of processes running. Why? What are they doing that needs to be rolled up into a system isolated from all the others?

It seems like the model the technical world is moving to is that the *real* operating system is the virtual hosting environment. It’s the technology that reels in all the physical hosts and hands out CPU, disk, and network for services to run with. Why am I deploying to a virtual ’nix or ’dows box at all?

I have the feeling that there is this awesome technology here that could tear down a lot of the constructs that, granted, were critical to getting us here in the first place. These virtual, dedicated appliances only do one thing. There aren’t 16 or 32 users logged in at a time. There aren’t multiple users’ files scattered around a file system, requiring security and permissions.

Seems like the problem is we have a collection of resources: disk space, CPU time, network bandwidth. And we have a number of tasks to complete. Virtual machines marry those two in creative ways, but wouldn’t we be well served to take a step back and look for a more direct way to bundle all that together?

I guess an analogy would be this: it seems like we’re die-casting instances of Stone Axes in solid titanium, because Stone Axes are well-understood, comfortable, and familiar.

I don’t think a lot of the virtual or “cloud” providers will have any trouble if the traditional computer system just kind of evaporates into the cloud. They are very flexible about the kind of appliances they can bust out.

But I’m curious to see how that will unfold.

EDIT: lol should have read about Google App Engine before I wrote this 😉



Filed under cloud computing, computer architecture

Will REST give us an Internet OS?

We were in a week of training, and it was pretty exhausting. The last day was the most interesting, because we got into the “advanced” stuff. Our trainer was a really smart guy with some good ideas. At one point he offered his vision of a sort of file system distributed across the web, where he could have, say, pictures scattered all over the place and just pull them in.

I perked up at that and observed that this is more or less the vision of REST… that by making access to resources uniform, you could just go out and grab them from whatever service was holding them at the time.

I didn’t mention that I hold a patent — for what that’s worth hehe — or at least my employer does 😉 — on a system for managing arbitrary resources by relating URIs to each other, with a state machine for managing the lifecycles of those relationships. Which is kind of part of what he was talking about.
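To make that idea concrete — this is a toy illustration of the general concept, not the patented design — here’s a sketch of relating two URIs and driving that relationship through a small lifecycle state machine (the state and event names are made up):

```python
# Toy sketch: a relationship between two URIs, with a state machine
# governing the lifecycle of that relationship.

# Allowed transitions: current state -> {event -> next state}.
TRANSITIONS = {
    "proposed": {"activate": "active", "reject": "retired"},
    "active":   {"retire": "retired"},
    "retired":  {},
}

class Relationship:
    def __init__(self, source_uri, target_uri):
        self.source_uri = source_uri
        self.target_uri = target_uri
        self.state = "proposed"

    def fire(self, event):
        """Apply an event; raise if it isn't legal in the current state."""
        try:
            self.state = TRANSITIONS[self.state][event]
        except KeyError:
            raise ValueError(f"cannot '{event}' while '{self.state}'")
        return self.state

rel = Relationship("http://example.com/photos/42", "http://example.com/albums/7")
rel.fire("activate")   # proposed -> active
rel.fire("retire")     # active -> retired
```

The point is just that the resources themselves stay opaque; all the management happens on the *relationship* between their URIs.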

Then, in another conversation, one of the guys on my team — a really, really sharp guy — was creating a RESTful interface for launching Map/Reduce jobs in Hadoop. As we were chatting I recommended he actually expose three addressable resources for that purpose: a mapper resource, a reducer resource — and a control resource that ties the other two together through URIs.
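A minimal sketch of what that three-resource design might look like, with in-memory dicts standing in for a real HTTP service (the collection names and `WordCount*` classes are made up; no actual Hadoop or networking here):

```python
# Toy stand-in for a REST service: URIs map to representations, and a
# control resource links a mapper and a reducer together by URI.
import itertools

_ids = itertools.count(1)
resources = {}  # URI -> representation

def post(collection, representation):
    """Simulate POSTing to /<collection>/; returns the new resource's URI."""
    uri = f"/{collection}/{next(_ids)}"
    resources[uri] = representation
    return uri

def get(uri):
    """Simulate GETting a resource representation."""
    return resources[uri]

# The mapper and reducer are independent, addressable resources...
mapper_uri = post("mappers", {"class": "WordCountMapper"})
reducer_uri = post("reducers", {"class": "WordCountReducer"})

# ...and the control resource ties them together by URI and owns the
# job's lifecycle (a status field standing in for an actual run).
job_uri = post("jobs", {"mapper": mapper_uri,
                        "reducer": reducer_uri,
                        "status": "submitted"})
```

The nice property is that the mapper and reducer can be reused by other jobs: the control resource holds only links, not copies.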

Anyway, the upshot of all this is that as I pondered it some more, it occurred to me that I’ve always been talking about REST in the context of *services*. That is, how cool would it be if my service were just like yours and I didn’t have to spend two days writing a client to your service.

And that’s a noble thought, but I think the broader, more powerful model that’s going to emerge is one of combined data and processing services across the internet. What would it take to turn the Internet into a giant OS?

An OS needs to store data, and it also needs to provide execution units. And we’re getting there with cloud computing. But the units of execution are still tightly bound to the idea of “a box”. We call our boxes “virtual”, but they are still boxes.

So what if “the box” an application ran on was The Internets?

This blog is a place for me to throw out (up?) any half-baked, often whiny ideas that pop into my head, so I don’t know if that’s just me being artistic, or it’s me missing the boat by 5 years again, or if that’s actually a good idea. But it seems like there’s something there.

And maybe if that happens, I can put that stupid patent of mine to work, finally.


Filed under REST