Category Archives: computer architecture

Simple Explanation of Monads

Here’s my shot at a simple explanation of what a monad is in the computer world. First, I have to explain what a monoid is.

A monoid is (roughly) a group of functions that take a parameter of a certain type and return a result of that type.

That’s useful because it allows easy “chaining” or “composition” of functions. Any fluent interface in Java is a monoid:

MyThing result = myThing.spin().flip().polish();

Note that the input to each function is a MyThing, and the result is a MyThing.

A caution: strictly speaking, a monoid needs a few extra things. It needs an “identity” function that just returns the parameter, it needs a binary function that takes two items of the type and returns a new instance of the type, and those binary functions must be associative, so you can change the grouping of evaluation.
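To make those extra requirements concrete, here's a minimal sketch of a monoid in Java. All the names (`Monoid`, `identity`, `combine`) are my own invention for illustration, not from any library; the example uses strings with concatenation, which happens to be associative with `""` as the identity.

```java
// A minimal monoid sketch: an identity element plus an associative
// binary operation. Names here are illustrative, not a standard API.
interface Monoid<T> {
    T identity();         // combine(identity(), x) must equal x
    T combine(T a, T b);  // must be associative
}

public class MonoidDemo {
    public static void main(String[] args) {
        Monoid<String> concat = new Monoid<String>() {
            public String identity() { return ""; }
            public String combine(String a, String b) { return a + b; }
        };

        // Associativity: the grouping of evaluation doesn't matter.
        String left  = concat.combine(concat.combine("a", "b"), "c");
        String right = concat.combine("a", concat.combine("b", "c"));
        System.out.println(left.equals(right)); // true: both are "abc"

        // Identity: combining with the identity changes nothing.
        System.out.println(concat.combine(concat.identity(), "x")); // x
    }
}
```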

A monad takes the idea of monoid one step further; instead of mapping a class onto itself, it takes a container type which accepts any underlying type, and lifts that into a monoid space. So it’s a monoid using a container type:

A monad is (roughly) a group of functions that take a container type as an argument and return a result of the container type.

But note that the underlying type of the container may change. So you could map Container<String> to Container<Integer>. But the functions would still line up.

MyContainer<Integer> myContainer = new MyContainer<>(1);
MyContainer<String> result = myContainer.rehash();

So you are able to “chain” or “compose” the functions and get a result with an arbitrary underlying type! That’s a pretty strong claim for a statically-typed language like Java.
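The closest thing to this in the JDK itself is `java.util.Optional`: every `map()` call returns another `Optional`, so the chain "lines up," but the underlying type is free to change at each step. A small demonstration:

```java
import java.util.Optional;

public class MonadDemo {
    public static void main(String[] args) {
        // The container type stays Optional<...> the whole way,
        // but the underlying type changes String -> Integer -> String.
        Optional<String> boxed = Optional.of("12345");

        Optional<Integer> length = boxed.map(String::length);        // Optional<String> -> Optional<Integer>
        Optional<String>  hex    = length.map(Integer::toHexString); // Optional<Integer> -> Optional<String>

        System.out.println(hex.get()); // prints "5": "12345" has length 5, and 5 in hex is "5"
    }
}
```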

(Note that in Haskell, the signatures of the functions would be a little different. They would be functions of (Int, MyContainer) or (String, MyContainer), and it would be the job of the container to appropriately apply the function to its contents. But it’s hard to make a direct comparison from Java to Haskell, because Haskell works much more directly with function composition. In Haskell, the monad is actually composing the type constructors, rather than instances of the type.)

But the core idea behind the monad is to create a monoid on a container type, in order to get the benefits of chaining and composition across different underlying types.

Leave a comment

Filed under computer architecture, computer OOD, design patterns

Wherefore Art Thou Architecture?

How should one define “architecture” as opposed to “engineering”?

I’ve always seen the seniority of an engineer as a question of how big a problem they can solve on their own.

Roughly, to convey the idea:

You can give someone new to programming small, contained tasks with lots of explicit instructions about how the task needs to integrate with other pieces.

A mid-level dev is someone who can take a description of some portion of an application, and make it work within the context of that application.

A senior dev can build a medium-sized application from scratch, within the technical constraints of a shop.

A more senior dev can do that, and make technology choices along the way about what technologies to use to make it work well.

…but those aren’t hard and fast rules. And some people come out of the gate as “senior” IMHO, even if they have to spend some time with a different title.

What the architect is asked to do is view the problem even more generally than that. If you had to put together a number of applications to make the system work:

  1. What applications and services will you need?
  2. What pieces interact with customers, and which interact with each other?
  3. How will they communicate?
  4. Where will they store data?
  5. Where are the risks of failures?
  6. How will you provide reliability?
  7. How will you provide security?

So, in a sense, technical architecture is like building architecture. It’s the layout, or the plan. It shows what’s needed for the various parts, how they hold together, and, just as importantly, why.

(BTW, I’ve had a similar growth curve explained to me for architects, ranging from architecting a family of related applications or a set of very complex features, to setting technical direction for a group, to making strategic technical decisions for an entire organization.)

That said, I think most engineers at all levels have to do some “architecting” as well. It’s not a bright line. It sounds like if you focus on the Big Picture first, and don’t get hung up on the implementation details, you’ll be more in line with what he’s looking for. BTW, the ability to see the Big Picture as well as the Little Details is a huge asset in this industry, so this sounds like a great opportunity.

…Here’s an analogy. Let’s say you’re asked to create a Magic Black Box. As an engineer, you’re expected to obsess about the inner workings of the Magic Black Box. But as an architect, your focus is different. You might peek into the box out of curiosity, but you’re expected to obsess about everything around all the Magic Black Boxes.

Similar to that analogy, you might think of the architecture role as viewing the whole system as the magic box. So if you take a Gigantic Glass Box and you put the customers, the client apps, the firewall, the service tier, the database, and even the devops guys inside, then as architect you’re expected to obsess about how to make that huge system box work well.

Leave a comment

Filed under computer architecture, computer career, opinionizing

JSON the Magical Decoupling Tool

JSON/HTTP is a really good decoupling mechanism for communication between applications. The rapid industry adoption of JSON/HTTP interfaces really speaks well about how people view the usefulness of that model. I’ll throw out a couple of suggestions that will make it even more loosely coupled.

Enforce a MUST IGNORE rule.

That is, when parsing the JSON (client or server), the app MUST IGNORE any fields it doesn’t recognize.

XML went in with the idea that the app MUST UNDERSTAND each field or else the document was invalid. But that created problems with versioning, because with almost any change, clients needed to upgrade every time the server did. Even adding an informational field broke the spec. With MUST IGNORE, the server can add new fields at any time, as long as it doesn’t remove or change the meaning of existing fields (see below). Existing clients can just ignore the new fields. Rather, they MUST IGNORE the new fields.
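With a real JSON library like Jackson, the equivalent is turning off failure on unknown properties. Here's the rule in miniature, with no dependencies: the client builds its model from whatever fields it recognizes and silently skips the rest. All names (`User`, `fromJsonMap`) are invented for illustration, and a `Map` stands in for the parsed JSON.

```java
import java.util.Map;

public class MustIgnoreDemo {
    // The client's model only knows these two fields.
    record User(String name, String email) {}

    // MUST IGNORE in miniature: copy only the fields we recognize,
    // and silently skip anything the server added after we shipped.
    static User fromJsonMap(Map<String, Object> parsed) {
        return new User(
            (String) parsed.get("name"),
            (String) parsed.get("email"));
        // "nickname", "createdAt", or any future field is simply ignored
    }

    public static void main(String[] args) {
        // Simulates a newer server that added a field this client predates.
        Map<String, Object> fromServer = Map.of(
            "name", "Ada",
            "email", "ada@example.com",
            "nickname", "The Countess"); // unknown to this client
        System.out.println(fromJsonMap(fromServer)); // still parses fine
    }
}
```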

A search on MUST IGNORE and MUST UNDERSTAND will reveal lots of good articles that talk about that.

Minimize breaking changes.

A “breaking change” is a change that will break existing clients: removing a field the clients use, or changing the meaning of a field (e.g. changing an amount field from dollars to yen). That is, something that invalidates a client’s assumptions about the data it’s currently using.

With a breaking change, every client needs to make a change to support the new semantics or stop relying on the missing fields. So don’t do that unless it’s necessary.

In the extreme, you would never make a breaking change. That is, you’d maintain full backward compatibility for every release. That may or may not be realistic, and it may require carrying along baggage from early versions, but it will spare a lot of churn for the clients.

How many APIs?

Because of the ubiquity of JSON parsers, maintaining a single API regardless of client will make life a lot easier. Sometimes it doesn’t work out: sometimes a client can only understand XML or some proprietary protocol. But starting simple and adding complexity only as needed makes life easier.


Just as a tangent, OAuth 2 is a really good bet for a well-thought-out, standardized security protocol for a JSON/HTTP API. You could sit down and design something simpler, depending on what compromises are OK. But OAuth is a good, fleshed-out protocol that has undergone years of industry scrutiny, so they’ve had lots of time to work out the kinks. And standard libraries are readily available for both client and server. I used an OAuth plugin for Django on one project and it worked out really well.

Leave a comment

Filed under computer architecture, OAuth, opinionizing, REST

Anti-pattern: Framework Superclasses

Michael Feathers called out the anti-pattern of frameworks that require extension of classes in the framework. Hooray!

Thank you for saying this out loud! Frameworks based on class extension are extremely appealing at first, particularly to the control freaks who want tight coupling in their applications (unbelievably, I work with quite a few of those). Over time, the loss of design flexibility and the shift of control from the “application” or the “problem domain” over to the framework lead to a lot of stale code that is hard to test and extend. In particular, I’ve seen systems with useful business logic that can’t be reused because it’s locked into framework hierarchies.

This is one of those anti-patterns that once you see it, it’s hard to unsee it.

Essentially, when you’re doing class design, object hierarchies should follow the problem domain. So in an accounting model, you should have classes like “Account”, “Creditor”, “Ledger” and whatnot. When building an application, you end up modelling the application space, which is not as clean, but hard to get away from. So you end up modelling artifacts like “…Handler”, “…Request”, “…Response”, “…Manager”.

A lot of engineers I work with see the problem domain as dumb data objects you pass to Handler or Manager classes, rather than modelling them as functional classes directly. But oh well.

What the superclass anti-pattern calls out is that the framework is taking control of the modelling effort and forcing application artifacts into its own hierarchy. That really moves the modelling effort away from the problem domain and locks it into the framework’s domain. A lot of engineers don’t see why that’s bad, because they just want the framework to tell them what to do. But the system as a whole pays the price for it.

That’s why POJO-oriented technologies have surged in popularity. It’s the recognition that I should be able to take my class, as-is, in my own problem domain, and hand it to a framework for handling. There’s growing recognition of the power that provides.
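Here's the contrast in a small sketch, with every name invented for illustration. In the anti-pattern, your domain class must write `extends FrameworkRequestHandler`, welding your logic to the framework. In the POJO style, the framework accepts your own types and plain functions over them:

```java
import java.util.function.Function;

public class PojoDemo {
    // POJO-oriented: the domain class knows nothing about any framework,
    // so it can be tested and reused on its own.
    record Account(long id, long balanceCents) {
        Account deposit(long cents) { return new Account(id, balanceCents + cents); }
    }

    // A stand-in "framework" entry point: it accepts a plain function
    // over your own types instead of demanding a subclass of its base handler.
    static <T, R> R dispatch(T request, Function<T, R> handler) {
        return handler.apply(request);
    }

    public static void main(String[] args) {
        Account result = dispatch(new Account(1, 100), a -> a.deposit(50));
        System.out.println(result.balanceCents()); // 150
    }
}
```

The test for the anti-pattern is simple: can you use `Account` without the framework on the classpath? Here, yes.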

Leave a comment

Filed under computer architecture, design patterns, opinionizing

Fewer Data Scientists, More Big Data?

Interesting article breaking out some aspects of Big Data

He talks about data architecture, machine learning, and analytics as the gateways to a company harnessing big data without a big team of number crunchers and Hadoop experts.

While we’re at it, here’s an article from a guy who transformed himself into a data scientist:

Leave a comment

Filed under bigdata, computer architecture

REST and Shared State Between Client and Server

Every now and then the question comes up where to draw the line on stateless communications as mandated in RESTful services. I remember one conversation at another job where a group said they couldn’t do such-and-such because it required shared state between the client and server. And that was that.

At the time, I knew that wasn’t right, but I’ve been wrestling with exactly why. Then last night it clicked. We share state between client and server all the time. Shopping carts, authentication and authorization status, what page a user is on, a step that a user just took and now we’re verifying, etc. There are tons of examples of where the client and server have to share state.

But it is bad when *A* client shares state with *A* server. If server #432 in your server farm blanks out and you lose the state for a client, then, yes, you have server affinity. Or stickiness. And that is a bad thing.

But if your client comes to your server bank and you share the client state, either through clustering, synchronization, or pushing the state into a distributed datastore or cache, then you’re OK. The acid test is: can I walk over to my server farm and pull the plug on a machine, or even a rack, and not lose track of what the clients are doing? If so, you’re OK.
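A sketch of the shape this takes in code: session state lives in a shared store keyed by a token, so any server can serve any request. Here a `ConcurrentHashMap` stands in for a distributed cache (Redis, memcached, or similar); all names are invented for illustration.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class SharedStateDemo {
    // Stand-in for a distributed cache; the interface is the point,
    // not this in-memory map.
    static final Map<String, String> SESSION_STORE = new ConcurrentHashMap<>();

    // Any server can handle any request: state is looked up by token,
    // never held in the memory of one particular server.
    static String handleRequest(String serverName, String sessionToken) {
        String cart = SESSION_STORE.get(sessionToken);
        return serverName + " sees cart: " + cart;
    }

    public static void main(String[] args) {
        SESSION_STORE.put("token-42", "[book, coffee]");

        // Server #1 handles the first request, then "pulls the plug";
        // server #2 picks up the next request and sees the same state.
        System.out.println(handleRequest("server-1", "token-42"));
        System.out.println(handleRequest("server-2", "token-42"));
    }
}
```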

That allows both reliability and scalability, because if a server goes away, there are lots of others to take up the load. And since any one server isn’t remembering anything particular, you can just add more servers and they’ll join the mix.

So the pushback from that group was off-target. The problem isn’t with shared state in general — that’s the basis of REST. Rather, the prohibition is against *A* particular client sharing state with *A* particular server.

I just wanted to clear the air on that. Thank you for your time.


Filed under computer architecture, computer scaling, open standards, opinionizing, REST

Article: REST vs. SOAP

This article has a ton of information. I’m going to be moving into a shop that is SOAP-heavy but REST-curious. So I need to be conversant on the differences, benefits, and costs between the two.

This goes way deep into the background, so I’m going to be drawing heavily on it.

1 Comment

Filed under computer architecture, open standards, REST