Every now and then the question comes up about where to draw the line on stateless communications as mandated in RESTful services. I remember one conversation at another job where a group said they couldn't do such-and-such because it required shared state between the client and server. And that was that.
At the time, I knew that wasn't right, but I've been wrestling with exactly why. Then last night it clicked. We share state between client and server all the time. Shopping carts, authentication and authorization status, what page a user is on, a step that a user just took and now we're verifying, etc. There are tons of examples where the client and server have to share state.
But it is bad when *A* client shares state with *A* server. If server #432 in your server farm blanks out, and you lose the state for a client, then, yes, you have server affinity. Or stickiness. And that is a bad thing.
But if your client comes to your server bank, and you share the client state either through some clustering, synchronization, or pushing the state into a distributed datastore or cache, then you’re OK. The acid test is, can I walk over to my server farm and pull the plug on a machine, or even a rack, and not lose track of what the clients are doing? If so, you’re OK.
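A minimal sketch of that idea, assuming nothing beyond the post itself: session state lives in a shared store rather than in any one server's memory, so any server can pick up where another left off. The `SharedStore` here is just a dict standing in for a real distributed cache (something like Redis or memcached), and the class and method names are hypothetical, for illustration only.

```python
class SharedStore:
    """Stands in for a distributed datastore or cache shared by every server."""
    def __init__(self):
        self._data = {}

    def get(self, key):
        return self._data.get(key)

    def put(self, key, value):
        self._data[key] = value


class Server:
    """A 'stateless' app server: everything it knows lives in the shared store."""
    def __init__(self, name, store):
        self.name = name
        self.store = store

    def add_to_cart(self, session_id, item):
        # Load the client's state from the shared store, never from local memory.
        cart = self.store.get(session_id) or []
        cart.append(item)
        self.store.put(session_id, cart)
        return cart


store = SharedStore()
farm = [Server(f"server-{i}", store) for i in range(3)]

farm[0].add_to_cart("alice", "book")   # this request happens to land on one server
del farm[0]                            # pull the plug on that machine
cart = farm[0].add_to_cart("alice", "lamp")  # a different server handles the next request
print(cart)  # the cart survives the failure: ['book', 'lamp']
```

The acid test passes: killing a server loses nothing, because no server was remembering anything particular about the client.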
That allows both reliability and scalability, because if a server goes away, there are lots of others to take up the load. And since no one server is remembering anything particular, you can just add more servers and they'll join the mix.
So the pushback from that group was off-target. The problem isn’t with shared state in general — that’s the basis of REST. Rather, the prohibition is against *A* particular client sharing state with *A* particular server.
I just wanted to clear the air on that. Thank you for your time.