A case for breaking backward compatibility in V1

Browsers have legacy support, but HTML5 is not backward compatible: you can’t open an HTML5 page in IE5.

The best you can do is keep using the old protocol for a while so that most clients are compatible by the time you make the switch. That is what we had: messages were versioned, and we were able to support older versions while sending new ones.

When we started using it, we announced that it would make new messages unreadable to older versions after n releases, and the debate on backward compatibility started. The trigger was the change in the way message IDs are calculated. The proper solution, IMO, would have been to make the fix in the v1 message-type like we did, AND prepare a v2 free of legacy, which we would have switched to after n releases. But the PR at the time only did the latter, while the debate ended up with the decision to do the former. That first fix became the model.

Having the communication protocol in the JS thread, with its limited capabilities, also led to poor decisions like using Transit on the wire, because it is an order of magnitude faster to parse in ClojureScript compared to JSON. But with the protocol in status-go, we could have used Protobuf and JSON, and eventually passed Transit to status-react. Back then we were somehow much more reluctant to move anything beyond go-ethereum patches into status-go.
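
To make the JSON/Protobuf alternative concrete, here is a minimal sketch of what a JSON wire format emitted by status-go could look like. The ChatMessage struct and its fields are illustrative assumptions, not the actual Status message schema; an equivalent type could just as well be generated from a .proto file if Protobuf were preferred.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// ChatMessage is a hypothetical wire-level message. With the protocol living
// in status-go, the same struct can be marshalled to JSON for third parties,
// and only re-encoded to Transit at the status-react boundary if needed.
type ChatMessage struct {
	Version   int    `json:"version"`   // protocol version, so clients can detect newer payloads
	ChatID    string `json:"chat_id"`
	Text      string `json:"text"`
	Timestamp int64  `json:"timestamp"` // unix milliseconds
}

func main() {
	msg := ChatMessage{Version: 1, ChatID: "general", Text: "hello", Timestamp: 1560000000000}

	// Encode for the wire; any JSON-capable client can consume this directly.
	payload, err := json.Marshal(msg)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(payload))
}
```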


" API backwards compatibility

When it comes to software APIs, there’s a school of thought that says that you should never break backwards compatibility for some classes of widely used software. A well-known example is Linus Torvalds:
People should basically always feel like they can update their kernel and simply not have to worry about it. I refuse to introduce “you can only update the kernel if you also update that other program” kind of limitations. If the kernel used to work for you, the rule is that it continues to work for you. … I have seen, and can point to, lots of projects that go “We need to break that use case in order to make progress” or “you relied on undocumented behavior, it sucks to be you” or “there’s a better way to do what you want to do, and you have to change to that new better way”, and I simply don’t think that’s acceptable outside of very early alpha releases that have experimental users that know what they signed up for. The kernel hasn’t been in that situation for the last two decades. … We do API breakage inside the kernel all the time. We will fix internal problems by saying “you now need to do XYZ”, but then it’s about internal kernel API’s, and the people who do that then also obviously have to fix up all the in-kernel users of that API. Nobody can say “I now broke the API you used, and now you need to fix it up”. Whoever broke something gets to fix it too. … And we simply do not break user space.
Raymond Chen quoting Colen:
Look at the scenario from the customer’s standpoint. You bought programs X, Y and Z. You then upgraded to Windows XP. Your computer now crashes randomly, and program Z doesn’t work at all. You’re going to tell your friends, “Don’t upgrade to Windows XP. It crashes randomly, and it’s not compatible with program Z.” Are you going to debug your system to determine that program X is causing the crashes, and that program Z doesn’t work because it is using undocumented window messages? Of course not. You’re going to return the Windows XP box for a refund. (You bought programs X, Y, and Z some months ago. The 30-day return policy no longer applies to them. The only thing you can return is Windows XP.)
While this school of thought is a minority, it’s a vocal minority with a lot of influence. It’s much rarer to hear this kind of case made for UI backwards compatibility. You might argue that this is fine – people are forced to upgrade nowadays, so it doesn’t matter if stuff breaks. But even if users can’t escape, it’s still a bad user experience.
The counterargument to this school of thought is that maintaining compatibility creates technical debt. It’s true! "

But we aren’t discussing mindset or culture here; we are discussing using the opportunity of switching from Beta (which is unstable, unfinished, experimental software) to V1 (which is stable and should be reliable moving forward). As far as I can tell, you wrote a wall of text that amounts to “we should have endless conversations because it’s easier than making a decision”.

No, this is a great conversation to finish and get it over with. It’s easy to talk about work endlessly, especially when you have a big team of engineers, but eventually we have to stop talking and do actual work. Or we can keep throwing walls of text at each other and never get anywhere. The bigger the walls of text, the less likely people are to respond, and whoever generates the most noise wins by stopping the conversation, despite holding a minority opinion.

Your “Re-framing of the conversation” hasn’t re-framed anything; if anything, it has muddled it.
Eric’s proposal is very simple:

  • Drop the DB, which allows us to drop Realm.js and a shitload of old DB migrations
  • Avoid having to design, develop, and test a migration process for accounts
  • Give people new accounts, which gives us a clean break for introducing multi-account
  • Allow us to change the protocol to use JSON/Protobuf for easier usage by 3rd parties (see the sketch after this list)
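
As a rough illustration of the “easier usage by 3rd parties” point, here is a sketch of an external client decoding the hypothetical JSON payload from the earlier example; the field names are assumptions rather than the real schema, and a Protobuf consumer would look much the same with generated types.

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// Mirror of the hypothetical wire format from the earlier sketch.
type ChatMessage struct {
	Version   int    `json:"version"`
	ChatID    string `json:"chat_id"`
	Text      string `json:"text"`
	Timestamp int64  `json:"timestamp"`
}

func main() {
	// Payload as it might arrive from a status-go API or the network.
	payload := []byte(`{"version":1,"chat_id":"general","text":"hello","timestamp":1560000000000}`)

	var msg ChatMessage
	if err := json.Unmarshal(payload, &msg); err != nil {
		log.Fatalf("decode: %v", err)
	}
	fmt.Printf("[%s] %s\n", msg.ChatID, msg.Text)
}
```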

This is the only time we can do this. If we don’t, we will make future changes even harder to keep backwards compatible and make development slower.

I really hope the call on Thursday results in some kind of decision.


I like your style: talk is cheap, just do it. :+1:


:clap:

This. Blow it up, at least this once, while there’s a forgiving chance to do so.


So I think that, at the protocol level, enforcing backwards compatibility is quite easy; and if not full backwards compatibility, then allowing multiple versions to run at the same time should be simple, as long as clients expose which version they are on.
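
To illustrate the “multiple versions running at the same time” approach, here is a minimal sketch of dispatching on a version field that clients expose. The envelope shape and handler names are assumptions for illustration, not the actual Status protocol.

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// Envelope carries only the fields needed to pick a handler; the payload
// itself is decoded by the version-specific handler.
type Envelope struct {
	Version int             `json:"version"`
	Payload json.RawMessage `json:"payload"`
}

// Hypothetical per-version handlers, registered side by side so that old and
// new clients can both be served during a transition window.
var handlers = map[int]func(json.RawMessage) error{
	1: handleV1,
	2: handleV2,
}

func handleV1(raw json.RawMessage) error {
	fmt.Println("handled as v1:", string(raw))
	return nil
}

func handleV2(raw json.RawMessage) error {
	fmt.Println("handled as v2:", string(raw))
	return nil
}

func dispatch(data []byte) error {
	var env Envelope
	if err := json.Unmarshal(data, &env); err != nil {
		return err
	}
	handle, ok := handlers[env.Version]
	if !ok {
		return fmt.Errorf("unsupported protocol version %d", env.Version)
	}
	return handle(env.Payload)
}

func main() {
	if err := dispatch([]byte(`{"version":2,"payload":{"text":"hello"}}`)); err != nil {
		log.Fatal(err)
	}
}
```

Old and new handlers can then coexist for n releases, and an unknown version fails loudly instead of being silently mis-parsed.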
