Tag Archives: architecture

What I learned from $2500 Udi Dahan course

Around the beginning of April 2020, Udi Dahan, owner of Particular Software, released his course in the form of online videos, for free. The big deal is that Udi is one of the world’s foremost experts on Service-Oriented Architecture, Distributed Systems, and Domain-Driven Design. This was a trigger for me and my whole team to watch the course and hold a weekly discussion session to talk through completed chapters. Here is what I learned.

Use messaging

In his course, Udi highlighted many times that messaging can handle most of your service-to-service communication. It may be slower than traditional RPC calls, but it’s more reliable and stable. It also scales better: it doesn’t really matter how many subscribers there are, what matters is how easily we can change the existing architecture.

With RPCs, services are tightly coupled and need to support the same contract. When a change is needed, we often need to change more than one service. With messaging, we can use messages that are already being sent and build whatever new service we need on top of them.

It is also worth mentioning that messaging systems have been in the business for a long time and there is a handful of solutions to choose from. It’s not a technology that is still changing quickly; it’s been around for a while.

There’s no single best solution

This is something that was repeated dozens of times across the course. Udi showed the best available approaches and architectural styles: RPCs, messaging, micro-services, CQRS, Sagas, and demonstrated their good and bad usages. All those concepts are great, but it doesn’t mean that we should use one of them for everything. It’s actually better to combine a few approaches, technologies and database models, and use whatever suits the problem best.

For example, if CQRS works great for one e-commerce solution, it doesn’t mean that it is now the formula for every e-commerce system there is. Therefore we shouldn’t unconsciously take whatever worked for a previous project without a proper analysis.

Find your service boundaries

This was one of the most important chapters and an a-ha moment for me and my teammates. Udi explained how important it is to find the service boundaries – the places where a business domain can be divided into smaller pieces. They also define the best places where a big codebase can be divided into smaller, independent services.

Let’s take a look at an example. Let’s say we are building a hospitality system that a guest can interact with.

So we have nicely separated micro-services, each with a separate front-end and DB. They communicate via messages in a Service Bus and save the data they need in their own DB, in the form they need it. Every change done by the user is saved in the DB and propagated via a message.

So what’s wrong?

It may seem that it’s a perfectly decoupled solution that can be scaled according to needs. However, there are a few problems. The first one is that we have a lot of infrastructure to maintain: services and databases must have a deployment process, and with this approach we are going to have more and more small services.

The second one is scalability. The services seem easy to scale because everything is separate, but in most cases there will be no need to do so. We will end up using resources we do not need. The other thing is that it’s not easy to maintain many instances of a service that share the same database. Stateful services are hard to scale, whereas stateless services can be scaled easily. Scalable services need to be designed with this specific goal in mind.

The third and less obvious one is that those services are actually not decoupled. They are working on the same data: basic user data, reservation details, reservation suborders, and payments. We will end up having all of that data in every database. Don’t get me wrong, data duplication isn’t a bad thing when it comes to reading. But when a change is necessary, it has to be synchronized with every service that uses this data. Now, let’s have a look at a different approach.

In this approach, we still have separate front-ends, but we have a single API and DB. The deployment process is easier, and applying a change of contract is also easier, because we can do it all at once and we need much less synchronization. Of course, this is only an example and it wouldn’t fit every scenario, but here it makes sense.

A micro-services approach doesn’t have to be better than a monolith

When looking at the example above, we clearly see that having slightly bigger services can bring certain advantages to our architecture. When building a micro-service, it is a common pattern that it should do one thing only. That can lead to services that have a single method in their APIs. While it may make sense in some cases, it would result in creating many very small services that contain more synchronization code than actual business logic.

On the other hand, a monolith does not need to be a bad idea. If we can make it performant enough and the codebase isn’t that bad, then it actually has some advantages. For example, introducing a change is far easier and quicker than in many micro-services. Also, we have all business logic and contracts in front of our eyes, where we can easily see them. It’s also easier to write unit tests in a monolith than integration tests with a micro-service approach.

Don’t try to model reality – it doesn’t exist

A very powerful sentence that Udi said when talking about domain modeling. Although it is said that Object-Oriented Programming should mimic the real world, it doesn’t seem to be the right approach. Let me bring up the example that Udi mentioned.

We need to calculate the total amount of transactions made this week and last week. When reflecting that as a real-life model, we would end up with a table of transactions with dates and amounts. Like this:

Then, when we need the sum, we would calculate it on the fly. This, of course, would work, but calculating the sum of transactions every time is just a bad idea. If we shift our thinking, we could just have a structure like this:

It doesn’t resemble a transaction list from the real world, but in this case we simply don’t need one. It will require some maintenance when the week is over, but that will be far less work than calculating the sum on every read.
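
To make the difference concrete, here is a rough C# sketch of the two models. The class and property names are made up for illustration only; they are not taken from the course material.

using System;
using System.Collections.Generic;
using System.Linq;

// The "real-world" model: store every transaction and sum it on each read.
public class Transaction
{
    public DateTime Date { get; set; }
    public decimal Amount { get; set; }
}

public class TransactionLedger
{
    private readonly List<Transaction> _transactions = new List<Transaction>();

    public void Add(Transaction transaction) => _transactions.Add(transaction);

    // Recalculated on every read - exactly the work we want to avoid.
    public decimal SumSince(DateTime from) =>
        _transactions.Where(t => t.Date >= from).Sum(t => t.Amount);
}

// The shifted model: keep only the two numbers we actually read.
public class WeeklyTransactionTotals
{
    public decimal ThisWeek { get; private set; }
    public decimal LastWeek { get; private set; }

    public void Add(decimal amount) => ThisWeek += amount;

    // A little maintenance once a week instead of a recalculation on every read.
    public void RollOverWeek()
    {
        LastWeek = ThisWeek;
        ThisWeek = 0;
    }
}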

The summary

The Advanced Distributed Systems Design course changed the way I think about distributed architecture and micro-services. Udi is an amazing speaker who used his experience from many real-life projects and was able to explain the most complicated concepts in a way that a child would understand.

Although it wasn’t easy to go through 30+ hours of the course, it was a huge dose of knowledge and I would recommend it to anyone.

BTW, here is the link: https://particular.net/adsd

Why duplication isn’t always a bad thing in micro-services

From an early development age, I was taught that duplication is a bad thing, especially when it comes to storing data. Relational databases were invented to represent data that relate to each other and to store them efficiently. There are even a few normalization rules for avoiding data redundancy. We are slowly moving away from this approach towards non-relational databases, because of their simplicity and falling storage prices. Nevertheless, having the same thing in two places leads to ambiguity and chaos. This also relates to the DRY rule:

DRY – don’t repeat yourself. Every piece of knowledge must have a single, unambiguous, authoritative representation within a system. (Wikipedia)

The concept of breaking the DRY rule is called WET, commonly taken to stand for either “write everything twice”, “we enjoy typing” or “waste everyone’s time”. That doesn’t sound nice, right? So when is duplication acceptable?

Let’s look at the example.

In this example, we see a network of stateful services that exchange data about products. Data can come from many sources and we need to send it to many destinations. Data flows from one service to another. Here are a couple of rules:

  • every micro-service has data about its own specific thing and keeps it persistent
  • services work in a publisher-subscriber model, where every service can publish the data and receive the data it needs
  • services don’t know about each other

Does this sound familiar?

Event-driven microservices

Event-driven programming isn’t a new thing, as we can check on Wikipedia. It is a paradigm where the flow of the program is determined by events. It is extensively used in graphical user interfaces and web applications, where user interactions trigger program execution.

Every major platform supports event-driven programming: there is AWS Lambda by Amazon, Azure Functions by Microsoft, and Google Cloud Functions by Google. All of those technologies offer event triggering.

In back-end micro-services, an event can be, for example, a web request or a service bus message. Let’s use Service Bus messages, where every service will be connected to the bus and can act both as a publisher and a subscriber.
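
As a rough sketch of that idea (I’m assuming the Microsoft.Azure.ServiceBus package here; the topic name, subscription name and connection string are made-up placeholders), one service publishing product updates and another subscribing to them could look like this:

using System;
using System.Text;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;

public class ProductUpdatedPublisher
{
    private readonly TopicClient _topicClient =
        new TopicClient("<connection-string>", "product-updated");

    public Task PublishAsync(string productJson)
    {
        // Fire and forget: the publisher does not know who (if anyone) will consume this.
        var message = new Message(Encoding.UTF8.GetBytes(productJson));
        return _topicClient.SendAsync(message);
    }
}

public class ProductUpdatedSubscriber
{
    private readonly SubscriptionClient _subscriptionClient =
        new SubscriptionClient("<connection-string>", "product-updated", "marketing-content-service");

    public void Start()
    {
        // Each interested service has its own subscription, so adding a new consumer
        // is just a matter of creating another subscription - no publisher changes needed.
        _subscriptionClient.RegisterMessageHandler(HandleAsync,
            new MessageHandlerOptions(OnErrorAsync) { AutoComplete = true, MaxConcurrentCalls = 1 });
    }

    private Task HandleAsync(Message message, CancellationToken token)
    {
        var productJson = Encoding.UTF8.GetString(message.Body);
        Console.WriteLine($"Updating local product copy: {productJson}");
        return Task.CompletedTask;
    }

    private Task OnErrorAsync(ExceptionReceivedEventArgs args)
    {
        Console.WriteLine(args.Exception);
        return Task.CompletedTask;
    }
}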

In this architecture, the usage of a Service Bus is crucial, because it provides some distinctive features:

  • services are loosely coupled and don’t know about each other. Building another micro-service that needs specific data is just a matter of subscribing to the right publishers
  • it’s good for scalability – it doesn’t matter if 1 or 10 instances of a service subscribe to a certain topic, it will work without any code change
  • it can handle a big load – messages will be kept in a queue and a service can consume them at its own pace
  • it has built-in mechanisms for failure – if a message could not be processed, it will be put back in the queue a set number of times. You can also specify a custom retry policy that can exponentially extend the wait time between retries

If you’d like to know more about Microsoft Service Bus in .Net Core, jump to my blog posts:

What happens when it fails?

When we notice that there might be something fishy going on with our micro-service, we have to be able to fix it. It might miss some data, but how do we know exactly what data this micro-service should have? When micro-services are stateful, we have the whole state saved in every one of them. This means that we can make other services send data to the one in a failed state. Or even better – tell a service to fix itself!

You can see how a micro-service’s state can be fixed by a single admin request. A service can get all of the data only because other services are stateful. This comes in handy not only in crisis situations, but also when debugging. Investigating stateless services when you don’t actually know what data came in and what came out can be really painful.
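
Here is a hypothetical sketch of the “fix itself” idea. None of these types (IBus, ResyncRequested, IProductStore) come from a real framework; they only illustrate the flow between the broken service and a stateful source service.

using System.Collections.Generic;
using System.Threading.Tasks;

public interface IBus
{
    Task PublishAsync<T>(T message);
}

public class ResyncRequested
{
    public string RequestedBy { get; set; }
}

public class Product
{
    public string Id { get; set; }
    public string Name { get; set; }
}

public interface IProductStore
{
    Task<IReadOnlyList<Product>> GetAllAsync();
}

// In the broken service: a single admin action publishes a resync request.
public class AdminActions
{
    private readonly IBus _bus;
    public AdminActions(IBus bus) => _bus = bus;

    public Task RequestResync() =>
        _bus.PublishAsync(new ResyncRequested { RequestedBy = "marketing-content-service" });
}

// In the stateful Product Service: because it still holds all of its data,
// it can simply re-publish it, and the broken service rebuilds its copy from the events.
public class ResyncRequestedHandler
{
    private readonly IProductStore _products;
    private readonly IBus _bus;

    public ResyncRequestedHandler(IProductStore products, IBus bus)
    {
        _products = products;
        _bus = bus;
    }

    public async Task Handle(ResyncRequested message)
    {
        foreach (var product in await _products.GetAllAsync())
        {
            await _bus.PublishAsync(product);
        }
    }
}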

But it’s data duplication!

That’s right! In the mentioned scenario each micro-service is stateful and has its own database. Apart from all the good things I mentioned, I just need to add that those services are small and easy to maintain. They could be merged into one bigger micro-service, but then it wouldn’t be a micro-service anymore, would it? Also, when services have less to do, they work faster.

Ok, but back to the problem. It’s data duplication! With a big D! Almost all services share some parts of the same thing, so how do we know which one is correct and which one to use? The answer is simple: keep data everywhere, but use only one source.

Single source of truth – the one source for getting certain data. Whenever you want data that is consistent and up-to-date, you need to take it from the source of truth. It guarantees that when two clients request the data at the same time, they will get the same result. This is very important in a distributed system, where one client can feed a product page showing titles and prices, and another one should show the same data in a cart or checkout.

In our example, the single source of truth for products would be the Product Service, and for marketing content the Marketing Content Service.

An inspiration

Some time ago I got inspired by Mastering Chaos – A Netflix Guide to Microservices, a talk by Josh Evans about the Netflix micro-services architecture. I strongly encourage you to watch it.

Below you can see how micro-services talk to each other and process data.

Yes, it’s a cool gif from the mentioned presentation that I really wanted to show you 🙂

Analyze code with NDepend

Recently I got my hands on NDepend, a static code analysis tool for the .Net framework. Because it can work as a plugin for Visual Studio, it offers great integration with the code and rapid results. So what can it do? Let’s see!

Getting started

To start working with NDepend you need to download it and install it for your Visual Studio. It’s not a free tool, but you can choose to start a 14-day trial period. Then you need to attach a new NDepend project.

And choose assemblies to analyze.

Then wait a few seconds… and voila! You can see a dashboard with results or a full HTML report.

What does it mean?

NDepend analyses code in many ways, checking whether it is written correctly. At the top, you can see that it compared the code to a version from 36 days ago and shows how it has changed. Let’s have a look at some of the results:

  • Lines of Code – this is the number of logical lines of code, so code that contains some logic. Quoting the website: “Interfaces, abstract methods, and enumerations have a LOC equals to 0. Only concrete code that is effectively executed is considered when computing LOC”
  • Debt – an estimation of the effort needed to fix issues. A debt ratio is obtained by comparing the debt with an estimation of the effort needed to develop the analyzed code, hence the percentage. These are numbers you can use to compare different versions of your app and also to plan refactorings
  • Types – apart from LOC, shows how big your project is and how big the recent change is
  • Comment – comment coverage of the code
  • Quality gates – an overall check that suggests whether this code is good enough to proceed with it (for example to merge it)
  • Rules – shows which rules are violated and their level
  • Issues – problems with your code that need attention

To clear this up – you might have 149 issues that fall into 30 failing rules, and that can result in two main quality gate failures.

It gets more interesting if you analyze your project on a day-to-day basis and start to notice trends and the effects of recent changes. This is an insight that you cannot get even if you’re very well accustomed to the project. Let’s look at the analysis for a small micro-service over its whole life – a few months. You have already seen the dashboard, so here are some charts.

You can see how the project is getting bigger and how the number of issues rises and then drops. This is a normal phase of software development, where you code first and fix bugs later. You can also see that we started to write comments only after a few months of project development.

The report

Along with the analysis, NDepend creates a full HTML report that presents even more data.

There are some useful diagrams:

  • Dependency Graph – shows how your assemblies are connected to each other
  • Dependency Matrix – sums up all dependencies so you can easily see which assemblies have way too much responsibility
  • Treemap Metric View – shows in a colorful way which namespaces have issues
  • Abstractness vs. Instability – points out which assemblies in your solution might need attention

The report also lists rules that are violated by your app.

There are many rules that check our code, from Object-Oriented Programming and potential code smells to best practices in design and architecture. In Visual Studio you can see the same report as well, and you can easily navigate to the code that has problems. Let’s click on the first one: avoid methods with too many parameters.

Then we can drill in by double-clicking to see the places in the code where the rule applies.

In this example, the constructor has too many parameters – 10 of them. This might mean that the class has too much responsibility and might need refactoring. Also, writing tests for it might be challenging.

What is cool about NDepend is that every rule can be customized and edited, because it is written in C#. We can have a quick look at this rule and check how many parameters in a method is good enough.
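
Under the hood the rule is a CQLinq query, which is basically C# LINQ over the code model. From memory, the built-in rule looks roughly like this, so treat the exact wording as an approximation:

// <Name>Avoid methods with too many parameters</Name>
warnif count > 0
from m in Application.Methods
where m.NbParameters >= 7
orderby m.NbParameters descending
select new { m, m.NbParameters }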

So 7 is the number here 🙂 Methods with 6 parameters are ok.

What is it for?

I showed you that this tool can analyze code and show some advanced metrics, but what can I use it for in my project?

  • Finds code smells – it’s ideal for identifying technical debt that needs fixing. When planning a technical sprint, where you only polish the code, this tool will be perfect for spotting any flaws
  • It’s quick – you can easily analyze a new project and check if it is written following good practices
  • Shows progress – you can track how your codebase quality changes while you’re changing the code

That last point surprised me. Some time ago I was doing quite a big refactoring in a legacy project. I was convinced that deleting a lot of difficult code and introducing simpler, shorter classes would dramatically improve codebase quality. In fact, the technical debt grew a little, because I didn’t follow all the good practices. While the old code seemed illogical and hard to debug, the new one is just easier to understand, because I wrote it.

It shocked me

I used NDepend on a project I’m proud of, one I was involved in from the beginning and spent a lot of time making as close to perfection as was possible in a reasonable time. As a group of senior developers, we almost always find something during a code review that could be done better or at least discussed. I was sure that the code was really good. How shocked I was when I found out that NDepend didn’t fully agree. The project did get an A rating, but there were still a few things that could have been done better. And this is its real value.

Would I recommend it?

I need to admit that I got an NDepend license for free to test it out. If it wasn’t for Patrick Smacchia from NDepend, I wouldn’t have discovered it anytime soon. And I would never have learned that no matter how good you get at your work, you can always do better.

I recommend you try NDepend yourself and brace yourself… you might be surprised 🙂

Why sharing your DTOs as dedicated nuget package is a bad idea

I’m working on a solution that has many micro-services, and some of them share the same DTOs. Let’s say I have a Product DTO that is assembled and processed through 6 micro-services, and all of them have the same contract. In my mind an idea emerged:

Can I have a place to keep this contract and share it across my projects?

And the other thoughts came to my mind:

It can be distributed as a NuGet package!
I’ll update it once and the rest is just using it.
It will always be correct, no more typos in contracts that lead to bugs.

And boy, was I lucky not to do so.

After some time I had to extend my contract – that was fine. But while the services were evolving over time, Product started to mean different things in them. In one it was just Product, representing everything; in others it was the Product coming from one specific data source. There were CustomProduct, LeasingProduct and ExtendedProduct as well. And I’m not even mentioning DigitalProduct, BonusProduct, DbProduct, ProductDto, etc.

The thing is that a group of services might use the same data, but it doesn’t mean that they are in the same domain.

What is wrong with sharing a contract between different domains?

  • Classes and fields mean different things in different domains, so the name Product for everything isn’t perfect. Maybe in some domains a more specific name will be more accurate
  • Updating a contract with non-breaking changes is fine, but introducing a breaking change will always require supervision. When it’s a NuGet package, it’s just too easy to update the package without thinking about the consequences
  • When we receive a contract and we do not need all of it, we can specify only the fields we actually need. This way receiving and deserializing an object is a bit faster
  • When a shared contract distributed as a NuGet package is extended, the receiver immediately sees that something changed and has to update it and do some work on its side, whereas normally we wouldn’t bother updating the contract until we actually need the change

When can having a contract in a NuGet package be beneficial? When we need exactly this contract to communicate with something. For example, a popular service can share its client and DTOs as a NuGet package to ease integration with it.

And what do you think? Is it a good thing, or a bad thing?

Getting started with Microsoft Orleans

Microsoft Orleans is a developer-friendly framework for building distributed, high-scale computing applications. It does not require the developer to implement a concurrency and data storage model. Instead, it requires the developer to use predefined code blocks and enforces that the application is built in a certain way. As a result, Microsoft Orleans empowers the developer with a framework with exceptional performance.

Orleans proved its strengths in many scenarios, the most recognizable ones being the cloud services for the Halo 4 and 5 games.

The framework

Microsoft Orleans is a framework built around the actor model. It is not a new idea in computer science; it originated in 1973. It is a concurrency model that treats actors as universal primitives. Just as everything is an object in object-oriented programming, here everything is an actor. An actor is an entity that, when it receives a message, can:

  • send a finite number of messages to other actors
  • create a finite number of new actors
  • designate the behavior to be used for the next message it receives

Every operation is asynchronous, so it returns a Task, and operations on actors can be handled simultaneously. In Orleans actors are called grains, and they are almost singletons, so it is almost impossible to execute work on the same actor in parallel. Grains can hold and persist their state, so every actor can have its own data that it manages. I mentioned that operations can be executed in parallel, and that means we are not sure whether certain operations will be executed before others. This also means that we cannot be sure the application state is consistent at any given moment, so Microsoft assures eventual consistency: we are not sure that the application state is correct right now, but we know it eventually will be. Orleans also handles errors gracefully: if a grain fails, it will be created anew and its state will be recovered.

An example

Let’s assume that e-mail accounts are grains and that the operations on an actor are just sending and removing e-mails. The model of an actor can look like this:
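
Here is a minimal sketch of such a grain. The interface and class names are mine; I’m only assuming the standard Orleans grain programming model (a grain interface plus a class deriving from Grain).

using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Orleans;

public interface IEmailAccountGrain : IGrainWithStringKey
{
    Task SendEmail(string to, string subject, string body);
    Task ReceiveEmail(Email email);
    Task RemoveEmail(Guid emailId);
}

public class Email
{
    public Guid Id { get; set; }
    public string From { get; set; }
    public string Subject { get; set; }
    public string Body { get; set; }
}

public class EmailAccountGrain : Grain, IEmailAccountGrain
{
    // The grain owns its mailbox; no other grain can touch it directly.
    private readonly List<Email> _inbox = new List<Email>();

    public async Task SendEmail(string to, string subject, string body)
    {
        // Sending means notifying the recipient grain, which updates its own state.
        var recipient = GrainFactory.GetGrain<IEmailAccountGrain>(to);
        await recipient.ReceiveEmail(new Email
        {
            Id = Guid.NewGuid(),
            From = this.GetPrimaryKeyString(),
            Subject = subject,
            Body = body
        });
    }

    public Task ReceiveEmail(Email email)
    {
        _inbox.Add(email);
        return Task.CompletedTask;
    }

    public Task RemoveEmail(Guid emailId)
    {
        _inbox.RemoveAll(e => e.Id == emailId);
        return Task.CompletedTask;
    }
}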

Now sending an e-mail will mean that we need to have at least two e-mail accounts involved.

Every grain manages its own state and no one else can access it. When a grain receives a message to send an e-mail, it sends messages to all recipient actors that should be notified, and they update their state. A very simple scenario with clear responsibilities. Now, if we follow the rule that everything is an actor, then we can say that an e-mail message is also an actor handling its own state, and every property of an account can be an actor too. It can go as deep as we need; however, simpler solutions are just easier to maintain.

Where can I use it?

The actor model is best suited for data that is well grained, so that actors can be easily identified and their state can be easily decoupled. Accessing data held by an actor is instant, because it keeps it in memory, and the same goes for notifying other actors. Taking that into account, Microsoft Orleans will be most beneficial where an application needs to handle many small operations that change the application state. With traditional storage, for example a SQL database, the application needs to handle concurrency when accessing the data, whereas in Orleans the data is well partitioned. You may think that there have to be updates that change shared storage, but that’s a matter of changing the way the architecture is planned.

You can think of an actor as a micro-service with its own database and a message bus to other micro-services. There can be millions of such micro-services, but each of them will be unique and will hold its own state.

If you’re interested in an introduction by one of the Orleans creators, have a look at this: https://youtu.be/7CWEc8dBH38?t=412

There’s also a very good example of usage by the NCR company here: https://www.youtube.com/watch?v=hI9hjwwaWBw

Implementing deferral mechanism in ServiceBus

Deferral is a method of leaving a message in a queue or subscription when you cannot process it at the moment. When using the PeekLock receive mode you read a message but leave it in the queue. When processing of a message is done, you call Complete and the message is removed from the queue, but when something goes wrong, you can call Abandon and the message will be available in the queue for the next read. An important thing to remember is that there is a time interval for locking the message in a queue: LockDuration. If the message is not processed and completed during that time, it will become available in the queue again and you will get a MessageLockLostException when trying to do anything with it. When a message loses its lock and stays in the queue, either by being abandoned or because the lock expired, its DeliveryCount is incremented. After reaching the limit, which is 10 by default, the message is moved to the dead-letter queue.

message-lifecycle

It is a great life-cycle, where a message that we just cannot process will go to the dead-letter queue. With a retry policy, all transient errors that occur while connecting to Service Bus will be handled internally – you do not have to worry about them. Problems may occur when you would like to handle connection problems not related to Service Bus. The solution that Microsoft proposes is to defer a message that you cannot process: leave it in the queue, but hide it from receivers. Such a message will be available only when asking for it by its sequence number. The whole loop can look like this:

private static async Task StartListenLoopWithDeferral()
{
    var client = GetQueueClient(ReceiveMode.PeekLock);
    // sequence number of a deferred message mapped to the time when it should be retried
    var deferredMessages = new List<KeyValuePair<long, DateTimeOffset>>();

    while (true)
    {
        var messages = Enumerable.Empty<BrokeredMessage>();

        try
        {
            messages = await client.ReceiveBatchAsync(50, TimeSpan.FromSeconds(10));
            messages = messages ?? Enumerable.Empty<BrokeredMessage>();
            if (!messages.Any())
            {
                continue;
            }

            foreach (var message in messages)
            {
                Console.WriteLine("Received a message: " + message.GetBody());
            }
            await client.CompleteBatchAsync(messages.Select(m => m.LockToken));

            // handling deferred messages whose retry time has passed
            var messagesToProcessAgain = deferredMessages.Where(d => d.Value < DateTimeOffset.Now).Take(10).ToList();
            foreach (var messageToProcess in messagesToProcessAgain)
            {
                BrokeredMessage message = null;
                try
                {
                    deferredMessages.Remove(messageToProcess);
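                    // ask Service Bus for the deferred message by its sequence number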
                    message = await client.ReceiveAsync(messageToProcess.Key);

                    if (message != null)
                    {
                        // processing
                        Console.WriteLine("Received a message: " + message.GetBody());

                        await client.CompleteAsync(message.LockToken);
                    }
                }
                catch (MessageNotFoundException) { }
                catch (Exception e)
                {
                    Console.WriteLine(e);
                    // re-defer by the stored sequence number (the received message may still be null here)
                    deferredMessages.Add(new KeyValuePair<long, DateTimeOffset>(
                        messageToProcess.Key,
                        DateTimeOffset.Now + TimeSpan.FromMinutes(2)));
                }
            }
        }
        catch (MessageLockLostException e)
        {
            Console.WriteLine(e);

            foreach (var message in messages)
            {
                await message.AbandonAsync();
            }
        }
        catch (Exception e)
        {
            Console.WriteLine(e);

            // defer messages
            foreach (var message in messages)
            {
                deferredMessages.Add(new KeyValuePair<long, DateTimeOffset>(
                    message.SequenceNumber,
                    DateTimeOffset.Now + TimeSpan.FromMinutes(2)));
                await message.DeferAsync();
            }
        }
    }
}

The code consists of an endless loop that receives batches of messages (ReceiveBatchAsync) and processes them. If something goes wrong, messages are added to the deferredMessages list with a 2-minute time span. After completing messages successfully, the program checks if there are any deferred messages that should be processed again. If there is a problem again, the message will be added back to the deferredMessages list. There is also a check for MessageLockLostException, which might occur when a message went to the dead-letter queue and we should no longer ask for it. Message ids are kept in memory, and that is an obvious potential issue, because when the program is restarted this list will be wiped and there will be no way to get those messages from the queue.

Deferral pros and cons

The deferral mechanism can be useful, because it is based on the original Azure infrastructure: handling messages this way keeps the message’s DeliveryCount property incrementing and eventually moves the message to the dead-letter queue. It is also a way to handle one message and make it available again after a custom time.

However, the algorithm itself is not easy and needs to handle many edge cases, which makes it error prone. The second thing is that it breaks the order of messages, and in some cases it is essential to keep FIFO. I wouldn’t recommend it in every scenario. Maybe a better approach would be to add a simple wait when something goes wrong and try again after a while.

Azure Service Bus – introduction

Azure Service Bus is Microsoft’s implementation of a messaging system that works seamlessly in the cloud and does not require you to set up a server of any kind. Messaging is a good alternative to REST for communication between micro-services. Let’s compare the two.

REST communication

service-bus-communication

  • contract constraints may be a risk
  • synchronous model by default
  • load balancing is harder

Messaging communication

REST-communication

  • asynchronous
  • easy for load balancing, can have multiple competing readers
  • fire and forget

Using messages for communication gives much more flexibility when planning an architecture. You can add more receivers and more senders at will. It’s a fire-and-forget model, where you do not care when a message will be processed and you don’t need to worry about how it will reach the receiver – your messaging system takes care of that for you.

Azure Service Bus

To explore Service Bus options, go to your Azure portal and search for Service Bus.

service-bus-search

 

Go inside and you’ll need to create or choose a namespace. Namespaces can be useful when you would like to group multiple queues or topics, for example by different contexts. It’s much easier to browse through them when you have, for example, all order-related queues in an orders namespace. Inside the namespace you’ll see a list of your queues and topics.

topics-and-queues

 

Queues

A queue offers FIFO (First In, First Out) message delivery to one or more competing consumers. Each message is received and processed by only one consumer, and the messaging system manages this process centrally, so no deadlock will occur.

queue

It is expected that messages are processed in the same order in which they arrived. Messages, however, do not have to be processed right away when they arrive. They can wait safely in the queue for the first free consumer. With the possibility of having multiple consumers, it is very easy to balance the load and add new consumers to the infrastructure, so messages will be processed faster. This is of course under the assumption that messages can be processed independently and do not relate to one another. Another key feature is that the work of consumers does not affect the work done by the publisher. During heavy load or high usage of the system, messages will be stored in the queue and consumers will not be overloaded with multiple calls, as they are with REST services, but will continue to work and process messages at the same speed as usual.

In Azure Service Bus the size of a queue can be huge, even up to 16GB, while the message size limit is rather small – a maximum of 256KB. However, session support allows the creation of unlimited-size sequences of related messages.
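
For completeness, a minimal sketch of sending to and reading from a queue could look like this. I’m assuming the older WindowsAzure.ServiceBus SDK here (the same one used in the deferral example above); the queue name and connection string are placeholders.

using System;
using System.Threading.Tasks;
using Microsoft.ServiceBus.Messaging;

public static class QueueExample
{
    private const string ConnectionString = "<your-service-bus-connection-string>";
    private const string QueuePath = "orders";

    public static async Task SendAsync()
    {
        var client = QueueClient.CreateFromConnectionString(ConnectionString, QueuePath);
        // The sender does not care when, or by which consumer, this will be processed.
        await client.SendAsync(new BrokeredMessage("New order: 1234"));
    }

    public static async Task ReceiveAsync()
    {
        var client = QueueClient.CreateFromConnectionString(ConnectionString, QueuePath, ReceiveMode.PeekLock);
        var message = await client.ReceiveAsync(TimeSpan.FromSeconds(10));
        if (message != null)
        {
            Console.WriteLine(message.GetBody<string>());
            // Completing removes the message; only this one consumer has processed it.
            await message.CompleteAsync();
        }
    }
}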

Topics

topic

 

Compared to a queue, where only one consumer processes a message, in topics messages are cloned into subscriptions, each of which contains the same messages. This represents a one-to-many form of communication in a publish/subscribe pattern. The subscriptions can use additional filters to restrict the messages that they want to receive. Messages can be filtered by their attributes, where the publisher can, for example, set the recipient who should receive a given message. Consumers, instead of connecting directly to the topic, connect to subscriptions, which can be understood as virtual queues. Just like queues, subscriptions can have multiple competing consumers, and this gives even more possibilities for planning a services architecture.

An example usage of a topic can be notifications about product stock status changes. The topic where messages are sent can have multiple subscriptions, each filtering messages according to its needs. One may need all updates, but another may be interested only in digital products like games. With the SqlFilter class, a filter rule can look like this:

ProductType = 'Digital' AND QuantityForSale > 0
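
Applying such a filter when creating a subscription could look roughly like this. Again I’m assuming the older WindowsAzure.ServiceBus SDK; the topic and subscription names are placeholders.

using System.Threading.Tasks;
using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;

public static class DigitalProductsSubscription
{
    public static async Task CreateAsync(string connectionString)
    {
        var namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString);

        // Only messages matching the SQL-like expression end up in this subscription.
        var filter = new SqlFilter("ProductType = 'Digital' AND QuantityForSale > 0");

        if (!await namespaceManager.SubscriptionExistsAsync("product-stock", "digital-products"))
        {
            await namespaceManager.CreateSubscriptionAsync("product-stock", "digital-products", filter);
        }
    }
}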

Queues, topics and subscriptions give a lot of possibilities and are easier to scale and adapt in the future. Compared to REST services they are slightly more complicated to implement and need a centralized system, but they offer much more in return.