An Overview of Our Architecture

Trukky is based on a simple concept: push a button, get an estimate. What started as a way to request premium logistics services now offers a range of products, coordinating millions of rides per day across hundreds of cities. Today, we are documenting and sharing the architecture that makes this happen for our customers.

No reliable piece of software has survived over a long period of time without its creators being mindful of architectural design and patterns.

However, the point of this article is not to introduce yet another shiny new architecture, but to explain the principles we followed, the decisions we made, and why. We hope these insights will help you when you rewrite an existing app or build a new one.

Remember: what works in our app may not work for you, given the nature of our problems. Focus on identifying what works and what doesn't in your own context. Start with the bare minimum and keep refactoring.

Our Architecture

There is no single authoritative definition of microservices, also known as microservice architecture, but broadly it is an architectural style in which an application is composed of small, independently deployable services, each performing a distinct operation.

Each microservice focuses on a single business domain, can be deployed fully independently, and can be implemented on a different technology stack. In a microservice architecture, each service is self-contained and implements a single business capability.

Microservice architectures allow teams to roll out new features and bug fixes for their services independently of other services, increasing developer velocity. For instance, imagine that a team owns four services (referred to together as System 1) with agreed-upon SLAs that regularly interact with multiple other services, each with its own SLA.

Microservice-based architectures are still evolving and have become instrumental facilitators of agility for both developers and organizations at large. A carefully planned multi-tenant architecture can increase developer productivity and support evolving lines of business.

All business logic is built on reliable backend technologies, since these individual units are crucial for handling messages from many users simultaneously. Consider the use cases below to understand how the backend works in conjunction with the frontend to display data and communicate with several microservices.

The user taps a button, like sign-in, and the View passes the interaction to the Presenter. The Presenter calls a sign-in method on the Interactor, which results in a service call to actually sign in. The service publishes the returned token on a stream. An Interactor listening to the stream switches to the Home component.
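The sign-in flow above can be sketched roughly as follows. This is a minimal TypeScript sketch under our own assumptions: the `Stream`, `AuthService`, and router names are hypothetical illustrations, not Trukky's actual API.

```typescript
// Minimal stream: subscribers are notified whenever a value is published.
class Stream<T> {
  private listeners: Array<(value: T) => void> = [];
  subscribe(fn: (value: T) => void): void { this.listeners.push(fn); }
  publish(value: T): void { this.listeners.forEach(fn => fn(value)); }
}

// Hypothetical service: performs the sign-in call and publishes
// the returned token on a stream.
class AuthService {
  readonly tokenStream = new Stream<string>();
  async signIn(user: string, _password: string): Promise<void> {
    const token = `token-for-${user}`; // stand-in for a real backend call
    this.tokenStream.publish(token);
  }
}

// The Interactor owns the business logic: it triggers sign-in and,
// listening on the token stream, switches to the Home component.
class SignInInteractor {
  constructor(
    private readonly service: AuthService,
    private readonly router: { goHome(): void },
  ) {
    this.service.tokenStream.subscribe(() => this.router.goHome());
  }
  signIn(user: string, password: string): Promise<void> {
    return this.service.signIn(user, password);
  }
}

// The Presenter forwards the View's button tap to the Interactor.
class SignInPresenter {
  constructor(private readonly interactor: SignInInteractor) {}
  onSignInTapped(user: string, password: string): void {
    void this.interactor.signIn(user, password);
  }
}
```

Note that the View never talks to the service directly; it only reaches the Presenter, which keeps each layer replaceable in isolation.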

A service call, like status, fetches data from the backend and places it on an immutable model stream. An Interactor listening to this stream notices the new data and passes it to the Presenter. The Presenter formats the data and sends it to the View.
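The same stream-driven shape applies to the status flow. Again a hedged TypeScript sketch: `StatusService`, the `Status` model, and the view interface are hypothetical names, not the real ones.

```typescript
// Minimal stream, as before: publish notifies all subscribers.
class Stream<T> {
  private listeners: Array<(value: T) => void> = [];
  subscribe(fn: (value: T) => void): void { this.listeners.push(fn); }
  publish(value: T): void { this.listeners.forEach(fn => fn(value)); }
}

// Immutable model: readonly fields, so downstream code cannot mutate it.
interface Status { readonly ride: string; readonly etaMinutes: number; }

// Hypothetical service: fetches status and publishes it on the model stream.
class StatusService {
  readonly modelStream = new Stream<Status>();
  fetchStatus(): void {
    // Stand-in for a real backend call to the status endpoint.
    this.modelStream.publish({ ride: "en-route", etaMinutes: 4 });
  }
}

// The Presenter formats the raw model into display text for the View.
class StatusPresenter {
  constructor(private readonly view: { render(text: string): void }) {}
  present(status: Status): void {
    this.view.render(`Ride ${status.ride}, ETA ${status.etaMinutes} min`);
  }
}

// The Interactor watches the stream and hands new data to the Presenter.
class StatusInteractor {
  constructor(service: StatusService, presenter: StatusPresenter) {
    service.modelStream.subscribe(status => presenter.present(status));
  }
}
```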

The whole app is divided into multiple small, self-contained components. Components let you split the app into independent, reusable pieces, and make you think about each piece in isolation (it’s exactly the same as the definition of a Component in React). This philosophy or idea is used to generate a progressive web app (PWA). 

The most important property of a component is that it should do just one thing. Most components are around 100 lines of code. When a component starts getting bigger, we break it into smaller components.
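To make the one-thing rule concrete, here is a hypothetical TypeScript sketch: instead of one large component that validates, formats, and renders an address, each concern gets its own small component, and the composing component's single job is coordination. None of these class names come from the actual codebase.

```typescript
// One job: decide whether an address is well-formed.
class AddressValidator {
  isValid(street: string, city: string): boolean {
    return street.trim().length > 0 && city.trim().length > 0;
  }
}

// One job: turn address fields into display text.
class AddressFormatter {
  format(street: string, city: string): string {
    return `${street}, ${city}`;
  }
}

// The composing component also does just one thing: coordinate the pieces.
class AddressCard {
  constructor(
    private readonly validator: AddressValidator,
    private readonly formatter: AddressFormatter,
  ) {}
  render(street: string, city: string): string {
    return this.validator.isValid(street, city)
      ? this.formatter.format(street, city)
      : "Invalid address";
  }
}
```

When `AddressCard` grows, the validator or formatter can be split further without touching the other pieces.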

We currently achieve this by aggressively refactoring our code after every feature is built. We’d love to have some tooling around it, but haven’t given it much thought and are open to ideas.

A component takes all its inputs via the constructor. Specifying the necessary inputs in the constructor ensures some compile-time safety by guaranteeing that all values are available before the component is created.
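In TypeScript this pattern falls out of parameter properties: the compiler rejects any attempt to construct the component without its dependencies. A small sketch under our own assumptions (the `RideTimer`, `Clock`, and `Logger` names are illustrative, not from the actual codebase):

```typescript
// Dependencies are expressed as interfaces, so tests can pass fakes.
interface Clock { now(): Date; }
interface Logger { log(msg: string): void; }

class RideTimer {
  // `private readonly` parameters both declare and assign the fields,
  // so every input is guaranteed present before the instance exists.
  constructor(
    private readonly clock: Clock,
    private readonly logger: Logger,
  ) {}

  start(): string {
    const msg = `Ride started at ${this.clock.now().toISOString()}`;
    this.logger.log(msg);
    return msg;
  }
}

// `new RideTimer()` with no arguments is a compile-time error:
// "Expected 2 arguments, but got 0."
```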

As discussed in our last article, our data and machine learning pipelines are carefully designed so that less time is spent organizing and filtering data and more time is spent analyzing millions of data points to deliver the best customer experience. Our tools of choice for this work have always been Python's NumPy and pandas libraries.

To keep our pipelines efficient and clean, we use tools like TensorFlow, Hadoop, and Spark, which remain reliable and offer great support and developer experience. We also use Tableau for our internal data visualizations.
