What sets the Platform for Kubernetes service from vshosting~ apart from similar solutions by Amazon, Google or Microsoft? There are a surprising number of differences. 

Clients often ask us how our new Platform for Kubernetes service differs from similar products by Amazon, Google or Microsoft, for example. There are in fact a great many differences so we decided to dig into the details in this article.

Individual infrastructure design

The majority of traditional cloud providers offer an infrastructure platform, but the design and individual creation of the infrastructure is left to the client – or rather to their developers. The overwhelming majority of developers will of course tell you that they’d rather deal with development than read a 196-page guide to using Amazon EKS. Furthermore, unlike most manuals, you really need to read this one since setting up Kubernetes on Amazon isn’t very intuitive at all. 

At vshosting~ we know how frustrating this is for most companies. The development team should be able to concentrate on development and not waste time on something that they’re not specialised in. Therefore, we make sure that unlike traditional cloud services, our Kubernetes solution is tailor-made for each client. With us, you can skip the complex task of choosing from predefined packages, reading overly long manuals, and having to work out which type of infrastructure best meets your needs. We will design Kubernetes infrastructure exactly according to the needs of your application, including load balancing, networking, storage and other essentials. 

In addition, we would love to help you analyse your application before switching to Kubernetes, if you don’t already use it. Based on your requirements we’ll recommend you a selection of the most suitable technologies (consultation is included in the price!), so that everything runs as it should and any subsequent scaling is as straightforward as possible. 

In terms of scaling, with our Kubernetes solution it’s simple. Again, there is no choosing from performance packages: at vshosting~ you simply scale according to your current needs, no hassle. We also offer the option of fine scaling for only the required resources. Does your application need more RAM or disk space because of customer growth? No problem. 

After we create a customised infrastructure design, we’ll carry out the individual installation and setup of Kubernetes and load balancers before putting it into live operation. Just for some perspective, with Google, Amazon or Microsoft, all of this would be on your shoulders. At vshosting~ we carefully fine-tune everything in consultation with you. Once launched, Kubernetes will run on our cloud or on the highest quality hardware in our own data centre, ServerPark. 

The option of combining physical servers and the cloud

Another benefit of Kubernetes from vshosting~ is the option of combining physical servers with the cloud – other Kubernetes providers do not allow this at all. With this option you can start testing Kubernetes on a lower performance Virtual Machine and only then transfer the project into production by adding physical servers (all at runtime) with the possibility of maintaining the existing VMs for development. 

For comparison: Google, for example, will offer you either on-prem Google Kubernetes Engine or a cloud variant, but you have to choose one or the other. What’s more, you have to manage the on-prem variant on your own. You won’t find the option of combining physical servers with the cloud at Amazon or Microsoft either. 

You save up to 50% compared to global Kubernetes providers. Take a look at how we compare.

With us you can combine physical servers with the cloud as you see fit and we’ll also take care of administration – leaving you to focus on development. We’ll oversee the management of the operating systems for all Kubernetes nodes and load balancers and we’ll provide regular upgrades of operating systems, kernels etc. (and even an upgrade of Kubernetes, if agreed).

High level of SLA and senior support 24/7

One of the most important criteria in choosing a good Kubernetes platform is its availability. You might be surprised to learn that neither Microsoft AKS nor Google GKE provide an SLA (financially-backed service level agreement) and only claim to “strive to ensure at least 99.5% availability”. 

Although Amazon talks about a 99.9% SLA, when you look at their credit return conditions, in reality it’s only a guarantee of 95% availability – Amazon only returns 100% of credit below that level of availability. If availability drops only slightly below 99.9%, they return just 10% of credit. 

At vshosting~ we contractually guarantee 99.97% availability – more than Amazon’s somewhat theoretical SLA and significantly more than the 99.5% not guaranteed by Microsoft and Google. In reality, availability with vshosting~ is more like 99.99%. In addition, our Managed Kubernetes solution works in high-availability cluster mode, which means that if one of the servers or part of the cloud malfunctions, the whole solution immediately starts on a reserve server or on another part of the cloud. 
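
To put these figures in perspective, an availability percentage translates directly into the downtime it permits. A quick back-of-the-envelope calculation (assuming a 30-day month):

```python
# Convert an availability guarantee into the maximum downtime it allows per month.
MINUTES_PER_MONTH = 30 * 24 * 60  # 43,200 minutes in a 30-day month

def max_downtime_minutes(availability_percent: float) -> float:
    """Maximum minutes of downtime per month permitted by a given availability."""
    return MINUTES_PER_MONTH * (1 - availability_percent / 100)

# The figures discussed above:
for availability in (99.5, 99.9, 99.97, 99.99):
    print(f"{availability}% availability -> up to "
          f"{max_downtime_minutes(availability):.1f} min of downtime per month")
```

In other words, 99.5% still allows more than three and a half hours of downtime a month, while 99.97% allows roughly thirteen minutes.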

We also guarantee high-speed connectivity and unlimited data flows to the whole world. In addition we ensure dedicated Internet bandwidth for every client. Our network has capacity up to 1 Tbps and each route is backed up many times over.  

Thanks to the high-availability cluster mode, high network capacity, and redundant connectivity, the Kubernetes solution from vshosting~ is particularly resistant to outages of any part of the cluster. Furthermore, our experienced team will continuously monitor your solution and quickly identify any emerging issues before they can affect the end user. We also have robust AntiDDoS protection which effectively defends the entire cluster against cyber attacks. 

Debugging and monitoring of the entire infrastructure

Unlike traditional cloud providers, at vshosting~ our team of senior administrators and technicians monitor your solution continuously 24 hours a day directly from our data centre and will react to any problems which may arise within 60 seconds – even on Saturday at 2am. These experts continuously monitor dozens of parameters relating to the entire solution (hardware, load balancers, Kubernetes) and as a result are able to prevent most issues before they become a problem. In addition, we guarantee to repair or replace a malfunctioning server within 60 minutes. 

To keep things as simple as possible, we’ll provide you with just one service contact for all your services – whether it’s about Kubernetes itself, its administration or anything to do with infrastructure. We’ll take care of routine maintenance and complex debugging. Also included in the Platform for Kubernetes price is consultation on specific Dockerfile formats (3 hours a month).


DevOps and containerisation are among the most popular IT buzzwords these days. Not without reason. A combination of these two approaches happens to be one of the main reasons why developer work keeps getting more efficient. In this article, we’ll focus on 9 main reasons why even your project could benefit from DevOps and containers. 

A couple of introductory remarks

DevOps is a composition of two words: Development and Operations. It’s pretty much a software development approach that emphasizes the cooperation of developers with IT specialists taking care of running the applications. This leads to many advantages, the most important of which we will discuss shortly.

Containerization fits into DevOps perfectly. We can see it as a supportive instrument of the DevOps approach. Similar to physical containers that standardised the transportation of goods, software containers represent a standard “transportation” unit of software. Thanks to that, IT experts can implement them across environments with hardly any adjustments (just like you can easily transfer a physical container from a ship to a train or a truck).

Top 9 DevOps and container advantages

1) Team synergies

With the DevOps approach, developers and administrators collaborate closely and all of them participate in all parts of the development process. These two worlds have traditionally been separated but their de facto merging brings forth many advantages. 

Close cooperation leads to increased effectiveness of the entire process of development and administration and thus to its acceleration. Another aspect is that the cooperation of colleagues from two different areas often results in various innovative, out of the box solutions that would otherwise remain undiscovered. 

2) Transparent communication

A common issue, and not only in IT companies, is the quality of communication (or rather the lack thereof). Everybody is swamped with work and focuses solely on their own tasks. However, this can easily result in miscommunication and incorrect assumptions – and, by extension, in conflicts and unnecessary workload. 

Establishing transparent and regular communication between developers and administrators is a big part of DevOps. Because of this, everyone feels more like a part of the same team. Both groups are also included in all phases of application development. 

3) Fewer bugs and other misfortunes

Another great DevOps principle is the frequent releasing of smaller parts of applications (instead of fewer releases of large bits). That way, the risk of faulty code affecting the entire application is pretty much eliminated. In other words: if something does go wrong, at least it doesn’t break the app as a whole. Together with a focus on thorough testing, this approach leads to a much lower number of bugs and other issues.

If you decide to combine containers with DevOps, you can benefit from their standardisation. Standardisation, among other things, ensures that the development, testing, and production environments (i.e. where the app runs) are defined identically. This dramatically reduces the occurrence of bugs that didn’t show up during development and testing and only present themselves when released into production. 

4) Easier bug hunting (and fixing)

Quick bug fixing and ensuring smooth operation of the app are also made possible by the methodical storage of all code versions that is typical for DevOps. As a result, it becomes very easy to identify any problem that might arise when releasing a new app version.

If an error does occur, you can simply switch the app back to its previous version – it takes a few minutes at the most. The developers can then take their time finding and fixing the bug while the user is none the wiser. Not to mention the bug hunting is so much easier because of the frequent releases of small bits of code. 
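
As a sketch of why this is so cheap: when every release is recorded, rolling back just means re-activating the previous revision. The tiny class and version numbers below are purely illustrative, not a real deployment tool, but tools like Kubernetes’ `kubectl rollout undo` work on the same principle:

```python
# Illustrative sketch: methodical version storage makes rollback trivial.
class ReleaseHistory:
    def __init__(self):
        self.revisions = []          # every version ever released, in order

    def release(self, version: str):
        self.revisions.append(version)

    @property
    def live(self) -> str:
        """The version currently serving users."""
        return self.revisions[-1]

    def rollback(self):
        """Drop the newest revision and fall back to the previous one."""
        if len(self.revisions) < 2:
            raise RuntimeError("no previous revision to roll back to")
        self.revisions.pop()

history = ReleaseHistory()
history.release("v1.4.0")
history.release("v1.5.0")   # turns out to be buggy
history.rollback()          # takes seconds, users never notice
print(history.live)         # v1.4.0 is live again
```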

5) Hassle-free scalability and automation

Container technology makes scaling easy too and allows the DevOps team to automate certain tasks. For example, the creation and deployment of containers can be automated via API which saves precious development time (and cost). 

When it comes to scalability, you can run the application in any number of container instances according to your immediate need. The number of containers can be increased (e.g. during the Christmas season) or decreased almost immediately. You’ll thus be able to save a significant amount of infrastructure costs in the periods when the demand for your products is not as high. At the same time, if the demand suddenly shoots up – say that you’re an online pharmacy during a pandemic – you can increase capacity in a flash. 
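
Kubernetes automates exactly this with its Horizontal Pod Autoscaler, which picks the replica count using a simple proportional rule (the replica counts and CPU figures below are hypothetical):

```python
from math import ceil

def desired_replicas(current_replicas: int,
                     current_metric: float,
                     target_metric: float) -> int:
    """Scaling rule used by the Kubernetes Horizontal Pod Autoscaler:
    desired = ceil(current_replicas * current_metric / target_metric)."""
    return ceil(current_replicas * current_metric / target_metric)

# Christmas rush: average CPU usage hits 90% against a 50% target -> scale out.
print(desired_replicas(5, current_metric=90, target_metric=50))   # 9 replicas

# Quiet January: CPU usage falls to 10% against the 50% target -> scale in.
print(desired_replicas(9, current_metric=10, target_metric=50))   # 2 replicas
```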

6) Detailed monitoring of business metrics

DevOps and containerization go hand in hand with detailed monitoring, which helps you quickly identify any issues. Monitoring, however, is also key for measuring business indicators. These allow you to evaluate whether the recently released update helps achieve your goals or not. 

For example: imagine that you’ve decided to redesign the homepage of your online store with the objective of increasing the number of orders by 10 %. Thanks to detailed monitoring, you can see whether you’re hitting the 10 % goal shortly after the homepage release. On the other hand, if you made 5 changes in the online store all at once, evaluating their individual impact would be much more difficult. Say the collective result of the 5 changes is a 7 % increase in orders. Which of the new features contributed the most to the increase? And did some of them actually cause orders to go down? Who knows.
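
The evaluation itself is trivial arithmetic; a sketch with purely hypothetical order numbers:

```python
# Hypothetical numbers: weekly orders before and after the homepage redesign.
orders_before = 1200
orders_after = 1310

uplift = (orders_after - orders_before) / orders_before * 100
goal = 10.0  # the 10 % target from the example above

print(f"uplift: {uplift:.1f} % -> goal {'met' if uplift >= goal else 'not met'}")
```

With one isolated change per release, a number like this answers the question directly; with five bundled changes, it can’t.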

7) Faster and more agile development

All of the above results in significant acceleration of the entire development process – from writing the code to its successful release. The increase in speed can reach 60 % or even more (!). 

How much efficiency DevOps will provide (and how much savings and extra revenue) depends on many factors. The most important ones are your development team size and the degree of supportive tool use – e.g. containers, process automation, and the choice of flexible infrastructure. Simply put, the bigger your team and the more you utilise automation and infrastructure flexibility, the more efficient the entire process will become. 

8) Decreased development costs 

It is hardly a surprise that faster development, better communication and cooperation preventing unnecessary work, and fewer bugs lead to lower development costs. Especially in companies with large IT departments, the savings can reach tens of percent (!).

Oftentimes the synergies and higher efficiency show that you don’t need to have, say, 20 IT specialists in the team. Perhaps just 17 or so will suffice. That’s one heck of a saving right there as well.

9) Happier customers

Speeding up development also makes your customers happy. Your business is able to more flexibly react to their requests and e.g. promptly add that new feature to your online store that your customers have been asking for. Thanks to the previously mentioned detailed monitoring, you can easily see which of the changes are welcomed by your users and which you should rather throw out of the window. This way, you’ll be able to better differentiate yourself from the competition and build up a tribe of fans that will rarely go get their stuff anywhere else. 

Key takeaways

To sum it all up, from a developer’s point of view, DevOps together with containers simplify and speed up work, improve communication with administrators, and drastically reduce the occurrence of bugs. Business-wise this translates to radical cost reductions and more satisfied customers (and thus increased revenues). The resulting equation “increased revenues + decreased costs = increased profitability” requires no further commentary. 

In order for everything to run as it should, you’ll also need a great infrastructure provider – typically some form of a Kubernetes platform. For most of us, what first comes to mind are the traditional clouds of American companies. Unfortunately, according to our clients’ experience, the user (un)friendliness of these providers won’t make things easier for you. Another option is a provider that will get the Kubernetes platform ready for you, give you much needed advice as well as nonstop phone support. And for a lower price. Not to toot our own horn but these are exactly the criteria that our Kubernetes platform fits perfectly. 

Example of infrastructure utilising container technology – vshosting~


At vshosting~, we make it our mission to not only provide our clients with top-notch hosting services but also to advise them well. In the 14 years that we’ve been on the market, we’ve seen quite a lot already. Therefore, we have a pretty good idea about what works and what spells trouble. One of the key (and very costly) issues we see time and again is a shared infrastructure for both development and production. This tends to be the case even with large online projects that risk losing enormous amounts of money should something go awry.

Considering how big a risk a shared dev and production environment poses, something going awry is only a matter of time. Why is this so dangerous? And how do you set up your infrastructure to eliminate the risks? We put together the most important points. 

Development vs. production environment 

The development (and testing) environment should serve solely for developing and testing new software and features. This encompasses not only changes to your main application but also e.g. updates to the software stack on the server. In the dev environment, developers should be able to experiment without worrying about endangering production.

The production environment, on the other hand, is where the app runs for everyone to see. For instance, an online store, where customers browse and search for items, add them to carts and pay for orders. Production is simply all that is visible for your average user plus all the things in the background that are key for app operation (e.g. databases, warehousing systems, etc.).

But most importantly: the production environment is where the money is made. Therefore, we should keep our distance from it and play it soothing classical music, as any problem in production rapidly translates into lost revenue.

Risks posed by a shared infrastructure

If you don’t separate development from production, it can easily happen that your developers will release insufficiently tested software, which will in turn damage or break the entire application. In other words: your production will go down. Should you be sufficiently unlucky, it will take your developers several hours or even days to fix the app. If your app is a large online store, this translates into tens of thousands of euros in lost revenue. Not to mention the extra development expenditures.

Such a situation becomes especially painful if it occurs during a time of high traffic on your website. In the case of online stores, this is typically the period before Christmas – consider how much just an hour-long outage would cost you. It’s not just Christmas, though – take any period when you’re investing in e.g. a TV commercial. That is a very expensive endeavor and cannot simply be switched off because your online store is down.
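
A rough sketch of that outage math, with purely hypothetical revenue figures:

```python
# Hypothetical online store: what one hour of downtime costs at peak season.
normal_hourly_revenue = 2_000   # EUR per hour on an average day (assumed figure)
christmas_multiplier = 4        # peak-season traffic vs. the average (assumed)

outage_hours = 1
lost_revenue = normal_hourly_revenue * christmas_multiplier * outage_hours
print(f"Estimated loss: EUR {lost_revenue:,}")  # EUR 8,000
```

And that is before counting the wasted ad spend and the developer hours spent firefighting.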

Unfortunately, we’ve witnessed way too many nightmarish situations like this. This is why we recommend all our clients develop software in a separate environment and only after testing it in a testing environment release it into production. The same can be said for expanding the software equipment of their production servers. Only by thoroughly testing outside of production can you avoid discovering a critical bug in production on a Friday night just before Christmas.

Inside a separated development environment, you can deploy new app versions (e.g. an updated online store) risk-free. There you can also test everything well before deployment to production. It will also allow you to update server components to their new versions (DB, PHP, etc.) and test their compatibility and function. Only when you are certain everything works the way it should can you move it to production. All in all, you’ll save yourself lots of trouble and cut costs to boot.

How to separate development from production

When choosing a hosting solution, take the issue of separating development and production into consideration. Ideally, you should run development and testing on a separate server and production should “live” on another one (or on a cluster of servers). At vshosting~, we’re happy to offer you a free consultation regarding the best solution for your project – just drop us a line.

We’ll help you design appropriate configuration for your development and testing environment so that it fully reflects that of production but at the same time doesn’t cost you a fortune in unused performance you don’t need. As the development environment receives little traffic, it doesn’t have to be as robust. For example, if your production consists of a cluster of three powerful servers, one smaller virtual one will likely be just enough for your development purposes. We recommend using the cloud for development because it’s typically the most cost-efficient option.

If you opt for one of our managed hosting services, we’re even happy to create the development environment for you. Simply put, we’ll “clone” your production and adjust it in such a way, that the environment remains identical but its performance is not unnecessarily high. That way, you’ll get all the benefits of separating development from production and save time and money while at it. Then, you’ll be able to make all your changes in development and, only after successful testing, transfer them to production.


“That hosting partner of ours isn’t worth much – our website even goes down a few times a year – but most of the time, everything works somehow. Most importantly, we don’t want to migrate anywhere!”

The biggest obstacle to switching from the current hosting provider to a better one is almost always migration. The dreaded data transfer from one hosting solution to another is without exception accompanied by an outage and quite a few risks. Moreover, it often turns out when contemplating migration that changes to the client’s application are necessary, without which the migration cannot move forward (the application wouldn’t work properly on the new solution). All in all, web migration is no picnic. 

But what about the risks that go along with not migrating? Many don’t even consider them since “everything works” but these invisible dangers are often much larger and their potential consequences much more severe.

Let’s take a look at the main anti-migration arguments, how we address those at vshosting~, and what dangers go along with preserving the status quo at all costs. 

Application or technology changes 

The number one factor deterring companies from web migration is usually the necessity of application or technology changes. This is a common requirement when the current application runs on outdated technology or is incompatible with the new hosting solution. 

The necessity of such a change unequivocally presents an extra workload for the client’s development team, which needs to update the app or learn to work with a different technology. This may be further complicated by the fact that some clients don’t have a development team at their disposal, which is quite common among smaller projects. 

On the other hand, the outdatedness or inadequacy of the technologies used is in no way merely an obstacle to migrating to another hosting provider. It is also, and perhaps more importantly, a hindrance to further growth of the internet project, a risk to its security, and more. Therefore, it is advisable to implement the recommended application and technology changes regardless of migration. Once they have been put in place, switching over to virtually any provider will be a piece of cake.  

Outdated technologies

An application using a no longer supported or utterly obscure technology often proves to be an obstacle to migration. For example, an app written in PHP 5.2 is essentially un-migratable because it lacks compatibility with virtually any current technology. It is, therefore, necessary to update it to a more recent, fully supported version. 

Application changes are no picnic, that’s for sure, and they cost a lot of developer time. On the other hand, running an application on outdated technology is exceptionally dangerous – migration or no migration. For instance, PHP 5.2 is no longer supported and receives neither security updates nor bug fixes. Aside from incompatibility with modern hosting solutions, such an application is vulnerable to various security attacks and hacks. Considering the current GDPR legislation, this presents a risk of fines that can be catastrophic for the business (up to 4 % of revenue). Besides, outdated applications don’t tend to be prepared to deal with a significant rise in traffic, so if you wish to further grow your business, updating your app is unavoidable either way. 

Simply put: if an application cannot be migrated, it pays off to thoroughly consider why that is and to fix the problem, since with extremely high probability something is terribly wrong. Regardless of migration and hosting, serious risks endanger your business. 

Compatibility with the hosting solution

Another common scenario is the necessity to adopt a new technology that is compatible with the newly selected hosting solution. This typically happens when a client decides to migrate from a simple, non-redundant infrastructure to a cluster, or when they aim to move to a scalable solution but their application is not prepared for scaling.

An example would be migration from a single database server to a database cluster, where we recommend that the client switch from a single node to Galera to ensure ideal functionality. Galera is the perfect solution for a cluster and will prove to be an advantage for the client in the long run. However, their developers will have to learn to work with a new technology, which is rarely a welcome situation. 

Service windows and other inconveniences

A further source of worry when it comes to migration is the necessity of a service window during which the client’s app simply doesn’t run. This step cannot be circumvented and, in the case of large projects, can encompass an entire night. Even the toughest e-shop operators feel distressed by the idea.

At vshosting~, we do everything in our power to make the outage as short as possible. Unfortunately, the entire process has its technological limitations that are set in stone. For this reason, it is key to schedule the service window so that the impact on the client’s business is as small as possible. Furthermore, we thoroughly test the new hosting solution before the migration itself to prevent the emergence of complications that would prolong the out-of-operation period. 

What if something goes wrong?


Migration is a complicated process and there is a lot of room for mistakes. It is therefore important to only switch over to providers with extensive migration experience. Such providers can minimise the potential risks via thorough analysis and diligent testing of the new hosting infrastructure. And should something go sideways nonetheless, they’re capable of rapidly resolving the situation.

A good example can again be the migration from a single node database to e.g. a 3-node one. Should the balancing among the nodes not work perfectly after migration, experienced administrators are able to temporarily direct the database solution to a single node. As a result, the application can function without any issues and the administrators have time to get to the bottom of the balancing issue in the background. When all is resolved, they switch the database over to the 3-node solution.

In emergency cases, there is always the option of doing a rollback, that is, returning everything to its original, pre-migration state. Based on our experience, however, it is more effective to try to solve the given problem as quickly as possible (e.g. by an emergency change of server settings, as in the database example above) and finish migrating. The problem, which tends to be of the application sort, can be addressed after that. Unsurprisingly, even here we recommend choosing an experienced hosting provider who is capable of dealing with such unexpected situations in an agile manner.

At vshosting~, you don’t need to fear migration 

Migration to vshosting~ is no reason for concern – we have an experienced team of professionals directly in our datacentre who migrate internet projects on a daily basis. When it comes to very large migrations, we conduct dozens of those each year. Thanks to our extensive know-how, we are able to prevent the vast majority of potential risks and make sure everything runs smoothly.

Before the migration itself, we thoroughly analyse and test the application – we are, among other things, able to evaluate the performance of the entire solution using specialised tests. Based on the initial analysis, we provide recommendations regarding application changes to the client and point out what to pay attention to, what to change, and what to steer clear of. 

Moreover, we design hosting solutions individually and customise them to the needs of each application. The new solution is then thoroughly tested, including its compatibility with the client’s app and its synchronisation with all implemented systems (e.g. the warehouse management system, CMS, and so on). Thanks to that, the new solution is tweaked to perfection before we even start the migration.

In specific cases – for instance, when the client has no IT team of their own – we can even conduct the entire migration process for them (though only when no application changes are necessary). The client then needs to put in only minimal effort: testing the functionality of the new solution, agreeing on the migration start, and so on.

To minimise the impact of migration on the client’s business, we carefully plan its date and time together with them. Because our experienced administrators and technicians are present in the datacentre around the clock, we have no problem whatsoever with conducting the migration in the middle of the night, any day of the week.

Should complications arise despite these preventative measures, we very quickly identify their causes and solve the problem, because our experts monitor dozens of parameters of each client’s web 24/7.

Damir Špoljarič

Over the last few months, the most common request from vshosting~ clients has by far been horizontal scaling. Let’s take a look at what this requirement truly represents and where it comes from.

It’s important to mention right from the start that nowadays, horizontal scaling is often required mostly for business reasons. The technical side typically lags behind. In addition, in more than one instance, horizontal scaling is a requirement that shouldn’t be a priority for the application operator.

Horizontal vs. Vertical Scaling

Scaling can be either vertical or horizontal. Vertical scaling has been with us for many years and, simply put, means increasing the performance of a given server (be it physical or virtual) or container. It is the easiest way to provide more computing resources for a given app.

A disadvantage of this method is primarily its inherent limit (a server’s performance cannot be increased infinitely). The answer to this drawback, at least in theory, is horizontal scaling, which allows for adding independent computing capacities in parallel and distributing the load among them – infinitely, in an ideal case. From a business-scaling point of view, this sounds amazing. However…

When it comes to extensive applications with the vision of rapid growth, horizontal scaling truly is the only option to ensure maximum scalability. Nonetheless, it is still more advantageous for many services to scale vertically rather than horizontally (e.g. load balancers, etc.). Unfortunately, many platforms (such as AWS) to this day force the user to choose from pre-defined packages of processor and storage capacity. At vshosting~, we consider this to be an anachronism.

Cloud platforms and hosting should be able to dynamically adjust to each client’s actual computing-resource needs. In other words, every time a client needs to increase their application’s performance, they should have the option of simply adding x amount of storage and y number of processors without having to go through an elaborate investigation into which of the offered “packages” is most appropriate for them at that moment.

Horizontal Scaling Stumbling Blocks: Practical Experience

As we have already mentioned, in some cases, not even the option of dynamic vertical scaling remains sufficient and horizontal scaling thus becomes necessary. However, it pays off to thoroughly consider whether this really is the case. No matter how attractive horizontal scaling may seem in theory, it clashes with multiple conceptual disadvantages in reality.

First of all, horizontal scaling puts pressure on developers to create applications prepared for parallel operation and task processing. We know from experience that writing such applications is much more demanding on developer knowledge, time, and testing (and, as a result, on finances).

Furthermore, the limitations of external services need to be taken into account. Typically, the biggest complication arises when an application meant to operate in parallel uses a shared filesystem. Scalability is then reduced for the entire application simply because of the way this decades-old technology works (object storage would be a suitable replacement). Even cloud platforms or infrastructure built on a distributed solution (e.g. GlusterFS) sooner or later run into the limitations of the technology used, which tend to be very difficult to overcome during operation.

Another problematic aspect is usually relational databases, which by their conceptual design don’t account for being clustered. Technology in this area has advanced significantly; for instance, we have had great experience with MariaDB Galera Cluster – we have been successfully operating the Shoptet platform, serving tens of thousands of e-shops, on it. From this point of view, it is a more advanced solution than what Amazon offers with AWS Aurora.

AWS Aurora only allows for horizontal scaling using so-called “replicas” – read-only copies of a single database instance. It is, therefore, a very limited form of horizontal scaling that has lost its meaning in this day and age: offloading work from the relational database in this way helps only with read operations, and those are exactly the ones commonly offloaded to tools such as ElasticSearch anyway. Such tools are very easily scaled horizontally but aren’t, in principle, suitable for storing business-critical data.

Read-only replicas are once again a decades-old mechanism that we don’t consider to be a suitable technology for horizontal scaling. That is the case primarily because this method cannot ensure 1:1 scaling for all operation types. AWS offers no alternatives for horizontal scaling of SQL databases.

Hassle-free Horizontal Scaling Thanks to vshosting~

We see the biggest issue with modern applications built for horizontal scaling in their complexity. This makes it difficult to predict which part of the application or its component will reach its limit.

At vshosting~, we assist our clients’ developers in order to make their applications more easily and quickly scalable. We advise them which elements in their applications are or will become limiting for further growth and prepare individual, robust, and fully scalable infrastructure without compromises.

There is no point in reinventing the wheel: thanks to our unique know-how at vshosting~, we will let you know what to focus on when designing an application. We will prepare your project for global expansion, also thanks to our own CDN, which we’ll soon be launching on a third continent (Asia).

Damir Špoljarič

CEO & Co-Founder


E-shop development costs can climb up to millions, not to mention the time such development requires. Therefore, it is especially important that such investment isn’t made in vain.

At vshosting~, we host thousands of e-shops and have seen a lot in our 13 years on the market. That’s why we put together a list of things we recommend our clients watch out for when developing an e-shop – provided their goal is to maximise the return on their investment, that is.

How to Approach the Overall App Design

Stick to Tried and Tested Technologies

There’s a good chance your developers will try to persuade you that proven technologies are “old and boring” and that you should use some new hot tech instead. Here it’s important to back up for a moment and consider this: brand new technologies are indeed cool but also carry a significant risk of becoming obsolete within a year or two.

Should that become the unfortunate reality, incompatibility with many systems necessary for your e-shop operation would ensue. As would, of course, the issue with trying to find developers who are able to work with such niche tech. All this combined would lead to a compromised functionality of your e-shop and, as a result, to loss of revenue.

If you don’t feel comfortable taking that risk, we recommend you stick to the most popular technologies used to develop e-shops such as PHP, MySQL, ElasticSearch, MongoDB, or Redis.

Think about Horizontal Scalability

Undoubtedly, you’re developing your new e-shop with a vision of future growth. In order for your technical solution to keep pace with increasing demand, it needs to be easily scalable. As a result, we recommend to our clients that they minimise the use of relational databases and avoid the ones that are difficult to scale (e.g. PostgreSQL).

Another appropriate measure that makes horizontal scaling easier is the elimination of a shared file system and the use of object storage in its place. Just like a relational database, a shared file system can quickly become an unnecessary hindrance to growth.  

Don’t be Afraid to Develop Using Technologies in Testing

The development of a complex e-shop can easily take a year or two which is, given the lifecycle of many technologies, a rather long time. A nightmare scenario is that where after investing millions and spending 1-2 years developing, you launch your new e-shop only to find out it’s already pretty much technologically outdated.

In order to extend your e-shop’s lifecycle, it is ideal for your developers to use new versions of proven technologies that are still in their testing phase. Thanks to that, your e-shop will age more slowly and the return on your investment will thus be much better.

Move on to Microservices

Monolithic (i.e. “built in one piece”) applications are on the decline in today’s development world, and for good practical reasons. Whenever you need to change or fix a part of such an application, it often leads to errors all over it. As a result, any changes or implementation of new features are very problematic.

For this reason, so-called microservices, thanks to which it is possible to develop sustainable applications with the option of only replacing their parts, are gaining popularity. If using microservices, your developers won’t spend all of their time fixing bugs and will be able to devote their efforts to developing new features instead.

Hidden Threats of E-shop Development

Technologies to Stay Clear of

Here are the top 3 technologies that can become a stumbling block when developing and operating an e-shop: Varnish, PostgreSQL, and Magento.


Varnish is an application cache designed to speed up the application. However, an application that needs Varnish in order to run fast enough is quite suspicious. To give you the big picture, out of our thousands of clients only 2% use Varnish. Others don’t need it because their applications are fast enough without it.


As has already been mentioned in relation to horizontal scalability, PostgreSQL can very quickly become a hindrance for large e-shops. It is very difficult to scale and, to this day, doesn’t handle synchronous replication very well. Therefore, we recommend more scalable technologies for large e-shops (or small ones with the ambition to grow).


At vshosting~, we nicknamed Magento “solution for those with unlimited budgets and no need for scaling”. As you can imagine, we can’t in our experience recommend Magento to e-shops that aim to grow. While it does make development easier, it lacks scalability.

Watch out for Hidden Vendor Lock-in or Licences

Another threat to e-shop development can be too much dependence on an external service. Providers often make it difficult to migrate to another service in the event of issues or say a significant price increase. Interestingly, it tends to be quite easy to provide most of such services in-house. For example, a full-text search doesn’t need to be outsourced to an external service at all if you use ElasticSearch in-house.

A proprietary database is another such example: if it is custom-written to fit an external service, it cannot be transferred to another one. It is also important to watch out for licenses such as Java SDK.

How Not to Blow Money Invested in Advertising

Last but not least, here are a few pieces of technically-operational advice that’ll help you ensure that the money you invest in advertising will not be spent in vain due to the limitations of your e-shop.

Separate Development, Testing, and Production Environments

Tip number 1 is the separation of development, testing, and production environments. Production environment (i.e. the one making money) is sacred – one shouldn’t touch it unless absolutely necessary because each issue promptly turns into lost profit.

Developers should create new features in the development environment, then test them in the testing environment and only after that deploy them into production. Only by thoroughly testing things outside of production can you avoid the situation where a bug is discovered in production on a Friday night during Christmas shopping season.

Know the Limits of Your E-shop and Test Regularly

Presumably, we can all agree that only just discovering the limits of your e-shop shortly after launching a costly TV ad campaign is not the most opportune moment. Therefore, we recommend our clients to invest in so-called performance tests. Those serve to find limits and weaknesses by simulating high traffic to the website.

Moreover, good performance tests don’t only overload the application with traffic but emulate the behavior of an actual website visitor – e.g. by viewing product details, adding products to the shopping cart, full-text searching, etc. Thanks to that, you can find out more precisely where the weak spots of your web are and will be able to act on it accordingly.
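A good performance test, as described above, walks through a realistic visitor journey rather than hammering one URL. The sketch below is illustrative: the paths and the session runner are hypothetical, and a real test would execute these steps against a staging environment with a tool such as Locust or k6.

```python
# Sketch of a performance test that emulates a real visitor's journey
# (browse, search, view product, add to cart) instead of repeatedly
# requesting a single page. Paths and shapes are illustrative.

def visitor_journey(product_id):
    """Return the ordered (method, path) steps of one simulated visitor."""
    return [
        ("GET", "/"),                         # landing page
        ("GET", "/search?q=shoes"),           # full-text search
        ("GET", f"/product/{product_id}"),    # product detail
        ("POST", f"/cart/add/{product_id}"),  # add to cart
        ("GET", "/cart"),                     # view cart
    ]

def simulate(visitors):
    """Aggregate per-endpoint request counts for a batch of simulated visitors."""
    counts = {}
    for uid in range(visitors):
        for method, path in visitor_journey(product_id=uid % 100):
            endpoint = path.split("?")[0].split("/")[1] or "home"
            key = (method, endpoint)
            counts[key] = counts.get(key, 0) + 1
    return counts
```

In a real test, each step would be an HTTP request and the interesting output would be latency percentiles per endpoint, which is exactly where the weak spots show up.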

Security (Beyond a Secure Application Design)

Not only due to GDPR is the old saying “security above all” more valid now than ever. Nowadays, if passwords are leaked from your e-shop, you risk having to pay a handsome fine and facing other legal consequences on top of losing the trust of your customers.

To minimise the risk of anything like that happening, we can recommend 3 most important measures to take:

  • Regular penetration tests (they test application weaknesses from a security standpoint)
  • AntiDDOS protection (just this year, we have noted over 1500 attacks on our clients’ websites, a large portion of which was conducted automatically)
  • Backups + the option of a quick restore (how long does it take to restore the system from backups? A few minutes or a few days?)


Database servers are key parts of any web project’s infrastructure, and as the project grows, so does the database’s significance. Sooner or later, however, we reach a point where database performance requirements can no longer be met by merely adding memory or upgrading processors. Increasing resources within one server has its limits, and eventually it becomes necessary to distribute the load among multiple servers.

Before implementing such a step, it is more than appropriate to clarify what it is that we aim to accomplish. Some load-distribution models will only allow us to manage an increase in the number of requests, while others can also solve the issue of potential unavailability of one of the machines.

Scaling, High Availability, and Other Important Terms

First of all, let’s take a look at the basic terms we’ll be needing today. There are not many of them but without their knowledge, we won’t be able to move on. Experienced scalers can feel free to skip this section.


Scaling

The ability of a system (in our case a database environment) to react to increased or decreased resource needs. In practice, we distinguish between two basic types of scaling: vertical and horizontal.

In the case of vertical scaling, we increase the resources the given database has at its disposal. Typically, this means adding accessible server memory and increasing the number of cores. Practically any application can be scaled vertically but sooner or later we run into hardware limits of the platform.

An alternative to this is horizontal scaling where we increase the performance of the application by adding more servers. This way we can increase the application’s performance almost limitlessly, however, the application must account for this kind of distribution.

High Availability

The ability of a system to react to a part of that system being down. The prerequisite for high availability is the ability to run the application in question in multiple instances.

The other instances can be fully replaceable and process requests in parallel (in this case we’re talking about active-active setup) or they can be in standby mode, where they only mirror data but aren’t able to process requests (so-called active-passive setup). Should a problem occur, one of the instances in passive mode is selected and turned into an active one.

Master node

The driving component of the system. In the case of databases, the master node is an instance operating in both read and write mode. If we have multiple full-featured master nodes, we speak of a so-called multi-master setup.

Slave node

A backup copy of the data. In a standard situation, it only mirrors data and operates in read-only mode. In the event of a master node failure, one of the slave nodes is selected and promoted to master. Once the original master node is operational again, the new master either returns to being a slave node, or it remains the master and the original master becomes a slave node.

Asynchronous replication

After data is inserted into the master node, the insertion is confirmed to the client and written into the transaction log. At a later time, the change is replicated to the slave nodes. Until the replication is completed, the new or changed data is only available on the master node, and should it fail, that data would become inaccessible. Asynchronous replication is typical for MySQL.

Synchronous replication

The data insertion is confirmed to the client only after the data is saved to all nodes in the cluster. The risk of new data loss is eliminated in this case (the data is either changed everywhere or nowhere) but the solution is significantly more prone to issues in the network connecting the nodes.

Should the network be down, the performance of the cluster becomes temporarily degraded. Alternatively, the reception of new data-change requests may even be temporarily suspended. This type of replication is used in multi-master setups in combination with the Galera plugin.

Master-Slave Replication

Master-slave is the basic type of database cluster replication. In this setup, there is a single master node that receives all request types. The slave node (or multiple slave nodes) mirror changes using asynchronous replication. Slave nodes don’t necessarily have the newest copy of the data at their disposal.

Should the master node fail, the slave node with the newest copy of the data is selected to become the new master. Each slave node evaluates how far it lags behind the master node. This value can be found in the Seconds_Behind_Master variable, and it is essential to monitor it. An increasing value indicates that the slave is falling behind in replicating changes from the master node.
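Monitoring Seconds_Behind_Master can be sketched as a small classifier over one row of `SHOW SLAVE STATUS` output. The status dict, function name, and thresholds below are illustrative; in production the values would come from a MySQL connector and the thresholds from your own SLAs.

```python
# Minimal sketch of replication-lag monitoring based on the
# Seconds_Behind_Master value from `SHOW SLAVE STATUS`.
# Thresholds (in seconds) are illustrative, not recommendations.

def lag_state(slave_status, warn=30, crit=300):
    """Classify a slave's replication delay from its status row."""
    lag = slave_status.get("Seconds_Behind_Master")
    if lag is None:
        # NULL means the replication threads are not running at all
        return "broken"
    lag = int(lag)
    if lag >= crit:
        return "critical"
    if lag >= warn:
        return "warning"
    return "ok"
```

A monitoring agent would call this periodically for each slave and alert on anything other than "ok" – note that "broken" (a NULL value) is worse than any numeric lag.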

The slave node operates in a read-only mode and can thus deal with select type requests. In this case, we’re talking about the so-called read/write split, which we’ll discuss in a moment.

Master-Master Replication

A master-master setup is one with two master nodes. Both are able to handle all types of requests, but between the two of them, replication is asynchronous. This presents a disadvantage: data inserted into one node may not be immediately accessible from the other. In practice, we set this up so that each node is also a slave of the other.

This setup is advantageous when we place a load balancer in front of the MySQL servers, directing half of the connections to each machine. Each node is a separate master and does not know that the other server is a master too. It is, therefore, necessary to set the auto-increment step to 2 (with a different offset on each node). If we don’t, primary keys generated by auto-increment will collide.
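The auto-increment trick above corresponds to MySQL’s `auto_increment_increment` and `auto_increment_offset` settings. The little generator below mimics how each node would hand out primary keys, showing why the two key streams can never collide:

```python
# Sketch of auto-increment behaviour in a two-node master-master setup:
# increment = 2 on both nodes, offset 1 on node A and 2 on node B
# (mirroring MySQL's auto_increment_increment / auto_increment_offset).

def key_stream(offset, increment, count):
    """The IDs a node would generate with these auto-increment settings."""
    return [offset + i * increment for i in range(count)]

node_a = key_stream(offset=1, increment=2, count=5)  # odd keys:  1, 3, 5, 7, 9
node_b = key_stream(offset=2, increment=2, count=5)  # even keys: 2, 4, 6, 8, 10
```

One node only ever produces odd keys, the other only even ones, so inserts arriving on both masters at once cannot generate the same primary key.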

Each of the master nodes can have additional slave nodes that can be used for data reading (read/write split) and as a backup.

Multi-Master Replication

If there are more than two master nodes in a cluster, we’re talking about a multi-master setup. This setup cannot be built in basic MySQL but an implementation of the wsrep protocol, e.g. Galera, has to be used.

Wsrep implements synchronous replication and as such is very sensitive to network issues. In addition, it requires time synchronisation of all nodes. On the other hand, it allows all request types to be sent to all nodes in the cluster, which makes it very suitable for load balancing. A disadvantage is that all replicated tables have to use the InnoDB engine; tables using a different engine will not be replicated.
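Because non-InnoDB tables are silently left out of replication, a pre-flight check before moving to Galera is worthwhile. The sketch below assumes rows shaped like the output of a query against `information_schema.TABLES`; the function name and sample data are ours.

```python
# Sketch of a Galera pre-flight check: list tables whose engine is not
# InnoDB, since those would not be replicated. Rows mimic a query against
# information_schema.TABLES (ENGINE is NULL for views).

def non_innodb_tables(rows):
    """Return (schema, table) pairs that Galera would silently skip."""
    return [
        (r["TABLE_SCHEMA"], r["TABLE_NAME"])
        for r in rows
        if r["ENGINE"] is not None and r["ENGINE"].lower() != "innodb"
    ]

sample = [
    {"TABLE_SCHEMA": "shop", "TABLE_NAME": "orders",       "ENGINE": "InnoDB"},
    {"TABLE_SCHEMA": "shop", "TABLE_NAME": "legacy_stats", "ENGINE": "MyISAM"},
    {"TABLE_SCHEMA": "shop", "TABLE_NAME": "v_sales",      "ENGINE": None},
]
```

Any table this check flags should be converted (typically `ALTER TABLE … ENGINE=InnoDB`) before the cluster goes live.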


Sharding

Sharding is dividing data into logical segments. In MySQL, the term partitioning is used for this type of data storage; in essence, it means that the data of a single table is divided among several servers, tables, or data files within a single server.

Data sharding is appropriate if the data forms separate groups. Typical examples are historical records (sharding by time) or user data (sharding by user ID). Thanks to such data division, we can effectively combine different storage types, keeping the most recent data on fast SSDs and older data that we don’t expect to be used very often on cheaper rotational disks.

Sharding is very often used in NoSQL databases, e.g. ElasticSearch.

Read-Write Splitting

In the Master-Slave replication mode, we have the performance of slave nodes at our disposal but cannot use it for write operations. However, if we have an application where most of the requests are just selects (typical in web projects), we can use their performance for read operations. In this case, the application directs write operations (insert, delete, update) to the master node but sends selects to the group of slave nodes.

Thanks to the fact that a single master node can have many slave nodes, this read/write splitting will help us increase the response rate of the entire application by distributing the read operations.

This behavior doesn’t require any configuration on the database server side, but it needs to be dealt with on the application side. The easiest option is to maintain two connections in the application: one for read and one for write operations. The application then decides which connection to use for each request based on its type.
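The two-connection approach can be sketched as a tiny router that classifies each statement. This classifier is a deliberate simplification – real SQL (comments, CTEs, `SELECT … FOR UPDATE` inside transactions) needs more care, which is exactly why proxies like Maxscale exist:

```python
# Sketch of application-side read/write splitting: pick the slave
# connection for plain reads, the master connection for everything else.

READ_STATEMENTS = ("select", "show", "describe", "explain")

def route(sql):
    """Return which connection a statement should use: 'slave' or 'master'."""
    first_word = sql.lstrip().split(None, 1)[0].lower()
    if first_word in READ_STATEMENTS and "for update" not in sql.lower():
        return "slave"
    return "master"  # insert/update/delete, DDL, transaction control, ...
```

In the application, `route()` would simply select between the two open connections before executing each statement.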

The second option, useful if we are unable to implement read/write splitting at the application level, is to use an application-aware proxy that understands requests and automatically sends them to the appropriate nodes. The application then maintains only one connection – to the proxy – and doesn’t concern itself with request types. A typical example of this solution is Maxscale. It is a commercial product, but a free version limited to three database nodes is available.

We Scale for You

Don’t have the capacity to maintain and scale your databases? We’ll do it for you.

We will take care of even very complex maintenance and optimisation of a wide range of databases. We’ll ensure their maximum stability, availability, and scalability. Our admin team manages tens of thousands of database servers and clusters, so you’ll be in the hands of true experts.


Most manuals for application dockerization that you’ll find online are written for a specific language and environment. We will, however, look into general guidelines suitable for virtually any type of application and show you how to run it in Docker containers.

Base Image Selection

For issue-free operation and simple future edits and upgrades, choosing the most suitable (and author-supported) base image is critical. Considering that absolutely anyone can upload an image to Docker Hub, it is advisable to take a close look at your selected image and make sure that it contains no malicious software or, for example, outdated library versions with security issues.

Images labeled as “Docker certified” are a good choice for the start as that status is a certain guarantee that the image is legitimate and regularly updated. Good examples of such images are PHP or Node.js.

Furthermore, we can recommend the Bitnami company collection that contains a number of ready-made image applications and development environments. 

Additional Software Installation

Depending on the image you have chosen for your project, you can install extra software so that all prerequisites necessary for smooth application operation are fulfilled.  

The best solution is to use the package distribution system of the distribution the image is based on (usually Ubuntu/Debian, Alpine Linux, or CentOS). It is also very important to keep the list of installed software as narrow as possible – e.g. don’t install text editors, compilers, and other development tools into the containers.

Own Files in the Docker Image

You’ll also want to add your own files into the final image – be it configuration, source codes, or binary files from the app. In Dockerfile, the commands ADD or COPY are used, COPY being more transparent but not allowing for some more advanced functions such as archive unpacking into the image.

Authorisation Definition

Despite it being the easiest way, avoid running the app in a container as the root user. This poses many security risks and increases the chance of a container escape if the application becomes compromised or if a security flaw in third-party software you’re using is exploited.

Service Port Definition

If your application doesn’t run as the root user and has no enhanced capabilities (such as CAP_NET_BIND_SERVICE), it cannot bind to the so-called privileged ports (1-1023). With Docker, however, that is not necessary: use any higher port (e.g. 8080 and 8443 in place of 80/443 for a web server) and map the ports via Docker parameters.

Running the Application in the Container

However easy it is to directly run your application’s binary (or web server, Node.js, etc.), the much more sophisticated way is to create your own so-called entrypoint – a script that performs the initial application configuration, can react to environment variables, and so on. A good example of this solution can be found in the official PostgreSQL image.

Configuration Methods

Most applications require correct configuration to run properly. It is certainly possible to use a configuration file directly (e.g. in a directory mounted from outside the container), but in most cases it is better to use a prepared entrypoint script, which generates the proper configuration for the application from a template and the container’s environment variables.
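The template-plus-environment step can be sketched as follows. The template contents, variable names, and defaults are illustrative; in a container the mapping passed in would normally be `os.environ`, and official images usually do this in a shell entrypoint instead.

```python
# Sketch of an entrypoint-style configuration step: render a config file
# from a template plus environment variables, with sensible defaults.

from string import Template

# Hypothetical config template; placeholders map to environment variables.
CONFIG_TEMPLATE = Template(
    "listen_port = $APP_PORT\n"
    "db_host = $DB_HOST\n"
)

def render_config(env):
    """Fill the template from a mapping (normally os.environ), with defaults."""
    values = {"APP_PORT": "8080", "DB_HOST": "localhost"}
    # only pick up the variables we know; ignore the rest of the environment
    values.update({k: v for k, v in env.items() if k in values})
    return CONFIG_TEMPLATE.substitute(values)
```

An entrypoint would write `render_config(os.environ)` to the application’s config path and then exec the application process.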

Application Data

Avoid saving data to the container filesystem – in the standard configuration, all such data is lost when the container is recreated. Use bind mounts (a host directory mounted into the container) or named volumes.

In addition, it is necessary to figure out how to save or ship logs. The best option is certainly centralised logging for all of your applications (e.g. the ELK stack), but even a basic remote syslog does a good enough job.

What next?

There is always room for improvement. Beyond the scope of this article are different configuration management options, the ELK stack for logging, application and system metrics collection via Prometheus, and achieving load balancing and high availability for your application using Kubernetes – which we at vshosting~ will gladly build for you, tailored to your application’s needs 🙂


Magento is a powerful e-commerce platform offering everything you need to sell online. Besides the e-shop itself, it can manage stock, marketing, invoicing, and accounting. Currently, Magento comes in two versions: Magento Open Source (formerly Magento Community Edition) and Magento Commerce (for larger organisations with in-house developers). 

Magento is one of the most-used e-commerce platforms and is consistently highly rated by users. However, in order to run Magento correctly it’s necessary to have a developer who is experienced with such applications as well as an experienced hosting provider who knows how to optimise servers for Magento. 

The need for powerful hosting

As mentioned, it’s crucial not to underplay the importance of hosting when it comes to Magento. The platform is performance-intensive and the hosting parameters must reflect that. Setting aside problems and errors within the application itself, the majority of issues with Magento are caused by a lack of power from the web server or, as the case may be, the environment where Magento is run. Magento is slower and generates a much higher volume of requests than comparable systems, but there are several ways to speed the platform up.

Speeding Magento up

Magento usually shouldn’t run directly on the web server alone; instead, it’s necessary to put a caching proxy in front of it (currently only Varnish is supported).

If you wish to run an SSL version (as is standard these days), it’s critical to put Nginx or another SSL terminator in front of the Varnish proxy. As for the web server itself, it’s possible to use Apache with a PHP module or Nginx with PHP-FPM. One of the most effective ways to speed up Magento is to use a PHP accelerator; for basic installations, APC is considered the best option.

There are therefore two possible solutions:

  • NGINX → VARNISH → APACHE (PHP-MODULE + APC, Memcache, Redis) → MariaDB
  • NGINX → VARNISH → NGINX (PHP-FPM + APC, Memcache, Redis) → MariaDB
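The SSL-terminating Nginx at the front of either chain could look roughly like this sketch (the server name, certificate paths, and Varnish port are placeholders):

```nginx
server {
    listen 443 ssl;
    server_name shop.example.com;

    ssl_certificate     /etc/nginx/ssl/shop.example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/shop.example.com.key;

    location / {
        # Pass decrypted traffic on to Varnish listening locally
        proxy_pass http://127.0.0.1:6081;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```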

For a larger installation, it is appropriate to use Redis for the cache and as a session handler. If you choose to run the whole installation on one server, it’s imperative to have powerful hardware or hosting that directly supports Magento, because PHP, MySQL and Varnish will all compete for memory, CPU and IOPS.
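In Magento 2, the Redis cache and session backends are configured in `app/etc/env.php`; an excerpt as a sketch (hosts, ports, and database numbers are placeholders):

```php
<?php
// app/etc/env.php (excerpt) -- hypothetical hosts and database numbers
return [
    'session' => [
        'save' => 'redis',
        'redis' => [
            'host' => '127.0.0.1',
            'port' => '6379',
            'database' => '2',
        ],
    ],
    'cache' => [
        'frontend' => [
            'default' => [
                'backend' => 'Cm_Cache_Backend_Redis',
                'backend_options' => [
                    'server' => '127.0.0.1',
                    'port' => '6379',
                    'database' => '0',
                ],
            ],
        ],
    ],
];
```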

Running Magento in a cluster

If you choose to run Magento in any kind of cluster mode (whether for performance or high availability), bear in mind that Magento doesn’t account for this option by default. There are several ways to assemble the cluster:

  • The more complicated way is to put a load balancer in front of a Varnish instance on each backend, but then it is necessary to deal properly with cache invalidation and similar problems.
  • An easier way is to leave the balancing to a single Varnish cache, which is powerful enough on its own to pass requests to several backends.
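The easier variant – one Varnish distributing requests across several backends – might be sketched in VCL 4.x roughly like this (the backend addresses are placeholders):

```vcl
vcl 4.0;
import directors;

backend web1 { .host = "10.0.0.11"; .port = "8080"; }
backend web2 { .host = "10.0.0.12"; .port = "8080"; }

sub vcl_init {
    # Round-robin director spreading traffic over both backends
    new lb = directors.round_robin();
    lb.add_backend(web1);
    lb.add_backend(web2);
}

sub vcl_recv {
    set req.backend_hint = lb.backend();
}
```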

For cluster installations, it is also recommended to use a dedicated server for the admin backend, where, paradoxically, caching isn’t particularly desirable. For version 2 and below, it is recommended that you enable the Magento Compilation function to speed Magento up. 

Our experience shows that the most common instances of slowdown are a result of the following application errors:

  • 44% – SQL queries inside a loop
  • 25% – loading the same model multiple times
  • 14% – using a redundant data set
  • 10% – calculating the array size on each loop iteration
  • 7% – inefficient memory usage

Conclusion: An experienced hosting provider and programmer will fine-tune how Magento runs

The advantages of Magento are that it is robust and universal. On the other hand, it is hard to ensure that it runs optimally. If you are considering using Magento for your e-shop, be sure that:

  1. Your programmer has experience with the Magento platform
  2. The hosting provider you choose knows how to fine-tune a server for Magento so that everything works perfectly without any loss of speed. 

We have successfully assisted with migrations for hundreds of clients over the course of 17 years. Join them.

  1. Schedule a consultation

    Simply leave your details. We’ll get back to you as soon as possible.

  2. Free solution proposal

    A no commitment discussion about how we can help. We’ll propose a tailored solution.

  3. Professional implementation

    We’ll create the environment for a seamless migration according to the agreed proposal.
