Not too long ago, hosting a web application meant having a server on-premise, storing the application files on that server, and opening it up to the Internet. That meant the developer was responsible for everything, from the infrastructure (the server, networking, OS, and so on) to application-specific services.
With the advent of server rentals in the early 2000s, developers could lease a server without having to manage it physically. That started the journey toward what we call the Cloud today.
Around 2010, the most popular way to get a small web application online went like this. We set everything up locally, maybe with PHP (and a content management system such as Drupal) or Ruby on Rails. Then we rented server space, often shared with others, from a service like Linode. Finally, we opened an FTP connection to the server and uploaded the local files there. That was the working definition of the Cloud at the time. Even though Amazon Web Services (AWS) had been around for a few years with S3 and EC2, it was not popular for small to medium use cases.
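To make that deployment step concrete, here is a minimal sketch of such an FTP upload using Python's standard ftplib module. The host name, credentials, and directory paths are hypothetical placeholders, not a real provider's details.

```python
from ftplib import FTP
from pathlib import Path

# Hypothetical connection details for a rented shared server.
HOST = "ftp.example-shared-host.com"
USER = "siteowner"
PASSWORD = "secret"
LOCAL_DIR = Path("./public_html")   # locally built site, e.g., PHP files
REMOTE_DIR = "/public_html"         # document root on the shared server


def deploy() -> None:
    """Upload the local files to the remote document root over FTP."""
    with FTP(HOST) as ftp:
        ftp.login(user=USER, passwd=PASSWORD)
        ftp.cwd(REMOTE_DIR)
        # Flat upload for brevity; subdirectories would also need ftp.mkd() calls.
        for path in LOCAL_DIR.glob("*"):
            if path.is_file():
                with open(path, "rb") as fh:
                    # STOR overwrites the remote file with the local copy.
                    ftp.storbinary(f"STOR {path.name}", fh)


if __name__ == "__main__":
    deploy()
```

Every deploy was essentially a blind overwrite of files on the server, which is exactly why this workflow felt fragile compared with what came later.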
While PHP, Drupal, Ruby on Rails, and Linode are all still thriving (especially within enterprises), we have come a long way since then. Before we dive deeper into that evolution, let's walk through the specifics of the rent-a-shared-space paradigm.
The servers were often true bare metal, and a cheaper rented server shared its resources with other tenants. It ran Linux, as most servers do today. However, the developer was responsible for maintaining the software stack: they had to SSH into the server and update or upgrade packages whenever necessary.
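As an illustration, a routine maintenance chore like that could be scripted with the third-party paramiko library. The host, user, key path, and the assumption of a Debian-style package manager are all hypothetical; this is a sketch of the kind of hands-on upkeep involved, not a hardened admin tool.

```python
import os
import paramiko

# Hypothetical details for a rented shared or bare-metal box.
HOST = "203.0.113.10"
USER = "admin"
KEY_FILE = os.path.expanduser("~/.ssh/id_rsa")


def upgrade_packages() -> None:
    """SSH into the server and upgrade its packages (Debian/Ubuntu assumed)."""
    client = paramiko.SSHClient()
    client.load_system_host_keys()
    # For illustration only; in practice, verify host keys explicitly.
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(HOST, username=USER, key_filename=KEY_FILE)
    try:
        # Non-interactive update and upgrade; assumes passwordless sudo.
        _, stdout, stderr = client.exec_command(
            "sudo apt-get update && sudo apt-get -y upgrade"
        )
        print(stdout.read().decode())
        print(stderr.read().decode())
    finally:
        client.close()


if __name__ == "__main__":
    upgrade_packages()
```

Whether scripted or typed by hand, the point is the same: patching the operating system was the developer's job, not the provider's.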
The database also needed to be set up and managed by the developer. Tools such as cPanel made it easier to visualize and interact with the database, but the performance and security of the database, and of the rest of the application, remained the developer's responsibility.