AWS cloud migration process
Amazon Web Services (AWS) is a leading cloud provider, offering scalable environments for storing, processing, and analyzing data. Migrating to the cloud is no easy job. This article presents Amazon’s basic framework for migration, describing the steps relevant to virtually all AWS migration projects.
AWS migration consulting is part of the AWS Migration Program, which helps businesses identify and select APN Partners with proven technical expertise and a record of client success in specialized solution areas.
Migrate your application workloads
AWS is an excellent platform for Windows applications today and in the future, and Windows workloads represent a significant and growing share of its business. SAP landscapes have run on AWS since 2011. AWS supports a wide range of instance types for a variety of applications, and VMware has partnered with AWS to develop and deliver VMware cloud-based workload solutions.
AWS Cloud Migration Phases
Amazon’s cloud migration plan covers five phases. Phase 1: Migration preparation and business planning. Build the business case for migrating to AWS, define your goals, and ask how the move can improve your business processes. Phase 2: Portfolio discovery and planning. Identify the specific applications to migrate and the strategy for each.
AWS migration solutions
Our migration solution addresses every aspect of the process, from technology to finances, to ensure that your project achieves the desired results for the organization.
– Migration Methodology
Moving large volumes of data and applications into the cloud requires a phased approach: assessment, readiness planning, migration, and operations, with each phase building on the previous one. AWS prescriptive guidance provides methods and techniques for each step of your migration journey.
– AWS Managed Services
AWS Managed Services (AMS) provides enterprise-grade infrastructure operations that allow production workloads to be migrated within days. For compliance, AMS applies only the updates required for security, and it takes charge of running the cloud environment on your behalf.
– AWS Migration Competency Partners
An AWS Migration Competency Partner can help you complete your migration faster. Global systems integrators and regional partners must demonstrate the successful completion of multiple large migrations to AWS to earn Migration Competency status.
– AWS Migration Acceleration Program
The AWS Migration Acceleration Program (MAP) helps organizations migrate more efficiently by combining a comprehensive migration methodology with AWS investment to reduce the cost of moving to the cloud.
– AWS Training and Certification
The AWS Training and Certification team gives organizations the knowledge and expertise needed for cloud development. Cloud adoption proceeds much faster when you employ a highly skilled workforce.
Why should I migrate to AWS Cloud?
Several enterprises that have moved to Amazon Web Services report improvements of around 36% in the efficiency of their IT infrastructure.
Faster time to business results
Automation and data-driven guidance simplify migration, reducing its time and complexity. A faster migration, in turn, shortens the time needed to realize the value of the cloud.
Migration to AWS: 5 challenges and solutions
Migration to AWS can be a complicated process with many challenges. Here are the most common problems and how to solve them.
Plan for security
Challenge: A cloud environment’s security characteristics are very different from those of an in-house environment. The risk is that your existing security tooling will no longer apply once an application moves from on-premises to the cloud, leaving it in a security vacuum.
Solution: Identify the security requirements of the application you are moving and ensure it continues to meet the corresponding standards after migration. Find AWS-platform equivalents for the security controls you rely on on-premises.
Moving On-Premise Data and Managing Storage on AWS
Challenge: How can you migrate data to the cloud, and how do you manage storage once it is there?
Solution: AWS Direct Connect provides dedicated, highly reliable network connectivity between your premises and AWS, suitable for enterprise applications. It also supports synchronized hybrid workflows with centralized visibility for users. Amazon CloudWatch can be used to limit the migration’s impact on users: it detects performance problems immediately so they can be resolved before users are affected.
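To make CloudWatch useful during a migration, you typically define alarms on key metrics. The sketch below, assuming boto3 credentials are configured elsewhere, builds the parameters for a CPU-utilization alarm; the instance ID and threshold are illustrative.

```python
# Sketch: parameters for a CloudWatch alarm that flags high CPU on a
# migrated EC2 instance. The instance ID and thresholds are illustrative.
def cpu_alarm_params(instance_id, threshold_pct=80):
    """Build the keyword arguments for cloudwatch.put_metric_alarm()."""
    return {
        "AlarmName": f"high-cpu-{instance_id}",
        "Namespace": "AWS/EC2",
        "MetricName": "CPUUtilization",
        "Dimensions": [{"Name": "InstanceId", "Value": instance_id}],
        "Statistic": "Average",
        "Period": 300,              # evaluate 5-minute averages
        "EvaluationPeriods": 2,     # two consecutive breaches before alarming
        "Threshold": threshold_pct,
        "ComparisonOperator": "GreaterThanThreshold",
    }

# With credentials configured, these would be passed to boto3, e.g.:
#   import boto3
#   boto3.client("cloudwatch").put_metric_alarm(**cpu_alarm_params("i-0abc123"))
```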
Resilience for computation and network resources
Your application should remain highly available to users on AWS. Individual cloud instances cannot be assumed to live forever, so a second requirement is reliable connectivity: ensuring that all of the resources in the cloud stay reachable.
For compute, you can choose Reserved Instances to guarantee capacity for long-running machines. For availability, use replication or a service that manages deployment and availability for you, such as Elastic Beanstalk.
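Beyond managed services, application code itself should tolerate the loss of an individual instance. A minimal, generic sketch of that idea is a retry-with-backoff wrapper; the delays and exception type here are illustrative, not AWS-specific.

```python
import time

# Sketch: retry a call with exponential backoff so a transient instance
# failure (e.g. during an Auto Scaling replacement) does not surface to users.
def with_retries(fn, attempts=4, base_delay=0.5):
    for i in range(attempts):
        try:
            return fn()
        except ConnectionError:
            if i == attempts - 1:
                raise                        # out of retries: propagate
            time.sleep(base_delay * 2 ** i)  # 0.5s, 1s, 2s, ...

# Example: the second attempt succeeds after the first node fails over.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 2:
        raise ConnectionError("instance unavailable")
    return "ok"

print(with_retries(flaky))  # → ok
```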
How do I manage my costs?
Challenge: Many organizations move to the cloud without defining KPIs for what the cloud should cost, which makes it impossible to tell afterwards whether the move was an economic success.
A cloud environment is also very dynamic: costs can change rapidly as you adopt new services or scale your application. Solution: Before moving, create an objective business plan and understand the value you expect the migration to deliver.
Log Analysis and Metric Collection
Challenge: After migrating to AWS, your system can become extremely dynamic and elastic. Earlier approaches to logging may no longer apply; for example, you still need to analyze log files from instances that were terminated yesterday, which makes centralizing log data essential.
Solution: Ship logs to a central place so you have one consolidated view of them. Amazon CloudWatch Logs can serve as that central store, optionally combined with services such as AWS Lambda for processing.
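One common preparation step is to emit logs as structured JSON lines, so that whatever aggregator receives them (CloudWatch Logs or anything else) can index the fields. A minimal sketch, with field names that are illustrative rather than a CloudWatch requirement:

```python
import json
import logging
import sys

# Sketch: emit one JSON object per log line so a centralized collector
# can index the fields. The field names here are illustrative.
class JsonFormatter(logging.Formatter):
    def format(self, record):
        return json.dumps({
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })

handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(JsonFormatter())
log = logging.getLogger("checkout")
log.addHandler(handler)
log.setLevel(logging.INFO)

log.info("order placed")  # prints {"level": "INFO", "logger": "checkout", "message": "order placed"}
```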
What are the three phases of AWS cloud migration?
AWS structures large migrations into three phases: assess, mobilize, and migrate. Each phase builds upon the previous one. This prescriptive guidance covers the assessment phase as well as the mobilization phase.
You’re traveling through another dimension, a dimension not only of sight and sound but of mind. A journey into a wondrous land whose boundaries are that of programmatic bits and bytes. That’s the signpost up ahead – your next stop, the Cloud!
As customers begin to embrace digital transformation and implement their previously agreed ‘Cloud First’ strategy, the most common and important question that arises is: which applications should we move to the cloud? This decision is rarely based on a single factor. The choice of which applications to move (and not move) should rest on a thorough analysis of your IT landscape. Below is a set of categories and questions to guide your decision on migrating applications from on-prem into the cloud.
6 Migration Paths for Your Application to the Cloud
Enterprises need to begin their journey to the cloud by deciding how to get their application from where it lives today to where it needs to be tomorrow. From this perspective there is one key question to ask: How will you migrate this application into a cloud?
1 Rehosting – Lift-n-Shift
Every IT organization has a set of large-scale legacy applications that must have continuity maintained but cannot incur the cost, time or effort of refactoring. Rehosting is essentially a forklift approach to migrating applications to the cloud, essentially moving them without any modification. This is an efficient non-resource-intensive migration process. Often, however, lift-n-shift migrations don’t benefit from cloud-native features like elasticity. And while they may be more cost-effective to run in the cloud than on-prem, it could be even cheaper if you were to replatform or refactor.
2 Replatforming – Lift-Adjust-n-Shift
Replatforming can be considered lightweight or pseudo-refactoring. Instead of rearchitecting the software entirely, it may be possible to optimize or tweak basic elements of your application so it operates successfully in the cloud. While this path is slower than rehosting, it offers a middle ground between rehosting and refactoring, letting workloads take advantage of native cloud functionality and cost optimization without the huge resource commitment of a full refactor.
3 Refactoring – Redeveloping an application
Rearchitecting or redeveloping an application is typically driven by business need to add features, scalability, or performance that would otherwise be difficult to achieve in its current state. This approach is the most time-consuming and resource-intensive but offers the inherent benefits of being able to leverage native-cloud functionality and maximizing operational cost efficiency in the cloud.
4 Repurchasing – Move to a different product
Most enterprise software vendors have created, or are in the process of creating, cloud-based versions of their applications. If it makes business sense, this is often a suitable way of getting your application into the cloud. If your current vendor does not offer a cloud-native solution, the marketplace is rife with competitors who likely have an application that fits your needs and is already designed to operate in the cloud of your choice.
5 Retiring – Getting rid of the application altogether
In any IT landscape there are many legacy products that could be replaced, or removed without replacement. Once you’ve discovered everything in your environment, it becomes a worthwhile exercise to determine whether each application is actually needed, and if not, to retire it.
6 Retaining – Keeping the application in its current home
You should only migrate what makes sense from both a business and a technical perspective. If the cost is too high, compliance too limiting, or the complexity simply prohibitive, in many cases it makes sense NOT to migrate an application to the cloud. An alternative strategy is a hybrid cloud model, where some of the application (or its data) resides on-prem but can still leverage the compute and elastic capabilities of the cloud.
When considering application migration (not just to the cloud), organizations must review the architecture of each application and determine its viability on the new platform. Reviewing the application architecture tells you whether migrating a given application to the cloud is the right decision. Let’s consider two common architectural questions.
Does your application require a specific operating system?
You must consider the obvious question: does the cloud provider you are contracting with support that OS? While you can often upload or configure whatever you want, having some level of built-in support is important for a business-critical application. If a problem occurs, can you get assistance? What time-to-resolution does the cloud provider commit to?
Does the application have hardware or infrastructure requirements?
When it comes to public cloud providers, we generally have no visibility into the underlying hardware. Does the use of unknown or commodity hardware pose any risk? Are there external hardware dependencies that must be in place for things to work correctly (e.g., a load balancer)? Does the application have external software dependencies (e.g., NIS or AD), and if so, can it work in the cloud without migrating all of those dependencies?
For operating system dependencies there is an obvious go/no-go: if a cloud vendor doesn’t support your required OS, that is rate-limiting. The same is true if there are hardware or infrastructure dependencies that cannot be properly set up or configured in a cloud setting. However, a hybrid cloud scenario often solves these problems: the dependencies stay “on-prem” while the application still functions in the cloud.
One of the biggest areas of consideration in the journey to the cloud is the applications themselves. When it comes to specificity versus universality, software developers run the gamut from writing highly portable code (think open-source Linux projects) to code that is extremely specific in its requirements (think SAP). It is therefore very important to consider the needs of each individual application from an internal operability perspective. Let’s consider five questions.
Does the application observe consistent or fluctuating CPU usage?
Applications that run constantly versus ones that spin up and shut down when a job completes can change the cost analysis significantly. For example, with AWS On-Demand Instances you pay for compute capacity per hour or per second, depending on which instances you run. The architecture of your application (including its container, VM, or other higher-tier controller software) is therefore critical to the cost-effectiveness of a cloud model.
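A back-of-envelope calculation makes the difference concrete. The hourly rate below is illustrative, not a quoted AWS price:

```python
# Back-of-envelope cost comparison: an always-on instance vs. one that only
# runs while jobs execute. The $0.10/hour rate is illustrative, not a real
# AWS price, and real bills include storage, transfer, and other charges.
HOURLY_RATE = 0.10

def monthly_cost(hours_per_day, rate=HOURLY_RATE, days=30):
    return round(hours_per_day * days * rate, 2)

always_on = monthly_cost(24)  # 72.0
bursty = monthly_cost(4)      # 12.0 -- job runs ~4 hours/day
print(f"always-on: ${always_on}/mo, bursty: ${bursty}/mo")
```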
Does your application have latency and throughput requirements?
Public clouds can only guarantee certain levels of IOPS and latency. The biggest culprit is usually network bandwidth: as utilization increases, network links often become oversaturated, which can degrade overall application performance. Furthermore, in network architectures where traffic is routed through a common gateway in the data center, traffic to cloud-based applications travels a greater distance to reach users than it did on-prem.
Does your application have specific compute requirements?
Public clouds are an ideal place to obtain more compute resources without capital expenditure. But applications that require vertical scaling may not suit every cloud: vertical scaling means adding more power (CPU, RAM) to an existing machine, which can ultimately cost significantly more.
Does your application have supportability requirements?
If your application needs ongoing support, does the cloud provider meet its listed supportability requirements? If so, can your (or the vendor’s) support team gather the necessary information and troubleshoot when issues arise?
Are there any software licensing issues that prevent or limit cloud usage?
Some of your contracts may not address cloud computing at all because the licensing model predates the cloud, while others offer cloud-specific licenses that introduce complications of their own. It behooves each IT organization to thoroughly examine the licensing models of all core software before undertaking a cloud migration.
Business and Industry Criteria
The last, and arguably most important, area of consideration around cloud migration is the business itself. Thus far we have discussed technology, but that is all for naught if basic business requirements cannot be met. These include areas such as business continuity, compliance, and security, each of which we discuss below.
Does your application have specific backup, HA or business continuity requirements?
Clouds are fully featured when it comes to backup, so your RTOs and RPOs can usually be met. But can they be met within your defined cost structure? Storage is cheap, but alternative HA sites and higher-end business continuity plans can be more expensive in the long run; it depends on your business requirements.
Does your application (or business) have compliance specifications or requirements?
Not all cloud technologies suit every compliance regime. HIPAA, SOX, GDPR, or any other regulation can be met in the cloud, but must be considered and evaluated beforehand. First, be fully aware of the types of cloud services being used, and then of the data that will be migrated. Second, once you know which data you are going to put in the cloud, examine the contracts with your cloud provider. If it is an internal cloud, will you have internal SLAs and internal compliance checklists? If it is external, you must clearly agree with the provider on what type of data may reside on their services, how they will protect it, how they will back it up, and how you reserve the right to audit that process.
Does your application (or business) have stringent security requirements?
While at a high level, cloud environments experience the same threats as on-prem data centers, they also offer a unique set of threats and risks. Those include:
- Consumers have reduced visibility and control
- Immediate self-service functionality makes unauthorized use easier
- Public APIs offer an attractive target to hackers
- Multi-tenancy increases the attack surface
- Complete data deletion is difficult to verify
- Credentials can more easily be stolen
- Cloud vendors have access to your data
By moving into a public cloud there will be some compromises on security. Taking a hard look at your application and business requirements from this perspective is a crucial step in determining cloud-hosting viability.
Keep in mind that these are common, but still general guidelines, and your decision about moving applications to the cloud should be based on your own situation. However, if you apply all these questions to your application and IT landscape you will be well-positioned to know what should and should not be migrated. I hope that your move to the cloud is expedient, efficient and effective.
AWS today announced the beta launch of Amazon Honeycode, a new, fully managed low-code/no-code development tool that aims to make it easy for anybody in a company to build their own applications. All of this, of course, is backed by a database in AWS and a web-based, drag-and-drop interface builder.
Image Credits: Amazon/AWS
“Customers have told us that the need for custom applications far outstrips the capacity of developers to create them,” said AWS VP Larry Augustin in the announcement. “Now with Amazon Honeycode, almost anyone can create powerful custom mobile and web applications without the need to write code.”
Like similar tools, Honeycode provides users with a set of templates for common use cases like to-do list applications, customer trackers, surveys, schedules and inventory management. Traditionally, AWS argues, a lot of businesses have relied on shared spreadsheets to do these things.
“Customers try to solve for the static nature of spreadsheets by emailing them back and forth, but all of the emailing just compounds the inefficiency because email is slow, doesn’t scale, and introduces versioning and data syncing errors,” the company notes in today’s announcement. “As a result, people often prefer having custom applications built, but the demand for custom programming often outstrips developer capacity, creating a situation where teams either need to wait for developers to free up or have to hire expensive consultants to build applications.”
It’s no surprise, then, that Honeycode uses a spreadsheet view as its core data interface, given how familiar virtually every potential user is with the concept. To manipulate data, users work with standard spreadsheet-style formulas, which seems to be about the closest the service gets to actual programming. “Builders,” as AWS calls Honeycode users, can also set up notifications, reminders, and approval workflows within the service.
AWS says these databases can easily scale up to 100,000 rows per workbook. With this, AWS argues, users can then focus on building their applications without having to worry about the underlying infrastructure.
As of now, it doesn’t look like users will be able to bring in any outside data sources, though that may still be on the company’s roadmap. On the other hand, these kinds of integrations would also complicate the process of building an app and it looks like AWS is trying to keep things simple for now.
Honeycode currently only runs in the AWS US West region in Oregon but is coming to other regions soon.
Among Honeycode’s first customers are SmugMug and Slack.
What is AWS SSM?
AWS Systems Manager (SSM) is an agent-based platform for managing servers across any infrastructure, including AWS, on-premises, and other clouds. With it, you can deploy applications and application configurations to your fleet with a single command. It grew out of the earlier EC2 Run Command feature, which remains available as part of the service. Previously, there was no single solution that could be used to manage all servers; SSM came into existence to fill that gap.
Features of SSM (AWS Systems Manager)
Run Command lets us reach into our servers and perform ad-hoc tasks easily. Previously, we would use Ansible, bastion hosts, and similar approaches to run ad-hoc commands on remote servers. Those solutions work, but they take time to set up, and it can be difficult to determine precisely who is doing what. By integrating with AWS Identity and Access Management (IAM), SSM provides significantly better control over remote command execution and records remote administration activity for auditing. Documents can also be created for frequently used commands.
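As a sketch of what an ad-hoc command looks like through the API, the function below builds the request shape for boto3’s `ssm.send_command()`. `AWS-RunShellScript` is a real AWS-managed document; the instance IDs and command are illustrative.

```python
# Sketch: build the keyword arguments for SSM Run Command via boto3's
# ssm.send_command(). Instance IDs and the command itself are illustrative.
def run_shell_params(instance_ids, command):
    return {
        "InstanceIds": instance_ids,
        "DocumentName": "AWS-RunShellScript",  # AWS-managed document
        "Parameters": {"commands": [command]},
        "Comment": "ad-hoc command via SSM",
    }

# With credentials configured:
#   import boto3
#   resp = boto3.client("ssm").send_command(**run_shell_params(["i-0abc123"], "uptime"))
#   command_id = resp["Command"]["CommandId"]
```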
New vulnerabilities are discovered every day, so keeping your environment in a known-good state is a constant effort. State Manager makes it simple to maintain the proper state of our application environment by running a collection of commands, defined in SSM documents, on a regular schedule. If we wanted to disable SSH temporarily on all servers, one strategy would be a Systems Manager document, scheduled every half hour (30 minutes), that shuts down the SSH daemon on each server.
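As a rough illustration of what such a document could look like (the schema version and action are real SSM constructs; the service name and shell command are assumptions about the target hosts):

```yaml
# Illustrative SSM command document. State Manager could run it on a
# schedule; the systemctl call assumes systemd-based Linux hosts.
schemaVersion: "2.2"
description: "Temporarily disable SSH on associated instances"
mainSteps:
  - action: "aws:runShellScript"
    name: "stopSshd"
    inputs:
      runCommand:
        - "systemctl stop sshd"
```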
Automation builds on Run Command: beyond remotely running commands on multiple instances, automations can invoke AWS APIs as part of their execution, and many steps can be combined into a Systems Manager automation-type document to complete complicated tasks. Keep in mind that Automation documents run on the SSM service and have a maximum execution time of 1,000,000 seconds per AWS account per region.
Systems Manager Inventory makes it easy to track which applications run on our servers and which services we use. This works by linking an SSM document to a managed instance; the document collects inventory data at regular intervals and makes it available for examination afterwards.
The environment also needs to be kept up to date with new patches. Using SSM Patch Manager, we can define patch baselines and apply them to managed instances during maintenance windows. This happens automatically whenever the maintenance window arrives, reducing the possibility of manual oversight.
Maintenance Windows let us schedule tasks to execute on AWS infrastructure at certain intervals: applying patch fixes, installing software, or upgrading the OS at times when the impact on users is lowest. SSM Run Command and Automation can both be used during maintenance windows.
Compliance is an SSM reporting feature that tells us whether our instances comply with their patch baselines and State Manager associations. This capability can be used to drill deeper into issues and resolve them using SSM Run Command or Automation.
Parameter Store, by leveraging the AWS KMS service, eliminates the risk of exposing database passwords and other sensitive parameters we’d like to reference in our SSM documents. It is a small component of SSM, but necessary for the service to function properly.
SSM comes with a number of pre-made documents that can be used with Run Command, Automation, and State Manager, and we can also create our own. SSM document permissions integrate with AWS IAM, so IAM policies control who has execution privileges on which documents.
Rate control lets you run commands and automation documents in parallel by specifying a percentage or a count of target instances, and halt the operation if the number of target instances throwing errors reaches a certain threshold.
Security deserves a closer look, since the Systems Manager Agent runs as root on the servers it manages. Several aspects of its design give us visibility into, and confidence in, the security of the environment:
- The SSM agent retrieves pending commands from the SSM service via a pull mechanism and executes them on the instance.
- Communication between the SSM agent and the service takes place through a secure channel that employs the HTTPS protocol.
- Because the SSM agent code is open source, we know exactly what it does.
- To log all API calls, the SSM service may be linked with AWS CloudTrail.
Start using AWS Systems Manager for free – Try the 13 free features available with the AWS Free Tier.
AWS Systems Manager is a cloud-based service for managing, monitoring, and maintaining the health of your IT infrastructure. It provides a centralized console to view the state of all your AWS resources, as well as one-click actions to fix common issues.
Overall, AWS Systems Manager is an impressive production-ready tool that lets you manage your servers and other AWS resources remotely.
If you are an Amazon Web Services user, you may have seen warnings that your data could be at risk during a maintenance incident. AWS has been down for over an hour at a time, which is ample time for data to be put at risk.
Generally speaking, in technical terms, Clubhouse is three SaaS services rolled up quickly with blue duct tape. Specifically:
- Agora.io, the voice part itself + the infrastructure for it
- PubNub for real-time updates
- AWS for storing statics (avatars)
While the app’s developers delay the release of the Android version, we found a good alternative.
It’s not an official app, but it works.
The developer @grishka11 put together a build for Android users in just one day.
How do I build this?
Import into Android Studio and click “run”. Or, there’s an apk you can install in the releases section.
Which PaaS Hosting to Choose?
In the process of developing a web project, be it a pure API or a full-fledged web app, a product manager eventually comes to the point of choosing a hosting service.
Once the tech stack (Python vs. Ruby vs. Node.js vs. anything else) is defined, the software product needs a platform to be deployed and become available to the web world. Fortunately, the present day does not fall short of hosting providers, and everyone can pick the most applicable solution based on particular requirements.
At the same time, the abundance of hosting options is often a stumbling block that many startups trip on. The first question that arises is what type of web hosting is needed. In this article we skip entry-level options such as shared hosting and virtual private servers, and also set aside dedicated servers. Our focus is cloud hosting, which can serve as a proper project foundation and a tool for deploying, monitoring, and scaling the pipeline. It is therefore worthwhile to review the two most famous representatives of cloud services: Heroku vs. Amazon.
So let’s talk about popular arguments we can read about everywhere, the same arguments I’m hearing from my colleagues at work 😄
Dedicated and shared hosting services are two extremes, from which cloud hosting is distinct. Its principal hallmark is the provision of digital resources on demand. It means you are not limited to capabilities of your physical server. If more processing power, RAM, memory, and so on are necessary, they can be scaled fast manually with a few clicks of a button, and even automatically (e.g., Heroku automatic scaling) depending on traffic spikes.
Meanwhile, the number of services and the type of virtual server architecture generate another classification of hosting options, depending on what users get: a function, software, a platform, or an entire infrastructure. Serverless architecture, where the server is abstracted away, also falls under this category and has a good chance of establishing itself in the industry over the next few years, as we suggested in our recent blog post. The options we review here are hosting platforms.
Platform as a service
This cloud computing model features a platform for speedy and accurate app creation. You are freed from tasks related to servers, virtualization, storage, and networking; the provider is responsible for them. An app creator therefore has no worries about operating systems, middleware, software updates, and the like. PaaS is like a playground for web engineers, who can enjoy a bunch of services out of the box. Digital resources including CPU, RAM, and others are manageable via a visual administrative panel. The following short intro to the advantages and disadvantages of PaaS explains why this cloud hosting option has been popular lately.
The following reasons make PaaS attractive to companies regardless of their size:
- Cost-efficiency (you are charged only for the amount of resources you use)
- Provides plenty of assistance services
- Dynamic scaling
- Rapid testing and implementation of apps
- Agile deployment
- Emphasis on app development instead of supplementary tasks (maintaining, upgrading, or supporting infrastructure)
- Allows easy migration to the hybrid model
- Integrated web services and databases
These items might cause you to doubt whether this is the option for you:
- Information is stored off-site, which is not appropriate for certain types of businesses
- Though the model is cost-efficient, do not expect a low budget solution. A good set of services may be quite pricey.
- Reaction to security vulnerabilities is not particularly fast. For example, patches for Google Kubernetes clusters take 2-4 weeks to be applied. Some companies may deem this timeline unacceptable.
As a rule, the hosting providers reviewed herein stand out amid other PaaS options. The broad picture would be like Heroku vs. AWS vs. Google App Engine vs. Microsoft Azure, and so on. We took a look at this in our blog post on the best Node.js hosting services. Here we go.
Amazon Web Services (AWS)
Judging from the article’s title, the Heroku platform should have opened our comparison. Nevertheless, we cannot neglect the standing and reputation of AWS. This provider cannot boast an unlimited number of products, but it does have around one hundred; you can count the actual number on its product page if needed. The point, however, is that AWS occupies more than the PaaS niche: the ability to choose solutions for storage, analytics, migration, application integration, and more makes it an infrastructure as a service as well. Meanwhile, AWS’s opponent in this comparison cannot boast the same breadth of services, so it is only fair to pick a competitor in the same weight class and reshape the comparison into Elastic Beanstalk vs. Heroku, since the former is the PaaS provided by Amazon. In the context of this article, then, AWS is represented by Beanstalk.
You can find this product in the ‘Compute’ tab on the AWS home page. Officially, Elastic Beanstalk is a product that allows you to deploy web apps. It is appropriate for apps built in RoR, Python, Java, PHP, and other tech stacks. The deployment procedure is agile and automated: the service carries out auto-scaling, capacity provisioning, and other essential tasks for you, and infrastructure management can also be automated. Nevertheless, users remain in control of the resources that power the app.
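As an illustration of how Beanstalk configuration works, an `.ebextensions` file in your application bundle can set environment options. The namespaces below are real Beanstalk option namespaces; the values are examples only:

```yaml
# Illustrative .ebextensions/autoscaling.config: pin the environment's
# Auto Scaling group between 2 and 6 instances behind a load balancer.
option_settings:
  aws:autoscaling:asg:
    MinSize: 2
    MaxSize: 6
  aws:elasticbeanstalk:environment:
    EnvironmentType: LoadBalanced
```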
Among the companies that have chosen this AWS product to host their products are BMW, Speed 3D, and Ebury. Let’s see which aspects, such as Elastic Beanstalk’s pricing and manageability, attract users and which repel them.
Pros & Cons
| Pros | Cons |
| --- | --- |
| Easy to deploy an app<br>Improved developer productivity<br>A bunch of automated functionalities, including scaling, configuration, and setup<br>Full control over the resources<br>Manageable pricing: you control your costs depending on the resources you leverage<br>Easy integration with other AWS products<br>Medium learning curve | Deployment speed may stretch up to 15 minutes per app<br>Lack of transparency (no information on version upgrades or archiving of old app versions, scarce documentation around the stack)<br>DevOps skills are required |
In addition to this PaaS product, Amazon can boast an IaaS solution called Elastic Compute Cloud, or EC2. It involves delving into the detailed configuration of server infrastructure, adding database instances, and other activities related to app deployment. At some point, you might want to migrate to it from Beanstalk. It is important to mention that such a migration can be done seamlessly, which is great!
Heroku
In 2007, when this hosting provider had just begun its activities, Ruby on Rails was the only supported tech stack. More than a decade later, Heroku has expanded its scope and is now available for apps built with Node.js, Python, PHP, and others. Meanwhile, it is a pure PaaS product, which makes it inappropriate to compare Heroku vs. EC2.
It’s a generally known fact that this provider runs on AWS servers. In this regard, do we really need to compare AWS vs. Heroku? We do, because this cloud-based solution differs from the products mentioned above and has its own quirks to offer. These include over 180 add-ons (tools and services for development, monitoring, testing, image processing, and other operations with your app) as well as an ocean of buttons and buildpacks. The latter are especially useful for automating build processes for different tech stacks. As for the big names that leverage Heroku, there are Toyota, Facebook, and GitHub.
As before, let’s look at the benefits you can expect from Heroku and the reasons you might dislike this hosting provider.
Pros & Cons
| Pros | Cons |
| --- | --- |
| Easy to deploy an app<br>Improved developer productivity<br>Free tier is available (not only the service itself but also a bunch of add-ons)<br>Auto-scaling is supported<br>A bunch of supportive tools<br>Easy setup<br>Beginner- and startup-friendly<br>Short learning curve | Rather expensive for large and high-traffic apps<br>Slow deployment for larger apps<br>Limited types of instances<br>Not applicable for heavy-computing projects |
Which is more popular – Heroku or AWS?
Heroku has been on the market four years longer than Elastic Beanstalk and has never trailed this Amazon PaaS in popularity.
Meanwhile, the range of services provided by AWS has been growing at full speed. Its customers have more freedom of choice and flexibility to handle their needs, which has resulted in a rapid increase in search interest from 2013 until today.
Heroku vs. AWS pricing through the Mailtrap example
Talking about pricing, it’s essential to note that Elastic Beanstalk comes at no additional charge. So, is it free? The service itself is, yes. Nevertheless, your budget will be spent on the resources required for deploying and hosting your app: EC2 instances (which come in different combinations of CPU, memory, storage, and networking capacity), S3 storage, and so on. As a trial, all new users can opt for the free usage tier to deploy a low-traffic app.
With Heroku, there is no need to gather different services and assemble your hosting plan like LEGO. You simply select a Heroku dyno (a lightweight Linux container prepacked with particular resources) and a database-as-a-service plan, with support for scaling resources depending on your app’s requirements. A free tier is also available, but it is quite limited in resources. Despite its simplicity of use, this cloud service provider is far from cheap.
We haven’t mentioned any figures here because both services follow a pay-as-you-go approach to pricing: you pay for what you use and avoid wasting money on unnecessary resources. On that account, costs will differ from project to project. Heroku is a great solution to start with, but AWS pricing seems cheaper. Is that so in practice?
We decided to show you the probable difference in pricing for one of Railsware’s most famous products, Mailtrap. Our engineers agreed to disclose a bit of information about which AWS services are leveraged and how much they cost the company per month. Unfortunately, Heroku’s services are not as versatile as those of AWS, and some products, like EC2 instances, have no direct equivalents on the Heroku side. Nevertheless, we tried to find the most relevant options to make the comparison as precise as possible.
At Mailtrap, we use a set of on-demand Linux instances including m4.large, c5.xlarge, r4.2xlarge, and others. They differ in memory and CPU characteristics as well as in price. For example, c5.xlarge provides 8GiB of memory and 4 vCPUs for $0.17 per hour. As for Heroku, there are only six dyno types, with the most powerful one offering 14GB of memory. Therefore, we decided to pick more or less identical instances and calculate their costs per month.
| | AWS | Heroku |
| --- | --- | --- |
| Cloud computing | EC2 On-Demand Linux instances:<br>t3.micro (1GiB) – $0.0104 per hour ($7.48 per month)<br>t3.small (2GiB) – $0.0208 per hour ($14.98 per month)<br>c5.2xlarge (16GiB) – $0.34 per hour ($244.80 per month) | Dynos:<br>standard-2x (1GB) – $50.00 per month<br>performance-m (2.5GB) – $250.00 per month<br>performance-l (14GB) – $500.00 per month |
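As a sanity check, the AWS monthly figures follow directly from the hourly rates. They line up with a 720-hour month (30 days × 24 hours), which is the assumption used in this quick Python sketch:

```python
# Convert per-hour on-demand rates into approximate monthly costs.
# Assumption: a 720-hour month (30 days x 24 hours), which matches
# the monthly figures quoted in the table above.
HOURS_PER_MONTH = 720

def monthly_cost(hourly_rate: float, hours: int = HOURS_PER_MONTH) -> float:
    """Approximate monthly cost of one always-on instance."""
    return round(hourly_rate * hours, 2)

for name, rate in {"t3.micro": 0.0104, "t3.small": 0.0208, "c5.2xlarge": 0.34}.items():
    print(f"{name}: ${monthly_cost(rate)} per month")
```

This reproduces the table’s numbers; the only discrepancy is t3.micro, where $0.0104 × 720 = $7.488, which the table truncates to $7.48 instead of rounding up.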
The computing cloud costs for Mailtrap come to almost $2,000 per month, based on eight different AWS instances with memory ranging from 4GiB to 122GiB, plus the costs of Elastic Load Balancing and Data Transfer. Even if we chose the largest Heroku dyno, performance-l, the costs would amount to $4,000 per month! It is also important to mention that Heroku cannot satisfy the need for heavy computing capacity, because the largest dyno is limited to 14GB of RAM.
For database-related purposes, both hosting providers offer a powerful suite of tools: Relational Database Service (RDS) for PostgreSQL and Heroku Postgres, respectively. We picked two almost equal instances to show you the price difference.
| | AWS | Heroku |
| --- | --- | --- |
| Database | RDS for PostgreSQL:<br>db.r4.xlarge (30.5 GiB) – $0.48 per hour ($345.60 per month)<br>EBS Provisioned IOPS SSD (io1) volumes – $0.125 per GB ($439.35 per month at the rate of 750GB storage) | Heroku Postgres:<br>Standard 4 (30 GB RAM, 750 GB storage) – $750.00 per month |
In-memory data store
Both providers offer managed solutions to seamlessly deploy, run, and scale in-memory data stores. Everything is simple to compare. We took an ElastiCache instance used at Mailtrap and set it against the most relevant solution by Heroku Redis. Here is what we’ve got.
| | AWS | Heroku |
| --- | --- | --- |
| In-memory storage (i.e., cache) | ElastiCache:<br>cache.r4.large (12.3 GiB) – $0.228 per hour ($164.16 per month) | Heroku Redis:<br>$1,450.00 per month |
In addition to the RDS instance, you will have to choose an Elastic Block Store (EBS) option, which refers to an HDD or SSD volume. At Mailtrap, the EBS costs come to almost $600 per month.
As the main storage for files, backups, etc., Heroku has nothing of its own to offer and recommends using Amazon S3. You can make the integration between S3 and Heroku seamless with an add-on like Bucketeer. In this case, the main storage costs will be equal for both PaaS options (except that you’ll have to pay for the chosen add-on on Heroku). At Mailtrap, we use a Standard Storage instance at “First 50 TB / Month – $0.023 per GB”, as well as “PUT, COPY, POST, or LIST Requests – $0.005 per 1,000” and “GET, SELECT and all other Requests – $0.0004 per 1,000”. All in all, the costs come to a bit more than $800 per month.
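Using the per-unit rates quoted above, an S3 Standard bill is straightforward to estimate. The storage volume and request counts in this sketch are hypothetical, since Mailtrap’s actual usage is not disclosed:

```python
# Estimate an S3 Standard monthly bill from the per-unit rates quoted above.
STORAGE_PER_GB = 0.023   # first 50 TB / month, $ per GB
PUT_PER_1000 = 0.005     # PUT, COPY, POST, LIST requests, $ per 1,000
GET_PER_1000 = 0.0004    # GET, SELECT and all other requests, $ per 1,000

def s3_monthly_cost(storage_gb: float, put_requests: int, get_requests: int) -> float:
    return round(
        storage_gb * STORAGE_PER_GB
        + put_requests / 1000 * PUT_PER_1000
        + get_requests / 1000 * GET_PER_1000,
        2,
    )

# Hypothetical example: 30 TB of storage and a million requests of each kind.
print(s3_monthly_cost(30_000, 1_000_000, 1_000_000))  # 695.4
```

Note how, at these rates, storage dominates the bill while requests are almost negligible.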
Though this point has no relation to Mailtrap’s hosting, we decided to show the options provided by AWS and Heroku for real-time data streaming. Amazon can boast Kinesis Data Streams (KDS), and Heroku has Apache Kafka. The latter is simple to price, since you just choose one of the available plans (basic, standard, or extended) depending on the required capacity. With KDS, you’ll have to either do the math yourself or leverage the AWS Simple Monthly Calculator. Here is what we’ve got for a 4MB/sec data input.
| | AWS | Heroku |
| --- | --- | --- |
| Data streaming services | KDS:<br>4 shard hours – $0.015 per hour<br>527.04 million PUT Payload Units – $0.014 per 1,000,000 units<br>Total: $50.58 per month | Apache Kafka on Heroku:<br>$175 per month |
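Doing the KDS math yourself is less scary than it sounds: you pay per shard-hour plus per PUT payload unit. A small sketch using the figures from the table (the 720-hour month is our assumption):

```python
# Kinesis Data Streams monthly cost: shard-hours plus PUT payload units.
# Assumption: a 720-hour month, which matches the table's total.
HOURS_PER_MONTH = 720
SHARD_HOUR_PRICE = 0.015    # $ per shard-hour
PUT_UNIT_PRICE = 0.014      # $ per 1,000,000 PUT payload units

def kds_monthly_cost(shards: int, put_units_millions: float) -> float:
    shard_cost = shards * HOURS_PER_MONTH * SHARD_HOUR_PRICE  # 4 shards -> $43.20
    put_cost = put_units_millions * PUT_UNIT_PRICE            # 527.04M units -> $7.38
    return round(shard_cost + put_cost, 2)

print(kds_monthly_cost(4, 527.04))  # 50.58
```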
Support
Heroku offers three support options: Standard, Premium, and Enterprise. The first is free, while the price for the latter two starts from $1,000. As for AWS, there are four support plans: Basic, Developer, Business, and Enterprise. The Basic one is provided to all customers, while the price for the others is calculated as a percentage of your AWS usage. For example, if you spend $5,000 on Amazon products, the price for support will be $500.
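The $5,000 → $500 example corresponds to AWS’s Business support plan, which is priced as tiered percentages of monthly usage (10% of the first $10K, 7% of the next $70K, 5% of the next $170K, 3% beyond that, with a $100 minimum; these brackets come from AWS’s published plan, not from this article). A sketch:

```python
# AWS Business support pricing as tiered percentages of monthly usage.
# Brackets per AWS's published Business plan (an assumption of this sketch,
# not stated in the article): 10% up to $10K, 7% to $80K, 5% to $250K,
# 3% above that, with a $100 monthly minimum.
def business_support_cost(monthly_usage: float) -> float:
    tiers = [(10_000, 0.10), (80_000, 0.07), (250_000, 0.05), (float("inf"), 0.03)]
    cost, prev_cap = 0.0, 0.0
    for cap, rate in tiers:
        if monthly_usage > prev_cap:
            cost += (min(monthly_usage, cap) - prev_cap) * rate
        prev_cap = cap
    return round(max(cost, 100.0), 2)

print(business_support_cost(5_000))  # 500.0
```

A $5,000 monthly bill yields exactly $500 in support fees, matching the example above.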
Now, let’s sum up all the expenses and see how much we would have paid if Mailtrap was hosted on Heroku.
These figures are rough, but they fairly convey the idea that freedom from infrastructure management comes at a price. Heroku gives you more time to focus on building your app but drains your purse, while AWS offers a variety of options for managing your hosting infrastructure and definitely saves your budget.
Below, we compare the most relevant points of the two cloud hosting providers.
| PaaS | AWS Elastic Beanstalk | Heroku |
| --- | --- | --- |
| Programming language support | Ruby, Python, Java, PHP, and others | Ruby, Node.js, Python, and others |
| Key features | AWS Service Integration<br>App Health Dashboard | Code and data rollback<br>Smart containers (dynos)<br>Full GitHub Integration |
| Management & monitoring tools | Management Console<br>Command Line Interface (AWS CLI) | Heroku Dashboard<br>Heroku CLI |
| Featured customers | BMW, Samsung Business, GeoNet | Toyota, Thinking Capital, Zenrez |
Why use Heroku web hosting
In practice, this hosting provider offers a lot of benefits: a lightning-fast server setup (using the command line, you can do it within 10 seconds), easy deployment with a Git push, a plethora of add-ons to optimize your work, and versatile auxiliary tools like Redis and Docker. The free tier is also a good option for those who want to try or experiment with cloud computing. Moreover, since January 2017, auto-scaling has been available for web dynos.
It’s undisputed that the Heroku cloud is great for beginners. It may also be good for low-budget projects, since there are no DevOps costs for setting up the infrastructure (or for hiring someone to do so). Indeed, many startups choose this provider as a launching pad due to its supreme simplicity of operation.
Why choose Amazon Web Services
This solution is more attractive in terms of cost-efficiency, though it loses out on usability. Users can enjoy a tremendous number of features and products for web hosting provided by Amazon, and Elastic Beanstalk delivers nearly everything that Heroku does, but for less money. However, it is not as easy to use as its direct competitor.
Numerous supplementary products, like AWS Lightsail (described in our blog post dedicated to Ruby on Rails hosting providers), Lambda, EC2, and others, let you enhance your app hosting options and control your cloud infrastructure. At the same time, they usually require DevOps skills.
So, which provider is worth your while – Heroku servers that are attractive in terms of usability and beginner-friendliness or AWS products that are cheaper but more intricate in use?
|Heroku is the option for:||AWS is the option for:|
|– startups those who prioritize time over money; |
– those who prefer dealing with creating an app rather than devoting yourself to infrastructure mundane tasks;
– those whose goal is to deploy and test an MVP;
– products needed to be constantly updated;
– those who do not plan to spend money on hiring DevOps engineers.
|– those who have already worked with Amazon web products;|
– those who want to avoid numerous tasks related to app deployment;
– those whose goal is to build a flexible infrastructure;
– those who have strong DevOps skills or ready to hire the corresponding professionals;
– projects requiring huge computing power.