Serverless Computing

Serverless computing is a cloud architecture model in which the cloud service provider allocates hardware resources on an as-needed basis. The provider owns the physical servers and handles their installation, management, and maintenance. Server resources are allocated only while code is executing and are freed during idle time, when the application is not in use. End-users pay the provider only for what they actually use, and they need not worry about backend infrastructure capacity or maintenance.

Why adopt Serverless Computing?

There are several reasons to adopt serverless computing over conventional server-centric data centers or cloud deployments. Serverless architecture is flexible and economical; it offers greater scalability and a shorter turnaround time to release. Organizations save the cost and time of planning infrastructure space and of purchasing, installing, and maintaining servers.

Advantages of Serverless Computing

No server management

Since the vendor manages the planning, installation, and upkeep of the physical servers, organizations and developers need not worry about server maintenance or DevOps overhead. The labor and logistics costs saved can be re-invested in more productive areas or those that yield returns.

Scalability

Serverless computing scales automatically as demand or usage increases. If a function needs to run in multiple instances, the service provider's servers start, run, and end them as needed, using containers.
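To make this concrete, here is a minimal sketch of what such a function can look like, written in the style of an AWS Lambda Python handler. The event field and response shape are illustrative assumptions; the point is that the provider, not the developer, decides how many container instances of this function run at once.

```python
# A minimal AWS Lambda-style handler in Python. The platform, not the
# developer, decides how many container instances of this function run:
# roughly one per concurrent request, scaled up and torn down automatically.

import json

def handler(event, context):
    # 'event' carries the request payload; "name" is an illustrative field.
    name = event.get("name", "world")

    # Return an HTTP-style response (API Gateway-like shape shown here).
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```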

Pay-as-you-go

Code runs only when backend functions are needed by the serverless application, and it scales up automatically as required. Providers meter usage at a fine granularity, often down to individual invocations and milliseconds of execution, so bills accurately reflect actual consumption. This yields heavy savings compared to a conventional server setup, where organizations bear the operational expense of servers regardless of usage.
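As a rough illustration of this fine-grained billing, the sketch below computes a monthly bill from invocation count, duration, and memory size. The rates are assumptions for demonstration, loosely modeled on typical per-GB-second and per-request pricing, not any provider's published figures.

```python
# Illustrative pay-per-use cost model for a serverless function.
# The rates below are assumptions for demonstration, not real pricing.

PRICE_PER_GB_SECOND = 0.0000166667  # compute price per GB-second (assumed)
PRICE_PER_REQUEST = 0.0000002       # price per invocation (assumed)

def monthly_cost(invocations: int, avg_duration_ms: float, memory_mb: int) -> float:
    """Bill = metered compute (GB-seconds) + a small per-request charge."""
    gb_seconds = invocations * (avg_duration_ms / 1000) * (memory_mb / 1024)
    return gb_seconds * PRICE_PER_GB_SECOND + invocations * PRICE_PER_REQUEST

# 3 million requests a month at 120 ms and 256 MB costs a few dollars...
print(f"${monthly_cost(3_000_000, 120, 256):.2f} per month")
# ...and an idle month costs nothing, since no server sits reserved.
print(f"${monthly_cost(0, 120, 256):.2f} for an idle month")
```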

Performance for continuously running code

If an application's code runs regularly, its performance in a serverless environment matches a traditional server environment, irrespective of the number of instances running in parallel. Code that is invoked frequently stays "warm": the provider reuses an already-initialized container, so only a short start-up cycle is needed (a "warm start").

Reduced latency

Because the code is not tied to a single origin server, it can be run from anywhere, including locations close to the end-user, thus decreasing latency.

Faster deployment and easy updates

In a serverless environment, since the code lives in the cloud, developers can quickly and easily update it to develop and release newer versions of the application. Because the application is typically a collection of functions provisioned by the service provider, developers can upload code one function at a time or all at once.

This makes it fast to apply and release patches and updates with bug fixes or new features.
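As a hedged sketch of what single-function deployment can look like, the snippet below uses boto3, the AWS SDK for Python, to push new code for one Lambda function while leaving the rest of the application untouched. The function name and package path are placeholders.

```python
# Sketch: update one serverless function independently of the rest of the
# application, using boto3 (the AWS SDK for Python). The function name and
# package path are placeholders.

import boto3

lambda_client = boto3.client("lambda")

with open("build/checkout.zip", "rb") as package:
    lambda_client.update_function_code(
        FunctionName="my-checkout-function",  # only this function changes
        ZipFile=package.read(),
    )

# Every other function keeps running its current version, so a bug fix
# or small feature can ship in one quick, low-risk deploy.
```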

Disadvantages of serverless computing

Security concerns

"Multitenancy" means sharing the same infrastructure among multiple independent end-users, as hardware resources are shared in serverless computing. If multi-tenant servers are not configured properly, sharing resources across different end-users can lead to security breaches, data theft, or misuse. Configured and maintained correctly, with proper vetting, multitenancy delivers its benefits safely.

Costly for long-running processes

If an application is designed to run for long durations, the cost of serverless compute can exceed that of traditional server services. The cost benefits of serverless computing are therefore largely limited to applications that run for short durations.

Challenges in testing and debugging

It is challenging to simulate a serverless environment and observe how code will behave once deployed. With no access to backend processes, and with the application broken into many small functions, developers may find it hard to test or debug issues.

Performance degrades for irregular runs

When serverless code has not run recently, the platform must boot a fresh container before executing it, which can take considerable time and hurt performance. This is called a "cold start".
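A common mitigation is to perform expensive initialization once, at container start-up, rather than on every invocation, so that only cold starts pay the full price. The Python sketch below illustrates the pattern; the slow "model loading" step is a hypothetical stand-in for any heavy set-up work.

```python
# Sketch: structure a function so that only cold starts pay for heavy set-up.
# 'load_large_model' is a hypothetical stand-in for any expensive
# initialization (loading ML weights, opening connection pools, etc.).

import time

def load_large_model():
    time.sleep(2)            # simulate slow, one-time initialization
    return {"ready": True}

# Module-level code runs once per container, i.e. once per cold start.
MODEL = load_large_model()

def handler(event, context):
    # Warm invocations reuse MODEL and skip the two-second set-up, so only
    # the first request to a freshly booted container feels the delay.
    return {"prediction": bool(MODEL["ready"])}
```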

Single vendor dependency

Depending on a single serverless cloud provider can be risky under unforeseen circumstances. Providers should therefore be selected for features and workflows built on open, generic APIs, which keep the option of switching vendors realistic if the need arises.

When do you need a serverless architecture?

Serverless architecture is a strong fit for developers who:

1. Want to reduce go-to-market time.

2. Are building lightweight and flexible applications.

3. Need apps that scale or can be updated frequently and quickly.

4. Have apps with inconsistent usage, peak periods, or spiky traffic.

5. Need app functions closer to the end-user to reduce latency.

When should you avoid using a serverless architecture?

Large applications that run for long durations or have consistent, predictable workloads may be better served by a traditional server than by a serverless architecture, in terms of both cost and architecture.

Inovar implements Serverless Cloud Computing

Achieving business goals takes a collaborative approach and a cloud computing infrastructure tailored to the task.

We have provided many of our customers with better infrastructure and ensured effortless, uninterrupted operations. We delivered features that help SMEs find the information they need and give them the right guidance. For many clients we have digitalized entire processes with end-to-end workflow applications, and we have helped others deploy hybrid cloud solutions for data security and access control, boosting their processes with serverless engines.

We ensure that each client's business needs are met, with improved user experience and substantial cost savings. If you are looking for best-in-class infrastructure and leaders who make strategy endure, reach out to us.

Cloud migration typically refers to the process of moving digital assets, such as data, workloads, IT resources, or applications, from on-premises infrastructure to the cloud. According to a Gartner, Inc. forecast, the worldwide public cloud services market grew 17% in 2020 to a total of $266.4* billion. But in migrating applications to the public cloud, IT teams confront two separate but related issues that add cost and complexity: refactoring and repatriation. Let us look at the reasons businesses avoid refactoring and repatriation before we delve into probable solutions for the cloud migration journey.


Why businesses avoid refactoring

Refactoring most often comes up with custom-built apps. With this approach, you rewrite the code and completely re-engineer the application from scratch to make it more cloud-ready. APIs and integrations with compute, storage, and network resources also fall under the refactoring process.

Refactoring turns legacy apps into cloud-native apps, makes it feasible for developers to use modern tools such as containers and microservices, and saves money in the long run. Although this approach makes an application more scalable and responsive than its on-premises counterpart, it takes a lot of additional time and resources, so upfront costs are much higher. The risk is also high, because the IT team must be careful not to change the application's external behavior while rewriting the code.


The IT team can instead apply the lift-and-shift method to move an application from the data center to the cloud and avoid refactoring entirely. This method simplifies the migration, saves time and cost, and accelerates the shift to the cloud. Pervasive automation can make a lift-and-shift migration nearly seamless.

Apart from that, IT consultants can leverage a cloud platform that tightly integrates on-premises infrastructure with cloud management software. By using the Microsoft Azure platform as a migration path, IT can build a cloud strategy that delivers agility and cost optimization. This holistic approach not only helps you navigate the journey successfully but also ensures that your organization realizes new benefits, including agility, scale, and efficiency, once your workloads are running in the cloud.

Why businesses avoid repatriation

"Unclouding" or "repatriation" is the process of moving application workloads and data back from the public cloud to local infrastructure within an on-premises data center. As noted by TechTarget, concerns about operating costs, availability, changing business needs, security and compliance, and cloud-based workload performance are the most common reasons for repatriation.

Repatriation is also expensive and time-consuming, which is why businesses want to avoid it. An organization first bears the expense of migrating an app to the cloud; then it realizes it is spending too much money there and running into other issues; finally, it bears the expense of migrating that application back to the data center. The whole cycle creates difficulty and confusion for the business.


The cost of repatriation varies with the application, workload, and company infrastructure. According to one survey, a company can save almost $75** million in infrastructure costs over two years by moving apps from the cloud back to the data center, while reducing operational costs 66% through reduced downtime and increasing capacity by 25%**. Even IT experts sometimes have trouble understanding the complexities of the public cloud. AWS, Azure, and other public cloud vendors now offer a gateway to the public cloud and back with solutions like AWS Outposts and Azure Stack. Businesses can consider using Azure Import/Export to securely import or export large amounts of data to Azure Blob storage and Azure Files by shipping disk drives to an Azure data center.
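Import/Export itself involves shipping physical drives, but for data volumes that can travel over the network, a hedged sketch of the programmatic route to the same Blob storage, using the azure-storage-blob Python SDK, looks like this (connection string, container, and file names are placeholders):

```python
# Sketch: moving data into Azure Blob storage over the network with the
# azure-storage-blob SDK. The connection string, container, and file
# names are placeholders.

from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<connection-string>")
blob = service.get_blob_client(container="migration", blob="backup/db.bak")

with open("backup/db.bak", "rb") as data:
    blob.upload_blob(data, overwrite=True)  # streams the file to Azure
```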

Simplifying the cloud migration path

Migration to the cloud does not have to be difficult if an organization plans properly. Choosing the right cloud platform can also increase speed and reduce risk. We hope the three simple steps below will help every enterprise migrate smoothly.

Step 1: Assess your application inventory carefully

First, an organization needs a proper assessment that categorizes the application portfolio into groups (such as modern apps, legacy apps, and everything else, including Java, .NET, and web applications) and covers the dependencies and ecosystem around each application. At Inovar Consulting, we first talk with our clients to understand physical and virtual server configurations, security and compliance requirements, existing support, network topology, and data dependencies. This shows us where to begin for maximum results and lets us create a strategy suited to present needs.

Step 2: Create a proper plan

The best way to avoid the costs and complexities of refactoring and repatriation is a thorough analysis of the applications. Before suggesting any migration path, we ask the questions below and encourage our customers to think beyond the applications.

  1. What are the business goals, and why are you considering the cloud?
  2. What is the architecture of the application? Does it follow the cloud-native principles for high availability?
  3. What are the interdependencies between each app and workload?
  4. What tools are being used to manage and enforce the security policies around the application?
  5. How much will the migration cost and, once it is completed, what will be the ongoing operational costs?
  6. How aware are you of the underlying topology and performance characteristics?

Prioritize the migration based on the answers to these questions. For example, you may decide that an application is better served by incorporating public cloud services in a hybrid cloud model that involves a lift-and-shift migration. Or, if the analysis determines that refactoring is mandatory, it is vital to choose the right migration path so you can avoid repatriation in the future.


Step 3: Migrate quickly to contain the cost

In our experience, what looks like the most cost-effective path to the cloud often ends up costing more in the long run. The reason is that during a drawn-out migration an organization carries two infrastructures, incurring costs on both sides. To contain cost and begin realizing savings, a speedy migration with mitigated risks is the best approach. Statistics show organizations can see 40 to 60%*** operational cost savings when moving enterprise applications to a managed private or public cloud service.

Taking the next step to the future

Refactoring and repatriation are complex issues for organizations. But in most cases, cloud migration suffers because enterprises lack the experience, knowledge, or resources to undertake the analysis, set specific requirements, and make the best long-term decision.

Cloud migration has become crucial for digital transformation and for embracing future technology trends. There is no better time to enlist the expertise of trusted partners, which can have a dramatic impact on your success.

If you are still wrestling with the questions Refactor? Repatriate? Lift-and-shift? then it is time to reach out to us and find out how the combination of a cloud-first strategy and the Azure technology platform can simplify your cloud migration.

References:

*Gartner Forecasts Worldwide Public Cloud Revenue to Grow 17% in 2020. Retrieved from Gartner.com.

**Malachy, J. (2019). Psst. Hey. Hey you. We have to whisper this in case the cool kidz hear, but… it's OK to pull your data back from the cloud. Retrieved from TheRegister.com.

***Pierrefricke (2019). 3 Steps to Simplify Application Migration to the Cloud. Retrieved from Rackspace.com.

"Cloud is about how you do computing, not where you do computing," rightly said Paul Maritz, CEO of VMware.

The coronavirus quarantine has forced organisations across various industries to establish remote operations, and not all companies have been able to handle the forced move to a virtual office. Before the impact of the coronavirus, only 62* percent of workloads were in the cloud, but 87 percent of IT decision makers expect 95 percent of workloads to be in the cloud by 2025. This acceleration was fuelled by Covid-19 acting as a catalyst for cloud migration.

The current state of remote work was largely unforeseen; no disaster recovery plan included anything for a mass outbreak of a virus. A transition to remote work on such a massive scale would not have been possible in the server-led infrastructure of 15 to 20 years ago. Large enterprises can now deliver new services 30 to 60** percent faster through cloud migration. Several months into quarantine, organisations have started refining and optimising workloads in the cloud. When and how businesses will be able to resume on-premises activities at the office remains a big question.


The cost of cloud migration was one of the major reasons many companies did not migrate to the cloud. But current circumstances have led organisations around the globe to renew their efforts to get into the public cloud. It is time to stop assuming everything must be a corporate-owned machine in a corporate office, and instead use the opportunity to virtualise servers, storage, and networks. In this crisis, virtualisation must extend to end-user devices, and Mobile Device Management is something every company needs to think about.

Though corporate IT resources are built to offer high levels of security, quarantines mean that direct, in-person access to them is limited if not completely unavailable. Enterprises considering digital transformation prior to the pandemic might have wanted to move only 30-40*** percent of their existing infrastructure to the public cloud. But now, more than 70** percent of executives have indicated a belief that cloud will help them innovate faster while reducing implementation and operational risks.

Long-term plans for organisations may include use of the public cloud, mobile computing, and a move to 5G wireless networks. This allows companies to operate anytime and anywhere, which is much easier for born-in-the-cloud companies. Large enterprises cannot move as nimbly, but circumstances have shown the need for rapid change beyond static systems and datacentres. Organisations that embrace flexibility will recover faster than their competitors.


The entire process, from start to finish, requires significant change, and change management in how an organisation's teams interact, process, and share data with each other. The sweeping global transition to remote work has thrown virtual collaboration tools into the spotlight of economic activity, and demand for them has sky-rocketed.

“Beyond the emergency action needed at the start of the pandemic, many organizations have turned to mitigating risk through flexibility of infrastructure”, says David Linthicum, chief cloud strategy officer with audit and consulting advisor Deloitte.

While cloud adoption offers a powerful opportunity to unlock business value, there remains a distinguishable hesitation around a few challenges of this transition. Cybersecurity is the biggest concern and remains a significant barrier when companies think of migrating to the cloud. Security threats have increased substantially during Covid-19, and organisations need to recognise and respond. Advanced cybersecurity solutions are now available which can help boost the security architecture.


Cloud computing, which has been touted for its flexibility, reliability and security, has emerged as one of the few saving graces for businesses during this pandemic. Its use is critical for companies to maintain operations, but even more critical for their ability to continue to service their customers.

Cloud adoption provides an avenue of growth that can help offset the economic challenges faced by various organisations. Cloud budgets today account for approximately 5** percent of the average IT budget, a figure that is likely to double by 2023.

As organisations have started adjusting to the new reality of the pandemic, cloud adoption represents a multi-billion-dollar opportunity for businesses in every region of the world. The world will eventually emerge from this period of remote work, but the way we do business will be transformed forever.

References:

*Sead Fadilpasic (June 2020). Cloud migration set for major rise following pandemic. Retrieved from ITProPortal.

**Luv Grimond and Alain Schneuwly (June 2020). Accelerating Cloud Adoption After Pandemic. Retrieved from Jakarta Globe.

***Joao-Pierre S. Ruth (April 2020). Next Steps for Cloud Infrastructure Beyond the Pandemic. Retrieved from InformationWeek.

Not to name names, but I’ve been reading in several publications that one of the main reasons to go to multicloud is to avoid vendor lock-in. While I can see the logic behind this assumption—that having more cloud providers means you can be more independent—the reality is much different.

For example, if you have an application in the cloud and you're using a multicloud architecture, you'll have two or three choices of where to place that application workload and its associated data: Amazon Web Services, Microsoft Azure, and/or Google Cloud Platform.

You’ll pick one cloud for that application and do the standard migration processes to get it up and running. What most people don’t understand is that, as part of that migration, you’re likely to make the application workloads cloud-native. That means you’re going to alter the applications, slightly or heavily, to take advantage of the native cloud services, such as API management, governance, security, and storage.

By altering the application to be cloud-native, you're locking yourself in to that cloud provider for that application. If you don't go the cloud-native route, it'll be easier to migrate the application to a different provider later, but at the price of a suboptimal deployment, because it's not cloud-native.

You have to make a trade-off between using advanced, cloud-native application capabilities, and thus accepting vendor lock-in, or keeping the app generic and less optimal to avoid that lock-in. It does not matter whether you're using a single-cloud or a multicloud architecture; the lock-in trade-off is the same.
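One way to see this trade-off in code: the hedged Python sketch below hides object storage behind a small generic interface, with only a thin adapter tied to one provider (AWS S3 via boto3 here; the class, bucket, and key names are illustrative). Swapping clouds then means writing one new adapter rather than rewriting the app.

```python
# Sketch of the lock-in trade-off: the app core depends on a generic
# BlobStore interface; only the adapter below is tied to one provider
# (AWS S3 via boto3). Class, bucket, and key names are illustrative.

from abc import ABC, abstractmethod

class BlobStore(ABC):
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...

class S3BlobStore(BlobStore):
    """AWS-specific adapter; an Azure or GCP adapter would mirror it."""

    def __init__(self, bucket: str):
        import boto3
        self._s3 = boto3.client("s3")
        self._bucket = bucket

    def put(self, key: str, data: bytes) -> None:
        self._s3.put_object(Bucket=self._bucket, Key=key, Body=data)

    def get(self, key: str) -> bytes:
        return self._s3.get_object(Bucket=self._bucket, Key=key)["Body"].read()

# The application core never imports boto3; it sees only BlobStore.
def save_report(store: BlobStore, report: bytes) -> None:
    store.put("reports/latest.bin", report)
```

Note the price of this genericity: anything provider-specific (storage classes, event notifications, presigned URLs) stays invisible to the app unless the interface grows to expose it.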


Of course, it is an advantage to have another public cloud connected and integrated into your architecture, so that other public cloud options are available if and when needed. But you still have to make that same trade-off between being cloud-native and avoiding lock-in.

You might think you can avoid the trade-off by using containers or otherwise writing applications so they are portable. But there is a trade-off there as well. Containers are great, and they do provide cloud-to-cloud portability, but you'll have to modify most applications to take full advantage of them. That could be an even bigger cost than going cloud-native. Is it worth the avoided lock-in? That's a question you'll need to answer for each case.

Moreover, writing applications to be portable typically leads to a least-common-denominator approach so they can work on all platforms. That means they will not work especially well anywhere, because they are not cloud-native. You could write portable applications that are cloud-native to multiple clouds, but then you're really writing the application multiple times in advance and using only one instance at a time. That's complex and expensive.

Lock-in is unavoidable. It's a choice we all must make in several areas: language, tooling, architecture, and, yes, platform. The key is to choose each lock-in point wisely, to minimize the need to change horses. When a change must happen, there will be a price to pay. If you make the right choices, you'll pay that price rarely, and you'll have gained more from those choices in the meantime than you would have from a least-common-denominator approach.