In the application development field, there are machine learning (ML) apps and there are cloud-native ML apps. ML apps are just what the name indicates: apps that incorporate ML.
The cloud-native variety refers to ML applications that, in addition to their machine learning functions, are designed to exploit the benefits of the cloud. That includes its elasticity and flexible storage capabilities.
ML solutions can be developed using open-source frameworks, such as TensorFlow and CNTK, that run on in-house hardware. Or they can be built using cloud resources, making them cloud-native apps.
Regardless of how they’re created, apps that incorporate ML are increasing in popularity due to their wide-ranging capabilities. Today, everything from virtual personal assistants to self-driving cars uses machine learning. Nonetheless, the methodology used to develop them does make a difference. And in the case of cloud-native development, so does the cloud services provider.
The Benefits of Cloud Over In-house Hardware
Among the issues with the in-house hardware option is that training real-world models usually requires large compute clusters. That makes it difficult to scale workloads. It also requires staff with specialized skills in building, training, and deploying the models. Combined with computational and special-purpose hardware requirements, it all results in high costs for labor, development, and infrastructure.
The cloud-native option solves those issues and offers other benefits as well. Cloud-native ML apps are typically built with services packaged in containers, deployed as microservices, and managed on elastic infrastructure through agile DevOps processes and continuous delivery workflows. Among the advantages of using these modern development methodologies: reduced application development time and increased agility — both at a lower cost.
There’s no need to invest in special hardware. The cloud supplies the necessary speed and compute power on a pay-per-use basis. This makes it more cost-effective for dealing with fluctuating, often compute-intensive workloads. That, in turn, facilitates experimentation with different ML capabilities. When the projects go into production and demand for the ML features increases, development teams can simply scale up.
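To make the pay-per-use point concrete, here is a back-of-the-envelope comparison for a bursty training workload. Every rate and figure below is a hypothetical assumption chosen for illustration, not a quoted AWS or hardware price:

```python
def monthly_cost_on_demand(hours_used: float, hourly_rate: float) -> float:
    # Pay only for the compute hours actually consumed
    return hours_used * hourly_rate

def monthly_cost_in_house(hardware_cost: float, amortization_months: int,
                          monthly_ops: float) -> float:
    # Fixed capacity: the amortized hardware and ops cost accrues even when idle
    return hardware_cost / amortization_months + monthly_ops

# Hypothetical bursty workload: 120 GPU-hours per month at an assumed $3.00/hr
cloud = monthly_cost_on_demand(120, 3.00)

# Hypothetical GPU server: $30,000 amortized over 36 months plus $400/month ops
in_house = monthly_cost_in_house(30_000, 36, 400)
```

Under these assumptions, the fixed option costs more than three times as much for the same bursty usage; the economics only flip when utilization stays consistently high.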
Another benefit of cloud-native ML app development is the availability of a variety of cloud-centric resources designed to accelerate and optimize the process. Amazon Web Services (AWS) is a leader in this area. The company offers an ever-broadening range of tools and services, as well as its reliable, scalable, low-cost cloud infrastructure platform.
Why AWS for Cloud-Native Machine Learning App Development
AWS itself notes on its website, “AWS offers the broadest and deepest set of machine learning services and supporting cloud infrastructure, putting machine learning in the hands of every developer, data scientist, and expert practitioner. Named a leader in Gartner’s Cloud AI Developer services’ Magic Quadrant, AWS is helping tens of thousands of customers accelerate their machine learning journey.”
One of the most useful resources is Amazon SageMaker, an ML platform for building, training, and deploying ML models for virtually any use case. A typical SageMaker workflow starts with provisioning a Jupyter notebook to explore and visualize data sets hosted within the AWS ecosystem. There’s built-in support for one-click training on petabyte-scale data sets. The model can be tuned automatically and then deployed on auto-scaling clusters with support for built-in A/B testing.
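SageMaker’s A/B testing works by splitting endpoint traffic across weighted production variants. The weighted-routing idea behind that feature can be sketched in plain Python; the variant names and weights here are hypothetical, not taken from any real deployment:

```python
import random

def route_request(variants, rng):
    """Pick a model variant with probability proportional to its weight,
    mimicking how an endpoint splits traffic across production variants."""
    total = sum(weight for _, weight in variants)
    r = rng.uniform(0, total)
    cumulative = 0.0
    for name, weight in variants:
        cumulative += weight
        if r <= cumulative:
            return name
    return variants[-1][0]  # guard against floating-point rounding at the edge

# Hypothetical split: 90% of traffic to the current model, 10% to a candidate
variants = [("model-a", 0.9), ("model-b", 0.1)]
rng = random.Random(42)
counts = {"model-a": 0, "model-b": 0}
for _ in range(10_000):
    counts[route_request(variants, rng)] += 1
```

Because routing is probabilistic per request, the observed split converges on the configured weights as traffic volume grows, which is what makes the resulting comparison between variants statistically meaningful.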
It’s important to note that Amazon SageMaker is no one-trick pony. It contains numerous components and capabilities for use in ML app development. One is Amazon SageMaker Ground Truth, a service that makes it easy to build highly accurate ML training data sets using custom or built-in data labeling workflows for text, images, videos, and 3D point clouds.
Amazon SageMaker JumpStart provides a set of customizable solutions for the most common use cases. It also supports one-click deployment and fine-tuning of more than 150 popular open-source models for tasks such as natural language processing, object detection, and image classification.
As is customary with AWS, the company keeps expanding SageMaker’s features. There are several that we’re particularly excited about using here at ClearScale, including SageMaker Edge Manager, which helps manage and monitor ML models running on edge devices. This is particularly useful because analyzing large amounts of data with complex ML algorithms requires significant computational capability, which is why most data processing has traditionally taken place in on-premises data centers or cloud-based infrastructure.
With the increasing availability of powerful, low-energy consumption Internet of Things (IoT) devices, computations can now be executed on edge devices such as robots. This has given rise to the era of deploying advanced ML methods such as convolutional neural networks (CNNs) at the edges of the network for “edge-based” ML.
AWS Machine Learning Services in Real-world Apps
We can attest to the value of AWS ML resources at ClearScale. We’ve successfully used many of them to help our clients develop and deploy innovative, problem-solving, cloud-native ML apps. By taking advantage of the inherent features of the cloud, such as elasticity, scalability, and economies of scale, these resources enable us to reduce both development time and costs. They also make it possible to incorporate capabilities that would have been far more difficult to deliver with traditional development methods.
One project, in particular, that we like to reference was done in conjunction with Creative Practice Solutions (CPS), a medical consulting firm. Incorporating services such as Amazon Transcribe and Amazon Translate, we created an app that transcribes recorded medical appointment notes and uses the information to generate medical codes more accurately. It recognizes both US Spanish and US English and translates US Spanish into US English.
In addition, the use of AWS services helps reduce costs. For example, the solution uses Amazon Comprehend Medical. This HIPAA-eligible natural language processing (NLP) service employs ML to extract health data from medical text. CPS is charged only for the amount of text processed each month, so costs scale with actual usage rather than the size of the user base. Likewise, the use of AWS Lambda enables code to be run for virtually any app or backend service with zero administration. That reduces administration and management time — and, consequently, costs.
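Lambda’s zero-administration model comes from its programming shape: each function is a plain handler invoked per request, with no server to provision or patch. A minimal Python handler in the standard Lambda form might look like the sketch below; the event fields are hypothetical for illustration, not CPS’s actual schema:

```python
import json

def lambda_handler(event, context):
    """Minimal Lambda-style handler: report how much text was processed,
    echoing the pay-per-text-processed billing model described above."""
    text = event.get("text", "")
    return {
        "statusCode": 200,
        "body": json.dumps({"characters_processed": len(text)}),
    }

# Handlers can be exercised locally simply by calling them directly
response = lambda_handler({"text": "patient presents with mild fever"}, None)
```

Because the handler is just a function, the same code can be unit-tested locally and then deployed unchanged, which is part of what keeps management overhead so low.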
Take Your Machine Learning Apps to the Cloud
You can also get a free AI/machine learning assessment from ClearScale. You’ll come away with guidance on how to build and operate reliable, secure, and cost-effective ML workloads on AWS.