At the AWS re:Invent conference last week in Las Vegas, a number of new products were launched, ranging from Amazon Elastic Inference and a new machine learning chip to new cloud computing and serverless capabilities to two new blockchain services.
Amazon Elastic Inference
One of the key announcements was Amazon Elastic Inference, which lets customers attach GPU-powered inference acceleration to any Amazon EC2 instance, cutting the cost of running deep learning inference by up to 75%.
Inference is the second half of the machine learning (ML) process: after a model has been trained, it is put to work to generate predictions for new data samples. Amazon Elastic Inference involves attaching “just the right amount of GPU-powered inference acceleration” for that work.
“What we see typically is that the average utilization of these P3 instances’ GPUs is about 10 to 30 percent, which is pretty wasteful. With Elastic Inference, you don’t have to waste all that cost and all that GPU,” AWS chief executive Andy Jassy said at the conference. “[Amazon Elastic Inference] is a pretty significant game changer in being able to run inference much more cost-effectively”.
Amazon SageMaker notebook instances and endpoints can also use Amazon Elastic Inference, accelerating both built-in algorithms and other deep learning environments.
It will support the machine learning frameworks Apache MXNet, TensorFlow and ONNX, and the changes required to existing code are minimal.
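Accelerators are attached when an instance is launched rather than afterwards. A minimal sketch of what that request might look like with boto3, where the AMI ID, instance type and accelerator size are illustrative assumptions and no API call is actually made:

```python
# Sketch: building the request parameters for launching an EC2 instance
# with an Elastic Inference accelerator attached. The AMI ID, instance
# type and accelerator size below are illustrative, not recommendations.
def build_launch_params(ami_id, instance_type="c5.large",
                        accelerator_type="eia1.medium"):
    """Return kwargs for an EC2 run_instances call that attaches
    an Elastic Inference accelerator at launch time."""
    return {
        "ImageId": ami_id,
        "InstanceType": instance_type,
        "MinCount": 1,
        "MaxCount": 1,
        # The accelerator is requested as part of the launch, which is
        # how capacity is sized to the inference workload.
        "ElasticInferenceAccelerators": [{"Type": accelerator_type}],
    }

params = build_launch_params("ami-0123456789abcdef0")
print(params["ElasticInferenceAccelerators"][0]["Type"])  # eia1.medium
```

In a real deployment these kwargs would be passed to `boto3.client('ec2').run_instances(**params)` with valid credentials and a real AMI ID.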
Inferentia: New Machine Learning Chip
Another launch at the conference was Inferentia, a dedicated machine learning chip. “Inferentia will be a very high-throughput, low-latency, sustained-performance very cost-effective processor,” AWS CEO Andy Jassy explained.
Inferentia supports multiple data types, including INT8 and FP16, as well as mixed precision. It also supports several machine learning frameworks, including TensorFlow, Caffe2 and ONNX.
Unsurprisingly, it also supports data from AWS’ own products, including EC2, SageMaker and the new Elastic Inference engine.
AWS has lagged behind companies like Google and Nvidia in custom machine learning hardware. Google Cloud Platform announced its first custom machine learning chip, the Tensor Processing Unit (TPU), back in 2016 and is now on its third iteration. Inferentia and the other ML products represent an attempt to get into the game.
Holger Mueller, an analyst at Constellation Research, told Techcrunch: “The speed and cost of running machine learning operations — ideally in deep learning — are a competitive differentiator for enterprises. Speed advantages will make or break the success of enterprises (and nations when you think of warfare). That speed can only be achieved with custom hardware, and Inferentia is AWS’s first step to get into this game”.
AWS said it had designed Inferentia as GPU makers have been too focused on training and not enough on inference.
The chip is not immediately available but is expected to become so next year.
Two New Blockchain Services
Only a year ago AWS CEO Andy Jassy said he wasn’t especially interested in blockchain services. Things have changed. The two new blockchain services AWS introduced at re:Invent were Quantum Ledger Database (QLDB) and Amazon Managed Blockchain.
QLDB is a fully managed ledger database with a central authority that can be trusted. The service just launched in preview mode and provides an append-only, immutable journal, which can track the history of every change made to it. Each change is cryptographically chained and verifiable. Amazon said the database is capable of executing two to three times more transactions than other current products.
On the QLDB website, Amazon says the new database “eliminates the need to engage in the complex development effort of building your own ledger-like applications. With QLDB, your data’s change history is immutable – it cannot be altered or deleted – and using cryptography, you can easily verify that there have been no unintended modifications to your application’s data”.
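QLDB’s journal itself is not public code, but the cryptographic chaining it describes can be illustrated with a toy append-only ledger: each entry carries the hash of the previous entry, so any alteration of past data breaks verification. A minimal sketch, not QLDB’s actual implementation:

```python
import hashlib
import json

class ToyLedger:
    """Toy append-only journal: each entry is chained to the previous
    one by a SHA-256 hash, so history cannot be silently altered."""

    def __init__(self):
        self.entries = []

    def append(self, record):
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        payload = json.dumps(record, sort_keys=True)
        entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        self.entries.append({"record": record, "prev": prev_hash, "hash": entry_hash})

    def verify(self):
        """Recompute the chain from the start; any tampered entry fails."""
        prev_hash = "0" * 64
        for entry in self.entries:
            payload = json.dumps(entry["record"], sort_keys=True)
            expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
            if entry["prev"] != prev_hash or entry["hash"] != expected:
                return False
            prev_hash = entry["hash"]
        return True

ledger = ToyLedger()
ledger.append({"account": "a1", "balance": 100})
ledger.append({"account": "a1", "balance": 90})
print(ledger.verify())  # True
ledger.entries[0]["record"]["balance"] = 9999  # tamper with history
print(ledger.verify())  # False
```

The same idea, with a central trusted authority maintaining the chain, is what lets QLDB offer verifiable change history without a distributed consensus protocol.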
QLDB gives developers a familiar SQL-like API, a flexible document data model and transactional support. Additionally, QLDB is serverless, meaning it automatically scales to support the needs of each specific application.
Amazon Managed Blockchain is a fully managed service whose goal is to “allow you to set up and manage a scalable blockchain network with just a few clicks”. AWS says the service will scale to thousands of applications and let users run millions of transactions. Getting started is fairly straightforward: from the AWS Console, users configure nodes, add members and run applications. Once the network is running, the service handles maintenance and management tasks, such as managing certificates and tracking operational metrics like memory usage and storage resources.
AWS Managed Blockchain supports Hyperledger Fabric and will support Ethereum within the next few months.
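Network creation through the API mirrors the console flow. A sketch of the request shape for a Hyperledger Fabric network, where the network and member names, edition, framework version and voting thresholds are all illustrative assumptions and nothing is actually sent:

```python
# Illustrative request shape for creating a Hyperledger Fabric network
# with Amazon Managed Blockchain. All names and values are assumptions
# for the sketch; no API call is made here.
create_network_params = {
    "Name": "example-network",
    "Framework": "HYPERLEDGER_FABRIC",
    "FrameworkVersion": "1.2",
    "FrameworkConfiguration": {"Fabric": {"Edition": "STARTER"}},
    # Members vote on governance proposals, such as inviting new members.
    "VotingPolicy": {
        "ApprovalThresholdPolicy": {
            "ThresholdPercentage": 50,
            "ProposalDurationInHours": 24,
            "ThresholdComparator": "GREATER_THAN",
        }
    },
    # The first member is created along with the network itself.
    "MemberConfiguration": {
        "Name": "founding-member",
        "FrameworkConfiguration": {
            "Fabric": {"AdminUsername": "admin", "AdminPassword": "example-Passw0rd"}
        },
    },
}

print(create_network_params["Framework"])  # HYPERLEDGER_FABRIC
```

In practice these parameters would go to `boto3.client('managedblockchain').create_network(**create_network_params)`, after which nodes are added to the founding member.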
New Time Series Database
Another new product announced was Amazon Timestream, a database that is fully managed, fast and scalable. It is specifically geared towards IoT and operational applications, aiming to “make it easy to store and analyze trillions of events per day at 1/10th the cost of relational databases”.
Data that measures how things change over time is one of the fastest-growing data types, driven by the growth of IoT devices, smart industrial machines and new types of IT systems.
Time-series data has particular characteristics: it typically arrives in time order, it is append-only, and queries always span a specific time interval. Relational databases can store this kind of data, but they process it inefficiently because they lack optimizations such as storing and retrieving data by time interval. This is precisely the optimization Timestream provides.
The database is targeted at the easy storage and analysis of log data for DevOps, sensor data for IoT applications, and industrial telemetry data for maintenance of equipment.
Timestream is serverless and automates tasks such as rollups, retention, tiering and data compression, reducing the cost of managing data.
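Timestream’s internals are not public, but a rollup — aggregating raw events into fixed time buckets, one of the optimizations relational databases lack for this workload — can be sketched in a few lines, assuming per-bucket averaging of sensor readings:

```python
from collections import defaultdict

def rollup(events, bucket_seconds=60):
    """Aggregate (timestamp, value) events into fixed-width time buckets
    and return the average value per bucket -- a simple stand-in for the
    rollups a time-series store performs automatically."""
    buckets = defaultdict(list)
    for ts, value in events:
        # Align each timestamp to the start of its bucket.
        buckets[ts - ts % bucket_seconds].append(value)
    return {start: sum(vals) / len(vals) for start, vals in sorted(buckets.items())}

# Four raw readings collapse into two one-minute averages.
readings = [(0, 10.0), (30, 20.0), (61, 40.0), (90, 60.0)]
print(rollup(readings))  # {0: 15.0, 60: 50.0}
```

Doing this once at ingest, rather than scanning raw rows at query time, is what makes time-interval queries cheap in a purpose-built store.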
AWS claims Timestream “gives you the scale and speed to process trillions of events per day, with up to 1,000X faster query performance at 1/10th the cost of relational databases”.
AWS Security Hub
Security was another area in which AWS launched new services, including AWS Security Hub, “a new place for businesses to centrally manage compliance and identify security across AWS environments”, according to Andy Jassy. Security Hub is intended to help users gain insights and take action based on an understanding of attack patterns and techniques.
“This is going to pretty radically change how easy it is to look at what’s happening security-wise across … AWS,” Jassy said. “Whether you’re using AWS security services like Inspector for vulnerability scanning or GuardDuty for network intrusion or Macie for anomalous data patterns or whether you’re using a very large number of third-party software security services in our ecosystem.”
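Findings aggregated from services like GuardDuty, Inspector and Macie can be queried programmatically. A sketch of a findings filter for Security Hub, where the field names follow its finding-filter format and the chosen values are illustrative assumptions (no call is made here):

```python
# Illustrative filter for retrieving high-severity, active findings
# aggregated by AWS Security Hub. The values chosen are assumptions
# for the sketch; no request is sent.
findings_filter = {
    "SeverityLabel": [{"Value": "HIGH", "Comparison": "EQUALS"}],
    "RecordState": [{"Value": "ACTIVE", "Comparison": "EQUALS"}],
}

print(sorted(findings_filter))  # ['RecordState', 'SeverityLabel']
```

In practice such a filter would be passed as `boto3.client('securityhub').get_findings(Filters=findings_filter)` to pull the matching findings for triage.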
AWS already has various partners ready to trial the new service at launch, including Tenable, Symantec, CrowdStrike and McAfee.
Industry buzz made it clear that AWS has been specifically investing in the technologies, tools and applications necessary to help it maintain its position at the head of the pack in cloud computing.
James Kobielus, writing for Silicon Angle, noted the way in which “AWS jam-packed the keynotes and other sessions with dozens of announcements of new and enhanced cloud services that deepen its differentiation with respect to Microsoft Azure, Google Cloud Platform and other public cloud providers”. Many of the innovations are developer-facing, including:
- The launch of Amazon Managed Streaming for Kafka, a fully managed streaming service compatible with open-source Apache Kafka – no Kafka infrastructure management expertise is required, and existing Kafka applications can be migrated to the AWS cloud without code changes;
- New open-source tools geared towards serverless application developers – AWS Toolkits for PyCharm (already available), IntelliJ and Visual Studio Code (in preview). The toolkits support creation, debugging and deployment of serverless functions in multiple languages;
- General availability of Lambda Layers and the Lambda Runtime API – Layers support the sharing, discovery and deployment of libraries across serverless Lambda functions, while the Runtime API lets developers bring their own language runtimes to Lambda;
- Eight new AWS service integrations for the fully managed AWS Step Functions workflow service, accelerated microservice discovery via a managed service registry on the Amazon platform, and a cloud-native microservice mesh on AWS
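Much of the serverless tooling above centers on Lambda functions. A minimal handler, runnable locally, showing the shape the new toolkits debug and deploy and that Lambda Layers share code between (the event fields here are illustrative):

```python
import json

def handler(event, context):
    """Minimal AWS Lambda handler: reads a field from the incoming
    event and returns an API Gateway-style response. Helper code a
    function like this depends on is what Lambda Layers let multiple
    functions reuse without bundling it into each deployment package."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Invoked locally for testing; in Lambda, the service supplies event and context.
print(handler({"name": "re:Invent"}, None))
```

The AWS Toolkits for PyCharm, IntelliJ and Visual Studio Code are built around exactly this kind of function: creating it from a template, stepping through it locally, then deploying it.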
AWS Outposts is AWS’ own datacenter hardware, bringing AWS cloud infrastructure on-premises. It consists of configurable compute and storage racks, delivered as a fully managed service that lets customers run compute and storage on-premises and connect to the rest of AWS’ cloud.
AWS Outposts comes in two guises, the first via the VMware Cloud on AWS and the second as AWS native.
Customers who intend to use the same VMware control plane and APIs they’ve already been using to run their infrastructure can run VMware Cloud on AWS locally on AWS Outposts. This delivers the VMware Software-Defined Data Center (SDDC) compute, storage and networking infrastructure on-premises, managed as a service from the same console as VMware Cloud on AWS.
The second option is geared towards customers who would prefer to use on-premises the same APIs and control plane they normally use in AWS’ cloud; for them, there is the AWS-native variant of AWS Outposts.
“Customers want to work on-premises and in the cloud the exact same way,” Jassy said.
VMware and AWS also announced several other new capabilities that involve VMware’s technologies being brought to AWS, such as VMware Cloud Foundation for EC2.
“These customers will have the opportunity to run other software with native AWS Outposts, starting with a new integrated offering from VMware called VMware Cloud Foundation for EC2, which will feature popular VMware technologies and services that work across VMware and Amazon EC2 environments, like NSX, VMware AppDefense, and VMware vRealize Automation,” the companies explained in a statement.
“Customers are telling us that they don’t want a hybrid experience that attempts to recreate a stunted version of a cloud on-premises, because it’s perpetually out of sync with the cloud version and requires a lot of heavy lifting, managing custom hardware, different control planes, different tooling, and manual software updates,” Jassy added.
“There just isn’t a lot of value in that type of on-premises offering and that’s why these solutions aren’t getting much traction.”
In its announcements from its annual Las Vegas conference, AWS demonstrated once again its lack of fear of competing directly with smaller companies, including some of its own customers that offer similar products on Amazon’s cloud. This isn’t surprising given that AWS is the biggest public cloud in the world, generating $6.68 billion in revenue for Amazon in the third quarter alone. However, as Amazon expands into more and more areas, from grocery stores to machine learning, companies that have worked with Amazon have had to come to terms with a former partner becoming the competition. Some, such as Gap and Walmart, have chosen to move to other cloud providers, such as Microsoft and Google. The same may happen in the software provider world as Amazon’s cloud business continues widening its product lineup.
“People basically want the real thing, so in that sense, it’s good for the open-source companies,” Jay Kreps, a creator of Kafka and co-founder and CEO of Confluent, told CNBC on Thursday. “This is obviously hard, because it’s kind of a dual thing. Rather than collaborate with the company and kind of build the ecosystem of those offerings in their cloud, they obviously want to do it themselves.”