Powering all your applications from cloud to edge with Azure infrastructure

By Dustin Ward

Organizations are transforming from cloud to edge, migrating and optimizing existing workloads, building new cloud-native apps, unlocking new scenarios at the edge, and combining these strategies to meet a diverse set of business needs. Microsoft is committed to helping at every step of the way with continuous technology innovation. Today, we’re announcing product updates and enhancements across the…

Intelligently split multi-form document packages with Amazon Textract and Amazon Comprehend

By Dustin Ward

Many organizations spanning different sizes and industry verticals still rely on large volumes of documents to run their day-to-day operations. To solve this business challenge, customers are using intelligent document processing services from AWS such as Amazon Textract and Amazon Comprehend to help…
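As a rough illustration of the pattern the article describes (OCR each page with Amazon Textract, then use an Amazon Comprehend custom classifier to decide which form a page belongs to), here is a minimal Python sketch. The bucket, key, and classifier endpoint ARN are placeholders, not values from the post, and the splitting logic itself is left out.

```python
import boto3

textract = boto3.client("textract")
comprehend = boto3.client("comprehend")

# Placeholder ARN for a custom Comprehend classification endpoint (assumption).
CLASSIFIER_ENDPOINT_ARN = (
    "arn:aws:comprehend:us-east-1:123456789012:document-classifier-endpoint/example"
)

def classify_page(bucket: str, key: str) -> str:
    """OCR one page image stored in S3, then classify its form type."""
    # Extract the raw text of the page with Textract.
    ocr = textract.detect_document_text(
        Document={"S3Object": {"Bucket": bucket, "Name": key}}
    )
    text = " ".join(
        block["Text"] for block in ocr["Blocks"] if block["BlockType"] == "LINE"
    )

    # Ask the custom classifier which document class the page most likely belongs to.
    result = comprehend.classify_document(
        Text=text[:5000],  # truncated as a simple guard; check current service limits
        EndpointArn=CLASSIFIER_ENDPOINT_ARN,
    )
    return max(result["Classes"], key=lambda c: c["Score"])["Name"]
```

From there, consecutive pages that share a predicted class can be grouped into one logical document, which is the "splitting" step the title refers to.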

Why Do Aftermarket Capabilities Often Sit in the Backseat of Digital Transformation Priorities?

By Dustin Ward

By Carolyn Rostetter, Sr. Director, Industry Principal – Pegasystems Inc. By Pugal Janakiraman, Principal PDS – AWS. Internet of Things (IoT) technologies increasingly connect the automotive and industrial manufacturing world through advances in artificial intelligence (AI) and machine learning (ML). Manufacturers…

Announcing Amazon Corretto 17 support roadmap

By Dustin Ward

In September, we announced the general availability of Amazon Corretto 17. Amazon Corretto is a no-cost, multi-platform, production-ready distribution of the Open Java Development Kit (OpenJDK). The JDK community has declared that OpenJDK 17 will be a long-term supported (LTS) version, which means it will continue to be…

Choosing between storage mechanisms for ML inferencing with AWS Lambda

By Dustin Ward

This post is written by Veda Raman, SA Serverless, Casey Gerena, Sr Lab Engineer, Dan Fox, Principal Serverless SA. For real-time machine learning inferencing, customers often have several machine learning models trained for specific use-cases. For each inference request, the model must be chosen dynamically based…
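The post compares storage options (such as Amazon S3, Amazon EFS, and container images) for serving multiple models from one function. A minimal Python sketch of the dynamic-selection idea follows; the bucket name, key layout, and caching scheme are illustrative assumptions, not the article's implementation.

```python
import json
import os
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket holding model artifacts (assumption, not from the article).
MODEL_BUCKET = os.environ.get("MODEL_BUCKET", "example-model-bucket")
_model_cache = {}

def _get_model(model_name: str) -> str:
    """Download the requested model artifact once per warm container and cache it."""
    if model_name not in _model_cache:
        local_path = f"/tmp/{model_name}"
        s3.download_file(MODEL_BUCKET, f"models/{model_name}", local_path)
        # Load with your ML framework of choice; kept abstract here.
        _model_cache[model_name] = local_path
    return _model_cache[model_name]

def handler(event, context):
    """Lambda handler that picks a model per request based on the payload."""
    body = json.loads(event.get("body", "{}"))
    model_name = body.get("model", "default.bin")
    _get_model(model_name)
    # Run inference against the selected model (framework-specific, omitted).
    return {"statusCode": 200, "body": json.dumps({"model_used": model_name})}
```

Whether the artifact comes from S3, EFS, or is baked into the container image mainly changes the `_get_model` step; the per-request selection logic stays the same.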