
Feed aggregator

Airbnb – Reinventing the Hospitality Industry on AWS

AWS Blog - Wed, 03/23/2016 - 15:22

Airbnb is a classic story of how a few people with a great idea can disrupt an entire industry. Since its launch in 2008, over 80 million guests have stayed on Airbnb in over 2 million homes in over 190 countries. They recently opened 4,000 homes in Cuba to travelers around the globe. The company was also an early adopter of AWS.

In the guest post below, Airbnb Engineering Manager Kevin Rice talks about how AWS was an important part of the company’s startup days, and how it stays that way today.

— Jeff;

PS – Learn more about how startups can use AWS to get their business going.

Early Days
Our founders recognized that for Airbnb to succeed, they would need to move fast and stay lean. Critical to that was minimizing the time and resources devoted to infrastructure. Our teams needed to focus on getting the business off the ground, not on basic hosting tasks.

Fortunately, at the time, Amazon Web Services had built up a mature offering of compute and storage services that allowed our staff to spin up servers without having to contact anyone or commit to minimum usage requirements. They decided to migrate nearly all of the company’s cloud computing functions to AWS. When you’re a small company starting out, you need to get as much leverage as possible from your available resources, and our employees wanted to focus on the things that were unique to the business’s success.

Airbnb quickly adopted many of the essential services of AWS, such as Amazon EC2 and Amazon S3. The original MySQL database was migrated to the Amazon Relational Database Service (Amazon RDS) because RDS greatly simplifies so many of the time-consuming administrative tasks typically associated with databases, like automating replication and scaling procedures with a basic API call or through the AWS Management Console.
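The replication that RDS automates really does come down to a single API call. Here is a minimal sketch (not Airbnb's actual code; the instance identifiers are hypothetical) of creating a MySQL read replica with boto3, the kind of task that would otherwise require manual MySQL replication setup:

```python
def replica_params(source_id: str, replica_id: str,
                   instance_class: str = "db.r3.large") -> dict:
    """Build the parameters for an RDS read-replica request."""
    return {
        "DBInstanceIdentifier": replica_id,
        "SourceDBInstanceIdentifier": source_id,
        "DBInstanceClass": instance_class,
    }

def create_replica(source_id: str, replica_id: str):
    # Requires AWS credentials and an existing source instance to actually run.
    import boto3
    rds = boto3.client("rds")
    return rds.create_db_instance_read_replica(**replica_params(source_id, replica_id))

if __name__ == "__main__":
    print(replica_params("main-db", "replica-1"))
```

The same operation is available through the AWS Management Console, as the post notes.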

Sample Airbnb Listings for Barcelona, Spain as of March 23, 2016

Continuous Innovation
A big part of our success is due to an intense focus on continual innovation. For us, an investment in AWS is really about making sure our engineers are focused on the things that are uniquely core to our business. Everything that we do in engineering is ultimately about creating great matches between people. Every traveler and every host is unique, and people have different preferences for what they want out of a travel experience.

So a lot of the work that we do in engineering is about matching the right people together for a real world, offline experience. Part of it is machine learning, part of it is search ranking, and part of it is fraud detection—getting bad people off of the site and verifying that people are who they say they are. Part of it is about the user interface and how we get explicit signals about your preferences. In addition, we build infrastructure that both enables these services and that supports our engineers to be productive and to safely deploy code any time of the day or night.

We’ve stayed with AWS through the years because we have a close relationship, which gives us insight and input into the AWS roadmap. For example, we considered building a key management system in house, then saw that the AWS Key Management Service could provide the functionality we were looking for to enhance security. Turning to KMS saved three engineers about six months of development time—valuable resources that we could redirect to other business challenges, like making our matching engine even better. Or take Amazon RDS, which we’ve now relied on for years. We take advantage of RDS Multi-AZ deployments for failover, which would be really time-consuming to create in house. It’s a huge feature for us that protects our main data store.

Supporting Growth
As we’ve grown from a startup to a company with a global presence, we’re still paying close attention to the value of our hosting platform. The flexibility AWS gives us is important. We experiment quickly and continuously with new ideas. We are constantly looking at ways to better serve our customers. We don’t always know what’s coming and what kind of technology we’ll need for new projects, and being able to go to AWS and get the hosting and services we need within a matter of minutes is huge.

We haven’t slowed down as we’ve gotten bigger, and we don’t intend to. We still view ourselves as a scrappy startup, and we’ll continue to need the same things we’ve always needed from AWS.

I should mention that we are looking for developers with AWS experience. Here are a couple of openings:

  • Software Engineer, Site Reliability.
  • Software Engineer, Production Infrastructure.

— Kevin Rice, Engineering Manager, Airbnb


Categories: Cloud

Amazon RDS for SQL Server – Support for Windows Authentication

AWS Blog - Wed, 03/23/2016 - 12:20

Regular readers of this blog will know that I am a big fan of Amazon Relational Database Service (RDS). As a managed database service, it takes care of the more routine aspects of setting up, running, and scaling a relational database.

We first launched support for SQL Server in 2012. Since that time we have added many features including SSL support, major version upgrades, transparent data encryption, and Multi-AZ.  Each of these features broadened the applicability of RDS for SQL Server and opened the door to additional use cases.

Many organizations store their account credentials and the associated permissions in Active Directory. The directory provides a single, coherent source for this information and allows for centralized management.  Given that you can use the AWS Directory Service to run the Enterprise Edition of Microsoft Active Directory in the AWS Cloud,  it is time to take the next step!

Support for Windows Authentication
You can now allow your applications to authenticate against Amazon RDS for SQL Server using credentials stored in the AWS Directory Service for Microsoft Active Directory (Enterprise Edition). Keeping all of your credentials in the same directory will save you time and effort because you will no longer have to find and update each copy. This may also improve your overall security profile.

You can enable this feature and choose an Active Directory when you create a new database instance that runs SQL Server. You can also enable it for an existing database instance. Here’s how you choose a directory when you create a new database instance (you can also create a new directory at this point):

To learn more, read about Using Microsoft SQL Server Windows Authentication with a SQL Server DB Instance.
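For readers who provision instances programmatically rather than through the console, here is a hedged sketch of the equivalent boto3 call. The `Domain` and `DomainIAMRoleName` parameters on `create_db_instance` are what tie the instance to a directory; the identifiers, instance class, and role name below are hypothetical placeholders:

```python
def sqlserver_params(instance_id: str, directory_id: str, role_name: str) -> dict:
    """Parameters for a new RDS SQL Server instance with Windows Authentication."""
    return {
        "DBInstanceIdentifier": instance_id,
        "Engine": "sqlserver-se",
        "DBInstanceClass": "db.m4.large",
        "AllocatedStorage": 200,
        "MasterUsername": "admin",
        "MasterUserPassword": "CHANGE_ME",   # placeholder; use a real secret
        "Domain": directory_id,              # d-xxxxxxxxxx from AWS Directory Service
        "DomainIAMRoleName": role_name,      # IAM role that lets RDS join the domain
    }

def create_instance(instance_id: str, directory_id: str, role_name: str):
    # Requires AWS credentials and an existing directory to actually run.
    import boto3
    rds = boto3.client("rds")
    return rds.create_db_instance(**sqlserver_params(instance_id, directory_id, role_name))
```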

Now Available
This feature is now available in the US East (Northern Virginia), US West (Oregon), Europe (Ireland), Asia Pacific (Sydney), Asia Pacific (Tokyo), and Asia Pacific (Singapore) Regions and you can start using it today. There is no charge for the feature, but you will pay the standard rate for the use of AWS Directory Service for Microsoft Active Directory.

— Jeff;



Categories: Cloud

Additional Pricing Options for AWS Marketplace Products

AWS Blog - Tue, 03/22/2016 - 19:09

Forward-looking ISVs (Independent Software Vendors) are making great use of AWS Marketplace.  Users can find, buy, and start using products in minutes, without having to procure hardware or install any software. This streamlined delivery method can help ISVs to discover new customers while also decreasing the length of the sales cycle. The user pays for the products via their existing AWS account, per the regular AWS billing cycle.

As part of the on-boarding process for AWS Marketplace, each ISV has the freedom to determine the price of the software. The ISV can elect to offer prices for monthly and/or annual usage, generally with a discount. For software that is traditionally licensed on something other than time, ISVs make multiple entries in AWS Marketplace, representing licensing options on their chosen dimension.

This model has worked out well for many types of applications. However, as usual, there’s room to do even better!

More Pricing Options
ISVs have told us that they would like to have some more flexibility when it comes to packaging and pricing their software and we are happy to oblige. Some of them would like to extend the per-seat model without having to create multiple entries. Others would like to charge on other dimensions. A vendor of security products might want to charge by the number of hosts that were scanned. Or, a vendor of analytic products might want to charge based on the amount of data processed.

In order to accommodate all of these options, ISVs can now track and report on usage based on a pricing dimension that makes sense for their product (number of hosts scanned, amount of data processed, and so forth). They can also establish a per-unit price for this usage ($0.50 per host, $0.25 per GB of data, and so forth). Charges for this usage will appear on the user’s AWS bill.

I believe that this change will open the door to an even wider variety of products in the AWS Marketplace.

Implementing New Pricing Options
If you are an ISV and would like to apply this new pricing model to your AWS Marketplace products, you need to add a little bit of code to your app. You simply measure usage along the appropriate dimension(s) and then call a new AWS API function to report on the usage. You must send this data (also known as a metering record) once per hour, even if there’s no usage for the hour. AWS Marketplace expects each running copy of the application to generate a metering record each hour in order to confirm that the application is still functioning properly. If the application stops sending records, AWS will email the customer and ask them to adjust their network configuration.

Here’s a sample call to the new MeterUsage function:

AWSMarketplaceMetering::MeterUsage("4w1vgsrkqdkypbz43g7qkk4uz","2015-05-19T07:31:23Z", "HostsScanned", 2);

The parameters are as follows:

  1. AWS Marketplace product code.
  2. Timestamp (UTC), in ISO-8601 format.
  3. Usage dimension.
  4. Usage quantity.

The usage data will be made available to you as part of the daily and monthly seller reports.
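The same four parameters map directly onto the Marketplace Metering Service API in the AWS SDKs. Here is a minimal boto3 sketch of the hourly metering call; the product code is the one from the sample above, and the dimension name is whatever you registered for your product:

```python
def metering_record(product_code: str, dimension: str, quantity: int, ts=None) -> dict:
    """Build one hourly metering record (timestamp defaults to now, in UTC)."""
    from datetime import datetime, timezone
    return {
        "ProductCode": product_code,
        "Timestamp": ts or datetime.now(timezone.utc),
        "UsageDimension": dimension,
        "UsageQuantity": quantity,
    }

def report_usage(record: dict):
    # Requires AWS credentials; call once per hour from each running copy.
    import boto3
    client = boto3.client("meteringmarketplace")
    return client.meter_usage(**record)
```

An hour with no usage would simply be reported with a quantity of zero, satisfying the once-per-hour requirement described above.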

Some Examples
Here are a few examples of products that are already making use of this new pricing option. As you can see from each product’s Infrastructure Fees, these vendors have chosen to price their products along a variety of interesting (and relevant) dimensions:

  • SoftNAS Cloud NAS
  • Aspera faspex On-Demand
  • Chef Server
  • Trend Micro Deep Security

Available Now
This new pricing option is available now and you can start using it today!

— Jeff;
Categories: Cloud

How should you decouple Drupal?

Drupal News - Tue, 03/22/2016 - 08:38

Republished from buytaert.net

With RESTful web services in Drupal 8 core, Drupal can function as an API-first back end serving browser applications, native applications on mobile devices, in-store displays, even in-flight entertainment systems (Lufthansa is doing so in Drupal 8!), and much more. When building a new website or web application in 2016, you may ask yourself: how should I decouple Drupal? Do I build my website with Drupal's built-in templating layer or do I use a JavaScript framework? Do I need Node.js?

There is a lot of hype around decoupled architectures, so before embarking on a project, it is important to make a balanced analysis. Your choice of architecture has implications on your budget, your team, time to launch, the flexibility for content creators, the ongoing maintenance of your website, and more. In this blog post, I'd like to share a flowchart that can help you decide when to use what technology.

This flowchart shows three things:

First, using coupled Drupal is a perfectly valid option for those who don't need extensive client-side rendering and state management. In this case, you would use Drupal's built-in Twig templating system rather than heavily relying on a JavaScript framework. You would use jQuery to take advantage of limited JavaScript where necessary. Also, with BigPipe in Drupal 8.1, certain use cases that typically needed asynchronous JavaScript can now be done in PHP without slowing down the page (e.g. communication with an external web service delaying the display of user-specific real-time data). The advantage of this approach is that content marketers are not blocked by front-end developers as they assemble their user experiences, thus shortening time to market and reducing investment in ongoing developer support.

Second, if you want all of the benefits of a JavaScript framework without completely bypassing Drupal's HTML generation and all that you get with it, I recommend using progressively decoupled Drupal. With progressive decoupling, you start with Drupal's HTML output, and then use a JavaScript framework to add interactivity on the client side. One of the most visited sites in the world, The Weather Channel (100 million unique visitors per month), does precisely this with Angular 1 layered on top of Drupal 7. In this case, you can enjoy the benefits of having a “decoupled” team made up of both Drupal and JavaScript developers progressing at their own velocities. JavaScript developers can build richly interactive experiences while leaving content marketers free to assemble those experiences without needing a developer's involvement.

Third, whereas fully decoupled Drupal makes a lot of sense when building native applications, for most websites, the leap to fully decoupling is not strictly necessary, though a growing number of people prefer using JavaScript these days. Advantages include some level of independence from the underlying CMS, the ability to tap into a rich toolset around JavaScript (e.g. Babel, Webpack, etc.) and a community of JavaScript front-end professionals. But if you are using a universal JavaScript approach with Drupal, it's also important to consider the drawbacks: you need to ask yourself if you're ready to add more complexity to your technology stack and possibly forgo functionality provided by a more integrated content delivery system, such as layout and display management, user interface localization, and more. Losing that functionality can be costly, increase your dependence on a developer team, and hinder the end-to-end content assembly experience your marketing team expects, among other things.

It's worth noting that over time we are likely to see better integrations between Drupal and the different JavaScript frameworks (e.g. Drupal modules that export their configuration, and SDKs for different JavaScript frameworks that use that configuration on the client-side). When those integrations mature, I expect more people will move towards fully decoupled Drupal.

Fully decoupled websites built with JavaScript typically employ Node.js on the server to improve initial page-load performance, but in the case of Drupal this is not necessary, as Drupal can do the server-side pre-rendering for you. Many JavaScript developers opt to use Node.js for the convenience of shared rendering across server and client rather than for the specific things that Node.js excels in, like real-time push, concurrent connections, and bidirectional client-server communication. In other words, most Drupal websites don't need Node.js.

In practice, I believe many organizations want to use all of these content delivery options. In certain cases, you want to let your content management system render the experience so you can take full advantage of its features with minimal or no development effort (coupled architecture). But when you need to build a website that needs a much more interactive experience or that integrates with unique devices (e.g. on in-store touch screens), you should be able to use that same content management system's content API (decoupled architecture). Fortunately, Drupal allows you to use either. The beauty of choosing from the spectrum of fully decoupled Drupal, progressively decoupled Drupal, and coupled Drupal is that you can do what makes the most sense in each situation.

Special thanks to Preston So, Alex Bronstein and Wim Leers for contributions to this blog post. We created at least 10 versions of this flowchart before settling on this one.

Continue the conversation on buytaert.net

Categories: Drupal

New – CloudWatch Metrics for Spot Fleets

AWS Blog - Mon, 03/21/2016 - 18:50

You can launch an EC2 Spot fleet with a couple of clicks. Once launched, the fleet allows you to draw resources from multiple pools of capacity, giving you access to cost-effective compute power regardless of the fleet size (from one instance to many thousands). For more information about this important EC2 feature, read my posts: Amazon EC2 Spot Fleet API – Manage Thousands of Spot Instances with One Request and Spot Fleet Update – Console Support, Fleet Scaling, CloudFormation.

I like to think of each Spot fleet as a single, collective entity. After a fleet has been launched, it is an autonomous group of EC2 instances. The instances may come and go from time to time as Spot prices change (and your mix of instances is altered in order to deliver results as cost-effectively as possible) or if the fleet’s capacity is updated, but the fleet itself retains its identity and its properties.

New Spot Fleet Metrics
In order to make it even easier for you to manage, monitor, and scale your Spot fleets as collective entities, we are introducing a new set of Spot fleet CloudWatch metrics.

The metrics are reported across multiple dimensions: for each Spot fleet, for each Availability Zone utilized by each Spot fleet, for each EC2 instance type within the fleet, and for each Availability Zone / instance type combination.

The following metrics are reported for each Spot fleet (you will need to enable EC2 Detailed Monitoring in order to ensure that they are all published):

  • AvailableInstancePoolsCount
  • BidsSubmittedForCapacity
  • CPUUtilization
  • DiskReadBytes
  • DiskReadOps
  • DiskWriteBytes
  • DiskWriteOps
  • EligibleInstancePoolCount
  • FulfilledCapacity
  • MaxPercentCapacityAllocation
  • NetworkIn
  • NetworkOut
  • PendingCapacity
  • StatusCheckFailed
  • StatusCheckFailed_Instance
  • StatusCheckFailed_System
  • TargetCapacity
  • TerminatingCapacity

Some of the metrics provide insight into the operation of the Spot fleet bidding process. For example:

  • AvailableInstancePoolsCount – Indicates the number of instance pools included in the Spot fleet request.
  • BidsSubmittedForCapacity – Indicates the number of bids that have been made for Spot fleet capacity.
  • EligibleInstancePoolCount – Indicates the number of instance pools that are eligible for Spot instance requests. A pool is ineligible when either (1) the Spot price is higher than the On-Demand price or (2) the bid price is lower than the Spot price.
  • FulfilledCapacity – Indicates the amount of capacity that has been fulfilled for the fleet.
  • PercentCapacityAllocation – Indicates the percent of capacity allocated for the given dimension. You can use this in conjunction with the instance type dimension to determine the percent of capacity allocated to a given instance type.
  • PendingCapacity – The difference between TargetCapacity and FulfilledCapacity.
  • TargetCapacity – The currently requested target capacity for the Spot fleet.
  • TerminatingCapacity – The fleet capacity for instances that have received Spot instance termination notices.

These metrics will allow you to determine the overall status and performance of each of your Spot fleets. As you can see from the names of the metrics, you can easily observe the disk, CPU, and network resources consumed by the fleet. You can also get a sense for the work that is happening behind the scenes as bids are placed on your behalf for Spot capacity.

You can further inspect the following metrics across the Availability Zone and/or instance type dimensions:

  • CPUUtilization
  • DiskReadBytes
  • DiskReadOps
  • DiskWriteBytes
  • FulfilledCapacity
  • NetworkIn
  • NetworkOut
  • StatusCheckFailed
  • StatusCheckFailed_Instance
  • StatusCheckFailed_System

These metrics will allow you to see if you have an acceptable distribution of load across Availability Zones and/or instance types.

You can aggregate these metrics using Max, Min, or Avg in order to observe the overall utilization of your fleet. However, be aware that using Avg does not always make sense when used across a fleet comprised of two or more types of instances!
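As a sketch of how you might retrieve one of these metrics, here is a hedged boto3 example that builds a CloudWatch query for a fleet's FulfilledCapacity, using the AWS/EC2Spot namespace and the fleet-level FleetRequestId dimension described above (the fleet request ID shown is a hypothetical placeholder):

```python
def fulfilled_capacity_query(fleet_request_id: str, hours: int = 1) -> dict:
    """Build a CloudWatch GetMetricStatistics query for a Spot fleet's capacity."""
    from datetime import datetime, timedelta, timezone
    now = datetime.now(timezone.utc)
    return {
        "Namespace": "AWS/EC2Spot",
        "MetricName": "FulfilledCapacity",
        "Dimensions": [{"Name": "FleetRequestId", "Value": fleet_request_id}],
        "StartTime": now - timedelta(hours=hours),
        "EndTime": now,
        "Period": 300,                 # 5-minute datapoints
        "Statistics": ["Average"],
    }

def fetch_metric(fleet_request_id: str):
    # Requires AWS credentials and a running Spot fleet to actually return data.
    import boto3
    cw = boto3.client("cloudwatch")
    return cw.get_metric_statistics(**fulfilled_capacity_query(fleet_request_id))
```

Adding an InstanceType or AvailabilityZone entry to the Dimensions list would narrow the query to one of the finer-grained views mentioned earlier.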

Available Now
The new metrics are available now.

— Jeff;
Categories: Cloud

AWS Week in Review – March 14, 2016

AWS Blog - Mon, 03/21/2016 - 12:28

Let’s take a quick look at what happened in AWS-land last week:


March 14

  • We announced that the Developer Preview of AWS SDK for C++ is Now Available.
  • We celebrated Ten Years in the AWS Cloud.
  • We launched Amazon EMR 4.4.0 with Sqoop, HCatalog, Java 8, and More.
  • The AWS Compute Blog announced the Launch of AWS Lambda and Amazon API Gateway in the EU (Frankfurt) Region.
  • The Amazon Simple Email Service Blog announced that Amazon SES Now Supports Custom Email From Domains.
  • The AWS Java Blog talked about Using Amazon SQS with Spring Boot and Spring JMS.
  • The AWS Partner Network Blog urged you to Take Advantage of AWS Self-Paced Labs.
  • The AWS Windows and .NET Developer Blog showed you how to Retrieve Request Metrics from the AWS SDK for .NET.
  • The AWS Government, Education, & Nonprofits Blog announced the New Amazon-Busan Cloud Innovation and Technology Center.
  • We announced Lumberyard Beta 1.1 is Now Available.
  • Botmetric shared AWS Security Best Practices: Network Security.
  • CloudCheckr listed 5 AWS Security Traps You Might be Missing.
  • Serverless Code announced that ServerlessConf is Here!
  • Cloud Academy launched 2 New AWS Courses – (Advanced Techniques for AWS Monitoring, Metrics and Logging and Advanced Deployment Techniques on AWS).
  • Cloudonaut reminded you to Avoid Sharing Key Pairs for EC2.
  • 8KMiles talked about How Cloud Computing Can Address Healthcare Industry Challenges.
  • Evident discussed the CIS Foundations Benchmark for AWS Security.
  • Talkin’ Cloud shared 10 Facts About AWS as it Celebrates 10 Years.
  • The Next Platform reviewed Ten Years of AWS And a Status Check for HPC Clouds.
  • ZephyCloud is AWS-powered Wind Farm Design Software.

March 15

  • We announced the AWS Database Migration Service.
  • We announced that AWS CloudFormation Now Supports Amazon GameLift.
  • The AWS Partner Network Blog reminded everyone that Friends Don’t Let Friends Build Data Centers.
  • The Amazon GameDev Blog talked about Using Autoscaling to Control Costs While Delivering Great Player Experiences.
  • We updated the AWS SDK for JavaScript, the AWS SDK for Ruby, and the AWS SDK for Go.
  • Calorious talked about Uploading Images into Amazon S3.
  • Serverless Code showed you How to Use LXML in Lambda.
  • The Acquia Developer Center talked about Open-Sourcing Moonshot.
  • Concurrency Labs encouraged you to Hatch a Swarm of AWS IoT Things Using Locust, EC2 and Get Your IoT Application Ready for Prime Time.

March 16

  • We announced an S3 Lifecycle Management Update with Support for Multipart Upload and Delete Markers.
  • We announced that the EC2 Container Service is Now Available in the US West (Oregon) Region.
  • We announced that Amazon ElastiCache now supports the R3 node family in AWS China (Beijing) and AWS South America (Sao Paulo) Regions.
  • We announced that AWS IoT Now Integrates with Amazon Elasticsearch Service and CloudWatch.
  • We published the Puppet on the AWS Cloud: Quick Start Reference Deployment.
  • We announced that Amazon RDS Enhanced Monitoring is now available in the Asia Pacific (Seoul) Region.
  • I wrote about Additional Failover Control for Amazon Aurora (this feature was launched earlier in the year).
  • The AWS Security Blog showed you How to Set Up Uninterrupted, Federated User Access to AWS Using AD FS.
  • The AWS Java Blog talked about Migrating Your Databases Using AWS Database Migration Service.
  • We updated the AWS SDK for Java and the AWS CLI.
  • CloudWedge asked Cloud Computing: Cost Saver or Additional Expense?
  • Gathering Clouds reviewed New 2016 AWS Services: Certificate Manager, Lambda, Dev SecOps.

March 17

  • We announced the new Marketplace Metering Service for 3rd Party Sellers.
  • We announced Amazon VPC Endpoints for Amazon S3 in South America (Sao Paulo) and Asia Pacific (Seoul).
  • We announced AWS CloudTrail Support for Kinesis Firehose.
  • The AWS Big Data Blog showed you How to Analyze a Time Series in Real Time with AWS Lambda, Amazon Kinesis and Amazon DynamoDB Streams.
  • The AWS Enterprise Blog showed you How to Create a Cloud Center of Excellence in your Enterprise, and then talked about Staffing Your Enterprise’s Cloud Center of Excellence.
  • The AWS Mobile Development Blog showed you How to Analyze Device-Generated Data with AWS IoT and Amazon Elasticsearch Service.
  • Stelligent initiated a series on Serverless Delivery.
  • CloudHealth Academy talked about Modeling RDS Reservations.
  • N2W Software talked about How to Pre-Warm Your EBS Volumes on AWS.
  • ParkMyCloud explained How to Save Money on AWS With ParkMyCloud.

March 18

  • The AWS Government, Education, & Nonprofits Blog told you how AWS GovCloud (US) Helps ASD Cut Costs by 50% While Dramatically Improving Security.
  • The Amazon GameDev Blog discussed Code Archeology: Crafting Lumberyard.
  • Calorious talked about Importing JSON into DynamoDB.
  • DZone Cloud Zone talked about Graceful Shutdown Using AWS AutoScaling Groups and Terraform.

March 19

  • DZone Cloud Zone wants to honor some Trailblazing Women in the Cloud.

March 20

  • Cloudability talked about How Atlassian Nailed the Reserved Instance Buying Process.
  • DZone Cloud Zone talked about Serverless Delivery Architectures.
  • Gorillastack explained Why the Cloud is THE Key Technology Enabler for Digital Transformation.

New & Notable Open Source

  • Tumbless is a blogging platform based only on S3 and your browser.
  • aws-amicleaner cleans up old, unused AMIs and related snapshots.
  • alexa-aws-administration helps you to do various administration tasks in your AWS account using an Amazon Echo.
  • aws-s3-zipper takes an S3 bucket folder and zips it for streaming.
  • aws-lambda-helper is a collection of helper methods for Lambda.
  • CloudSeed lets you describe a list of AWS stack components, then configure and build a custom stack.
  • aws-ses-sns-dashboard is a Go-based dashboard with SES and SNS notifications.
  • snowplow-scala-analytics-sdk is a Scala SDK for working with Snowplow-enriched events in Spark using Lambda.
  • StackFormation is a lightweight CloudFormation stack manager.
  • aws-keychain-util is a command-line utility to manage AWS credentials in the OS X keychain.

New SlideShare Presentations

  • Account Separation and Mandatory Access Control on AWS.
  • Crypto Options in AWS.
  • Security Day IAM Recommended Practices.
  • What’s Nearly New.

New Customer Success Stories

  • AdiMap measures online advertising spend, app financials, and salary data. Using AWS, AdiMap builds predictive financial models without spending millions on compute resources and hardware, providing scalable financial intelligence and reducing time to market for new products.
  • Change.org is the world’s largest and fastest growing social change platform, with more than 125 million users in 196 countries starting campaigns and mobilizing support for local causes and global issues. The organization runs its website and business intelligence cluster on AWS, and runs its continuous integration and testing on Solano CI from APN member Solano Labs.
  • Flatiron Health has been able to reach 230 cancer clinics and 2,200 clinicians across the United States with a solution that captures and organizes oncology data, helping to support cancer treatments. Flatiron moved its solution to AWS to improve speed to market and to minimize the time and expense that the startup company needs to devote to its IT infrastructure.
  • Global Red specializes in lifecycle marketing, including strategy, data, analytics, and execution across all digital channels. By re-architecting and migrating its data platform and related applications to AWS, Global Red reduced the time to onboard new customers for its advertising trading desk and marketing automation platforms by 50 percent.
  • GMobi primarily sells its products and services to Original Design Manufacturers and Original Equipment Manufacturers in emerging markets. By running its “over the air” firmware updates, mobile billing, and advertising software development kits in an AWS infrastructure, GMobi has grown to support 120 million users while maintaining more than 99.9 percent availability.
  • Time Inc.’s new chief technology officer joined the renowned media organization in early 2014, and promised big changes. With AWS, Time Inc. can leverage security features and functionality that mirror the benefits of cloud computing, including rich tools, best-in-class industry standards and protocols and lower costs.
  • Seaco Global is one of the world’s largest shipping companies. By using AWS to run SAP applications, it also reduced the time needed to complete monthly business processes to just one day, down from four days in the past.

New YouTube Videos

  • AWS Database Migration Service.
  • Introduction to Amazon WorkSpaces.
  • AWS Pop-up Loft.
  • Save the Date – AWS re:Invent 2016.

Upcoming Events

  • March 22nd – Live Event (Seattle, Washington) – AWS Big Data Meetup – Intro to SparkR.
  • March 22nd – Live Broadcast – VoiceOps: Commanding and Controlling Your AWS environments using Amazon Echo and Lambda.
  • March 23rd – Live Event (Atlanta, Georgia) – AWS Key Management Service & AWS Storage Services for a Hybrid Cloud (Atlanta AWS Community).
  • April 6th – Live Event (Boston, Massachusetts) – AWS at Bio-IT World.
  • April 18th & 19th – Live Event (Chicago, Illinois) – AWS Summit – Chicago.
  • April 20th – Live Event (Melbourne, Australia) – Inaugural Melbourne Serverless Meetup.
  • April 26th – Live Event (Sydney, Australia) – AWS Partner Summit.
  • April 26th – Live Event (Sydney, Australia) – Inaugural Sydney Serverless Meetup.
  • ParkMyCloud 2016 AWS Cost-Reduction Roadshow.
  • AWS Loft – San Francisco.
  • AWS Loft – New York.
  • AWS Loft – Tel Aviv.
  • AWS Public Sector Events.
  • AWS Global Summit Series.

Help Wanted

  • AWS Careers.

Stay tuned for next week! In the meantime, follow me on Twitter and subscribe to the RSS feed.

— Jeff;

Categories: Cloud

S3 Lifecycle Management Update – Support for Multipart Uploads and Delete Markers

AWS Blog - Wed, 03/16/2016 - 17:45

It is still a bit of a shock to me to realize that Amazon S3 is now ten years old! The intervening decade has simply flown by.

For several years, you have been able to use S3’s Lifecycle Management feature to control the storage class and the lifetime of your objects. As you may know, you can set up rules on a per-bucket or per-prefix basis. Each rule specifies an action to be taken when objects reach a certain age.

Today we are adding two rules that will give you additional control over two special types of objects: incomplete multipart uploads and expired object delete markers. Before we go any further, I should define these objects!

Incomplete Multipart Uploads – S3’s multipart upload feature accelerates the uploading of large objects by allowing you to split them up into logical parts that can be uploaded in parallel.  If you initiate a multipart upload but never finish it, the in-progress upload occupies some storage space and will incur storage charges. However, these uploads are not visible when you list the contents of a bucket and (until today’s release) had to be explicitly removed.

Expired Object Delete Markers – S3’s versioning feature allows you to preserve, retrieve, and restore every version of every object stored in a versioned bucket. When you delete a versioned object, a delete marker is created. If all previous versions of the object subsequently expire, an expired object delete marker is left. These markers do not incur storage charges. However, removing unneeded delete markers can improve the performance of S3’s LIST operation.

New Rules
You can now exercise additional control over these objects using some new lifecycle rules, lowering your costs and improving performance in the process. As usual, you can set these up using the AWS Management Console, the S3 APIs, the AWS Command Line Interface (CLI), or the AWS Tools for Windows PowerShell.

Here’s how you set up a rule for incomplete multipart uploads using the Console. Start by opening the console and navigating to the desired bucket (mine is called jbarr):

Then click on Properties, open up the Lifecycle section, and click on Add rule:

Decide on the target (the whole bucket or the prefixed subset of your choice) and then click on Configure Rule:

Then enable the new rule and select the desired expiration period:

As a best practice, we recommend that you enable this setting even if you are not sure that you are actually making use of multipart uploads. Some applications will default to the use of multipart uploads when uploading files above a particular, application-dependent, size.

Here’s how you set up a rule to remove delete markers for expired objects that have no previous versions:
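Both rules can also be expressed directly through the S3 API as part of a bucket's lifecycle configuration. Here is a rough sketch (the rule IDs and the 7-day window are illustrative, not prescribed) of the configuration document you would pass to a call such as boto3's put_bucket_lifecycle_configuration:

```python
# Sketch of the lifecycle configuration behind the two console rules above.
# Rule IDs and the 7-day window are illustrative; the field names follow
# the S3 PutBucketLifecycleConfiguration API.

lifecycle_config = {
    "Rules": [
        {
            "ID": "abort-stale-multipart-uploads",
            "Filter": {"Prefix": ""},  # empty prefix applies to the whole bucket
            "Status": "Enabled",
            # remove parts of multipart uploads that were never completed
            "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
        },
        {
            "ID": "clean-expired-delete-markers",
            "Filter": {"Prefix": ""},
            "Status": "Enabled",
            # remove delete markers whose older versions have all expired
            "Expiration": {"ExpiredObjectDeleteMarker": True},
        },
    ]
}

# With boto3 this would be applied as:
#   s3 = boto3.client("s3")
#   s3.put_bucket_lifecycle_configuration(
#       Bucket="jbarr", LifecycleConfiguration=lifecycle_config)
```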

S3 Best Practices
While you are here, here are some best practices that you should consider using for your own S3-based applications:

Versioning – You can enable Versioning for your S3 buckets in order to be able to recover from accidental overwrites and deletes. With versioning turned on, you can preserve, retrieve, and restore earlier versions of your data.

Replication – Take advantage of S3’s Cross-Region Replication in order to meet your organization’s compliance policies by creating a replica of your data in a second AWS Region.

Performance – If you anticipate a consistently high number of PUT, LIST, DELETE, or GET requests against your buckets, you can optimize your application’s performance by implementing the tips outlined in the performance section of the Amazon S3 documentation.

Cost Management – You can reduce your costs by setting up S3 lifecycle policies that will transition your data to other S3 storage tiers or expire data that is no longer needed.

— Jeff;


Categories: Cloud

Additional Failover Control for Amazon Aurora

AWS Blog - Wed, 03/16/2016 - 13:21

Amazon Aurora is a fully-managed, MySQL-compatible, relational database engine that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases (read my post, Amazon Aurora – New Cost-Effective MySQL-Compatible Database Engine for Amazon RDS, to learn more).

Aurora allows you to create up to 15 read replicas to increase read throughput and for use as failover targets. The replicas share storage with the primary instance and provide lightweight, fine-grained replication that is almost synchronous, with a replication delay on the order of 10 to 20 milliseconds.

Additional Failover Control
Today we are making Aurora even more flexible by giving you control over the failover priority of each read replica. Each read replica is now associated with a priority tier (0-15).  In the event of a failover, Amazon RDS will promote the read replica that has the highest priority (the lowest numbered tier). If two or more replicas have the same priority, RDS will promote the one that is the same size as the previous primary instance.

You can set the priority when you create the Aurora DB instance:

This feature is available now and you can start using it today. To learn more, read about Fault Tolerance for an Aurora DB Cluster.
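If you prefer the API to the console, here is a rough sketch (instance identifiers, cluster name, and instance class are hypothetical; PromotionTier is the actual parameter in the RDS CreateDBInstance API) of how two replicas with different failover priorities might be defined:

```python
# Sketch: keyword arguments for rds.create_db_instance() for two Aurora
# replicas with different failover priorities. Tier 0 is promoted first;
# identifiers and the instance class are placeholders.

def replica_params(instance_id, cluster_id, tier):
    """Build the arguments for an Aurora read replica with a priority tier."""
    return {
        "DBInstanceIdentifier": instance_id,
        "DBClusterIdentifier": cluster_id,
        "DBInstanceClass": "db.r3.large",
        "Engine": "aurora",
        "PromotionTier": tier,  # 0-15; the lowest number wins a failover
    }

primary_candidate = replica_params("replica-a", "my-cluster", 0)
fallback = replica_params("replica-b", "my-cluster", 5)

# rds = boto3.client("rds")
# rds.create_db_instance(**primary_candidate)
# rds.create_db_instance(**fallback)
```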

— Jeff;
Categories: Cloud

AWS Database Migration Service

AWS Blog - Tue, 03/15/2016 - 15:12

Do you currently store relational data in an on-premises Oracle, SQL Server, MySQL, MariaDB, or PostgreSQL database? Would you like to move it to the AWS cloud with virtually no downtime so that you can take advantage of the scale, operational efficiency, and the multitude of data storage options that are available to you?

If so, the new AWS Database Migration Service (DMS) is for you! Since we first announced it last fall at AWS re:Invent, our customers have already used it to migrate over 1,000 on-premises databases to AWS. You can move live, terabyte-scale databases to the cloud, with options to stick with your existing database platform or to upgrade to a new one that better matches your requirements. If you are migrating to a new database platform as part of your move to the cloud, the AWS Schema Conversion Tool will convert your schemas and stored procedures for use on the new platform.

The AWS Database Migration Service works by setting up and then managing a replication instance on AWS. This instance unloads data from the source database and loads it into the destination database, and can be used for a one-time migration followed by ongoing replication to support a migration that entails minimal downtime. Along the way, DMS handles many of the complex details associated with migration, including data type transformation and conversion from one database platform to another (Oracle to Aurora, for example). The service also monitors the replication and the health of the instance, notifies you if something goes wrong, and automatically provisions a replacement instance if necessary.

The service supports many different migration scenarios and networking options. One of the endpoints must always be in AWS; the other can be on-premises, running on an EC2 instance, or running on an RDS database instance. The source and destination can reside within the same Virtual Private Cloud (VPC) or in two separate VPCs (if you are migrating from one cloud database to another). You can connect to an on-premises database via the public Internet or via AWS Direct Connect.

Migrating a Database
You can set up your first migration with a couple of clicks! You simply create the target database, migrate the database schema, set up the data replication process, and initiate the migration. After the target database has caught up with the source, you simply switch to using it in your production environment.

I start by opening up the AWS Database Migration Service Console (in the Database section of the AWS Management Console as DMS) and clicking on Create migration.

The Console provides me with an overview of the migration process:

I click on Next and provide the parameters that are needed to create my replication instance:

For this blog post, I selected one of my existing VPCs and unchecked Publicly accessible. My colleagues had already set me up with an EC2 instance to represent my “on-premises” database.

After the replication instance has been created, I specify my source and target database endpoints and then click on Run test to make sure that the endpoints are accessible (truth be told, I spent some time adjusting my security groups in order to make the tests pass):

Now I create the actual migration task. I can (per the Migration type) migrate existing data, migrate and then replicate, or replicate going forward:

I could have clicked on Task Settings to set some other options (LOBs are Large Objects):

The migration task is ready, and will begin as soon as I select it and click on Start/Resume:

I can watch for progress, and then inspect the Table statistics to see what happened (these were test tables and the results are not very exciting):

At this point I would do some sanity checks and then point my application to the new endpoint. I could also have chosen to perform an ongoing replication.

The AWS Database Migration Service offers many options and I have barely scratched the surface. You can, for example, choose to migrate only certain tables. You can also create several different types of replication tasks and activate them at different times.  I highly recommend you read the DMS documentation as it does a great job of guiding you through your first migration.

If you need to migrate a collection of databases, you can automate your work using the AWS Command Line Interface (CLI) or the Database Migration Service API.
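As a rough sketch of what that automation might look like (the ARNs and task name are placeholders; the parameter names and the three MigrationType values follow the DMS CreateReplicationTask API), each database in the collection could be turned into a task request like this:

```python
# Sketch: building a DMS replication task request. ARNs and identifiers
# are placeholders. MigrationType maps to the three console options:
# 'full-load' (migrate existing data), 'full-load-and-cdc' (migrate and
# then replicate), and 'cdc' (replicate going forward).

import json

def migration_task(name, source_arn, target_arn, instance_arn,
                   migration_type="full-load-and-cdc"):
    """Arguments for dms.create_replication_task()."""
    table_mappings = {
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-all",
            "object-locator": {"schema-name": "%", "table-name": "%"},
            "rule-action": "include",
        }]
    }
    return {
        "ReplicationTaskIdentifier": name,
        "SourceEndpointArn": source_arn,
        "TargetEndpointArn": target_arn,
        "ReplicationInstanceArn": instance_arn,
        "MigrationType": migration_type,
        "TableMappings": json.dumps(table_mappings),
    }

task = migration_task("db1-to-aurora", "arn:src", "arn:tgt", "arn:inst")

# dms = boto3.client("dms")
# dms.create_replication_task(**task)
```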

Price and Availability
The AWS Database Migration Service is available in the US East (Northern Virginia), US West (Oregon), US West (Northern California), Europe (Ireland), Europe (Frankfurt), Asia Pacific (Tokyo), Asia Pacific (Singapore),  and Asia Pacific (Sydney) Regions and you can start using it today (we plan to add support for other Regions in the coming months).

— Jeff;


Categories: Cloud

Thank You Splunk – We’re Happy to be Your Alliance Partner

AWS Blog - Tue, 03/15/2016 - 12:48

The AWS Partner Network (APN) helps our partners to build successful businesses around AWS. Members of APN provide consulting services (APN Consulting Partners) or software solutions (APN Technology Partners) that are integrated with the AWS platform.

I am happy to be able to announce that AWS Advanced Technology Partner Splunk (read their APN entry) has named Amazon Web Services its Worldwide Alliance Partner of the Year (read the press release to learn more). We are thrilled to be able to work with them to make their solution available to AWS customers worldwide.

The Splunk App for AWS is one of the most popular apps on Splunkbase. The app provides you with insight into the operational and security issues associated with your AWS account. It works in conjunction with AWS Config, AWS CloudTrail, VPC Flow Logs, AWS Billing, and S3 to provide you with a logical, topologically-oriented dashboard designed to help you optimize resources and detect problems.

— Jeff;


Categories: Cloud

PHPTour 2016

PHP News - Tue, 03/15/2016 - 02:54
Categories: PHP

Amazon EMR 4.4.0 – Sqoop, HCatalog, Java 8, and More

AWS Blog - Mon, 03/14/2016 - 16:27

Rob Leidle, Development Manager for Amazon EMR, wrote the guest post below to introduce you to the latest and greatest version!

— Jeff;

Today we are announcing Amazon EMR release 4.4.0, which adds support for Apache Sqoop (1.4.6) and Apache HCatalog (1.0.0), an upgraded release of Apache Mahout (0.11.1), and upgraded sandbox releases for Presto (0.136) and Apache Zeppelin (0.5.6). We have also enhanced our default Apache Spark settings and added support for Java 8.

New Applications in Release 4.4.0
Amazon EMR provides an easy way to install and configure distributed big data applications in the Hadoop and Spark ecosystems on managed clusters of Amazon EC2 instances. You can create Amazon EMR clusters from the Amazon EMR Create Cluster Page in the AWS Management Console, with the AWS Command Line Interface (CLI), or by using an SDK with the EMR API. In the latest release, we added support for several new versions of the following applications:

  • Zeppelin 0.5.6 – Zeppelin is an open-source interactive and collaborative notebook for data exploration using Spark. Zeppelin 0.5.6 adds the ability to import or export a notebook, notebook storage in GitHub, auto-save on navigation, and better Pyspark support. View the Zeppelin release notes or learn more about Zeppelin on Amazon EMR.
  • Presto 0.136 – Presto is an open-source, distributed SQL query engine designed for low-latency queries on large datasets in Amazon S3 and HDFS. This is a minor version release, with support for larger arrays, SQL binary literals, the ability to call connector-defined procedures, and improvements to the web interface. View the Presto release notes or learn more about Presto on Amazon EMR.
  • Sqoop 1.4.6 – Sqoop is a tool for transferring bulk data between HDFS, S3 (using EMRFS), and structured datastores such as relational databases.  You can use Sqoop to transfer structured data from RDS and Aurora to EMR for processing, and write out results back to S3, HDFS, or another database. Learn more about Sqoop on Amazon EMR.
  • Mahout 0.11.1 – Mahout is a collection of tools and libraries for building distributed machine learning applications. This release includes support for Spark as well as a new math environment based on Spark named Samsara. Learn more about Mahout on Amazon EMR.
  • HCatalog 1.0.0 – HCatalog is a sub-project within the Apache Hive project. It is a table and storage management layer for Hadoop which utilizes the Hive Metastore. It enables tools to execute SQL on Hadoop through an easy-to-use REST interface.

Enhancements to the default settings for Spark
We have improved our default configuration for Spark executors from the Apache defaults to better utilize resources on your cluster. Starting with release 4.4.0, EMR has enabled dynamic allocation of executors by default, which lets YARN determine how many executors to utilize when running a Spark application. Additionally, the amount of memory used for each executor is now automatically determined by the instance family used for your cluster’s core instance group.

Enabling dynamic allocation and customizing the executor memory allows Spark to utilize all resources on your cluster, place additional executors on nodes added to your cluster, and better allow for multitenancy for Spark applications. The previous maximizeResourceAllocation parameter is still available. However, this doesn’t use dynamic allocation, and specifies a static number of executors for your Spark application. You can also still override the new defaults by using the configuration API or passing additional parameters when submitting your Spark application using spark-submit. Learn more about Spark configuration on Amazon EMR.
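As a rough sketch of such an override (the memory value and the choice to keep maximizeResourceAllocation off are illustrative), a configuration object supplied at cluster creation might look like this:

```python
# Sketch: EMR configuration objects that override the new Spark defaults.
# The classifications ('spark-defaults' and 'spark') are the ones used by
# the EMR configuration API; the property values here are illustrative.

spark_configurations = [
    {
        "Classification": "spark-defaults",
        "Properties": {
            "spark.dynamicAllocation.enabled": "true",  # the new 4.4.0 default
            "spark.executor.memory": "4g",              # override the computed value
        },
    },
    {
        "Classification": "spark",
        "Properties": {
            # static executor sizing instead of dynamic allocation, if preferred
            "maximizeResourceAllocation": "false",
        },
    },
]
```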

Using Java 8 with your applications on Amazon EMR
By default, applications on your Amazon EMR cluster use the Java Development Kit 7 (JDK 7) for their runtime environment. However, starting with release 4.4.0, you can use JDK 8 by using a configuration object to point JAVA_HOME at JDK 8 in the relevant environment classifications (though please note that JDK 8 is not compatible with Apache Hive). Learn more about using Java 8 on Amazon EMR.
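A minimal sketch of such a configuration object follows; the JDK path shown is the one commonly found on EMR instances, so treat it as an assumption and verify it for your cluster:

```python
# Sketch: EMR configuration objects that set JAVA_HOME to JDK 8 for the
# Hadoop and Spark environments. The path is an assumption; confirm it
# on your cluster's instances before relying on it.

JDK8_PATH = "/usr/lib/jvm/java-1.8.0"

java8_configurations = [
    {
        "Classification": classification,
        # the nested 'export' classification sets shell environment variables
        "Configurations": [{
            "Classification": "export",
            "Properties": {"JAVA_HOME": JDK8_PATH},
        }],
        "Properties": {},
    }
    for classification in ("hadoop-env", "spark-env")
]
```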

Launch an Amazon EMR Cluster with Release 4.4.0 Today
To create an Amazon EMR cluster with release 4.4.0, select release 4.4.0 on the Create Cluster page in the AWS Management Console, or use the release label emr-4.4.0 when creating your cluster from the AWS CLI or using an SDK with the EMR API.
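For the SDK route, here is a rough sketch using boto3 (the cluster name, instance types, counts, and IAM role names are placeholders; ReleaseLabel is the parameter that selects the release):

```python
# Sketch: parameters for emr.run_job_flow() to launch a release 4.4.0
# cluster. Name, instance types, counts, and roles are placeholders.

params = {
    "Name": "my-emr-440-cluster",
    "ReleaseLabel": "emr-4.4.0",  # selects Amazon EMR release 4.4.0
    "Applications": [{"Name": "Spark"}, {"Name": "Sqoop"}, {"Name": "HCatalog"}],
    "Instances": {
        "MasterInstanceType": "m3.xlarge",
        "SlaveInstanceType": "m3.xlarge",
        "InstanceCount": 3,
        "KeepJobFlowAliveWhenNoSteps": True,
    },
    "JobFlowRole": "EMR_EC2_DefaultRole",
    "ServiceRole": "EMR_DefaultRole",
}

# emr = boto3.client("emr")
# response = emr.run_job_flow(**params)
```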

— Rob Leidle – Development Manager, Amazon EMR

Categories: Cloud

AWS Week in Review – March 7, 2016

AWS Blog - Mon, 03/14/2016 - 12:51

Let’s take a quick look at what happened in AWS-land last week:


March 7

  • We launched Notifications for AWS CodeCommit.
  • We announced that New AWS Accounts Now Default to Long EC2 Resource IDs.
  • The AWS Security Blog showed you How to Automate Restricting Access to a VPC by Using AWS IAM and AWS CloudFormation.
  • Botmetric talked about Tackling AWS Security Threat Landscapes: Access Controls.
  • CloudCheckr shared 5 Tips to Best Leverage Diverse AWS Services.
  • CloudEndure listed the Top 5 Cloud Computing Books to Read in 2016.
  • Cloud Academy published Part 3 of a series on Centralized Log Management with AWS CloudWatch.
  • Trek10 talked about Lambda Fanout, What is it Good For?

March 8

  • We announced Availability of t2.nano Instances in the EU (Frankfurt) and Asia Pacific (Sydney) Regions.
  • We announced you can now Run XCTest UI Tests with AWS Device Farm.
  • The AWS Partner Network Blog talked about Modeling SaaS Tenant Profiles on AWS.
  • The AWS Security Blog showed you How to Reduce Security Threats and Operating Costs Using AWS WAF and Amazon CloudFront.
  • N2W Software explained How to Automate Your Backup Operations in AWS.
  • Sungard showed you How to Implement Microservices using AWS Lambda and Deploy with CloudFormation.
  • Cloudyn explained How to Measure Your Core-Hours Costs to Gain Another Level of Cloud Cost Optimization.
  • Cloud Academy published an Introduction and Walkthrough of AWS Config.
  • ParkMyCloud showed you How to Manage Parking Recommendations in ParkMyCloud.
  • Localytics wrote about Serverless Slackbots Powered by AWS.

March 9

  • We announced that Amazon ElastiCache now supports Memcached Auto-Discovery for PHP 7.
  • Guest posts showed you how to Use Enhanced RDS Monitoring with Datadog and told the story of Flatiron Health – Using AWS to Help Improve Cancer Treatment.
  • We updated the AWS CLI, AWS SDK for Java, AWS SDK for Go, AWS SDK for JavaScript, and the AWS SDK for Ruby.
  • 8KMiles listed 5 Reasons Why Pharmaceutical Companies Need to Migrate to the Cloud.
  • Stelligent showed you how to Create a Pipeline Using the AWS CodePipeline Console.
  • Spotinst talked about Implementing Blue/Green Deployments with Elastigroup on AWS.
  • Netflix described How We Build Code at Netflix.
  • Cloud Technology Partners shared A Bulletproof DevOps Strategy to Ensure Success in the Cloud.
  • DZone Cloud Zone talked about Automatic Deployment Through a Bastion (Gateway) Server.
  • Gathering Clouds talked about The #1 AWS Cloud Security Tool for Retailers and eCommerce.
  • Gorillastack asked Is Virtual Reality The Next Frontier For Amazon Web Services To Conquer?
  • Trek10 introduced LambdaClock.
  • Serverworks wrote about Parallel Image Processing for Fluid Mechanics with AWS Lambda.

March 10

  • We announced that Amazon CloudWatch Logs now has AWS CloudTrail support and new Amazon CloudWatch Metrics.
  • We announced that AWS CodeDeploy is Now Available in the South America (Sao Paulo) Region.
  • We announced that Amazon CloudWatch Logs Available in the South America (Sao Paulo) Region.
  • We announced that Amazon Redshift Now Supports Table Level Restore.
  • We updated the AWS SDK for Ruby and the AWS SDK for Go.
  • We published the Second Amazon Linux AMI 2016.03 Release Candidate.
  • The Amazon GameDev Blog announced New Regions and Autoscaling Features for Amazon GameLift.
  • The AWS Big Data Blog shared a partner post from Attunity.
  • The AWS Government, Education, & Nonprofits Blog explained How Cities Can Stop Wasting Money, Move Faster, and Innovate.
  • The AWS Partner Network Blog talked about Architecting Microservices Using Weave Net and Amazon EC2 Container Service.
  • James Hamilton wrote about A Decade of Innovation.
  • ParkMyCloud showed you How to Save Money with AWS Scripting.
  • Skeddly showed you how to Change EBS Volume Action.

March 11

  • I reviewed some Hot Startups on AWS.
  • Werner Vogels shared 10 Lessons from 10 Years of Amazon Web Services.
  • The AWS Government, Education, & Nonprofits Blog talked about the Cities of the Future, Today.
  • 8KMiles hosted a Tweet Chat on Amazon KMS.
  • Mark Litwintschik examined A Billion Taxi Rides on Amazon EMR Running Presto.

March 12

  • The AWS Government, Education, & Nonprofits Blog announced that the AWS 2016 City on a Cloud Innovation Challenge is Live.
  • Toby Hede is writing The Complete and Most Excellent Micro Manual for Hosting a Static Website on AWS.

March 13

  • Serverless Code announced Zappa, Django, and Lambda VPC Support, discussed Using Python in the Serverless Framework, and talked about Using Scikit-Learn in AWS Lambda.

New & Notable Open Source

  • sqs-to-lambda-via-lambda implements Amazon SQS to Lambda using Lambda.
  • akiro magically compiles NPM packages with native extensions for Lambda.
  • cloudwatch-to-sumo sends metrics from CloudWatch to Sumo Logic.
  • awsam is an AWS Account Manager modeled after rvm.
  • aws-jwt-auth is an API Gateway custom authorizer to validate JWTs created by WSO2.
  • aws_mbedtls_mqtt is the source code to use the mbedTLS library to connect to AWS IoT.
  • jaxrs-lib contains Jersey and Hibernate Components for building REST APIs hosted on Elastic Beanstalk.
  • autosignr is a Puppet Certificate Auto-signer for AWS.
  • llama-cli is Chaos Llama, a tool for testing resiliency and recoverability of AWS-based architectures.
  • cfn-amibaker bakes EC2 AMIs using CloudFormation and Lambda.

New SlideShare Presentations

  • Intro to AWS IoT.

Upcoming Events

  • March 14th – Live Event (Seattle, Washington) – Seattle AWS Architects & Engineers – Lambda + Alexa AWS Teams.
  • March 15th – Live Event (San Francisco, California) – Amazon Lumberyard team at GDC 2016.
  • March 17th – Webinar – Security Best Practices for Retailers on AWS.
  • March 17th – Live Event (Netherlands) – Security in the Cloud.
  • March 22nd – Live Broadcast – VoiceOps: Commanding and Controlling Your AWS environments using Amazon Echo and Lambda.
  • March 23rd – Live Event (Atlanta, Georgia) – AWS Key Management Service & AWS Storage Services for a Hybrid Cloud (Atlanta AWS Community).
  • April 6th – Live Event (Boston, Massachusetts) AWS at Bio-IT World.
  • April 18th & 19th – Live Event (Chicago, Illinois) – AWS Summit – Chicago.
  • April 20th – Live Event (Melbourne, Australia) – Inaugural Melbourne Serverless Meetup.
  • April 26th – Live Event (Sydney, Australia) – Inaugural Sydney Serverless Meetup.
  • AWS Loft – San Francisco.
  • AWS Loft – New York.
  • AWS Loft – Tel Aviv.
  • AWS Global Summit Series.

Help Wanted

  • AWS Careers.

Stay tuned for next week! In the meantime, follow me on Twitter and subscribe to the RSS feed.

— Jeff;

Categories: Cloud

Developer Preview of AWS SDK for C++ is Now Available

AWS Blog - Mon, 03/14/2016 - 11:25

My colleague Jonathan Henson has great news for C++ developers who would like to use AWS.

— Jeff;

I am happy to announce that the AWS SDK for C++ is now available as a developer preview. Last fall, we released the SDK in an experimental state to gather feedback and improve the APIs. Since then, we have received more than 100 issues and pull requests on GitHub. Many excited developers in the open source community gave valuable feedback that helped to improve the stability and expand the features of this SDK.

Changes and Additions
Here are some additions we’ve made since our experimental release:

  • Full service coverage parity with the rest of the SDKs.
  • Visual Studio 2015 support.
  • OS X El Capitan support.
  • Presigned URL support.
  • Expansion of and improvements to the Amazon S3 TransferClient.
  • Inline documentation improvements.
  • More integration for custom memory management.
  • Forward-compatible enumeration support.
  • Improvements to our CMake exports to simplify consumer builds.
  • Unicode support.
  • Several service client fixes and improvements.
  • Ability to build only the clients you need.
  • Custom signed regions and endpoints.
  • Common Crypto support for Apple platforms (OpenSSL is no longer required on iOS and OS X).
  • Several stability updates related to multi-threading in our Curl interface on Unix and Linux.
  • The Service Client Generator is now open sourced and integrated into the build process.

Also, NSURL support for Apple platforms will be committed within a week or so. After that, Curl will no longer be required on iOS or OS X.

The team would like to thank those who have been involved in improving this SDK over the past six months. Please continue contributing and leaving feedback on our GitHub Issues page.

Before we move to General Availability, we would like to receive another round of feedback to help us pin down the API with a stable 1.0 release. If you are a C++ developer, please feel free to give this new SDK a try and let us know what you think.

In Other News
Here are a few other things that you may find interesting:

  • We have moved our GitHub repository from the awslabs organization to aws/aws-sdk-cpp.
  • We are now providing new releases for new services and features with the rest of the AWS SDKs.
  • We now have a C++ developer blog. We’ll post tutorials and samples there throughout the year. We’ll also announce improvements and features there, so stay tuned!
  • We will distribute pre-built binaries for our most popular platforms in the near future. We’ll let you know when they go live.

Sample Code
Here is some sample code that continuously writes some data to a Kinesis stream:

#include <aws/kinesis/model/PutRecordsRequest.h>
#include <aws/kinesis/KinesisClient.h>
#include <aws/core/utils/Outcome.h>

#include <chrono>
#include <functional>
#include <iostream>
#include <thread>

using namespace Aws::Utils;
using namespace Aws::Kinesis;
using namespace Aws::Kinesis::Model;

class KinesisProducer
{
public:
    KinesisProducer(const Aws::String& streamName, const Aws::String& partition)
        : m_partition(partition), m_streamName(streamName)
    {
    }

    void StreamData(const Aws::Vector<ByteBuffer>& data)
    {
        PutRecordsRequest putRecordsRequest;
        putRecordsRequest.SetStreamName(m_streamName);

        for (auto& datum : data)
        {
            PutRecordsRequestEntry putRecordsRequestEntry;
            putRecordsRequestEntry.WithData(datum)
                                  .WithPartitionKey(m_partition);
            putRecordsRequest.AddRecords(putRecordsRequestEntry);
        }

        // Send the records asynchronously; the callback reports the outcome.
        m_client.PutRecordsAsync(putRecordsRequest,
            std::bind(&KinesisProducer::OnPutRecordsAsyncOutcomeReceived,
                      this,
                      std::placeholders::_1,
                      std::placeholders::_2,
                      std::placeholders::_3,
                      std::placeholders::_4));
    }

private:
    void OnPutRecordsAsyncOutcomeReceived(const KinesisClient*,
            const Model::PutRecordsRequest&,
            const Model::PutRecordsOutcome& outcome,
            const std::shared_ptr<const Aws::Client::AsyncCallerContext>&)
    {
        if (outcome.IsSuccess())
        {
            std::cout << "Records Put Successfully " << std::endl;
        }
        else
        {
            std::cout << "Put Records Failed with error "
                      << outcome.GetError().GetMessage() << std::endl;
        }
    }

    KinesisClient m_client;
    Aws::String m_partition;
    Aws::String m_streamName;
};

int main()
{
    KinesisProducer producer("kinesis-sample", "announcements");

    while (true)
    {
        Aws::String announcement1("AWS SDK for C++");
        Aws::String announcement2("Is Now in Developer Preview");

        producer.StreamData(
            {
                ByteBuffer((unsigned char*) announcement1.c_str(), announcement1.length()),
                ByteBuffer((unsigned char*) announcement2.c_str(), announcement2.length())
            });

        std::this_thread::sleep_for(std::chrono::milliseconds(5));
    }

    return 0;
}

— Jonathan Henson, Software Development Engineer (SDE)



Categories: Cloud

Ten Years in the AWS Cloud – How Time Flies!

AWS Blog - Mon, 03/14/2016 - 06:02

Ten years ago today I announced the launch of Amazon S3 with a simple blog post! It is hard to believe that a decade has passed since then, or that I have written well over 2000 posts during that time.

Future Shock
When I was in high school, I read and reported on a relatively new (for 1977) book titled Future Shock. In the book, futurist Alvin Toffler argued that the rapid pace of change had the potential to overwhelm, stress, and disorient people. While the paper I wrote has long since turned to dust, I do remember arguing that change was good, and that people and organizations would be better served by preparing to accept and to deal with it.

Early in my career I saw that many supposed technologists were far better at clinging to the past than they were at moving into the future. By the time I was 21 I had decided that it would be better for me to live in the future than in the past, and to not just accept change and progress, but to actively seek it out. Now, 35 years after that decision, I can see that I chose the most interesting fork in the road. It has been a privilege to be able to bring you AWS news for well over a decade (I wrote my first post in 2004).

A Decade of IT Change
Looking back at the past decade, it is pretty impressive to see just how much the IT world has changed. Even more impressive, the change is not limited to technology. Business models have changed, as has the language around them. At the same time that changes on the business side have brought about new ways to acquire, consume, and pay for resources (empowering both enterprises and startups in the process), the words that we use to describe what we do have also changed! A decade ago we would not have spoken of the cloud, microservices, serverless applications, the Internet of Things, containers, or lean startups. We would not have practiced continuous integration, continuous delivery, DevOps, or ChatOps. While you are still trying to understand and implement ChatOps, don’t forget that something even newer called VoiceOps (powered by Alexa) is already on the horizon.

Of course, dealing with change is not easy. When looking into the future, you need to be able to distinguish between flashy distractions and genuine trends, while remaining flexible enough to pivot if yesterday’s niche becomes today’s mainstream technology. I often use JavaScript to illustrate this phenomenon. If you (like me), as a server-side developer, initially brushed off JavaScript as a simple, browser-only language and chose to ignore it, you were undoubtedly taken by surprise when it was first used to build rich, dynamic Ajax applications and then run on the server in the form of Node.js.

Today, keeping current means staying abreast of developments in programming languages, system architectures, and industry best practices. It means that you spend time every day improving your current skills and looking for new ones. It means becoming comfortable in a new world where multiple deployments per day are commonplace, powered by global teams, and managed by consensus, all while remaining focused on delivering value to the business!

A Decade of AWS
While I hate to play favorites, I would like to quickly review some of my favorite AWS launches and blog posts of the past decade.

First and Still Relevant (2006) – Amazon S3. Incredibly simple in concept yet surprisingly complex behind the scenes, S3 was, as TechCrunch said at the time, game changing!

Servers by the Hour (2006) – Amazon EC2. I wrote the blog post while sitting poolside in Cabo San Lucas. The launch had been imminent for several months, and then became a fact just as I was about to hop on the plane.  From that simple start (one instance type, one region, and CLI-only access), EC2 has added feature after feature (most of them driven by customer requests) and is just as relevant today as it was in 2006.

Making Databases Easy (2009) – Amazon Relational Database Service – Having spent a lot of time installing, tuning, and managing MySQL as part of a long-term personal project, I was in a perfect position to appreciate how RDS simplified every aspect of my work.

Advanced Networking (2009) – Amazon Virtual Private Cloud – With the debut of VPC, even conservative enterprises began to take a closer look at AWS. They saw that we understood the networking and isolation challenges that they faced, and were pleased that we were able to address them.

Internet-Scale Data Storage (2012) – Amazon DynamoDB – The NoSQL market was in a state of flux when we launched DynamoDB. Now that the smoke has cleared, I routinely hear about customers that use DynamoDB to store huge amounts of data and to support some pretty incredible request rates.

Data Warehouses in Minutes not Quarters (2012) – Amazon Redshift  – Many companies measure implementation time for a data warehouse in terms of quarters or even years. Amazon Redshift showed them that there was a better way to get started.

Desktop Computing in the Cloud (2013) – Amazon WorkSpaces – All too often dismissed as either pedestrian or “great for someone else,” virtual desktops have become an important productivity tool for me and for our customers.

Real Time? How Much Data? (2013) – Amazon Kinesis – Capturing, processing, and deriving value from voluminous streams of data became easier and simpler when we launched Kinesis.

A New Programming Model (2014) – AWS Lambda – This is one of those disruptive, game-changers that you need to be ready for! I have been impressed by the number of traditional organizations that have already built and deployed sophisticated Lambda-powered applications. My expectation that Lambda would be most at home in startups building applications from scratch turned out to be wrong.

Devices are the Future (2015) – AWS IoT – Mass-produced compute power and widespread IP connectivity combine to allow all sorts of interesting devices to be connected to the Internet.

Moving Forward
A decade ago, discussion about the risks of cloud computing centered around adoption. It was new and unproven, and raised more questions than it answered. That era passed some time ago. These days, I hear more talk about the risk of not going to the cloud. Organizations of all shapes and sizes want to be nimble, to use modern infrastructure, and to be able to attract professionals with a strong desire to do the same. Today’s employees want to use the latest and most relevant technology in order to be as productive as possible.

I can promise you that the next decade of the cloud will be just as exciting as the one that just concluded. Keep on learning, keep on building, and share your successes with us!

— Jeff;

PS – As you can tell from this post, I strongly believe in the value of continuing education. I discussed this with my colleagues and they have agreed to make the entire set of qwikLABS online labs and learning quests available to all current and potential AWS customers at no charge through the end of March. To learn more, visit qwikLABS.com.

Categories: Cloud

Hot Startups on AWS – March 2016

AWS Blog - Fri, 03/11/2016 - 08:05

We love startups!

When energy, enthusiasm, creativity, and passion for changing the world come together to build new and exciting businesses and applications, everyone benefits. Today I am kicking off a new series of posts. Every month I am going to feature a handful of hot, AWS-powered startups and tell you a little bit about what they built. I hope to explore a bit of the motivation behind the products and the startups and to show you how AWS has empowered them to put that energy, enthusiasm, creativity, and passion to use.

Today’s post features the following startups:

  • Intercom – One place for every team in an Internet business to see and talk to customers, personally, at scale.
  • Tile – A popular key locator product that works with an app to help people find their stuff.
  • Bugsnag – A tool to capture and analyze runtime errors in production web & mobile applications.
  • DroneDeploy – Making the sky productive and accessible for everyone.

The founders of Intercom previously ran a SaaS business in Dublin, Ireland. They had a problem: they didn’t know who their customers were, and couldn’t easily communicate with them. They were working on a solution when they observed a coffee shop owner casually interacting with his customers, greeting them by name, making offers tailored to their interests, addressing questions, and heading off potential problems. The founders decided to build a tool that would allow others building online businesses to have a personal touch with their customers, as opposed to simply treating them like rows in a database.

The resulting platform, Intercom, is a fundamentally new way to communicate with customers. It allows web and mobile businesses to track live customer data, and use that data to communicate with customers in a personal way on their website, inside web and mobile apps, and by email. A little bit of JavaScript (for web apps) or simple SDKs (for iOS and Android) powers live chat, marketing automation, customer feedback, and customer support.

Intercom chose AWS to allow them to move fast without having to have a large operations team. With thousands of businesses already using the product, they needed to keep the real-time conversations running at a consistent speed and with low latency. When they anticipated running up against the limits of their existing relational database and began to consider a sharded solution, they put Amazon Aurora to the test and found that it was able to handle their current load, with plenty of room to grow. They avoided the complexity of sharding, lowered their costs, and reduced the latency of their queries.

One of the founders of Tile was frustrated because his spouse had a habit of losing things. After looking into some ways to help her, he realized two things. First, this was a very common problem (and, to be fair, one that is not gender-specific). Second, no one was addressing it. Seeing an opportunity, he co-founded Tile in 2013 and created a crowdfunding campaign to secure capital. This campaign surpassed the initial goal of 20,000 units by 20x, a key indicator that the team had found a good solution to an unmet need. Currently, the company has sold over 4.5 million Tiles, making this one of the most successful crowdfunded companies to date.

The Tiles themselves are small and simple. They can be attached to all different sorts of objects, and use Bluetooth Low Energy to communicate. When the mobile app is activated, it displays a proximity radar with a range of about 100 feet, and the app can also be used to trigger a loud (90 decibel) chime on the Tile. Conversely, the Tile itself can be used to find a missing smartphone. The app can even display the last known location of each Tile on a built-in map; this is useful if the Tile is out of Bluetooth range. Finally, if the misplaced item is well and truly lost, a community-based feature can be used to provide an anonymous ping if another user’s running app comes within Bluetooth range of the missing item. With this functionality, Tile is ideal for finding anything that can be lost or misplaced: keys, remote controls, cell phones, and other high-value objects, large or small.

Tile chose AWS to allow them to scale rapidly and to have a global presence (they have devices in 214 countries & territories). They run multiple applications (the Tile Web App, Customer Service, and the Tile Network) on AWS using EC2, Route 53, RDS, CloudWatch, SNS, Kinesis, and Redshift. They currently process over 100 million location updates every day and regularly add new servers, modify load balancers, and update DNS entries.
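
To make the Kinesis ingestion concrete, here is a minimal sketch in Python (using Boto, the AWS SDK for Python) of how a location update might be packaged as a Kinesis record. The stream name and record fields are hypothetical illustrations, not Tile’s actual schema:

```python
import json

def build_location_record(device_id, lat, lon, timestamp):
    # One Kinesis record per location update; partitioning on the
    # device ID keeps each device's updates ordered within a shard.
    payload = {"device_id": device_id, "lat": lat, "lon": lon, "ts": timestamp}
    return {
        "Data": json.dumps(payload).encode("utf-8"),
        "PartitionKey": device_id,
    }

# With AWS credentials configured, a record would be sent like this:
# import boto3
# boto3.client("kinesis").put_record(
#     StreamName="location-updates",
#     **build_location_record("tile-42", 37.77, -122.42, 1458000000))
```

Keying the partition on the device ID is one reasonable design choice: it spreads millions of devices across shards while preserving per-device ordering.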

This hot startup was founded in a tiny San Francisco apartment that was home to Simon and James (the two founders), their respective partners, and a four-pack of cats. They wanted to provide developers of web and mobile applications with a tool that would intercept, track, and report on application crashes with an eye toward aggregated, prioritized reporting and analysis. Given the fragmented state of the mobile device world, being able to use Bugsnag to identify issues that are peculiar to one platform, device, or version ensures that developers are focused on fixing bugs that affect the most users.

Bugsnag helps thousands of companies to improve the quality of their web and mobile applications. It integrates with many languages and environments including Rails, JavaScript, Python, Go, PHP, iOS, and Android. The product captures detailed crash data, packages it up for analysis (including an encryption step), and then uploads the information to AWS where it can be used to create tickets, issue notifications to tools like HipChat and Slack, and so forth. Bugsnag also includes a dashboard that supports analysis of trends over time, data-driven root cause analysis, and multiple key/value filters.

The load on Bugsnag depends on the applications shipped by their customers and can vary greatly from day to day. They currently process up to a billion crashes per day. In order to handle this large, unpredictable load as economically as possible they make use of a multitude of AWS services including a mix of On-Demand and Spot instances. Their worker fleet comprises both kinds of instances, managed by a pair of Auto Scaling groups. The first group contains the Spot instances. It scales up aggressively and scales down slowly. The second group contains the On-Demand instances. It scales up conservatively and scales down aggressively. To learn more about how they did this, read their blog post, Responsive infrastructure with Auto Scaling.
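
The two-group pattern described above can be sketched with Boto. The group names and adjustment sizes here are illustrative assumptions, not Bugsnag’s actual configuration:

```python
def scaling_policies(group_name, aggressive_up):
    # Build simple-scaling policy parameters for one Auto Scaling group.
    # aggressive_up=True: scale up fast, down slowly (the Spot group);
    # aggressive_up=False: scale up conservatively, down fast (On-Demand).
    up, down = (4, -1) if aggressive_up else (1, -4)
    common = {"AutoScalingGroupName": group_name,
              "AdjustmentType": "ChangeInCapacity"}
    return [
        dict(common, PolicyName=group_name + "-scale-up", ScalingAdjustment=up),
        dict(common, PolicyName=group_name + "-scale-down", ScalingAdjustment=down),
    ]

spot_policies = scaling_policies("workers-spot", aggressive_up=True)
on_demand_policies = scaling_policies("workers-ondemand", aggressive_up=False)

# With credentials configured, each parameter set would be applied with:
# import boto3
# autoscaling = boto3.client("autoscaling")
# for p in spot_policies + on_demand_policies:
#     autoscaling.put_scaling_policy(**p)
```

The asymmetry is the interesting part: cheap Spot capacity absorbs spikes quickly and lingers, while the pricier On-Demand group grows reluctantly and sheds capacity as soon as the Spot fleet can carry the load.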

In 2013, three entrepreneurs in South Africa got together to plan a new venture. After observing that off-the-shelf drone hardware was maturing far more rapidly than the software needed to get the most value out of that hardware, they started DroneDeploy. Their vision was to make the sky productive and accessible to everyone. They wanted to remove complexity in order to allow companies to operate fleets of drones safely, reliably, and simply. They also wanted to give their customers the ability to process the data collected by the drones.

They launched the first version of their code in 2014. Since then they have attracted customers in industries as diverse as construction, agriculture, surveying, and mining (many interesting stories can be found on the DroneDeploy Blog). Here are a few examples:

  • A customer in Mexico processed 1000 km of road imagery in just 3 weeks (114,043 images / 8 terabytes of data).
  • A potato farmer in North Dakota mapped a 150 acre field, processed the data (30 minutes), and evaluated crop damage.
  • A construction manager in Oklahoma used DroneDeploy to monitor the construction of oil tanks and pipelines, producing 3D models in the process.

DroneDeploy is processing images from 100 countries into interactive maps and 3D models. They host their core infrastructure on AWS. They make heavy use of EC2 for image processing and S3 for storage (multiple petabytes). The image processing fleet is auto scaled up and down based on the number and priority of jobs, spread out across multiple Availability Zones.

— Jeff;
Categories: Cloud

Using Enhanced RDS Monitoring with Datadog

AWS Blog - Wed, 03/09/2016 - 09:03

Today’s guest post comes from K Young, Director of Strategic Initiatives at Datadog!

— Jeff;

AWS recently announced enhanced monitoring for Amazon RDS instances running MySQL, MariaDB, and Aurora. Enhanced monitoring includes over 50 new CPU, memory, file system, and disk I/O metrics which can be collected on a per-instance basis as frequently as once per second.

AWS and Datadog
AWS worked closely with Datadog to help customers send this new high-resolution data to Datadog for monitoring. Datadog is an infrastructure monitoring platform that is very popular with AWS customers—you can see historical trends with full granularity and also visualize and alert on live data from any part of your stack.

With a few minutes of work your enhanced RDS metrics will immediately begin populating a pre-built, customizable dashboard in Datadog:

Connect RDS and Datadog
The first step is to send enhanced RDS metrics to CloudWatch Logs. You can enable the metrics during instance creation, or on an existing RDS instance by selecting it in the RDS Console and then choosing Instance Options and then Modify:

Set Granularity to 1–60 seconds; every 15 seconds is often a good choice. Once enabled, enhanced metrics will be sent to CloudWatch Logs.
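
If you prefer to enable enhanced monitoring programmatically, the equivalent call via Boto (the AWS SDK for Python) looks roughly like this. The instance identifier and monitoring role ARN below are placeholders:

```python
# Parameters for enabling enhanced monitoring on an existing instance.
params = {
    "DBInstanceIdentifier": "my-db-instance",
    "MonitoringInterval": 15,  # seconds; valid values are 0, 1, 5, 10, 15, 30, 60
    "MonitoringRoleArn": "arn:aws:iam::123456789012:role/rds-monitoring-role",
    "ApplyImmediately": True,
}

# With credentials configured:
# import boto3
# boto3.client("rds").modify_db_instance(**params)
```

Setting MonitoringInterval back to 0 disables enhanced monitoring again.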

The second step is to send the CloudWatch Log data to Datadog. Begin by setting up a Lambda function to process the logs and send the metrics:

  1. Create a role for your Lambda function. Name it something like lambda-datadog-enhanced-rds-collector and select AWS Lambda as the role type.
  2. From the Encryption Keys tab on the IAM Management Console, create a new encryption key. Enter an Alias for the key like lambda-datadog-key. On the next page, add the appropriate administrators for the key. Next you’ll be prompted to add users to the key. Add at least two: yourself (so that you can encrypt the Datadog API key from the AWS CLI in the next step), and the role created above, e.g. lambda-datadog-enhanced-rds-collector (so that it can decrypt the API key and submit metrics to Datadog). Finish creating the key.
  3. Encrypt the token using the AWS Command Line Interface (CLI), providing the Alias of your just-created key (e.g. lambda-datadog-key) as well as your Datadog keys, available here. Use KMS to encrypt your key, like this: $ aws kms encrypt --key-id alias/ALIAS_KEY_NAME --plaintext '{"api_key":"DATADOG_API_KEY", "app_key":"DATADOG_APP_KEY"}'

    Save the output of this command; you will need it for the next step.

  4. From the Lambda Management Console, create a new Lambda Function. Filter blueprints by datadog, and select the datadog-process-rds-metrics blueprint.
  5. Choose RDSOSMetrics from the Log Group dropdown, enter the Filter Name of your choice, and go to the next page. If you have not yet enabled enhanced monitoring, you must do so before RDSOSMetrics will be presented as an option (see the instructions under Connect RDS and Datadog above):
  6. Give your function a name like send-enhanced-rds-to-datadog. In the Lambda function code area, replace the string after KMS_ENCRYPTED_KEYS with the ciphertext blob part of the CLI command output above.
  7. Under Lambda function handler and role, choose the role you created in step 2, e.g. lambda-datadog-enhanced-rds-collector. Go to the next page, select the Enable Now radio button, and create your function.
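
Step 3 can also be performed with Boto rather than the CLI. This sketch builds the same KMS Encrypt request; the key alias and Datadog keys are placeholders:

```python
import json

def encrypt_request(key_alias, api_key, app_key):
    # Build the KMS Encrypt request equivalent to the CLI command in step 3.
    plaintext = json.dumps({"api_key": api_key, "app_key": app_key})
    return {"KeyId": "alias/" + key_alias,
            "Plaintext": plaintext.encode("utf-8")}

# With credentials configured:
# import boto3
# resp = boto3.client("kms").encrypt(
#     **encrypt_request("lambda-datadog-key", "DATADOG_API_KEY", "DATADOG_APP_KEY"))
# resp["CiphertextBlob"] is the ciphertext to paste into the Lambda
# function code in step 6.
```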

That’s It
Once you have enabled RDS in Datadog’s AWS integration tile, Datadog will immediately begin displaying your enhanced RDS metrics. Your RDS instances will be individually identifiable in Datadog via automatically-created tags of the form dbinstanceidentifier:YOUR_DB_INSTANCE_NAME, as well as any tags you added through the RDS console.

You can clone the pre-built dashboard and customize it however you want: add RDS metrics that are not displayed by default, or start correlating RDS metrics with the performance of the rest of your stack.

— K Young, Director of Strategic Initiatives


Categories: Cloud

Flatiron Health – Using AWS to Help Improve Cancer Treatment

AWS Blog - Wed, 03/09/2016 - 06:29

Flatiron Health is a hot startup with a great idea – providing cancer patients, physicians, researchers, and drug firms with a solution that organizes global oncology information. Currently more than 230 cancer clinics and about 2,200 clinicians across the United States use their products and services, which support the treatment of approximately one in five U.S. cancer patients. I asked Alex to tell us about Flatiron’s decision to leverage AWS’s platform. In the guest post below, Alex Lo (Engineering Manager of Developer Infrastructure) tells us how they have put AWS to use!

— Jeff;

Flatiron Health began after our founders witnessed family members and friends battle cancer. They were really frustrated by the inefficiencies in accessing and benefiting from all the diverse and siloed oncology data in fragmented medical record systems. Their vision was to build a disruptive software platform that connects cancer centers across the country, with the goal of supporting deeper insights and understanding to transform how cancer care is delivered.

Time to Market
We’re a startup trying to get solutions like our OncologyCloud software suite to market as quickly as possible, and our team was unable to iterate and innovate as quickly and as reproducibly as we wanted due to a lack of mature automation tools and APIs. We want technology to work for us so we can solve business problems, instead of spending time dealing with computers.

I joined Flatiron in 2015, and fortunately by that time AWS had implemented the healthcare industry compliance standards and processes that we needed. This made us confident that our efforts would be successful. By mid-2015 we began an AWS adoption project. We felt AWS would be the best platform for future growth because of its best-in-class features, rich ecosystem, and excellent HIPAA-eligible features that include encryption, fine-grained security, and auditability. It lets us deliver a unique solution and quickly iterate on our products.

OncoAnalytics, part of Flatiron Health’s OncologyCloud suite, is an analytics tool that unlocks data from multiple systems and delivers detailed clinical insights and business intelligence.

AWS Usage
We use a range of AWS services. Amazon EC2 has advanced features that give us access to virtual machine system logs from the administrative console. Amazon S3 provides us infinitely scaling durable storage, and the encryption features provided with AWS make it straightforward for us to store Protected Health Information (PHI) in S3. We’re running in an Amazon VPC, and use Amazon VPN connections to other networks for secure connectivity. We’re also using AWS IAM, which is wonderful in our environment. It gives us fine-grained security controls so we can enable our engineers to create resources without being full administrators. We use it a lot and are experimenting with some of the more advanced features, like the AWS Security Token Service and EC2 Roles. We’re also using auditing and security tools, including AWS Trusted Advisor, AWS CloudTrail, AWS Config, and Amazon CloudWatch.

An additional benefit of AWS is the expertise that we get with the platform. AWS gives really good advice on how to build HIPAA-compliant applications, with account reps specializing in life sciences and Solutions Architects with health tech backgrounds. Plus, AWS has a developer ecosystem that is more mature than what other cloud providers offer. For example, Ansible has an out-of-the-box EC2 inventory module that helps us manage our fleet. We also use both the AWS Command Line Interface and Boto—the AWS SDK for Python—to automate other routine tasks. This automation would be more difficult on other cloud providers.
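
As a concrete example of the kind of routine task we automate with Boto, here is a minimal sketch that finds running instances missing a required tag. The tag name is illustrative, not one of our actual conventions:

```python
def untagged_instance_ids(pages, required_tag="environment"):
    # Given describe_instances result pages, return the IDs of
    # instances that are missing the required tag.
    missing = []
    for page in pages:
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {t["Key"] for t in instance.get("Tags", [])}
                if required_tag not in tags:
                    missing.append(instance["InstanceId"])
    return missing

# With credentials configured:
# import boto3
# pages = boto3.client("ec2").get_paginator("describe_instances").paginate(
#     Filters=[{"Name": "instance-state-name", "Values": ["running"]}])
# print(untagged_instance_ids(pages))
```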

Almost There
We’re nearing the end of our first consolidation project on AWS. There are a lot of variables and planning involved in moving into a cloud platform and it’s taken us about 10 months, but we expect the migration to be completed by April 2016. Our development teams have been benefitting from the AWS environment for months now, and it’s exciting to see them move faster. We’re looking to leverage AWS in more ways going forward, possibly moving some dedicated hosting assets into the elastic cloud.

To us, the benefits of using AWS are clear. AWS, with its support team, compliance team, tools, ecosystem, and continued feature growth is helping us iterate faster to solve problems that matter in improving cancer care.

Learning More
Read an AWS whitepaper on architecting for HIPAA security and compliance on AWS.

Read a two-part deep-dive article by an AWS solutions architect who worked closely with the Flatiron Engineering team. Part 1 is here and Part 2 is here.

— Alex Lo, Engineering Manager of Developer Infrastructure, Flatiron Health

PS – We are looking for great developers interested in making a difference in the fight against cancer. If you’re interested, get in touch with us.

Categories: Cloud

New – Notifications for AWS CodeCommit

AWS Blog - Mon, 03/07/2016 - 18:54

AWS CodeCommit is a fully-managed source control service that makes it easy for you to host a secure and highly scalable private Git repository. Today we are making CodeCommit even more useful by adding support for repository triggers. You can use these triggers to integrate your existing unit tests and deployment tools into your source code management workflow. Because triggers are efficient and scalable, they are more broadly applicable than a model that is built around polling for changes. I believe that you will find these triggers to be helpful as you move toward a development methodology based on Continuous Integration and Continuous Delivery.

All About Notifications
You can create up to 10 triggers for each of your CodeCommit repositories. The triggers are activated in response to actions on the repository including code pushes, branch/tag creation, and branch/tag deletion. Triggers can be set to run for a specific branch of a repository or for all branches.
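
Creating a trigger through the API is straightforward. This sketch builds the parameters for a single SNS-backed trigger using Boto; the repository name, trigger name, and topic ARN are placeholders:

```python
# One trigger definition; a repository can hold up to 10 of these.
trigger = {
    "name": "notify-on-push",
    "destinationArn": "arn:aws:sns:us-east-1:123456789012:my-topic",
    "customData": "main-branch-pushes",  # optional, uninterpreted string
    "branches": ["master"],              # an empty list means all branches
    "events": ["updateReference"],       # a push to an existing branch
}

# With credentials configured:
# import boto3
# boto3.client("codecommit").put_repository_triggers(
#     repositoryName="my-repo", triggers=[trigger])
```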

Triggers can send a notification to an Amazon Simple Notification Service (SNS) topic or invoke an AWS Lambda function. Each trigger can also be augmented with custom data (an uninterpreted string) that you can use to distinguish the trigger from others that run for the same event. You can use triggers to subscribe to repository events through email or SMS. You can wire up SNS to SQS and queue up jobs for your CI/CD tools, or you can use SNS to activate webhooks provided by your tools. In any case, the actions you designate will be triggered by the changes in your CodeCommit repository. You can also use Lambda functions to trigger builds, check syntax, capture code complexity metrics, measure developer productivity (less is more, of course), and so forth. My colleagues have also come up with some unusual ideas that you can find at the end of this post!
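
A Lambda function invoked by a trigger receives the repository event in its handler. Here is a minimal handler sketch in Python; the event shape shown (Records containing codecommit.references) is a simplification, so consult the documentation for the full format:

```python
def handler(event, context):
    # Log each reference (branch or tag) that changed, along with the
    # optional customData string configured on the trigger.
    for record in event.get("Records", []):
        custom = record.get("customData")
        for ref in record.get("codecommit", {}).get("references", []):
            print("ref %s moved to commit %s (customData=%s)"
                  % (ref.get("ref"), ref.get("commit"), custom))
    return "ok"
```

From here the function could kick off a build, run a syntax check, or post to a chat channel.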

You can create, view, and manage your triggers from the AWS Management Console, AWS Command Line Interface (CLI), or via the CodeCommit API.  I used the Console. The navigation column on the left now includes an entry for Triggers:

I simply click on Create Trigger to get started. Then I select the event (or events), pick the branch (or all branches), and fill in the details that are needed to publish a notification or invoke a Lambda function:

Here’s how I choose the events and branches of interest:


Then I point to my SNS topic or Lambda function (after ensuring that the proper permissions are in place), use Test Trigger to make sure that it all works as expected, and click on Create.

You can use Test Trigger to verify that your IAM permissions are working as expected. For example, here’s an error that I triggered on purpose:

I fixed this by reading How to Allow AWS CodeCommit to run the Function in the documentation!

Available Now
This new functionality is available now and you can start using it today. To learn more, read about Managing Triggers for an AWS CodeCommit Repository.

My colleague Claire Liguori suggested some creative uses for CodeCommit triggers, above and beyond the usual integration with your CI/CD process:

  • Video Deployment – Have your Lambda function check to see if a new video or a new version of an existing video has been committed, and deploy the video to YouTube.
  • Party Time – Automatically throw and cater a party (using APIs for sandwiches, pizza, and beer) when you deploy a new release.
  • Advertise Releases – When a new release is ready, automatically generate and run a Facebook ad and publicize the release on social media.

I am looking forward to hearing about the creative ways that you make use of these triggers within your development process. Leave me a comment and let me know!

— Jeff;


Categories: Cloud

AWS Week in Review – February 29, 2016

AWS Blog - Mon, 03/07/2016 - 11:02

Let’s take a quick look at what happened in AWS-land last week:


February 29

  • We announced that AWS Import/Export Snowball Now Supports Export.
  • We announced that Amazon CloudWatch Events are Now Available in the EU (Frankfurt) Region.
  • We announced VPC ClassicLink and ClassicLink Support in the South America (Sao Paulo) Region.
  • We announced that the AWS Storage Gateway Now Supports EMC Networker 8.x with Gateway-VTL.
  • The AWS Windows and .NET Developer Blog published a 2-part series on Exploring ASP.NET Core (Part 1 – Deploying from GitHub, Part 2- Continuous Delivery).
  • The AWS Partner Network Blog shared a new partner success story: Business Model Transformation on AWS – Wipro.
  • The AWS Security Blog announced Industry Best Practices for Securing AWS Resources.
  • 8KMiles shared 19 Best Practices for Creating Amazon CloudFormation Templates, 27 Best Practice Tips on Amazon Web Services Security Groups, 25 Best Practice Tips for architecting Amazon VPC, and Amazon RDS Second Tier Read Replicas.
  • CloudCheckr listed 6 Common Availability Issues.
  • N2W Software talked about File-Level Recovery for EBS Snapshots.
  • Serverless Code reviewed AWS Lambda, a Guide to Serverless Microservices.
  • Stackshare talked about Scaling Zapier to Automate Billions of Tasks.
  • Aerobatic listed Five Reasons to Host a Static Site with Aerobatic Instead of S3.
  • Forbes listed Five Facts that are Fueling Serverless Computing.

March 1

  • We announced Local Time Zone Support for Amazon Aurora.
  • We announced the AWS Config Rules Repository on GitHub.
  • We announced Support for Security Group References in a Peered VPC.
  • We announced a Data Egress Discount for Researchers.
  • We updated the AWS CLI, AWS SDK for Go, AWS SDK for Ruby, and the AWS SDK for JavaScript.
  • The Amazon Mobile App Distribution Blog talked about Developing Alexa Skills Locally with Node.js.
  • The AWS Java Blog talked about Parallelizing Large Uploads for Speed and Reliability.
  • The AWS Startup Collection shared What Startups Should Know Before Choosing a CDN.
  • Stelligent discussed Infrastructure as Code.
  • RightScale announced some Product Updates including access to AWS Spot Prices.
  • Netflix discussed Caching for a Global Netflix (across multiple AWS Regions).
  • A Cloud Guru talked about Easy Video Transcoding in AWS.
  • Cloudyn talked about Measuring Compute Consumption in the Cloud with Instance Core-Hours.
  • Cloud Academy talked about Amazon EMR, Apache Spark, and Apache Zeppelin for Big Data.
  • DZone Cloud Zone talked about Learning How to Become an AWS Cloud Guru.
  • ParkMyCloud announced Version 2.0, with Multi-User and Multi-Account Capabilities.
  • Trek10 discussed the Serverless Framework for Processes, Projects, and Scale.

March 2

  • We announced that Amazon SNS Now Sends Notifications for AWS Directory Service.
  • We launched the Backup and Archive Calculator.
  • We updated the AWS Schema Conversion Tools and the AWS TCO Calculator.
  • The AWS DevOps Blog talked about Using Locust on AWS Elastic Beanstalk for Distributed Load Generation and Testing.
  • The AWS Partner Network Blog shared a new AWS success story: IT Costs Savings of $5.25 Million on AWS – Dodge Data & Analytics.
  • The AWS Ruby Development Blog announced the Aws::Record Developer Preview.
  • Cloudlytics talked about How Tag-Based Billing Reports Help Manage Your Bills Across Multiple AWS Environments.
  • Ahead talked about Better Billing Through Metadata: Tracking Amazon WorkSpaces.
  • Cloud Academy announced a New Course: Advanced Use of AWS CloudFormation.
  • Aerobatic announced Continuous Build and Deployment of Jekyll Sites and shared their Lambda-powered architecture.

March 3

  • We updated the AWS CLI, AWS SDK for Go, AWS SDK for JavaScript, and the AWS SDK for Ruby.
  • The Amazon Mobile App Distribution Blog announced that Alexa is Now Available on Two New Devices – Echo Dot and Amazon Tap.
  • The AWS Big Data Blog showed you how to Analyze Your Data on Amazon DynamoDB with Apache Spark.
  • Serverless Code reviewed Serverless Single Page Apps.
  • ParkMyCloud talked about How to Save Money with AWS Auto Scaling Groups.
  • We updated the Continuous Delivery and Continuous Integration pages.
  • CloudCheckr published a new white paper: Amazon Cloud: The Ultimate Guide to Cost Management.

March 4

  • We announced New VPN Features in the South America (Sao Paulo) Region.
  • We updated the AWS SDK for Java.
  • The AWS Enterprise Blog explored The Future of Managed Services in the Cloud.
  • The AWS Government, Education, & Nonprofits Blog announced AWS Cloud Credits for Nonprofits with TechSoup Global.
  • Cloud Academy announced a New Course: Advanced High Availability on AWS.

March 5

  •  Nothing happened!

March 6

  •  Cloud Enlightened talked about Cracking AWS Solution Architect Professional Certification.

New & Notable Open Source

  • LENA is a Lambda Executed NAT Migration Tool.
  • tvarit-maven contains some AWS DevOps automation tools to run Wildfly on OpsWorks.
  • bamboo-ebs is an EBS provisioner for Bamboo.
  • lambda-packages contains some popular packages, precompiled for use with Lambda.
  • aws-workshop is a set of workshops for AWS, starting with S3.
  • proftpd-mod_aws is an AWS configuration for ProFTPD.
  • keymaker implements lightweight key management on EC2.
  • nubis-nat creates an AWS NAT instance and a Squid proxy.
  • alfresco-cloudformation-chef is a set of CloudFormation templates for the Alfresco One Reference Architecture.
  • humilis-firehose-resource is a custom CloudFormation resource to deploy Kinesis Firehose delivery streams.

New SlideShare Presentations

  • February 2016 Webinar Series:
    • Architectural Patterns for Big Data on AWS.
    • Migrate Your Apps from Parse to AWS.
    • Introducing VPC Support for AWS Lambda.
    • EC2 Container Service Deep Dive.
    • Introduction to AWS Database Migration Service.
    • Use AWS Cloud Storage as the Foundation for Hybrid Strategy.
    • Best Practices for IoT Security in the Cloud.
    • Automate Your App Tests with Appium and AWS Device Farm.
    • Introduction to DynamoDB.
    • Achieving Business Value with Big Data.

New Customer Success Stories

  • CrowdStrike – CrowdStrike uses AWS to implement a scalable, cloud-based solution for preventing cyber breaches with on-demand resources, thereby simplifying maintenance, reducing cost, and improving performance.
  • Edmunds.com – After evaluating several Git hosting solutions, Edmunds.com migrated its source code repositories to the cloud on AWS CodeCommit.
  • IXD – By basing its secure email and fax delivery services on AWS, IXD has disrupted the secure document transmission and messaging industry by offering its services for 80 percent less than some of its competitors.
  • Amazon.com – Amazon.com cut costs while improving performance and reliability for its enterprise communications and collaboration software, including Microsoft Exchange, Microsoft Lync, and Microsoft SharePoint.

Upcoming Events

  • March 10th – Live Event (San Francisco, CA) – DevDay at AWS Loft.
  • March 14 – Live Event (Seattle, Washington) – Seattle AWS Architects & Engineers – Lambda + Alexa AWS Teams.
  • March 15th – Live Event (San Francisco, California) – Amazon Lumberyard team at GDC 2016.
  • March 17th – Webinar – Security Best Practices for Retailers on AWS.
  • April 6th – Live Event (Boston, Massachusetts) – AWS at Bio-IT World.
  • April 18th & 19th – Live Event (Chicago, Illinois) – AWS Summit – Chicago.
  • AWS Loft – San Francisco.
  • AWS Loft – New York.
  • AWS Loft – Tel Aviv.
  • AWS Global Summit Series.

Help Wanted

  • AWS Careers.

Stay tuned for next week! In the meantime, follow me on Twitter and subscribe to the RSS feed.

— Jeff;

Categories: Cloud

