Notion’s hours-long outage was caused by phishing complaints

Last week’s hours-long outage at online workspace startup Notion was caused by phishing complaints, according to the startup’s domain registrar.

Notion was offline for most of the morning on Friday, plunging its more than four million users into organizational darkness because of what the company called a “very unusual DNS issue that occurred at the registry operator level.” With the company’s domain offline, users were unable to access their files, calendars, and documents.

Notion registered its domain name notion.so through Name.com, but all .so domains are managed by Hexonet, a company that helps connect Sonic, the .so top-level domain registry, with domain name registrars like Name.com.

That complex web of interdependence is in large part what led to the communications failure that resulted in Notion falling offline for hours.

In an email to TechCrunch, Name.com spokesperson Jared Ewy said: “Hexonet received complaints about user-generated Notion pages connected to phishing. They informed Name.com about these reports, but we were unable to independently confirm them. Per its policies, Hexonet placed a temporary hold on Notion’s domain.”

“Noting the impact of this action, all teams worked together to restore service to Notion and its users. All three teams are now partnering on new protocols to ensure this type of incident does not happen again. The Notion team and their avid followers were responsive and a pleasure to work with throughout. We thank everyone for their patience and understanding,” said Ewy.

There are several threads on Reddit discussing concerns about Notion being used to host phishing sites, and security researchers have shown examples of Notion used in active phishing campaigns. A Notion employee said almost a year ago that Notion would “soon” move its domain to notion.com, which the company owns.

Notion’s outage is almost identical to what happened to Zoho in 2018. Like Notion, Zoho resorted to tweeting at its domain registrar after the registrar blocked zoho.com following complaints about phishing emails sent from Zoho-hosted email accounts.

It sounds like there’s no immediate danger of a repeat outage, but Notion did not return TechCrunch’s email over the weekend asking what it plans to do to prevent phishing on its platform in the future.


#crime, #cybercrime, #dns, #email, #internet, #notion, #phishing, #security, #sonic, #spamming, #spokesperson, #top-level-domain, #web-hosting, #world-wide-web


Vantage makes managing AWS easier

Vantage, a new service that makes managing AWS resources and their associated spend easier, is coming out of stealth today. The service offers its users an alternative to the complex AWS console, with support for most of the standard AWS services, including EC2 instances, S3 buckets, VPCs, ECS, Fargate and Route 53 hosted zones.

The company’s founder, Ben Schaechter, previously worked at AWS and DigitalOcean (and, before that, at Crunchbase). While DigitalOcean showed him how to build a developer experience for individuals and small businesses, he argues that its underlying services and hardware simply weren’t as robust as those of the hyperclouds. AWS, on the other hand, offers everything a developer could want (and likely more), but the user experience leaves a lot to be desired.

Image Credits: Vantage

“The idea was really born out of ‘what if we could take the user experience of DigitalOcean and apply it to the three public cloud providers: AWS, GCP and Azure,’” Schaechter told me. “We decided to start just with AWS because the experience there is the roughest and it’s the largest player in the market. And I really think that we can provide a lot of value there before we do GCP and Azure.”

The focus for Vantage is on the developer experience and cost transparency. Schaechter noted that some of its users describe it as being akin to a “Mint for AWS.” To get started, you give Vantage a set of read permissions to your AWS services and the tool will automatically profile everything in your account. The service refreshes this list once per hour, but users can also refresh their lists manually.
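To make the read-permissions setup concrete, here is a minimal sketch of the kind of read-only inventory pass such a tool presumably performs, written with the official boto3 SDK. The specific services profiled and the script itself are illustrative assumptions, not Vantage’s actual implementation.

```python
# Illustrative sketch only: a read-only inventory pass over a few AWS services,
# similar in spirit to the account profiling Vantage describes. Uses the official
# boto3 SDK; credentials are assumed to be configured in the environment.
import boto3


def profile_account(region="us-east-1"):
    ec2 = boto3.client("ec2", region_name=region)
    s3 = boto3.client("s3")
    r53 = boto3.client("route53")

    # EC2 instances (paginated)
    instances = []
    for page in ec2.get_paginator("describe_instances").paginate():
        for reservation in page["Reservations"]:
            instances.extend(i["InstanceId"] for i in reservation["Instances"])

    # S3 buckets (bucket listing is global)
    buckets = [b["Name"] for b in s3.list_buckets()["Buckets"]]

    # Route 53 hosted zones
    zones = [z["Name"] for z in r53.list_hosted_zones()["HostedZones"]]

    return {"ec2_instances": instances, "s3_buckets": buckets, "hosted_zones": zones}


if __name__ == "__main__":
    print(profile_account())
```

A tool like Vantage would presumably run something along these lines on an hourly schedule and cache the results, which matches the refresh behavior described above.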

Given that it’s often hard enough to know which AWS services you are actually using, that alone is a useful feature. “That’s the number one use case,” he said. “What are we paying for and what do we have?”

At the core of Vantage is what the team calls “views,” which allow you to see which resources you are using. What is interesting here is that this is quite a flexible system that lets you build custom views to see, for example, which resources you are using for a given application across regions. Those may include Lambda, storage buckets, your subnet, code pipeline and more.

On the cost-tracking side, Vantage currently only offers point-in-time costs, but Schaechter tells me that the team plans to add historical trends as well to give users a better view of their cloud spend.
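For reference, point-in-time spend of this kind is what the AWS Cost Explorer API exposes. Whether Vantage pulls from Cost Explorer or another billing source is not stated, so the following boto3 sketch is purely an illustration of the data involved.

```python
# Sketch: pull yesterday's spend per service from the AWS Cost Explorer API.
# That Vantage uses this particular API is an assumption, not a stated fact.
import boto3
from datetime import date, timedelta

ce = boto3.client("ce")  # Cost Explorer client

end = date.today()
start = end - timedelta(days=1)

resp = ce.get_cost_and_usage(
    TimePeriod={"Start": start.isoformat(), "End": end.isoformat()},
    Granularity="DAILY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],  # break spend out per service
)

for group in resp["ResultsByTime"][0]["Groups"]:
    service = group["Keys"][0]
    amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
    print(f"{service}: ${amount:.2f}")
```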

Schaechter and his co-founder bootstrapped the company and he noted that before he wants to raise any money for the service, he wants to see people paying for it. Currently, Vantage offers a free plan, as well as paid “pro” and “business” plans with additional functionality.

Image Credits: Vantage 

#amazon-web-services, #cloud, #cloud-computing, #cloud-infrastructure, #cloud-storage, #computing, #developer, #digitalocean, #gcp, #tc, #web-hosting, #world-wide-web


Google expands its cloud with new regions in Chile, Germany and Saudi Arabia

It’s been a busy year of expansion for the large cloud providers, with AWS, Azure and Google aggressively expanding their data center presence around the world. To cap off the year, Google Cloud today announced a new set of cloud regions, which will go live in the coming months and years. These new regions, which will all have three availability zones, will be in Chile, Germany and Saudi Arabia. That’s on top of the regions in Indonesia, South Korea and the U.S. (Las Vegas and Salt Lake City) that went live this year — and the upcoming regions in France, Italy, Qatar and Spain the company also announced over the course of the last twelve months.

Image Credits: Google

In total, Google currently operates 24 regions with 73 availability zones, not counting those it has announced but that aren’t live yet. While Microsoft Azure is well ahead of the competition in terms of the total number of regions (though some still lack availability zones), Google is now starting to pull even with AWS, which currently offers 24 regions with a total of 77 availability zones. Indeed, with its 12 announced regions, Google Cloud may actually soon pull ahead of AWS, which is currently working on six new regions.

The battleground may soon shift away from these large data centers, though, with a new focus on edge zones close to urban centers that are smaller than the full-blown data centers the large clouds currently operate but that allow businesses to host their services even closer to their customers.

All of this is a clear sign of how much Google has invested in its cloud strategy in recent years. For the longest time, after all, Google Cloud Platform lagged well behind its competitors. Only three years ago, Google Cloud offered only 13 regions, for example. And that’s on top of the company’s heavy investment in submarine cables and edge locations.

#amazon-web-services, #aws, #chile, #cloud-computing, #cloud-infrastructure, #france, #germany, #google, #google-cloud-platform, #indonesia, #italy, #microsoft, #nuodb, #qatar, #salt-lake-city, #saudi-arabia, #south-korea, #spain, #tc, #united-states, #web-hosting, #web-services


AWS updates its edge computing solutions with new hardware and Local Zones

AWS today closed out its first re:Invent keynote with a focus on edge computing. The company launched two smaller appliances for its Outpost service, which originally brought AWS as a managed service and appliance right into its customers’ existing data centers in the form of a large rack. Now, the company is launching these smaller versions so that its users can also deploy them in their stores or office locations. These appliances are fully managed by AWS and offer 64 cores of compute, 128GB of memory and 4TB of local NVMe storage.

In addition, the company expanded its set of Local Zones, which are basically small extensions of existing AWS regions that are more expensive to use but offer low-latency access in metro areas. This service launched in Los Angeles in 2019 and starting today, it’s also available in preview in Boston, Houston and Miami. Soon, it’ll expand to Atlanta, Chicago, Dallas, Denver, Kansas City, Las Vegas, Minneapolis, New York, Philadelphia, Phoenix, Portland and Seattle. Google, it’s worth noting, is doing something similar with its Mobile Edge Cloud.

The general idea here — and that’s not dissimilar from what Google, Microsoft and others are now doing — is to bring AWS to the edge and to do so in a variety of form factors.

As AWS CEO Andy Jassy rightly noted, AWS always believed that the vast majority of companies, “in the fullness of time” (Jassy’s favorite phrase from this keynote), would move to the cloud. Because of this, AWS focused on cloud services over hybrid capabilities early on. He argues that AWS watched others try and fail in building their hybrid offerings, in large part because what customers really wanted was to use the same control plane on all edge nodes and in the cloud. Because of this, Jassy argues, none of the existing solutions from other vendors got any traction (though AWS’s competitors would surely deny this).

The first result of that was VMware Cloud on AWS, which allowed customers to use the same VMware software and tools on AWS they were already familiar with. But at the end of the day, that was really about moving on-premises services to the cloud.

With Outpost, AWS launched a fully managed edge solution that can run AWS infrastructure in its customers’ data centers. It’s been an interesting journey for AWS, but the fact that the company closed out its keynote with this focus on hybrid — no matter how it wants to define it — shows that it now understands that there is clearly a need for this kind of service. The AWS way is to extend AWS into the edge — and I think most of its competitors will agree with that. Microsoft tried this early on with Azure Stack and really didn’t get a lot of traction, as far as I’m aware, but it has since retooled its efforts around Azure Arc. Google, meanwhile, is betting big on Anthos.

#amazon-web-services, #atlanta, #aws-reinvent-2020, #boston, #chicago, #cloud, #cloud-applications, #cloud-computing, #cloud-infrastructure, #cloud-services, #computing, #dallas, #denver, #developer, #enterprise, #google, #houston, #kansas-city, #las-vegas, #los-angeles, #miami, #microsoft, #minneapolis, #mobile-edge, #new-york, #philadelphia, #phoenix, #portland, #seattle, #tc, #vmware, #web-hosting, #web-services


AWS launches Glue Elastic Views to make it easier to move data from one purpose-built data store to another

AWS has launched Glue Elastic Views, a new tool that lets developers move data from one store to another.

At the AWS re:Invent keynote, CEO Andy Jassy announced Glue Elastic Views, a service that lets programmers move data across multiple data stores more seamlessly.

The new service can take data from disparate silos and bring it together. The ETL service lets programmers write a small amount of SQL to create a materialized view that copies data from one source data store to another.

For instance, Jassy said, a developer can set up a materialized view that copies data from DynamoDB to Elasticsearch — all while the service manages the dependencies. That means if data changes in the source data store, it will automatically be updated in the other data stores where the data has been replicated, Jassy said.
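AWS did not show code for the service on stage, but to illustrate the pattern it automates, here is a rough, hand-rolled equivalent: a Lambda handler subscribed to a DynamoDB stream that mirrors changes into an Elasticsearch index. The endpoint, index name and key schema are hypothetical, and this is not the Glue Elastic Views API itself.

```python
# Illustration only: Glue Elastic Views automates this kind of copy as a managed service.
# A hand-rolled equivalent is often a Lambda function on a DynamoDB stream that
# re-indexes changed items into Elasticsearch. Endpoint, index and key names are hypothetical.
from elasticsearch import Elasticsearch

es = Elasticsearch(["https://search-example.us-east-1.es.amazonaws.com"])  # hypothetical endpoint


def handler(event, context):
    """Triggered by a DynamoDB stream; mirrors inserts, updates and deletes into an index."""
    for record in event["Records"]:
        keys = record["dynamodb"]["Keys"]
        doc_id = keys["pk"]["S"]  # assumes a string partition key named "pk"

        if record["eventName"] == "REMOVE":
            es.delete(index="orders", id=doc_id, ignore=[404])
        else:
            new_image = record["dynamodb"]["NewImage"]
            # Flatten DynamoDB's attribute-value format into plain fields
            doc = {k: list(v.values())[0] for k, v in new_image.items()}
            es.index(index="orders", id=doc_id, body=doc)
```

The managed service’s pitch is that the SQL-defined view replaces exactly this kind of glue code and dependency tracking.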

“When you have the ability to move data… and move that data easily from data store to data store… that’s incredibly powerful,” said Jassy.

#amazon-web-services, #andy-jassy, #cloud-infrastructure, #cloud-storage, #computing, #data-lake, #data-management, #elasticsearch, #programmer, #sql, #tc, #web-hosting


AWS brings the Mac mini to its cloud

AWS today opened its re:Invent conference with a surprise announcement: the company is bringing the Mac mini to its cloud. These new EC2 Mac instances, as AWS calls them, are now available in preview. They won’t come cheap, though.

The target audience here — and the only one AWS is targeting for now — is developers who want cloud-based build and testing environments for their Mac and iOS apps. But it’s worth noting that with remote access, you get a fully-featured Mac mini in the cloud, and I’m sure developers will find all kinds of other use cases for this as well.

Given the recent launch of the M1 Mac minis, it’s worth pointing out that the machines AWS is using — at least for the time being — are i7 Mac minis with six physical and 12 logical cores and 32 GB of memory. Using the Mac’s built-in networking options, AWS connects them to its Nitro System for fast network and storage access. This means you’ll also be able to attach AWS block storage to these instances, for example.

Unsurprisingly, the AWS team is also working on bringing Apple’s new M1 Mac minis into its data centers. The current plan is to roll this out “early next year,” AWS tells me, and definitely within the first half of 2021. Both AWS and Apple believe that the need for Intel-powered machines won’t go away anytime soon, though, especially given that a lot of developers will want to continue to run their tests on Intel machines for the foreseeable future.

David Brown, AWS’s vice president of EC2, tells me that these are completely unmodified Mac minis. AWS only turned off Wi-Fi and Bluetooth. It helps, Brown said, that the minis fit nicely into a 1U rack.

“You can’t really stack them on shelves — you want to put them in some sort of service sled [and] it fits very well into a service sled and then our cards and all the various things we have to worry about, from an integration point of view, fit around it and just plug into the Mac mini through the ports that it provides,” Brown explained. He admitted that this was obviously a new challenge for AWS. The only way to offer this kind of service is to use Apple’s hardware, after all.

Image Credits: AWS

It’s also worth noting that AWS is not virtualizing the hardware. What you’re getting here is full access to your own device that you’re not sharing with anybody else. “We wanted to make sure that we support the Mac Mini that you would get if you went to the Apple store and you bought a Mac mini,” Brown said.

Unlike with other EC2 instances, whenever you spin up a new Mac instance, you have to pre-pay for the first 24 hours to get started. After those first 24 hours, prices are by the second, just like with any other instance type AWS offers today.
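That billing quirk lines up with how EC2 exposes the machines: a Mac mini occupies a Dedicated Host, which carries a 24-hour minimum allocation. As a hedged sketch (the AMI ID is a placeholder, and mac1.metal is the Intel Mac mini instance type), launching one with boto3 looks roughly like this:

```python
# Hedged sketch: EC2 Mac instances run on Dedicated Hosts, which is why the first
# 24 hours are billed up front. The AMI ID below is a placeholder; use a macOS AMI
# from your chosen region.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# 1. Allocate a Dedicated Host capable of running mac1.metal
host = ec2.allocate_hosts(
    AvailabilityZone="us-east-1a",
    InstanceType="mac1.metal",
    Quantity=1,
)
host_id = host["HostIds"][0]

# 2. Launch the Mac instance onto that host
resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder macOS AMI ID
    InstanceType="mac1.metal",
    MinCount=1,
    MaxCount=1,
    Placement={"Tenancy": "host", "HostId": host_id},
)
print(resp["Instances"][0]["InstanceId"])
```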

AWS will charge $1.083 per hour, billed by the second. That’s just under $26 to spin up a machine and run it for 24 hours. That’s quite a lot more than what some of the small Mac mini cloud providers are charging (we’re generally talking about $60 or less per month for their entry-level offerings and around two to three times as much for a comparable i7 machine with 32GB of RAM).

Image Credits: Ron Miller/TechCrunch

Until now, Mac mini hosting was a small niche in the hosting market, though it has a fair number of players, with the likes of MacStadium, MacinCloud, MacWeb and Mac Mini Vault vying for their share of the market.

With this new offering from AWS, they are now facing a formidable competitor, though they can still compete on price. AWS, however, argues that it can give developers access to all of the additional cloud services in its portfolio, which sets it apart from all of the smaller players.

“The speed that things happen at [other Mac mini cloud providers] and the granularity that you can use those services at is not as fine as you get with a large cloud provider like AWS,” Brown said. “So if you want to launch a machine, it takes a few days to provision and somebody puts a machine in a rack for you and gives you an IP address to get to it and you manage the OS. And normally, you’re paying for at least a month — or a longer period of time to get a discount. What we’ve done is you can literally launch these machines in minutes and have a working machine available to you. If you decide you want 100 of them, 500 of them, you just ask us for that and we’ll make them available. The other thing is the ecosystem. All those other 200-plus AWS services that you’re now able to utilize together with the Mac mini is the other big difference.”

Brown also stressed that Amazon makes it easy for developers to use different machine images, with the company currently offering images for macOS Mojave and Catalina, with Big Sur support coming “at some point in the future.” And developers can obviously create their own images with all of the software they need so they can reuse them whenever they spin up a new machine.

“Pretty much every one of our customers today has some need to support an Apple product and the Apple ecosystem, whether it’s iPhone, iPad or Apple TV, whatever it might be. They’re looking for that build use case,” Brown said. “And so the problem we’ve really been focused on solving is customers that say, ‘hey, I’ve moved all my server-side workloads to AWS, I’d love to be able to move some of these build workflows, because I still have some Mac minis in a data center or in my office that I have to maintain. I’d love that just to be on AWS.’ ”

AWS’s marquee launch customers for the new service are Intuit, Ring and mobile camera app FiLMiC.

“EC2 Mac instances, with their familiar EC2 interfaces and APIs, have enabled us to seamlessly migrate our existing iOS and macOS build-and-test pipelines to AWS, further improving developer productivity,” said Pratik Wadher, vice president of Product Development at Intuit. “We’re experiencing up to 30% better performance over our data center infrastructure, thanks to elastic capacity expansion, and a high availability setup leveraging multiple zones. We’re now running around 80% of our production builds on EC2 Mac instances, and are excited to see what the future holds for AWS innovation in this space.”

The new Mac instances are now available in a number of AWS regions. These include US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland) and Asia Pacific (Singapore), with other regions to follow soon.

#amazon-web-services, #apple, #apple-inc, #asia-pacific, #aws-reinvent, #bluetooth, #cloud, #cloud-infrastructure, #computing, #david-brown, #developer, #europe, #ipad, #iphone, #ireland, #mac-mini, #macintosh, #ohio, #oregon, #singapore, #steve-jobs, #tc, #web-hosting


Amazon Web Services outage takes a portion of the internet down with it

Amazon Web Services is currently having an outage, taking a chunk of the internet down with it.

Several AWS services were experiencing problems as of early Wednesday, according to its status page. That means any app, site or service that relies on AWS might be down, too. (As I found out the hard way this morning when my Roomba refused to connect.)

Amazon says the issue is largely localized to North America. The company didn’t give a reason for the outage, only that it was experiencing increased error rates and that it was working on a resolution. The irony is that the outage is also affecting the company’s “ability to post updates to the Service Health Dashboard,” so not even Amazon is immune from its own downtime.

So far a number of companies that rely on AWS have tweeted out that they’re experiencing issues as a result, including Adobe and Roku.

We’ll keep you updated as this outage continues. On the bright side, TechCrunch is still up, so here are a few things to read.


#amazon-web-services, #cloud, #cloud-infrastructure, #computing, #north-america, #roomba, #web-hosting, #web-services, #world-wide-web


Porn distribution company loses piracy suit appeal against Web host

Image caption: Who needs the letter “B” when you can have a jolly roger? (credit: Brasil2 | Getty Images)

A federal appeals court has upheld a ruling that site hosts are not liable for copyright infringement committed by the sites they host, so long as they take the “simple measures” of forwarding claims to the site owner.

The ruling follows a legal battle between adult content company ALS Scan and site hosting service Steadfast. The Ninth Circuit Court of Appeals ruled 2 to 1 on Friday (PDF) that even though ALS has a “whack-a-mole problem” with pirated content popping up on Imagebam, a site Steadfast hosts, the host did its part to prevent the piracy.

Working as intended

A copyright owner, such as ALS, can file a claim against a site, such as Imagebam, that is unlawfully sharing its copyrighted content. That often means sending notice to the site host—the entity you’d find listed in a whois search—about it. The host, in this case Steadfast, is then required to forward the notice along to the site owner and check that the site owner does in fact take the content down.


#copyright-infringement, #copyright-law, #piracy, #policy, #steadfast, #web-hosting


Docker partners with AWS to improve container workflows

Docker and AWS today announced a new collaboration that introduces a deep integration between Docker’s Compose and Desktop developer tools and AWS’s Elastic Container Service (ECS) and ECS on AWS Fargate. Previously, the two companies note, the workflow to take Compose files and run them on ECS was often challenging for developers. Now, the two companies have simplified this process to make switching between running containers locally and on ECS far easier.

Image: Docker/AWS architecture overview

“With a large number of containers being built using Docker, we’re very excited to work with Docker to simplify the developer’s experience of building and deploying containerized applications to AWS,” said Deepak Singh, the VP for Compute Services at AWS. “Now customers can easily deploy their containerized applications from their local Docker environment straight to Amazon ECS. This accelerated path to modern application development and deployment allows customers to focus more effort on the unique value of their applications, and less time on figuring out how to deploy to the cloud.”

In a bit of a surprise move, Docker last year sold off its enterprise business to Mirantis to solely focus on cloud-native developer experiences.

“In November, we separated the enterprise business, which was very much focused on operations, CXOs and a direct sales model, and we sold that business to Mirantis,” Docker CEO Scott Johnston told TechCrunch’s Ron Miller earlier this year. “At that point, we decided to focus the remaining business back on developers, which was really Docker’s purpose back in 2013 and 2014.”

Today’s move is an example of this new focus, given that the workflow issues this partnership addresses had been around for quite a while already.

It’s worth noting that Docker also recently engaged in a strategic partnership with Microsoft to integrate the Docker developer experience with Azure’s Container Instances.

#amazon, #amazon-web-services, #aws, #cloud, #cloud-computing, #computing, #developer, #docker, #enterprise, #free-software, #software, #tc, #web-hosting, #web-services


Backblaze challenges AWS by making its cloud storage S3 compatible

Backblaze today announced that its B2 Cloud Storage service is now API-compatible with Amazon’s S3 storage service.

Backblaze started out as an affordable cloud backup service, but over the last few years the company has also taken its storage expertise and launched the developer-centric B2 Cloud Storage service, which promises to be significantly cheaper than similar offerings from the large cloud vendors. Pricing for B2 starts at $0.005 per GB/month. AWS S3 starts at $0.023 per GB/month.

The storage price alone isn’t going to make developers switch providers, though. There are some costs involved in supporting multiple heterogeneous systems, too.

By making B2 compatible with the S3 API, developers can now simply redirect their storage to Backblaze without the need for any extensive rewrites.
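In practice, that redirect usually amounts to pointing an existing S3 client at B2’s S3-compatible endpoint. Here is a minimal sketch with boto3, assuming a B2 application key and the s3.&lt;region&gt;.backblazeb2.com endpoint format; check Backblaze’s documentation for the exact URL for your bucket’s region.

```python
# Minimal sketch of redirecting existing S3 SDK code at Backblaze B2 via its
# S3-compatible API. The endpoint URL format and region are assumptions; use the
# values Backblaze shows for your own bucket.
import boto3

b2 = boto3.client(
    "s3",
    endpoint_url="https://s3.us-west-002.backblazeb2.com",  # assumed endpoint format
    aws_access_key_id="YOUR_B2_KEY_ID",
    aws_secret_access_key="YOUR_B2_APPLICATION_KEY",
)

# The rest is unchanged S3 SDK usage
b2.upload_file("backup.tar.gz", "my-bucket", "backups/backup.tar.gz")
print([o["Key"] for o in b2.list_objects_v2(Bucket="my-bucket").get("Contents", [])])
```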

“For years, businesses have loved our astonishingly easy-to-use cloud storage for supporting them in achieving incredible outcomes,” said Gleb Budman, the co-founder and CEO of Backblaze. “Today we’re excited to do all the more by enabling many more businesses to use our storage with their existing tools and workflows.”

Current B2 customers include the likes of American Public Television, Patagonia and Verizon’s Complex Networks (with Verizon being the corporate overlords of Verizon Media Group, TechCrunch’s parent company). Backblaze says it has about 100,000 total customers for its B2 service. Among the launch partners are Cinafilm, IBM’s Aspera file transfer and streaming service, storage specialist Quantum and cloud data management service Veeam.

“Public cloud storage has become an integral part of the post-production process. This latest enhancement makes Backblaze B2 Cloud Storage more accessible—both for us as a vendor, and for customers,” said Eric Bassier, Senior Director, Product Marketing at Quantum. “We can now use the new S3 Compatible APIs to add BackBlaze B2 to the list of StorNext compatible public cloud storage targets, taking another step toward enabling hybrid and multi-cloud workflows.”

#amazon, #api, #aspera, #backblaze, #cloud, #cloud-computing, #cloud-storage, #computing, #developer, #ibm, #patagonia, #tc, #techcrunch, #verizon, #verizon-media-group, #web-hosting


Cloud Foundry renews its focus on developer experience as it looks beyond the enterprise

The Cloud Foundry Foundation (CFF) just went through a major leadership change, with executive director Abby Kearns stepping down after five years (and becoming a CTO at Puppet) and the CFF’s CTO Chip Childers stepping into the top leadership role in the organization. For the most part, though, these changes are only accelerating some of the strategic moves the organization already made in the last few years.

If you’re unfamiliar with the open-source Cloud Foundry project, it’s a Platform-as-a-Service that’s in use by the majority of Fortune 500 enterprises. After a lot of technical changes — essentially building out support for containers and adding Kubernetes as a container orchestration option alongside the container tools Cloud Foundry built long before the rise of Google’s open-source tool — the technical underpinnings of the project are now stable. And as Childers has noted before, that now allows the project to refocus its efforts on developer experience.

That, after all, was always the selling point of Cloud Foundry. Developers stick to a few rules and, in return, they can easily push their apps to Cloud Foundry with a single command (“cf push”) and know that it will run, while the enterprises that employ them get the benefits of faster development cycles.

On the flip side, though, actually managing that Cloud Foundry install was never easy, and required either a heavy lift from internal infrastructure teams or the help of outside firms like Pivotal, IBM, SAP, Suse and others to run and manage the platform. That pretty much excluded smaller companies, and especially startups, from using the platform. As Childers noted, some still did use it, but that was never the project’s focus.

Now, with the Kubernetes underpinnings in place, he believes that it will become easier for non-enterprise users to also get started with the platform. And projects like KubeCF and CF for K8s now offer full Cloud Foundry distributions for Kubernetes, which makes it relatively easy to use the platform on top of modern infrastructure.

To highlight some of these changes, the CFF today unveiled its new tutorial hub that will not just explain what Cloud Foundry is, but also feature tutorials to get started. Some of these will be hosted and written by the Foundation itself, while community members will contribute others.

“Our community has created a learning hub, curated by the Cloud Foundry Foundation, of open-source tutorials for folks to learn Cloud Foundry and related cloud native technologies,” said Childers. “The hub includes an interactive hands-on lab for first-time Cloud Foundry users to experience how easy the platform makes deploying applications to Kubernetes, and is open for the community to contribute.”

#abby-kearns, #chip-childers, #cloud, #cloud-computing, #cloud-foundry, #cloud-foundry-foundation, #cloud-infrastructure, #computing, #developer, #google, #ibm, #kubernetes, #sap, #suse, #tc, #web-hosting, #web-services


AWS launches Amazon AppFlow, its new SaaS integration service

AWS today launched Amazon AppFlow, a new integration service that makes it easier for developers to transfer data between AWS and SaaS applications like Google Analytics, Marketo, Salesforce, ServiceNow, Slack, Snowflake and Zendesk. Like similar services, including Microsoft Azure’s Power Automate, for example, developers can trigger these flows based on specific events, at pre-set times or on-demand.

Unlike some of its competitors, though, AWS is positioning this service more as a data transfer service than a way to automate workflows, and, while the data flow can be bi-directional, AWS’s announcement focuses mostly on moving data from SaaS applications to other AWS services for further analysis. For this, AppFlow also includes a number of tools for transforming the data as it moves through the service.
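For a rough sense of how a flow is driven programmatically, here is a hedged boto3 sketch that lists configured flows and starts one on demand. The flow name is hypothetical, and the flow itself is assumed to have been set up beforehand in the AppFlow console.

```python
# Hedged sketch: enumerating AppFlow flows and triggering one on demand with boto3.
# Assumes a flow named "salesforce-to-s3" already exists; that name is hypothetical.
import boto3

appflow = boto3.client("appflow")

# Enumerate configured flows
for flow in appflow.list_flows()["flows"]:
    print(flow["flowName"], flow["flowStatus"])

# Kick off an on-demand run of one flow
run = appflow.start_flow(flowName="salesforce-to-s3")
print("started execution:", run.get("executionId"))
```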

“Developers spend huge amounts of time writing custom integrations so they can pass data between SaaS applications and AWS services so that it can be analysed; these can be expensive and can often take months to complete,” said AWS principal advocate Martin Beeby in today’s announcement. “If data requirements change, then costly and complicated modifications have to be made to the integrations. Companies that don’t have the luxury of engineering resources might find themselves manually importing and exporting data from applications, which is time-consuming, risks data leakage, and has the potential to introduce human error.”

Every flow (which AWS defines as a call to a source application to transfer data to a destination) costs $0.001 per run, though, in typical AWS fashion, there is also a cost associated with data processing (starting at $0.02 per GB).

“Our customers tell us that they love having the ability to store, process, and analyze their data in AWS. They also use a variety of third-party SaaS applications, and they tell us that it can be difficult to manage the flow of data between AWS and these applications,” said Kurt Kufeld, vice president, AWS. “Amazon AppFlow provides an intuitive and easy way for customers to combine data from AWS and SaaS applications without moving it across the public internet. With Amazon AppFlow, our customers bring together and manage petabytes, even exabytes, of data spread across all of their applications — all without having to develop custom connectors or manage underlying API and network connectivity.”

At this point, the number of supported services remains comparatively low, with only 14 possible sources and four destinations (Amazon Redshift and S3, as well as Salesforce and Snowflake). Sometimes, depending on the source you select, the only possible destination is Amazon’s S3 storage service.

Over time, the number of integrations will surely increase, but for now, it feels like there’s still quite a bit more work to do for the AppFlow team to expand the list of supported services.

AWS has long left this market to competitors, even though it has tools like AWS Step Functions for building serverless workflows across AWS services and EventBridge for connecting applications. Interestingly, EventBridge currently supports a far wider range of third-party sources, but as the name implies, its focus is more on triggering events in AWS than moving data between applications.

#amazon, #amazon-web-services, #aws-lambda, #cloud, #cloud-applications, #cloud-computing, #cloud-infrastructure, #computing, #data-processing, #developer, #enterprise, #google, #marketo, #microsoft, #saas, #salesforce, #servicenow, #software-as-a-service, #web-hosting, #zendesk
