Google will let enterprises store their Google Workspace encryption keys

As ubiquitous as Google Docs has become in the last year alone, a major criticism often overlooked by the countless workplaces that use it is that it isn’t end-to-end encrypted, allowing Google — or any requesting government agency — access to a company’s files. But Google is finally addressing that key complaint with a round of updates that will let customers shield their data by storing their own encryption keys.

Google Workspace, the company’s enterprise offering that includes Google Docs, Slides and Sheets, is adding client-side encryption so that a company’s data will be indecipherable to Google.

Companies using Google Workspace can store their encryption keys with one of four partners for now: Flowcrypt, Futurex, Thales, or Virtru, which are compatible with Google’s specifications. The move is largely aimed at regulated industries — like finance, healthcare, and defense — where intellectual property and sensitive data are subject to intense privacy and compliance rules.

(Image: Google / supplied)

The real magic lands later in the year when Google will publish details of an API that will let enterprise customers build their own in-house key service, allowing workplaces to retain direct control of their encryption keys. That means if a government wants a company’s data, it has to knock on the company’s front door — and not sneak around the back by serving the key holder with a legal demand.
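
Google has not detailed that API surface yet, but the general pattern behind customer-held keys is envelope encryption: each document is encrypted locally with its own data key, and only a wrapped copy of that key is ever handled by the key service the customer controls. Here is a minimal sketch of that flow in Python, with hypothetical helper names rather than Google’s actual API:

```python
# A minimal sketch of the envelope-encryption pattern behind client-side
# encryption: the document is encrypted locally with a data key, and only a
# *wrapped* copy of that key ever leaves the customer's control.
# Hypothetical helper names; not Google's actual API.
from cryptography.fernet import Fernet

class InHouseKeyService:
    """Stand-in for a customer-run key service (the role of the planned API)."""
    def __init__(self):
        self._kek = Fernet(Fernet.generate_key())  # key-encryption key, never leaves the org

    def wrap(self, data_key: bytes) -> bytes:
        return self._kek.encrypt(data_key)

    def unwrap(self, wrapped_key: bytes) -> bytes:
        return self._kek.decrypt(wrapped_key)

key_service = InHouseKeyService()

# Client side: encrypt the document locally, then wrap the data key.
data_key = Fernet.generate_key()
ciphertext = Fernet(data_key).encrypt(b"quarterly forecast (sensitive)")
wrapped_key = key_service.wrap(data_key)

# Only `ciphertext` and `wrapped_key` would be stored with the cloud provider;
# without the customer's key service, neither is readable.
restored = Fernet(key_service.unwrap(wrapped_key)).decrypt(ciphertext)
assert restored == b"quarterly forecast (sensitive)"
```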

Google has published technical details of how the client-side encryption feature works; it will roll out as a beta in the coming weeks.

Tech companies giving their corporate customers control of their own encryption keys has been a growing trend in recent years. Slack and cloud vendor Egnyte embraced the idea early, allowing their enterprise users to store their own encryption keys and effectively cutting themselves out of the surveillance loop. But Google has dragged its feet on encryption for so long that startups are working to build alternatives that bake in encryption from the ground up.

Google said it’s also pushing out new trust rules for how files are shared in Google Drive to give administrators more granularity on how different levels of sensitive files can be shared, and new data classification labels to mark documents with a level of sensitivity such as “secret” or “internal”.

The company said it’s improving its malware protection efforts by now blocking phishing and malware shared from within organizations. The aim is to help cut down on employees mistakenly sharing malicious documents.

#api, #cloud-storage, #computing, #cryptography, #data-protection, #data-security, #egnyte, #encryption, #end-to-end-encryption, #finance, #google, #google-workspace, #google-drive, #healthcare, #privacy, #security, #technology, #thales


Gatheround raises millions from Homebrew, Bloomberg and Stripe’s COO to help remote workers connect

Remote work is no longer a new topic, as much of the world has now been doing it for a year or more because of the COVID-19 pandemic.

Companies — big and small — have had to react in myriad ways. Many of the initial challenges have focused on workflow, productivity and the like. But one aspect of the whole remote work shift that is not getting as much attention is the culture angle.

A 100% remote startup that was tackling the issue way before COVID-19 was even around is now seeing a big surge in demand for its offering that aims to help companies address the “people” challenge of remote work. It started its life with the name Icebreaker to reflect the aim of “breaking the ice” with people with whom you work.

“We designed the initial version of our product as a way to connect people who’d never met, kind of virtual speed dating,” says co-founder and CEO Perry Rosenstein. “But we realized that people were using it for far more than that.” 

So over time, its offering has evolved to include a bigger goal of helping people get together beyond an initial encounter — hence its new name: Gatheround.

“For remote companies, a big challenge or problem that is now bordering on a crisis is how to build connection, trust and empathy between people that aren’t sharing a physical space,” says co-founder and COO Lisa Conn. “There’s no five-minute conversations after meetings, no shared meals, no cafeterias — this is where connection organically builds.”

Organizations should be concerned, Gatheround maintains, that as work moves more remote, it will become more transactional and people will become more isolated. They can’t ignore that humans are largely social creatures, Conn said.

The startup aims to bring people together online through real-time events such as chats, videos, and one-on-one and group conversations. It also provides templates to facilitate cultural rituals and learning & development (L&D) activities, such as all-hands meetings and workshops on diversity, equity and inclusion.

Gatheround’s video conversations aim to be a refreshing complement to Slack conversations, which despite serving the function of communication, still don’t bring users face-to-face.

Image Credits: Gatheround

Since its inception, Gatheround has quietly built up an impressive customer base, including 28 Fortune 500s, 11 of the 15 biggest U.S. tech companies, 26 of the top 30 universities and more than 700 educational institutions. Those customers include Asana, Coinbase, Fiverr, Westfield and DigitalOcean. Universities, academic centers and nonprofits, including Georgetown’s Institute of Politics and Public Service and the Chan Zuckerberg Initiative, are also customers. To date, Gatheround has had about 260,000 users hold 570,000 conversations on its SaaS-based video platform.

All its growth so far has been organic, mostly referrals and word of mouth. Now, armed with $3.5 million in seed funding that builds upon a previous $500,000 raised, Gatheround is ready to aggressively go to market and build upon the momentum it’s seeing.

Venture firms Homebrew and Bloomberg Beta co-led the company’s latest raise, which included participation from angel investors such as Stripe COO Claire Hughes Johnson, Meetup co-founder Scott Heiferman, Li Jin and Lenny Rachitsky. 

Co-founders Rosenstein, Conn and Alexander McCormmach describe themselves as “experienced community builders,” having previously worked on President Obama’s campaigns as well as at companies like Facebook, Change.org and Hustle. 

The trio emphasize that Gatheround is also very different from Zoom and video conferencing apps in that its platform gives people prompts and organized ways to get to know and learn about each other as well as the flexibility to customize events.

“We’re fundamentally a connection platform, here to help organizations connect their people via real-time events that are not just really fun, but meaningful,” Conn said.

Homebrew Partner Hunter Walk says his firm was attracted to the company’s founder-market fit.

“They’re a really interesting combination of founders with all this experience community building on the political activism side, combined with really great product, design and operational skills,” he told TechCrunch. “It was kind of unique that they didn’t come out of an enterprise product background or pure social background.”

He was also drawn to the personalized nature of Gatheround’s platform, considering that it has become clear over the past year that the software powering the future of work “needs emotional intelligence.”

“Many companies in 2020 have focused on making remote work more productive. But what people desire more than ever is a way to deeply and meaningfully connect with their colleagues,” Walk said. “Gatheround does that better than any platform out there. I’ve never seen people come together virtually like they do on Gatheround, asking questions, sharing stories and learning as a group.” 

James Cham, partner at Bloomberg Beta, agrees with Walk that the founding team’s knowledge of behavioral psychology, group dynamics and community building gives them an edge.

“More than anything, though, they care about helping the world unite and feel connected, and have spent their entire careers building organizations to make that happen,” he said in a written statement. “So it was a no-brainer to back Gatheround, and I can’t wait to see the impact they have on society.”

The 14-person team will likely expand with the new capital, which will also go toward adding more functionality and detail to the Gatheround product.

“Even before the pandemic, remote work was accelerating faster than other forms of work,” Conn said. “Now that’s intensified even more.”

Gatheround is not the only company attempting to tackle this space. Ireland-based Workvivo raised $16 million last year, and earlier this year Microsoft launched Viva, its new “employee experience platform.”

#asana, #bloomberg-beta, #chan-zuckerberg-initiative, #cloud-storage, #coinbase, #computing, #digitalocean, #facebook, #funding, #fundings-exits, #groupware, #homebrew, #hunter-walk, #hustle, #li-jin, #meetup, #obama, #operating-systems, #perry-rosenstein, #recent-funding, #remote-work, #saas, #scott-heiferman, #social-media, #startup, #startups, #telecommuting, #united-states, #venture-capital, #walk


Wasabi scores $112M Series C on $700M valuation to take on cloud storage hyperscalers

Taking on Amazon S3 in the cloud storage game would seem to be a foolhardy proposition, but Wasabi has found a way to build storage cheaply and pass the savings on to customers. Today the Boston-based startup announced a $112 million Series C investment on a $700 million valuation.

Fidelity Management & Research Company led the round with participation from previous investors. The company reports that it has now raised $219 million in equity, along with additional debt financing; it takes a lot of money to build a storage business.

CEO David Friend says that business is booming and he needed the money to keep it going. “The business has just been exploding. We achieved a roughly $700 million valuation on this round, so  you can imagine that business is doing well. We’ve tripled in each of the last three years and we’re ahead of plan for this year,” Friend told me.

He says that demand continues to grow and he’s been getting requests internationally. That was one of the primary reasons he went looking for more capital. What’s more, data sovereignty laws require that certain types of sensitive data, like financial and healthcare records, be stored in-country, so the company needs to build more capacity where it’s needed.

He says they have nailed down the process of building storage, typically inside co-location facilities, and during the pandemic they actually became more efficient as they hired a firm to put together the hardware for them onsite. They also put channel partners like managed service providers (MSPs) and value-added resellers (VARs) to work by incentivizing them to sell Wasabi to their customers.

Wasabi storage starts at $5.99 per terabyte per month. That’s a heck of a lot cheaper than Amazon S3, which starts at $0.023 per gigabyte for the first 50 terabytes, or about $23.00 a terabyte, considerably more than Wasabi’s offering.
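
A quick back-of-the-envelope comparison of those list prices (storage costs only, ignoring egress and request fees) makes the gap concrete:

```python
# Rough comparison of the list prices quoted above; storage only,
# egress and request fees are not included.
WASABI_PER_TB = 5.99          # $ per TB per month
S3_PER_GB = 0.023             # $ per GB per month, first 50 TB tier
S3_PER_TB = S3_PER_GB * 1000  # = $23.00 per TB per month

for tb in (1, 50, 500):
    print(f"{tb:>4} TB/month: Wasabi ${WASABI_PER_TB * tb:,.2f} vs S3 ${S3_PER_TB * tb:,.2f}")
```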

But Friend admits that Wasabi still faces headwinds as a startup. No matter how cheap it is, companies want to be sure it’s going to be there for the long haul, and a round this size from an investor with the pedigree of Fidelity gives the company more credibility with large enterprise buyers, without the demands that typically come with venture capital firms.

“Fidelity to me was the ideal investor. […] They don’t want a board seat. They don’t want to come in and tell us how to run the company. They are obviously looking toward an IPO or something like that, and they are just interested in being an investor in this business because cloud storage is a virtually unlimited market opportunity,” he said.

He sees his company as the typical kind of market irritant. He says that his company has run away from competitors in his part of the market and the hyperscalers are out there not paying attention because his business remains a fraction of theirs for the time being. While an IPO is far off, he took on an institutional investor this early because he believes it’s possible eventually.

“I think this is a big enough market we’re in, and we were lucky to get in at just the right time with the right kind of technology. There’s no doubt in my mind that Wasabi could grow to be a fairly substantial public company doing cloud infrastructure. I think we have a nice niche cut out for ourselves, and I don’t see any reason why we can’t continue to grow,” he said.

#boston-startups, #cloud, #cloud-storage, #enterprise, #fidelity-investments, #funding, #recent-funding, #startups, #storage, #tc, #wasabi


DigitalOcean says customer billing data ‘exposed’ by a security flaw

DigitalOcean has emailed customers warning of a data breach involving customers’ billing data, TechCrunch has learned.

The cloud infrastructure giant told customers in an email on Wednesday, obtained by TechCrunch, that it has “confirmed an unauthorized exposure of details associated with the billing profile on your DigitalOcean account.” The company said the person “gained access to some of your billing account details through a flaw that has been fixed” over a two-week window between April 9 and April 22.

The email said customer billing names and addresses were accessed, as well as the last four digits of the payment card, its expiry date, and the name of the card-issuing bank. The company said that customers’ DigitalOcean accounts were “not accessed,” and passwords and account tokens were “not involved” in this breach.

“To be extra careful, we have implemented additional security monitoring on your account. We are expanding our security measures to reduce the likelihood of this kind of flaw occuring [sic] in the future,” the email said.

DigitalOcean said it fixed the flaw and notified data protection authorities, but it’s not clear what the apparent flaw was that put customer billing information at risk.

In a statement, DigitalOcean’s security chief Tyler Healy said 1% of billing profiles were affected by the breach, but declined to address our specific questions, including how the vulnerability was discovered and which authorities have been informed.

Companies with customers in Europe are subject to GDPR, and can face fines of up to 4% of their global annual revenue.

Last year, the cloud company raised $100 million in new debt, followed by another $50 million round, months after laying off dozens of staff amid concerns about the company’s financial health. In March, the company went public, raising about $775 million in its initial public offering. 

#cloud, #cloud-computing, #cloud-infrastructure, #cloud-storage, #computing, #data-breach, #digitalocean, #enterprise, #security, #spokesperson, #web-hosting, #web-services, #world-wide-web


Solving the security challenges of public cloud

Experts believe the data-lake market will hit a massive $31.5 billion in the next six years, a prediction that has led to much concern among large enterprises. Why? Well, an increase in data lakes equals an increase in public cloud consumption — which leads to a soaring amount of notifications, alerts and security events.

Around 56% of enterprise organizations handle more than 1,000 security alerts every day, and 70% of IT professionals have seen the volume of alerts double in the past five years, according to a 2020 Dark Reading report that cited research by Sumo Logic. In fact, many organizations in the ONUG community see on the order of 1 million events per second. Yes, per second, which works out to tens of trillions of events per year.

Now that we are operating in a digitally transformed world, that number only continues to rise, leaving many enterprise IT leaders scrambling to handle these events and asking themselves if there’s a better way.

Compounding matters is the lack of a unified framework for dealing with public cloud security. End users and cloud consumers are forced to deal with increased spend on security infrastructure such as SIEMs, SOAR, security data lakes, tools, maintenance and staff — if they can find them — to operate with an “adequate” security posture.

Public cloud isn’t going away, and neither is the increase in data and security concerns. But enterprise leaders shouldn’t have to continue scrambling to solve these problems. We live in a highly standardized world. Standard operating processes exist for the simplest of tasks, such as elementary school student drop-offs and checking out a company car. But why isn’t there a standardized approach for dealing with security of the public cloud — something so fundamental now to the operation of our society?

The ONUG Collaborative had the same question. Security leaders from organizations such as FedEx, Raytheon Technologies, Fidelity, Cigna, Goldman Sachs and others came together to establish the Cloud Security Notification Framework. The goal is to create consistency in how cloud providers report security events, alerts and alarms, so end users receive improved visibility and governance of their data.

Here’s a closer look at the security challenges with public cloud and how CSNF aims to address the issues through a unified framework.

The root of the problem

A few key challenges are sparking the increased number of security alerts in the public cloud:

  1. Rapid digital transformation sparked by COVID-19.
  2. An expanded network edge created by the modern, work-from-home environment.
  3. An increase in the types of security attacks.

The first two challenges go hand in hand. In March of last year, when companies were forced to shut down their offices and shift operations and employees to a remote environment, the wall between cyber threats and safety came crashing down. This wasn’t a huge issue for organizations already operating remotely, but for major enterprises the pain points quickly boiled to the surface.

Numerous leaders have shared with me how security was outweighed by speed. Keeping everything up and running was prioritized over governance. Each employee effectively held a piece of the company’s network edge in their home office. Without basic governance controls in place or training to teach employees how to spot phishing or other threats, the door was left wide open for attacks.

In 2020, the FBI reported its cyber division was receiving nearly 4,000 complaints per day about security incidents, a 400% increase from pre-pandemic figures.

Another security issue is the growing intelligence of cybercriminals. The Dark Reading report said 67% of IT leaders claim a core challenge is a constant change in the type of security threats that must be managed. Cybercriminals are smarter than ever. Phishing emails, entrance through IoT devices and various other avenues have been exploited to tap into an organization’s network. IT teams are constantly forced to adapt and spend valuable hours focused on deciphering what is a concern and what’s not.

Without a unified framework in place, the volume of incidents will spiral out of control.

Where CSNF comes into play

CSNF will prove beneficial for cloud providers and IT consumers alike. Security platforms often require lengthy integration work to pull in data from siloed sources, including asset inventory, vulnerability assessments, IDS products and past security notifications. These integrations can be expensive and inefficient.

But with a standardized framework like CSNF, the integration process for past notifications is pared down and contextual processes are improved for the entire ecosystem, efficiently reducing spend and saving SecOps and DevSecOps teams time to focus on more strategic tasks like security posture assessment, developing new products and improving existing solutions.

Here’s a closer look at the benefits a standardized approach can create for all parties:

  • End users: CSNF can streamline operations for enterprise cloud consumers, like IT teams, and allows improved visibility and greater control over the security posture of their data. This enhanced sense of protection from improved cloud governance benefits all individuals.
  • Cloud providers: CSNF can eliminate the barrier to entry currently prohibiting an enterprise consumer from using additional services from a specific cloud provider by freeing up added security resources. Also, improved end-user cloud governance encourages more cloud consumption from businesses, increasing provider revenue and providing confidence that their data will be secure.
  • Cloud vendors: Cloud vendors that provide SaaS solutions are spending more on engineering resources to deal with increased security notifications. But with a standardized framework in place, these additional resources would no longer be necessary. Instead of spending money on such specific needs along with labor, vendors could refocus core staff on improving operations and products such as user dashboards and applications.

Working together, all groups can effectively reduce friction from security alerts and create a controlled cloud environment for years to come.

What’s next?

CSNF is in the building phase. Cloud consumers have banded together to compile requirements, and consumers continue to provide guidance as a prototype is established. The cloud providers are now in the process of building the key component of CSNF, its Decorator, which provides an open-source multicloud security reporting translation service.
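
The article doesn’t spell out the CSNF schema itself, but the Decorator’s role, translating each provider’s alert format into one common shape, can be sketched roughly as follows. The field names on both sides are illustrative placeholders, not the actual CSNF specification or any provider’s exact format:

```python
# Illustrative only: normalize provider-specific security notifications into
# one common shape, roughly the role the CSNF Decorator is meant to play.
# The field names are placeholders, not the published CSNF schema.
COMMON_FIELDS = ("provider", "severity", "resource", "event_type", "timestamp")

def normalize_aws(event: dict) -> dict:
    return {
        "provider": "aws",
        "severity": event["Severity"]["Label"].lower(),
        "resource": event["Resources"][0]["Id"],
        "event_type": event["Types"][0],
        "timestamp": event["UpdatedAt"],
    }

def normalize_azure(event: dict) -> dict:
    return {
        "provider": "azure",
        "severity": event["properties"]["severity"].lower(),
        "resource": event["properties"]["resourceIdentifiers"][0],
        "event_type": event["properties"]["alertType"],
        "timestamp": event["properties"]["timeGenerated"],
    }

NORMALIZERS = {"aws": normalize_aws, "azure": normalize_azure}

def to_common(provider: str, raw_event: dict) -> dict:
    """Route a raw alert through the right translator so one shape comes out the other side."""
    normalized = NORMALIZERS[provider](raw_event)
    assert set(normalized) == set(COMMON_FIELDS)
    return normalized
```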

The pandemic created many changes in our world, including new security challenges in the public cloud. Reducing IT noise must be a priority to continue operating with solid governance and efficiency, as it enhances a sense of security, eliminates the need for increased resources and allows for more cloud consumption. ONUG is working to ensure that the industry stays a step ahead of security events in an era of rapid digital transformation.

#cloud, #cloud-computing, #cloud-infrastructure, #cloud-services, #cloud-storage, #column, #computer-security, #cybersecurity, #opinion, #security, #tc


Grocery startup Mercato spilled years of data, but didn’t tell its customers

A security lapse at online grocery delivery startup Mercato exposed tens of thousands of customer orders, TechCrunch has learned.

A person with knowledge of the incident told TechCrunch that it happened in January, after one of the company’s cloud storage buckets, hosted on Amazon’s cloud, was left open and unprotected.

The company fixed the data spill, but has not yet alerted its customers.

Mercato was founded in 2015 and helps over a thousand smaller grocers and specialty food stores get online for pickup or delivery, without having to sign up for delivery services like Instacart or Amazon Fresh. Mercato operates in Boston, Chicago, Los Angeles, and New York, where the company is headquartered.

TechCrunch obtained a copy of the exposed data and verified a portion of the records by matching names and addresses against known existing accounts and public records. The data set contained more than 70,000 orders dating between September 2015 and November 2019, and included customer names and email addresses, home addresses, and order details. Each record also included the IP address of the device used to place the order.

The data set also included the personal data and order details of company executives.

It’s not clear how the security lapse happened, given that storage buckets on Amazon’s cloud are private by default, nor when the company learned of the exposure.
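
AWS does offer a guardrail against this class of misconfiguration: S3 Block Public Access can be enforced on a bucket (or an entire account) so that a permissive ACL or bucket policy can’t expose its contents. A minimal boto3 sketch, with a placeholder bucket name:

```python
# One way to guard against exactly this kind of lapse: turn on S3's Block
# Public Access settings so an accidentally permissive ACL or bucket policy
# cannot make the data public. The bucket name is a placeholder.
import boto3

s3 = boto3.client("s3")
s3.put_public_access_block(
    Bucket="example-orders-bucket",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```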

Companies are required to disclose data breaches or security lapses to state attorneys general, but no notices have been published in states where they are required by law, such as California. The data set included more than 1,800 California residents, more than three times the number needed to trigger mandatory disclosure under the state’s data breach notification laws.

It’s also not known if Mercato disclosed the incident to investors ahead of its $26 million Series A raise earlier this month. Velvet Sea Ventures, which led the round, did not respond to emails requesting comment.

In a statement, Mercato chief executive Bobby Brannigan confirmed the incident but declined to answer our questions, citing an ongoing investigation.

“We are conducting a complete audit using a third party and will be contacting the individuals who have been affected. We are confident that no credit card data was accessed because we do not store those details on our servers. We will continually inform all authoritative bodies and stakeholders, including investors, regarding the findings of our audit and any steps needed to remedy this situation,” said Brannigan.


Know something, say something. Send tips securely over Signal and WhatsApp to +1 646-755-8849. You can also send files or documents using our SecureDrop. Learn more

#amazon, #boston, #california, #chicago, #cloud-computing, #cloud-infrastructure, #cloud-storage, #computer-security, #computing, #data-breach, #data-security, #ecommerce, #food, #instacart, #los-angeles, #mercato, #new-york, #security, #technology, #united-states, #velvet-sea-ventures


Risk startup LogicGate confirms data breach

Risk and compliance startup LogicGate has confirmed a data breach. But unless you’re a customer, you probably didn’t hear about it.

An email sent by LogicGate to customers earlier this month said that on February 23 an unauthorized third party obtained credentials to its Amazon Web Services-hosted cloud storage servers, which store customer backup files for its flagship platform Risk Cloud, a service that helps companies identify and manage their risk and compliance with data protection and security standards. LogicGate says Risk Cloud can also help find security vulnerabilities before they are exploited by malicious hackers.

The credentials “appear to have been used by an unauthorized third party to decrypt particular files stored in AWS S3 buckets in the LogicGate Risk Cloud backup environment,” the email read.

“Only data uploaded to your Risk Cloud environment on or prior to February 23, 2021, would have been included in that backup file. Further, to the extent you have stored attachments in the Risk Cloud, we did not identify decrypt events associated with such attachments,” it added.

LogicGate did not say how the AWS credentials were compromised. An email update sent by LogicGate last Friday said the company anticipates finding the root cause of the incident by this week.

But LogicGate has not made any public statement about the breach. It’s also not clear if the company contacted all of its customers or only those whose data was accessed. LogicGate counts Capco, SoFi, and Blue Cross Blue Shield of Kansas City as customers.

We sent LogicGate a list of questions, including how many customers were affected and whether the company has alerted U.S. state authorities as required by state data breach notification laws. When reached, LogicGate chief executive Matt Kunkel confirmed the breach but declined to comment, citing an ongoing investigation. “We believe it’s best to communicate developments directly to our customers,” he said.

Kunkel would not say, when asked, if the attacker also exfiltrated the decrypted customer data from its servers.

Data breach notification laws vary by state, but companies that fail to report security incidents can face heavy fines. Under Europe’s GDPR rules, companies can face fines of up to 4% of their annual turnover for violations.

In December, LogicGate secured $8.75 million in fresh funding, totaling more than $40 million since it launched in 2015.


Are you a LogicGate customer? Send tips securely over Signal and WhatsApp to +1 646-755-8849. You can also send files or documents using our SecureDrop. Learn more

#amazon, #amazon-web-services, #blue-cross-blue-shield, #capco, #cloud, #cloud-computing, #cloud-storage, #computer-security, #computing, #data-breach, #data-security, #europe, #health-insurance, #securedrop, #security, #security-breaches, #sofi, #united-states


Aqua Security raises $135M at a $1B valuation for its cloud native security service

Aqua Security, a Boston- and Tel Aviv-based security startup that focuses squarely on securing cloud-native services, today announced that it has raised a $135 million Series E funding round at a $1 billion valuation. The round was led by ION Crossover Partners. Existing investors M12 Ventures, Lightspeed Venture Partners, Insight Partners, TLV Partners, Greenspring Associates and Acrew Capital also participated. In total, Aqua Security has now raised $265 million since it was founded in 2015.

The company was one of the earliest to focus on securing container deployments. And while many of its competitors were acquired over the years, Aqua remains independent and is now likely on a path to an IPO. When it launched, the industry focus was still very much on Docker and Docker containers. To the detriment of Docker, that quickly shifted to Kubernetes, which is now the de facto standard. But enterprises are also now looking at serverless and other new technologies on top of this new stack.

“Enterprises that five years ago were experimenting with different types of technologies are now facing a completely different technology stack, a completely different ecosystem and a completely new set of security requirements,” Aqua CEO Dror Davidoff told me. And with these new security requirements came a plethora of startups, all focusing on specific parts of the stack.

Image Credits: Aqua Security

What set Aqua apart, Dror argues, is that it managed to 1) become the best solution for container security and 2) realize that to succeed in the long run, it had to become a platform that would secure the entire cloud-native environment. About two years ago, the company made this switch from a product to a platform, as Davidoff describes it.

“There was a spree of acquisitions by CheckPoint and Palo Alto [Networks] and Trend [Micro],” Davidoff said. “They all started to acquire pieces and tried to build a more complete offering. The big advantage for Aqua was that we had everything natively built on one platform. […] Five years later, everyone is talking about cloud-native security. No one says ‘container security’ or ‘serverless security’ anymore. And Aqua is practically the broadest cloud-native security [platform].”

One interesting aspect of Aqua’s strategy is that it continues to bet on open source, too. Trivy, its open-source vulnerability scanner, is the default scanner for GitLab’s Harbor Registry and the CNCF’s Artifact Hub, for example.

“We are probably the best security open-source player there is because not only do we secure from vulnerable open source, we are also very active in the open-source community,” Davidoff said (with maybe a bit of hyperbole). “We provide tools to the community that are open source. To keep evolving, we have a whole open-source team. It’s part of the philosophy here that we want to be part of the community and it really helps us to understand it better and provide the right tools.”

In 2020, Aqua, which mostly focuses on mid-size and larger companies, doubled the number of paying customers and it now has more than half a dozen customers with an ARR of over $1 million each.

Davidoff tells me the company wasn’t actively looking for new funding. Its last funding round came together only a year ago, after all. But the team decided that it wanted to be able to double down on its current strategy and raise sooner than originally planned. ION had been interested in working with Aqua for a while, Davidoff told me, and while the company received other offers, the team decided to go ahead with ION as the lead investor (with all of Aqua’s existing investors also participating in this round).

“We want to grow from a product perspective, we want to grow from a go-to-market [perspective] and expand our geographical coverage — and we also want to be a little more acquisitive. That’s another direction we’re looking at because now we have the platform that allows us to do that. […] I feel we can take the company to great heights. That’s the plan. The market opportunity allows us to dream big.”

 

#acrew-capital, #aqua, #aqua-security, #boston, #checkpoint, #cloud, #cloud-computing, #cloud-infrastructure, #cloud-storage, #computing, #docker, #enterprise, #greenspring-associates, #insight-partners, #ion-crossover-partners, #kubernetes, #lightspeed-venture-partners, #palo-alto, #recent-funding, #security, #serverless-computing, #software, #startups, #tc, #tel-aviv, #tlv-partners


Project management service ZenHub raises $4.7M

ZenHub, the GitHub-centric project management service for development teams, today announced that it has raised a $4.7 million seed funding round from Canada’s BDC Capital and Ripple Ventures. This marks the first fundraise for the Vancouver, Canada-based startup after the team bootstrapped the service, which first launched back in 2014. Additional angel investors in this round include Adam Gross (former CEO of Heroku), Jiaona Zhang (VP Product at Webflow) and Oji Udezue (VP Product at Calendly).

In addition to announcing this funding round, the team today launched its newest automation feature, which makes it easier for teams to plan their development sprints, a task that is core to the Agile development process but often takes up time and energy teams are better off spending on actual development.

“This is a really exciting kind of pivot point for us as a business and gives us a lot of ammunition, I think, to really go after our vision and mission a little bit more aggressively than we have even in the past,” ZenHub co-founder and CEO Aaron Upright told me. The team, he explained, used the beginning of the pandemic to spend a lot of time with customers to better understand how they were reacting to what was happening. In the process, customers repeatedly noted that development resources were getting increasingly expensive and that teams were being stretched even farther and under a lot of pressure.

ZenHub’s answer to this was to look into how it could automate more of the processes that constitute the most complex parts of Agile. Earlier this year, the company launched its first efforts in this area, with new tools for improving developer handoffs in GitHub and now, with the help of this new funding, it is putting the next pieces in place by helping teams automate their sprint planning.

Image Credits: ZenHub

“We thought about automation as an answer to [the problems development teams were facing] and that we could take an approach to automation and to help guide teams through some of the most complex and time-consuming parts of the Agile process,” Upright said. “We raised money so that we can really accelerate toward that vision. As a self-funded company, we could have gone down that path, albeit a little bit slower. But the opportunity that we saw in the market — really brought about by the pandemic, and teams working more remotely and this pressure to produce — we wanted to provide a solution much, much faster.”

The sprint planning feature itself is actually pretty straightforward and allows project managers to allocate a certain number of story points (a core Agile metric for estimating the complexity of a given action item) to each sprint. ZenHub’s tool can then use that to automatically generate a list of the most highly prioritized items for the next sprint. Optionally, teams can also decide to roll over items that they didn’t finish during a given sprint into the next one.
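
ZenHub hasn’t published the mechanics of the automation, but the basic idea, filling a sprint from a prioritized list until a points budget is spent and carrying unfinished work forward, can be sketched in a few lines of Python. This is illustrative only, not ZenHub’s implementation:

```python
# Illustrative sketch of points-budgeted sprint filling; not ZenHub's actual
# algorithm. Items are assumed to be ordered from highest to lowest priority.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Item:
    title: str
    points: int
    done: bool = False

def plan_sprint(backlog: List[Item], capacity: int, rollover: Optional[List[Item]] = None) -> List[Item]:
    """Take rolled-over items first, then top-priority backlog items, until the points budget is spent."""
    sprint, used = [], 0
    for item in (rollover or []) + backlog:
        if not item.done and used + item.points <= capacity:
            sprint.append(item)
            used += item.points
    return sprint

backlog = [Item("fix login bug", 3), Item("billing export", 5), Item("dark mode", 8)]
print([i.title for i in plan_sprint(backlog, capacity=10)])  # -> ['fix login bug', 'billing export']
```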

Image Credits: ZenHub

With that, ZenHub Sprints can automate a lot of the standard sprint meetings and let teams focus on thinking about the overall process. Of course, teams can always overrule the automated systems.

“There’s nothing more that developers hate than sitting around the table for eight hours, planning sprints, when really they all just want to be working on stuff,” Upright said.

With this new feature, sprints become a core feature of the ZenHub experience. Typically, project managers worked around this by assigning milestones in GitHub, but having a dedicated tool and these new automation features will make this quite a bit easier.

ZenHub will soon automate parts of the software estimation process as well, launching a new tool that will help teams more easily allocate story points to routine action items so that their discussions can focus on the more contentious ones.

#agile-software-development, #canada, #ceo, #cloud-infrastructure, #cloud-storage, #computing, #energy, #github, #heroku, #salesforce-com, #serverless-computing, #tc, #technology, #vancouver, #webflow


Microsoft’s Dapr open-source project to help developers build cloud-native apps hits 1.0

Dapr, the Microsoft-incubated open-source project that aims to make it easier for developers to build event-driven, distributed cloud-native applications, hit its 1.0 milestone today, signifying the project’s readiness for production use cases. Microsoft launched the Distributed Application Runtime (that’s what “Dapr” stands for) back in October 2019. Since then, the project has released 14 updates and the community has launched integrations with virtually all major cloud providers, including Azure, AWS, Alibaba and Google Cloud.

The goal for Dapr, Microsoft Azure CTO Mark Russinovich told me, was to democratize cloud-native development for enterprise developers.

“When we go look at what enterprise developers are being asked to do — they’ve traditionally been doing client, server, web plus database-type applications,” he noted. “But now, we’re asking them to containerize and to create microservices that scale out and have no-downtime updates — and they’ve got to integrate with all these cloud services. And many enterprises are, on top of that, asking them to make apps that are portable across on-premises environments as well as cloud environments or even be able to move between clouds. So just tons of complexity has been thrown at them that’s not specific to or not relevant to the business problems they’re trying to solve.”

And a lot of the development involves re-inventing the wheel to make their applications reliably talk to various other services. The idea behind Dapr is to give developers a single runtime that, out of the box, provides the tools that developers need to build event-driven microservices. Among other things, Dapr provides various building blocks for things like service-to-service communications, state management, pub/sub and secrets management.
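
To give a flavor of what a building block looks like in practice, the state-management block is just an HTTP call to the local sidecar. The sketch below assumes a Dapr sidecar running on its default HTTP port (3500) with a state store component named “statestore”:

```python
# Minimal sketch of Dapr's state-management building block: the app talks to
# its local sidecar over HTTP and never needs to know which store (Redis,
# Cosmos DB, etc.) is configured behind the "statestore" component.
# Assumes a sidecar running on the default HTTP port 3500.
import requests

DAPR_STATE_URL = "http://localhost:3500/v1.0/state/statestore"

# Save state: a JSON array of key/value pairs.
requests.post(
    DAPR_STATE_URL,
    json=[{"key": "order-42", "value": {"status": "shipped"}}],
).raise_for_status()

# Read it back by key.
print(requests.get(f"{DAPR_STATE_URL}/order-42").json())  # -> {'status': 'shipped'}
```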

Image Credits: Dapr

“The goal with Dapr was: let’s take care of all of the mundane work of writing one of these cloud-native distributed, highly available, scalable, secure cloud services, away from the developers so they can focus on their code. And actually, we took lessons from serverless, from Functions-as-a-Service where with, for example Azure Functions, it’s event-driven, they focus on their business logic and then things like the bindings that come with Azure Functions take care of connecting with other services,” Russinovich said.

He also noted that another goal here was to do away with language-specific models and to create a programming model that can be leveraged from any language. Enterprises, after all, tend to use multiple languages in their existing code, and a lot of them are now looking at how to best modernize their existing applications — without throwing out all of their current code.

As Russinovich noted, the project now has more than 700 contributors outside of Microsoft (though the core committers are largely from Microsoft) and a number of businesses started using it in production before the 1.0 release. One of the larger cloud providers that is already using it is Alibaba. “Alibaba Cloud has really fallen in love with Dapr and is leveraging it heavily,” he said. Other organizations that have contributed to Dapr include HashiCorp and early users like ZEISS, Ignition Group and New Relic.

And while it may seem a bit odd for a cloud provider to be happy that its competitors are using its innovations already, Russinovich noted that this was exactly the plan and that the team hopes to bring Dapr into a foundation soon.

“We’ve been on a path to open governance for several months and the goal is to get this into a foundation. […] The goal is opening this up. It’s not a Microsoft thing. It’s an industry thing,” he said — but he wasn’t quite ready to say to which foundation the team is talking.

 

#alibaba, #alibaba-cloud, #aws, #cloud, #cloud-computing, #cloud-infrastructure, #cloud-services, #cloud-storage, #computing, #developer, #enterprise, #google, #hashicorp, #mark-russinovich, #microservices, #microsoft, #microsoft-azure, #new-relic, #serverless-computing, #tc


An argument against cloud-based applications

In the last decade we’ve seen massive changes in how we consume and interact with our world. The Yellow Pages is a concept that has to be meticulously explained with an impertinent scoff at our own age. We live within our smartphones, within our apps.

While we thrive with the information of the world at our fingertips, we casually throw away any semblance of privacy in exchange for the convenience of this world.

This line we straddle has been drawn with recklessness and calculation by big tech companies over the years as we’ve come to terms with what app manufacturers, large technology companies, and app stores demand of us.

Our private data into the cloud

According to Symantec, 89% of our Android apps and 39% of our iOS apps require access to private information. This risky use sends our data to cloud servers, to both amplify the performance of the application (think about the data needed for fitness apps) and store data for advertising demographics.

While large data companies would argue that data is not held for long, or not used in a nefarious manner, when we use the apps on our phones, we create an undeniable data trail. Companies generally keep data on the move, and servers around the world are constantly keeping data flowing, further away from its source.

Once we accept the terms and conditions we rarely read, our private data is no longer such. It is in the cloud, a term which has eluded concrete understanding throughout the years.

A distinction between cloud-based apps and cloud computing must be addressed. Cloud computing at an enterprise level, while argued against ad nauseam over the years, is generally considered to be a secure and cost-effective option for many businesses.

Even back in 2010, Microsoft said 70% of its team was working on things that were cloud-based or cloud-inspired, and the company projected that number would rise to 90% within a year. That was before we started relying on the cloud to store our most personal, private data.

Cloudy with a chance of confusion

To add complexity to this issue, there are literally apps to protect your privacy from other apps on your smart phone. Tearing more meat off the privacy bone, these apps themselves require a level of access that would generally raise eyebrows if it were any other category of app.

Consider the scenario where you use a key to encrypt data, but then you need to encrypt that key to make it safe. Ultimately, you end up with the most important keys not being encrypted. There is no win-win here. There is only finding a middle ground of contentment in which your apps find as much purchase in your private data as your doctor finds in your medical history.

The cloud is not tangible, nor is it something we as givers of the data can access. Each company has its own cloud servers, each one collecting similar data. But we have to consider why we give up this data. What are we getting in return? We are given access to applications that perhaps make our lives easier or better, but essentially are a service. It’s this service end of the transaction that must be altered.

App developers have to find a method of service delivery that does not require storage of personal data. There are two sides to this. The first is creating algorithms that can function on a local basis, rather than centralized and mixed with other data sets. The second is a shift in the general attitude of the industry, one in which free services are provided for the cost of your personal data (which ultimately is used to foster marketing opportunities).

Of course, asking this of any big data company that thrives on its data collection and marketing process is untenable. So the change has to come from new companies, willing to risk offering cloud privacy while still providing a service worth paying for. Because it wouldn’t be free. It cannot be free, as free is what got us into this situation in the first place.

Clearing the clouds of future privacy

What we can do right now is at least take a stance of personal vigilance. While there is some personal data that we cannot stem the flow of onto cloud servers around the world, we can at least limit the use of frivolous apps that collect too much data. For instance, games should never need access to our contacts, to our camera and so on. Everything within our phone is connected, it’s why Facebook seems to know everything about us, down to what’s in our bank account.

This sharing takes place on our phone and at the cloud level, and is something we need to consider when accepting the terms on a new app. When we sign into apps with our social accounts, we are just assisting the further collection of our data.

The cloud isn’t some omnipotent enemy here, but it is the excuse and tool that allows the mass collection of our personal data.

The future is likely one in which devices and apps finally become self-sufficient and localized, enabling users to maintain control of their data. The way we access apps and data in the cloud will change as well, as we’ll demand a functional process that forces a methodology change in service provisions. The cloud will be relegated to public data storage, leaving our private data on our devices where it belongs. We have to collectively push for this change, lest we lose whatever semblance of privacy in our data we have left.

#big-data, #cloud, #cloud-computing, #cloud-infrastructure, #cloud-storage, #column, #opinion, #privacy, #security


Vantage makes managing AWS easier

Vantage, a new service that makes managing AWS resources and their associated spend easier, is coming out of stealth today. The service offers its users an alternative to the complex AWS console with support for most of the standard AWS services, including EC2 instances, S3 buckets, VPCs, ECS and Fargate, and Route 53 hosted zones.

The company’s founder, Ben Schaechter, previously worked at AWS and DigitalOcean (and before that, he worked on Crunchbase, too). Yet while DigitalOcean showed him how to build a developer experience for individuals and small businesses, he argues that the underlying services and hardware simply weren’t as robust as those of the hyperclouds. AWS, on the other hand, offers everything a developer could want (and likely more), but the user experience leaves a lot to be desired.

Image Credits: Vantage

“The idea was really born out of ‘what if we could take the user experience of DigitalOcean and apply it to the three public cloud providers, AWS, GCP and Azure,’” Schaechter told me. “We decided to start just with AWS because the experience there is the roughest and it’s the largest player in the market. And I really think that we can provide a lot of value there before we do GCP and Azure.”

The focus for Vantage is on the developer experience and cost transparency. Schaechter noted that some of its users describe it as being akin to a “Mint for AWS.” To get started, you give Vantage a set of read permissions to your AWS services and the tool will automatically profile everything in your account. The service refreshes this list once per hour, but users can also refresh their lists manually.

Given that it’s often hard enough to know which AWS services you are actually using, that alone is a useful feature. “That’s the number one use case,” he said. “What are we paying for and what do we have?”

At the core of Vantage is what the team calls “views,” which allows you to see which resources you are using. What is interesting here is that this is quite a flexible system and allows you to build custom views to see which resources you are using for a given application across regions, for example. Those may include Lambda, storage buckets, your subnet, code pipeline and more.
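
Vantage’s own profiling code isn’t public, but the read-only flavor of that first inventory pass looks roughly like this with boto3, using only describe and list calls:

```python
# A rough, read-only inventory pass in the spirit of "what do we have?" --
# not Vantage's code, just the kind of describe/list calls its read
# permissions would allow.
import boto3

def inventory(region: str = "us-east-1") -> dict:
    ec2 = boto3.client("ec2", region_name=region)
    s3 = boto3.client("s3")
    r53 = boto3.client("route53")
    return {
        "ec2_instances": [
            i["InstanceId"]
            for r in ec2.describe_instances()["Reservations"]
            for i in r["Instances"]
        ],
        "s3_buckets": [b["Name"] for b in s3.list_buckets()["Buckets"]],
        "hosted_zones": [z["Name"] for z in r53.list_hosted_zones()["HostedZones"]],
    }

print(inventory())
```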

On the cost-tracking side, Vantage currently only offers point-in-time costs, but Schaechter tells me that the team plans to add historical trends as well to give users a better view of their cloud spend.

Schaechter and his co-founder bootstrapped the company and he noted that before he wants to raise any money for the service, he wants to see people paying for it. Currently, Vantage offers a free plan, as well as paid “pro” and “business” plans with additional functionality.

Image Credits: Vantage 

#amazon-web-services, #cloud, #cloud-computing, #cloud-infrastructure, #cloud-storage, #computing, #developer, #digitalocean, #gcp, #tc, #web-hosting, #world-wide-web


AWS launches Glue Elastic Views to make it easier to move data from one purpose-built data store to another

AWS has launched a new tool called Glue Elastic Views that lets developers move data from one store to another.

At the AWS re:Invent keynote, CEO Andy Jassy announced Glue Elastic Views, a service that lets programmers move data across multiple data stores more seamlessly.

The new service can take data from disparate silos and bring it together. The AWS ETL service allows programmers to write a little bit of SQL code to create a materialized view that can move from one source data store to another.

For instance, Jassy said, a programmer can move data from DynamoDB to Elasticsearch, allowing a developer to set up a materialized view to copy that data — all the while managing dependencies. That means if data changes in the source data lake, then it will automatically be updated in the other data stores where the data has been relocated, Jassy said.

“When you have the ability to move data… and move that data easily from data store to data store… that’s incredibly powerful,” said Jassy.

#amazon-web-services, #andy-jassy, #cloud-infrastructure, #cloud-storage, #computing, #data-lake, #data-management, #elasticsearch, #programmer, #sql, #tc, #web-hosting


Thousands of U.S. lab results and medical records spilled online after a security lapse

NTreatment, a technology company that manages electronic health and patient records for doctors and psychiatrists, left thousands of sensitive health records exposed to the internet because one of its cloud servers wasn’t protected with a password.

The cloud storage server was hosted on Microsoft Azure and contained 109,000 files, a large portion of which contained lab test results from third-party providers like LabCorp, medical records, doctor’s notes, insurance claims, and other sensitive health data for patients across the U.S., a class of data considered protected health information under the Health Insurance Portability and Accountability Act (HIPAA). Running afoul of HIPAA can result in steep fines.

None of the data was encrypted, and nearly all of the sensitive files were viewable in the browser. Some of the medical records belonged to children.

TechCrunch found the exposed data as part of a separate investigation. It wasn’t initially clear who owned the storage server, but many of the electronic health records that TechCrunch reviewed in an effort to trace the source of the data spillage were tied to doctors, psychiatrists and healthcare workers at hospitals or networks known to use nTreatment. The storage server also contained some internal company documents, including a non-disclosure agreement with a major prescriptions provider.

The data was secured on Monday after TechCrunch contacted the company. In an email, NTreatment co-founder Gregory Katz said the server was “used as a general purpose storage,” but did not say how long the server was exposed.

Katz said the company would notify affected providers and regulators of the incident.

It’s the latest in a series of incidents involving the exposure of medical data. Earlier this year we found a bug in LabCorp’s website that exposed thousands of lab results, and reported on the vast amounts of medical imaging floating around the web.

#articles, #cloud-storage, #co-founder, #data-breach, #electronic-health-records, #health, #medical-imaging, #security, #technology, #united-states


Google Photos is the latest “Unlimited” plan to impose hard limits

Google is no longer offering unlimited photo storage—except to Pixel users, that is. (credit: Google)

Today, Google Photos VP Shimrit Ben-Yair announced the end of Google Photos’ unlimited photo storage policy. The plan already came with significant caveats—unlimited storage was for the tier Google deems “High Quality,” which includes compressed media only, capped at 16 megapixels for photos and 1080p for videos. Uncompressed or higher-resolution photos and videos saved in original quality count against the 15GiB cap for the user’s Google Drive account.

As of June 2021, High Quality photos and videos will also begin counting against a user’s Google Drive storage capacity. That said, if you’ve already got a terabyte of High Quality photos and videos stored in Photos, don’t panic—the policy change affects new photos and videos created or stored after June 2021 only. Media that’s already saved to Google Photos is grandfathered in and will not be affected by the new policy change.

Original Quality—again, meaning either uncompressed or resolution over 16mp still / 1080p video—is also unaffected, since those files were already subject to the user’s Google Drive quota. Any additional capacity purchased through Google One membership also applies to media storage—if you lease 100GiB of capacity at Google One’s $2/month or $20/year plans, that capacity applies to your Google Photos data as well.

#cloud-storage, #gmail, #google, #google-one, #google-photos, #google-drive, #tech, #unlimited


Come June 1, 2021, all of your new photos will count against your free Google storage

Come June 1, 2021, Google will change its storage policies for free accounts — and not for the better. Basically, if you’re on a free account and a semi-regular Google Photos user, get ready to pay up next year and subscribe to Google One.

Currently, every free Google Account comes with 15 GB of online storage for all your Gmail, Drive and Photos needs. Email and the files you store in Drive already count against those 15 GB, but come June 1, all Docs, Sheets, Slides, Drawings, Forms or Jamboard files will count against the free storage as well. Those tend to be small files, but what’s maybe most important here, virtually all of your Photos uploads will now count against those 15 GB as well.

That’s a big deal because today, Google Photos lets you store unlimited images (and unlimited video, if it’s in HD) for free as long as they are under 16MP in resolution or you opt to have Google degrade the quality. Come June of 2021, any new photo or video uploaded in high quality, which currently wouldn’t count against your allocation, will count against those free 15 GB.

Image Credits: Google

As people take more photos every year, that free allotment won’t last very long. Google argues that 80 percent of its users will have at least three years to reach those 15 GB. Given that you’re reading TechCrunch, though, chances are you’re in the 20 percent who will run out of space much faster (or you’re already on a Google One plan).

Some good news: to make this transition a bit easier, photos and videos uploaded in high quality before June 1, 2021 will not count toward the 15 GB of free storage. As usual, original quality images will continue to count against it, though. And if you own a Pixel device, even after June 1, you can still upload an unlimited number of high-quality images from those.

To let you see how long your current storage will last, Google will now show you personalized estimates, too, and come next June, the company will release a new free tool for Photos that lets you more easily manage your storage. It’ll also show you dark and blurry photos you may want to delete — but then, for a long time Google’s promise was you didn’t have to worry about storage (remember Google’s old Gmail motto? ‘Archive, don’t delete!’)

In addition to these storage updates, there are a few other changes worth knowing about. If your account is inactive in Gmail, Drive or Photos for more than two years, Google 'may' delete the content in that product. So if you use Gmail but don't use Photos for two years because you use another service, Google may delete any old photos you had stored there. And if you stay over your storage limit for two years, Google "may delete your content across Gmail, Drive and Photos."

Cutting back a free and (in some cases) unlimited service is never a great move. Google argues that it needs to make these changes to “continue to provide everyone with a great storage experience and to keep pace with the growing demand.”

People now upload more than 4.3 million GB to Gmail, Drive and Photos every day. That’s not cheap, I’m sure, but Google also controls every aspect of this and must have had some internal projections of how this would evolve when it first set those policies.

To some degree, though, this was maybe to be expected. This isn't the freewheeling Google of 2010 anymore, after all. We've already seen some indications that Google may reserve certain advanced Photos features for Google One subscribers, for example. This new move will obviously push more people to pay for Google One, and more revenue from Google One means a little less dependence on advertising for the company.

#cloud-applications, #cloud-computing, #cloud-storage, #computing, #gmail, #google, #google-one, #google-photos, #online-storage, #storage, #tc, #web-applications, #world-wide-web

0

Microsoft announces its first Azure data center region in Taiwan

After announcing its latest data center region in Austria earlier this month and an expansion of its footprint in Brazil, Microsoft today unveiled its plans to open a new region in Taiwan. This new region will augment its existing presence in East Asia, where the company already runs data centers in China (operated by 21Vianet), Hong Kong, Japan and Korea. It will bring Microsoft's total presence around the world to 66 cloud regions.

Similar to its recent expansion in Brazil, Microsoft also pledged to provide digital skilling for over 200,000 people in Taiwan by 2024 and it is growing its Taiwan Azure Hardware Systems and Infrastructure engineering group, too. That’s in addition to investments in its IoT and AI research efforts in Taiwan and the startup accelerator it runs there.

“Our new investment in Taiwan reflects our faith in its strong heritage of hardware and software integration,” said Jean-Philippe Courtois, Executive Vice President and President, Microsoft Global Sales, Marketing and Operations. “With Taiwan’s expertise in hardware manufacturing and the new datacenter region, we look forward to greater transformation, advancing what is possible with 5G, AI and IoT capabilities spanning the intelligent cloud and intelligent edge.”

Image Credits: Microsoft

The new region will offer access to the core Microsoft Azure services, as well as support for Microsoft 365, Dynamics 365 and Power Platform. That's pretty much Microsoft's playbook for launching all of its new regions these days. Like virtually all of Microsoft's new data center regions, this one will also offer multiple availability zones.

#artificial-intelligence, #austria, #brazil, #china, #cloud, #cloud-computing, #cloud-infrastructure, #cloud-storage, #computing, #internet-of-things, #iot, #japan, #microsoft, #microsoft-365, #microsoft-azure, #taiwan

0

Microsoft Azure launches new availability zones in Canada and Australia

Microsoft Azure offers developers access to more data center regions than its competitors, but it was late to the game of offering different availability zones within those regions for high-availability use cases. After a few high-profile issues a couple of years ago, it accelerated its roadmap for building availability zones. Currently, 12 of Microsoft's regions feature availability zones, and as the company announced at its Ignite conference, the Canada Central and Australia regions now join that list.

In addition, the company today promised that it would launch availability zones in each country it operates data centers in within the next 24 months.

The idea of an availability zone is to offer users access to data centers that are in the same geographic region but are physically separate, each with its own power, networking and connectivity infrastructure. That way, if one of those data centers goes offline for whatever reason, another one in the same area can take over.

In its early days, Microsoft Azure took a slightly different approach and focused on regions without availability zones, arguing that geographic expansion was more important than offering zones. Google took a somewhat similar approach, but it now offers three availability zones in virtually all of its regions (and four in Iowa). The general idea was that developers could always spread high-availability applications across multiple regions, but that approach introduces additional latency, among other trade-offs.
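Whichever way the redundancy is carved up, zones within a region or entirely separate regions, the application-level pattern looks much the same: probe an ordered list of endpoints and fail over when the primary stops answering. Below is a minimal, service-agnostic sketch in Python; the URLs are placeholders rather than real Azure endpoints.

```python
# Minimal failover sketch: try endpoints hosted in different availability
# zones in order and return the first healthy one.
# The URLs below are placeholders, not real Azure endpoints.
import urllib.error
import urllib.request

ENDPOINTS = [
    "https://app-zone1.example.com/health",  # primary zone
    "https://app-zone2.example.com/health",  # standby in a second zone
    "https://app-zone3.example.com/health",  # standby in a third zone
]

def first_healthy(endpoints, timeout=2.0):
    """Return the first endpoint that answers with HTTP 200, or None."""
    for url in endpoints:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                if resp.status == 200:
                    return url
        except (urllib.error.URLError, OSError):
            continue  # this zone is unreachable, try the next one
    return None

if __name__ == "__main__":
    print(first_healthy(ENDPOINTS) or "no zone reachable")
```

Real deployments typically push this logic into a load balancer or DNS-based traffic management rather than the client, but the failover reasoning is the same.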

#australia, #cloud, #cloud-computing, #cloud-infrastructure, #cloud-storage, #computing, #data-center, #data-management, #google, #iowa, #microsoft, #microsoft-ignite-2020, #microsoft-azure

0

Unity launches its Cloud Content Delivery service for game developers

Unity, the company behind the popular real-time 3D engine, today officially launched its Cloud Content Delivery service. This new service, which is engine-agnostic, combines a content delivery network and backend-as-a-service platform to help developers distribute and update their games. The idea here is to offer Unity developers — and those using other game engines — a live game service option that helps them get the right content to their players at the right time.

As Unity's Felix The noted, most game developers currently use a standard CDN provider, but that means they must also build their own last-mile delivery service to make their install and update process more dynamic and configurable. Or, as most gamers can attest, developers simply opt to ship the game as one large binary, and with every update the user has to download that massive file again.

“That can mean the adoption of your new game content or any content will trail a little bit behind because you are reliant on people doing the updates necessary,” The said.

And while the Cloud Delivery Service can be used across platforms, the team is mostly focusing on mobile for now. “We are big fans of focusing on a certain segment when we start and then we can decide how we want to expand. There is a lot of need in the mobile space right now — more so than the rest,” The said. To account for this, the Cloud Content Delivery service allows developers to specify which binary to send to which device, for example.

Having a CDN is one thing, but that last-mile delivery, as The calls it, is where Unity believes it can solve a real pain point for developers.

“CDNs, you get content. Period,” The said. “But in this case, if you want to, as a game developer, test a build — is this QA ready? Is this something that is still being QAed? The build that you want to assign to be downloaded from our Cloud Content Delivery will be different. You want to soft launch new downloadable content for Canada before you release it in the U.S.? You would use our system to configure that. It’s really purpose-built with video games in mind.”

The team decided to keep pricing simple. All developers pay for is egress, plus a very small fee for storage. There is no regional pricing either: the first 50GB of bandwidth usage is free, Unity charges $0.08 per GB for the next 50TB, and additional tiers kick in beyond 50TB ($0.06/GB) and 500TB ($0.03/GB).
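To make that concrete, here is a small Python sketch of the bandwidth tiers as described above, assuming they are graduated (each rate applies only to the usage that falls inside its band) and ignoring the separate storage fee.

```python
# Sketch of Unity Cloud Content Delivery's published bandwidth tiers,
# assuming graduated pricing: first 50 GB free, $0.08/GB up to 50 TB,
# $0.06/GB up to 500 TB, $0.03/GB beyond that. Storage fees are not modeled.

TIERS = [
    (50, 0.00),           # first 50 GB free
    (50_000, 0.08),       # up to 50 TB (~50,000 GB) at $0.08/GB
    (500_000, 0.06),      # up to 500 TB at $0.06/GB
    (float("inf"), 0.03), # beyond 500 TB at $0.03/GB
]

def monthly_egress_cost(gb_used: float) -> float:
    """Compute the bandwidth bill for a month's egress, in dollars."""
    cost, prev_ceiling = 0.0, 0.0
    for ceiling, rate in TIERS:
        if gb_used <= prev_ceiling:
            break
        billable = min(gb_used, ceiling) - prev_ceiling
        cost += billable * rate
        prev_ceiling = ceiling
    return cost

if __name__ == "__main__":
    for gb in (40, 1_000, 100_000):
        print(f"{gb:>9,} GB -> ${monthly_egress_cost(gb):,.2f}")
```

Under those assumptions, 1TB of monthly egress works out to roughly $76 and 100TB to just under $7,000.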

“Our intention is that people will look at it and don’t worry about ‘what does this mean? I need a pricing calculator. I need to simulate what’s it going to cost me,’ but really just focus on the fact that they need to make great content,” The explained.

It’s worth highlighting that the delivery service is engine-agnostic. Unity, of course, would like you to use it for games written with the help of the Unity engine, but it’s not a requirement. The argues that this is part of the company’s overall philosophy.

“Our mission has always been centered around democratizing development and making sure that people — regardless of their choices — will have access to success,” he said. “And in terms of operating your game, the decision of a gaming engine typically has been made well before operating your game ever comes into the picture. […] Developer success is at the heart of what we want to focus on.”

#canada, #cloud-computing, #cloud-infrastructure, #cloud-storage, #computing, #content-delivery-network, #developer, #distributed-computing, #game-engine, #gaming, #streaming, #tc, #united-states, #unity, #unity-technologies

0

Drew Houston will talk about building a startup and digital transformation during COVID at TechCrunch Disrupt

Dropbox CEO Drew Houston will be joining us for a one-on-one interview at this year's TechCrunch Disrupt, happening next week, September 14-18.

Houston has been there and done that as a startup founder. After attending Y Combinator in 2007 and launching at the TechCrunch 50 (the precursor to TechCrunch Disrupt) in 2008, he went on to raise $1.7 billion from firms like BlackRock, Sequoia and Index Ventures before taking his company public in 2018.

Houston and his co-founder Arash Ferdowsi had a simple idea to make it easier to access your stuff on the internet. Instead of carrying your files on a thumb drive or emailing them to yourself, as was the norm at that time, you could have a hard drive in the cloud. This meant that you could log on wherever you were, even when you were not on your own computer, and access your files.

Houston and Ferdowsi wanted to make it dead simple to do this, and in the days before smartphones and tablets, they achieved that goal and grew a company that reported revenue of $467.4 million (a run rate of over $1.8 billion) in its most recent earnings report. Today, Dropbox has a market cap of over $8 billion.

And as we find ourselves in the midst of a pandemic, businesses like Houston's are suddenly hotter than ever: companies are accelerating their move to the cloud, and employees working from home need access to work files and an easy, secure way to share them with colleagues.

In the years since launch, Dropbox has expanded beyond pure consumer file sharing, adding business tools for sharing files with teams and for administering and securing them from a central console, along with extras like a password manager, an online vault for important files, full backup, and electronic signature and workflow features via the purchase of HelloSign last year.

Houston will join us at TechCrunch Disrupt 2020 to discuss all of this, including how he helped build the company from that initial idea to where it is today and what it takes to achieve the kind of success every startup founder dreams about. Get your Digital Pro Pass, your Startup Alley Exhibitor Package or even a Digital Pass for $45 to hear this session on the Disrupt stage. We hope you'll join us.

#cloud, #cloud-storage, #collaboration, #disrupt-2020, #drew-houston, #dropbox, #file-sharing, #saas, #storage, #tc

0

Facebook’s photo porting tool adds support for Dropbox and Koofr

Facebook's photo and video portability tool has added support for two more third-party services for users to send data via encrypted transfer: cloud storage providers Dropbox and (EU-based) Koofr.

The tech giant debuted the photo porting tool in December last year, initially offering users in its EU HQ location of Ireland the ability to port their media directly to Google Photos, before going on to open up access in more markets. It completed a global rollout of that first offering in June.

Facebook users in all its markets now have three options to choose from if they want to transfer Facebook photos and videos elsewhere. A company spokesman confirmed support for other (unnamed) services is also in the works, telling us: “There will be more partnership announcements in the coming months.”

The transfer tool is based on code developed via Facebook’s participation in the Data Transfer Project — a collaborative effort started last year, with backing from other tech giants including Apple, Google, Microsoft and Twitter.

To access the tool, Facebook users need to navigate to the ‘Your Facebook Information’ menu and select ‘Transfer a copy of your photos and videos’. Facebook will then prompt you to re-enter your password prior to initiating the transfer. You will then be asked to select a destination service from the three on offer (Google Photos, Dropbox or Koofr) and asked to enter your password for that third party service — kicking off the transfer.

Users will receive a notification on Facebook and via email when the transfer has been completed.

The encrypted transfers work from both the desktop version of Facebook and its mobile app.

Last month, in comments to the FTC ahead of a portability hearing scheduled for later this month, the tech giant signalled that it would expand the scope of its data portability offerings, including hinting that it might offer direct transfers for more types of content in the future, such as events or even users' “most meaningful” posts.

For now, though, Facebook only supports direct, encrypted transfers for photos and videos uploaded to Facebook.

While Google and Dropbox are familiar names, the addition of a smaller, EU-based cloud storage provider in the list of supported services does stand out a bit. On that, Facebook’s spokesperson told us it reached out to discuss adding Koofr to the transfer tool after a staffer came across an article on Mashable discussing it as an EU cloud storage solution.

A bigger question is when, or whether, Facebook will offer direct photo portability to users of its photo-sharing service, Instagram. It has not mentioned anything specific on that front when discussing its plans to expand portability.

When we asked Facebook about bringing the photo porting tool to Instagram, a spokesman told us: “Facebook have prioritised portability tools on Facebook at the moment but look forward to exploring expansion to the other apps in the future.”

In a blog post announcing the new destinations for users of the Facebook photo & video porting tool, the tech giant repeats its call for lawmakers to come up with “clearer rules” to govern portability, writing that: “We want to continue to build data portability features people can trust. To do that, the Internet needs clearer rules about what kinds of data should be portable and who is responsible for protecting that data as it moves to different services. Policymakers have a vital role to play in this.”

It also writes that it’s keen for other companies to join the Data Transfer Project — “to expand options for people and push data portability innovation forward”.

In recent years Facebook has been lobbying for what it calls ‘the right regulation’ to wrap around portability — releasing a white paper on the topic last year which plays up what it couches as privacy and security trade-offs in a bid to influence regulatory thinking around requirements on direct data transfers.

Portability is in the frame as a possible tool for helping rebalance markets in favor of new entrants or smaller players as lawmakers dig into concerns around data-fuelled barriers to competition in an era of platform giants.

#apple, #apps, #cloud-applications, #cloud-storage, #data-portability, #dropbox, #european-union, #facebook, #federal-trade-commission, #google, #google-photos, #instagram, #interoperability, #koofr, #microsoft, #policy, #social, #twitter

0

Google One now offers free phone backups up to 15GB on Android and iOS

Google One, Google's subscription program for buying additional storage and live support, is getting an update today that brings free phone backups for Android and iOS devices to anybody who installs the app, even without a paid membership. The catch: while the feature is free, the backups count against your free Google storage allowance of 15GB. If you need more, you'll need (you guessed it) a Google One membership to buy additional storage, or you'll have to delete data you no longer need. Paid memberships start at $1.99/month for 100GB.

Image Credits: Google

Last year, paid members already got access to this feature on Android, which stores your texts, contacts, apps, photos and videos in Google’s cloud. The “free” backups are now available to Android users. iOS users will get access to it once the Google One app rolls out on iOS in the near future.

Image Credits: Google

With this update, Google is also introducing a new storage manager tool in Google One, which is available in the app and on the web, and which allows you to delete files and backups as needed. The tool works across Google properties and lets you find emails with very large attachments or large files in your Google Drive storage, for example.

With this free backup feature, Google is clearly trying to get more people onto Google One. The free 15GB storage limit is pretty easy to hit, after all (and that's your overall storage across Google, including Gmail and other services), and paying $1.99 for 100GB isn't exactly a major expense, especially if you are already part of the Google ecosystem and use apps like Google Photos.

#android, #cloud-storage, #computing, #google, #google-drive, #icloud, #ios, #ios-devices, #mobile-app, #operating-systems, #smartphones, #tc

0

Microsoft employs experimental undersea data center in search for COVID-19 vaccine

Part of the challenge in seeking out an effective treatment for COVID-19 is simply one of scale: protein folding is key to understanding how the virus that causes COVID-19 attaches to healthy cells in order to infect them. Modeling that folding gets a big boost from distributed computing efforts like the Folding@home global program, which employs even consumer computers as processing nodes to tackle big problems. Microsoft is testing pre-packed, shipping-container-sized data centers that can be spun up on demand and run deep under the ocean's surface for sustainable, high-efficiency and cool operation, allowing them to contribute to such efforts in a big way, and it's now using one in Scotland to model the viral proteins involved in COVID-19.

This research project isn't new for Microsoft: it has been operating the data center at a depth of 117 feet for two years now. But the shift of its focus to COVID-19 represents a new development, and is obviously a response to the pressing need for more advances in our understanding of the SARS-CoV-2 virus and the potential therapies we could use to treat it or prevent it from infecting people.

Within the tubular submerged datacenter are 864 servers, providing significant computing power. The idea of packing them into a submersible tube is intended to provide efficiencies in terms of operating temperatures. Cooling and thermal management is essential for any high-capacity processing equipment, since all that computing power generates a tremendous amount of heat. It’s why you see such elaborate cooling equipment in high-performance gaming PC builds, and it’s doubly crucial when you’re operating at the level of the data center. Deep underwater, the thermal environment provides natural cooling that allows processors to run consistently at higher speeds, without the need to pump more energy in to run fans or more elaborate liquid cooling systems.

Should this project, which Microsoft has dubbed “Natick,” work as designed, future distributed computing projects could benefit immensely from the on-demand deployment of a number of these distributed sea-floor data centers.

#biotech, #cloud-storage, #computer-virus, #computing, #coronavirus, #covid-19, #data-center, #data-management, #foldinghome, #health, #microsoft, #science, #scotland, #software, #tc

0

Azure Arc, Microsoft’s service for managing cloud resources anywhere, is now in public preview

At its Build developer conference, Microsoft today announced that Azure Arc, its service for managing cloud resources anywhere, including competing clouds like AWS and GCP and platforms like Red Hat's OpenShift, is now in public preview.

Microsoft first announced this Kubernetes-based solution at its Ignite event in Orlando last September. One feature that makes it stand out is that it builds on some of what Microsoft has learned from its Azure Stack project for bringing Azure services to its customers' data centers (and, unsurprisingly, Azure Arc also supports deployments on Azure Stack). Thanks to this, Azure Arc doesn't just allow you to manage containerized workloads anywhere but also includes the ability to bring services like Azure SQL Database and Azure Database for PostgreSQL to these platforms. It's also worth noting that while this is a Microsoft service, it supports both Windows and Linux servers.

As part of today’s public preview launch, Microsoft also announced that Arc now supports SUSE Linux Enterprise Server and the SUSE CaaS Platform. “Azure Arc for servers gives customers a central management control plane with security and governance capabilities for SUSE Linux Enterprise Server systems hosted outside of the Azure cloud, such as edge deployments,” says SUSE President of Engineering and Innovation Thomas Di Giacomo.

It's no secret that most large cloud vendors now have some kind of multi-cloud management service similar to Azure Arc. Google is betting heavily on Anthos, for example, while AWS offers its fully managed Outposts service. They all have slightly different characteristics and philosophies, but the fact that every major cloud player now offers some version of this is a clear sign that enterprises don't want to be locked into a single cloud, even if choosing one of these management services still means placing a bet on a specific vendor.

In a related set of announcements, Microsoft also launched a large set of new features for Azure Stack. This includes the private preview of Azure Stack Hub fleet management for monitoring deployments across Azure and Azure Stack Hub, as well as GPU partitioning using AMD GPUs, which is also now in private preview. This last part matters not just for using those GPUs for visualization but also for enabling graphics-intensive workloads on virtualized desktop environments through Azure Stack Hub for enterprises that use AMD GPUs in their servers. With GPU partitioning, admins can give multiple users access to their share of a GPU's overall power.

#amd, #artificial-intelligence, #azure, #cloud-computing, #cloud-infrastructure, #cloud-storage, #computing, #fleet-management, #google, #linux, #microsoft, #microsoft-build-2020, #microsoft-windows, #microsoft-azure, #postgresql, #red-hat, #sql, #tc

0

Microsoft partners with Redis Labs to improve its Azure Cache for Redis

For a few years now, Microsoft has offered Azure Cache for Redis, a fully managed caching solution built on top of the open-source Redis project. Today, it is expanding this service by adding Redis Enterprise, Redis Labs' commercial offering, to its platform. It's doing so in partnership with Redis Labs, and while Microsoft will offer some basic support for the service, Redis Labs will handle most of the software support itself.

Julia Liuson, Microsoft’s corporate VP of its developer tools division, told me that the company wants to be seen as a partner to open-source companies like Redis Labs, which was among the first companies to change its license to prevent cloud vendors from commercializing and repackaging their free code without contributing back to the community. Last year, Redis Labs partnered with Google Cloud to bring its own fully managed service to its platform and so maybe it’s no surprise that we are now seeing Microsoft make a similar move.

Liuson tells me that with this new tier for Azure Cache for Redis, users will get a single bill and native Azure management, as well as the option to deploy natively on SSD flash storage. The native Azure integration should also make it easier for developers on Azure to integrate Redis Enterprise into their applications.

It’s also worth noting that Microsoft will support Redis Labs’ own Redis modules, including RediSearch, a Redis-powered search engine, as well as RedisBloom and RedisTimeSeries, which provide support for new datatypes in Redis.

“For years, developers have utilized the speed and throughput of Redis to produce unbeatable responsiveness and scale in their applications,” says Liuson. “We’ve seen tremendous adoption of Azure Cache for Redis, our managed solution built on open source Redis, as Azure customers have leveraged Redis performance as a distributed cache, session store, and message broker. The incorporation of the Redis Labs Redis Enterprise technology extends the range of use cases in which developers can utilize Redis, while providing enhanced operational resiliency and security.”
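For developers, part of the appeal of a managed cache is that application code doesn't change: it talks to the service like any other Redis endpoint. Below is a minimal cache-aside sketch using the open-source redis-py client; the hostname and access key are placeholders, and the TLS port shown follows Azure Cache for Redis' convention.

```python
# Minimal cache-aside sketch against a Redis endpoint such as
# Azure Cache for Redis. Hostname and key below are placeholders.
import redis

cache = redis.Redis(
    host="example-cache.redis.cache.windows.net",  # placeholder hostname
    port=6380,                  # Azure Cache for Redis exposes TLS on 6380
    password="YOUR_ACCESS_KEY", # placeholder access key
    ssl=True,
)

def get_profile(user_id: str) -> str:
    """Return a cached value, falling back to a (stubbed) slow lookup."""
    key = f"profile:{user_id}"
    cached = cache.get(key)
    if cached is not None:
        return cached.decode("utf-8")
    value = f"profile-data-for-{user_id}"  # stand-in for a database query
    cache.setex(key, 300, value)           # cache the result for five minutes
    return value

if __name__ == "__main__":
    print(get_profile("42"))
```

The check-the-cache, fall-back, then-populate-with-a-TTL flow shown here is the distributed-cache use case Liuson describes; session stores and message brokers follow similar patterns with different Redis commands.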

#caching, #cloud, #cloud-infrastructure, #cloud-storage, #computing, #data-management, #developer, #enterprise, #flash, #google, #google-cloud, #microsoft, #nosql, #redis, #redis-labs, #search-engine

0

Backblaze challenges AWS by making its cloud storage S3 compatible

Backblaze today announced that its B2 Cloud Storage service is now API-compatible with Amazon’s S3 storage service.

Backblaze started out as an affordable cloud backup service but over the last few years, the company has also taken its storage expertise and launched the developer-centric B2 Cloud Storage service, which promises to be significantly cheaper than similar offerings from the large cloud vendors. Pricing for B2 starts at $0.005 per GB/month. AWS S3 starts at $0.023 per GB/month.

The storage price alone isn’t going to make developers switch providers, though. There are some costs involved in supporting multiple heterogeneous systems, too.

By making B2 compatible with the S3 API, developers can now simply redirect their storage to Backblaze without the need for any extensive rewrites.
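In practice, that redirection is usually just a matter of pointing an existing S3 client at B2's endpoint. Here is a minimal sketch using the boto3 library; the endpoint URL, bucket name and credentials are placeholders to swap for your own B2 values.

```python
# Pointing an existing S3 client at Backblaze B2's S3-compatible API.
# Endpoint, bucket name and credentials below are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.us-west-002.backblazeb2.com",  # example B2 endpoint
    aws_access_key_id="YOUR_B2_KEY_ID",
    aws_secret_access_key="YOUR_B2_APPLICATION_KEY",
)

# Everything below is stock S3 code, which is the point of the compatibility layer.
s3.put_object(
    Bucket="my-example-bucket",
    Key="backups/db.tar.gz",
    Body=b"...archive bytes...",
)

for obj in s3.list_objects_v2(Bucket="my-example-bucket").get("Contents", []):
    print(obj["Key"], obj["Size"])
```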

“For years, businesses have loved our astonishingly easy-to-use cloud storage for supporting them in achieving incredible outcomes,” said Gleb Budman, the co-founder and CEO of Backblaze. “Today we’re excited to do all the more by enabling many more businesses to use our storage with their existing tools and workflows.”

Current B2 customers include the likes of American Public Television, Patagonia and Verizon's Complex Networks (with Verizon being the corporate overlords of Verizon Media Group, TechCrunch's parent company). Backblaze says it has about 100,000 total customers for its B2 service. Among the partners for today's launch are Cinafilm, IBM's Aspera file transfer and streaming service, storage specialist Quantum and cloud data management service Veeam.

“Public cloud storage has become an integral part of the post-production process. This latest enhancement makes Backblaze B2 Cloud Storage more accessible—both for us as a vendor, and for customers,” said Eric Bassier, Senior Director, Product Marketing at Quantum. “We can now use the new S3 Compatible APIs to add BackBlaze B2 to the list of StorNext compatible public cloud storage targets, taking another step toward enabling hybrid and multi-cloud workflows.”

#amazon, #api, #aspera, #backblaze, #cloud, #cloud-computing, #cloud-storage, #computing, #developer, #ibm, #patagonia, #tc, #techcrunch, #verizon, #verizon-media-group, #web-hosting

0