Thursday, May 2, 2019

Microsoft brings Plug and Play to IoT

Microsoft today announced that it wants to bring the ease of use of Plug and Play, which today allows you to plug virtually any peripheral into a Windows PC without having to worry about drivers, to IoT devices. Typically, getting an IoT device connected and up and running takes some work, even with modern deployment tools. The promise of IoT Plug and Play is that it will greatly simplify this process and do away with the hardware and software configuration steps that are still needed today.

As Azure corporate vice president Julia White writes in today’s announcement, “one of the biggest challenges in building IoT solutions is to connect millions of IoT devices to the cloud due to heterogeneous nature of devices today – such as different form factors, processing capabilities, operational system, memory and capabilities.” This, Microsoft argues, is holding back IoT adoption.

IoT Plug and Play, on the other hand, offers developers an open modeling language that will allow them to connect these devices to the cloud without having to write any code.

Microsoft can’t do this alone, though, since it needs the support of the hardware and software manufacturers in its IoT ecosystem, too. The company has already signed up a number of partners, including Askey, Brainium, Compal, Kyocera, STMicroelectronics, Thundercomm and VIA Technologies. The company says that dozens of devices are already Plug and Play-ready and potential users can find them in the Azure IoT Device Catalog.

Microsoft launches a drag-and-drop machine learning tool

Microsoft today announced three new services that all aim to simplify the process of machine learning. These range from a new interface for a tool that completely automates the process of creating models, to a new no-code visual interface for building, training and deploying models, all the way to hosted Jupyter-style notebooks for advanced users.

Getting started with machine learning is hard. Even running the most basic of experiments takes a good amount of expertise. All of these new tools greatly simplify this process by hiding away the code or by giving those who want to write their own code a pre-configured platform for doing so.

The new interface for Azure’s automated machine learning tool makes creating a model as easy as importing a data set and then telling the service which value to predict. Users don’t need to write a single line of code, while on the backend, this updated version now supports a number of new algorithms and optimizations that should result in more accurate models. While most of this is automated, Microsoft stresses that the service provides “complete transparency into algorithms, so developers and data scientists can manually override and control the process.”
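
The web interface requires no code at all, but the same automated ML capability is also exposed through the Azure Machine Learning Python SDK for developers who prefer to script it. Here is a minimal sketch, assuming a workspace config file is present and a registered tabular dataset called "customer-churn" with a "churned" label column (both names are hypothetical):

```python
from azureml.core import Workspace, Dataset, Experiment
from azureml.train.automl import AutoMLConfig

ws = Workspace.from_config()  # reads the workspace details from config.json
# Hypothetical registered dataset; substitute one of your own.
training_data = Dataset.get_by_name(ws, name="customer-churn")

automl_config = AutoMLConfig(
    task="classification",          # or "regression" / "forecasting"
    training_data=training_data,
    label_column_name="churned",    # the value you want the service to predict
    primary_metric="AUC_weighted",
    iterations=20,                  # how many pipelines automated ML should try
    n_cross_validations=5,
)

run = Experiment(ws, "automl-churn").submit(automl_config, show_output=True)
best_run, fitted_model = run.get_output()  # inspect, override or deploy the winner
```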

For those who want a bit more control from the get-go, Microsoft also today launched a visual interface for its Azure Machine Learning service into preview that will allow developers to build, train and deploy machine learning models without having to touch any code.

This tool, the Azure Machine Learning visual interface, looks suspiciously like the existing Azure ML Studio, Microsoft’s first stab at building a visual machine learning tool. Indeed, the two services look identical. The company never really pushed this service, though, and almost seemed to have forgotten about it, despite the fact that it always seemed like a really useful tool for getting started with machine learning.

Microsoft says that this new version combines the best of Azure ML Studio with the Azure Machine Learning service. In practice, this means that while the interface is almost identical, the Azure Machine Learning visual interface extends what was possible with ML Studio by running on top of the Azure Machine Learning service and adding that service’s security, deployment and lifecycle management capabilities.

The service provides an easy interface for cleaning up your data, training models with the help of different algorithms, evaluating them and, finally, putting them into production.

While these first two services clearly target novices, the new hosted notebooks in Azure Machine Learning are geared toward the more experienced machine learning practitioner. The notebooks come pre-packaged with support for the Azure Machine Learning Python SDK and run in what the company describes as a “secure, enterprise-ready environment.” While using these notebooks isn’t trivial either, this new feature allows developers to quickly get started without the hassle of setting up a new development environment with all the necessary cloud resources.
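
As a rough sketch of what working in one of those hosted notebooks looks like, the pre-installed SDK lets you attach to your workspace and start tracking experiment runs in a handful of lines (the experiment name is just an example, and I’m assuming the notebook environment is already authenticated against the workspace):

```python
from azureml.core import Workspace, Experiment

ws = Workspace.from_config()  # the hosted notebooks sit next to a workspace config

exp = Experiment(workspace=ws, name="notebook-quickstart")
run = exp.start_logging()     # start an interactive run from the notebook
run.log("accuracy", 0.92)     # log any metric your training code produces
run.complete()
```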

Microsoft launches a fully managed blockchain service

Microsoft didn’t rush to bring blockchain technology to its Azure cloud computing platform, but over the course of the last year, it started to pick up the pace with the launch of its blockchain development kit and the Azure Blockchain Workbench. Today, ahead of its Build developer conference, it is going a step further by launching Azure Blockchain Service, a fully managed service that allows for the formation, management and governance of consortium blockchain networks.

We’re not talking cryptocurrencies here, though. This is an enterprise service that is meant to help businesses build applications on top of blockchain technology. It is integrated with Azure Active Directory and offers tools for adding new members, setting permissions and monitoring network health and activity.

The first supported ledger is J.P. Morgan’s Quorum. “Because it’s built on the popular Ethereum protocol, which has the world’s largest blockchain developer community, Quorum is a natural choice,” Azure CTO Mark Russinovich writes in today’s announcement. “It integrates with a rich set of open-source tools while also supporting confidential transactions—something our enterprise customers require.” To launch this integration, Microsoft partnered closely with J.P. Morgan.

The managed service is only one part of this package, though. Microsoft also today launched an extension to Visual Studio Code to help developers create smart contracts. The extension allows Visual Studio Code users to create and compile Ethereum smart contracts and deploy them either on the public chain or on a consortium network in Azure Blockchain Service. The code is then managed by Azure DevOps.
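
To give a flavor of what deploying to a consortium network means in practice, here is a hedged sketch using the web3.py library against a hypothetical Azure Blockchain Service transaction-node endpoint; the URL, ABI and bytecode are placeholders you would get from your own member node and compiled contract:

```python
from web3 import Web3, HTTPProvider

# Hypothetical transaction-node endpoint exposed by an Azure Blockchain Service member.
NODE_URL = "https://my-member.blockchain.azure.com:3200/<access-key>"
w3 = Web3(HTTPProvider(NODE_URL))

# The ABI and bytecode come out of the compile step (the VS Code extension or solc).
abi = [...]          # placeholder
bytecode = "0x..."   # placeholder

account = w3.eth.accounts[0]
contract = w3.eth.contract(abi=abi, bytecode=bytecode)

# Deploy the contract and wait for it to be mined on the consortium network.
tx_hash = contract.constructor().transact({"from": account})
receipt = w3.eth.waitForTransactionReceipt(tx_hash)
print("Contract deployed at", receipt.contractAddress)
```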

Building applications for these smart contracts is also going to get easier thanks to integrations with Logic Apps and Flow, Microsoft’s two workflow integration services, as well as Azure Functions for event-driven development.

Microsoft, of course, isn’t the first of the big companies to get into this game. IBM, especially, made a big push for blockchain adoption in recent years and AWS, too, is now getting into the game after mostly ignoring this technology before. Indeed, AWS opened up its own managed blockchain service only two days ago.

Microsoft announces the $3,500 HoloLens 2 Development Edition

As part of its rather odd Thursday afternoon pre-Build news dump, Microsoft today announced the HoloLens 2 Development Edition. The company announced the much-improved HoloLens 2 at MWC Barcelona earlier this year, but it’s not shipping to developers yet. Currently, the best release date we have is “later this year.” The Development Edition will launch alongside the regular HoloLens 2.

The Development Edition, which will retail for $3,500 to own outright or on a $99 per month installment plan, doesn’t feature any special hardware. Instead, it comes with $500 in Azure credits and 3-month trials of Unity Pro and the Unity PiXYZ plugin for bringing engineering renderings into Unity.

To get the Development Edition, potential buyers have to join the Microsoft Mixed Reality Developer Program and those who already pre-ordered the standard edition will be able to change their order later this year.

As far as HoloLens news goes, that’s all a bit underwhelming. Anybody can get free Azure credits, after all (though usually only $200) and free trials of Unity Pro are also readily available (though typically limited to 30 days).

Oddly, the regular HoloLens 2 was also supposed to cost $3,500. It’s unclear if the regular edition will now be somewhat cheaper or cost the same but come without the credits, or really why Microsoft is doing this at all. Turning this into a special “Development Edition” feels more like a marketing gimmick than anything else, as well as an attempt to bring some of the futuristic glamour of the HoloLens visor to today’s announcements.

The folks at Unity are clearly excited, though. “Pairing HoloLens 2 with Unity’s real-time 3D development platform enables businesses to accelerate innovation, create immersive experiences, and engage with industrial customers in more interactive ways,” says Tim McDonough, GM of Industrial at Unity, in today’s announcement. “The addition of Unity Pro and PiXYZ Plugin to HoloLens 2 Development Edition gives businesses the immediate ability to create real-time 2D, 3D, VR, and AR interactive experiences while allowing for the importing and preparation of design data to create real-time experiences.”

Microsoft also today noted that Unreal Engine 4 support for HoloLens 2 will become available by the end of May.

Microsoft brings Azure SQL Database to the edge (and Arm)

Microsoft today announced an interesting update to its database lineup with the preview of Azure SQL Database Edge, a new tool that brings the same database engine that powers Azure SQL Database in the cloud to edge computing devices, including, for the first time, Arm-based machines.

Azure SQL Database Edge, Azure corporate vice president Julia White writes in today’s announcement, “brings to the edge the same performant, secure and easy to manage SQL engine that our customers love in Azure SQL Database and SQL Server.”

The new service, which will also run on x64-based devices and edge gateways, promises to bring low-latency analytics to edge devices as it allows users to work with streaming data and time-series data, combined with the built-in machine learning capabilities of Azure SQL Database. Like its larger brethren, Azure SQL Database Edge will also support graph data and comes with the same security and encryption features that can, for example, protect the data at rest and in motion, something that’s especially important for an edge device.

As White rightly notes, this also ensures that developers only have to write an application once and then deploy it to platforms that feature Azure SQL Database, good old SQL Server on premises and this new edge version.
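
Since it is the same engine speaking the same T-SQL everywhere, the only thing that really changes between the cloud database, an on-premises SQL Server and an edge device is the connection string. A minimal sketch with pyodbc (the server names, credentials and the sensor_readings table are all placeholders):

```python
import pyodbc

# Only the connection string differs per target; the query and the app code stay the same.
CONNECTION_STRINGS = {
    "cloud":   "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver.database.windows.net;DATABASE=telemetry;UID=app;PWD=<secret>",
    "on_prem": "DRIVER={ODBC Driver 17 for SQL Server};SERVER=sql01.corp.local;DATABASE=telemetry;UID=app;PWD=<secret>",
    "edge":    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost,1433;DATABASE=telemetry;UID=app;PWD=<secret>",
}

def latest_readings(target: str):
    """Return the ten most recent sensor readings from whichever deployment we point at."""
    conn = pyodbc.connect(CONNECTION_STRINGS[target])
    try:
        cursor = conn.cursor()
        cursor.execute(
            "SELECT TOP (10) device_id, reading, recorded_at "
            "FROM sensor_readings ORDER BY recorded_at DESC"
        )
        return cursor.fetchall()
    finally:
        conn.close()
```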

SQL Database Edge can run in both connected and fully disconnected fashion, something that’s also important for many use cases where connectivity isn’t always a given, yet where users need this kind of data analytics capability to keep their businesses (or drilling platforms, or cruise ships) running.

Takeaways from F8 and Facebook’s next phase

Extra Crunch offers members the opportunity to tune into conference calls led and moderated by the TechCrunch writers you read every day. This week, TechCrunch’s Josh Constine and Frederic Lardinois discuss major announcements that came out of Facebook’s F8 conference and dig into how Facebook is trying to redefine itself for the future.

Though touted as a developer-focused conference, Facebook spent much of F8 discussing privacy upgrades, how the company is improving its social impact, and a series of new initiatives on the consumer and enterprise side. Josh and Frederic discuss which announcements seem to make the most strategic sense, and which may create attractive (or unattractive) opportunities for new startups and investment.

“This F8 was aspirational for Facebook. Instead of being about what Facebook is, and accelerating the growth of it, this F8 was about Facebook, and what Facebook wants to be in the future.

That’s not the newsfeed, that’s not pages, that’s not profiles. That’s marketplace, that’s Watch, that’s Groups. With that change, Facebook is finally going to start to decouple itself from the products that have dragged down its brand over the last few years through a series of nonstop scandals.”


Josh and Frederic dive deeper into Facebook’s plans around its redesign, Messenger, Dating, Marketplace, WhatsApp, VR, smart home hardware and more. The two also dig into the biggest news, or lack thereof, on the developer side, including Facebook’s Ax and BoTorch initiatives.

For access to the full transcription and the call audio, and for the opportunity to participate in future conference calls, become a member of Extra Crunch. Learn more and try it for free. 

Couchbase’s mobile database gets built-in ML and enhanced synchronization features

Couchbase, the company behind the eponymous NoSQL database, announced a major update to its mobile database today that brings some machine learning smarts, as well as improved synchronization features and enhanced stats and logging support to the software.

“We’ve led the innovation and data management at the edge since the release of our mobile database five years ago,” Couchbase’s VP of Engineering Wayne Carter told me. “And we’re excited that others are doing that now. We feel that it’s very, very important for businesses to be able to utilize these emerging technologies that do sit on the edge to drive their businesses forward, and both making their employees more effective and their customer experience better.”

The latter part is what drove a lot of today’s updates, Carter noted. He also believes that the database is the right place to do some machine learning. So with this release, the company is adding predictive queries to its mobile database. This new API allows mobile apps to take pre-trained machine learning models and run predictive queries against the data that is stored locally. This would allow a retailer to create a tool that can use a phone’s camera to figure out what part a customer is looking for.

To support these predictive queries, Couchbase Mobile is also getting support for predictive indexes. “Predictive indexes allow you to create an index on prediction, enabling correlation of real-time predictions with application data in milliseconds,” Carter said. In many ways, that’s also the unique value proposition for bringing machine learning into the database. “What you really need to do is you need to utilize the unique values of a database to be able to deliver the answer to those real-time questions within milliseconds,” explained Carter.

The other major new feature in this release is delta synchronization, which allows businesses to push far smaller updates to the databases on their employees’ mobile devices. That’s because the devices only have to receive the information that changed instead of a full updated database. Carter says this was a highly requested feature, but until now, the company always had to prioritize work on other components of Couchbase.

This is an especially useful feature for the company’s retail customers, a vertical where it has been quite successful. These users need to keep their catalogs up to date, and quite a few of them supply their employees with mobile devices to help shoppers. Rumor has it that Apple, too, is a Couchbase user.

The update also includes a few new features that will be more of interest to operators, including advanced stats reporting and enhanced logging support.

 

InCountry raises $7M to help multinationals store private data in countries of origin

The last few years have seen a rapid expansion of national regulations that, in the name of data protection, govern how and where organizations like healthcare and insurance companies, financial services companies and others store residents’ personal data that is used and collected through their services.

But keeping abreast of and following those rules has proven to be a minefield for companies. Now, a startup is coming out of stealth with a new product to help.

InCountry, which provides “data residency-as-a-service” to businesses and other organizations, is launching with $7 million in funding and its first product: Profile, which focuses on user profile and registration information in 50 countries on six continents. There will be more products launched covering payment, transaction and health data later in the year, co-founder and CEO Peter Yared said in an interview.

The funding — a seed round — is coming from Caffeinated Capital, Felicis Ventures, Ridge Ventures, Bloomberg Beta, Charles River Ventures and Global Founders Capital.

InCountry is founded and led by Yared, a repeat entrepreneur who most recently co-founded and eventually sold the “micro-app” startup Sapho, which was acquired by Citrix. Other companies he’s sold startups to include VMware, Sun and Oracle, and he was also once the CIO of CBS Interactive.

Yared told me in an interview that he has actually been self-funding, running and quietly accruing customers for InCountry for two years. He decided to raise this seed round — a number of investors in this list are repeat backers of his ventures — to start revving up the engines. (One of those ‘revs’ is an interesting talent hire. Today the company is also announcing Alex Castro as chief product officer. Castro was an early employee working on Amazon Web Services and Microsoft’s move into CRM, and also worked on autonomous vehicles at Uber.)

If you have never heard of the term “data residency-as-a-service”, that might be because it’s something that has been coined by Yared himself to describe the function of his startup.

InCountry is part tech provider, part consultancy.

On the tech side, it handles the technical work of storing personal data within a specific country’s borders for companies that might otherwise run other aspects of their services from other locations. That includes SDKs that link to a variety of data centers and cloud service providers that allow new countries to be added in under 10 minutes; two types of encryption on the data to make sure that it remains secure; and managed services for its biggest clients. (InCountry is not disclosing any client names right now, except for video-editing company Revl.)

On the consultancy side, it has an in-house team of researchers and partnerships with law firms to continually update its policies and ensure that customers remain compliant with any changes. InCountry says that to provide further assurance to customers, it provides insurance of up to three times the value of a customer’s spend.

InCountry’s aim is twofold: first, to solve the many pain points that a company or other organization has to go through when considering how to comply with data hosting regulations; and second, to make sure that by making it easy, companies actually do what’s required of them.

As Yared describes it, the process for becoming data compliant can be painful, but his startup is applying an economy of scale, since the process is essentially one that everyone will have to follow:

“They have to figure out what the requirements are, find the facility, audit the facility, which includes making sure it’s not owned by the state, make sure the network is properly segregated, develop the right software layer to manage the data, hire program managers, network operations people and more,” he said. And for those handling this themselves, cloud service providers will typically cover a smaller footprint of regions, 17 at most for the biggest. “We take care of all that, and add on more as we need to.”

The problem is that because the process is so painful, many companies often flout the requirements, which isn’t good for their customers, nor for the companies themselves, which run the risk of getting fined.

“It’s universally acknowledged that the way data is stored and handled by most companies is not meeting the average requirements of citizens’ rights,” Yared said. “That’s why we now have GDPR, and will see more GDPR-like regulations get rolled out.”

One thing that InCountry is not touching is data such as messages between users and other kinds of personal files — data that has been the subject of sometimes very controversial data regulations. It limits itself to the pieces of personal information about users — bank details, health information, social security numbers, and so on — that are part and parcel of what we provide to companies in the course of interacting with them online.

“In early outreach, we have had people ask for private data storage, but we would be ethically uncomfortable with that,” Yared said. “We want to be in the business of helping people who have regulated data, by storing that in a compliant manner that is more helpful, and more fruitful to users.”

The aim will be to add more services over time covering ever more countries, to keep in line with the growing trend among regulators to put more data residency laws in place.

“We’re witnessing more countries signing in data laws each week, and we’re only going to see those numbers increase,” said Sundeep Peechu, Managing Director at Felicis Ventures, in a statement. “We’re excited to be leading the round and reinvesting in Peter as he launches his seventh company. He recognized the problem early on and started working on a solution nearly two years ago that goes beyond regional data centers and patchwork in-house DIY solutions.”

Tuesday, April 30, 2019

Oculus announces a VR subscription service for enterprises

Oculus is getting serious about monetizing VR for enterprise.

The company has previously sold specific business versions of the headsets, but now it’s adding a pricey annual device-management subscription.

Oculus Go for business starts at $599 (64 GB) and the enterprise Oculus Quest starts at $999 (128 GB). These fees include the first year of enterprise device management and support, which goes for $180 per year per device.

Here’s what that fee gets you:

This includes a dedicated software suite offering device setup and management tools, enterprise-grade service and support, and a new user experience customized for business use cases.

The new Oculus for Business launches in the fall.

Facebook Messenger will get desktop apps, co-watching, emoji status

To win chat, Facebook Messenger must be as accessible as SMS, yet more entertaining than Snapchat. Today, Messenger pushes on both fronts with a series of announcements at Facebook’s F8 conference, including that it will launch Mac and PC desktop apps, a faster and smaller mobile app, simultaneous video co-watching and a revamped Friends tab, where friends can use an emoji to tell you what they’re up to or down for.

Facebook is also beefing up its tools for the 40 million active businesses and 300,000 developers on Messenger, up from 200,000 developers a year ago. Merchants will be able to let users book appointments at salons and masseuses, collect information with new lead generation chatbot templates and provide customer service to verified customers through authenticated m.me links. Facebook hopes this will boost the app beyond the 20 billion messages sent between people and businesses each month, which is up 10X from December 2017.

“We believe you can build practically any utility on top of messaging,” says Facebook’s head of Messenger Stan Chudnovsky. But he stresses that “All of the engineering behind it has been redone” to make it more reliable, and to comply with CEO Mark Zuckerberg’s directive to unite the backends of Messenger, WhatsApp and Instagram Direct. “Of course, if we didn’t have to do all that, we’d be able to invest more in utilities. But we feel that utilities will be less functional if we don’t do that work. They need to go hand-in-hand together. Utilities will be more powerful, more functional and more desired if built on top of a system that’s interoperable and end-to-end encrypted.”

Here’s a look at the major Messenger announcements and why they’re important:

Messenger Desktop – A stripped-down version of Messenger focused on chat, audio and video calls will debut later this year. Chudnovsky says it will remove the need to juggle and resize browser tabs by giving you an always-accessible version of Messenger that can replace some of the unofficial knock-offs. Especially as Messenger focuses more on businesses, giving them a dedicated desktop interface could convince them to invest more in lead generation and customer service through Messenger.

Facebook Messenger’s upcoming desktop app

Project Lightspeed – Messenger is reengineering its app to cut 70 MB off its download size so people with low-storage phones don’t have to delete as many photos to install it. In testing, the app can cold start in merely 1.3 seconds, which Chudnovsky says is just 25 percent of where Messenger and many other apps are today. While Facebook already offers Messenger Lite for the developing world, making the main app faster for everyone else could help Messenger swoop in and steal users from the status quo of SMS. The Lightspeed update will roll out later this year.

Video Co-Watching – TechCrunch reported in November that Messenger was building a Facebook Watch Party-style experience that would let users pick videos to watch at the same time as a friend, with reaction cams of their faces shown below the video. Now in testing before rolling out later this year, users can pick any Facebook video, invite one or multiple friends and laugh together. Unique capabilities like this could make Messenger more entertaining between utilitarian chat threads and appeal to a younger audience Facebook is at risk of losing.

Watch Videos Together on Messenger

Business Tools – After a rough start to its chatbot program a few years ago, where bots couldn’t figure out users’ open-ended responses, Chudnovsky says the platform is now picking up steam with 300,000 developers on board. One option that’s worked especially well is lead-generation templates, which teach bots to ask people standardized questions to collect contact info or business intent, so Messenger is adding more of those templates with completion reminders and seamless hand-off to a live agent.

To let users interact with appointment-based businesses through a platform they’re already familiar with, Messenger launched a beta program for barbers, dentists and more that will soon open to let any business handle appointment booking through the app. And with new authenticated m.me links, a business can take a logged-in user on their website and pass them to Messenger while still knowing their order history and other info. Getting more businesses hooked on Messenger customer service could be very lucrative down the line.

Appointment booking on Messenger

Close Friends and Emoji Status – Perhaps the most interesting update to Messenger, though, is its upcoming effort to help you make offline plans. Messenger is in the early stages of rebuilding its Friends tab into “Close Friends,” which will host big previews of friends’ Stories, photos shared in your chats, and let people overlay an emoji on their profile pic to show friends what they’re doing. We first reported this “Your Emoji” status update feature was being built a year ago, but it quietly cropped up in the video for Messenger Close Friends. This iteration lets you add an emoji like a home, barbell, low battery or beer mug, plus a short text description, to let friends know you’re back from work, at the gym, might not respond or are interested in getting a drink. These will show up atop the Close Friends tab as well as on location-sharing maps and more once this eventually rolls out.

Messenger’s upcoming Close Friends tab with Your Emoji status

Facebook Messenger is the app best poised to solve the loneliness problem. We often end up by ourselves because we’re not sure which of our friends are free to hang out, and we’re embarrassed to look desperate by constantly reaching out. But with emoji status, Messenger users could quietly signal their intentions without seeming needy. This “what are you doing offline” feature could be a whole social network of its own, as apps like Down To Lunch have tried. But with 1.3 billion users and built-in chat, Messenger has the ubiquity and utility to turn a hope into a hangout.

Docker updates focus on simplifying containerization for developers

Over the last five years, Docker has become synonymous with software containers, but that doesn’t mean every developer understands the technical details of building, managing and deploying them. At DockerCon this week, the company’s customer conference taking place in San Francisco, it announced new tools that have been designed to make it easier for developers, who might not be Docker experts, to work with containers.

As the technology has matured, the company has seen the market broaden, but in order to take advantage of that, it needs to provide a set of tools that make it easier to work with. “We’ve found that customers typically have a small cadre of Docker experts, but there are hundreds, if not thousands, of developers who also want to use Docker. And we reasoned, how can we help them get productive very, very quickly, without them having to become Docker experts,” Scott Johnston, chief product officer at Docker told TechCrunch.

To that end, it announced a Beta of Docker Enterprise 3.0, which includes several key components. For starters, Docker Desktop Enterprise lets IT set up a Docker environment with the kind of security and deployment templates that make sense for each customer. The developers can then pick the templates that make sense for their implementations, while conforming with compliance and governance rules in the company.

“These templates already have IT-approved container images, and have IT-approved configuration settings. And what that means is that IT can provide these templates through these visual tools that allow developers to move fast and choose the ones they want without having to go back for approval,” Johnston explained.

The idea is to let the developers concentrate on building applications, and the templates provide all the Docker tooling pre-built and ready to go, so they don’t have to worry about all of that.

Another piece of this is Docker Applications, which allows developers to build complex containerized applications as a single package and deploy them to any infrastructure they wish — on-prem or in the cloud. Five years ago when Docker really got started with containers, they were a simpler idea, often involving just a single one, but as developers broke down those larger applications into microservices, it created a new level of difficulty, especially for operations who had to deploy these increasingly large sets of application containers.

“Operations can now programmatically change the parameters for the containers, depending on the environments without having to go in and change the application. So you can imagine that ability lowers the friction of having to manage all these files in the first place,” he said.

The final piece of that is the orchestration layer and the popular way to handle that today is with Kubernetes. Docker has created its own flavor of Kubernetes, based on the open source tool. Johnston says, as with the other two pieces, the goal here is to take a powerful tool like Kubernetes and reduce the overall complexity associated with running it, while making it fully compatible with a Docker environment.

For that, Docker announced Docker Kubernetes Service (DKS), which has been designed with Docker users in mind, including support for Docker Compose, a scripting tool that has been popular with Docker users. While you are free to use any flavor of Kubernetes you wish, Docker is offering DKS as a Docker-friendly version for developers.

All of these components have one thing in common besides being part of Docker Enterprise 3.0. They are trying to reduce the complexity associated with deploying and managing containers and to abstract away the most difficult parts, so that developers can concentrate on developing without having to worry about connecting to the technical underpinnings of building and deploying containers. At the same time, Docker is trying to make it easier for the operations team to manage it all. That is the goal, at least. In the end, DevOps teams will be the final judges on how well Docker has done, once these tools become generally available later this year.

The Docker Enterprise 3.0 Beta will be available later this quarter.

UiPath nabs $568M at a $7B valuation to bring robotic process automation to the front office

Companies are on the hunt for ways to reduce the time and money it costs their employees to perform repetitive tasks, so today a startup that has built a business to capitalize on this is announcing a huge round of funding to double down on the opportunity.

UiPath — a robotic process automation startup originally founded in Romania that uses artificial intelligence and sophisticated scripts to build software to run these tasks — today confirmed that it has closed a Series D round of $568 million at a post-money valuation of $7 billion.

From what we understand, the startup is “close to profitability” and is going to keep growing as a private company. Then, an IPO within the next 12-24 months is the “medium term” plan.

“We are at the tipping point. Business leaders everywhere are augmenting their workforces with software robots, rapidly accelerating the digital transformation of their entire business and freeing employees to spend time on more impactful work,” said Daniel Dines, UiPath co-founder and CEO, in a statement. “UiPath is leading this workforce revolution, driven by our core determination to democratize RPA and deliver on our vision of a robot helping every person.”

This latest round of funding is being led by Coatue, with participation from Dragoneer, Wellington, Sands Capital, and funds and accounts advised by T. Rowe Price Associates, Accel, Alphabet’s CapitalG, Sequoia, IVP and Madrona Venture Group.

CFO Marie Myers said in an interview in London that the plan will be to use this funding to expand UiPath’s focus into more front-office and customer-facing areas, such as customer support and sales.

“We want to move automation into new levels,” she said. “We’re advancing quickly into AI and the cloud, with plans to launch a new AI product in the second half of the year that we believe will demystify it for our users.” The product, she added, will be focused around “drag and drop” architecture and will work both for attended and unattended bots — that is, those that work as assistants to humans, and those that work completely on their own. “Robotics has moved out of the back office and into the front office, and the time is right to move into intelligent automation.”

Today’s news confirms Kate’s report from last month noting that the round was in progress: in the end, the amount UiPath raised was higher than the target amount we’d heard ($400 million), with the valuation on the more “conservative” side (we’d said the valuation would be higher than $7 billion).

“Conservative” is a relative term here. The company has been on a funding tear in the last year, raising $418 million ($153 million at Series B and $265 million at Series C) in the space of 12 months, and seeing its valuation go from a modest $110 million in April 2017 to $7 billion today, just two years later.

Up to now, UiPath has focused on internal and back-office tasks in areas like accounting, human resources paperwork, and claims processing — a booming business that has seen UiPath expand its annual run rate to more than $200 million (versus $150 million six months ago) and its customer base to more than 400,000 people.

Customers today include American Fidelity, BankUnited, CWT (formerly known as Carlson Wagonlit Travel), Duracell, Google, Japan Exchange Group (JPX), LogMeIn, McDonalds, NHS Shared Business Services, Nippon Life Insurance Company, NTT Communications, Orange, Ricoh Company, Ltd., Rogers Communications, Shinsei Bank, Quest Diagnostics, Uber, the US Navy, Voya Financial, Virgin Media, and World Fuel Services.

Moving into more front-office tasks is an ambitious but not surprising leap for UiPath: looking at that customer list, it’s notable that many of these organizations have customer-facing operations, often with their own sets of repetitive processes that are ripe for improving by tapping into the many facets of AI — from computer vision to natural language processing and voice recognition, through to machine learning — alongside other technology.

It also begs the question of what UiPath might look to tackle next. Having customer-facing tools and services is one short leap from building consumer services, an area where the likes of Amazon, Google, Apple and Microsoft are all pushing hard with devices and personal assistant services. (That would indeed open up the competitive landscape quite a lot for UiPath, beyond the list of RPA companies like AutomationAnywhere, Kofax and Blue Prism who are its competitors today.)

Robotics has been given a somewhat bad rap in the world of work: critics worry that robots are “taking over all the jobs”, removing humans and their own need to be industrious from the equation; and in the worst-case scenarios, the work of a robot lacks the nuance and sophistication you get from the human touch.

UiPath and the bigger area of RPA are interesting in this regard: the aim (the stated aim, at least) isn’t to replace people, but to take tasks out of their hands to make it easier for them to focus on the non-repetitive work that “robots” — and in the case of UiPath, software scripts and robots — cannot do.

Indeed, that “future of work” angle is precisely what has attracted investors.

“UiPath is enabling the critical capabilities necessary to advance how companies perform and how employees better spend their time,” said Greg Dunham, vice president at T. Rowe Price Associates, Inc., in a statement. “The industry has achieved rapid growth in such a short time, with UiPath at the head of it, largely due to the fact that RPA is becoming recognized as the paradigm shift needed to drive digital transformation through virtually every single industry in the world.”

As we’ve written before, the company has been a big hit with investors because of the rapid traction it has seen with enterprise customers.

There is an interesting side story to the funding that speaks to that traction: Myers, the CFO, came to UiPath by way of one of those engagements: she had been a senior finance executive with HP tasked with figuring out how to make some of its accounting more efficient. She issued an RFP for the work, and the only company she thought really addressed the task with a truly tech-first solution, at a very competitive price, was an unlikely startup out of Romania, which turned out to be UiPath. She became one of the company’s first customers, and eventually Dines offered her a job to help build his company to the next level, which she leaped to take.

“UiPath is improving business performance, efficiency and operation in a way we’ve never seen before,” said Philippe Laffont, founder of Coatue Management, in a statement. “The Company’s rapid growth over the last two years is a testament to the fact that UiPath is transforming how companies manage their resources. RPA presents an enormous opportunity for companies around the world who are embracing artificial intelligence, driving a new era of productivity, efficiency and workplace satisfaction.” 

Monday, April 29, 2019

Canonical’s Mark Shuttleworth on dueling open-source foundations

At the Open Infrastructure Summit, which was previously known as the OpenStack Summit, Canonical founder Mark Shuttleworth used his keynote to talk about the state of open-source foundations — and what often feels like the increasing competition between them. “I know for a fact that nobody asked to replace dueling vendors with dueling foundations,” he said. “Nobody asked for that.”

He then put a finer point on this, saying, “what’s the difference between a vendor that only promotes the ideas that are in its own interest and a foundation that does the same thing. Or worse, a foundation that will only represent projects that it’s paid to represent.”

Somewhat uncharacteristically, Shuttleworth didn’t say which foundations he was talking about, but since there are really only two foundations that fit the bill here, it’s pretty clear that he was talking about the OpenStack Foundation and the Linux Foundation — and maybe more precisely the Cloud Native Computing Foundation, the home of the incredibly popular Kubernetes project.

It turns out, that’s only part of his misgivings about the current state of open-source foundations, though. I sat down with Shuttleworth after his keynote to discuss his comments, as well as Canonical’s announcements around open infrastructure.

One thing that’s worth noting at the outset is that the OpenStack Foundation is using this event to highlight the fact that it has now brought in more new open infrastructure projects outside of the core OpenStack software, with two of them graduating from their pilot phase. Shuttleworth, who has made big bets on OpenStack in the past and is seeing a lot of interest from customers, is not a fan. Canonical, it’s worth noting, is also a major sponsor of the OpenStack Foundation. He, however, believes the foundation should focus on the core OpenStack project.

“We’re busy deploying 27 OpenStack clouds — that’s more than double the run rate last year,” he said. “OpenStack is important. It’s very complicated and hard. And a lot of our focus has been on making it simpler and cleaner, despite the efforts of those around us in this community. But I believe in it. I think that if you need large-scale, multi-tenant virtualization infrastructure, it’s the best game in town. But it has problems. It needs focus. I’m super committed to that. And I worry about people losing their focus because something newer and shinier has shown up.”

To clarify that, I asked him if he essentially believes that the OpenStack Foundation is making a mistake by trying to be all things infrastructure. “Yes, absolutely,” he said. “At the end of the day, I think there are some projects that this community is famous for. They need focus, they need attention, right? It’s very hard to argue that they will get focus and attention when you’re launching a ton of other things that nobody’s ever heard of, right? Why are you launching those things? Who is behind those decisions? Is it a money question as well? Those are all fair questions to ask.”

He doesn’t believe all of the blame should fall on the Foundation leadership, though. “I think these guys are trying really hard. I think the common characterization that it was hapless isn’t helpful and isn’t accurate. We’re trying to figure stuff out.” Shuttleworth indeed doesn’t believe the leadership is hapless, something he stressed, but he clearly isn’t all that happy with the current path the OpenStack Foundation is on either.

The Foundation, of course, doesn’t agree. As OpenStack Foundation COO Mark Collier told me, the organization remains as committed to OpenStack as ever. “The Foundation, the board, the community, the staff — we’ve never been more committed to OpenStack,” he said. “If you look at the state of OpenStack, it’s one of the top-three most active open-source projects in the world right now […] There’s no wavering in our commitment to OpenStack.” He also noted that the other projects that are now part of the foundation are the kind of software that is helpful to OpenStack users. “These are efforts which are good for OpenStack,” he said. In addition, he stressed that the process of opening up the Foundation has been going on for more than two years, with the vast majority of the community (roughly 97 percent) voting in favor.

OpenStack board member Allison Randal echoed this. “Over the past few years, and a long series of strategic conversations, we realized that OpenStack doesn’t exist in a vacuum. OpenStack’s success depends on the success of a whole network of other open-source projects, including Linux distributions and dependencies like Python and hypervisors, but also on the success of other open infrastructure projects which our users are deploying together. The OpenStack community has learned a few things about successful open collaboration over the years, and we hope that sharing those lessons and offering a little support can help other open infrastructure projects succeed too. The rising tide of open source lifts all boats.”

As for open-source foundations in general, he also clearly doesn’t believe that it’s a good thing to have numerous foundations competing over projects. He argues that we’re still trying to figure out the role of open-source foundations and that we’re currently in a slightly awkward position because we’re still trying to determine how to best organize these foundations. “Open source in society is really interesting. And how we organize that in society is really interesting,” he said. “How we lead that, how we organize that is really interesting and there will be steps forward and steps backward. Foundations tweeting angrily at each other is not very presidential.”

He also challenged the notion that if you just put a project into a foundation, “everything gets better.” That’s too simplistic, he argues, because so much depends on the leadership of the foundation and how they define being open. “When you see foundations as nonprofit entities effectively arguing over who controls the more important toys, I don’t think that’s serving users.”

When I asked him whether he thinks some foundations are doing a better job than others, he essentially declined to comment. But he did say that he thinks the Linux Foundation is doing a good job with Linux, in large parts because it employs Linus Torvalds. “I think the technical leadership of a complex project that serves the needs of many organizations is best served that way and something that the OpenStack Foundation could learn from the Linux Foundation. I’d be much happier with my membership fees actually paying for thoughtful, independent leadership of the complexity of OpenStack rather than the sort of bizarre bun fights and stuffed ballots that we see today. For all the kumbaya, it flatly doesn’t work.” He believes that projects should have independent leaders who can make long-term plans. “Linus’ finger is a damn useful tool and it’s hard when everybody tries to get reelected. It’s easy to get outraged at Linus, but he’s doing a fucking good job, right?”

OpenStack, he believes, often lacks that kind of decisiveness because it tries to please everybody and attract more sponsors. “That’s perhaps the root cause,” he said, and it leads to too much “behind-the-scenes puppet mastering.”

In addition to our talk about foundations, Shuttleworth also noted that he believes the company is still on the path to an IPO. He’s obviously not committing to a time frame, but after a year of resetting in 2018, he argues that Canonical’s business is looking up. “We want to be north of $200 million in revenue and a decent growth rate and the right set of stories around the data center, around public cloud and IoT.” First, though, Canonical will do a growth equity round.

Mirantis makes configuring on-premises clouds easier

Mirantis, the company you may still remember as one of the biggest players in the early days of OpenStack, launched an interesting new hosted SaaS service today that makes it easier for enterprises to build and deploy their on-premises clouds. The new Mirantis Model Designer, which is available for free, lets operators easily customize their clouds — starting with OpenStack clouds next month and Kubernetes clusters in the coming months — and build the configurations to deploy them.

Doing so typically involves writing lots of YAML files by hand, something that’s error-prone and that few developers love. Yet that’s exactly what’s at the core of the infrastructure-as-code model. Model Designer, on the other hand, takes what Mirantis learned from its highly popular Fuel installer for OpenStack and goes a step further. The Model Designer, which Mirantis co-founder and CMO Boris Renski demoed for me ahead of today’s announcement, presents users with a GUI interface that walks them through the configuration steps. What’s smart here is that every step has a difficulty level (modeled after Doom’s levels, ranging from “I’m too young to die” to “ultraviolence” — though it’s missing Doom’s “nightmare” setting), which you can choose based on how much you want to customize the setting.

Model Designer is an opinionated tool, but it does give users quite a bit of freedom, too. Once the configuration step is done, Mirantis actually takes the settings and runs them through its Jenkins automation server to validate the configuration. As Renski pointed out, that step can’t take into account all of the idiosyncrasies of every platform, but it can ensure that the files are correct. After this, the tool provides the user with the configuration files, and actually deploying the OpenStack cloud is then simply a matter of taking the files, together with the core binaries that Mirantis makes available for download, to the on-premises cloud and executing a command-line script. Ideally, that’s all there is to the process. At this point, Mirantis’ DriveTrain tools take over and provision the cloud. For upgrades, users simply have to repeat the process.

Mirantis’ monetization strategy is to offer support, which ranges from basic support to fully managing a customer’s cloud. Model Designer is yet another way for the company to make more users aware of itself and then offer them support as they start using more of the company’s tools.

With Kata Containers and Zuul, OpenStack graduates its first infrastructure projects

Over the course of the last year and a half, the OpenStack Foundation made the switch from purely focusing on the core OpenStack project to opening itself up to other infrastructure-related projects as well. The first two of these projects, Kata Containers and the Zuul project gating system, have now exited their pilot phase and have become the first top-level Open Infrastructure Projects at the OpenStack Foundation.

The Foundation made the announcement at its first Open Infrastructure Summit (previously known as the OpenStack Summit) in Denver today after the organization’s board voted to graduate them ahead of this week’s conference. “It’s an awesome milestone for the projects themselves,” OpenStack Foundation executive director Jonathan Bryce told me. “It’s a validation of the fact that in the last 18 months, they have created sustainable and productive communities.”

It’s also a milestone for the OpenStack Foundation itself, though, which is still in the process of reinventing itself in many ways. It can now point at two successful projects that are under its stewardship, which will surely help it as it goes out and tries to attract others who are looking to bring their open-source projects under the aegis of a foundation.

In addition to graduating these first two projects, Airship — a collection of open-source tools for provisioning private clouds that is currently a pilot project — hit version 1.0 today. “Airship originated within AT&T,” Bryce said. “They built it from their need to bring a bunch of open-source tools together to deliver on their use case. And that’s why, from the beginning, it’s been really well aligned with what we would love to see more of in the open source world and why we’ve been super excited to be able to support their efforts there.”

With Airship, developers use YAML documents to describe what the final environment should look like and the result of that is a production-ready Kubernetes cluster that was deployed by OpenStack’s Helm tool – though without any other dependencies on OpenStack.

AT&T’s assistant vice president, Network Cloud Software Engineering, Ryan van Wyk, told me that a lot of enterprises want to use certain open-source components, but that the interplay between them is often difficult and that while it’s relatively easy to manage the lifecycle of a single tool, it’s hard to do so when you bring in multiple open-source tools, all with their own lifecycles. “What we found over the last five years working in this space is that you can go and get all the different open-source solutions that you need,” he said. “But then the operator has to invest a lot of engineering time and build extensions and wrappers and perhaps some orchestration to manage the lifecycle of the various pieces of software required to deliver the infrastructure.”

It’s worth noting that nothing about Airship is specific to the telco world, though it’s no secret that OpenStack is quite popular in the telco world and unsurprisingly, the Foundation is using this week’s event to highlight the OpenStack project’s role in the upcoming 5G rollouts of various carriers.

In addition, the event will also showcase OpenStack’s bare metal capabilities, an area the project has also focused on in recent releases. Indeed, the Foundation today announced that its bare metal tools now manage over a million cores of compute. To codify these efforts, the Foundation also today launched the OpenStack Ironic Bare Metal program, which brings together some of the project’s biggest users like Verizon Media (home of TechCrunch, though we don’t run on the Verizon cloud), 99Cloud, China Mobile, China Telecom, China Unicom, Mirantis, OVH, Red Hat, SUSE, Vexxhost and ZTE.

Tray.io hauls in $37 million Series B to keep expanding enterprise automation tool

Tray.io, the startup that wants to put automated workflows within reach of line of business users, announced a $37 million Series B investment today.

Spark Capital led the round with help from Meritech Capital, along with existing investors GGV Capital, True Ventures and Mosaic Ventures. Under the terms of the deal, Spark’s Alex Clayton will be joining Tray’s board of directors. The company has now raised over $59 million.

Rich Waldron, CEO at Tray, says the company looked around at the automation space and saw tools designed for engineers and IT pros and wanted to build something for less technical business users.

“We set about building a visual platform that would enable folks to essentially become programmers without needing to have an engineering background, and enabling them to be able to build out automation for their day-to-day role.”

He added, “As a result, we now have a service that can be used in departments across an organization, including IT, whereby they can build extremely powerful and flexible workflows that gather data from all these disparate sources, and carry out automation as per their desire.”

Alex Clayton from lead investor Spark Capital sees Tray filling a big need in the automation space, in a spot between high-end tools like MuleSoft, which Salesforce bought last year for $6.5 billion, and simpler tools like Zapier. The problem, he says, is that there’s a huge shortage of time and resources to manage and really integrate all the different SaaS applications companies are using today so that they work together.

“So you really need something like Tray because the problem with the current Status Quo [particularly] in marketing sales operations, is that they don’t have the time or the resources to staff engineering for building integrations on disparate or bespoke applications or workflows,” he said.

Tray is a seven-year-old company, but it started slowly, taking the first four years to build out the product. It got a $14 million Series A 12 months ago and has been taking off ever since. The company’s annual recurring revenue (ARR) is growing over 450 percent year over year, with customers growing by 400 percent, according to data from the company. It already has over 200 customers including Lyft, Intercom, IBM and SAP.

The company’s R&D operation is in London, with headquarters in San Francisco. It currently has 85 employees, but expects to have 100 by the end of the quarter as it begins to put the investment to work.

Friday, April 26, 2019

Slack files to go public, reports $138.9M in losses on revenue of $400.6M

Slack has filed to go public via a direct listing. Similar to what Spotify did last year, this means that the company won’t have a traditional IPO, and will instead allow existing shareholders to sell their stock to investors.

The company’s S-1 filing says it plans to make $100 million worth of shares available, but that’s probably a placeholder figure.

The S-1 offers data about the company’s financial performance, reporting a net loss of $138.9 million and revenue of $400.6 million in the fiscal year ending January 31, 2019. That’s compared to a loss of $140.1 million on revenue of $220.5 million for the year before.

The company attributes these losses to its decision “to invest in growing our business to capitalize on our market opportunity,” and notes that they’re shrinking as a percentage of revenue.
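As a rough back-of-the-envelope check on that claim, here is a quick calculation using only the figures from the S-1 cited above; it suggests the net loss fell from roughly 64 percent of revenue to roughly 35 percent year over year.

```python
# Rough check: Slack's net loss as a percentage of revenue, using the S-1 figures above ($ millions).
fy2018_loss_pct = 140.1 / 220.5 * 100  # fiscal year ending Jan 31, 2018: ~63.5% of revenue
fy2019_loss_pct = 138.9 / 400.6 * 100  # fiscal year ending Jan 31, 2019: ~34.7% of revenue
print(f"FY2018: {fy2018_loss_pct:.1f}%, FY2019: {fy2019_loss_pct:.1f}%")
```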

Slack also says that in the three months ending on January 31, it had more than 10 million daily active users across more than 600,000 organizations — 88,000 on the paid plan and 550,000 on the free plan.

In the filing, the company says the Slack team created the product to meet its own collaboration needs.

“Since our public launch in 2014, it has become apparent that organizations worldwide have similar needs, and are now finding the solution with Slack,” it says. “Our growth is largely due to word-of-mouth recommendations. Slack usage inside organizations of all kinds is typically initially driven bottoms-up, by end users. Despite this, we (and the rest of the world) still have a hard time explaining Slack. It’s been called an operating system for teams, a hub for collaboration, a connective tissue across the organization, and much else. Fundamentally, it is a new layer of the business technology stack in a category that is still being defined.”

The company suggests that the total market opportunity for Slack and other makers of workplace collaboration software is $28 billion, and it plans to grow through strategies like expanding its footprint within organizations already using Slack, investing in more enterprise features, expanding internationally and growing the developer ecosystem.

The risk factors mentioned in the filing sound pretty boilerplate and/or similar to those of other Internet companies going public, like the aforementioned net losses and the fact that its current growth rate might not be sustainable, as well as new compliance risks under Europe’s GDPR.

Slack has previously raised a total of $1.2 billion in funding, according to Crunchbase, from investors including Accel, Andreessen Horowitz, Social Capital, SoftBank, Google Ventures and Kleiner Perkins.

Thursday, April 25, 2019

AWS expands cloud infrastructure offerings with new AMD EPYC-powered T3a instances

Amazon is always looking for ways to increase the options it offers developers in AWS, and to that end, today it announced a bunch of new AMD EPYC-powered T3a instances. These were originally announced at the end of last year at re:Invent, AWS’s annual customer conference.

Today’s announcement is about making these instances generally available. They have been designed for a specific type of burstable workload, where you might not always need a sustained amount of compute power.

“These instances deliver burstable, cost-effective performance and are a great fit for workloads that do not need high sustained compute power but experience temporary spikes in usage. You get a generous and assured baseline amount of processing power and the ability to transparently scale up to full core performance when you need more processing power, for as long as necessary,” AWS’s Jeff Barr wrote in a blog post.

These instances are built on the AWS Nitro System, Amazon’s custom networking interface hardware that the company has been working on for the last several years. The primary components of this system include the Nitro Card I/O Acceleration, Nitro Security Chip and the Nitro Hypervisor.

Today’s release comes on top of the announcement last year that the company would be releasing EC2 instances powered by Arm-based AWS Graviton Processors, another option for developers who are looking for a solution for scale-out workloads.

It also comes on the heels of last month’s announcement that it was releasing EC2 M5 and R5 instances, which use lower-cost AMD chips. These are also built on top of the Nitro System.

The EPYC-powered instances are available starting today in seven sizes in your choice of spot instances, reserved instances or on-demand, as needed. They are available in US East in Northern Virginia, US West in Oregon, Europe in Ireland, US East in Ohio and Asia-Pacific in Singapore.
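For developers who want to try one out, launching a T3a instance works the same way as any other EC2 instance type. Below is a minimal sketch using Python and boto3; the AMI ID is a placeholder you would swap for an image available in your own account, and t3a.micro is just one of the seven sizes mentioned above.

```python
import boto3

# Launch a single T3a instance in us-east-1 (Northern Virginia), one of the launch regions.
ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID -- substitute your own
    InstanceType="t3a.micro",         # one of the seven T3a sizes
    MinCount=1,
    MaxCount=1,
)

print(response["Instances"][0]["InstanceId"])
```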

SalesLoft nabs $70M at $500M valuation for its sales engagement platform

Artificial intelligence and other tech for automating some of the more repetitive aspects of human jobs continues to be a growing category of software, and today a company that builds tools to address this need for salespeople has raised a tidy sum to grow its business.

SalesLoft, an Atlanta-based startup that has built a platform for salespeople to help them engage with their clients — providing communications tools, supporting data, and finally analytics to ‘coach’ salespeople to improve their processes — has raised $70 million in a Series D round of funding led by Insight Venture Partners with participation from HarbourVest.

Kyle Porter, SalesLoft’s co-founder and CEO, would not disclose the valuation in an interview, but he did confirm that it is double the valuation from the previous round, a $50 million Series C that included LinkedIn among the investors (more on that below). That round was just over a year ago and would have valued the firm at $250 million. That would put SalesLoft’s current valuation at about $500 million.

While there are a number of CRM and sales tools out in the market today, Porter believes that many of the big ones might better be described as “dumb databases or repositories” of information rather than natively aimed at helping source and utilise data more effectively.

“They are not focused on improving how to connect buyers to sales teams in sincere ways,” he said. “And anytime a company like Salesforce has moved into tangential areas like these, they haven’t built from the ground up, but through acquisitions. It’s just hard to move giant aircraft carriers.”

SalesLoft is not the only one that has spotted this opportunity, of course. There are dozens of others competing on some or all aspects of the same services that SalesLoft provides, including the likes of Clari, Chorus.ai, Gong, Conversica, Afiniti and not least Outreach, which is seen as a direct competitor on sales engagement and itself raised $114 million on a $1.1 billion valuation earlier this month.

One of the notable distinctions for SalesLoft is that one of its strategic investors is LinkedIn, which participated in its Series C. Before Microsoft acquired it, LinkedIn was seen as a potential competitor to Salesforce, and many thought that Microsoft’s acquisition was made squarely to help it compete against the CRM giant.

These days, Porter said that his company and LinkedIn have a tight integration by way of LinkedIn’s Sales Navigator product, which SalesLoft users can access and utilise directly within SalesLoft, and they have a hotline to be apprised of and help shape LinkedIn’s API developments. SalesLoft is also increasingly building links into Microsoft Dynamics, the company’s CRM business.

“We are seeing the highest usage in our LinkedIn integration among all the other integrations we provide,” Porter told me. “Our customers find that it’s the third most important behind email and phone calls.” Email, for all its cons, remains the first.

The fact that this is a crowded area of the market does speak to the opportunity and need for something effective, however, and the fact that SalesLoft has grown revenues 100 percent in each of the last two years, according to Porter, makes it a particularly attractive horse to bet on.

“So many software companies build a product to meet a market need and then focus purely on selling. SalesLoft is different. This team is continually innovating, pushing the boundaries, and changing the face of sales,” said Jeff Horing, co-founder and MD of Insight Venture Partners, in a statement. “This is one reason the company’s customers are so devoted to them. We are privileged to partner with this innovative company on their mission to improve selling experiences all over the world.”

Going forward, Porter said that in addition to expanding its footprint globally — recent openings include a new office in London — the company is going to go big on more AI and “intelligence” tools. The company already offers something it calls its “coaching network” which is not human but AI-based and analyses calls as they happen to provide pointers and feedback after the fact (similar to others like Gong and Chorus, I should note).

“We want to give people a better way to deliver an authentic but ultimately human way to sell,” he said.

Wednesday, April 24, 2019

Slack to extend collaboration to folks who don’t want to give up email

As Slack gathered with its growing customer base this week at the Frontiers Conference in San Francisco, it announced several enhancements to the product including extending collaboration to folks who want to stick with email instead of hanging with their co-workers in Slack.

Some habits are tough to break, and using email as your file sharing and collaboration tool is one of them. Email is great for certain types of communications, but it was never really designed to be a full-fledged collaboration tool. While a tool like Slack might not ever fully replace email, it is going after it hard.

But Andy Pflaum, director of project management at Slack, says that rather than fight those folks, Slack decided to make it easier to include them with a new email and calendar bridge that enables team members who might not have made the leap to Slack to continue to be kept in the loop.

Instead of opening Slack and seeing the thread, the message will come to these stragglers in their trusty old email inbox, just the way they like it. Earlier this month the company announced tighter integration between Slack and Outlook calendar and email (building on a similar integration with Gmail and Google Calendar), where emails and calendar entries can be shared inside Slack. Pflaum says the company is trying to take that email and calendar bridge idea one step further.

Non-Slack users instead get an email containing the Slack thread. Multiple responses to a thread the person has been engaged in are bundled into a single email, so the recipient isn’t getting a separate email for every response, according to Pflaum.

The person can respond by clicking a Slack button in the email, which opens Slack, or they can simply reply to the email and the response will go to Slack automatically. If they choose the former, it might be a sneaky way to get them used to using Slack instead of email, but Pflaum says that is not necessarily the intent.

Slack is simply responding to a request from customers, because apparently there is a percentage of people who would prefer to continue working inside email. The ability to open Slack to reply will be available soon. The ability to reply to Slack with the Reply button will be available later this year.

Microsoft beats expectations with $30.6B in revenue as Azure’s growth continues

Microsoft reported its quarterly earnings for Q3 2019 today. Overall, Wall Street expected earnings of about $1 per share and revenue of $29.84 billion. The company handily beat this with revenue of $30.6 billion (up 14 percent from the year-ago quarter) and earnings per share of $1.14.

With Microsoft focusing heavily on its cloud business, spanning both Azure and its other cloud-based services, it’s no surprise that this is also what Wall Street really cares about. The expectation here, according to some analysts, was that the company’s overall commercial cloud business would hit a run rate of about $38.5 billion. Those analysts were off by only a tiny bit. Microsoft today reported that its commercial cloud run rate hit $38.4 billion.

And indeed, Microsoft Azure had a pretty good quarter, with revenue growing 73 percent. That’s a bit lower than last quarter’s results, but only by a fraction, and shows that there is plenty of growth left for Microsoft’s cloud infrastructure business.

Azure’s growth slowed somewhat in recent quarters. In some ways, that’s to be expected, though. Microsoft’s cloud is now a massive business and posting 100 percent growth when you have a run rate of almost $40 billion becomes a bit harder.

“Demand for our cloud offerings drove commercial cloud revenue to $9.6 billion this quarter, up 41% year-over-year,” said Amy Hood, executive vice president and chief financial officer of Microsoft. “We continue to drive growth in revenue and operating income with consistent execution from our sales teams and partners and targeted strategic investments.”
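As a quick sanity check (assuming the run rate is simply annualized quarterly revenue, which is how the figure is typically derived), the $9.6 billion quarterly commercial cloud number in Hood’s statement lines up with the $38.4 billion run rate mentioned above:

```python
# Back-of-the-envelope check: commercial cloud run rate as annualized quarterly revenue.
quarterly_commercial_cloud = 9.6             # $ billions, from Microsoft's Q3 2019 report
annualized_run_rate = quarterly_commercial_cloud * 4
print(annualized_run_rate)                   # 38.4 ($ billions), matching the reported run rate
```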

The company’s ‘intelligent cloud’ segment, which includes Azure and other cloud- and server-based products, reported revenue of $9.7 billion, up 22 percent from the year-ago quarter.

Microsoft’s productivity applications also fared well, with total revenue up by 14 percent to $10.2 billion. Here, revenue from LinkedIn increased by 27 percent, and the company highlighted that LinkedIn sessions increased 24 percent.

Other highlights of the report include an increase in Surface revenue of 21 percent, which was expected given the number of new devices the company released in recent quarters.

“Leading organizations of every size in every industry trust the Microsoft cloud. We are accelerating our innovation across the cloud and edge so our customers can build the digital capability increasingly required to compete and grow,” said Satya Nadella, CEO of Microsoft.

For more financial details, you can find the full report here.