Wednesday, February 27, 2019

Open-source communities fight over telco market

When you think of MWC Barcelona, chances are you’re thinking about the newest smartphones and other mobile gadgets, but that’s only half the story. Actually, it’s probably far less than half the story because the majority of the business that’s done at MWC is enterprise telco business. Not too long ago, that business was all about selling expensive proprietary hardware. Today, it’s about moving all of that into software — and a lot of that software is open source.

It’s maybe no surprise, then, that this year the Linux Foundation (LF) has its own booth at MWC. It’s not massive, but it’s big enough to have its own meeting space. The booth is shared by three LF projects: the Cloud Native Computing Foundation (CNCF), Hyperledger and Linux Foundation Networking, the home of many foundational projects like ONAP and the Open Platform for NFV (OPNFV) that power many a modern network. And with the advent of 5G, there’s a lot of new market share to grab here.

To discuss the CNCF’s role at the event, I sat down with Dan Kohn, the executive director of the CNCF.

At MWC, the CNCF launched its testbed for comparing the performance of virtual network functions on OpenStack and what the CNCF calls cloud-native network functions, using Kubernetes (with the help of bare-metal host Packet). The project’s results — at least so far — show that the cloud-native container-based stack can handle far more network functions per second than the competing OpenStack code.

“The message that we are sending is that, with Kubernetes as a universal platform that runs on top of bare metal or any cloud, most of your virtual network functions can be ported over to cloud-native network functions,” Kohn said. “All of your operating support system, all of your business support system software can also run on Kubernetes on the same cluster.”

OpenStack, in case you are not familiar with it, is another massive open-source project that helps enterprises manage their own data center software infrastructure. One of OpenStack’s biggest markets has long been the telco industry. There has always been a bit of friction between the two foundations, especially now that the OpenStack Foundation has opened up its organizations to projects that aren’t directly related to the core OpenStack projects.

I asked Kohn if he is explicitly positioning the CNCF/Kubernetes stack as an OpenStack competitor. “Yes, our view is that people should be running Kubernetes on bare metal and that there’s no need for a middle layer,” he said — and that’s something the CNCF has never stated quite as explicitly before but that was always playing in the background. He also acknowledged that some of this friction stems from the fact that the CNCF and the OpenStack foundation now compete for projects.

OpenStack Foundation, unsurprisingly, doesn’t agree. “Pitting Kubernetes against OpenStack is extremely counterproductive and ignores the fact that OpenStack is already powering 5G networks, in many cases in combination with Kubernetes,” OpenStack COO Mark Collier told me. “It also reflects a lack of understanding about what OpenStack actually does, by suggesting that it’s simply a virtual machine orchestrator. That description is several years out of date. Moving away from VMs, which makes sense for many workloads, does not mean moving away from OpenStack, which manages bare metal, networking and authentication in these environments through the Ironic, Neutron and Keystone services.”

Similarly, ex-OpenStack Foundation board member (and Mirantis co-founder) Boris Renski told me that “just because containers can replace VMs, this doesn’t mean that Kubernetes replaces OpenStack. Kubernetes’ fundamental design assumes that something else is there that abstracts away low-level infrastructure, and is meant to be an application-aware container scheduler. OpenStack, on the other hand, is specifically designed to abstract away low-level infrastructure constructs like bare metal, storage, etc.”

This overall theme continued with Kohn and the CNCF taking a swipe at Kata Containers, the first project the OpenStack Foundation took on after it opened itself up to other projects. Kata Containers promises to offer a combination of the flexibility of containers with the additional security of traditional virtual machines.

“We’ve got this FUD out there around Kata, saying telcos will need to use Kata, a) because of the noisy neighbor problem and b) because of the security,” said Kohn. “First of all, that’s FUD and second, micro-VMs are a really interesting space.”

He believes it’s an interesting space for situations where you are running third-party code (think AWS Lambda running Firecracker) — but telcos don’t typically run that kind of code. He also argues that Kubernetes handles noisy neighbors just fine because you can constrain how many resources each container gets.
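Kohn’s noisy-neighbor point can be sketched abstractly: each workload declares how much CPU and memory it may use, and nothing is placed on a node unless its request fits the remaining capacity. The toy Python below illustrates that principle only; it is not Kubernetes code (in Kubernetes the equivalent knobs are per-container resource requests and limits in the pod spec).

```python
# Toy illustration of resource-constrained scheduling, the mechanism Kohn
# points to for taming noisy neighbors. This is a sketch of the principle,
# not actual Kubernetes code.

class Node:
    def __init__(self, cpu_millicores, memory_mb):
        self.cpu = cpu_millicores
        self.memory = memory_mb
        self.containers = []

    def free_cpu(self):
        return self.cpu - sum(c["cpu"] for c in self.containers)

    def free_memory(self):
        return self.memory - sum(c["memory"] for c in self.containers)

    def schedule(self, name, cpu, memory):
        """Admit a container only if its declared request fits the node's
        remaining capacity; a noisy neighbor can never consume more than
        it asked for up front."""
        if cpu <= self.free_cpu() and memory <= self.free_memory():
            self.containers.append({"name": name, "cpu": cpu, "memory": memory})
            return True
        return False
```

Because admission is decided against declared requests rather than observed usage, one chatty workload cannot starve its neighbors of capacity they were promised.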

It seems both organizations have a fair argument here. On the one hand, Kubernetes may be able to handle some use cases better and provide higher throughput than OpenStack. On the other hand, OpenStack handles plenty of other use cases, too, and this is a very specific use case. What’s clear, though, is that there’s quite a bit of friction here, which is a shame.

Box fourth quarter revenue up 20 percent, but stock down 22 percent after hours

By most common sense measurements, Box had a pretty good earnings report today, reporting revenue up 20 percent year over year to $163.7 million. That doesn’t sound bad, yet Wall Street was not happy: the stock got whacked, down more than 22 percent after hours as we went to press. It appears investors were unhappy with the company’s guidance.

 

Part of the problem, says Alan Pelz-Sharpe, principal analyst at Deep Analysis, a firm that watches the content management space, is that the company failed to hit its projections, but he points out that the future still looks bright for the company.

“Box did miss its estimates and got dinged pretty hard today; however, the bigger picture is still one of solid growth. As Box moves more and more into the enterprise space, the deal cycle takes longer to close and I think that has played a large part in this shift. The onus is on Box to close those bigger deals over the next couple of quarters, but if it does then that will be a real warning shot to the legacy enterprise vendors as Box starts taking a chunk out of their addressable market,” Pelz-Sharpe told TechCrunch.

This fits with what company CEO Aaron Levie was saying. “Wall Street did have higher expectations with our revenue guidance for next year, and I think that’s totally fair, but we’re very focused as a company right now on driving reacceleration in our growth rate and the way that we’re going to do that is by really bringing the full suite of Box’s capabilities to more of our customers,” Levie told TechCrunch.

On the positive side, Levie pointed out that the company achieved non-GAAP profitability for the first time in its 14-year history, with projections for its first full year of non-GAAP profitability in FY2020, the fiscal year it just kicked off.

The company reported a loss of $0.14 per share for the most recent quarter, but even that was smaller than the $0.24 per share loss from the same quarter of the previous fiscal year. It would seem that revenue is heading generally in the correct direction, but Wall Street did not see it that way, flogging the cloud content management company.
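For what it’s worth, a quick back-of-the-envelope check of the figures in this report (all taken from the numbers above):

```python
# Back-of-the-envelope check of Box's reported figures: revenue of
# $163.7M, up 20% year over year, and a quarterly loss that narrowed
# from $0.24 to $0.14 per share.

revenue_now = 163.7            # $M, most recent quarter
growth = 0.20                  # 20% year over year

implied_prior = revenue_now / (1 + growth)   # ~$136.4M a year earlier

loss_prior, loss_now = 0.24, 0.14
improvement = (loss_prior - loss_now) / loss_prior  # ~42% smaller loss

print(round(implied_prior, 1), round(improvement, 2))
```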

Chart: Box

Wall Street tends to project future performance; what a company has done this quarter matters less to investors, who are apparently unhappy with the projections. But Levie pointed out that the opportunity here is huge. “We’re going after a 40-plus billion dollar market, so if you think about the entirety of spend on content management, collaboration, storage infrastructure — as all of that moves to the cloud, we see that as the full market opportunity that we’re going out and serving,” Levie explained.

Pelz-Sharpe also thinks Wall Street could be missing the longer-range picture here. “The move to true enterprise started a couple of years back at Box, but it has taken time to bring on the right partners and infrastructure to deal with these bigger and more complex migrations and implementations,” Pelz-Sharpe explained. Should that happen, Box could begin capturing much larger chunks of that $40 billion addressable cloud content management market, and the numbers could ultimately be much more to investors’ liking.

Compass acquires Contactually, a CRM provider to the real estate industry

Compass, the real estate tech platform that is now worth $4.4 billion, has made an acquisition to give its agents a boost when it comes to looking for good leads on properties to sell. It is acquiring Contactually, an AI-based CRM platform designed specifically for the industry, which includes features like linking up a list of homes sold by a brokerage with records of sales in the area and other property indexes to determine which properties might be good targets to tap for future listings.

Contactually had already been powering Compass’s own CRM service that it launched last year so there is already a degree of integration between the two.

Terms of the deal are not being disclosed. Crunchbase notes that Contactually had raised around $18 million from VCs that included Rally Ventures, Grotech and Point Nine Capital, and it was last valued at around $30 million in 2016, according to PitchBook. From what I understand, the startup had strong penetration in the market so it’s likely that the price was a bit higher than this previous valuation.

The plan is to bring over all of Contactually’s team of 32 employees, led by Zvi Band, the co-founder and CEO, to integrate the company’s product into Compass’s platform completely. They will report to CTO Joseph Sirosh and head of product Eytan Seidman. It will also mean a bigger operation for Compass in Washington, DC, which is where Contactually had been based.

“The Contactually team has worked for the past 8 years to build a best-in-class CRM that aggregates relationships and automatically documents every touchpoint,” said Band in a statement. “We are proud that our investment into machine learning has resulted in new features like Best Time to Email and other data-driven, follow-up recommendations which help agents be more effective in their day-to-day. After working extensively with the Compass team, it was apparent that joining forces would accelerate our missions of building the future of the industry.”

For the time being, customers who are already using the product — and a large number of real estate brokers and agents in the US already were, at prices that ranged from $59/month to $399/month depending on the level of service — will continue their contracts as before.

I suspect that the longer-term plan, however, will be a little different: you have to wonder whether agents who compete against Compass would be happy to use a service where their data is processed by a rival, and for Compass itself, keeping this tech to itself would likely give it an edge over the others.

Compass, I understand from sources, is on track to make $2 billion in revenues in 2019 (its 2018 targets were $1 billion on $34 billion in property sales, and it had previously said it would be doubling that this year). Now in 100 cities, it’s come a long way from its founding in 2012 by Ori Allon and Robert Reffkin.

The bigger picture beyond real estate is that, as with many other analog industries, those who are tackling them with tech-first approaches are sweeping up not only existing business, but in many cases helping the whole market to expand. Contactually, as a tool that can help source potential properties for sale that owners hadn’t previously considered putting on the market, could end up serving that very end for Compass.

The focus on using tech to storm into a legacy industry is also coming at an interesting time. As we’ve pointed out before, the housing market is predicted to cool this year, and that will put the squeeze on agents who do not have strong networks of clients and the tools to maximise whatever opportunities there are out there to list and sell properties.

The likes of Opendoor — which appears to be raising money and inching closer to Compass in terms of valuation — is also trying out a different model, which essentially involves becoming a middle part in the chain, buying properties from sellers and selling them on to buyers, to speed up the process and cut out some of the expenses for the end users. That approach underscores the fact that, while the infusion of technology is an inevitable trend, there will be multiple ways of applying that.

This appears to be Compass’s first full acquisition of a tech startup, although it has made partial acquihires in the past.

Threads emerges from stealth with $10.5M from Sequoia for a new take on enabling work conversations

The rapid rise of Slack has ushered in a new wave of apps, all aiming to solve one challenge: creating a user-friendly platform where coworkers can have productive conversations. Many of these are based around real-time notifications and “instant” messaging, but today a new startup called Threads is coming out of stealth to address the other side of the coin: a platform for asynchronous, less time-sensitive communication that creates coherent narratives out of those conversations.

Armed with $10.5 million in funding from Sequoia, the company is launching a beta of its service today.

Rousseau Kazi, the startup’s CEO, co-founded Threads with Jon McCord, Mark Rich and Suman Venkataswamy after cutting his social teeth working for six years at Facebook (with a resulting number of patents to his name around the mechanics of social networking). He says the mission of Threads is to make online conversations more inclusive.

“After a certain number of people get involved in an online discussion, conversations just break and messaging becomes chaotic,” he said. (McCord and Rich are also Facebook engineering alums, while Venkataswamy is a BrightRoll alum who worked with McCord on another startup before this.)

And if you have ever used Twitter, or even been in a popular channel in Slack, you will understand what he is talking about. When too many people begin to talk, the conversation gets very noisy and it can mean losing the “thread” of what is being discussed, and seeing conversation lurch from one topic to another, often losing track of important information in the process.

And there is an argument to be made about whether a platform that was built for real-time information is capable of handling a different kind of cadence. Twitter, as it happens, is trying to figure that out right now. Slack, meanwhile, has itself introduced threaded comments to try to address this too — although the practical application of its own threading feature is not actually very user friendly.

Threads’ answer is to position itself squarely as a platform for “asynchronous” conversation.

To start, those who want to start threads first register as organizations on the platform. Then, those who are working on a project or in a specific team create a “space” for themselves within that org. You can then start threads within those spaces. And when a problem has been solved or the conversation has come to a conclusion, the last comment gets marked as the conclusion.

The idea is that topics and conversations can stretch out over hours, days or even longer around specific topics. Threads doesn’t want to be the place you go for red alerts or urgent requests, but where you go when you have thoughts about a work-related subject and how to tackle it.
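As a rough mental model of the hierarchy described above — organizations contain spaces, spaces contain threads, and a thread’s last comment can be marked as its conclusion — here is a hypothetical Python sketch. Threads has published no API, so every name below is invented for illustration:

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical sketch of the org -> space -> thread hierarchy described
# in the story. None of these names come from Threads itself; the product
# has no public API.

@dataclass
class Comment:
    author: str
    body: str

@dataclass
class Thread:
    title: str
    comments: list = field(default_factory=list)
    conclusion: Optional[Comment] = None

    def post(self, author, body):
        comment = Comment(author, body)
        self.comments.append(comment)
        return comment

    def conclude(self):
        # Per the story, the last comment becomes the thread's conclusion.
        if self.comments:
            self.conclusion = self.comments[-1]

@dataclass
class Space:
    name: str
    threads: list = field(default_factory=list)

@dataclass
class Org:
    name: str
    spaces: list = field(default_factory=list)
```

The interesting design choice is that a conclusion is just a promoted comment, which is what lets a finished thread read as a coherent narrative rather than a chat log.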


These resulting threads, when completed or when in progress, can in turn be looked at as straight conversations, or as annotated narratives.

For now, it’s up to users themselves to annotate what might be important to highlight for readers, although when I asked him, Kazi told me he would like to incorporate more features over time that use natural language processing to summarize and pull out what might be worth following up on if you only want to skim a longer conversation. Ditto the ability to search threads: right now it’s all based around keywords, but you can imagine a time when more sophisticated and nuanced searches surface conversations relevant to what you might be looking for.

Indeed, in this initial launch, the focus is all about what you want to say on Threads itself — not lots of bells and whistles, and not trying to compete against the likes of Slack, or Workplace (Facebook’s effort in this space), or Yammer or Teams from Microsoft, or any of the others in the messaging mix.

There are no integrations of other programs to bring data into Threads from other places, but there is a Slack integration in the other direction: you can create an alert there so that you know when someone has updated a Thread.

“We don’t view ourselves as a competitor to Slack,” Kazi said. “Slack is great for transactional conversation but for asynchronous chats, we thought there was a need for this in the market. We wanted something to address that.”

It may not be a stated competitor, but Threads actually has something in common with Slack: the latter launched with the purpose of enabling a certain kind of conversation between co-workers in a way that was easier to consume and engage with than email.

You could argue that Threads has the same intention: email chains, especially those with multiple parties, can also be hard to follow and are in any case often very messy to look at: something that the conversations in Threads also attempt to clear up.

But email is not the only kind of conversation medium that Threads thinks it can replace.

“With in-person meetings there is a constant tension between keeping the room small for efficiency and including more people for transparency,” said Sequoia partner Mike Vernal in a statement. “When we first started chatting with the team about what is now Threads, we saw an opportunity to get rid of this false dichotomy by making decision-making both more efficient and more inclusive. We’re thrilled to be partnering with Threads to make work more inclusive.”

The startup was actually formed in 2017, and for months now it has been running a closed, private version of the service to test it out with a small number of users. So far, the company sizes have ranged between 5 and 60 employees, Kazi tells me.

“By using Threads as our primary communications platform, we’ve seen incredible progress streamlining our operations,” said one of the testers, Perfect Keto & Equip Foods Founder and CEO, Anthony Gustin. “Internal meetings have reduced by at least 80 percent, we’ve seen an increase in participation in discussion and speed of decision making, and noticed an adherence and reinforcement of company culture that we thought was impossible before. Our employees are feeling more ownership and autonomy, with less work and time that needs to be spent — something we didn’t even know was possible before Threads.”

Kazi said that the intention is ultimately to target companies of any size, although it will be worth watching what features Threads will have to introduce to handle the noise, and continue to provide coherent discussions, when and if it does start to tackle that end of the market.

 

Tuesday, February 26, 2019

New VMware Kubernetes product comes courtesy of Heptio acquisition

VMware announced a new Kubernetes product today called VMware Essential PKS, which has been created from its acquisition of Heptio for $550 million at the end of last year.

VMware already had two flavors of Kubernetes, a fully managed cloud product and an enterprise version with all of the components such as registry and network pre-selected by VMware. What this new version does is provide a completely open version of Kubernetes where the customer can choose all of the components, giving a flexible option for those who want it, according to Scott Buchanan, senior director of product marketing for cloud native apps at VMware.

Buchanan said that the new product comes directly from the approach that Heptio had taken to selling Kubernetes prior to the acquisition. “We’re introducing a new offering called VMware Essential PKS, and that offering is a packaging of the approach that Heptio took to market and that gained a lot of traction, and that approach is a natural complement to the other Kubernetes products in the VMware portfolio,” he explained.

Buchanan acknowledged that a large part of the market is going to go for the fully managed or fully configured approaches, but there is a subset of buyers that will want more choice in their Kubernetes implementation.

“Larger enterprises with more complex infrastructure want to have a very customized approach to how they build out their architecture. They don’t want to be integrated. They just want a foundation on which to build because the organizations are larger and more complex and they’re also more likely to have an internal DevOps or SRE team to operate the platform on a day-to-day basis,” he explained.

While these organizations want flexibility, they also require a more consultative approach to the sale. Heptio had a 40-person field service engineering team that came over in the acquisition, and VMware is in the process of scaling that team. These folks consult with the customer and help them select the different components that make up a Kubernetes installation to fit the needs of each organization.

Buchanan, who also came over in the acquisition, says that being part of VMware (which is part of the Dell family of companies) means they have several layers of sales with VMware, Pivotal and Dell all selling the product.

Heptio is the Kubernetes startup founded by Craig McLuckie and Joe Beda, the two men who helped develop the technology while they were at Google. Heptio was founded in 2016 and raised $33.5 million prior to the acquisition, according to Crunchbase data.

Sunday, February 24, 2019

Microsoft announces an Azure-powered Kinect camera for enterprise

Today’s Mobile World Congress kickoff event was all about the next HoloLens, but Microsoft still had some surprises up its sleeve. One of the more interesting additions is the Azure Kinect, a new enterprise camera system that leverages the company’s perennially popular 3D imaging technology to create a 3D camera for enterprises.

The device is actually a kind of companion hardware piece for HoloLens in the enterprise, giving businesses a way to capture depth-sensing data and leverage Azure solutions to collect and process it.

“Azure Kinect is an intelligent edge device that doesn’t just see and hear but understands the people, the environment, the objects and their actions,” Azure VP Julia White said at the kick off of today’s event. “The level of accuracy you can achieve is unprecedented.”

What started as a gesture-based gaming peripheral for the Xbox 360 has since grown to be an incredibly useful tool across a variety of different fields, so it tracks that the company would seek to develop a product for business. And unlike some of the more far-off HoloLens applications, the Azure Kinect is the sort of product that could be instantly useful, right off the shelf.

A number of enterprise partners have already begun testing the technology, including Datamesh, Ocuvera and Ava, representing an interesting cross-section of companies. The system goes up for pre-order today, priced at $399. 

Say hello to Microsoft’s new $3,500 HoloLens with twice the field of view

Microsoft unveiled the latest version of its HoloLens ‘mixed reality’ headset at MWC Barcelona today. The new HoloLens 2 features a significantly larger field of view, higher resolution and a device that’s more comfortable to wear. Indeed, Microsoft says the device is three times as comfortable to wear (though it’s unclear how Microsoft measured this).

Later this year, HoloLens 2 will be available in the United States, Japan, China, Germany, Canada, United Kingdom, Ireland, France, Australia and New Zealand for $3,500.

One of the knocks against the original HoloLens was its limited field of view. When whatever you wanted to look at was small and straight ahead of you, the effect was striking. But when you moved your head a little bit or looked at a larger object, it suddenly felt like you were looking through a stamp-sized screen. HoloLens 2 features a field of view that’s twice as large as the original.

“Kinect was the first intelligent device to enter our homes,” HoloLens chief Alex Kipman said in today’s keynote, looking back at the device’s history. “It drove us to create Microsoft HoloLens. […] Over the last few years, individual developers, large enterprises, brand-new startups have been dreaming up beautiful things, helpful things.”

The HoloLens was always just as much about the software as the hardware, though. For HoloLens, Microsoft developed a special version of Windows, together with a new way of interacting with the AR objects through gestures like air tap and bloom. In this new version, the interaction is far more natural and lets you tap objects. The device also tracks your gaze more accurately to allow the software to adjust to where you are looking.

“HoloLens 2 adapts to you,” Kipman stressed. “HoloLens 2 evolves the interaction model by significantly advancing how people engage with holograms.”

In its demos, the company clearly emphasized how much faster and more fluid the interaction with HoloLens applications becomes when you can use sliders, for example, by simply grabbing the slider and moving it, or by tapping on a button with one finger, two, or your full hand. Microsoft even built a virtual piano that you can play with ten fingers to show off how well the HoloLens can track movement. The company calls this ‘instinctual interaction.’

Microsoft first unveiled the HoloLens concept at a surprise event on its Redmond campus back in 2015. After a limited, invite-only release that started days after the end of MWC 2016, the device went on sale to everybody in August 2016. Four years is a long time between hardware releases, but the company clearly wanted to seed the market and give developers a chance to build the first set of HoloLens applications on a stable platform.

To support developers, Microsoft is also launching a number of Azure services for HoloLens today. These include spatial anchors and remote rendering to help developers stream high-polygon content to HoloLens.

It’s worth noting that Microsoft never positioned the device as consumer hardware. It may have shown off the occasional game, but its focus was always on business applications, with a bit of education thrown in, too. That trend continued today. Microsoft showed off the ability to have multiple people collaborate around a single hologram, for example. That’s not new, of course, but goes to show how Microsoft is positioning this technology.

For these enterprises, Microsoft will also offer the ability to customize the device.

“When you change the way you see the world, you change the world you see,” Microsoft CEO Satya Nadella said, repeating a line from the company’s first HoloLens announcement four years ago. He noted that he believes that connecting the physical world with the virtual world will transform the way we will work.

Thursday, February 21, 2019

JFrog acquires Shippable, adding continuous integration and delivery to its DevOps platform

JFrog, the popular DevOps startup now valued at over $1 billion after raising $165 million last October, is making a move to expand the tools and services it provides to developers on its software operations platform: it has acquired Shippable, a cloud-based continuous integration and delivery (CI/CD) platform that developers use to ship code and deliver app and microservices updates, and plans to integrate it into its Enterprise+ platform.

Terms of the deal — JFrog’s fifth acquisition — are not being disclosed, said Shlomi Ben Haim, JFrog’s co-founder and CEO, in an interview. From what I understand, though, it was in the ballpark of Shippable’s most recent valuation, which was $42.6 million back in 2014 when it raised $8 million, according to PitchBook data.  (And that was the last time it had raised money.)

Shippable employees are joining JFrog and plan to release the first integrations with Enterprise+ this coming summer, and a full integration by Q3 of this year.

Shippable, founded in 2013, made its name early on as a provider of a containerized continuous integration and delivery platform based on Docker containers, but as Kubernetes has overtaken Docker in containerized deployments, the startup had also shifted its focus beyond Docker containers.

The acquisition speaks to the consolidation that is afoot in the world of DevOps, where developers and organizations are looking for more end-to-end toolkits, not just to help develop, update and run their apps and microservices but also to provide security and more — or at least, makers of DevOps tools hope they are, as they look to grow their margins and business.

As more organizations run ever more of their operations as apps and microservices, DevOps tools have risen in prominence, offered both by standalone businesses and by companies whose infrastructure those tools touch and use. That means a company like JFrog has an expanding pool of competitors that includes not just the likes of Docker, Sonatype and GitLab, but also AWS, Google Cloud Platform, Azure and “the Red Hats of the world,” in the words of Ben Haim.

For Shippable customers, the integration will give them access to security, binary management and other enterprise development tools.

“We’re thrilled to join the JFrog family and further the vision around Liquid Software,” said Avi Cavale, founder and CEO of Shippable, in a statement. “Shippable users and customers have long enjoyed our next-generation technology, but now will have access to leading security, binary management and other high-powered enterprise tools in the end-to-end JFrog Platform. This is truly exciting, as the combined forces of JFrog and Shippable can make full DevOps automation from code to production a reality.”

On the part of JFrog, the company will be using Shippable to provide a native CI/CD tool directly within JFrog.

“Before, most of our users would use Jenkins, CircleCI and other CI/CD automation tools,” Ben Haim said. “But what you are starting to see in the wider market is a gradual consolidation of CI tools into code repositories.”

He emphasized that this will not mean any changes for developers who are already happy using Jenkins or other integrations: just that it will now be offering a native solution that will be offered alongside these (presumably both with easier functionality and with competitive pricing).

JFrog today has 5,000 paying customers, up from 4,500 in October, including “most of the Fortune 500,” with marquee customers including the likes of Apple and Adobe, but also banks, healthcare organizations and insurance companies — “conservative businesses,” said Ben Haim, that are also now realizing the importance of using DevOps.

Redis Labs changes its open-source license — again

Redis Labs, fresh off its latest funding round, today announced a change to how it licenses its Redis Modules. This may not sound like a big deal, but in the world of open-source projects, licensing is currently a big issue. That’s because organizations like Redis, MongoDB, Confluent and others have recently introduced new licenses that make it harder for their competitors to take their products and sell them as rebranded services without contributing back to the community (and most of these companies point directly at AWS as the main offender here).

“Some cloud providers have repeatedly taken advantage of successful open-source projects, without significant contributions to their communities,” the Redis Labs team writes today. “They repackage software that was not developed by them into competitive, proprietary service offerings and use their business leverage to reap substantial revenues from these open source projects.”

The point of these new licenses is to put a stop to this.

This is not the first time Redis Labs has changed how it licenses its Redis Modules (and I’m stressing the “Redis Modules” part here because this is only about modules from Redis Labs and does not have any bearing on how the Redis database project itself is licensed). Back in 2018, Redis Labs changed its license from AGPL to Apache 2 modified with Commons Clause. The “Commons Clause” is the part that places commercial restrictions on top of the license.

That created quite a stir, as Redis Labs co-founder and CEO Ofer Bengal told me a few days ago when we spoke about the company’s funding.

“When we came out with this new license, there were many different views,” he acknowledged. “Some people condemned that. But after the initial noise calmed down — and especially after some other companies came out with a similar concept — the community now understands that the original concept of open source has to be fixed because it isn’t suitable anymore to the modern era where cloud companies use their monopoly power to adopt any successful open source project without contributing anything to it.”

The way the code was licensed, though, created a bit of confusion, the company now says, because some users thought they were only bound by the terms of the Apache 2 license. Some terms in the Commons Clause, too, weren’t quite clear (including the meaning of “substantial,” for example).

So today, Redis Labs is introducing the Redis Source Available License. This license, too, only applies to certain Redis Modules created by Redis Labs. Users can still get the code, modify it and integrate it into their applications — but that application can’t be a database product, caching engine, stream processing engine, search engine, indexing engine or ML/DL/AI serving engine.
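The excluded categories are spelled out concretely enough that the check is almost mechanical. As a purely illustrative sketch (not legal advice, and not an official Redis Labs tool), here is the list expressed as code:

```python
# Purely illustrative: the product categories excluded by the
# Redis Source Available License, as listed in the announcement.
RESTRICTED_CATEGORIES = {
    "database product",
    "caching engine",
    "stream processing engine",
    "search engine",
    "indexing engine",
    "ml/dl/ai serving engine",
}

def use_is_restricted(application_type: str) -> bool:
    """Return True if the planned application falls in an excluded category."""
    return application_type.strip().lower() in RESTRICTED_CATEGORIES

print(use_is_restricted("caching engine"))   # True: excluded by the license
print(use_is_restricted("web application"))  # False: ordinary apps are unaffected
```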

By definition, an open-source license can't restrict what users do with the software. This new license does, so it's technically not an open-source license. In practice, the company argues, it's quite similar to other permissive open-source licenses, though, and shouldn't really affect most developers who use the company's modules (and these modules are RedisSearch, RedisGraph, RedisJSON, RedisML and RedisBloom).

This is surely not the last we’ve heard of this. Sooner or later, more projects will follow the same path. By then, we’ll likely see more standard licenses that address this issue so other companies won’t have to change multiple times. Ideally, though, we won’t need it because everybody will play nice — but since we’re not living in a utopia, that’s not likely to happen.

Microsoft bringing Dynamics 365 mixed reality solutions to smartphones

Last year Microsoft introduced several mixed reality business solutions under the Dynamics 365 enterprise product umbrella. Today, the company announced it would be moving these to smartphones in the spring, starting with previews.

The company announced Remote Assist on HoloLens last year. This tool allows a technician working onsite to show a remote expert what they are seeing. The expert can then walk the less experienced employee through the repair. This is great for those companies that have equipped their workforce with HoloLens for hands-free instruction, but not every company can afford the new equipment.

Starting in the spring, Microsoft is going to help with that by introducing Remote Assist for Android phones. Just about everyone has a phone with them, and those with Android devices will be able to take advantage of Remote Assist capabilities without investing in HoloLens. The company is also updating Remote Assist to include mobile annotations, group calling and deeper integration with Dynamics 365 for Field Service, along with improved accessibility features on the HoloLens app.

iPhone users shouldn't feel left out, though, because the company announced a preview of Dynamics 365 Product Visualize for iPhone. This tool lets users work with a customer to visualize what a customized product will look like. Think of a furniture seller working with customers in their home to customize the color, fabrics and design right in the room where the furniture will go, or a car dealer offering different options such as colors and wheel styles. Once a customer agrees to a configuration, the data gets saved to Dynamics 365 and shared in Microsoft Teams for greater collaboration across the group of employees working with that customer on a project.

Both of these features are part of the Dynamics 365 spring release and are going to be available in preview starting in April. They are part of a broader release that includes a variety of new artificial intelligence features such as customer service bots and a unified view of customer data across the Dynamics 365 family of products.

Wednesday, February 20, 2019

Mixmax brings LinkedIn integration and better task automation to its Gmail tool

Mixmax today introduced version 2.0 of its Gmail-based tool and plugin for Chrome that promises to make your daily communications chores a bit easier to handle.

With version 2.0, Mixmax gets an updated editor that better integrates with the current Gmail interface and that gets out of the way of popular extensions like Grammarly. That’s table stakes, of course, but I’ve tested it for a bit and the new version does indeed do a better job of integrating itself into the current Gmail interface and feels a bit faster, too.

What's more interesting is that the service now features a deeper integration with LinkedIn. There's an integration with LinkedIn Sales Navigator, LinkedIn's tool for generating and contacting sales leads, as well as with LinkedIn's messaging tools for sending InMail and connection requests — and you can see info about a recipient's LinkedIn profile, including the LinkedIn Icebreakers section, right from the Mixmax interface.

Together with its existing Salesforce integration, this should make the service even more interesting to salespeople. The Salesforce integration, too, is getting a new feature: it can now automatically create a contact in the CRM tool when a prospect's email address — maybe from LinkedIn — isn't in your database yet.
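The auto-create behavior is easy to picture. As a hypothetical sketch (the function name and record shape are my own, not Mixmax's actual API):

```python
def sync_contact(crm, email, name=None):
    """Create a CRM contact for the prospect's email if one doesn't exist yet.
    The CRM is modeled as a plain dict keyed by email address."""
    if email not in crm:
        crm[email] = {"name": name or "Unknown", "source": "email"}
    return crm[email]

crm = {"ada@example.com": {"name": "Ada", "source": "import"}}
sync_contact(crm, "new.lead@example.com", name="New Lead")
print(len(crm))  # 2: the previously unseen address became a new contact
```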

Also new in Mixmax 2.0 is something the company calls "Beast Mode." Not my favorite name, I have to admit, but it's an interesting task automation tool that focuses on helping customer-facing users prioritize and complete batches of tasks quickly, extending the service's current automation tools.

Finally, Mixmax now also features a Salesforce-linked dialer widget for making calls right from the Chrome extension.

“We’ve always been focused on helping business people communicate better, and everything we’re rolling out for Mixmax 2.0 only underscores that focus,” said Mixmax CEO and Co-Founder Olof Mathé. “Many of our users live in Gmail and our integration with LinkedIn’s Sales Navigator ensures users can conveniently make richer connections and seamlessly expand their networks as part of their email workflow.”

Whether you get these new features depends on how much you pay, though. Everybody, including free users, gets access to the refreshed interface. Beast Mode and the dialer are available with the Enterprise plan, the company's highest-level plan, which doesn't have a published price. The dialer is also available for an extra $20/user/month on the $49/user/month Growth plan. LinkedIn Sales Navigator support is available with the Growth and Enterprise plans.

Sadly, that means that if you are on the cheaper Starter and Small Business plans ($9/user/month and $24/user/month respectively), you won’t see any of these new features anytime soon.

Google’s managed hybrid cloud platform is now in beta

Last July, at its Cloud Next conference, Google announced the Cloud Services Platform, its first real foray into bringing its own cloud services into the enterprise data center as a managed service. Today, the Cloud Services Platform (CSP) is launching into beta.

It's important to note that the CSP isn't — at least for the time being — Google's way of bringing all of its cloud-based developer services to the on-premises data center. In other words, this is a very different project from something like Microsoft's Azure Stack. Instead, the focus is on the Google Kubernetes Engine, which allows enterprises to run their applications both in their own data centers and on virtually any cloud platform that supports containers.

As Google Cloud engineering director Chen Goldberg told me, the idea here is to help enterprises innovate and modernize. "Clearly, everybody is very excited about cloud computing, on-demand compute and managed services, but customers have recognized that the move is not that easy," she said, noting that the vast majority of enterprises are adopting a hybrid approach. And while containers are obviously still a very new technology, she feels good about this bet because most enterprises are already adopting containers and Kubernetes — and they are doing so at exactly the same time as they are adopting cloud and especially hybrid clouds.

It’s important to note that CSP is a managed platform. Google handles all of the heavy lifting like upgrades and security patches. And for enterprises that need an easy way to install some of the most popular applications, the platform also supports Kubernetes applications from the GCP Marketplace.

As for the tech itself, Goldberg stressed that this isn't just about Kubernetes. The service also uses Istio, for example, the increasingly popular service mesh that makes it easier for enterprises to secure and control the flow of traffic and API calls between their applications.

With today’s release, Google is also launching its new CSP Config Management tool to help users create multi-cluster policies and set up and enforce access controls, resource quotas and more. CSP also integrates with Google’s Stackdriver Monitoring service and continuous delivery platforms.

“On-prem is not easy,” Goldberg said, and given that this is the first time the company is really supporting software in a data center that is not its own, that’s probably an understatement. But Google also decided that it didn’t want to force users into a specific set of hardware specifications like Azure Stack does, for example. Instead, CSP sits on top of VMware’s vSphere server virtualization platform, which most enterprises already use in their data centers anyway. That surely simplifies things, given that this is a very well-understood platform.

Arm expands its push into the cloud and edge with the Neoverse N1 and E1

For the longest time, Arm was basically synonymous with chip designs for smartphones and very low-end devices. But more recently, the company launched solutions for laptops, cars, high-powered IoT devices and even servers. Today, ahead of MWC 2019, the company is officially launching two new products for cloud and edge applications, the Neoverse N1 and E1. Arm unveiled the Neoverse brand a few months ago, but it’s only now that it is taking concrete form with the launch of these new products.

"We've always been anticipating that this market is going to shift as we move more towards this world of lots of really smart devices out at the endpoint — moving beyond even just what smartphones are capable of doing," Drew Henry, Arm's SVP and GM for Infrastructure, told me in an interview ahead of today's announcement. "And when you start anticipating that, you realize that those devices out at those endpoints are going to start creating an awful lot of data and need an awful lot of compute to support that."

To address these two problems, Arm decided to launch two products: one that focuses on compute speed and one that is all about throughput, especially in the context of 5G.

ARM NEOVERSE N1

The Neoverse N1 platform is meant for infrastructure-class solutions that focus on raw compute speed. The chips should perform significantly better than previous Arm CPU generations meant for the data center, and the company says that it saw speedups of 2.5x for Nginx and Memcached, for example. Chip manufacturers can optimize the 7nm platform for their needs, with core counts that can reach up to 128 cores (or as few as 4).

“This technology platform is designed for a lot of compute power that you could either put in the data center or stick out at the edge,” said Henry. “It’s very configurable for our customers so they can design how big or small they want those devices to be.”

The E1 is also a 7nm platform, but with a stronger focus on edge computing use cases where you also need some compute power to maybe filter out data as it is generated, but where the focus is on moving that data quickly and efficiently. “The E1 is very highly efficient in terms of its ability to be able to move data through it while doing the right amount of compute as you move that data through,” explained Henry, who also stressed that the company made the decision to launch these two different platforms based on customer feedback.

There's no point in launching these platforms without software support, though. A few years ago, that would have been a challenge because few commercial vendors supported their data center products on the Arm architecture. Today, many of the biggest open-source and proprietary projects and distributions run on Arm chips, including Red Hat Enterprise Linux, Ubuntu, SUSE, VMware, MySQL, OpenStack, Docker, Microsoft .NET, DOK and OPNFV. "We have lots of support across the space," said Henry. "And then as you go down to that tier of languages and libraries and compilers, that's a very large investment area for us at Arm. One of our largest investments in engineering is in software and working with the software communities."

And as Henry noted, AWS also recently launched its Arm-based servers — and that surely gave the industry a lot more confidence in the platform, given that the biggest cloud supplier is now backing it, too.

Xage brings role-based single sign-on to industrial devices

Traditional industries like oil and gas and manufacturing often use equipment that was created in a time when remote access wasn’t a gleam in an engineer’s eye, and hackers had no way of connecting to them. Today, these devices require remote access and some don’t have even rudimentary authentication. Xage, the startup that wants to make industrial infrastructure more secure, announced a new solution today to bring single sign-on and role-based control to even the oldest industrial devices.

Company CEO Duncan Greatwood says that some companies have adopted firewall technology, but if a hacker breaches the firewall, there often isn’t even a password to defend these kinds of devices. He adds that hackers have been increasingly targeting industrial infrastructure.

Xage has come up with a way to help these companies with its latest product called Xage Enforcement Point (XEP). This tool gives IT a way to control these devices with a single password, a kind of industrial password manager. Greatwood says that some companies have hundreds of passwords for various industrial tools. Sometimes, whether because of distance across a factory floor, or remoteness of location, workers would rather adjust these machines remotely when possible.

While operations wants to simplify this for workers with remote access, IT worries about security, and that tension can hold companies back, force them into big firewall investments or, in some cases, lead them to implement these kinds of solutions without adequate protection.

XEP helps bring a level of protection to these pieces of equipment. “XEP is a relatively small piece of software that can run on a tiny credit-card size computer, and you simply insert it in front of the piece of equipment you want to protect,” Greatwood explained.

The rest of the Xage platform adds additional security. The company introduced fingerprinting last year, which gives unique identifiers to these pieces of equipment. If a hacker tries to spoof a piece of equipment, and the device lacks a known fingerprint, they can’t get on the system.

Xage also makes use of the blockchain and a rules engine to secure industrial systems. The customer can define rules and use the blockchain as an enforcement mechanism, where each node in the chain carries the rules, and a certain number of nodes, as defined by the customer, must agree that the person, machine or application trying to gain access is a legitimate actor.
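Stripped of the blockchain machinery, that access decision reduces to a quorum vote. A minimal sketch, assuming each node independently evaluates the shared rules and returns a yes/no verdict:

```python
def quorum_approves(votes, threshold):
    """Grant access only if at least `threshold` nodes agree that the
    person, machine or application requesting access is legitimate."""
    return sum(1 for v in votes if v) >= threshold

# Five nodes each evaluate the shared rules against an access request;
# here the customer requires that at least 3 of the 5 agree.
node_votes = [True, True, False, True, False]
print(quorum_approves(node_votes, threshold=3))  # True: 3 of 5 nodes agree
```

The threshold is the customer-tunable knob: raising it makes a spoofed or compromised minority of nodes insufficient to authorize access.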

The platform taken as a whole provides several levels of protection in an effort to discourage hackers who are trying to breach these systems. Greatwood says that while companies don’t usually get rid of tools they already have like firewalls, they may scale back their investment after buying the Xage solution.

Xage was founded at the end of 2017. It has raised $16 million to this point and has 30 employees. Greatwood didn’t want to discuss a specific number of customers, but did say they were making headway in oil and gas, renewable energy, utilities and manufacturing.

Why Daimler moved its big data platform to the cloud

Like virtually every big enterprise company, a few years ago, the German auto giant Daimler decided to invest in its own on-premises data centers. And while those aren’t going away anytime soon, the company today announced that it has successfully moved its on-premises big data platform to Microsoft’s Azure cloud. This new platform, which the company calls eXtollo, is Daimler’s first major service to run outside of its own data centers, though it’ll probably not be the last.

As Guido Vetter, Daimler's head of its corporate center of excellence for advanced analytics and big data, told me, the company started getting interested in big data about five years ago. "We invested in technology — the classical way, on-premise — and got a couple of people on it. And we were investigating what we could do with data because data is transforming our whole business as well," he said.

By 2016, the size of the organization had grown to the point where a more formal structure was needed to enable the company to handle its data at a global scale. At the time, the buzzword was ‘data lakes’ and the company started building its own in order to build out its analytics capacities.

Electric Line-Up, Daimler AG

“Sooner or later, we hit the limits as it’s not our core business to run these big environments,” Vetter said. “Flexibility and scalability are what you need for AI and advanced analytics and our whole operations are not set up for that. Our backend operations are set up for keeping a plant running and keeping everything safe and secure.” But in this new world of enterprise IT, companies need to be able to be flexible and experiment — and, if necessary, throw out failed experiments quickly.

So about a year and a half ago, Vetter’s team started the eXtollo project to bring all the company’s activities around advanced analytics, big data and artificial intelligence into the Azure Cloud and just over two weeks ago, the team shut down its last on-premises servers after slowly turning on its solutions in Microsoft’s data centers in Europe, the U.S. and Asia. All in all, the actual transition between the on-premises data centers and the Azure cloud took about nine months. That may not seem fast, but for an enterprise project like this, that’s about as fast as it gets (and for a while, it fed all new data into both its on-premises data lake and Azure).

If you work for a startup, then all of this probably doesn’t seem like a big deal, but for a more traditional enterprise like Daimler, even just giving up control over the physical hardware where your data resides was a major culture change and something that took quite a bit of convincing. In the end, the solution came down to encryption.

"We needed the means to secure the data in the Microsoft data center with our own means that ensure that only we have access to the raw data and work with the data," explained Vetter. In the end, the company decided to use the Azure Key Vault to manage and rotate its encryption keys. Indeed, Vetter noted that knowing that the company had full control over its own data was what allowed this project to move forward.

Vetter tells me that the company obviously looked at Microsoft’s competitors as well, but he noted that his team didn’t find a compelling offer from other vendors in terms of functionality and the security features that it needed.

Today, Daimler's big data unit uses tools like HDInsight and Azure Databricks, which cover more than 90 percent of the company's current use cases. In the future, Vetter also wants to make it easier for less experienced users to use self-service tools to launch AI and analytics services.

While cost is often a factor that counts against the cloud, since renting server capacity isn't cheap, Vetter argues that this move will actually save the company money and that storage costs, especially, are going to be lower in the cloud than in its on-premises data center (and chances are that Daimler, given its size and prestige as a customer, isn't exactly paying the same rack rate that others are paying for the Azure services).

As with so many big data AI projects, predictions are the focus of much of what Daimler is doing. That may mean looking at a car’s data and error code and helping the technician diagnose an issue or doing predictive maintenance on a commercial vehicle. Interestingly, the company isn’t currently bringing any of its own IoT data from its plants to the cloud. That’s all managed in the company’s on-premises data centers because it wants to avoid the risk of having to shut down a plant because its tools lost the connection to a data center, for example.
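A toy version of such a predictive-maintenance rule (the error codes and temperature threshold below are invented for illustration, not Daimler's actual logic):

```python
def needs_maintenance(error_codes, temperature_readings, temp_limit=110.0):
    """Toy rule: flag a vehicle if it reported a critical error code, or if
    its last three temperature readings average above the limit."""
    critical = {"P0300", "P0420"}  # hypothetical critical diagnostic codes
    if critical & set(error_codes):
        return True
    recent = temperature_readings[-3:]
    return sum(recent) / len(recent) > temp_limit

# A non-critical code, but a rising temperature trend, triggers the flag.
print(needs_maintenance(["P0171"], [95.0, 102.0, 118.0, 121.0, 124.0]))  # True
```

Real systems replace the fixed threshold with a model trained on fleet history, but the shape of the decision (telemetry in, maintenance flag out) is the same.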

Tuesday, February 19, 2019

Google acquires cloud migration platform Alooma

Google today announced its intention to acquire Alooma, a company that allows enterprises to combine all of their data sources into services like Google's BigQuery, Amazon's Redshift, Snowflake and Azure. The promise of Alooma is that it handles and manages the data pipelines for its users. In addition to this data integration service, though, Alooma also helps with migrating to the cloud, cleaning up this data and then using it for AI and machine learning use cases.
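At its core, this kind of data-integration pipeline is the familiar extract-transform-load loop. A minimal sketch, with in-memory lists standing in for the real sources and warehouse:

```python
def extract(sources):
    """Pull raw records from each source (in-memory stand-ins here for
    databases, logs, SaaS APIs and so on)."""
    for source in sources:
        yield from source

def transform(records):
    """Normalize records into the warehouse schema and drop malformed rows."""
    for r in records:
        if r.get("user_id") is not None:
            yield {"user_id": r["user_id"], "event": r.get("event", "unknown")}

def load(records, warehouse):
    """Append cleaned records to the destination (BigQuery, Redshift, etc.
    in the real service; a plain list here)."""
    for r in records:
        warehouse.append(r)

warehouse = []
app_events = [{"user_id": 1, "event": "signup"}, {"user_id": None}]
crm_rows = [{"user_id": 2, "event": "purchase"}]
load(transform(extract([app_events, crm_rows])), warehouse)
print(warehouse)  # two cleaned rows; the malformed one was dropped
```

The value of a managed service is everything around this loop: scheduling, retries, schema drift and monitoring, which is what Google is buying.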

“Here at Google Cloud, we’re committed to helping enterprise customers easily and securely migrate their data to our platform,” Google VP of engineering Amit Ganesh and Google Cloud Platform director of product management Dominic Preuss write today. “The addition of Alooma, subject to closing conditions, is a natural fit that allows us to offer customers a streamlined, automated migration experience to Google Cloud, and give them access to our full range of database services, from managed open source database offerings to solutions like Cloud Spanner and Cloud Bigtable.”

Before the acquisition, Alooma had raised about $15 million, including an $11.2 million Series A round led by Lightspeed Venture Partners and Sequoia Capital in early 2016. The two companies did not disclose the price of the acquisition, but chances are we are talking about a modest price given how much Alooma had previously raised.

Neither Google nor Alooma said much about what will happen to the existing products and customers — and whether it will continue to support migrations to Google's competitors. We've reached out to Google and will update this post once we hear more.

Alooma's co-founders do stress, though, that "the journey is not over." "Alooma has always aimed to provide the simplest and most efficient path toward standardizing enterprise data from every source and transforming it into actionable intelligence," they write. "Joining Google Cloud will bring us one step closer to delivering a full self-service database migration experience bolstered by the power of their cloud technology, including analytics, security, AI, and machine learning."

Slack off. Send videos instead with $11M-funded Loom

If a picture is worth a thousand words, how many emails can you replace with a video? As offices fragment into remote teams, work becomes more visual, and social media makes us more comfortable on camera, it’s time for collaboration to go beyond text. That’s the idea behind Loom, a fast-rising startup that equips enterprises with instant video messaging tools. In a click, you can film yourself or narrate a screenshare to get an idea across in a more vivid, personal way. Instead of scheduling a video call, employees can asynchronously discuss projects or give ‘stand-up’ updates without massive disruptions to their workflow.

In the 2.5 years since launch, Loom has signed up 1.1 million users from 18,000 companies. And that was just as a Chrome extension. Today Loom launches its PC and Mac apps that give it a dedicated presence in your digital workspace. Whether you're communicating across the room or across the globe, "Loom is the next best thing to being there," co-founder Shahed Khan tells me.

Now Loom is ready to spin up bigger sales and product teams thanks to an $11 million Series A led by Kleiner Perkins. The firm's partner Ilya Fushman, formerly Dropbox's head of business and corporate development, will join Loom's board. He'll shepherd through today's launch of its $10 per month per user Pro version that offers HD recording, calls-to-action at the end of videos, clip editing, live annotation drawings, and analytics to see who actually watched like they're supposed to.

"We're ditching the suits and ties and bringing our whole selves to work. We're emailing and messaging like never before. But though we may be more connected, we're further apart," Khan tells me. "We want to make it very easy to bring the humanity back in."

Loom co-founder Shahed Khan

Back in 2016, Loom was just trying to survive. Khan had worked at Upfront Ventures after a stint as a product designer at website builder Weebly. He and two close friends, Joe Thomas and Vinay Hiremath, started Opentest to let app makers get usability feedback from experts via video. But after six months and going through the NFX accelerator, they were running out of bootstrapped money. That's when they realized it was the video messaging that could be a business as teams sought to keep in touch with members working from home or remotely.

Together they launched Loom in mid-2016, raising a pre-seed and seed round amounting to $4 million. Part of its secret sauce is that Loom immediately starts uploading bytes of your video while you’re still recording so it’s ready to send the moment you’re finished. That makes sharing your face, voice and screen feel as seamless as firing off a Slack message, but with more emotion and nuance.
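That upload-while-recording trick is a classic producer-consumer pattern. A minimal sketch, with an in-memory list standing in for the network upload:

```python
import queue
import threading

def record(chunks, outbox):
    """Producer: hand each captured chunk to the uploader immediately,
    instead of waiting for the whole recording to finish."""
    for chunk in chunks:
        outbox.put(chunk)
    outbox.put(None)  # sentinel: recording is done

def upload(outbox, uploaded):
    """Consumer: upload chunks as they arrive (a list append stands in
    for the real network call)."""
    while True:
        chunk = outbox.get()
        if chunk is None:
            break
        uploaded.append(chunk)

outbox = queue.Queue()
uploaded = []
uploader = threading.Thread(target=upload, args=(outbox, uploaded))
uploader.start()
record([b"chunk-1", b"chunk-2", b"chunk-3"], outbox)
uploader.join()
print(b"".join(uploaded))  # the full video is assembled the moment recording ends
```

Because the uploader runs concurrently with the recorder, by the time the last chunk is captured nearly everything has already been sent, which is why the share link is ready instantly.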

“Sales teams use it to close more deals by sending personalized messages to leads. Marketing teams use Loom to walk through internal presentations and social posts. Product teams use Loom to capture bugs, stand ups, etc” Khan explains.

Loom has grown to a 16-person team that will expand thanks to the new $11 million Series A from Kleiner, Slack, Cue founder Daniel Gross, and actor Jared Leto that brings it to $15 million in funding. They predict the new desktop apps that open Loom to a larger market will see it spread from team to team for both internal collaboration and external discussions from focus groups to customer service.

Loom will have to hope that after becoming popular at a company, managers will pay for the Pro version that shows exactly how long each viewer watched for. That could clue them in that they need to be more concise, or that someone is cutting corners on training and cooperation. It’s also a great way to onboard new employees. “Just watch this collection of videos and let us know what you don’t understand.”

Next Loom will have to figure out a mobile strategy — something it surprisingly lacks. Khan imagines users being able to record quick clips from their phone to relay updates from travel and client meetings. It also plans to build out voice transcription to add automatic subtitles to videos and even divide videos into thematic sections you can fast-forward between. Loom will have to stay ahead of competitors like Vidyard's GoVideo and Wistia's Soapbox that have cropped up since its launch. But Khan says Loom looms largest in the space thanks to customers at Uber, Dropbox, Airbnb, Red Bull, and 1,100 employees at HubSpot.

“The overall space of collaboration tools is becoming deeper than just email + docs” says Fushman, citing Slack, Zoom, Dropbox Paper, Coda, Notion, Intercom, Productboard, and Figma. To get things done the fastest, businesses are cobbling together B2B software so they can skip building it in-house and focus on their own product.

No piece of enterprise software has to solve everything. But Loom is dependent on apps like Slack, Google Docs, Convo, and Asana. Since it lacks a social or identity layer, you'll need to send the links to your videos through another service. Loom should really build its own video messaging system into its desktop app. But at least Slack is an investor, and Khan says "they're trying to be the hub of text-based communication" and the soon-to-be-public unicorn tells him anything it does in video will focus on real-time interaction.

Still the biggest threat to Loom is apathy. People already feel overwhelmed with Slack and email, and if recording videos comes off as more of a chore than an efficiency, workers will stick to text. But Khan thinks the ubiquity of Instagram Stories is making it seem natural to jump on camera briefly. And the advantage is that you don’t need a bunch of time-wasting pleasantries to ensure no one misinterprets your message as sarcastic or pissed off.

Khan concludes “We believe instantly sharable video can foster more authentic communication between people at work, and convey complex scenarios and ideas with empathy.”