Tuesday, July 31, 2018

The Istio service mesh hits version 1.0

Istio, the service mesh for microservices from Google, IBM, Lyft, Red Hat and many other players in the open source community, launched version 1.0 of its tools today.

If you’re not into service meshes, that’s understandable. Few people are. But Istio is probably one of the most important new open source projects out there right now. It sits at the intersection of a number of industry trends like containers, microservices and serverless computing and makes it easier for enterprises to embrace them. Istio now has over 200 contributors and the code has seen over 4,000 check-ins since the launch of version 0.1.

Istio, at its core, handles the routing, load balancing, flow control and security needs of microservices. It sits on top of existing distributed applications and basically helps them talk to each other securely, while also providing logging, telemetry and the necessary policies that keep things under control (and secure). It also features support for canary releases, which allow developers to test updates with a few users before launching them to a wider audience, something that Google and other webscale companies have long done internally.
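At its heart, a canary release is weighted traffic splitting: a small slice of requests goes to the new version while the rest stays on the stable one. Here is a minimal sketch of that idea in plain Python; Istio itself expresses this declaratively in its routing rules, and the service names below are made up:

```python
import random

def pick_version(weights):
    """Pick a backend version with probability proportional to its weight."""
    versions = list(weights)
    return random.choices(versions, weights=[weights[v] for v in versions])[0]

# Send 95% of traffic to the stable release and 5% to the canary.
routing = {"reviews-v1": 95, "reviews-v2-canary": 5}

counts = {v: 0 for v in routing}
for _ in range(10_000):
    counts[pick_version(routing)] += 1
print(counts)  # roughly a 95/5 split
```

If the canary’s error rate stays healthy, its weight gets dialed up step by step until it serves all traffic.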

“In the area of microservices, things are moving so quickly,” Google product manager Jennifer Lin told me. “And with the success of Kubernetes and the abstraction around container orchestration, Istio was formed as an open source project to really take the next step in terms of a substrate for microservice development as well as a path for VM-based workloads to move into more of a service management layer. So it’s really focused around the right level of abstractions for services and creating a consistent environment for managing that.”

Even before the 1.0 release, a number of companies had already adopted Istio in production, including the likes of eBay and Auto Trader UK. Lin argues that this is a sign that Istio solves a problem a lot of businesses face today as they adopt microservices. “A number of more sophisticated customers tried to build their own service management layer, and while we hadn’t yet declared 1.0, we had a number of customers, including a surprising number of large enterprise customers, say, ‘you know, even though you’re not 1.0, I’m very comfortable putting this in production because what I’m comparing it to is much more raw.’”

IBM Fellow and VP of Cloud Jason McGee agrees with this and notes that “our mission since Istio’s launch has been to enable everyone to succeed with microservices, especially in the enterprise. This is why we’ve focused the community around improving security and scale, and heavily leaned our contributions on what we’ve learned from building agile cloud architectures for companies of all sizes.”

A lot of the large cloud players now support Istio directly, too. IBM supports it on top of its Kubernetes Service, for example, and Google even announced a managed Istio service for its Google Cloud users, as well as some additional open source tooling for serverless applications built on top of Kubernetes and Istio.

Two names missing from today’s party are Microsoft and Amazon. I think that’ll change over time, though, assuming the project keeps its momentum.

Istio also isn’t part of any major open source foundation yet. The Cloud Native Computing Foundation (CNCF), the home of Kubernetes, is backing linkerd, a project that isn’t all that dissimilar from Istio. Once a 1.0 release of these kinds of projects rolls around, the maintainers often start looking for a foundation that can shepherd the development of the project over time. I’m guessing it’s only a matter of time before we hear more about where Istio will land.

Monday, July 30, 2018

A pickaxe for the AI gold rush, Labelbox sells training data software

Every artificial intelligence startup or corporate R&D lab has to reinvent the wheel when it comes to how humans annotate training data to teach algorithms what to look for. Whether it’s doctors assessing the size of cancer from a scan or drivers circling street signs in self-driving car footage, all this labeling has to happen somewhere. Often that means wasting six months and as much as a million dollars just developing a training data system. With nearly every type of business racing to adopt AI, that spend in cash and time adds up.

Labelbox builds artificial intelligence training data labeling software so nobody else has to. What Salesforce is to a sales team, Labelbox is to an AI engineering team. The software-as-a-service acts as the interface for human experts or crowdsourced labor to instruct computers how to spot relevant signals in data by themselves and continuously improve their algorithms’ accuracy.

Today, Labelbox is emerging from six months in stealth with a $3.9 million seed round led by Kleiner Perkins and joined by First Round and Google’s Gradient Ventures.

“There haven’t been seamless tools to allow AI teams to transfer institutional knowledge from their brains to software,” says co-founder Manu Sharma. “Now we have over 5,000 customers, and many big companies have replaced their own internal tools with Labelbox.”

Kleiner’s Ilya Fushman explains that “If you have these tools, you can ramp up to the AI curve much faster, allowing companies to realize the dream of AI.”

Inventing The Best Wheel

Sharma knew how annoying it was to try to forge training data systems from scratch because he’d seen it done before at Planet Labs, a satellite imaging startup. “One of the things that I observed was that Planet Labs has a superb AI team, but that team had spent over six months building labeling and training tools. Is this really how teams around the world are approaching building AI?” he wondered.

Before that, he’d worked at DroneDeploy alongside Labelbox co-founder and CTO Daniel Rasmuson, who was leading the aerial data startup’s developer platform. “Many drone analytics companies that were also building AI were going through the same pain point,” Sharma tells me. In September, the two began to explore the idea and found that 20 other companies big and small were also burning talent and capital on the problem. “We thought we could make that much smarter so AI teams can focus on algorithms,” Sharma decided.

Labelbox’s team. Co-founders: Ysiad Ferreiras (third from left), Manu Sharma (fourth from left), Brian Rieger (sixth from left), Daniel Rasmuson (seventh from left)

Labelbox launched its early alpha in January and saw swift pickup from the AI community that immediately asked for additional features. With time, the tool expanded with more and more ways to manually annotate data, from gradation levels like how sick a cow is for judging its milk production to matching systems like whether a dress fits a fashion brand’s aesthetic. Rigorous data science is applied to weed out discrepancies between reviewers’ decisions and identify edge cases that don’t fit the models.
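The article doesn’t say how Labelbox reconciles disagreeing reviewers, but the standard starting point for weeding out discrepancies is a majority vote plus an agreement score, with low-agreement items flagged as edge cases for expert review. A hedged sketch of that idea (the label values and threshold here are invented, not Labelbox’s actual method):

```python
from collections import Counter

def consensus(labels):
    """Majority vote across reviewers, plus the fraction who agreed."""
    counts = Counter(labels)
    winner, votes = counts.most_common(1)[0]
    return winner, votes / len(labels)

# Three reviewers annotate the same scan.
label, agreement = consensus(["polyp", "polyp", "no_polyp"])
print(label, round(agreement, 2))  # polyp 0.67

# Items below an agreement threshold go back for expert review.
needs_review = agreement < 0.75
```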

“There are all these research studies about how to make training data” that Labelbox analyzes and applies, says co-founder and COO Ysiad Ferreiras, who’d led all of sales and revenue at fast-rising grassroots campaign texting startup Hustle. “We can let people tweak different settings so they can run their own machine learning program the way they want to, instead of being limited by what they can build really quickly.” When Norway mandated all citizens get colon cancer screenings, it had to build AI for recognizing polyps. Instead of spending half a year creating the training tool, they just signed up all the doctors on Labelbox.

Any organization can try Labelbox for free, and Ferreiras claims hundreds of thousands have. Once they hit a usage threshold, the startup works with them on appropriate SaaS pricing tied to the revenue the client’s AI will generate. One customer, Lytx, makes DriveCam, a system installed on half a million trucks with cameras that use AI to detect unsafe driver behavior so drivers can be coached to improve. Conde Nast is using Labelbox to match runway fashion to related items in its archive of content.

The big challenge is convincing companies that they’re better off leaving the training software to the experts instead of building it in-house where they’re intimately, though perhaps inefficiently, involved in every step of development. Some turn to crowdsourcing agencies like CrowdFlower, which have their own training data interface, but they only work with generalist labor, not the experts required for many fields. Labelbox wants to cooperate rather than compete here, serving as the management software that treats outsourcers as just another data input.

Long-term, the risk for Labelbox is that it has arrived too early for the AI revolution. Most potential corporate customers are still in the R&D phase around AI, not at scaled deployment into real-world products. The big business isn’t selling the labeling software; that’s just the start. Labelbox wants to continuously manage the fine-tuning data to help optimize an algorithm through its entire lifecycle. That requires AI being part of the actual engineering process, and right now it’s often stuck as an experiment in the lab. “We’re not concerned about our ability to build the tool to do that. Our concern is ‘will the industry get there fast enough?’” Ferreiras declares.

Their investor agrees. Last year’s big joke in venture capital was that suddenly you couldn’t hear a startup pitch without ‘AI’ being referenced. “There was a big wave where everything was AI. I think at this point it’s almost a bit implied,” says Fushman. But it’s corporations that already have plenty of data, and plenty of human jobs to obfuscate, that are Labelbox’s opportunity. “The bigger question is ‘when does that [AI] reality reach consumers, not just from the Googles and Amazons of the world, but the mainstream corporations?’”

Labelbox is willing to wait it out, or better yet, accelerate that arrival — even if it means eliminating jobs. That’s because the team believes the benefits to humanity will outweigh the transition troubles.

“For a colonoscopy or mammogram, you only have a certain number of people in the world who can do that. That limits how many of those can be performed. In the future, that could only be limited by the computational power provided, so it could be exponentially cheaper,” says co-founder Brian Rieger. With Labelbox, tens of thousands of radiology exams can be quickly ingested to produce cancer-spotting algorithms that, he says, studies show can become more accurate than humans. Employment might get tougher to find, but hopefully life will get easier and cheaper, too. Meanwhile, improving underwater pipeline inspections could protect the environment from its biggest threat: us.

“AI can solve such important problems in our society,” Sharma concludes. “We want to accelerate that by helping companies tell AI what to learn.”

Google Calendar makes rescheduling meetings easier

Nobody really likes meetings — and the few people who do like them are the ones with whom you probably don’t want to have meetings. So when you’ve reached your fill and decide to reschedule some of those obligations, the usual process of trying to find a new meeting time begins. Thankfully, the Google Calendar team has heard your sighs of frustration and built a new tool that makes rescheduling meetings much easier.

Starting in two weeks, on August 13th, every guest will be able to propose a new meeting time and attach a message to the organizer explaining the change. The organizer can then review and accept or deny the new time slot. If the other guests have made their calendars public, the organizer can also see the other attendees’ availability in a new side-by-side view to find a new time.

What’s a bit odd here is that this is still a mostly manual feature. To find meeting slots to begin with, Google already employs some of its machine learning smarts to find the best times. This new feature doesn’t seem to employ the same algorithms to propose dates and times for rescheduled meetings.

This new feature will work across G Suite domains and also with Microsoft Exchange. It’s worth noting, though, that this new option won’t be available for meetings with more than 200 attendees or for all-day events.

Thursday, July 26, 2018

Slack forms key alliance as Atlassian throws in the towel on enterprise collaboration

Today’s announcement from Atlassian that it is selling the IP assets of its two enterprise communications tools, Hipchat and Stride, to Slack closes the book on one of the earliest competitors in the modern enterprise collaboration space. It is also a clear signal that Slack is not afraid to take on its giant competitors by forming key alliances.

The fact that the announcement came from Slack co-founder and CEO Stewart Butterfield on Twitter only underscored that. Atlassian has a set of popular developer tools like Jira, Confluence and Bitbucket, and at this point, Hipchat and Stride had become superfluous to the company, so it sold the IP to its competitor.

Not only is Slack buying the assets while Atlassian effectively shuts down these products, Atlassian is also investing in Slack, a move that shows it’s throwing its financial weight behind the company and forming an alliance with it.

Slack has been burning it up since it launched in 2014 with just 16,000 daily active users. At last count in May, the company was reporting 8 million active users, 3 million of which were paid. That’s up from 6 million DAUs and 2 million paid users in September 2017, when the company was reporting $200 million in annual recurring revenue. With the number of paid users growing from 2 million to 3 million over that stretch, it’s a fair bet that the revenue number has increased significantly as well.

Slack and products of its ilk like Workplace by Facebook, Google Hangouts and Microsoft Teams are trying to revolutionize the way we communicate and collaborate inside organizations. Slack has managed to advance the idea of enterprise communications that began in the early 2000s with chat clients, advanced to Enterprise 2.0 tools like Yammer and Jive in the mid-2000s, and finally evolved into modern tools like Slack we are using today in the mobile-cloud era.

Slack has been able to succeed so well in business because it does much more than provide a channel to communicate. It has built a platform on top of which companies can plug in an assortment of tools they are using every day to do their jobs like ServiceNow for help desk tickets, Salesforce for CRM and marketing data and Zendesk for customer service information.

This ability to provide a simple way to do all of your business in one place without a lot of task switching has been a holy grail of sorts in the enterprise for years. The two previously mentioned iterations, chat clients and Enterprise 2.0 tools, tried and failed to achieve this, but Slack has managed to create this single platform and made it easy for companies to integrate services.

This has been automated even further by the use of bots, which can act as trusted assistants inside of Slack, providing additional information and performing tasks on your behalf when it makes sense.

Slack has an otherworldly valuation of over $5 billion right now and is on its way to an eventual IPO. Atlassian might have thrown in the towel on enterprise communications, but it has opened the door to getting a piece of that IPO action while giving its customers what they want and forming a strong bond with Slack.

Others like Facebook and Microsoft also have a strong presence in this space and continue to build out their products; it’s not as though anyone else is showing signs of throwing up their hands just yet. In fact, just today Facebook bought Redkix to enhance its offering by giving users the ability to collaborate via email or the Workplace by Facebook interface. But Atlassian’s acquiescence is a strong signal that, if you had any doubt, Slack is a leader here, and it got a big boost with today’s announcement.

Amazon’s AWS continues to lead its performance highlights

Amazon Web Services (AWS) continues to be the highlight of the company’s balance sheet, once again showing in the second quarter the kind of growth Amazon is looking for in a new business, especially one that has dramatically better margins than its core retail business.

Despite now running a grocery chain, the company’s AWS division — which has an operating margin over 25 percent, compared to its tiny margins on retail — grew 49 percent year-over-year in the quarter. It’s also up 49 percent when comparing the most recent six months to the same period last year. AWS is now on a run rate well north of $10 billion annually, generating more than $6 billion in revenue in the second quarter this year. Meanwhile, Amazon’s retail operations generated nearly $47 billion in revenue with a net income of just over $1.3 billion (unaudited). AWS generated $1.6 billion in operating income on its $6.1 billion in revenue.
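A quick back-of-the-envelope pass over those figures (note that it compares AWS’s operating margin with retail’s net margin, as the reported numbers do) shows how lopsided the two businesses are:

```python
# Quarterly figures from the report, in billions of dollars.
aws_revenue, aws_operating_income = 6.1, 1.6
retail_revenue, retail_net_income = 47.0, 1.3

aws_margin = aws_operating_income / aws_revenue
retail_margin = retail_net_income / retail_revenue

print(f"AWS operating margin: {aws_margin:.1%}")    # 26.2%
print(f"Retail net margin:    {retail_margin:.1%}")  # 2.8%
```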

So, in short, Amazon’s dramatically more efficient AWS business is its biggest contributor to its actual net income. The company reported earnings of $5.07 per share, compared to analyst estimates of around $2.50 per share, on revenue of $52.9 billion. That revenue number fell under what investors were looking for, so the stock isn’t really doing anything in after-hours, and Amazon still remains in the race to become a company with a market cap of $1 trillion alongside Google, Apple and Microsoft.

This isn’t extremely surprising, as Amazon was one of the original harbingers of the move to a cloud computing-focused world, and, as a result, Microsoft and Google are now chasing it to capture as much share as possible. While Microsoft doesn’t break out Azure, the company says it’s one of its fastest-growing businesses, and Google’s “other revenue” segment, which includes Google Cloud Platform, also continues to be one of its fastest-growing divisions. Running a bunch of servers with access to on-demand compute, it turns out, is a pretty efficient business that can make up for the very slim margins Amazon has on the rest of its core business.

GitHub and Google reaffirm partnership with Cloud Build CI/CD tool integration

When Microsoft acquired GitHub for $7.5 billion smackeroos in June, it sent shock waves through the developer community, as GitHub is a key code repository. Google certainly took notice, but the two companies continue to work closely together. Today at Google Next, they announced an expansion of their partnership around Google’s new CI/CD tool, Cloud Build, which was unveiled this week at the conference.

Politics aside, the purpose of the integration is to make life easier for developers by reducing the need to switch between tools. If GitHub recognizes a Dockerfile without a corresponding CI/CD tool, the developer will be prompted to grab one from the GitHub Marketplace, with Google Cloud Build offered prominently as one of the suggested tools.

Photo: GitHub

Should the developer choose to install Cloud Build, that’s where the tight integration comes into play. Developers can run Cloud Build against their code directly from GitHub, and the results will appear directly in the GitHub interface. They won’t have to switch applications to make this work together, and that should go a long way toward saving developer time and effort.

Google Cloud Build. Photo: Google

This is part of GitHub’s new “Smart Recommendations,” which will be rolling out to users in the coming months.

Melody Meckfessel, VP of Engineering for Google Cloud, says that the two companies have a history and a context, and have always worked extremely well together on an engineer-to-engineer level. “We have been working together from an engineering standpoint for so many years. We both believe in doing the right thing for developers. We believe that success as it relates to cloud adoption comes from collaborating in the ecosystem,” she said.

Given that close relationship, it had to be disappointing on some level when Microsoft acquired GitHub. In fact, Google Cloud head Diane Greene expressed sadness about the deal in an interview with CNBC earlier this week, but GitHub’s SVP of Technology Jason Warner believes that Microsoft will be a good steward and that the relationship with Google will remain strong.

Warner says the company’s founding principles were about not getting locked in to any particular platform, and he doesn’t see that changing after the acquisition is finalized. “One of the things that was critical in any discussion about an acquisition was that GitHub shall remain an open platform,” Warner explained.

He indicated that today’s announcement is just a starting point, and the two companies intend to build on this integration moving forward. “We worked pretty closely on this together. This announcement is a nod to some of the future-oriented partnerships that we will be announcing later in the year,” he said. And that partnership should continue unabated, even after the Microsoft acquisition is finalized later this year.

Facebook acquires Redkix to enhance communications on Workplace by Facebook

Facebook had a rough day yesterday when its stock plunged after a poor earnings report. What better way to pick yourself up and dust yourself off than to buy a little something for yourself. Today the company announced it has acquired Redkix, a startup that provides tools to communicate more effectively by combining email with a more formal collaboration tool. The companies did not reveal the acquisition price.

Redkix burst out of the gate two years ago with a $17 million seed round, a hefty seed amount by any measure. What prompted this kind of investment was a tool that combined a collaboration tool like Slack or Workplace by Facebook with email. People could collaborate in Redkix itself, or, if they weren’t registered users, still participate by email, providing a more seamless way to work together.

Alan Lepofsky, who covers enterprise collaboration at Constellation Research, sees this tool as providing a key missing link. “Redkix is a great solution for bridging the worlds between traditional email messaging and more modern conversational messaging. Not all enterprises are ready to simply switch from one to the other, and Redkix allows for users to work in whichever method they want, seamlessly communicating with the other,” Lepofsky told TechCrunch.

As is often the case with these kinds of acquisitions, the company bought the technology itself along with the team that created it. This means the Redkix team, including the CEO and CTO, will join Facebook, and the standalone application will very likely be shut down after the acquisition is finalized.

After yesterday’s earnings debacle, Facebook could be looking for ways to enhance its revenue in areas beyond the core Facebook platform. The enterprise collaboration tool offers a possible way to do that in the future, and if Facebook can find a way to incorporate email into it, it could become a more attractive and broader offering.

Facebook is competing with Slack, the darling of this space, and others like Microsoft, Cisco and Google around communications and collaboration. When Workplace launched in 2015, Facebook was trying to take its core consumer product and put it in a business context, something Slack had been doing since the beginning.

To succeed in business, Facebook had to think differently than it does as a consumer tool driven by advertising revenue, and it had to convince large organizations that it understood their requirements. Today, Facebook claims 30,000 organizations are using the tool, and over time it has built in integrations to other key enterprise products and keeps enhancing it.

Perhaps with today’s acquisition, it can offer a more flexible way to interact with the platform and increase those numbers over time.

Wednesday, July 25, 2018

Qualcomm says it will drop its massive $44B offer to acquire NXP

Qualcomm said today, as part of its quarterly earnings release, that it won’t extend its offer to buy NXP for $44 billion, and that it will instead return $30 billion to investors in the form of a share buyback.

So, barring any last-second changes in the approval process in China or “other material developments”, the deal is basically dead after failing to clear China’s SAMR. As the tariff battle between the U.S. and China has heated up, it appears the Qualcomm/NXP deal — one of the largest in the semiconductor industry ever — may be one of its casualties. The White House announced it would impose tariffs on Chinese tech products in May earlier this year, kicking off an extended delay in the deal between Qualcomm and NXP even after Qualcomm tried to close the deal in an expedient fashion. Qualcomm issued the announcement this afternoon, and the company’s shares rose more than 5% when its earnings report came out.

“We reported results significantly above our prior expectations for our fiscal third quarter, driven by solid execution across the company, including very strong results in our licensing business,” Qualcomm CEO Steve Mollenkopf said in a statement with the report. “We intend to terminate our purchase agreement to acquire NXP when the agreement expires at the end of the day today, pending any new material developments. In addition, as previously indicated, upon termination of the agreement, we intend to pursue a stock repurchase program of up to $30 billion to deliver significant value to our stockholders.”

Today’s termination also marks the end of another chapter for a tumultuous couple of months for Qualcomm. The White House blocked Broadcom’s massive takeover attempt of Qualcomm in March earlier this year, and there’s the still-looming specter of its patent spat with Apple. Now Qualcomm will instead be returning an enormous amount of capital to investors instead of tacking on NXP in the largest ever consolidation deal in the semiconductor industry.

Virtru teams up with Google to bring its end-to-end encryption service to Google Drive

Virtru, which is best known for its email encryption service for both enterprises and consumers, is announcing a partnership with Google today that will bring the company’s encryption technology to Google Drive.

Only a few years ago, the company was still bolting its solution on top of Gmail without Google’s blessing, but these days, Google is fully on board with Virtru’s plans.

Its new Data Protection for Google Drive extends the company’s Gmail service to Google’s online file storage service. Files are encrypted before upload, so they remain protected even when they are shared outside of an organization. The customer remains in full control of the encryption keys, so Google, too, has no access to these files, and admins can set and manage access policies by document, folder and team drive.

Virtru’s service uses the Trusted Data Format, an open standard the company’s CTO, Will Ackerly, developed at the NSA.

While it started as a hack, Virtru is Google’s only data protection partner for G Suite today, and its CEO John Ackerly tells me Google now gets what he and his team are trying to achieve. Indeed, Virtru now has a team of engineers that works with Google. As Ackerly also noted, GDPR and the renewed discussion around data privacy are helping the company gain traction in many businesses, especially in Europe, where it is opening new offices to support its customers there. In total, about 8,000 organizations now use its services.

It’s worth noting that while Virtru is announcing this new Google partnership today, the company also supports email encryption in Microsoft’s Office 365 suite.

Google is baking machine learning into its BigQuery data warehouse

There are still a lot of obstacles to building machine learning models, and one of them is that developers often have to move a lot of data back and forth between their data warehouses and wherever they are building their models. Google is now making this part of the process a bit easier for the developers and data scientists in its ecosystem with BigQuery ML, a new feature that builds some machine learning functionality right into its BigQuery data warehouse.

Using BigQuery ML, developers can build models using linear and logistic regression right inside their data warehouse, without having to transfer data back and forth as they build and fine-tune their models. And all they have to do to build these models and get predictions is write a bit of SQL.

Moving data doesn’t sound like it should be a big issue, but developers often spend a lot of their time on this kind of grunt work — time that would be better spent on actually working on their models.

BigQuery ML also promises to make it easier to build these models, even for developers who don’t have a lot of experience with machine learning. To get started, developers can use what’s basically a variant of standard SQL to say what kind of model they are trying to build and what the input data is supposed to be. From there, BigQuery ML then builds the model and allows developers to almost immediately generate predictions based on it. And they won’t even have to write any code in R or Python.
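To make that concrete, here is a rough sketch of what the SQL could look like, driven from Python. The dataset, table and column names are hypothetical, and the commented-out calls assume the standard google-cloud-bigquery client with a configured GCP project:

```python
# Hypothetical dataset/table/column names; swap in your own.
train_sql = """
CREATE OR REPLACE MODEL `mydataset.churn_model`
OPTIONS (model_type = 'logistic_reg') AS
SELECT churned AS label, tenure_months, monthly_spend
FROM `mydataset.customers`
"""

predict_sql = """
SELECT * FROM ML.PREDICT(
  MODEL `mydataset.churn_model`,
  (SELECT tenure_months, monthly_spend FROM `mydataset.new_customers`))
"""

# Submitting them is an ordinary query job:
# from google.cloud import bigquery
# client = bigquery.Client()
# client.query(train_sql).result()          # trains the model in-warehouse
# rows = client.query(predict_sql).result()  # returns predictions as rows
```

The point is that training and prediction are both just queries, so the data never leaves the warehouse.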

These new features are now available in beta.

Google launches a standalone version of Drive for businesses that don’t want the full G Suite

If you are a business and want to use Google Drive, then your only option until now was to buy a full G Suite subscription, even if you don’t want or need access to the rest of the company’s productivity tools. Starting today, though, these businesses will be able to buy a subscription to a standalone version of Google Drive, too.

Google says that a standalone version of Drive has been at the top of the list of requests from prospective customers, so it’s now giving them this option in the form of this new service (though to be honest, I’m not sure how much demand there really is for this product). Standalone Google Drive will come with all the same online storage and sharing features as the G Suite version.

Pricing will be based on usage. Google will charge $8 per month per active user and $0.04 per GB stored in a company’s Drive.
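That per-seat-plus-storage scheme is simple enough to sketch as a small cost function; the rates come from the announcement, while the example team is made up:

```python
def monthly_drive_cost(active_users, gb_stored, per_user=8.00, per_gb=0.04):
    """Standalone Drive pricing: a per-active-user fee plus a per-GB storage fee."""
    return active_users * per_user + gb_stored * per_gb

# A 50-person team storing 2 TB (2,048 GB):
print(round(monthly_drive_cost(50, 2048), 2))  # 481.92
```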

Google’s idea here is surely to convert those standalone Drive users to full G Suite users over time, but it’s also an acknowledgement on Google’s part that not every business is ready to move away from legacy email tools and desktop-based productivity applications like Word and Excel just yet (and that its online productivity suite may not be right for all of those businesses, either).

Drive, by the way, is going to hit a billion users this week, Google keeps saying. I guess I appreciate that they don’t want to jump the gun and are actually waiting for that to happen instead of just announcing it now when it’s convenient. Once it does, though, it’ll become the company’s eighth product with more than a billion users.

Google takes on Yubico and builds its own hardware security keys

Google today announced it is launching its own hardware security keys for two-factor authentication. These so-called Titan Security Keys will go up against similar keys from companies like Yubico, which Google has long championed as the de facto standard for hardware-based two-factor authentication for Gmail and other services.
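Under the hood, keys like these implement a challenge-response flow: the service sends a fresh random challenge and the key answers it cryptographically, proving you physically hold it. The sketch below is deliberately simplified; real FIDO/U2F keys sign each challenge with a per-site private key, while this illustration stands in an HMAC over a shared secret just to show the shape of the exchange:

```python
import hashlib
import hmac
import secrets

device_secret = secrets.token_bytes(32)  # lives only inside the hardware key

def key_respond(challenge: bytes) -> bytes:
    """What the token computes when you tap it."""
    return hmac.new(device_secret, challenge, hashlib.sha256).digest()

def server_verify(challenge: bytes, response: bytes) -> bool:
    """The service recomputes the expected answer and compares in constant time."""
    expected = hmac.new(device_secret, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = secrets.token_bytes(16)  # fresh per login attempt
assert server_verify(challenge, key_respond(challenge))
assert not server_verify(secrets.token_bytes(16), key_respond(challenge))
```

Because each login uses a new challenge, a captured response is useless for the next attempt, which is what makes these keys resistant to phishing and replay.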

The FIDO-compatible Titan keys will come in two versions: one with Bluetooth support for mobile devices and one that plugs directly into your computer’s USB port. In terms of looks and functionality, they look quite a lot like the existing keys from Yubico, though our understanding is that these are Google’s own designs.

Unsurprisingly, the folks over at Yubico got wind of today’s announcement ahead of time and have already posted a reaction to today’s news (and the company is exhibiting at Google Cloud Next, too, which may be a bit awkward after today’s announcement).

“Yubico strongly believes there are security and privacy benefits for our customers, by manufacturing and programming our products in USA and Sweden,” Yubico founder and CEO Stina Ehrensvard writes, and goes on to throw a bit of shade on Google’s decision to support Bluetooth. “Google’s offering includes a Bluetooth (BLE) capable key. While Yubico previously initiated development of a BLE security key, and contributed to the BLE U2F standards work, we decided not to launch the product as it does not meet our standards for security, usability and durability. BLE does not provide the security assurance levels of NFC and USB, and requires batteries and pairing that offer a poor user experience.”

It’s unclear who is manufacturing the Titan keys for Google (the company spokesperson didn’t know when asked during the press conference), but the company says that it developed its own firmware for the keys. And while Google is obviously using the same Titan brand it uses for the custom chips that protect the servers that make up its cloud, it’s also unclear whether the two are related.

No word on pricing yet, but the keys are now available to Google Cloud customers and will soon be available for purchase by anyone in the Google Store.

Google brings its search technology to the enterprise

One of Google’s first hardware products was its search appliance, a custom-built server that allowed businesses to bring Google’s search tools to the data behind their firewalls. That appliance is no more, but Google today announced the spiritual successor to it with an update to Cloud Search. Until today, Cloud Search only indexed G Suite data. Now, it can pull in data from a wide variety of third-party services that can run on-premise or in the cloud, too, making the tool far more useful for large businesses that want to make all of their data searchable by their employees.

“We are essentially taking all of Google expertise in search and are applying it to your enterprise content,” Google said.

One of the launch customers for this new service is Whirlpool, which built its own search portal and indexed over 12 million documents from more than a dozen services using this new service.

“This is about giving employees access to all the information from across the enterprise, even if it’s traditionally siloed data, whether that’s in a database or a legacy productivity tool, and making all of that available in a single index,” Google explained.

To enable this functionality, Google is making a number of software adapters available that will bridge the gap between these third-party services and Cloud Search. Over time, Google wants to add support for more services and bring this cloud-based technology on par with what its search appliance was once capable of.

The service is now rolling out to a select number of users and over time, it’ll become available to both G Suite users and as a stand-alone version.

Snark AI looks to help companies get on-demand access to idle GPUs

Riding the wave of the explosion in machine learning to power, well, just about everything is the emergence of GPUs as one of the go-to methods for handling all the processing behind those operations.

But getting access to those GPUs — whether buying the cards themselves or renting them through something like AWS — might still be too difficult or too expensive for some companies or research teams. So Davit Buniatyan and his co-founders started Snark AI, which lets companies rent GPUs that are sitting idle across a distributed network of other companies, rather than going through a service like Amazon. While the larger cloud providers offer similar access to GPUs, Buniatyan’s hope is that lowering that barrier to entry will make tapping a different network attractive enough to companies and developers. The company is launching out of Y Combinator’s Summer 2018 class.

“We bet on that there will always be a gap between mining and AWS or Google Cloud prices,” Buniatyan said. “If the mining will be [more profitable than the cost of running a GPU], anyone can get into AWS and do mining and be profitable. We’re building a distributed cloud computing platform for clients that can easily access the resources there but are not used.”

The startup works with companies that have a lot of spare GPUs sitting unused, such as gaming cloud companies or crypto mining companies. Teams that need GPUs for training their machine learning models get access to the raw hardware, while teams that just need those GPUs to handle inference get access through a set of APIs. There’s a distinction between the two because they represent the two sides of machine learning — the former builds the model that the latter uses to execute some task, like image or speech recognition. When the GPUs are idle, they run mining to pay the hardware providers, and Snark AI also offers the ability to both mine and run deep learning inference on a piece of hardware simultaneously, Buniatyan said.

Snark AI matches the proper amount of GPU power to whatever a team needs, and then deploys it across a network of distributed idle cards that companies have in various data centers. It’s one way to potentially reduce the net cost of a GPU, which may be a substantial investment up front but can earn a return while it would otherwise sit unused. If that’s the case, it may also encourage more companies to sign up with a network like this — Snark AI or otherwise — and deploy similar cards.

There’s also an emerging trend of specialized chips that focus on machine learning or inference, which look to reduce the cost, power consumption, or space requirements of machine learning tasks. That ecosystem of startups, like Cerebras Systems, Mythic, Graphcore, or any of the other well-funded startups, all potentially have a shot at unseating GPUs for machine learning tasks. There’s also the emergence of ASICs, customized chips that are better suited to tasks like crypto mining, which could fracture an ecosystem like this — especially if the larger cloud providers decide to build or deploy something similar (such as Google’s TPU). But this also means that there’s room to potentially create some new interface layer that can snap up all the leftovers for tasks that companies might need, but don’t necessarily need bleeding-edge technology like that from those startups.

There’s always going to be the same argument that was made for Dropbox prior to its significant focus on enterprises and collaboration: the price falls dramatically as it becomes more commoditized. That might be especially true for companies like Amazon and Google, which have already run that playbook before, and could leverage their dominance in cloud computing to put a significant amount of pressure on a third-party network like Snark AI. Google also has the ability to build proprietary hardware like the TPU for specialized operations. But Buniatyan said that the company’s focus on being able to juggle inference and mining, in addition to keeping that cost low for idle GPUs of companies that are just looking to deploy, should keep it viable even amid a changing ecosystem that’s focusing on machine learning.

Google Cloud introduces shielded VMs for additional security

While we might like to think all of our applications are equal in our eyes, in reality some are more important than others and require an additional level of security. To meet those requirements, Google introduced shielded virtual machines at Google Next today.

As Google describes it, “Shielded VMs leverage advanced platform security capabilities to help ensure your VMs have not been tampered with. With Shielded VMs, you can monitor and react to any changes in the VM baseline as well as its current runtime state.”

These specialized VMs run on GCP and come with a set of partner security controls to defend against things like rootkits and bootkits, according to Google. There are a whole bunch of things that happen even before an application launches inside a VM, and each step in that process is vulnerable to attack.

That’s because as the machine starts up, before you even get to your security application, it launches the firmware, the boot sequence, the kernel, then the operating system — and then, and only then, does your security application launch.

That time between startup and the security application launching could leave you vulnerable to certain exploits that take advantage of those openings. The shielded VMs strip out as much of that process as possible to reduce the risk.

“What we’re doing here is we are stripping out any of the binary that doesn’t absolutely have to be there. We’re ensuring that every binary that is there is signed, that it’s signed by the right party, and that they load in the proper sequence,” a Google spokesperson explained. All of these steps should reduce overall risk.
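As a rough sketch of what enabling this looks like in practice, creating a shielded VM is a matter of passing the right flags at instance creation (the flag names, image and zone below are assumptions based on Google’s beta tooling, not details from the announcement):

```shell
# Create an instance with Shielded VM features enabled (beta at launch).
# Flag names, image family and zone are illustrative assumptions.
gcloud beta compute instances create my-shielded-vm \
  --zone=us-central1-a \
  --image-family=ubuntu-1804-lts \
  --image-project=gce-uefi-images \
  --shielded-secure-boot \
  --shielded-vtpm \
  --shielded-integrity-monitoring
```

Secure boot enforces the signed-binary chain described above, the virtual TPM records boot measurements, and integrity monitoring is what lets you compare the runtime state against the baseline via Stackdriver.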

Shielded VMs are available in beta now.

Google is rolling out a version of Google Voice for enterprise G Suite customers

Google today said it will be rolling out an enterprise version of its Google Voice service for G Suite users, potentially tapping a new demand source for Google that could help attract a whole host of new users.

Google Voice has long been an enjoyed service among everyday consumers, offering a lot of benefits beyond just having a normal phone number. The enterprise version of Google Voice appears to give companies a way to offer those kinds of tools, including AI-powered features like voicemail transcription, that employees may already be using on their own, potentially skirting company guidelines. Administrators can provision and port phone numbers, get detailed reports and set up call routing functionality. They can also deploy phone numbers to departments or employees, giving them a sort of universal number that isn’t tied to a device — and making it easier to get in touch with someone when necessary.

All of this is an effort to spread adoption of G Suite among larger enterprises, as it offers a nice, consistent business for Google. While its advertising business continues to grow, the company is investing in cloud products as another revenue stream. That division offers a lot of headroom while Google figures out where the actual ceiling of its advertising business is and works on other projects like its hardware, Google Home and others.

While Google didn’t explicitly talk about it ahead of the conference today, there’s another potential opportunity for something like this: call centers. An enterprise version of Google Voice could give companies a way to provision out certain phone numbers to employees to handle customer service requests and get a lot of information about those calls. Google yesterday announced that it was rolling out a more robust set of call center tools that lean on its expertise in machine learning and artificial intelligence, and getting control of the actual numbers that those calls take in is one part of that equation.

There’s also a spam filtering feature, which will probably be useful in handling waves of robo-calls. It’s another product that Google is porting over to its enterprise customers, with somewhat better controls for CTOs and CIOs, after years of understanding how ordinary consumers use it and having an opportunity to rigorously test parts of the product. That time also gives Google a chance to thoroughly research the gaps enterprise customers might need filled in order to sell them on the product.

Google Voice enterprise is going to be available as an early adopter product.

Tuesday, July 24, 2018

Google’s Cloud Functions serverless platform is now generally available

Cloud Functions, Google’s serverless platform that competes directly with tools like AWS Lambda and Azure Functions from Microsoft, is now generally available, the company announced at its Cloud Next conference in San Francisco today.

Google first announced Cloud Functions back in 2016, so this has been a long beta. Overall, it also always seemed as if Google wasn’t quite putting the same resources behind its serverless play when compared to its major competitors. AWS, for example, is placing a major bet on serverless, as is Microsoft. And there are also plenty of startups in this space, too.

Like all Google products that come out of beta, Cloud Functions is now backed by an SLA and the company also today announced that the service now runs in more regions in the U.S. and Europe.
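For a sense of the programming model, an HTTP-triggered Cloud Function is just a function that takes a request and returns a response — Google provisions the endpoint and scales instances on demand. Here’s a minimal sketch, assuming the Python 3.7 runtime Google also announced at the conference (the function name is arbitrary):

```python
def hello_http(request):
    """HTTP Cloud Function: responds to a request with a greeting.

    In the Cloud Functions Python runtime, `request` is a Flask request
    object; `request.args` holds the parsed query-string parameters.
    """
    name = request.args.get("name", "World")
    return f"Hello, {name}!"
```

Deploying something like this is a single command (roughly `gcloud functions deploy hello_http --runtime python37 --trigger-http`), with no servers or scaling configuration to manage.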

In addition to these hosted options, Google also today announced its new Cloud Services platform for enterprises that want to run hybrid clouds. While this doesn’t include a self-hosted Cloud Functions option, Google is betting on Kubernetes as the foundation for businesses that want to run serverless applications (and yes, I hate the term ‘serverless,’ too) in their own data centers.

Google announces a suite of updates to its contact center tools

As Google pushes further and further into enterprise services, it’s looking to leverage what it’s known for — a strong expertise in machine learning — to power some of the most common enterprise functions, including contact centers.

Now Google is applying a lot of those learnings in a bunch of new updates to its contact center tools. That leans on a key focus of Google’s: using machine learning for natural language and image recognition. Those tools have natural applications in enterprises, especially those looking to spin up the kinds of tools that larger companies have for handling complex customer service requests and niche needs. Today’s updates, announced at the Google Cloud Next conference, include a suite of AI tools for Google Cloud Contact Center.

Today the company said it is releasing a couple of updates to its Dialogflow tools, including a new one called phone gateway, which helps companies automatically assign a working phone number to a virtual agent. The company says you can begin taking those calls in “less than a minute” without any infrastructure, with the rest of the machine learning-powered functions, like speech recognition and natural language understanding, managed by Google.

Google is adding AI-powered agent assist tools to the contact center, which can quickly surface relevant information, like suggested articles. It is also updating its analytics tools, which let companies sift through historical audio data to find trends — like common calls and complaints. One application would be spotting a confusing update or a broken tool based on a high volume of complaints, helping companies get a handle on what’s happening without a ton of overhead.

Other new tools include sentiment analysis, spelling correction and tools that understand unstructured documents within a company, like knowledge base articles, and stream that content into Dialogflow. Dialogflow is also getting native audio response.

Outlier raises $6.2M Series A to change how companies use data

Traditionally, companies have gathered data from a variety of sources, then used spreadsheets and dashboards to try and make sense of it all. Outlier wants to change that and deliver a handful of insights that matter most for your job, company and industry right to your inbox. Today the company announced a $6.2 million Series A to further develop that vision.

The round was led by Ridge Ventures with assistance from 11.2 Capital, First Round Capital, Homebrew, Susa Ventures and SV Angel. The company has raised over $8 million.

The startup is trying to solve a difficult problem: delivering meaningful insights without requiring the customer to ask the right questions. With traditional BI tools, you get your data and start asking questions, seeing if the data can give you answers. Outlier wants to bring a level of intelligence and automation by pointing out insights without the user having to explicitly ask the right question.

Company founder and CEO Sean Byrnes says his previous company, Flurry, helped deliver mobile analytics to customers, but in his travels meeting customers in that previous iteration, he always came up against the same question: “This is great, but what should I look for in all that data?”

It was such a compelling question that after he sold Flurry to Yahoo in 2014 for more than $200 million, it stuck in the back of his mind, and he decided to start a business to solve it. He contends that the first 15 years of BI were about getting answers to basic questions about company performance, but the next 15 will be about finding a way to get the software to ask good questions for you based on huge amounts of data.

Byrnes admits that when he launched, he didn’t have much sense of how to put this notion into action, and most people he approached didn’t think it was a great idea. He says he heard “No” from a fair number of investors early on because the artificial intelligence required to fuel a solution like this really wasn’t ready in 2015 when he started the company.

He says it took four or five iterations to get to today’s product, which lets you connect various data sources and, using artificial intelligence and machine learning, delivers a list of four or five relevant insights to the user’s email inbox, pointing out data you might not have noticed — what he calls “shifts below the surface.” If you’re a retailer, that could be changing market conditions that signal you might want to adjust your production goals.

Outlier email example. Photo: Outlier

The company launched in 2015. It took some time to polish the product, but today they have 14 employees and 14 customers including Jack Rogers, Celebrity Cruises and Swarovski.

This round should allow them to continue working to grow the company. “We feel like we hit the right product-market fit because we have customers [generating] reproducible results and really changing the way people use the data,” he said.

Google Cloud goes all-in on hybrid with its new Cloud Services Platform

The cloud isn’t right for every business, be that because of latency constraints at the edge, regulatory requirements or because it’s simply cheaper to own and operate their own data centers for their specific workloads. Given this, it’s maybe no surprise that the vast majority of enterprises today use both public and private clouds in parallel. That’s something Microsoft has long been betting on as part of its strategy for its Azure cloud, and Google, too, is now taking a number of steps in this direction.

With the open-source Kubernetes project, Google launched one of the fundamental building blocks that make running and managing applications in hybrid environments easier for large enterprises. What Google hadn’t done until today, though, is launch a comprehensive solution that includes all of the necessary parts for this kind of deployment. With its new Cloud Services Platform, though, the company is now offering businesses an integrated set of cloud services that can be deployed on both the Google Cloud Platform and in on-premise environments.

As Google Cloud engineering director Chen Goldberg noted in a press briefing ahead of today’s announcement, many businesses also simply want to be able to manage their own workloads on-premise but still be able to access new machine learning tools in the cloud, for example. “Today, to achieve this, use cases involve a compromise between cost, consistency, control and flexibility,” she said. “And this all negatively impacts the desired result.”

Goldberg stressed that the idea behind the Cloud Services Platform is to meet businesses where they are and then allow them to modernize their stack at their own pace. But she also noted that businesses want more than just the ability to move workloads between environments. “Portability isn’t enough,” she said. “Users want consistent experiences so that they can train their team once and run anywhere — and have a single playbook for all environments.”

The two services at the core of this new offering are the Kubernetes container orchestration tool and Istio, a relatively new but quickly growing tool for connecting, managing and securing microservices. Istio is about to hit its 1.0 release.

We’re not simply talking about a collection of open-source tools here. The core of the Cloud Services Platform, Goldberg noted, is “custom configured and battle-tested for enterprises by Google.” In addition, it is deeply integrated with other services in the Google Cloud, including the company’s machine learning tools.

GKE On-Prem

Among these new custom-configured tools are a number of new offerings, which are all part of the larger platform. Maybe the most interesting of these is GKE On-Prem. GKE, the Google Kubernetes Engine, is the core Google Cloud service for managing containers in the cloud. And now Google is essentially bringing this service to the enterprise data center, too.

The service includes access to all of the usual features of GKE in the cloud, including the ability to register and manage clusters and monitor them with Stackdriver, as well as identity and access management. It also includes a direct line to the GCP Marketplace, which recently launched support for Kubernetes-based applications.

Using the GCP Console, enterprises can manage both their on-premise and cloud-hosted GKE clusters without having to switch between different environments. GKE On-Prem connects seamlessly to a Google Cloud Platform environment and looks and behaves exactly like the cloud version.

Enterprise users also can get access to professional services and enterprise-grade support for help with managing the service.

“Google Cloud is the first and only major cloud vendor to deliver managed Kubernetes on-prem,” Goldberg argued.

GKE Policy Management

Related to this, Google also today announced GKE Policy Management, which is meant to provide Kubernetes administrators with a single tool for managing all of their security policies across clusters. It’s agnostic as to where the Kubernetes cluster is running, but you can use it to port your existing Google Cloud identity-based policies to these clusters. This new feature will soon launch in alpha.

Managed Istio

The other major new service Google is launching is Managed Istio (together with Apigee API Management for Istio) to help businesses manage and secure their microservices. The open source Istio service mesh gives admins and operators the tools to manage these services and, with this new managed offering, Google is taking the core of Istio and making it available as a managed service for GKE users.

With this, users get access to Istio’s service discovery mechanisms and its traffic management tools for load balancing and routing traffic to containers and VMs, as well as its tools for getting telemetry back from the workloads that run on these clusters.
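To make the traffic-management piece concrete, this is roughly what a canary-style traffic split looks like in Istio’s configuration — a sketch using Istio’s VirtualService resource, where the service name and version subsets are invented for illustration:

```yaml
# Route 90% of traffic to v1 of a service and 10% to a canary v2.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 90
    - destination:
        host: reviews
        subset: v2
      weight: 10
```

The `v1` and `v2` subsets would be defined by labels in a companion DestinationRule; Istio’s sidecar proxies then enforce the split without any change to the application code, which is what makes canary rollouts like this cheap to operate.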

In addition to these three main new services, Google is also launching a couple of auxiliary tools around GKE and the serverless computing paradigm today. The first of these is the GKE serverless add-on, which makes it easy to run serverless workloads on GKE with a single-step deploy process. This, Google says, will allow developers to go from source code to container “instantaneously.” This tool is currently available as a preview, and Google is making parts of this technology available under the umbrella of its new Knative open-source components — the same components that make the serverless add-on possible.

And to wrap it all up, Google also today mentioned a new fully managed continuous integration and delivery service, Google Cloud Build, though the details around this service remain under wraps.

So there you have it. By themselves, all of those announcements may seem a bit esoteric. As a whole, though, they show how Google’s bet on Kubernetes is starting to pay off. As businesses opt for containers to deploy and run their new workloads (and maybe even bring older applications into the cloud), GKE has put Google Cloud on the map as a place to run them in a hosted environment. Now it makes sense for Google to extend this to its users’ data centers, too. With managed Kubernetes offerings from large and small companies like SUSE, Platform9 and Containership, this is starting to become a big business. It’s no surprise the company that started it all wants a piece of this pie, too.

InVision CEO Clark Valberg to talk design at Disrupt SF

To Clark Valberg, the screen is the most important place in the world. And he’s not the only one who thinks so. It isn’t just tech companies spending their money on design. The biggest brands in the world are pouring money into their digital presence, and for many, the first step is InVision.

InVision launched back in 2011 with a simple premise: What if, instead of the back-and-forth between designers and engineers and executives, there was a program that let these interested parties collaborate on a prototype?

The first iteration simply let designers build out prototypes, complete with animations and transitions, so that engineers didn’t spend time building things that would only change later.

As that tool grew, InVision realized that it was in conversation with designers across the industry, and that it hadn’t yet fixed one of their biggest pain points. That’s why, in 2017, InVision launched Studio, a design platform that was built specifically for designers building products.

Alongside Studio, InVision also launched its own app store for design programs to loop into the larger InVision platform. And the company also launched a fund to invest in early-stage design companies.

The idea here is to become the Salesforce of the design world, with the entire industry centering around this company and its various offerings.

InVision has raised more than $200 million, and serves 4 million users, including 80 percent of the Fortune 500. We’re absolutely thrilled to have Clark Valberg, InVision cofounder and CEO, join us at Disrupt SF in September.

The full agenda is here. Passes for the show are available at the Early-Bird rate until July 25 here.

Watch the Google Cloud Next day one keynote live right here

Google is hosting its big cloud conference, Google Cloud Next, this morning over at the Moscone Center in San Francisco. It’s obviously not quite as large as its flagship I/O event earlier this year, but Google’s cloud efforts have become one of the company’s brightest spots over the past several quarters.

With heavy investments in Google Cloud’s infrastructure, its enterprise services, as well as a suite of machine learning tools layered on top of all that, Google is clearly trying to make Google Cloud a core piece of its business going forward. Traditionally an advertising juggernaut, Google is now figuring out what comes next after that, even as that advertising business continues to grow at a very healthy clip.

The keynote starts at 9 a.m. Pacific time, and the TechCrunch team is on the ground here covering all the newsiest and best bits. Be sure to check out our full coverage on TechCrunch as the keynote moves forward.