Thursday, January 31, 2019

Google’s Cloud Firestore NoSQL database hits general availability

Google today announced that Cloud Firestore, its serverless NoSQL document database for mobile, web and IoT apps, is now generally available. Google is also introducing a few new features and bringing the service to ten new regions.

With this launch, Google is giving developers the option to run their databases in a single region. During the beta, developers had to use multi-region instances, and while that obviously has some advantages with regard to resilience, it’s also more expensive, and not every app needs to run in multiple regions.

“Some people don’t need the added reliability and durability of a multi-region application,” Google product manager Dan McGrath told me. “So for them, having a more cost-effective regional instance is very attractive, as well as data locality and being able to place a Cloud Firestore database as close as possible to their user base.”

The new regional instance pricing is up to 50 percent cheaper than the current multi-region instance prices. Which option you pick does influence the SLA guarantee Google gives you, though. Regional instances are still replicated across multiple zones inside the region, but all of the data remains within a limited geographic area. Hence, Google promises 99.999 percent availability for multi-region instances and 99.99 percent availability for regional instances.
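Those availability figures translate into concrete downtime budgets. A quick back-of-the-envelope sketch, assuming a 365-day year and treating the SLA as a simple uptime percentage:

```python
# Downtime a service can accrue per year under a given availability SLA,
# assuming a 365-day year and a simple uptime percentage.
def downtime_minutes_per_year(availability_pct: float) -> float:
    minutes_per_year = 365 * 24 * 60  # 525,600 minutes
    return minutes_per_year * (1 - availability_pct / 100)

print(downtime_minutes_per_year(99.99))   # regional: roughly 53 minutes
print(downtime_minutes_per_year(99.999))  # multi-region: roughly 5 minutes
```

In other words, the multi-region SLA allows for about a tenth of the downtime permitted by the regional one.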

And speaking of regions, Cloud Firestore is now available in ten new regions around the world. The service debuted in a single location and added two more during the beta, so Firestore is now available in 13 locations (including the North America and Europe multi-region offerings). McGrath tells me Google is still planning its next set of locations, but he stressed that the current set provides pretty good coverage across the globe.

Also new in this release is deeper integration with Stackdriver, the Google Cloud monitoring service, which can now monitor read, write and delete operations in near-real time. McGrath also noted that Google plans to add the ability to query documents across collections and to increment database values without needing a transaction soon.
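To see why a server-side increment removes the need for a transaction, consider a toy in-memory document store; the class and method names below are purely illustrative, not Firestore’s actual API. Instead of the client reading a value, adding to it and writing it back (a pattern that loses updates under concurrency unless wrapped in a transaction), the client just sends a delta and the server applies it atomically:

```python
import threading

# Toy in-memory "document store" illustrating server-side atomic increments.
# Illustrative names only; this is not Firestore's API.
class DocStore:
    def __init__(self):
        self._docs = {}
        self._lock = threading.Lock()

    def increment(self, key: str, delta: int) -> None:
        # The server applies the delta atomically. The client never reads
        # first, so no transaction is needed to avoid lost updates.
        with self._lock:
            self._docs[key] = self._docs.get(key, 0) + delta

    def get(self, key: str) -> int:
        return self._docs.get(key, 0)

store = DocStore()
threads = [threading.Thread(target=store.increment, args=("visits", 1))
           for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(store.get("visits"))  # 100: no updates lost despite concurrent writers
```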

It’s worth noting that while Cloud Firestore falls under the Google Firebase brand, which typically focuses on mobile developers, Firestore offers all of the usual libraries for Compute Engine or Kubernetes Engine applications, too.

“If you’re looking for a more traditional NoSQL document database, then Cloud Firestore gives you a great solution that has all the benefits of not needing to manage the database at all,” McGrath said. “And then, through the Firebase SDK, you can use it as a more comprehensive back-end as a service that takes care of things like authentication for you.”

One of the advantages of Firestore is that it has extensive offline support, which makes it ideal for mobile developers but also IoT solutions. Maybe it’s no surprise then that Google is positioning it as a tool for both Google Cloud and Firebase users.

Tuesday, January 29, 2019

Figma’s design and prototyping tool gets new enterprise collaboration features

Figma, the design and prototyping tool that aims to offer a web-based alternative to similar tools from the likes of Adobe, is launching a few new features today that will make the service easier to use to collaborate across teams in large organizations. Figma Organization, as the company calls this new feature set, is the company’s first enterprise-grade service that features the kind of controls and security tools that large companies expect. To develop and test these tools, the company partnered with companies like Rakuten, Square, Volvo and Uber, and introduced features like unified billing and audit reports for the admins and shared fonts, browsable teams and organization-wide design systems for the designers.

For designers, one of the most important new features here is probably organization-wide design systems. Figma already had tools to create design systems, of course, but this enterprise version now makes it easier for teams to share libraries and fonts with each other to ensure that the same styles are applied to products and services across a company.

Businesses can now also create as many teams as they would like and admins will get more controls over how files are shared and with whom they can be shared. That doesn’t seem like an especially interesting feature, but because many larger organizations work with customers outside of the company, it’s something that will make Figma more interesting to these large companies.

After working with Figma on these new tools, Uber, for example, moved all of its company over to the service and 90 percent of its product design work now happens on the platform. “We needed a way to get people in the right place at the right time — in the right team with the right assets,” said Jeff Jura, staff product designer who focuses on Uber’s design systems. “Figma does that.”

Other new enterprise features that matter in this context are single sign-on support, activity logs for tracking activities across users, teams, projects and files, and draft ownership to ensure that all the files that have been created in an organization can be recovered after an employee leaves the company.

Figma still offers free and professional tiers (at $12/editor/month). Unsurprisingly, the new Organization tier is a bit more expensive and will cost $45/editor/month.

SAP job cuts prove harsh realities of enterprise transformation

As traditional enterprise companies like IBM, Oracle and SAP try to transform into more modern cloud companies, they are finding that the transition, while absolutely necessary, can require difficult adjustments along the way. Just this morning, SAP announced that it was restructuring in order to save between €750 million and €800 million (approximately $856 million to $914 million).

While the company tried to put as positive a spin on the announcement as possible, it could involve up to 4,000 job cuts as SAP shifts to more modern technologies. “We are going to move our people and our focus to the areas where the new economy needs SAP the most: artificial intelligence, deep machine learning, IoT, blockchain and quantum computing,” CEO Bill McDermott told a post-earnings press conference.

If that sounds familiar, it should. These are precisely the areas that IBM has been trying to concentrate on in its own transformation over the last several years. IBM has struggled to make this change and has also framed workforce reductions as a move to more modern skill sets. It’s worth pointing out, though, that SAP’s financial picture has been more positive than IBM’s.

CFO Luka Mucic tried to stress that this was not about cost cutting so much as ensuring the long-term health of the company, but admitted it did involve job cuts. These could include early retirement and other incentives to leave the company voluntarily. “We still expect that there will be a number probably slightly higher than what we saw in the 2015 program where we had around 3000 employees leave the company, where at the end of this process will leave SAP,” he said.

The company believes that in spite of these cuts, it will actually have more employees by this time next year than it has now, but they will be shifted to these new technology areas. “This is a growth company move, not a cost-cutting move. Every dollar that we gain from a restructuring initiative will be invested back into headcount and more jobs,” McDermott said. SAP also stressed that cloud revenue will reach $35 billion by 2023.

Holger Mueller, an analyst who watches enterprise companies like SAP for Constellation Research, says the company is doing what it has to do in terms of transformation. “SAP is in the midst of upgrading its product portfolio to the 21st century demands of its customer base,” Mueller told TechCrunch. He added that this is not easy to pull off, and it requires new skill sets to build, operate and sell the new technologies.

McDermott stressed that the company would be offering a generous severance package to any employee leaving the company as a result of today’s announcement.

Today’s announcement comes after the company made two multi-billion dollar acquisitions to help in this transition in 2018, paying $8 billion for Qualtrics and $2.4 billion for CallidusCloud.

Timescale announces $15M investment and new enterprise version of TimescaleDB

It’s a big day for Timescale, makers of the open source time series database, TimescaleDB. The company announced a $15 million investment and a new enterprise version of the product.

The investment is technically an extension of the $12.4 million Series A it raised last January, which it’s referring to as A1. Today’s round is led by Icon Ventures with existing investors Benchmark, NEA and Two Sigma Ventures also participating. With today’s funding, the startup has raised $31 million.

Timescale makes a time series database. That means it can ingest large amounts of data and measure how it changes over time. This comes in handy for a variety of use cases, from financial services to smart homes to self-driving cars — or any data-intensive activity you want to measure over time.

While there are a number of time-series database offerings on the market, Timescale co-founder and CEO Ajay Kulkarni says that what makes his company’s approach unique is that it uses SQL, one of the most popular query languages in the world. Timescale wanted to take advantage of that penetration and built its product on top of Postgres, the popular open source SQL database. This gave it an offering that is based on SQL and highly scalable.
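TimescaleDB queries are plain SQL with time-series extensions, such as a time_bucket() function for grouping rows into fixed intervals. The sketch below is a rough, dependency-free illustration of what such a bucketed aggregation computes; the data and the function name are made up for illustration:

```python
from collections import defaultdict
from datetime import datetime

# Group (timestamp, value) readings into fixed-size time buckets and average
# each bucket: roughly what TimescaleDB's time_bucket() + AVG() expresses in SQL.
def bucket_average(readings, bucket_minutes: int):
    # bucket_minutes must evenly divide 60 for this simple flooring to work.
    sums = defaultdict(lambda: [0.0, 0])
    for ts, value in readings:
        # Floor each timestamp to the start of its bucket.
        start = ts.replace(minute=ts.minute - ts.minute % bucket_minutes,
                           second=0, microsecond=0)
        sums[start][0] += value
        sums[start][1] += 1
    return {start: total / count for start, (total, count) in sorted(sums.items())}

readings = [
    (datetime(2019, 1, 29, 12, 0, 30), 10.0),  # e.g. sensor readings
    (datetime(2019, 1, 29, 12, 2, 0), 20.0),
    (datetime(2019, 1, 29, 12, 7, 0), 30.0),
]
print(bucket_average(readings, 5))
# The first two readings land in the 12:00 bucket (average 15.0),
# the third in the 12:05 bucket (average 30.0).
```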

Timescale admittedly came late to the market in 2017, but by offering a unique approach and making it open source, it has been able to gain traction quickly. “Despite entering into what is a very crowded database market, we’ve seen quite a bit of community growth because of this message of SQL and scale for time series,” Kulkarni told TechCrunch.

In just over 22 months, the company has racked up over a million downloads and a range of users, from old-guard companies like Charter, Comcast and Hexagon Mining to more modern companies like Nutanix and TransferWise.

With a strong base community in place, the company believes it’s now time to commercialize its offering, and in addition to an open source license, it’s introducing a commercial license. “Up until today, our main business model has been through support and deployment assistance. With this new release, we will also have enterprise features that are available with a commercial license,” Kulkarni explained.

The commercial version will offer a more sophisticated automation layer for larger companies with greater scale requirements. It will also provide better lifecycle management, so companies can get rid of older data or move it to cheaper long-term storage to reduce costs. It’s also offering the ability to reorder data in an automated fashion when that’s required, and finally, it’s making it easier to turn the time series data into a series of data points for analytics purposes. The company also hinted that a managed cloud version is on the road map for later this year.

The new money should help Timescale continue fueling the growth and development of the product, especially as it builds out the commercial offering. Timescale, which was founded in 2015 in NYC, currently has 30 employees. With the new influx of cash, it expects to double that over the next year.

Monday, January 28, 2019

Dropbox snares HelloSign for $230M, gets workflow and eSignature

Dropbox announced today that it has purchased HelloSign, a company that provides lightweight document workflow and eSignature services. The company paid a hefty $230 million for the privilege.

Dropbox’s SVP of engineering, Quentin Clark, sees this as more than simply bolting on electronic signature functionality to the Dropbox solution. For him, the workflow capabilities that HelloSign added in 2017 were really key to the purchase.

“What is unique about HelloSign is that the investment they’ve made in APIs and the workflow products is really so aligned with our long-term direction,” Clark told TechCrunch. “It’s not just a thing to do one more activity with Dropbox, it’s really going to help us pursue that broader vision,” he added. That vision involves extending the storage capabilities that are at the core of the Dropbox solution.

This can also be seen in the context of the Extensions capability that Dropbox added last year; HelloSign was actually one of the companies involved at launch. While Clark says the company will continue to encourage partners to extend the Dropbox solution, today’s acquisition gives it a capability of its own that doesn’t require a partnership.

HelloSign CEO Joseph Walla says being part of Dropbox gives HelloSign access to the resources of a much larger public company, which should allow it to reach a broader market than it could on its own. “We share a design philosophy based on building the best experience for end-users, fueling our efficient business models and sales strategies. Together with Dropbox, we can bring more seamless document workflows to even more customers and dramatically accelerate our impact,” Walla said in a blog post announcing the deal.

Whitney Bouck, COO at HelloSign, who previously held stints at Box and EMC Documentum, said the company will remain an independent entity. That means it will continue to operate with its current management structure, and Clark indicated that all of the employees will be offered employment at Dropbox as part of the deal.

When you consider that HelloSign, a Bay Area startup that launched in 2011, raised just $16 million, it appears to be an impressive return for investors.

This is a developing story. More to come.

Saturday, January 26, 2019

Has the fight over privacy changed at all in 2019?

Few issues divide the tech community quite like privacy. Much of Silicon Valley’s wealth has been built on data-driven advertising platforms, and yet, there remain constant concerns about the invasiveness of those platforms.

Such concerns have intensified in just the last few weeks, as France’s privacy regulator levied a record fine on Google under Europe’s General Data Protection Regulation (GDPR), a fine the company now plans to appeal. Yet with global platform usage and service sales continuing to tick up, we asked a panel of eight privacy experts: “Has anything fundamentally changed around privacy in tech in 2019? What is the state of privacy and has the outlook changed?”

This week’s participants include Albert Gidari, Gabriel Weinberg, Melika Carroll, Johnny Ryan, John Miller, Nuala O’Connor, Chris Baker and Christopher Wolf.

TechCrunch is experimenting with new content forms. Consider this a recurring venue for debate, where leading experts – with a diverse range of vantage points and opinions – provide us with thoughts on some of the biggest issues currently in tech, startups and venture. If you have any feedback, please reach out: Arman.Tabatabai@techcrunch.com.


Thoughts & Responses:


Albert Gidari

Albert Gidari is the Consulting Director of Privacy at the Stanford Center for Internet and Society. He was a partner for over 20 years at Perkins Coie LLP, achieving a top-ranking in privacy law by Chambers, before retiring to consult with CIS on its privacy program. He negotiated the first-ever “privacy by design” consent decree with the Federal Trade Commission. A recognized expert on electronic surveillance law, he brought the first public lawsuit before the Foreign Intelligence Surveillance Court, seeking the right of providers to disclose the volume of national security demands received and the number of affected user accounts, ultimately resulting in greater public disclosure of such requests.

There is no doubt that the privacy environment changed in 2018 with the passage of California’s Consumer Privacy Act (CCPA), implementation of the European Union’s General Data Protection Regulation (GDPR), and new privacy laws enacted around the globe.

“While privacy regulation seeks to make tech companies better stewards of the data they collect and their practices more transparent, in the end, it is a deception to think that users will have more “privacy.””
For one thing, large tech companies have grown huge privacy compliance organizations to meet their new regulatory obligations. For another, the major platforms now are lobbying for passage of a federal privacy law in the U.S. This is not surprising after a year of privacy miscues, breaches and negative privacy news. But does all of this mean a fundamental change is in store for privacy? I think not.

The fundamental model sustaining the Internet is based upon the exchange of user data for free service. As long as advertising dollars drive the growth of the Internet, regulation simply will tinker around the edges, setting sideboards to dictate the terms of the exchange. The tech companies may be more accountable for how they handle data and to whom they disclose it, but the fact is that data will continue to be collected from all manner of people, places and things.

Indeed, if the past year has shown anything it is that two rules are fundamental: (1) everything that can be connected to the Internet will be connected; and (2) everything that can be collected, will be collected, analyzed, used and monetized. It is inexorable.

While privacy regulation seeks to make tech companies better stewards of the data they collect and their practices more transparent, in the end, it is a deception to think that users will have more “privacy.” No one even knows what “more privacy” means. If it means that users will have more control over the data they share, that is laudable but not achievable in a world where people have no idea how many times or with whom they have shared their information already. Can you name all the places over your lifetime where you provided your SSN and other identifying information? And given that the largest data collector (and likely least secure) is government, what does control really mean?

All this is not to say that privacy regulation is futile. But it is to recognize that nothing proposed today will result in a fundamental shift in privacy policy or provide a panacea of consumer protection. Better privacy hygiene and more accountability on the part of tech companies are good things, but they don’t solve the privacy paradox: the same users who want more privacy broadly share their information with others who are less trustworthy on social media (ask Jeff Bezos), and the government hoovers up data at a rate that makes tech companies look like pikers (visit a smart city near you).

Many years ago, I used to practice environmental law. I watched companies strive to comply with new laws intended to control pollution by creating compliance infrastructures and teams aimed at preventing, detecting and deterring violations. Today, I see the same thing at the large tech companies – hundreds of employees have been hired to do “privacy” compliance. The language is the same too: cradle to grave privacy documentation of data flows for a product or service; audits and assessments of privacy practices; data mapping; sustainable privacy practices. In short, privacy has become corporatized and industrialized.

True, we have cleaner air and cleaner water as a result of environmental law, but we also have made it lawful and built businesses around acceptable levels of pollution. Companies still lawfully dump arsenic in the water and belch volatile organic compounds in the air. And we still get environmental catastrophes. So don’t expect today’s “Clean Privacy Law” to eliminate data breaches or profiling or abuses.

The privacy world is complicated and few people truly understand the number and variety of companies involved in data collection and processing, and none of them are in Congress. The power to fundamentally change the privacy equation is in the hands of the people who use the technology (or choose not to) and in the hands of those who design it, and maybe that’s where it should be.


Gabriel Weinberg

Gabriel Weinberg is the Founder and CEO of privacy-focused search engine DuckDuckGo.

Coming into 2019, interest in privacy solutions is truly mainstream. There are signs of this everywhere (media, politics, books, etc.) and also in DuckDuckGo’s growth, which has never been faster. With solid majorities now seeking out private alternatives and other ways to be tracked less online, we expect governments to continue to step up their regulatory scrutiny and for privacy companies like DuckDuckGo to continue to help more people take back their privacy.

“Consumers don’t necessarily feel they have anything to hide – but they just don’t want corporations to profit off their personal information, or be manipulated, or unfairly treated through misuse of that information.”

We’re also seeing companies take action beyond mere regulatory compliance, reflecting this new majority will of the people and its tangible effect on the market. Just this month we’ve seen Apple’s Tim Cook call for stronger privacy regulation and the New York Times report strong ad revenue in Europe after stopping the use of ad exchanges and behavioral targeting.

At its core, this groundswell is driven by the negative effects that stem from the surveillance business model. The percentage of people who have noticed ads following them around the Internet, or who have had their data exposed in a breach, or who have had a family member or friend experience some kind of credit card fraud or identity theft issue, reached a boiling point in 2018. On top of that, people learned of the extent to which the big platforms like Google and Facebook that collect the most data are used to propagate misinformation, discrimination, and polarization. Consumers don’t necessarily feel they have anything to hide – but they just don’t want corporations to profit off their personal information, or be manipulated, or unfairly treated through misuse of that information. Fortunately, there are alternatives to the surveillance business model and more companies are setting a new standard of trust online by showcasing alternative models.


Melika Carroll

Melika Carroll is Senior Vice President, Global Government Affairs at Internet Association, which represents over 45 of the world’s leading internet companies, including Google, Facebook, Amazon, Twitter, Uber, Airbnb and others.

We support a modern, national privacy law that provides people meaningful control over the data they provide to companies so they can make the most informed choices about how that data is used, seen, and shared.

“Any national privacy framework should provide the same protections for people’s data across industries, regardless of whether it is gathered offline or online.”
Internet companies believe all Americans should have the ability to access, correct, delete, and download the data they provide to companies.

Americans will benefit most from a federal approach to privacy – as opposed to a patchwork of state laws – that protects their privacy regardless of where they live. If someone in New York is video chatting with their grandmother in Florida, they should both benefit from the same privacy protections.

It’s also important to consider that all companies – both online and offline – use and collect data. Any national privacy framework should provide the same protections for people’s data across industries, regardless of whether it is gathered offline or online.

Two other important pieces of any federal privacy law include user expectations and the context in which data is shared with third parties. Expectations may vary based on a person’s relationship with a company, the service they expect to receive, and the sensitivity of the data they’re sharing. For example, you expect a car rental company to be able to track the location of the rented vehicle that doesn’t get returned. You don’t expect the car rental company to track your real-time location and sell that data to the highest bidder. Additionally, the same piece of data can have different sensitivities depending on the context in which it’s used or shared. For example, your name on a business card may not be as sensitive as your name on the sign in sheet at an addiction support group meeting.

This is a unique time in Washington as there is bipartisan support in both chambers of Congress as well as in the administration for a federal privacy law. Our industry is committed to working with policymakers and other stakeholders to find an American approach to privacy that protects individuals’ privacy and allows companies to innovate and develop products people love.


Johnny Ryan

Dr. Johnny Ryan FRHistS is Chief Policy & Industry Relations Officer at Brave. His previous roles include Head of Ecosystem at PageFair, and Chief Innovation Officer of The Irish Times. He has a PhD from the University of Cambridge, and is a Fellow of the Royal Historical Society.

Tech companies will probably have to adapt to two privacy trends.

“As lawmakers and regulators in Europe and in the United States start to think of “purpose specification” as a tool for antitrust enforcement, tech giants should beware.”

First, the GDPR is emerging as a de facto international standard.

In the coming years, the application of GDPR-like laws to commercial use of consumers’ personal data in the EU, Britain (post-EU), Japan, India, Brazil, South Korea, Malaysia, Argentina, and China will bring more than half of global GDP under a similar standard.

Whether this emerging standard helps or harms United States firms will be determined by whether the United States enacts and actively enforces robust federal privacy laws. Unless there is a federal GDPR-like law in the United States, there may be a degree of friction and the potential of isolation for United States companies.

However, there is an opportunity in this trend. The United States can assume the global lead by doing two things. First, enact a federal law that borrows from the GDPR, including a comprehensive definition of “personal data”, and robust “purpose specification”. Second, invest in world-leading regulation that pursues test cases, and defines practical standards. Cutting edge enforcement of common principles-based standards is de facto leadership.

Second, privacy and antitrust law are moving closer to each other, and might squeeze big tech companies very tightly indeed.

Big tech companies “cross-use” user data from one part of their business to prop up others. The result is that a company can leverage all the personal information accumulated from its users in one line of business, and for one purpose, to dominate other lines of business too.

This is likely to have anti-competitive effects. Rather than competing on the merits, the company can enjoy the unfair advantage of massive network effects even though it may be starting from scratch in a new line of business. This stifles competition and hurts innovation and consumer choice.

Antitrust authorities in other jurisdictions have addressed this. In 2015, the Belgian National Lottery was fined for re-using personal information acquired through its monopoly for a different, and incompatible, line of business.

As lawmakers and regulators in Europe and in the United States start to think of “purpose specification” as a tool for antitrust enforcement, tech giants should beware.


John Miller

John Miller is the VP for Global Policy and Law at the Information Technology Industry Council (ITI), a D.C.-based advocacy group for the high-tech sector. Miller leads ITI’s work on cybersecurity, privacy, surveillance, and other technology and digital policy issues.

Data has long been the lifeblood of innovation. And protecting that data remains a priority for individuals, companies and governments alike. However, as times change and innovation progresses at a rapid rate, it’s clear the laws protecting consumers’ data and privacy must evolve as well.

“Data has long been the lifeblood of innovation. And protecting that data remains a priority for individuals, companies and governments alike.”

As the global regulatory landscape shifts, there is now widespread agreement among business, government, and consumers that we must modernize our privacy laws, and create an approach to protecting consumer privacy that works in today’s data-driven reality, while still delivering the innovations consumers and businesses demand.

More and more, lawmakers and stakeholders acknowledge that an effective privacy regime provides meaningful privacy protections for consumers regardless of where they live. Approaches, like the framework ITI released last fall, must offer an interoperable solution that can serve as a model for governments worldwide, providing an alternative to a patchwork of laws that could create confusion and uncertainty over what protections individuals have.

Companies are also increasingly aware of the critical role they play in protecting privacy. Looking ahead, the tech industry will continue to develop mechanisms to hold us accountable, including recommendations that any privacy law mandate companies identify, monitor, and document uses of known personal data, while ensuring the existence of meaningful enforcement mechanisms.


Nuala O’Connor

Nuala O’Connor is president and CEO of the Center for Democracy & Technology, a global nonprofit committed to the advancement of digital human rights and civil liberties, including privacy, freedom of expression, and human agency. O’Connor has served in a number of presidentially appointed positions, including as the first statutorily mandated chief privacy officer in U.S. federal government when she served at the U.S. Department of Homeland Security. O’Connor has held senior corporate leadership positions on privacy, data, and customer trust at Amazon, General Electric, and DoubleClick. She has practiced at several global law firms including Sidley Austin and Venable. She is an advocate for the use of data and internet-enabled technologies to improve equity and amplify marginalized voices.

For too long, Americans’ digital privacy has varied widely, depending on the technologies and services we use, the companies that provide those services, and our capacity to navigate confusing notices and settings.

“Americans deserve comprehensive protections for personal information – protections that can’t be signed, or check-boxed, away.”
We are burdened with trying to make informed choices that align with our personal privacy preferences on hundreds of devices and thousands of apps, and reading and parsing as many different policies and settings. No individual has the time nor capacity to manage their privacy in this way, nor is it a good use of time in our increasingly busy lives. These notices and choices and checkboxes have become privacy theater, but not privacy reality.

In 2019, the legal landscape for data privacy is changing, and so is the public perception of how companies handle data. As more information comes to light about the effects of companies’ data practices and myriad stewardship missteps, Americans are surprised and shocked about what they’re learning. They’re increasingly paying attention, and questioning why they are still overburdened and unprotected. And with intensifying scrutiny by the media, as well as state and local lawmakers, companies are recognizing the need for a clear and nationally consistent set of rules.

Personal privacy is the cornerstone of the digital future people want. Americans deserve comprehensive protections for personal information – protections that can’t be signed, or check-boxed, away. The Center for Democracy & Technology wants to help craft those legal principles to solidify Americans’ digital privacy rights for the first time.


Chris Baker

Chris Baker is Senior Vice President and General Manager of EMEA at Box.

Last year saw data privacy hit the headlines as businesses and consumers alike were forced to navigate the implementation of GDPR. But it’s far from over.

“…customers will have trust in a business when they are given more control over how their data is used and processed”
2019 will be the year that the rest of the world catches up to the legislative example set by Europe, as similar data regulations come to the forefront. Organizations must ensure they are compliant with regional data privacy regulations, and more GDPR-like policies will start to have an impact. This can present a headache when it comes to data management, especially if you’re operating internationally. However, customers will have trust in a business when they are given more control over how their data is used and processed, and customers can rest assured knowing that no matter where they are in the world, businesses must meet the highest bar possible when it comes to data security.

Starting with the U.S., 2019 will see larger corporations opt in to GDPR to support global business practices. At the same time, local data regulators will lift large sections of the EU legislative framework and implement these rules in their own countries. 2018 was the year of GDPR in Europe, and 2019 will be the year of GDPR globally.


Christopher Wolf

Christopher Wolf is the Founder and Chair of the Future of Privacy Forum think tank, and is senior counsel at Hogan Lovells focusing on internet law, privacy and data protection policy.

“Regardless of the outcome of the debate over a new federal privacy law, the issue of the privacy and protection of personal data is unlikely to recede.”
With the EU GDPR in effect since last May (setting a standard other nations are emulating), with the adoption of a highly regulatory and broadly applicable state privacy law in California last summer (and similar laws adopted or proposed in other states), and with intense focus on the data collection and sharing practices of large tech companies, the time may have come for Congress to adopt a comprehensive federal privacy law. Complicating the adoption of a federal law will be the issue of preemption of state laws and what to do with highly developed sectoral laws like HIPAA and Gramm-Leach-Bliley. Also to be determined is the expansion of FTC regulatory powers. Regardless of the outcome of the debate over a new federal privacy law, the issue of the privacy and protection of personal data is unlikely to recede.

Friday, January 25, 2019

Pentagon stands by finding of no conflict of interest in JEDI RFP process

A line in a new court filing by the Department of Defense suggests that it might reopen an investigation into a possible conflict of interest in the JEDI contract RFP process involving a former AWS employee. The story has attracted a great deal of attention in major news publications including the Washington Post and Wall Street Journal, but a Pentagon spokesperson has told TechCrunch that nothing has changed.

In the document, filed with the court on Wednesday, the government’s legal representatives sought to outline its legal arguments in the case. The line that attracted so much attention stated, “Now that Amazon has submitted a proposal, the contracting officer is considering whether Amazon’s re-hiring Mr. Ubhi creates an OCI that cannot be avoided, mitigated, or neutralized.” OCI stands for Organizational Conflict of Interest in DoD lingo.

When asked about this specific passage, Pentagon spokesperson Heather Babb made clear the conflict had been investigated earlier and that Ubhi had recused himself from the process. “During his employment with DDS, Mr. Deap Ubhi recused himself from work related to the JEDI contract. DOD has investigated this issue, and we have determined that Mr. Ubhi complied with all necessary laws and regulations,” Babb told TechCrunch.

She repeated that statement when asked specifically about the language in the DoD’s filing. Ubhi did work at Amazon prior to joining the DoD and returned to work for them after he left.

The Department of Defense’s decade-long, $10 billion JEDI cloud contract process has attracted a lot of attention, and not just for the size of the deal. The Pentagon has said this will be a winner-take-all affair. Oracle and IBM have filed formal complaints, and Oracle filed a lawsuit in December alleging, among other things, a conflict of interest involving Ubhi, and that the single-vendor approach was designed to favor AWS. The Pentagon has denied these allegations.

The DoD completed the RFP process at the end of October and is expected to choose the winning vendor in April.

Vodafone pauses Huawei network supply purchases in Europe

Huawei had a very good 2018, and it’s likely to have a very good 2019, as well. But there’s one little thing that keeps putting a damper on the hardware maker’s global expansion plans. The U.S. and Canada have already taken action over the company’s perceived links to the Chinese government, and now Vodafone is following suit over concerns that other countries may do the same.

The U.K.-based telecom giant announced this week that it’s enacting a temporary halt on purchases from the Chinese hardware maker. The move arrives out of concern that additional countries may ban Huawei products, putting the world’s second-largest carrier in a tricky spot as it works to roll out 5G networks across the globe.

For now, the move is focused on European markets. As The Wall Street Journal notes, there remains some possibility that Vodafone could go forward with Huawei networking gear in other markets, including India, Turkey and parts of Africa. In Europe, however, these delays could ultimately work to raise the price and/or delay its planned 5G push.

“We have decided to pause further Huawei in our core whilst we engage with the various agencies and governments and Huawei just to finalize the situation, of which I feel Huawei is really open and working hard,” Vodafone CEO Nick Read said in a statement.

Huawei has continued to deny all allegations related to Chinese government spying.

Thursday, January 24, 2019

Apple finally brings Microsoft Office to the Mac App Store, and there is much rejoicing

That slow clap you hear spreading around the internet today could be due to the fact that Apple has finally added Microsoft Office to the Mac App Store. The package will include Word, Excel, PowerPoint, Outlook and OneNote.

Shaan Pruden, senior director of worldwide developer relations at Apple, says that when the company overhauled the App Store last year, it added the ability to roll several apps into a subscription package with the idea of bringing Microsoft Office into the fold. That lack of bundling had been a stumbling block to an earlier partnership.

“One of the features that we brought specifically in working with Microsoft was the ability to subscribe to bundles, which is obviously something that they would need in order to bring Office 365 to the Mac App Store.”

That’s because Microsoft sells Office 365 subscriptions as a package of applications, and it didn’t want to alter the experience by forcing customers to download each one individually, Jared Spataro, corporate vice president for Microsoft 365 explained.

PowerPoint on the Mac. Photo: Apple

Spataro said that up until now, customers could of course go directly to Microsoft or another retail outlet to subscribe to the same bundle, but what today’s announcement does is wrap the subscription process into an integrated Mac experience where installation and updates all happen in a way you expect with macOS.

“The apps themselves are updated through the App Store, and we’ve done a lot of great work between the two companies to make sure that the experience really feels good and feels like it’s fully integrated,” he said. That includes support for dark mode, photo continuity to easily insert photos into Office apps from Apple devices and app-specific toolbars for the Touch Bar.

A subscription will run you $69 for an individual or $99 for a household. The latter allows up to six household members to piggyback on the subscription, and each person gets one terabyte of storage to boot. What’s more, you can access your subscription across all of your Apple, Android and Windows devices and your files, settings and preferences will follow wherever you go.

Businesses can order Microsoft Office bundles through the App Store and then distribute them using the Apple Business Manager, a tool Apple developed last year to help IT manage the application distribution process. Once installed, users have the same ability to access their subscriptions complete with settings across devices.

Microsoft OneNote on the Mac. Photo: Apple

While Apple and Microsoft have always had a complicated relationship, the two companies have been working together in one capacity or another for nearly three decades now. Neither company was willing to discuss the timeline it took to get to this point, or the financial arrangements between the two, but in the standard split for subscriptions, the developer gets 70 percent of the price in the first year, with Apple taking 30 percent for hosting fees. That changes to an 85/15 split in subsequent years.
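The subscription split described above is simple enough to work through. This is just an illustrative calculation based on the 70/30 and 85/15 figures in the article, applied to the $69 individual plan; the function name is ours:

```python
def split_revenue(price, year):
    """Apply the standard App Store subscription split described above:
    the developer keeps 70 percent in year one, 85 percent afterward."""
    developer_share = 0.70 if year == 1 else 0.85
    return (round(price * developer_share, 2),
            round(price * (1 - developer_share), 2))

# For the $69 individual plan: (developer's cut, Apple's cut)
print(split_revenue(69, 1))  # (48.3, 20.7)
print(split_revenue(69, 2))  # (58.65, 10.35)
```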

Apple noted that worldwide availability could take up to 24 hours depending on your location, but if you’ve waited this long, you can wait one more day, right?

Microsoft acquires Citus Data

Microsoft today announced that it has acquired Citus Data, a company focused on making PostgreSQL databases faster and more scalable. Citus’ open source PostgreSQL extension essentially turns the application into a distributed database, and while there has been a lot of hype around the NoSQL movement and document stores, relational databases — and especially PostgreSQL — are still a growing market, in part because of tools from companies like Citus that overcome some of their earlier limitations.
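Citus does its work at the SQL layer inside Postgres, but the core idea behind turning one database into many, routing each row to a shard by hashing a distribution column, can be sketched in a few lines. This is a rough illustration of hash-based sharding in general, not Citus’s actual implementation; the shard count and key are made up:

```python
import hashlib

NUM_SHARDS = 4  # illustrative shard count, not a Citus default

def shard_for(distribution_key):
    """Pick a shard by hashing the row's distribution column, so rows
    with the same key always live on the same node."""
    digest = hashlib.sha256(distribution_key.encode("utf-8")).hexdigest()
    return int(digest, 16) % NUM_SHARDS

# Same key, same shard -- which is what keeps lookups and joins on the
# distribution column local to a single node in a distributed setup.
print(shard_for("customer-42") == shard_for("customer-42"))  # True
```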

Unsurprisingly, Microsoft plans to work with the Citus Data team to “accelerate the delivery of key, enterprise-ready features from Azure to PostgreSQL and enable critical PostgreSQL workloads to run on Azure with confidence.” The Citus co-founders echo this in their own statement, noting that “as part of Microsoft, we will stay focused on building an amazing database on top of PostgreSQL that gives our users the game-changing scale, performance, and resilience they need. We will continue to drive innovation in this space.”

PostgreSQL is obviously an open source tool and while the fact that Microsoft is now a major open source contributor doesn’t come as a surprise anymore, it’s worth noting that the company stresses that it will continue to work with the PostgreSQL community. In an email, a Microsoft spokesperson also noted that “the acquisition is a proof point in the company’s commitment to open source and accelerating Azure PostgreSQL performance and scale.”

Current Citus customers include the likes of real-time analytics service Chartbeat, email security service Agari and PushOwl, though the company notes that it also counts a number of Fortune 100 companies among its users (they tend to stay anonymous). The company offers a database-as-a-service edition, an on-premises enterprise version and a free open source edition. For the time being, it seems like that’s not changing, though over time, I would suspect that Microsoft will transition users of the hosted service to Azure.

The price of the acquisition was not disclosed. Citus Data, which was founded in 2010 and graduated from the Y Combinator program, previously raised over $13 million from the likes of Khosla Ventures, SV Angel and Data Collective.

Blue Prism to issue $130M in stock to raise new funds

Just this morning, robotic process automation (RPA) firm Blue Prism announced enhancements to its platform. A little later, the company, which went public on the London Stock Exchange in 2016, announced it was raising £100 million (approximately $130 million) by issuing new stock. The announcement comes after the company reported significant losses in its most recent fiscal year, which ended in October.

The company indicated that it plans to sell the new shares on the public market, and that they will be made available to new and existing shareholders including company managers and directors.

CEO Alastair Bathgate attempted to put the announcement in the best possible light. “The outcome of this placing, which builds on another year of significant progress for the company, highlights the meteoric growth opportunity with RPA and intelligent automation,” he said in a statement.

While the company’s revenue more than doubled last fiscal year from £24.5 million (approximately $32 million) in 2017 to £55.2 million (approximately $72 million) in 2018, losses also increased dramatically from £10.1 million (approximately $13 million) in 2017 to £26.0 million (approximately $34 million), according to reports.

The move, which requires shareholder approval, will be used to push the company’s plans, outlined in a TechCrunch article earlier this morning, to begin enhancing the platform with help from partners, a move the company hopes will propel it into the future.

Today’s announcement included a new AI engine, an updated marketplace where companies can share Blue Prism extensions and a new lab, where the company plans to work on AI innovation in-house.

Bathgate isn’t wrong about the market opportunity. Investors have been pouring big bucks into this market for the last couple of years. As we noted in this morning’s article, “UIPath, a NYC RPA company has raised almost $450 million. Its most recent round in September was for $225 million on a $3 billion valuation. Automation Anywhere, a San Jose RPA startup, has raised $550 million including an enormous $300 million investment from SoftBank in November on a valuation of $2.6 billion.”

Blue Prism looks to partners to expand robotic process automation with AI

Blue Prism helped coin the term robotic process automation (RPA) when the company was founded back in 2001 to help companies understand the notion of automating mundane business processes. Today, it’s releasing updates to that platform including an updated marketplace for exchanging connectors to extend the main product, and in some cases, adding a layer of intelligence.

The product at its core has allowed non-technical users to automate a business process by simply dragging components into an interface. All of the process coding has been automated on the back end. You could have a process that scans a check, enters a figure in a spreadsheet and sends an automated message to another employee (or digital process) when it’s done.

Moss sees a world in which companies are looking to digitization to stave off growing competition. Big insurance companies, financial services and other workflow-intensive organizations need to look beyond the automation capabilities his company has given them and that is going to require an intelligence layer.

Today, the company wants to extend its core capability by offering more advanced tools in the Blue Prism Digital Exchange marketplace. The Exchange gives partners and customers the ability to create and share tools to enhance Blue Prism. To encourage those entities to add AI capabilities, the company also announced a new AI engine for building connectors to advanced AI tools from Amazon, Google, IBM and other AI platforms.

But the company doesn’t want to simply leave it to partners to provide the innovation. It wants that happening in-house as well, and to that end it has created Blue Prism Labs, where it will work with these same technologies looking for ways to inject its RPA products with artificial intelligence. This could lead to more sophisticated automated workflows down the road such as using image recognition technology to add metadata about a photo automatically.

While Blue Prism has been a public company since 2016, the market has attracted a slew of startups, which have in turn been attracting big bucks from investors on gaudy valuations. UIPath, a NYC RPA company, has raised almost $450 million. Its most recent round in September was for $225 million on a $3 billion valuation. Automation Anywhere, a San Jose RPA startup, has raised $550 million, including an enormous $300 million investment from SoftBank in November on a valuation of $2.6 billion.

Wednesday, January 23, 2019

AWS launches WorkLink to make accessing mobile intranet sites and web apps easier

If your company uses a VPN and/or a mobile device management service to give you access to its intranet and internal web apps, then you know how annoying those are. AWS today launched a new product, Amazon WorkLink, which promises to make this process significantly easier.

WorkLink is a fully managed service that, for $5 per user per month, allows IT admins to give employees one-click access to internal sites, whether they run on AWS or not.

After installing WorkLink on their phones, employees can simply use their favorite browser to surf to an internal website (other solutions often force users into a subpar proprietary browser). WorkLink then goes to work: it securely requests the site and — this is the smart part — a secure WorkLink container converts the site into an interactive vector graphic and sends it back to the phone. Nothing is stored or cached on the phone, and AWS says WorkLink knows nothing about personal device activity either. That also means that when a device is lost or stolen, there’s no need to try to wipe it remotely, because there’s simply no company data on it.

IT can either use a VPN to connect from an AWS Virtual Private Cloud to on-premises servers or use AWS Direct Connect to bypass a VPN solution. The service works with all SAML 2.0 identity providers (which covers the majority of identity services used in the enterprise, including the likes of Okta and Ping Identity), and as a fully managed service, it handles scaling and updates in the background.

“When talking with customers, all of them expressed frustration that their workers don’t have an easy and secure way to access internal content, which means that their employees either waste time or don’t bother trying to access content that would make them more productive,” says Peter Hill, Vice President of Productivity Applications at AWS, in today’s announcement. “With Amazon WorkLink, we’re enabling greater workplace productivity for those outside the corporate firewall in a way that IT administrators and security teams are happy with and employees are willing to use.”

WorkLink will work with both Android and iOS, but for the time being, only the iOS app (iOS 12+) is available. For now, it also only works with Safari, with Chrome support coming in the next few weeks. The service is also only available in Europe and North America for now, with additional regions coming later this year.

For the time being, AWS’s cloud archrivals Google and Microsoft don’t offer any services that are quite comparable with WorkLink. Google offers its Cloud Identity-Aware Proxy as a VPN alternative and as part of its BeyondCorp program, though that has a very different focus, while Microsoft offers a number of more traditional mobile device management solutions.

Oracle says racial discrimination lawsuit is ‘meritless’

Oracle says the racial discrimination lawsuit filed by the U.S. Department of Labor’s Office of Federal Contract Compliance Programs is “meritless.” This comes after Oracle declined yesterday to comment on the OFCCP’s filing that alleges Oracle withheld $400 million in wages from underrepresented employees.

“This meritless lawsuit is based on false allegations and a seriously flawed process within the OFCCP that relies on cherry picked statistics rather than reality,” Oracle EVP and General Counsel Dorian Daley said in a statement to TechCrunch. “We fiercely disagree with the spurious claims and will continue in the process to prove them false. We are in compliance with our regulatory obligations, committed to equality, and proud of our employees.”

In a filing yesterday, the OFCCP alleged Oracle withheld $400 million in wages from racially underrepresented workers (black, Latinx and Asian) as well as women. The department argues that Oracle’s “stark patterns of discrimination” against black, Asian and female employees date back to 2013 and continue into the present day, ultimately resulting in a collective loss of more than $400 million for this group of employees, the suit alleges.

Two years after being acquired by Cisco, AppDynamics keeps expanding monitoring vision

Two years ago this week, AppDynamics was about to IPO. Then Cisco swooped in with a big fat check for $3.7 billion and plans changed quickly. Today, as part of Cisco, the company announced it was expanding its monitoring vision across the business with a number of enhancements to its product suite.

AppDynamics CEO David Wadhwani says the company wants to monitor your technology wherever it lives in the enterprise, from serverless to mainframe. That kind of comprehensive view of a customer’s computing environment requires a level of built-in intelligence, and being part of a large organization like Cisco has helped it move more quickly toward this approach.

Last year, when Cisco bought Perspica, a machine learning startup, it folded the engineering team into AppDynamics with a plan to make the product more intelligent. Given the sheer amount of information a product like AppDynamics is monitoring, it’s a perfect use case for machine learning, which feeds on copious amounts of data.

Today the company announced the fruit of that labor in the form of a new Cognition Engine. Instead of simply pointing out that there is a problem and leaving it to the DevOps team to figure out the root cause, the Cognition Engine handles both in an automated way. When you combine that with a rules engine, you can move from detection to root cause analysis to remediation much more quickly than in the past. Eventually, Wadhwani expects the Cognition Engine to learn from the rules engine and begin to build even more automated fixes.
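As a toy sketch of that detection-to-remediation flow (purely illustrative: the symptom names, causes and fixes are invented, and this is not AppDynamics’ actual engine):

```python
# Map each detected symptom to a diagnosed root cause and an automated
# remediation, standing in for the rules engine described above.
RULES = {
    "high_latency": ("connection_pool_exhausted", "scale_pool"),
    "error_spike": ("bad_deploy", "rollback_release"),
}

def handle_anomaly(symptom):
    """Go from detection to root cause to remediation in one step,
    falling back to a human when no rule matches."""
    cause, fix = RULES.get(symptom, ("unknown", "page_oncall"))
    return {"symptom": symptom, "root_cause": cause, "remediation": fix}

print(handle_anomaly("error_spike")["remediation"])  # rollback_release
print(handle_anomaly("disk_full")["remediation"])    # page_oncall
```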

Root Cause Analysis. Screen: AppDynamics

The company is also announcing some new monitoring capabilities, including AWS Lambda, the serverless service, which has been gaining momentum in recent years among developers. The approach poses challenges to a monitoring tool like AppDynamics because the application doesn’t sit on a defined virtual machine, but instead uses ephemeral resources, served up by AWS at any given moment based on resource requirements. AppDynamics now offers a way to trace transactions on this type of infrastructure.

Finally, now that it’s part of the Cisco family, the product is looking not only at the application layer but also at the networking infrastructure, to help understand issues and set policies just as it does with applications.

All of this is part of what Cisco is calling a “central nervous system” for enterprise computing. It’s a marketing term designed to encompass the overall vision of trying to locate issues, find the causes and fix them in as automated a way as possible across the enterprise computing landscape.

Open-source leader Confluent raises $125M on $2.5B valuation

Confluent, the commercial company built on top of the open source Apache Kafka project, announced a $125 million Series D round this morning on an enormous $2.5 billion valuation.

The round was led by existing investor Sequoia Capital, with participation from Index Ventures and Benchmark, both of which also participated in previous rounds. Today’s investment brings the total raised to $206 million, according to the company.

The valuation soared from the previous round when the company was valued at $500 million. What’s more, the company’s bookings have scaled along with the valuation.

Graph: Confluent

While CEO Jay Kreps wouldn’t comment directly on a future IPO, he hinted that it is something the company is looking to do at some point. “With our growth and momentum so far, and with the latest funding, we are in a very good position to and have a desire to build a strong, independent company…” Kreps told TechCrunch.

Confluent and Kafka have developed a streaming data technology that processes massive amounts of information in real time, something that comes in handy in today’s data-intensive environment. The base streaming database technology was developed at LinkedIn as a means of moving massive amounts of messages. The company decided to open source that technology in 2011, and Confluent launched as the commercial arm in 2014.

Kreps, writing in a company blog post announcing the funding, said that the events concept encompasses the basic building blocks of businesses. “These events are the orders, sales and customer experiences, that constitute the operation of the business. Databases have long helped to store the current state of the world, but we think this is only half of the story. What is missing are the continually flowing stream of events that represents everything happening in a company, and that can act as the lifeblood of its operation,” he wrote.
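The “state versus stream of events” distinction Kreps describes is easy to show: the current state of the world can always be derived by folding over the event log, while the log itself preserves everything that happened along the way. A minimal illustration (invented event shapes, nothing Kafka-specific):

```python
# A stream of business events, in the order they occurred.
events = [
    {"type": "order_placed", "order_id": 1, "amount": 30},
    {"type": "order_placed", "order_id": 2, "amount": 45},
    {"type": "order_cancelled", "order_id": 1},
]

def current_state(event_stream):
    """Fold the event stream into the 'current state of the world'
    that a traditional database would store."""
    open_orders = {}
    for event in event_stream:
        if event["type"] == "order_placed":
            open_orders[event["order_id"]] = event["amount"]
        elif event["type"] == "order_cancelled":
            open_orders.pop(event["order_id"], None)
    return open_orders

print(current_state(events))  # {2: 45} -- but the log keeps the full history
```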

Kreps pointed out that as an open source project, Confluent depends on the community. “This is not something we’re doing alone. Apache Kafka has a massive community of contributors of which we’re just one part,” he wrote.

While the base open source component remains available for free download, it doesn’t include the additional tooling the company has built to make it easier for enterprises to use Kafka. Recent additions include a managed cloud version of the product and a marketplace, Confluent Hub, for sharing extensions to the platform.

As we watch the company’s valuation soar, it does so against a backdrop of other companies based on open source selling for big bucks in 2018, including IBM buying Red Hat for $34 billion in October and Salesforce acquiring MuleSoft for $6.5 billion.

The company’s previous round was a $50 million raise in March 2017.

Anchorage emerges with $17M from a16z for ‘omnimetric’ crypto security

I’m not allowed to tell you exactly how Anchorage keeps rich institutions from being robbed of their cryptocurrency, but the off-the-record demo was damn impressive. Judging by the $17 million Series A this security startup raised last year, led by Andreessen Horowitz and joined by Khosla Ventures, Max Levchin, Elad Gil, Mark McCombe of BlackRock and AngelList’s Naval Ravikant, I’m not the only one who thinks so. In fact, crypto funds like Andreessen’s a16zcrypto, Paradigm and Electric Capital are already using it.

They’re trusting the guys who engineered Square’s first encrypted card reader and Docker’s security protocols. “It’s less about us choosing this space and more about this space choosing us. If you look at our backgrounds and you look at the problem, it’s like the universe handed us on a silver platter the Venn diagram of our skill set,” co-founder Diogo Monica tells me.

Today, Anchorage is coming out of stealth and launching its cryptocurrency custody service to the public. Anchorage holds and safeguards crypto assets for institutions like hedge funds and venture firms, and only allows transactions verified by an array of biometrics, behavioral analysis, and human reviewers. And since it doesn’t use “buried in the backyard” cold storage, asset holders can actually earn rewards and advantages for participating in coin-holder votes without fear of getting their Bitcoin, Ethereum, or other coins stolen.

The result is a crypto custody service that could finally lure big-time commercial banks, endowments, pensions, mutual funds, and hedgies into the blockchain world. Whether they seek short-term gains off of crypto volatility or want to HODL long-term while participating in coin governance, Anchorage promises to protect them.

Evolving Past “Pirate Security”

Anchorage’s story starts eight years ago, when Monica and his co-founder Nathan McCauley met after joining Square the same week. Monica had been getting a PhD in distributed systems while McCauley designed anti-reverse-engineering tech to keep U.S. military data from being extracted from abandoned tanks or jets. After four years of building systems that would eventually move over $80 billion per year in credit card transactions, they packaged themselves as a “pre-product acquihire,” Monica tells me, and they were snapped up by Docker.

As their reputation grew from work and conference keynotes, cryptocurrency funds started reaching out for help with custody of their private keys. One had lost a passphrase and the $1 million in currency it was protecting in a display of jaw-dropping ignorance. The pair realized there were no true standards in crypto custody, so they got to work on Anchorage.

“You look at the status quo and it was, and still is, cold storage. It’s the same technology used by pirates in the 1700s,” Monica explains. “You bury your crypto in a treasure chest and then you make a treasure map of where those gold coins are,” except with USB keys, security deposit boxes and checklists. “We started calling it Pirate Custody.” Anchorage set out to develop something better: a replacement for usernames and passwords, or even phone numbers and two-factor authentication, all of which can be misplaced or hijacked.

This led them to Andreessen Horowitz partner and a16zcrypto leader Chris Dixon, who’s now on their board. “We’ve been buying crypto assets running back to Bitcoin for years now here at a16zcrypto and it’s hard to do it in a way that’s secure, regulatory compliant, and lets you access it. We felt this pain point directly.”

Andreessen Horowitz partner and Anchorage board member Chris Dixon

It’s at this point in the conversation that Monica and McCauley give me their off-the-record demo. While there are no screenshots to share, the enterprise security suite they’ve built has the polish of a consumer app like Robinhood. What I can say is that Anchorage works with clients to whitelist employees’ devices. It then uses multiple types of biometric signals and behavioral analytics about the person and device trying to log in to verify their identity.

But even once they have access, Anchorage is built around quorum-based approvals. Withdrawals, other transactions and even changes to employee permissions require approval from multiple users inside the client company. They could set up Anchorage so it requires five of seven executives’ approval to pull out assets. And finally, outlier-detection algorithms and a human review the transaction to make sure it looks legit. A hacker or rogue employee can’t steal the funds even if they’re logged in, since they’d still need a consensus of approvals.
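A minimal sketch of the quorum rule described above; the names and the five-of-seven threshold are only illustrative, and this is obviously not Anchorage’s implementation:

```python
def withdrawal_approved(approvals, authorized, threshold=5):
    """Allow a withdrawal only when enough distinct, authorized
    approvers have signed off -- no single login is ever sufficient."""
    valid_approvals = set(approvals) & set(authorized)
    return len(valid_approvals) >= threshold

executives = {"ana", "ben", "carla", "dev", "eli", "fay", "gus"}
print(withdrawal_approved({"ana", "ben"}, executives))                        # False
print(withdrawal_approved({"ana", "ben", "carla", "dev", "eli"}, executives)) # True
```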

That kind of assurance means institutional investors can confidently start to invest in crypto assets. That swell of capital could help replace the retreating consumer investors who’ve fled the market this year, leading to massive price drops. The liquidity provided by these asset managers could keep the whole blockchain industry moving. “Institutional investing has had centuries to build up a set of market infrastructure. Custody was something that for other asset classes was solved hundreds of years ago, so it’s just now catching up [for crypto],” says McCauley. “We’re creating a bigger market in and of itself,” Monica adds.

With Anchorage steadfastly handling custody, the risk these co-founders admit worries them lies in the smart contracts that govern the cryptocurrencies themselves. “We need to be extremely wide in our level of support and extremely deep, because each blockchain has details of implementation. This is inherently a very difficult problem,” McCauley explains. It doesn’t matter if the coins are safe in Anchorage’s custody if a janky smart contract can botch their transfer.

There are plenty of startups vying to offer crypto custody, ranging from BitGo and Ledger to well-known names like Coinbase and Gemini. Yet Anchorage offers a rare combination: institutional-since-day-one security rigor plus the ability to participate in votes and governance of crypto assets, which is impossible if they’re in cold storage. Down the line, Anchorage hints that it might offer clients recommendations on how to vote to maximize their yield and preserve the sanctity of their coin.

They’ll have crypto investment legend Chris Dixon on their board to guide them. “What you’ll see is, in the same way that institutional investors want to buy stock in Facebook and Google and Netflix, they’ll want to buy the equivalent in the world 10 years from now and do that safely,” Dixon tells me. “Anchorage will be that layer for them.”

But why do the Anchorage founders care so much about the problem? McCauley concludes that “When we look at what’s potentially possible with crypto, there’s a fundamentally more accessible economy. We view ourselves as a key component of bringing that future forward.”

Tuesday, January 22, 2019

Juniper Networks invests $2.5M in enterprise tech accelerator Alchemist

Alchemist, which began as an experiment to better promote enterprise entrepreneurs, has morphed into a well-established Silicon Valley accelerator.

To prove it, San Francisco-based Alchemist is announcing a fresh $2.5 million investment ahead of its 20th demo day on Wednesday. Juniper Networks, a networking and cybersecurity solutions business, has led the round, with participation from Siemens’ venture capital unit Next47.

Launched in 2012 by former Draper Fisher Jurvetson investor Ravi Belani, Alchemist provides participating teams with six months of mentorship and a $36,000 investment. Alchemist admits companies whose revenue stream comes from enterprises, not consumers, with a bent toward technical founders.

According to numbers provided by the accelerator, dubbed the “Y Combinator of Enterprise,” 115 Alchemist portfolio companies have gone on to raise a combined $556 million in venture funding. Another 25 have been acquired, including S4 Capital’s recent $150 million acquisition of media consultancy MightyHive, Alchemist’s largest exit to date.

Other notable alums include Rigetti Computing; LaunchDarkly, which helps startups soft-launch features; and drone startup Matternet.

Alchemist has previously raised venture capital funding, including a $2 million financing in 2017 led by GE and an undisclosed investment from Salesforce.

Nineteen companies will demo products onstage tomorrow. You can live stream Alchemist’s 20th demo day here.