The 30% Internet Gorilla Tax

I’ve written before about powerful advantages Google, Apple, Amazon, and Facebook have in the software industry.  These four companies control major parts of the ecosystem, take out upstarts when they get too big, corner talent markets in key areas, and enjoy a ~30% “tax” (directly or indirectly) across most other software companies.

I first noted this nearly 5 years ago, but more recently, some prominent Internet thought leaders have written on the theme.  For example, Fred Wilson wrote:

Google, Facebook, and to a lesser extent Apple and Amazon will be seen as monopolists by government and individuals in the US (as they have been for years outside the US). Things like the fake news crisis will make clear to everyone how reliant we have become on these tech powerhouses and there will be a backlash. …

And, Sam Altman wrote in the YC Annual Letter:

Companies like Amazon, Facebook, Google, Apple, and Microsoft have powerful advantages that are still not fully understood by most founders and investors. I expect that they will continue to do a lot of things well, have significant data and computation advantages, be able to attract a large percentage of the most talented engineers, and aggressively buy companies that get off to promising starts. This trend is unlikely to reverse without antitrust action, and I suggest people carefully consider its implications for startups. …

(Emphases added)

Now, Snap(chat) has revealed they’ve committed $3b to Google and Amazon over the next five years, or about $600m/year.  When we line that up with revenue estimates ($5.7b over the next three years), we find that the gorillas are getting… ~30%!
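Checking that arithmetic with a quick back-of-the-envelope sketch (figures from the paragraph above; the straight per-year averaging is my assumption):

```python
# Snap's committed cloud spend: $3b to Google and Amazon over five years.
cloud_spend_per_year = 3.0e9 / 5        # ~= $600m/year
# Revenue estimate: $5.7b over the next three years.
revenue_per_year = 5.7e9 / 3            # ~= $1.9b/year

gorilla_tax = cloud_spend_per_year / revenue_per_year
print(f"{gorilla_tax:.1%}")             # → 31.6%
```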

The Internet is Ready for Things

I’m not a fan of the term “Internet of Things” (IoT), but it is the best way to describe a future where more and more devices are Internet-connected.  As computation and communication get cheaper, more “dumb” devices will be “smart” and on-line.

With the current hype around IoT, it’s not surprising that companies and entrepreneurs are pursuing opportunities to “own” various aspects of IoT infrastructure.  I’ve seen a ton of startup pitches, and several big companies (Xively, PTC, etc.) are pursuing IoT platforms.

I’m skeptical.

The infrastructure elements already exist, as the Internet is exceptional at expanding and shifting to accommodate new kids on the block.  Consider mobile: there was a time when it was a very distinct thing (e.g. Qualcomm BREW, WAP, etc.) and the business folks talked about being “on deck”.

Now, it’s clear that mobile is an extension of the Web.  Mobile HTML is just HTML with a few mobile-specific features.   Mobile and desktop browsers share the same core rendering engine.  4G/LTE is a pipe for IP packets.  Cell phone apps POST JSON payloads over HTTP/HTTPS just like everyone else. Designing a compelling user experience for a small touch-based screen is different, but the underlying tech infrastructure is nearly identical to the desktop.

Though the rollout has been slow, IPv6 enables direct addressability for every individual “thing”. Cheap Wi-Fi (with an assist from BTLE) gets things on-line with existing infrastructure, and DNS provides a directory service.  OAuth2 defines how things get secure, bounded access to assets, and HTTPS+JSON provides secure remote procedure calls.
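To make that concrete, here’s a minimal sketch of a “thing” phoning home with HTTPS+JSON using nothing beyond the standard library. The endpoint, field names, and token are invented for illustration, and the request is built but not sent:

```python
import json
import urllib.request

def build_report(url, device_id, temperature_c):
    """Build an HTTPS+JSON request, exactly as any other Web client would."""
    payload = json.dumps({"device": device_id, "temp_c": temperature_c}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=payload,
        headers={
            "Content-Type": "application/json",
            # An OAuth2 bearer token would grant bounded access to the asset:
            "Authorization": "Bearer <token>",
        },
        method="POST",
    )

req = build_report("https://example.com/api/readings", "thermostat-42", 21.5)
print(req.get_method())                 # → POST
print(json.loads(req.data)["temp_c"])   # → 21.5
```

The point of the sketch: there is no IoT-specific layer here at all, just the same HTTP, JSON, DNS, and OAuth2 plumbing the rest of the Web already runs on.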

I’m not sure we need new stuff!

Google’s Car vs A Boston Winter

During the legendary Boston winter of 2015, I pulled out of a downtown parking garage one evening and nearly rear-ended a dumpster. It was sitting in the middle of a usually busy three-lane road, a place where no dumpster should ever be. It was dark and there were no cones, no markers, no construction signs…nothing.

This scenario is why (I feel) 100% autonomous, “no-steering-wheel”, driverless cars are much further off than experts predict. I highly doubt my dumpster case is in any machine learning training set, and it will be a long time before it ever is. My human brain was able to put it all together: the front loader down the street loading another dumpster, snow piles all around, the city’s urgency to remove snow, etc. Until machines approach human cognition, there are a LOT of real world cases that are more than just turning the wheel and tapping the brakes — too many cases to “remove the steering wheel” anytime soon.

If we look closely at any new technology, the rollout is almost always very incremental. Historians love to write about revolutions, but the reality is always much more evolutionary. Consider the autonomous car evolution so far:

  • Cars that beep when you drift from your lane & when you need to brake
  • A steering wheel that nudges you back when you drift from your lane (with self-braking)
  • Complete steering and braking to maintain your lane & following distance
  • All of the above, plus safe lane changing with a turn signal input
  • …etc.

I feel that last phase (“cruise-control that steers”) will be with us for a while. Even though it’s not “send your 5-year-old to their play date in the car” kind of autonomy, it’s still hugely valuable. Long trips and commutes will be much less tiring. Also, speed kills — computers will soon be the safest drivers on highways & major roads, in all conditions. There will be injuries and deaths under computer control, but many more injuries and deaths will be averted.

While Tesla gets a lot of press, long-haul trucking may be the first significant disruption. Truck drivers are under strict regulations regarding drive time vs rest time, and for most drivers, their truck isn’t moving (or earning!) when they’re resting or sleeping. With self-driving technology, each driver gets a “highway co-driver”.  After lunch, navigate to the freeway, engage cruise control, and take a nap.

As things advance, I hope the government will be a constructive part of the process. For example, some highway segments may be flagged as “OK for self-driving” (as is done today for tandem trailers), and the regulators could acknowledge that “self-drive” time is not “drive time” for safety quotas.

This is exciting stuff, but “piling into your car after a few too many for a safe ride home”?? That still may be a way off!

Startups Should Revolve Around Their Founders if They Want to Succeed Big

I read a recent Harvard Business School blog post titled “Startups Can’t Revolve Around Their Founders If They Want to Succeed”.  The authors make a general argument that founders are the biggest obstacles to long-term startup growth, citing a new research paper (paywall, sorry) that hypothesizes:

For a given startup, the value of the startup varies inversely with the degree of control retained by founders.

From a statistical analysis of over 6,000 startups, the paper (and article) argue (roughly) that founders with board control, the CEO position, or both, can “harm the firm’s prospects, reducing pre-money valuation by up to 22%.”


While “founder scale-up” problems are real management issues that can put significant stress and strain on any startup team (I’ve lived it), the argument has a significant flaw:  it’s based on an unweighted startup data set.  If Uber’s value creation (for all stakeholders) is considered equal to Fred’s Wrecking, Storage and App Development, I’m skeptical we can conclude anything really useful.

For example, a full half of the top ten US companies had or have founder leadership to significant scale:  Apple, Google, Microsoft, Facebook, and Amazon.  These five companies alone represent $1.5 trillion of value — over 8% of the total value of all public US companies!  And all of the top US companies founded within the last ~30 years are/were founder led.

Furthermore, while I’m quite skeptical of private “unicorn” valuations, all but one at the top of that list have founder CEOs: Uber, Airbnb, Palantir, Snapchat, SpaceX, Pinterest, Dropbox, WeWork, Theranos, Lyft, and Stripe.

So, here’s a completely different hypothesis:

Most startup value creation, by a wide margin, accrues to founder-led companies (especially in technology).

Stated differently: would you rather have a portfolio with 7 out of 10 companies successful, or a portfolio with Facebook?

Deep Learning: A Sport of Kings?

The big news in the machine learning/deep learning world this week is Google’s release of TensorFlow, their deep learning toolkit. This has prompted some to ask: why would they give away “crown jewels” for such a strategic technology? The question is best answered with a machine learning joke (paraphrased): “the winners usually have the most data, not the best algorithms”.

Neural networks have been around for a while, but it’s only been within the past 10 yrs that researchers have figured out how to train networks with many, many layers (the “deep” in “deep learning”). That research has been greatly accelerated by using GPUs as very high-performance, general purpose, vector processors. If a researcher can turn around an algorithm experiment in a day (vs 3 months), a lot more research gets done.

But as the joke suggests, it’s all about that data: you need lots and lots and LOTS of data to train a high-performance deep learning network. And Google has more data than anyone else — so they don’t worry so much about giving away algorithms.

(Also, Google, Baidu, Twitter, Facebook, etc. are investing in GPU compute clusters that can only be described as the new “mainframe supercomputers”. Sure, you can rent GPU instances on Amazon, but there’s nothing like having the latest Nvidia board with lots of RAM and very high-performance interconnect).

What does this all mean for early stage startups? The situation creates several tough hurdles: first, freely available code and technology from Google (and Facebook) enables competitors and devalues whatever the startup might develop. Second, few startups have access to a large enough proprietary data source to compete at scale. And third, GPU compute clusters need real capital.

What’s left for startups? I see at least two interesting patterns:

  • Using deep learning as a key feature to enhance another app.  Use freely available technology to add magic.  Google Photos is a great example of this, and I think every photo and video app will soon be able to recognize stuff, people, items, etc. to enhance the functionality.
  • “Man-teaches-machine”.  Start out with a lot of humans doing some task and capture their work to train a network.  Over time, have the network handle the common cases, with the exceptions / ambiguous cases routed to humans for resolution.  Build a large, proprietary training set, enjoy compounded interest, and profit.
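A minimal sketch of that second pattern — the model, human queue, and confidence threshold below are toy stand-ins I’ve invented, not any particular product:

```python
def handle(item, model, training_set, ask_human, threshold=0.9):
    """Route confident predictions automatically; escalate the rest to humans."""
    label, confidence = model(item)
    if confidence >= threshold:
        return label                            # common case: machine handles it
    human_label = ask_human(item)               # exception: a person resolves it
    training_set.append((item, human_label))    # every escalation grows the proprietary data
    return human_label

# Toy stand-ins for the model and the human operators:
training_set = []
model = lambda item: ("cat", 0.95) if "whiskers" in item else ("unknown", 0.2)
human = lambda item: "dog"

auto = handle("whiskers photo", model, training_set, human)
escalated = handle("blurry photo", model, training_set, human)
print(auto, escalated, len(training_set))   # → cat dog 1
```

The compounding comes from the last line: every ambiguous case a human resolves becomes a labeled training example, so the machine’s share of the work grows over time.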

The GPU Overshadows the CPU

Ask a teenager about GPUs (Graphics Processing Units) and you might get a surprisingly informed response.  As I watch my kids, nephews, and their friends build “gaming PCs”, they all seem quite current on the relative performance of AMD vs Nvidia, the merits of GPU memory, power issues, etc.  (And one important side effect:  a fairly healthy family ecosystem of hand-me-down GPUs).

While it’s great to run Battlefield 4 at 60fps on ultra detail across three HD monitors, what’s most interesting is how GPU capabilities are generalizing beyond graphics. This is one of my absolute favorite disruption patterns: “commoditization+crossover”, where a technology is commoditized by demand for one application and then applied elsewhere.

GPUs began as very specialized (and expensive) 2D & 3D hardware accelerators. Things began to change in the 1990s, driven by demand for 3D games, first with arcade units and consoles, and then PCs. In 1999, Nvidia coined the term “GPU”, starting a consumer-driven 15yr+ price/performance ramp with no end in sight.

GPUs are also getting much more generalized.  The first, fairly rigid 3D-transform computation pipelines have gradually given way to more general stream processors.  So-called graphics “shaders” are now nearly fully programmable:  GPU developers write compute “kernels” in C-like languages (such as OpenGL GLSL or DirectX HLSL) that then run on hundreds or thousands of compute units on the GPU.  And more recent technologies, such as Nvidia’s CUDA and the OpenCL platforms, dispense with the graphics-centric worldview entirely.

Because of their parallel architecture, GPUs have continued to scale while single CPU performance has effectively flattened.  For certain “embarrassingly parallel” problems where a repeated operation is applied to large amounts of data, they are hard to beat. For example, $350 gets you ~3.4 trillion floating point ops/second, 42,000x faster than the original Cray supercomputer!  Amazon offers GPU instances, and even Intel has conceded in a way:  on a modern x86 multi-core processor, almost 2/3rds of the die area is GPU.
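To make the “kernel” idea concrete, here’s SAXPY (a canonical GPU example) written kernel-style in plain Python. On a GPU, each index would run on its own compute unit simultaneously; the sequential loop here just stands in for the hardware’s parallel dispatch:

```python
def saxpy_kernel(i, a, x, y, out):
    """One 'thread' of work: the same operation, applied at a single index."""
    out[i] = a * x[i] + y[i]

# The host launches the kernel across the whole index range; a GPU would
# execute thousands of these invocations concurrently.
x = [1.0, 2.0, 3.0]
y = [10.0, 20.0, 30.0]
out = [0.0] * len(x)
for i in range(len(x)):            # sequential here; parallel on a GPU
    saxpy_kernel(i, 2.0, x, y, out)
print(out)                         # → [12.0, 24.0, 36.0]
```

Because each index is computed independently of every other, the problem is “embarrassingly parallel” — exactly the shape of work where GPU throughput dominates CPUs.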

It’s not surprising to see GPU horsepower applied to more and more non-graphics applications, such as simulating physics, aligning genome sequences, and training deep neural networks.  I think this pattern will continue, with the GPU firmly entrenched in computing systems as a highly scalable vector co-processor.

Why We Need a Neutral Internet, Exhibit A

I received an email from Verizon a few days ago, stating several FOX channels are no longer available because “Verizon refused to accept an agreement that contained rates that are not in our customers’ best interests”.  Presumably, FOX wanted more than Verizon was willing to pay.  (In cable TV, it’s customary for the cable TV operators to pay networks to carry their content.)  Now, those channels are currently playing a looping video with Verizon spokespeople, urging subscribers to call Cox Media.

Contrast this with Verizon’s stance toward Netflix, where they want the opposite arrangement:  Netflix pays to deliver content over Verizon’s network, citing “When one party’s getting all the benefit and the other’s carrying all the cost, issues will arise” (other ISPs share this view, and Netflix has entered such agreements with Comcast & Verizon).

This inconsistent situation is precisely why we need a neutral Internet.

Payments flowing between ISPs and content providers distort the market, introduce friction, and shift control to the ISPs.  Ultimately, they hinder innovation: compare the closed, legacy platforms (cable TV, pre-smartphone cell phones) with the enormous economic, quality-of-life, and strategic benefits of the new, open platforms (the Internet, smartphones).  If standing up a new Web site were as hard as signing up cable TV providers for your new cable channel, or getting a carrier to carry your mobile app “on deck” (pre-smartphone), we’d be a fraction as advanced as we are today.

Allowing business models from legacy, closed networks onto the Internet is a fundamental policy mistake.  If we go that way, how long until:

Verizon is sorry to inform you that {Netflix, Amazon, Battlefield, YouTube, etc.} will be unavailable (or available only at reduced performance) because [content provider] refused to accept content distribution rates in our customers’ best interests.

Teaching Kids Programming

Getting kids interested in programming is a lot harder than it used to be. I was lucky enough to come of age during the PC revolution. My brother and I would carefully enter multiple pages of BASIC code from computer magazines, and then play games for weeks (making our own modifications along the way).

The problem now is that the threshold of “interesting & engaging” has risen dramatically: today’s kids are surrounded by games and applications that have had hundreds of person-years of development, with gorgeous 3D graphics rendered in 1080p on huge color screens. They all carry personal supercomputers, are never off-line, have all the world’s information at their fingertips, and can download any of ~1 million applications (many for free).

“Hello world” doesn’t cut it anymore.

How do we get kids engaged with learning software development, without them first having to spend a month writing code?

Minecraft is a fabulous starting point. (I think it will go down in history as one of the most brilliant games ever.) In our household, it’s the virtual neighborhood playground. Quincy will often get on to play with a bunch of friends after school (with TeamSpeak, so they can trash talk while building secret hideouts, chasing monsters, designing complex contraptions, or just pushing each other off cliffs).

But what’s most interesting is that Minecraft is fully programmable with “redstone”, a set of digital circuit components. You can build a combination lock for your secret room (that blows up with the wrong combination), a completely automated train system, or even a scientific calculator or 8-bit computer. It’s fun, it’s play, and it’s something to show off to friends.  And, it’s programming.

Taking Minecraft a step further, there’s the physical world itself. Between Arduino, Raspberry Pi, and an ever-growing set of easy-to-use components and modules, it’s never been easier to sense and manipulate physical things with software. You want an alarm that goes off when somebody goes in your bedroom? No problem. Now, let’s enhance it so it only goes off when it’s your sister, and also sends a text message with a picture of the offender.  You’re not downloading that from the app store!
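The alarm logic itself is tiny — a sketch with hypothetical stand-ins for the sensor and messaging hardware (the function names are mine, not any real Arduino or Raspberry Pi API):

```python
def alarm_step(motion_detected, person, sound_alarm, send_text):
    """One pass of the alarm loop: trigger only for the sister, with evidence."""
    if motion_detected and person == "sister":
        sound_alarm()
        send_text("Intruder alert! Photo attached.")
        return True
    return False

# Stand-in 'hardware' so the logic can be exercised without a sensor:
events = []
triggered = alarm_step(
    motion_detected=True,
    person="sister",
    sound_alarm=lambda: events.append("siren"),
    send_text=lambda msg: events.append(msg),
)
print(triggered)   # → True
print(events)      # → ['siren', 'Intruder alert! Photo attached.']
```

The hard part for a kid isn’t this loop — it’s wiring the sensor and camera — which is exactly the kind of tangible, show-your-friends payoff that keeps them going.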

Presale Resistance Syndrome (PRS)

I’ve written previously about presales (e.g. Kickstarter or Indiegogo) as a tool for hardware startups.  The model enables risky & crazy ideas that would normally never see the light of day. Most will fail, but some will get through and be hugely disruptive. For example, Pebble’s record-setting Kickstarter campaign accelerated their business and, more fundamentally, defined the entire smart watch category.

In spite of this, I still meet entrepreneurs that resist the idea. Objections vary, but include:

  • Our target demographic does not line up with Kickstarter’s.
  • OUYA had a very successful campaign, but still failed. We don’t want to be associated with that.
  • It’s a lot of marketing work and distraction.
  • We’d rather just raise equity financing [and not have to ship all those orders].
  • We’ve launched products before; we know how to do this.

A presale is the marketing analog of software testing: it tests product-market fit & demand before risking production investment. Of course, it’s not perfect: just as a “passed” test case is no guarantee a system works, a successful presale does not guarantee market success.  But a failure is extremely telling, and a presale (like software testing) can be a powerful tool to de-risk the journey.

The Right to Remember

Earlier this year, Mario Costeja-González won the right to be forgotten.  The Court of Justice of the EU ruled Google had to remove search results linking to a 1998 newspaper article about the foreclosure of his home (due to unpaid debts he later paid).  In the ultimate irony, he’s now permanently and widely remembered for precisely what he wanted everyone to forget (the Streisand Effect).

Now, search engines must consider requests from individuals to remove search results that:

appear to be inadequate, irrelevant or no longer relevant or excessive in the light of the time that had elapsed 

This raises the key question:  who judges this?  Something “irrelevant” to one person might be highly relevant to another.  Not surprisingly, Google is making its point by notifying Web sites when results are removed.

This decision raises fundamental questions about the right to inform & freedoms of speech and press.  The newspaper’s freedom to publish the foreclosure news is clearly protected, I am free to link to the news, and this blog post will eventually show up in search results.  It seems arbitrary that some have freedoms and some don’t.

For better or worse, search technology has permanently changed the privacy calculus.  Since the dawn of time we’ve enjoyed “practical obscurity”, where a lot of personal information was hard to identify, locate, or access. That’s changed, and legislators will now chase the issue with laws and rulings in a never-ending game of Whac-a-Mole. For example, how long until someone finds a way to detect links that were removed and publishes them?

(Given this new world, a far better strategy for Mr. Costeja-González would be to generate new content and bury the foreclosure news in the noise.)

The Internet never forgets; plan accordingly.