King Zuckerberg

I’ve long argued that Mark Zuckerberg is the most powerful unelected person in the world, by far. The race isn’t even close and hasn’t been for a long time.

So, I was not surprised when Chris Hughes wrote, in his widely reported NYT Opinion piece:

Mark’s influence is staggering, far beyond that of anyone else in the private sector or in government. He controls three core communications platforms — Facebook, Instagram and WhatsApp — that billions of people use every day. Facebook’s board works more like an advisory committee than an overseer, because Mark controls around 60 percent of voting shares. Mark alone can decide how to configure Facebook’s algorithms to determine what people see in their News Feeds, what privacy settings they can use and even which messages get delivered. He sets the rules for how to distinguish violent and incendiary speech from the merely offensive, and he can choose to shut down a competitor by acquiring, blocking or copying it.

But I was not aware of this story:

The most extreme example of Facebook manipulating speech happened in Myanmar in late 2017. Mark said in a Vox interview that he personally made the decision to delete the private messages of Facebook users who were encouraging genocide there.

While we’re all happy that someone took action here, it raises a profound question: who should decide what we may or may not communicate (publicly or privately) with our fellow humans? If we keep our current trajectory, the answer will be “a very small number of private individuals, accountable only to themselves”.

Big Tech’s Competitor? Government

I’ve always felt my “best” Hacker News comments are the ones most downvoted, like this nugget from about a year ago:

(From: “Zuckerberg struggles to name a single Facebook competitor”)

Today, the concept of breaking up or limiting the big tech companies is far less abstract, with Sen. Warren announcing the idea as part of her campaign platform.

I have very mixed feelings about this. On one hand, Facebook and their gorilla brethren have earned their market positions within a global capitalist ecosystem (mostly) fair and square. I’m a long-time and very happy Amazon & Apple customer and have watched them continually out-innovate competitors (including many that can’t seem to get out of their own way). They rewrote the rules for channels and distribution, creating new livelihoods for countless authors and small businesses. Like many, I voluntarily give Facebook my attention and I’ve made good money at various times as a gorilla shareholder.

On the other hand, Google, Apple, Amazon, Facebook (and maybe Microsoft) are now so big and powerful that we’ve scaled to a new zone, where market effects are no longer “linear”. These companies exert absolute authority and control within their ecosystems, effectively creating their own weather. Unlike historical monopolies (Standard Oil, IBM), the tech gorillas have direct and ongoing interaction with billions of people, gathering enormous amounts of personal data and directly or indirectly influencing a large fraction of planet-wide human behavior.

Interestingly, the gorillas are now forced to deal with a growing number of government-like political issues. Activist employees at Google and Microsoft lobby against business practices they find objectionable. Gorillas are heavily scrutinized regarding pay equality, minimum wages, working conditions, etc. Apple’s privacy and security architecture becomes central to a national security discussion. And while New York state has an economy comparable to Russia, South Korea, or Canada, Amazon negotiates with them as roughly an equal.

I don’t know what the answer is, but it seems quite clear the greatest business risk facing tech gorillas is not “the next Facebook”. It’s government, stepping in to slow, stop, or even reverse the continued power and wealth grab. No wonder Zuckerberg couldn’t name a competitor.

Dividing Founder Equity in the Very Beginning

I’ve probably had a thousand or more discussions about startup equity: figuring out how much to offer, negotiating, or advising others. It’s a very tricky topic, in part because it’s nearly impossible to compare ownership between two companies with completely different contexts. One percent of startup A may have a vastly different potential value than one percent of startup B.

In practice, most equity grants within a company are driven by broad calibrations with existing employees. If an early very experienced developer has 1%, and a less senior dev has 0.5%, those become two reference points for the next dev hire. Over time, grants usually taper down — things advance and (presumably) become less risky. For example, that 1% developer’s professional twin might get 0.25% after a year or two. Then, there’s some case-by-case tweaking for competitive situations, salary trade-offs, the company’s need for that particular skill, or other circumstances, but this is a typical starting spot.

But, how should founders divide things up in the very beginning, where none of these internal reference points exist? And, how can founders talk about percentages before any funding? Five percent might feel fair in a particular situation for a near-founder post-funding, but how much is that pre-funding, with unknown dilution?

To crack this, I usually advise teams to negotiate relative ownership and to use a “bucket model” suggested by Ted Dintersmith.

First, founders can agree on ownership ratios among themselves, completely isolating the decision from unknown, future dilution. For example, if four co-founders agree to equal equity, they each own 25% at the very outset. After funding and granting stock to other employees, they will all dilute, but their ownership will remain equal. Or, if the co-founders decide the CEO founder should have 50% more stock, that means she has 3 stock units and everyone else has 2. There are 3+2+2+2 = 9 units (shares) total, so the CEO has 3/9 = 33% and the other founders have 2/9 = 22% each.

Second, to figure out relatively fair ratios, consider simple “buckets” for each founder’s and early employee’s contributions (past and future). The basic bucket is “contributing to the company full time until it’s successful”, perhaps with different levels. Another might be “credit for prior work”, for meaningful time invested before the rest of the team joined. There might be buckets for special roles (e.g. CEO), a unique personal brand, recruiting ability, experience, network/relationships, domain expertise, or other special circumstances.

It’s easy to make this overly complicated, but it doesn’t have to be. Consider an example: Alice has been working for a year on NewCo, before recruiting Bob (the founding CEO), Claire (less experienced) and Daniel (a professor & well-known subject expert). Alice, Claire and Bob will work full time, and Daniel will consult part time, work summers, and possibly take a sabbatical. Alice might get 2 units for prior work plus 4 units for contributing full time. Bob gets 1 for being CEO + 4 for full time. Claire might get 3, and Daniel gets 2 (one for being an expert and another for committing ~20% of his time).

With a total of 16 units, the initial ownership (pre-funding) is:

Alice 6 / 16 = 37.5%
Bob 5 / 16 = 31.25%
Claire 3 / 16 = 18.75%
Daniel 2 / 16 = 12.50%

If we allocate (say) 15% for future hires and 40% to investors for the first round (or rounds), that means founders are splitting the remaining 45% of the company, per their agreed-to relative ownership. Post-funding, the founders’ ownership is:

Alice 16.9%
Bob 14.1%
Claire 8.4%
Daniel 5.6%

Also, founders should absolutely implement some form of vesting. Founder vesting is a “start-up prenuptial agreement”: it defines what happens with equity should someone leave the company. It’s often very unfair to remaining founders if a departing co-founder keeps all of his original equity. Alternatively, if founders don’t implement vesting, early investor(s) will likely require it for funding.

Equity discussions among founders can be delicate, intense, & emotional, and having some rationale can defuse some of the emotional aspects. I hope this framework is helpful!

The 30% Internet Gorilla Tax

I’ve written before about powerful advantages Google, Apple, Amazon, and Facebook have in the software industry.  These four companies control major parts of the ecosystem, take out upstarts when they get too big, corner talent markets in key areas, and enjoy a ~30% “tax” (directly or indirectly) across most other software companies.

I first noted this nearly 5 years ago, but more recently, some of the Internet thought leaders have written about the theme.  For example, Fred Wilson wrote:

Google, Facebook, and to a lesser extent Apple and Amazon will be seen as monopolists by government and individuals in the US (as they have been for years outside the US). Things like the fake news crisis will make clear to everyone how reliant we have become on these tech powerhouses and there will be a backlash. …

And, Sam Altman wrote in the YC Annual Letter:

Companies like Amazon, Facebook, Google, Apple, and Microsoft have powerful advantages that are still not fully understood by most founders and investors. I expect that they will continue to do a lot of things well, have significant data and computation advantages, be able to attract a large percentage of the most talented engineers, and aggressively buy companies that get off to promising starts. This trend is unlikely to reverse without antitrust action, and I suggest people carefully consider its implications for startups. …

(Emphases added)

Now, Snap(chat) has revealed they’ve committed $3b to Google and Amazon over the next five years, or about $600m/year.  When we line that up with revenue estimates ($5.7b over the next three years), we find that the gorillas are getting… ~30%!
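As a quick sanity check on that figure, using only the numbers quoted above:

```python
# Back-of-the-envelope check of the ~30% "tax" (figures from the post).
cloud_commit_per_year = 3.0e9 / 5    # $3b over 5 years  -> $600m/year
est_revenue_per_year = 5.7e9 / 3     # $5.7b over 3 years -> $1.9b/year
tax_rate = cloud_commit_per_year / est_revenue_per_year
print(f"{tax_rate:.1%}")             # roughly 30% of revenue to the gorillas
```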

The Internet is Ready for Things

I’m not a fan of the term “Internet of Things” (IoT), but it is the best way to describe a future where more and more devices are Internet-connected.  As computation and communication get cheaper, more “dumb” devices will be “smart” and on-line.

With the current hype around IoT, it’s not surprising that companies and entrepreneurs are pursuing opportunities to “own” various aspects of IoT infrastructure.  I’ve seen a ton of startup pitches, and several big companies (Xively, PTC, etc.) are pursuing IoT platforms.

I’m skeptical.

The infrastructure elements already exist, as the Internet is exceptional at expanding and shifting to accommodate new kids on the block.  Consider mobile: there was a time when it was a very distinct thing (e.g. Qualcomm BREW, WAP, etc.) and the business folks talked about being “on deck”.

Now, it’s clear that mobile is an extension of the Web.  Mobile HTML is just HTML with a few mobile-specific features.   Mobile and desktop browsers share the same core rendering engine.  4G/LTE is a pipe for IP packets.  Cell phone apps POST JSON payloads over HTTP/HTTPS just like everyone else. Designing a compelling user experience for a small touch-based screen is different, but the underlying tech infrastructure is nearly identical to the desktop.

Though the rollout has been slow, IPv6 enables direct addressability for every individual “thing”. Cheap Wi-Fi (with an assist from BTLE) gets things on-line with existing infrastructure, and DNS provides a directory service.  OAuth2 defines how things get secure, bounded access to assets, and HTTPS+JSON provides secure remote procedure calls.
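To illustrate how far those existing pieces go, here’s a minimal sketch of a “thing” phoning home with nothing but HTTPS+JSON. The endpoint URL and bearer token are hypothetical, and the request is built but not actually sent:

```python
import json
import urllib.request

def build_report(device_id, temperature_c):
    """Package a sensor reading as an HTTPS+JSON request (not sent here)."""
    payload = json.dumps({"device": device_id, "temp_c": temperature_c})
    return urllib.request.Request(
        "https://iot.example.com/v1/readings",   # hypothetical endpoint
        data=payload.encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer <token>",   # e.g. an OAuth2 token
        },
        method="POST",
    )

req = build_report("thermostat-42", 21.5)
```

Everything here — DNS to find the host, TLS to secure the channel, JSON for the payload, OAuth2-style tokens for access — is plain, existing Internet infrastructure.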

I’m not sure we need new stuff!

Google’s Car vs A Boston Winter

During the legendary Boston winter of 2015, I pulled out of a downtown parking garage one evening and nearly rear-ended a dumpster. It was sitting in the middle of a usually busy three-lane road, a place where no dumpster should ever be. It was dark and there were no cones, no markers, no construction signs…nothing.

This scenario is why (I feel) 100% autonomous, “no-steering-wheel”, driverless cars are much further off than experts predict. I highly doubt my dumpster case is in any machine learning training set, and it will be a long time before it ever is. My human brain was able to put it all together: the front loader down the street loading another dumpster, snow piles all around, the city’s urgency to remove snow, etc. Until machines approach human cognition, there are a LOT of real world cases that are more than just turning the wheel and tapping the brakes — too many cases to “remove the steering wheel” anytime soon.

If we look closely at any new technology, the rollout is almost always very incremental. Historians love to write about revolutions, but the reality is always much more evolutionary. Consider the autonomous car evolution so far:

  • Cars that beep when you drift from your lane & when you need to brake
  • A steering wheel that nudges you in the right direction when you drift from your lane (with self-braking)
  • Complete steering and braking to maintain your lane & following distance
  • All of the above, plus safe lane changing with a turn signal input
  • ..etc..

I feel that last phase (“cruise-control that steers”) will be with us for a while. Even though it’s not “send your 5yr old to their play date in the car” kind of autonomy, it’s still hugely valuable. Long trips and commutes will be much less tiring. Also, speed kills — computers will soon be the safest drivers on highways & major roads, in all conditions. There will be injuries and deaths under computer control, but many more injuries and deaths will be averted.

While Tesla gets a lot of press, long-haul trucking may be the first significant disruption. Truck drivers are under strict regulations regarding drive time vs rest time, and for most drivers, their truck isn’t moving (or earning!) when they’re resting or sleeping. With self-driving technology, each driver gets a “highway co-driver”.  After lunch, navigate to the freeway, engage cruise control, and take a nap.

As things advance, I hope the government will be a constructive part of the process. For example, some highway segments may be flagged as “OK for self-driving” (as is done today for tandem trailers), and the regulators could acknowledge that “self-drive” time is not “drive time” for safety quotas.

This is exciting stuff, but “piling into your car after a few too many for a safe ride home”?? That still may be a way off!


The GPU Overshadows the CPU

Ask a teenager about GPUs (Graphics Processing Units) and you might get a surprisingly informed response.  As I watch my kids, nephews, and their friends build “gaming PCs”, they all seem quite current on the relative performance of AMD vs Nvidia, the merits of GPU memory, power issues, etc.  (And one important side effect: a fairly healthy family ecosystem of hand-me-down GPUs.)

While it’s great to run Battlefield 4 at 60fps on ultra detail across three HD monitors, what’s most interesting is how GPU capabilities are generalizing beyond graphics. This is one of my absolute favorite disruption patterns: “commoditization+crossover”, where a technology is commoditized by demand for one application and then applied elsewhere.

GPUs began as very specialized (and expensive) 2D & 3D hardware accelerators. Things began to change in the 1990s, driven by demand for 3D games, first with arcade units and consoles, and then PCs. In 1999, Nvidia coined the term “GPU”, starting a consumer-driven 15yr+ price/performance ramp with no end in sight.

GPUs are also getting much more generalized.  The first, fairly rigid 3D-transform computation pipelines have gradually given way to more general stream processors.  So-called graphics “shaders” are now nearly fully programmable: GPU developers write compute “kernels” in C-like languages (such as OpenGL GLSL or DirectX HLSL) that then run on hundreds or thousands of compute units on the GPU.  And more recent technologies, such as Nvidia’s CUDA and the OpenCL platform, dispense with the graphics-centric worldview entirely.

Because of their parallel architecture, GPUs have continued to scale while single CPU performance has effectively flattened.  For certain “embarrassingly parallel” problems, where a repeated operation is applied to large amounts of data, they are hard to beat. For example, $350 gets you ~3.4 trillion floating point ops/second, 42,000x faster than the original Cray supercomputer!  Amazon offers GPU instances, and even Intel has conceded in a way: on a modern x86 multi-core processor, almost 2/3rds of the die area is GPU.
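The data-parallel pattern is easy to illustrate: a GPU “kernel” is just a small function applied independently to every element of a big array. This plain-Python sketch simulates serially what a GPU would run across thousands of compute units at once (SAXPY is a standard textbook example, not tied to any particular GPU API):

```python
# A GPU "kernel" is a small function applied independently to each
# element -- one logical "thread" per element. Here we simulate the
# same data-parallel pattern serially in plain Python.
def saxpy_kernel(a, x_i, y_i):
    # Classic SAXPY operation: a*x + y, computed for one element.
    return a * x_i + y_i

x = [float(i) for i in range(1000)]
y = [1.0] * 1000
# On a GPU, all 1000 of these would run in parallel.
result = [saxpy_kernel(2.0, xi, yi) for xi, yi in zip(x, y)]
```

Because each element’s computation is independent of every other’s, the work parallelizes trivially — exactly the shape of problem where a GPU’s hundreds or thousands of compute units shine.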

It’s not surprising to see GPU horsepower applied to more and more non-graphics applications, such as simulating physics, aligning genome sequences, and training deep neural networks.  I think this pattern will continue, with the GPU firmly entrenched in computing systems as a highly scalable vector co-processor.

Why We Need a Neutral Internet, Exhibit A

I received an email from Verizon a few days ago, stating several FOX channels are no longer available because “Verizon refused to accept an agreement that contained rates that are not in our customers’ best interests”.  Presumably, FOX wanted more than Verizon was willing to pay.  (In cable TV, it’s customary for the cable TV operators to pay networks to carry their content.)  Those channels are currently playing a looping video with Verizon spokespeople, urging subscribers to call Cox Media.

Contrast this with Verizon’s stance toward Netflix, where they want the opposite arrangement: Netflix pays to deliver content over Verizon’s network, citing “When one party’s getting all the benefit and the other’s carrying all the cost, issues will arise”.  (Other ISPs share this view, and Netflix has entered such agreements with Comcast & Verizon.)
This inconsistent situation is precisely why we need a neutral Internet.
Payments flowing between ISPs and content providers distort the market, introduce friction, and shift control to the ISPs.  Ultimately, they hinder innovation: compare the closed, legacy platforms (cable TV, pre-smartphone cell phones) with the enormous economic, quality-of-life, and strategic benefits of the new, open platforms (the Internet, smartphones).  If standing up a new Web site were as hard as signing up cable TV providers for your new cable channel, or getting a carrier to carry your mobile app “on deck” (pre-smartphone), we’d be a fraction as advanced as we are today.
Allowing business models from legacy, closed networks onto the Internet is a fundamental policy mistake.  If we go that way, how long until:
Verizon is sorry to inform you that {Netflix, Amazon, Battlefield, YouTube, etc.} will be unavailable (or available only at reduced performance) because [content provider] refused to accept content distribution rates in our customers’ best interests.

Teaching Kids Programming

Getting kids interested in programming is a lot harder than it used to be. I was lucky enough to come of age during the PC revolution. My brother and I would carefully enter multiple pages of BASIC code from computer magazines, and then play games for weeks (making our own modifications along the way).

The problem now is that the threshold of “interesting & engaging” has risen dramatically: today’s kids are surrounded by games and applications that have had hundreds of person-years of development, with gorgeous 3D graphics rendered in 1080p on huge color screens. They all carry personal supercomputers, are never off-line, have all the world’s information at their fingertips, and can download any of ~1 million applications (many for free).

“Hello world” doesn’t cut it anymore.

How do we get kids engaged with learning software development, without them first having to spend a month writing code?

Minecraft is a fabulous starting point. (I think it will go down in history as one of the most brilliant games ever.) In our household, it’s the virtual neighborhood playground. Quincy will often get on to play with a bunch of friends after school (with TeamSpeak, so they can trash talk while building secret hideouts, chasing monsters, designing complex contraptions, or just pushing each other off cliffs).

But what’s most interesting is that Minecraft is fully programmable with “redstone”, a set of digital circuit components. You can build a combination lock for your secret room (that blows up with the wrong combination), a completely automated train system, or even a scientific calculator or 8-bit computer. It’s fun, it’s play, and it’s something to show off to friends.  And, it’s programming.

Taking Minecraft a step further, there’s the physical world itself. Between Arduino, Raspberry Pi, and an ever-growing set of easy-to-use components and modules, it’s never been easier to sense and manipulate physical things with software. You want an alarm that goes off when somebody goes in your bedroom? No problem. Now, let’s enhance it so it only goes off when it’s your sister, and also sends a text message with a picture of the offender.  You’re not downloading that from the app store!
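For a flavor of how simple the software side of that alarm can be, here’s a hedged sketch of its logic in Python. `read_motion_sensor()` is a hypothetical stand-in for a real hardware read (e.g. a GPIO pin on a Raspberry Pi), so the logic can be shown and run anywhere:

```python
# Sketch of the bedroom-alarm idea: poll a motion sensor, trigger an
# alert. On a real Raspberry Pi, read_motion_sensor() would read a
# GPIO pin wired to a PIR motion sensor; here it's a stub.
def read_motion_sensor():
    # Stub: pretend the motion sensor has fired.
    return True

def check_alarm(sensor_read):
    """Return an alert message when motion is detected, else None."""
    if sensor_read():
        return "Motion detected: sound alarm, text a photo!"
    return None

alert = check_alarm(read_motion_sensor)
```

The sister-detection and photo-texting enhancements are just more code layered on the same loop — which is exactly the hook: each new feature is an achievable, fun next step.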

Presale Resistance Syndrome (PRS)

I’ve written previously about presales (e.g. Kickstarter or Indiegogo) as a tool for hardware startups.  The model enables risky & crazy ideas that would normally never see the light of day. Most will fail, but some will get through and be hugely disruptive. For example, Pebble’s record-setting Kickstarter campaign accelerated their business and, more fundamentally, defined the entire smart watch category.

In spite of this, I still meet entrepreneurs that resist the idea. Objections vary, but include:

  • Our target demographic does not line up with Kickstarter’s.
  • OUYA had a very successful campaign, but still failed. We don’t want to be associated with that.
  • It’s a lot of marketing work and distraction.
  • We’d rather just raise equity financing [and not have to ship all those orders].
  • We’ve launched products before; we know how to do this.

A presale is the marketing analog of software testing: it tests product-market fit & demand before risking production investment. Of course, it’s not perfect: just as a “passed” test case is no guarantee a system works, a successful presale does not guarantee market success.  But a failure is extremely telling, and a presale (like software testing) can be a powerful tool to de-risk the journey.