root@jsnl.io:~/essays$ render startups.md

# Startups - March 26, 2025
I met with someone this morning whom I describe as a recruiter / venture capitalist (VC) hybrid. Dan (whose name is not Dan) runs a recruiting agency that partners with venture capital firms to staff high-growth tech startups.
VC firms like Sequoia Capital invest in seed, early-, and growth-stage startups. Once it's clear which seed and Series A startups seem well-positioned to move into the growth stage, VCs engage Dan's firm to recruit employees to drive the company's (hopefully) rapid revenue growth. In exchange for the high-quality talent, the startups give Dan's agency either a cash payout or a small slice of equity for each employee recruited. I'm not sure how often they take the equity over the cash, but in those cases, Dan's company, nominally a recruiting agency, is effectively operating as a small venture capital firm. [1]
Dan shared an interesting idea with me that I've been circling for a few years.
When I think about joining a startup, one of my non-negotiables has been that I genuinely believe in the company's product. There are two components.
First, do I think the product and its implications are ethical? For example, I think social media platforms are generally more harmful than helpful. The business models almost always boil down to:

1. Build a platform people want to use.
2. Drive engagement. [2]
3. Sell ads.

Inspiring. [3]
The second is whether I think the product will do well in the market. Will it make money? While I'm an expert (in fact, the only expert) on what I feel is ethical, I'm the exact opposite on whether software products succeed.
When I reflect on how heavily I weigh this second criterion, I realize I'm falling for a kind of egocentric bias. I've never started nor invested in a company. I've never led a product to market. In fact, I literally don't know whether the products I have worked on sell well!
That's not to say I shouldn't at least try to assess a product's viability. Joining a startup and starting one are two instances of a more general decision: betting that you know better than anyone else in the space.
But there's a difference between indulging the naive and simplistic instinct that a product absolutely won't (or will) succeed and acknowledging that instinct while remaining aware that my intuition is probably not highly predictive of outcomes. (And, maybe more importantly, that the VCs who ostensibly do have well-tuned intuitions may have invested millions of dollars into it. [4])
As an example, amongst my peers I'm a well-known AI skeptic [5]. If you show me an AI company or product, I have a strong bias for trying to figure out why it'll fail. I probably know more about AI than the average person. And, there are skeptics who indeed are experts in this field. But I should be cautious about knee-jerk dismissals of AI-centric business models, especially because so much money has been invested in the space.
The dot-com bubble at the turn of the millennium involved vast expenditure of venture capital on ultimately worthless businesses, but the underlying hype was correct: widespread adoption of the internet and World Wide Web was a very big deal.
You could argue that the same is happening now with AI. It's unclear whether the AI startups will actually materialize into valuable enterprises or if they too will go the way of the dot-com startup. It's also immaterial.
This is the point that Dan drove home.
When the biggest tech incumbents and most successful VC firms in the game are going all in on AI, does it really matter if you work at an AI startup that fails?
In January, I got on a call with a venture capitalist from San Francisco who offered this piece of advice: "always be working on something relevant". Dan's flavor of this advice: instead of focusing on determining a company's viability, focus on the firms backing it, the people working on it, and its growth trajectory.
If you make a bet that the top players make too, who can blame you if everyone loses?
And more than that-- the pool of people who work on startups backed by top VCs might transcend the startups themselves or the hype boom-bust cycles that those companies are spawned from. When (or if) the AI hype turns bust, will the early-stage teams of the most promising AI startups continue to build for the top VCs in whatever new domain captures VC attention?
Teams may not remain intact. Some individual contributors (ICs) from legacy teams will become CEOs and CTOs of the new era. And some founders will join other people's companies as ICs. New teams will be formed from the parts of former teams. But the pool (the 'community') of people working in this space will remain mostly the same between the era of AI startups and whatever's next. At least that's the idea Dan suggests.
If it's true, then there's no reason to wait to break into that pool. You might as well join the AI startup that you don't necessarily have full confidence in so long as the people who are ostensibly the best in the world at betting on startups think it's promising, and you're working on it with an outstanding team of people. Even if the company goes bust, the network persists. [6]
[1] Ok fine, I don't know what this stuff means either: 'seed' this, 'series' that. I skimmed this page and found some useful info. You might too. The TL;DR is we're talking about high-risk and high-growth companies. Venture capital firms place tons of bets on companies like this that either materialize into extremely profitable assets or go to zero. There's a middle ground too, but the distribution has a heavy bias toward one of those two outcomes. The hope is that, on aggregate, the expected outcome of the VC's betting is a profit.
Their business can be simplistically modeled as an expected value (EV, as
poker players call it) equation.

Suppose PortfolioValue is a random variable representing the portfolio
value of a VC firm that's invested in N tech startups. PortfolioValue can
be computed by adding the value of the firm's stake in each company (i.e.
not the value of the entire company-- just the part the VC owns).

PortfolioValue = StakeValue(Company_0) + StakeValue(Company_1) + ... + StakeValue(Company_N-1)
For simplicity, assume that the firm's stake in Company_i (where i is in
the range [0, N-1]) is worth $1 billion with probability P and $0
otherwise at the time the fund matures (i.e. when the investors in the
fund say "show me the money!"):

StakeValue(Company_i) = $1 billion with probability P
                      | $0 with probability 1 - P
In other words, there's a P * 100 percent chance any given company that the
firm invests in yields a $1 billion stake. If P = 0.1, then there's a 10%
chance the stake is worth $1 billion and a 90% (1 - 0.1 = 0.9) chance it's
worthless.
We can quantify the monetary value of a random variable whose outcomes are monetary. How much might you be willing to pay for the opportunity to flip a fair coin and, if it lands on heads, win $100? (If the coin lands on tails, you don't win anything, and you don't get back the money you paid.)
There's a 50% chance the coin lands on heads (i.e. P = 0.5 that you win $100).
There's an equal 50% chance that you win nothing. Given the ability to flip the
coin for free 10 times, we expect 5 of those flips to land on heads and 5 to
land on tails. In other words, we expect that you'll win 5 * $100 = $500.
Of course, if you actually flip a coin 10 times, it won't always yield 5 heads-up outcomes. But if you flip it 100 times, the outcome will be very close to a 50/50 split between heads and tails. If you flip it 1000 times, even closer. And so on.
This idea passes the smell test. If I flip a coin 10 times and it lands on heads 4 times, you probably shouldn't accuse me (at least not with a great deal of confidence) of using a rigged coin. But if I flip a coin 10,000 times and only 4,000 land on heads, then you should absolutely accuse me of using a rigged coin.
Since we expect half of your flips to come up as heads, a single flip is worth
exactly $100 * P(heads) = $50. (P(X) is notation that represents the
probability of an event X happening.)
In mathematical notation, we'd express the expected value (EV) of the
opportunity to flip the coin like so:

EV[flipping the coin] = P(heads) * $100 + P(tails) * $0
                      = 0.5 * $100 + 0.5 * $0
                      = $50
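If you'd rather not take the algebra on faith, you can check it empirically. Here's a quick Python sketch (my own toy code-- the function name is made up) that simulates the flips and watches the average payout converge to $50:

import random

def average_payout(num_flips):
    # Flip a fair coin num_flips times; each heads pays $100, tails pays $0.
    total = sum(100 if random.random() < 0.5 else 0 for _ in range(num_flips))
    # The average payout per flip approaches the EV ($50) as num_flips
    # grows-- the law of large numbers at work.
    return total / num_flips

for n in (10, 1_000, 100_000):
    print(f"{n} flips -> average payout per flip: ${average_payout(n):.2f}")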
VC firms do a similar (but more sophisticated) style of computation when they build their portfolio of startups. Each company is a coin flip, except they get a worse-than-50% chance of winning and the value of their stake in the company is a lot more than $100.
EV[PortfolioValue] = EV[StakeValue(Company_0)] + ... + EV[StakeValue(Company_N-1)]
                   = $1 billion * P + ... + $1 billion * P
                   = $1 billion * P * N
If the firm can afford to invest in 100 companies (i.e. N = 100), then their
portfolio is worth:
EV[PortfolioValue] = $100 billion * P
How much can the firm invest in any single company and break even if each
company has a 5% chance of succeeding (P = 0.05)?
EV[PortfolioValue] = $100 billion * 0.05 = $5 billion
$5 billion divided across 100 companies is $50 million for each. If we
instead model it with P = 0.005 (a 0.5% chance of success), then the firm
can afford to bet at most $5 million on each one.
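The break-even arithmetic fits in a couple of lines of Python (a hypothetical helper of my own naming, under the same all-or-nothing assumptions):

def break_even_bet(stake_value, p_success):
    # The EV of a single bet is stake_value * p_success. Betting more than
    # that per company loses money in expectation.
    return stake_value * p_success

print(break_even_bet(1_000_000_000, 0.05))   # 50000000.0, i.e. $50 million
print(break_even_bet(1_000_000_000, 0.005))  # 5000000.0, i.e. $5 million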
Of course in reality, VC firms aren't trying to break even-- they're trying to make money.
This example could be reworked to remove the simplifying assumption that tech startups either yield $1 billion or $0 to their investors (and nothing else). The EV math would be practically as simple as it is now, just more complicated to set up.
Finally, there's lots of interesting math when analyzing the risk of these types of deals. There is, after all, a non-zero chance that a VC makes bets on 100 companies and every single one fails despite a profitable expected outcome. Quantifying the precise probabilities of specific outcomes is a more advanced exercise (although not terribly so). In essence, EV is about calculating the average-case outcome. But how likely are those pesky non-average outcomes?
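As a taste of that exercise: under the same all-or-nothing model, the number of winning bets follows a binomial distribution, so those non-average outcomes have closed-form probabilities. A sketch (again my own code, same simplifying assumptions as above):

import math

def prob_k_wins(n, k, p):
    # Binomial probability: C(n, k) * p^k * (1 - p)^(n - k).
    return math.comb(n, k) * p**k * (1 - p) ** (n - k)

# With N = 100 bets and P = 0.05, the chance that every single one fails:
print(prob_k_wins(100, 0, 0.05))  # ~0.0059, i.e. about a 0.6% chance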
[2] In my opinion, the 'drive engagement' step of the modern social platform business model is where most of the harm happens.
Machine learning algorithms are extremely good at figuring out what you engage with. Did you linger for 20 milliseconds longer than usual on that fitness influencer's thirst trap? Instagram knows it. And TikTok will figure out you're gay before you do.
Once they know you, the algorithms give you an endless stream of what you want to see. (Aza Raskin, the inventor of infinite scrolling, has expressed his regrets: "[infinite scroll is designed] to deliberately keep them online for as long as possible".) Dopamine, historically earned by doing real shit like walking or laughing with friends, is accessible in endless quantities on your phone. Why eat an apple when you can buy a bag of sugar?
[3] I'm not above working at a company that sells ads (or builds infinite scroll dopamine injection platforms for that matter).
[4] Don't misunderstand this to mean that I think VCs always get it right. My analysis in comment [1] makes it clear that VCs can afford to be wrong often. But they, unlike me, need to make a good ROI to survive.
[5] AI is "artificial intelligence" which, for some reason, has colloquially come to mean "large language model" (LLM). The term artificial intelligence is more of a marketing term than a technical one. I could write a relatively simple program to predict the natural language of a text sample:
# A runnable sketch in Python. The top-words sets below are illustrative
# stand-ins, not real frequency data.
TOP_WORDS = {
    "English": {"the", "of", "and", "to", "a", "in", "is", "that", "it", "you"},
    "French": {"de", "la", "le", "et", "les", "des", "un", "une", "que", "est"},
}

def predict_language(sample_text):
    most_matches = 0
    predicted_language = None
    # For each natural language (like English or French), count how many
    # words in the sample text exist in that language's top-words set.
    for language, top_words in TOP_WORDS.items():
        matched_words = 0
        for word in sample_text.lower().split():
            if word in top_words:
                matched_words += 1
        # If the current language is better than our best so far, use it as
        # our new guess (for now).
        if matched_words > most_matches:
            predicted_language = language
            most_matches = matched_words
    # The prediction is allowed to be None if no words match.
    return predicted_language

print("Prediction:", predict_language("the cat sat on the mat"))
I haven't actually written a program like this, but I boldly assert that it'd perform reasonably well on non-edge cases in Latin-alphabet languages. Regardless, I am confident that I can claim this program is an instance of "artificial intelligence".
But when we talk about artificial intelligence, we don't really mean things like this program. (This is a heuristic algorithm which, although a dying class, once ruled in computer applications that we might describe as "intelligent". Advancements in machine learning, including LLMs, have largely obsoleted this style of algorithm outside of simple or performance-sensitive use cases.)
[6] Assuming you're good at your job.
root@jsnl.io:~/essays$