
Update 1


I’m experimenting with publicly blogging about my decision-making

I’ve decided to be more public about my decision processes and brainstorming at this stage in life, as opposed to operating in stealth.

The upside is that I’ll get clearer feedback on my direction, which enables faster course-correction.

The downside is that I might do something unpopular and contrarian, and the feedback could erode my morale. Or I might make some major errors of reasoning and look a bit silly in the process. But I think the risk is worth it. At the very least, I’ll be more accurately resurrected when this webpage is scraped for the industrial-scale language models.

Note: these thoughts are stream-of-consciousness, with minimal editing, and may contain grammatical and spelling errors. Find my polished thoughts elsewhere.

Moving to Montreal after the Bay

I recently moved to Montreal to research reinforcement learning at Mila, the largest academic machine learning institute. After several years immersed in startup culture in the Bay, the cultural switch has been interesting.

Some observations:

  • Researchers unfamiliar with startups judge their potential startup competence by research hierarchies. Many talented researchers here are interested in startups but confused about where to begin. Some also seem to think you need far more technical credentials than you actually do to start a company– e.g. that you have to finish your CS PhD first. Someone thought Sam Altman had an AI PhD from Stanford (he doesn’t; he dropped out of Stanford as an undergrad). I think this impression comes from conflating value hierarchies. In science, competence is signalled through publications, and completing a PhD is a big deal. In startups, this stuff matters far less. What matters more is your ability to recruit talented people to join you, raise capital, and coordinate the show.
  • Why Bay Area culture makes the place conducive for startups. There are several cultural assumptions in the Bay Area that make starting a company easier: in many communities, you’re simply expected to do it eventually. Networks are incredibly dense: within your graph clique, you feel like everyone really does know everyone. Your friends are a jumble of founders, VCs, engineers, and researchers. Risk tolerance and friendliness are both high, which leads to both companies and cults.
  • Government vs billionaire funding. Billionaire funding is seen as very bizarre by many Canadians, who actually seem to trust their government. Some researchers want more profitable companies to arise from their labs so that the government pours more funding into AI. This thinking differs from that in the Bay Area, where some researchers start companies with the hope of independently funding their research through private investors or through revenue.
  • I’ve found the startup community here to be very welcoming and open-minded. I have not raised funding in Montreal though I hear that valuations are far lower than those in the US.
  • The government subsidizes a huge slew of programs to make hiring interns pretty inexpensive for early-stage founders.

What to found next: stay in AI, enter web 3 frontier, or something else?

I’m exploring the startup space to decide what to found next. I’m torn between staying in my field (AI) and making a bid in web 3.

Philosophically, I am drawn to creating AI companies as a way to explore the nature of intelligence, a fundamentally important metaphysical question that is dear to my heart. My research and engineering background sits at the intersection of computational neuroscience and artificial intelligence: I’ve worked for neuro labs and search companies.

However, AI is maturing as a sector. My personality loves emerging sectors. I love the questioning of all fundamental assumptions and rederiving structures that exist in the world. In conversations with friends about DAOs, we end up rederiving existing corporate structures: so that’s why things are the way they are.

In web 3, I also smell more potential upside. But investing huge amounts of time in web 3 is more aligned with philosophically questioning the nature of government, incentives, and mechanism design. All good stuff, except my aesthetic preferences lean toward the nature of intelligence.

To financially optimize?

The other variable is whether to financially optimize– not for personal wealth, but enough to become a significant player and invest in bets that no one else would otherwise fund. Given that 84% of EA is funded by mega-billionaires, I suspect that financially optimizing would broaden the set of preferences that get funded.

Effective Altruism calls this model “earn to give.” In 2016, when I was more enmeshed with the community, I got the impression that “earn to give” meant taking a stable job and donating 20% of your income. However, friends who are more embedded in the movement tell me that times are changing and EA members now recognize the startup world’s power law.

Thus it’s not clear to me why EA hasn’t founded something like Y Combinator, which gets most of its return from power-law unicorns like Airbnb. An EA-adjacent friend hypothesizes that entrepreneurs “like secret knowledge” and EA doesn’t feel very “secret” anymore– so this may be an aesthetic preference.

Which companies optimize most financially? If financially optimizing is the play, the next question is which companies would be the most profitable if created. I’m steadily looking at the DeFi sector, but we’re in a web 3 bull run and the market is quickly saturating. It’s hard to fully assess because the space has ballooned so rapidly. To gather more data about web 3 as a bet, I’ve joined communities like KERNEL and will soon be spending some time in NYC.

Staying in AI, bets that are both financially and philosophically aligned look like building an industrial-scale language model, screen capture, or some other company requiring giant amounts of data and compute. It’s not obvious to me why GPT-3 has basically zero US competitors. However, the outcome here looks pretty cleanly like “get acquihired by Google or Microsoft,” and it’s not clear that this is a great outcome. AI is a maturing and gated world in a way that web 3 is not.

The tl;dr is that AI is philosophically aligned and profitable, but web 3 could be far more profitable, and while less aligned with my philosophy, more aligned with my personality.

Break from EA

I’ve been writing a lot in stealth about transhumanism and the new types of movements cephalizing around this body of ideas. Over the past decade, EA has dominated as the main aggregator of transhumanist thought. There are probably thousands of smaller transhumanist communities scattered around the world that I’m less familiar with, including startup/VC communities that invest in deeptech but don’t explicitly identify as transhumanist.

I’m interested in creating a movement that’s decorrelated from Effective Altruism, to maximize the variance of value systems in the space. I notice fear in doing so (“what will the old guard transhumanists think?”), but increasingly I am realizing that the possible upside, if done well, is worth it.

In writing the whitepaper for the movement, I face the balance between the artistic (Nietzsche) and the academic. There are strong trade-offs between the two. Religious-esque blog writing is no longer as fashionable as dry-sounding whitepapers– perhaps we live in a hyperrationalistic age.
