It has been way too long since I've written. My last post was something like six months ago. Life has been busy! I've got <1 hour at a coffee shop, so I'm just going to brain-dump as much as I can on this topic. Let's get after it. From my vantage point, it's pretty clear that we're going through the early stages of an incredible emerging technology. There's definitely a little too much hype right now, and most folks are still trying to find product-market fit in the face of a technology that is incredibly agile. Agile in the sense that it can commoditize entire workflows and "moats" very quickly. Startups are being forced back into the gym to get in shape in terms of speed, flexibility, agility, and market positioning.
Nice to hear an update from you, Ryan. I remember back in 2017 when you were so bullish on Nvidia and convinced me to buy. I held it until last year (sold too soon), but at this point they're wildly overpriced. Now I own puts on Nvidia. The reasons are exactly as you stated: competition from ASICs for LLMs, and small fine-tuned LLMs driving more business value than mega-models like GPT-4.
Jared! So good to hear from you. Hope you're doing well. Glad you held on to it ;)
Yeah, something has to give here: this new generation of semiconductors is easier to manufacture, cheaper to build, offers better performance, etc.
The thing that will take time is the programming layer. CUDA gives them such a great moat, but I believe that's why each of these neuromorphic companies is going fully vertical (software and hardware). I suppose we'll see. Perhaps we get a black swan event where we're forced to make the leap in architecture (e.g. China invades Taiwan, forcing a rapid supply chain shift).
Great to hear from you dude! We should catch up soon, do a collaborative post/podcast. Would love to hear your perspective on all the craziness happening.
Agreed, CUDA is a huge moat and a huge reason other hardware isn't used widely. Just today a teammate reported that he's blocked trying to convert our model to run on AWS's custom hardware, Inferentia. We've been overpaying by roughly 4x for years by using GPUs instead of Inferentia, but we're stuck on the software hurdle of "compiling" our model for custom hardware. We're a small team operating at relatively small scale without the expertise to do this; bigger companies at bigger scale will have the motivation and know-how to use custom hardware.
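For anyone curious what that "compiling" step looks like, here's a minimal sketch using AWS's Neuron SDK for PyTorch (torch-neuron), which is the path for Inferentia. The model and input shape below are hypothetical placeholders, not our actual workload:

```python
import torch
import torch.nn as nn
import torch_neuron  # AWS Neuron SDK plugin; registers torch.neuron.*

# Hypothetical stand-in for the production model.
class TinyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(128, 10)

    def forward(self, x):
        return self.fc(x)

model = TinyModel().eval()
example = torch.zeros(1, 128)  # example input matching the serving shape

# Ahead-of-time compilation for Inferentia's NeuronCores. Any operator
# the compiler doesn't support gets partitioned back to CPU (or the
# trace fails) -- this is exactly the hurdle teams get blocked on.
neuron_model = torch.neuron.trace(model, example_inputs=[example])
neuron_model.save("model_neuron.pt")

# At serving time on an inf1 instance, the saved artifact is loaded
# like any TorchScript module and the compiled subgraphs run on the
# accelerator:
# loaded = torch.jit.load("model_neuron.pt")
```

The single trace call looks easy; the pain is that your whole graph has to survive it, and debugging which ops fell back to CPU (and why your latency didn't improve) is where the expertise comes in.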